author | Mike Danes <onemihaid@hotmail.com> | 2018-06-04 00:50:17 +0300 |
---|---|---|
committer | Mike Danes <onemihaid@hotmail.com> | 2018-06-04 21:37:48 +0300 |
commit | 9adb98520dbfd0e7790e232bd042799557b2c719 (patch) | |
tree | 22efa5f4f1861a57eae645025c8701d5908784cb /src/jit/lsraxarch.cpp | |
parent | 183113ec7a16d7bb000e880e5e1ce690aedc7715 (diff) | |
Cleanup LOCKADD handling
LOCKADD nodes are generated rather early and there's no reason for that:
* The CORINFO_INTRINSIC_InterlockedAdd32/64 intrinsics are not actually used. Even if they were used, we could still import them as XADD nodes and rely on lowering to generate LOCKADD when needed.
* gtExtractSideEffList transforms XADD into LOCKADD, but this can be done in lowering instead; LOCKADD is an XARCH-specific optimization, after all. (A sketch of such a lowering transform follows this list.)
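For concreteness, here is a minimal sketch of what the lowering-side transform could look like. This is an assumption about its shape, not code from this commit (the diff shown here covers only lsraxarch.cpp); the function name `LowerXAdd` is hypothetical, while `IsUnusedValue`, `SetOper`, and `CheckImmedAndMakeContained` are existing JIT helpers:

```cpp
// Hypothetical sketch: rewrite an XADD whose result is unused into a
// LOCKADD during lowering. `lock add` accepts a contained immediate data
// operand, whereas `xadd` requires its data operand in a register.
void Lowering::LowerXAdd(GenTreeOp* node)
{
    assert(node->OperIs(GT_XADD));

    if (node->IsUnusedValue())
    {
        node->ClearUnusedValue();
        // GT_LOCKADD is a "no value" oper: give the node TYP_VOID and let
        // codegen take the operation size from the data operand's type.
        node->SetOper(GT_LOCKADD);
        node->gtType = TYP_VOID;
        // Contain an encodable integer constant data operand so codegen
        // can emit `lock add [addr], imm` directly.
        CheckImmedAndMakeContained(node, node->gtGetOp2());
    }
}
```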
Additionally:
* Avoid the need for special handling in LSRA by making GT_LOCKADD a "no value" oper.
* Split LOCKADD codegen from XADD/XCHG codegen; attempting to use the same code for all three just makes things more complex.
* The address is always in a register, so there's no real need to create an indir node on the fly; the relevant emitter functions can be called directly (see the codegen sketch after this list).
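As a sketch of what the split codegen might look like (an assumed shape, not necessarily the commit's exact code: the function name `genCodeForLockAdd` and the use of the existing emitter helpers `emitIns_I_AR`/`emitIns_AR_R` are this sketch's assumptions):

```cpp
// Sketch of dedicated LOCKADD codegen: the address is always in a register,
// so the emitter's [reg + disp] forms can be used directly - no indir node.
void CodeGen::genCodeForLockAdd(GenTreeOp* node)
{
    assert(node->OperIs(GT_LOCKADD));

    GenTree* addr = node->gtGetOp1();
    GenTree* data = node->gtGetOp2();
    emitAttr size = emitTypeSize(data->TypeGet());

    assert(addr->isUsedFromReg());
    assert(data->isUsedFromReg() || data->isContainedIntOrIImmed());

    genConsumeOperands(node);
    instGen(INS_lock); // emit the lock prefix

    if (data->isContainedIntOrIImmed())
    {
        // lock add [addr], imm - the immediate form xadd cannot encode
        int imm = static_cast<int>(data->AsIntCon()->IconValue());
        getEmitter()->emitIns_I_AR(INS_add, size, imm, addr->gtRegNum, 0);
    }
    else
    {
        // lock add [addr], reg
        getEmitter()->emitIns_AR_R(INS_add, size, data->gtRegNum, addr->gtRegNum, 0);
    }
}
```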
The last point in the list above is actually a CQ issue: we always generate `add [reg], imm`, and more complex address modes are never used. Unfortunately this problem starts early, when the importer spills the address to a local variable. If that ever gets fixed, we could probably generate a contained LEA in lowering.
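To illustrate that possible fix, a purely hypothetical lowering step could contain an address-mode node once the importer no longer forces the address into a register. `ContainLockAddAddress` is an invented name; `MakeSrcContained` and `GT_LEA` are existing JIT facilities:

```cpp
// Hypothetical sketch: if the address operand survives as an address
// expression, fold it into the LOCKADD as a contained LEA so codegen
// could emit e.g. `lock add [rax+rdx*8+16], imm` instead of first
// materializing the address in a register.
void Lowering::ContainLockAddAddress(GenTreeOp* lockAdd)
{
    assert(lockAdd->OperIs(GT_LOCKADD));

    GenTree* addr = lockAdd->gtGetOp1();

    // GT_LEA describes a [base + index*scale + offset] address mode;
    // marking it contained lets the emitter encode it directly.
    if (addr->OperIs(GT_LEA))
    {
        MakeSrcContained(lockAdd, addr);
    }
}
```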
Diffstat (limited to 'src/jit/lsraxarch.cpp')
-rw-r--r-- | src/jit/lsraxarch.cpp | 29 |
1 file changed, 5 insertions, 24 deletions
```diff
diff --git a/src/jit/lsraxarch.cpp b/src/jit/lsraxarch.cpp
index 96af2f6d8a..943c4d3edc 100644
--- a/src/jit/lsraxarch.cpp
+++ b/src/jit/lsraxarch.cpp
@@ -450,7 +450,6 @@ int LinearScan::BuildNode(GenTree* tree)
 
         case GT_XADD:
         case GT_XCHG:
-        case GT_LOCKADD:
         {
             // TODO-XArch-Cleanup: We should make the indirection explicit on these nodes so that we don't have
             // to special case them.
@@ -462,29 +461,10 @@ int LinearScan::BuildNode(GenTree* tree)
             RefPosition* addrUse = BuildUse(addr);
             setDelayFree(addrUse);
             tgtPrefUse = addrUse;
-            srcCount = 1;
-            dstCount = 1;
-            if (!data->isContained())
-            {
-                RefPosition* dataUse = dataUse = BuildUse(data);
-                srcCount = 2;
-            }
-
-            if (tree->TypeGet() == TYP_VOID)
-            {
-                // Right now a GT_XADD node could be morphed into a
-                // GT_LOCKADD of TYP_VOID. See gtExtractSideEffList().
-                // Note that it is advantageous to use GT_LOCKADD
-                // instead of of GT_XADD as the former uses lock.add,
-                // which allows its second operand to be a contained
-                // immediate wheres xadd instruction requires its
-                // second operand to be in a register.
-                // Give it an artificial type and mark it as an unused value.
-                // This results in a Def position created but not considered consumed by its parent node.
-                tree->gtType = TYP_INT;
-                isLocalDefUse = true;
-                tree->SetUnusedValue();
-            }
+            assert(!data->isContained());
+            BuildUse(data);
+            srcCount = 2;
+            assert(dstCount == 1);
             BuildDef(tree);
         }
         break;
@@ -771,6 +751,7 @@ bool LinearScan::isRMWRegOper(GenTree* tree)
         case GT_STORE_BLK:
         case GT_STORE_OBJ:
         case GT_SWITCH_TABLE:
+        case GT_LOCKADD:
 #ifdef _TARGET_X86_
         case GT_LONG:
 #endif
```