|
JIT/Directed/RVAInit/nested
JIT/Directed/RVAInit/simple
JIT/Regression/CLR-x86-JIT/V1.2-Beta1/b103058/b103058
|
|
Port of #19156.
Avoid promoting structs that contain struct fields that themselves
wrap single simple fields, if those single simple fields are smaller
than their enclosing struct.
Otherwise we run the risk of losing track of the "extra" bytes in the
inner struct.
Addresses #19149.
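A minimal sketch of the guard described above, with hypothetical names (the actual check lives in the JIT's struct promotion code, not in this form):

```python
def can_promote_wrapped_field(inner_struct_size, inner_field_size):
    # Hypothetical model of the check described above: if the inner
    # struct wraps a single simple field that is smaller than the
    # struct itself, the trailing "extra" bytes would be lost when
    # the field is promoted, so promotion is refused.
    return inner_field_size == inner_struct_size
```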
|
|
Port of #18348 to release/2.1
We need to make sure that if we reorder a callfinally during finally cloning
that the callfinally is actually the one being targeted by the last block in
the try range.
Closes #18332. Linked issue has some more detailed notes.
|
|
|
|
Handle SIMD8/LONG recasts for LCL_FLD
|
|
Lowering needs to insert a `BITCAST` in the case of a `STORE_LCL_FLD` with mismatched `TYP_SIMD8`/`TYP_LONG` operands, just as for `STORE_LCL_VAR`.
|
|
as we could introduce shadow copies of them.
Add new test case: GitHub_17329.cs
|
|
|
|
ARM: Fix reg resolution for doubles
|
|
|
|
Fix inconsistent handling of zero extending casts
|
|
* add repro
* delete BuildShiftRotate for arm
* fix GT_LSH_HI and GT_RSH_LO
* return the special handling for GT_LSH_HI and GT_RSH_LO
* fix the header
|
|
ARM: call compRsvdRegCheck later
|
|
When a `RefTypeFixedReg` is encountered for a floating point register, if it is currently holding a double constant, we need to free both halves - but the current register may be either half.
|
|
1) When an odd float register becomes free, we may need to add the corresponding (even) double register to `targetRegsReady` (this was the bug)
2) When an even float register becomes free, we can't add it to `targetRegsReady` unless its other half is also free.
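A minimal Python model of the two rules above, assuming the ARM32 overlay where double register dN is composed of float registers f(2N) and f(2N+1); the names and data structures are illustrative, not the JIT's actual ones:

```python
def doubles_now_ready(freed_float, free_floats):
    """Return the set of double register numbers that become eligible
    for targetRegsReady after float register `freed_float` is freed,
    given the set of *other* already-free float registers."""
    pair = freed_float ^ 1          # the other half of the enclosing double
    if pair in free_floats:
        # Both halves free: double d(freed_float // 2) is now ready.
        return {freed_float // 2}
    # Other half still live: the double cannot be handed out yet.
    return set()
```

For example, freeing f1 while f0 is already free makes d0 ready (rule 1), while freeing f2 makes nothing ready unless f3 is also free (rule 2).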
|
|
For RyuJIT backend, the number of locals can change due to decomposition and lowering.
So, determine whether to reserve a register for accessing large state displacements just prior to register allocation.
This requires ensuring that the stack offset for promoted longs is fixed up, as for promoted reg structs on ARM64.
This is all a bit more difficult on LEGACY_BACKEND because the register allocator can change its mind about whether or not it's going to have a frame, so that is left as before.
|
|
Mark operands of dead FIELD_LIST as unused
|
|
This requires fixing the side-effects check in dead code elimination.
Also, fixes gtSetFlags() to be usable from DCE in the non-legacy case.
|
|
Added test case
|
|
* add repro
* fix
|
|
* add repro
* delete the assert
|
|
This reverts commit 0598b6b8af0fb560837ce0bb8f3c8236c05bbdc9.
Updates to xunit console runner break things in Helix. So reverting
until we can get that part sorted.
Also undoes version updates from #16597; we can't move these forward
yet as we're back to netstandard 1.4 for the time being.
|
|
Remove the special benchmark configs and update benchmark projects accordingly.
Also update other random projects that were referencing benchmark configs.
Benchmarks now build against the default standard.
Addresses #14124, #16126.
|
|
Handle a restored double Interval at block boundary
|
|
During the process of freeing registers that are no longer live at the start of a new block, we may restore a 'previousInterval'. If that is a double (and the freed interval was float), we need to skip the next float.
|
|
We don't usually create an LEA on the rhs of a block copy - we check the type of the indir, and if it's struct we avoid the copy. However, in this case, the rhs was the address of a scalar field within a struct, so the indir was not TYP_STRUCT.
|
|
Handle TYP_SIMD8 correctly in genCodeForLclFld
|
|
When loading a TYP_SIMD8 local field, movsd should be used, not movups. Unlike ins_Move_Extend, ins_Load does the right thing and it's consistent with indirs.
|
|
Fixes the second case in DevDev bug 543057
|
|
* create a new phase: StackLevelSetter
* add repro
* Fix grammar mistakes
* use the default hash
* delete values from the map.
* create gentree::OperIsPutArgStkOrSplit
* fix more comments
* delete an extra condition that is always true
* use GTSTRUCT_2_SPECIAL for PutArgStk
* extract fgUseThrowHelperBlocks
* optimize memory for amd64 and additional checks for x86
* change checks
The previous version was wrong, because morph can call fgAddCodeRef several times for the same instruction during different phases.
* fix comments
* fix genJumpToThrowHlpBlk
* small ref in genJumpToThrowHlpBlk
* fix rebase problems.
* use fgUseThrowHelperBlocks instead of !opts.compDbgCode
* add throwHelperBlocksUsed for throughput.
|
|
For casts that are supposed to zero extend, GTF_UNSIGNED must always be set (and obeyed).
Some code failed to set the flag (e.g. when importing add.ovf.un instructions having native int and int32 operands) and some other code failed to check the flag (e.g. x64 codegen, gtFoldExprConst) and instead decided to zero extend based on the cast destination type.
This resulted in discrepancies between ARM64 and x64 codegen and between constant folding performed by gtFoldExprConst and VN's EvalCastForConstantArgs.
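The distinction can be sketched in Python, modeling 32-to-64-bit widening on unsigned 64-bit bit patterns; the point is that the extension must follow the source's signedness (the GTF_UNSIGNED flag), not the cast's destination type:

```python
M32 = 0xFFFFFFFF
M64 = 0xFFFFFFFFFFFFFFFF

def widen_32_to_64(bits, is_unsigned):
    """Widen a 32-bit pattern to 64 bits, returned as a 64-bit pattern."""
    v = bits & M32
    if is_unsigned:
        return v                      # zero extend: high 32 bits are 0
    if v & 0x80000000:                # sign bit set: replicate it upward
        return (v | ~M32) & M64
    return v
```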
|
|
If a byref is passed to a call or used in an assign, assume it escapes and
that subsequent dereferences may be made at arbitrary offsets.
This handles cases where a user takes the address of one field of a struct,
then uses that byref to access other fields or non-field content within
the struct, and the byref formation and use are split across statements or
across caller-callee contexts.
Closes #16210.
|
|
This fixes a bug that caused an assert in GetSingleTempReg.
In this case we had an interop call (resulting in RETURNTRAP in its epilog)
with live 256-bit values at the call site.
when there are live 256 bit values at the callsite.
The bug was that codegen for RETURNTRAP node requested a single temp register
and asserted that there was only one. In this case the set of temp registers
includes floating-point registers that may be needed for saving/restoring the upper
part of 256-bit registers. The fix was for codegen to request a single temp int register.
GetSingleTempReg will assert that there was only one int register in this case.
|
|
If we have a block copy from an enregisterable struct (today, that's just SIMD) to a different type target, it needs to be marked as address-taken, because the destination type is what's used for the copy, and all non-enregisterable destination types expect their source in memory.
Fix #16254
|
|
When we have a lclVar that is being kept alive between its `PUTARG_REG` and the call, we need to take that into account in determining the minimum register requirement for a node.
|
|
* fix DevDiv_546017
* add repro
|
|
Recent shift changes made the JIT_LLsh helper mask the shift count to 6 bits. The other 2 helpers (JIT_LRsh and JIT_LRsz) do not, so now we get inconsistencies such as `(x >> 64) != (x << 64)`.
The ECMA spec says that "the return value is unspecified if shiftAmount is greater than or equal to the width of value", so the JIT has no obligation to implement a particular behavior. But it seems preferable to have all shift instructions behave similarly; it avoids complications and reduces risks.
This also changes `ValueNumStore::EvalOpIntegral` to mask the shift count for 64 bit shifts so it matches `gtFoldExprConst`. Otherwise the produced value depends on the C/C++ compiler's behavior.
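The now-consistent behavior can be sketched as follows, modeling 64-bit shifts with the count masked to 6 bits as the helpers and `gtFoldExprConst` do (the constants are illustrative):

```python
M64 = 0xFFFFFFFFFFFFFFFF

def shl64(x, count):
    return (x << (count & 63)) & M64   # count masked to 6 bits

def shr64(x, count):                   # logical (unsigned) right shift
    return (x & M64) >> (count & 63)
```

With the count masked, both `x << 64` and `x >> 64` reduce to a shift by 0, so the two sides of the inconsistency above now agree.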
|
|
|
|
In a mismatched struct assignment, e.g. using Unsafe.As, we need to retain the OBJ(ADDR(lcl)) on the rhs of the assignment.
Fix #15237
|
|
|
|
* Change ReadOnlySpan indexer to return ref readonly.
Update JIT to handle changes to ReadOnlySpan indexer
Resolving merge conflict and fixing jit importer
Update ReadOnlySpan Enumerator Current to use indexer.
Removing readonly keyword.
* Temporarily disabling Span perf and other tests that use ReadOnlySpan
* Isolating the ref readonly indexer change only to CoreCLR for now.
Reverting the change to Enumerator Current for now
Fix file formatting
Enable Alpine CI (#15502)
* Enable Alpine CI
This enables Alpine CI leg on every PR using the pipelines.
compare type size instead of var_types
get rid of TYP_CHAR
Adding support for Acosh, Asinh, Atanh, and Cbrt to Math and MathF
Updating the PAL layer to support acosh, asinh, atanh, and cbrt
Adding some PAL tests for acosh, asinh, atanh, and cbrt
Adding valuenum support for acosh, asinh, atanh, and cbrt
Lsra Documentation
Update LinearScan section of ryujit-overview.md, and add lsra-detail.md
Refactor Unsafe.cs to get it more in sync with CoreRT. (#15510)
* Refactor Unsafe.cs to get it more in sync with CoreRT.
* Format the document.
* Unifying the copies of Unsafe using ifdefs
* Change exception thrown to PlatformNotSupportedException
* Addressing PR feedback and moving Unsafe to shared.
* Addressing PR feedback
* Addressing PR review - adding intrinsic attribute
Update CoreClr, CoreFx to preview1-26014-01, preview1-26013-12, respectively (#15513)
Revert "Add optional integer offset to OwnedMemory Pin (#15410)"
This reverts commit 8931cfa4ebe94f57698b4c1b3ab5689cd467cb8e.
Get rid of old -altjitcrossgen argument now that CI has been updated
Merge pull request dotnet/corert#5109 from dotnet/nmirror (#15518)
Merge nmirror to master
Signed-off-by: dotnet-bot <dotnet-bot@microsoft.com>
Revert " Revert "[Local GC] Move knowledge of overlapped I/O objects to the EE through four callbacks (#14982)""
Fix typo `_TARGET_ARM` to `_TARGET_ARM_`. This happens mostly in comments except lsra.cpp.
Update CoreClr, CoreFx, PgoData to preview1-26014-04, preview1-26014-03, master-20171214-0043, respectively (#15520)
* Disabling a test that uses ReadOnlySpan indexer
* Temporarily disabling the superpmi test and fixing nit
* Remove debug statements.
|
|
Reduce shift amount modulo 64 to match behavior on other platforms and the
jit optimizer.
Also, fix IL in related test case so it is valid for 32 bits too.
Also, make these two tests pri-0 so they get run with regular CI testing.
Fixes #15442.
|
|
In some cases reachable from IL we may simplify the shift amount,
which sets the MORPHED flag, and then later remorph, leading to an assert
in DEBUG/CHECK builds.
Fix is to clear the MORPHED flag.
Added test case.
|
|
The statements may contain CSE defs which, if removed, can confuse subsequent
CSE processing.
Fixes #15319.
|
|
* JIT: handle boundary cases for casts of long shifts
Remove the assert that the shift count is non-negative.
Don't try and optimize if the shift count is >= 64 or < 0.
Update test case to cover these values.
Updates the fix from #15236.
Closes #15291.
|
|
Fixed DCE of call nodes which affect stack level
|
|
* JIT: fix bug with int casts of long shifts
The jit is pushing int casts down through long left shifts in ways that can
change computation. The push is only safe if the shift amount is 31 bits or
less. So split the current optimization for shifts into three cases:
* shift amount unknown: don't push the cast
* shift amount > 31: result of cast/shift is zero
* shift amount <= 31: push the cast
Fixes #15077.
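The three cases above can be checked against a reference model in Python (a 64-bit shift followed by truncation to the low 32 bits, all on unsigned bit patterns):

```python
M32, M64 = 0xFFFFFFFF, 0xFFFFFFFFFFFFFFFF

def cast32_of_shl64(x, count):
    """Reference semantics: compute (x << count) in 64 bits,
    then truncate to the low 32 bits."""
    return ((x << count) & M64) & M32

# count <= 31: the cast can safely be pushed down to a 32-bit shift
assert cast32_of_shl64(0x1_0000_0001, 4) == ((0x1_0000_0001 & M32) << 4) & M32
# 32 <= count < 64: the low 32 bits of the result are always zero
assert cast32_of_shl64(0x1234_5678, 40) == 0
```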
|
|
(#15157)
* add the repro
* fix the issue
* Move the test to pri1
|
|
* fix the issue
Do not zero-initialize temps that were created by the JIT and do not have GC refs.
The confusion was that `varDsc->lvIsTemp` means a short-lived variable, when we wanted to ask whether it is an IL temp or not.
* add a non-stress repro.
|
|
several returns. (#14945)
* add fgNeedReturnSpillTemp
* fix the issue
* add the repro without stress mode
|