Age | Commit message (Collapse) | Author | Files | Lines |
|
This avoids possible problems with corrupted NI files if crossgen
is killed during image saving.
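A common way to make image saving robust against the process being killed mid-write (a sketch of the general technique, not necessarily what crossgen does; file names are illustrative) is to write to a temporary file and atomically rename it into place:

```c
#include <stdio.h>
#include <stddef.h>
#include <unistd.h>

/* Sketch: write the native image to a temporary file, then rename() it
 * into place. rename() is atomic on POSIX, so a killed process leaves
 * either the old file or nothing, never a partially written NI file. */
int save_image_atomically(const char *path, const void *data, size_t len)
{
    char tmp[4096];
    snprintf(tmp, sizeof(tmp), "%s.tmp", path);

    FILE *f = fopen(tmp, "wb");
    if (!f)
        return -1;
    if (fwrite(data, 1, len, f) != len) {
        fclose(f);
        unlink(tmp);        /* never leave the partial temp file behind */
        return -1;
    }
    if (fclose(f) != 0) {
        unlink(tmp);
        return -1;
    }
    /* Readers see either the complete new image or the old one. */
    return rename(tmp, path);
}
```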
|
|
Change-Id: I7937dfffeb7500d37b4e2da485a563a442b38b8c
|
|
|
|
|
|
coreclr (xmm bug)"
This reverts commit a6695d168396847df8d006a8743c84e5141824af.
|
|
We need to flush the instruction cache only for pages that have relocations,
instead of for full sections, because otherwise the application's shared
clean memory is increased in some cases on Linux.
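The page-granular flush can be sketched as follows (illustrative code, not the actual PAL implementation; the callback stands in for a real instruction-cache flush such as `__builtin___clear_cache`):

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch: flush the instruction cache only for pages that contain
 * relocations, rather than for whole sections. The flush callback is a
 * parameter so the page-walking logic is testable; pass 0 to only count. */
typedef void (*flush_fn)(uintptr_t page_start, size_t page_size);

size_t flush_reloc_pages(const uintptr_t *relocs, size_t count,
                         size_t page_size, flush_fn flush)
{
    size_t pages_flushed = 0;
    uintptr_t last_page = (uintptr_t)-1;
    for (size_t i = 0; i < count; i++) {       /* relocs assumed sorted */
        uintptr_t page = relocs[i] & ~(page_size - 1);
        if (page != last_page) {               /* flush each page at most once */
            if (flush)
                flush(page, page_size);
            last_page = page;
            pages_flushed++;
        }
    }
    return pages_flushed;
}
```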
|
|
(xmm bug)
|
|
|
|
|
|
* Introduce FEATURE_NGEN_RELOCS_OPTIMIZATIONS, under which NGEN-specific relocation optimizations are enabled
* Separate the READONLY_VCHUNKS and READONLY_DICTIONARY sections
* Remove relocations for the second-level indirection of the Vtable when FEATURE_NGEN_RELOCS_OPTIMIZATIONS is enabled
* Replace push/pop of R11 in stubs with
- str/ldr of R4 in space reserved in the epilog for non-tail calls
- usage of R4 for hybrid tail calls (same as for EmitShuffleThunk)
* Replace push/pop of R11 in the function epilog with use of LR as a helper register right before its restore from the stack
|
|
* [x86/Linux] Fix marshalling struct with 64-bit types
The System V ABI for i386 defines 4-byte alignment for 64-bit types.
* [Linux/x86] Fix marshalling tests in the case of System V i386 ABI
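The alignment difference that this fix accounts for can be illustrated with a small offset calculator (a sketch; `align_up` and the two-field layout are illustrative, not the actual marshalling code):

```c
#include <stddef.h>

/* Under the System V i386 ABI, 64-bit integers and doubles are only
 * 4-byte aligned inside structs, unlike most 64-bit ABIs where they are
 * 8-byte aligned, so marshalling code must not assume natural alignment. */
size_t align_up(size_t offset, size_t alignment)
{
    return (offset + alignment - 1) & ~(alignment - 1);
}

/* Offset of a 64-bit field that follows a single 32-bit field,
 * given the ABI's alignment requirement for 64-bit types. */
size_t offset_of_i64_after_i32(size_t i64_alignment)
{
    size_t offset = sizeof(int);   /* 32-bit field occupies bytes 0..3 */
    return align_up(offset, i64_alignment);
}
```

With 4-byte alignment (i386 System V) the 64-bit field starts at offset 4; with natural 8-byte alignment it starts at offset 8, so the two layouts disagree on every subsequent field as well.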
|
|
|
|
|
|
|
|
|
|
There was no -pie linker option.
This patch adds the -pie linker option to crossgen (for Tizen).
This originates from 0024-Add-pie-to-linker-option.patch
|
|
|
|
For some reason, the Alpine docker container running on a SELinux host maps
the heap as RWX. When we allocate the alternate stack from the heap, we also
change the protection of the first page to PROT_NONE so that it can
serve as a guard page to catch stack overflow. And when we free the
alternate stack, we restore the protection to PROT_READ | PROT_WRITE.
This restoration fails in an Alpine docker container running on a SELinux
host with an EPROT failure, and the SELinux log reports that an attempt
was made to change the heap to executable. So it looks like the kernel
has added PROT_EXEC to the permissions we passed to the mprotect call.
There is code in the mprotect implementation that can do that, although
I don't fully understand the conditions under which it happens. It is
driven by the VM_MAYEXEC flag in the internal VMA block structure.
To fix that, I've modified the alternate stack allocation to use mmap /
munmap instead of C heap allocation.
|
|
StompWriteBarrier(WriteBarrierOp::StompResize) on ARM (#18107)
|
|
in PAL and GC on ARM and ARM64
|
|
For ARM, doing a secure delegate call requires adding
a custom calling convention argument R4 as the address of the
secure delegate invoke indirection cell. This is done using the
fgMorphArgs nonStandardArgs mechanism, and the argument is added
at the end. For calls with 4 or more register arguments, this
didn't work: we would initially set the non-standard arg as a
non-register argument, and the nonStandardArgs check didn't
consider converting an argument from a stack argument back to
a register argument. The fix allows nonStandardArgs to be either
stack or register arguments, no matter what their place in the
argument list would imply.
Fixes #17738
|
|
This reverts commit 4950b038c84c223ddd9fc198dcf5722d46e21552.
|
|
|
|
|
|
Switch source build property to DotNetBuildFromSource
|
|
exception (#17710) (#17844)
|
|
(#17842)
Add portable PDB caching to StackTrace.
This is the mscorlib side of the change.
|
|
Detect source-build via DotNetBuildFromSource instead of
DotNetBuildOffline, which is set for the tarball build.
|
|
|
|
[release/2.1] Remove Alpine 3.6 builds
|
|
This prevents the IL linker from optimizing away some properties/methods
related to tasks that are used by a debugger but are not referenced
anywhere else in coreclr.
This specifically fixes async callstack frames for the xplat C# debugger.
|
|
* Fix Number.ParseNumber to not assume '\0' at the end of a span
This routine was written for parsing strings, which are implicitly null-terminated; it doesn't factor in the string length but instead uses tricks to exit loops when the next character is null. Now that the routine is also used for spans, this is very problematic: spans need not be null-terminated, generally aren't when they represent slices, and expecting a null terminator can result in walking off the end of valid memory.
I would like to see all of this code rewritten to use span. In the interim, as a short-term fix, I've changed all dereferences of the current position to compare against the length of the span (or, rather, a pointer to its end), and to pretend that a null terminator was found once we've hit the end.
* Address PR feedback
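The bounded-dereference pattern looks roughly like this (a C sketch of the idea; the real routine is C# inside Number.ParseNumber):

```c
#include <stdint.h>

/* Sketch: instead of stopping at a '\0' terminator (which a span slice
 * need not have), every read of the current character is bounded by a
 * pointer to the end of the span; reaching the end is treated the same
 * as finding a terminator. */
uint32_t parse_digits(const char *p, const char *end)
{
    uint32_t value = 0;
    /* The bounds check comes first: never dereference p at or past end. */
    while (p < end && *p >= '0' && *p <= '9') {
        value = value * 10 + (uint32_t)(*p - '0');
        p++;
    }
    return value;
}
```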
|
|
The Alpine 3.6 builds have been replaced with the more generic
linux-musl builds, so they are being removed.
|
|
Make intra-build containers private
|
|
Fixes #17716
|
|
(#17712) (#17714)
* Preserve pinned flag in {ReadOnly}Memory<T>.Slice
* Address PR feedback.
Signed-off-by: dotnet-bot-corefx-mirror <dotnet-bot@microsoft.com>
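One way such a pinned flag can be preserved across slicing (an assumed encoding for illustration; the real Memory&lt;T&gt; internals may differ) is to keep it in the high bit of the index field and mask it out of the offset arithmetic:

```c
#include <stdint.h>

/* Assumed encoding, for illustration: a memory handle packs a "pinned"
 * flag into the high bit of its index field. Slice must carry the flag
 * bit over to the new index, not just add the offsets together. */
#define PINNED_FLAG 0x80000000u

typedef struct { uint32_t index; uint32_t length; } mem_t;

mem_t mem_slice(mem_t m, uint32_t start)
{
    mem_t r;
    /* Offset only the low 31 index bits; re-apply the flag bit. */
    r.index = ((m.index & ~PINNED_FLAG) + start) | (m.index & PINNED_FLAG);
    r.length = m.length - start;
    return r;
}
```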
|
|
commit b4d701a72c20b695715371a99b48473053b63250
Author: Ahson Khan <ahkha@microsoft.com>
Date: Wed Apr 11 13:43:36 2018 -0700
Add CreateFromPinnedArray to System.Memory ref and add tests (#28992)
* Fixing bug in Memory.Pin and adding API to uapaot baseline
commit 76e01040fcfdb1c652ef1bf4e8e123c7db4e1be8
Author: Ahson Khan <ahkha@microsoft.com>
Date: Mon Apr 16 01:54:54 2018 -0700
Update xml comment for {ReadOnly}Memory.Pin method (#29137)
|
|
|
|
[release/2.1] Add linux-musl build leg
|
|
|
|
|
|
Skip container creation if not in flatcontainer mode
Container creation isn't required, and would be incorrect if the ExpectedFeedUrl account name didn't match AccountName.
|
|
[Arm64] Disable SIMD in crossgen (added as part of #14633)
|
|
|
|
|
|
|
|
|
|
Byref pointers need to point within their "host" object -- thus
the alternate name "interior pointers". If the JIT creates and
reports a pointer as a "byref", but it points outside the host
object, and a GC occurs that moves the host object, the byref
pointer will not be updated. If a subsequent calculation puts
the byref "back" into the host object, it will actually be pointing
to garbage, since the host object has moved.
This occurred on ARM with array index calculations, in particular
because ARM doesn't have a single-instruction "base + scale*index + offset"
addressing mode. Thus, we were generating, for the jaggedarr_cs_do
test case, `ProcessJagged3DArray()` function:
```
// r0 = array object, r6 = computed index offset. We mark r4 as a byref.
add r4, r0, r6
// r4 - 32 is the offset of the object we care about. Then we load the array element.
// In this case, the loaded element is a gcref, so r4 becomes a gcref.
ldr r4, [r4, #-32]
```
We get this math because the user code uses `a[i - 10]`, which is
essentially `a + (i - 10) * 4 + 8` for element size 4. This is optimized
to `a + i * 4 - 32`. In the above code, `r6` is `i * 4`. In this case,
after the first instruction, `r4` can point beyond the array.
If a GC happens, `r4` isn't updated, and the second instruction loads garbage.
There are several fixes:
1. Change array morphing in `fgMorphArrayIndex()` to rearrange the array index
IR node creation to only create a byref pointer that is precise; don't create
"intermediate" byref pointers that don't represent the actual array element
address being computed. The tree matching code that annotates the generated tree
with field sequences needs to be updated to match the new form.
2. Change `fgMoveOpsLeft()` to prevent the left-weighted reassociation optimization
`[byref]+ (ref, [int]+ (int, int)) => [byref]+ ([byref]+ (ref, int), int)`. This
optimization creates "incorrect" byrefs that don't necessarily point within
the host object.
3. Add an additional condition to the `Fold "((x+icon1)+icon2) to (x+(icon1+icon2))"`
morph optimization to prevent merging of constant TYP_REF nodes, which now were
being recognized due to different tree shapes. This was probably always a problem,
but the particular tree shape wasn't seen before.
These fixes are all-platform. However, to reduce risk at this point, they are
enabled for ARM only, under the `FEATURE_PREVENT_BAD_BYREFS` `#ifdef`.
Fixes #17517.
There are many, many diffs.
For ARM32 ngen-based desktop asm diffs, it is a 0.30% improvement across all
framework assemblies. A lot of the diffs seem to be because we CSE the entire
array address offset expression, not just the index expression.
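The reassociation problem can be illustrated with plain integer arithmetic (illustrative values; real byrefs are GC-tracked pointers, and the 8-byte header and 4-byte element size match the example above):

```c
#include <stdint.h>
#include <stddef.h>

/* For a[i - 10] with 4-byte elements and an 8-byte array header, the
 * element address is base + 8 + (i - 10) * 4 = base + i * 4 - 32.
 * Folding it as (base + i * 4) - 32 creates an intermediate "byref"
 * that can point past the object; if a GC moves the object while only
 * that intermediate value is live, it is not updated. */
int intermediate_in_object(uintptr_t base, size_t obj_size, size_t i)
{
    uintptr_t intermediate = base + i * 4;   /* the bad reassociated form */
    return intermediate >= base && intermediate < base + obj_size;
}

uintptr_t element_address(uintptr_t base, size_t i)
{
    return base + i * 4 - 32;                /* precise element address */
}
```

For a 10-element int array (8-byte header + 40 bytes of data, 48 bytes total), `i = 12` yields a valid element address but an intermediate value that already lies one byte past the object, which is exactly the state the fixes above avoid ever creating.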
|
|
Apparently there is little or no need for a non-portable Windows build, so
rather than trying to figure out which version of Windows we are building
on, just ignore -PortableBuild=false. We can add a warning or refuse to
accept the switch later if necessary, but for now we need to continue
accepting it to avoid build breaks.
Fixes #14291
|
|
|