author | Jan Kotas <jkotas@microsoft.com> | 2017-11-30 23:21:40 -0800 |
---|---|---|
committer | GitHub <noreply@github.com> | 2017-11-30 23:21:40 -0800 |
commit | c1e44d9db9cc0d03e5f44af665acefa3589a6883 (patch) | |
tree | 2b9ad535324b85548de84a5eec8eabfe39ca3de5 /Documentation | |
parent | 845c2c248c6450107acd37c5469f56d05b391183 (diff) | |
Jumpstub fixes (#15296)
- Reserve space for jump stubs for precodes and other code fragments at the end of each code heap segment. This is meant
to ensure that the eventual allocation of jump stubs for precodes and other code fragments succeeds. The accounting is
conservative and reserves more than strictly required; it wastes a bit of address space, but no actual memory. This
reserve is not used to allocate jump stubs for JITed code, since JITing can now recover from a failure to allocate
a jump stub. Fixes #14996.
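The reservation scheme described in this bullet can be sketched roughly as follows. This is a minimal illustration, not the actual coreclr implementation; all names and sizes (e.g. `kJumpStubSize`, `CodeHeapSegment`) are hypothetical:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical worst-case x64 jump stub size (e.g. mov rax, imm64; jmp rax).
constexpr size_t kJumpStubSize = 12;

struct CodeHeapSegment {
    size_t size;         // total size of the segment
    size_t used = 0;     // bytes handed out from the front
    size_t reserved = 0; // bytes held back at the end for future jump stubs

    // Allocate `bytes` for a precode; conservatively grow the end-of-segment
    // reserve by one jump stub so a later stub allocation cannot fail.
    bool AllocPrecode(size_t bytes) {
        if (used + bytes + reserved + kJumpStubSize > size)
            return false; // caller must fall back to another segment
        used += bytes;
        reserved += kJumpStubSize; // address space only; no memory committed
        return true;
    }

    // Allocate a jump stub out of the reserve; succeeds as long as every
    // stub request was paired with a prior AllocPrecode call.
    bool AllocJumpStubFromReserve() {
        if (reserved < kJumpStubSize)
            return false;
        reserved -= kJumpStubSize;
        used += kJumpStubSize;
        return true;
    }
};
```

The over-reservation mirrors the "wastes a bit of address space, but no actual memory" trade-off: the reserve is only bookkeeping until a stub is actually placed.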
- Improve the algorithm for reusing HostCodeHeap segments: maintain an estimated size of the largest free block in each HostCodeHeap.
This estimate is updated when an allocation request fails, and also when memory is returned to the HostCodeHeap. Fixes #14995.
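A sketch of the estimate maintenance this bullet describes, with hypothetical names (`FreeBlockEstimate` and its methods are illustrative, not coreclr's actual data structure):

```cpp
#include <cassert>
#include <cstddef>

// Upper-bound estimate of the largest free block in a code heap, used to
// skip heaps that cannot possibly satisfy a request.
struct FreeBlockEstimate {
    size_t estimate = 0;

    bool CanSatisfy(size_t request) const { return request <= estimate; }

    // An allocation of `request` bytes failed, so the largest free block
    // must be smaller than the request: tighten the estimate.
    void OnAllocFailed(size_t request) {
        if (request <= estimate)
            estimate = request - 1;
    }

    // `bytes` were returned to the heap. The freed block may coalesce with
    // neighbors, so growing the estimate keeps it a safe upper bound.
    void OnFreed(size_t bytes) {
        estimate += bytes;
    }
};
```

Because the estimate only has to be an upper bound, it can be kept cheaply without scanning the free list on every allocation.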
- Retry JITing on failure to allocate a jump stub. Failure to allocate a jump stub during JITing is no longer fatal.
On retry, extra memory is reserved for jump stubs to ensure that the retry succeeds in allocating the jump stubs it
needs with high probability.
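The retry flow above can be sketched as follows. This is an assumption-laden outline (the `JitWithRetry` helper, the result enum, and the reserve size are all invented for illustration):

```cpp
#include <cassert>
#include <cstddef>

enum class JitResult { Ok, JumpStubAllocFailed };

// Attempt to JIT a method; if jump stub allocation fails, retry once with
// extra code-heap space reserved for jump stubs.
template <typename JitFn>
JitResult JitWithRetry(JitFn jitMethod) {
    size_t extraJumpStubReserve = 0;
    JitResult r = jitMethod(extraJumpStubReserve);
    if (r == JitResult::JumpStubAllocFailed) {
        // No longer fatal: retry with a reserve so the needed jump stubs
        // can be allocated with high probability.
        extraJumpStubReserve = 4096; // illustrative reserve size
        r = jitMethod(extraJumpStubReserve);
    }
    return r;
}
```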
- Respect CodeHeapRequestInfo::getRequestSize for HostCodeHeap. CodeHeapRequestInfo::getRequestSize is used to
throttle the code heap segment size for large workloads. Not respecting it in HostCodeHeap led to too many
overly small code heap segments in large workloads.
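A minimal sketch of the sizing fix, under the assumption that a new segment should be at least as large as the caller's request (names and the minimum size are hypothetical):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>

constexpr size_t kMinSegmentSize = 64 * 1024; // illustrative default

// Honor the requested size when growing a HostCodeHeap, so large workloads
// get fewer, larger segments instead of many tiny ones.
size_t NewSegmentSize(size_t requestSize) {
    return std::max(requestSize, kMinSegmentSize);
}
```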
- Switch the HostCodeHeap nibble map to be allocated on the regular heap. This simplified the math required to estimate
the nibble map size, and allocating on the regular heap is an overall improvement since the map does not need to be executable.
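As one way to picture the size estimate this bullet mentions: a nibble map carries a small fixed-size entry per granule of code. The granule size and encoding below are assumptions for illustration, not coreclr's actual constants:

```cpp
#include <cassert>
#include <cstddef>

constexpr size_t kCodeGranule = 32;    // bytes of code per nibble entry (assumed)
constexpr size_t kNibblesPerByte = 2;  // two 4-bit entries per byte

// Bytes of regular (non-executable) heap needed for the nibble map covering
// a code heap of `codeBytes`.
size_t NibbleMapSize(size_t codeBytes) {
    size_t entries = (codeBytes + kCodeGranule - 1) / kCodeGranule;
    return (entries + kNibblesPerByte - 1) / kNibblesPerByte;
}
```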
Diffstat (limited to 'Documentation')
-rw-r--r-- | Documentation/design-docs/jump-stubs.md | 27 |
1 file changed, 8 insertions(+), 19 deletions(-)
diff --git a/Documentation/design-docs/jump-stubs.md b/Documentation/design-docs/jump-stubs.md
index 86bf0ac134..54f5ecb602 100644
--- a/Documentation/design-docs/jump-stubs.md
+++ b/Documentation/design-docs/jump-stubs.md
@@ -188,8 +188,9 @@ still reach their intended target with a rel32 offset, so jump stubs
 are not expected to be required in most cases.
 
 If this attempt to create a jump stub fails, then the generated code
-cannot be used, and we hit a fatal error; we have no mechanism currently
-to recover from this failure, or to prevent it.
+cannot be used, and the VM restarts the compilation with reserving
+extra space in the code heap for jump stubs. The reserved extra space
+ensures that the retry succeeds with high probability.
 
 There are several problems with this system:
 1. Because the VM doesn't know whether a `IMAGE_REL_BASED_REL32`
@@ -205,8 +206,6 @@ code because the JIT generates `IMAGE_REL_BASED_REL32` relocations for
 intra-function jumps and calls that it expects and, in fact, requires,
 not be replaced with jump stubs, because it doesn't expect the register
 used by jump stubs (RAX) to be trashed.
-3. We don't have any mechanism to recover if a jump stub can't be
-allocated.
 
 In the NGEN case, rel32 calls are guaranteed to always reach, as PE
 image files are limited to 2GB in size, meaning a rel32 offset is
@@ -217,8 +216,8 @@ jump stubs, as described later.
 
 ### Failure mitigation
 
-There are several possible mitigations for JIT failure to allocate jump
-stubs.
+There are several possible alternative mitigations for JIT failure to
+allocate jump stubs.
 1. When we get into "rel32 overflow" mode, the JIT could always generate
 large calls, and never generate rel32 offsets. This is obviously
 somewhat expensive, as every external call, such as every call to a JIT
@@ -469,19 +468,9 @@ bytes allocated, to reserve space for one jump stub per FixupPrecode in
 the chunk.
 
 When the FixupPrecode is patched, for LCG methods it will use the
 pre-allocated space if a jump stub is required.
 
-For the non-LCG, non-FixupPrecode cases, we need a different solution.
-It would be easy to similarly allocate additional space for each type of
-precode with the precode itself. This might prove expensive. An
-alternative would be to ensure, by design, that somehow shared jump stub
-space is available, perhaps by reserving it in a shared area when the
-precode is allocated, and falling back to a mechanism where the precode
-reserves its own jump stub space if shared jump stub space cannot be
-allocated.
-
-A possibly better implementation would be to reserve, but not allocate,
-jump stub space at the end of the code heap, similar to how
-CodeHeapReserveForJumpStubs works, but instead the reserve amount should
-be computed precisely.
+For non-LCG, we are reserving, but not allocating, a space at the end
+of the code heap. This is similar and in addition to the reservation done by
+COMPlus_CodeHeapReserveForJumpStubs.
+(See https://github.com/dotnet/coreclr/pull/15296).
 
 ## Ready2Run