path: root/src/vm/method.inl
author     Koundinya Veluri <kouvel@users.noreply.github.com>  2019-01-11 18:02:10 -0800
committer  GitHub <noreply@github.com>  2019-01-11 18:02:10 -0800
commit     37b9d85941c39cfdce2a2ea877388ab1ab630c68 (patch)
tree       004009a0f73f752ecd338a7460473e861609db21 /src/vm/method.inl
parent     834f8d9bd3ee5f0095c91e334ed4565a1a740fee (diff)
Patch vtable slots and similar when tiering is enabled (#21292)
For a method eligible for code versioning and vtable slot backpatch:
- It does not have a precode (`HasPrecode()` returns false)
- It does not have a stable entry point (`HasStableEntryPoint()` returns false)
- A call to the method may be:
  - An indirect call through the `MethodTable`'s backpatchable vtable slot
  - A direct call to a backpatchable `FuncPtrStub`, perhaps through a `JumpStub`
  - For interface methods, an indirect call through the virtual stub dispatch (VSD) indirection cell to a backpatchable `DispatchStub` or a `ResolveStub` that refers to a backpatchable `ResolveCacheEntry`
- The purpose is that typical calls to the method have no additional overhead when code versioning is enabled

Recording and backpatching slots:
- In order for all vtable slots for the method to be backpatchable:
  - A vtable slot initially points to the `MethodDesc`'s temporary entry point, even when the method is inherited by a derived type (the slot's value is not copied from the parent)
  - The temporary entry point always points to the prestub and is never backpatched, in order to be able to discover new vtable slots through which the method may be called
  - The prestub, as part of `DoBackpatch()`, records any slots that are transitioned from the temporary entry point to the method's at-the-time current, non-prestub entry point
  - Any further changes to the method's entry point cause recorded slots to be backpatched in `BackpatchEntryPointSlots()`
- In order for the `FuncPtrStub` to be backpatchable:
  - After the `FuncPtrStub` is created and exposed, it is patched to point to the method's at-the-time current entry point if necessary
  - Any further changes to the method's entry point cause the `FuncPtrStub` to be backpatched in `BackpatchEntryPointSlots()`
- In order for VSD entities to be backpatchable:
  - A `DispatchStub`'s entry point target is aligned and recorded for backpatching in `BackpatchEntryPointSlots()`
    - The `DispatchStub` was modified on x86 and x64 such that the entry point target is aligned to a pointer to make it backpatchable
  - A `ResolveCacheEntry`'s entry point target is recorded for backpatching in `BackpatchEntryPointSlots()`

Slot lifetime and management of recorded slots:
- A slot is recorded in the `LoaderAllocator` in which the slot is allocated, see `RecordAndBackpatchEntryPointSlot()`
- An inherited slot that has a shorter lifetime than the `MethodDesc`, when recorded, needs to be accessible by the `MethodDesc` for backpatching, so the dependent `LoaderAllocator` with the slot to backpatch is also recorded in the `MethodDesc`'s `LoaderAllocator`, see `MethodDescBackpatchInfo::AddDependentLoaderAllocator_Locked()`
- At the end of a `LoaderAllocator`'s lifetime, the `LoaderAllocator` is unregistered from dependency `LoaderAllocators`, see `MethodDescBackpatchInfoTracker::ClearDependencyMethodDescEntryPointSlots()`
- When a `MethodDesc`'s entry point changes, backpatching also includes iterating over recorded dependent `LoaderAllocators` to backpatch the relevant slots recorded there, see `BackpatchEntryPointSlots()`

Synchronization between entry point changes and backpatching slots:
- A global lock is used to ensure that all recorded backpatchable slots corresponding to a `MethodDesc` point to the same entry point, see `DoBackpatch()` and `BackpatchEntryPointSlots()` for examples; a simplified sketch of this pattern follows below
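To make the record-and-backpatch flow above concrete, here is a minimal C++ sketch of the pattern, assuming a single global lock and a flat list of recorded slots. Every name in it (`MethodBackpatchInfo`, `g_backpatchLock`, `RecordAndBackpatchSlot`) is a hypothetical simplification for illustration, not the runtime's actual `MethodDescBackpatchInfoTracker` machinery:

```cpp
#include <cstdint>
#include <mutex>
#include <vector>

// Hypothetical simplification of the record-and-backpatch pattern above;
// not the actual runtime types.
using PCODE = std::uintptr_t; // an entry point address

struct MethodBackpatchInfo
{
    std::vector<PCODE *> recordedSlots; // slots known to call this method
};

std::mutex g_backpatchLock; // stand-in for the global backpatch lock

// Called when a slot is first seen holding the temporary entry point:
// record it, then point it at the method's current entry point.
void RecordAndBackpatchSlot(MethodBackpatchInfo &info, PCODE *slot, PCODE currentEntryPoint)
{
    std::lock_guard<std::mutex> hold(g_backpatchLock);
    info.recordedSlots.push_back(slot);
    *slot = currentEntryPoint;
}

// Called on every later entry point change: under the same lock, rewrite
// every recorded slot so they all agree on the new entry point.
void BackpatchEntryPointSlots(MethodBackpatchInfo &info, PCODE newEntryPoint)
{
    std::lock_guard<std::mutex> hold(g_backpatchLock);
    for (PCODE *slot : info.recordedSlots)
    {
        *slot = newEntryPoint;
    }
}
```

Taking the same lock on both paths is what yields the invariant stated above: at any moment, every recorded slot for a method points at the same entry point.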
Due to startup time perf issues:
- `IsEligibleForTieredCompilation()` is called more frequently with this change and in hotter paths. I chose to use a `MethodDesc` flag to store that information for fast retrieval (a sketch of this flag-caching pattern follows after this list). The flag is initialized by `DetermineAndSetIsEligibleForTieredCompilation()`.
- Initially, I experimented with allowing a method versionable with vtable slot backpatch to have a precode, and allocated a new precode that would also be the stable entry point when a direct call is necessary. That also allows recording a new slot to be optional - in the event of an OOM, the slot may just point to the stable entry point. There are a large number of such methods and the allocations were slowing down startup perf. So, I had to eliminate precodes for methods versionable with vtable slot backpatch, and that in turn means that recording slots is necessary for versionability.
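As a rough illustration of the flag-caching approach from the first bullet, here is a hedged C++ sketch. The field name, flag value, and `ComputeEligibility()` helper are invented for illustration; only the two method names mirror the ones mentioned above:

```cpp
#include <cstdint>

// Hypothetical sketch of caching an expensive predicate in a flag bit;
// names and layout are invented, not the actual MethodDesc.
class MethodDescSketch
{
    uint16_t m_flags = 0;
    static constexpr uint16_t enum_flag_IsEligibleForTieredCompilation = 0x0001;

    // Stand-in for the real, one-time eligibility computation.
    bool ComputeEligibility() const { return true; }

public:
    // Run the slow determination once, at method creation time.
    void DetermineAndSetIsEligibleForTieredCompilation()
    {
        if (ComputeEligibility())
        {
            m_flags |= enum_flag_IsEligibleForTieredCompilation;
        }
    }

    // Hot-path query: a single bit test, no recomputation.
    bool IsEligibleForTieredCompilation() const
    {
        return (m_flags & enum_flag_IsEligibleForTieredCompilation) != 0;
    }
};
```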
Diffstat (limited to 'src/vm/method.inl')
-rw-r--r--  src/vm/method.inl  10
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/src/vm/method.inl b/src/vm/method.inl
index 9d55ae9260..f72d42f4c8 100644
--- a/src/vm/method.inl
+++ b/src/vm/method.inl
@@ -182,7 +182,15 @@ inline CodeVersionManager * MethodDesc::GetCodeVersionManager()
inline CallCounter * MethodDesc::GetCallCounter()
{
LIMITED_METHOD_CONTRACT;
- return GetModule()->GetCallCounter();
+ return GetLoaderAllocator()->GetCallCounter();
+}
+#endif
+
+#ifndef CROSSGEN_COMPILE
+inline MethodDescBackpatchInfoTracker * MethodDesc::GetBackpatchInfoTracker()
+{
+ LIMITED_METHOD_CONTRACT;
+ return GetLoaderAllocator()->GetMethodDescBackpatchInfoTracker();
}
#endif