path: root/src/vm/codeversion.h
2019-07-03  Fix GCStress modes that do code instrumentation to work with code versioning (#25261)  [Koundinya Veluri; 1 file, -3/+30]
- SOS changes are in https://github.com/dotnet/diagnostics/pull/369
- Fixes https://github.com/dotnet/coreclr/issues/17646
2019-05-23  Add some perf events/data for tiered compilation (#24607)  [Koundinya Veluri; 1 file, -1/+2]

New events:
- `Settings` - Sent when TC is enabled
  - `Flags` - Currently indicates whether QuickJit and QuickJitForLoops are enabled
- `Pause` - Sent when TC is paused (due to a new method being called for the first time)
- `Resume` - Sent when TC resumes
  - `NewMethodCount` - Number of methods called for the first time while tiering was paused
- `BackgroundJitStart` - Sent when starting to JIT methods in the background
  - `PendingMethodCount` - Number of methods currently scheduled for background JIT
- `BackgroundJitStop` - Sent when background jitting stops
  - `PendingMethodCount` - Same as above; when 0, background jitting has completed
  - `JittedMethodCount` - Number of methods jitted in the background since the previous `BackgroundJitStart` event on the same thread

Miscellaneous:
- Updated method JIT events to include the optimization tier
- Added a couple more cases where tiered compilation is disabled for methods that have JIT optimization disabled for some reason
- Renamed the `Duration` field of the new version of the `ContentionEnd` event to `DurationNs` to indicate the units of time
- Added `OptimizationTierOptimized` to `NativeCodeVersion::OptimizationTier` to distinguish it from `OptimizationTier1` (see the sketch below). `OptimizationTierOptimized` is now used for methods that QuickJit is disabled for, and does not send the tier 1 flag.
- For info about the code being generated by the JIT, added info to `PrepareCodeConfig` and stored a pointer to it on the thread object for the current JIT invocation. The info is updated in `PrepareCodeConfig` and used for updating the tier on the code version and for sending the ETL event.
  - If the JIT decides to use MinOpts when `MethodDesc::IsJitOptimizationDisabled()` is false, the info is not stored; the runtime method event will reflect the JIT's choice, the rundown event will not.
- Updated SOS to show optimization tiers similarly to PerfView
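For reference, the tier distinction this commit adds can be pictured as below. This is a sketch only; the authoritative declaration is `NativeCodeVersion::OptimizationTier` in this header, and the ordering and exact members may differ:

```cpp
// Sketch of the optimization-tier enumeration after this change. Names
// match the commit message; values/order are illustrative only.
enum OptimizationTier
{
    OptimizationTier0,         // quick-jitted (tier 0) code
    OptimizationTier1,         // optimized code reached by tiering up
    OptimizationTierOptimized, // optimized from the start (e.g. QuickJit
                               // disabled); does not report the tier 1 flag
};
```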
2019-05-13  Profiler API to request ReJIT with inliners (#24461)  [David Mason; 1 file, -0/+9]
This API is necessary for attaching profilers to be able to ReJIT methods and replace everything that uses the old IL.
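A minimal usage sketch of the new `ICorProfilerInfo10::RequestReJITWithInliners` call. The interface, method, and flag are from the profiling API; the surrounding function and the single-method scaffolding are invented for illustration, and error handling is elided:

```cpp
#include <corprof.h>

// Ask the runtime to ReJIT a method *and* its inliners, so no caller keeps
// running a body that inlined the old IL. COR_PRF_REJIT_BLOCK_INLINING
// additionally prevents the replacement body from being inlined into
// callers going forward.
HRESULT RejitWithInliners(ICorProfilerInfo10* pInfo,
                          ModuleID moduleId, mdMethodDef methodDef)
{
    ModuleID    modules[] = { moduleId };
    mdMethodDef methods[] = { methodDef };
    return pInfo->RequestReJITWithInliners(
        COR_PRF_REJIT_BLOCK_INLINING, 1, modules, methods);
}
```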
2019-03-07  Fix signed compare warnings  [Sinan Kaya; 1 file, -1/+1]
Fixes GCC warnings of the form:
    warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
by adding explicit conversions. Includes an update to src/ToolBox/superpmi/mcs/verbdumptoc.cpp.
Co-Authored-By: franksinankaya <41809318+franksinankaya@users.noreply.github.com>
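The warning fires whenever a comparison mixes signedness. A hypothetical before/after showing the usual fix (illustrative code, not the actual diff):

```cpp
#include <cstddef>
#include <vector>

int CountNonZero(const std::vector<int>& v)
{
    int result = 0;
    // A signed 'int i' compared against 'v.size()' (size_t) triggers
    // -Wsign-compare; using size_t on both sides of the comparison
    // silences it without changing behavior.
    for (size_t i = 0; i < v.size(); i++)
    {
        if (v[i] != 0)
            result++;
    }
    return result;
}
```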
2019-02-26  Address janvorli's feedback comments  [Adeel; 1 file, -9/+3]

2019-02-26  Merge branch 'master' into fix/signed-compare-and-narrowing  [Adeel Mujahid; 1 file, -7/+9]
2019-02-26  GCC compatibility fixes #7 (#22810)  [Sinan Kaya; 1 file, -2/+2]
- Use thread_local for thread-local storage on non-MSVC targets
- Use a local copy of the visitor rather than the function parameter
- Remove extra class qualifier
- Replace hex number representation in ASM files
- Reorder STDAPI and DLLEXPORT
- Suppress conversion warning during hash add casting
- Remove anonymous struct (illustrated below):
    src/vm/codeversion.h:112:16: warning: ‘struct NativeCodeVersion::<anonymous union>::SyntheticStorage’ invalid; an anonymous union can only have non-static data members [-fpermissive]
- Remove extra class declaration
- Remove extern "C"
- Add missing parentheses:
    src/vm/amd64/virtualcallstubcpu.hpp:735:103 and :741:103: warning: suggest parentheses around ‘-’ in operand of ‘&’ [-Wparentheses]
        resolveInit.toMiss1 = offsetof(ResolveStub,miss)-(offsetof(ResolveStub,toMiss1)+1) & 0xFF;
    src/vm/dataimage.cpp:631:55: warning: suggest parentheses around ‘&&’ within ‘||’ [-Wparentheses]
        previousRvaInfo->rva == rvaInfo->rva && previousRvaInfo->size >= rvaInfo->size
    src/debug/daccess/daccess.cpp:6871:29: warning: suggest parentheses around ‘&&’ within ‘||’ [-Wparentheses]
        _ASSERTE(peFile == NULL && reflectionModule != NULL || peFile != NULL && reflectionModule == NULL);
- Initialize member:
    src/ilasm/method.cpp:35:36: warning: operation on ‘((Method*)this)->Method::m_ulColumns[0]’ may be undefined [-Wsequence-point]
        m_ulColumns[0]=m_ulColumns[0]=0;
- Remove unknown compiler option
- Abstract DLLEXPORT
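The codeversion.h diagnostic above comes from defining a type inside an anonymous union. A simplified before/after of that pattern (member names invented, not the exact CoreCLR declarations):

```cpp
// Before (GCC rejects even with -fpermissive): a struct *defined* inside
// an anonymous union is invalid; anonymous unions may contain only
// non-static data members.
//
//     union
//     {
//         struct SyntheticStorage { void* m_data; } m_synthetic;
//         void* m_explicit;
//     };
//
// After: hoist the type out of the union so the union holds only plain
// data members.
struct SyntheticStorage
{
    void* m_data;
};

class CodeVersionSketch
{
    union
    {
        SyntheticStorage m_synthetic;
        void* m_explicit;
    };
};
```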
2019-02-24  Fix comparison and narrowing errors reported by GCC  [Adeel; 1 file, -9/+13]
2019-02-07  Add config option to disable tier 0 JIT (#22370)  [Koundinya Veluri; 1 file, -0/+6]

Fixes https://github.com/dotnet/coreclr/issues/21856
- For methods that don't have pregenerated code, using the tier 0 JIT can improve startup perf. Disabling the tier 0 JIT sacrifices some startup time to avoid the issues of running tier 0 code for too long; in some cases, it may also be desirable to avoid tiering up much later.
- A fixed value for the call count indicates that tier 0 call counting is disabled. When disabled, the method starts at tier 1.
- Also modified call counting to start from a predetermined threshold and count down to zero (see the sketch below), as it simplifies some things, allows methods to have different thresholds, and is likely what we would want eventually anyway
- Took a small step towards eliminating knowledge of specific tier levels in code that should not care, though more is to be done there
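A minimal sketch of countdown-style call counting as the commit describes it. Names are invented; the real logic lives in the tiered-compilation machinery:

```cpp
#include <atomic>

// Each method starts at its own threshold and counts down to zero, at
// which point it is promoted to tier 1. Counting down lets the threshold
// vary per method without storing it next to the live counter, and a
// fixed sentinel value can mean "counting disabled: start at tier 1".
struct CallCounterSketch
{
    std::atomic<int> m_remainingCalls;

    explicit CallCounterSketch(int threshold) : m_remainingCalls(threshold) {}

    // Called on each invocation while the method still runs tier 0 code;
    // returns true exactly once, when the count first reaches zero.
    bool ShouldPromoteToTier1()
    {
        return m_remainingCalls.fetch_sub(1, std::memory_order_relaxed) == 1;
    }
};
```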
2019-01-11  Patch vtable slots and similar when tiering is enabled (#21292)  [Koundinya Veluri; 1 file, -1/+2]

For a method eligible for code versioning and vtable slot backpatch:
- It does not have a precode (`HasPrecode()` returns false)
- It does not have a stable entry point (`HasStableEntryPoint()` returns false)
- A call to the method may be:
  - An indirect call through the `MethodTable`'s backpatchable vtable slot
  - A direct call to a backpatchable `FuncPtrStub`, perhaps through a `JumpStub`
  - For interface methods, an indirect call through the virtual stub dispatch (VSD) indirection cell to a backpatchable `DispatchStub` or a `ResolveStub` that refers to a backpatchable `ResolveCacheEntry`
- The purpose is that typical calls to the method have no additional overhead when code versioning is enabled

Recording and backpatching slots (see the sketch after this entry):
- In order for all vtable slots for the method to be backpatchable:
  - A vtable slot initially points to the `MethodDesc`'s temporary entry point, even when the method is inherited by a derived type (the slot's value is not copied from the parent)
  - The temporary entry point always points to the prestub and is never backpatched, in order to be able to discover new vtable slots through which the method may be called
  - The prestub, as part of `DoBackpatch()`, records any slots that are transitioned from the temporary entry point to the method's at-the-time current, non-prestub entry point
  - Any further changes to the method's entry point cause recorded slots to be backpatched in `BackpatchEntryPointSlots()`
- In order for the `FuncPtrStub` to be backpatchable:
  - After the `FuncPtrStub` is created and exposed, it is patched to point to the method's at-the-time current entry point if necessary
  - Any further changes to the method's entry point cause the `FuncPtrStub` to be backpatched in `BackpatchEntryPointSlots()`
- In order for VSD entities to be backpatchable:
  - A `DispatchStub`'s entry point target is aligned and recorded for backpatching in `BackpatchEntryPointSlots()`; the `DispatchStub` was modified on x86 and x64 such that the entry point target is aligned to a pointer to make it backpatchable
  - A `ResolveCacheEntry`'s entry point target is recorded for backpatching in `BackpatchEntryPointSlots()`

Slot lifetime and management of recorded slots:
- A slot is recorded in the `LoaderAllocator` in which the slot is allocated, see `RecordAndBackpatchEntryPointSlot()`
- An inherited slot that has a shorter lifetime than the `MethodDesc`, when recorded, needs to be accessible by the `MethodDesc` for backpatching, so the dependent `LoaderAllocator` with the slot to backpatch is also recorded in the `MethodDesc`'s `LoaderAllocator`, see `MethodDescBackpatchInfo::AddDependentLoaderAllocator_Locked()`
- At the end of a `LoaderAllocator`'s lifetime, the `LoaderAllocator` is unregistered from dependency `LoaderAllocators`, see `MethodDescBackpatchInfoTracker::ClearDependencyMethodDescEntryPointSlots()`
- When a `MethodDesc`'s entry point changes, backpatching also includes iterating over recorded dependent `LoaderAllocators` to backpatch the relevant slots recorded there, see `BackpatchEntryPointSlots()`

Synchronization between entry point changes and backpatching slots:
- A global lock is used to ensure that all recorded backpatchable slots corresponding to a `MethodDesc` point to the same entry point, see `DoBackpatch()` and `BackpatchEntryPointSlots()` for examples

Due to startup time perf issues:
- `IsEligibleForTieredCompilation()` is called more frequently with this change and in hotter paths. I chose to use a `MethodDesc` flag to store that information for fast retrieval. The flag is initialized by `DetermineAndSetIsEligibleForTieredCompilation()`.
- Initially, I experimented with allowing a method versionable with vtable slot backpatch to have a precode, and allocated a new precode that would also be the stable entry point when a direct call is necessary. That also makes recording a new slot optional: in the event of an OOM, the slot may just point to the stable entry point. There are a large number of such methods and the allocations were slowing down startup perf, so I had to eliminate precodes for methods versionable with vtable slot backpatch, which in turn means that recording slots is necessary for versionability.
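A simplified sketch of the record-then-backpatch flow described above. Only the two function names echo the commit message; containers, types, and the single mutex (standing in for the global lock) are invented for illustration:

```cpp
#include <cstdint>
#include <mutex>
#include <unordered_map>
#include <vector>

using PCODE = uintptr_t;   // an entry-point address, as in the VM code
struct MethodDescSketch;   // stand-in for MethodDesc

class BackpatchTrackerSketch
{
    std::mutex m_lock;  // stand-in for the global backpatch lock
    std::unordered_map<MethodDescSketch*, std::vector<PCODE*>> m_slots;

public:
    // Prestub path: remember a newly discovered slot and point it at the
    // method's current entry point (cf. RecordAndBackpatchEntryPointSlot).
    void RecordAndBackpatch(MethodDescSketch* md, PCODE* slot, PCODE current)
    {
        std::lock_guard<std::mutex> hold(m_lock);
        m_slots[md].push_back(slot);
        *slot = current;
    }

    // Entry-point change (e.g. tier-up publishes new code): retarget every
    // recorded slot under the lock so they all agree
    // (cf. BackpatchEntryPointSlots).
    void BackpatchAll(MethodDescSketch* md, PCODE newEntryPoint)
    {
        std::lock_guard<std::mutex> hold(m_lock);
        for (PCODE* slot : m_slots[md])
            *slot = newEntryPoint;
    }
};
```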
2019-01-11  Flowing the nativeCodeVersion to DebuggerJitInfo (#21925)  [Andrew Au; 1 file, -1/+1]

2018-11-28  Delete code related to LoaderOptimization and SharedDomain (#21031)  [Jan Kotas; 1 file, -1/+1]
2018-10-31  Remove superfluous 'const' qualifier from trivial return types (#20652)  [Michał Janiszewski; 1 file, -2/+2]
The 'const' used in this context has no meaning.
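For illustration, the pattern being removed (hypothetical declarations, not the actual diff; GCC and Clang flag it with -Wignored-qualifiers):

```cpp
// 'const' on a by-value return of a trivial type qualifies a temporary
// the caller cannot mutate anyway, so it changes nothing:
const int GetCountBefore();   // qualifier has no effect; may warn
int       GetCountAfter();    // equivalent, and the style this change adopts
```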
2018-10-03  Add MethodImplOptions.AggressiveOptimization and use it for tiering (#20009)  [Koundinya Veluri; 1 file, -11/+5]

Part of the fix for https://github.com/dotnet/corefx/issues/32235
Workaround for https://github.com/dotnet/coreclr/issues/19751
- Added and set CORJIT_FLAG_AGGRESSIVE_OPT to indicate that a method is flagged with AggressiveOptimization
- For a method flagged with AggressiveOptimization, tiering uses a foreground tier 1 JIT on first call to the method, skipping the tier 0 JIT and call counting (see the sketch below)
- When tiering is disabled, a method flagged with AggressiveOptimization does not use R2R-pregenerated code
- R2R crossgen does not generate code for a method flagged with AggressiveOptimization
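A hedged sketch of the tiering policy described above. Helper and type names are invented; the real checks live in the prestub and tiered-compilation code:

```cpp
enum class InitialCodeTier { Tier0, Tier1 };

struct MethodFlagsSketch
{
    bool aggressiveOptimization;  // from MethodImplOptions.AggressiveOptimization in the IL
};

// On first call: methods marked AggressiveOptimization skip tier 0 and
// call counting entirely and are jitted optimized in the foreground
// (with CORJIT_FLAG_AGGRESSIVE_OPT passed to the JIT in that case).
InitialCodeTier ChooseInitialTier(const MethodFlagsSketch& m)
{
    return m.aggressiveOptimization ? InitialCodeTier::Tier1
                                    : InitialCodeTier::Tier0;
}
```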
2018-08-01  Allow rejit on attach (#19054)  [David Mason; 1 file, -0/+1]
- Change profiler ReJIT to be enabled by default, and change the debugger to only give up on setting a breakpoint if a method has actually been rejitted, rather than whenever ReJIT is on
- Copy corprof changes to the PAL version, and change ReJIT so it is allowable on attach (see the sketch below)
- Change GetILFunctionBody/SetILFunctionBody to be allowed after attach
- Also make RequestRevert allowable on attach
- Change lock order and switch from GC_NOTRIGGER to GC_TRIGGERS in the ReJIT codepath through the code version manager
- Make GetReJITIDs callable after attach
- Rename the config value to enable/disable ReJIT on attach, and cache the values of profiler ReJIT and the config value
- Change places where the JIT checks for ReJIT being enabled to check for what they actually want: whether jump stamps are enabled
- Get rid of an old value that was re-added by a merge
- Disallow detach after setting the ReJIT event mask, and prevent the mask from being set if ReJIT on attach is turned off
- Fix an incorrect assert
- Take the code manager lock in SetIP
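A minimal sketch of what this enables: a profiler that attached to a running process can now request a ReJIT. The interface and method are the real profiling API (`ICorProfilerInfo4::RequestReJIT`); the surrounding scaffolding is invented, and error handling is elided:

```cpp
#include <corprof.h>

// Before this change, ReJIT effectively required the profiler to be loaded
// at startup; with it, an attaching profiler may issue this request too.
HRESULT RejitAfterAttach(ICorProfilerInfo4* pInfo,
                         ModuleID moduleId, mdMethodDef methodDef)
{
    ModuleID    modules[] = { moduleId };
    mdMethodDef methods[] = { methodDef };
    return pInfo->RequestReJIT(1, modules, methods);
}
```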
2018-01-25  Fix pointer in ILCodeVersionNode so it uses PTR_COR_ILMETHOD instead of COR_ILMETHOD*, which was causing a crash in the DAC (#16003)  [David Mason; 1 file, -1/+1]
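For context, a simplified stand-in for why the field type matters (this is not CoreCLR's actual `DPTR` machinery, and the marshaling hook below is hypothetical): DAC pointer types hold a target-process address and marshal reads through the DAC, whereas a raw pointer field would be dereferenced directly in the debugger process and crash:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical marshaling hook: copies 'size' bytes from the debuggee at
// 'targetAddr' into debugger-local memory and returns the local copy.
void* DacTranslateSketch(uintptr_t targetAddr, std::size_t size);

template <typename T>
struct DPtrSketch   // simplified stand-in for the DPTR(T)/PTR_* typedefs
{
    uintptr_t m_targetAddr;  // an address in the debuggee, not this process

    T* operator->() const
    {
        // Safe in the DAC build: reads go through the marshaling layer.
        // A raw T* field (like the old COR_ILMETHOD*) skips this step.
        return static_cast<T*>(DacTranslateSketch(m_targetAddr, sizeof(T)));
    }
};
```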
2017-09-07  Make dumpmd work with tiered jitting; now displays previous code addresses (#13805)  [David Mason; 1 file, -2/+2]
- Add tier info and a NativeCodeVersionNode pointer to dumpmd output
- Fix warnings on non-ReJIT platforms
2017-07-24  Add the runtime code versioning feature  [noahfalk; 1 file, -0/+689]
This makes tiered compilation work properly with profiler ReJIT, and positions the runtime to integrate other versioning-related features in the future. See the newly added code-versioning design doc in this commit for more information. Breaking changes for profilers: see code-versioning-profiler-breaking-changes.md for more details.
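A structural sketch of the versioning model the design doc introduces: profiler ReJIT creates IL code versions of a method, and tiered compilation creates native code versions under each IL version. The type names echo the feature's real types (`ILCodeVersion`, `NativeCodeVersion`); the members shown are illustrative only:

```cpp
#include <cstdint>
#include <vector>

using PCODE = uintptr_t;

// One jitted body of a particular IL version (tiering creates these).
struct NativeCodeVersionSketch
{
    PCODE m_entryPoint;        // address of this jitted code
    int   m_optimizationTier;  // e.g. tier 0 vs tier 1
};

// One IL body of a method (profiler ReJIT creates non-default ones).
struct ILCodeVersionSketch
{
    const uint8_t* m_pIL;      // default or profiler-supplied IL
    std::vector<NativeCodeVersionSketch> m_nativeVersions;
};

// Per-method versioning state: a two-level tree. Exactly one native code
// version is active at a time; publishing a different one is what drives
// entry-point updates elsewhere in the runtime.
struct MethodVersionsSketch
{
    std::vector<ILCodeVersionSketch> m_ilVersions;
};
```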