author    Jiyoung Yun <jy910.yun@samsung.com>    2017-04-13 05:17:19 (GMT)
committer Jiyoung Yun <jy910.yun@samsung.com>    2017-04-13 05:17:19 (GMT)
commit    a56e30c8d33048216567753d9d3fefc2152af8ac (patch)
tree      7e5d979695fc4a431740982eb1cfecc2898b23a5 /Documentation
parent    4b11dc566a5bbfa1378d6266525c281b028abcc8 (diff)
Imported Upstream version 2.0.0.11353 (upstream/2.0.0.11353)
Diffstat (limited to 'Documentation')
-rw-r--r--  Documentation/botr/README.md                                                 2
-rw-r--r--  Documentation/botr/clr-abi.md                                               10
-rw-r--r--  Documentation/botr/garbage-collection.md                                     4
-rw-r--r--  Documentation/botr/ryujit-overview.md                                        4
-rw-r--r--  Documentation/building/debugging-instructions.md                             2
-rw-r--r--  Documentation/building/linux-instructions.md                                 4
-rw-r--r--  Documentation/building/testing-with-corefx.md                                4
-rw-r--r--  Documentation/building/windows-instructions.md                              65
-rw-r--r--  Documentation/coding-guidelines/EventLogging.md                              2
-rw-r--r--  Documentation/coding-guidelines/cross-platform-performance-and-eventing.md   2
-rw-r--r--  Documentation/design-docs/assemblyloadcontext.md                            86
-rw-r--r--  Documentation/design-docs/eh-writethru.md                                  378
-rw-r--r--  Documentation/design-docs/jit-call-morphing.md                             157
-rw-r--r--  Documentation/design-docs/jump-stubs.md                                    518
-rw-r--r--  Documentation/design-docs/lsra-throughput.md                                74
-rw-r--r--  Documentation/project-docs/adding_new_public_apis.md                         1
-rw-r--r--  Documentation/project-docs/ci-trigger-phrases.md                            11
-rw-r--r--  Documentation/project-docs/contributing-workflow.md                          2
-rw-r--r--  Documentation/project-docs/contributing.md                                   2
-rw-r--r--  Documentation/project-docs/glossary.md                                      67
-rw-r--r--  Documentation/project-docs/linux-performance-tracing.md                     20
-rw-r--r--  Documentation/project-docs/profiling-api-status.md                          26
-rw-r--r--  Documentation/workflow/IssuesFeedbackEngagement.md                           2
-rw-r--r--  Documentation/workflow/OfficalAndDailyBuilds.md                             26
-rw-r--r--  Documentation/workflow/RunningTests.md                                       2
-rw-r--r--  Documentation/workflow/UsingYourBuild.md                                   290
26 files changed, 1523 insertions, 238 deletions
diff --git a/Documentation/botr/README.md b/Documentation/botr/README.md
index db4ffc1..99e4274 100644
--- a/Documentation/botr/README.md
+++ b/Documentation/botr/README.md
@@ -1,5 +1,5 @@
-#The Book of the Runtime
+# The Book of the Runtime
Welcome to the Book of the Runtime (BOTR) for the .NET Runtime. This contains
a collection of articles about the non-trivial internals of the .NET Runtime. Its
diff --git a/Documentation/botr/clr-abi.md b/Documentation/botr/clr-abi.md
index 6719522..8a226ed 100644
--- a/Documentation/botr/clr-abi.md
+++ b/Documentation/botr/clr-abi.md
@@ -273,7 +273,11 @@ Note that JIT64 does not implement this properly. The C# compiler used to always
## The PSPSym and funclet parameters
-The name *PSPSym* stands for Previous Stack Pointer Symbol. It is how a funclet accesses locals from the main function body. This is not used for x86: the frame pointer on x86 is always preserved when the handlers are invoked.
+The *PSPSym* (which stands for Previous Stack Pointer Symbol) is a pointer-sized local variable used to access locals from the main function body.
+
+CoreRT does not use PSPSym. For filter funclets the VM sets the frame register to be the same as the parent function. For second pass funclets the VM restores all non-volatile registers. The same convention is used across all platforms.
+
+CoreCLR uses PSPSym for all platforms except x86: the frame pointer on x86 is always preserved when the handlers are invoked.
First, two definitions.
@@ -281,7 +285,7 @@ First, two definitions.
*Initial-SP* is the initial value of the stack pointer after the fixed-size portion of the frame has been allocated. That is, before any "alloca"-type allocations.
-The PSPSym is a pointer-sized local variable in the frame of the main function and of each funclet. The value stored in PSPSym is the value of Initial-SP for AMD64 or Caller-SP for other platforms, for the main function. The stack offset of the PSPSym is reported to the VM in the GC information header. The value reported in the GC information is the offset of the PSPSym from Initial-SP for AMD64 or Caller-SP for other platforms. (Note that both the value stored, and the way the value is reported to the VM, differs between architectures. In particular, note that most things in the GC information header are reported as offsets relative to Caller-SP, but PSPSym on AMD64 is one exception, and maybe the only exception.)
+The value stored in PSPSym is the value of Initial-SP for AMD64 or Caller-SP for other platforms, for the main function. The stack offset of the PSPSym is reported to the VM in the GC information header. The value reported in the GC information is the offset of the PSPSym from Initial-SP for AMD64 or Caller-SP for other platforms. (Note that both the value stored, and the way the value is reported to the VM, differs between architectures. In particular, note that most things in the GC information header are reported as offsets relative to Caller-SP, but PSPSym on AMD64 is one exception, and maybe the only exception.)
The VM uses the PSPSym to find other locals it cares about (such as the generics context in a funclet frame). The JIT uses it to re-establish the frame pointer register, so that the frame pointer is the same value in a funclet as it is in the main function body.
@@ -293,8 +297,6 @@ On ARM and ARM64, for all second pass funclets (finally, fault, catch, and filte
Catch, Filter, and Filter-handlers also get an Exception object (GC ref) as an argument (`REG_EXCEPTION_OBJECT`). On AMD64 it is the second argument and thus passed in RDX. On ARM and ARM64 this is the first argument and passed in R0.
-CoreRT does not use PSPSym. For filter funclets the VM sets the frame register to be the same as the parent function. For second pass funclets the VM restores all non-volatile registers. The same convention is used across all platforms.
-
(Note that the JIT64 source code contains a comment that says, "The current CLR doesn't always pass the correct establisher frame to the funclet. Funclet may receive establisher frame of funclet when expecting that of original routine." It indicates this is the reason that a PSPSym is required in all funclets as well as the main function, whereas if the establisher frame was correctly reported, the PSPSym could be omitted in some cases.)
## Funclet Return Values
diff --git a/Documentation/botr/garbage-collection.md b/Documentation/botr/garbage-collection.md
index 789c4e9..c7f4741 100644
--- a/Documentation/botr/garbage-collection.md
+++ b/Documentation/botr/garbage-collection.md
@@ -189,8 +189,8 @@ Code Flow
Terms:
-- **WKS GC:** Workstation GC.
-- **SRV GC:** Server GC
+- **WKS GC:** Workstation GC
+- **SVR GC:** Server GC
Functional Behavior
-------------------
diff --git a/Documentation/botr/ryujit-overview.md b/Documentation/botr/ryujit-overview.md
index ee84a9a..ffbe350 100644
--- a/Documentation/botr/ryujit-overview.md
+++ b/Documentation/botr/ryujit-overview.md
@@ -22,7 +22,7 @@ RyuJIT provides the just in time compilation service for the .NET runtime. The r
* `compileMethod` is the main entry point for the JIT. The EE passes it an `ICorJitInfo` object, and the “info” containing the IL, the method header, and various other useful tidbits. It returns a pointer to the code, its size, and additional GC, EH and (optionally) debug info.
* `getVersionIdentifier` is the mechanism by which the JIT/EE interface is versioned. There is a single GUID (manually generated) which the JIT and EE must agree on.
* `getMaxIntrinsicSIMDVectorLength` communicates to the EE the largest SIMD vector length that the JIT can support.
-* `ICorJitInfo` – this is the interface that the EE implements. It has many methods defined on it that allow the JIT to look up metadata tokens, traverse type signatures, compute field and vtable offsets, find method entry points, construct string literals, etc. This bulk of this interface is inherited from `ICorJitDynamicInfo` which is defined in [src/inc/corinfo.h](https://github.com/dotnet/coreclr/blob/master/src/inc/corinfo.h). The implementation is defined in [src/vm/jitinterface.cpp](https://github.com/dotnet/coreclr/blob/master/src/vm/jitinterface.cpp).
+* `ICorJitInfo` – this is the interface that the EE implements. It has many methods defined on it that allow the JIT to look up metadata tokens, traverse type signatures, compute field and vtable offsets, find method entry points, construct string literals, etc. The bulk of this interface is inherited from `ICorDynamicInfo` which is defined in [src/inc/corinfo.h](https://github.com/dotnet/coreclr/blob/master/src/inc/corinfo.h). The implementation is defined in [src/vm/jitinterface.cpp](https://github.com/dotnet/coreclr/blob/master/src/vm/jitinterface.cpp).
# Internal Representation (IR)
@@ -236,7 +236,7 @@ Utilizes value numbers to propagate and transform based on properties such as no
Optimize array index range checks based on value numbers and assertions.
-## <a name=rationalization"/>Rationalization
+## <a name="rationalization"/>Rationalization
As the JIT has evolved, changes have been made to improve the ability to reason over the tree in both “tree order” and “linear order”. These changes have been termed the “rationalization” of the IR. In the spirit of reuse and evolution, some of the changes have been made only in the later (“backend”) components of the JIT. The corresponding transformations are made to the IR by a “Rationalizer” component. It is expected that over time some of these changes will migrate to an earlier place in the JIT phase order:
diff --git a/Documentation/building/debugging-instructions.md b/Documentation/building/debugging-instructions.md
index 4c33f9a..72f198b 100644
--- a/Documentation/building/debugging-instructions.md
+++ b/Documentation/building/debugging-instructions.md
@@ -89,7 +89,7 @@ This is the full list of commands currently supported by SOS. LLDB is case-sensi
HistObjFind (histobjfind)
HistClear (histclear)
-###Aliases###
+### Aliases ###
By default you can reach all the SOS commands by using: _sos [command\_name]_
However, the common commands have been aliased so that you don't need the SOS prefix:
diff --git a/Documentation/building/linux-instructions.md b/Documentation/building/linux-instructions.md
index ddd4274..b14ab28 100644
--- a/Documentation/building/linux-instructions.md
+++ b/Documentation/building/linux-instructions.md
@@ -47,7 +47,7 @@ ellismg@linux:~$ sudo apt-get install cmake llvm-3.5 clang-3.5 lldb-3.6 lldb-3.6
You now have all the required components.
-If you are using Fedora 23 or 24, then you will need to install the following packages:
+If you are using Fedora, then you will need to install the following packages:
`$ sudo dnf install llvm cmake clang lldb-devel libunwind-devel lttng-ust-devel libuuid-devel libicu-devel`
@@ -61,7 +61,7 @@ Set the maximum number of file-handles
To ensure that your system can allocate enough file-handles for the corefx build run `sysctl fs.file-max`. If it is less than 100000, add `fs.file-max = 100000` to `/etc/sysctl.conf`, and then run `sudo sysctl -p`.
-On Fedora 23 or 24:
+On Fedora:
`$ sudo dnf install mono-devel`
diff --git a/Documentation/building/testing-with-corefx.md b/Documentation/building/testing-with-corefx.md
index defc8f8..a400d14 100644
--- a/Documentation/building/testing-with-corefx.md
+++ b/Documentation/building/testing-with-corefx.md
@@ -5,6 +5,10 @@ It may be valuable to use CoreFX tests to validate your changes to CoreCLR or ms
**NOTE:** The `BUILDTOOLS_OVERRIDE_RUNTIME` property no longer works.
+To run CoreFX tests with an updated System.Private.CoreLib.dll, [use these instructions](https://github.com/dotnet/corefx/blob/master/Documentation/project-docs/developer-guide.md#testing-with-private-coreclr-bits).
+
+To build CoreFX against an updated System.Private.CoreLib.dll, instructions still need to be written.
+
**Replace runtime between build.[cmd|sh] and build-tests.[cmd|sh]**
Use the following instructions to test a change to the dotnet/coreclr repo using dotnet/corefx tests. Refer to the [CoreFx Developer Guide](https://github.com/dotnet/corefx/blob/master/Documentation/project-docs/developer-guide.md) for information about CoreFx build scripts.
diff --git a/Documentation/building/windows-instructions.md b/Documentation/building/windows-instructions.md
index 8ba021b..d0e6327 100644
--- a/Documentation/building/windows-instructions.md
+++ b/Documentation/building/windows-instructions.md
@@ -4,26 +4,58 @@ Build CoreCLR on Windows
These instructions will lead you through building CoreCLR.
----------------
-#Environment
+# Environment
You must install several components to build the CoreCLR and CoreFX repos. These instructions were tested on Windows 7+.
## Visual Studio
Visual Studio must be installed. Supported versions:
-- [Visual Studio 2015](https://www.visualstudio.com/downloads/visual-studio-2015-downloads-vs) (Community, Professional, Enterprise). The community version is completely free.
-
-To debug managed code, ensure you have installed at least [Visual Studio 2015 Update 3](https://www.visualstudio.com/en-us/news/releasenotes/vs2015-update3-vs).
-
-Make sure that you install "VC++ Tools". By default, they will not be installed.
-
-To build for Arm32, you need to have [Windows SDK for Windows 10](https://developer.microsoft.com/en-us/windows/downloads) installed.
+- [Visual Studio 2015](https://www.visualstudio.com/vs/older-downloads/) (Community, Professional, Enterprise). The community version is completely free.
+- [Visual Studio 2017](https://www.visualstudio.com/downloads/) (Community, Professional, Enterprise). The community version is completely free.
+
+For Visual Studio 2015:
+* To debug managed code, ensure you have installed at least [Visual Studio 2015 Update 3](https://www.visualstudio.com/en-us/news/releasenotes/vs2015-update3-vs).
+* Make sure that you install "VC++ Tools". By default, they will not be installed.
+* To build for Arm32, make sure that you have the Windows SDK for Windows 10 installed (or selected to be installed as part of the VS installation). To explicitly install the Windows SDK, download it from here: [Windows SDK for Windows 10](https://developer.microsoft.com/en-us/windows/downloads)
+
+For Visual Studio 2017:
+* When doing a 'Workloads' based install, the following are the minimum requirements:
+ * .NET Desktop Development
+ * All Required Components
+ * .NET Framework 4-4.6 Development Tools
+ * Desktop Development with C++
+ * All Required Components
+ * VC++ 2017 v141 Toolset (x86, x64)
+ * Windows 8.1 SDK and UCRT SDK
+ * VC++ 2015.3 v140 Toolset (x86, x64)
+* When doing an 'Individual Components' based install, the following are the minimum requirements:
+ * Under ".NET":
+ * .NET Framework 4.6 targeting pack
+ * .NET Portable Library targeting pack
+ * Under "Code tools":
+ * Static analysis tools
+ * Under "Compilers, build tools, and runtimes":
+ * C# and Visual Basic Roslyn Compilers
+ * MSBuild
+ * VC++ 2015.3 v140 toolset (x86, x64)
+ * VC++ 2017 v141 toolset (x86, x64)
+ * Windows Universal CRT SDK
+ * Under "Development activities":
+ * Visual Studio C++ core features
+ * Under "SDKs, libraries, and frameworks":
+ * Windows 10 SDK or Windows 8.1 SDK
+* To build for Arm32, make sure that you have the Windows 10 SDK installed (or selected to be installed as part of the VS installation). To explicitly install the Windows SDK, download it from here: [Windows SDK for Windows 10](https://developer.microsoft.com/en-us/windows/downloads)
+* **Important:** You must have the `msdia120.dll` COM Library registered in order to build the repository.
+ * This binary is registered by default when installing the "VC++ Tools" with Visual Studio 2015
+ * You can also manually register the binary by launching the "Developer Command Prompt for VS2017" with Administrative privileges and running `regsvr32.exe "%VSINSTALLDIR%\Common7\IDE\msdia120.dll"`
+* **Important:** By default, the build will attempt to use VS2015 as the toolset for the build. To build using VS2017 as your toolset, you must use the "Developer Command Prompt for VS2017".
Visual Studio Express is not supported.
-##CMake
+## CMake
-The CoreCLR repo build has been validated using CMake 3.5.2.
+The CoreCLR repo build has been validated using CMake 3.7.2.
- Install [CMake](http://www.cmake.org/download) for Windows.
- Add its location (e.g. C:\Program Files (x86)\CMake\bin) to the PATH environment variable.
@@ -31,7 +63,7 @@ The CoreCLR repo build has been validated using CMake 3.5.2.
following the instructions at [Adding to the Default PATH variable](#adding-to-the-default-path-variable)
-##Python
+## Python
Python is used in the build system. We are currently using python 2.7.9, although
any recent (2.4+) version of Python should work, including Python 3.
@@ -40,7 +72,7 @@ any recent (2.4+) version of Python should work, including Python 3.
The installation script has a check box to do this, but you can do it yourself after the fact
following the instructions at [Adding to the Default PATH variable](#adding-to-the-default-path-variable)
-##Git
+## Git
For everyday user operations, it is often more convenient to use the Git features built into Visual Studio 2015.
However, the CoreCLR build and the tests use the Git command line utilities directly, so you need to install them
@@ -51,20 +83,20 @@ for these to work properly. You can get it from
The installation script has a check box to do this, but you can do it yourself after the fact
following the instructions at [Adding to the Default PATH variable](#adding-to-the-default-path-variable)
-##PowerShell
+## PowerShell
PowerShell is used in the build system. Ensure that it is accessible via the PATH environment variable.
Typically this is %SYSTEMROOT%\System32\WindowsPowerShell\v1.0\.
PowerShell version must be 3.0 or higher. This should be the case for Windows 8 and later builds.
- On Windows 7 SP1, PowerShell version 4 can be installed from [here](https://www.microsoft.com/en-us/download/details.aspx?id=40855).
-##DotNet Core SDK
+## DotNet Core SDK
While not strictly needed to build or test the .NET Core repository, having the .NET Core SDK installed lets
you use the dotnet.exe command to run .NET Core applications in the 'normal' way. We use this in the
[Using Your Build](Documentation/workflow/UsingYourBuild.md) instructions. Visual Studio 2015 (update 3) should have
installed the .NET Core SDK, but in case it did not you can get it from the [Installing the .Net Core SDK](https://www.microsoft.com/net/core) page.
-##Adding to the default PATH variable
+## Adding to the default PATH variable
The commands above need to be on your command lookup path. Some installers will automatically add them to
the path as part of installation, but if not here is how you can do it.
@@ -79,7 +111,7 @@ and select the 'Path' variable in the 'System variables' (if you want to change
to change it for the current user). Simply edit the PATH variable's value and add the directory (with a semicolon separator).
-------------------------------------
-#Building
+# Building
Once all the necessary tools are in place, building is trivial. Simply run the build.cmd script that lives at
the base of the repository.
@@ -117,4 +149,3 @@ Build has a number of options that you can learn about using build -?. Some of
See [Using Your Build](../workflow/UsingYourBuild.md) for instructions on running code with your build.
See [Running Tests](../workflow/RunningTests.md) for instructions on running the tests.
-
diff --git a/Documentation/coding-guidelines/EventLogging.md b/Documentation/coding-guidelines/EventLogging.md
index a53d6e9..8ba84d7 100644
--- a/Documentation/coding-guidelines/EventLogging.md
+++ b/Documentation/coding-guidelines/EventLogging.md
@@ -1,6 +1,6 @@
# CoreClr Event Logging Design
-##Introduction
+## Introduction
Event Logging is a mechanism by which CoreClr can provide a variety of information on its state. This logging works by the developer inserting explicit logging calls within the VM. The Event Logging mechanism is largely based on [ETW - Event Tracing for Windows](https://msdn.microsoft.com/en-us/library/windows/desktop/bb968803(v=vs.85).aspx)
diff --git a/Documentation/coding-guidelines/cross-platform-performance-and-eventing.md b/Documentation/coding-guidelines/cross-platform-performance-and-eventing.md
index f332724..37a3135 100644
--- a/Documentation/coding-guidelines/cross-platform-performance-and-eventing.md
+++ b/Documentation/coding-guidelines/cross-platform-performance-and-eventing.md
@@ -1,6 +1,6 @@
# .NET Cross-Plat Performance and Eventing Design
-##Introduction
+## Introduction
As we bring up CoreCLR on the Linux and OS X platforms, it’s important that we determine how we’ll measure and analyze performance on these platforms. On Windows we use an event based model that depends on ETW, and we have a good amount of tooling that builds on this approach. Ideally, we can extend this model to Linux and OS X and re-use much of the Windows tooling.
diff --git a/Documentation/design-docs/assemblyloadcontext.md b/Documentation/design-docs/assemblyloadcontext.md
new file mode 100644
index 0000000..6da4307
--- /dev/null
+++ b/Documentation/design-docs/assemblyloadcontext.md
@@ -0,0 +1,86 @@
+**LoadContext** can be viewed as a container for assemblies, their code and data (e.g. statics). Whenever an assembly is loaded, it is loaded within a load context - independent of whether the load was triggered explicitly (e.g. via *Assembly.Load*), implicitly (e.g. resolving static assembly references from the manifest) or dynamically (by emitting code on the fly).
+
+This concept is not new to .NET Core; it has existed since the days of the .NET Framework (see [this](https://blogs.msdn.microsoft.com/suzcook/2003/05/29/choosing-a-binding-context/) for details), where it operated behind the scenes and was not exposed for developers to interact with, aside from your assembly being loaded into one based upon the API used to perform the load.
+
+In .NET Core, we have exposed a [managed API surface](https://github.com/dotnet/corefx/blob/master/src/System.Runtime.Loader/ref/System.Runtime.Loader.cs) that developers can use to interact with it - to inspect loaded assemblies or create their own **LoadContext** instance. Here are some of the scenarios that motivated this work:
+
+* Ability to load multiple versions of the same assembly within a given process (e.g. for plugin frameworks)
+* Ability to load assemblies explicitly in a context isolated from that of the application.
+* Ability to override assemblies being resolved from the application context.
+* Ability to have isolation of statics (as they are tied to the **LoadContext**)
+* Expose LoadContext as a first-class concept for developers to interface with, rather than a piece of magic.
+
+## Types of LoadContext
+### Default LoadContext
+
+Every .NET Core app has a **LoadContext** instance created during .NET Core Runtime startup that we will refer to as the *Default LoadContext*. All application assemblies (including their transitive closure) are loaded within this **LoadContext** instance.
+
+### Custom LoadContext
+For scenarios that wish to have isolation between loaded assemblies, applications can create their own **LoadContext** instance by deriving from **System.Runtime.Loader.AssemblyLoadContext** type and loading the assemblies within that instance.
+
+Multiple assemblies with the same simple name cannot be loaded into a single load context (*Default* or *Custom*). Also, .NET Core ignores the strong name token during the assembly binding process.
+
+## How Load is attempted
+
+### Basics
+If an assembly *A1* triggers the load of an assembly *C1*, the latter's load is attempted within the **LoadContext** instance of the former (which is also known as the *RequestingAssembly* or *ParentAssembly*).
+
+Dynamically generated assemblies add a slight twist since they do not have a *ParentAssembly/RequestingAssembly* per se. Thus, they are associated with the load context of their *Creator Assembly*, and any subsequent loads (static or dynamic) will use that load context.
+
+### Resolution Process
+If the assembly was already present in *A1's* context, either because we had successfully loaded it earlier, or because we failed to load it for some reason, we return the corresponding status (and assembly reference for the success case).
+
+However, if *C1* was not found in *A1's* context, the *Load* method override in *A1's* context is invoked.
+
+* For *Custom LoadContext*, this override is an opportunity to load an assembly **before** the fallback (see below) to *Default LoadContext* is attempted to resolve the load.
+
+* For *Default LoadContext*, this override always returns *null* since *Default Context* cannot override itself.
+
+If the *Load* method override does not resolve the load, a fallback to the *Default LoadContext* is attempted, in case the assembly was already loaded there. If the operating context is the *Default LoadContext* itself, no fallback is attempted since there is nothing to fall back to.
+
+If the *Default LoadContext* fallback also did not resolve the load (or was not applicable), the *Resolving* event is invoked against *A1's* load context. This is the last opportunity to attempt to resolve the assembly load. If there are no subscribers for this event, or none of them resolved the load, a *FileNotFoundException* is thrown.
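The resolution order described above can be sketched with a custom context. This is a minimal illustration, not code from the runtime; `PluginLoadContext`, the plugin directory, and the probing scheme are all hypothetical:

```csharp
using System;
using System.IO;
using System.Reflection;
using System.Runtime.Loader;

// Hypothetical plugin context: tries a plugin directory first; returning
// null defers to the Default LoadContext fallback, and after that the
// Resolving event is the last chance before FileNotFoundException.
class PluginLoadContext : AssemblyLoadContext
{
    private readonly string _pluginDir;

    public PluginLoadContext(string pluginDir)
    {
        _pluginDir = pluginDir;
    }

    // Invoked before the fallback to the Default LoadContext is attempted.
    protected override Assembly Load(AssemblyName assemblyName)
    {
        string candidate = Path.Combine(_pluginDir, assemblyName.Name + ".dll");
        if (File.Exists(candidate))
            return LoadFromAssemblyPath(candidate);

        return null; // defer to the Default LoadContext fallback
    }
}
```

A caller can additionally subscribe to the context's *Resolving* event as the last-chance hook described above.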
+
+## PInvoke Resolution
+
+A *Custom LoadContext* can override the **AssemblyLoadContext.LoadUnmanagedDll** method to intercept PInvokes from within the **LoadContext** instance so that they can be resolved from custom binaries. If the method is not overridden, or its resolution fails, the default PInvoke mechanism is used as a fallback.
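As a sketch of that interception point (illustrative only; `NativeDep` and the library path are made-up names, not a real dependency):

```csharp
using System;
using System.Reflection;
using System.Runtime.Loader;

// Hypothetical context that redirects one PInvoke target to a bundled binary.
class NativeRedirectContext : AssemblyLoadContext
{
    protected override Assembly Load(AssemblyName assemblyName)
    {
        return null; // let managed loads fall back to the Default LoadContext
    }

    protected override IntPtr LoadUnmanagedDll(string unmanagedDllName)
    {
        if (unmanagedDllName == "NativeDep")
            return LoadUnmanagedDllFromPath("/opt/myapp/libNativeDep.so");

        return IntPtr.Zero; // fall back to the default PInvoke mechanism
    }
}
```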
+
+## Constraints
+
+* **System.Private.CoreLib.dll** is only loaded once, and into the **Default LoadContext**, during .NET Core Runtime startup, as it is a logical extension of the runtime. It cannot be loaded into a **Custom LoadContext**.
+* Currently, a custom **LoadContext** cannot be unloaded once created. This is a feature we are looking into for a future release.
+* If an attempt is made to load a [Ready-To-Run (R2R)](https://github.com/dotnet/coreclr/blob/master/Documentation/botr/readytorun-overview.md) image from the same location in multiple load contexts, then precompiled code can only be used from the first image that got loaded; subsequent images will have their code JITted. This happens because loading subsequent binaries from the same location results in the OS mapping them to the same memory as the first, which could corrupt the internal state information required to use the precompiled code.
+
+## Tests
+
+Tests are present [here](https://github.com/dotnet/corefx/tree/master/src/System.Runtime.Loader).
+
+## API Surface
+
+Most of the **AssemblyLoadContext** [API surface](https://github.com/dotnet/corefx/blob/master/src/System.Runtime.Loader/ref/System.Runtime.Loader.cs) is self-explanatory. Key APIs/Properties, though, are described below:
+
+### Default
+
+This property will return a reference to the *Default LoadContext*.
+
+### Load
+
+This method should be overridden in a *Custom LoadContext* if the intent is to override the assembly resolution that would otherwise be done during the fallback to the *Default LoadContext*.
+
+### LoadFromAssemblyName
+
+This method can be used to load an assembly into a load context different from the load context of the currently executing assembly.
+
+### Resolving
+
+This event is raised to give a *LoadContext* instance a last opportunity to resolve a requested assembly that has been resolved neither by the **Load** method nor by the fallback to the **Default LoadContext**.
+
+## Assembly Load APIs and LoadContext
+
+As part of .NET Standard 2.0 effort, certain assembly load APIs off the **Assembly** type, which were present in Desktop .NET Framework, have been brought back. The following maps the APIs to the load context in which they will load the assembly:
+
+* Assembly.Load - loads the assembly into the context of the assembly that triggers the load.
+* Assembly.LoadFrom - loads the assembly into the *Default LoadContext*.
+* Assembly.LoadFile - creates a new (anonymous) load context to load the assembly into.
+* Assembly.Load(byte[]) - creates a new (anonymous) load context to load the assembly into.
+
+If you need to influence the load process or the load context in which assemblies are loaded, please look at the various Load* APIs exposed by the **AssemblyLoadContext** [API surface](https://github.com/dotnet/corefx/blob/master/src/System.Runtime.Loader/ref/System.Runtime.Loader.cs).
\ No newline at end of file
diff --git a/Documentation/design-docs/eh-writethru.md b/Documentation/design-docs/eh-writethru.md
new file mode 100644
index 0000000..0afa5a7
--- /dev/null
+++ b/Documentation/design-docs/eh-writethru.md
@@ -0,0 +1,378 @@
+# Exception Handling Write Through Optimization.
+
+Write through is an optimization performed on local variables that live across exception handling flow, such as a handler, filter, or finally, so that they can be enregistered (treated as register candidates) throughout a method. For each variable live across one of these constructs, the minimum requirement is a store to the variable's stack location between a reaching definition and any point of control flow leading to the handler, as well as a load between any return from a filter or finally and an upward-exposed use. Conceptually, this maintains the value of the variable on the stack across the exceptional flow, which would otherwise kill any live registers. The transformation splits a local variable into multiple enregisterable compiler temporaries backed by the local variable on the stack. For local vars that additionally have appearances within an EH construct, a load from the stack local is inserted into a temp that can be enregistered within the handler.
+
+## Motivation
+
+Historically the JIT has not done this transformation because exception handling was rare and the transformation was not worth the compile time. Additionally, it was easy to recommend that users remove EH from performance-critical methods, since they controlled where the EH appeared. Neither of these points remains true as we increase our focus on cloud workloads. Non-blocking async calls are common in performance-critical paths for these workloads, and async injects exception handling constructs to implement the feature. This, combined with the long-standing use of EH in 'foreach' and 'using' statements, means we are seeing EH constructs that are difficult for the user to manage or remove high in the profile (TechEmpower on Kestrel is a good example). Given these cloud workloads, doing the transformation is a clear benefit.
+
+## Design
+
+The goal of the design is to preserve the constraints listed above, i.e. preserve a correct value on the stack for any local var that crosses an EH edge in the flow graph. To ensure that the broad set of global optimizations can act on the IR shape produced by this transformation, and that phase-ordering issues do not block enregistration opportunities, the write-through phase is staged just prior to SSA build, after morph. It does a full walk of the IR, rewriting appearances to proxies and inserting reloads at the appropriate blocks in the flow graph as indicated by EH control flow semantics. To preserve the needed values on the stack, a store is also inserted after every definition to copy the new value in the proxy back to the stack location. This leaves a non-optimal (too large) number of stores, with the strategy that the more expensive analysis to eliminate or better place stores will be staged as a global optimization in a higher compilation tier.
+
+### Throughput
+
+To identify local vars that cross EH boundaries, global liveness is necessary, and that analysis comes at a significant cost. To mitigate this, the write-through phase is staged immediately before SSA build for the global optimizer. Since the typical case is that there is no EH, the liveness analysis in write-through can be reused directly by SSA build. Where EH local vars are present, liveness today must be rebuilt for SSA since new local vars have been added, but an incremental update to the RyuJIT liveness analysis (worklist-based live analysis) can be implemented to improve throughput. Additionally, the write-through transformation does a full IR walk, also expensive, to replace EH local var appearances with proxies and to insert transfers to and from the stack for EH flow. Given this, initial implementations may need to be staged as part of AOT (crossgen) compiles until tiering can move the more expensive analysis out of the startup path.
+
+### Algorithm
+On the IR directly before SSA build:
+- Run global liveness to identify local vars that cross EH boundaries (as a byproduct, these local vars are marked "do not enregister")
+- For each EH local var, create a new local var "proxy" that can be enregistered.
+- Iterate over each block in the flow graph, doing the following:
+  * For each tree in the block, do a post-order traversal and:
+    - Replace all appearances of EH local vars with their proxies
+    - Insert a copy of each proxy definition back to the EH local var (on the stack)
+  * If the block is an EH handler entry, insert reloads from EH local vars to proxies at the block head
+  * If the block is a finally or filter exit, insert reloads from EH local vars to proxies at the successor block heads
+- For the method entry block, insert reloads from parameter EH local vars to proxies
+
+At the end, no proxy should be live across EH flow, and all value updates will be written back to the stack location.
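
The algorithm above can be sketched on a toy IR where statements are simple tuples; the function name, block dictionaries, and tuple shapes here are invented for illustration and are not RyuJIT data structures:

```python
# Toy sketch of the EH write-through rewrite; the IR, block, and statement
# shapes here are invented for illustration, not RyuJIT data structures.
def write_thru(blocks, eh_vars):
    proxies = {v: v + "_proxy" for v in eh_vars}  # one enregisterable proxy per EH var
    for b in blocks:
        out = []
        if b["is_handler_entry"]:
            # Reload each proxy from its stack location at the handler head.
            for v in eh_vars:
                out.append(("load", proxies[v], v))
        for op, v in b["stmts"]:
            if v in proxies:
                out.append((op, proxies[v]))
                if op == "def":
                    # Store the new value back to the stack after every definition.
                    out.append(("store", v, proxies[v]))
            else:
                out.append((op, v))
        b["stmts"] = out
    return blocks
```

In this model a `def` of an EH local becomes a def of its proxy followed by a store back to the stack slot, and a handler entry block begins with reloads, matching the per-block steps listed above.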
+
+## Next steps
+
+The initial prototype that produced the example below is being improved to make it production ready. At the same time, a more extensive suite of example tests is being developed.
+
+- [X] Proof of concept prototype.
+- [ ] Production implementation of WriteThru phase.
+- [ ] Suite of optimization examples/regression tests.
+- [ ] Testing
+ * [ ] Full CI test pass.
+ * [ ] JIT benchmark diffs.
+ * [ ] Kestrel techempower numbers.
+
+## Example
+
+The following simple example shows enregistration for a local var that is live, and modified, through a catch.
+
+#### Source code snippet
+
+```
+class Enreg01
+{
+ int val;
+ double dist;
+
+ public Enreg01(int x) {
+ val = x;
+ dist = (double)x;
+ }
+
+ [MethodImpl(MethodImplOptions.NoInlining)]
+ public int foo(ref double d) { return (int)d; }
+
+ [MethodImpl(MethodImplOptions.NoInlining)]
+ public int Run()
+ {
+ int sum = val;
+
+ try {
+ TryValue(97);
+ }
+ catch (ValueException e)
+ {
+ Console.WriteLine("Catching {0}", Convert.ToString(e.x));
+ sum += val + e.x;
+ foo(ref dist);
+ sum += val;
+ }
+
+ return sum;
+ }
+
+ [MethodImpl(MethodImplOptions.NoInlining)]
+ public int TryValue(int y)
+ {
+ if (y == 97)
+ {
+ Console.WriteLine("Throwing 97");
+ throw new ValueException(97);
+ }
+ else
+ {
+ return y;
+ }
+ }
+}
+```
+#### Post WriteThru GenTree nodes for Run() method
+
+The Run() method contains the catch and is the only method that EH write-through modifies.
+
+```
+Creating enregisterable proxies:
+lvaGrabTemp returning 8 (V08 tmp5) (a long lifetime temp) called for Add proxy for EH Write Thru..
+Creating proxy V08 for local var V00
+
+lvaGrabTemp returning 9 (V09 tmp6) (a long lifetime temp) called for Add proxy for EH Write Thru..
+Creating proxy V09 for local var V01
+
+Trees after EH Write Thru
+
+---------------------------------------------------------------------------------------------------------------------------
+BBnum descAddr ref try hnd preds weight [IL range] [jump] [EH region] [flags]
+---------------------------------------------------------------------------------------------------------------------------
+BB01 [00000263A1C161B8] 1 1 [000..007) i label target
+BB02 [00000263A1C162D0] 1 0 BB01 1 [007..012) T0 try { } keep i try label gcsafe
+BB03 [00000263A1C16500] 2 BB02,BB04 1 [050..052) (return) i label target gcsafe
+++++ funclets follow
+BB04 [00000263A1C163E8] 0 0 0 [012..050)-> BB03 ( cret ) H0 F catch { } keep i rare label target gcsafe flet
+-------------------------------------------------------------------------------------------------------------------------------------
+
+------------ BB01 [000..007), preds={} succs={BB02}
+
+***** BB01, stmt 1
+ ( 3, 3) [000123] ------------ * stmtExpr void (IL ???... ???)
+N001 ( 3, 2) [000120] ------------ | /--* lclVar ref V00 this
+N003 ( 3, 3) [000122] -A------R--- \--* = ref
+N002 ( 1, 1) [000121] D------N---- \--* lclVar ref V08 tmp5
+
+***** BB01, stmt 2
+ ( 17, 13) [000005] ------------ * stmtExpr void (IL 0x000...0x006)
+N007 ( 3, 2) [000097] ------------ | /--* lclVar int V09 tmp6
+N009 ( 7, 5) [000098] -A------R--- | /--* = int
+N008 ( 3, 2) [000096] D------N---- | | \--* lclVar int V01 loc0
+N010 ( 17, 13) [000099] -A-XG------- \--* comma void
+N004 ( 6, 5) [000002] ---XG------- | /--* indir int
+N002 ( 1, 1) [000059] ------------ | | | /--* const long 16 field offset Fseq[val]
+N003 ( 4, 3) [000060] -------N---- | | \--* + byref
+N001 ( 3, 2) [000001] ------------ | | \--* lclVar ref V08 tmp5
+N006 ( 10, 8) [000004] -A-XG---R--- \--* = int
+N005 ( 3, 2) [000003] D------N---- \--* lclVar int V09 tmp6
+
+------------ BB02 [007..012), preds={BB01} succs={BB03}
+
+***** BB02, stmt 3
+ ( 16, 10) [000013] ------------ * stmtExpr void (IL 0x007...0x00F)
+N008 ( 16, 10) [000011] --C-G------- \--* call int Enreg01.TryIncrement
+N004 ( 1, 1) [000009] ------------ this in rcx +--* lclVar ref V08 tmp5
+N005 ( 1, 1) [000010] ------------ arg1 in rdx \--* const int 97
+
+------------ BB03 [050..052) (return), preds={BB02,BB04} succs={}
+
+***** BB03, stmt 4
+ ( 3, 3) [000119] ------------ * stmtExpr void (IL ???... ???)
+N001 ( 3, 2) [000116] ------------ | /--* lclVar int V01 loc0
+N003 ( 3, 3) [000118] -A------R--- \--* = int
+N002 ( 1, 1) [000117] D------N---- \--* lclVar int V09 tmp6
+
+***** BB03, stmt 5
+ ( 4, 3) [000017] ------------ * stmtExpr void (IL 0x050...0x051)
+N002 ( 4, 3) [000016] ------------ \--* return int
+N001 ( 3, 2) [000015] ------------ \--* lclVar int V09 tmp6
+
+------------ BB04 [012..050) -> BB03 (cret), preds={} succs={BB03}
+
+***** BB04, stmt 6
+ ( 5, 4) [000021] ------------ * stmtExpr void (IL 0x012...0x012)
+N001 ( 1, 1) [000007] -----O------ | /--* catchArg ref
+N003 ( 5, 4) [000020] -A---O--R--- \--* = ref
+N002 ( 3, 2) [000019] D------N---- \--* lclVar ref V03 tmp0
+
+***** BB04, stmt 7
+ ( 3, 3) [000111] ------------ * stmtExpr void (IL ???... ???)
+N001 ( 3, 2) [000108] ------------ | /--* lclVar ref V00 this
+N003 ( 3, 3) [000110] -A------R--- \--* = ref
+N002 ( 1, 1) [000109] D------N---- \--* lclVar ref V08 tmp5
+
+***** BB04, stmt 8
+ ( 3, 3) [000115] ------------ * stmtExpr void (IL ???... ???)
+N001 ( 3, 2) [000112] ------------ | /--* lclVar int V01 loc0
+N003 ( 3, 3) [000114] -A------R--- \--* = int
+N002 ( 1, 1) [000113] D------N---- \--* lclVar int V09 tmp6
+
+***** BB04, stmt 9
+ ( 59, 43) [000034] ------------ * stmtExpr void (IL 0x013...0x037)
+N021 ( 59, 43) [000031] --CXG------- \--* call void System.Console.WriteLine
+N002 ( 5, 12) [000066] ----G------- | /--* indir ref
+N001 ( 3, 10) [000065] ------------ | | \--* const(h) long 0xB3963070 "Catching {0}"
+N004 ( 9, 15) [000076] -A--G---R-L- arg0 SETUP +--* = ref
+N003 ( 3, 2) [000075] D------N---- | \--* lclVar ref V05 tmp2
+N012 ( 20, 14) [000029] --CXG------- | /--* call ref System.Convert.ToString
+N010 ( 6, 8) [000028] ---XG------- arg0 in rcx | | \--* indir int
+N008 ( 1, 4) [000067] ------------ | | | /--* const long 140 field offset Fseq[x]
+N009 ( 4, 6) [000068] -------N---- | | \--* + byref
+N007 ( 3, 2) [000027] ------------ | | \--* lclVar ref V03 tmp0
+N014 ( 24, 17) [000072] -ACXG---R-L- arg1 SETUP +--* = ref
+N013 ( 3, 2) [000071] D------N---- | \--* lclVar ref V04 tmp1
+N017 ( 3, 2) [000073] ------------ arg1 in rdx +--* lclVar ref V04 tmp1 (last use)
+N018 ( 3, 2) [000077] ------------ arg0 in rcx \--* lclVar ref V05 tmp2 (last use)
+
+***** BB04, stmt 10
+ ( 18, 19) [000044] ------------ * stmtExpr void (IL 0x028... ???)
+N014 ( 1, 1) [000101] ------------ | /--* lclVar int V09 tmp6
+N016 ( 5, 4) [000102] -A------R--- | /--* = int
+N015 ( 3, 2) [000100] D------N---- | | \--* lclVar int V01 loc0
+N017 ( 18, 19) [000103] -A-XG------- \--* comma void
+N010 ( 6, 8) [000039] ---XG------- | /--* indir int
+N008 ( 1, 4) [000081] ------------ | | | /--* const long 140 field offset Fseq[x]
+N009 ( 4, 6) [000082] -------N---- | | \--* + byref
+N007 ( 3, 2) [000038] ------------ | | \--* lclVar ref V03 tmp0 (last use)
+N011 ( 13, 15) [000041] ---XG------- | /--* + int
+N005 ( 4, 4) [000037] ---XG------- | | | /--* indir int
+N003 ( 1, 1) [000079] ------------ | | | | | /--* const long 16 field offset Fseq[val]
+N004 ( 2, 2) [000080] -------N---- | | | | \--* + byref
+N002 ( 1, 1) [000036] ------------ | | | | \--* lclVar ref V08 tmp5
+N006 ( 6, 6) [000040] ---XG------- | | \--* + int
+N001 ( 1, 1) [000035] ------------ | | \--* lclVar int V09 tmp6
+N013 ( 13, 15) [000043] -A-XG---R--- \--* = int
+N012 ( 1, 1) [000042] D------N---- \--* lclVar int V09 tmp6
+
+***** BB04, stmt 11
+ ( 20, 14) [000051] ------------ * stmtExpr void (IL 0x038...0x044)
+N013 ( 20, 14) [000049] --CXGO------ \--* call int Enreg01.foo
+N007 ( 1, 1) [000086] ------------ | /--* const long 8 field offset Fseq[dist]
+N008 ( 3, 3) [000087] ------------ | /--* + byref
+N006 ( 1, 1) [000085] ------------ | | \--* lclVar ref V08 tmp5
+N009 ( 5, 5) [000088] ---XGO-N---- arg1 in rdx +--* comma byref
+N005 ( 2, 2) [000084] ---X-O-N---- | \--* nullcheck byte
+N004 ( 1, 1) [000083] ------------ | \--* lclVar ref V08 tmp5
+N010 ( 1, 1) [000045] ------------ this in rcx \--* lclVar ref V08 tmp5
+
+***** BB04, stmt 12
+ ( 11, 10) [000058] ------------ * stmtExpr void (IL 0x045...0x04D)
+N009 ( 1, 1) [000105] ------------ | /--* lclVar int V09 tmp6
+N011 ( 5, 4) [000106] -A------R--- | /--* = int
+N010 ( 3, 2) [000104] D------N---- | | \--* lclVar int V01 loc0
+N012 ( 11, 10) [000107] -A-XG------- \--* comma void
+N005 ( 4, 4) [000054] ---XG------- | /--* indir int
+N003 ( 1, 1) [000094] ------------ | | | /--* const long 16 field offset Fseq[val]
+N004 ( 2, 2) [000095] -------N---- | | \--* + byref
+N002 ( 1, 1) [000053] ------------ | | \--* lclVar ref V08 tmp5
+N006 ( 6, 6) [000055] ---XG------- | /--* + int
+N001 ( 1, 1) [000052] ------------ | | \--* lclVar int V09 tmp6
+N008 ( 6, 6) [000057] -A-XG---R--- \--* = int
+N007 ( 1, 1) [000056] D------N---- \--* lclVar int V09 tmp6
+
+```
+
+#### Post register allocation and code generation code
+
+```diff
+--- base.asmdmp 2017-03-28 20:40:36.000000000 -0700
++++ wt.asmdmp 2017-03-28 20:41:11.000000000 -0700
+@@ -1,78 +1,85 @@
+ *************** After end code gen, before unwindEmit()
+-G_M16307_IG01: ; func=00, offs=000000H, size=0014H, gcVars=0000000000000000 {}, gcrefRegs=00000000 {}, byrefRegs=00000000 {}, gcvars, byref, nogc <-- Prolog IG
++G_M16307_IG01: ; func=00, offs=000000H, size=0017H, gcVars=0000000000000000 {}, gcrefRegs=00000000 {}, byrefRegs=00000000 {}, gcvars, byref, nogc <-- Prolog IG
+
+ push rbp
++push r14
+ push rdi
+ push rsi
++push rbx
+ sub rsp, 48
+-lea rbp, [rsp+40H]
+-mov qword ptr [V07 rbp-20H], rsp
++lea rbp, [rsp+50H]
++mov qword ptr [V07 rbp-30H], rsp
+ mov gword ptr [V00 rbp+10H], rcx
+
+-G_M16307_IG02: ; offs=000014H, size=000AH, gcVars=0000000000000001 {V00}, gcrefRegs=00000000 {}, byrefRegs=00000000 {}, gcvars, byref
++G_M16307_IG02: ; offs=000017H, size=000AH, gcVars=0000000000000001 {V00}, gcrefRegs=00000000 {}, byrefRegs=00000000 {}, gcvars, byref
+
+-mov rcx, gword ptr [V00 rbp+10H]
+-mov ecx, dword ptr [rcx+16]
+-mov dword ptr [V01 rbp-14H], ecx
++mov rsi, gword ptr [V00 rbp+10H]
++mov edi, dword ptr [rsi+16]
++mov dword ptr [V01 rbp-24H], edi
+
+-G_M16307_IG03: ; offs=00001EH, size=000FH, gcrefRegs=00000000 {}, byrefRegs=00000000 {}, byref
++G_M16307_IG03: ; offs=000021H, size=000EH, gcrefRegs=00000040 {rsi}, byrefRegs=00000000 {}, byref
+
+-mov rcx, gword ptr [V00 rbp+10H]
++mov rcx, rsi ; Elided reload in try region
+ mov edx, 97
+ call Enreg01:TryIncrement(int):int:this
+ nop
+
+-G_M16307_IG04: ; offs=00002DH, size=0003H, gcVars=0000000000000000 {}, gcrefRegs=00000000 {}, byrefRegs=00000000 {}, gcvars, byref
++G_M16307_IG04: ; offs=00002FH, size=0005H, gcVars=0000000000000000 {}, gcrefRegs=00000000 {}, byrefRegs=00000000 {}, gcvars, byref
+
+-mov eax, dword ptr [V01 rbp-14H]
++mov edi, dword ptr [V01 rbp-24H]
++mov eax, edi
+
+-G_M16307_IG05: ; offs=000030H, size=0008H, epilog, nogc, emitadd
++G_M16307_IG05: ; offs=000034H, size=000BH, epilog, nogc, emitadd
+
+-lea rsp, [rbp-10H]
++lea rsp, [rbp-20H]
++pop rbx
+ pop rsi
+ pop rdi
++pop r14
+ pop rbp
+ ret
+
+-G_M16307_IG06: ; func=01, offs=000038H, size=0014H, gcrefRegs=00000004 {rdx}, byrefRegs=00000000 {}, byref, funclet prolog, nogc
++G_M16307_IG06: ; func=01, offs=00003FH, size=0017H, gcrefRegs=00000004 {rdx}, byrefRegs=00000000 {}, byref, funclet prolog, nogc
+
+ push rbp
++push r14
+ push rdi
+ push rsi
++push rbx
+ sub rsp, 48
+ mov rbp, qword ptr [rcx+32]
+ mov qword ptr [rsp+20H], rbp
+-lea rbp, [rbp+40H]
++lea rbp, [rbp+50H]
+
+-G_M16307_IG07: ; offs=00004CH, size=005EH, gcVars=0000000000000001 {V00}, gcrefRegs=00000004 {rdx}, byrefRegs=00000000 {}, gcvars, byref, isz
++G_M16307_IG07: ; offs=000056H, size=0054H, gcVars=0000000000000001 {V00}, gcrefRegs=00000004 {rdx}, byrefRegs=00000000 {}, gcvars, byref, isz
+
+ mov rsi, rdx
+-mov rcx, 0x18A3C473070
+-mov rdi, gword ptr [rcx]
++mov rcx, gword ptr [V00 rbp+10H] ; Reload of proxy register
++mov rdi, rcx ; Missed peep
++mov ecx, dword ptr [V01 rbp-24H] ; Reload of proxy register
++mov ebx, ecx ; Missed peep
++mov rcx, 0x263B3963070
++mov r14, gword ptr [rcx] ; Missed addressing mode
+ mov ecx, dword ptr [rsi+140]
+ call System.Convert:ToString(int):ref
+ mov rdx, rax
+-mov rcx, rdi
++mov rcx, r14
+ call System.Console:WriteLine(ref,ref)
+-mov edx, dword ptr [V01 rbp-14H] ; Elided stack access
+-mov rcx, gword ptr [V00 rbp+10H] ; Elided stack access
+-add edx, dword ptr [rcx+16]
+-add edx, dword ptr [rsi+140]
+-mov dword ptr [V01 rbp-14H], edx ; Elided stack access
+-mov rdx, gword ptr [V00 rbp+10H] ; Elided stack access
+-add rdx, 8
+-mov rcx, gword ptr [V00 rbp+10H] ; Elided stack access
++add ebx, dword ptr [rdi+16]
++add ebx, dword ptr [rsi+140]
++lea rdx, bword ptr [rdi+8]
++mov rcx, rdi
+ call Enreg01:foo(byref):int:this
+-mov eax, dword ptr [V01 rbp-14H] ; Elided stack access
+-mov rdx, gword ptr [V00 rbp+10H] ; Elided stack access
+-add eax, dword ptr [rdx+16]
+-mov dword ptr [V01 rbp-14H], eax ; Elided stack access
++add ebx, dword ptr [rdi+16]
++mov dword ptr [V01 rbp-24H], ebx ; Store of proxy register
+ lea rax, G_M16307_IG04
+
+-G_M16307_IG08: ; offs=0000AAH, size=0008H, funclet epilog, nogc, emitadd
++G_M16307_IG08: ; offs=0000AAH, size=000BH, funclet epilog, nogc, emitadd
+
+ add rsp, 48
++pop rbx
+ pop rsi
+ pop rdi
++pop r14
+ pop rbp
+ ret
+
+```
+
+Summary of diff:
+replaced 6 loads and 2 stores with 2 loads, 1 store, 2 pushes, and 2 pops.
+
+
+
diff --git a/Documentation/design-docs/jit-call-morphing.md b/Documentation/design-docs/jit-call-morphing.md
new file mode 100644
index 0000000..49fd8bf
--- /dev/null
+++ b/Documentation/design-docs/jit-call-morphing.md
@@ -0,0 +1,157 @@
+Morphing of call nodes in RyuJIT
+=========================
+
+Overview
+--------
+
+In C# and IL, unlike C/C++, the evaluation order of the arguments for calls is strictly
+defined: left to right in C#, and the instruction ordering for IL.
+
+
+One issue that must be addressed is the problem of nested calls. Consider `Foo(x[i], Bar(y))`.
+We first must evaluate `x[i]` and possibly set up the first argument for `Foo()`. But immediately
+after that, we must set up `y` as the first argument of `Bar()`. Thus, when we evaluate `x[i]` we
+need to hold that value someplace while we set up and call `Bar()`. Arguments that contain an
+assignment are another issue that we need to address. Such cases are rare, except for
+post/pre-increment, as in `Foo(j, a[j++])`. Here `j` is updated via assignment
+when the second arg is evaluated, so the earlier uses of `j` need to be evaluated and
+saved in a new LclVar.
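
A small Python model of why the earlier use of `j` must be captured in a temp; `call_foo` simulates the strict left-to-right evaluation (the names are invented for illustration):

```python
# Simulates evaluating Foo(j, a[j++]) strictly left to right:
# the first argument's value is captured in a temp (t0) before the
# second argument's side effect updates j.
def call_foo(state, a):
    t0 = state["j"]        # temp holding the first arg's already-evaluated value
    arg1 = a[state["j"]]   # a[j] ...
    state["j"] += 1        # ... then the j++ side effect of the second arg
    return (t0, arg1, state["j"])
```

Without the temp `t0`, re-reading `j` after the increment would yield the wrong first argument.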
+
+One simple approach would be to create new single-definition, single-use LclVars for every argument
+that is passed. This would preserve the evaluation order. However, it would potentially create
+hundreds of LclVars for moderately sized methods, and that would overflow the limited number of
+tracked local variables in the JIT. One observation is that many arguments to methods are
+either constants or LclVars and can be set up anytime we want. They usually will not need a
+new LclVar to preserve the order-of-evaluation rule.
+
+Each argument is an arbitrary expression tree. The JIT tracks a summary of observable side-effects
+using a set of five bit flags in every GenTree node: `GTF_ASG`, `GTF_CALL`, `GTF_EXCEPT`, `GTF_GLOB_REF`,
+and `GTF_ORDER_SIDEEFF`. These flags are propagated up the tree so that the top node has a particular
+flag set if any of its child nodes has the flag set. Decisions about whether to evaluate arguments
+into temp LclVars are made by examining these flags on each of the arguments.
+
+
+*Our design goal for call sites is to create as few temp LclVars as possible, while preserving the
+order-of-evaluation rules of IL and C#.*
+
+
+Data Structures
+------------
+
+The most important data structure is the `GenTreeCall` node which represents a single
+call site and is created by the Importer when it sees a call in the IL. It is also
+used for internal calls that the JIT needs such as helper calls. Every `GT_CALL` node
+should have a `GTF_CALL` flag set on it. Nodes that may be implemented using a function
+call also should have the `GTF_CALL` flag set on them. The arguments for a single call
+site are held by three fields in the `GenTreeCall`: `gtCallObjp`, `gtCallArgs`, and
+`gtCallLateArgs`. The first one, `gtCallObjp`, contains the instance pointer ("this"
+pointer) when you are calling a method that takes an instance pointer, otherwise it is
+null. The `gtCallArgs` contains all of the normal arguments in a null terminated `GT_LIST`
+format. When the `GenTreeCall` is first created the `gtCallLateArgs` is null and is
+set up later when we call `fgMorphArgs()` during the global Morph of all nodes. To
+accurately record and track all of the information about call site arguments we create
+a `fgArgInfo` that records information and decisions that we make about how each argument
+for this call site is handled. It has a dynamically sized array member `argTable` that
+contains details about each argument. This per-argument information is contained in the
+`fgArgTabEntry` struct.
+
+
+`FEATURE_FIXED_OUT_ARGS`
+-----------------
+
+All architectures support passing a limited number of arguments in registers and the
+additional arguments must be passed on the stack. There are two distinctly different
+approaches for passing arguments on the stack. For the x86 architecture, a push
+instruction is typically used to pass a stack based argument. For all other architectures
+that we support we allocate the `lvaOutgoingArgSpaceVar`, which is a variable-sized
+stack based LclVar. Its size is determined by the call site that has the largest
+requirement for stack based arguments. The define `FEATURE_FIXED_OUT_ARGS` is 1 for
+architectures that use the outgoing arg space to pass their stack based arguments.
+There is only one outgoing argument space and any values stored there are considered
+to be killed by the very next call even if the next call doesn't take any stack based
+arguments. For x86, we can push some arguments on the stack for one call and leave
+them there while pushing some new arguments for a nested call. Thus we allow nested
+calls for x86 but do not allow them for the other architectures.
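
The sizing rule for `lvaOutgoingArgSpaceVar` can be sketched as a maximum over call sites; the 4 register arguments and 8-byte stack slots assumed here match the Windows x64 ABI, and the helper name is illustrative:

```python
# Sketch: the outgoing arg area is sized by the call site with the largest
# stack-based argument requirement (assumes 4 register args and 8-byte
# stack slots, as on Windows x64; helper name is illustrative).
def outgoing_arg_space_size(arg_counts, num_reg_args=4, slot_size=8):
    return max((max(0, n - num_reg_args) * slot_size for n in arg_counts),
               default=0)
```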
+
+
+Rules for when Arguments must be evaluated into temp LclVars
+-----------------
+
+During the first Morph phase known as global Morph we call `fgArgInfo::ArgsComplete()`
+after we have completed building the `argTable` for `fgArgInfo` struct. This method
+applies the following rules:
+
+1. When an argument is marked as containing an assignment using `GTF_ASG`, then we
+force all previous non-constant arguments to be evaluated into temps. This is very
+conservative, but at this phase of the JIT it is rare to have an assignment subtree
+as part of an argument.
+2. When an argument is marked as containing a call using the `GTF_CALL` flag, then
+we force that argument and any previous argument that is marked with any of the
+`GTF_ALL_EFFECT` flags into temps.
+ * Additionally, for `FEATURE_FIXED_OUT_ARGS`, any previous stack based args that
+ we haven't marked as needing a temp but still need to store in the outgoing args
+ area is marked as needing a placeholder temp using `needPlace`.
+3. We force any arguments that use `localloc` to be evaluated into temps.
+4. We mark any address-taken locals with the `GTF_GLOB_REF` flag. For two special
+cases we call `EvalToTmp()` and set up the temp in `fgMorphArgs`. `EvalToTmp`
+records the tmpNum used and sets `isTmp` so that we handle it like the other temps.
+The special cases are for `GT_MKREFANY` and for a `TYP_STRUCT` argument passed by
+value when we can't optimize away the extra copy.
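
A toy version of these marking rules, with the GenTree flags modeled as string sets (the dict shapes are invented for illustration, not JIT data structures):

```python
# Toy sketch of the ArgsComplete() temp-marking rules 1 and 2.
ALL_EFFECT = {"GTF_ASG", "GTF_CALL", "GTF_EXCEPT", "GTF_GLOB_REF",
              "GTF_ORDER_SIDEEFF"}

def mark_temps(args):
    # args: list of dicts with "flags" (set of flag names) and "is_const".
    need_tmp = [False] * len(args)
    for i, a in enumerate(args):
        if "GTF_ASG" in a["flags"]:
            # Rule 1: force all previous non-constant args into temps.
            for j in range(i):
                if not args[j]["is_const"]:
                    need_tmp[j] = True
        if "GTF_CALL" in a["flags"]:
            # Rule 2: force this arg, and any previous arg carrying any
            # effect flag, into temps.
            need_tmp[i] = True
            for j in range(i):
                if args[j]["flags"] & ALL_EFFECT:
                    need_tmp[j] = True
    return need_tmp
```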
+
+
+Rules use to determine the order of argument evaluation
+-----------------
+
+After calling `ArgsComplete()` the `SortArgs()` method is called to determine the
+optimal way to evaluate the arguments. This sorting controls the order that we place
+the nodes in the `gtCallLateArgs` list.
+
+1. We iterate over the arguments and move any constant arguments to be evaluated
+last and remove them from further consideration by marking them as processed.
+2. We iterate over the arguments and move any arguments that contain calls
+to be evaluated first and remove them from further consideration by marking
+them as processed.
+3. We iterate over the arguments and move arguments that must be evaluated into
+temp LclVars to be after the ones that contain calls.
+4. We iterate over the arguments and move arguments that are simple LclVar or
+LclFlds and put them before the constant args.
+5. If there are any remaining arguments, we evaluate them from the most complex
+to the least complex.
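
The resulting evaluation order can be sketched as a simple bucketing pass; the `kind` and `cost` fields are invented stand-ins for the real node inspection:

```python
# Toy sketch of the SortArgs() ordering: calls first, then args needing
# temps, then the remaining args from most to least complex, then simple
# LclVars, then constants last.
def sort_args(args):
    calls  = [a for a in args if a["kind"] == "call"]
    temps  = [a for a in args if a["kind"] == "needs_tmp"]
    lcls   = [a for a in args if a["kind"] == "lclvar"]
    consts = [a for a in args if a["kind"] == "const"]
    rest   = [a for a in args if a["kind"] == "other"]
    rest.sort(key=lambda a: a["cost"], reverse=True)  # most complex first
    return calls + temps + rest + lcls + consts
```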
+
+
+Evaluating Args into new LclVar temps and the creation of the LateArgs
+-----------------
+
+After calling `SortArgs()`, the `EvalArgsToTemps()` method is called to create
+the temp assignments and to populate the LateArgs list. Arguments that are
+marked with `needTmp == true` are handled as follows:
+
+1. We create an assignment using `gtNewTempAssign`. This assignment replaces
+the original argument in the `gtCallArgs` list. After we create the assignment
+the argument is marked as `isTmp`. The new assignment is marked with the
+`GTF_LATE_ARG` flag.
+2. Arguments that are already marked with `isTmp` are treated similarly as
+above except we don't create an assignment for them.
+3. A `TYP_STRUCT` argument passed by value will have `isTmp` set to true
+and will use a `GT_COPYBLK` or a `GT_COPYOBJ` to perform the assignment of the temp.
+4. The assignment node or the CopyBlock node is referred to as `arg1 SETUP` in the JitDump.
+
+
+Arguments that are marked with `needTmp == false`
+-----------------
+
+1. If this is an argument that is passed in a register, then the existing
+node is moved to the `gtCallLateArgs` list and a new `GT_ARGPLACE` (placeholder)
+node replaces it in the `gtArgList` list.
+2. Additionally, if `needPlace` is true (only for `FEATURE_FIXED_OUT_ARGS`)
+then the existing node is moved to the `gtCallLateArgs` list and a new
+`GT_ARGPLACE` (placeholder) node replaces it in the `gtArgList` list.
+3. Otherwise the argument is left in the `gtCallArgs` and it will be
+evaluated into the outgoing arg area or pushed on the stack.
+
+After the Call node is fully morphed the LateArgs list will contain the arguments
+passed in registers as well as additional ones for `needPlace` marked
+arguments whenever we have a nested call for a stack based argument.
+When `needTmp` is true the LateArg will be a LclVar that was created
+to evaluate the arg (single-def/single-use). When `needTmp` is false
+the LateArg can be an arbitrary expression tree.
diff --git a/Documentation/design-docs/jump-stubs.md b/Documentation/design-docs/jump-stubs.md
new file mode 100644
index 0000000..86bf0ac
--- /dev/null
+++ b/Documentation/design-docs/jump-stubs.md
@@ -0,0 +1,518 @@
+# Jump Stubs
+
+## Overview
+
+On 64-bit platforms (AMD64 (x64) and ARM64), we have a 64-bit address
+space. When the CLR formulates code and data addresses, it generally
+uses short (<64 bit) relative addresses, and attempts to pack all code
+and data relatively close together at runtime, to reduce code size. For
+example, on x64, the JIT generates 32-bit relative call instruction
+sequences, which can refer to a target address +/- 2GB from the source
+address, and which are 5 bytes in size: 1 byte for opcode and 4 bytes
+for a 32-bit IP-relative offset (called a rel32 offset). A call sequence
+with a full 64-bit target address requires 12 bytes, and in addition
+requires a register. Jumps have the same characteristics as calls: there
+are rel32 jumps as well.
+
+In case the short relative address is insufficient to address the target
+from the source address, we have two options: (1) for data, we must
+generate full 64-bit sized addresses, (2) for code, we insert a "jump
+stub", so the short relative call or jump targets a "jump stub" which
+then jumps directly to the target using a full 64-bit address (and
+trashes a register to load that address). Since calls are so common, and
+the need for full 64-bit call sequences so rare, using this design
+drastically improves code size. The need for jump stubs only arises when
+jumps of greater than 2GB range (on x64; 128MB on arm64) are required.
+This only happens when the amount of code in a process is very large,
+such that all the related code can't be packed tightly together, or the
+address space is otherwise tightly packed in the range where code is
+normally allocated, once again preventing code from being packed together.
+
+An important issue arises, though: these jump stubs themselves must be
+allocated within short relative range of the small call or jump
+instruction. If that can't be done, we encounter a fatal error
+condition, since we have no way for the already generated instruction to
+reach its intended target.
+
+ARM64 has a similar issue: it has a 28-bit relative branch that is the
+preferred branch instruction. The JIT always generates this instruction,
+and requires the VM to generate jump stubs when needed. However, the VM
+does not use this form in any of its stubs; it always uses large form
+branches. The remainder of this document will only describe the AMD64
+case.
+
+This document will describe the design and implementation of jump stubs,
+their various users, the design of their allocation, and how we can
+address the problem of failure to allocate required jump stubs (which in
+this document I call "mitigation"), for each case.
+
+## Jump stub creation and management
+
+A jump stub looks like this:
+```
+mov rax, <8-byte address>
+jmp rax
+```
+
+It is 12 bytes in size. Note that it trashes the RAX register. Since it
+is normally used to interpose on a call instruction, and RAX is a
+callee-trashed (volatile) register for amd64 (for both Windows and Linux
+/ System V ABI), this is not a problem. For calls with custom calling
+conventions, like profiler hooks, the VM is careful not to use jump
+stubs that might interfere with those conventions.
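
Assuming the encoding above, the stub bytes can be sketched directly (`jump_stub_bytes` is an illustrative helper, not a VM function): REX.W + B8 encodes `mov rax, imm64`, and FF E0 encodes `jmp rax`.

```python
# Sketch: encode the 12-byte AMD64 jump stub "mov rax, <target>; jmp rax".
# 48 B8 = REX.W + mov rax, imm64; FF E0 = jmp rax. Helper name is illustrative.
def jump_stub_bytes(target):
    return (bytes([0x48, 0xB8])
            + target.to_bytes(8, "little")   # 8-byte absolute target address
            + bytes([0xFF, 0xE0]))
```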
+
+Jump stub creation goes through the function `rel32UsingJumpStub()`. It
+takes the rel32 data address and the target address, computes the
+offset from the source to the target address, and returns this offset.
+Note that the source, or "base", address is the address of the rel32
+data plus 4 bytes, which it assumes due to the rules of the x86/x64
+instruction set which state that the "base" address for computing a
+branch offset is the instruction pointer value, or address, of the
+following instruction, which is the rel32 address plus 4.
+
+If the offset doesn't fit, it computes the allowed address range (e.g.,
+[low ... high]) where a jump stub must be located to create a legal
+rel32 offset, and calls `ExecutionManager::jumpStub()` to create or find
+an appropriate jump stub.
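
The fit check and the resulting allowed jump stub range can be sketched as follows (illustrative helpers, not the actual VM functions):

```python
# Sketch of the rel32 fit check: the "base" is the address just past the
# 4-byte rel32 field, and the offset must fit in a signed 32-bit value.
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def fits_in_rel32(rel32_addr, target):
    base = rel32_addr + 4
    return INT32_MIN <= target - base <= INT32_MAX

def jump_stub_range(rel32_addr):
    # [low ... high] address range where a jump stub could live and still
    # be reachable from this rel32 site.
    base = rel32_addr + 4
    return (base + INT32_MIN, base + INT32_MAX)
```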
+
+Jump stubs are allocated in the loader heap associated with a particular
+use: either the `LoaderCodeHeap` for normal code, or the `HostCodeHeap`
+for DynamicMethod / LCG functions. Dynamic methods cannot share jump
+stubs, to support unloading individual methods and reclaiming their
+memory. For normal code, jump stubs are reused. In fact, we maintain a
+hash table mapping from jump stub target to the jump stub itself, and
+look up in this table to find a jump stub to reuse.
+
+In case there is no space left for a jump stub in any existing code heap
+in the correct range, we attempt to create a new code heap in the
+range required by the new jump stub, using the function
+`ClrVirtualAllocWithinRange()`. This function walks the acceptable address
+space range, using OS virtual memory query/allocation APIs, to find and
+allocate a new block of memory in the acceptable range. If this function
+can't find and allocate space in the required range, we have, on AMD64,
+one more fallback: if an emergency jump stub reserve was created using
+the `COMPlus_NGenReserveForjumpStubs` configuration (see below), we
+attempt to find an appropriate, in range, allocation from that emergency
+pool. If all attempts fail to create an allocation in the appropriate
+range, we encounter a fatal error (and tear down the process), with a
+distinguished "out of memory within range" message (using the
+`ThrowOutOfMemoryWithinRange()` function).
+
+## Jump stub allocation failure mitigation
+
+Several strategies have already been created to attempt to lessen the
+occurrence of jump stub allocation failure. The following CLR
+configuration variables are relevant (these can be set in the registry
+as well as the environment, as usual):
+
+* `COMPlus_CodeHeapReserveForJumpStubs`. This value specifies a percentage
+of every code heap to reserve for jump stubs. When a non-jump stub
+allocation in the code heap would eat into the reserved percentage, a
+new code heap is allocated instead, leaving some buffer in the existing
+code heap. The default value is 2.
+* `COMPlus_NGenReserveForjumpStubs`. This value, when non-zero, creates an
+"emergency jump stub reserve". For each NGEN image loaded, an emergency
+jump stub reserve space is calculated by multiplying this number, as a
+percentage, against the loaded native image size. This amount of space
+is allocated, within rel32 range of the NGEN image. The allocation
+granularity for these emergency code heaps exceeds the specific
+requirement, but multiple NGEN images can share the same jump stub
+emergency space heap if it is in range. If an emergency jump stub space
+can't be allocated, the failure is ignored (hopefully in this case any
+required jump stub will be able to be allocated somewhere else). When
+looking to allocate jump stubs, the normal mechanisms for finding jump
+stub space are followed, and only if they fail to find appropriate space
+are the emergency jump stub reserve heaps tried. The default value is
+zero.
+* `COMPlus_BreakOnOutOfMemoryWithinRange`. When set to 1, this breaks into
+the debugger when the specific jump stub allocation failure condition
+occurs.
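+
+For illustration, the effect of the two percentage knobs can be sketched
+as follows (the helper names and integer rounding here are assumptions
+for exposition, not the VM's actual code):
+
+```cpp
+#include <cassert>
+#include <cstddef>
+
+// Illustrative sketch (not the VM's actual code) of how the two
+// percentage-based knobs translate into reserved byte counts.
+
+// COMPlus_CodeHeapReserveForJumpStubs: percentage of each code heap
+// kept free for jump stubs (default 2).
+size_t CodeHeapJumpStubReserve(size_t codeHeapSize, size_t percent = 2)
+{
+    return (codeHeapSize * percent) / 100;
+}
+
+// COMPlus_NGenReserveForjumpStubs: percentage of each loaded native
+// image's size set aside as an emergency jump stub reserve (default 0,
+// i.e., disabled).
+size_t NGenEmergencyReserve(size_t nativeImageSize, size_t percent)
+{
+    return (nativeImageSize * percent) / 100;
+}
+```
+
+For example, with the default value of 2, a 64KB code heap keeps about
+1310 bytes free for jump stubs.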
+
+The `COMPlus_NGenReserveForjumpStubs` mitigation is described publicly
+here:
+https://support.microsoft.com/en-us/help/3152158/out-of-memory-exception-in-a-managed-application-that-s-running-on-the-64-bit-.net-framework.
+(It also mentions, in passing, `COMPlus_CodeHeapReserveForJumpStubs`, but
+only to say not to use it.)
+
+## Jump stubs and the JIT
+
+As the JIT generates code on AMD64, it starts by generating all data and
+code addresses as rel32 IP-relative offsets. At the end of code
+generation, the JIT determines how much code will be generated, and
+requests buffers from the VM to hold the generated artifacts: a buffer
+for the "hot" code, a buffer for the "cold" code (only used in the case
+of hot/cold splitting during NGEN), and a buffer for the read-only data
+(see `ICorJitInfo::allocMem()`). The VM finds allocation space in either
+existing code heaps, or in newly created code heaps, to satisfy this
+request. It is only at this point that the actual addresses where the
+generated code will live are known. Note that the JIT has finalized the
+exact generated code sequences in the function before calling
+`allocMem()`. Then, the JIT issues (or "emits") the generated instruction
+bytes into the provided buffers, as well as telling the VM about
+exception handling ranges, GC information, and debug information.
+When the JIT emits an instruction that includes a rel32 offset (as well
+as for other cases of global pointer references), it calls the VM
+function `ICorJitInfo::recordRelocation()` to tell the VM the address of
+the rel32 data and the intended target address of the rel32 offset. How
+this is handled in the VM depends on whether we are JIT-compiling, or
+compiling for NGEN.
+
+For JIT compilation, the function `CEEJitInfo::recordRelocation()`
+determines the actual rel32 value to use, and fills in the rel32 data in
+the generated code buffer. However, what if the offset doesn't fit in a
+32-bit rel32 space?
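+
+The reachability question itself is simple arithmetic: the rel32 field
+holds a signed 32-bit displacement relative to the end of the 4-byte
+field. A minimal sketch of the check (illustrative, not the VM's actual
+code):
+
+```cpp
+#include <cassert>
+#include <cstdint>
+
+// Sketch of the rel32 reachability check the VM must perform when
+// filling in a relocation. The displacement is relative to the end of
+// the 4-byte rel32 field (i.e., the next instruction's address).
+bool FitsInRel32(uint64_t fixupAddr /* address of the rel32 field */,
+                 uint64_t target)
+{
+    int64_t delta = (int64_t)target - (int64_t)(fixupAddr + 4);
+    return delta >= INT32_MIN && delta <= INT32_MAX;
+}
+```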
+
+Up to this point, the VM has allowed the JIT to always generate rel32
+addresses. The JIT determines whether this is allowed by calling
+`ICorJitInfo::getRelocTypeHint()`: if this function returns
+`IMAGE_REL_BASED_REL32`, the JIT generates a rel32 address. The first
+time in the lifetime of the process that `recordRelocation()` fails to
+compute an offset that fits in a rel32 space, the VM aborts the
+compilation, and restarts it in a mode where
+`ICorJitInfo::getRelocTypeHint()` never returns `IMAGE_REL_BASED_REL32`.
+That is, the VM never allows the JIT to generate rel32 addresses. This
+is "rel32 overflow" mode. However, this restriction only applies to data
+addresses. The JIT will then load up full 64-bit data addresses in the
+code (which are also subject to relocation), and use those. These 64-bit
+data addresses are guaranteed to reach the entire address space.
+
+The JIT continues to generate rel32 addresses for call instructions.
+After the process is in rel32 overflow mode, if the VM gets an
+`ICorJitInfo::recordRelocation()` call that overflows rel32 space, it
+assumes the rel32 address is for a call instruction, attempts to build
+a jump stub, and patches the rel32 with the offset to the generated
+jump stub.
+
+Note that in rel32 overflow mode, most call instructions are likely to
+still reach their intended target with a rel32 offset, so jump stubs are
+not expected to be required in most cases.
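+
+For reference, the jump stub itself is the 12-byte sequence
+`mov rax, imm64; jmp rax`, which is why code reached through a jump
+stub must tolerate RAX being trashed. A sketch of its encoding, using a
+hypothetical emitter function:
+
+```cpp
+#include <cassert>
+#include <cstdint>
+#include <cstring>
+
+// Sketch of emitting the 12-byte AMD64 jump stub:
+//   48 B8 <imm64>   mov rax, target
+//   FF E0           jmp rax
+void EmitJumpStub(uint8_t* stub, uint64_t target)
+{
+    stub[0] = 0x48;               // REX.W prefix
+    stub[1] = 0xB8;               // mov rax, imm64
+    memcpy(&stub[2], &target, 8); // 8-byte absolute target (little-endian)
+    stub[10] = 0xFF;              // jmp rax
+    stub[11] = 0xE0;
+}
+```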
+
+If this attempt to create a jump stub fails, then the generated code
+cannot be used, and we hit a fatal error; we have no mechanism currently
+to recover from this failure, or to prevent it.
+
+There are several problems with this system:
+1. Because the VM doesn't know whether an `IMAGE_REL_BASED_REL32`
+relocation is for data or for code, in the normal case (before "rel32
+overflow" mode), it assumes the worst, namely that it is for data. If
+all rel32 data accesses fit, only code offsets don't fit, and the VM
+could distinguish between code and data references, then we could
+generate jump stubs for the too-large code offsets and never go into
+"rel32 overflow" mode, which leads to generating 64-bit data addresses.
+2. We can't stress jump stub creation functionality for JIT-generated
+code because the JIT generates `IMAGE_REL_BASED_REL32` relocations for
+intra-function jumps and calls that it expects, and in fact requires,
+not to be replaced with jump stubs, because it doesn't expect the register
+used by jump stubs (RAX) to be trashed.
+3. We don't have any mechanism to recover if a jump stub can't be
+allocated.
+
+In the NGEN case, rel32 calls are guaranteed to always reach, as PE
+image files are limited to 2GB in size, meaning a rel32 offset is
+sufficient to reach from any location in the image to any other
+location. In addition, all control transfers to locations outside the
+image go through indirection stubs. These stubs themselves might require
+jump stubs, as described later.
+
+### Failure mitigation
+
+There are several possible mitigations for JIT failure to allocate jump
+stubs.
+1. When we get into "rel32 overflow" mode, the JIT could always generate
+large calls, and never generate rel32 offsets. This is obviously
+somewhat expensive, as every external call, such as every call to a JIT
+helper, would increase from 5 to 12 bytes. Since it would only occur
+once you are in "rel32 overflow" mode, you already know that the process
+is quite large, so this is perhaps justifiable, though also perhaps
+could be optimized somewhat. This is very simple to implement.
+2. Note that you get into "rel32 overflow" mode even for data addresses.
+It would be useful to verify that the need for large data addresses
+doesn't happen much more frequently than large code addresses.
+3. An alternative is to have two separate overflow modes: "data rel32
+overflow" and "code rel32 overflow", as follows:
+ 1. "data rel32 overflow" is entered by not being able to generate a
+ rel32 offset for a data address. Restart the compile, and all subsequent
+ data addresses will be large.
+ 2. "code rel32 overflow" is entered by not being able to generate a
+ rel32 offset or jump stub for a code address. Restart the compile, and
+ all subsequent external call/jump sequences will be large.
+ These could be independent, which would require distinguishing code and
+ data rel32 to the VM (which might be useful for other reasons, such as
+ enabling better stress modes). Or, we could layer them: "data rel32
+ overflow" would be the current "rel32 overflow" we have today, which we
+ must enter before attempting to generate a jump stub. If a jump stub
+ fails to be created, we fail and retry the compilation again, enter
+ "code rel32 overflow" mode, and all subsequent code (and data) addresses
+ would be large. We would need to add the ability to communicate this new
+ mode from the VM to the JIT, implement large call/jump generation in the
+ JIT, and implement another type of retry in the VM.
+4. Another alternative: The JIT could determine the total number of
+unique external call/jump targets from a function, and report that to
+the VM. Jump stub space for exactly this number would be allocated,
+perhaps along with the function itself (such as at the end), and only if
+we are in a "rel32 overflow" mode. Any jump stub required would come
+from this space (and identical targets would share the same jump stub;
+note that sharing is optional). Since jump stubs would not be shared
+between functions, this requires more space than the current jump stub
+system but would be guaranteed to work and would only kick in when we
+are already experiencing large system behavior.
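+
+The space calculation for option 4 is straightforward. A sketch with
+hypothetical names (per the text, sharing stubs between identical
+targets is optional):
+
+```cpp
+#include <cassert>
+#include <cstddef>
+#include <cstdint>
+#include <set>
+#include <vector>
+
+// Sketch of mitigation 4: the JIT reports the set of unique external
+// call/jump targets, and the VM reserves exactly that many jump stub
+// slots next to the function.
+constexpr size_t kJumpStubSize = 12;
+
+size_t JumpStubReserveForFunction(const std::vector<uint64_t>& externalTargets)
+{
+    // Identical targets can share one stub, so reserve per unique target.
+    std::set<uint64_t> unique(externalTargets.begin(), externalTargets.end());
+    return unique.size() * kJumpStubSize;
+}
+```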
+
+## Other jump stub creation paths
+
+The VM has several other locations that dynamically generate code or
+patch previously generated code, not related to the JIT generating code.
+These also must use the jump stub mechanism to possibly create jump
+stubs for large distance jumps. The following sections describe these
+cases.
+
+## ReJIT
+
+ReJIT is a CLR profiler feature, currently only implemented for x86 and
+amd64, that allows a profiler to request a function be re-compiled with
+different IL, given by the profiler, and have that newly compiled code
+be used instead of the originally compiled IL. This happens within a
+live process. A single function can be ReJIT compiled more than once,
+and in fact, any number of times. The VM currently implements the
+transfer of control to the ReJIT compiled function by replacing the
+first five bytes of the generated code of the original function with a
+"jmp rel32" to the newly generated code. Call this the "jump patch"
+space. One fundamental requirement for this to work is that every
+function (a) be at least 5 bytes long, and (b) the first 5 bytes of a
+function (except the first byte, which is the function's entry point
+itself) can't be the target of any branch. (As an implementation detail, the JIT
+currently pads the function prolog out to 5 bytes with NOP instructions,
+if required, even if there is enough code following the prolog to
+satisfy the 5-byte requirement if those non-prolog bytes are also not
+branch targets.)
+
+If the newly ReJIT generated code is at an address that doesn't fit in a
+rel32 in the "jmp rel32" patch, then a jump stub is created.
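+
+When the target is reachable, writing the patch is simple; a sketch
+(illustrative names, not the actual ReJIT patching code):
+
+```cpp
+#include <cassert>
+#include <cstdint>
+#include <cstring>
+
+// Sketch of the 5-byte "jmp rel32" ReJIT patch written over a
+// function's first bytes. Returns false when the displacement doesn't
+// fit, in which case a jump stub would be needed as the intermediate
+// target of the patch.
+bool WriteJmpRel32Patch(uint8_t* patch, uint64_t codeAddr, uint64_t target)
+{
+    int64_t delta = (int64_t)target - (int64_t)(codeAddr + 5);
+    if (delta < INT32_MIN || delta > INT32_MAX)
+        return false; // caller must route through a jump stub instead
+    patch[0] = 0xE9; // jmp rel32 opcode
+    int32_t rel32 = (int32_t)delta;
+    memcpy(&patch[1], &rel32, 4);
+    return true;
+}
+```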
+
+The JIT only creates the required jump patch space if the
+`CORJIT_FLG_PROF_REJIT_NOPS` flag is passed to the JIT. For dynamic
+compilation, this flag is only passed if a profiler is attached and has
+also requested ReJIT services. Note that currently, to enable ReJIT, the
+profiler must be present from process launch, and must opt-in to enable
+ReJIT at process launch, meaning that all JIT generated functions will
+have the jump patch space under these conditions. There will never be a
+mix of functions with and without jump patch space in the process if a
+profiler has enabled ReJIT. A desirable future state from the profiler
+perspective would be to support profiler attach-to-process and ReJIT
+(with function swapping) at any time thereafter. This goal may or may
+not be achieved via the jump stamp space design.
+
+All NGEN and Ready2Run images are currently built with the
+`CORJIT_FLG_PROF_REJIT_NOPS` flag set, to always enable ReJIT using native
+images.
+
+A single function can be ReJIT compiled many times. Only the last ReJIT
+generated function can be active; the previous compilations consume
+address space in the process, but are not collected until the AppDomain
+unloads. Each ReJIT event must update the "jmp rel32" patch to point to
+the new function, and thus each ReJIT event might require a new jump
+stub.
+
+If a situation arises where a single function is ReJIT compiled many
+times, and each time requires a new jump stub, it's possible that all
+jump stub space near the original function can be consumed simply by the
+"leaked" jump stubs created by all the ReJIT compilations for a single
+function. The "leaked" ReJIT compiled functions (since they aren't
+collected until AppDomain unload) also make it more likely that "close"
+code heap address space gets filled up.
+
+### Failure mitigation
+
+A simple mitigation would be to increase the size of the required
+function jump patch space from 5 to 12 bytes. This is a two line change
+in the `CodeGen::genPrologPadForReJit()` function in the JIT. However,
+this would increase the size of all NGEN and Ready2Run images. Note that
+many managed code functions are very small, with very small prologs, so
+this could significantly impact code size (the change could easily be
+measured). For JIT-generated code, where the additional size would only
+be added once a profiler has enabled ReJIT, it seems like the additional
+code size would be easily justified.
+
+Note that a function has at most one active ReJIT companion function.
+When that ReJIT function is no longer active (and thus will never be
+used again), the associated jump stub is likewise "leaked". We
+could reserve space for a single jump stub for each function, to be used
+by ReJIT, and then, if a jump stub is required for ReJIT, always use
+that space. The JIT could pad the function end by 12 bytes when the
+`CORJIT_FLG_PROF_REJIT_NOPS` flag is passed, and the ReJIT patching code
+could use this reserved space any time it required a jump stub. This
+would require 12 extra bytes to be allocated for every function
+generated when the `CORJIT_FLG_PROF_REJIT_NOPS` flag is passed. These 12
+bytes could also be allocated at the end of the code heap, in the
+address space, but not in the normal working set.
+
+For NGEN and Ready2Run, this would require 12 bytes for every function
+in the image. This is quite a bit more space than the suggested
+mitigation of increasing prolog padding to 12 bytes, but only when
+necessary (meaning, only when the prolog isn't already 12 bytes in size).
+Alternatively, NGEN could allocate this space itself in the native
+image, putting it in some distant jump stub data area or section that
+would be guaranteed to be within range (due to the 2GB PE file size
+limitation) but wouldn't consume physical memory unless needed. This
+option would require more complex logic to allocate and find the
+associated jump stub during ReJIT. This would be similar to the JIT
+case, above, of reserving the jump stub in a distant portion of the code
+heap.
+
+## NGEN
+
+NGEN images are built with several tables of code addresses that must be
+patched when the NGEN image is loaded.
+
+### CLR Helpers
+
+During NGEN, the JIT generates either direct or indirect calls to CLR
+helpers. Most are direct calls. When NGEN constructs the PE file, it
+causes these all to branch to (or through, in the case of indirect
+calls) the helper table. When a native image is loaded, it replaces the
+helper number in the table with a 5-byte "jmp rel32" sequence. If the
+rel32 doesn't fit, a jump stub is created. Note that each helper table
+entry is allocated with 8 bytes (only 5 are needed for "jmp rel32", but
+presumably 8 bytes are reserved to improve alignment.)
+
+The code for filling out the helper table is `Module::LoadHelperTable()`.
+
+#### Failure mitigation
+
+A simple fix is to change NGEN to reserve 12 bytes for each direct call
+table entry, to accommodate the 12-byte jump stub sequence. A 5-byte
+"jmp rel32" sequence could still be used, if it fits, but the full 12
+bytes would be used if necessary.
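+
+A sketch of such a 12-byte entry writer, combining the two encodings
+already used on AMD64 (the short "jmp rel32" and the absolute
+`mov rax, imm64; jmp rax`); the function name is illustrative:
+
+```cpp
+#include <cassert>
+#include <cstdint>
+#include <cstring>
+
+// Sketch of the proposed 12-byte helper table entry: prefer the short
+// 5-byte form when the helper is reachable, fall back to the absolute
+// 12-byte form otherwise.
+void WriteHelperTableEntry(uint8_t* entry, uint64_t entryAddr, uint64_t helper)
+{
+    int64_t delta = (int64_t)helper - (int64_t)(entryAddr + 5);
+    if (delta >= INT32_MIN && delta <= INT32_MAX)
+    {
+        entry[0] = 0xE9; // jmp rel32 (5 bytes; rest of entry unused)
+        int32_t rel32 = (int32_t)delta;
+        memcpy(&entry[1], &rel32, 4);
+    }
+    else
+    {
+        entry[0] = 0x48; entry[1] = 0xB8;   // mov rax, imm64
+        memcpy(&entry[2], &helper, 8);
+        entry[10] = 0xFF; entry[11] = 0xE0; // jmp rax
+    }
+}
+```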
+
+There are fewer than 200 helpers, so the maximum additional overhead would
+be about `200 * (12 - 8) = 800` bytes. That is by far a worst-case
+scenario. Mscorlib.ni.dll itself has 72 entries in the helper table.
+System.XML.ni.dll has 51 entries, which would lead to 288 and 204 bytes
+of additional space, out of 34MB and 12MB total NI file size,
+respectively.
+
+An alternative is to change all helper calls in NGEN to be indirect:
+```
+call [rel32]
+```
+where the `[rel32]` offset points to an 8-byte address stored in the
+helper table. This method is already used by exactly one helper on
+AMD64: `CORINFO_HELP_STOP_FOR_GC`, in particular because this helper
+doesn't allow us to trash RAX, as required by jump stubs.
+Similarly, Ready2Run images use:
+```
+call [rel32]
+```
+for "hot" helpers and:
+```
+call [rel32]
+```
+to a shared:
+```
+jmp [rel32]
+```
+for cold helpers. We could change NGEN to use the Ready2Run scheme.
+
+Alternatively, we might handle all NGEN jump stub issues by reserving a
+section in the image for jump stubs that reserves virtual address space
+but does not increase the size of the image (in C++ this is the ".bss"
+section). The size of this section could be calculated precisely from
+all the required possible jump stub contributions to the image. Then,
+the jump stub code would allocate jump stubs from this space when
+required for a NGEN image.
+
+### Cross-module inherited methods
+
+Per the comments on `VirtualMethodFixupWorker()`, in an NGEN image,
+virtual slots inherited from cross-module dependencies point to jump
+thunks. The jump thunk invokes code to ensure the method is loaded and
+has a stable entry point, at which point the jump thunk is replaced by a
+"jmp rel32" to that stable entrypoint. This is represented by
+`CORCOMPILE_VIRTUAL_IMPORT_THUNK`. This can require a jump stub.
+
+Similarly, `CORCOMPILE_EXTERNAL_METHOD_THUNK` represents another kind of
+jump thunk in the NGEN image that also can require a jump stub.
+
+#### Failure mitigation
+
+Both types of thunks could be changed to reserve 12 bytes instead
+of just 5 for the jump thunk, to provide for space required for any
+potential jump stub.
+
+## Precode
+
+Precodes are used as temporary entrypoints for functions that will be
+JIT compiled. They are also used for temporary entrypoints in NGEN
+images for methods that need to be restored (i.e., the method code has
+external references that need to be loaded before the code runs). There
+exists `StubPrecode`, `FixupPrecode`, `RemotingPrecode`, and
+`ThisPtrRetBufPrecode`. Each of these generates a rel32 jump and/or call
+that might require a jump stub.
+
+StubPrecode is the "normal" general case. FixupPrecode is the most
+common, and has been heavily size optimized. Each FixupPrecode is 8
+bytes. Generated code calls the FixupPrecode address. Initially, the
+precode invokes code to generate or fix up the method being called, and
+then "fixes up" the FixupPrecode itself to jump directly to the native
+code. This final code will be a "jmp rel32", possibly via a jump stub.
+DynamicMethod / LCG uses FixupPrecode. This code path has been found to
+fail in large customer installations.
+
+### Failure mitigation
+
+An implementation has been made which changes the allocation of
+FixupPrecode to pre-allocate space for jump stubs, but only in the case
+of DynamicMethod. (See https://github.com/dotnet/coreclr/pull/9883).
+Currently, FixupPrecodes are allocated in "chunks" that share a
+MethodDesc pointer. For LCG, each chunk will have an additional set of
+bytes allocated, to reserve space for one jump stub per FixupPrecode in
+the chunk. When the FixupPrecode is patched, for LCG methods it will use
+the pre-allocated space if a jump stub is required.
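+
+The size cost of that reservation is easy to estimate. A simplified
+sketch (it ignores the chunk's shared MethodDesc pointer and any
+alignment; per-item sizes are as described above):
+
+```cpp
+#include <cassert>
+#include <cstddef>
+
+// Sketch of sizing a FixupPrecode chunk under the PR #9883 mitigation:
+// each precode is 8 bytes, and for LCG methods the chunk additionally
+// reserves one 12-byte jump stub slot per precode.
+constexpr size_t kFixupPrecodeSize = 8;
+constexpr size_t kJumpStubSize = 12;
+
+size_t FixupPrecodeChunkSize(size_t precodeCount, bool isLCG)
+{
+    size_t size = precodeCount * kFixupPrecodeSize;
+    if (isLCG)
+        size += precodeCount * kJumpStubSize; // pre-allocated stub space
+    return size;
+}
+```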
+
+For the non-LCG, non-FixupPrecode cases, we need a different solution.
+It would be easy to similarly allocate additional space for each type of
+precode with the precode itself. This might prove expensive. An
+alternative would be to ensure, by design, that somehow shared jump stub
+space is available, perhaps by reserving it in a shared area when the
+precode is allocated, and falling back to a mechanism where the precode
+reserves its own jump stub space if shared jump stub space cannot be
+allocated.
+
+A possibly better implementation would be to reserve, but not allocate,
+jump stub space at the end of the code heap, similar to how
+CodeHeapReserveForJumpStubs works, but instead the reserve amount should
+be computed precisely.
+
+## Ready2Run
+
+There are several DynamicHelpers class methods, used by Ready2Run, which
+may create jump stubs (not all do, but many do). The helpers are
+allocated dynamically when the helper in question is needed.
+
+### Failure mitigation
+
+These helpers could easily be changed to allocate additional, reserved,
+unshared space for a potential jump stub, and that space could be used
+when creating the rel32 offset.
+
+## Compact entrypoints
+
+The compact entrypoints implementation might create jump stubs. However,
+compact entrypoints are not enabled for AMD64 currently.
+
+## Stress modes
+
+Setting `COMPlus_ForceRelocs=1` forces jump stubs to be created in all
+scenarios except for JIT generated code. As described previously, the
+VM doesn't know when the JIT is reporting a rel32 data address or code
+address, and in addition the JIT reports relocations for intra-function
+jumps and calls for which it doesn't expect the register used by the
+jump stub to be trashed, thus we don't force jump stubs to be created
+for all JIT-reported jumps or calls.
+
+We should improve the communication between the JIT and VM such that we
+can reliably force jump stub creation for every rel32 call or jump. In
+addition, we should make sure to enable code to stress the creation of
+jump stubs for every mitigation that is implemented whether that be
+using the existing `COMPlus_ForceRelocs` configuration, or the creation of
+a new configuration option.
diff --git a/Documentation/design-docs/lsra-throughput.md b/Documentation/design-docs/lsra-throughput.md
new file mode 100644
index 0000000..4fd704c
--- /dev/null
+++ b/Documentation/design-docs/lsra-throughput.md
@@ -0,0 +1,74 @@
+Improving LSRA Throughput
+=========================
+
+There are a number of ways in which the current implementation of linear scan register allocation (LSRA) is sub-optimal:
+* I'm not certain that the extra pass that enumerates the nodes before the current `TreeNodeInfoInit` pass must be separate.
+ Further investigation is needed.
+* The identification of opportunities for "containment" (i.e. where the computation of a node's result can be folded into the parent,
+such as a load or store) is done during `Lowering` and communicated to the register allocator via a `gtLsraInfo` field on the node, that
+is otherwise unused, and is basically duplicated when the `RefPosition`s are built for the node.
+ * A more efficient representation of "containment" could allow this to remain in `Lowering`, where existing transformations already
+ take into account the parent context.
+ * This would also have the additional benefit of simplifying the containment check, which is done at least once for each node
+ (at the beginning of `CodeGen::genCodeForTreeNode()`), and then additionally when considering whether the operands of the current
+ node are contained.
+ * Alternatively, the containment analysis could be done during the building of `RefPosition`s, though see below.
+* Similarly, the specification of register requirements is done during the final pass of `Lowering`, and fundamentally requires more
+ space (it must specify register masks for sources, destination and any internal registers). In addition, the requirement for a new
+ register definition (the destination of the node, or any internal registers) is independent of the parent, so this could be done in
+ `LinearScan::buildRefPositionsForNode()` without having to do a dual traversal, unlike the identification of contained nodes.
+* After building `RefPosition`s, they are traversed in order to set the last use bits.
+ This is done separately because there are currently inconsistencies between the gtNext/gtPrev links and the actual order of codegen.
+ Once this is resolved, the lastUse bits should be set prior to register allocation by the liveness pass (#7256).
+* The `RefPosition`s are all created prior to beginning the register allocation pass. However, they are only really needed in advance
+ for the lclVars, which, unlike the "tree temps", have multiple definitions and may be live across basic blocks.
+ The `RefPosition`s for the tree temps could potentially be allocated on-the-fly, saving memory and probably improving locality (#7257).
+* The loop over all the candidate registers in `LinearScan::tryAllocateFreeReg()` and in `LinearScan::allocateBusyReg()` could be
+ short-circuited when a register is found that has the best possible score. Additionally, in the case of MinOpts, it could potentially
+ short-circuit as soon as a suitable candidate is found, though one would want to weight the throughput benefit against the code
+ quality impact.
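+
+The short-circuiting described in the last bullet might look like the
+following sketch (the scoring model and names are illustrative, not
+LSRA's actual heuristics):
+
+```cpp
+#include <cassert>
+#include <vector>
+
+// Sketch of short-circuiting the candidate-register scoring loop: stop
+// scanning as soon as a register reaches the best possible score, or,
+// under MinOpts, as soon as any suitable candidate appears. A negative
+// score marks an unsuitable candidate.
+int PickRegister(const std::vector<int>& scores, int bestPossibleScore, bool minOpts)
+{
+    int bestReg = -1, bestScore = -1;
+    for (int reg = 0; reg < (int)scores.size(); reg++)
+    {
+        if (scores[reg] < 0) continue; // not a suitable candidate
+        if (minOpts) return reg;       // first suitable candidate wins
+        if (scores[reg] > bestScore) { bestScore = scores[reg]; bestReg = reg; }
+        if (bestScore == bestPossibleScore) break; // can't do better
+    }
+    return bestReg;
+}
+```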
+
+Representing Containedness
+==========================
+My original plan for this was to combine all of the functionality of the current `TreeNodeInfoInit` pass with the building of `RefPositions`,
+and eliminate `gtLsraInfo`.
+The idea was to later consider pulling the containment analysis back into the first phase of `Lowering`.
+However, after beginning down that path (extracting the `TreeNodeInfoInit` methods into separate lsra{arch}.cpp files), I realized that
+there would be a great deal of throw-away work to put the containment analysis into `LinearScan`, only to potentially pull it out later.
+
+Furthermore, the representation of containedness is not very clean:
+* `Lowering` first communicates this as a combination of implicit knowledge of a node's behavior and its `gtLsraInfo.dstCount`.
+* Later, during `CodeGen`, it is determined by combining similar implicit node characteristics with the presence or absence of a register.
+
+I propose instead to do the following:
+* Add a flag to each node to indicate whether or not it is a tree root.
+ * To free up such a flag, I propose to eliminate `GTF_REG_VAL` for the non-`LEGACY_BACKEND`. Doing so will require some additional cleanup,
+ but in the process a number of hacks can be eliminated that are currently there to work around the fact that the emitter was designed
+ to work with a code generator that dynamically assigned registers, and set that flag when the code had been generated to put it in a
+ register, unlike the RyuJIT backend, which assigns the registers before generating code.
+* Define new register values:
+ * `REG_UNK` is assigned by `Lowering` when a register is required.
+ * `REG_OPT` is assigned by `Lowering` when a register is optional at both definition and use.
+ * `REG_OPT_USE` is assigned by `Lowering` when a register is required at the definition, but optional at the use.
+ * I don't know if we need `REG_OPT_DEF`, but that could be added as well.
+* Having done this, we can greatly simplify `IsContained()`.
+
+It may be more effective to use the extra bit for an actual `GTF_CONTAINED` flag, and that is something we might want to consider
+eventually, but initially it is easier to simplify the containedness check using `GTF_TREE_ROOT` without having to change all the
+places that currently mark nodes as contained.
+
+Combining Containedness Analysis with Lowering
+==============================================
+Once we've changed containedness to use the above representation, we can move the code to set it into the first pass of `Lowering`.
+There are likely to be some phase ordering challenges, but I don't think they will be prohibitive.
+
+Eliminating gtLsraInfo
+======================
+Issue #7225.
+
+After the containedness changes above, all that remains to communicate via `gtLsraInfo` is the register requirements.
+This step would still use the `TreeNodeInfo` data structure and the `TreeNodeInfoInit()` methods, but they would be called as
+each node is handled by `LinearScan::buildRefPositionsForNode()`.
+
diff --git a/Documentation/project-docs/adding_new_public_apis.md b/Documentation/project-docs/adding_new_public_apis.md
index 289ba7d..cca5095 100644
--- a/Documentation/project-docs/adding_new_public_apis.md
+++ b/Documentation/project-docs/adding_new_public_apis.md
@@ -9,7 +9,6 @@ Many of the CoreFX libraries type-forward their public APIs to the implementatio
**Staging the changes**
Make the changes to CoreCLR, including System.Private.CoreLib
-- Update `coreclr/src/mscorlib/model.xml` with the new APIs. APIs that are not listed in this file will be stripped out prior to publishing.
- Merge the changes
- Wait for a new System.Private.CoreLib to be published. Check the latest published version [here](https://dotnet.myget.org/feed/dotnet-core/package/nuget/Microsoft.TargetingPack.Private.CoreCLR).
diff --git a/Documentation/project-docs/ci-trigger-phrases.md b/Documentation/project-docs/ci-trigger-phrases.md
index 03e071d..dd0e981 100644
--- a/Documentation/project-docs/ci-trigger-phrases.md
+++ b/Documentation/project-docs/ci-trigger-phrases.md
@@ -235,8 +235,16 @@ To trigger a job, post a comment on your PR with "@dotnet-bot {trigger-phrase}".
- **Ubuntu x64 Checked CoreFX JitStressRegs=8 Build & Test:** "test Ubuntu corefx_jitstressregs8"
- **Ubuntu x64 Checked CoreFX JitStressRegs=0x10 Build & Test:** "test Ubuntu corefx_jitstressregs0x10"
- **Ubuntu x64 Checked CoreFX JitStressRegs=0x80 Build & Test:** "test Ubuntu corefx_jitstressregs0x80"
+- **Ubuntu x86 Debug Build:** "test Ubuntu x86 Debug"
+- **Ubuntu x86 Checked Build:** "test Ubuntu x86 Checked"
+- **Ubuntu x86 Release Build:** "test Ubuntu x86 Release"
+- **Ubuntu arm Cross Debug Build & Small Test:** "test Ubuntu arm Cross Debug Build"
+- **Ubuntu arm Cross Checked Build & Small Test:** "test Ubuntu arm Cross Checked Build"
- **Ubuntu arm64 Release Cross Build:** "test Ubuntu arm64"
-- **Ubuntu15.10 x64 Release Priority 0 Build:** "test Ubuntu15.10"
+- **Ubuntu16.04 x64 Release Priority 0 Build:** "test Ubuntu16.04 x64"
+- **Ubuntu16.04 arm Cross Checked Build & Small Test:** "test Ubuntu16.04 arm Cross Checked Build"
+- **Ubuntu16.04 arm Cross Release Build & Small Test:** "test Ubuntu16.04 arm Cross Release Build"
+- **Ubuntu16.10 x64 Release Priority 0 Build:** "test Ubuntu16.10"
- **OSX x64 Release Priority 1 Build & Test:** "test OSX pri1"
- **OSX x64 Release IL RoundTrip Build & Test:** "test OSX ilrt"
- **OSX x64 Release Long-Running GC Build & Test:**: "test OSX Release longgc"
@@ -343,3 +351,4 @@ To trigger a job, post a comment on your PR with "@dotnet-bot {trigger-phrase}".
- **Debian x64 Release Priority 1 Build & Test:** "test Debian8.2 pri1"
- **RedHat x64 Release Priority 0 Build:** "test RHEL7.2"
- **RedHat x64 Release Priority 1 Build & Test:** "test RHEL7.2 pri1"
+- **Tizen arm Cross Checked Build & Small Test:** "test Tizen armel Cross Checked Build"
diff --git a/Documentation/project-docs/contributing-workflow.md b/Documentation/project-docs/contributing-workflow.md
index 70423e5..d88e098 100644
--- a/Documentation/project-docs/contributing-workflow.md
+++ b/Documentation/project-docs/contributing-workflow.md
@@ -6,7 +6,7 @@ You can contribute to .NET Core with issues and PRs. Simply filing issues for pr
Getting Started
===============
-If you are looking at getting your feet wet with some simple (but still beneficial) changes, check out _up for grabs_ issues on the [CoreCLR](https://github.com/dotnet/coreclr/labels/up-for-grabs) and [CoreFX](https://github.com/dotnet/corefx/labels/up%20for%20grabs) repos.
+If you are looking at getting your feet wet with some simple (but still beneficial) changes, check out _up for grabs_ issues on the [CoreCLR](https://github.com/dotnet/coreclr/labels/up-for-grabs) and [CoreFX](https://github.com/dotnet/corefx/labels/up-for-grabs) repos.
For new ideas, please always start with an issue before starting development of an implementation. See [project priorities](project-priorities.md) to understand the Microsoft team's approach to engagement on general improvements to the product. Use [CODE_OWNERS.TXT](https://github.com/dotnet/coreclr/blob/master/CODE_OWNERS.TXT) to find relevant maintainers and @ mention them to ask for feedback on your issue.
diff --git a/Documentation/project-docs/contributing.md b/Documentation/project-docs/contributing.md
index f099bf9..3bedc82 100644
--- a/Documentation/project-docs/contributing.md
+++ b/Documentation/project-docs/contributing.md
@@ -129,7 +129,7 @@ The following file header is the used for .NET Core. Please use it for new files
```
- See [class.cpp](../../src/vm/class.cpp) for an example of the header in a C++ file.
-- See [List.cs](https://github.com/dotnet/corefx/blob/master/src/System.Collections/src/System/Collections/Generic/List.cs) for an example of the header in a C# file.
+- See [List.cs](../../src/mscorlib/src/System/Collections/Generic/List.cs) for an example of the header in a C# file.
Copying Files from Other Projects
---------------------------------
diff --git a/Documentation/project-docs/glossary.md b/Documentation/project-docs/glossary.md
index f4ae1c5..2a4e912 100644
--- a/Documentation/project-docs/glossary.md
+++ b/Documentation/project-docs/glossary.md
@@ -1,37 +1,42 @@
.NET Core Glossary
===
-This glossary defines terms, both common and more niche, that are important to understand when reading .NET Core documents and source code. They are also often used by .NET Core team members and other contributers when conversing on GitHub (issues, PRs), on twitter and other sites.
+This glossary defines terms, both common and more niche, that are important to understand when reading .NET Core documents and source code. They are also often used by .NET Core team members and other contributors when conversing on GitHub (issues, PRs), on Twitter and other sites.
As much as possible, we should link to the most authoritative and recent source of information for a term. That approach should be the most helpful for people who want to learn more about a topic.
-* BBT: Microsoft internal early version of C/C++ PGO. See https://www.microsoft.com/windows/cse/bit_projects.mspx.
-* BOTR: Book Of The Runtime.
-* CLR: Common Language Runtime.
-* COMPlus: An early name for the .NET platform, back when it was envisioned as a successor to the COM platform (hence, "COM+"). Used in various places in the CLR infrastructure, most prominently as a common prefix for the names of internal configuration settings. Note that this is different from the product that eventually ended up being named [COM+](https://msdn.microsoft.com/en-us/library/windows/desktop/ms685978.aspx).
-* COR: [Common Object Runtime](http://www.danielmoth.com/Blog/mscorlibdll.aspx). The name of .NET before it was named .NET.
-* DAC: Data Access Component. An abstraction layer over the internal structures in the runtime.
-* EE: Execution Engine.
-* GC: [Garbage Collector](https://github.com/dotnet/coreclr/blob/master/Documentation/botr/garbage-collection.md).
-* IPC: Inter-Process Communicaton
-* JIT: [Just-in-Time](https://github.com/dotnet/coreclr/blob/master/Documentation/botr/ryujit-overview.md) compiler. RyuJIT is the code name for the next generation Just-in-Time(aka "JIT") for the .NET runtime.
-* LCG: Lightweight Code Generation. An early name for [dynamic methods](https://github.com/dotnet/coreclr/blob/master/src/mscorlib/src/System/Reflection/Emit/DynamicMethod.cs).
-* MD: MetaData
-* NGen: Native Image Generator.
-* NYI: Not Yet Implemented
-* PAL: [Platform Adaptation Layer](http://archive.oreilly.com/pub/a/dotnet/2002/03/04/rotor.html). Provides an abstraction layer between the runtime and the operating system
-* PE: Portable Executable.
-* PGO: Profile Guided Optimization - see [details](https://blogs.msdn.microsoft.com/vcblog/2008/11/12/pogo/)
-* POGO: Profile Guided Optimization - see [details](https://blogs.msdn.microsoft.com/vcblog/2008/11/12/pogo/)
-* ProjectN: Codename for the first version of [.NET Native for UWP](https://msdn.microsoft.com/en-us/vstudio/dotnetnative.aspx).
-* ReadyToRun: A flavor of native images - command line switch of [crossgen](../building/crossgen.md).
-* Redhawk: Codename for experimental minimal managed code runtime that evolved into [CoreRT](https://github.com/dotnet/corert/).
-* SOS: [Son of Strike](http://blogs.msdn.com/b/jasonz/archive/2003/10/21/53581.aspx). The debugging extension for DbgEng based debuggers. Uses the DAC as an abstraction layer for its operation.
-* SuperPMI: JIT component test framework (super fast JIT testing - it mocks/replays EE in EE-JIT interface) - see [SuperPMI details](https://github.com/dotnet/coreclr/blob/master/src/ToolBox/superpmi/readme.txt).
-* SVR: The CLR used to be built as two variants, with one called "mscorsvr.dll", to mean the "server" version. In particular, it contained the server GC implementation, which was intended for multi-threaded apps capable of taking advantage of multiple processors. In the .NET Framework 2 release, the two variants were merged into "mscorwks.dll". The WKS version was the default, however the SVR version remained available.
-* TPA: Trusted Platform Assemblies used to be a special set of assemblies that comprised the platform assemblies, when it was originally designed. As of today, it is simply the set of assemblies known to constitute the application.
-* URT: Universal Runtime. Ancient name for what ended up being .NET, is used in the WinError facility name FACILITY_URT.
-* VSD: [Virtual Stub Dispatch](../botr/virtual-stub-dispatch.md). Technique of using stubs for virtual method invocations instead of the traditional virtual method table.
-* VM: Virtual machine.
-* WKS: The CLR used to be built as two variants, with one called "mscorwks.dll", to mean the "workstation" version. In particular, it contained the client GC implementation, which was intended for single-threaded apps, independent of how many processors were on the machine. In the .NET Framework 2 release, the two variants were merged into "mscorwks.dll". The WKS version was the default, however the SVR version remained available.
-* ZAP: Original code name for NGen
+| Term | Description |
+| ----- | ------------- |
+| AOT | Ahead-of-time compiler. Converts the MSIL bytecode to native machine code for a specific target CPU architecture. |
+| BBT | Microsoft internal early version of C/C++ PGO. See https://www.microsoft.com/windows/cse/bit_projects.mspx. |
+| BOTR | Book Of The Runtime. |
+| CLR | Common Language Runtime. |
+| COMPlus | An early name for the .NET platform, back when it was envisioned as a successor to the COM platform (hence, "COM+"). Used in various places in the CLR infrastructure, most prominently as a common prefix for the names of internal configuration settings. Note that this is different from the product that eventually ended up being named [COM+](https://msdn.microsoft.com/en-us/library/windows/desktop/ms685978.aspx). |
+| COR | [Common Object Runtime](http://www.danielmoth.com/Blog/mscorlibdll.aspx). The name of .NET before it was named .NET. |
+| DAC | Data Access Component. An abstraction layer over the internal structures in the runtime. |
+| EE | Execution Engine. |
+| GC | [Garbage Collector](https://github.com/dotnet/coreclr/blob/master/Documentation/botr/garbage-collection.md). |
+| IPC | Inter-Process Communication. |
+| JIT | [Just-in-Time](https://github.com/dotnet/coreclr/blob/master/Documentation/botr/ryujit-overview.md) compiler. RyuJIT is the code name for the next-generation Just-in-Time (aka "JIT") compiler for the .NET runtime. |
+| LCG | Lightweight Code Generation. An early name for [dynamic methods](https://github.com/dotnet/coreclr/blob/master/src/mscorlib/src/System/Reflection/Emit/DynamicMethod.cs). |
+| MD | MetaData. |
+| NGen | Native Image Generator. |
+| NYI | Not Yet Implemented. |
+| PAL | [Platform Adaptation Layer](http://archive.oreilly.com/pub/a/dotnet/2002/03/04/rotor.html). Provides an abstraction layer between the runtime and the operating system. |
+| PE | Portable Executable. |
+| PGO | Profile Guided Optimization - see [details](https://blogs.msdn.microsoft.com/vcblog/2008/11/12/pogo/). |
+| POGO | Profile Guided Optimization - see [details](https://blogs.msdn.microsoft.com/vcblog/2008/11/12/pogo/). |
+| ProjectN | Codename for the first version of [.NET Native for UWP](https://msdn.microsoft.com/en-us/vstudio/dotnetnative.aspx). |
+| R2R | Ready-to-Run. A flavor of native images - command line switch of [crossgen](../building/crossgen.md). |
+| Redhawk | Codename for experimental minimal managed code runtime that evolved into [CoreRT](https://github.com/dotnet/corert/). |
+| SOS | [Son of Strike](http://blogs.msdn.com/b/jasonz/archive/2003/10/21/53581.aspx). The debugging extension for DbgEng based debuggers. Uses the DAC as an abstraction layer for its operation. |
+| SuperPMI | JIT component test framework (super fast JIT testing - it mocks/replays EE in EE-JIT interface) - see [SuperPMI details](https://github.com/dotnet/coreclr/blob/master/src/ToolBox/superpmi/readme.txt). |
+| SVR | The CLR used to be built as two variants, with one called "mscorsvr.dll", to mean the "server" version. In particular, it contained the server GC implementation, which was intended for multi-threaded apps capable of taking advantage of multiple processors. In the .NET Framework 2 release, the two variants were merged into "mscorwks.dll". The WKS version was the default; however, the SVR version remained available. |
+| TPA | Trusted Platform Assemblies used to be a special set of assemblies that comprised the platform assemblies, when it was originally designed. As of today, it is simply the set of assemblies known to constitute the application. |
+| URT | Universal Runtime. Ancient name for what ended up being .NET, is used in the WinError facility name FACILITY_URT. |
+| UTC | [Universal Tuple Compiler](https://blogs.msdn.microsoft.com/vcblog/2013/06/12/optimizing-c-code-overview/). The Microsoft C++ optimizer back-end that starts by converting the information from the FrontEnd into tuples – a binary stream of instructions. |
+| UWP | [Universal Windows Platform (UWP)](https://docs.microsoft.com/en-us/windows/uwp/get-started/universal-application-platform-guide) is a platform-homogeneous application architecture available on every device that runs Windows 10. |
+| VSD | [Virtual Stub Dispatch](../botr/virtual-stub-dispatch.md). Technique of using stubs for virtual method invocations instead of the traditional virtual method table. |
+| VM | Virtual machine. |
+| WKS | The CLR used to be built as two variants, with one called "mscorwks.dll", to mean the "workstation" version. In particular, it contained the client GC implementation, which was intended for single-threaded apps, independent of how many processors were on the machine. In the .NET Framework 2 release, the two variants were merged into "mscorwks.dll". The WKS version was the default; however, the SVR version remained available. |
+| ZAP | Original code name for NGen. |
diff --git a/Documentation/project-docs/linux-performance-tracing.md b/Documentation/project-docs/linux-performance-tracing.md
index 93d63b4..c9d66fa 100644
--- a/Documentation/project-docs/linux-performance-tracing.md
+++ b/Documentation/project-docs/linux-performance-tracing.md
@@ -3,13 +3,13 @@ Performance Tracing on Linux
When a performance problem is encountered on Linux, these instructions can be used to gather detailed information about what was happening on the machine at the time of the performance problem.
-#Required Tools#
+# Required Tools #
- **perfcollect**: Bash script that automates data collection.
- Available at <http://aka.ms/perfcollect>.
- **PerfView**: Windows-based performance tool that can also analyze trace files collected with Perfcollect.
- Available at <http://aka.ms/perfview>.
-#Preparing Your Machine#
+# Preparing Your Machine #
Follow these steps to prepare your machine to collect a performance trace.
1. Download Perfcollect.
@@ -30,7 +30,7 @@ Follow these steps to prepare your machine to collect a performance trace.
> sudo ./perfcollect install
> ```
-#Collecting a Trace#
+# Collecting a Trace #
1. Have two shell windows available - one for controlling tracing, referred to as **[Trace]**, and one for running the application, referred to as **[App]**.
2. **[App]** Setup the application shell - this enables tracing configuration inside of CoreCLR.
@@ -81,7 +81,7 @@ Follow these steps to prepare your machine to collect a performance trace.
The compressed trace file is now stored in the current working directory.
-#Collecting in a Docker Container#
+# Collecting in a Docker Container #
Perfcollect can be used to collect data for an application running inside a Docker container. The main thing to know is that collecting a trace requires elevated privileges because the [default seccomp profile](https://docs.docker.com/engine/security/seccomp/) blocks a required syscall - perf_events_open.
In order to use the instructions in this document to collect a trace, spawn a new shell inside the container that is privileged.
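The privileged-shell step above can be sketched as follows; the container name `myapp` is a hypothetical placeholder, not something taken from this document:

```shell
# Spawn a separate privileged shell alongside the (unprivileged) app
# container so that perf_event_open, which the default seccomp profile
# blocks, succeeds for the tracing session. "myapp" is a hypothetical name.
docker exec -it --privileged myapp /bin/bash
```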
@@ -94,7 +94,7 @@ Even though the application hosted in the container isn't privileged, this new s
If you want to try tracing in a container, we've written a [demo Dockerfile](https://raw.githubusercontent.com/dotnet/corefx-tools/master/src/performance/perfcollect/docker-demo/Dockerfile) that installs all of the performance tracing pre-requisites, sets the environment up for tracing, and starts a sample CPU-bound app.
-#Filtering#
+# Filtering #
Filtering is implemented on Windows through the latest mechanisms provided with the [EventSource](https://msdn.microsoft.com/en-us/library/system.diagnostics.tracing.eventsource(v=vs.110).aspx) class.
On Linux those mechanisms are not available yet. Instead, there are two environment variables that exist just on linux to do some basic filtering.
@@ -104,10 +104,10 @@ On Linux those mechanisms are not available yet. Instead, there are two environm
Setting one or both of these variables will only enable collecting events that contain the name you specify as a substring. Strings are treated as case insensitive.
-#Viewing a Trace#
+# Viewing a Trace #
Traces are best viewed using PerfView on Windows. Note that we're currently looking into porting the analysis pieces of PerfView to Linux so that the entire investigation can occur on Linux.
-##Open the Trace File##
+## Open the Trace File ##
1. Copy the trace.zip file from Linux to a Windows machine.
2. Download PerfView from <http://aka.ms/perfview>.
3. Run PerfView.exe
@@ -116,7 +116,7 @@ Traces are best viewed using PerfView on Windows. Note that we're currently loo
> PerfView.exe <path to trace.zip file>
> ```
-##Select a View##
+## Select a View ##
PerfView will display the list of views that are supported based on the data contained in the trace file.
- For CPU investigations, choose **CPU stacks**.
@@ -126,10 +126,10 @@ PerfView will display the list of views that are supported based on the data con
For more details on how to interpret views in PerfView, see help links in the view itself, or from the main window in PerfView choose **Help->Users Guide**.
-#Extra Information#
+# Extra Information #
This information is not strictly required to collect and analyze traces, but is provided for those who are interested.
-##Prerequisites##
+## Prerequisites ##
Perfcollect will alert users to any prerequisites that are not installed and offer to install them. Prerequisites can be installed automatically by running:
>```bash
diff --git a/Documentation/project-docs/profiling-api-status.md b/Documentation/project-docs/profiling-api-status.md
index a1f2f17..c689eae 100644
--- a/Documentation/project-docs/profiling-api-status.md
+++ b/Documentation/project-docs/profiling-api-status.md
@@ -1,10 +1,10 @@
-#Status of CoreCLR Profiler APIs
+# Status of CoreCLR Profiler APIs
The notes below will help you determine what profiling APIs are safe to use. The .NET Core project started with the codebase from the desktop CoreCLR/Silverlight so all the profiler APIs present there are also present in the code here. However that doesn't automatically imply that they are all working or being actively tested right now. Our goal is to eventually have everything tested and working across all the supported OSes. As we make progress we'll document it here. If you want to use APIs that we haven't tested yet you are welcome to do so, but you need to do your own testing to determine whether they work. If you do test APIs we haven't gotten to yet, we hope you'll add a note below in the Community Tested API section so that everyone can benefit.
-#Microsoft Tested APIs:
+# Microsoft Tested APIs:
-###Windows
+### Windows
* ICorProfilerCallback:
* `Initialize`
@@ -34,12 +34,12 @@ The notes below will help you determine what profiling APIs are safe to use. The
\* Instrumentation APIs have not been tested on assemblies compiled with Ready2Run technology. Ready2Run is currently used
for all Framework assemblies.
-###Linux
-###OS X
+### Linux
+### OS X
-#Community Tested APIs (please include GitHub handle)
+# Community Tested APIs (please include GitHub handle)
-###Windows
+### Windows
* IProfilerCallback
* ModuleLoadStarted (noahfalk on behalf of one of our vendors)
* JITCompilationStarted (noahfalk on behalf of one of our vendors)
@@ -51,12 +51,12 @@ The notes below will help you determine what profiling APIs are safe to use. The
* IMetaDataAssemblyEmit
* DefineAssemblyRef (noahfalk on behalf of one of our vendors)
-###Linux
-###OS X
+### Linux
+### OS X
-#APIs definitely known not to work yet
-###Windows
-###Linux
+# APIs definitely known not to work yet
+### Windows
+### Linux
* ICorProfilerInfo:
* `SetEnterLeaveFunctionHooks`
@@ -68,7 +68,7 @@ The notes below will help you determine what profiling APIs are safe to use. The
* `COR_PRF_USE_PROFILE_IMAGES`
* `COR_PRF_REQUIRE_PROFILE_IMAGE`
-###OS X
+### OS X
* ICorProfilerInfo:
* `SetEnterLeaveFunctionHooks`
diff --git a/Documentation/workflow/IssuesFeedbackEngagement.md b/Documentation/workflow/IssuesFeedbackEngagement.md
index f83b2b6..4346ed9 100644
--- a/Documentation/workflow/IssuesFeedbackEngagement.md
+++ b/Documentation/workflow/IssuesFeedbackEngagement.md
@@ -16,7 +16,7 @@ Before you log a new issue, you should try using the search tool on the issue pa
If you want to ask a question, or want wider discussion (to see if others share you issue), we encourage you to start a thread
in the [.NET Foundation forums](http://forums.dotnetfoundation.org/).
-###Chat with the CoreCLR Community
+### Chat with the CoreCLR Community
For more real-time feedback you can also start a chat session by clicking on the icons below.
diff --git a/Documentation/workflow/OfficalAndDailyBuilds.md b/Documentation/workflow/OfficalAndDailyBuilds.md
index d5efb93..9d5fcc2 100644
--- a/Documentation/workflow/OfficalAndDailyBuilds.md
+++ b/Documentation/workflow/OfficalAndDailyBuilds.md
@@ -63,17 +63,15 @@ If you click on the images below, you can get more details about the build (incl
and the exact test results (in case your build is failing tests and you are wondering if it is
something affecting all builds).
-| | Debug | Release |
-|---|:-----:|:-------:|
-|**CentOS 7.1**|[![x64 status](https://img.shields.io/jenkins/s/http/dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/debug_centos7.1.svg?label=x64)](http://dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/job/debug_centos7.1)|[![x64 status](https://img.shields.io/jenkins/s/http/dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/release_centos7.1.svg?label=x64)](http://dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/job/release_centos7.1)|
-|**Debian 8.4**|[![x64 status](https://img.shields.io/jenkins/s/http/dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/debug_debian8.4.svg?label=x64)](http://dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/job/debug_debian8.4)|[![x64 status](https://img.shields.io/jenkins/s/http/dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/release_debian8.4.svg?label=x64)](http://dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/job/release_debian8.4)|
-|**FreeBSD 10.1**|[![x64 status](https://img.shields.io/jenkins/s/http/dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/debug_freebsd.svg?label=x64)](http://dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/job/debug_freebsd)|[![x64 status](https://img.shields.io/jenkins/s/http/dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/release_freebsd.svg?label=x64)](http://dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/job/release_freebsd)|
-|**openSUSE 13.2**|[![x64 status](https://img.shields.io/jenkins/s/http/dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/debug_opensuse13.2.svg?label=x64)](http://dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/job/debug_opensuse13.2)|[![x64 status](https://img.shields.io/jenkins/s/http/dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/release_opensuse13.2.svg?label=x64)](http://dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/job/release_opensuse13.2)|
-|**openSUSE 42.1**|[![x64 status](https://img.shields.io/jenkins/s/http/dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/debug_opensuse42.1.svg?label=x64)](http://dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/job/debug_opensuse42.1)|[![x64 status](https://img.shields.io/jenkins/s/http/dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/release_opensuse42.1.svg?label=x64)](http://dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/job/release_opensuse42.1)|
-|**OS X 10.11**|[![x64 status](https://img.shields.io/jenkins/s/http/dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/debug_osx.svg?label=x64)](http://dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/job/debug_osx)|[![x64 status](https://img.shields.io/jenkins/s/http/dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/release_osx.svg?label=x64)](http://dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/job/release_osx)|
-|**Red Hat 7.2**|[![x64 status](https://img.shields.io/jenkins/s/http/dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/debug_rhel7.2.svg?label=x64)](http://dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/job/debug_rhel7.2)|[![x64 status](https://img.shields.io/jenkins/s/http/dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/release_rhel7.2.svg?label=x64)](http://dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/job/release_rhel7.2)|
-|**Fedora 23**|[![x64 status](https://img.shields.io/jenkins/s/http/dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/debug_fedora23.svg?label=x64)](http://dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/job/debug_fedora23)|[![x64 status](https://img.shields.io/jenkins/s/http/dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/release_fedora23.svg?label=x64)](http://dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/job/release_fedora23)|
-|**Ubuntu 14.04**|[![x64 status](https://img.shields.io/jenkins/s/http/dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/debug_ubuntu.svg?label=x64)](http://dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/job/debug_ubuntu)|[![x64 status](https://img.shields.io/jenkins/s/http/dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/release_ubuntu.svg?label=x64)](http://dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/job/release_ubuntu)|
-|**Ubuntu 16.04**|[![x64 status](https://img.shields.io/jenkins/s/http/dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/debug_ubuntu16.04.svg?label=x64)](http://dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/job/debug_ubuntu16.04)|[![x64 status](https://img.shields.io/jenkins/s/http/dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/release_ubuntu16.04.svg?label=x64)](http://dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/job/release_ubuntu16.04)|
-|**Ubuntu 16.10**|[![x64 status](https://img.shields.io/jenkins/s/http/dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/debug_ubuntu16.10.svg?label=x64)](http://dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/job/debug_ubuntu16.10)|[![x64 status](https://img.shields.io/jenkins/s/http/dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/release_ubuntu16.10.svg?label=x64)](http://dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/job/release_ubuntu16.10)|
-|**Windows 8.1**|[![x64 status](https://img.shields.io/jenkins/s/http/dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/debug_windows_nt.svg?label=x64)](http://dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/job/debug_windows_nt)<br/>[![arm64 status](https://img.shields.io/jenkins/s/http/dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/arm64_cross_debug_windows_nt.svg?label=arm64)](http://dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/job/arm64_cross_debug_windows_nt)|[![x64 status](https://img.shields.io/jenkins/s/http/dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/release_windows_nt.svg?label=x64)](http://dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/job/release_windows_nt)<br/>[![arm64 status](https://img.shields.io/jenkins/s/http/dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/arm64_cross_release_windows_nt.svg?label=arm64)](http://dotnet-ci.cloudapp.net/job/dotnet_coreclr/job/master/job/arm64_cross_release_windows_nt)|
+| | X64 Debug | X64 Release | ARM64 Debug | ARM64 Release |
+|---|:-----:|:-------:|:-------:|:-------:|
+|**CentOS 7.1**|[![x64 status](https://ci.dot.net/job/dotnet_coreclr/job/master/job/debug_centos7.1/badge/icon)](http://ci.dot.net/job/dotnet_coreclr/job/master/job/debug_centos7.1)|[![x64 status](https://ci.dot.net/job/dotnet_coreclr/job/master/job/release_centos7.1/badge/icon)](http://ci.dot.net/job/dotnet_coreclr/job/master/job/release_centos7.1)|||
+|**Debian 8.4**|[![x64 status](https://ci.dot.net/job/dotnet_coreclr/job/master/job/debug_debian8.4/badge/icon)](http://ci.dot.net/job/dotnet_coreclr/job/master/job/debug_debian8.4)|[![x64 status](https://ci.dot.net/job/dotnet_coreclr/job/master/job/release_debian8.4/badge/icon)](http://ci.dot.net/job/dotnet_coreclr/job/master/job/release_debian8.4)|||
+|**FreeBSD 10.1**|[![x64 status](https://ci.dot.net/job/dotnet_coreclr/job/master/job/debug_freebsd/badge/icon)](http://ci.dot.net/job/dotnet_coreclr/job/master/job/debug_freebsd)|[![x64 status](https://ci.dot.net/job/dotnet_coreclr/job/master/job/release_freebsd/badge/icon)](http://ci.dot.net/job/dotnet_coreclr/job/master/job/release_freebsd)|||
+|**openSUSE 42.1**|[![x64 status](https://ci.dot.net/job/dotnet_coreclr/job/master/job/debug_opensuse42.1/badge/icon)](http://ci.dot.net/job/dotnet_coreclr/job/master/job/debug_opensuse42.1)|[![x64 status](https://ci.dot.net/job/dotnet_coreclr/job/master/job/release_opensuse42.1/badge/icon)](http://ci.dot.net/job/dotnet_coreclr/job/master/job/release_opensuse42.1)|||
+|**OS X 10.12**|[![x64 status](https://ci.dot.net/job/dotnet_coreclr/job/master/job/debug_osx10.12/badge/icon)](http://ci.dot.net/job/dotnet_coreclr/job/master/job/debug_osx10.12)|[![x64 status](https://ci.dot.net/job/dotnet_coreclr/job/master/job/release_osx10.12/badge/icon)](http://ci.dot.net/job/dotnet_coreclr/job/master/job/release_osx10.12)|||
+|**Red Hat 7.2**|[![x64 status](https://ci.dot.net/job/dotnet_coreclr/job/master/job/debug_rhel7.2/badge/icon)](http://ci.dot.net/job/dotnet_coreclr/job/master/job/debug_rhel7.2)|[![x64 status](https://ci.dot.net/job/dotnet_coreclr/job/master/job/release_rhel7.2/badge/icon)](http://ci.dot.net/job/dotnet_coreclr/job/master/job/release_rhel7.2)|||
+|**Ubuntu 14.04**|[![x64 status](https://ci.dot.net/job/dotnet_coreclr/job/master/job/debug_ubuntu/badge/icon)](http://ci.dot.net/job/dotnet_coreclr/job/master/job/debug_ubuntu)|[![x64 status](https://ci.dot.net/job/dotnet_coreclr/job/master/job/release_ubuntu/badge/icon)](http://ci.dot.net/job/dotnet_coreclr/job/master/job/release_ubuntu)|||
+|**Ubuntu 16.04**|[![x64 status](https://ci.dot.net/job/dotnet_coreclr/job/master/job/debug_ubuntu16.04/badge/icon)](http://ci.dot.net/job/dotnet_coreclr/job/master/job/debug_ubuntu16.04)|[![x64 status](https://ci.dot.net/job/dotnet_coreclr/job/master/job/release_ubuntu16.04/badge/icon)](http://ci.dot.net/job/dotnet_coreclr/job/master/job/release_ubuntu16.04)|||
+|**Ubuntu 16.10**|[![x64 status](https://ci.dot.net/job/dotnet_coreclr/job/master/job/debug_ubuntu16.10/badge/icon)](http://ci.dot.net/job/dotnet_coreclr/job/master/job/debug_ubuntu16.10)|[![x64 status](https://ci.dot.net/job/dotnet_coreclr/job/master/job/release_ubuntu16.10/badge/icon)](http://ci.dot.net/job/dotnet_coreclr/job/master/job/release_ubuntu16.10)|||
+|**Windows 8.1**|[![x64 status](https://ci.dot.net/job/dotnet_coreclr/job/master/job/debug_windows_nt/badge/icon)](http://ci.dot.net/job/dotnet_coreclr/job/master/job/debug_windows_nt)|[![x64 status](https://ci.dot.net/job/dotnet_coreclr/job/master/job/release_windows_nt/badge/icon)](http://ci.dot.net/job/dotnet_coreclr/job/master/job/release_windows_nt)|[![arm64 status](https://ci.dot.net/job/dotnet_coreclr/job/master/job/arm64_cross_debug_windows_nt/badge/icon)](http://ci.dot.net/job/dotnet_coreclr/job/master/job/arm64_cross_debug_windows_nt)|[![arm64 status](https://ci.dot.net/job/dotnet_coreclr/job/master/job/arm64_cross_release_windows_nt/badge/icon)](http://ci.dot.net/job/dotnet_coreclr/job/master/job/arm64_cross_release_windows_nt)|
diff --git a/Documentation/workflow/RunningTests.md b/Documentation/workflow/RunningTests.md
index 0cf84be..65957f4 100644
--- a/Documentation/workflow/RunningTests.md
+++ b/Documentation/workflow/RunningTests.md
@@ -1,5 +1,5 @@
-#Running .NET Core Tests
+# Running .NET Core Tests
TODO - Incomplete.
diff --git a/Documentation/workflow/UsingYourBuild.md b/Documentation/workflow/UsingYourBuild.md
index c783dd1..dcb24b3 100644
--- a/Documentation/workflow/UsingYourBuild.md
+++ b/Documentation/workflow/UsingYourBuild.md
@@ -1,209 +1,233 @@
# Using your .NET Core Runtime Build
-We assume that you have successfully built CoreCLR repository and thus have file of the form
+We assume that you have successfully built CoreCLR repository and thus have files of the form
```
bin\Product\<OS>.<arch>.<flavor>\.nuget\pkg\Microsoft.NETCore.Runtime.CoreCLR.<version>.nupkg
```
And now you wish to try it out. We will be using Windows OS as an example and thus will use \ rather
than / for directory separators and things like Windows_NT instead of Linux but it should be
-pretty obvious how to adapt these instructions for other operating systems.
+pretty obvious how to adapt these instructions for other operating systems.
To run your newly built .NET Core Runtime in addition to the application itself, you will need
a 'host' program that will load the Runtime as well as all the other .NET Core Framework
-code that your application needs. The easiest way to get all this other stuff is to simply use the
+code that your application needs. The easiest way to get all this other stuff is to simply use the
standard 'dotnet' host that installs with .NET Core SDK.
The released version of 'dotnet' tool may not be compatible with the live CoreCLR repository. The following steps
-assume use of unreleased version of 'dotnet' tool that is downloaded as part of the CoreCLR repository
-build at `<repo root>\Tools\dotnetcli`. [Add `Tools\dotnetcli` directory to your path](../building/windows-instructions.md#adding-to-the-default-path-variable)
-and type:
+assume use of a dogfood build of the .NET SDK.
-* dotnet -?
+## Acquire the latest nightly .NET Core 2.0 SDK
-and it prints some help text, you are ready.
+- [Win 64-bit Latest](https://dotnetcli.blob.core.windows.net/dotnet/Sdk/master/dotnet-dev-win-x64.latest.zip)
+- [macOS 64-bit Latest](https://dotnetcli.blob.core.windows.net/dotnet/Sdk/master/dotnet-dev-osx-x64.latest.tar.gz)
+- [Others](https://github.com/dotnet/cli/blob/master/README.md#installers-and-binaries)
-### Step 1: Create a App using the Default Runtime
-At this point you can create a new 'Hello World' program in the standard way.
+To set up the SDK, download the zip, extract it somewhere, and add the root folder to your [path](../building/windows-instructions.md#adding-to-the-default-path-variable),
+or else fully qualify the path to `dotnet` in that root folder for all the instructions in this document.
+
+After setting up dotnet you can verify you are using the newer version by:
+
+`dotnet --info` -- the reported version should be 2.0.0-* or newer
+
+For another small walkthrough see [Dogfooding .NET Core 2.0 SDK](https://github.com/dotnet/corefx/blob/master/Documentation/project-docs/dogfooding.md).
+
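The version check above can be sketched in shell; the version string below is a stand-in for what `dotnet --version` would report (an assumption for illustration, not output captured from this setup):

```shell
# Hypothetical check that the SDK found on PATH is a 2.0.0-* build or newer.
ver="2.0.0-beta-001"   # stand-in for: ver=$(dotnet --version)
case "$ver" in
  2.*|[3-9].*) echo "dogfood SDK active" ;;
  *)           echo "older SDK on PATH: $ver" ;;
esac
```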
+## Create sample self-contained application
+
+At this point you can create a new 'Hello World' program in the standard way.
```bat
mkdir HelloWorld
cd HelloWorld
-dotnet new
+dotnet new console
```
-### Step 2: Get the Version number of the CoreCLR package you built.
+### Change project to be self-contained
-This makes a 'standard' hello world application but uses the .NET Core Runtime version that
-came with the dotnet.exe tool. First you need to modify your app to ask for the .NET Core
-you have built, and to do that, we need to know the version number of what you built. Get
-this by simply listing the name of the Microsoft.NETCore.Runtime.CoreCLR you built.
+In order to use your local changes, the application needs to be self-contained, as opposed to running on the
+shared framework. To do that you will need to add a `RuntimeIdentifier` to your project.
+
+```
+ <PropertyGroup>
+ ...
+ <RuntimeIdentifier>win7-x64</RuntimeIdentifier>
+ </PropertyGroup>
+```
+
+For Windows you will want `win7-x64`, but for other OSes you will need to set it to the most appropriate one based
+on what you built. You can generally figure that out by looking at the packages in your output. In our
+example you will see there is a package named `runtime.win7-x64.Microsoft.NETCore.Runtime.CoreCLR.2.0.0-beta-25023-0.nupkg`,
+so you will want to use whatever id sits between `runtime.` and `Microsoft.NETCore.Runtime.CoreCLR`.
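The RID can be read straight off the package file name; a minimal POSIX shell sketch (the file name is the example one from above):

```shell
# Extract the runtime identifier from a runtime package file name.
# The RID is whatever sits between "runtime." and ".Microsoft.NETCore.Runtime.CoreCLR".
pkg="runtime.win7-x64.Microsoft.NETCore.Runtime.CoreCLR.2.0.0-beta-25023-0.nupkg"
rid="${pkg#runtime.}"                                # strip the "runtime." prefix
rid="${rid%%.Microsoft.NETCore.Runtime.CoreCLR*}"    # strip everything from the package name on
echo "$rid"    # prints: win7-x64
```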
+
+Next you need to restore and publish. The publish step will also trigger a build, but you can iterate on the build by calling `dotnet build` as
+needed.
```bat
- dir bin\Product\Windows_NT.x64.Release\.nuget\pkg
+dotnet restore
+dotnet publish
```
-and you will get name of the which looks something like this
+After you publish you will find all the binaries needed to run your application under `bin\Debug\netcoreapp2.0\win7-x64\publish\`.
+To run the application simply run the EXE that is in this publish directory (it is named after the app, or as specified in the project file).
```
- Microsoft.NETCore.Runtime.CoreCLR.1.2.0-beta-24528-0.nupkg
+.\bin\Debug\netcoreapp2.0\win7-x64\publish\HelloWorld.exe
```
-This gets us the version number, in the above case it is 1.2.0-beta-24528-0. We will
-use this in the next step.
+Thus at this point the publish directory has NO dependency outside that directory (including dotnet.exe). You can copy this
+directory to another machine and run the exe in it and it will 'just work' (assuming you are on the same OS). Note that your managed app's
+code is still in the 'app'.dll file; the 'app'.exe file is actually simply a rename of dotnet.exe.
+
+**NOTE**: Normally you would be able to run the application by calling `dotnet run`, however there are currently tooling issues that lead to an error similar
+to `A fatal error was encountered. The library 'hostpolicy.dll' required to execute the application was not found in ...`, so to work around that for
+now you have to manually run the application from the publish directory.
+
+
+## Update CoreCLR from raw binary output
-### Step 3: Modify the Project.json for the App to refer to your Runtime.
+Updating CoreCLR from raw binary output is easier for quick one-off testing, which is what this set of instructions
+outlines, but for consuming in a real .NET Core application you should use the nuget package instructions below.
-Replace the HelloWorld\project.json with [project.json](../../tests/src/Common/netcoreapp/project.json), and update
-`1.2.0-beta-XXXXX-X` version number in the dependencies section with the version number for your build of the runtime.
-This is the line that tells the tools that you want YOUR version of the CoreCLR runtime.
+The 'dotnet publish' step above creates a directory that has all the files necessary to run your app
+including the CoreCLR and the parts of CoreFX that were needed. You can use this fact to skip some steps if
+you wish to update the DLLs. For example, typically when you update CoreCLR you end up updating one of these DLLs:
+
+* coreclr.dll - Most modifications that are C++ code (with the exception of the JIT compiler and tools) update
+ this DLL.
+* System.Private.CoreLib.dll - If you modified C# code it will end up here.
+* System.Private.CoreLib.ni.dll - the native image (code) for System.Private.CoreLib. If you modify C# code
+you will want to update both of these together in the target installation.
+
+Thus after making a change and building, you can simply copy the updated binary from the `bin\Product\<OS>.<arch>.<flavor>`
+directory to your publication directory (e.g. `helloWorld\bin\Debug\netcoreapp2.0\win7-x64\publish`) to quickly
+deploy your new bits. In a lot of cases it is easiest to just copy everything from there to your publication directory.
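On Linux/macOS the copy might look like the sketch below. All paths are illustrative (substitute your own OS/arch/flavor and application name), and the `mkdir`/`touch` lines merely simulate the two directories so the sketch is self-contained:

```shell
# Sketch: copy freshly built runtime bits into an existing publish directory.
SRC=bin/Product/Linux.x64.Debug
DST=HelloWorld/bin/Debug/netcoreapp2.0/win7-x64/publish
mkdir -p "$SRC" "$DST"                                # simulate both directories for the sketch
touch "$SRC/coreclr.dll" "$SRC/System.Private.CoreLib.dll" "$SRC/System.Private.CoreLib.ni.dll"
cp "$SRC"/*.dll "$DST"/                               # deploy the new bits
ls "$DST"
```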
+
+You can build just the .NET library part of the build (debug; for release add the 'release' qualifier;
+on Linux / OSX use ./build.sh):
+```bat
+ .\build skiptests skipnative
```
- "Microsoft.NETCore.Runtime.CoreCLR": "1.2.0-beta-24528-0"
+This builds System.Private.CoreLib.dll AND System.Private.CoreLib.ni.dll (you will always want both) if you modify
+C# code. If you wish to only compile coreclr.dll you can do:
+```bat
+ .\build skiptests skipmscorlib
```
+Note that this technique does not work on .NET apps that have not been published (that is, you have not created
+a directory with all the DLLs needed to run the app). That is because the runtime is either fetched from the system-wide
+location that dotnet.exe installed, OR it is fetched from the local nuget package cache (which is where your
+build was put when you did a 'dotnet restore' and had a dependency on your particular runtime). In theory you
+could update these locations in place, but that is not recommended since they are shared more widely.
-The differences between the project.json generated by the tool and the replacement:
+## Update CoreCLR using runtime nuget package
-- Removed Microsoft.NETCore.App platform dependency (`"type": "platform"`). This tells the build system that you don't want to
-use runtime and libraries that came with the dotnet.exe tool but to fetch the dependencies from the Nuget cache. If you don't do this
-the tools will ignore your request to make the app use an explicitly specified runtime.
-- Added the 'runtimes' line at the top level. The runtime name includes the OS name and the architecture name
-you can find the appropriate name for your OS [here](https://github.com/dotnet/core-docs/blob/master/docs/core/rid-catalog.md).
-This tells the tools exactly which flavor of OS and processor architecture you are running on, so it can find the right
-Nuget package for the runtime.
-- Changed netcoreapp1.0 to netcoreapp1.1. This tells the tools that you want to use the latest .NET Core Framework.
-- Expanded Microsoft.NETCore.App metapackage into explicit list of the .NET Core Framework packages because of there is no good published
-build of Microsoft.NETCore.App metapackage for netcoreapp1.1 yet.
+Updating CoreCLR from raw binary output is easier for quick one-off testing, but using the nuget package is better
+for referencing your CoreCLR build in your actual application because it does not require manually copying files
+around each time the application is built and it plugs into the rest of the tool chain. This set of instructions covers
+the further steps needed to consume the runtime nuget package.
-### Step 4: Place your build directory and beta .NET Core Framework feed on your Nuget Path
+#### 1 - Get the Version number of the CoreCLR package you built.
-You can do this by creating a file named Nuget.Config in the 'HelloWorld' directory with the following XML
-Obviously **you need to update path in the XML to be the path to output directory for your build**.
-On Windows you also have the alternative of modifying the Nuget.Config
-at %HOMEPATH%\AppData\Roaming\Nuget\Nuget.Config (~/.nuget/NuGet/NuGet.Config on Linux) with the new location.
-This will allow your new
-runtime to be used on any 'dotnet restore' run by the current user.
-Alternatively you can skip creating this file and pass the path to your package directory using
-the -s SOURCE qualifer on the dotnet restore command below. The important part is that somehow
-you have told the tools where to find your new package.
+The application above uses the .NET Core Runtime version that
+came with the dotnet.exe tool. First you need to modify your app to ask for the .NET Core runtime
+you have built, and to do that, we need to know the version number of what you built. Get
+this by simply listing the name of the Microsoft.NETCore.Runtime.CoreCLR package you built.
-```xml
-<configuration>
- <packageRestore>
- <add key="enabled" value="True" />
- </packageRestore>
- <packageSources>
- <add key="Local CoreCLR" value="C:\Users\User\Source\Repos\coreclr-vancem\bin\Product\Windows_NT.x64.Release\.nuget\pkg" />
- <add key="myget.org dotnet-core" value="https://dotnet.myget.org/F/dotnet-core/api/v3/index.json" />
- </packageSources>
- <activePackageSource>
- <add key="All" value="(Aggregate source)" />
- </activePackageSource>
-</configuration>
+```bat
+ dir bin\Product\Windows_NT.x64.Release\.nuget\pkg
```
-### Step 5: Restore the Nuget Packages for your application
-and you will get name of the which looks something like this
+and you will get a name which looks something like this:
-This consist of simply running the command
```
- dotnet restore
+ Microsoft.NETCore.Runtime.CoreCLR.2.0.0-beta-25023-0.nupkg
```
-which should find the .NET Runtime package in your build output and unpacks it to the local Nuget cache (on windows this is in %HOMEPATH%\.nuget\packages)
+This gets us the version number, in the above case it is 2.0.0-beta-25023-0. We will
+use this in the next step.
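For scripting, the version can also be pulled out of the file name mechanically; a small POSIX shell sketch using the file name from above:

```shell
# Extract the package version from the nupkg file name.
pkg="Microsoft.NETCore.Runtime.CoreCLR.2.0.0-beta-25023-0.nupkg"
ver="${pkg#Microsoft.NETCore.Runtime.CoreCLR.}"   # drop the package id prefix
ver="${ver%.nupkg}"                               # drop the extension
echo "$ver"    # prints: 2.0.0-beta-25023-0
```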
-### Step 6: Run your application
+#### 2 - Add a reference to your runtime package
-You can run your 'HelloWorld' applications by simply executing the following in the 'HelloWorld' directory.
+Add the following lines to your project file:
```
- dotnet run
+ <ItemGroup>
+ <PackageReference Include="Microsoft.NETCore.Runtime.CoreCLR" Version="2.0.0-beta-25023-0" />
+ </ItemGroup>
```
-This will compile and run your app. What the command is really doing is building files in helloWorld\bin\Debug\netcoreapp1.1\win7-x64\
-and then running 'dotnet helloWorld\bin\Debug\netcoreapp1.1\win7-x64\HelloWorld.dll' to actually run the app.
-### Step 6: (Optional) Publish your application
+In your project you should also see a `RuntimeFrameworkVersion` property which represents the
+version of Microsoft.NETCore.App that is used for all the other dependencies. It is possible
+that the libraries between your runtime and that package are far enough apart to cause issues, so
+it is best to have the latest version of the Microsoft.NETCore.App package if you are working on the
+latest version of the source in the coreclr master branch. You can find the latest package by looking
+at https://dotnet.myget.org/feed/dotnet-core/package/nuget/Microsoft.NETCore.App.
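Putting the pieces together, the relevant parts of the project file might look like the sketch below. The `RuntimeFrameworkVersion` value is only a placeholder; substitute the latest Microsoft.NETCore.App version from the feed:

```xml
<PropertyGroup>
  <RuntimeIdentifier>win7-x64</RuntimeIdentifier>
  <!-- placeholder: use the latest Microsoft.NETCore.App version from the myget feed -->
  <RuntimeFrameworkVersion>2.0.0-beta-xxxxx-xx</RuntimeFrameworkVersion>
</PropertyGroup>
<ItemGroup>
  <PackageReference Include="Microsoft.NETCore.Runtime.CoreCLR" Version="2.0.0-beta-25023-0" />
</ItemGroup>
```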
-In Step 5 you will notice that the helloWorld\bin\Debug\netcoreapp1.1\win7-x64 directory does NOT actually contain your Runtime code.
-What is going on is that runtime is being loaded directly out of the local Nuget cache (on windows this is in %HOMEPATH%\.nuget\packages).
-The app can find this cache because of the HelloWorld.runtimeconfig.dev.json file which specifies that that this location should be
-added to the list of places to look for dependencies.
+#### 3 - Place your build directory and beta .NET Core Framework feed on your Nuget source list
-This setup fine for development time, but is not a reasonable way of allowing end users to use your new runtime. Instead what
-you want all the necessary code to be gather up so that the app is self-contained. This is what the following command does.
-```
- dotnet publish
-```
-After running this in the 'HelloWorld' directory you will see that the following path
+By default the dogfooding dotnet SDK will create a Nuget.Config file next to your project; if it doesn't,
+you can create one. Your config file will need a source for your local coreclr package directory as well
+as a reference to our nightly dotnet-core feed on myget:
-* helloWorld\bin\Debug\netcoreapp1.1\win7-x64\publish
+```xml
+<configuration>
+ <packageSources>
+ <add key="local coreclr" value="D:\git\coreclr\bin\Product\Windows_NT.x64.Debug\.nuget\pkg" />
+ <add key="dotnet-core" value="https://dotnet.myget.org/F/dotnet-core/api/v3/index.json" />
+ </packageSources>
+</configuration>
+```
+Obviously **you need to update the path in the XML to be the path to the output directory for your build**.
-Has all the binaries needed, including the CoreCLR.dll and System.Private.CoreLib.dll that you build locally. To
-run the application simple run the EXE that is in this publish directory (it is the name of the app, or specified
-in the project.json file). Thus at this point this directory has NO dependency outside this publication directory
-(including dotnet.exe). You can copy this publication directory to another machine and run( the exe in it and
-will 'just work'. Note that your managed app's code is still in the 'app'.dll file, the 'app'.exe file is
-actually simply a rename of dotnet.exe.
+On Windows you also have the alternative of modifying the Nuget.Config
+at `%HOMEPATH%\AppData\Roaming\Nuget\Nuget.Config` (`~/.nuget/NuGet/NuGet.Config` on Linux) with the new location.
+This will allow your new runtime to be used on any 'dotnet restore' run by the current user.
+Alternatively you can skip creating this file and pass the path to your package directory using
+the -s SOURCE qualifer on the dotnet restore command below. The important part is that somehow
+you have told the tools where to find your new package.
-### Step 7: (Optional) Confirm that the app used your new runtime
+Once you have made these modifications you will need to rerun restore and publish:
-Congratulations, you have successfully used your newly built runtime. To confirm that everything worked, you
-should compare the file creation timestamps for the CoreCLR.dll and System.Private.Runtime.dll in the publishing
-directory and the build output directory. They should be identical. If not, something went wrong and the
-dotnet tool picked up a different version of your runtime.
+```
+dotnet restore
+dotnet publish
+```
+Now your publication directory should contain your locally built CoreCLR binaries.
-### Step 8: Update BuildNumberMinor Environment Variable!
+#### 4 - Update BuildNumberMinor Environment Variable
One possible problem with the technique above is that Nuget assumes that distinct builds have distinct version numbers.
-Thus if you modify the source and create a new NuGet package you must it a new version number and use that in your
-application's project.json. Otherwise the dotnet.exe tool will assume that the existing version is fine and you
-won't get the updated bits. This is what the Minor Build number is all about. By default it is 0, but you can
-give it a value by setting the BuildNumberMinor environment variable.
+Thus if you modify the source and create a new NuGet package you must give it a new version number and use that in your
+application's project. Otherwise the dotnet.exe tool will assume that the existing version is fine and you
+won't get the updated bits. This is what the minor build number is all about. By default it is 0, but you can
+give it a value by setting the BuildNumberMinor environment variable:
```bat
set BuildNumberMinor=3
```
-before packaging. You should see this number show up in the version number (e.g. 1.2.0-beta-24521-03).
+before packaging. You should see this number show up in the version number (e.g. 2.0.0-beta-25023-03).
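Note the minor number is appended zero-padded to two digits; a POSIX shell sketch of the resulting version string:

```shell
# The minor build number is appended to the package version, zero-padded to two digits.
BuildNumberMinor=3
printf '2.0.0-beta-25023-%02d\n' "$BuildNumberMinor"    # prints: 2.0.0-beta-25023-03
```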
-As an alternative you can delete the existing copy of the package from the Nuget cache. For example on
-windows (on Linux substitute ~/ for %HOMEPATH%) you could delete
+As an alternative you can delete the existing copy of the package from the Nuget cache. For example on
+windows (on Linux substitute ~/ for %HOMEPATH%) you could delete
```bat
- %HOMEPATH%\.nuget\packages\Microsoft.NETCore.Runtime.CoreCLR\1.2.0-beta-24521-02
+ %HOMEPATH%\.nuget\packages\Microsoft.NETCore.Runtime.CoreCLR\2.0.0-beta-25023-02
```
-which should make things work (but is fragile, confirm wile file timestamps that you are getting the version you expect)
-
+which should make things work (but is fragile; confirm with file timestamps that you are getting the version you expect)
-## Step 8.1 (Optional) Quick updates in place.
+## (Optional) Confirm that the app used your new runtime
-The 'dotnet publish' step in step 6 above creates a directory that has all the files necessary to run your app
-including the CoreCLR and the parts of CoreFX that were needed. You can use this fact to skip some steps if
-you wish to update the DLLs. For example typically when you update CoreCLR you end up updating one of two DLLs
-
-* coreclr.dll - Most modifications (with the exception of the JIT compiler and tools) that are C++ code update
- this DLL.
-* System.Private.CoreLib.dll - If you modified C# it will end up here.
-* System.Private.CoreLib.ni.dll - the native image (code) for System.Private.Corelib. If you modify C# code
-you will want to update both of these together in the target installation.
+Congratulations, you have successfully used your newly built runtime. To confirm that everything worked, you
+should compare the file creation timestamps for the CoreCLR.dll and System.Private.CoreLib.dll in the publishing
+directory and the build output directory. They should be identical. If not, something went wrong and the
+dotnet tool picked up a different version of your runtime.
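One way to do the comparison from a POSIX shell is sketched below; the two files are stand-ins for the built and published copies of CoreCLR.dll (on Windows you could instead compare the dates shown by `dir`):

```shell
# Sketch: check whether the published runtime binary carries the same timestamp as the built one.
built=$(mktemp)                       # stand-in for bin/Product/<OS>.<arch>.<flavor>/coreclr.dll
cp -p "$built" "$built.published"     # stand-in for the publish directory copy (-p preserves timestamps)
if [ "$built" -nt "$built.published" ] || [ "$built.published" -nt "$built" ]; then
  echo "timestamps differ: a different runtime was picked up"
else
  echo "timestamps match"
fi
```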
-Thus after making a change and building, you can simply copy the updated binary from the `bin\Product\<OS>.<arch>.<flavor>`
-directory to your publication directory (e.g. `helloWorld\bin\Debug\netcoreapp1.1\win7-x64\publish`) to quickly
-deploy your new bits. You can build just the .NET Library part of the build by doing (debug, for release add 'release qualifier)
-(on Linux / OSX us ./build.sh)
-```bat
- .\build skiptests skipnative
+As a hint you could add some code like:
```
-Which builds System.Private.CoreLib.dll AND System.Private.CoreLib.ni.dll (you will always want both) if you modify
-C# code. If you wish to only compile the coreclr.dll you can do
- ```bat
- .\build skiptests skipmscorlib
+ var coreAssemblyInfo = System.Diagnostics.FileVersionInfo.GetVersionInfo(typeof(object).Assembly.Location);
+ Console.WriteLine($"Hello World from Core {coreAssemblyInfo.ProductVersion}");
```
-Note that this technique does not work on .NET Apps that have not been published (that is you have not created
-a directory with all DLLs needed to run the all) That is because the runtime is either fetched from the system-wide
-location that dotnet.exe installed, OR it is fetched from the local nuget package cache (which is where your
-build was put when you did a 'dotnet restore' and had a dependency on your particular runtime). In theory you
-could update these locations in place, but that is not recommended since they are shared more widely.
-
-### Using your Runtime For Real.
+That should tell you the version, which user and machine built the assembly, and the commit hash of the code
+at the time of building.
-You can see that it is straightforward for anyone to use your runtime. They just need to modify their project.json
-and modify their NuGet search path. This is the expected way of distributing your modified runtime.
--------------------------
## Using CoreRun to run your .NET Core Application
@@ -212,4 +236,4 @@ Generally using dotnet.exe tool to run your .NET Core application is the preferr
However there is a simpler 'host' for .NET Core applications called 'CoreRun' that can also be used. The value
of this host is that it is simpler (in particular it knows nothing about NuGet), but precisely because of this
it can be harder to use (since you are responsible for ensuring all the dependencies you need are gathered together)
-See [Using CoreRun To Run .NET Core Application](UsingCoreRun.md) for more.
+See [Using CoreRun To Run .NET Core Application](UsingCoreRun.md) for more.