* Archive CI perf results
I need to diagnose a failure that only reproduces in CI, but the CI scripts don't archive the logs on failure. This change fixes that.
* Add pipeline build per feedback
* Remove hardcoded paths in linkbench scripts
Linkbench has hardcoded paths based on VS140COMNTOOLS, which not all
machines will have (i.e., machines with only VS2017 installed). This
change removes the hardcoded paths and replaces them with checks that
the tools are on the PATH, which they will be in a VS environment of
any kind (including any environment in which VS140COMNTOOLS was
already set).
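The PATH check described above can be sketched in a few lines of python. The tool names below are hypothetical placeholders; the actual linkbench scripts check their own set of VS tools.

```python
import shutil

def verify_tools_on_path(tools):
    """Return the subset of `tools` not found on the PATH."""
    return [t for t in tools if shutil.which(t) is None]

# Hypothetical tool names for illustration only.
missing = verify_tools_on_path(["csc", "link.exe"])
if missing:
    print("Not in a VS environment; missing from PATH:", ", ".join(missing))
```

`shutil.which` honors the same lookup rules as the shell, so the check passes in any VS developer environment regardless of which VS version put the tools on the PATH.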
* Convert run-xunit-perf to python script
This change merges the two run-xunit-perf scripts (.sh and .cmd) into
one unified python script and updates the pipeline job to use the
python script. It also updates the linux jobs to use the new
build-tests.sh generatelayoutonly command so that we no longer need to
pull down corefx from the cloud. The unified python script lets us
update the linux scripting and windows scripting at the same time, so
that one does not lag behind the other (such as when we add new
configurations or options like slicing). This change also turns linux
testing back on by default for PRs.
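A unified script like this typically dispatches on the platform in one place. The sketch below is illustrative only, assuming hypothetical binary names; it is not the actual run-xunit-perf implementation.

```python
import sys

def build_test_command(test_dll):
    """Build the test invocation for the current OS (illustrative names)."""
    host = "corerun.exe" if sys.platform == "win32" else "corerun"
    return [host, "xunit.console.dll", test_dll]

print(build_test_command("PerfTests.dll"))
```

Keeping the platform branch in one helper is what lets new options (e.g. slicing) land in windows and linux runs simultaneously.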
This is mostly for testing purposes.
* Separate large perf benchmarks into their own legs
This change splits the windows perf test stages into 6 pipelined legs per flavor, reducing the time spent running the perf tests and the total time of the job. It also decreases the size of the stashed bin directory by deleting the obj directory. Finally, we move the benchstones suites into one directory (moving BenchF and BenchI into a shared dir called Benchstones).
* Change name of perf jobs to reflect the actual OS they run on
* Re-add Ubuntu 14.04
* Separate Profile=On and Off for perf pipeline
In pipeline jobs, we can split the Profile=On and Profile=Off
configurations into separate runs. Doing so decreases the time spent
running windows jobs since we no longer run them in sequence. Only run
Profile=On scenarios for PRs, which should reduce the time spent
running the PR job.
This change also adds the xunit max iteration parameters.
* Remove Linux perf from pr pipeline job
We don't have many linux perf machines, and the sheer number of PR
jobs is overloading them, so the perf leg is taking too long. Disable
them for PRs for now, until we can increase capacity.
* Disable baseline jobs
- Added a stability prefix to the scenario benchmark (JitBench)
- Specified the output directory to the `run-xunit-perf.cmd` script, avoiding the extra step of xcopying files to the archive folder.
- Added a command line parser class to the illink scenario, and changed its behavior so it no longer fails when an unrecognized command line option is passed through to xUnit.
- Saved the output log of the tests into the sandbox-logs folder.
- Updated the label of the machine pool used by the illink scenario
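Tolerating unrecognized options while forwarding them to xUnit could be done with `argparse.parse_known_args`, which returns the leftovers instead of erroring. This is a sketch under that assumption, not the illink scenario's actual parser class; the option names are hypothetical.

```python
import argparse

# parse_known_args() splits recognized options from unknown ones,
# so unknown flags can be passed through to xUnit untouched.
parser = argparse.ArgumentParser()
parser.add_argument("--output-dir", default="sandbox-logs")
known, passthrough = parser.parse_known_args(
    ["--output-dir", "logs", "--xunit-max-iterations", "20"])
print(known.output_dir)  # logs
print(passthrough)       # ['--xunit-max-iterations', '20']
```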
This change converts our perf testing to use pipeline jobs. Pipeline
jobs allow us to do the following:
1) Test on the same commit for each of the test legs
2) Parallelize the build and test steps.
3) Separate the build and test steps from one another. This gives us the
ability to use the same build assets for all of the test legs of the
same configuration. It also allows us to build on virtual machines and
test on perf machines, so we only use the perf resources for testing.
4) Have different test scenarios for PRs and rolling builds. This
isn't strictly a benefit of pipeline jobs, but it is certainly made
easier by them.
5) Have one trigger for PR jobs that gets us all the perf testing
scenarios.
This change also cleans up the groovy scripting for perf testing.
This is step one of adding the pipeline job for performance runs. In
this change, we add perf-pipelinejobs.groovy, which defines what Jenkins
will see in the UI. perf-pipeline.groovy is basically an empty job, so
we can test the actual perf-pipeline work after this is checked in,
which is step two.