|
Summary:
closes #18873
Doesn't fail the build on warnings yet.
Also fixes the most severe shellcheck warnings.
Limited to `.jenkins/pytorch/` at this time.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18874
Differential Revision: D14936165
Pulled By: kostmo
fbshipit-source-id: 1ee335695e54fe6c387ef0f6606ea7011dad0fd4
|
|
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18959
ghimport-source-id: a934163fa34cb2019732d5f49dc7290c376bf156
Differential Revision: D14831246
Pulled By: ezyang
fbshipit-source-id: beb92dc4ee8c82f4c8259c081dd72e477fe7a9d0
|
|
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18538
ghimport-source-id: 665b09f158d1c5dd94686d4212792504b55b7f73
Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18538 Completely synchronize behavior of Facebook flake8 and public flake8.**
Previously, developers at Facebook had the very funny experience
wherein /usr/local/bin/flake8 behaved differently than a freshly
installed flake8 from pip. In this commit, I add enough ignores to
.flake8 and install enough plugins to make the Facebook flake8
and public flake8 line up exactly. This means you don't have
to care which flake8 you use; they will all report accurate information
on your Python files.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14652336
fbshipit-source-id: ba7776eaa139cf2e3df2e65349da6fd7c99acca4
|
|
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18507
ghimport-source-id: 1c3642befad2da78a7e5f39d6d58732b85c76267
Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18507 Upgrade flake8-bugbear to master, fix the new lints.**
It turns out Facebook is internally using the unreleased master
flake8-bugbear, so upgrading it grabs a few more lints that Phabricator
was complaining about but we didn't get in open source.
A few of the getattr sites that I fixed look very suspicious (they're
written as if Python were a lazy language), but I didn't look more
closely into the matter.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14633682
fbshipit-source-id: fc3f97c87dca40bbda943a1d1061953490dbacf8
|
|
Summary:
closes #17336
Do not overwrite config.yml if script throws an error
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17485
Differential Revision: D14604388
Pulled By: kostmo
fbshipit-source-id: 5024545e3a8711abdbc0800911c766929dbca196
|
|
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18193
ghimport-source-id: 540859cf0b238a9832f45b3f4c2351e3343fc1a2
Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18193 Turn on Travis builds for ghstack PRs.**
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14529945
fbshipit-source-id: 4476e996e311a04f2a997ca9b7c4cf2157dd6286
|
|
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18192
ghimport-source-id: 9523a09d7ec202ef08cf0ecdf48c42739ea6b0ce
Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18192 Delete bugbear from Python 2 lint.**
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Differential Revision: D14529240
fbshipit-source-id: 1a433b53dd38d1c455e8c0750d97c594ac51ef09
|
|
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18138
ghimport-source-id: be62a71ef98714e6f168a00f84120f612363528e
Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18138 Enable flake8-bugbear line length checking.**
This enables flake8-bugbear's line length checker (B950), which permits
violations of up to 10% over the limit but reports the "true" limit when you go over.
I had to ignore a bunch of flake8-bugbear's other checks when I
turned this on. They're good checks though (they're turned on
in fbcode) and we should fix them eventually.
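A minimal .flake8 sketch of this kind of setup (the length limit and the specific codes here are illustrative, not the repository's actual configuration):

```ini
[flake8]
max-line-length = 120
# B9 turns on bugbear's opinionated checks, including B950.
select = B,C,E,F,W,B9
# B950 allows up to 10% over max-line-length before reporting,
# so the stricter E501 is ignored in its favor.
ignore = E501
```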
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Reviewed By: salexspb
Differential Revision: D14508678
fbshipit-source-id: 2610ecc0dd43cc0788d77f4d024ebd85b26b8d41
|
|
Summary:
Use flake8 with the mypy checks installed so that our linter matches fbcode. Mypy type errors also provide valuable signal.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17721
Differential Revision: D14357778
Pulled By: eellison
fbshipit-source-id: d8c9ea3fe3b5f550c3b70fe259e0eabf95e4c92d
|
|
Summary:
Reorder some env vars for consistency.
Add a README and a notice at the top of config.yml.
Generate more of the YAML from Python.
closes #17322
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17323
Differential Revision: D14186734
Pulled By: kostmo
fbshipit-source-id: 23b2b2c1960df6f387f1730c8df1ec24a30433fd
|
|
Summary:
Diagram preview:
![binarysmoketests-config-dimensions](https://user-images.githubusercontent.com/261693/53040977-a0f88d00-3437-11e9-9190-796cc243e0f9.png)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17189
Differential Revision: D14141362
Pulled By: kostmo
fbshipit-source-id: 0625a1234d0307c6be79f17e756ddb1cc445b374
|
|
Summary:
This initial PR splits the `.circleci/config.yml` file into several smaller files that are stitched verbatim back into the original. A proof of concept of dynamically generating yaml for the job configuration list is also introduced.
Since the `config.yml` file must exist in the repo in its final form, there must be a manual update and check-in step to regenerate `config.yml` from its constituent parts.
Consistency between the checked-in `config.yml` file and the authoritative source data is enforced at build time through TravisCI.
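The build-time consistency check can be sketched as follows (function names are illustrative, not the actual script's API): stitch the fragments back together verbatim and compare against the checked-in file.

```python
# Sketch of the consistency check: regenerate config.yml from its
# constituent parts and verify it matches the checked-in copy.

def assemble_config(parts):
    """Stitch the split-out YAML fragments back into one document, verbatim."""
    return "".join(parts)

def check_in_sync(checked_in, parts):
    """True when the checked-in config.yml matches the regenerated output."""
    return checked_in == assemble_config(parts)
```

In CI, a mismatch would fail the build with a message telling the author to regenerate and commit `config.yml`.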
closes #17038
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17039
Reviewed By: yf225
Differential Revision: D14109059
Pulled By: kostmo
fbshipit-source-id: bc04a73145290358854f5a5e552a45e559118fc3
|
|
Summary:
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16628
Differential Revision: D13922097
Pulled By: ezyang
fbshipit-source-id: eb16d90cc61167af5edc0c4e361d7a807a3099e5
|
|
Summary:
It turns out that clang-tidy is bundled with Travis's standard Trusty distribution, so there is no need to install it manually.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16164
Differential Revision: D13738986
Pulled By: suo
fbshipit-source-id: d0cd76c615625b2ed7f18951289412989f15849d
|
|
Differential Revision: D13552080
Original commit changeset: 462a73894c16
fbshipit-source-id: ebfc5aa3343cebabbc24ff39e4e9841a372443e2
|
|
Summary:
Simple check that runs against your PR's changes and complains if running clang-format would have created a change. Does nothing when run against master, so it's "safe" to accept changes that fail this check and it won't break the build.
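The core of such a check can be sketched like this (not the actual CI script; file discovery via git is elided, and `formatter` stands in for invoking clang-format):

```python
# Sketch of the "complain if clang-format would have created a change" check:
# format each changed file's contents and report the files whose formatted
# output differs from what is on disk.

def would_reformat(source, formatter):
    """True when running the formatter would change the file's contents."""
    return formatter(source) != source

def files_needing_format(files, formatter):
    """Given {path: contents}, return the paths the formatter would rewrite."""
    return [path for path, src in files.items() if would_reformat(src, formatter)]
```

A non-empty result list is what the check reports back on the PR.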
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15543
Reviewed By: soumith
Differential Revision: D13552080
Pulled By: suo
fbshipit-source-id: 462a73894c16e7108806af7fa88440c377d4d0d2
|
|
Summary:
This PR adds clang-format automation:
- It only checks on whitelisted files, so we can enable incrementally without noise
- There is a pre-commit hook provided that will do the same check, plus prompt users to apply the clang-format changes (no change is made without the user agreeing).
My plan is to migrate over whole files at a time, clang-formatting them and then adding them to the whitelist. Doing it this way should avoid too many merge pains (the most you'll have to do is run clang-format on the affected file before rebasing).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15254
Differential Revision: D13515888
Pulled By: suo
fbshipit-source-id: d098eabcc97aa228c4dfce8fc096c3b5a45b591f
|
|
Summary:
This will document `torch::from_blob` and such.
soumith ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14381
Differential Revision: D13216560
Pulled By: goldsborough
fbshipit-source-id: 112f60e45e4d38a8a9983fa71e9cc56bc1a73465
|
|
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/12963
Differential Revision: D10510026
Pulled By: goldsborough
fbshipit-source-id: b6b9634a7a2575ff4e2983321d2e4e5829626347
|
|
Summary:
At long last, we will have clang-tidy enabled in CI. For a while I thought I could clean up the project enough to enable clang-tidy with all checks enabled, but I figure it's smarter to set up the minimal checks and at least have those in CI. We can fix more going forward.
ezyang apaszke
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12213
Differential Revision: D10183069
Pulled By: goldsborough
fbshipit-source-id: 7ecd2d368258f46efe23a2449c0a206d10f3a769
|
|
Differential Revision: D9919120
Pulled By: goldsborough
fbshipit-source-id: bf14cbe4ab79524495957cb749828046af864aab
|
|
Summary:
This PR adds a .travis.yml check for our C++ documentation. The goal is to avoid any documentation/comments in our C++ code that would break the doxygen output and possibly ruin the C++ documentation site (currently https://pytorch.org/cppdocs).
For this, we:
1. Run doxygen and record any warnings,
2. Filter out some known bogus warnings,
3. Count the remaining warnings,
4. Fail the check if (3) is non-zero.
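Steps 2 and 3 can be sketched as below (the known-bogus patterns here are illustrative assumptions, not the real filter list):

```python
# Sketch of the doxygen check: scan the recorded doxygen output for warning
# lines, drop known-bogus ones, and count what remains; a non-zero count
# fails the check.
import re

KNOWN_BOGUS = [
    re.compile(r"documented symbol .* was not declared"),
]

def count_real_warnings(doxygen_output):
    """Count warning lines that survive the known-bogus filter (step 3)."""
    warnings = [line for line in doxygen_output.splitlines() if "warning:" in line]
    return sum(1 for w in warnings if not any(p.search(w) for p in KNOWN_BOGUS))
```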
soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11124
Differential Revision: D9651011
Pulled By: goldsborough
fbshipit-source-id: 30f776d23bb6d6c482c54db32828b4b99547e87b
|
|
Summary:
Flake8 will produce different results on Python 2 and 3. Python 3.7 has `async` as a reserved word: https://github.com/pytorch/pytorch/pull/4999.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9953
Differential Revision: D9035415
Pulled By: soumith
fbshipit-source-id: 8a46e028a2e20a7e3f6d90137020268d65a7cc64
|
|
* Add python typing module as build dependency
* Change output_declarations to be a NamedTuple
* Add mypy configuration files
mypy-files.txt includes a list of all files that should be type checked
with mypy. Run mypy with `mypy @mypy-files.txt`.
mypy.ini includes mypy options. Unfortunately this can't be merged with
mypy-files.txt.
Update .travis.yml so that one doesn't have to specify what files to
type check inside it.
* Add RuntimeError on missing `typing` module
Alerts users to the new build dependency.
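A minimal sketch of that guarded import (the error message wording is illustrative):

```python
# Fail early with actionable instructions when the `typing` backport is
# missing on old Pythons, rather than with a bare ImportError mid-build.
try:
    import typing  # noqa: F401
except ImportError:
    raise RuntimeError(
        "Missing build-time dependency: the `typing` module. "
        "Try `pip install typing` and rebuild."
    )
```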
|
|
* Initial type hints for function_wrapper
* Don't break python 2
* Update TopEnvironment
* Add mypy check to travis
* Add .mypy_cache to .gitignore
|
|
* Batchnorm in ATen
This commit moves BatchNorm derivatives into ATen, eliminating
torch/csrc/autograd/functions/batch_normalization.cpp
Some refactoring along the way:
- Functions got renamed to remove _forward from their names
- CuDNN batchnorm forward was modified to return save_mean/save_std instead of
taking them as parameters. To avoid returning undefined Variables, these return
(small) uninitialized tensors when they are not used.
- THNN batch normalization takes care of resizing save_mean and save_std on
forward.
- There are some shenanigans re batchnorm backwards in eval mode. I'm tracking
that in #4284
- I decided not to introduce buffers as a proper concept in ATen, which means
that tensors like running_mean/running_var are variables in ATen. This meant
there needed to be some adjustments to how we *trace* such variables; the
new strategy is if we can't find a Value for a variable, we look and see
if we have a Value for the buffer pointed to by the variable, before
finally falling back on constant.
- This PR finally reliably triggered OOM on Travis builds; I fixed this by reducing
the number of parallel jobs.
- Stop using std::string when it's not necessary.
- Remove training parameter from cudnn_batch_norm_backward, because it
doesn't make sense; cuDNN doesn't implement the math for evaluation mode
batchnorm backwards.
- batchnorm_double_backward is now in an anonymous namespace, as it
no longer needs to be called from torch/csrc
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
|
|
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
|
|
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
|
|
This commit adds a new exporter pass which takes a graph and returns
a string of the human-readable protobuf representation of a model.
We have two strategies for how conversions are implemented:
- If a Python autograd function has a primspec static method, we invoke
it to get the Toffee conversion. Use torch.toffee.op to generate the
format expected to be returned. The particular data representation is opaque
and subject to change in the future.
- Otherwise, there's a giant if statement in the exporter, which manually
uses the JIT IR C++ API and Toffee IR C++ protobuf API to convert.
You must check out a copy of the ToffeeIR repo
https://github.com/ProjectToffee/ToffeeIR at torch/lib; at the moment
we don't have a subtree/submodule set up.
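The two-strategy dispatch can be sketched like this (all names here are illustrative, not the actual exporter API; the table stands in for the C++ "giant if statement"):

```python
# Prefer a function's own `primspec` hook for the Toffee conversion,
# otherwise fall back to a hand-written conversion table.

FALLBACK_CONVERTERS = {}

def export_node(fn, *inputs):
    """Convert one autograd function to its Toffee representation."""
    primspec = getattr(type(fn), "primspec", None)
    if primspec is not None:
        return primspec(*inputs)
    converter = FALLBACK_CONVERTERS.get(type(fn).__name__)
    if converter is None:
        raise RuntimeError("no Toffee conversion for " + type(fn).__name__)
    return converter(*inputs)
```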
Technical debt in this commit:
- To get protobuf headers in scope, we unconditionally add $CONDA_PREFIX/include
to the include path. This needs to be replaced with a more robust mechanism.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
|
|
* Opt into Trusty builds.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Bump to 2.7.9.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
|
|
Here's the command I used to invoke autopep8 (in parallel!):
git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i
Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything which it can handle safely, and to disable any rules
which are tricky or controversial to address. We may want to come back
and re-enable some of these rules later, but I'm trying to make this
patch as safe as possible.
Also configures flake8 to match pep8's behavior.
Also configures TravisCI to check the whole project for lint.
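A sketch of the matching setup.cfg sections (the length limit and the specific codes are illustrative): ignoring the same rules in both tools keeps autopep8's output and flake8's reports in agreement.

```ini
[pep8]
max-line-length = 120
ignore = E226,E704

[flake8]
max-line-length = 120
ignore = E226,E704
```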
|
|