Summary:
This PR adds clang-format automation:
- It only checks on whitelisted files, so we can enable incrementally without noise
- A pre-commit hook is provided that does the same check and prompts the user to apply the clang-format changes (no change is made without the user agreeing).
My plan is to migrate over whole files at a time, clang-formatting them and then adding them to the whitelist. Doing it this way should avoid too many merge pains (the most you'll have to do is run clang-format on the affected file before rebasing).
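For illustration only, a minimal sketch of a whitelist-driven check along these lines (the whitelist file name, flags, and prompt flow are assumptions, not the exact script added here):

    # clang_format_check.py -- a hypothetical sketch, not the actual script in this PR
    import subprocess
    import sys

    def read_whitelist(path="CLANG_FORMAT_WHITELIST"):  # assumed file name
        with open(path) as f:
            return [line.strip() for line in f if line.strip()]

    def needs_formatting(filename):
        # clang-format emits a <replacement> entry for every change it would
        # make; any such entry means the file is not clang-format clean.
        out = subprocess.check_output(
            ["clang-format", "-style=file", "-output-replacements-xml", filename])
        return b"<replacement " in out

    def main(apply_changes=False):
        dirty = [f for f in read_whitelist() if needs_formatting(f)]
        if dirty and apply_changes:
            # The pre-commit hook variant would prompt the user before doing this.
            subprocess.check_call(["clang-format", "-style=file", "-i"] + dirty)
        return 1 if dirty else 0

    if __name__ == "__main__":
        sys.exit(main(apply_changes="--apply" in sys.argv))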
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15254
Differential Revision: D13515888
Pulled By: suo
fbshipit-source-id: d098eabcc97aa228c4dfce8fc096c3b5a45b591f
|
Summary:
This will document `torch::from_blob` and such.
soumith ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14381
Differential Revision: D13216560
Pulled By: goldsborough
fbshipit-source-id: 112f60e45e4d38a8a9983fa71e9cc56bc1a73465
|
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/12963
Differential Revision: D10510026
Pulled By: goldsborough
fbshipit-source-id: b6b9634a7a2575ff4e2983321d2e4e5829626347
|
Summary:
At long last, we will have clang-tidy enabled in CI. For a while I thought I could clean up the project enough to enable clang-tidy with all checks enabled, but I figure it's smarter to set up the minimal checks and at least have those in CI. We can fix more going forward.
ezyang apaszke
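As a rough, hedged sketch of what running a minimal set of checks in CI can look like (the check names, build directory, and diff base below are assumptions, not this PR's exact setup):

    # run_clang_tidy.py -- illustrative sketch, not the actual tooling from this PR
    import subprocess
    import sys

    # Start with a small, explicitly enabled set of checks; grow it over time.
    CHECKS = "-*,modernize-use-nullptr,readability-container-size-empty"

    def changed_cpp_files(base="origin/master"):
        out = subprocess.check_output(
            ["git", "diff", "--name-only", base, "--", "*.cpp", "*.h"])
        return [f for f in out.decode().splitlines() if f]

    def main():
        files = changed_cpp_files()
        if not files:
            return 0
        # -p points at the directory containing compile_commands.json.
        result = subprocess.run(["clang-tidy", "-p", "build", "-checks=" + CHECKS] + files)
        return result.returncode

    if __name__ == "__main__":
        sys.exit(main())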
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12213
Differential Revision: D10183069
Pulled By: goldsborough
fbshipit-source-id: 7ecd2d368258f46efe23a2449c0a206d10f3a769
|
Differential Revision: D9919120
Pulled By: goldsborough
fbshipit-source-id: bf14cbe4ab79524495957cb749828046af864aab
|
Summary:
This PR adds a .travis.yml check for our C++ documentation. The goal is to avoid any documentation/comments in our C++ code that would break the doxygen output and possibly ruin the C++ documentation site (currently https://pytorch.org/cppdocs).
For this, we:
1. Run doxygen and record any warnings,
2. Filter out some known bogus warnings,
3. Count the remaining warnings,
4. Fail the check if (3) is non-zero.
soumith
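A minimal sketch of those four steps (the Doxyfile path and the bogus-warning pattern are assumptions for illustration):

    # check_doxygen.py -- illustrative sketch of steps 1-4 above
    import re
    import subprocess
    import sys

    KNOWN_BOGUS = [
        re.compile(r"documented symbol .* was not declared or defined"),  # example pattern
    ]

    def main():
        # 1. Run doxygen and record any warnings (doxygen writes them to stderr).
        proc = subprocess.run(["doxygen", "docs/cpp/Doxyfile"],
                              capture_output=True, text=True)
        warnings = [line for line in proc.stderr.splitlines() if "warning:" in line]
        # 2. Filter out known bogus warnings.
        warnings = [w for w in warnings if not any(p.search(w) for p in KNOWN_BOGUS)]
        # 3. Count the remaining warnings; 4. fail the check if the count is non-zero.
        for w in warnings:
            print(w)
        return 1 if warnings else 0

    if __name__ == "__main__":
        sys.exit(main())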
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11124
Differential Revision: D9651011
Pulled By: goldsborough
fbshipit-source-id: 30f776d23bb6d6c482c54db32828b4b99547e87b
|
Summary:
Flake8 will produce different results on Python 2 and 3: Python 3.7 has `async` as a reserved word (see https://github.com/pytorch/pytorch/pull/4999).
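For example, the following is accepted by Python 2 (and by flake8 running under it) but is a syntax error under Python 3.7, where `async` became a keyword:

    # Fine on Python 2; SyntaxError on Python 3.7+ because `async` is reserved.
    async = True

    def schedule(task, async=False):
        return (task, async)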
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9953
Differential Revision: D9035415
Pulled By: soumith
fbshipit-source-id: 8a46e028a2e20a7e3f6d90137020268d65a7cc64
|
* Add the Python `typing` module as a build dependency
* Change output_declarations to be a NamedTuple
* Add mypy configuration files
mypy-files.txt includes a list of all files that should be type-checked
with mypy. Run mypy with `mypy @mypy-files.txt`.
mypy.ini includes mypy options. Unfortunately this can't be merged with
mypy-files.txt.
Update .travis.yml so that one doesn't have to specify what files to
type check inside it.
* Add RuntimeError on missing `typing` module
Alerts users to the new build dependency.
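A minimal sketch of that guard (the exact message and its location in the build scripts are assumptions):

    # Early in the build: fail loudly if the `typing` module is not installed.
    try:
        import typing  # noqa: F401
    except ImportError:
        raise RuntimeError(
            "Missing build dependency: the `typing` module is required. "
            "Install it with `pip install typing`.")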
|
* Initial type hints for function_wrapper
* Don't break python 2
* Update TopEnvironment
* Add mypy check to travis
* Add .mypy_cache to .gitignore
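Since the hints must not break Python 2, they would use PEP 484 comment-style annotations rather than function annotations; an illustrative, hypothetical example:

    from typing import Dict, List

    def group_by_backend(declarations):
        # type: (List[Dict[str, str]]) -> Dict[str, List[Dict[str, str]]]
        """Python-2-compatible hints: mypy reads the `# type:` comment."""
        grouped = {}  # type: Dict[str, List[Dict[str, str]]]
        for decl in declarations:
            grouped.setdefault(decl.get('backend', 'CPU'), []).append(decl)
        return grouped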
|
* Batchnorm in ATen
This commit moves BatchNorm derivatives into ATen, eliminating
torch/csrc/autograd/functions/batch_normalization.cpp
Some refactoring along the way:
- Functions got renamed to remove _forward from their names
- CuDNN batchnorm forward was modified to return save_mean/save_std instead of
taking them as parameters. To avoid returning undefined Variables, these return
(small) uninitialized tensors when they are not used.
- THNN batch normalization takes care of resizing save_mean and save_std on
forward.
- There are some shenanigans re batchnorm backwards in eval mode. I'm tracking
that in #4284
- I decided not to introduce buffers as a proper concept in ATen, which means
that tensors like running_mean/running_var are variables in ATen. This required
some adjustments to how we *trace* such variables; the new strategy is that if
we can't find a Value for a variable, we look for a Value for the buffer pointed
to by the variable, before finally falling back on a constant (sketched below).
- This PR finally reliably triggered OOM on Travis builds; I fixed this by reducing
the number of parallel jobs.
- Stop using std::string when it's not necessary.
- Remove training parameter from cudnn_batch_norm_backward, because it
doesn't make sense; cuDNN doesn't implement the math for evaluation mode
batchnorm backwards.
- batchnorm_double_backward is now in an anonymous namespace, as it
no longer needs to be called from torch/csrc
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
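To make the buffer fallback concrete, here is a rough Python-style sketch of the lookup order (the names are hypothetical; the real logic lives in the C++ tracer):

    def lookup_trace_value(state, var):
        # 1. Prefer a Value already recorded for the variable itself.
        if var in state.value_map:
            return state.value_map[var]
        # 2. Otherwise, check whether the buffer the variable points to has a Value.
        buffer = state.buffer_map.get(var)  # hypothetical mapping
        if buffer is not None and buffer in state.value_map:
            return state.value_map[buffer]
        # 3. Finally, fall back to emitting a constant from the tensor's data.
        return state.emit_constant(var.data)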
|
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
|
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
|
This commit adds a new exporter pass which takes a graph and returns
a string of the human-readable protobuf representation of a model.
We have two strategies for how conversions are implemented:
- If a Python autograd function has a primspec static method, we invoke
it to get the Toffee conversion. Use torch.toffee.op to generate the
format expected to be returned. The particular data representation is opaque
and subject to change in the future.
- Otherwise, there's a giant if statement in the exporter, which manually
uses the JIT IR C++ API and Toffee IR C++ protobuf API to convert.
You must check out a copy of the ToffeeIR repo
https://github.com/ProjectToffee/ToffeeIR at torch/lib; at the moment
we don't have a subtree/submodule set up.
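As an illustration of the first strategy, a primspec hook on an autograd function might look roughly like this (the signature and torch.toffee.op arguments are assumptions; as noted above, the representation is opaque and subject to change):

    import torch
    from torch.autograd import Function

    class MyRelu(Function):  # hypothetical example op
        @staticmethod
        def primspec(*inputs):
            # The exporter calls this to get the Toffee conversion; torch.toffee.op
            # builds the (opaque, subject-to-change) representation it expects back.
            return torch.toffee.op("Relu", *inputs)

        def forward(self, input):
            return input.clamp(min=0)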
Technical debt in this commit:
- To get protobuf headers in scope, we unconditionally add $CONDA_PREFIX/include
to the include path. This needs to be replaced with a more robust mechanism.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
|
* Opt into Trusty builds.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Bump to 2.7.9.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
|
Here's the command I used to invoke autopep8 (in parallel!):
git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i
Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything it can handle safely, and to disable any rules that
are tricky or controversial to address. We may want to come back and
re-enable some of these rules later, but I'm trying to make this patch
as safe as possible.
Also configures flake8 to match pep8's behavior.
Also configures TravisCI to check the whole project for lint.