author    Gregory Chanan <gchanan@fb.com>  2019-02-15 13:38:22 -0800
committer Facebook Github Bot <facebook-github-bot@users.noreply.github.com>  2019-02-15 13:41:04 -0800
commit    9b5d3f6f5e54fd25d7a06882ba81a62afc3c2e64
tree      1eb8fe7c83f4a792a84bb1479ce4af9659e12dd8 /.circleci
parent    70ee257ad49ec946378970dca7e2a990210d8e1f
Stop reassigning (output) reference arguments in BinaryOps. (#17059)
Summary:
The binary ops that use TensorIterator employ a trick so the code is written only once for the out and non-out variants:
1) Have the non-out variant call the out variant with an undefined tensor.
2) The out variant then reassigns the result tensor to the output of the TensorIterator. This is a no-op when a valid tensor was passed, and it correctly propagates the result back to the non-out variant, which is legal because in that case it's just reassigning an undefined tensor.
I believe other solutions to this problem would require an unnecessary reference bump, e.g. defining another out variant that returns a Tensor rather than a reference.
Unfortunately, this trick doesn't work with const references, which is what we want to move our output arguments to (const doesn't actually provide const correctness here, but writers mistakenly reassign the parameter in the case where it isn't an out variant).
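The pattern described above can be sketched with a minimal stand-in for `at::Tensor`. This is a hypothetical illustration, not PyTorch's actual code: the `Tensor` struct, `add`, and `add_out` names are invented here, and a `shared_ptr<int>` plays the role of the tensor's storage (null meaning "undefined").

```cpp
#include <cassert>
#include <memory>

// Hypothetical stand-in for at::Tensor: undefined when it holds no storage.
struct Tensor {
    std::shared_ptr<int> impl;  // null => undefined tensor
    bool defined() const { return impl != nullptr; }
};

// Out variant. The "trick": reassign the reference argument `result` to the
// freshly computed tensor. When the caller passed an undefined placeholder
// (the non-out path), this reassignment is what hands the result back; when
// the caller passed a valid output tensor, real code would have written into
// its storage and the reassignment would be a no-op.
Tensor& add_out(Tensor& result, const Tensor& a, const Tensor& b) {
    Tensor computed{std::make_shared<int>(*a.impl + *b.impl)};
    result = computed;  // would not compile if `result` were const Tensor&
    return result;
}

// Non-out variant: call the out variant with an undefined tensor, so the
// computation is implemented exactly once.
Tensor add(const Tensor& a, const Tensor& b) {
    Tensor result;  // undefined
    return add_out(result, a, b);
}
```

As the commented line shows, making the output parameter a `const Tensor&` breaks this scheme, because the reassignment `result = computed` requires a mutable reference.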
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17059
Differential Revision: D14068402
Pulled By: gchanan
fbshipit-source-id: 89fef177a1e174dbe2858e2eae0f6d85460b07d1
Diffstat (limited to '.circleci')
0 files changed, 0 insertions, 0 deletions