| field | value | date |
|---|---|---|
| author | Shen Li <shenli@fb.com> | 2019-01-16 09:02:44 -0800 |
| committer | Facebook Github Bot <facebook-github-bot@users.noreply.github.com> | 2019-01-16 09:06:26 -0800 |
| commit | a2af554e6f61ce56ef57a3fa2d4aa31cf9d70147 (patch) | |
| tree | 2a6fbe0e9caea8c55a3cec2e78d46f42ca76c136 /tools | |
| parent | 411173757e02d4cdc16bc551e639a89f30bcb947 (diff) | |
Port legacy all(*) to ATen (#15540)
Summary:
Questions:
1. ~This PR disables `common_dtype` computation [in `TensorIterator.cpp`](https://github.com/mrshenli/pytorch/blob/all/aten/src/ATen/native/TensorIterator.cpp#L489-L491) for `all*` operators. The reason is that [this code](https://github.com/mrshenli/pytorch/blob/all/aten/src/ATen/native/TensorIterator.cpp#L120) otherwise complains about a type mismatch, where `op.tensor` has type `Variable[CPUByteType]` while `op` is `CPUByteType`. I am not sure whether this is the right solution for this problem.~
2. Should I clean up all occurrences of `_th_all` and `_th_all_out` (and `logicalAnd`, `logicalAndAll`)?
3. Do I need to implement derivatives for `all`?
gchanan
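For reference, the two overloads being ported reduce a byte tensor with a logical AND. A minimal illustration through the public Python API (values here are illustrative; at the time of this PR `all` was only defined for `uint8` tensors, hence the `CPUByteType` above):

```python
import torch

# all(Tensor self): logical AND over every element.
x = torch.tensor([1, 1, 0, 1], dtype=torch.uint8)
print(x.all())  # tensor(0, dtype=torch.uint8) -- one element is zero

# all(Tensor self, int64_t dim, bool keepdim): AND along one dimension.
m = torch.tensor([[1, 1],
                  [1, 0]], dtype=torch.uint8)
print(m.all(dim=1))                # tensor([1, 0], dtype=torch.uint8)
print(m.all(dim=1, keepdim=True))  # same values, shape (2, 1)
```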
Benchmark:
![screen shot 2018-12-26 at 3 24 31 pm](https://user-images.githubusercontent.com/16999635/50456505-e9596a00-0922-11e9-844e-00c4b4aad7ca.png)
![screen shot 2018-12-26 at 3 26 10 pm](https://user-images.githubusercontent.com/16999635/50456509-ef4f4b00-0922-11e9-96bf-0a30c8574fe7.png)
![screen shot 2018-12-26 at 3 26 54 pm](https://user-images.githubusercontent.com/16999635/50456510-ef4f4b00-0922-11e9-8a63-e47988843cc8.png)
![screen shot 2018-12-26 at 3 27 16 pm](https://user-images.githubusercontent.com/16999635/50456511-ef4f4b00-0922-11e9-9004-2518aebcdc6e.png)
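The screenshots compare timings of the legacy TH path against the new ATen port. A rough sketch of the kind of micro-benchmark shown, with tensor sizes and iteration counts guessed rather than taken from the PR:

```python
import timeit
import torch

# Hypothetical micro-benchmark: time x.all() on byte tensors of
# increasing size and report the mean per-call latency.
for n in (100, 10_000, 1_000_000):
    x = torch.ones(n, dtype=torch.uint8)
    t = timeit.timeit(x.all, number=1000)
    print(f"n={n:>9,}: {t / 1000 * 1e6:.2f} us per call")
```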
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15540
Differential Revision: D13548938
Pulled By: mrshenli
fbshipit-source-id: 5a2e5eef1047decb4c79906cb9f3332034908c9c
Diffstat (limited to 'tools')
-rw-r--r-- | tools/autograd/derivatives.yaml | 9 |
1 file changed, 9 insertions(+), 0 deletions(-)
```diff
diff --git a/tools/autograd/derivatives.yaml b/tools/autograd/derivatives.yaml
index e652b2e618..9ec8f9efea 100644
--- a/tools/autograd/derivatives.yaml
+++ b/tools/autograd/derivatives.yaml
@@ -138,6 +138,15 @@
 - name: alias(Tensor self)
   self: grad
 
+# The two items below are necessary because TensorIterator doesn't work on
+# Variables (codegen does not unwrap the input Tensor for all(*) without this
+# line).
+- name: all(Tensor self)
+  self: not_implemented("all")
+
+- name: all(Tensor self, int64_t dim, bool keepdim)
+  self: not_implemented("all")
+
 - name: as_strided(Tensor self, IntList size, IntList stride, int64_t? storage_offset)
   self: as_strided_backward(grad, TensorGeometry(self), size, stride, storage_offset)
```
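Regarding question 3 above, the two `not_implemented("all")` entries make the generated backward raise instead of failing silently. In practice that path is hard to reach from Python, since `all` operates on integer tensors and those cannot require grad; a small sketch of the observable behavior (the error text is paraphrased, not exact):

```python
import torch

x = torch.tensor([1, 0, 1], dtype=torch.uint8)

# Integer tensors cannot participate in autograd, so the
# not_implemented("all") backward is effectively unreachable:
try:
    x.requires_grad_(True)
except RuntimeError as e:
    print(e)  # e.g. "only Tensors of floating point dtype can require gradients"
```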