author    Gregory Chanan <gchanan@fb.com>  2019-04-21 14:11:14 -0700
committer Facebook Github Bot <facebook-github-bot@users.noreply.github.com>  2019-04-21 14:14:27 -0700
commit    e3523979aed717c1ec927b3153d9eb54c814ceff (patch)
tree      ad94154916cc054bc5b03d4f39050887c8a40c55
parent    638ffac359c9b039b92ce1b385ca64529c52de64 (diff)
Have embedding_dense_backward match JIT signature. (#19521)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19521
ghimport-source-id: 817d3defb5f4ee98bae1f0488f99cb0e9a5226a2
Differential Revision: D15021376
Pulled By: gchanan
fbshipit-source-id: 2e29f1d3913f94fab3347dc48676303510d7da46
-rw-r--r-- aten/src/ATen/native/native_functions.yaml | 3 +--
-rw-r--r-- tools/autograd/derivatives.yaml            | 1 +
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/aten/src/ATen/native/native_functions.yaml b/aten/src/ATen/native/native_functions.yaml
index eefd661abb..884abb2c2c 100644
--- a/aten/src/ATen/native/native_functions.yaml
+++ b/aten/src/ATen/native/native_functions.yaml
@@ -635,8 +635,7 @@
 - func: embedding_backward(Tensor grad, Tensor indices, int num_weights, int padding_idx, bool scale_grad_by_freq, bool sparse) -> Tensor
-- func: embedding_dense_backward(Tensor grad_output, IndexTensor indices, int num_weights, int padding_idx, bool scale_grad_by_freq) -> Tensor
-  matches_jit_signature: False
+- func: embedding_dense_backward(Tensor grad_output, Tensor indices, int num_weights, int padding_idx, bool scale_grad_by_freq) -> Tensor
   dispatch:
     CPU: embedding_dense_backward_cpu
     CUDA: embedding_dense_backward_cuda
diff --git a/tools/autograd/derivatives.yaml b/tools/autograd/derivatives.yaml
index 301d933b14..3870f90d7a 100644
--- a/tools/autograd/derivatives.yaml
+++ b/tools/autograd/derivatives.yaml
@@ -950,6 +950,7 @@
 - name: embedding_dense_backward(Tensor grad_output, Tensor indices, int64_t num_weights, int64_t padding_idx, bool scale_grad_by_freq)
   grad_output: embedding_dense_double_backward(grad, indices)
+  indices: non_differentiable
 - name: _embedding_bag(Tensor weight, Tensor indices, Tensor offsets, bool scale_grad_by_freq, int64_t mode, bool sparse, Tensor per_sample_weights)
   indices: non_differentiable
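
For context on what the operator in this diff computes: the dense embedding backward scatter-adds each incoming gradient row into the weight-gradient row selected by the corresponding index, optionally scaling by inverse index frequency (`scale_grad_by_freq`) and skipping `padding_idx`. Below is a minimal pure-Python sketch of those semantics — a toy illustration under assumed list-based types, not the actual `embedding_dense_backward_cpu`/`_cuda` kernel, and the function name is hypothetical:

```python
def embedding_dense_backward_sketch(grad_output, indices, num_weights,
                                    padding_idx, scale_grad_by_freq):
    """Toy model of dense embedding backward.

    grad_output: list of gradient rows, one per lookup.
    indices:     flat list of ints, the rows that were looked up.
    Returns the gradient w.r.t. the weight matrix as a list of rows.
    """
    dim = len(grad_output[0])
    grad_weight = [[0.0] * dim for _ in range(num_weights)]

    # Optionally pre-count how often each index occurs in this batch.
    counts = {}
    if scale_grad_by_freq:
        for i in indices:
            counts[i] = counts.get(i, 0) + 1

    for row, i in zip(grad_output, indices):
        if i == padding_idx:
            continue  # the padding row receives no gradient
        scale = 1.0 / counts[i] if scale_grad_by_freq else 1.0
        for d in range(dim):
            grad_weight[i][d] += scale * row[d]  # scatter-add
    return grad_weight
```

This also illustrates why the diff can mark `indices: non_differentiable`: the indices only select which rows receive gradient, so no gradient flows back to them.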