author     Edward Z. Yang <ezyang@mit.edu>    2017-12-20 14:19:27 -0500
committer  GitHub <noreply@github.com>        2017-12-20 14:19:27 -0500
commit     a88a8ec8278e19f52cfd6e75a685ce0e9200b96b (patch)
tree       f5e9ada002a807d266335464db6409c62565fa4d /torch/legacy
parent     63ac3633f53966d038d5983e83254b7524300680 (diff)
Convolution derivatives in ATen (#4116)
* Convolution derivatives in ATen

This PR introduces an ATen implementation of convolution, which dispatches to THNN/CuDNN/NNPACK based on input parameters. The general strategy is to compose this function out of the various forward-backward pairs of specific implementations, rather than write a monolithic function with backwards (which is what we did before, because the boilerplate of doing it any other way would have been very high). The new API provides the following functions (a sketch of the dispatch structure follows this list):

- _convolution, a fully generic, native convolution implementation that dispatches to the other convolution implementations depending on input characteristics. It is prefixed with an underscore because it explicitly takes benchmark, deterministic and cudnn_enabled, which are implementation details of CuDNN. The intent is to eventually provide a convolution that reads these parameters out of the context using #4104.
- _convolution_nogroup, a convolution implementation for non-CuDNN algorithms which don't support group convolution natively.
- _convolution_double_backward, the generic double-backwards implementation for convolution.
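The following is a minimal, self-contained sketch of that dispatch structure, not the actual ATen code: the Tensor and ConvParams types here are stand-ins, and the real predicates consult many more input characteristics.

#include <iostream>

// Stand-in types for illustration only; the real code uses at::Tensor and
// ATen's ConvParams.
struct Tensor { bool is_cuda = false; };

struct ConvParams {
  int  groups        = 1;
  bool benchmark     = false;   // CuDNN implementation details, taken
  bool deterministic = false;   // explicitly until #4104 provides a context
  bool cudnn_enabled = true;

  bool use_cudnn(const Tensor& input) const {
    return cudnn_enabled && input.is_cuda;
  }
};

// _convolution composes existing forward-backward pairs rather than being
// a monolithic function with a hand-written backward.
const char* _convolution(const Tensor& input, const ConvParams& p) {
  if (p.use_cudnn(input))
    return "cudnn_convolution";                // CuDNN handles groups natively
  if (p.groups > 1)
    return "_convolution_nogroup, per group";  // slice input/weight, loop over groups
  return "_convolution_nogroup";               // THNN / NNPACK backends
}

int main() {
  ConvParams params;
  params.groups = 2;
  std::cout << _convolution(Tensor{}, params) << "\n";  // "_convolution_nogroup, per group"
}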
In more detail:

- Most functionality from torch/csrc/autograd/functions/convolution.cpp has been moved into aten/src/ATen/native/Convolution.cpp.
- We continue to make use of ConvParams, but we now construct the parameters upon entry to a function from the function signature (which does not use ConvParams; having convolution take ConvParams directly would require teaching the code generator how to accept these as parameters, complicating ATen's API model) and destructure them when making subprocedure calls.
- I introduce a new idiom, input_r, which represents a const Tensor& reference that is subsequently assigned to a local Tensor input (sketched below). This is helpful because a lot of the existing algorithms relied on being able to assign to locals, which is not permitted with a const reference.
- The native argument parser now supports std::array<bool,2> inputs (NB: there MUST NOT be a space; this is the same hack as is applied to derivatives.yaml).
- The native parser now supports Tensor? arguments, which indicate a nullable tensor (also sketched below). Previously this syntax was only used by NN methods.
- Documentation updates on the THNN library.
- I added an extra fgradInput argument to VolumetricConvolutionMM_updateOutput and VolumetricConvolutionMM_accGradParameters so that their buffer lists line up with the backward argument list. This makes it possible to write a derivative for conv3d, which previously was not supported (commented out in derivatives.yaml).
- Extra double_backward declarations were added for all convolution backward functions.
- You can now use the syntax Tensor? in native_functions.yaml to indicate that a tensor argument is nullable. There are adjustments to propagate this to the Python argument parser.
- NNPACK was ported to ATen, and ATen now builds and links against NNPACK if possible. There is a new AT_NNPACK_ENABLED macro. The new NNPACK function is nnpack_spatial_convolution.
- Some modest CuDNN convolution refactoring to remove _forward from names.
- There's a new cudnn_convolution_backward function to deal with the fact that CuDNN convolution double backward requires you to have computed all gradients in one go.
- Variable set_flags now checks if the tensor is undefined, fixing a silent memory corruption.
- checkSameType was updated to not raise an exception if called with Variable arguments.
- The "no ATen declaration found for" error message now lists the available declarations.
- make_variable now accepts undefined tensors, and returns an undefined tensor in this case.
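A minimal sketch of the input_r idiom, using a stand-in Tensor type (at::Tensor is a cheaply copyable handle, so the local copy shares storage with the argument):

#include <vector>

struct Tensor {                        // stand-in for at::Tensor
  std::vector<long> sizes;
  long dim() const { return (long)sizes.size(); }
  Tensor unsqueeze(long) const {       // returns a new handle, as in ATen
    Tensor out = *this;
    out.sizes.insert(out.sizes.begin(), 1);
    return out;
  }
};

// input_r is a const reference, so the algorithm cannot rebind it; copying
// it into the local `input` restores the ability to assign, which many of
// the ported algorithms rely on.
Tensor view_as_batched(const Tensor& input_r) {
  Tensor input = input_r;
  if (input.dim() == 3)
    input = input.unsqueeze(0);        // legal on the local, not on input_r
  return input;
}

int main() {
  Tensor t{{3, 32, 32}};
  return view_as_batched(t).dim() == 4 ? 0 : 1;
}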
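For the Tensor? and undefined-tensor changes, a hypothetical sketch of the convention: an absent tensor travels as a default-constructed, undefined Tensor, and kernels test defined() rather than a separate null flag.

#include <iostream>

struct Tensor {                        // stand-in; at::Tensor has defined()
  bool has_storage = false;
  bool defined() const { return has_storage; }
};

// A `Tensor? bias` argument arrives as a possibly-undefined Tensor.
void conv_forward(const Tensor& input, const Tensor& bias) {
  (void)input;
  std::cout << (bias.defined() ? "adding bias" : "no bias") << "\n";
}

int main() {
  conv_forward(Tensor{true}, Tensor{});   // undefined bias -> "no bias"
}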
Diffstat (limited to 'torch/legacy')
-rw-r--r--  torch/legacy/nn/VolumetricConvolution.py  2
1 file changed, 2 insertions, 0 deletions
diff --git a/torch/legacy/nn/VolumetricConvolution.py b/torch/legacy/nn/VolumetricConvolution.py
index 6b357fba37..0a3ace4b2e 100644
--- a/torch/legacy/nn/VolumetricConvolution.py
+++ b/torch/legacy/nn/VolumetricConvolution.py
@@ -95,6 +95,7 @@ class VolumetricConvolution(Module):
self.weight,
self.bias,
self.finput,
+ self.fgradInput,
self.kT, self.kW, self.kH,
self.dT, self.dW, self.dH,
self.padT, self.padW, self.padH
@@ -160,6 +161,7 @@ class VolumetricConvolution(Module):
self.gradWeight,
self.gradBias,
self.finput,
+ self.fgradInput,
self.kT, self.kW, self.kH,
self.dT, self.dW, self.dH,
self.padT, self.padW, self.padH,