path: root/tools/autograd
Age | Commit message | Author | Files | Lines
2018-01-06 | Check for out of bounds grads access in derivatives.yaml | Edward Z. Yang | 1 | -0/+25
2018-01-06 | s/uses_grad/uses_single_grad/ for more clarity. | Edward Z. Yang | 2 | -4/+4
2018-01-06 | Fix two bugs in thnn_conv_depthwise2d_backward gradient. | Edward Z. Yang | 1 | -1/+1
2018-01-04 | Improvements around torch.cat on empty Variables (#3602) | gchanan | 1 | -1/+5
2018-01-04 | Fix template type for std::array size (#4473) | Adam Paszke | 1 | -1/+2
2018-01-04 | Modify derivatives for efficiency and change `destination` to `result` for co... | Vishwak Srinivasan | 1 | -11/+11
2018-01-03 | Don't special case NN functions in gen_variable_type.py (#4395) | Sam Gross | 3 | -174/+148
2018-01-03 | Move prod, cumprod backwards to C++ (#4394) | gchanan | 3 | -5/+191
2018-01-02 | Implement embedding in ATen (#4322) | Sam Gross | 1 | -0/+6
2018-01-02 | Add low-precision digamma() and polygamma() functions (#4399) | Fritz Obermeyer | 1 | -1/+7
2017-12-28 | Fix type signature of in-place NN functions (#4389) | Sam Gross | 2 | -7/+15
2017-12-28 | Update derivative of `expm1` | Vishwak Srinivasan | 1 | -1/+1
2017-12-28 | Fix some typos (#4379) | Vishwak Srinivasan | 1 | -1/+1
2017-12-28 | Adding torch.expm1() and its inplace function (#4350) | Vishwak Srinivasan | 1 | -0/+3
2017-12-27 | Split off load_derivatives and gen_autograd_functions from gen_variable_type ... | Sam Gross | 6 | -479/+512
2017-12-27 | Support ATen GPU pointwise apply and torch.where. (#4304) | gchanan | 1 | -1/+1
2017-12-27 | VariableType clean-up (#4366) | Sam Gross | 2 | -20/+13
2017-12-24 | Allow optional int tensor | SsnL | 1 | -1/+4
2017-12-22 | Fix test_gamma_sample_grad. (#4327) | gchanan | 1 | -0/+3
2017-12-22 | Make Variable.is_sparse an attribute (#4308) | Sam Gross | 1 | -1/+1
2017-12-22 | Generate grad_input_mask only if it's actually used | Adam Paszke | 1 | -5/+13
2017-12-22 | Remove unused functions | Adam Paszke | 1 | -5/+0
2017-12-21 | Fix default device for Variable.new() (#4307) | Sam Gross | 1 | -0/+1
2017-12-21 | Move fractional max pooling to ATen (#4290) | Sam Gross | 1 | -0/+7
2017-12-21 | Use `where` rather than `_s_where` in `_s_where` backwards so `where` is trac... | gchanan | 1 | -2/+2
2017-12-21 | Further improvements to ATen convolution (#4287) | Edward Z. Yang | 2 | -26/+39
2017-12-21 | Batchnorm in ATen (#4285) | Edward Z. Yang | 3 | -21/+43
2017-12-20 | Document some autograd invariants (#4272) | Adam Paszke | 1 | -0/+6
2017-12-20 | Move SELU to ATen (#4269) | Sam Gross | 1 | -6/+6
2017-12-20 | Move upsampling to ATen (#4264) | Sam Gross | 1 | -0/+39
2017-12-20 | Don't mark index as traceable, and other improvements (#4249) | Edward Z. Yang | 1 | -2/+13
2017-12-20 | Convolution derivatives in ATen (#4116) | Edward Z. Yang | 3 | -6/+75
2017-12-20 | Implement torch.where(condition, x, y) CPU Variable. (#4259) | gchanan | 1 | -0/+5
2017-12-20 | Implement _values() and _indices() methods for sparse variables in python (an... | Richard Zou | 1 | -1/+1
2017-12-19 | Move reflection/replication padding to ATen (#4258) | Sam Gross | 1 | -0/+35
2017-12-19 | Move adaptive avg pooling 2d/3d to ATen (#4254) | Sam Gross | 2 | -0/+33
2017-12-18 | conv_tbc (#3730) | James Reed | 1 | -0/+3
2017-12-18 | Replace Variable.volatile with torch.no_grad() (#3970) | Sam Gross | 2 | -42/+48
2017-12-18 | Support CPU Apply in ATen and implement standard_gamma using it (#4161) | gchanan | 1 | -0/+3
2017-12-18 | Add reduce arg to BCELoss (#4231) | Richard Zou | 1 | -2/+2
2017-12-18 | Add python only default init expression; Implement stft, hann/hamming/bartlet... | Tongzhou Wang | 2 | -8/+58
2017-12-18 | Implement pin_memory() as a NativeFunction (#4094) | Sam Gross | 2 | -0/+4
2017-12-18 | Implement Variable.cuda and Variable.type using ATen (#4139) | Sam Gross | 3 | -32/+98
2017-12-15 | Implement Variable._sparse_mask (#4124) | Sam Gross | 5 | -1/+15
2017-12-15 | Trace ATen native functions as themselves, not their implementations. (#4127) | Edward Z. Yang | 3 | -61/+198
2017-12-14 | Preprocess both inplace and non-inplace nn functions (#4184) | Luca Antiga | 2 | -9/+4
2017-12-14 | Accept sparse tensors of corresponding type in VariableType casts | Adam Paszke | 1 | -1/+1
2017-12-13 | Refactor generation of NN derivatives (#4096) | Luca Antiga | 2 | -115/+238
2017-12-13 | Port batchnorm_double_backward to ATen. | Edward Z. Yang | 2 | -0/+163
2017-12-13 | Implement remaining random methods through ATen (#4137) | Sam Gross | 1 | -1/+25