path: root/tools/autograd/derivatives.yaml
| Age        | Commit message                                                                 | Author             | Files | Lines   |
|------------|--------------------------------------------------------------------------------|--------------------|-------|---------|
| 2019-01-16 | Moving torch.norm to ATen using TensorIterator (#15414)                        | jiej               | 1     | -1/+8   |
| 2019-01-16 | Port the backend of FractionalMaxPool3d from TH to ATen (#15575)               | Chandler Zuo       | 1     | -0/+7   |
| 2019-01-16 | Port legacy all(*) to ATen (#15540)                                            | Shen Li            | 1     | -0/+9   |
| 2019-01-14 | `var` for multiple dimensions (#15892)                                         | Brennan Vincent    | 1     | -1/+1   |
| 2019-01-03 | Add mkldnn conv double backward (#15686)                                       | Ailing Zhang       | 1     | -0/+3   |
| 2018-12-19 | Remove python_default_init from ATen and use Optional (#15234)                 | Wanchao Liang      | 1     | -2/+2   |
| 2018-12-19 | Rename potrs to cholesky_solve (#15334)                                        | vishwakftw         | 1     | -4/+4   |
| 2018-12-17 | [TensorIterator fixing mean to output correct result for half precisi… (#14...| Jie                | 1     | -2/+11  |
| 2018-12-17 | Port nn fold and unfold to c++                                                 | Roy Li             | 1     | -0/+6   |
| 2018-12-17 | Bicubic interpolation for nn.functional.interpolate (#9849)                    | David Riazati      | 1     | -0/+6   |
| 2018-12-07 | Implement `std` for multiple dimensions on CPU devices. (#14535)               | Brennan Vincent    | 1     | -1/+1   |
| 2018-12-06 | gradcheck (#14596)                                                             | Wei Yang           | 1     | -0/+3   |
| 2018-11-30 | Revert existing no_grad_embedding_renorm_ from aten (#14639)                   | Elias Ellison      | 1     | -3/+0   |
| 2018-11-28 | Support Embedding + EmbeddingBag in Script + (Ignore flakey test) (#14509)     | Elias Ellison      | 1     | -0/+3   |
| 2018-11-28 | Revert D13219647: [pytorch][PR] Support Embedding + EmbeddingBag in Script     | Edward Yang        | 1     | -3/+0   |
| 2018-11-28 | Support Embedding + EmbeddingBag in Script (#14415)                            | Elias Ellison      | 1     | -0/+3   |
| 2018-11-28 | Make `mean` function work across multiple dimensions. (#14252)                 | Brennan Vincent    | 1     | -1/+1   |
| 2018-11-28 | torch.sparse.sum() (#12430)                                                    | Wei Yang           | 1     | -0/+3   |
| 2018-11-27 | roll along multiple dimensions                                                 | Brian Vaughan      | 1     | -1/+1   |
| 2018-11-27 | Move Affine grid to C++ (#14392)                                               | Elias Ellison      | 1     | -0/+3   |
| 2018-11-27 | Speed-up "advanced" indexing operations (#13420)                               | Sam Gross          | 1     | -0/+8   |
| 2018-11-26 | backward for sparse.addmm(D, S, D, alpha, beta) -> D (#13345)                  | Wei Yang           | 1     | -0/+5   |
| 2018-11-21 | native NN wrappers, including with buffers.                                    | Gregory Chanan     | 1     | -43/+43 |
| 2018-11-19 | Support 'python_module' of 'nn' in native functions. (#14126)                  | Gregory Chanan     | 1     | -3/+3   |
| 2018-11-19 | Support named return arguments in native_functions. (#14100)                   | Gregory Chanan     | 1     | -5/+5   |
| 2018-11-12 | Improve mm / addmm error message with sparse tensors (#13796)                  | Gregory Chanan     | 1     | -7/+2   |
| 2018-11-09 | Make potrs batched (#13453)                                                    | Vishwak Srinivasan | 1     | -2/+2   |
| 2018-11-06 | Native batch norm (#13263)                                                     | Thomas Viehmann    | 1     | -10/+10 |
| 2018-11-06 | mm backwards to not depend on TH. (#13575)                                     | Gregory Chanan     | 1     | -2/+2   |
| 2018-11-06 | cumsum/cumprod derivatives not depending on TH. (#13579)                       | Gregory Chanan     | 1     | -2/+8   |
| 2018-11-05 | Speed up CPU threshold and relu implementation (#13182)                        | Sam Gross          | 1     | -7/+11  |
| 2018-11-05 | Roll operator t32802531 (#13261)                                               | Brian Vaughan      | 1     | -0/+3   |
| 2018-11-02 | Write gesv derivatives in terms of native function.                            | Gregory Chanan     | 1     | -5/+1   |
| 2018-11-01 | Rename potrf to cholesky (#12699)                                              | vishwakftw         | 1     | -3/+3   |
| 2018-10-31 | Use non-th versions of some functions when defining backwards. (#13394)        | Gregory Chanan     | 1     | -2/+2   |
| 2018-10-31 | Move underscore prefixed th functions _th prefix.                              | Gregory Chanan     | 1     | -1/+1   |
| 2018-10-30 | More functions moved to native, use _th_ prefix more consistently.             | Gregory Chanan     | 1     | -2/+2   |
| 2018-10-30 | Move underscore prefixed linear algebra TH functions to _th prefix.            | Gregory Chanan     | 1     | -3/+3   |
| 2018-10-30 | Fix "CUDA Tensor __rsub__ breaks when device is not 0" (#12956)                | Will Feng          | 1     | -0/+7   |
| 2018-10-30 | Move _cumsum and _cumprod to _th_ prefixes.                                    | Gregory Chanan     | 1     | -2/+2   |
| 2018-10-30 | Rename th_addmm to _th_addbmm.                                                 | Gregory Chanan     | 1     | -1/+1   |
| 2018-10-29 | More Declarations.cwrap functions moved to native, mainly LAPACK, sim… (#13... | Gregory Chanan     | 1     | -4/+4   |
| 2018-10-27 | Batched Inverse (#9949)                                                        | vishwakftw         | 1     | -1/+1   |
| 2018-10-26 | Revert "Move batch_norm to ATen/native, speed up (#12368)" (#13191)            | Lu Fang            | 1     | -10/+10 |
| 2018-10-26 | Native wrappers for many Declarations.cwrap entries                            | Gregory Chanan     | 1     | -1/+4   |
| 2018-10-26 | Move ConstantPadNd into ATen (#10885)                                          | William Horton     | 1     | -0/+3   |
| 2018-10-25 | Move batch_norm to ATen/native, speed up (#12368)                              | Thomas Viehmann    | 1     | -10/+10 |
| 2018-10-25 | Add c10::optional to type syntax (#12582)                                      | Wanchao Liang      | 1     | -1/+1   |
| 2018-10-25 | Add support for reductions to TensorIterator (#11908)                          | Sam Gross          | 1     | -5/+23  |
| 2018-10-24 | Autograd indices/values and sparse_coo ctor (#13001)                           | Tongzhou Wang      | 1     | -1/+38  |