path: root/tools/autograd
Age | Commit message | Author | Files | Lines
2021-12-10 | [release/1.10] Remove fgrad_input from slow_conv2d (#64280) (#69622) | Eli Uriegas | 2 | -4/+4
2021-09-20 | Add crow_/col_indices to view types (#63176) | = | 1 | -0/+2
2021-09-18 | support using gradients named for outputs in derivatives (#63947) | Michael Dagitses | 3 | -5/+62
2021-09-18 | clarify implementation of check_grad_usage (#64439) | Michael Dagitses | 1 | -13/+15
2021-09-14 | Make {select,slice,diagonal}_backward primitives wrt autograd (#64933) | Richard Zou | 2 | -0/+13
2021-09-13 | Add forward AD for torch.linalg.eigh (#62163) | Ivan Yashchuk | 1 | -0/+5
2021-09-10 | `torch.lu`: forward AD support (#64742) | Nikita Vedeneev | 1 | -0/+1
2021-09-09 | `torch.lu_solve`: forward AD support (#64646) | Nikita Vedeneev | 1 | -0/+1
2021-09-08 | update scatter formula (#64546) | kshitij12345 | 1 | -2/+2
2021-09-08 | Add forward mode differentiation for torch.linalg.cholesky and transpose (#62... | Ivan Yashchuk | 1 | -0/+1
2021-09-02 | simplify op name determination into a single forward pass (#64261) | Michael Dagitses | 1 | -52/+30
2021-09-01 | Break up "@generated" string so Phabricator shows changes | David Reiss | 1 | -2/+2
2021-08-31 | Use the correct overloaded name to skip boxed autograd not implemented kernel... | soulitzer | 1 | -3/+3
2021-08-27 | Update codegen to use boxed kernel (#63459) | soulitzer | 3 | -8/+43
2021-08-26 | Move variabletype functions around (#63330) | soulitzer | 1 | -0/+1
2021-08-26 | Derivatives of relu (#63027) (#63089) | MengeTM | 1 | -4/+0
2021-08-26 | use `const auto&` as type for grad alias (#63949) | Michael Dagitses | 1 | -1/+1
2021-08-25 | Shard python_torch_functions.cpp (#62187) | Peter Bell | 2 | -756/+64
2021-08-19 | Poisson zero rate (#61511) | Till Hoffmann | 1 | -2/+2
2021-08-16 | Make `torch.lu` differentiable for wide/tall inputs + jit (#61564) | Nikita Vedeneev | 2 | -3/+3
2021-08-12 | Implements backward for `torch.lu_solve` (#61681) | Nikita Vedeneev | 2 | -2/+2
2021-08-12 | LayerNorm Support in autodiff: (#50467) | jiej | 1 | -1/+7
2021-08-11 | Shard python_functions.cpp (#62186) | Peter Bell | 3 | -22/+45
2021-08-09 | Mark unused functions with `C10_UNUSED` (#62929) | Nikita Shulga | 1 | -1/+1
2021-08-09 | Enable upper for torch.linalg.cholesky (#62434) | Rong Rong (AI Infra) | 1 | -2/+2
2021-08-06 | Refactor codegen file sharding (#62184) | Peter Bell | 2 | -84/+73
2021-08-03 | adding operator cumulative_trapezoid (#61615) | Kevin Tse | 1 | -2/+2
2021-08-02 | [fix] mark non-differentiable ops (#62529) | kshitij12345 | 1 | -1/+1
2021-07-29 | [reland] Refactor Tensor::to to call a primitive that is not copy_. (#62262) | Richard Zou | 4 | -2/+35
2021-07-28 | These should be equivalent per the previous formula but breaks xla (#62329) | Alban Desmaison | 1 | -1/+1
2021-07-27 | Migrate thnn_conv_depthwise2d from THC to ATen (#62281) | Peter Bell | 1 | -3/+3
2021-07-27 | All remaining linear/element-wise formulas (#59993) | albanD | 2 | -5/+59
2021-07-27 | Fix out= variant forward grad detection (#60499) | albanD | 1 | -5/+6
2021-07-27 | Add forward AD inplace check and fix codegen (#60498) | albanD | 3 | -15/+65
2021-07-27 | Revert D29883676: Migrate thnn_conv_depthwise2d from THC to ATen | Erjia Guan | 1 | -3/+3
2021-07-27 | det_backward: correct, more robust and with complex support [clone] (#61905) | Nikita Vedeneev | 2 | -2/+4
2021-07-27 | Migrate thnn_conv_depthwise2d from THC to ATen (#62006) | Peter Bell | 1 | -3/+3
2021-07-27 | Add forward mode differentiation for inverse and solve (#62160) | Ivan Yashchuk | 1 | -0/+2
2021-07-26 | Revert D29801652: Refactor Tensor::to to call a primitive that is not copy_. | Nikita Shulga | 4 | -32/+1
2021-07-26 | Refactor Tensor::to to call a primitive that is not copy_. (#61458) | Richard Zou | 4 | -1/+32
2021-07-23 | Implement NumPy-like `frombuffer` tensor constructor. (#59077) | Yukio Siraichi | 1 | -0/+96
2021-07-21 | Removed overhead from reshape() call if tensor doesn't need to be changed (#6... | Laurence Rouesnel | 4 | -1/+6
2021-07-21 | [quant] Remove calls to .item() for fake_quant_on (#61921) | Supriya Rao | 1 | -1/+1
2021-07-21 | [quant] Add a new fused MovingAvg Obs + FakeQuant operator(CPU) (#61570) | Supriya Rao | 1 | -0/+3
2021-07-20 | Revert D29794958 + compilation fix (#61937) | Nikita Shulga | 1 | -1/+1
2021-07-20 | Revert D29794958: [pytorch][PR] changing trapz to trapezoid | Nikita Shulga | 1 | -1/+1
2021-07-20 | changing trapz to trapezoid (#61475) | Kevin Tse | 1 | -1/+1
2021-07-20 | Remove torch._bmm and remove torch.bmm deterministic arg documentation (#61629) | Kurt Mohler | 1 | -5/+0
2021-07-16 | Revert D29698486: [pytorch][PR] Remove torch._bmm and remove torch.bmm determ... | Anjali Chourdia | 1 | -0/+5
2021-07-16 | Remove torch._bmm and remove torch.bmm deterministic arg documentation (#61629) | Kurt Mohler | 1 | -5/+0