path: root/torch/legacy
Age        | Commit message | Author | Files | Lines
2018-09-20 | Remove torch/legacy (#11823) | Christian Puhrsch | 160 | -10881/+1
2018-08-14 | check attribute existence in torch.legacy.nn.SpatialFullConvolution in method ... | Ahti Kalervo | 1 | -2/+2
2018-08-01 | Add CELU activation to pytorch (#8551) | Xiang Gao | 1 | -0/+2
2018-07-18 | Use _six for inf and nan (#9500) | Tongzhou Wang | 2 | -5/+7
2018-07-06 | Implement nn.functional.interpolate based on upsample. (#8591) | Ailing Zhang | 1 | -3/+10
2018-07-05 | Test nn.Module on non-contiguous inputs (#9114) | Tongzhou Wang | 5 | -0/+5
2018-07-02 | update nn loss tests to use new reduction arg (#9118) | Roy Li | 11 | -22/+22
2018-07-01 | combine size_average and reduce args in loss functions (#8018) | Roy Li | 11 | -44/+33
2018-05-04 | move softmax/logsoftmax to ATen (#6786) | ngimel | 4 | -36/+16
2018-04-17 | Codemod to update our codebase to 0.4 standard (#6641) | Tongzhou Wang | 3 | -11/+11
2018-03-21 | Implement MarginRankingLoss as native function and add reduce=True arg to it ... | li-roy | 1 | -1/+1
2018-02-27 | add reduce=True arg to MultiMarginLoss (#5150) | li-roy | 1 | -2/+6
2018-02-23 | Merge Variable and Tensor classes (#5225) | Sam Gross | 25 | -33/+31
2018-02-16 | check attribute existence in SpatialFullConvolution (#5255) | Kato Tetsuro | 1 | -4/+4
2018-02-13 | add reduce=True arg to SoftMarginLoss (#5071) | li-roy | 1 | -2/+6
2018-02-05 | add reduce=True argument to MultiLabelMarginLoss (#4924) | li-roy | 1 | -2/+6
2018-02-02 | Fix output_nr not incremented correctly (#4812) | Tongzhou Wang | 1 | -0/+3
2018-01-19 | Legacy Padding: correct output size with nInputDim | Maxim Berman | 1 | -4/+4
2018-01-18 | Allow Variables in calls to type2backend (#4724) | Sam Gross | 2 | -2/+2
2017-12-20 | Move SELU to ATen (#4269) | Sam Gross | 1 | -1/+2
2017-12-20 | Convolution derivatives in ATen (#4116) | Edward Z. Yang | 1 | -0/+2
2017-11-13 | Fix elu double-backwards when applied in-place (#3687) | Sam Gross | 1 | -1/+0
2017-11-07 | Add reduce keyword for KLDivLoss (#3330) | Richard Zou | 1 | -2/+6
2017-11-01 | Add reduce keyword to L1Loss (#3366) | Richard Zou | 1 | -2/+6
2017-11-01 | Implement reduce keyword for SmoothL1Loss (#3382) | Richard Zou | 1 | -2/+6
2017-10-26 | Add reduce keyword to NLLLoss and NLLLoss2d (#3080) | Richard Zou | 2 | -4/+12
2017-10-19 | Large Softmax and LogSoftmax refactor | Adam Paszke | 4 | -9/+38
2017-10-13 | Remove unused parameter 'input' from Tanh | Sam Gross | 1 | -1/+0
2017-10-11 | Remove unused argument 'input' to Sigmoid_updateGradInput (#3079) | Sam Gross | 1 | -1/+0
2017-10-10 | Simplify PReLU binding (#3055) | Sam Gross | 1 | -13/+2
2017-10-06 | Introduce a `reduce` keyword argument for MSELoss (#2878) | Richard Zou | 3 | -6/+20
2017-10-06 | Fix two legacy modules clearing input tensor in clearState | SsnL | 2 | -6/+5
2017-10-02 | Fix typos | Taehoon Lee | 1 | -3/+3
2017-09-10 | Added support for nInputDim parameter in legacy Padding class (#2645) | Varun Agrawal | 1 | -1/+10
2017-09-05 | fixed issue #2613 in torch/legacy/nn (#2624) | Iacopo Poli | 1 | -4/+5
2017-08-26 | Adding implicit padding for 3d average pooling | Lu Fang | 1 | -4/+26
2017-08-25 | Fix typos. | Zhou Mo | 5 | -6/+6
2017-08-14 | Update legacy SoftPlus to add threshold constructor arg. | Gregory Chanan | 1 | -3/+3
2017-08-10 | Revert "Fix typos." | Gregory Chanan | 5 | -6/+6
2017-08-08 | Fix typos. | Zhou Mo | 5 | -6/+6
2017-08-01 | Fix serialization of legacy ClassNLLCriterion with ignore_index. | Gregory Chanan | 1 | -0/+1
2017-08-01 | Update legacy ClassNLLCriterion to add ignore_index. | Gregory Chanan | 1 | -3/+4
2017-07-13 | Add ignore_index to NLLLoss2d | Fisher Yu | 1 | -3/+8
2017-06-16 | Remove flattening for torch.dot (#1781) | gchanan | 2 | -3/+7
2017-06-11 | Fix lint errors. | Gregory Chanan | 2 | -4/+8
2017-06-11 | Add a broadcast parameter to copy_, use it in the library in cases where ther... | Gregory Chanan | 3 | -4/+4
2017-06-11 | Add optional warning for backwards incompatible keepdim. Setting torch.utils.... | Gregory Chanan | 4 | -4/+4
2017-06-11 | Backwards compatible Spatial Normalizations / CrossMapLRN. | Gregory Chanan | 3 | -9/+10
2017-06-11 | Test fixes for keepdim=False, suppress warnings on backwards-compatible behav... | Gregory Chanan | 2 | -2/+2
2017-06-07 | fix legacy ClassNLLCriterion for upstream change | Soumith Chintala | 1 | -2/+4
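Several entries above trace one API migration: the per-loss `reduce=True` arguments (#5071, #4924, #5150, ...) were later combined with `size_average` into a single `reduction` argument (#8018), which the tests were then updated to use (#9118). As a minimal sketch of that mapping in plain Python — this is not the commits' actual code, and the function name `legacy_reduction` is chosen here for illustration, though the True/True → 'mean', False/True → 'sum', reduce=False → 'none' behavior matches how PyTorch documents the deprecated arguments:

```python
def legacy_reduction(size_average=True, reduce=True):
    """Map the deprecated (size_average, reduce) pair to a `reduction` string."""
    # Unspecified (None) legacy arguments historically defaulted to True.
    size_average = True if size_average is None else size_average
    reduce = True if reduce is None else reduce
    if not reduce:
        return 'none'  # keep per-element losses, no reduction
    return 'mean' if size_average else 'sum'
```

So a call like `MSELoss(size_average=False, reduce=True)` corresponds to `MSELoss(reduction='sum')` under the new single-argument API.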