path: root/test/common_nn.py
Age | Commit message | Author | Files | Lines
2019-03-30 | Turn on F401: Unused import warning. (#18598) | Edward Yang | 1 | -1/+1
2019-03-27 | enable more unit tests (#18537) | jithunnair-amd | 1 | -18/+0
2019-03-18 | Circular Convolution Function via circular padding (#17240) | Narine Kokhlikyan | 1 | -0/+173
2019-02-21 | fix lint (#17366) | Soumith Chintala | 1 | -11/+22
2019-02-21 | remove nn.Upsample deprecation warnings from tests (#17352) | Soumith Chintala | 1 | -99/+99
2019-02-14 | Speed-up adaptive average pooling for the common case of size=1 output (#17011) | ngimel | 1 | -0/+12
2019-02-12 | enable more unit tests in test_nn (#16994) | Johannes M Dieterich | 1 | -33/+1
2019-02-12 | fix bicubic upsampling and enable tests (#17020) | Johannes M Dieterich | 1 | -7/+0
2019-01-16 | Port the backend of FractionalMaxPool3d from TH to ATen (#15575) | Chandler Zuo | 1 | -0/+25
2019-01-15 | Port FractionalMaxPool2d from TH to ATen (#15531) | Chandler Zuo | 1 | -14/+19
2018-12-17 | Bicubic interpolation for nn.functional.interpolate (#9849) | David Riazati | 1 | -0/+49
2018-12-04 | Enable testing on Loss modules (#14778) | David Riazati | 1 | -1/+1
2018-12-04 | Add new reduction mode in kl_div (#14457) | Ailing Zhang | 1 | -0/+2
2018-12-04 | interpolate (#14123) | Elias Ellison | 1 | -11/+11
2018-11-30 | Move module tests to common_nn (#14578) | David Riazati | 1 | -1/+1953
2018-11-28 | Use nn module tests in test_jit (#14238) | David Riazati | 1 | -2/+2
2018-11-28 | Revert D13192230: [pytorch][PR] [jit] Use nn module tests in test_jit | David Riazati | 1 | -2/+2
2018-11-27 | Use nn module tests in test_jit (#14238) | David Riazati | 1 | -2/+2
2018-11-14 | Ensure nn Losses check scalar vs non-scalar values. | Gregory Chanan | 1 | -5/+1
2018-11-01 | Rename elementwise_mean to mean (#13419) | Tongzhou Wang | 1 | -25/+25
2018-10-24 | Enable test_nn embedding tests and use correct warp size in Embedding.cu (#13... | iotamudelta | 1 | -4/+0
2018-10-17 | Rename test/common.py to test/common_utils.py (#12794) | James Sun | 1 | -1/+1
2018-10-08 | Fix a bunch of warnings in TestNN | Tongzhou Wang | 1 | -2/+2
2018-10-03 | mark unit tests as working, skip failing unit test (#12313) | Johannes M Dieterich | 1 | -19/+0
2018-09-02 | improve docker packages, fix bugs, enable tests, enable FFT (#10893) | iotamudelta | 1 | -4/+27
2018-08-01 | Add CELU activation to pytorch (#8551) | Xiang Gao | 1 | -0/+1
2018-07-31 | Add CTC loss (#9628) | Thomas Viehmann | 1 | -6/+48
2018-07-11 | Accumulate MSELoss reduce=True into accreal instead of real (#9287) | Richard Zou | 1 | -0/+4
2018-07-05 | Test nn.Module on non-contiguous inputs (#9114) | Tongzhou Wang | 1 | -7/+15
2018-07-02 | update nn loss tests to use new reduction arg (#9118) | Roy Li | 1 | -79/+80
2018-07-01 | combine size_average and reduce args in loss functions (#8018) | Roy Li | 1 | -1/+5
2018-05-31 | Add memory leak check in CUDA tests (#7270) | Tongzhou Wang | 1 | -5/+2
2018-05-23 | [PyTorch] [gradcheck] change backward() to grad() (#7710) | Tongzhou Wang | 1 | -5/+4
2018-04-17 | Codemod to update our codebase to 0.4 standard (#6641) | Tongzhou Wang | 1 | -43/+20
2018-03-21 | Implement MarginRankingLoss as native function and add reduce=True arg to it ... | li-roy | 1 | -1/+15
2018-03-17 | Fix kldiv backward on CUDA (#5814) | Richard Zou | 1 | -2/+3
2018-03-17 | implement TripletMarginLoss as a native function (#5680) | li-roy | 1 | -0/+17
2018-03-09 | small fixes to CosineEmbeddingLoss tests (#5681) | li-roy | 1 | -0/+2
2018-03-08 | implement CosineEmbeddingLoss as a native function and add reduce arg (#5646) | li-roy | 1 | -2/+21
2018-03-08 | Revert "implement CosineEmbeddingLoss as a native function and add reduce arg... | Edward Z. Yang | 1 | -21/+2
2018-03-08 | implement CosineEmbeddingLoss as a native function and add reduce arg (#5447) | li-roy | 1 | -2/+21
2018-03-07 | remove legacy workaround for hinge embedding loss reference fn (#5596) | li-roy | 1 | -5/+0
2018-03-01 | Deprecate variable factory, use torch.tensor instead (#5476) | gchanan | 1 | -3/+1
2018-02-28 | More Variable/Tensor clean-ups (#5464) | Sam Gross | 1 | -22/+1
2018-02-27 | add reduce=True arg to MultiMarginLoss (#5150) | li-roy | 1 | -0/+82
2018-02-23 | Merge Variable and Tensor classes (#5225) | Sam Gross | 1 | -32/+23
2018-02-15 | add reduce=True arg to MultiLabelSoftMarginLoss (#5097) | li-roy | 1 | -8/+1
2018-02-13 | Move scalar tests from common_nn to legacy_nn. (#5223) | gchanan | 1 | -169/+0
2018-02-13 | add reduce=True arg to SoftMarginLoss (#5071) | li-roy | 1 | -0/+13
2018-02-09 | add reduce=True arg to HingeEmbeddingLoss (#5130) | li-roy | 1 | -0/+22