path: root/test/test_optim.py
Date | Commit message | Author | Files | Lines (-/+)
2019-03-27 | Adds Cyclical Learning Rate and Momentum (#18001) | Sam Pepose | 1 | -1/+177
2019-03-22 | Correctly call superclass setUp in TestCase subclasses. (#18291) | Edward Yang | 1 | -0/+1
2019-02-09 | enable unit tests working on ROCm 2.1 (#16871) | Johannes M Dieterich | 1 | -3/+1
2018-12-18 | Redefine scheduler to set learning rate using recursive formula (#14010) | Chandler Zuo | 1 | -18/+318
2018-10-26 | Shard all of tests based on how many tests exist. (#13160) | Zachary DeVito | 1 | -1/+5
2018-10-17 | Rename test/common.py to test/common_utils.py (#12794) | James Sun | 1 | -1/+1
2018-09-20 | Remove torch/legacy (#11823) | Christian Puhrsch | 1 | -136/+0
2018-08-23 | MIOpen integration, more tests enabled, bug fixes (#10612) | Johannes M Dieterich | 1 | -7/+0
2018-08-13 | improve use of ROCm libraries, enable more tests, small fixes (#10406) | iotamudelta | 1 | -2/+10
2018-08-02 | ROCm contributions week 29 (#9653) | iotamudelta | 1 | -1/+2
2018-07-31 | Changed serialization mechanism of LambdaLR scheduler (#9927) | 0phoff | 1 | -1/+36
2018-07-18 | Use _six for inf and nan (#9500) | Tongzhou Wang | 1 | -2/+3
2018-07-05 | Turn on UBSAN in the OSS build (#8813) | Will Feng | 1 | -1/+2
2018-05-16 | Make return uniform in lbfgs step (#7586) | Matt Le | 1 | -0/+12
2018-05-10 | added state_dict/load_state_dict for ReduceLROnPlateau (#7201) | Changhan Wang | 1 | -0/+10
2018-04-19 | added functionality for state_dict/load_state_dict for lr_scheduler ( Fixes: ... | Armen | 1 | -0/+34
2018-04-16 | Adding initial_accumulator_value parameter to Adagrad (#6616) | Atul Kumar | 1 | -0/+4
2018-04-03 | fix SGD lr check (#6244) | Tongzhou Wang | 1 | -0/+4
2018-03-29 | Fixed some tests by using the correct optimizer (#6116) | lazypanda1 | 1 | -4/+4
2018-03-28 | Added parameter range checks for all optimizers (#6000) | lazypanda1 | 1 | -0/+16
2018-02-23 | Merge Variable and Tensor classes (#5225) | Sam Gross | 1 | -3/+3
2018-02-20 | Make ReduceLROnPlateau serializable. (#5300) | Marcin Elantkowski | 1 | -0/+1
2018-02-11 | Added check and test for betas parameter in Adam optimizer (#5147) | lazypanda1 | 1 | -0/+2
2018-01-14 | Fix wrong learning rate evaluation in CosineAnnealingLR in Python 2 (#4656) | nguyen-binh-minh | 1 | -1/+1
2017-12-30 | fix AMSGrad for SparseAdam (#4314) | Dr. Kashif Rasul | 1 | -4/+0
2017-12-28 | Adding description for Optimizers (#4371) | Vishwak Srinivasan | 1 | -0/+5
2017-12-18 | added AMSgrad optimizer to Adam and SparseAdam (#4034) | Dr. Kashif Rasul | 1 | -0/+13
2017-12-18 | Add Cosine Annealing LR Scheduler (#3311) | Kai Arulkumaran | 1 | -17/+28
2017-11-28 | Cast tensors when loading optimizer state dicts (#3658) | Adam Paszke | 1 | -1/+25
2017-11-06 | Sparse Adam optimizer for sparse gradients (#3137) | SsnL | 1 | -6/+19
2017-06-05 | fix optimizer when given single parameters (instead of an iterable) | Yan Wang | 1 | -0/+8
2017-05-25 | add learning rate schedulers (#1370) | Jiaming Liu | 1 | -1/+155
2017-05-16 | Fix flaxy test_sparse_adagrad (#1562) | Edward Z. Yang | 1 | -9/+12
2017-05-03 | Fix #1447: sparse_mask doesn't make sense with uncoalesced tensors (#1458) | Edward Z. Yang | 1 | -0/+46
2017-01-30 | billinear -> bilinear, docs for upsampling, improved docs for Unpooling, pep8... | Soumith Chintala | 1 | -1/+1
2017-01-28 | [pep8] Fix most lint automatically with autopep8 | Luke Yeager | 1 | -3/+3
2017-01-25 | Fixes and improvements (#593) | Adam Paszke | 1 | -2/+2
2017-01-24 | Improve optimizer serialization | Adam Paszke | 1 | -2/+44
2017-01-22 | Port L-BFGS from Lua optim | Adam Paszke | 1 | -12/+33
2017-01-16 | Change .grad attribute of Variables to be a Variable | Adam Paszke | 1 | -2/+2
2017-01-16 | Check params type in optimizers | Adam Paszke | 1 | -0/+4
2016-11-29 | Add optional weight decay to optim.SGD (#269) | Sam Gross | 1 | -3/+4
2016-11-08 | Change optimizer API | Adam Paszke | 1 | -1/+7
2016-11-07 | Add more optimizers | Adam Paszke | 1 | -0/+273
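Several of the commits above add tests for optimizer and scheduler serialization (#7201, #9927, and the lr_scheduler state_dict work) and for the amsgrad option in Adam (#4034). As orientation, here is a minimal sketch of the API pattern those tests exercise, assuming the public torch.optim interface; it is illustrative only, not code from test_optim.py.

```python
# Minimal sketch (illustrative, not from test_optim.py) of the
# optimizer/scheduler state_dict round-trip tested by the commits above,
# assuming the public torch.optim API.
import torch

model = torch.nn.Linear(10, 2)
# amsgrad flag added to Adam in #4034; betas validated per #5147.
optimizer = torch.optim.Adam(
    model.parameters(), lr=1e-3, betas=(0.9, 0.999), amsgrad=True)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

# Checkpoint both objects; scheduler state_dict support was added in the
# commits above (e.g. #7201 for ReduceLROnPlateau).
checkpoint = {
    "optimizer": optimizer.state_dict(),
    "scheduler": scheduler.state_dict(),
}

# Restoring into freshly constructed objects resumes the schedule in place.
optimizer.load_state_dict(checkpoint["optimizer"])
scheduler.load_state_dict(checkpoint["scheduler"])
```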