path: root/torch/optim
Age        | Commit message | Author | Files | Lines
2019-04-01 | More type stubs (#18511) | Jon Malmaud | 5 | -0/+56
2019-03-30 | Turn on F401: Unused import warning. (#18598) | Edward Yang | 3 | -14/+12
2019-03-29 | ReduceLrOnPlateau: best=current -> best=copy(current) (#16364) (#16697) | Søren Rasmussen | 1 | -1/+2
2019-03-27 | Adds Cyclical Learning Rate and Momentum (#18001) | Sam Pepose | 1 | -0/+214
2019-03-19 | SGD: remove unneeded multiply-add initialization operations (#18114) | Neta Zmora | 1 | -2/+1
2018-12-18 | Redefine scheduler to set learning rate using recursive formula (#14010) | Chandler Zuo | 1 | -24/+38
2018-10-29 | Add name for required optimizer parameter. (#13202) | Jerry Ma | 1 | -1/+7
2018-10-24 | fix lint after new flake8 release added new style constraints (#13047) | Soumith Chintala | 2 | -2/+2
2018-09-26 | Small optimization for adam (#12107) | Jerry Ma | 1 | -1/+1
2018-09-13 | migrating deprecated calls without abc module for containers (#11515) | Jeff Smith | 1 | -2/+3
2018-09-07 | Remove methods that start with an underscore from at::Tensor (#11152) | Peter Goldsborough | 2 | -4/+4
2018-07-31 | Changed serialization mechanism of LambdaLR scheduler (#9927) | 0phoff | 1 | -0/+32
2018-07-28 | Adds the default value for the amsgrad arg to the Adam docstring (#9971) | rasbt | 1 | -0/+1
2018-07-18 | Use _six for inf and nan (#9500) | Tongzhou Wang | 1 | -2/+4
2018-05-23 | _LRSchedulers getstate include optimizer info (#7757) | Ailing | 1 | -6/+0
2018-05-16 | Make return uniform in lbfgs step (#7586) | Matt Le | 1 | -1/+1
2018-05-10 | added state_dict/load_state_dict for ReduceLROnPlateau (#7201) | Changhan Wang | 1 | -0/+7
2018-05-09 | Make optimizer not complain about parameters with requires_grad=False (#7419) | Domagoj Alagić | 1 | -2/+0
2018-05-04 | Clarify patience in ReduceLROnPlateau docs (#7242) | Richard Zou | 1 | -1/+5
2018-04-27 | fix lbfgs variable names (#7037) | Samuel | 1 | -7/+7
2018-04-19 | added functionality for state_dict/load_state_dict for lr_scheduler ( Fixes: ... | Armen | 1 | -1/+24
2018-04-17 | Codemod to update our codebase to 0.4 standard (#6641) | Tongzhou Wang | 2 | -23/+22
2018-04-16 | Adding initial_accumulator_value parameter to Adagrad (#6616) | Atul Kumar | 1 | -3/+6
2018-04-07 | Fix typos in docs (#6389) | Kento NOZAWA | 1 | -1/+1
2018-04-03 | fix SGD lr check (#6244) | Tongzhou Wang | 1 | -3/+3
2018-03-28 | Block set from param_group['params'] (#6031) | Jiaming Liu | 1 | -0/+8
2018-03-28 | Added parameter range checks for all optimizers (#6000) | lazypanda1 | 9 | -0/+67
2018-03-02 | set default ams param in adam optimizer (#5501) | li-roy | 1 | -0/+5
2018-02-23 | Merge Variable and Tensor classes (#5225) | Sam Gross | 2 | -4/+4
2018-02-20 | Make ReduceLROnPlateau serializable. (#5300) | Marcin Elantkowski | 1 | -13/+23
2018-02-15 | Fixes for docstrings/sphinx rendering of CosineAnnealingLR and Local Response... | Martin Drawitsch | 1 | -1/+1
2018-02-11 | Added check and test for betas parameter in Adam optimizer (#5147) | lazypanda1 | 1 | -0/+4
2018-01-14 | Fix wrong learning rate evaluation in CosineAnnealingLR in Python 2 (#4656) | nguyen-binh-minh | 1 | -1/+1
2018-01-10 | fixed spelling (#4598) | Jon Crall | 1 | -1/+1
2018-01-04 | Fix StepLR docs (#4478) | Richard Zou | 1 | -2/+2
2017-12-30 | fix AMSGrad for SparseAdam (#4314) | Dr. Kashif Rasul | 1 | -21/+3
2017-12-28 | Adding description for Optimizers (#4371) | Vishwak Srinivasan | 1 | -0/+11
2017-12-18 | Replace Variable.volatile with torch.no_grad() (#3970) | Sam Gross | 1 | -5/+2
2017-12-18 | added AMSgrad optimizer to Adam and SparseAdam (#4034) | Dr. Kashif Rasul | 3 | -7/+42
2017-12-18 | Add Cosine Annealing LR Scheduler (#3311) | Kai Arulkumaran | 1 | -0/+38
2017-11-28 | Cast tensors when loading optimizer state dicts (#3658) | Adam Paszke | 2 | -4/+31
2017-11-09 | doc: Normalize all true/false in docstrings to ``True|False`` (#3593) | Ozan Çağlayan | 2 | -3/+3
2017-11-06 | Sparse Adam optimizer for sparse gradients (#3137) | SsnL | 10 | -19/+134
2017-10-17 | dense buffer (#3139) | SsnL | 1 | -1/+2
2017-10-01 | Fix typos | Taehoon Lee | 1 | -1/+1
2017-09-28 | import lr_scheduler in __init__.py | Jiaming Liu | 1 | -0/+1
2017-09-20 | address issue #1488 by using defaultdict in load_state_dict | randxie | 1 | -1/+2
2017-08-30 | Allow param groups to be added to Optimizer dynamically (#2374) | Michael Dietz | 1 | -35/+53
2017-08-25 | Fix typos (#2472) | Taehoon Lee | 1 | -1/+1
2017-08-24 | fix doc of lr_scheduler (#2280) | Tzu-Wei Huang | 1 | -2/+2