path: root/torch
Date       | Commit message                                                                | Author              | Files | Lines
2018-02-23 | Improve CUDA extension support (#5324)                                        | Peter Goldsborough  | 1     | -23/+50
2018-02-22 | add control flow to interpreter (#5293)                                       | Zachary DeVito      | 13    | -328/+781
2018-02-22 | Traverse sub-blocks in JIT passes (#5329)                                     | Adam Paszke         | 10    | -39/+91
2018-02-22 | support batch-first in ONNX export of padded sequences (#5360)                | anderspapitto       | 1     | -1/+8
2018-02-22 | [ready] Layer Normalization (#4922)                                           | Tongzhou Wang       | 7     | -152/+436
2018-02-21 | Improve sparse variable printing. (#5335)                                     | gchanan             | 1     | -11/+15
2018-02-21 | DDP: 10% of NCCL backend perf improvements with mixed-prec support (#5064)    | Teng Li             | 1     | -22/+103
2018-02-21 | Various dtype improvements. (#5321)                                           | gchanan             | 5     | -21/+71
2018-02-21 | Improve Function interface (#5221)                                            | Peter Goldsborough  | 30    | -409/+643
2018-02-21 | Fix the bug of only processing one attribute (#5334)                          | bddppq              | 1     | -3/+5
2018-02-21 | Added mixed-precision support in distributed training (#4891)                 | Teng Li             | 1     | -32/+59
2018-02-20 | add guards when source of container cannot be retrieved (#5317)               | Soumith Chintala    | 1     | -1/+7
2018-02-20 | setup.py and cmake improvements (#5269)                                       | Edward Z. Yang      | 1     | -19/+47
2018-02-20 | Add numpy-style dtypes to Variable factories. (#5245)                         | gchanan             | 14    | -5/+338
2018-02-20 | Make ReduceLROnPlateau serializable. (#5300)                                  | Marcin Elantkowski  | 1     | -13/+23
2018-02-19 | Implement torch.isnan (#5273)                                                 | Choongwoo Han       | 1     | -1/+23
2018-02-19 | Configurable flushing denormal numbers on CPU (#5294)                         | Choongwoo Han       | 2     | -0/+41
2018-02-18 | Emit ternary if in script compiler (#5291)                                    | James Reed          | 1     | -2/+30
2018-02-17 | Tweak 'detach' docstring. (#5292)                                             | Jon Malmaud         | 1     | -2/+2
2018-02-17 | Fix ASAN detected global buffer overflows in autograd (#5289)                 | Vedanuj Goswami     | 1     | -4/+4
2018-02-17 | Fixes UB when using legacy python functions and mark_non_differentiable (#5275) | Richard Zou       | 1     | -1/+1
2018-02-16 | Use TORCH_EXTENSION_NAME macro to avoid mismatched module/extension name (#5277) | Peter Goldsborough | 1  | -2/+7
2018-02-16 | check attribute existence in SpatialFullConvolution (#5255)                   | Kato Tetsuro        | 1     | -4/+4
2018-02-16 | while and if for experimental JIT script (#5176)                              | James Reed          | 3     | -30/+250
2018-02-15 | Fix typo in DataParallel docs (#5268)                                         | Richard Zou         | 2     | -2/+2
2018-02-15 | Add Python frontend to the JIT (#5190)                                        | Adam Paszke         | 8     | -1/+615
2018-02-15 | Add CUDA support for JIT-compiling C++ extensions (#5226)                     | Peter Goldsborough  | 1     | -9/+61
2018-02-15 | add reduce=True arg to MultiLabelSoftMarginLoss (#5097)                       | li-roy              | 2     | -6/+28
2018-02-15 | Fixes for docstrings/sphinx rendering of CosineAnnealingLR and Local Response... | Martin Drawitsch | 2     | -26/+27
2018-02-14 | Fix GraphExecutor and add more AD formulas (#5215)                            | Adam Paszke         | 10    | -30/+243
2018-02-14 | Include __delitem__ for Sequential (#5233)                                    | Vishwak Srinivasan  | 1     | -5/+27
2018-02-13 | Allow zero-dim tensors to be bound to at::Scalar (#5142)                      | Sam Gross           | 3     | -17/+34
2018-02-14 | Ensure Distribution.sample() result is detached (#5086)                       | Fritz Obermeyer     | 7     | -12/+18
2018-02-13 | CUDA support for C++ extensions with setuptools (#5207)                       | Peter Goldsborough  | 1     | -5/+127
2018-02-13 | Implement symbolic for slice operation (#5204)                                | James Reed          | 1     | -0/+6
2018-02-13 | Clarify output shapes of reduce=False losses (#5082)                          | Richard Zou         | 2     | -14/+15
2018-02-13 | make explicit about keyword-onlyness of `out` (#5165)                         | Yimeng Zhang        | 1     | -0/+4
2018-02-13 | add reduce=True arg to SoftMarginLoss (#5071)                                 | li-roy              | 3     | -8/+30
2018-02-12 | Improve Variable interface (#5127)                                            | Peter Goldsborough  | 30    | -554/+883
2018-02-13 | Make Python functions respect grad mode (#5184)                               | Adam Paszke         | 1     | -15/+29
2018-02-12 | Fix compiler error. (#5179)                                                   | Shrinidhi KL        | 1     | -2/+2
2018-02-12 | Allow and warn when indexing a zero-dim Variable (#5114)                      | Sam Gross           | 3     | -17/+30
2018-02-12 | Implement Variable.new(...) overloads for sparse tensors (#5117)              | Sam Gross           | 1     | -4/+41
2018-02-12 | Replace NULL with nullptr in autograd (#5162)                                 | Peter Goldsborough  | 11    | -115/+115
2018-02-12 | Move EmbeddingBag into ATen (#4856)                                           | cpuhrsch            | 2     | -13/+41
2018-02-12 | #4990, Makes Windows build fail quicker (#5175)                               | peterjc123          | 1     | -1/+3
2018-02-12 | Fix compound assignment in JIT script (#5178)                                 | Adam Paszke         | 2     | -7/+10
2018-02-12 | DDP: coalescing many little broadcasts to improve performance (#4978)         | Teng Li             | 1     | -9/+29
2018-02-12 | Fix sign error in TransformedDistribution.cdf() and .icdf() (#5172)           | Fritz Obermeyer     | 2     | -12/+55
2018-02-11 | Added check and test for betas parameter in Adam optimizer (#5147)            | lazypanda1          | 1     | -0/+4
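One of the entries above, "Implement torch.isnan (#5273)", added an element-wise NaN check to the Python API. A minimal usage sketch follows (assuming a PyTorch install; note the return dtype of the mask has changed across releases — early versions returned a uint8 mask, current ones a bool tensor):

```python
import math

import torch

# A tensor containing an ordinary value, a NaN, and an infinity.
x = torch.tensor([1.0, float("nan"), math.inf])

# torch.isnan produces an element-wise mask that is true only where
# the value is NaN; infinities and finite values map to false.
mask = torch.isnan(x)

print(mask)
```

This is more convenient than the classic `x != x` idiom for detecting NaNs, which relies on NaN comparing unequal to itself.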