path: root/torch/nn
| Age | Commit message | Author | Files | Lines |
|---|---|---|---|---|
| 2019-02-12 | Add module and name to func created with _jit_internal.boolean_dispatch (#16922) | Theo | 1 | -8/+24 |
| 2019-02-11 | optionally zero infinite losses in CTCLoss (#16199) | Thomas Viehmann | 2 | -5/+18 |
| 2019-02-10 | Enhance the documentation for torch.nn.DataParallel (#15993) | Derek Kim | 1 | -4/+4 |
| 2019-02-10 | fixed LogSigmoid math string that wasn't rendering in documentation (#16900) | Travis Johnston | 1 | -1/+2 |
| 2019-02-07 | Remove undefined tensor in jit script (#16379) | Wanchao Liang | 1 | -1/+1 |
| 2019-02-06 | Typofix (#16800) | Edward Yang | 1 | -1/+1 |
| 2019-02-05 | Use torch.zeros for nn.LSTM | David Riazati | 1 | -3/+3 |
| 2019-02-01 | fix tracing using a dictionary as input (#16616) | Michael Suo | 1 | -1/+0 |
| 2019-01-31 | Improving docs for MultiLabelSoftMarginLoss (#16644) | James Malcolm | 1 | -2/+2 |
| 2019-01-30 | Allow list and tuples to be passed as output_size to max_unpool1d (#16489) | vishwakftw | 1 | -1/+6 |
| 2019-01-30 | Fix the flake8 linter | Lu Fang | 1 | -1/+1 |
| 2019-01-26 | Switch to CUDA implementation if batch size >= 65536 for affine_grid (#16403) | vishwakftw | 1 | -1/+1 |
| 2019-01-24 | Remove unneeded manual unwrap optionals (#16245) | Elias Ellison | 4 | -72/+32 |
| 2019-01-17 | Add comment to explain rnn bias vectors (#15843) | Aaron Jaech | 1 | -0/+2 |
| 2019-01-17 | Cleanup gumbel_softmax (#13339) | Egil Martinsson | 1 | -59/+42 |
| 2019-01-17 | add if in register_buffer like register_parameters (#16110) | FrankHui | 1 | -1/+4 |
| 2019-01-17 | Revert batched pdist, improve existing kernel, add test (#15901) | Gregory Chanan | 1 | -20/+13 |
| 2019-01-17 | Unify the shape notation for all of the pytorch modules (#15741) | Sasha Rush | 10 | -39/+101 |
| 2019-01-17 | Enhance the documentation for DistributedDataParallel from torch.nn.parallel.... | Derek Kim | 1 | -4/+2 |
| 2019-01-16 | Port the backend of FractionalMaxPool3d from TH to ATen (#15575) | Chandler Zuo | 3 | -3/+133 |
| 2019-01-15 | Constant prop prim::None (#15979) | Elias Ellison | 1 | -2/+2 |
| 2019-01-15 | Miscellaneous broken RSTs fixed (#16033) | Derek Kim | 1 | -2/+2 |
| 2019-01-14 | Fix broken rst of torch.nn.utils.spectral_norm and others (#15995) | Derek Kim | 1 | -5/+5 |
| 2019-01-14 | Improved the documentation for torch.nn.functional.pad (#15984) | Derek Kim | 1 | -7/+15 |
| 2019-01-13 | doc fixes (#15990) | surgan12 | 1 | -2/+2 |
| 2019-01-11 | Fixed typo in batchnorm docstrings | James Webber | 1 | -3/+3 |
| 2019-01-10 | Trivial typo fixings in nn.functional dropout* docstrings (#15951) | Derek Kim | 1 | -3/+3 |
| 2019-01-09 | Porting legacy reflection_pad2d to ATen | Shen Li | 1 | -1/+0 |
| 2019-01-08 | crelu mentioned (#15825) | surgan12 | 1 | -0/+7 |
| 2019-01-08 | Add element-wise multiplication in formulas (#15834) | marka17 | 1 | -4/+4 |
| 2019-01-08 | use all_weights instead of _parameters in _flat_weights in rnn (#15766) | Natalia Gimelshein | 1 | -1/+1 |
| 2019-01-04 | Port replication_pad2d and replication_pad3d to ATen (#15538) | Lin Huang | 1 | -2/+0 |
| 2019-01-03 | Port legacy reflection_pad1d to ATen (#15480) | Shen Li | 1 | -1/+0 |
| 2018-12-26 | add from_pretrained method to EmbeddingBag (#15273) | David Pollack | 1 | -10/+64 |
| 2018-12-24 | Port replication_pad1d to ATen (#15507) | Lin Huang | 1 | -1/+0 |
| 2018-12-21 | Fixed trivial typos in Dropout2D and Dropout3D classes (#15200) | derek | 1 | -2/+2 |
| 2018-12-20 | Add option to automatically handle unsorted variable-length sequences in RNNs... | Richard Zou | 2 | -30/+120 |
| 2018-12-20 | Doc improvement on DDP (#15440) | Teng Li | 1 | -0/+7 |
| 2018-12-20 | Fix type annotation error. (#15448) | Edward Yang | 1 | -1/+1 |
| 2018-12-20 | Add at::one_hot (#15208) | Gao, Xiang | 1 | -0/+49 |
| 2018-12-20 | Add support for batched pdist (#12302) | Erik Brinkman | 1 | -13/+20 |
| 2018-12-18 | Add RNNCell modules to Script standard library (#14695) | David Riazati | 1 | -19/+44 |
| 2018-12-18 | Remove fully qualified weak script names (#15364) | David Riazati | 2 | -86/+87 |
| 2018-12-18 | Add (Un)Fold modules to standard library (#14759) | David Riazati | 2 | -4/+20 |
| 2018-12-17 | Fix _apply in nn.Module (#15305) | Peter Goldsborough | 2 | -19/+18 |
| 2018-12-17 | Port nn fold and unfold to c++ | Roy Li | 3 | -92/+4 |
| 2018-12-17 | Bicubic interpolation for nn.functional.interpolate (#9849) | David Riazati | 2 | -17/+21 |
| 2018-12-13 | Python <-> C++ Frontend inter-op (#13481) | Peter Goldsborough | 2 | -5/+96 |
| 2018-12-12 | Move adaptive avg pooling 2d to ATen native (#14714) | Immanuel Alexander | 1 | -1/+0 |
| 2018-12-10 | Update pooling.py (#14998) | paland3 | 1 | -1/+1 |