Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18008
Differential Revision: D14455117
Pulled By: soumith
fbshipit-source-id: 29d9a2e0b36d72bece0bb1870bbdc740c4d1f9d6
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17851
Differential Revision: D14401791
Pulled By: soumith
fbshipit-source-id: ed6d64d6f5985e7ce76dca1e9e376782736b90f9
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17706
Differential Revision: D14346482
Pulled By: ezyang
fbshipit-source-id: 7c85e51c701f6c0947ad324ef19fafda40ae1cb9
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17607
Differential Revision: D14281291
Pulled By: yf225
fbshipit-source-id: 51209c5540932871e45e54ba6d61b3b7d264aa8c
Summary:
Addresses #15968
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15985
Differential Revision: D13649916
Pulled By: soumith
fbshipit-source-id: a207aea5709a79dba7a6fc541d0a70103f49efff
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/15628
Differential Revision: D13562685
Pulled By: soumith
fbshipit-source-id: 1621fcff465b029142313f717035e935e9159513
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14738
Differential Revision: D13341611
Pulled By: soumith
fbshipit-source-id: 39a49fc60e710cc32a463858c9cee57c182330e2
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/12853
Differential Revision: D10458439
Pulled By: SsnL
fbshipit-source-id: ebd259e598327b0c5d63de6b7c182781fe361fbd
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/12850
Differential Revision: D10457694
Pulled By: SsnL
fbshipit-source-id: fa64964ff6d41625d9383ca96393017230e4ee0f
Summary:
Include commentary on atomicAdd, as this source of nondeterminism is less well known.
There is some discussion in #12207.
Unfortunately, I cannot seem to get the `.. include::` directive working in `_tensor_docs.py` and `_torch_docs.py`. I could use a hint for that.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12217
Differential Revision: D10419739
Pulled By: SsnL
fbshipit-source-id: eecd04fb7486bd9c6ee64cd34859d61a0a97ec4e
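Not part of the commit, but a minimal sketch of the behaviour the commentary describes; `index_add_` is one of the CUDA ops whose use of atomicAdd makes the summation order, and hence float rounding, vary from run to run:
```
import torch

if torch.cuda.is_available():
    src = torch.randn(10000, device='cuda')
    index = torch.randint(0, 10, (10000,), device='cuda')
    # two identical calls can disagree in the low-order bits
    out1 = torch.zeros(10, device='cuda').index_add_(0, index, src)
    out2 = torch.zeros(10, device='cuda').index_add_(0, index, src)
    print(torch.equal(out1, out2))  # may print False
```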
Summary:
goldsborough: modify the docs to match the changes made in #4999.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12158
Differential Revision: D10103964
Pulled By: SsnL
fbshipit-source-id: 1b8692da86aca1a52e8d2e6cea76a5ad1f71e058
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/11821
Differential Revision: D9948292
Pulled By: SsnL
fbshipit-source-id: 01c21c129423c0f7844b403e665a8fe021a9c820
Summary:
"need to be" -> "need not be"
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11571
Differential Revision: D9786001
Pulled By: soumith
fbshipit-source-id: 7cc408f5c8bfcc56d4b5c153646f30e1cec37539
Summary:
This adds a note on making experiments reproducible.
It also adds instructions for building the documentation to `README.md`. Please ping me if I missed any requirements.
I'm not sure what to do about the submodule changes; please advise.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11329
Differential Revision: D9784939
Pulled By: ezyang
fbshipit-source-id: 5c5acbe343d1fffb15bdcb84c6d8d925c2ffcc5e
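As an illustration of the note's advice, the usual seeding recipe looks roughly like the sketch below (the helper name `seed_everything` is mine, not from the note):
```
import random
import numpy as np
import torch

def seed_everything(seed=0):
    # seed every RNG in play; cuDNN autotuning must also be disabled,
    # since it can select different kernels on different runs
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```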
Summary:
This is a grab-bag of documentation formatting fixes.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9359
Differential Revision: D8831400
Pulled By: soumith
fbshipit-source-id: 8dac02303168b2ea365e23938ee528d8e8c9f9b7
Summary:
Commits:
1. In the extension doc, get rid of all references to `Variable`s (closes #6947)
+ also add minor improvements
+ also added a section with links to the cpp extension :) goldsborough
+ removed mentions of `autograd.Function.requires_grad`, as it is not used anywhere and is hardcoded to return `Py_True`.
2. Fix several sphinx warnings
3. Change `*` in equations in `module/conv.py` to `\times`
4. Fix docs for `Fold` and `Unfold`.
+ Added a better shape check for `Fold` (it could previously give bogus results when there are not enough blocks). Added a test for the checks.
5. Fix doc saying `trtrs` is not available for CUDA (#9247)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9239
Reviewed By: soumith
Differential Revision: D8762492
Pulled By: SsnL
fbshipit-source-id: 13cd91128981a94493d5efdf250c40465f84346a
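A minimal sketch of the `Fold`/`Unfold` relationship whose docs item 4 fixes (my example, not from the PR):
```
import torch
import torch.nn as nn

x = torch.randn(1, 3, 4, 4)
unfold = nn.Unfold(kernel_size=2)                 # extract 2x2 sliding blocks
blocks = unfold(x)                                # shape (1, 3*2*2, 9)
fold = nn.Fold(output_size=(4, 4), kernel_size=2)
y = fold(blocks)                                  # overlapping blocks are summed
print(blocks.shape, y.shape)
```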
* Clarify mp note about sharing a tensor's grad field.
* Address comments
* Address comments
* Docs for gradcheck and gradgradcheck; expose gradgradcheck
* address comments
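A short usage sketch for the newly documented and exposed checkers (my example; double-precision inputs are used because the checks rely on finite differences):
```
import torch
from torch.autograd import gradcheck, gradgradcheck

inputs = (torch.randn(4, 5, dtype=torch.double, requires_grad=True),)
assert gradcheck(torch.sigmoid, inputs, eps=1e-6, atol=1e-4)
assert gradgradcheck(torch.sigmoid, inputs)
```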
* Fix Windows doc for import error
* Fix doc again
* Fix wrong format
* [docs] Update broadcasting and cuda semantics notes
* Update multiprocessing.rst
* address comments
* Address comments
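For reference, a minimal sketch of the rule the updated broadcasting note covers (dimensions are matched right to left, with size-1 dimensions expanding):
```
import torch

x = torch.empty(5, 3, 4, 1)
y = torch.empty(   3, 1, 1)
# y is broadcast against the trailing dimensions of x
z = x + y
print(z.shape)  # torch.Size([5, 3, 4, 1])
```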
* Add Windows doc
* some minor fixes
* Fix typo
* more minor fixes
* Fixes on dataloader
* Link FAQ section on workers returning same random numbers in DataLoader docs
* explicitly mention section names
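A minimal sketch of the pattern the linked FAQ section recommends (the `worker_init_fn` body here is my illustration, not quoted from the docs):
```
import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset

# without a worker_init_fn, forked workers share the parent's NumPy RNG
# state; torch.initial_seed() is already distinct per worker, so reuse it
def worker_init_fn(worker_id):
    np.random.seed(torch.initial_seed() % 2**32)

dataset = TensorDataset(torch.randn(100, 3))
loader = DataLoader(dataset, batch_size=10, num_workers=4,
                    worker_init_fn=worker_init_fn)
```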
* add total_length to pad_packed_sequence; add example on how to use pack->rnn->unpack with DP
* address comments
* fix typo
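A minimal sketch of the pack -> rnn -> unpack pattern with `total_length` (the shapes are my own illustration, not the example from the PR):
```
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

padded = torch.randn(3, 5, 8)        # (batch, time, feature)
lengths = [5, 3, 2]                  # sorted, longest first
rnn = nn.RNN(8, 16, batch_first=True)

packed = pack_padded_sequence(padded, lengths, batch_first=True)
output, _ = rnn(packed)
# total_length pins the unpacked time dimension, so every replica in
# DataParallel returns tensors of the same size
output, _ = pad_packed_sequence(output, batch_first=True, total_length=5)
print(output.shape)  # torch.Size([3, 5, 16])
```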
* Fix typo
* Fix typo
* Update faq.rst
codes (#5936)
This PR enables users to print extra information from their subclassed nn.Module.
For now I simply insert the user-defined string at the end of the module name, which should be discussed in this PR.
Before this PR, users had to redefine __repr__ and copy and paste the source code from Module.
* Add support for extra information on Module
* Rewrite the repr method of Module
* Fix flake8
* Change the __repr__ to get_extra_repr in Linear
* Fix extra new-line for empty line
* Add test for __repr__ method
* Fix bug of block string indent
* Add indent for multi-line repr test.
* Address review comments
* Update tutorial for creating nn.Module
* Fix flake8, add extra_repr of bilinear
* Refactor DropoutNd
* Change to extra_repr in some Modules
* Fix flake8
* Refactor padding modules
* Refactor pooling module
* Fix typo
* Change to extra_repr
* Fix bug for GroupNorm
* Fix bug for LayerNorm
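A minimal sketch of the `extra_repr` hook this PR introduces (the `Scale` module is a hypothetical example of mine):
```
import torch.nn as nn

class Scale(nn.Module):
    def __init__(self, factor):
        super(Scale, self).__init__()
        self.factor = factor

    def forward(self, x):
        return x * self.factor

    def extra_repr(self):
        # appended inside the parentheses of this module's repr line
        return 'factor={}'.format(self.factor)

print(Scale(2.0))  # Scale(factor=2.0)
```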
* Deprecate ctx.saved_variables via a Python warning.
Advises replacing saved_variables with saved_tensors.
Also replaces all instances of ctx.saved_variables with ctx.saved_tensors in the
codebase.
Test by running:
```
import torch
from torch.autograd import Function

class MyFunction(Function):
    @staticmethod
    def forward(ctx, tensor1, tensor2):
        ctx.save_for_backward(tensor1, tensor2)
        return tensor1 + tensor2

    @staticmethod
    def backward(ctx, grad_output):
        # accessing ctx.saved_variables triggers the deprecation warning
        var1, var2 = ctx.saved_variables
        return grad_output, grad_output

x = torch.randn((3, 3), requires_grad=True)
y = torch.randn((3, 3), requires_grad=True)
MyFunction.apply(x, y).sum().backward()
```
and assert the warning shows up.
* Address comments
* Add deprecation test for saved_variables
* Fix LN initialization; Support single int normalized_shape
* disable docstring inheritance
* fix sphinx warnings
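A minimal sketch of the new single-int `normalized_shape` form next to the list form (my example):
```
import torch
import torch.nn as nn

x = torch.randn(20, 5, 10)
ln_last = nn.LayerNorm(10)       # single int: normalize over the last dim
ln_two = nn.LayerNorm([5, 10])   # list: normalize over the last two dims
print(ln_last(x).shape, ln_two(x).shape)
```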
* Add a FAQ, for now just 'out of memory' advice.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Updates based on comments.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* minor copyedit
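A minimal sketch of the FAQ's central piece of advice, not to accumulate autograd history across iterations (the toy model and data are mine):
```
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
data = [(torch.randn(4, 10), torch.randn(4, 1)) for _ in range(5)]

total_loss = 0.0
for inp, target in data:
    optimizer.zero_grad()
    loss = criterion(model(inp), target)
    loss.backward()
    optimizer.step()
    # accumulating `loss` itself would keep every iteration's graph alive;
    # `.item()` converts it to a plain Python float first
    total_loss += loss.item()
```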
* add doc noting that empty_cache won't increase the amount of memory available
* typo
* gpu mem allocated
* add test
* addressed some of @apaszke's comments
* cache stats
* add more comments about test
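A minimal usage sketch for the memory-introspection bindings this commit adds (note that `memory_cached` was renamed `memory_reserved` in later releases):
```
import torch

if torch.cuda.is_available():
    x = torch.randn(1024, 1024, device='cuda')
    print(torch.cuda.memory_allocated())      # bytes currently held by tensors
    print(torch.cuda.max_memory_allocated())  # peak allocation so far
    print(torch.cuda.memory_cached())         # bytes held by the caching allocator
```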
* Add empty_cache binding
* cuda.empty_cache document
* update docs
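A minimal usage sketch for the new binding; as the docs added above note, `empty_cache` does not increase the memory available to PyTorch itself:
```
import torch

if torch.cuda.is_available():
    x = torch.randn(1024, 1024, device='cuda')
    del x
    # returns cached blocks to the driver so other GPU applications can
    # use them; PyTorch gains no additional usable memory from this
    torch.cuda.empty_cache()
```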