author     Thibault FEVRY <ThibaultFevry@gmail.com>   2018-02-12 15:08:27 -0500
committer  Adam Paszke <adam.paszke@gmail.com>        2018-02-12 21:08:27 +0100
commit     e39e86f11980297db2b48f41d625561bae1852f5
tree       515d12f7cce32108852e18da6e516d00ff0b0e04 /docs/source/notes
parent     f38b6f611e0e841496c7d9ad901e07296a253a0d
Remove deprecated references to volatile (#5193)
Diffstat (limited to 'docs/source/notes')
-rw-r--r--  docs/source/notes/autograd.rst   | 5
-rw-r--r--  docs/source/notes/extending.rst  | 3
2 files changed, 3 insertions, 5 deletions
diff --git a/docs/source/notes/autograd.rst b/docs/source/notes/autograd.rst
index c04d74ff72..fcf14b613a 100644
--- a/docs/source/notes/autograd.rst
+++ b/docs/source/notes/autograd.rst
@@ -11,9 +11,8 @@ programs, and can aid you in debugging.
 Excluding subgraphs from backward
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-Every Variable has two flags: :attr:`requires_grad` and :attr:`volatile`.
-They both allow for fine grained exclusion of subgraphs from gradient
-computation and can increase efficiency.
+Every Variable has a flag: :attr:`requires_grad` that allows for fine grained
+exclusion of subgraphs from gradient computation and can increase efficiency.
 
 .. _excluding-requires_grad:
 
diff --git a/docs/source/notes/extending.rst b/docs/source/notes/extending.rst
index e232bd59e9..6b4d3bb2b5 100644
--- a/docs/source/notes/extending.rst
+++ b/docs/source/notes/extending.rst
@@ -155,8 +155,7 @@ This is how a ``Linear`` module can be implemented::
             # they won't appear in .parameters() (doesn't apply to buffers), and
             # won't be converted when e.g. .cuda() is called. You can use
             # .register_buffer() to register buffers.
-            # nn.Parameters can never be volatile and, different than Variables,
-            # they require gradients by default.
+            # nn.Parameters require gradients by default.
             self.weight = nn.Parameter(torch.Tensor(output_features, input_features))
             if bias:
                 self.bias = nn.Parameter(torch.Tensor(output_features))
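For readers skimming the commit, a minimal sketch of the behavior the reworded autograd note describes, using the pre-0.4 Variable API this 2018-era documentation targets (the variable names are illustrative, not taken from the docs):

    import torch
    from torch.autograd import Variable

    # Variables default to requires_grad=False. An op's output requires a
    # gradient only if at least one of its inputs does, so subgraphs that
    # touch no gradient-requiring input are excluded from backward.
    x = Variable(torch.randn(5, 5))
    y = Variable(torch.randn(5, 5))
    z = Variable(torch.randn(5, 5), requires_grad=True)

    a = x + y
    print(a.requires_grad)  # False: no input requires a gradient
    b = a + z
    print(b.requires_grad)  # True: z requires a gradient, so b does too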
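The extending.rst hunk keeps only the claim that nn.Parameter requires gradients by default, unlike a plain Variable. That default can be checked with a short snippet, again a sketch under the same pre-0.4 API assumption:

    import torch
    import torch.nn as nn
    from torch.autograd import Variable

    w = nn.Parameter(torch.Tensor(3, 5))  # Parameter: requires_grad defaults to True
    v = Variable(torch.Tensor(3, 5))      # Variable: requires_grad defaults to False
    print(w.requires_grad, v.requires_grad)  # True False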