author     SsnL <tongzhou.wang.1994@gmail.com>  2019-01-14 07:28:50 -0800
committer  Facebook Github Bot <facebook-github-bot@users.noreply.github.com>  2019-01-14 07:31:51 -0800
commit     300dcc3b96b2b8672570c4c9e6038cb5cff75afd (patch)
tree       b0a64eebc9a9730fccf50a19b9a4b4739808666f /docs/source/notes
parent     7c08f1083e56ce7e9799a15b36947c46052f0058 (diff)
Add cuda.reset_max_memory_* (#15985)
Summary: Addresses #15968
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15985
Differential Revision: D13649916
Pulled By: soumith
fbshipit-source-id: a207aea5709a79dba7a6fc541d0a70103f49efff
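For illustration, a minimal usage sketch of the new reset functions (not part of this diff; assumes a CUDA-capable device). torch.cuda.reset_max_memory_cached works analogously for the caching allocator's peak:

    import torch

    device = torch.device('cuda')

    x = torch.randn(1024, 1024, device=device)      # allocate some tensors
    y = x.matmul(x)                                 # peak usage grows here
    print(torch.cuda.max_memory_allocated(device))  # peak so far, in bytes

    torch.cuda.reset_max_memory_allocated(device)   # restart peak tracking
    z = torch.randn(256, 256, device=device)
    print(torch.cuda.max_memory_allocated(device))  # peak since the reset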
Diffstat (limited to 'docs/source/notes')
-rw-r--r--  docs/source/notes/cuda.rst | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/docs/source/notes/cuda.rst b/docs/source/notes/cuda.rst
index 212f68e694..7cf2fe6ad3 100644
--- a/docs/source/notes/cuda.rst
+++ b/docs/source/notes/cuda.rst
@@ -74,9 +74,9 @@ You can force synchronous computation by setting environment variable
operation is actually executed, so the stack trace does not show where it was
requested.)
-As an exception, several functions such as :meth:`~torch.Tensor.to` and
-:meth:`~torch.Tensor.copy_` admit an explicit :attr:`non_blocking` argument,
-which lets the caller bypass synchronization when it is unnecessary.
+As an exception, several functions such as :meth:`~torch.Tensor.to` and
+:meth:`~torch.Tensor.copy_` admit an explicit :attr:`non_blocking` argument,
+which lets the caller bypass synchronization when it is unnecessary.
Another exception is CUDA streams, explained below.
CUDA streams
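For context, a hedged sketch of the non_blocking argument mentioned in the hunk above (not part of this diff; assumes a CUDA device). Pinned host memory is what allows the copy to overlap with other work:

    import torch

    # Page-locked (pinned) host memory is required for a truly
    # asynchronous host-to-device copy.
    cpu_tensor = torch.randn(1024, 1024).pin_memory()

    # Returns immediately; the copy proceeds in the background.
    gpu_tensor = cpu_tensor.to('cuda', non_blocking=True)

    # ... enqueue independent GPU work here; it may overlap the copy ...

    torch.cuda.synchronize()  # wait until the copy and all GPU work finish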
@@ -118,7 +118,7 @@ unused memory managed by the allocator will still show as if used in
:meth:`~torch.cuda.max_memory_allocated` to monitor memory occupied by
tensors, and use :meth:`~torch.cuda.memory_cached` and
:meth:`~torch.cuda.max_memory_cached` to monitor memory managed by the caching
-allocator. Calling :meth:`~torch.cuda.empty_cache` can release all **unused**
+allocator. Calling :meth:`~torch.cuda.empty_cache` releases all **unused**
cached memory from PyTorch so that it can be used by other GPU applications.
However, the GPU memory occupied by tensors will not be freed, so this cannot
increase the amount of GPU memory available for PyTorch.
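To make the allocated-versus-cached distinction concrete, a minimal sketch (not part of this diff; assumes a CUDA device):

    import torch

    x = torch.randn(4096, 4096, device='cuda')
    print(torch.cuda.memory_allocated())  # bytes held by live tensors
    print(torch.cuda.memory_cached())     # bytes reserved by the allocator

    del x                                 # frees the tensor ...
    print(torch.cuda.memory_allocated())  # ... so this drops,
    print(torch.cuda.memory_cached())     # but the block stays cached

    torch.cuda.empty_cache()              # hand unused cached blocks back
    print(torch.cuda.memory_cached())     # now (close to) zero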