author    Tongzhou Wang <ssnl@users.noreply.github.com>  2019-04-20 21:36:54 -0700
committer Facebook Github Bot <facebook-github-bot@users.noreply.github.com>  2019-04-20 21:39:59 -0700
commit    6d307db5b465b9679f8bcc6b2e075db5782b2906 (patch)
tree      b3ef8a14893f64348ef013a7b28a39a5c1caf955
parent    26f1c6d4d421995cb054ccc268a926d91551503c (diff)
download  pytorch-6d307db5b465b9679f8bcc6b2e075db5782b2906.tar.gz
          pytorch-6d307db5b465b9679f8bcc6b2e075db5782b2906.tar.bz2
          pytorch-6d307db5b465b9679f8bcc6b2e075db5782b2906.zip
Move cuFFT plan cache note outside Best Practices (#19538)
Summary: I mistakenly put it there.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/19538
Differential Revision: D15026500
Pulled By: soumith
fbshipit-source-id: 0c13499571fdfd789c3bd1c4b58abd870725d422
-rw-r--r--  docs/source/notes/cuda.rst  56
1 file changed, 28 insertions(+), 28 deletions(-)
diff --git a/docs/source/notes/cuda.rst b/docs/source/notes/cuda.rst
index 2ecffc004e..4fd02947be 100644
--- a/docs/source/notes/cuda.rst
+++ b/docs/source/notes/cuda.rst
@@ -123,6 +123,34 @@ cached memory from PyTorch so that those can be used by other GPU applications.
However, the GPU memory occupied by tensors will not be freed, so it cannot
increase the amount of GPU memory available for PyTorch.
+.. _cufft-plan-cache:
+
+cuFFT plan cache
+----------------
+
+For each CUDA device, an LRU cache of cuFFT plans is used to speed up repeatedly
+running FFT methods (e.g., :func:`torch.fft`) on CUDA tensors of the same
+geometry with the same configuration. Because some cuFFT plans may allocate GPU
+memory, these caches have a maximum capacity.
+
+You may control and query the properties of the cache of the current device
+with the following APIs:
+
+* ``torch.backends.cuda.cufft_plan_cache.max_size`` gives the capacity of the
+ cache (default is 4096 on CUDA 10 and newer, and 1023 on older CUDA versions).
+ Setting this value directly modifies the capacity.
+
+* ``torch.backends.cuda.cufft_plan_cache.size`` gives the number of plans
+ currently residing in the cache.
+
+* ``torch.backends.cuda.cufft_plan_cache.clear()`` clears the cache.
+
+To control and query plan caches of a non-default device, you can index the
+``torch.backends.cuda.cufft_plan_cache`` object with either a :class:`torch.device`
+object or a device index, and access one of the above attributes. E.g., to set
+the capacity of the cache for device ``1``, one can write
+``torch.backends.cuda.cufft_plan_cache[1].max_size = 10``.
+
Best practices
--------------
@@ -272,31 +300,3 @@ There are significant caveats to using CUDA models with
:mod:`~torch.multiprocessing`; unless care is taken to meet the data handling
requirements exactly, it is likely that your program will have incorrect or
undefined behavior.
-
-.. _cufft-plan-cache:
-
-cuFFT plan cache
-^^^^^^^^^^^^^^^^
-
-For each CUDA device, an LRU cache of cuFFT plans is used to speed up repeatedly
-running FFT methods (e.g., :func:`torch.fft`) on CUDA tensors of the same
-geometry with the same configuration. Because some cuFFT plans may allocate GPU
-memory, these caches have a maximum capacity.
-
-You may control and query the properties of the cache of the current device
-with the following APIs:
-
-* ``torch.backends.cuda.cufft_plan_cache.max_size`` gives the capacity of the
- cache (default is 4096 on CUDA 10 and newer, and 1023 on older CUDA versions).
- Setting this value directly modifies the capacity.
-
-* ``torch.backends.cuda.cufft_plan_cache.size`` gives the number of plans
- currently residing in the cache.
-
-* ``torch.backends.cuda.cufft_plan_cache.clear()`` clears the cache.
-
-To control and query plan caches of a non-default device, you can index the
-``torch.backends.cuda.cufft_plan_cache`` object with either a :class:`torch.device`
-object or a device index, and access one of the above attributes. E.g., to set
-the capacity of the cache for device ``1``, one can write
-``torch.backends.cuda.cufft_plan_cache[1].max_size = 10``.
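
A minimal sketch of the cache APIs this note documents, assuming a CUDA-enabled
PyTorch build of the same era (where :func:`torch.fft` is a function taking
``signal_ndim``) and, for the last line, a machine with at least two GPUs::

    import torch

    # Query and set the capacity of the current device's plan cache.
    print(torch.backends.cuda.cufft_plan_cache.max_size)
    torch.backends.cuda.cufft_plan_cache.max_size = 32

    # Running an FFT inserts its plan into (or reuses it from) the cache.
    x = torch.randn(64, 64, 2, device='cuda')  # last dim holds (real, imag)
    y = torch.fft(x, signal_ndim=2)

    print(torch.backends.cuda.cufft_plan_cache.size)  # plans currently cached
    torch.backends.cuda.cufft_plan_cache.clear()      # evict all cached plans

    # Per-device access: index with a device index or torch.device object.
    torch.backends.cuda.cufft_plan_cache[1].max_size = 10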