author    Tongzhou Wang <tongzhou.wang.1994@gmail.com>  2018-11-07 11:56:09 -0800
committer Facebook Github Bot <facebook-github-bot@users.noreply.github.com>  2018-11-07 11:58:12 -0800
commit  044d00516ccd6572c0d6ab6d54587155b02a3b86 (patch)
tree    a472cbe9f38c960613082c0905ac42e5bcd52490 /docs
parent  afc7dbd58671f14a1bb68714dbd5de4108a7830b (diff)
Rename DistBackend -> Backend (#11830)
Summary: Also adds docs for get_backend, Backend, and reduce_op. Fixes #11803. cc pietern apaszke.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11830
Differential Revision: D9927991
Pulled By: SsnL
fbshipit-source-id: a2ffb70826241ba84264f36f2cb173e00b19af48
Diffstat (limited to 'docs')
-rw-r--r--  docs/source/distributed.rst | 25
1 file changed, 20 insertions(+), 5 deletions(-)
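The ``Backend`` class this patch documents is an enum-like class of lowercase backend-name strings. As a rough standalone sketch of how such a class can behave (the class name, validation rule, and backend list here are illustrative assumptions, not the actual ``torch.distributed`` implementation):

```python
# Standalone sketch of an enum-like string class, similar in spirit to
# torch.distributed.Backend. Names and behavior are illustrative only.

class BackendSketch(str):
    """Accepts a backend name case-insensitively and stores it lowercased."""

    GLOO = "gloo"
    NCCL = "nccl"
    MPI = "mpi"

    _KNOWN = ("gloo", "nccl", "mpi")

    def __new__(cls, name):
        value = name.lower()
        if value not in cls._KNOWN:
            raise ValueError("Invalid backend: {!r}".format(name))
        return str.__new__(cls, value)


# Case-insensitive construction normalizes to the lowercase constant,
# so the result compares equal to the class attribute.
backend = BackendSketch("GLOO")
print(backend)                        # prints "gloo"
print(backend == BackendSketch.GLOO)  # prints "True"
```

Because instances are plain strings, such a class can be passed anywhere a backend-name string is expected, which is one way to keep an API backward compatible while still validating input.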
diff --git a/docs/source/distributed.rst b/docs/source/distributed.rst
index 49d9d96ee6..d5cea86df5 100644
--- a/docs/source/distributed.rst
+++ b/docs/source/distributed.rst
@@ -75,6 +75,10 @@ joined.
.. autofunction:: init_process_group
+.. autoclass:: Backend
+
+.. autofunction:: get_backend
+
.. autofunction:: get_rank
.. autofunction:: get_world_size
@@ -107,7 +111,8 @@ package. ``group_name`` is deprecated as well.
import torch.distributed as dist
# Use address of one of the machines
- dist.init_process_group(backend, init_method='tcp://10.1.1.20:23456', rank=args.rank, world_size=4)
+ dist.init_process_group(backend, init_method='tcp://10.1.1.20:23456',
+ rank=args.rank, world_size=4)
Shared file-system initialization
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -118,7 +123,7 @@ with ``file://`` and contain a path to a non-existent file (in an existing
directory) on a shared file system. File-system initialization will automatically
create that file if it doesn't exist, but will not delete the file. Therefore, it
is your responsibility to make sure that the file is cleaned up before the next
-init_process_group call on the same file path/name.
+:func:`init_process_group` call on the same file path/name.
Note that automatic rank assignment is not supported anymore in the latest
distributed package and ``group_name`` is deprecated as well.
@@ -130,11 +135,11 @@ distributed package and ``group_name`` is deprecated as well.
.. warning::
This method does not clean up and remove the file and it is your responsibility
to remove the file at the end of the training. This is especially important
- if you plan to call init_process_group multiple times on the same file name.
+ if you plan to call :func:`init_process_group` multiple times on the same file name.
In other words, if the file is not removed/cleaned up and you call
- init_process_group again on that file, it is unexpected behavior and will cause
+ :func:`init_process_group` again on that file, it is unexpected behavior and will cause
failures. The rule of thumb here is that, make sure that the file is non-existent or
- empty everytime init_process_group is called.
+ empty everytime :func:`init_process_group` is called.
::
@@ -229,6 +234,16 @@ Collective functions
.. autofunction:: barrier
+.. autoclass:: ReduceOp
+
+.. class:: reduce_op
+
+ Deprecated enum-like class for reduction operations: ``SUM``, ``PRODUCT``,
+ ``MIN``, and ``MAX``.
+
+ :class:`~torch.distributed.ReduceOp` is recommended to use instead.
+
+
Multi-GPU collective functions
------------------------------
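The hunk above documents ``reduce_op`` as a deprecated enum-like class whose replacement is ``ReduceOp``. As a rough standalone sketch of that kind of deprecation shim (the names, members, and warning text are illustrative assumptions, not the actual ``torch.distributed`` code):

```python
# Sketch of a deprecation shim: old-style attribute access on reduce_op
# forwards to the new enum while emitting a DeprecationWarning.
import enum
import warnings


class ReduceOpSketch(enum.Enum):
    """Stand-in for an enum of reduction operations."""
    SUM = 0
    PRODUCT = 1
    MIN = 2
    MAX = 3


class _DeprecatedReduceOp:
    """Forwards attribute lookups to ReduceOpSketch with a warning."""

    def __getattr__(self, name):
        warnings.warn(
            "reduce_op is deprecated; use ReduceOp instead",
            DeprecationWarning,
        )
        return getattr(ReduceOpSketch, name)


reduce_op_sketch = _DeprecatedReduceOp()

# Old-style access still resolves to the same enum member, but warns.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    op = reduce_op_sketch.SUM
print(op is ReduceOpSketch.SUM)     # prints "True"
print(caught[0].category.__name__)  # prints "DeprecationWarning"
```

This pattern lets existing call sites such as ``reduce_op.SUM`` keep working for a release cycle while steering users toward the new name.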