From d957481d072a434dfd116ac2255180e09c86fac5 Mon Sep 17 00:00:00 2001
From: Prayag Verma
Date: Thu, 18 Feb 2016 10:13:34 +0530
Subject: Fix a typo in docs
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

`peformance` → `performance`
---
 docs/multigpu.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/multigpu.md b/docs/multigpu.md
index 01cfb893..d91acef9 100644
--- a/docs/multigpu.md
+++ b/docs/multigpu.md
@@ -17,7 +17,7 @@ updated model, 0\-\>2, and then 0\-\>1, 2\-\>3.
 
 For best performance, P2P DMA access between devices is needed. Without P2P access, for example crossing PCIe root complex, data is copied through host and effective exchange bandwidth is greatly reduced.
 
-Current implementation has a "soft" assumption that the devices being used are homogeneous. In practice, any devices of the same general class should work together, but performance and total size is limited by the smallest device being used. e.g. if you combine a TitanX and a GTX980, peformance will be limited by the 980. Mixing vastly different levels of boards, e.g. Kepler and Fermi, is not supported.
+Current implementation has a "soft" assumption that the devices being used are homogeneous. In practice, any devices of the same general class should work together, but performance and total size is limited by the smallest device being used. e.g. if you combine a TitanX and a GTX980, performance will be limited by the 980. Mixing vastly different levels of boards, e.g. Kepler and Fermi, is not supported.
 
 "nvidia-smi topo -m" will show you the connectivity matrix. You can do P2P through PCIe bridges, but not across socket level links at this time, e.g. across CPU sockets on a multi-socket motherboard.
 
-- 
cgit v1.2.3
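
Note on the P2P access requirement mentioned in the patched docs/multigpu.md text: the sketch below is not part of this commit; it is a minimal CUDA runtime probe that reports, for every pair of visible GPUs, whether direct P2P access is available. The file name and printed wording are illustrative assumptions. Pairs that report no P2P access exchange data through host memory, with the reduced bandwidth the document warns about; "nvidia-smi topo -m" shows the same connectivity from the PCIe topology side.

// p2p_check.cu: probe P2P DMA availability between each pair of GPUs.
// Build (assumes a CUDA toolkit is installed): nvcc p2p_check.cu -o p2p_check
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count < 2) {
        std::printf("Fewer than two CUDA devices visible; nothing to check.\n");
        return 1;
    }
    for (int src = 0; src < count; ++src) {
        for (int dst = 0; dst < count; ++dst) {
            if (src == dst) continue;
            int canAccess = 0;
            // Reports whether device `src` can directly address memory on device `dst`.
            cudaDeviceCanAccessPeer(&canAccess, src, dst);
            std::printf("GPU %d -> GPU %d: P2P %s\n", src, dst,
                        canAccess ? "available" : "unavailable (copies go through host)");
        }
    }
    return 0;
}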