| author | Brennan Vincent <btv@fb.com> | 2019-01-15 19:55:13 -0800 |
|---|---|---|
| committer | Facebook Github Bot <facebook-github-bot@users.noreply.github.com> | 2019-01-15 19:57:57 -0800 |
| commit | fb68d813be3ccbab3cbf74008a11071b9a646c75 (patch) | |
| tree | c45efb56589285e46f2756d40e0bee4c38b32ee7 /test | |
| parent | 5353847b191757995b24c0f6412c4290faff76fc (diff) | |
Fix logic errors when accumulating reductions in output (CUDA) (#16023)
Summary:
The correct logic is as follows:
* If there is an earlier split, we need to combine with its result
* If there is *not* a later split, we need to project before saving into the output.
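The two rules above can be sketched in plain Python (a hypothetical illustration, not the actual CUDA kernel; `split_reduce` and its chunking scheme are invented here), using mean as the reduction:

```python
def split_reduce(data, n_splits):
    """Mean of `data`, computed across sequential splits.

    Illustrates the accumulation rules: combine each split's partial
    result with the earlier one, and project (divide by the count)
    only once no later split remains.
    """
    chunk = len(data) // n_splits
    acc = None  # partial sum carried between splits
    for i in range(n_splits):
        end = len(data) if i == n_splits - 1 else (i + 1) * chunk
        part = sum(data[i * chunk:end])
        # If there is an earlier split, combine with its result.
        acc = part if acc is None else acc + part
    # There is no later split: project before saving into the output.
    return acc / len(data)

print(split_reduce([1.0] * 10, 3))  # 1.0
```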
This should partially fix #15837. For example:
```
In [7]: a=torch.ones([1838860800], dtype=torch.float, device="cuda:1")
In [8]: a.mean()
Out[8]: tensor(1., device='cuda:1')
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16023
Differential Revision: D13678449
Pulled By: umanwizard
fbshipit-source-id: ab5078484c88e96bb30121b5cf24a0e8b0a8c2f8
Diffstat (limited to 'test')
-rw-r--r-- | test/test_cuda.py | 9 |
1 file changed, 9 insertions, 0 deletions
```diff
diff --git a/test/test_cuda.py b/test/test_cuda.py
index 5130d23ffe..03bd195f89 100644
--- a/test/test_cuda.py
+++ b/test/test_cuda.py
@@ -107,6 +107,7 @@ def cast_tensor(tensor, t):

 S = 10
 M = 50
+G = 275000000


 def make_tensor(t, *sizes):
@@ -243,6 +244,10 @@ def large_2d_lapack(t):
     return t(1000, 1000).normal_()


+def giant_1d_ones(t):
+    return t(G).copy_(torch.ones(G))
+
+
 def long_type(t):
     return torch.cuda.LongTensor if 'cuda' in t.__module__ else torch.LongTensor

@@ -356,6 +361,10 @@ tests = [
     ('mean', small_3d, lambda t: []),
     ('mean', small_3d, lambda t: [-1], 'neg_dim'),
     ('mean', small_3d, lambda t: [1], 'dim'),
+    ('mean', giant_1d_ones, lambda t: [], '64bit_indexing',
+     # Double here because otherwise the CPU result will be
+     # wrong.
+     [torch.DoubleTensor]),
     ('mode', small_3d, lambda t: []),
     ('mode', small_3d, lambda t: [1], 'dim'),
     ('mode', small_3d, lambda t: [-1], 'neg_dim'),
```
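The `[torch.DoubleTensor]` restriction in the new test case reflects a float32 limitation: not every integer above 2**24 is representable in single precision, so a naive float32 running sum of hundreds of millions of ones stalls long before the end, making the CPU reference result wrong. A small stdlib-only demonstration of the saturation point:

```python
import struct

def f32(x):
    """Round a Python float to the nearest IEEE-754 float32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# 2**24 = 16777216 is where adding 1.0 stops changing a float32 value
# (16777217 rounds back down to 16777216), so a sequential float32 sum
# of ones can never grow past it.
x = f32(2**24)
print(f32(x + 1.0) == x)  # True: the running sum is stuck
```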