path: root/torch/cuda
Age        | Commit message                                                                | Author             | Files | Lines
2019-02-25 | Restore current streams on dst device after switching streams (#17439)        | Shen Li            | 1     | -2/+11
2019-02-19 | Bool tensor. Part 0: Boolean storage implementation (#16810)                  | Iurii Zdebskyi     | 1     | -1/+5
2019-02-19 | Optional arg fixes (#17222)                                                   | surgan12           | 1     | -2/+2
2019-02-06 | improve error message (#16719)                                                | Soumith Chintala   | 1     | -0/+1
2019-01-30 | Fix the flake8 linter                                                         | Lu Fang            | 1     | -1/+1
2019-01-22 | Add default_stream() and enhance current_stream() (#16200)                    | Shen Li            | 1     | -3/+25
2019-01-19 | Unify device() return type in Stream, Event, and Tensor (#16150)              | Shen Li            | 1     | -1/+4
2019-01-18 | Change current device in stream context manager if necessary (#16128)         | Shen Li            | 1     | -3/+3
2019-01-17 | Fix trivial typos in torch.cuda._utils (#16026)                               | Derek Kim          | 1     | -6/+6
2019-01-17 | Move all Stream and Event Python implementation to C++ (#15937)               | Shen Li            | 2     | -76/+64
2019-01-14 | Add cuda.reset_max_memory_* (#15985)                                          | SsnL               | 1     | -2/+52
2019-01-09 | Wrap C10 CUDAStream instead of cudaStream_t in THCPStream                     | Shen Li            | 1     | -1/+1
2019-01-07 | Move Stream.query() implementation down to C++ (#15737)                       | Shen Li            | 1     | -9/+3
2019-01-03 | A quick fix for Stream operation errors on non-current device (#15689)        | Shen Li            | 1     | -5/+7
2018-12-27 | Update cuda.get/set_rng_state doc (#14324)                                    | SsnL               | 1     | -5/+12
2018-12-17 | Bicubic interpolation for nn.functional.interpolate (#9849)                   | David Riazati      | 1     | -1/+1
2018-12-14 | record unit time in torch.cuda.event (#15221)                                 | Krishna Kalyan     | 1     | -1/+1
2018-12-09 | _get_device_index supports parsing device strings                             | SsnL               | 1     | -0/+3
2018-11-07 | Give broadcast_coalesced tensors different version counters (#13594)          | Tongzhou Wang      | 1     | -1/+7
2018-10-15 | Rewrite http://pytorch.org -> https://pytorch.org throughout project (#12636) | Evan Klitzke       | 1     | -2/+2
2018-08-27 | Make torch.cuda.* take device objects; Update distributed docs (#10833)       | Tongzhou Wang      | 3     | -49/+85
2018-08-14 | Also set stdin to subprocess pipe in FindCUDA windows popen call (#10379)     | Matt Dawkins       | 1     | -1/+1
2018-07-06 | Move nccl scatter and gather to C++ (#9117)                                   | Peter Goldsborough | 1     | -45/+2
2018-06-05 | fix type mismatch while call torch._C._cuda_setDevice (#8065)                 | LaiyuanGong        | 1     | -1/+1
2018-04-22 | Static linkage for CUDA (#6807)                                               | Soumith Chintala   | 1     | -1/+1
2018-03-30 | Use THC cached CUDA device property when get_device_name and get_device_capab... | Tongzhou Wang   | 1     | -4/+3
2018-02-27 | Delete dead Tensor code paths (#5417)                                         | Sam Gross          | 2     | -182/+1
2018-02-27 | DataParallel: GPU imbalance warning (#5376)                                   | Carl Lemaire       | 1     | -0/+8
2018-02-23 | Merge Variable and Tensor classes (#5225)                                     | Sam Gross          | 4     | -18/+5
2018-02-08 | warn that CUDA capability 3.0 and 5.0 is no longer supported (#5125)          | Soumith Chintala   | 1     | -4/+12
2018-02-06 | Use Variable instead of Tensor in Function.forward (#4786)                    | Sam Gross          | 1     | -7/+10
2018-02-02 | Replace async with non_blocking for Python 3.7 (#4999)                        | Peter Goldsborough | 1     | -1/+1
2018-01-30 | Lazy init in set device, also should not be called in getDevCount (#4918)     | Christian Sarofeen | 1     | -2/+1
2018-01-30 | make torch.cuda.empty_cache() a no-op when cuda is not initialized (#4936)    | albanD             | 1     | -2/+2
2018-01-29 | Add missing _lazy_init in cuda python functions                               | albanD             | 1     | -1/+3
2018-01-28 | fix indentation                                                               | SsnL               | 1     | -4/+5
2018-01-27 | Improve `torch.cuda.empty_cache` documentation (#4879)                        | Tongzhou Wang      | 1     | -20/+40
2018-01-21 | More documentation for CUDA stream functions. (#4756)                         | Yongjik Kim        | 2     | -1/+28
2018-01-19 | Fix Python docs for broadcast and braodcast_coalesced (#4727)                 | Sam Gross          | 1     | -4/+8
2018-01-18 | Move broadcast and broadcast_coalesced to C++                                 | Adam Paszke        | 1     | -38/+4
2018-01-09 | Methods for checking CUDA memory usage (#4511)                                | Tongzhou Wang      | 1     | -0/+63
2017-12-14 | Add function to explicitly initialize PyTorch CUDA state. (#4180)             | Edward Z. Yang     | 1     | -0/+13
2017-12-14 | fix typo (#4175)                                                              | Richard Zou        | 1     | -1/+1
2017-12-04 | Add streams and comms as optional arguments (#3968)                           | Sam Gross          | 1     | -17/+22
2017-11-22 | Make integer parameters and buffers immune to float(), double() and half() (#... | Luca Antiga     | 2     | -0/+12
2017-11-09 | add warnings if device capability is less than ideal (#3601)                  | Soumith Chintala   | 1     | -1/+35
2017-11-09 | doc: Normalize all true/false in docstrings to ``True|False`` (#3593)         | Ozan Çağlayan      | 1     | -4/+4
2017-11-08 | Improve Windows Compatibility (for csrc/scripts) (#2941)                      | peterjc123         | 2     | -2/+48
2017-11-07 | Exposing emptyCache from allocator (#3518)                                    | SsnL               | 1     | -0/+7
2017-11-01 | comments and case where not all sparse (#3370)                                | SsnL               | 1     | -8/+14