author     Peter Goldsborough <peter@goldsborough.me>   2018-02-13 15:02:50 -0800
committer  Edward Z. Yang <ezyang@mit.edu>              2018-02-13 15:02:50 -0800
commit     1b71e78d133eb156bf037f0f12550032d1b90bd8 (patch)
tree       60d54f772eb15539b943bd16b2c7dd3140956c96 /.jenkins/test.sh
parent     232ce18a4132376641a4a400ff3689d32e87ef5e (diff)
CUDA support for C++ extensions with setuptools (#5207)
This PR adds support for convenient CUDA integration in our C++ extension mechanism. This mainly involved figuring out how to get setuptools to use nvcc for CUDA files and the regular C++ compiler for C++ files. I've added a mixed C++/CUDA test case which works great.
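For a sense of how that dispatch can work, here is a hedged sketch along the lines of the wrap_compile/BuildExtension mentioned below; it is not the exact implementation from this PR, and flag handling is simplified:

    from setuptools.command.build_ext import build_ext

    class BuildExtension(build_ext):
        # Sketch: route .cu sources to nvcc and leave .cpp sources with the
        # host C++ compiler.
        def build_extensions(self):
            # Unix compilers funnel every source file through _compile, so
            # wrapping it lets us swap the compiler per source file.
            original_compile = self.compiler._compile

            def wrap_compile(obj, src, ext, cc_args, extra_postargs, pp_opts):
                original_compiler_so = self.compiler.compiler_so
                try:
                    if src.endswith('.cu'):
                        # -fPIC is needed because the object ends up in a
                        # shared Python extension module.
                        self.compiler.set_executable(
                            'compiler_so', ['nvcc', '-Xcompiler', '-fPIC'])
                    original_compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
                finally:
                    # Restore the host compiler for subsequent sources.
                    self.compiler.compiler_so = original_compiler_so

            self.compiler._compile = wrap_compile
            build_ext.build_extensions(self)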
I've also added CUDAExtension and CppExtension functions that construct a setuptools.Extension with "usually the right" arguments, which further reduces the boilerplate required to write an extension. This is especially useful for CUDA, where library_dirs (CUDA_HOME/lib64) and libraries (cudart) have to be specified as well.
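As a rough illustration of how these helpers fit together in a setup.py (a minimal sketch; the package and source file names are made up, and the import path assumes the helpers are exposed via torch.utils.cpp_extension as in current PyTorch):

    from setuptools import setup
    from torch.utils.cpp_extension import CUDAExtension, BuildExtension

    setup(
        name='my_cuda_ext',  # hypothetical package name
        ext_modules=[
            CUDAExtension(
                name='my_cuda_ext',                       # hypothetical module name
                sources=['my_ext.cpp', 'my_kernel.cu'],   # hypothetical sources
            ),
        ],
        # BuildExtension sends .cu files to nvcc and .cpp files to the host
        # C++ compiler, and CUDAExtension fills in the CUDA include/library paths.
        cmdclass={'build_ext': BuildExtension},
    )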
Next step is to enable this with our "JIT" mechanism.
NOTE: I've had to write a small find_cuda_home function to find the CUDA install directory. This logic is kind of a duplicate of tools/setup_helpers/cuda.py, but that's not available in the shipped PyTorch distribution. The function is also fairly short. Let me know if it's fine to duplicate this logic.
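A hedged sketch of what such a lookup can look like (the actual find_cuda_home in the PR may differ; the paths and fallbacks here are illustrative):

    import os
    import subprocess

    def find_cuda_home():
        # 1. Respect an explicit environment variable.
        cuda_home = os.environ.get('CUDA_HOME') or os.environ.get('CUDA_PATH')
        if cuda_home is None:
            try:
                # 2. Derive it from the nvcc on PATH (<CUDA_HOME>/bin/nvcc).
                nvcc = subprocess.check_output(['which', 'nvcc']).decode().strip()
                cuda_home = os.path.dirname(os.path.dirname(nvcc))
            except Exception:
                # 3. Fall back to the conventional install prefix, if present.
                cuda_home = '/usr/local/cuda'
                if not os.path.exists(cuda_home):
                    cuda_home = None
        return cuda_home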
* CUDA support for C++ extensions with setuptools
* Remove printf in CUDA test kernel
* Remove -arch flag in test/cpp_extensions/setup.py
* Put wrap_compile into BuildExtension
* Add guesses for CUDA_HOME directory
* export PATH to CUDA location in test.sh
* On Python2, sys.platform has the linux version number
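Regarding the last item above: on Python 2, sys.platform carries the kernel major version (e.g. 'linux2'), so platform checks should match the prefix rather than the exact string. A minimal illustration, not taken from the patch:

    import sys

    # 'linux2' on Python 2, 'linux' on Python 3, so use a prefix check.
    IS_LINUX = sys.platform.startswith('linux')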
Diffstat (limited to '.jenkins/test.sh')
-rwxr-xr-x   .jenkins/test.sh   2
1 file changed, 2 insertions, 0 deletions
diff --git a/.jenkins/test.sh b/.jenkins/test.sh
index ec0e50503a..5fdfd18acc 100755
--- a/.jenkins/test.sh
+++ b/.jenkins/test.sh
@@ -12,6 +12,8 @@ export PATH=/opt/conda/bin:$PATH
 if [[ "$JOB_NAME" == *cuda* ]]; then
   export LD_LIBRARY_PATH=/usr/local/cuda/lib64/stubs:$LD_LIBRARY_PATH
   export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
+  # The ccache wrapper should be able to find the real nvcc
+  export PATH="/usr/local/cuda/bin:$PATH"
 else
   export PATH=/opt/python/${PYTHON_VERSION}/bin:$PATH
   export LD_LIBRARY_PATH=/opt/python/${PYTHON_VERSION}/lib:$LD_LIBRARY_PATH