author     Xiaodong Wang <xdwang@fb.com>                                        2019-02-11 12:27:12 -0800
committer  Facebook Github Bot <facebook-github-bot@users.noreply.github.com>   2019-02-11 13:18:36 -0800
commit     af0c79eed4080e9937e1ab5bbcaec7b27a249e49 (patch)
tree       0d26f8b93c577dee4135343694c9a9e2fd8cb70f /caffe2/core
parent     29f096cc70cf9cc1a317ae7107228215b7dde60b (diff)
Catch cudaError_t return val (nodiscard in rocm) (#16399)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16399
Catch cudaError_t return values in a few places, because the return type is marked nodiscard under ROCm; unless we add -Wno-unused-result, the discarded results turn into compilation errors.
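
(For illustration only: a minimal sketch of the pattern, where CHECK_CUDA is a stand-in for Caffe2's CUDA_ENFORCE macro rather than its real definition. Explicitly consuming the cudaError_t return keeps a nodiscard / -Werror=unused-result build happy while still surfacing failures.)

    // Sketch only: CHECK_CUDA stands in for Caffe2's CUDA_ENFORCE,
    // not its actual definition.
    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    #define CHECK_CUDA(expr)                                   \
      do {                                                     \
        cudaError_t _status = (expr);                          \
        if (_status != cudaSuccess) {                          \
          std::fprintf(stderr, "CUDA error: %s\n",             \
                       cudaGetErrorString(_status));           \
          std::abort();                                        \
        }                                                      \
      } while (0)

    void sync_stream(cudaStream_t stream) {
      // Discarding the result trips -Werror=unused-result when the
      // return type is marked nodiscard (as it is under ROCm):
      //   cudaStreamSynchronize(stream);
      // Wrapping the call consumes the return value and checks it:
      CHECK_CUDA(cudaStreamSynchronize(stream));
    }
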
Also, in the c10/cuda test, check whether the host actually has a GPU. Previously we were silently discarding the error (so not really exercising the CUDA API).
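
(A hedged sketch of such a GPU-presence guard, assuming the usual cudaGetDeviceCount pattern; the actual c10/cuda test code may differ.)

    #include <cuda_runtime.h>

    // Returns true only if the CUDA runtime reports at least one device.
    bool has_cuda_gpu() {
      int count = 0;
      cudaError_t err = cudaGetDeviceCount(&count);
      if (err != cudaSuccess) {
        // e.g. cudaErrorNoDevice or cudaErrorInsufficientDriver on hosts
        // without a usable GPU; treat this as "no GPU" rather than ignore it.
        return false;
      }
      return count > 0;
    }
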
Reviewed By: bddppq
Differential Revision: D13828281
fbshipit-source-id: 587d1cc31c20b836ce9594e3c18f067d322b2934
Diffstat (limited to 'caffe2/core')
-rw-r--r--  caffe2/core/context_gpu.h | 4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/caffe2/core/context_gpu.h b/caffe2/core/context_gpu.h
index 0e62708f16..9eb7fe5c83 100644
--- a/caffe2/core/context_gpu.h
+++ b/caffe2/core/context_gpu.h
@@ -203,7 +203,7 @@ class CAFFE2_CUDA_API CUDAContext final : public BaseContext {
   // FinishDeviceComputation must be called on the same cpu thread as
   // SwitchToDevice()
   void FinishDeviceComputation() override {
-    cudaStreamSynchronize(getCudaObjects().GetStream(gpu_id_));
+    CUDA_ENFORCE(cudaStreamSynchronize(getCudaObjects().GetStream(gpu_id_)));
     cudaError_t error = cudaGetLastError();
     if (error != cudaSuccess) {
       CAFFE_THROW("Encountered CUDA error: ", cudaGetErrorString(error));
@@ -390,7 +390,7 @@ struct CAFFE2_CUDA_API PinnedCPUAllocator final : public at::Allocator {
     if (err == cudaErrorInvalidValue) {
       free(data);
       // Calling cudaGetLastError will reset the cuda error.
-      cudaGetLastError();
+      cudaError_t _err = cudaGetLastError();
     } else {
       // For all other errors, still do a cuda check.
       CUDA_ENFORCE(err);