2019-04-02 | Bool Tensor for CUDA (#18166) | Iurii Zdebskyi | 20 files, -166/+268
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18166 ghimport-source-id: a8e2ba2d966e49747a55701c4f6863c5e24d6f14 Stack from [ghstack](https://github.com/ezyang/ghstack): * **#18166 Bool Tensor for CUDA** * #18165 Resolved comments from Bool Tensor for CPU PR ------ This PR enables bool tensor creation and some basic operations for the CUDA backend. This is a part of the Bool Tensor feature implementation work. The whole plan looks like this:
1. Storage Implementation [Done]
2. Tensor Creation. a) CPU [Done] b) CUDA [This PR]
3. Tensor Conversions.
4. Tensor Indexing.
5. Tensor Operations.
6. Back compatibility related changes.
Change: Enable bool tensor in CUDA with the following operations: torch.zeros, torch.tensor, torch.ones, torch.rand/rand_like/randint/randint_like, torch.full, torch.full_like, torch.empty, torch.empty_like. Tested via unit tests and local scripts. Differential Revision: D14605104 fbshipit-source-id: b7d7340a7d70edd03a109222d271e68becba762c
2019-04-02 | Add helpful information to the gradient/inplace operation exception (#18523) | Jan Schlüter | 1 file, -6/+26
Summary: To debug a `one of the variables needed for gradient computation has been modified by an inplace operation` error, I wanted to know *which* variable has been modified, so I extended the error message with what information is easily available at this point.

Before:
```
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
```

After:
```
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [80, 1]], which is output 0 of UnsqueezeBackward0, is at version 1, not expected version 0. Hint: enable anomaly detection to find the forward pass operation which modified it.
```

The hint to enable anomaly detection is only shown when it is not enabled. It's meant to save people some googling. I'd even go further and reference `torch.autograd.set_detect_anomaly(True)`, but maybe we're not running Python? Disclaimer: I haven't looked at other parts of the code to check if using `std::stringstream` is acceptable practice, let me know if it isn't. Similarly, I haven't checked about indentation practices. Pull Request resolved: https://github.com/pytorch/pytorch/pull/18523 Differential Revision: D14683249 Pulled By: soumith fbshipit-source-id: f97a99d4aabea7461df766d66cd72300b48e2350
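The check behind this message compares the version a tensor had when it was saved for backward against its current version. A minimal pure-Python sketch of that bookkeeping (the `Tensor` and `SavedVariable` classes here are hypothetical toys, not PyTorch's actual implementation):

```python
class Tensor:
    """Toy tensor with a version counter bumped by in-place ops (hypothetical)."""
    def __init__(self, data):
        self.data = list(data)
        self._version = 0

    def mul_(self, s):
        # In-place op: mutate data and bump the version counter.
        self.data = [x * s for x in self.data]
        self._version += 1
        return self


class SavedVariable:
    """Records the version at save time; validates it at backward time."""
    def __init__(self, tensor, output_nr, grad_fn_name):
        self.tensor = tensor
        self.expected_version = tensor._version
        self.output_nr = output_nr
        self.grad_fn_name = grad_fn_name

    def unpack(self, anomaly_mode=False):
        if self.tensor._version != self.expected_version:
            msg = (
                "one of the variables needed for gradient computation has been "
                "modified by an inplace operation: which is output "
                f"{self.output_nr} of {self.grad_fn_name}, is at version "
                f"{self.tensor._version}, not expected version "
                f"{self.expected_version}."
            )
            if not anomaly_mode:
                # The hint is only appended when anomaly detection is off.
                msg += (" Hint: enable anomaly detection to find the forward "
                        "pass operation which modified it.")
            raise RuntimeError(msg)
        return self.tensor


t = Tensor([1.0, 2.0])
saved = SavedVariable(t, output_nr=0, grad_fn_name="UnsqueezeBackward0")
t.mul_(3.0)  # in-place modification after saving -> unpack() will raise
```

Calling `saved.unpack()` now raises a `RuntimeError` whose text mirrors the "at version 1, not expected version 0" shape of the real message.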
2019-04-02 | build_variables.py: turn on link_whole for _C_impl library. (#18763) | Mikhail Zolotukhin | 1 file, -0/+4
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18763 Without the `link_whole` flag, in opt-builds some of the files are not linked into the `_C_impl` library, which causes some static initializers not to run (namely, registering a customPythonOperation from python_interpreter.cpp). This diff fixes it. Differential Revision: D14732471 fbshipit-source-id: 57cff6b4b6d479ad7ab7fd29f677746d91d6ff45
2019-04-02 | Fix windows msbuild bug (#18748) | vaeksare | 1 file, -0/+1
Summary: Fix the bug introduced by #18681 where an undefined variable was being used to limit max cpu count when building for Windows without Ninja. Pull Request resolved: https://github.com/pytorch/pytorch/pull/18748 Differential Revision: D14733209 Pulled By: soumith fbshipit-source-id: 52fc0dd4dde99da75a6956b63f02da2e647eed4f
2019-04-02 | torch.cross' dim default changed to c10::optional instead of int=-1 (#17582) | Igor Fedan | 16 files, -85/+203
Summary: Argument dim=-1 didn't work for torch.cross. The signature of torch.cross has been changed to take c10::optional<int64_t> dim instead of int64_t. So, per the documentation, if dim is not given it defaults to the first dimension found with size 3, and if dim is specified (even a negative one) it uses the corresponding dimension. Fixes #17229 Pull Request resolved: https://github.com/pytorch/pytorch/pull/17582 Differential Revision: D14483063 Pulled By: ifedan fbshipit-source-id: f9699093ec401cb185fd33ca4563c8a46cdcd746
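With an optional dim, the effective dimension is computed rather than fixed. A sketch of that resolution logic over a plain shape tuple (`resolve_cross_dim` is my own illustrative helper, not the actual ATen code):

```python
def resolve_cross_dim(shape, dim=None):
    """Pick the dimension a cross product should operate over.

    If dim is None, use the first dimension of size 3 (matching the
    documented default); otherwise normalize a possibly-negative dim
    and require that it has size 3.
    """
    ndim = len(shape)
    if dim is None:
        for i, s in enumerate(shape):
            if s == 3:
                return i
        raise RuntimeError("no dimension of size 3 in input")
    if dim < 0:
        dim += ndim  # normalize negative dims, e.g. -2 on a 3-d shape -> 1
    if not 0 <= dim < ndim:
        raise IndexError("dimension out of range")
    if shape[dim] != 3:
        raise RuntimeError(f"dimension {dim} does not have size 3")
    return dim


print(resolve_cross_dim((2, 3, 4)))      # 1: first size-3 dimension
print(resolve_cross_dim((2, 3, 4), -2))  # 1: negative dim normalized
```

This is why the signature change matters: with `int dim = -1` there is no way to distinguish "dim not given" from "last dimension".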
2019-04-02 | Fix multi-configuration on Windows CMake (CUDA) (#18548) | Sacha | 2 files, -19/+3
Summary: Multiple configurations is the default (e.g. Release;Debug) on Windows, and this check always broke such configurations because CMAKE_BUILD_TYPE was not set. The workaround was to always set CMAKE_BUILD_TYPE to Debug or Release, which was very unfortunate. The correct method is to use generator expressions that expand depending on the current CONFIG being processed. Side note: anywhere else CMAKE_BUILD_TYPE is checked should probably be fixed too. Note that the CMakeLists.txt forces it into Release mode. However, I came across this error when importing the prebuilt Config into another project, where CMAKE_BUILD_TYPE was not set.

> 3>CMake Error at pre_built/pytorch-1.0.1/share/cmake/Caffe2/public/cuda.cmake:380 (message):
> 3>   Unknown cmake build type:

Proper support for configurations would mean we can build debug and release at the same time, and as you can see, it is less CMake code. Pull Request resolved: https://github.com/pytorch/pytorch/pull/18548 Differential Revision: D14730790 Pulled By: ezyang fbshipit-source-id: 70ae16832870d742c577c34a50ec7564c3da0afb
2019-04-02 | Fix flake8 issues in gradgrad test | Igor Fedan | 3 files, -4/+3
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18727 Differential Revision: D14724887 Pulled By: ifedan fbshipit-source-id: 8c1db6460303e746e4aea0142302b8d61277c067
2019-04-02 | Register operators by passing arguments to RegisterOperators constructor (#18577) | Sebastian Messmer | 3 files, -4/+45
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18577 This is also part of the legacy API and we need to support it if we want to replace it. Reviewed By: dzhulgakov Differential Revision: D14671432 fbshipit-source-id: 007abf4ab816647a509fc08e35d79b6c1aa55b03
2019-04-02 | Allow registering an operator schema without a kernel (#18551) | Sebastian Messmer | 2 files, -25/+68
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18551 This is helpful for defining a set of operators as an interface but not adding concrete kernels just yet. The registration logic will ensure that any other libraries that add kernels for these schemas exactly match the schema defined here. Reviewed By: dzhulgakov Differential Revision: D14660208 fbshipit-source-id: 7adb5a4876cff5a0ad21d92d8c450cb889f00cc3
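The idea, declare a schema up front and force later kernel registrations to match it exactly, can be illustrated with a toy registry (a hypothetical Python sketch; the real registration API described here is C++):

```python
class OpRegistry:
    """Toy operator registry: schemas may exist without kernels, and any
    kernel registered later must match the declared schema exactly."""

    def __init__(self):
        self.schemas = {}  # op name -> schema string
        self.kernels = {}  # op name -> callable

    @staticmethod
    def _name(schema):
        return schema.split("(", 1)[0]

    def register_schema(self, schema):
        # Declare an interface with no implementation yet.
        name = self._name(schema)
        if name in self.schemas and self.schemas[name] != schema:
            raise RuntimeError(f"schema mismatch for {name}")
        self.schemas[name] = schema

    def register_kernel(self, schema, fn):
        name = self._name(schema)
        if name in self.schemas and self.schemas[name] != schema:
            raise RuntimeError(
                f"kernel schema {schema!r} does not exactly match the "
                f"declared schema {self.schemas[name]!r}")
        self.schemas.setdefault(name, schema)
        self.kernels[name] = fn


registry = OpRegistry()
# Library A declares the interface, no kernel attached:
registry.register_schema("my::add(Tensor a, Tensor b) -> Tensor")
# Library B later provides a matching kernel:
registry.register_kernel("my::add(Tensor a, Tensor b) -> Tensor",
                         lambda a, b: a + b)
```

A registration with a different signature (say `"my::add(Tensor a) -> Tensor"`) is rejected, which is the guarantee the commit describes.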
2019-04-02 | Improve compiler error messages of the op registration API (#18550) | Sebastian Messmer | 5 files, -28/+50
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18550 When the operator registration API is used wrongly, in most cases we should now get a nice compiler error instead of weird template error messages. This is done by making the enable_if conditions more broad so they also match error cases, but then having static_asserts against these error cases inside the function. Before that, since the function didn't match, the error message said something like "no function found to match your call", now it will show the error message specified in the static_asserts. Reviewed By: dzhulgakov Differential Revision: D14659178 fbshipit-source-id: 7ca4fb72d9051eadf0a7e2717b962bf1213a52b2
2019-04-02 | Improve and test error messages for signature mismatches (#18547) | Sebastian Messmer | 8 files, -285/+295
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18547 - Argument indices in the error messages are 1-indexed not 0-indexed. - Add test cases that a mismatching signature actually shows the correct error messages Reviewed By: dzhulgakov Differential Revision: D14656695 fbshipit-source-id: 55e45634baa3117e18b8687ea6b2a2f83715bdf6
2019-04-02 | Enable gmock and fix system gtest issue (#18706) | Sebastian Messmer | 1 file, -4/+15
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18706 - Enable gmock - Fix issue where the gtest source files in third_party would include system gtest headers Reviewed By: ezyang Differential Revision: D14715302 fbshipit-source-id: 5335390913e651bda85c69d7ea9b5c1bce58f172
2019-04-02 | Emergency workaround for apt-get failure. (#18733) | Edward Yang | 2 files, -4/+24
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18733 ghimport-source-id: b56766fb4b1084d8a7947cf622275d44e325141b Stack from [ghstack](https://github.com/ezyang/ghstack): * **#18733 Emergency workaround for apt-get failure.** Signed-off-by: Edward Z. Yang <ezyang@fb.com> Reviewed By: dreiss Differential Revision: D14725779 fbshipit-source-id: 6855347853a3f13461ca267ed563e2db5815166e
2019-04-02 | Fix clang-tidy errors in torch/csrc/distributed | Pieter Noordhuis | 1 file, -2/+3
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18709 Differential Revision: D14725936 Pulled By: pietern fbshipit-source-id: 307bc446d53da5d0e04d730bb51b7fb29212ace3
2019-04-02 | Undefined behavior with memset of std::string to 0 (#18703) | Eli Amesefe | 1 file, -1/+4
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18703 `zeroPtr` is sometimes a `std::string` tensor, so `memset` to 0 is undefined behavior. This might be accidentally safe with `std::string` implementation that use SSO (Small String Optimization), but will crash otherwise. Reviewed By: zheng-xq Differential Revision: D14714458 fbshipit-source-id: 012a18464e6514d38ff791509b88ddc3fc55b2b1
2019-04-02 | Revert D14717015: [pytorch][PR] fix nccl compilation to make sure it compiles for architectures that pytorch compiles for | Soumith Chintala | 1 file, -1/+1
Differential Revision: D14717015 Original commit changeset: 4aac036f57e5 fbshipit-source-id: c820b8dfb27564271e6b80e133fe655658a7c25c
2019-04-02 | Automatic update of fbcode/onnx to f0d7df2c643c4e37f1fd7735ef02c972c4d19fb5 (#18695) | Lu Fang | 1 file, -0/+0
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18695 Previous import was fb1a80692c1ab0bd27b1072f2e7bffacba336777 Included changes: - **[f0d7df2c](https://github.com/onnx/onnx/commit/f0d7df2c)**: fix testcase names of maxpool_2d_ceil and averagepool_2d_ceil (#1896) <karljang> Reviewed By: zrphercule Differential Revision: D14709993 fbshipit-source-id: 7fe2145a481ea2c1b6d85ba1c85c662200a53241
2019-04-02 | Adding pin_memory kwarg to zeros, ones, empty, ... tensor constructors. (#18455) | Vitaly Fedyunin | 19 files, -151/+280
Summary: Make it possible to construct a pinned-memory tensor without creating a storage first and without calling the pin_memory() function. It is also faster, as the copy operation is unnecessary. Supported functions:

```python
torch.rand_like(t, pin_memory=True)
torch.randn_like(t, pin_memory=True)
torch.empty_like(t, pin_memory=True)
torch.full_like(t, 4, pin_memory=True)
torch.zeros_like(t, pin_memory=True)
torch.ones_like(t, pin_memory=True)
torch.tensor([10,11], pin_memory=True)
torch.randn(3, 5, pin_memory=True)
torch.rand(3, pin_memory=True)
torch.zeros(3, pin_memory=True)
torch.randperm(3, pin_memory=True)
torch.empty(6, pin_memory=True)
torch.ones(6, pin_memory=True)
torch.eye(6, pin_memory=True)
torch.arange(3, 5, pin_memory=True)
```

Part of the bigger `Remove Storage` plan. Pull Request resolved: https://github.com/pytorch/pytorch/pull/18455 Reviewed By: ezyang Differential Revision: D14672084 Pulled By: VitalyFedyunin fbshipit-source-id: 9d0997ec00f59500ee018f8b851934d334012124
2019-04-02 | Improve Backend comment. (#18567) | Edward Yang | 1 file, -8/+14
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18567 ghimport-source-id: 1e50e611a3afcfae86828b7afe06c3fdc6a7bef7 Stack from [ghstack](https://github.com/ezyang/ghstack): * **#18567 Improve Backend comment.** Signed-off-by: Edward Z. Yang <ezyang@fb.com> Reviewed By: dzhulgakov Differential Revision: D14666189 fbshipit-source-id: 64a41c4a998b1a59ff780d1ae06fa16e5ef3c7c4
2019-04-02 | Expose alias multinomial methods to ATen (#17904) | vishwakftw | 9 files, -13/+129
Summary: This PR exposes the multinomialAliasSetup and multinomialAliasDraw methods. cc: neerajprad Pull Request resolved: https://github.com/pytorch/pytorch/pull/17904 Differential Revision: D14700205 Pulled By: ezyang fbshipit-source-id: 16462fb1f1ef1d560fd586632ea356b23e966ee3
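The alias method behind these two functions, a setup pass that builds probability/alias tables and an O(1) draw, can be sketched in plain Python. This is the general Walker/Vose algorithm, not ATen's implementation:

```python
import random

def alias_setup(probs):
    """Build probability and alias tables (Vose's alias method)."""
    n = len(probs)
    scaled = [p * n for p in probs]
    prob_table, alias_table = [0.0] * n, [0] * n
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob_table[s] = scaled[s]   # probability of keeping column s
        alias_table[s] = l          # otherwise redirect to l
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    for leftover in small + large:  # numerical leftovers get probability 1
        prob_table[leftover] = 1.0
    return prob_table, alias_table

def alias_draw(prob_table, alias_table, rng=random):
    """Draw one index in O(1): pick a column, then flip a biased coin."""
    i = rng.randrange(len(prob_table))
    return i if rng.random() < prob_table[i] else alias_table[i]

prob_t, alias_t = alias_setup([0.5, 0.25, 0.25])
rng = random.Random(0)
counts = [0, 0, 0]
for _ in range(10000):
    counts[alias_draw(prob_t, alias_t, rng)] += 1
# counts come out roughly proportional to [0.5, 0.25, 0.25]
```

The payoff is the same as in the ATen version: setup is O(n) once, after which each sample costs O(1) instead of an O(n) scan.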
2019-04-02 | Update cpp_extension.py (#18638) | BloodAxe | 1 file, -0/+1
Summary: Hi. It seems that when building CPP extensions with CUDA for Windows, the `extra_cuda_cflags` options are not properly forwarded to `nvcc`. Using extra CUDA options is necessary to build, for instance, InplaceABN (https://github.com/mapillary/inplace_abn), which requires the `--expt-extended-lambda` option. This PR adds one line that correctly appends `extra_cuda_cflags`. Pull Request resolved: https://github.com/pytorch/pytorch/pull/18638 Differential Revision: D14704270 Pulled By: ezyang fbshipit-source-id: e1e330d193d9afd5707a5437a74c0499460d2b90
2019-04-02 | fix typo | Mark Pare | 1 file, -1/+1
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18653 Differential Revision: D14713920 Pulled By: ezyang fbshipit-source-id: 170295a162dd23916c1dcc9330918d33277cc9ed
2019-04-02 | Kill LegacyBridge functions that don't do multiple dispatch. (#18696) | Gregory Chanan | 3 files, -70/+41
Summary: At some point, we needed these functions to deal with autograd dispatching to the sparse of TH version of a backwards. But we rewrote all backwards definitions in terms of native functions, so this is no longer necessary. Pull Request resolved: https://github.com/pytorch/pytorch/pull/18696 Differential Revision: D14710834 Pulled By: gchanan fbshipit-source-id: b22568c58eefc79d672555bd8832398ccd965cb7
2019-04-02 | Updating submodules | svcscm | 1 file, -0/+0
Reviewed By: zpao fbshipit-source-id: da3cd711bb81b07c6c284426ffc5e10a969b0d2b
2019-04-01 | add Int8FCRelu (#18673) | Jongsoo Park | 4 files, -24/+59
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18673 Add a fused FC + Relu Reviewed By: csummersea Differential Revision: D14667055 fbshipit-source-id: d88fefba008fc0ca450291532d2b320694c6b785
2019-04-01 | Fix uninitialized value in pickler (#18678) | David Riazati | 2 files, -3/+11
Summary: Fixes #18671 Pull Request resolved: https://github.com/pytorch/pytorch/pull/18678 Differential Revision: D14708969 Pulled By: driazati fbshipit-source-id: d372c6e3a2a3d3fc48d8afc1fa6807f2ce0e5c6e
2019-04-01 | fixes multiprocessing serialization for integer nn.Parameter (#18639) | Soumith Chintala | 2 files, -2/+20
Summary: Fixes https://github.com/pytorch/pytorch/issues/17345 Pull Request resolved: https://github.com/pytorch/pytorch/pull/18639 Differential Revision: D14711565 Pulled By: soumith fbshipit-source-id: 0063ed138a215b95d6571dcd68b18569714abe19
2019-04-01 | fix nccl compilation to make sure it compiles for architectures that pytorch compiles for (#18704) | Soumith Chintala | 1 file, -1/+1
Summary: cc: t-vi gchanan zou3519 This fixes https://github.com/pytorch/pytorch/issues/18359 Pull Request resolved: https://github.com/pytorch/pytorch/pull/18704 Differential Revision: D14717015 Pulled By: soumith fbshipit-source-id: 4aac036f57e564b05d759662e8ad7a80170901c0
2019-04-01 | More type stubs (#18511) | Jon Malmaud | 12 files, -6/+179
Summary: Added stubs for: * The `device` module * The `cuda` module * Parts of the `optim` module * Began adding stubs for the `autograd` module. I'll annotate more later but `no_grad` and friends are probably the most used exports from it so it seemed like a good place to start. This would close #16996, although comments on that issue reference other missing stubs so maybe it's worth keeping open as an umbrella issue. The big remaining missing package is `nn`. Also added a `py.typed` file so mypy will pick up on the type stubs. That closes #17639. Pull Request resolved: https://github.com/pytorch/pytorch/pull/18511 Differential Revision: D14715053 Pulled By: ezyang fbshipit-source-id: 9e4882ac997063650e6ce47604b3eaf1232c61c9
2019-04-01 | NCCL build fix WITH_DISTRIBUTED=1. | Gregory Chanan | 1 file, -0/+1
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18691 Reviewed By: ezyang Differential Revision: D14706205 Pulled By: gchanan fbshipit-source-id: 802f19bfd7df3703c0dbce03036e2f2e32eb3efb
2019-04-01 | caffe2 - set up correct inheritance structure for remaining operator test classes (#18622) | Duc Ngo | 5 files, -12/+12
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18622 Set up correct inheritance structure for remaining operator test classes Reviewed By: ezyang Differential Revision: D14685941 fbshipit-source-id: a6b1b3be325935b7fec7515be13a4994b3016bf0
2019-04-01 | Peephole Optimize Shape Ops (#18549) | Elias Ellison | 2 files, -0/+86
Summary: Peephole optimize ops that just require Dimensioned Tensor Type, which is what we specialize graphs on. Pull Request resolved: https://github.com/pytorch/pytorch/pull/18549 Differential Revision: D14690827 Pulled By: eellison fbshipit-source-id: 9d7439eb584f0a5b877f5aa53cf80150f00e7e5f
2019-04-01 | Deprecated lambda based API (#18542) | Sebastian Messmer | 2 files, -3/+823
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18542 This adds the deprecated API for defining kernels as lambdas. The new API for defining kernels as lambdas was introduced in D14653005. Reviewed By: dzhulgakov Differential Revision: D14653551 fbshipit-source-id: 99900f1436716c69e52c83b68333b642ec2c8558
2019-04-01 | deprecated function based API (#18444) | Sebastian Messmer | 2 files, -1/+881
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18444 This adds the deprecated function based API to c10::RegisterOperators(). This is the API currently exposed under jit::RegisterOperators() and we need to support it for backwards compatibility. Reviewed By: dzhulgakov Differential Revision: D14514218 fbshipit-source-id: c77676851cfd431d66f18fd8038cf153a3a7d7cc
2019-04-01 | Revert "Tensor construction codemod(raw_mutable_data) (#16373)" (#18680) | Junjie Bai | 27 files, -116/+129
Summary: This reverts commit d73c830e236f5b980e5c91914b818d150b60278c. We have observed significant perf drop when training ResNext101 with multiple amd GPUs: Before: https://ci.pytorch.org/jenkins/job/caffe2-builds/job/py2-clang7-rocmdeb-ubuntu16.04-bench/1636/console 2 GPUs ResNext training got 150\~160 imgs/sec 4 GPUs ResNext training got 270\~280 imgs/sec After: https://ci.pytorch.org/jenkins/job/caffe2-builds/job/py2-clang7-rocmdeb-ubuntu16.04-bench/1637/console Both 2 and 4 GPUs ResNext training drop to 110\~120 imgs/sec Similar perf drop are seen on ResNet50 training jobs as well. Pull Request resolved: https://github.com/pytorch/pytorch/pull/18680 Differential Revision: D14702941 Pulled By: bddppq fbshipit-source-id: 828141805afc23f25c08d4a2eb6d4b99f817c128
2019-04-01 | C++ handler for gradient reduction (#18251) | Pieter Noordhuis | 6 files, -0/+590
Summary: This commit adds the `c10d::Reducer` class that hooks into autograd and performs gradient bucketing and reduction. These are the core parts of `nn.parallel.DistributedDataParallel` that up to now were only usable for CUDA models. This should enable the following: * Distributed data parallelism for models defined using the C++ frontend. * Allow overlap of gradient computation and reduction for non-CUDA models. * Enable distributed data parallelism for models with some unused parameters. This does not include any logic for computing bucket assignment, which can be done separately; either by observing autograd execution order (this is what Apex does), or by assigning buckets based on some maximum byte size, or both. Also see #17757 and #13273. Pull Request resolved: https://github.com/pytorch/pytorch/pull/18251 Reviewed By: mrshenli Differential Revision: D14571899 Pulled By: pietern fbshipit-source-id: 20f95eefd288dfe8cfffe0a28ca22fa7c9c3cd4c
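One of the bucket-assignment strategies the commit mentions, grouping gradients into buckets of at most some maximum byte size so each bucket can be reduced as one flat buffer, can be sketched as follows (`assign_buckets` is my own illustrative helper, not the c10d code):

```python
def assign_buckets(param_sizes_bytes, bucket_cap_bytes):
    """Greedily pack consecutive gradients into buckets of at most
    bucket_cap_bytes each. A parameter larger than the cap gets a
    bucket of its own. Returns lists of parameter indices."""
    buckets, current, current_bytes = [], [], 0
    for idx, size in enumerate(param_sizes_bytes):
        if current and current_bytes + size > bucket_cap_bytes:
            # Close the current bucket and start a new one.
            buckets.append(current)
            current, current_bytes = [], 0
        current.append(idx)
        current_bytes += size
    if current:
        buckets.append(current)
    return buckets


# Gradient sizes in bytes for five parameters:
sizes = [400, 400, 300, 900, 100]
print(assign_buckets(sizes, bucket_cap_bytes=1000))  # [[0, 1], [2], [3, 4]]
```

Packing consecutive gradients matters because autograd tends to produce them in reverse parameter order, so whole buckets become ready early and their reduction can overlap with the remaining backward computation.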
2019-04-01 | Updating submodules | svcscm | 1 file, -0/+0
Reviewed By: zpao fbshipit-source-id: 735fc388bff7066e8f46526266a73bf35e121442
2019-04-01 | add ConvRelu schema (#18693) | Jongsoo Park | 2 files, -3/+7
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18693 As title Reviewed By: protonu Differential Revision: D14662880 fbshipit-source-id: 3664faa660a04e1f528a413d2a1700b872c3c684
2019-04-01 | offload scripts from win-test.sh | Karl Ostmo | 2 files, -24/+31
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18601 Differential Revision: D14711856 Pulled By: kostmo fbshipit-source-id: 75fe620541fe2903f69a53dbd1b6d51a0d718113
2019-04-01 | Some fixes for the build script on Windows (#18681) | peter | 1 file, -1/+8
Summary: Fixes https://discuss.pytorch.org/t/pytorch-build-from-source-on-windows/40288/13?u=peterjc123. Pull Request resolved: https://github.com/pytorch/pytorch/pull/18681 Differential Revision: D14711039 Pulled By: soumith fbshipit-source-id: f7e1a94b163064c055670b2925cd4502e7773599
2019-04-01 | Fix for double backwards tests (#18190) | Igor Fedan | 2 files, -3/+30
Summary: If none of the outputs require_grad, we don't actually check gradgrad; instead we check that their numerical gradients are 0. Pull Request resolved: https://github.com/pytorch/pytorch/pull/18190 Differential Revision: D14563388 Pulled By: ifedan fbshipit-source-id: a4eb94c9eb60f14dbe6986cd8cef1fe78a7bc839
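The fallback check, verifying that a numerical gradient is zero when an output cannot contribute any gradient, can be sketched with central differences (a minimal standalone version, not the gradcheck code):

```python
def numerical_grad(f, x, eps=1e-6):
    """Central-difference gradient of scalar function f at point x (a list)."""
    grad = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        grad.append((f(xp) - f(xm)) / (2 * eps))
    return grad


# f depends only on x[0]; x[1] plays the role of an input whose
# gradient should come out numerically zero.
f = lambda x: 3.0 * x[0]
g = numerical_grad(f, [1.0, 2.0])
print(g)  # g[0] is close to 3.0, g[1] is exactly 0.0
```

The same idea applies one level up: if no output requires grad, the double-backward values cannot be checked directly, but the numerical gradients should all vanish.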
2019-04-01 | Add string index/slice operations (#18247) | David Riazati | 5 files, -5/+110
Summary: Adds support for string indexing (`"a"[0]`) and slicing (`"abc"[1:3]`) to script. Pull Request resolved: https://github.com/pytorch/pytorch/pull/18247 Differential Revision: D14574486 Pulled By: driazati fbshipit-source-id: 4b42aa0881e5398ea7f112be46c0335e6e19dced
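The semantics being added mirror Python's own string indexing and slicing; a sketch of the normalization a script interpreter has to perform (these helpers are illustrative, not the TorchScript implementation):

```python
def str_index(s, i):
    """Normalize a possibly-negative index with bounds checking,
    the way evaluating "abc"[i] does."""
    n = len(s)
    if i < 0:
        i += n
    if not 0 <= i < n:
        raise IndexError("string index out of range")
    return s[i]

def str_slice(s, start, end):
    """Clamp slice bounds the way "abc"[1:3] does: negative bounds
    wrap around, out-of-range bounds are clamped, never an error."""
    n = len(s)
    if start < 0:
        start = max(0, start + n)
    if end < 0:
        end = max(0, end + n)
    start, end = min(start, n), min(end, n)
    return s[start:end]


print(str_index("abc", -1))      # c
print(str_slice("abcde", 1, 3))  # bc
print(str_slice("abc", 2, 100))  # c  (out-of-range end is clamped)
```

Note the asymmetry: indexing out of range raises, while slicing silently clamps. Matching that distinction is most of the work in supporting these operations in script.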
2019-04-01 | Re-land Parsing file check (#18570) | eellison | 8 files, -18/+258
Summary: The last time I tried to land it there was a merge race with the docs coverage test lol. Re-landing with the fix. Re-land of https://github.com/pytorch/pytorch/pull/18304 Pull Request resolved: https://github.com/pytorch/pytorch/pull/18570 Reviewed By: driazati Differential Revision: D14707285 Pulled By: eellison fbshipit-source-id: 3a0265928aa8cad78961723d8bf0fbf871fdb71d
2019-04-01 | Create Node2Vec ModuleKeeper | Ru Li | 2 files, -1/+2
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18504 Reviewed By: sunnieshang Differential Revision: D14632091 fbshipit-source-id: d4544866552dc6bcbc7515be9e88cb11e7622a44
2019-04-01 | use acc16 only when n>128 and k>128 in Skylake (#18672) | Jongsoo Park | 1 file, -6/+18
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18672 In Skylake, when n < 128 or k < 128, acc16 is slower. Reviewed By: jianyuh Differential Revision: D14700576 fbshipit-source-id: 80ca9f1af4626637eed9c5ca49f95ae744811189
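The dispatch this describes, falling back to 32-bit accumulation for small GEMM shapes where acc16 loses, can be sketched as (thresholds taken from the commit title; the function name is my own):

```python
def choose_accumulation(n, k):
    """Pick 16- vs 32-bit accumulation for a GEMM with inner dimension k
    and output width n. Per the commit, on Skylake acc16 only pays off
    when both n and k exceed 128."""
    if n > 128 and k > 128:
        return "acc16"
    return "acc32"


print(choose_accumulation(n=256, k=256))  # acc16
print(choose_accumulation(n=64, k=256))   # acc32: n too small
print(choose_accumulation(n=256, k=128))  # acc32: k not above threshold
```

The underlying trade-off: 16-bit accumulation doubles the useful SIMD width but requires periodic spills into 32-bit accumulators to avoid overflow, and for small n or k that overhead dominates.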
2019-04-01 | Move ideep singleton registration to ATen from C2. (#18335) | Gregory Chanan | 2 files, -1/+9
Summary: Since we are going to add ideep to ATen, and ATen is always compiled, it makes sense to have the registration in ATen rather than C2. Pull Request resolved: https://github.com/pytorch/pytorch/pull/18335 Reviewed By: bddppq Differential Revision: D14578652 Pulled By: gchanan fbshipit-source-id: 4d77fcfc21a362b21d5291a127498aa722548873
2019-04-01 | Create torch/lib directory before copying _C.lib on Windows environment. (#18666) | Shuichi KITAGUCHI | 1 file, -0/+6
Summary: `python setup.py develop` fails with the following messages.

~~~
...
-- Building with NumPy bindings
-- Not using cuDNN
-- Not using MIOpen
-- Not using CUDA
-- Using MKLDNN
-- Not using NCCL
-- Building without distributed package
Copying extension caffe2.python.caffe2_pybind11_state
Copying caffe2.python.caffe2_pybind11_state from torch\Lib\site-packages\caffe2\python\caffe2_pybind11_state.cp37-win_amd64.pyd to C:\data\source\pytorch\build\lib.win-amd64-3.7\caffe2\python\caffe2_pybind11_state.cp37-win_amd64.pyd
copying torch\Lib\site-packages\caffe2\python\caffe2_pybind11_state.cp37-win_amd64.pyd -> C:\data\source\pytorch\build\lib.win-amd64-3.7\caffe2\python
building 'torch._C' extension
creating build\temp.win-amd64-3.7
creating build\temp.win-amd64-3.7\Release
creating build\temp.win-amd64-3.7\Release\torch
creating build\temp.win-amd64-3.7\Release\torch\csrc
...
creating C:\data\source\pytorch\build\lib.win-amd64-3.7\torch
C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\VC\Tools\MSVC\14.16.27023\bin\HostX64\x64\link.exe /nologo /INCREMENTAL:NO /LTCG /nodefaultlib:libucrt.lib ucrt.lib /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:C:\data\source\pytorch\torch\lib /LIBPATH:C:\data\dlenv\libs /LIBPATH:C:\data\dlenv\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\VC\Tools\MSVC\14.16.27023\ATLMFC\lib\x64" "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\VC\Tools\MSVC\14.16.27023\lib\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\lib\um\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.17763.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.17763.0\um\x64" shm.lib torch_python.lib /EXPORT:PyInit__C build\temp.win-amd64-3.7\Release\torch/csrc/stub.obj /OUT:build\lib.win-amd64-3.7\torch\_C.cp37-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.7\Release\torch/csrc\_C.cp37-win_amd64.lib /NODEFAULTLIB:LIBCMT.LIB
Creating library build\temp.win-amd64-3.7\Release\torch/csrc\_C.cp37-win_amd64.lib and object build\temp.win-amd64-3.7\Release\torch/csrc\_C.cp37-win_amd64.exp
Generating code
Finished generating code
copying build\lib.win-amd64-3.7\torch\_C.cp37-win_amd64.pyd -> torch
copying build\lib.win-amd64-3.7\caffe2\python\caffe2_pybind11_state.cp37-win_amd64.pyd -> caffe2\python
copying build/temp.win-amd64-3.7/Release/torch/csrc/_C.cp37-win_amd64.lib -> build/lib.win-amd64-3.7/torch/lib/_C.lib
error: could not create 'build/lib.win-amd64-3.7/torch/lib/_C.lib': No such file or directory
~~~

When `python setup.py install` is executed, `torch/lib` has been created by the previous process (which copies many files) and this copy succeeds. But in develop mode, that process is not executed and this copy fails. This patch creates the `torch/lib` directory if it does not exist. Pull Request resolved: https://github.com/pytorch/pytorch/pull/18666 Differential Revision: D14704269 Pulled By: ezyang fbshipit-source-id: b2d7c698a906b945bf34bb78f17b91b4fdfd3294
2019-04-01 | Move flags that do not work on MSVC (#18686) | Sacha | 1 file, -2/+2
Summary: MSVC errors on these flags as they are not supported Pull Request resolved: https://github.com/pytorch/pytorch/pull/18686 Differential Revision: D14704254 Pulled By: ezyang fbshipit-source-id: 936d33ed6b7474d7774a49505cdac50dbe8dd99a
2019-03-31 | Fix unused lambda capture warnings (#18662) | Junjie Bai | 1 file, -1/+1
Summary:
```
aten/src/ATen/native/cpu/DistanceOpsKernel.cpp.DEFAULT.cpp:109:104: warning: lambda capture 'combs' is not used [-Wunused-lambda-capture]
parallel_for(0, combs, internal::GRAIN_SIZE / (16 * m), [p, self_start, self_end, n, m, res_start, combs](int64_t k, int64_t end) {
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18662 Differential Revision: D14699379 Pulled By: bddppq fbshipit-source-id: 5062d4327bb5f7b485c2ffa30c98e10576416f03
2019-03-31 | handle a rare case where histogram min is inf/nan (#18239) | Jongsoo Park | 2 files, -6/+14
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18239 When min is inf or nan, we get UBSAN errors Reviewed By: csummersea Differential Revision: D14537668 fbshipit-source-id: e70ffb5ecd2b10793356070c69fdabf8f25b203e
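Guarding a histogram's bin-width computation against non-finite bounds might look like this (a sketch of the general fix, not the Caffe2 code; the degenerate-case return value of 0.0 is my own choice):

```python
import math

def safe_bin_width(lo, hi, nbins):
    """Compute a histogram bin width, guarding against inf/nan bounds
    that would otherwise propagate into undefined behavior downstream."""
    if not (math.isfinite(lo) and math.isfinite(hi)) or hi < lo:
        return 0.0  # degenerate histogram: everything lands in one bin
    if hi == lo:
        return 0.0
    return (hi - lo) / nbins


print(safe_bin_width(0.0, 10.0, 5))           # 2.0
print(safe_bin_width(float("-inf"), 1.0, 5))  # 0.0
print(safe_bin_width(float("nan"), 1.0, 5))   # 0.0
```

Without the finiteness check, `(hi - lo) / nbins` with an infinite or NaN bound yields inf/NaN, and converting that to an integer bin index is exactly the kind of operation UBSAN flags.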