path: root/cmake
Age | Commit message | Author | Files | Lines
2018-05-09 | [Caffe2] [feature request] Add gradient operators for IDEEP (#7234) | Jinghui | 1 | -2/+0
* Add gradient operators for IDEEP
* Add gradient test cases for IDEEP
* Upgrade third_party/ideep
* Refine SumOp for IDEEP
* Share input buffer in fallback op if possible
* Fallback ConvTranspose op for IDEEP
* Fix bug introduced by the patch of sharing input buffer
* Share output buffer in fallback operators
* Remove IDEEP to resolve repo issue
* Reflash IDEEP repo
* Remove redundant lines in IDEEP
* Fallback operators for IDEEP (Flatten, ResizeLike, Transpose, and Reshape)

Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
2018-05-08 | onnx werror is now opt-in (#7390) | anderspapitto | 1 | -7/+0
2018-05-08 | Use a CI-specific onnx namespace to catch hardcoded ones in the code (#7369) | bddppq | 1 | -1/+3
2018-05-08 | [build] Make ATen buildable without all Caffe2 by root cmake (#7295) | Orion Reblitz-Richardson | 3 | -34/+45
* Make ATen buildable without all Caffe2 by root cmake
* Fix typo in aten cmake
* Set BUILD_ATEN from USE_ATEN as compat
* Only set BUILD_ATEN from USE_ATEN when on
* Have USE_GLOO only set when BUILD_CAFFE2
2018-05-07 | fix build (#7348) | anderspapitto | 1 | -1/+7
2018-05-07 | Remove cdft library requirement from MKL (#7246) | Yinghai Lu | 1 | -1/+1
2018-05-07 | Use sccache for Windows build (#7331) | Will Feng | 1 | -1/+1
2018-05-04 | set ONNX_NO_WERROR (#7296) | anderspapitto | 1 | -0/+1
2018-05-02 | [build] Setup to build ATen from root CMake file (#7163) | Orion Reblitz-Richardson | 7 | -179/+1105
* Setup to build ATen from root CMake file
* Move aten/src/TH/cmake into cmake/Modules
* Add special code path for FindMKL for merge
2018-05-01 | Separate "-Xcompiler <...>" into 2 elements because ${nvcc_flags} (when using CUDA_SEPARABLE_COMPILATION) doesn't recognize it (#7118) | xkszltl | 1 | -4/+4
This solves the "nvcc fatal : Unknown option 'Xcompiler -MD'" issue, where nvcc receives 'Xcompiler -MD' as a single option.
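A minimal sketch of the failure mode and the fix the title describes; `nvcc_flags` is the list named in the commit message, and the surrounding code is assumed for illustration:

```cmake
# Broken: a single list element with an embedded space. Under
# CUDA_SEPARABLE_COMPILATION nvcc receives it as one argument and fails with
# "nvcc fatal : Unknown option 'Xcompiler -MD'".
#   list(APPEND nvcc_flags "-Xcompiler -MD")

# Fixed: two separate list elements, which nvcc parses as intended.
list(APPEND nvcc_flags "-Xcompiler" "-MD")
```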
2018-04-30 | Introducing onnx-tensorrt to third_party (#7119) | Yinghai Lu | 1 | -2/+2
2018-04-30 | Add dependency from caffe2_gpu to ATen in CMake (#7117) | bddppq | 1 | -3/+6
2018-04-28 | [Caffe2] Follow-up of onnx-trt API change (#7076) | Yinghai Lu | 1 | -1/+1
* Follow-up of onnx-trt API change
* indent
* comments
2018-04-25 | Statically linking CUDA for Anaconda builds (#6680) | Paul Jesse Hellemn | 4 | -15/+88
* Statically linking CUDA for Anaconda builds
* typo
* Adding a summary line
* Comments
* Typo fix
* Fix faulty parameter passing
* Removing problem CUDA modules for now
* Fixing unused debugging function
* Turning off static cuda linking until script changes are in
* Disabling mkl
2018-04-25 | [caffe2][cmake][opencl] Wrong directories were being included, which might break systems without opencl in the system headers (#6972) | Bram Wasti | 1 | -1/+1
2018-04-24 | [Caffe2] Clean up ideep integration (#6881) | Yinghai Lu | 2 | -10/+16
* Clean up ideep integration
* Remove redundant code in convnet benchmark
* MKL ON
* Do not add -mavx2 everywhere
* Comments
* rename
2018-04-22 | [feature request] [Caffe2] Enable MKLDNN support for inference (#6699) | Jinghui | 3 | -122/+214
* Add operators based on IDEEP interfaces
* Enable IDEEP as a caffe2 device
* Add test cases for IDEEP ops
* Add IDEEP as a caffe2 submodule
* Skip test cases if no IDEEP support
* Correct cmake options for IDEEP
* Add dependencies on ideep libraries
* Fix issues in IDEEP conv ops, etc.
* Move ideep from caffe2/ideep to caffe2/contrib/ideep
* Update IDEEP to fix cmake issue
* Fix cmake issue caused by USE_MKL option
* Correct comments in MKL cmake file

Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
2018-04-22 | [Caffe2] Fix cuda.cmake (#6821) | Yinghai Lu | 1 | -1/+1
* Fix cmake
2018-04-22 | fix typo (#6824) | Yinghai Lu | 2 | -2/+2
2018-04-20 | Add option cache to speed up cmake build (#6737) | Yangqing Jia | 2 | -29/+42
* Add option cache to speed up cmake build
* Also only run autogen_init_py_files once
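The PR's exact mechanism isn't reproduced in this log; as a hedged sketch of the general idea behind caching configure-time results (the flag and variable names here are illustrative, not the PR's actual contents), CMake's check macros store their verdicts in the cache, so expensive test compiles run only on the first configure:

```cmake
include(CheckCXXCompilerFlag)
# The result variable is cached, so this test compile runs once; subsequent
# cmake re-runs reuse the cached value and configure noticeably faster.
check_cxx_compiler_flag("-mavx2" CAFFE2_COMPILER_SUPPORTS_AVX2)
if(CAFFE2_COMPILER_SUPPORTS_AVX2)
  add_compile_options(-mavx2)
endif()
```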
2018-04-20 | [caffe2][opencl] Add OpenCL context (#6777) | Bram Wasti | 2 | -0/+9
2018-04-17 | Update from Facebook (#6692) | Orion Reblitz-Richardson | 1 | -1/+1
* [GanH][Easy]: Add assertion to adaptive weighting layer. A 0 weight causes numeric instability and exploding NE.
* [Easy] Add cast op before computing norm in diagnose options. As LpNorm only takes floats, we add a manual cast here.
* Introduce a new caching device allocator. `cudaMalloc` and `cudaFree` calls are slow, and become slower the more GPUs there are: they grab a host-wide (not device-wide) lock, because GPU memory is transparently shared across all GPUs. Normally this isn't much of a concern, since workloads allocate memory upfront and reuse it during later computation. However, under some computation models (specifically, memory-conserving approaches like checkpoint-and-recompute; see https://medium.com/@yaroslavvb/fitting-larger-networks-into-memory-583e3c758ff9) this assumption no longer holds, and `cudaMalloc`/`cudaFree` calls become common and frequent. Furthermore, in data-parallel contexts these calls happen at nearly the same time on all GPUs, worsening lock contention. A common solution is a custom allocator, and NVIDIA provides one out of the box: CUB, which Caffe2 already supports. Unfortunately, the CUB allocator suffers from very high fragmentation, primarily because it is a "buddy" allocator that neither splits nor merges free cached blocks (study https://github.com/NVlabs/cub/blob/1.8.0/cub/util_allocator.cuh#L357 if you want to convince yourself). This diff adapts a caching allocator from the Torch codebase (https://github.com/torch/cutorch/blob/master/lib/THC/THCCachingAllocator.cpp) which does split and merge free blocks, and it ends up working really well, at least for workloads like the checkpoint-and-recompute computation models noted above. I simplified the implementation a little and made it a bit more C++-like. I also removed a bunch of stream synchronization primitives for this diff; I plan to add them back in subsequent diffs.
* Report reader progress in fblearner workflows. Integrate with the fblearner progress-reporting API and add support for reporting training progress from reader nodes. If the reader is constructed with batch limits, report finished batches vs. total batches; finished may exceed total because we evaluate whether to stop processing every time we dequeue a split. If the reader has no limit, report finished splits (Hive files) vs. total splits, which is fairly accurate.
* [GanH][Diagnose]: Fix plotting. 1) ganh diagnose needs to set plot options; 2) the modifier's blob name, used for the metric field, needs to be fixed before generating the net.
* Automatic update of fbcode/onnx to 985af3f5a0f7e7d29bc0ee6b13047e7ead9c90c8
* Make CompositeReader stop as soon as one reader finishes. Previously, CompositeReader called all readers before stopping. This caused a flaky test, since the last batch may be read by different threads, resulting in dropped data.
* [dper] Make sure loss is not NaN, as described.
* [rosetta2] [mobile-vision] Option to export NHWC order for RoIWarp/RoIAlign. Thanks for finding this, @stzpz and @wangyanghan. NHWC looks more optimized. For OCR it doesn't yet help, since NHWC uses more memory bandwidth, but it will soon become important.
* Intra-op parallel FC operator.
* [C2 Proto] Pass extra information in device option. Design doc: https://fb.quip.com/yAiuAXkRXZGx
* Unregister MKL fallbacks for NCHW conversions.
* Tracing for more executors: modify Tracer to work with other executors and add more tracing.
* Remove ShiftActivationDevices().
* Check for a blob entry only if it is present: when processing placeholder ops, ignore blobs not present in blob_to_device.
* Internalize use of Eigen tensor: move it out of the header file so we don't get template partial-specialization errors when building other libraries.
* Feature importance for transformed features.
* Fix unused-parameter warnings: comment out unused parameters so that -Wunused-parameter can be enabled as an error.
* Add opencv dependencies to caffe2: the video input op requires additional opencv packages, so add them to cmake so it can build.
* Add clip_by_value option in gradient clipping: when a value is bigger than max or smaller than min, clip it to the boundary.
* std::round compat
2018-04-11 | [caffe2] Update ReduceOps (#6497) | Xiaomeng Yang | 1 | -0/+3
* Update ReduceMean
* Add reduce mean to math
* Update cuda flag
* Update Eigen::Tensor ctor
* Remove unused variables
* Skip ReduceTensorGPUTest if no gpus
* Add NOMINMAX for windows
* Fix lpnorm_op in windows
2018-04-11 | [caffe2] Minor changes in NNPACK CMake scripts (#6532) | Marat Dukhan | 2 | -42/+1
- Tell NNPACK not to link pthreadpool, but only use its headers
- Remove FindNNPACK.cmake, as it is no longer used
2018-04-11 | [Caffe2] Add support to TensorRT (#6150) | Yinghai Lu | 3 | -0/+45
* Add support to TensorRT
* Removed License header
* Bind input/output by position
* Comments
* More comments
* Add benchmark
* Add warning for performance degradation on large batch
* Address comments
* comments
2018-04-08 | [pytorch] add static linkage support for CuDNN and NCCL (#6410) | Soumith Chintala | 1 | -1/+8
* When linking static CUDA libs, add an additional dependency on culibos.a
* Add USE_STATIC_NCCL option
* Add USE_STATIC_CUDNN option
* Remove libATen soversion
* Add caffe, caffe2 folders to setup.py exclude list
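A sketch of what such options typically expand to; the variable names below are assumptions for illustration, not the tree's actual ones. Note culibos, the CUDA toolkit support library that static CUDA libraries depend on:

```cmake
option(USE_STATIC_CUDNN "Link cuDNN statically" OFF)
if(USE_STATIC_CUDNN)
  # Prefer the static archive over the shared library.
  find_library(CUDNN_LIBRARY libcudnn_static.a
               PATHS ${CUDNN_ROOT_DIR} PATH_SUFFIXES lib lib64)
  # Static CUDA libraries need the toolkit's culibos support library.
  find_library(CUDA_culibos_LIBRARY culibos
               PATHS ${CUDA_TOOLKIT_ROOT_DIR} PATH_SUFFIXES lib64)
  list(APPEND Caffe2_CUDA_DEPENDENCY_LIBS
       ${CUDNN_LIBRARY} ${CUDA_culibos_LIBRARY})
endif()
```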
2018-04-06 | [caffe2] Always build NNPACK together with Caffe2 (#6365) | Marat Dukhan | 1 | -9/+3
Caffe2 started with an option to use an NNPACK pre-installed on the system. This option is now mostly legacy, as Caffe2 can include NNPACK in its own build on all platforms. Because a pre-installed NNPACK built with different dependencies or compiler options causes problems, we decided to remove the option and always build NNPACK with Caffe2. This change makes Caffe2 always build NNPACK as part of its own build, and updates the NNPACK and cpuinfo submodules.
2018-04-05 | Modify cmake dedent function to make it compatible with Windows (#6296) | harrysummer | 1 | -3/+2
2018-04-04 | Change ATen to use Caffe2/cmake upstream FindCUDA (#6240) | Edward Z. Yang | 1 | -0/+7
* Remove ATen's copy of FindCUDA
* Minor bugfix for updated FindCUDA
* Use cl.exe as the host compiler even when clcache.exe is set. Upstream merge request at https://gitlab.kitware.com/cmake/cmake/merge_requests/1933. H/t peterjc123, who contributed the original version of this patch.
* Include CMakeInitializeConfigs polyfill from ATen
* Tweak the regex so it actually works on Windows

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2018-04-04 | Update FindCUDA to cmake master as of 561238bb6f07a5ab31293928bd98f6f8911d8bc1 (#6241) | Edward Z. Yang | 4 | -28/+88
* Update FindCUDA to cmake master as of 561238bb6f07a5ab31293928bd98f6f8911d8bc1. NB: I DID have to apply one local patch; it's the `include_guard` change. Should be obvious next time you do an update. Relevant upstream commits:
  - 23119366e9 (Edward Z. Yang, 2018-03-28) FindCUDA: Make nvcc configurable via CUDA_NVCC_EXECUTABLE env var. This is useful if, for example, you want ccache to be used for nvcc. With the current behavior, cmake always picks up /usr/local/cuda/bin/nvcc, even if there is a ccache nvcc stub in the PATH. Allowing CUDA_NVCC_EXECUTABLE lets us work around the problem.
  - e743fc8e91 (Henry Fredrick Schreiner, 2018-03-15) FindCUDA/select_compute_arch: Add support for CUDA as a language. Even though this is an internal module, we can still prepare it to be used in another public-facing module outside of `FindCUDA`. Issue: #16586
  - 193082a3c8 (luz.paz, 2018-02-08) MAINT: Misc. typos, found via `codespell -q 3 -I ../cmake-whitelist.txt`.
  - 9f74aaeb7d (Brad King, 2018-01-30) FindCUDA: Fix regression in per-config flags. Changes in commit 48f7e2d300 (Unhardcode the CMAKE_CONFIGURATION_TYPES values, 2017-11-27) accidentally left `CUDA_configuration_types` undefined, but it is used in a few places to handle per-config flags. Restore it. Fixes: #17671
  - d91b2d9158 (luz.paz, 2018-01-10) MAINT: Misc. typos, found via `codespell`.
  - d08f3f551f (Brad King, 2018-01-10) Merge topic 'unhardcode-configuration-types'.
  - 5fbfa18fad (Brad King, 2018-01-09) Merge topic 'FindCUDA-deduplicate-c+std-host-flags': de-duplicate the C++11 flag when propagating host flags.
  - bc88329e5b (Brad King, 2018-01-09) Merge topic 'msvc2017-findcuda': update FindCUDA to properly find MSVC 2017 compiler tools.
  - 48f7e2d300 (Beren Minor, 2017-11-27) Unhardcode the CMAKE_CONFIGURATION_TYPES values. This removes duplicated code for per-config variable initialization by providing a `cmake_initialize_per_config_variable(<PREFIX> <DOCSTRING>)` function. This function initializes a `<PREFIX>` cache variable from `<PREFIX>_INIT` and, unless the `CMAKE_NOT_USING_CONFIG_FLAGS` variable is defined, does the same with `<PREFIX>_<CONFIG>` from `<PREFIX>_<CONFIG>_INIT` for every `<CONFIG>` in `CMAKE_CONFIGURATION_TYPES` (multi-config generators) or `CMAKE_BUILD_TYPE` (single-config generators).
* Polyfill CMakeInitializeConfigs
* Tweak condition for when to use bundled FindCUDA support
* Comment out include_guard

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
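The CUDA_NVCC_EXECUTABLE hook described in the first upstream commit above is easy to picture; this is a simplified rendition of the behavior it describes, not the exact upstream code, and the wrapper path in the comment is hypothetical:

```cmake
# Honor a CUDA_NVCC_EXECUTABLE environment variable so nvcc can be routed
# through a ccache wrapper instead of always resolving to
# /usr/local/cuda/bin/nvcc, e.g.:
#   CUDA_NVCC_EXECUTABLE=/path/to/ccache-nvcc cmake ..
if(DEFINED ENV{CUDA_NVCC_EXECUTABLE})
  set(CUDA_NVCC_EXECUTABLE "$ENV{CUDA_NVCC_EXECUTABLE}")
else()
  find_program(CUDA_NVCC_EXECUTABLE nvcc
               PATHS "${CUDA_TOOLKIT_ROOT_DIR}/bin")
endif()
```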
2018-04-04 | [Caffe2] Always build local protobuf library with -fPIC (#6264) | bddppq | 1 | -4/+5
* Always build local protobuf library with -fPIC
2018-04-03 | Expunge ATen submodule; use the in-tree copy. (#6235) | Edward Z. Yang | 1 | -1/+1
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2018-04-03 | Changes without protoc conditions (#6142) | Paul Jesse Hellemn | 1 | -3/+4
2018-03-28 | Use local FindCUDA for CMake < 3.7 | Orion Reblitz-Richardson | 1 | -0/+2
2018-03-28 | Update CAFFE2_LINK_LOCAL_PROTOBUF functionality | Orion Reblitz-Richardson | 4 | -19/+84
* Continuation of https://github.com/caffe2/caffe2/pull/2306 and based on Yangqing's PR at https://github.com/caffe2/caffe2/pull/2326
* Build caffe2_protos as a static library and link it whole into libcaffe2.so.
* For protobuf::libprotobuf, only link it to libcaffe2_protos (and hence libcaffe2.so), but not to any downstream library. This avoids manipulating protobuf objects across DLL boundaries.
* After the above, linking complains that fixed_address_empty_string is not found. This is because protobuf is compiled with hidden visibility, while the generated caffe2.pb.h has an inline function that invokes protobuf's inline GetEmptyStringAlreadyInited().
* Added sed-like commands to rewrite the generated header to use caffe2::GetEmptyStringAlreadyInited() instead, and, in proto_utils.cc, implemented a function that routes the call to protobuf's internal one. This works because caffe2::GetEmptyStringAlreadyInited() is visible globally, and libcaffe2.so can see the real protobuf one; it ensures we always call the protobuf functions that live inside libcaffe2.so.
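A minimal CMake sketch of the linking scheme described above, assuming GNU ld (other linkers need their own whole-archive spelling, such as -force_load on macOS); ${CAFFE2_PROTO_SRCS} is a hypothetical variable standing in for the generated .pb.cc files:

```cmake
# Build the generated protobuf sources as a static archive.
add_library(caffe2_protos STATIC ${CAFFE2_PROTO_SRCS})
# protobuf is linked here and only here, never to downstream targets,
# so protobuf objects never cross a shared-library boundary.
target_link_libraries(caffe2_protos PUBLIC protobuf::libprotobuf)
# Fold the whole archive into libcaffe2.so so every proto symbol is kept.
target_link_libraries(caffe2 PRIVATE
  -Wl,--whole-archive caffe2_protos -Wl,--no-whole-archive)
```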
2018-03-26 | Caffe2-onnx exporter (#2248) | Yinghai Lu | 1 | -0/+7
* caffe2-onnx frontend
* Remove Python part of the conversion code
* nit
* convert more ops
* Address comments
2018-03-26 | Strip down onnx to only pb definitions in mobile build (#2426) | bddppq | 1 | -1/+7
2018-03-26 | Initialize cpuinfo in the thread pool | Marat Dukhan | 1 | -0/+7
The thread pool called cpuinfo_get_processors_count() without initializing cpuinfo. Only by luck did this not make Caffe2 single-threaded: the thread pool is initialized after NNPACK, and NNPACK initializes cpuinfo itself. This commit also updates cpuinfo to a version that aborts with a fatal error if it is used uninitialized.
2018-03-21 | Remove more protobuf APIs. (#2348) | Yangqing Jia | 1 | -2/+11
* Wrap ShutdownProtobufLibrary
* Remove text_format.h header and only put the function in proto_utils.h
* ParseFromString returns bool
2018-03-19 | Remove USE_THREADS since it is needed explicitly. (#2322) | Yangqing Jia | 3 | -7/+7
2018-03-19 | Add an option for Caffe2 to link with local protobuf. (#2306) | Yangqing Jia | 2 | -6/+31
2018-03-18 | put caffe2_protos into a standalone target (#2302) | Yangqing Jia | 1 | -3/+0
2018-03-12 | Onnx caffe2 backend (#2039) | Yinghai Lu | 2 | -1/+28
* C++ version of ONNX->Caffe2 backend
* use namespace ONNX_NAMESPACE
* Fix build
* Comments
* Change namespace from onnx_caffe2 to caffe2::onnx
2018-03-09 | Use cpuinfo instead of Android's libcpufeatures in Android build | Marat Dukhan | 2 | -21/+30
2018-03-06 | Bump minimum CMake version to 3.2 | Marat Dukhan | 2 | -31/+12
CMake 3.2 is required to properly track dependencies of projects imported via ExternalProject_Add (the BUILD_BYPRODUCTS parameter). Users on Ubuntu 14.04 LTS will need to install and use the cmake3 package for configuration; users of other popular distributions generally have a recent enough CMake package.
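What the 3.2 floor buys, as a generic, hedged example (the dependency name and library path are made up for illustration): BUILD_BYPRODUCTS declares the files an external build produces, which generators such as Ninja need for dependency tracking.

```cmake
include(ExternalProject)
# BUILD_BYPRODUCTS (new in CMake 3.2) declares the outputs of the external
# build so that Ninja can track them as dependencies of later link steps.
ExternalProject_Add(foo_external
  SOURCE_DIR "${PROJECT_SOURCE_DIR}/third_party/foo"
  CMAKE_ARGS -DCMAKE_INSTALL_PREFIX:PATH=<INSTALL_DIR>
  BUILD_BYPRODUCTS <INSTALL_DIR>/lib/libfoo.a)
```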
2018-03-06 | Caffe2 module update: move observers as well as binaries. (#2145) | Yangqing Jia | 7 | -9/+37
* Caffe2 module update: move observers as well as binaries
* Add threads linkage
* Add Threads dependency to public interface
2018-03-06 | Fix OSS build | Dmytro Dzhulgakov | 1 | -0/+18
Fix the OSS build, broken after D6946982, by adding a CMake detection variable (https://ci.pytorch.org/jenkins/job/caffe2-builds/job/py2-gcc4.9-ubuntu14.04-build/1343/console).
2018-03-05 | Add Numa support (#2152) | Yangqing Jia | 2 | -0/+46
2018-03-05 | Add C++ preprocessor define CAFFE2_USE_EXCEPTION_PTR to guard use of std::exception_ptr | Kutta Srinivasan | 1 | -2/+25
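A hedged guess at the shape of the detection this commit adds; the actual probe in the tree may differ, and the result variable name is an assumption:

```cmake
include(CheckCXXSourceCompiles)
# Probe whether std::exception_ptr compiles before defining the guard macro.
check_cxx_source_compiles("
  #include <exception>
  int main() {
    std::exception_ptr eptr = std::current_exception();
    return eptr ? 1 : 0;
  }" CAFFE2_EXCEPTION_PTR_SUPPORTED)  # result variable name is hypothetical
if(CAFFE2_EXCEPTION_PTR_SUPPORTED)
  add_definitions(-DCAFFE2_USE_EXCEPTION_PTR)
endif()
```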
2018-03-05 | Update Python information shown in CMake summary (#2132) | bddppq | 1 | -1/+1
* Do not show the Python library in the cmake summary, as we no longer link with libpython
* Show Python include dirs in the cmake summary