Age | Commit message | Author | Files | Lines |
|
* Add gradient operators for IDEEP
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Add gradient test cases for IDEEP
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Upgrade third_party/ideep
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Refine SumOp for IDEEP
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Share input buffer in fallback op if possible
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Fallback ConvTranspose op for IDEEP
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Fix a bug introduced by the input-buffer-sharing patch
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Share output buffer in fallback operators
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Remove IDEEP to resolve repo issue
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Refresh IDEEP repo
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Remove redundant lines in IDEEP
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Fallback operators for IDEEP
(Flatten, ResizeLike, Transpose, and Reshape)
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
|
* Make ATen buildable without all of Caffe2 from the root cmake
* Fix typo in aten cmake
* Set BUILD_ATEN from USE_ATEN for compatibility
* Only set BUILD_ATEN from USE_ATEN when it is on
* Only set USE_GLOO when BUILD_CAFFE2 is on
|
* Set up building ATen from the root CMake file
* Move aten/src/TH/cmake into cmake/Modules
* Add special code path for FindMKL for merge
|
using CUDA_SEPARABLE_COMPILATION) doesn't recognize it. (#7118)
This solves the "nvcc fatal : Unknown option 'Xcompiler -MD'" issue, where nvcc is handed "-Xcompiler -MD" as one malformed option.
|
* Follow-up of onnx-trt API change
* indent
* comments
|
* Statically linking CUDA for Anaconda builds
* typo
* Adding a summary line
* Comments
* Typo fix
* Fix faulty parameter passing
* Removing problem CUDA modules for now
* Fixing unused debugging function
* Turning off static cuda linking until script changes are in
* Disabling mkl
|
break systems without opencl in the system headers (#6972)
|
* Clean up ideep integration
* .
* Remove redundant code in convnet benchmark
* MKL ON
* Do not add -mavx2 everywhere
* .
* Comments
* rename
* .
|
* Add operators based on IDEEP interfaces
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Enable IDEEP as a caffe2 device
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Add test cases for IDEEP ops
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Add IDEEP as a caffe2 submodule
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Skip test cases if no IDEEP support
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Correct cmake options for IDEEP
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Add dependencies on ideep libraries
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Fix issues in IDEEP conv ops, etc.
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Move ideep from caffe2/ideep to caffe2/contrib/ideep
Signed-off-by: Gu Jinghui <jinghui.gu@intel.com>
* Update IDEEP to fix cmake issue
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Fix cmake issue caused by USE_MKL option
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
* Correct comments in MKL cmake file
Signed-off-by: Gu, Jinghui <jinghui.gu@intel.com>
|
* Fix cmake
* .
|
* Add an option cache to speed up the cmake build
* Also only run autogen_init_py_files once
|
* [GanH][Easy]: Add assertion to adaptive weighting layer
A zero weight causes numeric instability and exploding NE.
* [Easy] Add cast op before computing norm in diagnose options
Since LpNorm only takes floats, we add a manual cast here.
* Introduce a new caching device allocator
`cudaMalloc` and `cudaFree` calls are slow, and become slower the
more GPUs there are. Essentially, they grab a host-wide (not device-wide) lock
because GPU memory is transparently shared across all GPUs. Normally, this
isn't much of a concern since workloads allocate memory upfront, and reuse it
during later computation.
However, under some computation models (specifically, memory-conserving
approaches like checkpoint-and-recompute, see
https://medium.com/@yaroslavvb/fitting-larger-networks-into-memory-583e3c758ff9)
this assumption no longer holds. In these situations, `cudaMalloc` and
`cudaFree` calls are common and frequent. Furthermore, in data-parallel contexts,
these calls happen at nearly the same time from all GPUs, worsening lock
contention.
A common solution to this problem is to add a custom allocator. In fact,
NVIDIA provides one out of the box: CUB, which Caffe2 already supports.
Unfortunately, the CUB allocator suffers from very high fragmentation. This is
primarily because it is a "buddy" allocator which neither splits nor merges
free cached blocks. Study
https://github.com/NVlabs/cub/blob/1.8.0/cub/util_allocator.cuh#L357 if you
want to convince yourself.
This diff adapts a caching allocator from the Torch codebase
https://github.com/torch/cutorch/blob/master/lib/THC/THCCachingAllocator.cpp
which does both splitting and merging, and it ends up working really well, at
least for workloads like the checkpoint-and-recompute computation models noted
above (a minimal sketch of the split-and-merge idea appears at the end of this list).
I simplified the implementation a little and made it a bit more C++-like. I
also removed a bunch of stream-synchronization primitives for this diff; I
plan to add them back in subsequent diffs.
* Report reader progress in fblearner workflows
Integrate with the fblearner progress-reporting API and add support for reporting training progress from reader nodes.
If the reader is constructed with batch limits, report finished batches vs. total batches. The finished count may exceed the total because we evaluate whether we should stop processing every time we dequeue a split.
If the reader has no limit, report finished splits (Hive files) vs. total splits. This is fairly accurate.
* [GanH][Diagnose]: fix plotting
1. GanH diagnose needs to set plot options.
2. The modifier's blob name, which is used for the metric field, needs to be fixed
before generating the net.
* Automatic update of fbcode/onnx to 985af3f5a0f7e7d29bc0ee6b13047e7ead9c90c8
* Make CompositeReader stop as soon as one reader finishes
Previously, CompositeReader called all readers before stopping. This resulted in a flaky test, since the last batch may be read by different threads, resulting in dropped data.
* [dper] make sure loss is not nan
as desc.
* [rosetta2] [mobile-vision] Option to export NHWC order for RoIWarp/RoIAlign
Thanks for finding this, @stzpz and @wangyanghan. Looks like NHWC is more
optimized. For OCR it doesn't help yet, since NHWC uses more memory bandwidth,
but it will soon become important.
* Intra-op parallel FC operator
* [C2 Proto] extra info in device option
passing extra information in device option
design doc: https://fb.quip.com/yAiuAXkRXZGx
* Unregister MKL fallbacks for NCHW conversions
* Tracing for more executors
Modified Tracer to work with other executors and added more tracing
* Remove ShiftActivationDevices()
* Check for blob entry iff it is present
When processing placeholder ops, ignore the blob if it is not present in blob_to_device.
* Internalize use of eigen tensor
Move use of eigen tensor out of the header file so we don't get template partial specialization errors when building other libraries.
* feature importance for transformed features.
* - Fix unused parameter warnings
The changes in this diff comment out unused parameters.
This will allow us to enable -Wunused-parameter as an error.
#accept2ship
* add opencv dependencies to caffe2
The video input op requires additional opencv packages. This adds them to
cmake so that it can build.
* Add clip_by_value option in gradient clipping
When a value is bigger than max or smaller than min, clip it to the boundary,
i.e. g = min(max(g, clip_min), clip_max).
* std::round compat
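As a rough illustration of the split-and-merge idea behind the caching-allocator item above, here is a minimal, self-contained C++ sketch. It is not the Caffe2/THC implementation; the CachingAllocator, Block, and kMinSplit names are illustrative, and real versions also round request sizes, carve blocks out of large arenas, track streams and devices, and release cached memory under pressure. Only cudaMalloc/cudaFree are the real CUDA runtime API.

    #include <cuda_runtime.h>
    #include <cstddef>
    #include <map>
    #include <utility>

    // One contiguous device region; prev/next link physically adjacent blocks
    // carved out of the same cudaMalloc arena.
    struct Block {
      char* ptr;
      size_t size;
      bool allocated = false;
      Block* prev = nullptr;
      Block* next = nullptr;
      Block(char* p, size_t s) : ptr(p), size(s) {}
    };

    class CachingAllocator {
     public:
      void* allocate(size_t size) {
        // Best fit: the smallest cached free block that is large enough.
        auto it = free_blocks_.lower_bound({size, nullptr});
        Block* b = nullptr;
        if (it != free_blocks_.end()) {
          b = it->second;
          free_blocks_.erase(it);
        } else {
          void* p = nullptr;
          if (cudaMalloc(&p, size) != cudaSuccess) return nullptr;
          b = new Block(static_cast<char*>(p), size);
        }
        // Split: hand back only what was asked for and cache the remainder,
        // instead of wasting the tail of a large cached block.
        if (b->size >= size + kMinSplit) {
          Block* rest = new Block(b->ptr + size, b->size - size);
          rest->prev = b;
          rest->next = b->next;
          if (b->next) b->next->prev = rest;
          b->next = rest;
          b->size = size;
          free_blocks_.insert({{rest->size, rest->ptr}, rest});
        }
        b->allocated = true;
        live_[b->ptr] = b;
        return b->ptr;
      }

      void deallocate(void* ptr) {
        Block* b = live_.at(static_cast<char*>(ptr));
        live_.erase(b->ptr);
        b->allocated = false;
        // Merge with free physical neighbors to fight fragmentation, then
        // cache the coalesced block instead of calling cudaFree.
        b = merge(b->prev, b);
        b = merge(b, b->next);
        free_blocks_.insert({{b->size, b->ptr}, b});
      }

     private:
      static constexpr size_t kMinSplit = 512;

      // Absorb `right` into `left` when both exist and are free; return the
      // block that now covers the range (erase-by-key is a no-op if absent).
      Block* merge(Block* left, Block* right) {
        if (!left || !right || left->allocated || right->allocated) {
          return (left && !left->allocated) ? left : right;
        }
        free_blocks_.erase({left->size, left->ptr});
        free_blocks_.erase({right->size, right->ptr});
        left->size += right->size;
        left->next = right->next;
        if (right->next) right->next->prev = left;
        delete right;
        return left;
      }

      // Free blocks keyed by (size, ptr) for best-fit lookup.
      std::map<std::pair<size_t, char*>, Block*> free_blocks_;
      std::map<char*, Block*> live_;
    };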
|
* Update ReduceMean
* Add reduce mean to math
* Update cuda flag
* Update Eigen::Tensor ctor
* Remove unused variables
* Skip ReduceTensorGPUTest if no gpus
* Add NOMINMAX for windows
* Fix lpnorm_op in windows
|
- Tell NNPACK not to link pthreadpool, but only use its headers
- Remove FindNNPACK.cmake as it is no longer used
|
* Add support for TensorRT
* Removed License header
* Bind input/output by position
* Comments
* More comments
* Add benchmark
* Add warning for performance degradation on large batch
* Address comments
* comments
|
* When linking static CUDA libs, add a dependency on culibos.a
* add USE_STATIC_NCCL option
* add USE_STATIC_CUDNN option
* remove libATen soversion
* add caffe, caffe2 folders to setup.py exclude list
|
Caffe2 started with an option to use NNPACK pre-installed in the system.
This option is now mostly legacy, as Caffe2 can include NNPACK in its own build on all platforms.
Due to problems when pre-installed NNPACK is built with different dependencies or compiler options, we decided to remove this option and always build NNPACK with Caffe2.
This change makes Caffe2 always build NNPACK as part of its own build, and updates the NNPACK and cpuinfo submodules.
|
* Remove ATen's copy of FindCUDA
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Minor bugfix for updated FindCUDA.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Use cl.exe as the host compiler even when clcache.exe is set.
Upstream merge request at https://gitlab.kitware.com/cmake/cmake/merge_requests/1933
H/t peterjc123 who contributed the original version of this patch.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Include CMakeInitializeConfigs polyfill from ATen.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Tweak the regex so it actually works on Windows.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
|
* Update FindCUDA to cmake master as of 561238bb6f07a5ab31293928bd98f6f8911d8bc1
NB: I DID have to apply one local patch; it's the `include_guard` change. Should
be obvious next time you do an update.
Relevant commits:
commit 23119366e9d4e56e13c1fdec9dbff5e8f8c55ee5
Author: Edward Z. Yang <ezyang@fb.com>
Date: Wed Mar 28 11:33:56 2018 -0400
FindCUDA: Make nvcc configurable via CUDA_NVCC_EXECUTABLE env var
This is useful if, for example, you want ccache to be used
for nvcc. With the current behavior, cmake always picks up
/usr/local/cuda/bin/nvcc, even if there is a ccache nvcc
stub in the PATH. Allowing for CUDA_NVCC_EXECUTABLE lets
us work around the problem.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
commit e743fc8e9137692232f0220ac901f5a15cbd62cf
Author: Henry Fredrick Schreiner <henry.fredrick.schreiner@cern.ch>
Date: Thu Mar 15 15:30:50 2018 +0100
FindCUDA/select_compute_arch: Add support for CUDA as a language
Even though this is an internal module, we can still prepare it to
be used in another public-facing module outside of `FindCUDA`.
Issue: #16586
commit 193082a3c803a6418f0f1b5976dc34a91cf30805
Author: luz.paz <luzpaz@users.noreply.github.com>
Date: Thu Feb 8 06:27:21 2018 -0500
MAINT: Misc. typos
Found via `codespell -q 3 -I ../cmake-whitelist.txt`.
commit 9f74aaeb7d6649241c4a478410e87d092c462960
Author: Brad King <brad.king@kitware.com>
Date: Tue Jan 30 08:18:11 2018 -0500
FindCUDA: Fix regression in per-config flags
Changes in commit 48f7e2d300 (Unhardcode the CMAKE_CONFIGURATION_TYPES
values, 2017-11-27) accidentally left `CUDA_configuration_types`
undefined, but this is used in a few places to handle per-config flags.
Restore it.
Fixes: #17671
commit d91b2d9158cbe5d65bfcc8f7512503d7f226ad91
Author: luz.paz <luzpaz@users.noreply.github.com>
Date: Wed Jan 10 12:34:14 2018 -0500
MAINT: Misc. typos
Found via `codespell`
commit d08f3f551fa94b13a1d43338eaed68bcecb95cff
Merge: 1be22978e 1f4d7a071
Author: Brad King <brad.king@kitware.com>
Date: Wed Jan 10 15:34:57 2018 +0000
Merge topic 'unhardcode-configuration-types'
1f4d7a07 Help: Add references and backticks in LINK_FLAGS prop_tgt
48f7e2d3 Unhardcode the CMAKE_CONFIGURATION_TYPES values
Acked-by: Kitware Robot <kwrobot@kitware.com>
Merge-request: !1345
commit 5fbfa18fadf945963687cd95627c1bc62b68948a
Merge: bc88329e5 ff41a4b81
Author: Brad King <brad.king@kitware.com>
Date: Tue Jan 9 14:26:35 2018 +0000
Merge topic 'FindCUDA-deduplicate-c+std-host-flags'
ff41a4b8 FindCUDA: de-duplicates C++11 flag when propagating host flags.
Acked-by: Kitware Robot <kwrobot@kitware.com>
Merge-request: !1628
commit bc88329e5ba7b1a14538f23f4fa223ac8d6d5895
Merge: 89d127463 fab1b432e
Author: Brad King <brad.king@kitware.com>
Date: Tue Jan 9 14:26:16 2018 +0000
Merge topic 'msvc2017-findcuda'
fab1b432 FindCUDA: Update to properly find MSVC 2017 compiler tools
Acked-by: Kitware Robot <kwrobot@kitware.com>
Acked-by: Robert Maynard <robert.maynard@kitware.com>
Merge-request: !1631
commit 48f7e2d30000dc57c31d3e3ab81077950704a587
Author: Beren Minor <beren.minor+git@gmail.com>
Date: Mon Nov 27 19:22:11 2017 +0100
Unhardcode the CMAKE_CONFIGURATION_TYPES values
This removes duplicated code for per-config variable initialization by
providing a `cmake_initialize_per_config_variable(<PREFIX> <DOCSTRING>)`
function.
This function initializes a `<PREFIX>` cache variable from `<PREFIX>_INIT`
and unless the `CMAKE_NOT_USING_CONFIG_FLAGS` variable is defined, does
the same with `<PREFIX>_<CONFIG>` from `<PREFIX>_<CONFIG>_INIT` for every
`<CONFIG>` in `CMAKE_CONFIGURATION_TYPES` for multi-config generators or
`CMAKE_BUILD_TYPE` for single-config generators.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Polyfill CMakeInitializeConfigs
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Tweak condition for when to use bundled FindCUDA support.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
* Comment out include_guard.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
|
* Always build local protobuf library with -fPIC
* .
|
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
|
* Continuation of https://github.com/caffe2/caffe2/pull/2306 and based on Yangqing's PR at https://github.com/caffe2/caffe2/pull/2326
* Build caffe2_protos as a static library and link it whole into libcaffe2.so.
* For protobuf::libprotobuf, only link it into libcaffe2_protos (and hence libcaffe2.so), but not into any downstream library. This avoids manipulating protobuf objects across DLL boundaries.
* After the above, linking will fail with a complaint that fixed_address_empty_string is not found. This is because we compile protobuf with hidden visibility, and the generated caffe2.pb.h has an inline function that invokes protobuf's inline function GetEmptyStringAlreadyInited().
* Added sed-like commands to rewrite the generated header to use caffe2::GetEmptyStringAlreadyInited() instead, and implemented a function in proto_utils.cc that essentially routes the call to protobuf's internal one (sketched below). The reason this works is that caffe2::GetEmptyStringAlreadyInited() is visible globally, and libcaffe2.so can see the real protobuf one. This ensures we always call protobuf functions that live inside libcaffe2.so.
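The routing shim described above would look roughly like the following. This is a sketch, but ::google::protobuf::internal::GetEmptyStringAlreadyInited() is the real protobuf-internal entry point that generated code normally inlines.

    // proto_utils.cc (sketch): a globally visible symbol inside libcaffe2.so
    // that forwards to protobuf's hidden-visibility copy of the function.
    #include <string>
    #include <google/protobuf/generated_message_util.h>

    namespace caffe2 {
    const ::std::string& GetEmptyStringAlreadyInited() {
      // Resolves against the libprotobuf objects linked into libcaffe2.so,
      // so every caller of the rewritten caffe2.pb.h ends up here.
      return ::google::protobuf::internal::GetEmptyStringAlreadyInited();
    }
    }  // namespace caffe2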
|
* caffe2-onnx frontend
* Remove Python part of the conversion code
* nit
* convert more ops
* Address comments
|
The thread pool called cpuinfo_get_processors_count() without initializing cpuinfo. Only by luck did this not make Caffe2 single-threaded: the thread pool is initialized after NNPACK, and NNPACK initializes cpuinfo itself.
This commit also updates cpuinfo to a version that aborts with a fatal error if it is used uninitialized.
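A sketch of the corrected call pattern, using the real cpuinfo API (cpuinfo_initialize() and cpuinfo_get_processors_count()); the surrounding helper is illustrative, not the actual threadpool code.

    #include <cpuinfo.h>
    #include <cstdio>

    // Hypothetical helper: decide how many threads the pool should use.
    int threadpool_size() {
      // cpuinfo must be initialized before any query; before this fix the
      // count was read without initialization and was correct only by luck.
      if (!cpuinfo_initialize()) {
        std::fprintf(stderr, "cpuinfo initialization failed\n");
        return 1;  // fall back to a single thread
      }
      return static_cast<int>(cpuinfo_get_processors_count());
    }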
|
* Wrap ShutdownProtobufLibrary
* Remove text_format.h header and only put the function in proto_utils.h
* ParseFromString returns bool
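For the last item, a minimal illustration of caller-side handling; TryLoadNet is a hypothetical helper, while caffe2::NetDef and protobuf's bool-returning MessageLite::ParseFromString are real.

    #include <string>
    #include "caffe2/proto/caffe2.pb.h"

    // Returns false instead of silently proceeding with a half-parsed proto.
    bool TryLoadNet(const std::string& serialized, caffe2::NetDef* net) {
      return net->ParseFromString(serialized);
    }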
|
* C++ version of ONNX->Caffe2 backend
* use namespace ONNX_NAMESPACE
* Fix Build
* Comments
* Change namespace from onnx_caffe2 to caffe2::onnx
|
CMake 3.2 is required to properly track dependencies in projects imported via ExternalProject_Add (the BUILD_BYPRODUCTS parameter).
Users on Ubuntu 14.04 LTS will need to install and use the cmake3 package for configuration. Users of other popular distributions generally have a recent enough CMake package.
|
* Caffe2 module update: move observers as well as binaries.
* Add threads linkage
* Add Threads dependency to public interface
|
Fix the OSS build broken after D6946982 by adding a CMake detection variable
(https://ci.pytorch.org/jenkins/job/caffe2-builds/job/py2-gcc4.9-ubuntu14.04-build/1343/console)
|
std::exception_ptr
|
* Do not show the Python library in the cmake summary, as we no longer link with libpython
* Show Python include dirs in the cmake summary
|