author:    Peter Goldsborough <psag@fb.com>  2018-09-20 20:36:22 -0700
committer: Facebook Github Bot <facebook-github-bot@users.noreply.github.com>  2018-09-20 20:39:34 -0700
commit:    d712a7174193e47833a10fdcba7a00b9e94f46ac (patch)
tree:      262f5d01f58ea127b679e0be93e7221a791383d2 /tools
parent:    30521a37ad7cb3a13b73a2fb950e941939691c6d (diff)
Protobuf serialization (#11619)
Summary:
This PR serves two purposes:

1. Design an abstraction over a serialization scheme for C++ modules, optimizers, and tensors in general.
2. Add serialization to the ONNX/PyTorch proto format.

This is currently a rough prototype I coded up today, to get quick feedback.

For this I propose the following serialization interface within the C++ API:

```cpp
namespace torch {
namespace serialize {

class Reader {
 public:
  virtual ~Reader() = default;
  virtual void read(const std::string& key, Tensor& tensor, bool is_buffer = false) = 0;
  virtual void finish() {}
};

class Writer {
 public:
  virtual ~Writer() = default;
  virtual void write(const std::string& key, const Tensor& tensor, bool is_buffer = false) = 0;
  virtual void finish() {}
};

}} // namespace torch::serialize
```

There are then subclasses of these two for (1) cereal and (2) protobuf (called `DefaultWriter` and `DefaultReader` to hide the implementation details). See `torch/serialize/cereal.h` and `torch/serialize/default.h`. This abstraction and subclassing allows us to:

1. Provide a cereal-less serialization path that we can ship and iterate on going forward.
2. Provide no-friction backwards compatibility with existing C++ API uses, mainly StarCraft.

The user-facing API is (conceptually):

```cpp
void torch::save(const Module& module, Writer& writer);
void torch::save(const Optimizer& optimizer, Writer& writer);
void torch::read(Module& module, Reader& reader);
void torch::read(Optimizer& optimizer, Reader& reader);
```

with implementations for both optimizers and modules that write into the `Writer` and read from the `Reader`.

ebetica ezyang zdevito dzhulgakov

Pull Request resolved: https://github.com/pytorch/pytorch/pull/11619

Differential Revision: D9984664

Pulled By: goldsborough

fbshipit-source-id: e03afaa646221546e7f93bb8dfe3558e384a5847
Diffstat (limited to 'tools')
-rw-r--r--  tools/build_libtorch.py      | 3
-rwxr-xr-x  tools/build_pytorch_libs.sh  | 5
2 files changed, 0 insertions, 8 deletions
diff --git a/tools/build_libtorch.py b/tools/build_libtorch.py
index db698a2412..1f7d709d31 100644
--- a/tools/build_libtorch.py
+++ b/tools/build_libtorch.py
@@ -9,7 +9,6 @@ from setup_helpers.cuda import USE_CUDA
if __name__ == '__main__':
# Placeholder for future interface. For now just gives a nice -h.
parser = argparse.ArgumentParser(description='Build libtorch')
- parser.add_argument('--use-cereal', action='store_true')
options = parser.parse_args()
os.environ['BUILD_TORCH'] = 'ON'
@@ -25,8 +24,6 @@ if __name__ == '__main__':
command.append('--use-cuda')
if os.environ.get('USE_CUDA_STATIC_LINK', False):
command.append('--cuda-static-link')
- if options.use_cereal:
- command.append('--use-cereal')
command.append('caffe2')
sys.stdout.flush()
diff --git a/tools/build_pytorch_libs.sh b/tools/build_pytorch_libs.sh
index 37d816775f..01cb82f49c 100755
--- a/tools/build_pytorch_libs.sh
+++ b/tools/build_pytorch_libs.sh
@@ -22,7 +22,6 @@ USE_NNPACK=0
USE_MKLDNN=0
USE_GLOO_IBVERBS=0
CAFFE2_STATIC_LINK_CUDA=0
-TORCH_USE_CEREAL=0
RERUN_CMAKE=1
while [[ $# -gt 0 ]]; do
case "$1" in
@@ -47,9 +46,6 @@ while [[ $# -gt 0 ]]; do
--cuda-static-link)
CAFFE2_STATIC_LINK_CUDA=1
;;
- --use-cereal)
- TORCH_USE_CEREAL=1
- ;;
*)
break
;;
@@ -194,7 +190,6 @@ function build() {
-DTHCUNN_SO_VERSION=1 \
-DTHD_SO_VERSION=1 \
-DUSE_CUDA=$USE_CUDA \
- -DTORCH_USE_CEREAL=$TORCH_USE_CEREAL \
-DBUILD_EXAMPLES=OFF \
-DBUILD_TEST=$BUILD_TEST \
-DNO_NNPACK=$((1-$USE_NNPACK)) \