author     Zach DeVito <zdevito@fb.com>  2017-06-20 19:15:26 -0700
committer  Edward Z. Yang <ezyang@mit.edu>  2017-11-02 19:53:36 -0400
commit     e32210658de4dc01d95b1fdfe95725d1154950e9 (patch)
tree       f85423d870e7544a93caf05173a7250395d3bf6c /aten/README.md
parent     7a5987123fe8b758a45fd83cf9fc6407ae44ce95 (diff)
add readme and generated files for Type/Tensor/Functions to a doc folder to make it possible to view headers without building the library
Diffstat (limited to 'aten/README.md')
-rw-r--r--  aten/README.md  121
1 file changed, 121 insertions(+), 0 deletions(-)
diff --git a/aten/README.md b/aten/README.md
new file mode 100644
index 0000000000..6d2ddd5f50
--- /dev/null
+++ b/aten/README.md
@@ -0,0 +1,121 @@
+# ATen: A TENsor library
+
+ATen is a simple tensor library that exposes the Tensor operations in Torch
+and PyTorch directly in C++11. The wrapper respects the semantics of operators
+in PyTorch, except for minor details due to differences between C++ and Python
+in the way default arguments are handled. See the [documentation for tensors](http://pytorch.org/docs/tensors.html) in PyTorch for what these operations do.
+ATen's API is auto-generated from the same declarations PyTorch uses, so the
+two APIs will track each other over time.
+
+Tensor types are resolved dynamically, so the API is generic and does not
+include templates. That is, there is one `Tensor` type. It can hold a
+CPU or CUDA Tensor, and the tensor may contain Doubles, Floats, Ints, etc.
+This design makes it easy to write generic code without templating everything.
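+
+For example, a minimal sketch (assuming `using namespace at;` and the
+generated `add`/`mul` methods) of a generic function that works for any
+backend or scalar type without templates:
+
+```c++
+// works for CPU or CUDA tensors holding any scalar type,
+// because each Tensor carries its type at runtime
+Tensor double_and_shift(const Tensor & x, const Tensor & y) {
+  return x.mul(2).add(y);
+}
+```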
+
+See the _generated_ [`Tensor.h` file](doc/Tensor.h) and [`Functions.h` file](doc/Functions.h) for the provided API. Excerpt:
+```c++
+Tensor atan2(const Tensor & other) const;
+Tensor & atan2_(const Tensor & other);
+Tensor pow(Scalar exponent) const;
+Tensor pow(const Tensor & exponent) const;
+Tensor & pow_(Scalar exponent);
+Tensor & pow_(const Tensor & exponent);
+Tensor lerp(const Tensor & end, Scalar weight) const;
+Tensor & lerp_(const Tensor & end, Scalar weight);
+Tensor histc() const;
+Tensor histc(int64_t bins) const;
+Tensor histc(int64_t bins, Scalar min) const;
+Tensor histc(int64_t bins, Scalar min, Scalar max) const;
+```
+
+In-place operations are also provided, and are always suffixed with `_` to indicate that they modify the Tensor.
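+
+For example (a minimal sketch using the API above):
+
+```c++
+Tensor x = CPU(kFloat).ones({2, 2});
+Tensor y = x.add(x); // out-of-place: returns a new Tensor, x is unchanged
+x.add_(x);           // in-place: modifies x directly
+```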
+
+### Installation
+
+TH/THC/THNN/THCUNN are provided (as git subtrees), so the repo is standalone. You will need a C++11 compiler, CMake, and the PyYAML Python package.
+```
+# Install pyyaml, used by the Python code generation to read API declarations
+
+# OSX: if you don't have pip
+sudo easy_install pip
+# Ubuntu: if you don't have pip
+apt-get -y install python-pip
+
+# if you don't have pyyaml
+sudo pip install pyyaml
+
+mkdir build
+cd build
+cmake .. -DCMAKE_INSTALL_PREFIX=/where/you/want # specify your dest directory
+make install
+```
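+
+To sanity-check the install, a minimal sketch (assuming the library is
+installed as `libATen`; adjust the paths to your install prefix):
+
+```c++
+// main.cpp
+// build: g++ -std=c++11 main.cpp -I<prefix>/include -L<prefix>/lib -lATen
+#include <ATen/ATen.h>
+#include <iostream>
+
+int main() {
+  at::Tensor a = at::CPU(at::kFloat).ones({2, 2});
+  std::cout << a << std::endl; // prints a 2x2 tensor of ones
+  return 0;
+}
+```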
+
+### Example usage
+
+Here is a simple example; again, the syntax follows Torch semantics.
+
+```c++
+using namespace at; // assumed in the following
+
+Tensor d = CPU(kFloat).ones({3, 4});
+Tensor r = CPU(kFloat).zeros({3, 4});
+for(auto i = 0; i < 100000; i++) {
+ r = r.add(d);
+ // equivalently
+ r = r + d;
+ // or
+ r += d;
+}
+```
+
+Want this running on the GPU?
+```c++
+using namespace at; // assumed in the following
+
+Tensor d = CUDA(kFloat).ones({3, 4});
+Tensor r = CUDA(kFloat).zeros({3, 4});
+for(auto i = 0; i < 100000; i++) {
+ r = r.add(d);
+ // equivalently
+ r = r + d;
+ // or
+ r += d;
+}
+```
+
+Expressions like `CUDA(kFloat)` are first-class `at::Type` objects that represent
+the type of a Tensor and are used to create Tensors when their type cannot be
+inferred. See the _generated_ [Type header](doc/Type.h) for its API.
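+
+Since Types are first-class objects, they can also be passed to functions,
+so a single piece of code can construct Tensors for whichever backend it is
+given. A minimal sketch (`make_ones` is just an illustrative helper, not part
+of the API):
+
+```c++
+Tensor make_ones(Type & t) {
+  return t.ones({3, 4}); // CPU or CUDA Tensor, depending on t
+}
+
+Tensor c = make_ones(CPU(kFloat));
+Tensor g = make_ones(CUDA(kFloat));
+```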
+
+See more in [sample files](src/ATen/test).
+
+### Creating your kernel
+
+It is easy to create new kernels, thanks to the `dispatch<>()` templated function. Example:
+```c++
+// a simple sum kernel (for CPU only)
+template<typename T>
+struct sum_op {
+  // dispatch handles variable arguments for you;
+  // the method matching the Tensor's backend is selected at runtime
+  Tensor CPU(const Type & t, Tensor & x_)
+  {
+    Tensor x = x_.contiguous(); // make sure the data is laid out contiguously
+    auto x_p = x.data<T>();     // raw pointer to the underlying T array
+    int64_t size = x.numel();
+    T sum = 0;
+    for(int64_t i = 0; i < size; i++) {
+      sum += x_p[i];
+    }
+    return sum; // relies on conversion of the scalar result to a Tensor
+  }
+  Tensor CUDA(const Type & t, Tensor & x) {
+    throw std::invalid_argument("device not supported");
+  }
+};
+
+Tensor a = CPU(kFloat).rand({3, 7});
+std::cout << a << std::endl;
+std::cout << dispatch<sum_op>(a.type(), a) << " == " << a.sum() << std::endl;
+```
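+
+Here `dispatch<sum_op>(a.type(), a)` uses the passed `Type` to select the
+`CPU` or `CUDA` method and to instantiate the template parameter `T` with the
+matching scalar type (`float` in this case).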