author    Adam Paszke <adam.paszke@gmail.com>  2017-05-21 12:20:50 +0200
committer Adam Paszke <adam.paszke@gmail.com>  2017-05-21 12:20:50 +0200
commit    0c5598c66819f375b35b414c7ffe65cb990ef31f (patch)
tree      cd195895394a4344951fb766988fd5dc76230ead  /README.md
parent    feaee29bfe1a021d2368c197f650b70926577ba4 (diff)
Update build status matrix
Diffstat (limited to 'README.md')
-rw-r--r--  README.md  19
1 file changed, 10 insertions, 9 deletions
diff --git a/README.md b/README.md
index fc0f5406b7..9de45a30f2 100644
--- a/README.md
+++ b/README.md
@@ -20,11 +20,12 @@ We are in an early-release Beta. Expect some adventures and rough edges.
- [Releases and Contributing](#releases-and-contributing)
- [The Team](#the-team)
-| System | Python | Status |
+| System | 2.7 | 3.5 |
| --- | --- | --- |
-| Linux CPU | 2.7.8, 2.7, 3.5, nightly | [![Build Status](https://travis-ci.org/pytorch/pytorch.svg?branch=master)](https://travis-ci.org/pytorch/pytorch) |
-| Linux GPU | 2.7 | [![Build Status](http://build.pytorch.org:8080/buildStatus/icon?job=pytorch-master-py2)](https://build.pytorch.org/job/pytorch-master-py2) |
-| Linux GPU | 3.5 | [![Build Status](http://build.pytorch.org:8080/buildStatus/icon?job=pytorch-master-py3)](https://build.pytorch.org/job/pytorch-master-py3) |
+| Linux CPU | [![Build Status](https://travis-ci.org/pytorch/pytorch.svg?branch=master)](https://travis-ci.org/pytorch/pytorch) | [![Build Status](https://travis-ci.org/pytorch/pytorch.svg?branch=master)](https://travis-ci.org/pytorch/pytorch) |
+| Linux GPU | [![Build Status](http://build.pytorch.org:8080/buildStatus/icon?job=pytorch-master-py2-linux)](https://build.pytorch.org/job/pytorch-master-py2-linux) | [![Build Status](http://build.pytorch.org:8080/buildStatus/icon?job=pytorch-master-py3-linux)](https://build.pytorch.org/job/pytorch-master-py3-linux) |
+| macOS CPU | [![Build Status](http://build.pytorch.org:8080/buildStatus/icon?job=pytorch-master-py2-osx-cpu)](https://build.pytorch.org/job/pytorch-master-py2-osx-cpu) | [![Build Status](http://build.pytorch.org:8080/buildStatus/icon?job=pytorch-master-py3-osx-cpu)](https://build.pytorch.org/job/pytorch-master-py3-osx-cpu) |
+
## More about PyTorch
@@ -116,9 +117,9 @@ We hope you never spend hours debugging your code because of bad stack traces or
### Fast and Lean
-PyTorch has minimal framework overhead. We integrate acceleration libraries
-such as Intel MKL and NVIDIA (CuDNN, NCCL) to maximize speed.
-At the core, its CPU and GPU Tensor and Neural Network backends
+PyTorch has minimal framework overhead. We integrate acceleration libraries
+such as Intel MKL and NVIDIA (CuDNN, NCCL) to maximize speed.
+At the core, its CPU and GPU Tensor and Neural Network backends
(TH, THC, THNN, THCUNN) are written as independent libraries with a C99 API.
They are mature and have been tested for years.
@@ -204,7 +205,7 @@ nvidia-docker run --rm -ti --ipc=host pytorch-cudnnv6
```
Please note that PyTorch uses shared memory to share data between processes, so if torch multiprocessing is used (e.g.
for multithreaded data loaders), the default shared memory segment size that the container runs with is not enough, and you
-should increase shared memory size either with --ipc=host or --shm-size command line options to nvidia-docker run.
+should increase shared memory size either with --ipc=host or --shm-size command line options to nvidia-docker run.
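As a sketch of the two options the note above refers to, using the `pytorch-cudnnv6` image from the earlier snippet (the `8g` size is an illustrative value, not from this commit; size it to your data-loader workload):

```shell
# Option 1: share the host's IPC namespace, so the container
# gets the host's full shared memory segment.
nvidia-docker run --rm -ti --ipc=host pytorch-cudnnv6

# Option 2: keep an isolated IPC namespace but enlarge /dev/shm
# explicitly (8g is an assumed example size).
nvidia-docker run --rm -ti --shm-size=8g pytorch-cudnnv6
```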
## Getting Started
@@ -222,7 +223,7 @@ Three pointers to get you started:
## Releases and Contributing
-PyTorch has a 90 day release cycle (major releases).
+PyTorch has a 90 day release cycle (major releases).
Its current state is Beta, and we expect no obvious bugs. Please let us know if you encounter a bug by [filing an issue](https://github.com/pytorch/pytorch/issues).
We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion.