| author | Adam Paszke <adam.paszke@gmail.com> | 2017-09-29 08:52:35 -0700 |
|---|---|---|
| committer | Adam Paszke <adam.paszke@gmail.com> | 2017-10-19 19:51:10 +0200 |
| commit | 98e67448fa78bd1bc6f05920ad03efceecc10066 (patch) | |
| tree | 50a690a709583559368145aacce9a460b184ce0c /docs | |
| parent | 3a4ca7a2696ac5f8d3a32108648f588bbc2b1eaa (diff) | |
Large Softmax and LogSoftmax refactor
- Cleaned up THNN and THCUNN code and kernels
- Improved THCUNN kernel performance by 5x, matching cuDNN
- Added support for computing softmax over an arbitrary dim (a short example follows below)
  NOTE: The default dim for 3D inputs is now 1 (used to be 0)
- Both functions now accept inputs with arbitrarily many dimensions
- Autograd functions no longer save the input (the backward pass only needs the output)
- Added cuDNN bindings for softmax, but they are unused as THCUNN
matches or even exceeds cuDNN performance
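To make the new dim semantics and the output-only backward concrete, here is a minimal sketch against the torch.nn.functional API. The shapes and variable names are illustrative only and are not part of this commit; it assumes a PyTorch build where softmax/log_softmax accept a dim argument.

```python
import torch
import torch.nn.functional as F

# Illustrative 3D input, e.g. (batch, classes, length) -- shapes are made up.
x = torch.randn(4, 10, 7, requires_grad=True)

# Softmax over an explicitly chosen dim: slices along that dim sum to 1.
y = F.softmax(x, dim=1)
print(torch.allclose(y.sum(dim=1), torch.ones(4, 7)))  # True

# log_softmax over the same dim recovers the same probabilities.
print(torch.allclose(F.log_softmax(x, dim=1).exp(), y))  # True

# The backward pass can be written purely in terms of the output y:
#   dx = y * (dy - sum(dy * y, dim)),
# which is why saving the input is unnecessary.
dy = torch.randn_like(y)
(dx_autograd,) = torch.autograd.grad(y, x, grad_outputs=dy)
dx_manual = y * (dy - (dy * y).sum(dim=1, keepdim=True))
print(torch.allclose(dx_autograd, dx_manual))  # True
```

The same identity holds for log_softmax, where dx = dy - exp(log_softmax(x)) * dy.sum(dim, keepdim=True), so neither function needs its input for the backward pass.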
Diffstat (limited to 'docs')
0 files changed, 0 insertions, 0 deletions