Age | Commit message | Author | Files | Lines
---|---|---|---|---
2015-08-09 | Multi-GPU: split batches among GPUs and tree-reduce the gradients, so the effective batch size scales with the number of devices; detect machine topology (twin-GPU boards, P2P connectivity); track device in syncedmem (thanks @thatguymike); insert a solver callback for minimal code change; accept a device list for the caffe tool's gpu flag, e.g. '-gpu 0,1' or '-gpu all' (run on the default GPU if no ID is given); add a multi-GPU solver test; deterministic architecture for reproducible runs | Cyprien Noel | 1 | -0/+1
2014-10-02 | add factory header to caffe hpp | Yangqing Jia | 1 | -0/+1
2014-08-06 | LICENSE governs the whole project so strip file headers | Evan Shelhamer | 1 | -1/+0
2014-07-25 | include benchmark.hpp | Yangqing Jia | 1 | -3/+3
2014-03-27 | Standardize copyright, add root-level CONTRIBUTORS credit | Evan Shelhamer | 1 | -1/+1
2013-11-21 | remove remaining distributed solver stuff | Yangqing Jia | 1 | -1/+0
2013-11-06 | working asynchronous sgd code. may have errors. | Yangqing Jia | 1 | -0/+1
2013-10-17 | cleaning codes | Yangqing Jia | 1 | -0/+4
2013-10-15 | Reorganization of codes. | Yangqing Jia | 1 | -0/+15