platform/upstream/caffeonacl

Branches: accepted/tizen_5.0_unified, accepted/tizen_5.5_unified, accepted/tizen_5.5_unified_mobile_hotfix, accepted/tizen_5.5_unified_wearable_hotfix, accepted/tizen_unified, armcl-v18.11, master, sandbox/daeinki/armcl-v18.08, sandbox/nmerinov/llvm, tizen, tizen_5.0, tizen_5.5, tizen_5.5_mobile_hotfix, tizen_5.5_tv, tizen_5.5_wearable_hotfix

Domain: Machine Learning / ML Framework; License: BSD-2-Clause
Inki Dae <inki.dae@samsung.com>
path: root/include/caffe
Age | Commit message | Author | Files | Lines (-/+)
2016-06-03 | Add level and stages to Net constructor | Luke Yeager | 1 | -0/+1
2016-06-01 | Add LSTMLayer and LSTMUnitLayer, with tests | Jeff Donahue | 1 | -0/+154
2016-06-01 | Add RNNLayer, with tests | Jeff Donahue | 1 | -0/+47
2016-06-01 | Add RecurrentLayer: an abstract superclass for other recurrent layer types | Jeff Donahue | 1 | -0/+187
2016-05-16 | Add cuDNN v5 support, drop cuDNN v3 support | Felix Abecassis | 4 | -3/+24
2016-05-04 | add parameter layer for learning any bottom | Jonathan L Long | 1 | -0/+45
2016-05-04 | Merge pull request #3995 from ZhouYzzz/python-phase | Jon Long | 1 | -0/+1
2016-04-20 | Don't set map_size=1TB in util/db_lmdb | Luke Yeager | 1 | -5/+8
2016-04-15 | Allow the python layer have attribute "phase" | ZhouYzzz | 1 | -0/+1
2016-04-14 | CropLayer: groom comments | Evan Shelhamer | 1 | -0/+9
2016-03-05 | Merge pull request #3590 from junshi15/GPUUtilities | Jon Long | 1 | -0/+5
2016-03-05 | Merge pull request #3588 from junshi15/P2psyncPrepare | Jon Long | 1 | -1/+4
2016-03-05 | split p2psync::run() | Jun Shi | 1 | -1/+4
2016-03-05 | Crop: fixes, tests and negative axis indexing. | max argus | 1 | -2/+2
2016-03-05 | Extend Crop to N-D, changed CropParameter. | max argus | 1 | -2/+20
2016-03-05 | add CropLayer: crop blob to another blob's dimensions with offsets | Jonathan L Long | 1 | -0/+49
2016-03-04 | add check and find GPU device utilities | Jun Shi | 1 | -0/+5
2016-02-26 | Deprecate ForwardPrefilled(), Forward(bottom, loss) in lieu of dropping | Evan Shelhamer | 1 | -0/+9
2016-02-25 | collect Net inputs from Input layers | Evan Shelhamer | 1 | -2/+11
2016-02-25 | drop Net inputs + Forward with bottoms | Evan Shelhamer | 1 | -27/+7
2016-02-25 | deprecate input fields and upgrade automagically | Evan Shelhamer | 1 | -0/+6
2016-02-25 | add InputLayer for Net input | Evan Shelhamer | 1 | -0/+44
2016-02-20 | tranpose parameter added to IP layer to support tied weights in an autoencode... | Youssef Kashef | 1 | -0/+1
2016-01-27 | Merge pull request #3022 from jeffdonahue/expose-param-display-names | Jeff Donahue | 1 | -0/+3
2016-01-26 | Merge pull request #3591 from jeffdonahue/scale-bias-layer | Evan Shelhamer | 2 | -0/+137
2016-01-26 | Merge pull request #3132 from bwilbertz/LastIterationLoss | Jeff Donahue | 1 | -0/+3
2016-01-22 | Separation and generalization of ChannelwiseAffineLayer into BiasLayer | Jeff Donahue | 3 | -103/+137
2016-01-22 | Version 1.0.0-rc3 | Luke Yeager | 1 | -0/+4
2016-01-22 | Add ChannelwiseAffine for batch norm | Dmytro Mishkin | 1 | -0/+103
2016-01-22 | Merge pull request #3388 from mohomran/exponential_linear_units | Evan Shelhamer | 1 | -0/+86
2016-01-04 | Exposing layer top and bottom names to python | philkr | 1 | -0/+12
2015-12-28 | add support for N-D dilated convolution | Fisher Yu | 2 | -8/+8
2015-12-28 | add support for 2D dilated convolution | Fisher Yu | 4 | -8/+23
2015-12-10 | Fix CuDNNConvolutionLayer for cuDNN v4 | Felix Abecassis | 1 | -0/+3
2015-12-04 | ELU layer with basic tests | Mohamed Omran | 1 | -0/+86
2015-12-02 | Merge pull request #3404 from BonsaiAI/remove-hamming-dist | Jon Long | 1 | -7/+0
2015-12-02 | Remove hamming_distance and popcount | Tea | 1 | -7/+0
2015-12-01 | Merge pull request #3285 from longjon/cuda-dead-cpp | Evan Shelhamer | 1 | -8/+2
2015-12-01 | dismantle layer headers | Evan Shelhamer | 69 | -3406/+4261
2015-11-28 | Secure temporary file creation | Tea | 1 | -7/+16
2015-11-28 | Secure implementation of MakeTempDir | T.E.A de Souza | 1 | -6/+15
2015-11-27 | Merge pull request #3320 from BonsaiAI/disambiguate-dtype | Ronghang Hu | 1 | -2/+3
2015-11-26 | replace snprintf with a C++98 equivalent | Tea | 1 | -0/+18
2015-11-22 | Merge pull request #3296 from cdoersch/normalize_batch | Jeff Donahue | 1 | -3/+8
2015-11-22 | Better normalization options for SoftmaxWithLoss layer. | Carl Doersch | 1 | -3/+8
2015-11-20 | Convert std::max args to Dtype | Tea | 1 | -2/+3
2015-11-19 | Fix MaxTopBlobs in Accuracy Layer | Ronghang Hu | 1 | -1/+1
2015-11-12 | Fix loss of last iteration when average_loss > 1 | Benedikt Wilbertz | 1 | -0/+3
2015-11-10 | Merge pull request #3295 from timmeinhardt/fix_issue_3274 | Evan Shelhamer | 1 | -7/+7
2015-11-10 | Merge pull request #3310 from gustavla/contrastive-doc-fix | Evan Shelhamer | 1 | -2/+2