platform/upstream/caffeonacl
Branches:
  accepted/tizen_5.0_unified
  accepted/tizen_5.5_unified
  accepted/tizen_5.5_unified_mobile_hotfix
  accepted/tizen_5.5_unified_wearable_hotfix
  accepted/tizen_unified
  armcl-v18.11
  master
  sandbox/daeinki/armcl-v18.08
  sandbox/nmerinov/llvm
  tizen
  tizen_5.0
  tizen_5.5
  tizen_5.5_mobile_hotfix
  tizen_5.5_tv
  tizen_5.5_wearable_hotfix
Domain: Machine Learning / ML Framework
License: BSD-2-Clause
Maintainer: Inki Dae <inki.dae@samsung.com>
Log for path: root/include
Age | Commit message | Author | Files | Lines (-/+)
2015-03-25 | Merge pull request #2160 from TorosFanny/master | Jeff Donahue | 1 | -1/+1
2015-03-24 | replace cuDNN alphas and betas with coefficient values | Evan Shelhamer | 1 | -5/+9
2015-03-24 | switch to cuDNN R2 | Simon Layton | 4 | -23/+25
2015-03-19 | change resorce to resource | TorosFanny | 1 | -1/+1
2015-03-13 | shuffle data | wieschol | 1 | -0/+2
2015-03-11 | PReLU Layer and its tests | Takuya Narihira | 1 | -0/+84
2015-03-09 | AccuracyLayer: add ignore_label param | max argus | 1 | -0/+6
2015-03-09 | Fixup AccuracyLayer like SoftmaxLossLayer in #1970 -- fixes #2063 | Jeff Donahue | 1 | -0/+1
2015-03-07 | whitespace in common.hpp | Jonathan L Long | 1 | -1/+1
2015-03-06 | Fix references to plural names in API documentation | Christos Nikolaou | 2 | -6/+6
2015-03-04 | expose Solver::Restore() as public and Solver.restore() in pycaffe | Evan Shelhamer | 1 | -4/+4
2015-03-04 | include/caffe/common.hpp: add <climits> for INT_MAX (now in blob.hpp) | Jeff Donahue | 1 | -0/+1
2015-03-03 | Add option not to reshape to Blob::FromProto; use when loading Blobs | Jeff Donahue | 1 | -1/+1
2015-03-03 | SoftmaxLossLayer generalized like SoftmaxLayer | Jeff Donahue | 1 | -0/+2
2015-03-03 | SoftmaxLayer: generalized Blob axes | Jeff Donahue | 1 | -0/+3
2015-03-03 | SliceLayer: generalized Blob axes | Jeff Donahue | 1 | -5/+3
2015-03-03 | ConcatLayer: generalized Blob axes | Jeff Donahue | 1 | -9/+7
2015-03-03 | common_layers.hpp: remove unused "Blob col_bob_" | Jeff Donahue | 1 | -2/+0
2015-03-03 | FlattenLayer: generalized Blob axes | Jeff Donahue | 1 | -6/+0
2015-03-03 | Fix sparse GaussianFiller for new IPLayer weight axes | Jeff Donahue | 1 | -3/+2
2015-03-03 | add offset, {data,diff}_at nd blob accessors | Jeff Donahue | 1 | -2/+24
2015-03-03 | Add BlobShape message; use for Net input shapes | Jeff Donahue | 1 | -0/+1
2015-03-03 | Blobs are ND arrays (for N not necessarily equals 4). | Jeff Donahue | 1 | -21/+125
2015-02-19 | [docs] add check mode hint to CPU-only mode error | Evan Shelhamer | 1 | -1/+1
2015-02-19 | Merge pull request #1910 from philkr/encoded | Evan Shelhamer | 1 | -0/+2
2015-02-19 | Repeal revert of #1878 | Evan Shelhamer | 1 | -23/+13
2015-02-19 | added a force_encoded_color flag to the data layer. Printing a warning if ima... | philkr | 1 | -0/+2
2015-02-19 | Revert "Merge pull request #1878 from philkr/encoded" | Evan Shelhamer | 1 | -13/+23
2015-02-17 | comment fix: Decaf -> Caffe | Jonathan L Long | 1 | -1/+1
2015-02-17 | [pycaffe] fix bug in Python layer setup | Jonathan L Long | 1 | -1/+1
2015-02-17 | construct Net from file and phase | Evan Shelhamer | 1 | -1/+1
2015-02-17 | pass phase to transformer through layer | Evan Shelhamer | 2 | -4/+3
2015-02-17 | give phase to Net and Layer | Evan Shelhamer | 6 | -12/+15
2015-02-16 | [pycaffe] allow Layer to be extended from Python | Jonathan L Long | 1 | -0/+68
2015-02-16 | LayerRegistry uses shared_ptr instead of raw pointers | Jonathan L Long | 1 | -5/+6
2015-02-16 | Merge pull request #1878 from philkr/encoded | Evan Shelhamer | 1 | -23/+13
2015-02-16 | improve CMake build | Anatoly Baksheev | 1 | -1/+1
2015-02-16 | Cleaning up the encoded flag. Allowing any image (cropped or gray scale) to b... | philkr | 1 | -23/+13
2015-02-13 | Add gradient clipping -- limit L2 norm of parameter gradients | Jeff Donahue | 1 | -0/+1
2015-02-13 | add Net::param_owners accessor for param sharing info | Jeff Donahue | 1 | -0/+1
2015-02-13 | Blob: add scale_{data,diff} methods and tests | Jeff Donahue | 1 | -0/+5
2015-02-13 | SoftmaxWithLossLayer fix: takes exactly 2 bottom blobs (inherited from | Jeff Donahue | 1 | -3/+0
2015-02-09 | Fixes for CuDNN layers: only destroy handles if setup | Jeff Donahue | 3 | -6/+12
2015-02-07 | Allow using arrays with n_ * size_ > 2^31 | Dmitry Ulyanov | 1 | -1/+1
2015-02-06 | groom #1416 | Evan Shelhamer | 1 | -3/+1
2015-02-06 | removed needs_reshape_ and ChangeBatchSize is now set_batch_size | manuele | 1 | -2/+1
2015-02-06 | MemoryDataLayer now correctly consumes batch_size elements | manuele | 2 | -1/+3
2015-02-06 | MemoryDataLayer now accepts dynamic batch_size | manuele | 1 | -0/+1
2015-02-06 | Added opencv vector<Mat> to memory data layer with tests | manuele | 2 | -0/+15
2015-02-06 | Added GPU implementation of SoftmaxWithLossLayer. | Sagan Bolliger | 1 | -2/+5