platform/upstream/caffeonacl
Domain: Machine Learning / ML Framework. License: BSD-2-Clause.
Maintainer: Inki Dae <inki.dae@samsung.com>
path: src/caffe/layers/softmax_loss_layer.cpp
Age | Commit message | Author | Files | Lines
2015-03-08 | SoftmaxLossLayer fix: canonicalize input axis | Jeff Donahue | 1 | -1/+2
2015-03-03 | SoftmaxLossLayer generalized like SoftmaxLayer | Jeff Donahue | 1 | -19/+23
2015-02-16 | LayerRegistry uses shared_ptr instead of raw pointers | Jonathan L Long | 1 | -1/+1
2015-02-06 | Added GPU implementation of SoftmaxWithLossLayer. | Sagan Bolliger | 1 | -0/+4
2015-02-05 | Layer type is a string | Jeff Donahue | 1 | -3/+3
2015-01-29 | Merge pull request #1654 from longjon/softmax-missing-values | Evan Shelhamer | 1 | -9/+35
2015-01-21 | SoftmaxWithLossLayer: use CreateLayer so that a CuDNNSoftmaxLayer | Jeff Donahue | 1 | -0/+4
2014-12-30 | clean up formatting in SoftmaxLossLayer | Jonathan L Long | 1 | -2/+1
2014-12-30 | add spatial normalization option to SoftmaxLossLayer | Jonathan L Long | 1 | -2/+11
2014-12-30 | add missing value support to SoftmaxLossLayer | Jonathan L Long | 1 | -5/+23
2014-12-29 | remove SoftmaxLossLayer CPU_ONLY stubs, since there is no GPU version | Jonathan L Long | 1 | -5/+1
2014-11-23 | use DCHECK in SoftmaxLossLayer so that bounds checking has no perf penalty wi... | Jonathan L Long | 1 | -2/+2
2014-11-22 | in SoftmaxLossLayer, check label >= 0 in addition to upper bound | Jonathan L Long | 1 | -0/+1
2014-10-13 | move registration code to corresponding cpp files. | Yangqing Jia | 1 | -2/+1
2014-10-11 | Minor fixes. (1) cudnn.hpp uses CHECK_EQ internally, so it needs to include ... | Kevin James Matzen | 1 | -1/+3
2014-09-19 | fix types of (Layer)SetUp, Reshape, Forward, and Backward calls | Jonathan L Long | 1 | -14/+14
2014-09-18 | split off Reshape for loss layers | Jonathan L Long | 1 | -0/+7
2014-08-19 | softmax and softmax loss layers work across channels | Jonathan L Long | 1 | -6/+13
2014-08-13 | Generalize loss by allowing any top blob to be used as a loss in which | Jeff Donahue | 1 | -17/+11
2014-08-06 | LICENSE governs the whole project so strip file headers | Evan Shelhamer | 1 | -2/+0
2014-08-04 | Fix header alphabetization lint errors. | Jeff Donahue | 1 | -1/+1
2014-07-18 | reapply namespace change | Yangqing Jia | 1 | -4/+2
2014-07-17 | stub out GPU layer methods to crash loudly in CPU-only mode | Evan Shelhamer | 1 | -0/+4
2014-07-03 | replace all memcpy by caffe_copy | Evan Shelhamer | 1 | -1/+1
2014-06-26 | change Backward interface: propagate_down is a vector -- use to fix | Jeff Donahue | 1 | -12/+17
2014-06-25 | fix SOFTMAX_LOSS to work with loss top blob interface | Evan Shelhamer | 1 | -1/+13
2014-06-19 | Now Loss layers would return the loss in the top blob if requested | Sergio | 1 | -0/+3
2014-06-08 | layers declare their names and number of input/output blobs, and don't | Jeff Donahue | 1 | -2/+1
2014-03-27 | Standardize copyright, add root-level CONTRIBUTORS credit | Evan Shelhamer | 1 | -1/+1
2014-03-19 | revert unnecessary reordering of lines in softmaxwithlosslayer backward | Jeff Donahue | 1 | -1/+1
2014-03-19 | change specification of forward/backward function and fix layer | Jeff Donahue | 1 | -8/+14
2014-03-18 | lint, except for rand/rand_r | Evan Shelhamer | 1 | -1/+2
2014-02-26 | Splitting source files between CUDA and CPU code. | Eric Tzeng | 1 | -0/+59