path: root/include
Age  Commit message  (Author; files changed, lines -/+)
2015-08-06  Merge pull request #2462 from longjon/correct-python-exceptions  (Evan Shelhamer; 1 file, -25/+4)
Handle Python layer exceptions correctly
2015-07-29  [docs] fix contrastive loss eq  (Evan Shelhamer; 1 file, -4/+4)
Make the documented equation match the correct implementation of the `max(margin - d, 0)^2` term in the loss; see #2321.
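For reference, the corrected loss as documented (the standard contrastive loss of Hadsell et al., with d_n the Euclidean distance between the n-th pair and y_n the similarity label; transcribed from the fixed docs, not a new derivation):

    E = \frac{1}{2N} \sum_{n=1}^{N} \left[ y_n\, d_n^2 + (1 - y_n)\, \max(\mathit{margin} - d_n,\ 0)^2 \right]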
2015-07-14  tiny fix in Layer::Backward documentation  (Youssef Kashef; 1 file, -1/+1)
2015-06-30  bilinear filler -- useful for interpolation with DeconvolutionLayer  (Takuya Narihira; 1 file, -0/+56)
This filler is a convenience for interpolating with DeconvolutionLayer or smoothing + downsampling with ConvolutionLayer for stride > 1.
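A minimal standalone sketch of the kernel this filler writes (variable names and the printing harness are illustrative, not Caffe's code):

    #include <cmath>
    #include <cstdio>

    // Bilinear kernel: weight(x, y) = (1 - |x/f - c|) * (1 - |y/f - c|),
    // with f = ceil(k / 2) and c = (2f - 1 - f % 2) / (2f).
    int main() {
      const int k = 4;            // kernel size, e.g. for 2x upsampling
      const int f = (k + 1) / 2;  // ceil(k / 2)
      const double c = (2.0 * f - 1 - f % 2) / (2.0 * f);
      for (int y = 0; y < k; ++y) {
        for (int x = 0; x < k; ++x) {
          const double w = (1 - std::fabs(x / double(f) - c)) *
                           (1 - std::fabs(y / double(f) - c));
          std::printf("%6.4f ", w);  // e.g. 0.25 0.75 0.75 0.25 per row edge
        }
        std::printf("\n");
      }
      return 0;
    }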
2015-06-05  Split db.hpp into leveldb_db.hpp and lmdb_db.hpp  (Sergio Guadarrama; 3 files, -136/+164)
2015-06-02  Add ReductionLayer to reduce any number of "tail" axes to a scalar value  (Jeff Donahue; 1 file, -0/+45)
Currently implements operations SUM, MEAN, ASUM (sum of absolute values), and SUMSQ (sum of squares)
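A standalone illustration of the four operations over one "tail" span (not the layer's actual code):

    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main() {
      const std::vector<double> tail = {1.0, -2.0, 3.0, -4.0};
      double sum = 0, asum = 0, sumsq = 0;
      for (double v : tail) {
        sum   += v;             // SUM
        asum  += std::fabs(v);  // ASUM: sum of absolute values
        sumsq += v * v;         // SUMSQ: sum of squares
      }
      const double mean = sum / tail.size();  // MEAN
      std::printf("SUM=%g MEAN=%g ASUM=%g SUMSQ=%g\n", sum, mean, asum, sumsq);
      return 0;
    }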
2015-06-02  Add LogLayer  (Jeff Donahue; 3 files, -0/+73)
2015-06-02  FilterLayer cleanup and bugfix for GPU backward  (Jeff Donahue; 1 file, -3/+3)
- caffe_set -> caffe_gpu_set (backward was segfaulting before)
- remove uses of 'offset' (to support >4D blobs)
- change var++ -> ++var (per Google style guide)
- cleanup comments/whitespace
2015-06-02  Filter Layer implemented  (manuele; 1 file, -0/+63)
2015-05-29  Merge pull request #2511 from flx42/fix_illegal_mode_changes  (Evan Shelhamer; 1 file, -13/+15)
Fix invalid mode changes during tests
2015-05-29  Merge pull request #1977 from shelhamer/accum-grad  (Evan Shelhamer; 2 files, -2/+6)
Decouple the computational batch size and minibatch size by accumulating gradients
2015-05-29  Merge pull request #2410 from sguada/datum_transform  (Jeff Donahue; 1 file, -0/+36)
Datum transform
2015-05-28  directly normalize accumulated gradients  (Evan Shelhamer; 1 file, -0/+1)
`SGDSolver::Normalize()` normalizes accumulated gradients by scaling inversely to the accumulation as `1 / iter_size`. This fixes accumulation for AdaGrad and is more obvious than fooling with rates and decays in 55585f5.
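A sketch of that step under the same convention (standalone, not SGDSolver's actual code):

    #include <vector>

    // After accumulating gradients over iter_size forward/backward passes,
    // scale the accumulated diff by 1 / iter_size before regularization
    // and the optimizer-specific update.
    void normalize_diff(std::vector<float>& diff, int iter_size) {
      if (iter_size == 1) return;  // no accumulation, nothing to undo
      const float scale = 1.0f / iter_size;
      for (float& g : diff) g *= scale;
    }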
2015-05-27  zero-init param diffs in gradient checker  (Jonathan L Long; 1 file, -2/+5)
2015-05-27  Solver::MakeUpdate() -> Solver::ApplyUpdate  (Evan Shelhamer; 1 file, -4/+4)
Designate `Solver::ApplyUpdate()` as the core method to compute and apply parameter updates given the current state of the Net. Make `Solver::ComputeUpdateValue()` a subordinate call overloaded by the `SGDSolver`s to take care of optimization algorithm details.
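Paraphrased call structure (an illustrative skeleton, not Caffe's exact classes):

    // Solver owns the update sequence; subclasses fill in the optimizer math.
    class Solver {
     public:
      virtual ~Solver() {}
      void ApplyUpdate() {
        for (int id = 0; id < num_params_; ++id) {
          Normalize(id);           // scale accumulated gradients (1 / iter_size)
          Regularize(id);          // add weight decay to the diff
          ComputeUpdateValue(id);  // optimizer-specific step, overloaded by SGDSolver
        }
        UpdateNet();               // apply the finished diffs to the weights
      }
     protected:
      virtual void Normalize(int id) {}
      virtual void Regularize(int id) {}
      virtual void ComputeUpdateValue(int id) = 0;
      virtual void UpdateNet() {}
      int num_params_ = 0;
    };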
2015-05-26  Refactor solvers' regularization and logging code  (Cyprien Noel; 1 file, -5/+7)
2015-05-26  Add classes GPUDeviceTest and CPUDeviceTest  (Felix Abecassis; 1 file, -0/+8)
These new classes can be used to implement test cases that run only on the GPU or only on the CPU. The goal is to move all calls to Caffe::set_mode() inside the test framework, to discourage any test from changing the mode halfway through execution, which is documented to be illegal.
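A minimal gtest sketch of the pattern (the Caffe call is commented out so the snippet stands alone; fixture and test names are illustrative):

    #include <gtest/gtest.h>

    // The fixture, not the test body, selects the device mode, so no test
    // can flip modes halfway through a run.
    class CPUDeviceTest : public ::testing::Test {
     protected:
      void SetUp() override { /* Caffe::set_mode(Caffe::CPU); */ }
    };

    TEST_F(CPUDeviceTest, NeverCallsSetModeItself) {
      EXPECT_TRUE(true);  // device-specific assertions would go here
    }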
2015-05-26  Merge pull request #1946 from nickcarlevaris/msra_init  (Evan Shelhamer; 1 file, -9/+62)
Add MSRAFiller, an Xavier-like filler designed for use with ReLUs
2015-05-26  include comment on Saxe and sqrt(2) scaling factor  (Evan Shelhamer; 1 file, -0/+3)
Although different and independent, the derivation of Saxe et al. may be of interest with regard to the scaling factor.
2015-05-26  Added MSRAFiller, an Xavier-like filler designed for use with ReLUs  (Nick Carlevaris-Bianco; 1 file, -9/+59)
...instead of tanh. Based on He et al., "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification," 2015.
- add a VarianceNorm option to FillerParameter, which allows one to normalize by fan_in, fan_out, or their average
- update XavierFiller to use the VarianceNorm option (default behavior unchanged)
- add tests for MSRAFiller and XavierFiller
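For reference, per He et al., the filler draws weights from a zero-mean Gaussian whose standard deviation is set by the chosen fan:

    w \sim \mathcal{N}(0, \sigma^2), \qquad \sigma = \sqrt{2 / n}, \qquad
    n \in \{\, n_{\mathrm{in}},\ n_{\mathrm{out}},\ (n_{\mathrm{in}} + n_{\mathrm{out}}) / 2 \,\}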
2015-05-26  Refactor types FloatCPU and DoubleCPU into a new type CPUDevice<T>  (Felix Abecassis; 1 file, -17/+11)
Similarly, FloatGPU and DoubleGPU are replaced by a new type GPUDevice<T>.
2015-05-16  Merge pull request #2466 from ducha-aiki/mvn-less  (Jeff Donahue; 1 file, -0/+1)
Remove unnecessary variance computation from backward in MVN layer
2015-05-15  Merge pull request #2095 from mtamburrano/skip_propagate_down_param  (Jeff Donahue; 1 file, -0/+3)
Added param skip_propagate_down to LayerParameter
2015-05-15  Remove unnecessary variance computation from backward in MVN layer  (Dmytro Mishkin; 1 file, -0/+1)
2015-05-15  Added "propagate_down" param to LayerParameter  (manuele; 1 file, -0/+3)
2015-05-14  [pycaffe] correct exceptions from Python; remove PyErr_Print  (Jonathan L Long; 1 file, -25/+4)
Previously, PyErr_Print was used to print Python exceptions. This has the side effect of clearing the exception, which results in (an additional) SystemError in the Python interpreter. Exception printing from the caffe binary tool is re-added in a future commit.
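A standalone illustration of the side effect (CPython C API only; not Caffe code):

    #include <Python.h>

    int main() {
      Py_Initialize();
      PyErr_SetString(PyExc_RuntimeError, "boom");
      PyErr_Print();  // prints the traceback AND clears the error indicator
      if (!PyErr_Occurred()) {
        // The exception is gone: a caller that still signals failure now
        // triggers "SystemError: error return without exception set".
      }
      Py_Finalize();
      return 0;
    }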
2015-05-14  Add ReshapeParameter axis and num_axes to reshape only a particular span of the input shape  (Jeff Donahue; 1 file, -2/+0)
2015-05-14  ReshapeLayer fixups for ND blobs  (Jeff Donahue; 1 file, -13/+18)
2015-05-14  Added a Reshape layer for copying-free modification of blob dimensions  (Simon Safar; 1 file, -0/+35)
2015-05-14  Merge pull request #2177 from pgao/spp_layer  (Jeff Donahue; 1 file, -0/+66)
Spatial Pyramid Pooling Layer
2015-05-14  Spatial Pyramid Pooling Layer  (PETER_GAO; 1 file, -0/+66)
2015-05-14  Merge pull request #2115 from longjon/bogus-cross-entropy-gpu  (Jeff Donahue; 1 file, -2/+0)
Remove bogus implementation of SigmoidCrossEntropyLossLayer's Forward_gpu
2015-05-14  remove bogus implementation of SigmoidCrossEntropyLossLayer::Forward_gpu  (Jonathan L Long; 1 file, -2/+0)
It was a verbatim copy of Forward_cpu; there is no proper GPU implementation.
2015-05-14  Merge pull request #2168 from longjon/spurious-net-includes  (Jeff Donahue; 2 files, -2/+0)
Remove spurious inclusions of net.hpp
2015-05-14  Merge pull request #2165 from longjon/auto-reshape  (Jeff Donahue; 1 file, -0/+1)
Always call Layer::Reshape in Layer::Forward
2015-05-14  Merge pull request #2456 from longjon/python-layer-object  (Jeff Donahue; 1 file, -7/+6)
Use bp::object instead of PyObject* for self in Python layer
2015-05-13  remove superfluous empty destructors  (Jonathan L Long; 1 file, -2/+0)
The removed definitions do nothing; these classes already have virtual destructors inherited from their respective base classes.
2015-05-13  [pycaffe] use bp::object instead of PyObject* for self in Python layer  (Takuya Narihira; 1 file, -7/+6)
This simply allows direct use of the nicer bp::object interface.
2015-05-05  Merge pull request #2414 from tnarihi/fix-prelu-redanduncy  (Jeff Donahue; 1 file, -1/+2)
Fix #2406: wrong thread blocks setting for PReLU
2015-05-04  Modify for better readability regarding temporary buffer for backward computation  (Takuya Narihira; 1 file, -1/+2)
2015-04-26  fix a typo: GFLAGS_GFAGS_H_ -> GFLAGS_GFLAGS_H_  (gdh1995; 1 file, -1/+1)
2015-04-08  Added InferBlobShape to data_transformer  (Sergio Guadarrama; 1 file, -0/+36)
2015-03-25  Merge pull request #2160 from TorosFanny/master  (Jeff Donahue; 1 file, -1/+1)
change resorce to resource
2015-03-24  replace cuDNN alphas and betas with coefficient values  (Evan Shelhamer; 1 file, -5/+9)
Give cuDNN {0, 1} constants for controlling accumulation through the alpha and beta coefficients.
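The contract being configured, shown without cuDNN itself (a toy stand-in, not the cuDNN API): each routine computes y = alpha * result + beta * y, so {alpha, beta} = {1, 0} overwrites the output and beta = 1 accumulates into it.

    #include <cstdio>

    void axpby_like(float alpha, float result, float beta, float* y) {
      *y = alpha * result + beta * (*y);  // the cuDNN accumulation contract
    }

    int main() {
      float y = 5.0f;
      axpby_like(1.0f, 2.0f, 0.0f, &y);  // beta = 0: overwrite, y == 2
      std::printf("overwrite:  %g\n", y);
      axpby_like(1.0f, 2.0f, 1.0f, &y);  // beta = 1: accumulate, y == 4
      std::printf("accumulate: %g\n", y);
      return 0;
    }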
2015-03-24  switch to cuDNN R2  (Simon Layton; 4 files, -23/+25)
2015-03-19  remove spurious net.hpp includes  (Jonathan L Long; 2 files, -2/+0)
2015-03-19  always call Layer::Reshape in Layer::Forward  (Jonathan L Long; 1 file, -0/+1)
There are no cases where Forward is called without Reshape, so we can simplify the call structure.
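A paraphrase of the resulting call structure (illustrative skeleton, not Caffe's layer.hpp):

    #include <vector>

    template <typename Dtype>
    class Layer {
     public:
      virtual ~Layer() {}
      // Forward reshapes unconditionally before dispatching, so callers
      // never invoke Reshape themselves.
      Dtype Forward(const std::vector<Dtype>& bottom, std::vector<Dtype>* top) {
        Reshape(bottom, top);             // always called, every Forward
        return Forward_cpu(bottom, top);  // (GPU dispatch elided)
      }
     protected:
      virtual void Reshape(const std::vector<Dtype>& bottom,
                           std::vector<Dtype>* top) = 0;
      virtual Dtype Forward_cpu(const std::vector<Dtype>& bottom,
                                std::vector<Dtype>* top) = 0;
    };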
2015-03-19  change resorce to resource  (TorosFanny; 1 file, -1/+1)
2015-03-13  shuffle data  (wieschol; 1 file, -0/+2)
2015-03-11  PReLU Layer and its tests  (Takuya Narihira; 1 file, -0/+84)
Described in Kaiming He et al., "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification," arXiv 2015. Below are the commit message histories I kept while developing:
- PReLULayer takes FillerParameter for init
- PReLU testing consistency with ReLU
- Fix: PReLU test consistency check
- PReLU tests in-place computation, and it failed on GPU
- Fix: PReLU in-place backward on GPU
- PReLULayer called an incorrect API for copying data (caffe_gpu_memcpy); the first argument of `caffe_gpu_memcpy` should be the size of the memory region in bytes. Modified to use `caffe_copy` instead.
- Fix: style errors
- Fix: number of axes of input blob must be >= 2; use 1D blob, zero-D blob
- Rename: hw -> dim
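The activation itself, as defined in the paper, in a standalone sketch (a single fixed slope for brevity; the layer learns one per channel):

    #include <algorithm>
    #include <cstdio>

    int main() {
      const float a = 0.25f;  // negative slope; the paper initializes a = 0.25
      const float xs[] = {-2.0f, -0.5f, 0.0f, 1.5f};
      for (float x : xs) {
        // PReLU: f(x) = max(0, x) + a * min(0, x)
        const float y = std::max(0.0f, x) + a * std::min(0.0f, x);
        std::printf("f(%g) = %g\n", x, y);
      }
      return 0;
    }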