Age | Commit message | Author | Files | Lines | |
---|---|---|---|---|---|
2015-08-06 | Merge pull request #2462 from longjon/correct-python-exceptions | Evan Shelhamer | 1 | -25/+4 | |
Handle Python layer exceptions correctly | |||||
2015-07-29 | [docs] fix contrastive loss eq | Evan Shelhamer | 1 | -4/+4 | |
make documented equation match the correct implementation of the `max(margin - d, 0)^2` term in the loss. see #2321 | |||||
2015-07-14 | tiny fix in Layer::Backward documentation | Youssef Kashef | 1 | -1/+1 | |
2015-06-30 | bilinear filler -- useful for interpolation with DeconvolutionLayer | Takuya Narihira | 1 | -0/+56 | |
This filler is a convenience for interpolating with DeconvolutionLayer or smoothing + downsampling with ConvolutionLayer for stride > 1. | |||||
2015-06-05 | Split db.hpp into leveldb_db.hpp and lmdb_db.hpp | Sergio Guadarrama | 3 | -136/+164 | |
2015-06-02 | Add ReductionLayer to reduce any number of "tail" axes to a scalar value | Jeff Donahue | 1 | -0/+45 | |
Currently implements operations SUM, MEAN, ASUM (sum of absolute values), and SUMSQ (sum of squares) | |||||
2015-06-02 | Add LogLayer | Jeff Donahue | 3 | -0/+73 | |
2015-06-02 | FilterLayer cleanup and bugfix for GPU backward | Jeff Donahue | 1 | -3/+3 | |
caffe_set -> caffe_gpu_set (backward was segfaulting before); remove uses of 'offset' (to support >4D blobs); change var++ -> ++var (per Google style guide); clean up comments/whitespace | |||||
2015-06-02 | Filter Layer implemented | manuele | 1 | -0/+63 | |
2015-05-29 | Merge pull request #2511 from flx42/fix_illegal_mode_changes | Evan Shelhamer | 1 | -13/+15 | |
Fix invalid mode changes during tests | |||||
2015-05-29 | Merge pull request #1977 from shelhamer/accum-grad | Evan Shelhamer | 2 | -2/+6 | |
Decouple the computational batch size and minibatch size by accumulating gradients | |||||
2015-05-29 | Merge pull request #2410 from sguada/datum_transform | Jeff Donahue | 1 | -0/+36 | |
Datum transform | |||||
2015-05-28 | directly normalize accumulated gradients | Evan Shelhamer | 1 | -0/+1 | |
`SGDSolver::Normalize()` normalizes accumulated gradients by scaling inversely to the accumulation as `1 / iter_size`. This fixes accumulation for AdaGrad and is more obvious than fooling with rates and decays in 55585f5. | |||||
2015-05-27 | zero-init param diffs in gradient checker | Jonathan L Long | 1 | -2/+5 | |
2015-05-27 | Solver::MakeUpdate() -> Solver::ApplyUpdate | Evan Shelhamer | 1 | -4/+4 | |
Designate `Solver::ApplyUpdate()` as the core method to compute and apply parameter updates given the current state of the Net. Make `Solver::ComputeUpdateValue()` a subordinate call overloaded by the `SGDSolver`s to take care of optimization algorithm details. | |||||
2015-05-26 | Refactor solvers regularization and logging code | Cyprien Noel | 1 | -5/+7 | |
2015-05-26 | Add classes GPUDeviceTest and CPUDeviceTest. | Felix Abecassis | 1 | -0/+8 | |
These new classes can be used to implement test cases that run only on the GPU or only on the CPU. The goal is to move all calls to Caffe::set_mode() inside the test framework, to discourage any test from changing the mode halfway through execution, which is documented to be illegal. | |||||
2015-05-26 | Merge pull request #1946 from nickcarlevaris/msra_init | Evan Shelhamer | 1 | -9/+62 | |
Add MSRAFiller, an Xavier-like filler designed for use with ReLUs | |||||
2015-05-26 | include comment on Saxe and sqrt(2) scaling factor | Evan Shelhamer | 1 | -0/+3 | |
although different and independent, the derivation of Saxe et al. with regard to the scaling factor might be of interest. | |||||
2015-05-26 | Added MSRAFiller, an Xavier-like filler designed for use with ReLUs | Nick Carlevaris-Bianco | 1 | -9/+59 | |
...instead of tanh. Based on the paper: He et al., "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification," 2015. Adds a VarianceNorm option to FillerParameters which allows one to normalize by fan_in, fan_out, or their average; updates XavierFiller to use the VarianceNorm option (default behavior unchanged); adds tests for MSRAFiller and XavierFiller. | |||||
2015-05-26 | Refactor types FloatCPU and DoubleCPU into a new type CPUDevice<T> | Felix Abecassis | 1 | -17/+11 | |
Similarly, FloatGPU and DoubleGPU are replaced by a new type GPUDevice<T>. | |||||
2015-05-16 | Merge pull request #2466 from ducha-aiki/mvn-less | Jeff Donahue | 1 | -0/+1 | |
Remove unnecessary variance computation from backward in MVN layer | |||||
2015-05-15 | Merge pull request #2095 from mtamburrano/skip_propagate_down_param | Jeff Donahue | 1 | -0/+3 | |
Added param skip_propagate_down to LayerParameter | |||||
2015-05-15 | Remove unnecessary variance computation from backward in MVN layer | Dmytro Mishkin | 1 | -0/+1 | |
2015-05-15 | Added "propagate_down" param to LayerParameter | manuele | 1 | -0/+3 | |
2015-05-14 | [pycaffe] correct exceptions from Python; remove PyErr_Print | Jonathan L Long | 1 | -25/+4 | |
Previously, PyErr_Print was used to print Python exceptions. This has the side effect of clearing the exception, which results in (an additional) SystemError in the Python interpreter. Exception printing from the caffe binary tool is re-added in a future commit. | |||||
2015-05-14 | Add ReshapeParameter axis and num_axes to reshape only a particular span | Jeff Donahue | 1 | -2/+0 | |
of the input shape | |||||
2015-05-14 | ReshapeLayer fixups for ND blobs | Jeff Donahue | 1 | -13/+18 | |
2015-05-14 | Added a Reshape layer for copying-free modification of blob dimensions. | Simon Safar | 1 | -0/+35 | |
2015-05-14 | Merge pull request #2177 from pgao/spp_layer | Jeff Donahue | 1 | -0/+66 | |
Spatial Pyramid Pooling Layer | |||||
2015-05-14 | Spatial Pyramid Pooling Layer | PETER_GAO | 1 | -0/+66 | |
2015-05-14 | Merge pull request #2115 from longjon/bogus-cross-entropy-gpu | Jeff Donahue | 1 | -2/+0 | |
Remove bogus implementation of SigmoidCrossEntropyLossLayer's Forward_gpu | |||||
2015-05-14 | remove bogus implementation of SigmoidCrossEntropyLossLayer::Forward_gpu | Jonathan L Long | 1 | -2/+0 | |
It was a verbatim copy of Forward_cpu; there is no proper GPU implementation. | |||||
2015-05-14 | Merge pull request #2168 from longjon/spurious-net-includes | Jeff Donahue | 2 | -2/+0 | |
Remove spurious inclusions of net.hpp | |||||
2015-05-14 | Merge pull request #2165 from longjon/auto-reshape | Jeff Donahue | 1 | -0/+1 | |
Always call Layer::Reshape in Layer::Forward | |||||
2015-05-14 | Merge pull request #2456 from longjon/python-layer-object | Jeff Donahue | 1 | -7/+6 | |
Use bp::object instead of PyObject* for self in Python layer | |||||
2015-05-13 | remove superfluous empty destructors | Jonathan L Long | 1 | -2/+0 | |
The removed definitions do nothing; these classes already have virtual destructors inherited from their respective base classes. | |||||
2015-05-13 | [pycaffe] use bp::object instead of PyObject* for self in Python layer | Takuya Narihira | 1 | -7/+6 | |
This simply allows direct use of the nicer bp::object interface. | |||||
2015-05-05 | Merge pull request #2414 from tnarihi/fix-prelu-redanduncy | Jeff Donahue | 1 | -1/+2 | |
Fix #2406: wrong thread blocks setting for PReLU | |||||
2015-05-04 | Modify for better readability regarding temporary buffer for backward | Takuya Narihira | 1 | -1/+2 | |
computation | |||||
2015-04-26 | fix a typo: GFLAGS_GFAGS_H_ -> GFLAGS_GFLAGS_H_ | gdh1995 | 1 | -1/+1 | |
2015-04-08 | Added InferBlobShape to data_transformer. | Sergio Guadarrama | 1 | -0/+36 | |
2015-03-25 | Merge pull request #2160 from TorosFanny/master | Jeff Donahue | 1 | -1/+1 | |
change resorce to resource | |||||
2015-03-24 | replace cuDNN alphas and betas with coefficient values | Evan Shelhamer | 1 | -5/+9 | |
Give cuDNN {0, 1} constants for controlling accumulation through the alpha and beta coefficients. | |||||
2015-03-24 | switch to cuDNN R2 | Simon Layton | 4 | -23/+25 | |
2015-03-19 | remove spurious net.hpp includes | Jonathan L Long | 2 | -2/+0 | |
2015-03-19 | always call Layer::Reshape in Layer::Forward | Jonathan L Long | 1 | -0/+1 | |
There are no cases where Forward is called without Reshape, so we can simplify the call structure. | |||||
2015-03-19 | change resorce to resource | TorosFanny | 1 | -1/+1 | |
2015-03-13 | shuffle data | wieschol | 1 | -0/+2 | |
2015-03-11 | PReLU Layer and its tests | Takuya Narihira | 1 | -0/+84 | |
described in Kaiming He et al., "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification," arXiv 2015. Below are the commit messages accumulated while developing: PReLULayer takes FillerParameter for init; PReLU testing consistency with ReLU; Fix: PReLU test consistency check; PReLU tests in-place computation, and it failed on GPU; Fix: PReLU in-place backward on GPU; PReLULayer called an incorrect API for copying data (caffe_gpu_memcpy): the first argument of `caffe_gpu_memcpy` should be the size of the memory region in bytes, so it was modified to use the `caffe_copy` function; Fix: style errors; Fix: number of axes of input blob must be >= 2; Use 1D blob, zero-D blob; Rename: hw -> dim |
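For reference, the contrastive-loss documentation fix (2015-07-29) aligned the documented equation with the implemented `max(margin - d, 0)^2` term. A sketch of the corrected form, with $d_n$ the Euclidean distance between the two inputs of pair $n$ and $y_n \in \{0, 1\}$ the similarity label:

```latex
L = \frac{1}{2N} \sum_{n=1}^{N} \Big[ \, y_n \, d_n^2
    + (1 - y_n) \max(\mathrm{margin} - d_n, 0)^2 \Big],
\qquad d_n = \lVert a_n - b_n \rVert_2
```

The earlier documentation had placed the square inside the max for the dissimilar-pair term, which is what #2321 reported.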
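The bilinear filler (2015-06-30) initializes deconvolution kernels so that they perform bilinear interpolation. A NumPy re-derivation of the weight formula is sketched below; `bilinear_kernel` is an assumed name, not Caffe's API, and the constants follow the usual bilinear-upsampling construction:

```python
import numpy as np

def bilinear_kernel(size):
    """Return a size x size kernel of bilinear interpolation weights.

    Sketch of the weights a bilinear filler would produce for one
    square kernel: each entry is the product of a 1-D tent function
    evaluated at the row and column positions.
    """
    f = np.ceil(size / 2.0)                # half-width of the tent
    c = (2 * f - 1 - f % 2) / (2.0 * f)    # center of the tent
    x = np.arange(size)
    k1d = 1 - np.abs(x / f - c)            # 1-D tent weights
    return np.outer(k1d, k1d)              # separable 2-D kernel
```

Used as a DeconvolutionLayer weight with stride 2 and no bias, such a kernel upsamples a feature map by bilinear interpolation.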
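The gradient-accumulation fix (2015-05-28) normalizes summed gradients by `1 / iter_size` before applying the update. A minimal NumPy sketch of that idea, with plain arrays standing in for parameter diffs and a hypothetical function name:

```python
import numpy as np

def normalize_accumulated_diffs(diffs, iter_size):
    """Rescale accumulated gradients by 1 / iter_size.

    After iter_size forward/backward passes have summed their gradients
    into each parameter diff, scaling by 1 / iter_size makes the update
    equal to the average gradient, independent of the accumulation count.
    """
    if iter_size == 1:
        return diffs  # single pass per update: nothing to normalize
    scale = 1.0 / iter_size
    return [scale * d for d in diffs]
```

Normalizing the diffs directly, rather than folding the factor into the learning rate, is what makes the scheme correct for solvers like AdaGrad whose updates are not linear in the rate.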
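The MSRAFiller entries (2015-05-26) describe drawing weights with variance 2/n, where n is chosen by the VarianceNorm option (fan_in, fan_out, or their average). A NumPy sketch under those assumptions; the function names are hypothetical:

```python
import numpy as np

def msra_std(fan_in, fan_out, variance_norm="fan_in"):
    # Pick the normalizer n per the VarianceNorm option, then std = sqrt(2 / n).
    n = {"fan_in": fan_in,
         "fan_out": fan_out,
         "average": (fan_in + fan_out) / 2.0}[variance_norm]
    return np.sqrt(2.0 / n)

def msra_fill(shape, fan_in, fan_out, variance_norm="fan_in", rng=None):
    # Sample Gaussian weights with the MSRA standard deviation.
    rng = np.random.default_rng() if rng is None else rng
    return rng.normal(0.0, msra_std(fan_in, fan_out, variance_norm), size=shape)
```

The factor of 2 (versus Xavier's 1) compensates for ReLU zeroing half of its inputs in expectation, which is the point of the He et al. derivation.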
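The PReLU layer (2015-03-11) computes f(x) = max(0, x) + a * min(0, x) with a learned slope a. A channel-shared NumPy sketch of the forward and backward passes; the function names are assumptions for illustration, not Caffe's API:

```python
import numpy as np

def prelu_forward(x, a):
    # Channel-shared slope a (scalar): identity for x > 0, slope a for x <= 0.
    return np.maximum(0, x) + a * np.minimum(0, x)

def prelu_backward_input(x, a, top_diff):
    # dL/dx: pass top_diff through where x > 0, scale by a where x <= 0.
    return top_diff * np.where(x > 0, 1.0, a)

def prelu_backward_slope(x, top_diff):
    # dL/da: accumulate top_diff * x over the negative inputs only.
    return np.sum(top_diff * np.minimum(0, x))
```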