Age | Commit message | Author | Files | Lines
---|---|---|---|---
2016-02-25 | add InputLayer for Net input. Create an input layer to replace oddball Net `input` fields. | Evan Shelhamer | 1 | -0/+44
2016-02-20 | transpose parameter added to IP layer to support tied weights in an autoencoder. Arguments to the matrix multiplication function are conditioned on this parameter; no actual transposing takes place. Test IP gradient computation with transpose on. | Youssef Kashef | 1 | -0/+1
2016-01-27 | Merge pull request #3022 from jeffdonahue/expose-param-display-names. Net: expose `param_display_names_`. | Jeff Donahue | 1 | -0/+3
2016-01-26 | Merge pull request #3591 from jeffdonahue/scale-bias-layer. Scale and Bias layers. | Evan Shelhamer | 2 | -0/+137
2016-01-26 | Merge pull request #3132 from bwilbertz/LastIterationLoss. Fix loss of last iteration when average_loss > 1. | Jeff Donahue | 1 | -0/+3
2016-01-22 | Separation and generalization of ChannelwiseAffineLayer into BiasLayer and ScaleLayer. The behavior of ChannelwiseAffineLayer can be reproduced by a ScaleLayer with `scale_param { bias_term: true }`. BiasLayer and ScaleLayer each take 1 or 2 bottoms, with the output having the same shape as the first. The second input -- either another bottom or a learned parameter -- will have its axes (virtually) broadcast and tiled to have the same shape as the first, after which elementwise addition (Bias) or multiplication (Scale) is performed. | Jeff Donahue | 3 | -103/+137
2016-01-22 | Version 1.0.0-rc3 | Luke Yeager | 1 | -0/+4
2016-01-22 | Add ChannelwiseAffine for batch norm | Dmytro Mishkin | 1 | -0/+103
2016-01-22 | Merge pull request #3388 from mohomran/exponential_linear_units. Exponential Linear Units. | Evan Shelhamer | 1 | -0/+86
2016-01-04 | Exposing layer top and bottom names to python | philkr | 1 | -0/+12
2015-12-28 | add support for N-D dilated convolution | Fisher Yu | 2 | -8/+8
2015-12-28 | add support for 2D dilated convolution | Fisher Yu | 4 | -8/+23
2015-12-10 | Fix CuDNNConvolutionLayer for cuDNN v4. Add a macro to check the current cuDNN version. | Felix Abecassis | 1 | -0/+3
2015-12-04 | ELU layer with basic tests | Mohamed Omran | 1 | -0/+86
2015-12-02 | Merge pull request #3404 from BonsaiAI/remove-hamming-dist. Remove hamming_distance and popcount. | Jon Long | 1 | -7/+0
2015-12-02 | Remove hamming_distance and popcount | Tea | 1 | -7/+0
2015-12-01 | Merge pull request #3285 from longjon/cuda-dead-cpp. Remove dead preprocessor code for number of CUDA threads. | Evan Shelhamer | 1 | -8/+2
2015-12-01 | dismantle layer headers. No more monolithic includes: split layers into their own headers for modular inclusion and build. | Evan Shelhamer | 69 | -3406/+4261
2015-11-28 | Secure temporary file creation | Tea | 1 | -7/+16
2015-11-28 | Secure implementation of MakeTempDir | T.E.A de Souza | 1 | -6/+15
2015-11-27 | Merge pull request #3320 from BonsaiAI/disambiguate-dtype. Cast std::max args to Dtype. | Ronghang Hu | 1 | -2/+3
2015-11-26 | replace snprintf with a C++98 equivalent | Tea | 1 | -0/+18
2015-11-22 | Merge pull request #3296 from cdoersch/normalize_batch. Better normalization options for SoftmaxWithLoss layer. | Jeff Donahue | 1 | -3/+8
2015-11-22 | Better normalization options for SoftmaxWithLoss layer | Carl Doersch | 1 | -3/+8
2015-11-20 | Convert std::max args to Dtype | Tea | 1 | -2/+3
2015-11-19 | Fix MaxTopBlobs in Accuracy Layer. Fix the typo "MaxTopBlos" to "MaxTopBlobs"; this typo caused the maximum top count to be incorrect. | Ronghang Hu | 1 | -1/+1
2015-11-12 | Fix loss of last iteration when average_loss > 1. Refactor duplicate code into a separate update function for the smoothed loss; fix naming convention. | Benedikt Wilbertz | 1 | -0/+3
2015-11-10 | Merge pull request #3295 from timmeinhardt/fix_issue_3274. [bug] Fix issue #3274: shape argmax top carefully. | Evan Shelhamer | 1 | -7/+7
2015-11-10 | Merge pull request #3310 from gustavla/contrastive-doc-fix. [doc] Fix consistent typo in contrastive loss. | Evan Shelhamer | 1 | -2/+2
2015-11-10 | OSX 10.10 (and more) use Accelerate Framework instead of veclib | ixartz | 1 | -0/+5
2015-11-10 | Replace unistd functions with cross-platform counterparts | Tea | 1 | -19/+11
2015-11-09 | DOC: Fix consistent typo in contrastive loss. If a pair is similar, it should take the squared distance and not the distance; this is clearly what the code is doing. | Gustav Larsson | 1 | -2/+2
2015-11-06 | Fix ArgMaxLayer::Reshape for any num of bottom axes | Tim Meinhardt | 1 | -7/+7
2015-11-04 | remove dead cpp code for number of CUDA threads. `__CUDA_ARCH__` is not defined in host code; the #if was vacuous and misleading. | Jonathan L Long | 1 | -8/+2
2015-10-30 | Merge pull request #3082 from gustavla/pycaffe-snapshot. Expose `Solver::Snapshot` to pycaffe. | Evan Shelhamer | 1 | -5/+5
2015-10-22 | Merge pull request #3229 from cdoersch/batchnorm2. Yet another batch normalization PR. | Jeff Donahue | 1 | -1/+67
2015-10-22 | Cleanup batch norm layer, include global stats computation | Carl Doersch | 1 | -23/+41
2015-10-20 | Added batch normalization layer with test and examples | Dmytro Mishkin | 1 | -1/+49
2015-10-21 | Clean redundant/unnecessary headers | Kang Kim | 8 | -13/+1
2015-10-21 | Move HDF5 defines to data_layers header | Kang Kim | 2 | -3/+3
2015-10-16 | Add automatic upgrade for solver type | Ronghang Hu | 2 | -0/+13
2015-10-16 | Change solver type to string and provide solver registry | Ronghang Hu | 4 | -4/+149
2015-10-16 | Split solver code into one file per solver class | Ronghang Hu | 2 | -153/+147
2015-10-16 | Merge pull request #3089 from shelhamer/groom-conv. [style] Groom im2col + col2im for clarity. | Evan Shelhamer | 1 | -2/+2
2015-10-16 | rearrange upgrade helpers. Order from general helpers to specific upgrades, chronologically. | Evan Shelhamer | 1 | -9/+9
2015-10-15 | Merge pull request #3160 from shelhamer/cudnnV3. Basic cuDNN v3 support. | Evan Shelhamer | 1 | -2/+72
2015-10-15 | Initial cuDNN v3 support | Simon Layton | 1 | -2/+72
2015-10-13 | Merge pull request #2966 from cdoersch/batch_reindex_layer. BatchReindexLayer to shuffle, subsample, and replicate examples in a batch. | Jeff Donahue | 1 | -0/+69
2015-10-07 | BatchReindexLayer to shuffle, subsample, and replicate examples in a batch | Carl Doersch | 1 | -0/+69
2015-09-30 | Merge pull request #3069 from timmeinhardt/argmax. Add argmax_param "axis" to maximise output along the specified axis. | Evan Shelhamer | 1 | -3/+11
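The tied-weights change to the IP (InnerProduct) layer (2016-02-20) amounts to reusing the encoder's weight matrix, with operand roles swapped, in the decoder. A minimal NumPy sketch of the idea, with illustrative names that are not Caffe's API:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 4))   # encoder weights: 10 inputs -> 4 hidden units
x = rng.standard_normal((3, 10))   # a batch of 3 inputs

# Encoder: an ordinary inner product.
h = x @ W

# Decoder with tied weights: the same W is reused "transposed" -- no
# separate decoder matrix is stored, and (as the commit notes) no actual
# transposed copy need be materialized; only the multiply's argument
# order changes.
x_recon = h @ W.T

print(h.shape, x_recon.shape)  # (3, 4) (3, 10)
```

Gradients then flow into a single shared parameter from both uses, which is what the added gradient test with `transpose` on exercises.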
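The broadcast-and-tile semantics described in the 2016-01-22 BiasLayer/ScaleLayer entry can be illustrated with NumPy broadcasting; this is a sketch of the semantics, not Caffe code:

```python
import numpy as np

bottom = np.ones((2, 3, 4, 5))       # first input, e.g. N x C x H x W
scale = np.array([1.0, 2.0, 3.0])    # one factor per channel (axis 1)
bias = np.array([0.1, 0.2, 0.3])     # one offset per channel

# Line the per-channel parameter up with axis 1; NumPy then (virtually)
# broadcasts it across N, H, and W -- no actual tiling takes place.
scaled = bottom * scale.reshape(1, 3, 1, 1)   # ScaleLayer: elementwise multiply
biased = bottom + bias.reshape(1, 3, 1, 1)    # BiasLayer: elementwise add

print(scaled[0, 1, 0, 0], biased[0, 2, 0, 0])  # 2.0 1.3
```

The output has the same shape as the first input in both cases, matching the commit's description.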
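The Exponential Linear Units merged on 2016-01-22 compute f(x) = x for x > 0 and alpha * (exp(x) - 1) otherwise. A standalone sketch of that formula (assuming alpha = 1, the usual default):

```python
import numpy as np

def elu(x, alpha=1.0):
    """Exponential Linear Unit: identity for positive inputs, a smooth
    exponential that saturates at -alpha for very negative inputs."""
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

out = elu([-2.0, 0.0, 3.0])
print(out)  # approximately [-0.8647, 0.0, 3.0]
```

Unlike ReLU, the negative branch keeps a nonzero gradient, which is the motivation for the layer.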