path: root/include
Age | Commit message | Author | Files | Lines
2016-02-25 | add InputLayer for Net input | Evan Shelhamer | 1 | -0/+44
Create an input layer to replace oddball Net `input` fields.
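For reference, a minimal prototxt sketch of the new layer, assuming the `Input` type and `input_param` added by this commit; the shape is illustrative:

    layer {
      name: "data"
      type: "Input"
      top: "data"
      # one shape per top blob; a 4-d NCHW shape shown as an example
      input_param { shape: { dim: 1 dim: 3 dim: 224 dim: 224 } }
    }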
2016-02-20 | transpose parameter added to IP layer to support tied weights in an autoencoder | Youssef Kashef | 1 | -0/+1
Arguments to the matrix multiplication function are conditioned on this parameter; no actual transposing takes place. Test IP gradient computation with transpose on.
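A hedged sketch of the option in prototxt, assuming the flag is exposed as `transpose` in `inner_product_param` and that weights are tied to the encoder by Caffe's param-name sharing; all names and sizes are illustrative:

    layer {
      name: "decode"
      type: "InnerProduct"
      bottom: "code"
      top: "recon"
      param { name: "encode_w" }  # share the encoder's weight blob by name
      inner_product_param {
        num_output: 784
        transpose: true  # condition the GEMM arguments; the shared weights are never copied or transposed
      }
    }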
2016-01-27 | Merge pull request #3022 from jeffdonahue/expose-param-display-names | Jeff Donahue | 1 | -0/+3
Net: expose param_display_names_
2016-01-26 | Merge pull request #3591 from jeffdonahue/scale-bias-layer | Evan Shelhamer | 2 | -0/+137
Scale and Bias Layers
2016-01-26 | Merge pull request #3132 from bwilbertz/LastIterationLoss | Jeff Donahue | 1 | -0/+3
Fix loss of last iteration when average_loss > 1
2016-01-22 | Separation and generalization of ChannelwiseAffineLayer into BiasLayer and ScaleLayer | Jeff Donahue | 3 | -103/+137
The behavior of ChannelwiseAffineLayer can be reproduced by a ScaleLayer with `scale_param { bias_term: true }`. BiasLayer and ScaleLayer each take 1 or 2 bottoms, with the output having the same shape as the first. The second input -- either another bottom or a learned parameter -- will have its axes (virtually) broadcast and tiled to have the same shape as the first, after which elementwise addition (Bias) or multiplication (Scale) is performed.
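As a sketch, the ChannelwiseAffineLayer replacement described above would look like this in prototxt (names illustrative):

    layer {
      name: "affine"
      type: "Scale"
      bottom: "bn_out"
      top: "bn_scaled"
      scale_param {
        axis: 1          # broadcast the learned scale over the channel axis
        bias_term: true  # fold in a BiasLayer, reproducing ChannelwiseAffineLayer
      }
    }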
2016-01-22 | Version 1.0.0-rc3 | Luke Yeager | 1 | -0/+4
2016-01-22 | Add ChannelwiseAffine for batch norm | Dmytro Mishkin | 1 | -0/+103
2016-01-22 | Merge pull request #3388 from mohomran/exponential_linear_units | Evan Shelhamer | 1 | -0/+86
Exponential Linear Units
2016-01-04 | Exposing layer top and bottom names to python | philkr | 1 | -0/+12
2015-12-28 | add support for N-D dilated convolution | Fisher Yu | 2 | -8/+8
2015-12-28 | add support for 2D dilated convolution | Fisher Yu | 4 | -8/+23
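A minimal sketch of the new option, assuming the `dilation` field these commits add to `convolution_param` (values illustrative):

    layer {
      name: "conv_dil"
      type: "Convolution"
      bottom: "data"
      top: "conv_dil"
      convolution_param {
        num_output: 64
        kernel_size: 3
        dilation: 2  # one hole between kernel taps: effective extent 2*(3-1)+1 = 5
      }
    }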
2015-12-10 | Fix CuDNNConvolutionLayer for cuDNN v4 | Felix Abecassis | 1 | -0/+3
Add a macro to check the current cuDNN version
2015-12-04 | ELU layer with basic tests | Mohamed Omran | 1 | -0/+86
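ELU computes y = x for x > 0 and y = alpha * (exp(x) - 1) otherwise. A minimal prototxt sketch, assuming an `elu_param` with a single `alpha` field (default 1):

    layer {
      name: "elu1"
      type: "ELU"
      bottom: "conv1"
      top: "conv1"  # in-place, as with ReLU
      elu_param { alpha: 1.0 }
    }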
2015-12-02 | Merge pull request #3404 from BonsaiAI/remove-hamming-dist | Jon Long | 1 | -7/+0
Remove hamming_distance and popcount
2015-12-02 | Remove hamming_distance and popcount | Tea | 1 | -7/+0
2015-12-01 | Merge pull request #3285 from longjon/cuda-dead-cpp | Evan Shelhamer | 1 | -8/+2
Remove dead preprocessor code for number of CUDA threads
2015-12-01 | dismantle layer headers | Evan Shelhamer | 69 | -3406/+4261
No more monolithic includes: split layers into their own headers for modular inclusion and build.
2015-11-28 | Secure temporary file creation | Tea | 1 | -7/+16
2015-11-28 | Secure implementation of MakeTempDir | T.E.A de Souza | 1 | -6/+15
2015-11-27 | Merge pull request #3320 from BonsaiAI/disambiguate-dtype | Ronghang Hu | 1 | -2/+3
Cast std::max args to Dtype
2015-11-26 | replace snprintf with a C++98 equivalent | Tea | 1 | -0/+18
2015-11-22 | Merge pull request #3296 from cdoersch/normalize_batch | Jeff Donahue | 1 | -3/+8
Better normalization options for SoftmaxWithLoss layer
2015-11-22 | Better normalization options for SoftmaxWithLoss layer. | Carl Doersch | 1 | -3/+8
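A sketch of the new options, assuming the `normalization` enum this PR adds to `loss_param` (VALID divides the loss by the count of non-ignored labels; the ignore_label value is illustrative):

    layer {
      name: "loss"
      type: "SoftmaxWithLoss"
      bottom: "score"
      bottom: "label"
      top: "loss"
      loss_param {
        ignore_label: 255     # labels with this value do not contribute
        normalization: VALID  # divide by the number of labels actually counted
      }
    }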
2015-11-20 | Convert std::max args to Dtype | Tea | 1 | -2/+3
2015-11-19 | Fix MaxTopBlobs in Accuracy Layer | Ronghang Hu | 1 | -1/+1
Fix the typo "MaxTopBlos" to "MaxTopBlobs". This typo caused the maximum number of top blobs to be incorrect.
2015-11-12 | Fix loss of last iteration when average_loss > 1 | Benedikt Wilbertz | 1 | -0/+3
Refactor duplicate code into a separate update function for the smoothed loss; fix naming convention.
2015-11-10 | Merge pull request #3295 from timmeinhardt/fix_issue_3274 | Evan Shelhamer | 1 | -7/+7
[bug] fix issue #3274 -- shape argmax top carefully
2015-11-10 | Merge pull request #3310 from gustavla/contrastive-doc-fix | Evan Shelhamer | 1 | -2/+2
[doc] Fix consistent typo in contrastive loss
2015-11-10 | OS X 10.10 (and later) uses the Accelerate framework instead of vecLib | ixartz | 1 | -0/+5
2015-11-10 | Replace unistd functions with cross-platform counterparts | Tea | 1 | -19/+11
2015-11-09 | DOC: Fix consistent typo in contrastive loss | Gustav Larsson | 1 | -2/+2
If a pair is similar, it should take the squared distance and not the distance. This is clearly what the code is doing.
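For reference, the loss as computed (with d_n the Euclidean distance between the n-th pair, y_n = 1 for similar pairs, and margin m):

    L = \frac{1}{2N} \sum_{n=1}^{N} \left[ y_n d_n^2 + (1 - y_n) \max(m - d_n, 0)^2 \right]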
2015-11-06 | Fix ArgMaxLayer::Reshape for any number of bottom axes | Tim Meinhardt | 1 | -7/+7
2015-11-04 | remove dead cpp code for number of CUDA threads | Jonathan L Long | 1 | -8/+2
__CUDA_ARCH__ is not defined in host code; the #if was vacuous and misleading.
2015-10-30 | Merge pull request #3082 from gustavla/pycaffe-snapshot | Evan Shelhamer | 1 | -5/+5
Expose `Solver::Snapshot` to pycaffe
2015-10-22 | Merge pull request #3229 from cdoersch/batchnorm2 | Jeff Donahue | 1 | -1/+67
Yet another batch normalization PR
2015-10-22 | Cleanup batch norm layer, include global stats computation | Carl Doersch | 1 | -23/+41
2015-10-20 | Added batch normalization layer with test and examples | Dmytro Mishkin | 1 | -1/+49
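A minimal prototxt sketch, assuming the `batch_norm_param` message with the `use_global_stats` switch from the cleanup above (false normalizes by minibatch statistics for training; true uses the accumulated global statistics for inference):

    layer {
      name: "bn1"
      type: "BatchNorm"
      bottom: "conv1"
      top: "conv1"
      batch_norm_param { use_global_stats: false }
    }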
2015-10-21 | Clean redundant/unnecessary headers | Kang Kim | 8 | -13/+1
2015-10-21 | Move HDF5 defines to data_layers header | Kang Kim | 2 | -3/+3
2015-10-16 | Add automatic upgrade for solver type | Ronghang Hu | 2 | -0/+13
2015-10-16 | Change solver type to string and provide solver registry | Ronghang Hu | 4 | -4/+149
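After this change the solver class is selected by a registered name in the solver prototxt rather than the old solver_type enum, with old definitions upgraded automatically per the commit above; a sketch, values illustrative:

    # solver.prototxt
    net: "train_val.prototxt"
    base_lr: 0.01
    type: "Nesterov"  # looked up in the solver registry by string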
2015-10-16 | Split solver code into one file per solver class | Ronghang Hu | 2 | -153/+147
2015-10-16 | Merge pull request #3089 from shelhamer/groom-conv | Evan Shelhamer | 1 | -2/+2
[style] groom im2col + col2im for clarity
2015-10-16 | rearrange upgrade helpers | Evan Shelhamer | 1 | -9/+9
Order the helpers from general to specific, with the specific upgrades in chronological order.
2015-10-15 | Merge pull request #3160 from shelhamer/cudnnV3 | Evan Shelhamer | 1 | -2/+72
Basic cuDNN v3 support
2015-10-15 | Initial cuDNN v3 support | Simon Layton | 1 | -2/+72
2015-10-13 | Merge pull request #2966 from cdoersch/batch_reindex_layer | Jeff Donahue | 1 | -0/+69
BatchReindexLayer to shuffle, subsample, and replicate examples in a batch
2015-10-07 | BatchReindexLayer to shuffle, subsample, and replicate examples in a batch | Carl Doersch | 1 | -0/+69
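A hedged sketch of its use, assuming the layer takes the batch as the first bottom and a 1-d blob of indices as the second, emitting bottom[0]'s entries in the order the indices give (so entries may repeat or be dropped):

    layer {
      name: "reindex"
      type: "BatchReindex"
      bottom: "data"     # N x ... input batch
      bottom: "indices"  # 1-d indices into the batch axis; repeats and omissions allowed
      top: "shuffled"
    }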
2015-09-30 | Merge pull request #3069 from timmeinhardt/argmax | Evan Shelhamer | 1 | -3/+11
Add argmax_param "axis" to maximise output along the specified axis
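A minimal sketch, assuming the new `axis` field of `argmax_param`; when set, the argmax is taken along that axis instead of over the flattened trailing dimensions (names illustrative):

    layer {
      name: "pred"
      type: "ArgMax"
      bottom: "prob"
      top: "pred"
      argmax_param { axis: 1 }  # per-location class index over the channel axis
    }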