path: root/include
| Age        | Commit message                                                                   | Author            | Files | Lines       |
|------------|----------------------------------------------------------------------------------|-------------------|-------|-------------|
| 2016-03-05 | Merge pull request #3590 from junshi15/GPUUtilities                              | Jon Long          | 1     | -0/+5       |
| 2016-03-05 | Merge pull request #3588 from junshi15/P2psyncPrepare                            | Jon Long          | 1     | -1/+4       |
| 2016-03-05 | split p2psync::run()                                                             | Jun Shi           | 1     | -1/+4       |
| 2016-03-05 | Crop: fixes, tests and negative axis indexing.                                   | max argus         | 1     | -2/+2       |
| 2016-03-05 | Extend Crop to N-D, changed CropParameter.                                       | max argus         | 1     | -2/+20      |
| 2016-03-05 | add CropLayer: crop blob to another blob's dimensions with offsets               | Jonathan L Long   | 1     | -0/+49      |
| 2016-03-04 | add check and find GPU device utilities                                          | Jun Shi           | 1     | -0/+5       |
| 2016-02-26 | Deprecate ForwardPrefilled(), Forward(bottom, loss) in lieu of dropping          | Evan Shelhamer    | 1     | -0/+9       |
| 2016-02-25 | collect Net inputs from Input layers                                             | Evan Shelhamer    | 1     | -2/+11      |
| 2016-02-25 | drop Net inputs + Forward with bottoms                                           | Evan Shelhamer    | 1     | -27/+7      |
| 2016-02-25 | deprecate input fields and upgrade automagically                                 | Evan Shelhamer    | 1     | -0/+6       |
| 2016-02-25 | add InputLayer for Net input                                                     | Evan Shelhamer    | 1     | -0/+44      |
| 2016-02-20 | transpose parameter added to IP layer to support tied weights in an autoencode...| Youssef Kashef    | 1     | -0/+1       |
| 2016-01-27 | Merge pull request #3022 from jeffdonahue/expose-param-display-names             | Jeff Donahue      | 1     | -0/+3       |
| 2016-01-26 | Merge pull request #3591 from jeffdonahue/scale-bias-layer                       | Evan Shelhamer    | 2     | -0/+137     |
| 2016-01-26 | Merge pull request #3132 from bwilbertz/LastIterationLoss                        | Jeff Donahue      | 1     | -0/+3       |
| 2016-01-22 | Separation and generalization of ChannelwiseAffineLayer into BiasLayer           | Jeff Donahue      | 3     | -103/+137   |
| 2016-01-22 | Version 1.0.0-rc3                                                                | Luke Yeager       | 1     | -0/+4       |
| 2016-01-22 | Add ChannelwiseAffine for batch norm                                             | Dmytro Mishkin    | 1     | -0/+103     |
| 2016-01-22 | Merge pull request #3388 from mohomran/exponential_linear_units                  | Evan Shelhamer    | 1     | -0/+86      |
| 2016-01-04 | Exposing layer top and bottom names to python                                    | philkr            | 1     | -0/+12      |
| 2015-12-28 | add support for N-D dilated convolution                                          | Fisher Yu         | 2     | -8/+8       |
| 2015-12-28 | add support for 2D dilated convolution                                           | Fisher Yu         | 4     | -8/+23      |
| 2015-12-10 | Fix CuDNNConvolutionLayer for cuDNN v4                                           | Felix Abecassis   | 1     | -0/+3       |
| 2015-12-04 | ELU layer with basic tests                                                       | Mohamed Omran     | 1     | -0/+86      |
| 2015-12-02 | Merge pull request #3404 from BonsaiAI/remove-hamming-dist                       | Jon Long          | 1     | -7/+0       |
| 2015-12-02 | Remove hamming_distance and popcount                                             | Tea               | 1     | -7/+0       |
| 2015-12-01 | Merge pull request #3285 from longjon/cuda-dead-cpp                              | Evan Shelhamer    | 1     | -8/+2       |
| 2015-12-01 | dismantle layer headers                                                          | Evan Shelhamer    | 69    | -3406/+4261 |
| 2015-11-28 | Secure temporary file creation                                                   | Tea               | 1     | -7/+16      |
| 2015-11-28 | Secure implementation of MakeTempDir                                             | T.E.A de Souza    | 1     | -6/+15      |
| 2015-11-27 | Merge pull request #3320 from BonsaiAI/disambiguate-dtype                        | Ronghang Hu       | 1     | -2/+3       |
| 2015-11-26 | replace snprintf with a C++98 equivalent                                         | Tea               | 1     | -0/+18      |
| 2015-11-22 | Merge pull request #3296 from cdoersch/normalize_batch                           | Jeff Donahue      | 1     | -3/+8       |
| 2015-11-22 | Better normalization options for SoftmaxWithLoss layer.                          | Carl Doersch      | 1     | -3/+8       |
| 2015-11-20 | Convert std::max args to Dtype                                                   | Tea               | 1     | -2/+3       |
| 2015-11-19 | Fix MaxTopBlobs in Accuracy Layer                                                | Ronghang Hu       | 1     | -1/+1       |
| 2015-11-12 | Fix loss of last iteration when average_loss > 1                                 | Benedikt Wilbertz | 1     | -0/+3       |
| 2015-11-10 | Merge pull request #3295 from timmeinhardt/fix_issue_3274                        | Evan Shelhamer    | 1     | -7/+7       |
| 2015-11-10 | Merge pull request #3310 from gustavla/contrastive-doc-fix                       | Evan Shelhamer    | 1     | -2/+2       |
| 2015-11-10 | Replace unistd functions with cross platform counterparts                        | Tea               | 1     | -19/+11     |
| 2015-11-09 | DOC: Fix consistent typo in contrastive loss                                     | Gustav Larsson    | 1     | -2/+2       |
| 2015-11-06 | Fix ArgMaxLayer::Reshape for any num of bottom axes                              | Tim Meinhardt     | 1     | -7/+7       |
| 2015-11-04 | remove dead cpp code for number of CUDA threads                                  | Jonathan L Long   | 1     | -8/+2       |
| 2015-10-30 | Merge pull request #3082 from gustavla/pycaffe-snapshot                          | Evan Shelhamer    | 1     | -5/+5       |
| 2015-10-22 | Merge pull request #3229 from cdoersch/batchnorm2                                | Jeff Donahue      | 1     | -1/+67      |
| 2015-10-22 | Cleanup batch norm layer, include global stats computation                       | Carl Doersch      | 1     | -23/+41     |
| 2015-10-20 | Added batch normalization layer with test and examples                           | Dmytro Mishkin    | 1     | -1/+49      |
| 2015-10-21 | Clean redundant/unnecessary headers                                              | Kang Kim          | 8     | -13/+1      |
| 2015-10-21 | Move HDF5 defines to data_layers header                                          | Kang Kim          | 2     | -3/+3       |