path: root/include/caffe
| Age | Commit message | Author | Files | Lines |
| --- | --- | --- | --- | --- |
| 2016-06-03 | Add level and stages to Net constructor | Luke Yeager | 1 | -0/+1 |
| 2016-06-01 | Add LSTMLayer and LSTMUnitLayer, with tests | Jeff Donahue | 1 | -0/+154 |
| 2016-06-01 | Add RNNLayer, with tests | Jeff Donahue | 1 | -0/+47 |
| 2016-06-01 | Add RecurrentLayer: an abstract superclass for other recurrent layer types | Jeff Donahue | 1 | -0/+187 |
| 2016-05-16 | Add cuDNN v5 support, drop cuDNN v3 support | Felix Abecassis | 4 | -3/+24 |
| 2016-05-04 | add parameter layer for learning any bottom | Jonathan L Long | 1 | -0/+45 |
| 2016-05-04 | Merge pull request #3995 from ZhouYzzz/python-phase | Jon Long | 1 | -0/+1 |
| 2016-04-20 | Don't set map_size=1TB in util/db_lmdb | Luke Yeager | 1 | -5/+8 |
| 2016-04-15 | Allow the python layer have attribute "phase" | ZhouYzzz | 1 | -0/+1 |
| 2016-04-14 | CropLayer: groom comments | Evan Shelhamer | 1 | -0/+9 |
| 2016-03-05 | Merge pull request #3590 from junshi15/GPUUtilities | Jon Long | 1 | -0/+5 |
| 2016-03-05 | Merge pull request #3588 from junshi15/P2psyncPrepare | Jon Long | 1 | -1/+4 |
| 2016-03-05 | split p2psync::run() | Jun Shi | 1 | -1/+4 |
| 2016-03-05 | Crop: fixes, tests and negative axis indexing. | max argus | 1 | -2/+2 |
| 2016-03-05 | Extend Crop to N-D, changed CropParameter. | max argus | 1 | -2/+20 |
| 2016-03-05 | add CropLayer: crop blob to another blob's dimensions with offsets | Jonathan L Long | 1 | -0/+49 |
| 2016-03-04 | add check and find GPU device utilities | Jun Shi | 1 | -0/+5 |
| 2016-02-26 | Deprecate ForwardPrefilled(), Forward(bottom, loss) in lieu of dropping | Evan Shelhamer | 1 | -0/+9 |
| 2016-02-25 | collect Net inputs from Input layers | Evan Shelhamer | 1 | -2/+11 |
| 2016-02-25 | drop Net inputs + Forward with bottoms | Evan Shelhamer | 1 | -27/+7 |
| 2016-02-25 | deprecate input fields and upgrade automagically | Evan Shelhamer | 1 | -0/+6 |
| 2016-02-25 | add InputLayer for Net input | Evan Shelhamer | 1 | -0/+44 |
| 2016-02-20 | tranpose parameter added to IP layer to support tied weights in an autoencode... | Youssef Kashef | 1 | -0/+1 |
| 2016-01-27 | Merge pull request #3022 from jeffdonahue/expose-param-display-names | Jeff Donahue | 1 | -0/+3 |
| 2016-01-26 | Merge pull request #3591 from jeffdonahue/scale-bias-layer | Evan Shelhamer | 2 | -0/+137 |
| 2016-01-26 | Merge pull request #3132 from bwilbertz/LastIterationLoss | Jeff Donahue | 1 | -0/+3 |
| 2016-01-22 | Separation and generalization of ChannelwiseAffineLayer into BiasLayer | Jeff Donahue | 3 | -103/+137 |
| 2016-01-22 | Version 1.0.0-rc3 | Luke Yeager | 1 | -0/+4 |
| 2016-01-22 | Add ChannelwiseAffine for batch norm | Dmytro Mishkin | 1 | -0/+103 |
| 2016-01-22 | Merge pull request #3388 from mohomran/exponential_linear_units | Evan Shelhamer | 1 | -0/+86 |
| 2016-01-04 | Exposing layer top and bottom names to python | philkr | 1 | -0/+12 |
| 2015-12-28 | add support for N-D dilated convolution | Fisher Yu | 2 | -8/+8 |
| 2015-12-28 | add support for 2D dilated convolution | Fisher Yu | 4 | -8/+23 |
| 2015-12-10 | Fix CuDNNConvolutionLayer for cuDNN v4 | Felix Abecassis | 1 | -0/+3 |
| 2015-12-04 | ELU layer with basic tests | Mohamed Omran | 1 | -0/+86 |
| 2015-12-02 | Merge pull request #3404 from BonsaiAI/remove-hamming-dist | Jon Long | 1 | -7/+0 |
| 2015-12-02 | Remove hamming_distance and popcount | Tea | 1 | -7/+0 |
| 2015-12-01 | Merge pull request #3285 from longjon/cuda-dead-cpp | Evan Shelhamer | 1 | -8/+2 |
| 2015-12-01 | dismantle layer headers | Evan Shelhamer | 69 | -3406/+4261 |
| 2015-11-28 | Secure temporary file creation | Tea | 1 | -7/+16 |
| 2015-11-28 | Secure implementation of MakeTempDir | T.E.A de Souza | 1 | -6/+15 |
| 2015-11-27 | Merge pull request #3320 from BonsaiAI/disambiguate-dtype | Ronghang Hu | 1 | -2/+3 |
| 2015-11-26 | replace snprintf with a C++98 equivalent | Tea | 1 | -0/+18 |
| 2015-11-22 | Merge pull request #3296 from cdoersch/normalize_batch | Jeff Donahue | 1 | -3/+8 |
| 2015-11-22 | Better normalization options for SoftmaxWithLoss layer. | Carl Doersch | 1 | -3/+8 |
| 2015-11-20 | Convert std::max args to Dtype | Tea | 1 | -2/+3 |
| 2015-11-19 | Fix MaxTopBlobs in Accuracy Layer | Ronghang Hu | 1 | -1/+1 |
| 2015-11-12 | Fix loss of last iteration when average_loss > 1 | Benedikt Wilbertz | 1 | -0/+3 |
| 2015-11-10 | Merge pull request #3295 from timmeinhardt/fix_issue_3274 | Evan Shelhamer | 1 | -7/+7 |
| 2015-11-10 | Merge pull request #3310 from gustavla/contrastive-doc-fix | Evan Shelhamer | 1 | -2/+2 |