path: root/include
Age | Commit message | Author | Files | Lines
2015-03-25 | Merge pull request #2160 from TorosFanny/master | Jeff Donahue | 1 | -1/+1
2015-03-24 | replace cuDNN alphas and betas with coefficient values | Evan Shelhamer | 1 | -5/+9
2015-03-24 | switch to cuDNN R2 | Simon Layton | 4 | -23/+25
2015-03-19 | change resorce to resource | TorosFanny | 1 | -1/+1
2015-03-13 | shuffle data | wieschol | 1 | -0/+2
2015-03-11 | PReLU Layer and its tests | Takuya Narihira | 1 | -0/+84
2015-03-09 | AccuracyLayer: add ignore_label param | max argus | 1 | -0/+6
2015-03-09 | Fixup AccuracyLayer like SoftmaxLossLayer in #1970 -- fixes #2063 | Jeff Donahue | 1 | -0/+1
2015-03-07 | whitespace in common.hpp | Jonathan L Long | 1 | -1/+1
2015-03-06 | Fix references to plural names in API documentation | Christos Nikolaou | 2 | -6/+6
2015-03-04 | expose Solver::Restore() as public and Solver.restore() in pycaffe | Evan Shelhamer | 1 | -4/+4
2015-03-04 | include/caffe/common.hpp: add <climits> for INT_MAX (now in blob.hpp) | Jeff Donahue | 1 | -0/+1
2015-03-03 | Add option not to reshape to Blob::FromProto; use when loading Blobs | Jeff Donahue | 1 | -1/+1
2015-03-03 | SoftmaxLossLayer generalized like SoftmaxLayer | Jeff Donahue | 1 | -0/+2
2015-03-03 | SoftmaxLayer: generalized Blob axes | Jeff Donahue | 1 | -0/+3
2015-03-03 | SliceLayer: generalized Blob axes | Jeff Donahue | 1 | -5/+3
2015-03-03 | ConcatLayer: generalized Blob axes | Jeff Donahue | 1 | -9/+7
2015-03-03 | common_layers.hpp: remove unused "Blob col_bob_" | Jeff Donahue | 1 | -2/+0
2015-03-03 | FlattenLayer: generalized Blob axes | Jeff Donahue | 1 | -6/+0
2015-03-03 | Fix sparse GaussianFiller for new IPLayer weight axes | Jeff Donahue | 1 | -3/+2
2015-03-03 | add offset, {data,diff}_at nd blob accessors | Jeff Donahue | 1 | -2/+24
2015-03-03 | Add BlobShape message; use for Net input shapes | Jeff Donahue | 1 | -0/+1
2015-03-03 | Blobs are ND arrays (for N not necessarily equals 4). | Jeff Donahue | 1 | -21/+125
2015-02-19 | [docs] add check mode hint to CPU-only mode error | Evan Shelhamer | 1 | -1/+1
2015-02-19 | Merge pull request #1910 from philkr/encoded | Evan Shelhamer | 1 | -0/+2
2015-02-19 | Repeal revert of #1878 | Evan Shelhamer | 1 | -23/+13
2015-02-19 | added a force_encoded_color flag to the data layer. Printing a warning if ima... | philkr | 1 | -0/+2
2015-02-19 | Revert "Merge pull request #1878 from philkr/encoded" | Evan Shelhamer | 1 | -13/+23
2015-02-17 | comment fix: Decaf -> Caffe | Jonathan L Long | 1 | -1/+1
2015-02-17 | [pycaffe] fix bug in Python layer setup | Jonathan L Long | 1 | -1/+1
2015-02-17 | construct Net from file and phase | Evan Shelhamer | 1 | -1/+1
2015-02-17 | pass phase to transformer through layer | Evan Shelhamer | 2 | -4/+3
2015-02-17 | give phase to Net and Layer | Evan Shelhamer | 6 | -12/+15
2015-02-16 | [pycaffe] allow Layer to be extended from Python | Jonathan L Long | 1 | -0/+68
2015-02-16 | LayerRegistry uses shared_ptr instead of raw pointers | Jonathan L Long | 1 | -5/+6
2015-02-16 | Merge pull request #1878 from philkr/encoded | Evan Shelhamer | 1 | -23/+13
2015-02-16 | improve CMake build | Anatoly Baksheev | 1 | -1/+1
2015-02-16 | Cleaning up the encoded flag. Allowing any image (cropped or gray scale) to b... | philkr | 1 | -23/+13
2015-02-13 | Add gradient clipping -- limit L2 norm of parameter gradients | Jeff Donahue | 1 | -0/+1
2015-02-13 | add Net::param_owners accessor for param sharing info | Jeff Donahue | 1 | -0/+1
2015-02-13 | Blob: add scale_{data,diff} methods and tests | Jeff Donahue | 1 | -0/+5
2015-02-13 | SoftmaxWithLossLayer fix: takes exactly 2 bottom blobs (inherited from | Jeff Donahue | 1 | -3/+0
2015-02-09 | Fixes for CuDNN layers: only destroy handles if setup | Jeff Donahue | 3 | -6/+12
2015-02-07 | Allow using arrays with n_ * size_ > 2^31 | Dmitry Ulyanov | 1 | -1/+1
2015-02-06 | groom #1416 | Evan Shelhamer | 1 | -3/+1
2015-02-06 | removed needs_reshape_ and ChangeBatchSize is now set_batch_size | manuele | 1 | -2/+1
2015-02-06 | MemoryDataLayer now correctly consumes batch_size elements | manuele | 2 | -1/+3
2015-02-06 | MemoryDataLayer now accepts dynamic batch_size | manuele | 1 | -0/+1
2015-02-06 | Added opencv vector<Mat> to memory data layer with tests | manuele | 2 | -0/+15
2015-02-06 | Added GPU implementation of SoftmaxWithLossLayer. | Sagan Bolliger | 1 | -2/+5