path: root/src
Age         Commit message  (author, files changed, lines deleted/added)
2016-04-20  Don't set map_size=1TB in util/db_lmdb  (Luke Yeager, 1 file, -13/+52)
Instead, double the map size on the MDB_MAP_FULL exception.
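The sketch below illustrates the strategy against the public LMDB C API; it is an approximation of the idea, not the actual util/db_lmdb code, and the helper name is made up.

```cpp
#include <lmdb.h>

// Illustrative only: grow the memory map on demand instead of reserving 1TB.
int PutWithGrowth(MDB_env* env, MDB_dbi dbi, MDB_val* key, MDB_val* val,
                  size_t* map_size) {
  for (;;) {
    MDB_txn* txn = nullptr;
    mdb_txn_begin(env, nullptr, 0, &txn);
    int rc = mdb_put(txn, dbi, key, val, 0);
    if (rc == MDB_SUCCESS) rc = mdb_txn_commit(txn); else mdb_txn_abort(txn);
    if (rc != MDB_MAP_FULL) return rc;   // success, or an unrelated error
    *map_size *= 2;                      // double the map and retry the write
    mdb_env_set_mapsize(env, *map_size);
  }
}
```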
2016-04-14  CropLayer: groom comments  (Evan Shelhamer, 2 files, -23/+8)
2016-04-14  [fix] CropLayer: check dimension bounds only for cropped dimensions  (Evan Shelhamer, 1 file, -5/+4)
check only the dimensions to be cropped for compatible sizes and offsets
2016-04-14  [test] CropLayer: test dimensions check to reveal bounds checking bug  (Evan Shelhamer, 1 file, -0/+18)
2016-04-08  Solving issue with exp layer with base e  (emmanuel maggiori, 2 files, -1/+22)
2016-04-05  Net: setting `propagate_down: true` forces backprop  (Jeff Donahue, 2 files, -6/+10)
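For reference, a hypothetical prototxt fragment using the per-bottom field that this change makes binding; the layer and blob names are made up:

```
layer {
  name: "loss"
  type: "EuclideanLoss"
  bottom: "prediction"
  bottom: "label"
  propagate_down: true   # force backprop to "prediction"
  propagate_down: false  # never backprop to "label"
}
```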
2016-04-04  test_net.cpp: add TestForcePropagateDown  (Jeff Donahue, 1 file, -0/+102)
2016-03-05  Merge pull request #3590 from junshi15/GPUUtilities  (Jon Long, 1 file, -0/+42)
Add functions to check and grab GPU
2016-03-05  Merge pull request #3588 from junshi15/P2psyncPrepare  (Jon Long, 2 files, -8/+14)
Refine P2PSync
2016-03-05  split p2psync::run()  (Jun Shi, 2 files, -8/+14)
2016-03-05  Crop: more tests and test tuning.  (Evan Shelhamer, 1 file, -73/+110)
Changes are: reduce test blob dims for speed, use standard Gaussian filler, polish formatting and rename tests, test HW crop and 5D crop, standard gradient checks.
2016-03-05  Crop: fixes, tests and negative axis indexing.  (max argus, 4 files, -26/+261)
2016-03-05  Extend Crop to N-D, changed CropParameter.  (max argus, 3 files, -50/+190)
2016-03-05  add CropLayer: crop blob to another blob's dimensions with offsets  (Jonathan L Long, 3 files, -1/+147)
configure offset(s) through proto definition.
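As an illustration only (blob names are invented), cropping one blob to another's dimensions with an offset can be configured like this once the layer and its CropParameter are in place:

```
layer {
  name: "crop"
  type: "Crop"
  bottom: "score"        # blob to be cropped
  bottom: "data"         # reference blob supplying the target dimensions
  top: "score_cropped"
  crop_param {
    axis: 2              # crop dimension 2 and onward (H and W for 4-D blobs)
    offset: 19           # one offset applied to every cropped dimension
  }
}
```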
2016-03-04  add check and find GPU device utilities  (Jun Shi, 1 file, -0/+42)
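Assumed usage of the helpers added here; I take them to be static members of caffe::Caffe next to the existing SetDevice/set_mode, so treat the exact names and return conventions as assumptions:

```cpp
#include "caffe/common.hpp"

// Pick the first usable GPU, falling back to CPU mode if none is found.
void SelectDevice() {
  const int gpu = caffe::Caffe::FindDevice(0);       // assumed to return -1 if none
  if (gpu >= 0 && caffe::Caffe::CheckDevice(gpu)) {  // assumed availability check
    caffe::Caffe::SetDevice(gpu);
    caffe::Caffe::set_mode(caffe::Caffe::GPU);
  } else {
    caffe::Caffe::set_mode(caffe::Caffe::CPU);
  }
}
```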
2016-02-29  refuse to upgrade net with layer/layers inconsistency  (Evan Shelhamer, 1 file, -4/+6)
die loudly if a net definition (prototxt) mixes proto formats by defining both `layer` and `layers` fields, instead of just complaining, discarding, and continuing. fix #3381
2016-02-29  fix input field -> input layer net upgrade: only convert full defs  (Evan Shelhamer, 1 file, -20/+26)
convert inputs in legacy definitions (prototxt), but simply strip inputs from legacy weights (caffemodel). fix #3750
2016-02-29  check all net upgrade conditions  (Evan Shelhamer, 1 file, -1/+2)
check all conditions all the time; V0 -> V1 and V1 -> V2 do not suffice.
2016-02-28  Merge pull request #3725 from shaibagon/drop_nd_blobs  (Jeff Donahue, 1 file, -2/+2)
supporting N-D Blobs in Dropout layer Reshape
2016-02-28  supporting N-D Blobs in Dropout layer Reshape  (shai, 1 file, -2/+2)
fixing lint errors
2016-02-26  Deprecate ForwardPrefilled(), Forward(bottom, loss) in lieu of dropping  (Evan Shelhamer, 1 file, -0/+12)
Relax removal of `Forward()` variations by deprecating instead.
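In client code the shift looks roughly like this; a sketch only, with the deprecated variants left as comments:

```cpp
#include "caffe/net.hpp"

float RunForward(caffe::Net<float>& net) {
  float loss = 0.0f;
  net.Forward(&loss);                 // preferred: inputs come from Input layers
  // net.ForwardPrefilled(&loss);     // deprecated alias of the call above
  // net.Forward(bottom_vec, &loss);  // deprecated: Net-level bottoms are going away
  return loss;
}
```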
2016-02-25  collect Net inputs from Input layers  (Evan Shelhamer, 1 file, -0/+6)
Restore the list of net inputs for compatibility with the pycaffe and matcaffe interfaces and downstream C++.
2016-02-25  drop Net inputs + Forward with bottoms  (Evan Shelhamer, 6 files, -227/+51)
Drop special cases for `input` fields, the `Net` input members, and the `Net` interface for Forward with bottoms along with Forward() / ForwardPrefilled() distinction.
2016-02-25  deprecate input fields and upgrade automagically  (Evan Shelhamer, 2 files, -3/+48)
2016-02-25  add InputLayer for Net input  (Evan Shelhamer, 2 files, -2/+38)
Create an input layer to replace oddball Net `input` fields.
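A representative deploy-style definition with the new layer; the shape values are illustrative:

```
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 3 dim: 227 dim: 227 } }
}
```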
2016-02-25  Merge pull request #3612 from kashefy/tied_weights_ip_transpose  (Jeff Donahue, 4 files, -15/+303)
Tied weights with transpose flag for InnerProduct layer
2016-02-20  transpose parameter added to IP layer to support tied weights in an autoencoder  (Youssef Kashef, 4 files, -15/+303)
Arguments to the matrix multiplication function are conditioned on this parameter; no actual transposing takes place. Tests IP gradient computation with transpose on.
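A hypothetical tied-weight autoencoder fragment: the decoder names the same weight blob as the encoder and sets the transpose flag, so the shared matrix is consumed as its transpose without being copied. All names and sizes below are made up.

```
layer {
  name: "encode"
  type: "InnerProduct"
  bottom: "data"
  top: "code"
  param { name: "tied_w" }   # shared weight blob
  inner_product_param { num_output: 64 }
}
layer {
  name: "decode"
  type: "InnerProduct"
  bottom: "code"
  top: "reconstruction"
  param { name: "tied_w" }   # same blob, read as its transpose
  inner_product_param { num_output: 784 transpose: true }
}
```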
2016-02-15  Remove useless LevelDB include  (Felix Abecassis, 1 file, -1/+0)
The tests could not compile with USE_LEVELDB=0 and LevelDB missing from the system
2016-02-02  Nicely prints GPU names  (Sergei Nikolaev, 1 file, -0/+1)
2016-01-26  Remove incorrect cast of gemm int arg to Dtype in BiasLayer  (Jeff Donahue, 1 file, -1/+1)
2016-01-26  Merge pull request #3591 from jeffdonahue/scale-bias-layer  (Evan Shelhamer, 7 files, -1/+1580)
Scale and Bias Layers
2016-01-26  Merge pull request #3602 from jeffdonahue/rm-cuda-props  (Jeff Donahue, 2 files, -6/+0)
Remove unnecessary CAFFE_TEST_CUDA_PROP declarations
2016-01-26  Merge pull request #3132 from bwilbertz/LastIterationLoss  (Jeff Donahue, 1 file, -13/+24)
Fix loss of last iteration when average_loss > 1
2016-01-26  Remove unnecessary CAFFE_TEST_CUDA_PROP declarations  (Jeff Donahue, 2 files, -6/+0)
2016-01-26  Prevent in-place computation in ReshapeLayer and FlattenLayer  (Kang Kim, 2 files, -0/+4)
2016-01-26  Merge pull request #3496 from jeffdonahue/fix-testdatatransformer-leaks  (Jeff Donahue, 1 file, -74/+62)
TestDataTransformer: fix some memory leaks
2016-01-22  Separation and generalization of ChannelwiseAffineLayer into BiasLayer and ScaleLayer  (Jeff Donahue, 10 files, -448/+1577)
The behavior of ChannelwiseAffineLayer can be reproduced by a ScaleLayer with `scale_param { bias_term: true }`. BiasLayer and ScaleLayer each take 1 or 2 bottoms, with the output having the same shape as the first. The second input -- either another bottom or a learned parameter -- will have its axes (virtually) broadcast and tiled to have the same shape as the first, after which elementwise addition (Bias) or multiplication (Scale) is performed.
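Per the note above, the old ChannelwiseAffineLayer behavior is recovered with a configuration along these lines (blob names are illustrative):

```
layer {
  name: "channel_affine"
  type: "Scale"
  bottom: "conv1"
  top: "conv1_scaled"
  scale_param {
    axis: 1            # broadcast the learned scale over the channel axis
    bias_term: true    # also learn a per-channel bias
  }
}
```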
2016-01-22  Version 1.0.0-rc3  (Luke Yeager, 1 file, -0/+4)
2016-01-22  Add ChannelwiseAffine for batch norm  (Dmytro Mishkin, 4 files, -1/+451)
2016-01-22  Merge pull request #3388 from mohomran/exponential_linear_units  (Evan Shelhamer, 4 files, -1/+178)
Exponential Linear Units
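For context, the unit being merged, as a minimal scalar sketch (the layer's alpha parameter defaults to 1):

```cpp
#include <cmath>

// ELU: identity for positive inputs, a saturating exponential scaled by alpha otherwise.
inline float elu(float x, float alpha = 1.0f) {
  return x > 0.0f ? x : alpha * (std::exp(x) - 1.0f);
}
```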
2016-01-20  Merge pull request #3536 from intelcaffe/im2col-speedup  (Jon Long, 1 file, -32/+61)
Performance related update of im2col() and col2im() functions
2016-01-20  Workaround for inplace max pooling issue  (thatguymike, 1 file, -1/+10)
2016-01-20  Performance related update of im2col() and col2im() functions  (Mariusz Moczala, 1 file, -32/+61)
2016-01-05  Speeding up the GPU solvers  (philkr, 12 files, -161/+223)
2015-12-29  TestDataTransformer: fix some memory leaks caused by use of 'new'  (Jeff Donahue, 1 file, -74/+62)
2015-12-28  remove extra space before +  (Fisher Yu, 1 file, -1/+1)
2015-12-28  enable dilated deconvolution  (Jonathan L Long, 2 files, -4/+3)
Since the underlying routines are shared, we need only upgrade compute_output_shape.
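A sketch of the per-axis relation the upgraded compute_output_shape has to honor once dilation is allowed: the effective kernel extent grows with the dilation factor.

```cpp
// Output spatial extent of one deconvolution axis when the kernel is dilated.
inline int DeconvOutputDim(int input_dim, int kernel, int stride, int pad,
                           int dilation) {
  const int kernel_extent = dilation * (kernel - 1) + 1;  // dilated kernel footprint
  return stride * (input_dim - 1) + kernel_extent - 2 * pad;
}
```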
2015-12-28  add short description of dilation to caffe.proto  (Jonathan L Long, 1 file, -0/+3)
2015-12-28  disable dilated deconvolution  (Fisher Yu, 1 file, -0/+3)
2015-12-28  add and improve tests for dilated convolution/im2col  (Fisher Yu, 2 files, -6/+163)