Age | Commit message | Author | Files | Lines | |
---|---|---|---|---|---|
2016-04-20 | Don't set map_size=1TB in util/db_lmdb | Luke Yeager | 1 | -13/+52 | |
Instead, double the map size on the MDB_MAP_FULL exception. | |||||
2016-04-14 | CropLayer: groom comments | Evan Shelhamer | 2 | -23/+8 | |
2016-04-14 | [fix] CropLayer: check dimension bounds only for cropped dimensions | Evan Shelhamer | 1 | -5/+4 | |
check only the dimensions to be cropped for compatible sizes and offsets | |||||
2016-04-14 | [test] CropLayer: test dimensions check to reveal bounds checking bug | Evan Shelhamer | 1 | -0/+18 | |
2016-04-08 | Fix exp layer with base e | emmanuel maggiori | 2 | -1/+22 | |
2016-04-05 | Net: setting `propagate_down: true` forces backprop | Jeff Donahue | 2 | -6/+10 | |
2016-04-04 | test_net.cpp: add TestForcePropagateDown | Jeff Donahue | 1 | -0/+102 | |
2016-03-05 | Merge pull request #3590 from junshi15/GPUUtilities | Jon Long | 1 | -0/+42 | |
Add functions to check and grab GPU | |||||
2016-03-05 | Merge pull request #3588 from junshi15/P2psyncPrepare | Jon Long | 2 | -8/+14 | |
Refine P2PSync | |||||
2016-03-05 | split p2psync::run() | Jun Shi | 2 | -8/+14 | |
2016-03-05 | Crop: more tests and test tuning. | Evan Shelhamer | 1 | -73/+110 | |
Changes are: reduce test blob dims for speed; use standard Gaussian filler; polish formatting and rename tests; test HW crop and 5D crop; standard gradient checks. | |||||
2016-03-05 | Crop: fixes, tests and negative axis indexing. | max argus | 4 | -26/+261 | |
2016-03-05 | Extend Crop to N-D, changed CropParameter. | max argus | 3 | -50/+190 | |
2016-03-05 | add CropLayer: crop blob to another blob's dimensions with offsets | Jonathan L Long | 3 | -1/+147 | |
configure offset(s) through proto definition. | |||||
2016-03-04 | add check and find GPU device utilities | Jun Shi | 1 | -0/+42 | |
2016-02-29 | refuse to upgrade net with layer/layers inconsistency | Evan Shelhamer | 1 | -4/+6 | |
die loudly if a net definition (prototxt) mixes proto formats by defining both `layer` and `layers` fields instead of complaining but discarding and continuing. fix #3381 | |||||
2016-02-29 | fix input field -> input layer net upgrade: only convert full defs | Evan Shelhamer | 1 | -20/+26 | |
convert inputs in legacy definitions (prototxt), but simply strip inputs from legacy weights (caffemodel). fix #3750 | |||||
2016-02-29 | check all net upgrade conditions | Evan Shelhamer | 1 | -1/+2 | |
check all conditions all the time; V0 -> V1 and V1 -> V2 do not suffice. | |||||
2016-02-28 | Merge pull request #3725 from shaibagon/drop_nd_blobs | Jeff Donahue | 1 | -2/+2 | |
supporting N-D Blobs in Dropout layer Reshape | |||||
2016-02-28 | supporting N-D Blobs in Dropout layer Reshape | shai | 1 | -2/+2 | |
fixing lint errors | |||||
2016-02-26 | Deprecate ForwardPrefilled(), Forward(bottom, loss) in lieu of dropping | Evan Shelhamer | 1 | -0/+12 | |
Relax removal of `Forward()` variations by deprecating instead. | |||||
2016-02-25 | collect Net inputs from Input layers | Evan Shelhamer | 1 | -0/+6 | |
Restore the list of net inputs for compatibility with the pycaffe and matcaffe interfaces and downstream C++. | |||||
2016-02-25 | drop Net inputs + Forward with bottoms | Evan Shelhamer | 6 | -227/+51 | |
Drop special cases for `input` fields, the `Net` input members, and the `Net` interface for Forward with bottoms along with Forward() / ForwardPrefilled() distinction. | |||||
2016-02-25 | deprecate input fields and upgrade automagically | Evan Shelhamer | 2 | -3/+48 | |
2016-02-25 | add InputLayer for Net input | Evan Shelhamer | 2 | -2/+38 | |
Create an input layer to replace oddball Net `input` fields. | |||||
2016-02-25 | Merge pull request #3612 from kashefy/tied_weights_ip_transpose | Jeff Donahue | 4 | -15/+303 | |
Tied weights with transpose flag for InnerProduct layer | |||||
2016-02-20 | transpose parameter added to IP layer to support tied weights in an autoencoder | Youssef Kashef | 4 | -15/+303 | |
Arguments to the matrix multiplication function are conditioned on this parameter; no actual transposing takes place. Test IP gradient computation with transpose on. | |||||
2016-02-15 | Remove useless LevelDB include | Felix Abecassis | 1 | -1/+0 | |
The tests could not compile with USE_LEVELDB=0 and LevelDB missing from the system | |||||
2016-02-02 | Nicely prints GPU names | Sergei Nikolaev | 1 | -0/+1 | |
2016-01-26 | Remove incorrect cast of gemm int arg to Dtype in BiasLayer | Jeff Donahue | 1 | -1/+1 | |
2016-01-26 | Merge pull request #3591 from jeffdonahue/scale-bias-layer | Evan Shelhamer | 7 | -1/+1580 | |
Scale and Bias Layers | |||||
2016-01-26 | Merge pull request #3602 from jeffdonahue/rm-cuda-props | Jeff Donahue | 2 | -6/+0 | |
Remove unnecessary CAFFE_TEST_CUDA_PROP declarations | |||||
2016-01-26 | Merge pull request #3132 from bwilbertz/LastIterationLoss | Jeff Donahue | 1 | -13/+24 | |
Fix loss of last iteration when average_loss > 1 | |||||
2016-01-26 | Remove unnecessary CAFFE_TEST_CUDA_PROP declarations | Jeff Donahue | 2 | -6/+0 | |
2016-01-26 | Prevent in-place computation in ReshapeLayer and FlattenLayer | Kang Kim | 2 | -0/+4 | |
2016-01-26 | Merge pull request #3496 from jeffdonahue/fix-testdatatransformer-leaks | Jeff Donahue | 1 | -74/+62 | |
TestDataTransformer: fix some memory leaks | |||||
2016-01-22 | Separation and generalization of ChannelwiseAffineLayer into BiasLayer | Jeff Donahue | 10 | -448/+1577 | |
and ScaleLayer. The behavior of ChannelwiseAffineLayer can be reproduced by a ScaleLayer with `scale_param { bias_term: true }`. BiasLayer and ScaleLayer each take 1 or 2 bottoms, with the output having the same shape as the first. The second input -- either another bottom or a learned parameter -- will have its axes (virtually) broadcast and tiled to have the same shape as the first, after which elementwise addition (Bias) or multiplication (Scale) is performed. | |||||
2016-01-22 | Version 1.0.0-rc3 | Luke Yeager | 1 | -0/+4 | |
2016-01-22 | Add ChannelwiseAffine for batch norm | Dmytro Mishkin | 4 | -1/+451 | |
2016-01-22 | Merge pull request #3388 from mohomran/exponential_linear_units | Evan Shelhamer | 4 | -1/+178 | |
Exponential Linear Units | |||||
2016-01-20 | Merge pull request #3536 from intelcaffe/im2col-speedup | Jon Long | 1 | -32/+61 | |
Performance related update of im2col() and col2im() functions | |||||
2016-01-20 | Workaround for inplace max pooling issue | thatguymike | 1 | -1/+10 | |
2016-01-20 | Performance related update of im2col() and col2im() functions | Mariusz Moczala | 1 | -32/+61 | |
2016-01-05 | Speeding up the GPU solvers | philkr | 12 | -161/+223 | |
2015-12-29 | TestDataTransformer: fix some memory leaks caused by use of 'new' | Jeff Donahue | 1 | -74/+62 | |
2015-12-28 | remove extra space before + | Fisher Yu | 1 | -1/+1 | |
2015-12-28 | enable dilated deconvolution | Jonathan L Long | 2 | -4/+3 | |
Since the underlying routines are shared, we need only upgrade compute_output_shape. | |||||
2015-12-28 | add short description of dilation to caffe.proto | Jonathan L Long | 1 | -0/+3 | |
2015-12-28 | disable dilated deconvolution | Fisher Yu | 1 | -0/+3 | |
2015-12-28 | add and improve tests for dilated convolution/im2col | Fisher Yu | 2 | -6/+163 | |
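The 2016-04-05 entry makes `propagate_down: true` force backprop through a bottom rather than merely permit it. A hypothetical prototxt fragment (layer and blob names are illustrative; `propagate_down` is given once per bottom):

```
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "fc8"
  bottom: "label"
  top: "loss"
  # One flag per bottom: force gradients to flow to "fc8",
  # and never to "label".
  propagate_down: true
  propagate_down: false
}
```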