Age | Commit message | Author | Files | Lines
---|---|---|---|---
2018-01-31 | v0.5.0 | huifang | 19 | -421/+934
2017-08-26 | add support for ACL batch norm, direct conv, local connect, and concat layers | honggui | 7 | -51/+362
2017-06-02 | Porting Caffe onto ARM Compute Library; the release version is 0.2.0 | Yao Honggui | 17 | -3/+968
2017-03-07 | Merge pull request #4630 from BlGene/load_hdf5_fix: made load_hd5 check blob dims by default, instead of reshaping | Evan Shelhamer | 1 | -2/+2
2017-01-18 | Merge pull request #5098 from yaronli/master: check leveldb iterator status for snappy format | Evan Shelhamer | 1 | -1/+4
2017-01-17 | Merge pull request #4563 from cypof/nccl: adopt NVIDIA's NCCL for multi-GPU and switch the interface to Python | Evan Shelhamer | 15 | -233/+166
2017-01-06 | Using default from proto for prefetch | Cyprien Noel | 1 | -3/+0
2017-01-06 | Python layers should build on multiprocess & solver_cnt; enable with bindings | Marian Gläser | 1 | -1/+1
2017-01-06 | Switched multi-GPU to NCCL | Cyprien Noel | 15 | -231/+167
2016-12-21 | Use mkl_malloc when using MKL | Tomasz Socha | 1 | -0/+12
2016-12-16 | check leveldb iterator status for snappy format | liyangguang | 1 | -1/+4
2016-11-25 | Revert "solver: check and set type to reconcile class and proto". As pointed out by #5028, this does not achieve what it intended and furthermore causes trouble with direct solver instantiation; reverts commit e52451de914312b80a83459cb160c2f72a5b4fea | Evan Shelhamer | 1 | -2/+0
2016-11-21 | solver: check and set type to reconcile class and proto. The solver checks its proto type (SolverParameter.type) on instantiation: if the proto type is unspecified, it is set according to the class type `Solver::type()`; if the proto type and class type conflict, the solver dies loudly. This helps avoid accidentally instantiating a different solver type than intended when the solver def and class differ; guaranteed type information in the SolverParameter will simplify multi-solver coordination too. | Evan Shelhamer | 1 | -0/+2
2016-11-16 | sigmoid cross-entropy loss: normalize loss by different schemes. Sig-ce loss handles all the same normalizations as the softmax loss; refer to #3296 for more detail. This preserves the default normalization for sig-ce loss: batch size. | Evan Shelhamer | 1 | -0/+11
2016-11-15 | sigmoid cross-entropy loss: ignore selected targets by `ignore_label`. Sig-ce learns to ignore by zeroing out the loss/diff at targets equal to the configured `ignore_label`. N.B.: as of now the loss/diff are not properly normalized when there are ignored targets; sig-ce loss should adopt the same normalization options as softmax loss. | Evan Shelhamer | 1 | -0/+5
2016-11-01 | corrected typo in accuracy_layer.hpp: MaxTopBlos -> MaxTopBlobs | baecchi | 1 | -1/+1
2016-10-27 | sigmoid cross-entropy loss: add GPU forward for full GPU mode (closes #3004) | Evan Shelhamer | 1 | -0/+2
2016-10-22 | Fix: made load_hd5 check blob dims by default. Size checks are needed when loading parameters to avoid strange bugs; when loading data we continue to reshape. | max argus | 1 | -2/+2
2016-09-12 | batch norm: auto-upgrade old layer definitions w/ param messages. Automatically strip old batch norm layer definitions, including `param` messages. The batch norm layer used to require manually masking its state from the solver by setting `param { lr_mult: 0 }` messages for each of its statistics; this is now handled automatically by the layer. | Evan Shelhamer | 1 | -0/+6
2016-09-12 | batch norm: hide statistics from solver, simplifying layer definition. Batch norm statistics are not learnable parameters subject to solver updates, so they must be shielded from the solver. The `BatchNorm` layer now masks its statistics for itself by zeroing parameter learning rates instead of relying on the layer definition. N.B.: declaring `param`s for batch norm layers is no longer allowed. | Evan Shelhamer | 1 | -4/+2
2016-09-12 | [docs] identify batch norm layer blobs | Evan Shelhamer | 1 | -11/+12
2016-09-09 | [docs] clarify handling of bias and scaling by BiasLayer, ScaleLayer. A bias/scaling can be applied wherever desired by defining the respective layers, and `ScaleLayer` can handle both as a memory optimization. | Evan Shelhamer | 3 | -15/+15
2016-08-29 | Merge pull request #4647 from ClimbsRocks/patch-3: changes "c++" to "C++" for consistency | Jeff Donahue | 1 | -1/+1
2016-08-29 | Merge pull request #4646 from ClimbsRocks/patch-2: fixes typo (duplicate "a a") | Jeff Donahue | 1 | -1/+1
2016-08-28 | changes "c++" to "C++" for consistency | Preston Parry | 1 | -1/+1
2016-08-28 | fixes typo (duplicate "a a") | Preston Parry | 1 | -1/+1
2016-08-28 | updates tense in docs. "Could" seems to imply that something is blocking one from calling the registered layers; "can" states more directly that a user may choose to do this. | Preston Parry | 1 | -1/+1
2016-08-18 | Merge pull request #3272 from ixartz/master: [cmake] OS X 10.10 (and later) uses the Accelerate framework instead of vecLib | Evan Shelhamer | 1 | -0/+5
2016-06-03 | Add level and stages to Net constructor. This internal functionality will be exposed through the various interfaces in subsequent commits; also adds C++ tests for all-in-one nets. | Luke Yeager | 1 | -0/+1
2016-06-01 | Add LSTMLayer and LSTMUnitLayer, with tests | Jeff Donahue | 1 | -0/+154
2016-06-01 | Add RNNLayer, with tests | Jeff Donahue | 1 | -0/+47
2016-06-01 | Add RecurrentLayer: an abstract superclass for other recurrent layer types | Jeff Donahue | 1 | -0/+187
2016-05-16 | Add cuDNN v5 support, drop cuDNN v3 support; cuDNN v4 is still supported | Felix Abecassis | 4 | -3/+24
2016-05-04 | add parameter layer for learning any bottom | Jonathan L Long | 1 | -0/+45
2016-05-04 | Merge pull request #3995 from ZhouYzzz/python-phase: allow the Python layer to have the attribute "phase" | Jon Long | 1 | -0/+1
2016-04-20 | Don't set map_size=1TB in util/db_lmdb; instead, double the map size on the MDB_MAP_FULL exception | Luke Yeager | 1 | -5/+8
2016-04-15 | Allow the Python layer to have the attribute "phase" | ZhouYzzz | 1 | -0/+1
2016-04-14 | CropLayer: groom comments | Evan Shelhamer | 1 | -0/+9
2016-03-05 | Merge pull request #3590 from junshi15/GPUUtilities: add functions to check and grab GPU | Jon Long | 1 | -0/+5
2016-03-05 | Merge pull request #3588 from junshi15/P2psyncPrepare: refine P2PSync | Jon Long | 1 | -1/+4
2016-03-05 | split p2psync::run() | Jun Shi | 1 | -1/+4
2016-03-05 | Crop: fixes, tests and negative axis indexing | max argus | 1 | -2/+2
2016-03-05 | Extend Crop to N-D, changed CropParameter | max argus | 1 | -2/+20
2016-03-05 | add CropLayer: crop blob to another blob's dimensions with offsets; configure offset(s) through the proto definition | Jonathan L Long | 1 | -0/+49
2016-03-04 | add check and find GPU device utilities | Jun Shi | 1 | -0/+5
2016-02-26 | Deprecate ForwardPrefilled(), Forward(bottom, loss) in lieu of dropping; relax removal of `Forward()` variations by deprecating instead | Evan Shelhamer | 1 | -0/+9
2016-02-25 | collect Net inputs from Input layers. Restore the list of net inputs for compatibility with the pycaffe and matcaffe interfaces and downstream C++ | Evan Shelhamer | 1 | -2/+11
2016-02-25 | drop Net inputs + Forward with bottoms. Drop special cases for `input` fields, the `Net` input members, and the `Net` interface for Forward with bottoms, along with the Forward() / ForwardPrefilled() distinction | Evan Shelhamer | 1 | -27/+7
2016-02-25 | deprecate input fields and upgrade automagically | Evan Shelhamer | 1 | -0/+6
2016-02-25 | add InputLayer for Net input. Create an input layer to replace oddball Net `input` fields | Evan Shelhamer | 1 | -0/+44
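One entry above (2016-04-20) describes a small reusable pattern: instead of reserving a fixed 1 TB LMDB `map_size` up front, double the map size whenever a write fails with MDB_MAP_FULL and retry. The sketch below illustrates that grow-and-retry strategy in miniature; it is not Caffe's actual code, and `MapFullError`/`FakeDB` are hypothetical stand-ins for lmdb's error and environment.

```python
class MapFullError(Exception):
    """Stand-in for lmdb's MDB_MAP_FULL error (hypothetical)."""


class FakeDB:
    """Toy database that rejects writes once its map is full (hypothetical)."""

    def __init__(self, map_size):
        self.map_size = map_size
        self.used = 0

    def put(self, nbytes):
        if self.used + nbytes > self.map_size:
            raise MapFullError()
        self.used += nbytes


def put_with_resize(db, nbytes):
    """Write, doubling the map size until the write fits --
    the strategy the 2016-04-20 commit adopts instead of a fixed 1 TB map."""
    while True:
        try:
            db.put(nbytes)
            return
        except MapFullError:
            db.map_size *= 2  # grow geometrically, then retry the write


db = FakeDB(map_size=8)
put_with_resize(db, 32)  # forces two doublings: 8 -> 16 -> 32
print(db.map_size)       # 32
```

Geometric growth keeps the number of retries logarithmic in the final size while never over-committing address space the way a fixed huge map does.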