Commit log
Get back 'USE CPU' print for caffe train
OpenCV, LMDB, LevelDB and Snappy are made optional via switches
(USE_OPENCV, USE_LMDB, USE_LEVELDB) available for Make and CMake
builds. Since Snappy is a LevelDB dependency, its use is determined by
USE_LEVELDB. HDF5 is left bundled because it is used for serializing
weights and solverstates.
Add signal handler and early exit/snapshot to Solver.
Also check for exit and snapshot when testing; skip running the test after an early exit.
Address review comments, fix lint, and correct error message wording.
- Parallelize batches among GPUs and tree-reduce the gradients
- The effective batch size scales with the number of devices (batch size is multiplied by the device count)
- Detect machine topology (twin-GPU boards, P2P connectivity)
- Track the device in syncedmem (thanks @thatguymike)
- Insert a callback in the solver for minimal code change
- Accept a list for the gpu flag of the caffe tool, e.g. '-gpu 0,1' or '-gpu all'; run on the default GPU if no ID is given
- Add a multi-GPU solver test
- Deterministic architecture for reproducible runs
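The tree-reduction of gradients described above can be sketched in plain Python. This is a hypothetical pairwise reduction over per-device gradient buffers, purely illustrative; the names and data layout are assumptions, not Caffe's actual C++ API:

```python
# Pairwise tree-reduce: in each round, device i absorbs the gradients of
# device i + stride, halving the number of active devices per round.
def tree_reduce(grads):
    """Sum per-device gradient lists into grads[0] (the root device)."""
    n = len(grads)
    stride = 1
    while stride < n:
        for i in range(0, n - stride, 2 * stride):
            src = grads[i + stride]
            grads[i] = [a + b for a, b in zip(grads[i], src)]
        stride *= 2
    return grads[0]

# Four "devices", each holding a local gradient for two parameters.
per_gpu = [[1.0, 2.0], [1.0, 2.0], [1.0, 2.0], [1.0, 2.0]]
print(tree_reduce(per_gpu))  # [4.0, 8.0]
```

The summed (rather than averaged) result mirrors why the effective batch size scales with the number of devices.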
Update parse_log.py
Handle Python layer exceptions correctly
Correct parsing (exponential notation learning rates were not being interpreted correctly)
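The failure mode here is a number pattern that matches plain decimals but not exponents. A minimal sketch of the kind of pattern needed (illustrative, not the parser's actual code; the `parse_lr` helper and the log-line format are assumptions):

```python
import re

# Matches plain decimals as well as exponential notation (1e-05, 2.5E+3).
FLOAT_RE = r'[-+]?\d*\.?\d+([eE][-+]?\d+)?'

def parse_lr(line):
    """Extract a learning rate from a log line, or None if absent."""
    m = re.search(r'lr = (' + FLOAT_RE + r')', line)
    return float(m.group(1)) if m else None

print(parse_lr('Iteration 1200, lr = 1e-05'))  # 1e-05
print(parse_lr('Iteration 100, lr = 0.01'))    # 0.01
```

A pattern like `\d+\.\d+` alone would silently truncate `1e-05` to `1`, which is the class of bug this commit fixes.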
Python log parser improvements
larger than 1
Correct the index for blob_loss_weights during output; previously it was mistakenly set to the test_score index.
Always call Layer::Reshape in Layer::Forward
Improvements over the version introduced in https://github.com/BVLC/caffe/pull/1384
Highlights:
* Interface change: column order is now determined by using a list of `OrderedDict` objects instead of `dict` objects, which obviates the need to pass around a tuple with the column orders.
* The outputs are now named according to their names in the network protobuffer; e.g., if your top is named `loss`, then the corresponding column header will also be `loss`; we no longer rename it to, e.g., `TrainingLoss` or `TestLoss`.
* Fixed the bug/feature of the first version where the initial learning rate was always NaN.
* Added an optional parameter to specify the output table delimiter; it's still a comma by default.
You can use Matlab code from [this gist](https://gist.github.com/drdan14/d8b45999c4a1cbf7ad85) to verify that your results are the same before and after the changes introduced in this pull request. That code assumes that your `top` names are `accuracy` and `loss`, but you can modify the code if that's not true.
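The `OrderedDict` interface change can be sketched as follows: each row's own key order supplies the column order, so no separate order tuple is passed around, and headers take the names used in the net (e.g. `loss`). The field names and values below are illustrative, not the tool's actual output:

```python
import csv
import io
from collections import OrderedDict

# Each row is an OrderedDict; its key order *is* the column order.
rows = [
    OrderedDict([('NumIters', 100), ('Seconds', 4.2), ('loss', 0.9)]),
    OrderedDict([('NumIters', 200), ('Seconds', 8.4), ('loss', 0.7)]),
]

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=list(rows[0].keys()), delimiter=',')
writer.writeheader()
writer.writerows(rows)
print(out.getvalue())
```

The header line comes out as `NumIters,Seconds,loss`, matching the protobuffer top names rather than renamed labels like `TrainingLoss`.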
There are no cases where Forward is called without Reshape, so we can
simplify the call structure.
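The simplified call structure can be sketched with a toy analogue (plain Python, not Caffe's C++ classes; the dict-based blob stand-in is an assumption):

```python
class Layer:
    """Toy layer: Forward always reshapes its top before computing."""

    def reshape(self, bottom, top):
        # Size the top blob to match the bottom blob (placeholder logic).
        top['shape'] = bottom['shape']

    def forward(self, bottom, top):
        self.reshape(bottom, top)  # callers never invoke reshape separately
        top['data'] = [float(x) for x in bottom['data']]
        return top

bottom = {'shape': (2,), 'data': [1, 2]}
top = {}
Layer().forward(bottom, top)
print(top['shape'])  # (2,)
```

Since every Forward begins with Reshape, a caller can never observe a stale top shape, which is what makes the separate pre-call removable.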
This reverts the encoding cleanup since it breaks data processing for
existing inputs as discussed in #1901.
[cmake] CMAKE_SOURCE/BINARY_DIR to PROJECT_SOURCE/BINARY_DIR
Groom handling of encoded image inputs
be encoded, allowing for a change in encoding (jpg -> png or vice versa) and cleaning up some unused functions.
Simple database wrappers
Nets are better serialized as a single binaryproto or saved however
desired through the Python and MATLAB interfaces.
Improve const-ness of Net
(this might better be handled by making all args flags...)
Output format is unchanged (except that csv.DictWriter insists on writing ints as 0.0 instead of 0)
Was previously using `Train net output #M: loss = X` lines, but there may not be exactly one of those (e.g., for GoogLeNet, which has multiple loss layers); I believe that `Iteration N, loss = X` is the aggregated loss.
If there's only one loss layer, these two values will be equal and it won't matter. Otherwise, we presumably want to report the aggregated loss.
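Matching the aggregated `Iteration N, loss = X` line rather than the per-output lines can be sketched like this (illustrative; the helper name and sample log lines are assumptions):

```python
import re

# Only the aggregated line carries "Iteration N, loss = X".
AGG_RE = re.compile(r'Iteration (\d+), loss = ([-+]?\d*\.?\d+([eE][-+]?\d+)?)')

def aggregated_loss(lines):
    """Return (iteration, loss) pairs from aggregated-loss lines only."""
    out = []
    for line in lines:
        m = AGG_RE.search(line)
        if m:
            out.append((int(m.group(1)), float(m.group(2))))
    return out

log = [
    'Iteration 100, loss = 2.5',
    'Train net output #0: loss1 = 1.0',  # per-layer output line, ignored
    'Train net output #1: loss2 = 1.5',
]
print(aggregated_loss(log))  # [(100, 2.5)]
```

For a net like GoogLeNet with several loss layers, this reports the single aggregated value instead of picking one `Train net output` line arbitrarily.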
Make timing more precise using double and microseconds
mean_values
Conflicts:
tools/convert_imageset.cpp
is now responsible for serialization. Refactored the tests so that they reuse the same code for each value type and backend configuration.
having each iterator hold a shared pointer to the DB. I manually specified a destructor for the LeveldbState to make clear the order in which these two things need to be deallocated.
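The lifetime concern reads as: an iterator must keep its database alive, and the two must be torn down in a defined order. A Python analogue of the idea (a hypothetical dict-backed wrapper, not the actual C++ LevelDB code):

```python
class DB:
    """Toy key-value store standing in for a LevelDB handle."""

    def __init__(self, records):
        self.records = dict(records)
        self.closed = False

    def close(self):
        self.closed = True


class Cursor:
    """Each cursor holds a reference to its DB, keeping it usable
    for as long as the cursor itself is alive."""

    def __init__(self, db):
        self.db = db  # strong reference: the DB outlives the cursor
        self.keys = sorted(db.records)
        self.pos = 0

    def value(self):
        assert not self.db.closed, 'DB closed while cursor in use'
        return self.db.records[self.keys[self.pos]]


db = DB({'a': 1, 'b': 2})
cur = Cursor(db)
print(cur.value())  # 1
```

In C++ the same guarantee needs an explicit shared pointer plus a destructor that releases the iterator before the DB handle, which is what the commit makes explicit.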