path: root/tools
2015-09-25  Fix parse_log.sh against "prefetch queue empty" messages  (Dmytro Mishkin, 1 file, -1/+6)
2015-09-17  Merge pull request #3074 from ronghanghu/show-use-cpu  (Ronghang Hu, 1 file, -0/+1)
Get back 'USE CPU' print for caffe train
2015-09-17  Separate IO dependencies  (Tea, 2 files, -0/+8)
OpenCV, LMDB, LevelDB and Snappy are made optional via switches (USE_OPENCV, USE_LMDB, USE_LEVELDB) available for Make and CMake builds. Since Snappy is a LevelDB dependency, its use is determined by USE_LEVELDB. HDF5 is left bundled because it is used for serializing weights and solverstates.
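A minimal sketch of the switches this commit describes, as a Makefile.config fragment. The variable names USE_OPENCV, USE_LMDB, and USE_LEVELDB come from the commit message itself; the `:=` assignment style is an assumption about the build file's conventions:

```make
# Build without the optional IO dependencies. Snappy is a LevelDB
# dependency, so disabling LevelDB also drops Snappy. HDF5 stays
# bundled because it serializes weights and solverstates.
USE_OPENCV := 0
USE_LMDB := 0
USE_LEVELDB := 0
```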
2015-09-16  Get back 'USE CPU' print for caffe train  (Ronghang Hu, 1 file, -0/+1)
2015-09-04  Update extract_features.cpp  (Lumin Zhou, 1 file, -1/+1)
2015-09-01  Show output from convert_imageset tool  (Luke Yeager, 1 file, -2/+4)
2015-08-22  Add signal handler and early exit/snapshot to Solver.  (J Yegerlehner, 1 file, -1/+31)
Add a signal handler and early exit/snapshot to Solver. Also check for exit and snapshot when testing, and skip running the test after an early exit.
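A toy sketch of the pattern this commit describes: a signal handler records the request, and the training loop checks the flag between iterations to snapshot and exit early. This is an illustrative Python reduction, not the actual C++ Solver code; the `train` function and its return values are invented for the example:

```python
import os
import signal

# Flag set asynchronously by the handler; the loop polls it per iteration.
_got_sigint = {"flag": False}

def _handle(signum, frame):
    _got_sigint["flag"] = True

def train(max_iters):
    """Toy loop: snapshot and stop early once SIGINT has been observed."""
    _got_sigint["flag"] = False
    signal.signal(signal.SIGINT, _handle)
    for it in range(max_iters):
        if _got_sigint["flag"]:
            return ("snapshot", it)  # a real solver would snapshot here
        if it == 3:
            os.kill(os.getpid(), signal.SIGINT)  # simulate Ctrl-C mid-run
    return ("finished", max_iters)

print(train(10))
```

Polling a flag rather than acting inside the handler keeps the exit point well-defined, which is what makes "snapshot, then stop" safe.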
2015-08-09  Multi-GPU  (Cyprien Noel, 1 file, -31/+80)
- Parallelize batches among GPUs and tree-reduce the gradients
- The effective batch size scales with the number of devices (batch size is multiplied by the device count)
- Detect machine topology (twin-GPU boards, P2P connectivity)
- Track device in syncedmem (thanks @thatguymike)
- Insert a callback in the solver for minimal code change
- Accept a list for the gpu flag of the caffe tool, e.g. '-gpu 0,1' or '-gpu all'; run on the default GPU if no ID is given
- Add a multi-GPU solver test
- Deterministic architecture for reproducible runs
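The gpu-flag and batch-scaling behavior above can be sketched as follows. This is a hedged Python illustration of the semantics described in the commit message; the real flag handling lives in the C++ caffe tool, and `parse_gpu_flag` is an invented helper name:

```python
def parse_gpu_flag(value, device_count):
    """Turn a '-gpu' flag value into a list of device IDs.

    'all' means every visible device; '0,1' is an explicit list;
    an empty value falls back to the default device 0.
    """
    if value == "all":
        return list(range(device_count))
    if not value:
        return [0]
    return [int(v) for v in value.split(",")]

# The effective batch size is multiplied by the number of devices:
devices = parse_gpu_flag("0,1", device_count=4)
per_solver_batch = 32
effective_batch = per_solver_batch * len(devices)  # 32 * 2 = 64
```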
2015-08-07  Merge pull request #2634 from mlopezantequera/patch-2  (Jeff Donahue, 1 file, -1/+1)
Update parse_log.py
2015-08-06  [pycaffe,build] include Python first in caffe tool  (Evan Shelhamer, 1 file, -5/+5)
2015-08-06  Merge pull request #2462 from longjon/correct-python-exceptions  (Evan Shelhamer, 1 file, -1/+15)
Handle Python layer exceptions correctly
2015-06-22  Update parse_log.py  (Manuel, 1 file, -1/+1)
Correct parsing (exponential notation learning rates were not being interpreted correctly)
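The kind of fix described here can be sketched with a regular expression: a float pattern without exponent support silently skips values like `1e-05`. The pattern and helper below are illustrative assumptions, not the actual parse_log.py code:

```python
import re

# r'lr = (\d+\.\d+)' would miss 'lr = 1e-05'; allowing an optional
# exponent (and letting float() do the conversion) handles both forms.
LR_RE = re.compile(r"lr = ([\d.]+(?:e[+-]?\d+)?)")

def parse_lr(line):
    m = LR_RE.search(line)
    return float(m.group(1)) if m else None

print(parse_lr("Iteration 100, lr = 1e-05"))  # -> 1e-05
print(parse_lr("Iteration 200, lr = 0.001"))  # -> 0.001
```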
2015-05-30  Merge pull request #2350 from drdan14/log-parser-python-improved  (Evan Shelhamer, 1 file, -63/+102)
Python log parser improvements
2015-05-27  fix the bug with db_type when the number of features to be extracted is larger than 1  (Mohammad Norouzi, 1 file, -1/+2)
2015-05-26  add leading zeros to keys in feature DB files  (Mohammad Norouzi, 1 file, -1/+1)
2015-05-18  fix blob_loss_weights index in test() in caffe.cpp  (Ronghang Hu, 1 file, -2/+2)
Correct the index for blob_loss_weights during output. Previously it was set to test_score index by mistake.
2015-05-14  print Python exceptions when using Python layer with the caffe tool  (Jonathan L Long, 1 file, -1/+15)
2015-05-14  Merge pull request #2165 from longjon/auto-reshape  (Jeff Donahue, 1 file, -3/+0)
Always call Layer::Reshape in Layer::Forward
2015-04-25  fix typo: swap the titles of xlabel and ylabel  (Takuma Wakamori, 1 file, -2/+2)
2015-04-22  Improvements to python log parser  (Daniel Golden, 1 file, -63/+102)
Over the version introduced in https://github.com/BVLC/caffe/pull/1384. Highlights:
* Interface change: column order is now determined by using a list of `OrderedDict` objects instead of `dict` objects, which obviates the need to pass around a tuple with the column orders.
* The outputs are now named according to their names in the network protobuffer; e.g., if your top is named `loss`, then the corresponding column header will also be `loss`; we no longer rename it to, e.g., `TrainingLoss` or `TestLoss`.
* Fixed the bug/feature of the first version where the initial learning rate was always NaN.
* Add optional parameter to specify the output table delimiter. It's still a comma by default.

You can use Matlab code from [this gist](https://gist.github.com/drdan14/d8b45999c4a1cbf7ad85) to verify that your results are the same before and after the changes introduced in this pull request. That code assumes that your `top` names are `accuracy` and `loss`, but you can modify the code if that's not true.
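The interface change above can be sketched in a few lines: rows kept as `OrderedDict` objects carry their own column order, so a `csv.DictWriter` (with an optional delimiter parameter) needs no separate column-order tuple. The row contents and `write_table` helper are invented for illustration:

```python
import csv
import io
from collections import OrderedDict

# Each parsed row is an OrderedDict; key insertion order (taken from the
# net's top names, e.g. 'loss', 'accuracy') fixes the column order.
rows = [
    OrderedDict([("NumIters", 100), ("loss", 0.9), ("accuracy", 0.62)]),
    OrderedDict([("NumIters", 200), ("loss", 0.7), ("accuracy", 0.71)]),
]

def write_table(rows, delimiter=","):  # delimiter: comma by default
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(rows[0].keys()),
                            delimiter=delimiter)
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

print(write_table(rows))
```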
2015-03-19  always call Layer::Reshape in Layer::Forward  (Jonathan L Long, 1 file, -3/+0)
There are no cases where Forward is called without Reshape, so we can simplify the call structure.
2015-03-07  extract_features preserves feature shape  (J Yegerlehner, 1 file, -3/+3)
2015-03-07  Load weights from multiple caffemodels.  (J Yegerlehner, 1 file, -2/+15)
2015-02-19  Repeal revert of #1878  (Evan Shelhamer, 2 files, -14/+20)
2015-02-19  Revert "Merge pull request #1878 from philkr/encoded"  (Evan Shelhamer, 2 files, -20/+14)
This reverts the encoding cleanup since it breaks data processing for existing inputs as discussed in #1901.
2015-02-19  Merge pull request #1899 from philkr/project_source_dir  (Evan Shelhamer, 1 file, -1/+1)
[cmake] CMAKE_SOURCE/BINARY_DIR to PROJECT_SOURCE/BINARY_DIR
2015-02-18  Changing CMAKE_SOURCE/BINARY_DIR to PROJECT_SOURCE/BINARY_DIR  (philkr, 1 file, -1/+1)
2015-02-17  tools make net with phase  (Evan Shelhamer, 2 files, -6/+3)
2015-02-16  Merge pull request #1878 from philkr/encoded  (Evan Shelhamer, 2 files, -14/+20)
Groom handling of encoded image inputs
2015-02-16  improve CMake build  (Anatoly Baksheev, 1 file, -19/+28)
2015-02-16  Cleaning up the encoded flag  (philkr, 2 files, -14/+20)
Allow any image (cropped or gray scale) to be encoded. Allow for a change in encoding (jpg -> png or vice versa), and clean up some unused functions.
2015-02-05  get rid of NetParameterPrettyPrint as layer is now after inputs  (Jeff Donahue, 1 file, -6/+1)
(whoohoo)
2015-02-05  automagic upgrade for v1->v2  (Jeff Donahue, 2 files, -9/+19)
2015-01-29  Merge pull request #1748 from longjon/db-wrappers  (Evan Shelhamer, 3 files, -56/+57)
Simple database wrappers
2015-01-24  drop dump_network tool  (Evan Shelhamer, 1 file, -82/+0)
Nets are better serialized as a single binaryproto or saved however desired through the Python and MATLAB interfaces.
2015-01-19  use db wrappers  (Jonathan L Long, 3 files, -56/+57)
2015-01-16  Merge pull request #1686 from longjon/net-const  (Jon Long, 1 file, -2/+2)
Improve const-ness of Net
2015-01-15  check for enough args to convert_imageset  (Evan Shelhamer, 1 file, -1/+1)
(this might better be handled by making all args flags...)
2015-01-09  improve const-ness of Net  (Jonathan L Long, 1 file, -2/+2)
2014-12-08  Store data in lists of dicts and use csv package  (Daniel Golden, 1 file, -20/+34)
Output format is unchanged (except that csv.DictWriter insists on writing ints as 0.0 instead of 0)
2014-12-08  Take train loss from `Iteration N, loss = X` lines  (Daniel Golden, 1 file, -16/+19)
Was previously using `Train net output #M: loss = X` lines, but there may not be exactly one of those (e.g., for GoogLeNet, which has multiple loss layers); I believe that `Iteration N, loss = X` is the aggregated loss. If there's only one loss layer, these two values will be equal and it won't matter. Otherwise, we presumably want to report the aggregated loss.
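The distinction above can be sketched with a regular expression: only the aggregated `Iteration N, loss = X` line is matched, while per-output `Train net output #M: ... = X` lines (of which a multi-loss net like GoogLeNet emits several) are ignored. The pattern and sample log lines are illustrative assumptions, not the actual parse_log.py code:

```python
import re

# Matches only the aggregated loss line, not per-output lines.
ITER_LOSS_RE = re.compile(r"Iteration (\d+), loss = ([\d.e+-]+)")

log = [
    "Iteration 100, loss = 1.85",         # aggregated over all loss layers
    "Train net output #0: loss1 = 0.9",   # one of several loss tops
    "Train net output #1: loss2 = 0.95",
]

pairs = []
for line in log:
    m = ITER_LOSS_RE.search(line)
    if m:
        pairs.append((int(m.group(1)), float(m.group(2))))

print(pairs)  # only the aggregated line contributes
```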
2014-12-08  Created parse_log.py, competitor to parse_log.sh  (Daniel Golden, 2 files, -6/+167)
2014-10-15  Added CPUTimer  (Sergio, 2 files, -17/+24)
Make timing more precise using double and microseconds
2014-10-15  Upgrade compute_image_mean to use gflags, accept list_of_images, and print mean_values  (Sergio, 1 file, -15/+44)
2014-10-15  Change caffe time to do forward/backward and accumulate time per layer  (Sergio, 1 file, -19/+30)
2014-10-15  Added encoded option and check_size to convert_imageset  (Sergio, 1 file, -5/+20)
Conflicts: tools/convert_imageset.cpp
2014-10-14  Renamed Database interface to Dataset.  (Kevin James Matzen, 3 files, -35/+34)
2014-10-14  Templated the key and value types for the Database interface  (Kevin James Matzen, 3 files, -24/+17)
The Database is now responsible for serialization. Refactored the tests so that they reuse the same code for each value type and backend configuration.
2014-10-14  Changed Database::buffer_t to Database::key_type and Database::value_type  (Kevin James Matzen, 3 files, -6/+6)
2014-10-14  Fix the LevelDB iterator/DB deallocation order bug  (Kevin James Matzen, 1 file, -1/+0)
The bug is pretty much fixed by having each iterator hold a shared pointer to the DB. I manually specified a destructor for the LeveldbState to make it clear in what order these two things need to be deallocated.