author    Sergio <sguada@gmail.com>  2014-06-25 22:09:39 -0700
committer Sergio <sguada@gmail.com>  2014-06-25 22:12:10 -0700
commit    560e11e8973b510c99ca3c3ac035ab01854beede (patch)
tree      1b55fa1732fd40bc628bba4d2ab5e4474bee7312 /docs
parent    16fb55edc27ddbc800b6d0278b95ca8ff0b2d913 (diff)
Added top-1 and top-5 accuracy for the caffe networks to docs
Diffstat (limited to 'docs')
 docs/getting_pretrained_models.md | 2 ++
 1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/docs/getting_pretrained_models.md b/docs/getting_pretrained_models.md
index bd342f70..bbac5ac4 100644
--- a/docs/getting_pretrained_models.md
+++ b/docs/getting_pretrained_models.md
@@ -17,11 +17,13 @@ This page will be updated as more models become available.
 
 - The bundled model is the iteration 310,000 snapshot.
 - The best validation performance during training was iteration 313,000, with validation accuracy 57.412% and loss 1.82328.
+- This model obtains a top-1 accuracy of 57.4% and a top-5 accuracy of 80.4% on the validation set, using just the center crop. (Averaging over 10 crops, (4 corners + 1 center) * 2 mirrors, should give slightly higher accuracy.)
 
 **AlexNet**: Our training of the Krizhevsky architecture, which differs from the paper's methodology by (1) not training with the relighting data augmentation and (2) initializing non-zero biases to 0.1 instead of 1. (2) was found necessary for training, as initialization to 1 gave flat loss. Download the model (243.9MB) by running `examples/imagenet/get_caffe_alexnet_model.sh` from the Caffe root directory.
 
 - The bundled model is the iteration 360,000 snapshot.
 - The best validation performance during training was iteration 358,000, with validation accuracy 57.258% and loss 1.83948.
+- This model obtains a top-1 accuracy of 57.1% and a top-5 accuracy of 80.2% on the validation set, using just the center crop. (Averaging over 10 crops, (4 corners + 1 center) * 2 mirrors, should give slightly higher accuracy.)
 
 Additionally, you will probably eventually need some auxiliary data (mean image, synset list, etc.): run `data/ilsvrc12/get_ilsvrc_aux.sh` from the root directory to obtain it.
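The "10 crops" referred to in the added lines are the standard test-time augmentation: the 4 corner crops plus the center crop of the input image, each together with its horizontal mirror, with the model's predictions averaged over all 10. The following is a minimal NumPy sketch of that scheme, not Caffe's actual implementation; the function names are illustrative, and the 227-pixel crop size used in the example is an assumption based on the usual CaffeNet/AlexNet input size.

```python
import numpy as np

def ten_crops(image, crop_size):
    """Return the 10 test-time crops of an H x W x C image:
    4 corner crops + 1 center crop, each plus its horizontal mirror."""
    h, w = image.shape[:2]
    c = crop_size
    # Top-left (row, col) offsets of the 4 corner crops and the center crop.
    offsets = [
        (0, 0), (0, w - c), (h - c, 0), (h - c, w - c),
        ((h - c) // 2, (w - c) // 2),
    ]
    crops = [image[y:y + c, x:x + c] for y, x in offsets]
    crops += [np.flip(cr, axis=1) for cr in crops]  # horizontal mirrors
    return np.stack(crops)  # shape: (10, crop_size, crop_size, C)

def averaged_prediction(image, crop_size, predict):
    """Average class probabilities over the 10 crops.
    `predict` maps one (crop_size, crop_size, C) crop to a probability vector."""
    return np.mean([predict(cr) for cr in ten_crops(image, crop_size)], axis=0)
```

For a 256 x 256 input cropped to 227 x 227, `ten_crops(image, 227)` yields a `(10, 227, 227, 3)` batch; in practice the whole batch would be fed through the network in one forward pass and the softmax outputs averaged.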