diff --git a/docs/getting_pretrained_models.md b/docs/getting_pretrained_models.md
index bd342f70..bbac5ac4 100644
--- a/docs/getting_pretrained_models.md
+++ b/docs/getting_pretrained_models.md
@@ -17,11 +17,13 @@ This page will be updated as more models become available.
- The bundled model is the iteration 310,000 snapshot.
- The best validation performance during training was iteration 313,000 with
validation accuracy 57.412% and loss 1.82328.
+- This model obtains a top-1 accuracy of 57.4% and a top-5 accuracy of 80.4% on the validation set, using just the center crop. (Averaging predictions over 10 crops, i.e. (4 corners + 1 center) * 2 mirrors, should give slightly higher accuracy.)
**AlexNet**: Our training of the Krizhevsky architecture, which differs from the paper's methodology in (1) not training with the relighting data augmentation and (2) initializing non-zero biases to 0.1 instead of 1. Change (2) was found necessary for training, as initializing to 1 produced a flat loss. Download the model (243.9MB) by running `examples/imagenet/get_caffe_alexnet_model.sh` from the Caffe root directory.
- The bundled model is the iteration 360,000 snapshot.
- The best validation performance during training was iteration 358,000 with
validation accuracy 57.258% and loss 1.83948.
+- This model obtains a top-1 accuracy of 57.1% and a top-5 accuracy of 80.2% on the validation set, using just the center crop. (Averaging predictions over 10 crops, i.e. (4 corners + 1 center) * 2 mirrors, should give slightly higher accuracy.)
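+
+The 10-crop oversampling mentioned above can be sketched roughly as follows. This is a minimal illustration, not the evaluation code used for the reported numbers; it assumes an H x W x C NumPy image, and the crop size (227, AlexNet's input size in Caffe) is an assumption you should match to your network's input layer.
+
+```python
+import numpy as np
+
+def ten_crops(image, crop_size):
+    """Generate the 10 test-time crops: 4 corners + 1 center,
+    each together with its horizontal mirror.
+    `image` is an H x W x C array with H, W >= crop_size."""
+    h, w = image.shape[:2]
+    c = crop_size
+    cy, cx = (h - c) // 2, (w - c) // 2
+    offsets = [(0, 0), (0, w - c), (h - c, 0), (h - c, w - c), (cy, cx)]
+    crops = [image[y:y + c, x:x + c] for y, x in offsets]
+    crops += [crop[:, ::-1] for crop in crops]  # horizontal mirrors
+    return np.stack(crops)  # shape: (10, c, c, C)
+
+# The final prediction is then the average of the network's
+# class probabilities over the 10 crops, e.g.:
+#   probs = net_forward(ten_crops(img, 227)).mean(axis=0)
+```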
Additionally, you will probably eventually need some auxiliary data (mean image, synset list, etc.): run `data/ilsvrc12/get_ilsvrc_aux.sh` from the root directory to obtain it.