author    | Evan Shelhamer <shelhamer@imaginarynumber.net> | 2014-08-28 16:28:51 -0700
committer | Sergey Karayev <sergeykarayev@gmail.com> | 2014-09-04 01:53:18 +0100
commit    | 39f7a4d327d6ca044114db600c2de1324fb43c1e (patch)
tree      | b3cb5707959c8bcc6db3cd95333db5bec097f870 /models
parent    | bcc12ef597f5eec04c582fe16e65dbb12a3b84f8 (diff)
proofread model zoo
Diffstat (limited to 'models')
-rw-r--r-- | models/bvlc_reference_caffenet/readme.md | 5
1 file changed, 2 insertions(+), 3 deletions(-)
```diff
diff --git a/models/bvlc_reference_caffenet/readme.md b/models/bvlc_reference_caffenet/readme.md
index 1fbdbe12..d1c6269a 100644
--- a/models/bvlc_reference_caffenet/readme.md
+++ b/models/bvlc_reference_caffenet/readme.md
@@ -7,13 +7,12 @@
 sha1: 4c8d77deb20ea792f84eb5e6d0a11ca0a8660a46
 caffe_commit: 709dc15af4a06bebda027c1eb2b3f3e3375d5077
 ---
-This model is the result of following the Caffe [instructions](http://caffe.berkeleyvision.org/gathered/examples/imagenet.html) on training an ImageNet model.
-This model is a replication of the model described in the [AlexNet](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks) publication with some differences:
+This model is the result of following the Caffe [ImageNet model training instructions](http://caffe.berkeleyvision.org/gathered/examples/imagenet.html).
+It is a replication of the model described in the [AlexNet](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks) publication with some differences:
 
 - not training with the relighting data-augmentation;
 - the order of pooling and normalization layers is switched (in CaffeNet, pooling is done before normalization).
 
-This model is snapshot of iteration 310,000. The best validation performance during training was iteration 313,000 with validation accuracy 57.412% and loss 1.82328.
 This model obtains a top-1 accuracy 57.4% and a top-5 accuracy 80.4% on the validation set, using just the center crop.
```
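The `sha1` field visible in the readme frontmatter above is what Caffe's `scripts/download_model_binary.py` verifies after fetching the caffemodel weights. A minimal sketch of that checksum step, assuming a hypothetical local filename for the downloaded binary:

```python
import hashlib

def sha1_of_file(path, chunk_size=1 << 20):
    """Stream a file in chunks and return its hex SHA-1 digest."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Expected value comes from the readme frontmatter shown in the diff above;
# the local filename here is an assumption for illustration.
expected = "4c8d77deb20ea792f84eb5e6d0a11ca0a8660a46"
# ok = sha1_of_file("bvlc_reference_caffenet.caffemodel") == expected
```

Streaming in fixed-size chunks keeps memory use flat even for multi-hundred-megabyte caffemodel files.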