author     Sergey Karayev <sergeykarayev@gmail.com>  2014-09-04 01:08:19 +0100
committer  Sergey Karayev <sergeykarayev@gmail.com>  2014-09-04 03:59:14 +0100
commit     da715ea07af8ea42487d333e3441d138320da36a (patch)
tree       67343cbf6e317d0ad4a50d0961db1d4e081911c4
parent     d5e9739e5261de5f30832c06d849da5265bc95c6 (diff)
removed mention of getting_pretrained_models page and old paths
-rw-r--r--  docs/getting_pretrained_models.md              16
-rw-r--r--  docs/index.md                                   4
-rw-r--r--  docs/model_zoo.md                              68
-rw-r--r--  examples/classification.ipynb                   9
-rw-r--r--  examples/detection.ipynb                        5
-rw-r--r--  examples/filter_visualization.ipynb             9
-rw-r--r--  examples/finetune_flickr_style/readme.md        4
-rw-r--r--  examples/imagenet/imagenet_full_conv.prototxt   1
-rw-r--r--  examples/imagenet/readme.md                    55
-rw-r--r--  examples/net_surgery.ipynb                      9
-rw-r--r--  examples/web_demo/app.py                        4
-rw-r--r--  examples/web_demo/readme.md                     6
-rw-r--r--  matlab/caffe/matcaffe_batch.m                   3
-rw-r--r--  matlab/caffe/matcaffe_init.m                    4
-rwxr-xr-x  python/classify.py                              4
-rwxr-xr-x  python/detect.py                                4
16 files changed, 104 insertions, 101 deletions
diff --git a/docs/getting_pretrained_models.md b/docs/getting_pretrained_models.md
deleted file mode 100644
index 70d8f7bd..00000000
--- a/docs/getting_pretrained_models.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-layout: default
----
-
-# Pre-trained models
-
-[BVLC](http://bvlc.eecs.berkeley.edu) aims to provide a variety of high quality pre-trained models.
-Note that unlike Caffe itself, these models usually have licenses **academic research / non-commercial use only**.
-
-## TODO
-
-Write something about the model zoo.
-
-## Auxiliary Data
-
-Additionally, you will probably eventually need some auxiliary data (mean image, synset list, etc.): run `data/ilsvrc12/get_ilsvrc_aux.sh` from the root directory to obtain it.
diff --git a/docs/index.md b/docs/index.md
index 47191ba8..83ba236b 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -38,8 +38,8 @@ Slides about the Caffe architecture, *updated 03/14*.
A 4-page report for the ACM Multimedia Open Source competition.
* [Installation instructions](/installation.html)<br />
Tested on Ubuntu, Red Hat, OS X.
-* [Pre-trained models](/getting_pretrained_models.html)<br />
-BVLC provides ready-to-use models for non-commercial use.
+* [Model Zoo](/model_zoo.html)<br />
+BVLC suggests a standard distribution format for Caffe models, and provides trained models for non-commercial use.
* [Developing & Contributing](/development.html)<br />
Guidelines for development and contributing to Caffe.
* [API Documentation](/doxygen/)<br />
diff --git a/docs/model_zoo.md b/docs/model_zoo.md
index b4cbeb1d..490bb68f 100644
--- a/docs/model_zoo.md
+++ b/docs/model_zoo.md
@@ -1,31 +1,59 @@
# Caffe Model Zoo
+Lots of people have used Caffe to train models of different architectures and applied them to different problems, ranging from simple regression, to AlexNet-alikes, to Siamese networks for image similarity, to speech applications.
+
+To lower the friction of sharing these models, we introduce the model zoo framework:
+
+- A standard format for packaging Caffe model info.
+- Tools to upload/download model info to/from Github Gists, and to download trained `.caffemodel` binaries.
+- A central wiki page for sharing model info Gists.
+
+## Where to get trained models
+
+First of all, we provide some trained models out of the box.
+Each one of these can be downloaded by running `scripts/download_model_binary.py <dirname>` where `<dirname>` is specified below:
+
+- **BVLC Reference CaffeNet** in `models/bvlc_reference_caffenet`: AlexNet trained on ILSVRC 2012, with a minor variation from the version described in the NIPS 2012 paper.
+- **BVLC AlexNet** in `models/bvlc_alexnet`: AlexNet trained on ILSVRC 2012, almost exactly as described in NIPS 2012.
+- **BVLC Reference R-CNN ILSVRC-2013** in `models/bvlc_reference_rcnn_ilsvrc13`: pure Caffe implementation of [R-CNN](https://github.com/rbgirshick/rcnn).
+
+User-provided models are posted to a publicly editable [wiki page](https://github.com/BVLC/caffe/wiki/Model-Zoo).
+
+## Model info format
+
A Caffe model is distributed as a directory containing:
-- solver/model prototxt(s)
-- model binary file, with .caffemodel extension
-- readme.md, containing:
- - YAML header:
- - model file URL or (torrent magnet link) and MD5 hash
- - Caffe commit hash use to train this model
- - [optional] github gist id
- - license type or text
- - main body: free-form description/details
-- helpful scripts
-It is up to the user where to host the model file.
-Dropbox or their own server are both fine.
+- Solver/model prototxt(s)
+- Readme.md containing
+ - YAML frontmatter
+ - Caffe version used to train this model (tagged release or commit hash).
+ - [optional] file URL and SHA1 of the trained `.caffemodel`.
+ - [optional] github gist id.
+ - Information about what data the model was trained on, explanation of modeling choices, etc.
+ - License information.
+- [optional] Other helpful scripts.
+
+## Hosting model info
+
+Github Gist is a good format for model info distribution because it can contain multiple files, is versionable, and has in-browser syntax highlighting and markdown rendering.
+
+- `scripts/download_model_from_gist.sh <gist_id>`: downloads the non-binary files from a Gist into `<dirname>`
+- `scripts/upload_model_to_gist.sh <dirname>`: uploads non-binary files in the model directory as a Github Gist and prints the Gist ID. If `gist_id` is already part of the `<dirname>/readme.md` frontmatter, then it updates the existing Gist.
-We provide scripts:
+### Hosting trained models
-- publish_model_as_gist.sh: uploads non-binary files in the model directory as a Github Gist and returns the id. If gist id is already part of the readme, then updates existing gist.
-- download_model_from_gist.sh <gist_id>: downloads the non-binary files from a Gist.
-- download_model_binary.py: downloads the .caffemodel from the URL specified in readme.
+It is up to the user where to host the `.caffemodel` file.
+We host our BVLC-provided models on our own server.
+Dropbox also works fine (tip: make sure that `?dl=1` is appended to the end of the URL).
-The Gist is a good format for distribution because it can contain multiple files, is versionable, and has in-browser syntax highlighting and markdown rendering.
+- `scripts/download_model_binary.py <dirname>`: downloads the `.caffemodel` from the URL specified in the `<dirname>/readme.md` frontmatter and confirms SHA1.
-The existing models distributed with Caffe can stay bundled with Caffe, so I am re-working them all into this format.
-All relevant examples will be updated to start with `cd models/model_of_interest && ../scripts/download_model_binary.sh`.
## Tasks
-- get the imagenet example to work with the new prototxt location
+x get the imagenet example to work with the new prototxt location
+x make wiki page for user-submitted models
+- add flickr model to the user-submitted models wiki page
+x make docs section listing bvlc-distributed models
+- write the publish_model_as_gist script
+- write the download_model_from_gist script
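A note on the binary download step described above: `scripts/download_model_binary.py` is said to confirm the SHA1 of the fetched `.caffemodel` against the readme frontmatter. A minimal Python sketch of that check, assuming the expected hash has already been read out of `<dirname>/readme.md` (the helper names here are illustrative, not the script's actual internals):

    # Sketch: verify a downloaded .caffemodel against the SHA1 declared in the
    # model directory's readme.md frontmatter. Helper names are hypothetical.
    import hashlib

    def sha1_of_file(path, chunk_size=1 << 20):
        # Hash in chunks so multi-hundred-MB model binaries fit in memory.
        h = hashlib.sha1()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(chunk_size), b''):
                h.update(chunk)
        return h.hexdigest()

    def check_model(caffemodel_path, expected_sha1):
        actual = sha1_of_file(caffemodel_path)
        if actual != expected_sha1:
            raise IOError('SHA1 mismatch: %s != %s' % (actual, expected_sha1))

    # check_model('models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel',
    #             '<sha1 from the readme frontmatter>')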
diff --git a/examples/classification.ipynb b/examples/classification.ipynb
index 4d4a738a..7c01b9e2 100644
--- a/examples/classification.ipynb
+++ b/examples/classification.ipynb
@@ -3,7 +3,6 @@
"description": "Use the pre-trained ImageNet model to classify images with the Python interface.",
"example_name": "ImageNet classification",
"include_in_docs": true,
- "signature": "sha256:4f8d4c079c30d20ef4b6818e9672b1741fd1377354e5b83e291710736cecd24f"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -19,7 +18,7 @@
"\n",
"Caffe provides a general Python interface for models with `caffe.Net` in `python/caffe/pycaffe.py`, but to make off-the-shelf classification easy we provide a `caffe.Classifier` class and `classify.py` script. Both Python and MATLAB wrappers are provided. However, the Python wrapper has more features so we will describe it here. For MATLAB, refer to `matlab/caffe/matcaffe_demo.m`.\n",
"\n",
- "Before we begin, you must compile Caffe and install the python wrapper by setting your `PYTHONPATH`. If you haven't yet done so, please refer to the [installation instructions](installation.html). This example uses our pre-trained ImageNet model, an ILSVRC12 image classifier. You can download it (232.57MB) by running `examples/imagenet/get_caffe_reference_imagenet_model.sh`. Note that this pre-trained model is licensed for academic research / non-commercial use only.\n",
+ "Before we begin, you must compile Caffe and install the python wrapper by setting your `PYTHONPATH`. If you haven't yet done so, please refer to the [installation instructions](installation.html). This example uses our pre-trained CaffeNet model, an ILSVRC12 image classifier. You can download it by running `./scripts/download_model_binary.py models/bvlc_reference_caffenet`. Note that this pre-trained model is licensed for academic research / non-commercial use only.\n",
"\n",
"Ready? Let's start."
]
@@ -41,8 +40,8 @@
"\n",
"# Set the right path to your model definition file, pretrained model weights,\n",
"# and the image you would like to classify.\n",
- "MODEL_FILE = 'imagenet/imagenet_deploy.prototxt'\n",
- "PRETRAINED = 'imagenet/caffe_reference_imagenet_model'\n",
+ "MODEL_FILE = '../models/bvlc_reference_caffenet/deploy.prototxt'\n",
+ "PRETRAINED = '../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel'\n",
"IMAGE_FILE = 'images/cat.jpg'"
],
"language": "python",
@@ -404,4 +403,4 @@
"metadata": {}
}
]
-}
\ No newline at end of file
+}
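As a cross-check of the renamed paths above, the notebook's setup reduces to a few lines of the era's `caffe.Classifier` API. A minimal sketch, run from `examples/`; the `mean`, `channel_swap`, and `raw_scale` preprocessing values are assumed ILSVRC12 defaults, not values taken from this diff:

    # Sketch: off-the-shelf classification with the new CaffeNet paths.
    # Preprocessing arguments are assumptions mirroring standard ILSVRC12 use.
    import numpy as np
    import caffe

    net = caffe.Classifier(
        '../models/bvlc_reference_caffenet/deploy.prototxt',
        '../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel',
        mean=np.load('../python/caffe/imagenet/ilsvrc_2012_mean.npy'),
        channel_swap=(2, 1, 0),  # the reference model expects BGR channel order
        raw_scale=255)           # the reference model operates on [0, 255] inputs
    net.set_phase_test()
    net.set_mode_cpu()

    prediction = net.predict([caffe.io.load_image('images/cat.jpg')])
    print('predicted class: %d' % prediction[0].argmax())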
diff --git a/examples/detection.ipynb b/examples/detection.ipynb
index beae000a..e1620417 100644
--- a/examples/detection.ipynb
+++ b/examples/detection.ipynb
@@ -3,7 +3,6 @@
"description": "Run a pretrained model as a detector in Python.",
"example_name": "R-CNN detection",
"include_in_docs": true,
- "signature": "sha256:8a744fbbb9ed80acab471247eaf50c27dcbd652105404df9feca599939f0c0ee"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -26,7 +25,7 @@
"\n",
"- [Selective Search](http://koen.me/research/selectivesearch/) is the region proposer used by R-CNN. The [selective_search_ijcv_with_python](https://github.com/sergeyk/selective_search_ijcv_with_python) Python module takes care of extracting proposals through the selective search MATLAB implementation. To install it, download the module and name its directory `selective_search_ijcv_with_python`, run the demo in MATLAB to compile the necessary functions, then add it to your `PYTHONPATH` for importing. (If you have your own region proposals prepared, or would rather not bother with this step, [detect.py](https://github.com/BVLC/caffe/blob/master/python/detect.py) accepts a list of images and bounding boxes as CSV.)\n",
"\n",
- "- Follow the [model instructions](http://caffe.berkeleyvision.org/getting_pretrained_models.html) to get the Caffe R-CNN ImageNet model.\n",
+ "-Run `./scripts/download_model_binary.py models/bvlc_reference_caffenet` to get the Caffe R-CNN ImageNet model.\n",
"\n",
"With that done, we'll call the bundled `detect.py` to generate the region proposals and run the network. For an explanation of the arguments, do `./detect.py --help`."
]
@@ -37,7 +36,7 @@
"input": [
"!mkdir -p _temp\n",
"!echo `pwd`/images/fish-bike.jpg > _temp/det_input.txt\n",
- "!../python/detect.py --crop_mode=selective_search --pretrained_model=imagenet/caffe_rcnn_imagenet_model --model_def=imagenet/rcnn_imagenet_deploy.prototxt --gpu --raw_scale=255 _temp/det_input.txt _temp/det_output.h5"
+ "!../python/detect.py --crop_mode=selective_search --pretrained_model=models/bvlc_reference_rcnn_ilsvrc13/bvlc_reference_rcnn_ilsvrc13.caffemodel --model_def=models/bvlc_reference_rcnn_ilsvrc13/deploy.prototxt --gpu --raw_scale=255 _temp/det_input.txt _temp/det_output.h5"
],
"language": "python",
"metadata": {},
diff --git a/examples/filter_visualization.ipynb b/examples/filter_visualization.ipynb
index 5fdcbe25..4d69dc0f 100644
--- a/examples/filter_visualization.ipynb
+++ b/examples/filter_visualization.ipynb
@@ -3,7 +3,6 @@
"description": "Extracting features and visualizing trained filters with an example image, viewed layer-by-layer.",
"example_name": "Filter visualization",
"include_in_docs": true,
- "signature": "sha256:b1b0457e2b10110aca847a718a3fe631ebcfce63a61cbc33653244f52b1ff4af"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -54,15 +53,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Follow the [instructions](http://caffe.berkeleyvision.org/getting_pretrained_models.html) for getting the pretrained models, load the net, specify test phase and CPU mode, and configure input preprocessing."
+ "Run `./scripts/download_model_binary.py models/bvlc_reference_caffenet` to get the pretrained CaffeNet model, load the net, specify test phase and CPU mode, and configure input preprocessing."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
- "net = caffe.Classifier(caffe_root + 'examples/imagenet/imagenet_deploy.prototxt',\n",
- " caffe_root + 'examples/imagenet/caffe_reference_imagenet_model')\n",
+ "net = caffe.Classifier(caffe_root + 'models/bvlc_reference_caffenet/deploy.prototxt',\n",
+ " caffe_root + 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel')\n",
"net.set_phase_test()\n",
"net.set_mode_cpu()\n",
"# input preprocessing: 'data' is the name of the input blob == net.inputs[0]\n",
@@ -598,4 +597,4 @@
"metadata": {}
}
]
-}
\ No newline at end of file
+}
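The setup cell above is cut off at the preprocessing comment; the configuration that typically follows it in this era's notebooks is sketched below, continuing the cell (so `net`, `np`, and `caffe_root` are as already defined there). The mean file path and constants are assumed ILSVRC12 defaults rather than values taken from this diff:

    # Sketch: the input preprocessing usually configured after the comment above.
    net.set_mean('data', np.load(caffe_root + 'python/caffe/imagenet/ilsvrc_2012_mean.npy'))
    net.set_raw_scale('data', 255)           # images load in [0, 1]; the model expects [0, 255]
    net.set_channel_swap('data', (2, 1, 0))  # swap RGB -> BGR for the reference model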
diff --git a/examples/finetune_flickr_style/readme.md b/examples/finetune_flickr_style/readme.md
index da584f00..68b2778c 100644
--- a/examples/finetune_flickr_style/readme.md
+++ b/examples/finetune_flickr_style/readme.md
@@ -60,13 +60,13 @@ We'll also need the ImageNet-trained model, which you can obtain by running `get
Now we can train! (You can fine-tune in CPU mode by leaving out the `-gpu` flag.)
- caffe % ./build/tools/caffe train -solver examples/finetune_flickr_style/flickr_style_solver.prototxt -weights examples/imagenet/caffe_reference_imagenet_model -gpu 0
+ caffe % ./build/tools/caffe train -solver examples/finetune_flickr_style/flickr_style_solver.prototxt -weights models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel -gpu 0
[...]
I0828 22:10:04.025378 9718 solver.cpp:46] Solver scaffolding done.
I0828 22:10:04.025388 9718 caffe.cpp:95] Use GPU with device ID 0
- I0828 22:10:04.192004 9718 caffe.cpp:107] Finetuning from examples/imagenet/caffe_reference_imagenet_model
+ I0828 22:10:04.192004 9718 caffe.cpp:107] Finetuning from models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel
[...]
diff --git a/examples/imagenet/imagenet_full_conv.prototxt b/examples/imagenet/imagenet_full_conv.prototxt
index 570efae5..395b0f01 100644
--- a/examples/imagenet/imagenet_full_conv.prototxt
+++ b/examples/imagenet/imagenet_full_conv.prototxt
@@ -1,3 +1,4 @@
+# This file is for the net_surgery.ipynb example notebook.
name: "CaffeNetConv"
input: "data"
input_dim: 1
diff --git a/examples/imagenet/readme.md b/examples/imagenet/readme.md
index 8ce36449..97a68f30 100644
--- a/examples/imagenet/readme.md
+++ b/examples/imagenet/readme.md
@@ -1,6 +1,6 @@
---
title: ImageNet tutorial
-description: Train and test "CaffeNet" on ImageNet challenge data.
+description: Train and test "CaffeNet" on ImageNet data.
category: example
include_in_docs: true
priority: 1
@@ -9,16 +9,16 @@ priority: 1
Brewing ImageNet
================
-We are going to describe a reference implementation for the approach first proposed by Krizhevsky, Sutskever, and Hinton in their [NIPS 2012 paper](http://books.nips.cc/papers/files/nips25/NIPS2012_0534.pdf).
-Since training the whole model takes some time and energy, we provide a model, trained in the same way as we describe here, to help fight global warming.
-If you would like to simply use the pretrained model, check out the [Pretrained ImageNet](../../getting_pretrained_models.html) page.
-*Note that the pretrained model is for academic research / non-commercial use only*.
-
-To clarify, by ImageNet we actually mean the ILSVRC12 challenge, but you can easily train on the whole of ImageNet as well, just with more disk space, and a little longer training time.
+This guide is meant to get you ready to train your own model on your own data.
+If you just want an ImageNet-trained network, note that since training takes a lot of energy and we hate global warming, we provide the CaffeNet model, trained as described below, in the [model zoo](/model_zoo.html).
Data Preparation
----------------
+*The guide specifies all paths and assumes all commands are executed from the root caffe directory.*
+
+*By "ImageNet" we here mean the ILSVRC12 challenge, but you can easily train on the whole of ImageNet as well, just with more disk space, and a little longer training time.*
+
We assume that you already have downloaded the ImageNet training data and validation data, and they are stored on your disk like:
/path/to/imagenet/train/n01440764/n01440764_10026.JPEG
@@ -26,44 +26,39 @@ We assume that you already have downloaded the ImageNet training data and valida
You will first need to prepare some auxiliary data for training. This data can be downloaded by:
- cd $CAFFE_ROOT/data/ilsvrc12/
- ./get_ilsvrc_aux.sh
+ ./data/ilsvrc12/get_ilsvrc_aux.sh
The training and validation input are described in `train.txt` and `val.txt` as text listing all the files and their labels. Note that we use a different indexing for labels than the ILSVRC devkit: we sort the synset names in their ASCII order, and then label them from 0 to 999. See `synset_words.txt` for the synset/name mapping.
-You may want to resize the images to 256x256 in advance. By default, we do not explicitly do this because in a cluster environment, one may benefit from resizing images in a parallel fashion, using mapreduce. For example, Yangqing used his lightedweighted [mincepie](https://github.com/Yangqing/mincepie) package to do mapreduce on the Berkeley cluster. If you would things to be rather simple and straightforward, you can also use shell commands, something like:
+You may want to resize the images to 256x256 in advance. By default, we do not explicitly do this because in a cluster environment, one may benefit from resizing images in a parallel fashion, using mapreduce. For example, Yangqing used his lightweight [mincepie](https://github.com/Yangqing/mincepie) package. If you prefer things to be simpler, you can also use shell commands, something like:
for name in /path/to/imagenet/val/*.JPEG; do
convert -resize 256x256\! $name $name
done
-Go to `$CAFFE_ROOT/examples/imagenet/` for the rest of this guide.
-
-Take a look at `create_imagenet.sh`. Set the paths to the train and val dirs as needed, and set "RESIZE=true" to resize all images to 256x256 if you haven't resized the images in advance.
-Now simply create the leveldbs with `./create_imagenet.sh`. Note that `ilsvrc12_train_leveldb` and `ilsvrc12_val_leveldb` should not exist before this execution. It will be created by the script. `GLOG_logtostderr=1` simply dumps more information for you to inspect, and you can safely ignore it.
+Take a look at `examples/imagenet/create_imagenet.sh`. Set the paths to the train and val dirs as needed, and set "RESIZE=true" to resize all images to 256x256 if you haven't resized the images in advance.
+Now simply create the leveldbs with `examples/imagenet/create_imagenet.sh`. Note that `examples/imagenet/ilsvrc12_train_leveldb` and `examples/imagenet/ilsvrc12_val_leveldb` should not exist before this execution. They will be created by the script. `GLOG_logtostderr=1` simply dumps more information for you to inspect, and you can safely ignore it.
Compute Image Mean
------------------
The model requires us to subtract the image mean from each image, so we have to compute the mean. `tools/compute_image_mean.cpp` implements that - it is also a good example to familiarize yourself with how to manipulate the multiple components, such as protocol buffers, leveldbs, and logging, if you are not familiar with them. Anyway, the mean computation can be carried out as:
- ./make_imagenet_mean.sh
+ ./examples/imagenet/make_imagenet_mean.sh
which will make `data/ilsvrc12/imagenet_mean.binaryproto`.
-Network Definition
-------------------
-
-The network definition follows strictly the one in Krizhevsky et al. You can find the detailed definition at `examples/imagenet/imagenet_train_val.prototxt`. Note the paths in the data layer --- if you have not followed the exact paths in this guide you will need to change the following lines:
+Model Definition
+----------------
- source: "ilvsrc12_train_leveldb"
- mean_file: "../../data/ilsvrc12/imagenet_mean.binaryproto"
+We are going to describe a reference implementation for the approach first proposed by Krizhevsky, Sutskever, and Hinton in their [NIPS 2012 paper](http://books.nips.cc/papers/files/nips25/NIPS2012_0534.pdf).
-to point to your own leveldb and image mean.
+The network definition (`models/bvlc_reference_caffenet/train_val.prototxt`) follows the one in Krizhevsky et al.
+Note that if you deviated from the file paths suggested in this guide, you'll need to adjust the relevant paths in the `.prototxt` files.
-If you look carefully at `imagenet_train_val.prototxt`, you will notice several `include` sections specifying either `phase: TRAIN` or `phase: TEST`. These sections allow us to define two closely related networks in one file: the network used for training and the network used for testing. These two networks are almost identical, sharing all layers except for those marked with `include { phase: TRAIN }` or `include { phase: TEST }`. In this case, only the input layers and one output layer are different.
+If you look carefully at `models/bvlc_reference_caffenet/train_val.prototxt`, you will notice several `include` sections specifying either `phase: TRAIN` or `phase: TEST`. These sections allow us to define two closely related networks in one file: the network used for training and the network used for testing. These two networks are almost identical, sharing all layers except for those marked with `include { phase: TRAIN }` or `include { phase: TEST }`. In this case, only the input layers and one output layer are different.
-**Input layer differences:** The training network's `data` input layer draws its data from `ilsvrc12_train_leveldb` and randomly mirrors the input image. The testing network's `data` layer takes data from `ilsvrc12_val_leveldb` and does not perform random mirroring.
+**Input layer differences:** The training network's `data` input layer draws its data from `examples/imagenet/ilsvrc12_train_leveldb` and randomly mirrors the input image. The testing network's `data` layer takes data from `examples/imagenet/ilsvrc12_val_leveldb` and does not perform random mirroring.
**Output layer differences:** Both networks output the `softmax_loss` layer, which in training is used to compute the loss function and to initialize the backpropagation, while in validation this loss is simply reported. The testing network also has a second output layer, `accuracy`, which is used to report the accuracy on the test set. In the process of training, the test network will occasionally be instantiated and tested on the test set, producing lines like `Test score #0: xxx` and `Test score #1: xxx`. In this case score 0 is the accuracy (which will start around 1/1000 = 0.001 for an untrained network) and score 1 is the loss (which will start around 7 for an untrained network).
@@ -76,18 +71,14 @@ We will also lay out a protocol buffer for running the solver. Let's make a few
* The network will be trained with momentum 0.9 and a weight decay of 0.0005.
* For every 10,000 iterations, we will take a snapshot of the current status.
-Sound good? This is implemented in `examples/imagenet/imagenet_solver.prototxt`. Again, you will need to change the first line:
-
- net: "imagenet_train_val.prototxt"
-
-to point to the actual path if you have changed it.
+Sound good? This is implemented in `models/bvlc_reference_caffenet/solver.prototxt`.
Training ImageNet
-----------------
Ready? Let's train.
- ./build/tools/caffe train --solver=examples/imagenet/imagenet_solver.prototxt
+ ./build/tools/caffe train --solver=models/bvlc_reference_caffenet/solver.prototxt
Sit back and enjoy!
@@ -98,11 +89,11 @@ On my K20 machine, every 20 iterations take about 36 seconds to run, so effectiv
Resume Training?
----------------
-We all experience times when the power goes out, or we feel like rewarding ourself a little by playing Battlefield (does someone still remember Quake?). Since we are snapshotting intermediate results during training, we will be able to resume from snapshots. This can be done as easy as:
+We all experience times when the power goes out, or we feel like rewarding ourselves a little by playing Battlefield (does anyone still remember Quake?). Since we are snapshotting intermediate results during training, we will be able to resume from snapshots. This can be done as easily as:
./build/tools/caffe train --solver=examples/imagenet/imagenet_solver.prototxt --snapshot=examples/imagenet/caffe_imagenet_10000.solverstate
-where in the script `imagenet_train_1000.solverstate` is the solver state snapshot that stores all necessary information to recover the exact solver state (including the parameters, momentum history, etc).
+where in the script `caffe_imagenet_10000.solverstate` is the solver state snapshot that stores all necessary information to recover the exact solver state (including the parameters, momentum history, etc.).
Parting Words
-------------
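One practical note on the mean-computation step above: `make_imagenet_mean.sh` emits a binaryproto, so Python users must convert it before use. A minimal pycaffe sketch, run from the Caffe root; `blobproto_to_array` is pycaffe's protobuf helper, and the expected shape is an assumption based on the 256x256 resize described above:

    # Sketch: load data/ilsvrc12/imagenet_mean.binaryproto as a numpy array.
    from caffe.proto import caffe_pb2
    import caffe.io

    blob = caffe_pb2.BlobProto()
    with open('data/ilsvrc12/imagenet_mean.binaryproto', 'rb') as f:
        blob.ParseFromString(f.read())

    mean = caffe.io.blobproto_to_array(blob)[0]  # drop the leading batch axis
    print(mean.shape)  # expected (3, 256, 256): BGR channels, 256x256 mean image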
diff --git a/examples/net_surgery.ipynb b/examples/net_surgery.ipynb
index 2d8bbb10..336c8457 100644
--- a/examples/net_surgery.ipynb
+++ b/examples/net_surgery.ipynb
@@ -3,7 +3,6 @@
"description": "How to do net surgery and manually change model parameters, making a fully-convolutional classifier for dense feature extraction.",
"example_name": "Editing model parameters",
"include_in_docs": true,
- "signature": "sha256:10c551b31a64c2210f6094dbb603f26c206a7b72cd99032f475cb5023edcdc43"
},
"nbformat": 3,
"nbformat_minor": 0,
@@ -27,7 +26,7 @@
"cell_type": "code",
"collapsed": false,
"input": [
- "!diff imagenet/imagenet_full_conv.prototxt imagenet/imagenet_deploy.prototxt"
+ "!diff imagenet/imagenet_full_conv.prototxt ../models/bvlc_reference_caffenet/deploy.prototxt"
],
"language": "python",
"metadata": {},
@@ -144,7 +143,7 @@
"import caffe\n",
"\n",
"# Load the original network and extract the fully-connected layers' parameters.\n",
- "net = caffe.Net('imagenet/imagenet_deploy.prototxt', 'imagenet/caffe_reference_imagenet_model')\n",
+ "net = caffe.Net('../models/bvlc_reference_caffenet/deploy.prototxt', 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel')\n",
"params = ['fc6', 'fc7', 'fc8']\n",
"# fc_params = {name: (weights, biases)}\n",
"fc_params = {pr: (net.params[pr][0].data, net.params[pr][1].data) for pr in params}\n",
@@ -179,7 +178,7 @@
"collapsed": false,
"input": [
"# Load the fully-convolutional network to transplant the parameters.\n",
- "net_full_conv = caffe.Net('imagenet/imagenet_full_conv.prototxt', 'imagenet/caffe_reference_imagenet_model')\n",
+ "net_full_conv = caffe.Net('imagenet/imagenet_full_conv.prototxt', '../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel')\n",
"params_full_conv = ['fc6-conv', 'fc7-conv', 'fc8-conv']\n",
"# conv_params = {name: (weights, biases)}\n",
"conv_params = {pr: (net_full_conv.params[pr][0].data, net_full_conv.params[pr][1].data) for pr in params_full_conv}\n",
@@ -350,4 +349,4 @@
"metadata": {}
}
]
-}
\ No newline at end of file
+}
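Between the two `caffe.Net` loads shown above, the surgery itself amounts to a flat copy of each inner-product blob into its convolutional counterpart. A minimal sketch using the `params`/`fc_params` and `params_full_conv`/`conv_params` names defined in the cells in this diff; it relies on the coefficient counts matching (e.g. fc6's 4096x9216 weights laid out as 4096x256x6x6):

    # Sketch: transplant fully-connected parameters into the convolutional
    # layers. Shapes differ, but the flattened coefficients correspond 1:1.
    for pr, pr_conv in zip(params, params_full_conv):
        conv_params[pr_conv][0].flat = fc_params[pr][0].flat  # weights
        conv_params[pr_conv][1][...] = fc_params[pr][1]       # biases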
diff --git a/examples/web_demo/app.py b/examples/web_demo/app.py
index f7f46ce6..d33fc92f 100644
--- a/examples/web_demo/app.py
+++ b/examples/web_demo/app.py
@@ -98,9 +98,9 @@ def allowed_file(filename):
class ImagenetClassifier(object):
default_args = {
'model_def_file': (
- '{}/examples/imagenet/imagenet_deploy.prototxt'.format(REPO_DIRNAME)),
+ '{}/models/bvlc_reference_caffenet/deploy.prototxt'.format(REPO_DIRNAME)),
'pretrained_model_file': (
- '{}/examples/imagenet/caffe_reference_imagenet_model'.format(REPO_DIRNAME)),
+ '{}/models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel'.format(REPO_DIRNAME)),
'mean_file': (
'{}/python/caffe/imagenet/ilsvrc_2012_mean.npy'.format(REPO_DIRNAME)),
'class_labels_file': (
diff --git a/examples/web_demo/readme.md b/examples/web_demo/readme.md
index 3c8fdc06..fe74b9ef 100644
--- a/examples/web_demo/readme.md
+++ b/examples/web_demo/readme.md
@@ -13,7 +13,11 @@ priority: 10
The demo server requires Python with some dependencies.
To make sure you have the dependencies, please run `pip install -r examples/web_demo/requirements.txt`, and also make sure that you've compiled the Python Caffe interface and that it is on your `PYTHONPATH` (see [installation instructions](/installation.html)).
-Make sure that you have obtained the Caffe Reference ImageNet Model and the ImageNet Auxiliary Data ([instructions](/getting_pretrained_models.html)).
+Make sure that you have obtained the Reference CaffeNet Model and the ImageNet Auxiliary Data:
+
+ ./scripts/download_model_binary.py models/bvlc_reference_caffenet
+ ./data/ilsvrc12/get_ilsvrc_aux.sh
+
NOTE: if you run into trouble, try re-downloading the auxiliary files.
## Run
diff --git a/matlab/caffe/matcaffe_batch.m b/matlab/caffe/matcaffe_batch.m
index 3cb7f144..f6d1aa83 100644
--- a/matlab/caffe/matcaffe_batch.m
+++ b/matlab/caffe/matcaffe_batch.m
@@ -27,9 +27,8 @@ if ischar(list_im)
filename = list_im;
list_im = read_cell(filename);
end
-% Adjust the batch size to match with imagenet_deploy.prototxt
+% Adjust the batch size and dim to match models/bvlc_reference_caffenet/deploy.prototxt
batch_size = 10;
-% Adjust dim to the output size of imagenet_deploy.prototxt
dim = 1000;
disp(list_im)
if mod(length(list_im),batch_size)
diff --git a/matlab/caffe/matcaffe_init.m b/matlab/caffe/matcaffe_init.m
index 4e4ef8bf..7cc69357 100644
--- a/matlab/caffe/matcaffe_init.m
+++ b/matlab/caffe/matcaffe_init.m
@@ -8,11 +8,11 @@ if nargin < 1
end
if nargin < 2 || isempty(model_def_file)
% By default use imagenet_deploy
- model_def_file = '../../examples/imagenet/imagenet_deploy.prototxt';
+ model_def_file = '../../models/bvlc_reference_caffenet/deploy.prototxt';
end
if nargin < 3 || isempty(model_file)
% By default use caffe reference model
- model_file = '../../examples/imagenet/caffe_reference_imagenet_model';
+ model_file = '../../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel';
end
diff --git a/python/classify.py b/python/classify.py
index ddc5429f..873b5e38 100755
--- a/python/classify.py
+++ b/python/classify.py
@@ -31,13 +31,13 @@ def main(argv):
parser.add_argument(
"--model_def",
default=os.path.join(pycaffe_dir,
- "../examples/imagenet/imagenet_deploy.prototxt"),
+ "../models/bvlc_reference_caffenet/deploy.prototxt"),
help="Model definition file."
)
parser.add_argument(
"--pretrained_model",
default=os.path.join(pycaffe_dir,
- "../examples/imagenet/caffe_reference_imagenet_model"),
+ "../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel"),
help="Trained model weights file."
)
parser.add_argument(
diff --git a/python/detect.py b/python/detect.py
index 4598fc7a..bc8c0703 100755
--- a/python/detect.py
+++ b/python/detect.py
@@ -46,13 +46,13 @@ def main(argv):
parser.add_argument(
"--model_def",
default=os.path.join(pycaffe_dir,
- "../examples/imagenet/imagenet_deploy.prototxt"),
+ "../models/bvlc_reference_caffenet/deploy.prototxt.prototxt"),
help="Model definition file."
)
parser.add_argument(
"--pretrained_model",
default=os.path.join(pycaffe_dir,
- "../examples/imagenet/caffe_reference_imagenet_model"),
+ "../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel"),
help="Trained model weights file."
)
parser.add_argument(