author    Evan Shelhamer <shelhamer@imaginarynumber.net>  2014-09-24 14:48:04 -0700
committer Evan Shelhamer <shelhamer@imaginarynumber.net>  2014-09-24 14:48:04 -0700
commit    90bd50d4c09e3ba1beebfb30cb63556c28fa8c1b (patch)
tree      2bdd39f981fad63519cff5e650a2b3e020c6f8ce /examples
parent    2c8d946b67d5024ec56d6abc85ea53ed33df8fa4 (diff)
parent    4bb198ae4431f3c24b6faf9ab3a63841a50b1ced (diff)
Back-merge:

- Fixed param order of cv::Size in cv::resize
- switch examples to lmdb (except for custom data loaders)
- fix cifar10 paths so they can be run from caffe root
- default backend to lmdb for image conversion and mean computation
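A minimal sketch of the backend selection these scripts now share, assuming the `$BACKEND` variable convention shown in `create_mnist.sh` below (lmdb becomes the default; leveldb stays available opt-in):

```shell
#!/usr/bin/env sh
# Sketch: pick the DB backend the way create_mnist.sh does.
# With nothing set in the environment, lmdb is the default.
BACKEND=${BACKEND:-lmdb}
echo "Creating ${BACKEND} datasets..."
```

Running it with no environment set prints `Creating lmdb datasets...`; invoking it as `BACKEND=leveldb ./script.sh` keeps the old behavior.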
Diffstat (limited to 'examples')

-rwxr-xr-x  examples/cifar10/create_cifar10.sh      2
-rwxr-xr-x  examples/imagenet/create_imagenet.sh   10
-rw-r--r--  examples/mnist/convert_mnist_data.cpp   7
-rwxr-xr-x  examples/mnist/create_mnist.sh          2
-rw-r--r--  examples/mnist/readme.md                9

5 files changed, 15 insertions, 15 deletions
diff --git a/examples/cifar10/create_cifar10.sh b/examples/cifar10/create_cifar10.sh
index ad5038e0..dfba7cca 100755
--- a/examples/cifar10/create_cifar10.sh
+++ b/examples/cifar10/create_cifar10.sh
@@ -13,6 +13,6 @@ rm -rf $EXAMPLE/cifar10_train_leveldb $EXAMPLE/cifar10_test_leveldb
 echo "Computing image mean..."
 
 ./build/tools/compute_image_mean $EXAMPLE/cifar10_train_leveldb \
-  $EXAMPLE/mean.binaryproto
+  $EXAMPLE/mean.binaryproto leveldb
 
 echo "Done."
diff --git a/examples/imagenet/create_imagenet.sh b/examples/imagenet/create_imagenet.sh
index a286b8fe..e912ac43 100755
--- a/examples/imagenet/create_imagenet.sh
+++ b/examples/imagenet/create_imagenet.sh
@@ -1,5 +1,5 @@
 #!/usr/bin/env sh
-# Create the imagenet leveldb inputs
+# Create the imagenet lmdb inputs
 # N.B. set the path to the imagenet train + val data dirs
 
 EXAMPLE=examples/imagenet
@@ -34,7 +34,7 @@ if [ ! -d "$VAL_DATA_ROOT" ]; then
   exit 1
 fi
 
-echo "Creating train leveldb..."
+echo "Creating train lmdb..."
 
 GLOG_logtostderr=1 $TOOLS/convert_imageset \
     --resize_height=$RESIZE_HEIGHT \
@@ -42,9 +42,9 @@ GLOG_logtostderr=1 $TOOLS/convert_imageset \
     --shuffle \
     $TRAIN_DATA_ROOT \
     $DATA/train.txt \
-    $EXAMPLE/ilsvrc12_train_leveldb
+    $EXAMPLE/ilsvrc12_train_lmdb
 
-echo "Creating val leveldb..."
+echo "Creating val lmdb..."
 
 GLOG_logtostderr=1 $TOOLS/convert_imageset \
     --resize_height=$RESIZE_HEIGHT \
@@ -52,6 +52,6 @@ GLOG_logtostderr=1 $TOOLS/convert_imageset \
     --shuffle \
     $VAL_DATA_ROOT \
     $DATA/val.txt \
-    $EXAMPLE/ilsvrc12_val_leveldb
+    $EXAMPLE/ilsvrc12_val_lmdb
 
 echo "Done."
diff --git a/examples/mnist/convert_mnist_data.cpp b/examples/mnist/convert_mnist_data.cpp
index 19040153..2749e452 100644
--- a/examples/mnist/convert_mnist_data.cpp
+++ b/examples/mnist/convert_mnist_data.cpp
@@ -1,6 +1,5 @@
-//
-// This script converts the MNIST dataset to the leveldb format used
-// by caffe to perform classification.
+// This script converts the MNIST dataset to a lmdb (default) or
+// leveldb (--backend=leveldb) format used by caffe to load data.
 // Usage:
 //    convert_mnist_data [FLAGS] input_image_file input_label_file
 //                        output_db_file
@@ -176,7 +175,7 @@ int main(int argc, char** argv) {
 #endif
 
   gflags::SetUsageMessage("This script converts the MNIST dataset to\n"
-        "the leveldb/lmdb format used by Caffe to perform classification.\n"
+        "the lmdb/leveldb format used by Caffe to load data.\n"
         "Usage:\n"
         "    convert_mnist_data [FLAGS] input_image_file input_label_file "
         "output_db_file\n"
diff --git a/examples/mnist/create_mnist.sh b/examples/mnist/create_mnist.sh
index 8c43dc33..06ecc27d 100755
--- a/examples/mnist/create_mnist.sh
+++ b/examples/mnist/create_mnist.sh
@@ -1,5 +1,5 @@
 #!/usr/bin/env sh
-# This script converts the mnist data into leveldb/lmdb format,
+# This script converts the mnist data into lmdb/leveldb format,
 # depending on the value assigned to $BACKEND.
 
 EXAMPLE=examples/mnist
diff --git a/examples/mnist/readme.md b/examples/mnist/readme.md
index ac1a0b7d..44e0091f 100644
--- a/examples/mnist/readme.md
+++ b/examples/mnist/readme.md
@@ -19,7 +19,7 @@ You will first need to download and convert the data format from the MNIST websi
 
     cd $CAFFE_ROOT/examples/mnist
     ./create_mnist.sh
 
-If it complains that `wget` or `gunzip` are not installed, you need to install them respectively. After running the script there should be two datasets, `mnist-train-leveldb`, and `mnist-test-leveldb`.
+If it complains that `wget` or `gunzip` are not installed, you need to install them respectively. After running the script there should be two datasets, `mnist_train_lmdb`, and `mnist_test_lmdb`.
 
 ## LeNet: the MNIST Classification Model
 
@@ -37,13 +37,14 @@ Specifically, we will write a `caffe::NetParameter` (or in python, `caffe.proto.
 
 ### Writing the Data Layer
 
-Currently, we will read the MNIST data from the leveldb we created earlier in the demo. This is defined by a data layer:
+Currently, we will read the MNIST data from the lmdb we created earlier in the demo. This is defined by a data layer:
 
     layers {
       name: "mnist"
      type: DATA
      data_param {
-        source: "mnist-train-leveldb"
+        source: "mnist_train_lmdb"
+        backend: LMDB
        batch_size: 64
        scale: 0.00390625
      }
@@ -51,7 +52,7 @@ Currently, we will read the MNIST data from the leveldb we created earlier in th
      top: "label"
    }
 
-Specifically, this layer has name `mnist`, type `data`, and it reads the data from the given leveldb source. We will use a batch size of 64, and scale the incoming pixels so that they are in the range \[0,1\). Why 0.00390625? It is 1 divided by 256. And finally, this layer produces two blobs, one is the `data` blob, and one is the `label` blob.
+Specifically, this layer has name `mnist`, type `data`, and it reads the data from the given lmdb source. We will use a batch size of 64, and scale the incoming pixels so that they are in the range \[0,1\). Why 0.00390625? It is 1 divided by 256. And finally, this layer produces two blobs, one is the `data` blob, and one is the `label` blob.
 
 ### Writing the Convolution Layer
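The readme's note that 0.00390625 is 1 divided by 256 can be checked directly; a quick arithmetic sketch in plain shell (via awk, no Caffe involved):

```shell
#!/usr/bin/env sh
# scale = 1/256 maps 8-bit pixel values [0, 255] into the range [0, 1).
awk 'BEGIN { printf "scale  = %.8f\n", 1 / 256 }'   # scale  = 0.00390625
awk 'BEGIN { printf "max px = %.8f\n", 255 / 256 }' # max px = 0.99609375
```

The largest input value, 255, maps to 255/256, which stays strictly below 1, matching the stated range \[0,1\).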