-rwxr-xr-x  examples/cifar10/create_cifar10.sh                 |  2 +-
-rwxr-xr-x  examples/imagenet/create_imagenet.sh               | 10 +++++-----
-rw-r--r--  examples/mnist/convert_mnist_data.cpp              |  7 +++----
-rwxr-xr-x  examples/mnist/create_mnist.sh                     |  2 +-
-rw-r--r--  examples/mnist/readme.md                           |  9 +++++----
-rw-r--r--  models/bvlc_alexnet/train_val.prototxt             |  6 ++++--
-rw-r--r--  models/bvlc_reference_caffenet/train_val.prototxt  |  6 ++++--
7 files changed, 23 insertions(+), 19 deletions(-)
diff --git a/examples/cifar10/create_cifar10.sh b/examples/cifar10/create_cifar10.sh
index ad5038e0..dfba7cca 100755
--- a/examples/cifar10/create_cifar10.sh
+++ b/examples/cifar10/create_cifar10.sh
@@ -13,6 +13,6 @@ rm -rf $EXAMPLE/cifar10_train_leveldb $EXAMPLE/cifar10_test_leveldb
echo "Computing image mean..."
./build/tools/compute_image_mean $EXAMPLE/cifar10_train_leveldb \
- $EXAMPLE/mean.binaryproto
+ $EXAMPLE/mean.binaryproto leveldb
echo "Done."
diff --git a/examples/imagenet/create_imagenet.sh b/examples/imagenet/create_imagenet.sh
index a286b8fe..e912ac43 100755
--- a/examples/imagenet/create_imagenet.sh
+++ b/examples/imagenet/create_imagenet.sh
@@ -1,5 +1,5 @@
#!/usr/bin/env sh
-# Create the imagenet leveldb inputs
+# Create the imagenet lmdb inputs
# N.B. set the path to the imagenet train + val data dirs
EXAMPLE=examples/imagenet
@@ -34,7 +34,7 @@ if [ ! -d "$VAL_DATA_ROOT" ]; then
exit 1
fi
-echo "Creating train leveldb..."
+echo "Creating train lmdb..."
GLOG_logtostderr=1 $TOOLS/convert_imageset \
--resize_height=$RESIZE_HEIGHT \
@@ -42,9 +42,9 @@ GLOG_logtostderr=1 $TOOLS/convert_imageset \
--shuffle \
$TRAIN_DATA_ROOT \
$DATA/train.txt \
- $EXAMPLE/ilsvrc12_train_leveldb
+ $EXAMPLE/ilsvrc12_train_lmdb
-echo "Creating val leveldb..."
+echo "Creating val lmdb..."
GLOG_logtostderr=1 $TOOLS/convert_imageset \
--resize_height=$RESIZE_HEIGHT \
@@ -52,6 +52,6 @@ GLOG_logtostderr=1 $TOOLS/convert_imageset \
--shuffle \
$VAL_DATA_ROOT \
$DATA/val.txt \
- $EXAMPLE/ilsvrc12_val_leveldb
+ $EXAMPLE/ilsvrc12_val_lmdb
echo "Done."
diff --git a/examples/mnist/convert_mnist_data.cpp b/examples/mnist/convert_mnist_data.cpp
index 19040153..2749e452 100644
--- a/examples/mnist/convert_mnist_data.cpp
+++ b/examples/mnist/convert_mnist_data.cpp
@@ -1,6 +1,5 @@
-//
-// This script converts the MNIST dataset to the leveldb format used
-// by caffe to perform classification.
+// This script converts the MNIST dataset to a lmdb (default) or
+// leveldb (--backend=leveldb) format used by caffe to load data.
// Usage:
// convert_mnist_data [FLAGS] input_image_file input_label_file
// output_db_file
@@ -176,7 +175,7 @@ int main(int argc, char** argv) {
#endif
gflags::SetUsageMessage("This script converts the MNIST dataset to\n"
- "the leveldb/lmdb format used by Caffe to perform classification.\n"
+ "the lmdb/leveldb format used by Caffe to load data.\n"
"Usage:\n"
" convert_mnist_data [FLAGS] input_image_file input_label_file "
"output_db_file\n"
diff --git a/examples/mnist/create_mnist.sh b/examples/mnist/create_mnist.sh
index 8c43dc33..06ecc27d 100755
--- a/examples/mnist/create_mnist.sh
+++ b/examples/mnist/create_mnist.sh
@@ -1,5 +1,5 @@
#!/usr/bin/env sh
-# This script converts the mnist data into leveldb/lmdb format,
+# This script converts the mnist data into lmdb/leveldb format,
# depending on the value assigned to $BACKEND.
EXAMPLE=examples/mnist
diff --git a/examples/mnist/readme.md b/examples/mnist/readme.md
index ac1a0b7d..44e0091f 100644
--- a/examples/mnist/readme.md
+++ b/examples/mnist/readme.md
@@ -19,7 +19,7 @@ You will first need to download and convert the data format from the MNIST websi
cd $CAFFE_ROOT/examples/mnist
./create_mnist.sh
-If it complains that `wget` or `gunzip` are not installed, you need to install them respectively. After running the script there should be two datasets, `mnist-train-leveldb`, and `mnist-test-leveldb`.
+If it complains that `wget` or `gunzip` are not installed, you need to install them respectively. After running the script there should be two datasets, `mnist_train_lmdb`, and `mnist_test_lmdb`.
## LeNet: the MNIST Classification Model
@@ -37,13 +37,14 @@ Specifically, we will write a `caffe::NetParameter` (or in python, `caffe.proto.
### Writing the Data Layer
-Currently, we will read the MNIST data from the leveldb we created earlier in the demo. This is defined by a data layer:
+Currently, we will read the MNIST data from the lmdb we created earlier in the demo. This is defined by a data layer:
layers {
name: "mnist"
type: DATA
data_param {
- source: "mnist-train-leveldb"
+ source: "mnist_train_lmdb"
+ backend: LMDB
batch_size: 64
scale: 0.00390625
}
@@ -51,7 +52,7 @@ Currently, we will read the MNIST data from the leveldb we created earlier in th
top: "label"
}
-Specifically, this layer has name `mnist`, type `data`, and it reads the data from the given leveldb source. We will use a batch size of 64, and scale the incoming pixels so that they are in the range \[0,1\). Why 0.00390625? It is 1 divided by 256. And finally, this layer produces two blobs, one is the `data` blob, and one is the `label` blob.
+Specifically, this layer has name `mnist`, type `data`, and it reads the data from the given lmdb source. We will use a batch size of 64, and scale the incoming pixels so that they are in the range \[0,1\). Why 0.00390625? It is 1 divided by 256. And finally, this layer produces two blobs, one is the `data` blob, and one is the `label` blob.
### Writing the Convolution Layer
diff --git a/models/bvlc_alexnet/train_val.prototxt b/models/bvlc_alexnet/train_val.prototxt
index 69b8916d..717b6fa4 100644
--- a/models/bvlc_alexnet/train_val.prototxt
+++ b/models/bvlc_alexnet/train_val.prototxt
@@ -5,7 +5,8 @@ layers {
top: "data"
top: "label"
data_param {
- source: "examples/imagenet/ilsvrc12_train_leveldb"
+ source: "examples/imagenet/ilsvrc12_train_lmdb"
+ backend: LMDB
batch_size: 256
}
transform_param {
@@ -21,7 +22,8 @@ layers {
top: "data"
top: "label"
data_param {
- source: "examples/imagenet/ilsvrc12_val_leveldb"
+ source: "examples/imagenet/ilsvrc12_val_lmdb"
+ backend: LMDB
batch_size: 50
}
transform_param {
diff --git a/models/bvlc_reference_caffenet/train_val.prototxt b/models/bvlc_reference_caffenet/train_val.prototxt
index d6d64073..073d8aef 100644
--- a/models/bvlc_reference_caffenet/train_val.prototxt
+++ b/models/bvlc_reference_caffenet/train_val.prototxt
@@ -5,7 +5,8 @@ layers {
top: "data"
top: "label"
data_param {
- source: "examples/imagenet/ilsvrc12_train_leveldb"
+ source: "examples/imagenet/ilsvrc12_train_lmdb"
+ backend: LMDB
batch_size: 256
}
transform_param {
@@ -21,7 +22,8 @@ layers {
top: "data"
top: "label"
data_param {
- source: "examples/imagenet/ilsvrc12_val_leveldb"
+ source: "examples/imagenet/ilsvrc12_val_lmdb"
+ backend: LMDB
batch_size: 50
}
transform_param {
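For illustration, a minimal sketch of running the converter documented in the updated usage message by hand; the binary path and the MNIST input file names are assumptions based on Caffe's usual layout, not part of this patch.

#!/usr/bin/env sh
# Sketch only: convert MNIST by hand with the tool described above.
# convert_mnist_data writes lmdb by default; pass --backend=leveldb to
# keep the old format. The paths below are assumed, not taken from this
# patch -- adjust them to your checkout.
cd $CAFFE_ROOT

./build/examples/mnist/convert_mnist_data.bin \
    data/mnist/train-images-idx3-ubyte \
    data/mnist/train-labels-idx1-ubyte \
    examples/mnist/mnist_train_lmdb

./build/examples/mnist/convert_mnist_data.bin --backend=leveldb \
    data/mnist/train-images-idx3-ubyte \
    data/mnist/train-labels-idx1-ubyte \
    examples/mnist/mnist_train_leveldb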