author    Keiji Yoshida <yoshida.keiji.84@gmail.com>  2015-07-12 20:06:31 +0900
committer Keiji Yoshida <yoshida.keiji.84@gmail.com>  2015-07-12 20:06:31 +0900
commit    4ccc052e6e2bd4293dfb530febc9bf441786363d (patch)
tree      2ded7179749600e493b92197dca517eff1837498 /docs
parent    7e5608f3ceeeb4a3610bd82678432500c63256fc (diff)
Update net_layer_blob.md
Diffstat (limited to 'docs')
 docs/tutorial/net_layer_blob.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/tutorial/net_layer_blob.md b/docs/tutorial/net_layer_blob.md
index e8b7bd31..d6df7374 100644
--- a/docs/tutorial/net_layer_blob.md
+++ b/docs/tutorial/net_layer_blob.md
@@ -19,7 +19,7 @@ Blobs conceal the computational and mental overhead of mixed CPU/GPU operation b
The conventional blob dimensions for batches of image data are number N x channel K x height H x width W. Blob memory is row-major in layout, so the last / rightmost dimension changes fastest. For example, in a 4D blob, the value at index (n, k, h, w) is physically located at index ((n * K + k) * H + h) * W + w.
-- Number / N is the batch size of the data. Batch processing achieves better throughput for communication and device processing. For an ImageNet training batch of 256 images B = 256.
+- Number / N is the batch size of the data. Batch processing achieves better throughput for communication and device processing. For an ImageNet training batch of 256 images N = 256.
- Channel / K is the feature dimension e.g. for RGB images K = 3.
Note that although many blobs in Caffe examples are 4D with axes for image applications, it is totally valid to use blobs for non-image applications. For example, if you simply need fully-connected networks like the conventional multi-layer perceptron, use 2D blobs (shape (N, D)) and call the InnerProductLayer (which we will cover soon).
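As a quick sanity check on the indexing described in the tutorial text above (this sketch is not part of the commit or of Caffe itself), the following minimal Python/NumPy snippet verifies that the offset formula ((n * K + k) * H + h) * W + w matches C-order (row-major) flattening of a 4D array. The shape and indices are made-up illustrative values.

```python
# Minimal sketch: check the row-major offset formula against NumPy's
# C-order (row-major) flattening of a 4D array shaped (N, K, H, W).
import numpy as np

N, K, H, W = 2, 3, 4, 5                               # tiny illustrative batch
blob = np.arange(N * K * H * W).reshape(N, K, H, W)   # C-order layout

n, k, h, w = 1, 2, 3, 4
offset = ((n * K + k) * H + h) * W + w                # formula from the tutorial
assert blob[n, k, h, w] == blob.ravel()[offset]
print("flat offset:", offset)                         # 119 for these indices
```

The assertion holds because NumPy's default layout is also row-major, so the last / rightmost dimension (W) changes fastest, exactly as described for blob memory.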