author     Evan Shelhamer <shelhamer@imaginarynumber.net>  2014-10-14 23:48:38 -0700
committer  Evan Shelhamer <shelhamer@imaginarynumber.net>  2014-10-14 23:48:38 -0700
commit     5c4125da3ece192eeabd46dbb9e289071a7637b1 (patch)
tree       2353dba872f08b20e5a52f6b01118b6cdae5283f /docs
parent     4302304ddc678723d5ea308f637fcd4d45252c76 (diff)
[docs] proofreading suggested by @cNikolaou
Diffstat (limited to 'docs')
-rw-r--r--  docs/tutorial/net_layer_blob.md | 4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/docs/tutorial/net_layer_blob.md b/docs/tutorial/net_layer_blob.md
index b500aaa6..1f0966f8 100644
--- a/docs/tutorial/net_layer_blob.md
+++ b/docs/tutorial/net_layer_blob.md
@@ -26,7 +26,7 @@ Note that although we have designed blobs with its dimensions corresponding to i
Caffe operations are general with respect to the channel dimension / K. Grayscale and hyperspectral imagery are fine. Caffe can likewise model and process arbitrary vectors in blobs with singleton. That is, the shape of blob holding 1000 vectors of 16 feature dimensions is 1000 x 16 x 1 x 1.
-Parameter blob dimensions vary according to the type and configuration of the layer. For a convolution layer with 96 filters of 11 x 11 spatial dimension and 3 inputs the blob is 96 x 3 x 11 x 11. For an inner product / fully-connected layer with 1000 output channels and 1024 input channels the parameter blob is 1 x 1 x 1000 x 4096.
+Parameter blob dimensions vary according to the type and configuration of the layer. For a convolution layer with 96 filters of 11 x 11 spatial dimension and 3 inputs the blob is 96 x 3 x 11 x 11. For an inner product / fully-connected layer with 1000 output channels and 1024 input channels the parameter blob is 1 x 1 x 1000 x 1024.
For custom data it may be necessary to hack your own input preparation tool or data layer. However once your data is in your job is done. The modularity of layers accomplishes the rest of the work for you.
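
As a concrete illustration of the parameter blob shapes discussed in the hunk above, a minimal sketch in Caffe's prototxt modeling language might look as follows. This is not part of the commit; the layer names, bottoms, and tops are hypothetical, and the `layers`-style syntax of this era is assumed:

```
# Sketch only: a convolution layer with 96 filters of 11 x 11 over a
# 3-channel input, so its weight blob is 96 x 3 x 11 x 11.
layers {
  name: "conv1"
  type: CONVOLUTION
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 96
    kernel_size: 11
  }
}
# An inner product layer with 1000 outputs over a 1024-dimensional input,
# so its weight blob is 1 x 1 x 1000 x 1024 in the 4D blob convention.
layers {
  name: "fc"
  type: INNER_PRODUCT
  bottom: "features"
  top: "fc"
  inner_product_param {
    num_output: 1000
  }
}
```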
@@ -85,7 +85,7 @@ Developing custom layers requires minimal effort by the compositionality of the
The net jointly defines a function and its gradient by composition and auto-differentiation. The composition of every layer's output computes the function to do a given task, and the composition of every layer's backward computes the gradient from the loss to learn the task. Caffe models are end-to-end machine learning engines.
-The net is a set of layers connected in a computation graph -- a DAG / directed acyclic graph to be exact. Caffe does all the bookkeeping for any DAG of layers to ensure correctness of the forward and backward passes. A typical net begins with a data layer that loads from disk and ends with a loss layer that computes the objective for a task such as classification or reconstruction.
+The net is a set of layers connected in a computation graph -- a directed acyclic graph (DAG) to be exact. Caffe does all the bookkeeping for any DAG of layers to ensure correctness of the forward and backward passes. A typical net begins with a data layer that loads from disk and ends with a loss layer that computes the objective for a task such as classification or reconstruction.
The net is defined as a set of layers and their connections in a plaintext modeling language.
A simple logistic regression classifier
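
To illustrate the plaintext modeling language, a logistic regression classifier can be sketched as a data layer feeding an inner product layer and a softmax loss. This is a sketch, not the tutorial's verbatim prototxt; the data source, batch size, and layer names are assumptions:

```
name: "LogReg"
layers {
  name: "mnist"
  type: DATA
  top: "data"
  top: "label"
  data_param {
    source: "input_leveldb"   # hypothetical LevelDB source
    batch_size: 64
  }
}
layers {
  name: "ip"
  type: INNER_PRODUCT
  bottom: "data"
  top: "ip"
  inner_product_param {
    num_output: 2             # two classes
  }
}
layers {
  name: "loss"
  type: SOFTMAX_LOSS
  bottom: "ip"
  bottom: "label"
  top: "loss"
}
```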