author | Jeff Donahue <jeff.donahue@gmail.com> | 2015-03-25 17:44:37 -0700 |
---|---|---|
committer | Jeff Donahue <jeff.donahue@gmail.com> | 2015-05-14 18:22:00 -0700 |
commit | fa6169ee799f97f80d33d6b4525c7fd4b891774a (patch) | |
tree | fc3d81fd987a8f837f2289b744fb2cbeb56b3af4 /docs | |
parent | 4fb3c9e6a5ac80804c910639d14651c2ecdcb5f3 (diff) | |
download | caffeonacl-fa6169ee799f97f80d33d6b4525c7fd4b891774a.tar.gz caffeonacl-fa6169ee799f97f80d33d6b4525c7fd4b891774a.tar.bz2 caffeonacl-fa6169ee799f97f80d33d6b4525c7fd4b891774a.zip |
ReshapeLayer fixups for ND blobs
Diffstat (limited to 'docs')
-rw-r--r-- | docs/tutorial/layers.md | 33 |
1 file changed, 15 insertions, 18 deletions
diff --git a/docs/tutorial/layers.md b/docs/tutorial/layers.md
index 422ee01f..c4529e6a 100644
--- a/docs/tutorial/layers.md
+++ b/docs/tutorial/layers.md
@@ -421,15 +421,11 @@ The `FLATTEN` layer is a utility layer that flattens an input of shape `n * c *
 
 #### Reshape
 
-* LayerType: `RESHAPE`
-* CPU implementation: `./src/caffe/layers/reshape_layer.cpp`
-* CUDA GPU implementation: `./src/caffe/layers/reshape_layer.cu`
+* Layer type: `Reshape`
+* Implementation: `./src/caffe/layers/reshape_layer.cpp`
 * Parameters (`ReshapeParameter reshape_param`)
     - Optional: (also see detailed description below)
-      - `num` [default 0]
-      - `channels` [default 0]
-      - `width` [default 0]
-      - `height` [default 0]
+      - `shape`
 
 * Input
     - a single blob with arbitrary dimensions
@@ -438,28 +434,29 @@ The `FLATTEN` layer is a utility layer that flattens an input of shape `n * c *
 
 * Sample
 
-        layers {
+        layer {
           name: "reshape"
-          type: RESHAPE
+          type: "Reshape"
           bottom: "input"
           top: "output"
-
           reshape_param {
-            num: 0  # copy the dimension from below
-            channels: 2
-            width: 3
-            height: -1 # infer it from the other dimensions
+            shape {
+              dim: 0  # copy the dimension from below
+              dim: 2
+              dim: 3
+              dim: -1 # infer it from the other dimensions
+            }
           }
         }
 
-The `RESHAPE` layer can be used to change the dimensions of its input, without changing its data. Just like the `FLATTEN` layer, only the dimensions are changed, no data is copied in the process.
+The `Reshape` layer can be used to change the dimensions of its input, without changing its data. Just like the `Flatten` layer, only the dimensions are changed; no data is copied in the process.
 
 Output dimensions are specified by the `ReshapeParam` proto. Positive numbers are used directly, setting the corresponding dimension of the output blob. In addition, two special values are accepted for any of the target dimension values:
 
-* **0** means "copy the respective dimension of the bottom layer". That is, if the bottom layer has 2 channels, the top one will have 2 channels too, given `channels: 0` as target dimension. Since the default value of all the target dimensions is 0, omitting any of the target dimensions will also cause it to be copied.
-* **-1** stands for "infer this from the other dimensions". This behavior is similar to that of -1 in *numpy*'s or `[]` for *MATLAB*'s reshape: this dimension is calculated to keep the overall element count the same as in the bottom layer. If this is not possible, an error is raised. Also, at most one -1 can be used in a reshape operation.
+* **0** means "copy the respective dimension of the bottom layer". That is, if the bottom has 2 as its 1st dimension, the top will have 2 as its 1st dimension as well, given `dim: 0` as the 1st target dimension.
+* **-1** stands for "infer this from the other dimensions". This behavior is similar to that of -1 in *numpy*'s or `[]` for *MATLAB*'s reshape: this dimension is calculated to keep the overall element count the same as in the bottom layer. At most one -1 can be used in a reshape operation.
 
-As another example, giving `num: 0, channels: -1, height: 1, width: 1` as parameters makes the layer behave in exactly the same way as the `FLATTEN` layer.
+As another example, specifying `reshape_param { shape { dim: 0 dim: -1 } }` makes the layer behave in exactly the same way as the `Flatten` layer.
 
 #### Concatenation
 
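
To make the dimension rules in the updated documentation concrete, here is a small standalone Python sketch of the `dim` semantics: positive values set the axis directly, 0 copies the corresponding bottom axis, and -1 is inferred from the remaining element count. This is only an illustration written for this page, not Caffe's `ReshapeLayer` code; the helper name `infer_reshape` and the example shapes are made up.

    # Illustration only: mimics the documented dim rules (0 = copy, -1 = infer),
    # not Caffe's actual ReshapeLayer implementation.
    from functools import reduce

    def infer_reshape(bottom_shape, target_dims):
        """Compute a top shape from a bottom shape and a list of target dims."""
        top = []
        infer_axis = None
        for i, d in enumerate(target_dims):
            if d == 0:            # copy the corresponding bottom dimension
                top.append(bottom_shape[i])
            elif d == -1:         # remember this axis and fill it in below
                if infer_axis is not None:
                    raise ValueError("at most one -1 is allowed")
                infer_axis = i
                top.append(1)
            else:                 # positive values are used directly
                top.append(d)
        bottom_count = reduce(lambda a, b: a * b, bottom_shape, 1)
        known_count = reduce(lambda a, b: a * b, top, 1)
        if infer_axis is not None:
            if bottom_count % known_count != 0:
                raise ValueError("cannot infer -1: element counts do not divide")
            top[infer_axis] = bottom_count // known_count
        elif known_count != bottom_count:
            raise ValueError("reshape must keep the overall element count")
        return top

    # The sample above, applied to a bottom blob of shape 2 x 8 x 3 x 3:
    print(infer_reshape([2, 8, 3, 3], [0, 2, 3, -1]))  # [2, 2, 3, 12]
    # dim: 0 followed by dim: -1 behaves like the Flatten layer:
    print(infer_reshape([2, 8, 3, 3], [0, -1]))        # [2, 72]

The last call shows why `reshape_param { shape { dim: 0 dim: -1 } }` reproduces the `Flatten` behavior: the first axis is copied and everything else collapses into the inferred -1 axis.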