-rw-r--r--  docs/tutorial/layers.md  6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/docs/tutorial/layers.md b/docs/tutorial/layers.md
index eabc792b..7362aac2 100644
--- a/docs/tutorial/layers.md
+++ b/docs/tutorial/layers.md
@@ -39,7 +39,7 @@ In contrast, other layers (with few exceptions) ignore the spatial structure of
- `n * c_i * h_i * w_i`
* Output
- `n * c_o * h_o * w_o`, where `h_o = (h_i + 2 * pad_h - kernel_h) / stride_h + 1` and `w_o` likewise.
-* Sample (as seen in `./examples/imagenet/imagenet_train_val.prototxt`)
+* Sample (as seen in `./models/bvlc_reference_caffenet/train_val.prototxt`)
layer {
name: "conv1"
@@ -83,7 +83,7 @@ The `Convolution` layer convolves the input image with a set of learnable filter
- `n * c * h_i * w_i`
* Output
- `n * c * h_o * w_o`, where h_o and w_o are computed in the same way as convolution.
-* Sample (as seen in `./examples/imagenet/imagenet_train_val.prototxt`)
+* Sample (as seen in `./models/bvlc_reference_caffenet/train_val.prototxt`)
layer {
name: "pool1"
@@ -197,7 +197,7 @@ In general, activation / Neuron layers are element-wise operators, taking one bo
* Parameters (`ReLUParameter relu_param`)
- Optional
- `negative_slope` [default 0]: specifies whether to leak the negative part by multiplying it with the slope value rather than setting it to 0.
-* Sample (as seen in `./examples/imagenet/imagenet_train_val.prototxt`)
+* Sample (as seen in `./models/bvlc_reference_caffenet/train_val.prototxt`)
layer {
name: "relu1"