author    Aaron Schumacher <ajschumacher@gmail.com>    2015-06-15 19:53:05 -0400
committer Aaron Schumacher <ajschumacher@gmail.com>    2015-06-15 19:53:05 -0400
commit    d512d0c5c5efc276db7c864b6b5b6da41824cd88 (patch)
tree      4248e0987e2979228c600a3faaa7dbaf3a6ceb8b
parent    d163455d1e837814bc9cbb1b896e0830d933b66f (diff)
typo: "a fixed steps" to "at fixed steps"
fixing in the correct place as per @shelhamer's advice from #2602
-rw-r--r--  examples/mnist/readme.md  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/examples/mnist/readme.md b/examples/mnist/readme.md
index 269e53ab..413d4a1f 100644
--- a/examples/mnist/readme.md
+++ b/examples/mnist/readme.md
@@ -283,5 +283,5 @@ and you will be using CPU for training. Isn't that easy?
 MNIST is a small dataset, so training with GPU does not really introduce too much benefit due to communication overheads. On larger datasets with more complex models, such as ImageNet, the computation speed difference will be more significant.
 
-### How to reduce the learning rate a fixed steps?
+### How to reduce the learning rate at fixed steps?
 Look at lenet_multistep_solver.prototxt
 
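
For reference, the fixed-step schedule the readme points to is set with the "multistep" learning-rate policy in the solver prototxt. Below is a minimal sketch of the relevant fields; lr_policy, gamma, and the repeated stepvalue field are real Caffe SolverParameter options, but the specific numeric values here are illustrative assumptions — see examples/mnist/lenet_multistep_solver.prototxt for the actual settings.

```
# Sketch of a multistep solver configuration (illustrative values).
net: "examples/mnist/lenet_train_test.prototxt"
base_lr: 0.01           # starting learning rate
lr_policy: "multistep"  # drop the rate at the iterations listed below
gamma: 0.9              # multiply the rate by gamma at each stepvalue
stepvalue: 5000         # repeated field: one entry per scheduled drop
stepvalue: 7000
stepvalue: 8000
momentum: 0.9
weight_decay: 0.0005
max_iter: 10000
solver_mode: CPU
```

With these example values, the learning rate would stay at 0.01 until iteration 5000, then drop to 0.009, 0.0081, and 0.00729 after each listed stepvalue in turn.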