author | Evan Shelhamer <shelhamer@imaginarynumber.net> | 2015-07-27 10:58:11 -0700
---|---|---
committer | Evan Shelhamer <shelhamer@imaginarynumber.net> | 2015-07-27 10:59:11 -0700
commit | de6d444445261fc9859143bfd969d538ae7a2108 (patch) |
tree | f525b9a897c21dce68942bd4e0712d349af0ed4e /docs |
parent | ad8d12941f711b161a060db1b1ff35c7b5184a3d (diff) |
[docs] clear up PYTHONPATH confusion
Use the same language as the installation page to explain the Python
module path.
reported by @sdemyanov
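In practice, the wording this commit settles on maps to a short shell session: build the Python bindings, put the module directory on the path, and check the import. A minimal sketch, assuming the Caffe checkout lives at ~/caffe (the path and the import check are illustrative, not part of this commit):

    # build pycaffe, then expose caffe/python to the Python interpreter
    make pycaffe
    export PYTHONPATH=~/caffe/python:$PYTHONPATH
    # sanity check: prints the location of the caffe package that was picked up
    python -c "import caffe; print(caffe.__file__)"

Adding the export to a shell profile keeps `import caffe` working across sessions.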
Diffstat (limited to 'docs')
-rw-r--r-- | docs/tutorial/interfaces.md | 14 |
1 file changed, 7 insertions, 7 deletions
diff --git a/docs/tutorial/interfaces.md b/docs/tutorial/interfaces.md
index 12963318..40602948 100644
--- a/docs/tutorial/interfaces.md
+++ b/docs/tutorial/interfaces.md
@@ -11,8 +11,8 @@ The command line interface -- cmdcaffe -- is the `caffe` tool for model training

 **Training**: `caffe train` learns models from scratch, resumes learning from saved snapshots, and fine-tunes models to new data and tasks:

-* All training requires a solver configuration through the `-solver solver.prototxt` argument.
-* Resuming requires the `-snapshot model_iter_1000.solverstate` argument to load the solver snapshot.
+* All training requires a solver configuration through the `-solver solver.prototxt` argument.
+* Resuming requires the `-snapshot model_iter_1000.solverstate` argument to load the solver snapshot.
 * Fine-tuning requires the `-weights model.caffemodel` argument for the model initialization.

 For example, you can run:
@@ -31,8 +31,7 @@ For a full example of fine-tuning, see examples/finetuning_on_flickr_style, but

 **Testing**: `caffe test` scores models by running them in the test phase and reports the net output as its score. The net architecture must be properly defined to output an accuracy measure or loss as its output. The per-batch score is reported and then the grand average is reported last.

-    #
-    # score the learned LeNet model on the validation set as defined in the
+    # score the learned LeNet model on the validation set as defined in the
     # model architeture lenet_train_test.prototxt
     caffe test -model examples/mnist/lenet_train_test.prototxt -weights examples/mnist/lenet_iter_10000.caffemodel -gpu 0 -iterations 100
@@ -63,7 +62,8 @@ The Python interface -- pycaffe -- is the `caffe` module and its scripts in caff

 Tutorial IPython notebooks are found in caffe/examples: do `ipython notebook caffe/examples` to try them. For developer reference docstrings can be found throughout the code.

-Compile pycaffe by `make pycaffe`. The module dir caffe/python/caffe should be installed in your PYTHONPATH for `import caffe`.
+Compile pycaffe by `make pycaffe`.
+Add the module directory to your `$PYTHONPATH` by `export PYTHONPATH=/path/to/caffe/python:$PYTHONPATH` or the like for `import caffe`.

 ## MATLAB
@@ -182,7 +182,7 @@ To get a layer's type (string):

 #### Forward and backward

 Forward pass can be done using `net.forward` or `net.forward_prefilled`. Function `net.forward` takes in a cell array of N-D arrays containing data of input blob(s) and outputs a cell array containing data from output blob(s). Function `net.forward_prefilled` uses existing data in input blob(s) during forward pass, takes no input and produces no output. After creating some data for input blobs like `data = rand(net.blobs('data').shape);` you can run
-
+
     res = net.forward({data});
     prob = res{1};
@@ -202,7 +202,7 @@ Or

     net.blobs('prob').set_diff(prob_diff);
     net.backward_prefilled();
     data_diff = net.blobs('data').get_diff();
-
+
 **However, the backward computation above doesn't get correct results, because Caffe decides that the network does not need backward computation. To get correct backward results, you need to set `'force_backward: true'` in your network prototxt.**

 After performing forward or backward pass, you can also get the data or diff in internal blobs. For example, to extract pool5 features after forward pass:
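For reference, the `-solver`, `-snapshot`, and `-weights` arguments described in the first hunk combine into commands like the sketch below; the solver and snapshot filenames are illustrative placeholders rather than files touched by this commit:

    # train from scratch with a solver configuration
    caffe train -solver examples/mnist/lenet_solver.prototxt
    # resume training from a saved solver snapshot
    caffe train -solver examples/mnist/lenet_solver.prototxt -snapshot lenet_iter_1000.solverstate
    # fine-tune pretrained weights on new data and tasks
    caffe train -solver solver.prototxt -weights model.caffemodel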