Diffstat (limited to 'tests/tools/tflite_benchmark_model/README.md')

 tests/tools/tflite_benchmark_model/README.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/tests/tools/tflite_benchmark_model/README.md b/tests/tools/tflite_benchmark_model/README.md
index 8d997639f..a71a2fa1c 100644
--- a/tests/tools/tflite_benchmark_model/README.md
+++ b/tests/tools/tflite_benchmark_model/README.md
@@ -9,7 +9,7 @@ of runs. Aggregrate latency statistics are reported after running the benchmark.
 
 The instructions below are for running the binary on Desktop and Android,
 for iOS please use the
-[iOS benchmark app](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/tools/benchmark/ios).
+[iOS benchmark app](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/benchmark/ios).
 
 ## Parameters
 
@@ -45,14 +45,14 @@ and the following optional parameters:
 bazel build -c opt \
   --config=android_arm \
   --cxxopt='--std=c++11' \
-  tensorflow/contrib/lite/tools/benchmark:benchmark_model
+  tensorflow/lite/tools/benchmark:benchmark_model
 ```
 
 (2) Connect your phone. Push the binary to your phone with adb push (make the
 directory if required):
 
 ```
-adb push bazel-bin/tensorflow/contrib/lite/tools/benchmark/benchmark_model /data/local/tmp
+adb push bazel-bin/tensorflow/lite/tools/benchmark/benchmark_model /data/local/tmp
 ```
 
 (3) Make the binary executable.
@@ -79,14 +79,14 @@ adb shell /data/local/tmp/benchmark_model \
 (1) build the binary
 
 ```
-bazel build -c opt tensorflow/contrib/lite/tools/benchmark:benchmark_model
+bazel build -c opt tensorflow/lite/tools/benchmark:benchmark_model
 ```
 
 (2) Run on your compute graph, similar to the Android case but without the need of adb shell.
 For example:
 
 ```
-bazel-bin/tensorflow/contrib/lite/tools/benchmark/benchmark_model \
+bazel-bin/tensorflow/lite/tools/benchmark/benchmark_model \
   --graph=mobilenet_quant_v1_224.tflite \
   --num_threads=4
 ```
@@ -126,7 +126,7 @@ bazel build -c opt \
   --config=android_arm \
   --cxxopt='--std=c++11' \
   --copt=-DTFLITE_PROFILING_ENABLED \
-  tensorflow/contrib/lite/tools/benchmark:benchmark_model
+  tensorflow/lite/tools/benchmark:benchmark_model
 ```
 
 This compiles TFLite with profiling enabled, now you can run the benchmark binary like before. The binary will produce detailed statistics for each operation similar to those shown below:
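Every hunk in this diff makes the same mechanical change: dropping the `contrib/` component so `tensorflow/contrib/lite/...` becomes `tensorflow/lite/...`. As an illustrative sketch (not part of the change itself), the same rewrite can be applied to any stale command or script with `sed`; the input line below is taken from the diff, and the `sed` invocation is only a hypothetical helper:

```shell
# Rewrite the pre-rename TFLite path to the post-rename layout,
# mirroring what this diff does to the README by hand.
echo 'bazel build -c opt tensorflow/contrib/lite/tools/benchmark:benchmark_model' \
  | sed 's#tensorflow/contrib/lite#tensorflow/lite#g'
# prints: bazel build -c opt tensorflow/lite/tools/benchmark:benchmark_model
```

Using `#` as the `sed` delimiter avoids escaping the slashes in the paths being rewritten.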