author	Hyeongseok Oh / On-Device Lab (SR) / Staff Engineer / Samsung Electronics <hseok82.oh@samsung.com>	2019-05-02 12:57:26 +0900
committer	Chunseok Lee / On-Device Lab (SR) / Staff Engineer / Samsung Electronics <chunseok.lee@samsung.com>	2019-05-02 12:57:26 +0900
commit	da0116fd7847f53079b2bbf505648a40c2e6c8e0 (patch)
tree	1337623b7c778a06b7fedec527cffd00e49bd1f5 /docs
parent	58c58e7de0a433033191926fc7f19062b8cddf72 (diff)
Update sample app document (#5104)
- Describe that the runtime used with tensorflow lite is not officially supported
- Tensorflow -> tensorflow lite
- Update chapter "Build a model": title, etc.

Signed-off-by: Hyeongseok Oh <hseok82.oh@samsung.com>
Diffstat (limited to 'docs')
-rw-r--r--	docs/howto/HowtoMakeSampleAppOnNnfw.md	| 26
1 file changed, 15 insertions(+), 11 deletions(-)
diff --git a/docs/howto/HowtoMakeSampleAppOnNnfw.md b/docs/howto/HowtoMakeSampleAppOnNnfw.md
index 7a8689b99..0bd65c4b4 100644
--- a/docs/howto/HowtoMakeSampleAppOnNnfw.md
+++ b/docs/howto/HowtoMakeSampleAppOnNnfw.md
@@ -1,24 +1,28 @@
# How to make a sample app on nnfw
-## Build a model
+Our runtime `neurun` currently supports `NNAPI` as its interface. One way to use `NNAPI` efficiently is to use tensorflow lite. We provide an additional library in `/libs/tflite` to help you use tensorflow lite. (This library is not officially supported.)
-Include necessary headers. // TODO: which `BuiltinOpResolver`: ours or the one from tensorflow? Remove one of these includes
+To use tensorflow lite, you need to prepare a tensorflow lite model file, and you should know the input/output tensor names. Then you can write the sample app.
+
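If you need to double-check those tensor names, a minimal sketch (not part of the original document) is to query the interpreter built in the next section; `GetInputName`/`GetOutputName` are standard `tflite::Interpreter` calls:
```
// Sketch: print the input/output tensor names once `interpreter`
// (built in the next section) is available.
for (int i = 0; i < static_cast<int>(interpreter->inputs().size()); ++i)
  printf("input  %d: %s\n", i, interpreter->GetInputName(i));
for (int i = 0; i < static_cast<int>(interpreter->outputs().size()); ++i)
  printf("output %d: %s\n", i, interpreter->GetOutputName(i));
```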
+## Prepare the loaded tensorflow lite model object
+
+You can select one of two kernel registers: the official tensorflow lite kernel register, or the extended register (for pre-implemented custom ops); a short sketch of the matching resolver choice follows the includes below.
```
#include "tensorflow/contrib/lite/kernels/register.h"
#include "tflite/ext/kernels/register.h"
```
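Only one of the two headers is needed. The following sketch illustrates the choice (the `nnfw::tflite` namespace for the extended resolver is an assumption about the helper library in `/libs/tflite`):
```
// Pick exactly one resolver, matching the header you included above.

// Official tensorflow lite kernel register:
tflite::ops::builtin::BuiltinOpResolver resolver;

// Or the extended register with pre-implemented custom ops
// (assumption: the extended resolver is declared in the nnfw::tflite namespace):
// nnfw::tflite::BuiltinOpResolver resolver;
```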
+To use the tensorflow lite interpreter, you need the tensorflow lite interpreter session header:
```
-#include "tflite/NNAPISession.h"
+#include "tflite/InterpreterSession.h"
```
-For NNAPI usage, or
+For NNAPI usage, you need the NNAPI session header:
```
-#include "tflite/InterpreterSession.h"
+#include "tflite/NNAPISession.h"
```
-for TfLite interpreter.
-Load the model into `FlatBuffer`, create a `BuiltinOpResolver` and construct a tensorflow interpreter builder using them:
+Load the model file into a `FlatBufferModel` object, create a tensorflow lite operator resolver (`BuiltinOpResolver`), and construct a tensorflow lite interpreter builder using them:
```
tflite::StderrReporter error_reporter;
auto model = tflite::FlatBufferModel::BuildFromFile(model_file.c_str(), &error_reporter);
@@ -35,19 +39,19 @@ std::unique_ptr<tflite::Interpreter> interpreter;
builder(&interpreter);
```
-Create a tensorflow session to use NNAPI:
+Create a tensorflow lite session to use NNAPI:
```
std::shared_ptr<nnfw::tflite::Session> sess = std::make_shared<nnfw::tflite::NNAPISession>(interpreter.get());
```
-If you want to use TfLite interpreter instead of NNAPI, then:
+If you want to use the tensorflow lite interpreter instead of NNAPI, then:
```
std::shared_ptr<nnfw::tflite::Session> sess = std::make_shared<nnfw::tflite::InterpreterSession>(interpreter.get());
```
`NNAPISession` constructs a computational graph from the interpreter and builds the model.
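As a rough sketch of how the session is typically driven (only `run()` appears later in this document; `prepare()` and `teardown()` are assumed members of `nnfw::tflite::Session`):
```
// Assumed session lifecycle: prepare the session, run inference, release resources.
if (!sess->prepare())
{
  // handle preparation failure
}

// ... set inputs, call sess->run(), read outputs (shown below) ...

sess->teardown();
```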
-## Initialize the model
+## Prepare tensor memory allocation and model input for inference
Allocate the memory for tensors of `tflite::Interpreter`:
```
@@ -87,7 +91,7 @@ Run the inference
sess->run();
```
-Get the result from `interpreter->outputs()`. This is tensorflow specific, not nnfw, so one can use any method, that is applicable to Tensorflow, e.g.:
+Get the result from `interpreter->outputs()`. This is tensorflow lite specific, not nnfw specific, so you can use any method that is applicable to tensorflow lite, e.g.:
```
for (const auto &id : interpreter->outputs())
{