-rw-r--r-- | docs/howto/HowtoMakeSampleAppOnNnfw.md | 26 |
1 files changed, 15 insertions, 11 deletions
diff --git a/docs/howto/HowtoMakeSampleAppOnNnfw.md b/docs/howto/HowtoMakeSampleAppOnNnfw.md
index 7a8689b99..0bd65c4b4 100644
--- a/docs/howto/HowtoMakeSampleAppOnNnfw.md
+++ b/docs/howto/HowtoMakeSampleAppOnNnfw.md
@@ -1,24 +1,28 @@
 # How to make a sample app on nnfw
 
-## Build a model
+Our runtime `neurun` currently supports `NNAPI` as its interface. One way to use `NNAPI` efficiently is to go through tensorflow lite, and we provide an additional helper library for this in `/libs/tflite`. (This library is not officially supported.)
 
-Include necessary headers. // TODO: which `BuiltinOpResolver`: ours or the one from tensorflow? Remove one of these includes
+To use tensorflow lite, you need to prepare a tensorflow lite model file and know the names of its input/output tensors. Then you can write the sample app.
+
+## Prepare a loaded tensorflow lite model object
+
+You can select one of two kernel registers: the official tensorflow lite kernel register, or the extended register (which adds pre-implemented custom ops).
 ```
 #include "tensorflow/contrib/lite/kernels/register.h"
 #include "tflite/ext/kernels/register.h"
 ```
 
+To use the tensorflow lite interpreter, include the interpreter session header:
 ```
-#include "tflite/NNAPISession.h"
+#include "tflite/InterpreterSession.h"
 ```
 
-For NNAPI usage, or
+For NNAPI usage, include the NNAPI session header:
 ```
-#include "tflite/InterpreterSession.h"
+#include "tflite/NNAPISession.h"
 ```
-for TfLite interpreter.
 
-Load the model into `FlatBuffer`, create a `BuiltinOpResolver` and construct a tensorflow interpreter builder using them:
+Load the model into a `FlatBuffer`, create the tensorflow lite operator resolver `BuiltinOpResolver`, and construct a tensorflow lite interpreter builder using them:
 ```
 tflite::StderrReporter error_reporter;
 auto model = tflite::FlatBufferModel::BuildFromFile(model_file.c_str(), &error_reporter);
@@ -35,19 +39,19 @@ std::unique_ptr<tflite::Interpreter> interpreter;
 builder(&interpreter);
 ```
 
-Create a tensorflow session to use NNAPI:
+Create a tensorflow lite session to use NNAPI:
 ```
 std::shared_ptr<nnfw::tflite::Session> sess = std::make_shared<nnfw::tflite::NNAPISession>(interpreter.get());
 ```
 
-If you want to use TfLite interpreter instead of NNAPI, then:
+If you want to use the tensorflow lite interpreter instead of NNAPI, then:
 ```
 std::shared_ptr<nnfw::tflite::Session> sess = std::make_shared<nnfw::tflite::InterpreterSession>(interpreter.get());
 ```
 
 `NNAPISession` constructs a computational graph from the interpreter and builds the model.
 
-## Initialize the model
+## Prepare tensor memory allocation and model input for inference
 
 Allocate the memory for tensors of `tflite::Interpreter`:
 ```
@@ -87,7 +91,7 @@ Run the inference
 sess->run();
 ```
 
-Get the result from `interpreter->outputs()`. This is tensorflow specific, not nnfw, so one can use any method, that is applicable to Tensorflow, e.g.:
+Get the result from `interpreter->outputs()`. This is tensorflow lite specific, not nnfw specific, so you can use any method that works with tensorflow lite, e.g.:
 ```
 for (const auto &id : interpreter->outputs()) {
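For reference, here is how the snippets above fit together in one small program. This is a minimal sketch rather than part of the patch: it assumes the `nnfw::tflite::Session` interface also offers `prepare()` and `teardown()` (only `run()` is shown above), it uses the official `BuiltinOpResolver` (the extended resolver from `tflite/ext/kernels/register.h` can be swapped in when the pre-implemented custom ops are needed), and it assumes a model whose input and output tensors are float.

```
// sample_app.cc -- minimal sketch assembled from the snippets in this how-to.
// The nnfw header names are taken from the document; the TF Lite core headers
// (model.h, interpreter.h) are the usual contrib/lite locations.
#include "tensorflow/contrib/lite/interpreter.h"
#include "tensorflow/contrib/lite/kernels/register.h"
#include "tensorflow/contrib/lite/model.h"
#include "tflite/InterpreterSession.h"
#include "tflite/NNAPISession.h"

#include <iostream>
#include <memory>
#include <string>

int main(int argc, char **argv)
{
  const std::string model_file = (argc > 1) ? argv[1] : "model.tflite";

  // Load the model file into a FlatBuffer and build an interpreter from it.
  tflite::StderrReporter error_reporter;
  auto model = tflite::FlatBufferModel::BuildFromFile(model_file.c_str(), &error_reporter);
  if (model == nullptr)
  {
    std::cerr << "Failed to load " << model_file << std::endl;
    return 1;
  }

  // Official kernel register; use the extended one from tflite/ext for custom ops.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  tflite::InterpreterBuilder builder(*model, resolver);
  std::unique_ptr<tflite::Interpreter> interpreter;
  builder(&interpreter);

  // Wrap the interpreter in an NNAPI session (use InterpreterSession to stay
  // on the plain tensorflow lite interpreter instead).
  std::shared_ptr<nnfw::tflite::Session> sess =
      std::make_shared<nnfw::tflite::NNAPISession>(interpreter.get());

  // Assumed: prepare() allocates tensor memory and builds the NNAPI model.
  sess->prepare();

  // Fill every input tensor; a float model and all-zero input are assumed here.
  for (const auto &id : interpreter->inputs())
  {
    float *in = interpreter->typed_tensor<float>(id);
    size_t count = interpreter->tensor(id)->bytes / sizeof(float);
    for (size_t i = 0; i < count; ++i)
      in[i] = 0.0f; // replace with real input data
  }

  // Run the inference through the session.
  sess->run();

  // Read the output tensors back from the interpreter.
  for (const auto &id : interpreter->outputs())
  {
    const float *out = interpreter->typed_tensor<float>(id);
    size_t count = interpreter->tensor(id)->bytes / sizeof(float);
    std::cout << "output tensor " << id << " (" << count << " values), first = "
              << (count > 0 ? out[0] : 0.0f) << std::endl;
  }

  sess->teardown(); // assumed counterpart of prepare()
  return 0;
}
```

Tensor names, shapes, and data types depend on the model, so a real app would fill the inputs and read the outputs according to the model's actual tensor layout rather than the placeholder loops above.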