path: root/docs/nnfw
Diffstat (limited to 'docs/nnfw')
-rw-r--r--  docs/nnfw/2018/fig/nnfw_architecture.png  bin 0 -> 28876 bytes
-rw-r--r--  docs/nnfw/2018/fig/nnfw_architecture.pptx  bin 0 -> 72036 bytes
-rw-r--r--  docs/nnfw/2018/roadmap.md  123
-rw-r--r--  docs/nnfw/HowToImplementOperatorKernel.md  1
-rw-r--r--  docs/nnfw/fig/nnfw_architecture.png  bin 0 -> 280284 bytes
-rw-r--r--  docs/nnfw/fig/nnfw_architecture.pptx  bin 0 -> 45709 bytes
-rw-r--r--  docs/nnfw/fig/nnfw_behavior.png  bin 0 -> 14254 bytes
-rw-r--r--  docs/nnfw/fig/nnfw_behavior.pptx  bin 0 -> 59844 bytes
-rw-r--r--  docs/nnfw/howto.md  38
-rw-r--r--  docs/nnfw/howto/BuildTFfromSource.md  66
-rw-r--r--  docs/nnfw/howto/CrossBuildForAarch64.md  77
-rw-r--r--  docs/nnfw/howto/CrossBuildForAndroid.md  52
-rw-r--r--  docs/nnfw/howto/CrossBuildForArm.md  118
-rw-r--r--  docs/nnfw/howto/HowToAddUnittest.md  31
-rw-r--r--  docs/nnfw/howto/HowToRunNnpackge.md  75
-rw-r--r--  docs/nnfw/howto/HowToTestManualy.md  62
-rw-r--r--  docs/nnfw/howto/HowToUseDockerImage.md  154
-rw-r--r--  docs/nnfw/howto/HowToUseNNFWAPI.md  63
-rw-r--r--  docs/nnfw/howto/HowtoMakeSampleAppOnNnfw.md  132
-rw-r--r--  docs/nnfw/howto/RemoteDebuggingForVSCode.md  147
-rw-r--r--  docs/nnfw/howto/device/xu3-dip.png  bin 0 -> 262925 bytes
-rw-r--r--  docs/nnfw/howto/device/xu3_tizen.md  140
-rw-r--r--  docs/nnfw/howto/device/xu3_ubuntu.md  114
-rw-r--r--  docs/nnfw/howto/device/xu4_tizen.md  228
-rw-r--r--  docs/nnfw/howto/device/xu4_ubuntu.md  99
-rw-r--r--  docs/nnfw/op_list.md  71
-rw-r--r--  docs/nnfw/roadmap.md  76
-rw-r--r--  docs/nnfw/tests/Convolution_manual_3x3.xlsx  bin 0 -> 19844 bytes
-rw-r--r--  docs/nnfw/tests/Softmax_manual.xlsx  bin 0 -> 15940 bytes
29 files changed, 1867 insertions, 0 deletions
diff --git a/docs/nnfw/2018/fig/nnfw_architecture.png b/docs/nnfw/2018/fig/nnfw_architecture.png
new file mode 100644
index 000000000..d183e2b56
--- /dev/null
+++ b/docs/nnfw/2018/fig/nnfw_architecture.png
Binary files differ
diff --git a/docs/nnfw/2018/fig/nnfw_architecture.pptx b/docs/nnfw/2018/fig/nnfw_architecture.pptx
new file mode 100644
index 000000000..3e5b4fad5
--- /dev/null
+++ b/docs/nnfw/2018/fig/nnfw_architecture.pptx
Binary files differ
diff --git a/docs/nnfw/2018/roadmap.md b/docs/nnfw/2018/roadmap.md
new file mode 100644
index 000000000..aca206889
--- /dev/null
+++ b/docs/nnfw/2018/roadmap.md
@@ -0,0 +1,123 @@
+This document describes the roadmap of the 2018 NN Runtime (or _nnfw_) project.
+
+# Goal
+This project _nnfw_ aims at providing a high-performance, on-device neural network (NN) inference
+framework that performs inference of a given NN model on processors, such as CPU, GPU, or NPU, in
+the target platform, such as Tizen and SmartMachine Platform (SMP).
+
+# Architecture
+![nnfw_architecture](./fig/nnfw_architecture.png)
+
+The figure above illustrates the overall architecture and scope of _nnfw_, which consists of ML
+Framework and NN Runtime, as well as NN Compute that is provided by the platform:
+1. ML Framework
+ - Provide TensorFlow (TF) Lite on Tizen and SMP
+ - We chose TF Lite as a standard ML framework in _nnfw_ for this year, since TF Lite is
+ lightweight compared to other ML frameworks and its community is rapidly growing. We expect
+ supporting TF Lite on Samsung's OS platforms would be beneficial to Samsung's diverse
+ business areas and AI solutions.
+ - Provide TF Lite C# API for Tizen .NET
+ - Considering that the existing TF Lite supports only C++ and Java APIs, a C# API for TF Lite
+ would be a great complement to TF Lite and a natural extension for Tizen.
+1. NN Runtime
+ - Provide a common runtime interface, which is Android NN API
+ - Android NN API (NN API for short) was selected for seamless integration with TF Lite. As
+ long as our NN runtime provides NN API as an interface, TF Lite can link to our NN runtime
+ without any modification.
+ - Although we borrowed NN API as the runtime's interface, we plan to design and implement the
+ runtime itself by ourselves. For the implementation, we will utilize ARM Compute Library
+ (ACL) for NN operation acceleration on ARM CPU and GPU.
+1. NN Compute
+ - Provide computation acceleration library, such as ACL, or device driver for NPU
+ - This layer will be provided by the OS platform, and we will use the library or device driver as
+ is. We may request a specific version from the Platform team, but we don't expect to be
+ modifying the library.
+
+# Deliverables
+- On-Device AI SW Stack (a.k.a STAR Lite) for Tizen
+- On-Device AI SW Stack for SMP
+- ML Framework that can run ADAS models
+
+# Milestones
+## Project Milestones
+- Support all 50 TF Lite operations on ARM CPU and GPU
+- Support all 29 operations of NN API on ARM CPU and GPU
+- Support InceptionV3 and MobileNet, written in TF Lite model format, on ARM CPU and GPU
+
+## Monthly Milestones
+(These will be updated as we proceed with the project and can estimate development time more
+accurately.)
+- March: Set up milestones, tasks, workgroups, initial code structure, and build/test infra
+- April: Run InceptionV3 using ACL on the Tizen TM2 and ODroid XU4
+ - Mid-April: Establish a full SW stack that is ready to run InceptionV3
+- May: Run MobileNet on Tizen / Tizen M1 release
+- June: Run ADAS models on Tizen / STAR Platform 2nd release
+- September: Tizen M2 release / STAR Platform 3rd release
+- October: SMP v1.0 release / STAR Platform v1.0 release
+
+# Tasks
+Below is an overall list of major topics (tasks) throughout the project this year. For the details
+of each topic, please visit each topic's issue page.
+Please note that the list might not be complete; it could be updated as we make progress in
+the project and discuss the implementation details further.
+
+## ML Framework
+### Technical Goals
+- Provide TF Lite on Tizen and SMP
+- Develop TF Lite C# API for Tizen .NET
+
+### Milestones
+- March
+ 1. Enable Tizen build / C# API / test code
+ 1. Complete enabling Tizen build and test codes / Test infra / Benchmark
+- Mid April
+ 1. Complete all tasks needed to run InceptionV3
+- May
+ 1. Support custom operators to run ADAS models
+ 1. Complete all test codes and benchmarks
+
+### Tasks
+- Visit [#74](https://github.sec.samsung.net/STAR/nnfw/issues/74) for the list of tasks, issue
+ tracking, and discussions.
+
+## NN Runtime
+- NN Runtime is an actual implementation of NN API.
+
+### Technical Goals
+- Develop an NN model interpreter targeting ARM CPU and GPU
+- Develop a device memory manager
+- Develop an operation scheduler supporting both CPU and GPU
+
+### Milestones
+- March: Run simple NN with CPU backend
+ 1. Prepare a working vertical SW stack of NN runtime
+- Mid-April (for testing): Run InceptionV3 with ACL backend and CPU backend
+ 1. Evaluate performance of InceptionV3 and improve performance for ADAS if necessary
+- May (Tizen M1)
+ 1. Optimize NN runtime (improving interpreter or using IR from
+ [nncc](https://github.sec.samsung.net/STAR/nncc))
+ 1. Implement more operators of NN API
+
+### Tasks
+- Visit [#72](https://github.sec.samsung.net/STAR/nnfw/issues/72) for the list of tasks, issue
+ tracking, and discussions.
+
+## NN API Operations
+### Technical Goals
+- Implement NN operations optimized for ARM CPU and GPU
+
+### Milestones
+- March: Run convolution using `tflite_run`
+ - Test framework: ?
+- Mid-April: Complete InceptionV3 on CPU/GPU
+ - For ADAS, we need to make the performance as good as we can.
+- May: Optimized kernels for InceptionV3 on CPU/GPU
+
+### Tasks
+- Visit [#73](https://github.sec.samsung.net/STAR/nnfw/issues/73) for the list of tasks, issue
+ tracking, and discussions.
+
+# Workgroups (WGs)
+- We organize WGs for the major topics above, and each WG will work on its own major topic by
+ breaking it into small tasks/issues, performing them inside the WG, and collaborating with other WGs.
+- The WG information can be found [here](workgroups.md).
diff --git a/docs/nnfw/HowToImplementOperatorKernel.md b/docs/nnfw/HowToImplementOperatorKernel.md
new file mode 100644
index 000000000..715575a5f
--- /dev/null
+++ b/docs/nnfw/HowToImplementOperatorKernel.md
@@ -0,0 +1 @@
+Under preparation. Coming soon!
diff --git a/docs/nnfw/fig/nnfw_architecture.png b/docs/nnfw/fig/nnfw_architecture.png
new file mode 100644
index 000000000..566151e4a
--- /dev/null
+++ b/docs/nnfw/fig/nnfw_architecture.png
Binary files differ
diff --git a/docs/nnfw/fig/nnfw_architecture.pptx b/docs/nnfw/fig/nnfw_architecture.pptx
new file mode 100644
index 000000000..9a4e8fbb7
--- /dev/null
+++ b/docs/nnfw/fig/nnfw_architecture.pptx
Binary files differ
diff --git a/docs/nnfw/fig/nnfw_behavior.png b/docs/nnfw/fig/nnfw_behavior.png
new file mode 100644
index 000000000..b7527b48c
--- /dev/null
+++ b/docs/nnfw/fig/nnfw_behavior.png
Binary files differ
diff --git a/docs/nnfw/fig/nnfw_behavior.pptx b/docs/nnfw/fig/nnfw_behavior.pptx
new file mode 100644
index 000000000..bac51f363
--- /dev/null
+++ b/docs/nnfw/fig/nnfw_behavior.pptx
Binary files differ
diff --git a/docs/nnfw/howto.md b/docs/nnfw/howto.md
new file mode 100644
index 000000000..2c28453bd
--- /dev/null
+++ b/docs/nnfw/howto.md
@@ -0,0 +1,38 @@
+## Build Requirements
+
+If you are building this project, then the following modules must be installed on your system:
+
+- CMake
+- Boost C++ libraries
+
+```
+$ sudo apt-get install cmake libboost-all-dev
+```
+
+## How to use (simple) NNAPI Binding
+
+This repo provides a T/F Lite model loader (named `tflite_run`) and a simple NNAPI binding.
+
+Let's type the following commands, and see what happens!
+```
+$ make install
+$ USE_NNAPI=1 LD_LIBRARY_PATH="$(pwd)/Product/obj/runtimes/logging:$(pwd)/Product/out/lib" Product/out/bin/tflite_run [T/F Lite Flatbuffer Model Path]
+```
+
+## How to get pre-built T/F Lite Flatbuffer models?
+Google provides several pre-built T/F Lite models. Please check [this page](https://www.tensorflow.org/lite/models).
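+
+As a concrete example, the commands below download one commonly referenced hosted model (MobileNet v1). The exact URL and archive contents are an assumption based on the hosted-models page and may change over time, so treat this as a sketch:
+```
+# download and extract a hosted MobileNet v1 model (URL and file names may change)
+wget https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224.tgz
+tar xzf mobilenet_v1_1.0_224.tgz
+```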
+
+
+## Build How-to
+- [Cross building for ARM](howto/CrossBuildForArm.md)
+- [Cross building for AARCH64](howto/CrossBuildForAarch64.md)
+- [Build using prebuilt docker image](howto/HowToUseDockerImage.md)
+
+
+## Other how-to documents
+- [Building TensorFlow and TOCO from source](howto/BuildTFfromSource.md)
+- [How to setup XU3 with Ubuntu 16.04](howto/device/xu3_ubuntu.md)
+- [How to setup XU4 with Ubuntu 16.04](howto/device/xu4_ubuntu.md)
+- [How to add unittest using gtest](howto/HowToAddUnittest.md)
+- [How to manually test NNFW on a single model/input pair](howto/HowToTestManualy.md)
+- [How to use nnfw API](howto/HowToUseNNFWAPI.md)
diff --git a/docs/nnfw/howto/BuildTFfromSource.md b/docs/nnfw/howto/BuildTFfromSource.md
new file mode 100644
index 000000000..3880d5ab9
--- /dev/null
+++ b/docs/nnfw/howto/BuildTFfromSource.md
@@ -0,0 +1,66 @@
+# Building TensorFlow and TOCO from source
+
+You can build TensorFlow and tools including `TOCO` from source.
+Please read
+[Installing TensorFlow from Sources](https://www.tensorflow.org/install/install_sources)
+for full description.
+
+## Install Bazel
+
+Follow [Installing Bazel](https://docs.bazel.build/versions/master/install.html)
+- For Ubuntu, follow [Installing Bazel on Ubuntu](https://docs.bazel.build/versions/master/install-ubuntu.html)
+
+These are the actual steps to install using apt package manager:
+```
+sudo apt-get install openjdk-8-jdk
+```
+```
+echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" \
+| sudo tee /etc/apt/sources.list.d/bazel.list
+curl https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -
+```
+```
+sudo apt-get update && sudo apt-get install bazel
+```
+```
+sudo apt-get upgrade bazel
+```
+
+## Install python packages
+
+```
+sudo apt-get install python-numpy python-dev python-pip python-wheel
+```
+
+## Configure
+
+```
+cd external/tensorflow
+./configure
+```
+
+Select options like this page: https://www.tensorflow.org/install/install_sources#ConfigureInstallation
+
+## Build with Bazel
+
+```
+bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
+bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
+```
+
+If you have any problems while building, please fire an issue.
+
+## Uninstall if already installed
+
+You may skip this step if you haven't installed TensorFlow before.
+```
+pip uninstall tensorflow
+```
+
+## Install TensorFlow and tools
+
+```
+pip install /tmp/tensorflow_pkg/tensorflow-1.6.0rc1-cp27-cp27mu-linux_x86_64.whl --user
+```
+
+You should see `toco` installed in the `~/.local/bin` folder.
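+
+To check that the installation worked, you can add that folder to your `PATH` and invoke `toco` (a quick sanity check):
+```
+export PATH="$HOME/.local/bin:$PATH"
+which toco
+toco --help
+```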
diff --git a/docs/nnfw/howto/CrossBuildForAarch64.md b/docs/nnfw/howto/CrossBuildForAarch64.md
new file mode 100644
index 000000000..9f0af85b8
--- /dev/null
+++ b/docs/nnfw/howto/CrossBuildForAarch64.md
@@ -0,0 +1,77 @@
+# Cross building for AARCH64 (ARM64)
+
+In nnfw, we use `AARCH64` in build files such as Makefile, CMakeLists.txt, and so on.
+
+## Prepare Ubuntu RootFS
+
+Install required packages
+
+```
+sudo apt-get install qemu qemu-user-static binfmt-support debootstrap
+```
+
+Use the `build_rootfs.sh` script to prepare the Root File System. You need `sudo` privileges.
+
+```
+sudo ./tools/cross/build_rootfs.sh aarch64
+```
+- supports `arm` (default) and `aarch64` architectures for now
+- supports `xenial` (default) and `trusty` releases
+
+To see the options,
+```
+./tools/cross/build_rootfs.sh -h
+```
+
+RootFS will be prepared at `tools/cross/rootfs/aarch64` folder.
+
+### Prepare RootFS at alternative folder
+
+Set `ROOTFS_DIR` to a full path to prepare the RootFS at an alternative path.
+
+```
+ROOTFS_DIR=/home/user/rootfs/aarch64-xenial sudo ./tools/cross/build_rootfs.sh aarch64
+```
+
+### Using proxy
+
+If you need to use proxy server while building the rootfs, use `--setproxy` option.
+
+```
+# for example,
+sudo ./tools/cross/build_rootfs.sh aarch64 --setproxy="1.2.3.4:8080"
+# or
+sudo ./tools/cross/build_rootfs.sh aarch64 --setproxy="proxy.server.com:8888"
+```
+
+This will put `apt` proxy settings in `rootfs/etc/apt/apt.conf.d/90proxy` file
+for `http`, `https` and `ftp` protocol.
+
+## Cross build for AARCH64
+
+Install cross compilers
+```
+sudo apt-get install gcc-aarch64-linux-gnu g++-aarch64-linux-gnu
+```
+
+Build and install ARM Compute Library
+```
+CROSS_BUILD=1 TARGET_ARCH=aarch64 make acl
+```
+In most cases you only need to build ACL once. This will build and install it to the
+`Product/(target_arch-os)/out/bin` folder.
+- this is required for `AARCH64` on Ubuntu
+
+Set the `TARGET_ARCH` variable to choose the target architecture
+```
+CROSS_BUILD=1 TARGET_ARCH=aarch64 make
+CROSS_BUILD=1 TARGET_ARCH=aarch64 make install
+```
+- supports `armv7l` and `aarch64` for now
+
+If you used `ROOTFS_DIR` to prepare the RootFS in an alternative folder,
+you should also pass it to the makefile.
+```
+CROSS_BUILD=1 ROOTFS_DIR=/home/user/rootfs/aarch64-xenial TARGET_ARCH=aarch64 make
+CROSS_BUILD=1 ROOTFS_DIR=/home/user/rootfs/aarch64-xenial TARGET_ARCH=aarch64 make install
+```
diff --git a/docs/nnfw/howto/CrossBuildForAndroid.md b/docs/nnfw/howto/CrossBuildForAndroid.md
new file mode 100644
index 000000000..ab9d04e92
--- /dev/null
+++ b/docs/nnfw/howto/CrossBuildForAndroid.md
@@ -0,0 +1,52 @@
+# Cross building for Android
+
+Supported Architecture : AARCH64 only (ARM32 is not supported yet)
+
+## Prepare Android NDK
+
+Use the `tools/cross/build_android_ndk.sh` script to prepare the Android NDK. This is the recommended way to set it up.
+You may download the NDK yourself from the official Android NDK website, but the script does a little more than just downloading and unzipping.
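+
+For example (a minimal sketch; the script may take options for the NDK version or install location, so check its help or source before running):
+```bash
+./tools/cross/build_android_ndk.sh
+```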
+
+## Build
+
+### Host Environment Requirements
+
+On Ubuntu 16.04, everything works fine except one thing: CMake 3.6.0 or later is required for Android NDK CMake support.
+So if you want to use Docker, please use `infra/docker/Dockerfile.1804`, which is based on Ubuntu 18.04 and ships CMake 3.10.2.
+
+```bash
+docker build --network host -t nnas1804 -f infra/docker/Dockerfile.1804 infra/docker
+```
+
+### Get prebuilt ARM Compute Library
+
+Download the prebuilt binary from [github](https://github.com/ARM-software/ComputeLibrary/releases). Check the version we support and the platform (Android).
+
+Then extract the tarball; we will use the libraries in `lib/android-arm64-v8a-neon-cl`. The following files are used.
+
+```
+libarm_compute_core.so
+libarm_compute_graph.so
+libarm_compute.so
+```
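+
+As a rough sketch of downloading and extracting the prebuilt package (the release tag and archive name below are assumptions based on the Makefile example later in this document; check the releases page for the exact file you need):
+```bash
+# assumed release and file name; adjust to the ACL version nnfw currently supports
+wget https://github.com/ARM-software/ComputeLibrary/releases/download/v19.05/arm_compute-v19.05-bin-android.tar.gz
+tar xzf arm_compute-v19.05-bin-android.tar.gz
+ls arm_compute-v19.05-bin-android/lib/android-arm64-v8a-neon-cl
+```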
+
+### Build and install the runtime
+
+Some tools/libs are still not supported and are not built by default, mostly due to their dependency on the Boost library.
+Please refer to `infra/nnfw/cmake/options/options_aarch64-android.cmake` for details.
+
+Unlike the cross build for Linux,
+
+- `NDK_DIR` is required
+
+Here is an example of using Makefile.
+
+```bash
+cp -n Makefile.template Makefile
+
+TARGET_OS=android \
+CROSS_BUILD=1 \
+NDK_DIR=/path/android-tools/r20/ndk \
+EXT_ACL_FOLDER=/path/arm_compute-v19.05-bin-android/lib/android-arm64-v8a-neon-cl \
+make install
+```
diff --git a/docs/nnfw/howto/CrossBuildForArm.md b/docs/nnfw/howto/CrossBuildForArm.md
new file mode 100644
index 000000000..07b4a17b3
--- /dev/null
+++ b/docs/nnfw/howto/CrossBuildForArm.md
@@ -0,0 +1,118 @@
+# Cross building for ARM
+
+## Prepare Ubuntu RootFS
+
+Install required packages
+
+```
+sudo apt-get install qemu qemu-user-static binfmt-support debootstrap
+```
+
+Use the `build_rootfs.sh` script to prepare the Root File System. You need `sudo` privileges.
+
+```
+sudo ./tools/cross/build_rootfs.sh arm
+```
+- supports `arm` (default) and `aarch64` architectures for now
+- supports `xenial` (default), `trusty`, and `bionic` releases
+
+To see the options,
+```
+./tools/cross/build_rootfs.sh -h
+```
+
+RootFS will be prepared at `tools/cross/rootfs/arm` folder.
+
+### Prepare RootFS at alternative folder
+
+Set `ROOTFS_DIR` to a full path to prepare the RootFS at an alternative path.
+
+```
+ROOTFS_DIR=/home/user/rootfs/arm-xenial sudo ./tools/cross/build_rootfs.sh arm
+```
+
+### Using proxy
+
+If you need to use proxy server while building the rootfs, use `--setproxy` option.
+
+```
+# for example,
+sudo ./tools/cross/build_rootfs.sh arm --setproxy="1.2.3.4:8080"
+# or
+sudo ./tools/cross/build_rootfs.sh arm --setproxy="proxy.server.com:8888"
+```
+
+This will put `apt` proxy settings in `rootfs/etc/apt/apt.conf.d/90proxy` file
+for `http`, `https` and `ftp` protocol.
+
+## Install ARM Cross Toolchain
+
+We recommend you have g++ >= 6 installed on your system because NN generated tests require it.
+
+- On Ubuntu 16.04 or older, follow the next steps:
+
+```
+cd ~/your/path
+wget https://releases.linaro.org/components/toolchain/binaries/7.2-2017.11/arm-linux-gnueabihf/gcc-linaro-7.2.1-2017.11-x86_64_arm-linux-gnueabihf.tar.xz
+tar xvf gcc-linaro-7.2.1-2017.11-x86_64_arm-linux-gnueabihf.tar.xz
+echo 'PATH=~/your/path/gcc-linaro-7.2.1-2017.11-x86_64_arm-linux-gnueabihf/bin:$PATH' >> ~/.bashrc
+```
+
+- On Ubuntu 18.04 LTS, you can install it using `apt-get`.
+Choose whichever g++ version you prefer: 6, 7, or 8.
+
+```
+sudo apt-get install g++-{6,7,8}-arm-linux-gnueabihf
+```
+
+Make sure the `libstdc++.so` on your target is updated to the one corresponding to your new toolchain.
+
+For example, if you installed gcc-linaro-7.2.1-2017.11 above, do
+
+```
+wget https://releases.linaro.org/components/toolchain/binaries/7.2-2017.11/arm-linux-gnueabihf/runtime-gcc-linaro-7.2.1-2017.11-arm-linux-gnueabihf.tar.xz
+tar xvf runtime-gcc-linaro-7.2.1-2017.11-arm-linux-gnueabihf.tar.xz
+```
+
+Then, copy `libstdc++.so.6.0.24` into `/usr/lib/arm-linux-gnueabihf`, and update symbolic links on your device.
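+
+A minimal sketch of that step (the location of `libstdc++.so.6.0.24` inside the extracted runtime tarball is an assumption; locate it first and adjust paths to your setup):
+```
+# on the build host: find the file inside the extracted runtime tarball
+find runtime-gcc-linaro-7.2.1-2017.11-arm-linux-gnueabihf -name 'libstdc++.so.6.0.24'
+
+# then, after copying it to the target device:
+sudo cp libstdc++.so.6.0.24 /usr/lib/arm-linux-gnueabihf/
+sudo ln -sf libstdc++.so.6.0.24 /usr/lib/arm-linux-gnueabihf/libstdc++.so.6
+```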
+
+## Build and install ARM Compute Library
+
+In most cases you only need to build ACL once.
+
+ACL will be automatically installed in `externals/acl` when you build nnfw without any changes.
+
+You can check ACL source information in `cmake/packages/ARMComputeSourceConfig.cmake`
+
+## Build nnfw
+
+Set the `TARGET_ARCH` variable to choose the target architecture.
+
+If you used `ROOTFS_DIR` to prepare the RootFS in an alternative folder, you should also pass it to the makefile.
+
+```
+CROSS_BUILD=1 TARGET_ARCH=armv7l make all install
+
+# If ROOTFS_DIR is in alternative folder
+ROOTFS_DIR=/path/to/your/rootfs/arm \
+CROSS_BUILD=1 TARGET_ARCH=armv7l make all install
+```
+
+You can also omit the `CROSS_BUILD=1` option if you explicitly pass `ROOTFS_DIR`. In that case, if
+`TARGET_ARCH` differs from the host architecture, the make script automatically applies
+`CROSS_BUILD=1`. So, if you set `ROOTFS_DIR` as an environment variable, you can simply perform a
+normal build and a cross build as follows.
+
+```
+export ROOTFS_DIR=xxx
+...
+make all install                     # do normal build
+TARGET_ARCH=armv7l make all install  # do cross build
+```
+
+## Run test
+
+```
+ ./tests/scripts/test_driver.sh --artifactpath=. \
+ --frameworktest_list_file=tests/scripts/list/neurun_frameworktest_list.armv7l.acl_cl.txt
+```
diff --git a/docs/nnfw/howto/HowToAddUnittest.md b/docs/nnfw/howto/HowToAddUnittest.md
new file mode 100644
index 000000000..5bb75b258
--- /dev/null
+++ b/docs/nnfw/howto/HowToAddUnittest.md
@@ -0,0 +1,31 @@
+# How to Add a Unittest Using gtest (googletest)
+
+### 1. Write your own test code
+```
+#include "gtest/gtest.h"
+
+TEST(TFLite_test_case, simple_test)
+{
+ EXPECT_EQ(1, 1);
+}
+```
+
+### 2. Find and prepare the `googletest` package for your test executable
+```
+find_nnfw_package(GTest QUIET)
+if(NOT GTest_FOUND)
+ ## Cannot find and prepare googletest package
+ return()
+endif(NOT GTest_FOUND)
+add_executable($YOURTEST_TARGET yourtest1.cc yourtest2.cc)
+```
+
+### 3. Link test executable against libgtest.a and libgtest_main.a (+ pthread)
+```
+target_link_libraries($YOURTEST_TARGET gtest gtest_main pthread)
+```
+
+### 4. Install test executable into Product/out/unittest
+```
+install(TARGETS $YOURTEST_TARGET DESTINATION unittest)
+```
diff --git a/docs/nnfw/howto/HowToRunNnpackge.md b/docs/nnfw/howto/HowToRunNnpackge.md
new file mode 100644
index 000000000..93dd74e83
--- /dev/null
+++ b/docs/nnfw/howto/HowToRunNnpackge.md
@@ -0,0 +1,75 @@
+# How To Run 'nnpackage' (for beginners)
+
+## 0. Environment
+
+This document is based on an experience with ...
+
+```
+- Architecture : armhf
+- OS : ubuntu 18.04
+```
+
+## 1. What is 'nnpackage'?
+
+'nnpackage' is the input of nnfw and the output of nncc.
+
+'nnpackage' contains all the data (such as the model, MANIFEST, and custom_op) required to run a given model.
+
+'nnpackage' is a Zip archive in the following structure:
+
+```
+nnpackage
+├── custom_op
+├── metadata
+│ └── MANIFEST
+└── mymodel.model
+```
+
+For more information, find the document [nnpackage/spec/10_packaging_and_manifest.md](../../../nnpackage/spec/10_packaging_and_manifest.md)
+
+## 2. How to generate nnpackage?
+
+'nnpackage' can be generated from either '.circle' or '.tflite'.
+
+In this example, we generate 'nnpackage' from '.tflite'.
+
+ [1] Find 'model2nnpkg.sh'.
+ ```
+ nnfw/tools/nnpackage_tool/model2nnpkg/model2nnpkg.sh
+ ```
+
+ [2] Get any \*.tflite model file.
+ You can simply use a file in the test framework directory, 'nnfw/tests/framework/cache/'.
+ If you don't have the /cache directory, download the models with the following command:
+ ```
+ cd nnfw
+ MODELFILE_SERVER={MODELFILE_SERVER_LINK} ./tests/framework/run_test.sh --download=on
+
+ For {MODELFILE_SERVER_LINK}, put appropriate server link.
+ ```
+ In this example, we will use 'nnfw/tests/framework/cache/add/1D/add_test1.tflite'
+
+ [3] Simply run.
+ ```
+ $ ./model2nnpkg.sh add_test1
+ ```
+ Now you have an 'add_test1' directory. Look inside it to see the hierarchical structure, for example with the command sketched below.
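+ This is just a quick check (assuming `tree` is available; the exact contents depend on your model):
+ ```
+ $ tree add_test1
+ ```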
+
+## 3. How to set up an environment and run?
+
+ [1] Build 'nnfw'.
+
+ After the build, you will find an executable file at 'nnfw/Product/armv7l-linux.debug/out/bin/nnpackage_run'.
+ For how to build, check out the document [docs/nnfw/howto/CrossBuildForArm.md](../../../docs/nnfw/howto/CrossBuildForArm.md).
+
+ [2] Install package 'libhdf5-cpp-100'.
+ ```
+ $ sudo apt install libhdf5-cpp-100
+ ```
+
+ [3] Run nnpackage.
+ ```
+ $ ./nnpackage_run add_test1
+ ```
+ Note that you need to pass the whole 'add_test1' directory,
+ because an 'nnpackage' is an archive, not a single file.
diff --git a/docs/nnfw/howto/HowToTestManualy.md b/docs/nnfw/howto/HowToTestManualy.md
new file mode 100644
index 000000000..bb36cc67b
--- /dev/null
+++ b/docs/nnfw/howto/HowToTestManualy.md
@@ -0,0 +1,62 @@
+# How to test NNFW on a single model/input pair
+
+1. Select backend through environment variables:
+ * acl_cl: `export OP_BACKEND_ALLOPS=acl_cl`
+ * acl_neon: `export OP_BACKEND_ALLOPS=acl_neon`
+ * cpu: `export OP_BACKEND_ALLOPS=cpu`
+ * different backends for different operations:
+ ```
+ unset OP_BACKEND_ALLOPS
+ export OP_BACKEND_Conv2D=cpu
+ export OP_BACKEND_MaxPool2D=acl_cl
+ export OP_BACKEND_AvgPool2D=acl_neon
+ ```
+
+2. Select executor through environment variable:
+ * linear: `export EXECUTOR=Linear`
+ * dataflow: `export EXECUTOR=Dataflow`
+ * parallel: `export EXECUTOR=Parallel`
+
+## Test NNFW through NNAPI
+
+### Testing on random input
+1. Generate random input, get reference result using tflite interpreter, dump input and result into file:
+ ```
+ /path/to/tflite_run --tflite /path/to/model.tflite --dump /path/to/out.dat
+ ```
+2. Inference with NNFW NNAPI and compare result with reference one:
+ ```
+ USE_NNAPI=1 /path/to/tflite_run --tflite /path/to/model.tflite --compare /path/to/out.dat
+ ```
+
+### Testing on particular input
+1. Prepare input:
+
+ `tflite_run` consumes input as sequence of floats.
+
+ For example, you could convert a `.jpg` image into such a file with the following python3 script:
+ ```
+ from PIL import Image
+ import numpy as np
+
+ img = Image.open("./image.jpg")
+ np_img = np.array(img.getdata()).reshape(img.size[1], img.size[0], 3).astype(np.float32) / 255.  # img.size is (width, height)
+
+ with open('./converted_image.dat', 'wb') as f:
+ for i in np_img.flatten('C'):
+ f.write(i)
+ ```
+
+2. Get reference result using tflite interpreter, dump input and result into file:
+
+ ```
+ /path/to/tflite_run --tflite /path/to/model.tflite --input /path/to/input.dat --dump /path/to/out.dat
+ ```
+3. Inference with NNFW NNAPI and compare result with reference one:
+ ```
+ USE_NNAPI=1 /path/to/tflite_run --tflite /path/to/model.tflite --compare /path/to/out.dat
+ ```
+
+## Test NNFW through NNPackage
+
+TODO: fill in this section when NNPackage is implemented
diff --git a/docs/nnfw/howto/HowToUseDockerImage.md b/docs/nnfw/howto/HowToUseDockerImage.md
new file mode 100644
index 000000000..2c8d98f58
--- /dev/null
+++ b/docs/nnfw/howto/HowToUseDockerImage.md
@@ -0,0 +1,154 @@
+# How to use docker image of nnfw
+
+We have a docker image to build `nnfw` repo.
+
+This docker image is built from https://github.sec.samsung.net/STAR/nnfw/blob/master/infra/docker/Dockerfile and is based on Ubuntu 16.04.
+A prebuilt docker image is available from the Samsung private docker registry.
+
+This document describes how to use prebuilt docker image when developing `nnfw`.
+
+## How to install docker
+
+Follow [Installing Docker](https://docs.docker.com/)
+
+- For Ubuntu, follow [Installing Docker on Ubuntu](https://docs.docker.com/install/linux/docker-ce/ubuntu/)
+
+These are the actual steps to install using apt package manager:
+```
+$ sudo apt-get install \
+ apt-transport-https \
+ ca-certificates \
+ curl \
+ software-properties-common
+$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
+$ sudo apt-key fingerprint 0EBFCD88
+```
+```
+$ sudo add-apt-repository \
+ "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
+ $(lsb_release -cs) \
+ stable"
+$ sudo apt-get update
+```
+```
+$ sudo apt-get install docker-ce
+```
+
+## Configure docker daemon
+
+1. Set HTTP/HTTPS proxy
+
+ * For Ubuntu, follow [Setting HTTP/HTTPS proxy environment variables](https://docs.docker.com/v17.09/engine/admin/systemd/#httphttps-proxy)
+
+If you are behind an HTTP or HTTPS proxy server, you will need to add this configuration in the Docker systemd service file.
+These are the actual steps to set an HTTP/HTTPS proxy environment variable:
+```
+$ sudo mkdir -p /etc/systemd/system/docker.service.d
+$ sudo vi /etc/systemd/system/docker.service.d/http-proxy.conf
+```
+```
+[Service]
+Environment="HTTP_PROXY=http://10.112.1.184:8080/" "HTTPS_PROXY=https://10.112.1.184:8080/" "NO_PROXY=localhost,127.0.0.1"
+```
+```
+$ sudo systemctl daemon-reload
+$ sudo systemctl restart docker
+$ systemctl show --property=Environment docker
+```
+
+2. Edit configuration file of docker daemon
+
+First you have to add the Samsung private docker registry to your docker daemon.
+Depending on how your docker daemon is installed, there are two ways to configure it.
+
+
+If there is a `/etc/default/docker`, please edit the file as below.
+```
+$ sudo vi /etc/default/docker
+
+DOCKER_OPTS="--insecure-registry npuci.mooo.com:5000"
+```
+
+If there is a `/etc/docker/daemon.json`, please edit the file as below.
+```
+{
+ ...,
+ "insecure-registries": [..., "npuci.mooo.com:5000"]
+}
+```
+
+3. Then restart docker daemon as below.
+
+```
+$ sudo service docker restart    # Ubuntu 14.04
+
+or
+
+$ sudo systemctl restart docker  # Ubuntu 16.04
+```
+
+## Install docker image of `nnfw`
+
+Let's pull the docker image for the `nnfw` repo and tag it as `nnas:latest`:
+
+```
+$ docker pull npuci.mooo.com:5000/star/nnfw/nnas:latest
+$ docker tag npuci.mooo.com:5000/star/nnfw/nnas:latest nnas:latest
+```
+
+## Build docker image instead of pull
+
+You can build the docker image in your environment instead of pulling it from the server.
+
+```
+$ cd nnfw
+$ ./nnas build-docker-image
+```
+
+The default docker image name is `nnas`. If you want to change the image name, set the environment variable `DOCKER_IMAGE_NAME`:
+
+```
+$ cd nnfw
+$ DOCKER_IMAGE_NAME=nnas_test ./nnas build-docker-image
+```
+
+You can use options supported by the `docker build` command (e.g. the `--network` or `--build-arg` options).
+
+If you get an error message like 'Temporary failure resolving..', try building with the '--network host' option:
+
+```
+$ cd nnfw
+$ ./nnas build-docker-image --network host --build-arg UBUNTU_MIRROR="kr.archive.ubuntu.com"
+```
+
+## Use docker image to build `neurun`
+Three different targets for `nnfw` can be built using the docker image.
+
+1. Build `neurun` for `x86_64` target
+```
+$ cd nnfw
+$ docker run --rm -v $(pwd):/opt/nnfw -w /opt/nnfw nnas make install
+```
+or use `docker_build_test_x64.sh` for convenience as below.
+```
+$ cd nnfw
+$ ./infra/scripts/docker_build_test_x64.sh
+```
+You can find built artifacts at `nnfw/Product/x86_64-linux.debug`.
+
+2. Cross build `neurun` for ARM on x86_64 host
+
+You should prepare a RootFS by following [Cross Building for ARM](./CrossBuildForArm.md), except for the ACL build and cross build steps. Then execute the commands below. If your RootFS directory differs from the one below, change it to the correct path and make sure the path is absolute.
+```
+$ cd nnfw
+$ ROOTFS_DIR=$(pwd)/tools/cross/rootfs/arm \
+./infra/scripts/docker_build_cross_arm_neurun.sh
+```
+You can find built artifacts at `nnfw/Product/armv7l-linux.debug/`.
+
+3. Build `neurun` for Tizen ARM package on x86_64 host
+```
+$ cd nnfw
+$ ./infra/scripts/docker_build_tizen_gbs.sh
+```
+You can find built artifacts at `Product/out/rpm`.
diff --git a/docs/nnfw/howto/HowToUseNNFWAPI.md b/docs/nnfw/howto/HowToUseNNFWAPI.md
new file mode 100644
index 000000000..e09343275
--- /dev/null
+++ b/docs/nnfw/howto/HowToUseNNFWAPI.md
@@ -0,0 +1,63 @@
+# Prepare nnpackage
+
+## Convert tensorflow pb file to nnpackage
+Follow the [compiler guide](https://github.sec.samsung.net/STAR/nnfw/blob/master/docs/nncc/Release_2019/tutorial.md) to generate an nnpackage from a tensorflow pb file.
+
+## Convert tflite file to nnpackage
+Please see [model2nnpkg](https://github.sec.samsung.net/STAR/nnfw/tree/master/tools/nnpackage_tool/model2nnpkg) for converting from a tflite model file.
+
+# Build app with nnfw API
+
+Here are the basic steps to build an app with the [nnfw C API](https://github.sec.samsung.net/STAR/nnfw/blob/master/runtime/neurun/api/include/nnfw.h):
+
+1) Initialize nnfw_session
+``` c
+nnfw_session *session = nullptr;
+nnfw_create_session(&session);
+```
+2) Load nnpackage
+``` c
+nnfw_load_model_from_file(session, nnpackage_path);
+```
+3) (Optional) Assign a specific backend to operations
+``` c
+ // Use acl_neon backend for CONV_2D and acl_cl otherwise.
+ // Note that the default backend is acl_cl
+ nnfw_set_op_backend(session, "CONV_2D", "acl_neon");
+```
+
+4) Compilation
+``` c
+ // Compile model
+ nnfw_prepare(session);
+```
+
+5) Prepare Input/Output
+``` c
+ // Prepare input. Here we just allocate dummy input arrays.
+ std::vector<float> input;
+ nnfw_tensorinfo ti;
+ nnfw_input_tensorinfo(session, 0, &ti); // get first input's info
+ uint32_t input_elements = num_elems(&ti);
+ input.resize(input_elements);
+ // TODO: Please add initialization for your input.
+ nnfw_set_input(session, 0, ti.dtype, input.data(), sizeof(float) * input_elements);
+
+ // Prepare output
+ std::vector<float> output;
+ nnfw_output_tensorinfo(session, 0, &ti); // get first output's info
+ uint32_t output_elements = num_elems(&ti);
+ output.resize(output_elements);
+ nnfw_set_output(session, 0, ti.dtype, output.data(), sizeof(float) * output_elements);
+```
+6) Inference
+``` c
+ // Do inference
+ nnfw_run(session);
+```
+## Run Inference with app on the target devices
+Reference app: [minimal app](https://github.sec.samsung.net/STAR/nnfw/blob/master/runtime/neurun/sample/minimal)
+
+```
+$ ./minimal path_to_nnpackage_directory
+```
diff --git a/docs/nnfw/howto/HowtoMakeSampleAppOnNnfw.md b/docs/nnfw/howto/HowtoMakeSampleAppOnNnfw.md
new file mode 100644
index 000000000..d272a8390
--- /dev/null
+++ b/docs/nnfw/howto/HowtoMakeSampleAppOnNnfw.md
@@ -0,0 +1,132 @@
+# How to make a sample app on nnfw
+
+Our runtime `neurun` currently supports `NNAPI` as its interface. One way to use `NNAPI` efficiently is through TensorFlow Lite. We provide an additional library in `/libs/tflite` to help with using TensorFlow Lite (this library is not officially supported).
+
+To use TensorFlow Lite, you need to prepare a TensorFlow Lite model file and know the input/output tensor names. Then write the sample app.
+
+## Prepare loaded tensorflow lite model object
+
+You can select one of the kernel registers: the official TensorFlow Lite kernel register or the extended register (for pre-implemented custom ops):
+```
+#include "tensorflow/lite/kernels/register.h"
+#include "tflite/ext/kernels/register.h"
+```
+
+To use the TensorFlow Lite interpreter, you need the interpreter session header:
+```
+#include "tflite/InterpreterSession.h"
+```
+
+For NNAPI usage, you need the NNAPI session header:
+```
+#include "tflite/NNAPISession.h"
+```
+
+Load the model into a `FlatBufferModel` object, create a TensorFlow Lite operator resolver `BuiltinOpResolver`, and construct a TensorFlow Lite interpreter builder using them:
+```
+tflite::StderrReporter error_reporter;
+auto model = tflite::FlatBufferModel::BuildFromFile(model_file.c_str(), &error_reporter);
+
+// TODO: determine which BuiltinOpResolver and prepend namespace
+BuiltinOpResolver resolver;
+
+tflite::InterpreterBuilder builder(*model, resolver);
+```
+
+Create a TensorFlow Lite interpreter using the builder:
+```
+std::unique_ptr<tflite::Interpreter> interpreter;
+builder(&interpreter);
+```
+
+Create a tensorflow lite session to use NNAPI:
+```
+std::shared_ptr<nnfw::tflite::Session> sess = std::make_shared<nnfw::tflite::NNAPISession>(interpreter.get());
+```
+
+If you want to use tensorflow lite interpreter instead of NNAPI, then:
+```
+std::shared_ptr<nnfw::tflite::Session> sess = std::make_shared<nnfw::tflite::InterpreterSession>(interpreter.get());
+```
+
+`NNAPISession` constructs a computational graph from the interpreter and builds the model.
+
+## Prepare tensors memory allocation and model input for inference
+
+Allocate the memory for tensors of `tflite::Interpreter`:
+```
+sess->prepare();
+```
+
+Prepare the inputs. How to prepare them is out of scope here and is task specific.<br/>
+Copy the input data into the model, i.e. into `interpreter->inputs`. This is TensorFlow Lite specific, not nnfw specific, so you can use any method that works with TensorFlow Lite, e.g.:
+```
+for (const auto &id : interpreter->inputs())
+{
+ if (interpreter->tensor(id)->name == input_name)
+ {
+ float *p = interpreter->tensor(id)->data.f;
+
+ for (int y = 0; y < height; ++y)
+ {
+ for (int x = 0; x < width; ++x)
+ {
+ for (int c = 0; c < channel; ++c)
+ {
+ *p++ = data[y * width * channel + x * channel + c];
+ }
+ }
+ }
+ }
+}
+```
+where:<br/>
+`input_name` - name of the inputs of the model;<br/>
+`data` - source vector of size `height * width * channel`.
+
+## Run the inference and get outputs
+
+Run the inference
+```
+sess->run();
+```
+
+Get the result from `interpreter->outputs()`. This is TensorFlow Lite specific, not nnfw specific, so you can use any method that works with TensorFlow Lite, e.g.:
+```
+for (const auto &id : interpreter->outputs())
+{
+ if (interpreter->tensor(id)->name == output_name)
+ {
+ float *p = interpreter->tensor(id)->data.f;
+
+ for (int i = 0; i < result.capacity(); ++i)
+ {
+ result.push_back(p[i]);
+ }
+ }
+}
+```
+where:<br/>
+`output_name` - name of the outputs of the model;<br/>
+`result` - float vector, where to put output. Its size can be calculated using
+```
+for (const auto &id : interpreter->outputs())
+{
+ if (interpreter->tensor(id)->name == output_name)
+ {
+ TfLiteTensor *t = interpreter->tensor(id);
+ int v = 1;
+ for (int i = 0; i < t->dims->size; ++i)
+ {
+ v *= t->dims->data[i];
+ }
+ return v;
+ }
+}
+return -1;
+```
+
+Release the session
+```
+sess->teardown();
+```
diff --git a/docs/nnfw/howto/RemoteDebuggingForVSCode.md b/docs/nnfw/howto/RemoteDebuggingForVSCode.md
new file mode 100644
index 000000000..c83a09bd5
--- /dev/null
+++ b/docs/nnfw/howto/RemoteDebuggingForVSCode.md
@@ -0,0 +1,147 @@
+# Remote Debugging for Visual Studio Code
+
+This document describes how to debug nnfw on arm devices using visual studio code.
+
+## Install gdb-multiarch on build host
+
+1. Install `gdb-multiarch`
+
+```bash
+$ sudo apt install gdb-multiarch
+```
+
+## Configure VS code on build host
+
+1. Install `Native Debug` extension on VS code
+
+2. Setup GDB environment on VS code
+
+- Debug -> Add configuration -> GDB: Connect to gdbserver
+- Change configuration as below
+ - Change `<TARGET_IP>` to IP of your target
+ - The default port number for gdbserver is 2345. You can change this number.
+ - You can change `executable` configuration from `tflite_run` to other binaries you want to debug.
+
+```json
+{
+ "version": "0.2.0",
+ "configurations": [
+ {
+ "type": "gdb",
+ "request": "attach",
+ "name": "Attach to gdbserver",
+ "gdbpath": "/usr/bin/gdb-multiarch",
+ "executable": "./Product/armv7l-linux.debug/out/bin/tflite_run",
+ "target": "<TARGET_IP>:2345",
+ "remote": true,
+ "printCalls": true,
+ "cwd": "${workspaceRoot}",
+ "valuesFormatting": "parseText"
+ }
+ ]
+}
+```
+
+## Install gdbserver and debugging symbols at target
+
+You need to setup a target device for remote debugging.
+
+1. Install `gdbserver`
+```bash
+$ sudo apt install gdbserver
+```
+
+2. Install `libc6-dbg` and copy debugging symbols
+```bash
+$ sudo apt install libc6-dbg
+$ sudo mkdir -p /lib/.debug
+$ sudo ln -s /usr/lib/debug/lib/arm-linux-gnueabihf/ld-2.27.so /lib/.debug
+```
+
+## Run remote debugging
+
+1. Start gdbserver on target
+
+```bash
+gdbserver --multi :<PORT> <BINARY_PATH> <EXECUTION_ARGUMENTS>
+```
+
+Example
+```bash
+gdbserver --multi :2345 Product/armv7l-linux.debug/out/bin/tflite_run ../models/slice_test.tflite
+```
+
+2. Connect to gdbserver using VS code
+
+- Setup breakpoints on any code you want.
+
+- Click F5 to start remote debugging.
+
+- Program will execute and exit if no breakpoint exists.
+
+## Optional: Setup rootfs on build host
+
+When debugging starts, `gdb` downloads shared libraries that nnfw uses from the target device.
+This makes `gdb` wait for the shared library download to finish every time debugging starts.
+
+To reduce shared library loading, you can setup an arm root file system on your build host and use it.
+
+1. Create arm root file system
+
+Follow [CrossBuildForArm](./CrossBuildForArm.md) to create an arm root file system.
+
+You can use an arm root file system created for arm cross-compile.
+
+2. Install `libc6-dbg` on arm root file system
+
+`<ROOTFS_DIR>` should point to the ARM root file system.
+
+The default path is the `tools/cross/rootfs/arm` folder.
+
+```bash
+$ sudo chroot <ROOTFS_DIR>
+$ apt install libc6-dbg
+$ exit
+```
+
+3. Create symbolic link of nnfw on arm rootfs
+
+`gdb` will look for the source code folder under the sysroot.
+
+```bash
+$ ln -s <NNFW_DIR> <ROOTFS_DIR>/<NNFW_DIR>
+```
+Example
+```bash
+$ ln -s /home/user/nnfw /home/user/nnfw/tools/cross/rootfs/arm/home/user/nnfw
+```
+
+4. Set up a `.gdbinit` file in the nnfw folder
+
+`gdb` will use `<ROOTFS_DIR>` to find arm related symbols.
+
+```bash
+set sysroot <ROOTFS_DIR>
+set debug-file-directory <ROOTFS_DIR>/usr/lib/debug
+```
+
+# Troubleshooting
+
+### Unable to open 'unordered_map.h'
+
+If you are using docker to build nnfw, you should download and decompress gcc-linaro into the `/opt` folder:
+
+```bash
+wget https://releases.linaro.org/components/toolchain/binaries/6.3-2017.02/arm-linux-gnueabihf/gcc-linaro-6.3.1-2017.02-x86_64_arm-linux-gnueabihf.tar.xz -O gcc-hardfp.tar.xz
+sudo tar -xf gcc-hardfp.tar.xz -C /opt/ && sudo rm -rf gcc-hardfp.tar.xz
+```
+
+### Skip STL files
+
+Stepping into (F11) will also step into STL files such as `unordered_map` or `vector`.
+
+To skip those files while debugging, you can add the line below to your `.gdbinit` file.
+
+```bash
+skip -gfile /opt/gcc-linaro-6.3.1-2017.02-x86_64_arm-linux-gnueabihf/arm-linux-gnueabihf/include/c++/6.3.1/bits/*
+```
diff --git a/docs/nnfw/howto/device/xu3-dip.png b/docs/nnfw/howto/device/xu3-dip.png
new file mode 100644
index 000000000..59c0be3f2
--- /dev/null
+++ b/docs/nnfw/howto/device/xu3-dip.png
Binary files differ
diff --git a/docs/nnfw/howto/device/xu3_tizen.md b/docs/nnfw/howto/device/xu3_tizen.md
new file mode 100644
index 000000000..6473ab9a8
--- /dev/null
+++ b/docs/nnfw/howto/device/xu3_tizen.md
@@ -0,0 +1,140 @@
+# About
+
+This document describes how to flash a microSD card with Tizen 5.5 for the ODroid XU3.
+
+The host environment is Ubuntu 18.04.
+
+This document covers only the eMMC + XU3 case.
+
+# Download files
+
+## Images
+
+Boot
+- https://download.tizen.org/snapshots/tizen/unified/latest/images/standard/tv-boot-armv7l-odroidxu3/
+- download the biggest file
+
+Root FS
+- https://download.tizen.org/snapshots/tizen/unified/latest/images/standard/tv-wayland-armv7l-odroidxu3/
+- download the biggest file
+
+U-Boot images
+```
+wget https://github.com/hardkernel/u-boot/raw/odroidxu3-v2012.07/sd_fuse/hardkernel_1mb_uboot/bl1.bin.hardkernel
+wget https://github.com/hardkernel/u-boot/raw/odroidxu3-v2012.07/sd_fuse/hardkernel_1mb_uboot/bl2.bin.hardkernel.1mb_uboot
+wget https://github.com/hardkernel/u-boot/raw/odroidxu3-v2012.07/sd_fuse/hardkernel_1mb_uboot/tzsw.bin.hardkernel
+```
+
+You also need `u-boot-mmc.bin`, which is inside the `tizen-unified_20180425.2_tv-boot-armv7l-odroidxu3.tar.gz` file.
+```
+tar xvf tizen-unified_20180425.2_tv-boot-armv7l-odroidxu3.tar.gz u-boot-mmc.bin
+```
+
+
+## Flashing script
+
+Download [sd_fusing_xu4.sh](https://git.tizen.org/cgit/platform/kernel/u-boot/plain/scripts/tizen/sd_fusing_xu4.sh?h=tizen)
+
+The file name says `xu4`, but it also works on the xu3.
+
+
+## Files
+
+```
+dragon@loki:~/Works/tizen/odroid-xu3/flashing$ ls -l
+total 1316
+-rw-rw-r-- 1 dragon dragon 15616 9월 5 14:41 bl1.bin.hardkernel
+-rw-rw-r-- 1 dragon dragon 14592 9월 5 14:41 bl2.bin.hardkernel.1mb_uboot
+-rw-rw-r-- 1 dragon dragon 262144 9월 5 14:41 tzsw.bin.hardkernel
+-rwxr-xr-x 1 dragon dragon 1048576 9월 4 15:17 u-boot-mmc.bin
+```
+
+# Flash
+
+Host environment
+- Ubuntu 18.04
+- eMMC connected through microUSB from xu3 to host
+
+## Flash boot files
+
+on target
+```
+...
+
+CPU: Exynos5422 @ 800 MHz
+
+Model: Odroid XU3 based on EXYNOS5422
+Board: Odroid XU3 based on EXYNOS5422
+Type: xu3
+DRAM: 2 GiB
+MMC: EXYNOS DWMMC: 0, EXYNOS DWMMC: 1
+In: serial
+Out: serial
+Err: serial
+Net: No ethernet found.
+Hit any key to stop autoboot: 0
+ODROID-XU3 #
+
+ODROID-XU3 # mmc list
+EXYNOS DWMMC: 0 (eMMC)
+EXYNOS DWMMC: 1
+
+ODROID-XU3 # ums 0 mmc 0
+
+UMS: LUN 0, dev 0, hwpart 0, sector 0x0, count 0x1d5a000
+
+/
+```
+
+then on host
+```
+$ sudo fdisk -l
+..........
+
+Partition table entries are not in disk order
+
+Disk /dev/sdh: 32.0 GB, 32010928128 bytes
+
+64 heads, 32 sectors/track, 30528 cylinders, total 62521344 sectors
+
+Units = sectors of 1 * 512 = 512 bytes
+
+Sector size (logical/physical): 512 bytes / 512 bytes
+
+I/O size (minimum/optimal): 512 bytes / 512 bytes
+
+Disk identifier: 0x00000000
+
+
+Device Boot Start End Blocks Id System
+
+/dev/sdh1 * 8192 139263 65536 e W95 FAT16 (LBA) ..........
+```
+
+```
+$ sudo ../sd_fusing_xu4.sh -d /dev/sdh --format \
+ -b bl1.bin.hardkernel bl2.bin.hardkernel.1mb_uboot tzsw.bin.hardkernel u-boot-mmc.bin
+...
+```
+
+The `--format` option will 1) delete the current partitions, 2) create a new partition table, and 3) format each partition.
+
+- If you see a `./sd_fusing_xu4-u1604.sh: line 147: pv: command not found` message and want to get rid of it, install the pv package with `sudo apt-get install pv`
+
+## Flash image files
+```
+$ sudo ../sd_fusing_xu4.sh -d /dev/sdh \
+ -b tizen-unified_20190905.1_tv-boot-armv7l-odroidxu3.tar.gz \
+ tizen-unified_20190905.1_tv-wayland-armv7l-odroidxu3.tar.gz
+```
+
+# After boot
+
+Follow [xu4_tizen](xu4_tizen.md)
+
+# References
+
+- http://suprem.sec.samsung.net/confluence/display/KS/Odroid+XU3
+- http://suprem.sec.samsung.net/confluence/pages/viewpage.action?pageId=104635990
+- http://suprem.sec.samsung.net/confluence/pages/viewpage.action?spaceKey=TPLAB&title=XU3+Image+Flashing
+- http://download.tizen.org/snapshots/tizen/unified/latest/images/standard/
diff --git a/docs/nnfw/howto/device/xu3_ubuntu.md b/docs/nnfw/howto/device/xu3_ubuntu.md
new file mode 100644
index 000000000..38dbc69b0
--- /dev/null
+++ b/docs/nnfw/howto/device/xu3_ubuntu.md
@@ -0,0 +1,114 @@
+## How to setup XU3 with Ubuntu 16.04
+
+Ref: https://wiki.odroid.com/old_product/odroid-xu3/odroid-xu3
+
+MicroSD card images
+- https://dn.odroid.com/5422/ODROID-XU3/Ubuntu/
+
+Latest image (as of writing this file)
+- https://dn.odroid.com/5422/ODROID-XU3/Ubuntu/ubuntu-16.04.3-4.14-minimal-odroid-xu4-20171213.img.xz
+- Flash with `WinFlashTool`
+
+MicroSD boot DIP settings
+- ![image](xu3-dip.png)
+
+SW1-1,2 | 1st Boot media
+-- | --
+ON ON | eMMC
+OFF ON | MicroSD card
+
+Boot
+- login with serial console
+- password: `root`/`odroid`
+
+Set ethernet
+`/etc/network/interfaces`
+```
+# interfaces(5) file used by ifup(8) and ifdown(8)
+# Include files from /etc/network/interfaces.d:
+source-directory /etc/network/interfaces.d
+
+auto lo eth0
+iface lo inet loopback
+
+iface eth0 inet static
+ address 10.113.xxx.yyy
+ netmask 255.255.255.0
+ network 10.113.xxx.0
+ broadcast 10.113.xxx.255
+ gateway 10.113.xxx.1
+ dns-nameservers 10.32.192.11 10.32.193.11 8.8.8.8
+```
+Change `xxx.yyy` to your IP address.
+
+Reboot and login with SSH
+
+### Add proxy settings
+
+Add `/etc/apt/apt.conf.d/90proxies`
+```
+Acquire::http::proxy "http://10.112.1.184:8080/";
+Acquire::https::proxy "http://10.112.1.184:8080/";
+Acquire::ftp::proxy "ftp://10.112.1.184:8080/";
+```
+
+Add `/etc/profile.d/proxy.sh`
+```
+#!/bin/bash
+
+# Proxy
+export HTTP_PROXY=http://10.112.1.184:8080/
+export HTTPS_PROXY=https://10.112.1.184:8080/
+```
+
+### Update and install programs
+
+```
+sudo apt-get update
+sudo apt-get upgrade
+sudo apt-get install vim nfs-common
+```
+
+### For convenience
+
+Edit `~/.profile`
+```
+export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:.
+```
+
+### MALI GPU driver
+
+https://developer.arm.com/products/software/mali-drivers/user-space
+
+Download at `Odroid-XU3` section
+- https://developer.arm.com/-/media/Files/downloads/mali-drivers/user-space/odroid-xu3/malit62xr12p004rel0linux1fbdev.tar.gz?revision=b4f9b859-ac02-408e-9729-c1e50d3a9c6c
+
+Extract the archive and copy its contents to `/usr/lib/fbdev`, for example as sketched below.
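+
+A rough sketch of that step (the archive name comes from the download above; the extracted layout is an assumption, so adjust the copy paths to what the tarball actually contains):
+```
+mkdir -p mali-driver && tar xzf malit62xr12p004rel0linux1fbdev.tar.gz -C mali-driver
+sudo mkdir -p /usr/lib/fbdev
+sudo cp -a mali-driver/* /usr/lib/fbdev/
+```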
+
+File list
+```
+$ ll /usr/lib/fbdev/
+
+total 22520
+drwxr-xr-x 2 root root 4096 Feb 21 02:35 ./
+drwxr-xr-x 57 root root 4096 Feb 21 08:33 ../
+lrwxrwxrwx 1 root root 11 Feb 21 02:35 libEGL.so -> libEGL.so.1*
+lrwxrwxrwx 1 root root 10 Feb 21 02:35 libEGL.so.1 -> libmali.so*
+lrwxrwxrwx 1 root root 17 Feb 21 02:35 libGLESv1_CM.so -> libGLESv1_CM.so.1*
+lrwxrwxrwx 1 root root 10 Feb 21 02:35 libGLESv1_CM.so.1 -> libmali.so*
+lrwxrwxrwx 1 root root 14 Feb 21 02:35 libGLESv2.so -> libGLESv2.so.2*
+lrwxrwxrwx 1 root root 10 Feb 21 02:35 libGLESv2.so.2 -> libmali.so*
+lrwxrwxrwx 1 root root 14 Feb 21 02:35 libOpenCL.so -> libOpenCL.so.1*
+lrwxrwxrwx 1 root root 10 Feb 21 02:35 libOpenCL.so.1 -> libmali.so*
+-rwxr-xr-x 1 root root 21471208 Feb 21 02:35 libmali.so*
+-rwxr-xr-x 1 root root 1580048 Feb 21 02:35 liboffline_compiler_api.so*
+```
+
+Add `/etc/ld.so.conf.d/malifbdev.conf`
+```
+# arm mali
+/usr/lib/fbdev
+```
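+
+After adding the config file, refresh the dynamic linker cache so the new path is picked up:
+```
+sudo ldconfig
+```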
+
+Rename `arm-linux-gnueabihf_EGL.conf` to `arm-linux-gnueabihf_EGL.conf.not`
+- This is to disable mesa (software emulator of EGL)
diff --git a/docs/nnfw/howto/device/xu4_tizen.md b/docs/nnfw/howto/device/xu4_tizen.md
new file mode 100644
index 000000000..a270bef1b
--- /dev/null
+++ b/docs/nnfw/howto/device/xu4_tizen.md
@@ -0,0 +1,228 @@
+# About
+
+This document describes how to flash a microSD card with Tizen for the ODroid XU4.
+
+The tested host environment is Ubuntu 16.04; the target environment is Tizen 5.5.
+
+# Download files
+
+## Images
+
+Boot
+- https://download.tizen.org/snapshots/tizen/unified/latest/images/standard/tv-boot-armv7l-odroidxu3/
+- download the biggest file
+
+Root FS
+- https://download.tizen.org/snapshots/tizen/unified/latest/images/standard/tv-wayland-armv7l-odroidxu3/
+- download the biggest file
+
+If you cannot access directories `tv-boot-armv7l-odroidxu3` or `tv-wayland-armv7l-odroidxu3`, or cannot find images in those directories, go to https://download.tizen.org/snapshots/tizen/unified/ and find latest snapshot including images for Odroid-XU3.
+
+U-Boot images
+```
+wget https://github.com/hardkernel/u-boot/raw/odroidxu3-v2012.07/sd_fuse/hardkernel_1mb_uboot/bl1.bin.hardkernel
+wget https://github.com/hardkernel/u-boot/raw/odroidxu3-v2012.07/sd_fuse/hardkernel_1mb_uboot/bl2.bin.hardkernel.1mb_uboot
+wget https://github.com/hardkernel/u-boot/raw/odroidxu3-v2012.07/sd_fuse/hardkernel_1mb_uboot/tzsw.bin.hardkernel
+```
+
+## Flashing script
+
+Download `sd_fusing_xu4.sh` from https://git.tizen.org/cgit/platform/kernel/u-boot/plain/scripts/tizen/sd_fusing_xu4.sh?h=tizen
+
+This file works on Ubuntu 16.04 and 18.04.
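+
+For example, you can fetch it with `wget` (quoting the URL because of the `?h=tizen` query string):
+```
+wget -O sd_fusing_xu4.sh "https://git.tizen.org/cgit/platform/kernel/u-boot/plain/scripts/tizen/sd_fusing_xu4.sh?h=tizen"
+```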
+
+Make it executable
+```
+chmod u+x sd_fusing_xu4.sh
+```
+
+
+## Files
+
+You should see something like this:
+```
+-rw-r--r-- 1 hseok82 hseok82 15616 11월 5 13:56 bl1.bin.hardkernel
+-rw-r--r-- 1 hseok82 hseok82 14592 11월 5 13:56 bl2.bin.hardkernel.1mb_uboot
+-rwxrwxr-x 1 hseok82 hseok82 8040 11월 5 13:53 sd_fusing_xu4.sh
+-rw-rw-r-- 1 hseok82 hseok82 10515369 11월 5 14:01 tizen-unified_20191105.1_tv-boot-armv7l-odroidxu3.tar.gz
+-rw-rw-r-- 1 hseok82 hseok82 465487683 11월 5 14:01 tizen-unified_20191105.1_tv-wayland-armv7l-odroidxu3.tar.gz
+-rw-r--r-- 1 hseok82 hseok82 262144 11월 5 13:56 tzsw.bin.hardkernel
+```
+
+# Flash
+
+Host environment
+- Ubuntu 16.04
+- microSD connected through USB Reader as `/dev/sdd` file.
+
+## Flash boot files and image files
+
+Pass `--format` if it is a new flash memory card.
+```
+sudo ./sd_fusing_xu4.sh --format \
+-d /dev/sdd \
+-b bl1.bin.hardkernel bl2.bin.hardkernel.1mb_uboot tzsw.bin.hardkernel \
+tizen-unified_20191105.1_tv-boot-armv7l-odroidxu3.tar.gz \
+tizen-unified_20191105.1_tv-wayland-armv7l-odroidxu3.tar.gz
+```
+Change `/dev/sdd` to your configuration.
+
+You will be asked to confirm the format when `--format` is used. Please type `y` to continue.
+```
+/dev/sdd will be formatted, Is it OK? [y/n]
+y
+```
+
+You can omit `--format` from the second run onwards.
+```
+sudo ./sd_fusing_xu4.sh \
+-d /dev/sdd \
+-b bl1.bin.hardkernel bl2.bin.hardkernel.1mb_uboot tzsw.bin.hardkernel \
+tizen-unified_20191105.1_tv-boot-armv7l-odroidxu3.tar.gz \
+tizen-unified_20191105.1_tv-wayland-armv7l-odroidxu3.tar.gz
+```
+The `--format` option will 1) delete the current partitions, 2) create a new partition table, and 3) format each partition.
+
+- If you see a `./sd_fusing_xu4.sh: line 147: pv: command not found` message and want to get rid of it, install the pv package with `sudo apt-get install pv`
+
+# Boot with Tizen
+
+Follow the steps
+
+Step 1.
+- Take out eMMC memory card if you have any
+
+Step 2.
+- Plug-In microSD with Tizen
+
+Step 3. Set boot switch
+- Refer https://wiki.odroid.com/odroid-xu4/hardware/hardware
+- Set `Boot mode selector` switch on the bottom of the board to `uSD`
+
+Step 4. Connect Serial Console port with USB of Host computer
+- Install `minicom`
+```
+sudo apt-get install minicom
+```
+- Add yourself to the group `dialout`
+ - `sudo vi /etc/group`
+- Use serial terminal program like `minicom` (note that `/dev/ttyUSB1` might be different in your environment.)
+```
+minicom --baudrate 115200 --device /dev/ttyUSB1
+```
+- Use `CTRL-a z o` > `Serial port setup` to enter the dialog
+- Baud should be `115200-8N1`
+- Set the `Hardware Flow Control` configuration to `No` to enable communication (keyboard typing, etc.)
+- `Save setup as dfl` in configuration
+- If you are connecting from Windows or Mac, you may need to install the driver
+ - https://www.silabs.com/products/development-tools/software/usb-to-uart-bridge-vcp-drivers
+ - Use `PuTTY` for Windows.
+
+Step 5. Connect Power
+- You should see the boot logs...
+
+Step 6. Login root
+- login `root` pwd `tizen`
+
+# After boot
+
+## Slow down the fan speed
+
+If the fan noise is disturbing, you can slow it down a little.
+
+```
+echo "100" > /sys/devices/platform/pwm-fan/hwmon/hwmon0/pwm1
+```
+This will slow the speed down to 100. The range is from 0 to 255: "0" makes it stop, and "255" is maximum speed.
+This value resets automatically after reboot, so you may have to set it again every time you reboot or whenever the fan gets loud again.
+
+Another solution is changing the cpu governor policy for the big cores to `ondemand`:
+
+```
+echo ondemand | tee /sys/devices/system/cpu/cpu{0..7}/cpufreq/scaling_governor
+```
+
+## Remount root file system writable
+
+The default root FS (except `/opt/usr`) is read-only. If you want to modify the FS, you need to remount it as writable.
+
+```
+mount -o remount,rw /
+```
+
+This resets after reboot, so you need to edit `/etc/fstab` if you want to mount the FS as writable on every boot.
+
+## Wide console
+
+```
+stty cols 200
+```
+
+## Setting IP Address of Target Device
+
+Use `connmanctl`
+
+**CAUTION** PLEASE DO THIS ON YOUR TARGET DEVICE. RUNNING THIS ON YOUR HOST MAY DAMAGE YOUR HOST'S NETWORK CONFIGURATION.
+
+Step 1. Get the service name
+- You first need to connect Ethernet cable.
+```
+connmanctl services
+```
+It will print something like this:
+```
+*AR Wired ethernet_1a43230d5dfa_cable
+```
+
+Step 2. Use `config` to set the IP address
+```
+connmanctl config ethernet_1a43230d5dfa_cable --ipv4 manual 10.113.XXX.YYY 255.255.255.0 10.113.XXX.1
+connmanctl config ethernet_1a43230d5dfa_cable --nameservers 10.32.192.11 10.32.193.11
+```
+where `XXX.YYY` is your address for the target board.
+
+The proxy can also be set with connmanctl, but we have not found a way to verify the setting:
+```
+connmanctl config ethernet_1a43230d5dfa_cable --proxy manual http://10.112.1.184:8080/
+```
+You can also use environment variables, but again we have not found a way to verify them.
+
+
+This information remains after reboot.
+
+# Connecting with SDB
+
+The default Tizen image runs SDBD on the device with the default port (26101).
+
+In your Linux or Windows with `sdb` command,
+```
+sdb connect 10.113.XXX.YYY
+```
+Result will be something like
+```
+* Server is not running. Start it now on port 26099 *
+* Server has started successfully *
+connecting to 10.113.xxx.yyy:26101 ...
+connected to 10.113.xxx.yyy:26101
+```
+With `sdb devices`,
+```
+sdb devices
+List of devices attached
+10.113.xxx.yyy:26101 device xu3
+```
+It shows up as `xu3` because our `xu4` uses the same `xu3` image.
+
+# (Optional) Install OpenCL
+
+To use the arm compute CL backend, install OpenCL.
+You can get OpenCL for Tizen from the Tizen Mali DDK.
+
+# Known issue
+- `ls -al` of root folder shows strange output.
+
+# Reference
+- https://wiki.tizen.org/Quick_guide_for_odroidxu4
+- and the email received from "김석원님"
+- https://magazine.odroid.com/wp-content/uploads/odroid-xu4-user-manual.pdf
+ - https://magazine.odroid.com/odroid-xu4
diff --git a/docs/nnfw/howto/device/xu4_ubuntu.md b/docs/nnfw/howto/device/xu4_ubuntu.md
new file mode 100644
index 000000000..7b8a3aa2b
--- /dev/null
+++ b/docs/nnfw/howto/device/xu4_ubuntu.md
@@ -0,0 +1,99 @@
+## How to use XU4 with Ubuntu 16.04
+
+Ref: https://wiki.odroid.com/odroid-xu4/odroid-xu4
+
+This setup uses an eMMC card with Ubuntu 16.04 pre-installed.
+
+Preparation for IO via serial cable
+- Refer to the `minicom` section in xu4_tizen.md
+- To find the name of serial device, plug your odroid into your host machine and power it on. Then, run the following on your host:
+ ```
+ $ dmesg | grep tty
+ [ 0.000000] console [tty0] enabled
+ [322282.017985] usb 2-1: cp210x converter now attached to ttyUSB0
+ ```
+- Use `CTRL-a z o` > `Serial port setup` to enter the dialog
+- Set `Serial Device` to the name of your serial device, e.g. `/dev/ttyUSB0`
+- Baud should be `115200-8N1`
+- Set `Hardware Flow Control` to `No` to enable communication (keyboard typing, etc.)
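+Instead of saving a configuration, you can also pass the device and baud rate directly on the
+command line for a one-off session:
+
+```
+minicom -D /dev/ttyUSB0 -b 115200
+```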
+
+Connect
+- Connect the eMMC to the bottom of the board
+- Connect the serial console to a host USB port
+- Connect power and boot
+
+Log in with the serial console. You can log in with the `root` or the default `odroid` account.
+- `root` password: `odroid`
+- `odroid` password: `odroid`
+
+Set up Ethernet by editing `/etc/network/interfaces`:
+```
+# interfaces(5) file used by ifup(8) and ifdown(8)
+# Include files from /etc/network/interfaces.d:
+source-directory /etc/network/interfaces.d
+
+auto lo eth0
+iface lo inet loopback
+
+iface eth0 inet static
+ address 10.113.xxx.yyy
+ netmask 255.255.255.0
+ network 10.113.xxx.0
+ broadcast 10.113.xxx.255
+ gateway 10.113.xxx.1
+ dns-nameservers 10.32.192.11 10.32.193.11 8.8.8.8
+```
+Change `xxx.yyy` to your IP address.
+
+Reboot and log in via SSH.
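+If you prefer to apply the network configuration without a full reboot, restarting the interface
+may be enough (a sketch; this assumes the classic `ifupdown` tools that read
+`/etc/network/interfaces`):
+
+```
+sudo ifdown eth0 && sudo ifup eth0
+```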
+
+### Add proxy settings
+
+Add `/etc/apt/apt.conf.d/90proxies`
+```
+Acquire::http::proxy "http://10.112.1.184:8080/";
+Acquire::https::proxy "http://10.112.1.184:8080/";
+Acquire::ftp::proxy "ftp://10.112.1.184:8080/";
+```
+
+Add `/etc/profile.d/proxy.sh`
+```
+#!/bin/bash
+
+# Proxy
+export HTTP_PROXY=http://10.112.1.184:8080/
+export HTTPS_PROXY=https://10.112.1.184:8080/
+```
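+Some command-line tools only honor the lowercase variable names, so you may want to export both
+forms; whether this is needed depends on the tools you use:
+
+```
+export http_proxy=$HTTP_PROXY
+export https_proxy=$HTTPS_PROXY
+```
+
+The script takes effect on the next login; to apply it to the current shell, run
+`source /etc/profile.d/proxy.sh`.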
+
+### Update and install programs
+
+```
+sudo apt-get update
+sudo apt-get upgrade
+sudo apt-get install vim nfs-common
+```
+
+### MALI GPU driver
+
+Driver files are pre-installed on the eMMC as follows:
+```
+odroid@odroid:/usr/lib/arm-linux-gnueabihf/mali-egl$ ll
+total 20136
+drwxr-xr-x 2 root root 4096 Aug 20 2017 ./
+drwxr-xr-x 106 root root 90112 Mar 26 08:32 ../
+-rw-r--r-- 1 root root 38 Apr 30 2017 ld.so.conf
+-rwxr-xr-x 1 root root 2752 Apr 30 2017 libEGL.so*
+lrwxrwxrwx 1 root root 9 Apr 30 2017 libEGL.so.1 -> libEGL.so*
+lrwxrwxrwx 1 root root 9 Apr 30 2017 libEGL.so.1.4 -> libEGL.so*
+-rwxr-xr-x 1 root root 2752 Apr 30 2017 libGLESv1_CM.so*
+lrwxrwxrwx 1 root root 15 Apr 30 2017 libGLESv1_CM.so.1 -> libGLESv1_CM.so*
+lrwxrwxrwx 1 root root 15 Apr 30 2017 libGLESv1_CM.so.1.1 -> libGLESv1_CM.so*
+-rwxr-xr-x 1 root root 2752 Apr 30 2017 libGLESv2.so*
+lrwxrwxrwx 1 root root 12 Apr 30 2017 libGLESv2.so.2 -> libGLESv2.so*
+lrwxrwxrwx 1 root root 12 Apr 30 2017 libGLESv2.so.2.0 -> libGLESv2.so*
+-rwxr-xr-x 1 root root 20493444 May 8 2017 libmali.so*
+-rwxr-xr-x 1 root root 2752 Apr 30 2017 libOpenCL.so*
+lrwxrwxrwx 1 root root 12 Apr 30 2017 libOpenCL.so.1 -> libOpenCL.so*
+lrwxrwxrwx 1 root root 12 Apr 30 2017 libOpenCL.so.1.1 -> libOpenCL.so*
+```
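+To check that these libraries can actually be resolved at run time, you can query the dynamic
+linker; the exact output depends on how `ld.so.conf` is configured on your image:
+
+```
+ldconfig -p | grep -E 'libOpenCL|libmali'
+```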
diff --git a/docs/nnfw/op_list.md b/docs/nnfw/op_list.md
new file mode 100644
index 000000000..a19c0937a
--- /dev/null
+++ b/docs/nnfw/op_list.md
@@ -0,0 +1,71 @@
+# List of Operations Supported by Runtime
+
+The list is based on commit 6f09c89f90216aed7df792.
+
+**Notice: There may be some restrictions on the support of each operation. Details will be updated soon.**
+
+
+| Operation Name | acl_cl | acl_neon | srcn | cpu |
+| -------------------------- | --- | ----- | -- | --- |
+| Abs | O | O | | |
+| Add | O | O | O | O |
+| ArgMax | O | O | | |
+| AvgPool2D | O | O | | |
+| BatchToSpaceND | O | O | | |
+| Cast | O | O | | |
+| Comparison | O | O | | |
+| Concat | O | O | | O |
+| Conv2D | O | O | O | O |
+| Custom | | | | O |
+| DepthToSpace | O | O | | |
+| DepthwiseConv2D | O | O | O | O |
+| Dequantize | O | O | | |
+| Div | O | O | | |
+| EmbeddingLookup | O | O | | |
+| Exp | O | O | | |
+| Floor | O | O | | |
+| FullyConnected | O | O | | O |
+| Gather | O | O | | O |
+| HashtableLookup | O | O | | |
+| InstanceNorm | O | O | O | |
+| L2Normalization | O | O | | |
+| L2Pool2D | O | O | | |
+| LSTM | O | O | | |
+| LocalResponseNormalization | O | O | | |
+| LogicalAnd | O | O | | |
+| LogicalNot | O | O | | |
+| LogicalOr | O | O | | |
+| Logistic | O | O | | O |
+| Max | O | O | | |
+| MaxPool2D | O | O | | O |
+| Mean | O | O | | |
+| Min | O | O | | |
+| Mul | O | O | | O |
+| Neg | O | O | | |
+| PReLU | O | O | | |
+| Pack | O | O | | |
+| Pad | O | O | | O |
+| Permute | O | O | | O |
+| RNN | O | O | | |
+| RSQRT | O | O | | |
+| ReLU | O | O | | |
+| ReLU1 | O | O | | |
+| ReLU6 | O | O | | |
+| ReduceMax | O | O | | |
+| ReduceMin | O | O | | |
+| ReduceSum | O | O | | |
+| Reshape | O | O | | O |
+| ResizeBilinear | O | O | | |
+| SQRT | O | O | | |
+| Softmax | O | O | | O |
+| SpaceToBatchND | O | O | | |
+| SpaceToDepth | O | O | | |
+| Split | O | O | | |
+| SquaredDifference | O | O | | |
+| Squeeze | O | O | | O |
+| StridedSlice | O | O | | |
+| Sub | O | O | | O |
+| Tanh | O | O | | |
+| TopKV2 | O | | | |
+| Transpose | O | O | | |
+| TransposeConv | O | O | O | |
diff --git a/docs/nnfw/roadmap.md b/docs/nnfw/roadmap.md
new file mode 100644
index 000000000..c04bab66b
--- /dev/null
+++ b/docs/nnfw/roadmap.md
@@ -0,0 +1,76 @@
+This document describes the roadmap of the 2019 NN Runtime (or _nnfw_) project.
+
+# Goal
+
+This project _nnfw_ aims at providing a high-performance, on-device neural network (NN) inference
+framework that performs inference of a given NN model on processors, such as CPU, GPU, or NPU, in
+the target platform, such as Tizen and Android.
+
+In 2018, we already saw significant gains from acceleration with a single CPU or GPU back-end. Now
+we want to gain more by using a mixture of CPU and GPU according to the characteristics of each
+operation. This could give us a high degree of freedom in terms of operator coverage, and possibly
+provide better performance than single back-end acceleration.
+
+On the other hand, we are going to introduce a new compiler on the front-end. It will support a
+variety of deep learning frameworks in the relatively resource-rich host PC environment, while the
+runtime running on the target device is intended to carry a smaller burden. In this process, the
+compiler and the runtime will share information effectively through the common IR, which is
+referred to as the NN Package.
+
+# Architecture
+
+![nnfw_architecture](./fig/nnfw_architecture.png)
+
+The figure above illustrates the overall architecture and scope of _nnfw_, along with its sibling
+project _nncc_, for context. In this document, we deal specifically with _nnfw_.
+
+_nnfw_ can be divided into three parts: NN API and NN Runtime, as well as NN Compute, which is
+provided by the platform.
+
+1. NN API
+  - Provide a common interface to applications.
+ - Last year, Android NN API was selected for seamless integration with TF Lite. As long as our
+ NN runtime provides Android NN API as an interface, TF Lite can link to our NN runtime without
+ any modification.
+  - In choosing Android NN API, we expected standardization and rapid adoption, but the results
+    fell far short of that. We could not control its specification, and its growth rate was too
+    slow to accommodate our needs. So this year we will define our own API, the NN Runtime API.
+    (Once the new API is stable, we will provide a way to replace the Android NN API, which will
+    then naturally be deprecated.)
+1. NN Runtime
+  - It already provides significant performance improvements using CPU or GPU acceleration. Now we
+    want to add flexibility by providing functions suited to specific device configurations.
+ - Mixed back-end acceleration enables various usage scenarios according to device-specific CPU
+ or GPU configurations and usage conditions.
+  - By introducing an interpreter, the runtime will respond to dynamic conditions that the compiler
+    cannot handle, and will utilize memory effectively through the memory manager.
+1. NN Compute
+  - Provide a computation acceleration library, such as ACL, or a device driver for the NPU.
+  - This layer is provided by the OS platform, and we will use the library or device driver as is.
+    We may request a specific version from the Platform team, but we do not expect to modify the
+    library ourselves.
+  - This year, we will also introduce an extension mechanism to support custom operations in this
+    layer.
+
+# Deliverables
+
+- On-Device AI SW stack for Tizen
+  + Advanced runtime support with an interpreter, memory manager, and execution planner.
+  + Back-end flexibility, such as CPU/GPU mixed acceleration.
+  + Well-designed custom op support.
+  + Basic infrastructure for NPU support.
+- Specification and implementation of Common IR and Runtime API
+
+# Milestones
+
+- [Project Milestones](https://github.sec.samsung.net/orgs/STAR/projects/1)
+- [Monthly Milestones](https://github.sec.samsung.net/STAR/nnfw/projects/25)
+
+# Workgroups (WGs)
+
+- We organize WGs for major topics, and each WG works on its own topic by breaking it into small
+  tasks/issues, performing them inside the WG, and collaborating with other WGs.
+- The WG information can be found [here](workgroups.md).
+
diff --git a/docs/nnfw/tests/Convolution_manual_3x3.xlsx b/docs/nnfw/tests/Convolution_manual_3x3.xlsx
new file mode 100644
index 000000000..7211f6ab3
--- /dev/null
+++ b/docs/nnfw/tests/Convolution_manual_3x3.xlsx
Binary files differ
diff --git a/docs/nnfw/tests/Softmax_manual.xlsx b/docs/nnfw/tests/Softmax_manual.xlsx
new file mode 100644
index 000000000..5ad4b8b2b
--- /dev/null
+++ b/docs/nnfw/tests/Softmax_manual.xlsx
Binary files differ