Diffstat (limited to 'docs/nncc')
-rw-r--r--  docs/nncc/README.md                               |  56
-rw-r--r--  docs/nncc/design.md                               |  10
-rw-r--r--  docs/nncc/getting_started.md                      |  73
-rw-r--r--  docs/nncc/images/nncc_components.png              | Bin 0 -> 45359 bytes
-rw-r--r--  docs/nncc/images/nncc_idef0_a0.png                | Bin 0 -> 50434 bytes
-rw-r--r--  docs/nncc/images/nncc_idef0_a1.png                | Bin 0 -> 86576 bytes
-rw-r--r--  docs/nncc/images/nncc_idef0_a12.png               | Bin 0 -> 42778 bytes
-rw-r--r--  docs/nncc/project/detailed_level_design.md        | 329
-rw-r--r--  docs/nncc/project/development_document.md         | 257
-rw-r--r--  docs/nncc/project/high_level_design.md            | 457
-rw-r--r--  docs/nncc/project/requirements_specification.md   | 272
-rw-r--r--  docs/nncc/project/test_plan.md                    | 442
-rw-r--r--  docs/nncc/project_guide.md                        |  27
-rw-r--r--  docs/nncc/roadmap.md                              |   6
-rw-r--r--  docs/nncc/v1.0.0/getting_started.md               |  59
-rw-r--r--  docs/nncc/v1.0.0/operation-list.md                |  34
-rw-r--r--  docs/nncc/v1.0.0/tutorial.md                      |  49
-rw-r--r--  docs/nncc/v1.1.0/nncc_in_tizen_studio.md          |  52
-rw-r--r--  docs/nncc/v1.1.0/nncc_in_visual_studio.md         |  61
19 files changed, 2184 insertions(+), 0 deletions(-)
diff --git a/docs/nncc/README.md b/docs/nncc/README.md
new file mode 100644
index 000000000..203b4aa45
--- /dev/null
+++ b/docs/nncc/README.md
@@ -0,0 +1,56 @@
+# 1. nnas SDK
+
+_describe simply that current version is 1.0.0, and nnas SDK has nncc and nnfw._
+
+ _we use semantic versioning. Provide link to https://semver.org/_
+
+_simply mention that we go with apache license_
+
+# 2. nncc
+
+_please write a short description_
+_for example, what is this compiler_
+_design philosophy and advantages of this compiler_
+
+## 2.1. Architecture
+
+_For example, simple architecture or compiling flow, showing we're cool_
+
+## 2.2. Getting Started
+
+This section will explain how to install _nncc_ and how to compile a Tensorflow model file.
+
+### 2.2.1. Supported Environment
+
+_x86, ubuntu 16.04... versions of Tensorflow that produce models.. frozen file..., ... etc..._
+
+### 2.2.2. How to Install
+
+_please write how to install_
+
+### 2.2.3. How to Compile and Package
+
+_what is 'nnpackage'?_
+_environment variables_
+_compiling inception v3 pb file and packaging into an nnpackage_
+_explaining files in an nnpackage_
+_an example with custom op_
+
+## 2.3. List of Supported Operations
+
+_separate md file_
+_showing a list of [ tensorflow op , circle op, limitation ]_
+
+## 2.4. Benchmark
+
+_inception v3 (we have shorter ops)_
+_instance normalization (link to runtime performance)_
+_showing we have bright future_
+
+## 2.5. Support
+
+_report a bug into our github_
+
+## 2.6. Revision History
+
+_separate md file where SDK 1.0.0 and future version history are maintained_
diff --git a/docs/nncc/design.md b/docs/nncc/design.md
new file mode 100644
index 000000000..a01d6fec4
--- /dev/null
+++ b/docs/nncc/design.md
@@ -0,0 +1,10 @@
+This document describes basic principles behind _nncc_ design.
+
+## Goals and non-goals
+
+As mentioned in README.md, _nncc_ aims to provide a general framework for compiling a given NN model
+to an artifact that runs on a target device (such as CPU, GPU, or NPU).
+
+More specifically, _nncc_ aims to create an efficient artifact (in terms of throughput or memory)
+for a specific target by focusing on a restricted set of NN operations. It is not the goal of _nncc_
+to support all known NN operations, although _nncc_ will keep trying to broaden its coverage.
diff --git a/docs/nncc/getting_started.md b/docs/nncc/getting_started.md
new file mode 100644
index 000000000..8f01bd2a4
--- /dev/null
+++ b/docs/nncc/getting_started.md
@@ -0,0 +1,73 @@
+#### Prerequisites
+
+The following toolchains are needed to build the _nncc_ project:
+ - CMake (>= 3.1)
+ - g++ (>= 4.8)
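+
+One may check the versions of the installed toolchains via the following commands:
+```
+nncc$ cmake --version
+nncc$ g++ --version
+```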
+
+#### How to build _nncc_ with docker
+
+_nncc_ provides a ``Dockerfile`` in order to make it easy to set up a development environment.
+
+One may build the ``nncc`` docker image with the following command:
+```
+nncc$ cat infra/docker/Dockerfile | docker build -t nncc -
+...
+```
+
+By default, this ``Dockerfile`` uses "archive.ubuntu.com", which may be quite slow. One may use a mirror site via the ``UBUNTU_MIRROR`` build argument.
+For example, one may enable the use of ``kr.archive.ubuntu.com`` via the following command:
+```
+nncc$ cat infra/docker/Dockerfile | docker build --build-arg UBUNTU_MIRROR="kr.archive.ubuntu.com" -t nncc -
+...
+```
+
+One who works behind a proxy should provide the proxy configuration via the following command:
+```
+nncc$ cat infra/docker/Dockerfile | docker build --build-arg HTTP_PROXY=<HTTP proxy address> --build-arg HTTPS_PROXY=<HTTPS proxy address> -t nncc -
+...
+```
+One may use a simplified command if the ``HTTP_PROXY`` and ``HTTPS_PROXY`` environment variables are already set:
+```
+nncc$ export
+...
+declare -x HTTP_PROXY=...
+declare -x HTTPS_PROXY=...
+...
+nncc$ cat infra/docker/Dockerfile | docker build --build-arg HTTP_PROXY --build-arg HTTPS_PROXY -t nncc -
+...
+```
+
+Note that these configurations are orthogonal to each other. One may freely combine these options as follows:
+```
+nncc$ cat infra/docker/Dockerfile | docker build --build-arg HTTP_PROXY --build-arg HTTPS_PROXY --build-arg UBUNTU_MIRROR="kr.archive.ubuntu.com" -t nncc -
+```
+
+Once the ``nncc`` docker image is built, one may easily build _nncc_ with the following commands:
+```
+nncc$ ./nncc docker-nncc configure
+...
+nncc$ ./nncc docker-nncc build
+...
+```
+
+#### How to build _nncc_ with ninja
+
+You may build _nncc_ with ninja (instead of make) if ninja is available. Please try the following commands:
+```
+nncc$ rm -rf build
+nncc$ ./nncc configure -G Ninja
+nncc$ ./nncc build
+```
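+
+If ninja is not installed, one may install it first; on Ubuntu, the package is named
+``ninja-build``:
+```
+nncc$ sudo apt-get install ninja-build
+```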
+
+#### How to build and run _nncc_ unittests
+
+_nncc_ includes various unittests to check its correctness. One may build and run these unittests via the following commands:
+```
+nncc$ rm -rf build
+nncc$ ./nncc configure -DENABLE_TEST=1
+nncc$ ./nncc build
+nncc$ ./nncc test
+```
+
+**NOTE** As _nncc_ unittests are implemented on top of the google test framework (_gtest_), the _nncc_ build script will automatically download _gtest_ 1.8 from public GitHub.
+If you are not able to access public GitHub from your machine, please override the download URL via the ``GTEST_URL`` environment variable.
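+
+For example, assuming ``GTEST_URL`` is read at configure time, one may point the build to
+an accessible mirror of the _gtest_ 1.8 archive (the URL below is a placeholder):
+```
+nncc$ GTEST_URL=<alternative download URL> ./nncc configure -DENABLE_TEST=1
+```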
diff --git a/docs/nncc/images/nncc_components.png b/docs/nncc/images/nncc_components.png
new file mode 100644
index 000000000..becd63d14
--- /dev/null
+++ b/docs/nncc/images/nncc_components.png
Binary files differ
diff --git a/docs/nncc/images/nncc_idef0_a0.png b/docs/nncc/images/nncc_idef0_a0.png
new file mode 100644
index 000000000..9ba09681f
--- /dev/null
+++ b/docs/nncc/images/nncc_idef0_a0.png
Binary files differ
diff --git a/docs/nncc/images/nncc_idef0_a1.png b/docs/nncc/images/nncc_idef0_a1.png
new file mode 100644
index 000000000..c5ebec5d9
--- /dev/null
+++ b/docs/nncc/images/nncc_idef0_a1.png
Binary files differ
diff --git a/docs/nncc/images/nncc_idef0_a12.png b/docs/nncc/images/nncc_idef0_a12.png
new file mode 100644
index 000000000..dabcad718
--- /dev/null
+++ b/docs/nncc/images/nncc_idef0_a12.png
Binary files differ
diff --git a/docs/nncc/project/detailed_level_design.md b/docs/nncc/project/detailed_level_design.md
new file mode 100644
index 000000000..50fb8fa13
--- /dev/null
+++ b/docs/nncc/project/detailed_level_design.md
@@ -0,0 +1,329 @@
+# SW Detailed Level Design
+
+**Revision history**
+
+| Ver. | Date | Contents | Author | Approver |
+| ---- | ---------- | ----------------- | ----------------- | ------------ |
+| 0.1 | 2018.06.20 | Initial version | Vostokov Sergey | Sung-Jae Lee |
+| 0.2 | 2018.06.21 | SE member review | Alexey Kondrashov | |
+| 1.0 | 2018.06.22 | Final DR1 version | Vostokov Sergey | Sung-Jae Lee |
+
+**Terminology and Abbreviation**
+
+| | |
+| ------------ | ------------------------------------------------------------- |
+| OS | Operating System |
+| OS API | Application interface of OS |
+| HW | Hardware |
+| SW | Software |
+| NN | Neural Network |
+| NN model | Neural network model (Instance of NN built with ML framework) |
+| NN compiler | The compiler for neural network |
+| ML framework | The machine learning framework |
+| TF/TF Lite | Tensorflow/Tensorflow Lite ML framework |
+| IR | Intermediate representation |
+| CI/CI system | Continuous integration system |
+| UI | The user interface |
+| GUI | The graphical user interface |
+| CLI | The command-line interface |
+
+**References**
+
+\[1\] Vostokov Sergey, [SW Requirements Specification](requirements_specification.md)
+
+\[2\] Vostokov Sergey, [SW High-Level Design](high_level_design.md)
+
+## Overview
+
+### Scope
+
+The main goal of the project is to develop a compiler for neural
+networks to produce an executable artefact for a specified SW and HW
+platform.
+
+The development scope includes the following components:
+
+ - Develop importer module to parse, verify and represent NN model for
+ further optimization and compilation
+ - Develop code emitters to produce executable binary for CPU and GPU
+
+
+**2018 year goals:**
+
+ - Support TensorFlow Lite NN model format
+ - Support Caffe NN model format
+ - Support Caffe2 NN model format (Optional)
+ - Support compilation of MobileNet NN
+ - Support compilation of Inception v3 NN
+ - Support ARM CPU
+ - Support ARM GPU (Mali)
+ - Support Tizen OS
+ - Support SmartMachine OS (Optional)
+
+| Product | Target Model Name | Comment |
+| ------------------- | ------------------------------ | ---------------- |
+| Tizen phone | Tizen TM2 | Reference device |
+| Tizen device | Odroid XU4 | Reference board |
+| SmartMachine target | Microvision mv8890, exynos8890 | Reference device |
+
+Table 1-1. Target Model
+
+### Design Consideration
+
+Deep learning software demands reliability and performance. The common
+historical approach is to develop a SW framework (a machine learning
+framework) which computes each step of the neural network inference
+process using supported hardware. This approach is used in many popular
+solutions like Google Tensorflow/Tensorflow Lite, Caffe/2, etc.
+Traditionally, neural network developers build a computation graph and
+then an appropriate machine learning framework interprets it. The latest
+discoveries in the AI field show that the node-visitor method of
+execution is inefficient. As a result, the industry has worked out a
+second approach: a neural network compiler whose generated code executes
+more efficiently.
+
+This document presents the design of the *nncc*, a neural network
+compiler collection. The design should provide the easiest way to extend
+the functionality of the *nncc* by adding new modules with the following
+features:
+
+ - Support neural networks produced by various machine learning
+ frameworks;
+ - Produce an artefact taking advantages of various hardware
+ including specialized processors like NPU;
+ - Apply new domain specific optimization techniques over given NN.
+
+Non-functional requirements to the developed software are well described
+in the SW Requirements Specification; such requirements are not repeated
+here to avoid duplication.
+
+### Constraints
+
+See constraints in SW Requirements Specification.
+
+
+<table>
+<colgroup>
+<col style="width: 24%" />
+<col style="width: 64%" />
+<col style="width: 10%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Item</th>
+<th>Assumptions, Dependencies and the Constraints</th>
+<th>Reference</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>Tizen SW Platform</td>
+<td><dl>
+<dt>The following items should be provided:</dt>
+<dd><ul>
+<li>Tizen API</li>
+<li>Tizen kernel</li>
+<li>Tizen FW</li>
+<li>Tizen SDK</li>
+<li>Tizen naming convention</li>
+</ul>
+</dd>
+</dl></td>
+<td>- <a href="www.tizen.org" class="uri">www.tizen.org</a> <br>- <a href="wiki.tizen.org" class="uri">wiki.tizen.org</a> <br>- <a href="developer.tizen.org" class="uri">developer.tizen.org</a></td>
+</tr>
+<tr class="even">
+<td>SmartMachine OS Platform</td>
+<td><dl>
+<dt>The following items should be provided:</dt>
+<dd><ul>
+<li>SmartMachine API</li>
+<li>SmartMachine kernel</li>
+<li>SmartMachine FW</li>
+<li>SmartMachine SDK</li>
+<li>SmartMachine naming convention</li>
+</ul>
+</dd>
+</dl></td>
+<td>- <a href="http://suprem.sec.samsung.net/confluence/pages/viewpage.action?pageId=81833987">Platform confluence</a> <br>- <a href="https://github.sec.samsung.net/RS7-SmartMachine">Github</a> <br>- <a href="http://suprem.sec.samsung.net/confluence/display/ASEC/Adaptive+AUTOSAR">Functional Safety confluence</a></td>
+</tr>
+<tr class="odd">
+<td>Host OS</td>
+<td>Linux-based OS (Ubuntu, Archlinux, etc)</td>
+<td>- <a href="https://www.ubuntu.com/">Ubuntu site</a> <br>- <a href="https://www.archlinux.org/">Archlinux site</a></td>
+</tr>
+<tr class="even">
+<td>Tizen target HW</td>
+<td>The reference device should be provided: Tizen TM2</td>
+<td></td>
+</tr>
+<tr class="odd">
+<td>SmartMachine target HW</td>
+<td>The reference device should be provided</td>
+<td></td>
+</tr>
+</tbody>
+</table>
+Table 1-2. Assumptions, Dependencies and the Constraints
+
+## SW Detailed Structure Design
+
+### SW Block Structure
+
+The top-level components of the nncc are described in the HLD. A more
+detailed structure and class diagram will be available after development
+completion.
+
+### SW Block Feature
+
+1. Initialization: configure all internal modules (see
+ [{Initialization} Detailed Design](#initialization-detailed-design))
+2. Frontend: Import NN model (see [{Import NN model} Detailed
+ Design](#import-nn-model-detailed-design))
+    - *Caffe frontend*: includes the parser of the Caffe NN model format,
+      a verifier to ensure that parsed data is valid and consistent,
+      and a Caffe-specific IR converter to Model IR
+    - *Caffe2 frontend*: includes the parser of the Caffe2 NN model
+      format, a verifier to ensure that parsed data is valid and
+      consistent, and a Caffe2-specific IR converter to Model IR
+    - *Tensorflow Lite frontend*: includes the parser of the Tensorflow
+      Lite NN model format with an automatic version recognition feature,
+      a verifier to ensure that parsed data is valid and consistent,
+      and a Tensorflow Lite-specific IR converter to Model IR
+3. Backend: Generate the code (see [{Generate the code} Detailed
+ Design](#generate-the-code-detailed-design))
+    - *Interpreter*: As described in the SW High-Level Design document,
+      an imported NN model may proceed through three steps of
+      intermediate representation: Model IR, Coarse-Grained IR, and
+      Fine-Grained IR. The Interpreter backend uses each of these IRs to
+      run inference of a given NN model. As the output, the user gets
+      the results of the calculation of all NN ops included in the
+      original computation graph.
+    - *Binary*: This type refers to generating binary code that can be
+      executed on the target device. The NN compiler can generate code
+      that is either executed solely on the CPU or takes advantage of
+      the GPU when possible, if the corresponding target was specified.
+      The user may want to incorporate 3rd party libraries included in
+      the target firmware or delivered with the application package. In
+      this case, the compiler prepares the data following the EABI
+      convention and embeds invocations of high-level functions by
+      appropriate symbols.
+    - *Soft*: The resulting program is generated source code in a
+      high-level programming language (C or C++). There are two
+      options here: the first one is to generate source code that does
+      not depend on libraries outside of itself, with the exception of
+      system libraries. The second one is to include code that
+      invokes high-level functions from 3rd party libraries. For
+      example, it may be an invocation of matrix multiplication from a
+      GEMM library.
+
+## SW Detailed Operation Design
+
+### {Initialization} Detailed Design
+
+#### Major Function
+
+To provide a valid configuration session for all modules of *nncc* using
+user input from the command line/config file/environment variables.
+
+#### Operation Sequence
+
+Initialization of the *nncc* includes command line option processing,
+configuration of its subsystems, as well as any error checking possible
+at this stage. It consists of the following steps:
+
+1. Collect all command line options and verify their format for
+ validity (no syntax errors etc.)
+
+2. Check for validity and then process general options
+
+3. Load subsystem modules
+
+4. For each one of them:
+
+ - Configure
+ - Pass command line options
+ - Check command line options for validity (for example, check
+ that every required option is present)
+
+At the end of this process each subsystem is configured and has access
+to all data needed for its operation.
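+
+For illustration only, the sequence above may be triggered by a single command line
+invocation; all option names below are hypothetical and serve only to show general
+options followed by subsystem-specific ones:
+
+```
+nncc$ nncc --verbose --target arm-cpu --frontend tflite --input model.tflite
+```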
+
+### {Import NN model} Detailed Design
+
+#### Major Function
+
+To convert given NN model from framework-specific IR to Model IR for
+further processing.
+
+#### Operation Sequence
+
+As shown in the diagram below, neural network import is the main
+function of the compiler frontend. The result of this operation is
+a computation graph which is represented as Model IR.
+
+![image](../images/nncc_idef0_a12.png)
+
+The import process consists of three parts:
+
+1. NN model parsing
+2. Verification of the result from the previous step
+3. Converting the model to the Model IR
+
+During the first step, the file or files containing the model are read
+and represented in a format specific to each NN framework.
+
+The verification step is included to ensure that:
+
+ - None of the files constituting the model are damaged
+ - Model format corresponds to the specified one
+ - Version of the model format corresponds to the specified one
+
+The most important step is accurately converting the model from the
+framework-specific representation to the Model IR. This conversion
+includes:
+
+ - *Translation of the NN model computation graph to the Model IR
+ computation graph.* During the translation new nodes may be
+ introduced - for example, a high-level NN operation may be split
+ into a few smaller ones.
+ - *NN model parameter layout conversion.* The way parameters (also
+    known as weights) of a model are laid out in each specific NN
+ framework may differ, and it is necessary to convert such layout
+ into a unified format.
+ - *NN operation parameter conversion.* Each NN operation has a set
+ of its own parameters describing the way this operation should be
+ performed, and these parameters also differ between frameworks.
+
+The resulting Model IR is equivalent to the initial NN model in terms of how
+NN model inputs would be transformed into its outputs if all the
+operations in the Model IR were executed.
+
+### {Generate the code} Detailed Design
+
+Development in progress. Will be described on Completion DR.
+
+## Interface Design
+
+Development in progress. Will be described on DR2.
+
+## SW Code Structure
+
+| Directory | Description |
+| ------------------------ | -------------------------------------------------------------------- |
+| / | source codes of the build system, main README file |
+| /contrib | Incubating projects |
+| /doc | Contains the documentation of the project |
+| /doc/project | Contains project management documents (SRS, SDD, STD, HLD, DLD, etc) |
+| /libs | Contains the source of the libraries which are used by the nncc |
+| /libs/core | Contains the source code of the core library of nncc |
+| /libs/frontend           | Contains the source code of supported frontend plugins               |
+| /libs/frontend/caffe | The source code for the Caffe frontend |
+| /libs/frontend/caffe2 | The source code for the Caffe2 frontend |
+| /libs/frontend/tflite | The source code for the Tensorflow Lite frontend |
+| /libs/backend            | Contains the source code of supported backend plugins                |
+| /libs/backend/cpu | Contains the source code of CPU backend |
+| /libs/backend/gpu | Contains the source code of GPU backend |
+| /libs/backend/3rd\_party | Contains the source code of backend to utilize 3rd party libraries |
+| /scripts | Various scripts for building and testing the nncc |
+| /tools | The source code of the executables |
diff --git a/docs/nncc/project/development_document.md b/docs/nncc/project/development_document.md
new file mode 100644
index 000000000..8315dd3b6
--- /dev/null
+++ b/docs/nncc/project/development_document.md
@@ -0,0 +1,257 @@
+# SW Development Document
+
+**Revision history**
+
+| Ver. | Date | Contents | Author | Approver |
+| ---- | ---------- | --------------------------- | --------------- | ------------ |
+| 0.1 | 2018.04.12 | Initial version | Vostokov Sergey | Sung-Jae Lee |
+| 0.2 | 2018.04.16 | SE member in-charge review | Ilya Lopatin | |
+| 1.0 | 2018.04.17 | Final Execution DR version | Vostokov Sergey | Sung-Jae Lee |
+| 1.1 | 2018.04.17 | Add SW Quality Verification | Vostokov Sergey | Sung-Jae Lee |
+
+**Terminology and Abbreviation**
+
+| | |
+| ------------ | ------------------------------------------------------------- |
+| OS | Operating System |
+| OS API | Application interface of OS |
+| HW | Hardware |
+| SW | Software |
+| NN | Neural Network |
+| NN model | Neural network model (Instance of NN built with ML framework) |
+| NN compiler | The compiler for neural network |
+| ML framework | The machine learning framework |
+| TF/TF Lite | Tensorflow/Tensorflow Lite ML framework |
+| IR | Intermediate representation |
+| CI/CI system | Continuous integration system |
+| UI | The user interface |
+| GUI | The graphical user interface |
+| CLI | The command-line interface |
+
+## Project Overview
+
+### Purpose and Scope
+
+The main goal of the project is to develop a compiler for neural networks to produce an executable artefact for a specified SW and HW platform.
+
+The development scope includes the following components:
+
+ - Develop importer module to parse, verify and represent NN model for further optimization and compilation
+ - Develop code emitters to produce executable binary for CPU and GPU
+
+
+**2018 year goals:**
+
+ - Support TensorFlow Lite NN model format
+ - Support Caffe NN model format
+ - Support Caffe2 NN model format (Optional)
+ - Support compilation of MobileNet NN
+ - Support compilation of Inception v3 NN
+ - Support ARM CPU
+ - Support ARM GPU (Mali)
+ - Support Tizen OS
+ - Support SmartMachine OS (Optional)
+
+| Product | Target Model Name | Comment |
+| ------------------- | ------------------------------ | ---------------- |
+| Tizen phone | Tizen TM2 | Reference device |
+| Tizen device | Odroid XU4 | Reference board |
+| SmartMachine target | Microvision mv8890, exynos8890 | Reference device |
+
+### Assumptions, Dependencies and Constraints
+
+<table>
+<colgroup>
+<col style="width: 26%" />
+<col style="width: 46%" />
+<col style="width: 26%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Item</th>
+<th>Assumptions, Dependencies and the Constraints</th>
+<th>Reference</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>Tizen SW Platform</td>
+<td><dl>
+<dt>The following items should be provided:</dt>
+<dd><ul>
+<li>Tizen API</li>
+<li>Tizen kernel</li>
+<li>Tizen FW</li>
+<li>Tizen SDK</li>
+<li>Tizen naming convention</li>
+</ul>
+</dd>
+</dl></td>
+<td><ul>
+<li><a href="www.tizen.org" class="uri">www.tizen.org</a></li>
+<li><a href="wiki.tizen.org" class="uri">wiki.tizen.org</a></li>
+<li><a href="developer.tizen.org" class="uri">developer.tizen.org</a></li>
+</ul></td>
+</tr>
+<tr class="even">
+<td>SmartMachine OS Platform</td>
+<td><dl>
+<dt>The following items should be provided:</dt>
+<dd><ul>
+<li>SmartMachine API</li>
+<li>SmartMachine kernel</li>
+<li>SmartMachine FW</li>
+<li>SmartMachine SDK</li>
+<li>SmartMachine naming convention</li>
+</ul>
+</dd>
+</dl></td>
+<td>- <a href="http://suprem.sec.samsung.net/confluence/pages/viewpage.action?pageId=81833987">Platform confluence</a> <br>- <a href="https://github.sec.samsung.net/RS7-SmartMachine">Github</a> <br>- <a href="http://suprem.sec.samsung.net/confluence/display/ASEC/Adaptive+AUTOSAR">Functional Safety confluence</a></td>
+</tr>
+<tr class="odd">
+<td>Host OS</td>
+<td>Linux-based OS (Ubuntu, Archlinux, etc)</td>
+<td>- <a href="https://www.ubuntu.com/">Ubuntu site</a> <br>- <a href="https://www.archlinux.org/">Archlinux site</a></td>
+</tr>
+<tr class="even">
+<td>Tizen target HW</td>
+<td>The reference device should be provided: Tizen TM2</td>
+<td></td>
+</tr>
+<tr class="odd">
+<td>SmartMachine target HW</td>
+<td>The reference device should be provided</td>
+<td></td>
+</tr>
+</tbody>
+</table>
+
+## Development Plan And Result
+
+### Development Schedule
+
+| Task | Deliverable | Plan start | Plan end | Result start | Result end | Responsibility |
+| ------------------------------------ | --------------------------------- | ---------- | -------- | ------------ | ---------- | -------------- |
+| Prepare SW requirements | SRS | 04.2018 | 04.2018 | | | S. Vostokov |
+| Prepare initial SW Test Document | STD | 04.2018 | 04.2018 | | | S. Vostokov |
+| Prepare Initial Project Plan | SDD | 04.2018 | 04.2018 | | | S. Vostokov |
+| Prepare SW Test Document | STD | 04.2018 | 06.2018 | | | S. Vostokov |
+| Prepare design document | HLD, DLD | 05.2018 | 08.2018 | | | S. Vostokov |
+| Prepare test result | STD, UTR | 04.2018 | 10.2018 | | | S. Vostokov |
+| Prepare project completion documents | SDD, Project completion report | 05.2018 | 12.2018 | | | S. Vostokov |
+| Implement Caffe Importer | Caffe NN model Importer | 05.2018 | 09.2018 | | | S. Vostokov |
+| Implement code emitter for CPU | Code emitter | 05.2018 | 09.2018 | | | S. Vostokov |
+| Implement TF Lite Importer | TensorFlow Lite NN model Importer | 05.2018 | 11.2018 | | | S. Vostokov |
+| Implement code emitter for GPU | Code emitter | 02.2018 | 11.2018 | | | S. Vostokov |
+
+### SW Metrics
+
+| Category | Metric | Collection Method | Collection Period | Planned | Actual | Responsibility |
+| -------- | ---------------------------------------------------------------------- | ------------------------ | ----------------------- | ----------------- | ------ | -------------- |
+| Quality | Test pass rate | GTest | 22.02.2018 - 31.12.2018 | 100% | | S. Vostokov |
+| Quality | Defects density | Defect management system | 22.02.2018 - 31.12.2018 | \<= 1 defect/KLOC | | S. Vostokov |
+| Quality | Defects removal rate | Defect management system | 22.02.2018 - 31.12.2018 | 100% | | S. Vostokov |
+| Quality | Critical defects | Static analysis | 22.02.2018 - 31.12.2018 | 0 | | S. Vostokov |
+| Quality | Major defects | Static analysis | 22.02.2018 - 31.12.2018 | 0 | | S. Vostokov |
+| Quality | Code review issue removal | Samsung Research github | 22.02.2018 - 31.12.2018 | 100% | | S. Vostokov |
+| Quality | Comments Rate | `cloc` tool | 22.02.2018 - 31.12.2018 | Exceed 20% | | S. Vostokov |
+| Quality | Cyclomatic Complexity | SVACE | 22.02.2018 - 31.12.2018 | \< 50 | | S. Vostokov |
+| Quality | Unused Items (Unused Files, Unused Functions, Unused Global Variables) | gcc/g++ | 22.02.2018 - 31.12.2018 | 0 | | S. Vostokov |
+| Process | Project On-time Completion Rate | PLM | 22.02.2018 - 31.12.2018 | 100% | | S. Vostokov |
+| Process | Milestone On-time Completion Rate | PLM | 22.02.2018 - 31.12.2018 | 100% | | S. Vostokov |
+| Process | Process compliance | Audit | 22.02.2018 - 31.12.2018 | 100% | | S. Vostokov |
+
+### SW Configurations Management
+
+#### Document
+
+| No | Configuration Item | Location | Submitter |
+| -- | ---------------------------- | -------- | ----------- |
+| 1 | SW Requirement Specification | PLM | S. Vostokov |
+| 2 | SW Development Document | PLM | S. Vostokov |
+| 3 | SW High Level Document | PLM | S. Vostokov |
+| 4 | SW Detailed Level Document | PLM | S. Vostokov |
+| 5 | SW System Test Document | PLM | S. Vostokov |
+| 6 | SW Unit Test Report | PLM | S. Vostokov |
+
+#### SW Source Code
+
+SW Repository:
+<https://github.sec.samsung.net/STAR/nncc>
+
+ git clone https://github.sec.samsung.net/STAR/nncc.git
+
+#### Baseline
+
+| Phase | Baseline Name | SW Configuration Item |
+| ------------------ | ------------------ | ------------------------------------------------------------------------------------------- |
+| 04.2018 Plan | Execution DR | SW Requirement Specification, SW Development Document, System Test Document initial version |
+| 06.2018 Execution | DR1 | System Test Document |
+| 08.2018 Execution | Design document | SW High Level Document, SW Detailed Design Document |
+| 09.2018 Execution | DR2 | |
+| 10.2018 Execution | Test report | SW System Test Document (result), SW Unit Test Report |
+| 12.2018 Completion | Project Completion | Project Completion Report |
+
+## SW Quality Verification
+
+### SW Verification
+
+| No | Verification Item | Quality Goal | Tool | Phase | Development Team Member in Charge | Result | Note |
+| -- | -------------------------------- | ------------------------------------------ | -------- | --------- | --------------------------------- | ------ | ---- |
+| 1 | Open source License Verification | Clear violations of open source obligation | ProtexIP | Execution | Vostokov Sergey | | |
+| 2 | Potential Defect | Fix all defects | Svace | Test | Vostokov Sergey | | |
+| 3 | System Defect | Fix Critical/ Major defects | Github | Test | Vostokov Sergey | | |
+
+### Static Analysis
+
+| No | Activity | Schedule | Result | Comment |
+| -- | --------------------------- | ---------- | ------ | ------- |
+| 1 | SA Verification I (SVACE) | 28.09.2018 | | |
+| 2 | SA Verification II (SVACE) | 30.11.2018 | | |
+| 3 | SA Verification III (SVACE) | 31.12.2018 | | |
+
+### Coding Standard
+
+| No | Activity | Schedule | Result | Comment |
+| -- | ----------------------------------------------------- | -------- | ------ | ------- |
+| 1 | Coding standard enforcement with `clang-format` tool. | Regular | | |
+
+
+### Convergence (integration testing)
+
+Out of scope since integration with other SW is not required by the SW
+Requirements Specification.
+
+### Dynamic Analysis
+
+| No | Activity | Schedule | Result | Comment |
+| -- | ------------------- | ---------- | ------ | ------- |
+| 1 | DA Verification I | 28.09.2018 | | |
+| 2 | DA Verification II | 30.11.2018 | | |
+| 3 | DA Verification III | 31.12.2018 | | |
+
+
+### Architecture Analysis
+
+SW architecture verification is managed by HQ.
+
+### SW Security
+
+Out of the project scope since the project is not related to SW security.
+
+### Code Review
+
+| No | Activity | Schedule | Result | Comment |
+| -- | ----------- | -------- | ------ | ------------------------------------------------------------------- |
+| 1 | Code review | Regular | | All code is reviewed manually using `github` tool before committing |
+
+## Risk Management
+
+| Priority | Risk Description | Risk Reduction Solution | Schedule | Result | Responsibility |
+| -------- | ------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------- | ----------------- | ------ | -------------- |
+| 1 | Project scope is changed due to an extra HQ request | Discuss the new requirements via email and messenger, update SRS | 02.2018 - 12.2018 | | S. Vostokov |
+| 2 | Unavoidable technical difficulties during requirements implementation | Submit requirements changes and get confirmation from HQ | 02.2018 - 12.2018 | | S. Vostokov |
+| 3 | Not enough HR | Hire team members as soon as possible, request assistance from other teams | 02.2018 - 12.2018 | | S. Vostokov |
+| 4 | Use of GPL code | Minimize usage of GPL code, wrap GPL modules with well-defined interfaces so they can be easily replaced. | 02.2018 - 12.2018 | | S. Vostokov |
+| 5 | Requirements would change due to external or internal circumstances, e.g. new technology or product launch | Discuss project changes and make corrections | 02.2018 - 12.2018 | | S. Vostokov |
+
diff --git a/docs/nncc/project/high_level_design.md b/docs/nncc/project/high_level_design.md
new file mode 100644
index 000000000..a15aaca4a
--- /dev/null
+++ b/docs/nncc/project/high_level_design.md
@@ -0,0 +1,457 @@
+# SW High Level Design
+
+**Revision history**
+
+| Ver. | Date | Contents | Author | Approver |
+| ---- | ---------- | ----------------- | ----------------- | ------------ |
+| 0.1 | 2018.05.25 | Initial version | Vostokov Sergey | Sung-Jae Lee |
+| 0.2 | 2018.06.21 | SE member review | Alexey Kondrashov | |
+| 1.0 | 2018.06.22 | Final DR1 version | Vostokov Sergey | Sung-Jae Lee |
+
+**Terminology and Abbreviation**
+
+| Terminology | Description |
+| ------------ | ------------------------------------------------------------- |
+| OS | Operating System |
+| OS API | Application interface of OS |
+| HW | Hardware |
+| SW | Software |
+| NN | Neural Network |
+| NN model | Neural network model (Instance of NN built with ML framework) |
+| NN compiler | The compiler for neural network |
+| ML framework | The machine learning framework |
+| TF/TF Lite | Tensorflow/Tensorflow Lite ML framework |
+| IR | Intermediate representation |
+| CI/CI system | Continuous integration system |
+| UI | The user interface |
+| GUI | The graphical user interface |
+| CLI | The command-line interface |
+
+**References**
+
+\[1\] Vostokov Sergey, [SW Requirements Specification](requirements_specification.md)
+
+## Overview
+
+### Scope
+
+The main goal of the project is to develop a compiler for neural
+networks to produce an executable artefact for a specified SW and HW
+platform.
+
+The development scope includes the following components:
+
+ - Develop importer module to parse, verify and represent NN model for
+ further optimization and compilation
+ - Develop code emitters to produce executable binary for CPU and GPU
+
+
+**2018 year goals:**
+
+ - Support TensorFlow Lite NN model format
+ - Support Caffe NN model format
+ - Support Caffe2 NN model format (Optional)
+ - Support compilation of MobileNet NN
+ - Support compilation of Inception v3 NN
+ - Support ARM CPU
+ - Support ARM GPU (Mali)
+ - Support Tizen OS
+ - Support SmartMachine OS (Optional)
+
+| Product | Target Model Name | Comment |
+| ------------------- | ------------------------------ | ---------------- |
+| Tizen phone | Tizen TM2 | Reference device |
+| Tizen device | Odroid XU4 | Reference board |
+| SmartMachine target | Microvision mv8890, exynos8890 | Reference device |
+
+Table 1-1. Target Model
+
+### Design Consideration
+
+Deep learning software demands reliability and performance. The common
+historical approach is to develop a SW framework (a machine learning
+framework) which computes each step of the neural network inference
+process using supported hardware. This approach is used in many popular
+solutions like Google Tensorflow/Tensorflow Lite, Caffe/2, etc.
+Traditionally, neural network developers build a computation graph and
+then an appropriate machine learning framework interprets it. The latest
+discoveries in the AI field show that the node-visitor method of
+execution is inefficient. As a result, the industry has worked out a
+second approach: a neural network compiler whose generated code executes
+more efficiently.
+
+This document presents the design of the *nncc*, a neural network
+compiler collection. The design should provide the easiest way to extend
+the functionality of the *nncc* by adding new modules with the following
+features:
+
+ - Support neural networks produced by various machine learning
+ frameworks;
+ - Produce an artefact taking advantages of various hardware
+ including specialized processors like NPU;
+ - Apply new domain specific optimization techniques over given NN.
+
+### Constraints
+
+See constraints in SW Requirements Specification.
+
+<table>
+<colgroup>
+<col style="width: 24%" />
+<col style="width: 64%" />
+<col style="width: 10%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Item</th>
+<th>Assumptions, Dependencies and the Constraints</th>
+<th>Reference</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>Tizen SW Platform</td>
+<td><dl>
+<dt>The following items should be provided:</dt>
+<dd><ul>
+<li>Tizen API</li>
+<li>Tizen kernel</li>
+<li>Tizen FW</li>
+<li>Tizen SDK</li>
+<li>Tizen naming convention</li>
+</ul>
+</dd>
+</dl></td>
+<td>- <a href="www.tizen.org" class="uri">www.tizen.org</a> <br>- <a href="wiki.tizen.org" class="uri">wiki.tizen.org</a> <br>- <a href="developer.tizen.org" class="uri">developer.tizen.org</a></td>
+</tr>
+<tr class="even">
+<td>SmartMachine OS Platform</td>
+<td><dl>
+<dt>The following items should be provided:</dt>
+<dd><ul>
+<li>SmartMachine API</li>
+<li>SmartMachine kernel</li>
+<li>SmartMachine FW</li>
+<li>SmartMachine SDK</li>
+<li>SmartMachine naming convention</li>
+</ul>
+</dd>
+</dl></td>
+<td>- <a href="http://suprem.sec.samsung.net/confluence/pages/viewpage.action?pageId=81833987">Platform confluence</a> <br>- <a href="https://github.sec.samsung.net/RS7-SmartMachine">Github</a> <br>- <a href="http://suprem.sec.samsung.net/confluence/display/ASEC/Adaptive+AUTOSAR">Functional Safety confluence</a></td>
+</tr>
+<tr class="odd">
+<td>Host OS</td>
+<td>Linux-based OS (Ubuntu, Archlinux, etc)</td>
+<td>- <a href="https://www.ubuntu.com/">Ubuntu site</a> <br>- <a href="https://www.archlinux.org/">Archlinux site</a></td>
+</tr>
+<tr class="even">
+<td>Tizen target HW</td>
+<td>The reference device should be provided: Tizen TM2</td>
+<td></td>
+</tr>
+<tr class="odd">
+<td>SmartMachine target HW</td>
+<td>The reference device should be provided</td>
+<td></td>
+</tr>
+</tbody>
+</table>
+Table 1-2. Assumptions, Dependencies and the Constraints
+
+## SW System Architecture Design
+
+### Overall Architecture
+
+The picture below presents the result of high-level analysis of the
+requirements which **nncc** should satisfy. It describes the main
+function **Compilation** of the compiler collection using IDEF0
+(functional modeling) notation. Full information on the IDEF family of
+modeling languages is available on [Wikipedia:
+IDEF](https://en.wikipedia.org/wiki/IDEF).
+
+![image](../images/nncc_idef0_a0.png)
+
+Figure 1. Top-Level Context Diagram of compilation function.
+
+
+The short explanation of the **Figure 1**:
+
+**1. Input entities:**
+
+  - *NN Model instance:* It is the main input of *nncc*. The compiler
+    takes from the user information describing a neural network which
+    should be compiled. In most cases, this NN is produced by a
+ machine learning framework and stored in one or many files. The
+ contents of these files constitute the essence of the neural
+ network. Here it is denoted as an instance of NN model.
+ - *Command line options:* In order to provide the most convenient
+ way to use the compiler, it should be configurable. Current design
+ presents a tool which has a Command Line Interface (CLI). Command
+ line options are a symbolic representation of directions
+ instructing the compiler how to set up a working session to get
+ the desired result.
+
+**2. Output:**
+
+ - *Target binaries:* Everything that is produced by the compilation
+    operation. In the general case, the result may consist of one or more
+ files. Each of them may be one of the following: an executable, a
+ source code file, a log/verification/error report. For example,
+ when we require the compiler to compile a neural network for
+ execution on GPU, the output artefact may be OpenCL/C/C++ source
+ code, or a binary containing invocation of the procedures
+ delegating the calculations to GPU.
+
+**3. Rules and notations:**
+
+ - *NN Model specification:* Each machine learning framework has its
+ own architecture design and uses its own format to
+ serialize/deserialize computation graphs which represent neural
+ networks. On a storage device, it may be saved as a file or many
+    files using a unique markup of binary data. To enable *nncc* to
+    read and process such data, it should first recognize
+    the format of the container. The importer/parser subsystem of *nncc*
+ stores the full knowledge of the NN specifications and is
+ responsible for reading and parsing NN models (see [Import NN
+ model](#import-nn-model)).
+ - *High-Level and Low-Level Optimization techniques:* Before
+ deployment, a neural network developer might want to verify their
+ product and optimize it by size and performance. There are many
+ techniques for reducing the common size of neural network weights
+ and improving performance of the inference. NN optimization
+ activity can be automated by implementing each technique in the
+ middleend according to its specifications (see [Apply
+ Optimizations](#apply-optimizations)).
+ - *Target Runtime Environment (TRE):* In the case when the compiler
+ produces the binary for execution on a specific SW platform, it
+ should take into account the common API of this SW Platform. It
+ includes the full public API of a chosen OS available to the 3rd
+ party developers.
+ - *Target Instruction Set Architecture (Target ISA):* Resulting
+ artefact is always executed on a SW Platform using some specified
+ API. The user may want to generate the artefact that would use
+ OpenBlas or Arm Compute Library or something else (if supported by
+ the compiler), to perform calculations. In order to provide such
+ possibility, *nncc* should be aware of the API to the specified
+ 3rd party libraries.
+ - *Device specifications:* Some of the optimization techniques may
+ take into account the technological features of the computing
+ device, like the time to perform some specific calculations. Such
+ information is very helpful during optimization of the final code
+ of the compiled artefact because it may be used to select an
+ optimal sequence of command invocations in order to achieve the
+ best performance.
+
+**4. Mechanism:**
+
+ - *Optimizing NN Compiler:* The implemented compiler itself. Since
+ *nncc* is dedicated to producing the code for the most efficient
+ execution, we may regard the tool as optimizing.
+ - *Host OS:* Since the compiler is a tool that works in some SW
+ Environment, the main Top-Level SW system is an Operating System.
+ In the SW Requirements specification it may be defined as a
+ Linux-like OS, for example Ubuntu, Archlinux, etc.
+
+### Composition of Architecture
+
+The compiler consists of three main parts: frontend, middleend, backend.
+Together they form a Neural Network instance processing pipeline.
+Moreover, there is one additional part that is in charge of the compiler
+configuration.
+
+![image](../images/nncc_components.png)
+
+Figure 2. Top-Level Components of the *nncc*.
+
+| Layer or Subsystem Name | Description |
+| ----------------------- | ---------------------------------------------------------------------------------------------------------------------------------------- |
+| Frontend | Imports a specified Neural Network, presents it as a computation graph |
+| Middleend | Provides various optimizations over the computation graph; at the end transforms it to internal IR |
+| Backend | Produces the specified artefact as a result of compilation procedure using specified parameters describing the target OS, target HW, etc |
+| Configuration system | Accepts command line options and configures *nncc* according to their contents |
+
+
+The detailed decomposition of the main function **Compilation** is
+presented in the diagram A1 below.
+
+### Interface
+
+Similar to any console application, the *nncc* CLI accepts two types of
+options:
+
+ - Options that have values, for example, a name of the output executable
+ - Options that don't have values (switches) that turn various features on and off
+
+Additionally, options can be general and subsystem-specific.
+
+General options direct the process of the neural network compilation as
+a whole, and also control the utility functions like the verbosity of
+the messages that *nncc* outputs during the compilation process.
+
+Subsystem-specific options control each respective subsystem:
+
+  - Frontend subsystem takes options that point to the NN model to
+    compile, specify which format it has, which version of the format,
+    and so on.
+ - Middleend subsystem takes options that either turn on specific
+ optimizations for the NN model, or just point at the more desired
+ outcome, for example "target performance efficiency" or "target
+ memory efficiency".
+ - Backend subsystem takes options that describe the desired target
+ device or architecture and so on.
+
+For better usability, high-level options are also supported. A single
+high-level option is mapped to a group of lower-level options, similar
+to how it is done with conventional compiler drivers like gcc. This way,
+by choosing a single middleend option "target performance", nncc
+will automatically choose a number of performance optimizations by itself.
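+
+For illustration only (the option names here are hypothetical, not a definitive CLI), a
+single high-level option may expand into a group of lower-level ones:
+
+```
+# a hypothetical high-level option ...
+nncc$ nncc --optimize-for performance model.pb
+# ... standing for a group of hypothetical lower-level options
+nncc$ nncc --fuse-operations --precompute-constants model.pb
+```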
+
+## SW System Operation Design
+
+Figure 3 presents a more detailed composition of the main function
+**Compilation**. As shown in the previous section, [Composition of
+Architecture](#composition-of-architecture), it is composed of 5
+subfunctions:
+
+ - Setup and configure each module - *Block 1* (See
+ [Initialization](#initialization) section)
+ - Import the specified neural network - *Block 2* (See [Import NN
+ model](#import-nn-model) section)
+ - Apply High-Level optimizations - *Block 3* (See [Apply
+ Optimizations](#apply-optimizations) section)
+ - Apply Low-Level optimizations - *Block 4* (See [Apply
+ Optimizations](#apply-optimizations) section)
+ - Generate the output code for specified target - *Block 5* (See
+ [Generate the code](#generate-the-code) section)
+
+![image](../images/nncc_idef0_a1.png)
+
+Figure 3. Decomposition of top-Level function **Compilation**.
+
+### Initialization
+
+At this stage the initialization of all submodules of the *nncc*
+happens. This procedure spans from command line option processing to the
+selection of all required and correctly configured modules. At the
+parsing stage the configuration system checks its own consistency. If
+the command line option set is not enough to establish a valid
+configuration, environment variables will be used. Also, almost all
+configuration options can be read from a config file if it is specified
+on the command line.
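+
+For illustration only (the variable and option names are hypothetical), a single session
+may combine all three configuration sources:
+
+```
+nncc$ export NNCC_TARGET=arm-cpu           # hypothetical environment variable
+nncc$ nncc --config session.cfg model.pb   # hypothetical config file option
+```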
+
+### Import NN model
+
+The major function of the *nncc* frontend is to import a specified NN
+model. This means that the frontend should recognize the format of a
+given NN model, parse all internal structures (load the computation
+graph using framework-specific IR: NN topology, NN ops, weights), verify
+their correctness, and convert it to Model IR.
+
+### Apply Optimizations
+
+There are two levels of neural network optimizations in *nncc*.
+
+The first one is High-Level Optimizations; they are applied to the Model
+IR, which is output by the NN Import subsystem.
+
+#### High-Level Optimizations
+
+High-Level optimizations can be divided into two groups:
+
+ - optimizations aimed at reducing the size of the resulting model -
+ *size optimizations*
+ - optimizations aimed at reducing the inference time of the model -
+ *performance optimizations*
+
+These two groups are not mutually exclusive. Some optimization
+techniques positively affect both size and performance, while some of
+them might reduce the size of the model at some performance cost.
+
+High-Level Optimizations in this sense are purely
+neural-network-specific, as they attempt to improve the model by
+manipulating the computation graph and the weights. For example, some
+techniques search for unused parts of the computation graph and remove
+them, or they search for the parts of the graph that can be merged
+together and thus gain some performance. Other techniques manipulate the
+neural network weights - either reduce their amount or modify their
+values in a way that allows for the reduced storage consumption.
+
+Currently, High-Level Optimizations are out of scope of the project.
+
+#### Low-Level Optimization
+
+The Low-Level Optimizations are applied by the compiler closer to the
+end of the whole compilation process, before the executable generation.
+The input for this stage of *nncc* is the Coarse-Grained IR, which is
+output by the High-Level Optimization subsystem.
+
+### Generate the code
+
+The present architecture allows for several backend solutions, depending
+on the specified target. Those solutions can be divided into 3 types:
+
+ - *Interpretation.* At every step inference can be carried out by
+ interpreting IR produced after that step.
+ - *Soft backend.* Resulting program can be generated as source code
+ in high-level programming language (e.g., C/C++) that does not
+ depend on libraries outside of itself, with the exception of
+ system libraries.
+  - *Hardware (Binary) backend.* This type refers to generating binary
+    code that can be executed on the target device. The NN compiler can
+    generate code that is either executed solely on the CPU, or takes
+    advantage of the GPU when possible, if the corresponding target was
+    specified.
+
+Incorporation of third-party libraries can be done either in the form of
+source code or as a precompiled binary artefact.
+
+## Appendix 1. Traceability Matrix
+
+The following table shows mapping between SW Requirements Specification
+and SW High-Level Design
+Document.
+
+| Requirement | Description | Section |
+| ----------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------- |
+| RF-1 (Frontend: Tensorflow Lite) | The compiler should support import of NN model in Tensorflow Lite format (parsing & verification of data scheme v0-v3, 50 NN ops) | [Import NN model](#import-nn-model) |
+| RF-2 (Frontend: Caffe) | The compiler should support import of NN model in Caffe format (parsing & verification) | [Import NN model](#import-nn-model) |
+| RF-3 (Frontend: Caffe2 (Optional)) | The compiler should support import of NN model in Caffe2 format (parsing & verification) | [Import NN model](#import-nn-model) |
+| RF-4 (Frontend: lossless import) | The frontend should use the lossless approach while it is converting any NN model to IR | [Import NN model](#import-nn-model) |
+| RF-5 (Frontend: Inception\_v3)                  | The frontend should successfully import the Inception V3 NN model                                                                  | [Import NN model](#import-nn-model)     |
+| RF-6 (Frontend: MobileNet)                      | The frontend should successfully import the MobileNet NN model                                                                     | [Import NN model](#import-nn-model)     |
+| RF-7 (Backend: ARM CPU) | The compiler should produce executable for ARM CPU | [Generate the code](#generate-the-code) |
+| RF-8 (Backend: ARM GPU) | The compiler should produce the binary that takes advantages of GPU when it was specified before compilation | [Generate the code](#generate-the-code) |
+| RF-9 (Backend: Artefact type) | The compiler should produce executable as a shared library or as a static library | [Generate the code](#generate-the-code) |
+| RF-10 (Backend: Inception\_v3) | The compiler should produce the valid compiled artefact for Inception v3 NN model | [Generate the code](#generate-the-code) |
+| RF-11 (Backend: MobileNet) | The compiler should produce the valid compiled artefact for MobileNet NN model | [Generate the code](#generate-the-code) |
+| RF-12 (Config: command line) | The compiler should get configuration parameters from command line | [Initialization](#initialization) |
+| RF-13 (Config: config file (Optional)) | The compiler should get configuration parameters from config file | [Initialization](#initialization) |
+| RF-14 (Config: environment variable (Optional)) | The compiler should get configuration parameters from environment variables | [Initialization](#initialization) |
+| RF-15 (Artefact: result) | The artefact should provide comparable result to the original NN model for the same input data | [Generate the code](#generate-the-code) |
+| RF-16 (Artefact: input verifications) | The artefact should verify any input data and check consistency | [Generate the code](#generate-the-code) |
+| RF-17 (Artefact: GPU) | The artefact should take advantage of the GPU for GPU-enabled operations | [Generate the code](#generate-the-code) |
+| RF-18 (Artefact: CPU) | The artefact should take advantage of CPU if it was specified | [Generate the code](#generate-the-code) |
+
+**Design Module of S/W Architecture**
+
+| Requirement | Import NN model | Generate the code | Initialization |
+| ----------------------------------------------- | --------------- | ----------------- | -------------- |
+| RF-1 (Frontend: Tensorflow Lite) | O | | |
+| RF-2 (Frontend: Caffe) | O | | |
+| RF-3 (Frontend: Caffe2 (Optional)) | O | | |
+| RF-4 (Frontend: lossless import) | O | | |
+| RF-5 (Frontend: Inception\_v3) | O | | |
+| RF-6 (Frontend: MobileNet) | O | | |
+| RF-7 (Backend: ARM CPU) | | O | |
+| RF-8 (Backend: ARM GPU) | | O | |
+| RF-9 (Backend: Artefact type) | | O | |
+| RF-10 (Backend: Inception\_v3) | | O | |
+| RF-11 (Backend: MobileNet) | | O | |
+| RF-12 (Config: command line) | | | O |
+| RF-13 (Config: config file (Optional)) | | | O |
+| RF-14 (Config: environment variable (Optional)) | | | O |
+| RF-15 (Artefact: result) | | O | |
+| RF-16 (Artefact: input verifications) | | O | |
+| RF-17 (Artefact: GPU) | | O | |
+| RF-18 (Artefact: CPU) | | O | |
diff --git a/docs/nncc/project/requirements_specification.md b/docs/nncc/project/requirements_specification.md
new file mode 100644
index 000000000..7a6fce762
--- /dev/null
+++ b/docs/nncc/project/requirements_specification.md
@@ -0,0 +1,272 @@
+# SW Requirements Specification
+
+
+**Revision history**
+
+| Ver. | Date | Contents | Author | Approver |
+| ---- | ---------- | ------------------------------------------ | ------------------ | ------------ |
+| 0.1 | 2018.04.11 | Initial version | Vostokov Sergey | Sung-Jae Lee |
+| 0.2 | 2018.04.11 | SE member in-charge review | Aleksei Kondrashov | |
+| 1.0 | 2018.04.13 | Final Execution DR version | Vostokov Sergey | Sung-Jae Lee |
+| 1.1 | 2018.05.24 | Add new requirement in Source code section | Vostokov Sergey | Sung-Jae Lee |
+
+## Introduction
+
+### Purpose and scope
+
+The main goal of the project is to develop a compiler for neural
+networks to produce an executable artefact for a specified SW and HW
+platform.
+
+The development scope includes the following components:
+
+ - Develop importer module to parse, verify and represent NN model for
+ further optimization and compilation
+ - Develop code emitters to produce executable binary for CPU and GPU
+
+2018 year goals:
+
+ - Support TensorFlow Lite NN model format
+ - Support Caffe NN model format
+ - Support Caffe2 NN model format (Optional)
+ - Support compilation of MobileNet NN
+ - Support compilation of Inception v3 NN
+ - Support ARM CPU
+ - Support ARM GPU (Mali)
+ - Support Tizen OS
+ - Support SmartMachine OS (Optional)
+
+### Terminology and Abbreviation
+
+| | |
+| ------------ | ------------------------------------------------------------- |
+| OS | Operating System |
+| OS API | Application interface of OS |
+| HW | Hardware |
+| SW | Software |
+| NN | Neural Network |
+| NN model | Neural network model (Instance of NN built with ML framework) |
+| NN compiler | The compiler for neural network |
+| ML framework | The machine learning framework |
+| TF/TF Lite | Tensorflow/Tensorflow Lite ML framework |
+| IR | Intermediate representation |
+| CI/CI system | Continuous integration system |
+| UI | The user interface |
+| GUI | The graphical user interface |
+| CLI | The command-line interface |
+
+### SW System Architecture
+
+The main components of the compiler are the following:
+
+ - Configuration system
+ - Importer (convert supported NN model to Model IR before
+ optimization)
+ - High-Level optimization (Applies HW independent optimizations)
+ - Low-Level optimization (Applies optimizations appropriate to the
+ specified target HW)
+ - Code emitter (Produces the binary to take advantages of CPU and/or
+ GPU)
+
+![image](../images/nncc_idef0_a1.png)
+
+### Relevant Industry Standards
+
+Architecture design is described using IDEF notation. Since the nncc is a part of the open source STAR Platform project,
+no other industry standards are required and/or applicable.
+
+## SW Functional Requirements
+
+### Frontend
+
+| ID | Requirement Name | Description |
+| ---- | --------------------------- | --------------------------------------------------------------------------------------------------------------------------------- |
+| RF-1 | Frontend: Tensorflow Lite | The compiler should support import of NN model in Tensorflow Lite format (parsing & verification of data scheme v0-v3, 50 NN ops) |
+| RF-2 | Frontend: Caffe | The compiler should support import of NN model in Caffe format (parsing & verification) |
+| RF-3 | Frontend: Caffe2 (Optional) | The compiler should support import of NN model in Caffe2 format (parsing & verification) |
+| RF-4 | Frontend: lossless import | The front-end should use the lossless approach while it is converting any NN model to IR |
+| RF-5 | Frontend: Inception\_v3 | The front-end should successfully import the Inception V3 NN model |
+| RF-6 | Frontend: MobileNet | The front-end should successfully import the MobileNet NN model |
+
+### High-Level optimization
+
+No special requirements
+
+### Low-Level optimization
+
+No special requirements
+
+### Backend
+
+| ID | Requirement Name | Description |
+| ----- | ---------------------- | ------------------------------------------------------------------------------------------------------------ |
+| RF-7 | Backend: ARM CPU | The compiler should produce executable for ARM CPU |
+| RF-8  | Backend: ARM GPU | The compiler should produce a binary that takes advantage of the GPU when the GPU was specified before compilation |
+| RF-9 | Backend: Artefact type | The compiler should produce executable as a shared library or as a static library |
+| RF-10 | Backend: Inception\_v3 | The compiler should produce the valid compiled artefact for Inception v3 NN model |
+| RF-11 | Backend: MobileNet | The compiler should produce the valid compiled artefact for MobileNet NN model |
+
+### Configuration
+
+| ID | Requirement Name | Description |
+| ----- | --------------------------------------- | --------------------------------------------------------------------------- |
+| RF-12 | Config: command line | The compiler should get configuration parameters from command line |
+| RF-13 | Config: config file (Optional) | The compiler should get configuration parameters from config file |
+| RF-14 | Config: environment variable (Optional) | The compiler should get configuration parameters from environment variables |
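+
+As an illustration of RF-12 to RF-14, the three configuration sources might be exercised as below. The `nncc` command name and all option and variable names here are hypothetical, for illustration only:
+
+```sh
+# Command line (RF-12): hypothetical options.
+nncc --target arm-cpu --output model.so model.tflite
+# Config file (RF-13, optional): hypothetical option and file.
+nncc --config nncc.cfg model.tflite
+# Environment variable (RF-14, optional): hypothetical variable.
+NNCC_TARGET=arm-gpu nncc model.tflite
+```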
+
+### Compiled Artefact
+
+| ID | Requirement Name | Description |
+| ----- | ----------------------------- | ---------------------------------------------------------------------------------------------- |
+| RF-15 | Artefact: result | The artefact should provide a result comparable to that of the original NN model for the same input data |
+| RF-16 | Artefact: input verifications | The artefact should verify any input data and check consistency |
+| RF-17 | Artefact: GPU | The artefact should take advantage of the GPU for GPU-enabled operations |
+| RF-18 | Artefact: CPU | The artefact should take advantage of the CPU if it was specified |
+
+## SW Non-Functional Requirements
+
+### The compiler
+
+#### Performance
+
+No special requirements
+
+#### SW capacity
+
+No special requirements
+
+#### Reliability
+
+| ID | Requirement Name | Description |
+| ----- | ------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| RNF-1 | Reliability: input | The compiler should produce a correct executable that utilizes the CPU and GPU when correct input data is provided. If incorrect input data is provided, the compiler should not produce a compiled artefact, but should inform the user about all errors encountered |
+
+#### Security
+
+No special requirements
+
+#### Usability
+
+No special requirements
+
+#### Availability
+
+No special requirements
+
+#### Maintainability
+
+No special requirements
+
+#### Extendibility
+
+| ID | Requirement Name | Description |
+| ----- | ----------------------- | ------------------------------------------------------------------------------------------------------------------------- |
+| RNF-2 | Extendibility: frontend | The compiler design and implementations should provide possibility to add new features to front-end: new NN models format |
+| RNF-3 | Extendibility: backend | The compiler design and implementations should provide possibility to add new features to backend (new targets) |
+
+#### Testability
+
+| ID | Requirement Name | Description |
+| ----- | ------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| RNF-4 | Testability: environment | The test environment should be built in order to verify compiler functionality, product build status, artefact build/execution status, artefact calculation results, and calculation memory footprint and performance |
+
+#### Portability
+
+| ID | Requirement Name | Description |
+| ----- | ------------------ | --------------------------------------------------- |
+| RNF-5 | Portability: Linux | The compiler should be portable to Linux-based OSs |
+
+#### Scalability
+
+No special requirements
+
+#### Expandability
+
+No special requirements
+
+#### Configurability
+
+| ID | Requirement Name | Description |
+| ----- | --------------------------------------- | --------------------------------------------------------------------------------- |
+| RNF-6 | Configurability: command line | The compiler should support applying configuration through command line options. |
+| RNF-7 | Configurability: file (Optional) | The compiler should support applying configuration through configuration file. |
+| RNF-8 | Configurability: environment (Optional) | The compiler should support applying configuration through environment variables. |
+
+### The compiled artefact
+
+No special requirements
+
+### The source code
+
+| ID | Requirement Name | Description |
+| ------ | ---------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| RNF-9  | Legislation | All source code files should follow their original licenses and the general project license without any conflicts |
+| RNF-10 | Legitimacy | The project should have its own general license |
+| RNF-11 | Coding style | Each source code file should follow the coding style defined for the project |
+| RNF-12 | Contrib | RNF-9, RNF-10, and RNF-11 are applicable only to the final release version of the source code. These requirements are not applicable to source code placed in the development branch or in any folder used as temporary storage for source code under development. |
+
+## SW Interface Requirements
+
+### The compiler interface
+
+#### User Interface
+
+| ID | Requirement Name | Description |
+| ----- | ---------------------------- | --------------------------------------------------------------------------------------------------------------------------------------- |
+| RIF-1 | Compiler UI: no interaction | The compiler should not require any user interaction during compilation (completed compilations, fatal exit) |
+| RIF-2 | Compiler UI: CLI | The compiler is considered a command-line tool that processes parameters from the command line, a config file, and/or environment variables |
+| RIF-3 | Compiler UI: input | The compiler should provide the facility to specify NN model to be compiled |
+| RIF-4 | Compiler UI: target device | The compiler should provide the facility to specify result target device (CPU or GPU) |
+| RIF-5 | Compiler UI: target platform | The compiler should provide the facility to specify result target SW platform |
+| RIF-6 | Compiler UI: output | The compiler should provide the facility to specify result target name |
+| RIF-7 | Compiler UI: target type | The compiler should provide the facility to specify result target type: shared or static library |
+
+#### Hardware Interface
+
+| ID | Requirement Name | Description |
+| ----- | -------------------------------- | --------------------------------------------------------------------------- |
+| RIF-8 | Compiler HWI: x86\_64 executable | The solution should provide executables to run on an x86\_64-compatible system |
+
+#### Software Interface
+
+| ID | Requirement Name | Description |
+| ------ | ------------------------------------------ | ------------------------------------------------------------------------------------------------ |
+| RIF-9  | Compiler SWI: frontend plugin | The compiler should provide a SW interface in order to add support for new NN model formats |
+| RIF-10 | Compiler SWI: backend plugin (HW) | The compiler should provide a SW interface in order to add support for new HW |
+| RIF-11 | Compiler SWI: backend plugin (SW Platform) | The compiler should provide a SW interface in order to add support for a new SW Platform |
+
+#### Communication Interface
+
+No requirements for communication interface.
+
+### The compiled artefact interface
+
+#### User Interface
+
+| ID | Requirement Name | Description |
+| ------ | ------------------- | ----------------------------------- |
+| RIF-12 | Artefact UI: no GUI | Command line UI in text is suitable |
+
+#### Hardware Interface
+
+| ID | Requirement Name | Description |
+| ------ | ----------------- | ----------------------------------------------------------------------------- |
+| RIF-13 | Artefact HWI: CPU | The artefact should use ARM CPU instruction set when it was built for ARM CPU |
+| RIF-14 | Artefact HWI: GPU | The artefact should use the ARM GPU instruction set when it was built for ARM GPU |
+
+#### Software Interface
+
+| ID | Requirement Name | Description |
+| ------ | -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
+| RIF-15 | Artefact SWI: GPU driver | The artefact should use ARM GPU driver to invoke calculations when it was built for ARM GPU |
+| RIF-16 | Artefact SWI: C/C++ header | The artefact should provide C/C++ interface in order to use it in other applications |
+| RIF-17 | Artefact SWI: shared type | The compiled artefact should be a shared library in order to share it between several executables when it was specified before compilation |
+| RIF-18 | Artefact SWI: static type | The compiled artefact should be a static library in order to be built-in to an executable when it was specified before compilation |
+| RIF-19 | Artefact SWI: Info | The artefact should provide SW interface in order to get the actual status of calculation process (progress, errors, final result) |
+
+#### Communication Interface
+
+No requirements for communication interface.
diff --git a/docs/nncc/project/test_plan.md b/docs/nncc/project/test_plan.md
new file mode 100644
index 000000000..a1f0f0a97
--- /dev/null
+++ b/docs/nncc/project/test_plan.md
@@ -0,0 +1,442 @@
+# SW System Test Document
+
+**Revision history**
+
+| Ver. | Date | Contents | Author | Approver |
+| ---- | ---------- | -------------------------- | ------------------ | ------------ |
+| 0.1 | 2018.04.12 | Initial version | Vostokov Sergey | Sung-Jae Lee |
+| 0.2 | 2018.04.13 | SE member in-charge review | Aleksei Kondrashov | |
+| 1.0 | 2018.04.17 | Final Execution DR version | Vostokov Sergey | Sung-Jae Lee |
+| 1.1 | 2018.06.20 | DR1 version | Vostokov Sergey | Sung-Jae Lee |
+
+**Terminology and Abbreviation**
+
+| | |
+| ------------ | ------------------------------------------------------------- |
+| OS | Operating System |
+| OS API | Application interface of OS |
+| HW | Hardware |
+| SW | Software |
+| NN | Neural Network |
+| NN model | Neural network model (Instance of NN built with ML framework) |
+| NN compiler | The compiler for neural network |
+| ML framework | The machine learning framework |
+| TF/TF Lite | Tensorflow/Tensorflow Lite ML framework |
+| IR | Intermediate representation |
+| CI/CI system | Continuous integration system |
+| UI | The user interface |
+| GUI | The graphical user interface |
+| CLI | The command-line interface |
+
+**References**
+
+\[1\] Vostokov Sergey, [SW Requirements Specification](requirements_specification.md)
+
+## SW System Test Overview
+
+### Purpose
+
+Software testing is an investigation conducted to assess the quality of
+the product under test and to reduce the risk of its failure for users
+or customers. The purpose of testing is to detect software failures so
+that defects may be discovered and corrected.
+
+The software system test procedure is a collection of processes and
+methods used to ensure quality. An additional goal is to make sure that
+the product follows regulations and meets the quality standards expected
+by the customer.
+
+### Scope
+
+As the number of possible tests for any piece of software is practically
+infinite, we use a strategy to select tests that are feasible for the
+available time and resources.
+
+Software system tests attempt to cover requirements listed in the [SW
+Requirement
+Specification](https://github.sec.samsung.net/STAR/nncc/doc/project/requirements_specification.md).
+
+Since the project outcome is a compiler, its testing lies in a
+different domain than many other kinds of application or system testing.
+The tests are dedicated to finding all possible issues that cause the
+following bugs:
+
+ - Compiler crashes (also known as an ICE or Internal Compiler Error)
+
+ - Compiler hangs (kind of infinite loop in the compiler)
+
+ - Bad code generation (a result of incorrect compiler output):
+
+ - Bad code generation that leads to a crash in the application
+ - “Silent” bad code generation
+
+  - Compiler throughput issues (Issues that affect the amount of time
+    the compiler takes to compile code)
+
+ - Code quality issues (Issues that affect the performance of the
+ compiled application)
+
+  - Compiler feature correctness issues (This class of bugs involves the
+    compiler generating correct code, but not doing what a particular
+    feature specifies should be done)
+
+## SW System Test Items
+
+### Functions to be tested
+
+| Feature | Test Item ID | Test Item description |
+| ---------------------------------------- | ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| RF-1, RIF-3 - RIF-7 | TST-1 | Test suite checks NN ops import from Tensorflow Lite format by loading NN model that consists of a single NN op. One test for each NN op. |
+| RF-2, RIF-3 - RIF-7 | TST-2 | Test suite checks NN ops import from Caffe format by loading NN model that consists of a single NN op. One test for each NN op. |
+| RF-3, RIF-3 - RIF-7 | TST-3 | Test suite checks NN ops import from Caffe2 format by loading NN model that consists of a single NN op. One test for each NN op. |
+| RF-5, RIF-3 - RIF-7 | TST-4 | The test should verify successful loading the Inception V3 NN model |
+| RF-6, RIF-3 - RIF-7 | TST-5 | The test should verify successful loading the MobileNet NN model |
+| RF-4 | TST-6 | The test suite should automatically verify the completeness of information that was read from the raw data by comparing it with serialized raw data from Model IR |
+| RF-7, RF-18, RIF-13 | TST-7 | The unit test should automatically verify successful execution of binary on target ARM CPU |
+| RF-8, RF-17, RIF-14, RIF-15 | TST-8 | The unit test should automatically verify successful execution of calculation on GPU |
+| RF-9, RNF-1, RIF-17, RIF-18 | TST-9 | Unit test should verify the existence and format of binary (shared or static) in accordance to specified options |
+| RF-10 | TST-10 | Unit test should verify that compiler produces a compiled artefact for the Inception V3 NN model (Validity of compiled artefact is checked by other tests) |
+| RF-11 | TST-11 | Unit test should verify that compiler produces a compiled artefact for the MobileNet NN model (Validity of compiled artefact is checked by other tests) |
+| RF-12, RF-13, RF-14, RNF-6, RNF-7, RNF-8 | TST-12 | The test suite should verify correctness of configuration object by unit testing |
+| RF-15, RNF-1 | TST-13 | The test suite is to verify the correctness of calculations by comparing the result of original NN model and the result of compiled artefact on the same input data |
+| RF-16 | TST-14 | Unit test should verify that the incorrect input data is processed with an error message without unexpected termination of the application |
+| RNF-4, RNF-5, RIF-8 | TST-15 | A Linux-based OS should be used while the test environment is built. |
+| RIF-16 | TST-16 | The unit test should verify the existence and validity of generated C/C++ header for compiled artefact |
+
+Table 2-1. Test Item
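+
+To make TST-13 concrete, the comparison between the original model and the compiled artefact might be scripted as below. The three helper tools and the tolerance are hypothetical placeholders used only to illustrate the test flow:
+
+```sh
+# Run the original model and the compiled artefact on the same input,
+# then compare outputs within a tolerance ("comparable result", RF-15).
+# run_tflite_reference, run_compiled_artefact, and compare_tensors are
+# hypothetical helper tools, not part of the project.
+run_tflite_reference  --model inception_v3.tflite --input input.bin --output ref.bin
+run_compiled_artefact --lib libinception_v3.so    --input input.bin --output out.bin
+compare_tensors --atol 1e-4 ref.bin out.bin && echo "results are comparable"
+```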
+
+**The following requirements can be tested only manually:**
+
+ - Non-functional requirements: RNF-2, RNF-3 (They would be tested
+ during development)
+ - Interface requirements: RIF-1, RIF-2, RIF-9 - RIF-12, RIF-19
+
+### Functions not to be tested
+
+The following requirements cannot be tested:
+
+  - The source code requirements (RNF-9, RNF-10, RNF-11)
+
+## SW System Test Procedure
+
+### Test approaches
+
+During implementation of the project deliverables, several kinds of
+testing are used. All of them are performed automatically by the
+continuous integration (CI) system once it is set up. The CI system
+subscribes to source code modifications in the version control system.
+The configuration does not allow any changes to be merged into the main
+line if these changes do not pass the merge-mandatory tests. (A sketch
+of a typical local test run is shown after the list below.)
+
+ - **Code style check** (Merge mandatory test): to verify consistency
+ of coding style
+ - **Build test** (Merge mandatory test): to verify the current build
+  - **Unit tests**: to verify SW system consistency. All newly implemented
+    features, code refactoring, and optimizations must not cause unit test
+    failures. Each unit test reflects the exact logic of the tested
+    component; thus, it should be adapted whenever the program logic
+    changes.
+ - **System tests**: to verify the feature quality as well as
+ compliance with its specified requirements.
+ - **Manual-based UI testing approach**: for interface requirements,
+ which cannot be automated
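+
+Per Table 3-1, tests are executed via CMake. As a rough local-run sketch (the build directory and options are illustrative, not the project's actual build commands):
+
+```sh
+# Configure and build, then run the registered unit/system tests via CTest.
+mkdir -p build && cd build
+cmake ..
+make -j"$(nproc)"
+ctest --output-on-failure
+```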
+
+### Test Pass/Fail Criteria
+
+All tests (unit/system) must be executed without any issues at any time
+for newly implemented, refactored, or changed code.
+
+### Test Start/Suspension/Resumption criteria
+
+Two mandatory tests (code style check and build test) are performed for
+every pull request (PR) before it is merged. The configuration of the
+continuous integration (CI) system does not allow the changes to be
+merged into the devel branch if they do not pass the tests.
+
+Unit and feature testing are performed automatically for the devel
+branch. Merging to the master branch (release) is possible only when
+all these tests pass.
+
+### Regression Test strategy
+
+If a new issue is detected and it is not covered by an existing test,
+then a new test will be developed; otherwise, the issue should simply be
+resolved.
+
+### Test tools
+
+| | |
+| ------------------------------- | ------------------------------------------------------------------------------------ |
+| Source code static verification | AEGIS (CODE pre-commit test suite: static/structure/open source violation analyzers) |
+| Test execution | CMake |
+| Defect management | Samsung Research GitHub |
+| Continuous Integration system | HQ CI (CODE) |
+
+Table 3-1. Test Tools
+
+## SW System Test Schedule Plan
+
+### Test task & schedule
+
+| Task           | Schedule                | Responsibility | Detailed Task                          |
+| -------------- | ----------------------- | -------------- | -------------------------------------- |
+| Unit testing   | 01.04.2018 - 31.12.2018 | All            | All unit tests should be carried out   |
+| System testing | 01.04.2018 - 31.12.2018 | All            | All system tests should be carried out |
+
+Table 4-1. Test Tasks and Schedule
+
+### Test Resource organization plan
+
+#### Test environment
+
+| Type/Model | Operating System | Usage |
+| ---------- | --------------------------------- | ------------------------------------------------------------------------ |
+| PC/x86 | Ubuntu GNU/Linux version \>=14.04 | Build system with unit tests; system tests are performed as well. |
+| Tizen TM2 | Tizen | Unit and system testing |
+| Odroid XU4 | Tizen | Unit and system testing |
+
+Table 4-2. Hardware / Operating System
+
+| Type | Spec | Usage |
+| ------------------- | ----------------------------------------------------- | ------------------------------------------------------------------------------- |
+| Library | Google test | Organize test code and provide utility methods |
+| VCS | Samsung github | The source code version controlling system |
+| CI | CODE | The HQ CI system |
+| Build system | CMake | Run test and check status |
+| Device connectivity | sdb | Send tools to the device and provide shell to run it |
+| Management tool | The CODE (Collaborative Open Development Environment) | Source code version control, code review, issue tracker, Continuous Integration |
+
+Table 4-3. Software
+
+### Risk management plan
+
+| Risk | Description | Probability | Countermeasures |
+| ------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------- | ----------- | --------------------------------------------------------------------------------------- |
+| SmartMachine OS SDK toolchain is not available | In order to support compilation for SmartMachine OS, the SDK is required. The compiler would have a dependency on a SmartMachine OS SDK toolchain. | High | Suspend support of SmartMachine OS, and make plans when the SmartMachine OS SDK is released |
+| SmartMachine OS targets are not available | To perform testing of executables for SmartMachine OS, the specified targets are required. | High | Request targets or a SW emulator when SmartMachine OS is released |
+| HQ CI does not support target testing | Some tests require target devices to run on. The provided CI system may not support this type of testing. | High | Set up a CI environment on site |
+| Targets for testing/development are not available | Fully automatic testing may take a long time. It also requires target devices to execute the binaries. | Medium | Request/buy a sufficient number of devices |
+
+Table 4-5. Risk Management
+
+### SW configuration management plan
+
+#### SW Configuration items identification
+
+| No | Document number | SW configuration Item | File name |
+| -- | ------------------------- | ------------------------------ | ------------------------------------------- |
+| 1 | SRR-RAJ0118ZZ-BWRF-STD001 | System Test Document | 18 NN compiler and Optimizer (STD) v1.0.pdf |
+| 2 | SRR-RAJ0118ZZ-BWRF-STS001 | System Test Case Specification | 18 NN compiler and Optimizer (STS) v1.0.pdf |
+| 3 | SRR-RAJ0118ZZ-BWRF-UTR001 | Unit Test Report | 18 NN compiler and Optimizer (UTR) v1.0.pdf |
+
+Table 4-6. SW Configuration Items List
+
+#### Directory Structure
+
+| Directory | Description |
+| ------------------------ | -------------------------------------------------------------------- |
+| / | source codes of the build system, main README file |
+| /contrib | Incubating projects |
+| /doc | Contains the documentation of the project |
+| /doc/project | Contains project management documents (SRS, SDD, STD, HLD, DLD, etc) |
+| /libs | Contains the source of the libraries which are used by the nncc |
+| /libs/core | Contains the source code of the core library of nncc |
+| /libs/frontend | Contains the source code of supported frontend's plugins |
+| /libs/frontend/caffe | The source code for the Caffe frontend |
+| /libs/frontend/caffe2 | The source code for the Caffe2 frontend |
+| /libs/frontend/tflite | The source code for the Tensorflow Lite frontend |
+| /libs/backend | Contains the source code of supported backend plugins |
+| /libs/backend/cpu | Contains the source code of CPU backend |
+| /libs/backend/gpu | Contains the source code of GPU backend |
+| /libs/backend/3rd\_party | Contains the source code of backend to utilize 3rd party libraries |
+| /scripts | Various scripts for building and testing the nncc |
+| /tools | The source code of the executables |
+
+Table 4-7. Directory Structure
+
+#### Baseline
+
+| Test Round | Baseline Name | Configuration Item | Schedule |
+| ---------- | ------------- | ---------------------------------------------------- | ---------- |
+| Round 1 | The nncc v0.5 | SRR-RAJ0118ZZ-BWRF-STD001, SRR-RAJ0118ZZ-BWRF-UTR001 | 01.09.2018 |
+| Round 2 | The nncc v1.0 | SRR-RAJ0118ZZ-BWRF-STD002, SRR-RAJ0118ZZ-BWRF-UTR002 | 01.12.2018 |
+
+Table 4-8. Baselines
+
+## SW System Test Case
+
+| TestItem ID | Testcase ID | Test Procedures | Expected Results |
+| ----------- | ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| TST-1 | TST-1-1 | Import a NN consisting of a single Tensorflow Lite ADD operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-2 | Import a NN consisting of a single Tensorflow Lite AVERAGE\_POOL\_2D operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-3 | Import a NN consisting of a single Tensorflow Lite CONCATENATION operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-4 | Import a NN consisting of a single Tensorflow Lite CONV\_2D operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-5 | Import a NN consisting of a single Tensorflow Lite DEPTHWISE\_CONV\_2D operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-6 | Import a NN consisting of a single Tensorflow Lite DEQUANTIZE operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-7 | Import a NN consisting of a single Tensorflow Lite EMBEDDING\_LOOKUP operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-8 | Import a NN consisting of a single Tensorflow Lite FULLY\_CONNECTED operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-9 | Import a NN consisting of a single Tensorflow Lite HASHTABLE\_LOOKUP operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-10 | Import a NN consisting of a single Tensorflow Lite L2\_NORMALIZATION operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-11 | Import a NN consisting of a single Tensorflow Lite L2\_POOL\_2D operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-12 | Import a NN consisting of a single Tensorflow Lite LOCAL\_RESPONSE\_NORMALIZATION operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-13 | Import a NN consisting of a single Tensorflow Lite LOGISTIC operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-14 | Import a NN consisting of a single Tensorflow Lite LSH\_PROJECTION operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-15 | Import a NN consisting of a single Tensorflow Lite LSTM operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-16 | Import a NN consisting of a single Tensorflow Lite MAX\_POOL\_2D operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-17 | Import a NN consisting of a single Tensorflow Lite MUL operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-18 | Import a NN consisting of a single Tensorflow Lite RELU operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-19 | Import a NN consisting of a single Tensorflow Lite RELU\_N1\_TO\_1 operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-20 | Import a NN consisting of a single Tensorflow Lite RELU6 operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-21 | Import a NN consisting of a single Tensorflow Lite RESHAPE operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-22 | Import a NN consisting of a single Tensorflow Lite RESIZE\_BILINEAR operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-23 | Import a NN consisting of a single Tensorflow Lite RNN operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-24 | Import a NN consisting of a single Tensorflow Lite SOFTMAX operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-25 | Import a NN consisting of a single Tensorflow Lite SPACE\_TO\_DEPTH operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-26 | Import a NN consisting of a single Tensorflow Lite SVDF operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-27 | Import a NN consisting of a single Tensorflow Lite TANH operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-28 | Import a NN consisting of a single Tensorflow Lite CONCAT\_EMBEDDINGS operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-29 | Import a NN consisting of a single Tensorflow Lite SKIP\_GRAM operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-30 | Import a NN consisting of a single Tensorflow Lite CALL operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-31 | Import a NN consisting of a single Tensorflow Lite CUSTOM operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-32 | Import a NN consisting of a single Tensorflow Lite EMBEDDING\_LOOKUP\_SPARSE operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-33 | Import a NN consisting of a single Tensorflow Lite PAD operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-34 | Import a NN consisting of a single Tensorflow Lite UNIDIRECTIONAL\_SEQUENCE\_RNN operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-35 | Import a NN consisting of a single Tensorflow Lite GATHER operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-36 | Import a NN consisting of a single Tensorflow Lite BATCH\_TO\_SPACE\_ND operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-37 | Import a NN consisting of a single Tensorflow Lite SPACE\_TO\_BATCH\_ND operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-38 | Import a NN consisting of a single Tensorflow Lite TRANSPOSE operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-39 | Import a NN consisting of a single Tensorflow Lite MEAN operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-40 | Import a NN consisting of a single Tensorflow Lite SUB operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-41 | Import a NN consisting of a single Tensorflow Lite DIV operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-42 | Import a NN consisting of a single Tensorflow Lite SQUEEZE operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-43 | Import a NN consisting of a single Tensorflow Lite UNIDIRECTIONAL\_SEQUENCE\_LSTM operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-44 | Import a NN consisting of a single Tensorflow Lite STRIDED\_SLICE operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-45 | Import a NN consisting of a single Tensorflow Lite BIDIRECTIONAL\_SEQUENCE\_RNN operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-46 | Import a NN consisting of a single Tensorflow Lite EXP operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-47 | Import a NN consisting of a single Tensorflow Lite TOPK\_V2 operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-48 | Import a NN consisting of a single Tensorflow Lite SPLIT operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-49 | Import a NN consisting of a single Tensorflow Lite LOG\_SOFTMAX operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-50 | Import a NN consisting of a single Tensorflow Lite DELEGATE operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-51 | Import a NN consisting of a single Tensorflow Lite BIDIRECTIONAL\_SEQUENCE\_LSTM operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-52 | Import a NN consisting of a single Tensorflow Lite CAST operation | During import no crashes or error messages occurred |
+| TST-2 | TST-2-1 | Import a NN consisting of Caffe ImageData layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-2 | Import a NN consisting of Caffe Data layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-3 | Import a NN consisting of Caffe HDF5Input layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-4 | Import a NN consisting of two Caffe layers - Input layer and HDF5Output layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-5 | Import a NN consisting of Caffe Input layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-6 | Import a NN consisting of Caffe WindowData layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-7 | Import a NN consisting of Caffe MemoryData layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-8 | Import a NN consisting of Caffe DummyData layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-9 | Import a NN consisting of two Caffe layers - Input layer and Convolution layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-10 | Import a NN consisting of two Caffe layers - Input layer and Pooling layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-11 | Import a NN consisting of two Caffe layers - Input layer and SPP layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-12 | Import a NN consisting of two Caffe layers - Input layer and Crop layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-13 | Import a NN consisting of two Caffe layers - Input layer and Deconvolution layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-14 | Import a NN consisting of two Caffe layers - Input layer and Im2Col layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-15 | Import a NN consisting of two Caffe layers - Input layer and Recurrent layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-16 | Import a NN consisting of two Caffe layers - Input layer and RNN layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-17 | Import a NN consisting of two Caffe layers - Input layer and LSTM layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-18 | Import a NN consisting of two Caffe layers - Input layer and InnerProduct layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-19 | Import a NN consisting of two Caffe layers - Input layer and Dropout layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-20 | Import a NN consisting of two Caffe layers - Input layer and Embed layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-21 | Import a NN consisting of two Caffe layers - Input layer and LRN layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-22 | Import a NN consisting of two Caffe layers - Input layer and MVN layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-23 | Import a NN consisting of two Caffe layers - Input layer and BatchNorm layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-24 | Import a NN consisting of two Caffe layers - Input layer and ReLU layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-25 | Import a NN consisting of two Caffe layers - Input layer and PReLU layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-26 | Import a NN consisting of two Caffe layers - Input layer and ELU layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-27 | Import a NN consisting of two Caffe layers - Input layer and Sigmoid layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-28 | Import a NN consisting of two Caffe layers - Input layer and TanH layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-29 | Import a NN consisting of two Caffe layers - Input layer and AbsVal layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-30 | Import a NN consisting of two Caffe layers - Input layer and Power layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-31 | Import a NN consisting of two Caffe layers - Input layer and Exp layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-32 | Import a NN consisting of two Caffe layers - Input layer and Log layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-33 | Import a NN consisting of two Caffe layers - Input layer and BNLL layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-34 | Import a NN consisting of two Caffe layers - Input layer and Threshold layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-35 | Import a NN consisting of two Caffe layers - Input layer and Bias layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-36 | Import a NN consisting of two Caffe layers - Input layer and Scale layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-37 | Import a NN consisting of two Caffe layers - Input layer and Flatten layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-38 | Import a NN consisting of two Caffe layers - Input layer and Reshape layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-39 | Import a NN consisting of two Caffe layers - Input layer and BatchReindex layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-40 | Import a NN consisting of two Caffe layers - Input layer and Split layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-41 | Import a NN consisting of two Caffe layers - Input layer and Concat layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-42 | Import a NN consisting of two Caffe layers - Input layer and Slice layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-43 | Import a NN consisting of two Caffe layers - Input layer and Eltwise layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-44 | Import a NN consisting of two Caffe layers - Input layer and Filter layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-45 | Import a NN consisting of two Caffe layers - Input layer and Parameter layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-46 | Import a NN consisting of two Caffe layers - Input layer and Reduction layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-47 | Import a NN consisting of two Caffe layers - Input layer and Silence layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-48 | Import a NN consisting of two Caffe layers - Input layer and ArgMax layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-49 | Import a NN consisting of two Caffe layers - Input layer and Softmax layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-50 | Import a NN consisting of two Caffe layers - Input layer and Python layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-51 | Import a NN consisting of two Caffe layers - Input layer and MultinomialLogisticLoss layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-52 | Import a NN consisting of two Caffe layers - Input layer and Infogain layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-53 | Import a NN consisting of two Caffe layers - Input layer and SoftmaxWithLoss layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-54 | Import a NN consisting of two Caffe layers - Input layer and EuclideanLoss layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-55 | Import a NN consisting of two Caffe layers - Input layer and HingeLoss layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-56 | Import a NN consisting of two Caffe layers - Input layer and SigmoidCrossEntropyLoss layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-57 | Import a NN consisting of two Caffe layers - Input layer and Accuracy layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-58 | Import a NN consisting of two Caffe layers - Input layer and ContrastiveLoss layer | During import no crashes or error messages occurred |
+| TST-3 | TST-3-1 | Import a NN consisting of a single Caffe2 Add operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-2 | Import a NN consisting of a single Caffe2 AveragePool2D operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-3 | Import a NN consisting of a single Caffe2 Concat operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-4 | Import a NN consisting of a single Caffe2 Conv2D operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-5 | Import a NN consisting of a single Caffe2 FC operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-6 | Import a NN consisting of a single Caffe2 LRN operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-7 | Import a NN consisting of a single Caffe2 Sigmoid operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-8 | Import a NN consisting of a single Caffe2 MaxPool2D operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-9 | Import a NN consisting of a single Caffe2 Mul operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-10 | Import a NN consisting of a single Caffe2 Relu operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-11 | Import a NN consisting of a single Caffe2 Reshape operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-12 | Import a NN consisting of a single Caffe2 Softmax operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-13 | Import a NN consisting of a single Caffe2 Tanh operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-14 | Import a NN consisting of a single Caffe2 PadImage operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-15 | Import a NN consisting of a single Caffe2 BatchToSpace operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-16 | Import a NN consisting of a single Caffe2 SpaceToBatch operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-17 | Import a NN consisting of a single Caffe2 Transpose operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-18 | Import a NN consisting of a single Caffe2 Mean operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-19 | Import a NN consisting of a single Caffe2 Sub operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-20 | Import a NN consisting of a single Caffe2 Div operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-21 | Import a NN consisting of a single Caffe2 Squeeze operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-22 | Import a NN consisting of a single Caffe2 Exp operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-23 | Import a NN consisting of a single Caffe2 TopK operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-24 | Import a NN consisting of a single Caffe2 Split operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-25 | Import a NN consisting of a single Caffe2 Cast operation | During import no crashes or error messages occurred |
+| TST-4 | TST-4-1 | Import Inception V3 NN model | During import no crashes or error messages occurred |
+| TST-5 | TST-5-1 | Import MobileNet NN model | During import no crashes or error messages occurred |
+| TST-6 | TST-6-1 | Import Inception V3 NN model, serialize all model weights, compare serialized data with the initial NN model | Test executed successfully, serialized weights are equal to initial model weights |
+| TST-6 | TST-6-2 | Import MobileNet NN model, serialize all model weights, compare serialized data with the initial NN model | Test executed successfully, serialized weights are equal to initial model weights |
+| TST-7 | TST-7-1 | Generate binary for the Inception V3 NN model and run its inference on a device with ARM CPU | Test executed successfully, no crashes occurred, inference result was output, amount and format of the outputs correspond to the expected NN model outputs |
+| TST-7 | TST-7-2 | Generate binary for the MobileNet NN model and run its inference on a device with ARM CPU | Test executed successfully, no crashes occurred, inference result was output, amount and format of the outputs correspond to the expected NN model outputs |
+| TST-8 | TST-8-1 | Generate binary for the Inception V3 NN model and run its inference on a GPU-enabled device | Test executed successfully, no crashes occurred, inference result was output, amount and format of the outputs correspond to the expected NN model outputs |
+| TST-8 | TST-8-2 | Generate binary for the MobileNet NN model and run its inference on a GPU-enabled device | Test executed successfully, no crashes occurred, inference result was output, amount and format of the outputs correspond to the expected NN model outputs |
+| TST-9 | TST-9-1 | Provide correct NN model, compile it as a static library, then check that corresponding binary exists and it is a static library | Test executed successfully |
+| TST-9 | TST-9-2 | Provide correct NN model, compile it as a shared library, then check that corresponding binary exists and it is a shared library | Test executed successfully |
+| TST-9 | TST-9-3 | Provide incorrect model, compile it as a static library, then check that no compiled artifact is produced | Test executed successfully |
+| TST-9 | TST-9-4 | Provide incorrect model, compile it as a shared library, then check that no compiled artifact is produced | Test executed successfully |
+| TST-10 | TST-10-1 | Check that a static library is provided after compiling Inception V3 as a static library | Test executed successfully |
+| TST-10 | TST-10-2 | Check that a shared library is provided after compiling Inception V3 as a shared library | Test executed successfully |
+| TST-11 | TST-11-1 | Check that a static library is provided after compiling MobileNet as a static library | Test executed successfully |
+| TST-11 | TST-11-2 | Check that a shared library is provided after compiling MobileNet as a shared library | Test executed successfully |
+| TST-12 | TST-12-1 | Check that configuration object is constructed correctly when getting configuration parameters from command line | Test executed successfully |
+| TST-12 | TST-12-2 | Check that configuration object is constructed correctly when getting configuration parameters from config file | Test executed successfully |
+| TST-12 | TST-12-3 | Check that configuration object is constructed correctly when getting configuration parameters from environment variables | Test executed successfully |
+| TST-13 | TST-13-1 | Compile Inception V3 as static library for CPU, provide it and the original model with same correct input data, then compare the result from original model with the result from compiled artifact | Test executed successfully, results are comparable |
+| TST-13 | TST-13-2 | Compile Inception V3 as shared library for CPU, provide it and the original model with same correct input data, then compare the result from original model with the result from compiled artifact | Test executed successfully, results are comparable |
+| TST-13 | TST-13-3 | Compile Inception V3 as static library for GPU, provide it and the original model with same correct input data, then compare the result from original model with the result from compiled artifact | Test executed successfully, results are comparable |
+| TST-13 | TST-13-4 | Compile Inception V3 as shared library for GPU, provide it and the original model with same correct input data, then compare the result from original model with the result from compiled artifact | Test executed successfully, results are comparable |
+| TST-13 | TST-13-5 | Compile MobileNet as static library for CPU, provide it and the original model with same correct input data, then compare the result from original model with the result from compiled artifact | Test executed successfully, results are comparable |
+| TST-13 | TST-13-6 | Compile MobileNet as shared library for CPU, provide it and the original model with same correct input data, then compare the result from original model with the result from compiled artifact | Test executed successfully, results are comparable |
+| TST-13 | TST-13-7 | Compile MobileNet as static library for GPU, provide it and the original model with same correct input data, then compare the result from original model with the result from compiled artifact | Test executed successfully, results are comparable |
+| TST-13 | TST-13-8 | Compile MobileNet as shared library for GPU, provide it and the original model with same correct input data, then compare the result from original model with the result from compiled artifact | Test executed successfully, results are comparable |
+| TST-14 | TST-14-1 | Provide compiled Inception V3 artifact with invalid input, check that no unexpected termination occurs | Test executed successfully |
+| TST-14 | TST-14-2 | Provide compiled Inception V3 artifact with invalid input, check that an error message is provided | Test executed successfully |
+| TST-14 | TST-14-3 | Provide compiled MobileNet artifact with invalid input, check that no unexpected termination occurs | Test executed successfully |
+| TST-14 | TST-14-4 | Provide compiled MobileNet artifact with invalid input, check that an error message is provided | Test executed successfully |
+| TST-15 | TST-15-1 | Check that the OS used during test environment build is Linux-based | Test executed successfully |
+| TST-16 | TST-16-1 | Compile a valid NN model, then check that C/C++ header corresponding to compiled artifact exists | Test executed successfully |
+| TST-16 | TST-16-2 | Compile a valid NN model, then if C/C++ header corresponding to compiled artifact exists, verify its validity | Test executed successfully |
+
+Table 5-1. System Test case
diff --git a/docs/nncc/project_guide.md b/docs/nncc/project_guide.md
new file mode 100644
index 000000000..af6a5acfd
--- /dev/null
+++ b/docs/nncc/project_guide.md
@@ -0,0 +1,27 @@
+### How to create your own project
+_nncc_ aims to make it easy to develop optimized, retargetable NN compilers. Anyone or any team interested in _nncc_ can create a new incubating project.
+
+#### Subject
+The subject should be related to an NN (Neural Network) compiler. Some examples are below, but subjects are not limited to these:
+- NN IR(Intermediate Representation)
+- Extended frontend and backend
+- High-performance (model optimization, memory optimization, scheduling, etc.)
+- Tools (verification, benchmark, visualization, etc.)
+- Tutorial, testbed
+
+#### How to propose
+There is no formal proposal process. Anyone can submit an issue or a PR as a starting point of a proposal. It would be helpful (though not mandatory) for the submission to include documents or descriptions containing the following, to share your idea and concept and attract new contributors to your project:
+- Overview, goal or architecture description to explain your project
+- How-to guide including building and running your programs
+
+#### Directory to use
+- A directory under `compiler/`, which starts with your project name.
+
+#### Requirement
+- A project should follow the formal review process that _nncc_ is currently using (see [How to create a Pull Request in the contribution guide](contribution_guide.md#how-to-create-a-pull-request)).
+
+#### How to enable format checker
+- Create a `.FORMATCHECKED` file in your project directory so that the format checker checks the source code of the directory and its subdirectories, as shown below.
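+
+  A minimal sketch, assuming a hypothetical project directory `compiler/my_project`:
+
+  ```sh
+  # Opt the directory into format checking; the file content does not matter.
+  touch compiler/my_project/.FORMATCHECKED
+  ```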
+
+#### How to contribute
+Anyone who wants to contribute can create and submit PRs and issues following [nncc contribution_guide](contribution_guide.md). _nncc_ always welcomes your contribution.
diff --git a/docs/nncc/roadmap.md b/docs/nncc/roadmap.md
new file mode 100644
index 000000000..d2227e8be
--- /dev/null
+++ b/docs/nncc/roadmap.md
@@ -0,0 +1,6 @@
+## 2018
+
+In 2018, _nncc_ will provide Caffe/TensorFlow Lite frontends and ARM CPU/GPU backends built on top of
+well-specified common (re-targetable) intermediate representation (IR) which is expressive enough to
+encode Inception(v3) and MobileNet, and is flexible enough to support next-gen H/W architectures, such
+as DSP or NPU.
diff --git a/docs/nncc/v1.0.0/getting_started.md b/docs/nncc/v1.0.0/getting_started.md
new file mode 100644
index 000000000..ee8014042
--- /dev/null
+++ b/docs/nncc/v1.0.0/getting_started.md
@@ -0,0 +1,59 @@
+# Getting Started
+
+## Environments
+
+Currently, Ubuntu 16.04 is officially supported as the development environment.
+Other environments may work, but they have not been confirmed.
+
+## How to compile your own model
+
+### What should we prepare
+
+- Tensorflow model file (`.pb` file)
+  - The TensorFlow model file should be frozen. [[How to freeze?]](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py) A freezing sketch follows this list.
+ - Only inference operations are supported. Training operations are not supported yet.
+ - Quantization is not yet supported.
+ - `device` attribute should not have `GPU` value.
+- Model information file (`.info` file)
+  - The `.info` file should include four things for each node:
+    - whether the node is an input or an output
+    - name of the input/output node
+    - type of the input/output node
+    - shape of the input/output node
+ - Example format is written below.
+ ```
+ # input/output, node_name, node_type, node_shape
+
+ input, input:0, TF_FLOAT, [1, 299, 299, 3]
+ output, InceptionV3/Predictions/Reshape_1:0, TF_FLOAT, [1, 1001]
+ ```
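+
+As referenced in the list above, a frozen graph can be produced with TensorFlow's `freeze_graph` tool. A minimal sketch for a TF 1.x checkpoint (all paths and the output node name are illustrative):
+
+```sh
+# Merge checkpoint variables into the GraphDef and keep only the
+# subgraph needed for the listed output node.
+python -m tensorflow.python.tools.freeze_graph \
+  --input_graph=graph.pbtxt \
+  --input_checkpoint=model.ckpt \
+  --output_node_names=InceptionV3/Predictions/Reshape_1 \
+  --output_graph=model.pb
+```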
+
+### How to compile
+
+1. Generate the `nnpkg` using the `.pb` and `.info` files.
+ ```sh
+ tf2nnpkg --graphdef <model.pb> --info <model.info> -o <path/to/generate>
+ ```
+
+1. Check if all files are generated correctly.
+   - The directory name of the `nnpkg` is the prefix of the `.pb` file name.
+     - For example, for a `model.pb` file, the directory name will be `model`.
+ ```
+ path/to/generate
+ └ model
+ ├ model.circle
+ └ metadata
+ └ MANIFEST
+ ```
+
+1. Check if `MANIFEST` contents are correct.
+ ```sh
+ $ cat path/to/generate/model/metadata/MANIFEST
+ {
+ "major-version" : "1",
+ "minor-version" : "0",
+ "patch-version" : "0",
+ "models" : [ "model.circle" ],
+ "model-types" : [ "circle" ]
+ }
+ ```
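+
+As a quick sanity check (a sketch; the paths assume the layout shown above), you can verify the package layout and that `MANIFEST` is valid JSON:
+
+```sh
+# Report missing files, then validate the MANIFEST JSON.
+test -f path/to/generate/model/model.circle || echo "missing model.circle"
+test -f path/to/generate/model/metadata/MANIFEST || echo "missing MANIFEST"
+python3 -m json.tool path/to/generate/model/metadata/MANIFEST > /dev/null \
+  && echo "MANIFEST is valid JSON"
+```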
diff --git a/docs/nncc/v1.0.0/operation-list.md b/docs/nncc/v1.0.0/operation-list.md
new file mode 100644
index 000000000..9a43eb518
--- /dev/null
+++ b/docs/nncc/v1.0.0/operation-list.md
@@ -0,0 +1,34 @@
+# List of TensorFlow Operations Supported by nncc
+
+The list of TensorFlow operations supported by nncc is as follows:
+
+**Notice: There may be some restrictions on the support of each operation. Details will be updated soon.**
+
+- Add
+- AvgPool
+- BiasAdd
+- ConcatV2
+- Const
+- Conv2D
+- Conv2DBackpropInput
+- DepthwiseConv2dNative
+- FusedBatchNorm
+- Identity
+- MaxPool
+- Mean
+- Mul
+- Pad
+- Placeholder
+- RealDiv
+- Relu
+- Relu6
+- Reshape
+- Rsqrt
+- Shape
+- Softmax
+- Sqrt
+- SquaredDifference
+- Squeeze
+- StopGradient
+- Sub
+- Tanh
diff --git a/docs/nncc/v1.0.0/tutorial.md b/docs/nncc/v1.0.0/tutorial.md
new file mode 100644
index 000000000..9d1f97e67
--- /dev/null
+++ b/docs/nncc/v1.0.0/tutorial.md
@@ -0,0 +1,49 @@
+# Tutorial
+
+Let's compile the Inception_v3 model and make an nnpackage!
+
+## Prepare inception_v3 files
+
+1. Download the pre-trained `inception_v3.pb` model file.
+ ```sh
+ $ wget https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/inception_v3_2018_04_27.tgz
+ $ tar -xvf inception_v3_2018_04_27.tgz
+ ```
+1. Create the model information file `inception_v3.info`.
+ ```sh
+ $ cat > inception_v3.info << "END"
+ input, input:0, TF_FLOAT, [1, 299, 299, 3]
+ output, InceptionV3/Predictions/Reshape_1:0, TF_FLOAT, [1, 1001]
+ END
+ ```
+
+## Let's compile inception_v3
+
+1. Generate an `nnpkg`. In this tutorial, we generate it into the current directory.
+ ```sh
+ tf2nnpkg --use-tf2circle \
+ --graphdef inception_v3.pb \
+ --info inception_v3.info \
+ -o .
+ ```
+
+## Check whether the compilation succeeded
+
+- Check that all files were generated correctly.
+ ```
+ inception_v3
+ ├ inception_v3.circle
+ └ metadata
+ └ MANIFEST
+ ```
+- Check that the `MANIFEST` contents are correct.
+ ```sh
+ $ cat inception_v3/metadata/MANIFEST
+ {
+ "major-version" : "1",
+ "minor-version" : "0",
+ "patch-version" : "0",
+ "models" : [ "inception_v3.circle" ],
+ "model-types" : [ "circle" ]
+ }
+ ```
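+
+If a tree viewer is not at hand, a plain `find` should list the same two files shown above:
+
+```sh
+find inception_v3 -type f
+```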
diff --git a/docs/nncc/v1.1.0/nncc_in_tizen_studio.md b/docs/nncc/v1.1.0/nncc_in_tizen_studio.md
new file mode 100644
index 000000000..d0f89a49b
--- /dev/null
+++ b/docs/nncc/v1.1.0/nncc_in_tizen_studio.md
@@ -0,0 +1,52 @@
+# nncc for Tizen Studio Plugin
+
+## Environments
+
+- Windows 10
+
+## How to install nncc in Tizen Studio
+
+### Things to prepare
+
+- Tizen Studio with IDE
+- Tizen Studio Package Manager
+ - Will be automatically installed when Tizen Studio is installed
+- Firewall Registration
+ - To add a repository in Package Manager, firewall registration must be completed in advance.
+ - IP Address : 107.110.2.162
+ - Service Port : 80(TCP)
+
+### Installation of SDK
+
+1. Launch the Package Manager of Tizen Studio.
+1. Click the cogwheel icon at the top right.
+1. Click `Extension SDK`.
+1. Click the `+` button.
+1. Enter `http://107.110.2.162/packages/ai_tool_ext/` as `Repository`, and any name as `Name`.
+1. Click `OK`, and then click `OK` again. A refresh will run.
+1. On the `Extension SDK` tab, click `install` next to `nnas`.
+
+## Tutorial
+Let's create an nnpackage in Tizen Studio!
+
+1. Enter [File] - [New] - [Tizen Project].
+1. Select `Sample` and click `Next`.
+1. Select `Mobile` with any version and click `Next`.
+1. Select `Web Application` and click `Next`.
+1. Select `Application` - `App Callee` and click `Next`.
+1. Enter `AppCallee` as the `Project name` and click `Finish`. (The default project name is `AppCallee`.)
+1. After the `AppCallee` project is created, select `AppCallee` in the Project Explorer.
+1. Click `AI extension` (AI chip icon) at the top.
+1. Enter the `.pb` file path in `Model File` and the `.info` file path in `info file`.
+ - For information about `.pb` and `.info` files, please refer to [Getting Started](../v1.0.0/getting_started.md#what-should-we-prepare)
+1. Click `OK`. The circle file will then be generated.
+1. Check whether the nnpackage was created in the `AppCallee\res\shared` folder.
+ - Assuming `model.pb` and `model.info` were used:
+ ```
+ AppCallee\res\shared
+ └ model
+ ├ model.circle
+ └ metadata
+ └ MANIFEST
+ ```
\ No newline at end of file
diff --git a/docs/nncc/v1.1.0/nncc_in_visual_studio.md b/docs/nncc/v1.1.0/nncc_in_visual_studio.md
new file mode 100644
index 000000000..bc9e59fa9
--- /dev/null
+++ b/docs/nncc/v1.1.0/nncc_in_visual_studio.md
@@ -0,0 +1,61 @@
+# nncc for Visual Studio Tizen Extension
+
+## Environments
+
+- Windows 10
+
+## How to install nncc in Visual Studio
+
+### Things to prepare
+
+- Visual Studio 2019 for Windows
+ - Version Status
+ - Community version : Not available yet
+ - Professional version : Available
+ - Enterprise version : Available
+ - Needed Workload
+ - .NET Desktop Development
+ - If the above workload is not installed, please install it using the Visual Studio Installer.
+ - For versions earlier than 2019, some details may differ:
+ - Express version : Not available
+ - Other versions : Not confirmed
+ - Refer to https://developer.tizen.org/development/visual-studio-tools-tizen/installing-visual-studio-tools-tizen
+- Tizen Baseline SDK
+ - Install `nnas` using the Package Manager. For details, [click here](nncc_in_tizen_studio.md).
+
+### Installation
+
+1. Download `VisualStudioToolsForTizen_2019AI_3.1.0116.1.vsix` from the release page.
+1. Execute the `vsix` file.
+ - Do not run Visual Studio during this step; if it is running, the installer will wait indefinitely.
+1. Open Visual Studio and click `Continue without code`.
+1. Enter [Tools] - [NuGet Package Manager] - [Package Manager Settings] - [NuGet Package Manager - Package Sources]
+1. Click the green `+` button to add a new package source.
+1. Set the fields as follows, then click `Update`.
+ - `Name`: enter `Tizen.NET.SDK`
+ - `Source`: enter `https://tizen.myget.org/F/dotnet/api/v3/index.json`
+1. <b>Only if</b> `nuget.org` is not listed in `Available package sources`, follow these three steps:
+ - Click the green `+` button
+ - Set `Name` to `nuget.org` and `Source` to `https://api.nuget.org/v3/index.json`
+ - Click `Update`
+1. Click `OK`.
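+
+As an alternative to the GUI steps above, the same package sources can typically be registered with the `nuget` command-line client. This is only a sketch, assuming `nuget.exe` is on your `PATH`; it is not part of the official instructions:
+
+```sh
+nuget sources Add -Name "Tizen.NET.SDK" -Source "https://tizen.myget.org/F/dotnet/api/v3/index.json"
+nuget sources Add -Name "nuget.org" -Source "https://api.nuget.org/v3/index.json"
+```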
+
+## Tutorial
+Let's create an nnpackage in Visual Studio!
+
+1. Open Visual Studio.
+1. Enter [File] - [New] - [Project].
+1. Select `AI App Project` and click `Next`.
+1. Click `Create`. (Default project name is `AIAppTemplate`)
+1. A dialog pops up. Enter the paths of your `model.pb` and `model.info` into the dialog.
+ - In this version, the model file and info file <b>must be</b> named `model.pb` and `model.info`.
+ - Detailed information about the `.pb` file and `.info` file is in [getting_started](../v1.0.0/getting_started.md#what-should-we-prepare)
+1. Open `AIAppTemplate_App.cs` in `AIAppTemplate` and build it.
+1. If the build succeeds, the nnpackage will be found in the `AIAppTemplate\res\shared` folder.
+ ```
+ AIAppTemplate\res\shared
+ └ model
+ ├ model.circle
+ └ metadata
+ └ MANIFEST
+ ```
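+
+From a Windows command prompt, the resulting layout can be confirmed with a plain `dir` listing (a sketch; the path is the tutorial's default project location):
+
+```sh
+dir /s /b AIAppTemplate\res\shared
+```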