Diffstat (limited to 'docs')
 docs/HowToContribute.md                         |   24
 docs/UseDoxygen.md                              |   36
 docs/doxygen/Doxyfile                           | 2500
 docs/fig/compiler_flow.png                      | Bin 0 -> 56456 bytes
 docs/fig/nnfw_compiler_structure.png            | Bin 0 -> 75343 bytes
 docs/fig/nnfw_compiler_structure.pptx           | Bin 0 -> 40532 bytes
 docs/fig/nnfw_components.png                    | Bin 0 -> 82620 bytes
 docs/fig/nnfw_components.pptx                   | Bin 0 -> 46596 bytes
 docs/fig/nnfw_nativeapi_flow.png                | Bin 0 -> 105745 bytes
 docs/fig/nnfw_nativeapi_flow.pptx               | Bin 0 -> 51156 bytes
 docs/fig/nnfw_nnapi_flow.png                    | Bin 0 -> 52314 bytes
 docs/fig/nnfw_nnapi_flow.pptx                   | Bin 0 -> 45988 bytes
 docs/fig/nnfw_runtime_behavior.png              | Bin 0 -> 51473 bytes
 docs/fig/nnfw_runtime_behavior.pptx             | Bin 0 -> 45204 bytes
 docs/fig/nnfw_runtime_structure.png             | Bin 0 -> 64652 bytes
 docs/fig/nnfw_runtime_structure.pptx            | Bin 0 -> 41044 bytes
 docs/fig/runtime_nativeapi_flow.png             | Bin 0 -> 63638 bytes
 docs/nncc/README.md                             |   56
 docs/nncc/design.md                             |   10
 docs/nncc/getting_started.md                    |   73
 docs/nncc/images/nncc_components.png            | Bin 0 -> 45359 bytes
 docs/nncc/images/nncc_idef0_a0.png              | Bin 0 -> 50434 bytes
 docs/nncc/images/nncc_idef0_a1.png              | Bin 0 -> 86576 bytes
 docs/nncc/images/nncc_idef0_a12.png             | Bin 0 -> 42778 bytes
 docs/nncc/project/detailed_level_design.md      |  329
 docs/nncc/project/development_document.md       |  257
 docs/nncc/project/high_level_design.md          |  457
 docs/nncc/project/requirements_specification.md |  272
 docs/nncc/project/test_plan.md                  |  442
 docs/nncc/project_guide.md                      |   27
 docs/nncc/roadmap.md                            |    6
 docs/nncc/v1.0.0/getting_started.md             |   59
 docs/nncc/v1.0.0/operation-list.md              |   34
 docs/nncc/v1.0.0/tutorial.md                    |   49
 docs/nncc/v1.1.0/nncc_in_tizen_studio.md        |   52
 docs/nncc/v1.1.0/nncc_in_visual_studio.md       |   61
 docs/nnfw/2018/fig/nnfw_architecture.png (renamed from docs/fig/nnfw_architecture.png) | Bin 28876 -> 28876 bytes
 docs/nnfw/2018/fig/nnfw_architecture.pptx (renamed from docs/fig/nnfw_architecture.pptx) | Bin 72036 -> 72036 bytes
 docs/nnfw/2018/roadmap.md (renamed from docs/roadmap.md) | 0
 docs/nnfw/HowToImplementOperatorKernel.md (renamed from docs/HowToImplementOperatorKernel.md) | 0
 docs/nnfw/fig/nnfw_architecture.png             | Bin 0 -> 280284 bytes
 docs/nnfw/fig/nnfw_architecture.pptx            | Bin 0 -> 45709 bytes
 docs/nnfw/fig/nnfw_behavior.png (renamed from docs/fig/nnfw_behavior.png) | Bin 14254 -> 14254 bytes
 docs/nnfw/fig/nnfw_behavior.pptx (renamed from docs/fig/nnfw_behavior.pptx) | Bin 59844 -> 59844 bytes
 docs/nnfw/howto.md (renamed from docs/howto.md) |    4
 docs/nnfw/howto/BuildTFfromSource.md (renamed from docs/howto/BuildTFfromSource.md) | 0
 docs/nnfw/howto/CrossBuildForAarch64.md (renamed from docs/howto/CrossBuildForAarch64.md) | 24
 docs/nnfw/howto/CrossBuildForAndroid.md         |   52
 docs/nnfw/howto/CrossBuildForArm.md (renamed from docs/howto/CrossBuildForArm.md) | 45
 docs/nnfw/howto/HowToAddUnittest.md (renamed from docs/howto/HowToAddUnittest.md) | 0
 docs/nnfw/howto/HowToRunNnpackge.md             |   75
 docs/nnfw/howto/HowToTestManualy.md             |   62
 docs/nnfw/howto/HowToUseDockerImage.md (renamed from docs/howto/HowToUseDockerImage.md) | 60
 docs/nnfw/howto/HowToUseNNFWAPI.md              |   63
 docs/nnfw/howto/HowtoMakeSampleAppOnNnfw.md     |  132
 docs/nnfw/howto/RemoteDebuggingForVSCode.md     |  147
 docs/nnfw/howto/device/xu3-dip.png (renamed from docs/howto/device/xu3-dip.png) | Bin 262925 -> 262925 bytes
 docs/nnfw/howto/device/xu3_tizen.md             |  140
 docs/nnfw/howto/device/xu3_ubuntu.md (renamed from docs/howto/device/xu3_ubuntu.md) | 0
 docs/nnfw/howto/device/xu4_tizen.md (renamed from docs/howto/device/xu4_tizen.md) | 99
 docs/nnfw/howto/device/xu4_ubuntu.md (renamed from docs/howto/device/xu4_ubuntu.md) | 0
 docs/nnfw/op_list.md                            |   71
 docs/nnfw/roadmap.md                            |   76
 docs/nnfw/tests/Convolution_manual_3x3.xlsx (renamed from docs/tests/Convolution_manual_3x3.xlsx) | Bin 19844 -> 19844 bytes
 docs/nnfw/tests/Softmax_manual.xlsx (renamed from docs/tests/Softmax_manual.xlsx) | Bin 15940 -> 15940 bytes
 docs/project/2018_high_level_design.md          |   79
 docs/project/2018_requirement_specification.md  |  113
 docs/release/release_note_1.0.0.md              |   65
 docs/release/release_note_1.1.0.md              |   40
 docs/workgroups.md                              |   19
 70 files changed, 3259 insertions, 2851 deletions
diff --git a/docs/HowToContribute.md b/docs/HowToContribute.md
index e62666998..c6f89c3cf 100644
--- a/docs/HowToContribute.md
+++ b/docs/HowToContribute.md
@@ -19,8 +19,8 @@ This section explains the steps to create a pull request (PR).
1. Create an issue
- Maintainers will accept your contribution only when it is well aligned with the [roadmap and
- design principles](./roadmap.md) of _nnfw_. So, it is optional, but recommended for contributors
+ Maintainers will accept your contribution only when it is well aligned with the roadmap and
+ design principles of [_nnfw_](./nnfw/roadmap.md) and [_nncc_](./nncc/roadmap.md). So, it is optional, but recommended for contributors
to create an issue and have a discussion with maintainers before writing code.
1. Create a draft PR
@@ -53,10 +53,16 @@ This section explains the steps to create a pull request (PR).
1. Request review
- Please assign reviewers if you need review from them. Maintainers will honor your review request,
- and accept your pull request only when all the reviewer approve your pull request. Note that this
- does **NOT** mean that you should assign reviewers. Maintainers (or reviewers) will review your
- pull request even without explicit review request.
+ It is recommended to assign reviewers yourself. Maintainers will honor your review request,
+ and accept your pull request only when
+
+ - Approved by 1+ reviewers
+ - 0 rejections (Request Changes)
+ - 0 pending review requests
+ - All reviewers in the list have approved your pull request
+
+ You can add/remove pending review requests in the middle of the review process. Maintainers
+ (or reviewers) may review your pull request even without an explicit review request.
1. Update per feedback
@@ -64,9 +70,3 @@ This section explains the steps to create a pull request (PR).
your pull request upon such feedbacks. These update commits will be squashed into the first
commit of your pull request later. Please do **NOT** include a sign-off message or write a full
description for update commits.
-
-
-# Note
-
-This document is originated from the [contribution guide in
-nncc](https://github.sec.samsung.net/STAR/nncc/blob/master/doc/contribution_guide.md).
diff --git a/docs/UseDoxygen.md b/docs/UseDoxygen.md
new file mode 100644
index 000000000..1b016c0ec
--- /dev/null
+++ b/docs/UseDoxygen.md
@@ -0,0 +1,36 @@
+# How to generate documentation from source code using doxygen
+
+## Install doxygen
+
+If you want to use doxygen to generate documentation on Ubuntu, install the package first:
+
+```
+$ sudo apt install doxygen
+```
+
+## Generate documentation
+
+### Pre-defined configuration
+
+You can find the pre-defined configuration at `infra/doxygen/Doxyfile`.
+
+### Option 1: Use pre-defined configuration
+
+You can use the pre-defined configuration directly from the nnas root path:
+
+```
+<nnas-root-path>$ doxygen infra/doxygen/Doxyfile
+```
+
+The generated HTML documentation is in `doxygen/html`.
+
+### Option 2: Use nnas command (recommended)
+
+You can use the nnas `doxygen` command:
+
+```
+$ <nnas-root-path>/nnas doxygen
+```
+
+The generated HTML documentation is in your workspace directory: `<NNAS_WORKSPACE>/doxygen/html`.
+The default workspace directory is `build`.
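
For orientation, a pre-defined configuration like the one `UseDoxygen.md` references is an ordinary Doxyfile. A minimal sketch is below; the tag values are illustrative only, drawn from the old `docs/doxygen/Doxyfile` removed later in this commit, and the actual `infra/doxygen/Doxyfile` may set them differently:

```
# Minimal Doxyfile sketch (illustrative values, not the project's real config)
PROJECT_NAME     = nnfw
OUTPUT_DIRECTORY = doxygen    # HTML output lands in doxygen/html
EXTRACT_ALL      = YES        # document entities even without doc comments
RECURSIVE        = YES        # scan source directories recursively
GENERATE_HTML    = YES
GENERATE_LATEX   = NO
```

Running `doxygen <config-file>` from the repository root reads such a file and writes the HTML tree under `OUTPUT_DIRECTORY`.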
diff --git a/docs/doxygen/Doxyfile b/docs/doxygen/Doxyfile
deleted file mode 100644
index 632282770..000000000
--- a/docs/doxygen/Doxyfile
+++ /dev/null
@@ -1,2500 +0,0 @@
-# Doxyfile 1.8.13
-
-# This file describes the settings to be used by the documentation system
-# doxygen (www.doxygen.org) for a project.
-#
-# All text after a double hash (##) is considered a comment and is placed in
-# front of the TAG it is preceding.
-#
-# All text after a single hash (#) is considered a comment and will be ignored.
-# The format is:
-# TAG = value [value, ...]
-# For lists, items can also be appended using:
-# TAG += value [value, ...]
-# Values that contain spaces should be placed between quotes (\" \").
-
-#---------------------------------------------------------------------------
-# Project related configuration options
-#---------------------------------------------------------------------------
-
-# This tag specifies the encoding used for all characters in the config file
-# that follow. The default is UTF-8 which is also the encoding used for all text
-# before the first occurrence of this tag. Doxygen uses libiconv (or the iconv
-# built into libc) for the transcoding. See http://www.gnu.org/software/libiconv
-# for the list of possible encodings.
-# The default value is: UTF-8.
-
-DOXYFILE_ENCODING = UTF-8
-
-# The PROJECT_NAME tag is a single word (or a sequence of words surrounded by
-# double-quotes, unless you are using Doxywizard) that should identify the
-# project for which the documentation is generated. This name is used in the
-# title of most generated pages and in a few other places.
-# The default value is: My Project.
-
-PROJECT_NAME = nnfw
-
-# The PROJECT_NUMBER tag can be used to enter a project or revision number. This
-# could be handy for archiving the generated documentation or if some version
-# control system is used.
-
-PROJECT_NUMBER =
-
-# Using the PROJECT_BRIEF tag one can provide an optional one line description
-# for a project that appears at the top of each page and should give viewer a
-# quick idea about the purpose of the project. Keep the description short.
-
-PROJECT_BRIEF =
-
-# With the PROJECT_LOGO tag one can specify a logo or an icon that is included
-# in the documentation. The maximum height of the logo should not exceed 55
-# pixels and the maximum width should not exceed 200 pixels. Doxygen will copy
-# the logo to the output directory.
-
-PROJECT_LOGO =
-
-# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path
-# into which the generated documentation will be written. If a relative path is
-# entered, it will be relative to the location where doxygen was started. If
-# left blank the current directory will be used.
-
-OUTPUT_DIRECTORY =
-
-# If the CREATE_SUBDIRS tag is set to YES then doxygen will create 4096 sub-
-# directories (in 2 levels) under the output directory of each output format and
-# will distribute the generated files over these directories. Enabling this
-# option can be useful when feeding doxygen a huge amount of source files, where
-# putting all generated files in the same directory would otherwise causes
-# performance problems for the file system.
-# The default value is: NO.
-
-CREATE_SUBDIRS = NO
-
-# If the ALLOW_UNICODE_NAMES tag is set to YES, doxygen will allow non-ASCII
-# characters to appear in the names of generated files. If set to NO, non-ASCII
-# characters will be escaped, for example _xE3_x81_x84 will be used for Unicode
-# U+3044.
-# The default value is: NO.
-
-ALLOW_UNICODE_NAMES = NO
-
-# The OUTPUT_LANGUAGE tag is used to specify the language in which all
-# documentation generated by doxygen is written. Doxygen will use this
-# information to generate all constant output in the proper language.
-# Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese,
-# Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States),
-# Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian,
-# Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages),
-# Korean, Korean-en (Korean with English messages), Latvian, Lithuanian,
-# Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian,
-# Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish,
-# Ukrainian and Vietnamese.
-# The default value is: English.
-
-OUTPUT_LANGUAGE = English
-
-# If the BRIEF_MEMBER_DESC tag is set to YES, doxygen will include brief member
-# descriptions after the members that are listed in the file and class
-# documentation (similar to Javadoc). Set to NO to disable this.
-# The default value is: YES.
-
-BRIEF_MEMBER_DESC = YES
-
-# If the REPEAT_BRIEF tag is set to YES, doxygen will prepend the brief
-# description of a member or function before the detailed description
-#
-# Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the
-# brief descriptions will be completely suppressed.
-# The default value is: YES.
-
-REPEAT_BRIEF = YES
-
-# This tag implements a quasi-intelligent brief description abbreviator that is
-# used to form the text in various listings. Each string in this list, if found
-# as the leading text of the brief description, will be stripped from the text
-# and the result, after processing the whole list, is used as the annotated
-# text. Otherwise, the brief description is used as-is. If left blank, the
-# following values are used ($name is automatically replaced with the name of
-# the entity):The $name class, The $name widget, The $name file, is, provides,
-# specifies, contains, represents, a, an and the.
-
-ABBREVIATE_BRIEF = "The $name class" \
- "The $name widget" \
- "The $name file" \
- is \
- provides \
- specifies \
- contains \
- represents \
- a \
- an \
- the
-
-# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then
-# doxygen will generate a detailed section even if there is only a brief
-# description.
-# The default value is: NO.
-
-ALWAYS_DETAILED_SEC = NO
-
-# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all
-# inherited members of a class in the documentation of that class as if those
-# members were ordinary class members. Constructors, destructors and assignment
-# operators of the base classes will not be shown.
-# The default value is: NO.
-
-INLINE_INHERITED_MEMB = NO
-
-# If the FULL_PATH_NAMES tag is set to YES, doxygen will prepend the full path
-# before files name in the file list and in the header files. If set to NO the
-# shortest path that makes the file name unique will be used
-# The default value is: YES.
-
-FULL_PATH_NAMES = YES
-
-# The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path.
-# Stripping is only done if one of the specified strings matches the left-hand
-# part of the path. The tag can be used to show relative paths in the file list.
-# If left blank the directory from which doxygen is run is used as the path to
-# strip.
-#
-# Note that you can specify absolute paths here, but also relative paths, which
-# will be relative from the directory where doxygen is started.
-# This tag requires that the tag FULL_PATH_NAMES is set to YES.
-
-STRIP_FROM_PATH = ../../../nnfw
-
-# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the
-# path mentioned in the documentation of a class, which tells the reader which
-# header file to include in order to use a class. If left blank only the name of
-# the header file containing the class definition is used. Otherwise one should
-# specify the list of include paths that are normally passed to the compiler
-# using the -I flag.
-
-STRIP_FROM_INC_PATH =
-
-# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but
-# less readable) file names. This can be useful is your file systems doesn't
-# support long names like on DOS, Mac, or CD-ROM.
-# The default value is: NO.
-
-SHORT_NAMES = NO
-
-# If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the
-# first line (until the first dot) of a Javadoc-style comment as the brief
-# description. If set to NO, the Javadoc-style will behave just like regular Qt-
-# style comments (thus requiring an explicit @brief command for a brief
-# description.)
-# The default value is: NO.
-
-JAVADOC_AUTOBRIEF = NO
-
-# If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first
-# line (until the first dot) of a Qt-style comment as the brief description. If
-# set to NO, the Qt-style will behave just like regular Qt-style comments (thus
-# requiring an explicit \brief command for a brief description.)
-# The default value is: NO.
-
-QT_AUTOBRIEF = NO
-
-# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a
-# multi-line C++ special comment block (i.e. a block of //! or /// comments) as
-# a brief description. This used to be the default behavior. The new default is
-# to treat a multi-line C++ comment block as a detailed description. Set this
-# tag to YES if you prefer the old behavior instead.
-#
-# Note that setting this tag to YES also means that rational rose comments are
-# not recognized any more.
-# The default value is: NO.
-
-MULTILINE_CPP_IS_BRIEF = NO
-
-# If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the
-# documentation from any documented member that it re-implements.
-# The default value is: YES.
-
-INHERIT_DOCS = YES
-
-# If the SEPARATE_MEMBER_PAGES tag is set to YES then doxygen will produce a new
-# page for each member. If set to NO, the documentation of a member will be part
-# of the file/class/namespace that contains it.
-# The default value is: NO.
-
-SEPARATE_MEMBER_PAGES = NO
-
-# The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen
-# uses this value to replace tabs by spaces in code fragments.
-# Minimum value: 1, maximum value: 16, default value: 4.
-
-TAB_SIZE = 4
-
-# This tag can be used to specify a number of aliases that act as commands in
-# the documentation. An alias has the form:
-# name=value
-# For example adding
-# "sideeffect=@par Side Effects:\n"
-# will allow you to put the command \sideeffect (or @sideeffect) in the
-# documentation, which will result in a user-defined paragraph with heading
-# "Side Effects:". You can put \n's in the value part of an alias to insert
-# newlines.
-
-ALIASES =
-
-# This tag can be used to specify a number of word-keyword mappings (TCL only).
-# A mapping has the form "name=value". For example adding "class=itcl::class"
-# will allow you to use the command class in the itcl::class meaning.
-
-TCL_SUBST =
-
-# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources
-# only. Doxygen will then generate output that is more tailored for C. For
-# instance, some of the names that are used will be different. The list of all
-# members will be omitted, etc.
-# The default value is: NO.
-
-OPTIMIZE_OUTPUT_FOR_C = NO
-
-# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or
-# Python sources only. Doxygen will then generate output that is more tailored
-# for that language. For instance, namespaces will be presented as packages,
-# qualified scopes will look different, etc.
-# The default value is: NO.
-
-OPTIMIZE_OUTPUT_JAVA = NO
-
-# Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran
-# sources. Doxygen will then generate output that is tailored for Fortran.
-# The default value is: NO.
-
-OPTIMIZE_FOR_FORTRAN = NO
-
-# Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL
-# sources. Doxygen will then generate output that is tailored for VHDL.
-# The default value is: NO.
-
-OPTIMIZE_OUTPUT_VHDL = NO
-
-# Doxygen selects the parser to use depending on the extension of the files it
-# parses. With this tag you can assign which parser to use for a given
-# extension. Doxygen has a built-in mapping, but you can override or extend it
-# using this tag. The format is ext=language, where ext is a file extension, and
-# language is one of the parsers supported by doxygen: IDL, Java, Javascript,
-# C#, C, C++, D, PHP, Objective-C, Python, Fortran (fixed format Fortran:
-# FortranFixed, free formatted Fortran: FortranFree, unknown formatted Fortran:
-# Fortran. In the later case the parser tries to guess whether the code is fixed
-# or free formatted code, this is the default for Fortran type files), VHDL. For
-# instance to make doxygen treat .inc files as Fortran files (default is PHP),
-# and .f files as C (default is Fortran), use: inc=Fortran f=C.
-#
-# Note: For files without extension you can use no_extension as a placeholder.
-#
-# Note that for custom extensions you also need to set FILE_PATTERNS otherwise
-# the files are not read by doxygen.
-
-EXTENSION_MAPPING =
-
-# If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments
-# according to the Markdown format, which allows for more readable
-# documentation. See http://daringfireball.net/projects/markdown/ for details.
-# The output of markdown processing is further processed by doxygen, so you can
-# mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in
-# case of backward compatibilities issues.
-# The default value is: YES.
-
-MARKDOWN_SUPPORT = YES
-
-# When the TOC_INCLUDE_HEADINGS tag is set to a non-zero value, all headings up
-# to that level are automatically included in the table of contents, even if
-# they do not have an id attribute.
-# Note: This feature currently applies only to Markdown headings.
-# Minimum value: 0, maximum value: 99, default value: 0.
-# This tag requires that the tag MARKDOWN_SUPPORT is set to YES.
-
-TOC_INCLUDE_HEADINGS = 0
-
-# When enabled doxygen tries to link words that correspond to documented
-# classes, or namespaces to their corresponding documentation. Such a link can
-# be prevented in individual cases by putting a % sign in front of the word or
-# globally by setting AUTOLINK_SUPPORT to NO.
-# The default value is: YES.
-
-AUTOLINK_SUPPORT = YES
-
-# If you use STL classes (i.e. std::string, std::vector, etc.) but do not want
-# to include (a tag file for) the STL sources as input, then you should set this
-# tag to YES in order to let doxygen match functions declarations and
-# definitions whose arguments contain STL classes (e.g. func(std::string);
-# versus func(std::string) {}). This also make the inheritance and collaboration
-# diagrams that involve STL classes more complete and accurate.
-# The default value is: NO.
-
-BUILTIN_STL_SUPPORT = NO
-
-# If you use Microsoft's C++/CLI language, you should set this option to YES to
-# enable parsing support.
-# The default value is: NO.
-
-CPP_CLI_SUPPORT = NO
-
-# Set the SIP_SUPPORT tag to YES if your project consists of sip (see:
-# http://www.riverbankcomputing.co.uk/software/sip/intro) sources only. Doxygen
-# will parse them like normal C++ but will assume all classes use public instead
-# of private inheritance when no explicit protection keyword is present.
-# The default value is: NO.
-
-SIP_SUPPORT = NO
-
-# For Microsoft's IDL there are propget and propput attributes to indicate
-# getter and setter methods for a property. Setting this option to YES will make
-# doxygen to replace the get and set methods by a property in the documentation.
-# This will only work if the methods are indeed getting or setting a simple
-# type. If this is not the case, or you want to show the methods anyway, you
-# should set this option to NO.
-# The default value is: YES.
-
-IDL_PROPERTY_SUPPORT = YES
-
-# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC
-# tag is set to YES then doxygen will reuse the documentation of the first
-# member in the group (if any) for the other members of the group. By default
-# all members of a group must be documented explicitly.
-# The default value is: NO.
-
-DISTRIBUTE_GROUP_DOC = NO
-
-# If one adds a struct or class to a group and this option is enabled, then also
-# any nested class or struct is added to the same group. By default this option
-# is disabled and one has to add nested compounds explicitly via \ingroup.
-# The default value is: NO.
-
-GROUP_NESTED_COMPOUNDS = NO
-
-# Set the SUBGROUPING tag to YES to allow class member groups of the same type
-# (for instance a group of public functions) to be put as a subgroup of that
-# type (e.g. under the Public Functions section). Set it to NO to prevent
-# subgrouping. Alternatively, this can be done per class using the
-# \nosubgrouping command.
-# The default value is: YES.
-
-SUBGROUPING = YES
-
-# When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions
-# are shown inside the group in which they are included (e.g. using \ingroup)
-# instead of on a separate page (for HTML and Man pages) or section (for LaTeX
-# and RTF).
-#
-# Note that this feature does not work in combination with
-# SEPARATE_MEMBER_PAGES.
-# The default value is: NO.
-
-INLINE_GROUPED_CLASSES = NO
-
-# When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions
-# with only public data fields or simple typedef fields will be shown inline in
-# the documentation of the scope in which they are defined (i.e. file,
-# namespace, or group documentation), provided this scope is documented. If set
-# to NO, structs, classes, and unions are shown on a separate page (for HTML and
-# Man pages) or section (for LaTeX and RTF).
-# The default value is: NO.
-
-INLINE_SIMPLE_STRUCTS = NO
-
-# When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or
-# enum is documented as struct, union, or enum with the name of the typedef. So
-# typedef struct TypeS {} TypeT, will appear in the documentation as a struct
-# with name TypeT. When disabled the typedef will appear as a member of a file,
-# namespace, or class. And the struct will be named TypeS. This can typically be
-# useful for C code in case the coding convention dictates that all compound
-# types are typedef'ed and only the typedef is referenced, never the tag name.
-# The default value is: NO.
-
-TYPEDEF_HIDES_STRUCT = NO
-
-# The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This
-# cache is used to resolve symbols given their name and scope. Since this can be
-# an expensive process and often the same symbol appears multiple times in the
-# code, doxygen keeps a cache of pre-resolved symbols. If the cache is too small
-# doxygen will become slower. If the cache is too large, memory is wasted. The
-# cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range
-# is 0..9, the default is 0, corresponding to a cache size of 2^16=65536
-# symbols. At the end of a run doxygen will report the cache usage and suggest
-# the optimal cache size from a speed point of view.
-# Minimum value: 0, maximum value: 9, default value: 0.
-
-LOOKUP_CACHE_SIZE = 2
-
-#---------------------------------------------------------------------------
-# Build related configuration options
-#---------------------------------------------------------------------------
-
-# If the EXTRACT_ALL tag is set to YES, doxygen will assume all entities in
-# documentation are documented, even if no documentation was available. Private
-# class members and static file members will be hidden unless the
-# EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES.
-# Note: This will also disable the warnings about undocumented members that are
-# normally produced when WARNINGS is set to YES.
-# The default value is: NO.
-
-EXTRACT_ALL = YES
-
-# If the EXTRACT_PRIVATE tag is set to YES, all private members of a class will
-# be included in the documentation.
-# The default value is: NO.
-
-EXTRACT_PRIVATE = NO
-
-# If the EXTRACT_PACKAGE tag is set to YES, all members with package or internal
-# scope will be included in the documentation.
-# The default value is: NO.
-
-EXTRACT_PACKAGE = NO
-
-# If the EXTRACT_STATIC tag is set to YES, all static members of a file will be
-# included in the documentation.
-# The default value is: NO.
-
-EXTRACT_STATIC = NO
-
-# If the EXTRACT_LOCAL_CLASSES tag is set to YES, classes (and structs) defined
-# locally in source files will be included in the documentation. If set to NO,
-# only classes defined in header files are included. Does not have any effect
-# for Java sources.
-# The default value is: YES.
-
-EXTRACT_LOCAL_CLASSES = YES
-
-# This flag is only useful for Objective-C code. If set to YES, local methods,
-# which are defined in the implementation section but not in the interface are
-# included in the documentation. If set to NO, only methods in the interface are
-# included.
-# The default value is: NO.
-
-EXTRACT_LOCAL_METHODS = NO
-
-# If this flag is set to YES, the members of anonymous namespaces will be
-# extracted and appear in the documentation as a namespace called
-# 'anonymous_namespace{file}', where file will be replaced with the base name of
-# the file that contains the anonymous namespace. By default anonymous namespace
-# are hidden.
-# The default value is: NO.
-
-EXTRACT_ANON_NSPACES = NO
-
-# If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all
-# undocumented members inside documented classes or files. If set to NO these
-# members will be included in the various overviews, but no documentation
-# section is generated. This option has no effect if EXTRACT_ALL is enabled.
-# The default value is: NO.
-
-HIDE_UNDOC_MEMBERS = NO
-
-# If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all
-# undocumented classes that are normally visible in the class hierarchy. If set
-# to NO, these classes will be included in the various overviews. This option
-# has no effect if EXTRACT_ALL is enabled.
-# The default value is: NO.
-
-HIDE_UNDOC_CLASSES = NO
-
-# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend
-# (class|struct|union) declarations. If set to NO, these declarations will be
-# included in the documentation.
-# The default value is: NO.
-
-HIDE_FRIEND_COMPOUNDS = NO
-
-# If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any
-# documentation blocks found inside the body of a function. If set to NO, these
-# blocks will be appended to the function's detailed documentation block.
-# The default value is: NO.
-
-HIDE_IN_BODY_DOCS = NO
-
-# The INTERNAL_DOCS tag determines if documentation that is typed after a
-# \internal command is included. If the tag is set to NO then the documentation
-# will be excluded. Set it to YES to include the internal documentation.
-# The default value is: NO.
-
-INTERNAL_DOCS = NO
-
-# If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file
-# names in lower-case letters. If set to YES, upper-case letters are also
-# allowed. This is useful if you have classes or files whose names only differ
-# in case and if your file system supports case sensitive file names. Windows
-# and Mac users are advised to set this option to NO.
-# The default value is: system dependent.
-
-CASE_SENSE_NAMES = NO
-
-# If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with
-# their full class and namespace scopes in the documentation. If set to YES, the
-# scope will be hidden.
-# The default value is: NO.
-
-HIDE_SCOPE_NAMES = NO
-
-# If the HIDE_COMPOUND_REFERENCE tag is set to NO (default) then doxygen will
-# append additional text to a page's title, such as Class Reference. If set to
-# YES the compound reference will be hidden.
-# The default value is: NO.
-
-HIDE_COMPOUND_REFERENCE= NO
-
-# If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of
-# the files that are included by a file in the documentation of that file.
-# The default value is: YES.
-
-SHOW_INCLUDE_FILES = YES
-
-# If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each
-# grouped member an include statement to the documentation, telling the reader
-# which file to include in order to use the member.
-# The default value is: NO.
-
-SHOW_GROUPED_MEMB_INC = NO
-
-# If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include
-# files with double quotes in the documentation rather than with sharp brackets.
-# The default value is: NO.
-
-FORCE_LOCAL_INCLUDES = NO
-
-# If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the
-# documentation for inline members.
-# The default value is: YES.
-
-INLINE_INFO = YES
-
-# If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the
-# (detailed) documentation of file and class members alphabetically by member
-# name. If set to NO, the members will appear in declaration order.
-# The default value is: YES.
-
-SORT_MEMBER_DOCS = YES
-
-# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief
-# descriptions of file, namespace and class members alphabetically by member
-# name. If set to NO, the members will appear in declaration order. Note that
-# this will also influence the order of the classes in the class list.
-# The default value is: NO.
-
-SORT_BRIEF_DOCS = NO
-
-# If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the
-# (brief and detailed) documentation of class members so that constructors and
-# destructors are listed first. If set to NO the constructors will appear in the
-# respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS.
-# Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief
-# member documentation.
-# Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting
-# detailed member documentation.
-# The default value is: NO.
-
-SORT_MEMBERS_CTORS_1ST = NO
-
-# If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy
-# of group names into alphabetical order. If set to NO the group names will
-# appear in their defined order.
-# The default value is: NO.
-
-SORT_GROUP_NAMES = NO
-
-# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by
-# fully-qualified names, including namespaces. If set to NO, the class list will
-# be sorted only by class name, not including the namespace part.
-# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES.
-# Note: This option applies only to the class list, not to the alphabetical
-# list.
-# The default value is: NO.
-
-SORT_BY_SCOPE_NAME = NO
-
-# If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper
-# type resolution of all parameters of a function it will reject a match between
-# the prototype and the implementation of a member function even if there is
-# only one candidate or it is obvious which candidate to choose by doing a
-# simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still
-# accept a match between prototype and implementation in such cases.
-# The default value is: NO.
-
-STRICT_PROTO_MATCHING = NO
-
-# The GENERATE_TODOLIST tag can be used to enable (YES) or disable (NO) the todo
-# list. This list is created by putting \todo commands in the documentation.
-# The default value is: YES.
-
-GENERATE_TODOLIST = YES
-
-# The GENERATE_TESTLIST tag can be used to enable (YES) or disable (NO) the test
-# list. This list is created by putting \test commands in the documentation.
-# The default value is: YES.
-
-GENERATE_TESTLIST = YES
-
-# The GENERATE_BUGLIST tag can be used to enable (YES) or disable (NO) the bug
-# list. This list is created by putting \bug commands in the documentation.
-# The default value is: YES.
-
-GENERATE_BUGLIST = YES
-
-# The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or disable (NO)
-# the deprecated list. This list is created by putting \deprecated commands in
-# the documentation.
-# The default value is: YES.
-
-GENERATE_DEPRECATEDLIST= YES
-
-# The ENABLED_SECTIONS tag can be used to enable conditional documentation
-# sections, marked by \if <section_label> ... \endif and \cond <section_label>
-# ... \endcond blocks.
-
-ENABLED_SECTIONS =
-
-# The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the
-# initial value of a variable or macro / define can have for it to appear in the
-# documentation. If the initializer consists of more lines than specified here
-# it will be hidden. Use a value of 0 to hide initializers completely. The
-# appearance of the value of individual variables and macros / defines can be
-# controlled using \showinitializer or \hideinitializer command in the
-# documentation regardless of this setting.
-# Minimum value: 0, maximum value: 10000, default value: 30.
-
-MAX_INITIALIZER_LINES = 30
-
-# Set the SHOW_USED_FILES tag to NO to disable the list of files generated at
-# the bottom of the documentation of classes and structs. If set to YES, the
-# list will mention the files that were used to generate the documentation.
-# The default value is: YES.
-
-SHOW_USED_FILES = YES
-
-# Set the SHOW_FILES tag to NO to disable the generation of the Files page. This
-# will remove the Files entry from the Quick Index and from the Folder Tree View
-# (if specified).
-# The default value is: YES.
-
-SHOW_FILES = YES
-
-# Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces
-# page. This will remove the Namespaces entry from the Quick Index and from the
-# Folder Tree View (if specified).
-# The default value is: YES.
-
-SHOW_NAMESPACES = YES
-
-# The FILE_VERSION_FILTER tag can be used to specify a program or script that
-# doxygen should invoke to get the current version for each file (typically from
-# the version control system). Doxygen will invoke the program by executing (via
-# popen()) the command <command> <input-file>, where <command> is the value of
-# the FILE_VERSION_FILTER tag, and <input-file> is the name of an input file
-# provided by doxygen. Whatever the program writes to standard output is used
-# as the file version. For an example see the documentation.
-
-FILE_VERSION_FILTER =
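As an illustration (not part of this configuration), a git-based version filter could be wired in as follows; the choice of `--format=%h` is an assumption, and any command with the same calling convention would work:

```
# Hypothetical example: report the abbreviated hash of the last commit
# that touched each input file as that file's version. Doxygen appends
# the input file name, so "--" makes it a git pathspec.
FILE_VERSION_FILTER = "git log -n 1 --format=%h --"
```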
-
-# The LAYOUT_FILE tag can be used to specify a layout file which will be parsed
-# by doxygen. The layout file controls the global structure of the generated
-# output files in an output format independent way. To create the layout file
-# that represents doxygen's defaults, run doxygen with the -l option. You can
-# optionally specify a file name after the option, if omitted DoxygenLayout.xml
-# will be used as the name of the layout file.
-#
-# Note that if you run doxygen from a directory containing a file called
-# DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE
-# tag is left empty.
-
-LAYOUT_FILE =
-
-# The CITE_BIB_FILES tag can be used to specify one or more bib files containing
-# the reference definitions. This must be a list of .bib files. The .bib
-# extension is automatically appended if omitted. This requires the bibtex tool
-# to be installed. See also http://en.wikipedia.org/wiki/BibTeX for more info.
-# For LaTeX the style of the bibliography can be controlled using
-# LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the
-# search path. See also \cite for info how to create references.
-
-CITE_BIB_FILES =
-
-#---------------------------------------------------------------------------
-# Configuration options related to warning and progress messages
-#---------------------------------------------------------------------------
-
-# The QUIET tag can be used to turn on/off the messages that are generated to
-# standard output by doxygen. If QUIET is set to YES this implies that the
-# messages are off.
-# The default value is: NO.
-
-QUIET = NO
-
-# The WARNINGS tag can be used to turn on/off the warning messages that are
-# generated to standard error (stderr) by doxygen. If WARNINGS is set to YES
-# this implies that the warnings are on.
-#
-# Tip: Turn warnings on while writing the documentation.
-# The default value is: YES.
-
-WARNINGS = YES
-
-# If the WARN_IF_UNDOCUMENTED tag is set to YES then doxygen will generate
-# warnings for undocumented members. If EXTRACT_ALL is set to YES then this flag
-# will automatically be disabled.
-# The default value is: YES.
-
-WARN_IF_UNDOCUMENTED = YES
-
-# If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for
-# potential errors in the documentation, such as not documenting some parameters
-# in a documented function, or documenting parameters that don't exist or using
-# markup commands wrongly.
-# The default value is: YES.
-
-WARN_IF_DOC_ERROR = YES
-
-# The WARN_NO_PARAMDOC option can be enabled to get warnings for functions that
-# are documented, but have no documentation for their parameters or return
-# value. If set to NO, doxygen will only warn about wrong or incomplete
-# parameter documentation, but not about the absence of documentation.
-# The default value is: NO.
-
-WARN_NO_PARAMDOC = NO
-
-# If the WARN_AS_ERROR tag is set to YES then doxygen will immediately stop when
-# a warning is encountered.
-# The default value is: NO.
-
-WARN_AS_ERROR = NO
-
-# The WARN_FORMAT tag determines the format of the warning messages that doxygen
-# can produce. The string should contain the $file, $line, and $text tags, which
-# will be replaced by the file and line number from which the warning originated
-# and the warning text. Optionally the format may contain $version, which will
-# be replaced by the version of the file (if it could be obtained via
-# FILE_VERSION_FILTER).
-# The default value is: $file:$line: $text.
-
-WARN_FORMAT = "$file:$line: $text"
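For instance, with the default format above, a documentation warning would be rendered roughly like the line below (the file name, line number, and message are purely illustrative):

```
src/Model.cpp:42: warning: parameter 'input' is not documented
```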
-
-# The WARN_LOGFILE tag can be used to specify a file to which warning and error
-# messages should be written. If left blank the output is written to standard
-# error (stderr).
-
-WARN_LOGFILE =
-
-#---------------------------------------------------------------------------
-# Configuration options related to the input files
-#---------------------------------------------------------------------------
-
-# The INPUT tag is used to specify the files and/or directories that contain
-# documented source files. You may enter file names like myfile.cpp or
-# directories like /usr/src/myproject. Separate the files or directories with
-# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING
-# Note: If this tag is empty the current directory is searched.
-
-INPUT = ../../../nnfw
-
-# This tag can be used to specify the character encoding of the source files
-# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
-# libiconv (or the iconv built into libc) for the transcoding. See the libiconv
-# documentation (see: http://www.gnu.org/software/libiconv) for the list of
-# possible encodings.
-# The default value is: UTF-8.
-
-INPUT_ENCODING = UTF-8
-
-# If the value of the INPUT tag contains directories, you can use the
-# FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and
-# *.h) to filter out the source-files in the directories.
-#
-# Note that for custom extensions or not directly supported extensions you also
-# need to set EXTENSION_MAPPING for the extension otherwise the files are not
-# read by doxygen.
-#
-# If left blank the following patterns are tested: *.c, *.cc, *.cxx, *.cpp,
-# *.c++, *.java, *.ii, *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h,
-# *.hh, *.hxx, *.hpp, *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc,
-# *.m, *.markdown, *.md, *.mm, *.dox, *.py, *.pyw, *.f90, *.f95, *.f03, *.f08,
-# *.f, *.for, *.tcl, *.vhd, *.vhdl, *.ucf and *.qsf.
-
-FILE_PATTERNS = *.c \
- *.cc \
- *.cxx \
- *.cpp \
- *.c++ \
- *.java \
- *.ii \
- *.ixx \
- *.ipp \
- *.i++ \
- *.inl \
- *.idl \
- *.ddl \
- *.odl \
- *.h \
- *.hh \
- *.hxx \
- *.hpp \
- *.h++ \
- *.cs \
- *.d \
- *.php \
- *.php4 \
- *.php5 \
- *.phtml \
- *.inc \
- *.m \
- *.markdown \
- *.md \
- *.mm \
- *.dox \
- *.py \
- *.pyw \
- *.f90 \
- *.f95 \
- *.f03 \
- *.f08 \
- *.f \
- *.for \
- *.tcl \
- *.vhd \
- *.vhdl \
- *.ucf \
- *.qsf
-
-# The RECURSIVE tag can be used to specify whether or not subdirectories should
-# be searched for input files as well.
-# The default value is: NO.
-
-RECURSIVE = YES
-
-# The EXCLUDE tag can be used to specify files and/or directories that should be
-# excluded from the INPUT source files. This way you can easily exclude a
-# subdirectory from a directory tree whose root is specified with the INPUT tag.
-#
-# Note that relative paths are relative to the directory from which doxygen is
-# run.
-
-EXCLUDE = ../../../nnfw/Product \
- ../../../nnfw/tools/cross/rootfs \
- ../../../nnfw/externals \
- ../../../nnfw/externals/acl \
- ../../../nnfw/externals/tensorflow \
- ../../../nnfw/tests/framework/cache \
- ../../../nnfw/runtimes/tests/neural_networks_test/generated/models \
- .caffemodel \
- .bin
-
-# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or
-# directories that are symbolic links (a Unix file system feature) are excluded
-# from the input.
-# The default value is: NO.
-
-EXCLUDE_SYMLINKS = NO
-
-# If the value of the INPUT tag contains directories, you can use the
-# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude
-# certain files from those directories.
-#
-# Note that the wildcards are matched against the file with absolute path, so to
-# exclude all test directories for example use the pattern */test/*
-
-EXCLUDE_PATTERNS =
-
-# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names
-# (namespaces, classes, functions, etc.) that should be excluded from the
-# output. The symbol name can be a fully qualified name, a word, or if the
-# wildcard * is used, a substring. Examples: ANamespace, AClass,
-# AClass::ANamespace, ANamespace::*Test
-#
-# Note that the wildcards are matched against the file with absolute path, so to
-# exclude all test directories use the pattern */test/*
-
-EXCLUDE_SYMBOLS =
-
-# The EXAMPLE_PATH tag can be used to specify one or more files or directories
-# that contain example code fragments that are included (see the \include
-# command).
-
-EXAMPLE_PATH =
-
-# If the value of the EXAMPLE_PATH tag contains directories, you can use the
-# EXAMPLE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and
-# *.h) to filter out the source-files in the directories. If left blank all
-# files are included.
-
-EXAMPLE_PATTERNS = *
-
-# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be
-# searched for input files to be used with the \include or \dontinclude commands
-# irrespective of the value of the RECURSIVE tag.
-# The default value is: NO.
-
-EXAMPLE_RECURSIVE = NO
-
-# The IMAGE_PATH tag can be used to specify one or more files or directories
-# that contain images that are to be included in the documentation (see the
-# \image command).
-
-IMAGE_PATH =
-
-# The INPUT_FILTER tag can be used to specify a program that doxygen should
-# invoke to filter for each input file. Doxygen will invoke the filter program
-# by executing (via popen()) the command:
-#
-# <filter> <input-file>
-#
-# where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the
-# name of an input file. Doxygen will then use the output that the filter
-# program writes to standard output. If FILTER_PATTERNS is specified, this tag
-# will be ignored.
-#
-# Note that the filter must not add or remove lines; it is applied before the
-# code is scanned, but not when the output code is generated. If lines are added
-# or removed, the anchors will not be placed correctly.
-#
-# Note that for custom extensions or not directly supported extensions you also
-# need to set EXTENSION_MAPPING for the extension otherwise the files are not
-# properly processed by doxygen.
-
-INPUT_FILTER =
-
-# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern
-# basis. Doxygen will compare the file name with each pattern and apply the
-# filter if there is a match. The filters are a list of the form: pattern=filter
-# (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how
-# filters are used. If the FILTER_PATTERNS tag is empty or if none of the
-# patterns match the file name, INPUT_FILTER is applied.
-#
-# Note that for custom extensions or not directly supported extensions you also
-# need to set EXTENSION_MAPPING for the extension otherwise the files are not
-# properly processed by doxygen.
-
-FILTER_PATTERNS =
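A sketch of the pattern=filter syntax this tag expects (the filter program names here are hypothetical, not scripts that exist in this repository):

```
# Run Python sources through one doc filter and Fortran sources through
# another; files matching neither pattern fall back to INPUT_FILTER.
FILTER_PATTERNS = *.py=py_doc_filter \
                  *.f90=f90_doc_filter
```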
-
-# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using
-# INPUT_FILTER) will also be used to filter the input files that are used for
-# producing the source files to browse (i.e. when SOURCE_BROWSER is set to YES).
-# The default value is: NO.
-
-FILTER_SOURCE_FILES = NO
-
-# The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file
-# pattern. A pattern will override the setting of FILTER_PATTERNS (if any) and
-# it is also possible to disable source filtering for a specific pattern using
-# *.ext= (so without naming a filter).
-# This tag requires that the tag FILTER_SOURCE_FILES is set to YES.
-
-FILTER_SOURCE_PATTERNS =
-
-# If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that
-# is part of the input, its contents will be placed on the main page
-# (index.html). This can be useful if you have a project on for instance GitHub
-# and want to reuse the introduction page also for the doxygen output.
-
-USE_MDFILE_AS_MAINPAGE = roadmap.md
-
-#---------------------------------------------------------------------------
-# Configuration options related to source browsing
-#---------------------------------------------------------------------------
-
-# If the SOURCE_BROWSER tag is set to YES then a list of source files will be
-# generated. Documented entities will be cross-referenced with these sources.
-#
-# Note: To get rid of all source code in the generated output, make sure that
-# also VERBATIM_HEADERS is set to NO.
-# The default value is: NO.
-
-SOURCE_BROWSER = YES
-
-# Setting the INLINE_SOURCES tag to YES will include the body of functions,
-# classes and enums directly into the documentation.
-# The default value is: NO.
-
-INLINE_SOURCES = NO
-
-# Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any
-# special comment blocks from generated source code fragments. Normal C, C++ and
-# Fortran comments will always remain visible.
-# The default value is: YES.
-
-STRIP_CODE_COMMENTS = YES
-
-# If the REFERENCED_BY_RELATION tag is set to YES then for each documented
-# function all documented functions referencing it will be listed.
-# The default value is: NO.
-
-REFERENCED_BY_RELATION = NO
-
-# If the REFERENCES_RELATION tag is set to YES then for each documented function
-# all documented entities called/used by that function will be listed.
-# The default value is: NO.
-
-REFERENCES_RELATION = NO
-
-# If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set
-# to YES then the hyperlinks from functions in REFERENCES_RELATION and
-# REFERENCED_BY_RELATION lists will link to the source code. Otherwise they will
-# link to the documentation.
-# The default value is: YES.
-
-REFERENCES_LINK_SOURCE = YES
-
-# If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the
-# source code will show a tooltip with additional information such as prototype,
-# brief description and links to the definition and documentation. Since this
-# will make the HTML file larger and loading of large files a bit slower, you
-# can opt to disable this feature.
-# The default value is: YES.
-# This tag requires that the tag SOURCE_BROWSER is set to YES.
-
-SOURCE_TOOLTIPS = YES
-
-# If the USE_HTAGS tag is set to YES then the references to source code will
-# point to the HTML generated by the htags(1) tool instead of doxygen built-in
-# source browser. The htags tool is part of GNU's global source tagging system
-# (see http://www.gnu.org/software/global/global.html). You will need version
-# 4.8.6 or higher.
-#
-# To use it do the following:
-# - Install the latest version of global
-# - Enable SOURCE_BROWSER and USE_HTAGS in the config file
-# - Make sure the INPUT points to the root of the source tree
-# - Run doxygen as normal
-#
-# Doxygen will invoke htags (and that will in turn invoke gtags), so these
-# tools must be available from the command line (i.e. in the search path).
-#
-# The result: instead of the source browser generated by doxygen, the links to
-# source code will now point to the output of htags.
-# The default value is: NO.
-# This tag requires that the tag SOURCE_BROWSER is set to YES.
-
-USE_HTAGS = NO
-
-# If the VERBATIM_HEADERS tag is set to YES then doxygen will generate a
-# verbatim copy of the header file for each class for which an include is
-# specified. Set to NO to disable this.
-# See also: Section \class.
-# The default value is: YES.
-
-VERBATIM_HEADERS = YES
-
-# If the CLANG_ASSISTED_PARSING tag is set to YES then doxygen will use the
-# clang parser (see: http://clang.llvm.org/) for more accurate parsing at the
-# cost of reduced performance. This can be particularly helpful with template
-# rich C++ code for which doxygen's built-in parser lacks the necessary type
-# information.
-# Note: The availability of this option depends on whether or not doxygen was
-# generated with the -Duse-libclang=ON option for CMake.
-# The default value is: NO.
-
-CLANG_ASSISTED_PARSING = NO
-
-# If clang assisted parsing is enabled you can provide the compiler with command
-# line options that you would normally use when invoking the compiler. Note that
-# the include paths will already be set by doxygen for the files and directories
-# specified with INPUT and INCLUDE_PATH.
-# This tag requires that the tag CLANG_ASSISTED_PARSING is set to YES.
-
-CLANG_OPTIONS =
-
-#---------------------------------------------------------------------------
-# Configuration options related to the alphabetical class index
-#---------------------------------------------------------------------------
-
-# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all
-# compounds will be generated. Enable this if the project contains a lot of
-# classes, structs, unions or interfaces.
-# The default value is: YES.
-
-ALPHABETICAL_INDEX = YES
-
-# The COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns in
-# which the alphabetical index list will be split.
-# Minimum value: 1, maximum value: 20, default value: 5.
-# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.
-
-COLS_IN_ALPHA_INDEX = 5
-
-# In case all classes in a project start with a common prefix, all classes will
-# be put under the same header in the alphabetical index. The IGNORE_PREFIX tag
-# can be used to specify a prefix (or a list of prefixes) that should be ignored
-# while generating the index headers.
-# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.
-
-IGNORE_PREFIX =
-
-#---------------------------------------------------------------------------
-# Configuration options related to the HTML output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_HTML tag is set to YES, doxygen will generate HTML output
-# The default value is: YES.
-
-GENERATE_HTML = YES
-
-# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a
-# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
-# it.
-# The default directory is: html.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-HTML_OUTPUT = html
-
-# The HTML_FILE_EXTENSION tag can be used to specify the file extension for each
-# generated HTML page (for example: .htm, .php, .asp).
-# The default value is: .html.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-HTML_FILE_EXTENSION = .html
-
-# The HTML_HEADER tag can be used to specify a user-defined HTML header file for
-# each generated HTML page. If the tag is left blank doxygen will generate a
-# standard header.
-#
-# To get valid HTML, the header file must include any scripts and style sheets
-# that doxygen needs; these depend on the configuration options used (e.g. the
-# setting GENERATE_TREEVIEW). It is highly recommended to start with a
-# default header using
-# doxygen -w html new_header.html new_footer.html new_stylesheet.css
-# YourConfigFile
-# and then modify the file new_header.html. See also section "Doxygen usage"
-# for information on how to generate the default header that doxygen normally
-# uses.
-# Note: The header is subject to change so you typically have to regenerate the
-# default header when upgrading to a newer version of doxygen. For a description
-# of the possible markers and block names see the documentation.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-HTML_HEADER =
-
-# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each
-# generated HTML page. If the tag is left blank doxygen will generate a standard
-# footer. See HTML_HEADER for more information on how to generate a default
-# footer and what special commands can be used inside the footer. See also
-# section "Doxygen usage" for information on how to generate the default footer
-# that doxygen normally uses.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-HTML_FOOTER =
-
-# The HTML_STYLESHEET tag can be used to specify a user-defined cascading style
-# sheet that is used by each HTML page. It can be used to fine-tune the look of
-# the HTML output. If left blank doxygen will generate a default style sheet.
-# See also section "Doxygen usage" for information on how to generate the style
-# sheet that doxygen normally uses.
-# Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as
-# it is more robust and this tag (HTML_STYLESHEET) will in the future become
-# obsolete.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-HTML_STYLESHEET =
-
-# The HTML_EXTRA_STYLESHEET tag can be used to specify additional user-defined
-# cascading style sheets that are included after the standard style sheets
-# created by doxygen. Using this option one can overrule certain style aspects.
-# This is preferred over using HTML_STYLESHEET since it does not replace the
-# standard style sheet and is therefore more robust against future updates.
-# Doxygen will copy the style sheet files to the output directory.
-# Note: The order of the extra style sheet files is of importance (e.g. the last
-# style sheet in the list overrules the setting of the previous ones in the
-# list). For an example see the documentation.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-HTML_EXTRA_STYLESHEET =
-
-# The HTML_EXTRA_FILES tag can be used to specify one or more extra images or
-# other source files which should be copied to the HTML output directory. Note
-# that these files will be copied to the base HTML output directory. Use the
-# $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these
-# files. In the HTML_STYLESHEET file, use the file name only. Also note that the
-# files will be copied as-is; there are no commands or markers available.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-HTML_EXTRA_FILES =
-
-# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen
-# will adjust the colors in the style sheet and background images according to
-# this color. Hue is specified as an angle on a colorwheel, see
-# http://en.wikipedia.org/wiki/Hue for more information. For instance the value
-# 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300
-# purple, and 360 is red again.
-# Minimum value: 0, maximum value: 359, default value: 220.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-HTML_COLORSTYLE_HUE = 220
-
-# The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors
-# in the HTML output. For a value of 0 the output will use grayscales only. A
-# value of 255 will produce the most vivid colors.
-# Minimum value: 0, maximum value: 255, default value: 100.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-HTML_COLORSTYLE_SAT = 100
-
-# The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the
-# luminance component of the colors in the HTML output. Values below 100
-# gradually make the output lighter, whereas values above 100 make the output
-# darker. The value divided by 100 is the actual gamma applied, so 80 represents
-# a gamma of 0.8, the value 220 represents a gamma of 2.2, and 100 does not
-# change the gamma.
-# Minimum value: 40, maximum value: 240, default value: 80.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-HTML_COLORSTYLE_GAMMA = 80
-
-# If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML
-# page will contain the date and time when the page was generated. Setting this
-# to YES can help to show when doxygen was last run and thus if the
-# documentation is up to date.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-HTML_TIMESTAMP = NO
-
-# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML
-# documentation will contain sections that can be hidden and shown after the
-# page has loaded.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-HTML_DYNAMIC_SECTIONS = NO
-
-# With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries
-# shown in the various tree structured indices initially; the user can expand
-# and collapse entries dynamically later on. Doxygen will expand the tree to
-# such a level that at most the specified number of entries are visible (unless
-# a fully collapsed tree already exceeds this amount). So setting the number of
-# entries to 1 will produce a fully collapsed tree by default. 0 is a special
-# value representing an infinite number of entries and will result in a fully
-# expanded tree by default.
-# Minimum value: 0, maximum value: 9999, default value: 100.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-HTML_INDEX_NUM_ENTRIES = 100
-
-# If the GENERATE_DOCSET tag is set to YES, additional index files will be
-# generated that can be used as input for Apple's Xcode 3 integrated development
-# environment (see: http://developer.apple.com/tools/xcode/), introduced with
-# OSX 10.5 (Leopard). To create a documentation set, doxygen will generate a
-# Makefile in the HTML output directory. Running make will produce the docset in
-# that directory and running make install will install the docset in
-# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at
-# startup. See http://developer.apple.com/tools/creatingdocsetswithdoxygen.html
-# for more information.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-GENERATE_DOCSET = NO
-
-# This tag determines the name of the docset feed. A documentation feed provides
-# an umbrella under which multiple documentation sets from a single provider
-# (such as a company or product suite) can be grouped.
-# The default value is: Doxygen generated docs.
-# This tag requires that the tag GENERATE_DOCSET is set to YES.
-
-DOCSET_FEEDNAME = "Doxygen generated docs"
-
-# This tag specifies a string that should uniquely identify the documentation
-# set bundle. This should be a reverse domain-name style string, e.g.
-# com.mycompany.MyDocSet. Doxygen will append .docset to the name.
-# The default value is: org.doxygen.Project.
-# This tag requires that the tag GENERATE_DOCSET is set to YES.
-
-DOCSET_BUNDLE_ID = org.doxygen.Project
-
-# The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify
-# the documentation publisher. This should be a reverse domain-name style
-# string, e.g. com.mycompany.MyDocSet.documentation.
-# The default value is: org.doxygen.Publisher.
-# This tag requires that the tag GENERATE_DOCSET is set to YES.
-
-DOCSET_PUBLISHER_ID = org.doxygen.Publisher
-
-# The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher.
-# The default value is: Publisher.
-# This tag requires that the tag GENERATE_DOCSET is set to YES.
-
-DOCSET_PUBLISHER_NAME = Publisher
-
-# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three
-# additional HTML index files: index.hhp, index.hhc, and index.hhk. The
-# index.hhp is a project file that can be read by Microsoft's HTML Help Workshop
-# (see: http://www.microsoft.com/en-us/download/details.aspx?id=21138) on
-# Windows.
-#
-# The HTML Help Workshop contains a compiler that can convert all HTML output
-# generated by doxygen into a single compiled HTML file (.chm). Compiled HTML
-# files are now used as the Windows 98 help format, and will replace the old
-# Windows help format (.hlp) on all Windows platforms in the future. Compressed
-# HTML files also contain an index, a table of contents, and you can search for
-# words in the documentation. The HTML workshop also contains a viewer for
-# compressed HTML files.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-GENERATE_HTMLHELP = NO
-
-# The CHM_FILE tag can be used to specify the file name of the resulting .chm
-# file. You can add a path in front of the file if the result should not be
-# written to the html output directory.
-# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
-
-CHM_FILE =
-
-# The HHC_LOCATION tag can be used to specify the location (absolute path
-# including file name) of the HTML help compiler (hhc.exe). If non-empty,
-# doxygen will try to run the HTML help compiler on the generated index.hhp.
-# The file has to be specified with full path.
-# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
-
-HHC_LOCATION =
-
-# The GENERATE_CHI flag controls whether a separate .chi index file is
-# generated (YES) or included in the master .chm file (NO).
-# The default value is: NO.
-# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
-
-GENERATE_CHI = NO
-
-# The CHM_INDEX_ENCODING is used to encode HtmlHelp index (hhk), content (hhc)
-# and project file content.
-# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
-
-CHM_INDEX_ENCODING =
-
-# The BINARY_TOC flag controls whether a binary table of contents is generated
-# (YES) or a normal table of contents (NO) in the .chm file. Furthermore it
-# enables the Previous and Next buttons.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
-
-BINARY_TOC = NO
-
-# The TOC_EXPAND flag can be set to YES to add extra items for group members to
-# the table of contents of the HTML help documentation and to the tree view.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
-
-TOC_EXPAND = NO
-
-# If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and
-# QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that
-# can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help
-# (.qch) of the generated HTML documentation.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-GENERATE_QHP = NO
-
-# If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify
-# the file name of the resulting .qch file. The path specified is relative to
-# the HTML output folder.
-# This tag requires that the tag GENERATE_QHP is set to YES.
-
-QCH_FILE =
-
-# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help
-# Project output. For more information please see Qt Help Project / Namespace
-# (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#namespace).
-# The default value is: org.doxygen.Project.
-# This tag requires that the tag GENERATE_QHP is set to YES.
-
-QHP_NAMESPACE = org.doxygen.Project
-
-# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt
-# Help Project output. For more information please see Qt Help Project / Virtual
-# Folders (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#virtual-
-# folders).
-# The default value is: doc.
-# This tag requires that the tag GENERATE_QHP is set to YES.
-
-QHP_VIRTUAL_FOLDER = doc
-
-# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom
-# filter to add. For more information please see Qt Help Project / Custom
-# Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-
-# filters).
-# This tag requires that the tag GENERATE_QHP is set to YES.
-
-QHP_CUST_FILTER_NAME =
-
-# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the
-# custom filter to add. For more information please see Qt Help Project / Custom
-# Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-
-# filters).
-# This tag requires that the tag GENERATE_QHP is set to YES.
-
-QHP_CUST_FILTER_ATTRS =
-
-# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this
-# project's filter section matches. Qt Help Project / Filter Attributes (see:
-# http://qt-project.org/doc/qt-4.8/qthelpproject.html#filter-attributes).
-# This tag requires that the tag GENERATE_QHP is set to YES.
-
-QHP_SECT_FILTER_ATTRS =
-
-# The QHG_LOCATION tag can be used to specify the location of Qt's
-# qhelpgenerator. If non-empty doxygen will try to run qhelpgenerator on the
-# generated .qhp file.
-# This tag requires that the tag GENERATE_QHP is set to YES.
-
-QHG_LOCATION =
-
-# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be
-# generated that, together with the HTML files, form an Eclipse help plugin. To
-# install this plugin and make it available under the help contents menu in
-# Eclipse, the contents of the directory containing the HTML and XML files needs
-# to be copied into the plugins directory of eclipse. The name of the directory
-# within the plugins directory should be the same as the ECLIPSE_DOC_ID value.
-# After copying Eclipse needs to be restarted before the help appears.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-GENERATE_ECLIPSEHELP = NO
-
-# A unique identifier for the Eclipse help plugin. When installing the plugin
-# the directory name containing the HTML and XML files should also have this
-# name. Each documentation set should have its own identifier.
-# The default value is: org.doxygen.Project.
-# This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES.
-
-ECLIPSE_DOC_ID = org.doxygen.Project
-
-# If you want full control over the layout of the generated HTML pages it might
-# be necessary to disable the index and replace it with your own. The
-# DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at top
-# of each HTML page. A value of NO enables the index and the value YES disables
-# it. Since the tabs in the index contain the same information as the navigation
-# tree, you can set this option to YES if you also set GENERATE_TREEVIEW to YES.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-DISABLE_INDEX = NO
-
-# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index
-# structure should be generated to display hierarchical information. If the tag
-# value is set to YES, a side panel will be generated containing a tree-like
-# index structure (just like the one that is generated for HTML Help). For this
-# to work a browser that supports JavaScript, DHTML, CSS and frames is required
-# (i.e. any modern browser). Windows users are probably better off using the
-# HTML help feature. Via custom style sheets (see HTML_EXTRA_STYLESHEET) one can
-# further fine-tune the look of the index. As an example, the default style
-# sheet generated by doxygen has an example that shows how to put an image at
-# the root of the tree instead of the PROJECT_NAME. Since the tree basically has
-# the same information as the tab index, you could consider setting
-# DISABLE_INDEX to YES when enabling this option.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-GENERATE_TREEVIEW = NO
-
-# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values that
-# doxygen will group on one line in the generated HTML documentation.
-#
-# Note that a value of 0 will completely suppress the enum values from appearing
-# in the overview section.
-# Minimum value: 0, maximum value: 20, default value: 4.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-ENUM_VALUES_PER_LINE = 4
-
-# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used
-# to set the initial width (in pixels) of the frame in which the tree is shown.
-# Minimum value: 0, maximum value: 1500, default value: 250.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-TREEVIEW_WIDTH = 250
-
-# If the EXT_LINKS_IN_WINDOW option is set to YES, doxygen will open links to
-# external symbols imported via tag files in a separate window.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-EXT_LINKS_IN_WINDOW = NO
-
-# Use this tag to change the font size of LaTeX formulas included as images in
-# the HTML documentation. When you change the font size after a successful
-# doxygen run you need to manually remove any form_*.png images from the HTML
-# output directory to force them to be regenerated.
-# Minimum value: 8, maximum value: 50, default value: 10.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-FORMULA_FONTSIZE = 10
-
-# Use the FORMULA_TRANSPARENT tag to determine whether or not the images
-# generated for formulas are transparent PNGs. Transparent PNGs are not
-# supported properly for IE 6.0, but are supported on all modern browsers.
-#
-# Note that when changing this option you need to delete any form_*.png files in
-# the HTML output directory before the changes have effect.
-# The default value is: YES.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-FORMULA_TRANSPARENT = YES
-
-# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see
-# http://www.mathjax.org) which uses client side Javascript for the rendering
-# instead of using pre-rendered bitmaps. Use this if you do not have LaTeX
-# installed or if you want the formulas to look prettier in the HTML output. When
-# enabled you may also need to install MathJax separately and configure the path
-# to it using the MATHJAX_RELPATH option.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-USE_MATHJAX = NO
-
-# When MathJax is enabled you can set the default output format to be used for
-# the MathJax output. See the MathJax site (see:
-# http://docs.mathjax.org/en/latest/output.html) for more details.
-# Possible values are: HTML-CSS (which is slower, but has the best
-# compatibility), NativeMML (i.e. MathML) and SVG.
-# The default value is: HTML-CSS.
-# This tag requires that the tag USE_MATHJAX is set to YES.
-
-MATHJAX_FORMAT = HTML-CSS
-
-# When MathJax is enabled you need to specify the location relative to the HTML
-# output directory using the MATHJAX_RELPATH option. The destination directory
-# should contain the MathJax.js script. For instance, if the mathjax directory
-# is located at the same level as the HTML output directory, then
-# MATHJAX_RELPATH should be ../mathjax. The default value points to the MathJax
-# Content Delivery Network so you can quickly see the result without installing
-# MathJax. However, it is strongly recommended to install a local copy of
-# MathJax from http://www.mathjax.org before deployment.
-# The default value is: http://cdn.mathjax.org/mathjax/latest.
-# This tag requires that the tag USE_MATHJAX is set to YES.
-
-MATHJAX_RELPATH = http://cdn.mathjax.org/mathjax/latest
-
-# The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax
-# extension names that should be enabled during MathJax rendering. For example
-# MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols
-# This tag requires that the tag USE_MATHJAX is set to YES.
-
-MATHJAX_EXTENSIONS =
-
-# The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces
-# of code that will be used on startup of the MathJax code. See the MathJax site
-# (see: http://docs.mathjax.org/en/latest/output.html) for more details. For an
-# example see the documentation.
-# This tag requires that the tag USE_MATHJAX is set to YES.
-
-MATHJAX_CODEFILE =
-
-# When the SEARCHENGINE tag is enabled doxygen will generate a search box for
-# the HTML output. The underlying search engine uses javascript and DHTML and
-# should work on any modern browser. Note that when using HTML help
-# (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET)
-# there is already a search function so this one should typically be disabled.
-# For large projects the javascript-based search engine can be slow; enabling
-# SERVER_BASED_SEARCH may then provide a better solution. It is possible to
-# search using the keyboard; to jump to the search box use <access key> + S
-# (what the <access key> is depends on the OS and browser, but it is typically
-# <CTRL>, <ALT>/<option>, or both). Inside the search box use the <cursor down
-# key> to jump into the search results window, the results can be navigated
-# using the <cursor keys>. Press <Enter> to select an item or <escape> to cancel
-# the search. The filter options can be selected when the cursor is inside the
-# search box by pressing <Shift>+<cursor down>. Also here use the <cursor keys>
-# to select a filter and <Enter> or <escape> to activate or cancel the filter
-# option.
-# The default value is: YES.
-# This tag requires that the tag GENERATE_HTML is set to YES.
-
-SEARCHENGINE = YES
-
-# When the SERVER_BASED_SEARCH tag is enabled the search engine will be
-# implemented using a web server instead of a web client using Javascript. There
-# are two flavors of web server based searching depending on the EXTERNAL_SEARCH
-# setting. When disabled, doxygen will generate a PHP script for searching and
-# an index file used by the script. When EXTERNAL_SEARCH is enabled the indexing
-# and searching needs to be provided by external tools. See the section
-# "External Indexing and Searching" for details.
-# The default value is: NO.
-# This tag requires that the tag SEARCHENGINE is set to YES.
-
-SERVER_BASED_SEARCH = NO
-
-# When EXTERNAL_SEARCH tag is enabled doxygen will no longer generate the PHP
-# script for searching. Instead the search results are written to an XML file
-# which needs to be processed by an external indexer. Doxygen will invoke an
-# external search engine pointed to by the SEARCHENGINE_URL option to obtain the
-# search results.
-#
-# Doxygen ships with an example indexer (doxyindexer) and search engine
-# (doxysearch.cgi) which are based on the open source search engine library
-# Xapian (see: http://xapian.org/).
-#
-# See the section "External Indexing and Searching" for details.
-# The default value is: NO.
-# This tag requires that the tag SEARCHENGINE is set to YES.
-
-EXTERNAL_SEARCH = NO
-
-# The SEARCHENGINE_URL should point to a search engine hosted by a web server
-# which will return the search results when EXTERNAL_SEARCH is enabled.
-#
-# Doxygen ships with an example indexer (doxyindexer) and search engine
-# (doxysearch.cgi) which are based on the open source search engine library
-# Xapian (see: http://xapian.org/). See the section "External Indexing and
-# Searching" for details.
-# This tag requires that the tag SEARCHENGINE is set to YES.
-
-SEARCHENGINE_URL =
-
-# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the unindexed
-# search data is written to a file for indexing by an external tool. With the
-# SEARCHDATA_FILE tag the name of this file can be specified.
-# The default file is: searchdata.xml.
-# This tag requires that the tag SEARCHENGINE is set to YES.
-
-SEARCHDATA_FILE = searchdata.xml
-
-# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the
-# EXTERNAL_SEARCH_ID tag can be used as an identifier for the project. This is
-# useful in combination with EXTRA_SEARCH_MAPPINGS to search through multiple
-# projects and redirect the results back to the right project.
-# This tag requires that the tag SEARCHENGINE is set to YES.
-
-EXTERNAL_SEARCH_ID =
-
-# The EXTRA_SEARCH_MAPPINGS tag can be used to enable searching through doxygen
-# projects other than the one defined by this configuration file, but that are
-# all added to the same external search index. Each project needs to have a
-# unique id set via EXTERNAL_SEARCH_ID. The search mapping then maps the id of
-# a project to a relative location where the documentation can be found. The
-# format is:
-# EXTRA_SEARCH_MAPPINGS = tagname1=loc1 tagname2=loc2 ...
-# This tag requires that the tag SEARCHENGINE is set to YES.
-
-EXTRA_SEARCH_MAPPINGS =
-
-#---------------------------------------------------------------------------
-# Configuration options related to the LaTeX output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_LATEX tag is set to YES, doxygen will generate LaTeX output.
-# The default value is: YES.
-
-GENERATE_LATEX = NO
-
-# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. If a
-# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
-# it.
-# The default directory is: latex.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-LATEX_OUTPUT = latex
-
-# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be
-# invoked.
-#
-# Note that when enabling USE_PDFLATEX this option is only used for generating
-# bitmaps for formulas in the HTML output, but not in the Makefile that is
-# written to the output directory.
-# The default file is: latex.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-LATEX_CMD_NAME = latex
-
-# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to generate
-# index for LaTeX.
-# The default file is: makeindex.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-MAKEINDEX_CMD_NAME = makeindex
-
-# If the COMPACT_LATEX tag is set to YES, doxygen generates more compact LaTeX
-# documents. This may be useful for small projects and may help to save some
-# trees in general.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-COMPACT_LATEX = NO
-
-# The PAPER_TYPE tag can be used to set the paper type that is used by the
-# printer.
-# Possible values are: a4 (210 x 297 mm), letter (8.5 x 11 inches), legal (8.5 x
-# 14 inches) and executive (7.25 x 10.5 inches).
-# The default value is: a4.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-PAPER_TYPE = a4
-
-# The EXTRA_PACKAGES tag can be used to specify one or more LaTeX package names
-# that should be included in the LaTeX output. The package can be specified just
-# by its name or with the correct syntax as to be used with the LaTeX
-# \usepackage command. To get the times font, for instance, you can specify:
-# EXTRA_PACKAGES=times or EXTRA_PACKAGES={times}
-# To use the option intlimits with the amsmath package you can specify:
-# EXTRA_PACKAGES=[intlimits]{amsmath}
-# If left blank no extra packages will be included.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-EXTRA_PACKAGES =
-
-# The LATEX_HEADER tag can be used to specify a personal LaTeX header for the
-# generated LaTeX document. The header should contain everything until the first
-# chapter. If it is left blank doxygen will generate a standard header. See
-# section "Doxygen usage" for information on how to let doxygen write the
-# default header to a separate file.
-#
-# Note: Only use a user-defined header if you know what you are doing! The
-# following commands have a special meaning inside the header: $title,
-# $datetime, $date, $doxygenversion, $projectname, $projectnumber,
-# $projectbrief, $projectlogo. Doxygen will replace $title with the empty
-# string, for the replacement values of the other commands the user is referred
-# to HTML_HEADER.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-LATEX_HEADER =
-
-# The LATEX_FOOTER tag can be used to specify a personal LaTeX footer for the
-# generated LaTeX document. The footer should contain everything after the last
-# chapter. If it is left blank doxygen will generate a standard footer. See
-# LATEX_HEADER for more information on how to generate a default footer and what
-# special commands can be used inside the footer.
-#
-# Note: Only use a user-defined footer if you know what you are doing!
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-LATEX_FOOTER =
-
-# The LATEX_EXTRA_STYLESHEET tag can be used to specify additional user-defined
-# LaTeX style sheets that are included after the standard style sheets created
-# by doxygen. Using this option one can overrule certain style aspects. Doxygen
-# will copy the style sheet files to the output directory.
-# Note: The order of the extra style sheet files is of importance (e.g. the last
-# style sheet in the list overrules the setting of the previous ones in the
-# list).
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-LATEX_EXTRA_STYLESHEET =
-
-# The LATEX_EXTRA_FILES tag can be used to specify one or more extra images or
-# other source files which should be copied to the LATEX_OUTPUT output
-# directory. Note that the files will be copied as-is; there are no commands or
-# markers available.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-LATEX_EXTRA_FILES =
-
-# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated is
-# prepared for conversion to PDF (using ps2pdf or pdflatex). The PDF file will
-# contain links (just like the HTML output) instead of page references. This
-# makes the output suitable for online browsing using a PDF viewer.
-# The default value is: YES.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-PDF_HYPERLINKS = YES
-
-# If the USE_PDFLATEX tag is set to YES, doxygen will use pdflatex to generate
-# the PDF file directly from the LaTeX files. Set this option to YES, to get a
-# higher quality PDF documentation.
-# The default value is: YES.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-USE_PDFLATEX = YES
-
-# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \batchmode
-# command to the generated LaTeX files. This will instruct LaTeX to keep running
-# if errors occur, instead of asking the user for help. This option is also used
-# when generating formulas in HTML.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-LATEX_BATCHMODE = NO
-
-# If the LATEX_HIDE_INDICES tag is set to YES then doxygen will not include the
-# index chapters (such as File Index, Compound Index, etc.) in the output.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-LATEX_HIDE_INDICES = NO
-
-# If the LATEX_SOURCE_CODE tag is set to YES then doxygen will include source
-# code with syntax highlighting in the LaTeX output.
-#
-# Note that which sources are shown also depends on other settings such as
-# SOURCE_BROWSER.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-LATEX_SOURCE_CODE = NO
-
-# The LATEX_BIB_STYLE tag can be used to specify the style to use for the
-# bibliography, e.g. plainnat, or ieeetr. See
-# http://en.wikipedia.org/wiki/BibTeX and \cite for more info.
-# The default value is: plain.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-LATEX_BIB_STYLE = plain
-
-# If the LATEX_TIMESTAMP tag is set to YES then the footer of each generated
-# page will contain the date and time when the page was generated. Setting this
-# to NO can help when comparing the output of multiple runs.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_LATEX is set to YES.
-
-LATEX_TIMESTAMP = NO
-
-#---------------------------------------------------------------------------
-# Configuration options related to the RTF output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_RTF tag is set to YES, doxygen will generate RTF output. The
-# RTF output is optimized for Word 97 and may not look too pretty with other RTF
-# readers/editors.
-# The default value is: NO.
-
-GENERATE_RTF = NO
-
-# The RTF_OUTPUT tag is used to specify where the RTF docs will be put. If a
-# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
-# it.
-# The default directory is: rtf.
-# This tag requires that the tag GENERATE_RTF is set to YES.
-
-RTF_OUTPUT = rtf
-
-# If the COMPACT_RTF tag is set to YES, doxygen generates more compact RTF
-# documents. This may be useful for small projects and may help to save some
-# trees in general.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_RTF is set to YES.
-
-COMPACT_RTF = NO
-
-# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated will
-# contain hyperlink fields. The RTF file will contain links (just like the HTML
-# output) instead of page references. This makes the output suitable for online
-# browsing using Word or some other Word compatible readers that support those
-# fields.
-#
-# Note: WordPad (write) and others do not support links.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_RTF is set to YES.
-
-RTF_HYPERLINKS = NO
-
-# Load stylesheet definitions from file. Syntax is similar to doxygen's config
-# file, i.e. a series of assignments. You only have to provide replacements;
-# missing definitions are set to their default value.
-#
-# See also section "Doxygen usage" for information on how to generate the
-# default style sheet that doxygen normally uses.
-# This tag requires that the tag GENERATE_RTF is set to YES.
-
-RTF_STYLESHEET_FILE =
-
-# Set optional variables used in the generation of an RTF document. Syntax is
-# similar to doxygen's config file. A template extensions file can be generated
-# using doxygen -e rtf extensionFile.
-# This tag requires that the tag GENERATE_RTF is set to YES.
-
-RTF_EXTENSIONS_FILE =
-
-# If the RTF_SOURCE_CODE tag is set to YES then doxygen will include source code
-# with syntax highlighting in the RTF output.
-#
-# Note that which sources are shown also depends on other settings such as
-# SOURCE_BROWSER.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_RTF is set to YES.
-
-RTF_SOURCE_CODE = NO
-
-#---------------------------------------------------------------------------
-# Configuration options related to the man page output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_MAN tag is set to YES, doxygen will generate man pages for
-# classes and files.
-# The default value is: NO.
-
-GENERATE_MAN = NO
-
-# The MAN_OUTPUT tag is used to specify where the man pages will be put. If a
-# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
-# it. A directory man3 will be created inside the directory specified by
-# MAN_OUTPUT.
-# The default directory is: man.
-# This tag requires that the tag GENERATE_MAN is set to YES.
-
-MAN_OUTPUT = man
-
-# The MAN_EXTENSION tag determines the extension that is added to the generated
-# man pages. In case the manual section does not start with a number, the number
-# 3 is prepended. The dot (.) at the beginning of the MAN_EXTENSION tag is
-# optional.
-# The default value is: .3.
-# This tag requires that the tag GENERATE_MAN is set to YES.
-
-MAN_EXTENSION = .3
-
-# The MAN_SUBDIR tag determines the name of the directory created within
-# MAN_OUTPUT in which the man pages are placed. It defaults to man followed by
-# MAN_EXTENSION with the initial . removed.
-# This tag requires that the tag GENERATE_MAN is set to YES.
-
-MAN_SUBDIR =
-
-# If the MAN_LINKS tag is set to YES and doxygen generates man output, then it
-# will generate one additional man file for each entity documented in the real
-# man page(s). These additional files only source the real man page, but without
-# them the man command would be unable to find the correct page.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_MAN is set to YES.
-
-MAN_LINKS = NO
-
-#---------------------------------------------------------------------------
-# Configuration options related to the XML output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_XML tag is set to YES, doxygen will generate an XML file that
-# captures the structure of the code including all documentation.
-# The default value is: NO.
-
-GENERATE_XML = NO
-
-# The XML_OUTPUT tag is used to specify where the XML pages will be put. If a
-# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
-# it.
-# The default directory is: xml.
-# This tag requires that the tag GENERATE_XML is set to YES.
-
-XML_OUTPUT = xml
-
-# If the XML_PROGRAMLISTING tag is set to YES, doxygen will dump the program
-# listings (including syntax highlighting and cross-referencing information) to
-# the XML output. Note that enabling this will significantly increase the size
-# of the XML output.
-# The default value is: YES.
-# This tag requires that the tag GENERATE_XML is set to YES.
-
-XML_PROGRAMLISTING = YES
-
-#---------------------------------------------------------------------------
-# Configuration options related to the DOCBOOK output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_DOCBOOK tag is set to YES, doxygen will generate Docbook files
-# that can be used to generate PDF.
-# The default value is: NO.
-
-GENERATE_DOCBOOK = NO
-
-# The DOCBOOK_OUTPUT tag is used to specify where the Docbook pages will be put.
-# If a relative path is entered the value of OUTPUT_DIRECTORY will be put in
-# front of it.
-# The default directory is: docbook.
-# This tag requires that the tag GENERATE_DOCBOOK is set to YES.
-
-DOCBOOK_OUTPUT = docbook
-
-# If the DOCBOOK_PROGRAMLISTING tag is set to YES, doxygen will include the
-# program listings (including syntax highlighting and cross-referencing
-# information) to the DOCBOOK output. Note that enabling this will significantly
-# increase the size of the DOCBOOK output.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_DOCBOOK is set to YES.
-
-DOCBOOK_PROGRAMLISTING = NO
-
-#---------------------------------------------------------------------------
-# Configuration options for the AutoGen Definitions output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_AUTOGEN_DEF tag is set to YES, doxygen will generate an
-# AutoGen Definitions (see http://autogen.sf.net) file that captures the
-# structure of the code including all documentation. Note that this feature is
-# still experimental and incomplete at the moment.
-# The default value is: NO.
-
-GENERATE_AUTOGEN_DEF = NO
-
-#---------------------------------------------------------------------------
-# Configuration options related to the Perl module output
-#---------------------------------------------------------------------------
-
-# If the GENERATE_PERLMOD tag is set to YES, doxygen will generate a Perl module
-# file that captures the structure of the code including all documentation.
-#
-# Note that this feature is still experimental and incomplete at the moment.
-# The default value is: NO.
-
-GENERATE_PERLMOD = NO
-
-# If the PERLMOD_LATEX tag is set to YES, doxygen will generate the necessary
-# Makefile rules, Perl scripts and LaTeX code to be able to generate PDF and DVI
-# output from the Perl module output.
-# The default value is: NO.
-# This tag requires that the tag GENERATE_PERLMOD is set to YES.
-
-PERLMOD_LATEX = NO
-
-# If the PERLMOD_PRETTY tag is set to YES, the Perl module output will be nicely
-# formatted so it can be parsed by a human reader. This is useful if you want to
-# understand what is going on. On the other hand, if this tag is set to NO, the
-# size of the Perl module output will be much smaller and Perl will parse it
-# just the same.
-# The default value is: YES.
-# This tag requires that the tag GENERATE_PERLMOD is set to YES.
-
-PERLMOD_PRETTY = YES
-
-# The names of the make variables in the generated doxyrules.make file are
-# prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX. This is useful
-# so different doxyrules.make files included by the same Makefile don't
-# overwrite each other's variables.
-# This tag requires that the tag GENERATE_PERLMOD is set to YES.
-
-PERLMOD_MAKEVAR_PREFIX =
-
-#---------------------------------------------------------------------------
-# Configuration options related to the preprocessor
-#---------------------------------------------------------------------------
-
-# If the ENABLE_PREPROCESSING tag is set to YES, doxygen will evaluate all
-# C-preprocessor directives found in the sources and include files.
-# The default value is: YES.
-
-ENABLE_PREPROCESSING = YES
-
-# If the MACRO_EXPANSION tag is set to YES, doxygen will expand all macro names
-# in the source code. If set to NO, only conditional compilation will be
-# performed. Macro expansion can be done in a controlled way by setting
-# EXPAND_ONLY_PREDEF to YES.
-# The default value is: NO.
-# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
-
-MACRO_EXPANSION = NO
-
-# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES then
-# the macro expansion is limited to the macros specified with the PREDEFINED and
-# EXPAND_AS_DEFINED tags.
-# The default value is: NO.
-# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
-
-EXPAND_ONLY_PREDEF = NO
-
-# If the SEARCH_INCLUDES tag is set to YES, the include files in the
-# INCLUDE_PATH will be searched if a #include is found.
-# The default value is: YES.
-# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
-
-SEARCH_INCLUDES = YES
-
-# The INCLUDE_PATH tag can be used to specify one or more directories that
-# contain include files that are not input files but should be processed by the
-# preprocessor.
-# This tag requires that the tag SEARCH_INCLUDES is set to YES.
-
-INCLUDE_PATH =
-
-# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard
-# patterns (like *.h and *.hpp) to filter out the header-files in the
-# directories. If left blank, the patterns specified with FILE_PATTERNS will be
-# used.
-# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
-
-INCLUDE_FILE_PATTERNS =
-
-# The PREDEFINED tag can be used to specify one or more macro names that are
-# defined before the preprocessor is started (similar to the -D option of e.g.
-# gcc). The argument of the tag is a list of macros of the form: name or
-# name=definition (no spaces). If the definition and the "=" are omitted, "=1"
-# is assumed. To prevent a macro definition from being undefined via #undef or
-# recursively expanded use the := operator instead of the = operator.
-# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
-
-PREDEFINED =
-
-# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this
-# tag can be used to specify a list of macro names that should be expanded. The
-# macro definition that is found in the sources will be used. Use the PREDEFINED
-# tag if you want to use a different macro definition that overrules the
-# definition found in the source code.
-# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
-
-EXPAND_AS_DEFINED =
-
-# If the SKIP_FUNCTION_MACROS tag is set to YES then doxygen's preprocessor will
-# remove all references to function-like macros that are alone on a line, have
-# an all uppercase name, and do not end with a semicolon. Such function macros
-# are typically used for boiler-plate code, and will confuse the parser if not
-# removed.
-# The default value is: YES.
-# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
-
-SKIP_FUNCTION_MACROS = YES
-
-#---------------------------------------------------------------------------
-# Configuration options related to external references
-#---------------------------------------------------------------------------
-
-# The TAGFILES tag can be used to specify one or more tag files. For each tag
-# file the location of the external documentation should be added. The format of
-# a tag file without this location is as follows:
-# TAGFILES = file1 file2 ...
-# Adding location for the tag files is done as follows:
-# TAGFILES = file1=loc1 "file2 = loc2" ...
-# where loc1 and loc2 can be relative or absolute paths or URLs. See the
-# section "Linking to external documentation" for more information about the use
-# of tag files.
-# Note: Each tag file must have a unique name (where the name does NOT include
-# the path). If a tag file is not located in the directory in which doxygen is
-# run, you must also specify the path to the tagfile here.
-
-TAGFILES =
-
-# When a file name is specified after GENERATE_TAGFILE, doxygen will create a
-# tag file that is based on the input files it reads. See section "Linking to
-# external documentation" for more information about the usage of tag files.
-
-GENERATE_TAGFILE =
-
-# If the ALLEXTERNALS tag is set to YES, all external class will be listed in
-# the class index. If set to NO, only the inherited external classes will be
-# listed.
-# The default value is: NO.
-
-ALLEXTERNALS = NO
-
-# If the EXTERNAL_GROUPS tag is set to YES, all external groups will be listed
-# in the modules index. If set to NO, only the current project's groups will be
-# listed.
-# The default value is: YES.
-
-EXTERNAL_GROUPS = YES
-
-# If the EXTERNAL_PAGES tag is set to YES, all external pages will be listed in
-# the related pages index. If set to NO, only the current project's pages will
-# be listed.
-# The default value is: YES.
-
-EXTERNAL_PAGES = YES
-
-# The PERL_PATH should be the absolute path and name of the perl script
-# interpreter (i.e. the result of 'which perl').
-# The default file (with absolute path) is: /usr/bin/perl.
-
-PERL_PATH = /usr/bin/perl
-
-#---------------------------------------------------------------------------
-# Configuration options related to the dot tool
-#---------------------------------------------------------------------------
-
-# If the CLASS_DIAGRAMS tag is set to YES, doxygen will generate a class diagram
-# (in HTML and LaTeX) for classes with base or super classes. Setting the tag to
-# NO turns the diagrams off. Note that this option also works with HAVE_DOT
-# disabled, but it is recommended to install and use dot, since it yields more
-# powerful graphs.
-# The default value is: YES.
-
-CLASS_DIAGRAMS = YES
-
-# You can define message sequence charts within doxygen comments using the \msc
-# command. Doxygen will then run the mscgen tool (see:
-# http://www.mcternan.me.uk/mscgen/) to produce the chart and insert it in the
-# documentation. The MSCGEN_PATH tag allows you to specify the directory where
-# the mscgen tool resides. If left empty the tool is assumed to be found in the
-# default search path.
-
-MSCGEN_PATH =
-
-# You can include diagrams made with dia in doxygen documentation. Doxygen will
-# then run dia to produce the diagram and insert it in the documentation. The
-# DIA_PATH tag allows you to specify the directory where the dia binary resides.
-# If left empty dia is assumed to be found in the default search path.
-
-DIA_PATH =
-
-# If set to YES the inheritance and collaboration graphs will hide inheritance
-# and usage relations if the target is undocumented or is not a class.
-# The default value is: YES.
-
-HIDE_UNDOC_RELATIONS = YES
-
-# If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is
-# available from the path. This tool is part of Graphviz (see:
-# http://www.graphviz.org/), a graph visualization toolkit from AT&T and Lucent
-# Bell Labs. The other options in this section have no effect if this option is
-# set to NO
-# The default value is: NO.
-
-HAVE_DOT = YES
-
-# The DOT_NUM_THREADS specifies the number of dot invocations doxygen is allowed
-# to run in parallel. When set to 0 doxygen will base this on the number of
-# processors available in the system. You can set it explicitly to a value
-# larger than 0 to get control over the balance between CPU load and processing
-# speed.
-# Minimum value: 0, maximum value: 32, default value: 0.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-DOT_NUM_THREADS = 0
-
-# When you want a differently looking font in the dot files that doxygen
-# generates you can specify the font name using DOT_FONTNAME. You need to make
-# sure dot is able to find the font, which can be done by putting it in a
-# standard location or by setting the DOTFONTPATH environment variable or by
-# setting DOT_FONTPATH to the directory containing the font.
-# The default value is: Helvetica.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-DOT_FONTNAME = Calibri
-
-# The DOT_FONTSIZE tag can be used to set the size (in points) of the font of
-# dot graphs.
-# Minimum value: 4, maximum value: 24, default value: 10.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-DOT_FONTSIZE = 10
-
-# By default doxygen will tell dot to use the default font as specified with
-# DOT_FONTNAME. If you specify a different font using DOT_FONTNAME you can set
-# the path where dot can find it using this tag.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-DOT_FONTPATH =
-
-# If the CLASS_GRAPH tag is set to YES then doxygen will generate a graph for
-# each documented class showing the direct and indirect inheritance relations.
-# Setting this tag to YES will force the CLASS_DIAGRAMS tag to NO.
-# The default value is: YES.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-CLASS_GRAPH = YES
-
-# If the COLLABORATION_GRAPH tag is set to YES then doxygen will generate a
-# graph for each documented class showing the direct and indirect implementation
-# dependencies (inheritance, containment, and class references variables) of the
-# class with other documented classes.
-# The default value is: YES.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-COLLABORATION_GRAPH = YES
-
-# If the GROUP_GRAPHS tag is set to YES then doxygen will generate a graph for
-# groups, showing the direct groups dependencies.
-# The default value is: YES.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-GROUP_GRAPHS = YES
-
-# If the UML_LOOK tag is set to YES, doxygen will generate inheritance and
-# collaboration diagrams in a style similar to the OMG's Unified Modeling
-# Language.
-# The default value is: NO.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-UML_LOOK = NO
-
-# If the UML_LOOK tag is enabled, the fields and methods are shown inside the
-# class node. If there are many fields or methods and many nodes the graph may
-# become too big to be useful. The UML_LIMIT_NUM_FIELDS threshold limits the
-# number of items for each type to make the size more manageable. Set this to 0
-# for no limit. Note that the threshold may be exceeded by 50% before the limit
-# is enforced. So when you set the threshold to 10, up to 15 fields may appear,
-# but if the number exceeds 15, the total amount of fields shown is limited to
-# 10.
-# Minimum value: 0, maximum value: 100, default value: 10.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-UML_LIMIT_NUM_FIELDS = 10
-
-# If the TEMPLATE_RELATIONS tag is set to YES then the inheritance and
-# collaboration graphs will show the relations between templates and their
-# instances.
-# The default value is: NO.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-TEMPLATE_RELATIONS = NO
-
-# If the INCLUDE_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are set to
-# YES then doxygen will generate a graph for each documented file showing the
-# direct and indirect include dependencies of the file with other documented
-# files.
-# The default value is: YES.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-INCLUDE_GRAPH = YES
-
-# If the INCLUDED_BY_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are
-# set to YES then doxygen will generate a graph for each documented file showing
-# the direct and indirect include dependencies of the file with other documented
-# files.
-# The default value is: YES.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-INCLUDED_BY_GRAPH = YES
-
-# If the CALL_GRAPH tag is set to YES then doxygen will generate a call
-# dependency graph for every global function or class method.
-#
-# Note that enabling this option will significantly increase the time of a run.
-# So in most cases it will be better to enable call graphs for selected
-# functions only using the \callgraph command. Disabling a call graph can be
-# accomplished by means of the command \hidecallgraph.
-# The default value is: NO.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-CALL_GRAPH = YES
-
-# If the CALLER_GRAPH tag is set to YES then doxygen will generate a caller
-# dependency graph for every global function or class method.
-#
-# Note that enabling this option will significantly increase the time of a run.
-# So in most cases it will be better to enable caller graphs for selected
-# functions only using the \callergraph command. Disabling a caller graph can be
-# accomplished by means of the command \hidecallergraph.
-# The default value is: NO.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-CALLER_GRAPH = YES
-
-# If the GRAPHICAL_HIERARCHY tag is set to YES then doxygen will show a
-# graphical hierarchy of all classes instead of a textual one.
-# The default value is: YES.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-GRAPHICAL_HIERARCHY = YES
-
-# If the DIRECTORY_GRAPH tag is set to YES then doxygen will show the
-# dependencies a directory has on other directories in a graphical way. The
-# dependency relations are determined by the #include relations between the
-# files in the directories.
-# The default value is: YES.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-DIRECTORY_GRAPH = YES
-
-# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images
-# generated by dot. For an explanation of the image formats see the section
-# output formats in the documentation of the dot tool (Graphviz (see:
-# http://www.graphviz.org/)).
-# Note: If you choose svg you need to set HTML_FILE_EXTENSION to xhtml in order
-# to make the SVG files visible in IE 9+ (other browsers do not have this
-# requirement).
-# Possible values are: png, jpg, gif, svg, png:gd, png:gd:gd, png:cairo,
-# png:cairo:gd, png:cairo:cairo, png:cairo:gdiplus, png:gdiplus and
-# png:gdiplus:gdiplus.
-# The default value is: png.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-DOT_IMAGE_FORMAT = png
-
-# If DOT_IMAGE_FORMAT is set to svg, then this option can be set to YES to
-# enable generation of interactive SVG images that allow zooming and panning.
-#
-# Note that this requires a modern browser other than Internet Explorer. Tested
-# and working are Firefox, Chrome, Safari, and Opera.
-# Note: For IE 9+ you need to set HTML_FILE_EXTENSION to xhtml in order to make
-# the SVG files visible. Older versions of IE do not have SVG support.
-# The default value is: NO.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-INTERACTIVE_SVG = NO
-
-# The DOT_PATH tag can be used to specify the path where the dot tool can be
-# found. If left blank, it is assumed the dot tool can be found in the path.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-DOT_PATH = /usr/local/bin/dot
-
-# The DOTFILE_DIRS tag can be used to specify one or more directories that
-# contain dot files that are included in the documentation (see the \dotfile
-# command).
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-DOTFILE_DIRS =
-
-# The MSCFILE_DIRS tag can be used to specify one or more directories that
-# contain msc files that are included in the documentation (see the \mscfile
-# command).
-
-MSCFILE_DIRS =
-
-# The DIAFILE_DIRS tag can be used to specify one or more directories that
-# contain dia files that are included in the documentation (see the \diafile
-# command).
-
-DIAFILE_DIRS =
-
-# When using plantuml, the PLANTUML_JAR_PATH tag should be used to specify the
-# path where java can find the plantuml.jar file. If left blank, it is assumed
-# PlantUML is not used or called during a preprocessing step. Doxygen will
-# generate a warning when it encounters a \startuml command in this case and
-# will not generate output for the diagram.
-
-PLANTUML_JAR_PATH =
-
-# When using plantuml, the PLANTUML_CFG_FILE tag can be used to specify a
-# configuration file for plantuml.
-
-PLANTUML_CFG_FILE =
-
-# When using plantuml, the specified paths are searched for files specified by
-# the !include statement in a plantuml block.
-
-PLANTUML_INCLUDE_PATH =
-
-# The DOT_GRAPH_MAX_NODES tag can be used to set the maximum number of nodes
-# that will be shown in the graph. If the number of nodes in a graph becomes
-# larger than this value, doxygen will truncate the graph, which is visualized
-# by representing a node as a red box. Note that if the number of direct
-# children of the root node in a graph is already larger than
-# DOT_GRAPH_MAX_NODES, the graph will not be shown at all. Also note that
-# the size of a graph can be further restricted by MAX_DOT_GRAPH_DEPTH.
-# Minimum value: 0, maximum value: 10000, default value: 50.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-DOT_GRAPH_MAX_NODES = 50
-
-# The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the graphs
-# generated by dot. A depth value of 3 means that only nodes reachable from the
-# root by following a path via at most 3 edges will be shown. Nodes that lay
-# further from the root node will be omitted. Note that setting this option to 1
-# or 2 may greatly reduce the computation time needed for large code bases. Also
-# note that the size of a graph can be further restricted by
-# DOT_GRAPH_MAX_NODES. Using a depth of 0 means no depth restriction.
-# Minimum value: 0, maximum value: 1000, default value: 0.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-MAX_DOT_GRAPH_DEPTH = 0
-
-# Set the DOT_TRANSPARENT tag to YES to generate images with a transparent
-# background. This is disabled by default, because dot on Windows does not seem
-# to support this out of the box.
-#
-# Warning: Depending on the platform used, enabling this option may lead to
-# badly anti-aliased labels on the edges of a graph (i.e. they become hard to
-# read).
-# The default value is: NO.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-DOT_TRANSPARENT = NO
-
-# Set the DOT_MULTI_TARGETS tag to YES to allow dot to generate multiple output
-# files in one run (i.e. multiple -o and -T options on the command line). This
-# makes dot run faster, but since only newer versions of dot (>1.8.10) support
-# this, this feature is disabled by default.
-# The default value is: NO.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-DOT_MULTI_TARGETS = NO
-
-# If the GENERATE_LEGEND tag is set to YES doxygen will generate a legend page
-# explaining the meaning of the various boxes and arrows in the dot generated
-# graphs.
-# The default value is: YES.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-GENERATE_LEGEND = YES
-
-# If the DOT_CLEANUP tag is set to YES, doxygen will remove the intermediate dot
-# files that are used to generate the various graphs.
-# The default value is: YES.
-# This tag requires that the tag HAVE_DOT is set to YES.
-
-DOT_CLEANUP = YES
diff --git a/docs/fig/compiler_flow.png b/docs/fig/compiler_flow.png
new file mode 100644
index 000000000..25daa0ca1
--- /dev/null
+++ b/docs/fig/compiler_flow.png
Binary files differ
diff --git a/docs/fig/nnfw_compiler_structure.png b/docs/fig/nnfw_compiler_structure.png
new file mode 100644
index 000000000..4c650c186
--- /dev/null
+++ b/docs/fig/nnfw_compiler_structure.png
Binary files differ
diff --git a/docs/fig/nnfw_compiler_structure.pptx b/docs/fig/nnfw_compiler_structure.pptx
new file mode 100644
index 000000000..9b5585d0c
--- /dev/null
+++ b/docs/fig/nnfw_compiler_structure.pptx
Binary files differ
diff --git a/docs/fig/nnfw_components.png b/docs/fig/nnfw_components.png
new file mode 100644
index 000000000..2c6bc6d97
--- /dev/null
+++ b/docs/fig/nnfw_components.png
Binary files differ
diff --git a/docs/fig/nnfw_components.pptx b/docs/fig/nnfw_components.pptx
new file mode 100644
index 000000000..a4e86fa82
--- /dev/null
+++ b/docs/fig/nnfw_components.pptx
Binary files differ
diff --git a/docs/fig/nnfw_nativeapi_flow.png b/docs/fig/nnfw_nativeapi_flow.png
new file mode 100644
index 000000000..31e82900d
--- /dev/null
+++ b/docs/fig/nnfw_nativeapi_flow.png
Binary files differ
diff --git a/docs/fig/nnfw_nativeapi_flow.pptx b/docs/fig/nnfw_nativeapi_flow.pptx
new file mode 100644
index 000000000..27f6d6e80
--- /dev/null
+++ b/docs/fig/nnfw_nativeapi_flow.pptx
Binary files differ
diff --git a/docs/fig/nnfw_nnapi_flow.png b/docs/fig/nnfw_nnapi_flow.png
new file mode 100644
index 000000000..2faceb9f2
--- /dev/null
+++ b/docs/fig/nnfw_nnapi_flow.png
Binary files differ
diff --git a/docs/fig/nnfw_nnapi_flow.pptx b/docs/fig/nnfw_nnapi_flow.pptx
new file mode 100644
index 000000000..7407a3940
--- /dev/null
+++ b/docs/fig/nnfw_nnapi_flow.pptx
Binary files differ
diff --git a/docs/fig/nnfw_runtime_behavior.png b/docs/fig/nnfw_runtime_behavior.png
new file mode 100644
index 000000000..952f22c93
--- /dev/null
+++ b/docs/fig/nnfw_runtime_behavior.png
Binary files differ
diff --git a/docs/fig/nnfw_runtime_behavior.pptx b/docs/fig/nnfw_runtime_behavior.pptx
new file mode 100644
index 000000000..2fbcedacb
--- /dev/null
+++ b/docs/fig/nnfw_runtime_behavior.pptx
Binary files differ
diff --git a/docs/fig/nnfw_runtime_structure.png b/docs/fig/nnfw_runtime_structure.png
new file mode 100644
index 000000000..554b5aa04
--- /dev/null
+++ b/docs/fig/nnfw_runtime_structure.png
Binary files differ
diff --git a/docs/fig/nnfw_runtime_structure.pptx b/docs/fig/nnfw_runtime_structure.pptx
new file mode 100644
index 000000000..213925e91
--- /dev/null
+++ b/docs/fig/nnfw_runtime_structure.pptx
Binary files differ
diff --git a/docs/fig/runtime_nativeapi_flow.png b/docs/fig/runtime_nativeapi_flow.png
new file mode 100644
index 000000000..1f9c88236
--- /dev/null
+++ b/docs/fig/runtime_nativeapi_flow.png
Binary files differ
diff --git a/docs/nncc/README.md b/docs/nncc/README.md
new file mode 100644
index 000000000..203b4aa45
--- /dev/null
+++ b/docs/nncc/README.md
@@ -0,0 +1,56 @@
+# 1. nnas SDK
+
+_describe simply that current version is 1.0.0, and nnas SDK has nncc and nnfw._
+
+ _we use semantic versioning. Provide link to https://semver.org/_
+
+_simply mention that we go with apache license_
+
+# 2. nncc
+
+_please write a short description_
+_for example, what is this compiler_
+_design philosophy and advantages of this compiler_
+
+## 2.1. Architecture
+
+_For example, simple architecture or compiling flow, showing we're cool_
+
+## 2.2. Getting Started
+
+This section explains how to install the compiler and compile a TensorFlow model file.
+
+### 2.2.1. Supported Environment
+
+_x86, Ubuntu 16.04... versions of TensorFlow that produce models.. frozen file..., ... etc..._
+
+### 2.2.2. How to Install
+
+_please write how to install_
+
+### 2.2.3. How to Compile and Package
+
+_what is 'nnpackage'?_
+_environment variables_
+_compiling inception v3 pb file and packaging into an nnpackage_
+_explaining files in an nnpackage_
+_an example with custom op_
+
+## 2.3. List of Supported Operations
+
+_separate md file_
+_showing a list of [ tensorflow op , circle op, limitation ]_
+
+## 2.4. Benchmark
+
+_inception v3 (we have shorter ops)_
+_instance normalization (link to runtime performance)_
+_showing we have bright future_
+
+## 2.5. Support
+
+_report a bug into our github_
+
+## 2.6. Revision History
+
+_separate md file where SDK 1.0.0 and future version history are maintained_
diff --git a/docs/nncc/design.md b/docs/nncc/design.md
new file mode 100644
index 000000000..a01d6fec4
--- /dev/null
+++ b/docs/nncc/design.md
@@ -0,0 +1,10 @@
+This document describes basic principles behind _nncc_ design.
+
+## Goals and non-goals
+
+As mentioned in README.md, _nncc_ aims to provide a general framework for compiling a given NN model
+to an artifact that runs on a target device (such as CPU, GPU, or NPU).
+
+More specifically, _nncc_ aims to create an efficient artifact (in terms of throughput or memory)
+for a specific target by focusing on a restricted set of NN operations. It is not the goal of _nncc_
+to support all known NN operations, although _nncc_ will keep trying to broaden its coverage.
diff --git a/docs/nncc/getting_started.md b/docs/nncc/getting_started.md
new file mode 100644
index 000000000..8f01bd2a4
--- /dev/null
+++ b/docs/nncc/getting_started.md
@@ -0,0 +1,73 @@
+#### Prerequisites
+
+The following toolchains are needed to build _nncc_ project:
+ - CMake (>= 3.1)
+ - g++ (>= 4.8)
+
+#### How to build _nncc_ with docker
+
+_nncc_ provides a ``Dockerfile`` to make it easy to set up a development environment.
+
+One may build ``nncc`` docker image with the following command:
+```
+nncc$ cat infra/docker/Dockerfile | docker build -t nncc -
+...
+```
+
+By default, this ``Dockerfile`` uses "archive.ubuntu.com", which may be quite slow. One may use a mirror site via the ``UBUNTU_MIRROR`` build argument.
+For example, one may enable the use of ``kr.archive.ubuntu.com`` via the following command:
+```
+nncc$ cat infra/docker/Dockerfile | docker build --build-arg UBUNTU_MIRROR="kr.archive.ubuntu.com" -t nncc -
+...
+```
+
+One who works behind a proxy should provide the proxy configuration via the following command:
+```
+nncc$ cat infra/docker/Dockerfile | docker build --build-arg HTTP_PROXY=<HTTP proxy address> --build-arg HTTPS_PROXY=<HTTPS proxy address> -t nncc -
+...
+```
+One may use a simplified command if the ``HTTP_PROXY`` and ``HTTPS_PROXY`` environment variables are already set:
+```
+nncc$ export
+...
+declare -x HTTP_PROXY=...
+declare -x HTTPS_PROXY=...
+...
+nncc$ cat infra/docker/Dockerfile | docker build --build-arg HTTP_PROXY --build-arg HTTPS_PROXY -t nncc -
+...
+```
+
+Note that these configurations are orthogonal to each other. One may freely combine these options as follows:
+```
+nncc$ cat infra/docker/Dockerfile | docker build --build-arg HTTP_PROXY --build-arg HTTPS_PROXY --build-arg UBUNTU_MIRROR="kr.archive.ubuntu.com" -t nncc -
+```
+
+Once the ``nncc`` docker image is built, one may easily build _nncc_ with the following commands.
+```
+nncc$ ./nncc docker-nncc configure
+...
+nncc$ ./nncc docker-nncc build
+...
+```
+
+#### How to build _nncc_ with ninja
+
+You may build _nncc_ with ninja (instead of make) if ninja is available. Please try the following commands:
+```
+nncc$ rm -rf build
+nncc$ ./nncc configure -G Ninja
+nncc$ ./nncc build
+```
+
+#### How to build and run _nncc_ unittests
+
+_nncc_ includes various unittests to check its correctness. One may build and run these unittests via the following commands:
+```
+nncc$ rm -rf build
+nncc$ ./nncc configure -DENABLE_TEST=1
+nncc$ ./nncc build
+nncc$ ./nncc test
+```
+
+**NOTE** As _nncc_ unittests are implemented on top of the Google Test framework (_gtest_), the _nncc_ build script will automatically download _gtest_ 1.8 from public GitHub.
+If you are not able to access public GitHub from your machine, please override the download URL via the ``GTEST_URL`` environment variable.
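+
+For example, assuming a reachable internal mirror (the URL below is purely illustrative, not a real endpoint, and the archive name is an assumption), one may override the download URL as follows:
+```
+nncc$ export GTEST_URL="http://mirror.example.com/googletest/release-1.8.0.tar.gz"
+nncc$ ./nncc configure -DENABLE_TEST=1
+nncc$ ./nncc build
+```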
diff --git a/docs/nncc/images/nncc_components.png b/docs/nncc/images/nncc_components.png
new file mode 100644
index 000000000..becd63d14
--- /dev/null
+++ b/docs/nncc/images/nncc_components.png
Binary files differ
diff --git a/docs/nncc/images/nncc_idef0_a0.png b/docs/nncc/images/nncc_idef0_a0.png
new file mode 100644
index 000000000..9ba09681f
--- /dev/null
+++ b/docs/nncc/images/nncc_idef0_a0.png
Binary files differ
diff --git a/docs/nncc/images/nncc_idef0_a1.png b/docs/nncc/images/nncc_idef0_a1.png
new file mode 100644
index 000000000..c5ebec5d9
--- /dev/null
+++ b/docs/nncc/images/nncc_idef0_a1.png
Binary files differ
diff --git a/docs/nncc/images/nncc_idef0_a12.png b/docs/nncc/images/nncc_idef0_a12.png
new file mode 100644
index 000000000..dabcad718
--- /dev/null
+++ b/docs/nncc/images/nncc_idef0_a12.png
Binary files differ
diff --git a/docs/nncc/project/detailed_level_design.md b/docs/nncc/project/detailed_level_design.md
new file mode 100644
index 000000000..50fb8fa13
--- /dev/null
+++ b/docs/nncc/project/detailed_level_design.md
@@ -0,0 +1,329 @@
+# SW Detailed Level Design
+
+**Revision history**
+
+| Ver. | Date | Contents | Author | Approver |
+| ---- | ---------- | ----------------- | ----------------- | ------------ |
+| 0.1 | 2018.06.20 | Initial version | Vostokov Sergey | Sung-Jae Lee |
+| 0.2 | 2018.06.21 | SE member review | Alexey Kondrashov | |
+| 1.0 | 2018.06.22 | Final DR1 version | Vostokov Sergey | Sung-Jae Lee |
+
+**Terminology and Abbreviation**
+
+| | |
+| ------------ | ------------------------------------------------------------- |
+| OS | Operating System |
+| OS API | Application interface of OS |
+| HW | Hardware |
+| SW | Software |
+| NN | Neural Network |
+| NN model | Neural network model (Instance of NN built with ML framework) |
+| NN compiler | The compiler for neural network |
+| ML framework | The machine learning framework |
+| TF/TF Lite | Tensorflow/Tensorflow Lite ML framework |
+| IR | Intermediate representation |
+| CI/CI system | Continuous integration system |
+| UI | The user interface |
+| GUI | The graphical user interface |
+| CLI | The command-line interface |
+
+**References**
+
+\[1\] Vostokov Sergey, [SW Requirements Specification](requirements_specification.md)
+
+\[2\] Vostokov Sergey, [SW High-Level Design](high_level_design.md)
+
+## Overview
+
+### Scope
+
+The main goal of the project is to develop a compiler for neural
+networks that produces an executable artefact for a specified SW and HW
+platform.
+
+The development scope includes the following components:
+
+ - Develop an importer module to parse, verify and represent an NN
+   model for further optimization and compilation
+ - Develop code emitters to produce executable binaries for CPU and GPU
+
+
+**2018 year goals:**
+
+ - Support TensorFlow Lite NN model format
+ - Support Caffe NN model format
+ - Support Caffe2 NN model format (Optional)
+ - Support compilation of MobileNet NN
+ - Support compilation of Inception v3 NN
+ - Support ARM CPU
+ - Support ARM GPU (Mali)
+ - Support Tizen OS
+ - Support SmartMachine OS (Optional)
+
+| Product | Target Model Name | Comment |
+| ------------------- | ------------------------------ | ---------------- |
+| Tizen phone | Tizen TM2 | Reference device |
+| Tizen device | Odroid XU4 | Reference board |
+| SmartMachine target | Microvision mv8890, exynos8890 | Reference device |
+
+Table 1-1. Target Model
+
+### Design Consideration
+
+Deep learning software demands reliability and performance. The
+historically common approach is to develop a SW framework (machine
+learning framework) that computes each step of the neural network
+inference process on the supported hardware. This approach is used in
+many popular solutions like Google TensorFlow/TensorFlow Lite,
+Caffe/Caffe2, etc. Traditionally, neural network developers build a
+computation graph and then an appropriate machine learning framework
+interprets it. Recent results in the AI field show that this
+node-visitor method of execution is inefficient. As a result, the
+industry has worked out a second approach: a neural network compiler
+that produces code that executes more efficiently.
+
+This document presents the design of the *nncc*, a neural network
+compiler collection. The design should provide the easiest way to extend
+the functionality of the *nncc* by adding new modules with the following
+features:
+
+ - Support neural networks produced by various machine learning
+ frameworks;
+ - Produce an artefact taking advantages of various hardware
+ including specialized processors like NPU;
+ - Apply new domain specific optimization techniques over given NN.
+
+Non-functional requirements for the developed software are described
+in the SW Requirements Specification; they are not repeated here to
+avoid duplication.
+
+### Constraints
+
+See constraints in SW Requirements Specification.
+
+
+<table>
+<colgroup>
+<col style="width: 24%" />
+<col style="width: 64%" />
+<col style="width: 10%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Item</th>
+<th>Assumptions, Dependencies and the Constraints</th>
+<th>Reference</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>Tizen SW Platform</td>
+<td><dl>
+<dt>The following items should be provided:</dt>
+<dd><ul>
+<li>Tizen API</li>
+<li>Tizen kernel</li>
+<li>Tizen FW</li>
+<li>Tizen SDK</li>
+<li>Tizen naming convention</li>
+</ul>
+</dd>
+</dl></td>
+<td>- <a href="www.tizen.org" class="uri">www.tizen.org</a> <br>- <a href="wiki.tizen.org" class="uri">wiki.tizen.org</a> <br>- <a href="developer.tizen.org" class="uri">developer.tizen.org</a></td>
+</tr>
+<tr class="even">
+<td>SmartMachine OS Platform</td>
+<td><dl>
+<dt>The following items should be provided:</dt>
+<dd><ul>
+<li>SmartMachine API</li>
+<li>SmartMachine kernel</li>
+<li>SmartMachine FW</li>
+<li>SmartMachine SDK</li>
+<li>SmartMachine naming convention</li>
+</ul>
+</dd>
+</dl></td>
+<td>- <a href="http://suprem.sec.samsung.net/confluence/pages/viewpage.action?pageId=81833987">Platform confluence</a> <br>- <a href="https://github.sec.samsung.net/RS7-SmartMachine">Github</a> <br>- <a href="http://suprem.sec.samsung.net/confluence/display/ASEC/Adaptive+AUTOSAR">Functional Safety confluence</a></td>
+</tr>
+<tr class="odd">
+<td>Host OS</td>
+<td>Linux-based OS (Ubuntu, Archlinux, etc)</td>
+<td>- <a href="https://www.ubuntu.com/">Ubuntu site</a> <br>- <a href="https://www.archlinux.org/">Archlinux site</a></td>
+</tr>
+<tr class="even">
+<td>Tizen target HW</td>
+<td>The reference device should be provided: Tizen TM2</td>
+<td></td>
+</tr>
+<tr class="odd">
+<td>SmartMachine target HW</td>
+<td>The reference device should be provided</td>
+<td></td>
+</tr>
+</tbody>
+</table>
+Table 1-2. Assumptions, Dependencies and the Constraints
+
+## SW Detailed Structure Design
+
+### SW Block Structure
+
+The top-level components of the nncc are described in the HLD. A more
+detailed structure and class diagram will be available after
+development completion.
+
+### SW Block Feature
+
+1. Initialization: configure all internal modules (see
+ [{Initialization} Detailed Design](#initialization-detailed-design))
+2. Frontend: Import NN model (see [{Import NN model} Detailed
+ Design](#import-nn-model-detailed-design))
+      - *Caffe frontend*: includes the parser of the Caffe NN model
+        format, a verifier to ensure that the parsed data is valid and
+        consistent, and a Caffe-specific IR converter
+      - *Caffe2 frontend*: includes the parser of the Caffe2 NN model
+        format, a verifier to ensure that the parsed data is valid and
+        consistent, and a Caffe2-specific IR converter to Model IR
+      - *Tensorflow Lite frontend*: includes the parser of the
+        Tensorflow Lite NN model format with an automatic version
+        recognition feature, a verifier to ensure that the parsed data
+        is valid and consistent, and a Tensorflow Lite-specific IR
+        converter to Model IR
+3. Backend: Generate the code (see [{Generate the code} Detailed
+ Design](#generate-the-code-detailed-design))
+      - *Interpreter*: As described in the SW High-Level Design
+        document, an imported NN model may pass through three levels
+        of intermediate representation: Model IR, Coarse-Grained IR,
+        and Fine-Grained IR. The Interpreter backend uses each of
+        these IRs to perform inference on the given NN model. As
+        output, the user gets the results computed by all NN ops
+        included in the original computation graph.
+      - *Binary*: This type refers to generating binary code that can
+        be executed on the target device. The NN compiler can generate
+        code that is either executed solely on the CPU or takes
+        advantage of the GPU when possible, if the corresponding
+        target was specified. The user may want to incorporate 3rd
+        party libraries included in the target firmware or delivered
+        with the application package. In this case, the compiler
+        prepares the data following the EABI convention and embeds
+        invocations of the high-level functions by the appropriate
+        symbols.
+      - *Soft*: The resulting program is generated source code in a
+        high-level programming language (C or C++). There are two
+        options: the first is to generate source code that does not
+        depend on any libraries outside of itself, with the exception
+        of system libraries. The second is to include code that
+        invokes high-level functions from 3rd party libraries, for
+        example, an invocation of matrix multiplication from a GEMM
+        library.
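As a hedged sketch only, the three-way split between backend kinds might be dispatched as below; the `ModelIR` class and `run_backend` function are invented for illustration and are not the actual nncc interfaces.

```python
# Hypothetical sketch of dispatching an imported Model IR to one of the
# three backend kinds described above: Interpreter, Binary, and Soft.
# ModelIR and run_backend are illustrative names, not real nncc APIs.

class ModelIR:
    def __init__(self, ops):
        self.ops = ops  # ordered list of NN operation names

def run_backend(kind: str, model: ModelIR, target: str) -> str:
    if kind == "interpreter":
        # Walk the computation graph and execute every op in order.
        return f"executed {len(model.ops)} ops"
    if kind == "binary":
        # Emit a native binary for the requested CPU/GPU target.
        return f"binary for {target}"
    if kind == "soft":
        # Emit self-contained C/C++ source code instead of a binary.
        return f"C++ source for {target}"
    raise ValueError(f"unknown backend kind: {kind}")
```

A real backend would of course carry far more state (target ISA, runtime API, 3rd party library bindings); the sketch only mirrors the three backend kinds listed above.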
+
+## SW Detailed Operation Design
+
+### {Initialization} Detailed Design
+
+#### Major Function
+
+To provide a valid configuration session for all modules of *nncc* using
+user input from the command line/config file/environment variables.
+
+#### Operation Sequence
+
+Initialization of the *nncc* includes command line option processing,
+configuration of its subsystems as well as any error checking possible
+at this stage. It consists of the following steps:
+
+1. Collect all command line options and verify their format for
+ validity (no syntax errors etc.)
+
+2. Check for validity and then process general options
+
+3. Load subsystem modules
+
+4. For each one of them:
+
+ - Configure
+ - Pass command line options
+ - Check command line options for validity (for example, check
+ that every required option is present)
+
+At the end of this process each subsystem is configured and has access
+to all data needed for its operation.
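The four-step sequence above can be sketched as follows. This is an illustration under assumed conventions (a `key=value` option syntax and a minimal subsystem descriptor), not the actual nncc implementation.

```python
# Illustrative initialization sketch: collect options, check syntax,
# process general options, then configure each subsystem and let it
# validate that every required option is present. The option syntax
# and subsystem descriptor format are assumptions for this example.

def initialize(argv, subsystems):
    # Step 1: collect options and verify their syntax (key=value form).
    options = {}
    for arg in argv:
        if "=" not in arg:
            raise SystemExit(f"malformed option: {arg}")
        key, value = arg.split("=", 1)
        options[key] = value
    # Step 2: check and process general options.
    verbose = options.pop("--verbose", "0") == "1"
    # Steps 3-4: load each subsystem module, pass it the remaining
    # options, and check them for validity.
    for sub in subsystems:
        missing = [opt for opt in sub["required"] if opt not in options]
        if missing:
            raise SystemExit(f"{sub['name']}: missing options {missing}")
        sub["config"] = {k: options[k] for k in sub["required"]}
    return verbose, subsystems
```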
+
+### {Import NN model} Detailed Design
+
+#### Major Function
+
+To convert given NN model from framework-specific IR to Model IR for
+further processing.
+
+#### Operation Sequence
+
+As shown in the diagram below, neural network import is the main
+function of the compiler front-end part. The result of this operation
+is a computation graph presented as Model IR.
+
+![image](../images/nncc_idef0_a12.png)
+
+The import process consists of three parts:
+
+1. NN model parsing
+2. Verification of the result from the previous step
+3. Converting the model to the Model IR
+
+During the first step, the file or files containing the model are read
+and represented in a format specific to each NN framework.
+
+The verification step is included to ensure that:
+
+ - None of the files constituting the model are damaged
+ - Model format corresponds to the specified one
+ - Version of the model format corresponds to the specified one
+
+The most important step is accurately converting the model from the
+framework-specific representation to the Model IR. This conversion
+includes:
+
+  - *Translation of the NN model computation graph to the Model IR
+    computation graph.* During the translation, new nodes may be
+    introduced; for example, a high-level NN operation may be split
+    into a few smaller ones.
+  - *NN model parameter layout conversion.* The way parameters (also
+    known as weights) of a model are laid out differs between NN
+    frameworks, and it is necessary to convert such layouts into a
+    unified format.
+  - *NN operation parameter conversion.* Each NN operation has a set
+    of its own parameters describing the way this operation should be
+    performed, and these parameters also differ between frameworks.
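As a toy illustration of the parameter layout conversion item, the sketch below permutes a convolution weight tensor from an assumed OIHW layout to OHWI; the nested Python lists standing in for a real tensor type are purely an assumption for the example.

```python
# Illustrative layout conversion: permute a 4-D weight tensor from
# OIHW (output, input, height, width) to OHWI. A real frontend would
# operate on tensor buffers, not nested lists.

def oihw_to_ohwi(w):
    o_dim, i_dim = len(w), len(w[0])
    h_dim, w_dim = len(w[0][0]), len(w[0][0][0])
    # out[o][h][x][i] = w[o][i][h][x]: input channel moves innermost.
    return [[[[w[o][i][h][x] for i in range(i_dim)]
              for x in range(w_dim)]
             for h in range(h_dim)]
            for o in range(o_dim)]
```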
+
+The resulting Model IR is equivalent to the initial NN model in terms
+of how the NN model inputs would be transformed into its outputs if
+all the operations in the Model IR were executed.
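Schematically, the three-step import sequence (parse, verify, convert) could be sketched as below. The JSON container, the "demo" format name, and the field names are invented for illustration and do not correspond to any real NN framework format.

```python
# Schematic import pipeline: parse the container, verify format and
# version, then convert to a unified Model IR (a plain op list here).
# The "demo" format is invented for this sketch.
import json

def import_model(text):
    # Step 1: parse the framework-specific container (JSON here).
    raw = json.loads(text)
    # Step 2: verify that the format and its version match what was specified.
    if raw.get("format") != "demo":
        raise ValueError("model format does not match the specified one")
    if raw.get("version") != 1:
        raise ValueError("unsupported model format version")
    # Step 3: convert the parsed graph to the Model IR.
    return [{"op": node["type"], "attrs": node.get("attrs", {})}
            for node in raw["graph"]]
```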
+
+### {Generate the code} Detailed Design
+
+Development is in progress. This section will be described at the Completion DR.
+
+## Interface Design
+
+Development is in progress. This section will be described at DR2.
+
+## SW Code Structure
+
+| Directory | Description |
+| ------------------------ | -------------------------------------------------------------------- |
+| / | source codes of the build system, main README file |
+| /contrib | Incubating projects |
+| /doc | Contains the documentation of the project |
+| /doc/project | Contains project management documents (SRS, SDD, STD, HLD, DLD, etc) |
+| /libs | Contains the source of the libraries which are used by the nncc |
+| /libs/core | Contains the source code of the core library of nncc |
+| /libs/frontend           | Contains the source code of supported frontend plugins               |
+| /libs/frontend/caffe | The source code for the Caffe frontend |
+| /libs/frontend/caffe2 | The source code for the Caffe2 frontend |
+| /libs/frontend/tflite | The source code for the Tensorflow Lite frontend |
+| /libs/backend            | Contains the source code of supported backend plugins                |
+| /libs/backend/cpu | Contains the source code of CPU backend |
+| /libs/backend/gpu | Contains the source code of GPU backend |
+| /libs/backend/3rd\_party | Contains the source code of backend to utilize 3rd party libraries |
+| /scripts | Various scripts for building and testing the nncc |
+| /tools | The source code of the executables |
diff --git a/docs/nncc/project/development_document.md b/docs/nncc/project/development_document.md
new file mode 100644
index 000000000..8315dd3b6
--- /dev/null
+++ b/docs/nncc/project/development_document.md
@@ -0,0 +1,257 @@
+# SW Development Document
+
+**Revision history**
+
+| Ver. | Date | Contents | Author | Approver |
+| ---- | ---------- | --------------------------- | --------------- | ------------ |
+| 0.1 | 2018.04.12 | Initial version | Vostokov Sergey | Sung-Jae Lee |
+| 0.2 | 2018.04.16 | SE member in-charge review | Ilya Lopatin | |
+| 1.0 | 2018.04.17 | Final Execution DR version | Vostokov Sergey | Sung-Jae Lee |
+| 1.1 | 2018.04.17 | Add SW Quality Verification | Vostokov Sergey | Sung-Jae Lee |
+
+**Terminology and Abbreviation**
+
+| | |
+| ------------ | ------------------------------------------------------------- |
+| OS | Operating System |
+| OS API | Application interface of OS |
+| HW | Hardware |
+| SW | Software |
+| NN | Neural Network |
+| NN model | Neural network model (Instance of NN built with ML framework) |
+| NN compiler | The compiler for neural network |
+| ML framework | The machine learning framework |
+| TF/TF Lite | Tensorflow/Tensorflow Lite ML framework |
+| IR | Intermediate representation |
+| CI/CI system | Continuous integration system |
+| UI | The user interface |
+| GUI | The graphical user interface |
+| CLI | The command-line interface |
+
+## Project Overview
+
+### Purpose and Scope
+
+The main goal of the project is to develop a compiler for neural networks that produces an executable artefact for a specified SW and HW platform.
+
+The development scope includes the following components:
+
+  - Develop an importer module to parse, verify, and represent an NN model for further optimization and compilation
+  - Develop code emitters to produce executable binaries for CPU and GPU
+
+
+**2018 year goals:**
+
+ - Support TensorFlow Lite NN model format
+ - Support Caffe NN model format
+ - Support Caffe2 NN model format (Optional)
+ - Support compilation of MobileNet NN
+ - Support compilation of Inception v3 NN
+ - Support ARM CPU
+ - Support ARM GPU (Mali)
+ - Support Tizen OS
+ - Support SmartMachine OS (Optional)
+
+| Product | Target Model Name | Comment |
+| ------------------- | ------------------------------ | ---------------- |
+| Tizen phone | Tizen TM2 | Reference device |
+| Tizen device | Odroid XU4 | Reference board |
+| SmartMachine target | Microvision mv8890, exynos8890 | Reference device |
+
+### Assumptions, Dependencies and Constraints
+
+<table>
+<colgroup>
+<col style="width: 26%" />
+<col style="width: 46%" />
+<col style="width: 26%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Item</th>
+<th>Assumptions, Dependencies and the Constraints</th>
+<th>Reference</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>Tizen SW Platform</td>
+<td><dl>
+<dt>The following items should be provided:</dt>
+<dd><ul>
+<li>Tizen API</li>
+<li>Tizen kernel</li>
+<li>Tizen FW</li>
+<li>Tizen SDK</li>
+<li>Tizen naming convention</li>
+</ul>
+</dd>
+</dl></td>
+<td><ul>
+<li><a href="www.tizen.org" class="uri">www.tizen.org</a></li>
+<li><a href="wiki.tizen.org" class="uri">wiki.tizen.org</a></li>
+<li><a href="developer.tizen.org" class="uri">developer.tizen.org</a></li>
+</ul></td>
+</tr>
+<tr class="even">
+<td>SmartMachine OS Platform</td>
+<td><dl>
+<dt>The following items should be provided:</dt>
+<dd><ul>
+<li>SmartMachine API</li>
+<li>SmartMachine kernel</li>
+<li>SmartMachine FW</li>
+<li>SmartMachine SDK</li>
+<li>SmartMachine naming convention</li>
+</ul>
+</dd>
+</dl></td>
+<td>- <a href="http://suprem.sec.samsung.net/confluence/pages/viewpage.action?pageId=81833987">Platform confluence</a> <br>- <a href="https://github.sec.samsung.net/RS7-SmartMachine">Github</a> <br>- <a href="http://suprem.sec.samsung.net/confluence/display/ASEC/Adaptive+AUTOSAR">Functional Safety confluence</a></td>
+</tr>
+<tr class="odd">
+<td>Host OS</td>
+<td>Linux-based OS (Ubuntu, Archlinux, etc)</td>
+<td>- <a href="https://www.ubuntu.com/">Ubuntu site</a> <br>- <a href="https://www.archlinux.org/">Archlinux site</a></td>
+</tr>
+<tr class="even">
+<td>Tizen target HW</td>
+<td>The reference device should be provided: Tizen TM2</td>
+<td></td>
+</tr>
+<tr class="odd">
+<td>SmartMachine target HW</td>
+<td>The reference device should be provided</td>
+<td></td>
+</tr>
+</tbody>
+</table>
+
+## Development Plan And Result
+
+### Development Schedule
+
+| Task | Deliverable | Plan start | Plan end | Result start | Result end | Responsibility |
+| ------------------------------------ | --------------------------------- | ---------- | -------- | ------------ | ---------- | -------------- |
+| Prepare SW requirements | SRS | 04.2018 | 04.2018 | | | S. Vostokov |
+| Prepare initial SW Test Document | STD | 04.2018 | 04.2018 | | | S. Vostokov |
+| Prepare Initial Project Plan | SDD | 04.2018 | 04.2018 | | | S. Vostokov |
+| Prepare SW Test Document | STD | 04.2018 | 06.2018 | | | S. Vostokov |
+| Prepare design document | HLD, DLD | 05.2018 | 08.2018 | | | S. Vostokov |
+| Prepare test result | STD, UTR | 04.2018 | 10.2018 | | | S. Vostokov |
+| Prepare project completion documents | SDD, Project completion report | 05.2018 | 12.2018 | | | S. Vostokov |
+| Implement Caffe Importer | Caffe NN model Importer | 05.2018 | 09.2018 | | | S. Vostokov |
+| Implement code emitter for CPU | Code emitter | 05.2018 | 09.2018 | | | S. Vostokov |
+| Implement TF Lite Importer | TensorFlow Lite NN model Importer | 05.2018 | 11.2018 | | | S. Vostokov |
+| Implement code emitter for GPU | Code emitter | 02.2018 | 11.2018 | | | S. Vostokov |
+
+### SW Metrics
+
+| Category | Metric | Collection Method | Collection Period | Planned | Actual | Responsibility |
+| -------- | ---------------------------------------------------------------------- | ------------------------ | ----------------------- | ----------------- | ------ | -------------- |
+| Quality | Test pass rate | GTest | 22.02.2018 - 31.12.2018 | 100% | | S. Vostokov |
+| Quality | Defects density | Defect management system | 22.02.2018 - 31.12.2018 | \<= 1 defect/KLOC | | S. Vostokov |
+| Quality | Defects removal rate | Defect management system | 22.02.2018 - 31.12.2018 | 100% | | S. Vostokov |
+| Quality | Critical defects | Static analysis | 22.02.2018 - 31.12.2018 | 0 | | S. Vostokov |
+| Quality | Major defects | Static analysis | 22.02.2018 - 31.12.2018 | 0 | | S. Vostokov |
+| Quality | Code review issue removal | Samsung Research github | 22.02.2018 - 31.12.2018 | 100% | | S. Vostokov |
+| Quality | Comments Rate | `cloc` tool | 22.02.2018 - 31.12.2018 | Exceed 20% | | S. Vostokov |
+| Quality | Cyclomatic Complexity | SVACE | 22.02.2018 - 31.12.2018 | \< 50 | | S. Vostokov |
+| Quality | Unused Items (Unused Files, Unused Functions, Unused Global Variables) | gcc/g++ | 22.02.2018 - 31.12.2018 | 0 | | S. Vostokov |
+| Process | Project On-time Completion Rate | PLM | 22.02.2018 - 31.12.2018 | 100% | | S. Vostokov |
+| Process | Milestone On-time Completion Rate | PLM | 22.02.2018 - 31.12.2018 | 100% | | S. Vostokov |
+| Process | Process compliance | Audit | 22.02.2018 - 31.12.2018 | 100% | | S. Vostokov |
+
+### SW Configurations Management
+
+#### Document
+
+| No | Configuration Item | Location | Submitter |
+| -- | ---------------------------- | -------- | ----------- |
+| 1 | SW Requirement Specification | PLM | S. Vostokov |
+| 2 | SW Development Document | PLM | S. Vostokov |
+| 3 | SW High Level Document | PLM | S. Vostokov |
+| 4 | SW Detailed Level Document | PLM | S. Vostokov |
+| 5 | SW System Test Document | PLM | S. Vostokov |
+| 6 | SW Unit Test Report | PLM | S. Vostokov |
+
+#### SW Source Code
+
+SW Repository:
+<https://github.sec.samsung.net/STAR/nncc>
+
+ git clone https://github.sec.samsung.net/STAR/nncc.git
+
+#### Baseline
+
+| Phase | Baseline Name | SW Configuration Item |
+| ------------------ | ------------------ | ------------------------------------------------------------------------------------------- |
+| 04.2018 Plan | Execution DR | SW Requirement Specification, SW Development Document, System Test Document initial version |
+| 06.2018 Execution | DR1 | System Test Document |
+| 08.2018 Execution | Design document | SW High Level Document, SW Detailed Design Document |
+| 09.2018 Execution | DR2 | |
+| 10.2018 Execution | Test report | SW System Test Document (result), SW Unit Test Report |
+| 12.2018 Completion | Project Completion | Project Completion Report |
+
+## SW Quality Verification
+
+### SW Verification
+
+| No | Verification Item | Quality Goal | Tool | Phase | Development Team Member in Charge | Result | Note |
+| -- | -------------------------------- | ------------------------------------------ | -------- | --------- | --------------------------------- | ------ | ---- |
+| 1 | Open source License Verification | Clear violations of open source obligation | ProtexIP | Execution | Vostokov Sergey | | |
+| 2 | Potential Defect | Fix all defects | Svace | Test | Vostokov Sergey | | |
+| 3 | System Defect | Fix Critical/ Major defects | Github | Test | Vostokov Sergey | | |
+
+### Static Analysis
+
+| No | Activity | Schedule | Result | Comment |
+| -- | --------------------------- | ---------- | ------ | ------- |
+| 1 | SA Verification I (SVACE) | 28.09.2018 | | |
+| 2 | SA Verification II (SVACE) | 30.11.2018 | | |
+| 3  | SA Verification III (SVACE) | 31.12.2018 |        |         |
+
+### Coding Standard
+
+| No | Activity | Schedule | Result | Comment |
+| -- | ----------------------------------------------------- | -------- | ------ | ------- |
+| 1 | Coding standard enforcement with `clang-format` tool. | Regular | | |
+
+
+### Convergence (integration testing)
+
+Out of scope, since integration with other SW is not required by the
+SW Requirements Specification.
+
+### Dynamic Analysis
+
+| No | Activity | Schedule | Result | Comment |
+| -- | ------------------- | ---------- | ------ | ------- |
+| 1 | DA Verification I | 28.09.2018 | | |
+| 2 | DA Verification II | 30.11.2018 | | |
+| 3  | DA Verification III | 31.12.2018 |        |         |
+
+
+### Architecture Analysis
+
+SW architecture verification is managed by HQ.
+
+### SW Security
+
+Out of the project scope since the project is not related to SW security.
+
+### Code Review
+
+| No | Activity | Schedule | Result | Comment |
+| -- | ----------- | -------- | ------ | ------------------------------------------------------------------- |
+| 1 | Code review | Regular | | All code is reviewed manually using `github` tool before committing |
+
+## Risk Management
+
+| Priority | Risk Description | Risk Reduction Solution | Schedule | Result | Responsibility |
+| -------- | ------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------- | ----------------- | ------ | -------------- |
+| 1        | Project scope is changed due to an extra HQ request                                                       | Discuss the new requirements via email and messenger, update SRS                                            | 02.2018 - 12.2018 |        | S. Vostokov    |
+| 2 | Unavoidable technical difficulties during requirements implementation | Submit requirements changes and get confirmation from HQ | 02.2018 - 12.2018 | | S. Vostokov |
+| 3 | Not enough HR | Hire team members as soon as possible, request assistance from other teams | 02.2018 - 12.2018 | | S. Vostokov |
+| 4 | Use of GPL code | Minimize usage of GPL code, wrap GPL modules with well-defined interfaces so they can be easily replaced. | 02.2018 - 12.2018 | | S. Vostokov |
+| 5        | Requirements would change due to external or internal circumstances, e.g. new technology or product launch | Discuss project changes and make corrections                                                                | 02.2018 - 12.2018 |        | S. Vostokov    |
+
diff --git a/docs/nncc/project/high_level_design.md b/docs/nncc/project/high_level_design.md
new file mode 100644
index 000000000..a15aaca4a
--- /dev/null
+++ b/docs/nncc/project/high_level_design.md
@@ -0,0 +1,457 @@
+# SW High Level Design
+
+**Revision history**
+
+| Ver. | Date | Contents | Author | Approver |
+| ---- | ---------- | ----------------- | ----------------- | ------------ |
+| 0.1 | 2018.05.25 | Initial version | Vostokov Sergey | Sung-Jae Lee |
+| 0.2 | 2018.06.21 | SE member review | Alexey Kondrashov | |
+| 1.0 | 2018.06.22 | Final DR1 version | Vostokov Sergey | Sung-Jae Lee |
+
+**Terminology and Abbreviation**
+
+| Terminology | Description |
+| ------------ | ------------------------------------------------------------- |
+| OS | Operating System |
+| OS API | Application interface of OS |
+| HW | Hardware |
+| SW | Software |
+| NN | Neural Network |
+| NN model | Neural network model (Instance of NN built with ML framework) |
+| NN compiler | The compiler for neural network |
+| ML framework | The machine learning framework |
+| TF/TF Lite | Tensorflow/Tensorflow Lite ML framework |
+| IR | Intermediate representation |
+| CI/CI system | Continuous integration system |
+| UI | The user interface |
+| GUI | The graphical user interface |
+| CLI | The command-line interface |
+
+**References**
+
+\[1\] Vostokov Sergey, [SW Requirements Specification](requirements_specification.md)
+
+## Overview
+
+### Scope
+
+The main goal of the project is to develop a compiler for neural
+networks that produces an executable artefact for a specified SW and
+HW platform.
+
+The development scope includes the following components:
+
+  - Develop an importer module to parse, verify, and represent an NN
+    model for further optimization and compilation
+  - Develop code emitters to produce executable binaries for CPU and GPU
+
+
+**2018 year goals:**
+
+ - Support TensorFlow Lite NN model format
+ - Support Caffe NN model format
+ - Support Caffe2 NN model format (Optional)
+ - Support compilation of MobileNet NN
+ - Support compilation of Inception v3 NN
+ - Support ARM CPU
+ - Support ARM GPU (Mali)
+ - Support Tizen OS
+  - Support SmartMachine OS (Optional)
+
+| Product | Target Model Name | Comment |
+| ------------------- | ------------------------------ | ---------------- |
+| Tizen phone | Tizen TM2 | Reference device |
+| Tizen device | Odroid XU4 | Reference board |
+| SmartMachine target | Microvision mv8890, exynos8890 | Reference device |
+
+Table 1-1. Target Model
+
+### Design Consideration
+
+Deep learning software demands reliability and performance. The
+traditional approach is to develop a SW framework (machine learning
+framework) that computes each step of the neural network inference
+process on the supported hardware. This approach is used in many
+popular solutions, such as Google Tensorflow/Tensorflow Lite and
+Caffe/Caffe2. Traditionally, neural network developers build a
+computation graph, and then an appropriate machine learning framework
+interprets it. Recent findings in the AI field show that this
+node-visitor method of execution is inefficient. As a result, the
+industry has worked out a second approach: a neural network compiler
+that executes code more efficiently.
+
+This document presents the design of the *nncc*, a neural network
+compiler collection. The design should provide the easiest way to extend
+the functionality of the *nncc* by adding new modules with the following
+features:
+
+ - Support neural networks produced by various machine learning
+ frameworks;
+ - Produce an artefact taking advantages of various hardware
+ including specialized processors like NPU;
+ - Apply new domain specific optimization techniques over given NN.
+
+### Constraints
+
+See constraints in SW Requirements Specification.
+
+<table>
+<colgroup>
+<col style="width: 24%" />
+<col style="width: 64%" />
+<col style="width: 10%" />
+</colgroup>
+<thead>
+<tr class="header">
+<th>Item</th>
+<th>Assumptions, Dependencies and the Constraints</th>
+<th>Reference</th>
+</tr>
+</thead>
+<tbody>
+<tr class="odd">
+<td>Tizen SW Platform</td>
+<td><dl>
+<dt>The following items should be provided:</dt>
+<dd><ul>
+<li>Tizen API</li>
+<li>Tizen kernel</li>
+<li>Tizen FW</li>
+<li>Tizen SDK</li>
+<li>Tizen naming convention</li>
+</ul>
+</dd>
+</dl></td>
+<td>- <a href="www.tizen.org" class="uri">www.tizen.org</a> <br>- <a href="wiki.tizen.org" class="uri">wiki.tizen.org</a> <br>- <a href="developer.tizen.org" class="uri">developer.tizen.org</a></td>
+</tr>
+<tr class="even">
+<td>SmartMachine OS Platform</td>
+<td><dl>
+<dt>The following items should be provided:</dt>
+<dd><ul>
+<li>SmartMachine API</li>
+<li>SmartMachine kernel</li>
+<li>SmartMachine FW</li>
+<li>SmartMachine SDK</li>
+<li>SmartMachine naming convention</li>
+</ul>
+</dd>
+</dl></td>
+<td>- <a href="http://suprem.sec.samsung.net/confluence/pages/viewpage.action?pageId=81833987">Platform confluence</a> <br>- <a href="https://github.sec.samsung.net/RS7-SmartMachine">Github</a> <br>- <a href="http://suprem.sec.samsung.net/confluence/display/ASEC/Adaptive+AUTOSAR">Functional Safety confluence</a></td>
+</tr>
+<tr class="odd">
+<td>Host OS</td>
+<td>Linux-based OS (Ubuntu, Archlinux, etc)</td>
+<td>- <a href="https://www.ubuntu.com/">Ubuntu site</a> <br>- <a href="https://www.archlinux.org/">Archlinux site</a></td>
+</tr>
+<tr class="even">
+<td>Tizen target HW</td>
+<td>The reference device should be provided: Tizen TM2</td>
+<td></td>
+</tr>
+<tr class="odd">
+<td>SmartMachine target HW</td>
+<td>The reference device should be provided</td>
+<td></td>
+</tr>
+</tbody>
+</table>
+Table 1-2. Assumptions, Dependencies and the Constraints
+
+## SW System Architecture Design
+
+### Overall Architecture
+
+The picture below presents the result of a high-level analysis of the
+requirements which **nncc** should satisfy. It describes the main
+function **Compilation** of the compiler collection using the IDEF0
+(functional modeling) notation. Full information on the IDEF family of
+modeling languages is available at [Wikipedia:
+IDEF](https://en.wikipedia.org/wiki/IDEF).
+
+![image](../images/nncc_idef0_a0.png)
+
+Figure 1. Top-Level Context Diagram of compilation function.
+
+
+The short explanation of the **Figure 1**:
+
+**1. Input entities:**
+
+  - *NN Model instance:* It is the main input of *nncc*. The compiler
+    takes from the user information describing the neural network to
+    be compiled. In most cases, this NN is produced by a machine
+    learning framework and stored in one or more files. The contents
+    of these files constitute the essence of the neural network. Here
+    it is denoted as an instance of the NN model.
+  - *Command line options:* In order to provide the most convenient
+    way to use the compiler, it should be configurable. The current
+    design presents a tool which has a Command Line Interface (CLI).
+    Command line options are a symbolic representation of directions
+    instructing the compiler how to set up a working session to get
+    the desired result.
+
+**2. Output:**
+
+  - *Target binaries:* Everything that is produced by the compilation
+    operation. In the general case, the result may consist of one or
+    more files. Each of them may be one of the following: an
+    executable, a source code file, or a log/verification/error
+    report. For example, when we require the compiler to compile a
+    neural network for execution on a GPU, the output artefact may be
+    OpenCL/C/C++ source code or a binary containing invocations of
+    the procedures delegating the calculations to the GPU.
+
+**3. Rules and notations:**
+
+  - *NN Model specification:* Each machine learning framework has its
+    own architecture design and uses its own format to
+    serialize/deserialize computation graphs which represent neural
+    networks. On a storage device, a model may be saved as one or
+    many files using a unique markup of binary data. To read and
+    process such data, *nncc* must first recognize the format of the
+    container. The importer/parser subsystem of *nncc* holds the full
+    knowledge of the NN specifications and is responsible for reading
+    and parsing NN models (see [Import NN
+    model](#import-nn-model)).
+ - *High-Level and Low-Level Optimization techniques:* Before
+ deployment, a neural network developer might want to verify their
+ product and optimize it by size and performance. There are many
+ techniques for reducing the common size of neural network weights
+ and improving performance of the inference. NN optimization
+ activity can be automated by implementing each technique in the
+ middleend according to its specifications (see [Apply
+ Optimizations](#apply-optimizations)).
+  - *Target Runtime Environment (TRE):* When the compiler produces a
+    binary for execution on a specific SW platform, it should take
+    into account the common API of this SW platform. It includes the
+    full public API of the chosen OS available to 3rd party
+    developers.
+  - *Target Instruction Set Architecture (Target ISA):* The resulting
+    artefact is always executed on a SW platform using some specified
+    API. The user may want to generate an artefact that uses OpenBLAS,
+    the Arm Compute Library, or something else (if supported by the
+    compiler) to perform calculations. To provide such a possibility,
+    *nncc* should be aware of the APIs of the specified 3rd party
+    libraries.
+ - *Device specifications:* Some of the optimization techniques may
+ take into account the technological features of the computing
+ device, like the time to perform some specific calculations. Such
+ information is very helpful during optimization of the final code
+ of the compiled artefact because it may be used to select an
+ optimal sequence of command invocations in order to achieve the
+ best performance.
+
+**4. Mechanism:**
+
+ - *Optimizing NN Compiler:* The implemented compiler itself. Since
+ *nncc* is dedicated to producing the code for the most efficient
+ execution, we may regard the tool as optimizing.
+ - *Host OS:* Since the compiler is a tool that works in some SW
+ Environment, the main Top-Level SW system is an Operating System.
+ In the SW Requirements specification it may be defined as a
+ Linux-like OS, for example Ubuntu, Archlinux, etc.
+
+### Composition of Architecture
+
+The compiler consists of three main parts: frontend, middleend, and
+backend. Together they form a neural network instance processing
+pipeline. In addition, one more part is in charge of compiler
+configuration.
+
+![image](../images/nncc_components.png)
+
+Figure 2. Top-level components of *nncc*.
+
+| Layer or Subsystem Name | Description |
+| ----------------------- | ---------------------------------------------------------------------------------------------------------------------------------------- |
+| Frontend | Imports a specified Neural Network, presents it as a computation graph |
+| Middleend | Provides various optimizations over the computation graph; at the end transforms it to internal IR |
+| Backend | Produces the specified artefact as a result of compilation procedure using specified parameters describing the target OS, target HW, etc |
+| Configuration system | Accepts command line options and configures *nncc* according to their contents |
+
+
+The detailed decomposition of the main function **Compilation** is
+presented in diagram A1 below.
+
+### Interface
+
+Like any console application, the *nncc* CLI accepts two types of
+options:
+
+ - Options that have values, for example, a name of the output executable
+ - Options that don't have values (switches) that turn various features on and off
+
+Additionally, options can be general and subsystem-specific.
+
+General options direct the process of the neural network compilation
+as a whole and also control utility functions such as the verbosity of
+the messages that *nncc* outputs during the compilation process.
+
+Subsystem-specific options control each respective subsystem:
+
+  - The frontend subsystem takes options that specify the NN model to
+    compile, its format, the version of that format, and so on.
+  - The middleend subsystem takes options that either enable specific
+    optimizations for the NN model or simply indicate the desired
+    outcome, for example "target performance efficiency" or "target
+    memory efficiency".
+  - The backend subsystem takes options that describe the desired
+    target device or architecture, and so on.
+
+For better usability, high-level options are also supported. A single
+high-level option is mapped to a group of lower-level options, similar
+to conventional compiler drivers such as gcc. For example, by choosing
+the single middleend option "target performance", the user lets *nncc*
+automatically select a number of performance optimizations.
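As a sketch of this driver-style mapping (all option names here are hypothetical illustrations, not actual *nncc* flags), a high-level option could simply be expanded into its group of lower-level options before regular option parsing takes place:

```python
# Illustrative sketch only: expanding driver-style high-level options
# into groups of lower-level options. All option names are hypothetical.

HIGH_LEVEL_OPTIONS = {
    # one high-level middleend option maps to several low-level ones
    "--target-performance": ["--fuse-ops", "--parallelize", "--use-fast-math"],
    "--target-memory": ["--quantize-weights", "--share-buffers"],
}

def expand_options(argv):
    """Replace every known high-level option with its low-level group."""
    expanded = []
    for opt in argv:
        expanded.extend(HIGH_LEVEL_OPTIONS.get(opt, [opt]))
    return expanded
```

With such a table, adding a new high-level mode only requires one more entry, while the rest of the configuration system keeps seeing ordinary low-level options.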
+
+## SW System Operation Design
+
+Figure 3 presents a more detailed composition of the main function
+**Compilation**. As shown in the previous section [Composition of
+Architecture](#composition-of-architecture), it is composed of 5
+subfunctions:
+
+ - Setup and configure each module - *Block 1* (See
+ [Initialization](#initialization) section)
+ - Import the specified neural network - *Block 2* (See [Import NN
+ model](#import-nn-model) section)
+ - Apply High-Level optimizations - *Block 3* (See [Apply
+ Optimizations](#apply-optimizations) section)
+ - Apply Low-Level optimizations - *Block 4* (See [Apply
+ Optimizations](#apply-optimizations) section)
+ - Generate the output code for specified target - *Block 5* (See
+ [Generate the code](#generate-the-code) section)
+
+![image](../images/nncc_idef0_a1.png)
+
+Figure 3. Decomposition of the top-level function **Compilation**.
+
+### Initialization
+
+At this stage all submodules of *nncc* are initialized. This procedure
+starts with command-line option processing and ends with the selection
+of all required and correctly configured modules. At the parsing stage
+the configuration system checks its own consistency. If the
+command-line option set is not enough to establish a valid
+configuration, environment variables are used. Also, almost all
+configuration options can be read from a config file if one is
+specified on the command line.
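The resolution order described above might be sketched as follows. The exact precedence (command line over config file over environment variables over defaults) and the `NNCC_`-prefixed variable names are assumptions made for illustration, not documented *nncc* behavior:

```python
# Sketch of one plausible configuration-resolution order:
# command line > config file > environment variables > defaults.
# The NNCC_ environment-variable prefix is a hypothetical convention.

import os

def resolve_config(cli_options, config_file_options, defaults, environ=None):
    """Build the final configuration dictionary from all sources."""
    environ = environ if environ is not None else os.environ
    config = dict(defaults)
    # environment variables override defaults
    for key in defaults:
        env_value = environ.get("NNCC_" + key.upper())
        if env_value is not None:
            config[key] = env_value
    config.update(config_file_options)   # config file overrides environment
    config.update(cli_options)           # command line has the final say
    return config
```

The consistency check mentioned above would then run on the merged dictionary, so an option missing from the command line can still be satisfied by a lower-priority source.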
+
+### Import NN model
+
+The major function of the *nncc* frontend is to import the specified
+NN model. This means that the frontend should recognize the format of
+the given NN model, parse all internal structures (load the
+computation graph using the framework-specific IR: NN topology, NN
+ops, weights), verify their correctness, and convert the model to the
+Model IR.
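As an illustration of the very first step, format recognition, a frontend might dispatch on a file's magic bytes or extension. The `TFL3` FlatBuffer file identifier is real TFLite metadata; the rest of the dispatch logic below is a simplified assumption, since Caffe/Caffe2 protobuf files carry no magic number:

```python
# Simplified sketch of NN model format detection. Only the "TFL3"
# TFLite file identifier is a real format marker; the extension-based
# fallbacks are naive assumptions for illustration.

def detect_model_format(path, header_bytes):
    """Guess a model's serialization format from its header or name."""
    if len(header_bytes) >= 8 and header_bytes[4:8] == b"TFL3":
        return "tflite"    # TFLite FlatBuffers carry "TFL3" at offset 4
    if path.endswith(".caffemodel") or path.endswith(".prototxt"):
        return "caffe"     # protobuf has no magic number; rely on extension
    if path.endswith(".pb"):
        return "caffe2"    # naive guess: .pb could also be other formats
    raise ValueError("unrecognized NN model format: " + path)
```

A real importer would of course verify the guess by actually parsing the file with the corresponding schema before building the Model IR.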
+
+### Apply Optimizations
+
+There are two levels of neural network optimizations in *nncc*.
+
+The first level is High-Level Optimizations; they are applied to the
+Model IR, which is the output of the NN Import subsystem.
+
+#### High-Level Optimizations
+
+High-Level optimizations can be divided into two groups:
+
+ - optimizations aimed at reducing the size of the resulting model -
+ *size optimizations*
+ - optimizations aimed at reducing the inference time of the model -
+ *performance optimizations*
+
+These two groups are not mutually exclusive. Some optimization
+techniques positively affect both size and performance, while some of
+them might reduce the size of the model at some performance cost.
+
+High-Level Optimizations in this sense are purely
+neural-network-specific, as they attempt to improve the model by
+manipulating the computation graph and the weights. For example, some
+techniques search for unused parts of the computation graph and remove
+them, while others search for parts of the graph that can be merged
+together, thus gaining some performance. Other techniques manipulate
+the neural network weights, either reducing their number or modifying
+their values in a way that allows for reduced storage consumption.
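As a purely illustrative example of such graph manipulation, a minimal unused-node elimination pass over a toy graph representation (not the actual Model IR) could look like this:

```python
# Toy sketch of dead-node elimination: keep only nodes that are
# reachable backwards from the graph outputs. The dict-based graph
# representation is illustrative, not nncc's Model IR.

def remove_unused_nodes(graph, outputs):
    """Prune nodes that contribute to no output.

    `graph` maps each node name to the list of its input node names.
    """
    live = set()
    stack = list(outputs)
    while stack:
        node = stack.pop()
        if node not in live:
            live.add(node)
            stack.extend(graph.get(node, []))
    return {node: inputs for node, inputs in graph.items() if node in live}
```

For a graph where a hypothetical `debug` node consumes `conv` but feeds no output, the pass keeps `input -> conv -> relu` and drops `debug`.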
+
+Currently, High-Level Optimizations are out of scope of the project.
+
+#### Low-Level Optimization
+
+The Low-Level Optimizations are applied by the compiler closer to the
+end of the whole compilation process, just before executable
+generation. The input for this stage of *nncc* is the Coarse-Grained
+IR, which is the output of the High-Level Optimization subsystem.
+
+### Generate the code
+
+The present architecture allows for several backend solutions,
+depending on the specified target. These solutions can be divided into
+three types:
+
+  - *Interpretation.* At every step, inference can be carried out by
+    interpreting the IR produced after that step.
+  - *Soft backend.* The resulting program can be generated as source
+    code in a high-level programming language (e.g., C/C++) that does
+    not depend on any libraries outside of itself, with the exception
+    of system libraries.
+  - *Hardware (Binary) backend.* This type refers to generating binary
+    code that can be executed on the target device. The NN compiler
+    can generate code that either executes solely on the CPU or, when
+    the corresponding target was specified, also takes advantage of
+    the GPU.
+
+Third-party libraries can be incorporated either in the form of source
+code or as precompiled binary artefacts.
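The choice between the three backend types could be driven by the target option; the target and backend names below are hypothetical, chosen only to make the dispatch concrete:

```python
# Sketch of backend selection from a target option. Target strings and
# backend-type names are hypothetical illustrations, not nncc options.

def select_backend(target):
    """Map a target specification onto one of the three backend types."""
    if target == "interpreter":
        return "interpretation"        # run the IR directly, step by step
    if target in ("c-source", "c++-source"):
        return "soft"                  # emit self-contained source code
    if target in ("arm-cpu", "arm-gpu"):
        return "binary"                # emit native code for the device
    raise ValueError("unsupported target: " + target)
```

A real driver would additionally pass device specifications (see Mechanism above) to the chosen backend so it can tune the generated code.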
+
+## Appendix 1. Traceability Matrix
+
+The following table shows the mapping between the SW Requirements
+Specification and this SW High-Level Design
+document.
+
+| Requirement | Description | Section |
+| ----------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------- |
+| RF-1 (Frontend: Tensorflow Lite) | The compiler should support import of NN model in Tensorflow Lite format (parsing & verification of data scheme v0-v3, 50 NN ops) | [Import NN model](#import-nn-model) |
+| RF-2 (Frontend: Caffe) | The compiler should support import of NN model in Caffe format (parsing & verification) | [Import NN model](#import-nn-model) |
+| RF-3 (Frontend: Caffe2 (Optional)) | The compiler should support import of NN model in Caffe2 format (parsing & verification) | [Import NN model](#import-nn-model) |
+| RF-4 (Frontend: lossless import) | The frontend should use the lossless approach while it is converting any NN model to IR | [Import NN model](#import-nn-model) |
+| RF-5 (Frontend: Inception\_v3)                  | The frontend should successfully import the Inception V3 NN model                                                                  | [Import NN model](#import-nn-model)     |
+| RF-6 (Frontend: MobileNet)                      | The frontend should successfully import the MobileNet NN model                                                                     | [Import NN model](#import-nn-model)     |
+| RF-7 (Backend: ARM CPU) | The compiler should produce executable for ARM CPU | [Generate the code](#generate-the-code) |
+| RF-8 (Backend: ARM GPU) | The compiler should produce the binary that takes advantages of GPU when it was specified before compilation | [Generate the code](#generate-the-code) |
+| RF-9 (Backend: Artefact type) | The compiler should produce executable as a shared library or as a static library | [Generate the code](#generate-the-code) |
+| RF-10 (Backend: Inception\_v3) | The compiler should produce the valid compiled artefact for Inception v3 NN model | [Generate the code](#generate-the-code) |
+| RF-11 (Backend: MobileNet) | The compiler should produce the valid compiled artefact for MobileNet NN model | [Generate the code](#generate-the-code) |
+| RF-12 (Config: command line) | The compiler should get configuration parameters from command line | [Initialization](#initialization) |
+| RF-13 (Config: config file (Optional)) | The compiler should get configuration parameters from config file | [Initialization](#initialization) |
+| RF-14 (Config: environment variable (Optional)) | The compiler should get configuration parameters from environment variables | [Initialization](#initialization) |
+| RF-15 (Artefact: result) | The artefact should provide comparable result to the original NN model for the same input data | [Generate the code](#generate-the-code) |
+| RF-16 (Artefact: input verifications) | The artefact should verify any input data and check consistency | [Generate the code](#generate-the-code) |
+| RF-17 (Artefact: GPU) | The artefact should take advantage of the GPU for GPU-enabled operations | [Generate the code](#generate-the-code) |
+| RF-18 (Artefact: CPU) | The artefact should take advantage of CPU if it was specified | [Generate the code](#generate-the-code) |
+
+**Design Module of S/W Architecture**
+
+| Requirement | Import NN model | Generate the code | Initialization |
+| ----------------------------------------------- | --------------- | ----------------- | -------------- |
+| RF-1 (Frontend: Tensorflow Lite) | O | | |
+| RF-2 (Frontend: Caffe) | O | | |
+| RF-3 (Frontend: Caffe2 (Optional)) | O | | |
+| RF-4 (Frontend: lossless import) | O | | |
+| RF-5 (Frontend: Inception\_v3) | O | | |
+| RF-6 (Frontend: MobileNet) | O | | |
+| RF-7 (Backend: ARM CPU) | | O | |
+| RF-8 (Backend: ARM GPU) | | O | |
+| RF-9 (Backend: Artefact type) | | O | |
+| RF-10 (Backend: Inception\_v3) | | O | |
+| RF-11 (Backend: MobileNet) | | O | |
+| RF-12 (Config: command line) | | | O |
+| RF-13 (Config: config file (Optional)) | | | O |
+| RF-14 (Config: environment variable (Optional)) | | | O |
+| RF-15 (Artefact: result) | | O | |
+| RF-16 (Artefact: input verifications) | | O | |
+| RF-17 (Artefact: GPU) | | O | |
+| RF-18 (Artefact: CPU) | | O | |
diff --git a/docs/nncc/project/requirements_specification.md b/docs/nncc/project/requirements_specification.md
new file mode 100644
index 000000000..7a6fce762
--- /dev/null
+++ b/docs/nncc/project/requirements_specification.md
@@ -0,0 +1,272 @@
+# SW Requirements Specification
+
+
+**Revision history**
+
+| Ver. | Date | Contents | Author | Approver |
+| ---- | ---------- | ------------------------------------------ | ------------------ | ------------ |
+| 0.1 | 2018.04.11 | Initial version | Vostokov Sergey | Sung-Jae Lee |
+| 0.2 | 2018.04.11 | SE member in-charge review | Aleksei Kondrashov | |
+| 1.0 | 2018.04.13 | Final Execution DR version | Vostokov Sergey | Sung-Jae Lee |
+| 1.1 | 2018.05.24 | Add new requirement in Source code section | Vostokov Sergey | Sung-Jae Lee |
+
+## Introduction
+
+### Purpose and scope
+
+The main goal of the project is to develop a compiler for neural
+networks that produces an executable artefact for a specified SW and
+HW platform.
+
+The development scope includes the following components:
+
+  - Develop an importer module to parse, verify, and represent an NN
+    model for further optimization and compilation
+  - Develop code emitters to produce executable binaries for CPU and GPU
+
+2018 year goals:
+
+ - Support TensorFlow Lite NN model format
+ - Support Caffe NN model format
+ - Support Caffe2 NN model format (Optional)
+ - Support compilation of MobileNet NN
+ - Support compilation of Inception v3 NN
+ - Support ARM CPU
+ - Support ARM GPU (Mali)
+ - Support Tizen OS
+ - Support SmartMachine OS (Optional)
+
+### Terminology and Abbreviation
+
+| | |
+| ------------ | ------------------------------------------------------------- |
+| OS | Operating System |
+| OS API | Application interface of OS |
+| HW | Hardware |
+| SW | Software |
+| NN | Neural Network |
+| NN model | Neural network model (Instance of NN built with ML framework) |
+| NN compiler | The compiler for neural network |
+| ML framework | The machine learning framework |
+| TF/TF Lite | Tensorflow/Tensorflow Lite ML framework |
+| IR | Intermediate representation |
+| CI/CI system | Continuous integration system |
+| UI | The user interface |
+| GUI | The graphical user interface |
+| CLI | The command-line interface |
+
+### SW System Architecture
+
+The main components of the compiler are the following:
+
+ - Configuration system
+ - Importer (convert supported NN model to Model IR before
+ optimization)
+ - High-Level optimization (Applies HW independent optimizations)
+ - Low-Level optimization (Applies optimizations appropriate to the
+ specified target HW)
+ - Code emitter (Produces the binary to take advantages of CPU and/or
+ GPU)
+
+![image](../images/nncc_idef0_a1.png)
+
+### Relevant Industry Standards
+
+The architecture design is described using IDEF notation. Since nncc is part of the open-source STAR Platform project,
+no other industry standards are required or applicable.
+
+## SW Functional Requirements
+
+### Frontend
+
+| ID | Requirement Name | Description |
+| ---- | --------------------------- | --------------------------------------------------------------------------------------------------------------------------------- |
+| RF-1 | Frontend: Tensorflow Lite | The compiler should support import of NN model in Tensorflow Lite format (parsing & verification of data scheme v0-v3, 50 NN ops) |
+| RF-2 | Frontend: Caffe | The compiler should support import of NN model in Caffe format (parsing & verification) |
+| RF-3 | Frontend: Caffe2 (Optional) | The compiler should support import of NN model in Caffe2 format (parsing & verification) |
+| RF-4 | Frontend: lossless import   | The frontend should use the lossless approach while converting any NN model to IR                                                  |
+| RF-5 | Frontend: Inception\_v3     | The frontend should successfully import the Inception V3 NN model                                                                  |
+| RF-6 | Frontend: MobileNet         | The frontend should successfully import the MobileNet NN model                                                                     |
+
+### High-Level optimization
+
+No special requirements
+
+### Low-Level optimization
+
+No special requirements
+
+### Backend
+
+| ID | Requirement Name | Description |
+| ----- | ---------------------- | ------------------------------------------------------------------------------------------------------------ |
+| RF-7 | Backend: ARM CPU | The compiler should produce executable for ARM CPU |
+| RF-8 | Backend: ARM GPU | The compiler should produce the binary that takes advantages of GPU when it was specified before compilation |
+| RF-9 | Backend: Artefact type | The compiler should produce executable as a shared library or as a static library |
+| RF-10 | Backend: Inception\_v3 | The compiler should produce the valid compiled artefact for Inception v3 NN model |
+| RF-11 | Backend: MobileNet | The compiler should produce the valid compiled artefact for MobileNet NN model |
+
+### Configuration
+
+| ID | Requirement Name | Description |
+| ----- | --------------------------------------- | --------------------------------------------------------------------------- |
+| RF-12 | Config: command line | The compiler should get configuration parameters from command line |
+| RF-13 | Config: config file (Optional) | The compiler should get configuration parameters from config file |
+| RF-14 | Config: environment variable (Optional) | The compiler should get configuration parameters from environment variables |
+
+### Compiled Artefact
+
+| ID | Requirement Name | Description |
+| ----- | ----------------------------- | ---------------------------------------------------------------------------------------------- |
+| RF-15 | Artefact: result | The artefact should provide comparable result to the original NN model for the same input data |
+| RF-16 | Artefact: input verifications | The artefact should verify any input data and check consistency |
+| RF-17 | Artefact: GPU | The artefact should take advantage of the GPU for GPU-enabled operations |
+| RF-18 | Artefact: CPU | The artefact should take advantage of CPU if it was specified |
+
+## SW Non-Functional Requirements
+
+### The compiler
+
+#### Performance
+
+No special requirements
+
+#### SW capacity
+
+No special requirements
+
+#### Reliability
+
+| ID | Requirement Name | Description |
+| ----- | ------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| RNF-1 | Reliability: input | The compiler should produce a correct executable that utilizes the CPU and GPU when correct input data is provided. If incorrect input data is provided, the compiler should not produce a compiled artefact, but should inform the user about all errors that were encountered |
+
+#### Security
+
+No special requirements
+
+#### Usability
+
+No special requirements
+
+#### Availability
+
+No special requirements
+
+#### Maintainability
+
+No special requirements
+
+#### Extendibility
+
+| ID | Requirement Name | Description |
+| ----- | ----------------------- | ------------------------------------------------------------------------------------------------------------------------- |
+| RNF-2 | Extendibility: frontend | The compiler design and implementations should provide possibility to add new features to front-end: new NN models format |
+| RNF-3 | Extendibility: backend | The compiler design and implementations should provide possibility to add new features to backend (new targets) |
+
+#### Testability
+
+| ID | Requirement Name | Description |
+| ----- | ------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| RNF-4 | Testability: environment | The test environment should be built in order to verify compiler functionality, product build status, artefact build/execution status, artefact calculation result and calculation memory footprint and performance |
+
+#### Portability
+
+| ID | Requirement Name | Description |
+| ----- | ------------------ | --------------------------------------------------- |
+| RNF-5 | Portability: Linux | The compiler should be portable with Linux-based OS |
+
+#### Scalability
+
+No special requirements
+
+#### Expandability
+
+No special requirements
+
+#### Configurability
+
+| ID | Requirement Name | Description |
+| ----- | --------------------------------------- | --------------------------------------------------------------------------------- |
+| RNF-6 | Configurability: command line | The compiler should support applying configuration through command line options. |
+| RNF-7 | Configurability: file (Optional) | The compiler should support applying configuration through configuration file. |
+| RNF-8 | Configurability: environment (Optional) | The compiler should support applying configuration through environment variables. |
+
+### The compiled artefact
+
+No special requirements
+
+### The source code
+
+| ID | Requirement Name | Description |
+| ------ | ---------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| RNF-9  | Legislation      | Every source code file should follow its original license and the general project license without any conflicts |
+| RNF-10 | Legitimacy       | The project should have its own general license |
+| RNF-11 | Coding style     | Each source code file should follow the coding style defined for the project |
+| RNF-12 | Contrib          | RNF-9, RNF-10, and RNF-11 are applicable only to the final release version of the source code. They are not applicable to source code placed in the development branch or in any folder used as temporary storage for source code under development. |
+
+## SW Interface Requirements
+
+### The compiler interface
+
+#### User Interface
+
+| ID | Requirement Name | Description |
+| ----- | ---------------------------- | --------------------------------------------------------------------------------------------------------------------------------------- |
+| RIF-1 | Compiler UI: no interaction  | The compiler should not require any user interaction during compilation (completed compilations, fatal exit) |
+| RIF-2 | Compiler UI: CLI             | The compiler is considered a command-line tool that processes parameters from the command line and/or a config file and environment variables |
+| RIF-3 | Compiler UI: input | The compiler should provide the facility to specify NN model to be compiled |
+| RIF-4 | Compiler UI: target device | The compiler should provide the facility to specify result target device (CPU or GPU) |
+| RIF-5 | Compiler UI: target platform | The compiler should provide the facility to specify result target SW platform |
+| RIF-6 | Compiler UI: output | The compiler should provide the facility to specify result target name |
+| RIF-7 | Compiler UI: target type | The compiler should provide the facility to specify result target type: shared or static library |
+
+#### Hardware Interface
+
+| ID | Requirement Name | Description |
+| ----- | -------------------------------- | --------------------------------------------------------------------------- |
+| RIF-8 | Compiler HWI: x86\_64 executable | The solution should provide executables to run on x86\_64-compatible system |
+
+#### Software Interface
+
+| ID | Requirement Name | Description |
+| ------ | ------------------------------------------ | ------------------------------------------------------------------------------------------------ |
+| RIF-9 | Compiler SWI: frontend plugin | The compiler should provide the SW interface in order to add support of the new NN model formats |
+| RIF-10 | Compiler SWI: backend plugin (HW) | The compiler should provide the SW interface in order to add support of the new HW |
+| RIF-11 | Compiler SWI: backend plugin (SW Platform) | The compiler should provide the SW interface in order to add support of the new SW Platform |
+
+#### Communication Interface
+
+No requirements for communication interface.
+
+### The compiled artefact interface
+
+#### User Interface
+
+| ID | Requirement Name | Description |
+| ------ | ------------------- | ----------------------------------- |
+| RIF-12 | Artefact UI: no GUI | Command line UI in text is suitable |
+
+#### Hardware Interface
+
+| ID | Requirement Name | Description |
+| ------ | ----------------- | ----------------------------------------------------------------------------- |
+| RIF-13 | Artefact HWI: CPU | The artefact should use ARM CPU instruction set when it was built for ARM CPU |
+| RIF-14 | Artefact HWI: GPU | The artefact should use the ARM GPU instruction set when it was built for ARM GPU |
+
+#### Software Interface
+
+| ID | Requirement Name | Description |
+| ------ | -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
+| RIF-15 | Artefact SWI: GPU driver | The artefact should use ARM GPU driver to invoke calculations when it was built for ARM GPU |
+| RIF-16 | Artefact SWI: C/C++ header | The artefact should provide C/C++ interface in order to use it in other applications |
+| RIF-17 | Artefact SWI: shared type | The compiled artefact should be a shared library in order to share it between several executables when it was specified before compilation |
+| RIF-18 | Artefact SWI: static type | The compiled artefact should be a static library in order to be built-in to an executable when it was specified before compilation |
+| RIF-19 | Artefact SWI: Info | The artefact should provide SW interface in order to get the actual status of calculation process (progress, errors, final result) |
+
+#### Communication Interface
+
+No requirements for communication interface.
diff --git a/docs/nncc/project/test_plan.md b/docs/nncc/project/test_plan.md
new file mode 100644
index 000000000..a1f0f0a97
--- /dev/null
+++ b/docs/nncc/project/test_plan.md
@@ -0,0 +1,442 @@
+# SW System Test Document
+
+**Revision history**
+
+| Ver. | Date | Contents | Author | Approver |
+| ---- | ---------- | -------------------------- | ------------------ | ------------ |
+| 0.1 | 2018.04.12 | Initial version | Vostokov Sergey | Sung-Jae Lee |
+| 0.2 | 2018.04.13 | SE member in-charge review | Aleksei Kondrashov | |
+| 1.0 | 2018.04.17 | Final Execution DR version | Vostokov Sergey | Sung-Jae Lee |
+| 1.1 | 2018.06.20 | DR1 version | Vostokov Sergey | Sung-Jae Lee |
+
+**Terminology and Abbreviation**
+
+| | |
+| ------------ | ------------------------------------------------------------- |
+| OS | Operating System |
+| OS API | Application interface of OS |
+| HW | Hardware |
+| SW | Software |
+| NN | Neural Network |
+| NN model | Neural network model (Instance of NN built with ML framework) |
+| NN compiler | The compiler for neural network |
+| ML framework | The machine learning framework |
+| TF/TF Lite | Tensorflow/Tensorflow Lite ML framework |
+| IR | Intermediate representation |
+| CI/CI system | Continuous integration system |
+| UI | The user interface |
+| GUI | The graphical user interface |
+| CLI | The command-line interface |
+
+**References**
+
+\[1\] Vostokov Sergey, [SW Requirements Specification](requirements_specification.md)
+
+## SW System Test Overview
+
+### Purpose
+
+Software testing is an investigation conducted to assess the quality
+of the product under test and to reduce the risk of its failure for
+users or customers. The purpose of testing is to detect software
+failures so that defects may be discovered and corrected.
+
+Software system test procedure is a collection of processes and methods
+used to ensure quality. An additional goal is to make sure that the
+product follows regulations and meets the quality standards expected by
+the customer.
+
+### Scope
+
+Since the number of possible tests for any software is practically
+infinite, we use a strategy to select tests that are feasible within
+the available time and resources.
+
+Software system tests attempt to cover requirements listed in the [SW
+Requirement
+Specification](https://github.sec.samsung.net/STAR/nncc/doc/project/requirements_specification.md).
+
+Since the project outcome is a compiler, its testing lies in a
+different domain than many other kinds of application or system
+testing. The tests are dedicated to finding all possible issues that
+cause the following bugs:
+
+ - Compiler crashes (also known as an ICE or Internal Compiler Error)
+
+ - Compiler hangs (kind of infinite loop in the compiler)
+
+ - Bad code generation (a result of incorrect compiler output):
+
+ - Bad code generation that leads to a crash in the application
+ - “Silent” bad code generation
+
+  - Compiler throughput issues (issues that affect the amount of time
+    the compiler takes to compile code)
+
+ - Code quality issues (Issues that affect the performance of the
+ compiled application)
+
+  - Compiler feature correctness issues (this class of bugs involves
+    the compiler generating correct code that nevertheless does not do
+    what a particular feature specifies should be done)
+
+## SW System Test Items
+
+### Functions to be tested
+
+| Feature | Test Item ID | Test Item description |
+| ---------------------------------------- | ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| RF-1, RIF-3 - RIF-7 | TST-1 | Test suite checks NN ops import from Tensorflow Lite format by loading NN model that consists of a single NN op. One test for each NN op. |
+| RF-2, RIF-3 - RIF-7 | TST-2 | Test suite checks NN ops import from Caffe format by loading NN model that consists of a single NN op. One test for each NN op. |
+| RF-3, RIF-3 - RIF-7 | TST-3 | Test suite checks NN ops import from Caffe2 format by loading NN model that consists of a single NN op. One test for each NN op. |
+| RF-5, RIF-3 - RIF-7 | TST-4 | The test should verify successful loading the Inception V3 NN model |
+| RF-6, RIF-3 - RIF-7 | TST-5 | The test should verify successful loading the MobileNet NN model |
+| RF-4 | TST-6 | The test suite should automatically verify the completeness of information that was read from the raw data by comparing it with serialized raw data from Model IR |
+| RF-7, RF-18, RIF-13 | TST-7 | The unit test should automatically verify successful execution of binary on target ARM CPU |
+| RF-8, RF-17, RIF-14, RIF-15 | TST-8 | The unit test should automatically verify successful execution of calculation on GPU |
+| RF-9, RNF-1, RIF-17, RIF-18 | TST-9 | Unit test should verify the existence and format of binary (shared or static) in accordance to specified options |
+| RF-10 | TST-10 | Unit test should verify that compiler produces a compiled artefact for the Inception V3 NN model (Validity of compiled artefact is checked by other tests) |
+| RF-11 | TST-11 | Unit test should verify that compiler produces a compiled artefact for the MobileNet NN model (Validity of compiled artefact is checked by other tests) |
+| RF-12, RF-13, RF-14, RNF-6, RNF-7, RNF-8 | TST-12 | The test suite should verify correctness of configuration object by unit testing |
+| RF-15, RNF-1 | TST-13 | The test suite is to verify the correctness of calculations by comparing the result of original NN model and the result of compiled artefact on the same input data |
+| RF-16 | TST-14 | Unit test should verify that the incorrect input data is processed with an error message without unexpected termination of the application |
+| RNF-4, RNF-5, RIF-8                      | TST-15       | A Linux-based OS should be used while the test environment is built                                                                                                   |
+| RIF-16 | TST-16 | The unit test should verify the existence and validity of generated C/C++ header for compiled artefact |
+
+Table 2-1. Test Item
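To make the comparison behind TST-13 concrete, the outputs of the original NN model and of the compiled artefact, computed on the same input data, can be compared element-wise within a tolerance. The function and threshold values below are illustrative assumptions, not part of the actual test suite:

```python
# Sketch of a TST-13-style check: two flat output tensors agree if
# every element pair is within a relative/absolute tolerance.
# Tolerance values are illustrative, not specified by the test plan.

def outputs_match(reference, candidate, rel_tol=1e-4, abs_tol=1e-5):
    """Return True if the compiled artefact's output matches the reference."""
    if len(reference) != len(candidate):
        return False
    for ref, out in zip(reference, candidate):
        if abs(ref - out) > max(abs_tol, rel_tol * abs(ref)):
            return False
    return True
```

In practice the acceptable tolerance would be chosen per model, since quantization or fast-math optimizations legitimately perturb the results.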
+
+**The following requirements can be tested only manually:**
+
+ - Non-functional requirements: RNF-2, RNF-3 (They would be tested
+ during development)
+ - Interface requirements: RIF-1, RIF-2, RIF-9 - RIF-12, RIF-19
+
+### Functions not to be tested
+
+The following requirements cannot be tested:
+
+ - The source code requirements (RNF-9. RNF-10. RNF-11)
+
+## SW System Test Procedure
+
+### Test approaches
+
+During implementation of the project deliverables, several kinds of
+testing are used. Once the continuous integration (CI) system is
+developed, all of them are performed by it automatically. The CI
+system subscribes to source code modifications in the version control
+system. The configuration does not allow any changes to be merged into
+the mainline if these changes do not pass the mandatory merge tests.
+
+ - **Code style check** (merge-mandatory test): verifies consistency
+   of the coding style
+ - **Build test** (merge-mandatory test): verifies that the current
+   revision builds
+ - **Unit tests**: verify SW system consistency. Newly implemented
+   features, code refactorings, and optimizations must not cause unit
+   test failures. Each unit test reflects the exact logic of the
+   component under test and thus should be adapted whenever the
+   program logic changes.
+ - **System tests**: verify feature quality as well as compliance
+   with the specified requirements.
+ - **Manual UI testing**: for interface requirements that cannot be
+   automated
+
+### Test Pass/Fail Criteria
+
+All tests (unit and system) must pass without any issues at all times
+for newly implemented, refactored, or otherwise changed code.
+
+### Test Start/Suspension/Resumption criteria
+
+Two mandatory tests (code style check and build test) are performed for
+every pull request (PR) before it is merged. The configuration of the
+continuous integration (CI) system does not allow changes to be merged
+into the devel branch if they do not pass these tests.
+
+Unit and feature testing are performed automatically for the devel
+branch. A merge to the master branch (release) is possible only when
+all these tests have passed.
+
+### Regression Test strategy
+
+If a newly detected issue is not covered by an existing test, a new
+test is developed for it. Otherwise, the issue is resolved directly.
+
+### Test tools
+
+| | |
+| ------------------------------- | ------------------------------------------------------------------------------------ |
+| Source code static verification | AEGIS (CODE pre-commit test suite: static/structure/open source violation analyzers) |
+| Test execution | CMake |
+| Defect management | Samsung Research GitHub |
+| Continuous Integration system | HQ CI (CODE) |
+
+Table 3-1. Test Tools
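Table 3-1 names CMake as the test-execution tool. A hypothetical `CMakeLists.txt` fragment showing how a Google-Test-based unit-test binary is typically registered with CTest (the target and file names are illustrative, not the project's actual ones):

```cmake
# Hypothetical fragment: register a unit-test binary with CTest.
add_executable(import_unittest import_unittest.cpp)
target_link_libraries(import_unittest PRIVATE gtest gtest_main)

# Allow the test to be run and checked via `ctest` from the build tree.
enable_testing()
add_test(NAME import_unittest COMMAND import_unittest)
```

With such a setup, the CI system can run all registered tests and check their status with a single `ctest` invocation.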
+
+## SW System Test Schedule Plan
+
+### Test task & schedule
+
+| Task           | Schedule                | Responsibility | Detailed Task                          |
+| -------------- | ----------------------- | -------------- | -------------------------------------- |
+| Unit testing   | 01.04.2018 - 31.12.2018 | All            | All unit tests should be carried out   |
+| System testing | 01.04.2018 - 31.12.2018 | All            | All system tests should be carried out |
+
+Table 4-1. Test Tasks and Schedule
+
+### Test Resource organization plan
+
+#### Test environment
+
+| Type/Model | Operating System | Usage |
+| ---------- | --------------------------------- | ------------------------------------------------------------------------ |
+| PC/x86 | Ubuntu GNU/Linux version \>=14.04 | Builds the system and runs unit tests; system tests are performed as well |
+| Tizen TM2 | Tizen | Unit and system testing |
+| Odroid XU4 | Tizen | Unit and system testing |
+
+Table 4-2. Hardware / Operating System
+
+| Type | Spec | Usage |
+| ------------------- | ----------------------------------------------------- | ------------------------------------------------------------------------------- |
+| Library | Google test | Organize test code and provide utility methods |
+| VCS | Samsung github | The source code version controlling system |
+| CI | CODE | The HQ CI system |
+| Build system | CMake | Run test and check status |
+| Device connectivity | sdb | Send tools to the device and provide a shell to run them |
+| Management tool | The CODE (Collaborative Open Development Environment) | Source code version control, code review, issue tracker, Continuous Integration |
+
+Table 4-3. Software
+
+### Risk management plan
+
+| Risk | Description | Probability | Countermeasures |
+| ------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------- | ----------- | --------------------------------------------------------------------------------------- |
+| SmartMachine OS SDK toolchain is not available | To support compilation for SmartMachine OS, the SDK is required; the compiler would depend on the SmartMachine OS SDK toolchain. | High | Suspend support of SmartMachine OS and plan it for when the SmartMachine OS SDK is released |
+| SmartMachine OS targets are not available | To test executables for SmartMachine OS, the specified targets are required. | High | Request targets or a SW emulator when SmartMachine OS is released |
+| HQ CI does not support target testing | Some tests require target devices to run on. The provided CI system may not support this type of testing. | High | Set up a CI environment on site |
+| Targets for testing/development are not available | Full automatic testing may take a long time and requires target devices to execute the binaries. | Medium | Request/buy a sufficient number of devices |
+
+Table 4-5. Risk Management
+
+### SW configuration management plan
+
+#### SW Configuration items identification
+
+| No | Document number | SW configuration Item | File name |
+| -- | ------------------------- | ------------------------------ | ------------------------------------------- |
+| 1 | SRR-RAJ0118ZZ-BWRF-STD001 | System Test Document | 18 NN compiler and Optimizer (STD) v1.0.pdf |
+| 2 | SRR-RAJ0118ZZ-BWRF-STS001 | System Test Case Specification | 18 NN compiler and Optimizer (STS) v1.0.pdf |
+| 3 | SRR-RAJ0118ZZ-BWRF-UTR001 | Unit Test Report | 18 NN compiler and Optimizer (UTR) v1.0.pdf |
+
+Table 4-6. SW Configuration Items List
+
+#### Directory Structure
+
+| Directory | Description |
+| ------------------------ | -------------------------------------------------------------------- |
+| / | source codes of the build system, main README file |
+| /contrib | Incubating projects |
+| /doc | Contains the documentation of the project |
+| /doc/project | Contains project management documents (SRS, SDD, STD, HLD, DLD, etc) |
+| /libs | Contains the source of the libraries which are used by the nncc |
+| /libs/core | Contains the source code of the core library of nncc |
+| /libs/frontend | Contains the source code of supported frontend's plugins |
+| /libs/frontend/caffe | The source code for the Caffe frontend |
+| /libs/frontend/caffe2 | The source code for the Caffe2 frontend |
+| /libs/frontend/tflite | The source code for the Tensorflow Lite frontend |
+| /libs/backend | Contains the source code of supported backend plugins |
+| /libs/backend/cpu | Contains the source code of CPU backend |
+| /libs/backend/gpu | Contains the source code of GPU backend |
+| /libs/backend/3rd\_party | Contains the source code of backend to utilize 3rd party libraries |
+| /scripts | Various scripts for building and testing the nncc |
+| /tools | The source code of the executables |
+
+Table 4-7. Directory Structure
+
+#### Baseline
+
+| Test Round | Baseline Name | Configuration Item | Schedule |
+| ---------- | ------------- | ---------------------------------------------------- | ---------- |
+| Round 1 | The nncc v0.5 | SRR-RAJ0118ZZ-BWRF-STD001, SRR-RAJ0118ZZ-BWRF-UTR001 | 01.09.2018 |
+| Round 2 | The nncc v1.0 | SRR-RAJ0118ZZ-BWRF-STD002, SRR-RAJ0118ZZ-BWRF-UTR002 | 01.12.2018 |
+
+Table 4-8. Baselines
+
+## SW System Test Case
+
+| TestItem ID | Testcase ID | Test Procedures | Expected Results |
+| ----------- | ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| TST-1 | TST-1-1 | Import a NN consisting of a single Tensorflow Lite ADD operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-2 | Import a NN consisting of a single Tensorflow Lite AVERAGE\_POOL\_2D operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-3 | Import a NN consisting of a single Tensorflow Lite CONCATENATION operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-4 | Import a NN consisting of a single Tensorflow Lite CONV\_2D operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-5 | Import a NN consisting of a single Tensorflow Lite DEPTHWISE\_CONV\_2D operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-6 | Import a NN consisting of a single Tensorflow Lite DEQUANTIZE operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-7 | Import a NN consisting of a single Tensorflow Lite EMBEDDING\_LOOKUP operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-8 | Import a NN consisting of a single Tensorflow Lite FULLY\_CONNECTED operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-9 | Import a NN consisting of a single Tensorflow Lite HASHTABLE\_LOOKUP operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-10 | Import a NN consisting of a single Tensorflow Lite L2\_NORMALIZATION operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-11 | Import a NN consisting of a single Tensorflow Lite L2\_POOL\_2D operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-12 | Import a NN consisting of a single Tensorflow Lite LOCAL\_RESPONSE\_NORMALIZATION operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-13 | Import a NN consisting of a single Tensorflow Lite LOGISTIC operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-14 | Import a NN consisting of a single Tensorflow Lite LSH\_PROJECTION operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-15 | Import a NN consisting of a single Tensorflow Lite LSTM operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-16 | Import a NN consisting of a single Tensorflow Lite MAX\_POOL\_2D operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-17 | Import a NN consisting of a single Tensorflow Lite MUL operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-18 | Import a NN consisting of a single Tensorflow Lite RELU operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-19 | Import a NN consisting of a single Tensorflow Lite RELU\_N1\_TO\_1 operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-20 | Import a NN consisting of a single Tensorflow Lite RELU6 operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-21 | Import a NN consisting of a single Tensorflow Lite RESHAPE operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-22 | Import a NN consisting of a single Tensorflow Lite RESIZE\_BILINEAR operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-23 | Import a NN consisting of a single Tensorflow Lite RNN operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-24 | Import a NN consisting of a single Tensorflow Lite SOFTMAX operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-25 | Import a NN consisting of a single Tensorflow Lite SPACE\_TO\_DEPTH operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-26 | Import a NN consisting of a single Tensorflow Lite SVDF operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-27 | Import a NN consisting of a single Tensorflow Lite TANH operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-28 | Import a NN consisting of a single Tensorflow Lite CONCAT\_EMBEDDINGS operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-29 | Import a NN consisting of a single Tensorflow Lite SKIP\_GRAM operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-30 | Import a NN consisting of a single Tensorflow Lite CALL operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-31 | Import a NN consisting of a single Tensorflow Lite CUSTOM operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-32 | Import a NN consisting of a single Tensorflow Lite EMBEDDING\_LOOKUP\_SPARSE operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-33 | Import a NN consisting of a single Tensorflow Lite PAD operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-34 | Import a NN consisting of a single Tensorflow Lite UNIDIRECTIONAL\_SEQUENCE\_RNN operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-35 | Import a NN consisting of a single Tensorflow Lite GATHER operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-36 | Import a NN consisting of a single Tensorflow Lite BATCH\_TO\_SPACE\_ND operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-37 | Import a NN consisting of a single Tensorflow Lite SPACE\_TO\_BATCH\_ND operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-38 | Import a NN consisting of a single Tensorflow Lite TRANSPOSE operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-39 | Import a NN consisting of a single Tensorflow Lite MEAN operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-40 | Import a NN consisting of a single Tensorflow Lite SUB operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-41 | Import a NN consisting of a single Tensorflow Lite DIV operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-42 | Import a NN consisting of a single Tensorflow Lite SQUEEZE operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-43 | Import a NN consisting of a single Tensorflow Lite UNIDIRECTIONAL\_SEQUENCE\_LSTM operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-44 | Import a NN consisting of a single Tensorflow Lite STRIDED\_SLICE operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-45 | Import a NN consisting of a single Tensorflow Lite BIDIRECTIONAL\_SEQUENCE\_RNN operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-46 | Import a NN consisting of a single Tensorflow Lite EXP operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-47 | Import a NN consisting of a single Tensorflow Lite TOPK\_V2 operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-48 | Import a NN consisting of a single Tensorflow Lite SPLIT operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-49 | Import a NN consisting of a single Tensorflow Lite LOG\_SOFTMAX operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-50 | Import a NN consisting of a single Tensorflow Lite DELEGATE operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-51 | Import a NN consisting of a single Tensorflow Lite BIDIRECTIONAL\_SEQUENCE\_LSTM operation | During import no crashes or error messages occurred |
+| TST-1 | TST-1-52 | Import a NN consisting of a single Tensorflow Lite CAST operation | During import no crashes or error messages occurred |
+| TST-2 | TST-2-1 | Import a NN consisting of Caffe ImageData layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-2 | Import a NN consisting of Caffe Data layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-3 | Import a NN consisting of Caffe HDF5Input layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-4 | Import a NN consisting of two Caffe layers - Input layer and HDF5Output layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-5 | Import a NN consisting of Caffe Input layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-6 | Import a NN consisting of Caffe WindowData layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-7 | Import a NN consisting of Caffe MemoryData layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-8 | Import a NN consisting of Caffe DummyData layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-9 | Import a NN consisting of two Caffe layers - Input layer and Convolution layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-10 | Import a NN consisting of two Caffe layers - Input layer and Pooling layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-11 | Import a NN consisting of two Caffe layers - Input layer and SPP layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-12 | Import a NN consisting of two Caffe layers - Input layer and Crop layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-13 | Import a NN consisting of two Caffe layers - Input layer and Deconvolution layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-14 | Import a NN consisting of two Caffe layers - Input layer and Im2Col layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-15 | Import a NN consisting of two Caffe layers - Input layer and Recurrent layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-16 | Import a NN consisting of two Caffe layers - Input layer and RNN layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-17 | Import a NN consisting of two Caffe layers - Input layer and LSTM layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-18 | Import a NN consisting of two Caffe layers - Input layer and InnerProduct layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-19 | Import a NN consisting of two Caffe layers - Input layer and Dropout layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-20 | Import a NN consisting of two Caffe layers - Input layer and Embed layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-21 | Import a NN consisting of two Caffe layers - Input layer and LRN layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-22 | Import a NN consisting of two Caffe layers - Input layer and MVN layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-23 | Import a NN consisting of two Caffe layers - Input layer and BatchNorm layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-24 | Import a NN consisting of two Caffe layers - Input layer and ReLU layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-25 | Import a NN consisting of two Caffe layers - Input layer and PReLU layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-26 | Import a NN consisting of two Caffe layers - Input layer and ELU layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-27 | Import a NN consisting of two Caffe layers - Input layer and Sigmoid layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-28 | Import a NN consisting of two Caffe layers - Input layer and TanH layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-29 | Import a NN consisting of two Caffe layers - Input layer and AbsVal layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-30 | Import a NN consisting of two Caffe layers - Input layer and Power layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-31 | Import a NN consisting of two Caffe layers - Input layer and Exp layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-32 | Import a NN consisting of two Caffe layers - Input layer and Log layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-33 | Import a NN consisting of two Caffe layers - Input layer and BNLL layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-34 | Import a NN consisting of two Caffe layers - Input layer and Threshold layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-35 | Import a NN consisting of two Caffe layers - Input layer and Bias layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-36 | Import a NN consisting of two Caffe layers - Input layer and Scale layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-37 | Import a NN consisting of two Caffe layers - Input layer and Flatten layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-38 | Import a NN consisting of two Caffe layers - Input layer and Reshape layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-39 | Import a NN consisting of two Caffe layers - Input layer and BatchReindex layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-40 | Import a NN consisting of two Caffe layers - Input layer and Split layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-41 | Import a NN consisting of two Caffe layers - Input layer and Concat layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-42 | Import a NN consisting of two Caffe layers - Input layer and Slice layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-43 | Import a NN consisting of two Caffe layers - Input layer and Eltwise layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-44 | Import a NN consisting of two Caffe layers - Input layer and Filter layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-45 | Import a NN consisting of two Caffe layers - Input layer and Parameter layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-46 | Import a NN consisting of two Caffe layers - Input layer and Reduction layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-47 | Import a NN consisting of two Caffe layers - Input layer and Silence layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-48 | Import a NN consisting of two Caffe layers - Input layer and ArgMax layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-49 | Import a NN consisting of two Caffe layers - Input layer and Softmax layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-50 | Import a NN consisting of two Caffe layers - Input layer and Python layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-51 | Import a NN consisting of two Caffe layers - Input layer and MultinomialLogisticLoss layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-52 | Import a NN consisting of two Caffe layers - Input layer and Infogain layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-53 | Import a NN consisting of two Caffe layers - Input layer and SoftmaxWithLoss layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-54 | Import a NN consisting of two Caffe layers - Input layer and EuclideanLoss layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-55 | Import a NN consisting of two Caffe layers - Input layer and HingeLoss layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-56 | Import a NN consisting of two Caffe layers - Input layer and SigmoidCrossEntropyLoss layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-57 | Import a NN consisting of two Caffe layers - Input layer and Accuracy layer | During import no crashes or error messages occurred |
+| TST-2 | TST-2-58 | Import a NN consisting of two Caffe layers - Input layer and ContrastiveLoss layer | During import no crashes or error messages occurred |
+| TST-3 | TST-3-1 | Import a NN consisting of a single Caffe2 Add operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-2 | Import a NN consisting of a single Caffe2 AveragePool2D operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-3 | Import a NN consisting of a single Caffe2 Concat operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-4 | Import a NN consisting of a single Caffe2 Conv2D operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-5 | Import a NN consisting of a single Caffe2 FC operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-6 | Import a NN consisting of a single Caffe2 LRN operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-7 | Import a NN consisting of a single Caffe2 Sigmoid operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-8 | Import a NN consisting of a single Caffe2 MaxPool2D operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-9 | Import a NN consisting of a single Caffe2 Mul operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-10 | Import a NN consisting of a single Caffe2 Relu operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-11 | Import a NN consisting of a single Caffe2 Reshape operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-12 | Import a NN consisting of a single Caffe2 Softmax operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-13 | Import a NN consisting of a single Caffe2 Tanh operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-14 | Import a NN consisting of a single Caffe2 PadImage operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-15 | Import a NN consisting of a single Caffe2 BatchToSpace operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-16 | Import a NN consisting of a single Caffe2 SpaceToBatch operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-17 | Import a NN consisting of a single Caffe2 Transpose operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-18 | Import a NN consisting of a single Caffe2 Mean operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-19 | Import a NN consisting of a single Caffe2 Sub operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-20 | Import a NN consisting of a single Caffe2 Div operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-21 | Import a NN consisting of a single Caffe2 Squeeze operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-22 | Import a NN consisting of a single Caffe2 Exp operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-23 | Import a NN consisting of a single Caffe2 TopK operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-24 | Import a NN consisting of a single Caffe2 Split operation | During import no crashes or error messages occurred |
+| TST-3 | TST-3-25 | Import a NN consisting of a single Caffe2 Cast operation | During import no crashes or error messages occurred |
+| TST-4 | TST-4-1 | Import Inception V3 NN model | During import no crashes or error messages occurred |
+| TST-5 | TST-5-1 | Import MobileNet NN model | During import no crashes or error messages occurred |
+| TST-6 | TST-6-1 | Import Inception V3 NN model, serialize all model weights, compare serialized data with the initial NN model | Test executed successfully, serialized weights are equal to initial model weights |
+| TST-6 | TST-6-2 | Import MobileNet NN model, serialize all model weights, compare serialized data with the initial NN model | Test executed successfully, serialized weights are equal to initial model weights |
+| TST-7 | TST-7-1 | Generate binary for the Inception V3 NN model and run its inference on a device with ARM CPU | Test executed successfully, no crashes occurred, inference result was output, amount and format of the outputs corresponds to the expected NN model outputs |
+| TST-7 | TST-7-2 | Generate binary for the MobileNet NN model and run its inference on a device with ARM CPU | Test executed successfully, no crashes occurred, inference result was output, amount and format of the outputs corresponds to the expected NN model outputs |
+| TST-8 | TST-8-1 | Generate binary for the Inception V3 NN model and run its inference on a GPU-enabled device | Test executed successfully, no crashes occurred, inference result was output, amount and format of the outputs corresponds to the expected NN model outputs |
+| TST-8 | TST-8-2 | Generate binary for the MobileNet NN model and run its inference on a GPU-enabled device | Test executed successfully, no crashes occurred, inference result was output, amount and format of the outputs corresponds to the expected NN model outputs |
+| TST-9 | TST-9-1 | Provide correct NN model, compile it as a static library, then check that corresponding binary exists and it is a static library | Test executed successfully |
+| TST-9 | TST-9-2 | Provide correct NN model, compile it as a shared library, then check that corresponding binary exists and it is a shared library | Test executed successfully |
+| TST-9 | TST-9-3 | Provide incorrect model, compile it as a static library, then check that no compiled artifact is produced | Test executed successfully |
+| TST-9 | TST-9-4 | Provide incorrect model, compile it as a shared library, then check that no compiled artifact is produced | Test executed successfully |
+| TST-10 | TST-10-1 | Check that a static library is provided after compiling Inception V3 as a static library | Test executed successfully |
+| TST-10 | TST-10-2 | Check that a shared library is provided after compiling Inception V3 as a shared library | Test executed successfully |
+| TST-11 | TST-11-1 | Check that a static library is provided after compiling MobileNet as a static library | Test executed successfully |
+| TST-11 | TST-11-2 | Check that a shared library is provided after compiling MobileNet as a shared library | Test executed successfully |
+| TST-12 | TST-12-1 | Check that configuration object is constructed correctly when getting configuration parameters from command line | Test executed successfully |
+| TST-12 | TST-12-2 | Check that configuration object is constructed correctly when getting configuration parameters from config file | Test executed successfully |
+| TST-12 | TST-12-3 | Check that configuration object is constructed correctly when getting configuration parameters from environment variables | Test executed successfully |
+| TST-13 | TST-13-1 | Compile Inception V3 as static library for CPU, provide it and the original model with same correct input data, then compare the result from original model with the result from compiled artifact | Test executed successfully, results are comparable |
+| TST-13 | TST-13-2 | Compile Inception V3 as shared library for CPU, provide it and the original model with same correct input data, then compare the result from original model with the result from compiled artifact | Test executed successfully, results are comparable |
+| TST-13 | TST-13-3 | Compile Inception V3 as static library for GPU, provide it and the original model with same correct input data, then compare the result from original model with the result from compiled artifact | Test executed successfully, results are comparable |
+| TST-13 | TST-13-4 | Compile Inception V3 as shared library for GPU, provide it and the original model with same correct input data, then compare the result from original model with the result from compiled artifact | Test executed successfully, results are comparable |
+| TST-13 | TST-13-5 | Compile MobileNet as static library for CPU, provide it and the original model with same correct input data, then compare the result from original model with the result from compiled artifact | Test executed successfully, results are comparable |
+| TST-13 | TST-13-6 | Compile MobileNet as shared library for CPU, provide it and the original model with same correct input data, then compare the result from original model with the result from compiled artifact | Test executed successfully, results are comparable |
+| TST-13 | TST-13-7 | Compile MobileNet as static library for GPU, provide it and the original model with same correct input data, then compare the result from original model with the result from compiled artifact | Test executed successfully, results are comparable |
+| TST-13 | TST-13-8 | Compile MobileNet as shared library for GPU, provide it and the original model with same correct input data, then compare the result from original model with the result from compiled artifact | Test executed successfully, results are comparable |
+| TST-14 | TST-14-1 | Provide compiled Inception V3 artifact with invalid input, check that no unexpected termination occurs | Test executed successfully |
+| TST-14 | TST-14-2 | Provide compiled Inception V3 artifact with invalid input, check that an error message is provided | Test executed successfully |
+| TST-14 | TST-14-3 | Provide compiled MobileNet artifact with invalid input, check that no unexpected termination occurs | Test executed successfully |
+| TST-14 | TST-14-4 | Provide compiled MobileNet artifact with invalid input, check that an error message is provided | Test executed successfully |
+| TST-15 | TST-15-1 | Check that the OS used during test environment build is Linux-based | Test executed successfully |
+| TST-16 | TST-16-1 | Compile a valid NN model, then check that C/C++ header corresponding to compiled artifact exists | Test executed successfully |
+| TST-16 | TST-16-2 | Compile a valid NN model, then if C/C++ header corresponding to compiled artifact exists, verify its validity | Test executed successfully |
+
+Table 5-1. System Test case
diff --git a/docs/nncc/project_guide.md b/docs/nncc/project_guide.md
new file mode 100644
index 000000000..af6a5acfd
--- /dev/null
+++ b/docs/nncc/project_guide.md
@@ -0,0 +1,27 @@
+### How to create your own project
+_nncc_ aims to make it easy to develop optimized, retargetable NN compilers. Any person or team interested in _nncc_ can create a new incubating project.
+
+#### Subject
+Subjects relate to NN (Neural Network) compilers. Some examples are listed below, but subjects are not limited to them:
+- NN IR(Intermediate Representation)
+- Extended frontend and backend
+- High-performance (model optimization, memory optimization, scheduling, etc.)
+- Tools (verification, benchmark, visualization, etc.)
+- Tutorial, testbed
+
+#### How to propose
+There is no formal proposal process. Anyone can submit an issue or a PR as a starting point of a proposal. It is helpful (though not mandatory) if the submission includes documents or descriptions covering the following, to share your idea and concept and attract new contributors to your project:
+- Overview, goal or architecture description to explain your project
+- How-to guide including building and running your programs
+
+#### Directory to use
+- A directory under `compiler/`, which starts with your project name.
+
+#### Requirement
+- A project should follow the formal review process that _nncc_ is currently using. See [How to create a Pull Request (in the contribution guide)](contribution_guide.md#how-to-create-a-pull-request).
+
+#### How to enable format checker
+- Create a `.FORMATCHECKED` file in your project directory for the format checker to check the source code of the directory and its subdirectories.
+
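As a sketch, enabling the checker for a hypothetical project directory `compiler/my-project` amounts to creating one empty marker file (shown here in a scratch directory):

```shell
# Work in a scratch directory; 'my-project' is a hypothetical project name.
cd "$(mktemp -d)"
mkdir -p compiler/my-project
# The empty marker file enables format checking for this directory tree.
touch compiler/my-project/.FORMATCHECKED
ls -A compiler/my-project
```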
+#### How to contribute
+Anyone who wants to contribute can create and submit PRs and issues following [nncc contribution_guide](contribution_guide.md). _nncc_ always welcomes your contribution.
diff --git a/docs/nncc/roadmap.md b/docs/nncc/roadmap.md
new file mode 100644
index 000000000..d2227e8be
--- /dev/null
+++ b/docs/nncc/roadmap.md
@@ -0,0 +1,6 @@
+## 2018
+
+In 2018, _nncc_ will provide Caffe/TensorFlow Lite frontends and ARM CPU/GPU backends built on top of a
+well-specified common (re-targetable) intermediate representation (IR) that is expressive enough to
+encode Inception(v3) and MobileNet, and flexible enough to support next-gen H/W architectures, such
+as DSP or NPU.
diff --git a/docs/nncc/v1.0.0/getting_started.md b/docs/nncc/v1.0.0/getting_started.md
new file mode 100644
index 000000000..ee8014042
--- /dev/null
+++ b/docs/nncc/v1.0.0/getting_started.md
@@ -0,0 +1,59 @@
+# Getting Started
+
+## Environments
+
+Currently, Ubuntu 16.04 is officially supported as the development environment.
+Other environments may work but have not been confirmed.
+
+## How to compile your own model
+
+### What we should prepare
+
+- TensorFlow model file (`.pb` file)
+  - The TensorFlow model file should be frozen. [[How to freeze?]](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py)
+ - Only inference operations are supported. Training operations are not supported yet.
+ - Quantization is not yet supported.
+ - `device` attribute should not have `GPU` value.
+- Model information file (`.info` file)
+  - The `.info` file should include 4 fields per line:
+    - whether the node is an input or an output
+    - the name of the node
+    - the type of the node
+    - the shape of the node
+ - Example format is written below.
+ ```
+ # input/output, node_name, node_type, node_shape
+
+ input, input:0, TF_FLOAT, [1, 299, 299, 3]
+ output, InceptionV3/Predictions/Reshape_1:0, TF_FLOAT, [1, 1001]
+ ```
+
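Since the `.info` format is line-oriented, it is easy to inspect programmatically. The following Python sketch is illustrative only (not part of nncc); it parses the example format shown above:

```python
# Hypothetical parser for the line-oriented .info format shown above.
def parse_info(text):
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and comments
            continue
        # split into at most 4 fields; the shape keeps its internal commas
        kind, name, dtype, shape = [f.strip() for f in line.split(",", 3)]
        dims = [int(d) for d in shape.strip("[]").split(",")]
        entries.append({"kind": kind, "name": name, "type": dtype, "shape": dims})
    return entries

sample = """
# input/output, node_name, node_type, node_shape

input, input:0, TF_FLOAT, [1, 299, 299, 3]
output, InceptionV3/Predictions/Reshape_1:0, TF_FLOAT, [1, 1001]
"""
print(parse_info(sample))
```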
+### How to compile
+
+1. Generate `nnpkg` using `.pb` file and `.info` file.
+ ```sh
+ tf2nnpkg --graphdef <model.pb> --info <model.info> -o <path/to/generate>
+ ```
+
+1. Check if all files are generated correctly.
+   - The directory name of the `nnpkg` is the prefix of the `.pb` file name.
+   - For example, for a `model.pb` file, the directory name will be `model`.
+ ```
+ path/to/generate
+ └ model
+ ├ model.circle
+ └ metadata
+ └ MANIFEST
+ ```
+
+1. Check if `MANIFEST` contents are correct.
+ ```sh
+ $ cat path/to/generate/model/metadata/MANIFEST
+ {
+ "major-version" : "1",
+ "minor-version" : "0",
+ "patch-version" : "0",
+ "models" : [ "model.circle" ],
+ "model-types" : [ "circle" ]
+ }
+ ```
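Because `MANIFEST` is plain JSON, this check can be scripted. Below is a minimal illustrative sketch (not an nncc tool) that verifies the fields shown in the example above:

```python
import json

def check_manifest(text, expected_model):
    """Verify the MANIFEST fields shown in the example above."""
    m = json.loads(text)
    assert m["major-version"] == "1"
    assert expected_model in m["models"]
    assert m["model-types"] == ["circle"]
    return m

manifest_text = '''{
  "major-version" : "1",
  "minor-version" : "0",
  "patch-version" : "0",
  "models" : [ "model.circle" ],
  "model-types" : [ "circle" ]
}'''
print(check_manifest(manifest_text, "model.circle")["models"])
```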
diff --git a/docs/nncc/v1.0.0/operation-list.md b/docs/nncc/v1.0.0/operation-list.md
new file mode 100644
index 000000000..9a43eb518
--- /dev/null
+++ b/docs/nncc/v1.0.0/operation-list.md
@@ -0,0 +1,34 @@
+# List of TensorFlow Operations Supported by nncc
+
+The list of TensorFlow operations supported by nncc is as follows:
+
+**Notice: There may be some restrictions on the support of each operation. Details will be updated soon.**
+
+- Add
+- AvgPool
+- BiasAdd
+- ConcatV2
+- Const
+- Conv2D
+- Conv2DBackpropInput
+- DepthwiseConv2dNative
+- FusedBatchNorm
+- Identity
+- MaxPool
+- Mean
+- Mul
+- Pad
+- Placeholder
+- RealDiv
+- Relu
+- Relu6
+- Reshape
+- Rsqrt
+- Shape
+- Softmax
+- Sqrt
+- SquaredDifference
+- Squeeze
+- StopGradient
+- Sub
+- Tanh
diff --git a/docs/nncc/v1.0.0/tutorial.md b/docs/nncc/v1.0.0/tutorial.md
new file mode 100644
index 000000000..9d1f97e67
--- /dev/null
+++ b/docs/nncc/v1.0.0/tutorial.md
@@ -0,0 +1,49 @@
+# Tutorial
+
+Let's compile the Inception_v3 model and make an nnpackage!
+
+## Prepare inception_v3 files
+
+1. Download pre-trained `inception_v3.pb` model file.
+ ```sh
+ $ wget https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/inception_v3_2018_04_27.tgz
+ $ tar -xvf inception_v3_2018_04_27.tgz
+ ```
+1. Create model information file as `inception_v3.info`.
+ ```
+ $ cat > inception_v3.info << "END"
+ input, input:0, TF_FLOAT, [1, 299, 299, 3]
+ output, InceptionV3/Predictions/Reshape_1:0, TF_FLOAT, [1, 1001]
+ END
+ ```
+
+## Let's compile inception_v3
+
+1. Generate `nnpkg`. In this tutorial, let's generate it into the current directory.
+ ```sh
+ tf2nnpkg --use-tf2circle \
+ --graphdef inception_v3.pb \
+ --info inception_v3.info \
+ -o .
+ ```
+
+## Check whether compilation is well done
+
+- Check if all files are generated correctly.
+ ```
+ inception_v3
+ ├ inception_v3.circle
+ └ metadata
+ └ MANIFEST
+ ```
+- Check if `MANIFEST` contents are correct.
+ ```sh
+ $ cat inception_v3/metadata/MANIFEST
+ {
+ "major-version" : "1",
+ "minor-version" : "0",
+ "patch-version" : "0",
+ "models" : [ "inception_v3.circle" ],
+ "model-types" : [ "circle" ]
+ }
+ ```
diff --git a/docs/nncc/v1.1.0/nncc_in_tizen_studio.md b/docs/nncc/v1.1.0/nncc_in_tizen_studio.md
new file mode 100644
index 000000000..d0f89a49b
--- /dev/null
+++ b/docs/nncc/v1.1.0/nncc_in_tizen_studio.md
@@ -0,0 +1,52 @@
+# nncc for Tizen Studio Plugin
+
+## Environments
+
+- Windows 10
+
+## How to install nncc in Tizen Studio
+
+### Things to prepare
+
+- Tizen Studio with IDE
+- Tizen Studio Package Manager
+ - Will be automatically installed when Tizen Studio is installed
+- Firewall Registration
+ - To add a repository at Package Manager, firewall registration must be applied in advance.
+ - IP Address : 107.110.2.162
+ - Service Port : 80(TCP)
+
+### Installation of SDK
+
+1. Execute Package Manager of Tizen Studio.
+1. Click cogwheel at right-top side.
+1. Click `Extension SDK`.
+1. Click `+` button.
+1. Write `http://107.110.2.162/packages/ai_tool_ext/` at `Repository`, and anything at `Name`.
+1. Click `OK`, and then click `OK` again. A refresh will run.
+1. At the `Extension SDK` tab, click `install` for `nnas`.
+
+## Tutorial
+Let's create nnpackage in Tizen Studio!
+
+1. Enter [File] - [New] - [Tizen Project].
+1. Select `Sample` and click `Next`.
+1. Select `Mobile` with any version and click `Next`.
+1. Select `Web Application` and click `Next`.
+1. Select `Application` - `App Callee` and click `Next`.
+1. Write `AppCallee` at `Project name` and click `Finish`. (The default project name is `AppCallee`.)
+1. After the project `AppCallee` is created, click `AppCallee` in Project Explorer.
+1. Click `AI extension` (AI chip icon) at the top.
+1. Give the `.pb` file path to `Model File` and the `.info` file path to `info file`.
+   - For information about `.pb` and `.info` files, please refer to [Getting Started](../v1.0.0/getting_started.md#10)
+1. Click `OK`. The circle file generation will run.
+1. Check whether nnpackage is created in `AppCallee\res\shared` folder.
+ - Suppose that `model.pb` and `model.info` were used
+ ```
+ AppCallee\res\shared
+ └ model
+ ├ model.circle
+ └ metadata
+ └ MANIFEST
+ ``` \ No newline at end of file
diff --git a/docs/nncc/v1.1.0/nncc_in_visual_studio.md b/docs/nncc/v1.1.0/nncc_in_visual_studio.md
new file mode 100644
index 000000000..bc9e59fa9
--- /dev/null
+++ b/docs/nncc/v1.1.0/nncc_in_visual_studio.md
@@ -0,0 +1,61 @@
+# nncc for Visual Studio Tizen Extension
+
+## Environments
+
+- Windows 10
+
+## How to install nncc in Visual Studio
+
+### Things to prepare
+
+- Visual Studio 2019 for Windows
+ - Version Status
+ - Community version : Not available yet
+ - Professional version : Available
+ - Enterprise version : Available
+ - Needed Workload
+ - .NET Desktop Development
+    - If the above workload is not installed, please install it using Visual Studio Installer.
+  - For versions earlier than 2019, some details may differ
+ - Express version : Not available
+ - Other versions : Not confirmed
+ - Refer to https://developer.tizen.org/development/visual-studio-tools-tizen/installing-visual-studio-tools-tizen
+- Tizen Baseline SDK
+ - Install `nnas` by using Package Manager. For details, [click here.](nncc_in_tizen_studio.md)
+
+### Installation
+
+1. Download `VisualStudioToolsForTizen_2019AI_3.1.0116.1.vsix` from the release page.
+1. Execute the `vsix` file.
+   - Do not run Visual Studio during this step; if it is running, the process will wait indefinitely.
+1. Open Visual Studio and click `Continue without code`.
+1. Enter [Tools] - [NuGet Package Manager] - [Package Manager Settings] - [NuGet Package Manager - Package Sources]
+1. Click green `+` button to add new package source.
+1. Set the fields as follows, then click `Update`.
+ - `Name` : write `Tizen.NET.SDK`
+ - `Source`: write `https://tizen.myget.org/F/dotnet/api/v3/index.json`
+1. <b>Only when</b> `nuget.org` is not found in `Available package sources`, follow the three steps below.
+ - Click green `+` button
+ - Set `Name` as `nuget.org` and set `Source` as `https://api.nuget.org/v3/index.json`
+ - Click `Update`
+1. Click `OK`.
+
+## Tutorial
+Let's create nnpackage in Visual Studio!
+
+1. Open Visual Studio.
+1. Enter [File] - [New] - [Project].
+1. Select `AI App Project` and click `Next`.
+1. Click `Create`. (Default project name is `AIAppTemplate`)
+1. A dialog pops up. Enter the path of your `model.pb` and `model.info` into the dialog.
+   - In this version, the names of the model file and info file <b>must be</b> `model.pb` and `model.info`.
+ - Detailed information about `.pb` file and `.info` file is in [getting_started](../v1.0.0/getting_started.md#12)
+1. Open `AIAppTemplate_App.cs` in `AIAppTemplate` and build it.
+1. If the build succeeded, the nnpackage will be found in the `AIAppTemplate\res\shared` folder.
+ ```
+ AIAppTemplate\res\shared
+ └ model
+ ├ model.circle
+ └ metadata
+ └ MANIFEST
+ ```
diff --git a/docs/fig/nnfw_architecture.png b/docs/nnfw/2018/fig/nnfw_architecture.png
index d183e2b56..d183e2b56 100644
--- a/docs/fig/nnfw_architecture.png
+++ b/docs/nnfw/2018/fig/nnfw_architecture.png
Binary files differ
diff --git a/docs/fig/nnfw_architecture.pptx b/docs/nnfw/2018/fig/nnfw_architecture.pptx
index 3e5b4fad5..3e5b4fad5 100644
--- a/docs/fig/nnfw_architecture.pptx
+++ b/docs/nnfw/2018/fig/nnfw_architecture.pptx
Binary files differ
diff --git a/docs/roadmap.md b/docs/nnfw/2018/roadmap.md
index aca206889..aca206889 100644
--- a/docs/roadmap.md
+++ b/docs/nnfw/2018/roadmap.md
diff --git a/docs/HowToImplementOperatorKernel.md b/docs/nnfw/HowToImplementOperatorKernel.md
index 715575a5f..715575a5f 100644
--- a/docs/HowToImplementOperatorKernel.md
+++ b/docs/nnfw/HowToImplementOperatorKernel.md
diff --git a/docs/nnfw/fig/nnfw_architecture.png b/docs/nnfw/fig/nnfw_architecture.png
new file mode 100644
index 000000000..566151e4a
--- /dev/null
+++ b/docs/nnfw/fig/nnfw_architecture.png
Binary files differ
diff --git a/docs/nnfw/fig/nnfw_architecture.pptx b/docs/nnfw/fig/nnfw_architecture.pptx
new file mode 100644
index 000000000..9a4e8fbb7
--- /dev/null
+++ b/docs/nnfw/fig/nnfw_architecture.pptx
Binary files differ
diff --git a/docs/fig/nnfw_behavior.png b/docs/nnfw/fig/nnfw_behavior.png
index b7527b48c..b7527b48c 100644
--- a/docs/fig/nnfw_behavior.png
+++ b/docs/nnfw/fig/nnfw_behavior.png
Binary files differ
diff --git a/docs/fig/nnfw_behavior.pptx b/docs/nnfw/fig/nnfw_behavior.pptx
index bac51f363..bac51f363 100644
--- a/docs/fig/nnfw_behavior.pptx
+++ b/docs/nnfw/fig/nnfw_behavior.pptx
Binary files differ
diff --git a/docs/howto.md b/docs/nnfw/howto.md
index 866f56115..2c28453bd 100644
--- a/docs/howto.md
+++ b/docs/nnfw/howto.md
@@ -20,7 +20,7 @@ $ USE_NNAPI=1 LD_LIBRARY_PATH="$(pwd)/Product/obj/runtimes/logging:$(pwd)/Produc
```
## How to get pre-built T/F Lite Flatbuffer models?
-Google provides several pre-built T/F Lite models. Please check [this article](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/g3doc/models.md)
+Google provides several pre-built T/F Lite models. Please check [this page](https://www.tensorflow.org/lite/models)
## Build How-to
@@ -34,3 +34,5 @@ Google provides several pre-built T/F Lite models. Please check [this article](h
- [How to setup XU3 with Ubuntu 16.04](howto/device/xu3_ubuntu.md)
- [How to setup XU4 with Ubuntu 16.04](howto/device/xu4_ubuntu.md)
- [How to add unittest using gtest](howto/HowToAddUnittest.md)
+- [How to manually test NNFW on single model/input pair](howto/HowToTestManualy.md)
+- [How to use nnfw API](howto/HowToUseNNFWAPI.md)
diff --git a/docs/howto/BuildTFfromSource.md b/docs/nnfw/howto/BuildTFfromSource.md
index 3880d5ab9..3880d5ab9 100644
--- a/docs/howto/BuildTFfromSource.md
+++ b/docs/nnfw/howto/BuildTFfromSource.md
diff --git a/docs/howto/CrossBuildForAarch64.md b/docs/nnfw/howto/CrossBuildForAarch64.md
index f3dc55236..9f0af85b8 100644
--- a/docs/howto/CrossBuildForAarch64.md
+++ b/docs/nnfw/howto/CrossBuildForAarch64.md
@@ -1,10 +1,6 @@
-# Cross building for AARCH64
+# Cross building for AARCH64 (ARM64)
-In nnfw, we use both `ARM64` and `AARCH64` on build files such as Makefile, CMakeLists.txt and so on.
-- `ARM64`: only for Android
-- `AARCH64`: all except for Android
-
-However we use only one term `ARM64` in RootFS. Use `ARM64` if you need a RootFS for `AARCH64`.
+In nnfw, we use `AARCH64` on build files such as Makefile, CMakeLists.txt and so on.
## Prepare Ubuntu RootFS
@@ -17,9 +13,9 @@ sudo apt-get install qemu qemu-user-static binfmt-support debootstrap
Use `build_rootfs.sh` script to prepare Root File System. You should have `sudo`
```
-sudo ./tools/cross/build_rootfs.sh arm64
+sudo ./tools/cross/build_rootfs.sh aarch64
```
-- supports `arm`(default) and `arm64` architecutre for now
+- supports `arm`(default) and `aarch64` architectures for now
- supports `xenial`(default) and `trusty` release
To see the options,
@@ -27,14 +23,14 @@ To see the options,
./tools/cross/build_rootfs.sh -h
```
-RootFS will be prepared at `tools/cross/rootfs/arm64` folder.
+RootFS will be prepared at `tools/cross/rootfs/aarch64` folder.
### Prepare RootFS at alternative folder
Use `ROOTFS_DIR` to a full path to prepare at alternative path.
```
-ROOTFS_DIR=/home/user/rootfs/arm64-xenial sudo ./tools/cross/build_rootfs.sh arm64
+ROOTFS_DIR=/home/user/rootfs/aarch64-xenial sudo ./tools/cross/build_rootfs.sh aarch64
```
### Using proxy
@@ -43,9 +39,9 @@ If you need to use proxy server while building the rootfs, use `--setproxy` opti
```
# for example,
-sudo ./tools/cross/build_rootfs.sh arm64 --setproxy="1.2.3.4:8080"
+sudo ./tools/cross/build_rootfs.sh aarch64 --setproxy="1.2.3.4:8080"
# or
-sudo ./tools/cross/build_rootfs.sh arm64 --setproxy="proxy.server.com:8888"
+sudo ./tools/cross/build_rootfs.sh aarch64 --setproxy="proxy.server.com:8888"
```
This will put `apt` proxy settings in `rootfs/etc/apt/apt.conf.d/90proxy` file
@@ -76,6 +72,6 @@ CROSS_BUILD=1 TARGET_ARCH=aarch64 make install
If you used `ROOTFS_DIR` to prepare in alternative folder,
you should also give this to makefile.
```
-CROSS_BUILD=1 ROOTFS_DIR=/home/user/rootfs/arm64-xenial TARGET_ARCH=aarch64 make
-CROSS_BUILD=1 ROOTFS_DIR=/home/user/rootfs/arm64-xenial TARGET_ARCH=aarch64 make install
+CROSS_BUILD=1 ROOTFS_DIR=/home/user/rootfs/aarch64-xenial TARGET_ARCH=aarch64 make
+CROSS_BUILD=1 ROOTFS_DIR=/home/user/rootfs/aarch64-xenial TARGET_ARCH=aarch64 make install
```
diff --git a/docs/nnfw/howto/CrossBuildForAndroid.md b/docs/nnfw/howto/CrossBuildForAndroid.md
new file mode 100644
index 000000000..ab9d04e92
--- /dev/null
+++ b/docs/nnfw/howto/CrossBuildForAndroid.md
@@ -0,0 +1,52 @@
+# Cross building for Android
+
+Supported Architecture : AARCH64 only (ARM32 is not supported yet)
+
+## Prepare Android NDK
+
+Use the `tools/cross/build_android_ndk.sh` script to prepare the Android NDK. This is the recommended way to prepare it.
+You may download it yourself from the official Android NDK website, but the script does a little more than just downloading and unzipping.
+
+## Build
+
+### Host Environment Requirements
+
+With Ubuntu 16.04, everything is fine except one thing: CMake 3.6.0 or later is required for Android NDK CMake support.
+So if you want to use Docker, please use `infra/docker/Dockerfile.1804`, which is based on Ubuntu 18.04 and has CMake 3.10.2.
+
+```bash
+docker build --network host -t nnas1804 -f infra/docker/Dockerfile.1804 infra/docker
+```
+
+### Get prebuilt ARM Compute Library
+
+Download the prebuilt binary from [github](https://github.com/ARM-software/ComputeLibrary/releases). Check the version we support and the platform (Android).
+
+Then extract the tarball; we will use the libraries in `lib/android-arm64-v8a-neon-cl`. The following files are used.
+
+```
+libarm_compute_core.so
+libarm_compute_graph.so
+libarm_compute.so
+```
+
+### Build and install the runtime
+
+Some tools/libs are still not supported and those are not built by default - mostly due to dependency on Boost library.
+Please refer to `infra/nnfw/cmake/options/options_aarch64-android.cmake` for details.
+
+Unlike the cross build for Linux:
+
+- `NDK_DIR` is required
+
+Here is an example of using Makefile.
+
+```bash
+cp -n Makefile.template Makefile
+
+TARGET_OS=android \
+CROSS_BUILD=1 \
+NDK_DIR=/path/android-tools/r20/ndk \
+EXT_ACL_FOLDER=/path/arm_compute-v19.05-bin-android/lib/android-arm64-v8a-neon-cl \
+make install
+```
diff --git a/docs/howto/CrossBuildForArm.md b/docs/nnfw/howto/CrossBuildForArm.md
index e307596d0..07b4a17b3 100644
--- a/docs/howto/CrossBuildForArm.md
+++ b/docs/nnfw/howto/CrossBuildForArm.md
@@ -13,8 +13,8 @@ Use `build_rootfs.sh` script to prepare Root File System. You should have `sudo`
```
sudo ./tools/cross/build_rootfs.sh arm
```
-- supports `arm`(default) and `arm64` architecutre for now
-- supports `xenial`(default) and `trusty` release
+- supports `arm`(default) and `aarch64` architectures for now
+- supports `xenial`(default), `trusty`, and `bionic` releases
To see the options,
```
@@ -23,7 +23,7 @@ To see the options,
RootFS will be prepared at `tools/cross/rootfs/arm` folder.
-## Prepare RootFS at alternative folder
+### Prepare RootFS at alternative folder
Use `ROOTFS_DIR` to a full path to prepare at alternative path.
@@ -31,7 +31,7 @@ Use `ROOTFS_DIR` to a full path to prepare at alternative path.
ROOTFS_DIR=/home/user/rootfs/arm-xenial sudo ./tools/cross/build_rootfs.sh arm
```
-## Using proxy
+### Using proxy
If you need to use proxy server while building the rootfs, use `--setproxy` option.
@@ -49,7 +49,7 @@ for `http`, `https` and `ftp` protocol.
We recommend you have g++ >= 6 installed on your system because NN generated tests require it.
-On Ubuntu 16.04 or older, follow the next steps:
+- On Ubuntu 16.04 or older, follow the next steps:
```
cd ~/your/path
@@ -58,7 +58,7 @@ tar xvf gcc-linaro-7.2.1-2017.11-x86_64_arm-linux-gnueabihf.tar.xz
echo 'PATH=~/your/path/gcc-linaro-7.2.1-2017.11-x86_64_arm-linux-gnueabihf/bin:$PATH' >> ~/.bashrc
```
-On Ubuntu 18.04 LTS, you can install using `apt-get`.
+- On Ubuntu 18.04 LTS, you can install using `apt-get`.
Choose g++ version whatever you prefer: 6, 7 or 8.
```
@@ -78,30 +78,41 @@ Then, copy `libstdc++.so.6.0.24` into `/usr/lib/arm-linux-gnueabihf`, and update
## Build and install ARM Compute Library
-```
-CROSS_BUILD=1 TARGET_ARCH=armv7l make acl
-```
-Mostly you only need once of ACL build. This will build and install to `Product/(target_arch-os)/out/bin` folder.
-- this is required for ARM on Ubuntu
+Mostly, you only need to build ACL once.
+
+ACL will be automatically installed in `externals/acl` when you build nnfw without any changes.
+
+You can check ACL source information in `cmake/packages/ARMComputeSourceConfig.cmake`
## Build nnfw
-Give `TARGET_ARCH` variable to set the target architecture
+Give `TARGET_ARCH` variable to set the target architecture.
+
+If you used `ROOTFS_DIR` to prepare in alternative folder, you should also give this to makefile.
```
CROSS_BUILD=1 TARGET_ARCH=armv7l make all install
+
+# If ROOTFS_DIR is in alternative folder
+ROOTFS_DIR=/path/to/your/rootfs/arm \
+CROSS_BUILD=1 TARGET_ARCH=armv7l make all install
```
-- supports `armv7l` and `aarch64` for now
-If you used `ROOTFS_DIR` to prepare in alternative folder, you should also give this to makefile.
+You can also omit the `CROSS_BUILD=1` option if you explicitly pass `ROOTFS_DIR`. In that case, if
+`TARGET_ARCH` differs from the host architecture, the make script automatically applies
+`CROSS_BUILD=1`. So, if you set `ROOTFS_DIR` as an environment variable, you can simply perform a
+normal build and a cross build as follows.
```
-ROOTFS_DIR=ROOTFS_ARM=/path/to/your/rootfs/arm \
-CROSS_BUILD=1 TARGET_ARCH=armv7l make all install
+export ROOTFS_DIR=xxx
+...
+make all install # do normal build
+TARGET_ARCH=armv7l make all install # do cross build
```
## Run test
```
- ./tests/scripts/test_driver.sh --artifactpath=.
+ ./tests/scripts/test_driver.sh --artifactpath=. \
+ --frameworktest_list_file=tests/scripts/list/neurun_frameworktest_list.armv7l.acl_cl.txt
```
diff --git a/docs/howto/HowToAddUnittest.md b/docs/nnfw/howto/HowToAddUnittest.md
index 5bb75b258..5bb75b258 100644
--- a/docs/howto/HowToAddUnittest.md
+++ b/docs/nnfw/howto/HowToAddUnittest.md
diff --git a/docs/nnfw/howto/HowToRunNnpackge.md b/docs/nnfw/howto/HowToRunNnpackge.md
new file mode 100644
index 000000000..93dd74e83
--- /dev/null
+++ b/docs/nnfw/howto/HowToRunNnpackge.md
@@ -0,0 +1,75 @@
+# How To Run 'nnpackage' (for beginners)
+
+## 0. Environment
+
+This document is based on an experience with ...
+
+```
+- Architecture : armhf
+- OS : ubuntu 18.04
+```
+
+## 1. What is 'nnpackage'?
+
+'nnpackage' is the input of nnfw and the output of nncc.
+
+'nnpackage' contains all data (such as model, MANIFEST, custom_op) that is required to run a given model.
+
+'nnpackage' is a Zip archive in the following structure:
+
+```
+nnpackage
+├── custom_op
+├── metadata
+│ └── MANIFEST
+└── mymodel.model
+```
+
+For more information, find the document [nnpackage/spec/10_packaging_and_manifest.md](../../../nnpackage/spec/10_packaging_and_manifest.md)
+
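To make the layout concrete, here is an illustrative Python sketch (not an nnfw tool) that builds the directory tree above with a placeholder model file and a minimal MANIFEST; the MANIFEST fields follow the examples elsewhere in these docs:

```python
import json
import os
import tempfile

def make_nnpackage(root, model_name):
    # Create the nnpackage layout shown above: the model file next to metadata/MANIFEST.
    pkg = os.path.join(root, "nnpackage")
    os.makedirs(os.path.join(pkg, "metadata"))
    open(os.path.join(pkg, model_name), "wb").close()  # empty placeholder model
    manifest = {
        "major-version": "1",
        "minor-version": "0",
        "patch-version": "0",
        "models": [model_name],
        "model-types": ["circle"],
    }
    with open(os.path.join(pkg, "metadata", "MANIFEST"), "w") as f:
        json.dump(manifest, f)
    return pkg

pkg = make_nnpackage(tempfile.mkdtemp(), "model.circle")
print(sorted(os.listdir(pkg)))
```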
+## 2. How to generate nnpackage?
+
+'nnpackage' can be generated from either '.circle' or '.tflite'.
+
+In this example, we generate 'nnpackage' from '.tflite'.
+
+ [1] Find 'model2nnpkg.sh'.
+ ```
+ nnfw/tools/nnpackage_tool/model2nnpkg/model2nnpkg.sh
+ ```
+
+ [2] Get any \*.tflite model file.
+ You can simply use a file in the test framework directory, 'nnfw/tests/framework/cache/'.
+ If you don't have the /cache directory, download the files with this command:
+ ```
+ cd nnfw
+ MODELFILE_SERVER={MODELFILE_SERVER_LINK} ./tests/framework/run_test.sh --download=on
+ ```
+ For {MODELFILE_SERVER_LINK}, put the appropriate server link.
+ In this example, we will use 'nnfw/tests/framework/cache/add/1D/add_test1.tflite'
+
+ [3] Simply run:
+ ```
+ $ ./model2nnpkg.sh add_test1
+ ```
+ Now you have an add_test1 directory. Look inside it to find the hierarchical structure.
+
+## 3. How to set up an environment and run?
+
+ [1] Build 'nnfw'.
+
+ After the build, you can find the executable 'nnfw/Product/armv7l-linux.debug/out/bin/nnpackage_run'.
+ For how to build, check out the document [docs/nnfw/howto/CrossBuildForArm.md](../../../docs/nnfw/howto/CrossBuildForArm.md).
+
+ [2] Install package 'libhdf5-cpp-100'.
+ ```
+ $ sudo apt install libhdf5-cpp-100
+ ```
+
+ [3] Run nnpackage.
+ ```
+ $ ./nnpackage_run add_test1
+ ```
+ Note that you need to pass the whole 'add_test1' directory,
+ because an 'nnpackage' is an archive, not a single file.
diff --git a/docs/nnfw/howto/HowToTestManualy.md b/docs/nnfw/howto/HowToTestManualy.md
new file mode 100644
index 000000000..bb36cc67b
--- /dev/null
+++ b/docs/nnfw/howto/HowToTestManualy.md
@@ -0,0 +1,62 @@
+# How to test NNFW on single model/input pair
+
+1. Select backend through environment variables:
+ * acl_cl: `export OP_BACKEND_ALLOPS=acl_cl`
+ * acl_neon: `export OP_BACKEND_ALLOPS=acl_neon`
+ * cpu: `export OP_BACKEND_ALLOPS=cpu`
+ * different backends for different operations:
+ ```
+ unset OP_BACKEND_ALLOPS
+ export OP_BACKEND_Conv2D=cpu
+ export OP_BACKEND_MaxPool2D=acl_cl
+ export OP_BACKEND_AvgPool2D=acl_neon
+ ```
+
+2. Select executor through environment variable:
+ * linear: `export EXECUTOR=Linear`
+ * dataflow: `export EXECUTOR=Dataflow`
+ * parallel: `export EXECUTOR=Parallel`
+
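Putting the two steps together, a session that routes only `Conv2D` to the cpu backend and uses the linear executor could start like this:

```shell
# Route only Conv2D to the cpu backend (other operations use their defaults)
# and select the linear executor.
unset OP_BACKEND_ALLOPS
export OP_BACKEND_Conv2D=cpu
export EXECUTOR=Linear
echo "backend=$OP_BACKEND_Conv2D executor=$EXECUTOR"
```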
+## Test NNFW through NNAPI
+
+### Testing on random input
+1. Generate random input, get the reference result using the tflite interpreter, and dump the input and result to a file:
+ ```
+ /path/to/tflite_run --tflite /path/to/model.tflite --dump /path/to/out.dat
+ ```
+2. Run inference with NNFW NNAPI and compare the result with the reference one:
+   ```
+   USE_NNAPI=1 /path/to/tflite_run --tflite /path/to/model.tflite --compare /path/to/out.dat
+   ```
+
+### Testing on particular input
+1. Prepare input:
+
+   `tflite_run` consumes input as a sequence of floats.
+
+   For example, you could convert a `.jpg` image into such a file with the following python3 script:
+ ```
+ from PIL import Image
+ import numpy as np
+
+ img = Image.open("./image.jpg")
+   np_img = np.array(img.getdata()).reshape(img.size[1], img.size[0], 3).astype(np.float32) / 255.  # img.size is (width, height)
+
+ with open('./converted_image.dat', 'wb') as f:
+ for i in np_img.flatten('C'):
+ f.write(i)
+ ```
+
+2. Get the reference result using the tflite interpreter, and dump the input and result to a file:
+
+ ```
+ /path/to/tflite_run --tflite /path/to/model.tflite --input /path/to/input.dat --dump /path/to/out.dat
+ ```
+3. Run inference with NNFW NNAPI and compare the result with the reference one:
+   ```
+   USE_NNAPI=1 /path/to/tflite_run --tflite /path/to/model.tflite --compare /path/to/out.dat
+   ```
+
+## Test NNFW through NNPackage
+
+TODO: fill in this section when NNPackage is implemented
diff --git a/docs/howto/HowToUseDockerImage.md b/docs/nnfw/howto/HowToUseDockerImage.md
index a28502cf0..2c8d98f58 100644
--- a/docs/howto/HowToUseDockerImage.md
+++ b/docs/nnfw/howto/HowToUseDockerImage.md
@@ -2,7 +2,7 @@
We have a docker image to build `nnfw` repo.
-This docker image is built from https://github.sec.samsung.net/STAR/nnfw/blob/master/docker/Dockerfile and based on Ubuntu 16.04.
+This docker image is built from https://github.sec.samsung.net/STAR/nnfw/blob/master/infra/docker/Dockerfile and based on Ubuntu 16.04.
And prebuilt docker image is available from Samsung private docker registry.
This document describes how to use prebuilt docker image when developing `nnfw`.
@@ -44,7 +44,7 @@ If you are behind an HTTP or HTTPS proxy server, you will need to add this confi
These are the actual steps to set an HTTP/HTTPS proxy environment variable:
```
$ sudo mkdir -p /etc/systemd/system/docker.service.d
-$ sudo vi etc/systemd/system/docker.service.d/http-proxy.conf
+$ sudo vi /etc/systemd/system/docker.service.d/http-proxy.conf
```
```
[Service]
@@ -66,14 +66,14 @@ If there is a `/etc/default/docker`, please edit the file as below.
```
$ sudo vi /etc/default/docker
-DOCKER_OPTS="--insecure-registry docker.sec.samsung.net:5000"
+DOCKER_OPTS="--insecure-registry npuci.mooo.com:5000"
```
If there is a `/etc/docker/daemon.json`, please edit the file as below.
```
{
...,
- "insecure-registries": [..., "docker.sec.samsung.net:5000"]
+ "insecure-registries": [..., "npuci.mooo.com:5000"]
}
```
@@ -89,17 +89,11 @@ $ sudo systemctl restart docker // Ubuntu 16.04
## Install docker image of `nnfw`
-Let's pull docker image for `nnfw` repo and tag it to `nnfw_docker:latest`
+Let's pull docker image for `nnfw` repo and tag it to `nnas:latest`
```
-$ docker pull docker.sec.samsung.net:5000/star/nnfw/nnfw_docker:1.5
-$ docker tag docker.sec.samsung.net:5000/star/nnfw/nnfw_docker:1.5 nnfw_docker:latest
-```
-
-If you would like to build `nnfw` tizen package using gbs, pull `nnfw_docker_tizen`.
-```
-$ docker pull docker.sec.samsung.net:5000/star/nnfw/nnfw_docker_tizen:1.2
-$ docker tag docker.sec.samsung.net:5000/star/nnfw/nnfw_docker_tizen:1.2 nnfw_docker_tizen:latest
+$ docker pull npuci.mooo.com:5000/star/nnfw/nnas:latest
+$ docker tag npuci.mooo.com:5000/star/nnfw/nnas:latest nnas:latest
```
## Build docker image instead of pull
@@ -108,61 +102,53 @@ You can build docker image in your environment instead of pull docker image from
```
$ cd nnfw
-$ ./run build-docker
+$ ./nnas build-docker-image
```
-Default docker image name is `nnfw_docker`. If you want to change image name and/or tag, use `-t` or `--tag` option
+Default docker image name is `nnas`. If you want to change image name, set environment variable `DOCKER_IMAGE_NAME`
```
$ cd nnfw
-$ ./run build-docker -t nnfw_docker_test
+$ DOCKER_IMAGE_NAME=nnas_test ./nnas build-docker-image
```
-You can use options supported by `docker build` command (ex. `--network` option)
-
-```
-$ cd nnfw
-$ ./run build-docker --network=host --no-cache
-```
+You can use options supported by the `docker build` command (e.g. the `--network` or `--build-arg` option)
-If you want to build docker image for tizen build, use `--tizen` option
+If you see an error message like 'Temporary failure resolving..', try building with the '--network host' option
```
-```
$ cd nnfw
-$ ./run build-docker --tizen
+$ ./nnas build-docker-image --network host --build-arg UBUNTU_MIRROR="kr.archive.ubuntu.com"
```
-```
-
-## Use docker image to build `nnfw`
+## Use docker image to build `neurun`
Three different targets for `neurun` can be built using the docker image.
-1. Build `nnfw` for `x86_64` target
+1. Build `neurun` for `x86_64` target
```
$ cd nnfw
-$ docker run --rm -v $(pwd):/opt/nnfw -w /opt/nnfw nnfw_docker make install
+$ docker run --rm -v $(pwd):/opt/nnfw -w /opt/nnfw nnas make install
```
-or use `docker_run_test.sh` for convenience as below.
+or use `docker_build_test_x64.sh` for convenience as below.
```
$ cd nnfw
-$ ./run docker_run_test.sh
+$ ./infra/scripts/docker_build_test_x64.sh
```
You can find built artifacts at `nnfw/Product/x86_64-linux.debug`.
-2. Cross build `nnfw` for ARM on x86_64 host
+2. Cross build `neurun` for ARM on x86_64 host
-You should prepare RootFS, following [Cross Building for ARM](https://github.sec.samsung.net/STAR/nnfw/blob/master/docs/howto/CrossBuildForArm.md) except ACL build and cross build steps. Then execute below commands. If your RootFS directory is different with below directory, change it to correct path and ensure the path is absolute.
+You should prepare a RootFS by following [Cross Building for ARM](./CrossBuildForArm.md), skipping the ACL build and cross build steps. Then execute the commands below. If your RootFS directory differs from the directory below, change it to the correct path and ensure the path is absolute.
```
$ cd nnfw
$ ROOTFS_DIR=$(pwd)/tools/cross/rootfs/arm \
-./run docker_build_cross_arm_ubuntu.sh
+./infra/scripts/docker_build_cross_arm_neurun.sh
```
You can find built artifacts at `nnfw/Product/armv7l-linux.debug/`.
-3. Build `nnfw` for Tizen ARM package on x86_64 host
+3. Build `neurun` for Tizen ARM package on x86_64 host
```
$ cd nnfw
-$ ./run docker_gbs_build.sh
+$ ./infra/scripts/docker_build_tizen_gbs.sh
```
You can find built artifacts at `Product/out/rpm`.
diff --git a/docs/nnfw/howto/HowToUseNNFWAPI.md b/docs/nnfw/howto/HowToUseNNFWAPI.md
new file mode 100644
index 000000000..e09343275
--- /dev/null
+++ b/docs/nnfw/howto/HowToUseNNFWAPI.md
@@ -0,0 +1,63 @@
+# Prepare nnpackage
+
+## Convert tensorflow pb file to nnpackage
+Follow the [compiler guide](https://github.sec.samsung.net/STAR/nnfw/blob/master/docs/nncc/Release_2019/tutorial.md) to generate an nnpackage from a tensorflow pb file
+
+## Convert tflite file to nnpackage
+Please see [model2nnpkg](https://github.sec.samsung.net/STAR/nnfw/tree/master/tools/nnpackage_tool/model2nnpkg) for converting from a tflite model file.
+
+# Build app with nnfw API
+
+Here are the basic steps to build an app with the [nnfw C API](https://github.sec.samsung.net/STAR/nnfw/blob/master/runtime/neurun/api/include/nnfw.h)
+
+1) Initialize nnfw_session
+``` c
+nnfw_session *session = nullptr;
+nnfw_create_session(&session);
+```
+2) Load nnpackage
+``` c
+nnfw_load_model_from_file(session, nnpackage_path);
+```
+3) (Optional) Assign a specific backend to operations
+``` c
+ // Use the acl_neon backend for CONV_2D and acl_cl otherwise.
+ // Note that the default backend is acl_cl
+ nnfw_set_op_backend(session, "CONV_2D", "acl_neon");
+```
+
+4) Compilation
+``` c
+ // Compile model
+ nnfw_prepare(session);
+```
+
+5) Prepare Input/Output
+``` c
+ // Prepare input. Here we just allocate dummy input arrays.
+ std::vector<float> input;
+ nnfw_tensorinfo ti;
+ nnfw_input_tensorinfo(session, 0, &ti); // get first input's info
+ uint32_t input_elements = num_elems(&ti);
+ input.resize(input_elements);
+ // TODO: Please add initialization for your input.
+ nnfw_set_input(session, 0, ti.dtype, input.data(), sizeof(float) * input_elements);
+
+ // Prepare output
+ std::vector<float> output;
+ nnfw_output_tensorinfo(session, 0, &ti); // get first output's info
+ uint32_t output_elements = num_elems(&ti);
+ output.resize(output_elements);
+ nnfw_set_output(session, 0, ti.dtype, output.data(), sizeof(float) * output_elements);
+```
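The snippet above relies on a helper `num_elems` that is not shown. A minimal sketch, assuming the tensor info type exposes a `rank` field and a `dims` array (check `nnfw.h` for the actual `nnfw_tensorinfo` layout), could look like:

```cpp
#include <cstdint>

// Hypothetical stand-in for nnfw_tensorinfo; we assume it carries a rank
// and a fixed-size dims array. See nnfw.h for the real definition.
struct tensorinfo
{
  int32_t rank;
  int32_t dims[6];
};

// The element count of a tensor is the product of its dimension sizes.
// A rank-0 (scalar) tensor has one element.
uint64_t num_elems(const tensorinfo *ti)
{
  uint64_t n = 1;
  for (int32_t i = 0; i < ti->rank; ++i)
    n *= ti->dims[i];
  return n;
}
```

The real `nnfw_tensorinfo` may differ in field names; adapt accordingly.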
+6) Inference
+``` c
+ // Do inference
+ nnfw_run(session);
+```
+## Run inference with the app on target devices
+Reference app: [minimal app](https://github.sec.samsung.net/STAR/nnfw/blob/master/runtime/neurun/sample/minimal)
+
+```
+$ ./minimal path_to_nnpackage_directory
+```
diff --git a/docs/nnfw/howto/HowtoMakeSampleAppOnNnfw.md b/docs/nnfw/howto/HowtoMakeSampleAppOnNnfw.md
new file mode 100644
index 000000000..d272a8390
--- /dev/null
+++ b/docs/nnfw/howto/HowtoMakeSampleAppOnNnfw.md
@@ -0,0 +1,132 @@
+# How to make a sample app on nnfw
+
+Our runtime `neurun` currently supports `NNAPI` as its interface. One way to use `NNAPI` efficiently is through tensorflow lite. We provide an additional library in `/libs/tflite` to help with using tensorflow lite. (This library is not officially supported.)
+
+To use tensorflow lite, you need to prepare a tensorflow lite model file and know the input/output tensor names. Then you can write a sample app.
+
+## Prepare loaded tensorflow lite model object
+
+You can select one of two kernel registries: the official tensorflow lite registry or the extended registry (which includes pre-implemented custom ops)
+```
+#include "tensorflow/lite/kernels/register.h"
+#include "tflite/ext/kernels/register.h"
+```
+
+To use the tensorflow lite interpreter, you need the tensorflow lite interpreter session header
+```
+#include "tflite/InterpreterSession.h"
+```
+
+For NNAPI usage, you need the NNAPI session header
+```
+#include "tflite/NNAPISession.h"
+```
+
+Load the model into a `FlatBufferModel` object, create a tensorflow lite operator resolver `BuiltinOpResolver`, and construct a tensorflow lite interpreter builder from them:
+```
+tflite::StderrReporter error_reporter;
+auto model = tflite::FlatBufferModel::BuildFromFile(model_file.c_str(), &error_reporter);
+
+// TODO: determine which BuiltinOpResolver and prepend namespace
+BuiltinOpResolver resolver;
+
+tflite::InterpreterBuilder builder(*model, resolver);
+```
+
+Create a tensorflow lite interpreter by invoking the builder:
+```
+std::unique_ptr<tflite::Interpreter> interpreter;
+builder(&interpreter);
+```
+
+Create a tensorflow lite session to use NNAPI:
+```
+std::shared_ptr<nnfw::tflite::Session> sess = std::make_shared<nnfw::tflite::NNAPISession>(interpreter.get());
+```
+
+If you want to use tensorflow lite interpreter instead of NNAPI, then:
+```
+std::shared_ptr<nnfw::tflite::Session> sess = std::make_shared<nnfw::tflite::InterpreterSession>(interpreter.get());
+```
+
+`NNAPISession` constructs a computational graph from the interpreter and builds the model.
+
+## Prepare tensors memory allocation and model input for inference
+
+Allocate the memory for tensors of `tflite::Interpreter`:
+```
+sess->prepare();
+```
+
+Prepare the inputs. How to prepare them is out of scope here and task specific.<br/>
+Copy the input data into the model, i.e. into `interpreter->inputs`. This is tensorflow lite specific, not nnfw specific, so one can use any method applicable to tensorflow lite, e.g.:
+```
+for (const auto &id : interpreter->inputs())
+{
+ if (interpreter->tensor(id)->name == input_name)
+ {
+ float *p = interpreter->tensor(id)->data.f;
+
+ for (int y = 0; y < height; ++y)
+ {
+ for (int x = 0; x < width; ++x)
+ {
+ for (int c = 0; c < channel; ++c)
+ {
+ *p++ = data[y * width * channel + x * channel + c];
+ }
+ }
+ }
+ }
+}
+```
+where:<br/>
+`input_name` - name of the inputs of the model;<br/>
+`data` - source vector of size `height * width * channel`.
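The nested loops above flatten a (y, x, c) coordinate into a row-major height-width-channel offset. As a self-contained sketch of that index math (plain C++, independent of tensorflow lite; `nhwc_offset` and `copy_input` are hypothetical helper names):

```cpp
#include <vector>

// Row-major HWC offset for a (y, x, c) coordinate; this is exactly the
// expression used in the copy loop above.
inline int nhwc_offset(int y, int x, int c, int width, int channel)
{
  return y * width * channel + x * channel + c;
}

// Copy a height*width*channel source vector into a destination buffer
// in the same layout, mirroring the loop structure above.
void copy_input(const std::vector<float> &data, float *dst,
                int height, int width, int channel)
{
  float *p = dst;
  for (int y = 0; y < height; ++y)
    for (int x = 0; x < width; ++x)
      for (int c = 0; c < channel; ++c)
        *p++ = data[nhwc_offset(y, x, c, width, channel)];
}
```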
+
+## Run the inference and get outputs
+
+Run the inference
+```
+sess->run();
+```
+
+Get the result from `interpreter->outputs()`. This is tensorflow lite specific, not nnfw specific, so one can use any method applicable to tensorflow lite, e.g.:
+```
+for (const auto &id : interpreter->outputs())
+{
+ if (interpreter->tensor(id)->name == output_name)
+ {
+ float *p = interpreter->tensor(id)->data.f;
+
+ for (int i = 0; i < result.capacity(); ++i)
+ {
+ result.push_back(p[i]);
+ }
+ }
+}
+```
+where:<br/>
+`output_name` - name of the outputs of the model;<br/>
+`result` - a float vector to hold the output; it must be reserve()d to the output size beforehand, since the loop above iterates up to `result.capacity()`. That size can be calculated using
+```
+for (const auto &id : interpreter->outputs())
+{
+ if (interpreter->tensor(id)->name == output_name)
+ {
+ TfLiteTensor *t = interpreter->tensor(id);
+ int v = 1;
+ for (int i = 0; i < t->dims->size; ++i)
+ {
+ v *= t->dims->data[i];
+ }
+ return v;
+ }
+}
+return -1;
+```
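Putting the two snippets above together: the output size is simply the product of the tensor's dimensions, and sizing `result` explicitly avoids relying on `capacity()`. A self-contained sketch (plain C++; `output_size` and `fetch_output` are hypothetical helper names, with a plain `dims` vector standing in for `t->dims->data`):

```cpp
#include <vector>

// Product of dimension sizes, mirroring the loop over t->dims above.
int output_size(const std::vector<int> &dims)
{
  int v = 1;
  for (int d : dims)
    v *= d;
  return v;
}

// Copy `size` floats from the tensor's data pointer into `result`.
// Sizing explicitly avoids the pitfall of an un-reserved vector whose
// capacity() is zero.
void fetch_output(const float *p, int size, std::vector<float> &result)
{
  result.clear();
  result.reserve(size);
  for (int i = 0; i < size; ++i)
    result.push_back(p[i]);
}
```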
+
+Release the session
+```
+sess->teardown();
+```
diff --git a/docs/nnfw/howto/RemoteDebuggingForVSCode.md b/docs/nnfw/howto/RemoteDebuggingForVSCode.md
new file mode 100644
index 000000000..c83a09bd5
--- /dev/null
+++ b/docs/nnfw/howto/RemoteDebuggingForVSCode.md
@@ -0,0 +1,147 @@
+# Remote Debugging for Visual Studio Code
+
+This document describes how to debug nnfw on arm devices using Visual Studio Code.
+
+## Install gdb-multiarch on build host
+
+1. Install `gdb-multiarch`
+
+```bash
+$ sudo apt install gdb-multiarch
+```
+
+## Configure VS code on build host
+
+1. Install `Native Debug` extension on VS code
+
+2. Setup GDB environment on VS code
+
+- Debug -> Add configuration -> GDB: Connect to gdbserver
+- Change configuration as below
+ - Change `<TARGET_IP>` to IP of your target
+ - The default port number for gdbserver is 2345. You can change this number.
+ - You can change `executable` configuration from `tflite_run` to other binaries you want to debug.
+
+```json
+{
+ "version": "0.2.0",
+ "configurations": [
+ {
+ "type": "gdb",
+ "request": "attach",
+ "name": "Attach to gdbserver",
+ "gdbpath": "/usr/bin/gdb-multiarch",
+ "executable": "./Product/armv7l-linux.debug/out/bin/tflite_run",
+ "target": "<TARGET_IP>:2345",
+ "remote": true,
+ "printCalls": true,
+ "cwd": "${workspaceRoot}",
+ "valuesFormatting": "parseText"
+ }
+ ]
+}
+```
+
+## Install gdbserver and debugging symbols at target
+
+You need to set up a target device for remote debugging.
+
+1. Install `gdbserver`
+```bash
+$ sudo apt install gdbserver
+```
+
+2. Install `libc6-dbg` and copy debugging symbols
+```bash
+$ sudo apt install libc6-dbg
+$ sudo mkdir -p /lib/.debug
+$ sudo ln -s /usr/lib/debug/lib/arm-linux-gnueabihf/ld-2.27.so /lib/.debug
+```
+
+## Run remote debugging
+
+1. Start gdbserver on target
+
+```bash
+gdbserver --multi :<PORT> <BINARY_PATH> <EXECUTION_ARGUMENTS>
+```
+
+Example
+```bash
+gdbserver --multi :2345 Product/armv7l-linux.debug/out/bin/tflite_run ../models/slice_test.tflite
+```
+
+2. Connect to gdbserver using VS code
+
+- Setup breakpoints on any code you want.
+
+- Press F5 to start remote debugging.
+
+- The program will run to completion and exit if no breakpoint is hit.
+
+## Optional: Setup rootfs on build host
+
+When debugging starts, `gdb` downloads shared libraries that nnfw uses from the target device.
+This process makes `gdb` wait for the shared library download to finish at every debugging start.
+
+To reduce shared library loading, you can setup an arm root file system on your build host and use it.
+
+1. Create arm root file system
+
+Follow [CrossBuildForArm](docs/nnfw/howto/CrossBuildForArm.md) to create an arm root file system.
+
+You can use an arm root file system created for arm cross-compile.
+
+2. Install `libc6-dbg` on arm root file system
+
+`<ROOTFS_DIR>` should point to the ARM root file system.
+
+Default path is `tools/cross/rootfs/arm` folder.
+
+```bash
+$ sudo chroot <ROOTFS_DIR>
+$ apt install libc6-dbg
+$ exit
+```
+
+3. Create symbolic link of nnfw on arm rootfs
+
+`gdb` will look for the source code folder under the sysroot.
+
+```bash
+$ ln -s <NNFW_DIR> <ROOTFS_DIR>/<NNFW_DIR>
+```
+Example
+```bash
+$ ln -s /home/user/nnfw /home/user/nnfw/tools/cross/rootfs/arm/home/user/nnfw
+```
+
+4. Set up a `.gdbinit` file in the nnfw folder
+
+`gdb` will use `<ROOTFS_DIR>` to find arm related symbols.
+
+```bash
+set sysroot <ROOTFS_DIR>
+set debug-file-directory <ROOTFS_DIR>/usr/lib/debug
+```
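For example, assuming the default rootfs path mentioned above and that you start `gdb` from the nnfw folder, the `.gdbinit` could read:

```
set sysroot tools/cross/rootfs/arm
set debug-file-directory tools/cross/rootfs/arm/usr/lib/debug
```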
+
+# Troubleshooting
+
+### Unable to open 'unordered_map.h'
+
+If you are using docker to build nnfw, you should download and decompress gcc-linaro into the `/opt` folder
+
+```bash
+wget https://releases.linaro.org/components/toolchain/binaries/6.3-2017.02/arm-linux-gnueabihf/gcc-linaro-6.3.1-2017.02-x86_64_arm-linux-gnueabihf.tar.xz -O gcc-hardfp.tar.xz
+sudo tar -xf gcc-hardfp.tar.xz -C /opt/ && sudo rm -rf gcc-hardfp.tar.xz
+```
+
+### Skip STL files
+
+Step Into (F11) will step into STL files such as `unordered_map` or `vector`.
+
+To skip those files while debugging, you can add the line below to your `.gdbinit` file.
+
+```bash
+skip -gfile /opt/gcc-linaro-6.3.1-2017.02-x86_64_arm-linux-gnueabihf/arm-linux-gnueabihf/include/c++/6.3.1/bits/*
+```
diff --git a/docs/howto/device/xu3-dip.png b/docs/nnfw/howto/device/xu3-dip.png
index 59c0be3f2..59c0be3f2 100644
--- a/docs/howto/device/xu3-dip.png
+++ b/docs/nnfw/howto/device/xu3-dip.png
Binary files differ
diff --git a/docs/nnfw/howto/device/xu3_tizen.md b/docs/nnfw/howto/device/xu3_tizen.md
new file mode 100644
index 000000000..6473ab9a8
--- /dev/null
+++ b/docs/nnfw/howto/device/xu3_tizen.md
@@ -0,0 +1,140 @@
+# About
+
+This document describes how to flash Tizen 5.5 onto an ODroid XU3.
+
+Host environment is Ubuntu 18.04
+
+This document only covers the eMMC + XU3 case.
+
+# Download files
+
+## Images
+
+Boot
+- https://download.tizen.org/snapshots/tizen/unified/latest/images/standard/tv-boot-armv7l-odroidxu3/
+- download the biggest file
+
+Root FS
+- https://download.tizen.org/snapshots/tizen/unified/latest/images/standard/tv-wayland-armv7l-odroidxu3/
+- download the biggest file
+
+U-Boot images
+```
+wget https://github.com/hardkernel/u-boot/raw/odroidxu3-v2012.07/sd_fuse/hardkernel_1mb_uboot/bl1.bin.hardkernel
+wget https://github.com/hardkernel/u-boot/raw/odroidxu3-v2012.07/sd_fuse/hardkernel_1mb_uboot/bl2.bin.hardkernel.1mb_uboot
+wget https://github.com/hardkernel/u-boot/raw/odroidxu3-v2012.07/sd_fuse/hardkernel_1mb_uboot/tzsw.bin.hardkernel
+```
+
+You also need `u-boot-mmc.bin` that is inside `tizen-unified_20180425.2_tv-boot-armv7l-odroidxu3.tar.gz` file.
+```
+tar xvf tizen-unified_20180425.2_tv-boot-armv7l-odroidxu3.tar.gz u-boot-mmc.bin
+```
+
+
+## Flashing script
+
+Download [sd_fusing_xu4.sh](https://git.tizen.org/cgit/platform/kernel/u-boot/plain/scripts/tizen/sd_fusing_xu4.sh?h=tizen)
+
+This file name contains `xu4` but it also works on xu3.
+
+
+## Files
+
+```
+dragon@loki:~/Works/tizen/odroid-xu3/flashing$ ls -l
+total 1316
+-rw-rw-r-- 1 dragon dragon 15616 9월 5 14:41 bl1.bin.hardkernel
+-rw-rw-r-- 1 dragon dragon 14592 9월 5 14:41 bl2.bin.hardkernel.1mb_uboot
+-rw-rw-r-- 1 dragon dragon 262144 9월 5 14:41 tzsw.bin.hardkernel
+-rwxr-xr-x 1 dragon dragon 1048576 9월 4 15:17 u-boot-mmc.bin
+```
+
+# Flash
+
+Host environment
+- Ubuntu 18.04
+- eMMC connected through microUSB from xu3 to host
+
+## Flash boot files
+
+on target
+```
+...
+
+CPU: Exynos5422 @ 800 MHz
+
+Model: Odroid XU3 based on EXYNOS5422
+Board: Odroid XU3 based on EXYNOS5422
+Type: xu3
+DRAM: 2 GiB
+MMC: EXYNOS DWMMC: 0, EXYNOS DWMMC: 1
+In: serial
+Out: serial
+Err: serial
+Net: No ethernet found.
+Hit any key to stop autoboot: 0
+ODROID-XU3 #
+
+ODROID-XU3 # mmc list
+EXYNOS DWMMC: 0 (eMMC)
+EXYNOS DWMMC: 1
+
+ODROID-XU3 # ums 0 mmc 0
+
+UMS: LUN 0, dev 0, hwpart 0, sector 0x0, count 0x1d5a000
+
+/
+```
+
+then on host
+```
+$ sudo fdisk -l
+..........
+
+Partition table entries are not in disk order
+
+Disk /dev/sdh: 32.0 GB, 32010928128 bytes
+
+64 heads, 32 sectors/track, 30528 cylinders, total 62521344 sectors
+
+Units = sectors of 1 * 512 = 512 bytes
+
+Sector size (logical/physical): 512 bytes / 512 bytes
+
+I/O size (minimum/optimal): 512 bytes / 512 bytes
+
+Disk identifier: 0x00000000
+
+
+Device Boot Start End Blocks Id System
+
+/dev/sdh1 * 8192 139263 65536 e W95 FAT16 (LBA) ..........
+```
+
+```
+$ sudo ../sd_fusing_xu4.sh -d /dev/sdh --format \
+ -b bl1.bin.hardkernel bl2.bin.hardkernel.1mb_uboot tzsw.bin.hardkernel u-boot-mmc.bin
+...
+```
+
+The `--format` option will 1) delete the current partition, 2) create a new partition table, and 3) format each partition.
+
+- If you see the message `./sd_fusing_xu4.sh: line 147: pv: command not found` and want to remove it, install the pv package with `sudo apt-get install pv`
+
+## Flash image files
+```
+$ sudo ../sd_fusing_xu4.sh -d /dev/sdh \
+ -b tizen-unified_20190905.1_tv-boot-armv7l-odroidxu3.tar.gz \
+ tizen-unified_20190905.1_tv-wayland-armv7l-odroidxu3.tar.gz
+```
+
+# After boot
+
+Follow [xu4_tizen](xu4_tizen.md)
+
+# References
+
+- http://suprem.sec.samsung.net/confluence/display/KS/Odroid+XU3
+- http://suprem.sec.samsung.net/confluence/pages/viewpage.action?pageId=104635990
+- http://suprem.sec.samsung.net/confluence/pages/viewpage.action?spaceKey=TPLAB&title=XU3+Image+Flashing
+- http://download.tizen.org/snapshots/tizen/unified/latest/images/standard/
diff --git a/docs/howto/device/xu3_ubuntu.md b/docs/nnfw/howto/device/xu3_ubuntu.md
index 38dbc69b0..38dbc69b0 100644
--- a/docs/howto/device/xu3_ubuntu.md
+++ b/docs/nnfw/howto/device/xu3_ubuntu.md
diff --git a/docs/howto/device/xu4_tizen.md b/docs/nnfw/howto/device/xu4_tizen.md
index 3481be206..a270bef1b 100644
--- a/docs/howto/device/xu4_tizen.md
+++ b/docs/nnfw/howto/device/xu4_tizen.md
@@ -1,8 +1,8 @@
# About
-This will describe how to flash microSD with Tizen-4.0 for ODroid XU4.
+This will describe how to flash microSD with Tizen for ODroid XU4.
-Host environment is Ubuntu 16.04
+The tested host environment is Ubuntu 16.04; the target environment is Tizen 5.5.
# Download files
@@ -13,9 +13,11 @@ Boot
- download the biggest file
Root FS
-- https://download.tizen.org/snapshots/tizen/unified/latest/images/standard/tv-wayland-armv7l-odroidu3/
+- https://download.tizen.org/snapshots/tizen/unified/latest/images/standard/tv-wayland-armv7l-odroidxu3/
- download the biggest file
+If you cannot access the directories `tv-boot-armv7l-odroidxu3` or `tv-wayland-armv7l-odroidxu3`, or cannot find images in them, go to https://download.tizen.org/snapshots/tizen/unified/ and find the latest snapshot that includes images for Odroid-XU3.
+
U-Boot images
```
wget https://github.com/hardkernel/u-boot/raw/odroidxu3-v2012.07/sd_fuse/hardkernel_1mb_uboot/bl1.bin.hardkernel
@@ -23,23 +25,15 @@ wget https://github.com/hardkernel/u-boot/raw/odroidxu3-v2012.07/sd_fuse/hardker
wget https://github.com/hardkernel/u-boot/raw/odroidxu3-v2012.07/sd_fuse/hardkernel_1mb_uboot/tzsw.bin.hardkernel
```
-You also need `u-boot-mmc.bin` that is inside `tizen-unified_20180425.2_tv-boot-armv7l-odroidxu3.tar.gz` file.
-```
-tar xvf tizen-unified_20180425.2_tv-boot-armv7l-odroidxu3.tar.gz u-boot-mmc.bin
-```
-
-
## Flashing script
-Download `sd_fusing_xu4-u1604.sh` from https://github.sec.samsung.net/RS7-RuntimeNTools/TizenTools/tree/master/sd_fusing_xu4
+Download `sd_fusing_xu4.sh` from https://git.tizen.org/cgit/platform/kernel/u-boot/plain/scripts/tizen/sd_fusing_xu4.sh?h=tizen
-This file is modified to work on Ubuntu 16.04.
-
-You can download original (What I got in the first place) file as `sd_fusing_xu4.sh`
+This file works on Ubuntu 16.04 and 18.04
Make it executable
```
-chmod u+x sd_fusing_xu4-u1604.sh
+chmod u+x sd_fusing_xu4.sh
```
@@ -47,28 +41,29 @@ chmod u+x sd_fusing_xu4-u1604.sh
You should see like this
```
--rw-rw-r-- 1 maxwell maxwell 15616 Mar 23 17:11 bl1.bin.hardkernel
--rw-rw-r-- 1 maxwell maxwell 14592 Mar 23 17:10 bl2.bin.hardkernel.1mb_uboot
--rw-rw-r-- 1 maxwell maxwell 9290646 Apr 26 02:35 tizen-unified_20180425.2_tv-boot-armv7l-odroidxu3.tar.gz
--rw-rw-r-- 1 maxwell maxwell 346530499 Apr 26 02:59 tizen-unified_20180425.2_tv-wayland-armv7l-odroidu3.tar.gz
--rw-rw-r-- 1 maxwell maxwell 262144 Mar 23 17:11 tzsw.bin.hardkernel
--rwxr-xr-x 1 maxwell maxwell 1048576 Apr 26 02:35 u-boot-mmc.bin*
+-rw-r--r-- 1 hseok82 hseok82 15616 11월 5 13:56 bl1.bin.hardkernel
+-rw-r--r-- 1 hseok82 hseok82 14592 11월 5 13:56 bl2.bin.hardkernel.1mb_uboot
+-rwxrwxr-x 1 hseok82 hseok82 8040 11월 5 13:53 sd_fusing_xu4.sh
+-rw-rw-r-- 1 hseok82 hseok82 10515369 11월 5 14:01 tizen-unified_20191105.1_tv-boot-armv7l-odroidxu3.tar.gz
+-rw-rw-r-- 1 hseok82 hseok82 465487683 11월 5 14:01 tizen-unified_20191105.1_tv-wayland-armv7l-odroidxu3.tar.gz
+-rw-r--r-- 1 hseok82 hseok82 262144 11월 5 13:56 tzsw.bin.hardkernel
```
-
# Flash
Host environment
- Ubuntu 16.04
- microSD connected through USB Reader as `/dev/sdd` file.
-## Flash boot files
+## Flash boot files and image files
Give `--format` if it's a new flash memory.
```
-sudo ./sd_fusing_xu4-u1604.sh --format \
+sudo ./sd_fusing_xu4.sh --format \
-d /dev/sdd \
--b bl1.bin.hardkernel bl2.bin.hardkernel.1mb_uboot tzsw.bin.hardkernel u-boot-mmc.bin
+-b bl1.bin.hardkernel bl2.bin.hardkernel.1mb_uboot tzsw.bin.hardkernel \
+tizen-unified_20191105.1_tv-boot-armv7l-odroidxu3.tar.gz \
+tizen-unified_20191105.1_tv-wayland-armv7l-odroidxu3.tar.gz
```
Change `/dev/sdd` to your configuration.
@@ -80,30 +75,25 @@ y
You can omit `--format` from the second time and followings.
```
-sudo ./sd_fusing_xu4-u1604.sh \
+sudo ./sd_fusing_xu4.sh \
-d /dev/sdd \
--b bl1.bin.hardkernel bl2.bin.hardkernel.1mb_uboot tzsw.bin.hardkernel u-boot-mmc.bin
+-b bl1.bin.hardkernel bl2.bin.hardkernel.1mb_uboot tzsw.bin.hardkernel \
+tizen-unified_20191105.1_tv-boot-armv7l-odroidxu3.tar.gz \
+tizen-unified_20191105.1_tv-wayland-armv7l-odroidxu3.tar.gz
```
The `--format` option will 1) delete the current partition, 2) create a new partition table, and 3) format each partition.
-- If you meet `./sd_fusing_xu4-u1604.sh: line 147: pv: command not found` message and want to remove this message, install pv package by `sudo apt-get install pv`
-
-## Flash image files
-```
-sudo ./sd_fusing_xu4-u1604.sh -d /dev/sdd \
--b tizen-unified_20180425.2_tv-boot-armv7l-odroidxu3.tar.gz \
-tizen-unified_20180425.2_tv-wayland-armv7l-odroidu3.tar.gz
-```
+- If you see the message `./sd_fusing_xu4.sh: line 147: pv: command not found` and want to remove it, install the pv package with `sudo apt-get install pv`
-# Boot with Tizen 4.0
+# Boot with Tizen
Follow the steps
Step 1.
- Take out eMMC memory card if you have any
-Step 2.
-- Plug-In microSD with Tizen 4.0
+Step 2.
+- Plug-In microSD with Tizen
Step 3. Set boot switch
- Refer https://wiki.odroid.com/odroid-xu4/hardware/hardware
@@ -144,36 +134,23 @@ If the fan noise is disturbing, you can slow down a little.
echo "100" > /sys/devices/platform/pwm-fan/hwmon/hwmon0/pwm1
```
This will slow down the speed to 100. Range is from 0 to 255. "0" to make it stop. "255" for maximum speed.
+This value resets automatically after reboot, so you may have to set it again every time you reboot or whenever the fan gets loud again.
-This value resets after reboot so may have to set the value every time you reboot.
+Another solution is to change the CPU governor policy for the big cores to `ondemand`
-## Expand root file system
+```
+echo ondemand | tee /sys/devices/system/cpu/cpu{0..7}/cpufreq/scaling_governor
+```
-Default Root FS is 3G but the size shows about the size of the image file, about 700MB.
+## Remount root file system writable
-There would be not enough space to install files. To overcome this do the following in Tizen root shell.
+The default root FS (except `/opt/usr`) is read-only. If you want to modify the FS, you need to remount it as writable.
```
mount -o remount,rw /
-resize2fs /dev/mmcblk0p2
-sync
-```
-And reboot
-```
-reboot
-```
-
-`df` before and after
-```
-Filesystem 1K-blocks Used Available Use% Mounted on
-/dev/root 754716 721228 8764 99% /
-```
-to
-```
-Filesystem 1K-blocks Used Available Use% Mounted on
-/dev/root 3031952 724724 2282504 25% /
```
+This resets after reboot, so edit `/etc/fstab` if you want the FS mounted writable on every boot
## Wide console
@@ -231,11 +208,15 @@ connected to 10.113.xxx.yyy:26101
With `sdb devices`,
```
sdb devices
-List of devices attached
+List of devices attached
10.113.xxx.yyy:26101 device xu3
```
It comes up as `xu3` since our `xu4` uses the same `xu3` image.
+# (Optional) Install OpenCL
+
+To use the arm compute CL backend, install OpenCL.
+You can get OpenCL for Tizen from the Tizen Mali DDK.
# Known issue
- `ls -al` of root folder shows strange output.
diff --git a/docs/howto/device/xu4_ubuntu.md b/docs/nnfw/howto/device/xu4_ubuntu.md
index 7b8a3aa2b..7b8a3aa2b 100644
--- a/docs/howto/device/xu4_ubuntu.md
+++ b/docs/nnfw/howto/device/xu4_ubuntu.md
diff --git a/docs/nnfw/op_list.md b/docs/nnfw/op_list.md
new file mode 100644
index 000000000..a19c0937a
--- /dev/null
+++ b/docs/nnfw/op_list.md
@@ -0,0 +1,71 @@
+# List of Operations Supported by Runtime
+
+The list is based on commit 6f09c89f90216aed7df792.
+
+**Notice: There may be some restrictions on the support of each operation. Details will be updated soon.**
+
+
+| Operation Name | acl_cl | acl_neon | srcn | cpu |
+| -------------------------- | --- | ----- | -- | --- |
+| Abs | O | O | | |
+| Add | O | O | O | O |
+| ArgMax | O | O | | |
+| AvgPool2D | O | O | | |
+| BatchToSpaceND | O | O | | |
+| Cast | O | O | | |
+| Comparison | O | O | | |
+| Concat | O | O | | O |
+| Conv2D | O | O | O | O |
+| Custom | | | | O |
+| DepthToSpace | O | O | | |
+| DepthwiseConv2D | O | O | O | O |
+| Dequantize | O | O | | |
+| Div | O | O | | |
+| EmbeddingLookup | O | O | | |
+| Exp | O | O | | |
+| Floor | O | O | | |
+| FullyConnected | O | O | | O |
+| Gather | O | O | | O |
+| HashtableLookup | O | O | | |
+| InstanceNorm | O | O | O | |
+| L2Normalization | O | O | | |
+| L2Pool2D | O | O | | |
+| LSTM | O | O | | |
+| LocalResponseNormalization | O | O | | |
+| LogicalAnd | O | O | | |
+| LogicalNot | O | O | | |
+| LogicalOr | O | O | | |
+| Logistic | O | O | | O |
+| Max | O | O | | |
+| MaxPool2D | O | O | | O |
+| Mean | O | O | | |
+| Min | O | O | | |
+| Mul | O | O | | O |
+| Neg | O | O | | |
+| PReLU | O | O | | |
+| Pack | O | O | | |
+| Pad | O | O | | O |
+| Permute | O | O | | O |
+| RNN | O | O | | |
+| RSQRT | O | O | | |
+| ReLU | O | O | | |
+| ReLU1 | O | O | | |
+| ReLU6 | O | O | | |
+| ReduceMax | O | O | | |
+| ReduceMin | O | O | | |
+| ReduceSum | O | O | | |
+| Reshape | O | O | | O |
+| ResizeBilinear | O | O | | |
+| SQRT | O | O | | |
+| Softmax | O | O | | O |
+| SpaceToBatchND | O | O | | |
+| SpaceToDepth | O | O | | |
+| Split | O | O | | |
+| SquaredDifference | O | O | | |
+| Squeeze | O | O | | O |
+| StridedSlice | O | O | | |
+| Sub | O | O | | O |
+| Tanh | O | O | | |
+| TopKV2 | O | | | |
+| Transpose | O | O | | |
+| TransposeConv | O | O | O | |
diff --git a/docs/nnfw/roadmap.md b/docs/nnfw/roadmap.md
new file mode 100644
index 000000000..c04bab66b
--- /dev/null
+++ b/docs/nnfw/roadmap.md
@@ -0,0 +1,76 @@
+This document describes the 2019 roadmap of the NN Runtime (or _nnfw_) project.
+
+# Goal
+
+This project _nnfw_ aims at providing a high-performance, on-device neural network (NN) inference
+framework that performs inference of a given NN model on processors, such as CPU, GPU, or NPU, in
+the target platform, such as Tizen and Android.
+
+Last year in 2018, we already saw significant gains in accelerating with a single CPU or GPU
+back-end. Now we want to gain more benefits by using a mixture of CPU and GPU according to each
+operation characteristic. It could give us an opportunity to have a high degree of freedom in terms
+of operator coverage, and possibly provide better performance compared to single back-end
+acceleration.
+
+On the other hand, we are going to introduce a new compiler to the front-end. This will support a
+variety of deep learning frameworks in relatively spacious host PC environments, while the runtime
+running on the target device is intended to take a smaller burden. In this process, the compiler and
+the runtime will effectively share information among themselves by the Common IR, which is referred
+to as the NN Package.
+
+# Architecture
+
+![nnfw_architecture](./fig/nnfw_architecture.png)
+
+The figure above illustrates the overall architecture and scope of _nnfw_, along with _nncc_, a
+sibling project, to aid understanding. In this document, we deal specifically with _nnfw_.
+
+The _nnfw_ can be divided into three parts: NN API, NN Runtime, and NN Compute, the last of
+which is provided by the platform.
+
+1. NN API
+ - Provide a common interface to application.
+ - Last year, Android NN API was selected for seamless integration with TF Lite. As long as our
+ NN runtime provides Android NN API as an interface, TF Lite can link to our NN runtime without
+ any modification.
+ - In choosing Android NN API, we expected standardization and rapid adoption. But the results
+ were far less than that. We could not control its specifications, and its growth rate was too
+ slow to accommodate our needs. So we try to define our own new one, NN Runtime API, in this
+ year. (Once the new API is stable, we provide a way to replace the Android NN API and it will
+ naturally be deprecated.)
+1. NN Runtime
+ - It already provides significant performance improvements using CPU or GPU acceleration. Now we
+ want to add the flexibility to this by providing various functions suitable to specific device
+ configuration.
+ - Mixed back-end acceleration enables various usage scenarios according to device-specific CPU
+ or GPU configurations and usage conditions.
+ - By introducing an interpreter, it will respond to dynamic conditions that the compiler can not
+ handle, and will effectively utilize the memory through the memory manager.
+1. NN Compute
+ - Provide computation acceleration library, such as ACL, or device driver for NPU.
+ - This layer will be provided by OS platform, and we will use the library or device driver as it
+ is. We may request a specific version to the Platform team, but we don't expect we will be
+ modifying the library.
+ - In this year, we will also introduce an extension mechanism to support custom operations on
+ this part.
+
+# Deliverables
+
+- On-Device AI SW stack for Tizen
+ + Advanced runtime support with interpreter, memory manager, and execution planner.
+ + Provides back-end flexibility, such as CPU/GPU mixed acceleration
+ + Well designed custom op support.
+ + Basic infrastructure for NPU support.
+- Specification and implementation of Common IR and Runtime API
+
+# Milestones
+
+- [Project Milestones](https://github.sec.samsung.net/orgs/STAR/projects/1)
+- [Monthly Milestones](https://github.sec.samsung.net/STAR/nnfw/projects/25)
+
+# Workgroups (WGs)
+
+- We organize WGs for major topics, and each WG will be working on its own major topic by breaking
+ it into small tasks/issues, performing them inside WG, and collaborating between WGs.
+- The WG information can be found [here](workgroups.md).
+
diff --git a/docs/tests/Convolution_manual_3x3.xlsx b/docs/nnfw/tests/Convolution_manual_3x3.xlsx
index 7211f6ab3..7211f6ab3 100644
--- a/docs/tests/Convolution_manual_3x3.xlsx
+++ b/docs/nnfw/tests/Convolution_manual_3x3.xlsx
Binary files differ
diff --git a/docs/tests/Softmax_manual.xlsx b/docs/nnfw/tests/Softmax_manual.xlsx
index 5ad4b8b2b..5ad4b8b2b 100644
--- a/docs/tests/Softmax_manual.xlsx
+++ b/docs/nnfw/tests/Softmax_manual.xlsx
Binary files differ
diff --git a/docs/project/2018_high_level_design.md b/docs/project/2018_high_level_design.md
deleted file mode 100644
index 7be495b34..000000000
--- a/docs/project/2018_high_level_design.md
+++ /dev/null
@@ -1,79 +0,0 @@
-# Software High Level Design
-
-## Design
-
-### SW System Overall Architecture
-
-![nnfw_architecture](../fig/nnfw_architecture.png)
-
-The figure above illustrates the overall software stack including _nnfw_, which consists of ML
-Framework and NN Runtime, and NN Compute. Note that NN Compute is provided by the underlying
-platform, and NN Runtime utilizes NN Compute for operation acceleration. The next bullets describe
-the role of each module and the background of design choice in the architecture.
-
-1. ML Framework
- - Provide TensorFlow (TF) Lite on Tizen and SMP
- - We chose TF Lite as a standard ML framework in _nnfw_ for this year, since TF Lite is
- lightweight compared to other ML frameworks and its community is rapidly growing. We expect
- supporting TF Lite on Samsung's OS platforms would be beneficial to Samsung's diverse
- business areas and AI solutions.
- - Provide TF Lite C# API for Tizen .NET
- - Considering the existing TF Lite supports only C++ and Java API, C# API for TF Lite would
- be a great complement to TF Lite and natural extension for Tizen.
-1. NN Runtime
- - Provide a common runtime interface, which is Android NN API
- - Android NN API (NN API for short) was selected for seamless integration with TF Lite. As
- long as our NN runtime provides NN API as an interface, TF Lite can link to our NN runtime
- without any modification.
- - Although we borrowed NN API as the runtime's interface, we plan to design and implement the
- runtime itself by ourselves. For the implementation, we would utilize compute libraries,
- e.g., ARM Compute Library (ACL), or device driver provided by NN Compute, for NN operation
- acceleration on CPU and GPU.
-1. NN Compute
- - Provide computation acceleration library, such as ACL, or device driver for NPU
- - This layer will be provided by OS platform, and we will use the library or device driver as it
- is. We may request a specific version to the Platform team, but we don't expect we will be
- modifying the library.
-
-
-### SW Structure Design
-
-1. ML Framework
- - Provide TensorFlow (TF) Lite on Tizen and SMP
- - Provide TF Lite C# API for Tizen .NET
-1. NN Runtime
- - Provide an implementation of Android NN API
- - Provide hardware acceleration of NN operations on the target platform
-
-
-### SW Behavior Design
-
-![nnfw_behavior](../fig/nnfw_behavior.png)
-
-The figure above depicts the execution flow from the user input to the inference result output.
-
-1. Input is a TF Lite model and is fed into TF Lite. Application on Tizen or SMP may use C# or C++
- API to load and run the input model.
-1. TF Lite determines whether the NN runtime is provided and supported on the platform or not. If
- supported, it constructs an internal model from the TF Lite model for the NN runtime and passes
- the model to the runtime. Otherwise, it invokes the internal interpreter, which runs on CPU, in
- order to perform inference with the given model.
-1. If the NN runtime receives the model from TF Lite, it consults with the Execution Planner about
- how to decompose operations in the model and how to make an execution order of the operations.
- The Execution Planner also decides which backend between CPU fallback and ACL kernel backend
- could be better performing depending on operations.
-1. When the NN runtime finishes the inference on CPU or GPU (or on both), it returns the output to
- TF Lite, which again delivers the output to the application.
-
-
-### SW Interface Design
-
-1. ML Framework
- - Java and C++ API of TF Lite will be provided as it is.
- - C# API will be defined as the project makes progress.
-1. NN Runtime
- - Public API for NN Runtime is the same as Android NN API that is provided in [Android
- oreo-m2-release](https://android.googlesource.com/platform/frameworks/ml/+/oreo-m2-release).
- - The API is defined in
- [NeuralNetworks.h](../../include/NeuralNetworks.h).
-
diff --git a/docs/project/2018_requirement_specification.md b/docs/project/2018_requirement_specification.md
deleted file mode 100644
index 90e3937ef..000000000
--- a/docs/project/2018_requirement_specification.md
+++ /dev/null
@@ -1,113 +0,0 @@
-# Software Requirement Specification
-
-## Background
-Artificial intelligence (AI) techniques are getting popular and utilized in various products and
-services. While the cloud-based AI techniques have been used to perform compute/memory intensive
-inferences because of the powerful servers on cloud, on-device AI technologies are recently drawing
-attention from the mobile industry for response time reduction, privacy protection, and
-connection-less AI service. Big mobile players, such as Google, Apple, and Huawei, are investing
-their research effort on the on-device AI technologies and already announced hardware and software
-on-device AI solutions. Samsung is not leading this trend currently, but since on-device AI area is
-just started and still in the initial state, there are still opportunities and possibilities to
-reduce the gap between pioneer companies and Samsung. We believe on-device AI will become a key
-differentiator for mobile phone, TV, and other home appliances, and thus developing on-device AI
-software stack is of paramount importance in order to take leadership in the on-device AI
-technology.
-
-Although the vision of on-device AI is promising, enabling on-device AI involves unique technical
-challenges compared to traditional cloud-based approach. This is because on-device AI tries to
-conduct inference tasks solely on device without connecting to cloud resources. Specifically,
-hardware resources on device, such as processor performance, memory capacity, and power budget, are
-very scarce and limit the compute capability, which is typically required to execute complicated
-neural network (NN) models. For example, in one product requirement, a mobile device should consume
-less than 1.2W and could use at most 2W only for 10 minutes due to thermal issue. Next, on-device
-AI software stack needs to support diverse device environments, since embedded platforms may consist
-of heterogeneous compute devices, such as CPU, GPU, DSP, or neural processing unit (NPU), and use
-different OS platforms, such as Tizen, Android, or Smart Machine OS.
-
-To tackle the challenges above and to have the leadership on on-device AI technology, this project,
-as the first step, aims at developing a neural network inference framework specialized and optimized
-for on-device AI.
-
-
-## Product Context
-
-This project _nnfw_ aims at providing a high-performance, on-device neural network (NN) inference
-framework that performs inference of a given NN model on processors, such as CPU, GPU, or NPU, in
-the target platform, such as Tizen and Smart Machine Platform (SMP).
-
-### Expected Value
-
-We expect the following would be possible with _nnfw_:
-
-- To improve user experience by reducing the service response time
-- To provide AI services without network connection while achieving similar performance
-- To protect personal information and company confidential by limiting data transfer to the network
-
-
-### Success Criteria
-
-The goals of this project are:
-
-- To support all 50 TensorFlow (TF) Lite operations on ARM CPU and GPU
-- To support all 29 operations of Android Neural Network (NN) API on ARM CPU and GPU
-- To support InceptionV3 and MobileNet, written in TF Lite model format, on ARM CPU and GPU
-
-
-### Target
-
-_nnfw_ targets two platforms with two target devices:
-
-- ODroid XU4 running Tizen 5.0
-- MV8890 running Smart Machine Platform 1.0
-
-
-### Product Roadmap
-
-- March: Set up milestones, tasks, workgroups, initial code structure, and build/test infra
-- April: Run InceptionV3 using ARM Compute Library (ACL) on ODroid XU4 running Tizen
-- May: Run MobileNet on Tizen / Tizen M1 release
-- June: Run ADAS models on Tizen
-- July: STAR Platform preview release
-- October: Tizen M2 release / SMP v1.0 release / STAR Platform v1.0 release
-
-
-## Requirements
-
-### Functionality Requirements
-
-_nnfw_ has the following functionality requirements:
-
-1. Run InceptionV3 on Tizen
- - Description
- - Support InceptionV3, written in TF Lite model format, on Tizen
- - Run on ARM CPU and GPU
- - Validation
- - Run the test code that executes InceptionV3 on Tizen CPU
- - Run the test code that executes InceptionV3 on Tizen GPU
- - Compare the results of test codes with that using the TF Lite interpreter
-1. Run MobileNet on Tizen
- - Description
- - Support MobileNet, written in TF Lite model format, on Tizen
- - Run on ARM CPU and GPU
- - Validation
- - Run the test code that executes MobileNet on Tizen CPU
- - Run the test code that executes MobileNet on Tizen GPU
- - Compare the results of test codes with that using the TF Lite interpreter
-1. Support 50 TF Lite operations and 29 NN API operations
- - Description
- - Support 50 TF Lite operations on Tizen for ARM CPU and GPU
- - Support 50 TF Lite operations on SMP for ARM CPU and GPU
- - Support 29 NN API operations on Tizen for ARM CPU and GPU
- - Support 29 NN API operations on SMP for ARM CPU and GPU
- - Validation
- - Run the test code for operations on Tizen CPU
- - Run the test code for operations on Tizen GPU
- - Run the test code for operations on SMP CPU
- - Run the test code for operations on SMP GPU
- - Compare the results of test codes with that using the TF Lite interpreter
-
-
-### Non-Functionality Requirements
-
-_nnfw_ does not have non-functionality requirements.
diff --git a/docs/release/release_note_1.0.0.md b/docs/release/release_note_1.0.0.md
new file mode 100644
index 000000000..e5f58d1fa
--- /dev/null
+++ b/docs/release/release_note_1.0.0.md
@@ -0,0 +1,65 @@
+# NNAS 1.0.0 Release Note
+Welcome to the first release of NNAS!
+
+## Feature Highlights
+
+- `nnpackage` : package format for NNAS
+- `nncc` : compiler collection for converting neural network models to `nnpackage`
+ - Currently supports 28 operations and 3 models
+ - Model optimization
+- `nnfw` : on-device runtime for running `nnpackage` on multiple devices
+ - Currently supports 63 operations
+ - Heterogeneous Execution
+ - (Experimental) Custom operation support
+
+## nnpackage
+`nnpackage` is our new package format for handling various model formats in a uniform way within NNAS.
+
+Please refer to `nnpackage`'s [spec documentation](https://github.sec.samsung.net/STAR/nnfw/blob/master/nnpackage/spec) for the details.
+
+## nncc
+
+### Guide
+- Compilation tutorial : [inception_v3 compilation](https://github.sec.samsung.net/STAR/nnfw/blob/master/docs/nncc/v1.0.0/tutorial.md)
+- Detailed compilation guide : [getting started](https://github.sec.samsung.net/STAR/nnfw/blob/master/docs/nncc/v1.0.0/getting_started.md)
+
+### Supported Operations and Models
+
+#### Operations
+The compiler supports a total of [28 operations](https://github.sec.samsung.net/STAR/nnfw/blob/master/docs/nncc/v1.0.0/operation-list.md).
+
+#### Models
+_Note that the compiler does not yet support quantized models (e.g., QASYMM8)._
+
+The compiler officially supports the following models:
+- Inception V3 (FLOAT32 model)
+- MobileNet V1 (FLOAT32 model)
+- Style Transfer (FLOAT32 model)
+
+
+### Model Optimizations
+- Constant Folding
+- Remove dead operations
+- Remove `Identity`
+- Resolve duplicate `Reshape`
+- Resolve redundant `Reshape`
+- Merge `Concat`
+- Fuse some fusible operations
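These are standard compiler passes. As a toy illustration of the constant-folding idea only (a sketch over a made-up tuple IR, not nncc's actual IR or pass implementation):

```python
# Toy constant folding over a tiny expression IR: a node is either a leaf
# (a constant or an input name) or a tuple ("op", arg1, arg2, ...).
def fold(node):
    if not isinstance(node, tuple):  # leaf: constant or input name
        return node
    op, *args = node
    args = [fold(a) for a in args]   # fold children first (bottom-up)
    if all(isinstance(a, (int, float)) for a in args):
        if op == "add":              # all inputs constant: compute at compile time
            return sum(args)
        if op == "mul":
            r = 1
            for a in args:
                r *= a
            return r
    return (op, *args)               # a non-constant input blocks folding

# ("add", 2, 3) folds to 5; the input "x" keeps the outer "mul" alive.
print(fold(("mul", ("add", 2, 3), "x")))  # ('mul', 5, 'x')
```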
+
+## nnfw
+
+### Guide
+- You can run your own app with an nnpackage via the [nnfw API](https://github.sec.samsung.net/STAR/nnfw/blob/master/runtime/neurun/api/include/nnfw.h). A guide is available at [Usage guide](https://github.sec.samsung.net/STAR/nnfw/blob/master/docs/nnfw/howto).
+- A minimal sample app built with the nnfw API is also provided: [minimal app](https://github.sec.samsung.net/STAR/nnfw/blob/master/runtime/neurun/sample/minimal).
+
+### Target Devices
+The runtime does not restrict which target devices your app can run on, as long as our backends support them. However, the dev team uses [odroid-xu4](https://www.hardkernel.com/shop/odroid-xu4-special-price/) as a reference target. For setting up an odroid-xu4 board, see the [arm Ubuntu guide](https://github.sec.samsung.net/STAR/nnfw/blob/master/docs/nnfw/howto/device/xu4_ubuntu.md) and the [arm Tizen guide](https://github.sec.samsung.net/STAR/nnfw/blob/master/docs/nnfw/howto/device/xu4_tizen.md).
+
+### Supported Operations
+The runtime supports 63 NN operations. Note that operation coverage differs per backend; please refer to the [Runtime OP table](https://github.sec.samsung.net/STAR/nnfw/blob/master/docs/nnfw/op_list.md) for the full list.
+
+### Heterogeneous Execution
+The runtime provides 4 backends: CPU, Compute Library OpenCL (acl_cl), Compute Library NEON (acl_neon), and SRCN. Each backend has its own characteristics. To exploit them, the runtime provides a way to assign a specific backend at the operation level; please see the `nnfw_set_op_backend` function of the [nnfw API](https://github.sec.samsung.net/STAR/nnfw/blob/master/runtime/neurun/api/include/nnfw.h) for details. For a concrete example, refer to the [minimal app](https://github.sec.samsung.net/STAR/nnfw/blob/master/runtime/neurun/sample/minimal).
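Conceptually, per-operation backend assignment boils down to an override table consulted when planning execution. A toy sketch of the idea (the op names and the override mapping are illustrative; this is not the runtime's actual planner):

```python
# Toy sketch of per-operation backend assignment, the idea behind
# nnfw_set_op_backend; backend names follow the list above.
DEFAULT_BACKEND = "acl_cl"
overrides = {"Conv2D": "cpu"}  # e.g., pin Conv2D to the CPU backend

def backend_for(op_type):
    # Use the user-assigned backend if one exists, else the default.
    return overrides.get(op_type, DEFAULT_BACKEND)

plan = [(op, backend_for(op)) for op in ["Conv2D", "Relu", "FullyConnected"]]
print(plan)  # [('Conv2D', 'cpu'), ('Relu', 'acl_cl'), ('FullyConnected', 'acl_cl')]
```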
+
+### (Experimental) Custom Operator
+If your model contains an operation that the runtime does not support, you can still run the model by providing a custom operator, i.e., your own implementation of that operation. For more details, refer to the [custom operator documentation](https://github.sec.samsung.net/STAR/nnfw/blob/master/nnpackage/spec/30_custom_op.md). Note that this feature is experimental and subject to change.
diff --git a/docs/release/release_note_1.1.0.md b/docs/release/release_note_1.1.0.md
new file mode 100644
index 000000000..f267c4cbe
--- /dev/null
+++ b/docs/release/release_note_1.1.0.md
@@ -0,0 +1,40 @@
+# NNAS 1.1.0 Release Note
+
+## Feature Highlights
+
+- `nncc`
+ - Available for Tizen Studio in Windows
+ - Available for Visual Studio in Windows
+- `nnfw`
+ - Interpreter supports more operations
+ - CPU Arithmetic kernels support broadcasting
+ - Fully Connected Operation supports hybrid quantization
+
+
+## nncc
+
+### Available for Tizen Studio in Windows
+We now support `nncc` in Tizen Studio as a plugin. For detailed information and a simple tutorial, please refer to the [NNCC Installation Guide for Tizen Studio](../nncc/v1.1.0/nncc_in_tizen_studio.md).
+
+#### Known Issues
+- Output directory of nnpackage is fixed to `res/shared`.
+
+### Available for Visual Studio in Windows
+We now support `nncc` in Visual Studio as a Tizen extension. For detailed information and a simple tutorial, please refer to the [NNCC Installation Guide for Visual Studio](../nncc/v1.1.0/nncc_in_visual_studio.md).
+
+#### Known Issues
+- The `nncc` Visual Studio extension only accepts `model.pb` and `model.info` in the `model` folder.
+  - To create an nnpackage from, e.g., `model2.pb` and `model2.info`, rename them to `model.pb` and `model.info` first.
+- Output directory of nnpackage is fixed to `res/shared/model`.
+
+## nnfw
+
+### Interpreter supports more operations
+The following operations are now supported on the interpreter:
+- Activation : Relu, Relu1, Relu6, Tanh
+- Logistic
+- Gather
+- Instance Normalization
+- Transpose Convolution
+
+### CPU Arithmetic kernels support broadcasting
+
+### Fully Connected Operation supports hybrid quantization
+Note that this support is only for the acl_neon backend. See the [hybrid quantization document](https://www.tensorflow.org/lite/performance/post_training_quantization#weight_quantization) for more about hybrid quantization.
\ No newline at end of file
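For intuition, hybrid (weight-only) quantization stores weights as int8 with a floating-point scale and dequantizes them at compute time. A toy symmetric-int8 sketch of the idea (illustrative only; not the acl_neon kernel):

```python
# Toy weight-only ("hybrid") quantization: int8 values plus one float scale.
def quantize(weights):
    m = max(abs(w) for w in weights)
    scale = m / 127.0 if m > 0 else 1.0      # map the largest weight to +/-127
    q = [round(w / scale) for w in weights]  # int8 range [-127, 127]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.5, -1.27, 0.0, 1.27]
q, s = quantize(w)
# each dequantized weight is within one quantization step of the original
assert all(abs(a - b) <= s for a, b in zip(w, dequantize(q, s)))
```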
diff --git a/docs/workgroups.md b/docs/workgroups.md
deleted file mode 100644
index b258c3971..000000000
--- a/docs/workgroups.md
+++ /dev/null
@@ -1,19 +0,0 @@
-For faster communication and development, we organize workgroups (WGs) based on major topics
-described in [#61](https://github.sec.samsung.net/STAR/nnfw/issues/61). All WGs will work together
-to achieve the goal of _nnfw_ project, but each WG will define its own tasks and milestones, set its
-own sprints, and conduct its tasks. All WGs will sync up through github (note that github is our
-primary communication channel, and thus using github for communication is highly recommended) and
-on/off-line meetings.
-
-Current WGs based on the major topics in [#61](https://github.sec.samsung.net/STAR/nnfw/issues/61)
-and their root issue links are as follows:
-
-1. ML Framework (MLFW) WG
- - [Tasks and Milestones](https://github.sec.samsung.net/STAR/nnfw/issues/74)
-2. NN Runtime (NNRT) WG
- - [Tasks and Milestones](https://github.sec.samsung.net/STAR/nnfw/issues/72)
-3. NN API Operations (NNOP) WG
- - [Tasks and Milestones](https://github.sec.samsung.net/STAR/nnfw/issues/73)
-
-If you would like to participate in any WGs above or create a new WG, please create an issue or
-leave a comment at [#87](https://github.sec.samsung.net/STAR/nnfw/issues/87).