# How to Build Runtime

This document assumes a system with a default installation of Ubuntu Desktop Linux 18.04 LTS, but the steps apply to other environments with little change. For reference, development of this project started on Ubuntu Desktop Linux 16.04 LTS.

## Build requirements

To build this project, the following packages must be installed on your system:

- CMake
- Boost C++ libraries

On Ubuntu, you can install them with the following command.

```
$ sudo apt-get install cmake libboost-all-dev
```
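
If you want to confirm that the tools are available before building, a quick check (a convenience step, not required by the build):

```
$ cmake --version
$ dpkg -s libboost-all-dev | grep Version
```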

If your Linux system does not have a basic development configuration, you will need to install more packages. A list of all packages needed to set up the development environment can be found in https://github.com/Samsung/ONE/blob/master/infra/docker/Dockerfile.1804.

Here is a summary:

```
$ sudo apt install \
build-essential \
clang-format-3.9 \
cmake \
doxygen \
git \
graphviz \
hdf5-tools \
lcov \
libatlas-base-dev \
libboost-all-dev \
libgflags-dev \
libgoogle-glog-dev \
libgtest-dev \
libhdf5-dev \
libprotobuf-dev \
protobuf-compiler \
pylint \
python3 \
python3-pip \
python3-venv \
scons \
software-properties-common \
unzip \
wget

# libgtest-dev installs only the googletest sources; build and install the static libraries
$ mkdir /tmp/gtest
$ cd /tmp/gtest
$ cmake /usr/src/gtest
$ make
$ sudo mv *.a /usr/lib

# Python tools used by the project's scripts
$ pip install yapf==0.22.0 numpy
```
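
To confirm that the manual googletest step took effect, you can check that the static libraries landed in `/usr/lib` (this is where the commands above put them):

```
$ ls /usr/lib/libgtest*.a
```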

## Build from source code, for Ubuntu

In a typical Linux development environment, including Ubuntu, you can build the runtime with a simple command like this:

```
$ git clone https://github.com/Samsung/ONE.git one
$ cd one
$ make -f Makefile.template install
```
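
If you prefer to type plain `make` in the following steps, you can create a symbolic link to the template first. This is just a local convenience; the `Makefile` symlink is not part of the repository:

```
$ ln -s Makefile.template Makefile
$ make install
```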

Unfortunately, the debug build currently fails on the x86_64 architecture. To solve the problem, you must use gcc version 9 or higher. Another workaround is to do a release build rather than a debug build. This is not suitable for debugging during development, but it is enough to verify the runtime's functionality. To make a release build, set the environment variable `BUILD_TYPE=release` for the build command as follows.

```
$ export BUILD_TYPE=release
$ make -f Makefile.template install
```

Or you can simply do something like this:

```
$ BUILD_TYPE=release make -f Makefile.template install
```
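
If you need the debug build itself, the alternative is to build with a newer compiler. A minimal sketch, assuming `gcc-9`/`g++-9` are available (on Ubuntu 18.04 they typically come from the `ubuntu-toolchain-r/test` PPA) and that the CMake-based build picks up the standard `CC`/`CXX` variables on a clean configure:

```
$ sudo add-apt-repository ppa:ubuntu-toolchain-r/test
$ sudo apt-get update && sudo apt-get install gcc-9 g++-9
$ CC=gcc-9 CXX=g++-9 make -f Makefile.template install
```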

The build method described here is a `native build`, in which the build environment and execution environment are the same. This command therefore creates a runtime binary targeting the current build architecture, most likely x86_64, as the execution environment. You can find the build output in the ./Product folder, as follows:

```
$ tree -L 2 ./Product
./Product
├── obj -> /home/sjlee/star/one/Product/x86_64-linux.debug/obj
├── out -> /home/sjlee/star/one/Product/x86_64-linux.debug/out
└── x86_64-linux.debug
    ├── BUILD
    ├── CONFIGURE
    ├── INSTALL
    ├── obj
    └── out

5 directories, 3 files

$ tree -L 3 ./Product/out
./Product/out
├── bin
│   ├── nnapi_test
│   ├── nnpackage_run
│   ├── tflite_loader_test_tool
│   └── tflite_run
├── include
│   ├── nnfw
│   │   ├── NeuralNetworksEx.h
│   │   ├── NeuralNetworksExtensions.h
│   │   ├── NeuralNetworks.h
│   │   ├── nnfw_experimental.h
│   │   └── nnfw.h
│   └── onert
│       ├── backend
│       ├── compiler
│       ├── exec
│       ├── ir
│       ├── misc
│       └── util
├── lib
│   ├── libbackend_cpu.so
│   ├── libcircle_loader.so
│   ├── libneuralnetworks.so
│   ├── libnnfw-dev.so
│   ├── libnnfw_lib_benchmark.so
│   ├── libnnfw_lib_misc.a
│   ├── libonert_core.so
│   └── libtflite_loader.so
├── tests
│   ├── FillFrom_runner
│   ├── nnpkgs
│   │   └── FillFrom
│   └── scripts
│       ├── benchmark_nnapi.sh
│       ├── benchmark_nnpkg.sh
│       ├── common.sh
│       ├── framework
│       ├── list
│       ├── print_to_json.sh
│       ├── test-driver.sh
│       ├── test_framework.sh
│       ├── test_scheduler_with_profiling.sh
│       └── unittest.sh
├── unittest
│   ├── nnapi_gtest
│   ├── nnapi_gtest.skip
│   ├── nnapi_gtest.skip.noarch.interp
│   ├── nnapi_gtest.skip.x86_64-linux.cpu
│   ├── test_compute
│   ├── test_onert
│   ├── test_onert_backend_cpu_common
│   ├── test_onert_frontend_nnapi
│   └── tflite_test
└── unittest_standalone
    └── nnfw_api_gtest

19 directories, 36 files

```

Here, recall that the main target of our project is the arm architecture. If you have a Linux development environment on a device with an arm CPU, such as the Odroid-XU4, the same command above will produce a runtime binary that runs on the arm architecture. This is the simplest way to get a binary for an arm device. However, in most cases a native build on an arm device is impractical because it takes too long. Therefore, we create executable binaries for architectures other than x86_64 through a `cross build`. For the cross-build method for each architecture, please refer to the corresponding document in the following section, [How to cross-build runtime for different architecture](#how-to-cross-build-runtime-for-different-architecture).

### Run test

A simple way to check whether the build was successful is to run inference on an NN model with the runtime. A model for the test can be obtained as follows.

```
$ wget https://storage.googleapis.com/download.tensorflow.org/models/tflite/model_zoo/upload_20180427/inception_v3_2018_04_27.tgz
$ tar zxvf inception_v3_2018_04_27.tgz ./inception_v3.tflite
$ ls *.tflite
inception_v3.tflite
```

The result of running the inception_v3 model with the runtime is shown below. Note that this test simply measures execution latency and does not consider the model's accuracy.

```
$ USE_NNAPI=1 LD_LIBRARY_PATH="./Product/out/lib/:$LD_LIBRARY_PATH" ./Product/out/bin/tflite_run ./inception_v3.tflite
nnapi function 'ANeuralNetworksModel_create' is loaded from './Product/out/lib/libneuralnetworks.so'
nnapi function 'ANeuralNetworksModel_addOperand' is loaded from './Product/out/lib/libneuralnetworks.so'
nnapi function 'ANeuralNetworksModel_setOperandValue' is loaded from './Product/out/lib/libneuralnetworks.so'
nnapi function 'ANeuralNetworksModel_addOperation' is loaded from './Product/out/lib/libneuralnetworks.so'
nnapi function 'ANeuralNetworksModel_identifyInputsAndOutputs' is loaded from './Product/out/lib/libneuralnetworks.so'
nnapi function 'ANeuralNetworksModel_finish' is loaded from './Product/out/lib/libneuralnetworks.so'
nnapi function 'ANeuralNetworksCompilation_create' is loaded from './Product/out/lib/libneuralnetworks.so'
nnapi function 'ANeuralNetworksCompilation_finish' is loaded from './Product/out/lib/libneuralnetworks.so'
input tensor indices = [317,]
nnapi function 'ANeuralNetworksExecution_create' is loaded from './Product/out/lib/libneuralnetworks.so'
nnapi function 'ANeuralNetworksExecution_setInput' is loaded from './Product/out/lib/libneuralnetworks.so'
nnapi function 'ANeuralNetworksExecution_setOutput' is loaded from './Product/out/lib/libneuralnetworks.so'
nnapi function 'ANeuralNetworksExecution_startCompute' is loaded from './Product/out/lib/libneuralnetworks.so'
nnapi function 'ANeuralNetworksEvent_wait' is loaded from './Product/out/lib/libneuralnetworks.so'
nnapi function 'ANeuralNetworksEvent_free' is loaded from './Product/out/lib/libneuralnetworks.so'
nnapi function 'ANeuralNetworksExecution_free' is loaded from './Product/out/lib/libneuralnetworks.so'
... run 1 takes 183.895 ms
output tensor indices = [316(max:905),]
===================================
MODEL_LOAD   takes 1.108 ms
PREPARE      takes 0.190 ms
EXECUTE      takes 183.895 ms
- MEAN     :  183.895 ms
- MAX      :  183.895 ms
- MIN      :  183.895 ms
- GEOMEAN  :  183.895 ms
===================================
nnapi function 'ANeuralNetworksCompilation_free' is loaded from './Product/out/lib/libneuralnetworks.so'
nnapi function 'ANeuralNetworksModel_free' is loaded from './Product/out/lib/libneuralnetworks.so'
```

Here, `USE_NNAPI=1` means that the **ONE** runtime is used for model inference; if omitted, the model is executed with TensorFlow Lite, the baseline framework used for verification. `LD_LIBRARY_PATH="./Product/out/lib/:$LD_LIBRARY_PATH"` specifies the location of the runtime libraries to use for the test. From the build output above, you can see that this is the directory containing `libneuralnetworks.so` and `libonert_core.so`.
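
For example, to run the same model through the TensorFlow Lite interpreter for comparison, omit `USE_NNAPI=1`; you should see a similar timing summary, without the nnapi function loading messages:

```
$ LD_LIBRARY_PATH="./Product/out/lib/:$LD_LIBRARY_PATH" ./Product/out/bin/tflite_run ./inception_v3.tflite
```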

If you have reached this point without any problems, you now have the basic environment for runtime development.

## Build for Tizen

(Will be written)

## Build using docker image

If your development system is not a Linux environment like Ubuntu but docker is available, you can build the runtime using a pre-configured docker image. Of course, you can also use a docker image on Ubuntu to avoid setting up a complicated development environment yourself. For more information, please refer to the following document.

- [Build using prebuilt docker image](how-to-build-runtime-using-prebuilt-docker-image.md)

## How to cross-build runtime for different architecture

Please refer to the following documents to build for architectures other than x86_64, the default development environment:

- [Cross building for ARM](how-to-cross-build-runtime-for-arm.md)
- [Cross building for AARCH64](how-to-cross-build-runtime-for-aarch64.md)
- [Cross building for Android](how-to-cross-build-runtime-for-android.md)