# C-Sharp TFLite API Directory structure
```
.
├── packaging
│   ├── TFLiteSharp.manifest
│   └── TFLiteSharp.spec
├── README.md
├── TFLiteNative
│   ├── CMakeLists.txt
│   ├── include
│   │   ├── tflite_log.h
│   │   └── tflite_nativewrapper.h
│   ├── src
│   │   └── tflite_nativewrapper.cpp
│   └── tflite-native.pc.in
├── TFLiteSharp
│   ├── TFLiteSharp
│   │   ├── src
│   │   │   └── Interpreter.cs
│   │   └── TFLiteSharp.csproj
│   └── TFLiteSharp.sln
└── TFLiteSharpTest
    ├── TFLiteSharpTest
    │   ├── Program.cs
    │   └── TFLiteSharpTest.csproj
    └── TFLiteSharpTest.sln
```

# Build C-Sharp TFLite
The TFLiteSharp package is built with gbs, the same tool used to build nnfw. Because most nnfw builds do not need TFLiteSharp, its build is kept separate; to build TFLiteSharp, run:
```
nnfw$ gbs build --packaging-dir=contrib/TFLiteSharp/packaging/ --spec=TFLiteSharp.spec -A armv7l
```
This will first build the TFLiteNative package, which contains the native C++ bindings between the C# API and the TFLite API, and then build TFLiteSharp (the C# API package).

Please use the gbs.conf file corresponding to your Tizen image version. In most cases it is the same gbs.conf used to build nnfw.
# C-Sharp TFLite API list

## Interpreter Class

### Constructor

The `Interpreter.cs` class drives model inference with TensorFlow Lite.

#### Initializing an `Interpreter` With a Model File

The `Interpreter` can be initialized with a model file using the constructor:

```c#
public Interpreter(string modelFile);
```

The number of threads available to the interpreter can be set with the following function:
```c#
public void SetNumThreads(int num_threads)
```
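
For example, a minimal sketch that constructs an interpreter and limits it to two threads. The model path `mobilenet_v1.tflite` is only an illustrative placeholder:

```c#
// Hypothetical model path used only for illustration.
Interpreter interpreter = new Interpreter("mobilenet_v1.tflite");

// Restrict inference to two CPU threads.
interpreter.SetNumThreads(2);
```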

### Running a model

If a model takes only one input and returns only one output, the following will trigger an inference run:

```c#
interpreter.Run(input, output);
```
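
For instance, assuming a model with a single float input tensor and a single float output tensor (the array shapes below are illustrative only):

```c#
// Illustrative shapes: a 1x4 input producing a 1x3 output.
float[] input = new float[] { 0.1f, 0.2f, 0.3f, 0.4f };
float[] output = new float[3];

interpreter.Run(input, output);
```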

For models with multiple inputs or outputs, use:

```c#
interpreter.RunForMultipleInputsOutputs(inputs, map_of_indices_to_outputs);
```
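
A sketch of how such a call might look, assuming the inputs are passed as an `object[]` and the outputs as a dictionary keyed by output index; the exact parameter types should be checked against `Interpreter.cs`:

```c#
// Requires: using System.Collections.Generic;
// Two illustrative input tensors.
object[] inputs = new object[]
{
    new float[] { 1.0f, 2.0f },
    new float[] { 3.0f, 4.0f }
};

// Map each output index to a pre-allocated output buffer.
var outputs = new Dictionary<int, object>
{
    { 0, new float[3] },
    { 1, new float[1] }
};

interpreter.RunForMultipleInputsOutputs(inputs, outputs);
```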

The C# API also provides functions that return the model's input and output tensor indices given a tensor name:

```c#
public int GetInputIndex(String tensorName)
public int GetOutputIndex(String tensorName)
```
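
For example, assuming the model contains tensors named `"input"` and `"output"` (the names are placeholders and are model-specific):

```c#
// Tensor names are model-specific; these are placeholders.
int inputIndex = interpreter.GetInputIndex("input");
int outputIndex = interpreter.GetOutputIndex("output");
```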

Developers can also enable or disable the use of the NN API, depending on hardware capabilities:
```c#
public void SetUseNNAPI(bool useNNAPI)
```
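
For example, to request NN API acceleration when the target hardware supports it:

```c#
// Pass false to fall back to the default CPU path.
interpreter.SetUseNNAPI(true);
```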

### Releasing Resources After Use

An `Interpreter` owns native resources. To avoid memory leaks, the resources must be
released after use by calling:

```c#
interpreter.Dispose();
```
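
If `Interpreter` implements `IDisposable` (which `Dispose()` suggests, though this sketch only assumes it), a `using` block releases the resources automatically:

```c#
// Assumes Interpreter implements IDisposable; verify in Interpreter.cs.
// The model path and tensor shapes are illustrative placeholders.
using (var interpreter = new Interpreter("mobilenet_v1.tflite"))
{
    float[] input = new float[] { 0.1f, 0.2f, 0.3f, 0.4f };
    float[] output = new float[3];
    interpreter.Run(input, output);
}   // Dispose() is called here, even if Run throws.
```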