Diffstat (limited to 'compiler/one-cmds/how-to-use-one-commands.txt')
-rw-r--r--  compiler/one-cmds/how-to-use-one-commands.txt  125
1 file changed, 120 insertions(+), 5 deletions(-)
diff --git a/compiler/one-cmds/how-to-use-one-commands.txt b/compiler/one-cmds/how-to-use-one-commands.txt
index 0ee69e077..028cde47a 100644
--- a/compiler/one-cmds/how-to-use-one-commands.txt
+++ b/compiler/one-cmds/how-to-use-one-commands.txt
@@ -1,7 +1,7 @@
About
-----
-Last update: 2020-07-31
+Last update: 2020-10-29
This document briefly explains how to use one-* commands.
Detailed options are not explained here. Run the command to see options.
@@ -20,8 +20,75 @@ Compilation flow for NPU
4) one-codegen will compile to binary codes.
+common features
+---------------
+
+[configuration file]
+
+You can run one-commands with a configuration file as well as with command line parameters. The
+configuration file should contain the options that the one-commands need to run.
+
+```
+# configuration_file.cfg
+
+[The_driver_you_want_to_run]
+input_path=/input/path/to/convert
+output_path=...
+option_0=...
+option_1=...
+...
+
+```
+
+You can see a template file for how to write a configuration file in `one-build.template.cfg`.
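+
+As an illustration only, a minimal configuration for a single driver might look like the sketch
+below; the section and key names follow the template above, but the paths are placeholders, and
+each driver's `-h` output is the authoritative list of what it accepts.
+
+```
+# my-conf.cfg (illustrative sketch)
+[one-import-tf]
+input_path=/path/to/model.pb
+output_path=model.circle
+```
+
+You would then pass this file to the driver with the `-C` option, as shown below.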
+
+[options to write]
+
+Sometimes you want to change certain options without touching the configuration file. If you
+pass an option directly on the command line, it takes precedence over the value in the
+configuration file. A list of options can be found in each driver's help message via the `-h` option.
+
+e.g.
+```
+$ ./one-import tf -C my-conf.cfg -i path/to/overwrite.pb
+```
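+
+In the example above, the `-i` value overrides the corresponding input path entry in
+`my-conf.cfg`. To see which options a given driver accepts, run it with `-h`, for example:
+
+```
+$ ./one-import-tf -h
+```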
+
+
+one-build
+---------
+
+one-build is an integrated driver that can execute several one-commands in a single run. Running
+each driver individually works fine, but sometimes you will want to put together the most
+frequently used commands and run them all at once. You can do this with one-build and its
+configuration file.
+
+For one-build, the configuration file needs a 'one-build' section that consists of a list of drivers.
+
+```
+# one-build.template.cfg
+[one-build]
+one-import-tf=True
+one-import-tflite=False
+one-import-bcq=False
+one-optimize=True
+one-quantize=False
+one-pack=True
+one-codegen=False
+
+[one-import-tf]
+...
+
+[one-optimize]
+...
+
+[one-pack]
+...
+
+```
+See 'one-build.template.cfg' for more details.
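+
+As a sketch, a typical invocation passes the configuration file with the same `-C` option used by
+the individual drivers; one-build then runs only the drivers whose entries in the [one-build]
+section are set to True, following the compilation flow described above.
+
+```
+$ ./one-build -C one-build.template.cfg
+```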
+
+
one-import
------------
+----------
one-import invokes one-import-* commands.
@@ -30,14 +97,15 @@ Syntax: one-import [framework] [options]
Currently supported frameworks are 'tf' and 'tflite', for TensorFlow and TensorFlow
Lite.
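+
+As a sketch, importing a TensorFlow Lite model follows the same pattern as the 'tf' example in
+the common features section; the configuration file name here is a placeholder.
+
+```
+$ ./one-import tflite -C my-conf.cfg
+```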
+
one-import-bcq
--------------
+--------------
This will convert a TensorFlow model file (.pb) to our circle model file, applying BCQ.
To execute this command, the original TensorFlow model file must include BCQ information.
This command invokes the following scripts internally.
-- preserve_bcq_info : Prevent BCQ information vanishing problem
+- generate_bcq_metadata : Generate BCQ metadata in the model
- generate_bcq_info : Designate BCQ information nodes as model output automatically
- tf2tfliteV2 : Convert Tensorflow model to tflite model
- tflite2circle : Convert Tensorflow Lite model to circle model
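+
+Usage follows the other import drivers. As a sketch, assuming the configuration file contains a
+[one-import-bcq] section with the input and output paths of a BCQ-aware model:
+
+```
+$ ./one-import-bcq -C my-conf.cfg
+```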
@@ -58,7 +126,7 @@ one-import-tf
This will convert a TensorFlow model (.pb) file to our circle model. You can also
directly call this command. one-import-tf invokes the tf2tfliteV2.py script, which
internally uses the TensorFlow Lite converter and then invokes the tflite2circle
-converter to convert tflite model to circle model.
+converter to convert tflite model to circle model.
As tf2tfliteV2.py runs the TensorFlow Lite converter, you need to have TensorFlow
installed on your system. We recommend using 2.3.0 for now.
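+
+One quick way to check which TensorFlow version is installed before running one-import-tf:
+
+```
+$ python3 -c "import tensorflow as tf; print(tf.__version__)"
+```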
@@ -81,16 +149,63 @@ one-optimize
one-optimize provides the network or operator transformations listed below (a configuration
sketch follows the list).
Current transformation options are:
+- disable_validation : This will turn off operator validations.
+- expand_broadcast_const : This will expand broadcastable constant node inputs
+- fold_add_v2 : This removes AddV2 operation which can be folded
+- fold_cast : This removes Cast operation which can be folded
+- fold_densify: This removes Densify operator which can be folded
+- fold_dequantize : This removes Dequantize operation which can be folded
+- fold_dwconv : This folds Depthwise Convolution operation which can be folded
+- fold_gather : This removes Gather operation which can be folded
+- fold_sparse_to_dense : This removes SparseToDense operation which can be folded
+- forward_reshape_to_unaryop: This will move Reshape after UnaryOp under certain conditions
+- fuse_add_with_fully_connected: This fuses Add operator with the preceding FullyConnected operator if possible
+- fuse_add_with_tconv: This fuses Add operator with the preceding TConv operator if possible
+- fuse_batchnorm_with_conv : This fuses BatchNorm operator to convolution operator
+- fuse_batchnorm_with_dwconv : This fuses BatchNorm operator to depthwise convolution operator
+- fuse_batchnorm_with_tconv : This fuses BatchNorm operator to transpose convolution operator
- fuse_bcq: This enables Binary-Coding-based Quantized DNNs
- read https://arxiv.org/abs/2005.09904 for detailed information
- fuse_instnorm: This will convert instance normalization related operators to
one InstanceNormalization operator that our onert provides for faster
execution.
+- fuse_prelu: This will fuse operators to PReLU operator
+- fuse_preactivation_batchnorm: This fuses batch normalization operators of pre-activations to Conv operators.
+- fuse_activation_function: This fuses Activation function to a preceding operator.
+- fuse_mean_with_mean: This fuses two consecutive ReduceMean operations into one.
+- fuse_transpose_with_mean: This fuses ReduceMean with a preceding Transpose under certain conditions.
+- make_batchnorm_gamma_positive: This makes negative gamma of batch normalization into a small positive value (1e-10).
+ Note that this pass can change the execution result of the model.
+ So, use it only when the impact is known to be acceptable.
+- mute_warnings : This will turn off warning messages.
+- generate_profile_data : This will turn on profiling data generation.
+- remove_fakequant : This will remove all fakequant operators.
+- remove_quantdequant : This will remove all Quantize-Dequantize sequences.
+- remove_redundant_quantize : This removes redundant quantize operators.
+- remove_redundant_reshape : This fuses or removes redundant reshape operators.
+- remove_redundant_transpose : This fuses or removes redundant transpose operators.
+- remove_unnecessary_reshape : This removes unnecessary reshape operators.
+- remove_unnecessary_slice : This removes unnecessary slice operators.
+- remove_unnecessary_strided_slice : This removes unnecessary strided slice operators.
+- remove_unnecessary_split : This removes unnecessary split operators.
+- replace_cw_mul_add_with_depthwise_conv: This will replace channel-wise Mul/Add with DepthwiseConv2D.
- resolve_customop_add: This will convert Custom(Add) to normal Add operator
- resolve_customop_batchmatmul: This will convert Custom(BatchMatMul) to
normal BatchMatMul operator
- resolve_customop_matmul: This will convert Custom(MatMul) to normal MatMul
operator
+- resolve_customop_max_pool_with_argmax: This will convert Custom(MaxPoolWithArgmax)
+  to a network of built-in operators.
+- shuffle_weight_to_16x1float32 : This will convert weight format of FullyConnected to SHUFFLED16x1FLOAT32.
+ Note that it only converts weights whose row is a multiple of 16.
+- substitute_pack_to_reshape : This will convert a single-input Pack to Reshape.
+- substitute_padv2_to_pad : This will convert PadV2 to Pad under certain conditions.
+- substitute_splitv_to_split : This will convert SplitV to Split under certain conditions.
+- substitute_squeeze_to_reshape : This will convert Squeeze to Reshape under certain conditions.
+- substitute_strided_slice_to_reshape : This will convert StridedSlice to Reshape under certain conditions.
+- substitute_transpose_to_reshape : This will convert Transpose to Reshape under certain conditions.
+- transform_min_max_to_relu6: This will transform Minimum-Maximum pattern to Relu6 operator.
+- transform_min_relu_to_relu6: This will transform Minimum(6)-Relu pattern to Relu6 operator.
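+
+As a configuration sketch for the list above: the chosen transformation, option keys, and paths
+are placeholders, and `one-optimize -h` is the authoritative reference for the options your
+version accepts.
+
+```
+# my-conf.cfg (illustrative sketch)
+[one-optimize]
+input_path=model.circle
+output_path=model.opt.circle
+fuse_instnorm=True
+```
+
+You would then run `./one-optimize -C my-conf.cfg`.
+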
one-quantize