namespace arm_compute
{
/** @mainpage Documentation
@tableofcontents
@section S0_introduction Introduction
The ARM Computer Vision and Machine Learning library is a set of functions optimised for both ARM CPUs and GPUs using SIMD technologies.
Several builds of the library are available using various configurations:
- OS: Linux, Android or bare metal.
- Architecture: armv7a (32bit) or arm64-v8a (64bit)
- Technology: NEON / OpenCL / NEON and OpenCL
- Debug / Asserts / Release: Use a build with asserts enabled to debug your application and enable extra validation. Once you are sure your application works as expected you can switch to a release build of the library for maximum performance.
@subsection S0_1_contact Contact / Support
Please email developer@arm.com
In order to facilitate the work of the support team, please provide the build information of the library you are using. To get the version of the library, simply run:
$ strings android-armv7a-cl-asserts/libarm_compute.so | grep arm_compute_version
arm_compute_version=v16.12 Build options: {'embed_kernels': '1', 'opencl': '1', 'arch': 'armv7a', 'neon': '0', 'asserts': '1', 'debug': '0', 'os': 'android', 'Werror': '1'} Git hash=f51a545d4ea12a9059fe4e598a092f1fd06dc858
@section S1_file_organisation File organisation
This archive contains:
- The arm_compute header and source files
- The latest Khronos OpenCL 1.2 C headers from the <a href="https://www.khronos.org/registry/cl/">Khronos OpenCL registry</a>
- The latest Khronos cl2.hpp from the <a href="https://www.khronos.org/registry/cl/">Khronos OpenCL registry</a> (API version 2.1 when this document was written)
- The sources for a stub version of libOpenCL.so to help you build your application.
- An examples folder containing a few examples to compile and link against the library.
- A @ref test_helpers folder containing headers with some boilerplate code used by the examples.
- This documentation.
You should have the following file organisation:
.
├── arm_compute --> All the arm_compute headers
│ ├── core
│ │ ├── CL
│ │ │ ├── CLKernels.h --> Includes all the OpenCL kernels at once
│ │ │ ├── CL specialisation of all the generic objects interfaces (ICLTensor, ICLImage, etc.)
│ │ │ ├── kernels --> Folder containing all the OpenCL kernels
│ │ │ │ └── CL*Kernel.h
│ │ │ └── OpenCL.h --> Wrapper to configure the Khronos OpenCL C++ header
│ │ ├── CPP
│ │ │ └── kernels --> Folder containing all the CPP kernels
│ │ │   └── CPP*Kernel.h
│ │ ├── NEON
│ │ │ ├── kernels --> Folder containing all the NEON kernels
│ │ │ │ └── NE*Kernel.h
│ │ │ └── NEKernels.h --> Includes all the NEON kernels at once
│ │ ├── All common basic types (Types.h, Window, Coordinates, Iterator, etc.)
│ │ ├── All generic objects interfaces (ITensor, IImage, etc.)
│ │ └── Objects metadata classes (ImageInfo, TensorInfo, MultiImageInfo)
│ └── runtime
│ ├── CL
│ │ ├── CL objects & allocators (CLArray, CLImage, CLTensor, etc.)
│ │ ├── functions --> Folder containing all the OpenCL functions
│ │ │ └── CL*.h
│ │ └── CLFunctions.h --> Includes all the OpenCL functions at once
│ ├── CPP
│ │ └── CPPScheduler.h --> Basic pool of threads to execute CPP/NEON code on several cores in parallel
│ ├── NEON
│ │ ├── functions --> Folder containing all the NEON functions
│ │ │ └── NE*.h
│ │ └── NEFunctions.h --> Includes all the NEON functions at once
│ └── Basic implementations of the generic object interfaces (Array, Image, Tensor, etc.)
├── documentation
│ ├── index.xhtml
│ └── ...
├── documentation.xhtml -> documentation/index.xhtml
├── examples
│ ├── cl_convolution.cpp
│ ├── neoncl_scale_median_gaussian.cpp
│ ├── neon_convolution.cpp
│ └── neon_scale.cpp
├── include
│ └── CL
│ └── Khronos OpenCL C headers and C++ wrapper
├── opencl-1.2-stubs
│ └── opencl_stubs.c
├── src
│ ├── core
│ │ └── ... (Same structure as headers)
│ │ └── CL
│ │ └── cl_kernels --> All the OpenCL kernels
│ └── runtime
│ └── ... (Same structure as headers)
└── test_helpers --> Boilerplate code used by examples
└── Utils.h
@section S2_versions_changelog Release versions and changelog
@subsection S2_1_versions Release versions
All releases are numbered vYY.MM, where YY are the last two digits of the year and MM the month number.
If there is more than one release in a month then an extra sequential number is appended at the end:
v17.03 (First release of March 2017)
v17.03.1 (Second release of March 2017)
v17.04 (First release of April 2017)
@note We aim to publish one major public release with new features per quarter. All releases in between will only contain bug fixes.
@subsection S2_2_changelog Changelog
v17.05 Public bug fixes release
- Various bug fixes
- Remaining functions ported to use accurate padding.
- The library does not link against OpenCL anymore (it uses dlopen / dlsym at runtime instead to determine whether or not OpenCL is available).
- Added "free" method to allocator.
- Minimum version of G++ required for armv7 Linux changed from 4.8 to 4.9
v17.04 Public bug fixes release
The following functions have been ported to use the new accurate padding:
- @ref CLColorConvertKernel
- @ref CLEdgeNonMaxSuppressionKernel
- @ref CLEdgeTraceKernel
- @ref CLGaussianPyramidHorKernel
- @ref CLGaussianPyramidVertKernel
- @ref CLGradientKernel
- @ref NEChannelCombineKernel
- @ref NEFillArrayKernel
- @ref NEGaussianPyramidHorKernel
- @ref NEGaussianPyramidVertKernel
- @ref NEHarrisScoreFP16Kernel
- @ref NEHarrisScoreKernel
- @ref NEHOGDetectorKernel
- @ref NELogits1DMaxKernel
- @ref NELogits1DShiftExpSumKernel
- @ref NELogits1DNormKernel
- @ref NENonMaximaSuppression3x3FP16Kernel
- @ref NENonMaximaSuppression3x3Kernel
v17.03.1 First Major public release of the sources
- Renamed the library to arm_compute
- New CPP target introduced for C++ kernels shared between NEON and CL functions.
- New padding calculation interface introduced and ported most kernels / functions to use it.
- New OpenCL kernels / functions:
- @ref CLGEMMLowpMatrixMultiplyKernel / @ref CLGEMMLowp
- New NEON kernels / functions:
- @ref NENormalizationLayerKernel / @ref NENormalizationLayer
- @ref NETransposeKernel / @ref NETranspose
- @ref NELogits1DMaxKernel, @ref NELogits1DShiftExpSumKernel, @ref NELogits1DNormKernel / @ref NESoftmaxLayer
- @ref NEIm2ColKernel @ref NECol2ImKernel @ref NEConvolutionLayerWeightsReshapeKernel / @ref NEConvolutionLayer
- @ref NEGEMMMatrixAccumulateBiasesKernel / @ref NEFullyConnectedLayer
- @ref NEGEMMLowpMatrixMultiplyKernel / @ref NEGEMMLowp
v17.03 Sources preview
- New OpenCL kernels / functions:
- @ref CLGradientKernel, @ref CLEdgeNonMaxSuppressionKernel, @ref CLEdgeTraceKernel / @ref CLCannyEdge
- GEMM refactoring + FP16 support: @ref CLGEMMInterleave4x4Kernel, @ref CLGEMMTranspose1xWKernel, @ref CLGEMMMatrixMultiplyKernel, @ref CLGEMMMatrixAdditionKernel / @ref CLGEMM
- @ref CLGEMMMatrixAccumulateBiasesKernel / @ref CLFullyConnectedLayer
- @ref CLTransposeKernel / @ref CLTranspose
- @ref CLLKTrackerInitKernel, @ref CLLKTrackerStage0Kernel, @ref CLLKTrackerStage1Kernel, @ref CLLKTrackerFinalizeKernel / @ref CLOpticalFlow
- @ref CLNormalizationLayerKernel / @ref CLNormalizationLayer
- @ref CLLaplacianPyramid, @ref CLLaplacianReconstruct
- New NEON kernels / functions:
- @ref NEActivationLayerKernel / @ref NEActivationLayer
- GEMM refactoring + FP16 support (Requires armv8.2 CPU): @ref NEGEMMInterleave4x4Kernel, @ref NEGEMMTranspose1xWKernel, @ref NEGEMMMatrixMultiplyKernel, @ref NEGEMMMatrixAdditionKernel / @ref NEGEMM
- @ref NEPoolingLayerKernel / @ref NEPoolingLayer
v17.02.1 Sources preview
- New OpenCL kernels / functions:
- @ref CLLogits1DMaxKernel, @ref CLLogits1DShiftExpSumKernel, @ref CLLogits1DNormKernel / @ref CLSoftmaxLayer
- @ref CLPoolingLayerKernel / @ref CLPoolingLayer
- @ref CLIm2ColKernel @ref CLCol2ImKernel @ref CLConvolutionLayerWeightsReshapeKernel / @ref CLConvolutionLayer
- @ref CLRemapKernel / @ref CLRemap
- @ref CLGaussianPyramidHorKernel, @ref CLGaussianPyramidVertKernel / @ref CLGaussianPyramid, @ref CLGaussianPyramidHalf, @ref CLGaussianPyramidOrb
- @ref CLMinMaxKernel, @ref CLMinMaxLocationKernel / @ref CLMinMaxLocation
- @ref CLNonLinearFilterKernel / @ref CLNonLinearFilter
- New NEON FP16 kernels (Requires armv8.2 CPU)
- @ref NEAccumulateWeightedFP16Kernel
- @ref NEBox3x3FP16Kernel
- @ref NENonMaximaSuppression3x3FP16Kernel
v17.02 Sources preview
- New OpenCL kernels / functions:
- @ref CLActivationLayerKernel / @ref CLActivationLayer
- @ref CLChannelCombineKernel / @ref CLChannelCombine
- @ref CLDerivativeKernel / @ref CLChannelExtract
- @ref CLFastCornersKernel / @ref CLFastCorners
- @ref CLMeanStdDevKernel / @ref CLMeanStdDev
- New NEON kernels / functions:
- HOG / SVM: @ref NEHOGOrientationBinningKernel, @ref NEHOGBlockNormalizationKernel, @ref NEHOGDetectorKernel, @ref NEHOGNonMaximaSuppressionKernel / @ref NEHOGDescriptor, @ref NEHOGDetector, @ref NEHOGGradient, @ref NEHOGMultiDetection
- @ref NENonLinearFilterKernel / @ref NENonLinearFilter
- Introduced a CLScheduler to manage the default context and command queue used by the runtime library and create synchronisation events.
- Switched all the kernels / functions to use tensors instead of images.
- Updated documentation to include instructions to build the library from sources.
v16.12 Binary preview release
- Original release
@section S3_how_to_build How to build the library and the examples
@subsection S3_1_build_options Build options
scons 2.3 or above is required to build the library.
To see the available build options, simply run ```scons -h```:
debug: Debug (default=0) (0|1)
default: 0
actual: 0
asserts: Enable asserts (This flag is forced to 1 for debug=1) (default=0) (0|1)
default: 0
actual: 0
arch: Target Architecture (default=armv7a) (armv7a|arm64-v8a|arm64-v8.2-a|x86_32|x86_64)
default: armv7a
actual: armv7a
os: Target OS (default=linux) (linux|android|bare_metal)
default: linux
actual: linux
build: Build type: (default=cross_compile) (native|cross_compile)
default: cross_compile
actual: cross_compile
Werror: Enable/disable the -Werror compilation flag (Default=1) (0|1)
default: 1
actual: 1
opencl: Enable OpenCL support(Default=1) (0|1)
default: 1
actual: 1
neon: Enable Neon support(Default=0) (0|1)
default: 0
actual: 0
embed_kernels: Embed OpenCL kernels in library binary(Default=0) (0|1)
default: 0
actual: 0
set_soname: Set the library's soname and shlibversion (Requires SCons 2.4 or above) (yes|no)
default: 0
actual: False
extra_cxx_flags: Extra CXX flags to be appended to the build command
default:
actual:
Debug / asserts:
- With debug=1: asserts are enabled, and the library is built with symbols and without optimisations.
- With debug=0 and asserts=1: optimisations are enabled and symbols are removed, however all the asserts are still present (this is about 20% slower than the release build).
- With debug=0 and asserts=0: all optimisations are enabled and no validation is performed; if the application misuses the library it is likely to crash. (Only use this mode once you are sure your application works as expected.)
Architecture: The x86_32 and x86_64 targets can only be used with neon=0 and opencl=1.
OS: Choose the operating system you are targeting: Linux, Android or bare metal.
@note bare metal can only be used for NEON (not OpenCL), only static libraries get built and NEON's multi-threading support is disabled.
Build type: you can either build directly on your device (native) or cross compile from your desktop machine (cross_compile). In both cases make sure the compiler is available in your path.
Werror: If you are compiling using the same toolchains as the ones used in this guide then there shouldn't be any warnings, and therefore you should be able to keep Werror=1. If the library fails to build with a different compiler version because of warnings interpreted as errors then, if you are sure the warnings are not important, you might want to try building with Werror=0 (but please report the issue either on Github or by email to developer@arm.com so that it can be addressed).
OpenCL / NEON: Choose which SIMD technology you want to target. (NEON for ARM Cortex-A CPUs or OpenCL for ARM Mali GPUs)
embed_kernels: For OpenCL only: set embed_kernels=1 if you want the OpenCL kernels to be built into the library's binaries instead of being read from separate ".cl" files. If embed_kernels is set to 0 then the application can set the path to the folder containing the OpenCL kernel files by calling CLKernelLibrary::init(). By default the path is set to "./cl_kernels".
set_soname: Do you want to build the versioned version of the library?
If enabled the library will contain a SONAME and SHLIBVERSION and some symlinks will automatically be created between the objects.
Example:
libarm_compute_core.so -> libarm_compute_core.so.1.0.0
libarm_compute_core.so.1 -> libarm_compute_core.so.1.0.0
libarm_compute_core.so.1.0.0
@note This option is disabled by default as it requires SCons version 2.4 or above.
extra_cxx_flags: Custom CXX flags which will be appended to the end of the build command.
@subsection S3_2_linux Linux
@subsubsection S3_2_1_library How to build the library?
For Linux, the library was successfully built and tested using the following Linaro GCC toolchain:
- gcc-linaro-arm-linux-gnueabihf-4.9-2014.07_linux
- gcc-linaro-4.9-2016.02-x86_64_aarch64-linux-gnu
- gcc-linaro-6.3.1-2017.02-i686_aarch64-linux-gnu
@note If you are building with opencl=1 then scons will expect to find libOpenCL.so either in the current directory or in "build" (See the section below if you need a stub OpenCL library to link against)
To cross-compile the library in debug mode, with NEON only support, for Linux 32bit:
scons Werror=1 -j8 debug=1 neon=1 opencl=0 os=linux arch=armv7a
To cross-compile the library in asserts mode, with OpenCL only support, for Linux 64bit:
scons Werror=1 -j8 debug=0 asserts=1 neon=0 opencl=1 embed_kernels=1 os=linux arch=arm64-v8a
You can also compile the library natively on an ARM device by using <b>build=native</b>:
scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=arm64-v8a build=native
scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=armv7a build=native
@note G++ for ARM is mono-arch, therefore if you want to compile for Linux 32bit on a Linux 64bit platform you will have to use a cross compiler.
For example on a 64bit Debian based system you would have to install <b>g++-arm-linux-gnueabihf</b>
apt-get install g++-arm-linux-gnueabihf
Then run
scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=armv7a build=cross_compile
or simply remove the build parameter as build=cross_compile is the default value:
scons Werror=1 -j8 debug=0 neon=1 opencl=0 os=linux arch=armv7a
@attention To cross compile with opencl=1 you need to make sure to have a version of libOpenCL matching your target architecture.
@subsubsection S3_2_2_examples How to manually build the examples?
The examples get automatically built by scons as part of the build process of the library described above. This section just describes how you can build and link your own application against our library.
@note The following command lines assume the arm_compute and libOpenCL binaries are present in the current directory or in the system library path. If this is not the case you can specify the location of the pre-built library with the compiler option -L. When building the OpenCL example the commands below assume that the CL headers are located in the include folder where the command is executed.
To cross compile a NEON example for Linux 32bit:
arm-linux-gnueabihf-g++ examples/neon_convolution.cpp test_helpers/Utils.cpp -I. -std=c++11 -mfpu=neon -L. -larm_compute -o neon_convolution
To cross compile a NEON example for Linux 64bit:
aarch64-linux-gnu-g++ examples/neon_convolution.cpp test_helpers/Utils.cpp -I. -std=c++11 -L. -larm_compute -o neon_convolution
(notice the only difference with the 32 bit command is that we don't need the -mfpu option and the compiler's name is different)
To cross compile an OpenCL example for Linux 32bit:
arm-linux-gnueabihf-g++ examples/cl_convolution.cpp test_helpers/Utils.cpp -I. -Iinclude -std=c++11 -mfpu=neon -L. -larm_compute -lOpenCL -o cl_convolution
To cross compile an OpenCL example for Linux 64bit:
aarch64-linux-gnu-g++ examples/cl_convolution.cpp test_helpers/Utils.cpp -I. -Iinclude -std=c++11 -L. -larm_compute -lOpenCL -o cl_convolution
(notice the only difference with the 32 bit command is that we don't need the -mfpu option and the compiler's name is different)
To compile natively (i.e directly on an ARM device) for NEON for Linux 32bit:
g++ examples/neon_convolution.cpp test_helpers/Utils.cpp -I. -std=c++11 -mfpu=neon -larm_compute -o neon_convolution
To compile natively (i.e directly on an ARM device) for NEON for Linux 64bit:
g++ examples/neon_convolution.cpp test_helpers/Utils.cpp -I. -std=c++11 -larm_compute -o neon_convolution
(notice the only difference with the 32 bit command is that we don't need the -mfpu option)
To compile natively (i.e directly on an ARM device) for OpenCL for Linux 32bit or Linux 64bit:
g++ examples/cl_convolution.cpp test_helpers/Utils.cpp -I. -Iinclude -std=c++11 -larm_compute -lOpenCL -o cl_convolution
@note These commands assume libarm_compute.so is available in your library path; if not, add the path to it using -L.
To run the built executable simply run:
LD_LIBRARY_PATH=build ./neon_convolution
or
LD_LIBRARY_PATH=build ./cl_convolution
@note If you built the library with support for both OpenCL and NEON you will need to link against OpenCL even if your application only uses NEON.
@subsection S3_3_android Android
For Android, the library was successfully built and tested using Google's standalone toolchains:
- arm-linux-androideabi-4.9 for armv7a (clang++)
- aarch64-linux-android-4.9 for arm64-v8a (g++)
Here is a guide to <a href="https://developer.android.com/ndk/guides/standalone_toolchain.html">create your Android standalone toolchains from the NDK</a>
- Download the NDK r14 beta 2 from here: https://developer.android.com/ndk/downloads/index.html
- Make sure you have Python 2 installed on your machine.
- Generate the 32-bit and/or 64-bit toolchains by running the following commands:
$NDK/build/tools/make_standalone_toolchain.py --arch arm64 --install-dir $MY_TOOLCHAINS/aarch64-linux-android-4.9 --stl gnustl
$NDK/build/tools/make_standalone_toolchain.py --arch arm --install-dir $MY_TOOLCHAINS/arm-linux-androideabi-4.9 --stl gnustl
@attention Due to some NDK issues make sure you use g++ & gnustl for aarch64 and clang++ & gnustl for armv7
@note Make sure to add the toolchains to your PATH: export PATH=$PATH:$MY_TOOLCHAINS/aarch64-linux-android-4.9/bin:$MY_TOOLCHAINS/arm-linux-androideabi-4.9/bin
@subsubsection S3_3_1_library How to build the library?
@note If you are building with opencl=1 then scons will expect to find libOpenCL.so either in the current directory or in "build" (See the section below if you need a stub OpenCL library to link against)
To cross-compile the library in debug mode, with NEON only support, for Android 32bit:
CXX=clang++ CC=clang scons Werror=1 -j8 debug=1 neon=1 opencl=0 os=android arch=armv7a
@attention Due to some NDK issues make sure you use g++ & gnustl for aarch64 and clang++ & gnustl for armv7
To cross-compile the library in asserts mode, with OpenCL only support, for Android 64bit:
scons Werror=1 -j8 debug=0 asserts=1 neon=0 opencl=1 embed_kernels=1 os=android arch=arm64-v8a
@subsubsection S3_3_2_examples How to manually build the examples?
The examples get automatically built by scons as part of the build process of the library described above. This section just describes how you can build and link your own application against our library.
@note The following command lines assume the arm_compute binaries are present in the current directory or in the system library path.
Once you've got your Android standalone toolchain built and added to your path you can do the following:
To cross compile a NEON example:
#32 bit:
arm-linux-androideabi-clang++ examples/neon_convolution.cpp -I. -Iinclude -std=c++11 -larm_compute-static -L. -o neon_convolution_arm -static-libstdc++ -pie
#64 bit:
aarch64-linux-android-g++ examples/neon_convolution.cpp -I. -Iinclude -std=c++11 -larm_compute-static -L. -o neon_convolution_aarch64 -static-libstdc++ -pie
To cross compile an OpenCL example:
#32 bit:
arm-linux-androideabi-clang++ examples/cl_convolution.cpp -I. -Iinclude -std=c++11 -larm_compute-static -L. -o cl_convolution_arm -static-libstdc++ -pie -lOpenCL
#64 bit:
aarch64-linux-android-g++ examples/cl_convolution.cpp -I. -Iinclude -std=c++11 -larm_compute-static -L. -o cl_convolution_aarch64 -static-libstdc++ -pie -lOpenCL
@note Due to some issues in older versions of the Mali OpenCL DDK (<= r13p0), we recommend linking arm_compute statically on Android.
All you then need to do is upload the executable and the shared library to the device using ADB:
adb push neon_convolution_arm /data/local/tmp/
adb push cl_convolution_arm /data/local/tmp/
adb shell chmod 777 -R /data/local/tmp/
And finally to run the example:
adb shell /data/local/tmp/neon_convolution_arm
adb shell /data/local/tmp/cl_convolution_arm
For 64bit:
adb push neon_convolution_aarch64 /data/local/tmp/
adb push cl_convolution_aarch64 /data/local/tmp/
adb shell chmod 777 -R /data/local/tmp/
And finally to run the example:
adb shell /data/local/tmp/neon_convolution_aarch64
adb shell /data/local/tmp/cl_convolution_aarch64
@subsection S3_4_cl_stub_library The OpenCL stub library
In the opencl-1.2-stubs folder you will find the sources to build a stub OpenCL library, which you can then link your application or arm_compute against.
If you prefer, you can retrieve the OpenCL library from your device and link against that instead, but this library often depends on a range of system libraries, forcing you to link your application against those too even though it does not use them.
@warning This OpenCL library provided is a stub and *not* a real implementation. You can use it to resolve OpenCL's symbols in arm_compute while building the example, but you must make sure the real libOpenCL.so is in your library path when running the example or it will not work.
To cross-compile the stub OpenCL library simply run:
<target-prefix>-gcc -o libOpenCL.so -Iinclude opencl-1.2-stubs/opencl_stubs.c -fPIC -shared
For example:
#Linux 32bit
arm-linux-gnueabihf-gcc -o libOpenCL.so -Iinclude opencl-1.2-stubs/opencl_stubs.c -fPIC -shared
#Linux 64bit
aarch64-linux-gnu-gcc -o libOpenCL.so -Iinclude -shared opencl-1.2-stubs/opencl_stubs.c -fPIC
#Android 32bit
arm-linux-androideabi-clang -o libOpenCL.so -Iinclude opencl-1.2-stubs/opencl_stubs.c -fPIC -shared
#Android 64bit
aarch64-linux-android-gcc -o libOpenCL.so -Iinclude opencl-1.2-stubs/opencl_stubs.c -fPIC -shared
@section S4_architecture Library Architecture
@subsection S4_1 Core vs Runtime libraries
The Core library is a low-level collection of algorithm implementations. It is designed to be embedded in existing projects and applications:
- It doesn't allocate any memory (All the memory allocations/mappings have to be handled by the caller).
- It doesn't perform any kind of multi-threading (but provides information to the caller about how the workload can be split).
The Runtime library is a very basic wrapper around the Core library which can be used for quick prototyping; it is basic in the sense that:
- Images and tensors are allocated using standard malloc().
- It multi-threads NEON code in a very basic way using a very simple pool of threads.
- For OpenCL it will use the default CLScheduler command queue for all mapping operations and kernels.
For maximum performance, users are expected to re-implement an equivalent of the runtime library that better suits their needs (with a smarter multi-threading strategy, load balancing between NEON and OpenCL, etc.).
@subsection S4_2_windows_kernels_mt_functions Windows, kernels, multi-threading and functions
@subsubsection S4_2_1_windows Windows
A @ref Window represents a workload to execute; it is made of up to @ref Coordinates::num_max_dimensions dimensions.
Each dimension is defined by a start, end and step.
It can be split into subwindows as long as *all* the following rules remain true for all the dimensions (a standalone sketch of these checks follows the list):
- max[n].start() <= sub[n].start() < max[n].end()
- sub[n].start() < sub[n].end() <= max[n].end()
- max[n].step() == sub[n].step()
- (sub[n].start() - max[n].start()) % max[n].step() == 0
- (sub[n].end() - sub[n].start()) % max[n].step() == 0
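To make these rules concrete, here is a minimal standalone sketch of the subwindow check. It uses a simplified Dim struct in place of the real @ref Window API and assumes a non-zero step; it is an illustration, not library code:
@code{.cpp}
#include <cassert>

// Simplified stand-in for one dimension of a window; the real library uses
// Window dimensions with the same start / end / step semantics.
struct Dim
{
    int start;
    int end;
    int step;
};

// Returns true if 'sub' is a valid subwindow of 'max' in one dimension,
// i.e. if it satisfies the five rules listed above (assumes step != 0).
bool is_valid_subdimension(const Dim &max, const Dim &sub)
{
    return max.start <= sub.start && sub.start < max.end // rule 1
           && sub.start < sub.end && sub.end <= max.end  // rule 2
           && max.step == sub.step                       // rule 3
           && (sub.start - max.start) % max.step == 0    // rule 4
           && (sub.end - sub.start) % max.step == 0;     // rule 5
}

int main()
{
    const Dim max{ 0, 128, 8 };
    assert(is_valid_subdimension(max, Dim{ 0, 64, 8 }));  // First half: valid
    assert(!is_valid_subdimension(max, Dim{ 4, 64, 8 })); // Offset is not a multiple of the step
    assert(!is_valid_subdimension(max, Dim{ 0, 64, 4 })); // Step differs from the parent window's
    return 0;
}
@endcode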
@subsubsection S4_2_2 Kernels
Each implementation of the @ref IKernel interface (base class of all the kernels in the core library) works in the same way:
OpenCL kernels:
@code{.cpp}
// Initialise the CLScheduler with the default context and default command queue
// Also initialises the CLKernelLibrary to use ./cl_kernels as location for OpenCL kernels files and sets a default device for which OpenCL programs are built.
CLScheduler::get().default_init();
cl::CommandQueue q = CLScheduler::get().queue();
//Create a kernel object:
MyKernel kernel;
// Initialize the kernel with the input/output and options you want to use:
kernel.configure( input, output, option0, option1);
// Retrieve the execution window of the kernel:
const Window& max_window = kernel.window();
// Run the whole kernel in the current thread:
kernel.run( q, max_window ); // Enqueue the kernel to process the full window on the default queue
// Wait for the processing to complete:
q.finish();
@endcode
NEON / CPP kernels:
@code{.cpp}
//Create a kernel object:
MyKernel kernel;
// Initialize the kernel with the input/output and options you want to use:
kernel.configure( input, output, option0, option1);
// Retrieve the execution window of the kernel:
const Window& max_window = kernel.window();
// Run the whole kernel in the current thread:
kernel.run( max_window ); // Run the kernel on the full window
@endcode
@subsubsection S4_2_3 Multi-threading
The previous section shows how to run a NEON / CPP kernel in the current thread; however, if your system has several CPU cores, you will probably want the kernel to use several of them at once. Here is how this can be done:
@snippet src/runtime/CPP/CPPScheduler.cpp Scheduler example
This is the very basic implementation used in the NEON runtime library by all the NEON functions.
@sa CPPScheduler.
@note Some kernels, like for example @ref NEHistogramKernel, need a local temporary buffer to perform their calculations. In order to avoid memory corruption between threads, the local buffer must be of size: ```memory_needed_per_thread * num_threads``` and each subwindow must be initialised by calling @ref Window::set_thread_id() with a unique thread_id between 0 and num_threads. A sketch of this splitting pattern follows.
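For illustration only, here is a hedged sketch of that pattern using std::thread and the hypothetical MyKernel from the previous sections; a real application should simply rely on the scheduler shown above:
@code{.cpp}
#include <thread>
#include <vector>

// 'kernel' is a configured NEON / CPP kernel (hypothetical MyKernel as above).
const Window &max_window = kernel.window();
const unsigned int num_threads = 2; // In practice: the number of CPU cores to use
std::vector<std::thread> threads;
for(unsigned int t = 0; t < num_threads; ++t)
{
    // Split the maximum window into num_threads subwindows along the Y dimension:
    Window win = max_window.split_window(Window::DimY, t, num_threads);
    // Unique thread id so kernels which use a local temporary buffer
    // index into their own slice of it:
    win.set_thread_id(t);
    threads.emplace_back([&kernel, win]() { kernel.run(win); });
}
for(std::thread &th : threads)
{
    th.join();
}
@endcode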
@subsubsection S4_2_4 Functions
Functions will automatically allocate the temporary buffers mentioned above, and will automatically multi-thread kernels' executions using the very basic scheduler described in the previous section.
Simple functions are made of a single kernel (e.g @ref NEConvolution3x3), while more complex ones are made of several kernels pipelined together (e.g @ref NEGaussianPyramid, @ref NEHarrisCorners); check their documentation to find out which kernels are used by each function.
@code{.cpp}
//Create a function object:
MyFunction function;
// Initialize the function with the input/output and options you want to use:
function.configure( input, output, option0, option1);
// Execute the function:
function.run();
@endcode
@warning ARM Compute libraries require Mali OpenCL DDK r8p0 or above (OpenCL kernels are compiled using the -cl-arm-non-uniform-work-group-size flag)
@note All OpenCL functions and objects in the runtime library use the command queue associated with CLScheduler for all operations; a real implementation would be expected to use different queues for mapping operations and kernels in order to reach better GPU utilisation.
@subsubsection S4_4_1_cl_scheduler OpenCL Scheduler and kernel library
The ARM Compute runtime uses a single command queue and context for all the operations.
The user can get / set this context and command queue through the CLScheduler's interface.
@attention Make sure the application is using the same context as the library, since in OpenCL it is forbidden to share objects across contexts. This is done by calling @ref CLScheduler::init() or @ref CLScheduler::default_init() at the beginning of your application.
All the OpenCL kernels used by the library are built and stored in the @ref CLKernelLibrary.
If the library is compiled with embed_kernels=0 the application can set the path to the OpenCL kernels by calling @ref CLKernelLibrary::init(); by default the path is set to "./cl_kernels".
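As a minimal sketch of this initialisation (assuming the init() overload taking a path, context and device; check the header for the exact signature in your version):
@code{.cpp}
// Initialise the default context / command queue / device first:
CLScheduler::get().default_init();
// Then point the kernel library at a custom folder containing the .cl files
// (only needed when the library was built with embed_kernels=0):
CLKernelLibrary::get().init("./cl_kernels/", CLScheduler::get().context(), CLScheduler::get().device());
@endcode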
@subsubsection S4_4_2_events_sync OpenCL events and synchronisation
In order to block until all the jobs in the CLScheduler's command queue are done executing, the user can call @ref CLScheduler::sync() or create a sync event using @ref CLScheduler::enqueue_sync_event()
For example:
@snippet cl_events.cpp OpenCL events
@subsubsection S4_4_2_cl_neon OpenCL / NEON interoperability
You can mix OpenCL and NEON kernels and/or functions; however, it is the user's responsibility to handle the mapping / unmapping of the OpenCL objects. For example:
@snippet neoncl_scale_median_gaussian.cpp NEON / OpenCL Interop
@sa main_neoncl_scale_median_gaussian
@subsection S4_5_algorithms Algorithms
All algorithms in this library have been implemented following the [OpenVX 1.1 specifications](https://www.khronos.org/registry/vx/specs/1.1/html/)
Please refer to the Khronos documentation for more information.
@subsection S4_6_images_tensors Images, padding, border modes and tensors
Most kernels and functions in the library process images; however, in order to be future-proof, most of the kernels actually accept tensors. See below for more information about how they are related.
@attention Each memory object can be written by only one kernel, however it can be read by several kernels. Writing to the same object from several kernels will result in undefined behaviour. The kernel writing to an object must be configured before the kernel(s) reading from it.
@subsubsection S4_6_1_padding_and_border Padding and border modes
Several algorithms rely on neighbour pixels to calculate the value of a given pixel: this means the algorithm will not be able to process the borders of the image unless you give it more information about what you want to happen for border pixels. This is the @ref BorderMode.
You have 3 types of @ref BorderMode (see the configuration sketch after this list):
- @ref BorderMode::UNDEFINED : if you are missing pixel values then don't calculate the value. As a result all the pixels which are on the border will have a value which is undefined.
- @ref BorderMode::REPLICATE : if you are missing pixel values then assume the missing pixels have the same value as the closest valid pixel.
- @ref BorderMode::CONSTANT : if you are missing pixel values then assume the missing pixels all have the same constant value (The user can choose what this value should be).
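As a sketch of how the mode is chosen, most functions take the @ref BorderMode at configure time; functions supporting @ref BorderMode::CONSTANT typically also take the constant value as an extra parameter (the extra parameter below is such an assumption, check each function's signature):
@code{.cpp}
Image src, dst;
// ... initialise and allocate src / dst as shown in the padding section below ...
NEGaussian3x3 gauss;
// Border pixels are left undefined:
gauss.configure(&src, &dst, BorderMode::UNDEFINED);
// Missing pixels take the value of the closest valid pixel:
// gauss.configure(&src, &dst, BorderMode::REPLICATE);
// Missing pixels take a user-chosen constant value (here 0):
// gauss.configure(&src, &dst, BorderMode::CONSTANT, 0);
@endcode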
Moreover, both OpenCL and NEON use vector load and store instructions to access the data in buffers, so, in order to avoid having special cases to handle the borders, all the images and tensors used in this library must be padded.
@paragraph padding Padding
There are different ways padding can be calculated:
- Accurate padding:
@snippet neon_convolution.cpp Accurate padding
@note It's important to call allocate @b after the function is configured: if the image / tensor is already allocated then the function will shrink its execution window instead of increasing the padding. (See below for more details).
- Manual padding / no padding / auto padding: You can allocate your images / tensors up front (before configuring your functions), in which case the function will use whatever padding is available and will shrink its execution window if there isn't enough padding available (which translates into a smaller valid region for the output; see also @ref valid_region).
If you don't want to manually set the padding but still want to allocate your objects upfront then you can use auto_padding.
@code{.cpp}
Image src, dst;
// Use auto padding for the input:
src.info()->init_auto_padding(TensorShape(640u,480u), Format::U8);
// Use manual padding for the destination image
dst.info()->init(src.info()->tensor_shape(), Format::U8, strides_in_bytes, offset_first_element_in_bytes, total_size_in_bytes);
// Allocate all the images
src.allocator()->allocate();
dst.allocator()->allocate();
// Fill the input image with the content of the PPM image if a filename was provided:
fill_image(src);
NEGaussian3x3 gauss;
// Apply a Gaussian 3x3 filter to the source image (Note: if the padding provided is not enough then the execution window and valid region of the output will be shrunk)
gauss.configure(&src, &dst, BorderMode::UNDEFINED);
//Execute the functions:
gauss.run();
@endcode
@warning Some kernels need up to 3 neighbour values to calculate the value of a given pixel, therefore to be safe we use a 4-pixel padding all around the image, and some kernels read and write up to 32 pixels at a time, therefore we add an extra 32 pixels of padding at the end of each row to be safe. As a result auto-padded buffers waste a lot of memory and are less cache friendly. It is therefore recommended to use accurate padding or manual padding wherever possible.
@paragraph valid_region Valid regions
Some kernels (like edge detectors for example) need to read values of neighbouring pixels to calculate the value of a given pixel; it is therefore not possible to calculate the values of the pixels on the edges.
Another case: if a kernel processes 8 pixels per iteration and the image's width is not a multiple of 8, then, if not enough padding is available, the kernel will not be able to process the pixels near the right edge. As a result these pixels will be left undefined.
In order to know which pixels have been calculated, each kernel sets a valid region for each output image or tensor. See also @ref TensorInfo::valid_region(), @ref ValidRegion.
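A minimal sketch of querying it, assuming dst is the output of a configured function:
@code{.cpp}
// After configuring the function which writes to dst:
ValidRegion region = dst.info()->valid_region();
const int    first_valid_x = region.anchor[0]; // Coordinates of the first valid pixel...
const int    first_valid_y = region.anchor[1];
const size_t valid_width   = region.shape[0];  // ...and the size of the valid area
const size_t valid_height  = region.shape[1];
@endcode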
@subsubsection S4_6_2_tensors Tensors
Tensors are multi-dimensional arrays made of up to @ref Coordinates::num_max_dimensions dimensions.
A simple vector of numbers can be represented as a 1D tensor, an image is actually just a 2D tensor, a 3D tensor can be seen as an array of images, a 4D tensor as a 2D array of images, etc.
@note Most algorithms process images (i.e a 2D slice of the tensor), therefore only padding along the X and Y axes is required (2D slices can be stored contiguously in memory).
@subsubsection S4_6_3_description_conventions Images and Tensors description conventions
Image objects are defined by a @ref Format and dimensions expressed as [width, height, batch]
Tensors are defined by a @ref DataType plus a number of channels (Always expected to be 1 for now) and their dimensions are expressed as [width, height, feature_maps, batch].
In other words, the lower three dimensions of a tensor specify a single input in [width, height, feature_maps], while any other specified dimension represents a batch in the appropriate dimension space.
For example, a tensor with dimensions [128, 128, 64, 16] represents a 1D batch space with 16 batches, each a 128x128 input with 64 feature maps.
Each kernel specifies the expected layout of each of its tensors in its documentation.
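As a sketch, the example tensor above could be created as follows (assuming the @ref TensorInfo constructor taking a shape, number of channels and data type):
@code{.cpp}
// 16 batches of 128x128 inputs with 64 feature maps each, single channel, 32-bit floats:
Tensor input;
input.allocator()->init(TensorInfo(TensorShape(128u, 128u, 64u, 16u), 1, DataType::F32));
input.allocator()->allocate();
@endcode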
@note Unless specified otherwise in the kernel's or function's documentation all tensors and images parameters passed must have identical dimensions.
@note Unless specified otherwise in the kernel's or function's documentation the number of channels for tensors is expected to be 1 (For images, the number of channels is inferred from the @ref Format).
@attention Regardless of the @ref DataType used by a tensor the @ref ITensor::buffer() method will always return a uint8_t pointer, and all the metadata in @ref TensorInfo will be expressed in bytes. It is the user's responsibility to cast the pointer to the correct type.
For example, to read the element located at the coordinates (x,y) of a float tensor:
@code{.cpp}
float value = *reinterpret_cast<float*>(input.buffer() + input.info()->offset_element_in_bytes(Coordinates(x,y)));
@endcode
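Writing follows the same pattern; for example, to set that same element:
@code{.cpp}
*reinterpret_cast<float *>(input.buffer() + input.info()->offset_element_in_bytes(Coordinates(x, y))) = 1.0f;
@endcode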
@subsubsection S4_6_4_working_with_objects Working with Images and Tensors using iterators
The library provides some iterators to access objects' data.
Iterators are created by associating a data object (An image or a tensor for example) with an iteration window.
Iteration windows are defined by an array of dimensions, each of which is made of a start, end and step.
The @ref execute_window_loop function takes an execution window, a lambda function and one or more iterators.
It will iterate through every element of the execution window and for each element it will update the iterators accordingly and call the lambda function.
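As a minimal sketch (assuming src is an allocated 2D F32 tensor):
@code{.cpp}
// Build a window covering every element of the tensor, with a step of 1 in X and Y:
Window window;
window.set(Window::DimX, Window::Dimension(0, src.info()->dimension(0), 1));
window.set(Window::DimY, Window::Dimension(0, src.info()->dimension(1), 1));
// Associate the tensor with the window:
Iterator it(&src, window);
// Set every element to 1.0f; 'id' holds the coordinates of the current element:
execute_window_loop(window, [&](const Coordinates & id)
{
    *reinterpret_cast<float *>(it.ptr()) = 1.0f;
},
it);
@endcode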
Here are a couple of fuller examples of how to use the iterators to fill / read tensors:
@snippet examples/neon_copy_objects.cpp Copy objects example
*/
}