coder.ARMNEONConfig
Parameters to configure deep learning code generation with the ARM Compute Library
Since R2019a
Description
The coder.ARMNEONConfig object contains ARM® Compute Library and target-specific parameters that codegen uses for generating C++ code for deep neural networks. To use a coder.ARMNEONConfig object for code generation, assign it to the DeepLearningConfig property of a code generation configuration object that you pass to codegen.
Creation
Create an ARM NEON configuration object by using the coder.DeepLearningConfig function with the target library set to 'arm-compute'.
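As a minimal sketch of the creation step (assuming MATLAB Coder and the required deep learning support packages are installed):

```matlab
% Create a deep learning configuration object that targets the ARM Compute Library
dlcfg = coder.DeepLearningConfig('arm-compute');

% Attach it to a code generation configuration object for a static library
cfg = coder.config('lib');
cfg.DeepLearningConfig = dlcfg;
```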
Properties
ArmComputeVersion
— Version of ARM Compute Library
'20.02.1' (default) | '19.05'
Version of the ARM Compute Library used on the target hardware, specified as a character vector or string scalar. If you set ArmComputeVersion to a version later than '20.02.1', ArmComputeVersion is set to '20.02.1'.
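For example, to target the older of the two supported library versions listed above:

```matlab
dlcfg = coder.DeepLearningConfig('arm-compute');
dlcfg.ArmComputeVersion = '19.05';  % one of the supported versions; later values are clamped to '20.02.1'
```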
ArmArchitecture
— ARM architecture supported in the target hardware
'armv8' | 'armv7'
ARM architecture supported in the target hardware, specified as a character vector or string scalar. The specified architecture must be the same as the architecture of the ARM Compute Library on the target hardware.
ArmArchitecture must be specified in these cases:
- You do not use a hardware support package (the Hardware property of the code generation configuration object is empty).
- You use a hardware support package, but generate code only.
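For instance (the choice of 'armv7' here is illustrative; use the architecture that matches your board and your ARM Compute Library build):

```matlab
dlcfg = coder.DeepLearningConfig('arm-compute');
dlcfg.ArmArchitecture = 'armv7';  % must match the library build on the target
```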
DataType
— Inference computation precision
'fp32'
(default) | 'int8'
Precision of the inference computations in supported layers, specified as a character vector or string scalar. For inference in 32-bit floating point, use 'fp32'. For 8-bit integer, use 'int8'. The default value is 'fp32'.
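A minimal sketch of selecting 8-bit integer inference (int8 code generation also requires a calibration MAT-file, as described under CalibrationResultFile):

```matlab
dlcfg = coder.DeepLearningConfig('arm-compute');
dlcfg.DataType = 'int8';  % default is 'fp32'
```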
CalibrationResultFile
— Location of calibration MAT-file
''
(default) | character vector | string scalar
Location of the MAT-file containing the calibration data, specified as a character vector or string scalar. The default value is ''. This option is applicable only when DataType is set to 'int8'.
When performing quantization, the calibrate (Deep Learning Toolbox) function exercises the network and collects the dynamic ranges of the weights and biases in the convolution and fully connected layers of the network, and the dynamic ranges of the activations in all layers of the network. To generate code for the optimized network, save the results from the calibrate function to a MAT-file and specify the location of this MAT-file to the code generator using this property. For more information, see Generate int8 Code for Deep Learning Networks.
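A sketch of the quantization workflow described above. The variable names net and calData and the MAT-file name are assumptions for illustration; dlquantizer and calibrate are Deep Learning Toolbox functions:

```matlab
% Assumed inputs: 'net' is a trained network, 'calData' is a datastore of calibration images
quantObj = dlquantizer(net);                  % create a quantization object for the network
calResults = calibrate(quantObj, calData);    % exercise the network and collect dynamic ranges
save('calibrationResults.mat', 'calResults'); % save the calibration results to a MAT-file

dlcfg = coder.DeepLearningConfig('arm-compute');
dlcfg.DataType = 'int8';
dlcfg.CalibrationResultFile = 'calibrationResults.mat';
```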
TargetLibrary
— Target library name
'arm-compute'
Name of target library, specified as a character vector.
Examples
Specify Configuration Parameters for C++ Code Generation for the SqueezeNet Network
Create an entry-point function squeezenet_predict that uses the coder.loadDeepLearningNetwork function to load the squeezenet (Deep Learning Toolbox) object.
function out = squeezenet_predict(in)
persistent mynet;
if isempty(mynet)
    mynet = coder.loadDeepLearningNetwork('squeezenet', 'squeezenet');
end
out = predict(mynet,in);
Create a coder.config configuration object for generation of a static library.
cfg = coder.config('lib');
Set the target language to C++. Specify that you want to generate only source code.
cfg.TargetLang = 'C++';
cfg.GenCodeOnly = true;
Create a coder.ARMNEONConfig deep learning configuration object. Assign it to the DeepLearningConfig property of the cfg configuration object.
dlcfg = coder.DeepLearningConfig('arm-compute');
dlcfg.ArmArchitecture = 'armv8';
dlcfg.ArmComputeVersion = '20.02.1';
cfg.DeepLearningConfig = dlcfg;
Use the -config option of the codegen function to specify the cfg configuration object. The codegen function must determine the size, class, and complexity of MATLAB® function inputs. Use the -args option to specify the size of the input to the entry-point function.
codegen -args {ones(227,227,3,'single')} -config cfg squeezenet_predict
The codegen command places all the generated files in the codegen folder. The folder contains the C++ code for the entry-point function squeezenet_predict.cpp, and header and source files containing the C++ class definitions for the neural network, and weight and bias files.
Version History
Introduced in R2019a