Engines: pyvar.ml.engines¶
TensorFlow Lite Inference Engine¶
pyvar.ml.engines.tflite¶
- platform
Unix/Yocto
- synopsis
Class to handle the TensorFlow Lite inference engine.
- class pyvar.ml.engines.tflite.TFLiteInterpreter(model_file_path=None, num_threads=1, ext_delegate=None)[source]¶
- Variables
interpreter – TensorFlow Lite interpreter;
input_details – input details from model;
output_details – output details from inference;
result – results from inference;
inference_time – time taken by the last inference run;
model_file_path – path to the machine learning model;
k – number of top results;
confidence – minimum confidence score; defaults to 0.5.
- get_output(index, squeeze=False)[source]¶
Get the result after running the inference.
- Parameters
index (int) – index of the output tensor to retrieve;
squeeze (bool) – whether to squeeze the result, removing axes of length one.
- Returns
the output at index, squeezed if squeeze is True; otherwise the raw output.
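TensorFlow Lite exposes output tensors as NumPy arrays, so the squeeze flag corresponds to NumPy's squeeze, which drops axes of length one (e.g. the batch axis of a single-image inference). A minimal illustration of that behavior, not pyvar code:

```python
import numpy as np

# A classification model typically returns one result per batch,
# e.g. shape (1, 1001) for a 1001-class model with batch size 1.
raw = np.zeros((1, 1001), dtype=np.float32)

# Squeezing removes the length-one batch axis -> shape (1001,)
squeezed = np.squeeze(raw)

print(raw.shape)       # (1, 1001)
print(squeezed.shape)  # (1001,)
```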
Ethos-U (TensorFlow Lite) Inference Engine¶
pyvar.ml.engines.ethosu¶
- platform
Unix/Yocto
- synopsis
Class to handle the Ethos-U inference engine.
- class pyvar.ml.engines.ethosu.EthosuInterpreter(model_file_path=None)[source]¶
- Variables
interpreter – TensorFlow Lite interpreter;
input_details – input details from model;
output_details – output details from inference;
result – results from inference;
inference_time – time taken by the last inference run;
model_file_path – path to the machine learning model;
k – number of top results;
confidence – minimum confidence score; defaults to 0.5.
- get_output(index, squeeze=False)[source]¶
Get the result after running the inference.
- Parameters
index (int) – index of the output tensor to retrieve;
squeeze (bool) – whether to squeeze the result, removing axes of length one.
- Returns
the output at index, squeezed if squeeze is True; otherwise the raw output.
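The k and confidence attributes documented above suggest a post-processing step that keeps only the k highest-scoring results above a confidence threshold. A minimal NumPy sketch of that idea, not pyvar's actual implementation:

```python
import numpy as np

def top_k_results(scores, k=3, confidence=0.5):
    """Return (index, score) pairs for the k best scores above the threshold.

    Sketch of the documented k / confidence attributes; the function
    name and signature are illustrative, not part of pyvar's API.
    """
    order = np.argsort(scores)[::-1][:k]  # indices of the k highest scores
    return [(int(i), float(scores[i])) for i in order
            if scores[i] >= confidence]

scores = np.array([0.1, 0.7, 0.05, 0.9, 0.6])
print(top_k_results(scores, k=3, confidence=0.5))
# [(3, 0.9), (1, 0.7), (4, 0.6)]
```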
Arm NN Inference Engine¶
pyvar.ml.engines.armnn¶
- platform
Unix/Yocto
- synopsis
Class to handle the Arm NN inference engine.
- class pyvar.ml.engines.armnn.ArmNNInterpreter(model_file_path=None, accelerated=True)[source]¶
- Variables
interpreter – Arm NN interpreter;
input_details – input details from model;
output_details – output details from inference;
result – results from inference;
inference_time – time taken by the last inference run;
model_file_path – path to the machine learning model;
input_width – width of the model input;
input_height – height of the model input;
accelerated – whether inference runs on the NPU (True) or the CPU (False).