Engines: pyvar.ml.engines

TensorFlow Lite Inference Engine


pyvar.ml.engines.tflite

platform

Unix/Yocto

synopsis

Class to handle the TensorFlow Lite inference engine.

class pyvar.ml.engines.tflite.TFLiteInterpreter(model_file_path=None, num_threads=1, ext_delegate=None)[source]
Variables
  • interpreter – TensorFlow Lite interpreter;

  • input_details – input details from the model;

  • output_details – output details from the inference;

  • result – results from the inference;

  • inference_time – inference time;

  • model_file_path – path to the machine learning model;

  • k – number of top results to keep;

  • confidence – minimum confidence score for a result to be kept; defaults to 0.5.

get_dtype()[source]

Get the model input data type.

Returns

The model input data type.

get_height()[source]

Get the model input height.

Returns

The model input height.

get_output(index, squeeze=False)[source]

Get the result after running the inference.

Parameters
  • index (int) – index of the output tensor;

  • squeeze (bool) – whether to remove size-1 dimensions from the result.

Returns

The output tensor at the given index, squeezed if squeeze is True; otherwise the raw tensor.
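The squeeze flag behaves like NumPy's np.squeeze applied to the selected output tensor. A minimal sketch of those semantics (the get_output function below is an illustrative stand-in, not pyvar's implementation):

```python
import numpy as np

def get_output(tensors, index, squeeze=False):
    # Return the index-th output tensor; with squeeze=True all
    # size-1 dimensions (e.g. the batch axis) are removed.
    out = tensors[index]
    return np.squeeze(out) if squeeze else out

outputs = [np.zeros((1, 1001), dtype=np.float32)]  # e.g. classification scores
print(get_output(outputs, 0).shape)                # (1, 1001)
print(get_output(outputs, 0, squeeze=True).shape)  # (1001,)
```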

get_result(category=None)[source]

Get the result from the output details.

Parameters

category (str) – model category (classification or detection).

Returns

True if the result was successfully retrieved; False otherwise.
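For a classification model, this step amounts to keeping the top-k scores that clear the confidence threshold. A minimal NumPy sketch of that logic (classification_result is an illustrative stand-in, not pyvar's internal code):

```python
import numpy as np

def classification_result(scores, k, confidence):
    # Keep the indices of the k best-scoring classes, then drop any
    # whose score falls below the confidence threshold.
    order = np.argsort(scores)[::-1][:k]
    return [int(i) for i in order if scores[i] >= confidence]

scores = np.array([0.1, 0.7, 0.05, 0.6, 0.2])
print(classification_result(scores, k=3, confidence=0.5))  # [1, 3]
```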

get_width()[source]

Get the model input width.

Returns

The model input width.
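A frame usually has to be resized and cast to match get_width(), get_height(), and get_dtype() before it is passed to set_input(). A minimal NumPy sketch (the make_input helper and the nearest-neighbor resize are illustrative assumptions, not part of pyvar):

```python
import numpy as np

def make_input(frame, width, height, dtype):
    # Hypothetical preprocessing helper: nearest-neighbor resize an
    # (H, W, 3) frame to the model's expected width/height, cast it
    # to the model's dtype, and add a batch dimension.
    src_h, src_w = frame.shape[:2]
    rows = np.arange(height) * src_h // height
    cols = np.arange(width) * src_w // width
    resized = frame[rows][:, cols]
    return np.expand_dims(resized.astype(dtype), axis=0)

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy camera frame
tensor = make_input(frame, 224, 224, np.uint8)
print(tensor.shape)  # (1, 224, 224, 3)
```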

run_inference()[source]

Runs inference on the image/frame set in the set_input() method.

set_confidence(confidence)[source]

Set the confidence attribute: the minimum score for a result to be kept.

set_input(image)[source]

Set the image/frame as the input tensor for inference.

set_k(k)[source]

Set the k attribute: the number of top results to keep.
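Putting the class together, a typical call sequence might look like the sketch below. The run_classification helper, thread count, and thresholds are illustrative choices, and pyvar itself is only available on a Variscite Yocto image, so the import is kept inside the function:

```python
def run_classification(model_path, frame):
    # Hypothetical end-to-end sketch: classify one frame with TFLiteInterpreter.
    from pyvar.ml.engines.tflite import TFLiteInterpreter

    engine = TFLiteInterpreter(model_file_path=model_path, num_threads=2)
    engine.set_k(3)             # keep the three best classes
    engine.set_confidence(0.6)  # drop results scoring below 0.6
    engine.set_input(frame)     # frame sized to engine.get_width()/get_height()
    engine.run_inference()
    if engine.get_result(category="classification"):
        return engine.result, engine.inference_time
    return None, engine.inference_time
```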

Ethos-U (TensorFlow Lite) Inference Engine


pyvar.ml.engines.ethosu

platform

Unix/Yocto

synopsis

Class to handle the Ethos-U inference engine.

class pyvar.ml.engines.ethosu.EthosuInterpreter(model_file_path=None)[source]
Variables
  • interpreter – TensorFlow Lite interpreter;

  • input_details – input details from the model;

  • output_details – output details from the inference;

  • result – results from the inference;

  • inference_time – inference time;

  • model_file_path – path to the machine learning model;

  • k – number of top results to keep;

  • confidence – minimum confidence score for a result to be kept; defaults to 0.5.

get_dtype()[source]

Get the model input data type.

Returns

The model input data type.

get_height()[source]

Get the model input height.

Returns

The model input height.

get_output(index, squeeze=False)[source]

Get the result after running the inference.

Parameters
  • index (int) – index of the output tensor;

  • squeeze (bool) – whether to remove size-1 dimensions from the result.

Returns

The output tensor at the given index, squeezed if squeeze is True; otherwise the raw tensor.

get_result(category=None)[source]

Get the result from the output details.

Parameters

category (str) – model category (classification).

Returns

True if the result was successfully retrieved; False otherwise.

get_width()[source]

Get the model input width.

Returns

The model input width.

run_inference()[source]

Runs inference on the image/frame set in the set_input() method.

set_confidence(confidence)[source]

Set the confidence attribute: the minimum score for a result to be kept.

set_input(image)[source]

Set the image/frame as the input tensor for inference.

set_k(k)[source]

Set the k attribute: the number of top results to keep.
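Usage mirrors TFLiteInterpreter, except the constructor only takes the model path and the model must be compiled for the Ethos-U NPU (typically with Arm's Vela compiler). A hedged sketch, where the helper name is illustrative and the import stays inside the function because pyvar is only available on a Variscite Yocto image:

```python
def run_on_ethosu(model_path, frame):
    # Hypothetical sketch: run a Vela-compiled .tflite model on the Ethos-U NPU.
    from pyvar.ml.engines.ethosu import EthosuInterpreter

    engine = EthosuInterpreter(model_file_path=model_path)
    engine.set_input(frame)
    engine.run_inference()
    if engine.get_result(category="classification"):
        return engine.result, engine.inference_time
    return None, engine.inference_time
```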

Arm NN Inference Engine


pyvar.ml.engines.armnn

platform

Unix/Yocto

synopsis

Class to handle the Arm NN inference engine.

class pyvar.ml.engines.armnn.ArmNNInterpreter(model_file_path=None, accelerated=True)[source]
Variables
  • interpreter – Arm NN interpreter;

  • input_details – input details from the model;

  • output_details – output details from the inference;

  • result – results from the inference;

  • inference_time – inference time;

  • model_file_path – path to the machine learning model;

  • input_width – model input width;

  • input_height – model input height;

  • accelerated – whether to run inference on the NPU (True) or fall back to the CPU (False).

get_result(category=None)[source]

Get the result from the output details.

Parameters

category (str) – model category (classification or detection).

Returns

True if the result was successfully retrieved; False otherwise.

run_inference()[source]

Runs inference on the image/frame set in the set_input() method.

set_input(image)[source]

Set the image/frame as the input tensor for inference.
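As with the other engines, a typical call sequence is set_input(), run_inference(), then get_result(); the accelerated flag chooses between the NPU and CPU backends. A hedged sketch (the helper below is illustrative, and the import stays inside the function because pyvar is only available on a Variscite Yocto image):

```python
def run_on_armnn(model_path, frame, use_npu=True):
    # Hypothetical sketch: run one frame through ArmNNInterpreter,
    # on the NPU when use_npu is True, otherwise on the CPU.
    from pyvar.ml.engines.armnn import ArmNNInterpreter

    engine = ArmNNInterpreter(model_file_path=model_path, accelerated=use_npu)
    engine.set_input(frame)
    engine.run_inference()
    if engine.get_result(category="detection"):
        return engine.result, engine.inference_time
    return None, engine.inference_time
```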