Axelera AI guide: Running your first AI inference with PySDK and Axelera AI

Ready to run your first AI model on Axelera’s METIS hardware using DeGirum PySDK?

This guide walks you through the essentials of getting your first computer vision AI model running, either via the cloud (the easiest path) or locally with your own METIS hardware.

What is DeGirum PySDK?

The DeGirum PySDK (Python SDK) abstracts away the low-level complexity of inference pipelines. It lets developers load, run, and visualize AI model inference in just a few lines of Python, with minimal setup.

Core Capabilities:

  • Super simple API: everything works with minimal setup, without writing large amounts of code.
  • Model loading: load models from the DeGirum AI Hub (cloud) or from local model zoos.
  • Hardware acceleration: support for multiple hardware accelerators, such as Axelera METIS, Hailo, and OpenVINO.
  • Visualization: built-in rendering for tasks like detection, classification, segmentation, and pose estimation.
  • Streaming support: inference on single images or video feeds.

Let’s get started by installing the degirum and degirum_tools packages!

Setting up your environment

This guide assumes that you have installed PySDK and the Axelera AI runtime and driver.

Click here for more information about installing PySDK.
Click here for information about installing the Axelera AI runtime and driver.

To install degirum_tools, run pip install degirum_tools in the same environment as PySDK.
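
For example, to install it and then confirm that both packages are present in the active environment (the pip show step is just an optional sanity check):

pip install degirum_tools
pip show degirum degirum_tools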

Verify PySDK installation

To confirm everything is set up correctly, run:

degirum sys-info

This prints system configuration, installed devices, and available models.

DeGirum PySDK provides simple APIs to run AI model inference. In general, there are three steps in running an AI model:

  1. Load a model using the degirum.load_model function.

  2. Run inference on an input using the model.predict method (or by calling the model object directly).

  3. Visualize results using the result’s image_overlay property.
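
Put together, that is the entire flow. Here is a minimal sketch using the same model name and cloud zoo as the examples below; note that calling the model object directly, model(x), is equivalent to model.predict(x):

import degirum as dg

# 1. load a model from the DeGirum AI Hub cloud zoo
model = dg.load_model(
    model_name="yolov8n_coco--640x640_quant_axelera_metis_1",
    inference_host_address="@cloud",
    zoo_url="degirum/axelera",
    device_type="AXELERA/METIS",
)

# 2. run inference; model(...) is shorthand for model.predict(...)
result = model("path/to/your/image.jpg")

# 3. get the input image with the results drawn on top
overlay = result.image_overlay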

Run cloud inference on image

Begin by using precompiled models from our AI Hub.

This is the fastest and easiest way to test AI models. No local model files needed.

import degirum as dg, degirum_tools
from degirum_tools import remote_assets

inference_host_address = "@cloud"
zoo_url = 'degirum/axelera' 
device_type='AXELERA/METIS'

# set model name and image source
model_name = "yolov8n_coco--640x640_quant_axelera_metis_1"
image_source= remote_assets.three_persons

# load AI model
model = dg.load_model(
    model_name=model_name,
    inference_host_address=inference_host_address,
    zoo_url=zoo_url,
    device_type=device_type,    
)

# perform AI model inference on given image source
print(f" Running inference using '{model_name}' on image source '{image_source}'")
inference_result = model(image_source)

# print numeric inference results
print(inference_result)
print("Press 'x' or 'q' to stop.")

# show results of inference
with degirum_tools.Display("AI Camera") as output_display:
    output_display.show_image(inference_result.image_overlay)
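
Besides the rendered overlay, the result object also carries the numeric detections that print(inference_result) displays. A minimal sketch for iterating over them, assuming each entry in inference_result.results exposes the label, score, and bbox fields shown in the printed output:

# loop over individual detections in the result
for detection in inference_result.results:
    print(detection["label"], detection["score"], detection["bbox"])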

Run cloud inference on video stream

  • The predict_stream function in degirum_tools provides an efficient way to perform AI inference on video streams in real time. It processes video frames sequentially and returns inference results frame by frame, so it integrates easily with a variety of video input sources.

  • The code below shows how to use predict_stream on a video file.

import degirum as dg, degirum_tools
from degirum_tools import remote_assets

inference_host_address = "@cloud"
zoo_url = 'degirum/axelera'
device_type='AXELERA/METIS'

model_name = "yolov8n_coco--640x640_quant_axelera_metis_1"
video_source = remote_assets.traffic

# load AI model
model = dg.load_model(
    model_name=model_name,
    inference_host_address=inference_host_address,
    zoo_url=zoo_url,
    device_type=device_type
)

with degirum_tools.Display("AI Camera") as output_display:
    for inference_result in degirum_tools.predict_stream(model, video_source):
        output_display.show(inference_result)
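
The same loop also works for live sources. A minimal sketch, assuming predict_stream accepts a webcam index (or an RTSP URL) in place of a video file path:

# use the default webcam (device index 0) instead of a video file
video_source = 0

with degirum_tools.Display("AI Camera") as output_display:
    for inference_result in degirum_tools.predict_stream(model, video_source):
        output_display.show(inference_result)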

Run local inference with Axelera METIS hardware

If you already have the Axelera tools installed and the METIS device properly recognized, the simplest way to run local inference is by changing the inference host address from @cloud to @local.

import degirum as dg, degirum_tools
from degirum_tools import remote_assets

inference_host_address = "@local"
zoo_url = 'degirum/axelera' 
device_type='AXELERA/METIS'

# set model name and image source
model_name = "yolov8n_coco--640x640_quant_axelera_metis_1"
image_source = remote_assets.three_persons

# load AI model
model = dg.load_model(
    model_name=model_name,
    inference_host_address=inference_host_address,
    zoo_url=zoo_url,
    device_type=device_type,    
)

# perform AI model inference on given image source
print(f" Running inference using '{model_name}' on image source '{image_source}'")
inference_result = model(image_source)

# print numeric inference results
print(inference_result)
print("Press 'x' or 'q' to stop.")

# show results of inference
with degirum_tools.Display("AI Camera") as output_display:
    output_display.show_image(inference_result.image_overlay)

Great, your METIS hardware is now running the inference locally. To go completely offline, the model files also need to be on your machine.

Download a model from the AI Hub using the PySDK CLI:

degirum download-zoo --model_family "yolov8n_coco--640x640_quant_axelera_metis_1" --url https://hub.degirum.com/degirum/axelera

If you are running these commands in a Jupyter notebook, prefix the command with an exclamation mark:

!degirum download-zoo --model_family "yolov8n_coco--640x640_quant_axelera_metis_1" --url https://hub.degirum.com/degirum/axelera

You should see this if the model was downloaded successfully:

Downloading models
  from 'https://hub.degirum.com/degirum/axelera'
  into '.'
yolov8n_coco--640x640_quant_axelera_metis_1
Downloaded 1 model(s)

Understanding model zoo and model file structure

After you download a model, you will get a folder with several files.
A typical model directory looks like:

yolov8n_coco--640x640_quant_axelera_metis_1/
├── yolov8n_coco--640x640_quant_axelera_metis_1.json
├── model_yolov8n_coco.json
├── labels_yolov8n_coco.json
└── ...

These are the model configuration .json file, the model file itself, and a labels .json file. Depending on the model, you may see a different set of files.
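
If you download several models into the same parent folder, that folder can serve as a local model zoo. A minimal sketch, assuming dg.connect accepts a local directory path as the zoo URL and that the models were downloaded into the current directory, as in the output above:

import degirum as dg

# connect to the folder containing the downloaded model directories (hypothetical local path)
zoo = dg.connect("@local", "./")
print(zoo.list_models())  # names of all models found in that folder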

Run inference locally

After downloading the model from the AI Hub, you can load it by pointing zoo_url at the model’s .json configuration file.

The example below demonstrates loading from a model .json file.

import degirum as dg, degirum_tools
from degirum_tools import remote_assets

inference_host_address = "@local"
zoo_url = f"file://./yolov8n_coco--640x640_quant_axelera_metis_1/yolov8n_coco--640x640_quant_axelera_metis_1.json"
device_type='AXELERA/METIS'

# set model name and image source
model_name = "yolov8n_coco--640x640_quant_axelera_metis_1"
image_source = remote_assets.three_persons

# load AI model
model = dg.load_model(
    model_name=model_name,
    inference_host_address=inference_host_address,
    zoo_url=zoo_url,
    device_type=device_type
)

# perform AI model inference on given image source
print(f" Running inference using '{model_name}' on image source '{image_source}'")
inference_result = model(image_source)

# print numeric inference results
print(inference_result)
print("Press 'x' or 'q' to stop.")

# show results of inference
with degirum_tools.Display("AI Camera") as output_display:
    output_display.show_image(inference_result.image_overlay)

You’ve just run inference on METIS!

You’ve successfully run your first AI model on Axelera’s METIS hardware using DeGirum PySDK.

In this walkthrough, you’ve learned how to:

  • Install and verify the DeGirum PySDK.

  • Run AI models on the cloud using DeGirum AI Hub.

  • Run models locally on METIS hardware using precompiled models from a local model zoo.

  • Visualize inference results both for images and videos.

Additional resources

For more detailed examples and hardware setup instructions, visit our Axelera Examples Repository.
