In this guide, we illustrate how to leverage the built-in features of PySDK to simplify running object detection models on Hailo devices with just a few lines of code. We recommend reviewing our previous user guide before proceeding. By the end of this guide, you’ll understand how to integrate any precompiled object detection model (with Hailo NMS post-processing) with DeGirum PySDK and adapt the process to your needs.
Overview of the inference pipeline
The overall inference pipeline in PySDK can be summarized as follows:
- Pre-processing: Prepare the input image by resizing and formatting it (e.g., applying letterboxing) to meet model requirements before inference.
- Inference: Run the precompiled model (e.g., a .hef file) on the Hailo device.
- Post-processing: Convert raw model outputs into human-readable detections (bounding boxes, labels, etc.) using a post-processor class.
- Visualization: Overlay the detection results onto the original image for easy inspection.
A diagram of this flow looks like:
Input image
│
▼
Pre-processing (resize, quantize, etc.)
│
▼
Model inference (Hailo device using .hef file)
│
▼
Post-processing (built-in postprocessor)
│
▼
Detection results (bounding boxes, labels, confidence scores)
│
▼
Visualization (image overlay using OpenCV)
What you’ll need to begin
To follow this guide, you’ll need a machine equipped with a Hailo AI accelerator (Hailo8 or Hailo8L). The host CPU can be an x86 system or an Arm-based system (e.g., Raspberry Pi). Ensure that all necessary drivers and software tools are correctly installed by following the Hailo + PySDK setup instructions.
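As a quick sanity check that PySDK itself is installed, you can print its version from Python (this assumes degirum was installed via pip per the setup instructions):

import degirum as dg

# Print the installed PySDK version; an ImportError here means the setup is incomplete.
print(dg.__version__)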
Here’s what else you’ll need:
- A model file (.hef): For this guide, we use the yolov11n detection model. The model is available for Hailo8 devices at Hailo8 Object Detection Models.
- An input image: The image you want to process. For instance, you can download this Cat Image.
- A labels file (labels_coco.json): This file maps class indices to human-readable labels for the 80 classes of the COCO dataset. You can download it from Hugging Face.
Download these assets and keep them handy, as you will use them throughout this guide.
Summary
In this guide, you will learn to:
- Configure the model JSON file: Set up a JSON file that defines key configurations for pre-processing, model parameters, and post-processing.
- Prepare the model zoo: Organize the model files (the .json configuration, the .hef file, and the labels file) for seamless integration with PySDK.
- Run inference: Write Python code to load the model, run inference, and print the detection results.
- Visualize the output: Use the image_overlay property to visualize the detection results.
Configuring the model JSON file
Below is the JSON file that leverages PySDK’s built-in pre-processor and post-processor features.
Model JSON (yolov11n.json)
{
"ConfigVersion": 11,
"Checksum": "5ccc384699f608188621975c0121aa1f01aa4398af30a00100474bae964195a8",
"DEVICE": [
{
"DeviceType": "HAILO8",
"RuntimeAgent": "HAILORT",
"SupportedDeviceTypes": "HAILORT/HAILO8"
}
],
"PRE_PROCESS": [
{
"InputType": "Image",
"InputN": 1,
"InputH": 640,
"InputW": 640,
"InputC": 3,
"InputPadMethod": "letterbox",
"InputResizeMethod": "bilinear",
"InputQuantEn": true
}
],
"MODEL_PARAMETERS": [
{
"ModelPath": "yolov11n.hef"
}
],
"POST_PROCESS": [
{
"OutputPostprocessType": "DetectionYoloHailo",
"OutputNumClasses": 80,
"LabelsPath": "labels_coco.json",
"OutputConfThreshold": 0.3
}
]
}
Key entries
- Pre-processing section: Specifies that the input is an image that must be resized to 1 x 640 x 640 x 3 using letterboxing (to preserve the aspect ratio) and that the model input is quantized to UINT8 (a conceptual sketch of letterboxing follows this list).
- Post-processing section: Indicates that the model output will be post-processed using the DetectionYoloHailo postprocessor. Additional parameters like OutputNumClasses, OutputConfThreshold, and LabelsPath are provided to fine-tune the detection results.
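For intuition, below is a minimal sketch of what letterbox pre-processing does conceptually. PySDK performs this step internally; the function here is illustrative only and is not part of the PySDK API:

import cv2
import numpy as np

def letterbox(image, target_h=640, target_w=640, pad_value=114):
    # Scale the image so it fits entirely inside the target size, preserving aspect ratio.
    h, w = image.shape[:2]
    scale = min(target_w / w, target_h / h)
    new_w, new_h = int(round(w * scale)), int(round(h * scale))
    # Bilinear resize, matching "InputResizeMethod" in the JSON above.
    resized = cv2.resize(image, (new_w, new_h), interpolation=cv2.INTER_LINEAR)
    # Pad the remainder with a constant value and center the resized image.
    canvas = np.full((target_h, target_w, 3), pad_value, dtype=np.uint8)
    top, left = (target_h - new_h) // 2, (target_w - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = resized
    return canvas

The padding value (114) and centering shown here are common conventions in YOLO-style pipelines, not values confirmed by PySDK.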
Note: Checksum is a required field; for local models it can be any non-empty dummy value.
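If you would rather record a real digest, a SHA-256 hex digest of the model file can be computed with Python's standard library (whether PySDK verifies this value for local models is not guaranteed here; the snippet simply shows how such a digest is produced):

import hashlib

# Compute a SHA-256 hex digest of the .hef file, in the same format
# as the Checksum value in the example JSON above.
with open("yolov11n.hef", "rb") as f:
    print(hashlib.sha256(f.read()).hexdigest())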
Preparing the model zoo
A model zoo is a structured repository of model assets (configuration JSON files, model files, post-processor code, and labels) that simplifies model management. To organize your assets:
- Save the above JSON configuration as yolov11n.json.
- Place the corresponding yolov11n.hef file in the same directory.
- Save the labels file as labels_coco.json.
Tip: For easier maintenance, you can organize models into separate subdirectories. PySDK will automatically search for model JSON files in all subdirectories inside the directory specified by the zoo_url parameter.
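A typical layout looks like this:

<path_to_model_zoo>/
└── yolov11n/
    ├── yolov11n.json
    ├── yolov11n.hef
    └── labels_coco.json

To verify that PySDK can discover your model, you can list the contents of the zoo. The call below uses dg.list_models with the same argument names as dg.load_model; double-check the exact signature against your PySDK version:

import degirum as dg

# List all models PySDK discovers in the local zoo; 'yolov11n' should appear.
models = dg.list_models(
    inference_host_address='@local',
    zoo_url='<path_to_model_zoo>'
)
print(models)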
Running inference
The yolov11n model, configured by the above JSON file, takes an image as input and outputs a list of dictionaries, each containing a bounding box, confidence score, class ID, and label. The results are scaled to the original image size.
import degirum as dg
from pprint import pprint
# Load the model from the model zoo.
# Replace '<path_to_model_zoo>' with the directory containing your model assets.
model = dg.load_model(
model_name='yolov11n',
inference_host_address='@local',
zoo_url='<path_to_model_zoo>'
)
# Run inference on the input image.
# Replace '<path_to_cat_image>' with the actual path to your cat image.
inference_result = model('<path_to_cat_image>')
# Pretty print the detection results.
pprint(inference_result.results)
Expected output (example):
[{'bbox': [254.35779146128206,
94.76393992199104,
899.6128147829603,
707.9610544375571],
'category_id': 15,
'label': 'cat',
'score': 0.8902369737625122}]
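Since each entry in inference_result.results is a plain dictionary, you can filter or reformat detections with ordinary Python. For example, to print only detections above a stricter confidence threshold:

# Print only detections with a score of at least 0.5, one per line.
for det in inference_result.results:
    if det['score'] >= 0.5:
        x1, y1, x2, y2 = det['bbox']
        print(f"{det['label']}: {det['score']:.2f} at [{x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}]")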
Visualizing the output
PySDK supports automatic visualization of inference results. The returned inference_result object includes an image_overlay property that overlays bounding boxes and labels on the input image. The code below shows how to use OpenCV for visualization:
import cv2
# Display the image with overlaid detection results.
cv2.imshow("AI Inference", inference_result.image_overlay)
# Wait for the user to press 'x' or 'q' to exit.
while True:
key = cv2.waitKey(0) & 0xFF # Wait indefinitely until a key is pressed.
if key == ord('x') or key == ord('q'):
break
cv2.destroyAllWindows() # Close all OpenCV windows.
Note: The cv2.waitKey(0) function waits indefinitely for a key press, which is useful for pausing the display until the user is ready to close it.
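On a headless system (e.g., a Raspberry Pi accessed over SSH) where cv2.imshow is not available, you can save the annotated image to disk instead. With the default OpenCV image backend used in this guide, image_overlay is a NumPy array that cv2.imwrite accepts directly:

import cv2

# Save the annotated image instead of displaying it.
cv2.imwrite("inference_result.jpg", inference_result.image_overlay)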
An example of the output is shown below:
Troubleshooting and debug tips
- File naming and paths: Ensure that file names (yolov11n.json, yolov11n.hef, and labels_coco.json) are consistent and that the paths specified in the JSON file are correct.
- Configuration mismatches: Verify that the input dimensions and quantization settings in the JSON file match your model's requirements.
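A quick sanity check for both kinds of issues is sketched below. It parses the model JSON and confirms that the files it references exist next to it (the directory layout is assumed to match the model zoo section above):

import json
from pathlib import Path

zoo_dir = Path('<path_to_model_zoo>') / 'yolov11n'  # adjust to your layout
# json.loads fails loudly if the configuration file is malformed.
config = json.loads((zoo_dir / 'yolov11n.json').read_text())

# Confirm the files referenced by the configuration actually exist.
for section, key in [('MODEL_PARAMETERS', 'ModelPath'), ('POST_PROCESS', 'LabelsPath')]:
    filename = config[section][0][key]
    status = 'found' if (zoo_dir / filename).exists() else 'MISSING'
    print(f"{key}: {filename} -> {status}")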
Conclusion
This guide demonstrated how PySDK simplifies running object detection models on Hailo devices by integrating built-in pre-processing, post-processing, and visualization features. Although we used the yolov11n model as an example, the outlined method applies to other object detection models that use built-in NMS post-processing on Hailo devices.
Below is a list of supported models for reference:
nanodet_repvgg.hef | nanodet_repvgg_a12.hef | nanodet_repvgg_a1_640.hef |
yolov10b.hef | yolov10n.hef | yolov10s.hef |
yolov10x.hef | yolov11l.hef | yolov11m.hef |
yolov11n.hef | yolov11s.hef | yolov11x.hef |
yolov5m.hef | yolov5m_6.1.hef | yolov5m6_6.1.hef |
yolov5m_wo_spp.hef | yolov5s.hef | yolov5s_c3tr.hef |
yolov5s_wo_spp.hef | yolov5xs_wo_spp.hef | yolov5xs_wo_spp_nms_core.hef |
yolov6n.hef | yolov6n_0.2.1_nms_core.hef | yolov7.hef |
yolov7_tiny.hef | yolov7e6.hef | yolov8l.hef |
yolov8m.hef | yolov8n.hef | yolov8s.hef |
yolov8x.hef | yolov9c.hef | yolox_l_leaky.hef |
yolox_s_leaky.hef | yolox_s_wide_leaky.hef | yolox_tiny.hef |
The same post-processor works for the five models below as well, but the efficientdet models have 89 output classes and the ssd_mobilenet models have 90. For these models, use the appropriate labels file:
model name | number of classes |
---|---|
efficientdet_lite0 | 89 |
efficientdet_lite1 | 89 |
efficientdet_lite2 | 89 |
ssd_mobilenet_v1 | 90 |
ssd_mobilenet_v2 | 90 |
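To avoid a mismatch between OutputNumClasses and the labels file, you can compare them directly. This assumes the labels file is a JSON object (or array) with one entry per class, as in labels_coco.json:

import json

# The number of labels should equal OutputNumClasses in the model JSON
# (80 for COCO models, 89 for efficientdet, 90 for ssd_mobilenet).
with open("labels_coco.json") as f:
    labels = json.load(f)
print(len(labels))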
The above lists together cover 41 of the 56 models available in the object detection model zoo. The remaining 15 models listed below require their own post-processors. We will cover them in later user guides.
centernet_resnet_v1_18_postprocess | centernet_resnet_v1_50_postprocess | damoyolo_tinynasL20_T |
damoyolo_tinynasL25_S | damoyolo_tinynasL35_M | detr_resnet_v1_18_bn |
detr_resnet_v1_50 | tiny_yolov3 | tiny_yolov4 |
yolov3 | yolov3_416 | yolov3_gluon |
yolov3_gluon_416 | yolov4_leaky | yolov6n_0.2.1 |
Custom models
The method described in this guide is not limited to precompiled models from the Hailo model zoo. It works equally well with custom models. If you plan to deploy your own object detection models, make sure to adjust the JSON configuration with the correct values. In particular, update the OutputNumClasses field to match the number of classes your model detects and provide an appropriate labels file in the LabelsPath field. This ensures that the post-processor correctly interprets the raw output and maps class indices to human-readable labels.
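For example, for a hypothetical custom model that detects two classes, the POST_PROCESS section would change along these lines (the labels file name here is a placeholder):

"POST_PROCESS": [
    {
        "OutputPostprocessType": "DetectionYoloHailo",
        "OutputNumClasses": 2,
        "LabelsPath": "labels_custom.json",
        "OutputConfThreshold": 0.3
    }
]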