Model pipelining lets multiple models work in sequence. One model’s output becomes the next model’s input. This guide shows a practical pipeline for license plate detection followed by OCR on Hailo accelerators.
Tip: You can compile and evaluate your own model using the AI Hub Cloud Compiler.
If you’re new to the tool, check out the Cloud Compiler Quickstart Guide.
What you’ll build
- License plate detection finds plates in an image or video.
- License plate OCR reads the characters from each detected plate.
- Results are combined and displayed. You get boxes and text in one pass.
Setting up your environment
This guide assumes that you have installed PySDK, the Hailo AI runtime and driver, and DeGirum Tools.
Click here for more information about installing PySDK.
Click here for information about installing the Hailo runtime and driver.
To install degirum_tools, run:
pip install degirum_tools
Models used (Hailo)
- Detector:
yolov8n_relu6_global_lp_det--640x640_quant_hailort_multidevice_1
- OCR:
yolov8s_relu6_lp_ocr_7ch--256x128_quant_hailort_multidevice_1
These are available from the degirum/hailo model zoo. You can also compile compatible models in AI Hub and use them the same way.
Quickstart (image example)
Copy this block into a notebook. Update the paths if needed.
import degirum as dg, degirum_tools
from degirum_tools import remote_assets

# 1) Choose where to run
hw_location = "@local"
device_type = ["HAILORT/HAILO8", "HAILORT/HAILO8L"]

# 2) Select model zoo and models
model_zoo_url = "degirum/hailo"
lp_det_model_name = "yolov8n_relu6_global_lp_det--640x640_quant_hailort_multidevice_1"
lp_ocr_model_name = "yolov8s_relu6_lp_ocr_7ch--256x128_quant_hailort_multidevice_1"

# 3) Token (Cloud or remote AI Server). Leave empty for local Hailo inference.
token = ""

# 4) Load models
lp_det_model = dg.load_model(
    model_name=lp_det_model_name,
    inference_host_address=hw_location,
    zoo_url=model_zoo_url,
    token=token,
    device_type=device_type,
)
lp_ocr_model = dg.load_model(
    model_name=lp_ocr_model_name,
    inference_host_address=hw_location,
    zoo_url=model_zoo_url,
    token=token,
    device_type=device_type,
)

# 5) Build a compound model: detector crops feed the OCR model
crop_model = degirum_tools.CroppingAndClassifyingCompoundModel(
    lp_det_model,
    lp_ocr_model,
    # 30.0,  # optional crop extent percentage
)

# 6) Run on an image
inference_result = crop_model(remote_assets.car)

# 7) Display combined results
with degirum_tools.Display("License Plates") as display:
    display.show_image(inference_result)
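The combined result carries one entry per detected plate, with the OCR text substituted as the label. As a minimal sketch, assuming each entry in the result's `results` list is a dict with `"label"` and `"bbox"` keys (an assumption about the result schema; verify against your PySDK version), you can pull out the recognized plates like this:

```python
def extract_plates(results):
    """Collect (text, bbox) pairs from a PySDK-style results list.

    Assumes each entry is a dict with a "label" (the OCR text, after the
    compound model replaces the detector label) and a "bbox" in
    [x1, y1, x2, y2] form -- adjust the keys to match your result schema.
    """
    return [(r["label"], r["bbox"]) for r in results if r.get("label")]

# Example with a mock results list shaped like PySDK output:
mock = [
    {"label": "ABC1234", "bbox": [100, 200, 260, 240], "score": 0.91},
    {"label": "XYZ7890", "bbox": [400, 210, 560, 252], "score": 0.88},
]
print(extract_plates(mock))
```

With a real run, you would pass `inference_result.results` instead of the mock list.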
How it works
- Model loading: The license plate detection and OCR models are loaded with dg.load_model. Each is configured for Hailo inference or cloud/AI server runs.
- Pipeline creation: A compound model is created with degirum_tools.CroppingAndClassifyingCompoundModel. It automatically crops plate regions from the detector's results and passes them to the OCR model. You can adjust the crop_extent parameter to expand or shrink the cropped region.
- Inference execution: You can run inference on still images or video streams. With predict_stream, each frame is processed in sequence: plates are detected, cropped, and recognized.
- Result display: Combined results are shown in a display window, with bounding boxes around detected plates alongside their recognized text. Press x or q to stop when running on a video source.
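The video path described above can be sketched as a small wrapper. This is a sketch, not a definitive implementation: it assumes the degirum_tools predict_stream generator and Display API, and the `run_lpr_video` function name is hypothetical.

```python
def run_lpr_video(video_source, crop_model, window_title="License Plates"):
    """Run the detection + OCR compound model on a video stream.

    `video_source` may be a camera index, a file path, or a stream URL.
    Imports are deferred so this sketch can be read without the SDK installed.
    """
    import degirum_tools  # only needed at run time

    with degirum_tools.Display(window_title) as display:
        # predict_stream yields one combined result per frame;
        # press 'x' or 'q' in the window to stop.
        for result in degirum_tools.predict_stream(crop_model, video_source):
            display.show(result)

# Usage (with crop_model from the quickstart):
# run_lpr_video("traffic.mp4", crop_model)
```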
Applications
- Traffic monitoring
- Parking management systems
- Smart toll booths
- Vehicle access and automation
This pipelining approach also extends to longer workflows, such as adding a vehicle detection stage ahead of plate detection. It provides a scalable, modular way to build real-world license plate recognition systems on Hailo devices.
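A vehicle detection stage could be chained the same way. The sketch below is an assumption-heavy illustration: it presumes degirum_tools provides a CroppingAndDetectingCompoundModel for detector-inside-detector pipelines and that compound models compose; check the degirum_tools documentation for your version before relying on either.

```python
def build_vehicle_lpr_pipeline(vehicle_det_model, lp_det_model, lp_ocr_model):
    """Chain vehicle detection -> plate detection -> OCR (sketch only)."""
    import degirum_tools  # deferred so the sketch reads without the SDK

    # Crop each detected vehicle and run plate detection inside the crop
    # (CroppingAndDetectingCompoundModel is an assumed class name).
    vehicles_to_plates = degirum_tools.CroppingAndDetectingCompoundModel(
        vehicle_det_model, lp_det_model
    )
    # Crop each detected plate and run OCR on it, as in the quickstart.
    return degirum_tools.CroppingAndClassifyingCompoundModel(
        vehicles_to_plates, lp_ocr_model
    )
```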