License plate detection and recognition – pipelining two models on Hailo devices

Model pipelining lets multiple models work in sequence. One model’s output becomes the next model’s input. This guide shows a practical pipeline for license plate detection followed by OCR on Hailo accelerators.

Tip: You can compile and evaluate your own model using the AI Hub Cloud Compiler.
If you’re new to the tool, check out the Cloud Compiler Quickstart Guide.


What you’ll build

  • License plate detection finds plates in an image or video.
  • License plate OCR reads the characters from each detected plate.
  • Results are combined and displayed. You get boxes and text in one pass.

Setting up your environment

This guide assumes that you have installed PySDK, the Hailo AI runtime and driver, and DeGirum Tools.

Click here for more information about installing PySDK.
Click here for information about installing the Hailo runtime and driver.

To install degirum_tools, run:

pip install degirum_tools

Models used (Hailo)

  • Detector: yolov8n_relu6_global_lp_det--640x640_quant_hailort_multidevice_1
  • OCR: yolov8s_relu6_lp_ocr_7ch--256x128_quant_hailort_multidevice_1

These are available from the degirum/hailo model zoo. You can also compile compatible models in AI Hub and use them the same way.


Quickstart (image example)

Copy this block into a notebook. Update the paths if needed.

import degirum as dg, degirum_tools
from degirum_tools import remote_assets

# 1) Choose where to run
hw_location = "@local"
device_type = ['HAILORT/HAILO8','HAILORT/HAILO8L']

# 2) Select model zoo and models
model_zoo_url = "degirum/hailo"
lp_det_model_name = "yolov8n_relu6_global_lp_det--640x640_quant_hailort_multidevice_1"
lp_ocr_model_name = "yolov8s_relu6_lp_ocr_7ch--256x128_quant_hailort_multidevice_1"

# 3) Token (Cloud or remote AI Server). Leave empty for local/Hailo.
token = ''

# 4) Load models
lp_det_model = dg.load_model(
    model_name=lp_det_model_name,
    inference_host_address=hw_location,
    zoo_url=model_zoo_url,
    token=token,
    device_type=device_type
)

lp_ocr_model = dg.load_model(
    model_name=lp_ocr_model_name,
    inference_host_address=hw_location,
    zoo_url=model_zoo_url,
    token=token,
    device_type=device_type
)

# 5) Build a compound model
crop_model = degirum_tools.CroppingAndClassifyingCompoundModel(
    lp_det_model,
    lp_ocr_model
    # , 30.0  # optional crop extent percentage
)

# 6) Run on an image
inference_result = crop_model(remote_assets.car)

# 7) Display combined results
with degirum_tools.Display("License Plates") as display:
    display.show_image(inference_result)

How it works

  1. Model loading:

    • The license plate detection and OCR models are loaded using dg.load_model. Each is configured for Hailo inference or cloud/AI server runs.
  2. Pipeline creation:

    • A compound model is created with degirum_tools.CroppingAndClassifyingCompoundModel. It automatically crops plate regions from the detector and passes them to the OCR model. You can adjust the crop_extent parameter to expand or shrink the cropped region.
  3. Inference execution:

    • You can run inference on still images or video streams. With predict_stream, each frame is processed, plates are detected, cropped, and recognized in sequence.
  4. Result display:

    • Combined results are shown in a display window. You see bounding boxes around detected plates along with their recognized text. Press x or q to stop when running on a video source.
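
Step 3 above can be sketched end to end for a video source. This is a sketch under the same assumptions as the quickstart (local Hailo inference, models from the degirum/hailo zoo); `"traffic.mp4"` is a placeholder — substitute your own video file, webcam index, or RTSP URL.

```python
import degirum as dg, degirum_tools

# Load detector and OCR models as in the quickstart above.
common = dict(
    inference_host_address="@local",
    zoo_url="degirum/hailo",
    token="",
    device_type=["HAILORT/HAILO8", "HAILORT/HAILO8L"],
)
lp_det_model = dg.load_model(
    model_name="yolov8n_relu6_global_lp_det--640x640_quant_hailort_multidevice_1",
    **common,
)
lp_ocr_model = dg.load_model(
    model_name="yolov8s_relu6_lp_ocr_7ch--256x128_quant_hailort_multidevice_1",
    **common,
)

# Chain them: detector output feeds the OCR model.
crop_model = degirum_tools.CroppingAndClassifyingCompoundModel(lp_det_model, lp_ocr_model)

# predict_stream processes the source frame by frame: detect -> crop -> OCR.
with degirum_tools.Display("License Plates") as display:
    for result in degirum_tools.predict_stream(crop_model, "traffic.mp4"):
        display.show(result)  # press 'x' or 'q' to stop
```

The only difference from the image quickstart is the final loop: `predict_stream` replaces the single `crop_model(...)` call.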

Applications

  • Traffic monitoring
  • Parking management systems
  • Smart toll booths
  • Vehicle access and automation

This pipelining approach also allows extending workflows, such as adding vehicle detection. It provides a scalable, modular way to build real-world license plate recognition systems on Hailo devices.

Sorry, does it work with Hailo8 or Hailo8L? Thanks

Hi @claudio.rebecchi

Welcome to the DeGirum community. The above guide works for both Hailo8 and Hailo8L. Please note that we modified the code to include device_type: if your system has a Hailo8 it will run on the Hailo8, and if it has a Hailo8L, it will run on the Hailo8L. Models compiled for Hailo8L can run on both devices, but not the other way around, which is why we use those models in our examples.

It still doesn’t work and it gives this error

degirum.exceptions.DegirumException: [ERROR]Incorrect value of parameter
Device type HAILORT/HAILO8L is not supported by the system
dg_pipeline_processor_helpers.cpp: 186 [DG::CoreProcessorHelper::deviceTypeGet]

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/claud/hailo_examples/test_degirum.py", line 38, in <module>
    inference_result = crop_model(degirum_tools.remote_assets.car)
  File "/home/claud/hailo_examples/degirum_env/lib/python3.11/site-packages/degirum_tools/compound_models.py", line 208, in __call__
    return self.predict(data)
  File "/home/claud/hailo_examples/degirum_env/lib/python3.11/site-packages/degirum_tools/compound_models.py", line 192, in predict
    for result in self.predict_batch([data]):
  File "/home/claud/hailo_examples/degirum_env/lib/python3.11/site-packages/degirum_tools/compound_models.py", line 692, in predict_batch
    for result in super().predict_batch(data):
  File "/home/claud/hailo_examples/degirum_env/lib/python3.11/site-packages/degirum_tools/compound_models.py", line 345, in predict_batch
    for result1 in self.model1.predict_batch(data):
  File "/home/claud/hailo_examples/degirum_env/lib/python3.11/site-packages/degirum/model.py", line 293, in predict_batch
    for res in self._predict_impl(source):
  File "/home/claud/hailo_examples/degirum_env/lib/python3.11/site-packages/degirum/model.py", line 1233, in _predict_impl
    raise DegirumException(msg) from saved_exception
degirum.exceptions.DegirumException: Failed to perform model 'yolov8n_relu6_global_lp_det--640x640_quant_hailort_multidevice_1' inference: [ERROR]Incorrect value of parameter
Device type HAILORT/HAILO8L is not supported by the system
dg_pipeline_processor_helpers.cpp: 186 [DG::CoreProcessorHelper::deviceTypeGet]

Thanks for your help

Hi @claudio.rebecchi

Can you please share the output of degirum sys-info?

degirum sys-info
Devices:
  HAILORT/HAILO8:
  - '@Index': 0
    Board Name: Hailo-8
    Device Architecture: HAILO8
    Firmware Version: 4.20.0
    ID: '0001:01:00.0'
    Part Number: ''
    Product Name: ''
    Serial Number: ''
  N2X/CPU:
  - '@Index': 0
  TFLITE/CPU:
  - '@Index': 0
  - '@Index': 1
Software Version: 0.18.3

Hi @claudio.rebecchi

Since your system has a Hailo8, please use device_type='HAILORT/HAILO8'

first attempt:

import degirum as dg, degirum_tools
#from degirum_tools import remote_assets

# 1) Choose where to run
hw_location = "@local"
device_type = 'HAILORT/HAILO8'

# 2) Select model zoo and models
model_zoo_url = "degirum/hailo"
lp_det_model_name = "yolov8n_relu6_global_lp_det--640x640_quant_hailort_multidevice_1"
lp_ocr_model_name = "yolov8s_relu6_lp_ocr_7ch--256x128_quant_hailort_multidevice_1"

# 3) Token (Cloud or remote AI Server). Leave empty for local/Hailo.
token = ''

# 4) Load models
lp_det_model = dg.load_model(
    model_name=lp_det_model_name,
    inference_host_address=hw_location,
    zoo_url=model_zoo_url,
    token=token,
)

lp_ocr_model = dg.load_model(
    model_name=lp_ocr_model_name,
    inference_host_address=hw_location,
    zoo_url=model_zoo_url,
    token=token,
)

# 5) Build a compound model
crop_model = degirum_tools.CroppingAndClassifyingCompoundModel(
    lp_det_model,
    lp_ocr_model,
    30.0  # optional crop extent percentage
)

# 6) Run on an image
inference_result = crop_model(degirum_tools.remote_assets.car)

# 7) Display combined results
with degirum_tools.Display("License Plates") as display:
    display.show_image(inference_result)

output:

Traceback (most recent call last):
  File "/home/claud/hailo_examples/test_degirum.py", line 38, in <module>
    inference_result = crop_model(degirum_tools.remote_assets.car)
AttributeError: module 'degirum_tools' has no attribute 'remote_assets'

Second attempt:
In the script I added the command: from degirum_tools import remote_assets

new output:

degirum.exceptions.DegirumException: [ERROR]Incorrect value of parameter
Device type HAILORT/HAILO8L is not supported by the system
dg_pipeline_processor_helpers.cpp: 186 [DG::CoreProcessorHelper::deviceTypeGet]

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/claud/hailo_examples/test_degirum.py", line 38, in <module>
    inference_result = crop_model(degirum_tools.remote_assets.car)
  File "/home/claud/hailo_examples/degirum_env/lib/python3.11/site-packages/degirum_tools/compound_models.py", line 208, in __call__
    return self.predict(data)
  File "/home/claud/hailo_examples/degirum_env/lib/python3.11/site-packages/degirum_tools/compound_models.py", line 192, in predict
    for result in self.predict_batch([data]):
  File "/home/claud/hailo_examples/degirum_env/lib/python3.11/site-packages/degirum_tools/compound_models.py", line 692, in predict_batch
    for result in super().predict_batch(data):
  File "/home/claud/hailo_examples/degirum_env/lib/python3.11/site-packages/degirum_tools/compound_models.py", line 345, in predict_batch
    for result1 in self.model1.predict_batch(data):
  File "/home/claud/hailo_examples/degirum_env/lib/python3.11/site-packages/degirum/model.py", line 293, in predict_batch
    for res in self._predict_impl(source):
  File "/home/claud/hailo_examples/degirum_env/lib/python3.11/site-packages/degirum/model.py", line 1233, in _predict_impl
    raise DegirumException(msg) from saved_exception
degirum.exceptions.DegirumException: Failed to perform model 'yolov8n_relu6_global_lp_det--640x640_quant_hailort_multidevice_1' inference: [ERROR]Incorrect value of parameter
Device type HAILORT/HAILO8L is not supported by the system
dg_pipeline_processor_helpers.cpp: 186 [DG::CoreProcessorHelper::deviceTypeGet]

I hope I was clear, thanks for your interest

Hi @claudio.rebecchi

Thanks for sharing the code. I see where the mistake is now. You need to add device_type to the load_model function. See below:

lp_det_model = dg.load_model(model_name=lp_det_model_name,
    inference_host_address=hw_location,
    zoo_url=model_zoo_url,
    token=token,
    device_type=device_type
)

lp_ocr_model = dg.load_model( model_name=lp_ocr_model_name,
    inference_host_address=hw_location,
    zoo_url=model_zoo_url,
    token=token,
    device_type=device_type
)

Thanks, Shashi, everything’s fine now. If I had a stream of images from a webcam instead of a still image, what degirum features should I use in conjunction with Hailo8 for real-time operation?

Hi @claudio.rebecchi

Glad to hear it is working. Running inference on video streams is straightforward. You can use our predict_stream method, which supports webcams, video files, RTSP, and RTMP streams. See PySDKExamples/examples/singlemodel/object_detection_video_stream.ipynb at main · DeGirum/PySDKExamples for example usage. It is just a one-line change. Please let us know if you encounter any issues.
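
For reference, here is a sketch of that one-line change applied to the quickstart: the single-image call is replaced by a loop over webcam frames. Camera index 0 is an assumption (the system default); adjust it for your setup. `crop_model` is the compound model already built in your working script.

```python
import degirum_tools

# Instead of:
#   inference_result = crop_model(degirum_tools.remote_assets.car)
#   ...display.show_image(inference_result)
# loop over live webcam frames with predict_stream:
with degirum_tools.Display("License Plates") as display:
    for result in degirum_tools.predict_stream(crop_model, 0):
        display.show(result)  # press 'x' or 'q' to stop
```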