Deploying YOLOE on Hailo-8 with the DeGirum PySDK – Feasibility & Workflow?

Hi everyone,

I’m exploring the possibility of running YOLOE (Ultralytics’ prompt-based, open-vocabulary variant of YOLO) on the Hailo-8 accelerator using the DeGirum PySDK, and I’d love to hear from anyone who’s tried this or has insights on:

  1. Compatibility
     • Has anyone successfully compiled and deployed a YOLOE model to Hailo-8?
  2. Model Preparation & Conversion
     • What’s the recommended workflow to convert a YOLOE checkpoint to a Hailo-compatible format using DeGirum’s tooling?
  3. Troubleshooting
     • Common errors you hit (unsupported ops, shape mismatches, etc.) and how you resolved them.

Hi @sahil

Due to the presence of some unsupported layers in the model, YOLOe cannot be compiled for Hailo8/Hailo8L.


Hi @sahil, did @shashi’s response give you some clarity on your question?

If so, do you think his response should be marked as the “solution” so others can clearly see the current limitations of compiling YOLOe for Hailo-8/Hailo-8L?

Hi,

Sorry to revive this post, but I’ve just read this in the Ultralytics YOLOE documentation:

YOLO11-L and YOLOE-L have identical architectures (prompt modules disabled in YOLO11-L), resulting in identical inference speed and similar GFLOP estimates.

If this model is identical in architecture to YOLO11-L, what is making it incompatible with Hailo-8?

I thought that since a Raspberry Pi can run this model, the Hailo-8 HAT on it would make it run faster, but that’s not the case…

Does anyone know if this model has issues with other accelerator chips? What should I look for in the chips or frameworks to figure out which ones can run it?

Ultralytics says this YOLOE model, at least the smaller versions like YOLOE-11n or YOLOE-11s, should be able to run on edge devices :person_shrugging: but which ones besides the Jetson boards?

Hi @ic3_2k

Welcome to the DeGirum community. We will reopen this issue, as a different user reported that they were able to compile the model to Hailo after fixing the classes. We will keep you posted.

OMG!!! That’s amazing!

Is there a link?

I’m very interested in the topic and want to understand what is/was the problem and how to ‘identify’ and ‘fix’ the problematic classes…

And regarding my other question: is this model compatible with other hardware platforms through DeGirum PySDK?

@ic3_2k @sahil

YOLOe is now supported if the prompt embeddings are pre-calculated. Here is an example of how to do that.

from ultralytics import YOLOE

# Load the pretrained model (not the prompt-free version)
model = YOLOE("yoloe-11s-seg.pt")

# Pre-calculate the text prompt embeddings and bake them into the model
classes = ["apples", "oranges"]
model.set_classes(classes, model.get_text_pe(classes))

# Save a checkpoint with the embeddings fixed to these classes
model.save("yoloe-11s-seg_fruit.pt")

The newly saved checkpoint can be compiled for Hailo.
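For readers following along, here is a minimal sketch of what you might do next with that checkpoint, assuming the standard Ultralytics predict/export API; the image path below is a placeholder, not something from this thread:

from ultralytics import YOLOE

# Reload the checkpoint that now has the prompt embeddings baked in
model = YOLOE("yoloe-11s-seg_fruit.pt")

# Optional sanity check on a sample image (placeholder path)
results = model.predict("sample.jpg")
print(results[0].boxes)

# Export to ONNX as an intermediate format for the Hailo toolchain
model.export(format="onnx")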

Hi @lawrence and @shashi

I’m currently working with 'yoloe-v8s-seg.pt' and 'yoloe-11l-seg.pt' to detect road damage using prompts such as "crashed vehicle", "damaged road", "road garbage", etc. I have 7 classes in total, and I want to deploy one of the YOLOE models (v8s or 11l) on Hailo-8. How can I compile the models? Could you please share your knowledge or any links related to that?

PS: I already have the .onnx models; I just need the steps to convert them to .hef.
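Not an official answer, but based on lawrence’s recipe above, here is a minimal sketch of how the prompt embeddings could be baked into your checkpoint before re-exporting to ONNX; the class list below only repeats the three prompts mentioned in the post, and the file names are illustrative:

from ultralytics import YOLOE

# Start from the prompt-based checkpoint (not the prompt-free variant)
model = YOLOE("yoloe-11l-seg.pt")

# Illustrative prompt list; replace with your full set of 7 classes
classes = [
    "crashed vehicle",
    "damaged road",
    "road garbage",
    # ... remaining classes
]

# Pre-calculate the text prompt embeddings and bake them into the model
model.set_classes(classes, model.get_text_pe(classes))
model.save("yoloe-11l-seg_road_damage.pt")

# Re-export to ONNX from the new checkpoint before compiling for Hailo
YOLOE("yoloe-11l-seg_road_damage.pt").export(format="onnx")

Per the earlier posts, an ONNX exported from the original checkpoint (without the embeddings pre-calculated) may be what fails to compile.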

Hi @kamali.nade

You can apply for our cloud compiler access: Early access: DeGirum Cloud Compiler is live! - General - DeGirum Community
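Once you have a compiled model, here is a minimal sketch of running it through DeGirum PySDK on a locally attached Hailo-8; the model name, zoo location, and image path below are placeholders, not actual compiled artifacts from this thread:

import degirum as dg

# Placeholder model name and zoo location; use the ones produced for your compiled model
model = dg.load_model(
    model_name="yoloe-11s-seg_fruit--640x640_quant_hailort_hailo8_1",
    inference_host_address="@local",  # run on the locally attached Hailo-8
    zoo_url="<your model zoo or local model directory>",
    token="",
)

# Run inference on a sample image (placeholder path) and print the detections
result = model("sample.jpg")
print(result.results)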
