I’m exploring the possibility of running YOLOE (the open-vocabulary, promptable variant of YOLO) on the Hailo-8 accelerator using the DeGirum SDK, and I’d love to hear from anyone who’s tried this or has insights on:
Compatibility
Has anyone successfully compiled and deployed a YOLOe model to Hailo-8?
Model Preparation & Conversion
What’s the recommended workflow to convert a YOLOe checkpoint to a Hailo-compatible format using deGirum’s tooling?
Troubleshooting
Common errors you hit (unsupported ops, shape mismatches, etc.) and how you resolved them.
Hi @sahil, did @shashi’s response give you some clarity on your question?
If so, do you think his response should be marked as the “solution” so others can clearly see the current limitations of compiling YOLOe for Hailo-8/Hailo-8L?
Sorry to revive this post, but I’ve just read this in the Ultralytics YOLOE documentation:
YOLO11-L and YOLOE-L have identical architectures (prompt modules disabled in YOLO11-L), resulting in identical inference speed and similar GFLOP estimates.
If this model’s architecture is identical to YOLO11-L’s, what makes it incompatible with Hailo-8?
I thought that since a Raspberry Pi can run this model, the Hailo-8 HAT would make it run faster, but that doesn’t seem to be the case…
Does anyone know whether this model has issues with other accelerator chips? What should I look for in a chip or framework to tell whether it can run the model?
Ultralytics says that YOLOE, at least the smaller versions like YOLOE-11n and YOLOE-11s, should run on edge devices, but which ones besides Jetson???
Welcome to the DeGirum community. We will reopen this issue, as a different user reported that they were able to compile the model to Hailo after fixing the classes. We will keep you posted.