Hi there!
I have a well-trained YOLO model (in .pt format). I trained it as two models: YOLO11n and YOLO11s. Testing both models' inference (ONNX and OpenVINO exports), I got excellent results with almost no difference between them. Both the N and S models worked perfectly.
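For reference, this is roughly how I exported and checked the ONNX and OpenVINO versions (the weight file and test image names here are just placeholders):

```python
# Sketch of my export/inference check with the Ultralytics API;
# file names are placeholders for my custom-trained weights.
from ultralytics import YOLO

model = YOLO("yolo11n_custom.pt")

# Export the trained model to ONNX and OpenVINO
model.export(format="onnx")      # creates yolo11n_custom.onnx
model.export(format="openvino")  # creates yolo11n_custom_openvino_model/

# Run the same image through each format and compare detection counts
for weights in ("yolo11n_custom.pt",
                "yolo11n_custom.onnx",
                "yolo11n_custom_openvino_model/"):
    results = YOLO(weights)("test.jpg")
    print(weights, "->", len(results[0].boxes), "detections")
```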
Now I’ve converted them to HEF format for inference on a Hailo-8L.
The S model works fine (as expected), but the N model (HEF) detects almost nothing…
Compilation was done on the DeGirum Hub. The same 100 images were selected for quantization calibration.
All settings for both models were the same.
Inference was run on the same device (RPi5).
One more detail: the FPS on video-file inference is the same for the S and N models.
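For context, this is roughly how I run the HEF models on the RPi5 with DeGirum PySDK (the model name, zoo URL, and token below are placeholders for mine):

```python
# Sketch of my HEF inference on the RPi5 + Hailo-8L via DeGirum PySDK;
# model name, zoo URL, and token are placeholders.
import degirum as dg

model = dg.load_model(
    model_name="my_yolo11n_hef_model",
    inference_host_address="@local",   # run locally on the RPi5 with the Hailo-8L
    zoo_url="https://hub.degirum.com/<my_org>/<my_zoo>",
    token="<my_token>",
)

result = model("test.jpg")
print(result.results)  # list of detections: bbox, label, score
```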
Can anybody explain why I’m getting this result?
Thanks in advance!