Issue with online inference using a custom-trained model compiled on the DeGirum platform

Hey,
I have trained a YOLOv8m model with an added P2 detection head. I compiled it online on the DeGirum platform for Hailo8, and it compiled successfully. But when I run inference on the DeGirum platform using the resulting .hef file, it throws this error:

An error occurred during predict:s: Functionality is not supported DetectionPostprocess: input boxes tensor type is unknown dg_postprocess_detection.cpp: 1878
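For context, here is a minimal sketch of the PySDK call that triggers it (the zoo URL, token, and model name are placeholders, not my actual values):

```python
import degirum as dg

# Connect to the DeGirum cloud inference platform
# (zoo URL and token below are placeholders)
zoo = dg.connect(dg.CLOUD, "<cloud_zoo_url>", token="<my_token>")

# Load the compiled Hailo8 model by its zoo name (placeholder)
model = zoo.load_model("<compiled_yolov8m_p2_model_name>")

# The error is raised here, during prediction on a test image
result = model("test_image.jpg")
print(result)
```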

How can I solve this?

Hi @suraj.upadhyay

Welcome to the DeGirum community, and thank you for bringing this to our attention. We will check the postprocessor's behavior when there is an extra detection head and keep you posted.

Hi @shashi

Does this only happen with the online DeGirum compiler and inference, or will the same thing happen when I compile and run inference locally?

Hi @suraj.upadhyay

If you compile it yourself, it should be OK. We will release a fix soon, after which you can compile again. If it is urgent, you can send us your checkpoint by email and we will compile it for you.

Hi @suraj.upadhyay

The fix has been pushed to the cloud compiler. Please try to compile again. The compiler should now support any combination of heads coming off the backbone (P2 through P6).


Hi @lawrence
Thank you for the quick fix. I have tested it.
I wanted to know: when I compile on DeGirum, does it compile for best FPS or for best accuracy?
I ask because when I test on the same image containing three small objects, the .pt model detects all three with 55+ confidence, but the .hef model detects only one object, and always with a confidence score below 30.
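For reference, here is roughly how I am comparing the two (a sketch; the image path, zoo URL, token, and model name are placeholders):

```python
import degirum as dg
from ultralytics import YOLO

IMAGE = "small_objects.jpg"  # test image with the 3 small objects (placeholder)

# Original PyTorch checkpoint via Ultralytics
pt_model = YOLO("best.pt")
pt_res = pt_model(IMAGE)[0]
print("PT:", [(pt_res.names[int(b.cls)], round(float(b.conf), 2)) for b in pt_res.boxes])

# Compiled .hef via the DeGirum cloud (zoo URL, token, model name are placeholders)
zoo = dg.connect(dg.CLOUD, "<cloud_zoo_url>", token="<my_token>")
hef_model = zoo.load_model("<compiled_yolov8m_p2_model_name>")
hef_res = hef_model(IMAGE)
print("HEF:", [(r["label"], round(r["score"], 2)) for r in hef_res.results])
```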

Hi @suraj.upadhyay

Did you supply calibration data when compiling? Also, it is worth evaluating the mAP of the compiled model to quantify the loss: Hailo guide: Evaluating model accuracy after compilation

With the Hailo compiler, best FPS is decoupled from best accuracy; they are not mutually exclusive. We offer only the standard optimization level (zero) at the standard quantization level (8-bit), and we always compile for the best FPS.

One thing that usually helps is using a representative dataset for quantization. We currently provide a default dataset for quantization, but it may not be representative of your data. I suggest you upload 128 images from your own dataset for quantization; you can find this option in the advanced options drop-down of the cloud compiler.
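For example, a quick way to build that set is to randomly sample 128 images from your training data and zip them for upload (a sketch using only the standard library; paths are placeholders):

```python
import random
import shutil
from pathlib import Path

SRC = Path("dataset/train/images")  # your training images (placeholder path)
DST = Path("calibration_images")    # folder to zip and upload
DST.mkdir(exist_ok=True)

# Sample 128 images so the calibration set reflects your data distribution
images = sorted(SRC.glob("*.jpg"))
random.seed(0)
for img in random.sample(images, k=min(128, len(images))):
    shutil.copy(img, DST / img.name)

# Zip the folder for upload via the cloud compiler's advanced options
shutil.make_archive("calibration_images", "zip", DST)
```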

Hi @shashi, I did add images in the advanced settings. I will try evaluating the mAP.
@lawrence Regarding FPS, just sharing my experience: when I compile locally with the --performance flag, I get 6-7 FPS more compared to the DeGirum online compiler.
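For context, this is the kind of local compile I am running, wrapped in Python here for completeness (a sketch; the Hailo Model Zoo config name, checkpoint, and calibration path are placeholders for my setup):

```python
import subprocess

# Local compile with the Hailo Model Zoo CLI in performance mode.
# "yolov8m", best.onnx, and the calibration folder are placeholders;
# --performance enables the longer compilation pass that targets higher FPS.
subprocess.run(
    [
        "hailomz", "compile", "yolov8m",
        "--ckpt", "best.onnx",
        "--hw-arch", "hailo8",
        "--calib-path", "calibration_images/",
        "--performance",
    ],
    check=True,
)
```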

Hi @suraj.upadhyay

Thanks for the info about compiler performance. It would be interesting to see the mAP for different settings.