Difference between YOLO11n and YOLO11s

Hi, there!

I have a well-trained YOLO model (in .pt format). I trained it as two models - YOLO11n and YOLO11s. Testing both models' inference (ONNX and OpenVINO), I got excellent results with almost no difference between them. Both the N and S models worked perfectly.

Now I've converted them to HEF format for inference on a Hailo-8L.

The S model works fine (as expected), but the N model (HEF) detects almost nothing…

Compilation was done on the DeGirum Hub. The same 100 images were selected for quantization.

All settings for both models were the same.

Inference was run on the same device (Raspberry Pi 5).

One more detail: the FPS on video file inference is the same for the S and N models.

Can anybody explain why I get such a result?

Thanks in advance!

Hi @zoomrenew

Welcome to the DeGirum community. After compiling the models, did you evaluate the mAPs to quantify the loss? Please see Hailo guide: Evaluating model accuracy after compilation for details.

Thank you.

I have already looked into this…

Was there any mAP loss for YOLO11s and YOLO11n? Please do the same for the OpenVINO float and quant versions as well.

Hi!

I'm trying to evaluate the mAPs using the @cloud resource.

This is my code:

```python
import degirum as dg
import degirum_tools
from degirum_tools.classification_eval import ImageClassificationModelEvaluator
from degirum_tools.detection_eval import ObjectDetectionModelEvaluator

# Load classification model
model = dg.load_model(
    model_name="bestYolo11s--640x640_quant_hailort_multidevice_1",
    inference_host_address="@cloud",
    zoo_url="sergio_suslov/models",
    token='*************************'
)

classmap = {
    "0": "person"
}

# Create evaluator
evaluator = ObjectDetectionModelEvaluator(
    model,
    classmap=classmap,
    show_progress=True
)

# Folder structure should be: /images/cat/, /images/dog/, etc.
image_dir = "../imagesLite/0"
coco_json = "F:/YOLO/RamNew/HAILO/Win11VENV/degirum-windows/test/_annotations.coco.json"

# Run evaluation (no annotation file required)
results = evaluator.evaluate(image_dir, coco_json, max_images=0)

# Print top-k accuracy
print("Top-K Accuracies:", results[0])
```

And I consistently get this error:

```
File "F:\YOLO\RamNew\HAILO\Win11VENV\degirum-windows\Lib\site-packages\pycocotools\coco.py", line 330, in loadRes
    if 'caption' in anns[0]:
                    ~~~~^^^
IndexError: list index out of range
```

Could you tell me what's wrong?

I think this is because I don't have a proper connection to the @cloud HAILO8L. If so, could you explain how to set it up (if possible)?

Thanks in advance.

Hi @zoomrenew

Are you trying to evaluate an object detection model or a classification model? Our logs show that you were able to run inferences in the cloud (~7000 inferences). The error is most likely in the annotations file (either the path is not correct, the format does not match detection annotations, or something similar).

I want to evaluate a detection model, and the annotation file path is correct. If I change it to an incorrect path, the error "No such file or directory: 'F:/YOLO/RamNew/HAILO/Win11VENV/degirum-windows/test1/_annotations.coco.json'" occurs.

As for the format of my JSON - the file was produced by Roboflow, and I have no idea what might be wrong with it.

Hi @zoomrenew

Hard to troubleshoot without actually seeing the file. The error is coming from pycocotools which means that all inferences were done successfully and the script reached the mAP evaluation stage.
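One quick thing worth checking on your side in the meantime: whether the model returns any detections at all on your evaluation images. pycocotools raises this same IndexError when the list of predictions handed to loadRes ends up empty. A rough check (the image folder and file extension are assumptions based on your script):

```python
import glob

# A few sample images from the evaluation folder (.jpg extension is an assumption).
sample_images = glob.glob("../imagesLite/0/*.jpg")[:3]

for path in sample_images:
    result = model(path)  # model loaded as in your script above
    print(path, "->", len(result.results), "detections")
```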

How can I share the JSON file?

I am very grateful for your attention to my problem.

Hi @zoomrenew

Please check your email.

Hi @zoomrenew

After reviewing the evaluation code, you'll need to update the classmap to classmap = [1].

Your annotation file follows the COCO format, where "category_id": 1 is used in the JSON, while the model outputs class ID 0. To make the model predictions align correctly with your annotations, you should map the model's 0 to 1, hence the need for classmap = [1]:

classmap = {"0": "person"} --> classmap = [1]

This ensures consistency between the model output and the dataset labels.
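If you want to double-check the mapping against your own file, a short inspection like this (the path is copied from your script) shows which category IDs the Roboflow export actually uses:

```python
import json

# Annotation file path copied from your evaluation script.
coco_json = "F:/YOLO/RamNew/HAILO/Win11VENV/degirum-windows/test/_annotations.coco.json"

with open(coco_json, "r", encoding="utf-8") as f:
    coco = json.load(f)

print("categories:", coco["categories"])
print("category IDs used in annotations:",
      sorted({a["category_id"] for a in coco["annotations"]}))
```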
Please try with your YOLO11s, which is working.
My guess for the YOLO11n checkpoint issue is that it is sensitive to quantization. To check if your YOLO11n model is very sensitive to quantization, you can compile this model to OpenVINO quant and check, as @lawrence suggested. If you see big performance degradation, then it shows that the model is sensitive to quantization and it is not device-dependent.

Let me know if this solves your evaluation process.

Hello!

Thank you so much for your reply. It was really helpful.

I did everything as you indicated.

My code is:

```python
model = dg.load_model(
    model_name="bestYolo11s--640x640_quant_hailort_multidevice_1",
    zoo_url="aiserver://",
    inference_host_address="192.168.31.173:8778",
    token='<>'
)

model.output_confidence_threshold = 0.001
model.output_nms_threshold = 0.7
model.output_max_detections = 300
model.output_max_detections_per_class = 300

classmap = [1]

# Create evaluator
evaluator = ObjectDetectionModelEvaluator(
    model,
    classmap=classmap,
    show_progress=True
)

image_dir = "F:/YOLO/RamNew/HAILO/Win11VENV/degirum-windows/test"
coco_json = "F:/YOLO/RamNew/HAILO/Win11VENV/degirum-windows/test/_annotations.coco.json"

# Run evaluation
results = evaluator.evaluate(image_dir, coco_json, max_images=0)

print("COCO mAP stats:", results[0])
```

And these are my results:

```
COCO mAP stats: [0.75327223 0.99623725 0.95059774 0.4 0.68988011 0.78812285
0.47962382 0.81159875 0.81159875 0.4 0.75 0.84009009]
```

The metrics are better than I expected, and there are more of them than described in the Hailo guide: Evaluating model accuracy after compilation

  • AP: Overall mean Average Precision

  • AP50: Precision at IoU ≥ 0.5

  • AP75: Precision at IoU ≥ 0.75

  • AP_small, AP_medium, AP_large: Size-specific precision

  • AR: Recall statistics

The guide lists 7 metrics, but my output contains 12 values.
How can I interpret the extra ones?

Thanks in advance!

The statistics are as follows (a short labeling snippet follows the list):

  • AP50:95
  • AP50
  • AP75
  • AP50:95 small
  • AP50:95 medium
  • AP50:95 large
  • AR maxDets1
  • AR maxDets10
  • AR maxDets100
  • AR maxDets100 small
  • AR maxDets100 medium
  • AR maxDets100 large
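To make the 12 numbers easier to read, you can zip the values in results[0] with these names; this is just a small formatting sketch, with names taken from the standard pycocotools summary order shown above:

```python
# Metric names in standard pycocotools COCOeval summary order (matches the list above).
metric_names = [
    "AP50:95", "AP50", "AP75",
    "AP50:95 small", "AP50:95 medium", "AP50:95 large",
    "AR maxDets=1", "AR maxDets=10", "AR maxDets=100",
    "AR maxDets=100 small", "AR maxDets=100 medium", "AR maxDets=100 large",
]

for name, value in zip(metric_names, results[0]):
    print(f"{name:>22}: {value:.4f}")
```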

What you need to do is run the same evaluation on the YOLO11n model. Please compile YOLO11n using our compiler to OpenVINO quant and perform the evaluation on that model. Then perform the same evaluation on the Hailo version as well, and compare the statistics.
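Roughly, the comparison could look like the loop below. The model names are placeholders (use whatever names the compiler produces for your OpenVINO quant and Hailo builds in your zoo); the rest reuses the evaluation setup from your script above:

```python
# Placeholder model names: replace with the actual names in your model zoo.
candidate_models = [
    "bestYolo11n--640x640_quant_openvino_multidevice_1",  # OpenVINO quant build (assumed name)
    "bestYolo11n--640x640_quant_hailort_multidevice_1",   # Hailo build (assumed name)
]

for name in candidate_models:
    m = dg.load_model(
        model_name=name,
        inference_host_address="@cloud",  # or your AI server address
        zoo_url="sergio_suslov/models",
        token="<token>",
    )
    # Same evaluation settings as in your script.
    m.output_confidence_threshold = 0.001
    m.output_nms_threshold = 0.7
    m.output_max_detections = 300
    m.output_max_detections_per_class = 300

    evaluator = ObjectDetectionModelEvaluator(m, classmap=[1], show_progress=True)
    stats = evaluator.evaluate(image_dir, coco_json, max_images=0)
    print(name, stats[0])
```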

Thank you so much!

I will try.