Hello!
Thank you so much for your reply. It really was helpful for me.
I did everything as you indicated.
My code is:
model = dg.load_model(
    model_name="bestYolo11s--640x640_quant_hailort_multidevice_1",
    zoo_url="aiserver://",
    inference_host_address="192.168.31.173:8778",
    token="<>",
)
model.output_confidence_threshold = 0.001
model.output_nms_threshold = 0.7
model.output_max_detections = 300
model.output_max_detections_per_class = 300
classmap = [1]
# Create evaluator
evaluator = ObjectDetectionModelEvaluator(
    model,
    classmap=classmap,
    show_progress=True,
)
# Folder structure should be: /images/cat/, /images/dog/, etc.
image_dir = "F:/YOLO/RamNew/HAILO/Win11VENV/degirum-windows/test"
coco_json = "F:/YOLO/RamNew/HAILO/Win11VENV/degirum-windows/test/_annotations.coco.json"
# Run evaluation against the COCO annotation file (max_images=0 = all images)
results = evaluator.evaluate(image_dir, coco_json, max_images=0)
print("COCO mAP stats:", results[0])
And these are my results:
COCO mAP stats: [0.75327223 0.99623725 0.95059774 0.4 0.68988011 0.78812285
0.47962382 0.81159875 0.81159875 0.4 0.75 0.84009009]
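For reference, I assumed the 12 values follow the standard pycocotools COCOeval.stats ordering (this is my assumption about the evaluator's output, not something I confirmed in its docs), and labeled them like this:

```python
# Label the 12 values in results[0], assuming they use the standard
# pycocotools COCOeval.stats ordering (an assumption, not confirmed).
COCO_STAT_NAMES = [
    "AP  @[IoU=0.50:0.95]",   # overall mean Average Precision
    "AP50 @[IoU=0.50]",
    "AP75 @[IoU=0.75]",
    "AP_small",
    "AP_medium",
    "AP_large",
    "AR  @[maxDets=1]",
    "AR  @[maxDets=10]",
    "AR  @[maxDets=100]",
    "AR_small",
    "AR_medium",
    "AR_large",
]

# The 12 numbers printed above, pasted in for illustration.
stats = [0.75327223, 0.99623725, 0.95059774, 0.4, 0.68988011, 0.78812285,
         0.47962382, 0.81159875, 0.81159875, 0.4, 0.75, 0.84009009]

labeled = dict(zip(COCO_STAT_NAMES, stats))
for name, value in labeled.items():
    print(f"{name}: {value:.4f}")
```

If that ordering holds, the extra entries beyond the 7 in the guide would be the size- and maxDets-specific recall values.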
The metrics are higher than I expected and higher than described in the Hailo guide "Evaluating model accuracy after compilation":
- AP: Overall mean Average Precision
- AP50: Precision at IoU ≥ 0.5
- AP75: Precision at IoU ≥ 0.75
- AP_small, AP_medium, AP_large: Size-specific precision
- AR: Recall statistics
That list names 7 metrics, but my results contain 12 values. How can I explore the remaining metrics?
Thank you in advance!