Hi,
Sorry for my late reply.
I confirm that the HEF file is from the Hailo Model Zoo. I re-evaluated the model with hailomz eval, using the attached JSON and script configurations, and obtained the following result:
Yolov11s.hef: mAP50-95 = 45%
This matches the number published in the Hailo Model Zoo. However, when I evaluate the same model with the DeGirum library, I get 37% mAP, as mentioned in my earlier question. I assume something is missing or different in my JSON configuration. Could you please check the attached files?
Additionally, for custom models compiled with a different number of classes, what exactly needs to be changed to enable proper evaluation after compilation?
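For context, here is my guess at the fields in the model JSON that would need updating for, say, a hypothetical 3-class model (the class count and labels filename are placeholders; please correct me if other fields also need to change):

```json
"POST_PROCESS": [
  {
    "OutputPostprocessType": "DetectionYoloHailo",
    "OutputNumClasses": 3,
    "LabelsPath": "labels_custom_3class.json"
  }
]
```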
Thank you.
JSON: {
  "ConfigVersion": 11,
  "Checksum": "da96ad3b3730500d56c8e13d164d44a78eb6f062516717d4c4195f7995a8c391",
  "DEVICE": [
    {
      "DeviceType": "HAILO8",
      "RuntimeAgent": "HAILORT",
      "SupportedDeviceTypes": "HAILORT/HAILO8L, HAILORT/HAILO8"
    }
  ],
  "PRE_PROCESS": [
    {
      "InputN": 1,
      "InputH": 640,
      "InputW": 640,
      "InputC": 3,
      "InputQuantEn": true
    }
  ],
  "MODEL_PARAMETERS": [
    {
      "ModelPath": "yolov11s-80class.hef"
    }
  ],
  "POST_PROCESS": [
    {
      "OutputPostprocessType": "DetectionYoloHailo",
      "OutputNumClasses": 80,
      "LabelsPath": "labels_yolov11.json"
    }
  ]
}
Script:
import degirum as dg
import degirum_tools
from degirum_tools.detection_eval import ObjectDetectionModelEvaluator
import numpy as np

# Load the detection model
model = dg.load_model(
    model_name="yolov11s-80class",
    inference_host_address="@local",
    zoo_url="/home/Downloads/vutl1-hailo-test/models/yolov11s-80class/yolov11s-80class.json",
    token=''
)

# Optional class ID remapping: model index → COCO category ID
classmap = [
    1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21,
    22, 23, 24, 25, 27, 28, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42,
    43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61,
    62, 63, 64, 65, 67, 70, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 84,
    85, 86, 87, 88, 89, 90
]

# Create evaluator
evaluator = ObjectDetectionModelEvaluator(model, classmap=classmap)

# Evaluation inputs
image_dir = "/home/Downloads/vutl1-hailo-test/datasets/vutlval2017/val2017"
coco_json = "/home/Downloads/vutl1-hailo-test/datasets/vutlval2017/annotations/instances_val2017.json"

# Evaluate and return mAP results (max_images=0 evaluates the full set)
results = evaluator.evaluate(image_dir, coco_json, max_images=0)

# Print COCO-style mAP results
# print("COCO mAP stats:", results[0])
metric_labels = [
    "mAP@[IoU=0.50:0.95]",
    "mAP@0.50",
    "mAP@0.75",
    "mAP_small",
    "mAP_medium",
    "mAP_large",
    "AR@1",
    "AR@10",
    "AR@100",
    "AR_small",
    "AR_medium",
    "AR_large"
]

# Extract and print with metric names
print("COCO mAP/Eval Results:\n")
for label, value in zip(metric_labels, results[0]):
    print(f"{label:<20}: {value:.4f}")

# Compute overall statistics across the 12 COCO metrics
mean_val = np.mean(results[0])
max_val = np.max(results[0])
min_val = np.min(results[0])
print("\nSummary Statistics:")
print(f"Mean: {mean_val:.4f}")
print(f"Max: {max_val:.4f}")
print(f"Min: {min_val:.4f}")