So I was trying to run a custom YOLO on Hailo-8 using DeGirum

I was trying to run my custom-trained YOLO model compiled to .hef, but when I run it, it doesn't work and gives me the following error:

raise DegirumException(msg) from saved_exception
degirum.exceptions.DegirumException: Failed to perform model 'yolov8n' inference: Model 'yolov8n' inference failed: [ERROR]Execution failed
Condition 'input_tensor->shape()[ 1 ] == 4 + m_OutputNumClasses' is not met: input_tensor->shape()[ 1 ] is 1, 4 + m_OutputNumClasses is 4
dg_postprocess_detection.cpp: 1695 [DG::DetectionPostprocessYoloV8::inputDataProcessBaseline]

When running model ‘yolov8n’

Hi @sahil,

The issue you’re encountering typically arises when the YOLOv8 model was compiled with Hailo’s built-in NMS enabled. When that happens, the model’s output format changes, and it won’t be compatible with DeGirum’s default DetectionYoloV8 postprocessor, which expects raw output.

Why this error happens:

The postprocessor is expecting an output tensor shaped like [N, 4 + num_classes], but the compiled model is returning [N, 1] instead, likely because NMS already compressed the detections. This shape mismatch causes the inference to fail.
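The shape condition from the error can be illustrated with a small sketch. The tensor sizes below are hypothetical (8400 anchors and 100 NMS detections are just illustrative numbers, not taken from your model); the point is the second-dimension check that the C++ postprocessor performs:

```python
import numpy as np

num_classes = 1  # e.g. a single "tennisball" class (hypothetical)

# What DeGirum's DetectionYoloV8 postprocessor expects: [N, 4 + num_classes]
raw_output = np.zeros((8400, 4 + num_classes))

# What a model compiled with Hailo NMS may emit instead (shape is illustrative)
nms_output = np.zeros((100, 1))

print(raw_output.shape[1] == 4 + num_classes)  # True  -> raw output is compatible
print(nms_output.shape[1] == 4 + num_classes)  # False -> triggers the shape-condition error
```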

What usually fixes it:

If Hailo NMS is indeed enabled, you’ll need to:

  1. Replace the default postprocessor with a custom Python-based postprocessor that can interpret the output format from the compiled model.
  2. Update the .json config to reference that custom script.

We’ve helped users resolve similar issues by using a script like HailoDetectionYolo.py, which includes logic tailored to Hailo NMS output. You can find that thread here: Failure to deploy first custom model: DeGirum exceptions in postprocessing, tensor shape condition not met - #2 by khatami.mehrdad

To confirm the root cause and provide the exact solution, we’ll need to see your compiled .hef and its associated .json config file. These files will tell us whether NMS is applied and what postprocessing is appropriate.

If you can share those, our team will take a look and guide you from there.

OK, I have done that, but now a weird thing keeps happening: it just keeps saying Model ‘Yolov8n’ is not found in model zoo ‘/home/pi/hailo_examples/models’, and I have been stuck on it for hours.

Here's my code:

import degirum as dg, degirum_tools

inference_host_address = "@local"
zoo_url = "/home/tennibot/hailo_examples/models"
token = ""
device_type = "HAILORT/HAILO8"

# set model name, image source, and class list
model_name = "Yolov8n"
video_source = 0
class_list = ["tennisball"]

# load AI model
model = dg.load_model(
    model_name=model_name,
    inference_host_address=inference_host_address,
    zoo_url=zoo_url,
    token=token,
    device_type=device_type,
)

# create object tracker
tracker = degirum_tools.ObjectTracker(
    class_list=class_list,
    track_thresh=0.35,
    track_buffer=100,
    match_thresh=0.9999,
    trail_depth=20,
    anchor_point=degirum_tools.AnchorPoint.BOTTOM_CENTER,
)

# attach object tracker to model
degirum_tools.attach_analyzers(model, [tracker])

# run AI inference on video stream and display the results
with degirum_tools.Display("AI Camera") as output_display:
    for inference_result in degirum_tools.predict_stream(model, video_source):
        output_display.show(inference_result.image_overlay)

And here's the config file:

{
    "Checksum": "3314f4fcd9099b9085ee79d010affe0c3975ab2be811d26fca7cf252c92e6239",
    "ConfigVersion": 10,
    "DEVICE": [
        {
            "DeviceType": "HAILO8",
            "RuntimeAgent": "HAILORT",
            "SupportedDeviceTypes": "HAILORT/HAILO8",
        }
    ],
    "PRE_PROCESS": [
    {
      "InputType": "Image",
      "ImageBackend": "opencv",
      "InputPadMethod": "letterbox",
      "InputResizeMethod": "bilinear",
      "InputN": 1,
      "InputH": 640,
      "InputW": 640,
      "InputC": 3,
      "InputQuantEn": true
    }
    ],
    "MODEL_PARAMETERS": [
        {
            "ModelPath": "Yolov8n.hef"
        }
    ],
    "POST_PROCESS": [
        {
            "LabelsPath": "Labels.json",
            "OutputNumClasses": 1,
            "PythonFile": "HailoDetectionYolo.py",
            "OutputPostprocessType": "DetectionYoloV8"
        }
    ]
}

For future readers, per this thread, it turns out the issue is the trailing comma in the .json file above: the comma after the "SupportedDeviceTypes" entry in the DEVICE section makes the JSON invalid, so the model config cannot be parsed and the model is reported as not found in the zoo.
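Strict JSON parsers reject trailing commas, so a quick sanity check with Python's standard json module surfaces this class of error immediately, before the model zoo ever gets involved. A minimal sketch (the string below reproduces the problematic fragment of the config):

```python
import json

# A minimal reproduction of the problem: trailing comma after the last key
broken = '''
{
    "DEVICE": [
        {
            "DeviceType": "HAILO8",
            "RuntimeAgent": "HAILORT",
            "SupportedDeviceTypes": "HAILORT/HAILO8",
        }
    ]
}
'''

try:
    json.loads(broken)
except json.JSONDecodeError as e:
    print(f"Config is invalid JSON: {e}")

# After removing the trailing comma, the same config parses fine
fixed = broken.replace('"HAILORT/HAILO8",', '"HAILORT/HAILO8"')
print(json.loads(fixed)["DEVICE"][0]["DeviceType"])  # prints HAILO8
```

Running `json.loads` (or `json.load` on the actual file) on any hand-edited model config is a cheap way to rule out syntax problems whenever a model mysteriously fails to load.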