degirum.exceptions.DegirumException: Failed to perform model 'yolov8n_seg' inference: float division by zero

Hello,

I am trying to evaluate the accuracy of the yolov8n-seg model, but I have been stuck for several days on the same error.
In the same environment I was able to successfully evaluate the yolov8n detection model, but the segmentation model always fails.

Environment:

  • Device: Raspberry Pi 5 + Hailo-8L

  • pySDK version: 0.18.3

  • hailort / hailo PCIe driver version: 4.22.0

Error Message:

degirum.exceptions.DegirumException: float division by zero

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/tappas/npu/accuracy_new/degirum/seg/eval_example.py", line 28, in <module>
    results = evaluator.evaluate(image_dir, coco_json, max_images=0)
  File "/app/tappas/npu/hailo_venv2/lib/python3.11/site-packages/degirum_tools/detection_eval.py", line 110, in evaluate
    for image_id, predictions in zip(
  File "/app/tappas/npu/hailo_venv2/lib/python3.11/site-packages/degirum/model.py", line 293, in predict_batch
    for res in self._predict_impl(source):
  File "/app/tappas/npu/hailo_venv2/lib/python3.11/site-packages/degirum/model.py", line 1233, in _predict_impl
    raise DegirumException(msg) from saved_exception

degirum.exceptions.DegirumException: Failed to perform model 'yolov8n_seg' inference: float division by zero

This is the yolov8n_seg.json file.

{
    "ConfigVersion": 10,
    "Checksum": "926bf34651d94e850361ad272b141a61af0097e64e46f3a7519e7dff84c8f323",
    "DEVICE": [
        {
            "DeviceType": "HAILO8L",
            "RuntimeAgent": "HAILORT",
            "SupportedDeviceTypes": "HAILORT/HAILO8L"
        }
    ],
    "PRE_PROCESS": [
        {
            "InputType": "Image",
            "InputN": 1,
            "InputH": 640,
            "InputW": 640,
            "InputC": 3,
            "InputPadMethod": "letterbox",
            "InputResizeMethod": "bilinear",
            "InputQuantEn": true
        }
    ],
    "MODEL_PARAMETERS": [
        {
            "ModelPath": "yolov8n_seg.hef"
        }
    ],
    "POST_PROCESS": [
        {
            "OutputPostprocessType": "SegmentationYoloV8",
            "OutputNumClasses": 80,
            "LabelsPath": "labels_coco.json",
            "OutputConfThreshold": 0.3,
            "OutputPostprocessArguments": {
                "outputs_names": [
                    "yolov8n_seg/conv73", "yolov8n_seg/conv74", "yolov8n_seg/conv75",
                    "yolov8n_seg/conv60", "yolov8n_seg/conv61", "yolov8n_seg/conv62",
                    "yolov8n_seg/conv44", "yolov8n_seg/conv45", "yolov8n_seg/conv46",
                    "yolov8n_seg/conv48"
                ],
                "detections_outputs": ["yolov8n_seg/conv73", "yolov8n_seg/conv60", "yolov8n_seg/conv44"],
                "prototypes_output": "yolov8n_seg/conv48",
                "masks_outputs": ["yolov8n_seg/conv75", "yolov8n_seg/conv62", "yolov8n_seg/conv46"]
            }
        }
    ]
}
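Probably not the cause here, but one quick sanity check on a config like this is to verify that every name in detections_outputs, masks_outputs, and prototypes_output is also declared in outputs_names. A minimal sketch (the helper name is mine, not part of pySDK):

```python
def check_output_names(post_process: dict) -> list[str]:
    """Return output names referenced by the postprocessor but missing
    from outputs_names; an empty list means the config is consistent."""
    args = post_process["OutputPostprocessArguments"]
    declared = set(args["outputs_names"])
    referenced = set(args["detections_outputs"])
    referenced |= set(args["masks_outputs"])
    referenced.add(args["prototypes_output"])
    return sorted(referenced - declared)

# Illustration with a trimmed-down version of the config above:
post = {
    "OutputPostprocessArguments": {
        "outputs_names": ["m/conv73", "m/conv75", "m/conv48"],
        "detections_outputs": ["m/conv73"],
        "masks_outputs": ["m/conv75"],
        "prototypes_output": "m/conv48",
    }
}
print(check_output_names(post))  # → [] when all names are declared
```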

This is my evaluation script.

import degirum as dg
import degirum_tools
from degirum_tools.detection_eval import ObjectDetectionModelEvaluator

# Load the segmentation model
model = dg.load_model(
    model_name="yolov8n_seg",
    inference_host_address="@local",
    zoo_url="/app/tappas/npu/accuracy_new/degirum/seg",
    token=''
)

# Optional class ID remapping: model → COCO
classmap = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25,
            27, 28, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51,
            52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 67, 70, 72, 73, 74, 75, 76, 77,
            78, 79, 80, 81, 82, 84, 85, 86, 87, 88, 89, 90]

# Create evaluator
evaluator = ObjectDetectionModelEvaluator(model, classmap=classmap)

# Evaluation inputs
image_dir = "/app/tappas/npu/accuracy/coco/images/val2017"
coco_json = "/app/tappas/npu/accuracy/coco/annotations/instances_val2017.json"


# Evaluate and return mAP results
results = evaluator.evaluate(image_dir, coco_json, max_images=0)

# Print COCO-style mAP results
print("COCO mAP stats:", results[0])
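As an aside, the hard-coded classmap above can be derived from the annotations file itself, since it is just the COCO category ids in ascending order. A sketch (coco_classmap is my name for the helper):

```python
import json

def coco_classmap(categories: list[dict]) -> list[int]:
    """Map contiguous model class indices (0..N-1) to COCO category ids,
    ordered by ascending id, matching the hard-coded list above."""
    return sorted(c["id"] for c in categories)

# With the real annotations file it would be:
# with open(coco_json) as f:
#     classmap = coco_classmap(json.load(f)["categories"])

# Small illustration with three made-up categories:
cats = [{"id": 3, "name": "car"}, {"id": 1, "name": "person"}, {"id": 2, "name": "bicycle"}]
print(coco_classmap(cats))  # → [1, 2, 3]
```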
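Since single-image inference works but evaluation fails, one way to narrow this down is to run the same validation images through predict one at a time and record which input raises the exception. A generic sketch (find_failing_inputs is my name; I'm assuming model.predict accepts an image file path, as in the pySDK examples):

```python
import glob
import os

def find_failing_inputs(predict_fn, paths):
    """Run predict_fn on each path individually and collect
    (path, repr(exception)) pairs for the inputs that raise."""
    failures = []
    for path in paths:
        try:
            predict_fn(path)
        except Exception as exc:  # DegirumException wraps the real error
            failures.append((path, repr(exc)))
    return failures

# With the model from the script above, usage would look like:
# paths = sorted(glob.glob(os.path.join(image_dir, "*.jpg")))
# for path, err in find_failing_inputs(model.predict, paths):
#     print(path, err)
```

If only a handful of images fail, inspecting them (e.g. for images with no detections above the confidence threshold) may point at where the division by zero happens.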

Hi @manjookim

Welcome to the DeGirum community. Is the segmentation model failing only during evaluation, or does it also fail on individual images? Also, is it the standard yolov8n segmentation model or your own custom model?

Hi @shashi

Thank you for your reply.

To clarify:

  • Running inference on individual images with the yolov8n-seg model works fine, and I am able to print the results without any issues.

  • The problem occurs only when I try to run the evaluation.

Also, this is not a custom model — I am using the official yolov8n-seg model from Ultralytics.

Hi @manjookim

Thanks for the details. We will see if we can replicate it on our side.

Hi @manjookim

We ran a benchmark on the yolov8n-seg model and could not replicate the issue. Is it possible for you to share the HEF file?