Model not found in model zoo

When I run the example code with a custom model in my local model zoo folder, it gives me an error. I have been trying to solve it for many hours with no luck.

          raise DegirumException(
degirum.exception.DegirumException: Model 'yolov8n--640x640_quant_hailort_hailo8_1'
is not found in model zoo 'home/tennibot/hailo_examples/models'

Main code:

import degirum as dg, degirum_tools

inference_host_address = "@local"
zoo_url = '/home/tennibot/hailo_examples/models'
token=''
device_type='HAILORT/HAILO8'

# set model name, and image source
model_name = "yolov8n--640x640_quant_hailort_hailo8_1"
video_source=0
class_list = ["tennisball"]

# load AI model
model = dg.load_model(
    model_name=model_name,
    inference_host_address=inference_host_address,
    zoo_url=zoo_url,
    token=token,
    device_type=device_type,
)

# create object tracker
tracker = degirum_tools.ObjectTracker(
    class_list=class_list,
    track_thresh=0.35,
    track_buffer=100,
    match_thresh=0.9999,
    trail_depth=20,
    anchor_point=degirum_tools.AnchorPoint.BOTTOM_CENTER,
)

# attach object tracker to model
degirum_tools.attach_analyzers(model, [tracker])

# run AI inference on video stream and display the results
with degirum_tools.Display("AI Camera") as output_display:
    for inference_result in degirum_tools.predict_stream(model, video_source):
        output_display.show(inference_result.image_overlay)

Here's my folder structure:

yolov8n--640x640_quant_hailort_hailo8_1
|--- HailoDetectionYolo.py
|--- Labels.json
|--- yolov8n--640x640_quant_hailort_hailo8_1.hef
|--- yolov8n--640x640_quant_hailort_hailo8_1.json

HailoDetectionYolo.py

import json

# Post-processor class, must have fixed name 'PostProcessor'
class PostProcessor:
    def __init__(self, json_config):
        """
        Initialize the post-processor with configuration settings.

        Parameters:
            json_config (str): JSON string containing post-processing configuration.
        """
        # Parse the JSON configuration
        self._json_config = json.loads(json_config)

        # Extract configuration parameters
        self._num_classes = int(
            self._json_config["POST_PROCESS"][0]["OutputNumClasses"]
        )
        self._label_json_path = self._json_config["POST_PROCESS"][0]["LabelsPath"]
        self._input_height = int(self._json_config["PRE_PROCESS"][0]["InputH"])
        self._input_width = int(self._json_config["PRE_PROCESS"][0]["InputW"])

        # Load label dictionary from JSON file
        with open(self._label_json_path, "r") as json_file:
            self._label_dictionary = json.load(json_file)

        # Extract confidence threshold
        self._output_conf_threshold = float(
            self._json_config["POST_PROCESS"][0].get("OutputConfThreshold", 0.0)
        )

    def forward(self, tensor_list, details_list):
        """
        Process the raw output tensor to produce formatted JSON results.

        Parameters:
            tensor_list (list): List of tensors from the model.
            details_list (list): Additional details (unused in this example).

        Returns:
            str: JSON-formatted string containing detection results.
        """
        # Initialize results list
        new_inference_results = []

        # Extract and reshape the raw output tensor
        output_array = tensor_list[0].reshape(-1)

        # Index to parse the array
        index = 0

        # Iterate over classes and parse results
        for class_id in range(self._num_classes):
            # Number of detections for this class
            num_detections = int(output_array[index])
            index += 1  # Move to the next entry

            # Skip if no detections for this class
            if num_detections == 0:
                continue

            # Process each detection for this class
            for _ in range(num_detections):
                if index + 5 > len(output_array):
                    # Safeguard against unexpected array end
                    break

                # Extract the bounding box (normalized y_min, x_min, y_max, x_max) and score
                score = float(output_array[index + 4])
                y_min, x_min, y_max, x_max = map(float, output_array[index : index + 4])
                index += 5  # Move to the next detection

                # Skip detections below the confidence threshold
                if score < self._output_conf_threshold:
                    continue

                # Scale normalized coordinates to absolute pixel values
                x_min = x_min * self._input_width
                y_min = y_min * self._input_height
                x_max = x_max * self._input_width
                y_max = y_max * self._input_height

                # Format the detection result
                result = {
                    "bbox": [x_min, y_min, x_max, y_max],
                    "score": score,
                    "category_id": class_id,
                    "label": self._label_dictionary.get(
                        str(class_id), f"class_{class_id}"
                    ),
                }
                new_inference_results.append(result)

            # Stop processing if padded zeros are reached
            if index >= len(output_array) or all(v == 0 for v in output_array[index:]):
                break

        # Return results as JSON string
        return json.dumps(new_inference_results)
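To sanity-check this parsing logic without loading the model, the per-class loop can be exercised as a standalone helper. This is a hypothetical sketch that mirrors the body of `forward`, assuming the same Hailo NMS output layout: for each class, a detection count followed by `[y_min, x_min, y_max, x_max, score]` entries with normalized coordinates.

```python
def parse_hailo_nms(output_array, num_classes, conf_threshold=0.0,
                    input_width=640, input_height=640):
    """Parse a flat Hailo NMS-style output array into detection dicts.

    Per class: one entry holding the detection count, then 5 entries
    per detection (normalized y_min, x_min, y_max, x_max, score).
    """
    results = []
    index = 0
    for class_id in range(num_classes):
        num_detections = int(output_array[index])
        index += 1  # move past the count entry
        for _ in range(num_detections):
            if index + 5 > len(output_array):
                break  # safeguard against a truncated array
            y_min, x_min, y_max, x_max = map(float, output_array[index:index + 4])
            score = float(output_array[index + 4])
            index += 5  # move to the next detection
            if score < conf_threshold:
                continue
            results.append({
                # scale normalized coordinates to pixel values
                "bbox": [x_min * input_width, y_min * input_height,
                         x_max * input_width, y_max * input_height],
                "score": score,
                "category_id": class_id,
            })
    return results
```

Feeding it a synthetic array (one class, one detection) makes it easy to confirm the coordinate ordering and scaling before debugging on live inference output.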

Labels.json:

{
    "0": "tennisball"
}

yolov8n--640x640_quant_hailort_hailo8_1.json:

{
    "Checksum": "3314f4fcd9099b9085ee79d010affe0c3975ab2be811d26fca7cf252c92e6239",
    "ConfigVersion": 10,
    "DEVICE": [
        {
            "DeviceType": "HAILO8",
            "RuntimeAgent": "HAILORT",
            "SupportedDeviceTypes": "HAILORT/HAILO8",
        }
    ],
    "PRE_PROCESS": [
    {
      "InputType": "Image",
      "ImageBackend": "opencv",
      "InputPadMethod": "letterbox",
      "InputResizeMethod": "bilinear",
      "InputN": 1,
      "InputH": 640,
      "InputW": 640,
      "InputC": 3,
      "InputQuantEn": true
    }
    ],
    "MODEL_PARAMETERS": [
        {
            "ModelPath": "Yolov8n.hef"
        }
    ],
    "POST_PROCESS": [
        {
            "LabelsPath": "Labels.json",
            "OutputNumClasses": 1,
            "PythonFile": "HailoDetectionYolo.py",
            "OutputPostprocessType": "DetectionYoloV8"
        }
    ]
}

When I run this code to list the models in the zoo directory:

import degirum as dg

zoo_1 = dg.connect(
    inference_host_address='@local',
    zoo_url='/home/tennibot/hailo_examples/models'
)
print(zoo_1.list_models())

I get this output:

['yolov8n_relu6_coco--640x640_quant_hailort_hailo8_3', 'yolov8n_relu6_coco--640x640_quant_hailort_hailo8l_1']

Hi @sahil, thanks for sharing your code. I’ll look into it within an hour and see what recommendations I can make. I’ll try to replicate your structure and identify how you get

raise DegirumException(
degirum.exception.DegirumException: Model 'yolov8n--640x640_quant_hailort_hailo8_1'
is not found in model zoo 'home/tennibot/hailo_examples/models'

even with

zoo_url = '/home/tennibot/hailo_examples/models'

Hi @sahil,

There’s a typo in yolov8n--640x640_quant_hailort_hailo8_1.json.

Delete the trailing comma after "SupportedDeviceTypes": "HAILORT/HAILO8":

    "DEVICE": [
        {
            "DeviceType": "HAILO8",
            "RuntimeAgent": "HAILORT",
            "SupportedDeviceTypes": "HAILORT/HAILO8",
        }
    ],

On my side, the model is not detected when the JSON file contains a trailing comma, since strict JSON parsers reject it.

Could you check whether this fix works?
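As a quick way to catch this kind of typo, each model config can be run through Python's strict `json` parser, which rejects trailing commas and reports the offending line and column. This is a minimal sketch; the helper name is my own.

```python
import json

def check_model_config(path):
    """Validate a model JSON config; print and return the result.

    Strict JSON parsing rejects trailing commas, so a malformed
    config fails loudly instead of being silently skipped.
    """
    try:
        with open(path) as f:
            json.load(f)
        print(f"{path}: OK")
        return True
    except json.JSONDecodeError as err:
        print(f"{path}: invalid JSON - {err}")
        return False
```

Running this over every `.json` in your zoo folder quickly surfaces which model configs the zoo loader would skip.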

Additionally, I assume you have multiple models in your zoo_url = '/home/tennibot/hailo_examples/models' path corresponding to

['yolov8n_relu6_coco--640x640_quant_hailort_hailo8_3', 'yolov8n_relu6_coco--640x640_quant_hailort_hailo8l_1']

If that's not the case, then perhaps the issue lies elsewhere.


Oh my god, thank you so much @raytao-degirum, you saved me a lot of hours.


I was trying to figure out your previous thread and wasn’t able to; I guess that’s because the issue was just a syntax error. Glad your issue is solved!


Added a reply to your original thread so that future readers are also able to resolve this issue if encountered.
