Inconsistent OBB Model Behavior - One Works, Another Fails with Tensor Mismatch

Hi! I’m new to Hailo8 and the model compilation process, running inference via Python on Google Colab. Happy to provide any other debugging information that might help resolve this issue.

Environment

  • Platform: DeGirum Cloud (@cloud)

  • Model Zoo: sivagnanam_maheshwaran/Hailo8_Deployyment

  • Hardware: Hailo8

Issue Description

I’m experiencing inconsistent behavior with two similar OBB (Oriented Bounding Box) models. One model works perfectly while another fails with a tensor mismatch error, despite both being compiled for the same target hardware and having similar naming conventions.

Models Tested

:white_check_mark: Working Model

  • Name: ShipDatset-obb--640x640_quant_hailort_hailo8_1

  • Behavior: Inference runs successfully, outputs proper oriented bounding boxes

:cross_mark: Failing Model

  • Name: HRSID-obb--640x640_quant_hailort_hailo8_2

  • Error: Mismatch in the number of box/prob tensors!

Error Details

DegirumException: Model 'sivagnanam_maheshwaran/Hailo8_Deployyment/HRSID-obb--640x640_quant_hailort_hailo8_2' inference failed: [ERROR]Incorrect value
Mismatch in the number of box/prob tensors!
dg_postprocess_detection.cpp: 1608 [DG::DetectionPostprocessYoloV8::findPostprocessorInputsOrder]
When running model 'sivagnanam_maheshwaran/Hailo8_Deployyment/HRSID-obb--640x640_quant_hailort_hailo8_2'

Reproduction Code

import degirum as dg
import cv2

# This WORKS
working_model = dg.load_model(
    model_name="ShipDatset-obb--640x640_quant_hailort_hailo8_1",
    inference_host_address="@cloud",
    zoo_url="sivagnanam_maheshwaran/Hailo8_Deployyment",
    token="[TOKEN]"
)

# This FAILS
failing_model = dg.load_model(
    model_name="HRSID-obb--640x640_quant_hailort_hailo8_2",
    inference_host_address="@cloud",
    zoo_url="sivagnanam_maheshwaran/Hailo8_Deployyment",
    token="[TOKEN]"
)

img = cv2.imread("test_image.jpg")

# Works fine
res_working = working_model(img)  # ✅ Success

# Throws tensor mismatch error
res_failing = failing_model(img)  # ❌ Fails

Both OBB models were compiled in the DeGirum AI Hub cloud compiler using the same settings, so I am confused as to why one of them works while the other does not.

Hi @Mahesh

Welcome to the DeGirum community. Thanks for bringing this to our notice and providing detailed information. We will take a look and update you.

Hi @Mahesh

We took a look and discovered that the second model represents a special case that was overlooked. The model will work if you update the model JSON to the following (please note the PostProcessorInputs field):

{
    "ConfigVersion": 11,
    "Checksum": "caf29dd0e107c67c2c7d1e237712897f13f26ce998c196f208bbb0294964ea0e",
    "DEVICE": [
        {
            "DeviceType": "HAILO8",
            "RuntimeAgent": "HAILORT",
            "SupportedDeviceTypes": "HAILORT/HAILO8",
            "EagerBatchSize": 1
        }
    ],
    "PRE_PROCESS": [
        {
            "InputN": 1,
            "InputH": 640,
            "InputW": 640,
            "InputC": 3,
            "InputQuantEn": true
        }
    ],
    "MODEL_PARAMETERS": [
        {
            "ModelPath": "HRSID-obb--640x640_quant_hailort_hailo8_2.hef"
        }
    ],
    "POST_PROCESS": [
        {
            "PostProcessorInputs": [
                0,
                1,
                2,
                3,
                4,
                5,
                6,
                7,
                8
            ],
            "OutputPostprocessType": "DetectionYoloV8OBB",
            "OutputNumClasses": 1,
            "LabelsPath": "labels_HRSID-obb.json",
            "SigmoidOnCLS": true
        }
    ]
}

Please use this approach while we integrate appropriate fixes to eliminate the need for this workaround in the future; these changes will be available in the next release of PySDK. Let us know if you have any other questions.

Thanks for the quick responses @nikita and @shashi. May I know how I could set PostProcessorInputs, as the model documentation does not seem to provide a field for it?

Hi @Mahesh

Did you download the model for local use, or are you using it directly from the cloud zoo?

If using it locally, the model folder will have a JSON file. You just need to copy and paste the above JSON into it.

If using the cloud zoo, you can go to the AI Hub, click on the model JSON tab, and modify the JSON in the browser itself.
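If you prefer to script the local edit rather than paste the JSON by hand, something like the sketch below should work. The helper name and the minimal stand-in config here are illustrative only (not part of PySDK); in practice you would `json.load()` the model's own `.json` file, patch it, and `json.dump()` it back:

```python
import json

def add_postprocessor_inputs(config: dict, num_tensors: int = 9) -> dict:
    """Insert the PostProcessorInputs field into the first POST_PROCESS
    entry of a PySDK model JSON, listing tensor indices 0..num_tensors-1."""
    config["POST_PROCESS"][0]["PostProcessorInputs"] = list(range(num_tensors))
    return config

# Stand-in for the model JSON loaded from disk (fields trimmed for brevity).
config = {"POST_PROCESS": [{"OutputPostprocessType": "DetectionYoloV8OBB"}]}

patched = add_postprocessor_inputs(config)
print(json.dumps(patched["POST_PROCESS"][0]["PostProcessorInputs"]))
# prints [0, 1, 2, 3, 4, 5, 6, 7, 8]
```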

Thanks @shashi and @nikita, the issue is resolved and I can run inference now!

Hi @Mahesh

Thanks for confirming. Glad it is working and thank you for helping us catch this corner case :slight_smile: . Would be great if you can mark the right answer as a solution so that others can benefit.