Model not found in zoo:
Using zoo path: /home/gama/ai_projects/models/zoo
zoo/
    zoo.json
    yolov5xu/
        labels.txt
        model.json
        yolov5xu.hef
Models in zoo: {}
(env) root@pi5:/home/gama/ai_projects#
Hi @jinomathew42 ,
Try renaming your model.json file to yolov5xu.json. If your model.json is formatted correctly for PySDK, there’s a good chance renaming the .json to your model’s name will get it working. PySDK expects the .json file to share the same name as the model.
Example:
└───yolov5xu
        labels.txt
        yolov5xu.hef
        yolov5xu.json
If you still get an error about the model not being found in the model zoo even after renaming your .json file, could you paste the contents of your model.json here? That would really help us figure out why your model is not visible.
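The naming rule above can be checked with a few lines of plain Python. This is a hypothetical helper, not part of PySDK: it walks a local zoo folder and flags any model directory whose .json file does not match the directory name.

```python
from pathlib import Path

def check_zoo(zoo_path: str) -> list[str]:
    """Report model folders whose .json name does not match the
    folder name (the naming convention PySDK expects)."""
    problems = []
    for model_dir in sorted(Path(zoo_path).iterdir()):
        if not model_dir.is_dir():
            continue
        expected = model_dir / f"{model_dir.name}.json"
        if not expected.exists():
            found = sorted(p.name for p in model_dir.glob("*.json"))
            problems.append(
                f"{model_dir.name}: expected {expected.name}, found {found}"
            )
    return problems
```

Running this on the zoo shown above would flag `yolov5xu: expected yolov5xu.json, found ['model.json']`.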
Still the same issue:
(env) root@pi5:/home/gama/ai_projects# python test.py
Available models in zoo:
Traceback (most recent call last):
  File "/home/gama/ai_projects/test.py", line 17, in <module>
    model = zoo.load_model(
            ^^^^^^^^^^^^^^^
  File "/home/gama/ai_projects/env/lib/python3.11/site-packages/degirum/log.py", line 92, in sync_wrap
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "/home/gama/ai_projects/env/lib/python3.11/site-packages/degirum/zoo_manager.py", line 324, in load_model
    model = self._zoo.load_model(model_name)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/gama/ai_projects/env/lib/python3.11/site-packages/degirum/log.py", line 92, in sync_wrap
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "/home/gama/ai_projects/env/lib/python3.11/site-packages/degirum/_zoo_accessor.py", line 312, in load_model
    model_params = self.model_info(model)
                   ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/gama/ai_projects/env/lib/python3.11/site-packages/degirum/log.py", line 92, in sync_wrap
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "/home/gama/ai_projects/env/lib/python3.11/site-packages/degirum/_zoo_accessor.py", line 130, in model_info
    raise DegirumException(
degirum.exceptions.DegirumException: Model 'yolov5xu' is not found in model zoo '/home/gama/ai_projects/models/zoo'
(env) root@pi5:/home/gama/ai_projects#
yolov5xu.json
____
{
    "Name": "yolov5xu",
    "Type": "HailoCompiledModel",
    "Description": "YOLOv5xu model for person detection",
    "DEVICE": [
        {
            "DeviceType": "HAILO8",
            "RuntimeAgent": "HAILORT",
            "SupportedDeviceTypes": "HAILORT/HAILO8"
        }
    ],
    "MODEL_PARAMETERS": [
        {
            "ModelPath": "yolov5xu.hef"
        }
    ],
    "POST_PROCESS": [
        {
            "LabelsPath": "labels.txt",
            "OutputNumClasses": 1,
            "PythonFile": "HailoDetectionYolo.py",
            "OutputPostprocessType": "DetectionYoloV5"
        }
    ],
    "PRE_PROCESS": [
        {
            "InputType": "Image",
            "ImageBackend": "opencv",
            "InputPadMethod": "letterbox",
            "InputResizeMethod": "bilinear",
            "InputN": 1,
            "InputH": 640,
            "InputW": 640,
            "InputC": 3,
            "InputQuantEn": true
        }
    ]
}
test.py
_____
import degirum as dg

inference_host_address = "@local"
zoo_url = "/home/gama/ai_projects/models/zoo"
device_type = "HAILORT/HAILO8"
model_name = "yolov5xu"

zoo = dg.connect(
    inference_host_address=inference_host_address,
    zoo_url=zoo_url,
)
print("Available models in zoo:", zoo.list_models())
model = zoo.load_model(
    model_name=model_name,
    device_type=device_type,
)
print(f"\nModel '{model_name}' loaded successfully!")
folder structure
_____
├── models/
│   └── zoo/
│       └── yolov5xu/
│           ├── model.json   <— updated with the correct JSON format
│           ├── yolov5xu.hef
│           └── labels.txt
└── test.py
Now my folder structure is:
models/zoo/yolov5xu/
labels.txt
yolov5xu.hef
yolov5xu.json
Hi @jinomathew42 ,
Try adding a ConfigVersion and a Checksum before DEVICE:
{
"ConfigVersion": 6,
"Checksum": "55dd0f0f98845970bb57612abd25af9b1498281795283326a0a526a469820957",
"DEVICE": [...
The Checksum can be any value; the ConfigVersion should be 6.
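If you prefer not to edit the file by hand, a small stdlib-only sketch can inject the two fields at the top of an existing model JSON. The helper name is made up, and the placeholder checksum follows the "any value" note above.

```python
import json
from pathlib import Path

def add_required_fields(json_path: str, checksum: str = "0" * 64) -> dict:
    """Insert ConfigVersion/Checksum at the top of a model JSON if missing.
    The checksum value is a placeholder, per the advice above."""
    path = Path(json_path)
    cfg = json.loads(path.read_text())
    # Rebuild the dict so the two required keys come first, as in the example.
    updated = {
        "ConfigVersion": cfg.get("ConfigVersion", 6),
        "Checksum": cfg.get("Checksum", checksum),
        **{k: v for k, v in cfg.items() if k not in ("ConfigVersion", "Checksum")},
    }
    path.write_text(json.dumps(updated, indent=2))
    return updated
```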
I tried:
(env) root@pi5:/home/gama/ai_projects# python test.py
Available models in zoo:
(env) root@pi5:/home/gama/ai_projects#
updated yolov5xu.json
{
    "ConfigVersion": 6,
    "Checksum": "55dd0f0f98845970bb57612abd25af9b1498281795283326a0a526a469820957",
    "Name": "yolov5xu",
    "Type": "HailoCompiledModel",
    "Description": "YOLOv5xu model for person detection",
    "DEVICE": [
        {
            "DeviceType": "HAILO8",
            "RuntimeAgent": "HAILORT",
            "SupportedDeviceTypes": "HAILORT/HAILO8"
        }
    ],
    "MODEL_PARAMETERS": [
        {
            "ModelPath": "yolov5xu.hef"
        }
    ],
    "POST_PROCESS": [
        {
            "LabelsPath": "labels.txt",
            "OutputNumClasses": 1,
            "PythonFile": "HailoDetectionYolo.py",
            "OutputPostprocessType": "DetectionYoloV5"
        }
    ],
    "PRE_PROCESS": [
        {
            "InputType": "Image",
            "ImageBackend": "opencv",
            "InputPadMethod": "letterbox",
            "InputResizeMethod": "bilinear",
            "InputN": 1,
            "InputH": 640,
            "InputW": 640,
            "InputC": 3,
            "InputQuantEn": true
        }
    ]
}
The model is still not loading.
Hi @jinomathew42 ,
Let’s do two troubleshooting attempts:
First, run degirum sys-info in your degirum venv. You should see either HAILORT/HAILO8 or HAILORT/HAILO8L. If you see HAILORT/HAILO8L, then change the DEVICE field in the .json file to:
"DEVICE": [
{
"DeviceType": "HAILO8L",
"RuntimeAgent": "HAILORT",
"SupportedDeviceTypes": "HAILORT/HAILO8L, HAILORT/HAILO8"
}
],
Second, if that doesn’t work, check the existence of the model with degirum.load_model(). Run this in the same directory as test.py:
import degirum

model = degirum.load_model(
    model_name="yolov5xu",
    inference_host_address="@local",
    zoo_url="/home/gama/ai_projects/models/zoo",
)
print(model)
If the .json file is formatted correctly, you will see one of these:
Successful model load:
<degirum.model._ClientModel object at 0x000002127FB83EE0>
ClientModel object is good, and the issue is probably somewhere earlier in your code.
Model exists but doesn’t match any device:
degirum.exceptions.DegirumException: Model 'yolov5xu' does not have any supported runtime/device combinations that will work on this system.
This means you need to correct your DEVICE field.
If the .json file is not formatted correctly, you should see the same old error:
degirum.exceptions.DegirumException: Model 'yolov5xu' is not found in model zoo '.'
In this case, you need to look very carefully for any syntax errors in the model .json file.
To quickly verify the model .json syntax, log in to the AI Hub, go to any public model, click the Model JSON button, and paste your .json into it. The model JSON previewer can check your .json for syntax errors.
Thank you for the response. When I run inference, this is what I got:
(env) root@pi5:/home/gama/ai_projects# python test.py
[INFO] Model loaded: yolov5xu on HAILORT/HAILO8L
[INFO] Input dtype: uint8, shape: (640, 640, 3)
[ERROR] Inference failed: Failed to perform model 'yolov5xu' inference: Model 'yolov5xu' inference failed: [ERROR]Incorrect value of parameter
Hailo Async Runtime: Input DG Tensor type DG_FLT does not match Hailo input type DG_UINT8
hailo_plugin_impl.cpp: 203 [{anonymous}::to_hailo_tensor]
When running model 'yolov5xu'
Can you share the contents of your test.py file? We would like to see how you are running inference after loading the model.
test.py
import degirum as dg
import cv2

zoo = dg.connect(
    inference_host_address="@local",
    zoo_url="/home/gama/ai_projects/models/zoo"
)
model = zoo.load_model("yolov5xu", device_type="HAILORT/HAILO8L")
print("[INFO] Model loaded: yolov5xu on HAILORT/HAILO8L")

frame = cv2.imread("p.jpg")
if frame is None:
    raise RuntimeError("Could not load image 'p.jpg'")

frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
frame_resized = cv2.resize(frame_rgb, (640, 640))
input_frame = frame_resized.astype("float32") / 255.0

results = model.predict(input_frame)
results = model.predict(input_frame, preprocess=False)  # ← key fix

for det in results.detections:
    print(f"Label: {det.label}, Confidence: {det.confidence:.2f}, BBox: {det.bbox}")
yolov5xu.json
________________________
{
    "ConfigVersion": 6,
    "Checksum": "dummychecksum1234567890",
    "Name": "yolov5xu",
    "Type": "HailoCompiledModel",
    "Description": "YOLOv5xu model for person detection",
    "DEVICE": [
        {
            "DeviceType": "HAILO8L",
            "RuntimeAgent": "HAILORT",
            "SupportedDeviceTypes": "HAILORT/HAILO8L"
        }
    ],
    "MODEL_PARAMETERS": [
        {
            "ModelPath": "yolov5xu.hef"
        }
    ],
    "POST_PROCESS": [
        {
            "LabelsPath": "labels.txt",
            "OutputNumClasses": 1,
            "PythonFile": "HailoDetectionYolo.py",
            "OutputPostprocessType": "DetectionYoloV5"
        }
    ],
    "PRE_PROCESS": [
        {
            "InputType": "Image",
            "InputH": 640,
            "InputW": 640,
            "InputC": 3,
            "ColorSpace": "RGB",
            "ImageBackend": "opencv",
            "InputLayout": "HWC",
            "InputDtype": "uint8"
        }
    ]
}
You can pass the image file path directly to the model predict function, or you can pass frame from the frame = cv2.imread("p.jpg") line. There is no need to do color conversion, resizing, or float conversion. Please note that Hailo8/Hailo8L accelerators work with quantized models.
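The DG_FLT/DG_UINT8 mismatch reported earlier can be reproduced in plain NumPy without touching DeGirum at all: dividing by 255.0 promotes the uint8 frame to float32, which is exactly the tensor type the runtime rejected. A minimal illustration of the dtype behavior:

```python
import numpy as np

# Stand-in for a cv2.imread result: an HxWxC uint8 image.
frame = np.zeros((640, 640, 3), dtype=np.uint8)

# What the script above did: scaling to [0, 1] promotes the array to float32.
bad_input = frame.astype("float32") / 255.0
# This float tensor is what triggered
# "Input DG Tensor type DG_FLT does not match Hailo input type DG_UINT8".

# What the quantized Hailo model expects: the raw uint8 frame, unmodified.
good_input = frame
```

This is why passing the file path or the raw imread frame works: PySDK performs the preprocessing described in the model JSON and delivers uint8 data to the accelerator.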
Also, please make the following changes to your model JSON file:
Finally, please let us know what you used as a reference to generate this JSON. We will try to improve our documentation so that users can easily create custom JSONs without running into so many issues.
hi @shashi
I’ve made the changes you suggested, but I have an error in my postprocessor section:
raise DegirumException(msg) from saved_exception
degirum.exceptions.DegirumException: Failed to perform model ‘yolov5xu’ inference: Postprocessor type ‘GenericDetection’ is not known
I am using degirum version 0.18.2.
Can you tell us how you got the hef file? Is it from hailo model zoo? Or did you compile it by yourself? If so, do you know if it has NMS attached?
From the Hailo Model Zoo.
Can you share the link? I could not find the model with the name you are using: yolov5xu.
This will allow us to make the proper json and even test it before we ask you to do more experiments.
hi @shashi
Ignore the name difference; yolov5s.hef is what I am using.
We took the yolov5s.hef file from the Hailo Model Zoo, made a model JSON and a labels file, and integrated it into our AI Hub: YOLOv5s.
Below is a working code snippet that tests this model:
from degirum_tools import ModelSpec, remote_assets, Display

# ModelSpec configuration
model_spec = ModelSpec(
    model_name="yolov5s_coco--640x640_quant_hailort_multidevice_1",
    zoo_url="degirum/public",
    inference_host_address="@local",
    model_properties={"device_type": ["HAILORT/HAILO8L", "HAILORT/HAILO8"]},  # specify multiple device types
    # token can be added if needed
)

# Choose image source from remote assets
image_source = remote_assets.three_persons

# Load AI model
model = model_spec.load_model()

# Perform AI model inference on the given image source
print(f"Running inference using '{model_spec.model_name}' on image source '{image_source}'")
inference_result = model(image_source)

# Print inference results
# print(inference_result)
# print("Press 'x' or 'q' to stop.")

# Show results of inference
with Display("AI Camera") as output_display:
    output_display.show_image(inference_result.image_overlay)
Our suggestion is to first make sure that the above code works for you. You can then download the model from our AI Hub onto your disk and try the same code snippet by changing zoo_url to the folder path.
After this, you can mimic the JSON we used to make your settings work. The model JSON is as follows:
{
"ConfigVersion": 11,
"Checksum": "09160dddbb2d9be374e8384e8091919326a08b2277736310a92d63e71303ab63",
"DEVICE": [
{
"DeviceType": "HAILO8L",
"RuntimeAgent": "HAILORT",
"SupportedDeviceTypes": "HAILORT/HAILO8L, HAILORT/HAILO8",
"EagerBatchSize": 1
}
],
"PRE_PROCESS": [
{
"InputN": 1,
"InputH": 640,
"InputW": 640,
"InputC": 3,
"InputQuantEn": true
}
],
"MODEL_PARAMETERS": [
{
"ModelPath": "yolov5s_coco--640x640_quant_hailort_multidevice_1.hef"
}
],
"POST_PROCESS": [
{
"OutputPostprocessType": "DetectionYoloHailo",
"OutputNumClasses": 80,
"LabelsPath": "labels_yolov5s_coco.json"
}
]
}
The labels JSON is as below for your reference:
{
"0": "person",
"1": "bicycle",
"2": "car",
"3": "motorcycle",
"4": "airplane",
"5": "bus",
"6": "train",
"7": "truck",
"8": "boat",
"9": "traffic light",
"10": "fire hydrant",
"11": "stop sign",
"12": "parking meter",
"13": "bench",
"14": "bird",
"15": "cat",
"16": "dog",
"17": "horse",
"18": "sheep",
"19": "cow",
"20": "elephant",
"21": "bear",
"22": "zebra",
"23": "giraffe",
"24": "backpack",
"25": "umbrella",
"26": "handbag",
"27": "tie",
"28": "suitcase",
"29": "frisbee",
"30": "skis",
"31": "snowboard",
"32": "sports ball",
"33": "kite",
"34": "baseball bat",
"35": "baseball glove",
"36": "skateboard",
"37": "surfboard",
"38": "tennis racket",
"39": "bottle",
"40": "wine glass",
"41": "cup",
"42": "fork",
"43": "knife",
"44": "spoon",
"45": "bowl",
"46": "banana",
"47": "apple",
"48": "sandwich",
"49": "orange",
"50": "broccoli",
"51": "carrot",
"52": "hot dog",
"53": "pizza",
"54": "donut",
"55": "cake",
"56": "chair",
"57": "couch",
"58": "potted plant",
"59": "bed",
"60": "dining table",
"61": "toilet",
"62": "tv",
"63": "laptop",
"64": "mouse",
"65": "remote",
"66": "keyboard",
"67": "cell phone",
"68": "microwave",
"69": "oven",
"70": "toaster",
"71": "sink",
"72": "refrigerator",
"73": "book",
"74": "clock",
"75": "vase",
"76": "scissors",
"77": "teddy bear",
"78": "hair drier",
"79": "toothbrush"
}
Please let us know if you still cannot get your model to work. Our JSON schema is documented here for your reference: Model JSON Structure | DeGirum Docs
hi @shashi
I got this code working, thank you for the reference!
I have one more question:
How can we check NPU usage in DeGirum, similar to how hailortcli monitor is used with Hailo?
You can still use hailortcli monitor to monitor NPU usage.