Trying to deploy DeGirum Fire and Smoke detection model on my RPi5 with Hailo-8, running into Hailo_File_Operation_Failure(14) and segfault errors

I downloaded DeGirum’s yolov8n_relu6_fire_smoke--640x640_quant_hailort_hailo8_1/yolov8n_relu6_fire_smoke--640x640_quant_hailort_hailo8_1.hef, but I run into a segfault when I run the C++ object detector available in the Hailo Application Code Examples repo here. I am able to run my custom Hailo models with the same code, so I am not sure what I am doing wrong with the DeGirum model. Any ideas?

Hi @wmat28
Welcome to the DeGirum community. Before trying the model in the C++ code, can you first try running it through hailortcli to make sure the device type, runtime version, etc. are compatible?

This error could mean that you misspelled the model path in the detection_hef argument, or that the model file itself has incorrect file permissions.

Ensure you put the .hef right next to the executable so that you don’t have to provide relative paths to the model binary when running the application.

You can also take a look at the hailort.log file that appears next to the application after a run for more details.
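The path and permission checks suggested above can be automated before the application ever touches HailoRT. The following is a minimal sketch; `preflight_check` is a hypothetical helper name, not part of any Hailo or DeGirum API, and the conditions are just the common causes of a file-operation failure mentioned in this thread:

```python
import os

def preflight_check(hef_path: str) -> list:
    """Return a list of problems that could explain a file-operation
    failure before handing the path to the inference application."""
    problems = []
    if not os.path.isfile(hef_path):
        problems.append("file not found: " + hef_path)
    elif not os.access(hef_path, os.R_OK):
        problems.append("no read permission: " + hef_path)
    elif not hef_path.endswith(".hef"):
        problems.append("unexpected extension (expected .hef): " + hef_path)
    return problems
```

Running this against the exact string passed to the `-hef=` argument (relative to the directory you launch the binary from) quickly distinguishes a typo or permission issue from a genuine runtime bug.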

Thank you for your replies; however, I have checked, and none of these suggestions solved the problem. Here is the exact error I am getting:

pi@raspberrypi:~/Hailo-Application-Code-Examples/runtime/hailo-8/cpp/object_detection $ ./build/x86_64/obj_det -hef=models/yolov8n_relu6_fire_smoke--640x640_quant_hailort_hailo8_1.hef -input=images
-I-----------------------------------------------
-I-  Network Name                               
-I-----------------------------------------------
-I   yolov8n_relu6_fire_smoke--640x640_quant_hailort_hailo8_1.hef
-I-----------------------------------------------
-I-  Input: yolov8n_relu6_firesmoke/input_layer1, Shape: (640, 640, 3)
-I-----------------------------------------------
-I-  Output: yolov8n_relu6_firesmoke/conv59, Shape: (40, 40, 64)
-I-  Output: yolov8n_relu6_firesmoke/conv70, Shape: (20, 20, 64)
-I-  Output: yolov8n_relu6_firesmoke/conv48, Shape: (80, 80, 2)
-I-  Output: yolov8n_relu6_firesmoke/conv60, Shape: (40, 40, 2)
-I-  Output: yolov8n_relu6_firesmoke/conv71, Shape: (20, 20, 2)
-I-  Output: yolov8n_relu6_firesmoke/conv47, Shape: (80, 80, 64)
-I-----------------------------------------------

Progress: [======>                                           ]  12% (  1/8)Segmentation fault

I hope this helps find the cause of the error.

Hi Stephan, the model path is correct and I have also checked the file permissions. I will paste the contents of hailort.log below. Please let me know if it is useful in finding the cause of the error.

[2025-07-20 14:58:06.111] [13983] [HailoRT] [info] [device.cpp:49] [Device] OS Version: Linux 6.12.34+rpt-rpi-2712 #1 SMP PREEMPT Debian 1:6.12.34-1+rpt1~bookworm (2025-06-26) aarch64
[2025-07-20 14:58:06.112] [13983] [HailoRT] [info] [control.cpp:108] [control__parse_identify_results] firmware_version is: 4.20.0
[2025-07-20 14:58:06.112] [13983] [HailoRT] [info] [vdevice.cpp:651] [create] VDevice Infos: 0001:01:00.0
[2025-07-20 14:58:06.133] [13983] [HailoRT] [info] [hef.cpp:1929] [get_network_group_and_network_name] No name was given. Addressing all networks of default network_group: yolov8n_relu6_firesmoke
[2025-07-20 14:58:06.133] [13983] [HailoRT] [info] [hef.cpp:1929] [get_network_group_and_network_name] No name was given. Addressing all networks of default network_group: yolov8n_relu6_firesmoke
[2025-07-20 14:58:06.133] [13983] [HailoRT] [info] [hef.cpp:1929] [get_network_group_and_network_name] No name was given. Addressing all networks of default network_group: yolov8n_relu6_firesmoke
[2025-07-20 14:58:06.139] [13983] [HailoRT] [info] [internal_buffer_manager.cpp:202] [print_execution_results] Default Internal buffer planner failed to meet requirements
[2025-07-20 14:58:06.139] [13983] [HailoRT] [info] [internal_buffer_manager.cpp:212] [print_execution_results] Default Internal buffer planner executed successfully
[2025-07-20 14:58:06.147] [13983] [HailoRT] [info] [device_internal.cpp:57] [configure] Configuring HEF took 13.066484 milliseconds
[2025-07-20 14:58:06.147] [13983] [HailoRT] [info] [vdevice.cpp:749] [configure] Configuring HEF on VDevice took 13.388107 milliseconds
[2025-07-20 14:58:06.147] [13983] [HailoRT] [info] [infer_model.cpp:436] [configure] Configuring network group 'yolov8n_relu6_firesmoke' with params: batch size: 0, power mode: PERFORMANCE, latency: NONE
[2025-07-20 14:58:06.147] [13983] [HailoRT] [info] [multi_io_elements.cpp:756] [create] Created (AsyncHwEl)
[2025-07-20 14:58:06.147] [13983] [HailoRT] [info] [queue_elements.cpp:450] [create] Created (EntryPushQEl0yolov8n_relu6_firesmoke/input_layer1 | timeout: 10s)
[2025-07-20 14:58:06.147] [13983] [HailoRT] [info] [edge_elements.cpp:187] [create] Created (LastAsyncEl0AsyncHwEl)
[2025-07-20 14:58:06.147] [13983] [HailoRT] [info] [edge_elements.cpp:187] [create] Created (LastAsyncEl1AsyncHwEl)
[2025-07-20 14:58:06.147] [13983] [HailoRT] [info] [edge_elements.cpp:187] [create] Created (LastAsyncEl4AsyncHwEl)
[2025-07-20 14:58:06.147] [13983] [HailoRT] [info] [edge_elements.cpp:187] [create] Created (LastAsyncEl3AsyncHwEl)
[2025-07-20 14:58:06.147] [13983] [HailoRT] [info] [queue_elements.cpp:450] [create] Created (PushQEl5AsyncHwEl | timeout: 10s)
[2025-07-20 14:58:06.147] [13983] [HailoRT] [info] [filter_elements.cpp:375] [create] Created (PostInferEl5AsyncHwEl | Reorder - src_order: FCR, src_shape: (20, 20, 8), dst_order: FCR, dst_shape: (20, 20, 2))
[2025-07-20 14:58:06.147] [13983] [HailoRT] [info] [edge_elements.cpp:187] [create] Created (LastAsyncEl0PostInferEl5AsyncHwEl)
[2025-07-20 14:58:06.147] [13983] [HailoRT] [info] [edge_elements.cpp:187] [create] Created (LastAsyncEl2AsyncHwEl)
[2025-07-20 14:58:06.147] [13983] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] EntryPushQEl0yolov8n_relu6_firesmoke/input_layer1 | inputs: user | outputs: AsyncHwEl(running in thread_id: 13992)
[2025-07-20 14:58:06.147] [13983] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] AsyncHwEl | inputs: EntryPushQEl0yolov8n_relu6_firesmoke/input_layer1[0] | outputs: LastAsyncEl0AsyncHwEl LastAsyncEl1AsyncHwEl LastAsyncEl2AsyncHwEl LastAsyncEl3AsyncHwEl LastAsyncEl4AsyncHwEl PushQEl5AsyncHwEl
[2025-07-20 14:58:06.147] [13983] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] LastAsyncEl0AsyncHwEl | inputs: AsyncHwEl[0] | outputs: user
[2025-07-20 14:58:06.147] [13983] [HailoRT] [info] [pipeline.cpp:891] [print_deep_description] LastAsync

Hi Shashi,

This is what I got when I checked for hailortcli:

pi@raspberrypi:~ $ hailortcli fw-control identify
Executing on device: 0001:01:00.0
Identifying board
Control Protocol Version: 2
Firmware Version: 4.20.0 (release,app,extended context switch buffer)
Logger Version: 0
Board Name: Hailo-8
Device Architecture: HAILO8
Serial Number: <N/A>
Part Number: <N/A>
Product Name: <N/A>

Hi @wmat28
Thank you for providing all these details. Can you please let us know when you downloaded this model from our AI Hub? One issue I see here is that the Hailo application code expects a model compiled with built-in NMS, whereas the HEF you are using does not have the built-in postprocessor. A few weeks ago, we updated the models to be compatible with the Hailo code as well. Please download the model again and try, using this link: Fire and Smoke Model. Even after you download this model, I can see some new potential issues arising from using runtime version 4.20. Anyway, please try this and we can see what happens.
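The missing built-in NMS is actually visible in the output table printed earlier in this thread: the HEF exposes six raw YOLOv8 head tensors (three 64-channel box branches paired with three class branches at 80x80, 40x40, and 20x20 grids) instead of a single postprocessed detection output. A rough heuristic along these lines can flag the problem from shapes alone; `looks_like_raw_yolov8_head` is a hypothetical helper, and the single-output assumption for NMS-compiled HEFs is an assumption, not something guaranteed by Hailo:

```python
def looks_like_raw_yolov8_head(output_shapes):
    """Heuristic: a YOLOv8 HEF compiled *without* built-in NMS typically
    exposes six raw head tensors -- three 64-channel box branches (4 coords
    x 16 DFL bins) paired with three class branches on the same grids.
    An HEF with the NMS postprocessor built in usually exposes far fewer
    outputs, often just one."""
    if len(output_shapes) != 6:
        return False
    box = [s for s in output_shapes if s[-1] == 64]
    cls = [s for s in output_shapes if s[-1] != 64]
    # Expect three box branches and three class branches on matching grids.
    return (len(box) == 3 and len(cls) == 3
            and sorted(s[:2] for s in box) == sorted(s[:2] for s in cls))
```

Feeding in the shapes from the application's printout (two class channels, matching the fire and smoke labels) returns True, which is consistent with the segfault: the example's postprocessing code assumed an NMS output layout that was not there.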

Finally, I would like to note that we developed PySDK to make this type of application-code debugging easier. Our main focus at DeGirum is to help users get started with development on different HW options using PySDK. While we try our best to help with the SW stack provided by the vendor, our expertise in using and debugging Hailo’s application SW stack is quite limited. I hope you understand.


Hi @wmat28
Checking back to see if your issue has been resolved by any of our suggestions.

Hi @shashi I appreciate your support. I am happy to report that the model link you shared works with the Hailo Application Code cpp example for object detection :slightly_smiling_face:

However, there are major differences between the detections made by the same model from this link when run on DeGirum’s AI Hub (on the right) vs. when I deploy it on my RPi5 (on the left). I used the same image for a fair comparison. Please see the attached screenshot:

I am not sure why this is happening. Perhaps the model on the download link is different from the one being used for inferencing on the AI Hub :thinking: Can you please check?

Hi @wmat28
Glad to hear it is working. The AI Hub inference result can differ because we compress the image to save network bandwidth during cloud inference. However, if you run the model locally using our PySDK and compare with the HailoRT output, they should match exactly. I am not sure if you want to spend time on such a test, but since we use HailoRT underneath, we see no reason for differing results when both are run locally.
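If anyone does attempt the local PySDK-vs-HailoRT comparison suggested above, an exact eyeball match is hard to judge from screenshots. A small matcher like the sketch below makes the comparison mechanical; the `(box, score, label)` tuple format, the helper names, and the tolerance values are all assumptions for illustration, not the format either SDK actually emits:

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def detections_match(dets_a, dets_b, iou_thr=0.9, score_tol=0.05):
    """Greedy one-to-one matching: True if every (box, score, label)
    detection in one list has a same-label counterpart in the other
    with high IoU and a similar confidence score."""
    if len(dets_a) != len(dets_b):
        return False
    unused = list(dets_b)
    for box_a, score_a, label_a in dets_a:
        best = None
        for cand in unused:
            box_b, score_b, label_b = cand
            if (label_a == label_b and iou(box_a, box_b) >= iou_thr
                    and abs(score_a - score_b) <= score_tol):
                best = cand
                break
        if best is None:
            return False
        unused.remove(best)
    return True
```

Converting both pipelines' results into this common tuple form and asserting `detections_match` on the same uncompressed input image would confirm (or refute) that the cloud-vs-local gap comes from image compression rather than the model itself.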

Hi @wmat28, just following up to see if this post resolved your original question on this topic:

If so, would you consider this to be the solution?

Just checking if we can mark a solution to make it easier for others in the community facing the same issue to find help. :slightly_smiling_face: