Good evening and congratulations on your work. I have the following question: when running inference with an object detection / classification / compound model etc., is there a way to disable drawing the results to the screen and only print the relevant information, in order to improve FPS? Thanks in advance.
Hi @alema416
Welcome to the DeGirum community and thank you for your kind words. If you want to disable the rendering, you can simply call `print(res.results)` instead of `display.show(res)`. Please let us know if this helps.
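A minimal sketch of the pattern (a stub `Result` class stands in for PySDK's real inference result object, and the frame data is made up; the key point is that the display call is skipped entirely and only the structured results are printed):

```python
# Stub standing in for a PySDK inference result: .results is a list
# of detection dicts (an assumption for illustration purposes).
class Result:
    def __init__(self, detections):
        self.results = detections

def process(stream, show=False):
    """Consume inference results; render only when show=True."""
    collected = []
    for res in stream:
        if show:
            pass  # display.show(res) would render here, costing FPS
        else:
            collected.append(res.results)  # text-only: keep just the data
    return collected

# Simulated per-frame results
frames = [Result([{"label": "person", "score": 0.91}]),
          Result([{"label": "car", "score": 0.82}])]
for r in process(frames):
    print(r)  # relevant information only, no drawing
```

In a real PySDK loop the same idea applies: replace the display/overlay call with a plain `print(res.results)` and the rendering overhead disappears.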
Hi @shashi,
Is it possible to compile a model in the DeGirum AI Hub with Nvidia support? For Hailo support I selected HailoRT as the runtime during compilation, but for Nvidia there is no such option. Can you please tell me which option I have to choose for Nvidia during compilation? Also, which model should I use for inference on an Nvidia board? There is no model with Nvidia support in the DeGirum model zoo. Can you please help me out?
Hi @manasa
Welcome to the DeGirum community. Nvidia devices typically do not need any compilation, as they can work directly with ONNX files. Our public model zoo has an example of a model that works with Nvidia devices: yolov8n_model. You can try this model first. We would also like to make sure that PySDK is set up properly on your device, as first-time setup on Nvidia devices can be tricky. @bolotov from our team can help you set things up correctly. To get started, can you run `pip install degirum` on your machine, then type `degirum sys-info` and let us know the output? It would also be helpful if you could provide the JetPack version.
Hi @shashi,
Thank you for the information. In the public model zoo there are models with the extensions .tflite, .openvino, and .rknn, but I am unable to find an ONNX yolov8n model. Can you please provide the full model name, with extension, that supports Nvidia devices?
```
nvidia@nvidia-desktop:~$ degirum sys-info
Devices:
  N2X/CPU:
  - '@Index': 0
  - '@Index': 1
  TFLITE/CPU:
  - '@Index': 0
  - '@Index': 1
Software Version: 0.14.3

nvidia@nvidia-desktop:~$ sudo apt-cache show nvidia-jetpack
[sudo] password for nvidia:
Package: nvidia-jetpack
Version: 5.1.3-b29
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 194
Depends: nvidia-jetpack-runtime (= 5.1.3-b29), nvidia-jetpack-dev (= 5.1.3-b29)
Homepage: Jetson - Embedded AI Computing Platform | NVIDIA Developer
Priority: standard
Section: metapackages
Filename: pool/main/n/nvidia-jetpack/nvidia-jetpack_5.1.3-b29_arm64.deb
Size: 29308
SHA256: b8b35bd12e6bf07c9fe61563babf08e802db09bd8fe4611bd4c53c21daaf0b48
SHA1: b713997204bb7353f9d6edf499af68c944d0108d
MD5sum: 3199c10880f3f756308f59ffacd6a060
Description: NVIDIA Jetpack Meta Package
Description-md5: ad1462289bdbc54909ae109d1d32c0a8
```
Hi @manasa,
The current version of PySDK only works with JetPack 6.1 and higher. Older versions of PySDK did work with JetPack 5.1; the latest such version is PySDK 0.14.1. You can either update your device to a recent version of JetPack, as described on the Nvidia website, or install the older version of PySDK via `pip install degirum==0.14.1`.
Hi @bolotov ,
Thank you for the information. This is very helpful.
Hi @shashi ,
I compiled my custom model in the DeGirum AI Hub. I am able to get detections, but the class IDs for my objects are mismatched. I previously used the same PyTorch model with Hailo, and at that time I got the correct IDs and detections. I have added below the output I got and the output I expected. Can you please help me understand why I am getting these mismatched IDs?
Output I got:

```
Mapped class name 'popmid' to class_id 49
Mapped class name 'puff' to class_id 33
```

Output I expected:

```
Mapped class name 'popmid' to class_id 2
Mapped class name 'puff' to class_id 3
```
Hi @manasa
Thank you for sharing the details. You’re right — the class ID mapping should remain consistent with your original PyTorch model.
To help us investigate and reproduce the issue on our end, could you please provide the following:
- A link to the compiled model in the DeGirum AI Hub Model Zoo
- The original YOLO `.pt` file used for compilation (you can email the `.pt` file to khatami.mehrdad@degirum.com)
- A sample image that demonstrates the mismatch in class ID mapping
Once we have these, we’ll take a closer look and get back to you with a resolution.
Hi @khatami.mehrdad ,
Thank you for responding. I have shared the `.pt` file and some sample images to your email. The link for the compiled model is: DeGirum AI Hub
Appreciate your quick help.
Hi @khatami.mehrdad ,
I apologize; the problem was in my own code. I was also using mappings from another model in my code, which is why I got mismatched IDs. Please ignore the issue above. Thank you.
Glad the issue is resolved.