Request for example code to run public CLIP model on Hailo8 with DeGirum

Hello,

I am currently running video inference smoothly on a Hailo8 device with the DeGirum PySDK using YOLOv8 models.
Now I want to try the CLIP model. Since CLIP outputs image embedding vectors rather than object-detection results such as labels or bounding boxes, I expect the usage to be different.

Could you please share example code or guidance on how to run a public CLIP model from the DeGirum model zoo in a Hailo8 environment using PySDK?
Specifically, I am interested in:

  • How to load the model

  • How to perform inference with an input image

  • How to obtain and handle the embedding vector output

Thank you very much for your help.


Hi @20246146

Welcome to the DeGirum community. Glad to hear you are able to run video inference with YOLO models. We will provide example usage of the CLIP model. Please give us a couple of days.
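In the meantime, here is a minimal sketch of what the flow might look like. The model name, zoo URL, and the exact layout of the result object are assumptions for illustration — please check the public model zoo listing and inspect your actual result object before relying on them. The embedding helpers are plain NumPy and work regardless of the SDK version:

```python
# Hypothetical sketch: loading a CLIP image-encoder model from a DeGirum
# model zoo, running it on a local Hailo8 device via PySDK, and handling
# the embedding vector it returns.
import numpy as np


def normalize(v: np.ndarray) -> np.ndarray:
    """L2-normalize an embedding vector."""
    return v / np.linalg.norm(v)


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two (unnormalized) embedding vectors."""
    return float(np.dot(normalize(a), normalize(b)))


def get_image_embedding(image_path: str) -> np.ndarray:
    # degirum is imported inside the function so the helpers above remain
    # usable without the SDK installed or a Hailo8 device attached.
    import degirum as dg

    model = dg.load_model(
        model_name="clip_vit_b_32--hailo8",   # hypothetical model name
        inference_host_address="@local",      # local Hailo8 accelerator
        zoo_url="degirum/hailo",              # public Hailo model zoo
    )
    result = model(image_path)
    # Embedding models return a raw tensor rather than detection dicts;
    # the exact key and shape depend on the model's postprocessor, so
    # print(result.results) once to confirm the structure.
    return np.asarray(result.results[0]["data"]).flatten()
```

With two embeddings in hand, `cosine_similarity(emb_a, emb_b)` gives a similarity score in [-1, 1], which is the usual way CLIP image embeddings are compared.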
