How do we begin using DeGirum

Hi Everyone,

We are looking to test models on DeGirum, specifically for DeepX on Raspberry Pi. We have a series of models we would like to test and need step-by-step directions on how to begin.

Regards

Hi @michaelosumune

Welcome to the DeGirum community. Can you please elaborate on what you mean by a series of models? Are these models in ONNX/PyTorch/TensorFlow format, or are they already compiled for DeepX?

The models are YOLO models for people detection and Llama models for NLP. We want to see how they work together while evaluating them.
We do not have the devices yet, which is why we are looking to see how to do this on DeGirum before the devices come in.

Hi @michaelosumune

Welcome to the community. We have a number of ready-to-use models precompiled for DeepX on our AI Hub. Here is a basic example of running simple inference with any of the models from our zoo. You can also use your own compiled model by setting up a local model zoo.

Since, as you mentioned, you do not have the device yet, you can run these models on our cloud platform. To run the models on the DeGirum cloud, you will need a workspace token. Please follow these steps to set up a workspace token.
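If you haven't already, the PySDK and tools install from PyPI with `pip install degirum degirum_tools`. As a sketch of token setup (the exact environment variable name is an assumption here; please check the AI Hub documentation), once you have generated a workspace token you can make it available to the SDK via an environment variable so you don't have to hard-code it in scripts:

```shell
# Make the workspace token available to the SDK; DEGIRUM_CLOUD_TOKEN is the
# environment variable degirum_tools looks for (placeholder value below)
export DEGIRUM_CLOUD_TOKEN="<your-workspace-token>"
```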

Simple inference on DeepX using DeGirum PySDK:

import degirum as dg, degirum_tools
from degirum_tools import remote_assets
from degirum_tools import ModelSpec

model_spec = ModelSpec(model_name="yolov8s_coco--640x640_quant_deepx_m1a_1",
                       zoo_url='degirum/deepx',
                       inference_host_address='@cloud',
                       #model_properties={'output_confidence_threshold': 0.1}
                       )

# image source
image_source = remote_assets.three_persons

# load AI model
model = model_spec.load_model()

# perform AI model inference on the given image source
print(f"Running inference using '{model_spec.model_name}' on image source '{image_source}'")
inference_result = model(image_source)

# print numeric inference results
print(inference_result)
print("Press 'x' or 'q' to stop.")

# show results of inference
with degirum_tools.Display("AI Camera") as output_display:
    output_display.show_image(inference_result.image_overlay)

If you prefer to run video inference, you can use the code snippet below:

import degirum as dg, degirum_tools
from degirum_tools import remote_assets
from degirum_tools import ModelSpec

model_spec = ModelSpec(model_name="yolov8s_coco--640x640_quant_deepx_m1a_1",
                       zoo_url='degirum/deepx',
                       inference_host_address='@cloud',
                       #model_properties={'output_confidence_threshold': 0.1}
                       )

# video source
video_source = remote_assets.walking_people

# load AI model
model = model_spec.load_model()

# running video inference
with degirum_tools.Display("AI Camera") as output_display:
    for inference_result in degirum_tools.predict_stream(model, video_source):
        output_display.show(inference_result)

Once you have a device, you can run the same code on local hardware by simply changing inference_host_address to '@local'.
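Since the host address is the only line that changes between cloud and local runs, you can parameterize it in your evaluation scripts. The `inference_host` function below is a hypothetical helper, not part of the DeGirum PySDK:

```python
# Hypothetical helper (not part of DeGirum PySDK): pick the inference host
# address based on whether a DeepX device is attached locally
def inference_host(have_local_device: bool) -> str:
    return "@local" if have_local_device else "@cloud"

# Then pass the result to ModelSpec, e.g.
#   ModelSpec(..., inference_host_address=inference_host(False))
```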

Please let me know if you face any issues or have any further questions.

@michaelosumune , just to clarify: running DeepX models in the DeGirum cloud (also called the AI Hub) indeed means that those models run on real DeepX hardware hosted in our datacenter. You can start developing your code right away without waiting for real hardware to arrive and be set up.