This guide will help you get started with PySDK, focusing on performing AI inferences using the DeGirum AI Hub and AI accelerator hardware hosted by DeGirum.
1. Basic Inference Example
To start working with PySDK, import the degirum package and connect to the DeGirum AI Hub:
import degirum as dg
zoo = dg.connect(dg.CLOUD, "https://cs.degirum.com", "<cloud token>")
model = zoo.load_model("mobilenet_v2_ssd_coco--300x300_quant_n2x_orca_1")
result = model("https://docs.degirum.com/images/samples/TwoCats.jpg")
display(result.image_overlay)
Key Steps
- Import the PySDK Package: import degirum as dg
- Connect to the DeGirum AI Hub: Use the dg.connect function, providing your cloud token.
- Load an AI Model: Use zoo.load_model("model_name") to load a specific model from the DeGirum AI Hub.
- Perform Inference: Pass an image to the model and display the results using display(result.image_overlay).
2. Accessing the Public Model Zoo
To list all available AI models in the public model zoo, use:
model_list = zoo.list_models()
print(model_list)
This will return a list of model names that you can use for inference.
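Since the returned names encode model properties (input size, quantization, runtime, target device), you can filter the list with plain string matching. A minimal sketch, using a hard-coded sample list in place of a live zoo.list_models() call (the model names below are illustrative):

```python
# Sample model names in the format zoo.list_models() returns
model_list = [
    "mobilenet_v2_ssd_coco--300x300_quant_n2x_orca_1",
    "yolo_v5s_coco--512x512_quant_n2x_orca_1",
    "resnet50_imagenet--224x224_float_openvino_cpu_1",
]

# Keep only models compiled for the Orca accelerator
orca_models = [name for name in model_list if "orca" in name]
print(orca_models)
```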
3. Performing Inference on an Image
You can perform inference on an image using various input formats:
- Inference on a Local File:
result = model("./images/TwoCats.jpg")
- Inference on a URL:
result = model("https://docs.degirum.com/images/samples/TwoCats.jpg")
- Inference on a PIL Image Object:
from PIL import Image
image = Image.open("./images/TwoCats.jpg")
result = model(image)
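Models can typically also consume in-memory NumPy arrays, such as frames captured with OpenCV (this is an assumption based on common PySDK usage; verify it for your model). A sketch that builds a dummy 300x300 BGR frame in place of a real image:

```python
import numpy as np

# Dummy OpenCV-style frame: height x width x channels, BGR order
frame = np.zeros((300, 300, 3), dtype=np.uint8)
frame[:, :, 2] = 255  # fill the red channel

# The array can then be passed to the model directly:
# result = model(frame)
print(frame.shape, frame.dtype)
```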
4. Accessing Inference Results
The result object returned by the model contains both numeric and graphical results. You can access the graphical results using:
result_image = result.image_overlay
result_image.save("./images/TwoCatsResults.jpg")
result_image.show()
- result.image_overlay: Returns the image with all inference results overlaid.
- result.image: Provides the original image used for inference.
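The numeric results are typically exposed as a list of dictionaries (for detection models: label, score, and bounding box per object; the exact attribute and keys are assumptions here, so check your model's output). A sketch that post-processes results in that assumed format:

```python
# Sample detection results in the assumed PySDK dictionary format
results = [
    {"label": "cat", "score": 0.92, "bbox": [10, 20, 150, 200]},
    {"label": "cat", "score": 0.88, "bbox": [160, 30, 290, 210]},
    {"label": "dog", "score": 0.31, "bbox": [5, 5, 60, 60]},
]

# Keep only confident detections and summarize them
confident = [r for r in results if r["score"] >= 0.5]
for r in confident:
    x1, y1, x2, y2 = r["bbox"]
    print(f"{r['label']} ({r['score']:.2f}) at [{x1}, {y1}, {x2}, {y2}]")
```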
DeGirum maintains a PySDKExamples GitHub repository containing Jupyter notebooks that illustrate how to build edge AI applications using PySDK. These notebooks demonstrate performing ML inferences using different hosting options:
- Using the DeGirum AI Hub
- On a DeGirum AI Server (local or on a LAN/VPN)
- On a DeGirum Orca Accelerator (installed on a local computer)
To try different hosting options, uncomment the relevant lines in the code cells under the “Specify where do you want to run your inferences” header in the notebooks.
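The three hosting options differ only in how you connect; the rest of the inference code stays the same. A hypothetical sketch of the connection forms (the server address and token are placeholders, and the exact call signatures should be checked against the PySDK documentation):

```python
# Hypothetical dg.connect call forms for each hosting option:
#
#   import degirum as dg
#   zoo = dg.connect(dg.CLOUD, "https://cs.degirum.com", "<cloud token>")  # DeGirum AI Hub
#   zoo = dg.connect("192.168.0.100")                                      # AI Server on LAN/VPN
#   zoo = dg.connect(dg.LOCAL)                                             # local Orca accelerator
#
# Pick one form; model loading and inference are then identical.
hosts = ["AI Hub (cloud)", "AI Server (LAN/VPN)", "Local accelerator"]
print(hosts)
```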
This guide provides a basic overview to help you start using PySDK with the DeGirum AI Hub. For more detailed information and advanced usage, please refer to our full documentation.