When working with DeGirum PySDK for local or remote inference, you may encounter runtime errors that prevent models from loading or inference from running.
The `degirum.connect()` function is the starting point for interacting with PySDK. It establishes a connection to the appropriate AI inference engine and model zoo based on the configuration you provide. In this guide, we’ll use Hailo as an example, but the guidance is the same for the other hardware that we support.
```python
inference_host_manager = degirum.connect(
    inference_host_address="@cloud",
    zoo_url="degirum/hailo"
)
```
When you call `degirum.connect()`, you will:
- Specify an inference host to run AI models
- Specify a model zoo from which AI models can be loaded
`degirum.connect()` creates and returns a `ZooManager` object. This object enables:
- Searching for models available in the connected model zoo
- Loading AI models and creating appropriate AI model handling objects for inference
- Accessing model parameters to customize inference behavior
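As a minimal sketch of what this looks like in practice, the snippet below lists model names through the returned `ZooManager`. It is guarded so it degrades gracefully if degirum is not installed, and the `"yolov5"` filter string is just an illustration:

```python
# Sketch: exploring a connected model zoo via the ZooManager object.
# Guarded import so the example is harmless where degirum is absent.
try:
    import degirum as dg
except ImportError:
    dg = None

def explore_zoo(zoo, name_filter=""):
    """Return model names in a zoo object, optionally filtered by substring."""
    # ZooManager.list_models() returns the names of models in the zoo.
    return [m for m in zoo.list_models() if name_filter in m]

if dg is not None:
    zoo = dg.connect(inference_host_address="@cloud", zoo_url="degirum/hailo")
    print(explore_zoo(zoo, "yolov5"))
```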
However, you may encounter errors while connecting to the DeGirum AI Hub. This guide covers common issues and how to troubleshoot them.
Setting up your environment
This guide assumes that you have installed PySDK, the hardware runtime and driver, and DeGirum Tools.
To install degirum_tools, run `pip install degirum_tools` in the same environment as PySDK.
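To confirm that both packages actually landed in the active environment, a quick standard-library check (a sketch; package names as published on PyPI) is:

```python
# Report whether the PySDK packages are installed in the current
# environment, and at which versions.
from importlib.metadata import version, PackageNotFoundError

def installed_version(package: str):
    """Return the installed version string, or None if the package is absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

for pkg in ("degirum", "degirum_tools"):
    ver = installed_version(pkg)
    print(f"{pkg}: {ver if ver else 'NOT INSTALLED'}")
```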
1. Common errors when connecting to AI Hub
1.1 Invalid access token
To access the AI Hub with PySDK, you’ll need to create a Workspace Token with the AI Hub. You can create Workspace Tokens only if your account is part of a Workspace. If you plan to use PySDK fully offline, or if you plan to use models available in public Model Zoos, you don’t need to create Workspace Tokens.
If you have installed PySDK, follow the token setup steps in the AI Hub documentation to set up a Workspace Token.
Workspace Tokens allow you to:
- Run cloud inference on models in public Model Zoos
- Access models in Workspace Model Zoos for local and AI server inference
- Run cloud inference on models in Workspace Model Zoos
If you encounter the following error:
```
degirum.exceptions.DegirumException: Unable to connect to server hub.degirum.com: Your cloud API access token is not valid
```
it means your access token has not been set up correctly.
Troubleshooting:
- Run `degirum token status` in a terminal to verify that your DeGirum token is valid and set up correctly
- Ensure you have generated a token and set it up correctly after installing PySDK
- Note that tokens are specific to a Workspace, so a token created for one Workspace will not work for another
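If you prefer not to rely on stored credentials, a small pre-flight check can catch a missing token before the connection fails. The environment variable name `DEGIRUM_CLOUD_TOKEN` is an assumption here; adjust it to wherever your setup stores the Workspace Token:

```python
# Pre-flight token check before calling degirum.connect().
# ASSUMPTION: the token is exported in DEGIRUM_CLOUD_TOKEN; rename the
# variable to match your own environment if needed.
import os

def get_workspace_token(env_var: str = "DEGIRUM_CLOUD_TOKEN") -> str:
    """Return the Workspace Token from the environment, or fail loudly."""
    token = os.environ.get(env_var, "").strip()
    if not token:
        raise RuntimeError(
            f"No token found in ${env_var}; create a Workspace Token "
            "in AI Hub and export it before connecting."
        )
    return token
```

You can then pass the returned token explicitly when connecting, instead of depending on credentials stored elsewhere on the machine.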
1.2 Invalid or Inaccessible Model Zoo URL
Example:
```python
inference_host_manager = degirum.connect(
    inference_host_address="@cloud",
    zoo_url="degirum/hail"  # incorrect zoo URL
)
```
Error:
```
DegirumException: Cloud model zoo 'degirum/hail' either does not exist or you do not have access to it.
(cloud server response: 403 Client Error: Forbidden for url: https://hub.degirum.com/api/v1/public/zoos-check/degirum/hail)
```
Cause: The `zoo_url` does not match a valid or accessible model zoo.
Troubleshooting:
- Verify the correct zoo URL from DeGirum AI Hub.
- For cloud, the correct format is `https://hub.degirum.com/<org_name>/<zoo_name>`.
- If it’s a private zoo, ensure your token has access.
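Before connecting, you can sanity-check that a cloud `zoo_url` at least matches the expected `<org_name>/<zoo_name>` shape. This is a sketch that validates only the format; it cannot tell you whether the zoo exists or whether your token can reach it:

```python
import re

# Format check only: a cloud zoo_url is expected to look like
# "<org_name>/<zoo_name>", e.g. "degirum/hailo". This cannot detect a
# zoo that exists but is private to a different Workspace.
_ZOO_URL_RE = re.compile(r"^[\w.-]+/[\w.-]+$")

def looks_like_cloud_zoo_url(zoo_url: str) -> bool:
    """Return True if zoo_url has the org/zoo shape."""
    return bool(_ZOO_URL_RE.match(zoo_url))
```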
1.3 Invalid host address
- The host address should be either `@local`, `@cloud`, or a valid AI server address
Example:
```python
inference_host_manager = degirum.connect(
    inference_host_address="@cld",  # incorrect host address
    zoo_url="degirum/hailo"
)
```
Error:
```
DegirumException: Incorrect inference host address '@cld'. It should be either @local, or @cloud, or a valid AI server address
```
Troubleshooting:
Valid values for `inference_host_address` are:
- `@local` - for using your local hardware
- `@cloud` - for using DeGirum’s cloud inference
- A custom host in the format `host:port` - for example, `192.168.1.50:8778`
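A small validator can catch typos like `@cld` before the connection attempt. This is a sketch covering the three valid forms; the `host:port` pattern is a simplification (hostname or IPv4 plus a numeric port):

```python
import re

# Accepts the three valid forms of inference_host_address:
# "@local", "@cloud", or "host:port" (simplified host:port check).
_HOST_PORT_RE = re.compile(r"^[\w.-]+:\d{1,5}$")

def valid_host_address(addr: str) -> bool:
    """Return True if addr is @local, @cloud, or host:port with a sane port."""
    if addr in ("@local", "@cloud"):
        return True
    if not _HOST_PORT_RE.match(addr):
        return False
    port = int(addr.rsplit(":", 1)[1])
    return 0 < port < 65536
```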
1.4 Invalid AI server address
When using an `inference_host_address` of the form `host:port`, the client must establish a TCP connection to the AI server. Failures here indicate network-level or authentication problems.
Code:
```python
inference_host_manager = degirum.connect(
    inference_host_address="192.168.50:8778",  # incorrect IP
    zoo_url="degirum/hailo"
)
```
Error:
```
DegirumException: Incorrect inference host address '192.168.50:8778'. It should be either @local, or @cloud, or a valid AI server address
```
Causes:
- Host unreachable: the IP address or hostname is wrong, or the server is offline
- Port blocked by firewall: the TCP port is closed or filtered
- Mismatch in inference host configuration: the server may be listening on a different port
- Authentication failure: invalid or missing API token for cloud or secured hosts
Troubleshooting:
- Check host reachability: open a terminal and run `ping 192.168.1.50`
- Verify the port is open: open a terminal and run `nc -vz 192.168.1.50 8778`
- Check the server logs for inference host startup and the bound address
- Validate credentials: ensure the `token` parameter in `dg.load_model()` is set correctly for secured environments
- Test with `@local` to confirm the client works with a local runtime
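The ping/nc checks can also be done from Python with the standard library, which is convenient inside the same script that connects to the AI server. A sketch, using the hypothetical address from this guide:

```python
import socket

# Programmatic equivalent of the ping/nc checks: attempt a TCP
# connection to the AI server host and port with a short timeout.
def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical AI server address from this guide):
# print(is_reachable("192.168.1.50", 8778))
```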
2. Common errors when loading a model from AI Hub
After creating a model zoo connection object, the next step is to load an AI model from the AI Hub. One way to do this is to use the model zoo object to load the model:
```python
import degirum as dg

inference_host_manager = dg.connect(
    inference_host_address="@local",
    zoo_url="degirum/hailo"
)
model = inference_host_manager.load_model(
    model_name='yolov5s_relu6_coco--640x640_quant_hailort_hailo8_1'
)
```
2.1 Easier way to load models
Creating a zoo manager object using `dg.connect()` allows you to list the models in that zoo or get the list of supported devices for that model zoo. However, there is a simpler way to connect to the AI Hub and load a model directly, using the code snippet below:
```python
import degirum as dg

model = dg.load_model(
    model_name='yolov5s_relu6_coco--640x640_quant_hailort_hailo8_1',
    inference_host_address='@cloud',
    zoo_url='degirum/hailo'
)
```
However, you might encounter errors while loading a model using `load_model()`. Some common errors are the following:
2.2 Invalid or inaccessible model zoo URL
Code:
```python
import degirum as dg

model = dg.load_model(
    model_name='yolov5s_relu6_coco--640x640_quant_hailort_hailo8_1',
    inference_host_address='@cloud',
    zoo_url='degirum/hail'  # incorrect or inaccessible
)
```
Error:
```
DegirumException: Cloud model zoo 'degirum/hail' either does not exist or you do not have access to it.
(cloud server response: 403 Client Error: Forbidden for url: https://hub.degirum.com/api/v1/public/zoos-check/degirum/hail)
```
Cause: The `zoo_url` does not match a valid or accessible model zoo.
Troubleshooting:
- Verify the correct zoo URL from DeGirum AI Hub
- For cloud, the correct format is `https://hub.degirum.com/<org_name>/<zoo_name>`
- If it’s a private zoo, ensure your token has access
2.3 Incorrect model name
Code:
```python
import degirum as dg

model = dg.load_model(
    model_name='yolov5s_relu6_coc--640x640_quant_hailort_hailo8_1',  # incorrect name
    inference_host_address='@cloud',
    zoo_url='degirum/hailo',
)
```
Error:
```
DegirumException: could not get model by url.
(cloud server response: 400 Client Error: Bad Request for url: https://hub.degirum.com/zoo/v1/public/models/degirum/hailo/yolov5s_relu6_coc--640x640_quant_hailort_hailo8_1/info)
```
In most cases, the cause of this error is that the model name is misspelled or improperly formatted.
The standard format of a model name on DeGirum AI Hub is:
`<base_model_name>--<input_size>_<quant|float>_<runtime>_<hardware>_<version>`
For example: `yolov5s_relu6_coco--640x640_quant_hailort_hailo8_1`
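A regular expression derived from this naming scheme can split a model name into its parts, which helps spot typos before calling `load_model()`. This is a sketch under the assumption that the scheme above holds for your zoo:

```python
import re

# Split an AI Hub model name into its components. The pattern encodes
# the naming scheme described above; names from other zoos may differ.
_MODEL_NAME_RE = re.compile(
    r"^(?P<base>.+)--(?P<input_size>\d+x\d+)"
    r"_(?P<precision>quant|float)"
    r"_(?P<runtime>[a-z0-9]+)_(?P<device>[a-z0-9]+)_(?P<version>\d+)$"
)

def parse_model_name(name: str):
    """Return a dict of name components, or None if the name is malformed."""
    m = _MODEL_NAME_RE.match(name)
    return m.groupdict() if m else None

print(parse_model_name("yolov5s_relu6_coco--640x640_quant_hailort_hailo8_1"))
```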
Troubleshooting:
- Copy the model name exactly as shown in the AI Hub.
- Ensure the correct quantization (`quant` or `float`), runtime (`hailort`), and hardware name (`hailo8`).
2.4 Cannot find device (runtime not found)
Sometimes while connecting to the DeGirum AI Hub using `dg.connect()` you may encounter an error like the one below:
Error:
```
DegirumException: Cloud Model 'yolov5s_relu6_coco--640x640_quant_hailort_hailo8_1' does not have any supported runtime/device combinations that will work on this system.
```
This error indicates that PySDK cannot locate a compatible AI accelerator (e.g. the Hailo-8 runtime) on the system. Without the correct runtime, the model cannot be executed locally.
Troubleshooting:
To troubleshoot this error, the first step is to check whether your device appears in the supported device list. You can check the available hardware using the code below:
```python
import degirum

device_list = degirum.get_supported_devices(inference_host_address="@local", zoo_url=".")
print(device_list)
```
This prints the list of available devices; check that your desired device is listed:
```
dict_keys(['DUMMY/DUMMY', 'N2X/CPU', 'ONNX/CPU', 'OPENVINO/CPU', 'TFLITE/CPU'])
```
If your device is missing, follow the device-specific instructions to ensure it is properly set up. If you have chosen `inference_host_address='@cloud'` and still see this error, the devices on the cloud server are experiencing downtime and nothing on your side caused the issue. In that case, try again after some time, or reach out to us if the issue persists.
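A small helper can turn that visual check into code. Note that the `HAILORT/HAILO8` entry name used below is an assumption for a Hailo-8 setup; substitute the runtime/device pair reported for your own hardware:

```python
# Check whether a given runtime/device pair appears in the list
# returned by degirum.get_supported_devices(). Entries are strings of
# the form "RUNTIME/DEVICE" as shown in the example output above.
def device_available(device_list, runtime: str, device: str) -> bool:
    """Return True if 'RUNTIME/DEVICE' is present in device_list."""
    return f"{runtime}/{device}" in device_list

# The example list from this guide, which has no Hailo entry:
example = ["DUMMY/DUMMY", "N2X/CPU", "ONNX/CPU", "OPENVINO/CPU", "TFLITE/CPU"]
print(device_available(example, "HAILORT", "HAILO8"))  # False for this list
```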
3. Conclusion
In this guide, we explored common connection and model-loading errors you might encounter. We explained typical causes—such as invalid zoo URLs, incorrect host addresses, authentication issues, and unsupported runtimes—and provided troubleshooting steps for each. By following these checks, you can quickly resolve most issues and ensure a smooth workflow for running inference locally or on the cloud.