PySDK guide: Troubleshooting problems with loading models in PySDK

:hammer_and_wrench: DeGirum PySDK Troubleshooting Guide: dg.load_model Failures

This guide helps diagnose and fix model loading issues when using the DeGirum PySDK. It is structured around the required parameter inference_host_address, which controls where inference runs.


:repeat_button: Shared Pre-Checks (Applies to All Cases)

Before debugging specific environments, verify the following:

  • :white_check_mark: model_name is correct and matches an existing model.
  • :white_check_mark: zoo_url is appropriate for the context (cloud, local, or file path).
  • :white_check_mark: token is provided when using cloud inference or private model zoos.
  • :white_check_mark: device_type matches the runtime backend (e.g., "HAILORT/HAILO8L").
  • :white_check_mark: You’re using a compatible version of the DeGirum SDK and runtime.

:cloud: Case 1: inference_host_address = "@cloud" (Cloud Inference)

Uses the DeGirum AI Hub to run models in the cloud; no local hardware is required.

:white_check_mark: Expected Behavior

  • Model is pulled from the AI Hub and executed in the cloud.
  • Ideal for evaluation or prototyping.
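A minimal cloud call follows the template below. The placeholders must be filled in with your own model name and AI Hub token; the zoo_url shown assumes you are using the DeGirum public model zoo:

```python
import degirum as dg

# Template for cloud inference -- replace the placeholders before running.
model = dg.load_model(
    model_name="<your_model_name>",      # verify spelling with dg.list_models()
    inference_host_address="@cloud",     # run inference in the AI Hub cloud
    zoo_url="degirum/public",            # public cloud zoo; use your own if private
    token="<your_token>",                # required for cloud inference
)
```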

:cross_mark: Common Issues and Fixes

| Symptom | Likely Cause | Suggested Fix |
| --- | --- | --- |
| 403 Unauthorized | Missing or invalid token | Supply a valid Pro token. |
| NoneType or ModelNotFound | Incorrect model_name or zoo | Use dg.list_models() to inspect available models. |
| ConnectionError | Firewall or offline | Ensure outbound access to the internet is available. |

:desktop_computer: Case 2: inference_host_address = "@local" (Local Inference)

Performs inference on the same machine, using local hardware (Hailo, OpenVINO, TENSORRT, etc.).

:white_check_mark: Supported zoo_url Modes

| Type | zoo_url | Description |
| --- | --- | --- |
| Cloud Zoo | "degirum/hailo" | Downloads models from AI Hub and runs them locally. |
| Local Folder | "./models" | Folder containing .json config files and binaries. |
| Local File | "./models/model.json" | Single model .json file. |
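To make the distinction concrete, here is a small illustrative helper (not part of PySDK) that applies the same classification rules to a zoo_url string:

```python
def classify_zoo_url(zoo_url: str) -> str:
    """Heuristically classify a zoo_url for @local inference.

    Illustrative helper, not a PySDK function -- it mirrors the table above.
    """
    if zoo_url.endswith(".json"):
        return "local file"       # single model .json config
    if zoo_url.startswith((".", "/", "~")):
        return "local folder"     # folder of .json configs and binaries
    return "cloud zoo"            # e.g. "degirum/hailo" pulled from AI Hub
```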

:cross_mark: Common Issues and Fixes

| Symptom | Likely Cause | Fix |
| --- | --- | --- |
| FileNotFoundError | Wrong file/folder path or model name | Double-check the path and file name. |
| RuntimeNotFound | Missing backend runtime | Install HailoRT, CUDA, OpenVINO, etc. |
| PermissionError | Inaccessible directory or file | Ensure your script has read permissions. |
| NoneType | Missing checksum in .json | Add a dummy checksum field (see below). |

:paperclip: Required: checksum Field for Local Models

Even for local models, your .json config must include a checksum:

{
    "CHECKSUM": "dummy-checksum-value",
    ...
}
  • :white_check_mark: Required by the SDK schema.
  • :prohibited: Not actually verified for local execution.
  • :red_exclamation_mark: Missing it causes silent failures.
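One way to guard against this failure mode is a small pre-flight helper (illustrative, not part of PySDK) that injects a dummy CHECKSUM into a local model .json when the field is absent:

```python
import json
from pathlib import Path


def ensure_checksum(config_path, dummy="dummy-checksum-value"):
    """Add a dummy CHECKSUM field to a local model .json if it is missing.

    Returns True if the file was modified. Illustrative helper, not part of
    PySDK -- the SDK schema only requires the field to exist; the value is
    not verified for local execution.
    """
    path = Path(config_path)
    config = json.loads(path.read_text())
    if "CHECKSUM" not in config:
        config["CHECKSUM"] = dummy
        path.write_text(json.dumps(config, indent=2))
        return True
    return False
```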

:globe_with_meridians: Case 3: inference_host_address = "<ip>:<port>" (AI Server)

Connects to a DeGirum AI Server, either locally or over the network.

:white_check_mark: Setup

Start the AI Server with:

degirum server start --zoo ./my-zoo

:cross_mark: Common Issues and Fixes

| Symptom | Likely Cause | Fix |
| --- | --- | --- |
| ConnectionRefused | Wrong IP or server not running | Verify the server is reachable and the port is correct. |
| ModelNotFound | Model not present in zoo folder | Add the .json to the folder and rescan the zoo. |
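Before digging into PySDK errors, it helps to confirm the server's TCP port is reachable at all. A minimal reachability probe (illustrative helper, not part of PySDK) might look like:

```python
import socket


def can_connect(address, timeout=2.0):
    """Return True if an AI Server endpoint ("<ip>:<port>") accepts TCP
    connections -- a quick first check for ConnectionRefused errors.

    Illustrative helper, not part of PySDK.
    """
    host, _, port = address.rpartition(":")
    try:
        with socket.create_connection((host, int(port)), timeout=timeout):
            return True
    except OSError:
        return False
```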

:counterclockwise_arrows_button: Updating Models After Startup

Adding a model to the zoo folder after server startup?

  • :white_check_mark: If AI Server is running natively:

    degirum server rescan-zoo
    

    Must be run on the same machine where the server is running.

  • :spouting_whale: If AI Server runs in Docker:

    docker restart <your_server_name>
    

This is the only way to trigger a model refresh in Docker containers.


:magnifying_glass_tilted_left: Step-by-Step Troubleshooting: Is the Model Available?

Most model loading issues are due to typos or unlisted models.

:white_check_mark: Always Verify with dg.list_models

Use this to inspect which models are available and compatible with your setup:

import degirum as dg

model_list = dg.list_models(
    inference_host_address=<your_host>,  # "@cloud", "@local", or "<ip>:<port>"
    zoo_url=<your_zoo_url>,              # Required for cloud and local; ignored by AI Server
    token=<your_token>,                  # Optional for public zoos
    device_type=["HAILORT/HAILO8L"]      # Optional filter for supported backends
)
print(model_list)

This will:

  • Confirm correct spelling of model_name.
  • Show runtime-compatible models.
  • Reveal missing or unregistered models.

Always run dg.list_models() before loading a model; it is the most reliable way to validate model visibility.
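Putting the two calls together, a defensive loading pattern looks like the following template (placeholders as in the snippet above; this assumes dg.list_models returns the names of the available models):

```python
import degirum as dg

# Template: confirm the model is visible before loading it.
models = dg.list_models(
    inference_host_address="<your_host>",
    zoo_url="<your_zoo_url>",
    token="<your_token>",
)
if "<your_model_name>" in models:
    model = dg.load_model(
        model_name="<your_model_name>",
        inference_host_address="<your_host>",
        zoo_url="<your_zoo_url>",
        token="<your_token>",
    )
else:
    print("Model not found -- check the spelling or the zoo contents")
```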


:white_check_mark: Final Tips

  • :white_check_mark: inference_host_address is a required argument; it has no default.
  • :white_check_mark: Always include a checksum field in local model .json files.
  • :white_check_mark: Never assume the model exists; confirm with dg.list_models.