Hi everyone,
I recently started experimenting with the DeGirum Python SDK and wanted to share some issues I ran into while trying to run inference locally on an Intel CPU using OpenVINO on Windows 11. Hopefully this saves someone some time!
Setup: Virtual environment + Windows 11 + Intel CPU (local inference)
Issue 1 — OpenVINO not recognized when installed via pip
Installing OpenVINO through pip inside a venv does not work — DeGirum simply won’t detect it:
device_list = dg.get_supported_devices(inference_host_address="@local", zoo_url=".")
# OPENVINO/CPU will NOT appear in the list
Fix: Install OpenVINO using the archive method instead.
VS Code / PowerShell users: The .bat setup script does nothing in PowerShell. Use the .ps1 script to properly set the environment variables.
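To confirm the archive install actually took effect, I added a quick sanity check. This is just a sketch: `has_openvino_cpu` is my own helper name, not part of the SDK, and the sample device list is illustrative.

```python
# Quick sanity check: after the archive install (and running the .ps1 script),
# OPENVINO/CPU should show up in the supported-device list.
# `has_openvino_cpu` is my own helper, not part of the DeGirum SDK.

def has_openvino_cpu(devices):
    """Return True if any entry names the OpenVINO CPU device."""
    return any(d.strip().upper() == "OPENVINO/CPU" for d in devices)

# With the real SDK you would get the list like this:
#   import degirum as dg
#   devices = dg.get_supported_devices(inference_host_address="@local", zoo_url=".")
# Illustrative value of a healthy list after the archive install:
devices = ["CPU", "OPENVINO/CPU"]
print(has_openvino_cpu(devices))  # True
```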
Issue 2 — Passing token directly to load_model fails
Passing the token as a parameter to load_model results in:
License does not allow usage of runtime agent 'OPENVINO'
Fix: Install the token via the CLI instead:
degirum token install <your_token>
Issue 3 — DeGirum PySDK is incompatible with OpenVINO > 2024.x (most important)
This one took the most time to figure out. OpenVINO releases newer than the 2024.x series no longer accept f64 as an inference precision hint, but that is exactly what DeGirum passes internally. The error looks like this:
degirum.exceptions.DegirumException: Model 'yolov8n_relu6_fire_smoke--640x640_quant_openvino_multidevice_1'
inference failed: Exception from src\inference\src\cpp\core.cpp:129:
Exception from src\inference\src\dev\plugin.cpp:53:
Exception from src\plugins\intel_cpu\src\config.cpp:259:
Wrong value f64 for property key INFERENCE_PRECISION_HINT.
Supported values: bf16, f16, f32, undefined
Fix: Downgrade OpenVINO to version 2024.6 (using the 2024.6 archive, same method as in Issue 1).
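Since newer OpenVINO builds trigger the f64 error above, a small guard at startup can fail fast instead of blowing up mid-inference. A sketch, based on parsing the version string; the "anything newer than 2024.x" cutoff is my assumption from the error message:

```python
# Fail fast if the installed OpenVINO is too new for DeGirum's f64 hint.
# The cutoff (anything newer than the 2024.x series) is my assumption
# based on the INFERENCE_PRECISION_HINT error above.

def openvino_too_new(version: str) -> bool:
    """Return True for OpenVINO versions newer than the 2024.x series."""
    major = int(version.split(".", 1)[0])
    return major > 2024

# With OpenVINO installed you would check the real version:
#   import openvino as ov
#   assert not openvino_too_new(ov.__version__), "downgrade to 2024.6"
print(openvino_too_new("2025.0.0"))  # True  -> expect the f64 error
print(openvino_too_new("2024.6.0"))  # False -> works for me
```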
Below is my final working code. Did I miss something, or are these known issues?
import degirum as dg
your_model_name = "yolov8n_relu6_fire_smoke--640x640_quant_openvino_multidevice_1"
your_host_address = "@local"
your_model_zoo = "degirum/intel"
device_type = "OPENVINO/CPU"
your_image = "fire.jpg"
model = dg.load_model(
    model_name=your_model_name,
    inference_host_address=your_host_address,
    zoo_url=your_model_zoo,
    device_type=device_type,
)
result = model(your_image)
print(result)
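For anyone adapting this, a small helper for summarizing the output may be useful. In my runs, `result.results` was a list of dicts with keys like "label" and "score"; the helper below is my own sketch built on that assumption, shown here with sample data.

```python
# Summarize detections per label above a confidence threshold.
# Assumption from my runs: `result.results` is a list of dicts with
# "label" and "score" keys. `count_labels` is my own helper, not SDK API.
from collections import Counter

def count_labels(detections, min_score=0.3):
    """Count detections per label, keeping only scores >= min_score."""
    return Counter(d["label"] for d in detections if d["score"] >= min_score)

# With the model above you would pass result.results; sample data for shape:
sample = [
    {"label": "fire", "score": 0.91},
    {"label": "smoke", "score": 0.55},
    {"label": "smoke", "score": 0.12},  # filtered out by min_score
]
print(count_labels(sample))  # Counter({'fire': 1, 'smoke': 1})
```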