Pose Landmarks - Incorrect Coordinates Issue When Running Locally

Hi,

I’m having an issue when running pose estimation inference locally with every pose model I’ve tried: the X coordinates are skewed and the Y coordinates are always 0.

However, when using cloud inference the landmarks map correctly. Is this a bug, or is there any way to map them correctly when running locally?

For clarity - I’m using a Hailo-8 via the AI HAT on a Raspberry Pi 5. Below is the code I am running, switching between “@local” and “@cloud”:

from degirum_tools import remote_assets, ModelSpec

model_spec = ModelSpec(
    model_name='yolov8n_relu6_coco_pose--640x640_quant_hailort_hailo8_1',
    zoo_url='degirum/hailo',
    inference_host_address='@local',  # '@local' / '@cloud'
    model_properties={'device_type': ['HAILORT/HAILO8', 'HAILORT/HAILO8L']},
)

model = model_spec.load_model()

results = model(remote_assets.three_persons)
print(results)
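
For an easier side-by-side comparison, I also print just the keypoint coordinates with this small snippet (it assumes the result object exposes a “results” list of detection dicts with a “landmarks” key, matching the printed output below):

# Print only keypoint coordinates and scores for quick comparison
for det in results.results:
    for lm in det.get("landmarks", []):
        x, y = lm["landmark"][:2]
        print(f"keypoint {lm['category_id']}: x={x:.1f} y={y:.1f} score={lm['score']:.2f}")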

Here is the typical output when using the “remote_assets.three_persons” image input for landmarks:

Local Inference:

bbox: [422.90557861328125, 23.543975830078125, 640.0, 348.1736755371094]
category_id: 0
label: person
landmarks:

  • category_id: 0
    connect:
    landmark: [101.43762969970703, 0]
    score: 0.030687743797898293
  • category_id: 1
    connect:
    landmark: [99.1980209350586, 0]
    score: 0.022328002378344536
  • category_id: 2
    connect:
    landmark: [101.43762969970703, 0]
    score: 0.0335884764790535
  • category_id: 3
    connect:
    landmark: [89.49307250976562, 0]
    score: 0.0420428141951561
  • category_id: 4
    connect:
    landmark: [104.42376708984375, 0]
    score: 0.0713447779417038
  • category_id: 5
    connect:
    landmark: [91.73267364501953, 0]
    score: 0.05487994849681854
  • category_id: 6
    connect:
    landmark: [101.43762969970703, 0]
    score: 0.059926051646471024
  • category_id: 7
    connect:
    landmark: [93.22574615478516, 0]
    score: 0.05487994849681854
  • category_id: 8
    connect:
    landmark: [103.67723083496094, 0]
    score: 0.06831458956003189
  • category_id: 9
    connect:
    landmark: [97.7049560546875, 0]
    score: 0.0654040277004242
  • category_id: 10
    connect:
    landmark: [102.18415832519531, 0]
    score: 0.07778020948171616
  • category_id: 11
    connect:
    landmark: [91.73267364501953, 0]
    score: 0.11375253647565842
  • category_id: 12
    connect:
    landmark: [95.46534729003906, 0]
    score: 0.11854175478219986
  • category_id: 13
    connect:
    landmark: [99.1980209350586, 0]
    score: 0.09624993801116943
  • category_id: 14
    connect:
    landmark: [102.18415832519531, 0]
    score: 0.10913287103176117
  • category_id: 15
    connect:
    landmark: [96.95841979980469, 0]
    score: 0.11854175478219986
  • category_id: 16
    connect:
    landmark: [103.67723083496094, 0]
    score: 0.12864470481872559
score: 0.9169332385063171

Cloud Inference:

bbox: [423.3338317871094, 25.373138427734375, 639.911865234375, 350.6205749511719]
category_id: 0
label: person
landmarks:

  • category_id: 0
    connect:
    landmark: [571.5079956054688, 103.22462463378906]
    score: 0.9558721780776978
  • category_id: 1
    connect:
    landmark: [580.6630859375, 103.22462463378906]
    score: 0.6715580224990845
  • category_id: 2
    connect:
    landmark: [571.5079956054688, 94.06954956054688]
    score: 0.9079425930976868
  • category_id: 3
    connect:
    landmark: [585.2406005859375, 107.80216979980469]
    score: 0.37738367915153503
  • category_id: 4
    connect:
    landmark: [557.775390625, 84.91447448730469]
    score: 0.8810773491859436
  • category_id: 5
    connect:
    landmark: [566.930419921875, 149.0]
    score: 0.9988816380500793
  • category_id: 6
    connect:
    landmark: [516.5775146484375, 84.91447448730469]
    score: 0.9964962601661682
  • category_id: 7
    connect:
    landmark: [566.930419921875, 190.19784545898438]
    score: 0.9898244738578796
  • category_id: 8
    connect:
    landmark: [475.37969970703125, 62.02679443359375]
    score: 0.9905201196670532
  • category_id: 9
    connect:
    landmark: [598.9732055664062, 181.04275512695312]
    score: 0.977920413017273
  • category_id: 10
    connect:
    landmark: [525.7326049804688, 52.87171936035156]
    score: 0.9855131506919861
  • category_id: 11
    connect:
    landmark: [498.26739501953125, 176.4652099609375]
    score: 0.999030590057373
  • category_id: 12
    connect:
    landmark: [461.6470947265625, 158.15505981445312]
    score: 0.9987099170684814
  • category_id: 13
    connect:
    landmark: [562.3529052734375, 245.12826538085938]
    score: 0.9942324757575989
  • category_id: 14
    connect:
    landmark: [502.8449401855469, 226.818115234375]
    score: 0.9949973821640015
  • category_id: 15
    connect:
    landmark: [516.5775146484375, 318.3688659667969]
    score: 0.9494291543960571
  • category_id: 16
    connect:
    landmark: [461.6470947265625, 300.0587158203125]
    score: 0.9494291543960571

Hi @colley345

Thanks for bringing this to our attention. We are able to replicate the issue on our side and are trying to find the root cause. As a temporary workaround, can you please revert to an older stable version of PySDK (0.18.3)? We see this issue only with 0.19.1. Once we release a new version with the fix, we will let you know.
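
To confirm the downgrade took effect, you can check the installed version with a quick snippet (this assumes PySDK is installed as the “degirum” pip package):

# Print the installed PySDK version; 0.18.3 is the suggested workaround,
# since the coordinate issue is only seen with 0.19.1.
from importlib.metadata import version
print(version("degirum"))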

That’s brilliant - thanks for the quick response!

I’ll give that previous version a go.

Much appreciated!


Hi @colley345

We released 0.19.2 today, which fixes the bug above: Release Notes | DeGirum Docs
We appreciate you bringing this to our notice.


Fantastic, all working great. I’m loving using the SDK, btw! It just works!


Hi @colley345

Thanks for confirming, and thanks for the kind words about PySDK.