Early access: Help us improve the Cloud Compiler!

Thanks for exploring the DeGirum Cloud Compiler, currently in early access!

This tool is still evolving, and your feedback plays a key role in shaping its future. Whether you’re testing it out or integrating it into your workflow, we’d love to hear from you. We welcome feedback on both what could be improved and what’s already working well.

Some questions to guide your feedback:

  • Was the interface intuitive and easy to use?
  • Did you run into any issues, limitations, or bugs?
  • What features would make it more useful?
  • Any improvements or additions you’d like to see?
  • Was there any aspect that felt especially well-designed or helpful?

If you’re not in early access, you can request access here to try it out and join the conversation.

Please feel free to leave your feedback in a post under this topic. Thanks for helping us make the Cloud Compiler better for everyone!


Hi Alex,

I’m using the Cloud Compiler to build our production model. It worked like a charm.

Our use case is object detection, and we use YOLOv8 as the base model.

It would be nice if we could customize things such as cfg/alls/generic/yolov8s.alls.


Hi @doleron

Thanks for the feedback. Can you please elaborate on what you would like to customize?

Hi @shashi

In the last week, we tweaked parameters such as:

  • post_quantization_optimization(finetune, policy=enabled, dataset_size=8192, epochs=8)
  • model_optimization_config(calibration, batch_size=2, calibset_size=64)
  • model_optimization_flavor(compression_level=2)
  • quantization_param(output_layer1, precision_mode=a16_w16)

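For context, commands like these normally live together in a single Hailo model script (one command per line); a sketch assembling exactly the parameters listed above into one `.alls` file:

```
post_quantization_optimization(finetune, policy=enabled, dataset_size=8192, epochs=8)
model_optimization_config(calibration, batch_size=2, calibset_size=64)
model_optimization_flavor(compression_level=2)
quantization_param(output_layer1, precision_mode=a16_w16)
```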
Maybe the compiler UX could have an advanced tab that lets the user customize the compilation through a predefined form, or, more simply, by providing an alls string, similar to what we can already do with the ClientRunner API:

alls = 'normalization1 = normalization([123.675, 116.28, 103.53], [58.395, 57.12, 57.375])\n'
runner.load_model_script(alls)
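Since an alls script is just newline-separated commands, a small helper could assemble one from several parameters at once. A minimal sketch (the `build_alls` helper is hypothetical, not part of any SDK; the commented-out last line shows how it would feed into the snippet above):

```python
# Hypothetical helper (not part of the DeGirum or Hailo APIs): compose
# model-script commands into the newline-separated string that
# load_model_script() expects.
def build_alls(*commands: str) -> str:
    # Each alls command goes on its own line in the script.
    return "\n".join(commands) + "\n"

alls = build_alls(
    "normalization1 = normalization([123.675, 116.28, 103.53], [58.395, 57.12, 57.375])",
    "model_optimization_flavor(compression_level=2)",
)
# runner.load_model_script(alls)
```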

Does it make sense?

Hi @doleron

Understood. Unfortunately, adding such customization to a cloud setup is very difficult, as it requires a lot of testing and, in some cases, a lot of compute resources as well (e.g., finetuning). Even with the current limited setup, the total number of combinations we support already runs into a few thousand. So such customization is unlikely to be supported in the near future. We can, however, help you on a case-by-case basis if needed. Once again, thank you for your feedback and for taking the time to use our tools.


I have YOLOv8 models trained with a 640-size input that can be exported with a 1280p-size input. Do you think this is something that could be done easily?

Hi @traminhhoang1106

Welcome to the DeGirum community. In general, models trained at 640 can be exported at 1280p, but whether compilation succeeds depends on the hardware. If you can tell us the exact model variant, we can check for you. Also, by 1280p do you mean 1280x1280 or 1280x720?

It was yolov8x, 640x640, exported to 1280x1280.

Hi @traminhhoang1106

yolov8x at 1280x1280 does not compile for Hailo due to high memory requirements. This is not an issue with our cloud compiler but comes from the Hailo compiler itself. If you need to run at a higher resolution, we recommend using tiling.
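Conceptually, tiling runs the 640x640 model over overlapping crops of the larger frame and then merges the detections back into frame coordinates. As a rough illustration, the crop coordinates alone can be computed like this (plain Python, not DeGirum API code; assumes the frame is at least one tile wide and tall):

```python
def tile_boxes(img_w, img_h, tile, overlap):
    """Return (x0, y0, x1, y1) crops covering the image with the given overlap."""
    step = tile - overlap
    xs = list(range(0, max(img_w - tile, 0) + 1, step))
    ys = list(range(0, max(img_h - tile, 0) + 1, step))
    # Make sure the right and bottom edges are covered by a final tile.
    if xs[-1] + tile < img_w:
        xs.append(img_w - tile)
    if ys[-1] + tile < img_h:
        ys.append(img_h - tile)
    return [(x, y, x + tile, y + tile) for y in ys for x in xs]

# A 1280x1280 frame covered by 640x640 tiles with 128 px overlap:
boxes = tile_boxes(1280, 1280, 640, 128)
```

Each crop is then run through the 640x640 model, detections are offset by the crop origin, and overlapping results are deduplicated (e.g., with NMS).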

OK, got it. Can you give me sample code for tiling, or post a link to the sample code here?

Hi @traminhhoang1106

Sure. Here is an example: hailo_examples/examples/017_advanced_tiling_strategies.ipynb at multimodel_multistream · DeGirum/hailo_examples