How to debug compilation on DeGirum AI Hub when accuracy is poor

I was able to quantize the model using the cloud compiler on DeGirum AI Hub, but the model does not perform well. Is there any documentation on how to tweak the parameters during compilation? For example, when I was using hailo_sdk_client to quantize the model, it would go something like:

alls = """
normalization1 = normalization([0.0, 0.0, 0.0], [255.0, 255.0, 255.0])
change_output_activation(conv74, sigmoid)
change_output_activation(conv90, sigmoid)
change_output_activation(conv105, sigmoid)
nms_postprocess("/local/shared_with_docker/nms_config/nms_layer_config_yolo11m.json", meta_arch=yolov8, engine=cpu)
model_optimization_config(calibration, batch_size=1)
performance_param(compiler_optimization_level=max)
"""

performance_param(compiler_optimization_level=max) - To achieve optimal performance, set compiler_optimization_level to "max"

post_quantization_optimization(finetune, policy=enabled, learning_rate=0.00001)

resources_param(max_apu_utilization=0.8, max_compute_16bit_utilization=0.8, max_compute_utilization=0.8, max_control_utilization=0.8, max_input_aligner_utilization=0.8, max_memory_utilization=0.8, max_utilization=0.0)

model_optimization_flavor(optimization_level=0)

runner.load_model_script(alls)
runner.optimize_full_precision()
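For context, the surrounding flow I used locally looked roughly like this. This is only a sketch: the ONNX path, network name, and calibration array are placeholders, and the exact hailo_sdk_client calls may differ slightly depending on the DFC version.

import numpy as np
from hailo_sdk_client import ClientRunner

# Placeholders -- adjust to your own model and data
onnx_path = "yolo11m.onnx"
calib_data = np.load("calib_set.npy")  # e.g. (N, 640, 640, 3) calibration images

runner = ClientRunner(hw_arch="hailo8")
runner.translate_onnx_model(onnx_path, "yolo11m")

runner.load_model_script(alls)      # the model script shown above
runner.optimize_full_precision()    # full-precision pre-optimization pass
runner.optimize(calib_data)         # quantization with the calibration set
hef = runner.compile()              # compile to a .hef

with open("yolo11m.hef", "wb") as f:
    f.write(hef)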

How can I debug compilation using DeGirum? Does its PySDK support this?

Hi @muhammadhamzaj2001

Such advanced options are not supported in the cloud compiler, as they need heavy compute resources. Also, can you tell us how you evaluated the compiled model?

I had written a Python script that simply compared every prediction I got via HailoRT against the ground-truth labels.
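Roughly, the script did something like the following. This is a simplified sketch: run_inference stands in for my actual HailoRT call, and the annotation file and its format are just illustrative.

import json

def iou(box_a, box_b):
    # IoU of two [x1, y1, x2, y2] boxes
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def run_inference(image_path):
    # Placeholder for the actual HailoRT inference call; returns a list of
    # dicts like {"bbox": [x1, y1, x2, y2], "label": int, "score": float}
    raise NotImplementedError

with open("ground_truth.json") as f:    # hypothetical annotation file
    ground_truth = json.load(f)         # {image_path: [{"bbox": [...], "label": int}, ...]}

matched, total_preds = 0, 0
for image_path, gt_objects in ground_truth.items():
    for pred in run_inference(image_path):
        total_preds += 1
        if any(pred["label"] == gt["label"] and iou(pred["bbox"], gt["bbox"]) > 0.5
               for gt in gt_objects):
            matched += 1

print(f"precision @ IoU 0.5: {matched / max(total_preds, 1):.3f}")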

Quick question: can I quantize the model on my local system without a Hailo device, since then I would be able to use my local 4090 and quantize with higher optimization_levels? Is this possible? Then maybe I can move the resulting .hef onto the Hailo device and do inference there.

Regards

Hi @muhammadhamzaj2001

In our experience with many users, there are a couple of places where things can go wrong in evaluation. We published a guide that outlines how to use the tools we built to do this evaluation: Hailo guide: Evaluating model accuracy after compilation - Guides / Hailo Guides - DeGirum

Regarding quantizing on your local system: yes, it is possible, and you do not need a Hailo device to compile.
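Once you run the Dataflow Compiler locally with your 4090, you can also put the heavier commands back into the model script before optimizing. The snippet below is only illustrative, reusing the alls script and calibration set from your snippet above; the values are examples and the valid optimization levels depend on your DFC version.

extra = """
model_optimization_flavor(optimization_level=2)
post_quantization_optimization(finetune, policy=enabled, learning_rate=0.00001)
"""
runner.load_model_script(alls + extra)
runner.optimize(calib_data)  # finetuning uses the local GPU here
hef = runner.compile()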