Hailortcli monitor with no information

Hi everyone,

I don’t know what I’m doing wrong, but I’m unable to get hailortcli monitor to work. I have installed the latest degirum and degirum_tools on an RPi5 with a Hailo8 HAT+.

Whatever is happening, I was only able to get hailortcli monitor to show data once; every other time I run it, the result is always empty.

I checked that the environment variables are in place (both HAILO_MONITOR=1 and HAILO_MONITOR_TIME_INTERVAL=500), but it does not make any difference.

I tried with a ‘remote’ script using the degirum service and also with a local script (making sure the degirum service was stopped before running the local script).

Hi @dario

Did you follow all the steps in this guide: How to make sure “hailortcli monitor” picks up your inference - Guides - DeGirum Community

Yes @shashi, that’s why I mentioned the environment variables (both HAILO_MONITOR=1 and HAILO_MONITOR_TIME_INTERVAL=500), and also why I described both situations: the one with the local script and the one using the service.

(degirum) pi@edgepi:~/DeGirum $ cat /etc/default/hailort_service
[Service]
HAILORT_LOGGER_PATH="/var/log/hailo"
HAILO_MONITOR=1
HAILO_MONITOR_TIME_INTERVAL=500
HAILO_TRACE=0
HAILO_TRACE_TIME_IN_SECONDS_BOUNDED_DUMP=0
HAILO_TRACE_SIZE_IN_KB_BOUNDED_DUMP=0
HAILO_TRACE_PATH=""

Here is the result of the env command, where you can see HAILO_MONITOR=1:

(degirum) pi@edgepi:~/DeGirum $ env
SHELL=/bin/bash
NO_AT_BRIDGE=1
PWD=/home/pi/DeGirum
LOGNAME=pi
XDG_SESSION_TYPE=tty
MOTD_SHOWN=pam
HOME=/home/pi
LANG=en_GB.UTF-8
VIRTUAL_ENV=/home/pi/DeGirum/degirum
SSH_CONNECTION=192.168.3.3 63758 192.168.3.22 22
HAILO_MONITOR=1

Hello @dario
Let’s get this figured out.

Most importantly, the environment variable must be set before launching either the Python script or the multi-process service.
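Because the monitor hook is read at process start, the timing matters. A minimal stdlib sketch (the inference script is hypothetical; a `python3 -c` one-liner stands in for it) showing that a child process only sees the variable if it is in the environment at launch:

```python
import os
import subprocess

# HAILO_MONITOR must be in the environment *before* the inference
# process launches; exporting it afterwards has no effect.
env = dict(os.environ, HAILO_MONITOR="1", HAILO_MONITOR_TIME_INTERVAL="500")

# Stand-in for your inference script: the child just echoes the
# variable back, proving it was inherited at launch time.
result = subprocess.run(
    ["python3", "-c", "import os; print(os.environ.get('HAILO_MONITOR'))"],
    env=env, capture_output=True, text=True,
)
print(result.stdout.strip())  # → 1
```

The same rule applies whether the launched process is your script, the AIServer, or the hailort service.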

You are using one of two modes:

  • Single-process: your Python script launches inference and the multi-process service is OFF.
  • Multi-process: the Hailo multi-process service (hailort.service) handles the device operations (always used if the multi-process service is running!)

In both cases, the Hailo monitor will only show something while inference is actually running, and will be blank/empty when it is not.

Also, the monitor must be launched in a different terminal shell/process from the one where you launch the script. In practice this means opening a second SSH connection just for the monitor.

This is how to make it work with your script:
If you are using @local or dg.LOCAL or localhost, turn off the Hailo multi-process service:
sudo systemctl stop hailort.service
Then set the variable:
export HAILO_MONITOR=1
and run your script:
python your_inference.py
Open a second terminal window and launch the monitor:
hailortcli monitor
You will see output in the monitor terminal window once inference actually starts running.

If you want to use the multi-process service:
Edit the /etc/default/hailort_service file to enable the variable, so it looks like:

[Service]
HAILORT_LOGGER_PATH="/var/log/hailo"
HAILO_MONITOR=1
HAILO_TRACE=0
HAILO_TRACE_TIME_IN_SECONDS_BOUNDED_DUMP=0
HAILO_TRACE_SIZE_IN_KB_BOUNDED_DUMP=0
HAILO_TRACE_PATH=""

Then:
sudo systemctl daemon-reload
sudo systemctl enable --now hailort.service
sudo systemctl restart hailort.service

and once again, launch the script in one terminal and the monitor in a second terminal.
The monitor will show output while the first script is performing inference.

In general, though, if you’re on a Pi, be aware that using the multi-process service adds overhead and slows down inference. I recommend disabling it and using a DeGirum AIServer if you need multi-process support (more than one Python script running inference).

Let me know if you need more help!

So what do I have to do when I’m using the DeGirum Server as a service but I need to run more than one inference at the same time? (and the source script is not on localhost)

Previously I had both the hailort service and the degirum service running…

Now following your instructions:

1. DeGirum service enabled and started (at boot)
2. HailoRT service stopped and disabled
3. SSH terminal 1 with HAILO_MONITOR=1
4. SSH terminal 2 with hailortcli monitor
5. SSH terminal 3 (on another Raspberry Pi, starting the script)

And still no results in hailortcli monitor.

*** EDITED: it is working with local scripts, but not when the script is run from another device.

Hello @dario, let me clear this up:

In general, the monitoring process will always look like this:

  1. The process responsible for running the inference needs to have the HAILO_MONITOR=1 variable set BEFORE the process launches.
  2. A second terminal opens hailortcli monitor and will see output only while process 1 is actually performing inference.

So no matter what method you are using for running inference, whether it is through a DeGirum AIServer, the Hailo multi-process service, or just local inference (e.g. @local or dg.LOCAL as inference_host_address), you must have HAILO_MONITOR=1 set in the corresponding process before inference launches.
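One way to double-check this on Linux (a sketch; `sleep` stands in here for your long-running inference or server process) is to read the environment a process was actually launched with from /proc:

```shell
# Launch a stand-in long-running process with the variable set
HAILO_MONITOR=1 sleep 30 &
pid=$!

# /proc/<pid>/environ holds the environment at launch time, so this
# prints the line only if the variable was set before the launch
tr '\0' '\n' < "/proc/$pid/environ" | grep '^HAILO_MONITOR='

kill "$pid"
```

If the grep prints nothing for your AIServer or script PID, that process was started without the variable and the monitor will stay empty.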

So let’s go through the possibilities.

Do you want multi-process capability?

You will want multi-process capability ONLY if you are running more than one Python inference script at the same time.

If yes, you have two options (do not use both):

  • Hailo multi-process service
  1. To properly use the multi-process service, set inference_host_address to @local or dg.LOCAL in your Python script(s).
  2. Ensure the /etc/default/hailort_service file enables the variable HAILO_MONITOR=1.
  3. Reload, enable, and start hailort.service:
    sudo systemctl daemon-reload
    sudo systemctl enable --now hailort.service
    sudo systemctl restart hailort.service
  • DeGirum AIServer
  1. Turn off multi-process service: sudo systemctl stop hailort.service
  2. The AIServer is a launched process, and it must have the environment variable HAILO_MONITOR=1 set before it launches! Remember, you can launch the AIServer either with the degirum server/python3 -m degirum.server commands OR as a system service using /etc/systemd/system/degirum.service + systemctl commands on degirum.service.
  3. If you launch it the simple way through the CLI, set the variable BEFORE launching.
    If you want the system service, then you must set the variable inside the /etc/systemd/system/degirum.service file, e.g. by adding the line Environment="HAILO_MONITOR=1" under [Service].

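For the system-service route, the unit file would look roughly like this (a sketch, not a definitive unit: the Description, ExecStart path, and Restart policy are assumptions based on the virtualenv shown earlier in this thread; the Environment line is the only required change):

```ini
# /etc/systemd/system/degirum.service (sketch)
[Unit]
Description=DeGirum AI Server

[Service]
# Must be set here so the AIServer process inherits it at launch
Environment="HAILO_MONITOR=1"
ExecStart=/home/pi/DeGirum/degirum/bin/python3 -m degirum.server
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After editing, run sudo systemctl daemon-reload and sudo systemctl restart degirum.service so the new environment takes effect.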
If no, then simply

  1. Turn off multi-process service: sudo systemctl stop hailort.service
  2. Set inference_host_address to @local or dg.LOCAL in your Python script; this also gives you maximum performance.

Once you have properly launched the inference process, another terminal will see output in hailortcli monitor when inference happens.

My recommendation: turn off and disable the multi-process service.
Just use either @local for one script (multiple models will work just fine in one script) or launch an AIServer the simple way through the CLI.

Hopefully this helps.

That was the key, thank you very much :star_struck:
