EventDetector(), EventNotifier() and gotify

Hi @shashi , @vladk

Finally I was able to narrow down the existing problem and reproduce it with several scripts and devices.

For some reason the clip files are no longer being saved, neither to the local directory nor to the s3/minio server. This makes the email notification fail, but only when you have the {url} placeholder in the email body template…

I kept digging and running tests, and I can confirm that the temporary files are being created in /tmp, before they are saved/uploaded to the final destination and the notification is sent:

livestream # ls -lta /tmp/tmpu9iscdol
total 276
drwx------   2 root root   4096 Oct 30 18:16 .
-rw-r--r--   1 root root 211504 Oct 30 18:16 -0000010.mp4
-rw-r--r--   1 root root  58562 Oct 30 18:16 -0000010.json

Do you have any messages in stdout like “Job … is not completed in … sec”?
If yes, then check what FPS your camera reports. You can do this:

import cv2

src = cv2.VideoCapture("rtsp://user:pass@camera-ip:554/")
print(src.get(cv2.CAP_PROP_FPS))

Replace "rtsp://user:pass@camera-ip:554/" with real camera designator.

If this snippet prints something like 100, then this is the point of trouble: the clip saving timeout is computed as 2 * clip-duration, and clip-duration is computed as clip-size / FPS. If FPS is artificially high (like 100; many RTSP cameras report such a wrong FPS), then a timeout will happen on every clip save.
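To illustrate the arithmetic with made-up numbers:

# Illustrative only: how an inflated FPS shrinks the effective save timeout
clip_size_frames = 100          # frames in the clip
reported_fps = 100              # bogus FPS reported by some RTSP cameras
real_fps = 10                   # actual stream rate

clip_duration_s = clip_size_frames / reported_fps   # 1.0 s
save_timeout_s = 2 * clip_duration_s                # 2.0 s

real_save_time_s = clip_size_frames / real_fps      # 10.0 s actually needed
print(save_timeout_s < real_save_time_s)            # True -> every clip save times out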

Here is the result:

But I don’t understand the point of this test. The temporary files are created, but not the ‘real ones’; if that were the problem, not even the temporary file would be created, would it?

The script was working perfectly, taking captures and sending emails without problems, until the update to 0.22.4. As said in the last post, you can reproduce it very easily with any script, even with the nvr script; you only need to add the {url} placeholder.

@vladk I forgot to mention that in my script clip_timeout_duration was hardcoded to 10, and I made clip_duration a configurable parameter because I was told that clip_duration = 1 would produce a ‘jpg’ image and any duration greater than that would produce an ‘mp4’ video file.

I don’t know if this helps.

I changed the input source in my script to the same video file as in the smart_nvr example, and the moment the person shows up on screen the script crashes with this error:

dgstreams.Composition(cam_source >> detector >> streamer).start()
  File "/usr/local/lib/python3.10/dist-packages/degirum_tools/streams/base.py", line 697, in start
    self.wait()
  File "/usr/local/lib/python3.10/dist-packages/degirum_tools/streams/base.py", line 773, in wait
    raise Exception(errors)
Exception: Error detected during execution of VideoStreamerGizmo:
  <class 'degirum.exceptions.DegirumException'>: could not broadcast input array from shape (22,1656,3) into shape (22,376,3)

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/degirum_tools/streams/base.py", line 664, in gizmo_run
    gizmo.run()
  File "/usr/local/lib/python3.10/dist-packages/degirum_tools/streams/gizmos.py", line 521, in run
    send_frame(data)
  File "/usr/local/lib/python3.10/dist-packages/degirum_tools/streams/gizmos.py", line 506, in send_frame
    streamer.write(get_img(data))
  File "/usr/local/lib/python3.10/dist-packages/degirum_tools/streams/gizmos.py", line 467, in get_img
    frame = inference_meta.image_overlay
  File "/usr/local/lib/python3.10/dist-packages/degirum_tools/result_analyzer_base.py", line 181, in image_overlay
    image = analyzer.annotate(self, image)
  File "/usr/local/lib/python3.10/dist-packages/degirum_tools/notifier.py", line 745, in annotate
    return put_text(
  File "/usr/local/lib/python3.10/dist-packages/degirum_tools/ui_support.py", line 292, in put_text
    image[
ValueError: could not broadcast input array from shape (22,1656,3) into shape (22,376,3)
Exception ignored in: <function ResultAnalyzerBase.__del__ at 0x7fdcd4fc4b80>
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/degirum_tools/result_analyzer_base.py", line 140, in __del__
  File "/usr/local/lib/python3.10/dist-packages/degirum_tools/notifier.py", line 765, in finalize
  File "/usr/local/lib/python3.10/dist-packages/degirum_tools/video_support.py", line 817, in join_all_saver_threads
  File "/usr/local/lib/python3.10/dist-packages/degirum_tools/video_support.py", line 793, in _save_clip
  File "/usr/lib/python3.10/copy.py", line 92, in copy
ImportError: sys.meta_path is None, Python is likely shutting down
Exception ignored in: <function NotificationServer.__del__ at 0x7fdcd46c9ea0>
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/degirum_tools/notifier.py", line 463, in __del__
  File "/usr/local/lib/python3.10/dist-packages/degirum_tools/notifier.py", line 459, in terminate
  File "/usr/local/lib/python3.10/dist-packages/degirum_tools/notifier.py", line 207, in _process_response_queue
  File "/usr/lib/python3.10/multiprocessing/queues.py", line 135, in get_nowait
  File "/usr/lib/python3.10/multiprocessing/queues.py", line 122, in get
ImportError: sys.meta_path is None, Python is likely shutting down

Hmm, I just reviewed all changes made in degirum_tools since 0.22.4 - nothing that should affect that behavior. Also I cannot reproduce it on my side. That is why I am asking questions.

Files in the /tmp dir indicate that the notification event was generated in the gizmo pipeline. But it looks like the notification server, running as a separate process, did not see this file in a timely manner and that job timed out.

Yes, this is a known bug of put_text: when it tries to print a long string on the image, it fails. We have already fixed it - the fix will be available in the next release.
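For the curious, the failure is a plain numpy broadcasting error: the rendered text banner is wider than the frame, so the slice assignment fails. A minimal reproduction (not the actual put_text code):

import numpy as np

image = np.zeros((100, 376, 3), dtype=np.uint8)    # frame only 376 px wide
banner = np.zeros((22, 1656, 3), dtype=np.uint8)   # rendered text 1656 px wide

# assigning the wider banner into the narrower slice raises the same ValueError
image[0:22, 0:376] = banner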

@dario, one more issue: the local path defined in the endpoint parameter must exist on your local filesystem prior to running the script. The code in degirum_tools differentiates the local case from the cloud case by the existence of the local path:

        if os.path.exists(config.endpoint):
            # <<<< local filesystem case
            self._client = _LocalMinio(config.endpoint)
        else:
            # <<<< cloud S3 storage case
            self._client = minio.Minio(
                self._config.endpoint,
                access_key=self._config.access_key,
                secret_key=self._config.secret_key,
                secure=True,  # Set to False if not using HTTPS
            )
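So if you intend the local-filesystem case, just make sure the directory exists before constructing the storage config; for example (the path is only an illustration):

import os
import degirum_tools

local_endpoint = "/home/pi/detection_clips"     # example path, use your own
os.makedirs(local_endpoint, exist_ok=True)      # must exist, otherwise the S3/cloud branch is taken

clip_storage_config = degirum_tools.ObjectStorageConfig(
    endpoint=local_endpoint,
    access_key="",    # not needed for local storage
    secret_key="",    # not needed for local storage
    bucket="clips",   # subdirectory name for local storage
)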

Thanks @vladk, I finally found the problem… When I set clip_duration to ‘1’ (intending to get a .jpg file) the script is not able to create the clip file, so the notification with the {url} placeholder fails…

How do I make the clip file be a .jpg file instead of .mp4? Setting ‘clip_duration’ to 1 alone does not look to be enough…
Also, do I need to do something for the message to be interpreted as markdown? Right now it is treated as html…

*EDIT
Found something interesting:

When the message is in a single line like:
message="Object Detected!!! <br/> Time: **{time}** <br/> You can access the detection footage here: <br/> [Download Detection Video]({url})"

It is correctly treated as markdown:

But if the message is in multiline format like this:

      message="""
              Object Detected!!! <br/> Time: **{time}** <br/> You can access the detection footage here: <br/> [Download Detection Video]({url})
              """,

or like this:

      message="""
              Object Detected!!! <br/> 
              Time: **{time}** <br/> 

              You can access the detection footage here: <br/> 

              [Download Detection Video]({url})
              """,

it is treated as plaintext:

Hi @dario ,

Yes, this is because the notification job timeout is calculated based on the clip duration, which is now 1. So that timeout becomes very small and the notification job fails with a timeout. We will fix it in the next release.
As a temporary workaround, you may specify the notification_timeout_s=3.0 argument in the EventNotifier constructor.
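For example (other arguments elided; this just shows the workaround in context, the title and event name are examples):

import degirum_tools

event_name = "object_detected"   # the event name your EventDetector triggers

notifier = degirum_tools.EventNotifier(
    "AI Detection Service - Object Detected",
    event_name,
    clip_save=True,
    clip_duration=1,              # 1 frame -> .jpg attachment
    notification_timeout_s=3.0,   # temporary workaround: give the notification job enough time
    # ... notification_config, storage_config, etc. as in your script ...
)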

Here we have no control: this is how the Apprise package interprets the message body. We just pass the string you provided directly to the Apprise notify() method as the body argument. All we did to enable markdown is set body_format = apprise.NotifyFormat.MARKDOWN.
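For reference, this is roughly what happens on the Apprise side (the gotify URL below is just a placeholder):

import apprise

apobj = apprise.Apprise()
apobj.add("gotify://gotify.example.com/your_app_token")  # placeholder notification target

apobj.notify(
    title="AI Detection Alert",
    body="Object Detected!!! <br/> Time: **12:00** <br/> [Download Detection Video](https://example.com/clip.mp4)",
    body_format=apprise.NotifyFormat.MARKDOWN,  # the setting degirum_tools passes to enable markdown
)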

Hi @dario , one more note:
when you attach analyzers to the model with degirum_tools.attach_analyzers(model, […]), and one of the analyzers is EventNotifier, then, in order to exit cleanly from your script, you need to call degirum_tools.attach_analyzers(model, None) at the very end of your script. This ensures that the .finalize() method is called for every analyzer. For most analyzers, .finalize() does nothing, but for EventNotifier, .finalize() stops the notification server process. If it is not called, it will prevent your script from exiting.

You may either call notifier.finalize() directly, or you may call degirum_tools.attach_analyzers(model, None) so .finalize() will be called for all analyzers, or you may use AiAnalyzerGizmo instead of attaching analyzers to the model. In this case, AiAnalyzerGizmo will call .finalize() on composition stop.
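In code, the clean-exit pattern would look roughly like this (analyzer and gizmo names are placeholders):

# attach analyzers as usual
degirum_tools.attach_analyzers(model, [tracker, zone_detector, notifier])

try:
    dgstreams.Composition(cam_source >> detector >> streamer).start()
finally:
    # detaching calls .finalize() on every attached analyzer;
    # for EventNotifier this stops the notification server process,
    # otherwise the script may hang on exit
    degirum_tools.attach_analyzers(model, None)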

I know, and I’m really grateful that you take my comments and suggestions into consideration.
It’s the second time you’ve adjusted the code based on something I said or mentioned (like with the RTMP compatibility), which is incredible and makes one feel like part of the team! :grinning_face_with_smiling_eyes:

I’m also extremely grateful that you take the time to read and help me organize my messy ideas — seriously, thank you all! :folded_hands:

I just wanted to confirm that behavior. If you can confirm it, maybe it would be good for everyone to know — perhaps by pinning the comment or adding a short note to the documentation:

By default, message formatting uses Markdown.
However, Markdown is applied only when the message string is declared using single or double quotes on a single line.

If the message string is defined using triple quotes (a multi-line string), it will use plain text formatting unless you remove Python indentation and start all lines at column 0, regardless of how deeply nested the message variable is in your code.
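If that is confirmed, a simple workaround (plain Python, nothing specific to degirum_tools) would be to strip the indentation before passing the string, for example with textwrap.dedent:

import textwrap

message = textwrap.dedent("""\
    Object Detected!!! <br/>
    Time: **{time}** <br/>

    You can access the detection footage here: <br/>
    [Download Detection Video]({url})
    """)  # dedent removes the common leading whitespace, so every line starts at column 0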

Right now I’m still running more tests because I wanted to figure out why, when using triple quotes, some notifications arrived formatted as HTML while others arrived as raw plaintext.

When I first noticed the reason the message was not being formatted as Markdown when using triple-quoted strings, I was just checking the different ways I could insert a line break in the notification message.

During one of these tests, I tried using a multi-line string, and interestingly, the first few times it came through formatted as HTML, not plain text. At the time I didn’t pay much attention to it because I was trying to receive a message in Markdown, but the notification always arrived with markdown uninterpreted (I was using triple quotes).

Eventually, I was able to confirm that Markdown is only interpreted if the message string is a single line declared using single or double quotes. If the string is defined with triple quotes, it comes through as HTML or plain text.

Now I need to understand what caused the message to sometimes be formatted as HTML and other times as plain text when using triple quotes.

You’re a genius!!! Now it generates ‘.jpg’, thanks!!!
Is this argument only needed for jpg? Do I have to remove it when a video clip is selected?

I added it to my script, but I did not notice any difference, maybe because I’m starting/stopping the script externally; since I do not know the duration of the video source, I based it on triggered events…
In broad terms, when I receive a ‘start inference’ trigger the program checks the database for the related inference and notification configuration, like which model to use, clip duration, notification destination[s], etc… This info is then used to popen the script, and I save the pid to be able to kill it later…
Then, when I receive the ‘stop inference X’ trigger, the program kills the script and all its children to stop the inference and the outgoing stream.
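For context, the start/stop mechanism looks roughly like this (simplified; detect.py, the arguments, and the camera URL are placeholders):

import os
import signal
import subprocess

rtsp_url = "rtsp://user:pass@camera-ip:554/"   # placeholder camera URL

# 'start inference' trigger: launch the detection script in its own process group
proc = subprocess.Popen(
    ["python3", "detect.py", "--input", rtsp_url, "--clip_duration", "10"],
    start_new_session=True,    # children end up in the same process group
)
saved_pid = proc.pid           # stored so the job can be stopped later

# 'stop inference' trigger: kill the script and all its children
os.killpg(os.getpgid(saved_pid), signal.SIGTERM)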

About the formatting:

I found more differences in formatting behaviour when using triple quotes; I was trying to get the notification to be both multiline and markdown/html.

What I found is that for that to be possible, the lines of the multiline string should start at column 0, even though this ‘breaks’ Python indentation conventions…
So to get a good-looking email using both html formatting and markdown, the text of the message must start at column 0:
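For example, a message like this (the lines start at column 0, no matter how deeply the call itself is nested) arrives with both the HTML and the Markdown interpreted:

    notifier = degirum_tools.EventNotifier(
        "AI Detection Service - Object Detected",
        event_name,
        message="""<h2>AI Detection Alert</h2> <br/>
Object Detected!!! <br/>
Time: **{time}** <br/>
[Download Detection Video]({url})""",
        # ... other arguments ...
    )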


As a curiosity, you can combine several behaviours, like:
HTML and markdown in the first line(s): the text of the message must start on the same line as the opening triple quote; subsequent indented lines will be sent in plaintext formatting:

The same can be achieved by putting each line you want in html/markdown at column 0 (the h2 line):

If the text of the following lines does not start at column 0, then it is treated as raw plaintext.


If the first line is also indented, then the whole body will be plaintext (notice the <br/> being printed within the body):


#1: Yes. #2: Yes.

But in the next release we will fix it, so you will not need to specify it in any case.

Hi everyone!!!

Happy new year!!!

It’s been a long time since I last wrote, but I’ve finally gotten back to work on this project that I’d kind of abandoned. Precisely because of all this time that’s passed, I’ve forgotten some things.
I was wondering if anyone could remind me which option removes the text from the image attached to the notification email. I wanted the image/video to only show the bounding box corresponding to the detection, but as you can see, the notification text is also drawn.

… = EventNotifier(… , show_overlay=False)

hi @vladk ,

Thanks for the answer, but with EventNotifier(....., clip_embed_ai_annotations=True, show_overlay=False, ...) I lost everything related to the inference in the file stored in S3: the bounding box around the clock and the blue box with the ‘raw’ text of the notification. Meanwhile the stream does in fact show both the blue box with the text and the red bounding box, but the blue box is only seen for a few frames at the beginning of the stream:

What I want to achieve is to keep the file stored in S3 but get rid of the blue box with the email text, while keeping the red bounding box around the clock in the saved image…

EDIT
It looks like the problem with the annotation in the file occurs when the inference is not constant and the bbox in the stream seems to be blinking or flickering around the clock; in that case the saved file does not contain the bbox.

Hi @dario ,

I tried examples/applications/smart_nvr.ipynb, where I added one line to the notifier = degirum_tools.EventNotifier(...) construction: show_overlay=False.

After that, the notification message in the bottom left corner was gone in both the live video and the saved clip, while other annotations remained. I believe this is your desired behavior.

You can share your code snippet for my review, or you can take a look at that smart_nvr.ipynb example and compare it with your code.

Yes, that is exactly what I wanted, to get rid of the blue box, and your change did in fact make that happen. But the image I was downloading had neither the blue box nor the bbox around the clock, at least on the first tries; on the fourth attempt the file had the bbox, so I kept investigating, because on the next try the file was again without the bounding box.

After that I changed the ‘scene’, and the attached image always came with the bbox around the detected object.

So I think the cause of the image being saved without the bbox could be ‘flickering’ in the detection. I mean, when the detection is stable and you can ‘see’ the bounding box in the stream the whole time the object is being detected, the file attached to the notification contains the bbox; but when the detection is not stable (you see the bbox around the detected object appear and disappear multiple times in the same second), the saved detection file does not contain the bbox. It is as if the screenshot was taken in one of the milliseconds when the bbox is not there…

import degirum as dg, degirum_tools, time
from degirum_tools import streams as dgstreams
import argparse
import config  # local settings module providing S3_SERVER, ACCESS_KEY, SECRET_KEY used below

parser = argparse.ArgumentParser(description="Stream video with object detection.")

parser.add_argument('--input', type=str, default="rtmp://input.server/live/livestream", help='The video source URL.')
parser.add_argument('--output', type=str, default="rtmp://output.server/live/livestream", help='The output URL path.')
parser.add_argument('--model_name', type=str, default="yolo11s_coco--640x640_quant_hailort_hailo8_1", help='The model chosen to do the inference')
parser.add_argument('--hw_location', type=str, default="192.168.20.2:8778", help='IP of AI server')
parser.add_argument('--confidence', type=float, default=0.5, help='Confidence threshold value')
parser.add_argument('--classes', type=str, default="people", help='classes label to search for')
parser.add_argument('--model_zoo_url', type=str, default="aiserver://home/pi/DeGirum/zoo", help='URL path of the model_zoo.')
parser.add_argument('--notification_config', type=str, default="", help='Apprise configuration string')
parser.add_argument('--device', type=str, default="HAILORT/HAILO8", help='Neural Chip type')
parser.add_argument('--clip_save', action='store_true', help='Enable clip saving')
parser.add_argument('--clip_duration', type=int, help='Clip duration in seconds')
parser.add_argument('--bucket_name', type=str, help='Bucket name for saving clips')

args = parser.parse_args()


dg.log.DGLog.set_verbose_state('DEBUG')

hw_location=args.hw_location
model_name = args.model_name
model_zoo_url= args.model_zoo_url
video_source = args.input
video_output= args.output
#classes = set(args.classes)
classes = set(args.classes.split(','))
device_type = args.device
confidence = args.confidence



model_manager = dg.connect(
    inference_host_address=hw_location,
    zoo_url = model_zoo_url
)

model = model_manager.load_model(
    model_name=model_name,
    device_type=device_type,
    output_confidence_threshold=confidence,
    input_pad_method="letterbox",
    image_backend='opencv',
    overlay_color=[255,0,0],
    output_class_set=classes
)

anchor = degirum_tools.AnchorPoint.CENTER

# create object tracker
tracker = degirum_tools.ObjectTracker(
    class_list=classes,
    track_thresh=0.35,
    track_buffer=100,
    match_thresh=0.9999,
    trail_depth=20,
    anchor_point=anchor,
    show_only_track_ids = True,
    #show_overlay = True,
    annotation_color = [255,0,0]
)


cam_source = dgstreams.VideoSourceGizmo(video_source)

#
# create analyzers:
#

event_name = "object_detected"

zone_detector = degirum_tools.EventDetector(
      f"""
      Trigger: {event_name}
      when: ObjectCount
      is greater than: 0
      during: [10, frames]
      for at least: [90, percent]
      """,
      show_overlay=False,
)

if args.notification_config:
    # clip storage config
    clip_storage_config = degirum_tools.ObjectStorageConfig(
        endpoint=config.S3_SERVER,  # path to Server
        access_key=config.ACCESS_KEY,  # not needed for local storage
        secret_key=config.SECRET_KEY,  # not needed for local storage
        bucket=args.bucket_name,  # subdirectory name for local storage
    )

    holdoff_sec = 3.0
    # event notifier
    notifier = degirum_tools.EventNotifier(
      "AI Detection Service - Object Detected",
      event_name,
      message="""<h1>AI Detection Alert System:</h1>  <br/> <br/>
Object Detected!!! <br/>
Time: **{time}** <br/>
--- <br/>
You can access the detection footage here: <br/>
Download: [Evidence file]({url})""",
      holdoff=holdoff_sec,
      notification_config=args.notification_config,
      clip_save=args.clip_save,
      clip_duration=args.clip_duration,
      #clip_sub_dir="{time}",
      clip_pre_trigger_delay=args.clip_duration // 2,
      clip_embed_ai_annotations=True,
      show_overlay=False,
      storage_config = clip_storage_config,
      notification_timeout_s=3.0
    )

    degirum_tools.attach_analyzers(model, [tracker, zone_detector, notifier])
else:
    degirum_tools.attach_analyzers(model, [tracker])

detector = dgstreams.AiSimpleGizmo(model)

# Show in the stream but not in the clip -> show_ai_overlay=True, EventDetector.show_overlay=False, clip_embed_ai_annotations=False

streamer = dgstreams.VideoStreamerGizmo(video_output, show_ai_overlay=True)

dgstreams.Composition(cam_source >> detector >> streamer).start()

You may try to lower the confidence threshold…

… or save a short video clip with, say, 10 frames…
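For instance, in your script that could look like this (the values are just examples):

# lower the confidence threshold so the detection is less likely to flicker out
model = model_manager.load_model(
    model_name=model_name,
    device_type=device_type,
    output_confidence_threshold=0.3,   # example value, lower than the original 0.5
    # ... other arguments unchanged ...
)

# ... and/or save a short clip instead of a single-frame .jpg
notifier = degirum_tools.EventNotifier(
    "AI Detection Service - Object Detected",
    event_name,
    clip_save=True,
    clip_duration=10,                  # ~10 frames, so at least some of them carry the bbox
    # ... other arguments unchanged ...
)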