Providing the result from a camera as a stream for e.g. Home Assistant

In my current project I provide the stream from a camera connected to the Pi 5 (via CSI), after processing it with the result overlays, as a stream displayed on the HA dashboard. Now I am wondering if there might be a better solution than the one I use at the moment (it works well so far).

from flask import Flask, Response
import cv2
from picamera2 import Picamera2

app = Flask(__name__)

# CSI camera on the Pi 5
picam2 = Picamera2()
picam2.configure(picam2.create_video_configuration(main={"size": (640, 480), "format": "RGB888"}))
picam2.start()

def gen_frames():
    while True:
        frame = picam2.capture_array()
        frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

        # run_inference() is my own function: model inference plus result overlays
        processed_frame = run_inference(frame_rgb)

        ret, jpeg = cv2.imencode('.jpg', processed_frame)
        if not ret:
            continue
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + jpeg.tobytes() + b'\r\n')


@app.route('/video_feed')
def video_feed():
    # MJPEG stream: a multipart response where every part is one JPEG frame
    return Response(gen_frames(),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, threaded=True)
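
For reference, a client can split that multipart stream back into individual JPEG frames by cutting on the boundary string. A minimal sketch of the parsing side (the helper name is mine, not part of any library):

```python
def split_mjpeg(buffer: bytes, boundary: bytes = b"--frame") -> list:
    r"""Split a buffered MJPEG multipart byte stream into raw JPEG payloads.

    Each part produced by gen_frames() above looks like:
        --frame\r\nContent-Type: image/jpeg\r\n\r\n<jpeg bytes>\r\n
    """
    frames = []
    for part in buffer.split(boundary):
        # the JPEG payload starts right after the blank line ending the part headers
        header_end = part.find(b"\r\n\r\n")
        if header_end == -1:
            continue
        payload = part[header_end + 4:].rstrip(b"\r\n")
        if payload:
            frames.append(payload)
    return frames
```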

Hi @core-stuff,

You may try the following approach:

  1. Run model.predict_batch() for efficient pipelined inference
  2. Use ffmpeg to encode video stream to RTSP
  3. Use MediaMTX server to re-stream RTSP stream to outside using WebRTC
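
For intuition, step 2 amounts to launching ffmpeg as a subprocess and piping raw frames into its stdin. The command layout below is an illustrative assumption, not the actual degirum_tools implementation:

```python
def build_ffmpeg_rtsp_command(rtsp_url: str, width: int, height: int, fps: int = 30) -> list:
    """Assemble an ffmpeg command that reads raw BGR frames from stdin
    and publishes them as an H.264 RTSP stream (hypothetical sketch)."""
    return [
        "ffmpeg",
        "-f", "rawvideo",            # input: raw frames on stdin
        "-pix_fmt", "bgr24",
        "-s", f"{width}x{height}",
        "-r", str(fps),
        "-i", "-",                   # read from stdin
        "-c:v", "libx264",
        "-preset", "ultrafast",      # favor low latency over compression
        "-tune", "zerolatency",
        "-f", "rtsp",
        "-rtsp_transport", "tcp",
        rtsp_url,
    ]

cmd = build_ffmpeg_rtsp_command("rtsp://localhost:8554/stream", 640, 480)
# A streamer would then do roughly:
#   proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
#   proc.stdin.write(frame.tobytes())  # once per frame
```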

Fortunately, all necessary components are implemented in degirum_tools.
The script below demonstrates how to do it.

In order to use it you need to install:

  1. degirum_tools: pip install -U degirum_tools
  2. ffmpeg: sudo apt install ffmpeg
  3. MediaMTX: refer to the bluenviron/mediamtx GitHub repository (a ready-to-use SRT / WebRTC / RTSP / RTMP / LL-HLS media server and proxy) for installation instructions. Just unzip it into /usr/local/bin.

You may want to adjust hw_location, model_zoo_url, model_name, and degirum_tools.get_token() to match your needs.

When you run this script, a FastAPI-based HTTP server is launched on port 8080. Just open http://localhost:8080 in your browser to see the main page. The WebRTC stream is served at the localhost:8888/stream URL, and the main page at localhost:8080 embeds it.

The script:

import degirum_tools, degirum as dg
from nicegui import ui, app, context
import threading

#
# Settings
#
hw_location = "@cloud"  # adjust to "@local" for local inference
model_zoo_url = "degirum/public"  # URL for the model zoo
model_name = "yolov8n_relu6_widerface_kpts--640x640_quant_n2x_orca1_1"  # Name of the model to use
stream_url_path = "stream"  # URL path for RTSP streaming


def frame_source():
    from picamera2 import Picamera2
    picam2 = Picamera2()
    picam2.configure(picam2.preview_configuration(main={"format": "BGR888"}))
    picam2.start()
    try:
        while True:
            yield picam2.capture_array()
    finally:
        picam2.stop()


def run_pipeline():
    """Run the video processing pipeline."""

    # Load the model
    model = dg.load_model(
        model_name, hw_location, model_zoo_url, degirum_tools.get_token()
    )

    rtsp_streamer = None
    # run the model on frames from the video source
    for result in model.predict_batch(frame_source()):
        if rtsp_streamer is None:
            rtsp_streamer = degirum_tools.VideoStreamer(
                f"rtsp://localhost:8554/{stream_url_path}", *result.image.shape[1::-1]
            )

        # and send processed frames to the RTSP stream
        rtsp_streamer.write(result.image_overlay)


media_server = None  # media server instance for RTSP streaming


@app.on_startup
def startup():
    global media_server
    media_server = degirum_tools.MediaServer()
    # start AI pipeline in a separate thread
    threading.Thread(target=run_pipeline, daemon=True).start()


@app.on_shutdown
def cleanup():
    media_server.stop()  # stop the media server


@ui.page("/")
def main_page():

    # this is just a label
    ui.label("Live Stream").classes("text-xl font-bold mb-4")

    # define WebRTC video stream URL
    host = context.client.request.headers.get("host", "localhost")
    stream_url = f"http://{host.split(':')[0]}:8888/{stream_url_path}"

    # and this is the video element that will display the WebRTC video stream
    ui.element("iframe").props(f'src="{stream_url}"').classes(
        "w-[90%] mx-auto h-[calc(90vh)]"
    )


# Run the NiceGUI application
try:
    ui.run(title="Streaming Demo", workers=1, reload=False, show=False)
except KeyboardInterrupt:
    print("Shutting down the application...")

@vladk thank you for this approach!
It seems there is something that I am missing:
The first error I got was at this point:

picam2.configure(picam2.preview_configuration(main={"format": "BGR888"}))

If I write it like this, the code execution gets a bit further:

picam2.configure(picam2.create_video_configuration(main={"size": (640, 480), "format": "RGB888"}))

but all in all I end up at the same point, with this error message:

Starting VideoStreamer with URL: rtsp://localhost:8554/stream
Image shape: (480, 640, 3)
Exception in thread Thread-1 (run_pipeline):
Traceback (most recent call last):
  File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.11/threading.py", line 975, in run
    self._target(*self._args, **self._kwargs)
  File "/home/pi/projects/degirum-hailo-1/main_11.py", line 47, in run_pipeline
    rtsp_streamer.write(result.image_overlay)
  File "/home/pi/degirum/lib/python3.11/site-packages/degirum_tools/video_support.py", line 864, in write
    self._process.stdin.write(img.tobytes())
    ^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'stdin'

MediaMTX is running and ffmpeg is installed, so could you give me a hint as to where I could fix that, please?

Hmm, a very strange problem. The code of the write() method is as follows:

    def write(self, img: ImageType):

        if not self._process:
            return

        im_sz = image_size(img)

        if (self._width, self._height) != im_sz:
            img = resize_image(img, self._width, self._height)

        try:
            self._process.stdin.write(img.tobytes())
        except (BrokenPipeError, IOError):
            self.stop()

As you can see, it first checks that self._process is not None and simply returns if it is, so the line self._process.stdin.write(img.tobytes()) is reached only when self._process is not None. The error states that a None object has no attribute stdin, meaning that the self._process object is None.

Please make sure you are using the latest degirum_tools package, ver. 0.18.0.
You can run pip install -U degirum_tools to ensure that.

I double-checked; I have degirum_tools ver. 0.18.0 installed.

Name: degirum_tools
Version: 0.18.0
Summary: Tools for PySDK
Home-page: 
Author: DeGirum
Author-email: 
License: MIT
Location: /home/pi/degirum/lib/python3.11/site-packages
Requires: apprise, degirum, ffmpeg-python, ffmpegcv, ipython, jsonschema, numpy, opencv-python, pafy, pillow, psutil, pycocotools, python-dotenv, pyyaml, requests, scipy, typing-extensions, youtube-dl
Required-by: 

Does it require a special configuration for MediaMTX, or is it a problem with the line that I changed?

picam2.configure(picam2.preview_configuration(main={"format": "BGR888"}))

to

picam2.configure(picam2.create_video_configuration(main={"size": (640, 480), "format": "RGB888"}))

Hi @core-stuff
We are still trying to replicate the error on our side. We will keep you posted.


Hi @core-stuff
So far, we are unable to replicate the error on our side, primarily due to the complex setup needed to get exactly the same system as yours. To narrow down the problem, do you have another camera source, say a USB camera or an RTSP stream? This will help us eliminate the picam as the source of the error. Also, did you test the picam independently, without the extra Home Assistant setup? This will help us narrow down where the issue could be.


Hi @shashi, thank you for your efforts. Yes, it’s a bit odd. The first point is, I wrote that the code would not run with this line:


picam2.configure(picam2.preview_configuration(main={"format": "BGR888"}))

I had to replace it with this line:

picam2.configure(picam2.create_video_configuration(main={"size": (640, 480), "format": "RGB888"}))

so I thought it might be a version conflict, but now I have built a face recognition pipeline where preview_configuration works, so I have to figure out why it doesn’t work with your script; I think then I will get a step further towards getting it running.
To your other question: I will try it with an ESP32 webstream, as I don‘t have a USB cam. And to your other point: I ran the script without any HA integration in a test environment. I‘ll keep you updated.


Hi @core-stuff

At last, we have some good news (hopefully). We believe we know what is causing the different errors you are seeing.

  1. picam2.configure(picam2.preview_configuration(main={"format": "BGR888"}))
    This is the easy one :slight_smile: The actual command should be picam2.configure(picam2.create_preview_configuration(main={"format": "BGR888"})). Note that the word create was missing, and hence you probably would
    have seen an error like ConfigurationObject not callable.
  2. Regarding the error AttributeError: 'NoneType' object has no attribute 'stdin': we believe this is due to the fact that MediaMTX is not running. We are aware that you mentioned MediaMTX is running, but we could replicate this exact error when we stopped the MediaMTX server.
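
A quick way to check from Python whether anything is accepting connections on the RTSP port at all is a plain TCP probe. This is a hedged sketch (not part of degirum_tools), assuming MediaMTX's default RTSP port 8554:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds --
    a sanity check that a server (e.g. MediaMTX) is listening there."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# MediaMTX serves RTSP on port 8554 by default
if not is_port_open("localhost", 8554):
    print("Nothing is listening on rtsp://localhost:8554 -- is MediaMTX running?")
```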

Now, how can we independently confirm whether MediaMTX is running? We can use the script below:

#!/bin/bash

set -e

# Define RTSP stream URL
RTSP_URL="rtsp://localhost:8554/teststream"

echo "=== Starting test ==="

# 1. Stream test video to MediaMTX using FFmpeg
echo "Streaming test video to MediaMTX..."
ffmpeg -re -f lavfi -i testsrc=size=1280x720:rate=30 \
    -c:v libx264 -f rtsp -rtsp_transport tcp "$RTSP_URL" \
    -t 10 -loglevel warning &

# Give the stream a moment to start
sleep 3

# 2. Read the stream back from MediaMTX to validate
echo "Reading stream back from MediaMTX..."
ffmpeg -rtsp_transport tcp -i "$RTSP_URL" -t 5 -f null - -loglevel warning

echo "=== Test completed successfully ==="

Please save the above snippet as test_mediamtx_ffmpeg.sh and run it as below.

chmod +x test_mediamtx_ffmpeg.sh
./test_mediamtx_ffmpeg.sh

If you do not see the printout “=== Test completed successfully ===”, it means there is something wrong with the ffmpeg+mediamtx setup.

Please let us know if this solves the problem. If not, we have a couple of more ideas for debugging that can further isolate the problem. They involve not streaming out the RTSP and eliminating AI inference as a source of error.


Hi @core-stuff, just following up to see if @shashi’s last post was able to resolve the issue or if you have any other questions. We’d be happy to continue the discussion (especially if this is still unresolved). :smiley: