In my actual project I take the stream from a camera connected to the Pi 5 (via CSI), process it, and serve the result with overlays as a stream displayed on the HA dashboard. Now I am wondering if there might be a better solution than the one I use at the moment (which works well so far).
You may want to adjust hw_location, model_zoo_url, model_name, and degirum_tools.get_token() to match your needs.
When you run this script, the FastAPI-based HTTP server will be launched on localhost, port 8080. Just open http://localhost:8080 in your browser to see the main page. The WebRTC stream will be served at the localhost:8888/stream URL, and the main page at localhost:8080 will embed it.
The script:
```python
import degirum_tools, degirum as dg
from nicegui import ui, app, context
import threading

#
# Settings
#
hw_location = "@cloud"  # adjust to "@local" for local inference
model_zoo_url = "degirum/public"  # URL for the model zoo
model_name = "yolov8n_relu6_widerface_kpts--640x640_quant_n2x_orca1_1"  # name of the model to use
stream_url_path = "stream"  # URL path for RTSP streaming


def frame_source():
    from picamera2 import Picamera2

    picam2 = Picamera2()
    picam2.configure(picam2.preview_configuration(main={"format": "BGR888"}))
    picam2.start()
    try:
        while True:
            yield picam2.capture_array()
    finally:
        picam2.stop()


def run_pipeline():
    """Run the video processing pipeline."""
    # Load the model
    model = dg.load_model(
        model_name, hw_location, model_zoo_url, degirum_tools.get_token()
    )

    rtsp_streamer = None
    # run the model on frames from the video source
    for result in model.predict_batch(frame_source()):
        if rtsp_streamer is None:
            rtsp_streamer = degirum_tools.VideoStreamer(
                f"rtsp://localhost:8554/{stream_url_path}", *result.image.shape[1::-1]
            )
        # and send processed frames to the RTSP stream
        rtsp_streamer.write(result.image_overlay)


media_server = None  # media server instance for RTSP streaming


@app.on_startup
def startup():
    global media_server
    media_server = degirum_tools.MediaServer()
    # start AI pipeline in a separate thread
    threading.Thread(target=run_pipeline, daemon=True).start()


@app.on_shutdown
def cleanup():
    media_server.stop()  # stop the media server


@ui.page("/")
def main_page():
    # this is just a label
    ui.label("Live Stream").classes("text-xl font-bold mb-4")

    # define WebRTC video stream URL
    host = context.client.request.headers.get("host", "localhost")
    stream_url = f"http://{host.split(':')[0]}:8888/{stream_url_path}"

    # and this is the video element that will display the WebRTC video stream
    ui.element("iframe").props(f'src="{stream_url}"').classes(
        "w-[90%] mx-auto h-[calc(90vh)]"
    )


# Run the NiceGUI application
try:
    ui.run(title="Streaming Demo", workers=1, reload=False, show=False)
except KeyboardInterrupt:
    print("Shutting down the application...")
```
but all in all I end up at the same point, with this error message:
```
Starting VideoStreamer with URL: rtsp://localhost:8554/stream
Image shape: (480, 640, 3)
Exception in thread Thread-1 (run_pipeline):
Traceback (most recent call last):
  File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.11/threading.py", line 975, in run
    self._target(*self._args, **self._kwargs)
  File "/home/pi/projects/degirum-hailo-1/main_11.py", line 47, in run_pipeline
    rtsp_streamer.write(result.image_overlay)
  File "/home/pi/degirum/lib/python3.11/site-packages/degirum_tools/video_support.py", line 864, in write
    self._process.stdin.write(img.tobytes())
    ^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'stdin'
```
MediaMTX is running and ffmpeg is running, so could you give me a hint as to where I could fix that, please?
Hmm, very strange problem. The code of the `write()` method is as follows:

```python
def write(self, img: ImageType):
    if not self._process:
        return

    im_sz = image_size(img)
    if (self._width, self._height) != im_sz:
        img = resize_image(img, self._width, self._height)

    try:
        self._process.stdin.write(img.tobytes())
    except (BrokenPipeError, IOError):
        self.stop()
```
As you can see, it first checks that `self._process` is not None and simply returns if it is, so the `self._process.stdin.write(img.tobytes())` line is reached only when `self._process` is not None. Yet the error states that a None object has no attribute `stdin`, meaning that by that point `self._process` is None.
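For completeness, one way such a state could arise in principle is a race: the pipeline writes from a worker thread, and if `stop()` clears `self._process` between the guard and the write (assuming `stop()` sets it to None), the guarded attribute access can still fail. The snapshot pattern below illustrates this with a hypothetical `SafeStreamer` class; it is a sketch of the general technique, not degirum_tools' actual code:

```python
import io


class SafeStreamer:
    """Hypothetical streamer illustrating a snapshot guard against a
    concurrent stop() clearing the sink reference mid-write."""

    def __init__(self, sink):
        self._sink = sink  # stands in for the ffmpeg process stdin pipe

    def stop(self):
        self._sink = None  # what a concurrent stop() would do

    def write(self, data: bytes):
        sink = self._sink  # take a local snapshot exactly once
        if sink is None:   # guard and use both operate on the snapshot,
            return         # so a concurrent stop() cannot invalidate it
        sink.write(data)


buf = io.BytesIO()
streamer = SafeStreamer(buf)
streamer.write(b"frame-1")
streamer.stop()
streamer.write(b"frame-2")  # silently ignored instead of raising AttributeError
```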
Please make sure you are using the latest degirum_tools package, ver. 0.18.0.
You can run `pip install -U degirum_tools` to ensure that.
Hi @core-stuff
So far, we are unable to replicate the error on our side, primarily due to the complex setup needed to reproduce exactly the same system as yours. To narrow down the problem, do you have another camera source, say a USB camera or an RTSP stream? That would help us eliminate the picam as the source of error. Also, did you test the picam independently, without the extra Home Assistant setup? That would help us narrow down where the issue could be.
So I thought that it might be a version conflict, but now I have built a face recognition pipeline, and there the preview_configuration works. So I have to figure out why it doesn't work with your script, and I think then I will get a step further toward getting it running.
To your other question: I will try it with an ESP32 webstream, as I don't have a USB cam. And to your other point: I ran the script without any HA integration in a test environment. I'll keep you updated.
At last, we have some good news (hopefully). We believe we know what is causing the different errors you are seeing.
```python
picam2.configure(picam2.preview_configuration(main={"format": "BGR888"}))
```

This is the easy one. The actual call should be:

```python
picam2.configure(picam2.create_preview_configuration(main={"format": "BGR888"}))
```

Note that the word `create` was missing, and hence you probably would have seen an error like "ConfigurationObject not callable".
Regarding the error `AttributeError: 'NoneType' object has no attribute 'stdin'`: we believe this is due to MediaMTX not running. We are aware that you mentioned MediaMTX is running, but we could replicate this exact error when we stopped the MediaMTX server.
Now, how can we independently confirm whether MediaMTX is running? We can use the script below:
```bash
#!/bin/bash
set -e

# Define RTSP stream URL
RTSP_URL="rtsp://localhost:8554/teststream"

echo "=== Starting test ==="

# 1. Stream test video to MediaMTX using FFmpeg
echo "Streaming test video to MediaMTX..."
ffmpeg -re -f lavfi -i testsrc=size=1280x720:rate=30 \
    -c:v libx264 -f rtsp -rtsp_transport tcp "$RTSP_URL" \
    -t 10 -loglevel warning &

# Give the stream a moment to start
sleep 3

# 2. Read the stream back from MediaMTX to validate
echo "Reading stream back from MediaMTX..."
ffmpeg -rtsp_transport tcp -i "$RTSP_URL" -t 5 -f null - -loglevel warning

echo "=== Test completed successfully ==="
```
Please save the above snippet as test_mediamtx_ffmpeg.sh and run it with `bash test_mediamtx_ffmpeg.sh`.
If you do not see the printout "=== Test completed successfully ===", it means there is something wrong with the ffmpeg + MediaMTX setup.
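As a lighter-weight first check, you can also verify from Python whether anything is listening on MediaMTX's RTSP port at all. This is a stdlib-only sketch; 8554 is MediaMTX's default RTSP port, adjust it if your configuration differs:

```python
import socket


def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if is_port_open("localhost", 8554):
    print("Something is listening on 8554; MediaMTX is likely running")
else:
    print("Nothing is listening on 8554; MediaMTX is likely not running")
```

Note that this only confirms the port is open, not that the RTSP handshake works, which is why the ffmpeg round-trip above is the stronger test.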
Please let us know if this solves the problem. If not, we have a couple more ideas for debugging that can further isolate the problem: they involve not streaming out over RTSP, and eliminating AI inference as a source of error.
Hi @core-stuff, just following up to see if @shashi’s last post was able to resolve the issue or if you have any other questions. We’d be happy to continue the discussion (especially if this is still unresolved).