GStreamer: using compositor with shmsrc in C

I have a few pipelines that send raw video to shmsink, like the ones below.
gst-launch-1.0 videotestsrc ! video/x-raw,format=I420,width=640,height=360,framerate=15/1 ! timeoverlay ! queue ! shmsink socket-path=/tmp/test1 shm-size=20000000 sync=true
gst-launch-1.0 videotestsrc ! video/x-raw,format=I420,width=640,height=360,framerate=15/1 ! timeoverlay ! queue ! shmsink socket-path=/tmp/test2 shm-size=20000000 sync=true
I am trying to mix the videos with the compositor element in another process, like this:
gst-launch-1.0 shmsrc socket-path=/tmp/test1 is-live=true ! queue ! video/x-raw,format=I420,width=640,height=360 ! compositor name=comp sink_1::xpos=860 ! videoconvert ! autovideosink \
shmsrc socket-path=/tmp/test2 is-live=true ! queue ! video/x-raw,format=I420,width=640,height=360 ! comp.
However, I get an assertion failure saying the segment format is not GST_FORMAT_TIME:
ERROR:../subprojects/gst-plugins-base/gst-libs/gst/video/gstvideoaggregator.c:2322:gst_video_aggregator_sink_event: assertion failed: (seg.format == GST_FORMAT_TIME)
Bail out! ERROR:../subprojects/gst-plugins-base/gst-libs/gst/video/gstvideoaggregator.c:2322:gst_video_aggregator_sink_event: assertion failed: (seg.format == GST_FORMAT_TIME)
[1] 268025 abort (core dumped) GST_DEBUG=4 gst-launch-1.0 shmsrc socket-path=/tmp/test1 is-live=true !
This is all implemented programmatically using gstreamer-rs (the GStreamer Rust bindings), but I am able to reproduce the same issue by running the pipelines above.
Is there a way to manually add GST_FORMAT_TIME?
I tried the videomixer element and hit the same issue there. I also tried inserting identity sync=true, but that doesn't seem to do the trick.
I appreciate any help with this!
Thanks a lot!

It seems to work with do-timestamp=true and an increased shm-size:
gst-launch-1.0 shmsrc socket-path=/tmp/test1 is-live=true do-timestamp=true ! queue ! video/x-raw,format=I420,width=640,height=360,framerate=15/1 ! compositor name=comp sink_1::xpos=860 ! videoconvert ! autovideosink shmsrc socket-path=/tmp/test2 is-live=true do-timestamp=true ! queue ! video/x-raw,format=I420,width=640,height=360,framerate=15/1 ! comp.
gst-launch-1.0 videotestsrc ! video/x-raw,format=I420,width=640,height=360,framerate=15/1 ! timeoverlay ! queue ! shmsink socket-path=/tmp/test1 shm-size=200000000
gst-launch-1.0 videotestsrc do-timestamp=true ! video/x-raw,format=I420,width=640,height=360,framerate=15/1 ! timeoverlay ! queue ! shmsink socket-path=/tmp/test2 shm-size=200000000
However, the streams don't seem to stay in sync, even when adding sync=true.
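The same fix applies when the pipeline is built programmatically. A minimal C sketch (the question uses gstreamer-rs, where the equivalent would be element.set_property("do-timestamp", true); the element name "src1" is illustrative, assuming the shmsrc was given that name in the launch string):

/* Fetch the shmsrc by its (assumed) name and enable timestamping
 * before setting the pipeline to PLAYING. */
GstElement *src1 = gst_bin_get_by_name (GST_BIN (pipeline), "src1");
g_object_set (src1, "do-timestamp", TRUE, NULL);
gst_object_unref (src1);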

Related

gst-launch-1.0 shows a blank screen for the YUYV frame format at image size 864x480

I am trying to acquire image data at 864x480 from a Logitech USB camera (C270 HD Webcam) connected to an NVIDIA Jetson Nano, using the GStreamer command below, but I get a blank screen (so it is not working, even though no errors are reported).
gst-launch-1.0 -v v4l2src device="/dev/video1" ! 'video/x-raw,width=(int)864,height=(int)480' ! videoconvert ! ximagesink
[Screenshot: blank window created by ximagesink]
When I capture the same size (864x480) with JPEG compression, it works:
gst-launch-1.0 -v v4l2src device="/dev/video0" ! 'image/jpeg,width=(int)864,height=(int)480' ! jpegparse ! jpegdec ! videoconvert ! fpsdisplaysink video-sink=ximagesink
I checked both pipelines in C code as well, with the same result.
Please let me know if there are any issues with the first pipeline. Thanks in advance.
-RK
Maybe your camera does not support YUV. You can check that with:
v4l2-ctl --list-formats-ext
Alternatively, you can explicitly specify a YUV format for GStreamer to use. Something like this may work:
... 'video/x-raw, width=1280, height=720, format=YUY2' ! ...
or
... videoconvert ! 'video/x-raw, width=1280, height=720, format=YUY2' ! ...
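If v4l2-ctl does show a YUY2 mode, a complete pipeline built from the fragments above might look like this (assuming 864x480 YUY2 is actually among the modes the camera reports; otherwise pick one of the listed sizes):
gst-launch-1.0 -v v4l2src device="/dev/video1" ! 'video/x-raw,format=YUY2,width=864,height=480' ! videoconvert ! ximagesink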

GStreamer split mp4 video by seconds with decodebin and splitmuxsink

With this example from GStreamer:
gst-launch-1.0 -e filesrc location=audio/mario.mp4 ! queue ! qtdemux name=demux demux.video_0 ! queue ! h264parse ! queue ! mux.video demux.audio_0 ! queue ! aacparse ! queue ! mux.audio_0 splitmuxsink location=audio/test//out_%d.mp4 max-size-time=10000000000 muxer=mp4mux name=mux
I am able to split the mp4 into several pieces. But when I change qtdemux to decodebin, things change:
gst-launch-1.0 filesrc location=audio/mario_00.mp4 ! decodebin name=demuxer ! queue ! video/x-raw ! videoconvert ! x264enc ! mux.video audio/x-raw ! queue ! audioconvert ! wavenc ! queue ! mux.audio_0 splitmuxsink location=audio/test/video%02d.mp4 max-size-time=10000000000 muxer=mp4mux name=mux
This gives me error messages:
GStreamer-CRITICAL **: 07:58:00.441: gst_element_make_from_uri: assertion 'gst_uri_is_valid (uri)' failed
WARNING: erroneous pipeline: no element "audio"
If I change audio/x-raw to demuxer.audio/x-raw, demuxer.audio, or demuxer.audio_0, they give me error messages:
(gst-launch-1.0:11252): GStreamer-CRITICAL **: 07:59:35.828: gst_element_make_from_uri: assertion 'gst_uri_is_valid (uri)' failed
WARNING: erroneous pipeline: unexpected reference "demuxer" - ignoring
How can I split the mp4 after changing qtdemux to decodebin?
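One thing worth noting (a sketch, not a tested answer): in gst-launch syntax a second branch from a named element is started by repeating the name followed by a dot, e.g. demuxer. — a bare audio/x-raw token is parsed as an element name, which is why you see no element "audio". Also, wavenc produces a WAV container, which mp4mux will not accept, so an AAC encoder is assumed here (avenc_aac is one possibility, depending on what is installed):
gst-launch-1.0 filesrc location=audio/mario_00.mp4 ! decodebin name=demuxer \
demuxer. ! video/x-raw ! queue ! videoconvert ! x264enc ! mux.video \
demuxer. ! audio/x-raw ! queue ! audioconvert ! avenc_aac ! mux.audio_0 \
splitmuxsink location=audio/test/video%02d.mp4 max-size-time=10000000000 muxer=mp4mux name=mux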

Want to set the duration of video capture using GStreamer

gst-launch-1.0 -e v4l2src ! x264enc ! h264parse ! mp4mux ! splitmuxsink max-size-time=30000000000 location=test1.mp4
This is my pipeline; when I execute it, it gives an error like this:
WARNING: erroneous pipeline: no element "splitmuxsink"
Is there any solution for this warning? I have installed all the plugins, and I am a beginner with GStreamer. Please give me some idea of how to set the duration of video capture.
splitmuxsink was added in GStreamer 1.6, so make sure you have version 1.6 or later (gst-launch-1.0 --version should tell you which version is installed). It is also part of gst-plugins-good, so make sure that package is installed; see the GStreamer documentation for how to install GStreamer on Linux machines.
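You can also confirm the element is available with:
gst-inspect-1.0 splitmuxsink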
gst-launch-1.0 -e v4l2src ! x264enc ! h264parse ! mp4mux ! splitmuxsink max-size-time=30000000000 location=test1.mp4
Also, your pipeline above is wrong; it should be something like this:
gst-launch-1.0 -e v4l2src num-buffers=500 ! video/x-raw,width=320,height=240 ! videoconvert ! queue ! timeoverlay ! x264enc key-int-max=10 ! h264parse ! splitmuxsink location=video%02d.mov max-size-time=10000000000 max-size-bytes=1000000
Refer to the splitmuxsink documentation; it adds the muxer for you. Note that max-size-time is in nanoseconds, so 10000000000 is 10 seconds.

GStreamer - too much delay while listening to an RTP stream

I am new to GStreamer and I want to use it to listen to an RTP stream. To do that, I use this pipeline:
gst-launch-1.0 udpsrc caps=application/x-rtp port=5000 ! rtpjitterbuffer ! rtpopusdepay ! opusdec ! alsasink
I don't know why, but I have about one second of delay, and I want to minimize it. I'm sure it is not coming from the source or the transport.
If anyone has any ideas :)
So, if anyone has the same problem, these are the properties that helped me:
latency of rtpjitterbuffer
buffer-time and latency-time of alsasink
And also update GStreamer :)
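For example, a lower-latency variant combining both (the numbers are illustrative starting points, not recommendations; buffer-time and latency-time are in microseconds):
gst-launch-1.0 udpsrc caps=application/x-rtp port=5000 ! rtpjitterbuffer latency=100 ! rtpopusdepay ! opusdec ! alsasink buffer-time=20000 latency-time=10000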
Try playing with the latency setting on the jitter buffer, e.g.:
gst-launch-1.0 udpsrc caps=application/x-rtp port=5000 ! rtpjitterbuffer latency=250 ! rtpopusdepay ! opusdec ! alsasink

GStreamer gst_child_proxy_set C syntax

I am struggling with some GStreamer code in C that I cannot get to work. Could you give me a simple example, in C, of two videotestsrc elements displayed in one window, with xpos/ypos set via gst_child_proxy_set? I cannot figure it out because of the lack of examples. I'm quite new to GStreamer, but I really need to do it this way. I know that for some of you it would take just a moment to write, and I have been trying for a couple of days now...
Best regards!
gst-launch-1.0 -e \
videomixer name=mix \
sink_0::xpos=0 sink_0::ypos=0 sink_0::alpha=0 \
sink_1::xpos=0 sink_1::ypos=0 \
sink_2::xpos=640 sink_2::ypos=0 \
sink_3::xpos=0 sink_3::ypos=360 \
sink_4::xpos=640 sink_4::ypos=360 \
! autovideosink \
videotestsrc pattern="black" \
! video/x-raw,format=AYUV,width=1280,height=720 \
! mix.sink_0 \
uridecodebin uri=rtsp://10.0.0.121:554/video.h264 \
! videoconvert ! videoscale \
! video/x-raw,format=AYUV,width=640,height=360 \
! mix.sink_1 \
uridecodebin uri=rtsp://10.0.0.122:554/video.h264 \
! videoconvert ! videoscale \
! video/x-raw,format=AYUV,width=640,height=360 \
! mix.sink_2 \
uridecodebin uri=rtsp://10.0.0.123:554/video.h264 \
! videoconvert ! videoscale \
! video/x-raw,format=AYUV,width=640,height=360 \
! mix.sink_3 \
uridecodebin uri=rtsp://10.0.0.124:554/video.h264 \
! videoconvert ! videoscale \
! video/x-raw,format=AYUV,width=640,height=360 \
! mix.sink_4
Okay, so you have managed to create a working pipeline with gst-launch. As you may know, your GStreamer pipeline describes the processing your data goes through from its sources (your RTSP streams) to a sink (a mixer putting the streams side by side, feeding an autovideosink).
From here, to use it in some useful C code you have two options:
use gst_parse_launch (http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/gstreamer-GstParse.html#gst-parse-launch) with the pipeline description you've just used; it will build the whole pipeline just like gst-launch would have.
or you could build the pipeline yourself, using GstElement factories and linking the elements together. This is very tedious and only useful if you need a dynamic pipeline from the beginning.
So my advice would be to use something like:
const char *pipeline_desc = "videomixer name=mix "
"sink_0::xpos=0 sink_0::ypos=0 sink_0::alpha=0 "
"sink_1::xpos=0 sink_1::ypos=0 "
"sink_2::xpos=640 sink_2::ypos=0 "
/* bla bla... */
"! video/x-raw,format=AYUV,width=640,height=360 "
"! mix.sink_4";
GstElement *pipeline = gst_parse_launch (pipeline_desc, NULL);
This plus some event handling, and you should get what you want.
You can find some examples of this and much more in this very good tutorial : http://docs.gstreamer.com/pages/viewpage.action?pageId=327735
This doc got me out of trouble a few times. :)
Good luck!
EDIT: I just saw that you authored "GStreamer multiple videos in one window - C syntax" and that what you tried there was building the pipeline yourself. While it should work, it is an insane amount of work, very error-prone, and I must confess I never got much out of it myself. :p
gst_parse_launch was created for human beings such as you and me. :)
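Since the question specifically asked for gst_child_proxy_set with two videotestsrc elements, here is a minimal self-contained sketch along those lines (the pad positions and the ball pattern are arbitrary choices; videomixer implements the GstChildProxy interface, so its request pads can be addressed as "sink_N::property"):

#include <gst/gst.h>

int main (int argc, char *argv[])
{
  GstElement *pipeline, *mix;
  GstBus *bus;
  GstMessage *msg;

  gst_init (&argc, &argv);

  /* Two test sources mixed into one window. The pad properties could
   * also be set inline in the description string, but here they are
   * left to gst_child_proxy_set below, as the question asked. */
  pipeline = gst_parse_launch (
      "videomixer name=mix ! videoconvert ! autovideosink "
      "videotestsrc ! video/x-raw,format=AYUV,width=640,height=360 ! mix.sink_0 "
      "videotestsrc pattern=ball ! video/x-raw,format=AYUV,width=640,height=360 ! mix.sink_1",
      NULL);

  /* Place the second source to the right of the first. */
  mix = gst_bin_get_by_name (GST_BIN (pipeline), "mix");
  gst_child_proxy_set (GST_CHILD_PROXY (mix),
      "sink_0::xpos", 0,   "sink_0::ypos", 0,
      "sink_1::xpos", 640, "sink_1::ypos", 0,
      NULL);
  gst_object_unref (mix);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Block until error or EOS. */
  bus = gst_element_get_bus (pipeline);
  msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
  if (msg)
    gst_message_unref (msg);
  gst_object_unref (bus);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}

Build it with something like: gcc mix.c -o mix $(pkg-config --cflags --libs gstreamer-1.0)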
