How to embed subtitles into an mp4 file using gstreamer - c

My Goal
I'm trying to embed subtitles into an mp4 file using the mp4mux gstreamer element.
What I've tried
The pipeline I would like to use is:
GST_DEBUG=3 gst-launch-1.0 filesrc location=sample-nosub-avc.mp4 ! qtdemux ! queue ! video/x-h264 ! mp4mux name=mux reserved-moov-update-period=1000 ! filesink location=output.mp4 filesrc location=english.srt ! subparse ! queue ! text/x-raw,format=utf8 ! mux.subtitle_0
It just demuxes a sample mp4 file to get the H.264 stream and then muxes it back together with an SRT subtitle file.
The error I get is:
Setting pipeline to PAUSED ...
0:00:00.009958915 1324869 0x5624a8c7a0a0 WARN basesrc gstbasesrc.c:3600:gst_base_src_start_complete:<filesrc0> pad not activated yet
Pipeline is PREROLLING ...
0:00:00.010128080 1324869 0x5624a8c53de0 WARN basesrc gstbasesrc.c:3072:gst_base_src_loop:<filesrc1> error: Internal data stream error.
0:00:00.010129102 1324869 0x5624a8c53e40 WARN qtdemux qtdemux_types.c:239:qtdemux_type_get: unknown QuickTime node type pasp
0:00:00.010140810 1324869 0x5624a8c53de0 WARN basesrc gstbasesrc.c:3072:gst_base_src_loop:<filesrc1> error: streaming stopped, reason not-negotiated (-4)
0:00:00.010172990 1324869 0x5624a8c53e40 WARN qtdemux qtdemux.c:3237:qtdemux_parse_trex:<qtdemux0> failed to find fragment defaults for stream 1
ERROR: from element /GstPipeline:pipeline0/GstFileSrc:filesrc1: Internal data stream error.
Additional debug info:
gstbasesrc.c(3072): gst_base_src_loop (): /GstPipeline:pipeline0/GstFileSrc:filesrc1:
streaming stopped, reason not-negotiated (-4)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
My Thoughts
I believe the issue is not related to the warning above but rather to mp4mux's incompatibility with SRT subtitles.
I believe this because other debug logs hint at it, and also because stealing the subtitles from another mp4 file and muxing them back together does work:
gst-launch-1.0 filesrc location=sample-nosub-avc.mp4 ! qtdemux ! mp4mux name=mux ! filesink location=output.mp4 filesrc location=sample-with-subs.mp4 ! qtdemux name=demux demux.subtitle_1 ! text/x-raw,format=utf8 ! queue ! mux.subtitle_0
A major catch-22 I am having is that mp4 files don't typically support SRT subtitles, but GStreamer's subparse element doesn't support parsing mp4 subtitle formats (tx3g, ttxt, etc.), so I'm not sure how I'm meant to put it all together.
I'm very sorry for the lengthy question, but I've tried many things, so it was difficult to condense. Any hints or help are appreciated. Thank you.
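For reference, the caps that mp4mux advertises on its subtitle request pads can be checked by inspecting its pad templates (the exact caps shown depend on the installed GStreamer version):
gst-inspect-1.0 mp4mux | grep -A 6 'subtitle_'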

Related

gst-launch-1.0 is showing blank screen for YUYV frame format for image size 864x480

I am trying to acquire image data from a Logitech USB camera (C270 HD webcam) connected to an NVIDIA Jetson Nano at an image size of 864x480 using the GStreamer command below, but I get a blank screen (attached below), i.e. it is not working even though no errors are reported.
gst-launch-1.0 -v v4l2src device="/dev/video1" ! 'video/x-raw,width=(int)864,height=(int)480' ! videoconvert ! ximagesink
Blank window created by ximagesink
When I capture the same image size (864x480) with JPEG compression, it works:
gst-launch-1.0 -v v4l2src device="/dev/video0" ! 'image/jpeg,width=(int)864,height=(int)480' ! jpegparse ! jpegdec ! videoconvert ! fpsdisplaysink video-sink=ximagesink
I checked both pipelines in C code too, but got the same result.
Please let me know if there are any issues with the first pipeline. Thanks in advance.
-RK
Maybe your camera does not support YUV. You can check this with:
v4l2-ctl --list-formats-ext
On the other hand, you might specify the YUV format for GStreamer to use. Something like this may work:
... 'video/x-raw, width=1280, height=720, format=YUY2' ! ...
or
... videoconvert ! 'video/x-raw, width=1280, height=720, format=YUY2' ! ...
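For the original 864x480 case, combining both suggestions into one pipeline might look like this (a sketch, assuming the camera actually advertises a YUYV mode at that resolution):
gst-launch-1.0 -v v4l2src device=/dev/video1 ! 'video/x-raw,width=864,height=480,format=YUY2' ! videoconvert ! ximagesink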

GStreamer 1.0 autovideosink Could not initialise Xv output

I have C code which uses GStreamer. I have just installed gst-launch-1.0 on my Ubuntu 14.04.5 LTS, and after that I verified that I have a working installation with
gst-inspect-1.0 fakesrc
which prints out a bunch of information about the element. I also checked with
gst-launch-1.0 -v fakesrc silent=false num-buffers=3 ! fakesink silent=false
and it produces output. But when I try to test video display by running
gst-launch-1.0 videotestsrc ! videoconvert ! autovideosink
it gives me the logs below:
Setting pipeline to PAUSED ...
libEGL warning: DRI2: xcb_connect failed
libEGL warning: DRI2: xcb_connect failed
libEGL warning: GLX: failed to load GLX
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstXvImageSink:autovideosink0-actual-sink-xvimage: Could not initialise Xv output
Additional debug info:
xvimagesink.c(1765): gst_xvimagesink_open (): /GstXvImageSink:autovideosink0-actual-sink-xvimage:
Could not open display (null)
Setting pipeline to NULL ...
Freeing pipeline ...
Please help.
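The "Could not open display (null)" line means no X display could be opened when the sink started, which typically happens when the DISPLAY environment variable is not set (for example in an SSH session or on a headless console). A minimal check, assuming a local X server is running on display :0, would be:
export DISPLAY=:0
gst-launch-1.0 videotestsrc ! videoconvert ! autovideosink
If the Xv extension itself is unavailable, forcing the plain X11 sink instead of the Xv-based one is another thing to try:
gst-launch-1.0 videotestsrc ! videoconvert ! ximagesink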

want to set the time to capture the video using GSTREAMER

gst-launch-1.0 -e v4l2src ! x264enc ! h264parse ! mp4mux ! splitmuxsink max-size-time=30000000000 location=test1.mp4
This is my pipeline. When I execute it, it gives an error like this:
WARNING: erroneous pipeline: no element "splitmuxsink"
Is there any solution for this warning? I have installed every plugin and I am a beginner to GStreamer.
Please give me some idea of how to set the capture time of the video.
splitmuxsink was added in GStreamer 1.6, so make sure you have 1.6 or newer (gst-launch-1.0 --version should tell you which version is installed). It is also part of gst-plugins-good, so make sure you have that installed. For reference, see the guide on installing GStreamer on Linux machines.
gst-launch-1.0 -e v4l2src ! x264enc ! h264parse ! mp4mux ! splitmuxsink max-size-time=30000000000 location=test1.mp4
Also, your above pipeline is wrong; it should be something like the one below:
gst-launch-1.0 -e v4l2src num-buffers=500 ! video/x-raw,width=320,height=240 ! videoconvert ! queue ! timeoverlay ! x264enc key-int-max=10 ! h264parse ! splitmuxsink location=video%02d.mov max-size-time=10000000000 max-size-bytes=1000000
Refer to splitmuxsink; it adds the muxer for you.
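splitmuxsink's default muxer is mp4mux, so a variant of the above that writes mp4 segments instead of mov (a sketch, assuming GStreamer 1.6 or newer and a camera that can deliver 320x240 raw video) could look like:
gst-launch-1.0 -e v4l2src num-buffers=500 ! video/x-raw,width=320,height=240 ! videoconvert ! queue ! x264enc key-int-max=10 ! h264parse ! splitmuxsink location=video%02d.mp4 max-size-time=10000000000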

Gstreamer - too much delay while listening to RTP stream

I am new to GStreamer and I want to use it to listen to an RTP stream.
To do that, I use this pipeline:
gst-launch-1.0 udpsrc caps=application/x-rtp port=5000 ! rtpjitterbuffer ! rtpopusdepay ! opusdec ! alsasink
I don't know why, but I have some delay (~1 s) and I want to minimize it.
I'm sure that this is not coming from the source or the transport.
If anyone has any ideas :)
So, if anyone has the same problem, these are the properties that helped me (a combined example follows below):
the latency of rtpjitterbuffer
the buffer-time and latency-time of alsasink
And also update GStreamer :)
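A pipeline combining those properties might look something like this (the values are purely illustrative; rtpjitterbuffer's latency is in milliseconds, while alsasink's buffer-time and latency-time are in microseconds):
gst-launch-1.0 udpsrc caps=application/x-rtp port=5000 ! rtpjitterbuffer latency=100 ! rtpopusdepay ! opusdec ! alsasink buffer-time=40000 latency-time=10000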
Try playing with the latency setting on the jitter buffer, e.g.:
gst-launch-1.0 udpsrc caps=application/x-rtp port=5000 ! rtpjitterbuffer latency=250 ! rtpopusdepay ! opusdec ! alsasink

How to play raw audio received from a UDP port

How can I do a raw send and a raw receive? This is not working; I can't play what I sent:
Send: $ gst-launch -v autoaudiosrc ! udpsink host=127.0.0.1 auto-multicast=true port=4444
Recv/play:
[root@example ~]# gst-launch udpsrc multicast-group=127.0.0.1 port=4444 ! autoaudiosink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstAutoAudioSink:autoaudiosink0/GstPulseSink:autoaudiosink0-actual-sink-pulse: The stream is in the wrong format.
Additional debug info:
gstbaseaudiosink.c(866): gst_base_audio_sink_preroll (): /GstPipeline:pipeline0/GstAutoAudioSink:autoaudiosink0/GstPulseSink:autoaudiosink0-actual-sink-pulse:
sink not negotiated.
Execution ended after 16110169 ns.
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
[root@example ~]#
You need to specify the capabilities for the source. This is what the error ERROR: from element /GstPipeline:pipeline0/GstAudioConvert:audioconvert0: not negotiated actually means.
(Use the -v flag on gst-launch to see more details on errors.)
So, the solution is:
$ gst-launch -v udpsrc multicast-group=127.0.0.1 port=4444 \
! audio/x-raw-int, endianness=1234, signed=true, width=16, depth=16, rate=44100, channels=2 \
! autoaudiosink
Actually, I just copied the capabilities from the verbose output of the sending gst-launch.
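On GStreamer 1.0 the same idea applies, only the raw-audio caps syntax changed; a rough, untested equivalent of the receiver (assuming the sender delivers interleaved S16LE stereo at 44100 Hz) would be:
gst-launch-1.0 -v udpsrc port=4444 ! 'audio/x-raw,format=S16LE,layout=interleaved,rate=44100,channels=2' ! audioconvert ! autoaudiosink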
