Error in calculating pps for h264 rtp stream - c

I have a problem in my GStreamer pipeline that causes the sprop-parameter-sets to (I think) overflow its buffer. I am doing this on an iMX6 board, my pipeline is appsrc format=3 ! imxvpuenc_h264 ! rtph264pay, and I use an RTSP server to access the pipeline. The pipeline works if a static image is sent, but in the case of a video it stops working because it calculates the wrong PPS.
I have tried using a static sprop-parameter-sets for rtph264pay by setting its property, but in that case the same thing happens in rtph264depay, which calculates a new sprop-parameter-sets. The output from the caps creation can be seen below:
0:01:15.970217009 578 0xa482ad50 INFO GST_EVENT gstevent.c:809:gst_event_new_caps: creating caps event application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, packetization-mode=(string)1, sprop-parameter-sets=(string)"Z0JAIKaAUAIGQAA\=\,aM48gP94AAIS4AAg2AACAudABxMbtz5ZqJ6U4vk7wAAQMgABAOgA5R6ZQkwQNaTPhfwAQAAgjAACD54YHcvx9FXG9ON62mcABAAFAAEAYbX2rm8Qe4mSKvXrwAAQBgACNJAZdcgDiEnNE5djN4GAAIJhoAKAEnAmvb0KVFQMwyGTwAAi4AIgBINIKIds1udUngAAgcAACAWS1IEgBehG7wDL75/W5JRBIi0WrX8gABAsAAEA0DVsAnpAKiCjVLNdK8AAEJ4AEAc/YVCfjDJO+t73KSd4AAII4AAgpAACAWwBo6CGMh3HueozX+Z4AAIJgAAgOgD2gYFqlGlGBjWn1MULXgAAg5AACAkEA8JLN5OJHLJcZmDo+eAACC8AAIDoAMAGGzM8zzGmJZwKeFL8AAQAAgKhbICDBChH5BKlw+PuMscAACACAAcACA3uGjeSK7gZZzT+NH/ewABDWAAEEQsALG1gYcE5FEbXp1hW8DAcAAQBQAnNfkbKQ/Pc/I9SGjgAwABAXAGdyJu7gpKxj9M5ERP/eAA6MAAIBgopwP8Sbdqzl4CjgAAQMwABAAAHALgpUcLtczR+Yjocj/eBgACC0YACtjKAXenmNmgRczT4AAIF4AAgDgAEASJqHnyzxQfCXUdO3gAAgoAACBgaSADVwoxVTFA7X0vaZsnexAU7CW/gAAgvAAQoABAXFGq3qUtmUv9VYp8AACCaEAIA7Bmj1M+lA7...
and this continues for about a hundred or more lines, and the device crashes if the pipeline isn't stopped. There should only be a few more characters after the first comma. Can someone tell me why this happens and suggest a solution?

Related

GStreamer: Pipeline to write data to file based on condition

Is there a way to write data in a GStreamer pipeline to a file based on an (external) condition?
I have an application/code, which streams/displays video to the screen and continuously writes it to a file (it works fine).
I would like the GStreamer pipeline to write to a file only if an external condition is true (at runtime; I don't know the condition in advance).
What I have done so far:
I carefully searched the official GStreamer documentation, where I found some information on appsink, but I don't really see how to apply it based on an (external) condition.
I also used 'dynamic pipelines' as a search term, which seems to describe the modification of GStreamer pipelines based on conditions.
I also searched the GStreamer mailing list and found this post, which uses the gst_element_set_locked_state() function.
I added a
if (condition) {
    gst_element_set_locked_state(videosink, TRUE);
} else {
    gst_element_set_locked_state(videosink, FALSE);
}
to my code, but then the pipeline would not work at all (displaying a black image).
Another way is described on https://coaxion.net/blog/2014/01/gstreamer-dynamic-pipelines/ in Example 2 with the corresponding code being available on GitHub (https://github.com/sdroege/gst-snippets/blob/217ae015aaddfe3f7aa66ffc936ce93401fca04e/dynamic-tee-vsink.c).
It seems to use a callback and the gst_element_set_state (sink->sink, GST_STATE_NULL) function call to write to a file based on an (external) condition.
Applying this function in analogy to the one above causes the pipeline to display fine, but also results in continuous (and not conditional) output to a file:
if (condition) {
    gst_element_set_state(videosink, GST_STATE_PLAYING);
} else {
    gst_element_set_state(videosink, GST_STATE_NULL);
}
Also gst_pad_add_probe() could be a possibility to dynamically change the output to a file, but despite having looked in the GStreamer documentation, I don't know how to use this function correctly.
For your requirement you need the tee and valve elements.
tee splits the pipeline so it can both display to a window and write to a file. valve is the conditional element you are looking for: its drop property drops buffers at the point where the valve sits.
Your pipeline will be like:
gst-launch-1.0 ksvideosrc ! videoconvert ! tee name=t ! queue ! valve drop=false ! autovideosink t. ! queue ! valve drop=false ! openh264enc ! h264parse ! mp4mux ! filesink location="test.mp4" -v --eos-on-shutdown
When your condition occurs, set the specific valve's drop property to true to stop writing to the file.
In C/C++:
if (condition)
    g_object_set(videoValve, "drop", TRUE, NULL);
else
    g_object_set(videoValve, "drop", FALSE, NULL);
WARNING:
The valves must keep drop=false until data has passed through everything in the pipeline. In other words, only set a valve's drop property to true once the pipeline is in the PLAYING state. You can adjust your code accordingly, e.g. trigger the mechanism from a bus callback, where you can check the pipeline state.
Note: ksvideosrc is for Windows; on Linux try v4l2src.
If you build your application like this, it will work; I use a similar scenario.
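The answer above can be sketched end to end in C. The following is a minimal, untested outline (element and variable names are my own; it assumes a Linux machine with GStreamer, v4l2src, and openh264enc available, and it flips the file branch's valve as soon as the pipeline reports PLAYING purely as a demonstration -- a real application would check its own condition there):

```c
#include <gst/gst.h>

static GstElement *file_valve; /* valve on the filesink branch */

static gboolean bus_cb(GstBus *bus, GstMessage *msg, gpointer pipeline)
{
    if (GST_MESSAGE_TYPE(msg) == GST_MESSAGE_STATE_CHANGED &&
        GST_MESSAGE_SRC(msg) == GST_OBJECT(pipeline)) {
        GstState newstate;
        gst_message_parse_state_changed(msg, NULL, &newstate, NULL);
        if (newstate == GST_STATE_PLAYING) {
            /* Pipeline reached PLAYING: it is now safe to toggle the
             * valve.  Here we drop immediately as a demonstration;
             * substitute your own external condition. */
            g_object_set(file_valve, "drop", TRUE, NULL);
        }
    }
    return TRUE;
}

int main(int argc, char **argv)
{
    gst_init(&argc, &argv);
    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch(
        "v4l2src ! videoconvert ! tee name=t "
        "t. ! queue ! valve drop=false ! autovideosink "
        "t. ! queue ! valve name=filevalve drop=false ! openh264enc "
        "! h264parse ! mp4mux ! filesink location=test.mp4", &err);
    if (pipeline == NULL) {
        g_printerr("parse error: %s\n", err->message);
        return 1;
    }
    file_valve = gst_bin_get_by_name(GST_BIN(pipeline), "filevalve");
    GstBus *bus = gst_element_get_bus(pipeline);
    gst_bus_add_watch(bus, bus_cb, pipeline);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);
    g_main_loop_run(loop);
    return 0;
}
```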

Running two v4l2loopback devices with their individual properties

Working with v4l2loopback devices I can run these two virtual devices:
a) running the preview image from a Canon DSLR via USB through v4l2loopback into OBS:
modprobe v4l2loopback
gphoto2 --stdout --capture-movie | gst-launch-1.0 fdsrc fd=0 ! decodebin name=dec ! queue ! videoconvert ! tee ! v4l2sink device=/dev/video0
Found here, and it works.
b) Streaming the output of OBS into a browser based conferencing system, like this:
modprobe v4l2loopback devices=1 video_nr=10 card_label="OBS Cam" exclusive_caps=1
Found here, this also works.
However, I need to run both a) and b) at the same time, which isn't working as expected. They interfere; it seems they are using the same buffer, and the video flips back and forth between the two producers.
What I learned and tried:
A kernel module can only be loaded once. The v4l2loopback module can be unloaded with modprobe -r v4l2loopback. I don't know whether loading it a second time would be ignored or would unload the previous instance.
I've tried to load the module with devices=2 as an option as well as different video devices, but I can't find the right syntax.
As there is an already accepted answer, I assume your problem has been solved. Yet I was quite a newbie and couldn't get the syntax right even after the answer above (i.e. how to set up a second device such as video2).
After a bit more searching, I found a website that explains how to add multiple devices, with an example:
modprobe v4l2loopback video_nr=3,4,7 card_label="device number 3","the number four","the last one"
This will create 3 devices, with the card names taken from the card_label parameter:
/dev/video3 -> device number 3
/dev/video4 -> the number four
/dev/video7 -> the last one
When I was trying to use my Nikon camera as a webcam and OBS as a virtual camera for streaming, to have full control of naming my video devices was important. I hope this answer will help some others, as well.
from your description ("the video flips back and forth between the two producers") it seems that both producers are writing to the same video-device.
to fix this, you need to do two things:
create 2 video-devices
tell each producer to use their own video device
creating multiple video-devices
as documented this can be accomplished by specifying devices=2 when loading the module.
taking your invocation of modprobe, this would mean:
modprobe v4l2loopback devices=2 video_nr=10 card_label="OBS Cam" exclusive_caps=1
this will create two new devices, the first one will be /dev/video10 (since you specified video_nr), the 2nd one will take the first free video-device.
on my system (which has a webcam that occupies both /dev/video0 and /dev/video1) this is /dev/video2
telling each producer to use their own device
well, tell one producer to use /dev/video10 and the other to use /dev/video2 (or whatever video-devices you got)
e.g.
gphoto2 --stdout --capture-movie | gst-launch-1.0 \
fdsrc fd=0 \
! decodebin name=dec \
! queue \
! videoconvert \
! tee \
! v4l2sink device=/dev/video10
and configure obs to use /dev/video2.
or the other way round.
just don't use the same video-device for both producers.
(also make sure that your consumers use the correct video-device)

How to fix output framerate in playbin?

What I need is to use GStreamer's playbin to play a video at a different frame rate than the original video framerate.
For example, the actual framerate of a video recorded with the camera's 30 fps setting is 30000/1001, but I would like to force playback at 30 fps.
gst-launch-1.0 -v playbin uri=http://media.w3.org/2010/05/video/movie_300.mp4 video-sink="videorate ! capsfilter caps=video/x-raw,framerate=3000/1000 ! videoconvert ! autovideosink"
I tried the above, and I can see the framerate on the videorate element's path change to "framerate=(fraction)3/1".
But the other elements still keep the original framerate after videorate.
Is there a way to reset the caps of all paths, or to override the framerate value of the original source when parsing?

Switch on the fly between two sources in gstreamer

I'm currently facing a problem that I'm not able to resolve yet, but I hope I can do it with your help.
I am currently developing an application with GStreamer to play back different kinds of files: video and photo (AVI and JPG respectively). The user has to have the possibility to switch between those different files. I have achieved this, but by creating a new pipeline when the file format is different, and then the screen randomly blinks while the two files are loading.
Now, I've played with valve just for JPG files and it works like a charm. But I'm stuck at the step of implementing video files; I don't know how to switch between two video files: the pipeline below doesn't work for video files, it freezes:
gst-launch-1.0 filesrc name=photosrc ! jpegdec ! valve name=playvalve drop=false ! imxg2dvideosink
Then, further in my code, I set the valve to drop, set the different elements to the READY state, change the location of filesrc, and return to the PLAYING state.
I took a look at input-selector, but it appears that the inactive input keeps playing when one switches to the other (cf. the docs). Is it possible to set an input to READY to avoid this behavior?
Thanks a lot for helping
Take a look at https://github.com/RidgeRun/gst-interpipe plugin.
You can create 2 different source mini pipelines ending with interpipesink, and in runtime change which will connect to interpipesrc. Make sure to have the same format on both ends. Or use renegotiation capability, however, I have not tried it yet.
Check wiki for dynamic switching details:
/* Create pipelines */
GstElement *pipe1 = gst_parse_launch ("videotestsrc ! interpipesink name=camera1", NULL);
GstElement *pipe2 = gst_parse_launch ("videotestsrc ! interpipesink name=camera2", NULL);
GstElement *pipe3 = gst_parse_launch ("interpipesrc name=src listen-to=camera1 ! fakesink", NULL);
/* Grab a reference to the interpipesrc */
GstElement *src = gst_bin_get_by_name(pipe3, "src");
/* Perform the switch */
g_object_set (src, "listen-to", "camera2", NULL);
It seems a little bit tricky for me to compile this plugin for an imx6 target...
Is it possible to change a pipeline like this:
filesrc ! avidemux ! vpudec ! imxg2dsink
to
filesrc ! jpegdec ! imxg2dsink
without setting the whole pipeline to NULL?
I've tried to create a new pipeline each time I change the location of filesrc; it works, but sometimes the framebuffer blinks...
When I change the location of filesrc to a JPEG file, it works. When I change it to an AVI file, the pipeline doesn't restart correctly:
avidemux gstavidemux.c:5678:gst_avi_demux_loop:<demux> error: Internal data stream error.
avidemux gstavidemux.c:5678:gst_avi_demux_loop:<demux> error: streaming stopped, reason not-linked
Thank you.

Extracting I-Frames from H264 in MPEG-TS in C

I am experimenting with video and would like to know how I can extract I-frames from H264 contained in an MPEG-TS container.
What I want to do is generate preview images out of a video stream.
As the I-frame is supposed to be a complete picture from which P- and B-frames derive, is there a possibility to just extract the data of the picture without having to decode it using a codec?
I have already done some work with the MPEG-TS container format, but I am not that specialized in codecs.
I am rather in search of information.
Thanks a lot.
I am no expert in this domain but I believe the answer to your question is NO.
If you want to save the I-frame as a JPEG image, you still need to "transcode" the video frame i.e. you first need to decode the I-frame using a H264 decoder and then encode it using a JPEG encoder. This is so because the JPEG encoder does not understand a H264 frame, it only accepts uncompressed video frames as input.
As an aside, since the input to the JPEG encoder is an uncompressed frame, you can generate a JPEG image from any type of frame (I/P/B) as it would already be decoded (using reference I frame, if needed) before feeding to the encoder.
As others have noted decoding h.264 is complicated. You could write your own decoder but it is a major effort. Why not use an existing decoder?
Intel's IPP library has the basic building blocks for a decoder and a sample decoder:
Code Samples for the Intel® Integrated Performance Primitives
There's libavcodec:
Using libavformat and libavcodec
Revised avcodec_sample.0.4.9.cpp
I am no expert in this domain either, but I've played with decoding. Use this GStreamer pipeline to extract previews from video.mp4:
gst-launch -v filesrc location=./video.mp4 ! qtdemux name=demux demux.video_00 ! ffdec_h264 ! videorate ! 'video/x-raw-yuv,framerate=1/1' ! jpegenc ! multifilesink location=image-%05d.jpeg
If you want to write some code, replace videorate with appsrc/appsink elements and write a control program connecting the two pipelines (see example):
filesrc location=./video.mp4 ! qtdemux name=demux demux.video_00 ! ffdec_h264 ! appsink
appsrc ! 'video/x-raw-yuv,framerate=1/1' ! jpegenc ! multifilesink location=image-%05d.jpeg
Buffers without the GST_BUFFER_FLAG_DELTA_UNIT flag set are I-frames. You can safely skip many frames and start decoding the stream at any I-frame.
