Switch on the fly between two sources in gstreamer - c

I'm currently facing a problem that I haven't been able to resolve yet, but I hope I can with your help.
I'm currently developing an application with GStreamer to play back different kinds of files: video and photo (AVI and JPEG respectively). The user must be able to switch between those different files. I have achieved this, but by creating a new pipeline whenever the file format changes, and the screen randomly blinks between two file loads.
Now, I've played with valve for JPEG files only and it works like a charm. But I'm stuck at the step of implementing video files: I don't know how to switch between two video files. The code below doesn't work for video files, it freezes:
gst-launch-1.0 filesrc name=photosrc ! jpegdec ! valve name=playvalve drop=false ! imxg2dvideosink
Then further in my code, I drop the valve, set different elements to the READY state, change the location of filesrc, and return to the PLAYING state.
I took a look at input-selector, but it appears that the non-selected input keeps playing when one switches to the other (cf. the docs). Is it possible to set an input to READY to avoid this behavior?
Thanks a lot for helping
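(For reference, the switching step described above might look roughly like this in C; playvalve and photosrc are the names from the launch line, the rest is illustrative.)
/* Sketch of the described switch: close the valve, take the branch to
 * READY so filesrc's location can be changed, then resume.
 * next_file is a placeholder for the new path. */
g_object_set (playvalve, "drop", TRUE, NULL);
gst_element_set_state (jpegdec, GST_STATE_READY);
gst_element_set_state (photosrc, GST_STATE_READY);
g_object_set (photosrc, "location", next_file, NULL);
gst_element_set_state (photosrc, GST_STATE_PLAYING);
gst_element_set_state (jpegdec, GST_STATE_PLAYING);
g_object_set (playvalve, "drop", FALSE, NULL);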

Take a look at the https://github.com/RidgeRun/gst-interpipe plugin.
You can create two different source mini-pipelines ending with interpipesink, and at runtime change which one connects to the interpipesrc. Make sure to have the same format on both ends, or use the renegotiation capability; however, I have not tried that yet.
Check the wiki for dynamic switching details:
/* Create pipelines */
GstElement *pipe1 = gst_parse_launch ("videotestsrc ! interpipesink name=camera1", NULL);
GstElement *pipe2 = gst_parse_launch ("videotestsrc ! interpipesink name=camera2", NULL);
GstElement *pipe3 = gst_parse_launch ("interpipesrc name=src listen-to=camera1 ! fakesink", NULL);
/* Grab a reference to the interpipesrc */
GstElement *src = gst_bin_get_by_name (GST_BIN (pipe3), "src");
/* Perform the switch */
g_object_set (src, "listen-to", "camera2", NULL);
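Note that the switch only takes effect while everything is running, so set all three pipelines to PLAYING before performing it (a straightforward continuation of the snippet above):
/* Start all pipelines before switching */
gst_element_set_state (pipe1, GST_STATE_PLAYING);
gst_element_set_state (pipe2, GST_STATE_PLAYING);
gst_element_set_state (pipe3, GST_STATE_PLAYING);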

It seems a little bit tricky for me to compile this plugin for an imx6 target...
Is it possible to change the pipeline like this:
filesrc ! avidemux ! vpudec ! imxg2dsink
to
filesrc ! jpegdec ! imxg2dsink
Without setting the whole pipeline to NULL?
I've tried creating a new pipeline each time I change the location of filesrc; it works, but sometimes the framebuffer blinks...
When I change the location of filesrc for a JPEG file, it works. When I change it for an AVI file, the pipeline doesn't restart correctly:
avidemux gstavidemux.c:5678:gst_avi_demux_loop:<demux> error: Internal data stream error.
avidemux gstavidemux.c:5678:gst_avi_demux_loop:<demux> error: streaming stopped, reason not-linked
Thank you.
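(One hedged reading of the not-linked error above: avidemux creates its src pad dynamically once it has parsed the headers, so after a location change the decoder branch has to be relinked from a pad-added handler. A sketch, with element names assumed:)
/* Relink avidemux's dynamic src pad after a restart.
 * 'vpudec' is passed as user_data when connecting the signal. */
static void
on_pad_added (GstElement *demux, GstPad *pad, gpointer user_data)
{
  GstPad *sinkpad = gst_element_get_static_pad (GST_ELEMENT (user_data), "sink");
  if (!gst_pad_is_linked (sinkpad))
    gst_pad_link (pad, sinkpad);
  gst_object_unref (sinkpad);
}

/* ... during setup: */
g_signal_connect (demux, "pad-added", G_CALLBACK (on_pad_added), vpudec);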

Related

Changing playback speed using gstreamer

I am currently working on the GStreamer tutorials, in particular the one about playback speed adjustments. I pasted the example code into a file, which I compiled with the flags from pkg-config --cflags --libs gstreamer-1.0 (my GStreamer version is 1.20.5).
I tried to change the playback rate using the keys S / s and got the corresponding prints (Current rate: 0.5 etc.), but the playback speed stayed constant at 1. I thought the failure to change the playback speed was due to the source being a remote file, so I changed the code to use local files (passed as command line arguments) instead:
gchar buffer[4096];
g_snprintf(buffer, 4096, "playbin uri=file://%s", argv[1]);
/* Build the pipeline */
data.pipeline = gst_parse_launch(buffer, &error);
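(As an aside, file:// URIs need absolute, escaped paths; a hedged alternative is to let GStreamer build the URI:)
/* Builds a proper file:// URI, handling relative paths and escaping */
gchar *uri = gst_filename_to_uri (argv[1], NULL);
g_snprintf (buffer, 4096, "playbin uri=%s", uri);
g_free (uri);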
I also switched from the video sink to an audio sink (audio is sufficient for my cases).
I then noticed that whether or not the rate change works (apparently) depends on the type of file I am opening: when I open a local Ogg file, the playback speed changes; when I open an MP3 instead, nothing happens.
Is this a bug in GStreamer, or do I need a more sophisticated pipeline to get this approach working with different media types (local files would be sufficient for my needs)?
Edit: Complete code, sample mp3
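(For context, the tutorial changes the rate by sending a seek event from the current position; a simplified sketch of that step, assuming a pipeline handle and the desired rate:)
/* Seek to the current position with a new playback rate */
static void
send_rate_change (GstElement *pipeline, gdouble rate)
{
  gint64 position;
  GstEvent *seek_event;

  if (!gst_element_query_position (pipeline, GST_FORMAT_TIME, &position))
    return;

  seek_event = gst_event_new_seek (rate, GST_FORMAT_TIME,
      GST_SEEK_FLAG_FLUSH | GST_SEEK_FLAG_ACCURATE,
      GST_SEEK_TYPE_SET, position,
      GST_SEEK_TYPE_SET, GST_CLOCK_TIME_NONE);
  gst_element_send_event (pipeline, seek_event);
}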

GStreamer: Pipeline to write data to file based on condition

Is there a way to write data in a GStreamer pipeline to a file based on an (external) condition?
I have an application/code, which streams/displays video to the screen and continuously writes it to a file (it works fine).
I would like the GStreamer pipeline to only write to the file if an external condition is true (at runtime; I don't know the condition in advance).
What I have done so far:
I carefully searched the official GStreamer documentation, where I found some information on appsink, but I don't really see how to apply it based on an (external) condition.
I also used 'dynamic pipelines' as a search term, which seems to describe the modification of GStreamer pipelines based on conditions.
I also searched the GStreamer mailing list and found this post, which uses the gst_element_set_locked_state() function.
I added a
if (condition) {
  gst_element_set_locked_state (videosink, TRUE);
} else {
  gst_element_set_locked_state (videosink, FALSE);
}
to my code, but then the pipeline would not work at all (displaying a black image).
Another way is described on https://coaxion.net/blog/2014/01/gstreamer-dynamic-pipelines/ in Example 2 with the corresponding code being available on GitHub (https://github.com/sdroege/gst-snippets/blob/217ae015aaddfe3f7aa66ffc936ce93401fca04e/dynamic-tee-vsink.c).
It seems to use a callback and the gst_element_set_state (sink->sink, GST_STATE_NULL) function call to write to a file based on an (external) condition.
Applying this in analogy to the snippet above makes the pipeline display fine, but also results in continuous (and not conditional) output to the file:
if (condition) {
  gst_element_set_state (videosink, GST_STATE_PLAYING);
} else {
  gst_element_set_state (videosink, GST_STATE_NULL);
}
Also, gst_pad_add_probe() could be a possibility to dynamically change the output to a file, but despite having looked in the GStreamer documentation, I don't know how to use this function correctly.
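(For what it's worth, a minimal sketch of the probe approach: a buffer probe on the file branch that drops buffers while the condition is false. The condition flag and the file_queue element are assumptions:)
static gboolean writing_enabled = FALSE;  /* the external condition */

static GstPadProbeReturn
drop_unless_enabled (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  /* Pass buffers to the file branch only while the condition holds */
  return writing_enabled ? GST_PAD_PROBE_OK : GST_PAD_PROBE_DROP;
}

/* During setup: attach to the sink pad of the file branch's queue */
GstPad *pad = gst_element_get_static_pad (file_queue, "sink");
gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER, drop_unless_enabled, NULL, NULL);
gst_object_unref (pad);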
For your requirement you need the tee and valve elements.
Tee splits the pipeline so you can both display to a window and write to a file. Valve is the condition you are looking for: its drop property discards the buffers that reach the valve.
Your pipeline will be something like:
gst-launch-1.0 -v --eos-on-shutdown ksvideosrc ! videoconvert ! tee name=t ! queue ! valve drop=false ! autovideosink t. ! queue ! valve drop=false ! openh264enc ! h264parse ! mp4mux ! filesink location="test.mp4"
When your condition occurs, set the recording valve's drop property to true to stop writing to the file.
In C/C++:
if (condition)
  g_object_set (videoValve, "drop", TRUE, NULL);
else
  g_object_set (videoValve, "drop", FALSE, NULL);
WARNING:
The valves must stay at drop=false until data has flowed through the whole pipeline; only set a valve's drop property to true once the pipeline has reached the PLAYING state. You can arrange this by triggering the mechanism from a bus callback, where you can observe the pipeline's state changes.
Note: ksvideosrc is for Windows; on Linux, try v4l2src.
If you build your application like this, it will work; I use a similar scenario.
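(A minimal sketch of wiring this up in C, mirroring the launch line above; the valve name and v4l2src are assumptions:)
/* In main(): display branch plus a recording branch guarded by a named valve */
GstElement *pipeline = gst_parse_launch (
    "v4l2src ! videoconvert ! tee name=t "
    "t. ! queue ! autovideosink "
    "t. ! queue ! valve name=filevalve drop=false ! openh264enc "
    "! h264parse ! mp4mux ! filesink location=test.mp4", NULL);
GstElement *filevalve = gst_bin_get_by_name (GST_BIN (pipeline), "filevalve");

gst_element_set_state (pipeline, GST_STATE_PLAYING);

/* ... later, once PLAYING and the condition flips: */
g_object_set (filevalve, "drop", TRUE, NULL);   /* stop writing */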

Running two v4l2loopback devices with their individual properties

Working with v4l2loopback devices I can run these two virtual devices:
a) running the preview image from a Canon DSLR via USB through v4l2loopback into OBS:
modprobe v4l2loopback
gphoto2 --stdout --capture-movie | gst-launch-1.0 fdsrc fd=0 ! decodebin name=dec ! queue ! videoconvert ! tee ! v4l2sink device=/dev/video0
Found here, and it works.
b) Streaming the output of OBS into a browser based conferencing system, like this:
modprobe v4l2loopback devices=1 video_nr=10 card_label="OBS Cam" exclusive_caps=1
Found here, this also works.
However, I need to run both a) and b) at the same time, which isn't working as expected. They are interfering; it seems they are using the same buffer, and the video flips back and forth between the two producers.
What I learned and tried:
A kernel module can only be loaded once. The v4l2loopback module can be unloaded using the command modprobe -r v4l2loopback. I don't know whether loading it a second time is ignored or unloads the previous instance.
I've tried loading the module with devices=2 as an option, as well as with different video devices, but I couldn't find the right syntax.
As there is an already accepted answer, I assume your problem has been solved. Yet, I was quite a newbie and couldn't get the syntax right even after the answer above (i.e., how to set up a second device such as video2).
After a bit more searching, I found a website that explains how to add multiple devices, with an example:
modprobe v4l2loopback video_nr=3,4,7 card_label="device number 3","the number four","the last one"
This will create 3 devices with the card names passed in the second parameter:
/dev/video3 -> device number 3
/dev/video4 -> the number four
/dev/video7 -> the last one
When I was trying to use my Nikon camera as a webcam and OBS as a virtual camera for streaming, having full control over the naming of my video devices was important. I hope this answer helps some others as well.
From your description ("the video flips back and forth between the two producers"), it seems that both producers are writing to the same video device.
To fix this, you need to do two things:
create 2 video devices
tell each producer to use its own video device
Creating multiple video devices
As documented, this can be accomplished by specifying devices=2 when loading the module.
Taking your invocation of modprobe, this would mean:
modprobe v4l2loopback devices=2 video_nr=10 card_label="OBS Cam" exclusive_caps=1
This will create two new devices; the first one will be /dev/video10 (since you specified video_nr), and the second one will take the first free video device.
On my system (which has a webcam occupying both /dev/video0 and /dev/video1), this is /dev/video2.
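(If you'd rather not depend on which device happens to be free, the comma-separated syntax shown in the other answer also lets you pin both numbers explicitly; a hedged example, with the second card label an assumption:)
modprobe v4l2loopback devices=2 video_nr=10,11 card_label="OBS Cam","DSLR Cam" exclusive_caps=1,1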
Telling each producer to use its own device
Well, tell one producer to use /dev/video10 and the other to use /dev/video2 (or whatever video devices you got).
e.g.
gphoto2 --stdout --capture-movie | gst-launch-1.0 \
fdsrc fd=0 \
! decodebin name=dec \
! queue \
! videoconvert \
! tee \
! v4l2sink device=/dev/video10
and configure OBS to use /dev/video2.
Or the other way round.
Just don't use the same video device for both producers.
(Also make sure that your consumers use the correct video device.)

Error in calculating pps for h264 rtp stream

I have a problem in my GStreamer pipeline that causes the sprop-parameter-sets to (I think) overflow its buffer. I am doing this on an iMX6 board, my pipeline is appsrc format=3 ! imxvpuenc_h264 ! rtph264pay, and I use an RTSP server for accessing the pipeline. The pipeline works if a static image is sent, but in the case of a video it stops working by calculating the wrong PPS.
I have tried using a static sprop-parameter-sets for rtph264pay by setting its property, but then the same thing happens in rtph264depay, which calculates a new sprop-parameter-sets. The output from the caps creation can be seen below:
0:01:15.970217009 578 0xa482ad50 INFO GST_EVENT gstevent.c:809:gst_event_new_caps: creating caps event application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, packetization-mode=(string)1, sprop-parameter-sets=(string)"Z0JAIKaAUAIGQAA\=\,aM48gP94AAIS4AAg2AACAudABxMbtz5ZqJ6U4vk7wAAQMgABAOgA5R6ZQkwQNaTPhfwAQAAgjAACD54YHcvx9FXG9ON62mcABAAFAAEAYbX2rm8Qe4mSKvXrwAAQBgACNJAZdcgDiEnNE5djN4GAAIJhoAKAEnAmvb0KVFQMwyGTwAAi4AIgBINIKIds1udUngAAgcAACAWS1IEgBehG7wDL75/W5JRBIi0WrX8gABAsAAEA0DVsAnpAKiCjVLNdK8AAEJ4AEAc/YVCfjDJO+t73KSd4AAII4AAgpAACAWwBo6CGMh3HueozX+Z4AAIJgAAgOgD2gYFqlGlGBjWn1MULXgAAg5AACAkEA8JLN5OJHLJcZmDo+eAACC8AAIDoAMAGGzM8zzGmJZwKeFL8AAQAAgKhbICDBChH5BKlw+PuMscAACACAAcACA3uGjeSK7gZZzT+NH/ewABDWAAEEQsALG1gYcE5FEbXp1hW8DAcAAQBQAnNfkbKQ/Pc/I9SGjgAwABAXAGdyJu7gpKxj9M5ERP/eAA6MAAIBgopwP8Sbdqzl4CjgAAQMwABAAAHALgpUcLtczR+Yjocj/eBgACC0YACtjKAXenmNmgRczT4AAIF4AAgDgAEASJqHnyzxQfCXUdO3gAAgoAACBgaSADVwoxVTFA7X0vaZsnexAU7CW/gAAgvAAQoABAXFGq3qUtmUv9VYp8AACCaEAIA7Bmj1M+lA7...
and this continues for about a hundred or more lines, and the device crashes if the pipeline isn't stopped. There should be only a few more characters after the first comma. Can someone tell me why this happens and provide a solution?

Extracting I-Frames from H264 in MPEG-TS in C

I am experimenting with video and would like to know how I can extract I-frames from H264 contained in an MPEG-TS container.
What I want to do is generate preview images out of a video stream.
As the I-frame is supposed to be a complete picture from which P- and B-frames derive, is there a possibility to just extract the data of the picture without having to decode it using a codec?
I have already done some work with the MPEG-TS container format, but I am not very specialized in codecs.
I am rather in search of information.
Thanks a lot.
I am no expert in this domain but I believe the answer to your question is NO.
If you want to save the I-frame as a JPEG image, you still need to "transcode" the video frame, i.e., you first need to decode the I-frame using an H264 decoder and then encode it using a JPEG encoder. This is because the JPEG encoder does not understand an H264 frame; it only accepts uncompressed video frames as input.
As an aside, since the input to the JPEG encoder is an uncompressed frame, you can generate a JPEG image from any type of frame (I/P/B) as it would already be decoded (using reference I frame, if needed) before feeding to the encoder.
As others have noted, decoding H.264 is complicated. You could write your own decoder, but it is a major effort. Why not use an existing decoder?
Intel's IPP library has the basic building blocks for a decoder and a sample decoder:
Code Samples for the Intel® Integrated Performance Primitives
There's libavcodec:
Using libavformat and libavcodec
Revised avcodec_sample.0.4.9.cpp
I am not an expert in this domain either, but I've played with decoding. Use this GStreamer pipeline to extract previews from video.mp4:
gst-launch -v filesrc location=./video.mp4 ! qtdemux name=demux demux.video_00 ! ffdec_h264 ! videorate ! 'video/x-raw-yuv,framerate=1/1' ! jpegenc ! multifilesink location=image-%05d.jpeg
If you want to write some code, replace videorate with appsrc/appsink elements, and write a control program connecting the two pipelines (see example):
filesrc location=./video.mp4 ! qtdemux name=demux demux.video_00 ! ffdec_h264 ! appsink
appsrc ! 'video/x-raw-yuv,framerate=1/1' ! jpegenc ! multifilesink location=image-%05d.jpeg
Buffers without the GST_BUFFER_FLAG_DELTA_UNIT flag set are I-frames. You can safely skip many frames and start decoding the stream at any I-frame.
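(In GStreamer 1.x terms, checking that flag from an appsink "new-sample" callback might look like this; a sketch, not tied to the 0.10 pipeline above, assuming an appsink with emit-signals enabled:)
/* Connected via g_signal_connect (appsink, "new-sample", ...) */
static GstFlowReturn
on_new_sample (GstAppSink *appsink, gpointer user_data)
{
  GstSample *sample = gst_app_sink_pull_sample (appsink);
  GstBuffer *buf = gst_sample_get_buffer (sample);

  /* No DELTA_UNIT flag means this buffer is a keyframe (I-frame) */
  if (!GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_DELTA_UNIT)) {
    /* hand the buffer to the decoder / JPEG branch here */
  }

  gst_sample_unref (sample);
  return GST_FLOW_OK;
}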
