GStreamer: Pipeline to write data to file based on condition - c

Is there a way to write data in a GStreamer pipeline to a file based on an (external) condition?
I have an application/code, which streams/displays video to the screen and continuously writes it to a file (it works fine).
I would like to have the GStreamer pipeline to only write to a file if an external condition is true (at runtime - I don't know the condition in advance).
What I have done so far:
I carefully searched the official GStreamer documentation, where I found some information on appsink, but I don't really see how to apply it based on an (external) condition.
I also used 'dynamic pipelines' as a search term, which seems to describe the modification of GStreamer pipelines based on conditions.
I also searched the GStreamer mailing list and found this post, which uses the gst_element_set_locked_state() function.
I added
if (condition) {
    gst_element_set_locked_state (videosink, TRUE);
} else {
    gst_element_set_locked_state (videosink, FALSE);
}
to my code, but then the pipeline would not work at all (it only displayed a black image).
Another way is described on https://coaxion.net/blog/2014/01/gstreamer-dynamic-pipelines/ in Example 2 with the corresponding code being available on GitHub (https://github.com/sdroege/gst-snippets/blob/217ae015aaddfe3f7aa66ffc936ce93401fca04e/dynamic-tee-vsink.c).
It seems to use a callback and the gst_element_set_state (sink->sink, GST_STATE_NULL) function call to write to a file based on an (external) condition.
Applying this call in the same way as the one above makes the pipeline display fine, but it still results in continuous (rather than conditional) output to the file:
if (condition) {
    gst_element_set_state (videosink, GST_STATE_PLAYING);
} else {
    gst_element_set_state (videosink, GST_STATE_NULL);
}
gst_pad_add_probe () could also be a way to dynamically change the output to a file, but despite having looked through the GStreamer documentation, I don't know how to use this function correctly.
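My understanding from the documentation is that a buffer probe on the file branch could drop data while the condition is false, roughly like the untested sketch below (file_queue and the condition variable are placeholders, and I am not sure whether dropping buffers before the muxer would leave the file playable):
/* Untested sketch: drop buffers on the file branch while `condition' is FALSE. */
static gboolean condition = FALSE;

static GstPadProbeReturn
file_branch_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  return condition ? GST_PAD_PROBE_OK : GST_PAD_PROBE_DROP;
}

/* installed once, e.g. right after building the pipeline;
 * `file_queue' is a placeholder for the queue element feeding the filesink */
GstPad *file_branch_pad = gst_element_get_static_pad (file_queue, "src");
gst_pad_add_probe (file_branch_pad, GST_PAD_PROBE_TYPE_BUFFER,
    file_branch_probe, NULL, NULL);
gst_object_unref (file_branch_pad);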

For your requirement you need tee and valve elements.
tee splits the pipeline into one branch for displaying in a window and one for writing to a file. valve is the condition you are looking for: its drop property discards the buffers flowing through it.
Your pipeline will be like:
gst-launch-1.0 ksvideosrc ! videoconvert ! tee name=t ! queue ! valve drop=false ! autovideosink t. ! queue ! valve drop=false ! openh264enc ! h264parse ! mp4mux ! filesink location="test.mp4" -v --eos-on-shutdown
When your condition occurs, set the drop property of the valve on the file branch to true so that nothing more is written to the file.
In C/C++:
if (condition)
    g_object_set (videoValve, "drop", TRUE, NULL);
else
    g_object_set (videoValve, "drop", FALSE, NULL);
WARNING:
The valve's drop property must remain false until data has flowed through everything in the pipeline. In other words, only set drop to true once the pipeline is in the PLAYING state. You can adjust your code accordingly, for example by triggering the mechanism from your bus callback, where you can query the pipeline state.
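As a rough illustration of that (assuming you give the valve on the file branch a name, e.g. valve name=filevalve, so it can be looked up; the names here are placeholders, not your actual code):
/* Sketch: toggle the file-branch valve only once the pipeline has reached PLAYING.
 * Assumes the launch line contains "valve name=filevalve" on the file branch. */
GstElement *valve = gst_bin_get_by_name (GST_BIN (pipeline), "filevalve");
GstState state;

gst_element_get_state (pipeline, &state, NULL, 0); /* non-blocking state query */
if (state == GST_STATE_PLAYING)
    g_object_set (valve, "drop", condition ? TRUE : FALSE, NULL);

gst_object_unref (valve);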
Note: ksvideosrc is Windows-only; on Linux use v4l2src instead.
If you build your application like this it will work; I use a similar scenario.

Related

Running two v4l2loopback devices with their individual properties

Working with v4l2loopback devices I can run these two virtual devices:
a) running the preview image from a Canon DSLR via USB through v4l2loopback into OBS:
modprobe v4l2loopback
gphoto2 --stdout --capture-movie | gst-launch-1.0 fdsrc fd=0 ! decodebin name=dec ! queue ! videoconvert ! tee ! v4l2sink device=/dev/video0
Found here, and it works.
b) Streaming the output of OBS into a browser based conferencing system, like this:
modprobe v4l2loopback devices=1 video_nr=10 card_label="OBS Cam" exclusive_caps=1
Found here, this also works.
However, I need to run both a) and b) at the same time, which isn't working as expected. They interfere with each other; it seems they use the same buffer, and the video flips back and forth between the two producers.
What I learned and tried:
A kernel module can only be loaded once. The v4l2loopback module can be unloaded using the command modprobe -r v4l2loopback. I don't know whether loading it a second time is ignored or unloads the previous instance.
I've tried to load the module with devices=2 as an option as well as different video devices, but I can't find the right syntax.
As there is already an accepted answer, I assume your problem has been solved. Still, I was quite a newbie and couldn't get the syntax right even after the answer above (i.e. how to set up the second device).
After a bit more searching, I found a website that explains how to add multiple devices, with an example:
modprobe v4l2loopback video_nr=3,4,7 card_label="device number 3","the number four","the last one"
This will create three devices with the card names passed via card_label:
/dev/video3 -> device number 3
/dev/video4 -> the number four
/dev/video7 -> the last one
When I was trying to use my Nikon camera as a webcam and OBS as a virtual camera for streaming, having full control over the naming of my video devices was important. I hope this answer helps some others as well.
from your description ("the video flips back and forth between the two producers") it seems that both producers are writing to the same video-device.
to fix this, you need to do two things:
create 2 video-devices
tell each producer to use their own video device
creating multiple video-devices
as documented this can be accomplished by specifying devices=2 when loading the module.
taking your invocation of modprobe, this would mean:
modprobe v4l2loopback devices=2 video_nr=10 card_label="OBS Cam" exclusive_caps=1
this will create two new devices, the first one will be /dev/video10 (since you specified video_nr), the 2nd one will take the first free video-device.
on my system (which has a webcam occupying /dev/video0 and /dev/video1) this is /dev/video2
telling each producer to use their own device
well, tell one producer to use /dev/video10 and the other to use /dev/video2 (or whatever video-devices you got)
e.g.
gphoto2 --stdout --capture-movie | gst-launch-1.0 \
fdsrc fd=0 \
! decodebin name=dec \
! queue \
! videoconvert \
! tee \
! v4l2sink device=/dev/video10
and configure obs to use /dev/video2.
or the other way round.
just don't use the same video-device for both producers.
(also make sure that your consumers use the correct video-device)

Error in calculating pps for h264 rtp stream

I have a problem in my GStreamer pipeline that causes the sprop-parameter-sets to (I think) overflow its buffer. I am doing this on an i.MX6 board, my pipeline is appsrc format=3 ! imxvpuenc_h264 ! rtph264pay, and I use an RTSP server for accessing the pipeline. The pipeline works if a static image is sent, but in the case of a video it stops working because it calculates the wrong PPS.
I have tried using static sprop-parameter-sets for rtph264pay by setting its property, but in that case the same thing happens in rtph264depay, which calculates a new sprop-parameter-sets. The output from the caps creation can be seen below:
0:01:15.970217009 578 0xa482ad50 INFO GST_EVENT gstevent.c:809:gst_event_new_caps: creating caps event application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, packetization-mode=(string)1, sprop-parameter-sets=(string)"Z0JAIKaAUAIGQAA\=\,aM48gP94AAIS4AAg2AACAudABxMbtz5ZqJ6U4vk7wAAQMgABAOgA5R6ZQkwQNaTPhfwAQAAgjAACD54YHcvx9FXG9ON62mcABAAFAAEAYbX2rm8Qe4mSKvXrwAAQBgACNJAZdcgDiEnNE5djN4GAAIJhoAKAEnAmvb0KVFQMwyGTwAAi4AIgBINIKIds1udUngAAgcAACAWS1IEgBehG7wDL75/W5JRBIi0WrX8gABAsAAEA0DVsAnpAKiCjVLNdK8AAEJ4AEAc/YVCfjDJO+t73KSd4AAII4AAgpAACAWwBo6CGMh3HueozX+Z4AAIJgAAgOgD2gYFqlGlGBjWn1MULXgAAg5AACAkEA8JLN5OJHLJcZmDo+eAACC8AAIDoAMAGGzM8zzGmJZwKeFL8AAQAAgKhbICDBChH5BKlw+PuMscAACACAAcACA3uGjeSK7gZZzT+NH/ewABDWAAEEQsALG1gYcE5FEbXp1hW8DAcAAQBQAnNfkbKQ/Pc/I9SGjgAwABAXAGdyJu7gpKxj9M5ERP/eAA6MAAIBgopwP8Sbdqzl4CjgAAQMwABAAAHALgpUcLtczR+Yjocj/eBgACC0YACtjKAXenmNmgRczT4AAIF4AAgDgAEASJqHnyzxQfCXUdO3gAAgoAACBgaSADVwoxVTFA7X0vaZsnexAU7CW/gAAgvAAQoABAXFGq3qUtmUv9VYp8AACCaEAIA7Bmj1M+lA7...
and this continues for about a hundred or more lines, and the device crashes if the pipeline isn't stopped. There should only be a few more characters after the first comma. Can someone tell me why this happens and suggest a solution?

Switch on the fly between two sources in gstreamer

I'm currently facing a problem that I'm not able to resolve yet, but I hope I can do it with your help.
I'm currently developing an application with GStreamer to play back different kinds of files: video and photo (AVI and JPEG respectively). The user has to be able to switch between those files. I have achieved this, but only by creating a new pipeline whenever the file format is different, and then the screen randomly blinks between the two files while loading.
Now, I've played with valve just for JPEG files and it works like a charm. But I'm stuck at the step of implementing video files; I don't know how to switch between two video files. The code below doesn't work for video files, it freezes:
gst-launch-1.0 filesrc name=photosrc ! jpegdec ! valve name=playvalve drop=false ! imxg2dvideosink
Further on in my code, I set the valve to drop, set the relevant elements to the READY state, change the location of filesrc and return to the PLAYING state.
I took a look at input-selector, but it appears that the inactive input keeps playing when one switches to the other (cf. the documentation). Is it possible to set an input to READY to avoid this behavior?
Thanks a lot for helping
Take a look at the https://github.com/RidgeRun/gst-interpipe plugin.
You can create two different source mini-pipelines ending with interpipesink and, at runtime, change which one the interpipesrc connects to. Make sure to have the same format on both ends, or use the renegotiation capability; however, I have not tried that yet.
Check the wiki for dynamic switching details:
/* Create pipelines */
GstElement *pipe1 = gst_parse_launch ("videotestsrc ! interpipesink name=camera1", NULL);
GstElement *pipe2 = gst_parse_launch ("videotestsrc ! interpipesink name=camera2", NULL);
GstElement *pipe3 = gst_parse_launch ("interpipesrc name=src listen-to=camera1 ! fakesink", NULL);
/* Grab a reference to the interpipesrc */
GstElement *src = gst_bin_get_by_name(pipe3, "src");
/* Perform the switch */
g_object_set (src, "listen-to", "camera2", NULL);
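The wiki snippet leaves out starting the pipelines; presumably you would set them all to PLAYING before performing the switch, roughly like this (a sketch only, not part of the wiki example):
/* Sketch: start all three pipelines before switching */
gst_element_set_state (pipe1, GST_STATE_PLAYING);
gst_element_set_state (pipe2, GST_STATE_PLAYING);
gst_element_set_state (pipe3, GST_STATE_PLAYING);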
It seems a little bit tricky for me to compile this plugin for imx6 target...
Is it possible to change a pipeline like this:
filesrc ! avidemux ! vpudec ! imxg2dsink
to
filesrc ! jpegdec ! imxg2dsink
without setting the whole pipeline to NULL?
I've tried creating a new pipeline each time I change the location of filesrc; it works, but sometimes the framebuffer blinks.
When I change the location of filesrc for a JPEG file, it works. When I change the location for an AVI file, the pipeline doesn't restart correctly:
avidemux gstavidemux.c:5678:gst_avi_demux_loop:<demux> error: Internal data stream error.
avidemux gstavidemux.c:5678:gst_avi_demux_loop:<demux> error: streaming stopped, reason not-linked
Thank you.

Any way to avoid a loop monitoring the multi handle in libcurl?

I've read some examples from the libcurl homepage. They always use a loop that monitors the multi handle while downloading through curl_multi_perform, like below:
curl_multi_add_handle(multi_handle, easy_handle);
do {
curl_multi_wait(…);
curl_multi_perform(multi_handle, &still_running);
} while (still_running);
That makes my program block in that section.
I want libcurl to invoke a callback after any one of the easy handles has finished downloading.
For example:
The server can receive requests and pass them to the multi handle to download asynchronously.
The server can still receive requests while the multi handle is downloading.
The two are independent (asynchronous, in other words).
Typically curl_multi_perform is called in a loop to complete all of the curl-related work, such as an HTTP transaction.
With the code written the way you have it, you will not get asynchronous behavior out of libcurl. There are ways to achieve it.
In a typical implementation you will have a main loop where you deal with a number of tasks. For example:
do
{
execute task1
execute task2
.............
execute taskn
}
while(condition)
In that loop you can call curl_multi_perform, so the main loop looks like:
do
{
execute task1
execute task2
.............
execute taskn
curl_multi_perform(curlm, &count);
}
while(condition)
That way you carry out all of your tasks while curl_multi_perform is called from time to time, and you get an asynchronous way of using libcurl.
Please check the documentation; depending on the return value you may be able to skip calling curl_multi_perform (I remember reading about this previously).
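To get notified when an individual transfer finishes without blocking, one option (a sketch only; handle_finished_transfer is a placeholder for your own completion callback, not a libcurl function) is to poll curl_multi_info_read after each call to curl_multi_perform:
#include <curl/curl.h>

/* Placeholder for your own completion callback */
static void
handle_finished_transfer (CURL *easy, CURLcode result)
{
  /* e.g. notify the owner of the request, log the result, etc. */
  (void) easy;
  (void) result;
}

/* Sketch: call this from your main loop instead of blocking in a wait loop. */
static void
drive_curl (CURLM *multi_handle)
{
  int still_running = 0;
  int msgs_left = 0;
  CURLMsg *msg;

  curl_multi_perform (multi_handle, &still_running);

  /* Check whether any easy handle has completed since the last call */
  while ((msg = curl_multi_info_read (multi_handle, &msgs_left)) != NULL) {
    if (msg->msg == CURLMSG_DONE) {
      handle_finished_transfer (msg->easy_handle, msg->data.result);
      curl_multi_remove_handle (multi_handle, msg->easy_handle);
      curl_easy_cleanup (msg->easy_handle);
    }
  }
}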

c - GTK3.0 GUI freezes when using "g_spawn_async_with_pipes()"

I've written a very simple front end for ffmpeg (converting video to mp3) in GTK 3.0/C for Linux. For spawning ffmpeg I use g_spawn_async_with_pipes(). I thought this was the right way to execute something like that without having the GUI freeze up, but it freezes anyway. So how can I prevent it from freezing, e.g. so that I can display a spinner?
You might need to add something like:
while (gtk_events_pending ()) {
gtk_main_iteration_do (FALSE);
}
That is, to let GTK process the pending events (like drawing the UI).
I suppose you are processing the output of ffmpeg with g_io_add_watch or similar.
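For completeness, a rough sketch of that non-blocking pattern (the argv list and callback name are placeholders, not your actual code): spawn ffmpeg with g_spawn_async_with_pipes, wrap its stderr pipe (where ffmpeg prints its progress) in a GIOChannel, and let the GTK main loop call you back, so the GUI never blocks:
#include <glib.h>

/* Placeholder callback: invoked by the main loop whenever ffmpeg writes output */
static gboolean
on_ffmpeg_output (GIOChannel *channel, GIOCondition cond, gpointer user_data)
{
  gchar *line = NULL;

  if (cond & (G_IO_HUP | G_IO_ERR))
    return FALSE;                          /* pipe closed: remove the watch */

  g_io_channel_read_line (channel, &line, NULL, NULL, NULL);
  /* ... update a progress label or spinner here ... */
  g_free (line);
  return TRUE;                             /* keep watching */
}

/* e.g. called from your button-click handler */
static void
start_conversion (void)
{
  gchar *argv[] = { "ffmpeg", "-i", "in.avi", "out.mp3", NULL };  /* placeholder command line */
  gint err_fd;
  GPid pid;
  GIOChannel *channel;

  /* spawn ffmpeg without blocking the GTK main loop */
  g_spawn_async_with_pipes (NULL, argv, NULL, G_SPAWN_SEARCH_PATH,
      NULL, NULL, &pid, NULL, NULL, &err_fd, NULL);

  channel = g_io_channel_unix_new (err_fd);
  g_io_add_watch (channel, G_IO_IN | G_IO_HUP | G_IO_ERR, on_ffmpeg_output, NULL);
}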
