I need to use GStreamer's playbin to play a video at a different frame rate than the video's original framerate.
For example, the actual framerate of a video recorded with a camera's 30 fps setting is 30000/1001, but I would like to force playback at exactly 30 fps.
gst-launch-1.0 -v playbin uri=http://media.w3.org/2010/05/video/movie_300.mp4 video-sink="videorate ! capsfilter caps=video/x-raw,framerate=3000/1000 ! videoconvert ! autovideosink"
I tried the above, and I can see that the framerate on the videorate element's path changes to "framerate=(fraction)3/1".
But the remaining elements after videorate still keep the original framerate.
Is there a way to reset the caps of all paths or to override the framerate value of the original source when parsing?
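Note that 3000/1000 reduces to 3/1, which is why videorate reports 3 fps; for true 30 fps the caps would be framerate=30/1. Here is a minimal C version of what I'm attempting, assuming 30/1 is the intended rate (the URI and sink are the same as in the command above):

#include <gst/gst.h>

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  /* playbin parses the quoted video-sink description into a bin */
  GstElement *pipeline = gst_parse_launch (
      "playbin uri=http://media.w3.org/2010/05/video/movie_300.mp4 "
      "video-sink=\"videorate ! video/x-raw,framerate=30/1 "
      "! videoconvert ! autovideosink\"", NULL);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* block until EOS or an error */
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
  if (msg)
    gst_message_unref (msg);
  gst_object_unref (bus);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}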
Working with v4l2loopback, I can run these two virtual devices:
a) running the preview image from a Canon DSLR via USB through v4l2loopback into OBS:
modprobe v4l2loopback
gphoto2 --stdout --capture-movie | gst-launch-1.0 fdsrc fd=0 ! decodebin name=dec ! queue ! videoconvert ! tee ! v4l2sink device=/dev/video0
Found here, and it works.
b) Streaming the output of OBS into a browser-based conferencing system, like this:
modprobe v4l2loopback devices=1 video_nr=10 card_label="OBS Cam" exclusive_caps=1
Found here, this also works.
However, I need to run both a) and b) at the same time, which isn't working as expected. They interfere; it seems they are using the same buffer, and the video flips back and forth between the two producers.
What I learned and tried:
A kernel module can only be loaded once. The v4l2loopback module can be unloaded with modprobe -r v4l2loopback. I don't know whether attempting to load it a second time is ignored or replaces the previous instance.
I've tried to load the module with devices=2 as an option as well as different video devices, but I can't find the right syntax.
As there is already an accepted answer, I assume your problem has been solved. Still, I was quite a newbie and couldn't work out the syntax even after the answer above (i.e. how to set up video2).
After a bit more searching, I found a website that explains how to add multiple devices, with an example.
modprobe v4l2loopback video_nr=3,4,7 card_label="device number 3","the number four","the last one"
This will create three devices, with the card names passed via the card_label parameter:
/dev/video3 -> device number 3
/dev/video4 -> the number four
/dev/video7 -> the last one
When I was trying to use my Nikon camera as a webcam and OBS as a virtual camera for streaming, having full control over the naming of my video devices was important. I hope this answer helps others as well.
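As a quick sanity check that the labels were applied, you can query the card name straight from the device node. A small C sketch using the standard V4L2 VIDIOC_QUERYCAP ioctl (the device path matches the video_nr example above):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main (void)
{
  /* /dev/video3 matches video_nr=3 from the modprobe line above */
  int fd = open ("/dev/video3", O_RDWR);
  if (fd < 0) {
    perror ("open");
    return 1;
  }

  struct v4l2_capability cap;
  memset (&cap, 0, sizeof cap);
  if (ioctl (fd, VIDIOC_QUERYCAP, &cap) == 0)
    printf ("card label: %s\n", cap.card);  /* should print "device number 3" */

  close (fd);
  return 0;
}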
from your description ("the video flips back and forth between the two producers") it seems that both producers are writing to the same video-device.
to fix this, you need to do two things:
create 2 video-devices
tell each producer to use their own video device
creating multiple video-devices
as documented this can be accomplished by specifying devices=2 when loading the module.
taking your invocation of modprobe, this would mean:
modprobe v4l2loopback devices=2 video_nr=10 card_label="OBS Cam" exclusive_caps=1
this will create two new devices: the first one will be /dev/video10 (since you specified video_nr), and the 2nd one will take the first free video-device (if you want to pin both numbers, video_nr also accepts a comma-separated list, e.g. video_nr=10,11).
on my system (that has a webcam, which occupies both /dev/video0 and /dev/video1) this is /dev/video2
telling each producer to use their own device
well, tell one producer to use /dev/video10 and the other to use /dev/video2 (or whatever video-devices you got)
e.g.
gphoto2 --stdout --capture-movie | gst-launch-1.0 \
fdsrc fd=0 \
! decodebin name=dec \
! queue \
! videoconvert \
! tee \
! v4l2sink device=/dev/video10
and configure obs to use /dev/video2.
or the other way round.
just don't use the same video-device for both producers.
(also make sure that your consumers use the correct video-device)
I have a problem in my GStreamer pipeline that causes the sprop-parameter-sets to (I think) overflow its buffer. I am doing this on an i.MX6 board, my pipeline is appsrc format=3 ! imxvpuenc_h264 ! rtph264pay, and I use an RTSP server for accessing the pipeline. The pipeline works if a static image is sent, but in the case of video it stops working by calculating a wrong PPS.
I have tried using static sprop-parameter-sets for rtph264pay by setting its property, but in that case the same thing happens in rtph264depay, which calculates a new sprop-parameter-sets value. The output from the caps creation can be seen below:
0:01:15.970217009 578 0xa482ad50 INFO GST_EVENT gstevent.c:809:gst_event_new_caps: creating caps event application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, packetization-mode=(string)1, sprop-parameter-sets=(string)"Z0JAIKaAUAIGQAA\=\,aM48gP94AAIS4AAg2AACAudABxMbtz5ZqJ6U4vk7wAAQMgABAOgA5R6ZQkwQNaTPhfwAQAAgjAACD54YHcvx9FXG9ON62mcABAAFAAEAYbX2rm8Qe4mSKvXrwAAQBgACNJAZdcgDiEnNE5djN4GAAIJhoAKAEnAmvb0KVFQMwyGTwAAi4AIgBINIKIds1udUngAAgcAACAWS1IEgBehG7wDL75/W5JRBIi0WrX8gABAsAAEA0DVsAnpAKiCjVLNdK8AAEJ4AEAc/YVCfjDJO+t73KSd4AAII4AAgpAACAWwBo6CGMh3HueozX+Z4AAIJgAAgOgD2gYFqlGlGBjWn1MULXgAAg5AACAkEA8JLN5OJHLJcZmDo+eAACC8AAIDoAMAGGzM8zzGmJZwKeFL8AAQAAgKhbICDBChH5BKlw+PuMscAACACAAcACA3uGjeSK7gZZzT+NH/ewABDWAAEEQsALG1gYcE5FEbXp1hW8DAcAAQBQAnNfkbKQ/Pc/I9SGjgAwABAXAGdyJu7gpKxj9M5ERP/eAA6MAAIBgopwP8Sbdqzl4CjgAAQMwABAAAHALgpUcLtczR+Yjocj/eBgACC0YACtjKAXenmNmgRczT4AAIF4AAgDgAEASJqHnyzxQfCXUdO3gAAgoAACBgaSADVwoxVTFA7X0vaZsnexAU7CW/gAAgvAAQoABAXFGq3qUtmUv9VYp8AACCaEAIA7Bmj1M+lA7...
This continues for a hundred or more lines, and the device crashes if the pipeline isn't stopped. The string should only have a few more characters after the first comma. Can someone tell me why this happens and suggest a solution?
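For reference, this is roughly how I pinned the static sprop-parameter-sets in my test, sketched against a stock gst-rtsp-server setup (the pay0 name is the usual factory convention, and "SPS_B64,PPS_B64" is a placeholder, not my real base64 pair):

#include <gst/gst.h>
#include <gst/rtsp-server/rtsp-server.h>

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GstRTSPServer *server = gst_rtsp_server_new ();
  GstRTSPMountPoints *mounts = gst_rtsp_server_get_mount_points (server);
  GstRTSPMediaFactory *factory = gst_rtsp_media_factory_new ();

  /* "SPS_B64,PPS_B64" is a placeholder for the real base64 pair */
  gst_rtsp_media_factory_set_launch (factory,
      "( appsrc format=3 ! imxvpuenc_h264 ! rtph264pay name=pay0 pt=96 "
      "sprop-parameter-sets=\"SPS_B64,PPS_B64\" )");

  gst_rtsp_mount_points_add_factory (mounts, "/test", factory);
  g_object_unref (mounts);

  gst_rtsp_server_attach (server, NULL);
  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}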
I'm currently facing a problem that I haven't been able to resolve yet, but I hope I can with your help.
I'm developing an application with GStreamer to play back different kinds of files: video and photo (AVI and JPEG respectively). The user must be able to switch between those files. I have achieved this, but by creating a new pipeline whenever the file format differs, and the screen randomly blinks between the loading of two files.
Now, I've played with valve for JPEG files only, and it works like a charm. But I'm stuck at the step of handling video files: I don't know how to switch between two video files. The pipeline below doesn't work for video files; it freezes:
gst-launch-1.0 filesrc name=photosrc ! jpegdec ! valve name=playvalve drop=false ! imxg2dvideosink
Further on in my code, I set the valve to drop, set the relevant elements to the READY state, change the location of filesrc, and return to the PLAYING state.
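In C, that sequence looks roughly like this (a sketch; photosrc and playvalve are the element names from the pipeline above):

#include <gst/gst.h>

/* sketch of the switch sequence described above */
static void
switch_file (GstElement *pipeline, const gchar *new_location)
{
  GstElement *src = gst_bin_get_by_name (GST_BIN (pipeline), "photosrc");
  GstElement *valve = gst_bin_get_by_name (GST_BIN (pipeline), "playvalve");

  g_object_set (valve, "drop", TRUE, NULL);            /* stop dataflow   */
  gst_element_set_state (src, GST_STATE_READY);        /* release file    */
  g_object_set (src, "location", new_location, NULL);  /* point elsewhere */
  gst_element_sync_state_with_parent (src);            /* back to PLAYING */
  g_object_set (valve, "drop", FALSE, NULL);           /* reopen valve    */

  gst_object_unref (src);
  gst_object_unref (valve);
}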
I took a look at input-selector, but it appears that the non-selected file keeps playing when one switches to the other (cf. the docs). Is it possible to set an input to READY to avoid this behaviour?
Thanks a lot for helping
Take a look at https://github.com/RidgeRun/gst-interpipe plugin.
You can create two different source mini-pipelines ending with interpipesink and, at runtime, change which one connects to the interpipesrc. Make sure to have the same format on both ends, or use the renegotiation capability; however, I have not tried that yet.
Check the wiki for dynamic-switching details:
/* Create pipelines */
GstElement *pipe1 = gst_parse_launch ("videotestsrc ! interpipesink name=camera1", NULL);
GstElement *pipe2 = gst_parse_launch ("videotestsrc ! interpipesink name=camera2", NULL);
GstElement *pipe3 = gst_parse_launch ("interpipesrc name=src listen-to=camera1 ! fakesink", NULL);
/* Grab a reference to the interpipesrc */
GstElement *src = gst_bin_get_by_name (GST_BIN (pipe3), "src");
/* Perform the switch */
g_object_set (src, "listen-to", "camera2", NULL);
It seems a little bit tricky for me to compile this plugin for the i.MX6 target...
Is it possible to change a pipeline like this:
.---------.    .----------.    .----------.    .------------.
| filesrc |    | avidemux |    |  vpudec  |    | imxg2dsink |
|     src |--->| sink src |--->| sink src |--->| sink       |
'---------'    '----------'    '----------'    '------------'
to
.---------.    .----------.    .------------.
| filesrc |    | jpegdec  |    | imxg2dsink |
|     src |--->| sink src |--->| sink       |
'---------'    '----------'    '------------'
without setting the whole pipeline to NULL?
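Concretely, I imagine something like the following sketch (the element names src, demux, dec, and sink are assumed, and I know a robust version would also need to block the source pad with a probe before unlinking):

#include <gst/gst.h>

/* sketch: swap the avidemux/vpudec branch for a jpegdec branch
 * without taking the whole pipeline to NULL */
static void
switch_to_jpeg (GstElement *pipeline, const gchar *jpeg_file)
{
  GstElement *src = gst_bin_get_by_name (GST_BIN (pipeline), "src");
  GstElement *demux = gst_bin_get_by_name (GST_BIN (pipeline), "demux");
  GstElement *dec = gst_bin_get_by_name (GST_BIN (pipeline), "dec");
  GstElement *sink = gst_bin_get_by_name (GST_BIN (pipeline), "sink");

  /* stop only the branch elements, not the sink */
  gst_element_set_state (src, GST_STATE_NULL);
  gst_element_set_state (demux, GST_STATE_NULL);
  gst_element_set_state (dec, GST_STATE_NULL);
  gst_bin_remove_many (GST_BIN (pipeline), demux, dec, NULL);
  gst_object_unref (demux);
  gst_object_unref (dec);

  /* insert the photo branch and relink */
  GstElement *jpegdec = gst_element_factory_make ("jpegdec", NULL);
  gst_bin_add (GST_BIN (pipeline), jpegdec);
  g_object_set (src, "location", jpeg_file, NULL);
  gst_element_link_many (src, jpegdec, sink, NULL);

  gst_element_sync_state_with_parent (jpegdec);
  gst_element_sync_state_with_parent (src);

  gst_object_unref (src);
  gst_object_unref (sink);
}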
I've tried creating a new pipeline each time I change the location of filesrc; it works, but sometimes the framebuffer blinks...
When I change the location of filesrc to a JPEG file, it works. When I change the location to an AVI file, the pipeline doesn't restart correctly:
avidemux gstavidemux.c:5678:gst_avi_demux_loop:<demux> error: Internal data stream error.
avidemux gstavidemux.c:5678:gst_avi_demux_loop:<demux> error: streaming stopped, reason not-linked
Thank you.
I am transcoding a video using the FFmpeg API in C code.
I am trying to set the video bit rate as shown below:
ovCodecCtx->bit_rate = 100 * 1000;
The encoder I am using is libx264.
But this parameter does not take effect, and the resulting video quality is very bad.
I have even tried setting related parameters like rc_min_rate, rc_max_rate, etc., but the video quality is still very low, as these related parameters do not take effect either.
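For reference, this is roughly what I tried (the values are illustrative, and rc_buffer_size is just one of the "etc." parameters):

/* what I tried, roughly; values are illustrative */
ovCodecCtx->bit_rate = 100 * 1000;
ovCodecCtx->rc_min_rate = 100 * 1000;
ovCodecCtx->rc_max_rate = 100 * 1000;
ovCodecCtx->rc_buffer_size = 2 * 100 * 1000;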
Could any expert tell how one can set the bit rate correctly using the FFMPEG API?
Thanks
I have found the solution to my problem. In fact, somebody who was facing the same problem posted the solution on the ffmpeg (libav) user forum, and it works in my case too. I am posting the answer to my own question so that other users facing a similar issue might benefit from this post.
Problem:
Setting the video bit rate programmatically for the H.264 video codec was not honoured by the libx264 codec. Even though it worked for the MPEG-1, MPEG-2, and MPEG-4 video codecs, this setting was not recognised for the H.264 codec, and the resulting video quality was very bad.
Solution:
We need to set the pts for the decoded/resized frames before they are fed to the encoder.
The person who found the solution went through the ffmpeg.c source and was able to figure this out. We first need to rescale the AVFrame's pts from the stream's time_base to the codec's time_base to get a simple frame number (e.g. 1, 2, 3):
pic->pts = av_rescale_q(pic->pts, ost->time_base, ovCodecCtx->time_base);
avcodec_encode_video2(ovCodecCtx, &newpkt, pic, &got_packet_ptr);
And when we receive the encoded packet back from the libx264 codec, we need to rescale the pts and dts of the encoded video packet to the stream's time base:
newpkt.pts = av_rescale_q(newpkt.pts, ovCodecCtx->time_base, ost->time_base);
newpkt.dts = av_rescale_q(newpkt.dts, ovCodecCtx->time_base, ost->time_base);
Thanks
I am experimenting with video and would like to know how I can extract I-frames from H.264 video contained in an MPEG-TS container.
What I want to do is generate preview images out of a video stream.
As the I-frame is supposed to be a complete picture from which P- and B-frames derive, is there a possibility to just extract the picture data without having to decode it using a codec?
I have already done some work with the MPEG-TS container format, but I am not that specialized in codecs.
I am rather in search of information.
Thanks a lot.
I am no expert in this domain, but I believe the answer to your question is NO.
If you want to save the I-frame as a JPEG image, you still need to "transcode" the video frame, i.e. you first need to decode the I-frame using an H.264 decoder and then encode it using a JPEG encoder. This is because the JPEG encoder does not understand an H.264 frame; it only accepts uncompressed video frames as input.
As an aside, since the input to the JPEG encoder is an uncompressed frame, you can generate a JPEG image from any type of frame (I/P/B), as it will already have been decoded (using the reference I-frame, if needed) before being fed to the encoder.
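That said, finding which packets carry I-frames is cheap, even though producing an image still requires the decode/encode step above. A rough sketch with libavformat (old-style API to match the era of this discussion; error handling trimmed):

#include <stdio.h>
#include <libavformat/avformat.h>

int main (int argc, char *argv[])
{
  if (argc < 2)
    return 1;

  av_register_all ();

  AVFormatContext *fmt = NULL;
  if (avformat_open_input (&fmt, argv[1], NULL, NULL) < 0)
    return 1;
  avformat_find_stream_info (fmt, NULL);

  AVPacket pkt;
  while (av_read_frame (fmt, &pkt) >= 0) {
    /* demuxers set AV_PKT_FLAG_KEY on keyframe packets */
    if (pkt.flags & AV_PKT_FLAG_KEY)
      printf ("keyframe at pts %lld, stream %d\n",
              (long long) pkt.pts, pkt.stream_index);
    av_free_packet (&pkt);
  }

  avformat_close_input (&fmt);
  return 0;
}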
As others have noted, decoding H.264 is complicated. You could write your own decoder, but it is a major effort. Why not use an existing decoder?
Intel's IPP library has the basic building blocks for a decoder, plus a sample decoder:
Code Samples for the Intel® Integrated Performance Primitives
There's libavcodec:
Using libavformat and libavcodec
Revised avcodec_sample.0.4.9.cpp
I am not an expert in this domain either, but I've played with decoding. Use this GStreamer (0.10) pipeline to extract previews from video.mp4:
gst-launch -v filesrc location=./video.mp4 ! qtdemux name=demux demux.video_00 ! ffdec_h264 ! videorate ! 'video/x-raw-yuv,framerate=1/1' ! jpegenc ! multifilesink location=image-%05d.jpeg
If you want to write some code, replace videorate with appsrc/appsink elements and write a control program for the two pipelines (see the example):
filesrc location=./video.mp4 ! qtdemux name=demux demux.video_00 ! ffdec_h264 ! appsink
appsrc ! 'video/x-raw-yuv,framerate=1/1' ! jpegenc ! multifilesink location=image-%05d.jpeg
Buffers without the GST_BUFFER_FLAG_DELTA_UNIT flag set are I-frames. You can safely skip many frames and start decoding the stream at any I-frame.
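For example, the keyframe test in the control program could look like this (a sketch against the 0.10-era appsink API used above; pushing the buffer onward is left out):

#include <gst/gst.h>
#include <gst/app/gstappsink.h>

/* appsink "new-buffer" callback: keep only I-frames */
static GstFlowReturn
on_new_buffer (GstAppSink *sink, gpointer user_data)
{
  GstBuffer *buf = gst_app_sink_pull_buffer (sink);

  if (!GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_DELTA_UNIT)) {
    /* no DELTA_UNIT flag: this buffer is an I-frame; push it into
     * the jpegenc pipeline via its appsrc here */
  }

  gst_buffer_unref (buf);
  return GST_FLOW_OK;
}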