How can I simulate OpenFile in FFmpeg? - c

Most GIF capture software captures the screen, saves each frame as a separate image file on disk, then reads the files back into memory and combines them into a GIF, which makes the whole procedure very slow.
My idea is to capture the screen with the DirectX API (which can also capture DirectX windows faster, since it operates directly on the D3D device) to get the bitmap, save it to memory (such as a buffer), then pass that memory location to ffmpeg to produce the video. That way we don't need disk storage as an intermediate buffer, so it could be ten times faster, since the disk is now the slowest part of the PC.
The DirectX screen-capture part is already done. But I found that ffmpeg uses OpenFile to read the picture files, so maybe we can simulate OpenFile here?
If the answer is yes, how could we do it?

You can open a named pipe and use that as a source.
An example:
ffmpeg -f rawvideo -vcodec rawvideo -s $width$x$height$ -r $framerate$ -pix_fmt $pixelFormat$ -i "\\.\pipe\$pipeName$" Output.gif
You have to fix the format of the frames you are going to feed FFmpeg, hence the -s and the -pix_fmt parameters.
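Since the question is tagged C, here is a minimal sketch of the writer side of that pipe, assuming Windows, 1920x1080 BGRA frames, and a placeholder pipe name (capture_pipe) and capture function (getNextFrame) that are not from the question; your DirectX code goes behind getNextFrame:

#include <windows.h>
#include <stdint.h>

/* Hypothetical sketch: create the named pipe that ffmpeg reads from and
   push raw frames into it. Frame size must match the -s and -pix_fmt
   options you pass to ffmpeg (here 1920x1080 BGRA = 4 bytes per pixel). */
#define WIDTH  1920
#define HEIGHT 1080
#define FRAME_SIZE (WIDTH * HEIGHT * 4)

extern int getNextFrame(uint8_t *buffer);   /* placeholder for your capture code */

int main(void)
{
    HANDLE pipe = CreateNamedPipeA(
        "\\\\.\\pipe\\capture_pipe",        /* must match the -i argument */
        PIPE_ACCESS_OUTBOUND,               /* we only write to it        */
        PIPE_TYPE_BYTE | PIPE_WAIT,
        1, FRAME_SIZE, FRAME_SIZE, 0, NULL);
    if (pipe == INVALID_HANDLE_VALUE)
        return 1;

    /* Start ffmpeg now in another process; this call blocks until it connects. */
    ConnectNamedPipe(pipe, NULL);

    static uint8_t frame[FRAME_SIZE];
    while (getNextFrame(frame)) {
        DWORD written = 0;
        if (!WriteFile(pipe, frame, FRAME_SIZE, &written, NULL))
            break;                          /* ffmpeg closed the pipe */
    }

    CloseHandle(pipe);
    return 0;
}

With these assumptions the matching invocation would use -pix_fmt bgra -s 1920x1080 -i "\\.\pipe\capture_pipe"; ffmpeg then reads the raw frames from the pipe exactly as it would from a file, and no intermediate files ever touch the disk.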

Related

run ffmpeg in a loop on pngs for efficiency?

I am using ffmpeg to embed production data as text overlay into separate frames of animation saved as png files.
At the moment it's very inefficient: I need to sample values from the animation scene (such as the current frame and camera focal length), so I have a loop that samples the information for each frame, runs ffmpeg to embed the text for that frame, then closes ffmpeg and repeats until complete.
If I have two given string arrays (such as frame number and focal length), is there a way to ask ffmpeg to run as a single instance, but for each png in its list also reference the next item from the array to print the text, and therefore run only once over the entire set of frames?
Use the concat demuxer.
Make input.txt:
file '01.png'
file_packet_metadata length=50mm
duration 0.04
file '02.png'
file_packet_metadata length=35mm
duration 0.04
file '03.png'
file_packet_metadata length=18mm
duration 0.04
Encode, using the drawtext filter to add text:
ffmpeg -f concat -i input.txt -vf "drawtext=text='%{metadata\:length}':fontsize=22:fontcolor=white:x=w-tw-10:y=h-th-10,format=yuv420p" output.mp4
For positioning see How to position drawtext text.
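If the frame names and focal lengths already live in arrays in your code, the input.txt can be generated programmatically. A small C sketch, using the same example values as above (the array contents and file names are just placeholders):

#include <stdio.h>

/* Hypothetical sketch: write a concat-demuxer list that pairs each png
   with a per-frame metadata value, which drawtext then prints. */
int main(void)
{
    const char *frames[]  = { "01.png", "02.png", "03.png" };
    const char *lengths[] = { "50mm",   "35mm",   "18mm"   };
    const int   count     = 3;
    const double frame_duration = 0.04;     /* 25 fps */

    FILE *f = fopen("input.txt", "w");
    if (!f)
        return 1;

    for (int i = 0; i < count; i++) {
        fprintf(f, "file '%s'\n", frames[i]);
        fprintf(f, "file_packet_metadata length=%s\n", lengths[i]);
        fprintf(f, "duration %.2f\n", frame_duration);
    }

    fclose(f);
    return 0;
}

The ffmpeg command stays exactly as above; only the list file changes.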

make a video from a subset of images and audio

I want to create a video of three images and three audio files but the duration time of each image should be the time of the corresponding audio file.
Let's say I have three images image_0.png, image_1.png and image_2.png, and three audio files audio_0.mp3 (length 10 seconds), audio_1.mp3 (length 15 seconds) and audio_2.mp3 (length 12 seconds).
I want to create a video showing first image_0.png with audio_0.mp3 for 10 seconds, then image_1.png with audio_1.mp3 for 15 seconds and in the end image_2.png with audio_2.mp3 for 12 seconds.
I tried to make this with avconv, using different variations of the -i option:
avconv -i imageInputFile.png -i audioInputFile.mp3 -c copy output.avi
Nothing worked. I could make a single avi video for each image+audio pair, but I failed at concatenating all the single avi files... Besides, I don't think this is the best way because of quality loss.
How would you do this? Is this even possible with avconv?
First concatenate all your .mp3 files into one single .mp3.
Then name your .png files something like img01.png, img02.png ... imgxx.png.
Then try:
mencoder 'mf://img*.png' -oac mp3lame -ovc lavc -fps 1 -ofps 25 -vf harddup -audiofile audio.mp3 -o test.avi
Obviously replace lavc with your preferred codec, and 1 with a reasonable value to fit the frames into your audio track.
Some may argue that it's stupid to recompress the audio again and that I could use -oac copy instead, but when converting from multiple sources that can cause issues.
This command creates a 25 fps video stream with 15-26 duplicated frames per second. If you remove -ofps 25 you will avoid the duplicate frames, but some decoders could hang, especially when seeking.

How to dump raw RTSP stream to file?

Is it possible to dump a raw RTSP stream to file and then later decode the file to something playable?
Currently I'm using FFmpeg to receive and decode the stream, saving it to an mp4 file. This works perfectly, but is CPU intensive, and will severely limit the number of RTSP streams I can receive simultaneously on my server.
I would like to save the stream to file without decoding it, and delay the decoding part to when the file needs to be opened.
Is this possible?
I have tried VLC, which is even more CPU intensive than FFmpeg. I've also looked at this question, where the answer says dumping RTSP to file is not useful, and this question, where the comment below the question says "Raw RTSP content is not well suited for save and replay...", which seems to indicate that there is no way.
EDIT
Here is the command I'm using for FFmpeg:
ffmpeg -i rtsp://#192.168.241.1:62159 -r 15 C:/DB_Videos/2013-04-30 17_18_34.703.mp4
If you are re-encoding in your ffmpeg command line, that may be why it is CPU intensive. You simply need to copy the streams into a single container. Since I do not have your command line, I cannot suggest a specific improvement here; all I can say is that your acodec and vcodec should be set to copy.
EDIT: On seeing your command line and given you have already tried it, this is for the benefit of others who come across the same question. The command:
ffmpeg -i rtsp://#192.168.241.1:62156 -acodec copy -vcodec copy c:/abc.mp4
will not transcode, and will dump the stream into an mp4 file for you. Of course, this assumes the streamed contents are compatible with an mp4 container (which in all probability they are).
With this command I had poor image quality
ffmpeg -i rtsp://192.168.XXX.XXX:554/live.sdp -vcodec copy -acodec copy -f mp4 -y MyVideoFFmpeg.mp4
With this, almost without delay, I got good image quality.
ffmpeg -i rtsp://192.168.XXX.XXX:554/live.sdp -b 900k -vcodec copy -r 60 -y MyVideoFFmpeg.avi
You can use mplayer.
mencoder -nocache -rtsp-stream-over-tcp rtsp://192.168.XXX.XXX/test.sdp -oac copy -ovc copy -o test.avi
The "copy" codec is just a dumb copy of the stream. Mencoder adds a header and stuff you probably want.
In the mplayer source file "stream/stream_rtsp.c" there is a prebuffer_size setting of 640k and no option to change the size other than recompiling. The result is that writing the stream is always delayed, which can be annoying for things like cameras, but on the plus side you get an output file that you can play back almost anywhere without a problem.

processing .raw file image with ffmpeg api or C code

I am trying to process a .raw image file captured using v4l2; it's an h264-encoded image with yuv422 color space from a Logitech c920 webcam. dcraw is not working for me; however, from my previous question, this command works fine, although with low performance (a 32 KB jpg image, whereas using OpenCV capture I get a 900 KB image for the same 640x480 resolution):
ffmpeg -f rawvideo -s 640x480 -pix_fmt yuyv422 -i frame-1.raw frame-1.jpg
I need code written in C, the ffmpeg API, OpenCV etc. to do the same as this command. I don't want to use QProcess in Qt (I am working on a Qt server where I am trying to send the raw file from a Raspberry Pi to the server and process it there). dcraw's output is a corrupted image.
http://ffmpeg.org/doxygen/trunk/examples.html
There should be some API samples in there that show how to get the image out with that specific encoding.
When interacting with a RAW file, I have also used IrfanView. If you know the header size of the file, the width, the height, and the bits per pixel per color, you can quickly see what it looks like that way.
EDIT: I tried using IrfanView with your RAW and got something close, but not quite. The coloring was always off. I don't think it can handle that particular encoding of a RAW file right now.
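As a starting point, here is a rough C sketch of what that ffmpeg command does with the libav* API: wrap the packed YUYV data in a frame, convert it with libswscale to a planar format the MJPEG encoder accepts, and encode a single JPEG. Error handling is mostly omitted, the file names are placeholders, and API details vary a little between FFmpeg versions:

#include <stdio.h>
#include <stdlib.h>
#include <libavcodec/avcodec.h>
#include <libavutil/imgutils.h>
#include <libswscale/swscale.h>

int main(void)
{
    const int w = 640, h = 480;
    const int raw_size = w * h * 2;             /* YUYV422: 2 bytes per pixel */

    /* Read the raw frame into memory. */
    FILE *in = fopen("frame-1.raw", "rb");
    if (!in) return 1;
    uint8_t *raw = malloc(raw_size);
    fread(raw, 1, raw_size, in);
    fclose(in);

    /* Wrap the packed YUYV data in an AVFrame (no copy). */
    AVFrame *src = av_frame_alloc();
    src->format = AV_PIX_FMT_YUYV422;
    src->width  = w;
    src->height = h;
    av_image_fill_arrays(src->data, src->linesize, raw,
                         AV_PIX_FMT_YUYV422, w, h, 1);

    /* Convert to planar 4:2:2, which the MJPEG encoder accepts. */
    AVFrame *dst = av_frame_alloc();
    dst->format = AV_PIX_FMT_YUVJ422P;
    dst->width  = w;
    dst->height = h;
    av_frame_get_buffer(dst, 0);
    struct SwsContext *sws = sws_getContext(w, h, AV_PIX_FMT_YUYV422,
                                            w, h, AV_PIX_FMT_YUVJ422P,
                                            SWS_BILINEAR, NULL, NULL, NULL);
    sws_scale(sws, (const uint8_t * const *)src->data, src->linesize,
              0, h, dst->data, dst->linesize);

    /* Encode the single frame as a JPEG. */
    const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MJPEG);
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    ctx->width     = w;
    ctx->height    = h;
    ctx->pix_fmt   = AV_PIX_FMT_YUVJ422P;
    ctx->time_base = (AVRational){1, 25};
    if (avcodec_open2(ctx, codec, NULL) < 0) return 1;

    AVPacket *pkt = av_packet_alloc();
    avcodec_send_frame(ctx, dst);
    if (avcodec_receive_packet(ctx, pkt) == 0) {
        FILE *out = fopen("frame-1.jpg", "wb");
        fwrite(pkt->data, 1, pkt->size, out);
        fclose(out);
    }

    /* Cleanup. */
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    sws_freeContext(sws);
    av_frame_free(&src);
    av_frame_free(&dst);
    free(raw);
    return 0;
}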

FFMpeg: Take Certain Amount of Screenshots Between X and X?

Is there any way to get ffmpeg to take X number of screenshots between X time and X time? The way I'm doing my command line code now is like this:
ffmpeg -ss 79 -i 1.avi -r 1/2.15 -f image2 1_%%05d.jpg
This method only starts taking screenshots at 79 seconds, but I can't figure out a way to set an end time (before the video ends).
Also, I will be displaying these video screenshots on a website and want the same number of screenshots per video file for consistency. Is there a way to set how many screenshots I want from a video? As in, ffmpeg figures out how much time is between the two points I specify, then figures out how often to take a screenshot based on how many I want in total from that video?
There is a -vframes option to control how many video frames ffmpeg should output.
There is also a -t option to control how many seconds of content to process.
Use either one of them.
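The asker's last point, deriving the capture rate from a start time, an end time and a desired screenshot count, is just arithmetic done before building the command line. A small illustrative C sketch with made-up values (79 s start, 200 s end, 20 screenshots):

#include <stdio.h>

/* Hypothetical sketch: given a start time, an end time and a desired
   number of screenshots, compute the values for ffmpeg's -ss, -t and -r. */
int main(void)
{
    const double start = 79.0;     /* seconds into the video  */
    const double end   = 200.0;    /* stop taking screenshots */
    const int    shots = 20;       /* screenshots wanted      */

    double duration = end - start;           /* value for -t       */
    double rate     = shots / duration;      /* value for -r (fps) */

    /* Prints a command like the one in the question (use %%05d if this
       ends up inside a .bat file). */
    printf("ffmpeg -ss %.2f -t %.2f -i 1.avi -r %f -f image2 1_%%05d.jpg\n",
           start, duration, rate);
    return 0;
}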
