FFmpeg: Take a Certain Number of Screenshots Between X and X? - batch-file

Is there any way to get ffmpeg to take X number of screenshots between one time and another? The way I'm running my command line now is like this:
ffmpeg -ss 79 -i 1.avi -r 1/2.15 -f image2 1_%%05d.jpg
This command starts taking screenshots at 79 seconds, but I can't figure out a way to set an ending time (before the video ends).
Also, I will be displaying these video screenshots on a website and want there to be the same number of screenshots per video file for consistency. Is there a way to set how many screenshots I want from a video? That is, can ffmpeg figure out how much time lies between the two points I specify, then work out how often to take a screenshot based on how many I want in total per video?

There is a -vframes option to control how many video frames ffmpeg should output.
There is also a -t option to control how many seconds of content to process.
Use either of them, or combine them, as in the sketch below.
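For example, a sketch combining both with the command from the question: to get roughly 20 screenshots between 79 and 109 seconds (a 30-second window; all numbers here are illustrative), set the output rate to screenshots/duration and cap the frame count:
ffmpeg -ss 79 -i 1.avi -t 30 -r 20/30 -f image2 -vframes 20 1_%%05d.jpg
Here -t 30 stops processing 30 seconds after the seek point, -r 20/30 emits one frame roughly every 1.5 seconds, and -vframes 20 caps the output at 20 images.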

Related

Drop some parts of video and re-make key frames (c++ + libav)

I'm trying to drop some parts of a video in my app using libav. For example, in a video that is 00:08:00 long, I try to drop frames 100-250 and 400-500 (just for example).
I wrote code that copies AVPackets and drops some of them, but there is a problem: in our videos every keyframe is followed by 29 non-key frames. So when my code goes to drop frames 100-250, frame 100 may be a non-key frame; in that case, in the parts that get joined (in this example, frame 250 to frame 400), frame 400 ends up positioned after a keyframe it does not belong to.
In that section the video frames are shown garbled.
Cutting speed is very important in my code, so I can't decode/re-encode all of the video frames.
The question is: how can I decode/re-encode only the beginning of each part (from its starting frame up to the first keyframe) and copy the remaining frames without decoding?
Or is there any other FAST solution for splitting/merging (dropping some parts of a video)?
The question is: how can I decode/re-encode only the beginning of each part (from its starting frame up to the first keyframe) and copy the remaining frames without decoding?
You can't. It doesn't work that way.
Start thinking in terms of time, not frames.
You can quickly extract parts of base_video.mp4 as new videos. For example:
ffmpeg -ss 00:00:00.000 -i base_video.mp4 -t 8.000 -c copy -strict -2 new_video_8seconds_fromstart.mp4
-ss 00:00:00.000 is the time at which the new video starts
-t is the duration in seconds and milliseconds; for example, for 8 seconds of duration use 8.000
-an if you don't want audio
-strict -2 is needed to copy some codecs, such as DTS
If you want re-encoding instead, remove -c copy, but it will never be as fast:
ffmpeg -ss 00:00:00.000 -i base_video.mp4 -t 8.000 new_video_8seconds_fromstart.mp4
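To actually drop middle sections, a sketch building on the same idea (file names and cut times are illustrative; with -c copy each cut snaps to the nearest keyframe): extract every part you want to keep, then join them with the concat demuxer.
ffmpeg -ss 0 -i base_video.mp4 -t 3.333 -c copy part1.mp4
ffmpeg -ss 8.333 -i base_video.mp4 -t 5.000 -c copy part2.mp4
Then create list.txt containing:
file 'part1.mp4'
file 'part2.mp4'
And join the parts without re-encoding:
ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4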

run ffmpeg in a loop on pngs for efficiency?

I am using ffmpeg to embed production data as a text overlay into separate frames of animation saved as PNG files.
At the moment it's very inefficient: I need to sample values from the animation scene, such as the current frame and camera focal length, so I have a loop that samples the information for each frame, runs ffmpeg to embed the text for that frame, then closes ffmpeg and repeats until complete.
If I have two given string list arrays (such as frame number and focal length), is there a way to ask ffmpeg to run as a single instance, where for each PNG in its input list it also reads the next item from the array to print the text, and therefore processes the entire set of frames in one run?
Use the concat demuxer.
Make input.txt:
file '01.png'
file_packet_metadata length=50mm
duration 0.04
file '02.png'
file_packet_metadata length=35mm
duration 0.04
file '03.png'
file_packet_metadata length=18mm
duration 0.04
Encode, using the drawtext filter to add text:
ffmpeg -f concat -i input.txt -vf "drawtext=text='%{metadata\:length}':fontsize=22:fontcolor=white:x=w-tw-10:y=h-th-10,format=yuv420p" output.mp4
For positioning see How to position drawtext text.
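Generating input.txt from the two arrays can be scripted rather than written by hand; a minimal bash sketch, where the frame names and focal lengths are placeholder values:
#!/bin/bash
# hypothetical per-frame data; substitute the values sampled from your scene
frames=(01 02 03)
lengths=(50mm 35mm 18mm)
for i in "${!frames[@]}"; do
  printf "file '%s.png'\nfile_packet_metadata length=%s\nduration 0.04\n" \
    "${frames[$i]}" "${lengths[$i]}"
done > input.txt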

How can I simulate OpenFile in FFmpeg?

Most GIF capture software captures the screen, saves each frame as a single picture file on disk, then reads the files back into memory and combines them into a GIF, which makes the whole procedure very slow.
My idea is to capture the screen with the DirectX API to get the bitmap (this can also capture DirectX windows faster, since it operates directly on the D3D screen device), save it to memory (such as a buffer), then pass the memory location to ffmpeg to produce the video. That way no disk storage is needed as an intermediate buffer, which could be ten times faster, since the disk is now the slowest part of a PC.
The DirectX screen-capture part is already done. But I found that ffmpeg uses OpenFile to read the picture files, so perhaps we can simulate the OpenFile?
If the answer is yes, how could we do it?
You can open a named pipe and use that as a source.
An example:
ffmpeg -f rawvideo -vcodec rawvideo -s $width$x$height$ -r $framerate$ -pix_fmt $pixelFormat$ -i "\\.\pipe\$pipeName$" Output.gif
You have to fix the format of the frames you are going to feed FFmpeg, hence the -s and the -pix_fmt parameters.
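Alternatively, a sketch that avoids the named pipe entirely (the frame size, pixel format and rate are illustrative): launch ffmpeg as a child process and write the raw captured frames to its standard input, which -i - reads:
ffmpeg -f rawvideo -pix_fmt bgra -s 1920x1080 -r 30 -i - Output.gif
Your capture code then writes width*height*4 bytes per BGRA frame to the pipe connected to ffmpeg's stdin.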

Gstreamer - Duration query error on mp3

I am working on a simple application using GStreamer in C that involves playing a song and showing some info about it in the terminal. That info includes the total length of the song in seconds. As usual, I used the function gst_element_query_duration to get this data. The thing is, when I run my program, sometimes it shows the right time on screen, but then I run it again and the total time shown is about 6 seconds less. Because it is just a simple trial application, I am using playbin as the general bin for playback, and I tried different file extensions; it seems this only happens with mp3 files. Has anyone ever experienced this? Any ideas on how to fix it?
MP3 has the problem that there is (usually) no duration stored inside the file. With constant bitrate files you can simply check the bitrate and the file size, but for variable bitrate files you can only make an approximation based on those. Your problem is probably exactly that.
The only way to know the exact duration of a variable bitrate MP3 file without header information with the duration (see Xing header) is to parse the file until the end and count the exact duration. With playbin you should get the accurate duration at the end of the file.
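As a quick check outside GStreamer, you can compare the header-based estimate with the exact duration obtained by decoding the whole file (song.mp3 is a placeholder name):
ffprobe -v quiet -show_entries format=duration -of csv=p=0 song.mp3
ffmpeg -i song.mp3 -f null -
The first command reports the container/header estimate; the second decodes everything, and the final time= value in its stats is the exact duration. For a VBR file without a Xing header the two figures can differ by several seconds.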

make a video from a subset of images and audio

I want to create a video from three images and three audio files, where the duration of each image matches the length of the corresponding audio file.
Let's say I have three images image_0.png, image_1.png and image_2.png and three audio files audio_0.mp3 (length 10 seconds), audio_1.mp3 (length 15 seconds) and audio_2.mp3 (length 12 seconds).
I want to create a video showing first image_0.png with audio_0.mp3 for 10 seconds, then image_1.png with audio_1.mp3 for 15 seconds, and finally image_2.png with audio_2.mp3 for 12 seconds.
I tried to make this with avconv, using different variations of -i options:
avconv -i imageInputFile.png -i audioInputFile.mp3 -c copy output.avi
Nothing worked. I could make a single AVI video for each image+audio pair, but I failed at concatenating the individual AVI files... Besides, I don't think this is the best way because of the quality loss.
How would you do this? Is this even possible with avconv?
First concatenate all your .mp3 files into one single .mp3.
Then name your .png files something like img01.png, img02.png ... imgxx.png.
Then try:
mencoder 'mf://img*.png' -oac mp3lame -ovc lavc -fps 1 -ofps 25 -vf harddup -audiofile audio.mp3 -o test.avi
Obviously, replace lavc with your preferred codec and 1 with a reasonable value to fit the frames into your audio track.
Some may argue that it's wasteful to recompress the audio again and that -oac copy could be used instead, but copying when converting from multiple sources can cause issues.
This command creates a 25 fps video stream with 15-26 duplicated frames per second; if you remove -ofps 25 you will avoid the duplicated frames, but some decoders could hang, especially when seeking.
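Since the question asks about avconv/ffmpeg specifically, here is a minimal ffmpeg sketch as an alternative (the codec choices are assumptions; the file names come from the question): encode one segment per image+audio pair, then join the segments losslessly with the concat demuxer.
ffmpeg -loop 1 -i image_0.png -i audio_0.mp3 -c:v libx264 -tune stillimage -pix_fmt yuv420p -c:a aac -shortest part_0.mp4
Repeat for the other two pairs, then create list.txt containing:
file 'part_0.mp4'
file 'part_1.mp4'
file 'part_2.mp4'
And concatenate without re-encoding, avoiding further quality loss:
ffmpeg -f concat -safe 0 -i list.txt -c copy output.mp4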
