Compressed video: How to show B and P frames - video-processing

I want to analyze a compressed video (h264).
I'm encoding using this command:
ffmpeg -i in_path -vf scale=340:256,setsar=1:1 -q:v 1 -c:v h264 -f
rawvideo out_path
Now I want to see what the P and B frames look like, so I'm using this command in order to extract only the B frames:
ffmpeg -ss 0 -i in_vid -t 2 -q:v 2 -vf
select="eq(pict_type\,PICT_TYPE_B)" -vsync 0 frameb%03d.jpg
The extraction went well, no errors, and the number of frames extracted makes sense according to theory.
But I don't know how to "show" the image. When I do:
eog frameb001.jpg
I get a normal picture, not what I expected from a B frame. Now I understand why eog won't be good here, but I have no idea how to display the frame in a way that is meaningful (I saw some articles that use HSV to visualize the frames).
One more thing: the fact that I got a meaningful image from the B frames makes me suspect the extraction wasn't right.
Any help will be great, thanks a lot!
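As a sanity check (a suggestion, not from the thread): ffprobe can report each frame's pict_type, so you can confirm the select filter really picked B frames. A minimal sketch; the ffprobe invocation is shown as a comment and the counting step runs against canned output, since no video file is assumed here:

```shell
#!/bin/sh
# The real invocation (assumes in_vid exists) would be:
#   ffprobe -v error -select_streams v:0 \
#     -show_entries frame=pict_type -of csv in_vid
# It prints one "frame,<type>" line per frame. Count the B frames:
sample_output="frame,I
frame,P
frame,B
frame,B
frame,P"
printf '%s\n' "$sample_output" | grep -c ',B$'   # prints 2
```

If that count matches the number of JPEGs you extracted, the select filter did its job; the images look "normal" because ffmpeg decodes each B frame fully (applying motion compensation from its references) before writing the JPEG.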

Related

Create a movie from one image's updates in Linux

I'm a C programmer on Linux.
I wrote a program that saves an image to /srv/ftp/preview.png, which is updated frequently, and I want to create a movie from these updates.
The timestamp is important to me: if the image updates after 3.654 seconds, I want the movie to show this update (frame) after 3.654 seconds too.
I searched the Internet for several hours but couldn't find any solution.
I know about ffmpeg, but it converts a set of images (not one image) to a movie, without millisecond timestamps.
I found this question, but it doesn't seem useful in this case.
Is there any tool to do that? If not, please point me to a C API so I can write a program myself.
You can try to use inotify to watch for modifications of the file, and ffmpeg to append the file to the movie:
#!/bin/bash
FRAMERATE=1
FILE="/path/to/image.jpg"
while true
do
    inotifywait -e modify "$FILE"
    echo "file changed"
    # create a temp file name
    TMP=$(mktemp)
    # copy the file so ffmpeg reads a stable snapshot
    cp "$FILE" "$TMP"
    # append the copied file to the movie
    # from https://video.stackexchange.com/q/17228
    # if the movie already exists
    if [ -f movie.mp4 ]
    then
        # append the image to a new movie
        ffmpeg -y -i movie.mp4 -loop 1 -f image2 -t $FRAMERATE -i "$TMP" -filter_complex "[0:v][1:v]concat=n=2:v=1[v]" -map "[v]" newmovie.mp4
        # replace the old movie with the new one
        mv newmovie.mp4 movie.mp4
    else
        # create a movie from one image
        ffmpeg -framerate 1 -t $FRAMERATE -i "$TMP" movie.mp4
    fi
    rm "$TMP"
done
This script will certainly need to be adapted (in particular if your framerate is high), but I think you can play with it.
One downside is that movie creation will become slower and slower, because the movie keeps getting bigger.
You should store the images for a certain time span in a directory and convert them all at once (e.g. once an hour/day).
If you want to serve a stream instead of creating a video file, you can look at https://stackoverflow.com/a/31705978/1212012
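As an alternative sketch (not from the original answer): ffmpeg's concat demuxer accepts per-image duration lines, which gives exactly the millisecond-accurate timing the question asks for; the C program could append to such a list each time it saves a frame, and one ffmpeg run renders the whole movie. Filenames and durations below are made up:

```shell
#!/bin/sh
# Build a concat list where each "duration" is the time until the next
# update, with millisecond precision. The last file is repeated because
# the concat demuxer otherwise ignores the final duration.
cat > frames.txt <<'EOF'
file 'preview-0001.png'
duration 3.654
file 'preview-0002.png'
duration 1.200
file 'preview-0002.png'
EOF
# Then render it in one pass (run once all frames exist):
#   ffmpeg -f concat -safe 0 -i frames.txt -vsync vfr out.mp4
grep -c '^file' frames.txt   # prints 3
```

This also avoids the re-encode-the-whole-movie-per-frame cost of the inotify loop above.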

How to use blend and concat filter with audio stream?

I tried to combine two video files with the concat, blend and amix filters.
I succeeded when I used only the video streams,
but after adding the audio streams to the filter script,
ffmpeg didn't work.
Two-video crossfade (works well):
ffmpeg -y -i "A.mp4" -i "B.mp4" -filter_complex
"[0:v]split[v000][v010];[1:v]split[v100][v110];[v000]trim=0:17[v001];
[v010]trim=17:27[v011t];[v011t]setpts=PTS-STARTPTS[v011];[v100]trim=0:10[v101];
[v110]trim=10:50[v111t];[v111t]setpts=PTS-STARTPTS[v111];[v101]
[v011]blend=all_expr='A*(if(gte(T,10),1,T/10))+B*(1-
(if(gte(T,10),1,T/10)))'[outv];[v001][outv][v111]
concat=n=3[outv2]" -vcodec libx264 -map [outv2] -t 50 d:\Output\1.mp4
The same command plus audio streams (fails):
ffmpeg -y -i "A.mp4" -i "B.mp4"
-filter_complex "[0:v]split[v000][v010];[1:v]split[v100][v110];[v000]trim=0:17[v001];[v010]trim=17:27[v011t];[v011t]setpts=PTS-STARTPTS[v011];[v100]trim=0:10[v101];[v110]trim=10:50[v111t];[v111t]setpts=PTS-STARTPTS[v111];[0:a]asplit[a000][a010];[1:a]asplit[a100][a110];[a000]atrim=0:17[a001];[v010]atrim=17:27[a011t];[a011t]asetpts=PTS-STARTPTS[a011];[a100]atrim=0:10[a101];[a110]atrim=10:50[a111t];[a111t]asetpts=PTS-STARTPTS[a111];[v101][v011]blend=all_expr='A*(if(gte(T,10),1,T/10))+B*(1-(if(gte(T,10),1,T/10)))'[outv];[a101][a011]amix=inputs=2:duration=first:dropout_transition=3[outa];[v001][outv][v111] [a001][outa][a111] concat=n=6:v=1:a=1:unsafe=1 [outv2][outa2]"
-vcodec libx264 -acodec aac -map [outv2] -map [outa2] -t 50 d:\Output\1.mp4
The error message:
Media type mismatch between the 'Parsed_blend_16' filter output pad 0 (video) and the 'Parsed_concat_18' filter input pad 1 (audio)
[AVFilterGraph # 026d3680] Cannot create the link blend:0 -> concat:1
Error initializing complex filters.
Invalid argument
How to fix it?
P.S. I think the filter script is too complicated.
Could you let me know an easier way to crossfade with ffmpeg?
Solved: if you want to use video and audio streams in one filter script, write the video stream part first and then do the audio part:
"[0:v]split[v000][v010];[1:v]split[v100][v110];
[v000]trim=0:17[v001];[v010]trim=17:27[v011t];[v011t]setpts=PTS-STARTPTS[v011];
[v100]trim=0:10[v101];[v110]trim=10:50[v111t];[v111t]setpts=PTS-STARTPTS[v111];
[v101][v011]blend=all_expr='A*(if(gte(T,10),1,T/10))+B*(1-
(if(gte(T,10),1,T/10)))'[outv];
[v001][outv][v111] concat=n=3 [outv2];
[0:a]asplit[a000][a010];[1:a]asplit[a100][a110];[a000]atrim=0:17[a001];
[a010]atrim=17:27[a011t];[a011t]asetpts=PTS-STARTPTS[a011];
[a100]atrim=0:10[a101];[a110]atrim=10:50[a111t];
[a111t]asetpts=PTS-STARTPTS[a111];
[a101][a011]acrossfade=d=10[outa];
[a001][outa][a111] concat=n=3:v=0:a=1:unsafe=1 [outa2]"
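On the "easier way" question: newer ffmpeg builds (4.3+) ship dedicated xfade and acrossfade filters that replace the whole split/trim/blend chain. This is a sketch, not the poster's solution; the only arithmetic to get right is xfade's offset, the point in the first input where the transition starts:

```shell
#!/bin/sh
# offset = length of the first clip minus the fade duration.
# With the 27 s first clip and 10 s fade from the filter graph above:
A_LEN=27
FADE=10
OFFSET=$((A_LEN - FADE))
echo "offset=$OFFSET"   # prints offset=17
# Hypothetical one-liner using that offset (requires ffmpeg >= 4.3):
#   ffmpeg -i A.mp4 -i B.mp4 -filter_complex \
#     "[0:v][1:v]xfade=transition=fade:duration=10:offset=17[v]; \
#      [0:a][1:a]acrossfade=d=10[a]" \
#     -map "[v]" -map "[a]" -c:v libx264 -c:a aac out.mp4
```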

How to seek in ffplay, or pipe from ffmpeg to ffplay

My goal is to check the file 10 minutes after the start. This is my script:
ffplay.exe -f lavfi "amovie=input.mov,showvolume=b=4:w=640:h=96"
If I add seeking, something like -ss 600, the file always starts from the beginning.
Does anyone know a workaround? Thanks.
Two ways of doing this:
ffplay -f lavfi "amovie=input.mov:sp=600,showvolume=b=4:w=640:h=96"
sp is the seek_point option - it will seek to the nearest keyframe before the seek point.
ffplay -f lavfi "amovie=input.mov,atrim=600,showvolume=b=4:w=640:h=96"
Apply a (a)trim filter.

How do I enable FFMPEG logging and where can I find the FFMPEG log file?

I want to be able to log FFMPEG processes because I am trying to work out how long a minute of video takes to convert to help with capacity planning of my video encoding server. How do I enable logging and where is the log file saved. I have FFMPEG installed on a CentOS LAMP machine.
FFmpeg does not write to a specific log file, but rather sends its output to standard error. To capture that, you need to either
capture and parse it as it is generated
redirect standard error to a file and read it after the process is finished
Example for std error redirection:
ffmpeg -i myinput.avi {a-bunch-of-important-params} out.flv 2> /path/to/out.txt
Once the process is done, you can inspect out.txt.
It's a bit trickier to do the first option, but it is possible. (I've done it myself. So have others. Have a look around SO and the net for details.)
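One concrete way to do the trickier first option (an addition of mine, not from the answer above): ffmpeg's -progress option emits machine-readable key=value lines (frame=, out_time_ms=, speed=...) to any URL, including a pipe, which is far easier to parse live than the human-readable stderr. A sketch, with the parsing step run against a canned progress block:

```shell
#!/bin/sh
# Live parsing would look like (assumes in.avi exists):
#   ffmpeg -i in.avi out.flv -progress pipe:1 2>/dev/null |
#     sed -n 's/^out_time_ms=//p'
# The same extraction against one canned progress block:
printf 'frame=42\nout_time_ms=1234000\nspeed=2.5x\n' |
  sed -n 's/^out_time_ms=//p'   # prints 1234000
```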
I found the following in the ffmpeg docs. Hope this helps! :)
Reference: http://ffmpeg.org/ffmpeg.html#toc-Generic-options
‘-report’ Dump full command line and console output to a file named
program-YYYYMMDD-HHMMSS.log in the current directory. This file can be
useful for bug reports. It also implies -loglevel verbose.
Note: setting the environment variable FFREPORT to any value has the
same effect.
I found the answer.
First, put this in the presets; I have this example, "Output format MPEG2 DVD HQ":
-vcodec mpeg2video -vstats_file MFRfile.txt -r 29.97 -s 352x480 -aspect 4:3 -b 4000k -mbd rd -trellis -mv0 -cmp 2 -subcmp 2 -acodec mp2 -ab 192k -ar 48000 -ac 2
If you want a report, include -vstats_file MFRfile.txt in the presets as in the example. This produces a report located in the same folder as your source file.
You can give it any name you want. I solved my problem (I've posted about it many times in this forum) by reading a complete .docx about MPEG properties; I can finally drive my progress bar by reading the generated txt file.
Regards.
ffmpeg logs to stderr, and can log to a file with a different log-level from stderr. The -report command-line option doesn't give you control of the log file name or the log level, so setting the environment variable is preferable.
(-v is a synonym for -loglevel. Run ffmpeg -v help to see the levels. Run ffmpeg -h full | less to see EVERYTHING. Or consult the online docs, or their wiki pages like the h.264 encode guide).
#!/bin/bash
of=out.mkv
FFREPORT="level=32:file=$of.log" ffmpeg -v verbose -i src.mp4 -c:a copy -preset slower -c:v libx264 -crf 21 "$of"
That will transcode src.mp4 with x264, setting the log level for stderr to "verbose" and the log level for out.mkv.log to "info" (level=32).
(AV_LOG_WARNING=24, AV_LOG_INFO=32, AV_LOG_VERBOSE=40, etc.). Support for this was added 2 years ago, so you need a non-ancient version of ffmpeg. (Always a good idea anyway, for security / bugfixes and speedups)
A few codecs, like -c:v libx265, write directly to stderr instead of using ffmpeg's logging infrastructure. So their log messages don't end up in the report file. I assume this is a bug / TODO-list item.
To log stderr, while still seeing it in a terminal, you can use tee(1).
If you use a log level that includes status line updates (the default -v info, or higher), they will be included in the log file, separated with ^M (carriage return aka \r). There's no log level that includes encoder stats (like SSIM) but not status-line updates, so the best option is probably to filter that stream.
If don't want to filter (e.g. so the fps / bitrate at each status-update interval is there in the file), you can use less -r to pass them through directly to your terminal so you can view the files cleanly. If you have .enc logs from several encodes that you want to flip through, less -r ++G *.enc works great. (++G means start at the end of the file, for all files). With single-key key bindings like . and , for next file and previous file, you can flip through some log files very nicely. (the default bindings are :n and :p).
If you do want to filter, sed 's/.*\r//' works perfectly for ffmpeg output. (In the general case, you need something like vt100.py, but not for just carriage returns). There are (at least) two ways to do this with tee + sed: tee to /dev/tty and pipe tee's output into sed, or use a process substitution to tee into a pipe to sed.
# pass stdout and stderr through to the terminal,
## and log a filtered version to a file (with only the last status-line update).
of="$1-x265.mkv"
ffmpeg -v info -i "$1" -c:a copy -c:v libx265 ... "$of" |& # pipe stdout and stderr
tee /dev/tty | sed 's/.*\r//' >> "$of.enc"
## or with process substitution where tee's arg will be something like /dev/fd/123
ffmpeg -v info -i "$1" -c:a copy -c:v libx265 ... "$of" |&
tee >(sed 's/.*\r//' >> "$of.enc")
For testing a few different encode parameters, you can make a function like this one that I used recently to test some stuff. I had it all on one line so I could easily up-arrow and edit it, but I'll un-obfuscate it here. (That's why there are ;s at the end of each line)
ffenc-testclip(){
# v should be set by the caller, to a vertical resolution. We scale to WxH, where W is a multiple of 8 (-vf scale=-8:$v)
db=0; # convenient to use shell vars to encode settings that you want to include in the filename and the ffmpeg cmdline
of=25s#21.15.${v}p.x265$pre.mkv;
[[ -e "$of.enc" ]]&&echo "$of.enc exists"&&return; # early-out if the file exists
# encode 25 seconds starting at 21m15s (or the keyframe before that)
nice -14 ffmpeg -ss $((21*60+15)) -i src.mp4 -t 25 -map 0 -metadata title= -color_primaries bt709 -color_trc bt709 -colorspace bt709 -sws_flags lanczos+print_info -c:a copy -c:v libx265 -b:v 1500k -vf scale=-8:$v -preset $pre -ssim 1 -x265-params ssim=1:cu-stats=1:deblock=$db:aq-mode=1:lookahead-slices=0 "$of" |&
tee /dev/tty | sed 's/.*\r//' >> "$of.enc";
}
# and use it with nested loops like this.
for pre in fast slow; do for v in 360 480 648 792;do ffenc-testclip ;done;done
less -r ++G *.enc # -r is useful if you didn't use sed
Note that it tests for existence of the output video file, to avoid spewing extra garbage into the log file if it already exists. Even so, I used an append (>>) redirect.
It would be "cleaner" to write a shell function that took args instead of looking at shell variables, but this was convenient and easy to write for my own use. That's also why I saved space by not properly quoting all my variable expansions. ($v instead of "$v")
It appears that if you add this to the command line:
-loglevel debug
or
-loglevel verbose
You get more verbose debugging output to the command line.
You can get more debugging info by simply adding the option -loglevel debug; the full command would be
ffmpeg -loglevel debug -i INPUT OUTPUT
You must declare the report file as an environment variable for the console.
The problem is that none of the documentation examples you can find actually run, so it cost me a day to find the right way....
Example for batch/console:
cmd.exe /K set FFREPORT=file='C:\ffmpeg\proto\test.log':level=32 && C:\ffmpeg\bin\ffmpeg.exe -loglevel warning -report -i inputfile outputfile
Example JavaScript (note the doubled backslashes required in string literals):
var reportlogfile = "cmd.exe /K set FFREPORT=file='C:\\ffmpeg\\proto\\" + filename + ".log':level=32 && C:\\ffmpeg\\bin\\ffmpeg.exe" .......;
You can change the directory and filename however you want.
Frank from Berlin
If you just want to know how long the command takes to execute, you may consider using the time command. For example: time ffmpeg -i myvideoofoneminute.aformat out.anotherformat
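A portable sketch of that idea (sleep stands in for the ffmpeg run; the shell's time keyword works too, but capturing its output varies by shell):

```shell
#!/bin/sh
# Wall-clock timing around a command, for capacity-planning logs.
start=$(date +%s)
sleep 1                      # stand-in for: ffmpeg -i input output
end=$(date +%s)
echo "elapsed: $((end - start))s"
```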

Concatenation of files using ffmpeg does not work as I expected. Why?

I execute the following command line
ffmpeg.exe
-i C:\Beema\video-source\DO_U_BEEMA176x144short.avi
-i C:\Beema\video-source\DO_U_BEEMA176x144short.avi
-i C:\Beema\temp\9016730-51056331-stitcheds.avi
-i C:\Beema\video-source\GOTTA_BEEMA176x144short.avi
-y -ac 1 -r 24 -b 25K
C:\Beema\video-out\9a062fb6-d448-48fe-b006-a85d51adf8a1.mpg
The output file in video-out ends up containing a single copy of DO_U_BEEMA. I do not understand why ffmpeg is not concatenating.
Any help is greatly appreciated.
Have you tried with mencoder?
mencoder -oac copy -ovc copy file1 file2 file3 … -o final_movie.mpg
Also, are all of the files the same bitrate and dimensions? If not, you're going to need to make all of the video files identical in these two areas before attempting to combine them. It also appears you're attempting to combine .avi's with a single .mpg; you'll most likely want to convert the .mpg to a matching format when re-encoding.
Hope this helps.
If it has C:\, it's Windows. Use binary mode (/b), since these are not text files:
copy /b video1 + video2 + video3 output
and add more + videoN instances until you get there.
Here is a command that may work for you: first cat the files, then pipe into ffmpeg:
cat C:\Beema\video-source\DO_U_BEEMA176x144short.avi C:\Beema\video-source\DO_U_BEEMA176x144short.avi C:\Beema\temp\9016730-51056331-stitcheds.avi
C:\Beema\video-source\GOTTA_BEEMA176x144short.avi | ffmpeg -f mpeg -i - C:\Beema\video-out\9a062fb6-d448-48fe-b006-a85d51adf8a1.mpg
The -i - part is important: the "-" tells ffmpeg to read from the piped input.
Cheers.
I guess ffmpeg can't do that. It usually takes only one input file. Cat-ing files into the input may lump them together, but I guess it won't stitch properly.
Your best bet is mencoder or transcode.
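For completeness (this option postdates the original answers): current ffmpeg concatenates same-codec files with the concat demuxer, no re-encode needed. A sketch using the asker's filenames; only the list-building step runs here:

```shell
#!/bin/sh
# Build the list of inputs for the concat demuxer. The same file can
# appear twice, as in the original command line.
cat > list.txt <<'EOF'
file 'DO_U_BEEMA176x144short.avi'
file 'DO_U_BEEMA176x144short.avi'
file '9016730-51056331-stitcheds.avi'
file 'GOTTA_BEEMA176x144short.avi'
EOF
# Then (assumes matching codecs and dimensions across inputs):
#   ffmpeg -f concat -safe 0 -i list.txt -c copy out.avi
grep -c '^file' list.txt   # prints 4
```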
