I've had excellent results using melt to chroma key one video file over another:
melt bg.mp4 -track greened.mp4 -filter chroma key=0x00ff0000 variance=0.45 -transition comp
Is there a technique for piping the raw output of a camera to the -track argument?
You need to use an FFmpeg video4linux2 URL that you know works with ffplay, together with a simple melt playback command. Don't try anything more complex before ensuring you can simply open and view the camera stream. Walk before running.
See the FAQ for more information:
http://www.mltframework.org/bin/view/MLT/Questions#How_can_I_capture_audio_and_or_v
Most GIF-capture software grabs the screen, saves each frame as a separate image file on disk, then reads the files back into memory and combines them into a GIF, which makes the whole procedure very slow.
My idea is to capture the screen with the DirectX API (which can also capture DirectX windows faster, since it operates on the screen's D3D device directly) to get the bitmaps, keep them in a memory buffer, and pass that buffer to ffmpeg to produce the video. That way no disk storage is needed as an intermediate step, and it could be many times faster, since the disk is now the slowest part of the PC.
The DirectX screen-capture part is already done. But I found that ffmpeg uses OpenFile to read the picture files, so perhaps we can simulate OpenFile?
If the answer is yes, how could we do it?
You can open a named pipe and use that as a source.
An example:
ffmpeg -f rawvideo -vcodec rawvideo -s $width$x$height$ -r $framerate$ -pix_fmt $pixelFormat$ -i "\\.\pipe\$pipeName$" Output.gif
You have to declare the format of the frames you are going to feed FFmpeg up front, hence the -s and -pix_fmt parameters.
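The pipe mechanism itself can be sketched as follows — a minimal POSIX sketch (on Windows the pipe would be created with CreateNamedPipe and referenced as \\.\pipe\name); the pipe path, 64x64 size, and rgb24 format are arbitrary choices for illustration:

```shell
# A named pipe stands in for a file on disk, so frames go straight
# from the producer's memory to ffmpeg.
rm -f /tmp/framepipe
mkfifo /tmp/framepipe
# Writer: one raw 64x64 rgb24 frame (64*64*3 bytes of zeros = black), in background.
head -c $((64*64*3)) /dev/zero > /tmp/framepipe &
# Reader: ffmpeg consumes the pipe exactly as if it were a raw video file.
ffmpeg -y -f rawvideo -vcodec rawvideo -s 64x64 -r 1 -pix_fmt rgb24 \
  -i /tmp/framepipe output.gif
rm /tmp/framepipe
```

The writer blocks until ffmpeg opens the pipe for reading, so no frame data ever lands on disk.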
I'm currently recording an RTSP stream using the FFmpeg libraries (essentially recording 1-minute video files). Everything is working well, except that when I open the video files, the player treats them as streams, not as videos (so seeking is disabled, etc.).
I suspect I need to pass the proper options to avformat_write_header(output_context, NULL) instead of giving none.
I've discovered the list of options in libavformat/options_table.h but none of them seem to apply. As an example of how I'm thinking I need to solve this, I've looked at https://ffmpeg.org/pipermail/libav-user/2013-January/003541.html and I see things like "sample_rate", "pixel_format" etc. that could be set. Is there something to set the metadata in the file I'm writing from an RTSP stream to behave as a video instead of a stream when I play it after the fact? Or if it isn't written with the header, is there some other way I can do this?
Seems like the issue is related to the specific video player being used, which makes me believe it's not related to the metadata but to how the player is interpreting the video stream.
If it does turn out that I can explicitly set this using the FFMPEG libraries I'll mark that as the accepted answer.
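If the recordings are MP4/MOV, one muxer option that often changes how players treat the file is movflags: +faststart moves the moov index atom to the front so players can see the duration and seek. A sketch with the ffmpeg CLI (in library code the equivalent is an AVDictionary entry for "movflags" passed to avformat_write_header); lavfi's testsrc stands in for the RTSP input here:

```shell
# +faststart rewrites the file after encoding so the moov index atom
# sits at the front, letting players see duration and seek normally.
# testsrc is only a stand-in for the real RTSP input.
ffmpeg -y -f lavfi -i testsrc=duration=1:size=128x96:rate=10 \
  -movflags +faststart out.mp4
```

Whether this fixes the "treated as a stream" symptom depends on the player, per the update above.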
I am trying to process a .raw image file captured using v4l2. It's an H.264-encoded image with the YUV 4:2:2 color space from a Logitech C920 webcam; dcraw is not working for me. However, from my previous question, this command works, albeit with low performance (it gives a 32 KB JPG image, whereas an OpenCV capture gives a 900 KB image for the same 640x480 resolution):
ffmpeg -f rawvideo -s 640x480 -pix_fmt yuyv422 -i frame-1.raw frame-1.jpg
I need code written in C (the ffmpeg API, OpenCV, etc.) that does the same as this command. I don't want to use QProcess in Qt (I am working on a server in Qt, to which I send the raw file from a Raspberry Pi for processing there); dcraw's output is a corrupted image.
http://ffmpeg.org/doxygen/trunk/examples.html
There should be some API samples in there that show how to get the image out with that specific encoding.
When interacting with a RAW file, I have also used IrfanView. If you know the header size of the file, the width, the height, and the bits per pixel per color, you can quickly see what it looks like that way.
EDIT: I tried opening your RAW file in IrfanView, and I got something close, but not quite; the coloring was always off. I don't think it can handle that particular encoding of a RAW file right now.
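A quick sanity check along the same lines: packed yuyv422 uses 2 bytes per pixel, so a valid 640x480 frame-1.raw must be exactly 640*480*2 = 614400 bytes; if the file size differs, the size/pix_fmt guess is wrong. A sketch (testsrc stands in for the webcam dump):

```shell
# yuyv422 packs 2 bytes per pixel: a 640x480 frame is 614400 bytes.
echo $((640*480*2))
# Generate one synthetic yuyv422 frame as a stand-in for the camera dump...
ffmpeg -y -f lavfi -i testsrc=size=640x480:rate=1 -frames:v 1 \
  -f rawvideo -pix_fmt yuyv422 frame-1.raw
# ...and convert it exactly as the command in the question does.
ffmpeg -y -f rawvideo -s 640x480 -pix_fmt yuyv422 -i frame-1.raw frame-1.jpg
```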
I am using espeak on BSD to output text as sound. My problem is that I want to save that output as an .mp3, but I am having little luck. I tried piping the output to tee, but I guess that only works with stdout, not with audio that is simply played.
Any ideas? My last resort would be recompiling my own version of espeak that lets me save to a file instead of playing it.
You can write it as a WAV file and then convert it with ffmpeg:
espeak "HelloWorld" -w <file>.wav
Or pipe to ffmpeg
espeak "HelloWorld" --stdout | ffmpeg -i pipe:0 output.mp3
From the documentation:
-w <wave file>
Writes the speech output to a file in WAV format, rather than speaking it.
--stdout
Writes the speech output to stdout as it is produced, rather than speaking it. The data starts with a WAV file header which indicates the sample rate and format of the data. The length field is set to zero because the length of the data is unknown when the header is produced.
It looks like both of those options produce WAV output, but you can easily convert that with another program like ffmpeg.
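The --stdout route can be exercised even without espeak installed by substituting any program that writes WAV to stdout; a sketch using ffmpeg's sine source as the stand-in speech source (assumes an ffmpeg build with an MP3 encoder):

```shell
# A WAV stream on stdout (sine source standing in for `espeak --stdout`)
# is read back through pipe:0 and encoded to MP3. As with espeak, the WAV
# header on a pipe carries no usable length, so ffmpeg reads until EOF.
ffmpeg -y -f lavfi -i "sine=frequency=440:duration=1" -f wav - \
  | ffmpeg -y -i pipe:0 output.mp3
```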
I need to extract all metadata, along with play-length information, from video files in pure C.
I googled and found the MediaInfo library but was not able to find any relevant C sample code.
Is there any other way to achieve this, with or without MediaInfo?
Or can somebody point me to a good sample of using MediaInfo from C?
ffprobe, which is part of FFmpeg, can do this and a whole lot more.
ffprobe without switches will give some common information.
It also has a lot of switches, of which you can use one at a time [exclusively]:
-show_format show format/container info
-show_streams show streams info
-show_packets show packets info
-show_frames show frames info
-show_data show packets data
Try it out.
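Since the question asks for pure C, note that ffprobe itself is built on libavformat (avformat_open_input / avformat_find_stream_info), so its source is a ready-made sample of the library route. The CLI side can be sketched like this (test.mp4 is just a generated placeholder file):

```shell
# Generate a tiny clip to probe, then dump its container metadata
# (duration, format name, tags) as JSON.
ffmpeg -y -f lavfi -i testsrc=duration=1:size=128x96:rate=10 test.mp4
ffprobe -v error -print_format json -show_format test.mp4
```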