MLT: add audio tracks to multitrack audio without mixing

I have 2 separate stereo wav files, and I want to create a wav file containing both stereo tracks without mixing them, so that the left channel of file 1 maps to the left channel of track 1 of the multitrack file, and so on.
How can you do this with melt?
Thanks

Just as you first said: 4 audio tracks in the output file, two from the first file and two from the second file.
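If melt alone doesn't get you there, the same channel mapping can be done with FFmpeg's amerge filter, which simply concatenates the inputs' channels instead of mixing them (output channels 1-2 come from file 1, channels 3-4 from file 2); the file names below are placeholders:
ffmpeg -i file1.wav -i file2.wav -filter_complex "[0:a][1:a]amerge=inputs=2" output.wav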

Related

Writing Matroska to an append-only stream

I need to write a Matroska video file to a stream that only supports an append operation (this is not network streaming; the output is a single MKV file for offline playback). Right now I'm using ffmpeg's libavformat to do the muxing, but the resulting video file is not seekable (in a player) at all.
Going through the Matroska specs, I figured out a way to create a file that is seekable in a player using only one (file) seek operation during writing:
SeekHead 1 (without clusters)
...
Clusters
Cues
SeekHead 2 (only clusters)
After the file is written I need to go back to SeekHead 1 and update it with positions of SeekHead 2 and Cues.
My output files can easily get to tens of gigabytes, so buffering the whole thing in memory is not an option.
Is there really no way to create the MKV without seeking in the output file?
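For what it's worth, here is a rough sketch of the bookkeeping that layout needs (plain C, not a real EBML writer; all sizes and names are made up for illustration): reserve a fixed-size region for SeekHead 1, append everything else while recording offsets, then perform the single seek back to patch the reservation.
#include <stdio.h>

/* Stand-in for real EBML serialization: reserve `size` bytes,
   e.g. as an EBML Void element, to be overwritten later. */
static void write_placeholder(FILE *f, long size) {
    for (long i = 0; i < size; i++)
        fputc(0, f);
}

int main(void) {
    FILE *f = fopen("out.mkv", "wb");
    if (!f) return 1;

    /* ... write EBML header and the start of the Segment element ... */

    /* 1. Remember where SeekHead 1 will live and reserve space for it. */
    long seekhead1_pos = ftell(f);
    write_placeholder(f, 128);

    /* 2. Append-only phase: write Clusters as they are produced. */

    /* 3. Append Cues and SeekHead 2 (clusters only), recording their offsets. */
    long cues_pos = ftell(f);
    /* ... write Cues ... */
    long seekhead2_pos = ftell(f);
    /* ... write SeekHead 2 ... */

    /* 4. The one and only seek: overwrite the reserved region with a
       SeekHead pointing at cues_pos and seekhead2_pos. */
    fseek(f, seekhead1_pos, SEEK_SET);
    /* ... serialize SeekHead 1 into the 128 reserved bytes ... */
    (void)cues_pos;
    (void)seekhead2_pos;

    fclose(f);
    return 0;
}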

FFmpeg decoding .mp4 video file

I'm working on a project that needs to open a .mp4 file, read its frames one by one, decode them, re-encode them with a better type of lossless compression, and save them into a file.
Please correct me if I'm wrong about the order of doing things, because I'm not 100% sure how this should be done. From my understanding it should go like this:
1. Open input .mp4 file
2. Find stream info -> find video stream index
3. Copy the codec pointer of the found video stream into an AVCodecContext pointer
4. Find decoder -> allocate codec context -> open codec
5. Read frame by frame -> decode the frame -> encode the frame -> save it into a file
So far I've encountered a couple of problems. For example, if I want to save a frame using the av_interleaved_write_frame() function, I can't open the input .mp4 file using avformat_open_input(), since that populates the filename part of the AVFormatContext structure with the input file name, and therefore I can't "write" into that file. I've tried a different solution using av_guess_format(), but when I dump the format using dump_format() I get nothing, so I can't find stream information about which codec it is using.
So if anyone has any suggestions, I would really appreciate them. Thank you in advance.
See the "detailed description" in the muxing docs. You:
set ctx->oformat using av_guess_format
set ctx->pb using avio_open2
call avformat_new_stream for each stream in the output file. If you're re-encoding, this is done by adding each stream of the input file to the output file.
call avformat_write_header
call av_interleaved_write_frame in a loop
call av_write_trailer
close the file (avio_close) and free all allocated memory (a sketch of these calls follows below)
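Here is a bare-bones sketch of that sequence (output.mp4 is a placeholder name, error handling is reduced to early returns, and the stream parameters still have to be filled in before the header write will succeed):
#include <libavformat/avformat.h>

int main(void) {
    AVFormatContext *ctx = NULL;

    /* Allocate the output context; the muxer is guessed from the file name
       (this wraps the av_guess_format step). */
    if (avformat_alloc_output_context2(&ctx, NULL, NULL, "output.mp4") < 0)
        return 1;

    /* One avformat_new_stream per output stream; fill st->codecpar from
       your input stream or encoder before writing the header. */
    AVStream *st = avformat_new_stream(ctx, NULL);
    if (!st)
        return 1;
    /* ... set st->codecpar->codec_type, codec_id, dimensions / sample rate ... */

    /* Open the output file (ctx->pb). */
    if (!(ctx->oformat->flags & AVFMT_NOFILE) &&
        avio_open2(&ctx->pb, "output.mp4", AVIO_FLAG_WRITE, NULL, NULL) < 0)
        return 1;

    if (avformat_write_header(ctx, NULL) < 0)
        return 1;

    /* Main loop: write encoded packets with stream_index and timestamps set.
       av_interleaved_write_frame(ctx, pkt); */

    av_write_trailer(ctx);

    if (!(ctx->oformat->flags & AVFMT_NOFILE))
        avio_closep(&ctx->pb);
    avformat_free_context(ctx);
    return 0;
}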
You can convert a video to a sequence of lossless images with:
ffmpeg -i video.mp4 image-%05d.png
and then from a series of images back to a video with:
ffmpeg -i image-%05d.png video.mp4
The same functionality is also available through wrappers around FFmpeg.
You can see a similar question at: Extracting frames from MP4/FLV?
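For the decoding half (steps 1-5 in the question), a minimal read-and-decode sketch using the current send/receive API might look like the following; input.mp4 is a placeholder, error handling is reduced to early returns, and note that older tutorials use a different (now deprecated) set of calls:
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

int main(void) {
    AVFormatContext *fmt = NULL;

    if (avformat_open_input(&fmt, "input.mp4", NULL, NULL) < 0)
        return 1;
    if (avformat_find_stream_info(fmt, NULL) < 0)
        return 1;

    /* Steps 2-4: locate the video stream and open its decoder. */
    int vstream = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    if (vstream < 0)
        return 1;
    const AVCodec *dec = avcodec_find_decoder(fmt->streams[vstream]->codecpar->codec_id);
    if (!dec)
        return 1;
    AVCodecContext *dctx = avcodec_alloc_context3(dec);
    if (!dctx)
        return 1;
    avcodec_parameters_to_context(dctx, fmt->streams[vstream]->codecpar);
    if (avcodec_open2(dctx, dec, NULL) < 0)
        return 1;

    /* Step 5: read packets and decode frames; re-encoding/writing goes
       where the inner comment is. */
    AVPacket *pkt = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();
    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == vstream && avcodec_send_packet(dctx, pkt) == 0) {
            while (avcodec_receive_frame(dctx, frame) == 0) {
                /* ... hand `frame` to your lossless encoder / writer ... */
            }
        }
        av_packet_unref(pkt);
    }

    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&dctx);
    avformat_close_input(&fmt);
    return 0;
}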

make a video from a subset of images and audio

I want to create a video from three images and three audio files, where the display time of each image is the duration of the corresponding audio file.
Let's say I have three images image_0.png, image_1.png and image_2.png, and three audio files audio_0.mp3 (10 seconds long), audio_1.mp3 (15 seconds) and audio_2.mp3 (12 seconds).
I want the video to first show image_0.png with audio_0.mp3 for 10 seconds, then image_1.png with audio_1.mp3 for 15 seconds, and finally image_2.png with audio_2.mp3 for 12 seconds.
I tried to make this with avconv, using different variations of the -i options, such as
avconv -i imageInputFile.png -i audioInputFile.mp3 -c copy output.avi
but nothing worked. I could make a single AVI video for each image+audio pair, but I failed at concatenating all the single AVI files... Besides, I don't think this is the best way because of the quality loss.
How would you do this? Is this even possible with avconv?
First concatenate all your .mp3 files into one single .mp3,
then name your .png files something like img01.png, img02.png ... imgxx.png,
then try:
mencoder 'mf://img*.png' -oac mp3lame -ovc lavc -fps 1 -ofps 25 -vf harddup -audiofile audio.mp3 -o test.avi
obviously replace lavc with your preferred codec and 1 with a reasonable value to fit the frames in your audio track.
Some may argue that it's stupid to recompress the audio again and that I could use -oac copy instead, but when converting from multiple sources that can cause issues.
This command creates a 25 fps video stream with 15-26 duplicated frames per second; if you remove -ofps 25 you will avoid duplicate frames, but some decoders could hang, especially when seeking.
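Since the question asks about avconv: with ffmpeg (I haven't checked which of these options avconv also supports) another approach is to render one segment per image/audio pair using -loop 1 and -shortest, and then join the segments without re-encoding via the concat demuxer. The codec choices below are only an example:
ffmpeg -loop 1 -i image_0.png -i audio_0.mp3 -c:v libx264 -tune stillimage -c:a aac -shortest part_0.mp4
Repeat for image_1/audio_1 and image_2/audio_2, list the parts in a text file (one line per segment, of the form: file 'part_0.mp4'), then concatenate:
ffmpeg -f concat -safe 0 -i parts.txt -c copy output.mp4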

Load wave into array + Subtract channels + Save as wave/mp3

I have a raw stereo audio file.
It is part of a noise cancellation system on the Raspberry Pi: microphone 1 records the main voice to the left channel, and microphone 2 records the surrounding noise to the right channel. The goal is to subtract the right channel from the left channel. I'm going to describe what I tried, but don't feel you have to stick to it; change it if another way is much easier.
Recording takes place using a modified version of http://freedesktop.org/software/pulseaudio/doxygen/parec-simple_8c-example.html. I output it to a raw audio file, which is a valid raw file. The advantage of stereo is that the two channels are in sync. See my other question on How to find the right RAW format.
To summarize: How do I
Load a wave file into an array? (I'm asking this because in my other question the wave format never seems right.)
Subtract the right channel from the left channel? (I presume sample_left minus sample_right.)
Save it as raw, or even better as mono mp3? (I could pipe to lame.)
If you are giving a raw audio file as input, or reading raw audio samples from the audio device file, you can do the following (see the sketch below):
1. Open the raw audio file in binary mode and read the raw data into a buffer, if you are using a file to supply the raw audio data; or read raw audio samples from the audio device file into a buffer.
2. In interleaved stereo audio, the left channel sample is followed by the right channel sample, so you can simply separate the left and right channels. For example, if your device is producing 16-bit PCM samples, the first 16 bits are the left channel and the next 16 bits are the right channel.
3. You can simply open a normal binary file and make it a wav file by writing a wav header at the start of the file. Define a wav header and write the audio data into the wav file.
For wav file references see here
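A minimal C sketch of those three steps, assuming the capture is interleaved signed 16-bit little-endian stereo at 44100 Hz (adjust the rate to whatever your recording uses; file names are placeholders). It reads left/right sample pairs, writes left minus right as mono, and patches the WAV header sizes at the end:
#include <stdio.h>
#include <stdint.h>

/* Little-endian field writers (the Raspberry Pi is little-endian,
   so the samples themselves can be written with fwrite directly). */
static void w16(FILE *f, uint16_t v) { fputc(v & 0xff, f); fputc(v >> 8, f); }
static void w32(FILE *f, uint32_t v) { w16(f, v & 0xffff); w16(f, v >> 16); }

/* Minimal 44-byte header for 16-bit mono PCM. */
static void write_wav_header(FILE *f, uint32_t data_bytes, uint32_t rate) {
    fwrite("RIFF", 1, 4, f); w32(f, 36 + data_bytes); fwrite("WAVE", 1, 4, f);
    fwrite("fmt ", 1, 4, f); w32(f, 16);
    w16(f, 1);          /* PCM */
    w16(f, 1);          /* mono */
    w32(f, rate);       /* sample rate */
    w32(f, rate * 2);   /* byte rate = rate * channels * bytes per sample */
    w16(f, 2);          /* block align */
    w16(f, 16);         /* bits per sample */
    fwrite("data", 1, 4, f); w32(f, data_bytes);
}

int main(void) {
    FILE *in  = fopen("stereo.raw", "rb");   /* raw S16LE stereo capture */
    FILE *out = fopen("mono.wav", "wb");
    if (!in || !out) return 1;

    write_wav_header(out, 0, 44100);         /* sizes patched at the end */

    int16_t frame[2];                        /* frame[0] = left, frame[1] = right */
    uint32_t data_bytes = 0;
    while (fread(frame, sizeof(int16_t), 2, in) == 2) {
        int32_t diff = (int32_t)frame[0] - (int32_t)frame[1];   /* left minus right */
        if (diff >  32767) diff =  32767;    /* clamp back into 16-bit range */
        if (diff < -32768) diff = -32768;
        int16_t s = (int16_t)diff;
        fwrite(&s, sizeof(s), 1, out);
        data_bytes += sizeof(s);
    }

    fseek(out, 0, SEEK_SET);                 /* patch RIFF and data sizes */
    write_wav_header(out, data_bytes, 44100);

    fclose(in);
    fclose(out);
    return 0;
}
From there you can run lame mono.wav mono.mp3 (or pipe the data through lame) to get the mono mp3 mentioned above.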

bitstream details of wav, mp3 file

I would like to know the meaning of each byte in wav / mp3 files.
But I found nothing on Google... does anyone know where I can find such information?
(In fact I would like to know about all types of multimedia files at the bitstream level.)
MP3 files are divided into frames, each of which begins with a FRAMESYNC bit sequence so that hardware can find the beginning of each frame. More here.
Some info about WAV is here.
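To make that concrete, here is a small C sketch (file names are placeholders): it reads the fixed fields of a canonical 44-byte PCM WAV header, then does a naive scan for the MP3 frame sync (11 set bits: a 0xFF byte followed by a byte whose top three bits are set). A real MP3 parser would also validate the rest of the 4-byte header and skip ID3 tags:
#include <stdio.h>
#include <stdint.h>

int main(void) {
    unsigned char buf[4096];
    FILE *f;
    size_t n;

    /* WAV: a canonical PCM file starts with
       "RIFF" <size> "WAVE" "fmt " <fmt size> <format> <channels>
       <sample rate> <byte rate> <block align> <bits> "data" <data size>. */
    f = fopen("example.wav", "rb");
    if (f && fread(buf, 1, 44, f) == 44) {
        uint16_t channels    = buf[22] | (buf[23] << 8);
        uint32_t sample_rate = buf[24] | (buf[25] << 8) | (buf[26] << 16) | ((uint32_t)buf[27] << 24);
        uint16_t bits        = buf[34] | (buf[35] << 8);
        printf("wav: %u channels, %u Hz, %u bits per sample\n", channels, sample_rate, bits);
    }
    if (f) fclose(f);

    /* MP3: look for the frame sync; the rest of the 4-byte header encodes
       version, layer, bitrate, sample rate, padding, and so on. */
    f = fopen("example.mp3", "rb");
    if (f && (n = fread(buf, 1, sizeof(buf), f)) > 1) {
        for (size_t i = 0; i + 1 < n; i++) {
            if (buf[i] == 0xFF && (buf[i + 1] & 0xE0) == 0xE0) {
                printf("mp3: first frame header at byte offset %zu\n", i);
                break;
            }
        }
    }
    if (f) fclose(f);
    return 0;
}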
