Save video encoded by libavcodec to an AVI file format - c

I am able to encode video frames using libavcodec, by calling avcodec_encode_video function. How do I save these encoded frames into an AVI file?

Check this out:
http://forum.doom9.org/archive/index.php/t-112286.html
You must open the file for binary writing, write the AVI header into it, and then append the encoded frames as binary data, I think.
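Rather than writing the AVI header by hand, the usual approach is to let libavformat build the container for you. Below is a minimal sketch, assuming a reasonably recent FFmpeg (it uses avcodec_parameters_from_context, which is newer than avcodec_encode_video); enc_ctx stands for your opened encoder context, and get_encoded_packet() is a hypothetical placeholder for however you obtain the packets your encode calls produce:

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>

AVPacket *get_encoded_packet(void);   /* hypothetical: next encoded packet, NULL when done */

/* Sketch: mux already-encoded video packets into out.avi. Error checks omitted. */
int write_avi(AVCodecContext *enc_ctx)
{
    AVFormatContext *oc = NULL;
    avformat_alloc_output_context2(&oc, NULL, "avi", "out.avi");

    AVStream *st = avformat_new_stream(oc, NULL);
    avcodec_parameters_from_context(st->codecpar, enc_ctx);
    st->time_base = enc_ctx->time_base;

    avio_open(&oc->pb, "out.avi", AVIO_FLAG_WRITE);
    avformat_write_header(oc, NULL);              /* writes the AVI header */

    AVPacket *pkt;
    while ((pkt = get_encoded_packet()) != NULL) {
        av_packet_rescale_ts(pkt, enc_ctx->time_base, st->time_base);
        pkt->stream_index = st->index;
        av_interleaved_write_frame(oc, pkt);
        av_packet_unref(pkt);
    }

    av_write_trailer(oc);                         /* writes the AVI index */
    avio_closep(&oc->pb);
    avformat_free_context(oc);
    return 0;
}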

Related

FFmpeg decoding .mp4 video file

I'm working on a project that needs to open a .mp4 file, read its frames one by one, decode them, encode them with a better type of lossless compression and save them into a file.
Please correct me if I'm wrong about the order of doing things, because I'm not 100% sure how this particular thing should be done. From my understanding it should go like this:
1. Open the input .mp4 file
2. Find the stream info -> find the video stream index
3. Copy the codec pointer of the found video stream into an AVCodecContext type pointer
4. Find the decoder -> allocate the codec context -> open the codec
5. Read frame by frame -> decode the frame -> encode the frame -> save it into a file
So far I have encountered a couple of problems. For example, if I want to save a frame using the av_interleaved_write_frame() function, I can't open the input .mp4 file using avformat_open_input(), since it's going to populate the filename part of the AVFormatContext structure with the input file name, and therefore I can't "write" into that file. I've tried a different solution using av_guess_format(), but when I dump the format using dump_format() I get nothing, so I can't find stream information about which codec it is using.
So if anyone has any suggestions, I would really appreciate them. Thank you in advance.
See the "detailed description" in the muxing docs. You:
set ctx->oformat using av_guess_format
set ctx->pb using avio_open2
call avformat_new_stream for each stream in the output file. If you're re-encoding or remuxing, this means adding a corresponding output stream for each stream of the input file.
call avformat_write_header
call av_interleaved_write_frame in a loop
call av_write_trailer
close the file (avio_close) and clear up all allocated memory
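For the remuxing (stream-copy) case, that sequence looks roughly like the sketch below. The key point for the question above is that the input and the output each get their own AVFormatContext, so avformat_open_input never touches the file you are writing. This is only a sketch: error handling is omitted and the file names are placeholders.

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>

/* Sketch: copy every stream of in.mp4 into out.mp4 without re-encoding. */
int remux(const char *in_name, const char *out_name)
{
    AVFormatContext *in = NULL, *out = NULL;

    avformat_open_input(&in, in_name, NULL, NULL);   /* input gets its own context */
    avformat_find_stream_info(in, NULL);

    avformat_alloc_output_context2(&out, NULL, NULL, out_name);
    for (unsigned i = 0; i < in->nb_streams; i++) {
        AVStream *os = avformat_new_stream(out, NULL);
        avcodec_parameters_copy(os->codecpar, in->streams[i]->codecpar);
        os->codecpar->codec_tag = 0;
    }

    avio_open2(&out->pb, out_name, AVIO_FLAG_WRITE, NULL, NULL);
    avformat_write_header(out, NULL);

    AVPacket *pkt = av_packet_alloc();
    while (av_read_frame(in, pkt) >= 0) {
        av_packet_rescale_ts(pkt,
                             in->streams[pkt->stream_index]->time_base,
                             out->streams[pkt->stream_index]->time_base);
        av_interleaved_write_frame(out, pkt);
        av_packet_unref(pkt);
    }

    av_write_trailer(out);
    av_packet_free(&pkt);
    avio_closep(&out->pb);
    avformat_close_input(&in);
    avformat_free_context(out);
    return 0;
}

If you want to re-encode instead of stream-copy, the loop additionally decodes each packet and feeds the decoded frames to your encoder before writing.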
You can convert a video to a sequence of lossless images with:
ffmpeg -i video.mp4 image-%05d.png
and then from a series of images back to a video with:
ffmpeg -i image-%05d.png video.mp4
The same functionality is also available programmatically through the ffmpeg libraries and their wrappers.
You can see a similar question at: Extracting frames from MP4/FLV?

File extension detection mechanism

How will an application detect a file's type?
I know that every file has a header that contains all the information related to that file.
My question is: how will the application use that header to detect the file type?
Every file in a file system has some metadata associated with it. For example, I changed an audio file's extension from .mp3 to .txt and then opened that file with VLC, and VLC was still able to play it.
I found out that every file has a header section which contains all the information related to that file.
I want to know: how can I access that header?
Just to give you some more details:
A file extension is basically a way to indicate the format of the data (for example, TIFF image files have a format specification).
This way an application can check if the file it handles is of the right format.
Some applications don't check the file format (or accept a wrong one) and just try to use the file as the format they need. So for your .mp3 file, the data in the file is not changed when you simply change the extension to .txt.
When VLC reads the .txt file byte by byte and interprets it as an .mp3, it can still extract the correct music data from that file.
Now some files include a header for extra validation of what kind of format the data inside the file is. For example, a Unicode text file should include a BOM to indicate how the data in the file needs to be handled. This way an application can check whether the header tag matches the expected header, and so it knows for sure that your .txt file actually contains data in the MP3 format.
Now there are quite a few applications for reading those header tags, but they are often specific to each format. This TIFF Tag Viewer, for example (I used it in the past to check the header tags of my TIFF files).
So you could either open your file with some kind of hex viewer and look up in the format specification what each byte means, or search Google for a header viewer for the format you want to inspect.
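For illustration only, here is a small C sketch of the usual approach: read the first few bytes of the file and compare them against well-known "magic numbers". The helper function and its return strings are made up for this example; the signatures themselves (ID3/MP3, JPEG, PNG, RIFF/AVI, ftyp) are standard ones.

#include <stdio.h>
#include <string.h>

/* Sketch: guess a file's format from its first bytes, ignoring the extension. */
static const char *guess_format(const char *path)
{
    unsigned char buf[12] = {0};
    FILE *f = fopen(path, "rb");
    if (!f)
        return "unreadable";
    size_t got = fread(buf, 1, sizeof(buf), f);   /* short files just fail the tests below */
    (void)got;
    fclose(f);

    if (memcmp(buf, "ID3", 3) == 0)                        return "MP3 (ID3v2 tag)";
    if (buf[0] == 0xFF && (buf[1] & 0xE0) == 0xE0)         return "MPEG audio (raw frame sync)";
    if (memcmp(buf, "\xFF\xD8\xFF", 3) == 0)               return "JPEG";
    if (memcmp(buf, "\x89PNG\r\n\x1a\n", 8) == 0)          return "PNG";
    if (memcmp(buf, "RIFF", 4) == 0 && memcmp(buf + 8, "AVI ", 4) == 0)
                                                           return "AVI";
    if (memcmp(buf + 4, "ftyp", 4) == 0)                   return "MP4/MOV family";
    return "unknown";
}

int main(int argc, char **argv)
{
    if (argc > 1)
        printf("%s looks like: %s\n", argv[1], guess_format(argv[1]));
    return 0;
}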

How to encode pixel data into a video format?

I'm using ESCAPI to capture my webcam. It captures the frames as RGB pixel data. I've stored the RGB pixel data in a file, but the file is huge: 200 MB for a 15-second video at 320x240.
I want to encode that pixel data into a video format.
I'm using MinGW on windows.
First, use an encoder. I suggest the H.264 codec, so find a library that implements it and encode your frames with it.
Then pick a container file format. I suggest Matroska (.mkv), so find a library for muxing the encoded H.264 into .mkv.
A good beginning is to start with the ffmpeg libraries.
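As a rough sketch of the encoding half with the ffmpeg libraries (libavcodec + libswscale), assuming 320x240 RGB24 frames from your capture code: get_rgb_frame() is a hypothetical stand-in for your ESCAPI capture call, the output here is a raw H.264 elementary stream, and muxing it into .mkv would be a separate libavformat step.

#include <stdio.h>
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>

const uint8_t *get_rgb_frame(void);   /* hypothetical: next captured RGB24 frame, NULL when done */

/* Sketch: convert captured RGB24 frames to YUV420P and encode them with H.264. */
int encode_capture(void)
{
    const int W = 320, H = 240, FPS = 30;

    const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_H264);
    AVCodecContext *c = avcodec_alloc_context3(codec);
    c->width = W;  c->height = H;
    c->time_base = (AVRational){1, FPS};
    c->framerate = (AVRational){FPS, 1};
    c->pix_fmt   = AV_PIX_FMT_YUV420P;
    c->bit_rate  = 400000;
    avcodec_open2(c, codec, NULL);

    struct SwsContext *sws = sws_getContext(W, H, AV_PIX_FMT_RGB24,
                                            W, H, AV_PIX_FMT_YUV420P,
                                            SWS_BILINEAR, NULL, NULL, NULL);
    AVFrame *yuv = av_frame_alloc();
    yuv->format = AV_PIX_FMT_YUV420P;  yuv->width = W;  yuv->height = H;
    av_frame_get_buffer(yuv, 0);

    AVPacket *pkt = av_packet_alloc();
    FILE *out = fopen("out.h264", "wb");

    int64_t pts = 0;
    const uint8_t *rgb;
    while ((rgb = get_rgb_frame()) != NULL) {
        const uint8_t *const src[1] = { rgb };
        const int stride[1]         = { 3 * W };
        av_frame_make_writable(yuv);
        sws_scale(sws, src, stride, 0, H, yuv->data, yuv->linesize);
        yuv->pts = pts++;

        avcodec_send_frame(c, yuv);
        while (avcodec_receive_packet(c, pkt) == 0) {
            fwrite(pkt->data, 1, pkt->size, out);
            av_packet_unref(pkt);
        }
    }
    avcodec_send_frame(c, NULL);                  /* flush the encoder */
    while (avcodec_receive_packet(c, pkt) == 0) {
        fwrite(pkt->data, 1, pkt->size, out);
        av_packet_unref(pkt);
    }

    fclose(out);
    av_packet_free(&pkt);
    av_frame_free(&yuv);
    sws_freeContext(sws);
    avcodec_free_context(&c);
    return 0;
}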

Re-encode with ffmpeg

I am trying to do the following in C code with the help of the ffmpeg library:
1. Decode an mp2 audio file.
2. Write the decoded data to a file named test.sw.
3. Read data from test.sw and re-encode it to an mp2 audio file.
For 1 and 2, I followed the example given in decoding_encoding.c, which is working fine. For reading and re-encoding, I can't understand how to read from the test.sw file and encode it. Can anybody help me with that? It would help me a lot if anybody could provide a tutorial on this topic.
As I understood your question, you want to encode to the mp2 format. I suggest using a library that implements an mp2 encoder; you already have ffmpeg, so just check whether it provides mp2 encoding. If yes, then just use that function and pass it your decoded data.
You can check this example:
http://ffmpeg.org/doxygen/trunk/encoding-example_8c-source.html
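As a sketch of the re-encoding step, assuming test.sw contains raw interleaved signed 16-bit stereo samples at 44100 Hz (match these to whatever your decoding step actually wrote), and an FFmpeg version that still uses the channels/channel_layout fields (they were replaced by AVChannelLayout in 5.1):

#include <stdio.h>
#include <stdint.h>
#include <libavcodec/avcodec.h>
#include <libavutil/channel_layout.h>

/* Sketch: read raw s16 interleaved stereo samples from test.sw and
   encode them to MP2 frames written to test.mp2. Error checks omitted. */
int reencode_mp2(void)
{
    const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MP2);
    AVCodecContext *c = avcodec_alloc_context3(codec);
    c->bit_rate       = 128000;
    c->sample_rate    = 44100;
    c->sample_fmt     = AV_SAMPLE_FMT_S16;
    c->channel_layout = AV_CH_LAYOUT_STEREO;
    c->channels       = 2;
    avcodec_open2(c, codec, NULL);

    AVFrame *frame = av_frame_alloc();
    frame->nb_samples     = c->frame_size;        /* samples per MP2 frame (1152) */
    frame->format         = c->sample_fmt;
    frame->channel_layout = c->channel_layout;
    av_frame_get_buffer(frame, 0);

    AVPacket *pkt = av_packet_alloc();
    FILE *in  = fopen("test.sw",  "rb");
    FILE *out = fopen("test.mp2", "wb");

    size_t bytes_per_frame = c->frame_size * c->channels * sizeof(int16_t);
    while (av_frame_make_writable(frame) == 0 &&
           fread(frame->data[0], 1, bytes_per_frame, in) == bytes_per_frame) {
        avcodec_send_frame(c, frame);
        while (avcodec_receive_packet(c, pkt) == 0) {
            fwrite(pkt->data, 1, pkt->size, out);  /* MP2 frames can be concatenated directly */
            av_packet_unref(pkt);
        }
    }
    avcodec_send_frame(c, NULL);                   /* flush */
    while (avcodec_receive_packet(c, pkt) == 0) {
        fwrite(pkt->data, 1, pkt->size, out);
        av_packet_unref(pkt);
    }

    fclose(in);  fclose(out);
    av_packet_free(&pkt);
    av_frame_free(&frame);
    avcodec_free_context(&c);
    return 0;
}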

jpg file transfer using a socket_stream in C

I was wondering: if I send a file with a .jpg extension through a stream socket, does that automatically transform the bytes into a jpg, or do I need to implement some algorithm to turn the received bytes into an image? Could somebody please explain how to do it?
JPEG images are nothing but a bunch of bytes organized according to the JPEG format. A network socket isn't going to organize random bytes into the JPEG format. You can send the bytes that make up a JPEG formatted image across a socket as a binary blob, receive it on the other end, and write it to a file with a .jpg extension. An application can interpret this file as a JPEG image based on the extension and try to display it. But you are still responsible for providing a set of bytes that are organized as a JPEG image.
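For example, the sending side can be as simple as the following POSIX sketch, where sockfd is assumed to be an already-connected TCP socket; the receiver just writes whatever it reads into a .jpg file.

#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Sketch: send the raw bytes of a .jpg file over an already-connected TCP socket.
   The bytes arrive exactly as stored; nothing converts them into an image. */
int send_file(int sockfd, const char *path)
{
    FILE *f = fopen(path, "rb");          /* "b" matters on Windows: no newline translation */
    if (!f)
        return -1;

    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof(buf), f)) > 0) {
        size_t off = 0;
        while (off < n) {                 /* send() may write fewer bytes than requested */
            ssize_t sent = send(sockfd, buf + off, n - off, 0);
            if (sent <= 0) { fclose(f); return -1; }
            off += (size_t)sent;
        }
    }
    fclose(f);
    return 0;
}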

Resources