What is AV_SAMPLE_FMT_FLT? - C

SwrContext *swr_ctx = swr_alloc_set_opts(NULL,
                                         AV_CH_LAYOUT_STEREO,
                                         AV_SAMPLE_FMT_FLT,
                                         sample_rate,
                                         pCodecParameters->channel_layout,
                                         pCodecParameters->format,
                                         pCodecParameters->sample_rate,
                                         0,
                                         NULL);
What exactly is AV_SAMPLE_FMT_FLT? I have already read the docs, but I want to know what a float sample format means in the context of audio, and what the binary audio data actually looks like in that format.

It means that all samples are interleaved in a single buffer and each sample is a 32-bit float.
AFAIK the buffer is structured like this:
[SAMPLE_CH0][SAMPLE_CH1]...[SAMPLE_CHn]
[SAMPLE_CH0][SAMPLE_CH1]...[SAMPLE_CHn]
and so on; this pattern repeats once per sample in the frame.
I may be wrong though, you need to check it yourself.
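To make the layout concrete, here is a minimal, hedged sketch of walking interleaved AV_SAMPLE_FMT_FLT data in a decoded AVFrame (print_peak is just an illustrative name; the channel count is passed in to stay independent of the FFmpeg channel-layout API version):

#include <stdio.h>
#include <libavutil/frame.h>

/* Assumes frame->format == AV_SAMPLE_FMT_FLT: one buffer, channels interleaved. */
static void print_peak(const AVFrame *frame, int channels)
{
    const float *samples = (const float *)frame->data[0];
    float peak = 0.0f;
    for (int i = 0; i < frame->nb_samples; i++) {
        for (int c = 0; c < channels; c++) {
            /* sample i of channel c: [S0_CH0][S0_CH1]...[S1_CH0][S1_CH1]... */
            float s = samples[i * channels + c];
            if (s > peak)  peak = s;
            if (-s > peak) peak = -s;
        }
    }
    printf("peak: %f\n", peak);
}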

Related

How is frame data stored in libav?

I am trying to learn to use libav. I have followed the very first tutorial on dranger.com, but I got a little confused at one point.
// Write pixel data
for (y = 0; y < height; y++)
    fwrite(pFrame->data[0] + y * pFrame->linesize[0], 1, width * 3, pFile);
This code clearly works, but I don't quite understand why. In particular, I don't understand how the frame data in pFrame->data is stored, whether or not it depends on the format/codec in use, why pFrame->data and pFrame->linesize are always referenced at index 0, and why we are adding y to pFrame->data[0].
In the tutorial it says
We're going to be kind of sketchy on the PPM format itself; trust us, it works.
I am not sure if writing it to the PPM format is what is causing this process to seem so strange to me. Any clarification on why this code is the way it is and how libav stores frame data would be very helpful. I am not very familiar with media encoding/decoding in general, which is why I am trying to learn.
particularly I don't understand how the frame data in pFrame->data is stored, whether or not it depends on the format/codec in use
Yes, it depends on the pix_fmt value. Some formats are planar and others are not.
why pFrame->data and pFrame->linesize are always referenced at index 0,
If you look at the struct, you will see that data is an array of pointers (a pointer to a pointer). So pFrame->data[0] is a pointer to the data in the first "plane". Some formats, like RGB, have a single plane, where all data is stored in one buffer. Other formats, like YUV, use a separate buffer for each plane, e.g. Y = pFrame->data[0], U = pFrame->data[1], V = pFrame->data[2]. Audio may use one plane per channel, etc.
and why we are adding y to pFrame->data[0].
Because the example is looping over an image line by line, top to bottom.
To get the pointer to the first pixel of any line, you multiply the linesize by the line number and then add it to the pointer.
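To make the pointer arithmetic concrete, here is a small illustrative sketch, assuming a packed RGB24 frame and, for contrast, a planar YUV420P frame (x and y are pixel coordinates):

/* Packed RGB24: one plane, 3 bytes per pixel. */
uint8_t *line = pFrame->data[0] + y * pFrame->linesize[0];
uint8_t r = line[3 * x + 0];
uint8_t g = line[3 * x + 1];
uint8_t b = line[3 * x + 2];

/* Planar YUV420P: one plane (and one linesize) per component,
 * with the chroma planes subsampled by 2 in each direction. */
uint8_t Y = pFrame->data[0][y * pFrame->linesize[0] + x];
uint8_t U = pFrame->data[1][(y / 2) * pFrame->linesize[1] + x / 2];
uint8_t V = pFrame->data[2][(y / 2) * pFrame->linesize[2] + x / 2];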

How can I get current microphone input level with C WinAPI?

Using the Windows API, I want to implement something like the level meter shown in the Windows sound settings, i.e. getting the current microphone input level.
I am not allowed to use external audio libraries, but I can use Windows libraries. So I tried using waveIn functions, but I do not know how to process audio input data in real time.
This is the method I am currently using:
Record for 100 milliseconds
Select highest value from the recorded data buffer
Repeat forever
But I think this is way too hacky, and not a recommended way. How can I do this properly?
Having built a tuning wizard for a very dated, but well known, A/V conferencing application, I can say that what you describe is nearly identical to what I did.
A few considerations:
Enqueue 5 to 10 of those 100ms buffers into the audio device via waveInAddBuffer. IIRC, when the waveIn queue goes empty, weird things happen. Then as the waveInProc callbacks occurs, search for the sample with the highest absolute value in the completed buffer as you describe. Then plot that onto your visualization. Requeue the completed buffers.
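A hedged sketch of that loop, assuming 16-bit mono PCM at 44.1 kHz, with the peak printed instead of plotted and error checking omitted (real code should check every MMRESULT):

#include <windows.h>
#include <mmsystem.h>
#include <stdio.h>
#include <stdlib.h>
#pragma comment(lib, "winmm.lib")

#define NUM_BUFFERS 5
#define SAMPLES_PER_BUFFER 4410   /* ~100 ms at 44.1 kHz */

static void CALLBACK waveInProc(HWAVEIN hwi, UINT msg, DWORD_PTR inst,
                                DWORD_PTR param1, DWORD_PTR param2)
{
    if (msg != WIM_DATA) return;
    WAVEHDR *hdr = (WAVEHDR *)param1;
    const short *samples = (const short *)hdr->lpData;
    int count = hdr->dwBytesRecorded / sizeof(short);
    int peak = 0;
    for (int i = 0; i < count; i++) {
        int v = abs(samples[i]);
        if (v > peak) peak = v;
    }
    printf("peak: %d / 32768\n", peak);
    /* The waveIn docs discourage calling most wave functions from the callback;
     * a robust app would signal a worker thread and requeue the buffer there. */
    waveInAddBuffer(hwi, hdr, sizeof(WAVEHDR));
}

int main(void)
{
    WAVEFORMATEX fmt = {0};
    fmt.wFormatTag = WAVE_FORMAT_PCM;
    fmt.nChannels = 1;
    fmt.nSamplesPerSec = 44100;
    fmt.wBitsPerSample = 16;
    fmt.nBlockAlign = fmt.nChannels * fmt.wBitsPerSample / 8;
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

    HWAVEIN hwi;
    waveInOpen(&hwi, WAVE_MAPPER, &fmt, (DWORD_PTR)waveInProc, 0, CALLBACK_FUNCTION);

    static short bufs[NUM_BUFFERS][SAMPLES_PER_BUFFER];
    static WAVEHDR hdrs[NUM_BUFFERS];
    for (int i = 0; i < NUM_BUFFERS; i++) {
        hdrs[i].lpData = (LPSTR)bufs[i];
        hdrs[i].dwBufferLength = sizeof(bufs[i]);
        waveInPrepareHeader(hwi, &hdrs[i], sizeof(WAVEHDR));
        waveInAddBuffer(hwi, &hdrs[i], sizeof(WAVEHDR));
    }

    waveInStart(hwi);
    Sleep(10000);          /* capture and report peaks for 10 seconds */
    waveInStop(hwi);
    waveInReset(hwi);
    waveInClose(hwi);
    return 0;
}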
It might seem obvious to map the sample value linearly onto your visualization as follows.
For example, to plot a 16-bit sample:
// convert sample magnitude from 0..32768 to 0..N
length = (sample * N) / 32768;
DrawLine(length);
But then when you speak into the microphone, that visualization won't seem as "active" or "vibrant".
A better approach is to give more weight to the lower-energy samples. An easy way to do this is to replot along a logarithmic (μ-law-like) curve, or use a table lookup:
// convert sample magnitude from 0..32768 to 0..N, compressed along a log curve
length = (sample * N) / 32768;
length = (N * log(1.0 + length)) / log(1.0 + N);
length = min(length, N);
DrawLine(length);
You can tweak the above approach to whatever looks good.
Instead of computing the values yourself, you can rely on values from Windows. These are actually the values displayed in your screenshot from the Windows Settings.
See the following sample for the IAudioMeterInformation interface:
https://learn.microsoft.com/en-us/windows/win32/coreaudio/peak-meters.
It is written for playback, but you can use it for capture as well.
Some remarks: if you open IAudioMeterInformation for a microphone but no application has opened a stream from this microphone, then the level will be 0.
That means that to display your microphone peak meter, you will need to keep a microphone stream open, as you already do.
Also read the documentation about IAudioMeterInformation; it may not be what you need, as it reports the peak value. It depends on what you want to do with it.
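For illustration, here is a hedged C sketch of polling the default capture endpoint's peak meter via IAudioMeterInformation (no error checking; depending on your SDK setup you may need initguid.h before the Core Audio headers or a link against uuid.lib for the GUID definitions, plus ole32.lib for COM):

#define COBJMACROS
#include <windows.h>
#include <initguid.h>
#include <mmdeviceapi.h>
#include <endpointvolume.h>
#include <stdio.h>

int main(void)
{
    IMMDeviceEnumerator *enumerator = NULL;
    IMMDevice *mic = NULL;
    IAudioMeterInformation *meter = NULL;
    float peak = 0.0f;

    CoInitialize(NULL);
    CoCreateInstance(&CLSID_MMDeviceEnumerator, NULL, CLSCTX_ALL,
                     &IID_IMMDeviceEnumerator, (void **)&enumerator);
    IMMDeviceEnumerator_GetDefaultAudioEndpoint(enumerator, eCapture, eConsole, &mic);
    IMMDevice_Activate(mic, &IID_IAudioMeterInformation, CLSCTX_ALL, NULL, (void **)&meter);

    for (int i = 0; i < 100; i++) {                         /* poll ~10 times per second */
        IAudioMeterInformation_GetPeakValue(meter, &peak);  /* peak is in 0.0 .. 1.0 */
        printf("peak: %.3f\n", peak);
        Sleep(100);
    }

    IAudioMeterInformation_Release(meter);
    IMMDevice_Release(mic);
    IMMDeviceEnumerator_Release(enumerator);
    CoUninitialize();
    return 0;
}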

Seeking some guidance on webcam picture display using GTK+ and Cairo in C

In this question I'm mostly seeking advice and guidance on my overall understanding of some concepts of drawing with GTK+ and Cairo in C (IMO the information on the topic is rather scarce, and my experience is really modest).
I'm coding some pet application which captures frames from webcam and displays them on a GTK window.
My app is working, but there are some points which I don't feel like grasped.
Overall process:
I've got a webcam frame as an array of bytes mmapped from the webcam device into my app's process memory. So when another frame is captured, what I have is a 640*480*3 bytes long array which is denoted as being in RGB24 format. After some searching, it looks like for the purpose of displaying it in a GTK window I need to create an object called a drawing area using gtk_drawing_area_new(), add a "draw" callback and do the "drawing" there in that designated callback. According to Cairo, "drawing" is the process of applying a "source" to a "destination". I assume that I already have a source, my webcam's mmapped pixels, but it looks like I need to use some "source" that Cairo is able to understand. I found a candidate:
cairo_surface_t* surface = cairo_image_surface_create(CAIRO_FORMAT_RGB24, 640, 480);
As I see this call creates some Cairo acceptable object, which along the way allocates a buffer in my app's memory which I can get, using:
unsigned char* surface_data = cairo_image_surface_get_data(surface);
According to docs this is a 640x480x4 bytes long buffer, which, on a little endian archs, should be filled with BGRA formatted pixel data.
Then I should rearrange my original webcam pixels for EVERY frame captured using this:
for (size_t idx_src = 0, idx_dst = 0; idx_src < 640*480*3; idx_dst += 4, idx_src += 3) {
    surface_data[idx_dst]     = image[idx_src + 2]; // B [3rd pos -> 1st pos]
    surface_data[idx_dst + 1] = image[idx_src + 1]; // G [no change]
    surface_data[idx_dst + 2] = image[idx_src];     // R [1st pos -> 3rd pos]
}
After this I should do "drawing" with:
cairo_set_source_surface(cr, surface, 0, 0);
cairo_paint(cr);
So questions:
Is this what is supposed to be done for the task at hand, or am I missing something completely here?
What confuses me is that I should rearrange my original webcam pixels for EVERY frame captured (this presumably consumes some CPU time and could be a limiting factor for capturing in HD resolution at high frame rates). Is there some other way?
Let's suppose I somehow acquire pixels from the webcam in a Cairo-conforming format, e.g. 640x480x4 BGRA formatted bytes. Is there a way to "wrap" this data in some Cairo-acceptable object so the pixel-rearranging part can be skipped?
Any other thoughts I should consider?
Thanks for your attention.
For most of your questions: Cairo only supports some image formats. Since your data comes in another format, you will have to convert it. All this copying around will likely be too slow. To make this work with an acceptable speed, you would need some other approach. No, I do not have any helpful suggestions here.
An unhelpful one would be: Is there some example for this webcam that you could look at?
Let's suppose I somehow acquire pixels from the webcam in a Cairo-conforming format, e.g. 640x480x4 BGRA formatted bytes. Is there a way to "wrap" this data in some Cairo-acceptable object so the pixel-rearranging part can be skipped?
Yup. cairo_image_surface_create_for_data.
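A hedged sketch of that, assuming the capture buffer (frame_pixels is a hypothetical name) already holds tightly packed 640x480 BGRA pixels and stays valid for as long as the surface is used:

/* For a 640-pixel-wide CAIRO_FORMAT_RGB24 surface the required stride is
 * 640 * 4 bytes, which matches a tightly packed BGRA buffer. */
int stride = cairo_format_stride_for_width(CAIRO_FORMAT_RGB24, 640);
cairo_surface_t *surface = cairo_image_surface_create_for_data(
        frame_pixels, CAIRO_FORMAT_RGB24, 640, 480, stride);

/* In the "draw" callback, exactly as before: */
cairo_set_source_surface(cr, surface, 0, 0);
cairo_paint(cr);

After the capture code writes a new frame into the buffer, calling cairo_surface_mark_dirty(surface) before the next redraw tells Cairo that the pixels changed behind its back.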

changing HM reference software to display some information about the bitstream

I am very new to the HM HEVC (and the JEM) reference software, and I am currently trying to understand the source code. I want to add some lines to display, for each component, the name of the algorithm (i.e. inter/intra algorithms), the length of its bitstream, and its position in the output bin file.
The goal is to know which components cost more bits to code and how the codec works. I want to do the same thing for the JEM after that.
My first problem is that I am unable to understand a lot of the functions there, and the comments are not sufficient, so are there any references for understanding the code? (I already read the manual; it doesn't help.)
Second, I don't know where and how exactly to add these lines: is it in TEncGOP, TEncSlice or TEncCU? PS: I don't think it is in TEncGOP.compressGOP, so maybe in one of the two other classes.
(I am putting the answer to the comment that #Mourad left four hours ago here, because it will be long.)
I assume that you could manage to find where the actual encoding after the RDO loop is implemented. As you correctly mentioned, xEncodeCU is the function you need to refer to make sure you are not still in the RDO.
Now you need to find the exact function in xEncodeCU that is responsible for your target codec tool.
For instance, if you want to count the number of bits for coefficient coding, you should be looking into the m_pcEntropyCoder->encodeCoeff() (It's a JEM function and may have different name in the HM). Once you find this line in the xEncodeCU, you may do this and get the number of bits written inside encodeCoeff() function:
UInt b_before = m_pcEntropyCoder->getNumberOfWrittenBits();
m_pcEntropyCoder->encodeCoeff( ... );
UInt b_after = m_pcEntropyCoder->getNumberOfWrittenBits();
UInt writtenBitsCoeff = b_after - b_before;
One important point: as you can see, the function getNumberOfWrittenBits() gives you integer rates, obtained by rounding the sum of the fractional rates corresponding to all syntax elements coded inside the encodeCoeff function. This error might or might not be acceptable, depending on your problem. For example, if instead of the coefficient coding rate you wanted to know the rate of the CBF, then this error would not be acceptable at all, because the CBF rate is mostly less than one bit. If that is your case, then you would need to calculate the fractional bits one by one, which would be totally different and considerably more complicated than this.
Point 1: A rule of thumb is that logging coding decisions (e.g. pred mode, MV, IPM, block size) is much easier at the decoder side than at the encoder. This is because the encoder has a super complicated RDO process that can easily make you get lost in the loops, while at the decoder side everything appears only once. However, if you insist on doing it at the encoder side, you may find some tips here: Get some information from HEVC reference software
Point 2: Unlike coding decisions, logging rate (i.e. the number of written bits for different syntax elements) is more complicated at the decoder side than at the encoder. This is particularly true for the fractional bits associated with anything that is encoded in non-EP mode (i.e. with CABAC contexts). So you may do this part at the encoder side, but I am afraid it is not easy.
Point 3: I think the best way to understand the code is to read it line by line. It's very time-consuming, but if you know the standard(s) in theory, you will probably be able to distinguish the important parts and ignore the rest.
PS: I think there are too many questions, mostly too general, in your post. That makes it a bit difficult for me to answer them all together. So I'll wait for you to take your next step and ask more precise questions.

Images and Filters in OpenCL

Let's say I have an image called Test.jpg.
I just figured out how to bring an image into the project by the following line:
FILE *infile = fopen("Stonehenge.jpg", "rb");
Now that I have the file, do I need to convert this file into a bmp image in order to apply a filter to it?
I have never worked with images before, let alone OpenCL, so there is a lot that is going over my head.
I need further clarification on this part for my own understanding
Does this bmp image also need to be stored in an array in order to have a filter applied to it? I have seen a sliding-window technique used a couple of times in other examples. Is the bmp image pretty much split up into RGB values (0-255)? If someone can provide a link on this topic, that should help me understand it a lot better.
I know this may seem like a basic question to most but I do not have a mentor on this subject in my workplace.
Now that I have the file, do I need to convert this file into a bmp image in order to apply a filter to it?
Not exactly. bmp is a very specific image serialization format and actually a quite complicated one (implementing a BMP file parser that deals with all the corner cases correctly is actually rather difficult).
However, what you have there so far is not even file content data. What you have is a C stdio FILE handle, and that's it. So far you did not even check whether the file could be opened. That's not really useful.
JPEG is a lossy compressed image format. What you need to be able to "work" with it is a pixel value array. Either an array of component tuples, or a number of arrays, one for each component (depending on your application either format may perform better).
Implementing image format decoders yourself quickly becomes tedious. It's not exactly difficult, but it is also not something you can write down in a single evening. Of course the devil is in the details, and writing an implementation that is high quality, covers all corner cases and is fast is a major effort. That's why for every image (and video and audio) format out there you usually can find only a small number of encoder and decoder implementations. The de-facto standard codec libraries for JPEG are libjpeg and libjpeg-turbo. If your aim is to read just JPEG files, then these libraries would be the go-to implementations. However, you may also want to support PNG files, and then maybe EXR and so on, and then things become tedious again. So there are meta-libraries which wrap all those format-specific libraries and offer them through a universal API.
In the OpenGL wiki there's a dedicated page on the current state of image loader libraries: https://www.opengl.org/wiki/Image_Libraries
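As a hedged illustration of that first step, here is a minimal sketch of decoding a JPEG file into a packed RGB byte array with libjpeg/libjpeg-turbo (load_jpeg_rgb is just an illustrative name; a real program would install a proper error handler instead of relying on the default one, which terminates the process on error):

#include <stdio.h>
#include <stdlib.h>
#include <jpeglib.h>

unsigned char *load_jpeg_rgb(const char *path, int *width, int *height)
{
    FILE *infile = fopen(path, "rb");
    if (!infile) return NULL;                      /* always check fopen() */

    struct jpeg_decompress_struct cinfo;
    struct jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, infile);
    jpeg_read_header(&cinfo, TRUE);
    jpeg_start_decompress(&cinfo);                 /* colour JPEGs decode to RGB by default */

    *width  = (int)cinfo.output_width;
    *height = (int)cinfo.output_height;
    int row_stride = cinfo.output_width * cinfo.output_components;
    unsigned char *pixels = malloc((size_t)row_stride * cinfo.output_height);

    while (cinfo.output_scanline < cinfo.output_height) {
        unsigned char *row = pixels + cinfo.output_scanline * row_stride;
        jpeg_read_scanlines(&cinfo, &row, 1);      /* one scanline at a time */
    }

    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    fclose(infile);
    return pixels;                                 /* caller frees; size is width*height*3 */
}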
Does this bmp image also need to be stored in an array in order to have a filter applied to it?
That actually depends on the kind of filter you want to apply. A simple threshold filter for example does not take a pixel's surroundings into account. If you were to perform scanline signal processing (e.g. when processing old analogue television signals) you may require only a single row of pixels at a time.
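For instance, here is a minimal sketch of such a point-wise threshold filter over an 8-bit grayscale pixel array (the names and the 8-bit format are illustrative assumptions):

/* Each output pixel depends only on the corresponding input pixel, so no
 * neighbourhood, and strictly speaking no whole-image buffer, is required. */
void threshold(const unsigned char *src, unsigned char *dst,
               int width, int height, unsigned char cutoff)
{
    for (int i = 0; i < width * height; i++)
        dst[i] = (src[i] >= cutoff) ? 255 : 0;
}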
The universal solution of course is to keep the whole image in memory, but then some pictures are so HUGE that no average computer's RAM can hold them. There are image processing libraries, like VIPS, that implement processing graphs which can operate on small subregions of an image at a time and be executed independently.
Is the bmp image pretty much split up into RGB values (0-255)? If someone can provide a link on this item that should help me understand this a lot better.
In case you mean "pixel array" instead of BMP (remember, BMP is a specific data structure), then no. Pixel component values may be of any scalar type and value range. And there are in fact colour spaces in which there are value regions which are mathematically necessary but do not denote actually sensible colours.
When it comes down to pixel data, an image is just an n-dimensional array of scalar component tuples, where each component's value lies in a given range. It doesn't get more specific than that. Only when you introduce colour spaces (RGB, CMYK, YUV, CIE-Lab, CIE-XYZ, etc.) do you give those values a specific colour meaning. And the choice of data type is more or less arbitrary: you can use 8 bits per component RGB (0..255), 10 bits (0..1023) or floating point (0.0 .. 1.0); the choice is yours.
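For example, the same orange could be stored with either of two common per-component representations (the values are illustrative):

unsigned char rgb8[3] = { 255, 128, 0 };       /* 8 bits per component, range 0..255 */
float         rgbf[3] = { 1.0f, 0.5f, 0.0f };  /* floating point, range 0.0 .. 1.0   */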

Resources