I am interested in how to pack pixels into a buffer in a specified format such as RGB 565, RGB 888, BGR 888, or RGBA 8888, change the value of a single pixel immediately when needed, and avoid converting all or part of the buffer.
Would it be good practice to edit a void* bitmap buffer into a specific format using different functions that handle different formats and a single pointer to them?
For example, I have four functions that fill the buffer in RGB888, RGBA8888, BGRA8888, and RGB565 formats, and if the buffer format is defined as RGB565, I simply assign the corresponding function to the pointer responsible for filling the buffer. Or is this bad practice, and do I need to look for another approach?
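One way to sketch that function-pointer idea (all names here are made up for illustration, not from any library): each format gets a small writer that packs one pixel, and the buffer-filling code calls whichever writer was selected at initialization.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical per-format pixel writers: each packs one pixel's
   8-bit R, G, B into the buffer at a pixel index. */
typedef void (*put_pixel_fn)(void *buf, size_t index,
                             uint8_t r, uint8_t g, uint8_t b);

static void put_rgb888(void *buf, size_t index,
                       uint8_t r, uint8_t g, uint8_t b)
{
    uint8_t *p = (uint8_t *)buf + index * 3;
    p[0] = r; p[1] = g; p[2] = b;
}

static void put_bgr888(void *buf, size_t index,
                       uint8_t r, uint8_t g, uint8_t b)
{
    uint8_t *p = (uint8_t *)buf + index * 3;
    p[0] = b; p[1] = g; p[2] = r;
}

static void put_rgba8888(void *buf, size_t index,
                         uint8_t r, uint8_t g, uint8_t b)
{
    uint8_t *p = (uint8_t *)buf + index * 4;
    p[0] = r; p[1] = g; p[2] = b; p[3] = 0xFF;  /* opaque alpha */
}

static void put_rgb565(void *buf, size_t index,
                       uint8_t r, uint8_t g, uint8_t b)
{
    /* 5 bits red, 6 bits green, 5 bits blue packed into 16 bits */
    uint16_t *p = (uint16_t *)buf + index;
    *p = (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}
```

With `put_pixel_fn put_pixel = put_rgb565;` selected once at setup, the rest of the code can change any single pixel with `put_pixel(buf, y * width + x, r, g, b)` without knowing the format, and without converting the rest of the buffer.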
I'm trying to figure out how FFmpeg saves data in an AVFrame after the audio has been decoded.
Basically, if I print the data in the AVFrame->data[] array, I get a series of unsigned 8-bit integers that are the audio in raw format.
From what I can understand from the FFmpeg Doxygen, the format of the data is given by the enum AVSampleFormat, and there are two main categories: interleaved and planar. In the interleaved type, all the data is kept in the first row of the AVFrame->data array, with size AVFrame->linesize[0], while in the planar type each channel of the audio file is kept in a separate row of the AVFrame->data array, and each of those arrays has size AVFrame->linesize[0].
Is there a guide/tutorial that explains what do the numbers in the array mean for each of the formats?
Values in each of the data arrays (planes) are actual audio samples, laid out according to the specified format. E.g. if the format is AV_SAMPLE_FMT_S16P, the data arrays are actually arrays of int16_t PCM data. If you are dealing with a mono signal, only data[0] is valid; if it is stereo, data[0] and data[1] are valid, and so on.
I'm not sure there is a guide that explains each particular case, but the approach described is quite simple and easy to understand. Just play with it a bit and things should become clear.
I started messing around with image processing and wanted to know if there is a way to make a grayscale image without using image-processing libraries. I know the first bytes of the file hold information about the image, its format and size, and from there on I want to make the image grayscale. This is the code:
#include <stdio.h>

int main(void)
{
    FILE *binary = fopen("C:/Arabidopsis3.bmp", "rb");
    FILE *bCopy = fopen("C:/copy.bmp", "wb");
    if (!binary || !bCopy)
        return 1;

    /* Copy the header unchanged; 54 bytes is the usual header size
       for a plain (uncompressed, no palette) 24-bit BMP. */
    int get_char;
    for (int i = 0; i < 54 && (get_char = fgetc(binary)) != EOF; i++)
        fputc(get_char, bCopy);

    /* Grayscale: average each B,G,R triple and write the average for
       all three channels. (Row padding is ignored here, which is fine
       when width * 3 is already a multiple of 4.) */
    int b, g, r;
    while ((b = fgetc(binary)) != EOF &&
           (g = fgetc(binary)) != EOF &&
           (r = fgetc(binary)) != EOF) {
        int gray = (b + g + r) / 3;
        fputc(gray, bCopy);
        fputc(gray, bCopy);
        fputc(gray, bCopy);
    }

    fclose(binary);
    fclose(bCopy);
    return 0;
}
As you can see, I'm copying the bmp into copy.bmp, but copy.bmp should be grayscale.
Any Suggestions?
To start, the BMP file format has a header that gets stripped when the image is loaded (it is not kept in memory). After the header, there is a region of fixed size, but there are 7 different formats for what this next region can contain: http://en.wikipedia.org/wiki/BMP_file_format
You'll have to determine the format, then determine how many bytes are needed for each pixel. Then you'll need to know the pixel description format.
All of this is pretty easy if you just do a little research. I don't know that much about the file format, but you should be able to do it pretty easily if you just understand its structure.
Edit:
Well, if you know where each pixel value is located, you should be able to extract the pixels just by walking along the pixel array with a pointer. For instance, if you know there are 8 pixels in an image, and each pixel is defined by 8 bytes, you can walk along the pixel array by placing an 8-byte-wide pointer at the beginning of the pixel array section and advancing it, then do your XOR. I don't know exactly how to make something grayscale; I assume you just get rid of the color values. As such, if the first byte describes the gray level and the rest is color data, you would just take the color data and set those bits to 0.
If this is a Windows bitmap, your unstructured approach is ill-advised. I'd suggest you read up on the Windows SDK for information on manipulating bmp files. Some keywords to point you in the right direction: BITMAPINFOHEADER, GetDIBits/SetDIBits. Also, GDI+ has some very convenient wrappers for loading and saving images.
For an 8-bit embedded system that has a small monochrome LCD with black/white pixels (not grayscale), I need an efficient way of storing and displaying fonts. I will probably choose two fixed-width fonts, 4x5 pixels and 5x7 pixels. The resources are very limited: 30k ROM, 2k RAM. The fonts will be written to the buffer with a 1:1 scale, as a one-line string with a given start offset in pixels (char* str, byte x, byte y)
I think I would use 1k of RAM for the buffer. Unless there's a more efficient structure for writing the fonts to, I would have it arranged so it can be written sequentially into the LCD, which would be as follows:
byte buffer[1024];
Where each byte represents a horizontal line of 8 pixels (MSB on the left), and each line of the display is completed from left to right, and in that fashion, top to bottom. (So each line is represented by (128px / 8 =) 16 bytes.)
So my question:
How should the fonts be stored?
What form should the buffer take?
How should the fonts be written to the buffer?
I presume there are some standard algorithms for this, but I can't find anything in searches. Any suggestions would be very helpful (I don't expect anyone to code this for me!!)
Thanks
As a first cut, implement bit blit, a primitive with many uses, including drawing characters. This dictates the following answers to your questions.
As bitmaps.
A bitmap.
Bit blit.
The implementation of bit blit itself involves a bunch of bitwise operations repeatedly extracting a byte or combination of two partial bytes from the source bitmap to be combined with the destination byte.
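A minimal sketch of that idea for the 128-pixel-wide buffer described above (the glyph table and the 'A' row pattern are made-up examples, not a real font): each 5x7 glyph is stored as seven row bytes with the glyph in the top 5 bits, and drawing ORs those rows into the buffer, shifting across the byte boundary when x is not a multiple of 8.

```c
#include <stdint.h>
#include <string.h>

#define LCD_WIDTH_BYTES 16              /* 128 px / 8 */
static uint8_t buffer[1024];            /* 64 rows of 16 bytes */

/* One 5x7 glyph, one byte per row, glyph bits in the top 5 bits
   (MSB = leftmost pixel). This 'A' shape is illustrative only. */
static const uint8_t glyph_A[7] = {
    0x20, 0x50, 0x88, 0x88, 0xF8, 0x88, 0x88
};

/* OR a 5x7 glyph into the buffer at pixel position (x, y).
   Glyph bits may straddle two adjacent bytes, hence the 16-bit shift.
   No clipping: the caller keeps the glyph inside the display. */
static void draw_glyph(const uint8_t g[7], uint8_t x, uint8_t y)
{
    for (uint8_t row = 0; row < 7; row++) {
        uint16_t bits = (uint16_t)g[row] << 8;   /* left-align in 16 bits */
        bits >>= (x & 7);                        /* shift to pixel offset */
        size_t idx = (size_t)(y + row) * LCD_WIDTH_BYTES + (x >> 3);
        buffer[idx] |= (uint8_t)(bits >> 8);
        if (bits & 0xFF)                         /* spill into next byte  */
            buffer[idx + 1] |= (uint8_t)(bits & 0xFF);
    }
}
```

A string-drawing routine is then just a loop that looks up each character's seven bytes in the font table and calls draw_glyph, advancing x by 6 (5 pixels plus 1 of spacing) per character.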
I need to get the RGB values by reading an image. How can I do it in C?
The image format can be PNG, JPG, BMP, or another common format.
The result has to be saved in a text file.
A very easy-to-use image library that can cover the reading and writing of all these formats would be FreeImage. It is primarily a C library, but there are also wrappers for C++, etc.
When you say "saved in a text file", that is pretty atypical for images, because binary formats are much more compact than storing raw string values for the pixel intensities. Additionally, many formats use compression, which means there isn't really a given "value" per pixel ... instead the data must be decompressed before you can assign a value to every individual pixel. There are some image formats, such as PPM, that can be stored as ASCII data, but again, that's not necessarily the most efficient way to store a large image.
So for your workflow, you would use a library like FreeImage to read the values out of the image file, and then write back the uncompressed pixel values to a PPM file, or a custom-formatted text file.
How do I read any YUV image?
How can the dimensions of a YUV image be passed for reading it into a buffer?
Usually, when people talk about YUV, they mean YUV 4:2:0. Your reference to "any YUV image" is misleading, because there are a number of different formats, and each is handled differently. For example, raw YUV 4:2:0 (by convention, files with an extension .yuv) doesn't contain any dimension data, whereas y4m files typically do. So you really need to know what sort of image you're dealing with before you start reading it. For YUV 4:2:0, you just open the file and read the luminance (Y') and chrominance (CbCr) components in that order, keeping in mind that the chrominance components have been decimated (each is a quarter the size of the luminance component). After that, you typically convert Y'CbCr to RGB so you can display the image.
From what I've mentioned above, you simply can't determine the dimensions of any YUV image. Often, the dimensions are simply not in the file. You have to know them up front -- this is why most sites listing YUV files for download also list their frame dimensions (and for video, the number of frames). Most YUV files are either CIF (352x288) or QCIF (176x144), although there are some files digitized from analog video that are in the slightly different SIF format. Read about the different formats here.
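Given known dimensions, reading one raw 4:2:0 frame is just three sequential reads; the plane sizes follow from the width and height (function name and error convention here are my own, not from any API):

```c
#include <stdio.h>

/* Read one raw YUV 4:2:0 frame: the planes are stored back to back
   as Y, then Cb, then Cr. The caller must know w and h up front --
   a bare .yuv file carries no dimension data.
   Returns 0 on success, -1 on a short read. */
int read_yuv420_frame(FILE *f, int w, int h,
                      unsigned char *y, unsigned char *cb, unsigned char *cr)
{
    size_t y_size = (size_t)w * h;
    size_t c_size = y_size / 4;   /* chroma decimated 2x in each direction */

    if (fread(y,  1, y_size, f) != y_size) return -1;
    if (fread(cb, 1, c_size, f) != c_size) return -1;
    if (fread(cr, 1, c_size, f) != c_size) return -1;
    return 0;
}
```

For QCIF, w = 176 and h = 144, so the Y plane is 25344 bytes and each chroma plane 6336; one frame is 38016 bytes, which is also how you can sanity-check a file's dimensions against its size.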