I want to perform some conversions on the pixels of a bitmap that is given as a byte stream.
Depending on the bytes-per-pixel value the resulting pixels differ, but the code is almost the same, because the byte stream is accessed through a pointer of the matching type (uint8_t*, uint16_t* or uint32_t*) and iterated with bytestream[x].
Currently my code looks like this:
if (bytesPerPixel == 1) {
    uint8_t *pixels = (uint8_t *) stream;
    // a lot of code iterating over the pixels pointer
} else if (bytesPerPixel == 2) {
    uint16_t *pixels = (uint16_t *) stream;
    // exactly the same lot of code iterating over the pixels pointer
} else if (bytesPerPixel == 4) {
    uint32_t *pixels = (uint32_t *) stream;
    // exactly the same lot of code iterating over the pixels pointer
}
The problem is the code style: I don't like that the code is repeated. I can't move it into a separate function, because the function would require a pointer of a specific type.
In C++ I believe I could use a template, but this is C.
The alternative is casting to the specific pointer type for each pixel, so the if-chain with repeated code would be much smaller, but I believe this could hurt performance.
Something like this:
uint8_t *pixels = (uint8_t *) stream;
// a lot of code iterating with `pixel = pixels + i * bytesPerPixel`
// finally setting the pixels
some loop {
    if (bytesPerPixel == 1) {
        // one or two lines
    } else if (bytesPerPixel == 2) {
        // one or two lines
    } else if (bytesPerPixel == 4) {
        // one or two lines
    }
}
This is a lot smaller, but as I said before this could limit performance, since we are doing the check for every pixel.
Is making such a check per pixel troublesome on current machines (for example a ~1 GHz ARM CPU)?
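One common C substitute for the template approach mentioned above is to put the shared loop body in a macro and expand it once per pixel type, so the bytes-per-pixel check happens only once, outside the loop. A minimal sketch (the bit-invert is just a placeholder for the real conversion):

#include <stdint.h>
#include <stddef.h>

/* Expand the shared loop body once per pixel type. */
#define CONVERT_PIXELS(TYPE, stream, count)                                   \
    do {                                                                      \
        TYPE *pixels = (TYPE *)(stream);                                      \
        for (size_t i = 0; i < (count); i++) {                                \
            pixels[i] = (TYPE)~pixels[i]; /* placeholder for the real work */ \
        }                                                                     \
    } while (0)

void convert(void *stream, size_t count, int bytesPerPixel)
{
    /* the branch happens once, not per pixel */
    switch (bytesPerPixel) {
    case 1: CONVERT_PIXELS(uint8_t,  stream, count); break;
    case 2: CONVERT_PIXELS(uint16_t, stream, count); break;
    case 4: CONVERT_PIXELS(uint32_t, stream, count); break;
    }
}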
I'm trying to create a 2D array that contains the RGB value of each pixel in the image.
I created a copy of the original image to check whether the images are similar, and the output is mostly gray pixels.
(I try to use only standard libraries.)
These are the structs I use:
typedef struct { // bmp file values struct
    int width;
    int height;
} image_t;

typedef struct { // pixel color
    unsigned char r;
    unsigned char g;
    unsigned char b;
} color_t;
The main function:
int main() {
    int i, j;
    color_t** matrix;
    static image_t image;
    if ((LoadSprite(&image, BMP)) != 0) {
        printf_s("Failed to load file: \" %s\"", BMP);
        return -1;
    }
    matrix = malloc(sizeof(color_t*) * image.height); // allocate memory for the image pixel matrix
    for (i = 0; i < image.height; i++) {
        matrix[i] = malloc(sizeof(color_t) * image.width);
    }
    imgtrx(matrix, image, BMP);
    CreateBMP(BMPCPY, matrix, image.height, image.width);
    return 0;
}
The function imgtrx essentially assigns each RGB pixel value to the correct location in the matrix according to the image height and width:
void imgtrx(color_t** mtrx, image_t image, char* filename) {
    int val, t, i = 0, j, k = 0;
    FILE* file;
    val = fopen_s(&file, filename, "rt");
    //fseek(file, 54, SEEK_SET);
    fseek(file, 10, SEEK_SET);
    fread(&t, 1, 4, file); // reads the pixel-data offset and puts it in t
    fseek(file, t, SEEK_SET);
    int p, e;
    for (i = 0; i < image.width; i++)
    {
        for (j = 0; j < image.height; j++) {
            fread(&mtrx[i][j].r, 8, 1, file);
            fread(&mtrx[i][j].g, 8, 1, file);
            fread(&mtrx[i][j].b, 8, 1, file);
        }
    }
    fclose(file);
}
The following function converts the 2D array to a single-dimensional array and writes a BMP copy:
void CreateBMP(char* filename, color_t** matrix, int height, int width)
{
    int i, j;
    int padding, bitmap_size;
    color_t* wrmat;
    wrmat = malloc(sizeof(color_t) * height * width);
    for (i = 0; i < height; i++) {
        for (j = 0; j < width; j++) {
            wrmat[i + j] = matrix[i][j];
        }
    }
    if (((width * 3) % 4) != 0) {
        padding = (width * 3) + 1;
    }
    else
    {
        padding = width * 3;
    }
    bitmap_size = height * padding * 3;
    char tag[] = { 'B', 'M' };
    int header[] = {
        0x3a, 0x00, 0x36,
        0x28, // Header Size
        width, height, // Image dimensions in pixels
        0x180001, // 24 bits/pixel, 1 color plane
        0, // BI_RGB no compression
        0, // Pixel data size in bytes
        0x002e23, 0x002e23, // Print resolution
        0, 0, // No color palette
    };
    header[0] = sizeof(tag) + sizeof(header) + bitmap_size;
    FILE* fp;
    fopen_s(&fp, filename, "w+");
    fwrite(&tag, sizeof(tag), 1, fp);
    fwrite(&header, sizeof(header), 1, fp); // write header to disk
    fwrite(wrmat, bitmap_size * sizeof(char), 1, fp);
    fclose(fp);
    free(wrmat);
}
Note that BMP files can have different pixel formats, as detailed in the BITMAPINFOHEADER family of structures documented in the Microsoft manuals for Win32 and OS/2.
From your "all gray" result, I suspect you are trying to interpret a less-than-24bpp format as RGB values, or that you have run into a technical problem reading the pixel area of the file.
So to do what you are trying to do, your code needs to read the first 4 bytes of the BITMAPINFOHEADER structure and use them to determine the structure version, then read in the rest of the BITMAPINFOHEADER to determine the pixel array format and the size/format of the palette/color information structures that lie between the BITMAPINFOHEADER and the actual pixels (a sketch follows the list below).
Also remember to convert any header fields (including the offset field you already parse) from the little-endian on-disk format to whatever your runtime CPU uses.
Please note that the use of a negative height value to indicate reversed scanline order is extremely common, as most PC and television hardware stores the top-left pixel first, while positive-height BMP files store the bottom-left pixel first.
The BITMAPINFOHEADER structure versions to worry about are these:
2. BITMAPCOREHEADER (from the 1980s)
3. BITMAPINFOHEADER (Since Windows 3.00)
4. BITMAPV4HEADER
5. BITMAPV5HEADER (Note that the documentation has typos in the linked color profile description).
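As a rough sketch of that first step (assuming the standard 14-byte BITMAPFILEHEADER in front of the DIB header, and the well-known header sizes 12/40/108/124 for the versions listed above):

#include <stdio.h>
#include <stdint.h>

/* read a 4-byte little-endian value regardless of the host CPU's byte order */
static uint32_t read_le32(FILE *f) {
    uint8_t b[4];
    if (fread(b, 1, 4, f) != 4) return 0;
    return (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
           ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
}

/* returns the DIB header size, which identifies the header version:
 * 12 = BITMAPCOREHEADER, 40 = BITMAPINFOHEADER,
 * 108 = BITMAPV4HEADER, 124 = BITMAPV5HEADER */
static uint32_t dib_header_size(FILE *f) {
    fseek(f, 14, SEEK_SET); /* DIB header starts right after the 14-byte file header */
    return read_le32(f);
}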
The pixel formats to worry about are:
1bpp (black and white, with a mandatory palette indicating the RGB equivalents of the 0 and 1 bits). As in CGA/EGA/VGA hardware, the most significant bit of each byte is the leftmost pixel (essentially a big-endian pixel format).
4bpp (traditionally standard CGA colors, but the palette can specify any other RGB values for the 16 possible pixel codes). The most significant 4 bits of each byte are the leftmost pixel (essentially a big-endian pixel format). This format is also used for 3bpp EGA hardware pixels and 2bpp CGA pixels, but using only 8 or 4 of the 16 values.
8bpp (the mandatory palette indicates how each byte value maps to RGB values). VGA hardware included a dedicated chip (the RAMDAC) to do the RGB mapping at the full pixel speed of the monitor.
16bpp (the mandatory palette is an array of 3 uint32_t bitmasks to AND with each pixel to get back only the bits storing the B, G and R values; converting those masks to appropriate shift values is left as an exercise for every graphics programmer, see the sketch after this list). Each pixel is a little-endian 2-byte value to be ANDed with the 3 mask values to get the Blue, Green and Red channel values. The most common bitmask values are those for 5:5:5 15bpp hardware and those for 5:6:5 hardware with one more green bit.
24bpp (there is no palette or mask data): each pixel is 3 bytes in the order Blue, Green, Red, each giving the color value as a fraction of 255.
32bpp is the same as 16bpp, only with 4 bytes in each pixel value. Common formats described by the masks are Blue, Green, Red, x and Red, Green, Blue, x, where x may be 0, random noise or an alpha channel. There are also a few files that use more than 8 bits per color channel to encode HDR images. If you are writing a generic decoder, you need to extract at least the 8 most significant bits of each color channel, though keeping extra pixel bits is rarely necessary.
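A minimal sketch of that mask-to-shift conversion (my own naming; channels narrower than 8 bits are simply widened, without replicating low bits):

#include <stdint.h>

typedef struct { int shift; int bits; } channel_t;

/* derive the right-shift and bit width from a contiguous channel mask */
static channel_t mask_to_shift(uint32_t mask) {
    channel_t c = {0, 0};
    if (mask == 0) return c;
    while (!(mask & 1)) { mask >>= 1; c.shift++; }
    while (mask & 1)    { mask >>= 1; c.bits++;  }
    return c;
}

/* extract one channel from a pixel and scale it to 0..255 */
static uint8_t channel_to_8bit(uint32_t pixel, uint32_t mask, channel_t c) {
    uint32_t v = (pixel & mask) >> c.shift;
    if (c.bits >= 8)
        return (uint8_t)(v >> (c.bits - 8)); /* keep only the 8 most significant bits */
    return (uint8_t)(v << (8 - c.bits));     /* widen e.g. a 5-bit value to 8 bits */
}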
On top of all these color formats, the header may indicate any of multiple compression formats, most notably the historic BMP RLE4 and BMP RLE8 compressions used by Windows 3.x, and the use of common 3rd-party compressions such as JPEG. Using the BMP format as a wrapper around compressed images is quite rare these days, so you probably don't need to worry about it. It is, however, the traditional way to ask the GPU or laser printer to decompress JPEG and PNG files for you: pass the contents of an in-memory JPEG (from a standard JFIF file) along with an in-memory BITMAPINFOHEADER structure to a GDI or GPI API, after checking that other GDI/GPI APIs report that the installed driver supports this.
I am using the following way to copy a region from a bitmap in rgb565 pixel format:
void bmpcpy(size_t left, size_t top, size_t right, size_t bottom) {
    size_t index = 0;
    do {
        do {
            bmpCopy[index] = bmpSrc[(top * BMP_WIDTH) + left];
            index++;
        } while (++left < right);
    } while (++top < bottom);
}
Is there a faster way to do the copy?
There might be faster ways using memcpy or accelerated graphics APIs, but first notice that your code is flawed:
bmpCopy and bmpSrc are not defined; it is unlikely they should be global variables.
bmpCopy is assumed to have a stride of right - left, which is not necessarily correct because of alignment constraints.
left is not reset for each row.
the width and height of the region are assumed to be non-zero.
Depending on the type of bmpSrc, the parity and magnitude of the width, and the alignment of the source and destination pointers, it might be more efficient to copy multiple pixels at a time using a larger type. A row-wise memcpy version of the copy is sketched below.
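A possible row-by-row version using memcpy (a sketch, not drop-in code: rgb565 pixels are 2 bytes each, so each row of the region is one memcpy; srcStride would be the question's BMP_WIDTH, and dstStride is an assumed destination row length in pixels):

#include <string.h>
#include <stdint.h>
#include <stddef.h>

void bmpcpy2(uint16_t *bmpCopy, const uint16_t *bmpSrc,
             size_t srcStride, size_t dstStride,
             size_t left, size_t top, size_t right, size_t bottom) {
    size_t width = right - left;                 /* region width in pixels */
    for (size_t y = top; y < bottom; y++) {
        /* copy one full row of the region in a single call */
        memcpy(bmpCopy + (y - top) * dstStride,
               bmpSrc + y * srcStride + left,
               width * sizeof(uint16_t));
    }
}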
I am overlaying a custom-drawn view on top of a video frame, encoding that overlay's pixel information into a byte array and sending it to my microcontroller. Most of the RGB values are 0 because they are part of the video frame and not the custom view I am drawing.
I am encoding the 32-bit RGB value of each pixel into 4 separate bytes. However, the frame rate is very slow and lags because I am looping through the entire frame and converting each value to a byte array. I understand what's causing the issue, but I am wondering if there is a way to speed it up.
I would remove the 0s and only pass values that have a valid RGB value, but I need to keep the positions.
The matrix has 518400 elements.
for (int i = 0; i < A.size(); i++) {
    byte *t = (byte *) &A.mem[i];
    byte t1 = t[0];
    byte t2 = t[1];
    byte t3 = t[2];
    byte t4 = t[3];
    if (t1 != '\0' || t2 != '\0' || t3 != '\0' || t4 != '\0') {
        writeSerialData(t, 4);
    }
}
You could check the four bytes against zero in one go:
int iMax = A.size();
for (int i = 0; i < iMax; ) {
    int32_t *t = (int32_t *) &A.mem[i++];
    if (*t) {
        writeSerialData(t, 4);
    }
}
Do you control the receiving part of that writeSerialData()? Then you should invent some encoding that specifies a new offset of the following data.
Also, are you really using 16 million colors in your overlay? If you are only using a few colors, palettes were invented for exactly that. If you know the colors up front, you can hard-code them on both ends; if not, send a small header with the palette before the image. That can cut the size of your data by a factor of 4. If you only have a single color, you can further reduce your data to single bits. A small palette sketch follows.
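A small sketch of the palette idea (all names are mine; it assumes the overlay really uses at most 256 distinct colors):

#include <stdint.h>
#include <stddef.h>

#define MAX_PALETTE 256

/* find (or add) a color in the palette and return its index */
static uint8_t palette_index(uint32_t color, uint32_t *palette, size_t *used) {
    for (size_t i = 0; i < *used; i++)
        if (palette[i] == color) return (uint8_t)i;
    if (*used < MAX_PALETTE)
        palette[(*used)++] = color;
    return (uint8_t)(*used - 1);
}

/* convert a frame of 32-bit colors to one index byte per pixel;
 * the palette (and its size) would be sent once before the indices */
static void frame_to_indices(const uint32_t *pixels, size_t count,
                             uint32_t *palette, size_t *used, uint8_t *indices) {
    for (size_t i = 0; i < count; i++)
        indices[i] = palette_index(pixels[i], palette, used);
}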
UPDATE
By "encoding" I mean some special format of your data, other than simple RGB value. For example, do you use the Alpha channel (I noticed you didn't call your colors RGBA)? If not - you can use that byte for length-encoding, stating that the following color is repeated that many times (up to 255). This will save you some bandwidth on your zeroes (just say how many are there), as well as on a line of the same color (you don't need to send every pixel).
While making a game with SDL2 in C, I have to compare two SDL_Surfaces to check a win condition, but I couldn't find a way to do so.
It seems you're interested in comparing two SDL_Surfaces, so here is how you do it. There is probably a better way to solve your specific problem, but anyway:
From the SDL Wiki, SDL_Surface has members of interest format, w, h, pitch, pixels.
format represents the pixel encoding information
format->format is the specific enumeration constant specifying a given encoding
w represents the number of pixels in a row of the surface
h represents the number of rows of pixels in the surface
pitch represents the byte length of a row
pixels is an array with all the pixel data
If you want to compare two SDL_Surfaces, you need to compare the pixels against one another. But first we should check that the pixel encoding and the dimensions match:
int SDL_Surfaces_comparable(SDL_Surface *s1, SDL_Surface *s2) {
    return (s1->format->format == s2->format->format && s1->w == s2->w && s1->h == s2->h);
}
If SDL_Surfaces_comparable evaluates to true, we can check if two surfaces are equal by comparing the pixels fields byte by byte.
int SDL_Surfaces_equal(SDL_Surface *s1, SDL_Surface *s2) {
    if (!SDL_Surfaces_comparable(s1, s2)) {
        return 0;
    }
    // the # of bytes we want to check is the byte length of a row (pitch) * rows
    int len = s1->pitch * s1->h;
    const uint8_t *p1 = (const uint8_t *) s1->pixels;
    const uint8_t *p2 = (const uint8_t *) s2->pixels;
    for (int i = 0; i < len; i++) {
        // any two unequal pixel bytes mean the surfaces differ
        if (p1[i] != p2[i])
            return 0;
    }
    // we finished the loop without finding non-matching data
    return 1;
}
This assumes that pixel data is serialized as bytes without any padding, or that the padding is zeroed. I couldn't find any SDLPixel structure, so I'm assuming this is the standard way to compare pixels. I did find this link, which seems to verify my approach.
I'm trying to read an image file and scale its pixel levels by some factor, multiplying each byte by it. I'm not sure I'm doing it right, though:
void scale_file(char *infile, char *outfile, float scale)
{
    // open files for reading
    FILE *infile_p = fopen(infile, 'r');
    FILE *outfile_p = fopen(outfile, 'w');
    // init data holders
    char *data;
    char *scaled_data;
    // read each byte, scale and write back
    while ( fread(&data, 1, 1, infile_p) != EOF )
    {
        *scaled_data = (*data) * scale;
        fwrite(&scaled_data, 1, 1, outfile);
    }
    // close files
    fclose(infile_p);
    fclose(outfile_p);
}
What gets me is how to do each byte multiplication (scale is 0-1.0 float) - I'm pretty sure I'm either reading it wrong or missing something big. Also, data is assumed to be unsigned (0-255). Please don't judge my poor code :)
thanks
char *data;
char *scaled_data;
No memory was allocated for these pointers - why do you need them as pointers? unsigned char variables will be just fine (unsigned because it makes more sense for byte data).
Also, what happens when the scale shoots the value out of the 256-range? Do you want saturation, wrapping, or what?
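For reference, a corrected sketch of the loop pulling these points together (it still assumes the file really is raw bytes with no header, which the other answers question):

#include <stdio.h>

void scale_file(const char *infile, const char *outfile, float scale)
{
    FILE *in = fopen(infile, "rb");   /* fopen takes a string mode; "b" for binary data */
    FILE *out = fopen(outfile, "wb");
    if (!in || !out) {
        if (in) fclose(in);
        if (out) fclose(out);
        return;
    }
    unsigned char data;
    while (fread(&data, 1, 1, in) == 1) {   /* fread returns the item count, not EOF */
        float v = data * scale;
        unsigned char scaled = (v > 255.0f) ? 255 : (unsigned char)v;  /* saturate, just in case */
        fwrite(&scaled, 1, 1, out);
    }
    fclose(in);
    fclose(out);
}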
change char *scaled_data; to char scaled_data;
change *scaled_data = (*data) * scale; to scaled_data = (*data) * scale;
That would get you code that would do what you are trying to do, but ....
This could only possibly work on an image file of your own custom format. There is no standard image format that just dumps pixels in bytes in a file in sequential order. Image files need to know more information, like
The height and width of the image
How pixels are represented (1 byte gray, 3 bytes color, etc)
If pixels are represented as an index into a palette, the file has to contain the palette
All kinds of other information (GPS coordinates, the software that created it, the date it was created, etc)
The method of compression used for the pixels
All of this is called metadata
In addition (as alluded to by #5), pixel data is usually compressed.
Your code is equivalent to saying "I want to scale down my image by dividing the bits in half"; it doesn't make sense.
Images files are complex formats with headers and fields and all sorts of fun stuff that needs to be interpreted. Take nobugz's advice and check out ImageMagick. It's a library for doing exactly the kind of thing you want.
Why do you think you are wrong? I see nothing wrong in your algorithm, except that it is not efficient and that char *data; and char *scaled_data; should be unsigned char data; and unsigned char scaled_data;.
My understanding of a bitmap (just the raw data) is that each pixel is represented by three numbers, one each for R, G and B; multiplying each by a number <= 1 would just make the image darker. If you're trying to make the image wider, you could maybe just output each pixel twice (to double the size), or output every other pixel (to halve the size), but that depends on how it's rasterized.