I'm trying to play around with image manipulation in C, and I want to be able to read and write pixels on an SDL_Surface. (I'm loading a BMP into a surface to get the pixel data.) I'm having some trouble figuring out how to properly use the following functions.
SDL_CreateRGBSurfaceFrom();
SDL_GetRGB();
SDL_MapRGB();
I have only found examples of these in C++, and I'm having a hard time implementing them in C because I don't fully understand how they work.
So my questions are:
How do you properly retrieve pixel data using SDL_GetRGB, and how is a pixel addressed with x, y coordinates?
What kind of array would I use to store the pixel data?
How do you use SDL_CreateRGBSurfaceFrom() to draw the new pixel data back to a surface?
Also, I want to access the pixels individually in a nested for loop over y and x, like so:
for (int y = 0; y < h; y++)
{
    for (int x = 0; x < w; x++)
    {
        // get/put the pixel data
    }
}
First have a look at SDL_Surface.
The parts you're interested in:
SDL_PixelFormat *format
int w, h
int pitch
void *pixels
What else you should know:
At position x, y (where x and y must be greater than or equal to 0 and less than w and h, respectively) the surface contains a pixel.
The pixel is described by the format field, which tells us how the pixel is organized in memory.
The Remarks section of SDL_PixelFormat gives more information on the used datatype.
The pitch field is the length of one surface row in bytes; for a tightly packed surface that is the width multiplied by the size of a pixel (BytesPerPixel), but it may include padding, so always use pitch when computing row offsets.
With the function SDL_GetRGB, one can easily convert a pixel of any format into an RGB(A) triple/quadruple.
SDL_MapRGB is the reverse of SDL_GetRGB: you specify a color as an RGB(A) triple/quadruple, and it is mapped to the closest color representable by the format parameter.
The SDL wiki provides many examples for the specific functions; I think you will find the proper examples to solve your problem.
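To make that concrete, here is a minimal sketch (assuming SDL2, a surface already locked with SDL_LockSurface where required, and a BytesPerPixel of at most 4) of reading and writing one pixel at x, y using the fields and functions described above:

#include <string.h>
#include <SDL2/SDL.h>

/* Read the pixel at (x, y) and convert it to r, g, b with SDL_GetRGB. */
void get_pixel_rgb(SDL_Surface *surf, int x, int y,
                   Uint8 *r, Uint8 *g, Uint8 *b)
{
    /* pitch is in bytes, so the pixel starts at y * pitch + x * BytesPerPixel */
    Uint8 *p = (Uint8 *)surf->pixels
             + y * surf->pitch
             + x * surf->format->BytesPerPixel;
    Uint32 pixel = 0;
    memcpy(&pixel, p, surf->format->BytesPerPixel);
    SDL_GetRGB(pixel, surf->format, r, g, b);
}

/* Map r, g, b back into the surface's format with SDL_MapRGB and store it. */
void put_pixel_rgb(SDL_Surface *surf, int x, int y,
                   Uint8 r, Uint8 g, Uint8 b)
{
    Uint8 *p = (Uint8 *)surf->pixels
             + y * surf->pitch
             + x * surf->format->BytesPerPixel;
    Uint32 pixel = SDL_MapRGB(surf->format, r, g, b);
    memcpy(p, &pixel, surf->format->BytesPerPixel);
}

If you keep your own tightly packed 24-bit RGB buffer instead, SDL_CreateRGBSurfaceFrom(buffer, w, h, 24, w * 3, rmask, gmask, bmask, 0) wraps that buffer in a surface without copying it; the mask values depend on byte order, and the buffer must stay valid for as long as the surface exists.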
I'm tasked with writing a program that applies a gray scale filter to a given image. This is done by looping over each pixel and averaging the rgb values. When I check my code, I receive an error which seems to say "ERROR: expected X, received X". Any clarifications of the error message or comments about possible bugs are appreciated.
My code loops over each pixel in each row of an array of pixels described as RGBTRIPLEs, structs made up of three uint8_t values. In the case of a non-whole-number average, the program rounds up to the nearest whole number. My answer is the code included below.
My problem comes when I run check50 cs50/problems/2021/x/filter/more, a program which tests students' programs against keys made by the course designers. The 7th test checks the program's accuracy in filtering a "complex" 3x3 image; I get the following result:
:( grayscale correctly filters more complex 3x3 image
expected "20 20 20\n50 5...", not "20 20 20\n50 5..."
I am confused as it seems that my output matches the expected. Could someone give me a hint as to what my problem is?
My code:
void grayscale(int height, int width, RGBTRIPLE image[height][width])
{
    // Loop over pixels
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            // Average out rgbs to get a corresponding greyscale value
            BYTE average_activation = ceil(((float) image[y][x].rgbtBlue + image[y][x].rgbtRed + image[y][x].rgbtGreen) / 3);
            RGBTRIPLE resultant_pixel = {average_activation, average_activation, average_activation};
            // Apply
            image[y][x] = resultant_pixel;
        }
    }
    return;
}
I think you are overcomplicating things. You should use round instead of ceil, from #include <math.h>. Also, you shouldn't build the new pixel for each pixel of the image in place; instead, define a new image of the same width and height as the old one, replace every pixel in that copy, and at the end substitute the new image for the old one.
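For reference, a minimal sketch of the round-based averaging the answer suggests, assuming CS50's RGBTRIPLE and BYTE types from bmp.h:

#include <math.h>
#include "bmp.h"   /* CS50's RGBTRIPLE and BYTE definitions (assumed) */

void grayscale(int height, int width, RGBTRIPLE image[height][width])
{
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            /* round() averages to the nearest whole number rather than always rounding up */
            BYTE avg = (BYTE) round((image[y][x].rgbtRed
                                   + image[y][x].rgbtGreen
                                   + image[y][x].rgbtBlue) / 3.0);
            image[y][x].rgbtRed   = avg;
            image[y][x].rgbtGreen = avg;
            image[y][x].rgbtBlue  = avg;
        }
    }
}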
It's quite complicated for me to explain the problem, but I will try my best.
I am making a game. There is an area of game objects and a canvas that draws every object in that area using some "draw_from" function - void draw_from(const char *obj, int x, int y, double scale) so that it looks as if a copy of that area is made on-screen.
This gives the advantage of scaling that area using the scale parameter of the draw_from() function.
However, a problem occurs when doing so. For simplicity imagine there are just two actors in that area - one that is right above the other one.
When they are scaled down, they will appear in different vertical positions, further away from each other.
I need to calculate the new correct positions for each of the objects and pass them to draw_from, but I just seem to be unable to figure out how. What is the correct way to recalculate the new positions if each of those objects is scaled down with the same value?
Here is a decent illustration of the problem, more or less (illustration omitted).
As you can tell, the draw_from function will draw the object centered on the x/y coordinates. To draw an object at 0:0 (top-left corner) you must call draw_from(obj, obj->width/2, obj->height/2, 1.0);. I'm not sure if the scaling is implemented exactly that way, but I created a function to obtain the new width and height of the scaled object:
void character_draw_get_scaled_dimensions(Actor* srcActor, double scale, double* sWidth, double* sHeight)
{
    double sCharacterWidth = 0;
    double sCharacterHeight = 0;
    if (srcActor->width >= srcActor->height)
    {
        sCharacterWidth = (double)srcActor->width * scale / 1.0;
        sCharacterHeight = sCharacterWidth * (double)srcActor->height / (double)srcActor->width;
    }
    else
    {
        sCharacterHeight = (double)srcActor->height * scale / 1.0;
        sCharacterWidth = sCharacterHeight * (double)srcActor->width / (double)srcActor->height;
    }
    if (sWidth)
        (*sWidth) = sCharacterWidth;
    if (sHeight)
        (*sHeight) = sCharacterHeight;
}
In other words, I need to maintain the distances between those objects across down-scales, and I have explained how draw_from works and, roughly, how its scaling works.
I need the correct parameters to pass to the draw_from's x and y arguments.
From that point, I think it will get just too broad if I continue elaborating further.
Not the solution I hoped for, but it is still a solution.
The more hacky and less practical (including performance-wise) solution is to draw every object onto an offscreen canvas at a scale of 1.0, then draw from that canvas onto the main canvas at any scale desired.
That way only the canvas has to be repositioned, not every object, and it gets really easy from there. I would still prefer the conventional mathematical solution.
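For completeness, a sketch of the mathematical approach the self-answer alludes to: scale every object's position about a common pivot (cx, cy), so the distances between objects shrink by the same factor as their sizes. The Actor fields and the pivot choice here are assumptions for illustration; only draw_from's signature comes from the question.

/* Scale each object's position about the pivot (cx, cy) by 'scale' and draw it.
 * actors[i]->x, ->y and ->name are assumed fields; adapt to the real struct. */
void draw_area_scaled(Actor **actors, int count, double cx, double cy, double scale)
{
    for (int i = 0; i < count; i++)
    {
        double sx = cx + ((double)actors[i]->x - cx) * scale;
        double sy = cy + ((double)actors[i]->y - cy) * scale;
        draw_from(actors[i]->name, (int)sx, (int)sy, scale);
    }
}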
I'm trying to rotate a 2D pixel matrix, but nothing actually happens.
My source is a stored bitmap[w x h x 3].
Why isn't the displayed image being rotated?
Here's the display function:
void display()
{
    uint32_t i = 0, j = 0, k = 0;
    unsigned char pixels[WINDOW_WIDTH * WINDOW_HEIGHT * 3];
    memset(pixels, 0, sizeof(pixels));
    for (j = bitmap_h - 1; j > 0; j--) {
        for (i = 0; i < bitmap_w; i++) {
            pixels[k++] = bitmap[j][i].r;
            pixels[k++] = bitmap[j][i].g;
            pixels[k++] = bitmap[j][i].b;
        }
    }
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glRotatef(90, 0, 0, 1);
    glDrawPixels(g_img.descriptor.size_w, g_img.descriptor.size_h, GL_RGB, GL_UNSIGNED_BYTE, &pixels);
    glutSwapBuffers();
}
First and foremost, glDrawPixels should not be used; the problem you have is one of the reasons why. The convoluted rules by which glDrawPixels operates are too vast to outline here. Let's just say that there's a so-called "raster position" in your window, at which glDrawPixels will place the lower-left corner of the image it draws. No transformation whatsoever is applied to the image itself.
However, transformations are applied when the raster position is set. And should the raster position, for whatever reason, lie outside the visible window, nothing gets drawn at all.
Solution: Don't use glDrawPixels. Don't use glDrawPixels. DON'T USE glDrawPixels. I repeat DON'T USE glDrawPixels. It's best you completely forget that this function actually exists in legacy OpenGL.
Use a textured quad instead. That will also transform properly.
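A minimal sketch of the textured-quad approach in legacy (fixed-function) OpenGL; the texture name tex, the bitmap dimensions, and the quad coordinates are assumptions for illustration, not the poster's actual code:

/* One-time setup: upload the RGB pixel buffer as a texture. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, bitmap_w, bitmap_h, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);

/* Every frame: draw a quad with that texture; the quad goes through the
 * normal transformation pipeline, so glRotatef now has a visible effect. */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(90.0f, 0.0f, 0.0f, 1.0f);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
glEnd();
glutSwapBuffers();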
I did something similar. I'm creating a 3D space shooter game using OpenGL/C++. For one of my levels, I have a bunch of asteroids/rocks in the background, each rotating and moving at a random speed.
I did this by taking the asteroid bitmap image and creating a texture. Then I applied the texture to a square (glBegin(GL_QUADS)). Each time I draw the square, I multiply each of the vertex coordinates (glVertex3f(x, y, z)) by a rotation matrix:
| cos θ   -sin θ |
| sin θ    cos θ |
θ is the rotation angle. I store this angle as part of my Asteroid class; each iteration I increment it by a value, depending on how fast I want the asteroid to spin. It works great.
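In code, that per-vertex rotation is just the standard 2D rotation formula; a small sketch (the function name is mine, theta is in radians):

#include <math.h>

/* Rotate (x, y) by theta radians about the origin before handing it to glVertex3f. */
void rotate_vertex(double x, double y, double theta, double *rx, double *ry)
{
    *rx = x * cos(theta) - y * sin(theta);
    *ry = x * sin(theta) + y * cos(theta);
}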
I am currently writing a program for an assignment that requires a single, perfectly solid diagonal line (so that x = y for every pixel on it) to be drawn from the upper-left corner of a standard PNM P6 file. I have had no issues with file I/O.
However, I cannot get the diagonal line to display properly. Instead of the single, solid, white line I need from the corner, I get dotted lines wrapping over the image, as shown in this picture.
Does anyone have any idea as to what is going wrong?
My function is as follows:
Image *
DiagonalWhite(Image *img)
{
    int i, j;
    for (i = 0; i < img->x; i++)
    {
        for (j = 0; j < img->y; j++)
        {
            if (i == j)
            {
                img->data[i*img->y+j].red = 255;
                img->data[i*img->y+j].green = 255;
                img->data[i*img->y+j].blue = 255;
            }
        }
    }
    return img;
}
You don't give any definition for Image *img, so this question cannot really be answered with confidence; however, I assume you are in the same class as yesterday's question, Issues writing PNM P6.
You are multiplying by the wrong dimension. img->y holds the height of the image, but to skip down i whole rows in a row-major pixel buffer you need the row length, which is img->x -- the width -- and then you add j to move right within the row:
img->data[i*img->x+j].red=255; /* x, not y */
Note: Better names for these properties would have been width and height.
Note: It's easier and quicker to loop only once, over the minimum of width and height, and set pixel (i, i) directly, rather than scanning every pixel and testing whether its x and y positions are equal; see the sketch below.
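A minimal sketch of that single-loop version, assuming the same Image layout as the question (row-major data, x holding the width and y the height):

Image *
DiagonalWhite(Image *img)
{
    int n = (img->x < img->y) ? img->x : img->y;   /* length of the diagonal */
    for (int i = 0; i < n; i++)
    {
        img->data[i * img->x + i].red   = 255;     /* row i, column i */
        img->data[i * img->x + i].green = 255;
        img->data[i * img->x + i].blue  = 255;
    }
    return img;
}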
In the book Learning OpenCV there's a question in chapter 3:
Create a two-dimensional matrix with three channels of type byte with data size 100-by-100, and initialize all the values to 0.
Use the pointer element access function cvPtr2D to point to the middle, 'green' channel. Draw the rectangle between (20,5) and (40,20).
I've managed to do the first part, but I can't get my head around the second part. Here's what I've done so far:
/*
Create a two-dimensional matrix with three channels of type byte with data size 100-by-100 and initialize all the values to 0.
Use the pointer element access function cvPtr2D to point to the middle 'green' channel. Draw the rectangle between (20,5) and (40,20).
*/
void ex10_question3(){
    CvMat* m = cvCreateMat(100, 100, CV_8UC3);
    cvSetZero(m); // initialize to 0.
    uchar* ptr = cvPtr2D(m, 0, 1); // if RGB, then start from first RGB pair, Green.
    cvAdd(m, r);
    cvRect r(20, 5, 20, 15);
    //cvPtr2D returns a pointer to a particular row element.
}
I was considering adding both the rect and the matrix, but obviously that won't work because a rect is just coordinates plus a width/height. I'm unfamiliar with cvPtr2D(). How can I visualise what the exercise wants me to do, and can anyone give me a hint in the right direction? The solution must be in C.
From my understanding, with interleaved RGB channels the 2nd channel will always be the channel of interest (array indices 1, 4, 7, ...).
So that's the direction where the winds blow from...
First of all, the problem is the C API. This API is still present for legacy reasons, but will soon become obsolete. If you are serious about OpenCV, please refer to the C++ API. The official tutorials are a great source of information.
To address your question more directly, this would be an implementation of your task in C++:
cv::Mat canvas = cv::Mat::zeros(100, 100, CV_8UC3); // create matrix of bytes, filled with 0
std::vector<cv::Mat> channels(3); // prepare storage for splitting
cv::split(canvas, channels); // split matrix into single channels
cv::rectangle(channels[1], ...); // draw rectangle [I don't remember exact params]
cv::merge(channels, canvas); // merge the channels together
If you only need to draw green rectangle, it's actually much easier:
cv::Mat canvas = cv::Mat::zeros(100, 100, CV_8UC3); // create matrix of bytes, filled with 0
cv::rectangle(canvas, ..., cv::Scalar(0,255,0)); // draw green rectangle
Edit:
To find out how to access single pixels in an image using the C++ API, please refer to this answer:
https://stackoverflow.com/a/8139210/892914
Try this code:
cout<<"Chapter 3. Task 3."<<'\n';
CvMat *Mat=cvCreateMat(100, 100, CV_8UC3);
cvZero(Mat);
for(int J=5; J<=20; J++)
for(int I=20; I<40; I++)
(*(cvPtr2D(Mat, J, I)+1))=(uchar)(255);
cvNamedWindow("Chapter 3. Task 3", CV_WINDOW_FREERATIO);
cvShowImage("Chapter 3. Task 3", Mat);
cvWaitKey(0);
cvReleaseMat(&Mat);
cvDestroyAllWindows;