How to draw a square with SDL 2.0? - c

I would like to do something simple like draw a square on the screen using C and SDL. The example that I copied is not working.
//Get window surface
SDL_Surface *screenSurface = SDL_GetWindowSurface(window);
//Fill the surface white
SDL_FillRect(screenSurface, NULL, SDL_MapRGB(screenSurface->format, 0xFF, 0xFF, 0xFF));
//create a square
SDL_FillRect(screenSurface, SDL_Rect(0,0,100,100), SDL_MapRGB(screenSurface->format, 0x00, 0x00, 0x00));
It correctly fills the screen white, but fails on the call to SDL_Rect:
error: expected expression before ‘SDL_Rect’
How do I correctly draw a square using SDL 2.0?

SDL_FillRect does not take an SDL_Rect as an argument; it takes a pointer to SDL_Rect.
//Create a square
SDL_Rect rect(0,0,100,100);
SDL_FillRect(screenSurface, &rect, SDL_MapRGB(...))
That is why when you fill with white you can pass NULL to the function. NULL is not of type SDL_Rect, but it is a pointer, so the compiler is fine with it.

As mentioned by Zach Stark, SDL_FillRect does not take an SDL_Rect as an argument. Rather, it takes a pointer to an SDL_Rect. By prefixing the variable with an ampersand (&), you pass only a reference (a pointer) to the original variable. However, I could not get Zach's code sample to work in my C program, since the constructor-style SDL_Rect rect(0,0,100,100) is C++ syntax. The following uses a different syntax for creating an SDL_Rect, and it worked for me.
// create a black square
SDL_Rect rect = {0, 0, 100, 100}; // x, y, width, height
SDL_FillRect(screenSurface, &rect, SDL_MapRGB(screenSurface->format, 0x00, 0x00, 0x00));
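If the square still does not show up, note that surface drawing typically only becomes visible once the surface is pushed to the window. A minimal sketch, assuming window is the SDL_Window the surface was obtained from:

// fill the background, draw the square, then copy the surface to the screen
SDL_FillRect(screenSurface, NULL, SDL_MapRGB(screenSurface->format, 0xFF, 0xFF, 0xFF));
SDL_Rect rect = {0, 0, 100, 100};
SDL_FillRect(screenSurface, &rect, SDL_MapRGB(screenSurface->format, 0x00, 0x00, 0x00));
SDL_UpdateWindowSurface(window);  // make the result visible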

Check out SDL_RenderDrawRect.
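A minimal sketch of that renderer-based approach, assuming an SDL_Renderer named renderer has already been created for the window:

SDL_Rect rect = {0, 0, 100, 100};                          // x, y, width, height
SDL_SetRenderDrawColor(renderer, 0xFF, 0xFF, 0xFF, 0xFF);
SDL_RenderClear(renderer);                                 // white background
SDL_SetRenderDrawColor(renderer, 0x00, 0x00, 0x00, 0xFF);
SDL_RenderDrawRect(renderer, &rect);                       // outlined square; use SDL_RenderFillRect for a filled one
SDL_RenderPresent(renderer);                               // show the result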

Related

how to draw bitmap image using putimage function in c

I'm working with embedded C software on a machine and struggling to understand how to use the 'putimage' function to load a QR code image. What I've tried to do is put an image on the machine's LCD panel screen, but I couldn't figure out how to use putimage properly.
I've learned that the 'putimage' function displays an image data array obtained from the 'getimage' function, and the two functions are declared as below.
void putimage(int left, int top, void *bitmap, int op);
void getimage(int left, int top, int right, int bottom, void *bitmap);
Since bitmap files are said to start with the bytes 0x42, 0x4D when written out as hex, I tried passing an array defined like char QRbuff[] = {0x42, 0x4D, ...} as the *bitmap parameter of putimage, but no image appeared.
If I define BMPbuff[] as below, ignoring the bitmap format (e.g. 0x42, 0x4D), a dotted image is shown.
char BMPbuff[] = {30, 0, 30, 0, 0x18, 0x24, 0x42, 0x99, 0x99, 0x42, 0x24, 0x18} ;
I have no idea how this works. I guess the bitmap format that putimage expects is not the one that starts with 0x42, 0x4D. Instead, it seems the format starts with the size of the image, {30, 0, 30, 0, ...}, though I don't know where those values come from.
I would appreciate any help on how to define the bitmap array in this case.

SDL Texture created from memory is rendering only in black and white

I am trying to port a game from DDraw to SDL2.
The original program loads the images and blits them to a backbuffer, then flips it to a primary one.
I am thinking that I could technically shortcut part of the process: grab the backbuffer in memory, turn it into a texture, and blit that to the screen. This kind of works already; the only problem is that the screen is black and white.
Here is some code. The variable holding the backbuffer is destmemarea:
if (SDL_Init(SDL_INIT_EVERYTHING) != 0) {
SDL_Log("Unable to initialize SDL: %s", SDL_GetError());
}
SDL_Window* window = NULL;
SDL_Texture *bitmapTex = NULL;
SDL_Surface *bitmapSurface = NULL;
SDL_Surface *MySurface = NULL;
SDL_DisplayMode DM;
SDL_GetCurrentDisplayMode(0, &DM);
auto Width = DM.w;
auto Height = DM.h;
window = SDL_CreateWindow("SDL Tutorial ", Width = DM.w - SCREEN_WIDTH, 32, SCREEN_WIDTH *4, SCREEN_HEIGHT, SDL_WINDOW_SHOWN);
if (window == NULL)
{
printf("Window could not be created! SDL_Error: %s\n", SDL_GetError());
}
SDL_Renderer * renderer = SDL_CreateRenderer(window, -1, 0);
int w, h;
SDL_GetRendererOutputSize(renderer, &w, &h);
SDL_Surface * image = SDL_CreateRGBSurfaceFrom( destmemarea, 640, 0, 32, 640, 0, 0, 0,0);
SDL_Texture * texture = SDL_CreateTextureFromSurface(renderer, image);
SDL_RenderCopy(renderer, texture, NULL, NULL);
SDL_RenderPresent(renderer);
SDL_Delay(10000);
SDL_DestroyTexture(texture);
SDL_DestroyRenderer(renderer);
SDL_DestroyWindow(window);
}
Not sure if this helps, but this is what is being used for DDraw, by the looks of it...
dd.dwWidth = 768;
dd.lPitch = 768;
dd.dwSize = 108;
dd.dwFlags = DDSD_PIXELFORMAT|DDSD_PITCH|DDSD_WIDTH|DDSD_HEIGHT|DDSD_CAPS;
dd.ddsCaps.dwCaps = DDSCAPS_SYSTEMMEMORY|DDSCAPS_OFFSCREENPLAIN;
dd.dwHeight = 656;
dd.ddpfPixelFormat.dwSize = 32;
So, I'm not 100% sure I understand what you are trying to do, but I have a few assumptions.
You said that you're porting your codebase from DDraw, so I assume that the backbuffer you mention is an internal backbuffer that you allocate yourself, and that the rest of your application renders into it.
If I am correct in this assumption, then your current approach is what you need, but you have to pass the correct parameters to SDL_CreateRGBSurfaceFrom:
width and height are... width and height in pixels
depth is the number of bits in a single pixel. This depends on the rest of your rendering code that writes to your memory buffer. If we assume you're doing standard RGBA, where each channel is 8 bits, it would be 32.
pitch is the size in bytes for a single row in your surface - should be equal to width * (depth / 8).
the 4 masks, Rmask, Gmask, Bmask, and Amask, describe how each of your depth-sized pixels distributes its channels. Again, this depends on how you render to your memory buffer and on the endianness of your target platform. From the documentation, two possible standard layouts:
#if SDL_BYTEORDER == SDL_BIG_ENDIAN
rmask = 0xff000000;
gmask = 0x00ff0000;
bmask = 0x0000ff00;
amask = 0x000000ff;
#else
rmask = 0x000000ff;
gmask = 0x0000ff00;
bmask = 0x00ff0000;
amask = 0xff000000;
#endif
Don't forget to free your surface by calling SDL_FreeSurface().
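Putting those parameters together, and only as a sketch since the real layout of destmemarea isn't shown here: if the backbuffer were, say, 640x480 pixels of 32-bit RGBA, the call could look like this (the masks are the little-endian layout from above; adjust all of these numbers to your actual buffer):

int width = 640, height = 480, depth = 32;       // assumed buffer dimensions
int pitch = width * (depth / 8);                 // bytes per row
SDL_Surface *image = SDL_CreateRGBSurfaceFrom(destmemarea, width, height, depth, pitch,
                                              0x000000ff,   /* Rmask */
                                              0x0000ff00,   /* Gmask */
                                              0x00ff0000,   /* Bmask */
                                              0xff000000);  /* Amask */
if (image == NULL) {
    SDL_Log("SDL_CreateRGBSurfaceFrom failed: %s", SDL_GetError());
}
// ... create the texture, render, then release the surface
SDL_FreeSurface(image);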
With all that said... I think you are approaching your problem from the wrong angle.
As I stated in my comment, SDL handles double buffering for you. Instead of having custom code that renders to a buffer in memory, then trying to create a surface from that memory, rendering it to SDL's backbuffer, and calling present... you should skip the middleman and draw directly to SDL's backbuffer.
This is done through the various SDL render functions, of which SDL_RenderCopy is one.
Your render loop should basically do 3 things (a minimal sketch follows this list):
Call SDL_RenderClear()
Loop over every object that you want to present to the screen and use one of the SDL render functions; in the most common case of an image, that would be SDL_RenderCopy. This means that, throughout your codebase, you load your images, create an SDL_Surface and SDL_Texture for each of them, keep those around, and on every frame call SDL_RenderCopy or SDL_RenderCopyEx.
Finally, call SDL_RenderPresent exactly once per frame. This will swap the buffers and present your image to the screen.
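A minimal sketch of such a loop, assuming renderer and a texture obtained as described above (the event handling is only there to keep the loop responsive):

int running = 1;
while (running) {
    SDL_Event e;
    while (SDL_PollEvent(&e)) {
        if (e.type == SDL_QUIT) running = 0;
    }
    SDL_RenderClear(renderer);                      // 1. clear the backbuffer
    SDL_RenderCopy(renderer, texture, NULL, NULL);  // 2. draw every object
    SDL_RenderPresent(renderer);                    // 3. swap buffers, once per frame
}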

When processing an image, it gets weird colors in GTK

Hello people from Stack Overflow, what I want to do is process an image pixel by pixel to make it darker. The idea is really simple; I have to do something like this:
R *= factor
G *= factor
B *= factor
Where "factor" is a float number between 0 and 1, and R, G, B are the Rgb numbers for each pixel. To do so, I load an "RGB" file that has three numbers for each pixel from 0 to 255 to an array of char pointers.
char *imagen1, *imagen;
int resx, resy; //resolution
imagen1 = malloc....;
//here I load a normal image buffer to imagen1
float factor = 0.5f; // it can be any number between 0 and 1
for(unsigned int i=0; i< 3*resx*resy; i++)
imagen[i] = (char) (((int) imagen1[i])*factor);
gtk_init (&argc, &argv);
window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
g_signal_connect (window, "destroy", G_CALLBACK (gtk_main_quit), NULL);
pixbuf = gdk_pixbuf_new_from_data (buffer, GDK_COLORSPACE_RGB, FALSE, 8,
resx, resy, (resx)*3, NULL, NULL);
image = gtk_image_new_from_pixbuf (pixbuf);
gtk_container_add(GTK_CONTAINER (window), image);
pixbuf = gdk_pixbuf_new_from_data (imagen, GDK_COLORSPACE_RGB, FALSE, 8,
resx, resy, (resx)*3, NULL, NULL);
gtk_image_set_from_pixbuf(image, pixbuf);
Ignore it if the GTK part is not properly written; it displays "imagen". If factor is 1 the image is displayed correctly, with real colors. The problem is that when I use a number between 0 and 1, the displayed image gets very weird colors, as if it were "saturated" or the color depth were worse. The further factor is from 1, the worse the image gets. I don't know why this happens; I thought GTK normalized the RGB values and that reduced the color depth, but I tried adding some white (255, 255, 255) and black (0, 0, 0) pixels and the problem persists. I would like to know what I am doing wrong. Sorry for my English, and thank you in advance!
Each colour component of a pixel is an 8-bit char, and plain char is typically signed. So a component value above 127 is already negative before you cast it to an int; multiplying by the float then produces a negative result, and truncating that back into a char stores a value that looks nothing like the 0-255 component you actually want.
You'll want to work with unsigned char (or cast through unsigned char) so the components stay in the 0-255 range before and after the multiplication.
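A minimal sketch of the darkening loop with unsigned buffers, assuming imagen1 already holds the 3*resx*resy source bytes and imagen has been allocated to the same size:

unsigned char *src = (unsigned char *) imagen1;
unsigned char *dst = (unsigned char *) imagen;
float factor = 0.5f;   /* any value between 0 and 1 */
for (unsigned int i = 0; i < 3u * resx * resy; i++) {
    /* the byte is read as 0..255, so the scaled result stays in range */
    dst[i] = (unsigned char) (src[i] * factor);
}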

How to use unsigned short in an opengl shader?

I'm trying to upload a texture with unsigned shorts in a shader but it's not working.
I have tried the following:
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, vbt[1]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 640, 480, 0, GL_RED, GL_UNSIGNED_SHORT, kinect_depth);
glUniform1i(ptexture1, 1);
GLenum ErrorCheckValue = glGetError();
I know I'm binding the texture correctly because I get some results by using
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, vbt[1]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 640, 480, 0,
GL_RG, GL_UNSIGNED_BYTE, kinect_depth);
glUniform1i(ptexture1, 1);
GLenum ErrorCheckValue = glGetError();
In particular, I get part of my values in the red channel. I would like to upload the texture as an unsigned byte or as a float. However, I can't manage to get the glTexImage2D call right. Also, is it possible to do something similar using a depth texture? I would like to do some operations on the depth information I get from a Kinect and display it.
Your arguments to glTexImage2D are inconsistent. The 3rd argument (GL_RGB) suggests that you want a 3 component texture, the 7th (GL_RED) suggests a one-component texture. Then your other attempt uses GL_RG, which suggests 2 components.
You need to use an internal texture format that stores unsigned shorts, like GL_RGB16UI.
If you want one component, your call would look like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, 640, 480, 0, GL_RED_INTEGER, GL_UNSIGNED_SHORT, kinect_depth);
If you want three components:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16UI, 640, 480, 0, GL_RGB_INTEGER, GL_UNSIGNED_SHORT, kinect_depth);
You also need to make sure that the types used in your shader for sampling the texture match the type of the data stored in the texture. In this example, since you use a 2D texture containing unsigned integer values, your sampler type should be usampler2D, and you want to store the result of the sampling operation (result of texture() call in the shader) in a variable of type uvec4. (paragraph added based on suggestion by Andon)
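For completeness, a sketch of the full setup for a one-component integer texture (the texture object tex and the 640x480 dimensions are assumptions; kinect_depth and ptexture1 are taken from the question). Integer textures cannot be linearly filtered, so the filters have to be set to GL_NEAREST for sampling to return useful values:

GLuint tex;
glGenTextures(1, &tex);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, tex);
/* integer textures must use nearest filtering */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
/* one 16-bit unsigned integer component per texel */
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, 640, 480, 0,
             GL_RED_INTEGER, GL_UNSIGNED_SHORT, kinect_depth);
glUniform1i(ptexture1, 1);   /* sampler uniform from the question, bound to unit 1 */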
Some more background on the format/type arguments of glTexImage2D, since this is a source of fairly frequent misunderstandings:
The 3rd argument (internalFormat) is the format of the data that your OpenGL implementation will store in the texture (or at least the closest possible if the hardware does not support the exact format), and that will be used when you sample from the texture.
The last 3 arguments (format, type, data) belong together. format and type describe what is in data, i.e. they describe the data you pass into the glTexImage2D call.
It is generally a good idea to keep the two formats matched. As in this case, the data you pass in is GL_UNSIGNED_SHORT, and the internal format GL_R16UI contains unsigned short values. In OpenGL ES the internal format is required to match format/type. Full OpenGL does the conversion if necessary, which is undesirable for performance reasons, and also frequently not what you want, because the precision of the data in the texture won't be the same as the precision of your original data.

How to write out a greyscale cairo surface to PNG

I have a cairo_surface_t of format CAIRO_FORMAT_A8. I want to write out the surface as a greyscale image, so every pixel has a single byte value of type uchar.
If I use cairo_surface_write_to_png directly on the CAIRO_FORMAT_A8 surface, all I get is an all-black image. I think this is how cairo internally treats the A8 surface - as alpha values, not as greyscale data. I want a single greyscale image, however.
It would be enough if somebody could point out how to copy the A8 data to all 3 channels of an RGB24 image.
Any help appreciated!
Untested code below. The idea is to create an ARGB-surface and "copy" the A8 surface there via cairo_mask_surface(). If the colors are "swapped", swap the two cairo_set_source_rgb() calls.
cairo_surface_t *s = YOUR_A8_SURFACE;
cairo_t *cr = cairo_create(s);
cairo_push_group_with_content(cr, CAIRO_CONTENT_COLOR_ALPHA);
cairo_set_source_rgb(cr, 1, 1, 1);
cairo_paint(cr);
cairo_set_source_rgb(cr, 0, 0, 0);
cairo_mask_surface(cr, cairo_get_target(cr), 0, 0);
cairo_surface_write_to_png(cairo_get_group_target(cr), "/tmp/foo.png");
/* If you want to continue using the context:
cairo_pattern_destroy(cairo_pop_group(cr)); */
cairo_destroy(cr);
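If masking does not produce the exact greyscale you want, a hedged alternative is to copy the A8 bytes into an RGB24 surface by hand. A sketch, assuming the A8 surface is an image surface whose pixel data is accessible (uint32_t comes from <stdint.h>):

cairo_surface_t *a8 = YOUR_A8_SURFACE;
int w = cairo_image_surface_get_width(a8);
int h = cairo_image_surface_get_height(a8);
cairo_surface_t *rgb = cairo_image_surface_create(CAIRO_FORMAT_RGB24, w, h);

cairo_surface_flush(a8);
cairo_surface_flush(rgb);
unsigned char *src = cairo_image_surface_get_data(a8);
unsigned char *dst = cairo_image_surface_get_data(rgb);
int src_stride = cairo_image_surface_get_stride(a8);
int dst_stride = cairo_image_surface_get_stride(rgb);

for (int y = 0; y < h; y++) {
    uint32_t *row = (uint32_t *) (dst + y * dst_stride);
    for (int x = 0; x < w; x++) {
        unsigned char v = src[y * src_stride + x];
        /* replicate the alpha byte into the R, G and B channels */
        row[x] = (v << 16) | (v << 8) | v;
    }
}
cairo_surface_mark_dirty(rgb);
cairo_surface_write_to_png(rgb, "/tmp/foo.png");
cairo_surface_destroy(rgb);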
