Drawing with high-precision alpha blending - C

I need to blend together about 1 million semi-transparent rectangles, while being able to control transparency in increments of 1e-6.
For example, if all 1 million rectangles were drawn on top of each other, I want the resulting alpha value for those pixels to be exactly 1.0 (0.5 for 500,000 rectangles, and so on).
Using the cairo library, it would ideally look like this:
const int NB_RECT = 1000000;
// [...]
cairo_set_operator(cr, CAIRO_OPERATOR_ADD);
cairo_set_source_rgba(cr, 1.0, 0, 0, 1.0 / NB_RECT);
for (int i = 0; i < NB_RECT; i++) {
    // [...]
    cairo_rectangle(cr, x, y, w, h);
    cairo_fill(cr);
}
// [...]
This does not work: below alpha ≈ 0.01, the drawing commands seem to be simply discarded (probably due to the internal representation of colors inside cairo, where an alpha of 1e-6 is far below the smallest 8-bit step of 1/255 and rounds to zero).
Could you suggest a drawing library that handles high-precision transparency, or a possible workaround?
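One possible workaround (a sketch of my own, not a cairo feature): accumulate per-pixel coverage in a double-precision buffer, where a million additions of 1e-6 stay accurate to far better than 1e-6, and quantize to 8 bits only once at the end. The dimensions and helper names below are illustrative:

#include <stdint.h>

#define W 800
#define H 600

// Add one rectangle's alpha contribution to a double accumulator;
// doubles keep 1e6 additions of 1e-6 accurate to roughly 1e-10.
void add_rect(double *acc, int x, int y, int w, int h, double alpha)
{
    for (int j = (y < 0 ? 0 : y); j < y + h && j < H; j++)
        for (int i = (x < 0 ? 0 : x); i < x + w && i < W; i++)
            acc[j * W + i] += alpha; // never clamped until the very end
}

// Quantize the accumulated alpha to 8 bits once, after all rectangles.
void to_bytes(const double *acc, uint8_t *out)
{
    for (int k = 0; k < W * H; k++) {
        double a = acc[k] > 1.0 ? 1.0 : acc[k];
        out[k] = (uint8_t)(a * 255.0 + 0.5);
    }
}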

Related

GLFW3/GLU 3D world space using static pipeline

In previous projects, I enabled depth testing and used gluPerspective, called once on startup, to set up a 3D space. Currently, I am rendering a square between -0.5 and 0.5 with 0.0 as its origin; after the 3D world has been initialised, the code below causes a square to cover the entire screen:
glBegin(GL_QUADS);
{
    glVertex3f(-0.5, -0.5, 0);
    glVertex3f(-0.5, 0.5, 0);
    glVertex3f(0.5, 0.5, 0);
    glVertex3f(0.5, -0.5, 0);
}
glEnd();
What I am looking for is a way to set the perspective so that shapes are rendered in world space. For example, the snippet below should cause a 200x200 square to be rendered:
glBegin(GL_QUADS);
{
    glVertex3f(-100, -100, 0);
    glVertex3f(-100, 100, 0);
    glVertex3f(100, 100, 0);
    glVertex3f(100, -100, 0);
}
glEnd();
The code below is what I am currently using to initialise a 3D world.
// WINDOW_WIDTH = 1600, WINDOW_HEIGHT = 900
glViewport(0, 0, WINDOW_WIDTH, WINDOW_HEIGHT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(47, WINDOW_WIDTH / WINDOW_HEIGHT, 0.01, 1000);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glEnable(GL_DEPTH_TEST);
Have I missed any steps in setting up a 3D space, and if gluPerspective is the way to do this, any suggestions why it is not working?
I am able to achieve this in 2D using ortho, but it is important that the world is 3D.
Everything is being written in C using OpenGL and GLU up to 1.3, with my GLFW setup identical to this. Due to technical constraints, I am unable to use the modern pipeline.
First of all, the result of WINDOW_WIDTH / WINDOW_HEIGHT is 1, because WINDOW_WIDTH and WINDOW_HEIGHT are integral values. You have to perform a floating point division ((float)WINDOW_WIDTH / WINDOW_HEIGHT) to compute the correct aspect ratio.
In a perspective projection, the projection matrix describes the mapping from 3D points in the world, as they are seen from a pinhole camera, to 2D points on the viewport.
The projected size of an object on the viewport depends on its distance to the camera. The different sizes at different distances (depths) cause the perspective effect. The perspective projection matrix defines a viewing frustum.
The ratio of projected size and the distance to the camera depends on the field of view angle:
maxDim / cameraZ = tan(FOV / 2) * 2
So there is exactly one distance at which an object with a length of 200 covers 200 pixels. For instance, if you have a field of view angle of 90°, then an object at a z distance of half the window height (height/2) with a vertical size of 200 covers 200 pixels (vertically), because tan(90° / 2) * 2 = 2.
When you use gluPerspective, then you define the field of view angle along the y axis. The field of view along the x axis depends on the aspect ratio. If the aspect ratio is set correctly, then the projection of a square which is parallel to the xy plane of the view is still a square.
Note that if you used an orthographic projection instead, the size of the object would be independent of the distance.
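For completeness, a minimal sketch (mine, not from the question) that picks the one depth where world units map 1:1 to pixels, using the question's 47° field of view and window size:

#include <math.h>

// Visible height at depth z is 2 * z * tan(fovY / 2); solve for z so the
// visible height equals WINDOW_HEIGHT pixels.
float fovY = 47.0f;
float z = (WINDOW_HEIGHT / 2.0f) / tanf(fovY * (float)M_PI / 360.0f);

glViewport(0, 0, WINDOW_WIDTH, WINDOW_HEIGHT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(fovY, (float)WINDOW_WIDTH / WINDOW_HEIGHT, 0.1, 2.0 * z);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -z); // geometry drawn at z=0 now renders 1:1 in pixels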

What is the most efficient way to put multiple colours on a window, especially in frame-by-frame format?

I am making a game with C and X11. I've been trying for quite a while to find a way to put different coloured pixels on a window, frame by frame. I've seen fully developed games get thousands of frames per second. What is the most efficient way of doing this?
I have seen 2-coloured bitmaps with XImages, allocating 256 colours on a black-to-white fade, and using XPutPixel with XImages (though I wasn't able to figure out how to properly create an XImage that could later have pixels put on it).
I have made this for loop that creates a random image, but it is, obviously, pixel-by-pixel instead of frame-by-frame and takes 18 seconds to render one entire frame.
XColor pixel;
for (int x = 0; x < currentWindowWidth; x++) {
    for (int y = 0; y < currentWindowHeight; y++) {
        pixel.red = rand() % 256 * 256;   // Scaling an 8-bit colour up to X11's 16-bit range
        pixel.green = rand() % 256 * 256;
        pixel.blue = rand() % 256 * 256;
        XAllocColor(display, XDefaultColormap(display, screenNumber), &pixel); // This probably takes the most time,
        XSetForeground(display, graphics, pixel.pixel); // as does this.
        XDrawPoint(display, window, graphics, x, y);
    }
}
After three or so more weeks of testing things off and on, I finally figured out how to do it, and it was rather simple. As I said in the OP, XAllocColor and XSetForeground take quite a bit of time (relatively) to work. XDrawPoint also was slow, as it does more than just put a pixel at a point on an image.
First I tested how Xlib's colour format works (for the unsigned long represented as pixel.pixel, which was what I needed XAllocColor for), and it appears to have 100% red at 16711680 (0xFF0000), 100% green at 65280 (0x00FF00), and 100% blue at 255 (0x0000FF), which is obviously a pattern. I found the maximum to be 50% of all colours, 4286019447, which is a solid grey.
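That pattern corresponds to a 24-bit TrueColor pixel: red in bits 16-23, green in bits 8-15, blue in bits 0-7. Assuming such a visual (which the XMatchVisualInfo check below ensures), a pixel value can be composed directly, with no XAllocColor round-trip; a small sketch:

// Compose a TrueColor pixel directly (assumes the 24-bit TrueColor visual
// matched below). 16711680 == 0xFF0000, 65280 == 0x00FF00, 255 == 0x0000FF.
unsigned long rgb_pixel(unsigned char r, unsigned char g, unsigned char b)
{
    return ((unsigned long)r << 16) | ((unsigned long)g << 8) | (unsigned long)b;
}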
Next, I made sure my XVisualInfo would be supported by my system with a test using XMatchVisualInfo([expected visual info values]). That ensures that the depth I will use and the TrueColor class work.
Finally, I made an XImage copied from the root window's image for manipulation. I used XPutPixel for each pixel on the window and set it to a random value between 0 and 4286019448, creating the random image. I then used XPutImage to paste the image to the window.
Here's the final code:
if (!XMatchVisualInfo(display, screenNumber, 24, TrueColor, &visualInfo)) {
    exit(0);
}
frameImage = XGetImage(display, rootWindow, 0, 0, screenWidth, screenHeight, AllPlanes, ZPixmap);
while (1) {
    for (unsigned short x = 0; x < currentWindowWidth; x += pixelSize) {
        for (unsigned short y = 0; y < currentWindowHeight; y += pixelSize) {
            XPutPixel(frameImage, x, y, rand() % 4286019447);
        }
    }
    XPutImage(display, window, graphics, frameImage, 0, 0, 0, 0, currentWindowWidth, currentWindowHeight);
}
This puts a random image on the screen, at a stable 140 frames per second on fullscreen. I don't necessarily know if this is the most efficient way, but it works way better than anything else I've tried. Let me know if there is any way to make it better.
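As a note on the part the question struggled with (creating an XImage from scratch instead of copying the root window's image): XCreateImage can build one around a malloc'd buffer. A sketch under the same 24-bit TrueColor assumption; depth-24 images are normally padded to 32 bits per pixel:

#include <stdlib.h>

char *data = malloc((size_t)screenWidth * screenHeight * 4);
XImage *frameImage = XCreateImage(display, visualInfo.visual, 24, ZPixmap,
                                  0, data, screenWidth, screenHeight, 32, 0);
// bitmap_pad = 32 and bytes_per_line = 0 let Xlib compute the row stride;
// XDestroyImage(frameImage) will also free the malloc'd buffer.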
Thousands of frames per second is not possible. The monitor frequency is about 100 Hz, i.e. 100 cycles per second, and that is roughly the maximum useful frame rate. This is still very fast; the human eye wouldn't pick up faster frame rates.
The monitor response time is about 5ms, so any single point on the screen cannot be refreshed more than 200 times per second.
8 bits is 1 byte, so an 8-bit image uses one byte per pixel, and each pixel is a value from 0 to 255. The pixel doesn't have red, green, and blue components. Instead, each pixel is an index into a color table, which holds 256 colors. There is a trick where you keep the pixels the same and change the color table; this makes the image fade in and out or do other weird things.
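A small illustrative sketch of that color-table trick (a hypothetical helper, independent of any particular API): rotate the palette one entry per frame while the pixel indices stay fixed, and the whole image animates at once:

#include <stdint.h>
#include <string.h>

// Rotate a 256-entry R,G,B palette by one slot; the 8-bit pixel data
// itself never changes, yet every pixel's displayed colour shifts.
void cycle_palette(uint8_t pal[256][3])
{
    uint8_t first[3];
    memcpy(first, pal[0], 3);
    memmove(pal[0], pal[1], 255 * 3);
    memcpy(pal[255], first, 3);
}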
In a 24-bit image, each pixel has red, green, and blue components. Each color is 1 byte, so each pixel is 3 bytes, or 24 bits.
uint8_t red = rand() % 256;
uint8_t grn = rand() % 256;
uint8_t blu = rand() % 256;
A 16-bit image uses an odd format to store red, green, and blue. 16 is not divisible by 3, so oftentimes two colors are assigned 5 bits each and the third color gets 6 bits (commonly 5 for red, 6 for green, 5 for blue). Then you have to fit these colors into one uint16_t-sized pixel. It's probably not worth it to explore this.
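For reference, a sketch of that common 5-6-5 packing (my illustration, not from the original answer):

#include <stdint.h>

// Pack 8-bit R,G,B into a 16-bit 5-6-5 pixel: red 5 bits, green 6, blue 5.
uint16_t pack_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}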
The slowness of your routine comes from painting one pixel at a time. You should paint into a buffer instead, and render the buffer once per frame. You might consider using other frameworks like SDL. Other games may use things like OpenGL, which takes advantage of GPU optimization for matrix operations etc.
You must use a GPU. GPUs have a highly parallel architecture optimized for graphics (hence the name). To access the GPU you will use an API like OpenGL or Vulkan or make use of a Game Engine.

OpenGL: Wrapping texture around cylinder

I am trying to add textures to a cylinder to draw a stone well. I'm starting with a cylinder and then mapping a stone texture I found here but am getting some weird results. Here is the function I am using:
void draw_well(double x, double y, double z,
               double dx, double dy, double dz,
               double th)
{
    // Set specular color to white
    float white[] = {1,1,1,1};
    float black[] = {0,0,0,1};
    glMaterialfv(GL_FRONT_AND_BACK, GL_SHININESS, shinyvec);
    glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, white);
    glMaterialfv(GL_FRONT_AND_BACK, GL_EMISSION, black);
    glPushMatrix();
    // Offset
    glTranslated(x, y, z);
    glRotated(th, 0, 1, 0);
    glScaled(dx, dy, dz);
    // Enable textures
    glEnable(GL_TEXTURE_2D);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    glBindTexture(GL_TEXTURE_2D, texture[0]); // Stone texture
    glBegin(GL_QUAD_STRIP);
    for (int i = 0; i <= 359; i++)
    {
        glNormal3d(Cos(i), 1, Sin(i));
        glTexCoord2f(0,0); glVertex3f(Cos(i), -1, Sin(i));
        glTexCoord2f(0,1); glVertex3f(Cos(i), 1, Sin(i));
        glTexCoord2f(1,1); glVertex3f(Cos(i + 1), 1, Sin(i + 1));
        glTexCoord2f(1,0); glVertex3f(Cos(i + 1), -1, Sin(i + 1));
    }
    glEnd();
    glPopMatrix();
    glDisable(GL_TEXTURE_2D);
}

// Later down in the display function
draw_well(0, 0, 0, 1, 1, 1, 0);
and the output I receive is
I'm still pretty new to OpenGL and more specifically textures so my understanding is pretty limited. My thought process here is that I would map the texture to each QUAD used to make the cylinder, but clearly I am doing something wrong. Any explanation on what is causing this weird output and how to fix it would be greatly appreciated.
There are possibly three main issues with your draw routine: quad-strip indexing, texture coordinates repeating too often, and possibly incorrect usage of the trig functions.
Trigonometric functions usually accept values representing angles expressed in radians, not degrees. Double check what parameters the Sin and Cos functions you are using expect.
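If your Sin and Cos are custom degree-based helpers (an assumption suggested by the capitalised names and the 0..359 loop), they would wrap the radian-based functions from math.h like this:

#include <math.h>

// Hypothetical degree-based wrappers; plain sin()/cos() expect radians.
#define Sin(deg) sin((deg) * M_PI / 180.0)
#define Cos(deg) cos((deg) * M_PI / 180.0)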
Quad-strip indexing is incorrect. Indexing should go like this...
Notice how the quad is defined in a clockwise fashion, yet the diagonal vertices are defined sequentially. You are defining the quad as v0, v1, v3, v2 instead of v0, v1, v2, v3, so swap the last two vertices of the four. This also leads to another error: the vertices are not shared correctly. You are duplicating them along each vertical edge, since you draw the same pair of vertices (i+1) at the end of one iteration as at the start of the next (once i has been incremented by 1).
Texture coordinates run from 0 to 1 across each quad, which means you are defining a cylinder that is segmented 360 times with the texture repeated 360 times around it. I'm assuming the texture should be mapped 1:1 to the cylinder and not repeated?
Here is some example code using what you provided. I have reduced the number of segments down to 64; if you wish to still have 360, then amend numberOfSegments accordingly.
float pi = 3.141592654f;
unsigned int numberOfSegments = 64;
float angleIncrement = (2.0f * pi) / static_cast<float>(numberOfSegments);
float textureCoordinateIncrement = 1.0f / static_cast<float>(numberOfSegments);

glBegin(GL_QUAD_STRIP);
for (unsigned int i = 0; i <= numberOfSegments; ++i)
{
    float c = cos(angleIncrement * i);
    float s = sin(angleIncrement * i);
    glTexCoord2f(textureCoordinateIncrement * i, 0);    glVertex3f(c, -1.0f, s);
    glTexCoord2f(textureCoordinateIncrement * i, 1.0f); glVertex3f(c, 1.0f, s);
}
glEnd();
N.B. You are using an old version of OpenGL (the use of glBegin/glVertex etc.).
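One detail not raised in the original answer (my addition): the question's glNormal3d(Cos(i), 1, Sin(i)) produces unnormalized normals tilted 45° upward. For the curved wall of a cylinder, the outward unit normal is horizontal, so inside the loop above it would be:

float c = cos(angleIncrement * i);
float s = sin(angleIncrement * i);
glNormal3f(c, 0.0f, s); // horizontal unit-length outward normal, shared by both ring vertices
glTexCoord2f(textureCoordinateIncrement * i, 0);    glVertex3f(c, -1.0f, s);
glTexCoord2f(textureCoordinateIncrement * i, 1.0f); glVertex3f(c, 1.0f, s);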

a better way to draw grid as background

I want to draw a grid as in the picture below.
I know a trick to draw this: draw 6 vertical and 6 horizontal lines instead of 6 x 6 small rectangles.
But if I want a smaller zoom (zoom for viewing a picture), there are many lines. For example, say my view window is of size 800 x 600 and is viewing a picture of size 400 x 300 (so the zoom-in factor is 2). There will be 400 x 300 rectangles of size 2 x 2 (each rectangle represents a pixel).
If I draw each cell (in a loop, say 400 x 300 times), it is very slow (especially when I move the window...).
Using the trick solves the problem.
But I am still curious whether there is a better way to do this task in winapi / GDI(+). For example, a function like DrawGrid(HDC hdc, int x, int y, int numOfCellsH, int numOfCellsV)?
A further question: if I don't resize or move the window, and I don't change the zoom, the grid won't change. So even if I update the picture continuously (screen capture), it is unnecessary to redraw the grid. But I use StretchBlt and BitBlt to capture the screen (to a memory DC, then to the window's hdc), and if I don't redraw the grid in the memory DC, the grid disappears. Is there a way to make the grid stick there while still updating the bitmap of the screen capture?
PS: This is not a real issue, since I only want to draw the grid when the zoom is at least 10 (so each cell is of size 10 x 10 or larger). In this case, there will be at most 100 + 100 = 200 lines to draw, and that is fast. I am just curious whether there is a faster way.
Have you considered using CreateDIBSection? This will give you a pointer to the bits so that you can manipulate the R, G, B values rapidly. For example, the following creates a 256x256x24 bitmap and paints a green grid at 64-pixel intervals:
BITMAPINFO BI = {0};
BITMAPINFOHEADER &BIH = BI.bmiHeader;
BIH.biSize = sizeof(BITMAPINFOHEADER);
BIH.biBitCount = 24;
BIH.biWidth = 256;
BIH.biHeight = 256;
BIH.biPlanes = 1;
LPBYTE pBits = NULL;
HBITMAP hBitmap = CreateDIBSection(NULL, &BI, DIB_RGB_COLORS, (void**) &pBits, NULL, 0);
LPBYTE pDst = pBits;
for (int y = 0; y < 256; y++)
{
    for (int x = 0; x < 256; x++)
    {
        BYTE R = 0;
        BYTE G = 0;
        BYTE B = 0;
        if (x % 64 == 0) G = 255;
        if (y % 64 == 0) G = 255;
        *pDst++ = B;
        *pDst++ = G;
        *pDst++ = R;
    }
}
HDC hMemDC = CreateCompatibleDC(NULL);
HGDIOBJ hOld = SelectObject(hMemDC, hBitmap);
BitBlt(hdc, 0, 0, 256, 256, hMemDC, 0, 0, SRCCOPY);
SelectObject(hMemDC, hOld);
DeleteDC(hMemDC);
DeleteObject(hBitmap);
Generally speaking, the major limiting factors for these kinds of graphics operations are the fill rate and the number of function calls.
The fill rate is how fast the machine can change the pixel values. In general, blits (copying rectangular areas) are very fast because they're highly optimized and designed to touch memory in a cache friendly order. But a blit touches all the pixels in that region. If you're going to overdraw or if most of those pixels don't really need to change, then it's likely more efficient to draw just the pixels you need, even if that's not quite as cache-friendly.
If you're drawing n primitives by making n things, then that might be a limiting factor as n gets large, and it could make sense to look for an API call that lets you draw several (or all) of the lines at once.
Your "trick" demonstrates both of these optimizations. Drawing 20 lines is fewer calls than 100 rectangles, and it touches far fewer pixels. And as the window grows or your grid size decreases, the lines approach will increase linearly both in number of calls and in pixels touched while the rectangle method will grow as n^2.
I don't think you can do any better when it comes to touching the minimum number of pixels. But I suppose the number of function calls might become a factor if you're drawing very many lines. I don't know GDI+, but in plain GDI, there are functions like Polyline and PolyPolyline which will let you draw several lines in one call.
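A sketch of that idea in plain GDI, shaped like the DrawGrid helper the question imagines (the cellSize parameter is my addition; one PolyPolyline call draws every line):

#include <windows.h>
#include <stdlib.h>

void DrawGrid(HDC hdc, int x, int y, int numOfCellsH, int numOfCellsV, int cellSize)
{
    int n = (numOfCellsH + 1) + (numOfCellsV + 1);       // total line count
    POINT *pts = (POINT *)malloc(sizeof(POINT) * n * 2); // two endpoints per line
    DWORD *counts = (DWORD *)malloc(sizeof(DWORD) * n);
    int k = 0;
    for (int i = 0; i <= numOfCellsH; i++) {             // vertical lines
        pts[k].x = x + i * cellSize; pts[k].y = y;                           k++;
        pts[k].x = x + i * cellSize; pts[k].y = y + numOfCellsV * cellSize;  k++;
    }
    for (int j = 0; j <= numOfCellsV; j++) {             // horizontal lines
        pts[k].x = x;                           pts[k].y = y + j * cellSize; k++;
        pts[k].x = x + numOfCellsH * cellSize;  pts[k].y = y + j * cellSize; k++;
    }
    for (int i = 0; i < n; i++) counts[i] = 2;           // each polyline is one segment
    PolyPolyline(hdc, pts, counts, n);                   // a single call for the whole grid
    free(pts);
    free(counts);
}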

OpenGL - drawing 2D polygons shapes with texture

I am trying to make a few effects in a C+GL game. So far I draw all my sprites as a quad, and it works.
However, I am trying to make a large ring appear at times, with a texture following that ring, as it takes less memory than a quad with the ring texture inside.
The type of ring I want to make is not a round-shaped GL mesh ring (the "tube" type) but a "paper" 2D ring. That way I can modify the "width" of the ring, getting more of the effect than a simple quad+ring texture.
So far all my attempts have been...kind of ridiculous, as I don't understand GL's coordinates too well (and I can't really understand the available documentation...I am just a designer with no coder help or background. A n00b, basically).
glBegin(GL_POLYGON);
for (i = 0; i < 360; i += 10) {
    glTexCoord2f(0, 0);
    glVertex2f(Cos(i)*(H-10), Sin(i)*H);
    glTexCoord2f(0, HP);
    glVertex2f(Sin(i)*(H-10), Cos(i)*(H-10));
    glTexCoord2f(WP, HP);
    glVertex2f(Cos(i)*H, Sin(i)*(H-10));
    glTexCoord2f(WP, 0);
    glVertex2f(Sin(i)*H, Cos(i)*H);
}
glEnd();
This is my last attempt, and it seems to generate a "sunburst" from the right edge of the circle instead of a ring. It's an amusing effect but definitely not what I want. Other results included the circle looking exactly the same as the quad textured (aka drawing a sprite literally) or something that looked like a pop-art filter, by working on this train of thought.
Seems like my logic here is entirely flawed, so, what would be the easiest way to obtain such a ring? No need to reply in code, just some guidance for a non-math-skilled user...
Edit: A different way to word what I want, would be a sequence of rotated rectangles connected to each other, forming a low-resolution ring.
So you want an annulus? That is, the area between two circles with the same center but different radii? I'd try a quad strip like this:
glBegin(GL_QUAD_STRIP);
for (i = 0; i <= 360; i += 10) {
    glTexCoord2f(WP*i/360, 0);
    glVertex2f(Cos(i)*(H-10), Sin(i)*(H-10));
    glTexCoord2f(WP*i/360, HP);
    glVertex2f(Cos(i)*H, Sin(i)*H);
}
glEnd();
Each quad is a 10-degree sector of the ring. Note that if you want to draw N quads in a strip, it takes 2*(N+1) points, so we draw a total of 2*(36+1) = 74 points.
The post here on the OpenGL forums seems to do what you want. An overview of the approach:
If you want a circle filled with a texture, you can use a triangle fan. First, draw the vertex at the center of the circle. Then draw the vertices on the contour of the circle, using cos(angle)*radius for x and sin(angle)*radius for y. Since texture coordinates s and t are in the range [0, 1]: s = (cos(angle)+1.0)*0.5 and t = (sin(angle)+1.0)*0.5. The texture coordinate for the vertex at the center of the circle is (0.5, 0.5).
GLvoid draw_circle(const GLfloat radius, const GLuint num_vertex)
{
    GLfloat vertex[4];
    GLfloat texcoord[2];
    const GLfloat delta_angle = 2.0*M_PI/num_vertex;

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texID);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

    glBegin(GL_TRIANGLE_FAN);

    // draw the vertex at the center of the circle
    texcoord[0] = 0.5;
    texcoord[1] = 0.5;
    glTexCoord2fv(texcoord);
    vertex[0] = vertex[1] = vertex[2] = 0.0;
    vertex[3] = 1.0;
    glVertex4fv(vertex);

    for (int i = 0; i < num_vertex; i++)
    {
        texcoord[0] = (std::cos(delta_angle*i) + 1.0)*0.5;
        texcoord[1] = (std::sin(delta_angle*i) + 1.0)*0.5;
        glTexCoord2fv(texcoord);

        vertex[0] = std::cos(delta_angle*i) * radius;
        vertex[1] = std::sin(delta_angle*i) * radius;
        vertex[2] = 0.0;
        vertex[3] = 1.0;
        glVertex4fv(vertex);
    }

    texcoord[0] = (1.0 + 1.0)*0.5;
    texcoord[1] = (0.0 + 1.0)*0.5;
    glTexCoord2fv(texcoord);

    vertex[0] = 1.0 * radius;
    vertex[1] = 0.0 * radius;
    vertex[2] = 0.0;
    vertex[3] = 1.0;
    glVertex4fv(vertex);

    glEnd();
    glDisable(GL_TEXTURE_2D);
}
