Creating an OpenGL WGL context for cairo graphics - c

I am trying to create a dummy context so I can render with cairo-gl and then copy that surface into an image surface buffer.
The code below is just a test so I can evaluate whether the OpenGL backend works.
Here is what I am doing, but I always get an error when creating the cairo_device. The error is thrown in _cairo_gl_dispatch_init_buffers.
EDIT: Actually, cairo fails in _cairo_gl_get_version and _cairo_gl_get_flavor.
HDC hdc = GetDC((HWND)pGraphics->GetWindow());

PIXELFORMATDESCRIPTOR pfd = {
    sizeof(PIXELFORMATDESCRIPTOR),   // size of this pfd
    1,                               // version number
    PFD_DRAW_TO_WINDOW |             // support window
    PFD_SUPPORT_OPENGL |             // support OpenGL
    PFD_DOUBLEBUFFER,                // double buffered
    PFD_TYPE_RGBA,                   // RGBA type
    24,                              // 24-bit color depth
    0, 0, 0, 0, 0, 0,                // color bits ignored
    0,                               // no alpha buffer
    0,                               // shift bit ignored
    0,                               // no accumulation buffer
    0, 0, 0, 0,                      // accum bits ignored
    32,                              // 32-bit z-buffer
    0,                               // no stencil buffer
    0,                               // no auxiliary buffer
    PFD_MAIN_PLANE,                  // main layer
    0,                               // reserved
    0, 0, 0                          // layer masks ignored
};

//HDC hdc;
int iPixelFormat;

// get the best available match of pixel format for the device context
iPixelFormat = ChoosePixelFormat(hdc, &pfd);

// make that the pixel format of the device context
SetPixelFormat(hdc, iPixelFormat, &pfd);

// create a rendering context
HGLRC hglrc = wglCreateContext(hdc);

// Test openGL
cairo_surface_t *surface_gl;
cairo_t *cr_gl;

cairo_device_t *cairo_device = cairo_wgl_device_create(hglrc);
surface_gl = cairo_gl_surface_create_for_dc(cairo_device, hdc, 500, 500);
cr_gl = cairo_create(surface_gl);

cairo_set_source_rgb(cr_gl, 1, 0, 0);
cairo_paint(cr_gl);
cairo_set_source_rgb(cr_gl, 0, 0, 0);
cairo_select_font_face(cr_gl, "Sans", CAIRO_FONT_SLANT_NORMAL,
                       CAIRO_FONT_WEIGHT_NORMAL);
cairo_set_font_size(cr_gl, 40.0);
cairo_move_to(cr_gl, 10.0, 50.0);
cairo_show_text(cr_gl, "openGL test");

cairo_surface_write_to_png(surface_gl, "C:/Users/Youlean/Desktop/imageGL.png");

cairo_destroy(cr_gl);
cairo_surface_destroy(surface_gl);
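Side note, an untested guess rather than a confirmed fix: _cairo_gl_get_version parses glGetString(GL_VERSION), and glGetString only returns something useful when a GL context is current on the calling thread. The snippet above never makes the freshly created context current, so one variant worth trying is:

// Untested variant (assumption, not a confirmed fix): make the context
// current before handing it to cairo, since glGetString needs a current context.
HGLRC hglrc = wglCreateContext(hdc);
if (hglrc == NULL || !wglMakeCurrent(hdc, hglrc))
{
    // creation or activation failed; GetLastError() has the details
}
cairo_device_t *cairo_device = cairo_wgl_device_create(hglrc);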

Related

How do I draw lines in a 3D coordinate system

I want to draw lines from a 3D coordinate system, but in 2D, in C. I know that I have to do some kind of interpolation (projection).
Or can I just draw the vectors directly?
I read some facts about the projection online, but it didn't work because some of my coordinates are lower than zero (https://math.stackexchange.com/questions/2305792/3d-projection-on-a-2d-plane-weak-maths-ressources).
That's why I get such tiny lines.
Here are some of the coordinates of the data I want to draw lines with.
I have already read the data and saved the points in an array (coor).
from x: -10.0
from y: -10.0
from z: -10.0
to x : 200.0
to y: -5.0
to z: 20.0
#include <windows.h>
#include <windowsx.h>
#include <commctrl.h>
#include <string.h>
#include <stdio.h>
#include <conio.h>
#include "project3res.h"

double coor[4][6];

///////////// DRAW THE COORDINATE SYSTEM //////////////////////
BOOL zeichnen (HWND hwnd)
//double x0=650,y0=350;
//double d= 100; // helper variable for interpolating from 2D to 3D
// preparation
{
    double x0=650, y0=350;   // origin of the coordinate system
    double d = 2;            // helper variable for converting 3D to 2D
    HDC hdc;
    PAINTSTRUCT ps;

    InvalidateRect (hwnd, NULL, TRUE);
    hdc = BeginPaint (hwnd, &ps);

    // drawing commands
    // COORDINATE SYSTEM //
    SetViewportOrgEx(hdc, x0, y0, NULL);
    MoveToEx (hdc, 0, 0, NULL);
    LineTo (hdc, 100, 0);       // x-axis
    MoveToEx (hdc, 0, 0, NULL);
    LineTo (hdc, -100, 0);
    MoveToEx (hdc, 0, 0, NULL);
    LineTo (hdc, 0, 100);       // y-axis
    MoveToEx (hdc, 0, 0, NULL);
    LineTo (hdc, 0, -100);
    MoveToEx (hdc, 0, 0, NULL);
    LineTo (hdc, 100, -100);    // z-axis
    MoveToEx (hdc, 0, 0, NULL);
    LineTo (hdc, -100, 100);

    MoveToEx (hdc, coor[0][0]*(d/coor[0][2]), coor[0][1]*(d/coor[0][2]), NULL);
    LineTo (hdc, coor[0][3]*(d/coor[0][5]), coor[0][4]*(d/coor[0][5]));

    EndPaint (hwnd, &ps);
    UpdateWindow(hwnd);
    return 0;
}
Recap:
x' = x * d / z;
y' = y * d / z;
Based on your linked post, the positive z-axis points away from you and the center of the projection plane is at (0, 0, d).
Now, if any z-coordinate is less than d (between eye and the plane), as described, clipping will occur.
There are two possibilities to adjust your render output:

- Move the camera away from the scene (d remains unchanged): add a (positive) distance factor to every z-coordinate, i.e. move the objects in the scene along the positive z-axis.
  - advantage: every point gets projected
  - disadvantage: the scene gets smaller the more you move away
- Change the field of view (alter the d value):
  - decrease: wide lens effect
  - increase: narrow lens, zoom-in effect, more clipping will occur
The best approach would be to find the bounding box of your scene (the lines) and fit that box into the view frustum (some trigonometry required).
The frustum is the cut-off part of the pyramid that goes from your eye to the zfar plane. You need the znear plane (which is d) and the zfar plane, which could be the depth (farthest z-coordinate) of your bounding box.
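To make the recap concrete, here is a small sketch of the projection with such a distance offset (my own illustration, not code from the question; the layout coor[i] = { from_x, from_y, from_z, to_x, to_y, to_z } is assumed from the data above):

// Sketch: x' = x * d / z, y' = y * d / z with a positive z-offset so that
// no endpoint lies in front of the projection plane (z + z_offset >= d).
void draw_projected_line(HDC hdc, const double line[6], double d, double z_offset)
{
    double z1 = line[2] + z_offset;   // push the scene along the positive z-axis
    double z2 = line[5] + z_offset;

    MoveToEx(hdc, (int)(line[0] * d / z1), (int)(line[1] * d / z1), NULL);
    LineTo(hdc, (int)(line[3] * d / z2), (int)(line[4] * d / z2));
}

z_offset has to be large enough that every z + z_offset stays at or beyond d, otherwise the clipping described above reappears.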
More Info:
Viewing frustum
The Size of the Frustum at a Given Distance from the Camera
Perspective projection
Camera matrix

How to draw a framebuffer object to the default framebuffer

This code is supposed to clear the background with a yellow color using a framebuffer object and renderbuffers, but what I get is a black background.
#include <SDL2/SDL.h>
#include <GL/glew.h>

int main( int argc, char** argv)
{
    SDL_GL_SetAttribute( SDL_GL_CONTEXT_MAJOR_VERSION, 3);
    SDL_GL_SetAttribute( SDL_GL_CONTEXT_MINOR_VERSION, 3);
    SDL_GL_SetAttribute( SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
    SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1);
    SDL_GL_SetAttribute( SDL_GL_ACCELERATED_VISUAL, 1);

    SDL_Window* gWindow= SDL_CreateWindow( "Title",
        SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
        500, 500, SDL_WINDOW_OPENGL);
    SDL_GLContext gContext= SDL_GL_CreateContext( gWindow);

    glewExperimental= GL_TRUE;
    glewInit();

    GLuint fbo;
    glGenFramebuffers( 1, &fbo);
    glBindFramebuffer( GL_FRAMEBUFFER, fbo);

    GLuint color_rbr;
    glGenRenderbuffers(1, &color_rbr);
    glBindRenderbuffer( GL_RENDERBUFFER, color_rbr);
    glRenderbufferStorage( GL_RENDERBUFFER, GL_RGBA32UI, 500, 500);
    glFramebufferRenderbuffer( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, color_rbr);

    GLuint depth_rbr;
    glGenRenderbuffers( 1, &depth_rbr);
    glBindRenderbuffer( GL_RENDERBUFFER, depth_rbr);
    glRenderbufferStorage( GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 500, 500);
    glFramebufferRenderbuffer( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth_rbr);

    if( glCheckFramebufferStatus( GL_DRAW_FRAMEBUFFER)!= GL_FRAMEBUFFER_COMPLETE)
        return 1;
    if( glCheckFramebufferStatus( GL_READ_FRAMEBUFFER)!= GL_FRAMEBUFFER_COMPLETE)
        return 2;

    glViewport( 0, 0, 500, 500);
    glClearColor( 1, 1, 0, 0);
    SDL_GL_SetSwapInterval( 1);

    int quit= 0;
    SDL_Event event;
    glReadBuffer( GL_COLOR_ATTACHMENT0);

    while( !quit)
    {
        while( SDL_PollEvent( &event))
            if( event.type== SDL_QUIT)
                quit= 1;

        glBindFramebuffer( GL_DRAW_FRAMEBUFFER, fbo);
        glDrawBuffer( GL_COLOR_ATTACHMENT0);
        glClear( GL_COLOR_BUFFER_BIT);

        glBindFramebuffer( GL_DRAW_FRAMEBUFFER, 0);
        glDrawBuffer( GL_BACK);
        glBlitFramebuffer(
            0, 0, 500, 500,
            0, 0, 500, 500,
            GL_COLOR_BUFFER_BIT, GL_NEAREST);

        SDL_GL_SwapWindow( gWindow);
    }

    SDL_DestroyWindow( gWindow);
    return 0;
}
It first clears the framebuffer object with the specified color then blits the framebuffer object to the default framebuffer. Is there something wrong in the code? I can't seem to find where exactly the problem is.
The glBlitFramebuffer operation fails because the read buffer contains unsigned integer values and the default framebuffer doesn't. This generates a GL_INVALID_OPERATION error.
The format of the read buffer is GL_RGBA32UI
glRenderbufferStorage( GL_RENDERBUFFER, GL_RGBA32UI, 500, 500);
while the format of the draw buffer (the default framebuffer) is an unsigned normalized format (probably GL_RGBA8).
If you change the internal format to GL_RGBA16 or GL_RGBA32F, your code will work properly.
Note, formats like RGBA8 and RGBA16 are normalized formats and store values in the range [0.0, 1.0],
while formats like RGBA16UI and RGBA32UI are unsigned integer formats and store integer values in the corresponding range.
The specification says that unsigned integer values can only be copied to unsigned integer values, so the source format has to be *UI and the target format has to be *UI, too.
See OpenGL 4.6 API Compatibility Profile Specification - 18.3.2 Blitting Pixel Rectangles page 662:
void BlitFramebuffer( int srcX0, int srcY0, int srcX1, int srcY1,
int dstX0, int dstY0, int dstX1, int dstY1, bitfield mask, enum filter );
[...]
Errors
[...]
An INVALID_OPERATION error is generated if format conversions are not supported, which occurs under any of the following conditions:
- The read buffer contains fixed-point or floating-point values and any draw buffer contains neither fixed-point nor floating-point values.
- The read buffer contains unsigned integer values and any draw buffer does not contain unsigned integer values.
- The read buffer contains signed integer values and any draw buffer does not contain signed integer values.
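For illustration, the single change suggested above would look like this (a sketch; only the color renderbuffer allocation differs from the code in the question):

// Allocate the color renderbuffer with a normalized format that can be
// blitted to the (normalized) default framebuffer, instead of GL_RGBA32UI.
glBindRenderbuffer( GL_RENDERBUFFER, color_rbr);
glRenderbufferStorage( GL_RENDERBUFFER, GL_RGBA16, 500, 500);   // or GL_RGBA32F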

OpenGL ES 2.0 glDrawElements odd behavior

In my application (running on a Mali-400 GPU) I am using OpenGL ES 2.0 to draw the UI. For that I set up an orthogonal projection matrix and use 2D vectors to describe the geometry (only x, y attributes).
So my vertex structure looks something like this:
struct my_vertext {
    struct vec2 pos;
    unsigned int color;
};
Later I prepare the GL context and set up the shaders:
vertex:
uniform mat4 matrix;
attribute vec2 pos;
attribute vec4 color;
varying vec4 frag_color;
void main() {
    frag_color = color;
    gl_Position = matrix * vec4(pos.xy, 0, 1);
};
fragment:
precision mediump float;
varying vec4 frag_color;
void main() {
    gl_FragColor = frag_color;
};
and bind them with attribute arrays:
GLuint prog = glCreateProgram(); CHECK_GL;
// create shaders, load source and compile them
...
/// binding with attributes
GLuint attrib_pos, attrib_col, vertex_index = 0;
glBindAttribLocation(prog, vertex_index, "pos"); CHECK_GL;
attrib_pos = vertex_index++;
glBindAttribLocation(prog, vertex_index, "color"); CHECK_GL;
attrib_col = vertex_index++;
// link program
glLinkProgram(prog); CHECK_GL;
glUseProgram(prog); CHECK_GL;
When I render my geometry (for simplicity I am drawing 2 rectangles, one behind the other), only the first call to glDrawElements produces an image on the screen (the dark gray rectangle); the second one (the red rectangle) doesn't.
For rendering I am using a Vertex Array Object with 2 bound buffers - one for the geometry (GL_ARRAY_BUFFER) and one for the indices (GL_ELEMENT_ARRAY_BUFFER). All the geometry is placed into these buffers and later drawn with glDrawElements calls providing the needed offsets.
My drawing code:
glBindVertexArray(vao); CHECK_GL;
...
GLuint *offset = 0;
for each UI object:
{
glDrawElements(GL_TRIANGLES, (GLsizei)ui_object->elem_count,
GL_UNSIGNED_SHORT, offset); CHECK_GL;
offset += ui_object->elem_count;
}
This puzzles me a lot since I check each and every return code of the glXXX functions and all of them return GL_NO_ERROR. Additionally, I ran my program in Mali Graphics Debugger and it hasn't revealed any problems/errors.
Geometry and indices for both calls (obtained from Mali Graphics Debugger):
First rectangle geometry (which is shown on screen):
0 Position=[30.5, 30.5] Color=[45, 45, 45, 255]
1 Position=[1250.5, 30.5] Color=[45, 45, 45, 255]
2 Position=[1250.5, 690.5] Color=[45, 45, 45, 255]
3 Position=[30.5, 690.5] Color=[45, 45, 45, 255]
Indices: [0, 1, 2, 0, 2, 3]
Second rectangle geometry (which isn't shown on screen):
4 Position=[130.5, 130.5] Color=[255, 0, 0, 255]
5 Position=[230.5, 130.5] Color=[255, 0, 0, 255]
6 Position=[230.5, 230.5] Color=[255, 0, 0, 255]
7 Position=[130.5, 230.5] Color=[255, 0, 0, 255]
Indices: [4, 5, 6, 4, 6, 7]
P.S.: On my desktop everything works perfectly. I suppose it has something to do with limitations/peculiarities of embedded OpenGL.
On my desktop:
$ inxi -F
$ ....
$ GLX Renderer: Mesa DRI Intel Ivybridge Desktop GLX Version: 3.0 Mesa 10.1.3
I know very little of OpenGL ES, but this part looks wrong:
glBindVertexArray(vao); CHECK_GL;
...
GLuint *offset = 0;
for each UI object:
{
glDrawElements(GL_TRIANGLES, (GLsizei)ui_object->elem_count,
GL_UNSIGNED_SHORT, offset); CHECK_GL;
offset += ui_object->elem_count;
}
compared to
glBindVertexArray(vao); CHECK_GL;
...
GLuint offset = 0;
for each UI object:
{
glDrawElements(GL_TRIANGLES, (GLsizei)ui_object->elem_count,
GL_UNSIGNED_SHORT, &offset); CHECK_GL;
offset += ui_object->elem_count;
}
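For completeness, a sketch of how the last glDrawElements argument can be handled when GL_UNSIGNED_SHORT indices live in a bound GL_ELEMENT_ARRAY_BUFFER (my own illustration under that assumption, not code from the question or the answer): the argument is then a byte offset into the index buffer, so it has to grow by sizeof(GLushort) per index rather than by sizeof(GLuint).

// Sketch: issue one draw call and return the byte offset for the next one.
static size_t draw_ui_object(GLsizei elem_count, size_t offset_bytes)
{
    glDrawElements(GL_TRIANGLES, elem_count, GL_UNSIGNED_SHORT,
                   (const void *)offset_bytes);
    return offset_bytes + (size_t)elem_count * sizeof(GLushort);
}

Called in the same per-object loop as above, it carries offset_bytes from one object to the next.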

Opencv 2.4.0: Load image through unsigned char array

I think I've thoroughly searched the forums, unless I left out certain keywords in my search string, so forgive me if I've missed a post. I am currently using OpenCV 2.4.0 and I have what I think is just a simple problem:
I am trying to take an unsigned char array (8-bit, 3-channel) that I get from another API and put it into an OpenCV matrix to then view it. However, all that displays is an image of the correct size but a completely uniform gray. This is the same color you see when you specify an incorrect Mat name to be displayed.
Have consulted:
Convert a string of bytes to cv::mat (uses a string inside of array) and
opencv create mat from camera data (what I thought was a BINGO!, but can't seem to get to display the image properly).
I took a step back and just tried making a sample array (to eliminate the other part that supplies this array):
#include <opencv2/opencv.hpp>   // added so the example is self-contained
#include <iostream>

using namespace cv;
using namespace std;

int main() {
    bool isCamera = true;
    unsigned char image_data[] = {255,0,0,255,0,0,255,0,0,255,0,0,255,0,0,255,0,0,0,255,0,0,255,0,0,255,0,0,255,0,0,255,0,0,255,0,0,0,255,0,0,255,0,0,255,0,0,255,0,0,255,0,0,255};
    cv::Mat image_as_mat(Size(6,3), CV_8UC3, image_data);
    namedWindow("DisplayVector2", CV_WINDOW_AUTOSIZE);
    imshow("DisplayVector2", image_as_mat);
    cout << image_as_mat << endl;
    getchar();
    return 0;
}
So I am just creating a 6x3 matrix, with the first row being red pixels, the second row green pixels, and the third row blue pixels. However, this still results in the same blank gray image, only at the correct size.
The output of the matrix is (note the semicolons, i.e. it is formatted correctly):
[255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0; 0, 255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0; 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0, 255]
I might be crazy or missing something obvious here. Do I need to initialize something in the Mat to allow it to display properly? Much appreciated as always for all your help everyone!
all the voodoo here boils down to calling getchar() instead of (the required) waitKey()
let me explain: waitKey might be a misnomer here, but you actually need it, as the code inside it runs the window's message loop, which triggers the actual blitting (besides waiting for keypresses).
if you don't call it, your window will never get updated and will just stay grey (that's what you observe here)
indeed, you should have trusted the result from cout: your Mat got properly constructed, it just did not show up in the namedWindow
(btw, getchar() waits for a keypress in the console window, not your img-window)
hope it helps, happy hacking further on ;)
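In code, the suggested change boils down to replacing the last call of the example above (everything else unchanged):

imshow("DisplayVector2", image_as_mat);
cout << image_as_mat << endl;
waitKey(0);   // runs the HighGUI message loop so the window actually repaints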

Color Depth PIXELFORMATDESCRIPTOR

I'm wondering what values to change in a PIXELFORMATDESCRIPTOR object to change the color depth.
According to the OpenGL wiki, this is how you'd create a PIXELFORMATDESCRIPTOR object for an OpenGL context:
PIXELFORMATDESCRIPTOR pfd =
{
    sizeof(PIXELFORMATDESCRIPTOR),
    1,
    PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,    //Flags
    PFD_TYPE_RGBA,        //The kind of framebuffer. RGBA or palette.
    32,                   //Colordepth of the framebuffer.
    0, 0, 0, 0, 0, 0,
    0,
    0,
    0,
    0, 0, 0, 0,
    24,                   //Number of bits for the depthbuffer
    8,                    //Number of bits for the stencilbuffer
    0,                    //Number of Aux buffers in the framebuffer.
    PFD_MAIN_PLANE,
    0,
    0, 0, 0
};
But it has several fields that affect the color depth.
Which ones do I need to change to adjust the color depth?
The first number, 32 in your particular example, specifies the number of color bitplanes available to the framebuffer. The other numbers define the number of bitplanes to use for each component. It's perfectly possible to fit a 5-6-5 pixel format into a 32-bitplane framebuffer, which is a valid choice.
When you pass a PIXELFORMATDESCRIPTOR to ChoosePixelFormat, the values are taken as minimum values. However, the algorithm used by ChoosePixelFormat may not deliver an optimal result for your application. It can then be better to enumerate all available pixel formats and choose from them using a custom set of rules, as sketched below.
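A minimal sketch of such an enumeration (my own example, assuming an HDC named hdc and a simple "most color bits wins" rule):

// Enumerate all pixel formats of the device context and pick one by hand.
// DescribePixelFormat returns the highest pixel format index when queried.
PIXELFORMATDESCRIPTOR pfd;
int best = 0, bestColorBits = 0;
int count = DescribePixelFormat(hdc, 1, sizeof(pfd), NULL);

for (int i = 1; i <= count; ++i)
{
    DescribePixelFormat(hdc, i, sizeof(pfd), &pfd);

    // custom rules: window rendering, OpenGL support, double buffering
    if (!(pfd.dwFlags & PFD_DRAW_TO_WINDOW)) continue;
    if (!(pfd.dwFlags & PFD_SUPPORT_OPENGL)) continue;
    if (!(pfd.dwFlags & PFD_DOUBLEBUFFER))   continue;

    if (pfd.cColorBits > bestColorBits)
    {
        bestColorBits = pfd.cColorBits;
        best = i;
    }
}

if (best != 0)
{
    DescribePixelFormat(hdc, best, sizeof(pfd), &pfd);
    SetPixelFormat(hdc, best, &pfd);
}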
