I'm playing around with GLX and Xlib and I'm curious about rendering using straight X calls on top of an OpenGL buffer. The GLX intro clearly says:
GLX extended X servers make a subset of their visuals available for OpenGL rendering. Drawables created with these visuals can also be rendered into using the core X renderer and/or any other X extension that is compatible with all core X visuals.
And, indeed, I'm able to render a simple quad colored with some rainbow effects and then draw on top of it with Xlib calls. However, GLX extends the X window with a back buffer, which I have to swap to the front before I can draw directly to the window. My question is: is it possible to use X to render to the back buffer after OpenGL is done with it, and then swap that buffer wholesale to the front, giving me flicker-free animation for both my OpenGL and X graphics?
I think the answer is no, but there are a couple of alternatives.
You could do another layer of double-buffering with a pixmap (render X and GL to a pixmap, then draw the pixmap to your X window). It would probably wreck your framerate if you were doing an FPS game, but for what you describe it might not matter.
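A rough sketch of that pixmap approach, assuming you already have a display, window, GLX-capable visual and context set up (presentFrame is a placeholder name, and error handling is omitted):

#include <X11/Xlib.h>
#include <GL/glx.h>

// One frame: GL first, then core X on the same pixmap, then a single copy to the window.
// In a real program you would create the pixmap once and reuse it every frame.
void presentFrame(Display *dpy, Window win, XVisualInfo *visInfo, GLXContext ctx,
                  unsigned int width, unsigned int height)
{
    Pixmap pix = XCreatePixmap(dpy, win, width, height, visInfo->depth);
    GLXPixmap glxPix = glXCreateGLXPixmap(dpy, visInfo, pix);

    glXMakeCurrent(dpy, glxPix, ctx);        // OpenGL pass into the pixmap
    glClearColor(0.2f, 0.2f, 0.2f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    // ...your GL drawing goes here...
    glFinish();                              // make sure GL is done before X touches the pixmap
    glXMakeCurrent(dpy, None, NULL);

    GC gc = XCreateGC(dpy, pix, 0, NULL);    // core X pass on top of the same pixmap
    XDrawLine(dpy, pix, gc, 0, 0, (int)width, (int)height);

    XCopyArea(dpy, pix, win, gc, 0, 0, width, height, 0, 0);   // present the finished frame
    XFlush(dpy);

    XFreeGC(dpy, gc);
    glXDestroyGLXPixmap(dpy, glxPix);
    XFreePixmap(dpy, pix);
}

Note that GL rendering into a GLXPixmap is often unaccelerated, which is part of why this route can cost you framerate.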
You could also use Cairo to draw to a client-side memory buffer, with an alpha channel so the background can show through. Then upload the result as a texture to GL and paint it over your background. The Clutter toolkit does this for some of its drawing.
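As a sketch, the Cairo side and the upload could look like this (the 256x256 size and the rectangle are just placeholders; Cairo's ARGB32 is premultiplied and, on little-endian machines, matches GL_BGRA byte order):

#include <cairo/cairo.h>
#include <GL/gl.h>

// Draw the overlay into client-side memory with Cairo, then upload it as a GL texture.
GLuint makeOverlayTexture(void)
{
    cairo_surface_t *surf = cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 256, 256);
    cairo_t *cr = cairo_create(surf);
    cairo_set_source_rgba(cr, 1.0, 0.0, 0.0, 0.5);   // semi-transparent so GL shows through
    cairo_rectangle(cr, 32.0, 32.0, 192.0, 192.0);
    cairo_fill(cr);
    cairo_surface_flush(surf);

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0,
                 GL_BGRA, GL_UNSIGNED_BYTE, cairo_image_surface_get_data(surf));

    cairo_destroy(cr);
    cairo_surface_destroy(surf);
    return tex;      // draw a blended, textured quad with this over the GL background
}

Because the Cairo data is premultiplied, glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) is the blend setup that composites it correctly.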
I created two windows using GLFW. The first window has an OpenGL context and the second one doesn't. What I want to do is render the same scene to both windows using a single OpenGL context, something like this:
glBindVertexArray(vaoId);
// ... tell OpenGL to draw on first window
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(...);
// ... swap first window buffers
// ... tell OpenGL to draw on second window
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(...);
// ... swap second window buffers
glBindVertexArray(0);
The problem is I don't know how to tell OpenGL to draw on a specific window, and I also don't know how to swap buffers for a specific window. If necessary, I can use the Win32 API.
As far as I'm aware, GLFW does not directly support that in its API; it generally treats a window and a GL context as a unit. However, with the native APIs you can do what you want. On Win32 in particular, have a look at wglMakeCurrent(). In GLFW, you can get the required context and window handles via GLFW's native access API. Note that you will only get an HWND that way; you will have to call GetDC() yourself to obtain the window's device context.
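A sketch of that, assuming the first window owns the context and the second window's pixel format is OpenGL-compatible (GLFW only sets one up for windows it created with a context, so that part may need extra care):

#define GLFW_EXPOSE_NATIVE_WIN32
#define GLFW_EXPOSE_NATIVE_WGL
#include <GLFW/glfw3.h>
#include <GLFW/glfw3native.h>

// Drive two windows from the single WGL context via GLFW's native access API.
void drawBoth(GLFWwindow *window1, GLFWwindow *window2)
{
    HGLRC ctx = glfwGetWGLContext(window1);       // the one and only GL context
    HWND hwnd1 = glfwGetWin32Window(window1);
    HWND hwnd2 = glfwGetWin32Window(window2);
    HDC dc1 = GetDC(hwnd1);                        // GLFW only hands out HWNDs,
    HDC dc2 = GetDC(hwnd2);                        // so fetch the DCs yourself

    wglMakeCurrent(dc1, ctx);                      // ...glClear, glDrawArrays for window 1...
    SwapBuffers(dc1);

    wglMakeCurrent(dc2, ctx);                      // ...glClear, glDrawArrays for window 2...
    SwapBuffers(dc2);

    ReleaseDC(hwnd1, dc1);
    ReleaseDC(hwnd2, dc2);
}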
Be aware that switching contexts implies flushing the GL command queue, which can have negative effects on performance. See GL_KHR_context_flush_control for more details.
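GLFW 3.1 and later exposes that extension as a window hint, so you can request the "no flush on release" behaviour before creating the context-owning window:

glfwWindowHint(GLFW_CONTEXT_RELEASE_BEHAVIOR, GLFW_RELEASE_BEHAVIOR_NONE);   // maps to KHR_context_flush_control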
I'm currently experimenting with OpenGL and GLUT.
As I have basically no idea what I'm doing, I've totally messed up the lighting.
The complete compilable file can be found here: main.c
I have a display loop which currently operates as follows:
glutDisplayFunc(also idle func):
glClear GL_COLOR_BUFFER_BIT and GL_DEPTH_BUFFER_BIT
switch to modelview matrix
load identity
first Rotate and then Translate according to my keyboard and mouse inputs for the camera
draw a ground with glNormal 0,1,0 and glMaterial on front and back,
which is encapsulated by push/popmatrix
pushmatrix
translate
glutSolidTeapot
popmatrix
do lighting related things, glEnable GL_LIGHTING and LIGHT0 and passing
float pos[] = {0.1,0.1,-0.1,0.1};
glLightfv( GL_LIGHT0, GL_POSITION, pos );
swap the buffers
The function associated with glutReshapeFunc does the following (this is from the lighthouse3d.com GLUT tutorial):
calculate the ratio of the screen
switch to projection matrix
load identity
set the viewport
set the perspective
switch to modelview matrix
However, this all seems to work somehow, but as soon as I enable lighting, the normals seem to get totally messed up.
My GL_LIGHT0 seems to stay where it should, as I can see that the light spot on the ground does not move as I move around.
But the teapot's texture seems to move when I move my camera, while the teapot itself stands still.
Here is some visual material to explain it; I apologize for my bad English:
Link to YouTube video describing visually
You have a series of mistakes in your code:
You don't properly set the properties of your OpenGL window:
glutCreateWindow (WINTITLE);
glutInitDisplayMode (GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
glutInitDisplayMode() only affects windows created after it is called, so you should swap those two lines.
You never enable the depth test. You should add glEnable(GL_DEPTH_TEST) after you create the window. Not using the depth test explains the weird "see-through" effect you get with the teapot.
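Put together, the start of your setup would then look roughly like this:

glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);   // request the mode first
glutCreateWindow(WINTITLE);                                  // then create the window
glEnable(GL_DEPTH_TEST);                                     // needs a current context, so after the window exists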
You have the following code:
glEnable (GL_CULL_FACE | GL_CULL_FACE_MODE);
This is wrong in two ways: these GLenums are not single bits but plain values, so you cannot OR them together and expect anything useful to happen. I don't know whether this particular call enables something you don't expect or just generates an error.
The second issue here is that GL_CULL_FACE_MODE isn't even a valid enum to enable.
In your case, you can either skip face culling completely, or you should write:
glEnable (GL_CULL_FACE);
glFrontFace(GL_CW);
The latter call changes the face orientation rule from OpenGL's default counterclockwise to clockwise, since the GLUT teapot is defined that way. Interestingly, your floor is also drawn following that rule, so this fits your whole scene. At least for now.
You have not fully understood how GL's state machine works. You draw the scene and then set up the lighting, but this has no effect on the already drawn objects; it only affects the next thing you draw, which here is the next frame. Also, lighting in the fixed-function pipeline works in eye space. That means that if you want a light source at a fixed position in the world, rather than at a fixed position relative to the camera, you have to re-specify the light position with the updated modelview matrix every time the camera moves. In your case, the light source will lag one frame behind when the camera moves. This is probably not noticeable, but it is still wrong in principle. You should reorder your display() function to:
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode (GL_MODELVIEW);
glLoadIdentity();
control (WALKSPEED, MOUSESPEED, mousein);
lightHandler();
drawhelpgrid(1, 20);
drawTeapod();
glutSwapBuffers();
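For illustration, a lightHandler() in that spirit re-specifies the world-space light position while the camera transform is on the modelview matrix (the position value here is a placeholder, not the one from your main.c):

void lightHandler(void)
{
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);

    // GL_POSITION is transformed by the current modelview matrix, so sending it
    // after the camera transform keeps the light fixed in the world, not in the eye.
    GLfloat pos[] = { 0.0f, 2.0f, 0.0f, 1.0f };   // w = 1: positional light
    glLightfv(GL_LIGHT0, GL_POSITION, pos);
}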
With those changes, I can actually get the expected result of a lighted teapot on my system. But as a side note, I feel obliged to warn you that almost all of your code relies on deprecated features of OpenGL. This stuff has been removed from modern versions of OpenGL. If you start learning OpenGL now, you should consider learning the modern programmable pipeline, and not some decades-old obsolete stuff.
I am making an OpenCL raycaster, and I'm looking to blit pixels to the screen with the least overhead possible (every tick counts), ideally with even less overhead than a glClear() every tick. I thought of creating a framebuffer to draw to, passing it to OpenCL, and then blitting with glBlitFramebuffer(), but I think drawing to the screen directly would be far better. So: is there a way to draw pixels with OpenCL? Hacky stuff is OK.
The best thing I can do now is check out how glClear does it ...
The usual approach is to use OpenCL to draw to a shared OpenGL/OpenCL texture object (created with the clCreateFromGLTexture() function) and then draw it to the screen with OpenGL by rendering a full-screen quad with that texture.
Edit: I've written a small example which uses OpenCL to calculate a mandelbrot fractal and then renders it directly from the GPU to the screen with OpenGL. The code is here: mandelbrot.cpp.
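In outline, the interop part of such a setup looks roughly like this (variable names are assumptions and error checking is omitted; the CL context must have been created with GL sharing enabled):

#include <CL/cl_gl.h>
#include <GL/gl.h>

// Assumed to exist already: a CL context created with GL sharing, a queue,
// the raycaster kernel, and the window's GL context made current.
extern cl_context context;
extern cl_command_queue queue;
extern cl_kernel kernel;

GLuint tex;      // GL texture the kernel writes into
cl_mem clTex;    // the same texture, seen from OpenCL

void setupSharedTexture(int width, int height)
{
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    cl_int err;
    clTex = clCreateFromGLTexture(context, CL_MEM_WRITE_ONLY,
                                  GL_TEXTURE_2D, 0, tex, &err);
}

void renderFrame(size_t width, size_t height)
{
    glFinish();                                               // GL must be done before CL touches the texture
    clEnqueueAcquireGLObjects(queue, 1, &clTex, 0, NULL, NULL);
    clSetKernelArg(kernel, 0, sizeof(clTex), &clTex);
    size_t global[2] = { width, height };
    clEnqueueNDRangeKernel(queue, kernel, 2, NULL, global, NULL, 0, NULL, NULL);
    clEnqueueReleaseGLObjects(queue, 1, &clTex, 0, NULL, NULL);
    clFinish(queue);                                          // CL must be done before GL samples it
    // ...now draw a full-screen quad textured with 'tex'...
}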
Normally, working on graphics and display, we encounter terms such as DisplayBuffer, DisplaySurface, and DisplayContext. What is the difference between these terms?
It depends on the system; these are general terms and are often interchanged. But in general:
A display surface is a surface you perform drawing operations on, i.e. draw a line, circle, etc. Typically it is the physical screen surface you are writing to.
However, although you write on a display surface, in many cases you also have a display buffer: when you draw on the surface, you actually draw into the display buffer so that the user doesn't see the drawing happening, and when you've finished drawing you flip the display buffer onto the surface so that the drawing appears instantaneously.
A display context is the description of the physical characteristics of the drawing surface, e.g. width, height, color depth and so on. In Win32, for example, you obtain a device context for a particular piece of hardware, such as a printer or screen, but you then draw on this device context, so it also acts as the display surface. Likewise, you can obtain a device context for an offscreen bitmap (a display buffer). So the terms can blur a bit.
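In Win32/GDI terms, the whole idea looks roughly like this (the window handle and sizes are assumed to exist):

#include <windows.h>

// Draw into an off-screen bitmap (the "display buffer") through a memory DC,
// then flip the finished image onto the window's DC (the "display surface").
void paintOffscreen(HWND hwnd, int width, int height)
{
    HDC windowDC = GetDC(hwnd);                     // context for the on-screen surface
    HDC memDC    = CreateCompatibleDC(windowDC);    // context for the off-screen buffer
    HBITMAP bmp  = CreateCompatibleBitmap(windowDC, width, height);
    HGDIOBJ old  = SelectObject(memDC, bmp);

    // Draw on the buffer; the user sees none of this happening.
    Rectangle(memDC, 10, 10, width - 10, height - 10);
    Ellipse(memDC, 20, 20, width - 20, height - 20);

    // One blit makes the whole drawing appear at once.
    BitBlt(windowDC, 0, 0, width, height, memDC, 0, 0, SRCCOPY);

    SelectObject(memDC, old);
    DeleteObject(bmp);
    DeleteDC(memDC);
    ReleaseDC(hwnd, windowDC);
}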
I'm considering integrating some D3D code I have with WPF via the new D3DImage as described
here:
My question is this: do pixel shaders work on offscreen surfaces?
Rendering to an offscreen surface is generally less constrained than rendering directly to a back buffer. The only constraint that comes with using an offscreen surface with D3DImage is that it must be in a 32-bit RGB/ARGB format (depending on your platform). Other than that, everything the hardware has to offer is at your disposal.
In fact, tons of shader effects take advantage of offscreen surfaces for multipass rendering or full-screen post-processing.
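For instance, with Direct3D 9 an off-screen 32-bit ARGB render target is created and bound like this, and pixel-shaded draws then land on it exactly as they would on the back buffer (a sketch; sizes are placeholders and error handling is omitted):

#include <d3d9.h>

// Render a pixel-shaded pass into an off-screen 32-bit ARGB surface.
IDirect3DSurface9 *renderToOffscreen(IDirect3DDevice9 *device, int width, int height)
{
    IDirect3DSurface9 *target = NULL;
    IDirect3DSurface9 *backBuffer = NULL;

    device->CreateRenderTarget(width, height, D3DFMT_A8R8G8B8,
                               D3DMULTISAMPLE_NONE, 0, FALSE, &target, NULL);

    device->GetRenderTarget(0, &backBuffer);       // remember the real back buffer
    device->SetRenderTarget(0, target);

    device->BeginScene();
    // ...SetPixelShader(...) and draw calls; shaders behave the same as on screen...
    device->EndScene();

    device->SetRenderTarget(0, backBuffer);        // restore the back buffer
    backBuffer->Release();
    return target;     // on the WPF side, a surface like this is what D3DImage.SetBackBuffer receives
}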
I don't know if there's anything special about it with WPF, but in general yes, pixel shaders work on offscreen surfaces.
For some effects, rendering to a different surface is required; glass refraction in front of a shader-rendered scene, for example. Pixel shaders cannot access the current screen contents, so the view has to be rendered to a buffer first and then used as a texture in the refraction shader pass, so that it can take the background colour from a pixel other than the one being calculated.
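Sketched with Direct3D 9, that render-to-texture flow might look like this (the scene and glass draw calls are placeholders):

#include <d3d9.h>

// Refraction-style setup: render the scene into a texture, then bind that
// texture for the refraction pass so the shader can sample the "background".
void renderRefraction(IDirect3DDevice9 *device, int width, int height)
{
    IDirect3DTexture9 *sceneTex = NULL;
    IDirect3DSurface9 *sceneSurf = NULL;
    IDirect3DSurface9 *backBuffer = NULL;

    device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                          D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &sceneTex, NULL);
    sceneTex->GetSurfaceLevel(0, &sceneSurf);
    device->GetRenderTarget(0, &backBuffer);

    // Pass 1: the normal scene goes into the texture instead of the screen.
    device->SetRenderTarget(0, sceneSurf);
    device->BeginScene();
    // ...draw the background scene...
    device->EndScene();

    // Pass 2: back to the screen; the glass shader can now sample sceneTex freely.
    device->SetRenderTarget(0, backBuffer);
    device->SetTexture(0, sceneTex);
    device->BeginScene();
    // ...SetPixelShader(refractionShader), draw the glass geometry...
    device->EndScene();

    backBuffer->Release();
    sceneSurf->Release();
    sceneTex->Release();
}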