OpenGL texture transformations - C

I'm a beginner to OpenGL and I'd like a simple introduction to using textures. For my application, I have no need of geometry, just some texture manipulation. I want to be able to scale, rotate, and translate textures, blend textures together (mixing R,G,B components), and display textures on the screen. If you could also tell me how to draw a solid filled rectangle, that would be good.
I'm also fuzzy on shaders. Could I use GLSL to transform the color at every point on a texture by a formula?
Examples or explanations in C would be preferred.

You have asked a lot of questions...
If you want to play with textures and do some 2D effects, here is a little pseudocode that could help:
render() {
    glClear(...);                     // clear the previous frame
    glUseProgram(shader_program);     // activate your shader
    bind_textures();                  // bind the input texture(s)
    setup_shader_params();            // set uniforms: scale, rotation, blend factors...
    draw_fullscreen_quad();           // two triangles covering the viewport
    glUseProgram(0);                  // back to the fixed-function pipeline
    // rest of the OpenGL frame...
}
read more on:
http://www.arcsynthesis.org/gltut/Basics/Tut01%20Following%20the%20Data.html
What's the best way to draw a fullscreen quad in OpenGL 3.2?
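
To address the GLSL part of the question: yes, a fragment shader can apply an arbitrary formula to every texel. Below is a rough sketch in C, assuming an OpenGL 2.x-capable compatibility context; the names frag_src, make_color_program and the tex/gain uniforms are made up for illustration, and error checking is left out.

/* Hedged sketch: compile a fragment shader that scales each texel's colour
 * by a "gain" uniform. Error checks (glGetShaderiv / glGetShaderInfoLog)
 * are omitted for brevity. */
static const char *frag_src =
    "#version 120\n"
    "uniform sampler2D tex;\n"
    "uniform float gain;\n"
    "void main() {\n"
    "    vec4 c = texture2D(tex, gl_TexCoord[0].st);\n"
    "    gl_FragColor = vec4(c.rgb * gain, c.a);\n"   // the per-pixel formula
    "}\n";

GLuint make_color_program(void)
{
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &frag_src, NULL);
    glCompileShader(fs);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, fs);
    glLinkProgram(prog);
    return prog;
}

/* inside render(): */
glUseProgram(prog);
glUniform1i(glGetUniformLocation(prog, "tex"), 0);     // texture unit 0
glUniform1f(glGetUniformLocation(prog, "gain"), 0.5f); // e.g. darken by half
/* ...bind the texture, draw the quad, then glUseProgram(0)... */

Blending two textures works the same way: bind a second sampler on another texture unit and mix the two colours in the formula.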

Related

Per-Pixel Lighting in WPF

Is it possible to implement per-pixel lighting in a WPF 3D application (C#) using its shader effects?
I have a basic 3D application running in WPF, but it only shows Gouraud shading, interpolating the shaded colour values from the vertices across the interior of each polygon. I tried to implement a per-pixel lighting approach like Phong, but I realized that I do not seem to have access to interpolated normals in WPF's pixel shader effects.
Is this a limitation of WPF, where one would be better off going with C++ and OpenGL/DirectX directly?
As far as I know, yes, it is a WPF limitation: unlike OpenGL, where you can do per-pixel shading, WPF only lets you work on a MeshGeometry.
I think one way around it would be to create textures that contain your shader effect and then apply them per triangle.
This would take more computation time, but it should work.

Applying a pixel shader to a ViewPort3D

I'm new to pixel shaders, and I'm trying to apply an underwater effect to my 3D scene. I can apply it to an image and animate it easily enough, but not to my Viewport3D. The computer just hangs when I call BeginAnimation on the effect applied to the Viewport3D. Is this something that cannot be done in WPF?
After a little digging I learned that pixel shaders can only be applied to two-dimensional types, like images. What I would need is a vertex shader, and WPF does not provide those.

OpenGL total beginner and 2D animation project?

I have installed GLUT and Visual Studio 2010 and found some tutorials on OpenGL basics (www.opengl-tutorial.org) and 2D graphics programming. I have advanced knowledge of C but no experience with graphics programming...
For a project (astronomy, time scales), I must create one object in the center of the window and make five other objects (circles, dots...) rotate around the centered object according to some equations (which I can implement and solve). The equations calculate the coordinates of those five objects, and all of them take a parameter t (time). To create the animation I will vary t from 0 to 2*pi in some step and get the coordinates at different moments. If the task were just to print the new coordinates it would be easy for me, but the problem is how to animate the graphics. Can I use some OpenGL functions for rotation/translation? How do I make an object move to the location given by the equations? Or can I redraw the objects at their new coordinates every millisecond? The first thing I thought of was to draw all objects, calculate new coordinates, clear the screen, draw all objects at the new coordinates, and repeat that indefinitely... (it would be primitive, but would it work?)
Here is a screenshot of those objects: http://i.snag.gy/ht7tG.jpg. My question is how to animate by calculating new coordinates for the objects each step and moving them to their new locations. Can I do that with OpenGL basics plus good knowledge of C and geometry? Any ideas where to start? Thanks.
Or can I redraw the objects at their new coordinates every millisecond? The first thing I thought of was to draw all objects, calculate new coordinates, clear the screen, draw all objects at the new coordinates, and repeat that indefinitely...
This is indeed the way to go. I would further suggest that you don't bother with shaders and vertex buffers, which is the OpenGL 3/4 way of doing things. What would be easiest is called "immediate mode": deprecated in OpenGL 3/4 but available in 1/2/3. It's easy:
glPushMatrix();                        //save the modelview matrix
glTranslatef(obj->x, obj->y, obj->z);  //move the origin to the object's center
glBegin(GL_TRIANGLES);                 //start drawing triangles
    glColor3f(1.0f, 0.0f, 0.0f);       //a nice red one
    glVertex3f( 0.0f, +0.6f, 0.0f);
    glVertex3f(-0.4f,  0.0f, 0.0f);
    glVertex3f(+0.4f,  0.0f, 0.0f);    //almost equilateral
glEnd();
glPopMatrix();                         //restore the modelview matrix/origin
Do look into the helper libraries GLU (useful for setting up the camera / the projection matrix) and GLUT (it makes it very easy to set up a window, basic controls, and drawing).
It would probably take you longer to set this up yourself (displaying a rotating triangle) than to figure out how to use them. In fact, the snippet above should help you get started. Your first challenge could be to set up a 2D orthographic projection matrix that projects along the Z-axis, so you can use the 2D functions (glVertex2f).
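For example, a minimal sketch of that projection setup, assuming a GLUT window already exists and GLU is available:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(-1.0, 1.0, -1.0, 1.0);   // left, right, bottom, top
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// from here on, glVertex2f(x, y) maps directly into the [-1,1] x [-1,1] square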
The first thing I thought of was to draw all objects, calculate new coordinates, clear the screen, draw all objects at the new coordinates, and repeat that indefinitely... (it would be primitive, but would it work?)
That's exactly how it works. With GLUT, you register a display function that gets called whenever GLUT thinks it's time to draw a new frame. In that function you clear the screen, draw the objects, and flush to the screen. Then just instruct GLUT to draw another frame, and you're animating!
You might want to keep track of the time between frames so you can animate things smoothly, but I'm sure you can figure that part out.
OpenGL is really just a drawing library. It doesn't do animation; that's up to you to implement. Clear/draw/flush is the commonly used approach, though.
Note: by 'flush' I mean glFlush(), although GLUT in double-buffered mode requires glutSwapBuffers() instead.
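As a rough sketch of that clear/draw/swap cycle with GLUT (the display/tick names and the 16 ms timer are just illustrative choices, not the only way to drive the loop):

#include <GL/glut.h>

static float t = 0.0f;                    // the time parameter from the question

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    /* ...draw the central object and the five orbiting ones here,
       using coordinates computed from t... */
    glutSwapBuffers();                    // double-buffered, so swap instead of glFlush
}

static void tick(int value)
{
    (void)value;
    t += 0.01f;                           // advance the animation
    if (t > 6.2831853f) t -= 6.2831853f;  // wrap at 2*pi
    glutPostRedisplay();                  // ask GLUT to call display() again
    glutTimerFunc(16, tick, 0);           // roughly 60 frames per second
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(600, 600);
    glutCreateWindow("orbits");
    glutDisplayFunc(display);
    glutTimerFunc(16, tick, 0);
    glutMainLoop();
    return 0;
}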
The Red Book explains the proper way to draw models that can be translated, rotated, scaled and so on: http://www.glprogramming.com/red/chapter03.html
Basically, you load the identity matrix, apply your translations/rotations/scales (the order matters; again, the book explains this), and draw the model as though it were at the origin at normal scale; it will then be placed at its new position. Then you load the identity again and proceed with the next object. Every frame of the animation, you glClear() and recalculate/redraw everything, as sketched below. (It sounds expensive, but there is usually not much you can cache between draws.)
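A sketch of that per-frame pattern (draw_circle, orbit_x, orbit_y and spin_angle are hypothetical helpers standing in for your own drawing code and equations; the modelview matrix is assumed to be current):

glClear(GL_COLOR_BUFFER_BIT);

glLoadIdentity();
draw_circle(0.0f, 0.0f, 0.1f);                      // central object at the origin

for (int i = 0; i < 5; ++i) {
    float x = orbit_x(i, t);                        // your equations, parameter t
    float y = orbit_y(i, t);

    glLoadIdentity();                               // start from a clean matrix
    glTranslatef(x, y, 0.0f);                       // place the object
    glRotatef(spin_angle(i, t), 0.0f, 0.0f, 1.0f);  // optional spin around Z
    draw_circle(0.0f, 0.0f, 0.05f);                 // draw as though at the origin
}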

Generate Texture in Silverlight imitate leather

I would like to display textures in different colors, pretty much like this texture.
How do I do this in Silverlight?
Thanks!
(leather texture: http://a.imageshack.us/img535/5255/leathertexture.png)
Turn your texture into an alpha texture. The exact steps will depend on your image manipulation software. After that, simply place your texture on top of a colored rectangle.
You could write a pixel shader for an even better result, but that would be overkill in your case.

Can pixel shaders be used when rendering to an offscreen surface?

I'm considering integrating some D3D code I have with WPF via the new D3DImage, as described here:
My question is this: do pixel shaders work on offscreen surfaces?
Rendering to an offscreen surface is generally less constrained than rendering directly to a back buffer. The only constraint that comes with using an offscreen surface with D3DImage is that it must be in a 32-bit RGB/ARGB format (depending on your platform). Other than that, everything the hardware has to offer is at your disposal.
In fact, plenty of shader effects take advantage of offscreen surfaces for multipass rendering or full-screen post-processing.
I don't know if there's anything special about it with WPF, but in general yes, pixel shaders work on offscreen surfaces.
For some effects, rendering to a separate surface is required; glass refraction in front of a shader-rendered scene, for example. Pixel shaders cannot access the current screen contents, so the view first has to be rendered to a buffer and then used as a texture in the refraction pass, so that the shader can take the background colour from a pixel other than the one being calculated.
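The same idea, sketched in OpenGL terms rather than the D3D/D3DImage API (width and height are assumed placeholders for the offscreen surface size; error checking omitted):

GLuint fbo, color_tex;
int width = 800, height = 600;                 // assumed offscreen size

glGenTextures(1, &color_tex);
glBindTexture(GL_TEXTURE_2D, color_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, color_tex, 0);

// pass 1: render the scene into the offscreen surface; shaders work as usual
// ...glClear + draw calls...

// pass 2: back to the default framebuffer, use the result as an input texture
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, color_tex);
// ...draw with the refraction / post-processing shader sampling color_tex...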
