Create flat shaded cube using triangle strip

I'm not completely solid on how triangle strips work with normals. I want to make a flat shaded cube, so I wrote vertices for a triangle strip that form a cube. That works: I have a cube drawn with a triangle strip. The problem is that I set the normal of each vertex to point away from the center of the cube, so the shading is all weird. I want each side to be a flat color. Any idea how I can set the normals to achieve this?

So, you have normals pointing out from the center of the cube, radiating in all directions, like this?
 \ _ /
-|_|-
 /   \
Is this how it looks? And is the goal something like this, with one normal perpendicular to each face?
  |
-| |-
  |
If that is the case, you could just check which of the six face normals each of your normals is closest to, then change it into that closest normal:
distance = sqrt((x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2)
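For unit-length normals, the nearest of the six axis-aligned face normals is simply the one matching the largest-magnitude component, so the distance check collapses to a comparison. A minimal sketch in C (the function name is my own):

#include <math.h>

/* Snap a roughly unit-length normal to the nearest of the six
 * axis-aligned cube face normals. For unit vectors, "closest by
 * distance" is equivalent to "largest absolute component". */
void snap_normal(float n[3])
{
    float ax = fabsf(n[0]), ay = fabsf(n[1]), az = fabsf(n[2]);
    if (ax >= ay && ax >= az) {          /* +/-X face */
        n[0] = (n[0] > 0.0f) ? 1.0f : -1.0f; n[1] = n[2] = 0.0f;
    } else if (ay >= az) {               /* +/-Y face */
        n[1] = (n[1] > 0.0f) ? 1.0f : -1.0f; n[0] = n[2] = 0.0f;
    } else {                             /* +/-Z face */
        n[2] = (n[2] > 0.0f) ? 1.0f : -1.0f; n[0] = n[1] = 0.0f;
    }
}

One caveat: a vertex shared between two faces of the strip can only carry one normal, so for truly flat shading you may still need to duplicate vertices along the cube's edges.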

Related

Rotate Camera in the Direction behind object in OpenGL

I'm making a game in OpenGL, using freeglut.
I have a car which I am able to move back and forward using keys, and the camera follows it. Now, when I turn the car (glRotate in the xz plane), I want the camera position to change (using gluLookAt) so it always points at the back of the car.
Any suggestions how do I do that?
For camera follow I use the object transform matrix:
1) Get the object transform matrix.
camera = object
Use glGetFloatv(GL_MODELVIEW_MATRIX, ...) or whatever for that.
2) Shift/rotate the position so the Z axis points where you want to look.
I use objects aligned with forward along the Z axis, but not all mesh models are like this, so rotate by +/-90 degrees around X, Y or Z to match this:
Z axis is forward (or backward, depending on your projection matrix and depth function)
X axis is right
Y axis is up
with respect to your camera/screen coordinate system (projection matrix). Then translate to the position behind the car.
3) Apply POV rotation (optional).
If you want to let the camera view rotate slightly away from the forward direction (mouse look), do it at this step:
camera *= rotation_POV
4) Convert the matrix to a camera matrix.
A camera matrix is usually the inverse of the coordinate system matrix it represents, so:
camera = Inverse(camera)
For more info look here: understanding transform matrices. The OpenGL inverse matrix computation in C++ is included there.
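As a rough sketch of steps 1, 2 and 4 in plain C (assuming column-major 4x4 matrices as OpenGL stores them, a rigid object transform of rotation plus translation only, and helper names of my own):

#include <string.h>

/* Column-major 4x4 matrices, OpenGL style: m[col*4 + row]. */

/* Multiply out = a * b. */
static void mat4_mul(float out[16], const float a[16], const float b[16])
{
    float r[16];
    for (int c = 0; c < 4; ++c)
        for (int row = 0; row < 4; ++row) {
            r[c*4 + row] = 0.0f;
            for (int k = 0; k < 4; ++k)
                r[c*4 + row] += a[k*4 + row] * b[c*4 + k];
        }
    memcpy(out, r, sizeof r);
}

/* Fast inverse for a rigid transform (rotation + translation only):
 * transpose the 3x3 rotation, then rotate-and-negate the translation. */
static void mat4_rigid_inverse(float out[16], const float m[16])
{
    for (int c = 0; c < 3; ++c)
        for (int row = 0; row < 3; ++row)
            out[c*4 + row] = m[row*4 + c];          /* R^T */
    for (int row = 0; row < 3; ++row)
        out[12 + row] = -(out[0*4 + row] * m[12] +
                          out[1*4 + row] * m[13] +
                          out[2*4 + row] * m[14]);  /* -R^T * t */
    out[3] = out[7] = out[11] = 0.0f;
    out[15] = 1.0f;
}

/* Follow camera: start from the object matrix, step back along its
 * local Z by 'dist', then invert to get the view matrix. */
void follow_camera(float view[16], const float object[16], float dist)
{
    float back[16] = { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,dist,1 };
    float cam[16];
    mat4_mul(cam, object, back);   /* translate behind the car */
    mat4_rigid_inverse(view, cam); /* camera = Inverse(camera) */
}

Load the result with glLoadMatrixf(view) before drawing the scene; the sign of dist depends on which way your model faces along Z.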

I have a dot bouncing around an image. Need to calculate angles of reflection off of groups of pixels (surface of objects)

Suppose we have an image (pixel buffer) that is black and white, so each pixel is either black or white (not grayscale).
Now, somewhere in the middle of that image, place a green dot. It may have a radius of n for rendering purposes, but it is really just a point. Give the dot a randomly selected direction and speed, and start it moving. If the image is all white pixels, the dot will bounce off the edges of the image, wandering around the picture indefinitely. This is quite easy... just reverse either the rise or the run of the dot's vector.
Next, suppose the image has some globs of black pixels. As the dot encounters these globs of black pixels, the angle of reflection needs to be calculated. This is also quite easy if the black pixels have a fixed slope, as in my sketch (blue Xs represent black pixels). You can find the slope of the blue Xs and easily calculate the new vector.
But how about the case where the black pixels form really unfriendly surfaces? What are some approaches to figuring out this angle?
This is the subject that I am interested in.
There must be some algorithms that exist for this kind of purpose, but I never ran across any in school. I am not asking how to code this, rather approaches to writing the algorithm to do this. I have a few ideas that I'll try, but if there are some standard ways to do this that exist, I'd like to learn about them.
Obviously I'd like to start with Black and White then move into RGBA.
I am looking for any reference material on just this sort of subject. Websites, books, or other references are very very welcome.
Also, if there are different StackOverflow tags that could be good, let me know.
Thanks much!
Edit: more pics and information
Maybe I wasn't clear about what I meant by "unfriendly surfaces". In the previous picture, our blue Xs happened to form a line. Picture a case where it is not a line, but rather a weird shape.
We start with our green pixel traveling at a slope of 2. Suppose its vector is 12 pixels per frame. It would have a projected path like this:
But suppose instead of a nice friendly line, we have this:
In my mind I can kind of see what is likely to happen if this were a ball and some walls.
Look for edge detection algorithms used in image processing. Some edge detectors also approximate the direction of edges.
You can think of the pixel neighborhood of the green dot, maybe somewhere between 3x3 and 7x7, as a small edge direction detection problem. One approach would take two passes at the pixels:
In the first pass, smooth the sharp black/white pixels using a Gaussian filter.
In the second pass, apply an edge detection operator, such as Sobel, Prewitt or Roberts, to produce the X and Y derivatives (dx, dy) of the pixels' intensity. You can then approximate the gradient direction as:
angle = atan2(dy, dx)
(The edge itself runs perpendicular to this gradient direction.)
The motivation for the smoothing pass is to give the edge detection operator information from farther-away pixels.
The Wikipedia page on the Canny edge detector has a good discussion on obtaining the direction (the "gradient") of an edge, including an example of a particular Gaussian filter you can use for smoothing.
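To connect this to the bounce itself: the gradient points from dark to light, so it can stand in for the surface normal at the collision point, and the standard reflection formula v' = v - 2(v . n)n gives the new velocity. A minimal sketch in C (function names are my own; this version skips the smoothing pass and assumes the point is at least one pixel from the border):

#include <math.h>

/* 3x3 Sobel gradient at (x, y); img is grayscale 0..255, row-major,
 * width w. Assumes 1 <= x < w-1 and 1 <= y < h-1. */
void sobel(const unsigned char *img, int w, int x, int y,
           float *gx, float *gy)
{
    #define P(i, j) (float)img[(y + (j)) * w + (x + (i))]
    *gx = (P(1,-1) + 2*P(1,0) + P(1,1)) - (P(-1,-1) + 2*P(-1,0) + P(-1,1));
    *gy = (P(-1,1) + 2*P(0,1) + P(1,1)) - (P(-1,-1) + 2*P(0,-1) + P(1,-1));
    #undef P
}

/* Reflect velocity (vx, vy) off a surface whose normal direction is
 * the image gradient (gx, gy): v' = v - 2 (v . n) n, n normalized. */
void reflect(float *vx, float *vy, float gx, float gy)
{
    float len = sqrtf(gx * gx + gy * gy);
    if (len == 0.0f) return;            /* no edge information */
    float nx = gx / len, ny = gy / len;
    float d = (*vx) * nx + (*vy) * ny;  /* v . n */
    *vx -= 2.0f * d * nx;
    *vy -= 2.0f * d * ny;
}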
I am doing something similar with a ball and randomly generated backgrounds.
The filtering and edge detection are highly technical, but the other approaches based on a 5x5 or 3x3 grid seem similarly difficult.
However, I think I may have a cheap way around this. Assuming a ball travelling in any direction, scan the ball's entire leading edge - a semicircle - for collisions, as sketched below. The further toward the edge of the ball the collision occurs, the more glancing the collision is. Again, I think this should allow you to easily infer the background normal, and from there the answer is fairly simple.
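A rough sketch of that idea in C (my own interpretation and naming): because the ball is a circle, the contact point itself fixes the normal, which always points from the contact point back through the ball's center. The returned normal can then be fed to the same reflection formula as above.

#include <math.h>

/* Scan the leading semicircle of a ball at (cx, cy) with radius r,
 * moving with velocity (vx, vy), for black pixels. On a hit, the
 * normal points from the contact point back toward the ball center.
 * img is 0 (black) / 255 (white), row-major, w x h. Returns 1 on hit. */
int find_contact_normal(const unsigned char *img, int w, int h,
                        float cx, float cy, float r,
                        float vx, float vy, float *nx, float *ny)
{
    float heading = atan2f(vy, vx);
    for (float a = -1.5708f; a <= 1.5708f; a += 0.05f) { /* leading half */
        float px = cx + r * cosf(heading + a);
        float py = cy + r * sinf(heading + a);
        int ix = (int)px, iy = (int)py;
        if (ix < 0 || iy < 0 || ix >= w || iy >= h) continue;
        if (img[iy * w + ix] == 0) {      /* black pixel: contact */
            float len = sqrtf((cx - px) * (cx - px) + (cy - py) * (cy - py));
            *nx = (cx - px) / len;
            *ny = (cy - py) / len;
            return 1;
        }
    }
    return 0;
}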

how do I do "reverse" texture mapping from texture image x,y to 3d space?

I am using WPF 3D, but I think this question should apply to any 3d texture mapping.
Suppose I have a model of a cow, and I want to draw a circular spot on the cow (and I want to do this dynamically -- suppose I don't know the location of the spot until run-time). I could do this by coloring the vertexes (vertexes are assigned a color based on their distance from the center of the spot), but if the model is fairly low-poly, that will give a pretty jagged-edged circle.
I could do it using a pixel shader, where the shader colors each pixel based on its distance from the center of the spot. But suppose I don't have access to pixel shaders (since I don't in WPF).
So, it seems that what I want to do is dynamically create a texture with the circle pattern on it, and texture the cow with it.
The question is: As I'm drawing that texture, how can I know what 3d coordinate in model space a given xy coordinate on the texture image corresponds to?
That is, suppose I have already textured my model with a plain white texture -- I've set up texture coordinates, done texture mapping, but don't have the texture image yet. So I have this 1000x1000 (or whatever) pixel image that gets draped nicely over the cow according to some nice texture coordinates that have been set up on the model beforehand. I understand that when the 3D hardware goes to draw a given triangle, it uses the texture coordinates of the vertexes of the triangle to find the corresponding triangular region of the image, and then interpolates across the surface of the triangle to fill displayed model pixels with colors from that triangular region of the image.
How do I go the other way? How do I say, for this given xy point on my texture image, and given the texture coordinates that have already been set up on the model, what's the 3d coordinate in model space that this image pixel is going to correspond to once texture mapping happens?
If I had such a function, I could color my texture map image such that all the points (in 3d space) within a certain distance of the circle center point on the cow would get one color, and all points outside that distance would get another color, and I'd end up with a nice, crisp circular spot on the cow, even with a relatively low-poly model. Does that sound right?
I do understand that given the texture coordinates for the vertexes of each triangle, I can step through the triangles in my model, find the corresponding triangle on the texture image, and do my own interpolation, across the texture pixels in that triangle, by interpolating across the 3d plane determined by the vertex points. And that doesn't sound too hard. But I'm just trying to understand if there is some standard 3d concept/function where I can just call a ready-made function to give me the model space coordinates for a given texture xy.
I did end up getting this working. I walk every point on the texture (1024 x 1024 points). Using the model's texture coordinates, I determine which polygon face, if any, the given u,v point is inside of. If it's inside of a face, I get the model coordinates for each point on that face. I then do a barycentric interpolation as described here: http://paulbourke.net/texture_colour/interpolation/
That is, for each u,v point on the texture, I use an inside-polygon check to determine which quad it's in on the 2D texture sheet and then I use an interpolation on that same 2D geometry as described in the link above, but instead of interpolating colors or normals I'm interpolating 3D coordinates.
I can then use the 3D coordinate to color the point on the texture (e.g., to color a circular spot on the cow based on how far in model space the given texture point is from the spot center point). And then I can apply the texture to the model, and it works.
Again, it seems like this must be a standard procedure with a name...
One issue is that the result is very sensitive to the quality of the texturing as set up by the modeler. For instance, if a relatively large quad on the cow corresponds to a small quad on the texture image, there just aren't enough pixels to work with to get a smooth curve within that model quad once the texture is applied. You can of course use a higher-res texture, such as 2048x2048, but then your loop time is 4x.
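A minimal sketch of the interpolation step in C (using triangles rather than the quads mentioned above, with struct and function names of my own): the barycentric weights that locate a texel inside a UV triangle are applied unchanged to the triangle's 3D vertex positions.

typedef struct { float x, y, z; } vec3;
typedef struct { float u, v; }    vec2;

/* Barycentric weights of point p inside UV triangle (a, b, c). */
static void barycentric(vec2 p, vec2 a, vec2 b, vec2 c,
                        float *wa, float *wb, float *wc)
{
    float det = (b.v - c.v) * (a.u - c.u) + (c.u - b.u) * (a.v - c.v);
    *wa = ((b.v - c.v) * (p.u - c.u) + (c.u - b.u) * (p.v - c.v)) / det;
    *wb = ((c.v - a.v) * (p.u - c.u) + (a.u - c.u) * (p.v - c.v)) / det;
    *wc = 1.0f - *wa - *wb;
}

/* Map a texel's UV to a model-space position: find the weights in 2D
 * texture space, then apply them to the 3D vertex positions. The texel
 * is inside the triangle iff all three weights are >= 0. */
vec3 uv_to_model(vec2 p, vec2 uv[3], vec3 pos[3], int *inside)
{
    float wa, wb, wc;
    barycentric(p, uv[0], uv[1], uv[2], &wa, &wb, &wc);
    *inside = (wa >= 0 && wb >= 0 && wc >= 0);
    vec3 out = {
        wa * pos[0].x + wb * pos[1].x + wc * pos[2].x,
        wa * pos[0].y + wb * pos[1].y + wc * pos[2].y,
        wa * pos[0].z + wb * pos[1].z + wc * pos[2].z
    };
    return out;
}

The same weights work for normals or any other per-vertex attribute, which is what the linked Paul Bourke page describes for colors.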
It's actually a rasterization process, if I didn't misunderstand your question. In lightmapping, one may also need to find the corresponding positions and normals in world space for each texel in lightmap space and then bake irradiance (which seems similar to your goal).
You can use a standard graphics API to do this task instead of writing your own implementation. Let:
Size of texture -> size of the G-buffers
UVs of each mesh triangle -> vertex positions vec3(u, v, 0) of the input stage
Indices of each mesh triangle -> indices of the input stage
Positions (and normals, etc.) of each mesh triangle -> attributes of the input stage
After the rasterizer stage of the graphics pipeline, all fragments that lie within the UV triangles are generated, and the attributes that were supplied are interpolated automatically. You can do whatever you want now in the pixel shader!
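A sketch of the vertex setup for that trick in C (hypothetical names; the shaders and API calls are omitted): each vertex's UV, remapped from [0,1] to clip space [-1,1], becomes its output position, while the real model-space position rides along as an attribute for the hardware to interpolate.

typedef struct { float x, y, z; } vec3;
typedef struct { float u, v; }    vec2;

/* One vertex of the "UV rasterization" pass: the position in the
 * vertex buffer is the UV remapped to clip space, and the real
 * model-space position is carried as an attribute that the
 * rasterizer interpolates per fragment. */
typedef struct {
    vec3 clip_pos;   /* (u*2-1, v*2-1, 0) */
    vec3 model_pos;  /* interpolated per fragment */
} BakeVertex;

void fill_bake_vertices(BakeVertex *out, const vec2 *uvs,
                        const vec3 *positions, int count)
{
    for (int i = 0; i < count; ++i) {
        out[i].clip_pos.x = uvs[i].u * 2.0f - 1.0f;
        out[i].clip_pos.y = uvs[i].v * 2.0f - 1.0f;
        out[i].clip_pos.z = 0.0f;
        out[i].model_pos  = positions[i];
    }
}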

OpenGL total beginner and 2D animation project?

I have installed GLUT and Visual Studio 2010 and found some tutorials on OpenGL basics (www.opengl-tutorial.org) and 2D graphics programming. I have advanced knowledge of C but no experience with graphics programming...
For a project (astronomy - time scales), I must create one object in the center of the window and make 5 other objects (circles, dots...) rotate around the centered object according to some equations (I can implement and solve them). The equations calculate the coordinates of those 5 objects, and all of them have a parameter t (time). To create the animation I will vary the parameter t from 0 to 2pi with some step and get the coordinates at different moments. If the task were to print the new coordinates of the objects it would be easy for me, but the problem is how to animate the graphics. Can I use some OpenGL functions for rotation/translation? How do I make an object move to a desired location with coordinates determined by an equation? Or can I redraw the objects at new coordinates every millisecond? The first thing I thought of was to draw all objects, calculate new coordinates, clear the screen, draw all objects at the new coordinates, and repeat that infinitely... (it would be primitive but would it work?)
Here is a screenshot of those objects - http://i.snag.gy/ht7tG.jpg. My question is how to make the animation by calculating new coordinates for the objects each step and moving them to the new location. Can I do that with OpenGL basics and good knowledge of C and geometry? Any ideas where to start? Thanks
Or can I redraw the objects at new coordinates every millisecond? The first thing I thought of was to draw all objects, calculate new coordinates, clear the screen, draw all objects at the new coordinates, and repeat that infinitely...
This is indeed the way to go. I would further suggest that you don't bother with shaders and vertex buffers, as is the OpenGL 3/4 way. What would be easiest is called "immediate mode", deprecated in OpenGL 3/4 but available in 1/2/3. It's easy:
glPushMatrix(); //save modelview matrix
glTranslatef(obj->x, obj->y, obj->z); //move origin to object center
glBegin(GL_TRIANGLES); //start drawing triangles
glColor3f(1.0f, 0.0f, 0.0f); //a nice red one
glVertex3f(0.0f, +0.6f, 0.0f);
glVertex3f(-0.4f, 0.0f, 0.0f);
glVertex3f(+0.4f, 0.0f, 0.0f); //almost equilateral
glEnd();
glPopMatrix(); //restore modelview matrix/origin
Do look into helper libraries glu (useful for setting up the camera / the projection matrix) and glut (should make it very easy to set up a window and basic controls and drawing).
It would probably take you longer to set it up (display a rotating triangle) than to figure out how to use it. In fact, here's some code to help you get started. Your first challenge could be to set up a 2D orthogonal projection matrix that projects along the Z-axis, so you can use the 2D functions (glVertex2).
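The starter code linked above isn't reproduced here; as a stand-in, here is a minimal GLUT skeleton of my own (the orbit equation and constants are placeholders) showing the 2D orthographic setup and a timer-driven clear/draw/swap loop:

#include <GL/glut.h>
#include <math.h>

static float t = 0.0f;  /* time parameter, 0 .. 2*pi */

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);

    glColor3f(1.0f, 1.0f, 0.0f);             /* central object */
    /* draw it at the origin here ... */

    glColor3f(1.0f, 0.0f, 0.0f);             /* one orbiting object */
    glPushMatrix();
    glTranslatef(0.5f * cosf(t), 0.5f * sinf(t), 0.0f); /* your equations here */
    /* draw the object as if it were at the origin ... */
    glPopMatrix();

    glutSwapBuffers();
}

static void timer(int value)
{
    t += 0.01f;                              /* advance the time parameter */
    if (t > 2.0f * 3.14159265f) t -= 2.0f * 3.14159265f;
    glutPostRedisplay();                     /* ask GLUT for a new frame */
    glutTimerFunc(16, timer, 0);             /* roughly 60 frames per second */
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(600, 600);
    glutCreateWindow("orbits");

    glMatrixMode(GL_PROJECTION);             /* 2D orthographic projection */
    glLoadIdentity();
    glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);

    glutDisplayFunc(display);
    glutTimerFunc(16, timer, 0);
    glutMainLoop();
    return 0;
}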
The first thing I thought of was to draw all objects, calculate new coordinates, clear the screen, draw all objects at the new coordinates, and repeat that infinitely... (it would be primitive but would it work?)
That's exactly how it works. With GLUT, you set a display function that gets called when GLUT thinks it's time to draw a new frame. In this function, clear the screen, draw the objects and flush it to the screen. Then just instruct GLUT to draw another frame, and you're animating!
You might want to keep track of the time between frames so you can animate things smoothly, but I'm sure you can figure that part out.
OpenGL is really just a drawing library. It doesn't do animation, that's up to you to implement. Clear/draw/flush is the commonly used approach for it though.
Note: with 'flush' I mean glFlush(), although GLUT in multi-buffer mode requires glutSwapBuffers()
The red book explains the proper way to draw models that can first be translated, rotated, scaled and so on: http://www.glprogramming.com/red/chapter03.html
Basically, you load the identity matrix, perform translations/rotations/scales (the order matters - again, the book explains it), draw the model as though it were at the origin at normal scale, and it will be placed in its new position. Then you load the identity again and proceed with the next object. Every frame of the animation, you glClear() and recalculate/redraw everything. (It sounds expensive, but there's usually not much you can cache between draws.)
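For the orbit setup in the question, that per-object sequence might look like the following sketch (draw_sun() and draw_planet() are hypothetical helpers; rotating before translating makes the translation happen in the rotated frame, which is exactly an orbit):

glClear(GL_COLOR_BUFFER_BIT);

glLoadIdentity();
draw_sun();                              /* central object at the origin */

glLoadIdentity();
glRotatef(angle_deg, 0.0f, 0.0f, 1.0f);  /* spin the orbit plane */
glTranslatef(orbit_radius, 0.0f, 0.0f);  /* step out along the orbit */
draw_planet();                           /* drawn as if at the origin */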

Possible to give fixed angle in path?

I have a path as shown in the attached image. i would like to know if it is possible to give a fixed angle for the edge of the path, coz i would like to use the same angle while styling other controls as well. Is there a way to make this possible (in Expression Blend)?
You can do this using the 'Subtract' path operation.
1) First draw the required shapes, like this:
2) Draw a small rectangle and rotate it to the angle you want, say 50 degrees. Then place the small rotated rectangle over the shapes drawn previously, like this:
3) Bring each of the big shapes to the front, like this:
4) Finally, go to "Object -> Combine -> Subtract", selecting each big and small rectangle pair. The final output will have the same angle, like this:
