I'm trying to mimic a flashlight in OpenGL. Basically, I want the spotlight to be in the same position as the camera and point in the same direction the camera is pointing.
Here is my code:
gluLookAt(xAt, yAt, zAt, xLookAt, yLookAt, zLookAt, 0, 1, 0);

/* GL_POSITION is transformed by the current modelview matrix,
   so it must be set after gluLookAt. */
GLfloat light_pos[] = {xAt, yAt, zAt, 1.0f};
glLightfv(GL_LIGHT0, GL_POSITION, light_pos);

GLfloat spotDir[] = {xLookAt - xAt, yLookAt - yAt, zLookAt - zAt};
glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, spotDir);
I've made calls to initialize the light and I've calculated the surface normals of all my objects.
Now, the above code kind of works: when the camera moves, the spotlight follows. However, when I move the camera closer to an object, the object gets less light shone on it, and when I move it further away, the object gets more light.
I want the opposite to happen: the further the camera is from an object, the less light should be shone on it. How is this done? Or is this not the behaviour of an OpenGL spotlight?
So I looked into this, and apparently modifying the attenuation of the light yields the correct results. Hope this helps anyone else who stumbles across this.
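For reference, a minimal sketch of that attenuation setup; the coefficients below are made-up example values that need tuning per scene:

/* By default GL_CONSTANT_ATTENUATION is 1 and the linear/quadratic
   terms are 0, so brightness is independent of distance. Non-zero
   linear/quadratic terms make the light fall off with distance. */
glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION, 1.0f);
glLightf(GL_LIGHT0, GL_LINEAR_ATTENUATION, 0.05f);    /* example value */
glLightf(GL_LIGHT0, GL_QUADRATIC_ATTENUATION, 0.01f); /* example value */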
I’m learning SceneKit by writing a game where you’re flying through an asteroid field dodging objects. Initially, I did this by moving/rotating the camera, but I realized that at some point I’d run out of coordinate space and it’s probably better to move all of the objects toward the camera (and dispose of them when I’ve “passed” them).
But I can’t seem to get them to move. My original code that moved the camera looked like this:
[cameraNode setTransform:CATransform3DTranslate(cameraNode.transform, 0.f, 0.f, -2.f)];
I thought I could do something similar with each asteroid node:
[asteroidNode setTransform:CATransform3DTranslate(cameraNode.transform, 0.f, 0.f, 2.f)];
but they don’t move. If I add a basic animation:
CABasicAnimation *anim = [CABasicAnimation animationWithKeyPath:@"position.z"];
anim.byValue = @10;
anim.duration = 1.0;
[asteroidNode addAnimation:anim forKey:@"move forward"];
the asteroids move but predictably snap back to their original location when it’s done.
This feels like a rookie mistake but I can’t find anything addressing this problem online. Am I going about this the wrong way?
Thanks,
Jeff
Moving the cameraNode the way you do should work, but make sure cameraNode is your current point of view or it will have no effect (check that scnView.pointOfView == cameraNode).
If you want to move the nodes instead, you should translate node.transform (not cameraNode.transform). But actually it's simpler to just do:
node.position = SCNVector3Make(node.position.x, node.position.y, node.position.z+2.0);
Also make sure there is no animation or physics running on these nodes that could override your changes.
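For the disposal part of the question, a minimal sketch of the per-frame approach, assuming the view controller is set as the SCNView's delegate and keeps the asteroids in a hypothetical mutable array asteroidNodes:

// SCNSceneRendererDelegate callback, called once per frame.
- (void)renderer:(id<SCNSceneRenderer>)renderer updateAtTime:(NSTimeInterval)time
{
    for (SCNNode *node in [self.asteroidNodes copy]) {
        // Move each asteroid toward the camera along +z.
        node.position = SCNVector3Make(node.position.x,
                                       node.position.y,
                                       node.position.z + 2.0);

        // Dispose of asteroids once they are behind the camera.
        if (node.position.z > self.cameraNode.position.z) {
            [node removeFromParentNode];
            [self.asteroidNodes removeObject:node];
        }
    }
}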
I'm finally upgrading a very old universal game app of mine to play nicely with newer OSs and device sizes. I've got everything updated and targeting iOS 6.1 right now, but when I run it on the iPhone 5, my actual in-game view, which is rendered using OpenGL into an EAGLView, is positioned very strangely and shows a lot of clipping (see screenshot).
On the "normal" devices that were around when we first created this, everything appears as expected.
In my view controller, I basically load a nib with the right size set for the different devices: iPad and non-4" devices get a 1024x768 view and the 4" device gets a new 1136x640 view.
Then, in my viewDidLoad, I set self.view.contentScaleFactor to [[UIScreen mainScreen] scale], and I then do some view sizing like so (roughly):
if (iPad) {
    [self.view setFrame:CGRectMake(0, 0, 1024, 768)];
    [self.view setCenter:CGPointMake(384, 512)];
    DefaultViewScale = 1.2f;
} else if (WideScreen) {
    [self.view setFrame:CGRectMake(0, 0, 568, 320)];
    [self.view setCenter:CGPointMake(160, 293)];
    DefaultViewScale = 1.0f;
} else {
    [self.view setFrame:CGRectMake(0, 0, 480, 320)];
    [self.view setCenter:CGPointMake(160, 240)];
    DefaultViewScale = 1.0f;
}
Lastly, I apply a transform to scale the view by the factor defined above, which I've just hand-tweaked, and then rotate it, since the app is Landscape-Left only.
[self.view setTransform:CGAffineTransformConcat(
    CGAffineTransformMakeScale(DefaultViewScale, DefaultViewScale),
    CGAffineTransformMakeRotation(-M_PI_2))];
I then initialize a new EAGLContext (OpenGL ES 1):
[(EAGLView *)self.view setContext:context];
[(EAGLView *)self.view setFramebuffer];
setFramebuffer is mostly:
[EAGLContext setCurrentContext:context];
// Create default framebuffer object.
glGenFramebuffers(1, &defaultFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
glGenRenderbuffers(1, &colorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &framebufferWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &framebufferHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer);
glViewport(0, 0, framebufferWidth, framebufferHeight);
There's some more boilerplate EAGLView code, but note that I'm setting the glViewport to whatever width and height GL reports, which is grabbed from the UIView's layer size.
And finally it sets up the projection matrix:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0, self.view.frame.size.width, 0, self.view.frame.size.height, -1, 1);
glMatrixMode(GL_MODELVIEW);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_BLEND_SRC);
glEnableClientState(GL_VERTEX_ARRAY);
glDisable(GL_DEPTH_TEST);
// Set the colour to use when clearing the screen with glClear
glClearColor(51.0/255.0,135.0/255.0,21.0/255.0, 1.0f);
glBlendFunc(GL_ONE,GL_ONE_MINUS_SRC_ALPHA);
This is not my strongest area of knowledge, so let me know if I've missed something and I can get you more info if needed. If anyone has an "a ha" or a similar experience, I'd appreciate some tips in the right direction.
Thanks
Short answer: start using GLKView instead of EAGLView.
First, a good way of getting to know the best practices for setting up e.g. an OpenGL context on the most recent version of iOS is to create a new project using the "OpenGL Game" template and use it for reference.
The most significant difference is the GLKView (introduced in iOS 5.0), which greatly simplifies things for you. It will take care of most of the things you now do manually, including setting up the viewport.
Start by replacing use of the EAGLView with the GLKView (make sure to reference GLKit.framework in your project).
Remove the call to the setFramebuffer method. GLKit takes care of this for you.
Remove the [self.view setTransform:] call. If your app is full-screen OpenGL, there is no need to use view transforms. (And if it's not, a transform is most likely still not needed.)
Set the frame of the view to the bounds of the screen (e.g. by letting it autoresize). You can probably do this in the XIB.
Make sure to call [EAGLContext setCurrentContext:context] somewhere in your viewDidLoad.
That should more or less be it. Also make sure to set the context property of the GLKView to the OpenGL context, just as for the EAGLView.
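Roughly, the resulting viewDidLoad could look like this sketch (assuming the view in the XIB is a GLKView and sticking with the ES 1 API from the question):

#import <GLKit/GLKit.h>

- (void)viewDidLoad
{
    [super viewDidLoad];

    // OpenGL ES 1 context, matching the existing renderer.
    EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];

    // GLKView manages the framebuffer, renderbuffer and viewport itself.
    GLKView *view = (GLKView *)self.view;
    view.context = context;

    [EAGLContext setCurrentContext:context];
}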
I suggest ensuring that your launch images are up to date.
OpenGL is definitely not my area of expertise, but I had a similar issue when upgrading a Flash game to iOS 6. I did not supply an appropriate launch image for the taller retina display of the iPhone 5 (Default-568h@2x.png), and my app was run in 'compatibility mode' with margins at the top and bottom of the screen. Admittedly, you don't quite have these margins, but it's possible that it's messing with how big your app thinks its screen is. Might be worth checking out.
Why is your DefaultViewScale = 1.2 on an iPad? If the app is full-screen, it shouldn't need scaling at all, since the view is already 1024x768. Are you rescaling something there?
In my OpenGL apps I just have an EAGLView that is always full-screen and then read the sizes from framebufferWidth/framebufferHeight.
If you have a UIViewController with the HUD elements correctly set up, you shouldn't need any [self.view setTransform:...] at all. I have the feeling you're making life more complicated for yourself than it needs to be!
Just add the EAGLView with "UIViewAutoresizingFlexibleHeight | UIViewAutoresizingFlexibleWidth" as the lowest subview of your view controller's main view, and set up the rotation code correctly (keep in mind that calls like shouldAutorotateToInterfaceOrientation: are deprecated as of iOS 6).
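As a rough sketch of that setup (eaglView here is a placeholder for your EAGLView instance):

// Let the GL view track its parent's size automatically.
eaglView.autoresizingMask = UIViewAutoresizingFlexibleWidth |
                            UIViewAutoresizingFlexibleHeight;

// Keep it behind the HUD elements as the lowest subview.
[self.view insertSubview:eaglView atIndex:0];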
It looks like the transformation you're applying comes after setting the frame of the view, and may therefore change the bounds. I would suggest breaking in your draw method and checking the bounds of both the view and its layer.
Remember that the frame is set from the perspective of the parent, while the bounds are in local coordinates. From UIView.h: "do not use frame if view is transformed since it will not correctly reflect the actual location of the view. use bounds + center instead."
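For example, a quick check in the draw method might look like this (a minimal sketch):

// Once a transform is set, frame is unreliable; compare it with bounds.
NSLog(@"frame: %@, bounds: %@, layer bounds: %@",
      NSStringFromCGRect(self.view.frame),
      NSStringFromCGRect(self.view.bounds),
      NSStringFromCGRect(self.view.layer.bounds));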
In the walkthrough for the BlackBerry 10 SDK using OpenGL ES, two commands are used, namely:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
and later:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
I don't understand what these are used for when initializing the viewport. If I take those lines out, the program still runs perfectly and nothing changes.
I see it's got to do with setting up a matrix, but I'm not sure I understand which matrix, as this is only during initialization, before any sort of rendering.
Called in an initialization routine, those do nothing. The default value of both matrices is the identity, so it's just setting them to the value they already have.
As to why it is there, I guess that some people just like to explicitly set up their context so they know for sure what the current value is; maybe it's easier to remember, or they don't trust the context to have the right default value. I don't know.
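They do matter once rendering starts, though. In a typical per-frame draw (sketched below with made-up projection values), glLoadIdentity is what keeps transforms from accumulating across frames:

void draw(void)
{
    /* Reset the projection matrix before setting a new one. */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrthof(0.0f, 640.0f, 0.0f, 480.0f, -1.0f, 1.0f); /* example values */

    /* Reset the modelview matrix; without glLoadIdentity the
       glTranslatef below would accumulate every frame. */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(100.0f, 100.0f, 0.0f);

    /* ... draw geometry here ... */
}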
(Using OpenGL, GLUT, GLU, and C)
I am trying to create a 3D game in C, and I have the camera movement, collision detection, and all of the main stuff ready; however, I have failed at the first hurdle. To create my rectangles I am using
glutSolidCube (2.0);
And I know about transformations, scaling, and rotations; however, I am looking for how to place a cube in a precise location. Say I had a 3D space, with XYZ. Say I had the camera at 5,5,20, looking towards 0,0,0 (so at an angle), and wanted to place a cube at 5,2,10, and then another at -5,-2,20. How would I use these absolute positions? Also, how would I use absolute sizes: say I wanted the one at -5,-2,20 to be 20,5,10 in size. How would I do this in OpenGL?
You'll have to use the functions:
glTranslatef()
glRotatef()
glScalef()
Additionally, also learn these:
glPushMatrix()
glPopMatrix()
Read the OpenGL reference for details.
First, forget about glutSolidCube. GLUT is not a part of OpenGL; it's just a small convenience library for it.
You must understand that OpenGL only deals with points, lines, and triangles. It doesn't maintain a scene; it merely draws points, lines, and triangles, each on its own, without any notion of topology. Also, OpenGL should not be confused with a math library. The functions glTranslate, glRotate, glScale and so on are pure legacy and have been removed from contemporary OpenGL versions.
That being said...
Say I had the camera at 5,5,20, looking towards 0,0,0 (So at an angle) and wanted to place a Cube at 5,2,10, and then another at -5,-2,20. How would I use these absolute positions? Also, how would I use absolute sizes, so say I wanted the one at -5,-2,20 to be 20,5,10 in size. How would I do this in OpenGL?
I'll go along with what you already know (which means old OpenGL-1.1 and GLUT):
void draw()
{
    /* Viewport and projection really should be set in the
       drawing handler; they don't belong in the reshape handler. */
    glViewport(...);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    your_projection();

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    /* Camera at (5, 5, 20), looking towards the origin. */
    gluLookAt(5, 5, 20, 0, 0, 0, 0, 1, 0);

    /* Cube at (5, 2, 10). */
    glPushMatrix();
    glTranslatef(5, 2, 10);
    draw_cube();
    glPopMatrix();

    /* Cube at (-5, -2, 20). */
    glPushMatrix();
    glTranslatef(-5, -2, 20);
    draw_cube();
    glPopMatrix();

    /* The same cube at (-5, -2, 20), scaled to 20x5x10
       (assuming draw_cube() draws a unit cube). */
    glPushMatrix();
    glTranslatef(-5, -2, 20);
    glScalef(20, 5, 10);
    draw_cube();
    glPopMatrix();
}
I've been trying to get a HUD texture to display for a simulator for a while now, without success.
First I bind the texture like this:
glGenTextures(1, &hudTexObj);
gHud = getPPM("textures/purplenebula/hud.ppm", &n, &m, &s);
glBindTexture(GL_TEXTURE_2D, hudTexObj);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
//glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, n, m, 0, GL_RGB, GL_UNSIGNED_INT, gHud);
And then I attempt to map it to a quad, which results in the whole quad being a single brown color, when I want it to use all the texels. Here's how I map it:
glBindTexture(GL_TEXTURE_2D, hudTexObj);
glBegin(GL_QUADS);
    glTexCoord2f(0.0, 0.0); glVertex2f(0, 0);
    glTexCoord2f(0.0, 1.0); glVertex2f(0, m);
    glTexCoord2f(1.0, 1.0); glVertex2f(n, m);
    glTexCoord2f(1.0, 0.0); glVertex2f(n, 0);
glEnd();
The weird thing is that I've been able to get the exact code above to display the texture in a program of its own, yet when I put it into my main program it fails. Could it have to do with the texture matrix? I'm dumbfounded at this point.
Stupidly, I had enabled automatic texture coordinate generation far away in another part of the code. So if you see a single texel's color covering the whole image, that is the likely cause.
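For anyone hitting the same thing, the fix is along these lines (fixed-function GL, disabling texgen before drawing the textured quad):

/* Automatic texture coordinate generation overrides glTexCoord2f,
   so disable it for the HUD quad. */
glDisable(GL_TEXTURE_GEN_S);
glDisable(GL_TEXTURE_GEN_T);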