I'm finally upgrading a very old universal game app to play nicely with newer OS versions and device sizes. I've got everything updated and targeting iOS 6.1, but when I run it on the iPhone 5, my actual in-game view, which is rendered with OpenGL into an EAGLView, is positioned very strangely and shows a lot of clipping (see screenshot).
On the "normal" devices that were around when we first created this, everything appears as expected.
In my view controller, I basically load a nib with the right size set for the different devices: the iPad and non-4" devices get a 1024x768 view and the 4" device gets a new 1136x640 view.
Then, in my viewDidLoad, I set self.view.contentScaleFactor to [[UIScreen mainScreen] scale], and then I do some view sizing like so (roughly):
if (iPad) {
    [self.view setFrame:CGRectMake(0, 0, 1024, 768)];
    [self.view setCenter:CGPointMake(384, 512)];
    DefaultViewScale = 1.2f;
} else if (WideScreen) {
    [self.view setFrame:CGRectMake(0, 0, 568, 320)];
    [self.view setCenter:CGPointMake(160, 293)];
    DefaultViewScale = 1.0f;
} else {
    [self.view setFrame:CGRectMake(0, 0, 480, 320)];
    [self.view setCenter:CGPointMake(160, 240)];
    DefaultViewScale = 1.0f;
}
Lastly, I apply a transform to scale the view by the factor defined above (which I've just hand-tweaked) and then rotate it, since the app is landscape-left only.
[self.view setTransform:CGAffineTransformConcat(
    CGAffineTransformMakeScale(DefaultViewScale, DefaultViewScale),
    CGAffineTransformMakeRotation(-M_PI_2))];
I then initialize a new EAGLContext (OpenGL ES 1):
[(EAGLView *)self.view setContext:context];
[(EAGLView *)self.view setFramebuffer];
setFramebuffer is mostly:
[EAGLContext setCurrentContext:context];
// Create default framebuffer object.
glGenFramebuffers(1, &defaultFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
glGenRenderbuffers(1, &colorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &framebufferWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &framebufferHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer);
glViewport(0, 0, framebufferWidth, framebufferHeight);
There's some more boilerplate EAGLView code, but note that I'm setting the glViewport to whatever width and height GL reports, which is grabbed from the UIView's layer size.
And finally it sets up the projection matrix:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0, self.view.frame.size.width , 0, self.view.frame.size.height, -1, 1);
glMatrixMode(GL_MODELVIEW);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_BLEND_SRC);
glEnableClientState(GL_VERTEX_ARRAY);
glDisable(GL_DEPTH_TEST);
// Set the colour to use when clearing the screen with glClear
glClearColor(51.0/255.0,135.0/255.0,21.0/255.0, 1.0f);
glBlendFunc(GL_ONE,GL_ONE_MINUS_SRC_ALPHA);
This is not my strongest area of knowledge, so let me know if I've missed something, and I can get you more info if needed. If anyone has had an "a-ha" moment or a similar experience, I'd appreciate some tips in the right direction.
Thanks
Short answer: start using GLKView instead of EAGLView.
First, a good way of getting to know the best practice for setting up e.g. an OpenGL context in the most recent version of iOS is to create a new project using the "OpenGL Game" template and look at it for reference.
The most significant difference is the GLKView (introduced in iOS 5.0), which greatly simplifies things for you. It will take care of most of the things you now do manually, including setting up the viewport.
Start by replacing use of the EAGLView with the GLKView (make sure to reference GLKit.framework in your project).
Remove the call to the setFramebuffer method. GLKit takes care of this for you.
Remove the [self.view setTransform:] call. If your app is full-screen OpenGL, there is no need to use view transforms. (And if not, it is likely still not needed.)
Set the frame of the view to the bounds of the screen (e.g. by letting it autoresize). You can probably do this in the XIB.
Make sure to call [EAGLContext setCurrentContext:context] somewhere in your viewDidLoad.
That should more or less be it. Also make sure to set the context property of the GLKView to the OpenGL context, just as for the EAGLView.
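For reference, here is a minimal sketch of what that setup might look like, assuming the view loaded from your XIB is now a GLKView (the names and the color format here are just illustrative):

- (void)viewDidLoad
{
    [super viewDidLoad];

    EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1];
    GLKView *glkView = (GLKView *)self.view;
    glkView.context = context;
    glkView.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;

    [EAGLContext setCurrentContext:context];
    // No setFramebuffer or glViewport calls needed here: GLKView creates the
    // framebuffer and sets the viewport to the drawable size before each draw.
    // Drawing itself goes in glkView:drawInRect: (or a GLKViewController's draw method).
}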
I suggest ensuring that your launch images are up-to-date.
OpenGL is definitely not my area of expertise, but I had a similar issue when upgrading a Flash game to iOS 6. I did not supply an appropriate launch image for the new iPhone's 4-inch retina display (Default-568h@2x.png), so my app was run in 'compatibility mode' with margins at the top and bottom of the screen. Admittedly, you don't quite have those margins, but it's possible that it's messing with how big your app thinks its screen is. Might be worth checking out.
Why is your "DefaultViewScale=1.2" on an iPad? If the app is fullscreen, it shouldn't be scaled anymore since it's 1024x768. Are you rescaling something there?
In my OpenGL apps I just have an EAGLView that is always fullscreen and then read the sizes from framebufferWidth/framebufferHeight.
If you have a UIViewController with the HUD elements correctly set up, you shouldn't need any [self.view setTransform:...] calls. I have the feeling you're making life more complicated for yourself than it should be!
Just add the EAGLView with "UIViewAutoresizingFlexibleHeight | UIViewAutoresizingFlexibleWidth" as the lowest subview of your view controller's main view, and set up the rotation code correctly (keep in mind that the iOS 5-era calls such as shouldAutorotateToInterfaceOrientation: are deprecated as of iOS 6).
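For example, a rough sketch (assuming eaglView is your EAGLView instance and self is the view controller):

// Make the GL view fill the controller's view, track its size on rotation/resizing,
// and keep it underneath any HUD subviews.
eaglView.frame = self.view.bounds;
eaglView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
[self.view insertSubview:eaglView atIndex:0];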
It looks like the transformation is applied after setting the frame of the view, and the frame may therefore no longer reflect what you expect. I would suggest breaking in your draw method and checking the bounds of both the view and its layer.
Remember that the frame is set from the perspective of the parent, while the bounds is in local coordinates. From UIView.h:
do not use frame if view is transformed since it will not correctly
reflect the actual location of the view. use bounds + center instead.
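In other words, a sketch of the kind of change that typically fixes this (assuming the rest of your projection setup stays the same):

// bounds is in the view's own coordinate space and is unaffected by the transform,
// so it is the safer value to build the projection from than frame:
CGSize viewSize = self.view.bounds.size;
glOrthof(0, viewSize.width, 0, viewSize.height, -1, 1);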
I have trouble getting the Map to behave properly when calling ZoomToResolution and PanTo.
I need to be able to zoom to a specific coordinate and center the map.
The only way I got it working is by removing animations:
this.MapControl.ZoomDuration = new TimeSpan(0);
this.MapControl.PanDuration = new TimeSpan(0);
Otherwise if I make call like this:
control.MapControl.ZoomToResolution(ZoomLevel);
control.MapControl.PanTo(MapPoint());
It does one or the other (i.e. pan or zoom, but not both). If (after the animation) I call this code a second time, with the map already zoomed or panned to the needed position/level, it does the second part.
Tried this:
control.MapControl.ZoomToResolution(ZoomLevel, MapPoint());
Same issue; internally it calls the commands above.
So my only workaround right now is to set the zoom/pan duration to 0, and that makes for bad UX when using the mouse.
I also tried something like this:
this.MapControl.ZoomDuration = new TimeSpan(0);
this.MapControl.PanDuration = new TimeSpan(0);
control.MapControl.ZoomToResolution(ZoomLevel);
control.MapControl.PanTo(MapPoint());
this.MapControl.ZoomDuration = new TimeSpan(750);
this.MapControl.PanDuration = new TimeSpan(750);
This seems to work, but then mouse interaction becomes "crazy": scrolling the mouse wheel makes the map jump and zoom to random places.
Is there a known solution?
The problem is that the second operation replaces the first one. You would have to wait for one to complete before starting the next, but that probably doesn't give the effect you want.
Instead, zoom to an extent and you'll get the desired behavior. If you don't have the extent but only a center point and a resolution, you can create one as follows and pass it to MapControl.ZoomTo:
var zoomToExtent = new Envelope(point.X - resolution * MapControl.ActualWidth / 2, point.Y,
                                point.X + resolution * MapControl.ActualWidth / 2, point.Y);
By the way, it's a little confusing in your code that you call your resolution "ZoomLevel". I assume this is a map resolution, and not a level number, right? The Esri map control doesn't deal with service-specific levels; it is agnostic to the data's levels and uses a more generic "units per pixel" resolution value.
I'm trying to mimic a flashlight in OpenGL. Basically I want the spotlight to be in the same position as the camera and point in the same direction the camera is pointing.
Here is my code:
gluLookAt(xAt, yAt, zAt, xLookAt, yLookAt, zLookAt, 0, 1, 0);

GLfloat light_pos[] = {xAt, yAt, zAt, 1.0f};
glLightfv(GL_LIGHT0, GL_POSITION, light_pos);

GLfloat spotDir[] = {xLookAt - xAt, yLookAt - yAt, zLookAt - zAt};
glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, spotDir);
I've made calls to initialize the light and I've calculated the surface normals of all my objects.
Now the above code kind of works: when the camera is moved, the spotlight follows. However, when I move the camera closer to an object, the object gets less light shone on it, and when I move the camera further away, the object gets more light.
I want the opposite to happen: the further away the camera is from an object, the less light should be shone on it. How is this done? Or is this not the behaviour of an OpenGL spotlight?
So I looked into this, and it turns out that modifying the attenuation of the light yields the correct results. Hope this helps anyone else who stumbles across this.
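For reference, a sketch of the kind of attenuation setup meant here (the constants are just example values to tune for your scene):

// Fixed-function distance attenuation: 1 / (kc + kl*d + kq*d^2),
// where d is the distance from the light (here: the camera) to the vertex.
glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION, 1.0f);
glLightf(GL_LIGHT0, GL_LINEAR_ATTENUATION, 0.05f);
glLightf(GL_LIGHT0, GL_QUADRATIC_ATTENUATION, 0.01f);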
Our app has a rotating map view which aligns with the compass heading. We counter-rotate the annotations so that their callouts remain horizontal for reading. This works fine on iOS 5 devices but is broken on iOS 6 (the problem is seen both with the same binary as used on the iOS 5 device and with a binary built against the iOS 6 SDK). The annotations initially rotate to the correct horizontal position and then, a short time later, revert to the uncorrected rotation. We cannot see any events that are causing this. This is the code snippet we are using in - (MKAnnotationView *)mapView:(MKMapView *)theMapView viewForAnnotation:(id <MKAnnotation>)annotation:
CATransform3D transformZ = CATransform3DIdentity;
transformZ = CATransform3DRotate(transformZ, _rotationZ, 0, 0, 1);
annotation.myView.layer.transform = transformZ;
Has anyone else seen this, and does anyone have suggestions on how to fix it on iOS 6?
I had an identical problem, so my workaround may work for you. I've also submitted a bug to Apple on it. For me, every time the map got panned by the user, the annotations would get "unrotated".
In my code I set the rotations using CGAffineTransformMakeRotation, and I don't set them in viewForAnnotation but whenever the user's location gets updated. So that is a bit different from you.
My workaround was to add an additional minor rotation at the bottom of my viewForAnnotation method.
if (is6orMore) {
    [annView setTransform:CGAffineTransformMakeRotation(.001)]; // iOS 6 BUG WORKAROUND !!!!!!!
}
So for you, I'm not sure if that works, since you are rotating differently and doing it in viewForAnnotation. But give it a try.
Took me forever to find and I just happened across this fix.
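Untested, but adapted to the CATransform3D rotation in your question, the same nudge might look something like this (is6orMore stands for whatever iOS 6 check you use; _rotationZ and myView are from your snippet):

CATransform3D transformZ = CATransform3DIdentity;
transformZ = CATransform3DRotate(transformZ, _rotationZ, 0, 0, 1);
if (is6orMore) {
    // iOS 6 workaround: add a tiny extra rotation, mirroring the
    // CGAffineTransformMakeRotation(.001) trick above.
    transformZ = CATransform3DRotate(transformZ, 0.001, 0, 0, 1);
}
annotation.myView.layer.transform = transformZ;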
In the walkthrough for the BlackBerry 10 SDK using OpenGL ES, it uses two commands, namely:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
and later:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
I don't understand what these are used for when initializing the viewport. If I take those lines out, the program still runs perfectly and nothing changes.
I see it's got to do with the matrices used for rendering, but I'm not sure I understand which matrix, as this is only during initialization, before any sort of rendering.
Called in an initialization routine, those do nothing. The default value of both matrices is the identity, so it's just setting them to the value they already have.
As to why it is there, I guess some people just like to explicitly set up their context so they know for sure what the current value is; maybe it's easier to remember, or they don't trust the context to have the right default value, I don't know.
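Those calls only start to matter once you load something other than the identity. A typical (hypothetical) setup might look like this, where surfaceWidth and surfaceHeight are placeholders for your actual surface size:

// Select the projection stack and replace it with an orthographic projection.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0.0f, surfaceWidth, 0.0f, surfaceHeight, -1.0f, 1.0f);

// Switch back to the model-view stack so later transforms affect geometry,
// not the projection.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();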
I've been trying to get a HUD texture to display for a simulator for a while now, without success.
First I bind the texture like this:
glGenTextures(1, &hudTexObj);
gHud = getPPM("textures/purplenebula/hud.ppm", &n, &m, &s);
glBindTexture(GL_TEXTURE_2D, hudTexObj);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
//glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, n, m, 0, GL_RGB, GL_UNSIGNED_INT, gHud);
And then I attempt to map it onto a quad, which results in the whole quad being a single brown color, whereas I want it to use all the texels. Here's how I map it:
glBindTexture(GL_TEXTURE_2D, hudTexObj);
glBegin(GL_QUADS);
    glTexCoord2f(0.0, 0.0); glVertex2f(0, 0);
    glTexCoord2f(0.0, 1.0); glVertex2f(0, m);
    glTexCoord2f(1.0, 1.0); glVertex2f(n, m);
    glTexCoord2f(1.0, 0.0); glVertex2f(n, 0);
glEnd();
The weird thing is that I've been able to get the exact above code to display the texture in a program of its own, yet when I put it into my main program it fails. Could it have to do with the texture matrix? I'm dumbfounded at this point.
Stupidly, I had enabled automatic texture coordinate generation far away in another part of the code. So if you see one texel's color covering a whole image, that is the likely cause.
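For anyone hitting the same thing, these are the offending state bits; explicit glTexCoord2f calls are overridden while they are enabled:

// Automatic texture coordinate generation overrides glTexCoord2f, which can
// leave every vertex sampling the same spot. Disable it before drawing the HUD:
glDisable(GL_TEXTURE_GEN_S);
glDisable(GL_TEXTURE_GEN_T);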