I'm developing a game using Box2D. The plist file is set to support just the portrait orientation, and the game starts in portrait mode fine. I'm subclassing CCLayer. In order to switch views or scenes I use the code below.
[[CCDirector sharedDirector] pushScene:scene];
So all I want is to change orientation when loading the views/scenes.
1st View/Scene -> Portrait Mode
2nd View/Scene -> Landscape Mode
3rd View/Scene -> Portrait Mode
I've tried a few sources, but they weren't much help. Please help me with this; any suggestion would be appreciated. Thank you.
It seems to me that you want to rotate the entire scene by 90 degrees (minus or plus) to switch your views between portrait and landscape. One simple way to do this, without reworking every child node, is to create a very small layer that acts as a pivot point and holds all your views at the center of the screen:
Create a very small CCLayer that will act as a viewport; let's call it viewportLayer.
Set the position of viewportLayer to the center of the screen.
Add the 1st, 2nd, and 3rd view scenes as children of viewportLayer.
Once you've done the above, whenever you need portrait or landscape mode, all you have to do is rotate viewportLayer by 90 degrees (minus or plus).
As an additional bonus, zooming in and out becomes easy too: to zoom, all you have to do is scale viewportLayer, without worrying about the children. Collision calculations will still work nicely, because they are most likely done in the child layers and are therefore unaffected by the parent's transformations. A rough sketch of the idea follows.
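A minimal Objective-C sketch of the idea, assuming cocos2d 2.x property names; firstViewLayer stands in for one of your own layers:

CGSize winSize = [[CCDirector sharedDirector] winSize];

// A tiny layer that acts purely as a pivot; everything else hangs off it.
CCLayer *viewportLayer = [CCLayer node];
viewportLayer.ignoreAnchorPointForPosition = NO;  // let the anchor act as the pivot
viewportLayer.anchorPoint = ccp(0.5f, 0.5f);
viewportLayer.position = ccp(winSize.width / 2.0f, winSize.height / 2.0f);
[self addChild:viewportLayer];

// The views become children of the pivot layer.
[viewportLayer addChild:firstViewLayer];

// Portrait -> landscape: rotate the pivot by 90 degrees (minus or plus).
viewportLayer.rotation = 90.0f;

// Zooming is just a scale on the same pivot; child-space math is untouched.
viewportLayer.scale = 1.5f;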
What is the correct way to zoom in and out of a scene in SceneKit?
When I enable the standard camera control in a scene and pinch in and out, the scene gets bigger and smaller. What is that pinch really doing?
Is it changing the scale of the whole scene? Is it moving the camera closer?
I want to implement the same effect but programmatically.
What should I do to obtain the same effect?
When you pinch, it's the field of view (the xFov and yFov properties) of your camera that's changed. Changing the field of view is not the best way to zoom, because it can dramatically change the perspective.
Moving the camera closer to your object is a good solution.
Also note that the "free camera" behavior is suitable for 3D viewers (such as Preview.app) but will rapidly become frustrating in any other app. At this point you might want to implement your own camera controller.
At any given point the camera has a position in space and a rotation about each of its own axes relative to the world axes. To zoom in and out, you have to move the camera in the +z/-z direction.
Along the camera's own Z/-Z axis, that is.
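For instance, here is a hedged Objective-C sketch of dollying along the camera's own z-axis; scnView and the step size are illustrative, not from the answer:

// Converting the point (0, 0, -step) from the camera's local space into its
// parent's space yields the camera's position moved `step` units forward.
SCNNode *cameraNode = scnView.pointOfView;
CGFloat step = 0.5; // zoom in; use a negative value to zoom out
cameraNode.position = [cameraNode.parentNode convertPosition:SCNVector3Make(0, 0, -step)
                                                    fromNode:cameraNode];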
For those on OSX, I used this in my SCNView subclass:
override func scrollWheel(theEvent: NSEvent) {
    // Narrow or widen the field of view by the scroll amount.
    let cam = pointOfView!.camera
    cam!.xFov = cam!.xFov - Double(theEvent.deltaY)
    cam!.yFov = cam!.yFov - Double(theEvent.deltaY)
}
There are two (minor) problems that could be addressed with a little extra code. One is that the values can go negative, at which point the image is flipped inside-out. The other is that mouse acceleration can cause the zoom level to go too fast if you really spin the wheel. Limits on both of these would be a good idea, but in my app the behaviour was fine as it is above.
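If you do want those limits, a rough Objective-C variant of the same subclass method might look like this; the damping factor and the bounds are arbitrary choices, not from the original answer:

- (void)scrollWheel:(NSEvent *)event {
    SCNCamera *cam = self.pointOfView.camera;
    // Damp the wheel delta so mouse acceleration doesn't zoom too fast.
    double fov = cam.yFov - event.deltaY * 0.5;
    // Clamp so the field of view can never go negative and flip the image.
    cam.yFov = MAX(1.0, MIN(120.0, fov));
    cam.xFov = 0; // zero lets SceneKit derive xFov from yFov and the aspect ratio
}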
I am currently creating a Win 8.1 universal app. It should have both landscape and portrait views, but the only difference between these two states is that some elements just "turn" to the proper direction, while the main grid does not rotate.
I don't have enough reputation, so here is a link to a picture of my notion.
Is there any way to disable the state-change rotation for only some elements, or do I have to detect the screen orientation and rotate the desired elements myself?
This is a rather simple issue, and you should implement your second method:
Identify the elements that should not rotate.
Rotate them in the opposite direction to the screen rotation.
I have an app that can be rotated, so I need to deal with portrait and landscape orientations. Additionally, users will be allowed to use pinch gestures to change the scale of views. Here is the basic hierarchy of the views.
mainView is a subview of self.view (from the context of the main view controller). It is a UIImageView, although the image part of it is relatively unimportant. In any case, this is the view within which the rest of the views in this discussion are placed as subviews.
The first is what I call the board. It is the view on which items are assembled by the user. These items are themselves image views.
Additionally, there are what I call palettes. These are simply views that can be resized and scaled by the user. The image views just mentioned can be dragged from one palette to another or to the board. The palettes can be thought of as work space for the user. When they are finished with their work, they place their assembly onto the board.
So far, I've been working with the app where the board is part of Auto Layout but the palettes are created programmatically as needed. This is good because when the user rotates the device, Auto Layout automatically places the board appropriately. At least it did until I wanted to add pinch scaling to it.
The board has the following constraints set on it in Interface Builder:
Leading, top, trailing, and bottom all set to superview default.
When the user scales the view, the result is that it sort of sticks to the upper-left corner of the screen. I'd rather have it retain the center.
I tried changing this programmatically by adding the following code to the pinch gesture recognizer for this view:
if (self.pinchView.tag == TAGBOARD) {
    [NSLayoutConstraint constraintWithItem:self.pinchView
                                 attribute:NSLayoutAttributeCenterX
                                 relatedBy:NSLayoutRelationEqual
                                    toItem:self.mainView
                                 attribute:NSLayoutAttributeCenterX
                                multiplier:1.0
                                  constant:0];
}
but this seemed to do nothing. I'm guessing it's because it conflicts with the IB constraints. Is there something else I can do to make this work with autolayout? Or should I just do it all programmatically like I do with the other views?
In this code example, self.pinchView is the view that the pinch gesture is applied to. For the sake of this discussion, it is what I've called the board. The self.mainView view is its superview.
The part we're not seeing in the code is something like
[self.mainView addConstraint:yourNewConstraint];
(A constraint that relates two views has to be installed on their common ancestor, here mainView; adding it to pinchView itself would raise an exception.)
I can see where you create the constraint, but not where you add it to the view. If you want your constraint to win, you'll need to remove the conflicting constraints or make sure the new constraint has a higher priority.
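For example, a sketch that installs the question's constraint on the superview; the priority value is a suggestion, not from the answer:

NSLayoutConstraint *centerX =
    [NSLayoutConstraint constraintWithItem:self.pinchView
                                 attribute:NSLayoutAttributeCenterX
                                 relatedBy:NSLayoutRelationEqual
                                    toItem:self.mainView
                                 attribute:NSLayoutAttributeCenterX
                                multiplier:1.0
                                  constant:0];
// Below required, so it yields to the IB constraints instead of raising a conflict.
centerX.priority = UILayoutPriorityRequired - 1;
[self.mainView addConstraint:centerX]; // install on the superview, not pinchView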
If your view should stay centered, try adjusting the constraints in the storyboard: pin the width and height to their current values, then align the view horizontally in the center. That should satisfy Auto Layout and replicate what you're trying to add. Then, in your pinch recognizer, you can change the constants of the width and height constraints. Be sure to drag your width and height constraints into your controller to create outlets so that you can adjust them during the pinch gesture, as in the sketch below.
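A minimal sketch of that pinch handler, assuming the width and height constraints have been wired to outlets; the outlet names are illustrative, and TAGBOARD comes from the question:

- (void)handlePinch:(UIPinchGestureRecognizer *)gesture {
    if (self.pinchView.tag == TAGBOARD) {
        // Resize through the constraint constants so Auto Layout stays in charge.
        self.boardWidthConstraint.constant *= gesture.scale;
        self.boardHeightConstraint.constant *= gesture.scale;
        // Reset so the next callback delivers an incremental scale factor.
        gesture.scale = 1.0;
        [self.mainView layoutIfNeeded];
    }
}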
I want to drag an image onto a line using the mouse, and when the image is close to the line, have it automatically move onto the line, like some "floor planner" programs: you create a wall and drag a door toward it, and when the door is close to the wall, the door automatically snaps onto the wall.
Can OpenGL do it?
If it can, can anyone tell me how? If it can't, can anyone tell me how I can do it?
Show me an example.
OpenGL is a rendering API; its purpose is to generate rasterized images based on descriptions provided to it by an application.
It knows nothing about user input, and even less about the application's "domain objects" such as doors, walls, and so on. All it deals with are abstract coordinates and the matrices that describe the transforms and projections taking those 3D coordinates into 2D for rasterization, as well as shading for surfaces and so on.
So, it's up to you to implement that, so that the coordinates you eventually pass to OpenGL end up being what you want them to be.
Snapping is typically a combination of measuring the distance to some guide object and then quantizing the input coordinates to coincide with the guide.
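As an illustration, here is a small application-side snapping routine in plain C; the names and the threshold are made up, and nothing in it touches OpenGL:

typedef struct { float x, y; } Vec2;

// Closest point to p on the segment from a to b.
static Vec2 closestPointOnSegment(Vec2 p, Vec2 a, Vec2 b) {
    float abx = b.x - a.x, aby = b.y - a.y;
    float len2 = abx * abx + aby * aby;
    if (len2 == 0.0f) return a; // degenerate segment
    float t = ((p.x - a.x) * abx + (p.y - a.y) * aby) / len2;
    if (t < 0.0f) t = 0.0f; else if (t > 1.0f) t = 1.0f;
    return (Vec2){ a.x + t * abx, a.y + t * aby };
}

// If the dragged point is within `threshold` of the wall segment,
// quantize it onto the wall; otherwise leave it alone.
Vec2 snapToWall(Vec2 dragged, Vec2 wallA, Vec2 wallB, float threshold) {
    Vec2 c = closestPointOnSegment(dragged, wallA, wallB);
    float dx = dragged.x - c.x, dy = dragged.y - c.y;
    if (dx * dx + dy * dy <= threshold * threshold) return c;
    return dragged;
}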
I'm currently using the Silverlight Map control for WP7 and am trying to visualize driving directions on the map. In order to highlight the route, I am using a MapLayer with a MapPolyline. The problem is that even with CacheMode set to BitmapCache, the MapPolyline area gets redrawn whenever the user pans or zooms the map. I've used other controls such as Ellipses or Pushpins, and with BitmapCache on, none of them redraw or take the same performance hit as MapPolyline.
Here's a quick example:
<maps:Map ZoomLevel="3">
    <maps:MapPolyline Name="line" Stroke="Red" StrokeThickness="9">
        <maps:MapPolyline.CacheMode>
            <BitmapCache/>
        </maps:MapPolyline.CacheMode>
        <maps:MapPolyline.Locations>
            <maps:LocationCollection>
                <geo:GeoCoordinate Latitude="33" Longitude="33"/>
                <geo:GeoCoordinate Latitude="36" Longitude="33"/>
                <geo:GeoCoordinate Latitude="33" Longitude="36"/>
            </maps:LocationCollection>
        </maps:MapPolyline.Locations>
    </maps:MapPolyline>
</maps:Map>
If you set App.Current.Host.Settings.EnableRedrawRegions = true; you can see the redrawing that occurs. The performance is particularly bad when you have a larger polyline and zoom in closer.
Is there anything that can be done to help? The native Bing Maps app has pretty smooth route drawing, so I would think there should be a way to solve this.
Thanks!
Can you explain a bit more what the problem is?
I've got an app, RunSat, in which I draw polylines with several hundred points (e.g. I just looked at a 3-hour bike ride) and this draws fine, including during zoom operations.
I don't understand the problem, even using the sample code above. To help: are you testing on a phone or in the emulator?
As for CacheMode and BitmapCache, I'm really not sure about using these settings for the map - I don't use them in RunSat if that helps - I just leave the phone alone to work out its own GPU drawing.