MKMapView setRegion not exact - iOS 6

Starting Situation
iOS 6, Apple Maps. I have two annotations on an MKMapView. The coordinates of the two annotations share the same latitude but have different longitudes.
What I want
I want to zoom the map so that one annotation sits exactly on the left edge of the map view's frame and the other exactly on the right edge. For that I tried MKMapView's setRegion: and setVisibleMapRect: methods.
The Problem
The problem is that those methods seem to snap to certain zoom levels and therefore don't set the region as exactly as I need. I saw a lot of questions on Stack Overflow pointing out that this behavior is normal on iOS 5 and below. But since Apple Maps uses vector graphics, the view is not bound to fixed zoom levels to display imagery at a proper resolution.
Tested in...
I tested on iPhone 4, 4S, and 5, iPad 3, and iPad mini (all with iOS 6.1), and in the Simulator on iOS 6.1.
My question
So why do setRegion: and setVisibleMapRect: snap to certain zoom levels instead of adjusting the region to the exact values I pass?
Sample Code
In viewDidAppear I define four different coordinates as ivars and set up the map view:
// define coords
_coord1 = CLLocationCoordinate2DMake(46.0, 13.0);
_coord2 = CLLocationCoordinate2DMake(46.0, 13.1);
_coord3 = CLLocationCoordinate2DMake(46.0, 13.2);
_coord4 = CLLocationCoordinate2DMake(46.0, 13.3);
// define frame for map in landscape mode
CGRect mainScreen = [[UIScreen mainScreen] bounds];
CGRect newSRect = mainScreen;
newSRect.size.width = mainScreen.size.height;
newSRect.size.height = mainScreen.size.width;
// setup map view
customMapView = [[MKMapView alloc] initWithFrame:newSRect];
[customMapView setDelegate:self];
[customMapView setShowsUserLocation:NO];
[self.view addSubview:customMapView];
Then I add three buttons, which all trigger the same method, addAnnotationsWithCoord1:coord2:. The first button passes _coord1 and _coord2, the second passes _coord1 and _coord3, and the third passes _coord1 and _coord4. The method looks like this (TapAnnotation is my class conforming to the MKAnnotation protocol):
-(void)addAnnotationsWithCoord1:(CLLocationCoordinate2D)coord1 coord2:(CLLocationCoordinate2D)coord2{
    // Make 2 new annotations with the passed coordinates
    TapAnnotation *annot1 = [[TapAnnotation alloc] initWithNumber:0 coordinate:coord1];
    TapAnnotation *annot2 = [[TapAnnotation alloc] initWithNumber:0 coordinate:coord2];
    // Remove all existing annotations
    for(id<MKAnnotation> annotation in customMapView.annotations){
        [customMapView removeAnnotation:annotation];
    }
    // Add annotations to map
    [customMapView addAnnotation:annot1];
    [customMapView addAnnotation:annot2];
}
After that I determine the southwest and northeast points that define the rect containing my two annotations.
// get northEast and southWest
CLLocationCoordinate2D northEast;
CLLocationCoordinate2D southWest;
northEast.latitude = MAX(coord1.latitude, coord2.latitude);
northEast.longitude = MAX(coord1.longitude, coord2.longitude);
southWest.latitude = MIN(coord1.latitude, coord2.latitude);
southWest.longitude = MIN(coord1.longitude, coord2.longitude);
Then I calculate the center point between the two coordinates and set it as the map's center coordinate (since all coordinates share the same latitude, the following calculation should be correct):
// determine center coordinate and set to map
double centerLon = ((coord1.longitude + coord2.longitude) / 2.0f);
CLLocationCoordinate2D center = CLLocationCoordinate2DMake(southWest.latitude, centerLon);
[customMapView setCenterCoordinate:center animated:NO];
Now I try to set the region of the map so that it fits the way I want:
CLLocation *loc1 = [[CLLocation alloc] initWithLatitude:southWest.latitude longitude:southWest.longitude];
CLLocation *loc2 = [[CLLocation alloc] initWithLatitude:northEast.latitude longitude:northEast.longitude];
CLLocationDistance meterSpanLong = [loc1 distanceFromLocation:loc2];
CLLocationDistance meterSpanLat = 0.1;
MKCoordinateRegion region = MKCoordinateRegionMakeWithDistance(center, meterSpanLat, meterSpanLong);
[customMapView setRegion:region animated:NO];
This does not behave as expected, so I try this:
MKCoordinateSpan span = MKCoordinateSpanMake(0.01, northEast.longitude-southWest.longitude);
MKCoordinateRegion region = MKCoordinateRegionMake(center, span);
[customMapView setRegion:region animated:NO];
This still doesn't behave as expected, so I try setVisibleMapRect:
MKMapPoint westPoint = MKMapPointForCoordinate(southWest);
MKMapPoint eastPoint = MKMapPointForCoordinate(northEast);
MKMapRect mapRect = MKMapRectMake(westPoint.x, westPoint.y,eastPoint.x-westPoint.x,1);
[customMapView setVisibleMapRect:mapRect animated:NO];
And still, it does not behave the way I want. As verification, I calculate the point distance from the left annotation to the left edge of the map view's frame:
// log the distance from the southWest point to the left edge of the map frame
CGPoint tappedWestPoint = [customMapView convertCoordinate:southWest toPointToView:customMapView];
NSLog(@"distance: %f", tappedWestPoint.x);
For _coord1 and _coord2 it shows: 138
For _coord1 and _coord3 it shows: 138
For _coord1 and _coord4 it shows: 65
So why do I get these values? If everything worked as expected, they would all be 0.
Thanks for any help; I've been struggling with this problem for a week now.

Read the docs on setRegion:
When setting this property, the map may adjust the new region value so that it fits the visible area of the map precisely. This is normal and is done to ensure that the value in this property always reflects the visible portion of the map. However, it does mean that if you get the value of this property right after setting it, the returned value may not match the value you set. (You can use the regionThatFits: method to determine the region that will actually be set by the map.)
You will have to figure out the math yourself if you want a precise zoom.
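As the quoted docs note, regionThatFits: will at least tell you what the map is actually going to display. Here is a minimal Swift sketch of that check; mapView, center, northEast, and southWest are assumed stand-ins for the question's variables:

import MapKit

// Assumed stand-ins for the question's ivars; substitute your own values.
let requested = MKCoordinateRegion(
    center: center,
    span: MKCoordinateSpan(latitudeDelta: 0.01,
                           longitudeDelta: northEast.longitude - southWest.longitude))

// regionThatFits(_:) returns the region the map will actually display,
// which exposes the zoom-level snapping before setRegion is ever called.
let fitted = mapView.regionThatFits(requested)
print("requested lon span: \(requested.span.longitudeDelta)")
print("fitted lon span:    \(fitted.span.longitudeDelta)")

Comparing the requested and fitted spans shows how far the snapped zoom is from the one you asked for, which is the starting point for any manual correction.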

Related

How to eliminate grayish ARKit/SceneKit shadow plane?

I've implemented one of the many ways to add a shadow plane to an ARKit and SceneKit scene. It works pretty well and the shadows look fine.
The problem is that most of the time the plane also has a grayish cast to it; in other words, it's not completely transparent. Sometimes the grayish cast goes away, only to reappear a few seconds later. I've tried tweaking just about every SCNNode and SCNMaterial property I can think of, but so far I can't get the gray to reliably go away. Does anyone have any suggestions on how to solve this?
// Make a transparent shadow plane for the Ground.
let shadowPlane = SCNPlane(width: CGFloat(self.width * 2), height: CGFloat(self.depth * 2))
shadowPlane.cornerRadius = 2
let shadowPlaneNode = SCNNode(geometry: shadowPlane)
shadowPlaneNode.name = shadowPlaneNodeName
shadowPlaneNode.eulerAngles.x = -.pi / 2
shadowPlaneNode.castsShadow = false
let material = SCNMaterial()
material.isDoubleSided = false
material.lightingModel = .constant // .shadowOnly does not show any shadows on iOS
material.colorBufferWriteMask = [.alpha]
shadowPlane.materials = [material]
node.addChildNode(shadowPlaneNode)
After more experimentation I found a solution that seems to work well. Setting the material's .lightingModel to .shadowOnly actually works correctly without any gray cast, but only if you set .shadowMode on the shadow-producing directional light to .forward instead of .deferred.
In addition, I found what seems to be a bug in .shadowOnly that causes the plane to render completely black if any light in the scene has .type == .omni or .type == .spot.
Here's the code that's working for me:
let shadowPlane = SCNPlane(width: CGFloat(self.width * 1.5), height: CGFloat(self.depth * 1.5))
let shadowPlaneNode = SCNNode(geometry: shadowPlane)
shadowPlaneNode.name = shadowPlaneNodeName
shadowPlaneNode.eulerAngles.x = -.pi / 2
shadowPlaneNode.castsShadow = false
let material = SCNMaterial()
material.isDoubleSided = false
material.lightingModel = .shadowOnly // Requires SCNLight shadowMode = .forward and no .omni or .spot lights in the scene or material rendered black
shadowPlane.materials = [material]
node.addChildNode(shadowPlaneNode)
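For completeness, here is a sketch of the light setup the answer depends on; the node placement and angle are illustrative assumptions, but shadowMode = .forward is the setting the answer calls out:

// Shadow-casting directional light. With a .shadowOnly material the answer
// reports that shadowMode must stay .forward; .deferred brings the gray
// cast back. The node name and angle here are assumptions.
let light = SCNLight()
light.type = .directional
light.castsShadow = true
light.shadowMode = .forward
let lightNode = SCNNode()
lightNode.light = light
lightNode.eulerAngles.x = -.pi / 3
node.addChildNode(lightNode)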

SceneKit ARKit glowing effect

Hi, I'm trying to get a glowing effect around a node.
I used the SCNNode filters property and set it to an array of CIFilter objects.
It works, but the glow renders only when the node has no other node behind it, which I don't understand. I tried setting the rendering order and the readsFromDepthBuffer option without success. I'm really stuck at this point and would appreciate your input!
Please see the screenshot for an example and the code sample.
func addBloom() -> [CIFilter]? {
    let bloomFilter = CIFilter(name: "CIBloom")!
    bloomFilter.setValue(10.0, forKey: "inputIntensity")
    bloomFilter.setValue(30.0, forKey: "inputRadius")
    return [bloomFilter]
}
Calling this using:
myNode.filters = addBloom()
A final note: I noticed that for CIFilter to work with Metal, antialiasingMode needs to be set to .none:
arSceneView.antialiasingMode = .none
Thanks a lot!
Adrien
Have you tried setting writesToDepthBuffer to false for the nodes to which you aren't applying the filters?
For your information writesToDepthBuffer refers to:
SceneKit's rendering process uses a depth buffer to determine the ordering of rendered surfaces relative to the viewer. The default value of this property is YES, specifying that SceneKit saves depth information for each rendered pixel for use by later rendering passes. Typically, you disable writing to the depth buffer when rendering semitransparent objects, because later stages of the rendering process may require depth information about the opaque objects behind them.
This example seems to be working fine:
/// Generates An SCNPlane & A Red & Green SCNSphere
func generateNodes(){
let planeNode = SCNNode(geometry: SCNPlane(width: 1, height: 0.5))
planeNode.geometry?.firstMaterial?.diffuse.contents = UIColor.black
planeNode.position = SCNVector3(0, 0, -1)
let redSphereNode = SCNNode(geometry: SCNSphere(radius: 0.1))
redSphereNode.geometry?.firstMaterial?.diffuse.contents = UIColor.red
redSphereNode.position = SCNVector3(-0.3, 0, -1)
redSphereNode.filters = addBloom()
let greenSphereNode = SCNNode(geometry: SCNSphere(radius: 0.1))
greenSphereNode.geometry?.firstMaterial?.diffuse.contents = UIColor.green
greenSphereNode.position = SCNVector3(0.3, 0, -1)
greenSphereNode.filters = addBloom()
self.augmentedRealityView.scene.rootNode.addChildNode(planeNode)
self.augmentedRealityView.scene.rootNode.addChildNode(redSphereNode)
self.augmentedRealityView.scene.rootNode.addChildNode(greenSphereNode)
planeNode.geometry?.firstMaterial?.writesToDepthBuffer = false
}
/// Creates An Array Of CIBloom Filters
///
/// - Returns: [CIFilter]?
func addBloom() -> [CIFilter]? {
let bloomFilter = CIFilter(name:"CIBloom")!
bloomFilter.setValue(10.0, forKey: "inputIntensity")
bloomFilter.setValue(30.0, forKey: "inputRadius")
return [bloomFilter]
}
One thing I did notice, however: if I used an image with a transparent background for the contents of the SCNPlane, it didn't work, although with another image it was fine.
Hope it points you in the right direction...

Accessing the number of elements in an array and applying gravity behaviour

I'm having issues getting ALL elements of an array to fall using the gravity behavior. I have managed to get the LAST element in the array to fall, while the remaining elements just stay at the top of the screen during testing. Upon debugging…
I am using UIKit and want to understand this framework thoroughly before moving on to other engines such as SpriteKit and GameplayKit.
func mainGame() {
    let cars = ["car5", "car1", "car6", "car3", "car2", "car4"]
    var random2 = Int(arc4random_uniform(UInt32(cars.count))) + 1
    for i in 1...random2 {
        let image = UIImage(named: cars[i - 1])
        let carView = UIImageView(image: image!)
        carView.frame = CGRect(x: i * 52, y: 0, width: 40, height: 50)
        view.addSubview(carView)

        dynamicAnimator = UIDynamicAnimator(referenceView: self.view)
        gravityBehavior = UIDynamicItemBehavior(items: [carView]) // cars falling
        dynamicAnimator.addBehavior(gravityBehavior)

        collisionBehavior = UICollisionBehavior(items: [carView, mainCar]) // collide
        collisionBehavior.translatesReferenceBoundsIntoBoundary = false
        gravityBehavior.addLinearVelocity(CGPoint(x: 0, y: 200), for: carView)
        dynamicAnimator.addBehavior(collisionBehavior)
    }
    collisionBehavior.addBoundary(withIdentifier: "Barrier" as NSCopying, for: UIBezierPath(rect: mainCar.frame))
    collisionBehavior.removeAllBoundaries()
}
With the game so far the last car in the array falls and the main player car that I control has collision behaviour, which is a big step for me!
You are creating a new UIDynamicAnimator with each iteration of the loop and assigning it to dynamicAnimator. That is why only the last element works: it is the last one assigned to that variable.
To fix it, move that line somewhere it will only be called once:
dynamicAnimator = UIDynamicAnimator(referenceView: self.view)
viewDidLoad is a possible place that should work.
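For example, a minimal sketch, assuming dynamicAnimator is a property of the view controller:

override func viewDidLoad() {
    super.viewDidLoad()
    // Create the animator once; behaviors are attached to it later.
    dynamicAnimator = UIDynamicAnimator(referenceView: view)
}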
UIKit Dynamics works backwards from most similar frameworks: you don't animate the object; you have an animator and attach objects to it. As Clever Error notes, you only want one animator in this case.
The key point is that you don't attach gravity to cars; you attach cars to behaviors (gravity), and then behaviors to the animator. Yes, that's bizarre and backwards.
I haven't tested this, but the correct code would be closer to this:
func mainGame() {
    let cars = ["car5", "car1", "car6", "car3", "car2", "car4"]
    let random2 = Int(arc4random_uniform(UInt32(cars.count))) + 1
    var carViews: [UIImageView] = []

    dynamicAnimator = UIDynamicAnimator(referenceView: self.view)

    // First create all the views
    for i in 1...random2 {
        let image = UIImage(named: cars[i - 1])
        let carView = UIImageView(image: image!)
        carView.frame = CGRect(x: i * 52, y: 0, width: 40, height: 50)
        view.addSubview(carView)
        carViews.append(carView)
    }

    // and then attach those to behaviors:
    gravityBehavior = UIGravityBehavior(items: carViews) // cars falling
    dynamicAnimator.addBehavior(gravityBehavior)

    collisionBehavior = UICollisionBehavior(items: carViews + [mainCar]) // collide
    collisionBehavior.translatesReferenceBoundsIntoBoundary = false
    dynamicAnimator.addBehavior(collisionBehavior)

    collisionBehavior.addBoundary(withIdentifier: "Barrier" as NSCopying, for: UIBezierPath(rect: mainCar.frame))
    collisionBehavior.removeAllBoundaries()

    // You don't need this; it's built into Gravity
    // gravityBehavior.addLinearVelocity(CGPoint(x: 0, y: 200), for: carView)
}
The main way UIKit Dynamics differs from most animation frameworks is that the animated objects don't know they're being animated. You can't ask a car what behaviors it has, because it doesn't have any. A UIDynamicAnimator is basically a timing loop that updates the center and transform of its targets. There's really nothing fancy about it (in contrast to something like Core Animation, which has many fancy things going on). With a little iOS experience, you could probably implement all of UIKit Dynamics by hand with a single GCD queue (it probably doesn't even need that, since it runs everything on main).

Rotating CGPoints; CIVector CGAffineTransform?

I have 4 CGPoints that form an irregular figure. How can I rotate that figure 90-degrees and get the new CGPoints?
FWIW, I was able to "fake" this when the figure is a CGRect by swapping origin.x and origin.y, and width and height. Will I need to do something similar (calculating the distance/direction between points), or is there an affine transform I can apply to a CIVector?
Hints and/or pointers to tutorials/guides welcome.
Skippable Project Background:
Users will take pictures of documents and the app will OCR them. Since users have a great deal of trouble getting a non-skewed image, I let them create a 4-point figure around the text body and the app skews that back to a rectangle. The camera images come in as CIImages and so already have their origin in the lower left, so the 4-point figure must be rotated from the UIView to match...
For the record, I used CGAffineTransforms to rotate the points:
CGAffineTransform t1 = CGAffineTransformMakeTranslation(0, cropView.frame.size.height);
CGAffineTransform s = CGAffineTransformMakeScale(1, -1);
CGAffineTransform r = CGAffineTransformMakeRotation(90 * M_PI/180);
CGAffineTransform t2 = CGAffineTransformMakeTranslation(0, -cropView.frame.size.height);
CGPoint a = CGPointMake(70, 23);
a = CGPointApplyAffineTransform(a, t1);
a = CGPointApplyAffineTransform(a, s);
a = CGPointApplyAffineTransform(a, r);
a = CGPointApplyAffineTransform(a, t2);
And so on for the other points.
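If it helps, the four separate applications can also be collapsed into a single matrix. Here is a Swift sketch of the same sequence, where cropHeight is an assumed stand-in for cropView.frame.size.height:

import CoreGraphics

let cropHeight: CGFloat = 320 // assumed; stands in for cropView.frame.size.height
let t1 = CGAffineTransform(translationX: 0, y: cropHeight)
let s  = CGAffineTransform(scaleX: 1, y: -1)
let r  = CGAffineTransform(rotationAngle: .pi / 2)
let t2 = CGAffineTransform(translationX: 0, y: -cropHeight)
// concatenating(_:) applies the receiver first, so this reproduces the
// original t1 -> s -> r -> t2 application order in one matrix.
let combined = t1.concatenating(s).concatenating(r).concatenating(t2)
let a = CGPoint(x: 70, y: 23).applying(combined)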

VirtualEarth: determine min/max visible latitude/longitude

Is there an easy way to determine the maximum and minimum visible latitude and longitude in a VirtualEarth map?
Given that it's not a flat surface (VE appears to use a Mercator projection), I can see the math getting fairly complicated, so I figured somebody may know of a snippet to accomplish this.
Found it! VEMap.GetMapView() returns the bounding rectangle; it even works in 3D mode (where the boundary is not actually a rectangle).
var view = map.GetMapView();
latMin = view.BottomRightLatLong.Latitude;
lonMin = view.TopLeftLatLong.Longitude;
latMax = view.TopLeftLatLong.Latitude;
lonMax = view.BottomRightLatLong.Longitude;
Using the Virtual Earth Interactive SDK, you can see how to convert a pixel point to a LatLong object:
function GetMap()
{
    map = new VEMap('myMap');
    map.LoadMap();
}

function DoPixelToLL(x, y)
{
    var ll = map.PixelToLatLong(new VEPixel(x, y)).toString();
    alert("The latitude,longitude of the pixel at (" + x + "," + y + ") is: " + ll);
}
Take a further look here: http://dev.live.com/virtualearth/sdk/ in the menu go to: Get map info --> Convert pixel to LatLong
To get the max/min visible LatLongs, you could call the DoPixelToLL method for each corner of the map.
