Rotating CGPoints; CIVector CGAffineTransform? - ios6

I have 4 CGPoints that form an irregular figure. How can I rotate that figure 90 degrees and get the new CGPoints?
FWIW, I was able to "fake" this when the figure is a CGRect by swapping origin.x and origin.y, and width and height. Will I need to do something similar (calculating distance/direction between points), or is there an affine transform I can apply to a CIVector?
Hints and/or pointers to tutorials/guides welcome.
Skippable Project Background:
Users will take pictures of documents and the app will OCR them. Since users have a great deal of trouble capturing a non-skewed image, I let them draw a 4-point figure around the text body, and the app skews that region back to a rectangle. The incoming camera images are CIImages, so their origin is already in the lower-left; the 4-point figure from the UIView must therefore be rotated to match...

For the record, I used CGAffineTransforms to rotate the points:
// Translate by the view height, flip the y-axis, rotate 90 degrees, then translate back:
CGAffineTransform t1 = CGAffineTransformMakeTranslation(0, cropView.frame.size.height);
CGAffineTransform s = CGAffineTransformMakeScale(1, -1);
CGAffineTransform r = CGAffineTransformMakeRotation(90 * M_PI / 180);
CGAffineTransform t2 = CGAffineTransformMakeTranslation(0, -cropView.frame.size.height);
// Apply them in order to each point:
CGPoint a = CGPointMake(70, 23);
a = CGPointApplyAffineTransform(a, t1);
a = CGPointApplyAffineTransform(a, s);
a = CGPointApplyAffineTransform(a, r);
a = CGPointApplyAffineTransform(a, t2);
Likewise for the other points.
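If it helps to see the arithmetic, here is the same composition sketched in NumPy (illustrative only: the helper functions and the view height of 100 are made up, and 3x3 homogeneous matrices stand in for CGAffineTransform):

import numpy as np

def translation(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

def scale(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], dtype=float)

def rotation(degrees):
    c, s = np.cos(np.radians(degrees)), np.sin(np.radians(degrees))
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

height = 100.0  # made-up stand-in for cropView.frame.size.height

# Applying t1, then s, then r, then t2 to column vectors means
# composing the matrices right-to-left into a single transform:
m = translation(0, -height) @ rotation(90) @ scale(1, -1) @ translation(0, height)

p = np.array([70.0, 23.0, 1.0])  # the point (70, 23) in homogeneous form
print((m @ p)[:2])               # the transformed point

Core Graphics offers the same consolidation via CGAffineTransformConcat, so each point would need only one CGPointApplyAffineTransform call instead of four.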

Related

I was working on this project for real-time face mask detection, but I am facing an error: "cannot reshape array of size 2 into shape (1,100,100,1)".

Please tell me what I should do, or what values I should change, to solve this problem:
while(True):
    ret, img = source.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_clsfr.detectMultiScale(gray, 1.3, 5)
    for x, y, w, h in faces:
        face_img = gray[y:y+w, x:x+w]
        resized = cv2.resize(face_img, (100, 100))
        normalized = resized/255,0
        reshaped = np.reshape(normalized, (1, 100, 100, 1))
        result = model.predict(reshaped)
        label = np.argmax(result, axis=1)[0]
        cv2.rectangle(img, (x, y), (x+w, y+h), colordict[label], 2)
        cv2.rectangle(img, (x, y-40), (x+w, y), colordict[label], -1)
        cv2.putText(img, labels_dict[label], (x, y-10), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 255, 255), 2)
    cv2.imshow('Frame', img)
    key = cv2.waitKey(1)
    if key == ord('q'):
        break
cv2.destroyAllWindows()
source.release()
You change the image array into a tuple in:
normalized = resized/255,0
In Python, resized/255,0 evaluates to the two-element tuple (resized/255, 0), which is the "array of size 2" in the error message. You presumably meant a decimal point; change it to:
normalized = resized/255.0
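To see what goes wrong, here is a minimal sketch in NumPy (the variable names just mirror the ones above):

import numpy as np

resized = np.zeros((100, 100))  # stands in for the resized face crop

wrong = resized/255,0           # the trailing ",0" makes this a 2-element tuple
print(type(wrong), len(wrong))  # <class 'tuple'> 2, the "array of size 2"

right = resized/255.0           # a plain scaled array
print(np.reshape(right, (1, 100, 100, 1)).shape)  # (1, 100, 100, 1)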

How to eliminate grayish ARKit/SceneKit shadow plane?

I've implemented one of the many ways to add a shadow plane to an ARKit and SceneKit scene. It works pretty well and the shadows look fine.
The problem is that most of the time the plane also has a grayish cast to it. In other words, it's not completely transparent. On the other hand, sometimes the grayish cast goes away only to reappear a few seconds later. I've tried tweaking just about every SCNNode and SCNMaterial property I can think of, but so far, I can't seem to get the gray to reliably go away. Does anyone have any suggestion on how to solve this?
// Make a transparent shadow plane for the Ground.
let shadowPlane = SCNPlane(width: CGFloat(self.width * 2), height: CGFloat(self.depth * 2))
shadowPlane.cornerRadius = 2
let shadowPlaneNode = SCNNode(geometry: shadowPlane)
shadowPlaneNode.name = shadowPlaneNodeName
shadowPlaneNode.eulerAngles.x = -.pi / 2
shadowPlaneNode.castsShadow = false
let material = SCNMaterial()
material.isDoubleSided = false
material.lightingModel = .constant // .shadowOnly does not show any shadows on iOS
material.colorBufferWriteMask = [.alpha]
shadowPlane.materials = [material]
node.addChildNode(shadowPlaneNode)
After more experimentation I found a solution that seems to work well. Setting the material's .lightingModel to .shadowOnly actually works correctly, without any gray cast, but only if you set .shadowMode on the shadow-producing directional light to .forward instead of .deferred.
In addition, there seems to be a bug in .shadowOnly that causes the plane to render completely black if any light in the scene has .type == .omni or .spot.
Here's the code that's working for me:
let shadowPlane = SCNPlane(width: CGFloat(self.width * 1.5), height: CGFloat(self.depth * 1.5))
let shadowPlaneNode = SCNNode(geometry: shadowPlane)
shadowPlaneNode.name = shadowPlaneNodeName
shadowPlaneNode.eulerAngles.x = -.pi / 2
shadowPlaneNode.castsShadow = false
let material = SCNMaterial()
material.isDoubleSided = false
material.lightingModel = .shadowOnly // Requires the light's shadowMode = .forward; any .omni or .spot light in the scene makes the material render black
shadowPlane.materials = [material]
node.addChildNode(shadowPlaneNode)

Getting black and white image when doing rotation?

I want to rotate my image by 45 degrees, and I have defined a function for this. When I run it over my RGB images, they get saved as grayscale images. Why?
def rotateImage(image, angle):
    row, col = image.shape
    center = tuple(np.array([row, col])/2)
    rot_mat = cv2.getRotationMatrix2D(center, angle, 1.0)
    new_image = cv2.warpAffine(image, rot_mat, (col, row))
    return new_image

rotate_img_path = '../data/rotate_45_img'
rotate_mask_path = '../data/rotate_45_mask'
real_img_path = '../data/img'
real_mask_path = '../data/mask'

for img in os.listdir(real_img_path):
    edge_img = cv2.imread(os.path.join(real_img_path, img))
    edges = rotateImage(edge_img, 45)
    cv2.imwrite(os.path.join(rotate_img_path, img), edges)
print("Finished Copying images")

for img in os.listdir(real_mask_path):
    edge_img = cv2.imread(os.path.join(real_mask_path, img))
    edges = rotateImage(edge_img, 45)
    cv2.imwrite(os.path.join(rotate_mask_path, img), edges)
    # cv2.imwrite('edge_' + '.jpg', edges)
print("Finished Copying masks")
A clue here may be
row,col = image.shape
which would raise an error if image were three-dimensional (i.e., in color, with separate channels for B, G, and R) instead of two-dimensional (i.e., grayscale). Unless you're reading RGB images with code not shown here, this suggests that your images are already grayscale.
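If the inputs really are color and the channels should survive the rotation, a channel-agnostic variant of the function might look like this (a sketch, not the poster's code):

import cv2

def rotate_image(image, angle):
    # shape[:2] works for both grayscale (2-D) and color (3-D) arrays
    rows, cols = image.shape[:2]
    # getRotationMatrix2D takes the center as (x, y), i.e. (cols/2, rows/2)
    rot_mat = cv2.getRotationMatrix2D((cols/2, rows/2), angle, 1.0)
    # warpAffine preserves the channel count of its input
    return cv2.warpAffine(image, rot_mat, (cols, rows))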

MKMapView setRegion not exact

Starting Situation
iOS 6, Apple Maps. I have two annotations on an MKMapView. The coordinates of the two annotations have the same latitude but different longitudes.
What I want
I now want to zoom the map so that one annotation is exactly on the left edge of the mapView's frame, and the other annotation on the right edge of the frame. For that I tried MKMapView's setRegion and setVisibleMapRect methods.
The Problem
The problem is that those methods seem to snap to certain zoom levels and therefore don't set the region as exactly as I need. I saw a lot of questions on Stack Overflow pointing out that this behavior is normal in iOS 5 and below. But since Apple Maps uses vector graphics, the view should not be bound to certain zoom levels to display imagery at proper resolution.
Tested in...
I tested on iPhone 4, 4S, 5, iPad 3 and iPad mini (all with iOS 6.1), and in the Simulator on iOS 6.1.
My question
Why do setRegion and setVisibleMapRect snap to certain zoom levels instead of adjusting the region exactly to the values I pass?
Sample Code
In viewDidAppear I define 4 different coordinates as ivars and set up the map view:
// define coords
_coord1 = CLLocationCoordinate2DMake(46.0, 13.0);
_coord2 = CLLocationCoordinate2DMake(46.0, 13.1);
_coord3 = CLLocationCoordinate2DMake(46.0, 13.2);
_coord4 = CLLocationCoordinate2DMake(46.0, 13.3);
// define frame for map in landscape mode
CGRect mainScreen = [[UIScreen mainScreen] bounds];
CGRect newSRect = mainScreen;
newSRect.size.width = mainScreen.size.height;
newSRect.size.height = mainScreen.size.width;
// setup map view
customMapView = [[MKMapView alloc] initWithFrame:newSRect];
[customMapView setDelegate:self];
[customMapView setShowsUserLocation:NO];
[self.view addSubview:customMapView];
Then I add 3 buttons, all of which trigger the same method, addAnnotationsWithCoord1:coord2:. The first button passes _coord1 and _coord2, the second passes _coord1 and _coord3, and the third passes _coord1 and _coord4. The method looks like this (TapAnnotation is my class conforming to MKAnnotation):
-(void)addAnnotationsWithCoord1:(CLLocationCoordinate2D)coord1 coord2:(CLLocationCoordinate2D)coord2 {
    // Make 2 new annotations with the passed coordinates
    TapAnnotation *annot1 = [[TapAnnotation alloc] initWithNumber:0 coordinate:coord1];
    TapAnnotation *annot2 = [[TapAnnotation alloc] initWithNumber:0 coordinate:coord2];

    // Remove all existing annotations
    for (id<MKAnnotation> annotation in customMapView.annotations) {
        [customMapView removeAnnotation:annotation];
    }

    // Add annotations to map
    [customMapView addAnnotation:annot1];
    [customMapView addAnnotation:annot2];
}
After that I determine the southwest and northeast corners of the rect that contains my 2 annotations.
// get northEast and southWest
CLLocationCoordinate2D northEast;
CLLocationCoordinate2D southWest;
northEast.latitude = MAX(coord1.latitude, coord2.latitude);
northEast.longitude = MAX(coord1.longitude, coord2.longitude);
southWest.latitude = MIN(coord1.latitude, coord2.latitude);
southWest.longitude = MIN(coord1.longitude, coord2.longitude);
Then I calculate the center point between the two coordinates and set it as the map's center coordinate (remember, since all coordinates have the same latitude, the following calculation should be correct):
// determine center coordinate and set to map
double centerLon = ((coord1.longitude + coord2.longitude) / 2.0f);
CLLocationCoordinate2D center = CLLocationCoordinate2DMake(southWest.latitude, centerLon);
[customMapView setCenterCoordinate:center animated:NO];
Now I try to set the region of the map so that it fits like I want:
CLLocation *loc1 = [[CLLocation alloc] initWithLatitude:southWest.latitude longitude:southWest.longitude];
CLLocation *loc2 = [[CLLocation alloc] initWithLatitude:northEast.latitude longitude:northEast.longitude];
CLLocationDistance meterSpanLong = [loc1 distanceFromLocation:loc2];
CLLocationDistance meterSpanLat = 0.1;
MKCoordinateRegion region = MKCoordinateRegionMakeWithDistance(center, meterSpanLat, meterSpanLong);
[customMapView setRegion:region animated:NO];
This does not behave as expected, so I try this:
MKCoordinateSpan span = MKCoordinateSpanMake(0.01, northEast.longitude-southWest.longitude);
MKCoordinateRegion region = MKCoordinateRegionMake(center, span);
[customMapView setRegion:region animated:NO];
This still doesn't behave as expected, so I try setVisibleMapRect:
MKMapPoint westPoint = MKMapPointForCoordinate(southWest);
MKMapPoint eastPoint = MKMapPointForCoordinate(northEast);
MKMapRect mapRect = MKMapRectMake(westPoint.x, westPoint.y,eastPoint.x-westPoint.x,1);
[customMapView setVisibleMapRect:mapRect animated:NO];
And still, it does not behave like I want. As verification, I calculate the distance in points from the left annotation to the left edge of the mapView's frame:
// log the distance from the southwest point to the left edge of the map frame
CGPoint tappedWestPoint = [customMapView convertCoordinate:southWest toPointToView:customMapView];
NSLog(@"distance: %f", tappedWestPoint.x);
For _coord1 and _coord2 it logs: 138
For _coord1 and _coord3 it logs: 138
For _coord1 and _coord4 it logs: 65
Why do I get these values? If everything worked as expected, they should all be 0.
Thanks for any help; I've been struggling with this for a week now.
Read the docs on setRegion:
When setting this property, the map may adjust the new region value so that it fits the visible area of the map precisely. This is normal and is done to ensure that the value in this property always reflects the visible portion of the map. However, it does mean that if you get the value of this property right after setting it, the returned value may not match the value you set. (You can use the regionThatFits: method to determine the region that will actually be set by the map.)
You will have to figure out the math yourself if you want a precise zoom.
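If you want to reason about the snapping yourself, the usual mental model is a Web-Mercator tile scheme. Here is a rough, purely illustrative sketch in Python of the fractional zoom level a given longitude span needs (the 256-point tile size is the conventional assumption for slippy maps, not something the MapKit API exposes):

import math

def fractional_zoom(lon_span_degrees, view_width_points, tile_size=256.0):
    # At zoom level z the full 360 degrees of longitude span
    # tile_size * 2**z points, so the zoom that fits lon_span exactly is:
    world_width_points = view_width_points * 360.0 / lon_span_degrees
    return math.log2(world_width_points / tile_size)

# Fitting 0.1 degrees of longitude into a 480-point-wide view:
print(fractional_zoom(0.1, 480.0))  # about 12.7; a map that snaps to integer
                                    # zoom levels rounds this, leaving the kind
                                    # of margin logged above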

VirtualEarth: determine min/max visible latitude/longitude

Is there an easy way to determine the maximum and minimum visible latitude and longitude in a VirtualEarth map?
Given that it's not a flat surface (it looks like VE uses a Mercator projection), I can see the math getting fairly complicated; I figured somebody may know of a snippet to accomplish this.
Found it! VEMap.GetMapView() returns the bounding rectangle, and it even works in 3D mode (where the boundary is not actually a rectangle).
var view = map.GetMapView();
latMin = view.BottomRightLatLong.Latitude;
lonMin = view.TopLeftLatLong.Longitude;
latMax = view.TopLeftLatLong.Latitude;
lonMax = view.BottomRightLatLong.Longitude;
Using the Virtual Earth Interactive SDK, you can see how to convert a pixel point to a LatLong object:
function GetMap()
{
    map = new VEMap('myMap');
    map.LoadMap();
}

function DoPixelToLL(x, y)
{
    var ll = map.PixelToLatLong(new VEPixel(x, y)).toString();
    alert("The latitude,longitude of the pixel at (" + x + "," + y + ") is: " + ll);
}
Take a further look at http://dev.live.com/virtualearth/sdk/; in the menu, go to Get map info --> Convert pixel to LatLong.
To get the max/min visible LatLongs, you could call the DoPixelToLL method for each corner of the map.
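For the curious, the underlying math is the inverse Mercator projection. Here is a rough, purely illustrative Python sketch (it assumes a standard Web-Mercator scheme with 256-pixel tiles; every parameter and value below is made up for the example):

import math

def pixel_to_latlon(px, py, center_lat, center_lon, zoom,
                    view_w, view_h, tile_size=256):
    world = tile_size * 2 ** zoom  # world size in pixels at this zoom
    # Center of the view in world pixel coordinates (forward Mercator)
    cx = (center_lon + 180.0) / 360.0 * world
    s = math.sin(math.radians(center_lat))
    cy = (0.5 - math.log((1 + s) / (1 - s)) / (4 * math.pi)) * world
    # World pixel coordinates of the requested view pixel
    wx = cx + (px - view_w / 2.0)
    wy = cy + (py - view_h / 2.0)
    # Inverse Mercator back to latitude/longitude
    lon = wx / world * 360.0 - 180.0
    lat = math.degrees(math.atan(math.sinh(math.pi * (1 - 2 * wy / world))))
    return lat, lon

# The min/max visible latitude/longitude come from the view's corners,
# e.g. for an 800x600 view centered on (47.6, -122.3) at zoom 10:
print(pixel_to_latlon(0, 0, 47.6, -122.3, 10, 800, 600))      # top-left
print(pixel_to_latlon(800, 600, 47.6, -122.3, 10, 800, 600))  # bottom-right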
