iOS 11 SceneKit hitTest:options: fails

I'm facing a difficult situation using hitTest:options: in SceneKit on iOS 11.
In a mapping application I have a terrain node. Using hitTest:options: I have long been able to spot a point on the terrain from a touch on the screen. It still works as expected with the released binary on iOS 11, and also with an Xcode 9 compiled binary on the iOS 10 simulator.
But an iOS 11 binary built against the iOS 11 SDK gives totally erratic results. The array returned from hitTest:options: may contain no results or too many. Moreover, most of the time none of the results is valid. Below are images to illustrate the point. All images are from a scene with no hidden nodes.
Edit: I made a test today using hitTestWithSegmentFromPoint:toPoint:options: and also got incorrect results.
First, the working simulator.
It shows a normal hit on the terrain. The hit point is illustrated with a red ball; it is half inset in the terrain, as its center lies right on the terrain.
These two images show a case where the "ray" crosses the terrain 3 times. We get 3 hits, all placed correctly on the terrain. The second image changes the angle of view to show the 3 points.
Now the failing iOS 11 situation:
In this picture we get one hit, but it is "nowhere" between the two mountains, not on the terrain.
The last two pictures show other attempts with 4 and 16 hits, all "in the blue" with no connection to the terrain.
Sometimes the hits are "away", past the terrain; sometimes they are between the camera and the terrain.

I was facing the same problem on iOS 11. My solution:
var hitTestOptions = [SCNHitTestOption.sortResults: NSNumber(value: true),
                      SCNHitTestOption.boundingBoxOnly: NSNumber(value: true)]
if #available(iOS 11.0, *) {
    hitTestOptions[SCNHitTestOption.searchMode] = SCNHitTestSearchMode.all.rawValue as NSNumber
}
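A minimal sketch of how these options might be fed to a hit test from a touch handler (the scnView outlet and the handler itself are assumptions for illustration, not part of the original code):

import SceneKit
import UIKit

// Hypothetical tap handler; scnView is assumed to be the SCNView showing the terrain.
func handleTap(_ gesture: UITapGestureRecognizer, in scnView: SCNView) {
    var hitTestOptions: [SCNHitTestOption: Any] = [.sortResults: NSNumber(value: true),
                                                   .boundingBoxOnly: NSNumber(value: true)]
    if #available(iOS 11.0, *) {
        hitTestOptions[.searchMode] = SCNHitTestSearchMode.all.rawValue as NSNumber
    }
    let point = gesture.location(in: scnView)
    // Each SCNHitTestResult carries the hit node and the intersection
    // point in both local and world coordinates.
    for hit in scnView.hitTest(point, options: hitTestOptions) {
        print("hit \(hit.node.name ?? "unnamed") at \(hit.worldCoordinates)")
    }
}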

Four years later I went back to this and found a solution to my original problem.
After Apple released iOS 11.2, the multiple-hits issue was solved, but we were left with a "no hits" conundrum.
The problem lies in a specific situation that was not fully explained in the original question. After a terrain is first computed and displayed, we always get a first hit. Then we pan the terrain to center the hit point and rebuild a new terrain sector. In the process, we save computation by reusing several geometry elements, only changing the z coordinates of the terrain vertices. The problem lies in reusing the triangle-strip SCNGeometryElement: from then on, any terrain built by reusing this object looks fine but fails the hitTest method.
It turns out that the SCNGeometryElement can't be reused and must be rebuilt.
The originally working code was:
t_strip = [geom_cour geometryElementAtIndex:0];
To work around the hitTest: failure, we have to do:
// get the current triangle strip
SCNGeometryElement *t_strip_g = [geom_cour geometryElementAtIndex:0];
// create a new one using the current as a template
t_strip = [SCNGeometryElement geometryElementWithData:t_strip_g.data
                                        primitiveType:t_strip_g.primitiveType
                                       primitiveCount:t_strip_g.primitiveCount
                                        bytesPerIndex:t_strip_g.bytesPerIndex];
The current SCNGeometryElement is used as a template to create a new one with exactly the same values.
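For Swift callers, the same rebuild might look like this (a sketch; currentGeometry stands in for geom_cour above):

import SceneKit

// Rebuild a fresh SCNGeometryElement from the existing one instead of reusing it.
func rebuiltElement(from currentGeometry: SCNGeometry) -> SCNGeometryElement {
    let template = currentGeometry.element(at: 0)  // the current triangle strip
    return SCNGeometryElement(data: template.data,
                              primitiveType: template.primitiveType,
                              primitiveCount: template.primitiveCount,
                              bytesPerIndex: template.bytesPerIndex)
}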

Related

How to train a custom Object detector from scratch in tensorflow.js?

I followed multiple examples to train a custom object detector in TensorFlow.js. The main problem I am facing: everywhere, they use a pretrained model.
Pretrained models are fine for general use cases, but they fail for custom scenarios. For example, take this example from the official TensorFlow.js examples: it uses MobileNet, and MobileNet has a 224x224 image size restriction, which defeats the whole purpose, because my images are big and not all of the same ratio, so resizing is not an option.
I have tried multiple examples; all follow the same path one way or another.
What I want:
Any example by which I can train a custom object detector from scratch in TensorFlow.js.
Although the answer sounds simple, trust me, I have been searching for this for days. Any help will be greatly appreciated. Thanks
Currently it is not yet possible to use the TensorFlow object detection API in Node.js. But the image size should not be a restriction: instead of resizing, you can crop your image and keep only the part that contains the object to be detected.
One approach would be to partition the image into 224x224 tiles and run on all of them, but what if the object lies between two partitions?
The image does not need to be partitioned for this. When labelling the image, you will need to know the x, y coordinates (from the top left) and the w, h of the detected box. You only need to crop a part of the image that contains the box. Cropping at the coordinates x - (224 - w)/2, y - (224 - h)/2 can be a good start. There are two issues with these coordinates:
the detected boxes will always be in the center, so the training will be biased. To prevent this, a random factor can be used: x - (224 - w)/r, y - (224 - h)/r, where r can be taken randomly from [1-10], for instance.
if the detected boxes are bigger than 224x224, you might first resize the image (keeping its ratio) before cropping. In that case the box size (w, h) will need to be readjusted according to the scale used for the resizing.
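A minimal sketch of that crop arithmetic, written in Swift purely for illustration (the answer itself targets TensorFlow.js; the 224 side, the random divisor, and the clamping follow the description above):

import CoreGraphics

// Pick a 224x224 crop window that contains the labelled box (x, y, w, h),
// offset by a random factor so the box does not always sit dead center,
// then clamp the window to the image bounds.
func cropOrigin(forBox box: CGRect, imageSize: CGSize, side: CGFloat = 224) -> CGPoint {
    let r = CGFloat.random(in: 1...10)             // the random factor from [1-10]
    var x = box.origin.x - (side - box.width) / r
    var y = box.origin.y - (side - box.height) / r
    x = max(0, min(x, imageSize.width - side))     // keep the crop inside the image
    y = max(0, min(y, imageSize.height - side))
    // The labelled box must then be re-expressed relative to this origin.
    return CGPoint(x: x, y: y)
}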

How to get my MapContainer bounding box in Codename One

My Codename One app features a MapContainer. I need to show points of interest (POIs) on it whose coordinates reside on the server. There can be hundreds (maybe thousands in the future) of such POIs on the server. That's why I would like to download from the server only the POIs that can be shown on the map. Consequently I need to get the map boundaries to pass them to the server.
I read this for Android and this other SO question for iOS, and the key seems to be getting the map Projection and the map bounding box. However, neither the getProjection() method nor getBoundingBox() seems to be exposed.
A solution could be to mix the coordinates from getCameraLocation(), which is the map center, with getZoom() to infer those boundaries. But the result may vary depending on the device (the shown area can be larger).
How can I get the map boundaries in Codename One?
Any help appreciated,
Cheers,
The problem is in the javadocs for getCoordAtPosition(); this will be corrected. getCoordAtPosition() expects absolute coordinates, not relative ones.
E.g.
Coord NE = currentMap.getCoordAtPosition(currentMap.getWidth(), 0);
Coord SW = currentMap.getCoordAtPosition(0, currentMap.getHeight());
Should be
Coord NE = currentMap.getCoordAtPosition(currentMap.getAbsoluteX() + currentMap.getWidth(), currentMap.getAbsoluteY());
Coord SW = currentMap.getCoordAtPosition(currentMap.getAbsoluteX(), currentMap.getAbsoluteY() + currentMap.getHeight());
I tried this out on the coordinates that you provided and it returns valid results.
EDIT March 21, 2017: It turns out that some of the platforms expected relative coordinates and others expected absolute coordinates. I have had to standardize it, and I have chosen to use relative coordinates across all platforms, to be consistent with the Javadocs. So your first attempt:
Coord NE = currentMap.getCoordAtPosition(currentMap.getWidth(), 0);
Coord SW = currentMap.getCoordAtPosition(0, currentMap.getHeight());
Will now work in the latest version of the library.
I have also added another method, getBoundingBox(), that will get the bounding box for you without worrying about relative/absolute coordinates.
This is probably something that can be exposed easily by forking the project and providing a pull request. We're currently working on updating the map component so this is a good time to make changes and add features.

Big SCNGeometry SceneKit for iOS

I am working on a Cocoa/iOS project.
I have a common Swift class that manages a SceneKit scene.
I want to draw a big terrain (about 5000x5000 points).
I have 2 triangles per 4 points, and I created one SCNGeometry object for the whole terrain (is that a good idea?).
I decided to store those points in a 6-Float structure (x, y, z and r, g, b). I tried to create an empty array or to allocate a big array at the beginning: I got the same issue either way.
I work with the Int datatype for the indices array.
The project works fine on Cocoa, but I get memory errors on iOS. I think this is because of the need for a big, contiguous vertex array.
I tried to create several chunks of geometry objects, but SceneKit does not like it if we erase a previous buffer.
What is the best practice in this case?
Is there a way to store vertices in mass storage instead of in-memory arrays/buffers?
Thanks
So...twice as many terrain points as there are pixels on a shiny new 5K display? That's a huge amount of memory to be using at once on iOS. And you won't be able to see that resolution on an iOS device.
So how about:
Break your 25-million-point terrain into smaller tiles, each in its own SCNNode. Loop through the tiles: create one SCNNode, throw away the 6-Float array for that tile, and move on to the next (see the sketch after this list).
Use SCNLevelOfDetail to produce much simpler versions of those nodes, for display when they're very far away.
Do the construction work on OS X. Archive your scene (NSSecureCoding). Bundle that scene into the iOS app.
Consider using reference nodes in your main SCNScene, and archive each tile as a separate SCNScene file.
Hopefully you're already using triangle strips, not triangles, to build your geometry.
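A minimal sketch of the tiling and level-of-detail idea (in Swift; makeTileGeometry, the tile counts, and the 500-unit switch distance are assumptions for illustration, not the asker's code):

import SceneKit

// Build one SCNNode per terrain tile, each carrying a coarse
// level-of-detail mesh that SceneKit swaps in at a distance.
func buildTerrain(tilesPerSide: Int,
                  makeTileGeometry: (_ tx: Int, _ ty: Int, _ decimation: Int) -> SCNGeometry) -> SCNNode {
    let terrain = SCNNode()
    for ty in 0..<tilesPerSide {
        for tx in 0..<tilesPerSide {
            let full = makeTileGeometry(tx, ty, 1)    // full resolution
            let coarse = makeTileGeometry(tx, ty, 8)  // keep every 8th point
            full.levelsOfDetail = [SCNLevelOfDetail(geometry: coarse,
                                                    worldSpaceDistance: 500)]
            terrain.addChildNode(SCNNode(geometry: full))
            // The per-tile vertex array can be freed here before the next tile.
        }
    }
    return terrain
}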

Why does my Touch Develop script keep crashing?

The question isn't exactly concerned with Touch Develop, but rather with basic programming structure or syntax.
What I am trying to do is create a simple compass working on the phone's heading capability. The heading capability just spits out degree readings to several (like 12) decimal places.
Anyway, even just letting the phone spit out the heading, the phone eventually crashes. Why is that? Is it running out of memory?
The reason I came here is this:
I want to update the page with a photo of an associated rotation based on the degree readout. I can't figure out how to do something like: if 0 < x < 1, post this picture. The heading readout varies, e.g. 321.18364947363 and 321.10243635471.
So currently I am testing this: several if / else if statements saying if the heading output is 1, post the picture with 1 degree of rotation; if 2, post the picture with 2 degrees of rotation. This definitely and reliably crashes the phone. Why? Memory?
If you are a Touch Develop developer: would it be easier and more sane to simply take a round object, center it in relation to a square image, and use it as a sprite or object? Then you can dictate the object's angular velocity and position without using 360 individual images.
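The 360-branch if / else if chain can be replaced by rounding the heading into a whole-degree bin and rotating a single image. A Swift sketch purely for illustration (Touch Develop itself has since been retired; the function name is hypothetical):

import Foundation

// Collapse a noisy readout like 321.18364947363 into a whole-degree bin,
// then derive one rotation angle instead of choosing among 360 images.
func rotationAngle(forHeading heading: Double) -> Double {
    let bin = (Int(heading.rounded()) % 360 + 360) % 360  // 0...359
    return Double(bin) * .pi / 180                        // radians for a transform
}

// 321.18364947363 and 321.10243635471 both land in bin 321,
// so the displayed rotation only changes when the whole degree changes.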
GAH! Damn character limits / thread format
This is what follows what I last wrote, for anyone who cares:
The concept seems simple enough, but I am basically a programming noob; I was all over the place trying to learn Python, Java, and C/C#/C++. (I wrote this on my Windows Phone 8 but was unable to copy the text.) I am happy to have come across Touch Develop because it is better for me as a visual learner. (Thanks for the life story, right? Haha.)
The idea would have been to use this dumb pink-against-black giant compass with three headings / points of interest: a fixed relative north, the heading, and a position given by the lat/long coordinates of the person to be found, relative to the finder's phone's current location. In my mind this app would be used for party scenarios. I would have benefited from it had the circumstances been right: I was once lost at a party and had to take a $110.00 cab home because I didn't drive there.

iOS 6 MapKit annotation rotation

Our app has a rotating map view that aligns with the compass heading. We counter-rotate the annotations so that their callouts remain horizontal for reading. This works fine on iOS 5 devices but is broken on iOS 6 (the problem is seen both with the same binary as used on the iOS 5 device and with a binary built with the iOS 6 SDK). The annotations initially rotate to the correct horizontal position and then, a short time later, revert to the uncorrected rotation. We cannot see any events that are causing this. This is the code snippet we are using in - (MKAnnotationView *)mapView:(MKMapView *)theMapView viewForAnnotation:(id<MKAnnotation>)annotation:
CATransform3D transformZ = CATransform3DIdentity;
transformZ = CATransform3DRotate(transformZ, _rotationZ, 0, 0, 1);
annotation.myView.layer.transform = transformZ;
Has anyone else seen this, and does anyone have suggestions on how to fix it on iOS 6?
I had an identical problem, so my workaround may work for you. I've also submitted a bug to Apple on it. For me, every time the map got panned by the user, the annotations would get "unrotated".
In my code I set the rotations using CGAffineTransformMakeRotation, and I don't set them in viewForAnnotation but whenever the user's location gets updated. So that is a bit different from your case.
My workaround was to add an additional minor rotation at the bottom of my viewForAnnotation method.
if (is6orMore) {
    [annView setTransform:CGAffineTransformMakeRotation(.001)]; // iOS 6 BUG WORKAROUND
}
So for you, I'm not sure if that works, since you are rotating differently and doing it in viewForAnnotation. But give it a try.
Took me forever to find and I just happened across this fix.
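In the same spirit, the counter-rotation can be re-applied whenever the map region changes, rather than nudging the transform in viewForAnnotation. A Swift sketch under the assumption that the transform reset happens on pan (currentRotation and the delegate wiring are hypothetical):

import MapKit

// Re-apply the counter-rotation after every pan/zoom, since iOS 6
// appears to reset annotation view transforms on region changes.
class RotationFixingDelegate: NSObject, MKMapViewDelegate {
    var currentRotation: CGFloat = 0  // radians, updated from the compass heading

    func mapView(_ mapView: MKMapView, regionDidChangeAnimated animated: Bool) {
        for annotation in mapView.annotations {
            mapView.view(for: annotation)?.transform =
                CGAffineTransform(rotationAngle: currentRotation)
        }
    }
}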
