How to find obstacles on a horizontal surface using ARKit on iOS 11 (SceneKit)?

I am working on a POC using the sample provided by Apple: https://developer.apple.com/sample-code/wwdc/2017/PlacingObjects.zip.
Right now, placing an object works fine once a surface has been detected. But when I move the object off the detected surface toward other geometry, such as a wall or some obstacle, the 3D object overlaps it.
Is it possible to detect obstacles while placing/moving the 3D object through the camera? Is there any API in ARKit to find obstacles on a surface?
If not, is there any workaround or calculation we could use to detect the obstacle/wall and keep the user from placing/moving the object onto or beyond it?

The short answer at this stage is no, unfortunately.
Detecting vertical planes, or arbitrary objects in a scene, is quite difficult. My understanding is that Apple is working on vertical plane detection, and that there are a couple of startups working on object detection.
The best option may be to wait for 6d.ai, as this is what they are working on (although they are in stealth, so it's hard to tell exactly).
If you have any Core ML experience, you could use an object detection model (find a third-party one) to recognise objects in a scene and use that as a proxy for geometry that is off limits. There's also Matroid, which provides object detection / tracking capabilities.
The following are not specific ARKit / iOS examples, but might help you later on.
Vuforia has support for scene understanding: https://library.vuforia.com/articles/Training/Getting-Started-with-Smart-Terrain
HoloLens sort of has support for it as well: https://elbruno.com/2017/04/21/hololens-spatial-understanding-vs-spatial-mapping-and-a-step-by-step-on-how-to-use-it/
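In the meantime, a partial workaround with ARKit's current API is to only allow placing/moving the object where a hit test lands inside the extent of an already-detected plane. It won't find the wall itself, but it refuses moves onto geometry ARKit hasn't recognized. Below is a minimal sketch in C# against the Xamarin.iOS ARKit bindings (the class and field names are illustrative):

```csharp
using ARKit;
using CoreGraphics;

// Hedged sketch: gate object placement on ARKit's existing hit testing.
// "sceneView" is assumed to be the app's ARSCNView.
public class PlacementGate
{
    readonly ARSCNView sceneView;

    public PlacementGate(ARSCNView sceneView)
    {
        this.sceneView = sceneView;
    }

    // True only if the screen point lands inside the extent of a plane
    // ARKit has actually detected. Walls and obstacles that were never
    // detected yield no hit, so the drag can simply be refused there.
    public bool CanPlaceAt(CGPoint screenPoint)
    {
        var results = sceneView.HitTest(screenPoint,
            ARHitTestResultType.ExistingPlaneUsingExtent);
        return results != null && results.Length > 0;
    }
}
```

While dragging, only update the node's position when CanPlaceAt returns true; the nearest hit result's WorldTransform then gives the target position on the plane.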

Related

WPF fast 2d plotting...what to use?

I've got an application where I am processing images from a camera at around 20 Hz. Each image is segmented into a matrix of regions, let's say 100 x 50 (for example). Each region is processed, resulting in a single floating-point metric. I'm trying to create a simple 2D plot of each region's data as it is created. So, on the screen would be a matrix layout of 5,000 (in this worst-case example) plots/charts.
I'm currently processing the images without issue using managedCUDA and writing some CUDA kernels to take care of that. What I'm faced with now is trying to create a way to view all this data logically as it's coming in. Things I've considered:
Building an "image" on the gpu with dimensions matching the target display control. This image would be segmented into the appropriate number of regions and a rudimentary chart would be drawn, pixel by pixel.
Learn Direct3D or OpenGL and code the algorithms necessary to draw the charts
Use the native WPF capabilities to draw the charts myself.
Use a commercial or open-source charting tool
Option 1 seems crazy to me (but I had a GPU-centric friend suggest it).
Option 2 seems like I'd have to learn all the unnecessary 3D overhead of D3D or OpenGL just to draw 2D plots.
Options 3 and 4 probably have the most appeal to me, but I'm worried about performance.
So just looking for advice before I charge off in one of these directions.
I thoroughly recommend SciChart if you have the money. We develop scientific software that needs to process large amounts of data received from external devices, and have found SciChart to be excellent (features and performance). And no, I'm not affiliated with them in any way!
Like all charting components, it takes a while to get your head around the many features, but it's worth it. If you download their trial, it includes a load of samples, including demos of real-time performance.
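If a commercial component is off the table, option 3 from the question can scale further than you might expect if you skip Shape elements and write pixels directly. A hedged sketch of one tiny line plot rendered into a WriteableBitmap (sizes and the color are illustrative; in practice you would tile all the regions into one large bitmap behind a single Image control):

```csharp
using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;

// Hedged sketch: draw one region's metric history into a small
// WriteableBitmap, pixel by pixel, with no per-point WPF elements.
public static class MiniPlot
{
    public static void Render(WriteableBitmap bmp, float[] samples,
                              float min, float max)
    {
        int w = bmp.PixelWidth, h = bmp.PixelHeight;
        var pixels = new int[w * h]; // BGRA32 buffer, starts fully transparent

        for (int x = 0; x < w && x < samples.Length; x++)
        {
            // Map the sample into pixel rows (row 0 is the top).
            float t = (samples[x] - min) / (max - min);
            int y = (int)((1 - t) * (h - 1));
            y = Math.Max(0, Math.Min(h - 1, y));
            pixels[y * w + x] = unchecked((int)0xFF00FF00); // opaque green
        }

        bmp.WritePixels(new Int32Rect(0, 0, w, h), pixels, w * 4, 0);
    }
}

// Usage (hypothetical numbers): one 64x32 plot per region.
// var bmp = new WriteableBitmap(64, 32, 96, 96, PixelFormats.Bgra32, null);
// regionImage.Source = bmp; // an Image control in your layout
// MiniPlot.Render(bmp, latestSamples, 0f, 1f);
```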

Implementing true Pinch and Zoom in the OxyPlot 2D library with MonoTouch

For plotting graphs I used the Core Plot library for a while in my MonoTouch-based iPhone app, but with iOS 6.0 the already annoying binding problems became so numerous that I decided to drop it for a library natively written in C#.
Searching around I found the excellent OxyPlot 2D library, and more specifically the MonoTouch port made by dvkwong.
The library works fine and has tons of useful features, but its output is just a rendered bitmap UIImage.
This means that I need to add the pinch and zoom features to the library myself.
The current implementation, based on dvkwong's preliminary example, uses UIScrollView to zoom and unzoom the resulting bitmap image added to a simple subview.
This is not a good solution: when zooming, the aliasing of the bitmap becomes visible, and if the resolution is increased the text becomes unreadable, because the fonts are not rendered for the current zoom level.
I need to render the image each time at the correct resolution, without using UIScrollView, by overriding the DrawRect() call in a custom UIView.
But how do I reproduce the pinch-zoom gesture of Apple's UIScrollView and draw the correct sub-rect of the OxyPlot plot model?
I tried to implement the method suggested here:
position the pinched view between the two fingers
But it doesn't work, because I need to know the sub-rect to draw, not apply a transformation matrix. Also, there is no "draw sub-rect" method in the OxyPlot library, so I would have to set a clip rect in the image context, draw a bigger image first, and then clip it. That is clearly too slow, because at some zoom levels the image can become huge (and I need the user to be able to zoom indefinitely into any part of the graph).
Any help is appreciated.
Thanks in advance.
I solved the problem myself.
I've created another MonoTouch port of the OxyPlot 2D library, this time supporting both Pan & Pinch-Zoom gestures. I've also added iPad support.
Now we have a native C# plot library for MonoTouch.
You can download it under the MIT Licence here:
https://github.com/Emasoft/OxyPlot.2DGraphLib.MonoTouch
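For anyone who wants to roll their own instead: the core idea is to translate the pinch into a new visible axis range and re-render the plot at native resolution, rather than scaling a bitmap. A hedged sketch against the classic MonoTouch API (the range fields and the Draw body are illustrative; hook in your OxyPlot rendering where indicated):

```csharp
using System.Drawing;
using MonoTouch.UIKit;

// Hedged sketch: treat the pinch as a change to the visible data range,
// then redraw, so text and lines stay crisp at any zoom level.
public class PinchablePlotView : UIView
{
    // Visible range in data coordinates (illustrative fields).
    double xMin = 0, xMax = 100;

    public PinchablePlotView(RectangleF frame) : base(frame)
    {
        AddGestureRecognizer(new UIPinchGestureRecognizer(OnPinch));
    }

    void OnPinch(UIPinchGestureRecognizer g)
    {
        if (g.State != UIGestureRecognizerState.Changed)
            return;

        // Keep the data point under the pinch centroid fixed on screen.
        PointF center = g.LocationInView(this);
        double frac = center.X / Bounds.Width;
        double anchor = xMin + frac * (xMax - xMin);

        // Shrink or grow the visible span by the incremental pinch scale.
        double span = (xMax - xMin) / g.Scale;
        xMin = anchor - frac * span;
        xMax = xMin + span;

        g.Scale = 1f;      // consume the scale incrementally
        SetNeedsDisplay(); // triggers Draw() for the new range
    }

    public override void Draw(RectangleF rect)
    {
        // Render the plot model for the current [xMin, xMax] range here,
        // at the view's native resolution. The same scheme extends to the
        // y axis and to a pan gesture.
    }
}
```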

WPF real-time rendering

I'm designing a game and thinking about using WPF for making a simple prototype of the basic gameplay.
Is it possible to render basic 2D graphics in WPF in real time? By basic graphics I mean simple shapes such as lines, circles, etc. By "real time" I mean things are rendered based on parameters such as velocity, acceleration, etc. that change depending on player input, which I assume means I can't use storyboards for the animations.
Thanks
Check out the previous question High Performance Graphics using the WPF Visual Layer for a good related discussion. While WPF provides a great framework for rich vector graphics, it falls somewhat short on real-time 2D performance.
There are workarounds, though. For instance, depending on your scene complexity, you may get away with using DrawingVisuals or virtualized Shape classes (WPF vector graphics) to draw your sprites. Going a little lower-level, you could cache sprites using the BitmapCache mode available in .NET 4.0, or pre-render them to bitmaps and use various optimization patterns to improve throughput.
Going lower-level still, you can mix vector/raster graphics using the fantastic WriteableBitmapEx project, or vector/GPU graphics using the D3DImage.
Regarding how to update your scene, you'll need to write a primitive game engine where, on the CompositionTarget.Rendering event (fired on each redraw of the screen), you fetch the updated parameters and compute the positions/orientations of your sprites. Something that might help with this is the great CodePlex project which integrates WPF/Silverlight and Farseer physics.
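To make that concrete, here is a hedged minimal sketch of such a loop: one DrawingVisual redrawn on every CompositionTarget.Rendering tick, with motion driven by velocity state instead of storyboards (all names are illustrative):

```csharp
using System;
using System.Windows;
using System.Windows.Media;

// Hedged sketch: a minimal "game loop" host element. One DrawingVisual
// is redrawn on every CompositionTarget.Rendering tick (once per frame).
public class GameSurface : FrameworkElement
{
    readonly DrawingVisual visual = new DrawingVisual();
    Point pos = new Point(50, 50);
    Vector velocity = new Vector(120, 80); // pixels per second
    DateTime last = DateTime.Now;

    public GameSurface()
    {
        AddVisualChild(visual);
        CompositionTarget.Rendering += OnFrame;
    }

    void OnFrame(object sender, EventArgs e)
    {
        DateTime now = DateTime.Now;
        double dt = (now - last).TotalSeconds;
        last = now;

        // Integrate motion from player-driven state; bounce off the edges.
        pos += velocity * dt;
        if (pos.X < 0 || pos.X > ActualWidth) velocity.X = -velocity.X;
        if (pos.Y < 0 || pos.Y > ActualHeight) velocity.Y = -velocity.Y;

        using (DrawingContext dc = visual.RenderOpen())
            dc.DrawEllipse(Brushes.OrangeRed, null, pos, 10, 10);
    }

    protected override int VisualChildrenCount { get { return 1; } }
    protected override Visual GetVisualChild(int index) { return visual; }
}
```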

Display 360 Image in Silverlight 3.0 (Not Panorama)

I have a lot of images taken from a 360 camera which I would like to be able to display in Silverlight 3. They are NOT regular panorama images. The camera which took the image actually creates a distorted jpeg that becomes undistorted once wrapped around a sphere as a texture. I have desktop software that will allow viewing of the image (not just side-to-side, but straight up, down, etc.) and I need to try to get the same functionality in Silverlight. It is very similar to Google StreetView.
What I think I need is to create a sphere, wrap the jpeg on the sphere as a texture, then put the "camera" inside the sphere. I doubt this is possible in Silverlight, but perhaps there is a way to simulate this?
So far, Google searches aren't bringing anything up. Can anyone point me in the right direction to figure out how to do this? Are there any existing projects that do this?
An example of a typical image is here.
These might help you out (though probably not). They are 3D engines for Silverlight, but they will probably wrap the image around the outside of the sphere instead of the inside, which is what you need.
Kit3D http://www.codeplex.com/Kit3D
Balder http://www.codeplex.com/Balder
Another, possibly more promising, option would be to use JavaScript. So far you've probably researched how to do this in Silverlight, but you might do some similar searching for JavaScript approaches. There may be an option out there already, and since Silverlight can interop with JavaScript, you might be in luck.
You're going to have to map the texture to a sphere then, like you said. But AFAIK Silverlight 3 doesn't support hardware-accelerated 3D.
So your options are:
Try and find a silverlight software 3d library (Like this)
Write your own software rasterizer (multi page guide)
Hope this helps
You might want to try cropping a window from the image and displaying it. If the user wants to go right, move the window right and crop; if the user wants to go left, move the window left and crop. To zoom out, expand the window; to zoom in, make the window smaller. If you move the frame far to the right, stitch in the image data from the left side.
You might need to modify the image first to eliminate the distortion; this shouldn't be too hard, and depends on the camera's lens focal length.
Don't try mapping the image to a sphere; it is much harder.
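A hedged sketch of that cropping approach, using the WriteableBitmap introduced in Silverlight 3 (the yaw/fov parameters are illustrative, and lens distortion is ignored, as noted above):

```csharp
using System.Windows.Media.Imaging;

// Hedged sketch: copy a horizontally wrapping window out of the full
// 360-degree source image. yawDeg is the view direction, fovDeg the
// window width in degrees; top/height select the vertical band.
public static class PanoCropper
{
    public static WriteableBitmap Crop(WriteableBitmap source,
                                       double yawDeg, double fovDeg,
                                       int top, int height)
    {
        int srcW = source.PixelWidth;
        double yaw = ((yawDeg % 360.0) + 360.0) % 360.0; // normalize
        int startX = (int)(srcW * yaw / 360.0);
        int outW = (int)(srcW * fovDeg / 360.0);

        // Silverlight 3's WriteableBitmap exposes a raw ARGB pixel array.
        var dest = new WriteableBitmap(outW, height);
        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < outW; x++)
            {
                // Wrap horizontally: panning past the right edge of the
                // panorama continues from its left edge.
                int sx = (startX + x) % srcW;
                dest.Pixels[y * outW + x] = source.Pixels[(top + y) * srcW + sx];
            }
        }
        dest.Invalidate();
        return dest;
    }
}
```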
At https://hdviewsl.codeplex.com it says that HD View SL (Silverlight version) supports
"orthographic (2D), with wrapping for 360-degree panoramas"
Also, you could try to port the PtViewer source code from Java to Silverlight, if no one else has.
UPDATE:
VRLight might be the solution in your case:
http://vrlight.thecloudsite.net/
http://vrlight.thecloudsite.net/tutorial.html
http://ivrpa.org/blog/3651/vrlight_vredit_20
Its author (Jurgen Eidt) is also making cPicture (http://cpicture.thecloudsite.net/index.en.html). If you can't find him via the VRLight site, try the cPicture one, or his blog on the IVRPA website (http://ivrpa.org/blog/3651), which seems to have recent posts.

WPF capabilities

In my company we have in mind a redesign of the user interface of an application, and we would like to make it... let's say, "fancy". We have a simple storyboard in mind, but I'm torn between WPF, XNA, and DirectX. I prefer WPF, so I'd need to know whether it supports the following capabilities and how difficult they are to implement:
Transparency: We'd like to display information layers on top of the main display.
3D support: We want network nodes (part of the interface is a network graph) to be simple spheres connected by lines in a 3D environment, with the ability to control the camera so the scene can be rotated.
Effects: Such as shading, lens flare or glow to "signal" the discovery or deletion of a node.
Text animations: Specifically, the ability to display text as if it's being written... You know, the information text will be "filling" the panel top-down, left to right...
Good news. WPF is the technology you want and it can handle your requirements with relative ease.
Transparency is simple.
3D support is good as well. For an example, check out Tim Sneath: Five Great WPF 3D Nuggets. You even get hardware acceleration.
Effects are definitely do-able via timeline animations.
The previous statement goes double for Text Animations.
...the hardest part would be the 3D support, but it's still going to be a lot easier than getting things done in XNA or using DirectX libraries directly.
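As a taste of how little code the text animation needs, here is a hedged sketch of the "being written" effect with a plain DispatcherTimer, no storyboard required (the TextBlock name is illustrative):

```csharp
using System;
using System.Windows.Controls;
using System.Windows.Threading;

// Hedged sketch: "typewriter" text fill using a DispatcherTimer.
// infoTextBlock is assumed to be a TextBlock in your information panel.
public static class Typewriter
{
    public static void Play(TextBlock infoTextBlock, string fullText)
    {
        int shown = 0;
        var timer = new DispatcherTimer
        {
            Interval = TimeSpan.FromMilliseconds(40) // ~25 characters/second
        };
        timer.Tick += (s, e) =>
        {
            infoTextBlock.Text = fullText.Substring(0, ++shown);
            if (shown == fullText.Length) timer.Stop();
        };
        timer.Start();
    }
}
```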
AFAIK WPF 3.5 supports all of this, and even leverages hardware acceleration to get decent performance.
It's possible to embed an XNA application in a WPF window, so you could use XNA for the representation of your network and WPF controls for the GUI in front of it.
