ARKit SceneKit Metal Integration

I am trying to integrate the creation of objects in ARKit with
SceneKit and Metal. There are objects I can easily create with SceneKit and others with Metal, so I'd like to be able to use both side by side.
For example, creating an ARKit project with the SceneKit template places a spaceship in the scene as a demo through SceneKit nodes. On the other hand, the Metal template creates a cube in ARKit on a tap, using an anchor.
How would I be able to use both in the same project? Creating a cube through Metal and a spaceship with SceneKit, for example.
Another example: I have an MTKView that I want to wrap as an object, or layer on top of an object in SceneKit, and I also have other objects that I am creating with PBR in SceneKit. I want to have these side by side in ARKit, if possible.
Maybe with this?
https://developer.apple.com/documentation/scenekit/scnprogram
I think this, if possible, would be very useful for the growing ARKit community.
Thanks!

You should be able to use SceneKit and Metal content in your scene at the same time. Use SCNSceneRendererDelegate, which gives you access to the renderer and its currentRenderCommandEncoder property (https://developer.apple.com/documentation/scenekit/scnscenerendererdelegate):
renderer(_:willRenderScene:atTime:) - for rendering Metal content before SceneKit's
renderer(_:didRenderScene:atTime:) - for rendering Metal content after SceneKit's
Or use SCNNodeRendererDelegate to replace the content of a node with Metal content (https://developer.apple.com/documentation/scenekit/scnnoderendererdelegate):
renderNode(_:renderer:arguments:)
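As a minimal sketch of the first approach, a delegate can encode extra Metal draw calls into SceneKit's own command encoder. The `pipelineState` and `vertexBuffer` properties here are assumptions: you would build them from your own Metal shader library and geometry during setup.

```swift
import SceneKit
import Metal

// Hypothetical delegate that draws extra Metal content after SceneKit's pass.
final class MetalOverlayRenderer: NSObject, SCNSceneRendererDelegate {
    var pipelineState: MTLRenderPipelineState?   // assumed: built from your shaders
    var vertexBuffer: MTLBuffer?                 // assumed: your own geometry

    func renderer(_ renderer: SCNSceneRenderer,
                  didRenderScene scene: SCNScene,
                  atTime time: TimeInterval) {
        // SceneKit exposes the encoder it is currently using; draw into it.
        guard let encoder = renderer.currentRenderCommandEncoder,
              let pipeline = pipelineState,
              let vertices = vertexBuffer else { return }
        encoder.setRenderPipelineState(pipeline)
        encoder.setVertexBuffer(vertices, offset: 0, index: 0)
        encoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: 3)
    }
}
```

Set an instance of this class as the delegate of your ARSCNView (`sceneView.delegate = overlayRenderer`) and keep a strong reference to it; implement `renderer(_:willRenderScene:atTime:)` instead if the Metal content should appear behind SceneKit's.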

How would I be able to use both in the same project?
You have to choose whether you want to build your app with Metal or SceneKit; both have their ups and downs.
I think there are two approaches you could take:
Use SceneKit. Since SceneKit uses Metal under the hood, you can enhance or replace its shaders or programs. Start by reading the documentation about using Metal in SceneKit. Maybe you can do what you want in Metal while still using SceneKit's abstractions where possible.
Use Metal. If using Metal within SceneKit as Apple intended is not enough, you have to build everything in Metal.
To load complex models you can use Apple's Model I/O to load 3D content into Metal.
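For the first approach, one lightweight way to mix Metal code into SceneKit is a shader modifier: a Metal snippet that SceneKit injects into its own generated shaders. A minimal sketch, where the orange tint is just an illustration:

```swift
import SceneKit

// Attach a Metal fragment-stage snippet to a SceneKit material.
let box = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1,
                                   length: 0.1, chamferRadius: 0))
box.geometry?.firstMaterial?.shaderModifiers = [
    // Runs inside SceneKit's generated Metal fragment shader;
    // here it blends an orange tint into the shaded color.
    .fragment: """
    _output.color.rgb = mix(_output.color.rgb, float3(1.0, 0.5, 0.0), 0.5);
    """
]
// box can now be added to the scene graph like any other SCNNode.
```

This keeps SceneKit's scene graph, lighting, and PBR pipeline intact while letting you customize the shading in Metal.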

Related

3D model formats in ARKit / ARCore development

I am a beginner at AR game development for both iOS and Android. I have the following questions:
What kinds of 3D model formats are supported by ARKit for iOS and ARCore for Android, respectively? (I found that .dae and .obj are supported by ARKit; I haven't tested ARCore yet.)
Our 3D model vendor can only provide us with FBX files. How can I convert them to the formats supported by ARKit and ARCore? I tried using a 3D model converter, but the converted model has no texture.
Updated: January 22, 2023.
SceneKit
Apple's SceneKit framework handles 3D models for ARKit and VR apps. SceneKit supports the following 3D assets with their corresponding material files:
.dae (supports animations)
.obj (single-frame) with its texture and .mtl file
.abc (only single-frame supported)
.usdz (supports animations)
.scn (native SceneKit's format, supports animations)
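A minimal sketch of loading the formats listed above into SceneKit, assuming a bundled `art.scnassets/ship.scn` and a `robot.usdz` (both file names are placeholders):

```swift
import SceneKit

// Native .scn assets load straight from the bundle:
if let scene = SCNScene(named: "art.scnassets/ship.scn") {
    // Nodes (and any baked animations) come in with the scene.
    let ship = scene.rootNode.childNode(withName: "ship", recursively: true)
}

// .usdz, .dae, .obj and .abc can also be loaded from a URL:
if let url = Bundle.main.url(forResource: "robot", withExtension: "usdz"),
   let robotScene = try? SCNScene(url: url, options: nil) {
    // Attach robotScene.rootNode's children to your ARSCNView's scene.
}
```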
RealityKit
Apple's RealityKit framework also handles 3D models for ARKit, AR, and VR apps. You can prototype content for RealityKit in a standalone app called Reality Composer. RealityKit supports the following 3D assets:
.usdz (supports animations and dynamics)
.reality (supports animations and dynamics; optimized for faster loading)
.rcproject (supports animations and dynamics)
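A minimal sketch of loading those asset types in RealityKit; the file and scene names are assumptions, and .rcproject scenes are reached through Xcode's generated loader:

```swift
import RealityKit

// .usdz from the app bundle:
let toy = try? Entity.loadModel(named: "toy")

// .reality files load from a URL:
if let url = Bundle.main.url(forResource: "scene", withExtension: "reality") {
    let experience = try? Entity.load(contentsOf: url)
}

// .rcproject: Xcode generates a loader per Reality Composer scene, e.g.
// let box = try? Experience.loadBox()
```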
Additionally, you can use the usdzconvert command-line tool in Terminal to generate .usdz from the following formats:
.obj
.glTF
.fbx
.abc
.usda
.usdc
.usd
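As a sketch, a conversion with usdzconvert (part of Apple's USDZ Tools; the file names here are placeholders) looks like this:

```shell
# Basic conversion from FBX to USDZ:
usdzconvert model.fbx model.usdz

# If textures are lost in conversion, maps can be bound explicitly:
usdzconvert model.fbx model.usdz -diffuseColor baseColor.png
```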
And, of course, you can use the Reality Converter app with its simple GUI.
Sceneform
Sadly, Sceneform has been archived since June 2020 and is no longer maintained by Google.
Google's Sceneform handles 3D models for the ARCore SDK. Sceneform supports the following 3D assets with their material dependencies:
.obj (with its .mtl dependency)
.glTF (animations not supported)
.fbx (with or without animations)
.sfa (ASCII asset definition; deprecated in Sceneform 1.16)
.sfb (binary asset definition; deprecated in Sceneform 1.16)
SceneKit, RealityKit, Sceneform and Reality Composer support Physically Based Rendering.
ARKit and ARCore
But what's the role of ARKit and ARCore then?
These two AR modules don't handle importing or rendering 3D geometry. They are only responsible for tracking (world, image, face, geo, etc.) and scene understanding (i.e. plane detection, hit-testing and raycasting, depth perception, light estimation, and geometry reconstruction).
ARKit doesn't care about model formats, because ARKit doesn't display 3D models (or anything else) itself.
ARKit provides information for use by a higher level rendering engine — Apple's SceneKit or SpriteKit, third-party Unreal or Unity plug-ins, or an engine you build yourself with Metal or OpenGL. The rendering engine is responsible for drawing the camera feed and overlaying 3D content to create AR illusions, and it uses the information ARKit provides in order to do so convincingly.
I don't know much about ARCore, but from all appearances it has the same role in the Android ecosystem — it's Unity, Unreal, or some other engine that handles the 3D models there, too.
So, questions like this are specific to whatever 3D engine you're using with ARKit/ARCore. SceneKit can handle DAE and OBJ directly, and a few more formats via Model I/O (see MDLAsset and SCNScene.init(mdlAsset:)). For the formats Unreal, Unity, and whatever else you use with ARCore can handle, see the documentation for those technologies.
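The Model I/O route mentioned above can be sketched like this, assuming a bundled `model.stl` (a format SceneKit can't open directly, but Model I/O can):

```swift
import SceneKit
import SceneKit.ModelIO   // bridges MDLAsset into SceneKit
import ModelIO

if let url = Bundle.main.url(forResource: "model", withExtension: "stl") {
    let asset = MDLAsset(url: url)         // Model I/O import
    let scene = SCNScene(mdlAsset: asset)  // SCNScene.init(mdlAsset:)
    // scene.rootNode now contains the imported geometry.
}
```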
ARCore itself doesn't come with any 3D model handling logic at the moment. To render a 3D model in any format, you need to parse the data and draw it using OpenGL. The sample app in the package demonstrates how this can be done for a simple 3D model, i.e. one OBJ file and one texture file.
I'm not sure you've checked ARCore properly. The basic example delivered by Google works with the .obj format. ARCore is a set of tools related only to AR; you can use whatever 3D format you want, as long as you are able to use it on Android. The format is not related to ARCore.
Question 2 is not related to ARCore, Android, or even ARKit.

choosing 3D library / platform for mobile and Desktop application

I want to build an application for mobile (iOS, Android) and desktop (Windows) or the web.
The application will look like this: a 3D object around which the user can move the camera perspective, plus some menus.
What I need is to manipulate a 3D object like a torus or a tube. By manipulate I mean: change materials and edit the object like a polygon. Of course, I also need menus and communication with a server.
*Optional: I am not sure whether I need to load a 3D model from a file.
What I don't understand is: should I look for an all-in-one solution or combine a cross-platform framework with other libraries? Are game engines suitable for this task?
My options so far:
Use Three.js with PhoneGap and write in JavaScript.
Use OGRE and write in C++ with some cross-platform framework that allows me to write in C++.
I have never developed for mobile, and I wonder about those cross-platform frameworks: is the application's size big? Does the application run well? Which obstacles would I face compared with writing separate applications for each platform?
Thank you for your help
You might want to look at libGDX.

Options for developing 3D Web App

I am developing a web app in which the most challenging aspect is the 3D model section. It also contains other things, such as drag and drop, and a sliding bar with arrows on each side that the user can go through to select different items.
I have been looking into WebGL, but it seems IE doesn't support it without a plugin. This isn't ideal, so I was wondering what other options I have.
Flash? Silverlight? Anything else?
Silverlight is WAY too rare to be used for any meaningful development, and Ajax is simply not stunning enough. That leaves us with Flash, which in my opinion is the best of the three options. Flash Player 11 introduces the new Stage3D API, which can create pretty stunning 3D graphics. There are also many AS3 libraries for 3D rendering; I prefer Away3D.
Just found this, which looks promising.
http://iewebgl.com/
Edit: Disregard; it seems the user has to download the executable for it to work.
At the moment, Flash is the most popular plugin for 3D-based web apps. It is cross-browser, but no mobile browsers support it :(
There are many technologies, and every one has its disadvantages.

Manipulation with GIS content on the web using the WebGL

I have the task of creating a program for manipulating 3D content on the web. When I say 3D content, I mean
a 3D map (which I have; it is something like *.sdm) that I should load into the browser and perform some basic operations on (rotate the screen, change the camera, etc.).
Because I am a total n00b, I want to ask a couple of questions:
1. How do I load maps into the browser? Just to note, my map has the .sdm extension. Is this possible?
2. What should I use to represent 3D content? I am thinking of the GLGE framework for WebGL, if that is possible, of course.
What would be the most painless and most effective way to do this? Maybe I was totally wrong to choose WebGL?
Programs that use WebGL aren't mature enough to do what you want. Within the next few years, when GIS applications start popping up it may be possible, but not now.
Also, keep in mind that WebGL is what gives you access to a low-level graphics library. It does not directly have anything to do with GIS data.
You may want to take a look at OpenLayers (2d, javascript based) or WorldWind-Java (3d, jogl/java based). Both of these programs can display map information in a browser.
http://openlayers.org/
http://worldwind.arc.nasa.gov/java/

Cross platform 2D Vector + Raster API + hardware accelerated - does this exist?

Requirements:
Retained graphics mode API
For 2D objects only (though 3D transforms of these 2D objects is of interest)
Cross-platform
Vector graphics drawing
Raster compositing + support for opacity masks - hardware accelerated of course...
Animation API
Package size - can it run in an embedded environment?
This is not for a game, but I am not opposed to using a game type API.
Some thoughts:
Qt is probably too heavyweight, but I am not familiar enough with the API to know whether it would meet the requirements. I am not interested in Qt's window management (there are no windows) or its widget/control set, as this is not for a desktop-type application. Also, I am not sure whether Qt has an animation framework. Thoughts here?
Most likely what this would be is a framework built on top of OpenGL; I just don't know if such a thing exists. Also, I am unclear about 2D graphics in OpenGL: are 2D graphics truly 2D, or are they simply 3D objects drawn on a plane oriented to look 2D?
WPF is to DirectX as _____________ is to OpenGL
If the blank can be filled that is what I am looking for.
Update #2
I spent some time this weekend with Qt and discovered the QGraphicsScene class, which seems to be the fundamental class for Qt's 2D retained-mode graphics, and QGraphicsWidget, which offers some of the auto-layout functionality of the QWidget class.
Qt is close to passing my litmus test. One final thing to figure out is a good designer-to-developer workflow when dealing with vector images, i.e., how do I take an icon created in Illustrator and turn it into a QGraphicsItem? This might be a good candidate for a new (more focused) question.
You might want to check out Cairo; it has an OpenGL backend. I don't think it has an animation API, though.
As to using Qt.
It's not heavyweight in any meaningful sense. The dynamic library is a few megs, and the graphics operations are quite optimized, I believe.
It does not yet have a stable version using OpenGL acceleration; this is coming in Qt 4.5.
It does use XRender or something similar for 2D acceleration.
It also has a great drawing API and an animation API (QTimeLine for simple things and, more recently, the more powerful QtAnimation).
While OpenGL is a great tool for 3D rendering, it is important to understand that at the end of the day the output medium is inherently 2D. Perception of the third dimension is achieved through visual cues such as lighting, far objects appearing smaller than near objects, and near objects occluding far ones.
These visual cues are implemented as computations at various stages of the graphics pipeline. Lighting and shading, viewport transforms, and depth testing are some of the operations used to create the illusion of 3D.
When using OpenGL for 2D, many of the pipeline operations typically used for 3D rendering can be skipped. This can improve performance thanks to reduced computation, and has the added benefit of simplifying the source code. There are also a number of operations that work specifically on a 2D raster, such as drawing sprites.
Instead of thinking of 2D rendering as a reduced set of 3D rendering, I would encourage you to consider 3D rendering as the result of carefully constructed 2D elements.
WPF is also 3D. The library is tuned and designed to make 2D easy, but it has all the 3D transformations in there as well.
