Reduce Application Lag while using shadows in SceneKit

I am working on a 3D map in SceneKit. When I enable the Cast Shadow Property of a directional light in SceneKit, the shadows appear, but the application becomes very slow.
How do I reduce the lag while still maintaining shadows in the scene?

Use fake shadows (shadows pre-rendered as a texture in a 3D or 2D authoring app) rather than a true shadow map. To apply a fake shadow as a texture on a 3D plane, use a PNG file with a premultiplied alpha channel (RGB * A). This considerably reduces processing and memory usage when using shadows in SceneKit or RealityKit.
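A minimal SceneKit sketch of that approach is below; it assumes a pre-rendered file called shadow.png with premultiplied alpha in the app bundle, and the plane size and offsets are only illustrative:

    import SceneKit

    // Builds a flat "fake shadow" plane to place under a model, using a
    // pre-rendered PNG instead of a real shadow map. "shadow.png" is an
    // assumed bundle resource; sizes and offsets are illustrative.
    func makeFakeShadowNode(width: CGFloat, length: CGFloat) -> SCNNode {
        let plane = SCNPlane(width: width, height: length)

        let material = SCNMaterial()
        material.diffuse.contents = "shadow.png"   // premultiplied-alpha PNG
        material.lightingModel = .constant         // the baked shadow shouldn't react to scene lights
        material.writesToDepthBuffer = false       // avoid z-fighting with the floor
        plane.firstMaterial = material

        let node = SCNNode(geometry: plane)
        node.eulerAngles.x = -.pi / 2               // lay the plane flat
        node.position.y = 0.001                     // lift it slightly above the ground
        node.castsShadow = false                    // the whole point is to skip the real shadow map
        return node
    }

With a plane like this under the model, the directional light's castsShadow can stay off, so neither the shadow map pass nor its extra memory is needed.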

Related

How to generate shadow png file from Autodesk Maya?

I am rendering 3D furniture models in Autodesk Maya for an AR app. To show a shadow under a 3D model in the scene file, I need to export a transparent PNG from Autodesk Maya that I can use as the diffuse image for a shadow plane. I have attached an example of the kind of file needed (a shadow file generated for a chair model), but I don't know how to generate the same thing in Autodesk Maya. Please help.
If you want to generate a UV texture containing shadows in Maya for ARKit / SceneKit models, follow these steps (I should say it's not easy if you're a beginner in Maya):
Create objects with appropriate shaders
Create a UV map for your models in the UV Editor
Activate the Maya Software Renderer
Create lights with Raytraced Shadows
Don't forget to turn Raytracing on in the Render Options
Duplicate all 3D objects and lights
Select an object with cast shadows and a corresponding light
Go to the Rendering module and choose the Lighting/Shading - Transfer Maps... menu.
Set up all the properties for the Shaded output map and its output location, then press Bake.
You can also output four additional passes from the Maya Software Renderer:
Normal Map
Displace Map
Diffuse Map
Alpha Map
If you want just the shadows, without a diffuse texture, use a white-coloured Lambert shader in Maya. A short sketch of how the baked texture can then be used in SceneKit follows.
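As a rough sketch of how the baked texture might then be consumed on the SceneKit side, the snippet below multiplies it over a model's diffuse colour; the node name "chair" and file name "bakedShadows.png" are assumptions:

    import SceneKit

    // Applies a shadow texture baked in Maya (e.g. via Transfer Maps) to a model.
    // The texture shares the model's UV layout, so it can be multiplied over the
    // diffuse colour. The node and file names are illustrative.
    func applyBakedShadows(in scene: SCNScene) {
        guard let chair = scene.rootNode.childNode(withName: "chair", recursively: true),
              let material = chair.geometry?.firstMaterial else { return }

        material.multiply.contents = "bakedShadows.png"   // baked shadow pass from Maya
        material.multiply.wrapS = .clamp
        material.multiply.wrapT = .clamp
    }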

What projection matrix does SceneKit use for directional light shadows?

I'm trying to prevent shadow swimming when moving the directional light around so that it covers only the visible part of my scene. Despite rounding the translation components in light space, the swimming artifacts are still there.
I even tried pointing the light straight down (with a scale of 2 and a shadow map size of 32) and translating the directional light along the world x-axis by -1 and +1. The shadows still changed, so I can only assume SceneKit isn't using a normal orthographic projection. The shadows also seem to move more near the edge.
Has anyone had luck preventing shadow swimming in SceneKit, or knows what projection matrix is being used for the directional light shadows?
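I can't say which projection SceneKit actually uses internally, but for reference, here is a sketch of the texel-snapping technique described in the question (rounding the light's translation in light space to whole shadow-map texels). The function name, the square shadow map, and the treatment of orthographicScale as a half-extent are all assumptions:

    import SceneKit
    import simd

    // Snap a directional light's position to the shadow map's texel grid so that
    // small camera/light movements don't make the shadows "swim". Assumes the
    // light has an explicit shadowMapSize and orthographicScale.
    func snapShadowOriginToTexelGrid(lightNode: SCNNode, desiredWorldPosition: SIMD3<Float>) {
        guard let light = lightNode.light, light.type == .directional else { return }

        // World-space width covered by the shadow map, and the size of one texel.
        let coverage = Float(light.orthographicScale) * 2
        let texelSize = coverage / Float(light.shadowMapSize.width)

        // Rotate into light space, round x/y to whole texels, rotate back.
        let orientation = lightNode.presentation.simdWorldOrientation
        var lightSpace = orientation.inverse.act(desiredWorldPosition)
        lightSpace.x = (lightSpace.x / texelSize).rounded() * texelSize
        lightSpace.y = (lightSpace.y / texelSize).rounded() * texelSize
        lightNode.simdWorldPosition = orientation.act(lightSpace)
    }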

Applying a pixel shader to a Viewport3D

I'm new to pixel shaders, and I'm trying to apply an underwater effect to my 3D scene. I can apply it to an image and animate it easily enough, but not to my Viewport3D. The computer just hangs when calling BeginAnimation on the effect applied to the Viewport3D. Is this something that cannot be done in WPF?
After a little digging I learned that pixel shaders are only applied to two-dimensional types, like images. What I would need is a vertex shader, and WPF doesn't provide any.

direct access to pixel data in wpf 3d

Is there a 3D equivalent of 2D's LockBits in Windows Presentation Foundation for direct pixel access?
I understand you can paint one triangle at a time ("3D for the rest of us"). Wouldn't it be easier to paint in cubes instead of triangles? (I need to paint a stack of images, such as an MRI sequence.)
The WriteableBitmap class gives you access to the pixels. I'm a bit unsure what you want with regard to 3D, but you should be able to use a WriteableBitmap as the texture for each item and position them in 3D as required. For creating a stack of images, 3D Panel and FluidKit's ElementFlow might be of interest to you.
Triangles are used because three points always define a flat surface, which makes shading simpler and more predictable, and you can build any shape if you use enough triangles.
If by painting in cubes you mean tiny cubes used the way pixels are in 2D, those are known as voxels. They have their use cases, but most hardware and software are designed with polygons in mind.

Can pixel shaders be used when rendering to an offscreen surface?

I'm considering integrating some D3D code I have with WPF via the new D3DImage, as described here:
My question is this: do pixel shaders work on offscreen surfaces?
Rendering to an offscreen surface is generally less constrained than rendering directly to a back buffer. The only constraint that comes with using an offscreen surface with D3DImage is that it must be in a 32-bit RGB/ARGB format (depending on your platform). Other than that, everything the hardware has to offer is at your disposal.
In fact, plenty of shader effects rely on offscreen surfaces for multipass rendering or full-screen post-processing.
I don't know if there's anything special about it with WPF, but in general yes, pixel shaders work on offscreen surfaces.
Some effects require rendering to a different surface, such as glass refraction in front of a shader-rendered scene. Pixel shaders cannot access the current screen contents, so the view must first be rendered to a buffer and then used as a texture in the refraction pass, letting the shader sample the background colour from a pixel other than the one being calculated.
