I'm having trouble casting a shadow in my scene. Steps I've taken:
shadowMapEnabled attribute added to the React3 element
the directional light in my scene has, verbatim, the properties I see here in the react-three-renderer example
all three meshes (one cube and two planes) in the scene have castShadow and receiveShadow
I have a black cube in the image below to show where the directional light is emanating from.
Here's a gist of my code (abbreviated).
Try reducing the shadow camera's near value; it looks too high. If you can provide a full example, it may be easier to diagnose what's happening.
Additionally, try placing a different object at the "lookAt" target for the light; this should help identify which direction it is facing.
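For reference, here is a rough sketch of a shadow-casting directional light in react-three-renderer, loosely following its shadow example. The prop values (position, camera bounds, map size) are placeholders to tune for your scene, and it assumes three.js is imported as THREE:

    // Sketch only: prop names follow react-three-renderer's shadow example;
    // adjust the values (and any props your version differs on) to taste.
    <directionalLight
      color={0xffffff}
      intensity={1.5}
      position={new THREE.Vector3(30, 50, 30)}  // where the light sits
      lookAt={new THREE.Vector3(0, 0, 0)}       // aim at the scene origin
      castShadow
      shadowMapWidth={1024}
      shadowMapHeight={1024}
      shadowCameraNear={1}                      // keep this small
      shadowCameraFar={200}
      shadowCameraLeft={-40}
      shadowCameraRight={40}
      shadowCameraTop={40}
      shadowCameraBottom={-40}
    />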
I'm using physically based lighting to light my scene in ARKit; however, I also want to add shadows to make it more realistic. I tried adding a directional light and setting the intensity as low as possible, but I still can't achieve the effect I want. I basically want a light that only casts shadows and has no effect on the lighting in the scene.
Is there any way I can achieve this effect?
You should use the correct SCNShadowMode value.
From Apple's documentation:
Each shadow mode may have a positive or negative effect on rendering performance, depending on the contents of the scene. Test your app to determine which shadow mode provides the best balance between performance and quality for the scenes you want to render.
case forward:
SceneKit renders shadows during lighting computations.
case deferred:
SceneKit renders shadows in a post-processing pass.
case modulated:
SceneKit renders shadows by projecting the light’s gobo image. The light does not illuminate the scene.
So the option you want is modulated.
I hope this helps!
Update.
Light source:
Directional:
- Intensity: 1000
- Mode: Dynamic
- Color: #000000 (rgb(0, 0, 0))
- Shadow mode: Modulated
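As a rough Swift sketch of that configuration (the values mirror the list above; the downward angle and the sceneView reference are assumptions for illustration):

    import SceneKit
    import UIKit

    // Shadow-only directional light: black color plus modulated shadow mode,
    // so it darkens surfaces in shadow without adding illumination.
    let shadowLight = SCNLight()
    shadowLight.type = .directional
    shadowLight.intensity = 1000
    shadowLight.color = UIColor.black
    shadowLight.castsShadow = true
    shadowLight.shadowMode = .modulated

    let shadowLightNode = SCNNode()
    shadowLightNode.light = shadowLight
    shadowLightNode.eulerAngles = SCNVector3(-Float.pi / 2, 0, 0)  // point straight down
    sceneView.scene.rootNode.addChildNode(shadowLightNode)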
I would like to allow the user to rotate the scene by touch but have the lighting remain fixed. This works quite well using the default camera and default lighting. However, the default light is "straight on", i.e. along the screen's -z axis. I would rather it be directed at an angle more like a stage light, say from the front upper right.
But when I create my own light it appears that it needs to be attached to an existing node, the rootNode for example. When this is done, the light then rotates around with the model as the user manipulates the scene.
Is there a simple way to keep the lighting fixed while rotating with the default camera or do I need to get seriously involved creating a custom camera?
The lighting is already "fixed": that is, each light source keeps its position and direction within the scene unless you do something to change it. But it sounds like instead, you want to have a light that is fixed relative to a camera.
To achieve this, don't attach the light to the scene's root node. Instead, attach it to the same node that the camera is attached to. Or if you want to adjust the light's position relative to the camera, you could construct a small node tree, with one leaf containing the camera and the other leaf containing a directional light.
You'll almost always want to create your own camera or cameras in SceneKit. The default user-manipulable camera is useful for quickly getting up and running, and debugging, but not something that you want to expose to end users.
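A minimal sketch of that node tree (names, positions, and angles are illustrative; scene stands for your SCNScene):

    import SceneKit

    // One "rig" node carries both the camera and the light, so the light
    // stays fixed relative to the camera while the model is rotated.
    let rigNode = SCNNode()
    rigNode.position = SCNVector3(0, 0, 15)

    let cameraNode = SCNNode()
    cameraNode.camera = SCNCamera()
    rigNode.addChildNode(cameraNode)

    let lightNode = SCNNode()
    lightNode.light = SCNLight()
    lightNode.light?.type = .directional
    // Angle the light like a stage light, shining from the front upper right
    // (directional lights only use orientation, not position).
    lightNode.eulerAngles = SCNVector3(-Float.pi / 6, Float.pi / 6, 0)
    rigNode.addChildNode(lightNode)

    scene.rootNode.addChildNode(rigNode)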
I am doing some game-related rendering with Silverlight, and when I attach a pixel shader to an image that has a (rotational) transformation, I am seeing a strange, fuzzy pixelation effect.
Here is a screenshot of the problem. The image on the left has just a transformation. The image on the right has a transformation and a pixel shader.
(source: andrewrussell.net)
You can see this in action here on my blog (click the Silverlight control to add the pixel shader).
The pixel shader in question is the one from SilverSprite used to tint an image's colour. You can view its source code here.
The transformation I am applying is a MatrixTransform (with a hand-calculated translate, scale, rotate matrix). The problem appears when rotating the image.
The element that both the shader and the transform are being applied to is an Image that is added to a Canvas in code. The Image's ImageSource is a WriteableBitmap but the effect also happens with a BitmapImage.
My question is: what is causing this fuzzy pixelation, and what can be done to reduce or remove it?
After watching this presentation from PDC09, I have a much better idea of how the rendering system in Silverlight works. This problem isn't directly addressed by the presentation, but knowing the rendering order of things helps.
The order of the rendering steps relevant to my question is: an object's children (and/or the object itself) are rendered; that rendering passes through the Effect and then through the RenderTransform.
It appears that whenever a RenderTransform is applied to an object whose children (or the object itself) have an Effect applied (i.e. a RenderTransform that comes after an Effect in the rendering tree), that RenderTransform is done in a "low quality" mode that produces this "fuzziness".
The solution, then, is to move the Effect to after the RenderTransform. In my case this means putting the Image on its own Canvas, applying the RenderTransform to the Image, and the Effect to the Canvas.
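A minimal sketch of that arrangement, assuming the Image is created in code as described (spriteBitmap, spriteMatrix, tintShaderEffect and parentCanvas are placeholder identifiers):

    // The Effect goes on a wrapper Canvas and the RenderTransform on the Image,
    // so the shader now runs after the transform instead of before it.
    var image = new Image { Source = spriteBitmap };
    image.RenderTransform = new MatrixTransform { Matrix = spriteMatrix };

    var wrapper = new Canvas();
    wrapper.Children.Add(image);
    wrapper.Effect = tintShaderEffect;   // the SilverSprite tint shader instance

    parentCanvas.Children.Add(wrapper);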
What I'm trying to do is build a "light projector" with a visible ray (as with fog), also called volumetric light, which projects an image (bitmap).
Because I would like to keep this project connected to a WPF application (to get the brush, position, and rotation from data), I chose to use WPF 3D.
But it seems that WPF can't handle light projection or render rays.
So to work around that, I extruded each pixel of my source bitmap into a polygon colored by a SolidColorBrush of that pixel's color, keeping the pixel order via its (x, y) position.
For performance reasons, I scaled all the bitmaps down to 32x32 px (1024 polygons for a single light!).
But the result is too pixelated, as you can see in the picture.
Moreover, it probably takes a lot of memory for nothing...
My question is: how can I make it smooth, or even rethink the extrusion system to optimize performance?
Is there any other technology that can be integrated into a WPF application and do this better or more easily?
Thanks, and sorry, my English is pretty bad...
(image: http://www.visualdmx.fr/pic_example.png)
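To illustrate, here is a simplified sketch of this kind of per-pixel extrusion (not my actual code; it assumes a Bgra32 conversion and flat, unit-sized quads):

    // using System.Windows.Media;
    // using System.Windows.Media.Imaging;
    // using System.Windows.Media.Media3D;

    // Each pixel of a small bitmap becomes one flat, colored quad,
    // which is why a 32x32 source already means 1024 models.
    Model3DGroup BuildProjector(BitmapSource source)
    {
        var bitmap = new FormatConvertedBitmap(source, PixelFormats.Bgra32, null, 0);
        int w = bitmap.PixelWidth, h = bitmap.PixelHeight;
        var pixels = new byte[w * h * 4];
        bitmap.CopyPixels(pixels, w * 4, 0);

        var group = new Model3DGroup();
        for (int y = 0; y < h; y++)
        {
            for (int x = 0; x < w; x++)
            {
                int i = (y * w + x) * 4;  // BGRA layout
                var color = Color.FromRgb(pixels[i + 2], pixels[i + 1], pixels[i]);

                var quad = new MeshGeometry3D();
                quad.Positions.Add(new Point3D(x, -y - 1, 0));      // bottom-left
                quad.Positions.Add(new Point3D(x + 1, -y - 1, 0));  // bottom-right
                quad.Positions.Add(new Point3D(x + 1, -y, 0));      // top-right
                quad.Positions.Add(new Point3D(x, -y, 0));          // top-left
                quad.TriangleIndices = new Int32Collection { 0, 1, 2, 0, 2, 3 };

                group.Children.Add(new GeometryModel3D(quad, new DiffuseMaterial(new SolidColorBrush(color))));
            }
        }
        return group;
    }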
I have a few 3D rectangles on my screen that I want to pivot around the Y axis.
I want to press down with the mouse and rotate the 3D object to a maximum rotation; then, as the user moves the mouse, I want to rotate it slightly so that it looks like a see-saw (rotating within a range of -13 to 13 degrees on the Y axis).
At the moment I can do this, but my frame rate really suffers when I move the mouse quickly. So, for example, when I click the left side of the rectangle, I generate a storyboard and animation objects, then rotate the 3D object to -13 degrees. Then, when I slightly move the mouse to the right, I want to rotate it to -12.5, and so on...
Again, I can do all of this; it's just that the performance suffers greatly! It drops to 5 FPS in some cases, which is not acceptable.
My question is: am I doing this the best way? How else could you animate a rotation based on the user's mouse position on the screen?
Thanks for any help you can provide!
Mark
I assume you are doing the following:
Using a separate Model3D for the object you are rotating & including it in a Model3DGroup
Giving it a RotateTransform3D containing an AxisAngleRotation3D
Animating the AxisAngleRotation3D's Angle property in the Storyboard
If my assumptions are correct, I think we can conclude the problem is efficiency in rendering, since the CPU required to update a single Angle value, recompute the Transform, and update MILCore is negligible.
I can think of several ways that could improve the rendering performance:
If your Model3D being rotated is a GeometryModel3D backed by a MeshGeometry3D, do as much as you can to simplify the mesh and the materials used. It can also help to substitute a different mesh for closeups.
If the Model3D being rotated is a GeometryModel3D that uses VisualBrush brushes, during animation temporarily replace the VisualBrush with an ImageBrush containing a BitmapImage that is a snapshot of the Visual underlying the VisualBrush as of the instant the animation starts. When the animation ends, put back the VisualBrush. The user probably won't notice that the contents of the object freeze temporarily while it rotates. (Note that this same technique can be used if your Visual3D is a Viewport2DVisual3D.)
If the Model3D being rotated is combined into a Model3DGroup with other objects but lies entirely in front of or behind those objects, move it into its own Viewport3DVisual, appropriately layered to get the effect you want.
Simplify lighting or Material types
Monitor the actual frame rate and if it is going too low when using the storyboard, rotate the object immediately to where the mouse indicates without using an animation.
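As a minimal sketch of that last point (rectangleModel and viewport are placeholder names for your rotated model and your Viewport3D), driving the rotation directly from the mouse instead of creating a Storyboard per MouseMove:

    // using System.Windows.Input;
    // using System.Windows.Media.Media3D;
    var rotation = new AxisAngleRotation3D(new Vector3D(0, 1, 0), 0);
    rectangleModel.Transform = new RotateTransform3D(rotation);

    viewport.MouseMove += (sender, e) =>
    {
        if (e.LeftButton != MouseButtonState.Pressed)
            return;

        double x = e.GetPosition(viewport).X;
        double t = x / viewport.ActualWidth;  // 0 at the left edge, 1 at the right
        rotation.Angle = -13 + t * 26;        // map to the -13..13 degree see-saw range
    };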
MSDN presents some tips on what impacts WPF 3D performance. If you haven't stumbled upon it yet, check the items in the "Performance Impact: High" list.
Edit: In March 2009, Josh Smith published an article on CodeProject that involves rotating 3D objects. Maybe you want to check out his solution.