SceneKit: adding directional light to camera node has no effect - scenekit

Based on the advice in this answer, the goal is to add an ambient light to a scene along with a directional light to the camera.
This works in the Xcode Scene Editor, with the directional light set at 70% white and the ambient light at 50% white. The Euler angles for the directional light use default values.
Recreating this arrangement in code fails. It's as if the directional light were never added: the scene looks as though only the ambient light exists.
Different values for the x component of the directional light's Euler angles make no difference -- PI/2, PI, and other values were tried, but still no change.
Adding the directional light to the scene's root node (scene.rootNode.addChildNode(directionalLightNode)), however, does indeed add the light to the scene.
fileprivate func addLightNode() {
    // Create ambient light
    let ambientLightNode = SCNNode()
    ambientLightNode.light = SCNLight()
    ambientLightNode.light!.type = .ambient
    ambientLightNode.light!.color = UIColor(white: 0.70, alpha: 1.0)

    // Add ambient light to scene
    scene.rootNode.addChildNode(ambientLightNode)

    // Create directional light
    let directionalLight = SCNNode()
    directionalLight.light = SCNLight()
    directionalLight.light!.type = .directional
    directionalLight.light!.color = UIColor(white: 1.0, alpha: 1.0)
    directionalLight.eulerAngles = SCNVector3(x: 0, y: 0, z: 0)

    // Add directional light to camera
    let cameraNode = sceneView.pointOfView!
    cameraNode.addChildNode(directionalLight)
}

Create an SCNNode, named for example cameraNode.
Create an SCNCamera and assign it to the camera property of cameraNode.
Add cameraNode to the scene (as a child of the rootNode).
After that you can add the light node as a child node of cameraNode or, given it's the only camera, as a child of the pointOfView node (which now represents the cameraNode you created and added to the scene). The default camera and its pointOfView node are not part of the scene's object hierarchy, which is why attaching the light to the default pointOfView has no visible effect.
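A minimal sketch of those steps, assuming a standard SCNView/SCNScene setup (the names cameraNode and directionalLight are placeholders):

```swift
// Sketch only: assumes `scene` (SCNScene) and `sceneView` (SCNView) already exist.
// Create our own camera node so it lives in the scene graph.
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.position = SCNVector3(x: 0, y: 0, z: 10)
scene.rootNode.addChildNode(cameraNode)

// Attach the directional light to our camera node, not to the default pointOfView.
let directionalLight = SCNNode()
directionalLight.light = SCNLight()
directionalLight.light!.type = .directional
directionalLight.light!.color = UIColor(white: 1.0, alpha: 1.0)
cameraNode.addChildNode(directionalLight)

// Render from our camera; the light now follows it.
sceneView.pointOfView = cameraNode
```

Because the light is a child of a node that is actually in the scene graph, it moves with the camera and illuminates whatever the camera faces.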

Related

react-three-fiber scene background brighter than original image

My react-three-fiber background is supposed to be a dark starry sky. But in the scene it is very bright. Here is the original image:
This is the code for the skybox:
const { scene } = useThree();
const texture = useLoader(TextureLoader, "textures/skybox.jpg");
scene.background = texture;
This is what the scene rendered:
(Don't worry about the blurriness. It is just because I took a screen shot of such a small area of my screen.)
I am asking about the brightness. I asked in the Poimandres Discord server, and one of the admins said that it may be an encoding error and suggested asking here.
Edit:
Thanks to Marquizzo for solving the issue. I just added texture.encoding = THREE.sRGBEncoding; and the background became this:
It might be an issue with your texture encoding property. By default textures use linear, but for a straight background JPG that doesn't interact with lighting you could use sRGB.
const texture = useLoader(TextureLoader, "textures/skybox.jpg");
texture.encoding = THREE.sRGBEncoding;
scene.background = texture;

How to nest a camera in a group in react three fiber

I'm at a loss to do this in react-three-fiber. I want to create a rig for a camera (which is straightforward in three.js). I put the camera inside of a group. If I want to rotate the camera around the Y axis, i change the rotation of the parent group. If I want to rotate it around the X axis, I rotate the camera itself. This way, I don't deal with gimbal issues (like if y rotation is 180 degrees then x rotation is inverted).
in three.js i would do:
const cameraRig = new THREE.Group();
cameraRig.add(camera);
scene.add(cameraRig);
cameraRig.rotation.y = Math.PI;
camera.rotation.x = Math.PI/8;
But in react-three-fiber I don't know.
I have a camera I am using from useThree
const {camera} = useThree();
ReactDOM.render(
  <Canvas>
    <ambientLight />
    <group rotation={[0, Math.PI, 0]}>
      <camera rotation={[Math.PI / 8, 0, 0]} />
    </group>
  </Canvas>,
  document.getElementById('root')
);
react-three-fiber brings its own camera; it is a default, and you won't be able to nest children in there. You can use your own cameras, of course. The state model has a "setDefaultCamera" function for this, so that pointer events and everything else get to use your new camera. Cameras unfortunately aren't straightforward in three, so there's still some churn involved: you need to update the camera on resize, calculate its aspect ratio, and so on. All of this has been abstracted here: https://github.com/pmndrs/drei#perspectivecamera-
<PerspectiveCamera makeDefault>
<mesh />
</PerspectiveCamera>
that takes care of everything. now that camera is the system default, and it takes children.
Classically in gaming, to make cameras move like FPS cameras do and not lose "up" being up, you would put a camera inside of another 3D object and rotate around the Y axis in the exterior object and around the X axis of the camera object, to prevent gimbal weirdness. Doing this in react three fiber caused a weird bug that froze the camera. To solve this, instead of building a rig, you can change the rotation ORDER of the Euler that represents the rotation of the camera. To do so, just do this:
const { scene, gl, size, camera } = useThree();
camera.rotation.order = "YXZ";
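Putting the two answers together, here is a hedged sketch of the rig using drei's PerspectiveCamera (the outer group handles yaw, the camera itself handles pitch; the component and makeDefault prop come from drei's documented API):

```javascript
import React from "react";
import { Canvas } from "@react-three/fiber";
import { PerspectiveCamera } from "@react-three/drei";

function CameraRig() {
  return (
    // Outer group rotates around Y (yaw)...
    <group rotation={[0, Math.PI, 0]}>
      {/* ...the camera itself rotates around X (pitch). */}
      <PerspectiveCamera makeDefault rotation={[Math.PI / 8, 0, 0]} />
    </group>
  );
}

export default function App() {
  return (
    <Canvas>
      <ambientLight />
      <CameraRig />
    </Canvas>
  );
}
```

Because the camera created by PerspectiveCamera is made the system default, pointer events and rendering both use it, and it accepts children and a parent group like any other node.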

Default camera position not changing?

So I am trying to understand how allowsCameraControl really work.
I have a scene and I set the allowsCameraControl = true. When I pan around and the scene rotates, or translates (two fingers...), I don't understand what scene kit really changes for me.
I was expecting the camera node to change position, or rotation. Not the case.
I also logged the position and rotation of the rootNode of the scene...no change.
So just to be clear, in the render delegate called for every frame update, I log the position and rotation of the camera node I set for the scene, I logged the position and rotation of the root node, I also logged te position and rotation of a node I added to the scene. None of these show any change in position and or rotation.
Can anyone explain to me what scene kit changes when the scene rotates or translates using the standard camera controls?
A new camera is indeed created, leaving the original one unchanged. The scene graph is also unchanged. The new camera is not part of the scene graph and can be accessed with
scnView.pointOfView
If you show the statistics panel using the showsStatistics property, you'll notice the Point of View changes from the camera you had (even if it is "Untitled") to kSCNFreeViewCameraName.
Sadly there is not much documentation about this behavior, but you might be able to find the new camera in the node hierarchy.
Let us know what you find!
I just came across an easy way of detecting the moment when SceneKit switches to a new camera, using KVO.
Swift 5
Declare an observer property in your class:
var povObserver: NSKeyValueObservation?
Then, setup the observer:
povObserver = defaultCameraController.observe(\.pointOfView, options: [.new]) { (cameraController, change) in
    if let newPov = cameraController.pointOfView {
        // Access the new camera here
    }
}
When finished, you can set povObserver to nil.

Drawing pixels from buffer and position (glDrawPixels)

Basically I am doing some tests to simulate various windows inside a scene. Everything works fine until I try to position the window I am drawing inside the scene more precisely.
The important code is here:
// camFront = glReadPixels ...
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
//glRasterPos3f(1.0, 0.5, 0.0); // <-- commented out

// Zoom the window
glPixelZoom(0.5, 0.5);
glDrawPixels(500, 250, GL_RGB, GL_UNSIGNED_BYTE, camFront); // camFront is the buffer of the window
glutSwapBuffers();
Basically, when glRasterPos3f is commented out, I get my nice window drawn inside my scene:
Now if I try to position that window with glRasterPos3f, the window disappears completely from the scene... Any clues?
One possible cause of this problem is an invalid raster position. The raster position is set after transforming x, y, and z just like any other vertex, and this includes the clipping stage.
The easy test is to see if when a bright point (or something more visible) is drawn at your x,y and z it appears on the screen.
Where is (1.0, 0.5, 0.0) in your screen? Is it visible?
The coordinate has to be a visible point that is projected onto screen, becoming a 2d coordinate. Try putting the code before the modelview part, maybe then the coordinate will be where you expected.
Because you reset the matrix with glLoadIdentity, the point (1.0, 0.5, 0.0) will be at the right edge of screen - possibly clipped as too far right or too close to camera.
GLboolean valid;
glGetBooleanv(GL_CURRENT_RASTER_POSITION_VALID, &valid);
if (valid == GL_FALSE)
    printf("Error: raster position is invalid\n");
(This check is more reliable than drawing a test point, but it won't tell you where the pixels end up if the position is valid yet off where you expected.)
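A sketch of that validity check in context. This assumes a current OpenGL context and the questioner's camFront buffer; glWindowPos2f (core since OpenGL 1.4) is one way to bypass the transform and clipping pipeline entirely if the transformed raster position keeps getting clipped:

```c
/* Sketch only: assumes a current OpenGL context and a filled `camFront` buffer. */
glRasterPos3f(1.0f, 0.5f, 0.0f);

GLboolean valid;
glGetBooleanv(GL_CURRENT_RASTER_POSITION_VALID, &valid);
if (valid == GL_FALSE) {
    /* The transformed position was clipped. Fall back to window
       coordinates, which skip the modelview/projection transform
       and are never clipped this way. */
    glWindowPos2f(100.0f, 100.0f);
}

glPixelZoom(0.5f, 0.5f);
glDrawPixels(500, 250, GL_RGB, GL_UNSIGNED_BYTE, camFront);
```

With glWindowPos2f the raster position is given directly in window pixels, which sidesteps the "where is (1.0, 0.5, 0.0) after the modelview transform?" question entirely.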

Using GLKBaseEffect is it possible to colorize a texture

I have a sheet of black shapes surrounded by transparency. I have successfully loaded this texture with GLKit and I can draw the shapes using GLKBaseEffect into rectangles. Is there a way to change the color of the black (ie non-transparent) pixels, so I can draw yellow shapes or blue shapes etc? Or do I need a custom shader to do this?
It seems like you'll need a custom shader (which I highly recommend working with) as you need to check individual texel color values, but here are some suggestions to try first:
You can pass color as per-vertex data in a vertex attribute array pointing to GLKVertexAttribColor. This will allow you to individually set the color of each vertex (and ultimately, faces) but it will be difficult to see where they line up against your texture.
You can try enabling the following property on your effect:
effect.colorMaterialEnabled = YES;
But, for both cases, if your texels are completely black then I don't think any changes in color will show.
I think a custom shader is definitely the way to go, as you'll need to do something like this:
highp vec4 finalColor;
highp vec4 textureColor = texture2D(uTexture, vTexel);
highp vec4 surfaceColor = uColor;

// If the texel is non-transparent (check the alpha channel),
// replace it with the uniform surface color.
if (textureColor.a > 0.001) {
    finalColor = surfaceColor;
} else {
    finalColor = vec4(0.0, 0.0, 0.0, 0.0);
}

gl_FragColor = finalColor;
Where anything prefixed with u is a uniform variable passed into the shader.
To get fully colorized textures, use:
self.effect.texture2d0.envMode = GLKTextureEnvModeModulate;
Which tells OpenGL to take whatever color is in your texture and multiply it by whatever color the underlying geometry is. You can then use vertex coloring to get neat fades and whatnot.
NOTE: You'll want to change your texture from black to white (1, 1, 1, 1), so multiplication works correctly.
NOTE: Here are some other settings that you should already have in place:
self.effect.texture2d0.enabled = GL_TRUE;
self.effect.texture2d0.target = GLKTextureTarget2D;
self.effect.texture2d0.name = self.texture.name;
self.effect.colorMaterialEnabled = GL_TRUE;
NOTE: You can experiment with GLKTextureEnvModeDecal, too, which blends your texture on top of colored geometry (as when applying a decal), so the transparent parts of the texture show the geometry underneath.
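For reference, what modulate computes can be sketched as a fragment-shader equivalent: the sampled texel is multiplied component-wise by the interpolated vertex color, so a white texel (1, 1, 1, 1) takes on the geometry's color exactly (the varying names vTexCoord and vColor here are illustrative, not part of GLKit):

```glsl
// Rough equivalent of GLKTextureEnvModeModulate in a fragment shader:
// the texel is multiplied component-wise by the vertex color.
lowp vec4 texel = texture2D(uTexture, vTexCoord);
gl_FragColor = texel * vColor; // white texel leaves vColor unchanged
```

This is why the note above says to change the texture from black to white: multiplying by black (0, 0, 0) always yields black, no matter what color the geometry is.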
