I'm using @react-three/fiber and I'm implementing first-person controls (WASD & cursor keys) with the addition of OrbitControls to navigate a scene. I'm pretty much cloning PointerLockControls, and I've got it working for the WASD controls as follows (updating the target vector of the OrbitControls, passed as a ref):
const moveForward = (distance) => {
  // Column 0 of the camera matrix is its local X (right) axis;
  // crossing up with it gives the horizontal forward direction
  vec.setFromMatrixColumn(camera.matrix, 0)
  vec.crossVectors(camera.up, vec)
  camera.position.addScaledVector(vec, distance)
  orbitControls.current.target.addScaledVector(vec, distance)
}
const moveRight = (distance) => {
  // The camera's local X axis is the "right" direction
  vec.setFromMatrixColumn(camera.matrix, 0)
  camera.position.addScaledVector(vec, distance)
  orbitControls.current.target.addScaledVector(vec, distance)
}
However, I'm not quite sure how to go about updating the target when the camera is rotated. Here's how I'm rotating the camera, and it's working just fine without OrbitControls:
const euler = new THREE.Euler(0, 0, 0, 'YXZ' );
euler.setFromQuaternion(camera.quaternion)
euler.y -= 0.25 * radians;
camera.quaternion.setFromEuler(euler)
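One idea would be to rotate the target around the camera by the same yaw. A rough, unverified sketch (using the same vec/camera/orbitControls refs as above; not from the accepted approach below):
const rotateYaw = (angle) => {
  // Yaw the camera, exactly as above
  const euler = new THREE.Euler(0, 0, 0, 'YXZ')
  euler.setFromQuaternion(camera.quaternion)
  euler.y -= angle
  camera.quaternion.setFromEuler(euler)
  // Rotate the orbit target around the camera position by the same yaw,
  // so the controls keep pointing where the camera looks
  const offset = orbitControls.current.target.clone().sub(camera.position)
  offset.applyAxisAngle(new THREE.Vector3(0, 1, 0), -angle)
  orbitControls.current.target.copy(camera.position).add(offset)
}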
Preview here: https://codesandbox.io/s/wasd-with-orbit-9edup7
OK, so you can see the working version here: https://yvod70.csb.app/
The idea is quite simple: attach the camera to your player/box and then move that object instead of the camera. The camera, being a child of the player, will be translated and rotated relative to the player.
To do this the first step is to get a reference to the mesh:
const Scene = () => {
const mesh = useRef();
return (
<Canvas>
<mesh
ref={mesh}
>
<boxBufferGeometry args={[1, 1, 1]} />
<meshBasicMaterial wireframe color={"green"} />
</mesh>
<Controls mesh={mesh} />
</Canvas>
);
}
After setting this up, we just pass the mesh ref to whatever React component we want and use it as needed. In our case that means moving the box instead of the camera and attaching the camera to the box, which can be done like so in your Controls component:
const Controls = ({ orbitControls, mesh }) => {
/** @type {THREE.Mesh} */
const box = mesh.current;
const { camera } = useThree();
const code = useCodes();
// ...your movement code here, remember to replace camera with box.
// FIXME: Move the Controls component lower down the component tree so that we don't end up with these race conditions.
if (!!box) {
box.add(camera);
}
// ...animation code
}
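To make the placeholders above concrete, here is a rough sketch of what the movement loop might look like once the camera is parented to the box. Assumptions: useFrame comes from @react-three/fiber, useCodes exposes a ref holding a Set of currently pressed key codes, and the speed value is arbitrary; adapt to the actual sandbox API.
useFrame((_, delta) => {
  const box = mesh.current
  if (!box) return
  const speed = 5 * delta
  // Move the box instead of the camera; the camera follows because it is a child of the box
  if (code.current.has('KeyW')) box.translateZ(-speed)
  if (code.current.has('KeyS')) box.translateZ(speed)
  if (code.current.has('KeyA')) box.translateX(-speed)
  if (code.current.has('KeyD')) box.translateX(speed)
})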
These were the steps I took to attach the OrbitControls to the player.
I am having issues trying to make collision work properly with an imported glb file that is only used for collision.
There are two parts to this question:
I am currently getting a whole lot of faceNormal errors and vertices warnings in the console.
.faceNormals[767] = Vec3(0.999994684003945,-4.252105447140875e-10,0.003260669233505899) looks like it points into the shape? The vertices follow. Make sure they are ordered CCW around the normal, using the right hand rule.
.vertices[555] = Vec3(-18.135730743408203,9.071623802185059,-13.433568000793457)
I am not sure if this error has something to do with how I'm merging the geometry of the object or how the object is built inside blender.
When my sphere collides with the glb model, it will sometimes clip through it with enough force (holding down the W key to move the sphere forward). It clips through especially at the edges of the environment; a flat wall gives some resistance, but the sphere still clips through if a movement key is held down long enough.
Here is the code I'm using to import my glb model:
import React, { useMemo } from "react";
import { useConvexPolyhedron } from "@react-three/cannon";
import { Geometry } from "three-stdlib/deprecated/Geometry";
import { useGLTF } from "@react-three/drei";
import collision from "./Collision6.glb";
// Convert a BufferGeometry into the [vertices, faces, normals] args that useConvexPolyhedron expects
function toConvexProps(bufferGeometry) {
  const geo = new Geometry().fromBufferGeometry(bufferGeometry);
  // Merge duplicate vertices so the convex hull isn't built from degenerate faces
  geo.mergeVertices();
  return [geo.vertices.map((v) => [v.x, v.y, v.z]), geo.faces.map((f) => [f.a, f.b, f.c]), []];
}
export default function Collision(props) {
const { nodes } = useGLTF(collision);
const geo = useMemo(() => toConvexProps(nodes.Collision001.geometry), [nodes]);
  const [ref] = useConvexPolyhedron(() => ({
    type: 'Static', mass: 100, position: [0, 0, 0], args: geo,
  }));
return (
<mesh
ref={ref}
geometry={nodes.Collision001.geometry}
>
<meshStandardMaterial wireframe color="#ff0000" opacity={0} transparent />
</mesh>
);
}
I'm using OrbitControls for a 360 image, and I need the camera to stay fixed at (0, 0, 0).
If I update the camera position the controls stop working.
I've seen this post https://codeworkshop.dev/blog/2020-04-03-adding-orbit-controls-to-react-three-fiber/ which sort of has a fix but seems out of date; the extend functionality doesn't make orbitControls available.
I've tried various combinations of
const onChange = () => {
camera.position.set(0, 0, 0);
ref.current.update();
};
Or
useEffect(() => {
if (ref.current && scene.projection === SceneProjection['3603D']) {
camera.position.set(0, 0, 0);
ref.current.update();
}
}, [ref, scene]);
Is there a simple way to lock the camera position with OrbitControls and just rotate the camera?
I think you might be able to set controls.maxDistance to something really small, like 0.01. That way the camera can only move an imperceptible distance from its target, so it effectively just rotates in place.
You can read more about the .maxDistance attribute in the docs: https://threejs.org/docs/index.html?#examples/en/controls/OrbitControls.maxDistance
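If you're using @react-three/drei, a minimal sketch of what that could look like (prop values are illustrative; drei forwards these props to the underlying three.js OrbitControls):
import { OrbitControls } from '@react-three/drei'

function FixedViewControls() {
  return (
    <OrbitControls
      enablePan={false}
      enableZoom={false}
      // Clamp the dolly range so the camera stays pinned an imperceptible
      // distance from the target and can only rotate
      minDistance={0.01}
      maxDistance={0.01}
      // Keep the target a hair in front of the camera so the orbit radius is non-zero
      target={[0, 0, -0.01]}
    />
  )
}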
Hey, so I wanted to play a video on my Konva stage. I used an example from a sandbox, but I can't figure out how to add the video player controls to it. Can somebody help?
Thanks
constructor(...args) {
super(...args);
const video = document.createElement("video");
video.src = Vid;
video.type = "video/mp4";
video.controls = true; //I tried adding controls here
this.state = {
video: video,
timestamps: [],
};
video.addEventListener("canplay", () => {
video.play();
this.image.getLayer().batchDraw();
this.requestUpdate();
});
    // video.addEventListener("keydown", () => {
    //   video.pause();
    // });
  }
requestUpdate = () => {
this.image.getLayer().batchDraw();
requestAnimationFrame(this.requestUpdate);
};
render() {
return (
<Image
ref={(node) => {
this.image = node;
}}
height={window.innerHeight}
width={window.innerWidth}
image={this.state.video}
controls
/>
);
}
Such a video on the canvas doesn't have the regular controls that you are used to seeing on a video inserted into the DOM.
When you draw a video on the canvas, it works as an animated image, nothing else.
If you want to have controls you can:
Instead of drawing the video into the canvas, just insert a <video /> element somewhere in the DOM. You can place it on top of the canvas with absolute positioning (see the sketch below).
Or draw all the controls manually with the Konva/canvas API: play/pause buttons, a progress bar, etc. That may be a lot of work.
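A rough sketch of the first option, assuming react-konva and a plain DOM <video> overlay (component name, sizes, and styles are illustrative):
import React from "react";
import { Stage, Layer } from "react-konva";

const VideoOverStage = ({ src }) => (
  <div style={{ position: "relative" }}>
    <Stage width={window.innerWidth} height={window.innerHeight}>
      <Layer>{/* your Konva drawing here */}</Layer>
    </Stage>
    {/* A real DOM element, so the browser's native controls work */}
    <video
      src={src}
      controls
      style={{ position: "absolute", bottom: 16, left: 16, width: 320 }}
    />
  </div>
);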
In my app I want to add a texture to a loaded .obj model. I have the same situation with my loaded .fbx models. Below is my example code, but it only works with something like sphereGeometry, not with a loaded model.
Thanks in Advance!
import { OBJLoader } from 'three/examples/jsm/loaders/OBJLoader'
import { useTexture } from '@react-three/drei'
const OBJModel = ({ file }: { file: string }) => {
const obj = useLoader(OBJLoader, file)
const texture = useTexture(textureFile)
return (
<mesh>
<primitive object={obj} />
<meshStandardMaterial map={texture} attach="material" />
</mesh>
)
}
primitive is not meant to be nested inside a mesh; it can be a child of a group.
primitive requires both geometry and material as props, and mesh requires both geometry and material as props, so it's pretty clear that neither can be used as a child of the other.
To implement your idea, you need to use only one of mesh or primitive. I'd suggest using mesh, which has abundant documentation; primitive is not documented as well.
The OBJ acquired through useLoader may have a complex group inside it. Models usually contain larger containers such as a group or a scene. Groups and scenes can't have textures; a mesh can.
OBJ (result of useLoader) = scene => group => mesh => geometry, texture
Traversing is required to acquire the geometry from the mesh.
// I've implemented this with Typescript,
// but it is not necessary to use 'as' for type conversion.
const obj = useLoader(OBJLoader, "/rock.obj");
const texture = useTexture("/guide.png");
const geometry = useMemo(() => {
let g;
obj.traverse((c) => {
if (c.type === "Mesh") {
const _c = c as Mesh;
g = _c.geometry;
}
});
return g;
}, [obj]);
// I've used meshPhysicalMaterial because the texture needs lights to be seen properly.
return (
<mesh geometry={geometry} scale={0.04}>
<meshPhysicalMaterial map={texture} />
</mesh>
);
I've implemented it in CodeSandbox; here's the working code:
https://codesandbox.io/s/crazy-dawn-i6vzb?file=/src/components/Rock.tsx:257-550
I believe the OBJ loader returns a group or a mesh; it wouldn't make sense to put that into a top-level mesh, and giving the top level a texture won't change the loaded OBJ mesh.
There are three possible solutions:
Use obj.traverse(...) and change the model by mutation. This is what people do in vanilla three.js, and it is problematic because mutation destroys the source data, so you won't be able to re-use the model.
I would suggest converting your model to a glTF; then you can use gltfjsx https://github.com/pmndrs/gltfjsx which can lay out the full model declaratively. Click the video, this is exactly what you want.
If you must use OBJ files, you can create the declarative graph by hand; there is a hook that gives you { nodes, materials }: https://docs.pmnd.rs/react-three-fiber/API/hooks#use-graph (see the sketch below).
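A hedged sketch of the third option with useGraph (the node name "Rock" and the props are placeholders; inspect nodes to find the actual mesh name in your file):
import { useLoader, useGraph } from '@react-three/fiber'
import { OBJLoader } from 'three/examples/jsm/loaders/OBJLoader'
import { useTexture } from '@react-three/drei'

function TexturedObj({ file, textureFile }) {
  const obj = useLoader(OBJLoader, file)
  const texture = useTexture(textureFile)
  // useGraph builds a { nodes, materials } lookup from any object graph
  const { nodes } = useGraph(obj)
  // "Rock" is a placeholder: log `nodes` to find your mesh's real name
  return (
    <mesh geometry={nodes.Rock.geometry}>
      <meshStandardMaterial map={texture} />
    </mesh>
  )
}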
On Konva (React-Konva to be precise) there is a Stage with a Layer with an Image.
Clicking on that Image gives the position of the Click on that Image.
But after Dragging (either the Stage or the Image), the position is different.
In React-Konva code this looks basically like this:
<Stage width={2000} height={2000} draggable={true}>
<Layer>
<MyImage url="_some_url_pointing_to_a_png_" />
</Layer>
</Stage>
The MyImage Component:
const MyImage = (props) => {
const url = props.url;
const [image, status, width, height] = useImage(url);
return (
<Image
image={image}
width={width}
height={height}
onClick={clickHandler}
/>
);
};
(Yes, I modified the useImage React-Hook to also return width and height)
The clickHandler is quite simple:
const clickHandler = (e) => {
const stage = e.target.getStage();
const stageLocation = stage.getPointerPosition();
console.table(stageLocation);
};
The problem is that clicking on that image does give me a position, which seems to look ok.
But when I drag the image, due to draggable={true} on the Stage (so, "drag the Stage" to be precise), and then click on the same position in the Image again, I get another location.
This is probably obvious, because it gives me the position of the click on the Stage and not the position of the click on the Image.
When making the Stage not Draggable and the Image Draggable, then the same problem applies.
So, question remains:
How to get the position of the Click on the Image when it (or its Stage) is Draggable?
I think I found the solution.
The attrs of the Image contain the offset of that Image in the Stage. (or in the Layer?)
Code by which you get the position on an Image which is in a Layer which is in a Draggable Stage:
const clickHandler = (e) => {
const stage = e.target.getStage();
const pointerPosition = stage.getPointerPosition();
const offset = {x: e.target.attrs.x, y: e.target.attrs.y};
const imageClickX = pointerPosition.x - offset.x;
const imageClickY = pointerPosition.y - offset.y;
console.table({x: imageClickX, y: imageClickY});
};
However, this piece of code only works if the Image itself is also draggable; in that case you're probably dragging the Image rather than the Stage.
If you have a draggable Stage and a non-draggable-Image, then the offset is found in the Stage:
const stage = e.target.getStage();
console.table({offsetX: stage.attrs.x, offsetY: stage.attrs.y});
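An alternative that avoids tracking offsets by hand, assuming your Konva version ships getRelativePointerPosition (it converts the pointer into the node's own coordinate space, taking Stage/Layer dragging and scaling into account):
const clickHandler = (e) => {
  // Pointer position in the Image's local coordinates, regardless of what was dragged
  const imagePos = e.target.getRelativePointerPosition();
  console.table(imagePos);
};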
(And I'll update this answer again if I encounter new clues on the final solution :-), this is getting quite a saga...)
I'm more than interested if anybody sees any pitfalls with this solution.
Thanks,
Bert