React state update during control change breaks OrbitControls from three - reactjs

I am using react-three-fiber with drei's OrbitControls and try to detect whether the camera is above or below zero while moving it around. Currently I'm using the onEnd listener to trigger my code in my parent component. If I change a state of my parent component like this, everything works as expected. Unfortunately, if I start another OrbitControls interaction before the previous one has ended and my parent's state changes, OrbitControls breaks. E.g. I rotate the camera while holding down the left mouse button, then get below zero with the camera and scroll with the mouse wheel at the same time; OrbitControls no longer works. This is especially painful when using touch: as soon as a second finger touches the screen while I cross the ground plane, OrbitControls breaks.
This is my Controls Component:
const Controls = forwardRef(({ handleCamera }, ref) => {
  const { gl, camera } = useThree();
  const controls = useRef(null);

  useImperativeHandle(ref, () => ({
    getCamera() {
      return camera;
    },
  }));

  const handleChange = (e) => {
    handleCamera(camera);
  };

  return (
    <OrbitControls
      ref={controls}
      onEnd={(e) => handleChange(e)}
    />
  );
});

export default Controls;
And this is my parent:
function App() {
  const controlsRef = useRef(null);
  const [cameraBelowSurface, setCameraBelowSurface] = useState(false);

  const handleCamera = (camera) => {
    setCameraBelowSurface(camera.position.y <= 0 ? true : false);
  };

  return (
    <Canvas>
      <Suspense>
        <Controls
          ref={controlsRef}
          handleCamera={handleCamera}
        />
      </Suspense>
    </Canvas>
  );
}
export default App;
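Not a confirmed fix, but a minimal sketch of one mitigation worth trying, assuming the breakage is tied to the parent re-rendering mid-gesture: keep the handler identity stable with useCallback and only update state when the below-surface flag actually flips, so App does not re-render while the camera stays on the same side of the ground. The import path for Controls is hypothetical.

import { Suspense, useCallback, useRef, useState } from "react";
import { Canvas } from "@react-three/fiber";
import Controls from "./Controls"; // assumed path to the Controls component above

function App() {
  const controlsRef = useRef(null);
  const belowRef = useRef(false);
  const [cameraBelowSurface, setCameraBelowSurface] = useState(false);

  // Stable identity: the same onEnd handler is passed across renders.
  const handleCamera = useCallback((camera) => {
    const below = camera.position.y <= 0;
    if (below !== belowRef.current) {
      // Only update state (and re-render) when the camera actually crosses the surface.
      belowRef.current = below;
      setCameraBelowSurface(below);
    }
  }, []);

  return (
    <Canvas>
      <Suspense>
        <Controls ref={controlsRef} handleCamera={handleCamera} />
      </Suspense>
    </Canvas>
  );
}

export default App;

If the controls still break exactly at the crossing, this at least narrows the problem down to that single state update rather than every gesture end.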

Related

Performant way of getting mouse position every frame for canvas animation in React?

I have an animated background using canvas and requestAnimationFrame in my React app and I am trying to have its moving particles interact with the mouse pointer, but every solution I try ranges from significantly slowing down the animation the moment I start moving the mouse to pretty much crashing the browser.
The structure of the animated background component goes something like this:
<BackgroundParentComponent>   // Gets mounted only once
  <Canvas>                    // Reusable canvas, updates every frame
    // v - dozens of moving particles (just canvas drawing logic, no JSX return).
    // Each particle recalculates its state every frame.
    {particlesArray.map(particle => <Particle />)}
  </Canvas>
</BackgroundParentComponent>
I have tried moving the event listeners to every level of the component structure, calling them from a custom hook with a useRef to hold the value without re-rendering, and throttling the mouse event listener so that it does not fire as often... nothing seems to help. This is my custom hook right now:
const useMousePosition = () => {
  const mousePosition = useRef({ x: null, y: null });

  useEffect(() => {
    window.addEventListener('mousemove', throttle(200, (event) => {
      mousePosition.current = { x: event.x, y: event.y };
    }));
  });

  useEffect(() => {
    window.addEventListener('mouseout', throttle(500, () => {
      mousePosition.current = { x: null, y: null };
    }));
  });

  return mousePosition.current;
}
const throttle = (delay: number, fn: (...args: any[]) => void) => {
  let shouldWait = false;
  return (...args: any[]) => {
    if (shouldWait) return;
    fn(...args);
    shouldWait = true;
    setTimeout(() => shouldWait = false, delay);
    return;
    // return fn(...args);
  }
}
For reference, my canvas component responsible for the animation looks roughly like this:
const AnimatedCanvas = ({ children, dimensions }) => {
  const canvasRef = useRef(null);
  const [renderingContext, setRenderingContext] = useState(null);
  const [frameCount, setFrameCount] = useState(0);

  // Initialize canvas
  useEffect(() => {
    if (!canvasRef.current) return;
    const canvas = canvasRef.current;
    canvas.width = dimensions.width;
    canvas.height = dimensions.height;
    const canvas2DContext = canvas.getContext('2d');
    setRenderingContext(canvas2DContext);
  }, [dimensions]);

  // Make the component re-render every frame
  useEffect(() => {
    const frameId = requestAnimationFrame(() => {
      setFrameCount(frameCount + 1);
    });
    return () => { cancelAnimationFrame(frameId) };
  }, [frameCount, setFrameCount]);

  // Clear the canvas on each render to erase the previous frame
  if (renderingContext !== null) {
    renderingContext.clearRect(0, 0, dimensions.width, dimensions.height);
  }

  return (
    <Canvas2dContext.Provider value={renderingContext}>
      <FrameContext.Provider value={frameCount}>
        <canvas ref={canvasRef}>
          {children}
        </canvas>
      </FrameContext.Provider>
    </Canvas2dContext.Provider>
  );
};
The mapped <Particle/> components are fed to the above canvas component as children:
const Particle = (props) => {
  const canvas = useContext(Canvas2dContext);
  useContext(FrameContext); // only here to force the particle to re-render each frame after the canvas is cleared

  // lots of state-calculating logic here
  // This is where I need to know the mouse position every (or every few) frames
  // in order to modify each particle's behaviour when near the pointer.

  canvas.beginPath();
  // canvas drawing logic

  return null;
}
Just to clarify: the animation is always moving regardless of the mouse being idle. I've seen other solutions that only work for animations triggered exclusively by mouse movement.
Is there any performant way of accessing the mouse position each frame in the Particle mapped components without choking the browser? Is there a better way of handling this type of interactive animation with React?
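Not claiming this is the fix, but a minimal sketch of a common pattern, assuming the particles can read a mutable ref each frame instead of receiving a fresh value through state: register the listeners once, write into a ref, and return the ref object itself (not ref.current) so every read sees the latest coordinates without triggering a re-render. The hook name is illustrative.

import { useEffect, useRef } from 'react';

const useMousePositionRef = () => {
  const mousePosition = useRef({ x: null, y: null });

  useEffect(() => {
    const handleMove = (event) => {
      mousePosition.current = { x: event.clientX, y: event.clientY };
    };
    const handleOut = () => {
      mousePosition.current = { x: null, y: null };
    };
    window.addEventListener('mousemove', handleMove);
    window.addEventListener('mouseout', handleOut);
    // Registered once; cleaned up on unmount instead of piling up on every render.
    return () => {
      window.removeEventListener('mousemove', handleMove);
      window.removeEventListener('mouseout', handleOut);
    };
  }, []);

  return mousePosition; // consumers read .current inside their per-frame logic
};

export default useMousePositionRef;

Calling the hook once in the background parent and passing the ref down (via props or another context) keeps it to a single listener; each Particle then reads mouseRef.current during its frame update without causing any extra renders.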

React, insert/append/render existing <HTMLCanvasElement>

In my <App> Context, I have a canvas element (#offScreen) that is already hooked in the requestAnimationFrame loop and appropriately drawing to that canvas, verified by .captureStream to a <video> element.
In my <Canvas> react component, I have the following code (which works, but seems clunky/not the best way to copy an offscreen canvas to the DOM):
NOTE: master is the data object for the <App> Context.
function Canvas({ master, ...rest } = {}) {
  const canvasRef = useRef(master.canvas);

  const draw = ctx => {
    ctx.drawImage(master.canvas, 0, 0);
  };

  useEffect(() => {
    const canvas = canvasRef.current;
    const ctx = canvas.getContext("2d");
    let animationFrameId;

    const render = () => {
      draw(ctx);
      animationFrameId = window.requestAnimationFrame(render);
    };
    render();

    return () => {
      window.cancelAnimationFrame(animationFrameId);
    };
  }, [ draw ]);

  return (
    <canvas
      ref={ canvasRef }
      onMouseDown={ e => console.log(master, e) }
    />
  );
}
Edited for clarity based on comments
In my attempts to render the master.canvas directly (e.g. return master.canvas; in <Canvas>), I get some variation of the error "Objects cannot be React children" or I get [object HTMLCanvasElement] verbatim on the screen.
It feels redundant to take the #offScreen canvas and repaint it each frame. Is there, instead, a way to insert or append #offScreen into <Canvas>, so that react is just directly utilizing #offScreen without having to repaint it into the react component canvas via the ref?
Specific Issue: Functionally, I'm rendering a canvas twice--once off screen and once in the react component. How do I (replace/append?) the component's <canvas> element with the offscreen canvas (#offScreen), instead of repainting it like I'm doing now?
For anyone interested, this was actually fairly straightforward, as I overcomplicated it substantially.
export function Canvas({ canvas, ...rest }) {
  const container = useRef(null);

  useEffect(() => {
    container.current.innerHTML = "";
    container.current.append(canvas);
  }, [ container, canvas ]);

  return (
    <div ref={ container } />
  );
}
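For completeness, a hypothetical usage sketch based on the naming in the question, where master.canvas is the #offScreen canvas already driven by the requestAnimationFrame loop; the component simply adopts the element into the DOM instead of repainting it:

// inside the <App> context's render
<Canvas canvas={master.canvas} />

Because the same element is appended rather than copied, whatever the off-screen loop draws shows up directly, and no per-frame drawImage call is needed in the React component.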

Tooltip delay on hover with RXJS

I'm trying to add a tooltip delay (300 ms) using RxJS (without setTimeout()). My goal is to have this logic inside the TooltipPopover component, which will later be reused, with the delay passed (if needed) as a prop.
I'm not sure how I can add the "delay" logic inside the TooltipPopover component using RxJS.
Portal.js
const Portal = ({ children }) => {
  const mount = document.getElementById("portal-root");
  const el = document.createElement("div");

  useEffect(() => {
    mount.appendChild(el);
    return () => mount.removeChild(el);
  }, [el, mount]);

  return createPortal(children, el);
};
export default Portal;
TooltipPopover.js
import React from "react";

const TooltipPopover = ({ delay??? }) => {
  return (
    <div className="ant-popover-title">Title</div>
    <div className="ant-popover-inner-content">{children}</div>
  );
};
App.js
const App = () => {
  return (
    <Portal>
      <TooltipPopover>
        <div>
          Content...
        </div>
      </TooltipPopover>
    </Portal>
  );
};
Then, I'm rendering TooltipPopover in different places:
ReactDOM.render(
  <TooltipPopover delay={1000}>
    <SomeChildComponent />
  </TooltipPopover>,
  rootEl
);
Here would be my approach:
mouseenter$.pipe(
  // by default, the tooltip is not shown
  startWith(CLOSE_TOOLTIP),
  switchMap(
    () => concat(timer(300), NEVER).pipe(
      mapTo(SHOW_TOOLTIP),
      takeUntil(mouseleave$),
      endWith(CLOSE_TOOLTIP),
    ),
  ),
  distinctUntilChanged(),
)
I'm not very familiar with best practices in React with RxJS, but this would be my reasoning. So, the flow would be this:
on mouseenter$, start the timer. concat(timer(300), NEVER) is used because, although the tooltip should be shown after 300 ms, we only want to hide it when mouseleave$ emits.
after 300 ms, the tooltip is shown and will be closed when mouseleave$ emits.
if mouseleave$ emits before 300 ms pass, CLOSE_TOOLTIP will emit, but you could (I think) avoid unnecessary re-renders with the help of distinctUntilChanged.
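Not part of the original answer, but a hedged sketch of how that pipeline might be wired into TooltipPopover with fromEvent and a visible flag. The wrapper div doubling as hover target, the class names, and the prop handling are assumptions taken from the snippets above.

import React, { useEffect, useRef, useState } from "react";
import { concat, fromEvent, NEVER, timer } from "rxjs";
import {
  distinctUntilChanged,
  endWith,
  mapTo,
  startWith,
  switchMap,
  takeUntil,
} from "rxjs/operators";

const TooltipPopover = ({ delay = 300, children }) => {
  const targetRef = useRef(null);
  const [visible, setVisible] = useState(false);

  useEffect(() => {
    const el = targetRef.current;
    const mouseenter$ = fromEvent(el, "mouseenter");
    const mouseleave$ = fromEvent(el, "mouseleave");

    const subscription = mouseenter$
      .pipe(
        switchMap(() =>
          concat(timer(delay), NEVER).pipe(
            mapTo(true),            // show after `delay` ms
            takeUntil(mouseleave$), // cancel the pending show on leave
            endWith(false)          // hide when the pointer leaves
          )
        ),
        startWith(false),           // hidden by default
        distinctUntilChanged()
      )
      .subscribe(setVisible);

    return () => subscription.unsubscribe();
  }, [delay]);

  return (
    <div ref={targetRef}>
      {children /* hovered content */}
      {visible && (
        <>
          <div className="ant-popover-title">Title</div>
          <div className="ant-popover-inner-content">Tooltip content</div>
        </>
      )}
    </div>
  );
};

export default TooltipPopover;

Because the delay lives in the observable pipeline, the component itself never calls setTimeout, and the prop defaults to 300 ms while remaining overridable per usage, e.g. <TooltipPopover delay={1000}>…</TooltipPopover>.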

Live stream HTML5 video draw-to-canvas not working

I use ReactJS to display a live stream (from my webcam) using the HTML5 video element. The OpenVidu media server handles the backend.
I would like to use the canvas element to draw the video live stream onto the canvas using the drawImage() method.
I have seen other examples, but in them the video element always has a source. Mine does not have a source - when I inspect the video element all I see is:
<video autoplay id="remote-video-zfrstyztfhbojsoc_CAMERA_ZCBRG"/>
This is what I have tried; however, the canvas does not work.
export default function Video({ streamManager }) {
  const videoRef = createRef()
  const canvasRef = createRef()

  useEffect(() => {
    if (streamManager && !!videoRef) {
      // OpenVidu media server attaches the live stream to the video element
      streamManager.addVideoElement(videoRef.current)
      if (canvasRef.current) {
        let ctx = canvasRef.current.getContext('2d')
        ctx.drawImage(videoRef.current, 0, 0)
      }
    }
  })

  return (
    <>
      {/* video element displays the live stream */}
      <video autoPlay={true} ref={videoRef} />
      {/* canvas element NOT working, nothing can be seen on screen */}
      <canvas ref={canvasRef} width="250" height="250" />
    </>
  )
}
UPDATE: after further investigation I realised I needed to use the setInterval() function, and have therefore provided the solution below.
The solution is to extract the canvas logic into a self-contained component and use setInterval() so that it draws the video element onto the canvas every 100 ms (or as needed).
Video Component
import React, { useEffect, createRef } from 'react'
import Canvas from './Canvas'

export default function Video({ streamManager }) {
  const videoRef = createRef()

  useEffect(() => {
    if (streamManager && !!videoRef) {
      // OpenVidu media server attaches the live stream to the video element
      streamManager.addVideoElement(videoRef.current)
    }
  })

  return (
    <>
      {/* video element displays the live stream */}
      <video autoPlay={true} ref={videoRef} />
      {/* canvas logic extracted into a new component */}
      <Canvas videoRef={videoRef} />
    </>
  )
}
Canvas Component
import React, { createRef, useEffect } from 'react'

export default function Canvas({ videoRef }) {
  const canvasRef = createRef(null)

  useEffect(() => {
    if (canvasRef.current && videoRef.current) {
      const interval = setInterval(() => {
        const ctx = canvasRef.current.getContext('2d')
        ctx.drawImage(videoRef.current, 0, 0, 250, 188)
      }, 100)
      return () => clearInterval(interval)
    }
  })

  return (
    <canvas ref={canvasRef} width="250" height="188" />
  )
}
You are using useEffect without a dependency array, which means your effect will run on every render:
useEffect(() => {
  // every render
})
If you want to run your drawImage only at mount time, use useEffect with an empty dependency array:
useEffect(() => {
  // run at component mount
}, [])
Or, if you want to run drawImage when a particular value changes, pass that value as a dependency:
useEffect(() => {
  // run when streamManager changes
}, [streamManager])
Use whichever fits your requirement.
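As an illustration of that advice (an assumption, not part of the accepted solution), the Canvas component above could give its effect a dependency array so the interval is created once per videoRef instead of on every render; this also assumes the parent creates videoRef with useRef so the ref object keeps a stable identity:

import React, { useEffect, useRef } from 'react'

export default function Canvas({ videoRef }) {
  const canvasRef = useRef(null)

  useEffect(() => {
    if (!canvasRef.current || !videoRef.current) return
    const ctx = canvasRef.current.getContext('2d')
    const interval = setInterval(() => {
      // redraw the current video frame every 100 ms
      ctx.drawImage(videoRef.current, 0, 0, 250, 188)
    }, 100)
    return () => clearInterval(interval)
  }, [videoRef])

  return <canvas ref={canvasRef} width="250" height="188" />
}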

React.useEffect stack execution prevents parent from setting defaults

I have attached a simplified example that demonstrates my issue:
https://codesandbox.io/s/reactusehook-stack-issue-piq15
I have a parent component that receives a configuration determining which screen should be rendered. The rendered screen should have control over the parent's appearance. In the example above I demonstrated it with colors, but the actual use case is a flow screen with next and back buttons that can be controlled by the child.
In the example I define common props for the screens:
type GenericScreenProps = {
  setColor: (color: string) => void;
};
I create the first screen, which does not care about the color (the parent should provide the default):
const ScreenA = (props: GenericScreenProps) => {
  return <div>screen A</div>;
};
I create a second screen that explicitly sets a color when mounted:
const ScreenB = ({ setColor }: GenericScreenProps) => {
  React.useEffect(() => {
    console.log("child");
    setColor("green");
  }, [setColor]);

  return <div>screen B</div>;
};
I create a map so I can reference the components by an index:
const map: Record<string, React.JSXElementConstructor<GenericScreenProps>> = {
  0: ScreenA,
  1: ScreenB
};
And finally I create the parent, which has a button that swaps the component and sets the default color whenever the component changes:
const App = () => {
  const [screenId, setScreenId] = useState(0);
  const ComponentToRender = map[screenId];
  const [color, setColor] = useState("red");

  React.useEffect(() => {
    console.log("parent");
    setColor("red"); // default when not set should be red
  }, [screenId]);

  const onButtonClick = () =>
    setScreenId((screenId + 1) % Object.keys(map).length);

  return (
    <div>
      <button style={{ color }} onClick={onButtonClick}>
        Button
      </button>
      <ComponentToRender setColor={setColor} />
    </div>
  );
};
In this example, the default color should be red for screen A and green for the second screen. However, the color stays red because child effects run before parent effects: if you run the code you will see that, once the button is clicked, "child" is logged followed by "parent", so the parent's default overwrites the child's color.
I have considered the following solutions, but they are not ideal:
each child has to explicitly define the color; there is no way to enforce that without custom lint rules
convert the parent into a React class component; there has to be a hooks solution
This might be an anti-pattern where a child component controls how its parent behaves, but I could not identify a way of doing this without replicating the parent container for each screen. The reason I want to keep a single parent is to enable transitions between the screens.
If I understood the problem correctly, there is no need to pass setColor down to the children. Making expressions more explicit might make the code a bit longer, but I think it helps readability. As what you shared is a simplified example, please let me know if it fits your real case:
const ScreenA = () => {
  return <div>screen A</div>;
};

const ScreenB = () => {
  return <div>screen B</div>;
};

const App = () => {
  const [screen, setScreen] = useState<"a" | "b">("a");
  const [color, setColor] = useState<"red" | "green">("red");

  const onButtonClick = () => {
    if (screen === "a") {
      setScreen("b");
      setColor("green");
    } else {
      setScreen("a");
      setColor("red");
    }
  };

  return (
    <div>
      <button style={{ color }} onClick={onButtonClick}>
        Button
      </button>
      {screen === "a" ? <ScreenA /> : <ScreenB />}
    </div>
  );
};

render(<App />, document.getElementById("root"));
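A variation on the same idea, sketched here as an assumption rather than part of the answer: if the color can be derived from the screen id, the color state disappears entirely and the question's map drives both the screen and the button color during render, so effect ordering never comes into play.

// colorByScreen is an assumed lookup mirroring the question's `map`
const colorByScreen = { 0: "red", 1: "green" };

const App = () => {
  const [screenId, setScreenId] = useState(0);
  const ComponentToRender = map[screenId];
  // Derived during render: no useEffect involved, so child/parent effect order is irrelevant.
  const color = colorByScreen[screenId] ?? "red";

  const onButtonClick = () =>
    setScreenId((screenId + 1) % Object.keys(map).length);

  return (
    <div>
      <button style={{ color }} onClick={onButtonClick}>
        Button
      </button>
      <ComponentToRender />
    </div>
  );
};

The screens then no longer receive setColor at all (assuming it is also dropped from GenericScreenProps), which removes the need to enforce that each child sets a color.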
