gl-react-expo shader with external image texture input

I'm trying to use gl-react-expo shaders to apply effects to images in an Expo project. The images come from an external URL (like http://i.imgur.com/rkiglmm.jpg).
I can make simple shaders that don't use a texture input, and they work perfectly. But I can't find the correct way to pass the image to the shader. I've tried to implement the DiamondCrop example from this site (http://greweb.me/2016/06/glreactconf/) and all the other simple examples I found that pass an image to the shader, but none of them work.
This is my shader definition:
import React from "react";
import { Shaders, Node, GLSL } from "gl-react";

const frags = {
  diamond: GLSL`
precision highp float;
varying vec2 uv;
uniform sampler2D t;
void main () {
  gl_FragColor = mix(
    texture2D(t, uv),
    vec4(0.0),
    step(0.5, abs(uv.x - 0.5) + abs(uv.y - 0.5))
  );
}`
};

const shaders = Shaders.create({
  DiamondCrop: {
    frag: frags.diamond
  }
});

const DiamondCrop = ({ children: t }) => (
  <Node
    shader={shaders.DiamondCrop}
    // uniformsOptions={{
    //   t: { interpolation: "nearest" },
    // }}
    uniforms={{ t }}
  />
);

export { DiamondCrop };
I tried passing the image in the following ways:
// 1
<Surface style={{ width: 200, height: 200 }}>
  <DiamondCrop>
    {{ uri: 'http://i.imgur.com/rkiglmm.jpg' }}
  </DiamondCrop>
</Surface>

// 2
<Surface style={{ width: 200, height: 200 }}>
  <DiamondCrop>
    {{ image: { uri: 'http://i.imgur.com/rkiglmm.jpg' } }}
  </DiamondCrop>
</Surface>

// 3
<Surface style={{ width: 200, height: 200 }}>
  <DiamondCrop>
    http://i.imgur.com/rkiglmm.jpg
  </DiamondCrop>
</Surface>

// 4
<Surface style={{ width: 200, height: 200 }}>
  <DiamondCrop>
    {'http://i.imgur.com/rkiglmm.jpg'}
  </DiamondCrop>
</Surface>
And the errors I get are the following:
// 1 (on 'expo red screen of death')
undefined is not an object (evaluating '_expo2.default.FileSystem')
// 2 (Expo warning; nothing appears on the Surface region)
Node#1(DiamondCrop#2), uniform t: no loader found for value, Object {
"image": Object {
"uri": "http://i.imgur.com/rkiglmm.jpg",
},
}, Object {
"image": Object {
"uri": "http://i.imgur.com/rkiglmm.jpg",
},
}
// 3 (Expo warning; nothing appears on the Surface region)
Node#1(DiamondCrop#2), uniform t: no loader found for value, http://i.imgur.com/rkiglmm.jpg, http://i.imgur.com/rkiglmm.jpg
// 4 (Expo warning; nothing appears on the Surface region)
Node#1(DiamondCrop#2), uniform t: no loader found for value, http://i.imgur.com/rkiglmm.jpg, http://i.imgur.com/rkiglmm.jpg
Could anyone point me in the right direction to accomplish this?

This question is quite old, but I wanted to write an answer for those who may be working with GL-related stuff in React Native and Expo.
TL;DR: This is happening because the gl-react-expo library is outdated: it does import Expo from "Expo", which is deprecated (and actually breaks the whole thing) as of v33 of the Expo SDK.
I made a GitHub repo that hosts the corrected libraries, which you may want to use; it is called gl-expo-libraries. Alternatively, if you want to stick with the original library, go to the node_modules folder, open the gl-react-expo folder, look for the _expo2 variable in the files, and change _expo2.default to _expo. That will do the trick.
Cheers :)
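With the patched library in place, the first pattern from the question (passing the { uri } object as the child, which DiamondCrop forwards as the t uniform) is the approach that should load the external texture. A minimal, untested sketch, assuming the fixed gl-react-expo texture loader:

import React from "react";
import { Surface } from "gl-react-expo"; // or the patched fork mentioned above
import { DiamondCrop } from "./DiamondCrop";

// Sketch only: the { uri } source is resolved by gl-react-expo's texture loader
// and ends up bound to the sampler2D uniform t inside the fragment shader.
const Demo = () => (
  <Surface style={{ width: 200, height: 200 }}>
    <DiamondCrop>
      {{ uri: "http://i.imgur.com/rkiglmm.jpg" }}
    </DiamondCrop>
  </Surface>
);

export default Demo;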

Related

Framer Motion - Making a div randomly move across the page

I am new to using Framer Motion but I do love the library a lot. Fantastic so far. However, I am stuck right now when trying to move a background div across the whole page.
This is my codesandbox: https://codesandbox.io/s/svg-background-60ht8?file=/src/App.js
I tried everything and read all the docs, and I did get one tip to use MotionValue and calculate the boundaries so the ball doesn't go off-screen, but I can't figure it out yet...
Does anyone know how to make the SVG/ball/div infinitely animate across the whole page? Thanks in advance!
EDIT:
It is moving randomly now, but on my local machine the animation breaks and repeats when it goes off-screen.
I'm not entirely sure whether this is the kind of effect you were looking for, but I tried to add the random movement plus screen constraints in an example more akin to the bouncing DVD logo. However, MotionValue should probably be used throughout, since using state incurs rendering costs as well as a loss of precision.
I was not able to do the same demo using motion values, so that might be something worth exploring further.
I did something similar using Math.random(), with two state variables representing the x and y positions on the screen, along with a setInterval() so the position changes every couple of seconds. Something like:
import React, { useEffect, useState } from "react";

const Test = () => {
  // getDimensions() is assumed to return the viewport { height, width }
  const { height, width } = getDimensions();
  const [position, setPosition] = useState({ x: 150, y: 150 });

  useEffect(() => {
    const timer = setInterval(() => {
      setPosition({
        ...position,
        x: width * (1 - Math.random()) - 250,
        y: height * (1 - Math.random()) - 250,
      });
    }, 1500);
    return () => {
      clearInterval(timer);
    };
  }, [position]);

  return (
    <div className="screen">
      <div
        className="Animate"
        style={{
          top: position.y,
          left: position.x,
          borderRadius: 100,
          height: 100,
          width: 100,
          ...
        }}
      />
    </div>
  );
};

React-Viro AR ImageTracking sub elements are positioned inside camera when target is seen

I am currently using Viro-React (React-Viro) for an AR project in which, if a certain picture is seen by the camera, a video is played in front of it. I had it working perfectly, but a few days later, without changing the code, the video and everything else inside the ViroARImageMarker started being positioned at the camera whenever the target is seen.
This behavior only seems to happen in my own projects and not in the samples provided by Viro Media.
I have tried:
Reinstalling the node modules
Comparing the package.json files and reinstalling
Changing the position of the elements inside the ViroARImageMarker
Reorganising the elements
But nothing seems to work.
As I said, the code itself shows and hides the video, but it does not position the video (and everything else inside the ViroARImageMarker) correctly; instead it places them at the camera's position when the target is seen and then keeps them there.
Here is the code. (Snippet at the end)
I pass this function to the ViroARSceneNavigator in another script.
There are a few animations, which just scale up/down the video depending if the target is in view or not.
(I removed the whitespace to fit more onto one screen.)
Main code, Viro animations, and material:
"use strict";
import React, { useState } from "react";
import { ViroARScene, ViroNode, ViroARImageMarker, ViroVideo, ViroARTrackingTargets, ViroAnimations, ViroMaterials } from "react-viro";
const MainScene = (props) => {
const videoPath = require("./res/Test_Video.mp4");
const [playVideoAnimation, setPlayVideoAnimation] = useState(false);
const [videoAnimationName, setVideoAnimationString] = useState("showVideo");
const [shouldPlayVideo, setShouldPlayVideo] = useState(false);
function onAnchorFound() {
setPlayVideoAnimation(true);
setVideoAnimationString("showVideo");
setShouldPlayVideo(true);
}
function onAnchorRemoved() {
setShouldPlayVideo(false);
setVideoAnimationString("closeVideo");
setPlayVideoAnimation(true);
}
function onVideoAnimationFinish() {
setPlayVideoAnimation(false);
}
function onVideoFinish() {
setShouldPlayVideo(false);
setVideoAnimationString("closeVideo");
setPlayVideoAnimation(true);
}
return (
<ViroARScene>
<ViroARImageMarker target={"targetOne"} onAnchorFound={onAnchorFound} onAnchorRemoved={onAnchorRemoved}>
<ViroNode rotation={[-90, 0, 0]}>
<ViroVideo
position={[0, 0, 0]}
scale={[0, 0, 0]}
dragType="FixedToWorld"
animation={{ name: videoAnimationName, run: playVideoAnimation, onFinish: onVideoAnimationFinish }}
source={videoPath}
materials={["chromaKeyFilteredVideo"]}
height={0.2 * (9 / 16)}
width={0.2}
paused={!shouldPlayVideo}
onFinish={onVideoFinish}
/>
</ViroNode>
</ViroARImageMarker>
</ViroARScene>
);
};
ViroAnimations.registerAnimations({
showVideo: {
properties: { scaleX: 0.9, scaleY: 0.9, scaleZ: 0.9 },
duration: 1,
easing: "bounce",
},
closeVideo: {
properties: { scaleX: 0, scaleY: 0, scaleZ: 0 },
duration: 1,
},
});
ViroMaterials.createMaterials({
chromaKeyFilteredVideo: {
chromaKeyFilteringColor: "#00FF00",
},
});
ViroARTrackingTargets.createTargets({
targetOne: {
source: require("./res/Test_Bild.png"),
orientation: "Up",
physicalWidth: 0.01, // real world width in meters
},
});
export default MainScene;
I was able to resolve the issue by copying (downgrading) the dependencies in my package.json from the React-Viro code samples and by decreasing the width/height (inside the element) and the scale (in the animation) of the video.
Note that if the sub-element of the ViroARImageMarker is too big (in scale and size), the issue comes back.
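For illustration, this is the kind of reduction I mean; the numbers below are placeholders rather than my exact values. On the ViroVideo, height would go from 0.2 * (9 / 16) down to something like 0.1 * (9 / 16) and width from 0.2 to 0.1, and the show animation's end scale is reduced in the same spirit:

import { ViroAnimations } from "react-viro";

// Sketch only: the "show" animation ends at a much smaller scale than the original 0.9.
ViroAnimations.registerAnimations({
  showVideo: {
    properties: { scaleX: 0.1, scaleY: 0.1, scaleZ: 0.1 }, // was 0.9
    duration: 1,
    easing: "bounce",
  },
  closeVideo: {
    properties: { scaleX: 0, scaleY: 0, scaleZ: 0 },
    duration: 1,
  },
});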

Use images instead of svg circle (react-d3-tree)

I recently updated to version 2.0.1 and I am struggling to set images instead of SVG circles for individual nodes. In the older versions I used the nodeSvgShape property:
nodeSvgShape: {
  shape: 'image',
  shapeProps: {
    href: AppState.config.api.img + mainTile.image,
    width: 100,
    height: 100,
    x: -50,
    y: -17,
  },
},
However, in the current version this does nothing. Is there any way to achieve this in the current version?
Thank you in advance
If you are still having this issue after a year, here is how I did this.
I was able to do this with other SVG images by using the renderCustomNodeElement property of the Tree component. By passing a render function that you create, you can apply it to each node (see the docs here: https://www.npmjs.com/package/react-d3-tree#styling-nodes).
Below is an example of how to implement it, say using an object that maps the name of the node to an SVG string and then passing that to the Tree component:
import SVG from 'react-inlinesvg';
import Tree from 'react-d3-tree';

// svgMapping is the object mapping each node name to an SVG string
const renderMol = ({ nodeDatum, toggleNode }) => (
  <g>
    <SVG src={svgMapping[nodeDatum.name]} />
  </g>
);

return (
  <Tree data={mydata} renderCustomNodeElement={renderMol} />
);
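If the goal is a bitmap image rather than an inline SVG, as in the original nodeSvgShape config, the same renderCustomNodeElement hook can return an SVG image element instead. A minimal sketch that reuses the shapeProps values from the question (renderImageNode is just an illustrative name):

import React from 'react';
import Tree from 'react-d3-tree';

// Sketch: render every node as an image, mirroring the old nodeSvgShape config;
// href, width, height, x and y are taken straight from the question's shapeProps.
const renderImageNode = ({ nodeDatum, toggleNode }) => (
  <g onClick={toggleNode}>
    <image
      href={AppState.config.api.img + mainTile.image}
      width={100}
      height={100}
      x={-50}
      y={-17}
    />
  </g>
);

<Tree data={mydata} renderCustomNodeElement={renderImageNode} />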

React particles not interacting on hover after adding an image in background?

I have the normal react-particles code, which works fine, but after adding an image to the background the particles no longer interact on hover. The image is in a y.js file, which is loaded in the x.js file where the react-particles code lives.
const ParticleOptions = {
  particles: {
    number: {
      value: 170,
      density: {
        enable: true,
        value_area: 850
      }
    }
  },
  interactivity: {
    detect_on: "canvas",
    events: {
      onhover: {
        enable: true,
        mode: "repulse"
      }
    },
    modes: {
      repulse: {
        distance: 70,
        duration: 0.4
      }
    }
  }
}
My separate image in a different JS file:
return (
  <div className='abx'>
    <img src='https://samples.clarifai.com/face-det.jpg' alt='' />
  </div>
);
I expected the particles to keep interacting with mouse movement after adding a small image to the center, but they do not.
Just change detect_on: "canvas" to detect_on: "window".
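In other words, a minimal sketch of the changed config (only detect_on changes; the likely reason hover stopped working is that the image element sits on top of the canvas and swallows the mouse events, whereas window-level detection still receives them):

const ParticleOptions = {
  // ...particles block unchanged from the question
  interactivity: {
    detect_on: "window", // was "canvas"
    events: {
      onhover: {
        enable: true,
        mode: "repulse"
      }
    },
    modes: {
      repulse: {
        distance: 70,
        duration: 0.4
      }
    }
  }
};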

C3.js responsive x axis timeseries

I'm trying to create a graph that can look good on both mobile and desktop using c3 (http://c3js.org/).
However, I'm having trouble getting the x-axis, which is a timeseries, to change size for mobile. The only way I've managed to do this is by destroying the graph and re-creating it from scratch, but this prevents me from adding a nice transition to the chart, which would be really nice to have.
(desktop view screenshot)
I tried using the tick culling option on the x-axis and set a max value of 8, but this option gets ignored when using either the flush() or resize() methods.
So my question is: is there any way to change the x axis values without having to destroy the graph and re-generate it?
http://imgur.com/EMECqqB
I have used the onresized: function () {...} callback (or you can use onresize: function () { ... }) to change the culling max when the screen is resized:
axis: {
  x: {
    type: 'timeseries',
    tick: {
      format: '%m-%d',
      fit: true,
      culling: {
        max: window.innerWidth > 500 ? 8 : 5
      }
    }
  }
},
onresized: function () {
  chart.internal.config.axis_x_tick_culling_max = window.innerWidth > 500 ? 8 : 5;
},
Here is an example: https://jsfiddle.net/07s4zx6c/2/
Add a media query according to the device size, like this:
@media screen and (max-width: 810px) {
  .c3 svg { font: .8em sans-serif; }
}
