I've been trying to render 10k+ instances of the same image (with various simple transformations such as rotation and position) on a <canvas> element with react-pixi, using hooks.
import * as React from 'react'
import * as PIXI from 'pixi.js'
import { ParticleContainer, useApp, useTick, Stage, Sprite } from '@inlet/react-pixi'

const texture = PIXI.Texture.from('./images/bg-arrow.png')

export function PixiTest() {
  // Unused right now but I think I have to use this maybe with
  // `app.stage.addChild( ??? )` ?
  const app = useApp()

  const sprite = (i: number) => (
    <Sprite
      key={i}
      texture={texture}
      anchor={0.5}
      x={i}
      y={i}
      scale={0.01 * i}
    />
  )

  // Very dumb implementation that starts lagging heavily before 1000
  return (
    <ParticleContainer position={[0, 0]} properties={{ position: true }}>
      {Array(100).fill(undefined).map((_, i) => sprite(i))}
    </ParticleContainer>
  )
}
The above is called inside <Stage {...}></Stage> tags, roughly as sketched below.
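For context, this is how it's mounted (the dimensions here are placeholders, not my real values):

<Stage width={800} height={600}>
  <PixiTest />
</Stage>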
How can I make this work at a constant 50-60 FPS with 10k+ instances?
It sounds like the problem isn't your code; it's the limits of the browser. If I had to guess, it's because the texture that you're using is too high-resolution. You're scaling the sprite down by a factor of 100 in your code. Instead, use an image that's 100x smaller.
Consider:
- If the image is 32x32px, with 100 sprites, you're rendering 102,400 pixels per frame.
- If the image is 32x32px, with 1000 sprites, you're rendering 1,024,000 pixels per frame.
- If the image is 3x3px, with 1000 sprites, you're rendering 9,000 pixels per frame.
Also, it doesn't sound like you need useApp at all. I'd also recommend wrapping PixiTest in React.memo, since it accepts no props and won't ever need to re-render (in React, that is). For example:
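A minimal sketch of that suggestion (the component body stays exactly as in the question):

const MemoizedPixiTest = React.memo(PixiTest)

// Render <MemoizedPixiTest /> inside the <Stage> instead of <PixiTest />;
// with no props, the memoized component never re-renders after mounting.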
I'm trying to create multiple 3D models of a cube, varying the texture used on the faces of each, using react-three-fiber. I would like them exported as glTF files.
I have been using "Exporting scene in GLTF format from react-three-fiber" as a guideline. I have the ref component and callback; however, I'm not sure how to call GLTFExporter with, say, a button, so that it exports a cube with different faces every time (until the options from a given directory run out). I also have one scene I am able to export as an example for now.
Ideally, I would like to have a directory for the different cubes, and the GLTFExporter "button" would access a different folder from there, with the different faces, each time.
A bit different from the solution you suggested, but a similar option would be to first export the scene from GLTFExporter and then generate permutations of the glTF file afterward:
import { WebIO } from '@gltf-transform/core';
import { KHRONOS_EXTENSIONS } from '@gltf-transform/extensions';

const exporter = new THREE.GLTFExporter();

// NOTE: three.js r135 will change the method signature here:
// (scene, onLoad, options) → (scene, onLoad, onError, options)
exporter.parse(scene, (glb) => createVariations(glb), {binary: true});

async function createVariations (glb) {
  const io = new WebIO().registerExtensions(KHRONOS_EXTENSIONS);
  const document = io.readBinary(glb);

  // Assign a blank texture to all materials' base color.
  const texture = document.createTexture();
  for (const material of document.getRoot().listMaterials()) {
    material.setBaseColorTexture(texture);
  }

  for (const textureURL of MY_TEXTURE_URLS) {
    // Load each texture and attach it.
    const image = await fetch(textureURL).then((r) => r.arrayBuffer());
    texture.setImage(image).setMimeType('image/png'); // or 'image/jpeg'

    // Write this variant of the GLB...
    const variantGLB = io.writeBinary(document); // → ArrayBuffer
    // ...then download or upload variantGLB as needed.
  }
}
The same thing could be done offline in a script, using NodeIO instead of WebIO.
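A rough sketch of that offline version (file paths and the MY_TEXTURE_PATHS list are placeholders; note that newer @gltf-transform/core versions make NodeIO's read/write calls async, hence the awaits below, so check the API of the version you install):

import fs from 'fs';
import { NodeIO } from '@gltf-transform/core';
import { KHRONOS_EXTENSIONS } from '@gltf-transform/extensions';

async function main() {
  const io = new NodeIO().registerExtensions(KHRONOS_EXTENSIONS);
  const document = await io.read('./cube.glb'); // GLB exported earlier

  // Same idea as above: one shared texture slot on every material.
  const texture = document.createTexture();
  for (const material of document.getRoot().listMaterials()) {
    material.setBaseColorTexture(texture);
  }

  for (const [i, path] of MY_TEXTURE_PATHS.entries()) {
    texture.setImage(fs.readFileSync(path)).setMimeType('image/png');
    fs.writeFileSync(`./cube-variant-${i}.glb`, await io.writeBinary(document));
  }
}

main();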
I'm using the Victory Pie in my React Native app and it renders fine without any issues, but my requirement is that each slice within the pie chart should have rounded corners and overlap the next slice, as in the image:
I'm able to get the rounded corners by applying the cornerRadius attribute, but is it possible to make the slices overlap like in the picture, or should I create a custom component?
Ran into the same issue; here's my workaround:
You have to override Victory's data component and create an overlap by increasing each slice's endAngle a bit.
The example below also shows how to achieve the circular shape of the slices.
import { Slice, SliceProps, VictoryPie } from 'victory-native'; // adjust to 'victory' / 'victory-pie' on web

function CustomSlice(props: SliceProps) {
  const { datum } = props;

  // Extend each slice's end angle slightly so neighbouring slices overlap.
  const sliceOverride = {
    ...props.slice,
    endAngle: (props.slice?.endAngle ?? 0) + 0.3,
  };

  return (
    <Slice
      {...props}
      slice={sliceOverride}
      cornerRadius={50}
      sliceStartAngle={datum.background ? 0 : props.sliceStartAngle}
      sliceEndAngle={datum.background ? 360 : props.sliceEndAngle}
    />
  );
}

/* ... */

<VictoryPie dataComponent={<CustomSlice />} />
Result for [2, 1, 1] data values:
This answer is quite late, but after a ton of searching, the only solution I could find for this was to create a second <VictoryPie> chart and have it resting directly underneath the chart you want the background colour for.
This solution only works for a single background colour, though, not multiple like you required in your example.
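A rough sketch of that stacking idea (the wrapper layout and colour values are my assumptions, untested):

import * as React from 'react';
import { View, StyleSheet } from 'react-native';
import { VictoryPie } from 'victory-native';

export function PieWithBackground({ data }: { data: { x: string; y: number }[] }) {
  return (
    <View>
      {/* Full-circle "background" pie, absolutely positioned underneath. */}
      <View style={StyleSheet.absoluteFill}>
        <VictoryPie data={[{ x: '', y: 1 }]} colorScale={['#e0e0e0']} labels={() => null} />
      </View>
      {/* The real chart renders on top at the same origin. */}
      <VictoryPie data={data} cornerRadius={50} />
    </View>
  );
}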
The SceneKit rendering loop is well documented here: https://developer.apple.com/documentation/scenekit/scnscenerendererdelegate and here: https://www.raywenderlich.com/1257-scene-kit-tutorial-with-swift-part-4-render-loop. However, neither of these documents explains what SceneKit does between the calls to didApplyConstraints and willRenderScene.
I've modified my SCNSceneRendererDelegate to measure the time between each call, and I can see that around 5ms elapses between those two calls. It isn't running my code in that time, but presumably some aspect of the way I've set up my scene is creating work which has to be done there. Any insight into what SceneKit is doing would be very helpful.
I am calling SceneKit myself from an MTKView's draw call (rather than using an SCNView) so that I can render the scene twice. The first render is normal; the second uses the depth buffer from the first but draws just the subset of the scene that I want to "glow" onto a separate colour buffer. That colour buffer is then scaled down, Gaussian blurred, scaled back up and then blended over the top of the first scene (all with custom Metal shaders).
The 5ms spent between didApplyConstraints and willRenderScene started happening when I introduced this extra rendering pass. To control which nodes are in each scene I switch the opacity of a small number of parent nodes between 0 and 1. If I remove the code which switches opacity but keep everything else (so there are two rendering passes, but they both draw everything), the extra 5ms disappears and the overall frame rate is actually faster, even though much more rendering is happening.
I'm writing Swift targeting macOS on a 2018 MacBook Pro.
UPDATE: mnuages has explained that changing the opacity causes SceneKit to rebuild the scene graph, and that explains part of the lost time. However, I've now discovered that my use of a custom SCNProgram for the nodes in one rendering pass also triggers a 5ms pause between didApplyConstraints and willRenderScene. Does anyone know why this might be?
Here is my code for setting up the SCNProgram and the SCNMaterial, both done once:
let device = MTLCreateSystemDefaultDevice()!
let library = device.makeDefaultLibrary()
glowProgram = SCNProgram()
glowProgram.library = library
glowProgram.vertexFunctionName = "emissionGlowVertex"
glowProgram.fragmentFunctionName = "emissionGlowFragment"
...
let glowMaterial = SCNMaterial()
glowMaterial.program = glowProgram
let emissionImageProperty = SCNMaterialProperty(contents: emissionImage)
glowMaterial.setValue(emissionImageProperty, forKey: "tex")
Here's where I apply the material to the nodes:
let nodeWithGeometryClone = nodeWithGeometry.clone()
nodeWithGeometryClone.categoryBitMask = 2
let geometry = nodeWithGeometryClone.geometry!
nodeWithGeometryClone.geometry = SCNGeometry(sources: geometry.sources, elements: geometry.elements)
glowNode.addChildNode(nodeWithGeometryClone)
nodeWithGeometryClone.geometry!.firstMaterial = glowMaterial
The glow nodes are a deep clone of the regular nodes, but with an alternative SCNProgram. Here's the Metal code:
#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>

struct NodeConstants {
    float4x4 modelTransform;
    float4x4 modelViewProjectionTransform;
};

struct EmissionGlowVertexIn {
    float3 pos [[attribute(SCNVertexSemanticPosition)]];
    float2 uv [[attribute(SCNVertexSemanticTexcoord0)]];
};

struct EmissionGlowVertexOut {
    float4 pos [[position]];
    float2 uv;
};

vertex EmissionGlowVertexOut emissionGlowVertex(EmissionGlowVertexIn in [[stage_in]],
                                                constant NodeConstants &scn_node [[buffer(1)]]) {
    EmissionGlowVertexOut out;
    out.pos = scn_node.modelViewProjectionTransform * float4(in.pos, 1) + float4(0, 0, -0.01, 0);
    out.uv = in.uv;
    return out;
}

constexpr sampler linSamp = sampler(coord::normalized, address::clamp_to_zero, filter::linear);

fragment half4 emissionGlowFragment(EmissionGlowVertexOut in [[stage_in]],
                                    texture2d<half, access::sample> tex [[texture(0)]]) {
    return tex.sample(linSamp, in.uv);
}
By changing the opacity of nodes you're invalidating parts of the scene graph, which can result in additional work for the renderer.
It would be interesting to see if setting the camera's categoryBitMask is more performant (it doesn't modify the scene graph).
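A quick sketch of what that could look like (untested; renderer stands for your SCNRenderer, the mask value 2 matches the categoryBitMask already used in the question, and the restore value assumes the camera's default mask of all bits set):

// Tag the glow hierarchy once, instead of toggling opacity per frame.
glowNode.enumerateHierarchy { node, _ in node.categoryBitMask = 2 }

// Glow pass: the camera only renders nodes whose mask overlaps 2.
renderer.pointOfView?.camera?.categoryBitMask = 2
// ... encode the glow pass ...

// Main pass: restore the camera so it sees everything again.
renderer.pointOfView?.camera?.categoryBitMask = -1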
The code below will paint the component onto the image's graphics, but it will translate it instead of fitting it to the size of the graphics.
int ratio = 2;
Image screenshotImage = Image.createImage(getWidth() * ratio, getHeight() * ratio);
c.revalidate();
c.setVisible(true);
c.paintComponent(screenshotImage.getGraphics(), true);
I cannot just use the image and scale it, because some of the content would be truncated. Can the component paint itself onto the image graphics at a specified size?
Many thanks.
Use:
c.setX(0);
c.setY(0);
c.setWidth(img.getWidth());
c.setHeight(img.getHeight());
if(c instanceof Container) {
    c.setShouldLayout(true);
    c.layoutContainer();
}
c.remove();
c.paintComponent(myImage.getGraphics(), true);
// add c back to its parent container and revalidate
If the component is currently on a form, you would want to undo all of that afterwards by calling revalidate on the parent form, for example:
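A small sketch of that restore step (parent is a placeholder for whatever container c came from):

// Re-attach the component and refresh the UI.
parent.add(c);
c.getComponentForm().revalidate();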
I'm currently building an FPS-like game for Android.
I've noticed that if I size objects in pixels, low-resolution devices make the game much easier to play than high-resolution phones.
If I use percentages for building objects, bigger devices gain the advantage instead: tablets are larger than phones, so players can shoot my objects more easily.
I want my objects to be exactly the same size on every device. Is that possible?
More specifically, I use Python/Kivy. Is it possible to define an object's size in cm/ft etc.?
You can make relative_size and relative_position methods, and make them relative to the window's width or height.
You get the size of the window from the Window class.
Remember to make the object's size (w, h) relative to only one of width or height, or your objects will be warped.
from kivy.uix.widget import Widget
from kivy.graphics import Canvas, InstructionGroup, Color, Ellipse
from kivy.core.window import Window
from kivy.app import App

class MyCanvas(Widget):
    def __init__(self, **kwargs):
        super(MyCanvas, self).__init__(**kwargs)
        #Window.size = (200,100)
        self.size = Window.size
        self.orientation = "vertical"
        self.ball = InstructionGroup()
        self.ball_size = self.relative_size(10, 10)
        self.color = Color(0, 1, 1)
        self.ellipse = Ellipse(size=self.ball_size, pos=self.relative_position(100, 50, self.ball_size))
        self.ball.add(self.color)
        self.ball.add(self.ellipse)
        self.canvas.add(self.ball)

    def relative_position(self, x, y, obj_size=(0, 0)):
        # Pass obj_size if you want the position to be the center of the object.
        x = (self.width / 100.0) * x - obj_size[0] / 2.0
        y = (self.height / 100.0) * y - obj_size[1] / 2.0
        return (x, y)

    def relative_size(self, w, h):
        # Only make it relative to one of your width or height,
        # or your object will be warped.
        return (self.width / float(w), self.width / float(h))

class MyApp(App):
    def build(self):
        return MyCanvas()

if __name__ == "__main__":
    MyApp().run()
The relative_position method here works as a percentage, so you pass values from 0-100 in both directions. If you want something else, change the 100s in that method.
Try uncommenting the #Window.size = (200,100) line and playing with the window size to see how it works.
You could also handle the event fired when your application changes size, for example if your phone changes orientation.
As I did not implement that, the code above only works for the size the application started with, but a minimal sketch of such a handler follows.
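A minimal sketch of that resize handling, assuming the MyCanvas class above (the handler name is my own):

# In MyCanvas.__init__, subscribe to window resize events:
#     Window.bind(on_resize=self.on_window_resize)

def on_window_resize(self, window, width, height):
    # Recompute everything that was derived from the window size.
    self.size = (width, height)
    self.ball_size = self.relative_size(10, 10)
    self.ellipse.size = self.ball_size
    self.ellipse.pos = self.relative_position(100, 50, self.ball_size)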