I'm using Victory Pie in my React Native app and it renders fine, but my requirement is that each slice of the pie chart should have rounded corners and overlap its neighbour, like in the image:
I'm able to get the circular corners by applying the attribute:
cornerRadius
but is it possible to make them overlap like in the picture, or should I create a custom component?
I ran into the same issue; here's my workaround:
You have to override Victory's data component and create the overlap by increasing each slice's endAngle slightly.
The example below also shows how to give the slices rounded corners.
import { Slice, SliceProps, VictoryPie } from 'victory'; // in React Native, import from 'victory-native'

function CustomSlice(props: SliceProps) {
  const { datum } = props;
  // Extend this slice's endAngle a little past the next slice's startAngle,
  // which is what creates the overlap.
  const sliceOverride = {
    ...props.slice,
    endAngle: (props.slice?.endAngle ?? 0) + 0.3,
  };
  return (
    <Slice
      {...props}
      slice={sliceOverride}
      cornerRadius={50}
      sliceStartAngle={datum.background ? 0 : props.sliceStartAngle}
      sliceEndAngle={datum.background ? 360 : props.sliceEndAngle}
    />
  );
}
/* ... */
<VictoryPie dataComponent={CustomSlice} />
Result for [2, 1, 1] data values:
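For intuition, here's a dependency-free sketch of why bumping each endAngle by 0.3 radians makes neighbouring slices overlap for the [2, 1, 1] data above. The pieSlices/padSlices helpers are illustrative, not Victory APIs; angles are in radians, as in d3-shape's pie layout, which Victory uses internally.

```typescript
type PieSlice = { startAngle: number; endAngle: number };

// Compute plain pie slices for the given values over a full circle.
function pieSlices(values: number[]): PieSlice[] {
  const total = values.reduce((a, b) => a + b, 0);
  let angle = 0;
  return values.map((v) => {
    const startAngle = angle;
    angle += (v / total) * 2 * Math.PI;
    return { startAngle, endAngle: angle };
  });
}

// Pad each slice's endAngle, as the CustomSlice above does.
function padSlices(slices: PieSlice[], pad = 0.3): PieSlice[] {
  return slices.map((s) => ({ ...s, endAngle: s.endAngle + pad }));
}

const padded = padSlices(pieSlices([2, 1, 1]));
// Each slice now ends 0.3 rad past the next slice's start, so they overlap:
// padded[0].endAngle (≈ 3.44) > padded[1].startAngle (≈ 3.14)
```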
This answer is quite late, but after a ton of searching, the only solution I could find was to create a second <VictoryPie> graph sitting directly underneath the graph you want the background color for.
This only worked for a single background color, though, not multiple backgrounds like you required in your example.
I'm still pretty new to 3D with Blender (and coding in 3D), but I'm trying to create and export a material using nodes (instead of a JPG image):
In React, I'm coding as follows:
export default function Ring({ ...props }: JSX.IntrinsicElements['group']) {
  const group = useRef<THREE.Group>(null)
  const { nodes, materials } = useGLTF(
    '../assets/ring.glb',
    'https://www.gstatic.com/draco/versioned/decoders/1.4.1/'
  ) as GLTFResult
  return (
    <group ref={group} {...props} dispose={null}>
      <mesh castShadow receiveShadow geometry={nodes.ring.geometry} material={materials['Material.001']} />
    </group>
  )
}
However, in the browser I am getting this result:
I'm trying to figure out if the issue is:
a. There is nothing to reflect, and therefore no color. (I added <Sky .. /> and <Environment .. /> tags hoping this would solve it.)
b. The code needs some extra code to pull in other parameters which are part of the object itself:
c. I need to add extra shaders or something because of 'unknown unknowns' in the way I'm using things.
d. I've exported it wrongly - which might be the case, because I've tried importing it at https://3dviewer.net/ and it is still white. (I tried using 'apply modifiers' when exporting.)
Either way, I need to figure out if this is an exporting issue, or a coding issue. If it is an issue with the way I'm using Blender, I'll close this question and re-ask it on https://blender.stackexchange.com.
Thanks for the help.
I've been trying to render about 10k+ instances of the same image (with various simple transformations such as rotation/position) on a canvas element with react-pixi using hooks.
import * as React from 'react'
import * as PIXI from 'pixi.js'
import { ParticleContainer, useApp, useTick, Stage, Sprite } from '@inlet/react-pixi'

const texture = PIXI.Texture.from('./images/bg-arrow.png')

export function PixiTest() {
  // Unused right now but I think I have to use this maybe with
  // `app.stage.addChild( ??? )` ?
  const app = useApp()

  const sprite = (i: number) => <Sprite
    key={i}
    texture={texture}
    anchor={0.5}
    x={i}
    y={i}
    scale={0.01 * i}
  />

  // Very dumb implementation that starts lagging heavily before 1000
  return (
    <ParticleContainer position={[0, 0]} properties={{ position: true }}>
      {
        Array(100).fill(undefined).map((_, i) => sprite(i))
      }
    </ParticleContainer>
  )
}
The above is called inside <Stage {...}></Stage> tags.
How can I make this work with constant 50-60 fps and 10k+ instances ?
It sounds like the problem isn't your code, it's the limits of the browser. If I had to guess, it's because the texture that you're using is too high-resolution. You're scaling the sprite down by a factor of 100 in your code. Instead, use an image that's 100x smaller.
Consider:
- If the image is 32x32px, with 100 sprites, you're rendering 102,400 pixels per frame.
- If the image is 32x32px, with 1000 sprites, you're rendering 1,024,000 pixels per frame.
- If the image is 3x3px, with 1000 sprites, you're rendering 9,000 pixels per frame.
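The fill-rate arithmetic in the list above can be sketched as a tiny helper (a rough estimate; real GPU cost also depends on overdraw, blending, and batching):

```typescript
// Pixels drawn per frame is roughly (texture width * texture height) * sprite count.
function pixelsPerFrame(texWidth: number, texHeight: number, spriteCount: number): number {
  return texWidth * texHeight * spriteCount;
}

console.log(pixelsPerFrame(32, 32, 100));  // 102400
console.log(pixelsPerFrame(32, 32, 1000)); // 1024000
console.log(pixelsPerFrame(3, 3, 1000));   // 9000
```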
Also, it doesn't sound like you need useApp at all. I'd also recommend wrapping PixiTest in React.memo, since it accepts no props and won't ever need to re-render (in React terms, that is).
The SceneKit rendering loop is well documented here: https://developer.apple.com/documentation/scenekit/scnscenerendererdelegate and here: https://www.raywenderlich.com/1257-scene-kit-tutorial-with-swift-part-4-render-loop. However, neither of these documents explains what SceneKit does between the calls to didApplyConstraints and willRenderScene.
I've modified my SCNSceneRendererDelegate to measure the time between each call and I can see that around 5ms elapses between those two calls. It isn't running my code in that time, but presumably some aspect of the way I've set up my scene is creating work which has to be done there. Any insight into what SceneKit is doing would be very helpful.
I am calling SceneKit myself from an MTKView's draw call (rather than using an SCNView) so that I can render the scene twice. The first render is normal, the second uses the depth buffer from the first but draws just a subset of the scene that I want to "glow" onto a separate colour buffer. That colour buffer is then scaled down, gaussian blurred, scaled back up and then blended over the top of the first scene (all with custom Metal shaders).
The 5ms spent between didApplyConstraints and willRenderScene started happening when I introduced this extra rendering pass. To control which nodes are in each scene I switch the opacity of a small number of parent nodes between 0 and 1. If I remove the code which switches opacity but keep everything else (so there are two rendering passes but they both draw everything) the extra 5ms disappears and the overall frame rate is actually faster even though much more rendering is happening.
I'm writing Swift targeting macOS on a 2018 MacBook Pro.
UPDATE: mnuages has explained that changing the opacity causes SceneKit to rebuild the scene graph, and that explains part of the lost time. However, I've now discovered that my use of a custom SCNProgram for the nodes in one rendering pass also triggers a 5ms pause between didApplyConstraints and willRenderScene. Does anyone know why this might be?
Here is my code for setting up the SCNProgram and the SCNMaterial, both done once:
let device = MTLCreateSystemDefaultDevice()!
let library = device.makeDefaultLibrary()
glowProgram = SCNProgram()
glowProgram.library = library
glowProgram.vertexFunctionName = "emissionGlowVertex"
glowProgram.fragmentFunctionName = "emissionGlowFragment"
...
let glowMaterial = SCNMaterial()
glowMaterial.program = glowProgram
let emissionImageProperty = SCNMaterialProperty(contents: emissionImage)
glowMaterial.setValue(emissionImageProperty, forKey: "tex")
Here's where I apply the material to the nodes:
let nodeWithGeometryClone = nodeWithGeometry.clone()
nodeWithGeometryClone.categoryBitMask = 2
let geometry = nodeWithGeometryClone.geometry!
nodeWithGeometryClone.geometry = SCNGeometry(sources: geometry.sources, elements: geometry.elements)
glowNode.addChildNode(nodeWithGeometryClone)
nodeWithGeometryClone.geometry!.firstMaterial = glowMaterial
The glow nodes are a deep clone of the regular nodes, but with an alternative SCNProgram. Here's the Metal code:
#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>

struct NodeConstants {
    float4x4 modelTransform;
    float4x4 modelViewProjectionTransform;
};

struct EmissionGlowVertexIn {
    float3 pos [[attribute(SCNVertexSemanticPosition)]];
    float2 uv [[attribute(SCNVertexSemanticTexcoord0)]];
};

struct EmissionGlowVertexOut {
    float4 pos [[position]];
    float2 uv;
};

vertex EmissionGlowVertexOut emissionGlowVertex(EmissionGlowVertexIn in [[stage_in]],
                                               constant NodeConstants &scn_node [[buffer(1)]]) {
    EmissionGlowVertexOut out;
    out.pos = scn_node.modelViewProjectionTransform * float4(in.pos, 1) + float4(0, 0, -0.01, 0);
    out.uv = in.uv;
    return out;
}

constexpr sampler linSamp = sampler(coord::normalized, address::clamp_to_zero, filter::linear);

fragment half4 emissionGlowFragment(EmissionGlowVertexOut in [[stage_in]],
                                    texture2d<half, access::sample> tex [[texture(0)]]) {
    return tex.sample(linSamp, in.uv);
}
By changing the opacity of nodes you're invalidating parts of the scene graph, which can result in additional work for the renderer.
It would be interesting to see if setting the camera's categoryBitMask is more performant (it doesn't modify the scene graph).
I have an SCNPlane that I created in the SceneKit editor, and I want one side of the plane to have a certain image and the other side to have another image. How do I do that in the SceneKit editor?
So far what I've tried is adding 2 materials to the plane. I also tried adding 2 materials and unchecking double-sided, but that doesn't work.
Any help would be appreciated!
Per the SCNPlane docs:
The surface is one-sided. Its surface normal vectors point in the positive z-axis direction of its local coordinate space, so it is only visible from that direction by default. To render both sides of a plane, either set the isDoubleSided property of its material to true or create two plane geometries and orient them back to back.
That implies a plane has only one material — isDoubleSided is a property of a material, letting that one material render on both sides of a surface, but there's nothing you can do to one material to turn it into two.
If you want a flat surface with two materials, you can arrange two planes back to back as the doc suggests. Make them both children of a containing node and you can then use that to move them together. Or you could perhaps make an SCNBox that's very thin in one dimension.
Very easy to do in 2022.
It's very easy and common to do this: you just add the rear as a child.
To be clear, the node (and the rear you add) should both use a single-sided material.
Obviously, the rear you add points in the other direction!
Do note that they are indeed in "exactly the same place". Sometimes folks new to 3D meshes think the two planes would need to be "a little apart"; not so.
public var rear = SCNNode()
private var theRearPlane = SCNPlane()

private func addRear() {
    addChildNode(rear)
    rear.eulerAngles = SCNVector3(0, CGFloat.pi, 0)
    theRearPlane. ... set width, height etc
    theRearPlane.firstMaterial?.isDoubleSided = false
    rear.geometry = theRearPlane
    rear.geometry?.firstMaterial!.diffuse.contents = .. your rear image/etc
}
So ...
/// Double-sided sprite
class SCNTwoSidedNode: SCNNode {
    public var rear = SCNNode()
    private var thePlane = SCNPlane()

    override init() {
        super.init()
        thePlane. .. set size, etc
        thePlane.firstMaterial?.isDoubleSided = false
        thePlane.firstMaterial?.transparencyMode = .aOne
        geometry = thePlane
        addRear()
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }
}
Consuming code can then just refer to .rear, for example:
playerNode. ... the drawing of the Druid
playerNode.rear. ... Druid rules and abilities text
enemyNode. ... the drawing of the Mage
enemyNode.rear. ... Mage rules and abilities text
If you want to do this in the visual editor - very easy
It's trivial. Simply add the rear as a child. Rotate the child 180 degrees on Y.
It's that easy.
Make them both single-sided and put anything you want on the front and rear.
Simply move the main one (the front) normally and everything works.
I am facing an issue where my graph has a tree layout and looks fine initially. However, if I change the GraphSource upon user input/clicks using PopulateGraphSource, as in the OrgChart example, all the nodes end up stacked on top of each other in a corner, with no links.
I tried resetting the graph source by creating a new one:
this.graphSource = new GraphSource();
I also tried the Clear method on GraphSource. Neither solved the problem; I keep getting the same result.
I am using
ObservableCollection<Node> hierarchicalDataSource;
to fill up my GraphSource object.
All I do is create a new one and then call
PopulateGraphSource();
method.
Similar issues: question in Telerik support, a different Telerik support question.
Try calling the Layout method on the diagram control. Here is a small fragment of code:
TreeLayoutSettings settings = new TreeLayoutSettings()
{
    TreeLayoutType = TreeLayoutType.TreeDown,
    VerticalSeparation = 60,
    HorizontalSeparation = 30
};

if (this.diagram.Shapes.Count > 0)
{
    settings.Roots.Add(this.diagram.Shapes[0]);
    this.diagram.Layout(LayoutType.Tree, settings);
    this.diagram.AutoFit();
    //this.diagram.Zoom = 1;
}