How do I draw non-overlapping edges using arbor.js? For example, I have two nodes A and B in my graph, with a directed edge from A->B and another from B->A. How do I display these edges so that they don't overlap?
Well, you have the edge renderer at your disposal, so you can draw an arced line on the canvas between the two points. To do this, use particleSystem.eachEdge, check whether A and B are connected in both directions, and draw an arced edge for each of the pair.
I have a d3 graph that is a series of dots on a horizontal line (for ease, let's say a red one, blue one, green one). Not too tough.
I have several of these graphs stacked on top of each other. Easy.
A solid line then needs to connect all the red dots vertically across the different graphs. This is where my mind explodes.
Is it possible to draw a line across multiple graphs?
Do I make a single graph that sits over all of the individual graphs?
Is there a way to find the location of a node in a different graph and draw a line towards it?
Do I need to calculate a phantom node based on the height of a row and some trigonometry to draw the line?
Any suggestions of approaches would be greatly appreciated. I am in the pre-dev stage, trying to figure out the level of effort for the designers who are asking for this.
I'm given a 2d binary array. Some of the dots are on, some are off (1 for on, 0 for off).
I know that the "on" dots were created by putting circles on the 2d array.
The circles all have the same radius, and each time a circle was put down, the dots inside it changed from 0 to 1.
All the circles lie within the edges of the array, and any dot touching the edge of a circle is lit.
An illustration can be seen below. The circles are placed randomly and may touch.
Notice that the dots inside the circles are 1 and all the others are 0.
Can you find how many circles there were just by looking at the 2d array, without seeing the circles, after I have put them? Is this problem solvable?
My attempt at solving this problem was:
First, I assumed that my circles can contain dots as in the figure (a radius big enough to contain 4 to 7 dots).
Then I tried to categorize the possible orientations the circles can have, but there are just too many.
I would like to find these two circles. Notice that they cannot overlap, but one can sit right next to the other.
If your circles don't overlap, you can use a connected component labeling algorithm and get the number of circles from the component count:
NCircles = (NComponents - 1) / 2
(if the inner empty regions of the circles and the outer empty area form separate components; for example, two separate rings give five components: two rings, two inner holes, and the outer background, so (5 - 1) / 2 = 2)
Edit: with these dots it is worth selecting only connected components whose size falls within some range, to exclude stray dots and other false regions.
A simple kind of CCL suitable for this picture (a sketch follows the steps):
scan the image until a black (lit) pixel is met
flood fill from it while possible, keeping the bounding box of the scanned black pixels
if the box corresponds to the circle size, count it
continue scanning from the next unmarked pixel
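A minimal sketch of those steps in Swift; the function name, the [[Int]] grid representation, and the known-radius parameter are all assumptions for illustration:

// Count circle-sized connected components of 1s in a binary grid,
// assuming non-overlapping filled circles of a known approximate radius.
func countCircles(in grid: [[Int]], radius: Int) -> Int {
    let rows = grid.count
    let cols = grid.first?.count ?? 0
    var visited = Array(repeating: Array(repeating: false, count: cols), count: rows)
    var circles = 0
    for r in 0..<rows {
        for c in 0..<cols where grid[r][c] == 1 && !visited[r][c] {
            // Flood fill from (r, c), tracking the component's bounding box.
            visited[r][c] = true
            var stack = [(r, c)]
            var (minR, maxR, minC, maxC) = (r, r, c, c)
            while let point = stack.popLast() {
                let (y, x) = point
                minR = min(minR, y); maxR = max(maxR, y)
                minC = min(minC, x); maxC = max(maxC, x)
                for (dy, dx) in [(-1, 0), (1, 0), (0, -1), (0, 1)] {
                    let ny = y + dy
                    let nx = x + dx
                    if ny >= 0, ny < rows, nx >= 0, nx < cols,
                       grid[ny][nx] == 1, !visited[ny][nx] {
                        visited[ny][nx] = true
                        stack.append((ny, nx))
                    }
                }
            }
            // Count the component only if its bounding box roughly matches the
            // circle diameter; this filters stray dots and other false regions.
            let d = 2 * radius
            if abs((maxR - minR + 1) - d) <= 1 && abs((maxC - minC + 1) - d) <= 1 {
                circles += 1
            }
        }
    }
    return circles
}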
One more possible approach: you can try the Hough transform for circles of a predefined radius.
For example, the OpenCV library contains a labeling function that works with images and arrays (and a Hough transform too).
Why not just randomly generate circles and count them?
When you insert a new circle, check that it does not overlap any of the ones already placed.
Stop inserting new circles after a certain number of attempts has failed to place one. You will probably need to play a bit with that cutoff value.
You can then repeat the whole procedure a couple of times and average the results, as in the sketch below.
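A rough sketch of that idea in Swift, applied to the binary grid; the radius, the fully-lit test, and the failure cutoff are all assumptions you would tune:

import Foundation

// Estimate the circle count by randomly packing non-overlapping circles
// onto the lit region until placements keep failing.
func estimateCircleCount(grid: [[Int]], radius: Double, maxFailures: Int = 1000) -> Int {
    let rows = grid.count
    let cols = grid.first?.count ?? 0
    guard rows > 0, cols > 0 else { return 0 }
    var centers: [(y: Double, x: Double)] = []
    var failures = 0
    while failures < maxFailures {
        let y = Double.random(in: 0..<Double(rows))
        let x = Double.random(in: 0..<Double(cols))
        // Reject the candidate if it overlaps an already placed circle
        // or if any pixel it covers is not lit.
        let overlaps = centers.contains { hypot($0.y - y, $0.x - x) < 2 * radius }
        if overlaps || !circleFullyLit(grid: grid, cy: y, cx: x, radius: radius) {
            failures += 1
        } else {
            centers.append((y, x))
            failures = 0   // reset the counter after each success
        }
    }
    return centers.count
}

// True if every grid cell within `radius` of (cy, cx) is lit.
func circleFullyLit(grid: [[Int]], cy: Double, cx: Double, radius: Double) -> Bool {
    let rows = grid.count
    let cols = grid[0].count
    let r = Int(radius.rounded(.up))
    for dy in -r...r {
        for dx in -r...r where Double(dy * dy + dx * dx) <= radius * radius {
            let y = Int(cy) + dy
            let x = Int(cx) + dx
            if y < 0 || y >= rows || x < 0 || x >= cols || grid[y][x] == 0 {
                return false
            }
        }
    }
    return true
}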
I'm trying to make a node rotate along the X axis to look at another node.
I tried using SCNLookAtConstraint with gimbal lock disabled, but this still allows the node to rotate on both the X and Y axes. (Also, it makes the rear of the node face the target, not the front.)
How do I calculate how to rotate one node to face another from two vector 3 positions?
The docs talk about the orientation of a node, and what it means to 'look' at another node:
A node points in the direction of the negative z-axis of its local coordinate system. This axis defines the view direction for nodes containing cameras and the lighting direction for nodes containing spotlights or directional lights, as well as the orientation of the node’s geometry and child nodes. When Scene Kit evaluates a look-at constraint, it updates the constrained node’s transform property so that the node’s negative z-axis points toward the constraint’s target node.
You can modify that by specifying a different value for constraint.localFront, such as SCNVector3(0,0,1) to point with the positive z-axis instead of the negative.
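For example, a minimal sketch (node and targetNode are placeholders):

let constraint = SCNLookAtConstraint(target: targetNode)
constraint.isGimbalLockEnabled = false
// Aim the node's positive z-axis at the target instead of the default -z.
constraint.localFront = SCNVector3(0, 0, 1)
node.constraints = [constraint]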
If you need more control over which axes are engaged, then you have a couple of options:
Create an invisible target node that remains positioned on the plane perpendicular to your rotation axis, and constrain the node to look at it.
Instead of using constraints, use SCNNode.look(at:) to update the node within your game loop, providing a target coordinate translated onto that perpendicular plane (see the sketch below).
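A sketch of the second option, assuming the node should only rotate about its x-axis and you drive it from an SCNSceneRendererDelegate; node and targetNode are stored properties elsewhere (placeholders):

// Called every frame by the renderer.
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    // Clamp the target's x to the node's own x, so look(at:) only
    // produces a rotation about the node's x-axis.
    let target = targetNode.presentation.worldPosition
    let nodePos = node.presentation.worldPosition
    node.look(at: SCNVector3(nodePos.x, target.y, target.z))
}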
If the target is the camera, then check out SCNBillboardConstraint, which allows you to specify which axes are free to rotate:
let billBoard = SCNBillboardConstraint()
billBoard.freeAxes = [.X]
node.constraints = [billBoard]
In SceneKit, you can add an SCNLookAtConstraint to your SCNView's pointOfView to make the camera look at a certain node.
Is there a standard way of doing the same but for a specific face of a geometry?
So that, if I touch a specific face of a cube, the camera would move so that the Z axis of the camera node gets in line with the normal of the touched face? So that the cube would look like a plane from the new perspective.
No.
That would require movement of the camera, in addition to re-aiming it.
Imagine I'm in front of my house. I have a great view of the front and can just barely see the side to my left. In my Scene I tap the side of the house. A LookAt constraint would merely change the angle of the camera. It would not be aligned with the normal of that barely visible side.
To align with the normal, I'd have to walk around the house until I can stare at the house and be perpendicular to the side I tapped. At what radius? What path? You have to figure that out yourself.
Depending on what effect you're trying for, you might want to rotate the model instead of moving the camera. Rotate the tapped node locally (or as a child of an invisible parent) so that its minus-Z axis points out of the tapped face, and keep a lookAtConstraint on the node, not the camera (a sketch follows). This approach will change the look of the object, though: you will see it rotating, and the shading changing appropriately.
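A rough sketch of that idea, assuming tappedNode is the child of an invisible parent that carries the look-at constraint, and hit is the SCNHitTestResult from the tap (all names are placeholders):

import SceneKit
import simd

func faceTappedSide(of tappedNode: SCNNode, hit: SCNHitTestResult) {
    // The tapped face's normal, in the tapped node's local space.
    let n = simd_normalize(simd_float3(Float(hit.localNormal.x),
                                       Float(hit.localNormal.y),
                                       Float(hit.localNormal.z)))
    // Rotate the node so that normal lines up with -z, the direction the
    // parent's look-at constraint keeps aimed at the target.
    // (Degenerate when the normal is exactly opposite -z.)
    tappedNode.simdOrientation = simd_quatf(from: n, to: simd_float3(0, 0, -1))
}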
So that, if I touch a specific face of a cube, camera would move so that the Z axis of the camera node gets in line with the normal of the touched face?
Supposing you are using hit-testing to determine which object got touched, an SCNHitTestResult will give you both localCoordinates and localNormal, from which it should be fairly easy to derive a camera transform.
One easy way would be to make the camera a child node of the box, compute a position of the form localCoordinates + distance * localNormal, and finally build a transform using GLKMatrix4MakeLookAt and SCNMatrix4FromGLKMatrix4.
Note that you can also use worldCoordinates, worldNormal, as well as conversion utilities such as SCNNode.convertTransform(_:from:).
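A sketch of that approach; cameraNode is assumed to be a child of the tapped node, hit an SCNHitTestResult, and distance a value you pick:

import SceneKit
import GLKit

func aim(cameraNode: SCNNode, at hit: SCNHitTestResult, distance: Float) {
    let px = Float(hit.localCoordinates.x), py = Float(hit.localCoordinates.y), pz = Float(hit.localCoordinates.z)
    let nx = Float(hit.localNormal.x), ny = Float(hit.localNormal.y), nz = Float(hit.localNormal.z)
    // Stand `distance` away from the tapped point along the face normal,
    // looking back at that point.
    let look = GLKMatrix4MakeLookAt(px + distance * nx, py + distance * ny, pz + distance * nz,
                                    px, py, pz,
                                    0, 1, 0)   // up vector (breaks if the normal is vertical)
    // GLKMatrix4MakeLookAt returns a view matrix; the camera node's
    // transform is its inverse.
    cameraNode.transform = SCNMatrix4FromGLKMatrix4(GLKMatrix4Invert(look, nil))
}

If the camera node lives in some other space, the conversion utilities mentioned above, such as SCNNode.convertTransform(_:from:), can move the result across coordinate systems.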
Building on mnuage's answer: use a hit test or ray trace to find where the user tapped on the mesh, then add a node at that location and constrain the camera to look at that node.
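A minimal sketch of that, assuming you have the tap location and an SCNView:

import SceneKit

// Drop an invisible node at the tapped point and aim the camera at it.
func handleTap(at point: CGPoint, in sceneView: SCNView) {
    guard let hit = sceneView.hitTest(point, options: nil).first else { return }
    let target = SCNNode()
    target.position = hit.worldCoordinates
    sceneView.scene?.rootNode.addChildNode(target)
    sceneView.pointOfView?.constraints = [SCNLookAtConstraint(target: target)]
}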
I want to know whether glRotate rotates the camera, the world axes, or the object. Please explain how these are different, with examples.
the camera
There is no camera in OpenGL.
the world axis
There is no world in OpenGL.
or the object.
There are no objects in OpenGL.
Confused?
OpenGL is a drawing system that operates on points, lines and triangles. There is no concept of a scene or a world in OpenGL. All there is are vertices, each with a set of attributes, and the state of OpenGL, which determines how vertices are turned into pixels.
The very first stage of this process is getting the vertex positions within the viewport. In the fixed-function pipeline (i.e. without shaders), each vertex position is first multiplied by the so-called "modelview" matrix, the intermediate result is used for lighting calculations, and it is then multiplied by the "projection" matrix. After that, clipping and normalization into viewport coordinates are applied.
The two matrices I mentioned serve two purposes. The first one, the "modelview", applies a transformation to the incoming vertices so that they end up in the desired spot relative to the origin. There is no difference between first moving geometry to some place in the world and then moving the viewpoint within it, and keeping the viewpoint at the origin while moving the whole world in the opposite direction. All of this can be described by the modelview matrix.
The second one, the "projection", works together with the normalization process to behave like a kind of "lens", so to speak. With it you set the field of view (and a few other parameters, like shift, which you need for certain applications; don't worry about those).
The interesting thing about matrices is that they are non-commutative, i.e. for two given matrices M, N:
M * N ≠ N * M (for most M, N)
This ultimately means that you can compose a series of transformations A, B, C, D, ... into one single compound transformation matrix T by multiplying the primitive transformations onto each other in the right order.
The OpenGL matrix manipulation functions (which are obsolete, by the way) do just that. You have a matrix selected for manipulation (the matrix mode), for example the modelview matrix M. Then glRotate effectively does this:
M *= R(angle,axis)
i.e. the active matrix gets multiplied by a rotation matrix constructed from the given parameters. Similarly for scale and translate.
Whether this appears to behave like moving a camera or placing an object depends entirely on how, and in which order, those manipulations are combined.
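To see that order dependence concretely, here is a small sketch of the same matrix math in Swift using simd; the angle, offset, and vertex are arbitrary illustrative values:

import Foundation
import simd

// Rotation of `angle` radians about the z-axis.
func rotationZ(_ angle: Float) -> simd_float4x4 {
    let c = cos(angle), s = sin(angle)
    return simd_float4x4(rows: [
        SIMD4<Float>( c, -s, 0, 0),
        SIMD4<Float>( s,  c, 0, 0),
        SIMD4<Float>( 0,  0, 1, 0),
        SIMD4<Float>( 0,  0, 0, 1),
    ])
}

// Translation by (x, y, z).
func translation(_ x: Float, _ y: Float, _ z: Float) -> simd_float4x4 {
    var m = matrix_identity_float4x4
    m.columns.3 = SIMD4<Float>(x, y, z, 1)
    return m
}

let v = SIMD4<Float>(1, 0, 0, 1)   // a vertex at (1, 0, 0)
let r = rotationZ(.pi / 2)         // 90 degrees about z
let t = translation(5, 0, 0)

// "Object" feel: rotate the vertex in place, then move it into the world.
print(t * r * v)                   // ≈ (5, 1, 0, 1)
// "Camera" feel: move the vertex first, then rotate the whole world about the origin.
print(r * t * v)                   // ≈ (0, 6, 0, 1)

The first product is what calling glTranslate and then glRotate would build: reading right to left, the rotation hits the vertex first.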
But for OpenGL there are just numbers/vectors (vertex attributes), which somehow translate into 2-dimensional viewport coordinates that get drawn as points, or filled in between as lines or triangles.
glRotate works on the current matrix, so it depends on whether that matrix is the camera one or a world transformation one. To learn more about the current matrix, have a look at glMatrixMode().
Finding an example is just a matter of googling; I found this one, which should help you figure out what's happening.