Move an SCNNode and make it look at the same point it was looking at before moving - scenekit

I created SCNNodes with an SCNCylinder geometry to represent lines between points (i.e. SCNSpheres).
It works very well.
Now I am moving one point, and I want to move the 2 lines that were "linked" to this point. Let's concentrate on only the first line for simplicity.
To move a line:
I move the center of the cylinder node and update its length.
I use simdLook to change its orientation.
lineNode.simdLook(at: targetPoint, up: shapeRootNode.simdWorldUp, localFront: lineNode.simdWorldUp)
The line correctly moves to the center point and has the correct length, but gets an incorrect orientation every other call. After the 1st call it is correct. After the 2nd call it is perpendicular to the orientation it should have. After the 3rd call it is correct. Etc... 😭
✅ I verified that targetPoint is correct and that the worldUp of the scene (on its root node) is constant. So this leaves the worldUp of the line node ❌.
The 3rd parameter of simdLook is a local front. Since an SCNCylinder is built vertically, its local up axis is the line direction, so it should work 😶?
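To make this concrete, here is a simplified sketch of my move routine (the helper name updateLine and the node names sphereA/sphereB are just placeholders for my setup; the real code handles both lines):

import SceneKit
import simd

// Sketch only: sphereA/sphereB are the two end-point spheres, shapeRootNode is the common root.
func updateLine(_ lineNode: SCNNode, from sphereA: SCNNode, to sphereB: SCNNode, in shapeRootNode: SCNNode) {
    let startPoint = sphereA.simdWorldPosition
    let endPoint   = sphereB.simdWorldPosition

    // 1. Move the cylinder's center to the midpoint and update its length.
    lineNode.simdWorldPosition = (startPoint + endPoint) * 0.5
    (lineNode.geometry as? SCNCylinder)?.height = CGFloat(simd_length(endPoint - startPoint))

    // 2. Re-orient it. The cylinder is built along its local Y axis,
    //    so I pass the node's up axis as the localFront.
    lineNode.simdLook(at: endPoint, up: shapeRootNode.simdWorldUp, localFront: lineNode.simdWorldUp)
}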

OK, I found a workaround.
I reset lineNode's orientation to the identity SCNQuaternion before using simdLook.
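In code, the workaround is just this (same placeholder names as in the sketch above, using the simd variant of the orientation property):

// Workaround: reset the orientation to identity first, so the node's previous
// orientation cannot influence what simdLook computes.
lineNode.simdOrientation = simd_quatf(ix: 0, iy: 0, iz: 0, r: 1) // identity quaternion
lineNode.simdLook(at: endPoint, up: shapeRootNode.simdWorldUp, localFront: lineNode.simdWorldUp)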
So the issue is that simdLook incorrectly uses the node's current orientation property to compute the new ... orientation property for the same node!
This is silly.

Related

SceneKit SCNBillboardConstraint and SCNLookAtConstraint conflict

I'm trying to point a circle with an arrow at the top at an SCNNode in SceneKit, but keep it constrained, or flat, toward the point of view at all times. It seems that with a look-at constraint, the billboard constraint gets ignored.
Here is the setup for constraints:
let billboardConstraint = SCNBillboardConstraint()
billboardConstraint.freeAxes = SCNBillboardAxis.Y
let lookatConstraint = SCNLookAtConstraint(target: targetNode)
lookatConstraint.localFront = SCNVector3Make(0, 1, 0)
lookatConstraint.worldUp = SCNVector3Make(0, 1, 0)
arrowNode.constraints = [billboardConstraint, lookatConstraint]
self.sceneView.pointOfView?.addChildNode(arrowNode)
Here are example images of the issue. The first image is how I want to keep it, the second image shows the flattening that I can't seem to control:
No matter what I try, it doesn't stay facing the camera. I've tried equating the z distance for both the arrow node and the target node (still rotates the arrow node undesirably), tried setting a gimbal lock (flips the arrow node out), tried to adjust the angles for the arrow node to keep it "facing" the point of view, and tried an SCNCone as a pointer.
I also tried just moving this arrow node into an image view in the overlay, with an invisible arrow node inside SceneKit, but couldn't get the math right when trying to CGAffineTransform the 2D UIImageView. I tried getting the rotation vector for the SCNNode that has the SCNLookAtConstraint, tried projectPoint, etc. Not quite getting it. Maybe I should've paid more attention in high school :-(
Anyone have any ideas?
This post: 59251351 - it's like a 3D box with a tube (a tank with a gun) that targets a specific node - "aims" at it. That's your arrow, so first make sure that you have set it up with the correct rotation from the beginning, otherwise it will never work. The arrow should be a subnode, so that your main node and all subnodes rotate together when you set the constraints on the main node.
The example uses a fixed -z for the related nodes, and the point of view doesn't change. Assuming you want to move the camera forward, you need a way to maintain perspective - that means keeping Z for all the nodes in question at the same distance from the camera.
So, if your target node and indicator can stay on the same z-distance plane (as you indicated in your example that it can work that way), I "think" this should help - see the sketch below. You might set your target node slightly off on z compared to your arrow nodes, something insignificant such as .001, just so the look-at math doesn't actually return a NaN through some happenstance.
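Something along these lines, roughly (untested sketch; containerNode, arrowNode, targetNode and sceneView are placeholder names from your setup, and the .001 nudge is only there to keep the look-at math away from a degenerate case):

// The arrow geometry lives on a child node; the container gets the constraint,
// so everything underneath it rotates together.
let containerNode = SCNNode()
containerNode.addChildNode(arrowNode)

let lookAtConstraint = SCNLookAtConstraint(target: targetNode)
containerNode.constraints = [lookAtConstraint]

// Keep the target at (almost) the same z distance as the arrow - just slightly off
// so the look-at direction can never collapse to zero length.
targetNode.position.z += 0.001

sceneView.pointOfView?.addChildNode(containerNode)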
If it's more complex than that, then someone with more math skills is required.
Hope that helps

OxyPlot with WPF

I am working with OxyPlot, and I downloaded an example from the following link:
http://blog.bartdemeyer.be/2013/03/creating-graphs-in-wpf-using-oxyplot/
I added my own data to plot as it comes in, but the incoming points accumulate and that makes the graph unreadable.
How do I update the chart so that old points are removed and new points are displayed normally, not stacked?
http://blog.bartdemeyer.be/wp-content/uploads/image_thumb19.png
You need to zoom it. This thread from the OxyPlot discussion will help you.
http://oxyplot.codeplex.com/discussions/402272
Use LineSeries.Points.RemoveAt(index)
Example:
(DataPlot.Series[0] as LineSeries).Points.Add(new DataPoint(xValue, yValue0));
(DataPlot.Series[1] as LineSeries).Points.Add(new DataPoint(xValue, yValue1));
if (valueRange > 10000) // points will accumulate until the x-axis reaches 10000
{
    // after 10000
    (DataPlot.Series[0] as LineSeries).Points.RemoveAt(0); // removes first point of first series
    (DataPlot.Series[1] as LineSeries).Points.RemoveAt(0); // removes first point of second series
}
But you must do both together - add one new point and remove one. Then the points will not accumulate and the x-axis will keep the range you want.

How can I combine UIBezierPath drawings?

I'm trying to combine several UIBezierPath drawings.
I have different types of drawings I can make (line, cubic bezier, quadratic beziers), and each of these can be filled or unfilled. I'm selecting the drawing type randomly, and my goal is to make 3 different drawings which are connected at a point.
So where the first, say, line drawing ends, the second path - maybe a cubic bezier - begins. Where that ends, a third, maybe a filled line drawing, begins.
I've got a square UIView that I'm trying to draw this in, and each path should have its own part of the UIView: the first third, the second, and the third.
Would I be able to create this with one UIBezierPath object, or do I need to create 3 different ones? How to make them end and start at the same point? Is there a way to do this with subpaths?
UIBezierPath has instance methods like these (see the docs):
-addLineToPoint:
-addArcWithCenter:radius:startAngle:endAngle:clockwise:
-addCurveToPoint:controlPoint1:controlPoint2:
-addQuadCurveToPoint:controlPoint:
-appendPath:
You can combine paths one by one. When you're done, use -closePath to close the path.
Feel free to take a look at my open-sourced lib called UIBezierPath-Symbol. ;)
And if you want more customized path drawing, I recommend CGMutablePath. You can create each path as complex as you want (you can combine simple paths with the CGPathAdd... functions). Finally, use CGPathAddPath() to combine them together.
void CGPathAddPath (
    CGMutablePathRef path1,       // The mutable path to change.
    const CGAffineTransform *m,   // A pointer to an affine transformation matrix, or NULL if no transformation is needed. If specified, Quartz applies the transformation to path2 before it is added to path1.
    CGPathRef path2               // The path to add.
);
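For example, here is a rough Swift sketch of three connected segments in one UIBezierPath, each one starting exactly where the previous one ended (the points are arbitrary; spread them over the thirds of your view however you like):

let path = UIBezierPath()

// First third: a straight line.
path.move(to: CGPoint(x: 0, y: 50))
path.addLine(to: CGPoint(x: 100, y: 50))

// Second third: a cubic bezier that starts where the line ended.
path.addCurve(to: CGPoint(x: 200, y: 50),
              controlPoint1: CGPoint(x: 130, y: 0),
              controlPoint2: CGPoint(x: 170, y: 100))

// Last third: a quad bezier continuing from the previous end point.
path.addQuadCurve(to: CGPoint(x: 300, y: 50),
                  controlPoint: CGPoint(x: 250, y: 0))

path.stroke() // fill only the subpaths you close, if you need filled shapes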
You can combine paths like this:
UIBezierPath *endPath = [UIBezierPath bezierPath];
[endPath appendPath:leftLine];
[endPath appendPath:rightLine];
[endPath appendPath:midLine];
A UIBezierPath is just a wrapper for a CGPath, which itself is just a set of instructions for drawing (by stroke or fill, or both). That drawing can take place anywhere. In other words, a UIBezierPath is just a tool for drawing; the important thing is the drawing itself. Given a graphics context (which might be a UIView, a UIImage, a CALayer, whatever), you can do as much drawing as you like in succession - say, a line, then a cubic bezier, then a filled line drawing. But how you perform those drawing bits is totally up to you. You shouldn't really care whether you do it with three UIBezierPaths, one UIBezierPath, multiple paths, one path, subpaths, whatever (or even by copying other drawings into this one) - the final effect is all that matters, i.e. the accumulated drawing ultimately done in this graphics context.
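To illustrate, a minimal sketch (the view class and the specific shapes are made up; the point is only that three separate UIBezierPaths all draw into the same context):

import UIKit

class SquiggleView: UIView {
    override func draw(_ rect: CGRect) {
        UIColor.black.setStroke()
        UIColor.black.setFill()

        // A line in the first third of the view...
        let line = UIBezierPath()
        line.move(to: CGPoint(x: 0, y: rect.midY))
        line.addLine(to: CGPoint(x: rect.width / 3, y: rect.midY))
        line.stroke()

        // ...a cubic bezier picking up where the line ended...
        let curve = UIBezierPath()
        curve.move(to: CGPoint(x: rect.width / 3, y: rect.midY))
        curve.addCurve(to: CGPoint(x: 2 * rect.width / 3, y: rect.midY),
                       controlPoint1: CGPoint(x: rect.width / 2, y: 0),
                       controlPoint2: CGPoint(x: rect.width / 2, y: rect.height))
        curve.stroke()

        // ...and a filled shape picking up where the curve ended.
        let filled = UIBezierPath()
        filled.move(to: CGPoint(x: 2 * rect.width / 3, y: rect.midY))
        filled.addLine(to: CGPoint(x: rect.width, y: rect.midY))
        filled.addLine(to: CGPoint(x: rect.width, y: rect.height))
        filled.close()
        filled.fill()
    }
}

The accumulated result in the view is what matters, not how many path objects produced it.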
Your question is like asking, "Should I draw this circle with my right hand or my left hand, and should I draw it counter-clockwise or clockwise?" It doesn't matter. Once it's done, what will have been drawn is a circle; that is what's important.

Image-processing basics

I have to do some image processing but I don't know where to start. My problem is as follows:
I have a 2D fiber image (attached with this post), in which the fiber edges are denoted by white color and the inside of the fiber is black. I want to choose any black pixel inside the fiber and travel from it along the length of the fiber. This will involve comparing the contrast with the surrounding pixels and then travelling in the desired direction. My main aim is to find the length of the fiber.
So can someone please tell me at least where to start? I have made a rough algorithm in my mind on how to approach my problem, but I don't even know which software/library to use.
Regards
Adi
EDIT1 - Instead of OpenCV, I started using MATLAB since I found it much easier. I applied the Hough transform and then the houghpeaks function with the max number of peaks = 100 so that all fibers are included. After that I got the following image. How do I find the length now?
EDIT2 - I found a research article on how to calculate length using the Hough transform, but I'm not able to implement it in MATLAB. Someone please help.
If your images are all as clean as the one you posted, it's quite an easy problem.
The very first technique I'd try is using a Hough Transform to estimate the line parameters, and there is a good implementation of the algorithm in OpenCV. After you have them, you can estimate their length any way you want, based on whatever other constraints you have.
The problem is two-fold as I see it:
1) locate the start and end point from your starting position.
2) determine the length between the start and end points.
Since I don't know your input data, I assume it's pixel data with a 0..1 value for each pixel representing its "whiteness".
In order to find the end points I would do some kind of WALKER/AI that tries to walk in different directions, knowing the original position and the last traversed direction, then continuing along that route until the "forward arc" is all white. This assumes the fiber is somewhat straight (is it?).
Once you've got the start and end points you can input these into an A* path-finding algorithm and give black pixels a low cost and white pixels a very high one. Then find the shortest distance between the start and end point; that is the length of the fiber.
Kinda hard to give more detail since I have no idea what techniques you're going to use, and without some example input data.
Assumptions:
- This image can be considered a binary image where there are only 0s (black) and 1s (white).
- All the fibers are straight and their starting and ending points are on the borders.
- We can come up with a limit for the thickness of a fiber (the thickness of the white lines).
Under these assumptions:
Start scanning the image border (start from wherever you want, in whichever direction you want... just be consistent) until you encounter the first white pixel. At this point your program will understand that this is definitely a starting point. Knowing this, you gather all the white pixels until you reach a certain limit (or a threshold). The idea here is that, if there is a fiber, you will get the angle between the fiber and the border the starting point is on... of course the more pixels you get (the further in you get) the surer you will be in the end. This is the trickiest part.
After somehow ending up with a line... you need to calculate the angle (basic trigonometry). Since you know the starting point, the width/height of the image and the angle (or its cos/sin), you will have the exact coordinates of the end point. Be advised... "exact" is not quite the right word here, because we may (the thing is, we will) have calculation errors in the cos/sin values. So you need to hold the threshold as long as possible. Your end point will therefore not actually be a point, but rather an area indicating that the ending point is probably somewhere inside it. The rest is just simple maths.
Obviously you can put more detail into this method, like checking both white lines that make up the fiber and deciding which one is longer, or you can allow some margin for error since those lines will not be properly straight... this is where a conceptual thickness comes into play, etc.
Programming:
C# has nice stuff and is easy for you to use... I'll put some code here...
newBitmap = new Bitmap(openFileDialog1.FileName);
for (int x = 0; x < newBitmap.Width; x++)
{
    for (int y = 0; y < newBitmap.Height; y++)
    {
        Color originalColor = newBitmap.GetPixel(x, y); // gets the pixel value...
        // things go here...
    }
}
You'll get the image from an OpenFileDialog and load it into a Bitmap. Inside the nested for loop this code scans the image left-to-right; you can change this, however...
Since you know C++ and C, I would recommend OpenCV. It is open-source, so if you don't trust anyone like me, you won't have a problem ;). Also, if you want to use C# like @VictorS mentioned, I would use EmguCV, which is the C# equivalent of OpenCV. Tutorials for OpenCV are included, and those for EmguCV can be found on their website. Hope this helps!
Download and install the latest version of 3D Slicer.
Load your data and go to the package > EM Segmenter without Atlas.
Choose your anatomical tree with 2 different labels: the black one, which is what you're after, and the white edges.
Then choose the whole 2D image as your ROI and click on Segment.
Here is the result; I labeled the edges in green and the black area in white.
You can modify your tree and change the structures you define.
You can give more samples to your segmentation to make it more accurate.

Very Strange: Every other time an array is updated its values are screwed up

You can see the code there: http://jsfiddle.net/jocose/CkL5F/901/
(double click on the box and move your mouse)
NOTE: This is a simplified example that is part of a larger system. My ultimate goal is to manipulate individual vertices of a path.
Update: I crunched the numbers and the math actually appears to be correct. What I want to do is calculate the offset from each point to the mouse, and then move that point to the mouse's position plus the offset.
So if the mouse is at 224 and the point is at 103, then 224 - 103 = 121, and then I add: 121 + 224 = 345.
This creates the cycle of ups and downs that I am seeing. I don't know why this is stumping me so badly; any help would be much appreciated.
I need to manually update a Raphael path element.
To do this I convert an absolute path into an array using Raphael's great built-in function "parsePathString".
I then loop through that array and modify the values based off the mouse position.
The update is done to the X values only, and is in real time; called each time the mouse moves.
When the element moves it flickers back and forth between the correct position and some anomalous one.
I have no clue why its doing this. I have spent almost 5 hours trying to figure this out and I'm officially stuck.
Here is a sample of the result where you can see the values jumping around:
MOUSE224
M,103.676287
MOUSE225
M,346.323713
MOUSE227
M,107.676287
MOUSE228
M,348.323713
MOUSE228
M,107.676287
MOUSE229
M,350.323713
MOUSE231
M,111.676287
MOUSE232
M,352.323713
MOUSE233
M,113.676287
MOUSE233
M,352.323713
Here's my version of your fiddle modified to do what I think you need. At least, it seems to work. It's the same type of problem I had to fix for the Raphael 2 transformations here.
Basically, in your mousemove, I've changed mx to be a calculation of the offset between where your mouse is now and where it was the last time mousemove was called. Your move() function now only has to add this value to the x-coords.
Hope this helps you out somewhat
