How to get Point.X at a Point.Y on a FlattenedPathGeometry - WPF

I have a "FlattenedPathGeometry" and I want to be able to get a specific point.X from the path based on a specific Point.Y
Basically I just need the X value at any given Y.
Thanks in advance for any help.

GetFlattenedPathGeometry gives you back a polygonal approximation, so basically you have to loop over all the points and calculate the minimum distance to your point.
If you can make any assumptions about the geometry's shape or your point, you can speed up the search.
For example, if the path is very long, you can speed things up by intersecting the shape with a circle/square centered on your point. This limits the number of points of the shape to test, but be careful: the intersection method is very expensive. You'll have to measure the performance with a stopwatch to understand what's better in your case.
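As a rough sketch of that loop, assuming the geometry came from GetFlattenedPathGeometry (so its figures are made of PolyLineSegments), you can walk the polyline edges and interpolate X wherever an edge crosses the requested Y. A path can cross a horizontal line more than once, so this returns every crossing:

using System;
using System.Collections.Generic;
using System.Windows;
using System.Windows.Media;

static List<double> GetXsAtY(PathGeometry flattened, double y)
{
    var result = new List<double>();
    foreach (PathFigure figure in flattened.Figures)
    {
        Point prev = figure.StartPoint;
        foreach (PathSegment segment in figure.Segments)
        {
            // Flattened geometries typically contain PolyLineSegments.
            if (segment is PolyLineSegment poly)
            {
                foreach (Point cur in poly.Points)
                {
                    // Does the edge prev->cur straddle the requested Y?
                    if ((prev.Y - y) * (cur.Y - y) <= 0 && prev.Y != cur.Y)
                    {
                        double t = (y - prev.Y) / (cur.Y - prev.Y);
                        result.Add(prev.X + t * (cur.X - prev.X));
                    }
                    prev = cur;
                }
            }
        }
    }
    return result;
}

Call it as GetXsAtY(myGeometry.GetFlattenedPathGeometry(), y) and pick whichever crossing you need.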

Related

Uniform random sampling of CIELUV for RGB colors

Selecting a random color on a computer is a touch harder than I thought it would be.
The naive way of uniform random sampling of 0..255 for R,G,B will tend to draw lots of similar greens. It would make sense to sample from a perceptually uniform space like CIELUV.
A simple way to do this is to sample L,u,v on a regular mesh and ensure the color solid is contained in the bounds (I've seen different bounds for this). If the sample falls outside the embedded RGB solid (tested by mapping it to XYZ and then RGB), reject it and sample again. You can settle for a kludgy-but-guaranteed-to-terminate "bailout" selection (like the naive procedure) if you reject more than some arbitrary threshold number of times.
Testing whether the sample lies within RGB needs to handle the special case of black (some implementations are silent on the divide by zero), I believe. If L=0 and either u!=0 or v!=0, then the sample needs to be rejected, or else you would end up oversampling the L=0 plane in Luv space.
Does this procedure have an obvious flaw? It seems to work but I did notice that I was rolling black more often than I thought made sense until I thought about what was happening in that case. Can anyone point me to the right bounds on the CIELUV grid to ensure that I am enclosing the RGB solid?
A useful reference for those who don't know it:
https://www.easyrgb.com/en/math.php
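As a sketch, the rejection procedure described above might look like this in C#, assuming a D65 white point and the standard Luv-to-XYZ-to-sRGB formulas (as on the easyrgb page); the bounding box values here are illustrative, not authoritative:

using System;

class LuvRejectionSampler
{
    static readonly Random Rng = new Random();

    // D65 reference white in XYZ, and its u', v' chromaticity.
    const double Xn = 0.95047, Yn = 1.0, Zn = 1.08883;
    static readonly double UnP = 4 * Xn / (Xn + 15 * Yn + 3 * Zn);
    static readonly double VnP = 9 * Yn / (Xn + 15 * Yn + 3 * Zn);

    // L*u*v* -> linear sRGB (no clamping; out-of-range means out of gamut).
    static (double R, double G, double B) LuvToLinearRgb(double L, double u, double v)
    {
        if (L <= 0) return (0, 0, 0); // black; only valid when u = v = 0
        double uP = u / (13 * L) + UnP;
        double vP = v / (13 * L) + VnP;
        double Y = L > 8 ? Yn * Math.Pow((L + 16) / 116.0, 3) : Yn * L / 903.3;
        double X = Y * 9 * uP / (4 * vP);
        double Z = Y * (12 - 3 * uP - 20 * vP) / (4 * vP);
        return (3.2406 * X - 1.5372 * Y - 0.4986 * Z,
               -0.9689 * X + 1.8758 * Y + 0.0415 * Z,
                0.0557 * X - 0.2040 * Y + 1.0570 * Z);
    }

    public static (double L, double u, double v) Sample(int maxTries = 1000)
    {
        for (int i = 0; i < maxTries; i++)
        {
            double L = Rng.NextDouble() * 100;
            double u = -84.0 + Rng.NextDouble() * 260.0;  // illustrative u* bounds
            double v = -132.5 + Rng.NextDouble() * 240.0; // illustrative v* bounds
            // Reject the L=0 plane unless u = v = 0, to avoid oversampling black.
            if (L == 0 && (u != 0 || v != 0)) continue;
            var (r, g, b) = LuvToLinearRgb(L, u, v);
            if (r >= 0 && r <= 1 && g >= 0 && g <= 1 && b >= 0 && b <= 1)
                return (L, u, v);
        }
        return (0, 0, 0); // kludgy-but-guaranteed-to-terminate bailout: black
    }
}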
The key problem with this is that you need bounds to reject samples that fall outside of RGB. I was able to find it worked out here (nice demo on page, API provides convenient functions):
https://www.hsluv.org/
A few things I noticed with uniform sampling of CIELUV in RGB:
most colors are green and purple (this is true independent of RGB bounds)
you have a hard time sampling what we think of as yellow (very small volume of high lightness, high chroma space)
I implemented various strategies that focus on sampling hues (which is really what we want when we think of "sampling colors") by weighting according to the maximum chroma at each lightness. This makes colors like chromatic light yellows easier to catch and avoids oversampling greens and purples. You can see these methods in action here (select "randomize colors"):
https://www.mysticsymbolic.art/
Source for color randomizers here:
https://github.com/mittimithai/mystic-symbolic/blob/chromacorners/lib/random-colors.ts
Okay, while you don't show the code you are using to generate the random numbers and then apply them to the CIELUV color space, I'm going to guess that you are creating a random number 0.0-100.0 from a random number generator, and then just assigning it to L*.
That will most likely give you a lot of black or very dark results.
Let Me Explain
L* of L*u*v* is not linear with respect to light. Y of CIEXYZ is linear with respect to light. L* is perceptual lightness, so a power curve is applied to Y to make it linear to perception but then non-linear with respect to light.
TRY THIS
To get L* with a random value 0—100:
Generate a random number between 0.0 and 1.0
Then apply an exponent of 0.42
Then multiply by 100 to get L*
Lstar = Math.pow(Math.random(), 0.42) * 100;
This takes your random number that represents light, and applies a power curve that emulates human lightness perception.
UV Color
As for the u and v values, you can probably just leave them as linear random numbers. Constrain u to about -84 and +176, and v to about -132.5 and +107.5
Urnd = (Math.random() - 0.3231) * 260; // about -84 to +176
Vrnd = (Math.random() - 0.5521) * 240; // about -132.5 to +107.5
Polar Color
It might be interesting to convert uv to LChLUV or LshLUV.
For hue, it's probably as simple as H = Math.random() * 360
For chroma constrained 0 to 178: C = Math.random() * 178
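A sketch of that polar variant in C# (mirroring the JavaScript one-liners above; the same sRGB gamut test as before would still be needed afterwards):

var rng = new Random();
double Lstar = Math.Pow(rng.NextDouble(), 0.42) * 100; // perceptual lightness, as above
double H = rng.NextDouble() * 360.0;                   // hue in degrees
double C = rng.NextDouble() * 178.0;                   // chroma
double u = C * Math.Cos(H * Math.PI / 180.0);          // convert back to u*, v*
double v = C * Math.Sin(H * Math.PI / 180.0);          // for the gamut test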
The next question is, should you find chroma? Or saturation? CIELUV can provide either Hue or Sat — but for directly generating random colors, it seems that chroma is a bit better.
And of course these simple examples do not prevent over-runs, so the color values need to be tested to see whether they are legal sRGB or not. There are a few things that can be done to constrain the generated values to legal colors, but the object here was to get you to a better distribution without excess black/dark results.
Please let me know of any questions.

how to understand steer force for steering behavior

I read a tutorial on how to implement the Seek behavior of steering behaviors. The link is here. The article includes a graph to illustrate the algorithm.
I know that velocity, force, and acceleration are all vectors. But how does "steering" in the formula "steering = desired_velocity - current_velocity" become a force rather than a velocity in this article? Why does this make sense? Does it mean that we can mix them in one calculation? Does that mean that subtracting one velocity vector from another can produce a force vector? If not, why is the result called a "force"? I know how steering behaviors work in AI. The key point is that we can sum all the different steering forces together to get a resulting total force. This total force can be used in the formula "a = F/m" to get the acceleration. After that, we can use this acceleration to calculate the new position and velocity of the object in the game loop update.
In my view, the "F" should be the steering force, but I'm stuck on understanding how to calculate it.
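For what it's worth, here is a minimal sketch of the usual seek update (a hypothetical Agent class with arbitrarily chosen limits), showing where the velocity difference is reinterpreted as the force F in a = F/m:

using System.Numerics;

class Agent
{
    public Vector2 Position, Velocity;
    public float MaxSpeed = 4f, MaxForce = 0.1f, Mass = 1f;

    public void Seek(Vector2 target, float dt)
    {
        // Desired velocity: straight toward the target at full speed.
        // (Assumes target != Position, otherwise Normalize yields NaN.)
        Vector2 desired = Vector2.Normalize(target - Position) * MaxSpeed;

        // The "steering" vector: a velocity difference, treated as a force.
        Vector2 steering = desired - Velocity;
        if (steering.Length() > MaxForce)
            steering = Vector2.Normalize(steering) * MaxForce;

        Vector2 acceleration = steering / Mass; // a = F/m
        Velocity += acceleration * dt;
        if (Velocity.Length() > MaxSpeed)
            Velocity = Vector2.Normalize(Velocity) * MaxSpeed;
        Position += Velocity * dt;
    }
}

The usual reading is that "steering = desired_velocity - current_velocity" is just a vector; the model chooses to feed it into a = F/m (often with mass = 1) as a modeling convention, not because subtracting velocities physically produces a force.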

WPF PathGeometry - Bounds are wrong?

I've got a fairly simple PathGeometry:
M567764.539,5956314.087L567815.077,5956179.775L567821.625,5956182.314L567773.425,5956311.248L567858.513,5956349.923L567950.858,5956392.466L567949.039,5956399.843L567942.252,5956396.685L567873.018,5956364.467L567799.816,5956330.421L567771.226,5956317.186L567764.539,5956314.087
Now when I query the PathGeometry.Bounds attribute for this data I get the following bounds:
567764.5625,5956180 567950.875,5956400
The expected bounds would be:
567764.539,5956179.775 567950.858,5956399.843
My main problem: the bounds are smaller than the geometry, so parts of the geometry might be outside the bounds.
I create the PathGeometry and show the bounds like this:
PathGeometry geo = PathGeometry.CreateFromGeometry(Geometry.Parse("M567764.539,5956314.087L567815.077,5956179.775L567821.625,5956182.314L567773.425,5956311.248L567858.513,5956349.923L567950.858,5956392.466L567949.039,5956399.843L567942.252,5956396.685L567873.018,5956364.467L567799.816,5956330.421L567771.226,5956317.186L567764.539,5956314.087"));
System.Diagnostics.Trace.WriteLine(geo.Bounds);
What am I doing wrong?
And, more important, how do I get the right bounds for a PathGeometry?
At some point, I would think WPF has to convert to single precision for rendering, and I wonder if the value of Bounds is based on the rendered result. In this case, you're probably seeing a precision limitation caused by the large numbers you're using. I noticed that your Y values were a factor of 10 larger than your X values, and coincidentally the error in Y was also a factor of 10 larger than the error in X.
If it's possible to subtract off the min X and Y before creating the PathGeometry, I think you'll get better numbers. Assuming you're displaying the PathGeometry, you could place it in a Canvas and apply Canvas.Left/Top to your values to get the right offset on screen. To get the correct bounds, you would then add the Top/Left offsets to the result of your Bounds.
Just a reminder that there's a bit of speculation in this answer. I haven't looked at the inner workings of Bounds, but the relative error seems to point to a conversion to and from floats.
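In code, the suggested workaround might look like this sketch: shift the parsed geometry toward the origin before reading Bounds, then add the offset back. The offsets are the data minima taken from your path string; whether this recovers full precision is untested:

using System.Windows;
using System.Windows.Media;

double offsetX = 567764.539, offsetY = 5956179.775; // data minima from the path

PathGeometry geo = PathGeometry.CreateFromGeometry(Geometry.Parse(
    "M567764.539,5956314.087L567815.077,5956179.775L567821.625,5956182.314L567773.425,5956311.248L567858.513,5956349.923L567950.858,5956392.466L567949.039,5956399.843L567942.252,5956396.685L567873.018,5956364.467L567799.816,5956330.421L567771.226,5956317.186L567764.539,5956314.087"));

// Shift toward the origin so Bounds is computed on small coordinates.
geo.Transform = new TranslateTransform(-offsetX, -offsetY);

Rect small = geo.Bounds; // bounds of the shifted geometry
Rect corrected = new Rect(small.X + offsetX, small.Y + offsetY,
                          small.Width, small.Height);
System.Diagnostics.Trace.WriteLine(corrected);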
I think you're seeing the imprecision due to the fact that the PathGeometry is made up of large floating-point numbers.
I'm not sure if you'll be able to obtain the precision that you need.
You will probably have to compare the bounds using an acceptable tolerance, like:
bool isMatch = (Math.Abs(MyPath.Bounds.X - ExpectedBounds.X) < TOLERANCE);
where you can set the TOLERANCE to 0.25 or something.

Chart optimization: More than million points

I have a custom control: a chart with a size of, for example, 300x300 pixels and more than one million points (maybe fewer) in it. It's clear that it currently works very slowly. I am searching for an algorithm that will show only a few points with minimal visual difference.
I have a link to a component which has exactly the functionality I need
(2 million points demo):
I will be grateful for any materials, links, or thoughts on how to implement such functionality.
If I understand your question correctly, you are looking to plot a graph of a dataset where you have ~1M points, but the chart's horizontal resolution is much smaller? If so, you can down-sample your dataset to roughly the number of available x values. If your data is sampled at equal intervals, you can extract every Nth point and plot it. Choose N such that the number of points is, say, double the resolution (in this case, N=2000 will give you 500 points to display).
If the intervals are very different from each other (not regularly spaced), you can approximate your graph with a polynomial, a spline, or any other method that fits, and then interpolate 300-600 points from that approximation.
EDIT:
Depending on the nature of the data, you may end up with aliasing artifacts when you simply sample every Nth point. There are probably better methods for coping with this problem, but again, it depends on what exactly you want to plot.
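A minimal sketch of the every-Nth-point decimation described above (the Downsample helper is hypothetical; keeping a min/max pair per bucket instead of a single sample is a common refinement against the aliasing just mentioned):

using System.Collections.Generic;
using System.Windows;

static Point[] Downsample(Point[] data, int targetCount)
{
    if (data.Length <= targetCount) return data;
    int n = data.Length / targetCount; // keep every n-th sample
    var result = new List<Point>(targetCount + 1);
    for (int i = 0; i < data.Length; i += n)
        result.Add(data[i]);
    return result.ToArray();
}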
You could always buy the control - it is for sale!
John-Daniel Trask (Co-founder of Mindscape ;-)

similarity between an image and its rotated version using SIFT

I have implemented SIFT in OpenCV for comparing images... I have not yet written the program for comparing; I am thinking of using FLANN for that. But my problem is that, looking at the 128 elements of the descriptor, I cannot really understand the similarity between an image and its rotated version.
From reading Lowe's paper, I do understand that the descriptor coordinates are all rotated in terms of the keypoint orientation... but how exactly is the similarity obtained? Can we understand the similarity by just viewing the 128 values?
Please help me... this is for my project presentation.
You can first use Lowe's metric to compute some putative matches between the two images. The metric is that for any given descriptor de in image 1, find the distance to all descriptors de' in image 2. If the ratio of the closest distance to the second closest distance is below a threshold, then accept it.
After this, you can do RANSAC or other form of robust estimation or Hough Transform to check geometric consistency in terms of position, orientation, and scale of the keypoints that you accepted as putative matches.
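A library-free sketch of that ratio test, assuming each image's descriptors are given as arrays of 128 floats (distances are kept squared, so the threshold is compared as ratio squared):

using System;
using System.Collections.Generic;

static List<(int I, int J)> RatioMatches(float[][] des1, float[][] des2, double ratio = 0.8)
{
    var matches = new List<(int, int)>();
    for (int i = 0; i < des1.Length; i++)
    {
        double best = double.MaxValue, second = double.MaxValue;
        int bestJ = -1;
        for (int j = 0; j < des2.Length; j++)
        {
            double d = 0; // squared Euclidean distance between descriptors
            for (int k = 0; k < des1[i].Length; k++)
            {
                double diff = des1[i][k] - des2[j][k];
                d += diff * diff;
            }
            if (d < best) { second = best; best = d; bestJ = j; }
            else if (d < second) { second = d; }
        }
        // Accept only if clearly better than the runner-up (Lowe's criterion).
        if (bestJ >= 0 && best < ratio * ratio * second)
            matches.Add((i, bestJ));
    }
    return matches;
}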
If I recall correctly, SIFT will give you a set of 128-value descriptors that describe each of the interest points. You also have the location of each point in each of the images, as well as its "direction" (I forget what the "direction" is called in the paper) and scale in each image.
Once you've found two points that have matching descriptors, you can calculate the transformation from the interest point in one image to the same point in the other image by comparing coordinates and directions.
If you have enough matches, you see if all (or a majority of) the interest points have the same transformation. If they do, the images are similar, if they don't, the images are different.
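As a rough sketch of that majority check, in the spirit of Lowe's Hough-style voting (the Match shape and bin sizes here are arbitrary assumptions): each match votes for its relative rotation and scale, and if one bin dominates, the images are consistently related.

using System;
using System.Collections.Generic;

record Match(double Angle1, double Scale1, double Angle2, double Scale2);

static bool MajorityConsistent(List<Match> matches, double minFraction = 0.5)
{
    var votes = new Dictionary<(int, int), int>();
    foreach (var m in matches)
    {
        double dAngle = m.Angle2 - m.Angle1;
        dAngle = Math.Atan2(Math.Sin(dAngle), Math.Cos(dAngle)); // wrap to [-pi, pi]
        double dLogScale = Math.Log(m.Scale2 / m.Scale1);
        var bin = ((int)Math.Round(dAngle / (Math.PI / 12.0)),   // 15-degree bins
                   (int)Math.Round(dLogScale / 0.25));           // coarse scale bins
        votes[bin] = votes.TryGetValue(bin, out int c) ? c + 1 : 1;
    }
    int best = 0;
    foreach (int c in votes.Values) best = Math.Max(best, c);
    return matches.Count > 0 && best >= matches.Count * minFraction;
}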
Hope this helps...
What you are looking for is basically ASIFT.
You can find the code here, along with some overview.
