Imprecise rendering of huge WPF visuals - any solutions?

When rendering huge visuals in WPF, the visual gets more and more distorted as the coordinates increase. I assume it has something to do with the floating-point types used in the render pipeline, but I'm not completely sure. Either way, I'm looking for a practical way to solve the problem.
To demonstrate what I'm talking about, I created a sample application which just contains a custom control embedded in a ScrollViewer that draws a sine curve.
You can see here that the drawing is fine for double values <= 2^24 (in this case the horizontal coordinate value), but from that point on it gets distorted.
The distortion gets worse at 2^25, and it keeps increasing with every additional bit until the control draws nothing but vertical lines.
For performance reasons I draw only the visible part of the graph, but for layout reasons I cannot "virtualize" the control, which would make this problem moot. The only solution I could come up with is to draw the visible part of the graph to a bitmap and then render the bitmap at the appropriate point - but there I run into the same precision problem with big values, since I cannot accurately place the bitmap at the position where I need it.
Does anybody have an idea how to solve this?

It is not WPF's fault.
Floating point numbers get less and less precise the farther from zero they are - that is the cost of stuffing an enormous data range (-Inf, +Inf) into 32 (float) / 64 (double) bits of data space. A float can no longer represent every integer once values exceed 2^24.
64-bit integers have constant spacing (1), but a limited range of −9,223,372,036,854,775,808 to +9,223,372,036,854,775,807.
You may also consider the Decimal type (which, however, also has a limited value range).
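To see that growing spacing directly, here is a small check (C-style, assuming an IEEE-754 platform) that prints the gap between a value and the next representable one:

#include <math.h>
#include <stdio.h>

int main() {
    /* The gap between adjacent floats grows with the magnitude. */
    float f[] = { 1.0f, 16777216.0f /* 2^24 */, 1073741824.0f /* 2^30 */ };
    for (int i = 0; i < 3; i++)
        printf("float  spacing at %.0f is %g\n",
               f[i], nextafterf(f[i], INFINITY) - f[i]);  /* ~1.2e-07, 2, 128 */

    /* Doubles follow the same pattern, just much later. */
    double d = 9007199254740992.0;  /* 2^53 */
    printf("double spacing at %.0f is %g\n",
           d, nextafter(d, INFINITY) - d);  /* 2 */
    return 0;
}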

(Update: oh, I didn't see how old this post was... I guess I clicked the wrong filter button on Stack Overflow...)
The relative precision is what matters here, so just saying "2^24 is fine and 2^25 is not" is not enough information. You said it is a sine, so I assume the y-axis minimum and maximum never change between those pictures; the y-axis therefore doesn't matter. The x-step size presumably stays the same as well, but you did not tell us the sine's period length or the x-step size you chose, and that is the relevant part: the relative precision of the x-steps gets worse at higher x-values, because the step size becomes too small relative to the x-value itself.
Precision of the C# floating-point types:
https://learn.microsoft.com/de-de/dotnet/csharp/language-reference/builtin-types/floating-point-numeric-types
example:
x-step size = 1.
x = 1 (no problem)
x = 1000 (no problem)
x > 2^24 (32-bit float starts to have problems with step size = 1, since not every integer above 2^24 is representable; 64-bit has no problems yet)
x > 2^53 (64-bit double starts to have problems with step size = 1)
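The same example, checked in code (IEEE-754 assumed):

#include <stdio.h>

int main() {
    /* Step size 1 near 2^24: the increment is simply lost in a float. */
    float xf = 16777216.0f;          /* 2^24 */
    printf("float:  x + 1 == x ? %d\n", (xf + 1.0f) == xf);  /* prints 1 */

    /* The same happens for a double near 2^53. */
    double xd = 9007199254740992.0;  /* 2^53 */
    printf("double: x + 1 == x ? %d\n", (xd + 1.0) == xd);   /* prints 1 */
    return 0;
}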

Related

How would I store a value from a constantly changing value and compare the stored one against the changing one in C?

So I'm trying to figure out how far in Y a player has fallen in a game, to then figure out how much damage should be taken. I'm doing this by trying to store the current y position of the player while they aren't on the ground, getting the new y position, then subtracting the old position from the new one. The problem is that oldy is always set to the current y and I don't know how to keep them separate. I don't have a lot of experience with C so any help would be appreciated.
if (player->grounded == false) {
    player->blocksfallen = player->position.y - player->oldy;
    player->oldy = player->position.y;
} else {
    player->blocksfallen = 0;
}
The damage should depend on the speed at which the ground is hit, not on the height fallen, though there is of course a relation between the two.
The relation is not 1:1: consider somebody who starts falling with an initial downward speed, e.g. after being thrown downwards. Or the other direction: the fall might be slowed by something like a parachute.
In the code this translates to having a variable holding (at least the vertical component of) the speed. Increase it for acceleration, decrease it for any braking effects (more precisely phrased: increase it less for braking effects).
When you hit the ground, calculate damage based on the speed at that point.
Introducing that speed variable will also solve your problem in the absence of thrown characters and parachutes. It is then essentially an accumulator of acceleration, which is very close to an accumulator of distance fallen - just easier to update: you only need to increase it by the gravity-caused acceleration each turn/frame/animation step.
This means turning the calculation around: instead of trying to derive the speed/total height/damage from the previous and current heights, you calculate the current height from the previous height plus the speed. If you look at how you currently compute the current height from the previous one, you will notice you already do something similar there (I am fairly sure you do, though it is not visible in the shown code - something like height = height - some speed constant).
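A minimal sketch of that idea, reusing the style of the snippet in the question (vspeed, health, GRAVITY, SAFE_SPEED and damage_for_speed are made-up names, and the sign on y assumes y grows while falling, as in your code):

#define GRAVITY    0.05f  /* per-frame acceleration while airborne */
#define SAFE_SPEED 1.0f   /* fastest impact speed that causes no damage */

static float damage_for_speed(float v) {
    return (v - SAFE_SPEED) * 10.0f;  /* hypothetical damage curve */
}

void update_fall(Player *player) {
    if (!player->grounded) {
        player->vspeed += GRAVITY;            /* accumulate acceleration */
        player->position.y += player->vspeed; /* speed now drives the height */
    } else {
        if (player->vspeed > SAFE_SPEED)      /* landed: damage from speed */
            player->health -= damage_for_speed(player->vspeed);
        player->vspeed = 0.0f;
    }
}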

Uniform random sampling of CIELUV for RGB colors

Selecting a random color on a computer is a touch harder than I thought it would be.
The naive way of uniformly sampling 0..255 for each of R, G and B tends to draw lots of similar greens. It would make sense to sample from a perceptually uniform space like CIELUV instead.
A simple way to do this is to sample L, u, v uniformly within bounds chosen so that the whole color solid is contained in them (I've seen different bounds quoted for this). If the sample falls outside the embedded RGB solid (tested by mapping it to XYZ and then to RGB), reject it and sample again. You can settle for a kludgy-but-guaranteed-to-terminate "bailout" selection (like the naive procedure) if you reject more than some arbitrary threshold number of times.
The test for whether the sample lies within RGB needs to handle the special case of black (some implementations are silent on the divide by zero there), I believe: if L=0 and either u!=0 or v!=0, the sample needs to be rejected, or else you end up oversampling the L=0 plane of Luv space.
Does this procedure have an obvious flaw? It seems to work, but I did notice that I was rolling black more often than made sense to me, until I thought about what happens in that case. Can anyone point me to the right bounds on the CIELUV box to ensure that I am enclosing the RGB solid?
A useful reference for those who don't know it:
https://www.easyrgb.com/en/math.php
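For concreteness, here is a minimal sketch of the procedure I'm describing (C-style; the D65 white point, the standard sRGB matrix, and box bounds that may not be tight are all assumptions):

#include <math.h>
#include <stdlib.h>

#define UN 0.19783  /* D65 white point in u'v' chromaticity */
#define VN 0.46832

static double frand(double lo, double hi) {
    return lo + (hi - lo) * (rand() / (double)RAND_MAX);
}

/* Map L,u,v to linear sRGB and test whether all channels land in [0,1].
   Degenerate samples (v' near zero) go out of range and fail the test. */
static bool luv_in_rgb(double L, double u, double v) {
    if (L <= 0.0)                     /* black special case: accept only  */
        return u == 0.0 && v == 0.0;  /* true black, else L=0 oversamples */

    double up = u / (13.0 * L) + UN;
    double vp = v / (13.0 * L) + VN;
    double Y = (L > 8.0) ? pow((L + 16.0) / 116.0, 3.0) : L / 903.3;
    double X = Y * 9.0 * up / (4.0 * vp);
    double Z = Y * (12.0 - 3.0 * up - 20.0 * vp) / (4.0 * vp);

    /* XYZ -> linear sRGB; no gamma step is needed just to test the gamut. */
    double R =  3.2406 * X - 1.5372 * Y - 0.4986 * Z;
    double G = -0.9689 * X + 1.8758 * Y + 0.0415 * Z;
    double B =  0.0557 * X - 0.2040 * Y + 1.0570 * Z;
    return R >= 0 && R <= 1 && G >= 0 && G <= 1 && B >= 0 && B <= 1;
}

void sample_luv(double *L, double *u, double *v) {
    for (int tries = 0; tries < 1000; tries++) {
        *L = frand(0.0, 100.0);
        *u = frand(-84.0, 176.0);    /* approximate box; the question */
        *v = frand(-132.5, 107.5);   /* is what the tight bounds are  */
        if (luv_in_rgb(*L, *u, *v)) return;
    }
    /* bailout: fall back to the naive RGB selection (not shown) */
}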
The key problem with this is that you need bounds for rejecting samples that fall outside of RGB. I found this worked out here (nice demo on the page, and the API provides convenient functions):
https://www.hsluv.org/
A few things I noticed with uniform sampling of CIELUV in RGB:
- most colors are green and purple (this is true independent of RGB bounds)
- you have a hard time sampling what we think of as yellow (a very small volume of high-lightness, high-chroma space)
I implemented various strategies that focus on sampling hues (which is really what we want when we think of "sampling colors") by weighting according to the maximum chroma at each lightness. This makes colors like chromatic light yellows easier to catch and avoids oversampling greens and purples. You can see these methods in action here (select "randomize colors"):
https://www.mysticsymbolic.art/
Source for color randomizers here:
https://github.com/mittimithai/mystic-symbolic/blob/chromacorners/lib/random-colors.ts
Okay - you don't show the code you're using to generate the random numbers and apply them to the CIELUV color space, so I'm going to guess that you are creating a random number 0.0-100.0 from a random number generator and just assigning it to L*.
That will most likely give you a lot of black or very dark results.
Let Me Explain
L* of L*u*v* is not linear with respect to light. Y of CIEXYZ is linear with respect to light. L* is perceptual lightness, so a power curve is applied to Y to make it linear to perception, but then non-linear with respect to light.
TRY THIS
To get L* with a random value 0—100:
Generate a random number between 0.0 and 1.0
Then apply an exponent of 0.42
Then multiply by 100 to get L*
Lstar = Math.pow(Math.random(), 0.42) * 100;
This takes your random number, which represents light, and applies a power curve that emulates human lightness perception.
UV Color
As for the u and v values, you can probably just leave them as linear random numbers. Constrain u to about -84 and +176, and v to about -132.5 and +107.5
Urnd = (Math.random() - 0.3231) * 260;
Vrnd = (Math.random() - 0.5521) * 240;
Polar Color
It might be interesting to convert uv to LChLUV or LshLUV.
For hue, it's probably as simple as H = Math.random() * 360
For chroma constrained to 0-178: C = Math.random() * 178
The next question is: should you use chroma, or saturation? CIELUV can provide either Hue or Sat - but for directly generating random colors, chroma seems a bit better.
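Getting from those polar values back to u and v is the usual polar-to-Cartesian step (a C-style sketch; chroma is the radius, hue the angle):

#include <math.h>

void lch_to_uv(double C, double h_deg, double *u, double *v) {
    double h = h_deg * 3.14159265358979323846 / 180.0;  /* degrees -> radians */
    *u = C * cos(h);
    *v = C * sin(h);
}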
And of course these simple examples do not prevent over-runs, so the color values need to be tested to see whether they are legal sRGB or not. There are a few things that can be done to constrain the generated values to legal colors, but the object here was to get you to a better distribution without excess black/dark results.
Please let me know of any questions.

Making OpenGL polygonOffset and DirectX9 depthBias behave the same

For our multiplatform engine that supports both OpenGL and DirectX9 I am adding support for decals. In OpenGL I can set glPolygonOffset(-1.0f, -1.0f) to fix z-fighting between the wall and the decals. I want the DirectX version to behave exactly the same, so I call this:
float offsetFloat = -1.0f;
DWORD offsetDWord = *((DWORD*)&offsetFloat);
device->SetRenderState(D3DRS_DEPTHBIAS, offsetDWord);
device->SetRenderState(D3DRS_SLOPESCALEDEPTHBIAS, offsetDWord);
However, this gives me an extremely large depth bias. It seems I need to use extremely small values in DirectX9. However, I can't seem to find how small.
I noticed that in the OGRE engine's source they divide by 250000, but despite the comment I don't quite see where that number comes from. Also, for some reason they only divide the constant bias by it:
// D3D also expresses the constant bias as an absolute value, rather than
// relative to minimum depth unit, so scale to fit
constantBias = -constantBias / 250000.0f;
__SetRenderState(D3DRS_DEPTHBIAS, FLOAT2DWORD(constantBias));
slopeScaleBias = -slopeScaleBias;
__SetRenderState(D3DRS_SLOPESCALEDEPTHBIAS, FLOAT2DWORD(slopeScaleBias));
So my question: what do I need to pass to DirectX9 to get the exact same result as glPolygonOffset?
I haven't found an exact number anywhere, but by experimenting I have figured out that to get roughly the same effect in OpenGL and DirectX, I need to divide by 3500000, instead of the 250000 mentioned above.
If anyone knows the exact number or why it's this, I'd love to hear that, but for practical purposes I think this conclusion will do for me.
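In code, what I ended up with boils down to this (mirroring the OGRE snippet, but taking the same values I would pass to glPolygonOffset and using my empirical divisor):

/* Converts glPolygonOffset-style (factor, units) for D3D9.
   The 3500000 divisor was found by eye, not taken from any spec. */
void SetDepthBiasGLStyle(IDirect3DDevice9 *device, float factor, float units)
{
    float constant = units / 3500000.0f;  /* GL units -> D3D absolute bias */
    device->SetRenderState(D3DRS_DEPTHBIAS, *((DWORD *)&constant));
    device->SetRenderState(D3DRS_SLOPESCALEDEPTHBIAS, *((DWORD *)&factor));
}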
If you look at the equations for OpenGL polygon offset and the equivalent Direct3D 9 render states, you'll find that they're identical, except that the OpenGL one includes an implementation-dependent constant. If we assume that constant is 1, the equations do become identical.
The obvious problem here is: what happens when the value is not 1?
Unfortunately OpenGL doesn't seem to provide a way of querying it, so the only thing you can do is twiddle parameters until you find something that works, then twiddle them some more as edge cases arise, and never assume that what works for you will work for anyone else.
Direct3D's assumption that the value is always 1 may not necessarily hold good all the time either.
Bottom line is that polygon offset is not a 100% robust method for fixing z-fighting in all circumstances. Have you tried other methods, such as pushing your decals out an epsilon along the surface normal?
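That alternative is cheap to prototype: offset each decal vertex slightly along the wall's normal when building the decal mesh (a sketch; the Vec3 type and the epsilon value are assumptions):

/* Move a decal vertex off the wall so it never shares depth with it.
   The epsilon needs tuning to the scene scale and view distances. */
Vec3 PushOutAlongNormal(const Vec3 &position, const Vec3 &wallNormal,
                        float epsilon = 0.001f)
{
    return position + wallNormal * epsilon;
}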

WPF PathGeometry - Bounds are wrong?

I've got a fairly simple PathGeometry:
M567764.539,5956314.087L567815.077,5956179.775L567821.625,5956182.314L567773.425,5956311.248L567858.513,5956349.923L567950.858,5956392.466L567949.039,5956399.843L567942.252,5956396.685L567873.018,5956364.467L567799.816,5956330.421L567771.226,5956317.186L567764.539,5956314.087
Now when I query the PathGeometry.Bounds property for this data I get the following bounds:
567764.5625,5956180 567950.875,5956400
The expected bounds would be:
567764.539,5956179.775 567950.858,5956399.843
My main problem: the bounds are smaller than the geometry, so parts of the geometry might be outside the bounds.
I create the PathGeometry and show the bounds like this:
PathGeometry geo = PathGeometry.CreateFromGeometry(Geometry.Parse("M567764.539,5956314.087L567815.077,5956179.775L567821.625,5956182.314L567773.425,5956311.248L567858.513,5956349.923L567950.858,5956392.466L567949.039,5956399.843L567942.252,5956396.685L567873.018,5956364.467L567799.816,5956330.421L567771.226,5956317.186L567764.539,5956314.087"));
System.Diagnostics.Trace.WriteLine(geo.Bounds);
What am I doing wrong?
And, more importantly, how do I get the right bounds for a PathGeometry?
At some point, I would think, WPF has to convert to single precision for rendering, and I wonder if the value of Bounds is based on the rendered result. In this case, you're probably seeing a precision limitation caused by the large numbers you're using. I noticed that your Y values are a factor of 10 larger than your X values, and coincidentally the error in Y is also roughly a factor of 10 larger than the error in X.
If it's possible to subtract off the min X and Y before creating the PathGeometry, I think you'll get better numbers. Assuming you're displaying the PathGeometry, you could place it in a Canvas and apply Canvas.Left/Top to your values to get the right offset on screen. To get the correct bounds, you would then add the Top/Left offsets to the result of your Bounds.
Just a reminder that there's a bit of speculation in this answer: I haven't looked at the inner workings of Bounds, but the relative error seems to point to a conversion to float and back.
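You can test that speculation directly: round-tripping the expected bounds through a 32-bit float reproduces the reported values exactly (a quick check, IEEE-754 single precision assumed):

#include <stdio.h>

int main() {
    printf("%.4f\n", (double)(float)567764.539);   /* 567764.5625 */
    printf("%.4f\n", (double)(float)5956179.775);  /* 5956180.0000 */
    printf("%.4f\n", (double)(float)567950.858);   /* 567950.8750 */
    printf("%.4f\n", (double)(float)5956399.843);  /* 5956400.0000 */
    return 0;
}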
I think you're seeing imprecision due to the fact that the PathGeometry is made up of large floating-point numbers.
I'm not sure if you'll be able to obtain the precision that you need.
You will probably have to compare the bounds using an acceptable tolerance, like:
bool isMatch = (Math.Abs(MyPath.Bounds.X - ExpectedBounds.X) < TOLERANCE);
where you can set the TOLERANCE to 0.25 or something.

Chart optimization: More than million points

I have a custom control - a chart sized, say, 300x300 pixels - with more than a million points in it (maybe fewer). It's clear that it now works very slowly. I am looking for an algorithm that shows only a few points with minimal visual difference.
I have a link to a component that has exactly the functionality I need (2 million points demo):
I will be grateful for any materials, links or thoughts on how to implement such functionality.
If I understand your question correctly, you are looking to plot a dataset of roughly a million points on a chart whose horizontal resolution is much smaller? If so, you can down-sample your dataset to approximately the number of available x values. If your data is sampled at equal intervals, you can extract every Nth point and plot it; a sketch follows below. Choose N such that the number of points is, say, double the resolution (in this case, N=2000 will give you 500 points to display).
If the intervals vary a lot (the data is not regularly spaced), you can approximate your graph with a polynomial, a spline or any other method that fits, and then interpolate 300-600 points from that approximation.
EDIT:
Depending on the nature of the data, you may see aliasing artifacts when you simply take every Nth point. There are better methods for coping with this, but again - it depends on what exactly you want to plot.
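For the equal-interval case, the decimation itself is only a few lines. A sketch (Point stands in for whatever your chart stores; buffer sizing is left to the caller):

/* Picks every Nth input point so that roughly `target` points remain.
   A common refinement is to emit the min and max of each stride-sized
   bucket instead, which preserves peaks and reduces aliasing. */
int downsample(const Point *in, int n, Point *out, int target) {
    int stride = n / target;
    if (stride < 1) stride = 1;
    int m = 0;
    for (int i = 0; i < n; i += stride)
        out[m++] = in[i];
    return m;
}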
You could always buy the control - it is for sale!
John-Daniel Trask (Co-founder of Mindscape ;-)
