finding proportion/ratio for clamping a font size given a min/ideal/max font size and min/current/max screen width - responsive-design

I am trying to create a clamping formula (the same logic as CSS clamp()) to make typography more responsive in PowerApps. I have a minimum, a maximum, and an ideal size that is a dynamic calculation, which gives us:
Max(min_, Min(ideal_, max_))
Now I am struggling to find that ideal value. In my case the screen width will never go below 360px, and for this example the min font size is 16px and the max is 40px, reached when the screen is large/extra large (meaning anything above 900px in our case).
How can I write a formula that calculates a value in between these two based on the current width of the screen? This has very little to do with PowerApps; it is more of a math and general responsive design question, I just don't know how to do it :D
I could guess compound proportion as in:
16 px f -> 360px w
x px f -> current px w
40 px f -> > 900 px w
Is this logic right? What do I do now? This might look obvious to you, so please try to guide me through it or point me to a video/link/article.
Thank you all.

For whoever is wondering about this: I think I found the answer.
Max(minsize_, Min(minsize_ + (maxsize_ - minsize_) * ((App.Width - App.MinScreenWidth) / (maxscreenwidth_ - App.MinScreenWidth)), maxsize_))
Taken and adapted from https://css-tricks.com/snippets/css/fluid-typography/
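For anyone who wants to see the same idea outside PowerApps, here is a minimal C# sketch of that fluid-size calculation (names and defaults are illustrative, not part of the original formula): the ideal size is a linear interpolation between the min and max sizes according to where the current width sits between the min and max widths, and the result is then clamped.
// Minimal sketch of the fluid font-size formula (C#); defaults mirror the 360px/900px, 16px/40px example.
double FluidFontSize(double width, double minWidth = 360, double maxWidth = 900, double minSize = 16, double maxSize = 40)
{
    // 0 at minWidth, 1 at maxWidth, proportional in between.
    double t = (width - minWidth) / (maxWidth - minWidth);
    double ideal = minSize + (maxSize - minSize) * t;
    // Clamp to [minSize, maxSize], i.e. Max(min, Min(ideal, max)).
    return Math.Max(minSize, Math.Min(ideal, maxSize));
}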

Related

Uniform random sampling of CIELUV for RGB colors

Selecting a random color on a computer is a touch harder than I thought it would be.
The naive way of uniform random sampling of 0..255 for R,G,B will tend to draw lots of similar greens. It would make sense to sample from a perceptually uniform space like CIELUV.
A simple way to do this is to sample L,u,v on a regular mesh and ensure the color solid is contained in the bounds (I've seen different bounds for this). If the sample falls outside the embedded RGB solid (tested by mapping it to XYZ and then to RGB), reject it and sample again. You can settle for a kludgy-but-guaranteed-to-terminate "bailout" selection (like the naive procedure) if you reject more than some arbitrary threshold number of times.
Testing whether the sample lies within RGB needs to handle the special case of black (some implementations end up being silent on the divide by zero), I believe. If L=0 and either u!=0 or v!=0, then the sample needs to be rejected, or else you would end up oversampling the L=0 plane in Luv space.
Does this procedure have an obvious flaw? It seems to work but I did notice that I was rolling black more often than I thought made sense until I thought about what was happening in that case. Can anyone point me to the right bounds on the CIELUV grid to ensure that I am enclosing the RGB solid?
A useful reference for those who don't know it:
https://www.easyrgb.com/en/math.php
The key problem with this is that you need bounds to reject samples that fall outside of RGB. I was able to find it worked out here (nice demo on page, API provides convenient functions):
https://www.hsluv.org/
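For readers who want the shape of the procedure rather than the exact bounds, here is a rough C# sketch of the reject-and-resample loop described above. The Luv bounds used here are rough placeholders (the whole question is what the right ones are), and LuvToRgb / NaiveRandomLuv are hypothetical helpers, not real library calls.
// Sketch of the rejection-sampling loop (C#). LuvToRgb is a hypothetical helper that maps
// CIELUV to sRGB (e.g. via XYZ, see easyrgb.com); the bounds below are placeholders.
static readonly Random Rng = new Random();

(double L, double u, double v) SampleLuvInsideRgb(int maxTries = 1000)
{
    for (int i = 0; i < maxTries; i++)
    {
        double L = Rng.NextDouble() * 100.0;
        double u = -134.0 + Rng.NextDouble() * (220.0 + 134.0);   // assumed u bounds
        double v = -140.0 + Rng.NextDouble() * (122.0 + 140.0);   // assumed v bounds

        // Special case: on the L = 0 plane only (0, 0, 0) is a real color (black).
        if (L == 0.0 && (u != 0.0 || v != 0.0)) continue;

        var (r, g, b) = LuvToRgb(L, u, v);                        // hypothetical conversion
        if (r >= 0 && r <= 1 && g >= 0 && g <= 1 && b >= 0 && b <= 1)
            return (L, u, v);
    }
    return NaiveRandomLuv();   // hypothetical bailout so the loop always terminates
}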
A few things I noticed with uniform sampling of CIELUV in RGB:
most colors are green and purple (this is true independent of RGB bounds)
you have a hard time sampling what we think of as yellow (very small volume of high lightness, high chroma space)
I implemented various strategies that focus on sampling hues (which is really what we want when we think of "sampling colors") by weighting according to the maximum chroma at each lightness. This makes colors like chromatic light yellows easier to catch and avoids oversampling greens and purples. You can see these methods in action here (select "randomize colors"):
https://www.mysticsymbolic.art/
Source for color randomizers here:
https://github.com/mittimithai/mystic-symbolic/blob/chromacorners/lib/random-colors.ts
Okay, while you don't show the code you are using to generate the random numbers and then apply them to the CIELUV color space, I'm going to guess that you are creating a random number 0.0-100.0 from a random number generator, and then just assigning it to L*.
That will most likely give you a lot of black or very dark results.
Let Me Explain
L* of L*u*v* is not linear with respect to light. Y of CIEXYZ is linear with respect to light. L* is perceptual lightness, so a power curve is applied to Y to make it linear to perception but then non-linear with respect to light.
TRY THIS
To get L* with a random value 0—100:
Generate a random number between 0.0 and 1.0
Then apply an exponent of 0.42
Then multiply by 100 to get L*
Lstar = Math.pow(Math.random(), 0.42) * 100;
This takes your random number, which represents light, and applies a power curve that emulates human lightness perception.
UV Color
As for the u and v values, you can probably just leave them as linear random numbers. Constrain u to about -84 and +176, and v to about -132.5 and +107.5
Urnd = (Math.random() - 0.3231) * 260;  // u in roughly [-84, +176]
Vrnd = (Math.random() - 0.5521) * 240;  // v in roughly [-132.5, +107.5]
Polar Color
It might be interesting to convert uv to polar form, i.e. LCh(uv) or Lsh(uv).
For hue, it's probably as simple as H = Math.random() * 360
For chroma constrained to 0—178: C = Math.random() * 178
The next question is, should you find chroma? Or saturation? CIELUV can provide either Hue or Sat — but for directly generating random colors, it seems that chroma is a bit better.
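Putting the polar pieces together, here is a small C# sketch of the approach described above (the 0.42 exponent and the 178 chroma ceiling come from this answer; LuvToRgb is again a hypothetical conversion helper and the result still needs a gamut check):
// Sketch: random color via perceptual lightness plus polar chroma/hue (C#).
var rng = new Random();

double L = Math.Pow(rng.NextDouble(), 0.42) * 100.0;   // perceptual lightness, as above
double C = rng.NextDouble() * 178.0;                   // chroma, constrained to 0..178
double H = rng.NextDouble() * 360.0;                   // hue in degrees

// Back to rectangular u*, v*, then to sRGB; reject or clamp if the result is out of gamut.
double u = C * Math.Cos(H * Math.PI / 180.0);
double v = C * Math.Sin(H * Math.PI / 180.0);
// var (r, g, b) = LuvToRgb(L, u, v);   // hypothetical conversion helper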
And of course these simple examples do not prevent over-runs, so the resulting color values need to be tested to see whether they are legal sRGB or not. There are a few things that can be done to constrain the generated values to legal colors, but the object here was to get you to a better distribution without excess black/dark results.
Please let me know of any questions.

Image-processing basics

I have to do some image processing but I don't know where to start. My problem is as follows:
I have a 2D fiber image (attached with this post), in which the fiber edges are denoted by white and the inside of the fiber is black. I want to choose any black pixel inside the fiber and travel from it along the length of the fiber. This will involve comparing the contrast with the surrounding pixels and then travelling in the desired direction. My main aim is to find the length of the fiber.
So can someone please tell me at least where to start? I have a rough algorithm in my mind for how to approach the problem, but I don't even know which software/library to use.
Regards
Adi
EDIT1 - Instead of OpenCV, I started using MATLAB since I found it much easier. I applied the Hough Transform and then Houghpeaks function with max no. of peaks = 100 so that all fibers are included. After that I got the following image. How do I find the length now?
EDIT2 - I found a research article on how to calculate the length using the Hough Transform, but I'm not able to implement it in MATLAB. Can someone please help?
If your images are all as clean as the one you posted, it's quite an easy problem.
The very first technique I'd try is using a Hough Transform to estimate the line parameters, and there is a good implementation of the algorithm in OpenCV. After you have them, you can estimate their length any way you want, based on whatever other constraints you have.
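On the length question specifically: if you use the probabilistic Hough variant you get the two endpoints of each detected segment, and the length is just the Euclidean distance between them. A minimal C# sketch (the endpoint values are assumed to come from whatever Hough implementation you end up using):
// Sketch: fiber length from the endpoints of a detected line segment (C#).
double SegmentLength(double x1, double y1, double x2, double y2)
{
    double dx = x2 - x1, dy = y2 - y1;
    return Math.Sqrt(dx * dx + dy * dy);   // length in pixels
}
// Multiply by the physical size of one pixel if you need the length in real units.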
Problem is two-fold as I see it:
1) locate start and end point from your starting position.
2) decide length between start and end points
Since I don't know your input data, I assume it's pixel data with a 0..1 value on each pixel representing its "whiteness".
In order to find the end points I would use some kind of WALKER/AI that tries to walk in different directions, knowing the original position and the last traversed direction, and then continues along that route until the "forward arc" is all white. This assumes the fiber is somewhat straight (is it?).
Once you have the start and end points you can feed them into an A* pathfinding algorithm, giving black pixels a low cost and white pixels a very high one. Then find the shortest path between the start and end points; that is the length of the fiber.
It's kinda hard to give more detail since I have no idea what techniques you're going to use, and without some example input data.
Assumptions:
-This image can be considered a binary image where there are only 0s (black) and 1s (white).
-All the fibers are straight and their starting and ending points are on the borders.
-We can come up with a limit for the fiber thickness (the thickness of the white lines).
Under these assumptions:
Start scanning the image border (start wherever you want, in whichever direction you want... just be consistent) until you encounter the first white pixel. At this point your program knows that this is definitely a starting point. Knowing this, gather white pixels until you reach a certain limit (or threshold). The idea is that if there is a fiber, you will get the angle between the fiber and the border the starting point is on... and of course the more pixels you gather (the further in you get), the surer you will be in the end. This is the trickiest part. After somehow ending up with a line, you need to calculate the angle (basic trigonometry). Since you know the starting point, the width/height of the image, and the angle (or its cos/sin), you have the exact coordinate of the end point. Be advised: the exactness here is not quite what you might expect, because we may (in fact we will) have calculation errors in the cos/sin values. So you need to hold the threshold as long as possible, and your end point will not actually be a point but rather an area indicating that the ending point is probably somewhere inside it. The rest is just simple maths.
Obviously you can add more detail to this method, like checking both of the white lines that make up the fiber and deciding which one is longer, or allowing some margin for error since those lines will not be perfectly straight... this is where a conceptual thickness comes into play, etc.
Programming:
C# has nice stuff for this and is easy to use... I'll put some code here...
Bitmap newBitmap = new Bitmap(openFileDialog1.FileName);   // load the image chosen in the dialog
for (int x = 0; x < newBitmap.Width; x++)
{
    for (int y = 0; y < newBitmap.Height; y++)
    {
        Color originalColor = newBitmap.GetPixel(x, y);   // gets the pixel value...
        // things go here...
    }
}
You'll get the image path from an OpenFileDialog and load it into a Bitmap. Inside the nested for loop this code scans the image left to right; however, you can change this...
Since you know C++ and C, I would recommend OpenCV. It is open-source, so if you don't trust anyone like me, you won't have a problem ;). Also, if you want to use C# like @VictorS mentioned, I would use EmguCV, which is the C# equivalent of OpenCV. Tutorials for OpenCV are included, and tutorials for EmguCV can be found on their website. Hope this helps!
Download and install the latest version of 3D Slicer.
Load your data and go to the package > EM Segmenter without Atlas.
Define your anatomical tree with 2 different labels: the black area, which is what you are after, and the white edges.
Then choose the whole 2D image as your ROI and click on Segment.
Here is the result; I labeled the edges in green and the black area in white.
You can modify your tree and change the structures you define.
You can give more samples to your segmentation to make it more accurate.

How to get Point.X at a Point.Y on a FlattenPathGeometry - WPF

I have a "FlattenedPathGeometry" and I want to be able to get a specific point.X from the path based on a specific Point.Y
Basically I just need the X value at any given Y.
Thanks in advance for any help.
GetFlattenedPathGeometry gives you back a polygonal approximation, so basically you have to loop over all the points and calculate the minimum distance to your point.
If you can make any assumptions about the geometry's shape or your point, you can speed up the search.
For example, if the path is very long, you can speed things up by intersecting the shape with a circle/square centered on your point. This limits the number of points of the shape to test, but be careful: the intersection method is very expensive. You'll have to measure the performance with a Stopwatch to understand what's better in your case.
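For the specific "X at a given Y" case, one straightforward option is to walk the flattened figure's polyline and linearly interpolate on the first segment that crosses that Y. A rough C# sketch, assuming the flattened geometry consists of figures made of PolyLineSegments (a path can of course cross a given Y more than once; this returns only the first hit):
// Sketch: find an X on a flattened geometry at a given Y (WPF).
double? GetXAtY(PathGeometry flattened, double y)
{
    foreach (PathFigure figure in flattened.Figures)
    {
        Point prev = figure.StartPoint;
        foreach (PathSegment segment in figure.Segments)
        {
            if (segment is PolyLineSegment poly)
            {
                foreach (Point p in poly.Points)
                {
                    // Does this segment span the requested Y?
                    if ((prev.Y <= y && p.Y >= y) || (prev.Y >= y && p.Y <= y))
                    {
                        double dy = p.Y - prev.Y;
                        if (Math.Abs(dy) < 1e-9) return prev.X;        // (nearly) horizontal segment
                        double t = (y - prev.Y) / dy;
                        return prev.X + t * (p.X - prev.X);            // linear interpolation
                    }
                    prev = p;
                }
            }
        }
    }
    return null;   // no segment crosses this Y
}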

Chart optimization: More than million points

I have a custom control: a chart of, for example, 300x300 pixels with more than one million points (maybe fewer) in it. It's clear that it currently works very slowly. I am searching for an algorithm that will show only a few points with minimal visual difference.
I have a link to a component that has exactly the functionality I need
(2 million points demo):
I will be grateful for any materials, links or thoughts on how to implement such functionality.
If I understand your question correctly, you are looking to plot a graph of a dataset where you have ~1M points, but the chart's horizontal resolution is much smaller? If so, you can down-sample your dataset to get about the number of available x values. If your data is sorted in equal intervals, you can extract every Nth point and plot it. Choose N such that the number of points is, say, double the resolution (in this case, N=2000 will give you 500 points to display).
If the intervals are very different from each other (not regularly spaced), you can approximate your graph with a polynomial, a spline, or any other method that fits, and then interpolate 300-600 points from that approximation.
EDIT:
Depending on the nature of the data, you may end up with aliasing artifacts when you simply sample every Nth point. There are probably better methods for coping with this problem, but again, it depends on what exactly you want to plot.
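For reference, a minimal C# sketch of the every-Nth-point decimation described above (only the shape of the idea; an aliasing-aware method such as keeping the min/max per bucket would replace the body of the loop):
// Sketch: down-sample an evenly spaced series by taking every Nth point (C#).
double[] Decimate(double[] values, int targetCount)
{
    if (values.Length <= targetCount) return values;
    int n = values.Length / targetCount;        // e.g. 1,000,000 / 500 -> N = 2000
    var result = new double[targetCount];
    for (int i = 0; i < targetCount; i++)
        result[i] = values[i * n];
    return result;
}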
You could always buy the control - it is for sale!
John-Daniel Trask (Co-founder of Mindscape ;-)

Unprecise rendering of huge WPF visuals - any solutions?

When rendering huge visuals in WPF, the visual gets more and more distorted as the coordinates increase. I assume that it has something to do with the floating point data types used in the render pipeline, but I'm not completely sure. Either way, I'm searching for a practical solution to the problem.
To demonstrate what I'm talking about, I created a sample application which just contains a custom control embedded in a ScrollViewer that draws a sine curve.
You can see here that the drawing is alright for double values <= 2^24 (in this case the horizontal coordinate value), but from that point on it gets distorted.
The distortion gets worse at 2^25 and so the distortion continues to increase with every additional bit until it just draws some vertical lines.
For performance reasons I'm only drawing the visible part of the graph, but for layout reasons I cannot "virtualize" the control, which would make this problem obsolete. The only solution I could come up with is to draw the visible part of the graph to a bitmap and then render the bitmap at the appropriate point; but there I have the precision problem again with big values, as I cannot accurately place the bitmap at the position where I need it.
Does anybody have an idea how to solve this?
It is not WPF's fault.
Floating point numbers get less and less precise the farther from zero they are; this is the cost of stuffing an enormous data range (-Inf, +Inf) into 32 (float) / 64 (double) bits of data space. Floats actually become less precise than integers at around 2^24, because above that not every integer is representable.
64-bit integers have constant spacing (1), but have a limited range of −9,223,372,036,854,775,808 to +9,223,372,036,854,775,807.
You may also consider using the Decimal type (which, however, also has a limited value range).
(Update: oh, I didn't see how old this post was... I guess I clicked the wrong filter button on Stack Overflow...)
The relative precision is what matters here. Just saying "look, 2^24 is fine and 2^25 is not" is not enough information. You said it is a sine, so I guess the y-axis max and min never change between those pictures; the y-axis therefore doesn't matter. Furthermore, the x step size stays the same, I guess? But you did not tell us the sine's period length or the x step size you chose, and that is relevant here. The relative precision of the x steps gets worse as you go to higher x values, because the x step size becomes too small relative to the x value itself.
Precision of the C# floating point types:
https://learn.microsoft.com/de-de/dotnet/csharp/language-reference/builtin-types/floating-point-numeric-types
example:
x step size = 1.
x = 1 (no problem)
x = 1000 (no problem)
x > 2^24 (a 32-bit float starts having problems with step size = 1; 64-bit has no problems yet)
x > 2^53 (a 64-bit double starts having problems with step size = 1)
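A tiny C# check that makes these thresholds visible (2^24 for float, 2^53 for double):
// Demonstrates where a step of 1 can no longer be resolved.
float a = 16_777_216f;                      // 2^24, still exactly representable as float
Console.WriteLine(a + 1f == a);             // True: the +1 is rounded away
double c = 9_007_199_254_740_992.0;         // 2^53, the analogous limit for double
Console.WriteLine(c + 1.0 == c);            // True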
