How does the coefficient of friction apply between a pair of materials in SceneKit?

I am learning SceneKit, and in the physics section there is an option to set the friction of any object.
I have two questions regarding that:
What kind of friction is it (static or kinetic)?
Given the coefficient of friction for a system of two objects, how can I assign values to the friction property in SceneKit so that it matches real-world physics?
For example, in a bowling game the coefficient of friction between the ball and the lane is 0.12. What values should I assign to the ball and the lane in SceneKit, given that there is no option for setting static and dynamic friction separately, nor for setting friction between a specific pair of objects?
I can assign the ball a friction value between 0 and 1, but what does it represent? According to physics, friction is only defined between a pair of objects, not for a single object.
Edit 1
I understand that I have to set values by trial and error, but I want to know how SceneKit resolves these values.
For example, Unity offers four combine modes between two materials (Average, Minimum, Maximum, Multiply).
Consider two objects A and B with friction values 0.1 and 0.2. Then:
μAB = 0.1 (minimum)
μAB = 0.2 (maximum)
μAB = 0.15 (average)
μAB = 0.02 (multiply)
Please see this link https://docs.unity3d.com/Manual/class-PhysicMaterial.html
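For reference, Unity's four combine modes are just simple arithmetic on the two per-material values. Here is a small illustrative sketch (plain Python; whether SceneKit uses any of these modes internally is exactly the open question):

def combine_friction(mu_a, mu_b, mode):
    # Unity-style friction combine modes (illustrative only, not a SceneKit API)
    return {
        "minimum": min(mu_a, mu_b),
        "maximum": max(mu_a, mu_b),
        "average": (mu_a + mu_b) / 2,
        "multiply": mu_a * mu_b,
    }[mode]

for mode in ("minimum", "maximum", "average", "multiply"):
    print(mode, combine_friction(0.1, 0.2, mode))  # 0.1, 0.2, 0.15, 0.02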

Try using a static physics body for the lane (walls and floors) and a dynamic body for the ball. You can still set the other properties for each. There is a lot more going on, though: mass, gravity, rolling friction, damping, etc.
Friction is assigned to each object itself and is calculated when the objects collide or roll/rub against each other.
Sorry, I don't know the exact numbers to use. My best guess is that since both are smooth surfaces, the two values should be pretty close to the same. There are a number of things you have to be concerned with, so I'd start with the defaults and adjust from there. After a bunch of trial and error, what I did was to overlay some UIKit controls on top of the SceneKit view to adjust the physics body properties on the fly, repeat the movements I wanted to test, and then adjust the properties until I got the behavior I wanted.

Related

Uniform random sampling of CIELUV for RGB colors

Selecting a random color on a computer is a touch harder than I thought it would be.
The naive way, uniformly sampling 0..255 for R, G, B, will tend to draw lots of similar greens. It would make sense to sample from a perceptually uniform space like CIELUV instead.
A simple way to do this is to sample L, u, v uniformly within bounds chosen so that the color solid is contained in them (I've seen different bounds quoted for this). If the sample falls outside the embedded RGB solid (tested by mapping it to XYZ and then to RGB), reject it and sample again. You can settle for a kludgy-but-guaranteed-to-terminate "bailout" selection (like the naive procedure) if you reject more than some arbitrary threshold number of times.
Testing whether the sample lies within RGB needs to handle the special case of black (some implementations end up being silent on the divide by zero), I believe. If L = 0 and either u != 0 or v != 0, the sample needs to be rejected, or else you would end up oversampling the L = 0 plane in Luv space.
Does this procedure have an obvious flaw? It seems to work, but I did notice that I was rolling black more often than I thought made sense, until I thought about what was happening in that case. Can anyone point me to the right bounds on the CIELUV grid to ensure that I am enclosing the RGB solid?
A useful reference for those who don't know it:
https://www.easyrgb.com/en/math.php
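For concreteness, here is a minimal Python sketch of the procedure described above, assuming a D65 white point and following the easyrgb.com formulas; the box bounds are placeholders, since finding tight ones is exactly the open question:

import random

# D65 reference white in the u'v' chromaticity plane
REF_U, REF_V = 0.19784, 0.46832

def luv_to_linear_rgb(L, u, v):
    # CIELUV (D65) -> linear sRGB; returns None for the degenerate
    # black case discussed above
    if L <= 0:
        # L = 0 with nonzero u or v must be rejected, or the
        # L = 0 plane gets oversampled
        return (0.0, 0.0, 0.0) if u == 0 and v == 0 else None
    up = u / (13 * L) + REF_U
    vp = v / (13 * L) + REF_V
    if vp <= 0:
        return None  # out of gamut anyway; avoids a divide by zero
    Y = ((L + 16) / 116) ** 3 if L > 8 else L / 903.3
    X = Y * 9 * up / (4 * vp)
    Z = Y * (12 - 3 * up - 20 * vp) / (4 * vp)
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    b = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    return (r, g, b)

def random_rgb(max_tries=1000):
    # rejection-sample Luv inside a box enclosing the sRGB solid
    for _ in range(max_tries):
        L = random.uniform(0, 100)
        u = random.uniform(-84, 176)   # placeholder bounds
        v = random.uniform(-135, 108)  # placeholder bounds
        rgb = luv_to_linear_rgb(L, u, v)
        if rgb is not None and all(0 <= c <= 1 for c in rgb):
            return rgb  # linear sRGB; apply gamma before display
    return tuple(random.random() for _ in range(3))  # the "bailout"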
The key problem with this is that you need bounds to reject samples that fall outside of RGB. I was able to find them worked out here (there's a nice demo on the page, and the API provides convenient functions):
https://www.hsluv.org/
A few things I noticed with uniform sampling of CIELUV within RGB:
most colors come out green and purple (this is true independent of the RGB bounds)
you have a hard time sampling what we think of as yellow (it is a very small volume of high-lightness, high-chroma space)
I implemented various strategies that focus on sampling hues (which is really what we want when we think of "sampling colors") by weighting according to the maximum chroma available at each lightness. This makes colors like chromatic light yellows easier to catch and avoids oversampling greens and purples. You can see these methods in action here (select "randomize colors"):
https://www.mysticsymbolic.art/
Source for color randomizers here:
https://github.com/mittimithai/mystic-symbolic/blob/chromacorners/lib/random-colors.ts
Okay: you don't show the code you are using to generate the random numbers and apply them to the CIELUV color space, so I'm going to guess that you are creating a random number from 0.0 to 100.0 with a random number generator and just assigning it to L*.
That will most likely give you a lot of black or very dark results.
Let Me Explain
L* of L*u*v* is not linear with respect to light. Y of CIEXYZ is linear with respect to light. L* is perceptual lightness, so a power curve is applied to Y to make it linear to perception, and therefore non-linear with respect to light.
TRY THIS
To get L* as a random value from 0 to 100:
Generate a random number between 0.0 and 1.0
Then apply an exponent of 0.42
Then multiply by 100 to get L*
Lstar = Math.pow(Math.random(), 0.42) * 100;
This takes your random number, which represents light, and applies a power curve that emulates human lightness perception.
UV Color
As for the u and v values, you can probably just leave them as linear random numbers. Constrain u to about -84 to +176, and v to about -132.5 to +107.5:
Urnd = (Math.random() - 0.3231) * 260; // u in [-84, +176]
Vrnd = (Math.random() - 0.5521) * 240; // v in [-132.5, +107.5]
Polar Color
It might be interesting to convert u, v to the polar forms LCh(uv) or Lsh(uv).
For hue, it's probably as simple as H = Math.random() * 360
For chroma constrained to 0..178: C = Math.random() * 178
The next question is: should you generate chroma or saturation? CIELUV can provide either hue or saturation, but for directly generating random colors it seems that chroma works a bit better.
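A sketch of that polar variant (Python for brevity; the 0.42 exponent is the same lightness trick as above):

import math, random

L = (random.random() ** 0.42) * 100  # perceptually weighted L*, as above
C = random.random() * 178            # chroma constrained to 0..178
H = random.random() * 360            # hue in degrees

# back to Cartesian u, v for the usual Luv -> XYZ -> RGB pipeline
u = C * math.cos(math.radians(H))
v = C * math.sin(math.radians(H))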
And of course these simple examples do not prevent over-runs, so the color values need to be tested to see whether they are legal sRGB or not. There are a few things that can be done to constrain the generated values to legal colors, but the object here was to get you to a better distribution without excess black/dark results.
Please let me know of any questions.

How to efficiently find co-planar point in array of 3D points (Terrain)

I am representing a terrain using a two-dimensional height-map array (500x500) where points are 1 unit apart (1 unit = 1 meter in this case).
I am also representing a player using a 3D point (x-float, y-float, z-float). Since the player's position allows for fine-tuned placement while the height map is lower resolution, I need a method to approximate the height (z-axis) of the character from the x/y coordinates at which he falls within the terrain height map (a co-planar point).
Therefore my question comes in two parts:
How do I find the co-planar point when the x/y portion of it is already solved for? (A sketch follows below.)
Is there a recommended library that can handle this in C#?
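For question 1, bilinear interpolation of the four surrounding height samples is the standard approach. A minimal sketch (Python for brevity; the C# translation is mechanical):

def terrain_height(heightmap, x, y):
    # Bilinearly interpolate the height at a fractional (x, y).
    # heightmap is a 2D grid (e.g. 500x500) of z values, 1 unit apart;
    # x and y are assumed to lie inside the grid.
    x0, y0 = int(x), int(y)              # cell containing (x, y)
    x1 = min(x0 + 1, len(heightmap) - 1)
    y1 = min(y0 + 1, len(heightmap[0]) - 1)
    fx, fy = x - x0, y - y0              # fractional position inside the cell

    # interpolate along x on both cell edges, then along y
    z0 = heightmap[x0][y0] * (1 - fx) + heightmap[x1][y0] * fx
    z1 = heightmap[x0][y1] * (1 - fx) + heightmap[x1][y1] * fx
    return z0 * (1 - fy) + z1 * fy

Note that if the terrain is rendered as triangles, splitting each grid cell into two triangles and interpolating within the triangle containing (x, y) matches the rendered surface exactly; bilinear interpolation is a close, smooth approximation.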

Making a dynamic gradient with HSL or RGB

I have a standard 50-state map built with d3 in which I'm dynamically coloring states according to various datasets. Whatever the dataset, the values are normalized on a scale of 0 to 1, where 1 corresponds to the state with the highest value. I'm looking for a way to calculate the shade of the state using the value of the normalized data point.
In the past, I've chosen a base color that I like -- say, #900 -- and set the fill of each state to that color and the opacity to the normalized value. This works okay save for two problems:
when the canvas has a background color, it requires drawing a blank white state beneath every shaded state; and
fading out colors this way can look pasty
But I really like being able to set the color dynamically rather than dealing with bins for the data and preset arrays of RGB values for the gradient. So I'm wondering if there's a better way. I can take care of conversion if an alternate color system would work better.
d3 has a baked-in HSL converter, so I tried this:
// 0 <= val <= 1
function colorize(val) {
  // nudge in the extremes
  val = 0.2 + 0.6 * val;
  return d3.hsl(0, val, 1 - val);
}
It works okay -- this is a map of fishing jobs, which are most prevalent in Maine and Oregon -- but I suspect there's a better way. Ideas?
I like what you did actually, but if you wish to do something different, you can always do a D3 scale. For example:
var scale = d3.scale.linear()
    .domain([rangeMin, rangeMid, rangeMax])
    .range(["#Color1", "#Color2", "#Color3"]);
And then set each state by
return scale(dataValue);
You can set your rangeMin and rangeMax variables to be the minimum and maximum values of your data. The median number, rangeMid, that I added is optional. I would suggest using this if you would like some variety in your color. I have used this scale feature to make a word frequency heatmap that came out pretty nice. I hope that I was able to help in some way!
Note: I used this with CSS hex values, but I believe RGB and HSL could also work.

Chart optimization: More than million points

I have a custom control: a chart sized at, for example, 300x300 pixels, with more than one million points in it (maybe fewer). Understandably, it now works very slowly. I am searching for an algorithm that will show only a few of the points with minimal visual difference.
I have a link to the component which has exactly the functionality I need (a 2-million-point demo):
I will be grateful for any materials, links, or thoughts on how to implement such functionality.
If I understand your question correctly, you are looking to plot a dataset of ~1M points, but the chart's horizontal resolution is much smaller? If so, you can down-sample your dataset to roughly the number of available x values. If your data is sampled at equal intervals, you can extract every Nth point and plot it. Choose N such that the number of points is, say, double the resolution (in this case, N = 2000 will give you 500 points to display).
If the intervals are very different from each other (not regularly spaced), you can approximate your graph with a polynomial, a spline, or any other method that fits, and then interpolate 300-600 points from that approximation.
EDIT:
Depending on the nature of the data, you may end up with aliasing artifacts when you simply take every Nth point. There are probably better methods for coping with this, but again, it depends on what exactly you want to plot.
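As a sketch of both points (Python/NumPy for brevity; bucket counts are arbitrary), here is plain every-Nth decimation next to a min/max-per-bucket variant that keeps the spikes decimation would alias away:

import numpy as np

def decimate(y, n):
    # plain every-Nth-point decimation (fast, but can alias)
    return y[::n]

def downsample_minmax(y, n_buckets):
    # keep the min and max of each bucket, preserving spikes
    out = []
    for bucket in np.array_split(np.asarray(y), n_buckets):
        lo, hi = bucket.argmin(), bucket.argmax()
        out.extend(bucket[i] for i in sorted((lo, hi)))  # keep original order
    return np.array(out)

y = np.random.randn(1_000_000).cumsum()
print(len(decimate(y, 2000)))           # 500 points
print(len(downsample_minmax(y, 300)))   # 600 points for a 300-px-wide chart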
You could always buy the control - it is for sale!
John-Daniel Trask (Co-founder of Mindscape ;-)

How to recognize money bills in images?

I have some images of euro money bills. The bills are completely within the image and are mostly flat (e.g. little deformation), and the perspective skew is small (e.g. the image is taken from almost directly above the bill).
Now, I'm no expert in image recognition. I'd like to achieve the following:
Find the bounding box for the money bill (so I can "cut out" the bill from the noise in the rest of the image).
Figure out its orientation.
I think of these two steps as pre-processing, but maybe the following steps can be done without the two above. Either way, I then want to read:
The bill's serial number.
The bill's face value.
I assume this should be quite possible to do with OpenCV. I'm just not sure how to approach it. Would I pick a FaceDetector-like approach, or Hough transforms, or a contour detector on top of an edge detector?
I'd be thankful for any hints on further reading material as well.
Hough is great, but it can be a little expensive.
This may work:
-Use Threshold or Canny to find the edges of the image.
-Then use cvFindContours to identify the contours, and try to detect rectangles among them.
Check the squares.c example in the OpenCV distribution. It basically checks that the polygon approximation of a contour has 4 points and that the angles between those edges are all close to 90 degrees.
Here is a code snippet from the squares.py example (the same thing, but in Python :P).
# ...some pre-processing
cvThreshold(tgray, gray, (l + 1) * 255 / N, 255, CV_THRESH_BINARY)

# find contours and store them all as a list
count, contours = cvFindContours(gray, storage)
if not contours:
    continue

# test each contour
for contour in contours.hrange():
    # approximate the contour with accuracy proportional
    # to the contour perimeter
    result = cvApproxPoly(contour, sizeof(CvContour), storage,
                          CV_POLY_APPROX_DP, cvContourPerimeter(contour) * 0.02, 0)
    res_arr = result.asarray(CvPoint)
    # square contours should have 4 vertices after approximation,
    # a relatively large area (to filter out noisy contours),
    # and be convex.
    # Note: the absolute value of the area is used because the
    # area may be positive or negative, in accordance with the
    # contour orientation
    if (result.total == 4 and
            abs(cvContourArea(result)) > 1000 and
            cvCheckContourConvexity(result)):
        s = 0
        for i in range(4):
            # find the minimum angle between joint
            # edges (maximum of cosine)
            t = abs(angle(res_arr[i], res_arr[i - 2], res_arr[i - 1]))
            if s < t:
                s = t
        # if the cosines of all angles are small
        # (all angles are ~90 degrees) then write the quadrangle
        # vertices to the resultant sequence
        if s < 0.3:
            for i in range(4):
                squares.append(res_arr[i])
-Using MinAreaRect2 (which finds the circumscribed rectangle of minimal area for a given 2D point set), get the bounding box of each detected rectangle. From the bounding box points you can easily calculate the angle.
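For reference, in the modern cv2 bindings that last step is a single call; a small sketch with a made-up 4-point contour standing in for the output of the approximation step above:

import cv2
import numpy as np

contour = np.array([[10, 10], [200, 30], [190, 110], [5, 90]], dtype=np.float32)

# minAreaRect returns center, size and rotation angle directly
(cx, cy), (w, h), angle = cv2.minAreaRect(contour)
box = cv2.boxPoints(((cx, cy), (w, h), angle))  # the 4 corner points of the box
print(f"center=({cx:.0f}, {cy:.0f}), size={w:.0f}x{h:.0f}, angle={angle:.1f} deg")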
You can also find the C version, squares.c, under samples/c/ in your OpenCV directory.
There is a good book on OpenCV.
Using a Hough transform to find the rectangular bill shape (and its angle), and then finding rectangles/circles within it, should be quick and easy.
For more complex searching, consider something like a Haar classifier, e.g. if you needed to find odd corners of bills in an image.
You can also take a look at the template matching methods in OpenCV; another option would be to use SURF features, which let you search for symbols and numbers invariantly to size, angle, etc.
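A minimal template-matching sketch (the file names and the 0.8 threshold here are made up):

import cv2

image = cv2.imread("bill.png", cv2.IMREAD_GRAYSCALE)              # hypothetical input
template = cv2.imread("face_value_50.png", cv2.IMREAD_GRAYSCALE)  # hypothetical template

# slide the template over the image; TM_CCOEFF_NORMED yields scores in [-1, 1]
scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(scores)
if best_score > 0.8:  # arbitrary confidence threshold
    print("template found at", best_loc)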
