I have written a C function to implement the inverse Vincenty's formula to calculate the distance between two sets of GPS coordinates, based on the equations shown at https://en.wikipedia.org/wiki/Vincenty%27s_formulae
However, my results are different from the results given by this online calculator https://www.cqsrg.org/tools/GCDistance/ and Google Maps. My results are consistently around 1.18 times the result of the online calculator.
My function is below; any tips on where I could be going wrong would be very much appreciated!
double get_distance(double lat1, double lon1, double lat2, double lon2)
{
    double rad_eq = 6378137.0; //Radius at equator
    double flattening = 1 / 298.257223563; //flattening of earth
    double rad_pol = (1 - flattening) * rad_eq; //Radius at poles
    double U1,U2,L,lambda,old_lambda,sigma,sin_sig,cos_sig,alpha,cos2sigmam,A,B,C,u_sq,delta_s,dis;
    //Convert to radians
    lat1=M_PI*lat1/180.0;
    lat2=M_PI*lat2/180.0;
    lon1=M_PI*lon1/180.0;
    lon2=M_PI*lon2/180.0;
    //Calculate U1 and U2
    U1=atan((1-flattening)*tan(lat1));
    U2=atan((1-flattening)*tan(lat2));
    L=lon2-lon1;
    lambda=L;
    double tolerance=pow(10.,-12.);//iteration tolerance should give 0.6mm
    double diff=1.;
    while (abs(diff)>tolerance)
    {
        sin_sig=sqrt(pow(cos(U2)*sin(lambda),2.)+pow(cos(U1)*sin(U2)-(sin(U1)*cos(U2)*cos(lambda)),2.));
        cos_sig=sin(U1)*cos(U2)+cos(U1)*cos(U2)*cos(lambda);
        sigma=atan(sin_sig/cos_sig);
        alpha=asin((cos(U1)*cos(U2)*sin(lambda))/(sin_sig));
        cos2sigmam=cos(sigma)-(2*sin(U1)*sin(U2))/((pow(cos(alpha),2.)));
        C=(flattening/16)*pow(cos(alpha),2.)*(4+(flattening*(4-(3*pow(cos(alpha),2.)))));
        old_lambda=lambda;
        lambda=L+(1-C)*flattening*sin(alpha)*(sigma+C*sin_sig*(cos2sigmam+C*cos_sig*(-1+2*pow(cos2sigmam,2.))));
        diff=abs(old_lambda-lambda);
    }
    u_sq=pow(cos(alpha),2.)*((pow(rad_eq,2.)-pow(rad_pol,2.))/(pow(rad_pol,2.)));
    A=1+(u_sq/16384)*(4096+(u_sq*(-768+(u_sq*(320-(175*u_sq))))));
    B=(u_sq/1024)*(256+(u_sq*(-128+(u_sq*(74-(47*u_sq))))));
    delta_s=B*sin_sig*(cos2sigmam+(B/4)*(cos_sig*(-1+(2*pow(cos2sigmam,2.)))-(B/6)*cos2sigmam*(-3+(4*pow(sin_sig,2.)))*(-3+(4*pow(cos2sigmam,2.)))));
    dis=rad_pol*A*(sigma-delta_s);
    //Returns distance in metres
    return dis;
}
This formula is not symmetric:
cos_sig = sin(U1)*cos(U2)
+ cos(U1)*cos(U2) * cos(lambda);
and it indeed turns out to be wrong: a sin is missing (the first term should be sin(U1)*sin(U2), as in the corrected loop below).
Another style of formatting (one including some whitespace) could also help.
Besides changing abs to fabs and that one cos to a sin, I also changed the loop: with the while loop there were two abs() calls and diff had to be preset, so a do/while is cleaner.
I inserted a printf to see how the value progresses.
Some parentheses can be left out. These formulas are really hard to get right; some more helper variables would be useful in this jungle of nested math operations.
do {
    sin_sig = sqrt(pow(cos(U2) * sin(lambda), 2)
                   + pow(cos(U1)*sin(U2)
                         - (sin(U1)*cos(U2) * cos(lambda))
                         , 2)
                  );
    cos_sig = sin(U1) * sin(U2)
              + cos(U1) * cos(U2) * cos(lambda);
    sigma = atan2(sin_sig, cos_sig);
    alpha = asin(cos(U1) * cos(U2) * sin(lambda)
                 / sin_sig
                );
    double cos2alpha = cos(alpha)*cos(alpha);   // helper var.
    cos2sigmam = cos(sigma) - 2*sin(U1)*sin(U2) / cos2alpha;
    C = (flat/16) * cos2alpha * (4 + flat * (4 - 3*cos2alpha));
    old_lambda = lambda;
    lambda = L + (1-C) * flat * sin(alpha)
                 *(sigma + C*sin_sig
                           *(cos2sigmam + C*cos_sig
                                          *(2 * pow(cos2sigmam, 2) - 1)
                            )
                  );
    diff = fabs(old_lambda - lambda);
    printf("%.12f\n", diff);
} while (diff > tolerance);
For 80,80, 0,0 the output is (the last line being the distance in km):
0.000885870048
0.000000221352
0.000000000055
0.000000000000
9809.479224
which agrees with the WGS-84 reference value to the millimeter.
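Pulling the corrections together (the sin for that cos, fabs instead of abs, atan2, and the do/while-style convergence check), here is a minimal Python sketch of the same inverse Vincenty computation; like the C code above it makes no attempt to handle coincident or near-antipodal points:
import math

def vincenty_inverse(lat1, lon1, lat2, lon2):
    # WGS-84 constants, same values as the C code above
    a = 6378137.0
    f = 1 / 298.257223563
    b = (1 - f) * a
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    U1 = math.atan((1 - f) * math.tan(lat1))
    U2 = math.atan((1 - f) * math.tan(lat2))
    L = lon2 - lon1
    lam = L
    for _ in range(200):
        sin_sig = math.sqrt((math.cos(U2) * math.sin(lam)) ** 2 +
                            (math.cos(U1) * math.sin(U2) -
                             math.sin(U1) * math.cos(U2) * math.cos(lam)) ** 2)
        cos_sig = (math.sin(U1) * math.sin(U2) +              # sin*sin here, not sin*cos
                   math.cos(U1) * math.cos(U2) * math.cos(lam))
        sigma = math.atan2(sin_sig, cos_sig)
        sin_alpha = math.cos(U1) * math.cos(U2) * math.sin(lam) / sin_sig
        cos2_alpha = 1 - sin_alpha ** 2
        cos_2sigma_m = cos_sig - 2 * math.sin(U1) * math.sin(U2) / cos2_alpha
        C = f / 16 * cos2_alpha * (4 + f * (4 - 3 * cos2_alpha))
        lam_old = lam
        lam = L + (1 - C) * f * sin_alpha * (
            sigma + C * sin_sig * (
                cos_2sigma_m + C * cos_sig * (2 * cos_2sigma_m ** 2 - 1)))
        if abs(lam - lam_old) < 1e-12:                        # same 1e-12 tolerance as above
            break
    u_sq = cos2_alpha * (a ** 2 - b ** 2) / b ** 2
    A = 1 + u_sq / 16384 * (4096 + u_sq * (-768 + u_sq * (320 - 175 * u_sq)))
    B = u_sq / 1024 * (256 + u_sq * (-128 + u_sq * (74 - 47 * u_sq)))
    delta_sig = B * sin_sig * (cos_2sigma_m + B / 4 * (
        cos_sig * (2 * cos_2sigma_m ** 2 - 1) -
        B / 6 * cos_2sigma_m * (4 * sin_sig ** 2 - 3) * (4 * cos_2sigma_m ** 2 - 3)))
    return b * A * (sigma - delta_sig)                        # distance in metres

print(vincenty_inverse(80, 80, 0, 0) / 1000)                  # roughly 9809.479 km, as above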
I've made a function to find a color within an image and return x, y. Now I need to add a new function where I can find a color with a given tolerance. Should be easy?
Code to find color in image, and return x, y:
from PIL import ImageGrab

def FindColorIn(r, g, b, xmin, xmax, ymin, ymax):
    image = ImageGrab.grab()
    for x in range(xmin, xmax):
        for y in range(ymin, ymax):
            px = image.getpixel((x, y))
            if px[0] == r and px[1] == g and px[2] == b:
                return x, y

def FindColor(r, g, b):
    image = ImageGrab.grab()
    size = image.size
    pos = FindColorIn(r, g, b, 1, size[0], 1, size[1])
    return pos
Outcome:
Taken from the answers, the usual ways of comparing two colors are Euclidean distance or Chebyshev distance.
I decided to mostly use (squared) Euclidean distance across multiple different color spaces: LAB, deltaE (LCH), XYZ, HSL, and RGB. In my code, most color spaces use squared Euclidean distance to compute the difference.
For example, with LAB, RGB and XYZ a simple squared Euclidean distance does the trick:
if ((X-X1)^2 + (Y-Y1)^2 + (Z-Z1)^2) <= (Tol^2) then
...
LCH and HSL are a little more complicated as both have a cylindrical hue, but some piece of math solves that (a small sketch follows below); then it's on to using squared Euclidean here as well.
In most of these cases I've added "separate parameters" for the tolerance of each channel (using one global tolerance and alternative "modifiers", e.g. HueTol := Tolerance * hueMod or LightTol := Tolerance * LightMod).
It seems like color spaces built on top of XYZ (LAB, LCH) perform best in many of my scenarios. HSL yields very good results in some cases though, and it's much cheaper to convert to from RGB; RGB itself is also great and fills most of my needs.
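For reference, a minimal Python sketch of the cylindrical-hue handling mentioned above, with per-channel modifiers in the spirit of HueTol/LightTol (the function names and defaults here are illustrative, not the original code):
def hue_delta(h1, h2, hue_range=360.0):
    # Shortest distance around the hue circle (HSL or LCH hue)
    d = abs(h1 - h2) % hue_range
    return min(d, hue_range - d)

def hsl_within_tolerance(hsl1, hsl2, tol, hue_mod=1.0, light_mod=1.0):
    # Per-channel tolerance check with hue/lightness modifiers, as described above
    h1, s1, l1 = hsl1
    h2, s2, l2 = hsl2
    return (hue_delta(h1, h2) <= tol * hue_mod and
            abs(s1 - s2) <= tol and
            abs(l1 - l2) <= tol * light_mod)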
Computing distances between RGB colours, in a way that's meaningful to the eye, isn't as easy as just taking the Euclidean distance between the two RGB vectors.
There is an interesting article about this here: http://www.compuphase.com/cmetric.htm
The example implementation in C is this:
typedef struct {
    unsigned char r, g, b;
} RGB;

double ColourDistance(RGB e1, RGB e2)
{
    long rmean = ( (long)e1.r + (long)e2.r ) / 2;
    long r = (long)e1.r - (long)e2.r;
    long g = (long)e1.g - (long)e2.g;
    long b = (long)e1.b - (long)e2.b;
    return sqrt((((512+rmean)*r*r)>>8) + 4*g*g + (((767-rmean)*b*b)>>8));
}
It shouldn't be too difficult to port to Python.
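For example, one possible direct port (a sketch only: the C bit shifts become a plain division by 256, and 0-255 integer channels are assumed):
import math

def colour_distance(c1, c2):
    # Port of the C ColourDistance above; c1 and c2 are (r, g, b) tuples with 0-255 values
    rmean = (c1[0] + c2[0]) / 2
    r = c1[0] - c2[0]
    g = c1[1] - c2[1]
    b = c1[2] - c2[2]
    return math.sqrt((512 + rmean) * r * r / 256 + 4 * g * g + (767 - rmean) * b * b / 256)

print(colour_distance((255, 0, 0), (0, 255, 0)))  # pure red vs. pure green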
EDIT:
Alternatively, as suggested in this answer, you could use HLS and HSV. The colorsys module seems to have functions to make the conversion from RGB. Its documentation also links to these pages, which are worth reading to understand why RGB Euclidean distance doesn't really work:
http://www.poynton.com/ColorFAQ.html
http://www.cambridgeincolour.com/tutorials/color-space-conversion.htm
EDIT 2:
According to this answer, this library should be useful: http://code.google.com/p/python-colormath/
Here is an optimized Python version adapted from Bruno's answer:
def ColorDistance(rgb1, rgb2):
    '''Distance between two colors given as numpy arrays of length 3'''
    rm = 0.5 * (rgb1[0] + rgb2[0])
    d = sum((2 + rm, 4, 3 - rm) * (rgb1 - rgb2) ** 2) ** 0.5
    return d
usage:
>>> import numpy
>>> rgb1 = numpy.array([1,1,0])
>>> rgb2 = numpy.array([0,0,0])
>>> ColorDistance(rgb1,rgb2)
2.5495097567963922
Instead of this:
if px[0] == r and px[1] == g and px[2] == b:
Try this:
if max(map(lambda a,b: abs(a-b), px, (r,g,b))) < tolerance:
Where tolerance is the maximum difference you're willing to accept in any of the color channels.
What it does is subtract each channel from your target value, take the absolute values, and then take the max of those.
Assuming that rtol, gtol, and btol are the tolerances for r,g, and b respectively, why not do:
if abs(px[0] - r) <= rtol and \
   abs(px[1] - g) <= gtol and \
   abs(px[2] - b) <= btol:
    return x, y
Here's a vectorised Python (numpy) version of Bruno's and Developer's answers (i.e. an implementation of the approximation derived here) that accepts a pair of numpy arrays of shape (x, 3), where individual rows are in [R, G, B] order and individual colour values ∈ [0, 1].
You can reduce it to a two-liner at the expense of readability. I'm not entirely sure whether it's the most optimised version possible, but it should be good enough.
import numpy as np

def colour_dist(fst, snd):
    rm = 0.5 * (fst[:, 0] + snd[:, 0])
    drgb = (fst - snd) ** 2
    t = np.array([2 + rm, 4 + 0 * rm, 3 - rm]).T
    return np.sqrt(np.sum(t * drgb, 1))
It was evaluated against Developer's per-element version above, and produces the same results (save for floating precision errors in two cases out of one thousand).
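A hypothetical usage example (array contents chosen purely for illustration):
import numpy as np

colours_a = np.array([[1.0, 0.0, 0.0], [0.2, 0.2, 0.2]])
colours_b = np.array([[0.0, 1.0, 0.0], [0.8, 0.8, 0.8]])
print(colour_dist(colours_a, colours_b))   # one distance per row, roughly [2.55, 1.8]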
A cleaner Python implementation of the function stated here. The function takes two image paths, reads them using cv.imread, and outputs a matrix in which each cell holds the colour difference between the corresponding pixels. You can easily change it to just match two colours.
import numpy as np
import cv2 as cv

def col_diff(img1, img2):
    # OpenCV reads images as B, G, R; cast to float so the uint8 arithmetic below cannot wrap around
    img_bgr1 = cv.imread(img1).astype(np.float64)
    img_bgr2 = cv.imread(img2).astype(np.float64)
    r_m = 0.5 * (img_bgr1[:, :, 2] + img_bgr2[:, :, 2])
    delta_rgb = np.square(img_bgr1 - img_bgr2)
    cols_diffs = (delta_rgb[:, :, 2] * (2 + r_m / 256) +
                  delta_rgb[:, :, 1] * 4 +
                  delta_rgb[:, :, 0] * (2 + (255 - r_m) / 256))
    cols_diffs = np.sqrt(cols_diffs)
    # normalise the values to the range [0, 1]
    cols_diffs_min = np.min(cols_diffs)
    cols_diffs_max = np.max(cols_diffs)
    cols_diffs_normalized = (cols_diffs - cols_diffs_min) / (cols_diffs_max - cols_diffs_min)
    return np.sqrt(cols_diffs_normalized)
Simple:
def eq_with_tolerance(a, b, t):
    return a - t <= b <= a + t

def FindColorIn(r, g, b, xmin, xmax, ymin, ymax, tolerance=0):
    image = ImageGrab.grab()
    for x in range(xmin, xmax):
        for y in range(ymin, ymax):
            px = image.getpixel((x, y))
            if eq_with_tolerance(r, px[0], tolerance) and eq_with_tolerance(g, px[1], tolerance) and eq_with_tolerance(b, px[2], tolerance):
                return x, y
From the pyautogui source code:
def pixelMatchesColor(x, y, expectedRGBColor, tolerance=0):
    r, g, b = screenshot().getpixel((x, y))
    exR, exG, exB = expectedRGBColor
    return (abs(r - exR) <= tolerance) and (abs(g - exG) <= tolerance) and (abs(b - exB) <= tolerance)
You just need a little fix and you're ready to go.
Here is a simple function that does not require any libraries:
def color_distance(rgb1, rgb2):
    rm = 0.5 * (rgb1[0] + rgb2[0])
    rd = ((2 + rm) * (rgb1[0] - rgb2[0])) ** 2
    gd = (4 * (rgb1[1] - rgb2[1])) ** 2
    bd = ((3 - rm) * (rgb1[2] - rgb2[2])) ** 2
    return (rd + gd + bd) ** 0.5
assuming that rgb1 and rgb2 are RGB tuples
I suppose this is more of a math question than anything.
Here is a basic shader:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord/iResolution.xy;

    // Time varying pixel color
    vec3 col = 0.5 + 0.5*cos(iTime+uv.xyx+vec3(0,2,4));

    if(uv.x < .5) col = vec3(0.0,0.0,0.0);

    // Output to screen
    fragColor = vec4(col,1.0);
}
First we normalize our X coordinates to the range (0.0, 1.0), with 0.0 being the far left of the screen and 1.0 being the far right. By turning all pixels with x coordinates < .5 black, I am simply masking the left half of the screen in black, which works as expected.
If I use screen-space coordinates I can achieve a similar result; the width of the actual screen is 800 pixels, so I can mask every pixel with an x < 400 with black by doing the following:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord/iResolution.xy;

    // Time varying pixel color
    vec3 col = 0.5 + 0.5*cos(iTime+uv.xyx+vec3(0,2,4));

    if(fragCoord.x < 400.) col = vec3(0.0,0.0,0.0);

    // Output to screen
    fragColor = vec4(col,1.0);
}
Which gives the same result.
Logically then, I should be able to use modulo on the screen-space coordinates to create stripes. By taking mod(fragCoord.x, 10.0) and checking where the result is 0.0, I should be disabling any column of pixels whose x value is a multiple of 10.
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord/iResolution.xy;

    // Time varying pixel color
    vec3 col = 0.5 + 0.5*cos(iTime+uv.xyx+vec3(0,2,4));

    if(mod(fragCoord.x, 10.0) == 0.0) col = vec3(0.0,0.0,0.0);

    // Output to screen
    fragColor = vec4(col,1.0);
}
However, that isn't what happens: no black stripes show up at all.
Can somebody explain why I am not seeing columns of black pixels wherever x % 10 == 0?
I assume fragCoord is set by gl_FragCoord.
mod is a floating point operation and the values of gl_FragCoord are not integral. See Khronos OpenGL reference:
By default, gl_FragCoord assumes a lower-left origin for window coordinates and assumes pixel centers are located at half-pixel centers. For example, the (x, y) location (0.5, 0.5) is returned for the lower-left-most pixel in a window.
Therefore the result of the modulo operation will never be 0.0. Convert fragCoord.x to an integral value and use the % operator, i.e. change

if(mod(fragCoord.x, 10.0) == 0.0) col = vec3(0.0,0.0,0.0);

to

if (int(fragCoord.x) % 10 == 0) col = vec3(0.0);
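To make that concrete, here is a small plain-Python illustration of the arithmetic (outside GLSL, values chosen only for demonstration): with pixel centres at half-integer coordinates, mod(x, 10.0) cycles through 0.5 ... 9.5 and never hits 0.0, whereas int(x) % 10 selects every 10th pixel column.
xs = [i + 0.5 for i in range(25)]           # window-space x values at pixel centres
print([x % 10.0 for x in xs[:12]])          # 0.5, 1.5, ..., 9.5, 0.5, 1.5 -- never 0.0
print([x for x in xs if int(x) % 10 == 0])  # [0.5, 10.5, 20.5] -> every 10th column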
In case anybody wants to see the result of Rabbid76's answer
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Normalized pixel coordinates (from 0 to 1)
    vec2 uv = fragCoord/iResolution.xy;

    // Time varying pixel color
    vec3 col = 0.5 + 0.5*cos(iTime+uv.xyx+vec3(0,2,4));

    if (int(fragCoord.x) % 10 == 0) col = vec3(0.0);

    // Output to screen
    fragColor = vec4(col,1.0);
}
I'm working on a ray tracer and I can't figure out what I'm doing wrong when I try to calculate an intersection with a cone. I have my ray vector and the position of the cone with its axis. I know that computing a cone along a simple axis is easy, but I want to do it with an arbitrary axis.
I'm using this link http://mrl.nyu.edu/~dzorin/rend05/lecture2.pdf for the cone equation (pages 7-8), and here is my code:
alpha = cone->angle * (PI / 180);
axe.x = 0;
axe.y = 1;
axe.z = 0;
delt_p = vectorize(cone->position, ray.origin);
tmp1.x = ray.vector.x - (dot_product(ray.vector, axe) * axe.x);
tmp1.y = ray.vector.y - (dot_product(ray.vector, axe) * axe.y);
tmp1.z = ray.vector.z - (dot_product(ray.vector, axe) * axe.z);
tmp2.x = (delt_p.x) - (dot_product(delt_p, axe) * axe.x);
tmp2.y = (delt_p.y) - (dot_product(delt_p, axe) * axe.y);
tmp2.z = (delt_p.z) - (dot_product(delt_p, axe) * axe.z);
a = (pow(cos(alpha), 2) * dot_product(tmp1, tmp1)) - (pow(sin(alpha), 2) * dot_product(ray.vector, axe));
b = 2 * ((pow(cos(alpha), 2) * dot_product(tmp1, tmp2)) - (pow(sin(alpha), 2) * dot_product(ray.vector, axe) * dot_product(delt_p, axe)));
c = (pow(cos(alpha), 2) * dot_product(tmp2, tmp2)) - (pow(sin(alpha), 2) * dot_product(delt_p, axe));
delta = pow(b, 2) - (4 * a * c);
if (delta >= 0)
{
    t1 = (((-1) * b) + sqrt(delta)) / (2 * a);
    t2 = (((-1) * b) - sqrt(delta)) / (2 * a);
    t = (t1 < t2 ? t1 : t2);
    return (t);
}
I initialised my axis with the y axis so I can rotate it.
Here is what I get : http://i.imgur.com/l3kaavc.png
Instead of a cone, I have that paraboloid red shape on the right, and I know that it's almost the same equation as a cone.
You probably need to implement arbitrary transformations on primitives using homogeneous matrices, rather than support arbitrary orientation for each primitive.
For example, it's not uncommon for ray tracers to only support cones that have their base on the origin and that point along the vertical axis. You would then use affine transformations to move the cone to the right place and orientation.
My own ray tracer (which thus far only supports planes, boxes and spheres) has the same problem, and implementing transformation matrices is my next task.
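To make that suggestion a bit more concrete, here is a rough Python/numpy sketch of the approach (the names intersect_canonical_cone and world_from_local are illustrative, not from the original ray tracer): keep the cone in a canonical pose (apex at the origin, axis along +Y, half-angle alpha) and transform the ray into the cone's local frame with the inverse of its placement matrix.
import numpy as np

def intersect_canonical_cone(o, d, alpha):
    # Smallest positive t with x^2 + z^2 = (y * tan(alpha))^2 along o + t*d, or None
    k = np.tan(alpha) ** 2
    a = d[0] ** 2 + d[2] ** 2 - k * d[1] ** 2
    b = 2.0 * (o[0] * d[0] + o[2] * d[2] - k * o[1] * d[1])
    c = o[0] ** 2 + o[2] ** 2 - k * o[1] ** 2
    if abs(a) < 1e-12:                       # ray (nearly) parallel to the cone surface
        return None
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    t1, t2 = sorted([(-b - np.sqrt(disc)) / (2.0 * a), (-b + np.sqrt(disc)) / (2.0 * a)])
    return next((t for t in (t1, t2) if t > 1e-6), None)

def intersect_cone(origin, direction, world_from_local, alpha):
    # world_from_local is the cone's 4x4 placement matrix (rotation + translation, possibly scale)
    local_from_world = np.linalg.inv(world_from_local)
    o = (local_from_world @ np.append(origin, 1.0))[:3]      # points transform with w = 1
    d = (local_from_world @ np.append(direction, 0.0))[:3]   # directions transform with w = 0
    # d is deliberately not re-normalised, so the returned t also parameterises the world-space ray
    return intersect_canonical_cone(o, d, alpha)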
I would like to know how the vertices of glVertex2f(x, y) map to actual screen integer co-ordinates.
I intend to use a complex plane with minR, minI and maxR, maxI (I and R - Imaginary and Real part), such that the plane gets mapped to 512 x 512 pixels on the screen. I have points of 512 steps between the min and max values.
The mapping of the vertices is unclear to me, since I had to scale my planar image using glScalef(100, 100, 0) to get it to roughly fit the screen. But still, a large portion of it is left blank.
Please note that I am using the glBegin(GL_POINTS) routine to map the points in the plane to the screen.
The code looks thus,
for (X = 0; X < 512; X++)
    for (Y = 0; Y < 512; Y++)
        glVertex2f (Complexplane[X][Y].real, Complexplane[X][Y].imag);
P.S.:
Complexplane[0][0].real = -2, Complexplane[0][0].imag = -1.2
Complexplane[511][511].real = 1.0, Complexplane[511][511].imag = 1.8
I'm assuming you haven't set the projection or modelview matrices - they will be set to the identity matrix by default BTW...
For X,Y coordinates, a point will be visible if: -1 <= X <= 1, -1 <= Y <= 1
The glViewport function describes how this range is mapped to the window. It is initially set to (0, 0, window_width, window_height) when the GL context is created. The fact that glScalef(100, 100, 0) is only taking up a portion of the window suggests that you are applying another transform elsewhere.
The mapping depends on the transformation matrices that are set. Up to OpenGL-2 the pipeline is
v_eye = ModelviewMatrix * v
v_projected = ProjectionMatrix * v_eye
v_clipped = clip(v_projected)
v_NDC.xyzw = v_clipped.xyzw / v_clipped.w
The default matrices are identity, so the only operation applied in the default state is the clipping. v_NDC then undergoes the viewport transform:
p.xyz = (v_NDC.xyz + 1) * viewport.wh / 2 + viewport.xy
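As a quick numeric illustration of the two formulas above (a Python sketch assuming the default identity matrices and a 512x512 viewport at the origin):
def ndc_to_window(v_ndc, viewport=(0, 0, 512, 512)):
    # p.xy = (v_NDC.xy + 1) * viewport.wh / 2 + viewport.xy
    vx, vy, vw, vh = viewport
    return ((v_ndc[0] + 1) * vw / 2 + vx,
            (v_ndc[1] + 1) * vh / 2 + vy)

print(ndc_to_window((-1.0, -1.0)))   # (0.0, 0.0)     -> lower-left corner of the window
print(ndc_to_window(( 1.0,  1.0)))   # (512.0, 512.0) -> upper-right corner
print(ndc_to_window((-2.0, -1.2)))   # (-256.0, -51.2): outside [-1, 1], so the vertex is clipped
This is why untransformed complex-plane vertices such as (-2, -1.2) never reach the 512x512 window unless a projection (or equivalent scaling) maps the plane's range into [-1, 1] first.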