I'm currently using Toto Briac's algorithm from Raycaster, looking for a more efficient floor/ceiling raycast.
When using it, the floor casting works well for the east and west sides, but for north and south it just does weird things (see the image).
double  pixelsToBottom;
double  pixelsToMid;
double  directDistFloor;
double  realDistance;
double  y;
t_point f_p;

pixelsToBottom = (double)data->s_height - wall[1].y;
pixelsToMid = (double)data->s_height / 2 - pixelsToBottom;
for (int i = (int)pixelsToMid; i < data->s_height / 2; i += 1)
{
    // straight-line distance to the floor point for this screen row
    directDistFloor = (data->dist_proj * (double)(data->s_height / 2)) / i;
    // distance along the ray (this is the line discussed below)
    realDistance = directDistFloor / fabs(cos(angle));
    f_p.x = data->player.pos.x + cos(angle) * realDistance / (data->dist_proj / 64.0);
    f_p.y = data->player.pos.y + sin(angle) * realDistance / (data->dist_proj / 64.0);
    y = (wall->x + (i + data->s_height / 2) * data->s_width) / data->s_width;
    pixel_put(&data->obj, wall->x, y, f_pixel(data, f_p));
}
But I'm facing an issue: when I'm facing the north or south side it's all OK, but when the ray is going toward the east or west side, the texture does a weird thing like this:
I know it comes down to this line: realDistance = directDistFloor / fabs(cos(angle));
If I replace cos(angle) with sin(angle) in that line, it just inverts the problem. I couldn't find a way to switch the calculation at the right moment. If you have any idea, I'll take it! Thank you!!
It's okay, I found the answer. For anyone who is interested: in the line realDistance = directDistFloor / fabs(cos(angle)); I was using the angle of the ray in world space. I changed it to the angle relative to my player's direction (so 0° for the ray along the player's direction) and it works properly! Thanks #ZwergofPhoenix for the time you took!
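For illustration, here is a minimal sketch of that fix; the field data->player.angle holding the player's facing angle in radians is hypothetical and depends on your own structs:

// Use the ray angle relative to the player's direction instead of the
// world-space ray angle. data->player.angle is a hypothetical field.
double rel_angle = angle - data->player.angle; // 0 for the ray straight ahead
realDistance = directDistFloor / fabs(cos(rel_angle));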
I have written a C function to implement the inverse Vincenty's formula to calculate the distance between two sets of GPS coordinates, based on the equations shown at https://en.wikipedia.org/wiki/Vincenty%27s_formulae
However, my results differ from the results given by this online calculator https://www.cqsrg.org/tools/GCDistance/ and by Google Maps. My results are consistently around 1.18 times the result of the online calculator.
My function is below; any tips on where I could be going wrong would be very much appreciated!
double get_distance(double lat1, double lon1, double lat2, double lon2)
{
    double rad_eq = 6378137.0; //Radius at equator
    double flattening = 1 / 298.257223563; //flattening of Earth
    double rad_pol = (1 - flattening) * rad_eq; //Radius at poles
    double U1,U2,L,lambda,old_lambda,sigma,sin_sig,cos_sig,alpha,cos2sigmam,A,B,C,u_sq,delta_s,dis;
    //Convert to radians
    lat1=M_PI*lat1/180.0;
    lat2=M_PI*lat2/180.0;
    lon1=M_PI*lon1/180.0;
    lon2=M_PI*lon2/180.0;
    //Calculate U1 and U2
    U1=atan((1-flattening)*tan(lat1));
    U2=atan((1-flattening)*tan(lat2));
    L=lon2-lon1;
    lambda=L;
    double tolerance=pow(10.,-12.);//iteration tolerance should give 0.6mm
    double diff=1.;
    while (abs(diff)>tolerance)
    {
        sin_sig=sqrt(pow(cos(U2)*sin(lambda),2.)+pow(cos(U1)*sin(U2)-(sin(U1)*cos(U2)*cos(lambda)),2.));
        cos_sig=sin(U1)*cos(U2)+cos(U1)*cos(U2)*cos(lambda);
        sigma=atan(sin_sig/cos_sig);
        alpha=asin((cos(U1)*cos(U2)*sin(lambda))/(sin_sig));
        cos2sigmam=cos(sigma)-(2*sin(U1)*sin(U2))/((pow(cos(alpha),2.)));
        C=(flattening/16)*pow(cos(alpha),2.)*(4+(flattening*(4-(3*pow(cos(alpha),2.)))));
        old_lambda=lambda;
        lambda=L+(1-C)*flattening*sin(alpha)*(sigma+C*sin_sig*(cos2sigmam+C*cos_sig*(-1+2*pow(cos2sigmam,2.))));
        diff=abs(old_lambda-lambda);
    }
    u_sq=pow(cos(alpha),2.)*((pow(rad_eq,2.)-pow(rad_pol,2.))/(pow(rad_pol,2.)));
    A=1+(u_sq/16384)*(4096+(u_sq*(-768+(u_sq*(320-(175*u_sq))))));
    B=(u_sq/1024)*(256+(u_sq*(-128+(u_sq*(74-(47*u_sq))))));
    delta_s=B*sin_sig*(cos2sigmam+(B/4)*(cos_sig*(-1+(2*pow(cos2sigmam,2.)))-(B/6)*cos2sigmam*(-3+(4*pow(sin_sig,2.)))*(-3+(4*pow(cos2sigmam,2.)))));
    dis=rad_pol*A*(sigma-delta_s);
    //Returns distance in metres
    return dis;
}
This formula is not symmetric:
cos_sig = sin(U1)*cos(U2)
          + cos(U1)*cos(U2) * cos(lambda);
And it turns out to be wrong: a sin is missing.
Another style of formatting (one including some whitespace) could also help.
Besides using fabs instead of abs and a sin instead of that cos, I also changed the loop; there were two abs() calls, and diff had to be preset for the while loop.
I inserted a printf to see how the value progresses.
Some parentheses can be left out. These formulas are really difficult to implement correctly. Some more helper variables could be useful in this jungle of nested math operations.
do {
    sin_sig = sqrt(pow(cos(U2) * sin(lambda), 2)
                   + pow(cos(U1)*sin(U2)
                         - (sin(U1)*cos(U2) * cos(lambda))
                         , 2)
                   );
    cos_sig = sin(U1) * sin(U2)
              + cos(U1) * cos(U2) * cos(lambda);
    sigma = atan2(sin_sig, cos_sig);
    alpha = asin(cos(U1) * cos(U2) * sin(lambda)
                 / sin_sig
                 );
    double cos2alpha = cos(alpha)*cos(alpha); // helper var.
    cos2sigmam = cos(sigma) - 2*sin(U1)*sin(U2) / cos2alpha;
    C = (flattening/16) * cos2alpha * (4 + flattening * (4 - 3*cos2alpha));
    old_lambda = lambda;
    lambda = L + (1-C) * flattening * sin(alpha)
             *(sigma + C*sin_sig
               *(cos2sigmam + C*cos_sig
                 *(2 * pow(cos2sigmam, 2) - 1)
                 )
               );
    diff = fabs(old_lambda - lambda);
    printf("%.12f\n", diff);
} while (diff > tolerance);
For 80,80, 0,0 the output is (the last line being the distance in km):
0.000885870048
0.000000221352
0.000000000055
0.000000000000
9809.479224
which agrees with WGS-84 to the millimeter.
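For reference, a minimal sketch of a driver for the corrected function (assuming it keeps the get_distance signature from the question and is linked with -lm):

#include <stdio.h>

double get_distance(double lat1, double lon1, double lat2, double lon2); // as above, with the fixed loop

int main(void)
{
    // (80 N, 80 E) to (0, 0); expect roughly 9809.479 km
    double d = get_distance(80.0, 80.0, 0.0, 0.0);
    printf("%.3f km\n", d / 1000.0);
    return 0;
}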
I've made a function to find a color within an image and return x, y. Now I need to add a new function where I can find a color with a given tolerance. Should be easy?
Code to find color in image, and return x, y:
from PIL import ImageGrab  # Pillow

def FindColorIn(r, g, b, xmin, xmax, ymin, ymax):
    image = ImageGrab.grab()
    for x in range(xmin, xmax):
        for y in range(ymin, ymax):
            px = image.getpixel((x, y))
            if px[0] == r and px[1] == g and px[2] == b:
                return x, y

def FindColor(r, g, b):
    image = ImageGrab.grab()
    size = image.size
    pos = FindColorIn(r, g, b, 1, size[0], 1, size[1])
    return pos
Outcome:
Taken from the answers, the usual methods of comparing two colors are Euclidean distance or Chebyshev distance.
I decided to mostly use (squared) Euclidean distance with several different color spaces: LAB, deltaE (LCH), XYZ, HSL, and RGB. In my code, most color spaces use squared Euclidean distance to compute the difference.
For example with LAB, RGB and XYZ a simple squared euc. distance does the trick:
if ((X-X1)^2 + (Y-Y1)^2 + (Z-Z1)^2) <= (Tol^2) then
...
LCH and HSL are a little more complicated, as both have a cylindrical hue, but a small piece of math solves that; then it's on to using squared Euclidean distance here as well.
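For illustration, a minimal C-style sketch of that hue wrap-around handling, assuming hue is given in degrees in [0, 360); the helper names are mine, and the per-channel weights/tolerance modifiers described below are left out:

#include <math.h>

// Shortest angular difference between two hues in degrees,
// so the wrap-around at 360 is handled.
static double hue_diff(double h1, double h2)
{
    double d = fabs(h1 - h2);
    return (d > 180.0) ? 360.0 - d : d;
}

// Squared distance in an HSL-like space with the hue wrapped.
static double hsl_dist_sq(double h1, double s1, double l1,
                          double h2, double s2, double l2)
{
    double dh = hue_diff(h1, h2);
    double ds = s1 - s2;
    double dl = l1 - l2;
    return dh * dh + ds * ds + dl * dl;
}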
In most of these cases I've added separate tolerance parameters for each channel (using one global tolerance and alternative "modifiers", e.g. HueTol := Tolerance * hueMod or LightTol := Tolerance * LightMod).
It seems like color spaces built on top of XYZ (LAB, LCH) perform best in many of my scenarios. HSL yields very good results in some cases, though, and it's much cheaper to convert to from RGB; RGB is also great and fills most of my needs.
Computing distances between RGB colours, in a way that's meaningful to the eye, isn't as easy as just taking the Euclidean distance between the two RGB vectors.
There is an interesting article about this here: http://www.compuphase.com/cmetric.htm
The example implementation in C is this:
#include <math.h>

typedef struct {
    unsigned char r, g, b;
} RGB;

double ColourDistance(RGB e1, RGB e2)
{
    long rmean = ((long)e1.r + (long)e2.r) / 2;
    long r = (long)e1.r - (long)e2.r;
    long g = (long)e1.g - (long)e2.g;
    long b = (long)e1.b - (long)e2.b;
    return sqrt((((512 + rmean) * r * r) >> 8) + 4 * g * g + (((767 - rmean) * b * b) >> 8));
}
It shouldn't be too difficult to port to Python.
EDIT:
Alternatively, as suggested in this answer, you could use HLS and HSV. The colorsys module seems to have functions to make the conversion from RGB. Its documentation also links to these pages, which are worth reading to understand why RGB Euclidean distance doesn't really work:
http://www.poynton.com/ColorFAQ.html
http://www.cambridgeincolour.com/tutorials/color-space-conversion.htm
EDIT 2:
According to this answer, this library should be useful: http://code.google.com/p/python-colormath/
Here is an optimized Python version adapted from Bruno's answer:
def ColorDistance(rgb1, rgb2):
    '''distance between two colors given as 3-element numpy arrays'''
    rm = 0.5 * (rgb1[0] + rgb2[0])
    d = sum((2 + rm, 4, 3 - rm) * (rgb1 - rgb2) ** 2) ** 0.5
    return d
usage:
>>> import numpy
>>> rgb1 = numpy.array([1,1,0])
>>> rgb2 = numpy.array([0,0,0])
>>> ColorDistance(rgb1,rgb2)
2.5495097567963922
Instead of this:
if px[0] == r and px[1] == g and px[2] == b:
Try this:
if max(map(lambda a,b: abs(a-b), px, (r,g,b))) < tolerance:
Where tolerance is the maximum difference you're willing to accept in any of the color channels.
What it does is to subtract each channel from your target values, take the absolute values, then the max of those.
Assuming that rtol, gtol, and btol are the tolerances for r,g, and b respectively, why not do:
if abs(px[0] - r) <= rtol and \
   abs(px[1] - g) <= gtol and \
   abs(px[2] - b) <= btol:
    return x, y
Here's a vectorised Python (numpy) version of Bruno and Developer's answers (i.e. an implementation of the approximation derived here) that accepts a pair of numpy arrays of shape (x, 3) where individual rows are in [R, G, B] order and individual colour values ∈[0, 1].
You can reduce it to a two-liner at the expense of readability. I'm not entirely sure whether it's the most optimised version possible, but it should be good enough.
import numpy as np

def colour_dist(fst, snd):
    rm = 0.5 * (fst[:, 0] + snd[:, 0])
    drgb = (fst - snd) ** 2
    t = np.array([2 + rm, 4 + 0 * rm, 3 - rm]).T
    return np.sqrt(np.sum(t * drgb, 1))
It was evaluated against Developer's per-element version above, and produces the same results (save for floating precision errors in two cases out of one thousand).
A cleaner Python implementation of the function stated here. The function takes two image paths, reads them using cv.imread, and outputs a matrix in which each cell holds the colour difference. You can easily change it to just match two colours.
import numpy as np
import cv2 as cv
def col_diff(img1, img2):
    # cast to float to avoid uint8 overflow in the sums/differences below
    img_bgr1 = cv.imread(img1).astype(np.float64)  # since opencv reads as B, G, R
    img_bgr2 = cv.imread(img2).astype(np.float64)
    r_m = 0.5 * (img_bgr1[:, :, 2] + img_bgr2[:, :, 2])
    delta_rgb = np.square(img_bgr1 - img_bgr2)
    cols_diffs = (delta_rgb[:, :, 2] * (2 + r_m / 256)
                  + delta_rgb[:, :, 1] * 4
                  + delta_rgb[:, :, 0] * (2 + (255 - r_m) / 256))
    cols_diffs = np.sqrt(cols_diffs)
    # normalize the values to the range [0, 1]
    cols_diffs_min = np.min(cols_diffs)
    cols_diffs_max = np.max(cols_diffs)
    cols_diffs_normalized = (cols_diffs - cols_diffs_min) / (cols_diffs_max - cols_diffs_min)
    return np.sqrt(cols_diffs_normalized)
Simple:
def eq_with_tolerance(a, b, t):
    return a - t <= b <= a + t

def FindColorIn(r, g, b, xmin, xmax, ymin, ymax, tolerance=0):
    image = ImageGrab.grab()
    for x in range(xmin, xmax):
        for y in range(ymin, ymax):
            px = image.getpixel((x, y))
            if eq_with_tolerance(r, px[0], tolerance) and eq_with_tolerance(g, px[1], tolerance) and eq_with_tolerance(b, px[2], tolerance):
                return x, y
From the pyautogui source code:
def pixelMatchesColor(x, y, expectedRGBColor, tolerance=0):
    r, g, b = screenshot().getpixel((x, y))
    exR, exG, exB = expectedRGBColor
    return (abs(r - exR) <= tolerance) and (abs(g - exG) <= tolerance) and (abs(b - exB) <= tolerance)
you just need a little fix and you're ready to go.
Here is a simple function that does not require any libraries:
def color_distance(rgb1, rgb2):
    rm = 0.5 * (rgb1[0] + rgb2[0])
    rd = ((2 + rm) * (rgb1[0] - rgb2[0])) ** 2
    gd = (4 * (rgb1[1] - rgb2[1])) ** 2
    bd = ((3 - rm) * (rgb1[2] - rgb2[2])) ** 2
    return (rd + gd + bd) ** 0.5
assuming that rgb1 and rgb2 are RGB tuples
Given the angle theta in radians and the width and height of the image to be rotated, how do I calculate the new width and height of the outer rectangle that contains the rotated image?
In other words, how do I calculate the new bounding box width/height?
Note that the image could actually be circle and have transparent pixels on the edges.
That would be: x1, y1.
I am actually rotating a pixbuf with the origin at center using cairo_rotate() and I need to know the newly allocated area. What I tried is this:
double geo_rotated_rectangle_get_width (double a, double b, double theta)
{
    return fabs(a * cos(theta)) + fabs(b * sin(theta));
}
And it does work in the sense of always returning sufficient space to contain the rotated image, but it also always returns higher values than it should when the image is not rotated by a multiple of 90° and is a fully opaque image (a square).
EDIT:
This is the image I am rotating:
Interestingly enough, I just tried with a fully opaque image of the same size and it was OK. I use gdk_pixbuf_get_width() to get the width, and it returns the same value for both regardless. So I assume the formula is correct and the problem is that transparency is not accounted for. When rotated to a diagonal orientation, there are edges of the rotated image's rectangle that are transparent.
I'll leave the above so that it is helpful to others :)
Now the question becomes how to account for transparent pixels on the edges.
To determine the bounding box of the rotated rectangle, you can compute the coordinates of the 4 vertices and take the bounding box of these 4 points (a small C sketch follows the formulas below).
a is the width of the unrotated rectangle and b its height;
let diag = sqrt(a * a + b * b) / 2 be the distance from the center to the top right corner of this rectangle. You can use diag = hypot(a, b) / 2 for better precision;
first compute the angle theta0 of the first diagonal for theta=0: theta0 = atan(b / a), or better theta0 = atan2(b, a);
the 4 vertices are:
{ diag * cos(theta0 + theta), diag * sin(theta0 + theta) }
{ diag * cos(pi - theta0 + theta), diag * sin(pi - theta0 + theta) }
{ diag * cos(pi + theta0 + theta), diag * sin(pi + theta0 + theta) }
{ diag * cos(-theta0 + theta), diag * sin(-theta0 + theta) }
which can be simplified as:
{ diag * cos(theta + theta0), diag * sin(theta + theta0) }
{ -diag * cos(theta - theta0), -diag * sin(theta - theta0) }
{ -diag * cos(theta + theta0), -diag * sin(theta + theta0) }
{ diag * cos(theta - theta0), diag * sin(theta - theta0) }
which gives x1 and y1:
x1 = diag * fmax(fabs(cos(theta + theta0)), fabs(cos(theta - theta0)))
y1 = diag * fmax(fabs(sin(theta + theta0)), fabs(sin(theta - theta0)))
and the width and height of the rotated rectangle follow:
width = 2 * diag * fmax(fabs(cos(theta + theta0)), fabs(cos(theta - theta0)))
height = 2 * diag * fmax(fabs(sin(theta + theta0)), fabs(sin(theta - theta0)))
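As a worked example, a minimal C sketch of these formulas (the function name is mine):

#include <math.h>

// Width and height of the axis-aligned box containing an a-by-b
// rectangle rotated by theta (radians) about its centre.
void rotated_bbox_size(double a, double b, double theta,
                       double *width, double *height)
{
    double diag   = hypot(a, b) / 2;  // centre-to-corner distance
    double theta0 = atan2(b, a);      // angle of the first diagonal

    *width  = 2 * diag * fmax(fabs(cos(theta + theta0)),
                              fabs(cos(theta - theta0)));
    *height = 2 * diag * fmax(fabs(sin(theta + theta0)),
                              fabs(sin(theta - theta0)));
}

For an opaque rectangle this agrees with the fabs(a*cos(theta)) + fabs(b*sin(theta)) formula from the question.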
This is the geometric solution, but you must take into account the rounding performed by the graphics primitive, so it is much preferable to use the graphics API and retrieve the pixbuf dimensions with gdk_pixbuf_get_width() and gdk_pixbuf_get_height(), which will allow for precise placement.
I'd say "let cairo compute those coordinates". If you have access to a cairo_t*, you can do something like the following (untested!):
double x1, y1, x2, y2;
cairo_save(cr);
cairo_rotate(cr, theta); // You can also do cairo_translate() and whatever your heart desires
cairo_new_path(cr);
cairo_rectangle(cr, x, y, width, height);
cairo_restore(cr); // Note: This preserved the path!
cairo_fill_extents(cr, &x1, &y1, &x2, &y2);
cairo_new_path(cr); // Clean up after ourselves
printf("Rectangle is inside of (%g,%g) to (%g,%g) (size %g,%g)\n",
x1, y1, x2, y2, x2 - x1, y2 - y1);
The above code applies some transformation, then constructs a path. This makes cairo apply the transformation to the given coordinates. Afterwards, the transformation is "thrown away" with cairo_restore(). Next, we ask cairo for the area covered by the current path, which it provides in the current coordinate system, i.e. without the transformation.
I have a function in my library which computes N (N = 500 to 2000) explicit, rather simple operations, but it is called hundreds of thousands of times by the main software. Each small computation is independent of the others and each one is slightly different (polynomial coefficients and sometimes other additional features vary), and therefore no loop is made; the cases are hard-coded into the function.
Unfortunately, the calls (the loop) in the main software cannot be threaded, because the code that runs before the actual call to this particular function is not thread safe (a bigger software package to deal with here...).
I already tried creating a team of OpenMP threads at the beginning of this function and executing the computations in e.g. 4 blocks via OpenMP's sections functionality, but it seems that the overhead of the thread creation (#pragma omp parallel) was too high. Can it be?
Any nice ideas on how to speed up this kind of situation? Perhaps applying SIMD features, but how would that work when I don't have an explicit for loop here to deal with?
#include "needed.h"
void eval_func (const double x, const double y, const double * __restrict__ z, double * __restrict__ out1, double * __restrict__ out2) {
double logx = log(x);
double tmp1;
double tmp2;
//calculation 1
tmp1 = exp(3.6 + 2.7 * logx - (3.1e+03 / x));
out1[0] = z[6] * z[5] * tmp1;
if (x <= 1.0) {
tmp2 = (-4.1 + 9.2e-01 * logx + x * (-3.3e-03 + x * (2.95e-06 + x * (-1.4e-09 + 3.2e-13 * x))) - 8.8e+02 / x);
} else {
tmp2 = (2.71e+00 + -3.3e-01 * logx + x * (3.4e-04 + x * (-6.8e-08 + x * (8.7e-12 + -4.2e-16 * x))) - 1.0e+03 / x);
}
tmp2 = 1.3 * exp(tmp2);
out2[0] = z[3] * z[7] * tmp1 / tmp2;
//calculation 2
.
.
out1[1] = ...
out2[1] = ...
//calculation N
.
.
out1[N-1] = ...
out2[N-1] = ...