I'm trying to do something in C. I'm building a server that will receive a latitude and longitude coordinate, -23.0001, -43.3417 to be exact, and I'm trying to make a 10 km radius circle around that coordinate. Is it possible? I gave up on the circle and tried instead to make a square with 20 km sides, with the coordinate I gave as the center, but I keep failing. Here is what I'm trying:
Quad *cria_quadrado_complex(Coords *b)
{
    Quad *a = (Quad *) malloc(sizeof(Quad));
    a->x1 = b->x + 0.0433;
    a->x2 = b->x + 0.0587;
    a->y1 = b->y + 0.0433;
    a->y2 = b->y + 0.0490;
    return a;
}
the structs used are:
struct coordenadas
{
    double x, y;
};

struct quadrado
{
    double x1, x2, y1, y2;
};
typedef struct coordenadas Coords;
typedef struct quadrado Quad;
Those 0.0-something values are offsets I measured from Google Maps, but they end up far from the center and didn't work. Is there a better way to do this? Help!
PS: -23.0001, -43.3417 are coordinates taken from Google Maps.
Here is an example of what I'm trying to do:
1---------- 1 : top coordinate of the square
-----------
-----C----- C : center coordinate (-23.0001, -43.3417)
-----------
----------2 2 : bottom coordinate of the square
I want to generate 1 and 2 automatically from the center coordinate; they should be placed away from the center so that each side of the square is 10 km from it.
It certainly is possible, but you'll want to get the right formula. A good start might be to read http://www.movable-type.co.uk/scripts/gis-faq-5.1.html.
You don't say why you're doing this in C, and I can't tell if you need to calculate this for an arbitrary lat/lng or just for the one that you give. If you can do it in another language or find a C library that will process KML, and if you only need the one set of coordinates to work, you can use the tool at http://www.freemaptools.com/radius-around-point.htm to generate KML or a Google Maps Static API URL. If you're only concerned about the single pair of coordinates you specify, a hardcoded KML file may be the way to go.
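If you do want to compute the square yourself in C, the key fact is that one degree of latitude is roughly 111 km everywhere, while one degree of longitude shrinks with the cosine of the latitude. Below is a minimal sketch, assuming your Coords/Quad structs with x holding latitude and y holding longitude in degrees; the function name and the half_side_km parameter are only illustrative, not from your code.

#include <math.h>
#include <stdlib.h>

#define PI 3.14159265358979323846
#define EARTH_RADIUS_KM 6371.0

/* Sketch: build a square whose edges lie half_side_km away from the center
 * in every direction (10 km gives a 20 km x 20 km square).
 * Assumes b->x is latitude and b->y is longitude, both in degrees. */
Quad *cria_quadrado_km(Coords *b, double half_side_km)
{
    Quad *a = malloc(sizeof(Quad));
    if (a == NULL)
        return NULL;

    double dlat = (half_side_km / EARTH_RADIUS_KM) * (180.0 / PI);
    double dlon = dlat / cos(b->x * PI / 180.0);   /* longitude degrees shrink with latitude */

    a->x1 = b->x - dlat;   /* southern edge (latitude) */
    a->x2 = b->x + dlat;   /* northern edge (latitude) */
    a->y1 = b->y - dlon;   /* western edge (longitude) */
    a->y2 = b->y + dlon;   /* eastern edge (longitude) */
    return a;
}

For a latitude around -23 degrees and half_side_km = 10, this works out to offsets of roughly 0.09 degrees of latitude and about 0.098 degrees of longitude.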
I am having trouble making the camera rotate around my object. I will try to express myself as well as I can.
The objective is that when I load a new object and select it (selecting it means that _selected_object points to the new object), the active camera points to the center of that object (when analysis mode is activated), so the rotations and translation (translation only along the Z axis) are made around this object.
This is what I am trying right now:
_selected_camera->at.x=_selected_object->mtptr->M[3];
_selected_camera->at.y=_selected_object->mtptr->M[7];
_selected_camera->at.z=_selected_object->mtptr->M[11];
This code runs when analysis mode (camera pointing at and moving towards the object) is activated. But when I move the object, the camera points nowhere, or wherever it was initially pointing.
at in the selected camera is the point the camera is looking towards. mtptr is the object's 4x4 transformation matrix (in which the object transformations are made), whose last column holds the center of the object.
If you're using legacy OpenGL, use gluLookAt.
Otherwise (pseudo code):
mat4 lookAt(vec3 eye, vec3 target, vec3 up)
{
    //calculate axes based on the provided parameters
    //cross product gives us the perpendicular vector
    vec3 z = eye - target;
    vec3 x = cross(up, z);
    vec3 y = cross(z, x);

    //normalize to make unit vectors
    normalize(x);
    normalize(y);
    normalize(z);

    //translation vector
    //based on the angle between the eye vector (camera position) and the axes
    //hint: angle = dot(a, b) -> see: https://en.wikipedia.org/wiki/Dot_product
    vec3 t = {
        -dot(eye, x),
        -dot(eye, y),
        -dot(eye, z)
    };

    return { //a combined scale, skew, rotation and translation matrix
        x[0], x[1], x[2], t[0],
        y[0], y[1], y[2], t[1],
        z[0], z[1], z[2], t[2],
        0,    0,    0,    1
    };
}
Multiply the resulting (view) matrix with a projection matrix (ortho, perspective).
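If you are rolling your own math rather than using a library, a plain row-major 4x4 multiply is enough to combine them. A sketch follows; the float[16] representation and the function name are assumptions, so adapt them to whatever matrix type you actually use.

/* Sketch: multiply two row-major 4x4 matrices, out = a * b. */
void mat4_mul(const float a[16], const float b[16], float out[16])
{
    for (int row = 0; row < 4; ++row) {
        for (int col = 0; col < 4; ++col) {
            float sum = 0.0f;
            for (int k = 0; k < 4; ++k)
                sum += a[row * 4 + k] * b[k * 4 + col];
            out[row * 4 + col] = sum;
        }
    }
}

/* usage: mat4_mul(projection, view, viewProjection); */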
I'm trying to play around with image manipulation in C and I want to be able to read and write pixels on an SDL Surface. (I'm loading a bmp to a surface to get the pixel data) I'm having some trouble figuring out how to properly use the following functions.
SDL_CreateRGBSurfaceFrom();
SDL_GetRGB();
SDL_MapRGB();
I have only found examples of these in C++, and I'm having a hard time implementing them in C because I don't fully understand how they work.
So my questions are:
How do you properly retrieve pixel data using SDL_GetRGB, and how is a pixel addressed with x, y coordinates?
What kind of array should I use to store the pixel data?
How do you use SDL_CreateRGBSurfaceFrom() to draw the new pixel data back to a surface?
Also I want to access the pixels individually in a nested for loop for y and x like so.
for (int y = 0; y < h; y++)
{
    for (int x = 0; x < w; x++)
    {
        // get/put the pixel data
    }
}
First have a look at SDL_Surface.
The parts you're interested in:
SDL_PixelFormat *format
int w, h
int pitch
void *pixels
What else you should know:
At position x, y (where x and y must be greater than or equal to 0 and less than w and h respectively), the surface contains a pixel.
The pixel is described by the format field, which tells us, how the pixel is organized in memory.
The Remarks section of SDL_PixelFormat gives more information on the used datatype.
The pitch field is the length of one row of pixels in bytes: basically the width of the surface multiplied by the size of a pixel (BytesPerPixel), possibly rounded up for padding.
With the function SDL_GetRGB, one can easily convert a pixel of any format to a RGB(A) triple/quadruple.
SDL_MapRGB is the reverse of SDL_GetRGB, where one can specify a pixel as RGB(A) triple/quadruple to map it to the closest color specified by the format parameter.
The SDL wiki provides many examples for the specific functions; I think you will find the examples you need to solve your problem.
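For example, here is a minimal sketch, assuming SDL2 and a surface with 4 bytes per pixel (e.g. one converted with SDL_ConvertSurfaceFormat), that reads each pixel with SDL_GetRGB, inverts it, and writes it back with SDL_MapRGB. Error handling is omitted.

#include <SDL2/SDL.h>

/* Sketch: invert every pixel of a surface in place.
 * Assumes 4 bytes per pixel; other formats need the usual
 * per-BytesPerPixel handling. */
void invert_surface(SDL_Surface *surface)
{
    if (SDL_MUSTLOCK(surface))
        SDL_LockSurface(surface);

    Uint8 *pixels = (Uint8 *)surface->pixels;

    for (int y = 0; y < surface->h; y++) {
        for (int x = 0; x < surface->w; x++) {
            /* pitch is the length of one row in bytes */
            Uint32 *p = (Uint32 *)(pixels + y * surface->pitch
                                          + x * surface->format->BytesPerPixel);
            Uint8 r, g, b;
            SDL_GetRGB(*p, surface->format, &r, &g, &b);                  /* pixel -> RGB */
            *p = SDL_MapRGB(surface->format, 255 - r, 255 - g, 255 - b);  /* RGB -> pixel */
        }
    }

    if (SDL_MUSTLOCK(surface))
        SDL_UnlockSurface(surface);
}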
It is very complicated for me to explain the problem, but I will try my best.
I am making a game. There is an area of game objects and a canvas that draws every object in that area using some "draw_from" function - void draw_from(const char *obj, int x, int y, double scale) so that it looks as if a copy of that area is made on-screen.
This gives the advantage of scaling that area using the scale parameter of the draw_from() function.
However, a problem occurs when doing so. For simplicity imagine there are just two actors in that area - one that is right above the other one.
When they are scaled-down, they will appear in different vertical positions, further from each other.
I need to calculate the new correct positions for each of the objects and pass them to draw_from, but I just seem to be unable to figure out how. What is the correct way to recalculate the new positions if each of those objects is scaled down with the same value?
Here is a decent illustration of the problem more or less:
As you can tell, the draw_from function draws the object centered on the x/y coordinates. To draw an object at 0:0 (the top-left corner) you must call draw_from(obj, obj->width/2, obj->height/2, 1.0);. I'm not sure if the scaling is implemented exactly that way, but I created a function to obtain the new width and height of the scaled object:
void character_draw_get_scaled_dimensions(Actor *srcActor, double scale, double *sWidth, double *sHeight)
{
    double sCharacterWidth = 0;
    double sCharacterHeight = 0;

    if (srcActor->width >= srcActor->height)
    {
        sCharacterWidth = (double)srcActor->width * scale / 1.0;
        sCharacterHeight = sCharacterWidth * (double)srcActor->height / (double)srcActor->width;
    }
    else
    {
        sCharacterHeight = (double)srcActor->height * scale / 1.0;
        sCharacterWidth = sCharacterHeight * (double)srcActor->width / (double)srcActor->height;
    }

    if (sWidth)
        (*sWidth) = sCharacterWidth;
    if (sHeight)
        (*sHeight) = sCharacterHeight;
}
In other words, I need to maintain the distances between those objects across down-scales, and I have explained how draw_from works and, roughly, how its scaling works.
I need the correct values to pass to draw_from's x and y arguments.
Beyond that point, I think it would get too broad if I kept elaborating.
Not the solution I hoped for, but it is still a solution.
The more hacky and less practical (including performance-wise) solution is to draw every object on an offscreen canvas at a scale of 1.0, then draw from that canvas to the main canvas at any scale desired.
That way only the canvas has to be repositioned, not every object. It gets really easy from there. I would still prefer the proper mathematical solution.
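For reference, the mathematical version boils down to scaling each object's offset from the point the area is scaled around. A sketch follows; the names are only illustrative and not part of the actual draw_from API.

/* Sketch: recompute an object's draw position when the whole area is
 * scaled by `scale` around the point (center_x, center_y). */
void scaled_position(double obj_x, double obj_y,
                     double center_x, double center_y, double scale,
                     double *out_x, double *out_y)
{
    *out_x = center_x + (obj_x - center_x) * scale;
    *out_y = center_y + (obj_y - center_y) * scale;
}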
I have a set of data points that I want to test to see whether they lie on a logarithmic spiral arm for given parameters. The following program seems to work, but does not return any points close to the center of my plane, which contains all the data points. The attached image shows that my program does not seem to find any points that overlap with the spiral near the center. Here is the link:
http://imgur.com/QbNPg5S. Moreover, there seem to be two spirals in the overlapped points, which is another issue.
int main(){
    float radial[10000]={0}, angle[10000]={0}; // my points of interest
    float theta, r_sp; // radius and the angle theta for the spiral

    // Construct a spiral which lies in the same plane as my sources (green in the image)
    for (j=0; j<=PI*10; j++){
        theta = j*3./10;
        r_sp = a_sp*exp(b_sp*theta);

        // Calculate the radial and angular components from the x and y coordinates (read from a file)
        for (m=0; m<=30; m++){
            radial[m] = pow((x_comp*x_comp + y_comp*y_comp), 0.5);
            angle[m] = atan2f(y_comp, x_comp);

            // Change the range from [-pi, pi] to [0, 2*pi], consistent with "theta" of the spiral
            if (angle[m] < 0.){
                angle[m] = angle[m] + PI;
            }

            // Check if the point (radial, angle) lies on/around the spiral. For a realistic effect,
            // I am considering points at a radial distance "dr=0.5" (jitter) away from the "r_sp"
            // value of the spiral.
            if (fabs(r_sp-radial[m]) <= 0.5 && fabs(theta-angle[m]) <= 1.0e-2){
                printf("%f\t%f\t%f\t%f\n", l[k], b[k], ns[k], radial[m]);
            }
        }
    }
    return 0;
}
You check the condition only against the first turn of the spiral, i.e. the angle range 0..2*Pi.
First you have to estimate the potential turn number from r = radial[m]:
r = a*exp(b*t)
r/a = exp(b*t)
ln(r/a) = b*t
t = ln(r/a) / b
turnnumber = Floor(t / (2*Pi)) = Floor(ln(r/a) / (2*Pi*b))
Now you can use
angle[m] = YourAngleFromArctan + 2 * Pi * turnnumber
to compare.
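In code that could look roughly like this; it is only a sketch that reuses the a_sp, b_sp, radial[] and angle[] names from the question, and it assumes radial[m] > 0 and b_sp != 0.

#include <math.h>

/* Sketch: estimate which turn of the spiral the measured radius belongs to,
 * then unwrap the measured angle onto that turn before comparing with theta. */
double t_est = log(radial[m] / a_sp) / b_sp;           /* total unwound angle in radians */
int turnnumber = (int)floor(t_est / (2.0 * PI));       /* completed full turns */
double unwrapped = angle[m] + 2.0 * PI * turnnumber;   /* angle on the same turn as theta */

if (fabs(r_sp - radial[m]) <= 0.5 && fabs(theta - unwrapped) <= 1.0e-2) {
    /* the point lies on/around this turn of the spiral */
}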
I want to be able to move a particle in a straight line within a 3D environment, but I can't work out how to compute the next location based on two points in 3D space.
I have created a struct which represents a particle and which has a location and a next location. Would this be suitable for working out the next location to move to? I know how to initially set the next location using the following method:
// Set particle's direction to a random direction
void setDirection(struct particle *p)
{
    float xnm = (p->location.x * -1) - p->velocity;
    float xnp = p->location.x + p->velocity;
    float ynm = (p->location.y * -1) - p->velocity;
    float ynp = p->location.y + p->velocity;
    float znm = (p->location.z * -1) - p->velocity;
    float znp = p->location.z + p->velocity;
    struct point3f nextLocation = { randFloat(xnm, xnp), randFloat(ynm, ynp), randFloat(znm, znp) };
    p->nextLocation = nextLocation;
}
The structs I have used are:
// Represents a 3D point
struct point3f
{
    float x;
    float y;
    float z;
};

// Represents a particle
struct particle
{
    enum TYPES type;
    float radius;
    float velocity;
    struct point3f location;
    struct point3f nextLocation;
    struct point3f colour;
};
Am I going about this completely the wrong way?
Here's all my code: http://pastebin.com/m469f73c2
The other answer is a little mathish; it's actually pretty straightforward.
You need a "velocity" with which you are moving. It also has x, y and z components.
In one time period, to move you just add the x velocity to your x position to get your new x position, and repeat for y and z.
On top of that, you can have an "acceleration" (also x, y, z). For instance, your z acceleration could be gravity, a constant.
Every time period your velocity should be recalculated in the same way: call the x velocity "vx", then vx becomes vx + ax, and again repeat for y and z.
It's been a while since my math classes, but that's how I remember it. It's pretty straightforward unless you need to keep track of units; then it gets a little more interesting (but still not bad).
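As a rough sketch in C, assuming the particle's velocity is stored as a struct point3f rather than the single float in the question's struct, and passing the acceleration in separately:

/* Sketch: one time step. Assumes velocity is a struct point3f
 * (the question's struct stores it as a single float). */
void tick(struct particle *p, struct point3f acceleration)
{
    /* velocity becomes velocity + acceleration */
    p->velocity.x += acceleration.x;
    p->velocity.y += acceleration.y;
    p->velocity.z += acceleration.z;

    /* position becomes position + velocity */
    p->location.x += p->velocity.x;
    p->location.y += p->velocity.y;
    p->location.z += p->velocity.z;
}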
I'd suggest that a particle should only have one location member -- the current location. Also, the velocity should ideally be a vector of 3 components itself. Create a function (call it move, displace, or whatever) that takes a particle and a time duration t. This will compute the final position after t units of time have elapsed:
struct point3f move(struct particle *p, int t) {
    /* assumes velocity has been changed to a struct point3f, as suggested above */
    p->location.x += p->velocity.x * t;
    /* and so on for the other 2 dimensions */
    return p->location;
}
I would recommend two things:
Read an article or two on basic vector math for animation, for instance a site that explains 2D vectors for Flash.
Start simple: begin with a 1D point, i.e. a point only moving along x. Then try adding a second dimension (a 2D point in 2D space) and a third dimension. This might help you get a better understanding of the underlying mechanics.
Hope this helps.
Think of physics. An object has a position (x, y, z) and a movement vector (a, b, c). Your object should exist at its position; it has a movement vector associated with it that describes its momentum. In the absence of any additional forces on the object, and assuming that your movement vector describes the movement over a time period t, the position of your object at time t will be (x + a*t, y + b*t, z + c*t).
In short, don't store the current position and the next position. Store the current position and the object's momentum. It's easy enough to "tick the clock" and update the location of the object by simply adding the momentum to the position.
Store velocity as a struct point3f, and then you have something like this:
void move(struct particle * p)
{
p->position.x += p->velocity.x;
p->position.y += p->velocity.y;
p->position.z += p->velocity.z;
}
Essentially the velocity is how much you want the position to change each second/tick/whatever.
You want to implement the vector math X_{i+1} = X_{i} + V*t, where the X's and V are vectors representing position and velocity respectively, and t represents time. I've parameterized the distance along the track by time because I'm a physicist, but it really is the natural thing to do. Normalize the velocity vector if you want to give track distance (i.e. scale V such that V.x*V.x + V.y*V.y + V.z*V.z = 1).
Using the struct above makes it natural to access the elements, but not so convenient to do the addition: arrays are better for that. Like this:
double X[3];
double V[3];
// initialize X and V, then advance by one time step of duration t:
for (int i = 0; i < 3; ++i){
    X[i] = X[i] + V[i]*t;
}
With a union, you can get the advantages of both:
struct vector_s {
    double x;
    double y;
    double z;
};

typedef
union vector_u {
    struct vector_s s; // s for struct
    double a[3];       // a for array
} vector;
If you want to associate both the position and the velocity with the particle (a very reasonable thing to do), you construct a structure that supports two vectors:
typedef
struct particle_s {
    vector position;
    vector velocity;
    //...
} particle_t;
and run an update routine that looks roughly like:
void update(particle_t *p, double dt){
    for (int i = 0; i < 3; ++i){
        p->position.a[i] += p->velocity.a[i]*dt;
    }
}
Afaik, there are mainly two ways to calculate the new position. One, as the others have explained, is to use an explicit velocity. The other possibility is to store the last and the current position and to use Verlet integration. Both ways have their advantages and disadvantages. You might also take a look at this interesting page.
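A minimal sketch of a position-Verlet step for one coordinate; the function and variable names here are only illustrative:

/* Sketch: position Verlet for a single component. The velocity is implicit
 * in the difference between the current and the previous position. */
double verlet_step(double current, double previous, double acceleration, double dt)
{
    return 2.0 * current - previous + acceleration * dt * dt;
}

/* usage per axis, remembering the old position for the next step:
 *   double next = verlet_step(x, x_prev, ax, dt);
 *   x_prev = x;
 *   x = next;
 */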
If you are trying to move along a straight line between two points, you can use the interpolation formula:
P(t) = P1*(1-t) + P2*t
P(t) is the calculated position of the point, t is a scalar ranging from 0 to 1, P1 and P2 are the endpoints, and the addition in the above is vector addition (so you apply this formula separately to the x, y and z components of your points). When t=0, you get P1; when t=1, you get P2, and for intermediate values, you get a point part way along the line between P1 and P2. So t=.5 gives you the midpoint between P1 and P2, t=.333333 gives you the point 1/3 of the way from P1 to P2, etc. Values of t outside the range [0, 1] extrapolate to points along the line outside the segment from P1 to P2.
Using the interpolation formula can be better than computing a velocity and repeatedly adding it if the velocity is small compared to the distance between the points, because you limit the roundoff error.
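As a sketch in C, reusing the struct point3f from the question (the function name is only illustrative):

/* Sketch: linear interpolation between two points; t in [0, 1] stays on the
 * segment, values outside that range extrapolate along the same line. */
struct point3f lerp3f(struct point3f p1, struct point3f p2, float t)
{
    struct point3f p;
    p.x = p1.x * (1.0f - t) + p2.x * t;
    p.y = p1.y * (1.0f - t) + p2.y * t;
    p.z = p1.z * (1.0f - t) + p2.z * t;
    return p;
}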