Moving sprites in a sine wave - C

Can someone throw an idea or a math formula my way for how to make my enemies move in a sine wave?
I tried something like this, but they all move at the same time, so they just form a straight line of enemies shifting left and right.
for (int i = 0; i < 5; i++) {
    float y = sinf(100 + delta_time * 0.06f) * 75;
    float x = game->enemy[i].base_x + y;
    game->enemy[i].x = x;
    game->enemy[i].y += 1;
    SDL_Rect rect = { game->enemy[i].x, game->enemy[i].y, game->enemy[i].w, game->enemy[i].h };
    SDL_RenderCopy(game->renderer, game->enemy[i].sprite, NULL, &rect);
}

Let v = (v_x, v_y) be the overall direction of the enemy, and let o = (-v_y/||v||, v_x/||v||), where ||v|| = sqrt(v_x*v_x + v_y*v_y) is the norm of v. The vector o is perpendicular to v, and the sinusoidal motion is wanted in that direction. Consequently, the position p(t) = (x(t), y(t)) is defined as:
x(t) = v_x*t - A*(v_y/||v||)*sin(w*t)
y(t) = v_y*t + A*(v_x/||v||)*sin(w*t)
where A is the amplitude of the oscillations and w their angular frequency (pulsation). The corresponding frequency is f = w/(2*pi), and the wavelength lambda = ||v||/f is the spatial length of one oscillation.
If the enemy is moving in the x direction (v_y = 0), this reduces to:
x(t) = v_x*t
y(t) = A*sin(w*t)
and the wavelength is lambda = 2*pi*v_x/w.
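Applied to the loop in the question, a minimal sketch could look like the following (not a drop-in fix: it assumes delta_time keeps growing each frame, e.g. accumulated milliseconds, and the per-enemy phase offset i * 0.8f is just an illustrative choice). Giving each enemy its own phase stops them from oscillating in lockstep:
/* Sketch: horizontal sine offset around each enemy's own base_x,
   with a per-enemy phase so the column traces a wave instead of a line. */
for (int i = 0; i < 5; i++) {
    float phase  = i * 0.8f;                                  /* illustrative per-enemy offset */
    float offset = sinf(delta_time * 0.06f + phase) * 75.0f;  /* A = 75, w = 0.06 */

    game->enemy[i].x  = game->enemy[i].base_x + offset;
    game->enemy[i].y += 1;                                    /* steady downward drift (the v direction) */

    SDL_Rect rect = { (int)game->enemy[i].x, (int)game->enemy[i].y,
                      game->enemy[i].w, game->enemy[i].h };
    SDL_RenderCopy(game->renderer, game->enemy[i].sprite, NULL, &rect);
}
Here the overall motion v is straight down and the sine is applied perpendicular to it (in x), matching the special case above with the roles of x and y swapped.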

Related

Intersection problem with Möller-Trumbore algorithm on one dimension of the triangle

I am currently working on a raytracer project and I just found an issue with the triangle intersections.
Sometimes, and I don't understand when or why, some of the pixels of the triangle don't appear on the screen; instead I can see the object right behind it. It only occurs on one dimension of the triangle and it depends on the camera and triangle positions (e.g. picture below).
Triangle with pixels missing
I am using the Möller-Trumbore algorithm to compute every intersection. Here's my implementation:
t_solve s;
t_vec   v1;
t_vec   v2;
t_vec   tvec;
t_vec   pvec;

v1 = vec_sub(triangle->point2, triangle->point1);   /* first edge */
v2 = vec_sub(triangle->point3, triangle->point1);   /* second edge */
pvec = vec_cross(dir, v2);
s.delta = vec_dot(v1, pvec);                        /* determinant */
if (fabs(s.delta) < 0.00001)                        /* ray parallel to the triangle plane */
    return ;
s.c = 1.0 / s.delta;
tvec = vec_sub(ori, triangle->point1);
s.a = vec_dot(tvec, pvec) * s.c;                    /* barycentric u */
if (s.a < 0 || s.a > 1)
    return ;
tvec = vec_cross(tvec, v1);
s.b = vec_dot(dir, tvec) * s.c;                     /* barycentric v */
if (s.b < 0 || s.a + s.b > 1)
    return ;
s.t1 = vec_dot(v2, tvec) * s.c;                     /* ray parameter t */
if (s.t1 < 0)
    return ;
if (s.t1 < rt->t)                                   /* keep the closest hit */
{
    rt->t = s.t1;
    rt->last_obj = triangle;
    rt->flag = 0;
}
The only clue I have at the moment is that using a different method of calculating my ray (called dir in the code) results in fewer missing pixels.
Moreover, when I turn the camera and look behind, the bug occurs on the opposite side of the triangle. All of this makes me think the issue is mainly linked to the ray.
Take a look at Watertight Ray/Triangle Intersection. I would much appreciate it if you could provide a minimal example where a ray should hit the triangle but misses it. I had this a long time ago with the Cornell Box - inside the box there were some "black" pixels because, on edges, none of the triangles was hit. It's a common problem stemming from floating-point imprecision.
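A common stop-gap, sketched below, is to relax the barycentric edge tests by a small epsilon so that a hit landing exactly on a shared edge is accepted by both neighbouring triangles instead of being rejected by both. This is not the watertight algorithm from the paper, and the vec3 type, the helper functions and the 1e-7 tolerance are illustrative choices rather than the asker's t_vec API:
#include <math.h>
#include <stdbool.h>

typedef struct { double x, y, z; } vec3;

static vec3 v_sub(vec3 a, vec3 b) { return (vec3){ a.x - b.x, a.y - b.y, a.z - b.z }; }
static vec3 v_cross(vec3 a, vec3 b) {
    return (vec3){ a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static double v_dot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Returns true and writes the ray parameter to *t_out when the ray ori + t*dir
   hits the triangle (p0, p1, p2). The barycentric bounds are relaxed by EDGE_EPS
   so an edge-on hit is not dropped by both triangles sharing that edge. */
static bool ray_hits_triangle(vec3 ori, vec3 dir, vec3 p0, vec3 p1, vec3 p2, double *t_out)
{
    const double EDGE_EPS = 1e-7;          /* illustrative tolerance, tune to your scene scale */
    vec3 e1 = v_sub(p1, p0);
    vec3 e2 = v_sub(p2, p0);
    vec3 pvec = v_cross(dir, e2);
    double det = v_dot(e1, pvec);
    if (fabs(det) < 1e-12)                 /* ray parallel to the triangle plane */
        return false;
    double inv_det = 1.0 / det;
    vec3 tvec = v_sub(ori, p0);
    double u = v_dot(tvec, pvec) * inv_det;
    if (u < -EDGE_EPS || u > 1.0 + EDGE_EPS)
        return false;
    vec3 qvec = v_cross(tvec, e1);
    double v = v_dot(dir, qvec) * inv_det;
    if (v < -EDGE_EPS || u + v > 1.0 + EDGE_EPS)
        return false;
    double t = v_dot(e2, qvec) * inv_det;
    if (t < 0.0)
        return false;
    *t_out = t;
    return true;
}
Enlarging the triangles slightly this way can produce double hits along shared edges, which is harmless for a closest-hit ray tracer; for fully robust behaviour the watertight method from the linked paper is the better long-term fix.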

How to make a turret point at an object using 2D frames in tower defense games?

I'm working on a tower defense game and I'm using Stencyl.
I want to make a 2D tower defense game like Clash of Clans, so I want to know how to make a turret point at an object using frames (like the cannon in Clash of Clans).
I mean that when an object enters the range of a tower, the tower should point at it, not by rotating the tower sprite but by switching between 2D frames, using code or some mathematical approach.
I've found the solution.
Do this:
float Direction = 0;
float FinalDirection = 0;
float DirectionDegree = 0;
int NumberOfDirections = 24; // e.g. 24, 32, or even 128 directions

DirectionDegree = 360.0f / NumberOfDirections;

void update() // this runs every frame
{
    Direction = Math.atan2(target.y - tower.y, target.x - tower.x) * (180 / Math.PI);
    if (Direction < 0)
    {
        Direction += 360;
    }
    FinalDirection = Direction / DirectionDegree;
    tower.frame = Math.round(FinalDirection) % NumberOfDirections; // nearest frame, wrapped back to 0 near 360 degrees
}
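As a quick sanity check of the arithmetic (a standalone C sketch rather than Stencyl code; the tower/target offsets are made up for illustration): with 24 frames each frame covers 15 degrees, so a target roughly 37 degrees above the x axis should select frame round(37 / 15) % 24 = 2.
#include <math.h>
#include <stdio.h>

int main(void)
{
    const float PI = 3.14159265f;
    const int   num_frames   = 24;                 /* e.g. 24 pre-rendered directions */
    const float degrees_each = 360.0f / num_frames;

    /* Hypothetical offsets from tower to target, for illustration only. */
    float dx = 40.0f, dy = 30.0f;
    float angle = atan2f(dy, dx) * (180.0f / PI);  /* about 36.87 degrees */
    if (angle < 0.0f)
        angle += 360.0f;

    /* Round to the nearest frame and wrap so angles near 360 map back to frame 0. */
    int frame = (int)lroundf(angle / degrees_each) % num_frames;
    printf("angle %.1f deg -> frame %d\n", angle, frame);   /* prints: angle 36.9 deg -> frame 2 */
    return 0;
}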

Length of the intercept from intersection of a line with a cylinder (ring)

I have some sources with coordinates (xn, yn, zn) w.r.t. the center C of a ring, and unit vectors (ns_ux, ns_uy, ns_uz) along my line of sight. I want to determine whether the line of sight to each source passes through a cylinder (I sometimes call it a ring) of inner radius 9.5 and outer radius 10.5 units. If it intersects the cylinder, I would like to calculate the length of the intercept. My position is outside the ring, and some sources lie beyond the center C on the other side, so their lines of sight pass through the ring twice. This picture should help visualize the problem.
#include <stdio.h>
#include <math.h>

#define PI 3.142

int main()
{
    int k, number = 200;
    float r_min = 9.50000;
    float r_max = 10.500000;
    float step = 0.3;
    float z_c = 3.0;
    float ns_ux[number], ns_uy[number], ns_uz[number], xn[number], yn[number], zn[number],
          l[number], b[number], ns[number], x_comp, y_comp, z_comp, radial;
    FILE *val = NULL;

    val = fopen("novae_uniform_unitvectors.txt", "r");
    for (k = 0; k <= (number - 1); k++) {
        fscanf(val, "%f %f %f %f %f %f %f %f %f", &xn[k], &yn[k], &zn[k],
               &ns_ux[k], &ns_uy[k], &ns_uz[k], &l[k], &b[k], &ns[k]);
        float u = 0.;
        for (u = 0.; u <= 30.; u = u + step) {
            /* vector addition: x_comp w.r.t. the center C when stepped by u units along the line of sight */
            x_comp = xn[k] + u * ns_ux[k];
            y_comp = yn[k] + u * ns_uy[k];
            radial = pow((x_comp * x_comp + y_comp * y_comp), 0.5);
            if (radial >= r_min && radial <= r_max) {
                z_comp = zn[k] + u * ns_uz[k];
                /* check that the height is consistent with the ring's height */
                if (z_comp >= -z_c && z_comp <= z_c)
                    printf("%f\t%f\t%f\t%f\n", l[k], u, z_comp, radial);
            }
        }
    }
    return 0;
}
These 'radial' values give me a list of points where my line of sight intersects the ring, but I only need the end points to calculate the length of the intercept on the ring.
E.g. in the case listed below, my line of sight enters the ring at I and exits at II. It then keeps going until it hits the ring again at III and comes out of it at IV. I need to store only points I, II, III and IV in my file. How can I do that?
longitude       u         z_comp      radial
121.890999 0.100000 0.016025 9.561846 I
121.890999 0.200000 0.038453 9.538050
121.890999 0.300000 0.060881 9.515191 II
121.890999 4.799998 1.070159 9.518372 III
121.890999 4.899998 1.092587 9.541364
121.890999 4.999998 1.115016 9.565292
...... skipping to save space........
121.890999 7.399995 1.653297 10.400277
121.890999 7.499995 1.675725 10.444989
121.890999 7.599995 1.698153 10.490416 IV
I figured out a way to store only the initial and final values by using a boolean flag, as follows (continued from the code in the question):
bool change = true;        /* requires #include <stdbool.h> */
float prev_radial = 0.;    /* radial value from the previous step along the line of sight */

...(rest of the program)...

if (radial >= r_min && radial <= r_max) {
    z_comp = zn[k] + u * ns_uz[k];
    if (z_comp >= -z_c && z_comp <= z_c)
        if (change) {
            printf("%f\t%f\t%f\t%f\t", l[k], b[k], ns[k], radial);   /* entry point */
            change = !change;
        }
} else {   /* the radial condition is no longer met */
    if (!change) {
        fprintf(fp, "%f\n", prev_radial);   /* exit point: last radial still inside; fp is an output file opened elsewhere */
        change = !change;
    }
}
prev_radial = radial;
This stores only the first and last values of the radial component (i.e. the end points of the intercept of the line-of-sight vector on the ring).
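If the final goal is the intercept length rather than the end points themselves, another option is sketched below, using the same stepping loop and variables as the question's code (was_inside and u_entry are new helper variables introduced here). Since (ns_ux, ns_uy, ns_uz) is a unit vector, the difference in u between exit and entry is already a length in the same units as the coordinates, accurate to roughly the step size (0.3 units here):
/* Sketch: print one intercept length per ring crossing instead of the end points.
   Runs inside the same loop over k as the question's code. */
bool was_inside = false;   /* requires #include <stdbool.h> */
float u_entry = 0.f;

for (u = 0.; u <= 30.; u = u + step) {
    x_comp = xn[k] + u * ns_ux[k];
    y_comp = yn[k] + u * ns_uy[k];
    z_comp = zn[k] + u * ns_uz[k];
    radial = pow((x_comp * x_comp + y_comp * y_comp), 0.5);

    bool inside = (radial >= r_min && radial <= r_max &&
                   z_comp >= -z_c && z_comp <= z_c);

    if (inside && !was_inside)          /* just entered the ring (point I or III) */
        u_entry = u;
    else if (!inside && was_inside)     /* just left the ring (point II or IV) */
        printf("%f\tintercept length: %f\n", l[k], u - u_entry);

    was_inside = inside;
}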

Kalman Filter implementation - what could be wrong

I am sorry for being tedious, but I have reviewed my code several times with the help of a dozen articles and my KF still doesn't work. By "doesn't work" I mean that the estimates produced by the KF are wrong. Here is a paste of the real, noised, and KF-estimated positions (just a small chunk).
My example is the same as in every tutorial I've found: I have a state vector of position and velocity. Position is in meters and represents vertical position in the air. My real-world case is skydiving (with a parachute). In my sample generated data I assume we start at 3000 m and the velocity is 10 m/s downward.
P.S.: I am pretty sure the matrix computations are OK - there must be an error in the logic.
Here I generate data:
void generateData(float** inData, float** noisedData, int x, int y) {
    inData[0][0] = 3000;   // start position
    inData[1][0] = -10;    // 10 m/s velocity; negative because we assume it's falling
    noisedData[0][0] = 2998;
    noisedData[1][0] = -10;
    for (int i = 1; i < x; i++) {
        inData[0][i] = inData[0][i-1] + inData[1][i-1];
        inData[1][i] = inData[1][i-1];                        // the velocity doesn't change for simplicity's sake
        noisedData[0][i] = inData[0][i] + (rand() % 6 - 3);   // we add noise to the real measurement
        noisedData[1][i] = inData[1][i];                      // velocity has no noise
    }
}
And this is my implementation (the matrix initialization is based on the Wikipedia Kalman filter example):
int main(int argc, char** argv) {
    srand(time(NULL));
    float** inData = createMatrix(100,2); //2 rows, 100 columns
    float** noisedData = createMatrix(100,2);
    float** estData = createMatrix(100,2);
    generateData(inData, noisedData, 100, 2);
    float sampleRate=0.1; //10hz
    float** A=createMatrix(2,2);
    A[0][0]=1;
    A[0][1]=sampleRate;
    A[1][0]=0;
    A[1][1]=1;
    float** B=createMatrix(1,2);
    B[0][0]=pow(sampleRate,2)/2;
    B[1][0]=sampleRate;
    float** C=createMatrix(2,1);
    C[0][0]=1; //we measure only position
    C[0][1]=0;
    float u=1.0; //acceleration magnitude
    float accel_noise=0.2; //acceleration noise
    float measure_noise=1.5; //1.5 m standard deviation
    float R=pow(measure_noise,2); //measure covariance
    float** Q=createMatrix(2,2); //process covariance
    Q[0][0]=pow(accel_noise,2)*(pow(sampleRate,4)/4);
    Q[0][1]=pow(accel_noise,2)*(pow(sampleRate,3)/2);
    Q[1][0]=pow(accel_noise,2)*(pow(sampleRate,3)/2);
    Q[1][1]=pow(accel_noise,2)*pow(sampleRate,2);
    float** P=createMatrix(2,2); //covariance update
    P[0][0]=0;
    P[0][1]=0;
    P[1][0]=0;
    P[1][1]=0;
    float** P_est=createMatrix(2,2);
    P_est[0][0]=P[0][0];
    P_est[0][1]=P[0][1];
    P_est[1][0]=P[1][0];
    P_est[1][1]=P[1][1];
    float** K=createMatrix(1,2); //Kalman gain
    float** X_est=createMatrix(1,2); //our estimated state
    X_est[0][0]=3000; X_est[1][0]=10;
    // !! KALMAN ALGORITHM START !! //
    for(int i=0; i<100; i++)
    {
        float** temp;
        float** temp2;
        float** temp3;
        float** C_trans=matrixTranspose(C,2,1);
        temp=matrixMultiply(P_est,C_trans,2,2,1,2); //2x1
        temp2=matrixMultiply(C,P_est,2,1,2,2); //1x2
        temp3=matrixMultiply(temp2,C_trans,2,1,1,2); //1x1
        temp3[0][0]+=R;
        K[0][0]=temp[0][0]/temp3[0][0]; // 1. KALMAN GAIN
        K[1][0]=temp[1][0]/temp3[0][0];
        temp=matrixMultiply(C,X_est,2,1,1,2);
        float diff=noisedData[0][i]-temp[0][0]; //diff between meas and est
        X_est[0][0]=X_est[0][0]+(K[0][0]*diff); // 2. ESTIMATION CORRECTION
        X_est[1][0]=X_est[1][0]+(K[1][0]*diff);
        temp=createMatrix(2,2);
        temp[0][0]=1; temp[0][1]=0; temp[1][0]=0; temp[1][1]=1;
        temp2=matrixMultiply(K,C,1,2,2,1);
        temp3=matrixSub(temp,temp2,2,2,2,2);
        P=matrixMultiply(temp3,P_est,2,2,2,2); // 3. COVARIANCE UPDATE
        temp=matrixMultiply(A,X_est,2,2,1,2);
        X_est[0][0]=temp[0][0]+B[0][0]*u;
        X_est[1][0]=temp[1][0]+B[1][0]*u; // 4. PREDICT NEXT STATE
        temp=matrixMultiply(A,P,2,2,2,2);
        float** A_inv=getInverse(A,2);
        temp2=matrixMultiply(temp,A_inv,2,2,2,2);
        P_est=matrixAdd(temp2,Q,2,2,2,2); // 5. PREDICT NEXT COVARIANCE
        estData[0][i]=X_est[0][0]; //just saving here for later to write out
        estData[1][i]=X_est[1][0];
    }
    for(int i=0; i<100; i++) printf("%4.2f : %4.2f : %4.2f \n", inData[0][i], noisedData[0][i], estData[0][i]); // just writing out
    return (EXIT_SUCCESS);
}
It looks like you are assuming a rigid-body model for the problem. If that is the case, then for the problem you are solving I would not include the input u in the process update when you predict the next state. Maybe I am missing something, but the input u plays no role in generating the data.
Let me put it another way: setting u to +1 means your model assumes the body should move in the positive direction because there is an input pushing it that way, while the measurements are telling it to go the other way. So if you put a lot of weight on the measurements, it will drift in the negative direction, but if you put a lot of weight on the model, it will drift in the positive direction. Based on the data generated, I don't see a reason to set u to anything but zero.
Another thing: your sample period is 0.1 s (10 Hz), but when you generate the data you are assuming a full second per sample, since at every sample the position changes by the entire -10 m/s.
Here is a matlab/octave implementation.
l = 1000;
Ts = 0.1;
y = 3000;        % measurement to be fed to the KF
v = -10;         % meters per second
t = [y(1); v];   % truth, for checking whether it works
for i = 2:l
    y(i) = y(i-1) + v*Ts;
    t(:,i) = [y(i); v];    % copy to truth vector
    y(i) = y(i) + randn;   % noise it up
end

%%%%% Let the filtering begin!
% Define dynamics
A = [1, Ts; 0, 1];
B = [0; 0];
C = [1, 0];
% Steady-state Kalman gain computed for R = 0.1, Q = [0, 0; 0, 0.1]
K = [0.44166; 0.79889];
x_est_post = [3000; 0];
for i = 2:l
    x_est_pre = A*x_est_post(:,i-1);   % process update: our estimate in case no measurement comes in
    %%% OMG A MEASUREMENT!
    x_est_post(:,i) = x_est_pre + K*(-x_est_pre(1) + y(i));
end
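One way to remove the sample-time mismatch pointed out above, sketched here against the question's own generateData (only the loop body changes; sampleRate = 0.1 is the value the filter already uses), is to advance the position by velocity times the sample period instead of by the full per-second velocity:
/* Sketch: generate data at the same 0.1 s sample period the filter assumes. */
const float sampleRate = 0.1f;   /* seconds per sample, i.e. 10 Hz */
for (int i = 1; i < x; i++) {
    inData[0][i] = inData[0][i-1] + inData[1][i-1] * sampleRate;   /* position advances by v*dt */
    inData[1][i] = inData[1][i-1];                                 /* constant velocity */
    noisedData[0][i] = inData[0][i] + (rand() % 6 - 3);            /* +/- 3 m measurement noise */
    noisedData[1][i] = inData[1][i];
}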
You are doing a lot of weird array indexing.
float** A=createMatrix(2,2);
A[0][0]=1;
A[0][3]=sampleRate;
A[1][0]=0;
A[1][4]=1;
What is the expected outcome of indexing outside the bounds of the array? In C, that is undefined behavior.

OpenGL - Mapping x and y in glVertex2f(x, y) to integer screen coordinates

I would like to know how the vertices passed to glVertex2f(x, y) map to actual integer screen coordinates.
I intend to use a complex plane bounded by minR, minI and maxR, maxI (R and I being the real and imaginary parts), such that the plane gets mapped to 512 x 512 pixels on the screen. I have 512 steps between the min and max values.
The mapping between the vertices is unclear to me: I had to scale my planar image using glScalef(100, 100, 0) to get it to roughly fit the screen, but a large portion of it is still left blank.
Please note that I am using glBegin(GL_POINTS) to map the points of the plane to the screen.
The code looks like this:
for (X = 0; X < 512; X++)
    for (Y = 0; Y < 512; Y++)
        glVertex2f(Complexplane[X][Y].real, Complexplane[X][Y].imag);
P.S.:
Complexplane[0][0].real = -2, Complexplane[0][0].imag = -1.2
Complexplane[511][511].real = 1.0, Complexplane[511][511].imag = 1.8
I'm assuming you haven't set the projection or modelview matrices - they are set to the identity matrix by default, BTW.
With identity transforms, a point is visible only if -1 <= X <= 1 and -1 <= Y <= 1.
The glViewport function describes how this range is mapped to the window. It is initially set to (0, 0, window_width, window_height) when the GL context is created. The fact that glScalef(100, 100, 0) only fills part of the window suggests that you are applying another transform elsewhere.
The mapping depends on the transformation matrices currently set. Up to OpenGL-2, the fixed-function pipeline is:
v_eye = ModelviewMatrix * v
v_projected = ProjectionMatrix * v_eye
v_clipped = clip(v_projected)
v_NDC.xyzw = v_clipped.xyzw / v_clipped.w
The default matrices are identity, so the only operation applied in the default state is the clipping. v_NDC then undergoes the viewport transform:
p.xyz = (v_NDC.xyz + 1) * viewport.wh / 2 + viewport.xy
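For the concrete case in the question, a minimal sketch (assuming the legacy fixed-function pipeline described above; minR/maxR/minI/maxI stand for the corner values the asker listed, i.e. -2..1.0 and -1.2..1.8) is to load an orthographic projection that maps the complex-plane rectangle straight onto the 512 x 512 viewport, which removes the need for the glScalef(100, 100, 0) hack:
/* Sketch: map the rectangle [minR, maxR] x [minI, maxI] onto the whole window. */
glViewport(0, 0, 512, 512);                 /* NDC [-1,1]^2 -> 512 x 512 pixels */

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(minR, maxR, minI, maxI, -1.0, 1.0); /* complex plane -> normalized device coordinates */

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glBegin(GL_POINTS);
for (int X = 0; X < 512; X++)
    for (int Y = 0; Y < 512; Y++)
        glVertex2f(Complexplane[X][Y].real, Complexplane[X][Y].imag);
glEnd();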
