To move objects with a variable time step I just have to do:
ship.position += ship.velocity * deltaTime;
But when I try this with:
ship.velocity += ship.power * deltaTime;
I get different results with different time steps. How can I fix this?
EDIT:
I am modelling an object falling to the ground on one axis with a single fixed force (gravity) acting on it.
ship.position = ship.position + ship.velocity * deltaTime + 0.5 * ship.power * deltaTime * deltaTime; // deltaTime squared spelled out: '^' is XOR, not exponentiation, in C-style languages
ship.velocity += ship.power * deltaTime;
http://www.ugrad.math.ubc.ca/coursedoc/math101/notes/applications/velocity.html
The velocity update in your equations is correct; both position and velocity must be updated at every time step.
This all assumes that you have constant power (acceleration) over the deltaTime as pointed out by belisarius.
What you are doing (mathematically) is evaluating integrals. In the first case the linear approximation is exact, because the relationship is linear.
In the second case the true trajectory is (at least) a parabola, so your results are only approximate. You may get better results by using a smaller deltaTime, or by using the real integral equations, if available.
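For constant acceleration those real integral equations are available in closed form; this is exactly where the position update in the edit above comes from:

$$v(t+\Delta t) = v(t) + a\,\Delta t, \qquad x(t+\Delta t) = x(t) + v(t)\,\Delta t + \tfrac{1}{2}\,a\,\Delta t^{2}$$

with $a$ standing in for ship.power. These hold exactly for any step size, which is why they remove the dependence on deltaTime.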
Edit
Brian's answer is right as long as ship.power remains constant and you recalculate ship.velocity at each step. It is indeed the integral equation for uniformly accelerated motion.
This is an inherent problem when integrating numerically: there will always be some error. Lowering the delta gives more accurate results, but more computation is needed. If your power function is integrable, you could try using the exact integral instead.
Your simulation is numerically solving the equation of motion for a single mass point. The time discretisation you are using is called "Euler method", and it is possible to show that it does not preserve energy (as the exact solution does in some way). A much better yet simple way of solving equations of motion is the "leapfrog integration".
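For illustration, a minimal sketch of one leapfrog (kick-drift-kick) step in 1-D; the names are hypothetical, and with a position-dependent force you would recompute the acceleration between the drift and the second kick:

/* One leapfrog step: half velocity kick, full position drift, half kick.
   Assumes the acceleration a is constant over the step. */
void leapfrog_step(double *x, double *v, double a, double dt) {
    *v += 0.5 * a * dt;   /* half kick  */
    *x += *v * dt;        /* full drift */
    *v += 0.5 * a * dt;   /* half kick  */
}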
You can use Verlet integration to calculate the position and velocity of the object. The acceleration you can calculate from a = F/m, where m is mass and F is force. This is one of the easiest algorithms.
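A minimal sketch of the position (Störmer) Verlet update, assuming a fixed step and a stored previous position (the names are hypothetical):

/* x_next from the current and previous positions; a velocity estimate,
   if needed, is (x_next - x_prev) / (2*dt). */
double verlet_step(double x, double x_prev, double a, double dt) {
    return 2.0 * x - x_prev + a * dt * dt;
}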
In your code you use setInterval(moveBoxes,20) to update the boxes, and subsequently you use (new Date()).getTime()) to calculate deltaT. This is somewhat redundant, because you could have used the number 20 to calculate deltaT directly.
It is better to write the code so that you use exactly the same value for deltaT during each time step. (In other words, deltaT should not depend on the value of (new Date()).getTime().) This way your code becomes reproducible, and it is easier for you to write unit tests.
Let us look at a situation where the browser has less CPU time available for a short interval. In this situation you want to avoid long-term effects on the dynamics: once the shortage of CPU time is over, you want the browser to return to a state that is unaffected by it. You achieve this by using the same value of deltaT in each time step.
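One common pattern for this, sketched here in C for concreteness (the 20 ms figure comes from above; the names and structure are assumptions): accumulate the measured frame time, but always advance the physics by the same fixed DT.

/* Fixed-step updates decoupled from wall-clock time: runs are
   reproducible, and a slow frame just triggers more catch-up steps. */
#define DT 0.020                    /* seconds, matching the 20 ms timer */

void tick(double frame_seconds, double *accumulator) {
    *accumulator += frame_seconds;  /* real time since the last tick */
    while (*accumulator >= DT) {
        /* moveBoxes(DT);  -- always the same step size */
        *accumulator -= DT;
    }
}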
By the way, I think that the following code

if (box.x < 0) {
    box.x = 0;
    box.vx *= -1;
}

could be replaced with

if (box.x < 0) {
    box.x *= -1;
    box.vx *= -1;
}
Good luck with the project - and please include code samples in the first version of your question next time you ask :-)
Related
I want to create my own C function to interpolate some data points and find an accurate minimum (Overall project is audio frequency tuning, and I'm using the YIN algorithm which is working well). I am implementing this on a digital DSP K22F ARM chip, so I want to minimize floating point multiplies as much as possible to implement within the interrupt while the main function pushes to a display and #/b indicators.
I have implemented the algorithm and found the integer minimum; now I need to interpolate. Currently I am trying to interpolate parabolically using the 3 points I have, and I have found one approach that works within a small margin of error. Most interpolation functions, however, seem to be defined only between two points.
It seems as though the secant method on this page would work well for my application. However, I am at a loss for how to combine the 3 points with this 2 point method. Maybe I am going about this the wrong way?
Can someone help me implement the secant method of interpolation?
I found some example code that gets the exact same answer as my code.
Example code:
betterTau = minTau + (fc - fa) / (2 * (2 * fb - fc - fa));
My code:
newpoint = b + ((fa - fb)*(c-b)^2 - (fc - fb)*(b-a)^2) / ...
    (2*((fa-fb)*(c-b) + (fc-fb)*(b-a)))
where the x values of the points are a, b, and c, and the function values at those points are fa, fb, and fc, respectively.
Currently I am just simulating in MATLAB before I put it on the board, which is why the syntax isn't C. Mathematically, I am not seeing how these two equations are equivalent.
Can someone explain to me how these two functions are equivalent?
Thank you in advance.
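For what it's worth, the two forms do coincide when the three points are equally spaced. A sketch of the reduction, assuming $b-a = c-b = h$ (and $h=1$, $b=$ minTau in the example code):

$$x^{*} = b + \frac{(f_a-f_b)(c-b)^2 - (f_c-f_b)(b-a)^2}{2\left[(f_a-f_b)(c-b) + (f_c-f_b)(b-a)\right]} = b + \frac{h^{2}\,(f_a-f_c)}{2h\,(f_a+f_c-2f_b)} = b + \frac{h\,(f_c-f_a)}{2\,(2f_b-f_c-f_a)}$$

which is exactly betterTau = minTau + (fc - fa) / (2 * (2*fb - fc - fa)) when $h = 1$.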
I'm trying to create a Gauss Eliminator in C. For this, from time to time, I need to check whether a matrix is numerically singular: if a certain number (double) is very very very small.
My problem is, that if I try to do this:
if (0 == matrix->items[from]) {
    fprintf(stderr, "Matrix is still singular after attempting pivot. Exiting.\n");
}
This will never yield true. Because of the inaccuracy of double, it will never be exactly 0. However, when trying to run the program, cases like this fill up the numbers with inf or NaN, depending on whether multiplying or dividing with it and its combinations.
In order to filter these, I would need something like this:
#define EPSILON very_small
// rest of the code
if (fabs(matrix->items[from]) < EPSILON) {
    ...singular
}
What is the recommended value for this EPSILON? Is it the absolute accuracy of double, or maybe a bit larger value?
By the way, which would be better, defining it as a macro as above, or using it like:
const double EPSILON = ...;
Sorry if I'm not being clear enough, English is not my native language.
Thanks for your replies.
I need to check whether a matrix is numerically singular
Usually this is detected by preventing double overflow.
// Check if 1.0/determinant will overflow.
if (fabs(determinant) <= 1.0/(0.99*DBL_MAX)) {
    Handle_Singular_Case();
} else {
    one_over_det = 1.0/determinant;
}
Using DBL_EPSILON (about 2e-16) is usually the wrong solution. double math needs relative comparisons to ensure good calculations far away from 1.0 magnitude.
// Rarely the right thing to do.
#define EPSILON DBL_EPSILON
if(fabs(matrix->items[from]) < EPSILON){
Yet this is very context sensitive, as @Weather Vane pointed out.
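A relative test, by contrast, looks something like this sketch (the scale argument and the safety factor are problem-dependent assumptions):

#include <float.h>
#include <math.h>

/* Treat x as numerically zero only relative to a problem-dependent
   scale, e.g. the largest |entry| in the pivot column; 8.0 is an
   arbitrary safety margin. */
int is_negligible(double x, double scale) {
    return fabs(x) <= 8.0 * DBL_EPSILON * fabs(scale);
}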
Yet OP's real problem is certainly here: "when trying to run the program, cases like this fill up the numbers with inf or NaN, depending on whether multiplying or dividing with it and its combinations". Various techniques can be used to avoid this issue, such as performing the elimination with partial pivoting.
To address that issue, best to post code and sample data.
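For reference, a minimal sketch of the pivot-selection step of partial pivoting, assuming a dense row-major n×n array of doubles (the layout and names are assumptions, not OP's actual matrix->items structure):

#include <math.h>

/* Return the row r >= k whose column-k entry has the largest magnitude;
   swap rows k and r before eliminating column k. */
int pivot_row(const double *A, int n, int k) {
    int best = k;
    double best_mag = fabs(A[k * n + k]);
    for (int r = k + 1; r < n; r++) {
        double mag = fabs(A[r * n + k]);
        if (mag > best_mag) {
            best_mag = mag;
            best = r;
        }
    }
    return best;
}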
I want to implement the following pseudo-code in SIMULINK:
q^0 = q_init, i = 0
IF q^i != q_final
    q^(i+1) = q^i + alpha * F(q^i)
    i = i + 1
ELSE
    return <q^0, q^1, ..., q^i>
The main problem is that F(q^i) is a function of the current q and is calculated in every iteration. When trying to implement this in SIMULINK, I run into an algebraic loop that cannot be solved.
What is the proper way of solving the problem (in SIMULINK) ?
Thank you !
Miklos
The key is understanding what to partition inside the Simulink simulation and what should be its output.
You'd usually cache the outputs q^0 to q^i as outputs from the model, rather than trying to get the whole model to output that.
That means the Simulink model would just be dealing with calculating q^(i+1) from q^(i) (which you store with a unit-delay block).
Regarding the if condition: it is best not to run a simulation forever unless you know it will definitely converge to q_final. If q is not integer-valued, the == may never evaluate exactly true. In that case use (abs(q^i - q_final) > threshold) instead, and check that the loop will terminate.
Ending a simulation exactly at that point might be possible with a breakpoint, but a simpler option would be to allow the simulation to run past it and then trim your output (q^0 to q^i, then some extra values) in Matlab.
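As a plain-code reference for what the model computes, a rough sketch of the iteration with the tolerance-based stop, in C (F, the constants, and the iteration cap are placeholders):

#include <math.h>

double F(double q);   /* placeholder for the per-step update function */

/* q^(i+1) = q^i + alpha * F(q^i), stopped by tolerance or iteration cap */
double iterate(double q_init, double q_final,
               double alpha, double threshold, int max_iter) {
    double q = q_init;
    for (int i = 0; i < max_iter && fabs(q - q_final) > threshold; i++) {
        q += alpha * F(q);
    }
    return q;
}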
I'm trying to solve a differential equation for a pendulum's movement, given the pendulum's initial angle (x), gravitational acceleration (g), line length (l), and a time step (h). I've tried this using the Euler method and everything was fine. But now I am to use the Runge-Kutta method implemented in GSL. I've tried to implement it following the GSL manual, but I'm stuck on one problem: the pendulum doesn't want to stop. Say I start it at an initial angle of 1 rad; its peak tilt is always 1 rad, no matter how many swings it has already done. Here's the equation and the function I use to hand it to GSL:
x''(t) + g/l*sin(x(t)) = 0
transforming it:
x''(t) = -g/l*sin(x(t))
and decomposing:
y(t) = x'(t)
y'(t) = -g/l*sin(x(t))
Here's the code snippet; if that's not enough I can post the whole program (it's not too long), but maybe the problem is somewhere in here:
int func(double t, const double x[], double dxdt[], void *params) {
    const double *p = (const double *) params;  /* p[0] = l, p[1] = g */
    double l = p[0];
    double g = p[1];
    (void) t;                      /* unused: the system is autonomous */
    dxdt[0] = x[1];                /* x' = y */
    dxdt[1] = -g/l*sin(x[0]);      /* y' = -g/l * sin(x) */
    return GSL_SUCCESS;
}
The parameters g and l are passed correctly to the function, I've already checked that.
As Barton Chittenden noted in a comment, the pendulum should keep going in the absence of friction. This is expected.
As for why it slows and stops when you use the Euler method, that touches on a subtle and interesting subject. An ideal, friction-free physical pendulum has the property that the energy in the system is conserved. Different integration schemes preserve that property to different degrees. With some integration schemes the energy in the system will grow, and the pendulum will swing progressively higher. With others, energy is lost and the pendulum comes to a halt. The speed at which either of these happens depends partially on the order of the method; a more accurate method will often lose energy more slowly.
You can easily observe this by plotting the total energy in your system (potential + kinetic) for different integration schemes.
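For example, a small sketch of the conserved quantity for this pendulum, assuming unit mass (scale by m otherwise); plotting it over time makes each scheme's energy drift visible:

#include <math.h>

/* Total energy E = (1/2) m (l*x')^2 + m*g*l*(1 - cos x), with m = 1.
   For the exact solution E is constant; explicit Euler lets it drift. */
double pendulum_energy(double x, double y, double g, double l) {
    double kinetic   = 0.5 * l * l * y * y;     /* y = x' */
    double potential = g * l * (1.0 - cos(x));
    return kinetic + potential;
}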
Finally, there is a whole fascinating sub-field of integration methods which preserve certain conserved quantities of a system like this, called symplectic methods.
I have some code that runs fairly well, but I would like to make it run better. The major problem I have with it is that it needs to have a nested for loop. The outer one is for iterations (which must happen serially), and the inner one is for each point particle under consideration. I know there's not much I can do about the outer one, but I'm wondering if there is a way of optimizing something like:
void collide(particle particles[], box boxes[],
             double boxShiftX, double boxShiftY) {/*{{{*/
    int i;
    double nX;
    double nY;
    int boxnum;
    for (i = 0; i < PART_COUNT; i++) {
        boxnum = ((((int)(particles[i].sX + boxShiftX)) / BOX_SIZE) % BWIDTH +
                  BWIDTH * ((((int)(particles[i].sY + boxShiftY)) / BOX_SIZE) % BHEIGHT));
        // copied and pasted the macro, which is why it looks kinda odd
        particles[i].vX -= boxes[boxnum].mX;
        particles[i].vY -= boxes[boxnum].mY;
        if (boxes[boxnum].rotDir == 1) {
            nX = particles[i].vX*Wxx + particles[i].vY*Wxy;
            nY = particles[i].vX*Wyx + particles[i].vY*Wyy;
        } else { // to make it randomly pick a rot. direction
            nX =  particles[i].vX*Wxx - particles[i].vY*Wxy;
            nY = -particles[i].vX*Wyx + particles[i].vY*Wyy;
        }
        particles[i].vX = nX + boxes[boxnum].mX;
        particles[i].vY = nY + boxes[boxnum].mY;
    }
}/*}}}*/
I've looked at SIMD, though I can't find much about it, and I'm not entirely sure that the processing required to properly extract and pack the data would be worth the gain of doing half as many instructions, since apparently only two doubles can be used at a time.
I tried breaking it up into multiple threads with shm and pthread_barrier (to synchronize the different stages, of which the above code is one), but it just made it slower.
My current code does go pretty quickly; it's on the order of one second per 10M particle*iterations, and from what I can tell from gprof, 30% of my time is spent in that function alone (5000 calls; PART_COUNT=8192 particles took 1.8 seconds). I'm not worried about small, constant time things, it's just that 512K particles * 50K iterations * 1000 experiments took more than a week last time.
I guess my question is if there is any way of dealing with these long vectors that is more efficient than just looping through them. I feel like there should be, but I can't find it.
I'm not sure how much SIMD would benefit; the inner loop is pretty small and simple, so I'd guess (just by looking) that you're probably more memory-bound than anything else. With that in mind, I'd try rewriting the main part of the loop to not touch the particles array more than needed:
const double temp_vX = particles[i].vX - boxes[boxnum].mX;
const double temp_vY = particles[i].vY - boxes[boxnum].mY;
if(boxes[boxnum].rotDir == 1)
{
nX = temp_vX*Wxx+temp_vY*Wxy;
nY = temp_vX*Wyx+temp_vY*Wyy;
}
else
{
//to make it randomly pick a rot. direction
nX = temp_vX*Wxx-temp_vY*Wxy;
nY = -temp_vX*Wyx+temp_vY*Wyy;
}
particles[i].vX = nX;
particles[i].vY = nY;
This has the small potential side effect of not doing the extra addition at the end.
Another potential speedup would be to use __restrict on the particle array, so that the compiler can better optimize the writes to the velocities. Also, if Wxx etc. are global variables, they may have to get reloaded each time through the loop instead of possibly stored in registers; using __restrict would help with that too.
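Concretely, the changes might look like this sketch (C99 restrict on the signature, plus caching the globals in locals; whether either helps depends on the compiler):

/* Declaration only: promises the two arrays don't alias. */
void collide(particle * restrict particles, box * restrict boxes,
             double boxShiftX, double boxShiftY);

/* Inside the function, before the loop: */
const double wxx = Wxx, wxy = Wxy, wyx = Wyx, wyy = Wyy;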
Since you're accessing the particles in order, you can try prefetching (e.g. __builtin_prefetch on GCC) a few particles ahead to reduce cache misses. Prefetching on the boxes is a bit tougher since you're accessing them in an unpredictable order; you could try something like
int nextBoxnum = ((((int)(particles[i+1].sX+boxShiftX) /// etc...
// prefetch boxes[nextBoxnum]
One last one that I just noticed - if box::rotDir is always +/- 1.0, then you can eliminate the comparison and branch in the inner loop like this:
const double rot = boxes[boxnum].rotDir; // always +/- 1.0
nX = particles[i].vX*Wxx + rot*particles[i].vY*Wxy;
nY = rot*particles[i].vX*Wyx + particles[i].vY*Wyy;
Naturally, the usual caveats of profiling before and after apply. But I think all of these might help, and can be done regardless of whether or not you switch to SIMD.
Just for the record, there's also libSIMDx86!
http://simdx86.sourceforge.net/Modules.html
(On compiling you may also try: gcc -O3 -msse2 or similar).
((int)(particles[i].sX+boxShiftX))/BOX_SIZE
That's expensive if sX is an int (can't tell). Truncate boxShiftX/Y to an int before entering the loop.
Do you have sufficient profiling to tell you where the time is spent within that function?
For instance, are you sure it's not your divs and mods in the boxnum calculation where the time is being spent? Sometimes compilers fail to spot possible shift/AND alternatives, even where a human (or at least, one who knew BOX_SIZE and BWIDTH/BHEIGHT, which I don't) might be able to.
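For instance, if BOX_SIZE, BWIDTH and BHEIGHT all happen to be powers of two (an assumption; I don't know their values), the div/mod pair reduces to shifts and masks:

/* Assumes BOX_SIZE == 1 << BOX_SHIFT and that BWIDTH and BHEIGHT are
   powers of two, so x % BWIDTH == x & (BWIDTH - 1) for x >= 0. */
#define BOX_SHIFT 4   /* hypothetical: BOX_SIZE == 16 */

boxnum = ((((int)(particles[i].sX + boxShiftX)) >> BOX_SHIFT) & (BWIDTH - 1))
       + BWIDTH * ((((int)(particles[i].sY + boxShiftY)) >> BOX_SHIFT) & (BHEIGHT - 1));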
It would be a pity to spend lots of time on SIMDifying the wrong bit of the code...
The other thing which might be worth looking into is if the work can be coerced into something which could work with a library like IPP, which will make well-informed decisions about how best to use the processor.
Your algorithm has too many memory, integer and branch instructions to have enough independent flops to profit from SIMD. The pipeline will be constantly stalled.
Finding a more effective way to randomize would be top of the list. Then, try to work either in float or in int, but not both. Recast conditionals as arithmetic, or at least as select operations. Only then does SIMD become a realistic proposition.