I need to create an accurate delay (around 100 µs) inside a thread function. I tried using the nanosleep function but it was not accurate enough. I read some posts about how to read the hardware 1 MHz timer, so in my function, in order to create a 100 µs delay, I tried something like this:
prev = *timer;
do {
    t = *timer;
} while ((t - prev) < 100);
However, the program seems to stay inside the loop forever. But if I insert a small nanosleep inside the loop, it works (though losing precision):
sleeper.tv_sec = 0;
sleeper.tv_nsec = (long) 1;
prev = *timer;
do {
    nanosleep(&sleeper, &dummy);
    t = *timer;
} while ((t - prev) < 500);
I tried the first version in a standalone program and it works, but in my main program, where this code runs inside a thread, it does not.
Does anyone know why the first version (without the small nanosleep) does not work?
I'm sorry to say, but Raspberry Pi's OS is not a real-time OS. In other words, you won't get consistent 100 µs precision in a user-space program, due to inherent OS scheduling limitations. If you need that kind of precision, you should use an embedded controller like an Arduino.
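If a best-effort busy-wait in user space is still acceptable (it can be preempted at any time, so the 100 µs is a lower bound, not a guarantee), a sketch using CLOCK_MONOTONIC:

#include <time.h>

/* Spin until at least `us` microseconds have elapsed. Best effort only:
   the scheduler can still preempt this thread mid-wait. */
void busy_wait_us(long us)
{
    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);
    do {
        clock_gettime(CLOCK_MONOTONIC, &now);
    } while ((now.tv_sec - start.tv_sec) * 1000000L
             + (now.tv_nsec - start.tv_nsec) / 1000L < us);
}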
I'm making a Lunar Lander version (Atari, 1979) in C. I need to implement a timer in my game and then print it on the screen.
I'm using SDL_RenderDrawLine because I have a vector that represents my characters. I need my code to draw a string of characters, and that works. I use the sprintf function to transform a number into a string of characters so I can print it on the screen:
char aux_str[MAX_VALUES];
sprintf(aux_str, "%d", *value);
for (j = 0; j < strlen(aux_str); j++) {
    *tam_caracter_numero = letra_a_longitud(aux_str[j]);
    *ptr_valor = letra_a_vector(aux_str[j]);
    for (i = 0; i < *tam_caracter_numero - 1; i++) {
        SDL_RenderDrawLine(
            renderer,
            (*ptr_valor)[i][0] * escalado + pos_x,
            -(*ptr_valor)[i][1] * escalado + pos_y,
            (*ptr_valor)[i + 1][0] * escalado + pos_x,
            -(*ptr_valor)[i + 1][1] * escalado + pos_y
        );
    }
}
This works properly, but I need my timer to start at 0000, changing to 0001, 0002, .... But when I transform my number into a string using sprintf, the result is only "1", and it doesn't print the 0s. Is there some function or some way to make this possible, so that it begins at 0000?
To print leading zeros, just use the format "%04d" instead of plain "%d" in your sprintf statement.
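For example, a standalone illustration:

#include <stdio.h>

int main(void)
{
    char aux_str[8];
    for (int value = 0; value <= 2; value++) {
        sprintf(aux_str, "%04d", value); /* pads to four digits with zeros */
        puts(aux_str);                   /* prints 0000, 0001, 0002 */
    }
    return 0;
}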
As for timing the whole thing, I'd recommend going through Lazy Foo's SDL tutorials about timers.
To get the time, use SDL_GetTicks(). This will get the time in milliseconds since SDL was initialized. See the documentation for more details.
To use it as a timer, you have to get the delta time since the last call to SDL_GetTicks and update it as such:
unsigned time = SDL_GetTicks();
while (game_is_running) { // Your game loop or whatever thread
    // keep track of time
    unsigned now = SDL_GetTicks();
    unsigned delta_time = now - time;
    // Either delay the next frame to get a stable FPS and/or use
    // delta_time in physics/collision calculations
    // Update the time
    time = now;
}
SDL_GetTicks is relatively expensive, so I would not recommend calling it more than once per loop.
Off-topic: I also wouldn't recommend allocating a string repeatedly in a loop. Instead, allocate it once and overwrite it every iteration.
Edit: in my "off-topic" comment I am referring to char aux_str[MAX_VALUES];. Even though it's one instruction, it's still unnecessary if it's inside OP's game loop.
For an assignment I have to use a video driver and system timer handler to display the current running time of the Linux system in the corner of the screen.
However, I have not found anywhere that points me into the direction of obtaining the system time from the kernel when my program runs. I am guessing it is in kernel memory at some address and I can just do something like:
hour = get_word(MEM_LOCATION_OF_HOUR);
sec = get_word(MEM_LOCATION_OF_SEC);
etc.
But I cannot find out if this is possible. My guess is that we are not allowed to use library calls like clock(), but if that is the only possible way then maybe we are.
Thanks
Can't use library calls? That's just craziness. Anyway:
getnstimeofday(struct timespec *ts); is one of many methods from here
In kernel, ktime can be used.
A simple example (calculating a time difference) for your reference:
#include <linux/ktime.h>

int fun()
{
    ktime_t entry_stamp, now;
    s64 delta;

    /* Get the current time. */
    entry_stamp = ktime_get();
    /* Do your stuff... */
    now = ktime_get();
    delta = ktime_to_ns(ktime_sub(now, entry_stamp));
    printk(KERN_INFO "Time taken: %lld ns to execute\n", (long long)delta);
    return 0;
}
I have found that the Real-time clock holds the correct values on boot. The CMOS contains all the needed info.
Here is a link to what I found: http://wiki.osdev.org/CMOS#The_Real-Time_Clock
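For illustration, a minimal sketch of reading the RTC seconds register through the CMOS index/data ports described on that page. This assumes x86 port I/O privileges (kernel code, or ioperm() as root on Linux) and that the RTC is in BCD mode; real code should also check that no update is in progress:

#include <sys/io.h>   /* outb()/inb(); Linux/x86 specific */

static unsigned char cmos_read(unsigned char reg)
{
    outb(reg, 0x70);                  /* select the CMOS register via port 0x70 */
    return inb(0x71);                 /* read its value from port 0x71 */
}

unsigned char rtc_seconds(void)
{
    unsigned char s = cmos_read(0x00);                  /* register 0x00 = seconds */
    return (unsigned char)((s & 0x0F) + (s >> 4) * 10); /* BCD -> binary */
}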
I am trying to use gettimeofday on an embedded ARM device, however it seems as though I am unable to use it:
gnychis#ubuntu:~/Documents/coexisyst/econotag_firmware$ make
Building for board: redbee-econotag
CC obj_redbee-econotag/econotag_coexisyst_firmware.o
LINK (romvars) econotag_coexisyst_firmware_redbee-econotag.elf
/home/gnychis/Documents/CodeSourcery/Sourcery_G++_Lite/bin/../lib/gcc/arm-none-eabi/4.3.2/../../../../arm-none-eabi/lib/libc.a(lib_a-gettimeofdayr.o): In function `_gettimeofday_r':
gettimeofdayr.c:(.text+0x1c): undefined reference to `_gettimeofday'
/home/gnychis/Documents/CodeSourcery/Sourcery_G++_Lite/bin/../lib/gcc/arm-none-eabi/4.3.2/../../../../arm-none-eabi/lib/libc.a(lib_a-sbrkr.o): In function `_sbrk_r':
sbrkr.c:(.text+0x18): undefined reference to `_sbrk'
collect2: ld returned 1 exit status
make[1]: *** [econotag_coexisyst_firmware_redbee-econotag.elf] Error 1
make: *** [mc1322x-default] Error 2
I am assuming I cannot use gettimeofday()? Does anyone have any suggestions for being able to tell elapsed time (e.g., 100 ms)?
What you need to do is create your own _gettimeofday() function to get it to link properly. This function could use the appropriate code to get the time for your processor, assuming you have a free-running system timer available.
#include <stdint.h>
#include <sys/time.h>

int _gettimeofday( struct timeval *tv, void *tzvp )
{
    uint64_t t = __your_system_time_function_here__();  // get uptime in nanoseconds
    tv->tv_sec  = t / 1000000000;                       // convert to seconds
    tv->tv_usec = ( t % 1000000000 ) / 1000;            // get remaining microseconds
    return 0;                                           // return non-zero for error
}  // end _gettimeofday()
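With that stub linked in, elapsed time (e.g. the 100 ms from the question) can be measured the usual way; a sketch:

#include <stddef.h>
#include <sys/time.h>

/* Elapsed time between two timevals, in milliseconds. */
static long elapsed_ms(const struct timeval *a, const struct timeval *b)
{
    return (b->tv_sec - a->tv_sec) * 1000L + (b->tv_usec - a->tv_usec) / 1000L;
}

void wait_100ms(void)
{
    struct timeval t0, now;
    gettimeofday(&t0, NULL);
    do {
        gettimeofday(&now, NULL);
    } while (elapsed_ms(&t0, &now) < 100);
}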
What I usually do is have a timer running at 1 kHz, so it generates an interrupt every millisecond. In the interrupt handler I increment a global variable by one, say ms_ticks, then do something like:
volatile unsigned int ms_ticks = 0;

void timer_isr() { // every ms
    ms_ticks++;
}

void delay(unsigned int ms) {
    unsigned int start = ms_ticks;
    while (ms_ticks - start < ms) // unsigned subtraction stays correct across wraparound
        ;
}
It is also possible to use this as a timestamp, so let's say I want to do something every 500ms:
last_action = ms_ticks;
while (1) { // app super loop
    if (ms_ticks - last_action >= 500) {
        last_action = ms_ticks;
        // action code here
    }
    // rest of the code
}
Another alternative, since ARMs are 32-bit and your timer is probably 32-bit too, is to skip the 1 kHz interrupt entirely: leave the timer free-running and simply derive ms_ticks from the counter itself.
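A sketch of that variant; TIMER_COUNT and TICKS_PER_MS are hypothetical stand-ins for your chip's counter register and tick rate:

#include <stdint.h>

#define TICKS_PER_MS 1000u                              /* hypothetical: a 1 MHz free-running timer */
#define TIMER_COUNT (*(volatile uint32_t *)0x40001000u) /* hypothetical register address */

void delay_ms(uint32_t ms)
{
    uint32_t start = TIMER_COUNT;
    /* Subtracting raw counts in unsigned arithmetic stays correct
       across the 32-bit wraparound. */
    while ((TIMER_COUNT - start) / TICKS_PER_MS < ms)
        ;
}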
Use one of the timers in the chip...
It looks like you are using the Econotag which is based on the MC13224v from Freescale.
The MACA_CLK register provides a very good timebase (assuming the radio is running). You can also use the RTC with CRM->RTC_COUNT. The RTC may or may not be very good depending on whether you have an external 32 kHz crystal or not (the Econotag does NOT).
e.g. with MACA_CLK:
uint32_t t;
t = *MACA_CLK;
while (*MACA_CLK - t < SOMETIME)
    ; // busy-wait until SOMETIME ticks have elapsed
See also the timer examples in libmc1322x:
http://git.devl.org/?p=malvira/libmc1322x.git;a=blob;f=tests/tmr.c
Alternate methods are to use etimers or rtimers in Contiki (which has good support for the Econotag). (see http://www.sics.se/contiki/wiki/index.php/Timers )
I've done this before in one of my applications. Just use:
while(1)
{
...
}
Okay, so I've got some C code to perform a mathematical operation which could, pretty much, take any length of time (depending on the operands supplied to it, of course). I was wondering if there is a way to register some kind of method which will be called every n seconds which can analyse the state of the operation, i.e. what iteration it is currently at, possibly using a hardware timer interrupt or something?
The reason I ask this is because I know the common way to implement this is to keep track of the current iteration in a variable, say, an integer called progress, and have an if statement like this in the code:
if ((progress % 10000) == 0)
    printf("Currently at iteration %d\n", progress);
but I believe that a mod operation takes a relatively long time to execute, so the idea of having it inside a loop which will be run many, many times scares me, from an optimisation point of view.
So I get the feeling that having an external way of signalling a progress print is nice and efficient. Are there any great ways to perform this, or is the simple 'mod check' the best (in terms of optimising)?
I'd go with the mod check, but maybe with subtractions instead :-)
icount = 0;
progress = 10000;
/* ... */
if (--progress == 0) {
    progress = 10000;
    /* "%d0000" prints icount followed by a literal "0000",
       i.e. the iteration count in units of 10000 */
    printf("Currently at iteration %d0000\n", ++icount);
}
/* ... */
While mod operations are usually slow, the compiler should be able to optimise and predict this really well, mis-predicting only once every 10,000 ifs and burning one mod operation plus ~20 cycles (for the mis-prediction) on it, which is fine. So you are trying to optimise away one mod operation every 10,000 iterations. Of course this assumes you are running it on a modern, typical CPU and not some embedded system with unknown specs. This should even be faster than having a counter variable.
Suggestion: Test it with and without the timing code, and figure out a complex solution if there is really a problem.
Premature optimisation is the root of all evil. -Knuth
mod is about the same speed as division; on most CPUs these days that means about 5-10 cycles... in other words hardly anything. It's slower than multiply/add/subtract, but not enough to really worry about.
However, you are right to want to avoid spinning in a loop if you're doing work in another thread or something like that. If you're on a unixish system there's timer_create(), or on Linux the much easier to use timerfd_create().
But for single-threaded code, just putting that if in is enough.
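For reference, a minimal timerfd_create() sketch (Linux-specific; error checking omitted, and it assumes the long computation runs in another thread while this one just reports):

#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/timerfd.h>

int main(void)
{
    int fd = timerfd_create(CLOCK_MONOTONIC, 0);
    struct itimerspec its = {
        .it_value    = { .tv_sec = 5 }, /* first expiry after 5 s */
        .it_interval = { .tv_sec = 5 }, /* then every 5 s */
    };
    timerfd_settime(fd, 0, &its, NULL);

    for (;;) {
        uint64_t expirations;
        read(fd, &expirations, sizeof expirations); /* blocks until the timer fires */
        printf("still working...\n");
    }
}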
Use setitimer to raise SIGALRM signals at regular intervals.
#include <signal.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

struct itimerval interval;

void handler( int x ) {
    write( STDOUT_FILENO, ".", 1 ); /* Defined in POSIX, not in C */
}

int main() {
    signal( SIGALRM, &handler );

    interval.it_value.tv_sec = 5;    /* display after 5 seconds */
    interval.it_interval.tv_sec = 5; /* then display every 5 seconds */
    setitimer( ITIMER_REAL, &interval, NULL );

    /* do computations */

    interval.it_value.tv_sec = 0;    /* zero it_value to disarm the timer... */
    interval.it_interval.tv_sec = 0; /* ...and don't display progress any more */
    setitimer( ITIMER_REAL, &interval, NULL );
    printf( "\n" ); /* done with the dots! */
}
Note that only a smattering of functions are OK to call inside a handler; they are listed partway down this page. If you want to communicate anything for a fancier printout, do it through a volatile sig_atomic_t variable.
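A small sketch of that approach: the handler only sets a flag, and the computation loop does the printing where it is safe:

#include <signal.h>
#include <stdio.h>
#include <sys/time.h>

static volatile sig_atomic_t report_requested;

static void handler(int sig)
{
    (void)sig;
    report_requested = 1; /* setting the flag is all the handler does */
}

int main(void)
{
    struct itimerval interval = {
        .it_value    = { .tv_sec = 5 },
        .it_interval = { .tv_sec = 5 },
    };
    signal(SIGALRM, handler);
    setitimer(ITIMER_REAL, &interval, NULL);

    for (long progress = 0; ; progress++) {
        if (report_requested) {
            report_requested = 0;
            printf("Currently at iteration %ld\n", progress); /* safe: not in the handler */
        }
        /* do one unit of computation here */
    }
}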
You could have a global variable for the iterations, which you could monitor from an external thread.
while (1) {
    printf("iteration %d\n", iteration);
    sleep(1);
}
You may need to watch out for data races though.
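A sketch of that idea, using a C11 atomic for the counter so the monitor thread's reads are race-free (assumes POSIX threads; compile with -pthread):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static atomic_long iteration;

static void *monitor(void *arg)
{
    (void)arg;
    for (;;) {
        printf("Currently at iteration %ld\n", atomic_load(&iteration));
        sleep(1); /* report once per second */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, monitor, NULL);
    for (;;) {
        /* ... computation ... */
        atomic_fetch_add(&iteration, 1);
    }
}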
I currently have something close to the following implementation of an FPS-independent game loop for physics-based games. It works very well on just about every computer I have tested it on, keeping the game speed consistent when the frame rate drops. However, I am going to be porting to embedded devices which will likely struggle more with video, and I am wondering if it will still cut the mustard.
Edits:
For this question, assume that msec() returns the time in milliseconds for which the program has run. The implementation of msec() is different on different platforms. This loop is also run in different ways on different platforms.
#define MSECS_PER_STEP 20
int stepCount, stepSize; // these are not globals in the real source
void loop() {
int i,j;
int iterations =0;
static int accumulator; // the accumulator holds extra msecs
static int lastMsec;
int deltatime = msec() - lastMsec;
lastMsec = msec();
// deltatime should be the time since the last call to loop
if (deltatime != 0) {
// iterations determines the number of steps which are needed
iterations = deltatime/MSECS_PER_STEP;
// save any left over millisecs in the accumulator
accumulator += deltatime%MSECS_PER_STEP;
}
// when the accumulator has gained enough msecs for a step...
while (accumulator >= MSECS_PER_STEP) {
iterations++;
accumulator -= MSECS_PER_STEP;
}
handleInput(); // gathers user input from an event queue
for (j=0; j<iterations; j++) {
// here step count is a way of taking a more granular step
// without effecting the overall speed of the simulation (step size)
for (i=0; i<stepCount; i++) {
doStep(stepSize/(float) stepCount); // forwards the sim
}
}
}
I just have a few comments. The first is that you don't have enough comments. There are places where it's not clear what you are trying to do, so it is difficult to say if there is a better way to do it, but I'll point those out as I come to them. First, though:
#define MSECS_PER_STEP 20
int stepCount, stepSize; // these are not globals in the real source
void loop() {
int i,j;
int iterations =0;
static int accumulator; // the accumulator holds extra msecs
static int lastMsec;
These are not initialized to anything. They probably turn up as 0, but you should have initialized them. Also, rather than declaring them static, you might want to consider putting them in a structure that you pass into loop by reference.
int deltatime = msec() - lastMsec;
Since lastMsec wasn't initialized (and is probably 0), this probably starts out as a big delta.
lastMsec = msec();
This line, just like the last line, calls msec. This is probably meant as "the current time", and the calls are close enough that the returned value is probably the same for both, which is probably also what you expected, but still, you call the function twice. You should change these lines to int now = msec(); int deltatime = now - lastMsec; lastMsec = now; to avoid calling the function twice. Functions that get the current time often have much higher overhead than you think.
if (deltatime != 0) {
iterations = deltatime/MSECS_PER_STEP;
accumulator += deltatime%MSECS_PER_STEP;
}
You should have a comment here that says what this does, as well as a comment above
that says what the variables were meant to mean.
while (accumulator >= MSECS_PER_STEP) {
iterations++;
accumulator -= MSECS_PER_STEP;
}
This loop needs a comment. It also needs to not be there. It appears that it could be replaced with iterations += accumulator/MSECS_PER_STEP; accumulator %= MSECS_PER_STEP;. The division and modulus will run in a shorter and more consistent time than the loop on any machine that has hardware division (which many do).
handleInput(); // gathers user input from an event queue
for (j=0; j<iterations; j++) {
for (i=0; i<stepCount; i++) {
doStep(stepSize/(float) stepCount); // forwards the sim
}
}
Doing steps in a loop independent of input will make the game unresponsive if it does execute slowly and get behind. It appears, at least, that if the game gets behind, all of the input will stack up and get executed together, and all of the in-game time will pass in one chunk. This is a less than graceful way to fail.
Additionally, I can guess what the j loop (outer loop) means, but the inner loop I am less clear on. Also, the value passed to the doStep function -- what does that mean?
}
This is the last curly brace. I think that it looks lonely.
I don't know what calls your loop function, which may be out of your control, and that may dictate what this function does and how it looks, but if not, I hope that you will reconsider the structure. I believe that a better way to do it would be to have a function that is called repeatedly, but with only one event at a time (issued regularly at a relatively short period). These events can be either user input events or timer events. User input events just set things up to react upon the next timer event. (When you don't have any events to process, you sleep.)
You should always assume that each timer event is processed at the same period, even though there may be some drift if the processing gets behind. The main oddity you may notice is that if the game gets behind on processing timer events and then catches up again, time within the game may appear to slow down (below real time), then speed up (above real time), and then settle back down (to real time).
Ways to deal with this include only allowing one timer event to be in the event queue at a time, which would result in time appearing to slow down (below real time) and then speed back up (to real time), with no super-speed interval.
Another way to do this, which is functionally similar to what you have, would be to make the last step of processing each timer event be to queue up the next timer event (note that no one else should send timer events {except for the first one} if this is the way you choose to implement the game). This would mean doing away with regular time intervals between timer events, and it would also restrict the program's ability to sleep, since at the very least, every time the event queue is inspected there will be a timer event to process.
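To make that last idea concrete, a toy sketch of the "each timer event queues its successor" pattern; the queue and handlers here are hypothetical placeholders, and a real implementation would block on the platform's event API rather than looping a fixed number of times:

#include <stdio.h>

enum event_type { EV_TIMER, EV_INPUT };
struct event { enum event_type type; };

#define QUEUE_MAX 64
static struct event queue[QUEUE_MAX];
static int q_head, q_tail;

static void post_event(enum event_type t)
{
    queue[q_tail].type = t;
    q_tail = (q_tail + 1) % QUEUE_MAX;
}

static int next_event(struct event *ev)
{
    if (q_head == q_tail)
        return 0; /* queue empty */
    *ev = queue[q_head];
    q_head = (q_head + 1) % QUEUE_MAX;
    return 1;
}

static void handle_event(const struct event *ev)
{
    if (ev->type == EV_INPUT) {
        /* record the input; it takes effect on the next timer event */
    } else {
        /* doStep(...): advance the simulation by one fixed step */
        printf("step\n");
        post_event(EV_TIMER); /* the handler's last act: queue the next tick */
    }
}

int main(void)
{
    struct event ev;
    post_event(EV_TIMER); /* the one externally-posted timer event */
    for (int i = 0; i < 5 && next_event(&ev); i++)
        handle_event(&ev);
    return 0;
}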