Are these msec <-> timeval functions correct? (C)

I have a bug in this program, and I keep coming back to these two functions, but they look right to me. Anything wrong here?
long visual_time_get_msec(VisTime *time_)
{
    visual_log_return_val_if_fail(time_ != NULL, 0);

    return time_->tv_sec * 1000 + time_->tv_usec / 1000;
}

int visual_time_set_from_msec(VisTime *time_, long msec)
{
    visual_log_return_val_if_fail(time_ != NULL, -VISUAL_ERROR_TIME_NULL);

    long sec = msec / 1000;
    long usec = 0;

    visual_time_set(time_, sec, usec);

    return VISUAL_OK;
}

Your first function is rounding down, so that 1.000999 seconds is rounded to 1000ms, rather than 1001ms. To fix that (make it round to nearest millisecond), you could do this:
long visual_time_get_msec(VisTime *time_)
{
    visual_log_return_val_if_fail(time_ != NULL, 0);

    return time_->tv_sec * 1000 + (time_->tv_usec + 500) / 1000;
}
Fuzz has already pointed out the truncation in your second example - the only thing I would add is that you can simplify it a little using the modulo operator:
    long sec  = msec / 1000;
    long usec = (msec % 1000) * 1000;
(The above all assume that you're not dealing with negative timevals - if you are, it gets more complicated).

visual_time_set_from_msec doesn't look right...
If someone calls visual_time_set_from_msec(time, 999), your struct will be set to zero rather than to 999,000 us.
What you should do is:
    // Calculate the number of whole seconds
    long sec = msec / 1000;
    // Calculate the remaining microseconds once the whole seconds are accounted for
    long usec = (msec - 1000 * sec) * 1000;
It really depends on your inputs, but that's my 2 cents :-)
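Putting the two fixes together, a sketch of the corrected setter (assuming visual_time_set(time_, sec, usec) simply stores the two fields, and that msec is non-negative):

int visual_time_set_from_msec(VisTime *time_, long msec)
{
    visual_log_return_val_if_fail(time_ != NULL, -VISUAL_ERROR_TIME_NULL);

    // Split the millisecond count into whole seconds plus the leftover
    // microseconds, e.g. 999 ms becomes sec = 0, usec = 999000.
    long sec  = msec / 1000;
    long usec = (msec % 1000) * 1000;

    visual_time_set(time_, sec, usec);

    return VISUAL_OK;
}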

Related

C - Timing my program in seconds with microsecond precision on Linux

I'm trying to find out the user time (on a Linux machine) of a C program I've written. Currently I'm calling gettimeofday() once at the beginning of my code and once at the end. I'm using the timeval struct and difftime(stop.tv_sec, start.tv_sec) to get the number of seconds elapsed. This returns whole seconds, like "1.000000". However, my project requires that my program be timed in seconds with microsecond precision, for example "1.234567". How can I find this value? I know gettimeofday() also records microseconds in .tv_usec, but I'm not sure how to use this value and format it correctly.
Use the timersub function. It takes three arguments: the first is the later (end) time, the second is the earlier (start) time, and the third receives the result (the difference). All three arguments are pointers to struct timeval.
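For example, a minimal sketch of timing a section of code this way (timersub is a BSD/GNU macro declared in <sys/time.h>):

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval start, stop, elapsed;

    gettimeofday(&start, NULL);
    /* ... the code being timed ... */
    gettimeofday(&stop, NULL);

    timersub(&stop, &start, &elapsed);   // elapsed = stop - start

    printf("%ld.%06ld seconds\n", (long)elapsed.tv_sec, (long)elapsed.tv_usec);
    return 0;
}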
Try putting the initial time into storage, then subtract that from the final time when you are done:
    storedValue = timer.tv_sec + 0.000001 * timer.tv_usec;
    elapsed = timer.tv_sec + 0.000001 * timer.tv_usec - storedValue;
If you want microseconds:
    long long usec;

    usec = tv.tv_sec;
    usec *= 1000000;
    usec += tv.tv_usec;
If you want fractional seconds [floating point]:
    double asec;

    asec = tv.tv_usec;
    asec /= 1e6;
    asec += tv.tv_sec;
Here are some complete functions:

#include <sys/time.h>

// tvsec -- get current time in absolute microseconds
long long
tvsec(void)
{
    struct timeval tv;
    long long usec;

    gettimeofday(&tv, NULL);

    usec = tv.tv_sec;
    usec *= 1000000;
    usec += tv.tv_usec;

    return usec;
}

// tvsecf -- get current time in fractional seconds
double
tvsecf(void)
{
    struct timeval tv;
    double asec;

    gettimeofday(&tv, NULL);

    asec = tv.tv_usec;
    asec /= 1e6;
    asec += tv.tv_sec;

    return asec;
}
Note: If you want even higher accuracy (e.g. nanoseconds), you can use clock_gettime(CLOCK_REALTIME, ...) and apply similar conversions.
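For instance, a nanosecond analogue of tvsec, as a sketch along the same lines (clock_gettime needs <time.h>, and -lrt on older glibc versions):

#include <time.h>

// tvnsec -- get current time in absolute nanoseconds
long long
tvnsec(void)
{
    struct timespec ts;
    long long nsec;

    clock_gettime(CLOCK_REALTIME, &ts);

    nsec = ts.tv_sec;
    nsec *= 1000000000;
    nsec += ts.tv_nsec;

    return nsec;
}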

PIC Microcontroller using C

I am trying to get this code to print the time in MM:SS:FFFFFF format, where MM is minutes, SS seconds and FFFFFF microseconds, but my minutes are not working properly. Instead of getting something like 01:05:873098 I get 00:65_873098. Thanks for any tips.
#include <prototype.h>

int16 overflow_count;

#int_timer1
void timer1_isr() {
    overflow_count++;
}

void main() {
    int32 time;

    setup_timer_1(T1_INTERNAL | T1_DIV_BY_1);
    enable_interrupts(int_timer1);

    while (TRUE) {
        enable_interrupts(global);
        while (input(PUSH_BUTTON));     // Wait for press
        set_timer1(0);
        overflow_count = 0;
        while (!input(PUSH_BUTTON));    // Wait for release
        disable_interrupts(global);

        time = get_timer1();
        time = time + ((int32)overflow_count << 16);
        time -= 15;                     // subtract overhead

        printf("Time is %02lu:%02lu.%06lu minutes.\r\n",
               time / 1000000000, (time / 6000000), (time / 5) % 1000000);
    }
}
I would suggest that you introduce some intermediate variables like "ticks", "microsecs", "secs", and "mins". Do the calculations step by step, from smallest unit to largest, remembering to subtract off each part before converting the next larger part. Make sure the units work out at each step of the conversion (e.g. don't add or subtract values that have different units). Think about how you'd do it with pencil and paper: probably not the way you've written it! When you break it down like that, it will be easier to get the logic correct.
You aren't ever subtracting out the whole parts.
If time is the time in milliseconds, something like this (scale the constants by 1000 if it's in microseconds):
    time = time % (24 * 60 * 60 * 1000);    // mod out any extra days
    int hours = time / (60 * 60 * 1000);
    time = time % (60 * 60 * 1000);         // or time -= hours * (60 * 60 * 1000)
    int min = time / (60 * 1000);
    time = time % (60 * 1000);              // or time -= min * (60 * 1000)
    ...
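Applied to the question's code, a sketch of the corrected conversion. The original (time / 5) % 1000000 suggests the timer ticks 5 times per microsecond; that ratio is inferred from the posted code, not from the datasheet:

    int32 ticks = get_timer1() + ((int32)overflow_count << 16);
    ticks -= 15;                            // subtract overhead, as in the original

    int32 us   = ticks / 5;                 // 5 ticks per microsecond (assumed)
    int32 mins = us / 60000000;             // whole minutes
    int32 secs = (us / 1000000) % 60;       // whole seconds left over
    int32 frac = us % 1000000;              // microseconds left over

    printf("Time is %02lu:%02lu.%06lu minutes.\r\n", mins, secs, frac);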

converting integers to minutes

I am new to C programming, but experienced in Java. I am creating a simple console application to calculate time between two chosen values. I am storing the chosen values in an int array like this:
    static int timeVals[] = {748,800,815,830,845,914,929,942,953,1001,1010,1026,1034,1042,1048};
I am calling a method diff to calculate the time between two values like this:
int diff(int start, int slut) {
    /* slut means end in Danish. not using rude words, but I am Danish */
    int minutes = 0;
    int time = 0;
    double hest;

    hest = (100 / 60);
    printf("hest = %f", hest);

    if (timeVals[start] > timeVals[slut])
    {
        minutes = timeVals[start] - timeVals[slut];
        /* printing hest to see what the value is */
        printf("t = %f", hest);
        time = minutes * (100 / 60);
        printf("minut diff: %d\n", time);
    }
    else
    {
        minutes = timeVals[slut] - timeVals[start];
        time = time + (minutes * (100 / 60));
    }
    return time;
}
The weird thing is that when I print out the hest value I get 1.000000, which I'm sure isn't right... I have been struggling with this for hours now, and I can't find the issue. Maybe I'm just bad at math :P
Hope you can help me.
The issue is
    hest = (100 / 60);
The result is 1 because 100 / 60 = 1.6666..., but this is integer division, so the decimals are lost, leaving hest = 1. Use
    hest = (100.0 / 60.0);
Same with
    time = minutes * (100 / 60);
Change it to
    time = minutes * (100.0 / 60.0);
Even then, you will still lose the decimals of the final result, because time is an int.
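To make the integer-division behavior concrete, a minimal demonstration:

#include <stdio.h>

int main(void)
{
    printf("100 / 60     = %d\n", 100 / 60);        // integer division: prints 1
    printf("100.0 / 60.0 = %f\n", 100.0 / 60.0);    // floating point: prints 1.666667
    return 0;
}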
Some would recommend, if speed is an issue, performing all the calculations in integer arithmetic and storing all your values as ints in hundredths of a second (e.g. 60 minutes in hundredths of a second = 60 minutes * 60 seconds * 100). The same principles apply in C and C++, although on most x86-based systems floating point isn't as big a deal as it is on power-limited embedded systems.
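As a sketch of that fixed-point idea (the centisec name is just an illustrative alias, not a standard type):

// Store durations as integer hundredths of a second to avoid floating point.
typedef long centisec;

centisec minutes_to_centisec(long minutes)
{
    return minutes * 60 * 100;   // 60 seconds per minute, 100 hundredths per second
}

long centisec_to_whole_seconds(centisec cs)
{
    return cs / 100;             // integer division simply drops the fraction
}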
The following statement is effectively a no-op:
    time = minutes * (100 / 60);
(100 / 60) == 1 because 100 and 60 are integers. You must write this:
    time = (minutes * 100) / 60;
For instance, if minutes == 123, time will be calculated as (123 * 100) / 60, which is 12300 / 60, which is 205.
As stated, you are mixing floating-point and integer arithmetic. When you divide two integers, the result is an integer, but you are trying to print that result as a float. You might consider using the modulus operator (%) to compute the quotient and remainder:
int // you might want to return float, since you are commingling int and float for time
diff(int start, int slut)
{
    /* slut means end in Danish. not using rude words, but I am Danish */
    int minutes = 0, seconds = 0, diff;
    int time = 0;
    double hest = (100.0) / (60.0);    // floating-point division here

    printf("hest = %f", hest);

    if (timeVals[start] > timeVals[slut])
    {
        diff = timeVals[start] - timeVals[slut];
        time = diff * (100.0 / 60.0);
        minutes = diff / 60;
        seconds = diff % 60;           // modulus (remainder after division by 60)
    }
    else
    {
        diff = timeVals[slut] - timeVals[start];
        time = time + diff * (100.0 / 60.0);
        minutes = diff / 60;
        seconds = diff % 60;           // modulus (remainder after division by 60)
    }

    /* printing hest to see what the value is */
    printf("t = %f", hest);
    printf("minutes:seconds %d:%02d\n", minutes, seconds);
    printf("minut diff: %d\n", time);

    return time;
}

clock_gettime on Raspberry Pi with C

I want to measure the time from the start to the end of a function in a loop. The difference will be used to set the number of iterations of the inner while loop, which does some work that isn't important here.
I want to time the function like this:
#include <wiringPi.h>
#include <time.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <unistd.h>

#define BILLION 1E9

float hz = 1000;
long int nsPerTick = BILLION / hz;
double unprocessed = 1;
struct timespec now;
struct timespec last;

clock_gettime(CLOCK_REALTIME, &last);

[ ... ]

while (1)
{
    clock_gettime(CLOCK_REALTIME, &now);
    double diff = (last.tv_nsec - now.tv_nsec);
    unprocessed = unprocessed + (diff / nsPerTick);
    clock_gettime(CLOCK_REALTIME, &last);

    while (unprocessed >= 1) {
        unprocessed--;
        DO SOME RANDOM MAGIC;
    }
}
The difference between the two timestamps is always negative. I was told this was where the error was:
if ((last.tv_nsec - now.tv_nsec) < 0) {
    double diff = 1000000000 + last.tv_nsec - now.tv_nsec;
}
else {
    double diff = (last.tv_nsec - now.tv_nsec);
}
But still, my diff variable is always negative, like "-1095043244" (even though the time spent in the function is positive, of course).
What's wrong?
Your first issue is that you have last.tv_nsec - now.tv_nsec, which is the wrong way round.
last.tv_nsec is in the past (let's say it's set to 1), and now.tv_nsec will always be later (for example, 8ns later, so it's 9). In that case, last.tv_nsec - now.tv_nsec == 1 - 9 == -8.
The other issue is that tv_nsec isn't the time in nanoseconds: for that, you'd need to multiply the time in seconds by a billion and add that. So to get the difference in ns between now and last, you want:
    ((now.tv_sec - last.tv_sec) * ONE_BILLION) + (now.tv_nsec - last.tv_nsec)
(N.B. I'm still a little surprised that although now.tv_nsec and last.tv_nsec are both less than a billion, subtracting one from the other gives a value less than -1000000000, so there may yet be something I'm missing here.)
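Put together, a sketch of a correctly ordered difference helper (with ONE_BILLION written out, and long long used so the multiplication can't overflow a 32-bit long):

#include <time.h>

// Nanoseconds elapsed from 'last' to 'now'; 'now' must be the later timestamp.
long long timespec_diff_ns(const struct timespec *last, const struct timespec *now)
{
    return (now->tv_sec - last->tv_sec) * 1000000000LL
         + (now->tv_nsec - last->tv_nsec);
}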
I was just investigating timing on the Pi, with a similar approach and similar problems. My thoughts are:
You don't have to use double. In fact, you also don't need nanoseconds, as the clock on the Pi has 1 microsecond accuracy anyway (it's the way Broadcom did it). I suggest you use gettimeofday() to get microseconds instead of nanoseconds. Then the computation is easy, it's just:
    (1000 * 1000 * number of seconds) + number of microseconds
which you can simply calculate as an unsigned int.
I've implemented a convenient API for this:

#include <sys/time.h>

typedef struct
{
    struct timeval startTimeVal;
} TIMER_usecCtx_t;

void TIMER_usecStart(TIMER_usecCtx_t* ctx)
{
    gettimeofday(&ctx->startTimeVal, NULL);
}

unsigned int TIMER_usecElapsedUs(TIMER_usecCtx_t* ctx)
{
    unsigned int rv;

    /* get current time */
    struct timeval nowTimeVal;
    gettimeofday(&nowTimeVal, NULL);

    /* compute diff */
    rv = 1000000 * (nowTimeVal.tv_sec - ctx->startTimeVal.tv_sec)
       + nowTimeVal.tv_usec - ctx->startTimeVal.tv_usec;

    return rv;
}
And the usage is (note the address-of operator; the functions take a pointer):

TIMER_usecCtx_t timer;

TIMER_usecStart(&timer);
while (1)
{
    if (TIMER_usecElapsedUs(&timer) > yourDelayInMicroseconds)
    {
        doSomethingHere();
        TIMER_usecStart(&timer);
    }
}
Also notice that the gettime() calls on the Pi take almost 1 us to complete. So, if you need to call gettime() a lot and need more accuracy, go for some more advanced methods of getting the time; I've explained more about this in a short article about the Pi's get-time calls.
Well, I don't know C, but if it's a timing issue on a Raspberry Pi it might have something to do with the lack of an RTC (real time clock) on the chip.
You should not be storing last.tv_nsec - now.tv_nsec in a double.
If you look at the documentation of time.h, you can see that tv_nsec is stored as a long, so you will need something along the lines of:
    long diff = end.tv_nsec - begin.tv_nsec;
That said, comparing only the nanoseconds can go wrong: you also need to look at the number of seconds. To convert everything to seconds, you can use this:
    long nanosec_diff = end.tv_nsec - begin.tv_nsec;
    time_t sec_diff = end.tv_sec - begin.tv_sec;   // need <sys/types.h> for time_t
    double diff_in_seconds = sec_diff + nanosec_diff / 1000000000.0;
Also, make sure you are always subtracting the start time from the end time (or else your time will still be negative).
And there you go!

Why alter curl_multi_timeout() return value?

This sample code contains:
curl_multi_timeout(multi_handle, &curl_timeo);
if (curl_timeo >= 0) {
    timeout.tv_sec = curl_timeo / 1000;
    if (timeout.tv_sec > 1)
        timeout.tv_sec = 1;
    else
        timeout.tv_usec = (curl_timeo % 1000) * 1000;
}
Why is tv_sec clipped to 1 second? Why isn't the value returned by curl_multi_timeout() used as-is (after dividing by 1000)?
Assuming there's a good reason for the above, then is there a case when you would NOT clip the value to 1 second? What case is that?
The code is just setting a maximum wait time for the later call to select(). If anything, this looks like a bug: the code appears to be protecting itself from an unreasonable answer from curl_multi_timeout(). My guess is that the coder was thinking, "if the curl timeout function returns something longer than one minute, then don't wait any longer than that," and then proceeded to typo one minute as one second. It probably should be doing:
if (timeout.tv_sec > 60) {
    timeout.tv_sec = 60;
} else if (timeout.tv_sec == 0) {
    timeout.tv_usec = curl_timeo * 1000;
}
The mod by 1000 is unnecessary since curl_multi_timeout() returns milliseconds, so if tv_sec is zero, the returned value must be in the range 0 to 999.
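As a general-purpose version of that conversion, a sketch of a helper that turns a millisecond timeout into a struct timeval with a cap (the function name and the cap parameter are illustrative, not from the curl sample):

#include <sys/time.h>

// Convert a millisecond timeout into a struct timeval, clamped to cap_sec seconds.
// A negative ms (curl's "no suggested timeout") falls back to the cap.
static struct timeval ms_to_timeval_capped(long ms, long cap_sec)
{
    struct timeval tv;

    if (ms < 0 || ms > cap_sec * 1000)
        ms = cap_sec * 1000;

    tv.tv_sec  = ms / 1000;
    tv.tv_usec = (ms % 1000) * 1000;

    return tv;
}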
