Storing a Timestamp as a Binary Fraction in C

I'm working on an implementation of an obscure network protocol, and one of the requirements is that each packet header should contain a 56-bit timestamp, with the first 4 bytes containing an integer number of seconds since the epoch, and the remaining 3 bytes used to contain a binary fraction of the current second. In other words, the last 3 bytes should represent the number of 2^-24 seconds since the previous second. The first part of the timestamp is trivial, but I'm struggling to implement the C code that would store the fractional part of the timestamp. Can anyone shed some light on how to do this?
For completeness' sake, here's the timestamp code I have so far. primaryHeader is a char* that I'm using to store the header data for the packet. You can assume that the first 6 bytes in primaryHeader contain valid, unrelated data, and that primaryHeader is large enough to contain everything that needs to be stored in it.
int secs = (int)time(NULL);
memcpy(&primaryHeader[7], &secs, sizeof(int));
// TODO: Compute fractional portion of the timestamp and memcpy to primaryHeader[11]

The time() function will only give you whole seconds. You need a higher-resolution timer.
struct timespec t;
timespec_get(&t, TIME_UTC);
int secs = t.tv_sec;                                 // Whole seconds
int frac = ((int64_t)t.tv_nsec << 24) / 1000000000;  // Fraction of a second in units of 2^-24 s
If timespec_get is not available you can use clock_gettime, but clock_gettime is not available on Windows.
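To make the last step concrete, here is a minimal sketch (my addition, not part of the original answer) of how the whole 56-bit timestamp could be written byte by byte. It assumes the protocol wants both fields most-significant-byte first; the function name and offset parameter are illustrative only.

#include <stddef.h>
#include <stdint.h>
#include <time.h>

/* Write a 56-bit timestamp (4 bytes of whole seconds + 3 bytes of binary
   fraction in units of 2^-24 s) starting at header[offset]. Big-endian
   field order is an assumption; check what the protocol actually specifies. */
static void pack_timestamp(unsigned char *header, size_t offset)
{
    struct timespec t;
    timespec_get(&t, TIME_UTC);

    uint32_t secs = (uint32_t)t.tv_sec;
    uint32_t frac = (uint32_t)(((int64_t)t.tv_nsec << 24) / 1000000000); /* 0 .. 2^24 - 1 */

    header[offset + 0] = (secs >> 24) & 0xFF;   /* whole seconds, MSB first */
    header[offset + 1] = (secs >> 16) & 0xFF;
    header[offset + 2] = (secs >> 8)  & 0xFF;
    header[offset + 3] =  secs        & 0xFF;

    header[offset + 4] = (frac >> 16) & 0xFF;   /* fraction of the second, MSB first */
    header[offset + 5] = (frac >> 8)  & 0xFF;
    header[offset + 6] =  frac        & 0xFF;
}

Called as pack_timestamp((unsigned char *)primaryHeader, 7), it fills the same bytes the question's memcpy and TODO refer to (seconds at indices 7-10, fraction at 11-13), without depending on the host's byte order or on sizeof(int).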

Related

Program stops unexpectedly when I use gettimeofday() in an infinite loop

I've written code to make each iteration of a while(1) loop take a specific amount of time (in this example 10000 µs, which equals 0.01 seconds). The problem is that this code works pretty well at the start but somehow stops after less than a minute. It's as if there were some limit on accessing the Linux time. For now, I am using a boolean variable so that this time calculation runs only once instead of forever. Since performance varies over time, it would be good to calculate the computation time for each loop. Is there any other way to accomplish this?
void some_function() {
    struct timeval tstart, tend;

    while (1) {
        gettimeofday(&tstart, NULL);
        /* ... some computation ... */
        gettimeofday(&tend, NULL);

        diff = (tend.tv_sec - tstart.tv_sec) * 1000000L + (tend.tv_usec - tstart.tv_usec);
        usleep(10000 - diff);
    }
}
From the man page of usleep:
#include <unistd.h>
int usleep(useconds_t usec);
usec is an unsigned int; now guess what happens when diff is > 10000 in the line below:
usleep(10000-diff);
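One minimal way to guard against that (my sketch, not part of the original answer) is to compute the difference in a signed variable and only sleep when there is time left in the 10 ms budget:

long diff = (tend.tv_sec - tstart.tv_sec) * 1000000L
          + (long)tend.tv_usec - (long)tstart.tv_usec;

if (diff < 10000)                          /* only sleep if the work took less than 10 ms */
    usleep((useconds_t)(10000 - diff));
/* otherwise the iteration already overran its budget, so don't sleep at all */

The casts simply make sure the microsecond subtraction happens in signed arithmetic even if tv_usec is unsigned on your platform, which is exactly the issue the next answer describes.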
Well, the computation you make to get the difference is wrong:
diff = (tend.tv_sec - tstart.tv_sec)*1000000L+(tend.tv_usec - tstart.tv_usec);
You are mixing different integer types, and missing that tv_usec can be an unsigned quantity which you are subtracting from another unsigned quantity; that subtraction can overflow. After that, you get as a result a full second plus a quantity that is around 4.0E9 µs. This is some 4000 s, or more than an hour, approximately. It is better to check whether there is a borrow and, in that case, take one second off the seconds difference and add 1000000 to the microseconds difference to get a proper positive value.
I don't know the implementation you are using for struct timeval, but most probably tv_sec is a time_t (which can even be 64-bit) while tv_usec is normally just an unsigned 32-bit value, as it is never going to exceed 1000000.
Let me illustrate... suppose you have spent 100 ms doing calculations, and this happens to occur in the middle of a second. You have:
tstart.tv_sec = 123456789; tstart.tv_usec = 123456;
tend.tv_sec = 123456789; tend.tv_usec = 223456;
when you subtract, it leads to:
tv_sec = 0; tv_usec = 100000;
But let's suppose you have done your computation while the second changes:
tstart.tv_sec = 123456789; tstart.tv_usec = 923456;
tend.tv_sec = 123456790; tend.tv_usec = 23456;
The time difference is again 100 ms, but now, when you calculate your expression, you get 1000000 (one full second) for the first part; but after subtracting the second part you get 23456 - 923456 => 4294067296 because of the unsigned overflow.
So you end up calling usleep(4295067296), which is about 4295 s, or roughly 1 h 11 m more.
I think you simply haven't had enough patience to wait for it to finish... but this is something that can be happening to your program, depending on how struct timeval is defined.
A proper way to make the carry work is to reorder the expression so that the additions are done first and the subtraction last. This keeps every intermediate value non-negative and prevents the negative overflow in unsigned arithmetic:
diff = (tend.tv_sec - tstart.tv_sec) * 1000000L + tend.tv_usec - tstart.tv_usec;
which is parsed as
diff = (((tend.tv_sec - tstart.tv_sec) * 1000000L) + tend.tv_usec) - tstart.tv_usec;
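To see the difference concretely, here is a small self-contained sketch (my addition) that plugs in the "second changes" example above, modelling tv_usec as an unsigned 32-bit value as described:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int64_t  start_sec  = 123456789, end_sec = 123456790;
    uint32_t start_usec = 923456,    end_usec = 23456;   /* pretend tv_usec is unsigned 32-bit */

    /* Naive order: the unsigned subtraction wraps around before the addition */
    uint64_t bad  = (uint64_t)(end_sec - start_sec) * 1000000u + (end_usec - start_usec);

    /* Reordered: add first, subtract last, so no intermediate value goes negative */
    uint64_t good = (uint64_t)(end_sec - start_sec) * 1000000u + end_usec - start_usec;

    printf("naive:     %llu us\n", (unsigned long long)bad);    /* 4295067296 */
    printf("reordered: %llu us\n", (unsigned long long)good);   /* 100000 */
    return 0;
}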

Converting nanoseconds to microseconds

I am using the code below to convert nanoseconds to microseconds.
It runs fine most of the time, but sometimes I see that usTick gives a value far beyond the current time.
For example, if the current time in usTick is 63290061063, then sometimes the value comes out as 126580061060; as you can see, it is roughly double.
In another instance the current time was 45960787154, but usTick showed 91920787152.
#include <time.h>

typedef unsigned long long TUINT64;

unsigned long long GetMonoUSTick()
{
    static unsigned long long usTick;
    struct timespec t;

    clock_gettime(CLOCK_MONOTONIC, &t);
    usTick = ((TUINT64)t.tv_nsec) / 1000;
    usTick = usTick + ((TUINT64)t.tv_sec) * 1000000;
    return usTick;
}
If multiple threads of the same process concurrently access the same variables for reading/writing or writing/writing, those variables need to be protected. This can be achieved by using a mutex.
In this case the variable usTick needs to be protected, as it is defined static and is therefore shared by all threads.
Using POSIX-threads the code could look like this:
pthread_mutex_lock(&ustick_mutex);
usTick = ((TUINT64)t.tv_nsec) / 1000;
usTick = usTick +((TUINT64)t.tv_sec) * 1000000;
pthread_mutex_unlock(&ustick_mutex);
(error checking left out for clarity)
Take care to initialise ustick_mutex properly before using it.
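For example, a minimal sketch of the whole function with a statically initialised mutex could look like this (my sketch rather than the original answer's code; error checking still left out for clarity):

#include <pthread.h>
#include <time.h>

typedef unsigned long long TUINT64;

static pthread_mutex_t ustick_mutex = PTHREAD_MUTEX_INITIALIZER;

unsigned long long GetMonoUSTick(void)
{
    static unsigned long long usTick;
    unsigned long long result;
    struct timespec t;

    clock_gettime(CLOCK_MONOTONIC, &t);

    pthread_mutex_lock(&ustick_mutex);
    usTick = ((TUINT64)t.tv_nsec) / 1000;
    usTick = usTick + ((TUINT64)t.tv_sec) * 1000000;
    result = usTick;                 /* read the value back while still holding the lock */
    pthread_mutex_unlock(&ustick_mutex);

    return result;
}

Returning a copy taken inside the locked region matters: returning usTick directly after unlocking could still hand back a value written by another thread.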
Sounds like you are using threads and that once in a while two threads are calling this function at the same time. Because usTick is static, they are working with the same variable (not two different copies). They both convert the nanoseconds to microseconds and assign to usTick, one after the other, and then they both convert the seconds to microseconds and both add them to usTick (so the seconds get added twice).
EDIT - The solution I proposed below was based on two assumptions:
If two threads called the function at (almost) the same time, the difference between the current times returned by clock_gettime() would be too small to matter (the difference between the results calculated by the threads would be at most 1).
On most modern CPUs, reading/writing an integer is an atomic operation.
I think you should be able to fix this by changing:
usTick = ((TUINT64)t.tv_nsec) / 1000;
usTick = usTick +((TUINT64)t.tv_sec) * 1000000;
to:
usTick = ((TUINT64)t.tv_nsec) / 1000 + ((TUINT64)t.tv_sec) * 1000000;
The problem with my solution is that even if the latter assumption is correct, it might not hold for long long. Therefore, for example, this could happen:
Thread A and B call the function at (almost) the same time.
Thread A calculates current time in microsecond as 0x02B3 1F02 FFFF FFFF.
Thread B calculates current time in microsecond as 0x02B3 1F03 0000 0000.
Thread A writes the least significant 32 bits (0xFFFF FFFF) to usTick.
Thread B writes the least significant 32 bits (0x0000 0000) to usTick.
Thread B writes the most significant 32 bits (0x02B3 1F03) to usTick.
Thread A writes the most significant 32 bits (0x02B3 1F02) to usTick.
The function would then return 0x02B3 1F02 0000 0000 in both threads, which is off by 4294967295 microseconds, i.e. more than an hour.
So do as #alk said: use a mutex to protect the reads and writes of usTick.

How does the ping implementation calculate the round trip time?

I am learning about the ping implementation, and I have a doubt about how it calculates the round trip time. It does some calculation to compute the round trip time that I am not able to understand. Here is the code for the round trip time calculation:
tsum += triptime;
tsum2 += (long long)triptime * (long long)triptime;
if (triptime < tmin)
    tmin = triptime;
if (triptime > tmax)
    tmax = triptime;
if (!rtt)
    rtt = triptime * 8;
else
    rtt += triptime - rtt / 8;
The tsum, tsum2, triptime and tmax variables are initially 0, and tmin initially holds 2147483647.
The triptime is calculated like this: one time is noted just before the packet is sent; at the destination the packet is received and, just before it sends the reply, another time is noted, filled into the reply packet, and the reply is sent. The two times are subtracted and the difference is converted into microseconds; the triptime variable contains those microseconds.
For example, take the output below for calculating the rtt: the trip time for the first packet is 42573, for the second packet 43707, for the third 48047, and for the fourth 42559.
Using this, how is the round trip time calculated? Why do they multiply by 8 at the start, and afterwards divide by 8 and subtract from the previously calculated rtt? I cannot figure out why the rtt is calculated like that.
The links below contain the full code for the ping implementation:
ping_common.c
ping.c
Thanks in advance.
rtt is a Modified Moving Average of the triptime values with N == 8, kept multiplied by 8 to make the calculation easy.
The program variable named rtt is not necessarily the rtt in the output, and here it isn't.
The 'average round trip delay' in the implementation you show is tsum / (count of packets). When you look at rtt, you are actually looking at something different, which is only used when you run ping in adaptive mode.
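To make the update rule concrete, here is a small sketch (my addition) that feeds the four trip times from the question through the same code; the stored value is 8 times the smoothed rtt, so dividing by 8 gives the estimate itself:

#include <stdio.h>

int main(void)
{
    long triptimes[] = { 42573, 43707, 48047, 42559 };   /* microseconds, from the question */
    long rtt = 0;

    for (int i = 0; i < 4; i++) {
        long triptime = triptimes[i];
        if (!rtt)
            rtt = triptime * 8;          /* seed the average with the first sample (times 8) */
        else
            rtt += triptime - rtt / 8;   /* rtt/8 is the current estimate; move 1/8 of the way toward the new sample */
        printf("after packet %d: rtt = %ld (estimate = %ld us)\n", i + 1, rtt, rtt / 8);
    }
    return 0;
}

Each new sample nudges the stored value by the difference between the sample and the current estimate (rtt / 8), i.e. an exponentially weighted moving average with a weight of 1/8.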
Because you are dealing with bits. Bit rate and transmission time are not the same thing, so you need to do a tiny bit of arithmetic to convert. The formula is:
Packet transmission time = Packet size / Bit rate
So assuming 100 Mbit/s and a packet size of 1526 bytes, you get:
1526 bytes x 8 bits/byte / (100 x 10^6 bits/second) ≈ 122 microseconds
The bit unit cancels out and you are left with seconds.
Now here's another example. Say you have a round trip time of 255 milliseconds and you transfer 32 kilobytes per round trip. You get:
32,000 bytes x 8 bits/byte / 0.255 seconds ≈ 1,003,921 bits per second
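As a quick check of both calculations, a tiny sketch (my addition):

#include <stdio.h>

int main(void)
{
    /* Transmission time = packet size / bit rate */
    double tx_time = (1526.0 * 8.0) / 100e6;             /* seconds */
    printf("transmission time: %.1f us\n", tx_time * 1e6);   /* ~122.1 us */

    /* Throughput = bits sent per round trip / round trip time */
    double throughput = (32000.0 * 8.0) / 0.255;         /* bits per second */
    printf("throughput: %.0f bit/s\n", throughput);          /* ~1003922 bit/s */
    return 0;
}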

adding/subtracting floats/ints linux C

(You can skip this part; it's just an explanation of the code below. My problems are under the code block.)
Hi. I'm trying to write an algorithm for throttling loop cycles based on how much bandwidth the Linux computer is using. I'm reading /proc/net/dev once a second and keeping track of the bytes transmitted in two variables: one holds the value from the last check, the other the most recent value. I then subtract the older value from the recent one to calculate how many bytes have been sent in one second.
From there I have the variables max_throttle, throttle, max_speed, and sleepp.
The idea is to increase or decrease sleepp depending on the bandwidth being used: the less bandwidth, the shorter the delay, and the more bandwidth, the longer the delay.
I am currently having two problems dealing with floats and ints. If I make all my variables ints, max_throttle always becomes 0, no matter what I set the others to and even if I initialize them.
Also, even though my if statement says "if sleepp is less than 0, reset it to 0", it keeps going deeper and deeper into the negatives and then levels out at around -540 with 0 bandwidth being used.
The if (ii & 0x40) is for speed and usage control. In my application there will be no one-second sleep, so this code is meant to keep sleepp from changing more than about once every 20-30 iterations. However, I'm also having a problem with it: after the first 20-30 iterations, once it triggers it continues to trigger on every following iteration instead of being true once and then becoming true again only after 20-30 more iterations.
Edit: a simpler test case for my variable problem:
#include <stdio.h>

int main()
{
    int max_t, max_s, throttle;

    max_s = 400;
    throttle = 90;
    max_t = max_s * (throttle / 100);
    printf("max throttle:%d\n", max_t);
    return 0;
}
In C, the / operator performs integer division when both operands are integers. Therefore 90/100 = 0. In order to do floating-point division with integers, first convert one of them to a floating-point type (float, double, etc.):
max_t = (int)(max_s * ((float)throttle / 100.0) + 0.5);
The +0.5 rounds the result to the nearest integer before converting to int. You might want to consider the standard rounding/flooring functions instead; I don't know your use case.
Also note that 100.0 is a floating-point literal, whereas 100 is an integer literal. So, although they seem identical, they are not.
As kralyk pointed out, C’s integer division of 90/100 is 0. But rather than using floats you can work with ints… Just do the division after the multiplication (note the omission of parentheses):
max_t = max_s * throttle / 100;
This gives you the general idea. For example if you want the kind of rounding kralyk mentions, add 50 before doing the division:
max_t = (max_s * throttle + 50) / 100;
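For reference, a quick sketch (my addition) comparing the expressions with the values from the test case above:

#include <stdio.h>

int main(void)
{
    int max_s = 400, throttle = 90;

    int v1 = max_s * (throttle / 100);                /* integer division first: 90/100 == 0, so result is 0   */
    int v2 = max_s * throttle / 100;                  /* multiply first: 36000/100 == 360                      */
    int v3 = (max_s * throttle + 50) / 100;           /* add 50 before dividing: rounds to nearest, still 360  */
    int v4 = (int)(max_s * (throttle / 100.0) + 0.5); /* floating-point version with rounding: also 360        */

    printf("%d %d %d %d\n", v1, v2, v3, v4);          /* prints: 0 360 360 360 */
    return 0;
}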

How to get the running time of my program with gettimeofday()

So I get the time at the beginning of the code, run the code, and then get the time again at the end.
struct timeval begin, end;
gettimeofday(&begin, NULL);
//code to time
gettimeofday(&end, NULL);
//get the total number of ms that the code took:
unsigned int t = end.tv_usec - begin.tv_usec;
Now I want to print it out in the form "code took 0.007 seconds to run" or something similar.
So two problems:
1) t seems to contain a value of the order 6000, and I KNOW the code didn't take 6 seconds to run.
2) How can I convert t to a double, given that it's an unsigned int? Or is there an easier way to print the output the way I wanted to?
timeval contains two fields, the seconds part and the microseconds part.
tv_usec (the u standing for the Greek letter mu, which means micro) holds the microseconds. Thus when you get 6000, that's 6000 microseconds elapsed.
tv_sec contains the seconds part.
To get the value you want as a double use this code:
double elapsed = (end.tv_sec - begin.tv_sec) +
                 ((end.tv_usec - begin.tv_usec) / 1000000.0);
Make sure you include the tv_sec part in your calculation. gettimeofday returns the current time, so when the seconds value increments, the microseconds value goes back to 0; if this happens while your code is running and you don't use tv_sec in your calculation, you will get a negative number.
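Putting it together, a minimal sketch (my addition) that prints the elapsed time in the requested format; the usleep call just stands in for the code being timed:

#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
    struct timeval begin, end;

    gettimeofday(&begin, NULL);
    usleep(7000);                       /* stand-in for the code being timed (~7 ms) */
    gettimeofday(&end, NULL);

    double elapsed = (end.tv_sec - begin.tv_sec) +
                     (end.tv_usec - begin.tv_usec) / 1000000.0;

    printf("code took %.3f seconds to run\n", elapsed);   /* e.g. "code took 0.007 seconds to run" */
    return 0;
}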
1) That's because usec is not 6000 milliseconds, it's 6000 microseconds (6 milliseconds or 6 one-thousandths of a second).
2) Try this: (double)t / 1000000. That will convert t to a double and then divide it by one million, giving the number of seconds instead of the number of microseconds.
