UTC time stamp on Windows - c

I have a buffer with the UTC time stamp in C, and I broadcast that buffer every ten seconds. The problem is that the time difference between two packets is not consistent: after 5 to 10 iterations the difference becomes 9 or 11 seconds, and then 10 again. Kindly help me sort out this problem.
I am using <time.h> for UTC time.

If your time stamp has only 1 second resolution then there will always be +/- 1 uncertainty in the least significant digit (i.e. +/- 1 second in this case).
Clarification: if you only have a resolution of 1 second then your time values are quantized. The real time, t, represented by such a quantized value lies anywhere in the range t .. t+0.999. If you take the difference of two such times, t0 and t1, then the maximum error in t1-t0 is -0.999 .. +0.999, which when quantized is +/- 1 second. So in your case you would expect to see difference values in the range 9..11 seconds.
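A small illustration of that quantization effect (the numbers are made up purely to show the truncation): two events roughly 10 seconds apart can report a difference of 9, 10 or 11 once both timestamps are truncated to whole seconds.
#include <stdio.h>

int main(void)
{
    /* truncating to whole seconds, as a 1-second-resolution clock does */
    printf("%d\n", (int)15.50 - (int)5.50);  /* real diff 10.00 -> prints 10 */
    printf("%d\n", (int)14.99 - (int)5.01);  /* real diff  9.98 -> prints  9 */
    printf("%d\n", (int)16.01 - (int)5.99);  /* real diff 10.02 -> prints 11 */
    return 0;
}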

A thread that sleeps for X milliseconds is not guaranteed to sleep for precisely that many milliseconds. I am assuming that you have a statement that goes something like:
while(1) {
    ...
    sleep(10); // Sleep for 10 seconds.
    // fetch timestamp and send
}
You will get a more accurate gauge of time if you sleep for shorter periods (say 20 milliseconds) in a loop checking until the time has expired. When you sleep for 10 seconds, your thread gets moved further out of the immediate scheduling priority of the underlying OS.
You might also take into account that the time taken to send the timestamps may vary, depending on network conditions and so on. If you do a sleep(10) -> send -> sleep(10) type of loop, the time taken to send is effectively added onto the next sleep(10) in real terms.
Try something like this (forgive me, my C is a little rusty):
bool expired = false;
double last, current;
double t1, t2;
double difference = 0;

while(1) {
    ...
    last = (double)clock();
    while(!expired) {
        usleep(20000); // sleep for 20 milliseconds
        current = (double)clock();
        if(((current - last) / (double)CLOCKS_PER_SEC) >= (10.0 - difference))
            expired = true;
    }
    t1 = (double)clock();
    // Set and send the timestamp.
    t2 = (double)clock();
    //
    // Calculate how long it took to send the stamps
    // and take that away from the next sleep cycle.
    //
    difference = (t2 - t1) / (double)CLOCKS_PER_SEC;
    expired = false;
}
If you are not restricted to the standard C library, you could look at the high-resolution timer functionality of Windows, such as the QueryPerformanceFrequency/QueryPerformanceCounter functions.
LARGE_INTEGER freq;
LARGE_INTEGER t1, t2;
//
// Get the resolution of the timer.
//
QueryPerformanceFrequency(&freq);
// Start Task.
QueryPerformanceCounter(&t1);
... Do something ....
QueryPerformanceCounter(&t2);
// Very accurate duration in seconds.
double duration = (double)(t2.QuadPart - t1.QuadPart) / (double)freq.QuadPart;
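To tie this back to the original ten-second broadcast problem, here is a rough, untested sketch of a drift-free loop built on those same Windows calls: instead of sleeping a fixed 10 seconds each pass, it sleeps towards an absolute deadline, so the time spent building and sending the packet is not added on top. send_timestamp() is just a placeholder for your own send code.
#include <windows.h>

void send_timestamp(void);  /* assumed: your existing build-and-send routine */

void broadcast_loop(void)
{
    LARGE_INTEGER freq, now, next;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&next);

    for (;;) {
        next.QuadPart += 10 * freq.QuadPart;   /* next absolute 10 s deadline */
        send_timestamp();

        QueryPerformanceCounter(&now);
        while (now.QuadPart < next.QuadPart) {
            /* sleep in short slices so we wake up close to the deadline */
            LONGLONG left_ms = (next.QuadPart - now.QuadPart) * 1000 / freq.QuadPart;
            Sleep((DWORD)(left_ms > 20 ? 20 : left_ms));
            QueryPerformanceCounter(&now);
        }
    }
}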

Related

Creating a Program that slowly Increases the brightness of an LED as a start-up

I want to create a program, using a for loop, that slowly increases the brightness of an LED as a "start-up" when I press a button.
I have basically no knowledge of for loops. I've tried messing around by looking at similar programs and potential solutions, but I was unable to get it working.
This is my starting code; I have to use PWMperiod to achieve it.
if (SW3 == 0) {
    for (unsigned char PWMperiod = 255; PWMperiod != 0; PWMperiod--) {
        if (TonLED4 == PWMperiod) {
            TonLED4 += 1;
        }
        __delay_us (20);
    }
}
How would I start this/do it?
For pulse width modulation, you'd want to turn the LED off for a certain amount of time, then turn the LED on for a certain amount of time; where the amounts of time depend on how bright you want the LED to appear and the total period ("on time + off time") is constant.
In other words, you want a relationship like period = on_time + off_time where period is constant.
You also want the LED to increase brightness slowly. E.g. maybe go from off to max. brightness over 10 seconds. This means you'll need to loop total_time / period times.
How bright the LED should be, and therefore how long the on_time should be, will depend on how much time has passed since the start of the 10 seconds (e.g. 0 microseconds at the start of the 10 seconds and period microseconds at the end of the 10 seconds). Once you know the on_time you can calculate off_time by rearranging that "period = on_time + off_time" formula.
In C it might end up something like:
#define TOTAL_TIME 10000000 // 10 seconds, in microseconds
#define PERIOD 1000 // 1 millisecond, in microseconds
#define LOOP_COUNT (TOTAL_TIME / PERIOD)

int on_time;
int off_time;

for(int t = 0; t < LOOP_COUNT; t++) {
    on_time = PERIOD * t / LOOP_COUNT;
    off_time = PERIOD - on_time;
    turn_LED_off();
    __delay_us(off_time);
    turn_LED_on();
    __delay_us(on_time);
}
Note: on_time = PERIOD * t / LOOP_COUNT; is a little tricky. You can think of it as on_time = PERIOD * (t / LOOP_COUNT);, where t / LOOP_COUNT is a fraction that goes from 0.00000 to 0.999999 representing the fraction of the period that the LED should be turned on. But if you wrote it like that, the compiler would truncate the result of t / LOOP_COUNT to an integer (round it towards zero), so the result would be zero. Written as it is, C does the multiplication first, so it behaves like on_time = (PERIOD * t) / LOOP_COUNT; and truncation (or rounding) isn't a problem. Sadly, doing the multiplication first solves one problem while possibly causing another: PERIOD * t might be too big for an int and might overflow (especially on small embedded systems where an int can be 16 bits). You'll have to figure out how big an int is for your compiler and for the values you use (changing TOTAL_TIME or PERIOD changes the maximum value that PERIOD * t can reach) and use something larger (e.g. a long) if an int isn't enough, as in the sketch below.
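For example, one way to force the intermediate multiplication into a wider type (just a sketch; whether it is needed depends on your compiler's int size and your chosen TOTAL_TIME and PERIOD):
/* cast before multiplying so PERIOD * t is computed as unsigned long,
 * which cannot overflow for these values even with a 16-bit int */
unsigned long on_time  = (unsigned long)PERIOD * t / LOOP_COUNT;
unsigned long off_time = PERIOD - on_time;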
You should also be aware that the timing won't be exact, because it ignores time spent executing your code and ignores anything else the OS might be doing (IRQs, other programs using the CPU); so the "10 seconds" might actually be 10.5 seconds (or worse). To fix that you need something more complex than a __delay_us() function (e.g. some kind of __delay_until(absolute_time) maybe).
Also; you might find that the LED doesn't increase brightness linearly (e.g. it might slowly go from off to dull in 8 seconds then go from dull to max. brightness in 2 seconds). If that happens; you might need a lookup table and/or more complex maths to correct it.

C programming: I want to subtract a weight per second

I am new to Stack Overflow and I am sorry if I make mistakes.
I am a beginner in C and I have a project that needs to subtract a percentage of a weight every second. For example:
Weight: 50 kg
Subtract per second: 4%
I found this code:
while(waitFor(1))
{
    *weight = (*weight) - (*weight) * 0.4;
}

void waitFor(int secs)
{
    int retTime;
    retTime = time(0) + secs;     // Get finishing time.
    while (time(0) < retTime);    // Loop until it arrives.
}
But I don't want to wait X seconds for it to finish. I want a faster solution. Any ideas?
Note: I want to know how many seconds it takes for the weight to reach 0. The sleep command is not working on my computer.
For cleaning and disinfecting the pool, a solid chemical body is dropped into the water. On contact with water, this solid body immediately begins to dissolve, losing 4% of its mass per second. Assuming the dissolution rate of the chemical remains constant, implement a program that accepts the weight of the solid body in grams and displays after how much time it will completely dissolve. The time is displayed as "hours:minutes:seconds". For example, if the dissolution time is 3,740 seconds, display 01:02:20. To calculate the time you need to implement a function which accepts the grams and returns the three time parameters, i.e. hours, minutes and seconds. Note that the time is printed in the main function.
You can use the sleep(int) function in a loop; it will suspend the process for the given number of seconds.
while (*weight > 0)
{
    sleep(1);
    *weight = (*weight) - (*weight) * 0.04;   /* 4% per second */
}
It will wait for 1 second and then the subtraction is made; it will run continuously.
Edit:
To find out the number of seconds required for the weight to reach 0:
unsigned seconds = 0;
while (*weight != 0) {
    *weight -= *weight * 0.04;
    seconds++;
    // in case you have the patience to wait in real time:
    sleep(1);   // one second
}
Please note that weight is considered to be a pointer to an integer type.
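Putting the pieces of the exercise together, here is a rough sketch under the stated assumptions (4% of the current mass dissolves each second). Since mathematically the mass never reaches exactly zero, the loop stops once less than a small threshold remains; the function name and the 0.01 g cutoff are only illustrative choices.
#include <stdio.h>

void dissolve_time(double weight, int *hours, int *minutes, int *seconds)
{
    unsigned total = 0;
    while (weight >= 0.01) {       /* stop once less than 0.01 g is left */
        weight -= weight * 0.04;   /* lose 4% of the current mass per second */
        total++;
    }
    *hours   = total / 3600;
    *minutes = (total % 3600) / 60;
    *seconds = total % 60;
}

int main(void)
{
    int h, m, s;
    dissolve_time(50000.0, &h, &m, &s);     /* 50 kg, in grams */
    printf("%02d:%02d:%02d\n", h, m, s);
    return 0;
}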

Self-correcting periodic timer using gettimeofday()

I have a loop which runs every X usecs, which consists of doing some I/O then sleeping for the remainder of the X usecs. To (roughly) calculate the sleep time, all I'm doing is taking a timestamp before and after the I/O and subtract the difference from X. Here is the function I'm using for the timestamp:
long long getus ()
{
    struct timeval time;
    gettimeofday(&time, NULL);
    return (long long) (time.tv_sec + time.tv_usec);
}
As you can imagine, this starts to drift pretty fast and the actual time between I/O bursts is usually quite a few ms longer than X.
To try and make it a little more accurate, I thought maybe if I keep a record of the previous starting timestamp, every time I start a new cycle I can calculate how long the previous cycle took (the time between this starting timestamp and the previous one). Then, I know how much longer than X it was, and I can modify my sleep for this cycle to compensate.
Here is how I'm trying to implement it:
long long start, finish, offset, previous, remaining_usecs;
long long delaytime_us = 1000000;

/* Initialise previous timestamp as 1000000us ago */
previous = getus() - delaytime_us;

while(1)
{
    /* starting timestamp */
    start = getus();

    /* here is where I would do some I/O */

    /* calculate how much to compensate */
    offset = (start - previous) - delaytime_us;
    printf("(%lld - %lld) - %lld = %lld\n",
           start, previous, delaytime_us, offset);
    previous = start;
    finish = getus();

    /* calculate to our best ability how long we spent on I/O.
     * We'll try and compensate for its inaccuracy next time around! */
    remaining_usecs = (delaytime_us - (finish - start)) - offset;
    printf("start=%lld,finish=%lld,offset=%lld,previous=%lld\nsleeping for %lld\n",
           start, finish, offset, previous, remaining_usecs);
    usleep(remaining_usecs);
}
It appears to work on the first iteration of the loop; however, after that things get messed up.
Here's the output for 5 iterations of the loop:
(1412452353 - 1411452348) - 1000000 = 5
start=1412452353,finish=1412458706,offset=5,previous=1412452353
sleeping for 993642
(1412454788 - 1412452353) - 1000000 = -997565
start=1412454788,finish=1412460652,offset=-997565,previous=1412454788
sleeping for 1991701
(1412454622 - 1412454788) - 1000000 = -1000166
start=1412454622,finish=1412460562,offset=-1000166,previous=1412454622
sleeping for 1994226
(1412457040 - 1412454622) - 1000000 = -997582
start=1412457040,finish=1412465861,offset=-997582,previous=1412457040
sleeping for 1988761
(1412457623 - 1412457040) - 1000000 = -999417
start=1412457623,finish=1412463533,offset=-999417,previous=1412457623
sleeping for 1993507
The first line of output shows how the previous cycle time was calculated. It appears that the first two timestamps are basically 1000000us apart (1412452353 - 1411452348 = 1000005). However after this the distance between starting timestamps starts looking not so reasonable, along with the offset.
Does anyone know what I'm doing wrong here?
EDIT: I would also welcome suggestions of better ways to get an accurate timer and be
able to sleep during the delay!
After some more research I've discovered two things wrong here-
Firstly, I'm calculating the timestamp wrong: the seconds need to be scaled to microseconds before the microsecond field is added, so getus() should return:
return (long long)time.tv_sec * 1000000 + time.tv_usec;
And secondly, I should be storing the timestamp in unsigned long long or uint64_t.
So getus() should look like this:
uint64_t getus ()
{
    struct timeval time;
    gettimeofday(&time, NULL);
    return (uint64_t)time.tv_sec * 1000000 + time.tv_usec;
}
I won't actually be able to test this until tomorrow, so I will report back.
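For what it's worth, here is a sketch of an alternative that sidesteps the drift bookkeeping altogether: keep an absolute deadline in microseconds (using the corrected getus() above) and sleep only for whatever remains of the current period. do_io() is a placeholder for the I/O work.
#include <stdint.h>
#include <sys/time.h>
#include <unistd.h>

uint64_t getus(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (uint64_t)tv.tv_sec * 1000000 + tv.tv_usec;
}

void periodic_loop(void)
{
    const uint64_t period_us = 1000000;
    uint64_t deadline = getus() + period_us;

    for (;;) {
        /* do_io();  -- the actual work goes here */

        uint64_t now = getus();
        if (now < deadline)
            usleep((useconds_t)(deadline - now));  /* sleep only the remainder */
        deadline += period_us;                     /* next absolute deadline */
    }
}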

timestamp in c with milliseconds precision

I'm relatively new to C programming and I'm working on a project which needs very accurate timing; therefore I tried to write something to create a timestamp with milliseconds precision.
It seems to work but my question is whether this way is the right way, or is there a much easier way? Here is my code:
#include <stdio.h>
#include <time.h>

void wait(int milliseconds)
{
    clock_t start = clock();
    while(1) if(clock() - start >= milliseconds) break;
}

int main()
{
    time_t now;
    clock_t milli;
    int waitMillSec = 2800, seconds, milliseconds = 0;
    struct tm * ptm;

    now = time(NULL);
    ptm = gmtime ( &now );
    printf("time before: %d:%d:%d:%d\n", ptm->tm_hour, ptm->tm_min, ptm->tm_sec, milliseconds );

    /* wait until next full second */
    while(now == time(NULL));

    milli = clock();
    /* DO SOMETHING HERE */
    /* for testing, wait a user defined period */
    wait(waitMillSec);
    milli = clock() - milli;

    /* create timestamp with milliseconds precision */
    seconds = milli / CLOCKS_PER_SEC;
    milliseconds = milli % CLOCKS_PER_SEC;
    now = now + seconds;
    ptm = gmtime( &now );
    printf("time after: %d:%d:%d:%d\n", ptm->tm_hour, ptm->tm_min, ptm->tm_sec, milliseconds );
    return 0;
}
The following code seems likely to provide millisecond granularity:
#include <windows.h>
#include <stdio.h>
int main(void) {
    SYSTEMTIME t;
    GetSystemTime(&t); // or GetLocalTime(&t)
    printf("The system time is: %02d:%02d:%02d.%03d\n",
           t.wHour, t.wMinute, t.wSecond, t.wMilliseconds);
    return 0;
}
This is based on http://msdn.microsoft.com/en-us/library/windows/desktop/ms724950%28v=vs.85%29.aspx. The above code snippet was tested with CYGWIN on Windows 7.
For Windows 8, there is GetSystemTimePreciseAsFileTime, which "retrieves the current system date and time with the highest possible level of precision (<1us)."
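A minimal sketch of using it (Windows 8 or later), assuming you want the value as an integer number of microseconds; the FILETIME it fills in counts 100-nanosecond intervals since January 1, 1601 (UTC):
#include <windows.h>
#include <stdio.h>

int main(void)
{
    FILETIME ft;
    ULARGE_INTEGER u;

    GetSystemTimePreciseAsFileTime(&ft);
    u.LowPart  = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;

    unsigned long long micros = u.QuadPart / 10;  /* 100 ns ticks -> microseconds */
    printf("Microseconds since 1601-01-01 (UTC): %llu\n", micros);
    return 0;
}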
Your original approach would probably be ok 99.99% of the time (ignoring one minor bug, described below). Your approach is:
Wait for the next second to start, by repeatedly calling time() until the value changes.
Save that value from time().
Save the value from clock().
Calculate all subsequent times using the current value of clock() and the two saved values.
Your minor bug was that you had the first two steps reversed.
But even with this fixed, this is not guaranteed to work 100%, because there is no atomicity. Two problems:
Your code loops time() until you are into the next second. But how far are you into it? It could be 1/2 a second, or even several seconds (e.g. if you are running a debugger with a breakpoint).
Then you call clock(). But this saved value has to 'match' the saved value of time(). If these two calls are almost instantaneous, as they usually are, then this is fine. But Windows (and Linux) time-slice, and so there is no guarantee.
Another issue is the granularity of clock. If CLOCKS_PER_SEC is 1000, as seems to be the case on your system, then of course the best you can do is 1 msec. But it can be worse than that: on Unix systems it is typically 15 msecs. You could improve this by replacing clock with QueryPerformanceCounter(), as in the answer to timespec equivalent for windows, but this may be otiose, given the first two problems.
Clock periods are not at all guaranteed to be in milliseconds. You need to explicitly convert the output of clock() to milliseconds.
t1 = clock();
// do something
t2 = clock();
long millis = (t2 - t1) * (1000.0 / CLOCKS_PER_SEC);
Since you are on Windows, why don't you just use Sleep()?

C GetTickCount (windows function) to Time (nanoseconds)

I'm testing some code provided by my colleague and I need to measure the execution time of a routine that performs a context switch (of threads).
What's the best choice for measuring the time? I know that high-resolution timers are available, like
QueryPerformanceCounter
QueryPerformanceFrequency
but how can I translate those timer readings into milliseconds or nanoseconds?
LARGE_INTEGER lFreq, lStart;
LARGE_INTEGER lEnd;
double d;

QueryPerformanceFrequency(&lFreq);
QueryPerformanceCounter(&lStart);
/* do something ... */
QueryPerformanceCounter(&lEnd);
d = ((double)lEnd.QuadPart - (double)lStart.QuadPart) / (double)lFreq.QuadPart;
d is time interval in seconds.
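If you want the result in milliseconds or nanoseconds instead of seconds, you can scale the same tick difference; multiplying before dividing keeps the integer precision (for very long intervals the multiplication could overflow, but that is not a concern for measurements of this size):
LONGLONG ticks  = lEnd.QuadPart - lStart.QuadPart;
LONGLONG millis = ticks * 1000LL       / lFreq.QuadPart;
LONGLONG nanos  = ticks * 1000000000LL / lFreq.QuadPart;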
Since the operation I am executing is on the order of 500 nanoseconds, and the timers don't have that precision, what I did was: I saved the current time with GetTickCount() (which has a resolution of roughly 12 ms), then ran the routine repeatedly on a thread, counting the iterations (N_TIMES, the number of times the routine executed), until I press something on the console.
Then I take the time again and divide the difference by N_TIMES, something like this:
static int counter;

void routine()
{
    // Operations here..
    counter++;
}

int main() {
    DWORD start = GetTickCount();
    HANDLE handle = CreateThread(....., routine, ...);
    ResumeThread(handle);
    getchar();
    WaitForSingleObject(handle, INFINITE);
    double elapsed = ((GetTickCount() - start) * 1000000.0) / counter;
    printf("Nanos: %f", elapsed);
}
:)
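If GetTickCount's resolution is a concern, the same averaging idea works with QueryPerformanceCounter, which makes a routine in the 500-nanosecond range measurable directly. A rough sketch, with routine() standing in for the code under test and the iteration count chosen arbitrarily:
#include <windows.h>
#include <stdio.h>

void routine(void);   /* assumed: the code being measured */

int main(void)
{
    const int N_TIMES = 1000000;
    LARGE_INTEGER freq, t1, t2;

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t1);
    for (int i = 0; i < N_TIMES; i++)
        routine();
    QueryPerformanceCounter(&t2);

    double nanos = (double)(t2.QuadPart - t1.QuadPart)
                   * 1000000000.0 / (double)freq.QuadPart / N_TIMES;
    printf("Nanos per call: %.1f\n", nanos);
    return 0;
}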
