When I print the microseconds field returned by gettimeofday(), I notice that it is sometimes larger than 1,000,000. Does anyone know why this is? And does this imply that I've been interpreting gettimeofday() wrong?
For the record, my assumption is that the current time (in microseconds) according to gettimeofday() is the following:
struct timeval ts;
gettimeofday(&ts, NULL);
printf("%zu", ts.tv_sec * 1000000 + ts.tv_usec);
Edit: Here is the code that is causing the problem. Based on the comments below, the printf() format specifiers might be at fault.
struct timeval curr_time;
gettimeofday(&curr_time, NULL);
printf("Done-arino! Onto the matrix multiplication (at %zu s, %03zu ms)\n", curr_time.tv_sec, curr_time.tv_usec);
// Matrix Multiplication
struct timeval start_tv, end_tv, elapsed_tv;
gettimeofday(&start_tv, NULL);
for (i = 0; i < N; i++)
for (j = 0; j < N; j++)
for (k = 0; k < N; k++)
C[i][j] += A[i][k] * B[k][j];
gettimeofday(&end_tv, NULL);
timersub(&end_tv, &start_tv, &elapsed_tv);
// Print results
printf("Elapsed time: %zu s, %03zu ms\n", elapsed_tv.tv_sec, elapsed_tv.tv_usec / 1000);
After a successful call to gettimeofday(), yes, tv_usec is guaranteed to be strictly less than 1,000,000.
If you (think you) saw a value of 1000000 or greater, then yes, it's likely you were doing something wrong.
A frequent mistake is to add or subtract two struct timeval values naively, without implementing proper carry or borrow between the tv_sec and tv_usec fields, and this can easily lead to (mistaken and wrong) values in tv_usec of 1,000,000 or more. (In your edited post you mention subtracting struct timeval values, but you're using the system-supplied timersub macro, which ought to get the borrow right.)
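To make that failure mode concrete, here is a minimal sketch (with illustrative helper names of my own) of the naive subtraction next to a version that borrows correctly, mirroring the timersub macro quoted further down. Note that a negative tv_usec printed with an unsigned format such as %zu also shows up as a huge number:
#include <sys/time.h>

/* WRONG: field-by-field subtraction with no borrow */
struct timeval naive_sub(struct timeval a, struct timeval b)
{
    struct timeval r;
    r.tv_sec = a.tv_sec - b.tv_sec;
    r.tv_usec = a.tv_usec - b.tv_usec;  /* can go negative (or wrap, if the field is unsigned) */
    return r;
}

/* RIGHT: borrow one second when tv_usec underflows, as timersub does */
struct timeval correct_sub(struct timeval a, struct timeval b)
{
    struct timeval r;
    r.tv_sec = a.tv_sec - b.tv_sec;
    r.tv_usec = a.tv_usec - b.tv_usec;
    if (r.tv_usec < 0) {
        --r.tv_sec;
        r.tv_usec += 1000000;
    }
    return r;
}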
If you were using a struct timespec instead of struct timeval, and if a leap second were going on, and if you were (miraculously) using an OS kernel that implemented the CLOCK_UTC clock type proposed by Markus Kuhn at https://www.cl.cam.ac.uk/~mgk25/posix-clocks.html, you'd see tv_nsec values greater than 1000000000, but that's a lot of "if"s. (And to my knowledge no kernel in widespread use has ever implemented CLOCK_UTC.)
You'll need to show some more convincing code, and identify the platform on which you run into the problem.
For example:
#include <stdio.h>
#include <sys/time.h>
int main(void)
{
while (1)
{
struct timeval ts;
if (gettimeofday(&ts, 0) == 0 && ts.tv_usec >= 1000000)
printf("%lu s; %lu µs\n", (long)ts.tv_sec, (long)ts.tv_usec);
}
return 0;
}
The very busy loop is a tad irksome; maybe you should use nanosleep() to sleep for a microsecond or two on every iteration:
#include <stdio.h>
#include <sys/time.h>
#include <time.h>
int main(void)
{
while (1)
{
struct timeval tv;
if (gettimeofday(&tv, 0) == 0 && tv.tv_usec >= 1000000)
printf("%lu s; %lu µs\n", (long)tv.tv_sec, (long)tv.tv_usec);
struct timespec ts = { .tv_sec = 0, .tv_nsec = 2000 };
nanosleep(&ts, 0);
}
return 0;
}
Or, including a progress meter to demonstrate that the code is running:
#include <stdio.h>
#include <sys/time.h>
#include <time.h>
int main(void)
{
size_t loop_count = 0;
size_t line_count = 0;
while (1)
{
struct timeval tv;
if (gettimeofday(&tv, 0) == 0 && tv.tv_usec >= 1000000)
printf("%lu s; %lu µs\n", (long)tv.tv_sec, (long)tv.tv_usec);
struct timespec ts = { .tv_sec = 0, .tv_nsec = 2000 };
nanosleep(&ts, 0);
if (++loop_count > 100000)
{
loop_count = 0;
putchar('.');
line_count++;
if (line_count >= 50)
{
putchar('\n');
line_count = 0;
}
fflush(stdout);
}
}
return 0;
}
timersub()
On an Ubuntu 16.04 LTS VM, I can find file /usr/include/x86_64-linux-gnu/sys/time.h containing a macro:
# define timersub(a, b, result) \
do { \
(result)->tv_sec = (a)->tv_sec - (b)->tv_sec; \
(result)->tv_usec = (a)->tv_usec - (b)->tv_usec; \
if ((result)->tv_usec < 0) { \
--(result)->tv_sec; \
(result)->tv_usec += 1000000; \
} \
} while (0)
All the indications I can see are that tv_usec is an __u32, an unsigned quantity. If that's the case, then the < 0 condition will never be true and you may sometimes see grotesquely large positive values instead. YMMV, of course.
Even my mileage varies
Further scrutiny shows that while there are headers that seem to use __u32 for tv_usec, those are not the main system headers.
/usr/include/linux/time.h: __kernel_suseconds_t tv_usec; /* microseconds */
/usr/include/linux/can/bcm.h: long tv_usec;
/usr/include/drm/exynos_drm.h: __u32 tv_usec;
/usr/include/drm/exynos_drm.h: __u32 tv_usec;
/usr/include/drm/vmwgfx_drm.h: uint32_t tv_usec;
/usr/include/drm/drm.h: __u32 tv_usec;
/usr/include/rpc/auth_des.h: uint32_t tv_usec; /* Microseconds. */
/usr/include/valgrind/vki/vki-darwin.h:#define vki_tv_usec tv_usec
/usr/include/valgrind/vki/vki-linux.h: vki_suseconds_t tv_usec; /* microseconds */
/usr/include/rpcsvc/rstat.x: unsigned int tv_usec; /* and microseconds */
/usr/include/rpcsvc/rstat.h: u_int tv_usec;
/usr/include/x86_64-linux-gnu/bits/utmpx.h: __int32_t tv_usec; /* Microseconds. */
/usr/include/x86_64-linux-gnu/bits/time.h: __suseconds_t tv_usec; /* Microseconds. */
/usr/include/x86_64-linux-gnu/bits/utmp.h: int32_t tv_usec; /* Microseconds. */
/usr/include/x86_64-linux-gnu/sys/time.h: (ts)->tv_nsec = (tv)->tv_usec * 1000; \
/usr/include/x86_64-linux-gnu/sys/time.h: (tv)->tv_usec = (ts)->tv_nsec / 1000; \
/usr/include/x86_64-linux-gnu/sys/time.h:# define timerisset(tvp) ((tvp)->tv_sec || (tvp)->tv_usec)
/usr/include/x86_64-linux-gnu/sys/time.h:# define timerclear(tvp) ((tvp)->tv_sec = (tvp)->tv_usec = 0)
/usr/include/x86_64-linux-gnu/sys/time.h: ((a)->tv_usec CMP (b)->tv_usec) : \
/usr/include/x86_64-linux-gnu/sys/time.h: (result)->tv_usec = (a)->tv_usec + (b)->tv_usec; \
/usr/include/x86_64-linux-gnu/sys/time.h: if ((result)->tv_usec >= 1000000) \
/usr/include/x86_64-linux-gnu/sys/time.h: (result)->tv_usec -= 1000000; \
/usr/include/x86_64-linux-gnu/sys/time.h: (result)->tv_usec = (a)->tv_usec - (b)->tv_usec; \
/usr/include/x86_64-linux-gnu/sys/time.h: if ((result)->tv_usec < 0) { \
/usr/include/x86_64-linux-gnu/sys/time.h: (result)->tv_usec += 1000000; \
It's worrying to see any code using an unsigned type for a member with that name, but that doesn't mean it is happening for code using struct timeval and timersub().
This code:
#include <sys/time.h>
#include <stdio.h>
int main(void)
{
struct timeval t = { .tv_sec = 0, .tv_usec = -1 };
printf("%ld %ld\n", (long)t.tv_sec, (long)t.tv_usec);
return 0;
}
compiled for 64-bit (so long is big enough to hold anything that tv_usec could be defined as) prints 0 -1, as it should. It would be possible to initialize the tv_usec member to 0, decrement it, and verify that it is negative, as the sketch below does, along with various other related tests.
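That decrement test might look like this (a quick sketch):
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval t = { .tv_sec = 0, .tv_usec = 0 };
    t.tv_usec--;    /* wraps to a huge positive value if the field is unsigned */
    if (t.tv_usec < 0)
        printf("tv_usec is signed, as expected\n");
    else
        printf("tv_usec appears to be unsigned: %lu\n", (unsigned long)t.tv_usec);
    return 0;
}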
So, the problem isn't as simple as "timersub() is wrong" — which is an immense relief.
Your printf formats are suspect, and could be causing this problem.
The %zu format is for printing size_t values. But neither tv_sec nor tv_usec has type size_t.
On a modern system, size_t is likely to be 64 bits, but if either tv_sec or tv_usec is not, printf will end up printing those values quite wrongly.
I changed your printfs to
printf("Done-arino! Onto the matrix multiplication (at %ld s, %03u ms)\n",
curr_time.tv_sec, curr_time.tv_usec);
and
printf("Elapsed time: %ld s, %03u ms\n",
elapsed_tv.tv_sec, elapsed_tv.tv_usec / 1000);
These won't necessarily be correct for you, though — it depends on your system's specific choices for tv_sec and tv_usec.
The general and portable way to print a value of an implementation-defined type like this is to explicitly cast it to the largest type it can be, then use the printf format corresponding to the cast-to type. For example:
printf("Done-arino! Onto the matrix multiplication (at %ld s, %03ld ms)\n",
(long)curr_time.tv_sec, (long)curr_time.tv_usec);
printf("Elapsed time: %ld s, %03ld ms\n",
(long)elapsed_tv.tv_sec, (long)elapsed_tv.tv_usec / 1000);
The cast might or might not be a no-op, but the point is that, no matter what the original type was, you end up with something that matches what you've told printf to expect.
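If you can rely on C99, an alternative to guessing the largest type is to cast to intmax_t and print with %jd, which is guaranteed to match any signed integer type:
#include <stdio.h>
#include <stdint.h>   /* intmax_t */
#include <sys/time.h>

int main(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    /* %jd expects intmax_t, so these casts are safe whatever the
       underlying types of tv_sec and tv_usec happen to be */
    printf("%jd s, %03jd ms\n", (intmax_t)tv.tv_sec, (intmax_t)tv.tv_usec / 1000);
    return 0;
}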
Related
Consider the following code:
struct timespec ts;
uint64_t start_time;
uint64_t stop_time;
if (clock_gettime(CLOCK_REALTIME, &ts) != 0) {
abort();
}
start_time = ts.tv_sec * UINT64_C(1000000000) + ts.tv_nsec;
/* some computation... */
if (clock_gettime(CLOCK_REALTIME, &ts) != 0) {
abort();
}
stop_time = ts.tv_sec * UINT64_C(1000000000) + ts.tv_nsec;
printf("%" PRIu64 "\n", (stop_time - start_time + 500000000) / 1000000000);
In the vast majority of cases, the code works as I expected, i.e., prints the number of seconds that took the computation.
Very rarely, however, an anomaly occurs: the program reports a number of seconds like 18446743875, 18446743877, 18446743962, etc.
I figured this number roughly matched 2^64 nanoseconds (~584 years).
So I got the suspicion that ts.tv_nsec is sometimes equal to −1.
So my question is:
What's wrong with my code?
Where and why does adding 2^64 nanoseconds happen?
I don't see anything wrong with your code. I suspect your OS is occasionally delivering an anomalous value for CLOCK_REALTIME — although I'm surprised, and I can't quite imagine what it might be.
I suggest rewriting your code like this:
struct timespec start_ts, stop_ts;
uint64_t start_time;
uint64_t stop_time;
if (clock_gettime(CLOCK_REALTIME, &start_ts) != 0) {
abort();
}
start_time = start_ts.tv_sec * UINT64_C(1000000000) + start_ts.tv_nsec;
/* some computation... */
if (clock_gettime(CLOCK_REALTIME, &stop_ts) != 0) {
abort();
}
stop_time = stop_ts.tv_sec * UINT64_C(1000000000) + stop_ts.tv_nsec;
uint64_t elapsed = (stop_time - start_time + 500000000) / 1000000000;
printf("%" PRIu64 "\n", elapsed);
if (elapsed > 365 * UINT64_C(86400)) {  /* elapsed is in seconds, so compare against a year in seconds */
    printf("ANOMALY:\n");
    printf("start_ts = %ld %ld\n", (long)start_ts.tv_sec, (long)start_ts.tv_nsec);
    printf("stop_ts = %ld %ld\n", (long)stop_ts.tv_sec, (long)stop_ts.tv_nsec);
}
Then, if/when it happens again, you'll have more information to go on.
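Separately, when you only care about elapsed time, CLOCK_MONOTONIC is usually a safer clock than CLOCK_REALTIME, because it can't be stepped backwards by clock adjustments. A minimal sketch of the same measurement on that clock:
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Nanoseconds on the monotonic clock; aborts on failure like the original. */
static uint64_t now_ns(void)
{
    struct timespec ts;
    if (clock_gettime(CLOCK_MONOTONIC, &ts) != 0)
        abort();
    return ts.tv_sec * UINT64_C(1000000000) + ts.tv_nsec;
}

int main(void)
{
    uint64_t start_time = now_ns();
    /* some computation... */
    uint64_t stop_time = now_ns();
    printf("%" PRIu64 "\n", (stop_time - start_time + 500000000) / 1000000000);
    return 0;
}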
I have a function and I want the function to stop running once it has been running for a certain number of milliseconds. This function works for seconds but I want to test it for milliseconds. How do I do this? If I set eliminate = 1, it corresponds to 1 second. How do I set eliminate = 5 ms?
Function:
void clsfy_proc(S_SNR_TARGET_SET pSonarTargetSet, unsigned char *target_num, time_t eliminate)
{
// get timing
time_t _start = time(NULL);
time_t _end = _start + eliminate;
int _eliminate = 0;
//some code
time_t start = time(NULL);
time_t end = start + eliminate;
for(_tidx = 0; _tidx < pSonarTargetSet[_i].num; _tidx++) {
// check timing
time_t _current = time(NULL);
if (_current > _end) {
printf("clsfy_proc(1), Eliminate due to timeout\n");
_eliminate = 1;
break;
}
//some code
if (_eliminate == 1)
break;
}
//some code
}
time_t is an absolute time, represented as the integer number of seconds since the UNIX epoch (midnight GMT, 1 January 1970). It is useful as an unambiguous, easy-to-work-with representation of a point in time.
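For example, the difference between two time_t values is computed portably with difftime():
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t start = time(NULL);
    /* ... some work ... */
    time_t end = time(NULL);
    /* difftime() returns the difference in seconds as a double,
       whatever the representation of time_t */
    printf("%.0f seconds elapsed\n", difftime(end, start));
    return 0;
}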
clock_t is a relative measurement of time, represented by an integer number of clock ticks since some implementation-defined point, so only the difference between two values is meaningful (and the counter may roll over). There are CLOCKS_PER_SEC clock ticks per second; the value of this constant can vary between different operating systems. Note that clock() returns the processor time the program has consumed, not wall-clock time. It is sometimes used for timing purposes, but it is not very good at that due to its relatively low resolution.
One small example for clock_t:
#include <time.h>
#include <stdio.h>
int main(void) {
    clock_t start_t, end_t;
    double total_t;    /* must be floating-point: the division below is fractional */
    int i;
    start_t = clock();
    printf("Starting of the program, start_t = %ld\n", (long)start_t);
    for (i = 0; i < 10000000; i++) { }    /* burn CPU time; build without optimization or the empty loop may be removed */
    end_t = clock();
    printf("End of the big loop, end_t = %ld\n", (long)end_t);
    total_t = (double)(end_t - start_t) / CLOCKS_PER_SEC;
    printf("Total time taken by CPU: %f\n", total_t);
    return 0;
}
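Note that neither time() nor clock() gives you the millisecond cutoff the question asks about; clock_gettime() does. Here is a sketch of such a timing check, with a hypothetical elapsed_ms() helper standing in for the question's loop:
#include <stdio.h>
#include <time.h>

/* Hypothetical helper: milliseconds elapsed since *start. */
static long elapsed_ms(const struct timespec *start)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return (now.tv_sec - start->tv_sec) * 1000L
         + (now.tv_nsec - start->tv_nsec) / 1000000L;
}

int main(void)
{
    struct timespec start;
    clock_gettime(CLOCK_MONOTONIC, &start);
    long eliminate_ms = 5;    /* the 5 ms budget from the question */
    while (elapsed_ms(&start) <= eliminate_ms) {
        /* some code... */
    }
    printf("Eliminate due to timeout\n");
    return 0;
}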
You can use getrusage().
Please see the example:
Source: http://www.cs.tufts.edu/comp/111/examples/Time/getrusage.c
#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>
///////////////////////////////////
// measure user and system time using the "getrusage" call.
///////////////////////////////////
//struct rusage {
// struct timeval ru_utime; /* user CPU time used */
// struct timeval ru_stime; /* system CPU time used */
// long ru_maxrss; /* maximum resident set size */
// long ru_ixrss; /* integral shared memory size */
// long ru_idrss; /* integral unshared data size */
// long ru_isrss; /* integral unshared stack size */
// long ru_minflt; /* page reclaims (soft page faults) */
// long ru_majflt; /* page faults (hard page faults) */
// long ru_nswap; /* swaps */
// long ru_inblock; /* block input operations */
// long ru_oublock; /* block output operations */
// long ru_msgsnd; /* IPC messages sent */
// long ru_msgrcv; /* IPC messages received */
// long ru_nsignals; /* signals received */
// long ru_nvcsw; /* voluntary context switches */
// long ru_nivcsw; /* involuntary context switches */
//};
//struct timeval
// {
// long int tv_sec; /* Seconds. */
// long int tv_usec; /* Microseconds. */
// };
int main(void) {
struct rusage buf;
// chew up some CPU time
int i,j; for (i=0,j=0; i<100000000; i++) { j+=i*i; }
getrusage(RUSAGE_SELF, &buf);
printf("user seconds without microseconds: %ld\n", buf.ru_utime.tv_sec);
printf("user microseconds: %ld\n", buf.ru_utime.tv_usec);
printf("total user seconds: %e\n",
(double) buf.ru_utime.tv_sec
+ (double) buf.ru_utime.tv_usec / (double) 1000000);
    return 0;
}
I'm trying to write to the terminal one line at a time, but it just prints the whole thing without sleeping. It works if I use sleep(1). Am I just not understanding how nanosleep() is supposed to work?
void
display_all(int fdin, int fdout)
{
struct timespec tm1,tm2;
tm1.tv_sec = 0;
tm1.tv_nsec = 1000000000L;
while (display_line(fdin, fdout) == 80)
{
nanosleep(&tm1,&tm2);
}
}
display_line() uses write() to write to STDOUT.
From the nanosleep man page:
The value of the nanoseconds field must be in the range 0 to 999999999.
Your tm1.tv_nsec = 1000000000L is just outside that range, so nanosleep() fails immediately with EINVAL instead of sleeping.
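So keep whole seconds in tv_sec and only the sub-second remainder in tv_nsec. For example, to express the one-second delay (or a 1.5-second one) within the valid ranges:
struct timespec tm1 = { .tv_sec = 1, .tv_nsec = 0 };          /* exactly 1 s */
struct timespec tm1_and_a_half = { .tv_sec = 1, .tv_nsec = 500000000L };  /* 1.5 s */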
#include <stdio.h>
#include <time.h>
#define MILLISECONDS 300
#define NLOOPS 10

void nsleep(long milliseconds) {
    struct timespec ts = {0, milliseconds * 1000000L};
    nanosleep(&ts, NULL);
}

int main(void) {
    short i;
    for (i = 0; i < NLOOPS; i++) {
        fflush(stdout);
        nsleep(MILLISECONDS);
        printf("%d milliseconds\n", MILLISECONDS);
    }
    return 0;
}
Or:
void nsleep(long milliseconds) {
    struct timespec ts;
    ts.tv_sec = 0;
    ts.tv_nsec = milliseconds * 1000000L;
    nanosleep(&ts, NULL);
}
I'd really like to implement a 25µs delay in a C program I am writing to read a sensor via an RPi 3. I've employed nanosleep() and usleep(), but the accuracy seems a bit off -- likely because the program cedes thread time to other programs, then must wait for them to finish. I run with 'nice -n -20' to ensure priority, but it still seems a bit less accurate than I'd like. I've also tried a for loop, but can't quite nail down the clock-tick:for-loop-count ratio required to get 25 µs (I'm very new to all this)... or maybe gcc is optimizing the empty loop into oblivion?
At any rate, might someone be able to point me in the direction of a microDelay() function or something like this? (I've spent hours googling and experimenting, but can't quite seem to find what I'm looking for). Thanks!
Achieving resolutions this low (less than 1 ms) is almost impossible in conventional multitasking operating systems without hardware support, but there is a software technique which can help. (I've tested it before.)
A software delay loop isn't an accurate solution on its own, because the operating system's scheduler can preempt the process at any time. But you can patch your kernel with PREEMPT_RT and enable it via the corresponding config option; that gives you a kernel with real-time scheduling support. A real-time kernel lets you run a process at real-time priority: such a process runs until it yields, and nothing can preempt it, so a delay loop inside it is not interrupted by the scheduler, and you can create accurate delays with these loops.
There was a point in Linux 2.x where nanosleep() had specific behavior for processes scheduled under a real-time policy like SCHED_FIFO or SCHED_RR: it would busy-wait when the requested sleep was below the minimum clock resolution or granularity. That behavior was later removed. (Try man nanosleep; I believe it is mentioned there.)
I needed a more precise sleep interval, so I wrote my own version to call in those special cases. On the target machine I was able to get < 10 µs delays with only occasional blips (see the comment in the code).
Just remember that for non-real-time scheduling policies, if your application tries to sleep for less than the minimum clock resolution, it may still be preempted.
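If you do want to combine the busy-wait with a real-time policy, the request looks roughly like this (a sketch; it needs root or CAP_SYS_NICE, and the priority value 50 is an arbitrary choice):
#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 50 };
    /* switch the calling process (pid 0 = self) to the SCHED_FIFO real-time policy */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");
    /* ... run the busy-sleep loop here ... */
    return 0;
}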
Here is a little test program that I wrote to test this; the busy loop calls clock_gettime() so it knows when it's time to wake:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <stdbool.h>  /* for the bool flag in main() */
#include <inttypes.h> /* for PRId64/PRIu64 in the result printf */
#include <unistd.h>
#include <time.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <signal.h>
void usage(char *name, const char *msg)
{
if ( msg )
fprintf(stderr,"%s\n",msg);
fprintf(stderr,"Usage: %s, -s<sleepNanoseconds> [-c<loopCount>] [-e]\n", name);
fprintf(stderr," -s<sleepNanoseconds> is the number nanoseconds to busy-sleep, usually < 60000\n");
fprintf(stderr," -c<loopCount> the number of loops to execute the busy sleep, default 1000000 \n");
fprintf(stderr," -e do not calculate min, max and avg. only elapsed time \n");
}
# define tscmp(a, b, CMP) \
(((a)->tv_sec == (b)->tv_sec) ? \
((a)->tv_nsec CMP (b)->tv_nsec) : \
((a)->tv_sec CMP (b)->tv_sec))
# define tsadd(a, b, result) \
do { \
(result)->tv_sec = (a)->tv_sec + (b)->tv_sec; \
(result)->tv_nsec = (a)->tv_nsec + (b)->tv_nsec; \
if ((result)->tv_nsec >= 1000000000) \
{ \
++(result)->tv_sec; \
(result)->tv_nsec -= 1000000000; \
} \
} while (0)
# define tssub(a, b, result) \
do { \
(result)->tv_sec = (a)->tv_sec - (b)->tv_sec; \
(result)->tv_nsec = (a)->tv_nsec - (b)->tv_nsec; \
if ((result)->tv_nsec < 0) { \
--(result)->tv_sec; \
(result)->tv_nsec += 1000000000; \
} \
} while (0)
///////////////////////////////////////////////////////////////////////////////
///
/// busySleep uses clock_gettime and an elapsed-time check to provide delays
/// for less than the minimum sleep resolution (~58 microseconds). As tested
/// on a XEON E5-1603, a sleep of 0 yields a delay of >= 375 Nsec, 1-360 about
/// 736 Nsec, 370-720 a little more than 1 Usec, 720-1080 a little less than
/// 1.5 Usec and generally it's pretty linear for delays of 10 Usec on up in
/// increments of 10 Usec, e.g., 10 Usec =~ 10.4, 20 Usec =~ 20.4 and so on.
///
///////////////////////////////////////////////////////////////////////////////
int busySleep( uint32_t nanoseconds )
{
struct timespec now;
struct timespec then;
struct timespec start;
struct timespec sleep;
if ( nanoseconds > 999999999 )
{
return 1;
}
clock_gettime( CLOCK_MONOTONIC_RAW, &start);
now = start;
sleep.tv_sec = 0;
sleep.tv_nsec = nanoseconds;
tsadd( &start, &sleep, &then );
while ( tscmp( &now, &then, < ) )
{
clock_gettime( CLOCK_MONOTONIC_RAW, &now);
}
return 0;
}
int main(int argc, char **argv)
{
uint32_t sleepNsecs = 1000000000;
uint32_t loopCount = 1000000;
bool elapsedOnly = false;
uint32_t found = 0;
int opt;
if ( argc < 2 )
{
    usage( argv[0], "Required options were not given" );
    return 1;
}
while ( (opt = getopt(argc, argv, "s:d:e")) != -1 )
{
switch ( opt )
{
case 's':
sleepNsecs = strtoul(optarg,NULL,0);
break;
case 'd':
loopCount = strtoul(optarg,NULL,0);
break;
case 'e':
elapsedOnly = true;
break;
default:
usage(argv[0],"Error: unrecognized option\n");
return 1;
}
found++;
}
if ( found < 1 )
{
usage( argv[0], "Invalid command line." );
return 1;
}
if ( sleepNsecs > 999999999 )
{
usage( argv[0], "Sleep nanoseconds must be less than one second." );
return 1;
}
printf("sleepNsecs set to %d\n",sleepNsecs);
struct timespec start;
struct timespec now;
struct timespec prev;
struct timespec elapsed;
struct timespec trem;
uint64_t count = 0;
int64_t sum = 0;
int64_t min = 99999999;
int64_t max = 0;
clock_gettime( CLOCK_MONOTONIC_RAW, &start);
now = start;
prev = start;
//while ( tscmp( &now, &then, < ) )
for ( uint32_t i = 0; i < loopCount; i++ )
{
int rc = busySleep( sleepNsecs );
if ( rc != 0 )
{
fprintf( stderr, "busySleep returned an error!\n" );
return 1;
}
if ( ! elapsedOnly )
{
clock_gettime( CLOCK_MONOTONIC_RAW, &now);
tssub( &now, &prev, &trem );
min = ( min < trem.tv_nsec ? min : trem.tv_nsec );
max = ( max > trem.tv_nsec ? max : trem.tv_nsec );
count++;
sum += trem.tv_nsec;
prev = now;
}
}
if ( ! elapsedOnly )
{
printf("Min: %lu, Max: %lu, avg %lu, count %lu\n",min,max,(sum / count),count);
}
else
{
clock_gettime( CLOCK_MONOTONIC_RAW, &now);
tssub( &now, &start, &elapsed );
double secs = ((double)elapsed.tv_sec) + ((double) elapsed.tv_nsec / (double)1e9 );
fprintf( stderr, "Elapsed time of %ld.%09ld for %u sleeps of duration %u, avg. = %.9f Ns\n",
elapsed.tv_sec, elapsed.tv_nsec, loopCount, sleepNsecs, (secs / loopCount) );
}
return 0;
}
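Assuming the program is built as, say, busysleep (a hypothetical name), the questioner's 25 µs delay could be exercised like this:
./busysleep -s 25000 -d 100000 -e
That runs 100000 busy-sleeps of 25000 ns each and reports the total elapsed time and the average per sleep.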
What is the use of tim.tv_sec and tim.tv_nsec in the following?
How can I pause execution for 500000 microseconds?
#include <stdio.h>
#include <time.h>
int main()
{
struct timespec tim, tim2;
tim.tv_sec = 1;
tim.tv_nsec = 500;
if(nanosleep(&tim , &tim2) < 0 )
{
printf("Nano sleep system call failed \n");
return -1;
}
printf("Nano sleep successfull \n");
return 0;
}
Half a second is 500,000,000 nanoseconds, so your code should read:
tim.tv_sec = 0;
tim.tv_nsec = 500000000L;
As things stand, your code is sleeping for 1.0000005 s (1 s + 500 ns).
tv_nsec is the sleep time in nanoseconds. 500000us = 500000000ns, so you want:
nanosleep((const struct timespec[]){{0, 500000000L}}, NULL);
500000 microseconds is 500000000 nanoseconds, but you are only waiting 500 ns = 0.5 µs (on top of the full second in tv_sec).
This worked for me:
#include <stdio.h>
#include <time.h> /* Needed for struct timespec */
int mssleep(long milliseconds)
{
    struct timespec rem;
    struct timespec req = {
        milliseconds / 1000,            /* secs (must be non-negative) */
        (milliseconds % 1000) * 1000000 /* nanos (must be in range 0 to 999999999) */
    };
    return nanosleep(&req, &rem);
}
int main()
{
int ret = mssleep(2500);
printf("sleep result %d\n",ret);
return 0;
}
I usually use some #defines and constants to make the calculation easy:
#define NANO_SECOND_MULTIPLIER 1000000 // 1 millisecond = 1,000,000 Nanoseconds
const long INTERVAL_MS = 500 * NANO_SECOND_MULTIPLIER;
Hence my code would look like this:
struct timespec sleepValue = {0};
sleepValue.tv_nsec = INTERVAL_MS;
nanosleep(&sleepValue, NULL);
A more correct variant, which restarts the sleep with the remaining time if a signal interrupts it:
{
struct timespec delta = {5 /*secs*/, 135 /*nanosecs*/};
while (nanosleep(&delta, &delta));
}
POSIX 7
First find the function: http://pubs.opengroup.org/onlinepubs/9699919799/functions/nanosleep.html
That contains a link to time.h, which, as a header, should be where the structs are defined:
The <time.h> header shall declare the timespec structure, which shall include at least the following members:
time_t tv_sec Seconds.
long tv_nsec Nanoseconds.
man 2 nanosleep
Pseudo-official glibc docs which you should always check for syscalls:
struct timespec {
time_t tv_sec; /* seconds */
long tv_nsec; /* nanoseconds */
};