I have a function and I want it to stop running once it has been running for a certain number of milliseconds. The code below works for seconds, but I want to test it with milliseconds. How do I do this? If I set eliminate = 1, it corresponds to 1 second; how do I set eliminate to 5 ms?
Function:
void clsfy_proc(S_SNR_TARGET_SET pSonarTargetSet, unsigned char *target_num, time_t eliminate)
{
    // get timing
    time_t _start = time(NULL);
    time_t _end = _start + eliminate;
    int _eliminate = 0;

    //some code
    time_t start = time(NULL);
    time_t end = start + eliminate;

    for (_tidx = 0; _tidx < pSonarTargetSet[_i].num; _tidx++) {
        // check timing
        time_t _current = time(NULL);
        if (_current > _end) {
            printf("clsfy_proc(1), Eliminate due to timeout\n");
            _eliminate = 1;
            break;
        }
        //some code
        if (_eliminate == 1)
            break;
    }
    //some code
}
time_t is an absolute time, represented as the integer number of seconds since the UNIX epoch (midnight GMT, 1 January 1970). It is useful as an unambiguous, easy-to-work-with representation of a point in time.
clock_t is a relative measurement of time, represented by an integer number of clock ticks since some implementation-defined starting point (with no guarantees about what that point is, and the counter may roll over quite often). There are CLOCKS_PER_SEC clock ticks per second; the value of this constant can vary between operating systems. Note that clock() measures the processor time your program has consumed, not wall-clock time. It is sometimes used for timing purposes, but its relatively low resolution makes it a poor fit for fine-grained measurements.
One small example for clock_t:
#include <time.h>
#include <stdio.h>

int main(void) {
    clock_t start_t, end_t;
    double total_t;   /* elapsed seconds, so it must be a double, not a clock_t */
    int i;

    start_t = clock();
    printf("Start of the program, start_t = %ld\n", (long)start_t);

    for (i = 0; i < 10000000; i++) { }   /* busy loop (build without optimization) */

    end_t = clock();
    printf("End of the big loop, end_t = %ld\n", (long)end_t);

    total_t = (double)(end_t - start_t) / CLOCKS_PER_SEC;
    printf("Total time taken by CPU: %f\n", total_t);
    return 0;
}
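Coming back to the original question: time() only ticks once per second, so it can never express a 5 ms deadline. One option on POSIX systems is clock_gettime() with CLOCK_MONOTONIC. Below is a minimal sketch under that assumption; now_ms() and clsfy_proc_ms() are illustrative stand-ins for the question's code, not part of it:

#include <stdio.h>
#include <time.h>

/* Milliseconds on a monotonic clock (unaffected by system time changes). */
static long long now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

/* eliminate_ms = 5 means "give up after 5 milliseconds". */
void clsfy_proc_ms(long eliminate_ms)
{
    long long end = now_ms() + eliminate_ms;

    for (;;) {                       /* stands in for the question's target loop */
        if (now_ms() > end) {
            printf("clsfy_proc(1), Eliminate due to timeout\n");
            break;
        }
        /* some code */
    }
}

On older glibc you may need to link with -lrt for clock_gettime().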
You can use getrusage().
Please see the example:
Source: http://www.cs.tufts.edu/comp/111/examples/Time/getrusage.c
#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>
///////////////////////////////////
// measure user and system time using the "getrusage" call.
///////////////////////////////////
//struct rusage {
// struct timeval ru_utime; /* user CPU time used */
// struct timeval ru_stime; /* system CPU time used */
// long ru_maxrss; /* maximum resident set size */
// long ru_ixrss; /* integral shared memory size */
// long ru_idrss; /* integral unshared data size */
// long ru_isrss; /* integral unshared stack size */
// long ru_minflt; /* page reclaims (soft page faults) */
// long ru_majflt; /* page faults (hard page faults) */
// long ru_nswap; /* swaps */
// long ru_inblock; /* block input operations */
// long ru_oublock; /* block output operations */
// long ru_msgsnd; /* IPC messages sent */
// long ru_msgrcv; /* IPC messages received */
// long ru_nsignals; /* signals received */
// long ru_nvcsw; /* voluntary context switches */
// long ru_nivcsw; /* involuntary context switches */
//};
//struct timeval
// {
// long int tv_sec; /* Seconds. */
// long int tv_usec; /* Microseconds. */
// };
int main(void) {
    struct rusage buf;

    // chew up some CPU time
    int i, j;
    for (i = 0, j = 0; i < 100000000; i++) { j += i * i; }

    getrusage(RUSAGE_SELF, &buf);

    printf("user seconds without microseconds: %ld\n", buf.ru_utime.tv_sec);
    printf("user microseconds: %ld\n", buf.ru_utime.tv_usec);
    printf("total user seconds: %e\n",
           (double) buf.ru_utime.tv_sec
           + (double) buf.ru_utime.tv_usec / (double) 1000000);
    return 0;
}
Related
When I print the microseconds field from gettimeofday(), I notice that it is sometimes larger than 1,000,000. Does anyone know why this is? And does it imply that I've been interpreting gettimeofday() wrong?
For the record, my assumption is that the current time (in microseconds) according to gettimeofday() is the following:
struct timeval ts;
gettimeofday(&ts, NULL);
printf("%zu", ts.tv_sec * 1000000 + ts.tv_usec);
Edit: Here is the code that is causing the problem. Based on the comments below, the printf() might be at fault.
struct timeval curr_time;
gettimeofday(&curr_time, NULL);
printf("Done-arino! Onto the matrix multiplication (at %zu s, %03zu ms)\n", curr_time.tv_sec, curr_time.tv_usec);
// Matrix Multiplication
struct timeval start_tv, end_tv, elapsed_tv;
gettimeofday(&start_tv, NULL);
for (i = 0; i < N; i++)
    for (j = 0; j < N; j++)
        for (k = 0; k < N; k++)
            C[i][j] += A[i][k] * B[k][j];
gettimeofday(&end_tv, NULL);
timersub(&end_tv, &start_tv, &elapsed_tv);
// Print results
printf("Elapsed time: %zu s, %03zu ms\n", elapsed_tv.tv_sec, elapsed_tv.tv_usec / 1000);
After a successful call to gettimeofday, yes, tv_usec is guaranteed to be strictly less than 1000000.
If you (think you) saw a value of 1000000 or greater, then yes, it's likely you were doing something wrong.
A frequent mistake is to add or subtract two struct timeval values naively, without implementing proper carry or borrow between the tv_sec and tv_usec fields, and this can easily lead to mistaken tv_usec values greater than 1000000. (In your edited post you subtract struct timeval values, but you're using the system-supplied timersub macro, which ought to get the borrow right.)
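For reference, a correct manual subtraction has to borrow explicitly, along these lines (essentially what the timersub macro quoted further down does; timeval_sub is just an illustrative name):

#include <sys/time.h>

/* res = a - b, borrowing so that 0 <= res->tv_usec < 1000000 (assumes a >= b). */
void timeval_sub(const struct timeval *a, const struct timeval *b,
                 struct timeval *res)
{
    res->tv_sec  = a->tv_sec  - b->tv_sec;
    res->tv_usec = a->tv_usec - b->tv_usec;
    if (res->tv_usec < 0) {        /* borrow one second */
        res->tv_sec  -= 1;
        res->tv_usec += 1000000;
    }
}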
If you were using a struct timespec instead of struct timeval, and if a leap second were going on, and if you were (miraculously) using an OS kernel that implemented the CLOCK_UTC clock type proposed by Markus Kuhn at https://www.cl.cam.ac.uk/~mgk25/posix-clocks.html, you'd see tv_nsec values greater than 1000000000, but that's a lot of "if"s. (And to my knowledge no kernel in widespread use has ever implemented CLOCK_UTC.)
You'll need to show some more convincing code, and identify the platform on which you run into the problem.
For example:
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    while (1)
    {
        struct timeval ts;
        if (gettimeofday(&ts, 0) == 0 && ts.tv_usec >= 1000000)
            printf("%ld s; %ld µs\n", (long)ts.tv_sec, (long)ts.tv_usec);
    }
    return 0;
}
The very busy loop is a tad irksome; maybe you should use nanosleep() to sleep for a microsecond or two on every iteration:
#include <stdio.h>
#include <sys/time.h>
#include <time.h>

int main(void)
{
    while (1)
    {
        struct timeval tv;
        if (gettimeofday(&tv, 0) == 0 && tv.tv_usec >= 1000000)
            printf("%ld s; %ld µs\n", (long)tv.tv_sec, (long)tv.tv_usec);
        struct timespec ts = { .tv_sec = 0, .tv_nsec = 2000 };
        nanosleep(&ts, 0);
    }
    return 0;
}
Or, including a progress meter to demonstrate that the code is running:
#include <stdio.h>
#include <sys/time.h>
#include <time.h>

int main(void)
{
    size_t loop_count = 0;
    size_t line_count = 0;
    while (1)
    {
        struct timeval tv;
        if (gettimeofday(&tv, 0) == 0 && tv.tv_usec >= 1000000)
            printf("%ld s; %ld µs\n", (long)tv.tv_sec, (long)tv.tv_usec);
        struct timespec ts = { .tv_sec = 0, .tv_nsec = 2000 };
        nanosleep(&ts, 0);
        if (++loop_count > 100000)
        {
            loop_count = 0;
            putchar('.');
            line_count++;
            if (line_count >= 50)
            {
                putchar('\n');
                line_count = 0;
            }
            fflush(stdout);
        }
    }
    return 0;
}
timersub()
On an Ubuntu 16.04 LTS VM, I can find file /usr/include/x86_64-linux-gnu/sys/time.h containing a macro:
# define timersub(a, b, result)                        \
  do {                                                 \
    (result)->tv_sec = (a)->tv_sec - (b)->tv_sec;      \
    (result)->tv_usec = (a)->tv_usec - (b)->tv_usec;   \
    if ((result)->tv_usec < 0) {                       \
      --(result)->tv_sec;                              \
      (result)->tv_usec += 1000000;                    \
    }                                                  \
  } while (0)
All the indications I can see are that tv_usec is an __u32, an unsigned quantity. If that's the case, then the < 0 condition will never be true and you may sometimes see grotesquely large positive values instead. YMMV, of course.
Even my mileage varies
Further scrutiny shows that while there are headers that seem to use __u32 for tv_usec, those are not the main system headers.
/usr/include/linux/time.h: __kernel_suseconds_t tv_usec; /* microseconds */
/usr/include/linux/can/bcm.h: long tv_usec;
/usr/include/drm/exynos_drm.h: __u32 tv_usec;
/usr/include/drm/exynos_drm.h: __u32 tv_usec;
/usr/include/drm/vmwgfx_drm.h: uint32_t tv_usec;
/usr/include/drm/drm.h: __u32 tv_usec;
/usr/include/rpc/auth_des.h: uint32_t tv_usec; /* Microseconds. */
/usr/include/valgrind/vki/vki-darwin.h:#define vki_tv_usec tv_usec
/usr/include/valgrind/vki/vki-linux.h: vki_suseconds_t tv_usec; /* microseconds */
/usr/include/rpcsvc/rstat.x: unsigned int tv_usec; /* and microseconds */
/usr/include/rpcsvc/rstat.h: u_int tv_usec;
/usr/include/x86_64-linux-gnu/bits/utmpx.h: __int32_t tv_usec; /* Microseconds. */
/usr/include/x86_64-linux-gnu/bits/time.h: __suseconds_t tv_usec; /* Microseconds. */
/usr/include/x86_64-linux-gnu/bits/utmp.h: int32_t tv_usec; /* Microseconds. */
/usr/include/x86_64-linux-gnu/sys/time.h: (ts)->tv_nsec = (tv)->tv_usec * 1000; \
/usr/include/x86_64-linux-gnu/sys/time.h: (tv)->tv_usec = (ts)->tv_nsec / 1000; \
/usr/include/x86_64-linux-gnu/sys/time.h:# define timerisset(tvp) ((tvp)->tv_sec || (tvp)->tv_usec)
/usr/include/x86_64-linux-gnu/sys/time.h:# define timerclear(tvp) ((tvp)->tv_sec = (tvp)->tv_usec = 0)
/usr/include/x86_64-linux-gnu/sys/time.h: ((a)->tv_usec CMP (b)->tv_usec) : \
/usr/include/x86_64-linux-gnu/sys/time.h: (result)->tv_usec = (a)->tv_usec + (b)->tv_usec; \
/usr/include/x86_64-linux-gnu/sys/time.h: if ((result)->tv_usec >= 1000000) \
/usr/include/x86_64-linux-gnu/sys/time.h: (result)->tv_usec -= 1000000; \
/usr/include/x86_64-linux-gnu/sys/time.h: (result)->tv_usec = (a)->tv_usec - (b)->tv_usec; \
/usr/include/x86_64-linux-gnu/sys/time.h: if ((result)->tv_usec < 0) { \
/usr/include/x86_64-linux-gnu/sys/time.h: (result)->tv_usec += 1000000; \
It's worrying to see any code using an unsigned type for a member with that name, but that doesn't mean it is happening for code using struct timeval and timersub().
This code:
#include <sys/time.h>
#include <stdio.h>

int main(void)
{
    struct timeval t = { .tv_sec = 0, .tv_usec = -1 };
    printf("%ld %ld\n", (long)t.tv_sec, (long)t.tv_usec);
    return 0;
}
compiled for 64-bit (so long is big enough to hold anything that tv_usec could be defined as) prints 0 -1 as it should. It would be possible to initialize the tv_usec member to 0, decrement it, and verify that it is negative, and various other related tests.
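That check might look something like this quick sketch (if tv_usec were unsigned, the comparison would always be false, and most compilers would warn about it):

#include <sys/time.h>
#include <stdio.h>

int main(void)
{
    struct timeval t = { .tv_sec = 0, .tv_usec = 0 };
    t.tv_usec--;                 /* wraps to a huge value if tv_usec is unsigned */
    if (t.tv_usec < 0)
        printf("tv_usec is signed, as expected\n");
    else
        printf("tv_usec appears to be unsigned: %lu\n",
               (unsigned long)t.tv_usec);
    return 0;
}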
So, the problem isn't as simple as "timersub() is wrong" — which is an immense relief.
Your printf formats are suspect, and could be causing this problem.
The %zu format is for printing size_t values. But neither tv_sec nor tv_usec has type size_t.
On a modern system, size_t is likely to be 64 bits. But if either tv_sec or tv_usec is not, printf will end up printing those values quite wrongly.
I changed your printfs to
printf("Done-arino! Onto the matrix multiplication (at %ld s, %03u ms)\n",
curr_time.tv_sec, curr_time.tv_usec);
and
printf("Elapsed time: %ld s, %03u ms\n",
elapsed_tv.tv_sec, elapsed_tv.tv_usec / 1000);
These won't necessarily be correct for you, though — it depends on your system's specific choices for tv_sec and tv_usec.
The general and portable way to print a value of an implementation-defined type like this is to explicitly cast it to the largest type it can be, then use the printf format corresponding to the cast-to type. For example:
printf("Done-arino! Onto the matrix multiplication (at %ld s, %03ld ms)\n",
(long)curr_time.tv_sec, (long)curr_time.tv_usec);
printf("Elapsed time: %ld s, %03ld ms\n",
(long)elapsed_tv.tv_sec, (long)elapsed_tv.tv_usec / 1000);
The cast might or might not be a no-op, but the point is that, no matter what the original type was, you end up with something that matches what you've told printf to expect.
I need to get the process/thread time. I am using Linux 4.13.4 on Ubuntu 16.04.
I read some posts and gathered the following about sum_exec_runtime:

- total time the process ran on the CPU
- in real time
- in nanosecond units (10^-9)
- updated in __update_curr(), called from update_curr()
So I think that, for a single-threaded program, I can get the running time of the thread by reading sum_exec_runtime from its task_struct.
I added syscalls to read this time, making some small changes inside the Linux kernel:
struct task_struct {
    ...
    ...
    struct sched_entity se;
    // added: snapshots of sum_exec_runtime at start and end
    u64 vm_start_time;
    u64 vm_end_time;
    ...
    ...
};
Then I added syscalls that store sum_exec_runtime into vm_start_time and vm_end_time when called:
asmlinkage long sys_start_vm_timer(int vm_tid);
asmlinkage long sys_end_vm_timer(int vm_tid, unsigned long long __user *time);
SYSCALL_DEFINE1(start_vm_timer,
                int, vm_tid)
{
    (find_task_by_vpid(vm_tid))->vm_start_time =
        (find_task_by_vpid(vm_tid))->se.sum_exec_runtime;
    return 0;
}

SYSCALL_DEFINE2(end_vm_timer,
                int, vm_tid,
                unsigned long long __user *, time)
{
    u64 vm_time;

    (find_task_by_vpid(vm_tid))->vm_end_time =
        (find_task_by_vpid(vm_tid))->se.sum_exec_runtime;
    printk("-------------------\n");
    printk("end_vm_time: vm_elapsed_time = %llu \n",
           ((find_task_by_vpid(vm_tid))->vm_end_time -
            (find_task_by_vpid(vm_tid))->vm_start_time));
    vm_time = ((find_task_by_vpid(vm_tid))->vm_end_time -
               (find_task_by_vpid(vm_tid))->vm_start_time);
    copy_to_user(time, &vm_time, sizeof(unsigned long long));
    return 0;
}
I tested with this program, which tries to get the time of a for loop:
#include <stdio.h>
#include <linux/kernel.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <time.h>
#include <sys/time.h>
#include <stdlib.h>

int main(){
    int tid = syscall(SYS_gettid);
    printf("tid = %d \n", tid);
    printf("My process ID : %d\n", getpid());
    printf("My parent's ID: %d\n", getppid());

    struct timeval start, end;
    unsigned long long elapsedTime = 0;

    gettimeofday(&start, NULL);
    syscall(336, tid);

    int i = 0;
    int j = 0;
    for(i = 0; i < 65535; i++){
        j += 1;
    }

    syscall(337, tid, &elapsedTime);
    gettimeofday(&end, NULL);

    printf("thread time = %llu microseconds \n", elapsedTime/1000);
    printf("gettimeofday = %ld microseconds \n",
           ((end.tv_sec * 1000000 + end.tv_usec) - (start.tv_sec * 1000000 + start.tv_usec)));
    return 0;
}
I get an unexpected result:
wxf#wxf:/home/wxf/cPrj$ ./thread_time
tid = 6905
My process ID : 6905
My parent's ID: 6595
thread time = 0 microseconds
gettimeofday = 422 microseconds
From dmesg,
[63287.065285] tid = 6905
[63287.065288] start_vm_timer = 0
[63287.065701] tid = 6905
[63287.065702] -------------------
[63287.065703] end_vm_timer = 0
[63287.065704] end_vm_time: vm_elapsed_time = 0
I expected them to be almost the same. So why is the process/thread time 0?
The value of sum_exec_runtime does not include the current runtime of the thread. It's updated when needed (on scheduler events such as ticks and task switches), not continuously; see the update_curr function. Your loop runs for only a few hundred microseconds, so the scheduler most likely never updates the counter between your two syscalls, and the difference stays 0.
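As an aside, if the goal is just per-thread CPU time, you can read it from user space without patching the kernel, via clock_gettime() with CLOCK_THREAD_CPUTIME_ID, which does account for the in-flight runtime. A minimal sketch:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec a, b;
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &a);

    volatile long j = 0;                 /* volatile so the loop isn't optimized out */
    for (long i = 0; i < 10000000; i++)
        j += 1;

    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &b);
    long long ns = (long long)(b.tv_sec - a.tv_sec) * 1000000000LL
                 + (b.tv_nsec - a.tv_nsec);
    printf("thread CPU time = %lld ns\n", ns);
    return 0;
}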
I'm trying to record the time my program takes to finish in seconds, with sub-second accuracy.
I'm not recording CPU time or cycles, I simply want to be able to mark a start point (Wall Clock time), then at the end of my program mark a finish (Wall Clock time), and calculate the delta.
Have a look at the function:
int clock_gettime(clockid_t clk_id, struct timespec *tp);
The function will fill the structure struct timespec you provide. Here is its definition:
struct timespec {
    time_t tv_sec;  /* seconds */
    long   tv_nsec; /* nanoseconds */
};
So the returned time in nanoseconds is: tp->tv_sec * 1e9 + tp->tv_nsec.
You can find all the possible clk_id values in the man page. I would recommend CLOCK_MONOTONIC, as it guarantees that the reported time is always continuous, even if the system time is modified.
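For example, here is a minimal sketch that measures the wall-clock time a piece of work takes (the sleep(1) stands in for your program's work; on older glibc you may need to link with -lrt):

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);

    sleep(1);   /* the work being timed */

    clock_gettime(CLOCK_MONOTONIC, &end);
    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("Elapsed: %.6f seconds\n", elapsed);
    return 0;
}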
Just call time(NULL) to get the current time and use difftime to calculate the time between two points.
#include <time.h>
// ...
time_t start = time(NULL);
// do stuff here
time_t end = time(NULL);
printf("Took %f seconds\n", difftime(end, start));
This displays start/end time stamps and calculates a delta in seconds.
#include <time.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>

void print_timestamp(char *, time_t);

int main(int argc, char *argv[]) {
    time_t start = time(0);
    print_timestamp("Start: ", start);
    sleep(2);
    time_t end = time(0);
    print_timestamp("End: ", end);
    double diff = difftime(end, start);
    printf("Elapsed: %5.2lf seconds\n", diff);
}

void
print_timestamp(char *msg, time_t time) {
    struct tm *tm;
    if ((tm = localtime(&time)) == NULL) {
        printf("Error extracting time stuff\n");
        return;
    }
    printf("%s %04d-%02d-%02d %02d:%02d:%02d\n",
           msg,
           1900 + tm->tm_year,
           tm->tm_mon + 1,
           tm->tm_mday,
           tm->tm_hour,
           tm->tm_min,
           tm->tm_sec);
}
Sample output:
Start: 2017-02-04 15:33:36
End: 2017-02-04 15:33:38
Elapsed: 2.00 seconds
You may also be able to use the time command available on (at least) Unix systems.
After compiling your program, run the command like this:
# compile your code
$ gcc foo.c -o foo
# compute the time
$ time ./foo
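Typical output looks something like this (the numbers here are purely illustrative):

real    0m2.005s
user    0m1.982s
sys     0m0.014s

real is the wall-clock time; user and sys are the CPU time spent in user and kernel mode respectively.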
You can use the clock() function to record the number of processor ticks used and then convert this to seconds (note that this measures CPU time, not wall-clock time):
#include <time.h>
#include <stdio.h>
#include <math.h>

int main(void) {
    clock_t start_t = clock();

    double a = 0;
    for (int i = 0; i < 10000000; i++) {
        a += sqrt(a);
    }

    clock_t end_t = clock();
    double total_t = (double)(end_t - start_t) / CLOCKS_PER_SEC;
    printf("Total time taken by CPU: %lf\n", total_t);
    return 0;
}
I have a packet sniffer (see below). To measure bandwidth, I think I need to start a timer at the beginning of receiving data, record the number of bytes that have been transmitted, and then calculate the average bandwidth. To measure the time to receive/send data I did:
int main() {
    // usual packet sniffer stuff up until the while loop
    struct pcap_pkthdr header;
    const unsigned char *packet;
    char errbuf[PCAP_ERRBUF_SIZE];
    char *device;
    pcap_t *pcap_handle;
    int i;

    device = pcap_lookupdev(errbuf);
    if (device == NULL) perror("pcap_lookupdev failed");

    printf("Sniffing on device %s\n", device);

    pcap_handle = pcap_open_live(device, 4096, 1, 0, errbuf);
    if (pcap_handle == NULL) perror("pcap_open_live failed");

    while (1) {
        // starting the timer
        double diff = 0.0;
        time_t start;
        time_t stop;
        char buff[128];
        time(&start);

        // receiving packet
        packet = pcap_next(pcap_handle, &header);

        // stopping the timer
        time(&stop);

        // measuring time of receiving data
        diff = difftime(stop, start);
        process_packet(packet, header.len, diff);
    }
}
diff always turns out to be 0.0000, which is probably wrong. Is my understanding correct, and if so, are there any problems with the code?
I also tried using milliseconds:
float diff;
clock_t start;
clock_t stop;
char buff[128];
start = clock();
packet = pcap_next(pcap_handle, &header);//just creates a pointer in no time
stop = clock();
diff = (((float)stop - (float)start) / 1000000.0F ) * 1000;
The same output...
Insufficient number of samples or too coarse a clock.
The number of packets between the start and stop is likely far too small. time_t typically only has a resolution of 1 second. clock_t has an implementation-defined number of ticks per second, CLOCKS_PER_SEC; I've seen values of 18.2, 100, 1000, and others. It, too, may be too coarse for 1 data packet.
I recommend increasing the amount of data transferred so the test lasts at least 10x the clock period. So if you are using time_t and running at 19,200 baud, transfer 192,000 bytes.
For consistency, synchronizing the start time helps. The sample below uses time_t; the same approach works for clock_t, just scale accordingly.
// sync to the start of a clock tick
time_t was;
time(&was);
time_t now;
do {
    time(&now);
} while (now == was);

// do test
do_test(); // about 10 seconds

// results
time_t later;
time(&later);
time_t delta = later - now;
int BitsPerDataByte = 1 + 8 + 1; // start bit + 8 data bits + stop bit
double TestedBaud = 1.0 * DataBytesSent * BitsPerDataByte / delta;
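Putting that advice together with the sniffer: rather than timing a single pcap_next() call, accumulate bytes over many packets and divide by the elapsed time from a fine-grained clock. A sketch under that assumption; measure_bandwidth and npackets are illustrative names, and pcap_handle is assumed to be opened as in the question:

#include <pcap/pcap.h>
#include <stdio.h>
#include <time.h>

void measure_bandwidth(pcap_t *pcap_handle, int npackets)
{
    struct pcap_pkthdr header;
    long total_bytes = 0;
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < npackets; i++) {
        if (pcap_next(pcap_handle, &header) != NULL)
            total_bytes += header.len;
    }
    clock_gettime(CLOCK_MONOTONIC, &end);

    double secs = (end.tv_sec - start.tv_sec)
                + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("%ld bytes in %.6f s = %.1f bytes/s\n",
           total_bytes, secs, total_bytes / secs);
}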
What is the use of tim.tv_sec and tim.tv_nsec in the following?
How can I sleep execution for 500000 microseconds?
#include <stdio.h>
#include <time.h>

int main()
{
    struct timespec tim, tim2;
    tim.tv_sec = 1;
    tim.tv_nsec = 500;

    if (nanosleep(&tim, &tim2) < 0) {
        printf("Nano sleep system call failed \n");
        return -1;
    }

    printf("Nano sleep successful \n");
    return 0;
}
Half a second is 500,000,000 nanoseconds, so your code should read:
tim.tv_sec = 0;
tim.tv_nsec = 500000000L;
As things stand, your code is sleeping for 1.0000005 s (1 s + 500 ns).
tv_nsec is the sleep time in nanoseconds. 500000us = 500000000ns, so you want:
nanosleep((const struct timespec[]){{0, 500000000L}}, NULL);
500000 microseconds is 500000000 nanoseconds, but you are only waiting for 500 ns = 0.5 µs.
This worked for me ....
#include <stdio.h>
#include <time.h> /* needed for struct timespec */

int mssleep(long milliseconds)
{
    struct timespec rem;
    struct timespec req = {
        (int)(milliseconds / 1000),     /* secs (must be non-negative) */
        (milliseconds % 1000) * 1000000 /* nano (must be in range of 0 to 999999999) */
    };
    return nanosleep(&req, &rem);
}

int main()
{
    int ret = mssleep(2500);
    printf("sleep result %d\n", ret);
    return 0;
}
I usually use some #define and constants to make the calculation easy:
#define NANO_SECOND_MULTIPLIER 1000000 // 1 millisecond = 1,000,000 Nanoseconds
const long INTERVAL_MS = 500 * NANO_SECOND_MULTIPLIER;
Hence my code would look like this:
struct timespec sleepValue = {0}; /* in C (unlike C++) the struct keyword is required */
sleepValue.tv_nsec = INTERVAL_MS;
nanosleep(&sleepValue, NULL);
A more robust variant, which resumes the sleep if it is interrupted by a signal:
{
    struct timespec delta = {5 /*secs*/, 135 /*nanosecs*/};
    while (nanosleep(&delta, &delta));
}
POSIX 7
First find the function: http://pubs.opengroup.org/onlinepubs/9699919799/functions/nanosleep.html
That contains a link to time.h, which as a header is where the structure should be defined:
    The <time.h> header shall declare the timespec structure, which shall include at least the following members:
    time_t  tv_sec   Seconds.
    long    tv_nsec  Nanoseconds.
man 2 nanosleep
Pseudo-official glibc docs which you should always check for syscalls:
struct timespec {
    time_t tv_sec; /* seconds */
    long   tv_nsec; /* nanoseconds */
};