Measure time for sending a file via socket in C [duplicate] - c

How can I use a timer in C? I need to wait up to 500 ms for a job. Please suggest a good way to do this. I tried sleep(3);, but that approach does no work during the wait, and I have something that should keep trying to get input until the time runs out.

Here's a solution I used (it needs #include <time.h> and #include <stdio.h>):
int msec = 0, trigger = 10; /* 10 ms */
int iterations = 0;
clock_t before = clock();
do {
    /*
     * Do something to busy the CPU just here while you drink a coffee.
     * Be sure this code will not take more than `trigger` ms.
     */
    clock_t difference = clock() - before;
    msec = difference * 1000 / CLOCKS_PER_SEC;
    iterations++;
} while (msec < trigger);
printf("Time taken %d seconds %d milliseconds (%d iterations)\n",
       msec / 1000, msec % 1000, iterations);

You can use the clock() function from time.h.
Store the start time in a clock_t variable by calling clock(), then check the elapsed time by taking the difference between the stored value and the current value of clock().

Yes, you need a loop. If you already have a main loop (most GUI event-driven stuff does) you can probably stick your timer into that. Use:
#include <time.h>
time_t my_t, fire_t;
Then (for times over 1 second), initialize your timer by reading the current time:
my_t = time(NULL);
Add the number of seconds your timer should wait and store the result in fire_t. A time_t is essentially an integer count of seconds (historically 32-bit, often 64-bit today), so you may need to cast it.
Inside your loop do another
my_t = time(NULL);
if (my_t > fire_t) then consider the timer fired and do the stuff you want there. That will probably include resetting it by doing another fire_t = time(NULL) + seconds_to_wait for next time.
A time_t is a somewhat antiquated Unix way of storing time as the number of seconds since midnight on 1 January 1970, but it has many advantages. For times of less than 1 second you need to use gettimeofday() (microseconds) or clock_gettime() (nanoseconds) and deal with a struct timeval or struct timespec, which hold a time_t plus the microseconds or nanoseconds elapsed since that whole second. Making a timer works the same way, except that when you add your time to wait you need to remember to do the carry manually (into the time_t) if the resulting microseconds or nanoseconds value goes over one second. Yes, it's messy. See man 2 time, man gettimeofday, man clock_gettime.
sleep(), usleep() and nanosleep() have a hidden benefit. You see them as pausing your program, but what they really do is release the CPU for that amount of time. Repeatedly polling by reading the time and comparing it to the target time (are we there yet?) burns a lot of CPU cycles, which may slow down other programs running on the same machine (and uses more electricity/battery). It's better to sleep() for most of the interval and only then start checking the time, as in the sketch below.
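Putting those two paragraphs together, here is a minimal sketch (illustrative names, assuming a POSIX system where clock_gettime() and CLOCK_MONOTONIC are available; older glibc may need linking with -lrt): build the deadline with the manual nanosecond carry, release the CPU for most of the interval with nanosleep(), and only poll the clock for the final stretch.
#define _POSIX_C_SOURCE 200809L
#include <time.h>

/* Sketch: wait roughly `ms` milliseconds.
 * 1. Build the deadline, carrying nanoseconds into tv_sec by hand.
 * 2. nanosleep() through most of the interval (releases the CPU).
 * 3. Poll the clock only for the last couple of milliseconds. */
static void wait_ms(long ms)
{
    struct timespec end, now;

    clock_gettime(CLOCK_MONOTONIC, &end);
    end.tv_sec  += ms / 1000;
    end.tv_nsec += (ms % 1000) * 1000000L;
    if (end.tv_nsec >= 1000000000L) {      /* the manual carry */
        end.tv_sec  += 1;
        end.tv_nsec -= 1000000000L;
    }

    if (ms > 2) {                          /* sleep for all but ~2 ms */
        struct timespec nap = { .tv_sec = (ms - 2) / 1000,
                                .tv_nsec = ((ms - 2) % 1000) * 1000000L };
        nanosleep(&nap, NULL);
    }

    do {                                   /* short final poll */
        clock_gettime(CLOCK_MONOTONIC, &now);
    } while (now.tv_sec < end.tv_sec ||
            (now.tv_sec == end.tv_sec && now.tv_nsec < end.tv_nsec));
}
The 2 ms margin before the final poll is an arbitrary trade-off between accuracy and CPU use.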
If you're trying to sleep and do work at the same time you need threads.
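For completeness, a minimal sketch of the threaded variant (assuming POSIX threads; compile with -pthread; names are illustrative): one thread sleeps for 500 ms and then raises a flag, while the main thread keeps doing work, such as polling for input, until the flag is set.
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static _Atomic int timer_fired = 0;

/* Timer thread: sleep for 500 ms, then raise the flag. */
static void *timer_thread(void *arg)
{
    (void)arg;
    usleep(500 * 1000);        /* 500 ms; this thread sleeps, main keeps working */
    timer_fired = 1;
    return NULL;
}

int main(void)
{
    pthread_t tid;
    if (pthread_create(&tid, NULL, timer_thread, NULL) != 0)
        return 1;

    long iterations = 0;
    while (!timer_fired) {
        iterations++;          /* "try to get any input" would go here */
    }
    pthread_join(tid, NULL);
    printf("Did %ld iterations in roughly 500 ms\n", iterations);
    return 0;
}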

Maybe this example will help you.
#include <stdio.h>
#include <time.h>
#include <stdlib.h>
/*
 * Simple timeout implementation.
 * Input: number of milliseconds to wait.
 * Usage:
 *   setTimeout(1000)  - 1 second timeout
 *   setTimeout(10100) - 10 seconds and 100 milliseconds timeout
 */
void setTimeout(int milliseconds)
{
    // If milliseconds is less than or equal to 0,
    // simply return from the function without raising an error
    if (milliseconds <= 0) {
        fprintf(stderr, "Count of milliseconds for timeout is less than or equal to 0\n");
        return;
    }
    // current time in milliseconds
    int milliseconds_since = clock() * 1000 / CLOCKS_PER_SEC;
    // time in milliseconds at which this timeout should return
    int end = milliseconds_since + milliseconds;
    // busy-wait until the end time is reached
    do {
        milliseconds_since = clock() * 1000 / CLOCKS_PER_SEC;
    } while (milliseconds_since <= end);
}
int main()
{
    // input from the user for the delay time in seconds
    int delay;
    printf("Enter delay: ");
    scanf("%d", &delay);
    // count down until launch while the delay is greater than 0
    do {
        // erase the previous line and display the remaining delay
        printf("\033[ATime left for run rocket: %d\n", delay);
        // a one-second pause between updates
        setTimeout(1000);
        // decrease the delay by 1
        delay--;
    } while (delay >= 0);
    // a string for displaying the rocket (4 bytes: 3 characters plus '\0')
    char rocket[4] = "-->";
    // a string for displaying the whole trace of the rocket and the rocket itself
    char *rocket_trace = (char *) malloc(100 * sizeof(char));
    // display the trace of the rocket from start to end
    int i;
    char passed_way[100] = "";
    for (i = 0; i <= 50; i++) {
        setTimeout(25);
        sprintf(rocket_trace, "%s%s", passed_way, rocket);
        passed_way[i] = ' ';
        printf("\033[A");
        printf("| %s\n", rocket_trace);
    }
    free(rocket_trace);
    // erase the line and write a new one
    printf("\033[A");
    printf("\033[2K");
    puts("Good luck!");
    return 0;
}
Compile the file, run it, and delete it afterwards (my preference):
$ gcc timeout.c -o timeout && ./timeout && rm timeout
Try running it yourself to see the result.
Notes:
Testing environment
$ uname -a
Linux wlysenko-Aspire 3.13.0-37-generic #64-Ubuntu SMP Mon Sep 22 21:28:38 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
$ gcc --version
gcc (Ubuntu 4.8.5-2ubuntu1~14.04.1) 4.8.5
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Related

How to extend clock() execution time

My C program can run for more than 3 hours. For my experiment, I want to calculate the duration (i.e., execution time) taken by the program until it finishes. I use start = clock(); at the beginning of main(), do end = clock(); at the end, and finally subtract end - start to get the execution time. However, as it is said here, clock_t clock(void) is limited to 72 minutes. How can I make it count the whole execution time, not only 72 minutes?
The time() function is specified in C89, C99, C11. It has second resolution and usually spans more than 30-bits worth of seconds. It's likely the most portable solution. In fact, I'd never heard of clock() until today. Counting ticks is rarely what you want even if you need high resolution.
If you don't need a portable way to measure CPU/execution time, use procfs. /proc/self/stat's stime field and sysconf(_SC_CLK_TCK) should be all you need.
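A rough sketch of that procfs approach (Linux-specific; field positions follow proc(5), utime/stime are in clock ticks, and error handling is kept minimal):
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Sketch: read this process's CPU times from /proc/self/stat (Linux only).
 * utime/stime are fields 14 and 15 in proc(5) and are given in clock ticks,
 * so divide by sysconf(_SC_CLK_TCK) to get seconds. We search for the ')'
 * that ends the comm field because comm itself may contain spaces. */
int main(void)
{
    char buf[4096];
    FILE *f = fopen("/proc/self/stat", "r");
    if (!f) {
        perror("/proc/self/stat");
        return 1;
    }
    if (!fgets(buf, sizeof buf, f)) {
        fclose(f);
        return 1;
    }
    fclose(f);

    const char *p = strrchr(buf, ')');      /* end of the comm field */
    unsigned long utime, stime;
    /* skip: state, ppid, pgrp, session, tty_nr, tpgid, flags,
     *       minflt, cminflt, majflt, cmajflt */
    if (!p || sscanf(p + 1,
                     " %*c %*s %*s %*s %*s %*s %*s %*s %*s %*s %*s %lu %lu",
                     &utime, &stime) != 2) {
        fprintf(stderr, "unexpected /proc/self/stat format\n");
        return 1;
    }

    long hz = sysconf(_SC_CLK_TCK);         /* clock ticks per second */
    printf("user CPU: %.2f s, system CPU: %.2f s\n",
           (double)utime / hz, (double)stime / hz);
    return 0;
}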
Use gettimeofday() (https://linux.die.net/man/2/gettimeofday). It offers microsecond resolution over a very long period. Record the start time and the end time and calculate the difference.
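A minimal sketch of that start/end pattern with gettimeofday() (the sleep() call only stands in for the long-running work):
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
    struct timeval start, end;
    gettimeofday(&start, NULL);

    sleep(2);   /* stand-in for the long-running work */

    gettimeofday(&end, NULL);
    double elapsed = (double)(end.tv_sec - start.tv_sec)
                   + (double)(end.tv_usec - start.tv_usec) / 1e6;
    printf("Elapsed: %.6f seconds\n", elapsed);
    return 0;
}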
The standard across all POSIXy systems, including Linux, is the clock_gettime() POSIX.1 function.
Consider the following example:
#define _POSIX_C_SOURCE 200809L
#include <stdlib.h>
#include <time.h>

/* Clock used by wall_start()/wall_elapsed() */
#ifdef CLOCK_MONOTONIC
#define WALL_CLOCK_ID CLOCK_MONOTONIC
#else
#define WALL_CLOCK_ID CLOCK_REALTIME
#endif

static struct timespec wall_started = { 0 };

static inline void wall_start(void)
{
    if (clock_gettime(WALL_CLOCK_ID, &wall_started)) {
        wall_started.tv_sec = 0;
        wall_started.tv_nsec = 0;
    }
}

/* Return the number of wall-clock seconds elapsed since wall_start() */
static inline double wall_elapsed(void)
{
    struct timespec t;
    if (!clock_gettime(WALL_CLOCK_ID, &t))
        return (double)(t.tv_sec - wall_started.tv_sec)
             + (double)(t.tv_nsec - wall_started.tv_nsec) / 1000000000.0;
    else
        return -1.0;
}

/* Return the number of seconds of CPU time
   used by this process (includes all threads)
*/
static inline double cpu_elapsed(void)
{
    struct timespec t;
    if (!clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &t))
        return (double)t.tv_sec
             + (double)t.tv_nsec / 1000000000.0;
    return -1.0;
}
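A possible usage sketch (same file as the helpers above; add #include <stdio.h>, and do_lengthy_work() is only a placeholder for your own computation):
/* Usage sketch: do_lengthy_work() is a placeholder for the multi-hour work. */
int main(void)
{
    wall_start();

    do_lengthy_work();

    printf("Wall clock: %.3f seconds, CPU time: %.3f seconds\n",
           wall_elapsed(), cpu_elapsed());
    return 0;
}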
If you want to display the time in days, hours, minutes, and seconds, you'll also need a simple function to split the (floating-point) seconds into days, hours, and minutes.
Here is one implementation, which takes pointers to ints for days, hours, and minutes; you can use NULL if you don't want to split that out. The function returns the remaining seconds:
static inline double split_seconds(double secs,
                                   int *days,
                                   int *hours,
                                   int *minutes)
{
    /* We split the absolute number of seconds, only. */
    if (secs < 0.0)
        secs = 0.0;

    if (days) {
        const int ndays = (int)(secs / 86400.0);
        secs -= (double)ndays * 86400.0;
        *days = ndays;
    }

    if (hours) {
        const int nhours = (int)(secs / 3600.0);
        secs -= (double)nhours * 3600.0;
        *hours = nhours;
    }

    if (minutes) {
        const int nminutes = (int)(secs / 60.0);
        secs -= (double)nminutes * 60.0;
        *minutes = nminutes;
    }

    return secs;
}
For example, calling split_seconds(3661.25, NULL, &h, NULL) returns 61.25 with h == 1. Calling split_seconds(3661.25, &d, &h, &m) returns 1.25, with d == 0, h == 1, m == 1, corresponding to 0 days, 1 hour, 1 minute, and 1.25 seconds.
The CLOCK_REALTIME clock is the standard wall clock in POSIXy systems, but it is affected by NTP (Network Time Protocol) changes, and the system administrator can directly set it. It is not, however, affected by Daylight Saving Time or anything related to timezones, because it is in UTC, not local time.
The CLOCK_MONOTONIC clock is similar to CLOCK_REALTIME, except that its epoch is unknown (probably set to some time in the past when the machine last booted), and it is not affected by NTP time jumps (but is affected by small incremental changes by NTP, to keep the computer clock synchronized to network time sources), and is not affected by system time changes by the system administrator.
If available, CLOCK_MONOTONIC is considered better for measuring elapsed real-world time than CLOCK_REALTIME; CLOCK_REALTIME is better suited to cases where you compare to an absolute real-world time, or check if a specific date/time has already passed or not.
If you intend to store a timestamp to e.g. a file, you must use CLOCK_REALTIME and not CLOCK_MONOTONIC, because the latter is only meaningful on that same machine, and only until the next boot.
When using CLOCK_REALTIME, remember that it is in UTC, and users normally specify their times and dates in local time; you probably want to use strptime() POSIX.1 function to parse the text (use #define _XOPEN_SOURCE in Linux), and mktime() to generate the time_t you can store to the tv_sec member of a struct timespec structure.
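For instance, turning a user-supplied local date/time string into a time_t might look roughly like this (a sketch; the format string, the example date, and the error handling are illustrative):
#define _XOPEN_SOURCE 700      /* for strptime() on Linux */
#include <stdio.h>
#include <string.h>
#include <time.h>

int main(void)
{
    const char *text = "2024-03-01 14:30:00";   /* example local time */
    struct tm tm;
    memset(&tm, 0, sizeof tm);
    if (strptime(text, "%Y-%m-%d %H:%M:%S", &tm) == NULL) {
        fprintf(stderr, "could not parse '%s'\n", text);
        return 1;
    }
    tm.tm_isdst = -1;                 /* let mktime() decide DST */
    time_t when = mktime(&tm);        /* interprets tm as local time */
    if (when == (time_t)-1) {
        fprintf(stderr, "mktime failed\n");
        return 1;
    }

    struct timespec ts = { .tv_sec = when, .tv_nsec = 0 };
    printf("Seconds since the Epoch (UTC): %lld\n", (long long)ts.tv_sec);
    return 0;
}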

Simple <Time.h> program takes large amount CPU

I was trying to familiarize myself with the C time.h library by writing something simple in VS. The following code simply prints the value of x added to itself every two seconds:
int main() {
    time_t start = time(NULL);
    time_t clock = time(NULL);
    time_t clockTemp = time(NULL); //temporary clock
    int x = 1;
    //program will continue for a minute (60 sec)
    while (clock <= start + 58) {
        clockTemp = time(NULL);
        if (clockTemp >= clock + 2) { //if 2 seconds has passed
            clock = clockTemp;
            x = ADD(x);
            printf("%d at %d\n", x, timeDiff(start, clock));
        }
    }
}

int timeDiff(int start, int at) {
    return at - start;
}
My concern is with the amount of CPU that this program takes, about 22%. I figure this problem stems from the constant updating of clockTemp (just below the while statement), but I'm not sure how to fix this issue. Is it possible that this is a Visual Studio problem, or is there a special way to check for time?
Solution
The code needed a sleep call so that the loop wouldn't run constantly.
I added #include <windows.h> and put Sleep(2000); /* 2-second sleep */ at the end of the while loop:
while (clock <= start + 58) {
    ...
    Sleep(2000);
}
The problem is not in the way you are checking the current time. The problem is that there is nothing to limit the frequency with which the loop runs. Your program continues to execute statements as quickly as it can, and eats up a ton of processor time. (In the absence of other programs, on a single-threaded CPU, it would use 100% of your processor time.)
You need to add a "sleep" method inside your loop, which will indicate to the processor that it can stop processing your program for a short period of time. There are many ways to do this; this question has some examples.
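On a POSIX system, one alternative to the Windows Sleep() call shown above is nanosleep(); a small helper sketch (the name sleep_ms is just illustrative):
#define _POSIX_C_SOURCE 199309L
#include <time.h>

/* Sketch: sleep for roughly `ms` milliseconds using nanosleep() (POSIX),
 * releasing the CPU instead of spinning on time(). */
static void sleep_ms(long ms)
{
    struct timespec req = { .tv_sec = ms / 1000,
                            .tv_nsec = (ms % 1000) * 1000000L };
    nanosleep(&req, NULL);
}
Calling sleep_ms(2000); at the bottom of the question's while loop has the same effect as the Sleep(2000) shown above, without the Windows dependency.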

timestamp in c with milliseconds precision

I'm relatively new to C programming and I'm working on a project which needs to be very time accurate; therefore I tried to write something to create a timestamp with milliseconds precision.
It seems to work but my question is whether this way is the right way, or is there a much easier way? Here is my code:
#include<stdio.h>
#include<time.h>
void wait(int milliseconds)
{
clock_t start = clock();
while(1) if(clock() - start >= milliseconds) break;
}
int main()
{
time_t now;
clock_t milli;
int waitMillSec = 2800, seconds, milliseconds = 0;
struct tm * ptm;
now = time(NULL);
ptm = gmtime ( &now );
printf("time before: %d:%d:%d:%d\n",ptm->tm_hour,ptm->tm_min,ptm->tm_sec, milliseconds );
/* wait until next full second */
while(now == time(NULL));
milli = clock();
/* DO SOMETHING HERE */
/* for testing wait a user define period */
wait(waitMillSec);
milli = clock() - milli;
/*create timestamp with milliseconds precision */
seconds = milli/CLOCKS_PER_SEC;
milliseconds = milli%CLOCKS_PER_SEC;
now = now + seconds;
ptm = gmtime( &now );
printf("time after: %d:%d:%d:%d\n",ptm->tm_hour,ptm->tm_min,ptm->tm_sec, milliseconds );
return 0;
}
The following code seems likely to provide millisecond granularity:
#include <windows.h>
#include <stdio.h>
int main(void) {
    SYSTEMTIME t;
    GetSystemTime(&t); // or GetLocalTime(&t)
    printf("The system time is: %02d:%02d:%02d.%03d\n",
           t.wHour, t.wMinute, t.wSecond, t.wMilliseconds);
    return 0;
}
This is based on http://msdn.microsoft.com/en-us/library/windows/desktop/ms724950%28v=vs.85%29.aspx. The above code snippet was tested with CYGWIN on Windows 7.
For Windows 8, there is GetSystemTimePreciseAsFileTime, which "retrieves the current system date and time with the highest possible level of precision (<1us)."
Your original approach would probably be ok 99.99% of the time (ignoring one minor bug, described below). Your approach is:
Wait for the next second to start, by repeatedly calling time() until the value changes.
Save that value from time().
Save the value from clock().
Calculate all subsequent times using the current value of clock() and the two saved values.
Your minor bug was that you had the first two steps reversed.
But even with this fixed, this is not guaranteed to work 100%, because there is no atomicity. Two problems:
Your code loops time() until you are into the next second. But how far are you into it? It could be 1/2 a second, or even several seconds (e.g. if you are running a debugger with a breakpoint).
Then you call clock(). But this saved value has to 'match' the saved value of time(). If these two calls are almost instantaneous, as they usually are, then this is fine. But Windows (and Linux) time-slice, and so there is no guarantee.
Another issue is the granularity of clock. If CLOCKS_PER_SEC is 1000, as seems to be the case on your system, then of course the best you can do is 1 msec. But it can be worse than that: on Unix systems it is typically 15 msecs. You could improve this by replacing clock with QueryPerformanceCounter(), as in the answer to timespec equivalent for windows, but this may be otiose, given the first two problems.
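For reference, a minimal sketch of the QueryPerformanceCounter() approach on Windows (the Sleep(100) call is only a stand-in for the code being timed):
#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);   /* counts per second */
    QueryPerformanceCounter(&t0);

    Sleep(100);                         /* stand-in for the work being timed */

    QueryPerformanceCounter(&t1);
    double ms = (double)(t1.QuadPart - t0.QuadPart) * 1000.0
              / (double)freq.QuadPart;
    printf("Elapsed: %.3f ms\n", ms);
    return 0;
}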
Clock periods are not at all guaranteed to be in milliseconds. You need to explicitly convert the output of clock() to milliseconds.
clock_t t1, t2;
t1 = clock();
// do something
t2 = clock();
long millis = (t2 - t1) * (1000.0 / CLOCKS_PER_SEC);
Since you are on Windows, why don't you just use Sleep()?

Using clock() to measure execution time

I am running a C program using GCC and a proprietary DSP cross-compiler to simulate some functionality. I am using the following code to measure the execution time of a particular part of my program:
clock_t start, end;
printf("DECODING DATA:\n");
start = clock();
conv3_dec(encoded, decoded, 3*length, 0);
end = clock();
duration = (double)(end - start) / CLOCKS_PER_SEC;
printf("DECODING TIME = %f\n", duration);
where conv3_dec() is a function defined in my program and I want to find the run-time of this function.
Now the thing is, when my program runs, the conv3_dec() function runs for almost 2 hours, but the output from printf("DECODING TIME = %f\n",duration) says that the execution of the function finished in just half a second (DECODING TIME = 0.455443). This is very confusing for me.
Is this being caused by the cross-compiler? As a side note, the simulator models a DSP processor running at just 500 MHz, so is the difference in clock speeds between the DSP processor and my CPU causing the error in measuring CLOCKS_PER_SEC?
clock() measures CPU time and not wall-clock time. Since you are not running the majority of your code on the CPU, this is not the right tool.
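A tiny sketch of the difference: sleeping (or waiting on a simulator or I/O) does not consume CPU time, so clock() barely moves while time() keeps advancing.
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    clock_t c0 = clock();
    time_t  t0 = time(NULL);

    sleep(3);   /* not on the CPU: like waiting for the simulator or I/O */

    printf("clock(): %.3f s of CPU time\n",
           (double)(clock() - c0) / CLOCKS_PER_SEC);
    printf("time():  %ld s of wall-clock time\n",
           (long)(time(NULL) - t0));
    return 0;
}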
For durations like two hours, I wouldn't be too concerned about clock(), it's far more useful for measuring sub-second durations.
Just use time() if you want the actual elapsed time, something like (dummy stuff supplied for what was missing):
#include <stdio.h>
#include <time.h>

// Dummy stuff starts here
#include <unistd.h>

#define encoded 0
#define decoded 0
#define length 0

static void conv3_dec (int a, int b, int c, int d) {
    sleep (20);
}
// Dummy stuff ends here

int main (void) {
    time_t start, end, duration;

    puts ("DECODING DATA:");
    start = time (0);
    conv3_dec (encoded, decoded, 3 * length, 0);
    end = time (0);

    duration = end - start;
    printf ("DECODING TIME = %ld\n", (long) duration);

    return 0;
}
which generates:
DECODING DATA:
DECODING TIME = 20
The gettimeofday() function can also be considered.
The gettimeofday() function shall obtain the current time, expressed as seconds and microseconds since the Epoch, and store it in the timeval structure pointed to by tp. The resolution of the system clock is unspecified.
Calculating elapsed time in a C program in milliseconds
http://www.ccplusplus.com/2011/11/gettimeofday-example.html

Creating a timeout using time and difftime

gcc (GCC) 4.6.0 20110419 (Red Hat 4.6.0-5)
I am trying to get the start and end times, and the difference between them.
The function I have is for creating an API for our existing hardware.
The API wait_events takes one argument, a time in milliseconds. So I am trying to get the start time before the while loop, using time() to get the number of seconds. Then, after each iteration of the loop, I get the time difference and compare that difference with the timeout.
Many thanks for any suggestions,
/* Wait for an event up to a specified time out.
* If an event occurs before the time out return 0
* If an event timeouts out before an event return -1 */
int wait_events(int timeout_ms)
{
    time_t start = 0;
    time_t end = 0;
    double time_diff = 0;
    /* convert to seconds */
    int timeout = timeout_ms / 100;

    /* Get the initial time */
    start = time(NULL);

    while(TRUE) {
        if(open_device_flag == TRUE) {
            device_evt.event_id = EVENT_DEV_OPEN;
            return TRUE;
        }

        /* Get the end time after each iteration */
        end = time(NULL);

        /* Get the difference between times */
        time_diff = difftime(start, end);
        if(time_diff > timeout) {
            /* timed out before getting an event */
            return FALSE;
        }
    }
}
The function will be called like this:
int main(void)
{
#define TIMEOUT 500 /* 1/2 sec */

    while(TRUE) {
        if(wait_events(TIMEOUT) != 0) {
            /* Process incoming event */
            printf("Event fired\n");
        }
        else {
            printf("Event timed out\n");
        }
    }

    return 0;
}
=============== EDIT with updated results ==================
1) With no sleep -> 99.7% - 100% CPU
2) Setting usleep(10) -> 25% CPU
3) Setting usleep(100) -> 13% CPU
4) Setting usleep(1000) -> 2.6% CPU
5) Setting usleep(10000) -> 0.3 - 0.7% CPU
You're overcomplicating it - simplified:
time_t start = time(NULL);
for (;;) {
    // try something
    if (time(NULL) > start + 5) {
        printf("5s timeout!\n");
        break;
    }
}
time_t is in general just an integer type (an int or long int depending on your platform) counting the number of seconds since January 1st, 1970.
Side note:
int timeout = timeout_ms / 1000;
One second consists of 1000 milliseconds.
Edit - another note:
You'll most likely have to ensure that the other thread(s) and/or event handling can happen, so include some kind of thread inactivity (using sleep(), nanosleep() or whatever).
Without calling a Sleep() function, this is a really bad design: your loop will use 100% of the CPU. Even if you are using threads, your other threads won't have much time to run, as this thread will use many CPU cycles.
You should design something like that:
while(true) {
    Sleep(100); // let's say you want a precision of 100 ms
    // Do the compare time stuff here
}
If you need precise timing and are using different threads/processes, use mutexes (semaphores with an increment/decrement of 1) or critical sections to make sure the time comparison in your function is not interrupted by another process/thread of your own.
I believe your Red Hat is a System V-style system, so you can synchronize using IPC.
