How to make a minutes and seconds timer in C

I'm struggling to make a timer in c that counts minutes and seconds. I'm trying to test it by printing the time to the console but it doesn't seem to show anything. Does anything look wrong in my code?
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>

#define TRUE 1

int main(void)
{
    int min = 0;
    int sec = 0;

    while (TRUE)
    {
        sec++;
        Sleep(1000);
        printf("%2d:%2d", min, sec);
        if (sec == 59)
        {
            min++;
            sec = 0;
        }
    }
    return 0;
}

For performance reasons, printf output is buffered; that is, it won't be displayed until the buffer is full. The first thing to try is adding a newline character to the end:
printf("%2d:%2d\n", min, sec);
If that doesn't work, you can force the output buffer to flush by calling fflush(stdout);
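For example, a minimal sketch of the corrected loop (this keeps the Windows Sleep() from the question and also rolls the seconds over at 60, as the other answers point out; the fflush is redundant once the newline is there, but it does no harm):

#include <stdio.h>
#include <windows.h>

int main(void)
{
    int min = 0, sec = 0;

    for (;;)
    {
        Sleep(1000);
        sec++;
        if (sec == 60) { min++; sec = 0; }  /* roll over at 60, not 59 */
        printf("%02d:%02d\n", min, sec);    /* the newline flushes line-buffered stdout */
        fflush(stdout);                     /* forces output even if stdout is fully buffered */
    }
}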

I would just check the system time rather than keeping track of seconds/minutes. This is because your Sleep may not be exactly 1000 ms, so over time your counter will not be accurate.
Since you're using Windows, here's a slightly modified version of your code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <windows.h>

int main(void)
{
    for (;;)
    {
        time_t now = time(NULL);
        struct tm *ptm = localtime(&now);
        printf("%02d:%02d\n", ptm->tm_min, ptm->tm_sec);
        Sleep(1000);
    }
    return 0;
}
I hope that helps.

Since you ask: yes, there are several things wrong in your code. I'll point them out as I read through it; see below.
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>

#define TRUE 1

int main(void)
{
    int min = 0;
    int sec = 0;
    while (TRUE)
    {
        sec++;
        Sleep(1000);
There's a problem with Sleep(). You ask the kernel to pause your program for 1000 milliseconds, which means the kernel will wake your program no sooner than 1 second after you call it. That doesn't account for the fact that, once woken, your process is merely put back on the scheduler's run queue; in general it will not get the CPU immediately, but only after some delay. And even once it has the CPU, the rest of the loop body takes some time to execute, so each iteration ends up longer than 1000 ms and your clock runs slow. It is better to read the system time and display that, or to take a timestamp when you start and then display the difference between that start time and a fresh timestamp taken at each display. The system time is maintained by an interrupt that fires at regular intervals (driven by a precise clock oscillator), so you get an accurate clock that way instead of a slow one (how slow it gets depends on things like how many other processes are running on the system).
printf("%2d:%2d", min, sec);
This has already been covered in other answers, but let me explain how buffering works. A buffer is a block of memory (commonly 512 bytes or more) that printf() fills, so that write() is only called once a complete buffer of data has accumulated. This saves stdio from issuing a system call for every little piece of output and makes it more efficient when transferring large amounts of data.
For interactive applications that behaviour is not appropriate, as no output would appear unless you forced the buffers out with fflush() before every input. So when the output is a tty device (something stdio can detect from the file descriptor associated with standard output), it switches to line buffering, which means printf() flushes the buffer when: 1) it fills up, or 2) a newline \n character appears in the output. So one way to solve your problem is to put a \n at the end of the string, or to call fflush(stdout); after calling printf().
        if (sec == 59)
Because you increment the seconds before comparing, you have to check against 60, not 59: the value 60 is what needs to be converted back to 0 once the increment has already happened (otherwise your minutes only last 59 seconds).
        {
            min++;
You would have to do the same with the minutes when they reach 60. Since you have not included that code, I assume you are not keeping track of hours.
            sec = 0;
        }
    }
    return 0;
}
A complete solution illustrating what I mean could be:
#include <stdlib.h>
#include <stdio.h>
#include <time.h>
#include <windows.h>

int main(void)
{
    time_t start_timestamp;   /* time_t, not long: on Windows time_t is 64-bit */
    time_t display_timestamp;

    start_timestamp = time(NULL);
    for (;;) { /* this is the same as while(TRUE) */
        display_timestamp = time(NULL);
        int elapsed = (int)(display_timestamp - start_timestamp);
        int min = elapsed / 60;
        int sec = elapsed % 60;
        /* the \r in the next printf prints a carriage return, so the
         * time is redrawn on the same line, updating in place */
        printf("\r%02d:%02d", min, sec);
        fflush(stdout);
        Sleep(100); /* since you print elapsed time, it doesn't matter
                     * if you shorten the interval between displays */
    }
}

Does anything look wrong in my code?
Several things look wrong. The first is stdout line buffering - see eduffy's answer for that.
The second problem is that you're doing sec++; Sleep(1000); and counting on the argument being milliseconds. That is true for the Windows Sleep(), but the POSIX sleep() takes seconds, so the equivalent-looking sleep(1000) would increment sec only once every 1000 seconds or more.
The third problem is that if (sec == 59) is wrong and needs to be if (sec == 60); otherwise you'll only get 59 seconds per minute.
The fourth problem is that a 1-second sleep only guarantees sleeping for at least 1 second; it may sleep for 2 seconds, or 10 seconds, or 1234 seconds. To guard against this you want something more like this:
expiry = now() + delay;
while (true) {
    sleep(expiry - now());
    expiry += delay;
}
The basic idea being that if one sleep takes too long then the next sleep will sleep less; and it'll end up being "correct on average".
The last problem is that sleep() doesn't really have enough precision. For 1-second delays you want to be able to sleep for fractions of a second (e.g. 9/10ths of a second). Depending on which compiler and OS you're using, there is probably something better available (e.g. nanosleep()). Ideally you want some sort of "sleep until expiry_time" call that avoids the jitter caused by IRQs and/or task switches occurring after you read the current time but before you actually go to sleep; on POSIX systems, clock_nanosleep() with TIMER_ABSTIME provides exactly that.
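As a concrete illustration, here is a minimal sketch of that pattern for a POSIX system (an assumption on my part; the question targets Windows, where GetTickCount64() and Sleep() would play the same roles). It uses clock_nanosleep() with TIMER_ABSTIME, so every wake-up targets an absolute deadline and a late wake-up simply shortens the next sleep:

#define _POSIX_C_SOURCE 200809L  /* for clock_gettime()/clock_nanosleep() */
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec expiry;
    clock_gettime(CLOCK_MONOTONIC, &expiry);   /* starting point */

    int min = 0, sec = 0;
    for (;;) {
        expiry.tv_sec += 1;                    /* next absolute deadline, 1 s later */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &expiry, NULL);

        if (++sec == 60) { sec = 0; min++; }
        printf("\r%02d:%02d", min, sec);
        fflush(stdout);                        /* \r is not \n, so flush explicitly */
    }
}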

Related

Only updating the displayed time elapsed when the time changes in C

I would like to display the seconds elapsed since the start of the program:
volatile time_t start_time = time(NULL);
volatile time_t target_seconds = 60*60*17;
volatile time_t time_passed = 0;

while (1)
{
    time_passed = time(NULL) - start_time;
    printf("\rTime elapsed=%lu/%lu(seconds)", time_passed, target_seconds);
}
Output:
Time elapsed=1/61200(second)
But it will keep updating the display no matter what value time_passed is.
Now I only want to update the displayed elapsed time when the actual time increments.
So I changed the program in this way:
volatile time_t start_time = time(NULL);
volatile time_t target_seconds = 60*60*17;
volatile time_t time_passed = 0;

while (1) {
    if ((time(NULL) - start_time) != time_passed)
    {
        time_passed = time(NULL) - start_time;
        printf("\rTime elapsed=%lu/%lu(seconds)", time_passed, target_seconds);
    }
}
Now it displays nothing.
Can anyone explain why, and how to solve it?
Your code is fine.
But depending on your platform the output buffer is flushed only when a \n is printed.
Therefore you should add fflush(stdout); right after the printf.
if ((time(NULL) - start_time) != time_passed)
{
    time_passed = time(NULL) - start_time;
    printf("\rTime elapsed=%lu/%lu(seconds)", time_passed, target_seconds);
    fflush(stdout);
}
BTW: if you wait long enough, you'll end up seeing some output because eventually the output buffer will be full and then everything will be displayed at once, which of course doesn't make much sense here.
The reason you see output immediately with the first version of your code is that you're printing continuously, so the output buffer fills very quickly and is flushed continuously, hence you see output.
The volatile keyword is not required here; it's unnecessary, but it doesn't do any harm either.
Apart from calling fflush after every printf, it's also possible to turn off buffering on stdout entirely. Use either setbuf or setvbuf:
#include <stdio.h>

setbuf(stdout, NULL);               /* or, equivalently: */
setvbuf(stdout, NULL, _IONBF, 0);
Now every character will be printed immediately.
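For instance, a minimal sketch of the question's loop with buffering disabled once at the top of main (my own example; with _IONBF the explicit fflush is no longer needed):

#include <stdio.h>
#include <time.h>

int main(void)
{
    setvbuf(stdout, NULL, _IONBF, 0);   /* disable buffering before any output */

    time_t start_time = time(NULL);
    time_t target_seconds = 60 * 60 * 17;
    time_t time_passed = 0;

    while (1) {
        if ((time(NULL) - start_time) != time_passed) {
            time_passed = time(NULL) - start_time;
            printf("\rTime elapsed=%lu/%lu(seconds)",
                   (unsigned long)time_passed, (unsigned long)target_seconds);
        }
    }
}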

How to delete a random number sequence

I am new to C programming.
I am trying to work through an example in my textbook.
Problem:
1: I can't make the random number sequence pause for one second without having to insert a printf() in a place where I shouldn't.
2: I can't make the program pause for 1 second and then delete the random sequence. I have tried using printf("\r"), but it just deletes the entire sequence without pausing for 1 second.
Help appreciated.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    time_t Start_Of_Seq = time(NULL);
    time_t Now = 0;
    Now = clock();
    srand((unsigned int)Start_Of_Seq);

    for (int i = 1; i <= 5; i++)
    {
        printf("%d", rand() % 10);
    }
    printf("\n"); // This shouldn't be here.
    for (; clock() - Now < CLOCKS_PER_SEC;)
        ;
    printf("Testing to see if there is a pause\n");
}
The printf function writes everything into a buffer. When standard output is a terminal, the buffer is actually written out only after a newline. Try fflush(stdout); to print the buffer contents immediately.
Besides, if you use Linux or another Unix-like system, there is the sleep() function for pauses. Try the man 3 sleep command for more info.
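Putting both hints together, here is a minimal sketch of what the question asks for (print the digits, pause one second, then erase them), assuming a POSIX system for sleep():

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>   /* sleep() on POSIX systems */

int main(void)
{
    srand((unsigned int)time(NULL));

    for (int i = 0; i < 5; i++)
        printf("%d", rand() % 10);
    fflush(stdout);           /* show the digits without printing a newline */

    sleep(1);                 /* pause for one second */

    printf("\r     \r");      /* go to column 0, blank out the 5 digits, go back again */
    fflush(stdout);

    printf("Testing to see if there is a pause\n");
    return 0;
}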

delay function using clock()

#include <stdio.h>
#include <time.h>
#include <stdlib.h>

void delay(double sec)
{
    clock_t start = clock();
    while ((clock() - start) / CLOCKS_PER_SEC < sec)
        ;
}

int main(void)
{
    for (int i = 0; i < 100000; i++) {
        printf("%d ", i);
        delay(1);
    }
    return 0;
}
I wrote a delay function and tested it with this code, but I didn't see any numbers on standard output.
Then I changed the printf() call like this:
printf("%d \n", i);
Interestingly, it worked. I also tried it without the delay function, like this:
for (int i = 0; i < 100000; i++)
    printf("%d ", i);
It also worked. What am I missing here? Why can't I see any numbers when I run the first code? Thanks for the help :)
Because the output is buffered. When standard output is connected to a terminal it is usually line buffered: nothing is written out until either a newline (\n) is printed or fflush(stdout) is called.
Your loop that includes the call to delay will eventually print the numbers, but only once the buffer fills up or the program finishes.
Add fflush(stdout); after the printf call to make the numbers appear immediately, without needing newlines in between.
Well, there are two reasons. First, printf() doesn't always flush its output, so you may actually get past the printf statement and still not see anything on your terminal. The text is buffered. Putting in a \n may have caused it to flush its output, so that's why that worked.
The second problem is that your delay() function busy-waits: it spins checking clock() for the whole delay, keeping a CPU core at 100% instead of letting the process sleep.
I'd also like to point out that clock() returns CPU time, not "wall clock" time, so it may actually take longer than you think.
Delay functions are tricky; that's why there are dedicated system calls for them. See sleep().
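For example, a minimal sketch of a delay that sleeps instead of spinning, assuming a POSIX system where nanosleep() is available:

#define _POSIX_C_SOURCE 199309L  /* for nanosleep() */
#include <stdio.h>
#include <time.h>

/* Sleep for roughly sec seconds without burning CPU time. */
static void delay(double sec)
{
    struct timespec ts;
    ts.tv_sec  = (time_t)sec;                               /* whole seconds */
    ts.tv_nsec = (long)((sec - (double)ts.tv_sec) * 1e9);   /* fractional part */
    nanosleep(&ts, NULL);
}

int main(void)
{
    for (int i = 0; i < 10; i++) {
        printf("%d ", i);
        fflush(stdout);   /* make each number appear right away */
        delay(1.0);
    }
    putchar('\n');
    return 0;
}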

Erratic average execution times for C function

I'm trying to optimize a chunk of code given to me by a friend, but my baseline average execution times for it are extremely erratic, and I'm at a loss as to why, or how to fix it.
Code:
#include <sys/time.h>
#include <time.h>
#include <stdio.h>
#include "wall.h" /* Where his code is */

int main()
{
    int average;
    struct timeval tv;
    int i;

    for (i = 0; i < 1000; i++) /* Running his code 1,000 times */
    {
        gettimeofday(&tv, NULL); /* Starting time */
        start();                 /* Launching his code */
        int ret = tv.tv_usec;    /* Finishing time */
        ret /= 1000;             /* Converting to milliseconds */
        average += ret;          /* Adding to the average */
    }
    printf("Average execution time: %d milliseconds\n", average / 1000);
    return 0;
}
Output of 5 different runs:
804 milliseconds
702 milliseconds
394 milliseconds
642 milliseconds
705 milliseconds
I've tried multiple different ways of getting the average execution time, but each one either doesn't give me a precise enough answer or gives me a completely erratic one. I'm lost as to what to do now; any help would be greatly appreciated!
I know these types of benchmarks are very much system dependent, so I've listed my system specs below:
Ubuntu 12.10 x64
7.8 GiB RAM
Intel Core i7-3770 CPU @ 3.40GHz x 8
GeForce GT 620/PCIe/SSE2
Edit
Thank you all for your input, but I decided to go with gprof instead of constructing my own benchmark. Thank you once again!
Your line int ret = tv.tv_usec; /* Finishing time */ doesn't give you the finishing time; it's still the starting time. You should declare a second struct timeval, call gettimeofday with that, and compare the two.
However, using clock() is probably easier. Of course, if you want to really analyse the performance of your code use a profiler.
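For reference, a minimal sketch of that correction, keeping the question's start() and "wall.h" as stand-ins for the code under test:

#include <sys/time.h>
#include <stdio.h>
#include "wall.h" /* start() is the code under test, as in the question */

int main(void)
{
    struct timeval before, after;
    double total_ms = 0.0;
    int i;

    for (i = 0; i < 1000; i++)
    {
        gettimeofday(&before, NULL);   /* starting time */
        start();                       /* run the code under test */
        gettimeofday(&after, NULL);    /* finishing time */

        total_ms += (after.tv_sec  - before.tv_sec)  * 1000.0
                  + (after.tv_usec - before.tv_usec) / 1000.0;
    }
    printf("Average execution time: %.3f milliseconds\n", total_ms / 1000.0);
    return 0;
}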
There are several problems here, including zero details on what the code you're benchmarking is doing, and that you're using "gettimeofday()" incorrectly (and, perhaps, inappropriately).
SUGGESTIONS:
1) Don't use "gettimeofday()":
http://blog.habets.pp.se/2010/09/gettimeofday-should-never-be-used-to-measure-time
2) Supplement your "time elapsed" with gprof:
http://www.cs.duke.edu/~ola/courses/programming/gprof.html
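For completeness, here is the same measurement done with clock_gettime(CLOCK_MONOTONIC), the usual Linux alternative the first link argues for when timing intervals (again keeping start() and "wall.h" from the question):

#include <time.h>
#include <stdio.h>
#include "wall.h" /* start() is the code under test, as in the question */

int main(void)
{
    struct timespec before, after;
    double total_ms = 0.0;

    for (int i = 0; i < 1000; i++) {
        clock_gettime(CLOCK_MONOTONIC, &before);
        start();
        clock_gettime(CLOCK_MONOTONIC, &after);

        total_ms += (after.tv_sec  - before.tv_sec)  * 1000.0
                  + (after.tv_nsec - before.tv_nsec) / 1e6;
    }
    printf("Average execution time: %.3f milliseconds\n", total_ms / 1000.0);
    return 0;
}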

Getting getrusage() to measure system time in C

I would like to measure the system time it takes to execute some code. To do this I know I would sandwich said code between two calls to getrusage(), but I get some unexpected results...
#include <sys/time.h>
#include <sys/resource.h>
#include <unistd.h>
#include <stdio.h>

int main() {
    struct rusage usage;
    struct timeval start, end;
    int i, j, k = 0;

    getrusage(RUSAGE_SELF, &usage);
    start = usage.ru_stime;

    for (i = 0; i < 10000; i++) {
        /* Double loop for more interesting results. */
        for (j = 0; j < 10000; j++) {
            k += 20;
        }
    }

    getrusage(RUSAGE_SELF, &usage);
    end = usage.ru_stime;

    printf("Started at: %ld.%lds\n", start.tv_sec, start.tv_usec);
    printf("Ended at: %ld.%lds\n", end.tv_sec, end.tv_usec);
    return 0;
}
I would hope that this produces two different numbers, but alas! After seeing my computer think for a second or two, this is the result:
Started at: 0.1999s
Ended at: 0.1999s
Am I not using getrusage() right? Why shouldn't these two numbers be different? If I am fundamentally wrong, is there another way to use getrusage() to measure the system time of some source code? Thank you for reading.
You're misunderstanding the difference between "user" and "system" time. Your example code is executing primarily in user-mode (ie, running your application code) while you are measuring, but "system" time is a measure of time spent executing in kernel-mode (ie, processing system calls).
ru_stime is the correct field to measure system time. Your test application just happens not to accrue any such time between the two points you check.
You should use usage.ru_utime, which is user CPU time used, instead.
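For instance, a minimal variant of the question's program that reads ru_utime instead and prints the difference (a sketch, not a rigorous benchmark):

#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>

int main(void) {
    struct rusage usage;
    struct timeval start, end;
    volatile int k = 0;   /* volatile so the compiler doesn't optimize the loop away */
    int i, j;

    getrusage(RUSAGE_SELF, &usage);
    start = usage.ru_utime;            /* user CPU time, not ru_stime */

    for (i = 0; i < 10000; i++)
        for (j = 0; j < 10000; j++)
            k += 20;

    getrusage(RUSAGE_SELF, &usage);
    end = usage.ru_utime;

    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_usec - start.tv_usec) / 1e6;
    printf("User CPU time: %.6f seconds\n", elapsed);
    return 0;
}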
Use gprof. This will give you the time taken by each function.
Install gprof and use these flags for compilation: -pg -fprofile-arcs -ftest-coverage.
