static void timeDelay(int no_of_seconds)
{
#ifdef _WIN32
    Sleep(1000 * no_of_seconds);
#else
    sleep(no_of_seconds);
#endif
}

void somefunction()
{
    printf("\t\t Load ... \n\t\t");
    fflush(stdout);
    for (int i = 1; i <= 60; i++)
    {
        fflush(stdout);
        timeDelay(1);
        if (i == 31)
            printf("\n\t\t");
        printf("*****");
    }
}
I have included the header files too:
#ifdef _WIN32
#include <windows.h>
#else
#include <unistd.h>
#endif
The stars are printed instantaneously.
I added fflush(stdout) after seeing the answers here. I also commented out the if (i==31) {} portion to check if that's causing the problem but it isn't. So what is wrong with my code?
According to the sleep() manpage...
NOTES
On Linux, sleep() is implemented via nanosleep(2). See the nanosleep(2) man page for a discussion of the
clock used.
So I have rewritten your program to use nanosleep(). As you are working with WSL, I've dropped any reference to Win32 and kept only the Linux path. The thing is that this program exits nanosleep() prematurely with an "Invalid argument" error, and I cannot see why.
#include <stdio.h>
#include <unistd.h>
#include <time.h>
#include <errno.h>

void timeDelay (time_t no_of_seconds)
{
    struct timespec req, rem;
    int res;

    req.tv_sec = no_of_seconds;
    req.tv_nsec = 0;
    do
    {
        rem.tv_sec = 0;
        rem.tv_nsec = 0;
        res = nanosleep (&req, &rem);
        req = rem;    /* retry with the remaining time */
    }
    /* nanosleep() returns -1 and sets errno, so test errno for EINTR */
    while (res == -1 && errno == EINTR);
    if (res)
        perror("nanosleep");
}
void somefunction()
{
    printf("\t\t Load ... \n\t\t");
    fflush(stdout);
    for (int i = 1; i <= 60; i++)
    {
        fflush(stdout);
        timeDelay(1);
        putchar('*');
    }
}

int main()
{
    somefunction();
    return 0;
}
I've also tried with NULL instead of rem, reissuing nanosleep() with the original time instead of the remaining time, and putting the test and the perror() inside the loop to print every possible error from nanosleep. No matter what I do, I always receive an EINVAL from the first call to nanosleep().
So, there seems to be a real problem with nanosleep() on WSL. See https://github.com/microsoft/WSL/issues/4898 . It mentions that WSL is unable to read the realtime clock starting from a certain glibc version. I tried this in my WSL terminal:
$ sleep 1
sleep: cannot read realtime clock: Invalid argument
There is a mention of a workaround here:
https://github.com/microsoft/WSL/issues/4898#issuecomment-612622828
I've tried another approach: using clock_nanosleep(), which lets me choose a different clock source. In the above program, just replace the call to nanosleep() with this call to clock_nanosleep():
clock_nanosleep (CLOCK_MONOTONIC, 0, &req, &rem);
There is no problem with CLOCK_MONOTONIC, and now the program works!
I'm trying to reproduce from code from Ivor Horton's Beginning C. I couldn't get the results that the code is expecting so I wrote a smaller program with the specific code that I'm having a problem with.
Please see the first for loop. It should print out two random numbers. If I comment out the second for loop, which creates an approximately 5 second delay, and the subsequent printf("\rHello"), I see the two random numbers. However, if I uncomment the second for loop and the subsequent printf, I never see the two random numbers, only the output Hello, even though the for loop delays the printf("\rHello") for 5 seconds. I thought I would be able to see the two random numbers for at least 4 seconds or so before they are overwritten by the printf("\rHello"). Can anybody tell me what is going on here?
#define _CRT_SECURE_NO_WARNINGS
#include <stdio.h>
#include <time.h>
#include <ctype.h>
#include <stdlib.h>

int main(void) {
    clock_t wait_start = 0;
    time_t seed = 0;
    unsigned int digits = 2;
    unsigned int seconds = 5;

    wait_start = clock();
    srand((unsigned int) time(&seed));
    for (unsigned int i = 1; i <= digits; ++i)
        printf("%u ", rand() % 10);
    // printf("%Lf\n", ((long double)clock()) / CLOCKS_PER_SEC);
    for (; clock() - wait_start < seconds * CLOCKS_PER_SEC; )
        ;
    // printf("%Lf\n", ((long double)clock()) / CLOCKS_PER_SEC);
    printf("\rHello");
    return 0;
}
The suggested answer is good if you know what you're looking for. The title of my question is a straightforward, literal description of what is happening in my code. A beginner may not search for "flush" and "stdout buffer"; I didn't search for that, so I didn't find that solution to my question. This question is complementary to that one and gives the beginner more understanding. The answer here gave a straightforward solution, followed by more information explaining why I need to use fflush.
The random digits do not appear because you do not flush the stdout stream, and since you did not output a trailing newline, the digits are still pending in the stream buffer: stdout is usually line buffered by default when attached to a terminal.
Note also that waiting for 5 seconds with a busy loop asking for elapsed CPU time is wasteful. You should instead use a sleep() system call or the equivalent call on the target system (probably _sleep() on Microsoft legacy systems).
Here is a modified version:
#define _CRT_SECURE_NO_WARNINGS
#include <stdio.h>
#include <stdlib.h>
#include <time.h>   /* for time() */
#if defined _WIN32 || defined _WIN64
#include <windows.h>
#else
#include <unistd.h>
#endif

int main(void) {
    int digits = 2;
    int seconds = 5;

    srand((unsigned int)time(NULL));
    for (int i = 0; i < digits; i++) {
        printf("%d ", rand() % 10);
    }
    fflush(stdout);
#if defined _WIN32 || defined _WIN64
    Sleep(seconds * 1000UL);
#else
    sleep(seconds);
#endif
    printf("\rHello\n");
    return 0;
}
I've been trying to time how long an invocation of popen takes to complete. popen creates a pipe, forks, and invokes the shell. In my particular case, I'm using the call to read another program's stdout output.
The problem: I'm expecting the call I make to return the correct length of time it took the program to execute (around 15 seconds for a test program). What I get is that the program took no time at all to finish (0.000223s). Despite all the various functions I have tried, I seem unable to time the call correctly.
Here is a reproducible example of my problem. It is composed of the timing program and a child program that the timing program runs (the child takes about 15s to run on my system):
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sys/time.h>
#ifdef __MACH__
#include <mach/clock.h>
#include <mach/mach.h>
#endif

#define MAXBUF 10

static void gettime (struct timespec *t) {
#ifdef __MACH__
    clock_serv_t cclock;
    mach_timespec_t mts;
    host_get_clock_service(mach_host_self(), REALTIME_CLOCK, &cclock);
    clock_get_time(cclock, &mts);
    mach_port_deallocate(mach_task_self(), cclock);
    t->tv_sec = mts.tv_sec;
    t->tv_nsec = mts.tv_nsec;
#else
    clock_gettime(CLOCK_REALTIME, t);
#endif
}

int main (void) {
    FILE *fp;
    struct timespec tic, toc;
    char *executableName = "./a.out";
    char answer[MAXBUF];

    gettime(&tic);
    if ((fp = popen(executableName, "r")) == NULL) {
        fprintf(stderr, "The file couldn't be opened.\n");
        return 1;
    }
    gettime(&toc);

    fgets(answer, MAXBUF, fp);
    double elapsed = (double)(toc.tv_nsec - tic.tv_nsec) / 1E9;
    fprintf(stdout, "The program says %s, and took %fs to run!\n", answer, elapsed);
    pclose(fp);
    return 0;
}
Here is the child program:
#include <stdio.h>
#include <stdlib.h>

int timeWastingFunction (long long n) {
    if ((n % 2) == 0) {
        return 1;
    }
    for (int i = 1; i < (n / 2); i += 2) {
        if ((n % i) == 0) {
            return 1;
        }
    }
    return 0;
}

int main (void) {
    int start = 687217000;
    while (start--) {
        timeWastingFunction(start);
    }
    fprintf(stdout, "Hello!");
    return 0;
}
This might look a bit overdone, but I had previously tried using clock_t (a CPU-based timing facility) to do the timing, and got the same answers from it. I therefore tried the solution you see above. I picked CLOCK_REALTIME as it seemed appropriate for the job. Unfortunately, I don't have the option to specify whether this clock is per-process or per-thread (I'd want it to be process independent).
Note: I haven't tried gettimeofday yet, but I don't want to, since it's apparently inappropriate for timing this way, depends on the system date, and is being phased out in favor of clock_gettime.
Edit: Just to be clear, when I run this, the program calling popen actually stalls for the 15 seconds the other program takes to run before printing the 'wrong' time. It doesn't print the time immediately, which would have implied it didn't wait for the call to complete.
popen() only forks and opens a pipe. Your test only shows the time it takes popen() to create the child and the pipe.
A simple way to solve your problem is to get the time after your pclose(). Note that this will not be perfect, because when you read the data returned by your child, the child could finish before your call to pclose().
Also, your way of computing the result is broken: you only take the difference between the nanosecond fields. I found a solution on git:
void timespec_diff(struct timespec *start, struct timespec *stop,
struct timespec *result)
{
if ((stop->tv_nsec - start->tv_nsec) < 0) {
result->tv_sec = stop->tv_sec - start->tv_sec - 1;
result->tv_nsec = stop->tv_nsec - start->tv_nsec + 1000000000;
} else {
result->tv_sec = stop->tv_sec - start->tv_sec;
result->tv_nsec = stop->tv_nsec - start->tv_nsec;
}
return;
}
The last thing is that CLOCK_REALTIME should be used when you want the date. Here you just want a duration, so you should use CLOCK_MONOTONIC if it's available on your system, because CLOCK_REALTIME can jump backwards. (The REALTIME_CLOCK of host_get_clock_service() seems monotonic too.)
CLOCK_MONOTONIC: Clock that cannot be set and represents monotonic time since some unspecified starting point.
REALTIME_CLOCK: A moderate resolution clock service that (typically) tracks time since the system last boot.
So the working code could look like this:
int main (void) {
    FILE *fp;
    struct timespec tic, toc;
    char *executableName = "./a.out";
    char answer[MAXBUF];

    gettime(&tic);
    if ((fp = popen(executableName, "r")) == NULL) {
        fprintf(stderr, "The file couldn't be opened.\n");
        return 1;
    }
    fgets(answer, MAXBUF, fp);
    pclose(fp);
    gettime(&toc);

    struct timespec result;
    timespec_diff(&tic, &toc, &result);
    fprintf(stdout, "The program says %s, and took %lld.%.9lds\n", answer, (long long)result.tv_sec, result.tv_nsec);
    return 0;
}
Credit:
How to subtract two struct timespec?
How to print struct timespec?
I am new to libuv. Is it normal to call uv_run twice if one wants to avoid blocking inside a function? If not, which mechanisms are available, other than threads? Here I just open and close a file.
#include <uv.h>
#include <stdio.h>
#include <fcntl.h>
#ifdef _WIN32
#include <conio.h>
#include <Windows.h>
#else
#include <unistd.h>
/* Sleep() takes milliseconds on Windows; sleep() takes seconds */
#define Sleep(x) sleep((x) / 1000)
#endif
uv_loop_t* loop;
uv_fs_t open_req;
uv_fs_t close_req;

void open_cb(uv_fs_t*);
void close_cb(uv_fs_t*);

const char *filename = "C:/c/somedata.txt";

int main(int argc, char **argv) {
    int r;
    loop = uv_loop_new();
    r = uv_fs_open(loop, &open_req, filename, O_RDONLY, S_IREAD, open_cb);
    if (r < 0) {
        printf("Error at opening file: %s\n", uv_strerror(r));
    }
    printf("in main now\n");
    uv_run(loop, UV_RUN_DEFAULT);
    uv_loop_close(loop);
    return 0;
}

void open_cb(uv_fs_t* req) {
    int result = req->result;
    if (result < 0) {
        printf("Error at opening file: %s\n", uv_strerror(result));
    } else {
        printf("Successfully opened file.\n");
    }
    uv_fs_req_cleanup(req);
    uv_fs_close(loop, &close_req, result, close_cb);
    uv_run(loop, UV_RUN_DEFAULT);
    Sleep(5000);
    printf("ok now\n");
}

void close_cb(uv_fs_t* req) {
    int result = req->result;
    printf("in close_cb now\n");
    if (result < 0) {
        printf("Error at closing file: %s\n", uv_strerror(result));
    } else {
        printf("Successfully closed file.\n");
    }
}
Setting aside your example, libuv offers the possibility of running the loop more than once.
See the documentation for further details.
In particular, uv_run function accepts a parameter of type uv_run_mode.
Possible values are:
UV_RUN_DEFAULT: it doesn't stop unless you explicitly stop it, and it runs as long as there is at least one referenced or active resource on the loop.
UV_RUN_ONCE: polls for I/O once and executes all the callbacks that are ready to run. It has the drawback that it blocks if there are no pending callbacks.
UV_RUN_NOWAIT: similar to the previous one, but it doesn't block if there are no pending callbacks.
Note that with both UV_RUN_ONCE and UV_RUN_NOWAIT you'll have to run the loop more than once. The return value usually indicates whether there are other pending callbacks; in that case, the loop must be run again sooner or later.
The last mode, UV_RUN_NOWAIT, is probably the one you are looking for. As an example, it can be used in scenarios where the client has its own loop and cannot block on libuv's one.
Is it normal to run the loop more than once?
Well, yes, but whether it's right mostly depends on your actual problem.
It's hard to say from a 100-line snippet on SO.
I am trying to create some error handling code in C. I am having trouble finding an input that breaks the function:
time(time_t *tloc)
I want to simulate time() breaking so I can test my error handling code. Either the input I try passes without an error, or the entire program crashes with a segfault. One error I am trying to produce is when the calendar time cannot be stored. In this instance, time() would return -1 and set errno to 14, EFAULT ("Bad address"). How could I force this outcome without the kernel segfaulting me?
Because the libc time() function on a modern x86 Linux kernel is implemented via a vDSO call (see man 7 vdso), we have to resort to some trickery.
With the vDSO, the kernel is never entered. This is done for high-frequency, speed-critical calls (like time, gettimeofday, etc.): all activity takes place in userspace, hence the segfault. The real [non-vDSO] syscall will do the checks and return the desired error codes.
We must use syscall(SYS_time, ...) to force entry into the kernel's version of the syscall.
Here's a sample program:
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int opt_t;

int
main(int argc, char **argv)
{
    char *cp;
    time_t tv;
    time_t *tp;

    --argc;
    ++argv;
    for (; argc > 0; --argc, ++argv) {
        cp = *argv;
        if (*cp != '-')
            break;
        switch (cp[1]) {
        case 't':
            opt_t = 1;
            break;
        }
    }

    /* an address the process cannot write to */
    tp = (time_t *) 0xC0000000;

    // this segfaults due to vdso
    if (opt_t) {
        printf("using time ...\n");
        tv = time(tp);
    }
    // this returns error code
    else {
        printf("using syscall ...\n");
        tv = syscall(SYS_time,tp);
    }

    printf("tv=%8.8lX errno=%d -- %s\n",tv,errno,strerror(errno));
    return 0;
}
One way to force the time() function to fail is to run the application on a system that does not have an RTC (real-time clock). However, such a system would also fail at almost everything else.
For example, could I make it print something like
"Hello"
"This"
"Is"
"A"
"Test"
With 1 second intervals in-between each new line?
Thanks,
Well, the sleep() function does it; there are several ways to use it.
On Linux:
#include <stdio.h>
#include <unistd.h> // notice this! you need it!

int main(){
    printf("Hello,");
    fflush(stdout); // flush so "Hello," appears before the pause
    sleep(5); // format is sleep(x); where x is # of seconds.
    printf("World");
    return 0;
}
And on Windows you can use either dos.h or windows.h, like this:
#include <stdio.h>
#include <windows.h> // notice this! you need it! (windows)

int main(){
    printf("Hello,");
    fflush(stdout); // flush so "Hello," appears before the pause
    Sleep(5000); // format is Sleep(x); where x is # of milliseconds.
    printf("World");
    return 0;
}
Or you can use dos.h for a Linux-style sleep, like so:
#include <stdio.h>
#include <dos.h> // notice this! you need it! (windows)

int main(){
    printf("Hello,");
    fflush(stdout); // flush so "Hello," appears before the pause
    sleep(5); // format is sleep(x); where x is # of seconds.
    printf("World");
    return 0;
}
And that is how you sleep in C on both Windows and Linux! For Windows, both methods should work. Just change the argument to the number of seconds (or milliseconds) you need, and insert the call wherever you need a pause, like after the printf as I did. Also note: when using windows.h, remember the capital S in Sleep, and that it takes milliseconds! (Thanks to Chris for pointing that out.)
Something not as elegant as sleep(), but it uses only the standard library:
/* data declaration */
time_t start, end;
/* ... */
/* wait 2.5 seconds */
time(&start);
do time(&end); while(difftime(end, start) <= 2.5);
I'll leave finding the right header (#include) for time_t, time(), and difftime(), and what they mean, to you. It's part of the fun. :-)
You can look at sleep(), which suspends the thread for the specified number of seconds.
The easiest way is to use a loop, be it a while or a for loop:
int main()
{
    int i = 0;
    while (i < 100000) // delay
    {
        i++;
    }
    return 0;
}
Works on any OS, though note that an optimizing compiler may remove an empty counting loop like this entirely, and the delay it gives depends on CPU speed.
int main()
{
    char* sent[5] = {"Hello ", "this ", "is ", "a ", "test."};
    int i = 0;

    while (i < 5)
    {
        printf("%s", sent[i]);
        i++;
        int c = 0;
        while (c++ < 1000000); // you can use sleep, but for this you don't need an #include
    }
    return 0;
}