I expect that gettimeofday() will make a system call to do the work of actually getting the time. However, running the following program
#include <stdlib.h>
#include <sys/time.h>
#include <stdio.h>

int main(int argc, char const *argv[])
{
    struct timeval tv;
    printf("Before gettimeofday() %ld!\n", tv.tv_sec);
    int rc = gettimeofday(&tv, NULL);
    printf("After gettimeofday() %ld\n", tv.tv_sec);
    if (rc == -1) {
        printf("Error: gettimeofday() failed\n");
        exit(1);
    }
    printf("Exiting ! %ld\n", tv.tv_sec);
    return 0;
}
under dtruss -d returns a long list of system calls, the last of which are:
RELATIVE SYSCALL(args) = return
... lots of syscalls with earlier timestamps ...
3866 fstat64(0x1, 0x7FFF56ABC8D8, 0x11) = 0 0
3868 ioctl(0x1, 0x4004667A, 0x7FFF56ABC91C) = 0 0
3882 write_nocancel(0x1, "Before gettimeofday() 0!\n\0", 0x19) = 25 0
3886 write_nocancel(0x1, "After gettimeofday() 1480913810\n\0", 0x20) = 32 0
3887 write_nocancel(0x1, "Exiting ! 1480913810\n\0", 0x15) = 21 0
It looks like gettimeofday() isn't using a syscall, but this seems wrong; surely the kernel is responsible for the system clock? Is dtruss missing something? Am I reading the output incorrectly?
As TheDarkKnight pointed out, there is a gettimeofday system call. However, the userspace gettimeofday function often does not call the corresponding system call, but rather __commpage_gettimeofday, which tries to read the time from a special part of the process' address space called the commpage. Only if this call fails does the gettimeofday system call get used as a fallback. This reduces the cost of most calls to gettimeofday from that of an ordinary system call to just a memory read.
The book Mac OS X Internals: A Systems Approach describes the commpage. Briefly, it is a special area of kernel memory that is mapped into the last eight pages of the address space of every process. Among other things, it contains time values that are "updated asynchronously from the kernel and read atomically from user space, leading to occasional failures in reading".
To see how often the gettimeofday() system call is called by the userspace function, I wrote a test program that called gettimeofday() 100 million times in a tight loop:
#include <sys/time.h>

int main(int argc, char const *argv[])
{
    const int NUM_TRIALS = 100000000;
    struct timeval tv;
    for (int i = 0; i < NUM_TRIALS; i++) {
        gettimeofday(&tv, NULL);
    }
    return 0;
}
Running this under dtruss -d on my machine showed that it triggered between 10 and 20 calls to the gettimeofday() system call (0.00001%-0.00002% of all the userspace calls).
For those interested, the relevant lines in the source code for the userspace gettimeofday() function (for macOS 10.11 - El Capitan) are
if (__commpage_gettimeofday(tp)) {          /* first try commpage */
    if (__gettimeofday(tp, NULL) < 0) {     /* if it fails, use syscall */
        return (-1);
    }
}
The function __commpage_gettimeofday combines a timestamp read from the commpage with a reading of the time stamp counter register to calculate the time since the epoch in seconds and microseconds. (The rdtsc instruction is inside _mach_absolute_time.)
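For illustration only (this is not Apple's actual code, and every name below is made up), the general pattern looks roughly like this: a writer publishes a base wall-clock value together with the TSC reading taken at the same moment, and readers extrapolate from it:

#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc(), GCC/Clang on x86 */

/* Hypothetical shared snapshot, updated by a privileged writer (the kernel,
   in the real commpage case). Field names are illustrative only. */
struct time_snapshot {
    uint64_t base_usec;     /* wall-clock microseconds at snapshot time */
    uint64_t base_tsc;      /* TSC value captured at the same moment    */
    uint64_t tsc_per_usec;  /* calibrated TSC ticks per microsecond     */
};

/* Reader: current time = published base + elapsed TSC ticks, scaled. */
static uint64_t read_time_usec(const struct time_snapshot *s)
{
    uint64_t delta = __rdtsc() - s->base_tsc;
    return s->base_usec + delta / s->tsc_per_usec;
}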
Using dtrace instead of dtruss will clear up your doubt.
gettimeofday() is itself a system call. You can see this system call being made if you run a dtrace script.
You can use the following dtrace script:
"dtrace1.d"
syscall:::entry
/ execname == "foo" /
{
}
(foo is the name of your executable)
To run the above script, use: dtrace -s dtrace1.d
Then execute your program to see all the system calls it makes.
Related
Given the following code, the expectation is for there to be a one-second sleep each time select() is called. However, the sleep only occurs on the first call and all subsequent calls result in no delay:
#include <stdio.h>
#include <stdlib.h>
#include <sys/select.h>   /* select() */

int main()
{
    struct timeval tv;
    tv.tv_sec = 1;
    tv.tv_usec = 0;

    for (;;)
    {
        /* Sleep for one second */
        int result = select(0, NULL, NULL, NULL, &tv);
        printf("select returned: %d\n", result);
    }
}
Why do all calls to select() except the first return immediately?
Compiler: gcc 4.9.2
OS: Centos 7 (Linux)
Kernel info: 3.10.0-327.36.3.el7.x86_64
From the man page:
On Linux, select() modifies timeout to reflect the amount of time not slept
So set tv inside the loop, before each call to select().
As stated in the man page:
On Linux, select() modifies timeout to reflect the amount of time not slept; most other implementations do not do this. (POSIX.1 permits either behavior.) This causes problems both when Linux code which reads timeout is ported to other operating systems, and when code is ported to Linux that reuses a struct timeval for multiple select()s in a loop without reinitializing it. Consider timeout to be undefined after select() returns.
Because the first call ends by timing out, tv is left at 0 seconds. Solution: reinitialize tv on every iteration.
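A minimal sketch of the corrected loop (the same program as above, with the timeout re-set on every iteration):

#include <stdio.h>
#include <sys/select.h>

int main(void)
{
    for (;;)
    {
        /* Reinitialize the timeout every time, since Linux overwrites it
           with the amount of time that was not slept. */
        struct timeval tv;
        tv.tv_sec = 1;
        tv.tv_usec = 0;

        int result = select(0, NULL, NULL, NULL, &tv);
        printf("select returned: %d\n", result);
    }
}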
I am trying to measure the memory consumed by an algorithm, so I have created a group of functions that stop execution every 10 milliseconds to let me read memory usage with the getrusage() function. The idea is to set a timer that raises an alarm signal to the process, which is received by the handler medir_memoria().
However, the program stops in the middle with this message:
[1] 3267 alarm ./memory_test
The code for reading the memory is:
#include "../include/rastreador_memoria.h"
#if defined(__linux__) || defined(__APPLE__) || (defined(__unix__) && !defined(_WIN32))
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <signal.h>
#include <sys/resource.h>
static long max_data_size;
static long max_stack_size;
void medir_memoria (int sig)
{
struct rusage info_memoria;
if (getrusage(RUSAGE_SELF, &info_memoria) < 0)
{
perror("Not reading memory");
}
max_data_size = (info_memoria.ru_idrss > max_data_size) ? info_memoria.ru_idrss : max_data_size;
max_stack_size = (info_memoria.ru_isrss > max_stack_size) ? info_memoria.ru_isrss : max_stack_size;
signal(SIGALRM, medir_memoria);
}
void rastrear_memoria ()
{
struct itimerval t;
t.it_interval.tv_sec = 0;
t.it_interval.tv_usec = 10;
t.it_value.tv_sec = 0;
t.it_value.tv_usec = 10;
max_data_size = 0;
max_stack_size = 0;
setitimer(ITIMER_REAL, &t,0);
signal(SIGALRM, medir_memoria);
}
void detener_rastreo ()
{
signal(SIGALRM, SIG_DFL);
printf("Data: %ld\nStack: %ld\n", max_data_size, max_stack_size);
}
#else
#endif
The main() function calls all of them in this order:
rastrear_memoria()
Function of the algorithm I am testing
detener_rastreo()
How can I solve this? What does that alarm message mean?
First, setting an itimer to fire every 10 µs is optimistic, since ten microseconds is a really small interval of time. Try 500 µs (or perhaps even 20 milliseconds, i.e. 20000 µs) instead of 10 µs first.
stop the execution in periods of 10 milliseconds
You have coded for a period of 10 microseconds, not milliseconds!
Then, you should swap the two lines and write:
signal(SIGALRM, medir_memoria);
setitimer(ITIMER_REAL, &t,0);
so that a signal handler is set before the first itimer rings.
I guess your first itimer fires before the signal handler has been installed. Read signal(7) and time(7) carefully. The default handling of SIGALRM is to terminate the process.
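A minimal sketch of the corrected setup (using sigaction rather than signal, and a 20 ms period; medir_memoria is the handler from the question):

#include <signal.h>
#include <string.h>
#include <sys/time.h>

void medir_memoria(int sig);            /* handler from the question */

void rastrear_memoria(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = medir_memoria;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART;
    sigaction(SIGALRM, &sa, NULL);      /* install the handler first */

    struct itimerval t;
    t.it_interval.tv_sec = 0;
    t.it_interval.tv_usec = 20000;      /* 20 ms, not 10 µs */
    t.it_value = t.it_interval;
    setitimer(ITIMER_REAL, &t, NULL);   /* then arm the timer */
}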
BTW, a better way to measure the time used by some function is clock_gettime(2) or clock(3). Thanks to vdso(7) tricks, clock_gettime is able to get some clock in less than 50 nanoseconds on my i5-4690S desktop computer.
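For example, a minimal sketch of timing a code section with clock_gettime (on older glibc you may need to link with -lrt):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    /* ... code to measure ... */
    clock_gettime(CLOCK_MONOTONIC, &end);

    /* Elapsed wall-clock time in seconds, with nanosecond resolution. */
    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("elapsed: %.9f seconds\n", elapsed);
    return 0;
}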
trying to get the memory consumed
You could consider using proc(5), e.g. quickly opening, reading, and closing /proc/self/status or /proc/self/statm, etc.
(I guess you are on Linux)
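A minimal Linux-only sketch (field layout per proc(5): the values in /proc/self/statm are in pages; note that fopen/fscanf are not async-signal-safe, so call this outside any signal handler):

#include <stdio.h>
#include <unistd.h>

/* Returns the current resident set size in KiB, or -1 on failure. */
long resident_kb(void)
{
    long size = 0, resident = 0;
    FILE *f = fopen("/proc/self/statm", "r");
    if (!f)
        return -1;
    if (fscanf(f, "%ld %ld", &size, &resident) != 2)
        resident = -1;
    fclose(f);
    if (resident < 0)
        return -1;
    /* statm reports pages; convert to KiB using the system page size. */
    return resident * (sysconf(_SC_PAGESIZE) / 1024);
}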
BTW, your measurements will disappoint you: notice that quite often free(3) doesn't release memory to the kernel (through munmap(2)...) but simply marks and manages that zone so it can be reused by future malloc(3) calls. You might consider mallinfo(3) or malloc_info(3), but notice that these are not async-signal-safe, so they cannot be called from inside a signal handler.
(I tend to believe that your approach is deeply flawed)
I have a very strange problem in C. A function from a proprietary library reduces sleep(n) to sleep(0).
My code looks like:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h> /* sleep() */

int main(int argc, char** argv){
    //...
    AStreamEngine.init();
    AStreamEngine.setCodec(3);
    AStreamEngine.start(); // problematic function

    printf("a");
    sleep(100);
    printf("b");

    return 0;
}
If the problematic function is commented out, then printing "a" is followed by printing "b" after 100 seconds. But if it isn't commented out, "ab" is printed immediately. So the program ends very quickly and I cannot tell whether the engine works.
I found that:
if I replace sleep() with getchar(), the engine works correctly.
if I busy-wait with a for loop, it also works.
Does anyone have any idea why this happens? And how can I fix (bypass) this feature/bug? I don't want to use getchar() or busy-waiting.
Update:
I don't have the source of the library, only the binary .so file.
Based on the responses, I added the code below at the end:
struct timespec to_sleep = { 1, 0 };
int ret = nanosleep(&to_sleep, &to_sleep);
printf("%d\n", ret);
if (ret == -1) {
    printf(" break sleep : %d %s", errno, strerror(errno));
}
And I get output:
-1
break sleep : 4 Interrupted system callc
Now I am trying to work around it using a thread.
"sleep" can be interrupted by signals - see http://man7.org/linux/man-pages/man3/sleep.3.html . My guess is that the "start" function started a thread which might have caused signals to be sent to your program. Put "sleep" in a loop like this:
unsigned int x = 100;
while (x > 0) { x = sleep (x); }
Another point: printf may be line-buffered. In that mode, the output is only seen if you print a "\n" character. Try using "a\n".
As rightly said by Jack, usleep and sleep can be interrupted by the delivery of signals (e.g. in the presence of ioctl, read, or write calls).
One smart way to avoid this issue is to use nanosleep. Like sleep and usleep, nanosleep can also be interrupted by the delivery of a signal, but the difference is that its second argument tells you how much time is remaining.
You can use this argument to make sure your code sleeps for the full specified amount of time. Replace your sleep and usleep calls with a while loop containing nanosleep. Here is an example usage:
struct timespec to_sleep = { 1, 0 }; // Sleep for 1 second
while ((nanosleep(&to_sleep, &to_sleep) == -1) && (errno == EINTR));
Of course, this solution is not suitable for applications where an exact amount of sleep is required, but it is very useful in cases where a minimum delay is needed before executing the next function.
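Wrapped into a small helper (a sketch; it sleeps at least the requested number of seconds, resuming after each interruption):

#include <errno.h>
#include <time.h>

static void sleep_full(time_t seconds)
{
    struct timespec to_sleep = { seconds, 0 };
    /* nanosleep() writes the remaining time back into to_sleep,
       so keep calling it until it is no longer interrupted. */
    while (nanosleep(&to_sleep, &to_sleep) == -1 && errno == EINTR)
        ;
}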
After examination, I think that using the sleep() function is not helpful in your case.
Indeed, the sleep function uses the pause() function, which waits for a signal and returns when your process receives one.
In your case that is probably the problem, and it explains why sleep() stops early when you call AStreamEngine.start().
I think that the best solution is to use the usleep() function, which should not stop if your process receives a signal.
Try this:
usleep(VALUE_IN_MICROSECONDS); /* note: usleep() takes microseconds */
Good luck ! :)
I am beginning to learn Linux C programming, but I have run into a problem that confuses me.
I use the times() function, but the values it returns are equal to 0.
OK, I made a mistake, so I changed the code:
But this is not really related to printf: clock_t is defined as long on Linux, so I cast clock_t to long.
This is my code:
#include <sys/times.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h> /* sleep() */

int main()
{
    long clock_times;
    struct tms begintime;

    sleep(5);

    if ((clock_times = times(&begintime)) == -1)
        perror("get times error");
    else
    {
        printf("%ld\n", (long)begintime.tms_utime);
        printf("%ld\n", (long)begintime.tms_stime);
        printf("%ld\n", (long)begintime.tms_cutime);
        printf("%ld\n", (long)begintime.tms_cstime);
    }
    return 0;
}
the output:
0
0
0
0
They all return 0.
I also used gdb to debug, and the fields of begintime are all zero.
So this is not related to the printf function.
Please help.
This is not unusual; the process simply hasn't used enough CPU time to measure. The time the process spends in sleep() doesn't count against the program's CPU time, as times() measures "the CPU time charged for the execution of user instructions" (among other related times), i.e. the time the process has spent executing user and kernel code.
Change your program to the following which uses more CPU and can therefore be measured:
#include <sys/times.h>
#include <sys/time.h>
#include <time.h>   /* time() */
#include <stdio.h>
#include <stdlib.h>

int main()
{
    long clock_times;
    struct tms begintime;
    unsigned i;

    for (i = 0; i < 1000000; i++)
        time(NULL);   // An arbitrary library call

    if ((clock_times = times(&begintime)) == -1)
        perror("get times error");
    else
    {
        printf("%ld %ld %ld %ld\n",
               (long)begintime.tms_utime,
               (long)begintime.tms_stime,
               (long)begintime.tms_cutime,
               (long)begintime.tms_cstime);
    }
    return 0;
}
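The tms_* values are reported in clock ticks; to convert them to seconds, divide by sysconf(_SC_CLK_TCK). A small sketch:

#include <time.h>     /* clock_t */
#include <unistd.h>   /* sysconf() */

/* Convert a tick count (e.g. begintime.tms_utime) to seconds. */
double ticks_to_seconds(clock_t ticks)
{
    return (double)ticks / (double)sysconf(_SC_CLK_TCK);
}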
Your code uses close to no CPU time, so the results are correct. sleep() suspends your program's execution; whatever happens during that time is not your execution time, so it isn't counted.
Add an empty loop and you'll see the difference (of course, disable compiler optimisations, or the empty loop will be removed).
Take a look at the output of the time program (time ./a.out): it prints the 'real' time (estimated by gettimeofday(), I suppose), the user time (time wasted by your userspace code) and the system time (time wasted within system calls, e.g. writing to a file, opening a network connection, etc.).
(Sure, by 'wasted' I mean 'used', but whatever.)
I am writing a C library which needs to fork() during initialization. Therefore, I want to assert() that the application code (which is outside of my control) calls my library initialization code from a single threaded context (to avoid the well known "threads and fork don't mix" problem). Once my library has been initialized, it is thread safe (and expected that the application level code may create threads). I am only concerned with supporting pthreads.
It seems impossible to count the number of threads in the current process space using pthreads. Indeed, even googletest only implements GetThreadCount() on Mac OS and QNX.
Given that I can't count the threads, is it somehow possible that I can instead assert a single threaded context?
Clarification: If possible, I would like to avoid using "/proc" (non-portable), an additional library dependency (like libproc) and LD_PRELOAD-style pthread_create wrappers.
Clarification #2: In my case using multiple processes is necessary as the workers in my library are relatively heavy weight (using webkit) and might crash. However, I want the original process to survive worker crashes.
You could mark your library initialization function to be run prior to the application main(). For example, using GCC,
static void my_lib_init(void) __attribute__((constructor));

static void my_lib_init(void)
{
    /* ... */
}
Another option is to use posix_spawn() to fork and execute the worker processes as separate, slave binaries.
EDITED TO ADD:
It seems to me that if you wish to determine if the process has already created (actual, kernel-based) threads, you will have to rely on OS-specific code.
In the Linux case, the determination is simple, and safe to run on other OSes too. If it cannot determine the number of threads used by the current process, the function will return -1:
#include <unistd.h>
#include <sys/types.h>
#include <dirent.h>
#include <errno.h>

int count_threads_linux(void)
{
    DIR           *dir;
    struct dirent *ent;
    int            count = 0;

    dir = opendir("/proc/self/task/");
    if (!dir)
        return -1;

    while (1) {
        errno = 0;
        ent = readdir(dir);
        if (!ent)
            break;
        if (ent->d_name[0] != '.')
            count++;
    }

    if (errno) {
        const int saved_errno = errno;
        closedir(dir);
        errno = saved_errno;
        return -1;
    }

    if (closedir(dir))
        return -1;

    return count;
}
There are certain cases (like chroot without /proc/) when that check will fail even in Linux, so the -1 return value should always be treated as unknown rather than error (although errno will indicate the actual reason for the failure).
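For instance, the library initializer might use it like this (a sketch; do_real_init is a hypothetical placeholder for your actual setup code):

int do_real_init(void);   /* hypothetical: your actual initialisation */

int my_lib_init(void)
{
    int n = count_threads_linux();
    if (n > 1)
        return -1;        /* more than one thread: refuse to fork() */
    /* n == 1: single-threaded; n == -1: unknown, decide your own policy */
    return do_real_init();
}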
Looking at the FreeBSD man pages, I wonder if the corresponding information is available at all.
Finally:
Rather than trying to detect the problematic case, I seriously recommend you fork() and exec() (or posix_spawn()) the slave processes, using only async-signal-safe functions (see man 7 signal) in the child process prior to exec(), thus avoiding the fork()-thread complications. You can still create any shared memory segments, socket pairs, et cetera before forking. The only drawback I can see is that you have to use separate binaries for the slave workers, which, given your description of them, does not sound like a drawback to me.
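A minimal sketch of spawning a separate worker binary with posix_spawn() ("./worker" is a placeholder path for the slave binary):

#include <spawn.h>
#include <stdio.h>
#include <sys/wait.h>

extern char **environ;

int main(void)
{
    pid_t pid;
    char *worker_argv[] = { "worker", NULL };

    int rc = posix_spawn(&pid, "./worker", NULL, NULL, worker_argv, environ);
    if (rc != 0) {
        fprintf(stderr, "posix_spawn failed: %d\n", rc);
        return 1;
    }
    /* The parent survives worker crashes; just reap the child when it exits. */
    waitpid(pid, NULL, 0);
    return 0;
}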
If you send a SIGINFO signal to the process (e.g. with kill, or by typing Ctrl-T at its controlling tty), the process should describe the status of its threads. From the description it should be possible to deduce whether any threads have been created.
You may have to develop a small utility that is invoked via popen to read the output back into your library.
Added sample code Fri Dec 21 14:45
Run a simple program that creates five threads. Threads basically sleep. Before the program exits, send a SIGINFO signal to get the status of the threads.
openbsd> cat a.c
#include <unistd.h>
#include <pthread.h>

#define THREADS 5

void foo(void);

int
main()
{
    pthread_t thr[THREADS];
    int j;

    for (j = 0; j < THREADS; j++) {
        pthread_create(&thr[j], NULL, (void *)foo, NULL);
    }
    sleep(200);
    return(0);
}

void
foo()
{
    sleep(100);
}
openbsd> gcc a.c -pthread
openbsd> a.out &
[1] 1234
openbsd> kill -SIGINFO 1234
0x8bb0e000 sleep_wait 15 -c---W---f 0000
0x8bb0e800 sleep_wait 15 -c---W---f 0000
0x8bb0e400 sleep_wait 15 -c---W---f 0000
0x7cd3d800 sleep_wait 15 -c---W---f 0000
0x7cd3d400 sleep_wait 15 -c---W---f 0000
0x7cd3d000 sleep_wait 15 -c---W---f 0000 main
You could use pthread_once() to ensure no other thread is doing the same thing: this way you don't have to care whether multiple threads call your initialisation function; only one will really execute it.
Make your public initialisation function run a private initialisation through pthread_once().
static pthread_once_t my_initialisation_once = PTHREAD_ONCE_INIT;

static void my_initialisation(void)
{
    /* do something */
}

int lib_initialisation(void)
{
    pthread_once(&my_initialisation_once, my_initialisation);
    return 0;
}
Another example can be found here.
Links
How to protect init function of a C based library?
POSIX threads - do something only once