Using CLOCK_PROCESS_CPUTIME_ID in clock_gettime - c

I read http://linux.die.net/man/3/clock_gettime and http://www.guyrutenberg.com/2007/09/22/profiling-code-using-clock_gettime/comment-page-1/#comment-681578
They said to use
clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &stop_time);
to measure how long it takes for a function to run.
I tried that in my program. When I run it, it reports that it took 15 seconds, but when I time the same run with a stopwatch it is 30 seconds.
Can you please tell me why clock_gettime returns half of the actual time it took?
Thank you.

In a multi-process environment, processes are constantly migrating between the CPU(s) and the run queue(s).
When performance testing an application, it is often convenient to know the amount of time a process has been running on a CPU, while excluding time that the process was waiting for a CPU resource on a 'run queue'.
In the case of this question, where CPU-time is about half of REAL-time, it is likely that other processes were actively competing for CPU time while your process was also running. It appears that your process was fairly successful in acquiring roughly half the CPU resources during its run.
If you want wall-clock (elapsed) time rather than CPU time, you might consider using CLOCK_REALTIME instead of CLOCK_PROCESS_CPUTIME_ID.
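For illustration, here is a minimal sketch (not from the original post) that times the same busy loop with both clocks so CPU time and wall-clock time can be compared side by side; compile with gcc and, on older glibc versions, link with -lrt:

#include <stdio.h>
#include <time.h>

static double diff_sec(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    struct timespec cpu0, cpu1, real0, real1;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &cpu0);
    clock_gettime(CLOCK_REALTIME, &real0);

    volatile unsigned long x = 0;               /* stand-in for the function being measured */
    for (unsigned long i = 0; i < 100000000UL; i++)
        x += i;

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &cpu1);
    clock_gettime(CLOCK_REALTIME, &real1);

    printf("CPU time:  %f s\n", diff_sec(cpu0, cpu1));
    printf("Real time: %f s\n", diff_sec(real0, real1));
    return 0;
}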
For additional details, see: Understanding the different clocks of clock_gettime()

Related

What could be delaying my select() call?

I have a small program running on Linux (on an embedded PC, dual-core Intel Atom 1.6GHz with Debian 6 running Linux 2.6.32-5) which communicates with external hardware via an FTDI USB-to-serial converter (using the ftdi_sio kernel module and a /dev/ttyUSB* device). Essentially, in my main loop I run
clock_gettime() using CLOCK_MONOTONIC
select() with a timeout of 8 ms
clock_gettime() as before
Output the time difference of the two clock_gettime() calls
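For reference, a minimal sketch of such a measurement loop (an assumed structure, not the poster's actual code; serial_fd stands for the already-opened /dev/ttyUSB* descriptor):

#include <stdio.h>
#include <sys/select.h>
#include <time.h>

void measure_loop(int serial_fd)
{
    for (;;) {
        struct timespec t0, t1;
        struct timeval timeout = { 0, 8000 };   /* 8 ms */
        fd_set rfds;

        FD_ZERO(&rfds);
        FD_SET(serial_fd, &rfds);

        clock_gettime(CLOCK_MONOTONIC, &t0);
        select(serial_fd + 1, &rfds, NULL, NULL, &timeout);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ms = (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("select() returned after %.3f ms\n", ms);
    }
}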
To have some level of "soft" real-time guarantees, this thread runs as SCHED_FIFO with maximum priority (showing up as "RT" in top). It is the only thread in the system running at this priority, no other process has such priorities. My process has one other SCHED_FIFO thread with a lower priority, while everything else is at SCHED_OTHER. The two "real-time" threads are not CPU bound and do very little apart from waiting for I/O and passing on data.
The kernel I am using has no RT_PREEMPT patches (I might switch to that patch in the future). I know that if I want "proper" realtime, I need to switch to RT_PREEMPT or, better, Xenomai or the like. But nevertheless I would like to know what is behind the following timing anomalies on a "vanilla" kernel:
Roughly 0.03% of all select() calls are timed at over 10 ms (remember, the timeout was 8 ms).
The three worst cases (out of over 12 million calls) were 31.7 ms, 46.8 ms and 64.4 ms.
All of the above happened within 20 seconds of each other, and I think some cron job may have been interfering (although the system logs are low on information apart from the fact that cron.daily was being executed at the time).
So, my question is: What factors can be involved in such extreme cases? Is this just something that can happen inside the Linux kernel itself, i.e. would I have to switch to RT_PREEMPT, or even a non-USB interface and Xenomai, to get more reliable guarantees? Could /proc/sys/kernel/sched_rt_runtime_us be biting me? Are there any other factors I may have missed?
Another way to put this question is, what else can I do to reduce these latency anomalies without switching to a "harder" realtime environment?
Update: I have observed a new, "worse worst case" of about 118.4 ms (once over a total of around 25 million select() calls). Even when I am not using a kernel with any sort of realtime extension, I am somewhat worried by the fact that a deadline can apparently be missed by over a tenth of a second.
Without more information it is difficult to point to something specific, so I am just guessing here:
Interrupts and code that is triggered by interrupts take so much time in the kernel that your real time thread is significantly delayed. This depends on the frequency of interrupts, which interrupt handlers are involved, etc.
Even a lower-priority thread, once it is executing inside a non-preemptible kernel, will not be preempted until it yields the CPU or leaves the kernel, so it can delay your real-time thread for that long.
As pointed out in this SO answer, CPU System Management Interrupts and Thermal Management can also cause significant time delays (up to 300ms were observed by the poster).
118 ms seems quite a lot for a 1.6 GHz CPU, but one driver that accidentally locks the CPU for some time would be enough. If you can, try to disable some drivers or use different driver/hardware combinations.
sched_rt_period_us and sched_rt_runtime_us should not be a problem if they are set to reasonable values and your code behaves as you expect. Still, I would remove the limit for RT threads and see what happens (see the sketch below).
What else can you do? Write a device driver! It's not that difficult and interrupt handlers get a higher priority than realtime threads. It may be easier to switch to a real time kernel but YMMV.
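As a concrete illustration of removing the RT throttling limit mentioned above (a sketch, not something the answer prescribes): writing -1 to /proc/sys/kernel/sched_rt_runtime_us as root disables the limit system-wide, equivalent to echo -1 > /proc/sys/kernel/sched_rt_runtime_us.

#include <stdio.h>

int main(void)
{
    /* Disable the RT throttling limit; requires root and affects the whole system. */
    FILE *f = fopen("/proc/sys/kernel/sched_rt_runtime_us", "w");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    fprintf(f, "-1\n");
    fclose(f);
    return 0;
}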

Scheduling events at microsecond granularity in POSIX

I'm trying to determine the granularity I can accurately schedule tasks to occur in C/C++. At the moment I can reliably schedule tasks to occur every 5 microseconds, but I'm trying to see if I can lower this further.
Any advice on how to achieve this / if it is possible would be greatly appreciated.
Since I know timer granularity is often OS dependent: I am currently running on Linux, but would use Windows if its timing granularity is better (although I don't believe it is, based on what I've found about QueryPerformanceCounter).
I execute all measurements on bare-metal (no VM). /proc/timer_info confirms nanosecond timer resolution for my CPU (but I know that doesn't translate to nanosecond alarm resolution)
Current
My current code can be found as a Gist here
At the moment, I'm able to execute a request every 5 microseconds (5000 nanoseconds) with less than 1% late arrivals. When late arrivals do occur, they are typically only one cycle (5000 nanoseconds) behind.
I'm doing three things at the moment:
Setting the process to real-time priority (as pointed out by @Spudd86 here)
#include <sched.h>   /* sched_setscheduler, SCHED_FIFO */
#include <string.h>  /* memset */

struct sched_param schedparm;
memset(&schedparm, 0, sizeof(schedparm));
schedparm.sched_priority = 99; // highest rt priority
sched_setscheduler(0, SCHED_FIFO, &schedparm);
Minimizing the timer slack
prctl(PR_SET_TIMERSLACK, 1); /* requires <sys/prctl.h> */
Using timerfds (available since Linux kernel 2.6.25)
#include <sys/timerfd.h>  /* timerfd_create, timerfd_settime */
#include <strings.h>      /* bzero */

int timerfd = timerfd_create(CLOCK_MONOTONIC, 0);
struct itimerspec timspec;
bzero(&timspec, sizeof(timspec));
timspec.it_interval.tv_sec = 0;
timspec.it_interval.tv_nsec = nanosecondInterval;
timspec.it_value.tv_sec = 0;
timspec.it_value.tv_nsec = 1;      /* arm the timer (almost) immediately */
timerfd_settime(timerfd, 0, &timspec, NULL);
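For completeness (not part of the original snippet), the timer is then consumed by reading from the descriptor; the read blocks until the next expiration and returns the number of expirations since the last read:

#include <stdint.h>
#include <unistd.h>

uint64_t expirations;
read(timerfd, &expirations, sizeof(expirations));   /* blocks until the timer fires */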
Possible improvements
Dedicate a processor to this process?
Use a nonblocking timerfd so that I can create a tight loop, instead of blocking (tight loop will waste more CPU, but may also be quicker to respond to an alarm)
Using an external embedded device for triggering (can't imagine why this would be better)
Why
I'm currently working on creating a workload generator for a benchmarking engine. The workload generator simulates an arrival rate (X requests / second, etc.) using a Poisson process. From the Poisson process, I can determine the relative times at which requests must be made from the benchmarking engine.
So for instance, at 10 requests a second, we may have requests made at:
t = 0.02, 0.04, 0.05, 0.056, 0.09 seconds
These requests need to be scheduled in advance and then executed. As the number of requests per second increases, the granularity required for scheduling these requests increases (thousands of requests per second requires sub-millisecond accuracy). As a result, I'm trying to figure out how to scale this system further.
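(For illustration only, not the poster's code: the gaps of a Poisson process with rate lambda are exponentially distributed, so a schedule like the one above can be generated roughly as follows; link with -lm.)

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double lambda = 10.0;                              /* 10 requests per second */
    double t = 0.0;
    for (int i = 0; i < 5; i++) {
        double u = (rand() + 1.0) / (RAND_MAX + 2.0);  /* uniform in (0, 1) */
        t += -log(u) / lambda;                         /* exponential inter-arrival gap */
        printf("request %d at t = %.3f s\n", i, t);
    }
    return 0;
}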
You're very close to the limits of what vanilla Linux will offer you, and it's way past what it can guarantee. Adding the real-time patches to your kernel and tuning for full pre-emption will help give you better guarantees under load. I would also remove any dynamic memory allocation from your time-critical code: malloc and friends can (and will) stall for a not-inconsequential (in a real-time sense) period of time if they have to reclaim memory from the I/O cache. I would also consider removing swap from that machine to help guarantee performance. Dedicating a processor to your task will help to prevent context-switch times but, again, it's no guarantee.
I would also suggest that you be careful with that level of sched_priority; at 99 you are above various important bits of Linux, which can lead to very strange effects.
What you gain from building a realtime kernel is more reliable guarantees (ie lower maximum latency) of the time between an IO/timer event handled by the kernel, and control being passed to your app in response. This comes at the price of lower throughput, and you might notice an increase in your best-case latency times.
However, the only reason for using OS timers to schedule events with high precision is if you're afraid of burning CPU cycles in a loop while you wait for your next due event. OS timers (especially in MS Windows) are not reliable for high-granularity timing events, and are very dependent on the sort of timing/HPET hardware available in your system.
When I require highly accurate event scheduling, I use a hybrid method. First, I measure the worst case latency - that is, the biggest difference between the time I requested to sleep, and the actual clock time after sleeping. Let's call this difference "D". (You can actually do this on-the-fly during normal running, by tracking "D" every time you sleep, with something like "D = (D*7 + lastD) / 8" to produce a temporal average).
Then never request to sleep beyond "N - D*2", where "N" is the time of the next event. When within "D*2" time of the next event, enter a spin loop and wait for "N" to occur.
This eats a lot more CPU cycles, but depending on the accuracy you require, you might be able to get away with a sched_yield() in your spin loop, which is kinder to your system.
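A minimal sketch of that hybrid approach (illustrative names next_event_ns and d_ns; it assumes the caller tracks "D" as described above):

#include <sched.h>
#include <stdint.h>
#include <time.h>

uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/* Sleep until roughly N - 2*D, then spin (or sched_yield) for the remainder. */
void wait_until(uint64_t next_event_ns, uint64_t d_ns)
{
    uint64_t now = now_ns();
    if (next_event_ns > now + 2 * d_ns) {
        uint64_t sleep_ns = next_event_ns - 2 * d_ns - now;
        struct timespec ts = { .tv_sec  = (time_t)(sleep_ns / 1000000000ULL),
                               .tv_nsec = (long)(sleep_ns % 1000000000ULL) };
        nanosleep(&ts, NULL);
    }
    while (now_ns() < next_event_ns)
        sched_yield();   /* swap for a pure busy-wait if sched_yield is too coarse */
}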

C: How can a threaded program run faster than a non-threaded on the same single core?

I have a server and a client program. The server program runs all the time waiting for requests from clients. For the server to respond to each client it takes 5 seconds using the sleep() function.
In the multithreaded version, if I invoke two clients at the same time it takes about 5 seconds for each to get a response. The same test gives 5 seconds for the first client and 10 for the second client in the non-multithreaded version. That is to be expected, obviously. There is a catch, though: the processor is a single-core Athlon!
How is it possible for the multithreaded server module to run faster than the non-threaded when the core on the cpu is only one?
Because you are simulating the "work" by sleeping, the scheduler only has to put one thread to sleep for 5 seconds; while that thread sleeps, it is free to "service" another thread.
In short, you aren't using the CPU so in theory you could service a lot of clients this way. Now if you were to do something CPU-bound for 5 seconds, you would see the difference.
The same would happen if the "work" were I/O. Since the CPU isn't actually involved, many, many threads appear to work concurrently.
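A minimal sketch (not the poster's code) that demonstrates this: two threads that each sleep 5 seconds finish in roughly 5 seconds of wall-clock time, even on a single core. Compile with -pthread.

#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

void *handle_client(void *arg)
{
    (void)arg;
    sleep(5);   /* the simulated 5-second "work" */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    time_t start = time(NULL);

    pthread_create(&t1, NULL, handle_client, NULL);
    pthread_create(&t2, NULL, handle_client, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("both clients served in ~%ld seconds\n", (long)(time(NULL) - start));
    return 0;
}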
The threading is handled by the OS, and is not directly related to the number of processors you have. While one thread sleeps, other threads still run.
From wiki:
In computer science, a thread of execution is the smallest unit of
processing that can be scheduled by an operating system.
For the server to respond to each client it takes 5 seconds using the sleep() function.
Two threads can both be sleeping at the same time. Sleeping doesn't take up a whole core...
If your code were CPU-bound then you wouldn't see a benefit from running with multiple threads. But realistically, even with "real work" instead of sleeping, you can often get a benefit from using threading with a single core - because any time one operation is waiting for IO or something like that, another operation can use the processor.
"Single core" just means "one CPU operation at a time" (assuming there's no hyperthreading) - not "one overall operation at a time".
It's not running faster; it's that your sleep function is called sequentially in the single-threaded application and in parallel in the multithreaded one.
Sleep doesn't do anything.
Therefore you aren't doing anything for 5 seconds. Each thread simply waits 5 seconds, and the threads don't stop each other from waiting or running while sleeping.
If you had it actually DOING something for 5 seconds (a really long loop or reading from disk) it would go back to 10 seconds total.
Threads are like people. Suppose you need a hole to be dug, you have one shovel, and it takes two hours.
A worker can dig for 1 hour, but then has to sleep for 5 hours. Should you hire one worker or two?
One worker can dig for 1 hour, sleep for 5 hours, dig for 1 hour, sleep for 5 hours, for a total of 12 hours.
With two workers, one can dig for an hour and go to sleep, then the second one can dig for an hour and also go to sleep, while the first one's sleeping. So the total time is 7 hours, before they both wake up and are ready for another job.
Keep in mind that there are resources in use besides the CPU; disk access, memory I/O, peripheral I/O can all be utilized simultaneously but not by a single thread. Multiple threads can use more resources at the same time to accomplish more in less time.
Additionally, the CPU and operating system are designed to compartmentalize "time shares" of the CPU's resources... a single thread will never get the processor's undivided attention. Having more threads can actually get more CPU use, because each will be allocated its own CPU time.

average time between kernel launch and execution?

If I understand correctly, when you launch a CUDA kernel asynchronously, it may begin execution immediately or it may wait for previous asynchronous calls (transfers, kernels, etc) to complete first. (I also understand that kernels can run concurrently in some cases, but I want to ignore that for now).
How can I find out the time between launching a kernel ("queuing") and when it actually begins execution? In fact, I really just want to know the average "queued time" over all launches in a single run of my program (generally in the tens or hundreds of thousands of kernel launches).
I can easily calculate the average execution time per kernel with events (~500 us). I tried to approximate the queue length myself: I recorded the result of clock() every time a kernel was launched, with the idea that I could then determine how long the launch queue was when each kernel was launched. But clock() does not have high enough precision (about 0.01 s); sometimes as many as 60 kernels appear to be launched at a single time, when of course in reality many are not.
Rather than clock(), use QueryPerformanceCounter, which counts in ticks of the machine's high-resolution performance counter.
Code for QueryPerformanceCounter
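A minimal host-side sketch of that approach (illustrative: launch_kernel() is a placeholder for whatever asynchronous launch is being timed):

#include <stdio.h>
#include <windows.h>

static void launch_kernel(void)
{
    /* placeholder: the asynchronous CUDA kernel launch would go here */
}

int main(void)
{
    LARGE_INTEGER freq, before, after;
    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&before);
    launch_kernel();                  /* returns as soon as the launch is queued */
    QueryPerformanceCounter(&after);

    double us = (after.QuadPart - before.QuadPart) * 1e6 / (double)freq.QuadPart;
    printf("launch call took %.3f us\n", us);
    return 0;
}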
Secondly, the profiling tool (Visual Profiler) only measures serial launches [see page 24] and [see post number 3].
Thus the best option is to (1) use QueryPerformanceCounter (or the Visual Profiler) to get an accurate measurement of a single launch, and (2) use QueryPerformanceCounter to time multiple launches and observe whether the results suggest that asynchronous launching took place.

How to find execution time (if possible in nanoseconds) of the sections in a C code on linux with intel dual core?

I have C code with some functions. I need to find out the execution time of each function. I have tried using gettimeofday and rdtsc, but I guess that because it's a multi-core system the reported time includes the switching time between the processors. I want it to be serialized. So, could somebody give me an idea of how I should calculate the time, or at least tell me the syntax of rdtscp?
P.S. please reply as soon as possible
Thanks :)
It's a little impractical to expect nanosecond resolution.
You can't add code just to output the execution times of functions without increasing execution time. When you take the code out, the timing changes.
In practice, this kind of measurement is made by observing the CPU timing signals on an oscilloscope (or logic analyser).
If you have multiple cores, then the CPU timer won't be stable between them, so set the thread affinity to keep it on one core. You also might want to measure the CPU time used by the process or thread with clock_gettime(CLOCK_PROCESS_CPUTIME_ID). Read the note for SMP systems in the usage notes for that function.
Both of these will affect the timing of the program, so perform multiple iterations of whatever you are benchmarking, and don't call the timing functions too often, to try to mitigate this.
There should be some way to set processor affinity to tell the operating system to only run that thread on a particular core.
In Windows there is a SetThreadAffinityMask call; I imagine there is a similar function in Linux, although I don't know what it is called.
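On Linux, a comparable facility is sched_setaffinity (or pthread_setaffinity_np for a single thread); a minimal sketch pinning the calling process to core 0:

#define _GNU_SOURCE          /* for CPU_ZERO, CPU_SET, sched_setaffinity */
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);       /* run only on core 0 */

    if (sched_setaffinity(0, sizeof(mask), &mask) == -1) {
        perror("sched_setaffinity");
        return 1;
    }
    /* ... run the code being timed here ... */
    return 0;
}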
You could boot your dual core system to use one core only using the following kernel parameter:
maxcpus=1
But the measured time will still include process context switching and thus depend on the activity of the other processes on the system. Are you interested in the elapsed execution time, or the CPU time needed to execute your task?
Mate, I'm not sure about this, but even on a dual core, unless the program is threaded it will only run in one thread (meaning on one core at a time), so it should not involve any time switching between processors; I believe there is no such thing...
Pavium is correct; the only way to get decent timing at this resolution is with an oscilloscope and toggling GPIO pins.
Bear in mind that this is all a bit academic anyway: I suppose you are running with an operating system, etc, so there is no way to get a straight run at the hardware.
You really need to look at the reason you want this measurement. Is it a performance benchmark for some code? You could try running the code many thousands of times and gathering some statistics. For this kind of approach I would recommend you read Zed Shaw's diatribe to make sure the numbers aren't fooling you.
Precise performance measuring was impossible until Linux kernel 2.6.31. That kernel added a new interface for accessing the performance counters of the CPU and, IMHO, corrected the time accounting in the scheduler.
Unfortunately I don't have more details, but maybe it is a starting point for further research. I'm just adding this because nobody mentioned it before.
Use the struct timespec structure and the clock_gettime function as follows to obtain the execution time of the code with nanosecond precision:
#include <time.h>   /* clock_gettime, struct timespec */

struct timespec start, end;
clock_gettime(CLOCK_REALTIME, &start);
/* Do something */
clock_gettime(CLOCK_REALTIME, &end);
Each timespec can be collapsed to a single nanosecond count, e.g. ((unsigned long long)start.tv_sec * 1000000000ULL) + start.tv_nsec; subtracting the converted start from the converted end gives the elapsed time in nanoseconds.
Moreover, I've used this with multithreaded code as well.
I hope this helps you get your desired execution time in nanoseconds.

Resources