How to measure mutex contention? (C)

I have some threaded code using PThreads on Linux that, I suspect, is suffering from excessive lock contention. What tools are available for me to measure this?
Solaris has DTrace and plockstat. Is there something similar on Linux? (I know about a recent DTrace port for Linux but it doesn't seem to be ready for prime time yet.)

mutrace is the tool:
http://0pointer.de/blog/projects/mutrace.html
It's easy to build, install, and use.
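If you want something quick to point it at, here's a minimal contended-mutex test program (my own sketch, not from the mutrace docs): two threads hammer one lock, so a contention profiler should flag that mutex immediately. Build with gcc -O2 -pthread contention.c.

/* Minimal contended-mutex test: two threads repeatedly take the same lock
 * around a tiny critical section, producing heavy contention on purpose. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);
        counter++;               /* tiny critical section, heavy contention */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    printf("counter = %ld\n", counter);
    return 0;
}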

After not having much luck with SystemTap, I decided to try and use the DTrace Linux port with some success, despite the lack of a plockstat provider. The following DTrace script is not quite a plockstat replacement but it managed to show me some of the information I was after.
#!/usr/sbin/dtrace -s
/* Usage: ./futex.d '"execname"' */

long total;

END
{
    printf("total time spent on futex(): %ldms\n", total);
}

/* arg1 == 0 means FUTEX_WAIT */
syscall::futex:entry
/execname == $1 && arg1 == 0/
{
    self->start = timestamp;
}

syscall::futex:return
/self->start/
{
    this->elapsed = (timestamp - self->start) / 1000000;
    @[execname] = quantize(this->elapsed);
    total += this->elapsed;
    self->start = 0;
}
Here's an example using the above DTrace script to measure time spent in FUTEX_WAIT for a simple test program from this DTrace article.
$ ./futex.d '"mutex-test"'
dtrace: script './futex.d' matched 3 probes
^C
CPU ID FUNCTION:NAME
1 2 :END total time spent on futex(): 11200ms
mutex-test
value ------------- Distribution ------------- count
128 | 0
256 |#################### 1
512 | 0
1024 | 0
2048 | 0
4096 | 0
8192 |#################### 1
16384 | 0
Definitely not great, but at least it's a starting point.

Recent versions of Valgrind include lock-contention and lock-validation tools (DRD and Helgrind):
http://valgrind.org/docs/manual/drd-manual.html
This is great if you can reproduce the issue under Valgrind (it slows the program down considerably) and have enough memory to run it.
For other cases, the more heavyweight Linux Trace Toolkit Next Generation (LTTng) is recommended:
http://ltt.polymtl.ca/
Cheers,
Gilad

The latest version of SystemTap comes with lots of example scripts. One in particular seems like it would serve as a good starting point for helping you accomplish your task:
#! /usr/bin/env stap

global thread_thislock
global thread_blocktime
global FUTEX_WAIT = 0
global lock_waits
global process_names

probe syscall.futex {
    if (op != FUTEX_WAIT) next
    t = tid()
    process_names[pid()] = execname()
    thread_thislock[t] = $uaddr
    thread_blocktime[t] = gettimeofday_us()
}

probe syscall.futex.return {
    t = tid()
    ts = thread_blocktime[t]
    if (ts) {
        elapsed = gettimeofday_us() - ts
        lock_waits[pid(), thread_thislock[t]] <<< elapsed
        delete thread_blocktime[t]
        delete thread_thislock[t]
    }
}

probe end {
    foreach ([pid+, lock] in lock_waits)
        printf("%s[%d] lock %p contended %d times, %d avg us\n",
               process_names[pid], pid, lock, @count(lock_waits[pid, lock]),
               @avg(lock_waits[pid, lock]))
}
I was attempting to diagnose something similar with a MySQL process previously and observed output similar to the following using the above script:
mysqld[3991] lock 0x000000000a1589e0 contended 45 times, 3 avg us
mysqld[3991] lock 0x000000004ad289d0 contended 1 times, 3 avg us
While the above script collects information on all processes running on the system, it would be quite easy to modify it to only work on a certain process or executable. For example, we could change the script to take a process ID argument and modify the probe on entering the futex call to look like:
# process_id must be declared global so it is visible in the other probes
global process_id

probe begin {
    process_id = strtol(@1, 10)
}

probe syscall.futex {
    if (pid() == process_id && op == FUTEX_WAIT) {
        t = tid()
        process_names[process_id] = execname()
        thread_thislock[t] = $uaddr
        thread_blocktime[t] = gettimeofday_us()
    }
}
Obviously, you could modify the script in lots of ways to suit what you want to do. I'd encourage you to have a look at the various example scripts that ship with SystemTap; they are probably the best starting point.

In the absence of DTrace, your best bet is probably SystemTap. Here's a positive write-up:
http://davidcarterca.wordpress.com/2009/05/27/systemtap/

Related

How can I use DTrace to calculate the time taken between arbitrary C statements?

I'm using ImageMagick (via the PHP imagick extension) to generate a simple GIF animation, like this one.
Using the following D script, I've found that the WriteGIFImage() function (https://github.com/ImageMagick/ImageMagick/blob/c807b69de68a33b42fc8725486d5ac81688afd16/coders/gif.c#L1506) takes a long time to write the GIF data.
pid$target::WriteGIFImage:entry
{
    self->start_WriteGIFImage = timestamp;
    printf(" -> WriteGIFImage\n");
}

pid$target::WriteGIFImage:return
{
    this->delta = (timestamp - self->start_WriteGIFImage) / 1000 / 1000;
    @deltas["WriteGIFImage"] = sum(this->delta);
    printf(" <- WriteGIFImage elapsed %d ms\n", this->delta);
}
Output:
ImagesToBlob
-> WriteImage
-> WriteGIFImage
<- WriteGIFImage elapsed 821 ms
<- WriteImage elapsed 821 ms
ImagesToBlob elapsed 821 ms
Total (ms):
RelinquishMagickMemory 0
WriteBlobByte 0
ImagesToBlob 821
WriteGIFImage 821
WriteImage 821
WriteGIFImage() is a big function, and I want to know the time taken between two statements in order to find the slowest code block. For example, I suspect this for loop takes a long time, so I need DTrace to tell me the time difference between line 1673 and line 1678. How can I do that with a D script?
1673 for (i=0; i < (ssize_t) image->colors; i++)
1674 {
1675 *q++=ScaleQuantumToChar(ClampToQuantum(image->colormap[i].red));
1676 *q++=ScaleQuantumToChar(ClampToQuantum(image->colormap[i].green));
1677 *q++=ScaleQuantumToChar(ClampToQuantum(image->colormap[i].blue));
1678 }
By the way, I found that ScaleQuantumToChar() and ClampToQuantum() are both inline functions, and a pid*::ScaleQuantumToChar:entry/return probe does not work for them. How can I trace inline functions with D?
In addition to the entry and return probes, the pid provider also has probes for each instruction offset. If you do sudo dtrace -l -n 'pid$target::WriteGIFImage:*' -p <pid>, it will list them. Then, you have to disassemble the code to determine what offset corresponds to what line of code. And, when a program is built with optimizations turned on, that correspondence might not be clean. (Instructions can be out of order with respect to code lines.)
You can also define your own User-Defined Static Tracing (USDT) providers and instrument your code with them. The dtrace man page explains how.
All of that said, though, DTrace probably isn't the best tool for this. Use the Time Profiler template of Instruments and it will tell you where your program is spending its time, down to the line (or even instruction).
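If you can rebuild the code and just want a quick number for that loop, another option (not DTrace at all, just my own suggestion) is to time the region directly with a monotonic clock. A rough sketch, with a stand-in loop in place of the gif.c code and assuming clock_gettime()/CLOCK_MONOTONIC is available on your platform:

/* Rough sketch: measure an arbitrary code region with a monotonic clock.
 * The "stand-in work" loop below takes the place of the colormap loop at
 * gif.c lines 1673-1678. */
#include <stdio.h>
#include <time.h>

static double elapsed_ms(const struct timespec *a, const struct timespec *b)
{
    return (b->tv_sec - a->tv_sec) * 1e3 + (b->tv_nsec - a->tv_nsec) / 1e6;
}

int main(void)
{
    struct timespec t0, t1;
    volatile unsigned long sink = 0;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (unsigned long i = 0; i < 100000000UL; i++)   /* stand-in work */
        sink += i;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    fprintf(stderr, "region took %.3f ms (sink=%lu)\n", elapsed_ms(&t0, &t1), sink);
    return 0;
}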

Why does my Linux app get stopped every 0.5 seconds?

I have a 16 core Linux machine that is idle. If I run a trivial, single threaded C program that sits in a loop reading the cycle counter forever (using the rdtsc instruction), then every 0.5 seconds, I see a 0.17 ms jump in the timer value. In other words, every 0.5 seconds it seems that my application is stopped for 0.17ms. I would like to understand why this happens and what I can do about it. I understand Linux is not a real time operating system. I'm just trying to understand what is going on, so I can make the best use of what Linux provides.
I found someone else's software for measuring this problem - https://github.com/nokia/clocktick_jumps. Its results are consistent with my own.
To answer the "tell us what specific problem you are trying to solve" question - I work on high-speed networking applications using DPDK. About 60 million packets arrive per second. I need to decide what size to make the RX buffers and have reasons that the number I pick is sensible. The answer to this question is one part of that puzzle.
My code looks like this:
// Build with gcc -O2 -Wall
#include <stdio.h>
#include <unistd.h>
#include <x86intrin.h>

int main() {
    // Bad way to learn frequency of cycle counter.
    unsigned long long t1 = __rdtsc();
    usleep(1000000);
    double millisecs_per_tick = 1e3 / (double)(__rdtsc() - t1);

    // Loop forever. Print message if any iteration takes unusually long.
    t1 = __rdtsc();
    while (1) {
        unsigned long long t2 = __rdtsc();
        double delta = t2 - t1;
        delta *= millisecs_per_tick;
        if (delta > 0.1) {
            printf("%4.2f - Delay of %.2f ms.\n", (double)t2 * millisecs_per_tick, delta);
        }
        t1 = t2;
    }
    return 0;
}
I'm running on Ubuntu 16.04, amd64. My processor is an Intel Xeon X5672 @ 3.20GHz.
I expect your system is scheduling another task on the same CPU, and your process is either preempted or moved to another core with some timing penalty.
You can find the reason by digging into the kernel events happening at the same time; for example, SystemTap or perf can give you some insight. I'd start with the scheduler events to rule that out first: https://github.com/jav/systemtap/blob/master/tapset/scheduler.stp
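One quick experiment (my own suggestion, not from the linked tapset) is to rule out cross-core migration by pinning the test program to a single core; if the 0.17 ms gaps persist, preemption on that core is the more likely cause. A Linux-only sketch using sched_setaffinity():

/* Pin the calling thread to one CPU so migration can be ruled out.
 * Linux-specific; _GNU_SOURCE is needed for sched_setaffinity(). */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

static int pin_to_cpu(int cpu)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(cpu, &mask);
    /* pid 0 means "the calling thread" */
    return sched_setaffinity(0, sizeof(mask), &mask);
}

int main(void)
{
    if (pin_to_cpu(2) != 0) {      /* CPU index 2 chosen arbitrarily */
        perror("sched_setaffinity");
        return 1;
    }
    /* ... run the rdtsc loop from the question here ... */
    return 0;
}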

Using multiple threads to calculate data, but it doesn't reduce the time

My CPU has four cores (macOS). I use 4 threads to calculate an array, but the calculation time is not reduced. Without multithreading the calculation takes about 52 seconds, and even with 4 threads (or 2 threads) the time doesn't change.
(I know why this happens now. The problem is that I was using clock() to measure the time. That is wrong in a multithreaded program because clock() reports CPU time summed across all threads, which is roughly the wall-clock time multiplied by the number of busy threads. When I use time() to measure elapsed time, the result is correct.)
The output of using 2 threads:
id 1 use time = 43 sec to finish
id 0 use time = 51 sec to finish
time for round 1 = 51 sec
id 1 use time = 44 sec to finish
id 0 use time = 52 sec to finish
time for round 2 = 52 sec
id 1 and id 0 are thread 1 and thread 0; "time for round" is the time for both threads to finish. If I don't use multithreading, the time per round is also about 52 seconds.
This is the part that creates the 4 threads:
for (i = 1; i <= round; i++)
{
    time_round_start = clock();
    for (j = 0; j < THREAD_NUM; j++)
    {
        cal_arg[j].roundth = i;
        pthread_create(&thread_t_id[j], NULL, Multi_Calculate, &cal_arg[j]);
    }
    for (j = 0; j < THREAD_NUM; j++)
    {
        pthread_join(thread_t_id[j], NULL);
    }
    time_round_end = clock();
    int round_time = (int)((time_round_end - time_round_start) / CLOCKS_PER_SEC);
    printf("time for round %d = %d sec\n", i, round_time);
}
This is the code inside the thread function:
void *Multi_Calculate(void *arg)
{
    struct multi_cal_data cal = *((struct multi_cal_data *)arg);
    int p_id = cal.thread_id;
    int i = 0;
    int root_level = 0;
    int leaf_addr = 0;
    int neighbor_root_level = 0;
    int neighbor_leaf_addr = 0;
    Neighbor *locate_neighbor = (Neighbor *)malloc(sizeof(Neighbor));
    printf("id:%d, start:%d end:%d,round:%d\n", p_id, cal.start_num, cal.end_num, cal.roundth);
    for (i = cal.start_num; i <= cal.end_num; i++)
    {
        root_level = i / NUM_OF_EACH_LEVEL;
        leaf_addr = i % NUM_OF_EACH_LEVEL;
        if (root_addr[root_level][leaf_addr].node_value != i)
        {
            // ignore, because this is a gap: there is no node here
        }
        else
        {
            int k = 0;
            locate_neighbor = root_addr[root_level][leaf_addr].head;
            double tmp_credit = 0;
            for (k = 0; k < root_addr[root_level][leaf_addr].degree; k++)
            {
                neighbor_root_level = locate_neighbor->neighbor_value / NUM_OF_EACH_LEVEL;
                neighbor_leaf_addr = locate_neighbor->neighbor_value % NUM_OF_EACH_LEVEL;
                tmp_credit += root_addr[neighbor_root_level][neighbor_leaf_addr].g_credit[cal.roundth - 1] / root_addr[neighbor_root_level][neighbor_leaf_addr].degree;
                locate_neighbor = locate_neighbor->next;
            }
            root_addr[root_level][leaf_addr].g_credit[cal.roundth] = tmp_credit;
        }
    }
    return 0;
}
The array is very large; each thread calculates part of it.
Is there something wrong with my code?
It could be a bug, but if you feel the code is correct, then the overhead of parallelization (thread creation, synchronization, and so on) might mean the overall runtime is about the same as for the non-parallelized code at this problem size.
It might be an interesting experiment to run both the single-threaded loop and the threaded code against very large arrays (100k elements?) and see whether the results start to diverge in favor of the parallel/threaded code.
Amdahl's law, also known as Amdahl's argument,[1] is used to find the maximum expected improvement to an overall system when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup using multiple processors.
https://en.wikipedia.org/wiki/Amdahl%27s_law
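To make that concrete: Amdahl's law puts the maximum speedup at 1 / ((1 - P) + P/N), where P is the fraction of the work that can be parallelized and N is the number of cores. If, say, 80% of the work parallelizes perfectly across 4 cores, the ceiling is 1 / (0.2 + 0.8/4) = 2.5x, not 4x.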
You don't always gain speed by multithreading a program. Threading comes with a certain amount of overhead, and unless there is enough parallel work to make up for that overhead, you won't see an improvement. Still, a lot can be learned about how multithreading works even if the program you write ends up running slower.
I know why this happens now. The problem is that I used clock() to measure the time. That is wrong in a multithreaded program because clock() reports CPU time summed across all of the process's threads, which is roughly the wall-clock time multiplied by the number of busy threads. When I use time() to measure elapsed time, the result is correct.
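For anyone hitting the same thing, here is a minimal sketch of the difference (my own example, assuming POSIX clock_gettime() and pthreads are available): clock() sums CPU time over all threads of the process, while CLOCK_MONOTONIC measures elapsed wall-clock time, which is what "time for round" should report. With two busy threads, the clock() figure comes out at roughly twice the wall-clock figure. Build with gcc -pthread.

#include <pthread.h>
#include <stdio.h>
#include <time.h>

/* Each thread just burns CPU for a while. */
static void *spin(void *arg)
{
    (void)arg;
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 2000000000UL; i++)
        x += i;
    return NULL;
}

int main(void)
{
    struct timespec w0, w1;
    pthread_t t[2];

    clock_t c0 = clock();
    clock_gettime(CLOCK_MONOTONIC, &w0);

    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, spin, NULL);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);

    clock_gettime(CLOCK_MONOTONIC, &w1);
    clock_t c1 = clock();

    double cpu_sec  = (double)(c1 - c0) / CLOCKS_PER_SEC;
    double wall_sec = (w1.tv_sec - w0.tv_sec) + (w1.tv_nsec - w0.tv_nsec) / 1e9;
    printf("clock() CPU time: %.2f s, wall-clock time: %.2f s\n", cpu_sec, wall_sec);
    return 0;
}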

MKL Performance on Intel Phi

I have a routine that performs a few MKL calls on small matrices (50-100 x 1000 elements) to fit a model, which I then call for different models. In pseudo-code:
double doModelFit(int model, ...) {
    ...
    while( !done ) {
        cblas_dgemm(...);
        cblas_dgemm(...);
        ...
        dgesv(...);
        ...
    }
    return result;
}

int main(int argc, char **argv) {
    ...
    c_start = 1; c_stop = nmodel;
    for(int c=c_start; c<c_stop; c++) {
        ...
        result = doModelFit(c, ...);
        ...
    }
}
Call the above version 1. Since the models are independent, I can use OpenMP threads to parallelize the model fitting, as follows (version 2):
int main(int argc, char **argv) {
    ...
    int numthreads = omp_get_max_threads();
    int c;
    #pragma omp parallel for private(c)
    for (int t = 0; t < numthreads; t++) {
        // assuming nmodel is divisible by numthreads...
        int c_start = t * nmodel / numthreads + 1;   // per-thread range
        int c_stop = (t + 1) * nmodel / numthreads;
        for (c = c_start; c < c_stop; c++) {
            ...
            result = doModelFit(c, ...);
            ...
        }
    }
}
When I run version 1 on the host machine, it takes ~11 seconds and VTune reports poor parallelization with most of the time spent idle. Version 2 on the host machine takes ~5 seconds and VTune reports great parallelization (near 100% of the time is spent with 8 CPUs in use). Now, when I compile the code to run on the Phi card in native mode (with -mmic), versions 1 and 2 both take approximately 30 seconds when run on the command prompt on mic0. When I use VTune to profile it:
Version 1 takes the same roughly 30 seconds, and the hotspot analysis shows that most time is spent in __kmp_wait_sleep and __kmp_static_yield. Out of 7710s CPU time, 5804s are spent in Spin Time.
Version 2 takes fooooorrrreevvvver... I kill it after running a couple minutes in VTune. The hotspot analysis shows that of 25254s of CPU time, 21585s are spent in [vmlinux].
Can anyone shed some light on what's going on here and why I'm getting such bad performance? I'm using the default for OMP_NUM_THREADS and set KMP_AFFINITY=compact,granularity=fine (as recommended by Intel). I'm new to MKL and OpenMP, so I'm certain I'm making rookie mistakes.
Thanks,
Andrew
The most probable reason for this behavior, given that most of the time is spent in the OS (vmlinux), is oversubscription caused by the nested OpenMP parallel regions inside the MKL implementations of cblas_dgemm() and dgesv(). E.g. see this example.
This version is supported and explained by Jim Dempsey at the Intel forum.
What about using the sequential MKL library? If you link MKL with the sequential option, it doesn't create OpenMP threads inside MKL itself. I guess you may get better results than you do now.
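As a sketch of the run-time alternative (keeping the threaded MKL library but throttling it), you can force MKL to one thread per call from the code; mkl_set_num_threads() and the MKL_NUM_THREADS environment variable are the standard knobs, and linking against the sequential layer (-lmkl_sequential instead of -lmkl_intel_thread) removes MKL's internal threading entirely. Assuming the MKL service API (mkl.h) is available:

#include <mkl.h>
#include <omp.h>

/* Keep the outer OpenMP loop over models parallel, but make each MKL call
 * (dgemm, dgesv) single-threaded so the two levels of parallelism don't
 * oversubscribe the card. Equivalent at run time: export MKL_NUM_THREADS=1 */
static void configure_threading(void)
{
    mkl_set_num_threads(1);   /* one thread per MKL call */
    omp_set_nested(0);        /* no nested OpenMP parallel regions */
}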

Calculating the time elapsed by a particular function in a C program

I have code in which I want to measure the time taken by two sorting algorithms, merge sort and quick sort, to sort N numbers, with microsecond or better precision.
The two times thus calculated will then be output to the terminal.
Code (part of the program):
printf("THE LIST BEFORE SORTING IS(UNSORTED LIST):\n");
printlist(arr,n);
mergesort(extarr,0,n-1);
printf("THE LIST AFTER SORTING BY MERGE SORT IS(SORTED LIST):\n");
printlist(extarr,n);
quicksort(arr,0,n-1);
printf("THE LIST AFTER SORTING BY QUICK SORT IS(SORTED LIST):\n");
printlist(arr,n);
How can this be done? I have tried clock_t, taking two readings above and below the function call respectively, but the difference always prints as zero.
Please suggest other methods or functions, keeping in mind that it should work on any OS.
Thanks for any help in advance.
Method 1:
To measure the total time taken by the program you can use the Linux utility time.
Say your program is test.cpp:
$ g++ -o test test.cpp
$ time ./test
The output will look like:
real 0m11.418s
user 0m0.004s
sys 0m0.004s
Method 2:
You can also use the Linux profiler gprof to find the time spent in different functions.
First compile the program with the -pg flag:
$ g++ -pg -o test test.cpp
$ ./test
$ gprof test gmon.out
PS: gmon.out is the profile data file, produced when the instrumented program runs.
You can call the gettimeofday function on Linux and timeGetTime on Windows. Call these functions before and after calling your sorting function and take the difference.
Please check the man pages for further details. If you still can't get tangible data (the time taken may be too small for small data sets), measure the time for n iterations together and divide by n, or increase the size of the data set to be sorted.
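A minimal sketch of that approach on Linux (microsecond resolution; qsort() stands in for your mergesort()/quicksort() calls, which aren't shown in the question):

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

static int cmp_int(const void *a, const void *b)
{
    return (*(const int *)a > *(const int *)b) - (*(const int *)a < *(const int *)b);
}

int main(void)
{
    enum { N = 100000 };
    int *arr = malloc(N * sizeof *arr);
    for (int i = 0; i < N; i++)
        arr[i] = rand();

    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    qsort(arr, N, sizeof *arr, cmp_int);   /* replace with mergesort(extarr,0,n-1) */
    gettimeofday(&t1, NULL);

    long usec = (t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_usec - t0.tv_usec);
    printf("sort took %ld microseconds\n", usec);
    free(arr);
    return 0;
}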
Not sure if you tried the following. I know your original post says that you tried CLOCKS_PER_SEC. Using clock() and doing (stop - start) / CLOCKS_PER_SEC lets you get seconds; casting to double provides more precision.
#include <time.h>

int main(void)
{
    clock_t launch = clock();
    // do work
    clock_t done = clock();
    // cast to double, otherwise the division is done in integer arithmetic
    double diff = (double)(done - launch) / CLOCKS_PER_SEC;
    return 0;
}
The reason you get zero as the result is most likely the poor resolution of the time source you're using. Such time sources typically tick every 10 to 20 ms; that's crude, but it's how they work. When your sort finishes in less than one tick, the measured result is zero. You may be able to push the resolution into the 1 ms range by increasing the system's timer interrupt frequency, but there is no standard way to do that; Windows and Linux each have their own mechanism.
An even higher resolution can be obtained from a high-frequency counter. Windows and Linux both provide access to such counters, but again, the code looks slightly different on each.
If you need one piece of code to run on both Windows and Linux, I'd recommend performing the time measurement in a loop: run the code to be measured hundreds (or more) of times, capture the time around the whole loop, and divide the captured time by the number of loop iterations.
Of course, this is for evaluation only; you don't want that in the final code.
And: given that the timer resolution is in the 1 to 20 ms range, choose the total run time so that you get a decent resolution for your measurement. (Hint: adjust the loop count so the loop runs for at least a second or so.)
Example:
clock_t start, end;
printf("THE LIST BEFORE SORTING IS(UNSORTED LIST):\n");
printlist(arr, n);

start = clock();
for (int i = 0; i < 100; i++) {
    mergesort(extarr, 0, n - 1);
}
end = clock();
// Cast to double to avoid integer division, then divide by the loop count
// to get the average time of a single merge sort. (Note that after the
// first iteration the array is already sorted, so ideally re-copy the
// unsorted input each time.)
double diff = (double)(end - start) / CLOCKS_PER_SEC / 100;
// and so on...

printf("THE LIST AFTER SORTING BY MERGE SORT IS(SORTED LIST):\n");
printlist(extarr, n);

quicksort(arr, 0, n - 1);
printf("THE LIST AFTER SORTING BY QUICK SORT IS(SORTED LIST):\n");
printlist(arr, n);
If you are on Linux 2.6.26 or above, then getrusage(2) is the most accurate way to go:
#include <sys/time.h>
#include <sys/resource.h>

// RUSAGE_THREAD exists since Linux 2.6.26.
// The macro is not defined in all headers, but it is supported if your Linux version matches.
#ifndef RUSAGE_THREAD
#define RUSAGE_THREAD 1
#endif

// If you are single-threaded then RUSAGE_SELF is POSIX compliant:
// http://linux.die.net/man/2/getrusage

struct rusage rusage_start, rusage_stop;
getrusage(RUSAGE_THREAD, &rusage_start);
...
getrusage(RUSAGE_THREAD, &rusage_stop);

// amount of microseconds spent in user space
size_t user_time = ((rusage_stop.ru_utime.tv_sec - rusage_start.ru_utime.tv_sec) * 1000000)
                   + rusage_stop.ru_utime.tv_usec - rusage_start.ru_utime.tv_usec;
// amount of microseconds spent in kernel space
size_t system_time = ((rusage_stop.ru_stime.tv_sec - rusage_start.ru_stime.tv_sec) * 1000000)
                     + rusage_stop.ru_stime.tv_usec - rusage_start.ru_stime.tv_usec;
