I have written a small C program. It reads some gzipped files, does some filtering, and then writes gzipped output files again.
I compile with gcc using -O3 -Ofast; otherwise the setup is pretty standard.
If I do strace -c on my executable I get:
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
46.01 0.077081 0 400582 read
42.73 0.071579 4771 15 munmap
9.34 0.015647 0 110415 brk
1.01 0.001688 32 52 openat
0.45 0.000746 3 228 mmap
0.20 0.000327 4 70 mprotect
0.15 0.000254 0 1128 write
0.06 0.000100 2 50 fstat
0.05 0.000087 1 52 close
0.00 0.000006 6 1 getrandom
0.00 0.000005 2 2 rt_sigaction
0.00 0.000004 2 2 1 arch_prctl
0.00 0.000003 3 1 1 stat
0.00 0.000003 1 2 lseek
0.00 0.000002 2 1 rt_sigprocmask
0.00 0.000002 2 1 prlimit64
0.00 0.000000 0 8 pread64
0.00 0.000000 0 1 1 access
0.00 0.000000 0 1 execve
0.00 0.000000 0 2 fdatasync
0.00 0.000000 0 1 set_tid_address
0.00 0.000000 0 1 set_robust_list
------ ----------- ----------- --------- --------- ----------------
100.00 0.167534 512616 3 total
So my program is quite busy reading the file. Now I am not sure whether I can make it faster. The relevant code is the following:
while (gzgets(file_pointer, line, LL) != Z_NULL) {
    linkage = strtok(line, "\t");        /* first column (discarded) */
    linkage = strtok(NULL, "\t");        /* second column */
    linkage[strcspn(linkage, "\n")] = 0; /* strip a trailing newline */
    add_linkage_entry(id_cnt, linkage);
    id_cnt++;
}
Do you see room for improvement here? Is it possible to intervene manually with gzread, or is gzgets doing a good job here of not reading char by char?
Any other advice? (Are the errors in the strace output worrisome?)
EDIT:
add_linkage_entry adds an entry to a uthash hash table (https://troydhanson.github.io/uthash/).
I don't think that gzgets (and the related read system calls) are the bottleneck here.
The number of read calls is small for data that compresses well, and it increases for data that has more entropy (zlib then has to fetch compressed data from disk more frequently). E.g., for text data generated from urandom (via
base64 /dev/urandom | tr -- '+HXA' '\t' | head -n 10000000 | gzip
) I get about 70000 read calls for 10M lines, i.e. about 140 lines/call. This nicely matches your experience of 100..1000 lines per call.
What is more, the CPU time for reading those lines is still negligible (about 2.5M lines/s, including the strtok calls). Highly compressible data requires about 40 times fewer read calls and can be read about 4 times as fast -- but this factor of 4 also shows up with raw decompression via gzip -d on the command line.
It thus appears that your function add_linkage_entry is the bottleneck here. In particular, the large number of brk calls looks unusual.
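For illustration, here is a minimal sketch of what such an add_linkage_entry could look like with uthash; the real implementation was not posted, so the integer key and the strdup are assumptions:
#include <stdlib.h>
#include <string.h>
#include "uthash.h"

/* Hypothetical reconstruction -- the real add_linkage_entry() was not shown. */
struct entry {
    int id;              /* key (assumed to be an int here) */
    char *linkage;       /* value, copied from the parsed line */
    UT_hash_handle hh;   /* makes this struct hashable by uthash */
};

static struct entry *entries = NULL;

void add_linkage_entry(int id, const char *linkage)
{
    /* Two small heap allocations per line (the struct and the string copy),
     * plus uthash's own bucket growth.  Millions of such allocations could
     * plausibly explain the ~110k brk calls in the strace summary. */
    struct entry *e = malloc(sizeof *e);
    e->id = id;
    e->linkage = strdup(linkage);
    HASH_ADD_INT(entries, id, e);
}
If something like this is the case, allocating entries in larger blocks (or an arena) instead of one malloc per line would be the first thing to try.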
The errors in strace output look harmless.
Related
I am currently working on a C program on Debian. At first the program allocates several gigabytes of memory. The problem is that after startup it still keeps allocating memory, although I checked and there is no malloc, calloc, etc. in the main loop of the program. I have checked the memory usage via the RES column in the htop command.
Then I decided to check the memory syscalls of the program with strace. I attached strace after program startup using this command:
strace -c -f -e trace=memory -p $(pidof myprogram)
Here is the result:
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
100.00 0.000311 0 10392 mprotect
------ ----------- ----------- --------- --------- ----------------
100.00 0.000311 10392 total
So it is clear that there are no brk or mmap syscalls that could allocate memory.
Here is the list of all syscalls:
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
33.00 1.446748 6156 235 67 futex
32.41 1.420658 8456 168 poll
17.35 0.760549 31 24459 nanosleep
16.24 0.712000 44500 16 select
1.00 0.044000 7333 6 2 restart_syscall
0.00 0.000000 0 80 40 read
0.00 0.000000 0 40 write
0.00 0.000000 0 184 mprotect
0.00 0.000000 0 33 rt_sigprocmask
0.00 0.000000 0 21 sendto
0.00 0.000000 0 47 sendmsg
0.00 0.000000 0 138 44 recvmsg
0.00 0.000000 0 7 gettid
------ ----------- ----------- --------- --------- ----------------
100.00 4.383955 25434 153 total
Do you have any idea why memory is still being allocated?
I have a very large tab-separated file. The underlying file is binary and is streamed as tab-separated text by the tool samtools (which is very fast and not the bottleneck). Now I want to output only the content up to the first tab.
In my current piped command cut is the bottleneck:
samtools view -# 15 -F 0x100 file.bam | cut -f 1 | pigz > out.gz
I tried using awk '{print $1}', but this is not sufficiently faster. I also tried using parallel in combination with cut, but this does not increase the speed much either.
I guess it would be better to have a tool which just outputs the string up to the first tab and then completely skips the rest of the line.
Do you have a suggestion for a tool which is faster for my purpose? Ideally one would write a small C program, I guess, but my C is a bit rusty, so that would take too long for me.
You are interested in a small C program that just outputs each line from stdin up to the first tab.
In C you can do this easily with something like this:
#include <stdio.h>
#include <string.h>

#define MAX_LINE_LENGTH 1024

int main(void) {
    char buf[MAX_LINE_LENGTH];

    while (fgets(buf, sizeof(buf), stdin) != NULL) {
        buf[strcspn(buf, "\n\t")] = '\0';
        fputs(buf, stdout);
        fputc('\n', stdout);
    }
    return 0;
}
It simply reads lines from stdin with fgets. The string is terminated with a NUL byte at the first tab \t. The same is done at a \n, so that there is no extra line feed in the output in case an input line contains no tab.
Whether this is much faster in your use case I cannot say, but it should at least provide a starting point for trying out your idea.
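One limitation of the fixed buffer: a line longer than MAX_LINE_LENGTH-1 bytes gets split across two fgets calls and would then produce two output lines. If that can happen with your data, a variant using POSIX getline(3), which grows its buffer as needed, might be safer -- again just a sketch:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>

int main(void) {
    char *line = NULL;   /* getline() allocates and grows this buffer */
    size_t cap = 0;

    while (getline(&line, &cap, stdin) != -1) {
        line[strcspn(line, "\n\t")] = '\0';  /* cut at first tab or newline */
        puts(line);                          /* puts() appends the newline */
    }
    free(line);
    return 0;
}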
GNU Awk 5.0.1, API: 2.0 (GNU MPFR 4.0.2, GNU MP 6.2.0)
You might give other implementations of AWK a try. According to a test done in 2009¹, "Don't MAWK AWK – the fastest and most elegant big data munging language!", nawk was found to be faster than gawk, and mawk faster than nawk. You would need to run the test with your own data to find out whether another implementation gives a noticeable boost.
¹ so versions available in 2022 might give different results
In the question OP has mentioned that awk '{print $1}' is not sufficiently faster than cut; in my testing I'm seeing awk running about twice as fast as cut, so I'm not sure how OP is using awk ... or if I'm missing something (basic) with my testing ...
OP has mentioned a 'large' tab-delimited file with up to 400 characters per line; we'll simulate this with the following code that generates a ~400MB file:
$ cat sam_out.awk
awk '
BEGIN { OFS="\t"; x="1234567890"
for (i=1;i<=40;i++) filler=filler x
for (i=1;i<=1000000;i++) print x,filler
}'
$ . ./sam_out.awk | wc
1000000 2000000 412000000
Test calls:
$ cat sam_tests.sh
echo "######### pipe to cut"
time . ./sam_out.awk | cut -f1 - > /dev/null
echo "######### pipe to awk"
time . ./sam_out.awk | awk '{print $1}' > /dev/null
echo "######### process-sub to cut"
time cut -f1 <(. ./sam_out.awk) > /dev/null
echo "######### process-sub to awk"
time awk '{print $1}' <(. ./sam_out.awk) > /dev/null
NOTE: also ran all 4 tests with output written to 4 distinct output files; diff of the 4 output files showed all were the same (wc: 1000000 1000000 11000000; head -1: 1234567890)
Results of running the tests:
######### pipe to cut
real 0m1.177s
user 0m0.205s
sys 0m1.454s
######### pipe to awk
real 0m0.582s
user 0m0.166s
sys 0m0.759s
######### process-sub to cut
real 0m1.265s
user 0m0.351s
sys 0m1.746s
######### process-sub to awk
real 0m0.655s
user 0m0.097s
sys 0m0.968s
NOTES:
test system: Ubuntu 10.04, cut (GNU coreutils 8.30), awk (GNU Awk 5.0.1)
an earlier version of this answer showed awk running 14x-15x faster than cut; that system: cygwin 3.3.5, cut (GNU coreutils 8.26), awk (GNU Awk 5.1.1)
You might consider process-substitutions instead of a pipeline.
$ < <( < <(samtools view -# 15 -F 0x100 file.bam) cut -f1 ) pigz
Note: I'm using process substitution to generate stdin and avoid using another FIFO. This seems to be much faster.
I've written a simple test script sam_test.sh that generates some output:
#!/usr/bin/env bash
echo {1..10000} | awk 'BEGIN{OFS="\t"}{$1=$1;for(i=1;i<=1000;++i) print i,$0}'
and compared the output of the following commands:
$ ./sam_test.sh | cut -f1 | awk '!(FNR%3)'
$ < <(./sam_test.sh) cut -f1 | awk '!(FNR%3)'
$ < <( < <(./sam_test.sh) cut -f1 ) awk '!(FNR%3)'
The last of the three cases is significantly faster in terms of runtime. Using strace -c, we can see that each pipe adds a significant amount of time spent in wait4 syscalls. The final version is then also significantly faster (a factor of 700 in the above case).
Output of test case (short):
$ cat ./sam_test_full_pipe.sh
#!/usr/bin/env bash
./sam_test.sh | cut -f1 - | awk '!(FNR%3)' -
$ strace -c ./sam_test_full_pipe.sh > /dev/null
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
99.22 0.643249 160812 4 1 wait4
0.30 0.001951 5 334 294 openat
0.21 0.001331 5 266 230 stat
0.04 0.000290 20 14 12 execve
<snip>
------ ----------- ----------- --------- --------- ----------------
100.00 0.648287 728 890 549 total
$ cat ./sam_test_one_pipe.sh
#!/usr/bin/env bash
< <(./sam_test.sh) cut -f1 - | awk '!(FNR%3)' -
$ strace -c ./sam_test_one_pipe.sh > /dev/null
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
98.72 0.256664 85554 3 1 wait4
0.45 0.001181 3 334 294 openat
0.29 0.000757 2 266 230 stat
<snip>
------ ----------- ----------- --------- --------- ----------------
100.00 0.259989 295 881 547 total
$ cat ./sam_test_no_pipe.sh
#!/usr/bin/env bash
< <(< <(./sam_test.sh) cut -f1 - ) awk '!(FNR%3)' -
$ strace -c ./sam_test_no_pipe.sh > /dev/null
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
39.43 0.002863 1431 2 1 wait4
19.68 0.001429 4 334 294 openat
14.87 0.001080 3 285 242 stat
10.00 0.000726 51 14 12 execve
<snip>
------ ----------- ----------- --------- --------- ----------------
100.00 0.007261 7 909 557 total
Output of test case (full):
$ cat ./sam_test_full_pipe.sh
#!/usr/bin/env bash
./sam_test.sh | cut -f1 - | awk '!(FNR%3)' -
$ strace -c ./sam_test_full_pipe.sh > /dev/null
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
99.22 0.643249 160812 4 1 wait4
0.30 0.001951 5 334 294 openat
0.21 0.001331 5 266 230 stat
0.04 0.000290 20 14 12 execve
0.04 0.000276 6 42 mmap
0.04 0.000229 76 3 clone
0.03 0.000178 3 49 4 close
0.02 0.000146 3 39 fstat
0.02 0.000109 9 12 mprotect
0.01 0.000080 5 16 read
0.01 0.000053 2 18 rt_sigprocmask
0.01 0.000052 3 16 rt_sigaction
0.01 0.000038 3 10 brk
0.01 0.000036 18 2 munmap
0.01 0.000034 5 6 2 access
0.00 0.000029 3 8 1 fcntl
0.00 0.000024 3 7 lseek
0.00 0.000019 4 4 3 ioctl
0.00 0.000019 9 2 pipe
0.00 0.000018 3 5 getuid
0.00 0.000018 3 5 getgid
0.00 0.000018 3 5 getegid
0.00 0.000017 3 5 geteuid
0.00 0.000013 4 3 dup2
0.00 0.000013 13 1 faccessat
0.00 0.000009 2 4 2 arch_prctl
0.00 0.000008 4 2 getpid
0.00 0.000008 4 2 prlimit64
0.00 0.000005 5 1 sysinfo
0.00 0.000004 4 1 write
0.00 0.000004 4 1 uname
0.00 0.000004 4 1 getppid
0.00 0.000003 3 1 getpgrp
0.00 0.000002 2 1 rt_sigreturn
------ ----------- ----------- --------- --------- ----------------
100.00 0.648287 728 890 549 total
$ cat ./sam_test_one_pipe.sh
#!/usr/bin/env bash
< <(./sam_test.sh) cut -f1 - | awk '!(FNR%3)' -
$ strace -c ./sam_test_one_pipe.sh > /dev/null
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
98.72 0.256664 85554 3 1 wait4
0.45 0.001181 3 334 294 openat
0.29 0.000757 2 266 230 stat
0.11 0.000281 20 14 12 execve
0.08 0.000220 5 42 mmap
0.06 0.000159 79 2 clone
0.05 0.000138 3 45 2 close
0.05 0.000125 3 39 fstat
0.03 0.000083 6 12 mprotect
0.02 0.000060 3 16 read
0.02 0.000054 3 16 rt_sigaction
0.02 0.000042 2 16 rt_sigprocmask
0.01 0.000038 6 6 2 access
0.01 0.000035 17 2 munmap
0.01 0.000027 2 10 brk
0.01 0.000019 3 5 getuid
0.01 0.000018 3 5 geteuid
0.01 0.000017 3 5 getgid
0.01 0.000017 3 5 getegid
0.00 0.000010 1 7 lseek
0.00 0.000009 2 4 3 ioctl
0.00 0.000008 4 2 getpid
0.00 0.000007 1 4 2 arch_prctl
0.00 0.000005 5 1 sysinfo
0.00 0.000004 4 1 uname
0.00 0.000003 3 1 getppid
0.00 0.000003 3 1 getpgrp
0.00 0.000003 1 2 prlimit64
0.00 0.000002 2 1 rt_sigreturn
0.00 0.000000 0 1 write
0.00 0.000000 0 1 pipe
0.00 0.000000 0 3 dup2
0.00 0.000000 0 8 1 fcntl
0.00 0.000000 0 1 faccessat
------ ----------- ----------- --------- --------- ----------------
100.00 0.259989 295 881 547 total
$ cat ./sam_test_no_pipe.sh
#!/usr/bin/env bash
< <(< <(./sam_test.sh) cut -f1 - ) awk '!(FNR%3)' -
$ strace -c ./sam_test_no_pipe.sh > /dev/null
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
39.43 0.002863 1431 2 1 wait4
19.68 0.001429 4 334 294 openat
14.87 0.001080 3 285 242 stat
10.00 0.000726 51 14 12 execve
2.67 0.000194 4 42 mmap
1.83 0.000133 3 39 fstat
1.67 0.000121 121 1 clone
1.58 0.000115 2 41 close
0.88 0.000064 6 10 2 access
0.87 0.000063 5 12 mprotect
0.73 0.000053 3 16 rt_sigaction
0.70 0.000051 4 12 rt_sigprocmask
0.66 0.000048 3 16 read
0.48 0.000035 3 10 brk
0.48 0.000035 3 9 getuid
0.44 0.000032 16 2 munmap
0.41 0.000030 3 8 1 fcntl
0.41 0.000030 3 9 geteuid
0.40 0.000029 3 9 getegid
0.34 0.000025 2 9 getgid
0.22 0.000016 5 3 dup2
0.21 0.000015 3 4 3 ioctl
0.19 0.000014 2 7 lseek
0.18 0.000013 13 1 faccessat
0.12 0.000009 2 4 2 arch_prctl
0.11 0.000008 4 2 prlimit64
0.08 0.000006 3 2 getpid
0.06 0.000004 4 1 write
0.06 0.000004 4 1 rt_sigreturn
0.06 0.000004 4 1 uname
0.06 0.000004 4 1 sysinfo
0.06 0.000004 4 1 getppid
0.06 0.000004 4 1 getpgrp
------ ----------- ----------- --------- --------- ----------------
100.00 0.007261 7 909 557 total
In the end I hacked together a small C program which directly filters the BAM file and also writes gzipped output -- with a lot of help from the htslib developers (htslib is the basis of samtools).
So piping is not needed any more. This solution is about 3-4 times faster than the solution with the C code above (from Stephan).
See here:
https://github.com/samtools/samtools/issues/1672
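The actual code is in that issue; just to illustrate the approach, a rough sketch could look like the following (this assumes the htslib sam_* API plus zlib for the output, omits all error handling, and sam_hdr_destroy needs htslib >= 1.10):
#include <stdio.h>
#include <zlib.h>
#include <htslib/sam.h>

/* Rough sketch only: read a BAM, skip secondary alignments (FLAG 0x100)
 * and write each read name to a gzipped output file. */
int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s in.bam out.gz\n", argv[0]);
        return 1;
    }
    samFile *in = sam_open(argv[1], "r");
    sam_hdr_t *hdr = sam_hdr_read(in);
    bam1_t *b = bam_init1();
    gzFile out = gzopen(argv[2], "w");

    while (sam_read1(in, hdr, b) >= 0) {
        if (b->core.flag & BAM_FSECONDARY)   /* same as -F 0x100 */
            continue;
        gzputs(out, bam_get_qname(b));       /* first column only */
        gzputc(out, '\n');
    }

    gzclose(out);
    bam_destroy1(b);
    sam_hdr_destroy(hdr);
    sam_close(in);
    return 0;
}
Such a program is linked against htslib and zlib (e.g. -lhts -lz).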
If you just need the first field, why not just
{m,n,g}awk NF=1 FS='\t'
In terms of performance, I don't have a tab-separated file handy, but I do have a 12.5M-row, 1.85 GB .txt file with plenty of multi-byte UTF-8 in it that is "="-separated:
rows = 12,494,275. | ascii+utf8 chars = 1,285,316,715. | bytes = 1,983,544,693.
- 4.44s mawk 2
- 4.95s mawk 1
- 10.48s gawk 5.1.1
- 40.07s nawk
Why some enjoy pushing for the slow awks is beyond me.
=
in0: 35.8MiB 0:00:00 [ 357MiB/s] [ 357MiB/s] [> ] 1% ETA 0:00:00
out9: 119MiB 0:00:04 [27.0MiB/s] [27.0MiB/s] [ <=> ]
in0: 1.85GiB 0:00:04 [ 428MiB/s] [ 428MiB/s] [======>] 100%
( pvE 0.1 in0 < "${m3t}" | mawk2 NF=1 FS==; )
4.34s user 0.45s system 107% cpu 4.439 total
1 52888940993baac8299b49ee2f5bdee7 stdin
=
in0: 1.85GiB 0:00:04 [ 384MiB/s] [ 384MiB/s] [=====>] 100%
out9: 119MiB 0:00:04 [24.2MiB/s] [24.2MiB/s] [ <=>]
( pvE 0.1 in0 < "${m3t}" | mawk NF=1 FS==; )
4.83s user 0.47s system 107% cpu 4.936 total
1 52888940993baac8299b49ee2f5bdee7 stdin
=
in0: 1.85GiB 0:00:10 [ 180MiB/s] [ 180MiB/s] [ ==>] 100%
out9: 119MiB 0:00:10 [11.4MiB/s] [11.4MiB/s] [ <=>]
( pvE 0.1 in0 < "${m3t}" | gawk NF=1 FS==; )
10.36s user 0.56s system 104% cpu 10.476 total
1 52888940993baac8299b49ee2f5bdee7 stdin
=
in0: 4.25MiB 0:00:00 [42.2MiB/s] [42.2MiB/s] [> ] 0% ETA 0:00:00
out9: 119MiB 0:00:40 [2.98MiB/s] [2.98MiB/s] [<=> ]
in0: 1.85GiB 0:00:40 [47.2MiB/s] [47.2MiB/s] [=====>] 100%
( pvE 0.1 in0 < "${m3t}" | nawk NF=1 FS==; )
39.79s user 0.88s system 101% cpu 40.068 total
1 52888940993baac8299b49ee2f5bdee7 stdin
But these pale in comparison with using the right FS to sweep everything after the first field away:
barely 1.95 secs
( pvE 0.1 in0 < "${m3t}" | mawk2 NF-- FS='=.*$'; )
1.83s user 0.42s system 115% cpu 1.951 total
1 52888940993baac8299b49ee2f5bdee7 stdin
By comparison, even GNU cut, which is a pure C binary, is slower:
( pvE 0.1 in0 < "${m3t}" | gcut -d= -f 1; )
2.53s user 0.50s system 113% cpu 2.674 total
1 52888940993baac8299b49ee2f5bdee7 stdin
You can save a tiny bit more (1.772 secs) using a more verbose approach:
( pvE 0.1 in0 < "${m3t}" | mawk2 '{ print $1 }' FS='=.*$'; )
1.64s user 0.42s system 116% cpu 1.772 total
1 52888940993baac8299b49ee2f5bdee7 stdin
Unfortunately, a complex FS really isn't gawk's forte, even after you give it a helping boost with the byte-level flag:
( pvE 0.1 in0 < "${m3t}" | gawk -F'=.+$' -be NF--; )
20.23s user 0.59s system 102% cpu 20.383 total
52888940993baac8299b49ee2f5bdee7 stdin
I need to get the start time of a process using C code in userspace.
The process will run as root, so I can fopen /proc/PID/stat.
I have seen implementations, e.g.:
start time of a process on linux
or
http://brokestream.com/procstat.c
But they are invalid. Why are they invalid? Because if the process's 2nd field (the command name) contains a space, e.g.:
[ilan#CentOS7286-64 tester]$ cat /proc/1077/stat
1077 (rs:main Q:Reg) S 1 1054 1054 0 -1 1077944384 21791 0 10 0 528 464 0 0 20 0 3 0 1056 321650688 1481 18446744073709551615 1 1 0 0 0 0 2146172671 16781830 1133601 18446744073709551615 0 0 -1 1 0 0 1 0 0 0 0 0 0 0 0 0 0
these solutions will not work.
Is there a better way of retrieving a process start time other than parsing the /proc/PID/stat results? I could use the following logic:
read a long - the first field is the pid
read chars, and stop reading only when hitting the closing ')' - the 2nd field is tcomm (filename of the executable)
read a char - the 3rd field is the process state
In Solaris, you simply read the result into a psinfo_t struct.
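For reference, the parsing logic described above looks roughly like this in C (the helper name is mine; field numbers follow proc(5)):
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

/* Sketch only: comm (field 2) is the only field that may contain spaces
 * and ')' characters, so skip past the LAST ')' and read the remaining
 * fields positionally.  On success fills *starttime (field 22, in clock
 * ticks since boot) and returns 0; returns -1 on error. */
int proc_starttime(pid_t pid, unsigned long long *starttime)
{
    char path[64], buf[4096], *p, state;
    FILE *f;
    size_t n;

    snprintf(path, sizeof path, "/proc/%d/stat", (int)pid);
    if ((f = fopen(path, "r")) == NULL)
        return -1;
    n = fread(buf, 1, sizeof buf - 1, f);
    fclose(f);
    buf[n] = '\0';

    if ((p = strrchr(buf, ')')) == NULL)     /* end of the comm field */
        return -1;
    p++;
    /* field 3 is the state, fields 4..21 are skipped, field 22 is starttime */
    if (sscanf(p, " %c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u %*u %*u"
                  " %*d %*d %*d %*d %*d %*d %llu", &state, starttime) != 2)
        return -1;
    return 0;
}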
You can simply use the stat(2) kernel call.
The creation time is not set by the proc file system, but you can use the modification time instead, because the modification time of a directory only changes when files are added to or removed from it. And since the contents of a directory in the proc filesystem only change if you replace a module of the running kernel, you can be pretty sure that the modification time is also the creation time.
Example:
$ stat -c %y /proc/1
2018-06-01 11:46:57.512000000 +0200
$ uptime -s
2018-06-01 11:46:57
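A minimal C sketch of this stat(2)-based approach (illustrative only):
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

/* Print the mtime of /proc/<pid> as an approximation of the
 * process creation time, as described above. */
int main(int argc, char **argv)
{
    char path[64], when[64];
    struct stat st;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    snprintf(path, sizeof path, "/proc/%s", argv[1]);
    if (stat(path, &st) != 0) {
        perror("stat");
        return 1;
    }
    strftime(when, sizeof when, "%Y-%m-%d %H:%M:%S", localtime(&st.st_mtime));
    printf("%s\n", when);
    return 0;
}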
NFS v4 with a fast network and a disk with average IOPS. Load increases sharply on large file transfers.
The problem seems to be IOPS.
The test case:
/etc/exports
server# /mnt/exports 192.168.6.0/24(rw,sync,no_subtree_check,no_root_squash,fsid=0)
server# /mnt/exports/nfs 192.168.6.0/24(rw,sync,no_subtree_check,no_root_squash)
client# mount -t nfs 192.168.6.131:/nfs /mnt/nfstest -vvv
(or client# mount -t nfs 192.168.6.131:/nfs /mnt/nfstest -o nfsvers=4,tcp,port=2049,async -vvv)
It works well with the 'sync' flag, but the transfer drops from 50 MB/s to 500 kB/s.
http://ubuntuforums.org/archive/index.php/t-1478413.html
That thread suggests reducing wsize to wsize=300 - a small improvement, but not the solution.
Simple test with dd:
client# dd if=/dev/zero bs=1M count=6000 |pv | dd of=/mnt/nfstest/delete_me
server# iotop
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
1863 be/4 root 0.00 B/s 14.17 M/s 0.00 % 21.14 % [nfsd]
1864 be/4 root 0.00 B/s 7.42 M/s 0.00 % 17.39 % [nfsd]
1858 be/4 root 0.00 B/s 6.32 M/s 0.00 % 13.09 % [nfsd]
1861 be/4 root 0.00 B/s 13.26 M/s 0.00 % 12.03 % [nfsd]
server# dstat -r --top-io-adv --top-io --top-bio --aio -l -n -m
--io/total- -------most-expensive-i/o-process------- ----most-expensive---- ----most-expensive---- async ---load-avg--- -NET/total- ------memory-usage-----
read writ|process pid read write cpu| i/o process | block i/o process | #aio| 1m 5m 15m | recv send| used buff cach free
10.9 81.4 |init [2] 1 5526B 20k0.0%|init [2] 5526B 20k|nfsd 10B 407k| 0 |2.92 1.01 0.54| 0 0 |29.3M 78.9M 212M 4184k
1.00 1196 |sshd: root#pts/0 1943 1227B1264B 0%|sshd: root#1227B 1264B|nfsd 0 15M| 0 |2.92 1.01 0.54| 44M 319k|29.1M 78.9M 212M 4444k
0 1365 |sshd: root#pts/0 1943 485B 528B 0%|sshd: root# 485B 528B|nfsd 0 16M| 0 |2.92 1.01 0.54| 51M 318k|29.5M 78.9M 212M 4708k
Do you know any way of limiting the load without big changes to the configuration?
I am considering limiting the network speed with wondershaper or iptables, though that is not nice since other traffic would be harmed as well.
Someone suggested cgroups - that may be worth trying - but it is still not my 'feng shui' - I would prefer to find a solution in the NFS config, since that is where the problem is; it would be nice to have an in-one-place solution.
If it were possible to increase the 'sync' speed to 10-20 MB/s, that would be enough for me.
I think I nailed it:
On the server, change the disk scheduler:
for i in /sys/block/sd*/queue/scheduler ; do echo deadline > $i ; done
additionally (a small improvement - find the best value for you):
/etc/default/nfs-kernel-server
# Number of servers to start up
-RPCNFSDCOUNT=8
+RPCNFSDCOUNT=2
restart services
/etc/init.d/rpcbind restart
/etc/init.d/nfs-kernel-server restart
ps:
My current configs
server:
/etc/exports
/mnt/exports 192.168.6.0/24(rw,no_subtree_check,no_root_squash,fsid=0)
/mnt/exports/nfs 192.168.6.0/24(rw,no_subtree_check,no_root_squash)
client:
/etc/fstab
192.168.6.131:/nfs /mnt/nfstest nfs rsize=32768,wsize=32768,tcp,port=2049 0 0
First of all, I'm running Mac OS X 10.7.1. I've installed everything properly - Xcode 4 and all the libraries - to work with the C language.
I'm having trouble running the gprof command in the shell. I'll explain step by step what I'm doing and the output I'm receiving.
Step 1:
~ roger$ cd Path/to/my/workspace
~ roger$ ls
Output (Step 1):
queue.c queue.h testqueue.c
Step 2:
~ roger$ gcc -c -g -pg queue.c
~ roger$ ls
Output (Step 2):
queue.c queue.h queue.o testqueue.c
Step 3:
~ roger$ gcc -o testqueue -g -pg queue.o testqueue.c
~ roger$ ls
Output (Step 3):
queue.c queue.h queue.o testqueue testqueue.c
Step 4:
~ roger$ ./testqueue
~ roger$ ls
Output (Step 4):
enqueue element 16807
head=0,tail=1
enqueue element 282475249
head=0,tail=2
enqueue element 1622650073
head=0,tail=3
enqueue element 984943658
head=0,tail=4
enqueue element 1144108930
head=0,tail=5
enqueue element 470211272
head=0,tail=6
enqueue element 101027544
head=0,tail=7
enqueue element 1457850878
head=0,tail=8
enqueue element 1458777923
head=0,tail=9
enqueue element 2007237709
head=0,tail=10
queue is full
dequeue element 16807
dequeue element 282475249
dequeue element 1622650073
dequeue element 984943658
dequeue element 1144108930
dequeue element 470211272
dequeue element 101027544
dequeue element 1457850878
dequeue element 1458777923
dequeue element 2007237709
queue is empty
gmon.out queue.h testqueue
queue.c queue.o testqueue.c
Step 5:
~ roger$ gprof -b testqueue gmon.out > out.txt
~ roger$ nano out.txt
Output (Step 5):
GNU nano 2.0.6 File: out.txt
granularity: each sample hit covers 4 byte(s) no time propagated
called/total parents
index %time self descendents called+self name index
called/total children
^L
granularity: each sample hit covers 4 byte(s) no time accumulated
% cumulative self self total
time seconds seconds calls ms/call ms/call name
^L
Index by function name
Finally, the output file should show something like this:
% cumulative self self total
time seconds seconds calls ms/call ms/call name
33.34 0.02 0.02 7208 0.00 0.00 open
16.67 0.03 0.01 244 0.04 0.12 offtime
16.67 0.04 0.01 8 1.25 1.25 memccpy
16.67 0.05 0.01 7 1.43 1.43 write
16.67 0.06 0.01 mcount
0.00 0.06 0.00 236 0.00 0.00 tzset
0.00 0.06 0.00 192 0.00 0.00 tolower
0.00 0.06 0.00 47 0.00 0.00 strlen
0.00 0.06 0.00 45 0.00 0.00 strchr
0.00 0.06 0.00 1 0.00 50.00 main
0.00 0.06 0.00 1 0.00 0.00 memcpy
0.00 0.06 0.00 1 0.00 10.11 print
0.00 0.06 0.00 1 0.00 0.00 profil
0.00 0.06 0.00 1 0.00 50.00 report
...
But mine shows blank fields.
I searched here and found nothing helpful at all. I googled it, but found the same thing.
I would be very grateful if anyone could help me, please.
gprof does not work on OS X. The system call it needs was removed several versions ago. It's not clear why the utility still ships. The alternatives are to use dtrace and/or sample.
There is no need to give gmon.out in the last line; use gprof -b testqueue > out.txt instead.
See http://www.network-theory.co.uk/docs/gccintro/gccintro_80.html for further reference.