Memory allocation of a program without any allocation syscalls - C

I am currently working on a C program on Debian. The program allocates several gigabytes of memory at startup; the problem is that it keeps allocating memory even after startup, although there is no malloc, calloc, or similar call in its main loop. I am watching the memory usage via the RES column in htop.
I then decided to check the memory syscalls of the program with strace, attaching it after startup using this command:
strace -c -f -e trace=memory -p $(pidof myprogram)
Here is the result:
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
100.00    0.000311           0     10392           mprotect
------ ----------- ----------- --------- --------- ----------------
100.00    0.000311                 10392           total
So it is clear that there are no brk or mmap syscalls that could allocate memory.
Here is the list of all syscalls:
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 33.00    1.446748        6156       235        67 futex
 32.41    1.420658        8456       168           poll
 17.35    0.760549          31     24459           nanosleep
 16.24    0.712000       44500        16           select
  1.00    0.044000        7333         6         2 restart_syscall
  0.00    0.000000           0        80        40 read
  0.00    0.000000           0        40           write
  0.00    0.000000           0       184           mprotect
  0.00    0.000000           0        33           rt_sigprocmask
  0.00    0.000000           0        21           sendto
  0.00    0.000000           0        47           sendmsg
  0.00    0.000000           0       138        44 recvmsg
  0.00    0.000000           0         7           gettid
------ ----------- ----------- --------- --------- ----------------
100.00    4.383955                 25434       153 total
Do you have any idea why memory is still being allocated?
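One effect worth knowing about here: Linux allocates physical pages lazily, so RES can keep growing without any further brk or mmap syscalls at all. A region that was mmap'ed once at startup only counts toward RES as its pages are first touched. A minimal sketch of the effect (the 4 GiB size, the step, and the sleep are arbitrary choices for illustration):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION (4UL << 30)  /* reserve 4 GiB up front */
#define STEP   (64UL << 20) /* touch 64 MiB per second */

int main(void) {
    /* One mmap at startup: VIRT jumps immediately, RES does not. */
    unsigned char *p = mmap(NULL, REGION, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* No allocation syscalls from here on, yet RES in htop grows
       steadily as the kernel faults pages in for these writes. */
    for (size_t off = 0; off < REGION; off += STEP) {
        memset(p + off, 1, STEP);
        sleep(1);
    }
    return 0;
}

Watching this in htop shows RES climbing for about a minute, while an strace -f -e trace=memory attached after startup reports nothing, matching the behaviour described above.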

Related

Increase speed of reading gz file in C

I have written a small C program. It reads some gzipped files, does some filtering, and then writes its output to gzipped files again.
I compile with gcc -O3 -Ofast; otherwise the setup is pretty standard.
If I do strace -c on my executable I get:
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
46.01 0.077081 0 400582 read
42.73 0.071579 4771 15 munmap
9.34 0.015647 0 110415 brk
1.01 0.001688 32 52 openat
0.45 0.000746 3 228 mmap
0.20 0.000327 4 70 mprotect
0.15 0.000254 0 1128 write
0.06 0.000100 2 50 fstat
0.05 0.000087 1 52 close
0.00 0.000006 6 1 getrandom
0.00 0.000005 2 2 rt_sigaction
0.00 0.000004 2 2 1 arch_prctl
0.00 0.000003 3 1 1 stat
0.00 0.000003 1 2 lseek
0.00 0.000002 2 1 rt_sigprocmask
0.00 0.000002 2 1 prlimit64
0.00 0.000000 0 8 pread64
0.00 0.000000 0 1 1 access
0.00 0.000000 0 1 execve
0.00 0.000000 0 2 fdatasync
0.00 0.000000 0 1 set_tid_address
0.00 0.000000 0 1 set_robust_list
------ ----------- ----------- --------- --------- ----------------
100.00 0.167534 512616 3 total
So my program is quite busy reading the file. Now I am not sure whether I can make it any faster. The relevant code is the following:
while (gzgets(file_pointer, line, LL) != Z_NULL) {
    linkage = strtok(line, "\t");        /* first field (discarded) */
    linkage = strtok(NULL, "\t");        /* second field */
    linkage[strcspn(linkage, "\n")] = 0; /* strip the trailing newline */
    add_linkage_entry(id_cnt, linkage);
    id_cnt++;
}
Do you see room for improvement here? Is it possible to intervene manually with gzread, or is gzgets doing a good job of not reading char by char?
Any other advice? (Are the errors in the strace output worrisome?)
EDIT:
add_linkage_entry adds an entry to a uthash hash table (https://troydhanson.github.io/uthash/)
I don't think that gzgets (and the related read system calls) are the bottleneck here.
The number of read calls is small for data that compresses well, and it will increase for data that has more entropy (zlib has to request uncompressed data from disk more frequently then). E.g., for text data generated from urandom (via
base64 /dev/urandom | tr -- '+HXA' '\t' | head -n 10000000 | gzip
) I get about 70000 read calls for 10M lines, equalling about 140 lines/call. This nicely matches your experience of 100-1000 lines per call.
What is more, the CPU time for reading those lines is still negligible (about 2.5M lines/s, including the strtok calls). Highly compressed data requires about 40 times fewer read calls and can be read about 4 times as fast -- but this factor of 4 can also be seen with raw decompression via gzip -d on the command line.
It thus appears that your function add_linkage_entry is the bottleneck here. In particular, the large number of brk calls looks unusual.
The errors in the strace output look harmless.
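If you nevertheless want to experiment with the read side, zlib lets you enlarge its internal buffer with gzbuffer() (available since zlib 1.2.4), which batches the underlying read() calls further. A minimal sketch; the 256 KiB size is an arbitrary starting point for tuning:

#include <stdio.h>
#include <zlib.h>

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s file.gz\n", argv[0]);
        return 1;
    }
    gzFile f = gzopen(argv[1], "r");
    if (f == NULL) {
        fprintf(stderr, "cannot open %s\n", argv[1]);
        return 1;
    }
    /* Must be set before the first read: 256 KiB instead of the
       default 8 KiB means fewer, larger read() syscalls. */
    gzbuffer(f, 256 * 1024);

    char line[4096];
    long n = 0;
    while (gzgets(f, line, sizeof line) != Z_NULL)
        n++;    /* real code would tokenize here as in the question */
    printf("%ld lines\n", n);
    gzclose(f);
    return 0;
}

Given the numbers above, though, any gain from this should be small compared to addressing add_linkage_entry.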

Faster drop-in replacement for bash `cut` for specific application

I have a very large tab-separated file. It comes from a binary file (BAM) that is streamed as text by the tool samtools (which is very fast and not the bottleneck). Now I want to output only the content up to the first tab.
In my current piped command cut is the bottleneck:
samtools view -# 15 -F 0x100 file.bam | cut -f 1 | pigz > out.gz
I tried using awk '{print $1}', but this is not sufficiently faster. I also tried using parallel in combination with cut, but this does not increase the speed much either.
I guess it would be better to have a tool which just outputs the string up to the first tab and then skips the rest of the line entirely.
Do you have a suggestion for a tool which is faster for my purpose? Ideally one would write a small C program, I guess, but my C is a bit rusty, so that would take too long for me.
You are interested in a small C program that just outputs each line from stdin up to the first tab.
In C you can do this easily with something like this:
#include <stdio.h>
#include <string.h>

#define MAX_LINE_LENGTH 1024

int main(void) {
    char buf[MAX_LINE_LENGTH];

    while (fgets(buf, sizeof(buf), stdin) != NULL) {
        buf[strcspn(buf, "\n\t")] = '\0';
        fputs(buf, stdout);
        fputc('\n', stdout);
    }
    return 0;
}
It simply reads lines from stdin with fgets. Each string is terminated with a NUL byte at the first tab \t; the same is done at a \n, so that there are no extra line feeds in the output in case an input line contains no tab.
Whether this is much faster in your use case I cannot say, but it should at least provide a starting point for trying out your idea.
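One caveat: fgets silently splits lines longer than MAX_LINE_LENGTH, which would produce bogus output fields. If your lines can exceed the buffer, a variant based on POSIX getline() avoids that limit (a sketch, not benchmarked against the version above):

#define _POSIX_C_SOURCE 200809L /* for getline */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *line = NULL;
    size_t cap = 0;

    /* getline grows the buffer as needed, so long lines stay intact. */
    while (getline(&line, &cap, stdin) != -1) {
        line[strcspn(line, "\t\n")] = '\0'; /* cut at first tab or newline */
        puts(line);                         /* puts appends the newline */
    }
    free(line);
    return 0;
}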
You might give other implementations of AWK a try. According to a test done in 2009¹, "Don’t MAWK AWK – the fastest and most elegant big data munging language!", nawk was found to be faster than gawk, and mawk faster than nawk. You would need to run the test with your own data to find out whether another implementation gives a noticeable boost.
¹ So versions available in 2022 might give different results.
In the question the OP mentioned that awk '{print $1}' is not sufficiently faster than cut; in my testing I'm seeing awk run about twice as fast as cut, so I'm not sure how the OP is using awk ... or if I'm missing something (basic) with my testing ...
The OP has mentioned a 'large' tab-delimited file with up to 400 characters per line; we'll simulate this with the following code, which generates a ~400MB file:
$ cat sam_out.awk
awk '
BEGIN { OFS="\t"; x="1234567890"
for (i=1;i<=40;i++) filler=filler x
for (i=1;i<=1000000;i++) print x,filler
}'
$ . ./sam_out.awk | wc
1000000 2000000 412000000
Test calls:
$ cat sam_tests.sh
echo "######### pipe to cut"
time . ./sam_out.awk | cut -f1 - > /dev/null
echo "######### pipe to awk"
time . ./sam_out.awk | awk '{print $1}' > /dev/null
echo "######### process-sub to cut"
time cut -f1 <(. ./sam_out.awk) > /dev/null
echo "######### process-sub to awk"
time awk '{print $1}' <(. ./sam_out.awk) > /dev/null
NOTE: also ran all 4 tests with output written to 4 distinct output files; diff of the 4 output files showed all were the same (wc: 1000000 1000000 11000000; head -1: 1234567890)
Results of running the tests:
######### pipe to cut
real 0m1.177s
user 0m0.205s
sys 0m1.454s
######### pipe to awk
real 0m0.582s
user 0m0.166s
sys 0m0.759s
######### process-sub to cut
real 0m1.265s
user 0m0.351s
sys 0m1.746s
######### process-sub to awk
real 0m0.655s
user 0m0.097s
sys 0m0.968s
NOTES:
test system: Ubuntu 20.04, cut (GNU coreutils 8.30), awk (GNU Awk 5.0.1)
an earlier version of this answer showed awk running 14x-15x faster than cut; that system: cygwin 3.3.5, cut (GNU coreutils 8.26), awk (GNU Awk 5.1.1)
You might consider process substitution instead of a pipeline.
$ < <( < <(samtools view -# 15 -F 0x100 file.bam) cut -f1 ) pigz
Note: I'm using process substitution to generate stdin and avoid using another FIFO. This seems to be much faster.
I've written a simple test script sam_test.sh that generates some output:
#!/usr/bin/env bash
echo {1..10000} | awk 'BEGIN{OFS="\t"}{$1=$1;for(i=1;i<=1000;++i) print i,$0}'
and compared the output of the following commands:
$ ./sam_test.sh | cut -f1 | awk '!(FNR%3)'
$ < <(./sam_test.sh) cut -f1 | awk '!(FNR%3)'
$ < <( < <(./sam_test.sh) cut -f1 ) awk '!(FNR%3)'
The last of the three cases is significantly faster in 'runtime'. Using strace -c, we can see that each pipeline adds a significant number of wait4 syscalls. The final version is then also significantly faster (factor 700 in the above case).
Output of test case (short):
$ cat ./sam_test_full_pipe.sh
#!/usr/bin/env bash
./sam_test.sh | cut -f1 - | awk '!(FNR%3)' -
$ strace -c ./sam_test_full_pipe.sh > /dev/null
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
99.22 0.643249 160812 4 1 wait4
0.30 0.001951 5 334 294 openat
0.21 0.001331 5 266 230 stat
0.04 0.000290 20 14 12 execve
<snip>
------ ----------- ----------- --------- --------- ----------------
100.00 0.648287 728 890 549 total
$ cat ./sam_test_one_pipe.sh
#!/usr/bin/env bash
< <(./sam_test.sh) cut -f1 - | awk '!(FNR%3)' -
$ strace -c ./sam_test_one_pipe.sh > /dev/null
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
98.72 0.256664 85554 3 1 wait4
0.45 0.001181 3 334 294 openat
0.29 0.000757 2 266 230 stat
<snip>
------ ----------- ----------- --------- --------- ----------------
100.00 0.259989 295 881 547 total
$ cat ./sam_test_no_pipe.sh
#!/usr/bin/env bash
< <(< <(./sam_test.sh) cut -f1 - ) awk '!(FNR%3)' -
$ strace -c ./sam_test_no_pipe.sh > /dev/null
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
39.43 0.002863 1431 2 1 wait4
19.68 0.001429 4 334 294 openat
14.87 0.001080 3 285 242 stat
10.00 0.000726 51 14 12 execve
<snip>
------ ----------- ----------- --------- --------- ----------------
100.00 0.007261 7 909 557 total
Output of test case (full):
$ cat ./sam_test_full_pipe.sh
#!/usr/bin/env bash
./sam_test.sh | cut -f1 - | awk '!(FNR%3)' -
$ strace -c ./sam_test_full_pipe.sh > /dev/null
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
99.22 0.643249 160812 4 1 wait4
0.30 0.001951 5 334 294 openat
0.21 0.001331 5 266 230 stat
0.04 0.000290 20 14 12 execve
0.04 0.000276 6 42 mmap
0.04 0.000229 76 3 clone
0.03 0.000178 3 49 4 close
0.02 0.000146 3 39 fstat
0.02 0.000109 9 12 mprotect
0.01 0.000080 5 16 read
0.01 0.000053 2 18 rt_sigprocmask
0.01 0.000052 3 16 rt_sigaction
0.01 0.000038 3 10 brk
0.01 0.000036 18 2 munmap
0.01 0.000034 5 6 2 access
0.00 0.000029 3 8 1 fcntl
0.00 0.000024 3 7 lseek
0.00 0.000019 4 4 3 ioctl
0.00 0.000019 9 2 pipe
0.00 0.000018 3 5 getuid
0.00 0.000018 3 5 getgid
0.00 0.000018 3 5 getegid
0.00 0.000017 3 5 geteuid
0.00 0.000013 4 3 dup2
0.00 0.000013 13 1 faccessat
0.00 0.000009 2 4 2 arch_prctl
0.00 0.000008 4 2 getpid
0.00 0.000008 4 2 prlimit64
0.00 0.000005 5 1 sysinfo
0.00 0.000004 4 1 write
0.00 0.000004 4 1 uname
0.00 0.000004 4 1 getppid
0.00 0.000003 3 1 getpgrp
0.00 0.000002 2 1 rt_sigreturn
------ ----------- ----------- --------- --------- ----------------
100.00 0.648287 728 890 549 total
$ cat ./sam_test_one_pipe.sh
#!/usr/bin/env bash
< <(./sam_test.sh) cut -f1 - | awk '!(FNR%3)' -
$ strace -c ./sam_test_one_pipe.sh > /dev/null
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
98.72 0.256664 85554 3 1 wait4
0.45 0.001181 3 334 294 openat
0.29 0.000757 2 266 230 stat
0.11 0.000281 20 14 12 execve
0.08 0.000220 5 42 mmap
0.06 0.000159 79 2 clone
0.05 0.000138 3 45 2 close
0.05 0.000125 3 39 fstat
0.03 0.000083 6 12 mprotect
0.02 0.000060 3 16 read
0.02 0.000054 3 16 rt_sigaction
0.02 0.000042 2 16 rt_sigprocmask
0.01 0.000038 6 6 2 access
0.01 0.000035 17 2 munmap
0.01 0.000027 2 10 brk
0.01 0.000019 3 5 getuid
0.01 0.000018 3 5 geteuid
0.01 0.000017 3 5 getgid
0.01 0.000017 3 5 getegid
0.00 0.000010 1 7 lseek
0.00 0.000009 2 4 3 ioctl
0.00 0.000008 4 2 getpid
0.00 0.000007 1 4 2 arch_prctl
0.00 0.000005 5 1 sysinfo
0.00 0.000004 4 1 uname
0.00 0.000003 3 1 getppid
0.00 0.000003 3 1 getpgrp
0.00 0.000003 1 2 prlimit64
0.00 0.000002 2 1 rt_sigreturn
0.00 0.000000 0 1 write
0.00 0.000000 0 1 pipe
0.00 0.000000 0 3 dup2
0.00 0.000000 0 8 1 fcntl
0.00 0.000000 0 1 faccessat
------ ----------- ----------- --------- --------- ----------------
100.00 0.259989 295 881 547 total
$ cat ./sam_test_no_pipe.sh
#!/usr/bin/env bash
< <(< <(./sam_test.sh) cut -f1 - ) awk '!(FNR%3)' -
$ strace -c ./sam_test_no_pipe.sh > /dev/null
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
39.43 0.002863 1431 2 1 wait4
19.68 0.001429 4 334 294 openat
14.87 0.001080 3 285 242 stat
10.00 0.000726 51 14 12 execve
2.67 0.000194 4 42 mmap
1.83 0.000133 3 39 fstat
1.67 0.000121 121 1 clone
1.58 0.000115 2 41 close
0.88 0.000064 6 10 2 access
0.87 0.000063 5 12 mprotect
0.73 0.000053 3 16 rt_sigaction
0.70 0.000051 4 12 rt_sigprocmask
0.66 0.000048 3 16 read
0.48 0.000035 3 10 brk
0.48 0.000035 3 9 getuid
0.44 0.000032 16 2 munmap
0.41 0.000030 3 8 1 fcntl
0.41 0.000030 3 9 geteuid
0.40 0.000029 3 9 getegid
0.34 0.000025 2 9 getgid
0.22 0.000016 5 3 dup2
0.21 0.000015 3 4 3 ioctl
0.19 0.000014 2 7 lseek
0.18 0.000013 13 1 faccessat
0.12 0.000009 2 4 2 arch_prctl
0.11 0.000008 4 2 prlimit64
0.08 0.000006 3 2 getpid
0.06 0.000004 4 1 write
0.06 0.000004 4 1 rt_sigreturn
0.06 0.000004 4 1 uname
0.06 0.000004 4 1 sysinfo
0.06 0.000004 4 1 getppid
0.06 0.000004 4 1 getpgrp
------ ----------- ----------- --------- --------- ----------------
100.00 0.007261 7 909 557 total
In the end I hacked up a small C program which filters the BAM file directly and also writes gzip output -- with a lot of help from the htslib developers (htslib is the basis for samtools).
So piping is not needed any more. This solution is about 3-4 times faster than the solution with the C code above (from Stephan).
See here:
https://github.com/samtools/samtools/issues/1672
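For anyone curious what that looks like, the core of an htslib-based filter is roughly the sketch below. This is a simplified reconstruction under the assumption that only the read name (the first column of samtools view output) is needed; the real program, with gzip output and proper error handling, is in the linked issue. Link with -lhts.

#include <stdio.h>
#include <htslib/sam.h>

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s file.bam\n", argv[0]);
        return 1;
    }
    samFile *in = sam_open(argv[1], "r");
    sam_hdr_t *hdr = sam_hdr_read(in);
    bam1_t *rec = bam_init1();

    /* Decode BAM records directly: no samtools | cut | pigz pipeline. */
    while (sam_read1(in, hdr, rec) >= 0) {
        if (rec->core.flag & BAM_FSECONDARY) /* equivalent of -F 0x100 */
            continue;
        puts(bam_get_qname(rec)); /* the read name, i.e. the first column */
    }

    bam_destroy1(rec);
    sam_hdr_destroy(hdr);
    sam_close(in);
    return 0;
}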
If you just need the first field, why not just
{m,n,g}awk NF=1 FS='\t'
Assigning NF=1 truncates the record to its first field and rebuilds it; since the assignment evaluates true, awk prints the result.
In terms of performance, I don't have a tab file handy, but I do have a 12.5mn-row, 1.85 GB .txt with plenty of multi-byte UTF-8 in it that's "="-separated:
rows = 12,494,275. | ascii+utf8 chars = 1,285,316,715. | bytes = 1,983,544,693.
- 4.44s mawk 2
- 4.95s mawk 1
- 10.48s gawk 5.1.1
- 40.07s nawk
Why some enjoy pushing for the slow awks is beyond me.
=
in0: 35.8MiB 0:00:00 [ 357MiB/s] [ 357MiB/s] [> ] 1% ETA 0:00:00
out9: 119MiB 0:00:04 [27.0MiB/s] [27.0MiB/s] [ <=> ]
in0: 1.85GiB 0:00:04 [ 428MiB/s] [ 428MiB/s] [======>] 100%
( pvE 0.1 in0 < "${m3t}" | mawk2 NF=1 FS==; )
4.34s user 0.45s system 107% cpu 4.439 total
1 52888940993baac8299b49ee2f5bdee7 stdin
=
in0: 1.85GiB 0:00:04 [ 384MiB/s] [ 384MiB/s] [=====>] 100%
out9: 119MiB 0:00:04 [24.2MiB/s] [24.2MiB/s] [ <=>]
( pvE 0.1 in0 < "${m3t}" | mawk NF=1 FS==; )
4.83s user 0.47s system 107% cpu 4.936 total
1 52888940993baac8299b49ee2f5bdee7 stdin
=
in0: 1.85GiB 0:00:10 [ 180MiB/s] [ 180MiB/s] [ ==>] 100%
out9: 119MiB 0:00:10 [11.4MiB/s] [11.4MiB/s] [ <=>]
( pvE 0.1 in0 < "${m3t}" | gawk NF=1 FS==; )
10.36s user 0.56s system 104% cpu 10.476 total
1 52888940993baac8299b49ee2f5bdee7 stdin
=
in0: 4.25MiB 0:00:00 [42.2MiB/s] [42.2MiB/s] [> ] 0% ETA 0:00:00
out9: 119MiB 0:00:40 [2.98MiB/s] [2.98MiB/s] [<=> ]
in0: 1.85GiB 0:00:40 [47.2MiB/s] [47.2MiB/s] [=====>] 100%
( pvE 0.1 in0 < "${m3t}" | nawk NF=1 FS==; )
39.79s user 0.88s system 101% cpu 40.068 total
1 52888940993baac8299b49ee2f5bdee7 stdin
But these pale in comparison to using the right FS to throw everything after the first field away:
barely 1.95 secs
( pvE 0.1 in0 < "${m3t}" | mawk2 NF-- FS='=.*$'; )
1.83s user 0.42s system 115% cpu 1.951 total
1 52888940993baac8299b49ee2f5bdee7 stdin
By comparison, even GNU cut, which is a pure-C binary, is slower:
( pvE 0.1 in0 < "${m3t}" | gcut -d= -f 1; )
2.53s user 0.50s system 113% cpu 2.674 total
1 52888940993baac8299b49ee2f5bdee7 stdin
You can save a tiny bit more (1.772 secs) using a more verbose approach:
( pvE 0.1 in0 < "${m3t}" | mawk2 '{ print $1 }' FS='=.*$'; )
1.64s user 0.42s system 116% cpu 1.772 total
1 52888940993baac8299b49ee2f5bdee7 stdin
Unfortunately, a complex FS really isn't gawk's forte, even after you give it a helping boost with the byte-level flag:
( pvE 0.1 in0 < "${m3t}" | gawk -F'=.+$' -be NF--; )
20.23s user 0.59s system 102% cpu 20.383 total
52888940993baac8299b49ee2f5bdee7 stdin

How do I correctly add a chain ID to my pdb file?

I am trying to conduct some analysis on my single-chain PDB file (766 residues long), but the analysis requires a chain ID, and currently there isn't one.
Here is a snippet of the pdb file:
ATOM      1  N   MET     1     -69.269  78.953 -91.441  1.00  0.00           N
ATOM      2  CA  MET     1     -69.264  78.650 -92.891  1.00  0.00           C
ATOM      4  C   MET     1     -69.371  79.939 -93.633  1.00  0.00           C
ATOM      5  O   MET     1     -68.379  80.649 -93.799  1.00  0.00           O
ATOM      3  CB  MET     1     -70.475  77.774 -93.251  1.00  0.00           C
ATOM      6  CG  MET     1     -70.505  76.455 -92.477  1.00  0.00           C
ATOM      7  SD  MET     1     -69.115  75.332 -92.806  1.00  0.00           S
ATOM      8  CE  MET     1     -69.473  74.270 -91.377  1.00  0.00           C
ATOM      9  N   ASP     2     -70.583  80.284 -94.111  1.00  0.00           N
ATOM     10  CA  ASP     2     -70.688  81.539 -94.789  1.00  0.00           C
ATOM     12  C   ASP     2     -70.661  82.602 -93.737  1.00  0.00           C
ATOM     13  O   ASP     2     -71.088  82.377 -92.606  1.00  0.00           O
ATOM     11  CB  ASP     2     -71.963  81.733 -95.626  1.00  0.00           C
ATOM     14  CG  ASP     2     -71.691  82.908 -96.557  1.00  0.00           C
ATOM     15  OD1 ASP     2     -70.569  82.953 -97.130  1.00  0.00           O
ATOM     16  OD2 ASP     2     -72.598  83.768 -96.717  1.00  0.00           O1-
ATOM     17  N   HIS     3     -70.129  83.791 -94.077  1.00  0.00           N
ATOM     18  CA  HIS     3     -70.045  84.846 -93.110  1.00  0.00           C
ATOM     20  C   HIS     3     -71.342  85.581 -93.094  1.00  0.00           C
ATOM     21  O   HIS     3     -72.113  85.574 -94.052  1.00  0.00           O
ATOM     19  CB  HIS     3     -68.925  85.865 -93.404  1.00  0.00           C
ATOM     23  CG  HIS     3     -68.749  86.908 -92.336  1.00  0.00           C
ATOM     25  CD2 HIS     3     -67.998  86.879 -91.200  1.00  0.00           C
ATOM     22  ND1 HIS     3     -69.357  88.144 -92.351  1.00  0.00           N
ATOM     26  CE1 HIS     3     -68.947  88.797 -91.234  1.00  0.00           C
ATOM     24  NE2 HIS     3     -68.121  88.068 -90.504  1.00  0.00           N
What's the best way for me to label the chain as chain A?
We need to read the file line by line and put a chain ID into column 22 of every line that begins with ATOM. Assuming the file is called myfile.pdb, we want to replace the blank that sits 17 characters after ATOM (i.e., column 22) with the letter A. This can be accomplished with a relatively simple sed command:
sed 's/^\(ATOM.\{17\}\) /\1A/' myfile.pdb > newfile.pdb
Hope this is helpful!

Output a for loop into an array

I need the %idle values from the 4th column of the output below to be put into an array, and then I need to take the average of them. Below is the lparstat output of an AIX system.
$ lparstat 2 10
System configuration: type=Shared mode=Uncapped smt=4 lcpu=16 mem=8192MB psize=16 ent=0.20
%user %sys %wait %idle physc %entc lbusy app vcsw phint %nsp %utcyc
----- ----- ------ ------ ----- ----- ------ --- ----- ----- ----- ------
2.6 1.8 0.0 95.5 0.02 9.5 0.0 5.05 270 0 101 1.42
2.8 1.6 0.0 95.6 0.02 9.9 1.9 5.38 258 0 101 1.42
0.5 1.4 0.0 98.1 0.01 5.5 2.9 5.17 265 0 101 1.40
2.8 1.3 0.0 95.8 0.02 8.9 0.0 5.37 255 0 101 1.42
2.8 2.0 0.0 95.2 0.02 10.8 1.9 4.49 264 0 101 1.42
4.2 1.7 0.0 94.1 0.02 12.2 0.0 3.66 257 0 101 1.42
0.5 1.5 0.0 98.0 0.01 6.3 1.9 3.35 267 0 101 1.38
3.1 2.0 0.0 94.9 0.02 12.1 2.9 3.07 367 0 101 1.41
2.3 2.2 0.0 95.5 0.02 9.8 0.0 3.40 259 0 101 1.42
25.1 25.5 0.0 49.4 0.18 89.6 2.6 2.12 395 0 101 1.44
I have made a script like this, but I need to press Enter to get the output:
$ for i in ` lparstat 2 10 | tail -10 | awk '{print $4}'`
> do
> read arr[$i]
> echo arr[$i]
> done
arr[94.0]
arr[97.7]
arr[94.9]
arr[91.0]
arr[98.1]
arr[97.7]
arr[93.0]
arr[94.8]
arr[97.9]
arr[89.2]
The reason you have to press Enter is that read takes its input from the terminal, not from the command in the loop; i already holds each value, so the read is unnecessary. Your script only needs a small improvement to calculate the average, and you can do that inside awk right away:
lparstat 2 10 | tail -n 10 | awk '{ sum += $4 } END { print sum / NR }'
The tail -n 10 takes the last 10 lines.
{ sum += $4 } is executed for each line; it sums the values in the 4th column.
The END block then executes after the whole input has been read, and { print sum / NR } prints the average. NR is the "Number of Records"; one record is one line, so it is the number of lines.
Notes:
Backticks ` are discouraged; the modern $( ... ) syntax is much preferred.
for i in `cmd` (or, more commonly, for i in $(...)) is a common antipattern in bash. Use while read -r line when reading lines from a command, as in cmd | while read -r line; do echo "$line"; done, or in bash while read -r line; do echo "$line"; done < <(cmd). To read lines into an array, bash also offers mapfile -t arr < <(cmd).

sbrk system call in Unix

I have studied that malloc uses the sbrk system call, but someone says sbrk is deprecated and that nowadays malloc uses the mmap2 system call to allocate memory. So, are there any commands (like ls, cat, grep, sed) that use the sbrk system call? For example:
mohanraj@ltsp63:~/Development/chap8$ strace -c ls
a.out files flush.c fopen.c ld.c lld.c malloc.c opendir1.c t2.c t3.c t.c test.c
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
-nan 0.000000 0 12 read
-nan 0.000000 0 1 write
-nan 0.000000 0 13 open
-nan 0.000000 0 16 close
-nan 0.000000 0 1 execve
-nan 0.000000 0 1 time
-nan 0.000000 0 9 8 access
-nan 0.000000 0 3 brk
-nan 0.000000 0 3 ioctl
-nan 0.000000 0 1 readlink
-nan 0.000000 0 5 munmap
-nan 0.000000 0 1 uname
-nan 0.000000 0 11 mprotect
-nan 0.000000 0 1 _llseek
-nan 0.000000 0 1 getsid
-nan 0.000000 0 2 rt_sigaction
-nan 0.000000 0 1 rt_sigprocmask
-nan 0.000000 0 1 getcwd
-nan 0.000000 0 1 getrlimit
-nan 0.000000 0 28 mmap2
-nan 0.000000 0 1 stat64
-nan 0.000000 0 16 fstat64
-nan 0.000000 0 1 getuid32
-nan 0.000000 0 2 getdents64
-nan 0.000000 0 1 1 futex
-nan 0.000000 0 1 set_thread_area
-nan 0.000000 0 1 set_tid_address
-nan 0.000000 0 1 statfs64
-nan 0.000000 0 1 openat
-nan 0.000000 0 1 set_robust_list
-nan 0.000000 0 1 socket
-nan 0.000000 0 1 connect
-nan 0.000000 0 1 send
------ ----------- ----------- --------- --------- ----------------
100.00 0.000000 141 9 total
mohanraj@ltsp63:~/Development/chap8$
The above output shows the syscalls that the ls command uses. Likewise, is there any command that uses the sbrk system call, and if so, which one?
Thanks in advance.
sbrk is not a system call in Linux. It's a library function implemented in libc which uses the brk system call. Your strace shows brk being used.
If the malloc implementation in libc is redirected to mmap instead of sbrk, then every call to malloc will result in mmap. You will only find sbrk where it is used explicitly in a user-level application (normally malloc is used instead).
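To see the relationship for yourself, here is a tiny demo (a sketch; sbrk here is the glibc wrapper, not a syscall):

#define _DEFAULT_SOURCE /* exposes sbrk in glibc */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    void *before = sbrk(0); /* query the current program break */
    sbrk(1 << 20);          /* grow the heap by 1 MiB */
    void *after = sbrk(0);

    printf("program break moved from %p to %p\n", before, after);
    return 0;
}

Running this under strace shows the calls as brk(NULL) and brk(old_break + 0x100000): there is no sbrk line for strace to print, because the kernel only knows about brk.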
