Comparing/diffing tuning files in C

I have two files which look something like this:
#define TUNING_CONST 55
#define OTHER_TUNING_CONST 107
...
and
#define TUNING_CONST 65
#define OTHER_TUNING_CONST 93
...
You can think of these as an automatically-generated file and its static base. I would like to compare them, but I can't find a good way: diff apparently isn't able to see that the lines are the same apart from the constants. I tried a hacky approach with xargs, but it was tricky. Here's a start, which matches up each constant in the other file line by line; however, it doesn't show the name or the original constant, so it isn't useful yet:
egrep -o '^#define \S+' tuning.h | egrep -o '\S+$' | xargs -I % egrep "%" basetune.h | egrep -o '[0-9]+$'
This is surely a common case -- lots of programs generate tuning data -- and it can't be that rare to want to see how things change programmatically. Any ideas?

You haven't specified what the expected output should look like, but here's one option:
join -1 2 -2 2 -o 1.2,1.3,2.3 <(sort f1) <(sort f2)
Output:
OTHER_TUNING_CONST 107 93
TUNING_CONST 55 65
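As a self-contained sketch (recreating the two example headers from the question), the same join can be followed by an awk filter so only the constants whose values actually differ are shown:

```shell
# Recreate the two example headers, sort them so join can match on the
# macro name (field 2), and print: name, base value, generated value.
# The trailing awk keeps only constants whose values differ. Sorting
# the whole line is equivalent to sorting on the join field here,
# because field 1 is the same "#define" on every line.
printf '#define TUNING_CONST 55\n#define OTHER_TUNING_CONST 107\n' > basetune.h
printf '#define TUNING_CONST 65\n#define OTHER_TUNING_CONST 93\n' > tuning.h
sort basetune.h > base.sorted
sort tuning.h > new.sorted
join -1 2 -2 2 -o 1.2,1.3,2.3 base.sorted new.sorted | awk '$2 != $3'
```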

Listing optimized binary function sizes in bytes

I am optimizing my code for an MCU and want an overview of the sizes of all functions in my C program, including all libraries, e.g.:
$ objdump some arguments that I do not know | pipe through something | clean the result some how
_main: 300 bytes
alloc: 200 bytes
do_something: 1111 bytes
etc...
nm -S -t d a.out | grep function_name | awk '{sub(/^0+/, "", $2); print $2}'
Or print a list of sorted sizes for each symbol:
nm -S --size-sort -t d a.out | awk '{sub(/^0+/, "", $2); print $4 " " $2}'
nm lists symbols from object files. We use -S to print the size of each defined symbol in the second column, and -t d to print values in decimal.
$ objdump a.out -t|grep "F .text"|cut -f 2-3
00000000000000b1 __slow_div
00000000000000d8 __array_compact_and_grow
00000000000000dc __log
00000000000000de __to_utf
00000000000000e9 __string_compact
00000000000000fe __gc_release
000000000000001d __gc_check
000000000000001f array_compact
00000000000001e8 __string_split
000000000000002a c_roots_clear
000000000000002f f
000000000000002f _start
000000000000002f wpr
000000000000003d array_get
Not perfect, since the sizes are in hex, but it's easy to write a script that converts them to decimal and sorts arbitrarily.
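For instance, a small pipeline sketch that does the hex-to-decimal conversion and the numeric sort, shown here on two stand-in lines from the output above:

```shell
# Convert the hex sizes to decimal and sort by size. In practice, feed
# this loop from: objdump -t a.out | grep "F .text" | cut -f 2-3
printf '%s\n' '00000000000000b1 __slow_div' '000000000000001d __gc_check' |
while read -r hex name; do
  printf '%s: %d bytes\n' "$name" "$((0x$hex))"
done | sort -t: -k2 -n
```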

About g++ -D in csh

I have a csh script that builds my C code as follows:
foreach i (COARSE_STATIC, COARSE_DYNAMIC, FINE_STATIC, FINE_DYNAMIC)
foreach j (1 2 4 8 12 16 20 24 28 32 36 40 44 48 52 56 60)
g++ -o proj2 project2.cpp -O3 -lm -openmp -D=$i -DNUMT=$j
./proj2 >> OUT
end
echo '\n' >> OUT
end
I have a problem with the -D=$i. I know it is incorrect, but I don't know how to modify it to express:
#define COARSE_STATIC
Could anyone tell me how to use it?
Just don't put the = sign if you only need to define the macro.
g++ ... -D$i
Also, you have commas (,) as separators in your first foreach list, but not in the second. The second is correct, and you should remove the commas from the first. (Otherwise you'll try to define the macro COARSE_STATIC, with a trailing comma.)
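For illustration, here is the corrected double loop rewritten as a POSIX sh sketch (not csh). It only echoes the g++ command lines so the flag expansion is visible, and it assumes -fopenmp, the usual g++ spelling of the OpenMP flag, in place of the question's -openmp:

```shell
# Echo the g++ invocations the corrected loop would run; drop the echo
# to actually compile. -D$i defines the macro with no value, which is
# what a bare "#define COARSE_STATIC" amounts to.
for i in COARSE_STATIC COARSE_DYNAMIC FINE_STATIC FINE_DYNAMIC; do
  for j in 1 2 4 8; do
    echo g++ -o proj2 project2.cpp -O3 -lm -fopenmp "-D$i" "-DNUMT=$j"
  done
done
```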

Getting output of shell command in bash array

I have a uniq -c output that prints about 7-10 lines, with the count of each repeated pattern for each unique line. I want to store the output of uniq -c file.txt in a bash array. Right now, all I can do is store the output in a variable and print it; bash treats the entire output as one big string.
How does bash recognize delimiters? How do you store UNIX shell command output as Bash arrays?
Here is my current code:
proVar=`awk '{printf ("%s\t\n"), $1}' file.txt | grep -P 'pattern' | uniq -c`
echo $proVar
And current output I get:
587 chr1 578 chr2 359 chr3 412 chr4 495 chr5 362 chr6 287 chr7 408 chr8 285 chr9 287 chr10 305 chr11 446 chr12 247 chr13 307 chr14 308 chr15 365 chr16 342 chr17 245 chr18 252 chr19 210 chr20 193 chr21 173 chr22 145 chrX 58 chrY
Here is what I want:
proVar[1] = 2051
proVar[2] = 1243
proVar[3] = 1068
...
proVar[22] = 814
proVar[X] = 72
proVar[Y] = 13
In the long run, I'm hoping to make a bar plot based on the counts for each index, where every 50 counts equals one "=" sign. It will hopefully look like this:
chr1 ===========
chr2 ===========
chr3 =======
chr4 =========
...
chrX ==
chrY =
Any help, guys?
To build the associative array, try this:
declare -A proVar
while read -r val key; do
proVar[${key#chr}]=$val
done < <(awk '{printf ("%s\t\n"), $1}' file.txt | grep -P 'pattern' | uniq -c)
Note: This assumes that your command's output is composed of multiple lines, each containing one key-value pair; the single-line output shown in your question comes from passing $proVar to echo without double quotes.
Uses a while loop to read each output line from a process substitution (<(...)).
The value for each assoc. array entry is the count (each input line's first whitespace-separated token), and the key is formed by stripping the prefix chr from the rest of the line (after the separating space).
To then create the bar plot, use:
while IFS= read -r key; do
echo "chr${key} $(printf '=%.s' $(seq $(( ${proVar[$key]} / 50 ))))"
done < <(printf '%s\n' "${!proVar[@]}" | sort -n)
Note: Using sort -n to sort the keys will put non-numeric keys such as X and Y before numeric ones in the output.
$(( ${proVar[$key]} / 50 )) calculates the number of = chars. to display, using integer division in an arithmetic expansion.
The purpose of $(seq ...) is to simply create as many tokens (arguments) as = chars. should be displayed (the tokens created are numbers, but their content doesn't matter).
printf '=%.s' ... is a trick that effectively prints as many = chars. as there are arguments following the format string.
printf '%s\n' "${!proVar[@]}" | sort -n sorts the keys of the assoc. array numerically, and its output is fed via a process substitution to the while loop, which therefore iterates over the keys in sorted order.
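Put together as a self-contained sketch (the sample counts below stand in for the real uniq -c output of file.txt):

```shell
#!/usr/bin/env bash
# Build the associative array from sample "count chrN" lines, then draw
# the bar plot: one '=' per 50 counts, via the printf/seq trick above.
declare -A proVar
while read -r val key; do
  proVar[${key#chr}]=$val
done <<'EOF'
587 chr1
145 chrX
58 chrY
EOF

for key in 1 X Y; do
  printf 'chr%s %s\n' "$key" "$(printf '=%.s' $(seq $(( ${proVar[$key]} / 50 ))))"
done
```

This prints chr1 with eleven = signs (587 / 50 = 11), chrX with two, and chrY with one.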
You can create an array in an assignment using parentheses:
proVar=(`awk '{printf ("%s\t\n"), $1}' file.txt | grep -P 'pattern' | uniq -c`)
There's no built-in way to create an associative array directly from input. For that you'll need an additional loop.

Delete lines in a file containing argument passed on command line

I'm trying to delete specific lines based on the argument passed in.
My data.txt file contains
Cpu 500 64 6
Monitor 22 42 50
Game 32 64 128
My del.sh contains
myvar=$1
sed '/$myvar/d' data.txt > temp.txt
mv temp.txt data.txt
but it just prints every line in temp.txt to data.txt....however
sed '/64/d' data.txt > temp.txt
will do the correct data transfer (but I don't want to hardcode 64). I feel like there's some kind of syntax error with the argument. Any input, please?
It's because of the single quotes, change them to double quotes. Variables inside single quotes are not interpolated, so you are sending the literal string $myvar to sed, instead of the value of $myvar.
Change:
sed '/$myvar/d' data.txt
to:
sed "/$myvar/d" data.txt
Note: You will run into issues when $myvar contains regular expression meta characters or forward slashes as pointed out in this response from Ed Morton. If you are not in complete control of your input you will need to find another avenue to accomplish this.
Assuming this is undesirable behavior:
$ cat file
Cpu 500 64 6
Monitor 22 42 50
Game 32 64 128
$ myvar=6
$ sed "/$myvar/d" file
Monitor 22 42 50
$ myvar=/
$ sed "/$myvar/d" file
sed: -e expression #1, char 3: unknown command: `/'
$ myvar=.
$ sed "/$myvar/d" file
$
Try this instead:
$ myvar=6
$ awk -v myvar="$myvar" '{for (i=1; i<=NF;i++) if ($i == myvar) next }1' file
Monitor 22 42 50
Game 32 64 128
$ myvar=/
$ awk -v myvar="$myvar" '{for (i=1; i<=NF;i++) if ($i == myvar) next }1' file
Cpu 500 64 6
Monitor 22 42 50
Game 32 64 128
$ myvar=.
$ awk -v myvar="$myvar" '{for (i=1; i<=NF;i++) if ($i == myvar) next }1' file
Cpu 500 64 6
Monitor 22 42 50
Game 32 64 128
and if you think you can just escape the /s and use sed, you can't, because you might be adding a second backslash to one that is already present:
$ foo='\/'
$ myvar=${foo//\//\\\/}
$ sed "/$myvar/d" file
sed: -e expression #1, char 5: unknown command: `/'
$ awk -v myvar="$myvar" '{for (i=1; i<=NF;i++) if ($i == myvar) next }1' file
Cpu 500 64 6
Monitor 22 42 50
Game 32 64 128
This is simply NOT a job you can, in general, do with sed, due to its syntax and its restriction of only allowing REs in its search.
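Another option, if a literal-substring match is acceptable for your data (an assumption; the awk version above matches exact whole fields): grep -F sidesteps the metacharacter problem entirely. A sketch:

```shell
# grep -F treats the pattern as a literal string, so regex
# metacharacters in $myvar are harmless. Caveat: this is substring
# matching, so myvar=6 would also delete the lines containing 64.
printf '%s\n' 'Cpu 500 64 6' 'Monitor 22 42 50' 'Game 32 64 128' > data.txt
myvar=64
grep -vF -- "$myvar" data.txt
```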
You can also use awk to do the same:
awk '!/'"$myvar"'/' data.txt > temp.txt && mv temp.txt data.txt
(Note the quotes around $myvar, so values containing whitespace don't break the command; the regex caveats above still apply.)
Use the -i option in addition to what @SeanBright proposed. Then you won't need > temp.txt and mv temp.txt data.txt:
sed -i "/$myvar/d" data.txt
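A quick sketch of the -i variant (GNU sed shown; BSD/macOS sed needs an explicit suffix argument, e.g. sed -i ''):

```shell
# Delete matching lines in place with GNU sed; no temp file needed.
printf '%s\n' 'Cpu 500 64 6' 'Monitor 22 42 50' 'Game 32 64 128' > data.txt
myvar=Monitor
sed -i "/$myvar/d" data.txt
cat data.txt
```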

How to split a large file into small ones by line number

I am trying to split my large file into small ones by line number. For example, my file has 30,000,000 lines and I would like to divide it into small files, each of which has 10,000 lines (i.e. 3,000 small files).
I used 'split' in Unix, but it seems to be limited to only 100 files.
Is there a way of overcoming this limitation of 100 files?
If there is another way of doing this, please advise as well.
Thanks.
Using GNU awk
gawk '
BEGIN {
i=1
}
{
print $0 > ("small" i ".txt")
}
NR%10==0 {
close("small" i ".txt"); i++
}' bigfile.txt
Test:
[jaypal:~/temp] seq 100 > bigfile.txt
[jaypal:~/temp] gawk 'BEGIN {i=1} {print $0 > ("small" i ".txt") } NR%10==0 { close("small" i ".txt"); i++ }' bigfile.txt
[jaypal:~/temp] ls small*
small1.txt small10.txt small2.txt small3.txt small4.txt small5.txt small6.txt small7.txt small8.txt small9.txt
[jaypal:~/temp] cat small1.txt
1
2
3
4
5
6
7
8
9
10
[jaypal:~/temp] cat small10.txt
91
92
93
94
95
96
97
98
99
100
Not an answer as such; just a way to do the renaming part requested in a comment:
$ touch 000{1..5}.txt
$ ls
0001.txt 0002.txt 0003.txt 0004.txt 0005.txt
$ rename 's/^0*//' *.txt
$ ls
1.txt 2.txt 3.txt 4.txt 5.txt
I also tried the above with 3000 files without any problems.
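For completeness: a modern GNU coreutils split has no 100-file limit, so the original task can likely be done directly. A sketch, scaled down to a 100-line file:

```shell
# -l sets lines per chunk, -d uses numeric suffixes, and -a 4 widens
# the suffix so thousands of output files fit.
seq 100 > bigfile.txt                 # stand-in for the 30,000,000-line file
split -l 10 -d -a 4 bigfile.txt chunk
ls chunk* | wc -l                     # 10 chunks: chunk0000 .. chunk0009
```

For the real case, split -l 10000 -d -a 4 bigfile.txt small would produce small0000, small0001, and so on, with room for up to 10,000 chunks.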
