By default, GDB numbers its value history entries $1, $2, $3, and so on. How can I restart the numbering from $1?
(gdb) p v1
$1 = 7
(gdb) p v2
$2 = 8
(gdb) p v3
$3 = 9
(gdb) ??? // what should be put here?
$1 = 0
Looking at the documentation, there's no explicit command to clear the value history.
It does mention that the file and symbol-file commands, which can change the symbol table, clear the history.
Also, you can use output instead of print to avoid putting the printed value in the value history.
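For example, output prints just the value, with no $N label assigned and no entry added to the history (note that it also omits the trailing newline, so the next prompt follows the value directly):
(gdb) output v1
7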
Is it possible to do something like this with lldb:
(lldb) p my_var
(uint64_t) $9 = 2
(lldb) set set target.max-children-count 4
But instead of 4, I would like to call the set command with the current value of my_var, in this case 2.
In most lldb commands, backtick blocks are evaluated and the result substituted in their place. For instance,
(lldb) sett show stop-line-count-before
stop-line-count-before (int) = 3
(lldb) p 5
(int) $1 = 5
(lldb) sett set stop-line-count-before `$1`
(lldb) sett show stop-line-count-before
stop-line-count-before (int) = 5
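Applied to the original question, something like this should work (assuming my_var is in scope in the currently selected frame):
(lldb) settings set target.max-children-count `my_var`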
It is easy to get the starting address of a function in C, but not its size. So I am currently running "nm" over the object file to locate my function and THEN locate the starting address of the next function. I need the "nm" because compilers can (and, in my case, actually do) reorder functions, so source order can differ from object order.
I wonder if there are other ways of doing this. For example, instructing the compiler to preserve source code order in the object file, etc. Maybe some ELF magic?
My compilers are GCC, Clang and Sun Studio. Platforms: Solaris and derivatives, Mac OS X, FreeBSD, with more to come in the future.
I have found that the output of objdump -t xxx will give definitive function size/length values for program and object files (.o).
For example: (From one of my projects)
objdump -t emma | grep " F .text"
0000000000401674 l F .text 0000000000000376 parse_program_header
00000000004027ce l F .text 0000000000000157 create_segment
00000000004019ea l F .text 000000000000050c parse_section_header
0000000000402660 l F .text 000000000000016e create_section
0000000000401ef6 l F .text 000000000000000a parse_symbol_section
000000000040252c l F .text 0000000000000134 create_symbol
00000000004032e0 g F .text 0000000000000002 __libc_csu_fini
0000000000402240 g F .text 000000000000002e emma_segment_count
00000000004022f1 g F .text 0000000000000055 emma_get_symbol
00000000004021bd g F .text 000000000000002e emma_section_count
0000000000402346 g F .text 00000000000001e6 emma_close
0000000000401f00 g F .text 000000000000002f emma_init
0000000000403270 g F .text 0000000000000065 __libc_csu_init
0000000000400c20 g F .text 0000000000000060 estr
00000000004022c3 g F .text 000000000000002e emma_symbol_count
0000000000400b10 g F .text 0000000000000000 _start
0000000000402925 g F .text 000000000000074f main
0000000000401f2f g F .text 000000000000028e emma_open
I've pruned the list a bit since it was lengthy. You can see that the 5th column (the second wide column with lots of zeros....) gives a length value for every function: main is 0x74f bytes long, emma_close is 0x1e6, and parse_symbol_section is a paltry 0x0a bytes... 10 bytes! (wait... is that a stub?)
Additionally, I grep'd for just the 'F'unctions in the .text section, limiting the list further. The -t option to objdump shows only the symbol tables, so it omits quite a bit of other information that isn't particularly useful for gathering function lengths.
I suppose you could use it like this:
objdump -t MYPROG | grep "MYFUNCTION$" | awk '{print "0x" $(NF-1)}' | xargs -I{} -- python -c 'print({})'
An example:
00000000004019ea l F .text 000000000000050c parse_section_header
$ objdump -t emma | grep "parse_section_header$" | awk '{print "0x" $(NF-1)}' | xargs -I{} -- python -c 'print({})'
1292
Checks out, since 0x50c == 1292.
I used $(NF-1) to grab the column in awk since the second field can vary in content and spaces depending on the identifiers relevant to the symbol involved. Also, note the trailing $ in the grep, causing main to find the main function, not the entry with main.c as its name.
The xargs -I{} -- python -c 'print({})' bit is to convert the value from hex to decimal (the parentheses keep it working under both Python 2 and 3). If anyone can think of an easier way, please chime in. (You can see where awk is sneaking the 0x prefix in there).
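One possibly easier alternative, assuming GNU awk (strtonum is a gawk extension), is to let awk do the hex-to-decimal conversion itself and drop the python step:
objdump -t emma | awk '/ parse_section_header$/ { print strtonum("0x" $(NF-1)) }'
1292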
Ah, I just remembered that I have an alias for objdump that presets the demangle option. It'll make things easier to match if you add --demangle to the objdump invocation. (I also use --wide, which is much easier to read but doesn't affect this particular output.)
This works on any ELF object (library, program, or object file) as long as it's NOT stripped. (I tested with and without debugging symbols too.)
Hope this helps.
(I looked, parse_symbol_section IS a stub.)
Here is an all-awk answer to this question that sums the sizes of all functions in a given section:
# call objdump with -t to get the list of symbols
# awk keeps only the symbols that live in the .text section
# awk sums the values in the 5th column (prefixed with 0x so they are treated as hex, then converted to decimal with the strtonum function)
objdump -t MYPROG | awk -F ' ' '($4 == ".text") {sum += strtonum("0x"$5)} END {print sum}'
And here is a variant if you only want to count certain functions from a certain section:
# awk keeps only the symbols in the .rom section whose names contain funcname
# (the value in column 6 is lowercased so the regex match is effectively case-insensitive)
# awk sums the values in the 5th column (prefixed with 0x so they are treated as hex, then converted to decimal with the strtonum function)
objdump -t MYPROG | awk -F ' ' '($4 == ".rom") && (tolower($6) ~ /_*funcname*/) {sum += strtonum("0x"$5)} END {print sum}'
Does gdb allow conditional regex breaks?
I have a source file timer.c, an int64_t ticks, and a function timer_ticks() which returns it.
Neither
rbreak timer.c:. if ticks >= 24
nor
rbreak timer.c:. if timer_ticks() >= 24
places any breakpoints.
If, however, I remove either the regex part or the condition, the breakpoints are set.
Here's a way to get it done. It takes a couple of steps, and requires some visual inspection of gdb's output.
First, run the rbreak command and note the breakpoint numbers it sets.
(gdb) rbreak f.c:.
Breakpoint 1 at 0x80486a7: file f.c, line 41.
int f();
Breakpoint 2 at 0x80486ac: file f.c, line 42.
int g();
Breakpoint 3 at 0x80486b1: file f.c, line 43.
int h();
Breakpoint 4 at 0x8048569: file f.c, line 8.
int main(int, char **);
Now, loop through that range of breakpoints and use the cond command to add the condition to each:
(gdb) set $i = 1
(gdb) while ($i <= 4)
>cond $i ticks >= 24
>set $i = $i + 1
>end
(gdb) info breakpoints
Num Type Disp Enb Address What
1 breakpoint keep y 0x080486a7 in f at f.c:41
stop only if ticks >= 24
2 breakpoint keep y 0x080486ac in g at f.c:42
stop only if ticks >= 24
3 breakpoint keep y 0x080486b1 in h at f.c:43
stop only if ticks >= 24
4 breakpoint keep y 0x08048569 in main at f.c:8
stop only if ticks >= 24
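Alternatively, if your gdb was built with Python support, you can skip counting breakpoint numbers by hand; a sketch (note this attaches the condition to every breakpoint currently set, not just the ones rbreak created):
(gdb) python
>for bp in gdb.breakpoints():
>    bp.condition = "ticks >= 24"
>end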
Unfortunately, no, it doesn't.
However, there is a workaround.
You want to break on every function in the file conditionally, don't you?
If so, you can use this answer as a starting point and then create conditional breakpoints.
So, first step: get a list of functions in the file:
nm a.out | grep ' T ' | addr2line -fe a.out |
grep -B1 'foo\.cpp' | grep -v 'foo\.cpp' > funclist
Next, create a gdb script that sets the breakpoints:
sed 's/^/break /' funclist | sed 's/$/ if ticks>=24/' > stop-in-foo.gdb
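The generated stop-in-foo.gdb then contains one line per function, along these lines (the function names here are just illustrative):
break foo_open if ticks>=24
break foo_close if ticks>=24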
And finally, source the script in gdb:
(gdb) source stop-in-foo.gdb
Programming beginner here, needing some help modifying an AWK script to make it conditional. Alternative non-awk solutions are also very welcome.
NOTE: The main filtering is now working thanks to help from Birei, but I have an additional problem; see the note further down for details.
I have a series of input files with 3 columns like so:
chr4 190499999 190999999
chr6 61999999 62499999
chr1 145499999 145999999
I want to use these rows to filter another file (refGene.txt): if a row in file one matches a row in refGene.txt, output column 13 of refGene.txt to a new file 'ListofGenes_$f'.
The tricky part for me is that I want it to count as a match as long as column one (e.g. 'chr4', 'chr6', 'chr1') and column 2 AND/OR column 3 match the equivalent columns in the refGene.txt file. The equivalent columns between the two files are $1=$3, $2=$5, $3=$6.
I am also not sure how, in awk, to print only column 13 of refGene.txt rather than the whole row.
NOTE: I have achieved the conditional filtering described above thanks to help from Birei. Now I need to incorporate an additional filter condition: I also need to output column $13 from the refGene.txt file if any part of the region between values $2 and $3 overlaps with the region between $5 and $6 in the refGene.txt file. This seems a lot trickier as it involves mathematical computation to see whether the regions overlap.
My script so far:
FILES=/files/*txt
for f in $FILES ;
do
awk '
BEGIN {
FS = "\t";
}
FILENAME == ARGV[1] {
pair[ $1, $2, $3 ] = 1;
next;
}
{
if ( pair[ $3, $5, $6 ] == 1 ) {
print $13;
}
}
' $(basename $f) /files/refGene.txt > /files/results/$(basename $f) ;
done
Any help is really appreciated. Thanks so much!
Rubal
One way.
awk '
BEGIN { FS = "\t"; }
## Save the third, fifth and sixth fields of the first file in the arguments (refGene.txt) as the key
## to compare later, with the thirteenth field as the value to print.
FNR == NR {
pair[ $3, $5, $6 ] = $13;
next;
}
## Set the name of the output file.
FNR == 1 {
output_file = "";
split( ARGV[ARGIND], path, /\// );
for ( i = 1; i < length( path ); i++ ) {
output_file = output_file ( output_file ? "/" : "" ) path[i];
}
output_file = output_file "/ListOfGenes_" path[i];
}
## If $1 = $3, $2 = $5 and $3 = $6, print $13 to output file.
{
if ( pair[ $1, $2, $3 ] ) {
print pair[ $1, $2, $3 ] >output_file;
}
}
' refGene.txt /files/rubal/*.txt
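The question's NOTE also asks for an overlap test, which an exact key lookup cannot express. A minimal sketch of just that part, with illustrative file names and the region file read first and refGene.txt second (two ranges overlap when each one starts before the other ends):
awk -F '\t' '
## Region file: remember chromosome, start and end of every row.
FNR == NR { chr[FNR] = $1; lo[FNR] = $2; hi[FNR] = $3; n = FNR; next }
## refGene.txt: print $13 when the gene range $5..$6 overlaps a saved region on the same chromosome ($3).
{
    for ( i = 1; i <= n; i++ )
        if ( $3 == chr[i] && $5 <= hi[i] && $6 >= lo[i] ) { print $13; break }
}
' regions.txt refGene.txt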
I use a command for finding strings and numbers in a file
awk -F'[=,: ]' '{print /uid=/?$4:(/^telephoneN/)?$2:$3}' 1.txt
the output is something like
a
b
c
d
e
f
g
t
I would like to write this output to a file 2.xml:
<xml>
<name>aaaa</name>
<surname>bbbb</surname>
...
</xml>
<xml>
<name>eeee</name>
<surname>ffff</surname>
...
</xml>
I don't know how to manage the result from awk.
Could you help me please?
Thanks in advance
It would be nice to see what your real data looks like, but given that your output shows 4 fields and your input shows 4 fields, here is the basic idea.
awk 'BEGIN {
RS="" # make blank line between sets of data the RecordSep
FS="\n" # make each line as a field in the rec (like $1, $2 ...)
}
{ # this is the main loop; each record set is processed here
printf("<xml>\n\t<name>%s</name>\n\t<surname>%s</surname>\n\t<Addr1>%s</Addr1>\n\t<Addr2>%s</Addr2>\n</xml>\n",
$1, $2, $3, $4 )
} ' 1.txt > 1.xml
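With the sample output shown above, the first blank-line-separated set (a, b, c, d) would come out roughly as:
<xml>
	<name>a</name>
	<surname>b</surname>
	<Addr1>c</Addr1>
	<Addr2>d</Addr2>
</xml>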
Note: there should be only 1 blank line between your record sets.
I hope this helps.
P.S. as you appear to be a new user, if you get an answer that helps you please remember to mark it as accepted, or give it a + (or -) as a useful answer.