Unix not checking file permissions properly

I executed the following commands on a Unix machine:
blr59-adm1:~ # ls -l / | grep back
d-w--w--w- 2 root root 4096 Jun 9 13:31 backupmnt
blr59-adm1:~ # [ -x /backupmnt ]
blr59-adm1:~ # echo $?
0
blr59-adm1:~ #
I am not able to understand why echo $? prints 0 even though my directory does not have execute permissions.
My shell script is failing because of this behavior.
Please correct me if I am doing something wrong.
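A quick way to check whether this is simply because the test runs as root (root bypasses most permission checks and can always search a directory) is to repeat it as an unprivileged user, assuming an account such as nobody exists:
blr59-adm1:~ # [ -x /backupmnt ]; echo $?
0
blr59-adm1:~ # su -s /bin/sh -c '[ -x /backupmnt ]; echo $?' nobody
1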


How to execute in another directory?

I have three directories:
/home/Desktop/1
/home/Desktop/2
/home/Desktop/3
In directories 1 and 2 there are executable C programs, which can be run from the terminal as ./tst1 or ./tst2.
In directory 3 I have a bash script, which runs the C program tst3 (built from tst3.c) in the same directory.
I want to execute the C programs from directories 1 and 2 from my bash script in directory 3, like this:
#!/bin/bash
sudo ./tst3
sleep 1
sudo ./tst1 # from directory 1
sleep 2
sudo ./tst2 # from directory 2
Any ideas?
You have multiple options, including at least:
Set PATH to include the directories where your commands are found:
#!/bin/bash
export PATH="$PATH:/home/Desktop/1:/home/Desktop/2:/home/Desktop/3"
sudo tst3 # from directory 3
sleep 1
sudo tst1 # from directory 1
sleep 2
sudo tst2 # from directory 2
Use absolute paths to the commands:
#!/bin/bash
sudo /home/Desktop/3/tst3 # from directory 3
sleep 1
sudo /home/Desktop/1/tst1 # from directory 1
sleep 2
sudo /home/Desktop/2/tst2 # from directory 2
Use relative paths to the commands:
#!/bin/bash
sudo ../3/tst3 # from directory 3
sleep 1
sudo ../1/tst1 # from directory 1
sleep 2
sudo ../2/tst2 # from directory 2
These treat the directories symmetrically. Another alternative is to place the commands in a directory already on your PATH (like $HOME/bin, perhaps), and then run them without any path. This is what I'd normally do — ensure the commands to be run are in a directory on my PATH.
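A rough sketch of that $HOME/bin alternative (assuming $HOME/bin exists and is on the PATH that sudo ends up using, which may require adjusting sudo's secure_path):
cp /home/Desktop/1/tst1 /home/Desktop/2/tst2 /home/Desktop/3/tst3 "$HOME/bin"
after which the script needs no paths at all:
#!/bin/bash
sudo tst3
sleep 1
sudo tst1
sleep 2
sudo tst2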
If you are simply trying to locate the scripts:
#!/bin/bash
base_dir="$( dirname "$( readlink -e "$0" )" )"/..
sudo "$base_dir/3/tst3"
sleep 1
sudo "$base_dir/1/tst1"
sleep 2
sudo "$base_dir/2/tst2"
or
#!/bin/bash
cd "$( dirname "$( readlink -e "$0" )" )"/..
sudo 3/tst3
sleep 1
sudo 1/tst1
sleep 2
sudo 2/tst2
If you want the CWD to be changed to the directory of each executable before executing it:
#!/bin/bash
cd "$( dirname "$( readlink -e "$0" )" )"
sudo ./tst3
cd ../1
sleep 1
sudo ./tst1
cd ../2
sleep 2
sudo ./tst2
These scripts will work properly even if they are launched from a directory other than the one they are found in. They will even work if they are launched via a symlink!
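For example, with hypothetical names: if the directory-3 script is saved as /home/Desktop/3/runall.sh, it can be started through a symlink and readlink -e "$0" still resolves the real location:
chmod +x /home/Desktop/3/runall.sh
ln -s /home/Desktop/3/runall.sh ~/bin/runall
~/bin/runall # still finds /home/Desktop/1, 2 and 3 correctly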

Need Shell script to monitor files on remote SUSE dir

We need monitoring on the folder below, for the respective directories and sub-directories, to see whether a directory contains more than 100 files. Also, no file should sit there for more than 4 hours.
If there are more than 100 files in a directory we need an alert. I'm not sure whether this script will work; could you please confirm?
Path: /export/ftpaccounts/image-processor/working/
The Script:
#!/bin/bash
LOCKFILE=/tmp/findimages.lock
if [ -f ${LOCKFILE} ]; then
exit 0
fi
touch ${LOCKFILE}
NUM=`find /mftstaging/vim/inbound/active \
-ignore_readdir_race -depth -type f -m min +60 -print |
xargs wc -l`
if [[ ${NUM:0:1} -ne 0 ]]; then
echo "${NUM:0:1} files older than 60minutes" |
mail -s "batch import is slow" ${MAILTO}
fi
rm -rf ${LOCKFILE}
The format of your original post made it difficult to tell what you were trying to accomplish. If I understand correctly, you just want to find the number of files in the remote directory that are more than 60 minutes old; with a couple of changes your script should work fine. Try:
#!/bin/bash
LOCKFILE=/tmp/findimages.lock
ACTIVE=/mftstaging/vim/inbound/active
[ -f ${LOCKFILE} ] && exit 0
touch ${LOCKFILE}
# NUM=`find /mftstaging/vim/inbound/active \
# -ignore_readdir_race -depth -type f -m min +60 -print |
# xargs wc -l`
NUM=$(find $ACTIVE -type f -mmin +60 | wc -l)
## if [ $NUM -gt 100 ]; then # if you are testing for more than 100 files
if [ $NUM -gt 0 ]; then
echo "$NUM files older than 60minutes" |
mail -s "batch import is slow" ${MAILTO}
fi
rm -rf ${LOCKFILE}
Note: you will want to implement some logic that deals with a stale lock file, and perhaps use trap to ensure the lock is removed regardless of how the script terminates, e.g.:
trap 'rm -rf ${LOCKFILE}' SIGTERM SIGINT EXIT
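A minimal sketch combining both ideas, treating a lock older than 60 minutes as stale (the threshold is arbitrary):
#!/bin/bash
LOCKFILE=/tmp/findimages.lock
# if a lock exists and is less than 60 minutes old, assume another run is still active
if [ -f "${LOCKFILE}" ] && [ -z "$(find "${LOCKFILE}" -mmin +60)" ]; then
exit 0
fi
touch "${LOCKFILE}"
# now that we own the lock, remove it however the script terminates
trap 'rm -f "${LOCKFILE}"' TERM INT EXIT
# ... rest of the script as above ...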

nohup for batch command using "for x in find"

I am trying to run a Python script (qTrimHIV.py) on ALL the files ending in .fastq in the current directory. I'd like to prepend nohup so that a single command processes all the files and keeps running even if I close the terminal or log out.
The individual command:
for x in $(find . -name '*fastq'); do echo $x; python ../qTrimHIV.py -verbose -fastq $x -l 23 -m 20 -w 23 -o ${x%.fastq*}tr -mode 2; done
works well. But when I put nohup in front of the batch command:
nohup for x in $(find . -name '*fastq') ; do echo $x; python ../qTrimHIV.py -verbose -fastq $x -l 23 -m 20 -w 23 -o ${x%.fastq*}tr -mode 2; done
-bash: syntax error near unexpected token `do'
I get the error above.
However, it works if I put nohup before the actual command "python ../qTrimHIV.py", but this is not really what I want: I want to submit the task for all files at once and have it run until it's done, without having to stay logged in.
I've tried
for x in $(find . -name '*fastq') ; do echo $x; python ../qTrimHIV.py -verbose -fastq $x -l 23 -m 20 -w 23 -o ${x%.fastq*}qtrim -mode 2; done | at now
but it doesn't let me see the progress of the job, and I can't know if it is still running or not.
Can anyone give me any suggestions? Should I use a different command, other than nohup?
You can use:
nohup bash -c 'your command, everything including the for loop'
Also, why not put the loop in a shell script and run that with nohup:
nohup <your script>
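For this particular case, a couple of concrete (untested) forms of those suggestions, where qtrim.out and qtrim_all.sh are just example names:
nohup bash -c 'for x in $(find . -name "*fastq"); do echo "$x"; python ../qTrimHIV.py -verbose -fastq "$x" -l 23 -m 20 -w 23 -o "${x%.fastq*}tr" -mode 2; done' > qtrim.out 2>&1 &
or save the loop as qtrim_all.sh, make it executable, and run:
nohup ./qtrim_all.sh > qtrim.out 2>&1 &
tail -f qtrim.out # check progress later; without the redirection nohup writes to nohup.out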

In one directory new files are executable by default

For some reason one of my directories has started producing executables. By this I mean that new files in that directory are created a+x (but not, for example, in the parent directory):
$ ls -ld .
drwxrwsr-x 2 me me 45 Dec 5 10:22 ./
drwxrwsr-x 10 me me 13 Dec 5 10:22 ../
$ rm -f test
$ touch test
$ ls -l test
-rwxrwxr-x 1 me me 0 Dec 5 10:25 test*
$ cd ..
$ rm -f test
$ touch test
$ ls -l test
-rw-rw-r--+ 1 me me 0 Dec 5 10:26 test
Also, notice the + at the end of the second permissions line; is it significant?
I know it cannot be a umask thing...but it's set at 0002.
How can I turn off this behavior?
EDIT:
In response to an answer below I ran the following (in the parent dir):
$ touch test
$ getfacl test
# file: test
# owner: me
# group: me
user::rw-
group::rw-
mask::rwx
other::r--
Why do I have this mask? Is this the right value for it? How can I change it?
The + indicates the presence of one or more ACLs on the entry. getfacl test will show you more information. The oddity with the apparent executability of new files may be related to the ACLs in the parent directory, but we'd have to see what they are to know for sure...
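If the parent directory does turn out to carry a default ACL, a sketch of how to inspect and clear it (getfacl/setfacl from the acl package; -k removes only the default entries, -b strips all extended entries):
getfacl . # look for lines starting with default:
setfacl -k . # remove just the default ACL from this directory
setfacl -b . # or remove all extended ACL entries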

How do I easily package libraries needed to analyze a core dump (i.e. packcore)

The version of GDB that is available on HPUX has a command called "packcore", which creates a tarball containing the core dump, the executable and all libraries. I've found this extremely useful when trying to debug core dumps on a different machine.
Is there a similar command in the standard version of GDB that I might find on a Linux machine?
I'm looking for an easy command that someone that isn't necessarily a developer can run when things go bad on a production machine.
The core file includes the command from which it was generated. Ideally this will include the full path to the appropriate executable. For example:
$ file core.29529
core.29529: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from '/bin/sleep 60'
Running ldd on an ELF binary will show what libraries it depends on:
$ ldd /bin/sleep
linux-vdso.so.1 => (0x00007fff1d3ff000)
libc.so.6 => /lib64/libc.so.6 (0x0000003d3ce00000)
/lib64/ld-linux-x86-64.so.2 (0x0000003d3ca00000)
So now I know the executable and the libraries needed to analyze the core dump.
The tricky part here is extracting the executable path from the core file. There doesn't appear to be a good tool for reading this directly. The data is encoded in a prpsinfo structure (from /usr/include/sys/procfs.h), and you can find the location and size of the data using readelf:
$ readelf -n core.29529
Notes at offset 0x00000468 with length 0x00000558:
Owner Data size Description
CORE 0x00000150 NT_PRSTATUS (prstatus structure)
CORE 0x00000088 NT_PRPSINFO (prpsinfo structure)
CORE 0x00000130 NT_AUXV (auxiliary vector)
CORE 0x00000200 NT_FPREGSET (floating point registers)
...so one could in theory write a code snippet to extract the command line from this structure and print it out in a way that would make this whole process easier to automate. You could, of course, just parse the output of file:
$ file core.29529 | sed "s/.*from '\([^']*\)'/\1/"
/bin/sleep 60
So that's all the parts. Here's a starting point for putting it all together:
#!/bin/sh
core=$1
exe=$(file $core | sed "s/.*from '\([^']*\)'/\1/" | awk '{print $1}')
libs=$(
ldd $exe |
awk '
/=> \// {print $3}
! /=>/ {print $1}
'
)
cat <<EOF | tar -cah -T- -f $1-all.tar.xz
$core
$libs
$exe
EOF
For my example, if I name this script packcore and run it on the core file from the sleep command, I get this:
$ packcore core.29529
tar: Removing leading `/' from member names
$ tar -tf core.29529-all.tar.xz
core.29529
lib64/libc.so.6
lib64/ld-linux-x86-64.so.2
bin/sleep
As it stands this script is pretty fragile; I've made lots of assumptions about the output from ldd based on only this sample output.
Here's a script that does the necessary steps (tested only on RHEL5, but might work elsewhere too):
#!/bin/sh
#
# Take a core dump and create a tarball of all of the binaries and libraries
# that are needed to debug it.
#
include_core=1
keep_workdir=0
usage()
{
argv0="$1"
retval="$2"
errmsg="$3"
if [ ! -z "$errmsg" ] ; then
echo "ERROR: $errmsg" 1>&2
fi
cat <<EOF
Usage: $argv0 [-k] [-x] <corefile>
Parse a core dump and create a tarball with all binaries and libraries
needed to be able to debug the core dump.
Creates <corefile>.pack.tgz
-k - Keep temporary working directory
-x - Exclude the core dump from the generated tarball
EOF
exit $retval
}
while [ $# -gt 0 ] ; do
case "$1" in
-k)
keep_workdir=1
;;
-x)
include_core=0
;;
-h|--help)
usage "$0" 0
;;
-*)
usage "$0" 1 "Unknown command line arguments: $*"
;;
*)
break
;;
esac
shift
done
COREFILE="$1"
if [ ! -e "$COREFILE" ] ; then
usage "$0" 1 "core dump '$COREFILE' doesn't exist."
fi
case "$(file "$COREFILE")" in
*"core file"*)
: # looks like a valid core dump; carry on
;;
*)
usage "$0" 1 "per the 'file' command, core dump '$COREFILE' is not a core dump."
;;
esac
cmdname=$(file "$COREFILE" | sed -e"s/.*from '\(.*\)'/\1/" | awk '{print $1}')
echo "Command name from core file: $cmdname"
fullpath=$(which "$cmdname")
if [ ! -x "$fullpath" ] ; then
usage "$0" 1 "unable to find command '$cmdname'"
fi
echo "Full path to executable: $fullpath"
mkdir "${COREFILE}.pack"
gdb --eval-command="quit" "${fullpath}" ${COREFILE} 2>&1 | \
grep "Reading symbols" | \
sed -e's/Reading symbols from //' -e's/\.\.\..*//' | \
tar --files-from=- -cf - | (cd "${COREFILE}.pack" && tar xf -)
if [ $include_core -eq 1 ] ; then
cp "${COREFILE}" "${COREFILE}.pack"
fi
tar czf "${COREFILE}.pack.tgz" "${COREFILE}.pack"
if [ $keep_workdir -eq 0 ] ; then
rm -r "${COREFILE}.pack"
fi
echo "Done, created ${COREFILE}.path.tgz"
I've written a shell script for this. It uses ideas from the answers above and adds some usage information and additional commands. In the future I'll possibly add a command for quick debugging in a Docker container with gdb.
