process file line by line does not work in ksh - loops

This looks simple, but I am not sure why it won't read beyond the first line if I try to do any processing on the data I read from the file.
code:
while IFS="" read -r line
do
u=`echo $line | awk '{print $4}'`
h=`echo $line | awk '{print $2}'|cut -f1 -d'.'`
echo "$h : $u"
ssh $h grep $u /etc/shadow
done < "/var/tmp/user_data"
user_data is a file with one line for each user/system like:
xxx unixhost01 xxx admin69 xxx... ....
xxx host xxx uid xxx... ...
...
...
When I run this code it only works on the first line and then exits. Debugging in the shell shows no issues. When I run it without the ssh command it processes the whole data file.
shell is ksh:
# ps
PID TTY TIME CMD
1424 pts/138 0:00 ps
18521 pts/138 0:02 ksh
On execution it prints output for the first line only:
unixhost01 : admin69
admin69:$1$gFfcEQETGZAo6W0:17599:0:90:10:::
Any ideas?

The problem lies in ssh itself: it consumes standard input, so the next time you loop around to get another line, there are none left. You can verify this by changing your command to:
ssh $h cat
and seeing that it outputs the rest of your file during the first ssh session.
You can fix this in at least two ways, the first by simply disconnecting your ssh from standard input:
ssh $h grep $u /etc/shadow </dev/null
The second (needed if you actually want to interact with the target box) is to use a different file handle for the input file stream:
while IFS="" read -u3 -r line ; do # reads from file descriptor 3
blah blah blah
done 3</var/tmp/user_data # feeds the data file in on descriptor 3
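For reference, a minimal sketch of the original loop with the second fix applied; the data file is fed in on file descriptor 3, so ssh is free to use standard input for its own session:

#!/bin/ksh
while IFS="" read -u3 -r line; do
    u=$(echo "$line" | awk '{print $4}')                  # user name: 4th field
    h=$(echo "$line" | awk '{print $2}' | cut -f1 -d'.')  # short hostname: 2nd field
    echo "$h : $u"
    ssh "$h" grep "$u" /etc/shadow                        # no longer eats user_data
done 3< /var/tmp/user_data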

Related

Pipe the output of eval/source in fish shell

I want to pipe the output of eval to a file. That works as expected if the command execution is successful:
eval ls > log.txt 2>&1
cat log.txt # Documents Desktop
It also works if the command is not successful:
eval rm Desktop > log.txt 2>&1
cat log.txt # rm: cannot remove 'Desktop': Is a directory
However, I do not manage to redirect stderr if the command does not exist:
eval abcde > log.txt 2>&1 # fish: Unknown command abcde
cat log.txt # (empty)
How can I redirect also the output of the third case to a log file?
Something that works with source would also be very much appreciated:
echo abcde | source > log.txt 2>&1
However, I do not manage to redirect stderr if the command does not exist
That's because the output is not coming from eval or the command, it's coming from your command-not-found handler.
Try checking if the command exists before you try to execute it. If you absolutely can't, it's technically possible to silence the command-not-found error entirely by redefining __fish_command_not_found_handler:
function __fish_command_not_found_handler; end
You'd have to handle moving it back afterwards via functions --copy:
functions --copy __fish_command_not_found_handler oldcnf
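A minimal sketch of the full save/silence/restore sequence; the backup name __fish_cnf_backup is just an illustrative choice:

# save the current handler under a throwaway name
functions --copy __fish_command_not_found_handler __fish_cnf_backup
# replace it with a no-op so unknown commands stay silent
function __fish_command_not_found_handler; end
eval abcde > log.txt 2>&1
# put the original handler back and drop the backup
functions --erase __fish_command_not_found_handler
functions --copy __fish_cnf_backup __fish_command_not_found_handler
functions --erase __fish_cnf_backup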
Overall I don't recommend any of this and suspect you might be overusing eval.
Something that works with source would also be very much appreciated:
That's what eval is for, quite literally. Up to the upcoming 3.1 release eval is a function that's just source with some support code that mostly boils down to handling these redirections.
You should only eval if the command exists; this is possible with:
test -f (whereis -b command | awk '{print $2}')
whereis -b searches for the binaries for the command on your system
awk filters the output to show only the first result
test -f verifies that the file exists
If the command exists, it returns status 0. So to finish you can write it like:
test -f (whereis -b abcde | awk '{print $2}') && abcde > log.txt 2>&1
You can also use this form:
test (whereis -b abcde | awk '{print $2}') != '' && abcde > log.txt 2>&1
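Alternatively, since this is fish, you can skip whereis entirely and ask the shell whether the name resolves to anything runnable; type -q sets only an exit status. A sketch (not from the original answers):

# type -q succeeds if abcde is a builtin, a function or an external command
if type -q abcde
    abcde > log.txt 2>&1
else
    echo "abcde: not found" > log.txt
end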

Using loop to convert multiple files into separate files

I used this command to convert multiple pcap log files to text using tcpdump:
$ cat /home/dalya/snort-2.9.9.0/snort_logs/snort.log.* | tcpdump -n -r - > /home/dalya/snort-2.9.9.0/snort_logs2/bigfile.txt
and it worked well.
Now I want to separate the output, with each converted file in a separate output file, using a loop like this:
for f in /home/dalya/snort-2.9.9.0/snort_logs/snort.log.* ; do
tcpdump -n -r "$f" > /home/dalya/snort-2.9.9.0/snort_logs2/"$f.txt" ;
done
But it gave me:
bash: /home/dalya/snort-2.9.9.0/snort_logs2//home/dalya/snort-2.9.9.0/snort_logs/snort.log.1485894664.txt: No such file or directory
bash: /home/dalya/snort-2.9.9.0/snort_logs2//home/dalya/snort-2.9.9.0/snort_logs/snort.log.1485894770.txt: No such file or directory
bash: /home/dalya/snort-2.9.9.0/snort_logs2//home/dalya/snort-2.9.9.0/snort_logs/snort.log.1487346947.txt: No such file or directory
I think the problem is in $f. Where did I go wrong?
If you run
for f in /home/dalya/snort-2.9.9.0/snort_logs/snort.log.* ; do
echo $f
done
You'll find that you're getting
/home/dalya/snort-2.9.9.0/snort_logs/snort.log.1485894664
/home/dalya/snort-2.9.9.0/snort_logs/snort.log.1485894770
/home/dalya/snort-2.9.9.0/snort_logs/snort.log.1487346947
You can use basename to get only the filename, something like this:
for f in /home/dalya/snort-2.9.9.0/snort_logs/snort.log.* ; do
base="$(basename $f)"
echo $base
done
Once you're satisfied that this is working, remove the echo statement and use
tcpdump -n -r "$f" > /home/dalya/snort-2.9.9.0/snort_logs2/"$base.txt"
instead.
Edit: tcpdump -n -r "$base" > ... should have been tcpdump -n -r "$f" > ...; you only want to use $base in the context of creating the new filename, not in the context of reading the existing data.
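Putting the pieces together, the corrected loop would look roughly like this (same paths as in the question):

for f in /home/dalya/snort-2.9.9.0/snort_logs/snort.log.* ; do
    base="$(basename "$f")"   # e.g. snort.log.1485894664
    # read the original capture, write the text dump under the basename
    tcpdump -n -r "$f" > /home/dalya/snort-2.9.9.0/snort_logs2/"$base.txt"
done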

Populate array to ssh in bash

Just some background: I have a file with 1000 servers in it, newline delimited. I have to read them into an array then run about 5 commands over SSH. I have been using heredoc notation but that seems to fail. Currently I get an error saying the host isn't recognized.
IFS='\n' read -d '' -r -a my_arr < file
my_arr=()
for i in "${my_arr[@]}"; do
ssh "$1" bash -s << "EOF"
echo "making back up of some file"
cp /path/to/file /path/to/file.bak
exit
EOF
done
I get output that lists the first server but then all the ones in the array as well. I know that I am missing a redirect for STDIN that causes this.
Thanks for the help.
Do you need an array? What is wrong with:
while read -r host
do
ssh "$host" bash -s << "EOF"
echo "making back up of some file"
cp /path/to/file /path/to/file.bak
EOF
done < file
To be clear -- the main problem here, in the code actually included in your question, is that you're using $1 inside your loop, whereas you specified $i as the variable that contains the entry being iterated over on each invocation of the loop.
That is to say: ssh "$1" needs to instead be ssh "$i". (Note also that, as posted, the my_arr=() line after the read empties the array you just filled.)
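If you do want the array rather than the plain while read loop, here is a minimal sketch using mapfile (bash 4+) in place of the IFS/read -a line; the heredoc is kept so ssh reads its script from it rather than from your terminal:

#!/bin/bash
# read one host per line into the array
mapfile -t my_arr < file
for i in "${my_arr[@]}"; do
    ssh "$i" bash -s << "EOF"
echo "making back up of some file"
cp /path/to/file /path/to/file.bak
EOF
done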

importing data from a CSV in Bash

I have a CSV file that I need to use in a bash script. The CSV is formatted like so.
server1,file.name
server1,otherfile.name
server2,file.name
server3,file.name
I need to be able to pull this information into either an array or in some other way so that I can then filter the information and only pull out data for a single server that I can then pass to another command within the script.
I need it to go something like this.
Import workfile.csv
check hostname | return only lines from workfile.csv that have the hostname as column one and store column 2 as a variable.
find / -xdev -type f -perm -002 | compare to stored info | chmod o-w all files not in listing
I'm stuck using bash because of the environment that I'm working in.
The csv can be too big to add all the filenames to the find parameter list.
You also do not want to call find in a loop for every line in the csv.
Solution:
First, make a complete list of files in a tmp file.
Second, parse the csv and filter those files out of the list.
Third, chmod -w what remains.
The following solution stores the file list in a tmp file.
Make a script that gets the server name as a parameter.
See the comments in the code:
# Before EDIT:
# Hostname by parameter 1
# Check that you have a hostname
if [ $# -ne 1 ]; then
echo "Usage: $0 hostname"
# Exit script, failure
exit 1
fi
hostname=$1
# Edit, get hostname by system call
hostname=$(hostname)
# Or: hostname=$(hostname -s)
# Additional check
if [ ! -f workfile.csv ]; then
echo "inputfile missing"
exit 1
fi
# After edits, ${hostname} is now filled.
find / -xdev -type f -perm -002 > /tmp/allfiles.tmp
# Do not use cat workfile.csv | grep ..., you do not need to call cat
# grep with ^ for beginning of line, add a , for a complete first field
# grep "^${hostname}," workfile.csv
# cut for selecting second field with delimiter ','
# cut -d"," -f2
# while read file => can be improved with xargs but lets start with this.
grep "^${hostname}," workfile.csv | cut -d"," -f2 | while read file; do
# Using sed with # as the delimiter, not /, since you need / in the search string
# (a sed address with a custom delimiter needs a leading backslash: \#...#)
# Variable in sed must be outside the single quotes and in double quotes
# Add $ after the file for end-of-line
# delete the line with the file (\#searchstring#d)
sed -i '\#/'"${file}"'$#d' /tmp/allfiles.tmp
done
echo "Review /tmp/allfiles.tmp before chmodding all these files"
echo "Delete the echo and exit when you are happy"
# Just an exit for testing
exit
# Using < is for avoiding a call to cat
</tmp/allfiles.tmp xargs chmod -w
It might be easier if you can chmod -w all the files and then chmod +w the files in the csv. This is a little different from what you asked, since all files from the csv end up writable after this process; maybe you do not want that.
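A sketch of that variant, reusing the same ${hostname}, workfile.csv and /tmp/allfiles.tmp from the script above:

# 1. take write permission away from every world-writable file found earlier
</tmp/allfiles.tmp xargs chmod -w

# 2. give it back to the files the csv lists for this host
grep "^${hostname}," workfile.csv | cut -d"," -f2 | while read -r file; do
    # print only the paths ending in /<file>, then chmod those
    # (-r makes GNU xargs skip the chmod when nothing matches)
    sed -n '\#/'"${file}"'$#p' /tmp/allfiles.tmp | xargs -r chmod +w
done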

Need bash to separate cat'ed string to separate variables and do a for loop

I need to get a list of files added to a master folder and copy only the new files to the respective backup folders. The paths to each folder have multiple folders, all named by numbers and only 1 level deep.
i.e. /tester/a/100
/tester/a/101 ...
diff -r typically returns "Only in /testing/a/101: 2093_thumb.png" per line in the generated diff.txt file.
NOTE: there is a space after the colon
I need to get the 101 from the path and filename into separate variables and copy them to the backup folders.
I need the lesserfolder var to get 101 without the colon, and the mainfile var to get 2093_thumb.png, from each line of diff.txt, and do the for loop, but I can't seem to get $file to behave. Each time I try to echo the variables I get all the wrong results.
#!/bin/bash
diff_file=/tester/diff.txt
mainfolder=/testing/a
bacfolder= /testing/b
diff -r $mainfolder $bacfolder > $diff_file
LIST=`cat $diff_file`
for file in $LIST
do
maindir=$file[3]
lesserfolder=
mainfile=$file[4]
# cp $mainfolder/$lesserFolder/$mainfile $bacfolder/$lesserFolder/$mainfile
echo $maindir $mainfile $lesserfolder
done
If I could just get the echo statement working the cp would work then too.
I believe this is what you want:
#!/bin/bash
diff_file=/tester/diff.txt
mainfolder=/testing/a
bacfolder=/testing/b
diff -r -q $mainfolder $bacfolder | egrep "^Only in ${mainfolder}" | awk '{print $3,$4}' > $diff_file
cat ${diff_file} | while read foldercolon mainfile ; do
folderpath=${foldercolon%:}
lesserFolder=${folderpath#${mainfolder}/}
cp $mainfolder/$lesserFolder/$mainfile $bacfolder/$lesserFolder/$mainfile
done
But it is much more reliable (and much easier!) to use rsync for this kind of backup. For example:
rsync -a /testing/a/* /testing/b/
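One caveat with the glob form: the shell expands /testing/a/*, so dotfiles directly under /testing/a are skipped, and a very large directory can overflow the argument list. A trailing slash on the source lets rsync walk the directory itself:

# trailing slash means "the contents of a", dotfiles included
rsync -a /testing/a/ /testing/b/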
You could try a while read loop
diff -r $mainfolder $bacfolder | while read dummy dummy dir file; do
echo $dir $file
done
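Note that with that quick loop $dir still carries the trailing colon from diff's "Only in ...:" output. A sketch of the same idea with the colon stripped and non-"Only in" lines filtered out:

diff -r "$mainfolder" "$bacfolder" | grep "^Only in ${mainfolder}" | while read -r dummy dummy dir file; do
    dir=${dir%:}          # drop the trailing colon diff appends to the path
    echo "$dir" "$file"
done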
