Pass stdin into Unix host or dig command

Let's say I have a list of IPs coming into a log that I'm tailing:
1.1.1.1
1.1.1.2
1.1.1.3
I'd like to easily resolve them to host names. I'd like to be able to
tail -f access.log | host -
This fails because host doesn't understand input from stdin in this way. What's the easiest way to do this without having to write a static file or fall back to perl/python/etc.?

Use xargs -l:
tail -f access.log | xargs -l host

You could also use the read builtin:
tail -f access.log | while read line; do host $line; done
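A slightly more defensive variant of the same loop (a sketch, not from the original answer) quotes the variable and skips blank lines so host is never called with an empty argument:
tail -f access.log | while IFS= read -r line; do
[ -n "$line" ] && host "$line"
done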

In the commands below, replace cat with tail -f, etc. if needed.
Using host:
$ cat my_ips | xargs -i host {}
1.1.1.1.in-addr.arpa domain name pointer myhost1.mydomain.com.
1.1.1.2.in-addr.arpa domain name pointer myhost2.mydomain.com.
Using dig:
$ cat my_ips | xargs -i dig -x {} +short
myhost1.mydomain.com.
myhost2.mydomain.com.
Note that the -i option to xargs implies the -L 1 option.
To first get the IPs of one's host, see this answer.

In bash you can do:
stdout | (dig -f <(cat))
Example program:
(
cat <<EOF
asdf.com
ibm.com
microsoft.com
nonexisting.domain
EOF
) | (dig -f <(cat))
This way you only invoke dig once.
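For the reverse lookups in the original question, the same single-invocation idea works with dig's batch mode, since each line of the batch file is treated as a full set of dig query arguments. A sketch, assuming my_ips holds one bare IP per line:
# prepend -x to every line so each entry becomes a reverse lookup
dig +short -f <(sed 's/^/-x /' my_ips)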

Related

bash script split array of strings to designed output

The data in the log file is something like:
"ipHostNumber: 127.0.0.1
ipHostNumber: 127.0.0.2
ipHostNumber: 127.0.0.3"
This is my code snippet:
readarray -t iparray < "$destlog"
unset iparray[0]
for ip in "${iparray[@]}"
do
IFS=$'\n' read -r -a iparraychanged <<< "${iparray[@]}"
done
What I want is to transfer the IPs to another array, then read every line from that array and ping it.
UPDATE: I've realized what I actually want: to copy one array to another but without a certain string, in this case cutting "ipHostNumber: " so that only the IPs remain.
Thanks in advance, if there's anything missing please let me know.
Why do you need arrays at all? Just read the input and do the work.
while IFS=' ' read -r _ ip; do
ping -c1 "$ip"
done < "$destlog"
See https://mywiki.wooledge.org/BashFAQ/001.
Another way is to use xargs and filtering:
awk '{print $2}' "$destlog" | xargs -P0 -n1 ping -c1
I prefer KamilCuk's xargs solution, but if you just wanted the array because you plan to reuse it -
$: readarray -t ip < <( sed '1d; s/^.* //' file )
Now it's loaded.
$: declare -p ip
declare -a ip=([0]="127.0.0.2" [1]="127.0.0.3")
$: for addr in "${ip[@]}"; do ping -c 1 "$addr"; done

In Bash script trying to pass local variable to SSH and then execute the other commands

#!/bin/bash
count2=1
declare -a input
input=( "$@" )
echo " "
echo " Hostname passed by user is " ${input[0]}
HOST="${input[0]}"
sshpass -p '<pass>' ssh -o StrictHostKeyChecking=no user@$HOST /bin/bash << ENDSSH
echo " Connected "
echo $count2
echo $input
pwd
echo $count2: ${input[$count2]}
nic=${input[$count2]}
echo $nic
echo $(ethtool "${nic}" |& grep 'Link' | awk '{print $3}')
ENDSSH
So I actually want to pass the variables 'count2' and 'input' to the remote SSH session and execute the other commands there.
But unfortunately they are not getting passed; nothing is echoed after the SSH.
I need help with this!
I have sshpass installed on the server.
code output:
[user@l07 ~]$ ./check.sh <hostname> eno6
Hostname passed by user is <hostname>
Connected
After SSH it only echoes "Connected". I'm not sure why $count2 and $input are not echoed.
I tried with a backslash ('\$count2') but that is also not working. I tried all possible combinations, even with ENDSSH quoted and unquoted. Please help.
Any help will be really appreciated!!
You basically want to supply to your remote bash a HERE-document to be executed. This is tricky, since you need to "compose" the full text of this document before you can supply it to ssh. I would therefore separate the task into two parts:
Creating the HERE-document
Running it on ssh
This makes it easy, for debugging, to print the document between steps 1 and 2 and visually inspect its contents for correctness. Don't forget that once this code runs on the remote host, it can't access any of your variables anymore, unless you have "promoted" them to the remote side using the means provided by ssh.
Hence you could start like this:
# Create the parameters you want to use
nic=${input[$count2]}
# Create a variable holding the content of the remote script,
# which interpolates your parameters
read -r -d '' remote_script << ENDSSH
echo "Connected to host \$(hostname)"
echo "Running bash version: \$BASH_VERSION"
....
ethtool "$nic" |& grep Link | awk '{ print \$3 }'
ENDSSH
# Print your script for verification
echo "$remote_script"
# Submit it to the host
sshpass -p '<pass>' ssh -o StrictHostKeyChecking=no "user@$HOST" /bin/bash <<<"$remote_script"
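An alternative sketch (not from the original answer, and assuming the interface name contains no whitespace): pass the value as a positional parameter to the remote bash instead of interpolating it into the here-document, so nothing inside the script needs escaping:
nic=${input[$count2]}
# the quoted ENDSSH delimiter prevents any local expansion; $1 is expanded remotely
sshpass -p '<pass>' ssh -o StrictHostKeyChecking=no "user@$HOST" 'bash -s' -- "$nic" <<'ENDSSH'
nic=$1
echo "Connected to $(hostname), checking $nic"
ethtool "$nic" |& grep Link | awk '{print $3}'
ENDSSH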
You have to add escapes(\) here:
...
echo \$nic
...
echo \$(ethtool "\${nic}" |& grep 'Link' | awk '{print \$3}')
...
But why echo this? Try it without echo:
...
ethtool "\${nic}" |& grep -i 'Link' | awk '{print \$3}'
...
#!/bin/bash
count2=1
declare -a input
input=( "$@" )
echo " Hostname passed by user is " "${input[0]}"
HOST="${input[0]}"
while [ $# -gt $count2 ]
do
sed -i 's/VALUE/'"${input[$count2]}"'/g' ./check.sh
sshpass -p '<pass>' scp ./check.sh user@"$HOST":/home/user/check.sh
sshpass -p '<pass>' ssh -o StrictHostKeyChecking=no user@"$HOST" "sh /home/user/check.sh && rm -rf /home/user/check.sh"
sed -i 's/'"${input[$count2]}"'/VALUE/g' ./check.sh
((count2++))
done
Found another solution to this issue; it is working for me now!
I wrote the entire logic that needs to be executed remotely in a check.sh file. The script substitutes the user input into check.sh, copies the file to the remote server via scp, executes it remotely, and removes it from the remote server after successful execution. After the ssh, sed changes the user input back to its original placeholder value on the local server.
This makes it a dynamic script that works for multiple servers.
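A hedged alternative to the sed-in-place substitution (a sketch that assumes check.sh is written to read the NIC name from its first argument, which is not how the original script works): pass the value as an argument instead of rewriting the file, so no sed restore step is needed:
# check.sh would use "$1" instead of the literal VALUE placeholder
sshpass -p '<pass>' scp ./check.sh user@"$HOST":/home/user/check.sh
sshpass -p '<pass>' ssh -o StrictHostKeyChecking=no user@"$HOST" \
"bash /home/user/check.sh '${input[$count2]}' && rm -f /home/user/check.sh"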

Substitute user mid-Bash-script & continue running commands (Mac OSX)

I'm building two scripts which combined will fully uninstall a program (Microsoft Lync) on Mac OS X. I need to be able to swap from an account with root access (this account initially executes the first script) to the user who is currently logged in.
This is necessary because the second script needs to be executed not only by the logged-in user, but from said user's shell. The two scripts are named Uninstall1.sh and Uninstall2.sh in this example.
Uninstall1.sh (executed by root user):
#!/bin/bash
#commands ran by root user
function rootCMDs () {
pkill Lync
rm -rf /Applications/Microsoft\ Lync.app
killall cfprefsd
swapUser
}
function swapUser () {
currentUser=$(who | grep console | grep -v _mbsetupuser | grep -v root | awk '{print $1}' | head -n 1)
cp /<directory>/Uninstall2.sh${currentUser}
su -l ${currentUser} -c "<directory>/{currentUser}/testScript.sh";
<directory> actually declared in the scripts, but for the sake of privacy I've excluded it.
In the above script, I run some basic commands as the root user to remove the app and kill cfprefsd so the machine doesn't have to be rebooted. I then call the swapUser function, which dynamically identifies the user account currently signed in and assigns it to the variable currentUser (in this case, within our environment, it's safe to assume only one user is logged into the computer at a time). I'm not sure whether or not I'll need the cp directory/Uninstall2.sh portion yet, but this is intended to solve a different problem.
The main problem is getting the script to properly handle the su command. I use the -l flag to simulate a user login, which is necessary because it not only substitutes to the logged-in user account but also launches a new shell as that user. I need to use -l because OS X doesn't allow modifying another user's keychain from an admin account (the admin account in question has root access, but isn't root, nor does it switch to root). -c is intended to execute the copied script, which is as follows:
Uninstall2.sh (needs to be executed by the locally logged-in user):
#!/bin/bash
function rmFiles () {
# rm -rf commands
# rm -rf commands
certHandler1
}
function certHandler1 () {
myCert=($(security dump-keychain | grep <string> | grep alis | sed -e 's/"alis"<blob>="//' | sed -e 's/"//'))
cLen=${#myCert[@]} # Count the amount of items in the array; there are usually duplicates
for ((i = 0; i < cLen; i++)); do
security delete-certificate -c "${myCert[$i]}"
done
certHandler2
}
function certHandler2 () {
# Derive the name of, and delete Keychain items related to Microsoft Lync.
myAccount=$(security dump-keychain | grep KeyContainer | grep acct | sed -e 's/"acct"<blob>="//' | sed -e 's/"//')
security delete-generic-password -a ${myAccount}
lyncPW=$(security dump-keychain | grep Microsoft\ Lync | sed -e 's/<blob>="//' | awk '{print $2, $3}' | sed -e 's/"//')
security delete-generic-password -l "${lyncPW}"
}
rmFiles
In the above script, rmFiles kicks the script off by removing some files and directories from the user's ~/Library directory. This works without a problem, assuming the su from Uninstall1.sh properly executes this second script using the local user's shell.
I then use security dump-keychain to dump the local user's keychain, find a specific certificate, and assign all results to the myCert array (because there may be duplicates of this item in a user's keychain), with cLen holding the count. Each item in the array is then deleted, after which a few more keychain items are dynamically found and deleted.
What I've been finding is that the first script will either properly su to the logged-in user it finds, at which point the second script doesn't run at all, or the second script is run as the root user and thus doesn't properly delete the keychain items of the logged-in user it's supposed to su to.
Sorry for the long post, thanks for reading, and I look forward to some light shed on this situation!
Revision
I managed to find a way to achieve all that I am trying to do in a single bash script, rather than two. I did this by having the main script create another bash script in /tmp, then executing that as the local user. I'll provide it below to help anybody else who may need this functionality:
Credit to the following source for the code on how to create another bash script within a bash script:
http://tldp.org/LDP/abs/html/here-docs.html - Example 19.8
#!/bin/bash
# Declare the desired directory and file name of the script to be created. I chose /tmp because I want this file to be removed upon next start-up.
OUTFILE=/tmp/fileName.sh
(
cat <<'EOF'
#!/bin/bash
# Remove user-local Microsoft Lync files and/or directories
function rmFiles () {
rm -rf ~/Library/Caches/com.microsoft.Lync
rm -f ~/Library/Preferences/com.microsoft.Lync.plist
rm -rf ~/Library/Preferences/ByHost/MicrosoftLync*
rm -rf ~/Library/Logs/Microsoft-Lync*
rm -rf ~/Documents/Microsoft\ User\ Data/Microsoft\ Lync\ Data
rm -rf ~/Documents/Microsoft\ User\ Data/Microsoft\ Lync\ History
rm -f ~/Library/Keychains/OC_KeyContainer*
certHandler1
}
# Need to build in a loop that determines the count of the output to determine whether or not we need to build an array or use a simple variable.
# Some people have more than one 'PRIVATE_STRING' certificate items in their keychain - this will loop through and delete each one. This may or may not be necessary for other applications of this script.
function certHandler1 () {
# Replace 'PRIVATE_STRING' with whatever you're searching for in Keychain
myCert=($(security dump-keychain | grep PRIVATE_STRING | grep alis | sed -e 's/"alis"<blob>="//' | sed -e 's/"//'))
cLen=${#myCert[@]} # Count the amount of items in the array
for ((i = 0; i < cLen; i++)); do
security delete-certificate -c "${myCert[$i]}"
done
certHandler2
}
function certHandler2 () {
# Derive the name of, then delete Keychain items related to Microsoft Lync.
myAccount=$(security dump-keychain | grep KeyContainer | grep acct | sed -e 's/"acct"<blob>="//' | sed -e 's/"//')
security delete-generic-password -a ${myAccount}
lyncPW=$(security dump-keychain | grep Microsoft\ Lync | sed -e 's/<blob>="//' | awk '{print $2, $3}' | sed -e 's/"//')
security delete-generic-password -l "${lyncPW}"
}
rmFiles
exit 0
EOF
) > $OUTFILE
# -----------------------------------------------------------
# Commands to be ran as root
function rootCMDs () {
pkill Lync
rm -rf /Applications/Microsoft\ Lync.app
killall cfprefsd # killing cfprefsd mitigates the necessity to reboot the machine to clear cache.
chainScript
}
function chainScript () {
if [ -f "$OUTFILE" ]
then
# Make the file in /tmp executable. This is necessary for /tmp as a non-root user cannot access files in this directory.
chmod 755 $OUTFILE
# Dynamically identify the user currently logged in. This may need some tweaking if multiple User Accounts are logged into the same computer at once.
currentUser=$(who | grep console | grep -v _mbsetupuser | grep -v root | awk '{print $1}' | head -n 1);
su -l ${currentUser} -c "bash $OUTFILE"
else
echo "Problem in creating file: \"$OUTFILE\""
fi
}
# This method also works for generating
#+ C programs, Perl programs, Python programs, Makefiles,
#+ and the like.
# Commence the domino effect.
rootCMDs
exit 0
# -----------------------------------------------------------
Cheers!

Move files containing X but not containing Y

To manage my backup sync folder, I am trying to come up with a command that would move files beginning with string1* but NOT ending with *string2 from /folder1 to /folder2
What would a command containing such two opposite conditions (HAS and HAS NOT) look like?
#!/bin/bash
for i in `ls -d /folder1/string1* | grep -v 'string2$'`
do
ls -ld "$i" | grep '^-' > /dev/null # Test that we have a regular file and not a directory etc.
if [ $? == 0 ]; then
mv "$i" /folder2
fi
done
Try something like
find /folder1 -mindepth 1 -maxdepth 1 -type f \
-name 'string1*' \! -name '*string2' -exec cp -iv -t /folder2 {} +
Note: with the -exec ... + form, {} has to come right before the +, hence the GNU cp -t /folder2 form. If you have an older find or a cp without -t, you can use -exec cp -iv {} /folder2 \; instead.
To me this is another case for (what I shall denote) the read while pattern.
cd /folder1
ls string1* | grep -v 'string2$' | while read -r f; do mv "$f" /folder2; done
The other answers are good alternatives, and in particular, find can do a lot. But I always get a headache using find, and never quite use it enough to do so without the manpage open.
Also, starting with ls or a simple find to get a list of files, and then using any or all of sed, awk, grep or whatever you have to hand, to adjust/trim/extend this list, and then bunging it into a loop, is a crude(ish) but pretty powerful technique.
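If the file names may contain spaces or other awkward characters, a NUL-delimited variant of the same list-filter-loop idea avoids parsing ls altogether (a sketch, assuming GNU find and bash):
# list matching regular files NUL-separated, then move each one
find /folder1 -maxdepth 1 -type f -name 'string1*' ! -name '*string2' -print0 |
while IFS= read -r -d '' f; do
mv -- "$f" /folder2
done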

Using a variable to pass grep pattern in bash

I am struggling with passing several grep patterns that are contained within a variable. This is the code I have:
#!/bin/bash
GREP="$(which grep)"
GREP_MY_OPTIONS="-c"
for i in {-2..2}
do
GREP_MY_OPTIONS+=" -e "$(date --date="$i day" +'%Y-%m-%d')
done
echo $GREP_MY_OPTIONS
IFS=$'\n'
MYARRAY=( $(${GREP} ${GREP_MY_OPTIONS} "/home/user/this path has spaces in it/"*"/abc.xyz" | ${GREP} -v :0$ ) )
This is what I wanted it to do:
determine/define where grep is
assign a variable (GREP_MY_OPTIONS) holding parameters I will pass to grep
assign several patterns to GREP_MY_OPTIONS
using grep and the patterns I have stored in $GREP_MY_OPTIONS search several files within a path that contains spaces and hold them in an array
When I use "echo $GREP_MY_OPTIONS" it is generating what I expected but when I run the script it fails with an error of:
/bin/grep: invalid option -- ' '
What am I doing wrong? If the path does not have spaces in it everything seems to work fine so I think it is something to do with the IFS but I'm not sure.
If you want to grep some content in a set of paths, you can do the following:
find <directory> -type f -print0 |
grep "/home/user/this path has spaces in it/\"*\"/abc.xyz" |
xargs -I {} grep <your_options> -f <patterns> {}
So that <patterns> is a file containing the patterns you want to search for in each file from directory.
Considering your answer, this shall do what you want:
find "/path with spaces/" -type f | xargs -I {} grep -H -c -e 2013-01-17 {}
From man grep:
-H, --with-filename
Print the file name for each match. This is the default when
there is more than one file to search.
Since you want to insert the elements into an array, you can do the following:
IFS=$'\n'; array=( $(find "/path with spaces/" -type f -print0 |
xargs -0 -I {} grep -H -c -e 2013-01-17 "{}") )
And then use the values as:
echo ${array[0]}
echo ${array[1]}
echo ${array[...]}
When using variables to pass the parameters, use eval to evaluate the entire line. Do the following:
parameters="-H -c"
eval "grep ${parameters} file"
If you build the GREP_MY_OPTIONS as an array instead of as a simple string, you can get the original outline script to work sensibly:
#!/bin/bash
path="/home/user/this path has spaces in it"
GREP="$(which grep)"
GREP_MY_OPTIONS=("-c")
j=1
for i in {-2..2}
do
GREP_MY_OPTIONS[$((j++))]="-e"
GREP_MY_OPTIONS[$((j++))]=$(date --date="$i day" +'%Y-%m-%d')
done
IFS=$'\n'
MYARRAY=( $(${GREP} "${GREP_MY_OPTIONS[@]}" "$path/"*"/abc.xyz" | ${GREP} -v :0$ ) )
I'm not clear why you use GREP="$(which grep)" since you will execute the same grep as if you wrote grep directly — unless, I suppose, you have some alias for grep (which is then the problem; don't alias grep).
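A quick way to see why the array form preserves word boundaries (a sketch, not part of the original answer) is to print each element on its own line before running grep:
printf '<%s>\n' "${GREP_MY_OPTIONS[@]}"
# prints <-c>, then alternating <-e> and <YYYY-MM-DD> elements, one per line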
You can do one thing without making things complex:
First change directory in your script, like the following:
cd /home/user/this\ path\ has\ spaces\ in\ it/
$ pwd
/home/user/this path has spaces in it
or
$ cd "/home/user/this path has spaces in it/"
$ pwd
/home/user/this path has spaces in it
Then do whatever you want in your script.
$(${GREP} ${GREP_MY_OPTIONS} */abc.xyz)
EDIT :
[sgeorge@sgeorge-ld stack1]$ ls -l
total 4
drwxr-xr-x 2 sgeorge eng 4096 Jan 19 06:05 test tesd
[sgeorge@sgeorge-ld stack1]$ cat test\ tesd/file
SUKU
[sgeorge@sgeorge-ld stack1]$ grep SUKU */file
SUKU
EDIT :
[sgeorge@sgeorge-ld stack1]$ find */* -print | xargs -I {} grep SUKU {}
SUKU
