I'm the sysadmin for a couple of web servers deployed with the cPanel hosting panel, and I'm trying to finish up a backup script. There are two commands bundled with cPanel that will be used in this script. These commands are:
1. whmapi1 modifyacct user=USERNAME BACKUP=[01]
This command takes a boolean: it either enables or disables backups for a specific user.
2. /usr/local/cpanel/bin/backup --force
Once backups are enabled for one or more users, this command starts the backup process on the server.
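For example, enabling backups for a single account and then kicking off the run would look like this (alice is a placeholder username):
# 'alice' is a placeholder; substitute a real cPanel account name
whmapi1 modifyacct user=alice BACKUP=1
/usr/local/cpanel/bin/backup --force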
So here is my script logic & the script.
#!/bin/bash
Arrays
L=($( comm -23 <(du -h --max-depth=1 /home 2>/dev/null | grep G | awk -F"/" '{print $NF}' | sort | egrep -vw '(home|virtfs)') <(ls -al /var/cpanel/suspended/ | grep -v 'lock' | sort) ))
The above array contains all the accounts whose home directories have exceeded the 1GB limit.
S=($(comm -23 <(du -h --max-depth=1 /home 2>/dev/null | egrep -v '(!G|.cp|cP|clamav)' | awk -F"/" '{print $NF}' | sort | egrep -vw '(home|virtfs)') <(ls -al /var/cpanel/suspended/ | grep -v 'lock' | sort) ))
The above array contains all the accounts whose home directories are under the 1GB limit.
whmapi1 modifyacct user=${L[@]} BACKUP=0 && whmapi1 modifyacct user=${S[@]} BACKUP=0
The above command disables backups for all users, so we start from scratch.
whmapi1 modifyacct user=${S[@]} BACKUP=1
This command enables backups for all accounts whose home dirs are less than 1 GB.
/usr/local/cpanel/bin/backup --force
This command starts the backup process for all enabled users.
The logic is that I want to back up the small accounts first, and then, when that's finished, run it for the larger accounts.
PROBLEM: all commands execute successfully when run directly in the terminal, but not when run via the script. The problem occurs when enabling and disabling accounts:
it either disables all accounts or enables all accounts, not the partial set intended by the logic of the script.
Can anyone point out where and what I'm missing?
Thanks in advance !!
${L[@]} expands to user1 user2 user3 ..., so user=${L[@]} expands to user=user1 user2 user3 .... If you want to run the command for each user, you need to loop over the users:
du_buff=$(du -h --max-depth=1 /home 2>/dev/null)
lock_buff=$(ls -al /var/cpanel/suspended/ | grep -v 'lock' | sort)
L=($(comm -23 <(echo "$du_buff" | grep G | awk -F"/" '{print $NF}' | sort | egrep -vw '(home|virtfs)') <(echo "$lock_buff") ))
S=($(comm -23 <(echo "$du_buff" | egrep -v '(!G|.cp|cP|clamav)' | awk -F"/" '{print $NF}' | sort | egrep -vw '(home|virtfs)') <(echo "$lock_buff") ))
# for every user in L and S
for user in "${L[#]}" "${S[#]}"; do
whmapi1 modifyacct user=$user BACKUP=0
done
# for every user in S
for user in "${S[#]}"; do
whmapi1 modifyacct user=$user BACKUP=1
done
/usr/local/cpanel/bin/backup --force
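If you also want the second phase of your stated logic (the larger accounts once the small ones are done), a minimal continuation could be appended after the command above. This is only a sketch, and it assumes /usr/local/cpanel/bin/backup --force blocks until the backup run completes:
# Sketch: after the small-account run finishes, flip the flags and back up the >1GB accounts
for user in "${S[@]}"; do
whmapi1 modifyacct user=$user BACKUP=0
done
for user in "${L[@]}"; do
whmapi1 modifyacct user=$user BACKUP=1
done
/usr/local/cpanel/bin/backup --force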
Related
I have an assignment from university where I need to read a bash command from a pipe and exec that command. I am thinking of using the execv* family, because I can create a buffer by splitting on spaces as the delimiter. The problem is that I can't use the STL and I can't figure out how this buffer should be created, because its size is variable. Any suggestions? Thanks in advance.
If you only support single-pipe commands like the one below, a buffer size of 30 is enough, even more than enough:
ls -l | wc -l
But if you want to support multi-pipe commands, you will get commands like the one below, and your buffer size must be large enough for it:
cat index.html | grep ".com" | grep ".splunk." | cut -d "<" -f2 | cut -d ">" -f1 | grep ":" | cut -d":" -f2 | cut -d "/" -f3 | cut -d " " -f1 | grep ".splunk." | cut -d '"' -f1 | sort | uniq
Your buffer size should be 100. I chose 100 for my operating-systems project, which was like yours: we implemented a terminal that supports almost everything, multiple pipes, redirection, etc.
In addition, I used execvp like below:
execvp(args[0], &args[0]);
I'm building two scripts which, combined, will fully uninstall a program (Microsoft Lync) on Mac OS X. I need to be able to swap from an account with root access (this account initially executes the first script) to the user who is currently logged in.
This is necessary because the second script needs to be executed not only by the logged-in user, but from said user's shell. The two scripts are named Uninstall1.sh and Uninstall2.sh in this example.
Uninstall1.sh (executed by root user):
#!/bin/bash
#commands ran by root user
function rootCMDs () {
pkill Lync
rm -rf /Applications/Microsoft\ Lync.app
killall cfprefsd
swapUser
}
function swapUser () {
currentUser=$(who | grep console | grep -v _mbsetupuser | grep -v root | awk '{print $1}' | head -n 1)
cp /<directory>/Uninstall2.sh${currentUser}
su -l ${currentUser} -c "<directory>/{currentUser}/testScript.sh";
}
<directory> is actually declared in the scripts, but for the sake of privacy I've excluded it.
In the above script, I run some basic commands as the root user to remove the app and kill cfprefsd, to prevent having to reboot the machine. I then call the swapUser function, which dynamically identifies the user account currently signed in and assigns it to the variable currentUser (in this case, within our environment, it's safe to assume only one user is logged into the computer at a time). I'm not sure whether or not I'll need the cp directory/Uninstall2.sh portion yet, but this is intended to solve a different problem.
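In other words, the hand-off swapUser is meant to perform boils down to roughly this (just a sketch; /path/to/Uninstall2.sh stands in for the excluded <directory> path, and note that currentUser needs the leading $ to be expanded):
currentUser=$(who | grep console | grep -v _mbsetupuser | grep -v root | awk '{print $1}' | head -n 1)
# Launch a login shell as the console user and run the second script inside it
su -l "${currentUser}" -c "/bin/bash /path/to/Uninstall2.sh"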
The main problem is getting the script to properly handle the su command. I use the -l flag to simulate a user login, which is necessary because it not only substitutes to the logged-in user account, but also launches a new shell as that user. I need to use -l because OS X doesn't allow modifying another user's keychain from an admin account (the admin account in question has root access, but is not root, nor does it switch to root). -c is intended to execute the copied script, which is as follows:
Uninstall2.sh (needs to be executed by the locally logged-in user):
#!/bin/bash
function rmFiles () {
# rm -rf commands
# rm -rf commands
certHandler1
}
function certHandler1 () {
myCert=($(security dump-keychain | grep <string> | grep alis | sed -e 's/"alis"<blob>="//' | sed -e 's/"//'))
cLen=${#myCert[@]} # Count the amount of items in the array; there are usually duplicates
for ((i = 0; i < ${cLen}; i++)); do
security delete-certificate -c "${myCert[$i]}"
done
certHandler2
}
function certHandler2 () {
# Derive the name of, and delete Keychain items related to Microsoft Lync.
myAccount=$(security dump-keychain | grep KeyContainer | grep acct | sed -e 's/"acct"<blob>="//' | sed -e 's/"//')
security delete-generic-password -a ${myAccount}
lyncPW=$(security dump-keychain | grep Microsoft\ Lync | sed -e 's/<blob>="//' | awk '{print $2, $3}' | sed -e 's/"//')
security delete-generic-password -l "${lyncPW}"
}
rmFiles
In the above script, rmFiles kicks the script off by removing some files and directories from the user's ~/Library directory. This works without a problem, assuming the su from Uninstall1.sh properly executes this second script using the local user's shell.
I then use security dump-keychain to dump the local user's keychain, find a specific certificate, and assign all results to the myCert array (because there may be duplicates of this item in a user's keychain). Each item in the array is then deleted, after which a few more keychain items are dynamically found and deleted.
What I've been finding is that the first script will either properly su to the logged-in user it finds, at which point the second script doesn't run at all, or the second script is run as the root user and thus doesn't properly delete the keychain items of the logged-in user it's supposed to su to.
Sorry for the long post, thanks for reading, and I look forward to some light shed on this situation!
Revision
I managed to find a way to achieve everything I'm trying to do in a single bash script rather than two. I did this by having the main script create another bash script in /tmp and then execute it as the local user. I'll provide it below to help anybody else who may need this functionality:
Credit to the following source for the code on how to create another bash script within a bash script:
http://tldp.org/LDP/abs/html/here-docs.html - Example 19.8
#!/bin/bash
# Declare the desired directory and file name of the script to be created. I chose /tmp because I want this file to be removed upon next start-up.
OUTFILE=/tmp/fileName.sh
(
cat <<'EOF'
#!/bin/bash
# Remove user-local Microsoft Lync files and/or directories
function rmFiles () {
rm -rf ~/Library/Caches/com.microsoft.Lync
rm -f ~/Library/Preferences/com.microsoft.Lync.plist
rm -rf ~/Library/Preferences/ByHost/MicrosoftLync*
rm -rf ~/Library/Logs/Microsoft-Lync*
rm -rf ~/Documents/Microsoft\ User\ Data/Microsoft\ Lync\ Data
rm -rf ~/Documents/Microsoft\ User\ Data/Microsoft\ Lync\ History
rm -f ~/Library/Keychains/OC_KeyContainer*
certHandler1
}
# Need to build in a loop that determines the count of the output to determine whether or not we need to build an array or use a simple variable.
# Some people have more than one 'PRIVATE_STRING' certificate items in their keychain - this will loop through and delete each one. This may or may not be necessary for other applications of this script.
function certHandler1 () {
# Replace 'PRIVATE_STRING' with whatever you're searching for in Keychain
myCert=($(security dump-keychain | grep PRIVATE_STRING | grep alis | sed -e 's/"alis"<blob>="//' | sed -e 's/"//'))
cLen=${#myCert[@]} # Count the amount of items in the array
for ((i = 0; i < ${cLen}; i++)); do
security delete-certificate -c "${myCert[$i]}"
done
certHandler2
}
function certHandler2 () {
# Derive the name of, then delete Keychain items related to Microsoft Lync.
myAccount=$(security dump-keychain | grep KeyContainer | grep acct | sed -e 's/"acct"<blob>="//' | sed -e 's/"//')
security delete-generic-password -a ${myAccount}
lyncPW=$(security dump-keychain | grep Microsoft\ Lync | sed -e 's/<blob>="//' | awk '{print $2, $3}' | sed -e 's/"//')
security delete-generic-password -l "${lyncPW}"
}
rmFiles
exit 0
EOF
) > $OUTFILE
# -----------------------------------------------------------
# Commands to be ran as root
function rootCMDs () {
pkill Lync
rm -rf /Applications/Microsoft\ Lync.app
killall cfprefsd # killing cfprefsd mitigates the necessity to reboot the machine to clear cache.
chainScript
}
function chainScript () {
if [ -f "$OUTFILE" ]
then
# Make the file in /tmp executable so the non-root user can run it.
chmod 755 $OUTFILE
# Dynamically identify the user currently logged in. This may need some tweaking if multiple User Accounts are logged into the same computer at once.
currentUser=$(who | grep console | grep -v _mbsetupuser | grep -v root | awk '{print $1}' | head -n 1);
su -l ${currentUser} -c "bash /tmp/UninstallLync2.sh"
else
echo "Problem in creating file: \"$OUTFILE\""
fi
}
# This method also works for generating
#+ C programs, Perl programs, Python programs, Makefiles,
#+ and the like.
# Commence the domino effect.
rootCMDs
exit 0
# -----------------------------------------------------------
Cheers!
I'm working on a command line that I execute with plink from PowerShell (PowerCLI) on ESXi.
The idea is to list VMDK files (with exceptions), with their symlink (because their real folder names are IDs) and first subfolder (that helps me find the VMDK file, as it may reflect the VM folder). The output is in CSV format so I can easily use it in PowerShell. This is how far I've come:
find /vmfs/volumes -type l -exec find {} -name "*.vmdk" -follow \; | awk '{n=split($0,a,"/"); print a[4]";"a[5]";"a[n] }' | grep -v ".*-flat.vmdk$" | grep -v ".*delta.vmdk$" | grep -v ".*-ctk.vmdk$"
This is good for me, but I'd like to add the file size as the last field (VMDKFileName;Size). The size format does not really matter; I'll be able to manipulate it within my PS script.
I don't know if I'm on the right track to fulfill my needs.
Do not hesitate to ask for more information.
P.S.: a one-liner command would be great; as I'm using plink, that's easier for me to use.
TIA
OK, the answer is here (lots of headaches)!
find $(find /vmfs/volumes -type l -maxdepth 1) -name "*.vmdk" -follow -exec ls -lHd {} \; | awk '{n=split($0,a,"/"); print a[4]";"a[5]";"a[n]";"$5}' | grep -v ".*-flat.vmdk" | grep -v ".*delta.vmdk" | grep -v ".*-ctk.vmdk"
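For readability only, the same command can be split at the pipes: the inner find lists the datastore symlinks, the outer find follows them to each .vmdk, ls -lHd supplies the size as its fifth field, awk prints the 4th and 5th slash-separated path components (which should be the datastore symlink and the first subfolder) plus the file name and size, and the greps drop the -flat, delta and -ctk descriptors:
find $(find /vmfs/volumes -type l -maxdepth 1) -name "*.vmdk" -follow -exec ls -lHd {} \; |
awk '{n=split($0,a,"/"); print a[4]";"a[5]";"a[n]";"$5}' |
grep -v ".*-flat.vmdk" |
grep -v ".*delta.vmdk" |
grep -v ".*-ctk.vmdk"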
In bash, how can I count the repetitions of an IP within a log through a specific search?
For example:
#!/bin/bash
# Log line: [Sat Jul 04 21:55:35 2015] [error] [client 192.168.1.39] Access denied with status code 403.
grep "status\scode\s403" /var/log/httpd/custom_error_log | while read line ; do
pattern='^\[.*?\]\s\[error\]\s\[client\s(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\].*?403'
[[ $line =~ $pattern ]]
res_remote_addr="${BASH_REMATCH[1]}.${BASH_REMATCH[2]}.${BASH_REMATCH[3]}.${BASH_REMATCH[4]}"
echo "Remote Addr: $res_remote_addr"
done
In the end I need to know how many times each IP triggered a 403 message, if possible sorted from highest to lowest.
Example output:
200.200.200.200 50 times.
200.200.200.201 40 times.
200.200.200.202 30 times.
... etc ...
I need this to create an HTML report from a monthly Apache log of a series of events (something like AWStats).
There are better ways. The following is my proposal, which should be more readable and easier to maintain:
grep -P -o '\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}' log_file | sort | uniq -c | sort -k1,1 -r -n
The output will be in the form:
count1 ip1
count2 ip2
Update: to filter only 403s:
grep -P -o '\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}(?=.*403)' log_file | sort | uniq -c | sort -k1,1 -r -n
Notice that a lookahead suffices.
If the log file is in the format mentioned in the question, the best approach is to use awk to filter on the needed status code and output only the IP. Then use the uniq command to count each occurrence:
awk '/code 403/ {print $8}' error.log | sort | uniq -c |sort -n
In awk, we filter by the regexp /code 403/ and then, for matching lines, print the 8th field (fields are separated by whitespace), which is the IP.
Then we need to sort the output so that identical IPs are adjacent; this is a requirement of the uniq program.
uniq -c prints each unique line from the input only once, preceded by the number of occurrences. Finally we sort this list numerically to get the IPs ordered by count.
Sample output (first is the number of occurrences, second is the IP):
1 1.1.1.1
10 2.2.2.2
12 3.3.3.3
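If the output needs to match the "IP N times." wording from the question, a small awk can be appended to either pipeline above to swap the two columns that uniq -c prints (a sketch, using the same placeholder log_file name as above):
grep -P -o '\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}(?=.*403)' log_file | sort | uniq -c | sort -rn | awk '{print $2, $1, "times."}'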
After using bash -x to debug this script, it appears that a problem occurs near $InvalidText, where grep treats the contents of $InvalidText as separate file names.
The only solution I can think of is to direct the output of $InvalidText to a file and then grep that file. Are there any other alternatives?
# find IP address of Invalid user and give the fake name they tried. Loop for all IP addresses
InvalidText=$(grep "Invalid user" syslog.log)
IPInvalidUser=$(grep "Invalid user" syslog.log | awk '{print $NF}' | uniq)
InvalidUserarray=( $IPInvalidUser )
for FAKEHOST in "${InvalidUserarray[#]}"
do
fakeinfo=$(grep ${FAKEHOST} $InvalidText | cut -d: -f5 | awk '{print $3}')
echo -e "Invalid user attempts:\n"
echo -e "$FAKEHOST $fakeinfo"
done
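One alternative to writing $InvalidText out to a file is to feed it to grep on stdin, for example with a here-string, so its contents are searched as text rather than treated as file names. A minimal sketch of the changed line:
# Search the captured log text itself instead of passing it as file names
fakeinfo=$(grep "${FAKEHOST}" <<< "$InvalidText" | cut -d: -f5 | awk '{print $3}')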