Script to change ownership of folders in Plesk vhosts - batch-file

I'm looking for some help in creating a shell script in Linux to perform a batch ownership change for certain folders in a Plesk environment where the owner:group is apache:apache.
I want to change the owner:group to :psacln.
The FTP user can be ascertained by looking at the owner of the httpdocs folder.
^this is the section I'm having trouble with.
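For illustration, the owner of one site's httpdocs can be read with something like this (GNU stat assumed; the domain is just a placeholder):
stat -c '%U' /var/www/vhosts/example.com/httpdocs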
If I was to set all owners to be the same, I could do a one-line:
find /var/www/vhosts/*/httpdocs -user apache -group apache -exec chown user:psacln {} \;
Can anyone help plug the user in to this command?
Thanks

Figured it out... for those who may want to use it in the future:
for dir in /var/www/vhosts/*
do
    dir=${dir%/}
    owner=$(stat -c '%U' "$dir/httpdocs")   # the FTP user owns httpdocs
    find "$dir/httpdocs" -user apache -group apache -exec chown "$owner":psacln {} \;
done

Since stat doesn't behave the same way on all Unix flavours, I thought I would share my script to set the ownership of all websites to the correct owners in Plesk (tested on Plesk 11, 11.5, 12 and 12.5):
cd /var/www/vhosts/
for f in *; do
    if [[ -d "$f" && ! -L "$f" ]]; then
        # Get necessary variables
        FOLDERROOT="/var/www/vhosts/"
        FOLDERPATH="/var/www/vhosts/$f/"
        FTPUSER="$(ls -ld "/var/www/vhosts/$f/" | awk '{print $3}')"
        # Set correct rights for the current website, if the website has hosting!
        cd "$FOLDERPATH"
        if [ -d "$FOLDERPATH/httpdocs" ]; then
            chown -R "$FTPUSER":psacln httpdocs
            chmod -R g+w httpdocs
            find httpdocs -type d -exec chmod g+s {} \;
            # Print success message
            echo "Done... $FTPUSER is now the correct owner of $FOLDERPATH."
        fi
        # Make sure we are back at the root, so we can continue looping
        cd "$FOLDERROOT"
    fi
done
Explanation of the code:
Go to the vhosts folder
Loop through the websites
Store the vhosts path, because we are using cd in a loop
If an httpdocs folder exists for the current website, then
set the correct rights on httpdocs and
all underlying folders
Show a success message
cd back to the vhosts folder, so we can continue looping

Related

Substitute user mid-Bash-script & continue running commands (Mac OSX)

I'm building two scripts which, combined, will fully uninstall a program (Microsoft Lync) on Mac OS X. I need to be able to swap from an account with root access (this account initially executes the first script) to the user who is currently logged in.
This is necessary because the second script needs to be executed not only by the logged-in user, but from said user's shell. The two scripts are named Uninstall1.sh and Uninstall2.sh in this example.
Uninstall1.sh (executed by root user):
#!/bin/bash
#commands run by the root user
function rootCMDs () {
pkill Lync
rm -rf /Applications/Microsoft\ Lync.app
killall cfprefsd
swapUser
}
function swapUser () {
currentUser=$(who | grep console | grep -v _mbsetupuser | grep -v root | awk '{print $1}' | head -n 1)
cp /<directory>/Uninstall2.sh /<directory>/${currentUser}/
su -l ${currentUser} -c "<directory>/{currentUser}/testScript.sh";
}
(<directory> is actually declared in the scripts, but for the sake of privacy I've excluded it.)
In the above script, I run some basic commands as the root user to delete the app, and kill cfprefsd to prevent having to reboot the machine. I then call the swapUser function, which dynamically identifies the user account currently signed in and assigns it to the variable currentUser (in this case, within our environment, it's safe to assume only one user is logged into the computer at a time). I'm not sure whether or not I'll need the cp directory/Uninstall2.sh portion yet, but it is intended to solve a different problem.
The main problem is getting the script to properly handle the su command. I use the -l flag to simulate a user login, which is necessary because it not only substitutes to the logged-in user account but also launches a new shell as that user. I need -l because OS X doesn't allow modifying another user's keychain from an admin account (the admin account in question has root access, but isn't root, nor does it switch to root). -c is intended to execute the copied script, which is as follows:
Uninstall2.sh (needs to be executed by the locally logged-in user):
#!/bin/bash
function rmFiles () {
# rm -rf commands
# rm -rf commands
certHandler1
}
function certHandler1 () {
myCert=($(security dump-keychain | grep <string> | grep alis | sed -e 's/"alis"<blob>="//' | sed -e 's/"//'))
cLen=${#myCert[@]} # Count the number of items in the array; there are usually duplicates
for ((i = 0; i < ${cLen}; i++)); do
    security delete-certificate -c "${myCert[$i]}"
done
certHandler2
}
function certHandler2 () {
# Derive the name of, and delete Keychain items related to Microsoft Lync.
myAccount=$(security dump-keychain | grep KeyContainer | grep acct | sed -e 's/"acct"<blob>="//' | sed -e 's/"//')
security delete-generic-password -a ${myAccount}
lyncPW=$(security dump-keychain | grep Microsoft\ Lync | sed -e 's/<blob>="//' | awk '{print $2, $3}' | sed -e 's/"//')
security delete-generic-password -l "${lyncPW}"
}
rmFiles
In the above script, rmFiles kicks the script off by removing some files and directories from the user's ~/Library directory. This works without a problem, assuming the su from Uninstall1.sh properly executes this second script using the local user's shell.
I then use security dump-keychain to dump the local user's keychain, find a specific certificate, and assign all matches to the myCert array (because there may be duplicates of this item in a user's keychain); cLen holds the count. Each item in the array is then deleted, after which a few more keychain items are dynamically found and deleted.
What I've been finding is that the first script will either properly su to the logged-in user it finds, at which point the second script doesn't run at all; or the second script is run as the root user and thus doesn't properly delete the keychain items of the logged-in user it's supposed to su to.
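A quick way to see which of the two is happening is to push a trivial command through the same su invocation and check whose environment it reports (just a diagnostic sketch, not part of the uninstall scripts):
currentUser=$(who | grep console | grep -v _mbsetupuser | grep -v root | awk '{print $1}' | head -n 1)
# If this prints root (or nothing at all), the -c command isn't reaching the
# logged-in user's login shell, and Uninstall2.sh will misbehave the same way.
su -l ${currentUser} -c 'whoami; echo $HOME'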
Sorry for the long post, thanks for reading, and I look forward to some light shed on this situation!
Revision
I managed to find a way to achieve all that I am trying to do in a single bash script, rather than two. I did this by having the main script create another bash script in /tmp, then executing that as the local user. I'll provide it below to help anybody else who may need this functionality:
Credit to the following source for the code on how to create another bash script within a bash script:
http://tldp.org/LDP/abs/html/here-docs.html - Example 19.8
#!/bin/bash
# Declare the desired directory and file name of the script to be created. I chose /tmp because I want this file to be removed upon next start-up.
OUTFILE=/tmp/fileName.sh
(
cat <<'EOF'
#!/bin/bash
# Remove user-local Microsoft Lync files and/or directories
function rmFiles () {
rm -rf ~/Library/Caches/com.microsoft.Lync
rm -f ~/Library/Preferences/com.microsoft.Lync.plist
rm -rf ~/Library/Preferences/ByHost/MicrosoftLync*
rm -rf ~/Library/Logs/Microsoft-Lync*
rm -rf ~/Documents/Microsoft\ User\ Data/Microsoft\ Lync\ Data
rm -rf ~/Documents/Microsoft\ User\ Data/Microsoft\ Lync\ History
rm -f ~/Library/Keychains/OC_KeyContainer*
certHandler1
}
# Need to build in a loop that counts the output to determine whether we need to build an array or use a simple variable.
# Some people have more than one 'PRIVATE_STRING' certificate item in their keychain - this will loop through and delete each one. This may or may not be necessary for other applications of this script.
function certHandler1 () {
# Replace 'PRIVATE_STRING' with whatever you're searching for in Keychain
myCert=($(security dump-keychain | grep PRIVATE_STRING | grep alis | sed -e 's/"alis"<blob>="//' | sed -e 's/"//'))
cLen=${#myCert[@]} # Count the number of items in the array
for ((i = 0; i < ${cLen}; i++)); do
    security delete-certificate -c "${myCert[$i]}"
done
certHandler2
}
function certHandler2 () {
# Derive the name of, then delete Keychain items related to Microsoft Lync.
myAccount=$(security dump-keychain | grep KeyContainer | grep acct | sed -e 's/"acct"<blob>="//' | sed -e 's/"//')
security delete-generic-password -a ${myAccount}
lyncPW=$(security dump-keychain | grep Microsoft\ Lync | sed -e 's/<blob>="//' | awk '{print $2, $3}' | sed -e 's/"//')
security delete-generic-password -l "${lyncPW}"
}
rmFiles
exit 0
EOF
) > $OUTFILE
# -----------------------------------------------------------
# Commands to be run as root
function rootCMDs () {
pkill Lync
rm -rf /Applications/Microsoft\ Lync.app
killall cfprefsd # killing cfprefsd mitigates the necessity to reboot the machine to clear cache.
chainScript
}
function chainScript () {
if [ -f "$OUTFILE" ]
then
# Make the file in /tmp executable; it was created by root, so the logged-in (non-root) user needs read/execute permission on it to run it.
chmod 755 $OUTFILE
# Dynamically identify the user currently logged in. This may need some tweaking if multiple User Accounts are logged into the same computer at once.
currentUser=$(who | grep console | grep -v _mbsetupuser | grep -v root | awk '{print $1}' | head -n 1);
su -l ${currentUser} -c "bash $OUTFILE"
else
echo "Problem in creating file: \"$OUTFILE\""
fi
}
# This method also works for generating
#+ C programs, Perl programs, Python programs, Makefiles,
#+ and the like.
# Commence the domino effect.
rootCMDs
exit 0
# -----------------------------------------------------------
Cheers!

Script to group numbered files into folders

I have around a million files in one folder in the form xxxx_description.jpg, where xxxx is a number ranging from 100 up to an unknown upper bound.
The list is similar to this:
146467_description1.jpg
146467_description2.jpg
146467_description3.jpg
146467_description4.jpg
14646_description1.jpg
14646_description2.jpg
14646_description3.jpg
146472_description1.jpg
146472_description2.jpg
146472_description3.jpg
146500_description1.jpg
146500_description2.jpg
146500_description3.jpg
146500_description4.jpg
146500_description5.jpg
146500_description6.jpg
To get the file count in that folder down, I'd like to put them all into folders grouped by the number at the start.
ie:
146467/146467_description1.jpg
146467/146467_description2.jpg
146467/146467_description3.jpg
146467/146467_description4.jpg
14646/14646_description1.jpg
14646/14646_description2.jpg
14646/14646_description3.jpg
146472/146472_description1.jpg
146472/146472_description2.jpg
146472/146472_description3.jpg
146500/146500_description1.jpg
146500/146500_description2.jpg
146500/146500_description3.jpg
146500/146500_description4.jpg
146500/146500_description5.jpg
146500/146500_description6.jpg
I was thinking of trying a command line (find | awk {} | mv) or maybe writing a script, but I'm not sure how to do this most efficiently.
If you really are dealing with millions of files, I suspect that a glob (*.jpg or [0-9]*_*.jpg) may fail because it makes a command line that's too long for the shell. If that's the case, you can still use find. Something like this might work:
find /path -name "[0-9]*_*.jpg" -exec sh -c 'f="{}"; mkdir -p "/target/${f%_*}"; mv "$f" "/target/${f%_*}/"' \;
Broken out for easier reading, this is what we're doing:
find /path - run find, with /path as a starting point,
-name "[0-9]*_*.jpg" - match files that match this filespec in all directories,
-exec sh -c execute the following on each file...
'f="{}"; - put the filename into a variable...
mkdir -p "/target/${f%_*}"; - make a target directory based on that variable (read mkdir's man page about the -p option)
mv "$f" "/target/${f%_*}/"' - move the file into the directory.
\; - end the -exec expression
On the up side, it can handle any number of files that find can handle (i.e. limited only by your OS). On the down side, it's launching a separate shell for each file to be handled.
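If the per-file shell launches turn out to be a bottleneck, the same idea can be batched by handing many filenames to each shell with -exec ... {} + (a sketch along the same lines, not part of the answer above and not tested at the million-file scale; /path and /target are placeholders as before):
find /path -name "[0-9]*_*.jpg" -exec sh -c '
    for f in "$@"; do
        base=${f##*/}               # strip any leading directories
        dest="/target/${base%%_*}"  # the digits before the first underscore
        mkdir -p "$dest"
        mv "$f" "$dest/"
    done
' sh {} +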
Note that the above answer is for Bourne/POSIX/Bash. If you're using CSH or TCSH as your shell, the following might work instead:
#!/bin/tcsh
foreach f (*_*.jpg)
set split = ($f:as/_/ /)
mkdir -p "$split[1]"
mv "$f" "$split[1]/"
end
This assumes that the filespec will fit in tcsh's glob buffer. I've tested with 40000 files (894KB) on one command line and not had a problem using /bin/sh or /bin/csh in FreeBSD.
Like the Bourne/POSIX/Bash parameter expansion solution above, this avoids unnecessary calls to external programs. I haven't tested it at the million-file scale, though, and would recommend the find solution even though it's slower.
You can use this script:
for i in [0-9]*_*.jpg; do
p=`echo "$i" | sed 's/^\([0-9]*\)_.*/\1/'`
mkdir -p "$p"
mv "$i" "$p"
done
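The echo | sed pipeline above spawns two extra processes per file; since the prefix is just everything before the first underscore, shell parameter expansion can do the same job on its own (a minimal sketch of that variant):
for i in [0-9]*_*.jpg; do
    p=${i%%_*}       # digits before the first underscore
    mkdir -p "$p"
    mv "$i" "$p"
done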
Using grep
for file in *.jpg; do
    dirName=$(echo "$file" | grep -oE '^[0-9]+')
    [[ -d $dirName ]] || mkdir $dirName
    mv "$file" "$dirName"
done
grep -oE '^[0-9]+' extracts the starting digits in the filename as
146467
146467
146467
146467
14646
...
[[ -d $dirName ]] succeeds (is true) if the directory exists
[[ -d $dirName ]] || mkdir $dirName runs the mkdir only if the test [[ -d $dirName ]] fails, that is, if the directory does not exist

Looking to take only main folder name within a tarball & match it to folders to see if it's been extracted

I have a situation where I need to keep .tgz files & if they've been extracted, remove the extracted directory & contents.
In all examples, the only top-level directory within the tarball has a different name than the tarball itself:
[host1]$ find / -name "*\#*.tgz" #(has a # symbol somewhere in the name)
/1-#-test.tgz
[host1]$ tar -tzvf /1-#-test.tgz | head -n 1 | awk '{ print $6 }'
TJ #(directory name)
What I'd like to accomplish (pulling my hair out; rusty scripting fingers) is to look at each tarball and see whether the corresponding directory name (like above) exists. If it does, echo "rm -rf /directoryname" into an output file for review.
I can read all of the tarballs into an array ... but how to check the directories?
Frustrated & appreciate any help.
Maybe you're looking for something like this:
find / -name "*#*.tgz" | while read line; do
dir=$(tar ztf "$line" | awk -F/ '{print $1; exit}')
test -d "$dir" && echo "rm -fr '$dir'"
done
Explanation:
We iterate over the *#*.tgz files found with a while loop, line by line
Get the list of files in the tgz file with tar ztf "$line"
Since the paths inside the archive are separated by /, use that as the field separator in awk and print the first field, i.e. the top-level directory. After the print we exit, making this equivalent to, but more efficient than, using head -n1 first
With dir=$(...) we put the entire output of the tar..awk chain, i.e. the top-level directory of the first entry in the tar, into the variable dir
We check whether such a directory exists; if yes, then echo an rm command so you can review it and execute it later if it looks good
My original answer used a find ... -exec but I think that's not so good in this particular case:
find / -name "*#*.tgz" -exec \
sh -c 'dir=$(tar ztf "{}" | awk -F/ "{print \$1; exit}");\
test -d "$dir" && echo "rm -fr \"$dir\""' \;
It's not so good because it runs sh for every file, and because we embed {} inside the sh -c script we lose the usual benefit of a typical find ... -exec, where special characters in {} are handled correctly.
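One way to keep the -exec form while avoiding the {}-in-script problem is to pass the filename to sh as a positional argument instead (a sketch with the same logic as above):
find / -name "*#*.tgz" -exec sh -c '
    dir=$(tar ztf "$1" | awk -F/ "{print \$1; exit}")
    test -d "$dir" && echo "rm -rf \"$dir\""
' sh {} \;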

Getting specific files from server

Using Terminal and Shell/Bash commands, is there a way to retrieve specific files from a web directory? I.e.
Directory: www.site.com/samples/
copy all files ending in ".h" into a folder
The folder contains text files, and other files associated that are of no use.
Thanks :)
There are multiple ways of achieving this recursively:
1. Using find
1.1 Making the directories: use find with mkdir -p to recreate the folder tree without errors
cd path;
mkdir backup
find www.site.com/samples/ -type d -exec mkdir -p {} backup/{} \;
1.2 Finding the specific files and copying them to the backup folder; cp -p preserves permissions
find www.site.com/samples/ -name \*.h -exec cp -p {} backup/{} \;
2. Using tar, which actually works the other way around, i.e. it excludes specific files; the part of the question about the unwanted text files matches this approach better:
You can add as many excludes as you like:
tar --exclude=*.txt --exclude=*.filetype2 --exclude=*.filetype3 -cvzf site-backup.tar.gz www.site.com
mv www.site.com www.site.com.1
tar -xvzf site-backup.tar.gz
You can use wget for that, but if there are no links to those files, i.e. they exist but are not referenced from any HTML page, then brute force is the only option.
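If they are linked from an index page, a recursive wget restricted to .h files would do it (a sketch; ./headers is just a made-up destination folder):
# -r recursive, -np don't ascend to the parent directory, -nd don't recreate
# the remote directory tree locally, -A keep only files matching *.h
wget -r -np -nd -A '*.h' -P ./headers http://www.site.com/samples/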
cp -aiv /www.site.com/samples/*.h /somefolder/
http://linux.die.net/man/1/cp

for looping over an array

I am trying to resolve an issue with a bash script that is intended to search through each user's home directory in /Users/ and find two different directories, stored in the array "SUBDIRS". If these directories exist, I want to remove them with the recursive and force options. If they do not exist, I want the script to continue looking for the next directory, next home folder, etc.
#!/bin/sh
err=0
SUBDIRS=(
"Library/Application Support/Spotify"
"Library/Caches/com.spotify.client"
)
for HOMEDIR in /Users/*; do
for SUBDIR in ${SUBDIRS}; do
DIR="${HOMEDIR}/${SUBDIR}"
if [[ -d "${DIR}" ]]; then
rm -rf "${DIR}"
echo "${HOMEDIR}/${SUBDIR} has been removed."
APP=$(find "${HOMEDIR}" -name [sS]potify.app)
rm -rf "${APP}"
fi
done
done
exit $err
You need to signify that it's an array to be expanded (and quote it).
for SUBDIR in "${SUBDIRS[@]}"; do
You should also quote the pattern in the find command so that find interprets the wildcard instead of the shell expanding it.
APP=$(find "${HOMEDIR}" -name '[sS]potify.app')
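Putting both fixes together, the loop would look something like this (a sketch; if /bin/sh on the target system isn't bash, the shebang also needs to be #!/bin/bash, since arrays are a bash feature):
for HOMEDIR in /Users/*; do
    for SUBDIR in "${SUBDIRS[@]}"; do      # expand every array element, safely quoted
        DIR="${HOMEDIR}/${SUBDIR}"
        if [[ -d "${DIR}" ]]; then
            rm -rf "${DIR}"
            echo "${DIR} has been removed."
            APP=$(find "${HOMEDIR}" -name '[sS]potify.app')
            rm -rf "${APP}"
        fi
    done
done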
