Trying to change permissions using ksh script - concatenation

I am new to ksh and am trying to change multiple file permissions using a ksh script, but I am unable to concatenate an asterisk in my script.
#!/bin/ksh
for i in `cat /gpfs_cache/open/srcfile.csv`
do
echo "Changing permissions in $i"
chmod 0444 ${i}"*"
done
srcfile.csv contains
/gpfs_data/open/files/test1/
/gpfs_data/open/files/test2/
The output I get is
Changing permissions in /gpfs_data/open/files/test1/
chmod: cannot access `/gpfs_data/open/files/test1/\r*': No such file or directory
Changing permissions in /gpfs_data/open/files/test2/
chmod: cannot access `/gpfs_data/open/files/test2/\r*': No such file or directory
Any help would be greatly appreciated.

Don't quote the asterisk. Quoting causes it to be treated literally instead of being expanded as a glob.
#!/bin/ksh
for i in `cat /gpfs_cache/open/srcfile.csv`
do
echo "Changing permissions in $i"
chmod 0444 ${i}*
done
works for me.
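Note also the \r in the error output: it suggests that srcfile.csv has Windows (CRLF) line endings, so each path carries a stray carriage return that chmod can never match. A minimal sketch that strips the carriage returns first and leaves the asterisk unquoted (assuming tr is available on your system):
#!/bin/ksh
# Strip carriage returns from the CSV, then let the unquoted glob expand
tr -d '\r' < /gpfs_cache/open/srcfile.csv | while read i
do
echo "Changing permissions in $i"
chmod 0444 ${i}*
done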

Related

manipulating strings and use commands with the result in bash [duplicate]

I'm trying to write a small script to change the current directory to my project directory:
#!/bin/bash
cd /home/tree/projects/java
I saved this file as proj, added execute permission with chmod, and copied it to /usr/bin. When I call it by:
proj, it does nothing. What am I doing wrong?
Shell scripts are run inside a subshell, and each subshell has its own concept of what the current directory is. The cd succeeds, but as soon as the subshell exits, you're back in the interactive shell and nothing ever changed there.
One way to get around this is to use an alias instead:
alias proj="cd /home/tree/projects/java"
You're doing nothing wrong! You've changed the directory, but only within the subshell that runs the script.
You can run the script in your current process with the "dot" command:
. proj
But I'd prefer Greg's suggestion to use an alias in this simple case.
The cd in your script technically worked as it changed the directory of the shell that ran the script, but that was a separate process forked from your interactive shell.
A POSIX-compatible way to solve this problem is to define a shell procedure rather than a shell-invoked command script.
jhome () {
cd /home/tree/projects/java
}
You can just type this in or put it in one of the various shell startup files.
The cd is done within the script's shell. When the script ends, that shell exits, and you are left in the directory you were in before. "Source" the script, don't run it. Instead of:
./myscript.sh
do
. ./myscript.sh
(Notice the dot and space before the script name.)
To make a bash script that will cd to a chosen directory:
Create the script file
#!/bin/sh
# file : /scripts/cdjava
#
cd /home/askgelal/projects/java
Then create an alias in your startup file.
#!/bin/sh
# file /scripts/mastercode.sh
#
alias cdjava='. /scripts/cdjava'
I created a startup file where I dump all my aliases and custom functions.
Then I source this file into my .bashrc to have it set on each boot.
For example, create a master aliases/functions file: /scripts/mastercode.sh
(Put the alias in this file.)
Then at the end of your .bashrc file:
source /scripts/mastercode.sh
Now it's easy to cd to your java directory: just type cdjava and you are there.
You can use . to execute a script in the current shell environment:
. script_name
or, alternatively, the more readable but shell-specific source:
source script_name
This avoids the subshell, and allows any variables or builtins (including cd) to affect the current shell instead.
Jeremy Ruten's idea of using a symlink triggered a thought that hasn't crossed any other answer. Use:
CDPATH=:$HOME/projects
The leading colon is important; it means that if there is a directory 'dir' in the current directory, then 'cd dir' will change to that, rather than hopping off somewhere else. With the value set as shown, you can do:
cd java
and, if there is no sub-directory called java in the current directory, then it will take you directly to $HOME/projects/java - no aliases, no scripts, no dubious execs or dot commands.
My $HOME is /Users/jleffler; my $CDPATH is:
:/Users/jleffler:/Users/jleffler/mail:/Users/jleffler/src:/Users/jleffler/src/perl:/Users/jleffler/src/sqltools:/Users/jleffler/lib:/Users/jleffler/doc:/Users/jleffler/work
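To make that setting stick across sessions, a small sketch using the value from the answer itself:
# in ~/.bashrc
CDPATH=:$HOME/projects
After reloading the file (source ~/.bashrc), cd java works from any directory that doesn't have a java subdirectory of its own.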
Use exec bash at the end
A bash script operates on its current environment or on that of its
children, but never on its parent environment.
However, this question often gets asked because one wants to be left at a (new) bash prompt in a certain directory after execution of a bash script from within another directory.
If this is the case, simply execute a child bash instance at the end of the script:
#!/usr/bin/env bash
cd /home/tree/projects/java
echo -e '\nHit [Ctrl]+[D] to exit this child shell.'
exec bash
To return to the previous, parental bash instance, use Ctrl+D.
Update
At least with newer versions of bash, the exec on the last line is no longer required. Furthermore, the script can be made to work with whichever shell you prefer by using the $SHELL environment variable. This then gives:
#!/usr/bin/env bash
cd desired/directory
echo -e '\nHit [Ctrl]+[D] to exit this child shell.'
$SHELL
I got my code to work by using . <your file name>
./<your file name> does not work, because it doesn't change your directory in the terminal; it only changes the directory within that script's own shell.
Here is my program
#!/bin/bash
echo "Taking you to eclipse's workspace."
cd /Developer/Java/workspace
Here is my terminal
nova:~ Kael$
nova:~ Kael$ . workspace.sh
Taking you to eclipse's workspace.
nova:workspace Kael$
simply run:
cd /home/xxx/yyy && command_you_want
When you fire a shell script, it runs a new instance of that shell (/bin/bash). Thus, your script just fires up a shell, changes the directory and exits. Put another way, cd (and other such commands) within a shell script do not affect nor have access to the shell from which they were launched.
You can do following:
#!/bin/bash
cd /your/project/directory
# start another shell and replacing the current
exec /bin/bash
EDIT: This could be 'dotted' as well, to prevent creation of subsequent shells.
Example:
. ./previous_script (with or without the first line)
In my particular case I needed to change to the same directory far too often.
So in my .bashrc (I use Ubuntu) I added the following:
1 -
$ nano ~/.bashrc
function switchp
{
cd /home/tree/projects/$1
}
2 -
$ source ~/.bashrc
3 -
$ switchp java
Directly it will do: cd /home/tree/projects/java
Hope that helps!
It only changes the directory for the script itself, while your current directory stays the same.
You might want to use a symbolic link instead. It allows you to make a "shortcut" to a file or directory, so you'd only have to type something like cd my-project.
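For example, a quick sketch of that approach (the link name my-project is just an illustration):
ln -s /home/tree/projects/java ~/my-project
cd ~/my-project
The link only has to be created once, and cd ~/my-project then drops you into the real project directory.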
You can combine Adam & Greg's alias and dot approaches to make something that can be more dynamic:
alias project=". project"
Now running the project alias will execute the project script in the current shell as opposed to the subshell.
You can combine an alias and a script,
alias proj="cd \`/usr/bin/proj !*\`"
provided that the script echoes the destination path. Note that those are backticks surrounding the script name.
For example, your script could be
#!/bin/bash
echo /home/askgelal/projects/java/$1
The advantage with this technique is that the script could take any number of command line parameters and emit different destinations calculated by possibly complex logic.
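As a hedged illustration of that idea, the script below maps hypothetical short names to destinations (the subproject names are made up for the example):
#!/bin/bash
# /usr/bin/proj: print the destination for the alias to cd into
case "$1" in
web) echo /home/askgelal/projects/java/webapp ;;
api) echo /home/askgelal/projects/java/api ;;
"") echo /home/askgelal/projects/java ;;
*) echo /home/askgelal/projects/java/"$1" ;;
esac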
To navigate directories quickly, there's $CDPATH, cdargs, and ways to generate aliases automatically:
http://jackndempsey.blogspot.com/2008/07/cdargs.html
http://muness.blogspot.com/2008/06/lazy-bash-cd-aliaes.html
https://web.archive.org/web/1/http://articles.techrepublic%2ecom%2ecom/5100-10878_11-5827311.html
In your ~/.bash_profile file, add the following function:
move_me() {
cd ~/path/to/dest
}
Restart your terminal and you can type
move_me
and you will be moved to the destination folder.
You can use the && operator:
cd myDirectory && ls
While sourcing the script you want to run is one solution, you should be aware that such a script can then directly modify the environment of your current shell, and you can no longer pass arguments to it.
Another way to do, is to implement your script as a function in bash.
function cdbm() {
cd whereever_you_want_to_go
echo "Arguments to the functions were $1, $2, ..."
}
This technique is used by autojump (http://github.com/joelthelion/autojump/wiki) to provide shell directory bookmarks that learn from your usage.
You can create a function like below in your .bash_profile and it will work smoothly.
The following function takes an optional parameter, which is a project name.
For example, you can just run
cdproj
or
cdproj project_name
Here is the function definition.
cdproj(){
dir=/Users/yourname/projects
if [ "$1" ]; then
cd "${dir}/${1}"
else
cd "${dir}"
fi
}
Don't forget to source your .bash_profile.
This should do what you want. Change to the directory of interest (from within the script), and then spawn a new bash shell.
#!/bin/bash
# saved as mov_dir.sh
cd ~/mt/v3/rt_linux-rt-tools/
bash
If you run this, it will take you to the directory of interest, and when you exit it, it will bring you back to the original place.
root@intel-corei7-64:~# ./mov_dir.sh
root@intel-corei7-64:~/mt/v3/rt_linux-rt-tools# exit
root@intel-corei7-64:~#
This will even take you back to your original directory when you exit (CTRL+D).
I did the following:
create a file called case
paste the following in the file:
#!/bin/sh
cd /home/"$1"
save it and then:
chmod +x case
I also created an alias in my .bashrc:
alias disk='cd /home/; . case'
now when I type:
case 12345
essentially I am typing:
cd /home/12345
You can type any folder after 'case':
case 12
case 15
case 17
which is like typing:
cd /home/12
cd /home/15
cd /home/17
respectively
In my case the path is much longer - these guys summed it up with the ~ info earlier.
As explained in the other answers, you have changed the directory, but only within the sub-shell that runs the script. This does not impact the parent shell.
One solution is to use a bash function instead of a bash script, by placing your script code into a function. The function is then available as a command, and because it runs without a child process, any cd inside it affects the calling shell.
Bash functions :
One feature of the bash profile is that it can store custom functions, which you can run in the terminal or in bash scripts the same way you run applications/commands; this can also be used as a shortcut for long commands.
To make your function available system-wide, you will need to copy it to the end of several files:
/home/user/.bashrc
/home/user/.bash_profile
/root/.bashrc
/root/.bash_profile
You can sudo kwrite /home/user/.bashrc /home/user/.bash_profile /root/.bashrc /root/.bash_profile to edit/create those files quickly
Howto :
Copy your bash script code into a new function at the end of your bash profile file and restart your terminal; you can then run cdd or whatever function you wrote.
Script Example
Making a shortcut to cd .. with cdd
cdd() {
cd ..
}
ls shortcut
ll() {
ls -l -h
}
ls -a shortcut
lll() {
ls -l -h -a
}
If you are using fish as your shell, the best solution is to create a function. As an example, given the original question, you could copy the 4 lines below and paste them into your fish command line:
function proj
cd /home/tree/projects/java
end
funcsave proj
This will create the function and save it for use later. If your project changes, just repeat the process using the new path.
If you prefer, you can manually add the function file by doing the following:
nano ~/.config/fish/functions/proj.fish
and enter the text:
function proj
cd /home/tree/projects/java
end
and finally press ctrl+x to exit and y followed by return to save your changes.
(NOTE: the first method of using funcsave creates the proj.fish file for you).
You don't need a script; just set the correct option and create an environment variable.
shopt -s cdable_vars
in your ~/.bashrc allows you to cd to the contents of environment variables.
Create such an environment variable:
export myjava="/home/tree/projects/java"
and you can use:
cd myjava
Other alternatives.
Note the discussion How do I set the working directory of the parent process?
It contains some hackish answers, e.g.
https://stackoverflow.com/a/2375174/755804 (changing the parent process directory via gdb, don't do this) and https://stackoverflow.com/a/51985735/755804 (the command tailcd that injects cd dirname to the input stream of the parent process; well, ideally it should be a part of bash rather than a hack)
It is an old question, but I am really surprised I don't see this trick here
Instead of using cd you can use
export PWD=the/path/you/want
No need to create subshells or use aliases.
Note that it is your responsibility to make sure the/path/you/want exists.
I have to work in tcsh, and I know this is not an elegant solution, but for example, if I had to change folders to a path where one word is different, the whole thing can be done in the alias
alias alias_name 'set a = `pwd`; set b = `echo $a | replace "Trees" "Tests"` ; cd $b'
If the path is always fixed, then just
alias alias_name2 'cd path/you/always/need'
should work
In the line above, the new folder path is set
This combines the answer by Serge with an unrelated answer by David. It changes the directory, and then instead of forcing a bash shell, it launches the user's default shell. It however requires both getent and /etc/passwd to detect the default shell.
#!/usr/bin/env bash
cd desired/directory
USER_SHELL=$(getent passwd <USER> | cut -d : -f 7)
$USER_SHELL
Of course this still has the same deficiency of creating a nested shell.

SOLR POST files with no extension

I am using Solr 5 and I want to scan documents that have no extensions. Unfortunately, changing the files to have extensions is not an option in my case.
The command I am using is simply:
$bin/post -c mycore ../foldertobescaned -type application/pdf
The command works fine for documents that do have an extension, but otherwise I am getting:
Entering auto mode. File endings considered are xml,json,csv,pdf,doc,docx,ppt,pptx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log
If renaming the files is not an option, you can use the following script as a workaround until Solr improves its post method. It is a simple bash for loop that submits each file individually and works regardless of the file extension. Note that this script will be slower than using post on the whole folder, because each individual file transfer needs to be initialized.
Save the script below as postFolderToSolr.sh inside your Solr folder (so that Solr's bin/ folder is a subdirectory), make it executable with chmod +x postFolderToSolr.sh, and then use it as follows: ./postFolderToSolr.sh mycore /home/user1/foldertobescaned/ application/pdf
Using no arguments or the wrong number of arguments prints a short usage message as help.
#!/bin/bash
set -o nounset
if [ "$#" -ne 3 ]
then
echo "Post contents of a folder to Solr."
echo
echo "Usage: postFolderToSolr.sh <colletionName> </path/to/folder> <MIME>"
echo
exit 1
fi
collection=$1
inputPath=${2%/} # remove suffix / if it exists
mime=$3
for element in "$inputPath"/*; do
bin/post -c "$collection" -type "$mime" "$element"
done

Error passing multiple commands to Cisco CLI via plink

I've gotten some help with an earlier part of this batch file, but now I'm having trouble with the final component.
I've tried a few things with no success. I tried changing the CRLF to LF which did nothing. I also tried rephrasing the commands a few ways but I am still not getting anywhere. The following is my main batch file.
@echo on
REM delete deauth command file
SET OutFile="C:\temp\Out2.txt"
IF EXIST "%OutFile%" DEL "%OutFile%"
plink -v -ssh *#x.x.x.x -pw PW -m "c:\temp\WirelessDump.txt" > "C:\temp\output.txt"
setlocal
for /f %%a in (C:\temp\output.txt) do >> "Out2.txt" echo wir cli mac-address %%a deauth forced
REM Use commands in out2 to deauth
plink -v -ssh *#x.x.x.x -pw PW -m "c:\temp\Out2.txt"
pause
Below this sentence is the command found in Out2 which I think is giving the actual trouble. The number of lines varies but they are all this particular command just with differing MACs.
wir cli mac-address xxxx.xxxx.xxxx deauth forced
If Out2 has only a single line it runs fine, no issues. But when there are multiple lines, it fails with an error stating that the Line has an invalid autocommand. It's almost as if it was reading it as one contiguous command. As I mentioned above I changed from CRLF to LF hoping IOS would like it better, but that failed. I've tried adding extra lines between the commands, and I've tried calling the login every time from that file.
I am hoping that there is a way to tailor the commands to pass all lines one at a time to keep this down to a minimum of files.
I had another thought, but it is kinda/very clunky. If there was a way to output each of those MAC deauth commands to its own file in a separate folder (out1, out2, out3), and have the BAT run all the randomly generated files in that folder, each one would be a separate plink session.
Let me know if I need to change/add/elaborate on anything. Thanks in advance for anything you guys are willing to help with. I appreciate it.
EDIT: Martin has pointed out what the limitation actually is. It appears to be a limitation on Cisco to accept blocks of commands through SSH. So I still have the same question really, I just need some help figuring a workaround to this issue. I'm thinking the multiple file solution I mentioned above may have some possibility. But I'm too much of a noob to know how to make that work. I'll update if I have any breakthroughs though. Thanks for any contributions!
It's actually a known limitation of Cisco devices: they do not support multiple commands in an SSH "exec" channel command.
Quoting section 3.8.3.6 ("-m: read a remote command or script from a file") of the PuTTY/Plink manual:
With some servers (particularly Unix systems), you can even put multiple lines in this file and execute more than one command in sequence, or a whole shell script; but this is arguably an abuse, and cannot be expected to work on all servers. In particular, it is known not to work with certain ‘embedded’ servers, such as Cisco routers.
Though you can probably still feed multiple commands to Plink input:
(
echo command 1
echo command 2
echo command 3
echo exit
) | plink -v -ssh user@host -pw password > output.txt
Or you can simply use an input file:
plink -v -ssh user@host -pw password < input.txt > output.txt
Similar question: A way of typing multiple commands in cmd.txt file using PuTTY batch against Cisco
This works without cmd.exe and using files:
function Invoke-PlinkCommandsIOS {
param (
[Parameter(Mandatory=$true)][string] $Host,
[Parameter(Mandatory=$true)][System.Management.Automation.PSCredential] $Credential,
[Parameter(Mandatory=$true)][string] $Commands,
[Switch] $ConnectOnceToAcceptHostKey = $false
)
$PlinkPath="$PSScriptRoot\plink.exe"
$commands | & "$PSScriptRoot\plink.exe" -ssh -2 -l $Credential.GetNetworkCredential().username -pw "$($Credential.GetNetworkCredential().password)" $Host -batch
}
Usage: don't forget your exits and terminal length 0, or it will hang
PS C:\> $Command = "terminal length 0
>> show running-config
>> exit
>> "
>>
PS C:\> Invoke-PlinkCommandsIOS -Host ace-dc1 -Credential $cred -Commands $Command
....
Sounds like your file 'Out2.txt' has only LF at the end of each line. A simple way to convert that to CRLF is to use the MORE command, redirect the output to a new file, and then use the new file.
more Out2.txt > Out2CRLF.txt
I ran into the same issue when trying to pull the full list of ACLs on an ASA via plink in powershell.
Essentially, due to the abuse issue referenced in the documentation: https://the.earth.li/~sgtatham/putty/0.72/htmldoc/Chapter3.html#using-cmdline-m, I was getting inconsistent results in pulling the ACLs. Sometimes I would get 0, sometimes only 1 or 2, and sometimes I would get all of them. (I personally had about a 1 in 5 success rate.)
As I would occasionally be successful I used a while loop that would catch the unsuccessful attempts and retry. Just be sure to put some timing on the while loop to prevent it from spamming ssh connections too much.
It is not a good solution, but it worked as a last resort.
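A rough sketch of that retry idea, written here as a POSIX shell loop rather than the PowerShell the answer used (the host, command file, and the 100-line completeness check are all placeholders):
# Hypothetical retry loop: rerun plink until the output looks complete
attempts=0
until plink -ssh user@host -pw password -m commands.txt > output.txt &&
      [ "$(wc -l < output.txt)" -ge 100 ]
do
attempts=$((attempts + 1))
[ "$attempts" -ge 10 ] && break   # give up eventually rather than loop forever
sleep 5                           # pacing so the loop doesn't spam SSH connections
done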

Script to change ownership of folders in Plesk vhosts

I'm looking for some help in creating a shell script in Linux to perform a batch ownership change for certain folders in a Plesk environment where the owner:group is apache:apache.
I want to change the owner:group to :psacln.
The FTP user can be ascertained by looking at the owner of the httpdocs folder.
^this is the section I'm having trouble with.
If I was to set all owners to be the same, I could do a one-line:
find /var/www/vhosts/*/httpdocs -user apache -group apache -exec chown user:psacln {} \;
Can anyone help plug the user in to this command?
Thanks
Figured it out... for those who may want to use it in the future:
for dir in /var/www/vhosts/*
do
dir=${dir%*/}
permissions=`stat -c '%U' ${dir##*/}/httpdocs`
find ${dir##*/}/httpdocs -user apache -group apache -exec chown $permissions {} \;
done
Since stat doesn't work the same way on all unices, I thought I would share my script to set the ownership of all websites to the correct owners in Plesk (tested on Plesk 11, 11.5, 12 and 12.5):
cd /var/www/vhosts/
for f in *; do
if [[ -d "$f" && ! -L "$f" ]]; then
# Get necessary variables
FOLDERROOT="/var/www/vhosts/"
FOLDERPATH="/var/www/vhosts/$f/"
FTPUSER="$(ls -ld /var/www/vhosts/$f/ | awk '{print $3}')"
# Set correct rights for current website, if website has hosting!
cd $FOLDERPATH
if [ -d "$FOLDERPATH/httpdocs" ]; then
chown -R $FTPUSER:psacln httpdocs
chmod -R g+w httpdocs
find httpdocs -type d -exec chmod g+s {} \;
# Print success message
echo "Done... $FTPUSER is now correct owner of $FOLDERPATH."
fi
# Make sure we are back at the root, so we can continue looping
cd $FOLDERROOT
fi
done
Explanation of code:
Go to the vhosts folder
Loop through the websites
Store the vhosts path, because we are using cd in a loop
If an httpdocs folder exists for the current website, then
set the correct rights on httpdocs and
all underlying folders
Show a success message
cd back to the vhosts folder, so we can continue looping

Append some text to the end of multiple files in Linux

How can I append the following code to the end of numerous php files in a directory and its sub-directories:
</div>
<div id="preloader" style="display:none;position: absolute;top: 90px;margin-left: 265px;">
<img src="ajax-loader.gif"/>
</div>
I have tried with:
echo "my text" >> *.php
But the terminal displays the error:
bash : *.php: ambiguous redirect
I usually use tee because I think it looks a little cleaner and it generally fits on one line.
echo "my text" | tee -a *.php
You don't specify the shell, but you could try the foreach command. Under tcsh (and I'm sure a very similar version is available for bash) you can interactively say something like:
foreach i (*.php)
foreach> echo "my text" >> $i
foreach> end
$i will take on the name of each file each time through the loop.
As always, when doing operations on a large number of files, it's probably a good idea to test them in a small directory with sample files to make sure it works as expected.
Oops ... bash is in the error message (I'll tag your question with it). The equivalent loop would be
for i in *.php
do
echo "my text" >> $i
done
If you want to cover multiple directories below the one where you are, you can specify
*/*.php
rather than *.php
BashFAQ/056 does a decent job of explaining why what you tried doesn't work. Have a look.
Since you're using bash (according to your error), the for command is your friend.
for filename in *.php; do
echo "text" >> "$filename"
done
If you'd like to pull "text" from a file, you could instead do this:
for filename in *.php; do
cat /path/to/sourcefile >> "$filename"
done
Now ... you might have files in subdirectories. If so, you could use the find command to find and process them:
find . -name "*.php" -type f -exec sh -c "cat /path/to/sourcefile >> {}" \;
The find command identifies which files to process using conditions like -name and -type, then the -exec option runs basically the same thing I showed you in the previous "for" loop. The final \; indicates to find that this is the end of arguments to the -exec option.
You can man find for lots more details about this.
The find command is portable and is generally recommended for this kind of activity especially if you want your solution to be portable to other systems. But since you're currently using bash, you may also be able to handle subdirectories using bash's globstar option:
shopt -s globstar
for filename in **/*.php; do
cat /path/to/sourcefile >> "$filename"
done
You can man bash and search for "globstar" for more details about this. This option requires bash version 4 or higher.
NOTE: You may have other problems with what you're doing. PHP scripts don't need to end with a ?>, so you might be adding HTML that the script will try to interpret as PHP code.
You can use sed combined with find. Assume your project tree is
/MyProject/
/MyProject/Page1/file.php
/MyProject/Page2/file.php
etc.
Save the code you want to append in /MyProject/. Call it append.txt.
From /MyProject/ run:
find . -name "*.php" -print | xargs sed -i '$r append.txt'
Explanation:
find does what it says: it looks for all .php files, including in subdirectories
xargs will pass (i.e. run) sed on all the .php files that have just been found
sed will do the appending. '$r append.txt' means go to the end of the file ($) and write (paste) whatever is in append.txt there. Don't forget -i, otherwise it will just print the appended file instead of saving it.
Source: http://www.grymoire.com/unix/Sed.html#uh-37
You can do the following (it works even if there are spaces in your file paths):
#!/bin/bash
# Create a temporary file named /tmp/end_of_my_php.txt
cat << EOF > /tmp/end_of_my_php.txt
</div>
<div id="preloader" style="display:none;position: absolute;top: 90px;margin-left: 265px;">
<img src="ajax-loader.gif"/>
</div>
EOF
find . -type f -name "*.php" | while read the_file
do
echo "Processing $the_file"
#cp "$the_file" "${the_file}.bak" # Uncomment if you want to save a backup of your file
cat /tmp/end_of_my_php.txt >> "$the_file"
done
echo
echo done
PS: You must run the script from the directory you want to browse
Inspired by @Dantastic's answer:
echo "my text" | tee -a file1.txt | tee -a file2.txt
