I am having some trouble using a file descriptor. The end goal is to use flock, because this script updates a file and could be run multiple times in parallel, and I do not want any collisions. The script is called from another script and passed variables.
Call script, "call.sh":
#!/bin/ksh
scriptDir=/home/Scripts
###other stuff happens####
#Call to replacement script
. $scriptDir/replacement.sh var1 var2
Replacement script, "replacement.sh":
#!/bin/ksh
var1=$1
var2=$2
file=/myfile.doc
exec 300>>$file
flock -x 300
##Replacement logic###
When I run call.sh normally or in debug mode (ksh) I get an error:
./call.sh: /replacement.sh[34]: 300: not found
At first I thought maybe the file descriptor needed to be opened in the first script too, so I added:
exec 300>>$file
to the call.sh, but that returned an error like:
./call.sh[28]: 300 : not found
It would be awesome if someone could explain to me what I am missing!
Thanks in advance!
You have an invalid space after the = in file= /myfile.doc
ksh only supports single-digit fds when they are written literally in a redirection. Use 9 instead of 300.
ksh makes non-standard fds non-inheritable, so explicitly redirect the fd to the command.
Putting it all together:
#!/bin/ksh
file=./myfile.doc
exec 9>>$file
flock -x 9 9>&9
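Applied to the script from the question, replacement.sh would look something like this (a sketch; the lock is released when fd 9 is closed, e.g. with exec 9>&- or when the sourcing shell exits):
#!/bin/ksh
var1=$1
var2=$2
file=/myfile.doc
# open the file on a single-digit fd
exec 9>>$file
# pass fd 9 to flock explicitly, since ksh does not hand non-standard fds to child commands
flock -x 9 9>&9
##Replacement logic###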
I am using the following simple shell script to read a transaction file against the contents of a master file, and just output the matching lines from the transaction file.
Transaction file contains:
this is a sample line - first
this is a sample line - second
this is a sample line - nth line
Master File contains:
first
Output:
this is a sample line - first
for tranfile in transactionfile.txt
do
grep -f MasterFile.txt $tranfile >> out.txt
done
PS: When I execute this line outside of the above shell script it works like a charm; it just won't return anything within this shell script.
What am I missing?
Without the script's output, and without knowing what shell you are using, I'm just guessing, but I suspect you either fail to find grep in $PATH (or are using a different version of grep), fail to find one of the files, or are running one shell on the command line and a different shell in the script.
Try adding a shebang to the script with the correct shell, use the full path to grep (usually /bin/grep or /usr/bin/grep), and also add the full path to the files you are reading.
To help debug, I suggest you add a set -x at the top of the script, so the shell will print what it is doing and you can notice what is missing. The set -x may be replaced with a -x option in the shebang (for example #!/bin/bash -x).
set -x also works on the command line; use set +x to disable it.
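For example, a cleaned-up version of the script from the question with a shebang, full paths and tracing enabled might look like this (a sketch; the /path/to locations are placeholders for your real paths):
#!/bin/bash -x
# trace every command so you can see which binary or file is not found
for tranfile in /path/to/transactionfile.txt
do
/bin/grep -f /path/to/MasterFile.txt "$tranfile" >> /path/to/out.txt
done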
I would have done this using awk:
awk 'NR==FNR {a=$0;next} $0~a' master transaction
NR==FNR {a=$0;next} stores the data from the master file in variable a.
$0~a tests every line of the transaction file for whether it contains variable a; if so, the default action prints the line.
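For example, with the sample files from the question saved as master and transaction, the one-liner gives:
$ awk 'NR==FNR {a=$0;next} $0~a' master transaction
this is a sample line - first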
If the master file contains more than one word, test the last field against all the words like this:
awk 'FNR==NR {a[$0];next} $NF in a' master transaction
If the word can be anywhere on the line:
awk 'FNR==NR {a[$0];next} {for (i in a) if ($0~i) print}' master transaction
I'm stuck on a bash script.
I have a config.ini file like this:
#Username
username=user
#Userpassword
userpassword=password
I'm looking to extract this information in a bash script and put it into an associative array. My script looks like:
declare -A array
OIFS=$IFS
IFS='='
grep -vE '^(\s*$|#)' file | while read -r var1 var2
do
array+=([$var1]=$var2)
done
echo ${array[@]}
But the array seems to be empty, because the command echo ${array[@]} gives no output.
Any idea why my script doesn't work? Thanks for your help, and sorry for my bad English.
Common error - "grep | while" causes the while loop to be executed in a subshell, so the variables set inside the loop are not visible in your shell. Use a here string instead:
while read -r var1 var2
do
array+=([$var1]=$var2)
done <<< $(grep -vE '^(\s*$|#)' file)
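An equivalent approach that also keeps the loop in the current shell is process substitution (a sketch; it assumes bash 4+ for the associative array and sets IFS only for the read):
declare -A array
while IFS='=' read -r var1 var2
do
array+=([$var1]=$var2)
done < <(grep -vE '^(\s*$|#)' file)
echo "${array[@]}"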
Assuming the file can be trusted (i.e. the content is regulated and known), the simplest method would be to source the ini file and then use the variable names directly within the script:
. config.ini
You can either use the period (.) as above or the source builtin command
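For example (a minimal sketch, assuming config.ini is in the current directory and contains only valid shell assignments):
#!/bin/bash
. ./config.ini   # or: source ./config.ini
echo "username is $username"
echo "password is $userpassword"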
I have a script in Unix that looks like this:
#!/bin/bash
gcc -osign sign.c
./sign < /usr/share/dict/words | sort | squash > out
Whenever I try to run this script it gives me an error saying that squash is not a valid command. squash is a shell script stored in the same directory as this script and looks like this:
#!/bin/bash
awk -f squash.awk
I have execute permissions set correctly, but for some reason it doesn't run. Is there something else I have to do to make it run as shown? I am rather new to scripting, so any help would be greatly appreciated!
As mentioned in @Biffen's comment, unless . is in your $PATH variable, you need to specify ./squash for the same reason you need to specify ./sign.
When parsing a bare word on the command line, bash checks all the directories listed in $PATH to see if said word is an executable file living inside any of them. Unless . is in $PATH, bash won't find squash.
To avoid this problem, you can tell bash not to go looking for squash by giving bash the complete path to it, namely ./squash.
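Applied to the script from the question, that gives something like this (a sketch, assuming the script is run from the directory containing sign and squash):
#!/bin/bash
gcc -o sign sign.c
./sign < /usr/share/dict/words | sort | ./squash > out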
I am trying to run a biological program called BLASTP which takes in two strings (fasta_GWIDD and fasta_UNIPROT in the code) and compares them. The problem that I am encountering is the use of echo/system in the code. Can anyone suggest what I am missing?
for(i=0;i<index1;i++)
{
sprintf(fasta_GWIDD,">%s\\n%s\n",fasta_name1[i],fasta_seq1[i]);
setenv("GwiddVar", fasta_GWIDD, 1) ;
sprintf(fasta_UNIPROT,">%s\\n%s\n",fasta_name2[i],fasta_seq2[i]);
setenv("UniprotVar", fasta_UNIPROT, 1) ;
system("blastp -query <(echo -e $GwiddVar) -subject<(echo -e $UniprotVar)");
}
The error is:
sh: -c: line 0: syntax error near unexpected token `('
sh: -c: line 0: `blastp -query <(echo -e $GwiddVar) -subject<(echo -e $UniprotVar)'
It seems that the shell does not understand the
<(echo -e $GwiddVar)
syntax. Mind that system() may use a different shell than the one you are used to (csh instead of bash, and so on). It all depends on your OS config files and profile, and I can't guess what you have out there.
By the way, I think you should be able to check which shell is being used by the system() command with either of these:
system("echo $SHELL") // should simply write the path to current shell
system("ps -aux") // look at it and find what is the parent of the PS
etc.
Considering that this was correct on some shell:
blastp -query <(echo -e $GwiddVar) -subject<(echo -e $UniprotVar)
The syntax cited above is apparently meant only to pass the variable as input. I think you are overdoing it. You are using echo -e $GwiddVar to print and re-capture data that you already have at hand in a variable. Have you tried something as simple as:
blastp -query $GwiddVar -subject $UniprotVar
I don't know which shell you are trying to use, but since echo got its data, the result should be exactly the same.
If you are worried about spaces, then various shells usually allow you to use quotation marks:
blastp -query "$GwiddVar" -subject "$UniprotVar"
Of course it depends on the shell. If your program uses a shell that does not like quotation marks, you will have to adapt - not to your shell, but to the shell that system() uses.
Another thing: using system is quite rough. When you have arguments that are difficult to escape correctly, you should be using functions like execve, which take an array of raw strings and pass them directly as ARGV to the process. With these, you do not need to (and should not) add any quotes or escape any spaces in the strings being passed.
sprintf(fasta_GWIDD,">%s\\n%s\n",fasta_name1[i],fasta_seq1[i]);
sprintf(fasta_UNIPROT,">%s\\n%s\n",fasta_name2[i],fasta_seq2[i]);
char *args[6];               /* argv for blastp, terminated by NULL */
args[0] = "blastp";
args[1] = "-query";
args[2] = fasta_GWIDD;
args[3] = "-subject";
args[4] = fasta_UNIPROT;
args[5] = NULL;
char *envp[] = { NULL };     /* empty environment for the child */
if (execve("/path/to/blastp", args, envp) == -1)  /* execve (from <unistd.h>) wants the full path to blastp */
    perror("execve");        /* check the error (if any) and react */
However! Note that execve comes from the exec family, so it replaces your current process. That is why I only wrote a sketch and am not showing whole ready-to-run code. You will probably need to fork() before it and then wait for the child in the outer loop.
So, I'd first check the shell and syntax ;)
From man 3 system:
DESCRIPTION
system() executes a command specified in command by calling /bin/sh -c
command, and returns after the command has been completed.
On many systems, /bin/sh is not bash, and even when it is, it is a different configuration of bash (bash typically operates differently if it is invoked as /bin/sh). So you are passing bash syntax to a shell that is either not bash or doesn't allow the full set of bash-isms... Also, there's a space missing after -subject that might be confusing things as well... And I'm not entirely sure environment variables are expanded within system() strings...
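If the process-substitution syntax is really needed, one workaround (not from the answers above, just a sketch) is to have system() start bash explicitly, so /bin/sh only has to hand off a command line like:
bash -c 'blastp -query <(echo -e "$GwiddVar") -subject <(echo -e "$UniprotVar")'
Inside the C string literal passed to system() the inner double quotes would of course need to be escaped.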
I have a few files in a folder with a name pattern in which one of the sections is variable.
file1.abc.12.xyz
file2.abc.14.xyz
file3.abc.98.xyz
So the third (numeric) section in the above three file names changes every day.
Now, I have a script which does some tasks on the file data. However, before doing the work, I want to check whether the file exists or not and then do the task:
if(file exist) then
//do this
fi
I wrote the below code using the wildcard '*' for the numeric section:
export mydir=/myprog/mydata
if[find $mydir/file1.abc.*.xyz]; then
# my tasks here
fi
However, it is not working and gives the below error:
[find: not found [No such file or directory]
Using -f instead of find does not work either:
if[-f $mydir/file1.abc.*.xyz]; then
# my tasks here
fi
What am I doing wrong here ? I am using korn shell.
Thanks for reading!
for i in file1.abc.*.xyz ; do
# use $i here ...
done
I was not using spaces around the Unix keywords...
For example, "if[-f" should actually be "if [ -f", with spaces before and after the bracket.
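Combining the loop from the answer with the corrected spacing, the check could look like this (a sketch in ksh; it assumes each file matched by the glob should be processed):
#!/bin/ksh
mydir=/myprog/mydata
for f in $mydir/file1.abc.*.xyz ; do
if [ -f "$f" ] ; then
# my tasks here
echo "processing $f"
fi
done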