Creating variables from command line in BASH

I'm a very new user to bash, so bear with me.
I'm trying to run a bash script that will take inputs from the command line, and then run a C program which pipes its output to other C programs.
For example, at command line I would enter as follows:
$ ./script.sh -flag file1 < file2
Then within the script I would have:
./c_program -flag file1 < file2 | other_c_program
The problem is, -flag, file1 and file2 need to be variable.
Now I know that for -flag and file1 this is fairly simple: I can do
FLAG=$1
FILE1=$2
./c_program $FLAG $FILE1
My problem is: is there a way to assign a variable within the script to file2?
EDIT:
It's a requirement of the program that the script is called as
$ ./script.sh -flag file1 < file2

You can simply run ./c_program exactly as you have it, and it will inherit stdin from the parent script. It will read from wherever its parent process is reading from.
FLAG=$1
FILE1=$2
./c_program "$FLAG" "$FILE1" | other_c_program # `< file2' is implicit
Also, it's a good idea to quote variable expansions. That way if $FILE1 contains whitespace or other tricky characters the script will still work.
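To see the inheritance concretely, here's a minimal sketch in which cat | wc -l stands in for the ./c_program | other_c_program pipeline (the stand-ins and file names are placeholders, not the asker's real programs):

```shell
# Sample data for the `< file2` redirection.
printf 'one\ntwo\nthree\n' > file2

# A stand-in for script.sh: note there is NO redirection inside it.
cat > script_demo.sh <<'EOF'
#!/bin/sh
FLAG=$1
FILE1=$2
# `cat | wc -l` plays the role of `./c_program "$FLAG" "$FILE1" | other_c_program`.
# cat inherits the script's stdin, i.e. whatever `< file2` pointed at.
cat | wc -l
EOF

sh script_demo.sh -flag file1 < file2   # outputs file2's line count: 3
```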

There is no simple way to do what you are asking. This is because when you run this:
$ ./script.sh -flag file1 < file2
The shell which interprets the command will open file2 for reading and attach it to script.sh's standard input. Your script will never know what the file's name was, and therefore cannot store that name in a variable. However, you could invoke your script this way:
$ ./script.sh -flag file1 file2
Then it is quite straightforward: you already know how to get file1, and file2 is retrieved the same way, as $3.
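A sketch of that alternative, with wc -l standing in for the real pipeline (a hypothetical stand-in; the positional-parameter handling is the point):

```shell
cat > script_demo2.sh <<'EOF'
#!/bin/sh
FLAG=$1
FILE1=$2
FILE2=$3          # now the script knows the data file's name
# The real invocation would be:
#   ./c_program "$FLAG" "$FILE1" < "$FILE2" | other_c_program
wc -l < "$FILE2"  # stand-in for that pipeline
EOF

printf 'x\ny\n' > file2
sh script_demo2.sh -flag file1 file2    # note: no `<` this time
```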

Related

Shell script [Use grep to look up files and extract the specific pattern lines]

I am using the following simple shell script to read a transaction file against the contents of a master file, and just output the matching lines from the transaction file.
Transaction file contains:
this is a sample line - first
this is a sample line - second
this is a sample line - nth line
Master File contains:
first
Output:
this is a sample line - first
for tranfile in transactionfile.txt
do
grep -f MasterFile.txt $tranfile >> out.txt
done
PS: When I execute this line outside of the above shell script it works like a charm; it just won't return anything within this shell script.
What am I missing?
Without the script's output text or knowing which shell you're using, I'm just guessing, but I suspect you either fail to find grep in $PATH (or are using a different version of grep), fail to find one of the files, or are executing one shell on the command line and a different shell in the script.
Try adding a shebang to the script with the correct shell, use the full path to grep (usually /bin/grep or /usr/bin/grep), and also use the full paths to the files you are reading.
To help debug, I suggest adding set -x at the top of the script, so the shell will print what it is doing and you can see what is missing. The set -x may be replaced with a -x option in the shebang (for example #!/bin/bash -x).
set -x also works on the command line; use set +x to disable it.
I would have done this using awk
awk 'NR==FNR {a=$0;next} $0~a' master transaction
NR==FNR {a=$0;next} stores the data from the master file in variable a.
$0~a tests every line of transaction for whether it matches variable a; if so, the default action prints the line.
If the master file contains more than one word, test the last field against all the words like this:
awk 'FNR==NR {a[$0];next} $NF in a' master transaction
If word can be anywhere on the line:
awk 'FNR==NR {a[$0];next} {for (i in a) if ($0~i) print}' master transaction
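For a quick check, the question's sample data can be recreated and run through both the one-word awk version and plain grep -f; with a single-word master file they agree:

```shell
printf 'this is a sample line - first\nthis is a sample line - second\n' > transaction
printf 'first\n' > master

# awk version: stores master's line in `a`, prints transaction lines matching it.
awk 'NR==FNR {a=$0; next} $0 ~ a' master transaction

# grep version from the question, run directly on the file.
grep -f master transaction
```

Both print only `this is a sample line - first`.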

Ubuntu how to combine multiple text files into one with terminal

I want to combine multiple text files into one text file. Is there any command in ubuntu terminal to do this, or do I have to do it manually?
Try cat like
cat file1 file2 file3 > outputFile
cat stands for concatenation.
> is for output redirection.
If outputFile already has something in it and you wish to append to it the contents of other files, use
cat file1 file2 file3 >> outputFile
as > would erase the old contents of outputFile if it already existed.
Have a look here as well.
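A self-contained run (the file names are just placeholders):

```shell
printf 'alpha\n' > file1
printf 'beta\n'  > file2
printf 'gamma\n' > file3

cat file1 file2 file3 > outputFile   # `>` creates/overwrites outputFile
cat file1 >> outputFile              # `>>` appends instead
cat outputFile                       # alpha, beta, gamma, alpha
```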

How to find duplicate lines across 2 different files? Unix

From the unix terminal, we can use diff file1 file2 to find the difference between two files. Is there a similar command to show the similarity across two files? (Many pipes allowed if necessary.)
Each line of each file contains a string sentence; the files are sorted with duplicate lines removed using sort file1 | uniq.
file1: http://pastebin.com/taRcegVn
file2: http://pastebin.com/2fXeMrHQ
And the output should output the lines that appears in both files.
output: http://pastebin.com/FnjXFshs
I am able to use python to do it as such, but I think it's a little too much to put into the terminal:
x = set([i.strip() for i in open('wn-rb.dic')])
y = set([i.strip() for i in open('wn-s.dic')])
z = x.intersection(y)
outfile = open('reverse-diff.out', 'w')
for i in z:
    print>>outfile, i
If you want to get a list of repeated lines without resorting to AWK, you can use -d flag to uniq:
sort file1 file2 | uniq -d
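A quick demonstration with throwaway data (two files, no duplicates within either, one line in common):

```shell
printf 'apple\nbanana\n' > file1
printf 'banana\ncherry\n' > file2

sort file1 file2 | uniq -d   # prints: banana
```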
As @tjameson mentioned, it may be solved in another thread.
Just would like to post another solution:
sort file1 file2 | awk 'dup[$0]++ == 1'
Refer to an awk guide for the basics: when the pattern of a line evaluates to true, that line is printed.
dup[$0] is a hash table in which each key is a line of the input. The value starts at 0 and is incremented each time the line occurs; when the line occurs a second time, the pre-increment value is 1, so dup[$0]++ == 1 is true and the line is printed.
Note that this only works when there are no duplicates within either file, as was specified in the question.
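That caveat is easy to demonstrate: a line duplicated within one file is reported even though the other file never contains it:

```shell
printf 'apple\napple\n' > f1     # "apple" twice in the same file
printf 'cherry\n' > f2           # no "apple" here

sort f1 f2 | awk 'dup[$0]++ == 1'   # prints: apple (a false positive)
sort f1 f2 | uniq -d                # same false positive
```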

Cat command does not merge files in script

I am not sure what I am doing wrong in following situation - I want to merge multiple files into one, but I want to merge a lot of them, so I want to use something like this:
*input file* that contains lines
file1.txt >> mergedfiles1.txt
file2.txt >> mergedfiles1.txt
file3.txt >> mergedfiles2.txt
file4.txt >> mergedfiles2.txt
...
If I try to use simple script as I usually do
cat *input file* | while read i
do
cat $i
done
it actually doesn't merge the files, but writes
*content* of file1.txt
cat: >> : file does not exist
cat: mergedfiles1.txt : file does not exist
I have tried to put command cat right at the beginning of each line of the input file but it did not work as well.
I guess it is a simple mistake, but I am not able to find a solution.
Thanks for help.
You can merge your three files using cat this way:
cat file1 file2 file3 > merged_files
You need to use it like this:
cat input_file > output_file
That's because bash treats whitespace as a word separator, so the >> and the file names on each line come apart.
So you've got to manage this.
Actually, you can remove the >> from your input file and do something like this :
k=0
for line in $(cat inputFile); do
if [ $k -eq 0 ]; then
src=$line
let k=$k+1
else
cat "$src" >> "$line"
k=0
fi
done
It's been a while since my last bash script, but the logic is pretty simple.
Since bash splits on spaces as well as newlines, you have to keep a counter to know when the original line is really over.
So we're using k.
Having k = 0 means that we're on the first half of the line, so we need to store the filename in a variable (src).
When k is 1, we're on the second half of the line, so we can actually execute the cat command.
My code will work if your text is like :
file1.txt mergedfiles1.txt
file2.txt mergedfiles1.txt
file3.txt mergedfiles2.txt
file4.txt mergedfiles2.txt
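Putting the loop together with a throwaway input file of that two-column form (all file names here are hypothetical):

```shell
# "source destination" pairs, whitespace-separated, no >> tokens.
printf 'file1.txt merged1.txt\nfile2.txt merged1.txt\n' > inputFile
printf 'AAA\n' > file1.txt
printf 'BBB\n' > file2.txt
rm -f merged1.txt                # start with a clean destination

k=0
for line in $(cat inputFile); do
  if [ $k -eq 0 ]; then
    src=$line                    # first word of the pair: the source
    k=1
  else
    cat "$src" >> "$line"        # second word: the destination
    k=0
  fi
done

cat merged1.txt                  # AAA then BBB
```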
I wasn't quite sure what you wanted to do, but thought this example might help:
#!/bin/bash
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")
for file in *
do
echo "$file"
done
IFS=$SAVEIFS
The manipulation of IFS is so you can pick up and process filenames containing spaces, which are more common coming out of Windows reports than -- at least from my experience -- Linux (Unix).

linux appending files

I have a program that I generally run like this: a.out < guifpgm.txt > gui.html
where a.out is my compiled C program, guifpgm.txt is the input file, and gui.html is the output file. But what I really want to do is take the output from a.out < guifpgm.txt and, rather than just replacing whatever is in gui.html with the output, place the output in the middle of the file.
So something like this:
gui.html contains the following to start: <some html>CODEGOESHERE<some more html>
output of a.out: alert("this is some dynamically generated stuff");
I want gui.html to contain the following: <some html>alert("this is some dynamically generated stuff");<some more html>
How can I do this?
Thanks!
Sounds like you want to replace text. For that, use sed, not C:
sed -i 's/CODEGOESHERE/alert("this is some dynamically generated stuff")/g' gui.html
If you really need to run a.out to get its output, then do something like:
sed -i "s/CODEGOESHERE/$(./a.out)/g" gui.html
(this assumes the output of a.out contains no slashes or newlines).
I ended up using the cat command: a.out < guifpgm.txt > output.txt, then cat before.txt output.txt after.txt > final.txt.
A simplification of your cat method would be to use
./a.out < guifpgm.txt | cat header.txt - footer.txt > final.txt
The - is replaced with the input from STDIN. This cuts down somewhat on the intermediate files. Using > instead of >> overwrites the contents of final.txt, rather than appending.
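A self-contained run of that pipeline, with echo standing in for ./a.out < guifpgm.txt (the file contents are made-up placeholders):

```shell
printf '<some html>\n' > header.txt
printf '<some more html>\n' > footer.txt

# `-` makes cat read stdin between the two files.
echo 'alert("generated");' | cat header.txt - footer.txt > final.txt
cat final.txt
```

final.txt now holds the header, the generated line, and the footer, in that order.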
Just for the fun of it... the awk solution is as follows for program a.out, template file template that needs to replace the line "REPLACE ME". This puts the resulting output in output_file.txt.
awk '/^REPLACE ME$/{while("./a.out <input.txt"|getline){print $0}getline} {print $0}' template > output_file.txt
EDIT: Minor correction to add input file, remove UUOC, and fix a minor bug (the last line of a.out was being printed twice)
Alternatively... perl:
perl -pe '$_=`./a.out <input.txt` if /REPLACE ME/' template > output_file.txt
Although dedicated perlers could probably do better.
