Using wildcard character in bash scripting - database

I'm creating a bash script that will load CSV files using SQL*Loader. Please refer to the code below:
#!/bin/bash
FILENAME = '/u02/logs/$(date -d '2 days ago' +%Y-%m-%d)*.csv'
# LOAD CSV FILE USING SQL*LOADER
sqlldr username/password#localhost control=control.ctl data=$FILENAME
However, when I try to run this script, I received the following error: SQL*Loader-500: Unable to open file (/u02/logs/*2011-11-06*.csv). I figured out that the problem is my * wildcard, which is being passed along as a literal string instead of being expanded by bash.
Is there a way to tell the bash that my asterisk (*) is a wildcard and not a string?
Thank you for your support.

I don't know about sqlldr, but I think you can try:
#!/bin/bash
FILENAME="/u02/logs/$(date -d '2 days ago' +%Y-%m-%d)*.csv"
# LOAD CSV FILE USING SQL*LOADER
for fname in $FILENAME; do
sqlldr username/password#localhost control=control.ctl data=$fname
done
hope it helps

Your use of single quotes is the problem. The spaces around the equals sign are also wrong; bash assignments must not have them. So this is what I'd do:
FILENAME=/u02/logs/"$(date -d '2 days ago' +%Y-%m-%d)*.csv"
That should do it. You don't need quotes around the first part since it's literal. Only use double quotes when you need the interpreter to do some interpolation, and single quotes when you don't want the interpreter to touch the contents at all.
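To see the difference between the quoting styles, here is a quick illustration (my own addition, assuming GNU date):
echo 'logs/$(date -d "2 days ago" +%Y-%m-%d)*.csv'
# single quotes: prints the $(...) literally, nothing is expanded
echo "logs/$(date -d '2 days ago' +%Y-%m-%d)*.csv"
# double quotes: the substitution runs and this prints something like logs/2011-11-06*.csv
# the * is kept literal here; it only matches files when the variable is later expanded unquoted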

Just use
FILE=/u02/logs/$(date -d '2 days ago' +%Y-%m-%d)*.csv
I also notice an additional * after logs/ in your error message that is not in your code; adjust accordingly.
The * will stay in the filename if no file matches the wildcard pattern. Also, be careful when more than one matching file exists.
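If you want to guard against the no-match case, one possible sketch (my own, assuming bash and a single expected file per day):
set -- /u02/logs/$(date -d '2 days ago' +%Y-%m-%d)*.csv
# if nothing matched, $1 still holds the literal pattern including the *
if [ -e "$1" ]; then
    sqlldr username/password#localhost control=control.ctl data="$1"
else
    echo "no CSV file found for that date" >&2
fi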

Maybe you can try:
sqlldr username/password#localhost control=control.ctl data="$FILENAME"

Related

Accessing Variable effectively from shell script

I have an INI configuration file and need to write a shell script around it. What is the easiest way to access all of its variables from the shell script? Can I use an array or something? Right now I'm planning to count the [] section headers and then pull the variables out one by one with awk. Please suggest an easier, more effective approach.
cat app.ini
Below is the output of my sample configuration file; there can be any number (N) of blocks.
[APP1]
name=Application1
StatusScript=/home/status_APP1.sh
startScript=/home/start_APP1.sh
stopScript=/home/stop_APP1.sh
restartScript=/home/restart.APP1.sh
logdir=/log/APP1/
[APP2]
name=Application2
StatusScript=/home/status_APP2.sh
startScript=/home/start_APP2.sh
stopScript=/home/stop_APP2.sh
restartScript=/home/restart.APP2.sh
logdir=/log/APP2/
.
.
.
.
.
[APPN]
name=ApplicationN
StatusScript=/home/status_APPN.sh
startScript=/home/start_APPN.sh
stopScript=/home/stop_APPN.sh
restartScript=/home/restart.APPN.sh
logdir=/log/APPN/
Consider using a library like bash-ini-parser https://github.com/albfan/bash-ini-parser. It covers a lot of nuances like indentation, whitespace, comments, etc.
The example for your case may look like this:
#!/bin/bash
. bash-ini-parser        # source the parser functions
cfg_parser app.ini       # read and index the ini file
cfg_section_APP1         # load the variables of [APP1]
echo $name
cfg_section_APP2         # switch to [APP2]
echo $logdir
cfg_section_APPN
echo $logdir
The line below helps locate a particular value in a given section:
sed -nr "/^\[APP1\]/ { :l /^name[ ]*=/ { s/.*=[ ]*//; p; q;}; n; b l;}" app.ini
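If you would rather not pull in a dependency, a minimal awk-based lookup is also possible. This is only a sketch that assumes the plain key=value layout shown above (the ini_get helper name is made up):
#!/bin/bash
# ini_get FILE SECTION KEY -> prints the value of KEY inside [SECTION]
ini_get() {
    awk -F '=' -v section="[$2]" -v key="$3" '
        $0 == section        { found = 1; next }   # entered the wanted section
        /^\[/                { found = 0 }          # any other header ends it
        found && $1 == key   { print $2; exit }     # print the value and stop
    ' "$1"
}
ini_get app.ini APP1 name      # -> Application1
ini_get app.ini APP2 logdir    # -> /log/APP2/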

Run C program from shell script [duplicate]

I have a script in unix that looks like this:
#!/bin/bash
gcc -osign sign.c
./sign < /usr/share/dict/words | sort | squash > out
Whenever I try to run this script it gives me an error saying that squash is not a valid command. squash is a shell script stored in the same directory as this script and looks like this:
#!/bin/bash
awk -f squash.awk
I have execute permissions set correctly but for some reason it doesn't run. Is there something else I have to do to make it able to run like shown? I am rather new to scripting so any help would be greatly appreciated!
As mentioned in @Biffen's comment, unless . is in your $PATH variable, you need to specify ./squash for the same reason you need to specify ./sign.
When parsing a bare word on the command line, bash checks all the directories listed in $PATH to see if said word is an executable file living inside any of them. Unless . is in $PATH, bash won't find squash.
To avoid this problem, you can tell bash not to go looking for squash by giving bash the complete path to it, namely ./squash.
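Applied to the script in the question, that just means (assuming squash sits next to the calling script and is executable):
#!/bin/bash
gcc -o sign sign.c
./sign < /usr/share/dict/words | sort | ./squash > out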

Exact command for starting a batch file by using powershell

I know this question has been asked before and I found a thread on here which almost gives me the solution I need.
Here is the link: How to run batch file using powershell
But this only works when I write out the full path. For example:
c:\Users\Administrator\Desktop\dcp_bearbeitet\start.bat -p c:\Users\Administrator\Desktop\dcp_bearbeitet\start.prop
What I want is a solution that accepts a path built from variables, like this one:
c:\Users\Administrator\Desktop\dcp_bearbeitet\$title\start.bat -p c:\Users\Administrator\Desktop\dcp_bearbeitet\$title\start.prop
Here $title contains the name of the file I am using in this case. I know that I can pass another parameter to the -p option and that this works, but unfortunately when I try the same approach for the first part of the command I always get an error message.
I hope you guys know a way to solve this problem.
I think Invoke-Expression could help here.
Just construct your path like you want it to be, for example:
$title = "file"
$path = "C:\Users\Administrator\Desktop\dcp_bearbeitet\$title\start.bat -p c:\Users\Administrator\Desktop\dcp_bearbeitet\$title\start.prop"
and then invoke it:
Invoke-Expression $path
Regards Paul

finding a file in unix using wildcards in file name

I have a few files in a folder with a name pattern in which one section is variable.
file1.abc.12.xyz
file2.abc.14.xyz
file3.abc.98.xyz
So the third (numeric) section in the above three file names changes every day.
Now, I have a script which does some tasks on the file data. However, before doing the work, I want to check whether the file exists and only then do the task:
if(file exist) then
//do this
fi
I wrote the code below, using the '*' wildcard in the numeric section:
export mydir=/myprog/mydata
if[find $mydir/file1.abc.*.xyz]; then
# my tasks here
fi
However, it is not working and giving below error:
[find: not found [No such file or directory]
Using -f instead of find does not work either:
if[-f $mydir/file1.abc.*.xyz]; then
# my tasks here
fi
What am I doing wrong here? I am using the Korn shell.
Thanks for reading!
for i in file1.abc.*.xyz ; do
# use $i here ...
done
It turned out I was not putting spaces around the test brackets.
For example, "if[-f" should actually be "if [ -f", with spaces before and after the bracket.
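Putting it together, a corrected check might look like this (a sketch for ksh/bash that simply tests the first match):
mydir=/myprog/mydata
# expand the pattern into the positional parameters; with no match, $1 keeps the literal pattern
set -- $mydir/file1.abc.*.xyz
if [ -f "$1" ]; then
    # my tasks here
    echo "processing $1"
fi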

<0xEF,0xBB,0xBF> character showing up in files. How to remove them?

I am compressing JavaScript files, and the compressor is complaining that my files have the  character in them.
How can I search for these characters and remove them?
You can easily remove them using Vim. Here are the steps:
1) In your terminal, open the file using vim:
vim file_name
2) Remove all BOM characters:
:set nobomb
3) Save the file:
:wq
Another method to remove those characters - using Vim:
vim -b fileName
Now those "hidden" characters are visible (<feff>) and can be removed.
Thanks for the previous answers, here's a sed(1) variant just in case:
sed '1s/^\xEF\xBB\xBF//'
On Unix/Linux:
sed 's/\xEF\xBB\xBF//' < inputfile > outputfile
On MacOSX
sed $'s/\xEF\xBB\xBF//' < inputfile > outputfile
Notice the $ after sed for mac.
On Windows
There is Super Sed, an enhanced version of sed. For Windows it is a standalone .exe intended to be run from the command line.
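If you have GNU sed, the same substitution can also be done in place (an extra note, not part of the original answer):
sed -i '1s/^\xEF\xBB\xBF//' file.js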
perl -pi~ -CSD -e 's/^\x{feff}//' file1.js path/to/file2.js
I would assume the tool will break if you have other utf-8 in your files, but if not, perhaps this workaround can help you. (Untested ...)
Edit: added the -CSD option, as per tchrist's comment.
Using tail might be easier:
tail --bytes=+4 filename > new_filename
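Since this drops the first three bytes unconditionally, you may want to strip them only when a BOM is actually present (a hedged sketch assuming GNU coreutils):
if [ "$(head -c 3 filename | od -An -tx1 | tr -d ' ')" = "efbbbf" ]; then
    tail --bytes=+4 filename > new_filename
fi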
@tripleee's solution didn't work for me. But changing the file encoding to ASCII and then back to UTF-8 did the trick :-)
I've used vimgrep for this
:vim "[\uFEFF]" *
and also the normal Vim search command:
/[\uFEFF]
The 'file' command shows whether the BOM is present.
For example, 'file myfile.xml' displays: "XML 1.0 document, UTF-8 Unicode (with BOM) text, with very long lines, with CRLF line terminators"
dos2unix will remove the BOM.
I suggest using the dos2unix tool; try running dos2unix ./thefile.js.
If necessary, you can handle multiple files like this:
find . -type f -exec dos2unix {} +
Regards.
On Windows you could use the backported recode utility from UnxUtils.
In Sublime Text you can install the Highlighter package and then customize the regular expression in your user settings.
Here I added \uFEFF to the end of the highlighter_regex property.
{
    "highlighter_enabled": true,
    "highlighter_regex": "(\t+ +)|( +\t+)|[\u2026\u2018\u2019\u201c\u201d\u2013\u2014\uFEFF]|[\t ]+$",
    "highlighter_scope_name": "invalid",
    "highlighter_max_file_size": 1048576,
    "highlighter_delay": 3000
}
To overwrite the default package settings place the file here:
~/.config/sublime-text-3/Packages/User/highlighter.sublime-settings
Save the settings file itself without a Unicode signature (BOM).
