I am fairly beginner-level at shell scripts, and the details follow.
I'm looking for the best way to fire SQL queries and carry out some logic based on that data. I've used the following snippet:
shellvariable=$(sqlplus $user/$passwd <<EOF
select count(1) from table1;
EOF
)
if [ "$shellvariable" -ne 0 ]; then
<do something>
fi
Is there a better way to carry out the same task?
You're on the right track. sqlplus is the best way to interact with the database when you are shell scripting, but two things to note:
Use the "-S" (silent) parameter to stop sqlplus from printing all of its application info.
To read data directly into a variable, you will need some sqlplus environment settings to prune the output back to just what you want (a sketch follows below).
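A minimal sketch along those lines, reusing the $user/$passwd variables and table1 from your snippet (untested, so adjust as needed):

count=$(sqlplus -S $user/$passwd <<EOF
SET PAGESIZE 0
SET FEEDBACK OFF
SET VERIFY OFF
SET HEADING OFF
select count(1) from table1;
EOF
)
# With -S and the SET commands above, $count should contain just the number.
if [ "$count" -ne 0 ]; then
  echo "table1 has rows"   # put your real logic here
fi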
For any DBAs who are learning shell scripting to help them manage and automate Oracle database administration, I would highly recommend Jon Emmons' book Oracle Shell Scripting. It provides a great introduction to shell scripting, in the context of tasks that are really useful and interesting to DBAs.
One final note: if you are doing anything more than a simple DBA task, I would recommend not using shell scripts, but rather a scripting language that has proper database support. Perl is a good option for Oracle, since it is installed with the database.
Here's an example of a script for Oracle done in both bash and Perl. From the shell version, here's how it reads a specific value into a shell variable:
alertlog=$(sqlplus -S \/ as sysdba 2> /dev/null <<EOF
SET NEWPAGE 0
SET SPACE 0
SET LINESIZE 80
SET PAGESIZE 0
SET ECHO OFF
SET FEEDBACK OFF
SET VERIFY OFF
SET HEADING OFF
SELECT value
FROM v\$parameter
WHERE name = 'background_dump_dest';
EOF
)
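The captured value can then be used like any other shell variable; for example, to peek at the alert log (this assumes ORACLE_SID is set and that the text alert log lives in that directory, as it does in typical 10g/11g setups):

tail -100 "$alertlog/alert_${ORACLE_SID}.log"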
A project I'm working on at work involves modifying one of the subsystems so that data currently stored in files is stored in, and pulled from, the database instead. Each of the files is a single, sometimes large, chunk of custom (XML-based) script generated by another custom tool.
Conceptually, I'm looking for an easy way to do something like:
For Each file in folder_and_subfolders
INSERT INTO table
(script_name, version_num, script )
VALUES
({file_name}, 1, {file_contents})
;
Next
Preferably on an entire directory tree at once.
If there's no easy way to do this via T-SQL, I can write a utility to do the job, but I'd prefer something that didn't require having to write another custom tool that will only be used once.
So, I don't have SQL Server installed and therefore can't test this, but if you are looking for a simple batch file that could do what you're after, I'd suggest something like the following might well help:
@echo off
SET xmldir=./myxmlfiles/live/here/
echo --- Processing files
for %%f in ("%xmldir%*.xml") do (echo Running %%f.... && sqlcmd -I -U %1 -P %2 -S %3 -d %4 -v filename="%xmldir%%%f" -i ProcessFile.sql)
I'm not sure how much you know about sqlcmd, but it is a command line tool that is generally provided by SQL Server. It will allow you to run SQL commands, or in the case above, run a script which is indicated by the -i parameter. I am assuming that you'd place your SQL statement in there to perform your additions to the table.
The other parameters to sqlcmd are described below:
-I sets QUOTED_IDENTIFIER on (you may or may not need this. I did for an earlier issue I faced with sqlcmd and QUOTED_IDENTIFIER)
-U sets the database username
-P sets the database password
-S sets the database server
-d sets the database to connect to
-v is the interesting one here as it lets you pass parameters to your script. Note that the MSDN page describing this states that if your path or filename contains spaces, you'll need to enclose it in quotes, so check that out. Basically, though, you'd be able to refer to the parameter inside your SQL script (ProcessFile.sql) like INSERT INTO mytable (file_name) VALUES ('$(filename)'); a rough sketch of such a script follows below.
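For reference, a ProcessFile.sql along those lines might look something like this. This is only a sketch (I haven't tested it either): it borrows the column names from the question, uses a made-up table name, and pulls the whole file contents in with OPENROWSET(BULK ... SINGLE_CLOB), which requires the path to be readable by the SQL Server service account:

-- ProcessFile.sql (hypothetical sketch): insert the file passed via -v filename=...
INSERT INTO mytable (script_name, version_num, script)
SELECT '$(filename)', 1, f.BulkColumn
FROM OPENROWSET(BULK '$(filename)', SINGLE_CLOB) AS f;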
You'd have to use the logic described in the answer from my previous comment to ensure
My current scenario is like this:
I need to login to sqlplus from a shell script to call a stored procedure.
After that I need to create a CSV file by SPOOLING data from a table.
Then I need to check whether the CSV file has been created in a particular directory and depending on the result an update query needs to be run.
I know that this can be checked within sqlplus with the help of the UTL_FILE package, but unfortunately, due to client policies, access to this package is restricted in the current system.
Another way is to exit from sqlplus, perform the file check in UNIX, and then log in to sqlplus again to perform the remaining actions. But I believe this would result in slower execution, and performance is an important factor in this implementation as the tables contain huge volumes of data (millions of rows).
So is there any other way to check this from sqlplus without exiting from the current session?
System Info:
OS - Red Hat Enterprise Linux
Database - Oracle 11g
If the file is on the same machine that you're running SQL*Plus on, you could potentially use the host command.
If the file you're checking is the same one you're spooling to, it must exist anyway, or you would have got an SP error of some kind; but if you do want to check the same file for some reason, and assuming you have a substitution variable with the file name:
define csv_file=/path/to/spool.csv
-- call procedure
spool &csv_file
-- do query
spool off
host ls &csv_file
update your_table
set foo=bar
where &_rc = 0;
If the file exists when the host command is run, the _rc substitution variable will be set to zero. If the file doesn't exist or isn't readable for any reason it will be something else - e.g. 2 if the file just doesn't exist. Adding the check &_rc = 0 to your update will mean no rows are updated if there was an error. (You can of course still have whatever other conditions you need for the update).
You could suppress the display of the file name by adding 1>/dev/null to the host command string; and could also suppress any error messages by also adding 2>/dev/null, though you might want to see those.
The documentation warns against using &_rc as it isn't portable; but it works on RHEL so as long as you don't need your script to be portable to other operating systems this may be good enough for you. What you can't do, though, is do anything with the contents of the file, or interpret anything about it. All you have available is the return code from the command you run. If you need anything more sophisticated you could call a script that generates specific return codes, but that's getting a bit messy.
I'm trying to set up a batch script that basically runs a SQL statement against a database, and if the script returns results it will follow some logic.
Is there a way to have SQLCMD actually return the number of rows it found, or something similar?
I see that I can have the output displayed on the screen or a file, but is there a way to have it put it into a variable so I can have the script evaluate the variable? For example:
SQLCMD -q "select count(*) from active_connections" -r #varactive
IF #varactive > 0 THEN
<do things>
ELSE END
Or would I need to switch to Powershell to handle this sort of logic?
While @Gary is technically correct that the only thing returned is the ERRORLEVEL, sqlcmd does also display its results to STDOUT. Armed with that, you could do something like this in a batch file:
set SERVERNAME=yoursqlserver
for /f "skip=2" %%x in ('sqlcmd -S %SERVERNAME% -Q "select count(*) from active_connections" ^| findstr /v /c:"rows affected"') do set COUNT=%%x
echo There are %COUNT% records in the active_connections table.
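From there the batch file can branch on %COUNT% with a normal numeric comparison (GTR is batch's greater-than operator); a rough sketch:

if %COUNT% GTR 0 (
    echo Found %COUNT% active connections - do things here
) else (
    echo No active connections
)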
See Docs for sqlcmd and you will see quite a few options you probably never paid attention to.
The only thing an executable "returns" to the batch script environment is the ERRORLEVEL. For sqlcmd you need the -b option to set this (based on the SQL Server error level); see the sketch below.
If you use the -m option, you can control the error messages sent to stdout -- I can't test at the moment, but I think this includes the rows affected message (a level 0 error perhaps). You would then have to parse this too (ugly in batch scripts).
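As a rough, untested illustration of the -b idea: have the query itself raise an error when the condition you care about isn't met, and let -b turn that into a non-zero ERRORLEVEL (the server name and error severity here are placeholders):

sqlcmd -b -S %SERVERNAME% -Q "IF (SELECT COUNT(*) FROM active_connections) = 0 RAISERROR('no active connections', 11, 1)"
if errorlevel 1 (
    echo No active connections found
) else (
    echo Active connections exist - do things here
)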
This sounds like a real kludge at best to me; you are likely better off using a better scripting environment. PowerShell, Perl, Python, etc. are all more powerful, and you can find plenty of examples on-line.
Batch is best when you have a "no-deployment" requirement or your needs are simple. It's easy to hit the wall as needs change.
I am trying to get a couple of scripts to work with each other, but I am not entirely familiar with the if-then commands. I am using wizapp and I have my info ready to go, but I don't know how to map a specific location based on the output of wizapp. For instance:
if %siteid%=="0"
How do I map that to a drive? I have 10 different drives that have to be mapped using that info, and I am lost; siteid will obviously be different in each if-then statement.
This is relatively easy to do. I will provide manual instructions as it is extremely useful to learn and will improve your coding skills.
C:\windows\system32> net view
Server Name Remark
---------------------------------------------------------------------------------
\\PC1
\\PC2
\\PC3
\\PC4
\\PC5
\\PC6
\\PC7
\\PC8
\\PC9
\\SERVER
The command completed successfully.
C:\windows\system32> net view \\PC1
Shared resources at \\PC1
Share name Type Used as Comment
-------------------------------------------------------------------------------
SharedDocs Disk
The command completed successfully.
C:\windows\system32> net use X: \\PC1\SharedDocs
The command completed successfully.
Now open up My Computer and you'll see that the PC1 share is mapped as a drive (X:, or whichever free drive letter A-Z you chose) on your computer.
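To tie that back to the wizapp output: once siteid is set, a batch script can map a different share for each value. A rough sketch (the server, share names, and drive letters below are made up; substitute your own):

rem map a drive depending on the siteid value reported by wizapp
if "%siteid%"=="0" net use X: \\SERVER\Site0Share
if "%siteid%"=="1" net use Y: \\SERVER\Site1Share
if "%siteid%"=="2" net use Z: \\SERVER\Site2Share
rem ...and so on for the remaining site ids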
In Oracle you can use &&VAR_NAME in a script and then the script will ask you for that value when you run it.
In SQLSERVER you can use $(VAR_NAME) and reference a property file using:
:r c:/TEMP/sqlserver.properties
And in the property file you have something like:
:setvar VAR_NAME some_value
Can you do the equivalent of &&VAR_NAME, so the script asks you for the value when you run it instead of having the value predefined in the script?
If I've understood correctly, you're talking about variable substitution with the SQLCMD utility.
I don't see that SQLCMD supports the behaviour you describe.
An alternative would be to exploit the fact that SQLCMD will substitute the values of system or user environment variables (see the link above), and create a wrapper CMD script which prompts the user for the variable value(s) using SET with the /P flag.
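For example, a wrapper along these lines (a sketch only: the server, database, and script names are placeholders, and here the value is passed explicitly with -v rather than relying on environment-variable substitution):

@echo off
set /p VAR_NAME=Enter a value for VAR_NAME:
sqlcmd -S %SERVERNAME% -d %DBNAME% -v VAR_NAME="%VAR_NAME%" -i your_script.sql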
There is nothing like this in SQL Server; you should predefine all parameter values before using them, like this:
DECLARE @i SMALLINT
SET @i = 1
The problem with having a form pop up and ask you for the parameter is that you normally want rather more control over the nature of the form, even for an admin script. I'd use the variable substitution in SQLCMD, from within a Powershell or Python script so you can provide the guy running the script a better and more helpful form. That would make a very powerful combination.
You can do quite a lot with template variable substitution in SSMS, but that would only go so far as formulating the correct SQL to execute. You'd then have to hit the button yourself. It would be a bit clunky outside the development environment!