Parameters in a postgres utility script

I need to create a utility script to add an item to a postgres database.
My initial approach is to have a bash script that supplies a minimal set of non-default values, plus a postgres SQL script that fills in the remaining default columns. The Item table has 20 columns.
So here is my simplified bash script:
## A unique system identifier for an Item.
ID_ITM='318'
## The description of the Item.
DE_ITM="A description"
psql -U postgres -d MYDB -v id_itm=$ID_ITM de_itm=$DE_ITM -f insertItemPostgres.sql
As you can see, it calls the following SQL script (simplified):
Insert into AS_ITM (ID_ITM,ID_LN_PRC,ID_ITM_SL_PRC,ID_MRHRC_GP,ID_RU_ITM_SL,ID_DPT_PS,FL_ITM_DSC,FL_ADT_ITM_PRC,NM_BRN,FL_AZN_FR_SLS,LU_ITM_USG,NM_ITM,DE_ITM,TY_ITM,LU_KT_ST,DE_ITM_LNG,FL_ITM_SBST_IDN,LU_CLN_ORD,LU_EXM_TX,FL_VLD_SRZ_ITM)
values (:id_itm,:de_itm,null,'64',null,null,null,null,null,'1',null,null,null,null,'0',null,'0',null,'0','0');
My problem is that to make this work, I need to give the strings and ids two pairs of quotes:
DE_ITM="'A description'"
I need to find out how I can pass the parameters as literals.
I'd appreciate any better way to do this, because I know this isn't the best approach and my db scripting skills are limited. Also, I'm using a bash script, but it could just as well be a SQL script with the non-default values that calls the one containing the insert.

If you have psql 9.0 or later, you can try the following:
First you'll need to quote the expansion of your two variables in the shell, like so:
psql -U postgres -d MYDB -v "id_itm=${ID_ITM}" -v "de_itm=${DE_ITM}" -f insertItemPostgres.sql
Then in your SQL you'll need to reference the variables using the following syntax:
INSERT INTO as_itm (id_itm, id_ln_prc, ...)
VALUES (:'id_itm', :'de_itm', ...)
Alas, this didn't work for you for some reason. So here's a more old-school approach that should work on all psql versions: use bash parameter expansion to double the quotes in your variables.
psql -U postgres -d MYDB -f insertItemPostgres.sql \
-v "id_itm='${ID_ITM//\'/''}'" \
-v "de_itm='${DE_ITM//\'/''}'"
In this case the variable references in your SQL should look unchanged from the OP: VALUES (:id_itm, :de_itm, ...

Use a shell here-document: the shell expands its variables inside the here-document body, even when they appear between single quotes (those quotes are just SQL syntax there, not shell quoting).
#!/bin/sh
## A unique system identifier for an Item.
ID_ITM="318"
## The description of the Item.
DE_ITM="A description"
psql -U postgres -d MYDB << THE_END
Insert into AS_ITM(ID_ITM, ID_LN_PRC, ID_ITM_SL_PRC, ID_MRHRC_GP, ID_RU_ITM_SL
, ID_DPT_PS, FL_ITM_DSC, FL_ADT_ITM_PRC, NM_BRN, FL_AZN_FR_SLS
, LU_ITM_USG, NM_ITM,DE_ITM, TY_ITM, LU_KT_ST, DE_ITM_LNG
, FL_ITM_SBST_IDN, LU_CLN_ORD, LU_EXM_TX, FL_VLD_SRZ_ITM)
values ('$ID_ITM', '$DE_ITM', null, '64', null
, null, null, null, null, '1'
, null, null, null, null, '0'
, null, '0', null, '0', '0');
THE_END
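The expansion behaviour is easy to verify without a database connection; this self-contained sketch captures the here-document body instead of piping it to psql:

```shell
#!/bin/sh
DE_ITM="A description"
# The shell substitutes $DE_ITM inside the here-document even though
# it sits between single quotes (SQL syntax, not shell quoting).
SQL=$(cat << THE_END
values ('$DE_ITM');
THE_END
)
echo "$SQL"   # prints: values ('A description');
```

Keep in mind that a value containing a single quote would break the statement (or open it to injection), which is where the quote-doubling shown in the previous answer helps.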

Related

How to create a new local table from a select query on remote db in PostgreSQL?

I can use the following command to do so as long as I create the table and the appropriate columns first. I would like the command to be able to create table for me based on the results of my query.
psql -h remote.host -U myuser -p 5432 -d remotedb -c "copy (SELECT view.column FROM schema.view LIMIT 10) to stdout" | psql -h localhost -U localuser -d localdb -c "copy localtable from stdin"
Again, it will populate the data properly if I create the table and columns ahead of time, but it would be much easier if I could automate that with a command that creates the table from the results of my query.
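One way to automate the DDL (a sketch that is not from the thread: it assumes the postgres_fdw contrib extension is available on the local cluster, and it reuses the question's placeholder identifiers, quoted because they collide with SQL keywords):

```sql
-- Run on localdb: expose the remote view, then let CREATE TABLE ... AS
-- derive the column definitions from the query itself.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;
CREATE SERVER remote_srv FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'remote.host', port '5432', dbname 'remotedb');
CREATE USER MAPPING FOR localuser SERVER remote_srv
    OPTIONS (user 'myuser', password 'secret');
IMPORT FOREIGN SCHEMA "schema" LIMIT TO ("view")
    FROM SERVER remote_srv INTO public;
CREATE TABLE localtable AS
    SELECT "column" FROM "view" LIMIT 10;
```

CREATE TABLE ... AS both defines the columns and populates them, which removes the manual CREATE TABLE step.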

How Do I Generate Sybase BCP Fmt file?

I have a huge database which I want to dump out using BCP and then load it up elsewhere. I have done quite a bit of research on the Sybase version of BCP (being more familiar with the MSSQL one) and I see how to USE an Import file but I can't figure out for the life of me how to create one.
I am currently making my Sybase bcp out files of data like this:
bcp mytester.dbo.XTABLE out XTABLE.bcp -U sa -P mypass -T -n
and trying to import them back in like this:
bcp mytester.dbo.XTABLE in XTABLE.bcp -E -n -S Sybase_157 -U sa -P SyAdmin
Right now, the IN part gives me an error about IDENTITY_INSERT regardless of whether the table has an identity or not:
Server Message: Sybase157 - Msg 7756, Level 16, State 1: Cannot use
'SET IDENTITY_INSERT' for table 'mytester.dbo.XTABLE' because the
table does not have the identity property.
I have often used the great info on this page for help, but this is the first time I've put in a question, so I humbly request any guidance you all can provide :)
In your BCP in, the -E flag tells bcp to take identity column values from the input file. I would try running it without that flag. fmt files in Sybase are a bit finicky, and I would try to avoid them if possible. As long as your schemas are the same between your systems, the following command should work:
bcp mytester.dbo.XTABLE in XTABLE.bcp -n -S Sybase_157 -U sa -P SyAdmin
Also, the -T flag on your bcp out seems odd. In SQL Server, -T is a security (trusted connection) setting, but in Sybase it indicates the maximum size of a text or image column and is followed by a number, e.g. -T 32000 (32 Kbytes).
But to answer the question in your title, if you run bcp out interactively (without specifying -c,-n, or -f) it will step through each column, prompting for information. At the end it will ask if you want to create a format file, and allow you to specify the name of the file.
For reference, here is the syntax and available flags:
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc30191.1550/html/utility/X14951.htm
And the chapter in the Utility Guide:
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc30191.1550/html/utility/BABGCCIC.htm

Need sqsh to ignore a dollar sign

The Environment:
Solaris 10, /bin/sh script using sqsh and freetds to talk to an MS SQL Server
The (TLDR) Problem:
I need sqsh to ignore a dollar sign and pass it through to MS SQL.
The Background:
I'm writing some code that dynamically builds SQL to alter existing indexes but when it runs, I get errors:
Msg 2727, Level 11, State 1
Server 'HOST\SERVER', Line 1
Cannot find index 'foo'.
I dig around and see that there is no index named foo but there is one named foo$bar.
The built-up input SQL input looks fine...
% grep 'foo' input.sql
alter index [foo$bar] on ...
...and running this SQL through a New Query session or a third-party app succeeds. However, when the SQL is passed through sqsh, everything past that dollar sign is dropped:
% sqsh -e -i input.sql -o output.sql
% grep 'foo' output.sql
1> alter index [foo] on ...
...which suggests it's interpreting $bar as a variable or something.
Anyone know how to get sqsh to escape a dollar sign or allow one to pass through? I've tried various combinations of quotes and backslashes.
Help me, Stack Overflow. You're my only hope.
Another option is to disable buffer expansion altogether by executing:
\set expand=0
on the sqsh prompt, or specify this command in the .sqshrc file, or start sqsh with the parameter
sqsh -e -i input.sql -o output.sql -Lexpand=0
If expansion is on (the default), sqsh substitutes the variable $bar with its contents — an empty string here, since it was never set.
The reference manual (page 5) states: "Note that in order to prevent the expansion of a variable, use either single quotes or two \'s, like thus:"
1> \echo \\$name
$name
So, I believe that to prevent sqsh from substituting $bar with an empty string, you have to write something like:
alter index [foo\\$bar] on ...
alter index ['foo$bar'] on ...
To test it, you can first try something like SELECT 'foo\\$bar' = 1.

Using variables in SQLCMD for Linux

I'm running the Microsoft SQLCMD tool for Linux (CTP 11.0.1720.0) on a Linux box (Red Hat Enterprise Server 5.3 tikanga) with Korn shell. The tool is properly configured, and works in all cases except when using scripting variables.
I have an SQL script, that looks like this.
SELECT COLUMN1 FROM TABLE WHERE COLUMN2 = '$(param1)';
And I'm running the sqlcmd command like this.
sqlcmd -S server -d database -U user -P pass -i input.sql -v param1="DUMMYVALUE"
When I execute the above command, I get the following error.
Sqlcmd: 'param1=DUMMYVALUE': Invalid argument. Enter '-?' for help.
Help lists the below syntax.
[-v var = "value"...]
Am I missing something here?
You don't need to pass variables to sqlcmd; it picks them up automatically from your shell environment:
e.g.
export param1=DUMMYVALUE
sqlcmd -S $host -U $user -P $pwd -d $db -i input.sql
In the RTP version (11.0.1790.0), the -v switch does not appear in the list of parameters when executing sqlcmd -?. Apparently this option isn't supported under the Linux version of the tool.
As far as I can tell, importing parameter values from environment variables doesn't work either.
If you need a workaround, one way would be to concatenate one or more :setvar statements with the text file containing the commands you want to run into a new file, then execute the new file. Based on your example:
echo :setvar param1 DUMMYVALUE > param_input.sql
cat input.sql >> param_input.sql
sqlcmd -S server -d database -U user -P pass -i param_input.sql
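The splice itself can be sanity-checked without sqlcmd installed; this sketch recreates the question's input file and shows what the combined script looks like:

```shell
#!/bin/sh
# Rebuild the question's input.sql (contents copied from the question).
echo "SELECT COLUMN1 FROM TABLE WHERE COLUMN2 = '\$(param1)';" > input.sql
# Prepend the :setvar line, then append the original script.
echo ":setvar param1 DUMMYVALUE" > param_input.sql
cat input.sql >> param_input.sql
cat param_input.sql
```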
You can export the variable in Linux; after that you won't need to pass it to sqlcmd. Note, however, that you will need to edit your SQL script and remove the :setvar line if it doesn't set a default value.
export dbName=xyz
sqlcmd -Uusername -Sservername -Ppassword -i script.sql
:setvar dbName --remove this line
USE [$(dbName)]
GO
I think you're just not quoting the input variables correctly. I created this bash script...
#!/bin/bash
# Create a sql file with a parameterized test script
echo "
set nocount on
select k = '-db', v = '\$(db)' union all
select k = '-schema', v = '\$(schema)' union all
select '-', 'static'
go" > ./test.sql
# capture input variables
DB=$1
SCHEMA="${2:-dbo}"
# Exec sqlcmd
sqlcmd -S 'localhost\lemur' -E -i ./test.sql -v "db=${DB}" -v "schema=${SCHEMA}"
... and tested it like so:
$ ./test.sh master
k       v
------- -------
-db     master
-schema dbo
-       static

SQLCMD passing in double quote to scripting variable

I am trying to pass in double quote to a scripting variable in SQLCMD. Is there a way to do this?
sqlcmd -S %serverName% -E -d MSDB -i MyScript.sql -m 1 -v Parameter="\""MyValueInDoubleQuote\"""
And my sql script is as follow:
--This Parameter variable below is commented out since we will get it from the batch file through sqlcmd
--:SETVAR Parameter "\""MyValueInDoubleQuote\"""
INSERT INTO [MyTable]
([AccountTypeID]
,[Description])
VALUES
(1
,$(Parameter))
GO
If you have your sql script set up in this fashion:
DECLARE @myValue VARCHAR(30)
SET @myValue = $(MyParameter)
SELECT @myValue
Then you can get a value surrounded by double quotes into @myValue by just enclosing your parameter in single quotes:
sqlcmd -S MyDb -i myscript.sql -v MyParameter='"123"'
This works because -v is going to replace the $(MyParameter) string with the text '"123"'. The resulting script will look like this before it is executed:
DECLARE @myValue VARCHAR(30)
SET @myValue = '"123"'
SELECT @myValue
Hope that helps.
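Since -v substitution is purely textual, the effect can be illustrated with nothing but the shell (a sketch; sqlcmd itself is not involved):

```shell
#!/bin/bash
# The line as it appears in the script file.
TEMPLATE='SET @myValue = $(MyParameter)'
# The value handed to -v: single quotes with embedded double quotes.
QUOTED="'\"123\"'"
# Plain text replacement -- which is all -v does before execution.
RESULT="${TEMPLATE//\$(MyParameter)/$QUOTED}"
echo "$RESULT"   # prints: SET @myValue = '"123"'
```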
EDIT
This sample is working for me (tested on SQL Server 2008, Windows Server 2K3). It inserts a record into the table variable @MyTable, and the value in the Description field is enclosed in double quotes:
MyScript.sql (no need for setvar):
DECLARE @MyTable AS TABLE([AccountTypeID] INT, [Description] VARCHAR(50))
INSERT INTO @MyTable ([AccountTypeID] ,[Description])
VALUES(1, $(Parameter))
SELECT * FROM @MyTable
SQLCMD:
sqlcmd -S %serverName% -E -d MSDB -i MyScript.sql -m 1 -v Parameter='"MyValue"'
If you run that script, you should get the following output, which I think is what you're looking for:
(1 rows affected)
AccountTypeID Description
------------- --------------------------------------------------
1 "MyValue"
Based on your example, you don't need to include the quotes in the variable, as they can be in the sql command, like so:
sqlcmd -S %serverName% -E -d MSDB -i MyScript.sql -m 1 -v Parameter="MyValueNoQuotes"
and
INSERT INTO [MyTable]
([AccountTypeID]
,[Description])
VALUES
(1
,"$(Parameter)")
(Though I am more accustomed to using single quotes, as in ,'$(Parameter)'.)
