I am exporting data from a SQL Server table to CSV using SQLCMD:
sqlcmd -S serverdetails -d dbname -U username -P password -s"^" -W -Q "SET NOCOUNT ON; SELECT * FROM table WITH (NOLOCK)" > c:\USERS\a\b\export_file.csv
I am getting data like this:
id^column1^column2^column3^column4
1^abc^cde^www.google.com^8776565
2^abc^cde^www.google.com^8776565
3^abc^cde^www.google.com^8776565
I want output like this:
"id"^"column1"^"column2"^"column3"^"column4"
"1"^"abc"^"cde"^"www.google.com"^"8776565"
"2"^"abc"^"cde"^"www.google.com"^"8776565"
"3"^"abc"^"cde"^"www.google.com"^"8776565"
Please suggest how I can do this with SELECT * FROM table. I don't want to have to list every column and concatenate a quote character onto each one.
The double-quote character has to be escaped with a backslash; try -s"\"^\"" as the column separator:
sqlcmd -S serverdetails -d dbname -U username -P password -s"\"^\"" -W -Q "SET NOCOUNT ON; SELECT * FROM table WITH (NOLOCK)" > c:\USERS\a\b\export_file.csv
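For reference, the raw output of that command should look roughly like the following. Note that -s only puts the separator between columns, so the opening quote on the first field and the closing quote on the last field are not produced by the separator alone; a small post-processing step per line would be needed to get the exact output shown in the question.
id"^"column1"^"column2"^"column3"^"column4
1"^"abc"^"cde"^"www.google.com"^"8776565
2"^"abc"^"cde"^"www.google.com"^"8776565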
How does one write, using SQLCMD:
multiple result sets to one output file?
or, multiple result sets to separate output files?
Discussion
After prototyping in SSMS and then moving to SQLCMD called from a batch file, it's necessary to stay within the same connection (because some #temp tables are built along the way). The batch files will then be handed to production operations, who will run them and give the output back to me for further processing.
CREATE TABLE #BatchFileType ( ... )
INSERT INTO #BatchFileType ( ... )
SELECT (...) FROM ...
CREATE NONCLUSTERED INDEX ...
CREATE TABLE #BrandingServiceDates ( ... )
INSERT INTO #BrandingServiceDates (
SELECT (...) FROM #BatchFileType JOIN (other tables)
CREATE NONCLUSTERED INDEX ...
SELECT [result set 1]
SELECT [result set 2]
SELECT [result set n]
...
Then, based on #BrandingServiceDates, create the multiple result sets we want to write to output files. Each run is based on a date range. The goal here is to avoid redoing the #temp table processing time for each result set.
This is a one-time run, so I'm looking to solve it with sqlcmd, batch files, and parameters.
sqlcmd -S %1 -i %2 -W -o %3 -k -s,
Where -o (from what I can tell) doesn't take an array of filenames.
Alternatives to SQLCMD are also welcome.
I have done this before by using xp_cmdshell and sqlcmd to create delimited files. This example uses a trusted connection back to the server that made the call. If there are any errors, check the results written to @output; the errors may also end up in the TXT files.
-- xp_cmdshell has an 8000 character limit
declare @cmd varchar(8000)

-- values substituted into the command string (set these for your environment)
declare @server varchar(128) = 'MyServer'
declare @database varchar(128) = 'MyDatabase'
declare @delimit varchar(10) = '^'

-- create a global ##temp table
-- command string to output the table to a couple of different files
set @cmd = 'sqlcmd -E -S "[server]" -d "[database]" -Q "SET NOCOUNT ON; select * from ##temp" -s"[delimiter]" -W -h-1 > "d:\file1.txt"'
    + ' & sqlcmd -E -S "[server]" -d "[database]" -Q "SET NOCOUNT ON; select * from ##temp" -s"[delimiter]" -W -h-1 > "d:\file2.txt"'

-- replace the placeholders in the command string
set @cmd = replace(@cmd, '[server]', @server)
set @cmd = replace(@cmd, '[database]', @database)
set @cmd = replace(@cmd, '[delimiter]', @delimit)

-- execute the command and capture xp_cmdshell's output in a table variable
declare @output table (output varchar(max) null)
insert @output exec master..xp_cmdshell @cmd
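Another option that answers the original question more directly: sqlcmd's :Out command switches the output file from within the script itself, so every result set can go to its own file while the whole run stays in one connection and the #temp tables are built only once. A minimal sketch, with hypothetical file names, to be run with -i:

-- export.sql (run with: sqlcmd -S yourserver -d yourdb -E -i export.sql -W -s,)
SET NOCOUNT ON;
-- ... build and index #BatchFileType / #BrandingServiceDates here; they persist for the session ...
:Out d:\resultset1.csv
SELECT [result set 1]
:Out d:\resultset2.csv
SELECT [result set 2]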
When importing or exporting with the bcp utility, the command line reports the number of rows copied.
I'm trying to capture that number for use in another process, but I can't.
The command line is:
bcp AdventureWorks2012.HumanResources.Department out D:\Department_Test.txt -S SERVER_NAME -T -c
The result:
Starting Copy...
16 rows copied.
....
Please help me get the number of copied rows.
I tried using FINDSTR on the command line to find the "rows copied" line, which solves the problem, but I'd like a better solution.
Thanks
It's a hack.
DECLARE #output TABLE (id INT IDENTITY, command NVARCHAR(256))
Declare #sql varchar(8000)='bcp AdventureWorks2012.HumanResources.Department out D:\Department_Test.txt -S SERVER_NAME -T -c'
INSERT INTO #output
exec master..xp_cmdshell #sql
SELECT cast(Replace(command,' rows copied.','') as int) FROM #output where command like '% rows copied.'
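If the count is needed in a later step, it can go straight into a variable in the same batch (a small extension of the same hack):

DECLARE @rows INT
SELECT @rows = CAST(REPLACE(command, ' rows copied.', '') AS INT)
FROM @output
WHERE command LIKE '% rows copied.'
-- @rows now holds the copied-row count for further processing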
Taking this:
bcp AdventureWorks2012.HumanResources.Department out D:\Department_Test.txt -S SERVER_NAME -T -c
You are extracting the whole of the table, therefore you might try
bcp "SELECT count(*) FROM AdventureWorks2012.HumanResources.Department" queryout D:\Department_Rows.txt -S SERVER_NAME -T -c
This would output the number of records in the table to a separate txt file.
Or alternatively, which might be better, you can use BCPROWCOUNT, which returns the number of rows affected by the current (or last) BCP operation.
I need to create a utility script to add an item to a Postgres database.
My initial approach is to have a bash script with a minimal set of non-default values, and a Postgres SQL script with the remaining default columns. The Item table has 20 columns.
So here is my simplified bash script:
## A unique system identifier for an Item.
ID_ITM='318'
## The description of the Item.
DE_ITM="A description"
psql -U postgres -d MYDB -v id_itm=$ID_ITM -v de_itm=$DE_ITM -f insertItemPostgres.sql
As you can see, it calls the following SQL script (simplified):
Insert into AS_ITM (ID_ITM,ID_LN_PRC,ID_ITM_SL_PRC,ID_MRHRC_GP,ID_RU_ITM_SL,ID_DPT_PS,FL_ITM_DSC,FL_ADT_ITM_PRC,NM_BRN,FL_AZN_FR_SLS,LU_ITM_USG,NM_ITM,DE_ITM,TY_ITM,LU_KT_ST,DE_ITM_LNG,FL_ITM_SBST_IDN,LU_CLN_ORD,LU_EXM_TX,FL_VLD_SRZ_ITM)
values (:id_itm,:de_itm,null,'64',null,null,null,null,null,'1',null,null,null,null,'0',null,'0',null,'0','0');
My problem is that to make this work I have to wrap the strings and IDs in two pairs of quotes:
DE_ITM="'A description'"
I need to find out how I can pass the parameters as string literals.
I would appreciate any better way to do this, because I know it isn't ideal and my DB scripting skills are limited. Also, I'm using a bash script, but it could just as well be a SQL script with the non-default values that calls the one containing the insert.
If you have psql 9.0 or later, you can try the following:
First you'll need to quote the expansion of your two variables in the shell, like so:
psql -U postgres -d MYDB -v "id_itm=${ID_ITM}" -v "de_itm=${DE_ITM}" -f insertItemPostgres.sql
Then in your SQL you'll need to reference the variables using the following syntax:
INSERT INTO as_itm (id_itm, id_ln_prc, ...)
VALUES (:'id_itm', :'de_itm', ...)
Alas, this didn't work for you for some reason. So here's a more old-school approach which should work on all psql versions: Use special bash syntax to double the quotes in your variables.
psql -U postgres -d MYDB -f insertItemPostgres.sql \
-v "id_itm='${ID_ITM//\'/''}'" \
-v "de_itm='${DE_ITM//\'/''}'"
In this case the variable references in your SQL should look unchanged from the OP: VALUES (:id_itm, :de_itm, ...)
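To see the :'var' interpolation in action, here is a quick check with a hypothetical value containing an embedded quote (the query is piped in, since -v variables are interpolated in script input):

DE_ITM="O'Brien's finest"
echo "SELECT :'de_itm' AS de_itm;" | psql -U postgres -d MYDB -v "de_itm=${DE_ITM}"
# psql expands :'de_itm' to 'O''Brien''s finest', so the embedded quote is handled safely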
Use a shell HERE-document: shell variables are expanded, even in single quotes.
#!/bin/sh
## A unique system identifier for an Item.
ID_ITM="318"
## The description of the Item.
DE_ITM="A description"
psql -U postgres -d MYDB << THE_END
Insert into AS_ITM(ID_ITM, ID_LN_PRC, ID_ITM_SL_PRC, ID_MRHRC_GP, ID_RU_ITM_SL
, ID_DPT_PS, FL_ITM_DSC, FL_ADT_ITM_PRC, NM_BRN, FL_AZN_FR_SLS
, LU_ITM_USG, NM_ITM,DE_ITM, TY_ITM, LU_KT_ST, DE_ITM_LNG
, FL_ITM_SBST_IDN, LU_CLN_ORD, LU_EXM_TX, FL_VLD_SRZ_ITM)
values ('$ID_ITM', '$DE_ITM', null, '64', null
, null, null, null, null, '1'
, null, null, null, null, '0'
, null, '0', null, '0', '0');
THE_END
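Note that the here-document expands the variables as plain text, so a value that itself contains a single quote (e.g. O'Brien) will still break the statement; the :'var' interpolation shown in the previous answer handles that case.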
I'm running the Microsoft SQLCMD tool for Linux (CTP 11.0.1720.0) on a Linux box (Red Hat Enterprise Server 5.3 tikanga) with Korn shell. The tool is properly configured, and works in all cases except when using scripting variables.
I have an SQL script, that looks like this.
SELECT COLUMN1 FROM TABLE WHERE COLUMN2 = '$(param1)';
And I'm running the sqlcmd command like this.
sqlcmd -S server -d database -U user -P pass -i input.sql -v param1="DUMMYVALUE"
When I execute the above command, I get the following error.
Sqlcmd: 'param1=DUMMYVALUE': Invalid argument. Enter '-?' for help.
The help lists the syntax below.
[-v var = "value"...]
Am I missing something here?
You don't need to pass variables to sqlcmd; it picks them up automatically from your shell environment:
e.g.
export param1=DUMMYVALUE
sqlcmd -S $host -U $user -P $pwd -d $db -i input.sql
In the RTM version (11.0.1790.0), the -v switch does not appear in the list of parameters when executing sqlcmd -?. Apparently this option isn't supported in the Linux version of the tool.
As far as I can tell, importing parameter values from environment variables doesn't work either.
If you need a workaround, one way would be to concatenate one or more :setvar statements with the text file containing the commands you want to run into a new file, then execute the new file. Based on your example:
echo :setvar param1 DUMMYVALUE > param_input.sql
cat input.sql >> param_input.sql
sqlcmd -S server -d database -U user -P pass -i param_input.sql
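If there are several variables to inject, the same trick generalizes; a sketch with hypothetical names (works in ksh and bash):

{
  echo ":setvar param1 DUMMYVALUE"
  echo ":setvar param2 OTHERVALUE"
  cat input.sql
} > param_input.sql
sqlcmd -S server -d database -U user -P pass -i param_input.sql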
You can export the variable in Linux; after that you won't need to pass the variable to sqlcmd. However, I did notice you will need to change your SQL script and remove the :setvar command if it doesn't set a default value.
export dbName=xyz
sqlcmd -Uusername -Sservername -Ppassword -i script.sql
:setvar dbName --remove this line
USE [$(dbName)]
GO
I think you're just not quoting the input variables correctly. I created this bash script...
#!/bin/bash
# Create a sql file with a parameterized test script
echo "
set nocount on
select k = '-db', v = '\$(db)' union all
select k = '-schema', v = '\$(schema)' union all
select '-', 'static'
go" > ./test.sql
# capture input variables
DB=$1
SCHEMA="${2:-dbo}"
# Exec sqlcmd
sqlcmd -S 'localhost\lemur' -E -i ./test.sql -v "db=${DB}" -v "schema=${SCHEMA}"
... and tested it like so:
$ ./test.sh master
k v
------- ------
-db master
-schema dbo
- static