I have an Azure SQL Server database and a Linux box. I have a csv file on the Linux machine that I want to import into SQL Server. I have a table already created where I am going to import this file. I have the following questions -
1) Why does this command return an "Unknown argument: -S" error?
bcp table in ~/test.csv -S databaseServerName -d dbName -U myUsername -q -c -t
2) How do I import only part of the csv file? It has 20 columns, but I only want to import 2.
3) My table has these two columns - State, Province. My csv file has these two columns that I want to import - State, Region. How do I get Province to map to Region?
For #2 and #3, you need to use a BCP format file. This gives you column-level control over which fields from the file go to which columns in the destination and which are left behind (not given a destination).
Use the -f option of BCP and specify the location and name of the format file you want to use; a rough sketch follows below. Sorry, no help yet with #1. I have a few questions/suggestions, but I'm not that familiar with Linux environments.
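Purely as an illustrative sketch (the version number, field lengths, collation, and field positions below are all assumptions you would adjust for your actual 20-column file and your bcp version), a non-XML format file for a file where State is the second field and Region is the fourth of four fields could look like this. The sixth column is the target table's column position: 0 means the field is skipped, so field 2 lands in table column 1 (State) and field 4 lands in table column 2 (Province), which is how the Region-to-Province mapping is done.
14.0
4
1   SQLCHAR   0   100   ","     0   Unused1    ""
2   SQLCHAR   0   100   ","     1   State      SQL_Latin1_General_CP1_CI_AS
3   SQLCHAR   0   100   ","     0   Unused2    ""
4   SQLCHAR   0   100   "\n"    2   Province   SQL_Latin1_General_CP1_CI_AS
You would then extend the field list to cover all 20 fields of the real file and run something like (no -c here, since the format file supplies the type information):
bcp dbName.dbo.table in ~/test.csv -f ~/test.fmt -S databaseServerName -U myUsername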
For part 2 of your question, you can use the Linux cut command to extract just the columns you want. A short awk script can do the same thing (see this SO answer). For both of these, you'll have to identify the "State" and "Region" columns by number. A non-native solution is querycsv.py, which can also rename the "Region" column (disclaimer: I wrote querycsv.py).
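For example, assuming (check your own file) that State is field 3 and Region is field 7, either of these produces a two-field file; note that neither handles quoted fields containing embedded commas:
# keep only fields 3 and 7 (the field numbers are assumptions)
cut -d',' -f3,7 ~/test.csv > /tmp/state_region.csv
# the same with awk, keeping a comma as the output separator
awk -F',' 'BEGIN { OFS = "," } { print $3, $7 }' ~/test.csv > /tmp/state_region.csv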
For part 3 of your question, you can use the Linux sed command to change the column name on the first line of the CSV file, e.g., sed -e "1s/Region/Province/" file.csv >file2.csv.
I'm trying to transfer table data from one SQL Server to another and want to use the bcp utility for it. This is purely to transfer data between two identical schemas, but I'm not able to use something like SSDT; I need something that is scriptable and portable so it can be run by others with just SQL Server and SSMS access.
I am generating a native output file and format file like so:
$> bcp database.TableName OUT c:\data\bcp\TableName.bcp -T -N -S SQLINSTANCE
$> bcp database.TableName format nul -f c:\data\bcp\TableName.fmt -T -N
Then, in Management Studio, I am trying to read the files back in like this:
SELECT *
FROM OPENROWSET (BULK 'c:\data\bcp\TableName.bcp',
    FORMATFILE = 'c:\data\bcp\TableName.fmt') AS t1
But am getting this error:
The bulk load failed. The column is too long in the data file for row 6, column 19. Verify that the field terminator and row terminator are specified correctly.
I have followed this process before successfully, and it works for other tables, but I'm running into an issue with this table. The column mentioned is of datatype nvarchar(max). I can inspect what I think is the "problem" record in the source data and it's just a very long string; I don't see anything else special about it.
Is there something else I should be doing when generating the format file or what else am I missing?
If you are only exporting for the purpose of importing to another SQL Server, native format is the way to go. And in this case you don't need to use format files. Just do a native export and import.
Note you are specifying a capital -N, which is Unicode native format rather than plain native format; plain native is the lowercase -n.
You should export using something like:
bcp database.Schema.TableName OUT c:\data\bcp\TableName.bcp -T -n -S SQLINSTANCE
Then on the importing side I suggest using BULK INSERT, which doesn't need a format file for native format at all:
BULK INSERT TargetDB.dbo.TargetTable
FROM 'c:\data\bcp\TableName.bcp'
WITH (DATAFILETYPE = 'native');
If you can't use BULK INSERT and absolutely must go for OPENROWSET, you need a format file. bcp can generate that for you, but again, with the lowercase -n:
bcp database.Schema.TableName format nul -f c:\data\bcp\TableName.fmt -T -n -S SQLINSTANCE
Now your OPENROWSET should work.
I am using the below command to write data to a csv file in isql
$ISQL -S DSA1_PROD -U emer_r_gh5432 -X
Query -
Select * from SecDb..LoginOwnerTb where SvrId= 45566 and OwnerRitsId = '1001167635';
OUTPUT TO '/tmp/sometable.csv' FORMAT ASCII DELIMITED BY ';' QUOTE '';
go
it says
Server 'ABC', Line 1:
Incorrect syntax near ';'.
Please help
NOTE: I'm assuming you're working with Sybase ASE and the isql command line tool. There may be other ways to accomplish what you're trying to do when going against the SQLAnywhere, IQ and/or Advantage database products ... *shrug* ...
The OUTPUT TO clause is used with the dbisql GUI tool.
To perform a somewhat-similar OUTPUT operation with the isql command line tool:
-- once logged in via isql ...
-- to write to new file; to overwrite existing file:
select ....
go > /path/to/local/file/accessible/by/user/running/isql
-- to append to existing file:
select ...
go >> /path/to/local/file/accessible/by/user/running/isql
To set the column delimiter you can use the -s flag when invoking isql from the command line, eg:
# set the column delimiter to a semi-colon:
$ isql ... -s ';' ...
# set the column delimiter to a pipe:
$ isql ... -s '|' ...
Keep in mind that the output will still be generated using fixed-width columns, with each column's width determined by either a) the column's datatype 'width' or b) the column's title/label width, whichever is wider.
I'm not aware of any way to perform the following with the isql command line tool:
designate a column delimiter on-the-fly while inside an isql session
designate a quote character
remove extra spaces (ie, output data in true delimited format as opposed to fixed-width format)
To generate true delimited files you have a few options:
see if the dbisql GUI tool serves your purpose [I don't use dbisql so I'm *assuming* the OUTPUT TO clause works as expected]
use the bcp (command line) utility to place the data into a delimited file [bcp options, and how to handle subsets of tables, is a much larger discussion, ie, too much to address in this response; a minimal sketch follows this list]
see if you can find another (3rd party) tool that can extract the desired data set to a delimited file
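As a minimal sketch of the bcp option, reusing the server, table, and login from your question with a placeholder password, a character-mode export with explicit field and row terminators would look something like the line below. To export only a subset of rows or columns, bcp out a view that selects just what you need.
# character-mode export, semi-colon between columns, newline between rows
bcp SecDb..LoginOwnerTb out /tmp/sometable.csv -c -t ';' -r '\n' -S DSA1_PROD -U emer_r_gh5432 -P mypassword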
I was wondering if anyone can help.
I have a number of queries in SQL (all in separate *.sql files). I wanted to know if there is a way to run these queries automatically, or run them in bulk, and save the output to either a csv or txt file?
Also, I have some variables within these queries which will need to be amended on a weekly basis before the queries are run.
Thanks.
KJ
Could you please provide some additional help in relation to the variables? Previously I would declare and set variables as:
DECLARE @TW_FROM DATETIME
DECLARE @TW_TO DATETIME
SET @TW_FROM = '2015-11-16 00:00:00';
SET @TW_TO = '2015-11-22 23:00:00';
How do I do this using sqlcmd?
Yes, you can use sqlcmd to do this.
First of all, variables. You can refer to your variables in the .sql files using $(variablename) wherever you want to substitute the variable. For example,
use $(dbname);
select $(columnname) from table1 where column = '$(var1)'
You then call sqlcmd with the following command (note the -v argument for variables)
sqlcmd -S servername -d database -i "yoursqlfile.sql" -v dbname="database" columnname="column" var1="Fred"
In order to output this to a file, you add > filename.txt to the end
sqlcmd -S servername -d database -i "yoursqlfile.sql" -v dbname="database" columnname="column" var1="Fred" > filename.txt
If you want to output to a csv, you can also specify the delimiter using the argument -s (note the difference from the capital -S used for the server). So now we have
sqlcmd -S servername -d database -s "," -i "yoursqlfile.sql" -v dbname="database" columnname="column" var1="Fred" > filename.csv
If you want to output several commands to the same csv or txt file, use >> instead of > as it adds to the bottom of the file, rather than replacing it.
sqlcmd -S servername -d database -s "," -i "yoursqlfile.sql" -v dbname="database" columnname="column" var1="Fred" >> filename.csv
To run this for several scripts, you can put the statements in a batch file, and then change the variables every week.
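As a rough sketch (the server, database, and script names are placeholders), such a batch file might look like the following; you edit the two dates at the top each week, and inside each .sql file you reference '$(TW_FROM)' and '$(TW_TO)' instead of using DECLARE/SET:
@echo off
rem edit these two values each week
set TW_FROM=2015-11-16 00:00:00
set TW_TO=2015-11-22 23:00:00
rem run each script with the same variables, one output file per script
sqlcmd -S servername -d database -s "," -i "query1.sql" -v TW_FROM="%TW_FROM%" TW_TO="%TW_TO%" > query1.csv
sqlcmd -S servername -d database -s "," -i "query2.sql" -v TW_FROM="%TW_FROM%" TW_TO="%TW_TO%" > query2.csv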
You could write a batch file that uses sqlcmd:
MSDN sqlcmd
That will allow you to call script files in a loop and output the results to a file.
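For instance, a rough sketch of such a batch file (the server and database names are placeholders) that runs every .sql file in the current folder and appends all the output to one file:
@echo off
rem run every .sql file in this folder and collect the output in one file
for %%f in (*.sql) do (
    sqlcmd -S servername -d database -s "," -i "%%f" >> results.csv
)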
Convert your current scripts to a stored procedure.
You can then pass your variables to that and run the query.
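A minimal sketch of that idea (the procedure, table, and column names here are invented) would be a procedure that takes the weekly dates as parameters:
CREATE PROCEDURE dbo.usp_WeeklyReport
    @TW_FROM DATETIME,
    @TW_TO   DATETIME
AS
BEGIN
    -- placeholder query; substitute your own columns and tables
    SELECT Col1, Col2
    FROM dbo.SomeTable
    WHERE EventDate BETWEEN @TW_FROM AND @TW_TO;
END
You would then run it each week with EXEC dbo.usp_WeeklyReport @TW_FROM = '2015-11-16 00:00:00', @TW_TO = '2015-11-22 23:00:00';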
If you have SQL Server Agent available (Standard edition or better) you can use it to automate running the stored procedures.
Otherwise the same can be achieved with Task Scheduler in Windows.
As for exporting to CSV, this will be useful.
It depends on where your SQL Server is actually running. It might be quite tricky to write anything to the location you want.
You could read about BCP.
My suggestion is:
Create a UDF (best is an inline UDF!) from each of your queries within your database. Then call them from Excel or any other fitting product. You might want to set up an Excel workbook where each query's results are loaded onto its own sheet automatically.
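As a loose sketch of the inline-UDF suggestion (all object and column names below are invented), the function wraps the query and takes the weekly dates as parameters:
CREATE FUNCTION dbo.fn_WeeklyData (@TW_FROM DATETIME, @TW_TO DATETIME)
RETURNS TABLE
AS
RETURN
(
    -- placeholder query; substitute your own columns and tables
    SELECT Col1, Col2
    FROM dbo.SomeTable
    WHERE EventDate BETWEEN @TW_FROM AND @TW_TO
);
Excel (or any other client) can then run SELECT * FROM dbo.fn_WeeklyData('2015-11-16', '2015-11-22 23:00:00'); as an external data query, one query per sheet.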
I have a huge database which I want to dump out using BCP and then load it up elsewhere. I have done quite a bit of research on the Sybase version of BCP (being more familiar with the MSSQL one) and I see how to use a format file, but I can't figure out for the life of me how to create one.
I am currently making my Sybase bcp out files of data like this:
bcp mytester.dbo.XTABLE out XTABLE.bcp -U sa -P mypass -T -n
and trying to import them back in like this:
bcp mytester.dbo.XTABLE in XTABLE.bcp -E -n -S Sybase_157 -U sa -P SyAdmin
Right now, the IN part gives me an error about IDENTITY_INSERT regardless of whether the table has an identity column or not:
Server Message: Sybase157 - Msg 7756, Level 16, State 1: Cannot use
'SET IDENTITY_INSERT' for table 'mytester.dbo.XTABLE' because the
table does not have the identity property.
I have often used the great info on this page for help, but this is the first time I've put in a question, so I humbly request any guidance you all can provide :)
In your BCP in, the -E flag tells bcp to take identity column values from the input file. I would try running it without that flag. fmt files in Sybase are a bit finicky, and I would try to avoid them if possible. So as long as your schemas are the same between your systems the following command should work:
bcp mytester.dbo.XTABLE in XTABLE.bcp -n -S Sybase_157 -U sa -P SyAdmin
Also, the -T flag on your bcp out seems odd. I know SQLServer -T is a security setting, but in Sybase it indicates the max size of a text or image column, and is followed by a number..e.g -T 32000 (would be 32Kbytes)
But to answer the question in your title, if you run bcp out interactively (without specifying -c, -n, or -f) it will step through each column, prompting for information. At the end it will ask if you want to create a format file, and allow you to specify the name of the file.
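In other words, something along these lines (reusing the names from your question, with the mode flag left off on purpose) will walk you through each column interactively:
# no -c, -n, or -f, so bcp prompts for each column's storage type, length and terminator,
# then offers to save the answers as a format file
bcp mytester.dbo.XTABLE out XTABLE.dat -S Sybase_157 -U sa -P SyAdmin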
For reference, here is the syntax and available flags:
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc30191.1550/html/utility/X14951.htm
And the chapter in the Utility Guide:
http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc30191.1550/html/utility/BABGCCIC.htm
What I want to do is copy a table into a file, truncate the table and copy the data back into the table.
For this, I am using the following two commands:
Out: bcp TABLE out file.csv -S SERVER -U user -P password -r '\n' -t '^|' -c
In: bcp TABLE in file.csv -S SERVER -U user -P password -r '\n' -t '^|' -c -J iso_1 -b 5000
This is the error i get:
CSLIB Message: - L0/O0/S0/N36/1/0:
cs_convert: cslib user api layer: common library error: The result is truncated because the conversion/operation resulted in overflow.
The interesting part (for me, at least) is that I get the error only for rows whose first column is an ODD number. From the first 3 million rows, it cuts half of them, all of which have an odd number in the first column (the PK).
I tried with different options, but none seem to work: there is no problem with the charset as far as I can tell, there are no huge columns such that they would be truncated, and it is not a missing carriage return.
Any help would be greatly appreciated.
UPDATE: After creating a format file there are no more errors, but it only copies half of the data back into the table.
UPDATE: I managed to create a format file which works and loads all data, but I cannot use it on another server (it works in the testing environment, but it needs to run in the production environment), since it fails with "Attempt to read an unknown version of bcp format-file." I know what this means, but is there any way of finding the correct value for the version?
SOLVED: After digging back in the database, it seems that the problem was indeed data inconsistency, due to the fact that the VIEW used in production to copy the table only copied 25 columns, but the table has 26 columns (somebody altered the table and I didn't know and hadn't noticed that it happened). Fixed the view and now it works.
Since you are going out of/into the same server, I recommend you use bcp with the native flag.
bcp DBNAME..TABLE out file.bcp -SSERVER -Uuser -Ppassword -n
bcp DBNAME..TABLE in file.bcp -SSERVER -Uuser -Ppassword -n -b5000
Character mode can get weird, and I only use it when it is required.