I am using the below command to write data to a csv file in isql:
$ISQL -S DSA1_PROD -U emer_r_gh5432 -X
Query -
Select * from SecDb..LoginOwnerTb where SvrId= 45566 and OwnerRitsId = '1001167635';
OUTPUT TO '/tmp/sometable.csv' FORMAT ASCII DELIMITED BY ';' QUOTE '';
go
It says:
Server 'ABC', Line 1:
Incorrect syntax near ';'.
Please help
NOTE: I'm assuming you're working with Sybase ASE and the isql command line tool. There may be other ways to accomplish what you're trying to do when going against the SQLAnywhere, IQ and/or Advantage database products ... *shrug* ...
The OUTPUT TO clause is used with the dbisql GUI tool.
To perform a somewhat-similar OUTPUT operation with the isql command line tool:
-- once logged in via isql ...
-- to write to new file; to overwrite existing file:
select ....
go > /path/to/local/file/accessible/by/user/running/isql
-- to append to existing file:
select ...
go >> /path/to/local/file/accessible/by/user/running/isql
To set the column delimiter you can use the -s flag when invoking isql from the command line, eg:
# set the column delimiter to a semi-colon:
$ isql ... -s ';' ...
# set the column delimiter to a pipe:
$ isql ... -s '|' ...
Keep in mind that the output will still be generated using fixed-width columns, with each column's width determined by either a) the column's datatype 'width' or b) the column's title/label width, whichever is wider.
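For example, putting the -s flag and the go redirection together with the login from the question, a rough sketch might look like this (the -w flag widens the output line so wide rows aren't wrapped; adjust to taste):
$ isql -S DSA1_PROD -U emer_r_gh5432 -X -s ';' -w 2000
1> select * from SecDb..LoginOwnerTb where SvrId = 45566 and OwnerRitsId = '1001167635'
2> go > /tmp/sometable.txt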
I'm not aware of any way to perform the following with the isql command line tool:
designate a column delimiter on-the-fly while inside an isql session
designate a quote character
remove extra spaces (ie, output data in true delimited format as opposed to fixed-width format)
To generate true delimited files you have a few options:
see if the dbisql GUI tool serves your purpose [I don't use dbisql so I'm *assuming* the OUTPUT TO clause works as expected]
use the bcp (command line) utility to place the data into a delimited file [bcp options, and how to handle subsets of tables, is a much larger discussion, ie, too much to address in this response; a rough sketch follows this list]
see if you can find another (3rd party) tool that can extract the desired data set to a delimited file
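As a rough sketch of the bcp route (assuming the Sybase bcp utility and the server/login from the question; plain bcp copies whole tables, so filtering with a WHERE clause would need a view or a temp table):
$ bcp SecDb..LoginOwnerTb out /tmp/LoginOwnerTb.csv -c -t ';' -S DSA1_PROD -U emer_r_gh5432
Here -c requests character (plain text) mode and -t sets the field terminator.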
Related
I have an Azure SQL Server database and a Linux box. I have a csv file on the Linux machine that I want to import into SQL Server. I already have a table created into which I am going to import this file. I have the following questions:
1) Why does this command return an "Unknown argument: -S" error?
bcp table in ~/test.csv -S databaseServerName -d dbName -U myUsername -q -c -t
2) How do I import only part of the csv file? It has 20 columns, but I only want to import 2.
3) My table has these two columns - State, Province. My csv file has these two columns that I want to import - State, Region. How do I get Province to map to Region?
For #2 and #3, you need to use a BCP format file. This allows you column-level control over which fields from the file go to which columns in the destination and which are left behind (not given a destination).
Use the -f option of BCP and specify the location and name of the format file you want to use. Sorry, no help yet with #1; I have a few questions/suggestions, but I'm not that familiar with Linux environments.
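For illustration only, a non-XML format file that keeps two fields and skips a third might look roughly like this (a sketch, not your actual file: the version number should match your bcp release, the real format file needs one line per field in your 20-column CSV, and any field whose server column order is 0 is skipped on import):
14.0
3
1   SQLCHAR   0   100   ","      1   State      SQL_Latin1_General_CP1_CI_AS
2   SQLCHAR   0   100   ","      2   Province   SQL_Latin1_General_CP1_CI_AS
3   SQLCHAR   0   100   "\r\n"   0   Unused     ""
The columns are: host field order, host data type, prefix length, host data length, field terminator, server column order, server column name, and collation.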
For part 2 of your question, you can use the Linux cut command to extract just the columns you want. A short awk script can do the same thing (see this SO answer). For both of these, you'll have to identify the "State" and "Region" columns by number. A non-native solution is querycsv.py, which can also rename the "Region" column (disclaimer: I wrote querycsv.py).
For part 3 of your question, you can use the Linux sed command to change the column name on the first line of the CSV file, e.g., sed -e "1s/Region/Province/" file.csv >file2.csv.
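Putting those two together, a minimal sketch (the column positions 5 and 12 are hypothetical; substitute the actual positions of State and Region in your file):
cut -d',' -f5,12 ~/test.csv > /tmp/state_region.csv
sed -e "1s/Region/Province/" /tmp/state_region.csv > /tmp/state_province.csv
The result is a two-column CSV whose header matches the destination table, which is also simpler to describe in a BCP format file.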
I'm trying to transfer table data from one SQL Server to another and want to use the bcp utility for it. This is purely to transfer data between two identical schemas, but I'm not able to use something like SSDT; I need something that is scriptable and portable so it can be run by others with just SQL Server and SSMS access.
I am generating a native output file and format file like so:
$> bcp database.TableName OUT c:\data\bcp\TableName.bcp -T -N -S SQLINSTANCE
$> bcp database.TableName format nul -f c:\data\bcp\TableName.fmt -T -N
Then in Management Studio I am, in turn, trying to read the files like this:
SELECT *
FROM OPENROWSET (BULK 'c:\data\bcp\TableName.bcp',
                 FORMATFILE = 'c:\data\bcp\TableName.fmt') AS t1
But I am getting this error:
The bulk load failed. The column is too long in the data file for row 6, column 19. Verify that the field terminator and row terminator are specified correctly.
I have followed this process before successfully, and it works for other tables, but I'm running into an issue with this table. The column mentioned is of datatype nvarchar(max). I can inspect what I think is the "problem" record in the source data and it's just a very long string, but I don't see anything else special about it.
Is there something else I should be doing when generating the format file or what else am I missing?
If you are only exporting for the purpose of importing to another SQL Server, native format is the way to go, and in this case you don't need to use format files. Just do a native export and import.
Note you are specifying a capital -N, which is Unicode native format; plain native format is lowercase -n.
You should export using something like:
bcp database.Schema.TableName OUT c:\data\bcp\TableName.bcp -T -n -S SQLINSTANCE
Then on the importing side I suggest using BULK INSERT, which doesn't need a format file for native format at all:
BULK INSERT TargetDB.dbo.TargetTable
FROM 'c:\data\bcp\TableName.bcp'
WITH (DATAFILETYPE = 'native');
If you can't use BULK INSERT and must absolutely go for OPENROWSET, you need a format file. bcp can generate that for you, but again, lower case -n:
bcp database.Schema.TableName format nul -f c:\data\bcp\TableName.fmt -T -n -S SQLINSTANCE
Now your OPENROWSET should work.
I'm running a PowerShell script to get the results of a SQL query (in JSON format from SQL 2016), and the results come back broken up into individual lines ending in '...', with some header info at the top of the file, instead of one JSON string. This makes the JSON unusable.
I verified that this is on the PowerShell side by running the same query in SSMS, and the results came out as expected (valid JSON).
I couldn't find any command line arguments for controlling the output of Invoke-Sqlcmd.
I'm new to PowerShell... Any ideas how to help me get clean JSON from this PowerShell script?
The PowerShell script:
Invoke-Sqlcmd -InputFile "C:\Dashboard\sql\gtldata.sql" | Out-File -filepath "C:\Dashboard\json\gtldata.json"
A sample of the returned document:
JSON_F52E2B61-18A1-11d1-B105-00805F49916B
-----------------------------------------
{"KPI":[{"BusinessUnit":"Water - Industrial","Location":"SPFIN","TestDate":"2016-09-19T21:11:10.837","TestResult":"Fail","FailReason":"P...
ial":"100161431","PumpType":"xxx","Stages":0},{"BusinessUnit":"Water - Industrial","Location":"SPFIN","TestDate":"2016-09-20T01:48...
"PumpType":"xxx","Stages":0},{"BusinessUnit":"Pre-engineered","Location":"SPSPA","TestDate":"2016-09-20T10:46:38.403","TestResult"...
You need to add the -MaxCharLength option to your invocation. By default, sqlcmd (and thus Invoke-Sqlcmd) uses an 80-character output width, as cited in this sqlcmd documentation:
-w column_width
Specifies the screen width for output. This option sets the sqlcmd scripting variable SQLCMDCOLWIDTH. The column width must be a number greater than 8 and less than 65536. If the specified column width does not fall into that range, sqlcmd generates an error message. The default width is 80 characters. When an output line exceeds the specified column width, it wraps on to the next line.
I'd try setting -MaxCharLength 65535 for your output, since unformatted JSON will not have any line breaks in it.
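In your script that would look something like this (same file paths as in your post, with only the -MaxCharLength argument added):
Invoke-Sqlcmd -InputFile "C:\Dashboard\sql\gtldata.sql" -MaxCharLength 65535 | Out-File -filepath "C:\Dashboard\json\gtldata.json"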
Try the below, as it worked for me. The result was a non-truncated output of a varbinary column to a string.
original post [here][1]
bcp "SELECT CAST(BINARYCOL AS VARCHAR(MAX)) FROM OLTP_TABLE WHERE ID=123123 AND COMPANYID=123" queryout "C:\Users\USER\Documents\ps_scripts\res.txt" -c -S myserver.db.com -U admin -P password
[1]: https://stackoverflow.com/questions/60525910/powershell-truncating-sql-query-output?noredirect=1#comment107077512_60525910
I have a source file in which 90% of the first field is empty. I want to load this file into a SQL Server table with the BCP utility. When I run the BCP command, the BCP utility is not able to recognize or distinguish records.
My Source file has data as below.
|100168|27238800000|14750505|1|273
|100168|27238800000|14750505|1|273
|100681|88392930052|37080101|1|252
|101014|6810000088|90421505|12|799
|101595|22023000000|21050510|8|780
I am using
bcp [DBNAME].[dbo].[TABLE1] in \\filelocation\filename -e \\filelocation\filename_Error.txt -c -t | -S ServerName -T -h TABLOCK -m 1
I am getting this error message in error.txt:
## Row 1, Column 28: String data, right truncation ## 100168 27238800000 14750505 1 273 100168|27238800000|14750505|1|273
Here BCP is not able to recognize records. Because of this, BCP is trying to load the next record's data into the last field, which is causing data truncation.
The table schema is:
CREATE TABLE [DBO].[TABLE1](
FLD1 VARCHAR(10)
,FLD2 VARCHAR(10)
,FLD3 VARCHAR(22)
,FLD4 VARCHAR(15)
,FLD5 VARCHAR(10)
,FLD6 VARCHAR(12) )
You need to quote the pipe. The pipe character (|) is used for redirecting standard output on the command line.
The following simplified line works with your sample:
bcp.exe [db].dbo.[table1] in "path\Data.dat" -S".\instance" -T -c -t"|"
I omitted the error limit (-m), error log (-e) and table lock hint (-h), but those should not affect the import. If you still have an issue, try quoting parameters like the server name and file names.
I used a text file with standard \r\n row terminators, as expected by -c.
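Applied to your original command line (keeping your placeholders), that would look something like:
bcp [DBNAME].[dbo].[TABLE1] in \\filelocation\filename -e \\filelocation\filename_Error.txt -c -t"|" -S ServerName -T -h TABLOCK -m 1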
I'm trying to get BCP to insert the contents of a text file into a single field.
Example file content
Field1,field2,Field3
1,test,,
2,,test
3,test,test
The following command imports each line above as a new row into my temp table.
bcp mydb..tempTable in c:\testFile.txt -T -c
I think the solution is to use the -r switch to specify the row terminator as the end of the file, but I'm unsure how to do this.
EDIT
I found the solution. The text file I am importing is first created using BCP; in my example all of the file contents come from a single nvarchar(max) field and row. If I set the row terminator via -r during the export, then this also becomes the end of my file. I can then import using bcp mydb..tempTable in c:\testFile.txt -T -c -r {eof}.
The only issue I have now is that the output from the BCP command states "Error = [Microsoft][SQL Server Native Client 10.0]Unexpected EOF encountered in BCP data-file"; however, the data still imports as I want, so presumably I can ignore this?
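For completeness, the export/import pair would look something like this (the source table, column and key names are hypothetical placeholders for whatever produced the file):
bcp "SELECT DocContent FROM mydb..SourceTable WHERE DocId = 1" queryout c:\testFile.txt -T -c -r {eof}
bcp mydb..tempTable in c:\testFile.txt -T -c -r {eof}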