SQLite: define the output field separator in a single command line

I need to use a single command line to fetch a set of records from a database.
If I do this:
$ sqlite3 con.db "SELECT name,cell,email FROM contacts"
I get output with "|" as the separator, looking like this:
Alison|+12345678|alison@mail.com
Ben|+23456789|ben@mail.com
Steve|+34567890|steve@mail.com
Is there a way (in a single command line, like the one above) to change the output field separator to something else, like ";;;" or something more unique? I ask because the records occasionally contain the "|" character, and that causes issues.
My desired result is:
Alison;;;+12345678;;;alison@mail.com
Ben;;;+23456789;;;ben@mail.com
Steve;;;+34567890;;;steve@mail.com
Or any other unique separator, which is not likely to be found inside the values.
(The command is executed on a Linux machine)
Thank you.

The -separator option does what you want:
sqlite3 -separator ';;;' con.db "SELECT ..."
The only way to format the output so that you are guaranteed to not get the separator in the values is to quote all strings:
sqlite3 con.db "SELECT quote(name), quote(cell), quote(email) FROM contacts"
However, this would require you to parse the output according to the SQL syntax rules.
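For illustration, assuming the sample rows above are stored as text, the quoted output would look like this (quote() returns strings single-quoted, with any embedded single quotes doubled):
$ sqlite3 con.db "SELECT quote(name), quote(cell), quote(email) FROM contacts"
'Alison'|'+12345678'|'alison@mail.com'
'Ben'|'+23456789'|'ben@mail.com'
'Steve'|'+34567890'|'steve@mail.com'
Any "|" inside a value then always appears between the quotes, so a parser that understands SQL string literals can recover the fields unambiguously.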

Related

Using MSSQL tool sqlcmd, is it possible to set a row delimiter?

Using bash, I use the sqlcmd MSSQL tool.
When a query returns more than one row, the rows are separated only by a space, making it impossible to tell them apart, as the data itself can contain spaces.
Is there a way to tell, in the SQL query, that I want the rows separated by a specific delimiter?
Current output:
row1field1, row1field with space, row1other field row2field1, row2field2, row2etc
Expected output (with, say, delimiter = |):
row1field1, row1field with space, row1other field|row2field1, row2field2, row2etc

Load table issue - BCP from flat file - Sybase IQ

I am getting the below error while trying to do bcp from a flat delimited file into a Sybase IQ table.
Could not execute statement.
Non-space text found after ending quote character for an enclosed field.
I couldn't find any non-space text in the file, but this error is stopping me from doing the bulk copy. The column delimiter is |, the text qualifier is ", and the row delimiter is \n.
Below is the sample template I am using:
LOAD TABLE TABLE_NAME(a NULL('(null)'),b NULL('(null)'),c NULL('(null)'))
USING CLIENT FILE '/home/...../a.txt' //unix
QUOTES ON
FORMAT bcp
STRIP RTRIM
DELIMITED BY '|'
ROW DELIMITED BY '\n'
When I perform the same load with QUOTES OFF, it is successful, but the same statement fails with QUOTES ON. I would like to get the quotes stripped off as well.
Sample Data
12345|"abcde"|(null)
12346|"abcdf"|"zxf"
12347|(null)|(null)
12348|"abcdg"|"zyf"
Any leads would be helpful!
If IQ bcp is the same as ASE, then I think those '(null)' fields are being interpreted as strings, not fields that are NULL.
You'd need to stream edit out those (null).
You're on unix, so use sed or perl.
E.g. pipe the file through perl -pe 's/\(null\)//g' on its way to the loading command (the parentheses must be escaped so they match literally instead of forming a capture group).
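A sed equivalent, as a minimal sketch (a_clean.txt is just an illustrative output name; in sed's basic regular expressions the parentheses match literally, so no escaping is needed):
sed 's/(null)//g' /home/...../a.txt > a_clean.txt
Note that either command also blanks out any legitimate field whose value is the literal text (null).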
QUOTES OFF might seem to work, but I suspect that when you look at your loaded data you'll see double quotes inside the second field, and '(null)' where you expect a field to be NULL.

Error on loading csv to SybaseIQ using dbisql

I am trying to upload a bunch of CSVs to SybaseIQ using dbisql's LOAD command.
My CSVs look like this:
"ps,abc","jgh","kyj"
"gh",""jkl","qr,t"
and my Load command and other options are these:
set temporary option ISQL_LOG = '/path/for/log/file';
set temporary option DATE_ORDER = 'MDY';
Load table test (a, b, c)
Format bcp
Strip RTRIM
Escapes OFF
quotes ON
Delimited by ','
I create a '.ctl' file like the one above and then execute it with the following command:
dbisql -c "uid = xxx; pwd = yyyy" -host aaaa -port 1089 -nogui test.ctl
On execution I get the following error:
Non-space text found after ending quote character for an enclosed field.
--(db_RecScanner.cxx 2473)
SQLCODE=-1005014, ODBC 3 state="HY000"
HOWEVER
It works fine with the following csv format
"ps,abc","jgh",kyj
"gh",""jkl",qrt
i.e. if the last column doesn't have the double quotes, it works; but all my files have double quotes around each element.
Also the following .ctl file
set temporary option ISQL_LOG = '/path/for/log/file';
set temporary option DATE_ORDER = 'MDY';
Load table test (a ASCII(8), b ASCII(7), c ASCII(7))
Strip RTRIM
Escapes OFF
quotes ON
Delimited by ','
is able to insert the data from the first CSV sample into the database, but it also inserts the quotes and the commas, and messes up the data.
For example, the first row in the db would look like
"ps,abc" ,"jgh", "kyj""g
as opposed to
ps,abc jgh kyj
I am going nuts trying to figure out what the issue is. I have read the manuals for Sybase and dbisql, and according to them the first control file should load the data properly, but it doesn't. Any help on this will be appreciated. Thanks in advance.
Also my table structure is like this:
a(varchar(8)) b(varchar(7)) c(varchar(7))

String comparison between PostgreSQL and log

We have a database full of strings of this type:
Cannot write block. Device at EOM. dev=$device ($device_path)
And then, we have our program which generates log entries, like, for example
2013-10-10 15:37:07 program-sd JobId 50345: Fatal error: block.c:434
Cannot write block. Device at EOM. dev="st0" (/dev/st0)
So we could use SELECT * FROM X WHERE Y LIKE '%LOG%', but the strings are not the same: in our database we have a clean string, while in our log we have the timestamp, more info, and the actual data for $device and $device_path, so the query will return 0 results because the two don't match...
We're trying to return an error code based on what we have in the database; for this example it would be RC: 1019 if the result of the query is not 0...
Is there any way to use regex or something to accomplish this?
Suppose your table of error message templates looks like this:
create table error_templates(id serial primary key, template text);
then you can use a query like this:
select id
from error_templates
where $1 like '%' || regexp_replace(template, '\$\w+', '%', 'g') || '%';
$1 is a placeholder for the error log you are trying to find.
The regexp_replace call replaces the variables in the message with %. It assumes the variables consist of a dollar sign followed by one or more word characters (a-zA-Z0-9_); you might need to adjust that depending on your actual variable names.
Also note that this may be slow. It will have to scan through the whole table; indexes cannot be used.
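As a worked example with the template and log line from the question (the INSERT is only there to set up sample data):
insert into error_templates(template)
values ('Cannot write block. Device at EOM. dev=$device ($device_path)');

select id
from error_templates
where 'Cannot write block. Device at EOM. dev="st0" (/dev/st0)'
      like '%' || regexp_replace(template, '\$\w+', '%', 'g') || '%';
Here regexp_replace turns the template into the pattern '%Cannot write block. Device at EOM. dev=% (%)%', which the log line matches, so the query returns that template's id.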

Commas within CSV Data

I have a CSV file which I am importing directly into a SQL Server table. In the CSV file each column is separated by a comma, but my problem is that I have an "address" column whose data contains commas. As a result, some of the address data spills into the other columns while importing into SQL Server.
What should I do?
For this problem the solution is very simple:
select Flat File Source => browse your file => set the "Text qualifier" (by default it is none) to a double quote (") => follow the rest of the wizard's instructions.
Good luck
If there is a comma in a column, then that column should be surrounded by a single quote or double quote. Then, if inside that column there is a single or double quote, it should have an escape character before it, usually a \.
Example format of CSV
ID - address - name
1, "Some Address, Some Street, 10452", 'David O\'Brian'
Newer versions of SQL Server (2017 and later) support the CSV format fully, including mixed use of " and , .
BULK INSERT Sales.Orders
FROM '\\SystemX\DiskZ\Sales\data\orders.csv'
WITH ( FORMAT='CSV');
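If the file also has a header row or uses an unusual quote character, the same statement can be extended; a sketch with assumed values (FIELDQUOTE and FIRSTROW are standard BULK INSERT options):
BULK INSERT Sales.Orders
FROM '\\SystemX\DiskZ\Sales\data\orders.csv'
WITH ( FORMAT='CSV', FIELDQUOTE='"', FIRSTROW=2 );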
I'd suggest either using another format than CSV, or trying other characters as the field separator and/or text delimiter. Look for a character that isn't used in your data, e.g. |, ^ or #. The format of a single row would become
|foo|,|bar|,|baz, qux|
A well-behaved parser must not interpret 'baz' and 'qux' as two columns.
Alternatively, you could write your own import voodoo that fixes any problems. For the latter, you might find this Groovy skeleton useful (not sure which languages you're fluent in, though).
Most systems, including Excel, will allow for the column data to be enclosed in single quotes...
col1,col2,col3
'test1','my test2, with comma',test3
Another alternative is to use the Macintosh version of CSV, which uses tabs as delimiters.
The best, quickest and easiest way to resolve the comma-in-data issue is to use Excel to save a comma-separated file after setting Windows' list separator to something other than a comma (such as a pipe). This will generate a pipe-separated (or whatever) file that you can then import. This is described here.
I don't think adding quotes will help. The best way I can suggest is to replace the commas in the content with some other mark, like a space:
replace(COLUMN,',',' ') as COLUMN
Appending a quotation mark to the selected column on both sides works. You must also cast the column as NVARCHAR(MAX) to turn it into a string if the column is TEXT.
SQLCMD -S DB-SERVER -E -Q "set nocount on; set ansi_warnings off; SELECT '""' + cast ([Column1] as nvarchar(max)) + '""' As TextHere, [Column2] As NormalColumn FROM [Database].[dbo].[Table]" /o output.tmp /s "," -W
