I'm tasked with exporting data from an old application that uses SQL Anywhere, apparently version 5, maybe 5.6. I've never worked with this database before, so I'm not sure where to start. Does anybody have a hint?
I'd like to export it into more or less any text representation that I can then work with. Thanks.
I ended up exporting the data using isql and these commands, run once per table (where #{table} is the table name, from a list I built manually):
SELECT * FROM #{table};
OUTPUT TO "C:\export\#{table}.csv" FORMAT ASCII DELIMITED BY ',' QUOTE '"' ALL;
SELECT * FROM #{table};
OUTPUT TO "C:\export\#{table}.txt" FORMAT TEXT;
I used the CSV to import the data itself and the TXT to pick up the field names (parsing only the first line). The TXT files can become rather large if you have a lot of data.
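If you'd rather not build the table list by hand, it should also be possible to pull it from the catalog. This is only a sketch, assuming SQL Anywhere's SYS.SYSTABLE catalog view behaves the same in a version this old:

-- list base tables so the export loop can be generated;
-- you may also need to filter out SYS-owned system tables
SELECT table_name
FROM SYS.SYSTABLE
WHERE table_type = 'BASE';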
Have a read of http://www.lansa.com/support/tips/t0220.htm
Related
I'm trying to import data into my MS SQL DB as a flat file. However, there is a problem with one of the fields: it contains a line break within the data, which the import wizard treats as the end of the row, breaking each row into two. I've tried importing the data into Excel as well (just to try it out), but the behavior is the same.
Does anyone know how to solve this? Any pre-import mechanism that might massage the data somehow?
(unfortunately, it's not practically possible for me to ask the source system to change the encoding)
//Eva-Lotta
Use this to replace the newline characters in the affected columns:
Replace(Replace(columnName,char(13),' '),char(10),' ')
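For example, applied in place before the export (a sketch only; the table and column names below are placeholders):

-- strip carriage returns and line feeds from the problem column
UPDATE dbo.SourceTable
SET Description = Replace(Replace(Description, char(13), ' '), char(10), ' ')
WHERE Description LIKE '%' + char(13) + '%'
   OR Description LIKE '%' + char(10) + '%';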
Regards
I've managed to find a work-around! I start by splitting the files into chunks (as they are 3.8 GB in size ...), open them in UltraEdit, loop through them to join the two lines back together, and import them into Excel / my SQL DB. It's not neat, but it has solved my immediate problem ... thanks for your engagement!
I am importing some Excel spreadsheets into a MS SQL Server. I load the spreadsheets, cleanse the data and then export it to SQL using Alteryx. Some files have text columns where the cells span multiple lines (i.e. with new line characters, like when you press ALT + ENTER in Excel). When I export the tables to SQL and then query the table, I see lots of '_x000D_' which are not in the original file.
Is it some kind of newline character encoding? How do I get rid of it?
I haven't been able to replicate the error. The original file contains some letters with accents (à á etc.); I created multi-line spreadsheets with accented letters, but I managed to export these to SQL just fine, with no '_x000D_'.
If these were CSV files I would think of character encoding, but Excel spreadsheets? Any ideas? Thanks!
I know this is old, but: if you're using Alteryx, just run it through the "Data Cleansing" tool as the last thing prior to your export to SQL. For the field in question, tell the tool to remove new lines by checking the appropriate checkbox.
If that still doesn't work... _x000D_ is the escaped form of ASCII 13 (hex D = decimal 13), the carriage-return character. So try running your data through a regular Formula tool and, for the [field] in question, use the expression Replace([field], CharFromInt(13), ""), which removes that character by replacing it with the empty string.
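And if the token has already landed in SQL Server, a cleanup pass on the loaded table is another option. A sketch only, with placeholder table and column names:

-- remove the literal _x000D_ token after the load; [_] escapes the
-- underscore, which is otherwise a single-character wildcard in LIKE
UPDATE dbo.ImportedTable
SET Notes = REPLACE(Notes, '_x000D_', '')
WHERE Notes LIKE '%[_]x000D[_]%';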
This worked for me:
REGEX_REPLACE([field],"_x000D_","")
I would like to export a table from Firebird database into a CSV file. With MySQL, I can use an SQL with additional commands like INTO OUTFILE. Here is an example:
SELECT a,b,a+b INTO OUTFILE '/tmp/result.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM test_table;
This query can be submitted from several SQL query tools (MySQL Workbench, HeidiSQL, ccenter).
Is it possible, using for example FlameRobin, to submit an SQL statement like the one above to export data from Firebird?
I could use FlameRobin's menus or another tool like FBexport, but I'd prefer a pure-SQL solution that doesn't involve clicking through menus or using additional tools.
I know it's a bit late... but
there is a way to do the export using ISQL's OUTPUT command.
It looks like this:
`OUTPUT C:\file_name.txt;`
`SELECT A, B, A+B FROM EXAMPLE_TABLE;`
`OUTPUT;`
The output of your SELECT statement will then be in the txt file.
Remember to create the empty file before running the OUTPUT command.
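If you want comma-separated output rather than ISQL's default fixed-width columns, one option is to build the CSV line in the SELECT itself. A hedged sketch against the question's test_table (the output file name is a placeholder, and the CASTs make the number-to-string conversion explicit):

OUTPUT C:\result.csv;
/* concatenate the columns into one comma-separated string per row */
SELECT CAST(a AS VARCHAR(20)) || ',' ||
       CAST(b AS VARCHAR(20)) || ',' ||
       CAST(a + b AS VARCHAR(20))
FROM test_table;
OUTPUT;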
Firebird does not support that directly. You'll have to use some tool (either GUI or CLI) to do it.
I extracted some 10 tables to CSV with " as the text qualifier. The problem is that my extract doesn't look right in Excel because of special characters in a few columns: some columns break onto a new row when they should stay in the column.
I've been doing it manually using the Management Studio export feature, but what's the best way to extract the 10 tables to CSV with the double-quote qualifier using a script?
Will I have to escape commas and double quotes? What's the best way to do this?
How should I handle newline characters in my columns? We need them for the migration to the new system, but the PM wants to open the files and make modifications in Excel. Can we have it both ways?
I understand that much of the problem is Excel interpreting the file, where a load utility into another database might not do anything special with newlines. But what about double quotes and commas in the data? If I don't care about Excel, must I still escape them?
Many Thanks.
If you are using SQL Server 2005 or later, the Export Wizard will export the Excel file for you.
Right click the database, select Tasks-> Export Data...
Set the source to be the database.
Set the destination to excel.
At the end of the wizard, select the option to create an SSIS package. You can then create a job to execute the package on a schedule or on demand.
I'd suggest never using commas for your delimiter - they show up too frequently in other places. Use a tab instead, since a tab isn't easy to type into an Excel cell.
Make sure you never start a field with a space unless you want that space in the field.
Try changing your text LFs into the literal text \n. That is:
You might have:
0,1,"Line 1
Line 2", 3
I suggest you want:
0 1 "Line 1\nLine 2" 3
(assuming the separators between fields shown above are tabs)
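If the data lives in SQL Server, you could do that conversion at extract time. A minimal sketch, assuming placeholder names Notes and dbo.MyTable ('\n' in T-SQL is just the two literal characters, which is what you want here):

-- turn real line breaks into the literal text \n before export;
-- add a lone char(13) pass if bare carriage returns can occur
SELECT Replace(Replace(Notes, char(13) + char(10), '\n'), char(10), '\n') AS Notes
FROM dbo.MyTable;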
Good luck
As far as I know, you cannot have newlines in CSV columns. If you know a column could contain commas, double quotes, or newlines, you can use this SQL statement to extract the value as valid CSV:
SELECT '"' + REPLACE(REPLACE(REPLACE(CAST([yourColumnName] AS VARCHAR(MAX)), '"', '""'), char(13), ''), char(10), '') + '"' FROM yourTable
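Putting that together for a whole row (a sketch only; dbo.MyTable and its columns are placeholders): each text column is quoted, inner quotes are doubled, and CR/LF are stripped so Excel sees one physical line per row.

-- build the entire CSV line in one SELECT
SELECT '"' + REPLACE(REPLACE(REPLACE(CAST(Col1 AS VARCHAR(MAX)), '"', '""'), char(13), ''), char(10), '')
     + '","'
     + REPLACE(REPLACE(REPLACE(CAST(Col2 AS VARCHAR(MAX)), '"', '""'), char(13), ''), char(10), '')
     + '"'
FROM dbo.MyTable;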
I have a CSV file with quote text delimiters. Most of the 90,000 rows are fine, but a few rows have a text field that contains both a quote and a comma. For example, the field's value would be:
AB",AB
When delimited, this becomes
"AB"",AB"
When SQL Server 2005 attempts to import this, I get errors such as...
Messages
Error 0xc0202055: Data Flow Task: The column delimiter for column "Column 4" was not found.
(SQL Server Import and Export Wizard)
This only seems to happen when a quote and a comma are in a text value together. Values like
AB"AB which becomes "AB""AB"
or
AB,AB which becomes "AB,AB"
work fine.
Here are some example rows...
"1464885","LEVER WM","","B","MP17"
"1465075",":PLT-BC !!NOTE!!","","B",""
"1465076","BRKT-STR MTR !NOTE!","","B",""
"1465172",":BRKT-SW MTG !NOTE!","","B","MP16"
"1465388","BUSS BAR !NOTE!","","B","MP10"
"1465391","PLT-BLKHD ""NOTE""","","B","MP20"
"1465564","SPROCKET:13TEETH,74MM OD,66MM","ID W/.25"" SETSCR","B","MP6"
"S01266330002","CABLE:224"",E122/261,8 CO","","B","MP11"
The last row is an example of the problem - the "", causes the error.
I've had MAJOR problems with SSIS. Things that Access, Excel, and even DTS seemed to handle very well, SSIS chokes on. Variable record-length data is another problem, but, yes, these embedded qualifiers are a major problem, especially if you do not have access to the import files because they're on someone else's server that you pay to access, and they might even be 4 to 5 GB in size! You can't just do a "replace all" on that every import.
You may want to check into this at Microsoft Downloads called "UnDouble" and here is another workaround you might try.
It seems that with SSIS in SQL Server 2008, the bug is still there. I don't know why they haven't addressed this in the parser, but it's like we went back in time with SSIS's basic import functionality.
UPDATE 11-18-2010: This bug still exists in SSIS. Amazing.
How about just:
1. Search/replace all `"",` with `'';` (fixes all the broken fields)
2. Search/replace all `;'';` with `,"",` (to "unfix" properly empty fields)
3. Search/replace all `'';'';` with `"","",` (to "unfix" properly empty fields which follow a correct encapsulation of embedded delimiters)
That converts your original to:
"1464885","LEVER WM","","B","MP17"
"1465075",":PLT-BC !!NOTE!!","","B",""
"1465076","BRKT-STR MTR !NOTE!","","B",""
"1465172",":BRKT-SW MTG !NOTE!","","B","MP16"
"1465388","BUSS BAR !NOTE!","","B","MP10"
"1465391","PLT-BLKHD ""NOTE""","","B","MP20"
"1465564","SPROCKET:13TEETH,74MM OD,66MM","ID W/.25"" SETSCR","B","MP6"
"S01266330002","CABLE:224'';E122/261,8 CO","","B","MP11"
That seems to run the gauntlet fine in SSIS. You may have to run step 3 recursively to account for three empty fields in a row ('';'';'';, etc.), but the bottom line is that when you have embedded text qualifiers, you have to either escape them or replace them. Let this be a lesson for your CSV creation processes going forward.
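If you would rather run those three passes in the database than in a text editor, the same replacements can be applied to a single-column staging table. A hedged sketch with placeholder names (CHAR(39) is the single quote, used to avoid escaping it inside the literals):

-- apply steps 1-3 in order, innermost REPLACE first
UPDATE dbo.Staging
SET RawLine =
    REPLACE(
        REPLACE(
            REPLACE(RawLine,
                '"",', CHAR(39) + CHAR(39) + ';'),                        -- step 1
            ';' + CHAR(39) + CHAR(39) + ';', ',"",'),                     -- step 2
        CHAR(39) + CHAR(39) + ';' + CHAR(39) + CHAR(39) + ';', '"","",'); -- step 3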
Microsoft says doubled double quotes inside double-quote-delimited fields just don't work. A fix is planned for the end of 2011...
In the meantime we will have to use workarounds like those described in the other answers.
I would just do a search/replace for `",` and replace it with `,`
Do you have access to the original file?