I'm trying to load data into a Snowflake table that has numeric datatypes, but a column value contains a number with a special character (0,0000), and the COPY command fails because of it. Is there any chance we can handle this in the file format instead of handling it in the SELECT statement?
There is no option in file format that would handle strings inside of a numeric field. This is the purpose of allowing you to use a SELECT statement in your COPY INTO command to handle simple manipulation.
Your other option would be to load it as a string/varchar during your COPY INTO command, and then handle the casting as a secondary step in your processing. This would likely be faster processing, actually.
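A sketch of that second step's logic in Python terms: a value like 0,0000 appears to use a comma as the decimal separator, so the cast-time cleanup is just a character swap before the numeric conversion. The exact rule is an assumption about the source data; in Snowflake itself this would be a REPLACE plus a cast on the varchar column.

```python
def to_number(raw: str) -> float:
    """Convert a numeric string that uses a comma as the decimal
    separator (an assumption about the source format) to a number."""
    return float(raw.replace(",", "."))
```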
Right now, I run a stored procedure whose output feeds a "Create CSV Table" Data Operations component. This component, not surprisingly, outputs a comma-delimited list of fields, which is not supported by our remote system. The fields need to be tab-delimited. One would think that the Data Operations component would have a tab (or other character-delimited option). But no, only commas are available, and no other Data Operations component outputs a tab-delimited table.
Any mechanism that requires us to write code is strictly a last resort, as there's no need for code to consume CSV. Also, any mechanism that requires paying for third-party components is categorically out, as is any solution that is still in preview.
The only option we've thought of is to revamp the stored procedure which outputs a single "column" containing the tab-delimited columns, and then output to a file - ostensibly, a comma-delimited file, but one without commas embedded inside (which is allowed for my system) so that the single column isn't itself enquoted.
Otherwise, I guess Function Apps is the solution. Anyone with ideas?
The easiest way is to use a string function and replace the comma with another delimiter. If you can accept this approach, after creating the CSV table I initialize a string variable with this input: replace(body('Create_CSV_table_2'),',',' ').
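One caveat with a blanket replace: it will also rewrite commas that appear inside quoted field values. If that matters, the conversion has to parse the CSV first. A Python sketch of the safe version (not Logic Apps code, just to illustrate the difference):

```python
import csv
import io

def csv_to_tsv(text: str) -> str:
    """Re-delimit CSV as TSV, leaving commas inside quoted fields intact."""
    rows = csv.reader(io.StringIO(text))
    out = io.StringIO()
    csv.writer(out, delimiter="\t", lineterminator="\n").writerows(rows)
    return out.getvalue()
```

For an input row like a,"b,c" this keeps the quoted field as a single column, whereas a plain string replace would have rewritten the comma inside it as well.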
If you don't want to do it this way, then yes, you'll have to solve it with code, and a Function is one choice.
I'm struggling to find a built-in way to redirect empty rows as flat file source read errors in SSIS (without resorting to a custom script task).
As an example, you could have a source file with an empty row in the middle of it:
DATE,CURRENCY_NAME
2017-13-04,"US Dollar"

2017-11-04,"Pound Sterling"
2017-11-04,"Aus Dollar"
and your column types defined as:
DATE: database time [DT_DBTIME]
CURRENCY_NAME: string [DT_STR]
With all that, the package still runs and carries the empty row all the way to the destination, where it naturally fails. I want to be able to catch it early and identify it as a source read failure. Is it possible without a script task? A simple Derived Column perhaps, but I would prefer if this could be configured at the Connection Manager / Flat File Source level.
The only way to avoid relying on a script task is to define your source flat file with only one varchar(max) column, choose a delimiter that is never used within the data, and write all the content into a SQL Server staging table. You can then clean out those empty lines and parse the rest into a relational output using SQL.
This approach is not very clean and takes a lot more effort than using a script task to drop empty lines or ones not matching a pattern. It isn't that hard to create a transformation with the Script Component.
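The core of such a script component is small. Sketched in Python rather than the SSIS C#/VB template, it amounts to routing blank rows to a separate (error) output:

```python
def split_rows(lines):
    """Separate data rows from empty ones (the would-be error output)."""
    good, bad = [], []
    for line in lines:
        # A row of only whitespace counts as empty here (an assumption;
        # tighten the rule to match your interface definition).
        (bad if line.strip() == "" else good).append(line)
    return good, bad
```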
That being said, my advice is to document a clear interface description and distribute it to all clients using your interface. Handle all files that throw an error while reading the flat file, and send a mail with the file to the responsible client explaining that it doesn't follow the interface rules and needs to be fixed.
Just imagine the flat file is generated manually, or even worse with something like Excel: you will struggle with wrong file encoding, missing columns, non-ASCII characters, wrong date formats, etc.
You will end up spending your time handling exceptions caused by quality issues.
Just add a Conditional Split component, and use the following expression to split rows
[DATE] == ""
And connect the default output connector to the destination
References
Conditional Split Transformation
I am using the Slowly Changing Dimension task from SSIS 2008 for a delta load, with a flat file as the input to the Slowly Changing Dimension task. I have observed that the '--' character from the file is converted into 'â€' after the delta load.
The input is the flat file and the destination is a database table. The flat file contains a few strings containing the '--' character, but somehow after inserting this data into the table the character gets converted to 'â€'.
What can be the issue?
Kindly help me to resolve this issue.
Regards,
Sameer K.
In essence you need to scrub these characters from the data. This can be done in several places, but it's a well-accepted design pattern to populate a staging table from the source file, where you can scrub the offending characters before bringing the data into your slowly changing dimension. It's also possible to scrub the file prior to import, but it's typically easier to work with the data once it's in a database rather than in a flat file. You could also include a Derived Column task in the SSIS pipeline to remove these characters, but you would need to manage this column by column, which can become difficult to maintain.
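For what it's worth, this particular corruption pattern usually points to an encoding mismatch rather than bad source data: UTF-8 bytes (for example an en dash) being decoded as Windows-1252. A Python sketch of the diagnosis and repair; the two encodings involved are an assumption, so check the flat file connection manager's code page against the file's actual encoding before relying on this:

```python
def fix_mojibake(s: str) -> str:
    """Reverse a UTF-8-read-as-cp1252 round trip (assumed encodings)."""
    return s.encode("cp1252").decode("utf-8")

# This three-character sequence is what an en dash's UTF-8 bytes
# (E2 80 93) look like when decoded as cp1252:
garbled = "\u00e2\u20ac\u201c"
```

Here fix_mojibake(garbled) yields a single en dash (U+2013) again.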
I wrote an SSIS package which imports data from a fixed record length flat file into a SQL table. Within a single file, the record length is constant, but different files may have different record lengths. Each record ends with a CR/LF. How can I make it detect where the end of the record is, and use that length when importing it?
You can use a script task. Pass a ReadWriteVariable into the script task; let's call it intLineLength. In the script task code, detect the location of the CR/LF and write it to intLineLength. Then use the intLineLength ReadWriteVariable in subsequent package steps to import the data.
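The detection logic itself is tiny; here it is sketched in Python rather than the script task's C#/VB (in SSIS the result would be written into the intLineLength variable):

```python
def detect_record_length(data: bytes) -> int:
    """For a fixed-width file, the record length is simply the offset
    of the first CR/LF in the raw file contents."""
    return data.index(b"\r\n")
```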
Here is an article with some good examples: script-task-to-dynamically-build-package-variables
Hope this helps.
This may not work for everyone, but what I ended up doing was simply setting it to a delimited flat file and setting CR/LF as the row delimiter, and leaving the column delimiter and text qualifier blank. This won't work if you actually need to have it split out columns in the import, but I was already using a Derived Column task to do the actual column splitting, because my column positions are variable, so it worked fine.
Has anyone been able to get a variable record length text file (CSV) into SQL Server via SSIS?
I have tried time and again to get a CSV file into a SQL Server table, using SSIS, where the input file has varying record lengths. For this question, the two different record lengths are 63 and 326 bytes. All record lengths will be imported into the same 326 byte width table.
There are over 1 million records to import.
I have no control of the creation of the import file.
I must use SSIS.
I have confirmed with MS that this has been reported as a bug.
I have tried several workarounds. Most have involved writing custom code to intercept the record, and I can't seem to get that to work the way I want.
I had a similar problem, and used custom code (Script Task), and a Script Component under the Data Flow tab.
I have a Flat File Source feeding into a Script Component. Inside it, I use code to manipulate the incoming data and fix it up for the destination.
My issue was that the provider was using '000000' for 'no date available', and another column had a padding/trim issue.
You should have no problem importing this file. Just make sure when you create the Flat File connection manager to select the Delimited format, then set the SSIS column length to the maximum file column length so it can accommodate any data.
It appears you are using the Fixed width format, which is not correct for CSV files (since you have variable-length columns), or maybe you've set the column delimiter incorrectly.
Same issue. In my case, the target CSV file has header & footer records with formats completely different than the body of the file; the header/footer are used to validate completeness of file processing (date/times, record counts, amount totals - "checksum" by any other name ...). This is a common format for files from "mainframe" environments, and though I haven't started on it yet, I expect to have to use scripting to strip off the header/footer, save the rest as a new file, process the new file, and then do the validation. Can't exactly expect MS to have that out-of-the box (but it sure would be nice, wouldn't it?).
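A sketch of that stripping step in Python, under the stated assumptions that the header and footer are exactly one record each and that the footer carries a record count to reconcile (the field layout here is invented for illustration, not taken from any real mainframe feed):

```python
def strip_and_validate(lines):
    """Split off the one-line header/footer and reconcile the footer's
    record count against the body before handing the body to SSIS."""
    header, body, footer = lines[0], lines[1:-1], lines[-1]
    expected = int(footer.split(",")[1])  # assumed: count in 2nd field
    if expected != len(body):
        raise ValueError(f"expected {expected} records, got {len(body)}")
    return body
```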
You can write a script task using C# to iterate through each line and pad it with the proper amount of commas to pad the data out. This assumes, of course, that all of the data aligns with the proper columns.
I.e. as you read each record, you can "count" the number of commas. Then, just append X number of commas to the end of the record until it has the correct number of commas.
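Sketched in Python rather than a C# script task, the padding step would look like this (the full-width delimiter count is passed in as a parameter; what it should be depends on the real file's column count):

```python
def pad_record(line: str, total_commas: int) -> str:
    """Append commas until the record has the full delimiter count."""
    missing = total_commas - line.count(",")
    return line + "," * max(missing, 0)
```

For example, padding a three-field record out to six fields: pad_record("a,b,c", 5) returns "a,b,c,,,", while an already-full record is returned unchanged.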
Excel has an issue that causes this kind of file to be created when converting to CSV.
If you can do this "by hand" the best way to solve this is to open the file in Excel, create a column at the "end" of the record, and fill it all the way down with 1s or some other character.
Nasty, but can be a quick solution.
If you don't have the ability to do this, you can do the same thing programmatically as described above.
Why can't you just import it as a text file and set the column delimiter to "," and the row delimiter to CRLF?