I am currently having some difficulty converting a Unicode character string (DT_WSTR) into an INT.
This is what I have.
(Note: most of the labels are in German, but I think anyone who has worked with SSIS will understand anyway.)
Data type selected: a four-byte, unsigned integer. But all it does is fail terribly.
Try the Derived Column component instead of the Data Conversion component.
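For example, a cast in a Derived Column expression along these lines should work (the column name MyWstrColumn is a placeholder; a four-byte unsigned integer is DT_UI4 in SSIS expression terms):

(DT_UI4)RTRIM(LTRIM(MyWstrColumn))

Rows whose values can't be parsed as integers will fail the cast, and you can redirect them through the component's error output instead of failing the whole data flow.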
I need to read a Micro Focus COBOL data file (on a PC) containing COMP fields. FYI, a COMP field stores an integer in binary format.
If I transfer the raw binary into SQL Server, I can convert it to a bigint using
CONVERT(bigint,compField,1).
That way, CONVERT(bigint,0x0000002B17,1) will become 11031.
I also need to deal with negative values. In T-SQL it looks like this:
CONVERT(bigint,0xFFFFFFD4E9,1) - CONVERT(bigint,0xFFFFFFFFFF,1) - 0x0000000001
will give -11031.
Is there a way to do this directly in the data flow? I'm sure the info is out there somewhere, but I'm too dumb to find it.
I'm working with SSIS 2019 btw.
Thank you!
Simon.
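For what it's worth, a Script Component can reproduce that two's-complement arithmetic directly in the data flow. A minimal C# sketch, assuming the COMP bytes arrive in a DT_BYTES input column named CompField and there is an Int64 output column named CompValue (both names are placeholders):

public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    // Accumulate the big-endian bytes into a long: 0x0000002B17 -> 11031.
    byte[] bytes = Row.CompField;
    long value = 0;
    foreach (byte b in bytes)
        value = (value << 8) | b;

    // Sign-extend when the high bit of the first byte is set; this is the
    // same as the T-SQL "subtract 0xFF...FF and 1" trick: 0xFFFFFFD4E9 -> -11031.
    int bits = bytes.Length * 8;
    if (bits < 64 && (bytes[0] & 0x80) != 0)
        value -= 1L << bits;

    Row.CompValue = value;
}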
I am facing a strange issue while using the SSIS Data Conversion component to convert strings to a decimal data type. I use SSIS 2016.
The source input has values of mixed data types (string, integer, decimal) and is defined as varchar in the flat file source. The target data type expected is numeric. When the explicit type conversion happens from string to decimal, we expect the alphanumeric values to be rejected to the error table and only the numeric values to pass through.
Instead, we are seeing some alphanumeric values shed the extra characters and pass through successfully with no error.
Examples: Value "3,5" converted to 35
Value "11+" converted to 11
We do not have control over the source data and will not be able to replace char data before passing it into the Data Conversion component.
We have tried the steps below as a workaround, and it has worked:
(1) First Data Conversion from DT_STR to DT_NUMERIC
(2) Capture the error rows that fail the above conversion
(3) Second Data Conversion from DT_NUMERIC to DT_DECIMAL
But as the source data is not reliable, we may have to apply this workaround wherever there are numeric fields (int types and decimals), which is not a friendly solution.
So I'm checking with you all to see whether anyone has tried an easier and better solution.
I did not expect this result, but I tried an expression task and it worked for DT_DECIMAL:
(DT_DECIMAL,1)"11+" -- evaluates to 11.0
But it does not work for DT_NUMERIC: SSIS won't allow an expression to produce a DT_NUMERIC result directly, though a DT_NUMERIC cast can be nested inside a cast to DT_DECIMAL. To demonstrate, in an expression task even this "numerically valid" cast would not be permitted, because the output simply can't be of type DT_NUMERIC:
(DT_NUMERIC, 3, 0)123
But this is permitted:
(DT_DECIMAL,0)((DT_NUMERIC, 3, 0)123)
So as long as you are happy to specify a precision and scale big enough to hold your data during the "validity" check done by DT_NUMERIC, and then cast from there to DT_DECIMAL, all in a derived column transform, DT_NUMERIC seems to enforce the strict semantics you want.
SSIS allows this:
(DT_DECIMAL,0)((DT_NUMERIC, 2, 0)"11")
But not either of these:
(DT_DECIMAL,0)((DT_NUMERIC, 2, 0)"11+")
(DT_DECIMAL,0)((DT_NUMERIC, 2, 0)"3,5")
#billinkc Sorry for not responding to you earlier.
We are working under some restrictions:
(1) All we want to do is capture data type issues in the input data, so we wanted to harness the capability of the SSIS Data Conversion component.
(2) The DBA doesn't want us to use SQL for type conversions, so we are required to do these conversions between the flat file source and the flat file destination using SSIS.
(3) We are required to capture the type conversion errors at every step of conversion into an error output file, with the error column name and error description, to be used later. So we cannot remove char data from the field before passing it to the Data Conversion component.
#allmhuran - We have used a Derived Column task before the Data Conversion component to replace unnecessary characters in one of the other fields, but using the same approach for type conversion makes achieving (3) difficult, because the error outputs from the Derived Column task and the Data Conversion component cannot be redirected to the same error output file.
We could ignore the Data Conversion component entirely and use only the Derived Column task to do all type conversions, whether single or nested. The error descriptions do not always look good, but the cons of the former method can be overcome. I will try this out!
I have a field JSONStructure with the DT_NTEXT data type, and I have to do a replace on it. But it looks like I cannot use a replace function on a column with the DT_NTEXT data type. I tried using Data Conversion in SSIS, but my JSONStructure can have more than 8000 characters, and it is not working.
Can someone suggest the best way to do this?
Thanks in advance.
I think you'll need to use a Script Component acting as a transformation. You'll need to specify that the column is read/write, and then use C#/VB.NET string methods to perform the string manipulation.
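A minimal C# sketch of that idea, assuming the DT_NTEXT column is named JSONStructure and is marked ReadWrite on the Script Component's input (the search and replacement strings are placeholders):

public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    if (Row.JSONStructure_IsNull)
        return;

    // DT_NTEXT comes through as a BlobColumn holding UTF-16 bytes,
    // so there is no 4000/8000-character limit to worry about.
    byte[] bytes = Row.JSONStructure.GetBlobData(0, (int)Row.JSONStructure.Length);
    string text = System.Text.Encoding.Unicode.GetString(bytes);

    // Placeholder replacement - substitute your own search/replace values.
    text = text.Replace("oldValue", "newValue");

    Row.JSONStructure.ResetBlobData();
    Row.JSONStructure.AddBlobData(System.Text.Encoding.Unicode.GetBytes(text));
}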
I have a column in my database that is a float. My database is in Brazilian Portuguese, so the decimal separator for this column is a comma (,).
I don't know if this is the cause, but Dapper is throwing the exception "Invalid cast from 'System.Double' to 'System.Nullable..." (my entity uses a Nullable for this column).
Can you help me?
This isn't anything to do with culture - the data that comes back is primitive, not stringified. Simply, it isn't happy to cast from double to decimal?. Since the database is returning double, a double? property would work fine. The core tries to allow as many conversions as are pragmatic, but it doesn't support all mappings.
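A sketch of the two options, using a hypothetical entity and column name (Price stands in for whichever float column is failing):

public class MyEntity
{
    // SQL float is double precision, so double? maps without any cast.
    public double? Price { get; set; }
}

// If the property must remain decimal?, cast in the query instead so the
// database itself returns a decimal (table/column names are placeholders):
// var rows = conn.Query<MyEntity>(
//     "SELECT CAST(Price AS decimal(18,2)) AS Price FROM MyTable");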
I have just been bitten by the issue described in the SO question Binding int64 (SQL_BIGINT) as query parameter causes error during execution in Oracle 10g ODBC.
I'm porting a C/C++ application using ODBC 2 from SQL Server to Oracle. For numeric fields exceeding NUMBER(9) it uses the __int64 data type, which is bound to queries as SQL_C_SBIGINT. Apparently such a binding is not supported by Oracle ODBC, so I must now do an application-wide conversion to another method. Since I don't have much time (it's an unexpected issue), I would rather use a proven solution, not trial and error.
What data type should be used to bind e.g. a NUMBER(15) in Oracle? Is there a documented, recommended solution? What are you using? Any suggestions?
I'm especially interested in solutions that do not require any additional conversion. I can easily provide and consume numbers in the form of __int64 or char* (normal non-exponential form, without thousands separators or a decimal point). Any other format requires additional conversion on my part.
What I have tried so far:
SQL_C_CHAR
Looks like it's going to work for me. I was worried about the variability of the number format, but in my use case it doesn't seem to matter; apparently only the decimal-point character changes with the system language settings.
And I don't see why I should use an explicit cast (e.g. TO_NUMBER) in the SQL INSERT or UPDATE command. Everything works fine when I bind the parameter with SQL_C_CHAR as the C type and SQL_NUMERIC (with the proper precision and scale) as the SQL type. I couldn't reproduce any data corruption.
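A sketch of that binding in C (the statement, buffer size, and precision are placeholders for whatever your schema actually needs):

#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

/* Bind parameter 1 of e.g. "INSERT INTO t (num_col) VALUES (?)" as a
   character string that the driver converts to NUMBER(15). */
void bind_number_as_char(SQLHSTMT hstmt, long long value)
{
    static SQLCHAR buf[32];
    static SQLLEN  ind = SQL_NTS;

    snprintf((char *)buf, sizeof(buf), "%lld", value);

    SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT,
                     SQL_C_CHAR,   /* C type: null-terminated string */
                     SQL_NUMERIC,  /* SQL type                       */
                     15,           /* column size: NUMBER(15)        */
                     0,            /* decimal digits (scale)         */
                     buf, sizeof(buf), &ind);
}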
SQL_NUMERIC_STRUCT
I've noticed SQL_NUMERIC_STRUCT, added with ODBC 3.0, and decided to give it a try. I am disappointed.
In my situation it is enough, as the application doesn't really use fractional numbers. But as a general solution... simply, I don't get it. I mean, I finally understood how it is supposed to be used. What I don't get is why anyone would introduce a new struct of this kind and then make it work this way.
SQL_NUMERIC_STRUCT has all the fields needed to represent any NUMERIC (or NUMBER, or DECIMAL) value with its precision and scale. Only they are not used.
When reading, ODBC sets the precision of the number (based on the precision of the column; except that Oracle returns a bigger precision, e.g. 20 for NUMBER(15)). But if your column has a fractional part (scale > 0), it is truncated by default. To read a number with the proper scale, you need to set the precision and scale yourself with a SQLSetDescField call before fetching the data.
When writing, Oracle thankfully respects the scale contained in the SQL_NUMERIC_STRUCT. But the ODBC spec doesn't mandate that, and MS SQL Server ignores this value. So, back to SQLSetDescField again.
See HOWTO: Retrieving Numeric Data with SQL_NUMERIC_STRUCT and INF: How to Use SQL_C_NUMERIC Data Type with Numeric Data for more information.
Why doesn't ODBC fully use its own SQL_NUMERIC_STRUCT? I don't know. It looks like it works, but I think it's just too much work.
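For reference, the descriptor setup those KB articles describe looks roughly like this when reading column 1 into a SQL_NUMERIC_STRUCT with scale 2 (the precision and scale values are placeholders; hstmt is an executed statement handle):

SQL_NUMERIC_STRUCT num;
SQLLEN ind;
SQLHDESC hdesc;

/* Get the application row descriptor for the statement. */
SQLGetStmtAttr(hstmt, SQL_ATTR_APP_ROW_DESC, &hdesc, 0, NULL);

/* Setting type/precision/scale unbinds the column, so the
   data pointer must be set last. */
SQLSetDescField(hdesc, 1, SQL_DESC_TYPE, (SQLPOINTER)SQL_C_NUMERIC, 0);
SQLSetDescField(hdesc, 1, SQL_DESC_PRECISION, (SQLPOINTER)15, 0);
SQLSetDescField(hdesc, 1, SQL_DESC_SCALE, (SQLPOINTER)2, 0);
SQLSetDescField(hdesc, 1, SQL_DESC_DATA_PTR, &num, 0);
SQLSetDescField(hdesc, 1, SQL_DESC_INDICATOR_PTR, &ind, 0);
SQLSetDescField(hdesc, 1, SQL_DESC_OCTET_LENGTH_PTR, &ind, 0);

SQLFetch(hstmt);  /* num now holds the value with scale 2 */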
I guess I'll use SQL_C_CHAR.
My personal preference is to make the bind variables character strings (VARCHAR2), and let Oracle do the conversion from character to its own internal storage format. It's easy enough (in C) to get data values represented as null-terminated strings in an acceptable format.
So, instead of writing SQL like this:
SET MY_NUMBER_COL = :b1
, MY_DATE_COL = :b2
I write the SQL like this:
SET MY_NUMBER_COL = TO_NUMBER( :b1 )
, MY_DATE_COL = TO_DATE( :b2 , 'YYYY-MM-DD HH24:MI:SS')
and supply character strings as the bind variables.
There are a couple of advantages to this approach.
One is that it works around the issues and bugs one encounters when binding other data types.
Another advantage is that bind values are easier to decipher on an Oracle event 10046 trace.
Also, an EXPLAIN PLAN (I believe) expects all bind variables to be VARCHAR2, which means the statement being explained is slightly different from the actual statement being executed (due to the implicit data conversions when the data types of the bind arguments in the actual statement are not VARCHAR2).
And (less important) when I'm testing the statement in TOAD, it's easier just to be able to type strings into the input boxes, and not have to muck with changing the data type in a dropdown list box.
I also let the built-in TO_NUMBER and TO_DATE functions validate the data. (In earlier versions of Oracle at least, I encountered issues with binding a DATE value directly: it bypassed at least some of the validity checking and allowed invalid date values to be stored in the database.)
This is just a personal preference, based on past experience. I use this same approach with Perl DBD.
I wonder what Tom Kyte (asktom.oracle.com) has to say about this topic?