Invalid cast from System.Double to System.Nullable - dapper

I have a column in my database that is a float. My database is in Brazilian Portuguese, so the decimal separator for this column is a comma (,).
I don't know if this is the cause, but Dapper is throwing the exception "Invalid cast from 'System.Double' to 'System.Nullable..." (my entity uses a Nullable<decimal> for this column).
Can you help me?

This isn't anything to do with culture - the data that comes back is primitive, not stringified. Simply, it isn't happy to cast from double to decimal?. Since the database is returning double, a double? property would work fine. The core tries to allow as many conversions as are pragmatic, but it doesn't support all mappings.
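For illustration, a minimal C# sketch of the fix (the entity and column names here are hypothetical, not from the question):

using Dapper;
using System.Data.SqlClient; // or whichever ADO.NET provider the database uses

public class Measurement
{
    public int Id { get; set; }
    // Was decimal? -- the database float comes back as a .NET double,
    // so double? maps without the invalid cast.
    public double? Value { get; set; }
}

// usage:
// using (var conn = new SqlConnection(connectionString))
// {
//     var rows = conn.Query<Measurement>("SELECT Id, Value FROM Measurements");
// }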

Related

Data Conversion text to numeric in SSIS is removing characters

I am facing a strange issue while using the SSIS Data Conversion component to convert strings to the decimal datatype. I use SSIS 2016.
The source data input has values of mixed data types - string, integer, decimal - and is defined as varchar in the flat file source. The target data type expected is numeric. When explicit type conversion happens from string to decimal, we expect the alphanumeric values to be rejected to the error table and only the numeric values to pass through.
Instead, we are seeing some alphanumeric values shed characters and pass through successfully with no error.
Examples: Value "3,5" converted to 35
Value "11+" converted to 11
We do not have control over the source data and will not be able to replace char data before passing it into the Data Conversion component.
We have tried the steps below as a workaround, and it works:
First Data Conversion from DT_STR to DT_NUMERIC
Capture error rows that fail the above conversion
Second Data Conversion from DT_NUMERIC to DT_DECIMAL
But as the source data is not reliable, we may have to apply this workaround wherever there are numeric fields (int types and decimals), which is not a friendly solution.
So I am checking with you all to see whether anyone has tried an easier, better solution.
I did not expect this result, but I tried an expression task and it worked for DT_DECIMAL:
(DT_DECIMAL,1)"11+" -- evaluates to 11.0
But it does not work for DT_NUMERIC. SSIS won't allow a direct DT_NUMERIC result in an expression, but it can be nested inside a cast to DT_DECIMAL. To demonstrate: in an expression task, even this "numerically valid" cast would not be permitted on its own, because the output simply can't be of type DT_NUMERIC:
(DT_NUMERIC, 3, 0)123
But this is permitted:
(DT_DECIMAL,0)((DT_NUMERIC, 3, 0)123)
So as long as you are happy to specify a precision and scale big enough to hold your data during the "validity" check done by DT_NUMERIC, and then cast from there to DT_DECIMAL, all in a derived column transform, DT_NUMERIC seems to enforce the strict semantics you want.
SSIS allows this:
(DT_DECIMAL,0)((DT_NUMERIC, 2, 0)"11")
But neither of these:
(DT_DECIMAL,0)((DT_NUMERIC, 2, 0)"11+")
(DT_DECIMAL,0)((DT_NUMERIC, 2, 0)"3,5")
@billinkc Sorry for not responding to you earlier.
We are working under some restrictions:
(1) All we want to do is capture datatype issues in the input data, so we wanted to harness the capability of the SSIS Data Conversion component.
(2) Our DBA doesn't want us to use SQL for type conversions, so we are required to do these conversions between the flat file source and the flat file destination using SSIS.
(3) We are required to capture the type conversion errors at every step of conversion into an error output file, with the error column name and error description, to be used later. So we cannot remove char data in the field before passing it to the Data Conversion component.
@allmhuran - We have used a Derived Column task before the Data Conversion component to replace unnecessary characters in one of the other fields, but using the same approach for type conversion makes achieving (3) difficult, because the error outputs from the Derived Column task and the Data Conversion component cannot be redirected to the same error output file.
We can completely ignore the Data Conversion component and use only the Derived Column task to do all type conversions, whether single or nested. I am trying this; the error descriptions do not always look good, but the cons of the former method can be overcome. I will try this out!

Count in Firebird 3.0 bigint vs Firebird 2.5 integer

On my system, after the migration from Firebird 2.5 to 3.0, many reports and other functions began to fail, reporting that an Integer was expected but a LargeInt was received. I took a look and saw that some queries using COUNT return a BIGINT column in 3.0, while in 2.5 they return an INTEGER column.
The fix I know is to apply a cast everywhere the error occurs - I tested that and it works - but it is a big system, so it would be a lot of work.
Does anyone know of any way to resolve this in Firebird itself? Some configuration, or something?
There is no configuration for it; the Firebird 3 release notes only say:
The COUNT() aggregator now returns its result as BIGINT instead of INTEGER.
You either need to explicitly apply a cast in your queries, or in your code, or see if your data access library can explicitly request an integer instead of just the dynamic type. For example, in the Java world, the JDBC API has an explicit getInt which will work for BIGINT as long as the value fits in a 32-bit integer.
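The same idea in .NET, as a minimal sketch (assuming the FirebirdSql.Data.FirebirdClient provider; the table name is made up):

using System;
using FirebirdSql.Data.FirebirdClient;

// Under Firebird 3, COUNT() arrives as a 64-bit value (a boxed long here).
// Convert.ToInt32 narrows it, throwing OverflowException only if the count
// really exceeds Int32.MaxValue.
using (var conn = new FbConnection(connectionString))
using (var cmd = new FbCommand("select count(*) from some_table", conn))
{
    conn.Open();
    int count = Convert.ToInt32(cmd.ExecuteScalar());
}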
I use UniDAC, and for the solution I used Map Rules (Data Type Mapping).
Use a TYPECAST in the SQL in any Delphi component and it will be OK. For example, write:
select
cast(count(*) as integer) BR
from ...
instead of
select
count(*) BR
from ...

How to use Dapper micro-ORM with Oracle to map NUMBER (OracleDecimal)

The ODP.NET provider raises an exception in IDataReader.GetValue()/GetValues() if the column type is NUMBER(x,y) such that it will overflow all .NET numeric types. So Dapper is unable to map such a column to a POCO property.
I have an Oracle stored procedure that uses a REF CURSOR output parameter to return 3-column records. Fundamentally all 3 are NUMBER(something), but the ODP.NET Oracle managed provider seems to decide what ODP.NET or .NET type to turn them into.
I've been having problems with Dapper's Query() mapping records from this sproc into POCOs. Perhaps it actually isn't my fault, for once - it seems when a column comes across as an ODP.NET type instead of a .NET type, Dapper fails. If I comment an offending column out of my POCO, everything works.
Here's a pair of rows to illustrate:
--------------------------------------------------------------------
RDWY_LINK_ID RLC_LINK_OSET SIGN
---------------------- ---------------------- ----------------------
1829 1.51639964279667746989761971196153763602 1
14380 578.483600357203322530102380288038462364 -1
The first column is seen in .NET as int, the second column as type OracleDecimal, and the third as decimal. The second one is the problem.
For example, removing Dapper for the moment, using vanilla ODP.NET to access these records demonstrates the problem:
int linkid = (int)reader.GetValue(0);
decimal linksign = (decimal)reader.GetValue(2);
//decimal dlinkoffset = (decimal)reader.GetValue(1); //**invalid cast exception at Oracle.ManagedDataAccess.Client.OracleDataReader.GetDecimal(Int32 i)**
//object olinkoffset = reader.GetValue(1); //**same**
//decimal dlinkoffset = reader.GetDecimal(1); //**same**
//object[] values = new object[reader.FieldCount];
//reader.GetValues(values); //**same**
OracleDecimal linkoffset = (OracleDecimal)reader.GetProviderSpecificValue(1); //this works!
double dblinkoffset = reader.GetDouble(1); //interesting, this works too!
//decimal dlinkoffset = linkoffset.Value; //overflow exception
dblinkoffset = linkoffset.ToDouble(); //voila
What little browsing and breakpointing I've done in Dapper's SqlMapper.cs file shows me that it is extracting data from the reader with GetValue()/GetValues(), as above, which fails.
Any suggestions how to patch Dapper up? Many thanks.
UPDATE:
Upon reflection, I RTFMed: Section 3, "Obtaining Data from an OracleDataReader Object" of the Oracle Data Provider for .NET Developer's Guide, which explains it. For NUMBER columns, ODP.NET's OracleDataReader tries a sequence of .NET types from Byte to Decimal to prevent overflow. But a NUMBER may still overflow Decimal, giving an invalid cast exception if you try any of the reader's .NET type accessors (GetValue()/GetValues()). In that case you have to use the reader's ODP.NET type accessor GetProviderSpecificValue(), which gives you an OracleDecimal. And if that overflows a Decimal, its Value property will give you an overflow exception, and your only recourse is to coerce it into a narrower type with one of OracleDecimal's ToXxx() methods.
But of course the ODP.NET type accessor is not part of the IDataReader interface used by Dapper to hold reader objects, so it seems that Dapper, by itself, is Oracle-incompatible when a column type will overflow all .NET types.
The question remains: does anyone know how to extend Dapper to handle this? It seems to me I'd need an extension point where I could control how the reader is used (forcing it to use GetDouble() instead of GetValue(), or casting to OracleDataReader and calling GetProviderSpecificValue()) for certain POCO property or column types.
To avoid this issue I used:
CAST(COLUMN AS BINARY_DOUBLE)
or
TO_BINARY_DOUBLE(COLUMN)
In the Oracle types listed here it's described as:
64-bit floating point number. This datatype requires 9 bytes, including the length byte.
Most of the other number types used by Oracle are 22 bytes max, so this is as good as it gets for .NET.
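For completeness, a minimal sketch of how the cast fits into a Dapper query (the POCO and table names are illustrative):

using Dapper;
using Oracle.ManagedDataAccess.Client;

public class LinkRecord
{
    public int RDWY_LINK_ID { get; set; }
    // BINARY_DOUBLE surfaces as a plain .NET double, so GetValue() no longer throws.
    public double RLC_LINK_OSET { get; set; }
    public decimal SIGN { get; set; }
}

// usage:
// using (var conn = new OracleConnection(connectionString))
// {
//     var links = conn.Query<LinkRecord>(
//         "SELECT RDWY_LINK_ID, CAST(RLC_LINK_OSET AS BINARY_DOUBLE) AS RLC_LINK_OSET, SIGN FROM RDWY_LINKS");
// }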

Convert Datatypes in SSIS, Possible?

I am currently having some difficulty converting a Unicode character string (DT_WSTR) into an INT.
This is what I have:
(Note: most captions are in German, but I think anyone who has worked with SSIS will understand anyway.)
Data Type selected: a four-byte, unsigned integer. But all it does is fail terribly.
Try the Derived Column component instead of Data Conversion.

What datatype should I bind as query parameter to use with NUMBER(15) column in Oracle ODBC?

I have just been bitten by issue described in SO question Binding int64 (SQL_BIGINT) as query parameter causes error during execution in Oracle 10g ODBC.
I'm porting a C/C++ application using ODBC 2 from SQL Server to Oracle. For numeric fields exceeding NUMBER(9) it uses the __int64 datatype, which is bound to queries as SQL_C_SBIGINT. Apparently such binding is not supported by Oracle ODBC. I must now do an application-wide conversion to another method. Since I don't have much time (it's an unexpected issue), I would rather use a proven solution, not trial and error.
What datatype should be used to bind e.g. NUMBER(15) in Oracle? Is there a documented, recommended solution? What are you using? Any suggestions?
I'm especially interested in solutions that do not require any additional conversions. I can easily provide and consume numbers in the form of __int64 or char* (normal non-exponential form without a thousands separator or decimal point). Any other format requires additional conversion on my part.
What I have tried so far:
SQL_C_CHAR
Looks like it's going to work for me. I was worried about the variability of number formats, but in my use case it doesn't seem to matter. Apparently only the decimal point character changes with system language settings.
And I don't see why I should use an explicit cast (e.g. TO_NUMBER) in the SQL INSERT or UPDATE command. Everything works fine when I bind the parameter with SQL_C_CHAR as the C type and SQL_NUMERIC (with proper precision and scale) as the SQL type. I couldn't reproduce any data corruption effect.
SQL_NUMERIC_STRUCT
I've noticed SQL_NUMERIC_STRUCT added with ODBC 3.0 and decided to give it a try. I am disappointed.
In my situation it is enough, as the application doesn't really use fractional numbers. But as a general solution... simply, I don't get it. I mean, I finally understood how it is supposed to be used. What I don't get is why anyone would introduce a new struct of this kind and then make it work this way.
SQL_NUMERIC_STRUCT has all the fields needed to represent any NUMERIC (or NUMBER, or DECIMAL) value with its precision and scale. Only they are not used.
When reading, ODBC sets the precision of the number (based on the precision of the column; except that Oracle returns a bigger precision, e.g. 20 for NUMBER(15)). But if your column has a fractional part (scale > 0), it is truncated by default. To read a number with the proper scale you need to set the precision and scale yourself with a SQLSetDescField call before fetching the data.
When writing, Oracle thankfully respects the scale contained in SQL_NUMERIC_STRUCT. But the ODBC spec doesn't mandate it, and MS SQL Server ignores this value. So, back to SQLSetDescField again.
See HOWTO: Retrieving Numeric Data with SQL_NUMERIC_STRUCT and INF: How to Use SQL_C_NUMERIC Data Type with Numeric Data for more information.
Why doesn't ODBC fully use its own SQL_NUMERIC_STRUCT? I don't know. It looks like it works, but I think it's just too much work.
I guess I'll use SQL_C_CHAR.
My personal preference is to make the bind variables character strings (VARCHAR2), and let Oracle do the conversion from character to its own internal storage format. It's easy enough (in C) to get data values represented as null-terminated strings, in an acceptable format.
So, instead of writing SQL like this:
SET MY_NUMBER_COL = :b1
, MY_DATE_COL = :b2
I write the SQL like this:
SET MY_NUMBER_COL = TO_NUMBER( :b1 )
, MY_DATE_COL = TO_DATE( :b2 , 'YYYY-MM-DD HH24:MI:SS')
and supply character strings as the bind variables.
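For illustration only, here is the same pattern from managed code via System.Data.Odbc (the table, columns, and parameter values are hypothetical; the original application is C/C++, where the equivalent is binding null-terminated SQL_C_CHAR buffers):

using System.Data.Odbc;

// ODBC uses positional '?' markers, so parameters are bound in order;
// the names passed to AddWithValue are ignored by the ODBC driver.
using (var conn = new OdbcConnection(connectionString))
using (var cmd = new OdbcCommand(
    "UPDATE MY_TABLE SET MY_NUMBER_COL = TO_NUMBER(?), " +
    "MY_DATE_COL = TO_DATE(?, 'YYYY-MM-DD HH24:MI:SS') WHERE ID = ?", conn))
{
    cmd.Parameters.AddWithValue("@number", "123456789012345");    // NUMBER(15) as text
    cmd.Parameters.AddWithValue("@date", "2024-01-31 13:45:00");  // date as text
    cmd.Parameters.AddWithValue("@id", "42");
    conn.Open();
    cmd.ExecuteNonQuery();
}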
There are a couple of advantages to this approach.
One is that it works around the issues and bugs one encounters when binding other data types.
Another advantage is that bind values are easier to decipher on an Oracle event 10046 trace.
Also, an EXPLAIN PLAN (I believe) expects all bind variables to be VARCHAR2, which means the statement being explained is slightly different from the actual statement being executed (due to the implicit data conversions when the datatypes of the bind arguments in the actual statement are not VARCHAR2).
And (less important) when I'm testing the statement in TOAD, it's easier just to be able to type strings into the input boxes, and not have to muck with changing the datatype in a dropdown list box.
I also let the built-in TO_NUMBER and TO_DATE functions validate the data. (In earlier versions of Oracle at least, I encountered issues with binding a DATE value directly; it bypassed at least some of the validity checking and allowed invalid date values to be stored in the database.)
This is just a personal preference, based on past experience. I use this same approach with Perl DBD.
I wonder what Tom Kyte (asktom.oracle.com) has to say about this topic?
