I'm writing a .NET web app on top of an existing DB app (SQL Server). The original developer stored money in float columns instead of money or decimal. I really want to use Decimal inside my POCOs (especially because I will be doing further operations on the values after I get them).
Unfortunately, I cannot touch the DB schema. Is there a way to still use Decimal in my POCOs and tell EF "It's ok, I know decimal and float don't get along. I don't like it either. Just do your best."
With no special configuration, I get this error:
The specified cast from a materialized 'System.Double' type to the 'System.Decimal' type is not valid.
I tried using modelBuilder.Entity(Of myPocoType).Property(Function(x) x.MoneyProperty).HasColumnType("float"), but that gets me this error:
Schema specified is not valid. Errors:
(195,6) : error 0063: Precision facet isn't allowed for properties of type float.
(195,6) : error 0063: Scale facet isn't allowed for properties of type float.
Something like this would work:
public class MyPocoType
{
    public float FloatProp { get; set; }

    // Not mapped by EF; reads and writes pass through the underlying float column.
    [NotMapped]
    public decimal DecimalProp
    {
        get { return (decimal)FloatProp; }
        set { FloatProp = (float)value; }
    }
}
EF will ignore the decimal one, but you can use it and it'll set the underlying float. You can add in your own logic for handling the loss of precision if there's anything special you want it to do, and you might need to catch the cases where the value is out of range of what it's being converted to (float has much larger range, but decimal has much greater precision).
It's not a perfect solution, but if you're stuck with floats then it's not going to get much better. A float is inherently a little fuzzy, so that's going to trip you up somewhere.
If you really want to get complex, you could look at keeping the decimal value internally and then using the various events that happen at save time (some logic in an overridden SaveChanges(), or catching the SavingChanges event perhaps) to convert to float only once, to cut down on the buildup of conversion errors.
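A minimal sketch of that idea, assuming EF6 and a variant of the POCO where DecimalProp is a plain [NotMapped] auto-property (so no per-set conversion) and FloatProp is the mapped column; the context and set names are placeholders:

using System.Data.Entity;
using System.Linq;

public class MyDbContext : DbContext
{
    public DbSet<MyPocoType> MyPocos { get; set; }

    public override int SaveChanges()
    {
        // Convert the internally-kept decimal to float exactly once, at save time,
        // instead of on every property set, to cut down on accumulated conversion error.
        foreach (var entry in ChangeTracker.Entries<MyPocoType>()
                                           .Where(e => e.State == EntityState.Added
                                                    || e.State == EntityState.Modified))
        {
            entry.Entity.FloatProp = (float)entry.Entity.DecimalProp;
        }
        return base.SaveChanges();
    }
}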
tl;dr:
How should I check the bounds on my SqlParameter values before attempting to put them in the database?
Longer version:
So, I have these dynamically generated SQL statements where I pass in a bunch of SqlParameters.
The way we declare SqlParameters is just
new SqlParameter("fieldName", value)
and we let the runtime figure out what the DbType is.
That said, occasionally the update / insert statement fails, and we'd like to determine which field is too big (since our server only tells us that a field update failed, not WHICH one) by doing bounds checking. That is, we can't put a two-digit number into a column that only allows one digit (a decimal(1,0), for example).
We have the schema of the columns in memory (information_schema.columns ftw), so we could just try to do bounds checking on the SqlParameter value, but since the value is an object and not even necessarily a numeric type, how should I check that a value is in the range?
Or am I making the problem too hard and instead should have supplied the precision and scale when constructing the SqlParameters to begin with? Or even better, should we be using types that reflect what the columns in the database are?
Update:
Setting the precision / scale doesn't seem to have any consequence as seen in this code:
decimal d = 10.0M * (decimal)Math.Pow(10, 6);
SqlParameter p = new SqlParameter("someparam", SqlDbType.Decimal);
p.Precision = (byte)1;
p.Scale = (byte)0;
p.Value = d;
Console.WriteLine(p.SqlValue); // doesn't throw an error; I would think the SqlValue would be at most 10, not 10 million
It seems that SqlParameter does not validate when the Value property is set. And DataColumn does not allow specifying either Precision or Scale, so it is not terribly useful here. However, there is a way:
Using the collection of schema info that you already have, dynamically create an array of SqlMetaData based on the size of the schema collection and populate it with the column name and size data:
SqlMetaData[] _TempColumns = new SqlMetaData[_SchemaCollection.Count];
for (int _Index = 0; _Index < _SchemaCollection.Count; _Index++)
{
    switch (_SchemaCollection[_Index].DataType)
    {
        case "decimal":
            _TempColumns[_Index] = new SqlMetaData(
                _SchemaCollection[_Index].Name,
                SqlDbType.Decimal,
                (byte)_SchemaCollection[_Index].Precision,
                (byte)_SchemaCollection[_Index].Scale
            );
            break;
        // case "varchar": etc.; handle the remaining datatypes the same way
    }
}
Create a new SqlDataRecord using the SqlMetaData[] from step 1:
SqlDataRecord _TempRow = new SqlDataRecord(_TempColumns);
Loop through _TempRow, calling the appropriate Set method for each position, in a try / catch:
string _DataAintRight = null;
for (int _Index = 0; _Index < _SchemaCollection.Count; _Index++)
{
    try
    {
        // "decimal" shown here; call the Set method matching each column's type
        _TempRow.SetDecimal(_Index, (decimal)_SchemaCollection[_Index].Value);
    }
    catch
    {
        _DataAintRight = _SchemaCollection[_Index].Name;
        break;
    }
}
NOTES:
This will only do the same validation that passing params to a proc would do. Meaning, it will silently truncate values that are too long, such as too many digits to the right of a decimal point, and a string that exceeds the max size.
Fixed-length numeric types should already be in their equivalent .Net types (i.e. SMALLINT value in an Int16 variable or property) and hence are already pre-verified. If this is indeed the case then there is no additional benefit from testing them. But if they currently reside in a more generic container (a larger Int type or even a string), then testing here is appropriate.
If you need to know whether a string will be truncated, that has to be tested separately, not via SqlMetaData; in the loop and switch, just test the length of the string for that case.
Regardless of any of this testing stuff, it is best not to create parameters by having .Net guess the type via new SqlParameter("fieldName", value) or even _Command.Parameters.AddWithValue(). So regarding the question of whether you "should have supplied the precision and scale when constructing the SqlParameters to begin with": absolutely yes.
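For example, a decimal parameter declared explicitly rather than inferred (the parameter name, precision, scale, and the _Command variable are placeholders; the real precision and scale would come from the information_schema.columns data you already have):

// Explicitly typed: nothing for the runtime to guess.
SqlParameter _Amount = new SqlParameter("@Amount", SqlDbType.Decimal)
{
    Precision = 10,  // placeholder; use the column's actual precision
    Scale = 2,       // placeholder; use the column's actual scale
    Value = 1234.56M
};
_Command.Parameters.Add(_Amount);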
Another option (which I can elaborate on tomorrow when I will have time to update this) is to validate everything as if there were no built-in containers that are supposed to be reflections of the real database/provider datatypes. So, there are two main considerations that will drive the implementation:
Is the source data currently strongly typed or is it all serialized as strings?
and
Is there a need to know if the value will be truncated, specifically in cases where the value would otherwise be silently truncated without causing an error, which could lead to unexpected behavior? The issue here is that inserting data into a table column that exceeds the specified max length will cause a "String or binary data would be truncated" error, but when passing data to a parameter (i.e. not directly to a table) the data is truncated without causing an error. Sometimes this is ok, but sometimes it leads to a situation where an input parameter has been specified incorrectly (or was correct, but then the field was expanded and the parameter was never updated to match the new length) and is chopping off the ends of some values, and this goes undetected until a customer reports that "something doesn't look quite right on a report, and oh, by the way, this has been happening off and on for maybe four or five months now, but I have been really busy and kept forgetting to mention it, maybe it's been nine months, I can't remember, but yeah, something's wrong". I mean, how often do we test our code by passing in the max value for each parameter to make sure that the system can handle it?
If the source data is in the appropriate .Net types:
There are several that do not need to be checked since fixed-length numeric types are the same between .Net and SQL Server. The ones that are pre-validated simply by existing in their respective .Net types are:
bool -> BIT
byte -> TINYINT
Int16 -> SMALLINT
Int32 -> INT
Int64 -> BIGINT
Double -> FLOAT
Single -> REAL
Guid -> UNIQUEIDENTIFIER
There are some that need to be checked only for truncation (if that is a concern), as their values should always be in the same range as their SQL Server counterparts. Keep in mind that here we are talking about strict truncation when the values are passed to parameters of a smaller scale; values are actually rounded (at 5 and up) when inserted directly into a column having a smaller scale. For example, sending in a DateTime value that is accurate to 5 decimal places will have the 3 right-most digits truncated when passed to a parameter defined as DATETIME2(2).
DateTimeOffset -> DATETIMEOFFSET(0 - 7)
DateTime -> DATETIME2(0 - 7)
DateTime -> DATE : 0001-01-01 through 9999-12-31 (no time)
TimeSpan -> TIME(0 - 7) : 00:00:00.0000000 through 23:59:59.9999999
There are some that need to be checked to make sure that they are not outside the valid range for the SQL Server datatype, as an out-of-range value would cause an exception. They possibly also need to be checked for truncation (if that is a concern). Keep in mind that here we are talking about strict truncation when the values are passed to parameters of a smaller scale; values are actually rounded (at 5 and up) when inserted directly into a column having a smaller scale. For example, sending in a DateTime value will lose all seconds and fractional seconds when passed to a parameter defined as SMALLDATETIME.
DateTime -> DATETIME : 1753-01-01 through 9999-12-31, 00:00:00.000 through 23:59:59.997
DateTime -> SMALLDATETIME : 1900-01-01 through 2079-06-06, 00:00 through 23:59 (no seconds)
Decimal -> MONEY : -922,337,203,685,477.5808 to 922,337,203,685,477.5807
Decimal -> SMALLMONEY : -214,748.3648 to 214,748.3647
Decimal -> DECIMAL : range = -(a number made of (Precision - Scale) nines) to +(the same), i.e. the integer part can have at most (Precision - Scale) digits; truncation depends on the defined Scale
The following string types will silently truncate when passed to a parameter whose max length is less than the length of their value, but will error with "String or binary data would be truncated" if directly inserted into a column whose max length is less than the length of their value:
byte[] -> BINARY
byte[] -> VARBINARY
string -> CHAR
string -> VARCHAR
string -> NCHAR
string -> NVARCHAR
The following is tricky, as true validation requires knowing more about the options the field was created with in the database.
string -> XML -- By default an XML field is untyped and is hence very lenient regarding "proper" XML syntax. However, that behavior can be altered by associating an XML Schema Collection (1 or more XSDs) with the field for validation (see also: Compare Typed XML to Untyped XML). So true validation of an XML field would include getting that info, if it exists, and if so, checking against those XSDs. At the very least it should be well-formed XML (i.e. '<b>' will fail, but '<b />' will succeed).
For the above types, the pre-validated types can be ignored. The rest can be tested in a switch(DestinationDataType) structure:
Types that need to be validated for ranges can be done as follows
case "smalldatetime":
if ((_Value < range_min) || (_Value > range_max))
{
_ThisValueSucks = true;
}
break;
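As a concrete instance of that pattern, here is the SMALLDATETIME case using the range listed above (a sketch; it assumes the value has already been converted to a DateTime named _DateTimeValue):

case "smalldatetime":
{
    // SMALLDATETIME range from the list above: 1900-01-01 through 2079-06-06 (minute precision).
    DateTime _Min = new DateTime(1900, 1, 1);
    DateTime _Max = new DateTime(2079, 6, 6, 23, 59, 0);
    if ((_DateTimeValue < _Min) || (_DateTimeValue > _Max))
    {
        _ThisValueSucks = true;
    }
    break;
}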
Numeric/DateTime truncation, if being tested for, might best be done by calling ToString() and using IndexOf(".") for most types, or IndexOf(":00.0") for DATE and SMALLDATETIME, to find the number of digits to the right of the decimal point (or starting at the "seconds" portion for SMALLDATETIME).
String truncation, if being tested for, is a simple matter of testing the length.
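A rough sketch of those two truncation checks (the Scale and MaxLength members on the schema-info object, and the _StringValue variable, are assumptions that follow the naming used above):

// Numeric/DateTime truncation check: count digits right of the decimal point
// and compare against the column's scale.
string _Text = _Value.ToString();
int _DotAt = _Text.IndexOf(".");
int _DigitsRight = (_DotAt < 0) ? 0 : _Text.Length - _DotAt - 1;
bool _NumericWillTruncate = _DigitsRight > _SchemaCollection[_Index].Scale;

// String truncation check: just compare lengths.
bool _StringWillTruncate = _StringValue.Length > _SchemaCollection[_Index].MaxLength;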
Decimal range can be tested either numerically:
if ((Math.Floor(_Value) < -999) || (Math.Floor(_Value) > 999))
or:
if (Math.Abs(Math.Floor(_Value)).ToString().Length <= DataTypeMaxSize)
Xml:
as XmlDocument: pre-validated, outside of any potential XSD validation associated with the XML field
as String: could first be used to create an XmlDocument, which leaves only any potential XSD validation associated with the XML field
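A minimal well-formedness check along those lines (a sketch; _XmlString and _ThisValueSucks follow the naming used above, and any XSD validation would still be separate):

try
{
    // Throws XmlException if the string is not well-formed XML,
    // e.g. "<b>" fails but "<b />" succeeds.
    new System.Xml.XmlDocument().LoadXml(_XmlString);
}
catch (System.Xml.XmlException)
{
    _ThisValueSucks = true;
}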
If the source data is all string:
Then they all need to be validated. For these you would first use the TryParse methods associated with each type, then apply the rules noted above for each type.
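For example, a couple of cases in that same kind of switch, assuming every value arrives as a string (the variable names are placeholders consistent with the snippets above):

case "decimal":
    decimal _DecimalValue;
    if (!decimal.TryParse(_StringValue, out _DecimalValue))
    {
        _ThisValueSucks = true;   // not even a number
    }
    // else: apply the DECIMAL / MONEY range and scale rules above to _DecimalValue
    break;

case "smalldatetime":
    DateTime _DateValue;
    if (!DateTime.TryParse(_StringValue, out _DateValue))
    {
        _ThisValueSucks = true;   // not a recognizable date/time
    }
    // else: apply the 1900-01-01 through 2079-06-06 range check above to _DateValue
    break;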
The ODP.NET provider raises an exception in IDataReader.GetValue()/GetValues() if the column type is NUMBER(x,y) such that it will overflow all .NET numeric types. So Dapper is unable to map such a column to a POCO property.
I have an Oracle stored procedure that uses a REF CURSOR output parameter to return 3-column records. Fundamentally all 3 are NUMBER(something), but the ODP.NET Oracle managed provider seems to decide what ODP.NET or .NET type to turn them into.
I've been having problems with Dapper's Query() mapping records from this sproc into POCOs. Perhaps it actually isn't my fault, for once - it seems when a column comes across as an ODP.NET type instead of a .NET type, Dapper fails. If I comment an offending column out of my POCO, everything works.
Here's a pair of rows to illustrate:
--------------------------------------------------------------------
RDWY_LINK_ID RLC_LINK_OSET SIGN
---------------------- ---------------------- ----------------------
1829 1.51639964279667746989761971196153763602 1
14380 578.483600357203322530102380288038462364 -1
The first column is seen in .NET as int, the second column as type OracleDecimal, and the third as decimal. The second one is the problem.
For example, removing Dapper for the moment and using vanilla ODP.NET to access these records as follows illustrates the problem:
int linkid = (int)reader.GetValue(0);
decimal linksign = (decimal)reader.GetValue(2);
//decimal dlinkoffset = (decimal)reader.GetValue(1); //**invalid cast exception at Oracle.ManagedDataAccess.Client.OracleDataReader.GetDecimal(Int32 i)**
//object olinkoffset = reader.GetValue(1); //**same**
//decimal dlinkoffset = reader.GetDecimal(1); //**same**
//object[] values = new object[reader.FieldCount];
//reader.GetValues(values); //**same**
OracleDecimal linkoffset = (OracleDecimal)reader.GetProviderSpecificValue(1); //this works!
double dblinkoffset = reader.GetDouble(1); //interesting, this works too!
//decimal dlinkoffset = linkoffset.Value; //overflow exception
dblinkoffset = linkoffset.ToDouble(); //voila
What little browsing and breakpointing I've done in Dapper's SqlMapper.cs file shows me that it is extracting data from the reader with GetValue()/GetValues(), as above, which fails.
Any suggestions how to patch Dapper up? Many thanks.
UPDATE:
Upon reflection, I RTFMed Section 3, "Obtaining Data from an OracleDataReader Object", of the Oracle Data Provider for .NET Developer's Guide, which explains it. For NUMBER columns, ODP.NET's OracleDataReader will try a sequence of .NET types from Byte to Decimal to prevent overflow. But a NUMBER may still overflow Decimal, giving an invalid cast exception if you try any of the reader's .NET type accessors (GetValue()/GetValues()). In that case you have to use the reader's ODP.NET type accessor, GetProviderSpecificValue(), which gives you an OracleDecimal; and if that overflows a Decimal, its Value property will throw an overflow exception, and your only recourse is to coerce it into a lesser type with one of OracleDecimal's ToXxx() methods.
But of course the ODP.NET type accessor is not part of the IDataReader interface used by Dapper to hold reader objects, so it seems that Dapper, by itself, is Oracle-incompatible when a column type will overflow all .NET types.
The question remains: do the smart folk know how to extend Dapper to handle this? It seems to me I'd need an extension point where I could provide the implementation of how to use the reader (forcing it to use GetDouble() instead of GetValue(), or casting to OracleDataReader and calling GetProviderSpecificValue()) for certain POCO property or column types.
To avoid this issue I used:
CAST(COLUMN AS BINARY_DOUBLE)
or
TO_BINARY_DOUBLE(COLUMN)
In the Oracle types listed here it's described as:
64-bit floating point number. This datatype requires 9 bytes, including the length byte.
Most of the other number types used by Oracle are 22 bytes max, so this is as good as it gets for .NET.
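For example, with Dapper the cast can go into the SQL that feeds the mapping (the table name here is made up, and this assumes you can edit the query or the ref cursor's SELECT); the offset column then comes back as a plain double:

public class LinkRow
{
    public int RdwyLinkId { get; set; }
    public double RlcLinkOset { get; set; }   // BINARY_DOUBLE maps cleanly to System.Double
    public decimal Sign { get; set; }
}

// "SOME_TABLE" is a stand-in for whatever the ref cursor actually selects from.
var rows = connection.Query<LinkRow>(@"
    SELECT RDWY_LINK_ID                         AS RdwyLinkId,
           CAST(RLC_LINK_OSET AS BINARY_DOUBLE) AS RlcLinkOset,
           SIGN                                 AS Sign
      FROM SOME_TABLE");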
I have a column in my database that is a float. My database is in brazilian portuguese, so, the decimal separator from this column is comma (,).
I don't know if this is the cause, but Dapper is throwing the exception "Invalid cast from 'System.Double' to 'System.Nullable..." (my entity uses a Nullable for this column).
Can you help me?
This has nothing to do with culture: the data that comes back is primitive, not stringified. Simply, it isn't happy casting from double to decimal?. Since the database is returning double, a double? property would work fine. The core tries to allow as many conversions as are pragmatic, but it doesn't support all mappings.
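In other words, something shaped like this (the class and property names are placeholders) maps without the cast error:

public class MyRow
{
    // The provider materializes the column as System.Double,
    // so a nullable double matches it directly; no decimal? conversion is attempted.
    public double? MyValue { get; set; }
}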
When I browse the cube and pivot Sales by Month (for example), I get something like 12345.678901.
Is there a way to make it so that when a user browses, they get values rounded to the nearest two decimal places, i.e. 12345.68, instead?
Thanks,
-teddy
You can enter a format string in the properties for your measure or calculation and if your OLAP client supports it then the formatting will be used. e.g. for 1 decimal place you'd use something like "#,0.0;(#,0.0)". Excel supports format strings by default and you can configure Reporting Services to use them.
Also if you're dealing with money you should configure the measure to use the Currency data type. By default Analysis Services will use Double if the source data type in the database is Money. This can introduce rounding issues and is not as efficient as using Currency. See this article for more info: The many benefits of money data type. One side benefit of using Currency is you will never see more than 4 decimal places.
Either edit the display properties in the cube itself, so it always returns 2 decimal places whenever anyone browses the cube.
Or you can add in a format string when running MDX:
WITH MEMBER [Measures].[NewMeasure] AS '[Measures].[OldMeasure]', FORMAT_STRING='##0.00'
You can change the Format String property of your measure. There are two possible ways:
If the measure is a direct measure:
Go to the measure's properties and update 'Format String'.
If the measure is a calculated measure:
Go to Calculations and update 'Format String'.
I have just been bitten by issue described in SO question Binding int64 (SQL_BIGINT) as query parameter causes error during execution in Oracle 10g ODBC.
I'm porting a C/C++ application using ODBC 2 from SQL Server to Oracle. For numeric fields exceeding NUMBER(9) it uses the __int64 datatype, which is bound to queries as SQL_C_SBIGINT. Apparently such a binding is not supported by Oracle ODBC. I must now do an application-wide conversion to another method. Since I don't have much time (it's an unexpected issue), I would rather use a proven solution, not trial and error.
What datatype should be used to bind to, e.g., NUMBER(15) in Oracle? Is there a documented, recommended solution? What are you using? Any suggestions?
I'm especially interested in solutions that do not require any additional conversions. I can easily provide and consume numbers in the form of __int64 or char* (normal non-exponential form without a thousands separator or decimal point). Any other format requires additional conversion on my part.
What I have tried so far:
SQL_C_CHAR
Looks like it's going to work for me. I was worried about the variability of the number format, but in my use case it doesn't seem to matter; apparently only the decimal separator character changes with system language settings.
And I don't see why I should use an explicit cast (e.g. TO_NUMBER) in the SQL INSERT or UPDATE command. Everything works fine when I bind the parameter with SQL_C_CHAR as the C type and SQL_NUMERIC (with proper precision and scale) as the SQL type. I couldn't reproduce any data corruption.
SQL_NUMERIC_STRUCT
I noticed that SQL_NUMERIC_STRUCT was added with ODBC 3.0 and decided to give it a try. I am disappointed.
In my situation it is enough, as the application doesn't really use fractional numbers. But as a general solution... simply, I don't get it. I mean, I finally understood how it is supposed to be used. What I don't get is why anyone would introduce a new struct of this kind and then make it work this way.
SQL_NUMERIC_STRUCT has all the fields needed to represent any NUMERIC (or NUMBER, or DECIMAL) value with its precision and scale. Only, they are not used.
When reading, ODBC sets the precision of the number (based on the precision of the column; except that Oracle returns a bigger precision, e.g. 20 for NUMBER(15)). But if your column has a fractional part (scale > 0), it is truncated by default. To read a number with the proper scale you need to set the precision and scale yourself with a SQLSetDescField call before fetching the data.
When writing, Oracle thankfully respects the scale contained in the SQL_NUMERIC_STRUCT. But the ODBC spec doesn't mandate it, and MS SQL Server ignores this value. So, back to SQLSetDescField again.
See HOWTO: Retrieving Numeric Data with SQL_NUMERIC_STRUCT and INF: How to Use SQL_C_NUMERIC Data Type with Numeric Data for more information.
Why doesn't ODBC make full use of its own SQL_NUMERIC_STRUCT? I don't know. It looks like it works, but I think it's just too much work.
I guess I'll use SQL_C_CHAR.
My personal preference is to make the bind variables character strings (VARCHAR2) and let Oracle do the conversion from character to its own internal storage format. It's easy enough (in C) to get data values represented as null-terminated strings in an acceptable format.
So, instead of writing SQL like this:
SET MY_NUMBER_COL = :b1
, MY_DATE_COL = :b2
I write the SQL like this:
SET MY_NUMBER_COL = TO_NUMBER( :b1 )
, MY_DATE_COL = TO_DATE( :b2 , 'YYYY-MM-DD HH24:MI:SS')
and supply character strings as the bind variables.
There are a couple of advantages to this approach.
One is that it works around the issues and bugs one encounters when binding other data types.
Another advantage is that bind values are easier to decipher on an Oracle event 10046 trace.
Also, an EXPLAIN PLAN (I believe) expects all bind variables to be VARCHAR2, which means the statement being explained is slightly different from the actual statement being executed (due to the implicit data conversions when the datatypes of the bind arguments in the actual statement are not VARCHAR2).
And (less important) when I'm testing the statement in TOAD, it's easier to just type strings into the input boxes and not have to muck with changing the datatype in a dropdown list box.
I also let the built-in TO_NUMBER and TO_DATE functions validate the data. (In earlier versions of Oracle at least, I encountered issues with binding a DATE value directly; it bypassed at least some of the validity checking and allowed invalid date values to be stored in the database.)
This is just a personal preference, based on past experience. I use this same approach with Perl DBD.
I wonder what Tom Kyte (asktom.oracle.com) has to say about this topic?