Pervasive VAccess control vs double data type

Hope someone can help! We're encountering an issue with the Pervasive VAccess control whereby any time we save an item of type 'double' into a Pervasive database, the value saved is different from the one we want...
As an example we try to save 1.44 and it actually saves 1.44004035454
A slight difference but a difference nonetheless!
FYI the field defined in the DDF has decimal set to 0; I'm wondering if one course of action is to set this to e.g. 4? But I thought I'd see if anyone can shed any light on it before we head down that path...

The underlying effect is nothing to do with Pervasive; it's a simple floating-point issue. You'll find the same in any system that uses single- or double-precision floating point, though some systems do automatic rounding to hide it from you.
See http://en.wikipedia.org/wiki/Floating_point
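You can see exactly what value your double really stores in any language with an arbitrary-precision decimal type; for example, in Java (a minimal sketch, nothing here is Pervasive-specific):
import java.math.BigDecimal;

public class ExactDouble {
    public static void main(String[] args) {
        // new BigDecimal(double) expands the binary value exactly, with no rounding
        System.out.println(new BigDecimal(1.44));
        // prints 1.43999999999999995... (the exact 53-bit value behind the 1.44 you typed)
    }
}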
In the case of PostgreSQL and its derivatives, you can set extra_float_digits to control this rounding.
regress=> SET extra_float_digits = 3;
SET
regress=> SELECT FLOAT8 '1.44';
float8
---------------------
1.43999999999999995
(1 row)
regress=> SET extra_float_digits = 0;
SET
regress=> SELECT FLOAT8 '1.44';
float8
--------
1.44
(1 row)
It defaults to 0, but your client driver might be changing it. If you're using JDBC (which I'm guessing you are), don't mess with this setting: the JDBC driver expects it to remain as the driver set it, and will get upset with you if you change it.
In general, if you want a human-readable formatted number you should be doing the rounding with round or to_char, or doing it client-side, instead. Note that there's no round(double precision, integer) function, for reasons explained in answers to this question. So you'll probably want to_char, e.g.:
regress=> SELECT to_char(FLOAT8 '1.44', 'MI999999999D99');
to_char
---------------
1.44
(1 row)
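And for the client-side option, the rounding is a one-liner in most languages; for example, in Java (a sketch using BigDecimal):
import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundForDisplay {
    public static void main(String[] args) {
        double v = 1.43999999999999995;  // what the server really hands back
        // round to 2 decimal places for display; keep the raw double for arithmetic
        BigDecimal display = BigDecimal.valueOf(v).setScale(2, RoundingMode.HALF_UP);
        System.out.println(display);     // 1.44
    }
}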
(I wish PostgreSQL exposed a version of the cast from float8 to text that let you specify extra_float_digits on a per-call basis. That's often closer to what people really want. Guess I should add that if I get the time...)

Related

How is the format of the geography data type in SQL Server built?

I can't understand how the geography data type in SQL Server is built...
For example I have the following data:
0xE6100000010CCEAACFD556484340B2F336363BCA21C0
What I know:
0x is the prefix for hexadecimal
the last 16 hex digits are the longitude: B2F336363BCA21C0 (a hex-encoded double)
the 16 hex digits before those are the latitude: CEAACFD556484340 (a hex-encoded double)
the first 4 hex digits are the SRID: E610 (hexadecimal for WGS84)
What I don't understand:
hex digits 5 to 12: 0000010C
What are they?
From what I read this seems linked to WKB (Well-Known Binary) or EWKB (Extended Well-Known Binary); anyway, I was not able to find a definition for EWKB...
And for WKB this is supposed to be the geometry type (a 4-byte integer), but the value doesn't match the geometry type codes (this example is for one point coordinate).
Can you help me understand this format?
The spatial types (geometry and geography) in SQL Server are implemented as CLR data types. As with any such data type, you get a binary representation when you query the value directly. Unfortunately, it's not (as far as I know) WKB but rather whatever format Microsoft decided was best for their implementation. We (the users) should work with the published interface of methods (for instance the geography method reference). Which is to say, you should only try to decipher the MS binary representation out of curiosity, not for actually working with it.
That said, if you need/want to work with WKB, you can! For example, you can use the STGeomFromWKB() static method to create a geography instance from WKB that you provide and STAsBinary() can be called on a geography instance to return WKB to you.
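For example, you can ask SQL Server itself for standard WKB rather than trying to decode the internal format. A minimal JDBC sketch (the connection string, credentials, and sample point are placeholders):
import java.sql.*;

public class GeographyWkb {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:sqlserver://localhost;databaseName=test;encrypt=false"; // placeholder
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement st = con.createStatement();
             // STAsBinary() converts the internal CLR serialization to standard OGC WKB
             ResultSet rs = st.executeQuery(
                 "SELECT geography::STGeomFromText('POINT(-8.8944 38.5666)', 4326).STAsBinary()")) {
            rs.next();
            byte[] wkb = rs.getBytes(1); // plain WKB; note the SRID is not part of WKB
            StringBuilder hex = new StringBuilder("0x");
            for (byte b : wkb) hex.append(String.format("%02X", b & 0xFF));
            System.out.println(hex);
        }
    }
}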
The Format spec can be found here:
https://msdn.microsoft.com/en-us/library/ee320529(v=sql.105).aspx
As that page shows, the format used to change very frequently, but it has slowed down significantly over the past two years.
I currently need to dig into the spec to serialize from JVM code into a bcp file so that I can use SQLServerBulkCopy rather than plain JDBC to upload data into tables (it is about 7x faster to write a bcp file than to use JDBC), but this is proving to be more complicated than I originally anticipated.
After testing with bcp: you can upload geographies by specifying an off-row format (varchar(max)) and storing the well-known text; SQL Server will see this and assume you want a geography based on the WKT it sees.
In my case converting to nvarchar resolved the issue.

Count in Firebird 3.0 bigint vs Firebird 2.5 integer

On my system, after migrating from Firebird 2.5 to 3.0, many reports and other functions began to fail, complaining that Integer was expected but LargeInt was received. I took a look and saw that some queries using count return a BIGINT column in 3.0, while in 2.5 they returned an INTEGER column.
The way I know to solve it, I would have to add a cast everywhere the error occurs; I tested that and it worked, but it is a big system, so it would be a lot of work.
Does anyone know of any way to resolve this in Firebird itself? Some configuration, or something?
There is no configuration for it; the Firebird 3 release notes only say:
The COUNT() aggregator now returns its result as BIGINT instead of INTEGER.
You either need to explicitly apply a cast in your queries, or in your code, or see if your data access library can explicitly request integer instead of just the dynamic type. For example in the Java world the JDBC API has an explicit getInt which will work for BIGINT as long as the value fits in a 32 bit integer.
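For example, with Jaybird (the Firebird JDBC driver), code like this keeps working unchanged (a sketch; URL and credentials are placeholders):
import java.sql.*;

public class CountAsInt {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection(
                 "jdbc:firebirdsql://localhost/employee", "SYSDBA", "masterkey");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("select count(*) from employee")) {
            rs.next();
            // the column is BIGINT in Firebird 3, but getInt() still coerces it
            // as long as the value fits in a 32-bit integer
            int count = rs.getInt(1);
            System.out.println(count);
        }
    }
}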
I use Unidac, and for the solution I used Map Rules (Data Type Mapping).
Use a typecast in the SQL of any Delphi component and it will be OK. For example, write:
select
cast(count(*) as integer) BR
from ...
instead of
select
count(*) BR
from ...

How to use Dapper micro-ORM with Oracle to map NUMBER (OracleDecimal)

The ODP.NET provider raises an exception in IDataReader.GetValue()/GetValues() if the column type is NUMBER(x,y) such that it will overflow all .NET numeric types. So Dapper is unable to map such a column to a POCO property.
I have an Oracle stored procedure that uses a REF CURSOR output parameter to return 3-column records. Fundamentally all 3 are NUMBER(something), but the ODP.NET Oracle managed provider seems to decide what ODP.NET or .NET type to turn them into.
I've been having problems with Dapper's Query() mapping records from this sproc into POCOs. Perhaps it actually isn't my fault, for once - it seems when a column comes across as an ODP.NET type instead of a .NET type, Dapper fails. If I comment an offending column out of my POCO, everything works.
Here's a pair of rows to illustrate:
RDWY_LINK_ID  RLC_LINK_OSET                             SIGN
------------  ----------------------------------------  ----
        1829  1.51639964279667746989761971196153763602     1
       14380  578.483600357203322530102380288038462364    -1
The first column is seen in .NET as int, the second column as type OracleDecimal, and the third as decimal. The second one is the problem.
For example, removing Dapper for the moment and using vanilla ODP.NET to access these records thusly indicates the problem:
int linkid = (int)reader.GetValue(0);
decimal linksign = (decimal)reader.GetValue(2);
//decimal dlinkoffset = (decimal)reader.GetValue(1); //**invalid cast exception at Oracle.ManagedDataAccess.Client.OracleDataReader.GetDecimal(Int32 i)**
//object olinkoffset = reader.GetValue(1); //**same**
//decimal dlinkoffset = reader.GetDecimal(1); //**same**
//object[] values = new object[reader.FieldCount];
//reader.GetValues(values); //**same**
OracleDecimal linkoffset = (OracleDecimal)reader.GetProviderSpecificValue(1); //this works!
double dblinkoffset = reader.GetDouble(1); //interesting, this works too!
//decimal dlinkoffset = linkoffset.Value; //overflow exception
dblinkoffset = linkoffset.ToDouble(); //voila
What little browsing and breakpointing I've done in Dapper's SqlMapper.cs file shows me that it is extracting data from the reader with GetValue()/GetValues(), as above, which fails.
Any suggestions how to patch Dapper up? Many thanks.
UPDATE:
Upon reflection, I RTFMed: Section 3, "Obtaining Data from an OracleDataReader Object" of the Oracle Data Provider for .NET Developer's Guide explains it. For NUMBER columns, ODP.NET's OracleDataReader will try a sequence of .NET types from Byte to Decimal to prevent overflow. But a NUMBER may still overflow Decimal, giving an invalid cast exception if you try any of the reader's .NET type accessors (GetValue()/GetValues()). In that case you have to use the reader's ODP.NET type accessor GetProviderSpecificValue(), which gives you an OracleDecimal. And if that overflows a Decimal too, its Value property will throw an overflow exception, and your only recourse is to coerce it into a smaller type with one of OracleDecimal's ToXxx() methods.
But of course the ODP.NET type accessor is not part of the IDataReader interface used by Dapper to hold reader objects, so it seems that Dapper, by itself, is Oracle-incompatible when a column type will overflow all .NET types.
The question remains: do the smart folks know how to extend Dapper to handle this? It seems to me I'd need an extension point where I could provide an implementation of how to use the reader (forcing it to use GetDouble() instead of GetValue(), or casting to OracleDataReader and calling GetProviderSpecificValue()) for certain POCO property or column types.
To avoid this issue I used:
CAST(COLUMN AS BINARY_DOUBLE)
or
TO_BINARY_DOUBLE(COLUMN)
In the Oracle types listed here it's described as:
64-bit floating point number. This datatype requires 9 bytes, including the length byte.
Most of the other number types used by Oracle are 22 bytes max, so this is as good as it gets for .NET.

Any way to control number of decimal places while browsing SSAS cube?

When I browse the cube and pivot Sales by Month, for example, I get something like 12345.678901.
Is there a way to make it so that when a user browses they get values rounded to two decimal places, i.e. 12345.68, instead?
Thanks,
-teddy
You can enter a format string in the properties for your measure or calculation and if your OLAP client supports it then the formatting will be used. e.g. for 1 decimal place you'd use something like "#,0.0;(#,0.0)". Excel supports format strings by default and you can configure Reporting Services to use them.
Also if you're dealing with money you should configure the measure to use the Currency data type. By default Analysis Services will use Double if the source data type in the database is Money. This can introduce rounding issues and is not as efficient as using Currency. See this article for more info: The many benefits of money data type. One side benefit of using Currency is you will never see more than 4 decimal places.
Either edit the display properties in the cube itself, so it always returns 2 decimal places whenever anyone browses the cube.
Or you can add in a format string when running MDX:
WITH MEMBER [Measures].[NewMeasure] AS '[Measures].[OldMeasure]', FORMAT_STRING='##0.00'
You can change the Format String property of your measure. There are two possible ways:
If the measure is a regular measure -
Go to the measure's properties and update 'Format String'
If the measure is a calculated measure -
Go to Calculations and update 'Format String'

What datatype should I bind as query parameter to use with NUMBER(15) column in Oracle ODBC?

I have just been bitten by the issue described in the SO question Binding int64 (SQL_BIGINT) as query parameter causes error during execution in Oracle 10g ODBC.
I'm porting a C/C++ application using ODBC 2 from SQL Server to Oracle. For numeric fields exceeding NUMBER(9) it uses the __int64 datatype, which is bound to queries as SQL_C_SBIGINT. Apparently such binding is not supported by Oracle ODBC. I must now do an application-wide conversion to another method. Since I don't have much time (it's an unexpected issue), I would rather use a proven solution, not trial and error.
What datatype should be used to bind as e.g. NUMBER(15) in Oracle? Is there documented recommended solution? What are you using? Any suggestions?
I'm especially interested in solutions that do not require any additional conversions. I can easily provide and consume numbers in form of __int64 or char* (normal non-exponential form without thousands separator or decimal point). Any other format requires additional conversion on my part.
What I have tried so far:
SQL_C_CHAR
Looks like it's going to work for me. I was worried about the variability of number formats, but in my use case it doesn't seem to matter. Apparently only the decimal point character changes with system language settings.
And I don't see why I should use an explicit cast (e.g. TO_NUMBER) in the SQL INSERT or UPDATE command. Everything works fine when I bind the parameter with SQL_C_CHAR as the C type and SQL_NUMERIC (with proper precision and scale) as the SQL type. I couldn't reproduce any data corruption.
SQL_NUMERIC_STRUCT
I've noticed that SQL_NUMERIC_STRUCT was added with ODBC 3.0 and decided to give it a try. I am disappointed.
In my situation it is enough, as the application doesn't really use fractional numbers. But as a general solution... simply, I don't get it. I mean, I finally understood how it is supposed to be used. What I don't get is why anyone would introduce a new struct of this kind and then make it work this way.
SQL_NUMERIC_STRUCT has all the fields needed to represent any NUMERIC (or NUMBER, or DECIMAL) value with its precision and scale. Only they are not used.
When reading, ODBC sets the precision of the number (based on the precision of the column; except that Oracle returns a bigger precision, e.g. 20 for NUMBER(15)). But if your column has a fractional part (scale > 0), it is truncated by default. To read the number with the proper scale you need to set the precision and scale yourself with a SQLSetDescField call before fetching the data.
When writing, Oracle thankfully respects the scale contained in SQL_NUMERIC_STRUCT. But the ODBC spec doesn't mandate it, and MS SQL Server ignores this value. So, back to SQLSetDescField again.
See HOWTO: Retrieving Numeric Data with SQL_NUMERIC_STRUCT and INF: How to Use SQL_C_NUMERIC Data Type with Numeric Data for more information.
Why doesn't ODBC fully use its own SQL_NUMERIC_STRUCT? I don't know. It looks like it works, but I think it's just too much work.
I guess I'll use SQL_C_CHAR.
My personal preference is to make the bind variables character strings (VARCHAR2), and let Oracle do the conversion from character to its own internal storage format. It's easy enough (in C) to get data values represented as null-terminated strings, in an acceptable format.
So, instead of writing SQL like this:
SET MY_NUMBER_COL = :b1
, MY_DATE_COL = :b2
I write the SQL like this:
SET MY_NUMBER_COL = TO_NUMBER( :b1 )
, MY_DATE_COL = TO_DATE( :b2 , 'YYYY-MM-DD HH24:MI:SS')
and supply character strings as the bind variables.
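For illustration, the same pattern in JDBC (a hypothetical sketch; table and column names are made up):
import java.sql.*;

public class BindAsStrings {
    static void update(Connection con) throws SQLException {
        // bind everything as text; TO_NUMBER/TO_DATE convert and validate server-side
        String sql = "UPDATE my_table SET my_number_col = TO_NUMBER(?), "
                   + "my_date_col = TO_DATE(?, 'YYYY-MM-DD HH24:MI:SS') WHERE my_id = ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, "123456789012345");   // a NUMBER(15) value as plain text
            ps.setString(2, "2015-06-01 17:05:00");
            ps.setString(3, "42");                // every bind is a string
            ps.executeUpdate();
        }
    }
}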
There are a couple of advantages to this approach.
One is that it works around the issues and bugs one encounters when binding other data types.
Another advantage is that bind values are easier to decipher on an Oracle event 10046 trace.
Also, an EXPLAIN PLAN (I believe) expects all bind variables to be VARCHAR2, which means the statement being explained is slightly different from the actual statement being executed (due to the implicit data conversions when the datatypes of the bind arguments in the actual statement are not VARCHAR2).
And (less important) when I'm testing the statement in TOAD, it's easier just to be able to type strings into the input boxes, and not have to muck with changing the datatype in a dropdown list box.
I also let the builtin TO_NUMBER and TO_DATE functions validate the data. (In earlier versions of Oracle at least, I encountered issues with binding a DATE value directly: it bypassed at least some of the validity checking, and allowed invalid date values to be stored in the database.)
This is just a personal preference, based on past experience. I use this same approach with Perl DBD.
I wonder what Tom Kyte (asktom.oracle.com) has to say about this topic?
