On my system, after the migration from Firebird 2.5 to 3.0, many reports and other functions began to fail with errors saying that Integer was expected but LargeInt was found. I took a look and saw that some queries using COUNT return a BIGINT column in 3.0, while in 2.5 they returned an INTEGER column.
The only fix I know is to add a CAST to everything that is giving an error; I tested that and it works, but it is a big system, so it would be a lot of work.
Does anyone know of any way to resolve this in Firebird itself? Some configuration, or something?
There is no configuration for it; the Firebird 3 release notes only say:
The COUNT() aggregator now returns its result as BIGINT instead of INTEGER.
You either need to explicitly apply a cast in your queries or in your code, or see if your data access library can explicitly request an integer instead of relying on the column's dynamic type. For example, in the Java world the JDBC API has an explicit getInt which will work for BIGINT as long as the value fits in a 32-bit integer.
I use UniDAC, and for the solution I used Map Rules (Data Type Mapping).
Use a type cast in the SQL in any Delphi component and it will be OK. For example, write:
select
cast(count(*) as integer) BR
from ...
instead of
select
count(*) BR
from ...
We are using software that has limited Oracle capabilities. I need to filter on a CLOB field by checking that it has a specific value. Normally, outside of this software, I would do something like:
DBMS_LOB.SUBSTR(t.new_value) = 'Y'
However, this isn't supported, so I'm attempting to use CAST instead. I've made many attempts, and so far this is what I've found:
The software has a built-in query checker/validator and these are the ones it shows as invalid:
DBMS_LOB.SUBSTR(t.new_value)
CAST(t.new_value AS VARCHAR2(10))
CAST(t.new_value AS NVARCHAR2(10))
However, the validator does accept these:
CAST(t.new_value AS VARCHAR(10))
CAST(t.new_value AS NVARCHAR(10))
CAST(t.new_value AS CHAR(10))
Unfortunately, even though the validator lets these ones go through, when running the query to fetch data, I get ORA-22835: Buffer too small when using VARCHAR or NVARCHAR. And I get ORA-25137: Data value out of range when using CHAR.
Are there other ways I could try to check that my CLOB field has a specific value when filtering the data? If not, how do I fix my current issues?
The error you're getting indicates that Oracle is trying to apply the CAST(t.new_value AS VARCHAR(10)) to a row where new_value has more than 10 characters. That makes sense given your description that new_value is a generic audit field holding values from a large number of different tables with a variety of data lengths. Given that, you'd need to structure the query in a way that forces the optimizer to reduce the set of rows down to just those where new_value is a single character before the cast is applied.
Not knowing what sort of scope the software you're using provides for structuring your code, I'm not sure what options you have there. Be aware that depending on how robust you need this, the optimizer has quite a bit of flexibility to choose to apply predicates and functions on the projection in an arbitrary order. So even if you find an approach that works once, it may stop working in the future when statistics change or the database is upgraded and Oracle decides to choose a different plan.
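One way to express that idea is sketched below, assuming the tool's validator accepts CASE expressions and plain LENGTH() on a CLOB (the table name is hypothetical). The CASE guard keeps the cast from ever being evaluated against a row whose CLOB is longer than the target size:
select *
from audit_table t  -- hypothetical table name
where case
        when length(t.new_value) = 1            -- only single-character CLOBs
        then cast(t.new_value as varchar(10))   -- cast never sees an oversized value
      end = 'Y'
Oracle documents short-circuit evaluation for CASE, so the THEN branch is only evaluated when the WHEN condition is true, which should hold regardless of the plan the optimizer chooses.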
Using this as sample data
create table tab1(col clob);
insert into tab1(col) values (rpad('x',3000,'y'));
You need to use dbms_lob.substr(col,1) to get the first character (from the default offset= 1)
select dbms_lob.substr(col,1) from tab1;
DBMS_LOB.SUBSTR(COL,1)
----------------------
x
Note that the default amount (= length) of the substring is 32767, so using only DBMS_LOB.SUBSTR(COL) will return more than you expect.
CAST for a CLOB does not trim the string to the target length, but (as you observed) raises the exception ORA-25137: Data value out of range if the original string is longer than the cast length.
As documented for CAST:
CAST does not directly support any of the LOB data types. When you use CAST to convert a CLOB value into a character data type or a BLOB value into the RAW data type, the database implicitly converts the LOB value to character or raw data and then explicitly casts the resulting value into the target data type. If the resulting value is larger than the target type, then the database returns an error.
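For completeness, if the tool ever does accept DBMS_LOB.SUBSTR, the filter itself is short; note the argument order is (lob, amount, offset), so this reads one character starting at offset 1 (again, the table name is hypothetical):
select *
from audit_table t  -- hypothetical table name
where dbms_lob.substr(t.new_value, 1, 1) = 'Y'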
I transferred an Oracle database to SQL Server and all seems to have gone well. The various ID columns are large numbers, so I had to use Decimal, as they were too large for BigInt.
I am now trying to read the data using pandas.read_sql over a pyodbc connection with ODBC Driver 17 for SQL Server: df = pandas.read_sql("SELECT * FROM table1", con)
The numbers come out as float64, and when I try to print them or use them in SQL statements they appear in scientific notation. When I try '{:.0f}'.format(df.loc[i,'Id']), it turns several distinct IDs into the same number, such as 90300111000003078520832. It is as if precision is lost when the value goes through scientific notation.
I also tried pd.options.display.float_format = '{:.0f}'.format before the read_sql but this did not help.
Clearly I must be doing something wrong as the Ids in the database are correct.
Any help is appreciated. Thanks.
pandas' read_sql method has an option named coerce_float which defaults to True and it …
Attempts to convert values of non-string, non-numeric objects (like decimal.Decimal) to floating point, useful for SQL result sets.
However, in your case it is not useful, so simply specify coerce_float=False.
I've had this problem too, especially when working with long IDs: read_sql works fine for the primary key, but not for other columns (like the retweeted_status_id from Twitter API calls). Setting coerce_float to False does nothing for me, so instead I cast retweeted_status_id to a character format in my SQL query.
Using PostgreSQL, I do:
df = pandas.read_sql("SELECT *, Id::text FROM table1", con)
But in SQL Server it'd be something like
df = pandas.read_sql("SELECT *, CONVERT(text, Id) FROM table1", con)
or
df = pandas.read_sql("SELECT *, CAST(Id AS varchar) FROM table1", con)
Obviously there's a cost here if you're asking to cast many rows, and a more efficient option might be to pull from SQL Server without using pandas (as a nested list, JSON, or something else), which will also preserve your long integer formats.
I'm not able to understand how the geography data type in SQL Server works...
For example I have the following data:
0xE6100000010CCEAACFD556484340B2F336363BCA21C0
what I know:
0x is the prefix for hexadecimal
the last 16 digits are the longitude: B2F336363BCA21C0 (a hex-encoded double)
the 16 digits before the last 16 are the latitude: CEAACFD556484340 (a hex-encoded double)
the first 4 digits are the SRID: E610 (hexadecimal for WGS84)
what I don't understand:
digits 5 to 12: 0000010C
what is this?
From what I've read, this seems linked to WKB (Well-Known Binary) or EWKB (Extended Well-Known Binary); in any case, I was not able to find a definition for EWKB...
And in WKB this part is supposed to be the geometry type (a 4-byte integer), but the value doesn't match the geometry type codes (this example is a single point coordinate).
Can you help to understand this format?
The spatial types (geometry and geography) in SQL Server are implemented as CLR data types. As with any such data type, you get a binary representation when you query the value directly. Unfortunately, it's not (as far as I know) WKB but rather whatever format Microsoft decided was best for their implementation. We (the users) should work with the interface of methods that MS has published (for instance, the geography method reference). Which is to say that you should only try to decipher the MS binary representation if you're curious (and not for actually working with it).
That said, if you need/want to work with WKB, you can! For example, you can use the STGeomFromWKB() static method to create a geography instance from WKB that you provide and STAsBinary() can be called on a geography instance to return WKB to you.
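As a quick T-SQL illustration (the coordinates are arbitrary): STAsBinary() returns standard OGC WKB rather than the internal serialization shown above, and STGeomFromWKB() rebuilds a geography instance from that WKB plus an SRID:
DECLARE @g geography = geography::Point(47.65, -122.35, 4326);  -- Point(Lat, Long, SRID)
SELECT @g.STAsBinary() AS wkb;                                  -- OGC well-known binary
SELECT geography::STGeomFromWKB(@g.STAsBinary(), 4326).ToString() AS wkt;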
The Format spec can be found here:
https://msdn.microsoft.com/en-us/library/ee320529(v=sql.105).aspx
As that page shows, it used to change very frequently, but has slowed down significantly over the past two years.
I currently need to dig into the spec to serialize from JVM code into a bcp file so that I can use SQLServerBulkCopy rather than plain JDBC to upload data into tables (it is about 7x faster to write a bcp file than to use JDBC), but this is proving to be more complicated than I originally anticipated.
After testing with bcp: you can upload geographies by specifying an off-row format (varchar(max)) and storing the well-known text; SQL Server will see this and assume you want a geography based on the WKT it sees.
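The same WKT-to-geography conversion can be sketched in plain T-SQL (table and column names here are hypothetical); geography::STGeomFromText() builds the instance from well-known text plus an SRID, with the WKT point given as longitude then latitude:
create table places (name varchar(50), geo geography);  -- hypothetical table
insert into places (name, geo)
values ('Sample point', geography::STGeomFromText('POINT(-122.35 47.65)', 4326));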
In my case converting to nvarchar resolved the issue.
The ODP.NET provider raises an exception in IDataReader.GetValue()/GetValues() if the column type is NUMBER(x,y) such that it will overflow all .NET numeric types. So Dapper is unable to map such a column to a POCO property.
I have an Oracle stored procedure that uses a REF CURSOR output parameter to return 3-column records. Fundamentally all 3 are NUMBER(something), but the ODP.NET Oracle managed provider seems to decide what ODP.NET or .NET type to turn them into.
I've been having problems with Dapper's Query() mapping records from this sproc into POCOs. Perhaps it actually isn't my fault, for once - it seems when a column comes across as an ODP.NET type instead of a .NET type, Dapper fails. If I comment an offending column out of my POCO, everything works.
Here's a pair of rows to illustrate:
--------------------------------------------------------------------
RDWY_LINK_ID RLC_LINK_OSET SIGN
---------------------- ---------------------- ----------------------
1829 1.51639964279667746989761971196153763602 1
14380 578.483600357203322530102380288038462364 -1
The first column is seen in .NET as int, the second column as type OracleDecimal, and the third as decimal. The second one is the problem.
For example, removing Dapper for the moment and using vanilla ODP.NET to access these records thusly indicates the problem:
int linkid = (int)reader.GetValue(0);
decimal linksign = (decimal)reader.GetValue(2);
//decimal dlinkoffset = (decimal)reader.GetValue(1); //invalid cast exception at Oracle.ManagedDataAccess.Client.OracleDataReader.GetDecimal(Int32 i)
//object olinkoffset = reader.GetValue(1); //same
//decimal dlinkoffset = reader.GetDecimal(1); //same
//object[] values = new object[reader.FieldCount];
//reader.GetValues(values); //same
OracleDecimal linkoffset = (OracleDecimal)reader.GetProviderSpecificValue(1); //this works!
double dblinkoffset = reader.GetDouble(1); //interesting, this works too!
//decimal dlinkoffset = linkoffset.Value; //overflow exception
dblinkoffset = linkoffset.ToDouble(); //voila
What little browsing and breakpointing I've done in Dapper's SqlMapper.cs file shows me that it is extracting data from the reader with GetValue()/GetValues(), as above, which fails.
Any suggestions how to patch Dapper up? Many thanks.
UPDATE:
Upon reflection, I RTFMed Section 3, "Obtaining Data from an OracleDataReader Object", of the Oracle Data Provider for .NET Developer's Guide, which explains it. For NUMBER columns, ODP.NET's OracleDataReader will try a sequence of .NET types from Byte to Decimal to prevent overflow. But a NUMBER may still overflow Decimal, giving an invalid cast exception if you try any of the reader's .NET type accessors (GetValue()/GetValues()). In that case you have to use the reader's ODP.NET type accessor GetProviderSpecificValue(), which gives you an OracleDecimal; if that overflows a Decimal, its Value property will throw an overflow exception, and your only recourse is to coerce it into a narrower type with one of OracleDecimal's ToXxx() methods.
But of course the ODP.NET type accessor is not part of the IDataReader interface used by Dapper to hold reader objects, so it seems that Dapper, by itself, is Oracle-incompatible when a column type will overflow all .NET types.
The question remains: do the smart folk know how to extend Dapper to handle this? It seems to me I'd need an extension point where I could control how the reader is used (forcing it to use GetDouble() instead of GetValue(), or casting to OracleDataReader and calling GetProviderSpecificValue()) for certain POCO property or column types.
To avoid this issue I used:
CAST(COLUMN AS BINARY_DOUBLE)
or
TO_BINARY_DOUBLE(COLUMN)
In the Oracle types listed here it's described as:
64-bit floating point number. This datatype requires 9 bytes, including the length byte.
Most of the other number types used by Oracle are 22 bytes max, so this is as good as it gets for .NET
I have just been bitten by the issue described in the SO question Binding int64 (SQL_BIGINT) as query parameter causes error during execution in Oracle 10g ODBC.
I'm porting a C/C++ application using ODBC 2 from SQL Server to Oracle. For numeric fields exceeding NUMBER(9) it uses the __int64 data type, which is bound to queries as SQL_C_SBIGINT. Apparently such a binding is not supported by Oracle ODBC. I must now do an application-wide conversion to another method. Since I don't have much time (it's an unexpected issue), I would rather use a proven solution, not trial and error.
What data type should be used to bind to, e.g., NUMBER(15) in Oracle? Is there a documented, recommended solution? What are you using? Any suggestions?
I'm especially interested in solutions that do not require any additional conversions. I can easily provide and consume numbers in the form of __int64 or char* (a normal non-exponential form without a thousands separator or decimal point). Any other format requires additional conversion on my part.
What I have tried so far:
SQL_C_CHAR
Looks like it's going to work for me. I was worried about the variability of the number format, but in my use case it doesn't seem to matter; apparently only the decimal separator character changes with system language settings.
And I don't see why I should use an explicit cast (e.g. TO_NUMBER) in the SQL INSERT or UPDATE command. Everything works fine when I bind the parameter with SQL_C_CHAR as the C type and SQL_NUMERIC (with the proper precision and scale) as the SQL type. I couldn't reproduce any data corruption effect.
SQL_NUMERIC_STRUCT
I noticed that SQL_NUMERIC_STRUCT was added with ODBC 3.0 and decided to give it a try. I am disappointed.
In my situation it is enough, as the application doesn't really use fractional numbers. But as a general solution... simply, I don't get it. I mean, I finally understood how it is supposed to be used. What I don't get is why anyone would introduce a new struct of this kind and then make it work this way.
SQL_NUMERIC_STRUCT has all the fields needed to represent any NUMERIC (or NUMBER, or DECIMAL) value with its precision and scale. Only they are not used.
When reading, ODBC sets the precision of the number (based on the precision of the column; except that Oracle reports a bigger precision, e.g. 20 for NUMBER(15)). But if your column has a fractional part (scale > 0), it is truncated by default. To read a number with the proper scale, you need to set the precision and scale yourself with a SQLSetDescField call before fetching the data.
When writing, Oracle thankfully respects the scale contained in SQL_NUMERIC_STRUCT. But the ODBC spec doesn't mandate it, and MS SQL Server ignores this value. So, back to SQLSetDescField again.
See HOWTO: Retrieving Numeric Data with SQL_NUMERIC_STRUCT and INF: How to Use SQL_C_NUMERIC Data Type with Numeric Data for more information.
Why doesn't ODBC fully use its own SQL_NUMERIC_STRUCT? I don't know. It looks like it works, but I think it's just too much work.
I guess I'll use SQL_C_CHAR.
My personal preference is to make the bind variables character strings (VARCHAR2) and let Oracle do the conversion from character to its own internal storage format. It's easy enough (in C) to get data values represented as null-terminated strings, in an acceptable format.
So, instead of writing SQL like this:
SET MY_NUMBER_COL = :b1
, MY_DATE_COL = :b2
I write the SQL like this:
SET MY_NUMBER_COL = TO_NUMBER( :b1 )
, MY_DATE_COL = TO_DATE( :b2 , 'YYYY-MM-DD HH24:MI:SS')
and supply character strings as the bind variables.
There are a couple of advantages to this approach.
One is that it works around the issues and bugs one encounters when binding other data types.
Another advantage is that bind values are easier to decipher on an Oracle event 10046 trace.
Also, an EXPLAIN PLAN (I believe) expects all bind variables to be VARCHAR2, which means the statement being explained is slightly different from the actual statement being executed (due to the implicit data conversions when the data types of the bind arguments in the actual statement are not VARCHAR2).
And (less important) when I'm testing the statement in TOAD, it's easier just to be able to type strings into the input boxes and not have to muck with changing the data type in a dropdown list box.
I also let the built-in TO_NUMBER and TO_DATE functions validate the data. (In earlier versions of Oracle at least, I encountered issues with binding a DATE value directly; it bypassed at least some of the validity checking and allowed invalid date values to be stored in the database.)
This is just a personal preference, based on past experience. I use this same approach with Perl DBD.
I wonder what Tom Kyte (asktom.oracle.com) has to say about this topic?