Can Npgsql 3.1 automatically convert between PostgreSQL enums and text, like Npgsql 2.2 did?

I'm upgrading an application from Npgsql 2.2 to Npgsql 3.1. It uses many enums, and I'm getting lots of errors like this:
Npgsql.PostgresException : 42804: column "my_column" is of type my_enum but expression is of type text
This seems to happen even with a simple INSERT, like this:
INSERT INTO my_table (col1, col2, col3, ...)
SELECT $1, $2, $3, ...
RETURNING 1 AS _row_affected
Each column value is added as a parameter using NpgsqlCommand.Parameters.AddWithValue(name, value) (without specifying the type). The .NET type of the value is a string in this case.
In Npgsql 2.2 this worked, because it transparently converted text to Postgres enums and vice versa. Is there a way to get the same behaviour in 3.1? I know that it handles enums natively now, which can be useful, but it would really simplify the upgrade if I could get the enums working as text again!

When creating the parameter, try setting its NpgsqlDbType to Unknown; this should make PostgreSQL transparently convert the text value into an enum.
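For illustration, a minimal sketch of that suggestion (assuming an open NpgsqlConnection conn; the table, column, and enum label here are hypothetical):

using Npgsql;
using NpgsqlTypes;

var cmd = new NpgsqlCommand("INSERT INTO my_table (my_column) VALUES (@p)", conn);
// AddWithValue as before, but mark the parameter as Unknown so Npgsql
// sends it untyped and PostgreSQL coerces the text to the enum column's type
var p = cmd.Parameters.AddWithValue("p", "some_enum_label");
p.NpgsqlDbType = NpgsqlDbType.Unknown;
cmd.ExecuteNonQuery();

This keeps the existing AddWithValue call sites largely intact; only the parameter's type needs to be set afterwards.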

Related

Count in Firebird 3.0 bigint vs Firebird 2.5 integer

On my system, after migrating from Firebird 2.5 to 3.0, many reports and other functions began to fail, complaining that an Integer was expected but the current type is LargeInt. I took a look and saw that some queries using COUNT return a BIGINT column in 3.0, while in 2.5 they return an INTEGER column.
The only fix I know is to add an explicit cast everywhere the error occurs; I tested that and it works, but this is a big system, so it would be a lot of work.
Does anyone know of a way to resolve this in Firebird itself? Some configuration, or something?
There is no configuration for it; the Firebird 3 release notes only say:
The COUNT() aggregator now returns its result as BIGINT instead of INTEGER.
You either need to apply an explicit cast in your queries or in your code, or see if your data access library can explicitly request an integer instead of just the dynamic type. For example, in the Java world the JDBC API has an explicit getInt which works for BIGINT as long as the value fits in a 32-bit integer.
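The same idea in .NET terms, as an illustrative sketch (assuming the FirebirdSql.Data.FirebirdClient ADO.NET provider and an open connection conn; the query and alias are hypothetical):

using System;
using FirebirdSql.Data.FirebirdClient;

using (var cmd = new FbCommand("select count(*) as br from my_table", conn))
using (var reader = cmd.ExecuteReader())
{
    while (reader.Read())
    {
        // Firebird 3 delivers COUNT() as a boxed 64-bit integer;
        // Convert.ToInt32 narrows it as long as the value fits in 32 bits
        int br = Convert.ToInt32(reader["br"]);
    }
}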
I use Unidac, and for the solution I used Map Rules (Data Type Mapping).
Use a typecast in the SQL in any Delphi component and it will be OK. For example, write:
select
cast(count(*) as integer) BR
from ...
instead of
select
count(*) BR
from ...

SQLite: why does SELECT with LIKE work when equals does not?

I have a problem with an sqlite3 database. I execute the following query
sqlite> select * from property where link like
"http://www.domain.com/huur/den-bosch/appartement-48118689-meester-spoermekerlaan-88/";
I get two rows
17|2014-11-03|Meester Spoermekerlaan
88|http://www.domain.com/huur/den-bosch/appartement-48118689-meester-spoermekerlaan-88/|5237
JZ|Den Bosch|€ 789|3|1
32|2014-11-03|Meester Spoermekerlaan
88|http://www.domain.com/huur/den-bosch/appartement-48118689-meester-spoermekerlaan-88/|5237
JZ|Den Bosch|€ 789|3|1
Then I execute the same query, but using the equality operator, like so
sqlite> select * from property where
link="http://www.domain.com/huur/den-bosch/appartement-48118689-meester-spoermekerlaan-88/";
sqlite> (<---- no results??)
I already found a similar answer, but the issue is not the same; my fields are of datatype "text", as you can see here: https://stackoverflow.com/a/14823565/279147
sqlite> .schema property
CREATE TABLE property (id integer PRIMARY KEY AUTOINCREMENT UNIQUE,"date" text,address text,link text,postcode text,city text,price text,rooms text,page integer);
So does anybody have any idea why this would happen? Here is my version information:
root#s1:/# sqlite3 application.sqlite3
SQLite version 3.7.3
I had the same problem. This workaround worked for me.
SELECT * from foo WHERE CAST(name AS BLOB) = CAST('bla' AS BLOB);
SQLite uses dynamic typing; it does not matter if the link column is declared as text or blob or fluffy bunnies.
LIKE automatically converts its parameters into strings, but the = operator does not.
Fix the program that writes the entries to the database to use the correct data type.
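One way to confirm what is actually stored (an illustrative diagnostic, building on the dynamic-typing answer above) is SQLite's typeof() function:

sqlite> select typeof(link), count(*) from property group by typeof(link);

If the offending rows report blob rather than text, = against a string literal will never match them, while LIKE still does, because LIKE converts its operands to strings first.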
I had a similar issue; in my case the point was that LIKE is not case-sensitive, but the = operator is.
This is because = is a literal string comparison, so the values would have to match exactly.
The LIKE operator looks for a matching pattern within the string and finds it.

How to use Dapper micro-ORM with Oracle to map NUMBER (OracleDecimal)

The ODP.NET provider raises an exception in IDataReader.GetValue()/GetValues() if the column type is NUMBER(x,y) such that it will overflow all .NET numeric types. So Dapper is unable to map such a column to a POCO property.
I have an Oracle stored procedure that uses a REF CURSOR output parameter to return 3-column records. Fundamentally all 3 are NUMBER(something), but the ODP.NET Oracle managed provider seems to decide what ODP.NET or .NET type to turn them into.
I've been having problems with Dapper's Query() mapping records from this sproc into POCOs. Perhaps it actually isn't my fault, for once: it seems that when a column comes across as an ODP.NET type instead of a .NET type, Dapper fails. If I comment an offending column out of my POCO, everything works.
Here's a pair of rows to illustrate:
RDWY_LINK_ID  RLC_LINK_OSET                              SIGN
------------  -----------------------------------------  ----
1829          1.51639964279667746989761971196153763602   1
14380         578.483600357203322530102380288038462364   -1
The first column is seen in .NET as int, the second column as type OracleDecimal, and the third as decimal. The second one is the problem.
For example, removing Dapper for the moment and using vanilla ODP.NET to access these records thusly indicates the problem:
int linkid = (int)reader.GetValue(0);
decimal linksign = (decimal)reader.GetValue(2);
//decimal dlinkoffset = (decimal)reader.GetValue(1); //**invalid cast exception at Oracle.ManagedDataAccess.Client.OracleDataReader.GetDecimal(Int32 i)**
//object olinkoffset = reader.GetValue(1); //**same**
//decimal dlinkoffset = reader.GetDecimal(1); //**same**
//object[] values = new object[reader.FieldCount];
//reader.GetValues(values); //**same**
OracleDecimal linkoffset = (OracleDecimal)reader.GetProviderSpecificValue(1); //this works!
double dblinkoffset = reader.GetDouble(1); //interesting, this works too!
//decimal dlinkoffset = linkoffset.Value; //overflow exception
dblinkoffset = linkoffset.ToDouble(); //voila
What little browsing and breakpointing I've done in Dapper's SqlMapper.cs file shows me that it is extracting data from the reader with GetValue()/GetValues(), as above, which fails.
Any suggestions how to patch Dapper up? Many thanks.
UPDATE:
Upon reflection, I RTFMed: Section 3, "Obtaining Data from an OracleDataReader Object", of the Oracle Data Provider for .NET Developer's Guide explains it. For NUMBER columns, ODP.NET's OracleDataReader tries a sequence of .NET types, from Byte up to Decimal, to prevent overflow. But a NUMBER may still overflow Decimal, giving an invalid cast exception if you try any of the reader's .NET type accessors (GetValue()/GetValues()). In that case you have to use the reader's ODP.NET type accessor, GetProviderSpecificValue(), which gives you an OracleDecimal. If that value overflows a Decimal, its Value property throws an overflow exception, and your only recourse is to coerce it into a narrower type with one of OracleDecimal's ToXxx() methods.
But of course the ODP.NET type accessor is not part of the IDataReader interface used by Dapper to hold reader objects, so it seems that Dapper, by itself, is Oracle-incompatible when a column type will overflow all .NET types.
The question remains: does anyone know how to extend Dapper to handle this? It seems I'd need an extension point where I could control how the reader is used for certain POCO property or column types (forcing it to use GetDouble() instead of GetValue(), or casting to OracleDataReader and calling GetProviderSpecificValue()).
To avoid this issue I used:
CAST(COLUMN AS BINARY_DOUBLE)
or
TO_BINARY_DOUBLE(COLUMN)
In Oracle's data type documentation, BINARY_DOUBLE is described as:
64-bit floating point number. This datatype requires 9 bytes, including the length byte.
Most of the other number types used by Oracle are 22 bytes max, so this is as good as it gets for .NET.
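For illustration, a sketch of how that cast fits into a Dapper call (the POCO, table name, and connection variable are hypothetical; the point is that BINARY_DOUBLE arrives as a plain System.Double, which Dapper maps without help):

using Dapper;

public class LinkRecord
{
    public int RdwyLinkId { get; set; }
    public double RlcLinkOset { get; set; } // BINARY_DOUBLE maps to System.Double
    public int Sign { get; set; }
}

// connection is an open OracleConnection
var rows = connection.Query<LinkRecord>(
    @"SELECT rdwy_link_id                          AS RdwyLinkId,
             CAST(rlc_link_oset AS BINARY_DOUBLE)  AS RlcLinkOset,
             sign                                  AS Sign
      FROM   rdwy_links");

Precision beyond what a 64-bit double can represent is lost in the cast, but that is usually acceptable when the alternative is an invalid cast exception.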

ADO Recordset Type for Date

Warning: Classic ASP Ahead. :)
I'm working on a legacy Classic ASP application and I'm running into an oddity with an ADODB.Recordset object.
I have a SQL2012 database table containing a particular field. Let's call it AnnoyingField. Its datatype in SQL is 'date'.
The ASP opens an ADODB.Recordset with a SELECT on the table to collect the fields, then does some looping to do its work:
For Each Field In rs.Fields
    typeid = rs(Field.Name).Type
    ' do stuff based on type
Next
For some reason, the Type for AnnoyingField is coming back as 202 (nvarchar) rather than one of the expected types for date (133 or even 7). This is causing some issues further in the code.
I tested with another field of 'datetime' type, and the Recordset code returned the expected Type for a datetime field: 135.
Anyone have an idea why the 'date' fields are returning as an nvarchar?
Changing the database fields from date to datetime in this case might not be possible, even though it might be the logical path to take to get expected data.
The date type is newer than your version of ADODB, so it doesn't understand what it's getting.
You may be able to cheer it up by using
select convert(datetime, AnnoyingField) from CrazyFuturisticTable
You may also get the correct result if you upgrade your ADODB version to 2.8 and/or connect using the SQL Native Client. Obviously, I haven't tried this, because I live in 2014.
To keep it a little simpler, cut ADO out of the picture and use IsDate, CDate, and VarType:
For Each Field In rs.Fields
    value = rs(Field.Name).Value
    If IsDate(value) Then value = CDate(value) ' just in case
    typeid = VarType(value)
    ' do stuff based on type
Next
VarType: http://www.w3schools.com/vbscript/func_vartype.asp
ADO's .Type can report more "types": http://www.w3schools.com/ado/prop_para_type.asp
But I think what VarType can offer should help people in 95% of cases.
A word of warning with CDate: the server locale, or the Session.LCID set in your code, determines what format the date comes out in. It shouldn't be a problem for most people, but obviously test to see that you get the expected result.
I am using Classic ASP / VBScript and have found that with SQL Server 2012, the SQL Native Client driver ("Provider=SQLNCLI11"), and the ADO module provided by Windows (presumably Vista or newer; mine is Windows Server 2012 R2, where MyDbConn.Version shows 6.3), this now works again.
I had been looking for/expecting Field.Type to be adDate [7], but am actually getting adDBDate [133].

Delphi XE Unicode dealing with nchar(1) fields

I'm using a dbGo TADODataset in Delphi XE to read data from a SQL Server 2008 table. All the string fields are nvarchar or nchar.
The nvarchar fields map to TWideStringField with no problems.
The problem is with an nchar(1) field: this maps to a TStringField fine, but not to a TWideStringField. When I configure the field to be a TWideStringField I get the error:
Type mismatch for field 'foobar', expecting: WideString actual: FixedChar
The problem with using a TStringField is that it is UTF8, and we want to make our application fully Unicode-compliant.
Any suggestions on this?
Hmmm. That field is used for what, exactly?
In most DB designs I've seen, an (n)char(1) field is normally used for one-letter flags, which can easily be represented in an ASCII char. I would recommend changing the type of the column to CHAR(1), if that's the case.
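If it is indeed a one-letter flag, the change is a one-line DDL statement on the SQL Server side (the table name here is hypothetical; foobar is the field from the error message):

ALTER TABLE dbo.my_table ALTER COLUMN foobar char(1) NOT NULL;

Note that ALTER COLUMN resets nullability, so re-state NULL or NOT NULL to match the original definition. After the change, the field maps to a plain TStringField, which is fine for single ASCII flag characters.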
