I'm receiving an error message in VS2010 after executing the following code, which gets values from a SQLite database via an automatically generated ADO.NET Entity Data Model.
using (Data.DbEntities ent = new Data.DbEntities())
{
    var r = from tt in ent.Template_DB select tt;
    r.First(); // Required to cause the error
}
The SQLite database table being accessed is called 'Template' (which was renamed to Template_DB for the model) with a few columns holding strings, longs and bits. All queries I've tried return exactly what's expected.
The message I receive is:
ReleaseHandleFailed was detected
A SafeHandle or CriticalHandle of type
'Microsoft.Win32.SafeHandles.SafeCapiHashHandle' failed to properly
release the handle with value 0x0D0DDCF0. This usually indicates that
the handle was released incorrectly via another means (such as
extracting the handle using DangerousGetHandle and closing it directly
or building another SafeHandle around it.)
This message comes up perhaps 60% of the time, up to 8 seconds after the code has completed. As far as I'm aware, the database is not encrypted and has no password. Until recently, I've been using similar MS-SQL databases with Entity Framework models and never seen an error like this.
Help!
EDIT:
I downloaded/installed "sqlite-netFx40-setup-bundle-x86-2010-1.0.81.0.exe" to install SQLite, from here. This included the System.Data.SQLite 1.0.81.0 (3.7.12.1) package (not 3.7.13 as stated in the comment below)
Related
I want to update hundreds (or even thousands) of records at a time with Peewee's FlaskDB (which is set up in an app factory).
Referencing the Peewee documentation, I have bulk_update working well (and it is very fast compared to other methods), but it fails when using batches.
For example, Ticket.bulk_update(selected_tickets, fields=[Ticket.customer]) works great, but when I use the following code to update in batches I receive the following error.
code
with db.atomic():
Ticket.bulk_update(selected_tickets, fields=[Ticket.customer], batch=50)
error
AttributeError: 'FlaskDB' object has no attribute 'atomic'
What is the recommended way of updating records in bulk with FlaskDB? Does FlaskDB support atomic?
You are trying to access peewee.Database methods on the FlaskDB wrapper class. Those methods do not exist on the wrapper; you need to call them on the underlying Peewee database:
# Here we assume db is a FlaskDB() instance:
peewee_db = db.database
with peewee_db.atomic():
...
I have an application that supports two databases. MSSQL and SQLite. I am revamping the underlying data access and models and using RepoDb. I would be using the same model for the SQLite and MSSQL. Depending on the connection string I create my connection object (i.e. SQLiteConnection or SqlConnection). I am facing a problem with one of my entities. The problem is with a column type.
public class PLANT
{
public string OP_ID {get;set;}
}
The OP_ID column maps to uniqueidentifier in SQL Server and to nvarchar in SQLite. When I query, it works fine with SQLiteConnection. The problem I face is when I use SqlConnection:
var plant = connection.Query<PLANT>(e => e.OP_ID == "3FFA25B5-4DF5-4216-846C-2C9F58B7DD90").FirstOrDefault();
I get the error:
"No coercion operator is defined between types 'System.Guid' and 'System.String'."
I have tried using the IPropertyHandler<Guid, string> on the OP_ID; it works for SqlConnection but fails for SQLiteConnection.
Is there a way that I can use the same model for both connections?
I strongly recommend sharing models between multiple databases only if the key columns have the same type; otherwise you will end up with coercion problems like this, because one database does not support the target type (i.e., UNIQUEIDENTIFIER).
In any case, a PropertyHandler is not the way to go here, as the input types differ. You can use separate models for SQLite and SQL Server, or you can explicitly set RepoDb.Converter.ConversionType to Automatic so the coercion is handled automatically.
I do not recommend setting ConversionType to Automatic, as it adds conversion logic on top of data extraction, but it would fix the problem.
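For reference, that setting is a one-line global configuration applied once at application startup (the enum's namespace is given here as commonly documented for RepoDb; verify it against your RepoDb version):

```csharp
// Global RepoDb setting: let the library coerce mismatched types
// (e.g. Guid <-> string) during data extraction. Set once at startup.
RepoDb.Converter.ConversionType = RepoDb.Enumerations.ConversionType.Automatic;
```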
I want to know which Oracle objects contain the predefined error messages along with their error codes. Please help me find these database objects.
If you want to find out the error messages you have had:
SELECT *
FROM user_errors
Do you mean the ORA-nnnnn type messages? If so, I believe these are not held in database tables but in OS files under ORACLE_HOME, as message files (one per language). On UNIX/Linux at least, there is a utility called "oerr" which you run as "oerr ora 1301"; it prints the message for the given code in the language determined by your LOCALE settings. If you want all the messages, you can get them from the Oracle manuals, e.g.
http://docs.oracle.com/cd/B28359_01/server.111/b28278/toc.htm
I am trying to do a query on a table via JDBC in my Java program.
I know there are three rows in that table.
I've got the ResultSet and can read and process the data of the first row, but when I try to move the ResultSet to the next row, an exception is thrown.
The exception points to [SQL0181].
You can use the SQL Message Finder to look up message codes.
The SQL0181 message text is:
Value in date, time, or timestamp string not valid.
This indicates you have a value in a row that can not be represented as an SQL Datetime value.
It is not uncommon for legacy HLL programs to introduce these sorts of errors as they are capable of writing directly to the table row without the same validation enforced by the SQL interface.
See also this previously asked SO question: Why am I getting a “[SQL0802] Data conversion of data mapping error” exception?
I know this is an old question, but it's a top search result for SQL0181, and the answer above is wrong.
The problem is that the date being retrieved can't be represented in the date format being used. IBM i allows several non-ISO date formats, which can't handle extreme values like the beginning and end of time. The error stems from trying to use one of them.
You can either have your user profile changed or use ISO on your JDBC settings.
"jdbc:as400://RCHASSLH;date format=iso;time format=iso;"
ref:
http://www-01.ibm.com/support/docview.wss?uid=nas8N1017268
I am trying to implement this solution:
NHibernate-20-SQLite-and-In-Memory-Databases
The only problem is that we have hbms like this:
<class name="aTable" table="[dbo].[aTable]" mutable="true" lazy="false">
with [dbo] in the table name, because we are working with MSSQL, and this does not work with SQLite.
I found this posting on the rhino-tools-dev group where they talk about just removing the schema from the mapping, but on NH2 there doesn't seem to be a classMapping.Schema.
There is a classMapping.Table.Schema, but it seems to be read-only. For example, this doesn't work:
foreach (PersistentClass cp in configuration.ClassMappings)
{
    // Does not work - throws a
    // System.IndexOutOfRangeException: Index was outside the bounds of the array.
    cp.Table.Schema = "";
}
Is there a way to tell Sqlite to ignore the [dbo] (I tried attach database :memory: as dbo, but this didn't seem to help)?
Alternatively, can I programmatically remove it from the classmappings (unfortunately changing the hbms is not possible right now)?
We had too many problems with SQLite which eventually pushed us to switch to SQL Express.
Problems I remember:
SQLite, when used in-memory, discards the database when the Session is closed.
SQLite does not support a number of SQL constructs, including basic ones such as ISNULL as well as more advanced ones like common table expressions and other features added in SQL Server 2005 and 2008. This becomes important when you start writing complex named queries.
SQLite's datetime has a bigger range of possible values than SQL Server's.
The API NHibernate uses for SQLite behaves differently from ADO.NET for MS SQL Server when used inside a transaction. One example is the hbm-to-ddl tool, whose Execute method does not work inside a transaction with SQL Server but works fine with SQLite.
To summarize, SQLite-based unit-testing is very far from being conclusively representative of the issues you'll encounter when using MS SQL Server in PROD and therefore undermines the credibility of unit-testing overall.
We are using SQLite to run unit tests with NH 2.0.1. Actually, I didn't run into this problem; I just didn't specify dbo, which I think is the default on SQL Server.
By the way, there is a default_schema parameter in the configuration file. This is actually the database name, but you can try putting dbo there, for the SQL Server configuration only, of course.
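The property goes in the session-factory section of the NHibernate configuration file; a minimal sketch (the driver and other properties are illustrative, and you would omit default_schema in the SQLite configuration):

```xml
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <!-- SQL Server configuration only: prefix unqualified table names with dbo -->
    <property name="connection.driver_class">NHibernate.Driver.SqlClientDriver</property>
    <property name="default_schema">dbo</property>
  </session-factory>
</hibernate-configuration>
```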
After looking through the NH source and some experimenting, I think I found a simple workaround:
foreach (PersistentClass cp in configuration.ClassMappings)
{
    // Input: [dbo].[Tablename]  Output: Tablename
    cp.Table.Name = Regex.Replace(cp.Table.Name, @"^\[.*\]\.\[", "");
    cp.Table.Name = Regex.Replace(cp.Table.Name, @"\]$", "");
    // just to be sure
    cp.Table.Schema = null;
}
Note that I can set Table.Schema to null, while an empty string threw an exception.
Thanks for the answers!