AccessViolationException in WPF while connecting to DB due to multiple threads

I am working on a multithreaded WPF application. In my ConnectionOpen() I get an "AccessViolationException" saying "Attempted to read or write protected memory. This is often an indication that other memory is corrupt."
My code is as follows:
public class DatabaseServices
{
    static SQLiteConnection connection;
    static object conLock = new object();
    static object conCloseLock = new object();

    public static SQLiteDataReader ConnectionOpen(string Query)
    {
        lock (conLock)
        {
            if (connection != null && connection.State != System.Data.ConnectionState.Open)
            {
                connection = new SQLiteConnection("Data Source=Database/abc.sqlite");
                connection.Open();
            }
            else if (connection == null)
            {
                connection = new SQLiteConnection("Data Source=Database/abc.sqlite");
                connection.Open();
            }
            SQLiteCommand mycommand = new SQLiteCommand(Query, connection);
            SQLiteDataReader sqlite_datareader = mycommand.ExecuteReader();
            return sqlite_datareader;
        }
    }

    public static void ConnectionClose()
    {
        lock (conCloseLock)
        {
            connection.Close();
        }
    }
}
I've used lock as well to make the code thread-safe, but it's not working. Why?

The SQLiteConnection is not thread safe. Like all other database connections, you are supposed to open one per thread. The fact that your code has a few parts that won't work even if it were thread safe does not help either. For example, anybody can close the connection while somebody else is still querying on it.
Keep to the well-established patterns. Do not use database connections across threads. Do not write your own connection caching. Open a connection, do your work and then close it. If you definitely need connection caching, look into the documentation of your database and find out how the built-in mechanism works.
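For illustration, here is a minimal sketch of that open-work-close pattern, assuming the System.Data.SQLite API; the helper method and the read-first-column logic are placeholders, not code from the question:

using System.Collections.Generic;
using System.Data.SQLite;

public static class SqliteQueries
{
    // Hypothetical helper: one connection per operation, everything is read
    // before returning, and the using blocks close the reader, command and
    // connection even if an exception is thrown.
    public static List<string> ReadFirstColumn(string query)
    {
        var results = new List<string>();
        using (var connection = new SQLiteConnection("Data Source=Database/abc.sqlite"))
        {
            connection.Open();
            using (var command = new SQLiteCommand(query, connection))
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    results.Add(reader.GetValue(0).ToString());
                }
            }
        }
        return results;
    }
}

Because no reader or connection leaves the method, nothing needs an explicit ConnectionClose call and no shared state needs locking.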

SQLite does not support Multiple Active ResultSets (MARS)
So you cannot have multiple DataReaders served by the same connection.
After connecting (and releasing the lock) you hand out a DataReader. I assume the client code calls this ConnectionOpen method twice, resulting in (or rather attempting) re-use of the same connection.
Create a connection per DataReader.
When you use connection pooling:
Data Source=c:\mydb.db;Version=3;Pooling=True;Max Pool Size=100;
connections will be recycled/pooled when closed properly. This will lessen the overhead of the creation and opening of the connections.
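A hedged sketch of connection-per-DataReader with pooling enabled (the connection string and method name are illustrative assumptions, not part of the answer):

using System.Data;
using System.Data.SQLite;

public static class PooledSqlite
{
    // Pooling keeps the physical connection around after Close/Dispose,
    // so opening a new SQLiteConnection per reader stays cheap.
    const string ConnStr = "Data Source=Database/abc.sqlite;Version=3;Pooling=True;Max Pool Size=100;";

    public static SQLiteDataReader ExecuteReaderOnOwnConnection(string query)
    {
        var connection = new SQLiteConnection(ConnStr);
        connection.Open();
        var command = new SQLiteCommand(query, connection);
        // CommandBehavior.CloseConnection ties the connection's lifetime to the
        // reader: disposing the reader also closes (returns) the connection.
        return command.ExecuteReader(CommandBehavior.CloseConnection);
    }
}

Callers would wrap the returned reader in a using block so the connection goes back to the pool as soon as reading finishes.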

Related

How to get rid of "The connection was not closed. The connection's current state is open." error? [duplicate]

I'm writing an ASP.NET application. In my data layer a SQL connection is opened before querying and closed afterwards. The SqlConnection is kept as a private field of a single class. Every database call in the class uses the same structure:
conn.Open();
try
{
    // database querying here
}
finally
{
    conn.Close();
}
Yet, on very rare occasions I get the exception 'The connection was not closed. The connection's current state is open'. It's not possible to reproduce the problem since it originates very rarely from different parts of the code. There is some threading involved in my application but new threads also make new data layer classes and thus new connection objects.
I do not understand how it's possible to have a connection lingering around open using the code above. Shouldn't the connection always be closed after opening, making it impossible for the above exception to occur?
It's likely that an exception is being thrown in the try block that you aren't handling. See this note in MSDN for try-finally:
Within a handled exception, the associated finally block is guaranteed to be run. However, if the exception is unhandled, execution of the finally block is dependent on how the exception unwind operation is triggered.
I would recommend wrapping the connection in a using block anyway:
using (SqlConnection connection = new SqlConnection(connectionString))
{
    //etc...
}
Alternatively, add a catch block to the try-finally:
conn.Open();
try
{
    // database querying here
}
catch
{
    // log or otherwise handle the exception here
}
finally
{
    conn.Close();
}
You should close connections as soon as your operations finish. Try to keep connections open for the shortest possible time.
However, it is best to use a using block; it will call the Dispose method even in case of exceptions.
using (SqlConnection conn = new SqlConnection(conStr))
{
    //etc...
}
OR
1) Open the connection
2) Access the database
3) Close the connection
try
{
    conn.Open();
    // Your code
}
finally
{
    conn.Close();
    conn.Dispose(); // Do not call this if you want to reuse the connection
}

Does the SQL connection not get closed if you put the datareader in a using block?

So, I recently inherited a large project that uses the following data access pattern; unfortunately, this is resulting in a massive number of timeout exceptions related to connection pooling.
Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
It is clear that the connections are leaking and not being closed properly.
So, the framework has a DataAccess class with the method GetDataReader.
When the data reader is referenced, it is placed inside a using block, but connections are still leaking.
Is the fact that the connection is not explicitly closed or placed in a using block the reason why the connections are getting leaked?
Normally, I would wrap the connection in a using block AND wrap the data reader in a using block.
Obviously, this framework is very flawed, but would using the option CommandBehavior.CloseConnection for the data reader somehow resolve this issue?
None of the external code accesses the SqlConnection directly; everything has to go through this DataAccess class.
public IDataReader GetDataReader(QueryDto dto)
{
    DateTime current = DateTime.Now;
    Database db = DatabaseFactory.CreateDatabase(dto.DatabaseName);
    DbCommand cmd = db.GetStoredProcCommand(dto.StoredProcedureName);
    if (dto.Params.Length > 0)
    {
        cmd = db.GetStoredProcCommand(dto.StoredProcedureName, dto.Params);
    }
    dto.Command = cmd;
    cmd.CommandTimeout = dto.Timeout;
    cmd.Connection = db.CreateConnection();
    try
    {
        cmd.Connection.Open();
    }
    catch (SqlException ex)
    {
        // Handle Exception here...
        throw;
    }
    // the connection opened above is never closed or disposed by this method
    IDataReader rdr = cmd.ExecuteReader();
    return rdr;
}
Usage in some static repository class:
var query = new QueryDto
{
    DatabaseName = "SomeDatabase",
    Params = parms,
    StoredProcedureName = "StoredProcedureName"
};

using (IDataReader dr = dataAccess.GetDataReader(query))
{
    while (dr.Read())
    {
        // do stuff here
    }
}
I think your problem is that the using statement is around a function that has open resources embedded in it. The using will not dispose of the connection that is opened inside GetDataReader. I think you are correct that the connection itself needs to be in a using block. The using statement only calls Dispose on the object that is passed in, not on any nested resources.
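As a possible minimal change inside the existing framework (a sketch only, keeping the rest of GetDataReader as shown above), CommandBehavior.CloseConnection ties the connection's lifetime to the reader, so the caller's using block on the reader also closes the connection:

// Last lines of GetDataReader, changed so that disposing the reader
// (in the caller's using block) also closes the underlying connection.
IDataReader rdr = cmd.ExecuteReader(CommandBehavior.CloseConnection);
return rdr;

The cleaner fix is still to restructure the data access so the connection itself sits in a using block, but that requires changing the GetDataReader contract.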

ORMLite OpenDbConnection gives AccessViolationException

I am using ServiceStack and OrmLite.Oracle. I connect to an old Oracle 7.3 instance using ODBC Driver for Oracle on a Windows Server 2012 x64. ODBC is setup as an ODBC32.
I connect and query the database from my repo like this:
using (IDbConnection db = _context.DbFactory.OpenDbConnection())
{
    return db.Select<T>();
}
The _context holds the OrmLiteConnectionFactory, which was created like this:
DbFactory= new OrmLiteConnectionFactory(conInfo.ConnectionString,false, ServiceStack.OrmLite.Oracle.OracleDialect.Provider);
My service is running just fine and I can access and query the database, no problem.
But after a certain period of time (30 minutes or so), the connection is lost and I have to restart my service (hosted in a Windows Service) because the call to open the connection gives me this error: unable to allocate an environment handle.
It might be normal for the handle to the connection to be released after a while, but why doesn't it simply reconnect? From the OrmLite code, I can see that OpenDbConnection should return a new instance of its connection when AutoDisposeConnection is set to true or when the internal ormLiteConnection is null. I guess my connection is not null, but not quite alive either...
private OrmLiteConnection ormLiteConnection;
private OrmLiteConnection OrmLiteConnection
{
    get
    {
        if (ormLiteConnection == null)
        {
            ormLiteConnection = new OrmLiteConnection(this);
        }
        return ormLiteConnection;
    }
}

public IDbConnection OpenDbConnection()
{
    var connection = CreateDbConnection();
    connection.Open();
    return connection;
}

public IDbConnection CreateDbConnection()
{
    if (this.ConnectionString == null)
        throw new ArgumentNullException("ConnectionString", "ConnectionString must be set");

    var connection = AutoDisposeConnection
        ? new OrmLiteConnection(this)
        : OrmLiteConnection;

    return connection;
}
I have tried to set the AutoDisposeConnection to True but when I do, I always get an AccessViolationException saying "Attempted to read or write protected memory. This is often an indication that other memory is corrupt.". What does that mean? Is this an OS, ODBC or OrmLite error? Any idea why this is happening?
I have to say that because I am using Oracle 7.3, I had to recompile the ServiceStack.OrmLite.Oracle.dll so it uses the System.Data.Odbc rather than System.Data.OracleClient (only compatible with v8+).
I really want to avoid testing whether the connection is alive at every call, so any help to make this work is greatly appreciated. Thanks.

Creating a cross-computer mutex using SQL Server

I have a few computers using the same database (SQL Server 2008)
I'm trying to synchronize a task between all these computers using the database.
Each task is represented by a guid that is the lock-id (if comparing to Mutex, that would be the Mutex name)
I have a few thoughts, but I think they are kind of hacks, and was hoping someone here would have a better solution:
1) Create a new table "Locks" where each row consists of a guid; lock the table row exclusively in a transaction and complete/revert the transaction when finished.
2) Use sp_getapplock in a transaction, where the lock name is the lock-id guid.
I think that keeping a transaction running is not so good... Maybe there's a solution that does not require me to hold an open transaction or session?
I have put together a little class; I will test it and give feedback:
public class GlobalMutex
{
    private SqlCommand _sqlCommand;
    private SqlConnection _sqlConnection;

    string sqlCommandText = @"
        declare @result int
        exec @result = sp_getapplock @Resource = @ResourceName, @LockMode = 'Exclusive', @LockOwner = 'Transaction', @LockTimeout = @LockTimeout
        ";

    public GlobalMutex(SqlConnection sqlConnection, string uniqueName, TimeSpan lockTimeout)
    {
        _sqlConnection = sqlConnection;
        _sqlCommand = new SqlCommand(sqlCommandText, sqlConnection);
        _sqlCommand.Parameters.AddWithValue("@ResourceName", uniqueName);
        _sqlCommand.Parameters.AddWithValue("@LockTimeout", lockTimeout.TotalMilliseconds);
    }

    private readonly object _lockObject = new object();
    private Locker _lock = null;

    public Locker Lock
    {
        get
        {
            lock (_lockObject)
            {
                if (_lock != null)
                {
                    throw new InvalidOperationException("Unable to call Lock twice"); // don't know why
                }
                _lock = new Locker(_sqlConnection, _sqlCommand);
            }
            return _lock;
        }
    }

    public class Locker : IDisposable
    {
        private SqlTransaction _sqlTransaction;
        private SqlCommand _sqlCommand;

        internal Locker(SqlConnection sqlConnection, SqlCommand sqlCommand)
        {
            _sqlCommand = sqlCommand;
            _sqlTransaction = sqlConnection.BeginTransaction();
            _sqlCommand.Transaction = _sqlTransaction;
            int result = sqlCommand.ExecuteNonQuery();
        }

        public void Dispose()
        {
            Dispose(true);
            GC.SuppressFinalize(this);
        }

        protected virtual void Dispose(bool disposing)
        {
            if (disposing)
            {
                _sqlTransaction.Commit(); // maybe _sqlTransaction.Rollback() might be slower
            }
        }
    }
}
Usage is:
GlobalMutex globalMutex = new GlobalMutex(
    new SqlConnection(""),
    "myGlobalUniqueLockName",
    new TimeSpan(0, 1, 0)
);

using (globalMutex.Lock)
{
    // do work.
}
I would recommend something rather different: use a queue. Rather than explicitly lock the task, add the task to a processing queue and let the queue handler dequeue the task and perform the work. The additional decoupling will also help scalability and throughput.
If the only shared resource you have is the database, then using a transaction lock as part of the solution may be your best option. And if I'm understanding the article linked by @Remus Rusanu in the other answer, it also requires the dequeuing to be in a transaction.
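A rough sketch of the queue idea (assumptions: a hypothetical TaskQueue table and the ROWLOCK/READPAST dequeue pattern from the linked article; this is not code from either answer):

using System;
using System.Data.SqlClient;

static void DequeueAndProcessOne(string connectionString)
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (var transaction = connection.BeginTransaction())
        {
            // READPAST lets several handlers dequeue concurrently without blocking
            // on each other's locked rows; rolling back restores the deleted row.
            var dequeue = new SqlCommand(
                @"delete top (1) from TaskQueue with (rowlock, readpast)
                  output deleted.TaskId, deleted.Payload",
                connection, transaction);

            using (var reader = dequeue.ExecuteReader())
            {
                if (reader.Read())
                {
                    Guid taskId = reader.GetGuid(0);
                    string payload = reader.GetString(1);
                    // process the task here; an exception before Commit rolls the delete back
                }
            }
            transaction.Commit();
        }
    }
}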
It depends somewhat on how long you plan to hold open these locks. If you are...
Forcing serialization for a brief operation on the lock ID in question
Already opening a transaction anyway to complete that operation
...then your option 2 is probably easiest and most reliable. I've had a solution like it in a production system for several years with no issues. It becomes even easier if you bundle the creation of the mutex with the creation of the transaction and wrap it all in a "using" block.
using (var transaction = MyTransactionUtil.CreateTransaction(mutexName))
{
    // do stuff
    transaction.Commit();
}
In your CreateTransaction utility method you call sp_getapplock right after creating the transaction. Then the whole thing (including the mutex) commits or rolls back together.
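A minimal sketch of what such a utility might look like (MyTransactionUtil appears only in the answer's example; the connection handling and parameter values here are illustrative assumptions):

using System;
using System.Data;
using System.Data.SqlClient;

public static class MyTransactionUtil
{
    // Sketch: open a connection, begin a transaction, and take a
    // transaction-owned applock on the mutex name before returning.
    public static SqlTransaction CreateTransaction(string mutexName)
    {
        var connection = new SqlConnection("..."); // connection string omitted
        connection.Open();
        var transaction = connection.BeginTransaction();

        using (var cmd = new SqlCommand("sp_getapplock", connection, transaction))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@Resource", mutexName);
            cmd.Parameters.AddWithValue("@LockMode", "Exclusive");
            cmd.Parameters.AddWithValue("@LockOwner", "Transaction");
            cmd.Parameters.AddWithValue("@LockTimeout", 60000);
            var returnValue = cmd.Parameters.Add("@ReturnValue", SqlDbType.Int);
            returnValue.Direction = ParameterDirection.ReturnValue;

            cmd.ExecuteNonQuery();
            if ((int)returnValue.Value < 0) // sp_getapplock returns a negative value on failure
            {
                transaction.Rollback();
                throw new TimeoutException("Could not acquire applock " + mutexName);
            }
        }
        return transaction; // the applock is released when this transaction commits or rolls back
    }
}

In a real implementation the connection's lifetime would also need managing (for example by returning a wrapper that disposes both the transaction and the connection), which is glossed over here.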

SubSonic - Need to manually force connections closed?

When using Enterprise Library, there was an issue with having to manually close DB connections, since the GC, when scanning the heap, only looks for items that are out of scope.
A connection that is part of a pool and is still being used, but whose state is broken or fetching even though you have already received your results, will be kept open, and the connection handles in the pool will run out.
Thus, adding manual connection checking and forcibly closing the connections is good form.
Now, take SubSonic. With an EntLib base, I am doing the following in a finally block:
public static bool GetISOCountryCodes(out DataSet dsISOCountryCodes, out Response dbResponse)
{
    dbResponse = new Response();
    dsISOCountryCodes = new DataSet();
    StoredProcedure sp = null;
    try
    {
        sp = SPs.GetISOCountryCodes(null);
        dsISOCountryCodes = sp.GetDataSet();
        // set the response object properties
        dbResponse = new Response((int)sp.OutputValues[0]);
        return dbResponse.IsValid;
    }
    catch (System.Exception ex)
    {
        return dbResponse.IsValid;
    }
    finally
    {
        if (sp.Command != null && sp.Command.ToDbCommand().Connection != null &&
            sp.Command.ToDbCommand().Connection.State == ConnectionState.Open)
        {
            sp.Command.ToDbCommand().Connection.Close();
        }
    }
}
I know it's been said that you don't have to manually do this, as SubSonic will do this for you, however, I'd like to know if anyone has run into issues with SubSonic not closing connections (once again, as it uses EntLib at the root), and if there are better ways of accomplishing this.
Obviously, in all my data caller methods, I will reference a single method, say ConnectionCloser().
Thanks.
This post was more of a notification for discussion. However, I'm not sure if the issue has actually been resolved with v5. So essentially the answer is to continue checking in the finally block.
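For reference, a sketch of what such a ConnectionCloser helper might look like (the name comes from the question; the SubSonic calls are taken from the finally block above, so treat the rest as an assumption rather than verified SubSonic usage):

// Hypothetical helper, called from each data method's finally block.
public static void ConnectionCloser(StoredProcedure sp)
{
    if (sp == null || sp.Command == null)
        return;

    var dbCommand = sp.Command.ToDbCommand();
    if (dbCommand.Connection != null && dbCommand.Connection.State == ConnectionState.Open)
    {
        dbCommand.Connection.Close();
    }
}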
