How to release ODBC objects correctly? - sql-server

I have created an application that calls SQLDriverConnect to connect to a MS SQL Server database called 'MyDB'. After doing some work it calls SQLDisconnect, but SSMS then fails to delete 'MyDB', which means some resources are not being closed properly. Only after the process exits (i.e. the OS releases them) can SSMS delete the database, and all SQLHENV and SQLHDBC handles are then released properly.
Code below:
SMARTHSTMT::~SMARTHSTMT()
{
    if (!m_hstmt) return;
    SQLFreeStmt(m_hstmt, SQL_CLOSE);        // close any open cursor
    SQLFreeStmt(m_hstmt, SQL_UNBIND);       // release bound columns
    SQLFreeStmt(m_hstmt, SQL_RESET_PARAMS); // release bound parameters
    SQLFreeHandle(SQL_HANDLE_STMT, m_hstmt);
    m_hstmt = nullptr;
}
How can I find out which object is not being released? Are there any other considerations I should take into account? Any idea or help is appreciated.
Edit: code for disconnecting:
void AConnection::uDisconnect()
{
    if (m_hdbc)
    {
        SQLDisconnect(m_hdbc);
        SQLFreeHandle(SQL_HANDLE_DBC, m_hdbc);
        m_hdbc = nullptr;
    }
    if (m_henv)
    {
        SQLFreeHandle(SQL_HANDLE_ENV, m_henv);
        m_henv = nullptr;
    }
}

You can check whether SQLDisconnect() returns SQL_ERROR. If it does, a statement might still be open, or a transaction (as you detected) might still be in progress.
Transaction Handling in ODBC is (simplified) like this:
By default auto-commit is enabled: every statement starts a new transaction, and if the statement succeeds the transaction is committed. If you have not changed the commit mode, it is puzzling to me that a transaction would still be open.
If you have disabled auto-commit, you must manually call SQLEndTran(...) to commit or roll back any ongoing transaction. As far as I know, there is no way in ODBC to ask the driver whether a transaction is still open.
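For reference, this is roughly how the commit mode is switched and how a transaction is ended in ODBC 3.x (a bare illustration against an existing connection handle hdbc, not code from the question):
/* Disable auto-commit: subsequent statements run inside an explicit transaction. */
SQLSetConnectAttr(hdbc, SQL_ATTR_AUTOCOMMIT, (SQLPOINTER)SQL_AUTOCOMMIT_OFF, 0);

/* ... execute statements ... */

/* End the transaction explicitly, one way or the other. */
SQLEndTran(SQL_HANDLE_DBC, hdbc, SQL_COMMIT); /* or SQL_ROLLBACK */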
As you mention the calls to SQLEndTran(), I guess you have already disabled auto-commit. Looking at my sources, I see that I always do a rollback before closing a connection handle - maybe because of the same problem; I don't remember exactly (it's old code).
Anyway, if you have enabled manual-commit mode, I would simply recommend doing a rollback before closing the connection handle (a sketch follows below). Maybe there are also tools on the SQL Server side to analyze in more detail what exactly is open at that time.
See here for more details: https://msdn.microsoft.com/en-us/library/ms131281.aspx
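A minimal sketch of that idea, assuming an ODBC 3.x connection handle; the helper name safe_disconnect and the diagnostics dump are my additions, not code from the question:
#include <windows.h> /* required before the ODBC headers on Windows */
#include <sql.h>
#include <sqlext.h>

/* Roll back before disconnecting, and fetch the first diagnostic record
   if SQLDisconnect still refuses. hdbc is assumed to be a valid,
   connected connection handle. */
void safe_disconnect(SQLHDBC hdbc)
{
    SQLCHAR state[6];
    SQLCHAR msg[SQL_MAX_MESSAGE_LENGTH];
    SQLINTEGER native = 0;
    SQLSMALLINT len = 0;

    /* Harmless in auto-commit mode; ends anything still open in
       manual-commit mode. */
    SQLEndTran(SQL_HANDLE_DBC, hdbc, SQL_ROLLBACK);

    if (SQLDisconnect(hdbc) == SQL_ERROR)
    {
        /* The first record usually names the problem, e.g.
           SQLSTATE 25000 "invalid transaction state". */
        SQLGetDiagRec(SQL_HANDLE_DBC, hdbc, 1, state, &native,
                      msg, (SQLSMALLINT)sizeof(msg), &len);
    }
    SQLFreeHandle(SQL_HANDLE_DBC, hdbc);
}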

Related

C Multithreading - Sqlite3 database access by 2 threads crash

Here is a description of my problem:
I have 2 threads in my program: the main thread, and a second one that I create using pthread_create.
The main thread performs various functions on an sqlite3 database. Each function opens the database to perform the required actions and closes it when done.
The other thread simply reads from the database at a set interval and uploads the data to a server. It also opens and closes the database to perform its operation.
The problem occurs when both threads happen to have the database open. Whichever finishes first closes the database, causing the other to crash and making the application unusable.
The main thread requires the database for every operation.
Is there a way I can prevent this from happening? A mutex is one way, but a mutex would block my main thread. The main thread must remain functional at all times while the other thread runs in the background.
Any advice to make this work would be great.
I did not provide snippets because the problem is a bit too broad for that, but if anything about the problem is unclear please let me know.
EDIT:
static sqlite3 *db = NULL;
Code snippet for opening the database:
int open_database(char* DB_dir) // argument is the db path
{
    int rc = sqlite3_open(DB_dir, &db);
    if (rc != SQLITE_OK)
    {
        //failed to open message
        sqlite3_close(db);
        db = NULL;
        return SDK_SQL_ERR;
    }
    //success message
    return SDK_OK;
}
And to close db
int close_database()
{
    if (db != NULL)
    {
        sqlite3_close(db);
        db = NULL;
        //success message
    }
    return 1;
}
EDIT: I forgot to add that the background thread performs a single write operation, updating one field of the table for each row it uploads to the server.
Have your threads each use their own database connection. There's no reason for the background thread to affect the main thread's connection.
Generally, I would want to be using connection pooling, so that I don't open and close database connections very frequently; connection opening is an expensive operation.
In application servers we very often have many threads, and we find that a connection pool of a few tens of connections is sufficient to service requests on behalf of many hundreds of users.
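Along the lines of the first suggestion, here is a minimal sketch of each thread owning its own connection; DB_PATH, the thread body and the error handling are illustrative assumptions, not the poster's code:
#include <pthread.h>
#include <sqlite3.h>
#include <stdio.h>

#define DB_PATH "mydb.sqlite" /* placeholder path */

static void *uploader_thread(void *arg)
{
    sqlite3 *db = NULL; /* private to this thread */
    (void)arg;
    if (sqlite3_open(DB_PATH, &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return NULL;
    }
    /* ... read rows, upload them, update the uploaded flag ... */
    sqlite3_close(db); /* touches only this thread's handle */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    sqlite3 *db = NULL; /* the main thread's own handle */

    pthread_create(&tid, NULL, uploader_thread, NULL);
    if (sqlite3_open(DB_PATH, &db) == SQLITE_OK) {
        /* ... main-thread database work ... */
        sqlite3_close(db); /* cannot invalidate the uploader's handle */
    }
    pthread_join(tid, NULL);
    return 0;
}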
There are also locking mechanisms built into sqlite3: you can start a write with BEGIN EXCLUSIVE, and you can register a sleep callback so that the other thread gets a chance to do other things while it waits; see sqlite3_busy_handler(), sketched below.
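A rough sketch of that option; the handler name, retry limit and sleep interval below are arbitrary choices of mine, not sqlite3 defaults:
#include <sqlite3.h>
#include <unistd.h>

/* Called whenever a lock is contended: return non-zero to make SQLite
   retry, zero to give up and return SQLITE_BUSY to the caller. */
static int my_busy_handler(void *data, int n_prior_calls)
{
    (void)data;
    if (n_prior_calls >= 50)
        return 0;          /* give up after ~50 attempts */
    usleep(10 * 1000);     /* sleep 10 ms before the next retry */
    return 1;
}

void configure_retries(sqlite3 *db)
{
    /* Simplest option: block up to 2 seconds before failing with SQLITE_BUSY. */
    sqlite3_busy_timeout(db, 2000);

    /* Finer control (this replaces the timeout set above): */
    sqlite3_busy_handler(db, my_busy_handler, NULL);
}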

Neo4j store is not cleanly shut down; Recovering from inconsistent db state from interrupted batch insertion

I was importing DBpedia TTL ontologies into Neo4j following the blog post http://michaelbloggs.blogspot.de/2013/05/importing-ttl-turtle-ontologies-in-neo4j.html. The post uses BatchInserters to speed up the task. It mentions:
Batch insertion is not transactional. If something goes wrong and you don't shutDown() your database properly, the database becomes inconsistent.
I had to interrupt one of the batch insertion tasks as it was taking much longer than expected, which left my database in an inconsistent state. I get the following message:
db_name store is not cleanly shut down
How can I recover my database from this state? Also, for future runs, is there a way of committing after importing each file so that reverting to the last good state would be trivial? I thought of git, but I am not sure it would help with a binary file like index.db.
There are some cases where you cannot recover from an unclean shutdown when using the batch inserter API; note that its package name org.neo4j.unsafe.batchinsert contains the word "unsafe" for a reason. The batch inserter is designed to operate as fast as possible.
If you want to guarantee a clean shutdown you should use try/finally:
BatchInserter batch = BatchInserters.inserter(<dir>);
try {
    // do your insert operations here
} finally {
    batch.shutdown();
}
Another alternative for special cases is registering a JVM shutdown hook - note that it must be registered before the work starts, and the reference must be final so the anonymous class can capture it. See the following snippet as an example:
final BatchInserter batch = BatchInserters.inserter(<dir>);
Runtime.getRuntime().addShutdownHook(new Thread() {
    @Override
    public void run() {
        batch.shutdown();
    }
});
// do some operations potentially throwing exceptions

Connections with Entity Framework and Transient Fault Handling Block?

We're migrating SQL to Azure. Our DAL is Entity Framework 4.x based. We're wanting to use the Transient Fault Handling Block to add retry logic for SQL Azure.
Overall, we're looking for the best 80/20 rule (or maybe more of a 95/5, but you get the point) - we're not looking to spend weeks refactoring/rewriting code (there's a LOT of it). I'm fine re-implementing our DAL's framework, but not touching all the code written and generated against it any more than we have to, since the retry logic is only here to address a minority case. Mitigation >>> elimination of this edge case, for us.
Looking at the possible options explained here at MSDN, it seems Case #3 there is the "quickest" to implement, but only at first glance. Upon pondering this solution a bit, it struck me that we might have problems with connection management, since it circumvents Entity Framework's built-in processes for managing connections (i.e. always closing them). It seems the "solution" is to make sure 100% of the Contexts we instantiate use using blocks, but with our architecture this would be difficult.
So my question: Going with Case #3 from that link, are hanging connections a problem or is there some magic somewhere that's going on that I don't know about?
I've done some experimenting, and it turns out this brings us back to the old "managing connections" situation we're used to from the past, only this time the connections are abstracted away from us a bit and we must now "manage Contexts" in a similar way.
Let's say we have the following OnContextCreated implementation:
private void OnContextCreated()
{
    const int maxRetries = 4;
    const int initialDelayInMilliseconds = 100;
    const int maxDelayInMilliseconds = 5000;
    const int deltaBackoffInMilliseconds = initialDelayInMilliseconds;

    var policy = new RetryPolicy<SqlAzureTransientErrorDetectionStrategy>(maxRetries,
        TimeSpan.FromMilliseconds(initialDelayInMilliseconds),
        TimeSpan.FromMilliseconds(maxDelayInMilliseconds),
        TimeSpan.FromMilliseconds(deltaBackoffInMilliseconds));

    policy.ExecuteAction(() =>
    {
        try
        {
            Connection.Open();
            var storeConnection = (SqlConnection)((EntityConnection)Connection).StoreConnection;
            new SqlCommand("declare @i int", storeConnection).ExecuteNonQuery();
            //Connection.Close();
            // throw new ApplicationException("Test only");
        }
        catch (Exception e)
        {
            Connection.Close();
            Trace.TraceWarning("Attempted to open connection but failed: " + e.Message);
            throw;
        }
    });
}
In this scenario, we forcibly open the Connection (which was the goal here). Because of this, the Context keeps it open across many calls, so we must tell the Context when to close the connection. Our primary mechanism for doing that is calling Dispose on the Context. So if we just let garbage collection clean up our Contexts, we allow connections to remain hanging open.
I tested this by toggling the comments on the Connection.Close() in the try block and running a bunch of unit tests against our database. Without calling Close, we jumped up to ~275-300 active connections (from SQL Server's perspective). By calling Close, that number hovered at ~12. I then reproduced with a small number of unit tests both with and without a using block for the Context and reproduced the same result (different numbers - I forget what they were).
I was using the following query to count my connections:
SELECT s.session_id, s.login_name, e.connection_id,
s.last_request_end_time, s.cpu_time,
e.connect_time
FROM sys.dm_exec_sessions AS s
INNER JOIN sys.dm_exec_connections AS e
ON s.session_id = e.session_id
WHERE login_name='myuser'
ORDER BY s.login_name
Conclusion: if you call Connection.Open() with this work-around to enable the Transient Fault Handling Block, then you MUST use using blocks for all Contexts you work with; otherwise you will have problems (which, with SQL Azure, will cause your database to be "throttled" and ultimately taken offline for hours!).
The problem with this approach is that it only takes care of connection retries, not command retries.
If you use Entity Framework 6 (currently in alpha) then there is some new in-built support for transient retries with Azure SQL Database (with a little bit of configuration): http://entityframework.codeplex.com/wikipage?title=Connection%20Resiliency%20Spec
I've created a library which allows you to configure Entity Framework to retry using the Fault Handling block without needing to change every database call - generally you will only need to change your config file and possibly one or two lines of code.
It can be used with either Entity Framework or LINQ to SQL.
https://github.com/robdmoore/ReliableDbProvider

How to set a timeout to the database connection in SORM?

I use H2 database in embedded mode with SORM.
If the database is busy, SORM just continues to wait. There is no exception; nothing happens. This is misleading. :(
So how can I set the db connection timeout?
How come it's misleading, and how come exceptions would be better? If you need the call to be non-blocking, just use a Future, like so:
future{ Db.query[Artist].fetch() }.foreach{ artists => ... }
Consider this to be a non-blocking version of the following:
val artists = Db.query[Artist].fetch()

sqlite3 close statement is not working

I'm using a C program with sqlite3.
Sometimes the insert works, but sometimes it doesn't.
assert(retval == SQLITE_OK)
fails. While debugging I found that sqlite3_step() returns error code 5 (SQLITE_BUSY),
which means the database file is busy.
Even closing with sqlite3_close() returns error code 5.
Any thoughts on how to close the database connection, even when it's busy?
You could be in the middle of a transaction. See http://www.sqlite.org/c3ref/get_autocommit.html for an explanation of how sqlite3_get_autocommit() works: it returns zero if the connection is not in auto-commit mode, meaning there is an open transaction.
Or, if you suspect your database might still be working on something, you can use sqlite3_busy_timeout() to set a timer: http://www.sqlite.org/c3ref/busy_timeout.html
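Putting both hints together, a hedged sketch of a close routine: it checks sqlite3_get_autocommit(), finalizes any leftover prepared statements, and only then closes. careful_close is a hypothetical helper, and rolling back (rather than committing) the open transaction is just one possible choice:
#include <sqlite3.h>
#include <stdio.h>

int careful_close(sqlite3 *db)
{
    sqlite3_stmt *stmt;
    int rc;

    if (db == NULL)
        return SQLITE_OK;

    /* Zero means a transaction is open; non-zero means auto-commit. */
    if (sqlite3_get_autocommit(db) == 0)
        sqlite3_exec(db, "ROLLBACK", NULL, NULL, NULL);

    /* Finalize every statement still attached to this connection;
       unfinalized statements make sqlite3_close() return SQLITE_BUSY. */
    while ((stmt = sqlite3_next_stmt(db, NULL)) != NULL)
        sqlite3_finalize(stmt);

    rc = sqlite3_close(db);
    if (rc != SQLITE_OK)
        fprintf(stderr, "close failed: %s\n", sqlite3_errmsg(db));
    return rc;
}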
