I use the H2 database in embedded mode with SORM.
If the database is busy, SORM just continues to wait. There is no exception; nothing happens. This is misleading. :(
So how can I set the DB connection timeout?
How come it's misleading and how come exceptions are better? If you need the process to be non-blocking just use Future like so:
future{ Db.query[Artist].fetch() }.foreach{ artists => ... }
Consider this to be a non-blocking version of the following:
val artists = Db.query[Artist].fetch()
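For completeness, a minimal runnable sketch of the Future version (Scala 2.10-era API; the println callback is just an illustrative stand-in for whatever you do with the results):

import scala.concurrent.future
import scala.concurrent.ExecutionContext.Implicits.global

// Run the blocking query on the global thread pool; the callback fires
// when the result is ready, so the calling thread never waits.
future { Db.query[Artist].fetch() }.foreach { artists =>
  println(s"Fetched ${artists.size} artists")
}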
We are using Dapper for some data access activity and are using the standard recommended approach for connecting to database as follows:
public static Func<DbConnection> ConnectionFactory = () => new SqlConnection(ConnectionString);
However, to execute a statement, the docs show that you first need to write:
using (var conn = ConnectionFactory())
{
    conn.Open();
    var result = await conn.ExecuteAsync(sql, p, commandType: CommandType.StoredProcedure);
    return result;
}
That means you have to explicitly open the connection. However, if we leave out the conn.Open() statement, it also works, and we are worried that in such cases the connection may not be disposed of properly.
I would appreciate any comments as to how the SQL gets executed without explicitly opening any connection.
Dapper provides two ways to handle connections.
First - let Dapper handle it.
Here, you do not need to open the connection before passing it to Dapper. If the input connection is not in an Open state, Dapper will open it, perform the actions, and close the connection again.
Note that this only closes the connection; Open/Close is different from Dispose. So, if you really want to Dispose the connection, better to switch to the second way.
Second - handle everything yourself.
Here, you should explicitly create, open, close, and dispose of the connection yourself.
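As a minimal sketch of the first approach, mirroring the code from the question but without the explicit Open (the method name here is just illustrative):

using System.Data;
using System.Threading.Tasks;
using Dapper;

public static async Task<int> RunProcAsync(string sql, object p)
{
    using (var conn = ConnectionFactory())
    {
        // No conn.Open() here: Dapper sees a closed connection, opens it,
        // executes the command, and closes it again internally.
        return await conn.ExecuteAsync(sql, p, commandType: CommandType.StoredProcedure);
    } // Dispose (via the using block) still releases the underlying resources.
}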
Please refer to the following links for more details:
https://stackoverflow.com/a/51138718/5779732
https://stackoverflow.com/a/41054369/5779732
https://stackoverflow.com/a/40827671/5779732
I have created an application that calls SQLDriverConnect to connect to a MS SQL Server database called 'MyDB'. After doing some things, it calls SQLDisconnect. But then SSMS fails to delete 'MyDB', which means some resources are not closed properly. Only after the process exits does SSMS delete it (i.e. the OS releases the resources), and all SQLHENV and SQLHDBC handles are released properly.
Code below:
SMARTHSTMT::~SMARTHSTMT()
{
    if (!m_hstmt) return;
    SQLFreeStmt(m_hstmt, SQL_CLOSE);        // close any open cursor
    SQLFreeStmt(m_hstmt, SQL_UNBIND);       // release bound columns
    SQLFreeStmt(m_hstmt, SQL_RESET_PARAMS); // release bound parameters
    SQLFreeHandle(SQL_HANDLE_STMT, m_hstmt);
    m_hstmt = nullptr;
}
How can I find which object is not released? Are there any other considerations I should take into account? Any ideas or help appreciated.
Edit: code for disconnecting:
void AConnection::uDisconnect()
{
    if (m_hdbc)
    {
        SQLDisconnect(m_hdbc);
        SQLFreeHandle(SQL_HANDLE_DBC, m_hdbc);
        m_hdbc = nullptr;
    }
    if (m_henv)
    {
        SQLFreeHandle(SQL_HANDLE_ENV, m_henv);
        m_henv = nullptr;
    }
}
You can check whether SQLDisconnect() returns SQL_ERROR. If it does, a statement might still be open, or a transaction (as you detected) is still open.
Transaction Handling in ODBC is (simplified) like this:
By default, auto-commit is enabled: every statement starts a new transaction, and if the statement succeeds, the transaction is committed. If you have not changed the commit mode, it is puzzling to me that a transaction is still open.
If you have disabled auto-commit, you must manually call SQLEndTran(...) to commit or roll back any ongoing transaction. As far as I know, there is no way in ODBC to ask the driver whether a transaction is still open.
Since you mention calls to SQLEndTran(), I guess you have already disabled auto-commit. Looking at my sources, I see that I always do a rollback before closing a connection handle - maybe because of the same problem; I don't remember exactly (it's old code).
Anyway, if you have enabled manual-commit mode, I would simply recommend doing a rollback before closing the connection handle. There may also be tools on the SQL Server side to analyze in more detail what exactly is still open at that point.
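In code, that rollback-before-disconnect could look roughly like this - a sketch based on your uDisconnect, not tested against your driver:

void AConnection::uDisconnect()
{
    if (m_hdbc)
    {
        // Roll back any transaction still open in manual-commit mode,
        // so nothing is left pending when we disconnect.
        SQLEndTran(SQL_HANDLE_DBC, m_hdbc, SQL_ROLLBACK);
        SQLDisconnect(m_hdbc);
        SQLFreeHandle(SQL_HANDLE_DBC, m_hdbc);
        m_hdbc = nullptr;
    }
    if (m_henv)
    {
        SQLFreeHandle(SQL_HANDLE_ENV, m_henv);
        m_henv = nullptr;
    }
}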
See here for more details: https://msdn.microsoft.com/en-us/library/ms131281.aspx
I was importing DBpedia TTL ontologies into Neo4j following the blog post http://michaelbloggs.blogspot.de/2013/05/importing-ttl-turtle-ontologies-in-neo4j.html. The post uses BatchInserters to speed up the task. It mentions:
Batch insertion is not transactional. If something goes wrong and you don't shutDown() your database properly, the database becomes inconsistent.
I had to interrupt one of the batch insertion tasks as it was taking much longer than expected, which left my database in an inconsistent state. I get the following message:
db_name store is not cleanly shut down
How can I recover my database from this state? Also, for future runs, is there a way to commit after importing each file so that reverting to the last good state would be trivial? I thought of git, but I am not sure it would help with a binary file like index.db.
There are some cases where you cannot recover from an unclean shutdown when using the batch inserter API; note that its package name, org.neo4j.unsafe.batchinsert, contains the word unsafe for a reason. The batch inserter is designed to operate as fast as possible.
If you want to guarantee a clean shutdown, you should use a try/finally:
BatchInserter batch = BatchInserters.inserter(<dir>);
try {
    // create nodes and relationships here
} finally {
    batch.shutdown();
}
Another alternative for special cases is registering a JVM shutdown hook. See the following snippet as an example:
final BatchInserter batch = BatchInserters.inserter(<dir>);
// register the hook before doing any work, so shutdown() runs on JVM exit
Runtime.getRuntime().addShutdownHook(new Thread() {
    @Override
    public void run() {
        batch.shutdown();
    }
});
// do some operations potentially throwing exceptions
We're migrating SQL to Azure. Our DAL is Entity Framework 4.x based. We're wanting to use the Transient Fault Handling Block to add retry logic for SQL Azure.
Overall, we're looking for the best 80/20 rule (or maybe more of a 95/5, but you get the point) - we're not looking to spend weeks refactoring/rewriting code (there's a LOT of it). I'm fine re-implementing our DAL's framework, but not reworking all of the code written and generated against it any more than we have to, since this exists only to address a minority case. Mitigation >>> elimination of this edge case for us.
Looking at the possible options explained here at MSDN, it seems Case #3 there is the "quickest" to implement, but only at first glance. Upon pondering this solution a bit, it struck me that we might have problems with connection management, since it circumvents Entity Framework's built-in processes for managing connections (i.e. always closing them). It seems to me that the "solution" is to make sure 100% of the Contexts we instantiate use using blocks, but with our architecture this would be difficult.
So my question: going with Case #3 from that link, are hanging connections a problem, or is there some magic going on somewhere that I don't know about?
I've done some experimenting and it turns out that this brings us back to the old "managing connections" situation we're used to from the past, only this time the connections are abstracted away from us a bit and we must now "manage Contexts" similarly.
Let's say we have the following OnContextCreated implementation:
private void OnContextCreated()
{
    const int maxRetries = 4;
    const int initialDelayInMilliseconds = 100;
    const int maxDelayInMilliseconds = 5000;
    const int deltaBackoffInMilliseconds = initialDelayInMilliseconds;

    var policy = new RetryPolicy<SqlAzureTransientErrorDetectionStrategy>(maxRetries,
        TimeSpan.FromMilliseconds(initialDelayInMilliseconds),
        TimeSpan.FromMilliseconds(maxDelayInMilliseconds),
        TimeSpan.FromMilliseconds(deltaBackoffInMilliseconds));

    policy.ExecuteAction(() =>
    {
        try
        {
            Connection.Open();
            var storeConnection = (SqlConnection)((EntityConnection)Connection).StoreConnection;
            new SqlCommand("declare @i int", storeConnection).ExecuteNonQuery();
            //Connection.Close();
            // throw new ApplicationException("Test only");
        }
        catch (Exception e)
        {
            Connection.Close();
            Trace.TraceWarning("Attempted to open connection but failed: " + e.Message);
            throw;
        }
    });
}
In this scenario, we forcibly open the Connection (which was the goal here). Because of this, the Context keeps it open across many calls, so we must tell the Context when to close the connection. Our primary mechanism for doing that is calling Dispose on the Context. So if we just allow garbage collection to clean up our contexts, we allow connections to remain hanging open.
I tested this by toggling the comment on Connection.Close() in the try block and running a bunch of unit tests against our database. Without calling Close, we jumped up to ~275-300 active connections (from SQL Server's perspective); by calling Close, that number hovered at ~12. I then repeated this with a small number of unit tests, both with and without a using block for the Context, and saw the same pattern (different numbers - I forget what they were).
I was using the following query to count my connections:
SELECT s.session_id, s.login_name, e.connection_id,
s.last_request_end_time, s.cpu_time,
e.connect_time
FROM sys.dm_exec_sessions AS s
INNER JOIN sys.dm_exec_connections AS e
ON s.session_id = e.session_id
WHERE login_name='myuser'
ORDER BY s.login_name
Conclusion: if you call Connection.Open() with this work-around to enable the Transient Fault Handling Block, then you MUST use using blocks for all contexts you work with; otherwise you will have problems (with SQL Azure, these will cause your database to be "throttled" and ultimately taken offline for hours!).
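To make that concrete, here is a minimal sketch (MyEntities and its Artists set are hypothetical stand-ins for your generated context):

using (var context = new MyEntities())
{
    // OnContextCreated (above) has already opened the connection with
    // retry logic; Dispose at the end of this block is what closes it.
    var artists = context.Artists.ToList();
} // without the using block, the connection lingers until the GC finalizes the context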
The problem with this approach is that it only takes care of connection retries, not command retries.
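One mitigation, as a sketch only, is to also wrap individual operations in the retry policy (reusing the policy object from the snippet above; context.Artists is again a hypothetical entity set):

// Wrap the query execution itself so transient faults during
// command execution are retried, not just connection opens.
var artists = policy.ExecuteAction(() => context.Artists.ToList());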
If you use Entity Framework 6 (currently in alpha) then there is some new in-built support for transient retries with Azure SQL Database (with a little bit of configuration): http://entityframework.codeplex.com/wikipage?title=Connection%20Resiliency%20Spec
I've created a library which allows you to configure Entity Framework to retry using the Fault Handling block without needing to change every database call - generally you will only need to change your config file and possibly one or two lines of code.
This allows you to use it for Entity Framework or Linq To Sql.
https://github.com/robdmoore/ReliableDbProvider
I am looking for a way to let my C# (4.0) app send data to a SQL Server 2008 instance, asynchronously. I kind of like what I saw at http://nayyeri.net/asynchronous-command-execution-in-net-2-0 but that is not quite what I am looking for.
I have code like this:
// myDataTable is a .NET DataTable object
SqlCommand sc = new SqlCommand("dbo.ExtBegin", conn);
sc.CommandType = CommandType.StoredProcedure; // required to invoke the proc by name
SqlParameter param1 = sc.Parameters.AddWithValue("@param1", "a");
SqlParameter param2 = sc.Parameters.AddWithValue("@param2", "b");
SqlParameter param3 = sc.Parameters.AddWithValue("@param3", myDataTable);
param3.SqlDbType = SqlDbType.Structured;
param3.TypeName = "dbo.MyTableType";
int execState = sc.ExecuteNonQuery();
And because myDataTable is potentially large, I don't want the console app to hang while it sends the data to the server. If there are 6 big loads, I want them all going at the same time without blocking the console app; I don't want to send them serially.
All ideas appreciated, thanks!
Set the Asynchronous Processing property in the connection string to true.
Then use BeginExecuteNonQuery.
But what is dbo.ExtBegin doing? Everything depends on that: the calls may well serialize on locks in the database (at best) or, at worst, you may get incorrect results if the procedure is not properly designed for concurrency.
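A minimal sketch of that approach, reusing the command from the question (the connectionString variable and callback wiring are illustrative):

using System.Data;
using System.Data.SqlClient;

// "Asynchronous Processing=true" is required for the Begin/End methods on .NET 4.0.
var conn = new SqlConnection(connectionString + ";Asynchronous Processing=true");
conn.Open();
var sc = new SqlCommand("dbo.ExtBegin", conn) { CommandType = CommandType.StoredProcedure };
// ... add @param1..@param3 as in the question ...
sc.BeginExecuteNonQuery(ar =>
{
    try { sc.EndExecuteNonQuery(ar); }  // completes here, or rethrows any error
    finally { conn.Dispose(); }         // release the connection when the server is done
}, null);
// control returns immediately; kick off the other five loads the same way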
Create a thread and execute the query within that thread, making sure there are no subsequent database calls that would cause race conditions.
My first thought would be to spawn a new thread for the inserts, and have the main thread check the spawned thread's execution with AutoResetEvent, TimerCallback, and Timer objects.
I do it in Silverlight all the time.
Take a look at using Service Broker Activation. This will let you call a stored proc and have it run on its own thread while you continue on the current thread.
Here is an excellent article that goes over how to do this.