Is there a way to dump the generated sql to the Debug log or something? I'm using it in a winforms solution so the mini-profiler idea won't work for me.
I had the same issue and, after searching and finding no ready-to-use solution, implemented some code myself. There is a package on NuGet, MiniProfiler.Integrations, that I would like to share.
Update V2: it now works with other database servers; for MySQL it requires MiniProfiler.Integrations.MySql.
Below are the steps to get it working with SQL Server:
1. Instantiate the connection:
var factory = new SqlServerDbConnectionFactory(_connectionString);
using (var connection = ProfiledDbConnectionFactory.New(factory, CustomDbProfiler.Current))
{
    // your code
}
2. After all the work is done, write all the captured commands to a file if you want:
File.WriteAllText("SqlScripts.txt", CustomDbProfiler.Current.ProfilerContext.BuildCommands());
Dapper does not currently have an instrumentation point here. This is perhaps due, as you note, to the fact that we (as the authors) use mini-profiler to handle this. However, if it helps, the core parts of mini-profiler are actually designed to be architecture neutral, and I know of other people using it with winforms, wpf, wcf, etc - which would give you access to the profiling / tracing connection wrapper.
In theory, it would be perfectly possible to add some blanket capture-point, but I'm concerned about two things:
(primarily) security: since dapper doesn't have a concept of a context, it would be really really easy for malign code to attach quietly to sniff all sql traffic that goes via dapper; I really don't like the sound of that (this isn't an issue with the "decorator" approach, as the caller owns the connection, hence the logging context)
(secondary) performance: but... in truth, it is hard to say that a simple delegate-check (which would presumably be null in most cases) would have much impact
Of course, the other thing you could do is steal the connection wrapper code from mini-profiler and replace the profiler-context stuff with just Debug.WriteLine etc.
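For illustration, here is a minimal sketch of that decorator idea: a DbConnection/DbCommand pair that writes each command's text to the debug output before delegating to the real objects. The class names are hypothetical and this is a bare-bones sketch, not mini-profiler's actual code (which also tracks timings, parameters, readers and async calls):

using System.Data;
using System.Data.Common;
using System.Diagnostics;

// Hypothetical names; wraps any real DbConnection (e.g. SqlConnection).
public class DebugConnection : DbConnection
{
    internal readonly DbConnection Inner;
    public DebugConnection(DbConnection inner) { Inner = inner; }

    protected override DbCommand CreateDbCommand() => new DebugCommand(Inner.CreateCommand());

    // Everything else simply delegates to the wrapped connection.
    public override string ConnectionString { get => Inner.ConnectionString; set => Inner.ConnectionString = value; }
    public override string Database => Inner.Database;
    public override string DataSource => Inner.DataSource;
    public override string ServerVersion => Inner.ServerVersion;
    public override ConnectionState State => Inner.State;
    public override void ChangeDatabase(string databaseName) => Inner.ChangeDatabase(databaseName);
    public override void Open() => Inner.Open();
    public override void Close() => Inner.Close();
    protected override DbTransaction BeginDbTransaction(IsolationLevel isolationLevel) => Inner.BeginTransaction(isolationLevel);
}

public class DebugCommand : DbCommand
{
    private readonly DbCommand _inner;
    public DebugCommand(DbCommand inner) { _inner = inner; }

    private void Log() => Debug.WriteLine("[sql] " + _inner.CommandText);

    // The interesting part: log just before executing.
    public override int ExecuteNonQuery() { Log(); return _inner.ExecuteNonQuery(); }
    public override object ExecuteScalar() { Log(); return _inner.ExecuteScalar(); }
    protected override DbDataReader ExecuteDbDataReader(CommandBehavior behavior) { Log(); return _inner.ExecuteReader(behavior); }

    // Boilerplate delegation; unwrap the connection if Dapper hands us the decorator.
    protected override DbConnection DbConnection
    {
        get => _inner.Connection;
        set => _inner.Connection = value is DebugConnection dc ? dc.Inner : value;
    }
    public override string CommandText { get => _inner.CommandText; set => _inner.CommandText = value; }
    public override int CommandTimeout { get => _inner.CommandTimeout; set => _inner.CommandTimeout = value; }
    public override CommandType CommandType { get => _inner.CommandType; set => _inner.CommandType = value; }
    public override bool DesignTimeVisible { get => _inner.DesignTimeVisible; set => _inner.DesignTimeVisible = value; }
    public override UpdateRowSource UpdatedRowSource { get => _inner.UpdatedRowSource; set => _inner.UpdatedRowSource = value; }
    protected override DbParameterCollection DbParameterCollection => _inner.Parameters;
    protected override DbTransaction DbTransaction { get => _inner.Transaction; set => _inner.Transaction = value; }
    public override void Cancel() => _inner.Cancel();
    public override void Prepare() => _inner.Prepare();
    protected override DbParameter CreateDbParameter() => _inner.CreateParameter();
}

Dapper then treats the wrapper like any other IDbConnection, e.g. new DebugConnection(new SqlConnection(connectionString)).Query<int>("select 1").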
You should also consider using SQL Server Profiler, found in SQL Server Management Studio under Tools → SQL Server Profiler (no Dapper extensions needed; this may work with other RDBMSs too, when they have a SQL profiler tool).
Then, start a new session.
You'll get something like this, for example (you see all the parameters and the complete SQL string):
exec sp_executesql N'SELECT * FROM Updates WHERE CAST(Product_ID as VARCHAR(50)) = @appId AND (Blocked IS NULL OR Blocked = 0)
AND (Beta IS NULL OR Beta = 0 OR @includeBeta = 1) AND (LangCode IS NULL OR LangCode IN (SELECT * FROM STRING_SPLIT(@langCode, '','')))',N'@appId nvarchar(4000),@includeBeta bit,@langCode nvarchar(4000)',@appId=N'fea5b0a7-1da6-4394-b8c8-05e7cb979161',@includeBeta=0,@langCode=N'de'
Try Dapper.Logging.
You can get it from NuGet. The way it works is that you pass the code that creates your actual database connection into a factory, and the factory creates wrapped connections. Whenever a wrapped connection is opened or closed, or a query is run against it, it is logged, along with the elapsed time. You can configure the logging message templates and other settings, such as whether SQL parameters are saved.
In my opinion, the only downside is that the documentation is sparse, but I think that's just because it's a new project (as of this writing). I had to dig through the repo for a bit to understand it and to get it configured to my liking, but now it's working great.
From the documentation:
The tool consists of simple decorators for the DbConnection and
DbCommand which track the execution time and write messages to the
ILogger<T>. The ILogger<T> can be handled by any logging framework
(e.g. Serilog). The result is similar to the default EF Core logging
behavior.
The lib declares a helper method for registering the
IDbConnectionFactory in the IoC container. The connection factory is
SQL Provider agnostic. That's why you have to specify the real factory
method:
services.AddDbConnectionFactory(prv => new SqlConnection(conStr));
After registration, the IDbConnectionFactory can be injected into
classes that need a SQL connection.
private readonly IDbConnectionFactory _connectionFactory;

public GetProductsHandler(IDbConnectionFactory connectionFactory)
{
    _connectionFactory = connectionFactory;
}
The IDbConnectionFactory.CreateConnection will return a decorated
version that logs the activity.
using (DbConnection db = _connectionFactory.CreateConnection())
{
    //...
}
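To make that concrete, here is a hedged sketch of what a handler method might look like once the factory is injected. The Product type and the SQL are made-up placeholders, not part of Dapper.Logging:

using System.Collections.Generic;
using System.Data.Common;
using System.Threading.Tasks;
using Dapper;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Inside GetProductsHandler from above:
public async Task<List<Product>> GetProducts(int categoryId)
{
    using (DbConnection db = _connectionFactory.CreateConnection())
    {
        // The wrapped connection logs the open/close events, the SQL,
        // the parameters (if enabled), and the elapsed time.
        var rows = await db.QueryAsync<Product>(
            "SELECT Id, Name FROM Products WHERE CategoryId = @categoryId",
            new { categoryId });
        return rows.AsList();
    }
}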
This is not exhaustive, and it's essentially a bit of a hack, but if you have your SQL and you want to initialize its parameters, it's useful for basic debugging. Set up this extension method, then call it wherever you need it.
using System;
using System.Collections.Generic;
using System.Text;
using Dapper;

public static class DapperExtensions
{
    public static string ArgsAsSql(this DynamicParameters args)
    {
        if (args is null) throw new ArgumentNullException(nameof(args));
        var sb = new StringBuilder();
        foreach (var name in args.ParameterNames)
        {
            var pValue = args.Get<dynamic>(name);
            var type = pValue.GetType();
            if (type == typeof(DateTime))
                sb.AppendFormat("DECLARE @{0} DATETIME = '{1}'\n", name, pValue.ToString("yyyy-MM-dd HH:mm:ss.fff"));
            else if (type == typeof(bool))
                sb.AppendFormat("DECLARE @{0} BIT = {1}\n", name, (bool)pValue ? 1 : 0);
            else if (type == typeof(int))
                sb.AppendFormat("DECLARE @{0} INT = {1}\n", name, pValue);
            else if (type == typeof(List<int>))
                sb.AppendFormat("-- REPLACE @{0} IN SQL: ({1})\n", name, string.Join(",", (List<int>)pValue));
            else
                // Note: values are not escaped, so this is for debugging only.
                sb.AppendFormat("DECLARE @{0} NVARCHAR(MAX) = '{1}'\n", name, pValue.ToString());
        }
        return sb.ToString();
    }
}
You can then just use this in the immediate or watch windows to grab the SQL.
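For example, with a hypothetical parameter set (you could equally evaluate args.ArgsAsSql() in the watch window):

var args = new DynamicParameters();
args.Add("appId", "fea5b0a7-1da6-4394-b8c8-05e7cb979161");
args.Add("includeBeta", false);

// Prints: DECLARE @appId NVARCHAR(MAX) = 'fea5b0a7-1da6-4394-b8c8-05e7cb979161'
//         DECLARE @includeBeta BIT = 0
Debug.WriteLine(args.ArgsAsSql());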
Just to add an update here since I see this question still gets quite a few hits: these days I use either Glimpse (seems it's dead now) or Stackify Prefix, which both have SQL command trace capabilities.
It's not exactly what I was looking for when I asked the original question, but they solve the same problem.
Related
Our team's application development involves using the Effort testing tool to mock our Entity Framework DbContext. However, it seems that Effort needs to see the actual SQL Server database that the application uses in order to mock our DbContext, which seems to go against proper unit testing principles.
The reason is that in order to unit test our application code by mocking anything related to database connectivity (for example, Entity Framework's DbContext), we should never need a database to be up and running.
How would I configure Effort to mock Entity Framework's DbContext without the actual SQL Server database up and running?
Update:
@gert-arnold We are using the Entity Framework Model First approach to implement the back-end model and database.
The following excerpt is from the test code:
connection = Effort.EntityConnectionFactory.CreateTransient("name=NorthwindModel");
jsAudtMppngPrvdr = new BlahBlahAuditMappingProvider();
fctry = new BlahBlahDataContext(jsAudtMppngPrvdr, connection, false);
qryCtxt = new BlahBlahDataContext(connection, false);
audtCtxt = new BlahBlahAuditContext(connection, false);
mockedReptryCtxt = new BlahBlahDataContext(connection, false);
_repository = fctry.CreateRepository<Account>(mockedReptryCtxt, null);
_repositoryAccountRoleMaps = fctry.CreateRepository<AccountRoleMap>(null, _repository);
The "name=NorthwindModel" pertains to our edmx file which contains information about our Database tables
and their corresponding relationships.
If I remove the "name=NorthwindModel" by making the connection like the following line of code, I get an error stating that it expects an argument:
connection = Effort.EntityConnectionFactory.CreateTransient(); // throws error
Could you please explain how the aforementioned code should be rewritten?
You only need that connection string because Effort needs to know where the EDMX file is.
The EDMX file contains all the information required for creating an in-memory store with a schema identical to the one in your database. You have to specify a connection string only because I thought it would be convenient if the user didn't have to mess with EDMX paths.
If you check the implementation of the CreateTransient method you will see that it merely uses the connection string to get the metadata part of it.
public static EntityConnection CreateTransient(string entityConnectionString, IDataLoader dataLoader)
{
    var metadata = GetEffortCompatibleMetadataWorkspace(ref entityConnectionString);
    var connection = DbConnectionFactory.CreateTransient(dataLoader);
    return CreateEntityConnection(metadata, connection);
}

private static MetadataWorkspace GetEffortCompatibleMetadataWorkspace(ref string entityConnectionString)
{
    entityConnectionString = GetFullEntityConnectionString(entityConnectionString);
    var connectionStringBuilder = new EntityConnectionStringBuilder(entityConnectionString);
    return MetadataWorkspaceStore.GetMetadataWorkspace(
        connectionStringBuilder.Metadata,
        metadata => MetadataWorkspaceHelper.Rewrite(
            metadata,
            EffortProviderConfiguration.ProviderInvariantName,
            EffortProviderManifestTokens.Version1));
}
I am having issues attempting to connect to two different databases in one Qt application. I have my information database that stores all the information collected by the application, and the new log database which lets me track all the changes that occur in the application (button presses, screen loads, etc.) for easy debugging after its release. Separately, the databases work perfectly, but when I try to use both of them, only one will work. I read that this could be because I wasn't naming the connections, and obviously only the most recently connected database could use the default connection. However, when I give the databases names they won't work at all: isOpen() returns true on both, but as soon as they attempt to execute a query I get the errors
"QSqlQuery::prepare: database not open"
"QSqlError(-1, "Driver not loaded", "Driver not loaded")"
My two database declarations are:
database_location = filepath.append("/logger.sqlite");
logDB = QSqlDatabase::addDatabase("QSQLITE", "LoggerDatabaseConnection");
logDB.setHostName("localhost");
logDB.setDatabaseName(database_location);
for the logger database connection, and:
database_location = filepath.append("/db.sqlite");
db = QSqlDatabase::addDatabase("QSQLITE", "NormalDB");
db.setHostName("localhost");
db.setDatabaseName(database_location);
Also, when I run the first query on the databases to see if their tables exist, I am using
QSqlQuery query("LoggerDatabaseConnection");
and likewise for the normal database, but I am still getting connection issues even after specifying the database connection to run the query on.
The database used for the application is declared as a static QSqlDatabase in a namespace to give it global scope, so everyone can access it (that was a previous programmer); I created the log database as a singleton with a private database connection. Like I said, both versions of the code work separately, but when they are together they fight each other. I know there is a huge debate over the proper design of Singleton vs Dependency Injection, but again, the code works separately, so I am happy with how it is designed for now. If there is any missing information or if you have any ideas, please let me know. Thank you.
QSqlQuery query("LoggerDatabaseConnection");
The first parameter of the constructor is the query, not the connection name. It will use the default connection since you specified no database object.
Try something like this:
QSqlQuery query1("YourFirstQuery", db);
QSqlQuery query2("YourSecondQuery", logDB);
Important: Also do not forget to open and close the database before / after using it by calls to QSqlDatabase::open() and QSqlDatabase::close().
The correct way to have multiple databases is not to rely on the object returned from the static addDatabase method. You should use the connectionName argument (https://doc.qt.io/qt-5/qsqldatabase.html#addDatabase-1) during initialization and when running queries.
For example:
void MyClass::initDb(QString dbPath, QString connName)
{
    // initial db usage, etc
    QSqlDatabase db = QSqlDatabase::addDatabase(YOUR_DRIVER, connName);
    db.setDatabaseName(dbPath);
    // open it, etc
}

void MyClass::updateThing(QString val, QString name, QString connName)
{
    // string formatting is used here for brevity; prefer bound values
    // (QSqlQuery::prepare / bindValue) in production code
    QString q = QString("UPDATE THINGS SET val=%1 WHERE name=%2").arg(val, name);

    // add the reference to your database via the connection name
    QSqlDatabase db = QSqlDatabase::database(connName);
    QSqlQuery query(db);
    query.exec(q);
    // handle the query normally, etc
}
We're migrating SQL to Azure. Our DAL is Entity Framework 4.x based. We're wanting to use the Transient Fault Handling Block to add retry logic for SQL Azure.
Overall, we're looking for the best 80/20 rule (or maybe more of a 95/5, but you get the point): we're not looking to spend weeks refactoring/rewriting code (there's a LOT of it). I'm fine re-implementing our DAL's framework, but not all of the code written and generated against it, any more than we have to, since this is here only to address a minority case. For us, mitigation >>> elimination of this edge case.
Looking at the possible options explained here at MSDN, Case #3 there seems the "quickest" to implement, but only at first glance. Pondering this solution a bit, it struck me that we might have problems with connection management, since this circumvents Entity Framework's built-in processes for managing connections (i.e. always closing them). It seems to me that the "solution" is to make sure 100% of the contexts we instantiate use using blocks, but with our architecture this would be difficult.
So my question: Going with Case #3 from that link, are hanging connections a problem or is there some magic somewhere that's going on that I don't know about?
I've done some experimenting and it turns out that this brings us back to the old "managing connections" situation we're used to from the past, only this time the connections are abstracted away from us a bit and we must now "manage Contexts" similarly.
Let's say we have the following OnContextCreated implementation:
private void OnContextCreated()
{
    const int maxRetries = 4;
    const int initialDelayInMilliseconds = 100;
    const int maxDelayInMilliseconds = 5000;
    const int deltaBackoffInMilliseconds = initialDelayInMilliseconds;

    var policy = new RetryPolicy<SqlAzureTransientErrorDetectionStrategy>(maxRetries,
        TimeSpan.FromMilliseconds(initialDelayInMilliseconds),
        TimeSpan.FromMilliseconds(maxDelayInMilliseconds),
        TimeSpan.FromMilliseconds(deltaBackoffInMilliseconds));

    policy.ExecuteAction(() =>
    {
        try
        {
            Connection.Open();
            var storeConnection = (SqlConnection)((EntityConnection)Connection).StoreConnection;
            new SqlCommand("declare @i int", storeConnection).ExecuteNonQuery();
            //Connection.Close();
            // throw new ApplicationException("Test only");
        }
        catch (Exception e)
        {
            Connection.Close();
            Trace.TraceWarning("Attempted to open connection but failed: " + e.Message);
            throw;
        }
    });
}
In this scenario, we forcibly open the Connection (which was the goal here). Because of this, the Context keeps it open across many calls. Because of that, we must tell the Context when to close the connection. Our primary mechanism for doing that is calling the Dispose method on the Context. So if we just allow garbage collection to clean up our contexts, then we allow connections to remain hanging open.
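In practice that means wrapping every context in a using block; MyEntities here is a hypothetical context type:

using (var context = new MyEntities())
{
    var customers = context.Customers.ToList();
} // Dispose() runs here and closes the connection that the work-around opened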
I tested this by toggling the comments on the Connection.Close() in the try block and running a bunch of unit tests against our database. Without calling Close, we jumped up to ~275-300 active connections (from SQL Server's perspective). By calling Close, that number hovered at ~12. I then repeated this with a small number of unit tests, both with and without a using block for the Context, and saw the same pattern (different numbers - I forget what they were).
I was using the following query to count my connections:
SELECT s.session_id, s.login_name, e.connection_id,
s.last_request_end_time, s.cpu_time,
e.connect_time
FROM sys.dm_exec_sessions AS s
INNER JOIN sys.dm_exec_connections AS e
ON s.session_id = e.session_id
WHERE login_name='myuser'
ORDER BY s.login_name
Conclusion: If you call Connection.Open() with this work-around to enable the Transient Fault Handling Block, then you MUST use using blocks for all contexts you work with, otherwise you will have problems (that with SQL Azure, will cause your database to be "throttled" and ultimately taken offline for hours!).
The problem with this approach is it only takes care of connection retries and not command retries.
If you use Entity Framework 6 (currently in alpha) then there is some new in-built support for transient retries with Azure SQL Database (with a little bit of configuration): http://entityframework.codeplex.com/wikipage?title=Connection%20Resiliency%20Spec
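For reference, in EF6 as eventually released this takes roughly the following shape (a sketch against the released API; the types live in System.Data.Entity and System.Data.Entity.SqlServer):

using System.Data.Entity;
using System.Data.Entity.SqlServer;

// EF6 picks up DbConfiguration subclasses in the same assembly as the context.
public class MyDbConfiguration : DbConfiguration
{
    public MyDbConfiguration()
    {
        // Retry transient Azure SQL failures with an exponential backoff.
        SetExecutionStrategy("System.Data.SqlClient",
            () => new SqlAzureExecutionStrategy());
    }
}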
I've created a library which allows you to configure Entity Framework to retry using the Fault Handling block without needing to change every database call - generally you will only need to change your config file and possibly one or two lines of code.
This allows you to use it for Entity Framework or Linq To Sql.
https://github.com/robdmoore/ReliableDbProvider
I'm using EF with two databases - SQL CE and SQL Server.
Is there a way to know which connection type is used at runtime? I mean, if I have only ObjectContext in some place (already initialized with some connection string), can I get the database type from it (is it Compact or SQL Server at the moment)?
Thanks
You can check the Connection property, which should return an EntityConnection; from there you must check its StoreConnection, which will be the "real" database connection.
From there, you can either check the ConnectionString, which will tell you the provider, or simply check the type of the provider connection itself with is or GetType. If it's SQL Server it will be a SqlConnection, and if it's SQL CE it will be a SqlCeConnection.
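A minimal sketch of that type check, assuming an EF4-era ObjectContext (EntityConnection lives in System.Data.EntityClient, SqlCeConnection in System.Data.SqlServerCe):

using System.Data.EntityClient;
using System.Data.Objects;
using System.Data.SqlServerCe;

static bool IsCompact(ObjectContext context)
{
    // The "real" connection hides behind the EntityConnection wrapper.
    var storeConnection = ((EntityConnection)context.Connection).StoreConnection;
    return storeConnection is SqlCeConnection; // a SqlConnection means SQL Server
}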
This looks like an ugly hack because it is; if you're looking for a way to do this without an ugly hack, don't bother - the ObjectContext is explicitly designed not to leak any information about the connection unless you know exactly what to ask for. By contrast, here are all the hoops you would have to jump through to check it via the app config:
static string GetProviderName(ObjectContext context)
{
    string entityConnectionString = GetEntityConnectionString(context);
    return !string.IsNullOrEmpty(entityConnectionString) ?
        GetProviderConnectionString(entityConnectionString) : null;
}

static string GetEntityConnectionString(ObjectContext context)
{
    var match = Regex.Match(context.Connection.ConnectionString,
        @"name=(?<name>[^;]+)", RegexOptions.Compiled);
    string connectionStringName = match.Success ?
        match.Groups["name"].Value : null;
    return ConfigurationManager.ConnectionStrings[connectionStringName].ConnectionString;
}

static string GetProviderConnectionString(string entityConnectionString)
{
    var match = Regex.Match(entityConnectionString,
        @"provider=(?<provider>[^;]+)", RegexOptions.Compiled);
    return match.Success ? match.Groups["provider"].Value : null;
}
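A hypothetical call site, for comparison with the cast-based check (provider invariant names are strings like "System.Data.SqlClient" and "System.Data.SqlServerCe.4.0"):

string provider = GetProviderName(context);
bool isCompact = provider != null && provider.StartsWith("System.Data.SqlServerCe");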
Once any solution starts to involve regexes, I tend to look for a straighter path, and in this case, that's the type casting you say you'd prefer not to use. Pick your poison.
Be careful how you use either of the above approaches. EF4 is designed around persistence ignorance and you should be trying to avoid any logic that's specific to the connection type, because you really have no idea how it will be configured (maybe tomorrow it will be an Oracle connection). I believe that the provider-specific code resides mainly in the QueryProvider.
Can I do nested transactions in NHibernate, and how do I implement them? I'm using SQL Server 2008, so support is definitely in the DBMS.
I find that if I try something like this:
using (var outerTX = UnitOfWork.Current.BeginTransaction())
{
    using (var nestedTX = UnitOfWork.Current.BeginTransaction())
    {
        ... do stuff
        nestedTX.Commit();
    }

    outerTX.Commit();
}
then by the time it comes to outerTX.Commit(), the transaction has become inactive, which results in an ObjectDisposedException on the session's AdoTransaction.
Are we therefore supposed to create nested NHibernate sessions instead? Or is there some other class we should use to wrap around the transactions (I've heard of TransactionScope, but I'm not sure what that is)?
I'm now using Ayende's UnitOfWork implementation (thanks Sneal).
Forgive any naivety in this question, I'm still new to NHibernate.
Thanks!
EDIT: I've discovered that you can use TransactionScope, such as:
using (var transactionScope = new TransactionScope())
{
    using (var tx = UnitOfWork.Current.BeginTransaction())
    {
        ... do stuff
        tx.Commit();
    }

    using (var tx = UnitOfWork.Current.BeginTransaction())
    {
        ... do stuff
        tx.Commit();
    }

    transactionScope.Complete();
}
However I'm not all that excited about this, as it locks us in to using SQL Server, and also I've found that if the database is remote then you have to worry about having MSDTC enabled... one more component to go wrong. Nested transactions are so useful and easy to do in SQL that I kind of assumed NHibernate would have some way of emulating the same...
NHibernate sessions don't support nested transactions.
The following assertion always passes in version 2.1.2:

var session = sessionFactory.OpenSession();
var tx1 = session.BeginTransaction();
var tx2 = session.BeginTransaction();

Assert.AreEqual(tx1, tx2);
You need to wrap it in a TransactionScope to support nested transactions.
MSDTC must be enabled, or you will get this error:
{"Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool."}
As Satish suggested, nested transactions are not supported in NHibernate. I've not come across scenarios where nested transactions were needed, but certainly I've faced problems where I had to ignore creating transactions if other ones were already active in other units of work.
The blog link below provides an example implementation for NHibernate, but it should also work for SQL Server:
http://rajputyh.blogspot.com/2011/02/nested-transaction-handling-with.html
I've been struggling with this for a while now. I'm going to have another crack at it.
I want to implement transactions in individual service containers - because that makes them self-contained - but then be able to nest a bunch of those service methods within a larger transaction and rollback the whole lot if necessary.
Because I'm using Rhino Commons I'm now going to try refactoring using the With.Transaction method. Basically it allows us to write code as if transactions were nested, though in reality there is only one.
For example:
private Project CreateProject(string name)
{
    var project = new Project(name);

    With.Transaction(delegate
    {
        UnitOfWork.CurrentSession.Save(project);
    });

    return project;
}

private Sample CreateSample(Project project, string code)
{
    var sample = new Sample(project, code);

    With.Transaction(delegate
    {
        UnitOfWork.CurrentSession.Save(sample);
    });

    return sample;
}
private void Test_NoNestedTransaction()
{
    var project = CreateProject("Project 1");
}

private void Test_NestedTransaction()
{
    using (var tx = UnitOfWork.Current.BeginTransaction())
    {
        try
        {
            var project = CreateProject("Project 6");
            var sample = CreateSample(project, "SAMPLE006");
        }
        catch
        {
            tx.Rollback();
            throw;
        }

        tx.Commit();
    }
}
In Test_NoNestedTransaction(), we are creating a project alone, without the context of a larger transaction. In this case, inside CreateProject a new transaction will be created and committed, or rolled back if an exception occurs.
In Test_NestedTransaction(), we are creating both a sample and a project. If anything goes wrong, we want both to be rolled back. In reality, the code in CreateSample and CreateProject will run just as if there were no transactions at all; it is entirely the outer transaction that decides whether to roll back or commit, and it does so based on whether an exception is thrown. Really, that's why I'm using a manually created transaction for the outer transaction: so I have control over whether to commit or roll back, rather than just defaulting to on-exception-rollback-else-commit.
You could achieve the same thing without Rhino.Commons by scattering this sort of thing throughout your code:

ITransaction tx = null;
if (!UnitOfWork.Current.IsInActiveTransaction)
{
    tx = UnitOfWork.Current.BeginTransaction();
}

_auditRepository.SaveNew(auditEvent);

if (tx != null)
{
    tx.Commit();
}
... and so on. But With.Transaction, despite the clunkiness of needing to create anonymous delegates, does that quite conveniently.
An advantage of this approach over using TransactionScopes (apart from avoiding the reliance on MSDTC) is that there ought to be just a single flush to the database, in the final outer-transaction commit, regardless of how many methods have been called in between. In other words, we don't need to write uncommitted data to the database as we go; we're always just writing it to the local NHibernate cache.
In short, this solution doesn't offer ultimate control over your transactions, because it doesn't ever use more than one transaction. I guess I can accept that, since nested transactions are by no means universally supported in every DBMS anyway. But now perhaps I can at least write code without worrying about whether we're already in a transaction or not.
That implementation doesn't support nesting; if you want nesting, use Ayende's UnitOfWork implementation. Another problem with the implementation you are using (at least for web apps) is that it holds onto the ISession instance in a static variable.
I just rewrote our UnitOfWork yesterday for these reasons; it was originally based on Gabriel's.
We don't use UnitOfWork.Current.BeginTransaction(); we use UnitOfWork.TransactionalFlush(), which creates a separate transaction at the very end to flush all the changes at once.
using (var uow = UnitOfWork.Start())
{
    var entity = repository.Get(1);
    entity.Name = "Sneal";

    uow.TransactionalFlush();
}