My profiler trace shows that exec sp_reset_connection is being called between every SQL batch or procedure call. There are reasons for it, but if I'm confident it's unnecessary, can I prevent it from being called, to improve performance?
UPDATE:
The reason I imagine this could improve performance is twofold:
SQL Server doesn't need to reset the connection state. I think this would be a relatively negligible improvement.
Reduced network latency, because the client doesn't need to send down an exec sp_reset_connection, wait for the response, and only then send the SQL it really wants to execute.
The second benefit is the one I'm interested in, because in my architecture the clients are sometimes some distance from the database. If every SQL batch or RPC requires a double round-trip, this doubles the impact of any network latency. Eliminating such double calls could potentially improve performance.
Yes, there are lots of other things I could do to improve performance, like re-architecting the app, and I'm a big fan of solving the root cause of problems, but in this case I just want to know if it's possible to prevent sp_reset_connection from being called. Then I can test whether there is any performance improvement and properly assess the risks of not calling it.
This prompts another question: does the network communication with sp_reset_connection really occur as I outlined above? i.e. does the client send exec sp_reset_connection, wait for a response, then send the real SQL? Or does it all go in one chunk?
If you're using .NET to connect to SQL Server, the ability to disable the extra reset call was removed as of .NET 3.5 -- see here. (The property remains, but it does nothing.)
I guess Microsoft realized (as someone found experimentally here) that opening the door to skipping the reset was far more dangerous than the (likely) small performance gain was worth. Can't say I blame them.
Does the client send exec sp_reset_connection, wait for a response, then send the real SQL?
EDIT: I was wrong -- see here -- the answer is no.
Summary: there is a special bit set in a TDS message that specifies that the connection should be reset, and SQL Server executes sp_reset_connection automatically. It appears as a separate batch in Profiler and would always be executed before the actual query you wanted to execute, so my test was invalid.
Yes, it's sent in a separate batch.
I put together a little C# test program to demonstrate this because I was curious:
using System.Data.SqlClient;
(...)
private void Form1_Load(object sender, EventArgs e)
{
    SqlConnectionStringBuilder csb = new SqlConnectionStringBuilder();
    csb.DataSource = @"MyInstanceName";
    csb.IntegratedSecurity = true;
    csb.InitialCatalog = "master";
    csb.ApplicationName = "blarg"; // dummy name to filter on in Profiler
    for (int i = 0; i < 2; i++)
        _RunQuery(csb);
}

private void _RunQuery(SqlConnectionStringBuilder csb)
{
    using (SqlConnection conn = new SqlConnection(csb.ToString()))
    {
        conn.Open();
        SqlCommand cmd = new SqlCommand("WAITFOR DELAY '00:00:05'", conn);
        cmd.ExecuteNonQuery();
    }
}
Start Profiler and attach it to your instance of choice, filtering on the dummy application name I provided. Then, put a breakpoint on the cmd.ExecuteNonQuery(); line and run the program.
The first time you step over, just the query runs, and all you get is the SQL:BatchCompleted event after the 5-second wait. When the breakpoint hits the second time, all you see in Profiler is still just the one event. When you step over again, you immediately see the exec sp_reset_connection event, and then the SQL:BatchCompleted event shows up after the delay.
The only way to get rid of the exec sp_reset_connection call (which may or may not be a legitimate performance problem for you) is to turn off .NET's connection pooling. And if you're planning to do that, you'd likely want to build your own connection pooling mechanism, because just turning it off and doing nothing else will probably hurt more overall than taking the hit of the extra round-trip, and you will have to deal with the correctness issues manually.
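For illustration, a minimal sketch of turning pooling off via the connection string (the Pooling keyword is the real SqlClient setting; the server and database names are placeholders):

// Placeholder names throughout; Pooling = false is the only point here.
var csb = new SqlConnectionStringBuilder
{
    DataSource = @"MyInstanceName",
    IntegratedSecurity = true,
    InitialCatalog = "master",
    Pooling = false   // no pool, so no sp_reset_connection -- but also no pooled reuse
};
using (var conn = new SqlConnection(csb.ToString()))
{
    conn.Open();   // now a full physical login every time, usually costlier than the reset
    // ... run commands ...
}

Note that with pooling off, every Open() pays the full connection handshake, which tends to dwarf the round-trip you were trying to save.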
This Q/A could be helpful:
What does "exec sp_reset_connection" mean in Sql Server Profiler?
However, I did a quick test using Entity Framework and MS SQL Server 2008 R2. It shows that "exec sp_reset_connection" isn't time-consuming after the first call:
for (int i = 0; i < n; i++)
{
    using (ObjectContext context = new myEF())
    {
        DateTime timeStartOpenConnection = DateTime.Now;
        context.Connection.Open();
        Console.WriteLine();
        Console.WriteLine("Opening connection time waste: {0} ticks.", (DateTime.Now - timeStartOpenConnection).Ticks);

        ObjectSet<myEntity> query = context.CreateObjectSet<myEntity>();
        DateTime timeStart = DateTime.Now;
        myEntity e = query.OrderByDescending(x => x.EventDate).Skip(i).Take(1).SingleOrDefault<myEntity>();
        Console.Write("{0}. Created By {1} on {2}... ", e.ID, e.CreatedBy, e.EventDate);
        Console.WriteLine("({0} ticks).", (DateTime.Now - timeStart).Ticks);

        DateTime timeStartCloseConnection = DateTime.Now;
        context.Connection.Close();
        context.Connection.Dispose();
        Console.WriteLine("Closing connection time waste: {0} ticks.", (DateTime.Now - timeStartCloseConnection).Ticks);
        Console.WriteLine();
    }
}
And the output was this:
Opening connection time waste: 5390101 ticks.
585. Created By sa on 12/20/2011 2:18:23 PM... (2560183 ticks).
Closing connection time waste: 0 ticks.
Opening connection time waste: 0 ticks.
584. Created By sa on 12/20/2011 2:18:20 PM... (1730173 ticks).
Closing connection time waste: 0 ticks.
Opening connection time waste: 0 ticks.
583. Created By sa on 12/20/2011 2:18:17 PM... (710071 ticks).
Closing connection time waste: 0 ticks.
Opening connection time waste: 0 ticks.
582. Created By sa on 12/20/2011 2:18:14 PM... (720072 ticks).
Closing connection time waste: 0 ticks.
Opening connection time waste: 0 ticks.
581. Created By sa on 12/20/2011 2:18:09 PM... (740074 ticks).
Closing connection time waste: 0 ticks.
So, the final conclusion is: don't worry about "exec sp_reset_connection"! After the first call, it costs practically nothing.
Personally, I'd leave it.
Given what it does, I want to make sure I have no temp tables in scope or transactions left open.
To be fair, you'd gain a bigger performance boost by not running Profiler against your production database.
And do you have any numbers, articles, or recommendations about what you can gain from this, please?
Just keep the connection open instead of returning it to the pool, and execute all commands on that one connection.
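A rough sketch of that idea (the class and method names are hypothetical, and you take on lifetime and thread-safety yourself):

using System.Data;
using System.Data.SqlClient;

static class SingleConnection
{
    // One long-lived, non-pooled connection for the whole app (hypothetical pattern).
    static readonly SqlConnection Conn =
        new SqlConnection("Data Source=MyInstanceName;Initial Catalog=master;Integrated Security=SSPI;Pooling=false");

    public static void Execute(string sql)
    {
        lock (Conn)   // SqlConnection is not thread-safe; serialize all access
        {
            if (Conn.State != ConnectionState.Open)
                Conn.Open();
            using (var cmd = new SqlCommand(sql, Conn))
                cmd.ExecuteNonQuery();
        }
    }
}

You also have to handle broken connections (e.g. network blips) yourself, which is exactly what the pool normally does for you.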
static void clean() throws Exception {
    final UserTransaction tx = InitialContext.doLookup("UserTransaction");
    tx.begin();
    try {
        final DataSource ds = InitialContext.doLookup(Databases.ADMIN);
        Connection connection1 = ds.getConnection();
        Connection connection2 = ds.getConnection();
        PreparedStatement st1 = connection1.prepareStatement("XXX delete records XXX"); // delete data
        PreparedStatement st2 = connection2.prepareStatement("XXX insert records XXX"); // insert new data with the same primary keys as the deleted data above
        st1.executeUpdate();
        st1.close();
        connection1.close();
        st2.executeUpdate();
        st2.close();
        connection2.close();
        tx.commit();
    } finally {
        if (tx.getStatus() == Status.STATUS_ACTIVE) {
            tx.rollback();
        }
    }
}
I have a web app whose DAO layer takes a DataSource as the object from which to create individual connections to perform database operations.
So I have a UserTransaction, inside which two DAO objects do separate actions: the first performs a deletion and the second an insertion. The deletion is there to remove some records so that the insertion can take place, because the insertion inserts data with the same primary keys.
I took the DAO layer out and translated the logic into the code above. There is one thing I can't understand: based on the code above, the insertion should fail. The code (inside the UserTransaction) takes two different connections that don't know about each other, and the deletion obviously hasn't been committed yet, so the second statement (the insertion) should fail with a unique-constraint violation - the two database operations are not on the same connection, and the second connection shouldn't be able to see uncommitted changes. But amazingly, it doesn't fail, and both statements work perfectly.
Can anyone help explain this? Is there any configuration that can be done to achieve this result? Or is my understanding wrong?
Since your application is running in WebLogic Server, the Java EE container is managing the transaction and the connection for you. If you call DataSource#getConnection multiple times in a Java EE transaction, you will get multiple Connection instances joining the same transaction. Usually those connections connect to the database with the identical session. Using Oracle, you can check that with the following snippet in a @Stateless EJB:
@Resource(lookup = "jdbc/myDS")
private DataSource ds;

@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
@Schedule(hour = "*", minute = "*", second = "42")
public void testDatasource() throws SQLException {
    try (Connection con1 = ds.getConnection();
         Connection con2 = ds.getConnection()) {
        String sessId1 = null, sessId2 = null;
        try (ResultSet rs1 = con1.createStatement().executeQuery("select userenv('SESSIONID') from dual")) {
            if (rs1.next()) sessId1 = rs1.getString(1);
        }
        try (ResultSet rs2 = con2.createStatement().executeQuery("select userenv('SESSIONID') from dual")) {
            if (rs2.next()) sessId2 = rs2.getString(1);
        }
        LOG.log(Level.INFO, " con1={0}, con2={1}, sessId1={2}, sessId2={3}",
                new Object[]{ con1, con2, sessId1, sessId2 });
    }
}
This results in the following log-Message:
con1=com.sun.gjc.spi.jdbc40.ConnectionWrapper40@19f32aa,
con2=com.sun.gjc.spi.jdbc40.ConnectionWrapper40@1cb42e0,
sessId1=9347407,
sessId2=9347407
Note that you get different Connection instances with same session-ID.
For more details see, e.g., this question.
The only way to do this properly is to use a transaction manager and two-phase commit (XA) drivers for all databases involved in the transaction.
My guess is that you have autocommit enabled on the connections. This is the default when creating a new connection, as documented here:
https://docs.oracle.com/javase/tutorial/jdbc/basics/transactions.html
System.out.println(connection1.getAutoCommit());
will most likely print true.
You could try
connection1.setAutoCommit(false);
and see if that changes the behavior.
In addition to that, it's not really defined what happens if you call close() on a connection without issuing a commit or rollback beforehand. Therefore it is strongly recommended to issue one of the two before closing the connection; see https://docs.oracle.com/javase/7/docs/api/java/sql/Connection.html#close()
EDIT 1:
If autocommit is false, then it's probably due to the undefined behavior of close(). What happens if you swap the order of the statements?
st2.executeUpdate();
st2.close();
connection2.close();
st1.executeUpdate();
st1.close();
connection1.close();
EDIT 2:
You could also try the "correct" way of doing it:
st1.executeUpdate();
st1.close();
st2.executeUpdate();
st2.close();
tx.commit();
connection1.close();
connection2.close();
If that doesn't fail, then something is wrong with your setup for UserTransactions.
Depending on your database, this can be quite a normal case.
An object implementing the UserTransaction interface represents a "logical transaction". It doesn't always map to a real, "physical" transaction that a database engine respects.
For example, there are situations that cause implicit commits (as well as implicit starts) of transactions. In the case of Oracle (I can't vouch for other DBs), closing a connection is one of them.
From Oracle's docs:
"If the auto-commit mode is disabled and you close the connection
without explicitly committing or rolling back your last changes, then
an implicit COMMIT operation is run".
But there can be other possible reasons for implicit commits: select for update, various locking statements, DDLs, and so on. They are database-specific.
So, back to our code.
The first transaction is committed by closing a connection.
Then another transaction is implicitly started by the DML on the second connection. It inserts non-conflicting changes, and the second connection.close() commits them without a PK violation. tx.commit() won't even get a chance to commit anything (and how could it? The connections are already closed).
The bottom line: "logical" transaction managers don't always give you the full picture.
Sometimes transactions are started and committed without an explicit reason. And sometimes they are even ignored by a DB.
PS: I assumed you used Oracle, but the same holds true for other databases as well. For example, see MySQL's list of implicit commit reasons.
If auto-commit mode is disabled and you close the connection without explicitly committing or rolling back your last changes, then an implicit COMMIT operation is executed.
Please check below link for details:
http://in.relation.to/2005/10/20/pop-quiz-does-connectionclose-result-in-commit-or-rollback/
I use a SqlTransaction in my C# project, and I issue a Delete statement with an ExecuteNonQuery call.
This works very well, and I always have the same number of rows to delete, but 95% of the time it takes 1 ms, and approximately 5% of the time it takes between 300 and 500 ms.
My code:
using (SqlTransaction DbTrans = conn.BeginTransaction(IsolationLevel.ReadCommitted))
{
    SqlCommand dbQuery = conn.CreateCommand();
    dbQuery.Transaction = DbTrans;
    dbQuery.CommandType = CommandType.Text;
    dbQuery.CommandText = "delete from xy where id = @ID";
    dbQuery.Parameters.Add("@ID", SqlDbType.Int).Value = x.ID;
    dbQuery.ExecuteNonQuery();
}
Is something wrong with my code?
Read Understanding how SQL Server executes a query and How to analyse SQL Server performance to get you started on troubleshooting such issues.
Of course, I assume you have an index on xy.id. Your DELETE is most likely being blocked from time to time. This can have many causes:
data locks from other queries
IO stalls on your hardware
log growth events
etc.
The gist of it is that, using the techniques in the articles linked above (especially the second one), you can identify the cause and address it appropriately.
Changes to your C# code will have little impact, if any at all. Using a stored procedure is not going to help. You need to root-cause the problem.
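If you want to correlate the slow occurrences with server-side events, here is a minimal client-side timing sketch (the Stopwatch and the 100 ms threshold are my additions, purely illustrative, reusing the dbQuery from the question):

var sw = System.Diagnostics.Stopwatch.StartNew();
dbQuery.ExecuteNonQuery();
sw.Stop();
// Log the timestamps of slow deletes so they can be lined up with blocking/waits on the server.
if (sw.ElapsedMilliseconds > 100)
    Console.WriteLine("Slow delete at {0}: {1} ms", DateTime.Now, sw.ElapsedMilliseconds);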
I'm hoping someone can confirm what is actually happening here with TPL and SQL connections.
Basically, I have a large application which, in essence, reads a table from SQL Server, and then processes each row - serially. The processing of each row can take quite some time. So, I thought to change this to use the Task Parallel Library, with a "Parallel.ForEach" across the rows in the datatable. This seems to work for a little while (minutes), then it all goes pear-shaped with...
"The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached."
Now, I surmised the following (which may of course be entirely wrong).
The "ForEach" creates tasks for each row, up to some limit based on the number of cores (or whatever). Lets say 4 for want of a better idea. Each of the four tasks gets a row, and goes off to process it. TPL waits until the machine is not too busy, and fires up some more. I'm expecting a max of four.
But that's not what I observe - and not what I think is happening.
So... I wrote a quick test (see below):
Sub Main()
    Dim tbl As New DataTable()
    FillTable(tbl)
    Parallel.ForEach(tbl.AsEnumerable(), AddressOf ProcessRow)
End Sub

Private n As Integer = 0

Sub ProcessRow(row As DataRow, state As ParallelLoopState)
    n += 1 ' I know... not thread safe
    Console.WriteLine("Starting thread {0}({1})", n, Thread.CurrentThread.ManagedThreadId)
    Using cnx As SqlConnection = New SqlConnection(My.Settings.ConnectionString)
        cnx.Open()
        Thread.Sleep(TimeSpan.FromMinutes(5))
        cnx.Close()
    End Using
    Console.WriteLine("Closing thread {0}({1})", n, Thread.CurrentThread.ManagedThreadId)
    n -= 1
End Sub
This creates way more tasks than my guess. So, I surmise that TPL fires up tasks to the limit it thinks will keep my machine busy, but hey, what's this, we're not very busy here, so let's start some more. Still not very busy, so... etc. (It seems to add roughly one new task a second.)
This is reasonable-ish, but I expected it to go pop 30 seconds (the SQL connection timeout) after it reached 100 open SQL connections - the default connection pool size - which it doesn't.
So, to scale it back a bit, I change my connection string to limit the max pool size.
Sub Main()
    Dim tbl As New DataTable()
    Dim csb As New SqlConnectionStringBuilder(My.Settings.ConnectionString)
    csb.MaxPoolSize = 10
    csb.ApplicationName = "Test 1"
    My.Settings("ConnectionString") = csb.ToString()
    FillTable(tbl)
    Parallel.ForEach(tbl.AsEnumerable(), AddressOf ProcessRow)
End Sub
I count the real number of connections to the SQL Server and, as expected, it's 10. But my application has fired up 26 tasks - and then hangs. So, setting the max pool size somehow limited the number of tasks to 26, but why not 27? And especially, why doesn't it fall over at 11, when the pool is full?
Obviously, somewhere along the line I'm asking for more work than my machine can do, and I can add "MaxDegreeOfParallelism" to the ForEach (see the sketch just below), but I'm interested in what's actually going on here.
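For reference, capping the loop explicitly would look something like this (a C# rendering of the VB test above; the limit of 4 is arbitrary):

// Hypothetical cap; tune MaxDegreeOfParallelism to your workload and pool size.
var options = new ParallelOptions { MaxDegreeOfParallelism = 4 };
Parallel.ForEach(tbl.AsEnumerable(), options, (row, state) => ProcessRow(row, state));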
PS.
Actually, after sitting with 26 tasks for (I'm guessing) 5 minutes, it does fall over with the original (max pool size reached) error. Huh?
Thanks.
Edit 1:
Actually, what I now think happens in the tasks (my "ProcessRow" method) is that after 10 successful connections/tasks, the 11th blocks for the duration of the connection timeout and then gets the original exception - as do all subsequent tasks.
So... I conclude that the TPL is creating tasks at about one a second, and it has enough time to create about 26 or 27 before task 11 throws an exception. All subsequent tasks then also throw exceptions (about a second apart), and the TPL stops creating new tasks (because it gets unhandled exceptions in one or more tasks?).
For some reason (as yet undetermined), the ForEach then hangs for a while. If I modify my ProcessRow method to use the state to say "stop", it appears to have no effect.
Sub ProcessRow(row As DataRow, state As ParallelLoopState)
    n += 1
    Console.WriteLine("Starting thread {0}({1})", n, Thread.CurrentThread.ManagedThreadId)
    Try
        Using cnx As SqlConnection = fnNewConnection()
            Thread.Sleep(TimeSpan.FromMinutes(5))
        End Using
    Catch ex As Exception
        Console.WriteLine("Exception on thread {0}", Thread.CurrentThread.ManagedThreadId)
        state.Stop()
        Throw
    End Try
    Console.WriteLine("Closing thread {0}({1})", n, Thread.CurrentThread.ManagedThreadId)
    n -= 1
End Sub
Edit 2:
Dur... The reason for the long delay is that, while tasks 11 onwards all crash and burn, tasks 1 to 10 don't, and all sit there sleeping for 5 minutes. The TPL has stopped creating new tasks (because of the unhandled exception in one or more of the tasks it has created), and then waits for the un-crashed tasks to complete.
The edits to the original question add more detail and, eventually, the answer becomes apparent.
TPL creates tasks repeatedly because the tasks it has created are (basically) idle. This is fine until the connection pool is exhausted, at which point the tasks which want a new connection wait for one to become available, and timeout. In the meantime, the TPL is still creating more tasks, all doomed to fail. After the connection timeout, the tasks start failing, and the ensuing exception(s) cause the TPL to stop creating new tasks. The TPL then waits for the tasks that did get connections to complete, before an AggregateException is thrown.
The TPL is not made for IO-bound work. It uses heuristics to steer the number of active threads, and those heuristics fail for long-running and/or IO-bound tasks, causing it to inject more and more threads without a practical limit.
Use PLINQ to set a fixed number of threads using WithDegreeOfParallelism. You should probably test different values. It could look like the sketch below. I have written much more about this topic on SO, but I can't find it at the moment.
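A minimal sketch of that approach (assuming the DataTable from the question and a ProcessRow(DataRow) overload; 8 is just a starting value to tune):

// PLINQ with an explicit, fixed degree of parallelism.
tbl.AsEnumerable()
   .AsParallel()
   .WithDegreeOfParallelism(8)   // test different values; this is not a recommendation
   .ForAll(row => ProcessRow(row));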
I have no idea why you are seeing exactly 26 threads in your example. Note, that when the pool is depleted, a request to take a connection only fails after a timeout. This entire system is very non-deterministic and I'd consider any number of threads plausible.
I am using SQL Server 2005 CE framework 3.5 and attempting to use merge replication between my handheld and my SQL Server. When I run the code to synchronise, it just seems to sit forever, and when I put a breakpoint in my code it never gets past the call to Synchronize().
If I look at the replication monitor in SQL Server, it gets to the point where it says the subscription is no longer synchronising and doesn't show any errors. Therefore I am assuming this means the synchronisation is complete.
http://server/virtualdirectory/sqlcesa35.dll?diag does not report any issues.
This is my first attempt at any handheld development, so I may have done something daft. However, SQL Server seems to be reporting a successful synchronisation.
Any help would be greatly appreciated as I have spent ages on this !
Here is my code.
const string DatabasePath = @"SD Card\mydb.sdf";

var repl = new SqlCeReplication
{
    ConnectionManager = true,
    InternetUrl = @"http://server/virtualdirectory/sqlcesa35.dll",
    Publisher = @"servername",
    PublisherDatabase = @"databasename",
    PublisherSecurityMode = SecurityType.DBAuthentication,
    PublisherLogin = @"username",
    PublisherPassword = @"password",
    Publication = @"publicationname",
    Subscriber = @"PPC",
    SubscriberConnectionString = "Data Source=" + DatabasePath
};

try
{
    Cursor.Current = Cursors.WaitCursor;
    if (!File.Exists(DatabasePath))
    {
        repl.AddSubscription(AddOption.CreateDatabase);
    }
    repl.Synchronize();
    MessageBox.Show("Successfully synchronised");
}
catch (SqlCeException e)
{
    DisplaySqlCeErrors(e.Errors, e);
}
finally
{
    repl.Dispose();
    Cursor.Current = Cursors.Default;
}
Another thing you can do to speed up the Synchronize operation is to specify a db file path that is in your PDA's main program memory (instead of on the SD card, as in your example). You should see a speed improvement of up to 4x (meaning the sync may take only 25% as long as it's taking now).
If you're running out of main program memory on your PDA, you can use System.IO.File.Move() to move the file to the SD card after the Synchronize call. This seems a bit strange, I know, but it's much faster to sync to program memory and copy to the SD card than it is to sync directly to the SD card. A sketch of that pattern follows.
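A hedged sketch of the idea (the program-memory path is a placeholder; only the File.Move() call comes from the suggestion above):

const string FastPath = @"\Program Files\MyApp\mydb.sdf"; // hypothetical main-memory path
const string CardPath = @"SD Card\mydb.sdf";

repl.SubscriberConnectionString = "Data Source=" + FastPath;
repl.Synchronize();                      // sync against fast program memory
System.IO.File.Move(FastPath, CardPath); // then move the .sdf to the SD card
// (Move it back, or sync to FastPath again, before the next Synchronize call.)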
I have since discovered that it was just taking a long time to copy the data to the physical disk. Although the SQL Server replication had completed, it was still writing the data to the SD card.
I identified this by reducing the number of tables I am replicating, and I got a more immediate response (well, another error, but unrelated to this issue).
Thanks anyway :)
We have a couple of mirrored SQL Server databases.
My first problem - the key problem - is to get a notification when the db fails over. I don't strictly need to know because, erm, it's mirrored and so it (almost) all carries on working automagically, but it would be useful to be advised. I'm currently getting failovers when I don't think I should be, so I want to know when they occur (without too much digging) to see if I can determine why.
I have services running that I could fairly easily use to monitor this - so the alternative question would be "How do I programmatically determine which is the principal and which is the mirror?" - preferably in a more intelligent fashion than just attempting to connect to each in turn (which would mostly work, but...).
Thanks, Murph
Addendum:
One of the answers queries why I don't need to know when it fails over. The answer is that we're developing using ADO.NET, which has automatic failover support: all you have to do is add Failover Partner=MIRRORSERVER (where MIRRORSERVER is the name of your mirror server instance) to your connection string, and your code will fail over transparently - you may get some errors depending on what connections are active, but in our case very few. A sketch of such a connection string is just below.
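For illustration (the server and database names are placeholders):

// Placeholder names; Failover Partner is the key addition.
string connStr = "Data Source=PRIMARYSERVER;Failover Partner=MIRRORSERVER;" +
                 "Initial Catalog=MyDatabase;Integrated Security=SSPI;";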
Right,
The two answers and a little thought got me to something approaching an answer.
First a little more clarification:
The app is written in C# (2.0+) and uses ADO.NET to talk to SQL Server 2005.
The mirror setup is two W2k3 servers hosting the Principal and the Mirror, plus a third server hosting an Express instance as a witness/monitor. The nice thing about this is that a failover is all but transparent to the app using the database; it will throw an error for some connections, but fundamentally everything carries on nicely. Yes, we're getting the odd false positive, but the whole point is to have the system carry on working with the least amount of fuss, and mirroring does deliver this very nicely.
Further, the issue is not with serious server failure - that's usually a bit more obvious - but with failovers for other reasons (c.f. the false positives above), as we do have a couple of things that can't, for various reasons, fail over, and in any case we want to see if we can identify the circumstances where we get false positives.
So, given the above, simply checking the status of the boxes is not quite enough, and chasing through the event log is probably overly complex. The answer is, as it turns out, fairly simple: sp_helpserver.
The first column returned by sp_helpserver is the server name. If you run the request at regular intervals, saving the previous server name and comparing each time, you'll be able to identify when a change has taken place and then take the appropriate action.
The following is a console app that demonstrates the principle - although it needs some work (e.g. the connection ought to be non-pooled and created fresh each time), it's enough for now (so I'd then accept this as "the" answer). Parameters are Principal, Mirror, Database.
using System;
using System.Data.SqlClient;

namespace FailoverMonitorConcept
{
    class Program
    {
        static void Main(string[] args)
        {
            string server = args[0];
            string failover = args[1];
            string database = args[2];
            string connStr = string.Format("Integrated Security=SSPI;Persist Security Info=True;Data Source={0};Failover Partner={1};Packet Size=4096;Initial Catalog={2}", server, failover, database);
            string sql = "EXEC sp_helpserver";
            SqlConnection dc = new SqlConnection(connStr);
            SqlCommand cmd = new SqlCommand(sql, dc);
            Console.WriteLine("Connection string: " + connStr);
            Console.WriteLine("Press any key to test, press q to quit");
            string priorServerName = "";
            char key = ' ';
            while (key.ToString().ToLower() != "q")
            {
                dc.Open();
                try
                {
                    // The first column of sp_helpserver is the name of whichever server we're connected to.
                    string serverName = cmd.ExecuteScalar() as string;
                    Console.WriteLine(DateTime.Now.ToLongTimeString() + " - Server name: " + serverName);
                    if (priorServerName == "")
                    {
                        priorServerName = serverName;
                    }
                    else if (priorServerName != serverName)
                    {
                        Console.WriteLine("***** SERVER CHANGED *****");
                        Console.WriteLine("New server: " + serverName);
                        priorServerName = serverName;
                    }
                }
                catch (System.Data.SqlClient.SqlException ex)
                {
                    Console.WriteLine("Error: " + ex.ToString());
                }
                finally
                {
                    dc.Close();
                }
                key = Console.ReadKey(true).KeyChar;
            }
            Console.WriteLine("Finis!");
        }
    }
}
I wouldn't have arrived here without a) asking the question and then b) getting the responses which made me actually think
Murph
If the failover logic is in your application, you could write a status screen that shows which box you're connected to, by writing to a var when the first connection attempt fails.
I think your best bet would be a ping daemon/cron job that checks the status of each box periodically and sends an email if one doesn't respond.
Use something like Host Monitor http://www.ks-soft.net/hostmon.eng/ to monitor the Event Log for messages related to the failover event, which can send you an alert via email/SMS.
I'm curious, though, how you wouldn't need to know that the failover happened - don't you have to update the data sources in your applications to point to the new server you failed over to? Mirroring takes place on different hosts (the principal and the mirror), unlike clustering, which has multiple nodes that appear to be a single device from the outside.
Also, are you using a witness server in order to fail over automatically from the principal to the mirror? This is the only way I know of to make it happen automatically, and in my experience you get a lot of false positives, where network hiccups can fool the mirror and witness into thinking the principal is down when in fact it is not.