SQL Server CE 3.5 Merge Replication Synchronize Is Hanging

I am using SQL Server 2005 with the CE 3.5 framework and attempting to use merge replication between my handheld and my SQL Server. When I run the code to synchronise, it just seems to sit forever, and when I put a breakpoint in my code it never gets past the call to Synchronize().
If I look at the replication monitor in SQL Server, it gets to the point where it says the subscription is no longer synchronising and doesn't show any errors, so I am assuming the synchronisation is complete.
http://server/virtualdirectory/sqlcesa35.dll?diag does not report any issues.
This is my first attempt at any handheld development, so I may have done something daft. However, SQL Server seems to be reporting a successful synchronisation.
Any help would be greatly appreciated as I have spent ages on this!
Here is my code.
const string DatabasePath = @"SD Card\mydb.sdf";

var repl = new SqlCeReplication
{
    ConnectionManager = true,
    InternetUrl = @"http://server/virtualdirectory/sqlcesa35.dll",
    Publisher = @"servername",
    PublisherDatabase = @"databasename",
    PublisherSecurityMode = SecurityType.DBAuthentication,
    PublisherLogin = @"username",
    PublisherPassword = @"password",
    Publication = @"publicationname",
    Subscriber = @"PPC",
    SubscriberConnectionString = "Data Source=" + DatabasePath
};
try
{
    Cursor.Current = Cursors.WaitCursor;
    if (!File.Exists(DatabasePath))
    {
        repl.AddSubscription(AddOption.CreateDatabase);
    }
    repl.Synchronize();
    MessageBox.Show("Successfully synchronised");
}
catch (SqlCeException e)
{
    DisplaySqlCeErrors(e.Errors, e);
}
finally
{
    repl.Dispose();
    Cursor.Current = Cursors.Default;
}

Another thing you can do to speed up the Synchronize operation is to specify a database file path that is in your PDA's main program memory (instead of on the SD card as in your example). You should see a speed improvement of up to 4x (meaning the sync may take only 25% as long as it does now).
If you're running out of main program memory on your PDA, you can use System.IO.File.Move() to move the file to the SD card after the Synchronize call. This seems a bit strange, I know, but it's much faster to sync to program memory and copy to the SD card than it is to sync directly to the SD card.
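For illustration, a minimal sketch of that sync-then-move approach, reusing the repl object from the question; the program-memory path is a placeholder and error handling is omitted:

using System.IO;

const string TempPath = @"\mydb.sdf";          // main program memory (placeholder path)
const string FinalPath = @"SD Card\mydb.sdf";  // final home on the storage card

// Synchronise against a database file in fast program memory...
repl.SubscriberConnectionString = "Data Source=" + TempPath;
if (!File.Exists(TempPath))
{
    repl.AddSubscription(AddOption.CreateDatabase);
}
repl.Synchronize();

// ...then move the finished file out to the SD card.
if (File.Exists(FinalPath))
{
    File.Delete(FinalPath);
}
File.Move(TempPath, FinalPath);

On subsequent runs you would move the file back into program memory before synchronising again.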

I have since discovered that it was just taking a long time to copy the data to the physical disk: although the SQL Server replication had completed, it was still copying the data to the SD card.
I identified this by reducing the number of tables I am replicating, which got me a much more immediate response (well, another error, but one unrelated to this issue).
Thanks anyway :)

Related

SQL Server : Delete statement takes between 1 ms and 500 ms

I use a SqlTransaction in my C# project, and I run a Delete statement with an ExecuteNonQuery call.
This works very well, and I always have the same number of rows to delete, but about 95% of the time the delete takes 1 ms, and roughly 5% of the time it takes between 300 and 500 ms.
My code:
using (SqlTransaction DbTrans = conn.BeginTransaction(IsolationLevel.ReadCommitted))
{
    SqlCommand dbQuery = conn.CreateCommand();
    dbQuery.Transaction = DbTrans;
    dbQuery.CommandType = CommandType.Text;
    dbQuery.CommandText = "delete from xy where id = @ID";
    dbQuery.Parameters.Add("@ID", SqlDbType.Int).Value = x.ID;
    dbQuery.ExecuteNonQuery();
}
Is something wrong with my code?
Read Understanding how SQL Server executes a query and How to analyse SQL Server performance to get started on troubleshooting such issues.
Of course, I assume you have an index on xy.id. Your DELETE is most likely being blocked from time to time. This can have many causes:
data locks from other queries
IO block from your hardware
log growth events
etc
The gist of it is that using the techniques in the articles linked above (especially the second one) you can identify the cause and address it appropriately.
Changes to your C# code will have little impact, if any at all. Using a stored procedure is not going to help. You need to root-cause the problem.
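To make that concrete, here is a hedged sketch (the connection string is a placeholder) that polls SQL Server's blocking DMV from a second, diagnostic connection while the slow DELETE is in flight; sys.dm_exec_requests is available from SQL Server 2005 onwards:

using System;
using System.Data.SqlClient;

// Run from a separate connection while the DELETE is executing.
using (var diag = new SqlConnection("Data Source=.;Initial Catalog=master;Integrated Security=SSPI"))
{
    diag.Open();
    var probe = new SqlCommand(
        "SELECT session_id, blocking_session_id, wait_type, wait_time " +
        "FROM sys.dm_exec_requests WHERE blocking_session_id <> 0", diag);
    using (SqlDataReader r = probe.ExecuteReader())
    {
        while (r.Read())
        {
            // Shows who is blocked, by whom, and on what kind of wait.
            Console.WriteLine("session {0} blocked by {1} ({2}, {3} ms)",
                r["session_id"], r["blocking_session_id"], r["wait_type"], r["wait_time"]);
        }
    }
}

If this returns rows whenever the DELETE is slow, the wait_type column points you at the cause (lock waits, IO waits, log waits, and so on).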

Why are my AJAX calls to a Database Server Slow ONLY From My Site Server?

I hope this is the correct site to post this on. I wasn't sure if I should post here or Server Fault, but seeing as this involves the website perspective, I thought perhaps this community might be a little more accurate, but I'm not 100% on that.
I have been banging my head against the wall for over half a year trying to figure out just what's going on here. I would be ecstatic if I could track down why AJAX calls are slow when going through our Site Server.
I have built a small web-app for the organization I work for and it is pretty much set up like this:
The site itself (WebMatrix IIS Express site) resides on the Site Server, but (with the help of C#) it uses SQL queries to query a (considerably large) database on our Database Server.
The problem is that when my site performs the AJAX (simple jQuery $.ajax() calls) that requires it to query the database, the response takes over 5 seconds, each!
(Chrome Network Details):
(You'll see that some of the responses are really quick. These responses contain no data, or a lot less data than the other responses. Maybe there's a data limit somewhere that's causing the Site Server to analyze them?)
Now here's the kicker:
The development machine (the local machine the site is developed on) cuts out the Site Server entirely: it runs the same code and queries the same database, but the lag doesn't appear in this scenario. The responses here are in the low milliseconds, just what I would expect them to be.
Here's what the Chrome Network Details look like from the development machine:
(None even close to 1 second, let alone 5).
Some More Specifics
When launching this site straight from the Site Server, the lag persists.
WebMatrix uses SQL Server CE, while the SQL Server installed on the Database Server is SQL Server 2005 (I really don't think this makes a difference, as the query itself isn't anything special, plus it's the same code that's used in either scenario).
The Site Server has been tested to see if the RAM, Processor, and Bandwidth are maxing out, but the truth is that running this web-app doesn't even touch the Site Server's resources. The same has been found for the Database Server, as well.
The connection to the database is readonly (doubt this matters, just trying to give as much detail as possible).
We have indexed the database on the Database Server, but it helped, virtually, none at all.
Even though it is just an Intranet site, I am told that putting the site directly on the Database Server is not an option.
At the moment, the AJAX requests are not asynchronous, but it should still not take this long (especially considering that it only lags from the Site Server and not from the Development Machine, even though the code is 100% identical in both cases).
Probably doesn't make any difference, but I am in an ASP.NET WebPages using WebMatrix with C# environment.
The Operating System on the Site Server is: Windows Server 2008 R2
The Operating System on the Database Server is: Windows Server 2003
What could make this app work well from my local machine but not from the Site Server? I think the problem has to be the Site Server, given this, but none of its resources are maxing out or anything. It seems to only lag by about 5 seconds per request if the data being returned is over a certain amount (an amount that seems pretty low, honestly).
Truth is, I am hopelessly stuck here. We have tried everything over the past several months (we are having a similar problem with another Intranet site where the AJAX calls lag there, too, we have just lived with it for a while).
I don't know what else to even look into anymore.
In case anybody wants to see some code
jQuery (one of the AJAX requests, they are all just repeats of this with different parameters)
$.ajax({
    url: '/AJAX Pages/Get_Transactions?dep=1004',
    async: false,
    type: 'GET',
    dataType: "json",
    contentType: "application/json",
    success: function (trans) {
        for (var i = 0; i < trans.length; i++) {
            trans[i][0] = getTimeStamp(trans[i][0]);
        }
        jsonObj1004 = trans;
    },
    error: function (jqXHR, textStatus, error) {
        alert("Oops! It appears there has been an AJAX error. The Transaction chart may not work properly. Please try again, by reloading the page.\n\nError Status: " + textStatus + "\nError: " + error);
    }
});
C# Server Side Code (With Razor)
@{
    Layout = "";
    if (IsAjax)
    {
        var db = Database.Open("OkmulgeeCIC");
        Dictionary<string, double> dataList = new Dictionary<string, double>();
        var date = "";
        var previousDate = "";
        double amount = 0;
        string jsonString = "[";
        string queryDep = "SELECT ba_trans_entered AS transDate, (ba_trans_amount * -1) AS transAmount FROM BA_VTRANS WHERE ba_trans_year >= 2011 AND ba_trans_operator = 'E' AND ba_trans_system = 'AP' AND ba_trans_ledger LIKE @0 + '%' ORDER BY ba_trans_entered ASC";
        string dep = Request.QueryString["dep"];
        foreach (var row in db.Query(queryDep, dep))
        {
            date = row.transDate.ToString();
            date = date.Substring(0, date.IndexOf(" "));
            amount = Convert.ToDouble(row.transAmount);
            if (date == previousDate)
            {
                dataList[date] = dataList[date] + amount;
            }
            else
            {
                dataList.Add(date, amount);
            }
            previousDate = date;
        }
        foreach (var item in dataList)
        {
            jsonString += "[";
            jsonString += Json.Encode(item.Key) + ", ";
            jsonString += Json.Encode(item.Value) + "],";
        }
        //jsonString += Json.Encode(date);
        jsonString = jsonString.TrimEnd(',');
        jsonString += "]";
        @Html.Raw(jsonString)
    }
    else
    {
        Context.RedirectLocal("~/");
    }
}
ADDITIONAL INFO FROM SQL SERVER PROFILER
From Development Machine
From User Machine (lag)
Just looking over your code, two things jumped out at me:
1) You're not closing your db connection, which is very bad. Either wrap your connection object in a using block (preferred) or add a call to .Close() at the end of your data work:
using (var db = Database.Open("OkmulgeeCIC"))
{
    // do work
}
2) Doing string concatenation in a loop like that is a terrible thing to do and very slow. Either use a StringBuilder, or, since you're outputting JSON anyway, just bundle your objects into a list and pass that to Json.Encode() (preferred).
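For instance, a minimal sketch of that second suggestion, reusing the dataList dictionary from the question's Razor code (Json.Encode is the same System.Web.Helpers JSON helper the page already uses):

// Build a plain list of [date, amount] pairs...
var pairs = new List<object[]>();
foreach (var item in dataList)
{
    pairs.Add(new object[] { item.Key, item.Value });
}

// ...and let the helper serialize everything in one call, with no string concatenation.
string jsonString = Json.Encode(pairs);

This produces the same nested-array JSON shape the hand-built string was aiming for.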
It seems to me this problem comes from your Site Server, but either way you can try this:
1. Publish your site to any Internet web server. If it is still slow, the problem is in your code, so check your code.
2. If not, check the configuration of your Site Server and Database Server. It might be the firewall, a TCP/IP port, or NetBIOS/domain name resolution between the two servers.
I do not know if this has any relevance to this problem, because I cannot see how you are calling your application, but I have many times experienced a roughly 5-second lag on IIS using C# when I use domain names to call other servers (this can also be localhost). The IP address should be used instead.
It would be worth trying this with an IP address instead of a domain name.
I had something similar when working with an AJAX and JSF site:
Jquery loading taking excessive time
Since you already have it working from the dev machine, it might not be a problem in your case. But to rule out any such scenario, can you build the page without using jQuery?
I had a similar issue where I would call 20 sprocs in a for loop; they were not large sprocs, mind you, but sprocs that each returned 5 values.
It would work fine, but from time to time it would essentially lag out and would not be able to load any of the sprocs, or only a very small number, until timing out completely.
That is when I discovered parameter sniffing in SQL Server.
To fix it, I declared local parameters in the sproc equal to the incoming parameters from my C# code.
OLD CODE:
CREATE PROC [dbo].[sp_procname_proc]
(
    @param1 int,
    @param2 int,
    @param3 varchar(5),
    --..... etc .....
)
AS
BEGIN
    -- select from db
END
NEW CODE:
CREATE PROC [dbo].[sp_procname_proc]
(
    @param1 int,
    @param2 int,
    @param3 varchar(5),
    --..... etc .....
)
AS
BEGIN
    DECLARE @localParam1 INT = @param1
    DECLARE @localParam2 INT = @param2
    DECLARE @localParam3 varchar(5) = @param3
    -- select from db using the local parameters
END

Can I stop sp_reset_connection being called to improve performance?

My profiler trace shows that exec sp_reset_connection is being called between every sql batch or procedure call. There are reasons for it, but can I prevent it from being called, if I'm confident that it's unnecessary, to improve performance?
UPDATE:
The reason I imagine this could improve performance is twofold:
SQL Server doesn't need to reset the connection state. I think this would be a relatively negligible improvement.
Reduced network latency because the client doesn't need to send down an exec sp_reset_connection, wait for response, then send whatever sql it really wants to execute.
The second benefit is the one I'm interested in, because in my architecture the clients are sometimes some distance from the database. If every sql batch or rpc requires a double round-trip this doubles the impact of any network latency. Eliminating such double calls could potentially improve performance.
Yes there are lots of other things I could do to improve performance like re-architect the app, and I'm a big fan of solving the root cause of problems, but in this case I just want to know if it's possible to prevent sp_reset_connection to be called. Then I can test if there is any performance improvement and properly assess the risks of not calling this.
This prompts another question: does the network communication with sp_reset_connection really occur like I outlined above? i.e. Does the client send exec sp_reset_connection, wait for a response, then send the real sql? Or does it all go in one chunk?
If you're using .NET to connect to SQL Server, the ability to disable the extra reset call was removed as of .NET 3.5 -- see here. (The property remains, but it does nothing.)
I guess Microsoft realized (as someone did experimentally here) that opening the door to avoid the reset was far more dangerous than it was to get a (likely) small performance gain. Can't say I blame them.
Does the client send exec sp_reset_connection, wait for a response, then send the real sql?
EDIT: I was wrong -- see here -- the answer is no.
Summary: there is a special bit set in a TDS message that specifies that the connection should be reset, and SQL Server executes sp_reset_connection automatically. It appears as a separate batch in Profiler and would always be executed before the actual query you wanted to execute, so my test was invalid.
Yes, it's sent in a separate batch.
I put together a little C# test program to demonstrate this because I was curious:
using System.Data.SqlClient;

(...)

private void Form1_Load(object sender, EventArgs e)
{
    SqlConnectionStringBuilder csb = new SqlConnectionStringBuilder();
    csb.DataSource = @"MyInstanceName";
    csb.IntegratedSecurity = true;
    csb.InitialCatalog = "master";
    csb.ApplicationName = "blarg";
    for (int i = 0; i < 2; i++)
        _RunQuery(csb);
}

private void _RunQuery(SqlConnectionStringBuilder csb)
{
    using (SqlConnection conn = new SqlConnection(csb.ToString()))
    {
        conn.Open();
        SqlCommand cmd = new SqlCommand("WAITFOR DELAY '00:00:05'", conn);
        cmd.ExecuteNonQuery();
    }
}
Start Profiler and attach it to your instance of choice, filtering on the dummy application name I provided. Then, put a breakpoint on the cmd.ExecuteNonQuery(); line and run the program.
The first time you step over, just the query runs, and all you get is the SQL:BatchCompleted event after the 5 second wait. When the breakpoint hits the second time, all you see in profiler is still just the one event. When you step over again, you immediately see the exec sp_reset_connection event, and then the SQL:BatchCompleted event shows up after the delay.
The only way to get rid of the exec sp_reset_connection call (which may or may not be a legitimate performance problem for you) would be to turn off .NET's connection pooling. And if you're planning to do that, you'd likely want to build your own connection pooling mechanism, because just turning it off and doing nothing else will probably hurt more overall than taking the hit of the extra roundtrip, and you will have to deal with the correctness issues manually.
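For what it's worth, a minimal sketch of turning pooling off via the connection string (the instance name is a placeholder); this trades the reset call for a full connection handshake on every Open(), which is usually the worse deal:

var csb = new SqlConnectionStringBuilder
{
    DataSource = @"MyInstanceName",   // placeholder
    IntegratedSecurity = true,
    InitialCatalog = "master",
    Pooling = false                   // no pool, so no sp_reset_connection...
};

using (var conn = new SqlConnection(csb.ToString()))
{
    conn.Open();                      // ...but a full physical connect every time
    // run commands here
}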
This Q/A could be helpful:
What does "exec sp_reset_connection" mean in Sql Server Profiler?
However, I did a quick test using Entity Framework and MS-SQL 2008 R2. It shows that "exec sp_reset_connection" isn't time consuming after the first call:
for (int i = 0; i < n; i++)
{
    using (ObjectContext context = new myEF())
    {
        DateTime timeStartOpenConnection = DateTime.Now;
        context.Connection.Open();
        Console.WriteLine();
        Console.WriteLine("Opening connection time waste: {0} ticks.", (DateTime.Now - timeStartOpenConnection).Ticks);

        ObjectSet<myEntity> query = context.CreateObjectSet<myEntity>();
        DateTime timeStart = DateTime.Now;
        myEntity e = query.OrderByDescending(x => x.EventDate).Skip(i).Take(1).SingleOrDefault<myEntity>();
        Console.Write("{0}. Created By {1} on {2}... ", e.ID, e.CreatedBy, e.EventDate);
        Console.WriteLine("({0} ticks).", (DateTime.Now - timeStart).Ticks);

        DateTime timeStartCloseConnection = DateTime.Now;
        context.Connection.Close();
        context.Connection.Dispose();
        Console.WriteLine("Closing connection time waste: {0} ticks.", (DateTime.Now - timeStartCloseConnection).Ticks);
        Console.WriteLine();
    }
}
And output was this:
Opening connection time waste: 5390101 ticks.
585. Created By sa on 12/20/2011 2:18:23 PM... (2560183 ticks).
Closing connection time waste: 0 ticks.
Opening connection time waste: 0 ticks.
584. Created By sa on 12/20/2011 2:18:20 PM... (1730173 ticks).
Closing connection time waste: 0 ticks.
Opening connection time waste: 0 ticks.
583. Created By sa on 12/20/2011 2:18:17 PM... (710071 ticks).
Closing connection time waste: 0 ticks.
Opening connection time waste: 0 ticks.
582. Created By sa on 12/20/2011 2:18:14 PM... (720072 ticks).
Closing connection time waste: 0 ticks.
Opening connection time waste: 0 ticks.
581. Created By sa on 12/20/2011 2:18:09 PM... (740074 ticks).
Closing connection time waste: 0 ticks.
So, the final conclusion is: Don't worry about "exec sp_reset_connection"! It wastes nothing.
Personally, I'd leave it.
Given what it does, I want to make sure I have no temp tables in scope or transactions left open.
To be fair, you will gain a bigger performance boost by not running Profiler against your production database. Do you have any numbers, articles, or recommendations about what you could gain from this?
Just keep the connection open instead of returning it to the pool, and execute all commands on that one connection.
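In other words, something like this rough sketch (the connection string is a placeholder); you then take on responsibility for the connection's state and lifetime yourself:

// One long-lived connection shared by all commands; sp_reset_connection only runs
// when a pooled connection is reused after being closed, so never closing avoids it.
var conn = new SqlConnection("Data Source=MyServer;Initial Catalog=MyDb;Integrated Security=SSPI");
conn.Open();

using (var cmd = new SqlCommand("SELECT 1", conn))
{
    cmd.ExecuteScalar();   // all commands run on the same open connection
}

// conn.Close(); // only on application shutdown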

Biztalk suspended messages in database

I was wondering if someone knows where I can see the data of a suspended message in the BizTalk database.
I need this because about 900 messages have been suspended because of a validation error, and I need to edit all of them; resuming isn't possible.
I know that info about suspended messages is shown in BizTalkMsgBoxDb in the table InstancesSuspended, and that the different parts of each message are shown in the table MessageParts. However, I can't find the table where the actual data is stored.
Does anyone have any idea where this can be found?
I found a way to do this; there's no risk of screwing up my system when I just want to read the messages.
I did it using the "CompressionStreams" type in Microsoft.BizTalk.Pipeline.dll.
The method to do this:
public static Stream getMsgStrm(Stream stream)
{
    Assembly pipelineAssembly = Assembly.LoadFrom(string.Concat(@"<path to dll>", @"\Microsoft.BizTalk.Pipeline.dll"));
    Type compressionStreamsType = pipelineAssembly.GetType("Microsoft.BizTalk.Message.Interop.CompressionStreams", true);
    return (Stream)compressionStreamsType.InvokeMember("Decompress", BindingFlags.Public | BindingFlags.InvokeMethod | BindingFlags.Static, null, null, new object[] { (object)stream });
}
Then I connect to my database, fill a DataSet, and stream the data out to a string:
String SelectCmdString = "select * from dbo.Parts";
SqlDataAdapter mySqlDataAdapter = new SqlDataAdapter(SelectCmdString, "<your connectionstring>");
DataSet myDataSet = new DataSet();
mySqlDataAdapter.Fill(myDataSet, "BodyParts");

foreach (DataRow row in myDataSet.Tables["BodyParts"].Rows)
{
    if (row["imgPart"].GetType() != typeof(DBNull))
    {
        SqlBinary binData = new SqlBinary((byte[])row["imgPart"]);
        MemoryStream stm = new MemoryStream(binData.Value);
        Stream aStream = getMsgStrm(stm);
        StreamReader aReader = new StreamReader(aStream);
        string aMessage = aReader.ReadToEnd();
        // filter msg
        // write msg
    }
}
I then write each string to an appropriate .txt or .xml file, depending on what you want; you can also filter out certain messages with regular expressions, etc.
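As a purely illustrative sketch of that filtering and writing step, continuing inside the foreach loop above (the output folder, the filter pattern, and the GUID-based file name are my assumptions; it needs using System.IO and using System.Text.RegularExpressions):

// Keep only messages matching some pattern of interest (illustrative filter)...
if (Regex.IsMatch(aMessage, "<SomeElementYouCareAbout>"))
{
    // ...and write each one out under a unique name.
    string fileName = Path.Combine(@"C:\SuspendedMessages", Guid.NewGuid().ToString() + ".xml");
    File.WriteAllText(fileName, aMessage);
}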
Hope this helps anyone, it sure as hell helped me.
Greetings
Extract Messages from suspended instances
Scenario:
BizTalk 2010 and SQL Server 2008 R2 is the environment we have used for this scenario.
You have a problem with some integrations: 1500 suspended instances inside BizTalk, and you need to send the actual messages to a customer. You probably do not want to save these out of BizTalk Administrator manually.
There are a lot of blogs and Internet resources pointing to VBS and PowerShell scripts for doing this, but I have used BizTalk Terminator to solve this kind of scenario.
As you know, BizTalk Terminator asks you three questions when the tool starts:
1. Are all BizTalk databases backed up?
2. Are all host instances stopped?
3. Are all BizTalk SQL Agents stopped?
This matters when you are actually going to change something inside the BizTalk databases, but that is not what you are doing in this scenario; you are only using the tool to read from the BizTalk databases. You should always have backups of the BizTalk databases, though.
You are always responsible for what you are doing, but when we have used the tool in the way I describe, we have not had any problem with this scenario.
So after you have started the Terminator tool, click Yes to the three questions (you don't need to stop anything in this scenario), then connect to the correct environment. Please do this in your test environment first so you feel comfortable with the procedure. The next step is to choose a Terminator task: choose Count Instances (and save messages). After this, fill in the parameter tab with the correct ServiceClass and HostName, set SaveMessages to True, and finally set FilesaveFullPath to the folder you want to save the messages to.
Then click the Execute button; depending on the size and number of messages, this can take some time. After this, disconnect Terminator and do NOT do anything else.
If you have filled in the correct values in the parameter tab, you should now have the saved messages inside the FilesaveFullPath folder.
Download BizTalk terminator from this address:
http://www.microsoft.com/en-us/download/details.aspx?id=2846
This is more than likely not supported by Microsoft. Don't risk screwing up your system. If you need to edit and resubmit, that needs to be built into the orchestration. Otherwise, your best bet is to use WMI to write a script to:
pull out all of the suspended messages
terminate them
edit them
resubmit them
You can find the messages through the HAT tool: just specify the schema, port, and the exact date and time, and it will show you the messages. Right-click the desired one and save it.

How can I get notification when a mirrored SQL Server database has failed over

We have a couple of mirrored SQL Server databases.
My first problem - the key problem - is to get a notification when the db fails over. I don't strictly need to know because, erm, it's mirrored and so it (almost) all carries on working automagically, but it would be useful to be advised. I'm also currently getting failovers when I don't think I should be, so I want to know when they occur (without too much digging) to see if I can determine why.
I have services running that I could fairly easily use to monitor this - so the alternative question would be "How do I programmatically determine which is the principal and which is the mirror?" - preferably in a more intelligent fashion than just attempting to connect to each in turn (which would mostly work but...).
Thanks, Murph
Addendum:
One of the answers queries why I don't need to know when it fails over - the answer is that we're developing using ADO.NET, which has automatic failover support: all you have to do is add Failover Partner=MIRRORSERVER (where MIRRORSERVER is the name of your mirror server instance) to your connection string and your code will fail over transparently. You may get some errors depending on what connections are active, but in our case very few.
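For reference, a minimal sketch of such a connection string (server and database names are placeholders):

// ADO.NET transparently retries against the failover partner when the principal is unavailable.
string connStr = "Data Source=PRINCIPALSERVER;Failover Partner=MIRRORSERVER;" +
                 "Initial Catalog=MyDatabase;Integrated Security=SSPI;";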
Right,
The two answers and a little thought got me to something approaching an answer.
First a little more clarification:
The app is written in C# (2.0+) and uses ADO.NET to talk to SQL Server 2005.
The mirror setup is two W2K3 servers hosting the principal and the mirror, plus a third server hosting an Express instance as a witness. The nice thing about this is that a failover is all but transparent to the app using the database: it will throw an error for some connections, but fundamentally everything carries on nicely. Yes, we're getting the odd false positive, but the whole point is to have the system carry on working with the least amount of fuss, and mirroring does deliver this very nicely.
Further, the issue is not with serious server failure - that's usually a bit more obvious - but with failovers for other reasons (c.f. the false positives above), as we have a couple of things that can't, for various reasons, fail over, and in any case we want to see if we can identify the circumstances where we get false positives.
So, given the above, simply checking the status of the boxes is not quite enough, and chasing through the event log is probably overly complex - the answer is, as it turns out, fairly simple: sp_helpserver.
The first column returned by sp_helpserver is the server name. If you run the request at regular intervals, saving the previous server name and comparing each time, you'll be able to identify when a change has taken place and then take the appropriate action.
The following is a console app that demonstrates the principle - it needs some work (e.g. the connection ought to be non-pooled and new each time), but it's enough for now (so I'd accept this as "the" answer). Parameters are Principal, Mirror, Database.
using System;
using System.Data.SqlClient;

namespace FailoverMonitorConcept
{
    class Program
    {
        static void Main(string[] args)
        {
            string server = args[0];
            string failover = args[1];
            string database = args[2];
            string connStr = string.Format("Integrated Security=SSPI;Persist Security Info=True;Data Source={0};Failover Partner={1};Packet Size=4096;Initial Catalog={2}", server, failover, database);
            string sql = "EXEC sp_helpserver";
            SqlConnection dc = new SqlConnection(connStr);
            SqlCommand cmd = new SqlCommand(sql, dc);
            Console.WriteLine("Connection string: " + connStr);
            Console.WriteLine("Press any key to test, press q to quit");
            string priorServerName = "";
            char key = ' ';
            while (key.ToString().ToLower() != "q")
            {
                dc.Open();
                try
                {
                    string serverName = cmd.ExecuteScalar() as string;
                    Console.WriteLine(DateTime.Now.ToLongTimeString() + " - Server name: " + serverName);
                    if (priorServerName == "")
                    {
                        priorServerName = serverName;
                    }
                    else if (priorServerName != serverName)
                    {
                        Console.WriteLine("***** SERVER CHANGED *****");
                        Console.WriteLine("New server: " + serverName);
                        priorServerName = serverName;
                    }
                }
                catch (System.Data.SqlClient.SqlException ex)
                {
                    Console.WriteLine("Error: " + ex.ToString());
                }
                finally
                {
                    dc.Close();
                }
                key = Console.ReadKey(true).KeyChar;
            }
            Console.WriteLine("Finis!");
        }
    }
}
I wouldn't have arrived here without a) asking the question and then b) getting the responses which made me actually think
Murph
If the failover logic is in your application, you could write a status screen that shows which box you're connected to, by writing to a variable when the first connection attempt fails.
I think your best bet would be a ping daemon/cron job that checks the status of each box periodically and sends an email if one doesn't respond.
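A rough sketch of that kind of check, using the System.Net.NetworkInformation Ping class; the host names are placeholders and the alerting step is left as a comment:

using System;
using System.Net.NetworkInformation;

class MirrorPingCheck
{
    static void Main()
    {
        string[] hosts = { "PRINCIPALSERVER", "MIRRORSERVER" };   // placeholders
        using (var ping = new Ping())
        {
            foreach (string host in hosts)
            {
                try
                {
                    PingReply reply = ping.Send(host, 2000);      // 2-second timeout
                    if (reply.Status != IPStatus.Success)
                    {
                        Console.WriteLine("ALERT: {0} not responding ({1})", host, reply.Status);
                        // send an email here (SMTP details omitted)
                    }
                }
                catch (PingException ex)
                {
                    Console.WriteLine("ALERT: {0} unreachable ({1})", host, ex.Message);
                }
            }
        }
    }
}

Note that a ping only tells you the box is reachable, not which instance is currently the principal; the sp_helpserver approach above covers that part.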
Use something like Host Monitor http://www.ks-soft.net/hostmon.eng/ to monitor the Event Log for messages related to the failover event, which can send you an alert via email/SMS.
I'm curious, though, how you wouldn't need to know that the failover happened - don't you have to update the data sources in your applications to point to the new server you failed over to? Mirroring takes place on different hosts (the principal and the mirror), unlike clustering, which has multiple nodes that appear to be a single device from the outside.
Also, are you using a witness server in order to fail over automatically from the principal to the mirror? This is the only way I know of to make it happen automatically, and in my experience you get a lot of false positives, where network hiccups can fool the mirror and witness into thinking the principal is down when in fact it is not.
