MS SQL Availability Group Failover behavior? - sql-server

I have a client that uses MS SQL with availability groups. I develop Java-based software and connect to the server in the following fashion: jdbc:sqlserver://[serverName[:portNumber]]
Every time the DBA does an update on the servers, we lose the connection to the server (we get a "Connection Closed" error). According to the DBA this is normal behavior in SQL Server and our software should just retry.
Is it really normal that SQL Server closes all connections in a failover situation? Shouldn't it just redirect all connections to the new instance?
Unfortunately I am no SQL expert and the DBA is anything but helpful; he just claims that our software should simply reconnect after receiving a closed connection. Am I missing something, or is this really the intended behavior in SQL Server?

Is it really normal that SQL Server closes all connections in a failover situation? Shouldn't it just redirect all connections to the new instance?
Yes. On the assumption that the Availability Group (hereafter abbreviated "AG") is built on top of Windows Failover Clustering†, the AG listener is brought offline, transferred to the new owner of the AG, and brought back online. Moreover, the databases in the AG on both the primary and all secondary replicas are transitioned from online → recovery pending → back online in their new (perhaps same as old) capacity as read/write or read-only.
Either of these phenomena alone would cause your application to lose communication with a database that it had previously established a connection with. But also, two of the fallacies of distributed computing are:
The network is reliable.
Topology doesn't change.
Regardless of the reason for those assumptions being violated, if your application isn't resilient to them, there's going to be a problem.
† - while it's technically possible to build an AG without Windows Clustering (i.e. a basic AG or if the AG is on Linux and is using Pacemaker as the coordinator), I think that the majority of implementations do use Windows Clustering.
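So, practically, the fix belongs in your application: treat a dropped connection as an expected, recoverable event. Below is a minimal retry sketch, assuming plain JDBC; the listener name AG-LISTENER, the credentials, and the dbo.Orders table are hypothetical placeholders. The multiSubnetFailover=true setting is a real Microsoft JDBC driver option that shortens reconnect time after the listener moves; the rest is just a reconnect loop with backoff.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class FailoverAwareQuery {

        // Hypothetical listener name and credentials - replace with your own.
        private static final String URL =
            "jdbc:sqlserver://AG-LISTENER:1433;databaseName=MyDb;"
          + "multiSubnetFailover=true;user=app;password=<secret>;";

        static int countOrders() throws SQLException, InterruptedException {
            SQLException last = null;
            for (int attempt = 1; attempt <= 5; attempt++) {
                try (Connection conn = DriverManager.getConnection(URL);
                     Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM dbo.Orders")) {
                    rs.next();
                    return rs.getInt(1);
                } catch (SQLException e) {
                    // "Connection Closed" / connection reset lands here during a failover.
                    // Real code should verify it is a connection error before retrying.
                    last = e;
                    Thread.sleep(1_000L * attempt);   // back off while the listener comes back online
                }
            }
            throw last;   // all attempts failed
        }

        public static void main(String[] args) throws Exception {
            System.out.println("Orders: " + countOrders());
        }
    }

If you use a connection pool, the same idea applies: have the pool validate connections on checkout, and retry the whole unit of work rather than the single statement that failed.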

Related

Alarm DB Logger (InTouch) configuration with SQL Server Mirroring

I have an installation with two SCADA (InTouch) HMIs, and I want to save the data in a SQL Server database that will be on another computer. To be as sure as possible that I have an operational database, I'm going to set up SQL Server mirroring, so I will have two SQL Server databases with a distributor. About this I have no doubts. To make it easier to understand, I've made an image of the system's architecture.
[Architecture diagram]
My question is how to configure the Alarm DB Logger so that it automatically points to the secondary database in case the principal database goes down in a failover.
PS: I don't know if it's even possible.
Configure the database for automatic failover. Connections are handled automatically in case of a failover. Read up on mirroring endpoints.
The links below should have more than enough information.
https://learn.microsoft.com/en-us/sql/database-engine/database-mirroring/role-switching-during-a-database-mirroring-session-sql-server
https://learn.microsoft.com/en-us/sql/database-engine/database-mirroring/the-database-mirroring-endpoint-sql-server
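For clients that build their own connection string (unlike the Alarm DB Logger, which reads its server name from the registry), the usual way to get automatic redirection with mirroring is to name the failover partner explicitly so the driver knows where to retry. A minimal JDBC sketch, with hypothetical server names PRIMARYSRV and MIRRORSRV, a hypothetical AlarmDB database, and placeholder credentials:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class MirroredConnectionDemo {
        public static void main(String[] args) throws SQLException {
            // failoverPartner tells the Microsoft JDBC driver which server to try
            // when the principal is unreachable after a mirroring failover.
            String url = "jdbc:sqlserver://PRIMARYSRV:1433;"
                       + "databaseName=AlarmDB;"
                       + "failoverPartner=MIRRORSRV;"
                       + "user=alarmlogger;password=<secret>;";

            try (Connection conn = DriverManager.getConnection(url)) {
                System.out.println("Connected via: " + conn.getMetaData().getURL());
            }
        }
    }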
The AlarmDBLogger reads its configuration from the registry, so you could try the following:
Stop AlarmLogger
Change ServerName in registry [HKLM].[Software].[Wonderware].[AlarmLogger].[SQLServer]
Start AlarmLogger
But what about the two InTouch nodes? What if one of those fails? You would have to make sure one of them logs alarms, and that they don't log duplicates!
The standard controls and ActiveX components for alarms use a specific view in the alarm database. You cannot change that behaviour, but you can script a server change in InTouch or System Platform.
Keep in mind that redundancy needs to be tested, and should only be implemented if 100% uptime is necessary. In many cases you will be creating new problems to solve instead of solving an actual problem.

What does ApplicationIntent=ReadOnly mean in the connection string

I am using MS Access to connect to SQL Server through a DSN connection. This is a linked table to a SQL Server backend. Here is the connection string:
ODBC;DSN=mydsn;Description=mydesc;Trusted_Connection=Yes;APP=Microsoft Office 2010;DATABASE=mydb;ApplicationIntent=READONLY;;TABLE=dbo.mytable
As you can see, there is an ApplicationIntent=READONLY attribute in the connection string. What does this mean? Am I connecting to the database in a read-only fashion? Is it recommended to perform updates and inserts using this connection string?
This means that if you are using Availability Groups in SQL Server 2012, the engine knows that your connections are read only and can be routed to read-only replicas (if they exist). Some information here:
Configure Read-Only Access on an Availability Replica
Availability Group Listeners, Client Connectivity, and Application Failover
If you are not currently using Availability Groups, it may be a good idea to leave that in there for forward compatibility, but it really depends on whether or not you are intentionally only reading. This should prevent writes, but there are some caveats. These Connect items may be useful or may leave you scratching your head. I'll confess I haven't read them through.
ApplicationIntent=ReadOnly allows updates to a database
ApplicationIntent=ReadOnly does not send the connection to the secondary copy
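For comparison, the same setting exists outside of ODBC/Access; in the Microsoft JDBC driver it is the applicationIntent connection property. Here is a minimal sketch, using a hypothetical listener name AGLISTENER, database MyDb and placeholder credentials. Note that, as the first Connect item above suggests, ReadOnly is a routing hint rather than an enforced permission, so a connection that lands on the primary can still write:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ReadOnlyIntentDemo {
        public static void main(String[] args) throws Exception {
            // applicationIntent=ReadOnly tells the server this session only reads,
            // so an AG configured for read-only routing may send it to a secondary.
            String url = "jdbc:sqlserver://AGLISTENER:1433;"
                       + "databaseName=MyDb;"
                       + "applicationIntent=ReadOnly;"
                       + "user=reporting_user;password=<secret>;";

            try (Connection conn = DriverManager.getConnection(url);
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT @@SERVERNAME")) {
                rs.next();
                // With read-only routing configured, this prints a secondary replica's name.
                System.out.println("Connected to replica: " + rs.getString(1));
            }
        }
    }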

What is best practice for SQL Server failover cluster database access tier?

In principle an SQL Server failover cluster presents itself as a virtual machine that applications can connect to oblivious to the fact that the SQL Server is actually a cluster of servers, hence, in principle no additional logic is required within the database access tier of the application.
My question is whether the above is true and whether there are best-practice modifications to how the DB access tier operates when using a failover cluster. For example, presumably when failover occurs there will be a delay that may cause a time-out error at the DB access tier; we are considering putting logic in that tier to retry [some] DB calls when a timeout occurs (we already have retry logic for DB deadlocks). This would provide another level of protection against errors affecting the application.
If a failover occurs and results in the higher application level receiving a timeout error on a service call, then that is not a seamless switchover. Should we simply be setting our timeouts at a duration that allows for failover?
Thanks.
In principle an SQL Server failover cluster presents itself as a virtual machine that
applications can connect to oblivious to the fact that the SQL Server
is actually a cluster of servers
Ah? Really? That contradicts the documentation. A cluster is basically nothing more than a moving IP address in front of separate installations on different servers, hardly a virtual machine.
in principle no additional logic is required within the database access tier of the
application.
Yes and no - a failing node DOES kill all ongoing transactions and connections, obviously, so the CLIENT must be able to react to that and retry. If the client crashes because a connection is down and does not retry, it does not help you that the server is reachable again after a second or two.
Should we simply be setting our timeouts at a duration that allows for failover?
No, a connection is broken by a failover because the ongoing transaction state is lost. You need to re-establish the connection and then re-issue all the SQL commands from the interrupted transaction.
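A minimal sketch of that pattern, assuming plain JDBC; the server name, table and credentials are hypothetical placeholders. The important part is that the whole unit of work is replayed on a fresh connection, never just the single statement that happened to fail:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class TransactionRetryDemo {

        // Hypothetical unit of work: every statement of the transaction lives here,
        // so a retry replays the whole thing from the beginning.
        static void transferFunds(Connection conn, int fromId, int toId, long cents) throws SQLException {
            conn.setAutoCommit(false);
            try (PreparedStatement debit = conn.prepareStatement(
                     "UPDATE dbo.Accounts SET Balance = Balance - ? WHERE Id = ?");
                 PreparedStatement credit = conn.prepareStatement(
                     "UPDATE dbo.Accounts SET Balance = Balance + ? WHERE Id = ?")) {
                debit.setLong(1, cents);  debit.setInt(2, fromId);  debit.executeUpdate();
                credit.setLong(1, cents); credit.setInt(2, toId);   credit.executeUpdate();
                conn.commit();
            } catch (SQLException e) {
                try { conn.rollback(); } catch (SQLException ignored) { }
                throw e;
            }
        }

        public static void main(String[] args) throws Exception {
            String url = "jdbc:sqlserver://SQLCLUSTER:1433;databaseName=MyDb;user=app;password=<secret>;";
            int maxAttempts = 4;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try (Connection conn = DriverManager.getConnection(url)) {
                    transferFunds(conn, 1, 2, 10_00);
                    break;                                   // success
                } catch (SQLException e) {
                    // Real code should only retry on connection/transient errors,
                    // not on things like constraint violations.
                    if (attempt == maxAttempts) throw e;
                    Thread.sleep(2_000L * attempt);          // crude backoff while the cluster fails over
                }
            }
        }
    }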
Note that from a data-safety point of view, clustering is risky and you should consider mirroring - the cluster nodes share a single set of database files, so you have a specific risk that a failing node corrupts them, in which case the failover fails too. Mirroring is more robust because each side maintains its own copy of the data.

DBAs say no to SQL Server DTC?

I am trying to get our DBAs to enable DTC on a SQL Server 2005 cluster. Unfortunately they keep refusing. Their argument is that they would need to set up a dedicated host for DTC (could take months!!), as it is not just a matter of ticking a few boxes. Is this true? How intrusive is DTC in a shared environment such as a SQL farm? Do I have an argument against this?
Thanks
I had to tone down the original response your 'DBA' team deserves!
In response to your questions:
Dedicated server - Not at all. Everywhere I've worked with clusters, the DTC service is installed when the cluster is commissioned. Typically it sits in its own resource group or within the cluster group. If it is in its own group, it usually sits on whichever server is hosting the cluster group.
Intrusive? - Absolutely not. It should be installed when the cluster is created, as per MS best practice.
Do you have an argument? - You most certainly do. The links below should cover the why and how for getting it installed:
MSDTC and SQL on a Cluster
Clustered SQL Server do's, dont's and basic warnings
DTC needs to be enabled and running on both sides of the connection. In my organization, it took some research to figure out which four boxes to check and then some hand-holding to get those boxes checked on all DB servers, all app servers and most laptops. There are still a couple of hold-out developer laptops... but they're OK as long as they don't write. :)
You should have some driving scenario (such as an atomic write across multiple databases) to hit the DBAs over the head with. Give them some time to guess at alternatives... then let them know that DTC is the only hammer for this kind of nail.
I'm unsure of the implications of DTC on a SQL farm. I imagine the whole farm could get involved in the transaction if it involves enough data... which can't be a good thing.
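For reference, the driving scenario mentioned above - an atomic write across multiple databases - looks roughly like this: one logical transaction enlisting two SQL Server instances, completed with two-phase commit. This is only a schematic sketch with hypothetical server, database and table names; it drives the XA API by hand for illustration, whereas a real application would normally let a JTA transaction manager (or System.Transactions on .NET) do this, and it assumes the SQL Server JDBC XA components and MSDTC are configured on every participating server - which is exactly why the DBAs have to be on board.

    import java.sql.Connection;
    import java.sql.Statement;
    import javax.sql.XAConnection;
    import javax.transaction.xa.XAResource;
    import javax.transaction.xa.Xid;
    import com.microsoft.sqlserver.jdbc.SQLServerXADataSource;

    public class AtomicTwoDatabaseWrite {

        // Minimal Xid for illustration; a real transaction manager generates these.
        static Xid newXid(final int branch) {
            return new Xid() {
                public int getFormatId()               { return 4660; }
                public byte[] getGlobalTransactionId() { return new byte[] { 1 }; }
                public byte[] getBranchQualifier()     { return new byte[] { (byte) branch }; }
            };
        }

        static XAConnection connect(String server, String db) throws Exception {
            SQLServerXADataSource ds = new SQLServerXADataSource();  // hypothetical servers below
            ds.setServerName(server);
            ds.setDatabaseName(db);
            ds.setUser("app");
            ds.setPassword("<secret>");
            return ds.getXAConnection();
        }

        public static void main(String[] args) throws Exception {
            XAConnection xa1 = connect("SQLNODE1", "Orders");
            XAConnection xa2 = connect("SQLNODE2", "Billing");
            XAResource r1 = xa1.getXAResource();
            XAResource r2 = xa2.getXAResource();
            Connection c1 = xa1.getConnection();
            Connection c2 = xa2.getConnection();
            Xid x1 = newXid(1), x2 = newXid(2);

            // Enlist both connections in the distributed transaction.
            r1.start(x1, XAResource.TMNOFLAGS);
            r2.start(x2, XAResource.TMNOFLAGS);

            try (Statement s1 = c1.createStatement(); Statement s2 = c2.createStatement()) {
                s1.executeUpdate("INSERT INTO dbo.OrderLog(Msg) VALUES ('shipped')");
                s2.executeUpdate("INSERT INTO dbo.InvoiceLog(Msg) VALUES ('invoiced')");
            }

            r1.end(x1, XAResource.TMSUCCESS);
            r2.end(x2, XAResource.TMSUCCESS);

            // Two-phase commit: both sides vote in prepare, then both commit.
            // MSDTC coordinates this exchange, which is why it must be running
            // and reachable on every participant.
            r1.prepare(x1);   // real code checks the vote (XA_OK vs XA_RDONLY)
            r2.prepare(x2);
            r1.commit(x1, false);
            r2.commit(x2, false);

            c1.close(); c2.close();
            xa1.close(); xa2.close();
        }
    }

Without a coordinator, the equivalent code is two independent commits, and a crash between them leaves one database updated and the other not - that is the gap DTC closes.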

Multipurposing a failover server?

I'm not a DBA so this may be a stupid question but I'll ask it anyway. We're upgrading our SQL Servers from 2000 to 2005 and we will probably use either database replication or database mirroring. Our DBA would like to "multipurpose" the standby server, meaning that he'd like to increase our capabilities and capacity by running other database applications on the standby server since "it's just going to be sitting there anyway" (his words, not mine). Is this such a good idea? Right now, our main application server uses only one instance that contains 50+ databases. As I understand it, what we're doing now and what our DBA is proposing for a failover server is a bad idea because all of these databases are sharing memory, CPUs, and working areas. If one application starts behaving badly, the other DBs could be affected.
Any thoughts?
It's really a business question that needs to be answered: is a slow app better than no app if you can't afford the expense of extra hardware?
Standby and mirrored DBs can be used for reporting. Using it as the failover DB can work if you have enough headroom (i.e. both sets of databases will comfortably run on the one server).
Will you depend on these extra applications? Where do they run in the failover case?
You really need to understand your failure modes.
If you look at it as basic resource math, that doesn't often make sense unless the resources you have running in the failure scenarios can handle the entire expected load. Sometimes this is the case, but not always. In this case, to handle the actual load you may need yet another server to come in (like RAID - perhaps your load needs a minimum of 5 servers and you have a farm of 6; that covers a single failure, and you need one additional standby server for every additional server you want to be able to lose). Sometimes a farm can run degraded, but sometimes it just pukes and dies.
And outside of normal operation, you often have cascading accidents, where a legitimate incident causes a cascade of issues - e.g. your backup tape drive is busy restoring a server from a backup (to a test environment, even - there are no real "failures"), and now your SQL Server or Exchange server (or both) is not backed up and your log gets full.
Database mirroring would not be the way to go here in my opinion, as it provides redundancy at the database level only. So you would need to configure database mirroring for up to 50 databases based on the information you provided. The chances are that if one DB were to fail, all 50 would probably follow, as failures typically occur at the hardware level rather than at the level of a specific database.
It sounds to me like you should be using SQL Server Clustering technology. You could create an Active/Active cluster to support your requirements.
What is an Active/Active Cluster?
An Active/Active SQL Server cluster means that SQL Server is running on both nodes of a two-way cluster. Each copy of SQL Server acts independently, and users see two different SQL Servers. If one of the SQL Servers in the cluster should fail, the failed instance will fail over to the remaining server. This means that both instances of SQL Server will then be running on one physical server, instead of two.
Applying this to your scenario
You could then split the databases between two instances of SQL server, one active instance on each node. Should one node fail, the other node will pick up the slack and vice versa.
Further Reading
An introduction to SQL Server Clustering
I suspect that you will find the following MSDN thread useful reading also
"it's just going to be sitting there anyway"
It will be sitting there applying transactions...
Take note of John Sansom's recommendation. Keep in mind that an Active/Active cluster requires two SQL Server licenses, while a failover cluster/mirror only needs one.
Setting up mirroring for a large number of DBs could turn into a big pain. You need any jobs/maintenance to move over as well, which can be achieved with alerts on WMI failover events. There's probably more to think about that could complicate things.
