High Availability Groups and Failover - sql-server

I have an active and a passive node for availability groups (HAGs) in SQL Server 2012. The passive node is constantly hit by connections that pass the "READ-ONLY" application-intent parameter in the connection string; reports, for example, use this type of connection on an everyday basis.
Recently we had a QA environment set up with HAGs as active-active, so no passive node. The DBAs told me that the difference between the two setups (active-active vs. active-passive) is that in a failover situation the active-active setup would allow "READ-WRITE" connections to continue to work.
In an active-passive failover situation, any "READ-WRITE" connections would not work, because the passive database would only allow "READ-ONLY" connections. Furthermore, tools like SSRS would fail because they can only be set up on one node at a time; currently we only have it installed on the passive node. That doesn't make sense, because the passive node is just one node, which means we should be able to install it on the active node instead. Technically this all sort of makes sense... but then it doesn't.
Isn't one of the main purposes of availability groups to provide failover protection regardless of the setup? Can anyone shed light on this?

I think that either you misunderstood your DBAs or they're not correct.
In an availability group, you have three options for how you want secondary replicas to behave (in order from most to least permissive):
Allow any connections
Allow only connections that specify application intent as read-only
Allow no connections
You also have two options for the primary replica (again in order from most to least permissive):
Allow any connections
Allow only connections that specify application intent as read-write
What makes this slightly confusing is that this preference is configured per replica. That is, you could have the following configuration:
Node A
Primary: Accepts any connection
Secondary: Accepts no connections
Node B
Primary: Accepts read-write connections
Secondary: Accepts read-only connections
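For illustration, that configuration would look roughly like this in T-SQL (a sketch; the availability group name MyAg and the replica names NodeA and NodeB are placeholders):

    -- Node A: accept any connection when primary, no connections when secondary.
    ALTER AVAILABILITY GROUP [MyAg]
      MODIFY REPLICA ON N'NodeA'
      WITH (PRIMARY_ROLE (ALLOW_CONNECTIONS = ALL));
    ALTER AVAILABILITY GROUP [MyAg]
      MODIFY REPLICA ON N'NodeA'
      WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = NO));

    -- Node B: only read-write intent when primary, only read-only intent when secondary.
    ALTER AVAILABILITY GROUP [MyAg]
      MODIFY REPLICA ON N'NodeB'
      WITH (PRIMARY_ROLE (ALLOW_CONNECTIONS = READ_WRITE));
    ALTER AVAILABILITY GROUP [MyAg]
      MODIFY REPLICA ON N'NodeB'
      WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));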
In a failover situation, the primary role is transferred to another node, and each replica obeys whatever semantics are configured for it. So, in my example above, if the primary is Node A, any connection to it will be accepted, while only read-only connections will be accepted at Node B. When a failover happens (making Node B the primary), only read-write connections will be accepted at Node B, while no connections will be accepted at Node A. To avoid confusion, I think configuring all of the nodes the same way is best. But talk with your DBAs and ask what each node's behavior is in the primary and secondary roles.

Related

Finding out sources of connections to MongoDB cluster

The "Real Time Metrics" panel of my MongoDB Atlas cluster shows 36 connections, even though I have terminated all the server apps that were supposed to be connected to it. Currently nothing should be connected, yet I still see those 36 connections. I tried pausing the cluster and then resuming it, but the connections came back. Is there any way for me to find out where they are coming from? Or, failing that, to terminate all of them?
Each connection is supposed to send what is called "client metadata". This should always include:
The driver identifier (e.g. pymongo 1.2.3)
The platform of the client (e.g. linux amd64)
Additionally, you can provide your own information to be sent as part of the client metadata, which you can use to identify your application. See e.g. the :app_name option at https://docs.mongodb.com/ruby-driver/master/tutorials/ruby-driver-create-client/.
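For instance, with the pymongo driver mentioned above, a rough sketch (the URI and app name are placeholders) looks like this:

    # Sketch: tag connections with an application name so they can be
    # identified in server logs and Atlas metrics. The URI is a placeholder.
    from pymongo import MongoClient

    # Either as a URI option...
    client = MongoClient("mongodb+srv://cluster0.example.net/?appName=reporting-service")

    # ...or as a client keyword argument.
    client = MongoClient("mongodb+srv://cluster0.example.net/", appname="reporting-service")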
Atlas has internal processes that connect to cluster nodes, and the cluster nodes also communicate with each other. All of these add to the connection count seen on each node.
To figure out where connections are coming from:
Read the server logs (which you have to download first) to obtain the client metadata sent with each connection; a log-filtering sketch follows this list.
Hopefully this will provide enough clues to identify cluster-to-cluster connections. You should also be able to tell those by their source IPs, which you can dig out of the cluster configuration.
Atlas's own connections should be using either the Go or the Java driver; if you don't use those in your own applications, this is an easy way of telling them apart.
Add an app name to all of your application connections to eliminate those from the unknown ones.
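As a rough sketch of the first step, after downloading a log file you can filter for the metadata lines (the exact message text depends on the server version; 4.4+ logs are structured JSON):

    # Pre-4.4 plain-text logs:
    grep "received client metadata" mongod.log

    # 4.4+ structured (JSON) logs:
    grep '"msg":"client metadata"' mongod.log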
There is no facility provided by the MongoDB server to terminate connections from clients. You can kill operations and sessions, but the connections used for those operations remain until the clients close them. When clients close connections depends on the particular driver and its connection pool settings; see e.g. https://docs.mongodb.com/ruby-driver/master/tutorials/ruby-driver-create-client/#connection-pooling.
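For completeness, killing an operation (not its connection) looks roughly like this with pymongo; the URI and opid are placeholders, and on Atlas these commands require appropriate privileges:

    from pymongo import MongoClient

    client = MongoClient("mongodb+srv://cluster0.example.net/")

    # List in-progress operations with their opid, client address, and app name.
    for op in client.admin.command("currentOp")["inprog"]:
        print(op["opid"], op.get("client"), op.get("appName"))

    # Kill one operation by opid; the client's connection stays open.
    client.admin.command({"killOp": 1, "op": 12345})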

What is the purpose of static IPs for SQL HAG listener?

Can somebody answer the following questions? I have two Azure VMs with an HAG set up, but no HAG listener, because I'm confused about where those static IPs are supposed to come from and whether they are needed in the first place.
Questions:
Why do I need an HAG listener at all if I can just use the IP address of each host to connect to SQL Server?
If I add additional IP addresses, are those supposed to be added manually to the TCP/IP properties of the adapter, or will WSFC take care of that during failover?
What is the difference between using the HAG DNS name and just using a database-mirroring-style (Data Source/Failover Partner) connection string? They seem to do the same thing, i.e. provide alternative IPs where the service is hosted.
Does WSFC need to have a "Server Name" under core cluster resources? What is the point of that name in terms of HAG functionality? Can I just delete it?
Why do I need an HAG listener at all if I can just use the IP address of each host to connect to SQL Server?
Answer:
The listener is part of the cluster resources. Connections go to the listener first and, depending on the settings, are relayed to one of the replicas. Of course, you can connect to each replica directly by its instance name or IP. However, having a listener is what gives you HA: if your primary replica fails over to a secondary, the listener automatically points to the new primary.
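For example, a client connection string points at the listener rather than at any one replica (the names here are placeholders; MultiSubnetFailover speeds up reconnection in multi-subnet clusters):

    Server=tcp:MyAgListener.contoso.com,1433;Database=MyDb;MultiSubnetFailover=True;Integrated Security=SSPI;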
If I add additional IP addresses, are those supposed to be added manually to the TCP/IP properties of the adapter, or will WSFC take care of that during failover?
Answer:
I assume you are asking about additional IPs for the listener. I noticed you have replicas in multiple subnets, so your listener has to have two IPs, one for each subnet. These settings cannot be manually added to the adapter's TCP/IP properties; you have to configure them while creating the listener, and WSFC brings the appropriate one online during failover.
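For reference, creating a multi-subnet listener looks roughly like this in T-SQL (the AG name, DNS name, static IPs, and subnet masks are placeholders):

    ALTER AVAILABILITY GROUP [MyAg]
    ADD LISTENER N'MyAgListener' (
        WITH IP (
            (N'10.0.1.50', N'255.255.255.0'),  -- static IP in subnet 1
            (N'10.0.2.50', N'255.255.255.0')   -- static IP in subnet 2
        ),
        PORT = 1433
    );

WSFC then brings online whichever listener IP matches the subnet of the node that currently owns the primary role.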
What is the difference between using the HAG DNS name and just using a database-mirroring-style (Data Source/Failover Partner) connection string? They seem to do the same thing, i.e. provide alternative IPs where the service is hosted.
Answer:
Mirroring works at the level of a single database.
An AG covers a group of databases.
Both use endpoints to communicate between the servers.
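On the client side, the difference looks roughly like this (server names are placeholders):

    Database mirroring style (exactly two partners, hard-coded per database):
        Server=ServerA;Failover Partner=ServerB;Database=MyDb;

    AG listener style (one DNS name that follows the primary for the whole group):
        Server=tcp:MyAgListener.contoso.com,1433;Database=MyDb;MultiSubnetFailover=True;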
Does WSFC need to have a "Server Name" under core cluster resources?
What is the point of that name in terms of HAG functionality? Can I just delete it?
Answer:
WSFC is the foundation of an AG; you need to create the WSFC first, and it has its own name, IP, and other properties. No, you cannot delete it.

How to access mnesia in remote server

I have an application built on Erlang/Cowboy; the database is Mnesia. The node name is webserver@127.0.0.1.
Since there is no GUI on the remote server, I want to use a local observer to access the remote Mnesia.
I have tried many times but still failed. Can anyone help me out? (Assume the IP of the remote server is 10.123.45.67.)
Your remote Erlang node name should be webserver@10.123.45.67 instead of webserver@127.0.0.1.
You also need to set the same cookie for both nodes, as well as the same node-naming convention. By naming convention I mean short names (the -sname flag) or long names (the -name flag), because a node with a long node name cannot communicate with a node with a short node name.
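A minimal sketch (the cookie value and the local IP are placeholders):

    # On the remote server:
    erl -name webserver@10.123.45.67 -setcookie mysecretcookie

    # On your local machine (use an IP or hostname the remote side can resolve):
    erl -name observer@192.0.2.10 -setcookie mysecretcookie
    1> observer:start().
    %% In the observer GUI, use the Nodes menu to connect to
    %% webserver@10.123.45.67.

For this to work, the remote server typically must also allow inbound connections on the epmd port (4369) and on the distribution port from your machine.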
Note that if your remote server is not on a trusted network, exposing the node this way is bad practice from a security standpoint.

AlwaysOn SQL Server 2014 application exception: Failed to update database because database is read-only

We have a two-node availability group, the nodes being SQLcluster1-node1 and SQLcluster2-node2, plus an availability group listener. The Java application connects to this listener, and everything works fine initially, i.e. the application can perform both reads and writes on the database, until we do a failover.
The connection string is driverURL=jdbc:jtds:sqlserver://[Listener DNS Name]:[Port]/[Database]
Say initially node1 was the primary and node2 the secondary.
After the failover, node1 becomes the secondary and node2 becomes the primary. The application is still able to connect to the database, but it can only perform reads; it throws the exception mentioned in the title if we try to do inserts into that DB.
Basically, what I need is for the application to be able to perform reads and writes all the time, irrespective of which node is the primary. Any ideas?
There should be no reason why you get a read-only database when the connection string points to the listener; that's the point of the availability group listener: to direct traffic to the read/write (primary) replica. Ping the DNS name and check that it resolves to the listener (before and after an AG failover). Unfortunately I don't use Java so can't help you any further. Cheers, Mark.
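A quick sketch of that check from the application server (the listener name is a placeholder); run it both before and after a failover and confirm the name keeps resolving to the listener IP(s):

    nslookup MyAgListener.contoso.com
    ping -a MyAgListener.contoso.com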

Multiple "Default" instances in SQL Server cluster? (AKA multiple clustered instances without requiring an instance name to connect)

I'm setting up multiple SQL Server instances on an active/active cluster. On our existing SQL cluster, the cluster name is SQLCLUSTER, but we access the instances as SQLCLUSTERINST1\Instance1, SQLCLUSTERINST2\Instance2, etc. Since each instance has its own IP and network name anyway, can I install SQL Server as the "default" instance on each network name? I'd really like to access my instances without having to give the instance name (i.e., instead of the above, just SQLCLUSTERINST1, SQLCLUSTERINST2, etc.), but my understanding is that, even in a cluster, the instance name is required, even though the IP already uniquely identifies an instance.
Does anybody know if I can do this? I'm about to install the first instance, and I wanted an answer before I start installing them as named instances if I don't need to. It just seems redundant, and potentially unnecessary, to have to give both the instance's cluster network name and the instance name to connect to a server, when the network name alone would uniquely identify a SQL instance as-is. I would expect one default instance per cluster group (as the instances in a group share an IP), but not one per cluster.
You can only use a default instance in an active/passive cluster. The reason is that you cannot have multiple default instances installed on the same server, and clustering requires each instance to be installable on every node of the cluster to support failover.
I ended up finding a workaround for this. While I installed named instances on the cluster, I can access them via port 1433 on each DNS name, so I don't have to provide the instance name to connect, which is what I was after.
To accomplish this, I had to modify the listener configuration to force each instance to listen on port 1433 on its dedicated IP, rather than relying on dynamic ports and the SQL Browser.
I've detailed the steps on my blog
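In outline (the exact steps may vary slightly by SQL Server version), the change is made per instance in SQL Server Configuration Manager:

    1. Open SQL Server Network Configuration > Protocols for <Instance> > TCP/IP properties.
    2. On the Protocol tab, set Listen All to No.
    3. On the IP Addresses tab, enable only the instance's dedicated cluster IP,
       clear TCP Dynamic Ports, and set TCP Port to 1433.
    4. Restart the SQL Server service for the instance.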
Good idea rwmnau. I haven't read your blog post yet, but I suspect the limitation revolves around registry keys or directory structures. Remember, each node only has one registry hive for SQL Server. There's a registry key that lists the instances on the box; it's a space-separated list, and I'm pretty sure it has to contain distinct values, so you can't have more than one MSSQLSERVER instance (the internal instance name for a default instance is MSSQLSERVER). So I think, if nothing else, that's your limitation. However, I do think using port 1433 for all the instances involved is a wonderful idea. Good job and thanks for sharing; I might try that myself on my next cluster!
