I am having issues creating a failover cluster with an availability group.
I've made a Windows failover cluster and a SQL availability group. I also have an Azure load balancer with an IP address and a DNS name.
I am trying to follow this guide here.
I get to the Configure the Listener step, add the Client Access Point, and things fail from there.
Is the name here supposed to be the DNS name of the load balancer? Same for the IP? Or is it supposed to be another object in Active Directory?
Steps 5 and 6 seem to conflict: is the dependency supposed to be a resource or an IP?
If anyone has any advice, I would appreciate it.
I have been using the above guide to get things working in the GUI before converting it over to PowerShell code.
I suspect either there is something I am missing, or this is all the same IP address and DNS name being used.
Related
Can somebody answer the following questions? I have 2 Azure VMs with a HAG set up but no HAG listeners. The reason is that I'm confused about where those static IPs are supposed to come from and whether they are needed in the first place.
Questions:
Why do I need a HAG listener at all if I can just use the IP address of each host to connect to SQL?
If I add additional IP addresses, are those supposed to be manually added to the TCP/IP properties of the adapter, or will WSFC take care of that during failover?
What is the difference between using the HAG DNS name versus just using a database-mirroring-style (Data Source/Failover Partner) connection string? They seem to do the same thing, i.e., provide alternative IPs where the service is being hosted.
Does WSFC need to have a "Server Name" under core cluster resources? What is the point of that name in terms of HAG functionality? Can I just delete it?
Why do I need a HAG listener at all if I can just use the IP address of each host to connect to SQL?
Answer:
The listener is one of the cluster resources. Connections go to the listener first and, depending on your settings, are relayed to a replica. Of course, you can connect to each replica directly by its instance name or IP. However, the listener is what gives you HA: if your primary replica fails over to a secondary replica, the listener automatically points to the new primary.
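As a rough illustration, connecting through the listener rather than a specific replica looks something like this (a minimal sketch assuming pyodbc; the listener name AG1-Listener and the database name are placeholders):
````
import pyodbc

# Connect through the AG listener instead of a specific replica.
# After a failover, this same connection string still reaches the
# new primary; nothing changes on the application side.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=AG1-Listener;"   # listener DNS name, not a replica host
    "DATABASE=MyAgDatabase;"
    "Trusted_Connection=yes;"
)
````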
If I add additional IP addresses, are those supposed to be manually added to the TCP/IP properties of the adapter, or will WSFC take care of that during failover?
Answer:
I assume you are asking about additional IPs for the listener. I noticed you have replicas in multiple subnets, so your listener has to have two IPs, one for each subnet. These settings cannot be added manually to the adapter's TCP/IP properties; you have to configure them while creating the listener, and WSFC brings the appropriate listener IP online during failover.
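With two listener IPs, only one is online at any given time, so clients should also opt into trying all the listener's IPs in parallel. Extending the sketch above (MultiSubnetFailover is a connection keyword supported by the newer SQL Server ODBC and ADO.NET drivers):
````
import pyodbc

# DNS returns both listener IPs, but only one is online at a time.
# MultiSubnetFailover makes the driver try all of them in parallel
# instead of timing out on the offline subnet's IP first.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=AG1-Listener;"
    "DATABASE=MyAgDatabase;"
    "MultiSubnetFailover=Yes;"
    "Trusted_Connection=yes;"
)
````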
What is the difference between using the HAG DNS name versus just using a database-mirroring-style (Data Source/Failover Partner) connection string? They seem to do the same thing, i.e., provide alternative IPs where the service is being hosted.
Answer:
Mirroring works at the single-database level.
An AG covers a group of databases.
Both use endpoints to communicate between replicas.
Does WSFC need to have a "Server Name" under core cluster resources?
What is the point of that name in terms of HAG functionality? Can I just delete it?
Answer:
WSFC is the foundation of an AG; you need to create the WSFC first. It has its own name, IP, and other properties. No, you cannot delete it.
I am having trouble getting two agents to communicate across platforms.
I have two virtual machines running on an internal network, and one of the VMs has an agent that attempts to connect and publish to the platform on the other VM. The code for the connection and send is the same as in examples like the ForwarderAgent. I know the agents can see each other and attempt to connect, but the authentication fails.
On the platform I am trying to connect to, I can see the credentials that the publishing agent presents. However, the presented credentials come from a key pair that is generated in
$VOLTTRON_HOME/keystores/
every time I start the agent, so the credentials change every time I start it.
I am unsure how I can add the agent as a known identity beforehand if I don't know the credentials it will try to use.
I have added the different addresses as known_hosts and attempted to register the agents between the two platforms, using the public keys associated with their agent installations, with
volttron-ctl auth add
but the sending agent still presents itself with new credentials. Is there a configuration step I am missing so that the agent will publish with its consistent public key?
When creating an agent to connect to an external platform from within an installed agent, you should use the following as a guideline:
````
import gevent
from volttron.platform.vip.agent import Agent

# The remote platform's VIP address; the serverkey, publickey, and
# secretkey values here are truncated placeholders.
destination_vip = "tcp://127.0.0.5:22916?serverkey=dafn..&publickey=adf&secretkey=afafdf"

event = gevent.event.Event()

# Note: by specifying the identity, the remote platform will use the same
# keystore to authenticate the agent. Otherwise a guid is used, which
# changes keys each time.
agent = Agent(address=destination_vip, enable_store=False, identity="remote_identity")

# Pass the event so it is set once the agent's core has started.
gevent.spawn(agent.core.run, event)
if not event.wait(timeout=10):
    print("Unable to start agent!")
````
Note this was from https://github.com/VOLTTRON/volttron/blob/master/services/core/ForwardHistorian/forwarder/agent.py#L317; however, there is a different mechanism in the develop branch that doesn't require the public and secret keys to be embedded in the destination_vip address.
In addition, the publickey you mention in the above code does need to be in the auth.json file, or you need to allow all connections via /.*/ in auth.json.
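For reference, a minimal auth.json allow list might look like the sketch below. The key value is a placeholder, the exact entry format varies between VOLTTRON versions, and the /.*/ wildcard entry is the allow-everything alternative, so you would normally use one or the other:
````
{
    "allow": [
        {"credentials": "CURVE:<sending-agent-public-key>"},
        {"credentials": "/.*/"}
    ]
}
````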
I hope this helps!
Our customers add a unique service (_careq) with an SRV record to their DNS servers so our software can just do a DnsQuery lookup and get the host's name.
The problem is some customers don't put the SRV record in the correct location (it should read _careq._tcp.[FQDN], but customers can put it in _careq._tcp.[subdomain].[FQDN], etc).
Rather than fixing every customer's DNS server, is there a way to just send a query to the DNS server with our service name (_careq) and have it search its entire DNS tree?
If not, is there another/better way to do a DNS lookup for our host server?
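For reference, the lookup amounts to an ordinary SRV query against each candidate location; there is no wildcard search. A minimal sketch using dnspython instead of the Win32 DnsQuery API, with example.com and site1 as placeholder names:
````
import dns.resolver

# Each candidate location must be queried explicitly; the server
# will not search its tree for you.
for name in ("_careq._tcp.example.com", "_careq._tcp.site1.example.com"):
    try:
        for rr in dns.resolver.resolve(name, "SRV"):
            print(name, "->", rr.target, rr.port)
        break
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        continue
````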
You can't get there from here.
DNS does not publish a list of subdomains. Game Over.
Well, "game over" unless they have zone transfers enabled for your client and that is fairly unlikely.
As for an alternative: perhaps zeroconf can do what you want.
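If you explore the zeroconf route, a minimal sketch with the python-zeroconf package looks like the following. Note that zeroconf discovers services on the local link (.local) rather than through the customer's DNS server, and the service type here is assumed to mirror your SRV name:
````
import time
from zeroconf import ServiceBrowser, Zeroconf

class CareqListener:
    # Called once per service instance discovered on the local network.
    def add_service(self, zc, type_, name):
        info = zc.get_service_info(type_, name)
        if info:
            print("Found host:", info.server, "port:", info.port)

    def remove_service(self, zc, type_, name):
        pass

    def update_service(self, zc, type_, name):
        pass

zc = Zeroconf()
browser = ServiceBrowser(zc, "_careq._tcp.local.", CareqListener())
time.sleep(5)   # give responses time to arrive
zc.close()
````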
Active Directory, Server 2003
I am using the ActiveDirectoryMembershipProvider and ADroleProvider. They work great, until my Active Directory server restarts in the middle of the day to get updates (I'm not in charge of the server and can't change this). When this happens, for the five minutes the server is rebooting, my users can't use my website because I've tied my menu to the role provider. So, here are my questions:
Is it possible to tell my RoleProvider to use the "next" available ADS? If so, how, so that while the initial one reboots I don't frustrate my users with ADS connection messages?
Should I be using some kind of connection pool that automatically reconnects to the available server? If so, how?
Let's imagine that all my active directory servers go down. Is there a way to keep my web application running? Obviously there are bigger problems if all servers are down, but what I'm after is a possible "disconnected" active directory authentication that will still move forward if the server somehow goes kaput. Is this wise AND possible?
You probably have the server connection string set to "server01.domain.local". If you change it to just "domain.local" you're no longer depending on "server01" being online. Instead you will use the Round Robin feature of Active Directory DNS to get a list of all domain controllers and use one that's online. (I don't think your admins reboot all of the domain controllers at the same time...)
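In web.config terms, the change is just to the provider's connection string; a sketch with placeholder names:
````
<connectionStrings>
  <!-- Before: pinned to a single domain controller -->
  <!-- <add name="ADService" connectionString="LDAP://server01.domain.local" /> -->
  <!-- After: DNS round robin picks any available DC in the domain -->
  <add name="ADService" connectionString="LDAP://domain.local" />
</connectionStrings>
````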
Also try running nslookup domain.local a couple of times in succession in a command prompt to see the order changing.
I'm setting up multiple SQL instances on an active/active cluster. On our existing SQL cluster, the cluster name is SQLCLUSTER, but we access the instances as SQLCLUSTERINST1\Instance1, SQLCLUSTERINST2\Instance2, etc. Since each instance has its own IP and network name anyway, can I install SQL as the "default" instance on each network name? I'd really like to access my instances without having to give the instance name (i.e., instead of the above, just SQLCLUSTERINST1, SQLCLUSTERINST2, etc.), but my understanding of SQL is that, even in a cluster, the instance name is required, even though the IP already uniquely identifies an instance.
Does anybody know if I can do this? I'm about to install the first instance, and I wanted an answer before I start installing them as named instances if I don't need to. It just seems redundant, and potentially unnecessary, to have to give both the instance cluster name and the instance name to connect to a server when the instance cluster name alone would uniquely identify a SQL instance as-is. I would expect one default instance per cluster group (since they'd share an IP), but not per cluster.
You can only use a default instance in an active/passive cluster. The reason is that you cannot have multiple default instances installed on the same server, and clustering requires the instance to be installed on each node of the cluster to support failover.
I ended up finding a workaround for this. While I installed named instances on the cluster, I can access them using port 1433 on each DNS name, so I don't have to provide the instance name to connect, which is what I was after.
To accomplish this, I had to modify the listener configuration to force each instance to listen on port 1433 on its dedicated IP, rather than relying on dynamic ports and the SQL Browser.
I've detailed the steps on my blog
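The payoff is that, with each instance listening on port 1433 on its own dedicated IP, clients can connect with just the network name. A sketch assuming pyodbc:
````
import pyodbc

# No instance name required: the network name resolves to the
# instance's dedicated IP, where it listens on the default port
# 1433, so the SQL Browser service is never consulted.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=SQLCLUSTERINST1;"   # instead of SQLCLUSTERINST1\Instance1
    "DATABASE=master;"
    "Trusted_Connection=yes;"
)
````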
Good idea, rwmnau. I haven't read your blog post yet, but I suspect the limitation revolves around registry keys or directory structures. Remember, each node only has one registry hive for SQL Server. There's a registry key that lists the instances on the box; it's a space-separated list, and I'm pretty sure that list has to have distinct values in it, so you can't have more than one MSSQLSERVER instance (the internal instance name for default instances is MSSQLSERVER). So I think, if nothing else, there's your limitation. However, I do think you have a wonderful idea with using port 1433 for all instances involved. Good job and thanks for sharing. I think I might try that myself on my next cluster!