How to Configure Keepalived So That When the Master Becomes Available After Failing, It Will Not Route Back to the Master - ubuntu-18.04

How can I configure Keepalived so that when the Master becomes available again after failing, traffic continues to route to the Backup until the Backup goes down? My current configuration always routes to the Master if it is available: when the Master goes down, it routes to the Backup, and when the Master comes back, it routes to the Master again. I don't want that. I want it to keep routing to whichever server is currently serving until that server goes down, not always to the Master. You can think of it as having two Masters. Is this possible?
The servers run Ubuntu 18.04.

You can do this by defining state MASTER on both nodes and setting priority to the same value on both.
You can use check scripts to decrease the priority when you detect a failure.
When the failure condition is resolved, both nodes will again have the same priority, but the one that is currently master will continue being the master, because a node only preempts the active master when its advertised priority is strictly higher.
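A minimal keepalived.conf sketch of that setup, with placeholder interface name, virtual IP, and check-script path; the same configuration goes on both nodes:

vrrp_script chk_service {
    script "/usr/local/bin/check_service.sh"   # hypothetical health check; exits non-zero on failure
    interval 2
    weight -50                                 # subtract 50 from priority while the check fails
}

vrrp_instance VI_1 {
    state MASTER               # MASTER on BOTH nodes
    interface eth0             # placeholder NIC name
    virtual_router_id 51
    priority 100               # the SAME value on BOTH nodes
    advert_int 1
    virtual_ipaddress {
        192.0.2.10/24          # placeholder virtual IP
    }
    track_script {
        chk_service
    }
}

While a node's check fails, its effective priority drops to 50 and the peer takes over; once the check passes again, both nodes are back at 100 and whichever one currently holds the virtual IP keeps it.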

How can I get alerted if the master's GTID differs from the slave's?

MaxScale distributes the requests to the MariaDB master/slave server on which the database is located.
What I need is a script, running as a cron job or something similar, that verifies the GTIDs of the master and the slaves. If a slave's GTID differs from the master's GTID, I want to be informed/alerted via email.
Unfortunately, I have no idea whether this is possible and how to do it.
You can enable gtid_strict_mode to automatically stop replication if GTIDs from the same domain conflict with what is already in the binlogs. If you are using MaxScale, it will automatically detect this and stop routing to that server.
Note that this will not prevent transactions from other GTID domains from causing problems with your data; it just means you'll have to pay some attention if you're using multi-domain replication.
If you want to be notified of this, you can use the script option in MaxScale to trigger a custom script whenever a server stops replicating.
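For the cron-based check itself, here is a minimal shell sketch, assuming MariaDB GTID replication; the hostnames, credentials, and recipient address are placeholders. It compares the master's @@gtid_binlog_pos with the slave's @@gtid_slave_pos and mails on a mismatch:

#!/bin/sh
# Placeholders -- adjust to your environment.
MASTER=master.example.com
SLAVE=slave.example.com
DBUSER=monitor
DBPASS=secret

master_gtid=$(mysql -h "$MASTER" -u "$DBUSER" -p"$DBPASS" -N -B -e "SELECT @@gtid_binlog_pos;")
slave_gtid=$(mysql -h "$SLAVE" -u "$DBUSER" -p"$DBPASS" -N -B -e "SELECT @@gtid_slave_pos;")

if [ "$master_gtid" != "$slave_gtid" ]; then
    printf 'master: %s\nslave:  %s\n' "$master_gtid" "$slave_gtid" |
        mail -s "GTID mismatch between $MASTER and $SLAVE" dba@example.com
fi

Note that the two positions differ transiently while a slave is still applying events, so in practice you may want to alert only when the mismatch persists across several consecutive runs.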

Offline and online execution problem with multiple incoming queries

I have a SQL Server service and multiple Windows Services doing background work on the same server.
One of them (I'm calling it "A") has a routine that executes "single_user/offline" and "online/multi_user" against active databases to do some backup operations at midnight. The other ones (I'm calling them "B") execute multiple queries against those databases.
The problem is the following:
1. Windows Service "A" executes SET ONLINE.
2. Windows Service "B" executes a random SELECT.
3. Windows Service "A" tries to execute SET MULTI_USER. This execution fails because there is an active connection from Windows Service "B".
I've tried executing SET ONLINE and SET MULTI_USER in the same CommandText of the SqlCommand, but this doesn't block the incoming query from Windows Service "B", which breaks my process and leaves the database locked (because of SINGLE_USER).
How can I execute the ONLINE and MULTI_USER commands in one step on Windows Service "A" so that Windows Service "B" is either cancelled or waits for the process to finish? (It's not a problem if Windows Service "B" is cancelled.)
Could sp_detach_db or sp_attach_db be useful?
It sounds like Service A is closing its connection after bringing the database into single-user mode. This frees up Service B to become the single user, at which point Service A can no longer change the mode until it can grab the single connection back, which it won't be able to do as long as Service B, or any other client for that matter, has it.
I can think of a couple of things you could do here:
Once your offline operations are complete, begin polling until Service A can become the single user again.
Find the SPID of the connection from Service B and kill it (see the sketch after this list).
See the limitations and restrictions section about single-user mode at this link for more info.
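A minimal T-SQL sketch of the kill-the-SPID option; the database name MyDatabase is a placeholder, and it assumes Service A runs this on its own connection:

-- Find another session connected to the target database and kill it,
-- so this session can reclaim the single-user slot.
DECLARE @spid int, @sql nvarchar(20);

SELECT TOP (1) @spid = session_id
FROM sys.dm_exec_sessions
WHERE database_id = DB_ID(N'MyDatabase')    -- placeholder name
  AND session_id <> @@SPID;

IF @spid IS NOT NULL
BEGIN
    SET @sql = N'KILL ' + CAST(@spid AS nvarchar(10));
    EXEC (@sql);    -- KILL does not accept a variable, hence dynamic SQL
END;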
Thanks to all, but I re-analyzed the problem I was having from the beginning and concluded that SET SINGLE_USER was not necessary, since the process does not require actions in that mode. In the end, SET OFFLINE and SET ONLINE alone were enough to prevent the intermediate connection problems.
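A rough sketch of that final pattern, with MyDatabase again as a placeholder:

ALTER DATABASE MyDatabase SET OFFLINE WITH ROLLBACK IMMEDIATE;  -- drop active connections
-- ... perform the midnight backup work while nobody can connect ...
ALTER DATABASE MyDatabase SET ONLINE;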

Using SQL Server Service Broker with multiple routes

When using the SQL Server Service Broker - if I had a service with two routes configured and I executed the BEGIN DIALOG statement without specifying the desired target broker instance, which of the possible destinations would it pick as the destination for the message?
I realise that with BEGIN DIALOG I can explicitly target a specific broker instance, but that is optional. What would happen without it? Would the message be sent to both routes?
I can't find the supporting documentation right now, but my memory says that it will choose one of the routes arbitrarily. It was meant as a means of load balancing among n databases that provide the same processing capability, where you as the sender of the message don't care which of them actually does the processing.
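For illustration, the two forms of BEGIN DIALOG look like this; the service, contract, and broker-instance names are made up:

DECLARE @handle uniqueidentifier;

-- Explicitly pinning the target broker instance:
BEGIN DIALOG CONVERSATION @handle
    FROM SERVICE [//Example/SenderService]
    TO SERVICE N'//Example/TargetService',
               N'11111111-2222-3333-4444-555555555555'   -- target broker GUID
    ON CONTRACT [//Example/Contract];

-- Without it: when more than one route matches the target service name,
-- Service Broker picks one of the matching routes for the new conversation.
BEGIN DIALOG CONVERSATION @handle
    FROM SERVICE [//Example/SenderService]
    TO SERVICE N'//Example/TargetService'
    ON CONTRACT [//Example/Contract];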

Block all connections to a database db2

I'm working on logs. I want to reproduce a log in which the application fails to connect to the server.
Currently the commands I'm using are
db2 force applications all
This closes all the connections, and then I deactivate each database one by one using
db2 deactivate db "database_name"
What happens is that it temporarily blocks the connections, and after a minute my application is able to create the connection again, due to which I am not able to regenerate the log. Any ideas how I can do this?
What you are looking for is QUIESCE.
By default, users can connect to a database. It then becomes active and its internal in-memory data structures are initialized. When the last connection closes, the database becomes inactive. Activating a database initializes it and leaves it "ready to use".
Quiescing a database puts it into an administrative state: regular users cannot connect. You can quiesce a single database or the entire instance. See the docs for some options to manage access to quiesced instances. The following forces all users off the current database and keeps them away:
db2 quiesce db immediate
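A sketch of the full reproduce-and-restore sequence, with MYDB as a placeholder database name:

db2 connect to MYDB
db2 quiesce database immediate force connections   # kick regular users off and keep them off
# ... let the application attempt to connect and capture the failure log ...
db2 unquiesce database                             # let regular users back in
db2 connect reset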
If you want to produce a connection error for an app, there are other options. Have you ever tried to connect to a non-existing port, one Db2 is not listening on? Or revoke the connect privilege for the user trying to connect.
There are several testing strategies that can be used; they involve disrupting the network connection between client and server:
Alter the IP routing table on the client to route the DB2 server address to a non-existent subnet (see the sketch after this list)
Route the connection via a proxy that can be turned off; there is a special proxy, Toxiproxy, which was designed for the purpose of testing network disruptions
Pull the Ethernet cable from the client machine, observe, then plug it back in (I've done this)
These approaches have the advantage of not disabling the DB2 server for other testing in progress.
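A sketch of the first strategy on a Linux client; the server address 203.0.113.50 is a placeholder, and the commands need root:

ip route add blackhole 203.0.113.50/32   # the DB2 server becomes unreachable from this client
# ... run the application and capture the connection-failure log ...
ip route del blackhole 203.0.113.50/32   # restore connectivity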

Documentation on PostgreSQL service reload not interrupting open transactions?

I have a version 9.5 PostgreSQL database in production that has constant traffic. I need to change a value in the pg_hba.conf file. I have confirmed on a test server that this can be put into effect by reloading the postgresql service.
I have read in other posts and sites that calling pg_ctl reload does not cause interruptions of live connections in PostgreSQL, e.g. https://dba.stackexchange.com/questions/41517/is-it-safe-to-call-pg-ctl-reload-while-doing-heavy-writes
But I am trying to find concrete documentation stating that calling pg_ctl reload or service postgresql-9.5 reload does not interrupt or affect any open transactions or ongoing queries on the db.
Here it is, right from the horse's mouth:
reload mode simply sends the postgres process a SIGHUP signal, causing it to reread its configuration files (postgresql.conf, pg_hba.conf, etc.). This allows changing of configuration-file options that do not require a complete restart to take effect.
This signal is used by lots of other servers including Apache, NGINX etc to reread the configuration files without dropping open connections.
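For reference, these are equivalent ways of sending that SIGHUP; the data-directory path is a placeholder for your cluster:

pg_ctl reload -D /var/lib/pgsql/9.5/data   # directly via pg_ctl
service postgresql-9.5 reload              # via the init script, as in the question
psql -c "SELECT pg_reload_conf();"         # from SQL, as a superuser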
If you are unconvinced, try this after opening psql, pgAdmin, or whatever client you use:
START TRANSACTION;
/* open a console and run /etc/init.d/postgresql reload or the equivalent for your system */
INSERT INTO my_table(id, name) VALUES (1, '1');
COMMIT;
If the client has been disconnected, you will be notified.
