How can I get alerted if a slave's GTID differs from the master's?

MaxScale distributes requests to our MariaDB servers, routing each query to the master or slave server that holds the database.
What I need is a script, run from cron or something similar, that compares the GTIDs of the master and the slaves. If a slave's GTID differs from the master's GTID, I want to be informed/alerted via email.
Unfortunately I have no idea whether this is possible and, if so, how to do it.

You can enable gtid_strict_mode to automatically stop replication if GTIDs from the same domain conflict with what is already in the binlogs. If you are using MaxScale, it will automatically detect this and stop using that server.
Note that this will not prevent transactions from other GTID domains from causing problems with your data. This just means you'll have to pay some attention if you're using multi-domain replication.
If you want to be notified of this, you can use the script option in MaxScale to have a custom script launched whenever a server stops replicating.
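For the cron-style check itself, here is a minimal sketch in C#, assuming the MySqlConnector ADO.NET driver; the host names, credentials and SMTP server are placeholders. It compares the master's gtid_current_pos with each slave's gtid_slave_pos and sends an email on a mismatch. A plain string comparison is only meaningful with a single replication domain, and a slave that is merely lagging for a moment will also trigger the alert, so treat this as a starting point rather than a finished monitor.

using System;
using System.Net.Mail;
using MySqlConnector; // assumption: the MySqlConnector driver; MySql.Data exposes the same classes

class GtidCheck
{
    static string QueryScalar(string host, string sql)
    {
        using (var conn = new MySqlConnection($"Server={host};User ID=monitor;Password=secret"))
        using (var cmd = new MySqlCommand(sql, conn))
        {
            conn.Open();
            return (string)cmd.ExecuteScalar();
        }
    }

    static void Main()
    {
        // Placeholder host names; the slave list could come from a config file.
        var masterPos = QueryScalar("master.example.com", "SELECT @@gtid_current_pos");
        foreach (var slave in new[] { "slave1.example.com", "slave2.example.com" })
        {
            var slavePos = QueryScalar(slave, "SELECT @@gtid_slave_pos");
            if (slavePos != masterPos)
            {
                using (var smtp = new SmtpClient("smtp.example.com"))
                {
                    smtp.Send(new MailMessage("monitor@example.com", "dba@example.com",
                        $"GTID mismatch on {slave}",
                        $"master={masterPos}, slave={slavePos}"));
                }
            }
        }
    }
}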

Related

Using SQL Server Service Broker with multiple routes

When using the SQL Server Service Broker - if I had a service with two routes configured and I executed the BEGIN DIALOG statement without specifying the desired target broker instance, which of the possible destinations would it pick as the destination for the message?
I realise with BEGIN DIALOG I can explicitly target a specific broker, but this is only optional. What would happen without it? Would the message be sent to both routes?
I can't find the supporting documentation right now, but my recollection is that it will choose one of the routes arbitrarily. It was meant as a way to load balance among n databases that provide the same processing capability, where you, as the sender of the message, don't care which of them actually does the processing.
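To make the scenario concrete, here is a hedged illustration in C# of a dialog begun without the optional broker instance GUID; the service, contract and message type names are placeholders. If several routes match 'TargetService', the classifier picks one of them for this conversation; the message is not sent down both routes.

using System.Data.SqlClient;

const string batch = @"
DECLARE @h UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE [InitiatorService]
    TO SERVICE 'TargetService'              -- no broker instance GUID supplied
    ON CONTRACT [MyContract]
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h
    MESSAGE TYPE [MyMessage] (N'<payload/>');";

// Placeholder connection string; the batch runs on the initiator database.
using (var conn = new SqlConnection("Data Source=.;Initial Catalog=InitiatorDb;Integrated Security=True"))
using (var cmd = new SqlCommand(batch, conn))
{
    conn.Open();
    cmd.ExecuteNonQuery();
}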

Automatic failover with SQL mirroring and connection strings

I have 3 servers set up for SQL mirroring and automatic failover using a witness server. This works as expected.
Now my application that connects to the database seems to have a problem when a failover occurs: I need to intervene manually and change the connection strings for it to connect again.
The best solution I've found so far involves using the Failover Partner parameter of the connection string; however, it's neither intuitive nor complete: Data Source="Mirror";Failover Partner="Principal", found here.
From the example in the blog above (scenario #3), when the first failover occurs and the principal (the failover partner) is unavailable, the data source is used instead (which is the new principal). If it fails over again (and I only tried within a limited period), it then comes up with an error message. This happens because the connection string is cached, so until it is refreshed it will keep producing an error (it seems the connection string refreshes about 5 minutes after it encounters an error). If after a failover I swap the data source and failover partner, I get one more silent failover again.
Is there a way to achieve fully automatic failover for applications that use mirroring databases too (without ever seeing the error)?
I can see potential workarounds using custom scripts that would poll the currently active database node name and adjust the connection string accordingly, but that seems like overkill at the moment.
Read the blog post here
http://blogs.msdn.com/b/spike/archive/2010/12/15/running-a-database-mirror-setup-with-the-sqlbrowser-service-off-may-produce-unexpected-results.aspx
It explains what is happening: the failover partner is actually being read from the SQL Server, not from your config. Run the query in that post to find out what is actually being used as the failover server. It will probably be a machine name that is not discoverable from where your client is running.
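If you want to check this from the client side, a query along the following lines (a hedged sketch, not necessarily the exact query from the linked post) asks the server which partner instance it reports for each mirrored database; the connection details are placeholders.

using System;
using System.Data.SqlClient;

// The partner name returned here is what the client-side failover logic will use,
// so verify it is resolvable from the machine the application runs on.
const string connStr = "Data Source=PrincipalServer;Initial Catalog=master;Integrated Security=True";
using (var conn = new SqlConnection(connStr))
using (var cmd = new SqlCommand(
    @"SELECT DB_NAME(database_id) AS db_name, mirroring_partner_instance
      FROM sys.database_mirroring
      WHERE mirroring_guid IS NOT NULL;", conn))
{
    conn.Open();
    using (var reader = cmd.ExecuteReader())
        while (reader.Read())
            Console.WriteLine(reader["db_name"] + ": partner = " + reader["mirroring_partner_instance"]);
}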
You can clear the connection pool after a failover has happened. Not very nice, I know ;-)
// ClearAllPools resets (or empties) the connection pool.
// If there are connections in use at the time of the call,
// they are marked appropriately and will be discarded
// (instead of being returned to the pool) when Close is called on them.
System.Data.SqlClient.SqlConnection.ClearAllPools();
We use it when we change an underlying server via SQL Server alias, to enforce a "refresh" of the server name.
http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlconnection.clearallpools.aspx
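The pool only needs to be cleared when a connection attempt actually fails, so a common pattern (a hedged sketch; the helper name is illustrative) is to catch the failure, call ClearAllPools, and retry:

using System.Data.SqlClient;
using System.Threading;

static class FailoverHelper
{
    public static SqlConnection OpenWithFailoverRetry(string connectionString, int retries = 3)
    {
        for (var attempt = 0; ; attempt++)
        {
            var conn = new SqlConnection(connectionString);
            try
            {
                conn.Open();
                return conn;
            }
            catch (SqlException) when (attempt < retries)
            {
                // The pooled connections still point at the old principal; discard them and retry.
                conn.Dispose();
                SqlConnection.ClearAllPools();
                Thread.Sleep(1000 * (attempt + 1));
            }
        }
    }
}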
The solution is to turn connection pooling off with Pooling=false.
Whilst this has minimal impact on small applications, I haven't tested it with applications that receive hundreds of requests per minute (or more), and I'm not sure what the implications are. Anyone care to comment?
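For reference, pooling is just another connection-string keyword; a hypothetical example:

// Every Open() now creates a fresh physical connection, so nothing stale is ever reused.
var connectionString =
    "Data Source=PrincipalServer;Failover Partner=MirrorServer;" +
    "Initial Catalog=MyDatabase;Integrated Security=True;Pooling=false;";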
Try this connectionString:
connectionString="Data Source=[MSSQLPrincipalServerIP,MSSQLPORT];Failover Partner=[MSSQLMirrorServerIP,MSSQLPORT];Initial Catalog=DatabaseName;Persist Security Info=True;User Id=userName; Password=userPassword.; Connection Timeout=15;"
If you are developing in .NET, you can try ObjAdoDBLib, or PigSQLSrvLib and PigSQLSrvCoreLib; the code then becomes simple.
Example code:
Instantiate the object:
With ObjAdoDBLib:
Me.ConnSQLSrv = New ConnSQLSrv(Me.DBSrv, Me.MirrDBSrv, Me.CurrDB, Me.DBUser, Me.DBPwd, Me.ProviderSQLSrv)
With PigSQLSrvLib or PigSQLSrvCoreLib:
Me.ConnSQLSrv = New ConnSQLSrv(Me.DBSrv, Me.MirrDBSrv, Me.CurrDB, Me.DBUser, Me.DBPwd)
Execute this method to automatically connect to the online database after the mirror database fails over.
Me.ConnSQLSrv.OpenOrKeepActive
For more information, see the relevant links.
https://www.nuget.org/packages/ObjAdoDBLib/
https://www.nuget.org/packages/PigSQLSrvLib/
https://www.nuget.org/packages/PigSQLSrvCoreLib/

Using NHibernate in a Windows service

I want to use NHibernate in a Windows service. If the system boots, it might start my service before the database service. In that case, the configuration of NHibernate fails and the service crashes. So now I'm wondering how I can check whether the database service has already been started. In case it has not yet started, my service should wait a bit and try again later.
If your service always runs on the same machine as SQL Server, you should use ServiceInstaller.ServicesDependedOn to tell Windows (the SCM) that you depend on 'MSSQLSERVER' (the name of the service that runs SQL Server).
From MSDN:
A service can require other services to be running before it can start. The information from this property is written to a key in the registry. When the user (or the system, in the case of automatic startup) tries to run the service, the Service Control Manager (SCM) verifies that each of the services in the array has already been started.
ServiceInstaller is the class used by InstallUtil when it installs your service. Other installation packages, including InstallShield, also support this Windows functionality, and the same dependency can be set with the sc config command (its depend= parameter).
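A hedged sketch of what that looks like in a typical ProjectInstaller; the service name MyService is a placeholder, and a named SQL Server instance would be listed as MSSQL$InstanceName instead of MSSQLSERVER:

using System.ComponentModel;
using System.Configuration.Install;
using System.ServiceProcess;

[RunInstaller(true)]
public class ProjectInstaller : Installer
{
    public ProjectInstaller()
    {
        var processInstaller = new ServiceProcessInstaller
        {
            Account = ServiceAccount.LocalSystem
        };

        var serviceInstaller = new ServiceInstaller
        {
            ServiceName = "MyService",
            StartType = ServiceStartMode.Automatic,
            // Tells the SCM not to start this service until SQL Server is running.
            ServicesDependedOn = new[] { "MSSQLSERVER" }
        };

        Installers.Add(processInstaller);
        Installers.Add(serviceInstaller);
    }
}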
So your service will only start after SQL Server is already running. But even in this case, it might still be a good idea to offload all potentially long-running startup procedures to a background thread. Do as little as possible in the OnStart method; ideally you would just spawn a new initialization thread that takes care of the NHibernate session factory initialization. If for some reason you still want to do this in OnStart, then you should consider retrying the NHibernate initialization and calling ServiceBase.RequestAdditionalTime to avoid:
Error 1053: The service did not respond to the start or control request in a timely fashion.
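Here is a hedged sketch of the background-thread variant (the class name and retry limits are made up for illustration); if you keep the initialization inside OnStart instead, pair the retries with RequestAdditionalTime as just described:

using System;
using System.Diagnostics;
using System.ServiceProcess;
using System.Threading;
using NHibernate;
using NHibernate.Cfg;

public class MyService : ServiceBase
{
    private ISessionFactory _sessionFactory;

    protected override void OnStart(string[] args)
    {
        // Return to the SCM immediately; build the session factory in the background.
        new Thread(InitializeNHibernate) { IsBackground = true }.Start();
    }

    private void InitializeNHibernate()
    {
        for (var attempt = 1; _sessionFactory == null; attempt++)
        {
            try
            {
                _sessionFactory = new Configuration().Configure().BuildSessionFactory();
            }
            catch (Exception ex)
            {
                if (attempt >= 10)
                {
                    EventLog.WriteEntry("NHibernate initialization failed: " + ex.Message,
                                        EventLogEntryType.Error);
                    Stop(); // give up and stop the service
                    return;
                }
                Thread.Sleep(TimeSpan.FromSeconds(30)); // the database may still be starting
            }
        }
    }

    protected override void OnStop()
    {
        _sessionFactory?.Dispose();
    }
}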
Ideally your service should not depend on the database availability because it might be running on a remote machine. The service is an 'always on' process that should tolerate intermittent database connectivity issues.
No clue if there are better ways, but in your service startup, check the system uptime. If this is less than, let's say, 5 minutes, wait for (5 minutes - uptime) and after that start the rest of the service as you normally would.
For how to get the uptime, see the question "Calculating server uptime gives 'The network path was not found'".
This is not a solution, however, for when your service tries to connect to a SQL Server which is down; if that happens you want to handle the exception and actually be notified that the SQL Server is down. It is very unlikely that you want the service to keep trying without you being aware that the SQL Server is down.
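A hedged sketch of that workaround (Environment.TickCount64 needs .NET Core 3.0 or later; on .NET Framework the "System Up Time" performance counter is the usual alternative):

using System;
using System.Threading;

// If the machine booted less than five minutes ago, sleep for the remainder
// before starting the rest of the service.
var uptime = TimeSpan.FromMilliseconds(Environment.TickCount64);
var graceWindow = TimeSpan.FromMinutes(5);
if (uptime < graceWindow)
    Thread.Sleep(graceWindow - uptime);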
You could use the ServiceController class and call its static method GetServices() to get the list of services. It will give you an array of services; find the right one and check its status.
See ServiceController on MSDN
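For example (a sketch; "MSSQLSERVER" is the default instance name, a named instance appears as MSSQL$InstanceName):

using System;
using System.Linq;
using System.ServiceProcess;

var sql = ServiceController.GetServices()
    .FirstOrDefault(s => s.ServiceName.Equals("MSSQLSERVER", StringComparison.OrdinalIgnoreCase));

if (sql == null)
    throw new InvalidOperationException("SQL Server service not found on this machine.");

// Block until the service reports Running (throws a TimeoutException after 5 minutes).
if (sql.Status != ServiceControllerStatus.Running)
    sql.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromMinutes(5));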
Currently I am making sure I can establish a connection to the database I need and run a default (configurable) query. If this is successful, I proceed to start the service.
What I've found is that in some cases, even if the MSSQL service is started, that doesn't guarantee that you can connect to it and execute queries against it.
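A hedged sketch of that probe (the connection string, query and retry counts are placeholders):

using System;
using System.Data.SqlClient;
using System.Threading;

static class DatabaseProbe
{
    // Returns true once a connection can be opened and the probe query executes,
    // retrying because a started MSSQL service may not yet accept queries.
    public static bool WaitForDatabase(string connectionString,
                                       string probeQuery = "SELECT 1",
                                       int attempts = 10,
                                       int delaySeconds = 30)
    {
        for (var i = 0; i < attempts; i++)
        {
            try
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(probeQuery, conn))
                {
                    conn.Open();
                    cmd.ExecuteScalar();
                    return true;
                }
            }
            catch (SqlException)
            {
                Thread.Sleep(TimeSpan.FromSeconds(delaySeconds));
            }
        }
        return false;
    }
}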

Get hostname when reading Service Broker Queue (SQL Server 2005)

I am trying to configure auditing on my SQL Server using Service Broker. I did all the configuration needed to capture the DDL events (queue, routes, endpoints, event notification). It is working properly, except that I am not able to get the hostname of the client from which the DDL event originated.
Using the Service Broker's activation procedure, I tried reading the value from the message_body, but there's no XML element that contains the hostname. I can see a value for the SPID but am unable to make use of it. Exec'ing sp_who and querying sys.sysprocesses for this SPID doesn't return any value, and running sp_who without a parameter shows only one process (I think it's the background process used by Service Broker). Is this all because the message was sent asynchronously? But why would that cause the activation context to see different data in the sys.sysprocesses view?
I am aware that there are DDL triggers that can achieve the same goal, but they seem to be tightly coupled to the command that causes them to fire, so if the trigger fails, the command will also fail.
UPDATE: I managed to retrieve the hostname by using a combination of xp_cmdshell and sqlcmd (the command-line app). But I also realized that since the message is asynchronous, it is not always reliable (the SPID that issued the DDL command might already have disconnected before the message is read from the queue).
I'm not exactly sure what you're trying to implement here, but it's expected that an activated procedure will only see a subset of rows in DMVs. This has to do with the activation context, which often impersonates a different user than the one you use when debugging the procedure. That impersonated user will only see the rows of server-level views and DMVs to which it has permissions. See here and here for more info.

Where do I begin to learn about SQL Server alerts or notifications?

I just recently started having issues with a SQL Server Agent job that contains an SSIS package to extract production data and summarize it into a separate reporting database.
I think that some of the Alerts/Notifications settings I tried playing with caused the problem as the job had been running to completion unattended for the previous two weeks.
So... Where's a good place to start reading up on SQL Agent Alerts and Notifications? I want to enable some sort of alert/notification so that I'm always informed:
That the job completes successfully (as a check to ensure that it's always executed), or
That the job ran into some sort of error, which should include enough info (such as error number) that I can diagnose the cause of the error
As always, any help will be greatly appreciated!
Books Online is probably a good place to start (or at least I like it and generally find it useful).
SQLMenace and bofe made some good points. Here's my additional two cents:
I'd recommend configuring Database Mail rather than SQL Mail (i.e. SMTP vs. MAPI, which I think is deprecated anyway). Once you get the mail profile configured, you'll also have to configure the SQL Server Agent to use that mail profile (which is just a page of settings in the agent properties), or else your SSIS job notifications won't actually get sent, even though you can successfully send a test email from Management Studio.
I don't use alerts as often as job notifications, so the only tricky thing I can recall about them is that if you're raising an error and you want the alert to email you when that happens, you have to make sure that the raised error gets written to the log. I think that just boils down to "RAISERROR ... WITH LOG"; here's the BOL link for the syntax details.
In each step of the job, click on Advanced; from there you can log to a file or to a table. This will contain all the error codes and other details about why the job failed.
You should also be able to see this from the job history.
Right-click on the job --> View History, click on the + sign to expand, then click on each step and it will be shown in the lower panel.
To set up notifications, you need to set up an operator, and then on the Notifications tab of the job you pick it from the email dropdown.
You'll want to have "When the job completes" selected on the Notifications page of the job's properties.
Just go to that dropdown and switch it to job completion instead of only on failure.
You'll also want to make sure that your server has e-mail configured. I think it's under SQL Surface Area Configuration for Features.
