Does anyone have experience with the maximum execution time of Flyway migrations?
What is the maximum execution time, if one is set by Flyway (or does this depend primarily on database settings)?
What will happen when this time is hit?
What if multiple migrations run in a chain and one of them times out; what will happen?
I have been unable to find any related information in docs or any articles.
Flyway itself currently does not set a timeout or maximum execution time. The timeout is managed by the target database and the settings on your connection to it.
There is a GitHub issue thread on this topic if you would like a timeout to be added and want to share your scenario with the Flyway team.
What happens when you hit a timeout (or if there is a network or other failure which causes the query to disconnect) will vary depending on how you are using transactions and whether your target database supports DDL statements within a transaction.
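If you do want a hard cap today, it has to come from the database side or from your connection settings rather than from Flyway. A hedged sketch on PostgreSQL, for example (the role name is made up; use whatever user Flyway connects as):

-- Not a Flyway feature: cap statement duration at the database level for the migration user.
ALTER ROLE flyway_migrator SET statement_timeout = '30min';

Other engines have their own equivalents (for example MySQL's max_execution_time for SELECT statements), so check your target database's documentation.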
I have a Hangfire service running on one of my servers. I'm a DBA, and sometimes I'm asked to trigger jobs using the Dashboard, but connecting to the jobs' server takes me a lot of time due to connectivity and security issues.
To work around that, I want to trigger those jobs by inserting into Hangfire's tables in the database. I can already query those tables to find which job executed when and whether it failed, succeeded or is still enqueued. Does anyone know an approach to do this?
I've included a sample of two tables which I think will be used for this trick; their names are Hash and Set respectively:
Hangfire normally exposes a GUI, much like Swagger in .NET (http://localhost:5000/hangfire), and there should be an immediate trigger feature there. If not, a second option is changing the cron expression to run every minute or maybe every 30 seconds.
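If you do go the route of editing the tables directly, here is a heavily hedged sketch of that cron-change idea against the default SQL Server storage schema. The HangFire.Hash key/field layout and the job id are assumptions from a default install, and Hangfire may cache recurring-job definitions, so verify it actually picks the change up before relying on it:

-- Assumed schema: [HangFire].[Hash] (Key, Field, Value); 'my-recurring-job' is a placeholder id.
UPDATE [HangFire].[Hash]
SET [Value] = N'* * * * *'            -- run every minute
WHERE [Key] = N'recurring-job:my-recurring-job'
  AND [Field] = N'Cron';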
I have a problem using the AWS Database Migration Service to implement transactional replication from SQL Server as the source database engine; any help is highly appreciated.
The 'safeguardPolicy' connection attribute defaults to 'RELY_ON_SQL_SERVER_REPLICATION_AGENT'. The tool will start mimicking a transaction in the database to prevent the log from being reused and to be able to read as many changes as possible from the active log.
But what is the intended behavior of these safeguard transactions? Will those sessions be stopped at some point? What is the mechanism to start, run for some time, and stop such a transaction?
The production databases I manage are in Full recovery mode, with log backups every half hour. The log grows to an enormous size because log truncation cannot succeed while those safeguard transactions initiated by the DMS tool are open.
For now, the only solution to a transaction log filled up by this DMS behavior (log reuse blocked by LOG_SCAN) is to stop the DMS tasks and manually truncate the log to release the unused space. But that is no solution at all if we need to stop the replication every time the problem occurs, knowing that it will occur often.
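For reference, the reason SQL Server gives for refusing to truncate the log can be checked with a standard query (the database name is a placeholder):

-- Shows why log truncation is currently blocked (e.g. LOG_SCAN, REPLICATION, ACTIVE_TRANSACTION).
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyProductionDb';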
Please share some internals about the tool if possible.
Thanks
Scenario:
I'm working with a customer that has a live database. On a separate server, they have a copy of this database and they have transactional replication setup, which runs constantly. I have an SSIS package that runs on the copy of the database for up to an hour to export data to a reporting database.
When I've tested the package with replication enabled, it occasionally fails as it reads from various tables at different points of the execution. The problem is that if some data is read at an early stage, which subsequently gets deleted/inserted, other related records that are read later on effectively become orphaned and cause lookup failures. Whilst I have various safeguards to combat this, it's difficult to cater for every case as not all records have dates that I can use to limit data.
Plan:
I have been looking at pausing the replication job, so that the package can run with static data and then re-enable it once the package has run. Once the replication is enabled again, all of the transactions from the live database that were generated during the package execution should then be applied to the copy.
Problem:
I've done some reading around the various Replication Agents used for transactional replication, but I'm not entirely sure what the minimum requirement is for pausing the replication.
At the moment I'm looking at pausing the Distribution Agent and the Log Reader Agent to achieve what I want to do. The question is, do I need to pause both agent jobs or can I pause one or the other so that the transactions build up and are applied once the agent is enabled?
I'm not sure whether some of this depends on the specific configuration or setup, but I can provide further details, so please comment if more information is required.
but I'm not entirely sure what the minimum requirement is for pausing the replication.
Replication works as follows:
The Log Reader Agent reads the transaction log on the publisher, inserts those records into the distribution database, and marks the corresponding log records as inactive (so that transaction log space can be reused).
The Distribution Agent then reads those records from the distribution database and applies them to the subscriber database.
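A rough way to see this pipeline, and to watch the backlog grow while an agent is paused, is to count the commands sitting in the distribution database (a hedged sketch; assumes the default database name distribution):

-- Rough count of commands held in the distribution database
-- (includes rows kept until the distribution cleanup job removes them).
SELECT COUNT(*) AS pending_commands
FROM distribution.dbo.MSrepl_commands;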
When you want to stop/pause replication, you can stop:
1. The Log Reader Agent (right-click the job and stop it), or
2. The Distribution Agent (right-click the job and stop it), or
3. Both.
The question is, do I need to pause both agent jobs or can I pause one or the other so that the transactions build up and are applied once the agent is enabled?
If you pause only the Distribution Agent (I would do the same), the Log Reader Agent will keep doing its job, and there will be no impact on log reusability at the publisher.
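If you prefer to script the pause instead of right-clicking the job, here is a sketch using the standard msdb job procedures (the job name is a placeholder; copy the real one from Job Activity Monitor or Replication Monitor):

-- Pause the Distribution Agent job before the SSIS package runs...
EXEC msdb.dbo.sp_stop_job @job_name = N'<distribution agent job name>';
-- ...and start it again afterwards so the queued transactions are applied.
EXEC msdb.dbo.sp_start_job @job_name = N'<distribution agent job name>';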
There are also caveats: if replication latency crosses the maximum retention limit, you will need to reinitialize replication, though that limit is usually large, something like 24 hours.
You can also use the link below to monitor replication after it has been re-enabled:
https://www.brentozar.com/archive/2014/07/monitoring-sql-server-transactional-replication/
I have two SQL Server 2005 instances that are geographically separated. Important databases are replicated from the primary location to the secondary using transactional replication.
I'm looking for a way that I can monitor this replication and be alerted immediately if it fails.
We've had occasions in the past where the network connection between the two instances has gone down for a period of time. Because replication couldn't occur and we didn't know, the transaction log blew out and filled the disk, causing an outage on the primary database as well.
My Google searching some time ago led us to monitoring the MSrepl_errors table and alerting when there were any entries, but this simply doesn't work. The last time replication failed (last night, hence the question), errors only hit that table when it was restarted.
Does anyone else monitor replication and how do you do it?
Just a little bit of extra information:
It seems that last night the problem was that the Log Reader Agent died and didn't start up again. I believe this agent is responsible for reading the transaction log and putting records in the distribution database so they can be replicated on the secondary site.
As this agent runs inside SQL Server, we can't simply make sure a process is running in Windows.
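For what it's worth, the replication agents do surface as SQL Server Agent jobs, so their last activity can be checked from msdb rather than as a Windows process. A hedged sketch, using the standard job category names from a default install:

-- Last requested/stopped times for replication agent jobs; stop_execution_date is NULL while running.
SELECT j.name, ja.run_requested_date, ja.stop_execution_date
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.syscategories AS c ON c.category_id = j.category_id
LEFT JOIN msdb.dbo.sysjobactivity AS ja ON ja.job_id = j.job_id
WHERE c.name IN (N'REPL-LogReader', N'REPL-Distribution');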
We have emails sent to us for Merge Replication failures. I have not used Transactional Replication but I imagine you can set up similar alerts.
The easiest way is to set it up through Replication Monitor.
Go to Replication Monitor and select a particular publication. Then select the Warnings and Agents tab and then configure the particular alert you want to use. In our case it is Replication: Agent Failure.
For this alert, we have the Response set up to Execute a Job that sends an email. The job can also do some work to include details of what failed, etc.
This works well enough for alerting us to the problem so that we can fix it right away.
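If you ever want to script this rather than click through Replication Monitor, here is a hedged sketch using the standard Agent procedures (14151 is, as far as I recall, the generic "replication agent failed" message, so verify it in sys.messages; the job and operator names are placeholders):

-- Alert on the "replication agent failed" message and respond by running a notification job.
EXEC msdb.dbo.sp_add_alert
    @name = N'Replication: agent failure',
    @message_id = 14151,
    @severity = 0,
    @job_name = N'Send replication failure email';   -- placeholder job that emails the details
EXEC msdb.dbo.sp_add_notification
    @alert_name = N'Replication: agent failure',
    @operator_name = N'DBA Team',                     -- placeholder operator
    @notification_method = 1;                         -- 1 = email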
You could run a regular check that data changes are taking place, though this could be complex depending on your application.
If you have some form of audit trail table that is very regularly updated (e.g. our main product has a base audit table that lists all actions that result in data being updated or deleted), then you could query that table on both servers and make sure the results you get back are the same. Something like:
SELECT CHECKSUM_AGG(CHECKSUM(*))
FROM audit_base
WHERE action_timestamp BETWEEN <time1> AND <time2>
where <time1> and <time2> are round values chosen to allow for different delays in contacting the databases. For instance, if you are checking at ten past the hour you might check items from the start of the last hour to the start of this hour. You now have two small values that you can transmit somewhere and compare. If they are different then something has most likely gone wrong in the replication process; have whatever process does the check/comparison send you an email and an SMS so you know to check and fix any problem that needs attention.
By using CHECKSUM_AGG, the amount of data returned for each table is very small, so the bandwidth used by the checks will be insignificant. You just need to make sure your checks are not too expensive in the load they place on the servers, and that you don't check data that might be part of open replication transactions and so might be expected to differ at that moment (hence checking the audit trail a few minutes back in time instead of right now in my example); otherwise you'll get too many false alarms.
Depending on your database structure the above might be impractical. For tables that are not insert-only (no updates or deletes) within the timeframe of your check (like an audit-trail as above), working out what can safely be compared while avoiding false alarms is likely to be both complex and expensive if not actually impossible to do reliably.
You could manufacture a rolling insert-only table if you do not already have one, by having a small table (containing just an indexed timestamp column) to which you add one row regularly; this data serves no purpose other than to exist so you can check that updates to the table are getting replicated. You can delete data older than your checking window, so the table shouldn't grow large. Only testing one table does not prove that all the other tables are replicating, but finding an error in this one table would be a good "canary" check (if this table isn't updating in the replica, then the others probably aren't either).
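A minimal sketch of such a heartbeat table (all names are made up for illustration; the table would also need to be added to the publication as an article so it actually replicates):

-- Heartbeat table: one indexed timestamp column, inserted into by an Agent job on the publisher.
CREATE TABLE dbo.replication_heartbeat
(
    beat_time datetime2 NOT NULL CONSTRAINT PK_replication_heartbeat PRIMARY KEY
);

-- Run from a SQL Server Agent job every minute or so on the publisher.
INSERT INTO dbo.replication_heartbeat (beat_time) VALUES (SYSUTCDATETIME());
-- Trim anything older than the checking window so the table stays small.
DELETE FROM dbo.replication_heartbeat WHERE beat_time < DATEADD(HOUR, -2, SYSUTCDATETIME());

-- Compare the latest beat on publisher and subscriber; a growing gap means replication has stalled.
SELECT MAX(beat_time) AS last_beat FROM dbo.replication_heartbeat;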
This sort of check has the advantage of being independent of the replication process - you are not waiting for the replication process to record exceptions in logs, you are instead proactively testing some of the actual data.
I have set up transactional replication between two SQL Servers on different ends of a relatively slow VPN connection. The setup is your standard "load snapshot immediately" kind of thing where the first thing it does after initializing the subscription is to drop and recreate all tables on the subscriber side and then start doing a BCP of all the data. The problem is that there are a few tables with several million rows in them, and the process either a) takes a REALLY long time or b) just flat out fails. The messages I keep getting when I look in Replication Monitor are:
The process is running and is waiting for a response from the server.
Query timeout expired
Initializing
It then tries to restart the bulk loading process (skipping any BCP files that it has already loaded).
I am currently stuck where it just keeps doing this over and over again. It's been running for a couple days now.
My questions are:
Is there something I could do to improve this situation given that the network connection is so slow? Maybe some setting or something? I don't mind waiting a long time as long as the process doesn't keep timing out.
Is there a better way to do this? Perhaps make a backup, zip it, copy it over and then restore? If so, how would the replication process know where to pick up when it starts applying the transactions, since updates will be occurring between the time I make the backup and the time it is restored and running on the other side?
Yes.
You can apply the initial snapshot manually.
It's been a while for me, but the link (into BOL) has alternatives to setting up the subscriber.
Edit: From BOL How-tos, Initialize a Transactional Subscriber from a Backup
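A hedged sketch of what that looks like in script form (server, database and path names are placeholders, and the publication also has to allow initialization from backup):

-- Create the subscription telling replication to pick up from a restored backup
-- instead of generating and bulk-copying a snapshot.
EXEC sp_addsubscription
    @publication = N'MyPublication',
    @subscriber = N'SubscriberServer',
    @destination_db = N'MyDatabase',
    @sync_type = N'initialize with backup',
    @backupdevicetype = N'disk',
    @backupdevicename = N'\\SubscriberServer\backups\MyDatabase.bak';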
In SQL 2005, you have a "compact snapshot" option that allows you to reduce the total size of the snapshot. When applied over a network, snapshot items "travel" compacted to the subscriber, where they are then expanded.
I think you can easily figure out the potential speed gain by comparing the sizes of standard and compacted snapshots.
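Assuming this refers to the publication's "compress snapshot files" setting (my reading of it, and if I recall correctly it only applies to snapshots written to an alternate snapshot folder), it can also be turned on with a script along these lines (the publication name is a placeholder):

-- Compress snapshot files so less data travels over the slow link to the subscriber.
EXEC sp_changepublication
    @publication = N'MyPublication',
    @property = N'compress_snapshot',
    @value = N'true';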
By the way, there is a (quite) similar question here for merge replication, but I think that at the snapshot level there is no difference.