I take a transaction log backup every 30 minutes, but every day at about 05:30 the backup takes about 2:30, and this duration increases each day. Can anyone help?
From the size of the backup, I would guess that there is a scheduled process occurring between 5am and 5:30am that is generating roughly 100x as many transactions as in any other 30-minute period - possibly some processing over the entire database, which is increasing in complexity (and thus generating more transaction log) as the database grows.
Check the schedule of your maintenance plans. You might be running a re-indexing job at that time.
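One way to confirm this from history is to compare log backup sizes per window using msdb's backup history; a minimal sketch (the seven-day lookback is just an example):

-- Sketch: find which log backup windows produce the most log.
-- msdb.dbo.backupset keeps one row per backup; type 'L' = log backup.
SELECT bs.database_name,
       bs.backup_start_date,
       bs.backup_finish_date,
       bs.backup_size / 1048576.0 AS backup_size_mb,
       DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) AS duration_s
FROM msdb.dbo.backupset AS bs
WHERE bs.type = 'L'
  AND bs.backup_start_date >= DATEADD(DAY, -7, GETDATE())
ORDER BY bs.backup_size DESC;

If the largest rows cluster around the 05:00-05:30 window, that points straight at whatever is scheduled there.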
I have an issue where a full backup on an MSSQL server randomly takes ~2.5x the usual time to complete. Is there any option in SQL Server Management Studio, or a stored procedure, that would tell me what caused this giant slowdown?
I have many MSSQL servers running; that particular one is internal only, and the backup happens at 5am while there is no one in the office until 7:30-8:00. The backup typically takes a steady 14 minutes 20 seconds (plus or minus 10 seconds), but once or twice per week it suddenly takes upward of 45 minutes.
The backup size is growing, but only slightly, around 40-50 MB per day, while the backup currently sits at around 21 GB. The daily transaction log backups are stable in size too. When I have a slow backup, the previous day's transaction log backup is no different in size from the day before or from the other "normal" days.
The only logs I see simply give me the start time and end time of the maintenance plan, which is useless as it's just the total runtime.
MSSQL V18.7.1
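Two things may help here, sketched below under placeholder names: msdb's backup history can show whether the slow runs coincide with lower throughput (an I/O or SAN issue) rather than larger sizes, and while a slow run is in progress, sys.dm_exec_requests can show what the backup is currently waiting on.

-- Sketch 1: duration and throughput of recent full backups ('D') from msdb history.
SELECT bs.backup_start_date,
       DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) AS duration_s,
       bs.backup_size / 1048576.0 AS size_mb,
       bs.backup_size / 1048576.0
         / NULLIF(DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date), 0) AS mb_per_s
FROM msdb.dbo.backupset AS bs
WHERE bs.type = 'D'
  AND bs.database_name = N'YourDatabase'   -- placeholder name
ORDER BY bs.backup_start_date DESC;

-- Sketch 2: while a slow backup is running, check its progress and current wait.
SELECT r.session_id, r.command, r.percent_complete, r.wait_type, r.wait_time
FROM sys.dm_exec_requests AS r
WHERE r.command IN (N'BACKUP DATABASE', N'BACKUP LOG');

A steady size with collapsing throughput on the slow days would point at the I/O path or a competing workload rather than the backup itself.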
The transaction log on our databases is backed up every hour.
The log file for this database is set to auto-grow in 128 MB increments, with a 5 GB maximum.
This runs smoothly, but sometimes we get an error in our application:
The transaction log for database 'Borculo' is full due to 'LOG_BACKUP'
We got this message at 8:15 AM, while at 8:01 AM the log backup had completed (and emptied the log).
I would really like a script or command to check what caused this sudden growth.
We could back up more often (every 30 minutes) or change the size limit, but that would not solve the underlying problem.
Basically this problem should not occur with the number of transactions we have.
Probably some task is running (in our ERP) that causes this.
This does not happen every day but in the last month this is the 2nd time.
The transaction log I want to get the info from is a backed-up one, not the active one.
Can anyone point me in the right direction?
Thanks
An hourly transaction log backup means in case of a disaster you could lose up to an hour's worth of data.
It is usually advised to keep your transaction log backups as frequent as possible.
Every 15 minutes is usually a good starting point. But if it is a business-critical database, consider a transaction log backup every minute.
Also, why would you limit the size of your transaction log file? If you have more space available on the disk, allow your file to grow if it needs to grow.
It is possible that the transaction log file is getting full because some maintenance task is running (index/statistics maintenance, etc.), and because the log file is not backed up for an entire hour, the log doesn't get truncated for an hour and the file reaches its 5 GB cap. Hence the error message.
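While the error is occurring, a quick first check is the log-reuse wait status, which tells you why the log cannot currently be truncated; a minimal sketch, using the database name from the error message:

-- LOG_BACKUP means the log is waiting on a log backup; ACTIVE_TRANSACTION
-- would instead point at a long-running transaction holding the log.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'Borculo';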
Things I would do to sort this out:
Remove the file size limit, or at least increase the limit to allow it to grow bigger than 5 GB.
Take transaction log backups more frequently, maybe every 5 minutes.
Set the log file growth increment to at least 1 GB, up from 128 MB (to reduce the number of VLFs).
Monitor closely what is running on the server when the log file gets full; it is very likely a maintenance task (or maybe a bad hung connection). See the default trace sketch after this list.
Instead of setting a max limit on the log file size, set up alerts to inform you when the log file is growing too much; this will allow you to investigate the issue without any interference or potential downtime for the end users.
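To see exactly when the log grew and which login or application triggered the growth, one option is to read the auto-grow events SQL Server records in the default trace; a minimal sketch, assuming the default trace is still enabled (it is by default):

-- Sketch: log file auto-grow events from the default trace.
DECLARE @path NVARCHAR(260);
SELECT @path = path FROM sys.traces WHERE is_default = 1;

SELECT t.DatabaseName,
       t.FileName,
       t.StartTime,
       t.ApplicationName,
       t.LoginName,
       (t.IntegerData * 8) / 1024 AS growth_mb   -- IntegerData = growth in 8 KB pages
FROM fn_trace_gettable(@path, DEFAULT) AS t
WHERE t.EventClass = 93                          -- 93 = Log File Auto Grow
ORDER BY t.StartTime DESC;

The ApplicationName and LoginName columns will usually identify the ERP task directly.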
As part of our DR solution, we have attempted to enable log shipping for a database with a heavy transaction load. While the configuration completes successfully, the first transaction log backup job to kick off after the log shipping configuration finishes runs continuously, with an ever-growing backup file. On one occasion, that first transaction log backup job ran for 12 hours with a file size 3x greater than the 27 GB full backup file for the database. We killed that process. Recently, we tried a twist on the approach using a differential, as explained below, but the transaction log backup job still ran with an ever-growing file size.
This process was run during weekend low-use hours:
7:46 am – log shipping configuration kicks off
9:32 am – backup file is stored in network share folder. File size is 26.1 GB
9:30 pm – Log shipping configuration completes.
– I disable the log shipping backup, copy, and restore jobs
9:31 pm – I enter command to backup database with differential
9:33 pm – Differential completes with a file size of 768 MB.
– I re-enable the backup and copy jobs to get that process moving along after the differential
– I copy the differential file to the secondary location
9:45 pm – The first transaction log backup job kicks off
9:59 pm – After the Differential file is copied, I restore the database on Secondary using the differential
11:02 pm – The restore of the differential is still running
– The transaction log backup job that started at 9:45 pm is still running with a file size of 28 GB and still growing.
We ultimately killed this process due to space issues as the transaction log backup job never completed.
Has anyone experienced this scenario before? Is there anything we could change to improve the process time on the transaction log backup job? Given the heavy transaction load, I wonder if it would be best to implement an alternative DR solution for this particular database.
I know this may be old, but here are some pointers that should help you.
1. When the database is set to the BULK_LOGGED recovery model, log backups also contain copies of the data extents modified by minimally logged operations, so your log backups will be big.
2. Further, you might want to check what is happening during backups and restores using the trace flags below (a usage sketch follows this list):
DBCC TRACEON (3004, 3605, -1);
3. The same trace flags can be applied to restores as well.
4. Further, if the restore is taking so much time, this might be due to huge transactions that have to be rolled back. See the link below for more details:
http://www.sqlskills.com/blogs/paul/why-could-restoring-a-log-shipping-log-backup-be-slow/
5. You can also enable instant file initialization to speed up restores, as this allows data files to grow instantly.
6. You can also check for network latency using PerfMon counters.
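A usage sketch for the trace flags in point 2: TF 3004 prints backup/restore internals and TF 3605 redirects that output to the SQL Server error log, which you can read back with the (undocumented but widely used) xp_readerrorlog. Database name and path are placeholders.

DBCC TRACEON (3004, 3605, -1);

BACKUP LOG [YourDatabase]                     -- placeholder database
TO DISK = N'X:\Backups\YourDatabase.trn';     -- placeholder path

EXEC sys.xp_readerrorlog 0, 1, N'Backup';     -- current error log, filtered on 'Backup'

DBCC TRACEOFF (3004, 3605, -1);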
I have an agent job set to run log backups every two hours from 2:00 AM to 11:59 PM (leaving a window for running a full or differential backup). A similar job is set up in every one of my 50 or so instances, and I may be adding several hundred instances over time (we host SQL Servers for some of our customers). They all back up to the same SAN disk volume, which is causing latency issues and otherwise impacting performance.
I'd like to offset the job run times on each instance by 5 minutes, so that instance one would run the job at 2:00, 4:00, etc., instance two would run it at 2:05, 4:05, etc., instance three would run it at 2:10, 4:10, etc. and so on. If I offset the start time for the job on each instance (2:00 for instance one, 2:05 for instance two, 2:10 for instance three, etc.), can I reasonably expect that I will get my desired result of not having all the instances run the job at the same time?
If this is the same conversation we just had on Twitter: when you tell SQL Server Agent to run every n minutes or every n hours, the next run is based on the start time, not the finish time. So if you set a job on instance 1 to run at 2:00 and repeat every 2 hours, the 2nd run will start at 4:00, whether the first run took 1 minute, 12 minutes, or 45 minutes.
There are some caveats:
there can be minor delays due to internal agent synchronization, but I've never seen this off by more than a few seconds
if the first run at 2:00 takes more than 2 hours (but less than 4 hours), the next time the job runs will be at 6:00 (the 4:00 run is skipped, it doesn't run at 4:10 or 4:20 to "catch up")
There was another suggestion to add a WAITFOR to offset the start time (and we should discard a random WAITFOR, because that is probably not what you want: random <> unique). If you want to hard-code a different delay on each instance (1 minute, 2 minutes, etc.), then it is much more straightforward to do that with a schedule than by adding steps to all of your jobs, IMHO. A schedule-based sketch follows.
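For example, a schedule created on the second instance with its 5-minute offset baked in might look like the following sketch; the job and schedule names are placeholders, and @active_start_time is encoded as HHMMSS:

EXEC msdb.dbo.sp_add_jobschedule
     @job_name             = N'LogBackup',     -- placeholder job name
     @name                 = N'Every 2h, +05 offset',
     @freq_type            = 4,                -- daily
     @freq_interval        = 1,
     @freq_subday_type     = 8,                -- repeat in units of hours
     @freq_subday_interval = 2,                -- every 2 hours
     @active_start_time    = 20500,            -- 02:05:00
     @active_end_time      = 235900;           -- 23:59:00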
Perhaps you could set up a centralized DB that manages the "schedule" and have the jobs add/update a row when they run. Each server's job then polls that table to decide when it can start. This way any latency in one job makes the others wait, so you don't get a disparity in your timings when one of the servers is thrown off.
Being a little paranoid, I'd add a catch-all that says after "x" minutes of waiting, proceed anyway, so that one delay doesn't cascade far enough that the jobs don't run at all. A rough sketch of such a polling step follows.
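Assuming a hypothetical CentralDB.dbo.BackupSchedule table with an is_running flag, the polling step might look like this; the 30-minute cap is the catch-all:

DECLARE @waited_min INT = 0;
WHILE EXISTS (SELECT 1 FROM CentralDB.dbo.BackupSchedule WHERE is_running = 1)  -- hypothetical table
      AND @waited_min < 30                                                      -- catch-all cap
BEGIN
    WAITFOR DELAY '00:01:00';   -- poll once a minute
    SET @waited_min += 1;
END;
-- mark this instance as running, take the backup, then clear the flag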
I am a programmer, with a side job as an involuntary DBA.
I have a maintenance plan that does a full backup and a 'check database integrity' task every night, and I back up transaction logs every 10 minutes. The transaction log backup size spikes after the database backup, becoming dramatically bigger. I used to rebuild indexes and update statistics every night, and I thought that was what was causing the transaction log spike, but removing those steps didn't change anything.
Mirroring our backups over slow connections would be helped considerably if there weren't this massive spike, so I am hoping it is something I am doing wrong. Can anyone suggest anything?
If you are only running the log backup from 6am to midnight, then the very first log backup at 6am is backing up all the database activity that has occurred in the 6 hours since the last log backup.
This is entirely normal, and probably has nothing to do with the fact that your database backup takes place at 4am.
Since you are on SQL 2008, the warning in my other answer doesn't apply, and you should be fine running the log backups 24 hours a day.
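If you want to verify that the 6am spike is just the overnight gap, the LSN range of each log backup in msdb history shows exactly what each one covered; a minimal sketch with a placeholder database name:

SELECT backup_start_date,
       first_lsn,
       last_lsn,
       backup_size / 1048576.0 AS size_mb
FROM msdb.dbo.backupset
WHERE type = 'L'
  AND database_name = N'YourDatabase'   -- placeholder
ORDER BY backup_start_date;

-- An unbroken chain (each first_lsn equal to the previous last_lsn) with one
-- big size jump at 6am confirms the first backup simply covered six hours of activity.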
Is this SQL 2000?
In SQL 2000, you're not supposed to run the log backup while the full backup is executing, or "bad things can happen", like blocking, or hugely bloated log files.
See this ServerFault post for "The Word" from "The Man", Paul Randal, who used to be in charge of the SQL engine at Microsoft.
See this follow-up post for some ideas for skipping the log backup while the full backup is executing (one such check is sketched below).
In SQL 2005 or later, this restriction no longer exists, and you shouldn't have trouble running log backups and full backups at the same time.
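One simple form of that skip check, as a sketch: bail out of the log backup step if a full backup is currently running. It queries sysprocesses so it also works on SQL 2000; database name and path are placeholders.

IF EXISTS (SELECT 1 FROM master.dbo.sysprocesses WHERE cmd = 'BACKUP DATABASE')
BEGIN
    PRINT 'Full backup in progress; skipping log backup.';
    RETURN;
END

BACKUP LOG [YourDatabase]                     -- placeholder database
TO DISK = N'X:\Backups\YourDatabase.trn';     -- placeholder path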
On SQL 2000, transaction log backups will not run while your full backup is running. So how long does your full backup take? The transaction log will not be truncated by log backups during that window.