MSSQL recurring 'transaction log full' error. Need to know what causes it - sql-server

MSSQL V18.7.1
The transaction log on our databases is backed up every hour.
The log file is set to auto-grow in 128 MB increments, up to a maximum of 5 GB.
This runs smoothly, but sometimes we get an error in our application:
'The transaction log for database Borculo is full due to 'LOG_BACKUP'
We got this message at 8:15 AM, while the log backup ran (and emptied the log) at 8:01 AM.
I would really like a script or command to check what caused this sudden growth.
We could back up more often (every 30 minutes) or increase the size, but that would not solve the underlying problem.
Basically, this problem should not occur with the number of transactions we have.
Probably some task is running (in our ERP) that causes this.
This does not happen every day, but it is the second time in the last month.
The transaction log I want to pull this information from is a backed-up one, not the active one.
Can anyone point me in the right direction?
Thanks

An hourly transaction log backup means in case of a disaster you could lose up to an hour's worth of data.
It is usually advised to keep your transaction log backups as frequent as possible.
Every 15 minutes is usually a good starting point, but if it is a business-critical database, consider a transaction log backup every minute.
Also, why would you limit the size of your transaction log file? If you have more space available on the disk, allow the file to grow if it needs to.
It is possible that the transaction log file is getting full because some maintenance task is running (index/statistics maintenance, etc.), and because the log file is only backed up once an hour, the log doesn't get truncated for an entire hour and the file reaches its 5 GB limit. Hence the error message.
Things I would do to sort this out:
Remove the file size limit, or at least increase the limit to allow it to grow bigger than 5 GB.
Take transaction Log backups more frequently, maybe every 5 minutes.
Set the log file growth increment to at least 1 GB instead of 128 MB (to reduce the number of VLFs).
Monitor closely what is running on the server when the log file gets full; it is very likely to be a maintenance task (or maybe a bad hung connection). See the query sketch after this list.
Instead of setting a max limit on the log file size, set up some alerts to inform you when the log file is growing too much; this will let you investigate the issue without any interference or potential downtime for the end users.
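As a starting point for that monitoring, here is a minimal T-SQL sketch, assuming the database name Borculo from the question and that the default trace is still enabled on the instance (it is on by default):

-- How full is each transaction log right now?
DBCC SQLPERF (LOGSPACE);

-- Which sessions currently hold the most log space in the problem database?
SELECT s.session_id,
       s.login_name,
       s.program_name,
       dt.database_transaction_begin_time,
       dt.database_transaction_log_bytes_used
FROM   sys.dm_tran_database_transactions AS dt
JOIN   sys.dm_tran_session_transactions AS st
       ON st.transaction_id = dt.transaction_id
JOIN   sys.dm_exec_sessions AS s
       ON s.session_id = st.session_id
WHERE  dt.database_id = DB_ID('Borculo')
ORDER BY dt.database_transaction_log_bytes_used DESC;

-- After the fact: log auto-grow events captured by the default trace.
DECLARE @trace_path nvarchar(260) =
        (SELECT path FROM sys.traces WHERE is_default = 1);

SELECT StartTime, DatabaseName, ApplicationName, HostName, LoginName,
       IntegerData * 8 / 1024 AS growth_mb   -- IntegerData = growth in 8 KB pages
FROM   sys.fn_trace_gettable(@trace_path, DEFAULT)
WHERE  EventClass = 93                       -- 93 = Log File Auto Grow
ORDER BY StartTime DESC;

If the auto-grow timestamps line up with a scheduled job (index maintenance, an ERP batch run), that job is your culprit.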

Related

Sql Server Setting max file size leads to failed transactions

I have a database with a log file size of 700 MB. I have now fixed its max file size to 1 GB.
When it reaches 1 GB, transactions fail with the message: "The transaction log for database is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases".
The same happens if I uncheck Autogrowth for the log file.
When I check the log_reuse_wait_desc column in sys.databases, it says "ACTIVE_TRANSACTION".
I am not able to understand why SQL Server does not stay within the max file size limit. Why can it not delete old log records, or something like that, to keep within the max file size?
How does it work?
What I want is to limit the log file size so that it never exceeds 1 GB.
There are a few things you need to consider here, especially if you want to restrict the log file size to 1GB.
As already mentioned, you need to understand the difference between the three recovery models. Taking log backups is a key task when using the full recovery model. However, this is only part of the problem: log backups only truncate the inactive part of the log, so a single transaction could still fill the log file with 1 GB+ of data, and then you are in the same position you are in now, even under the simple recovery model (a log backup will not help you there!).
In an ideal situation, you would not restrict the log file in this way, precisely because of this problem. If possible, you want to allow it to auto-grow so that, in theory, it could fill the disk.
Transaction log management is a science in itself. Kimberly Tripp has some very good advice on how to manage transaction log throughput here
Understanding VLFs will allow you to better manage your transaction log, and could help you proportion your log file better for large transactions.
If, after everything you have read, you are still required to limit transaction log growth, then you will need to consider updating large result sets in batches. This lets you update, say, 1,000 rows at a time, meaning only 1,000 rows are written to the log per transaction. SQL Server uses write-ahead logging, so to complete a transaction it first needs enough space in the transaction log to write all of the details. Under the simple recovery model, that log space is automatically made reusable again, meaning you don't need to back up the log. Writing 1,000 records at a time (for example) will therefore cause far less of a problem than one huge 1,000,000-record insert.
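As a rough illustration of that batching approach (the table and column names below are invented for the example, not taken from the question):

-- Update in batches of 1000 so each transaction writes only a small, bounded amount of log.
DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    UPDATE TOP (1000) dbo.Orders
    SET    Status = 'Archived'
    WHERE  Status = 'Closed';   -- updated rows drop out of the filter, so the loop terminates

    SET @rows = @@ROWCOUNT;
END;

Under the simple recovery model the log space used by each completed batch becomes reusable at the next checkpoint; under full recovery it is reclaimed by the next log backup.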
Redgate provides a free e-book to help you on your way!
EDIT:
P.S. I've just read your comment above. If you are in the full recovery model you MUST take log backups, otherwise SQL Server WILL NOT reclaim the space from the log and will continue writing to it, causing it to expand! Note, however, that you MUST have a full backup for transaction log backups to take effect; SQL Server cannot back up the log if it doesn't have an initial restore point (i.e. the full backup).
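To make that dependency concrete, a sketch with placeholder database names and paths (adapt them to your own environment):

-- The initial full backup gives the log backups a base to work from.
BACKUP DATABASE [YourDatabase]
TO DISK = N'D:\Backups\YourDatabase_full.bak'
WITH INIT, CHECKSUM;

-- Only once a full backup exists will this succeed and start freeing space inside the log.
BACKUP LOG [YourDatabase]
TO DISK = N'D:\Backups\YourDatabase_log.trn'
WITH INIT, CHECKSUM;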

Monitoring SQL Log File

After deploying a project on the client's machine, the SQL database log file has grown to 450 GB, although the database size is less than 100 MB. The recovery model is set to Simple, and the transactions are sent from a Windows service that issues insert and update transactions every 30 seconds.
My question is: how can I find out the reason for the log file growth?
I would like to know how to monitor the log file so I can see exactly which transactions are causing the problem.
Should I debug the front end, or is there a way to expose the transactions that cause the log file growth?
Thank you.
Note that the simple recovery model does not allow log backups, since it keeps the least amount of information and relies on CHECKPOINT, so if this is a critical database, consider protecting the client with a FULL recovery plan. Yes, you have to use more space, but disk space is cheap, and you get far greater control over point-in-time recovery and over managing your log files. Trying to be concise:
A) A database in simple mode will only truncate transactions in the transaction log when a CHECKPOINT is created.
B) Unfortunately, large or numerous uncommitted transactions, including BACKUP, snapshot creation, and log scans, among other things, will stop your database from creating those checkpoints, and the database will be left unprotected until those transactions complete.
Your current system relies on having the right version of your .bak file, which, depending on its age, may mean hours of potential data loss.
In other words, the log is that ridiculous size because your database is not able to create a CHECKPOINT to truncate these transactions often enough.
A little note on log files:
Foremost, log files are not automatically truncated every time a transaction is committed (otherwise, you would only have the last committed transaction to go back to). Taking frequent log backups will ensure pertinent changes are kept (point in time), and SHRINKFILE will squeeze the log file down to the smallest size available or the size you specify.
Use DBCC SQLPERF(LOGSPACE) to see how much of your log file is in use and how large it is. Once you perform a full backup, the log file will be truncated down to the remaining uncommitted/active transactions. (Do not confuse this with shrinking the file size.)
Some suggestions on researching your transactions:
You can use the system views to see the most expensive cached plans, the most frequently executed plans, and the currently active plans.
You can search the log file itself using an undocumented function, fn_dblog (see the sketch after this list).
Pinal has great info on this topic that you can read at this webpage and link:
Beginning Reading Transaction Log
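For the fn_dblog suggestion above, the query can look roughly like this; since fn_dblog is undocumented, column names and behaviour can differ between versions, so treat it strictly as an ad-hoc diagnostic:

-- Recent log records; [Transaction Name] and [Begin Time] are only filled in on LOP_BEGIN_XACT rows.
SELECT TOP (100)
       [Current LSN],
       Operation,
       Context,
       [Transaction ID],
       [Transaction Name],
       [Begin Time],
       AllocUnitName
FROM   fn_dblog(NULL, NULL)                  -- NULL, NULL = the whole active portion of the log
WHERE  Operation IN ('LOP_BEGIN_XACT', 'LOP_INSERT_ROWS', 'LOP_MODIFY_ROW', 'LOP_DELETE_ROWS')
ORDER BY [Current LSN] DESC;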
A log file is text, and depending on your log levels and how many errors and messages you receive, these files can grow very quickly.
You need to rotate your logs with something like logrotate, although from your question it sounds like you're using Windows, so I'm not sure what the equivalent solution would be there.
The basics of log rotation are taking daily/weekly versions of the logs, compressing them with gzip or similar, and discarding the uncompressed version.
As it is text with a lot of repetition, this will make the files very small in comparison, and should solve your storage issues.
Log file space won't be reused if there is an open transaction. You can verify the reason log space is not being reused using the query below:
select log_reuse_wait_desc, database_id from sys.databases
In your case, your database is set to simple recovery and the data is only 100 MB, but the log has grown up to 450 GB, which is huge.
My theory is that there were some open transactions which prevented log space reuse; the log file won't shrink back on its own once it has grown.
As of now, you can run the above query to see what is preventing log space reuse at this moment, but you can't go back in time to find out what prevented log space reuse earlier.
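A sketch for spotting such open transactions while the problem is happening (run it in the context of the affected database):

-- Oldest active transaction in the current database.
DBCC OPENTRAN;

-- All sessions that currently have an open transaction, oldest first.
SELECT s.session_id,
       s.login_name,
       s.host_name,
       s.program_name,
       at.name AS transaction_name,
       at.transaction_begin_time
FROM   sys.dm_tran_active_transactions AS at
JOIN   sys.dm_tran_session_transactions AS st
       ON st.transaction_id = at.transaction_id
JOIN   sys.dm_exec_sessions AS s
       ON s.session_id = st.session_id
ORDER BY at.transaction_begin_time;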

SonarQube database using too much space (60+ GB)

I'm currently scanning 70 projects with a total of about 2 million lines of code. All had been going well until a few weeks ago, when I was notified that a few projects failed because we ran out of hard drive space on the SonarQube server. I was sure we had more than enough space according to the HW/SW requirements. I read that restarting the SonarQube server service does clear up temp files, but after doing that several times, something is still eating up more space. The culprit is coming from SQL Server:
...\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\sonarqube_log.ldf
The size of this file is currently at 66.8 GB. Does anyone know if I can just truncate the stuff in there, know of any best practices for reducing this file size, and keeping the size down during future scans?
OK, if your database recovery is set to Full and you're not backing up the transaction logs, that's the problem.
If you want help understanding the differences between recovery models, then start here. The short version is that Full recovery allows point-in-time (aka, point-of-failure) recovery, meaning you can restore the DB to the exact moment before the problem occurred. The drawback of Full recovery is that you must backup your transaction logs or this exact issue happens: the logs grow endlessly. Simple recovery eliminates the need for transaction log backups (indeed, I don't think you can do a transaction log backup on a Simple database) but limits you to restoring the DB to what it was when you last ran a database backup. Note that simple recovery still uses transaction logs! The system just periodically flushes them.
So, you will need to do one of two things: Use Full recovery and backup your transaction logs periodically (I've seen systems that do it every hour or even every 15 minutes for high traffic systems), or switch to Simple recovery. Those are your only real options.
Whichever one you do, switching to Simple or backing up the logs will flush the transaction logs. You can verify that with DBCC SQLPERF(logspace). However, you'll notice that sonarqube_log.ldf will not change size at all. It will still be 66.8 GB. This is by design. In a properly managed system, the transaction logs will reach the size that they need to operate, and then the backups and simple flushes will keep the size constant. The log file will be the proper size to run the system, so it will never need to grow (which is expensive) and will never run out of space (which would cause all transactions to fail).
"So, how do I get my disk space back?" you ask. "I've fixed the problem so the log file is going to be 95% wasted space now."
What you'll need to do is shrink the log file. This is something you will often see written that you should never do, and I agree: in a properly managed system, you should never need to. However, your system was not running properly; the log file was on a runaway.
First, take a full database backup. I repeat: Take a full backup of your database. This shouldn't cause any problems, but you don't want to be doing stuff like this without a fresh backup.
Next, you'll need to find the file id for the log file for the database in question. You can do that like this:
select d.name, mf.file_id
from sys.databases d
join sys.master_files mf
on d.database_id = mf.database_id
where mf.type = 1
and d.name = 'SonarQube'
The mf.type = 1 returns only transaction log files. Use the name of your database if it's not SonarQube.
--Switch to the SonarQube database. If you're not in the right context, you'll shrink the wrong file.
use [SonarQube];
--Do a checkpoint if you're on Simple recovery
checkpoint;
--Do a log backup if you're on Full recovery
backup log ....
--Shrink logfile to 1 GB. Replace #file_id with the id you got above.
dbcc shrinkfile ( #file_id, 1024 );
The size there is the size in MB. You'll have to make a guess based on how large your DB is and how many changes you make to your system. Something between 50% the size of the DB and 100% the size of the DB is pretty safe for most systems. In any case, I would not shrink the logs further than 1 GB. You'll need to monitor the log space and whether or not the log continues to grow. The Disk Usage report in SSMS is a good way to do that.
The first time you run this, you may not see much gain in disk space at all. It might go to 66.0 GB or 62 GB. The reason for that is because the current tail of the log file is still at the end of the transaction log file. What you should do is, if the system is under activity, wait a few hours and then run the above again. That will give the system time to cycle the log back to the beginning of the log file, and you'll be able to shrink it down. Here is a good article covering more about how shrinking the log file actually works, if you're interested.
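If you want to check where the active portion of the log sits before retrying the shrink, DBCC LOGINFO (undocumented but long-standing) gives a rough picture; a sketch:

USE [SonarQube];

-- One row per virtual log file (VLF); Status = 2 marks the VLFs still in use.
DBCC LOGINFO;

-- If the Status = 2 rows are clustered at the end of the output, the shrink cannot reclaim
-- much yet; wait for activity to wrap the log back toward the start of the file, then run
-- DBCC SHRINKFILE again as shown above.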

Can a large transaction log cause cpu hikes to occur

I have a client with a very large database on SQL Server 2005. The total space allocated to the database is 15 GB, with roughly 5 GB for the data and 10 GB for the transaction log. Just recently, a web application that connects to that database has started timing out.
I have traced the actions on the web page and examined the queries that execute while these web operations are performed. There is nothing untoward in the execution plan.
The query itself uses multiple joins but completes very quickly. However, the database server's CPU spikes to 100% for a few seconds. The issue occurs when several simultaneous users are working on the system (when I say multiple, read about 5). Under this load, timeouts start to occur.
I suppose my question is: can a large transaction log cause issues with CPU performance? There is about 12 GB of free space on the disk currently. The configuration is a little out of my hands, but the data and log files are both on the same physical disk.
I appreciate that the log file is massive and needs attending to, but I'm just looking for a heads up as to whether this may cause CPU spikes (i.e. trying to find the correlation). The timeouts are a recent thing and this app has been responsive for a few years (i.e. it's a recent manifestation).
Many Thanks,
It's hard to say exactly given the lack of data, but the spikes are commonly observed on transaction log checkpoint.
A checkpoint is the procedure of applying the data sequentially appended to and stored in the transaction log to the actual data files.
This involves a lot of I/O as well as CPU work, and may be a reason for the CPU activity spikes.
Normally, a checkpoint occurs when a transaction log is 70% full or when SQL Server decides that a recovery procedure (reapplying the log) would take longer than 1 minute.
Your first priority should be to address the transaction log size. Is the DB being backed up correctly, and how frequently? Address these issues and then see if the CPU spikes go away.
CHECKPOINT is the process of reading your transaction log and applying the changes to the DB file; if the transaction log is huge, then it makes sense that it could have an effect.
You could try extending the autogrowth: Kimberly Tripp suggests upwards of 500 MB autogrowth for transaction logs measured in GBs:
http://www.sqlskills.com/blogs/kimberly/post/8-Steps-to-better-Transaction-Log-throughput.aspx
(see point 7)
While I wouldn't be surprised if a log that size were causing a problem, there are other things it could be as well. Have the statistics been updated lately? Are the spikes happening when some automated job is running? Is there a clear time pattern to when you get the spikes? If so, look at what else is running at those times. Did you load a new version of anything on the server around the time the spikes started happening?
In any event, the transaction log needs to be fixed. The reason it is so large is that it is not being backed up (or not backed up frequently enough). It is not enough to back up the database; you must also back up the log. We back ours up every 15 minutes, but ours is a highly transactional system and we cannot afford to lose data.
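A quick way to check both points (recovery model and how recently the log was backed up), assuming backup history has not been purged from msdb:

-- Recovery model and most recent full/log backup per database.
SELECT d.name,
       d.recovery_model_desc,
       MAX(CASE WHEN b.type = 'D' THEN b.backup_finish_date END) AS last_full_backup,
       MAX(CASE WHEN b.type = 'L' THEN b.backup_finish_date END) AS last_log_backup
FROM   sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
       ON b.database_name = d.name
GROUP BY d.name, d.recovery_model_desc
ORDER BY d.name;

A database in full recovery with no recent last_log_backup is exactly the pattern that produces a 10 GB log next to a 5 GB data file.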

SQL Server 2008 log file size is large and growing quickly

Most of the time users hit the database to read news. Only a small number of queries are executed inside explicit transactions; 95% of the database hits are read-only.
My database log file is growing by 1 GB per day. Even if I shrink the database, the log file size does not decrease. What could be the reason for the log file growing more and more? How can I control this? As far as I know, the log file should not grow when we only read data from tables.
Any suggestions on how to deal with the growing log file? How can it be kept at a manageable or reasonable size? Does this affect performance in any way?
There are a couple of things to consider: what type of backups you do, and what type of backups you need. Once you have the answer to that, you can either switch the recovery model to simple, or leave it at full, but then you need to take incremental (log) backups on a daily basis (or whatever schedule keeps you happy with the log size).
To set your database recovery model to simple (but only if you do full backups of your database!), follow these steps; the T-SQL equivalent is shown after them:
Right click on your database
Choose Properties
Choose Options
Set Recovery mode to simple
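If you prefer T-SQL to the SSMS dialog, the equivalent is a single statement (the database name is a placeholder):

-- Switch to the simple recovery model; from then on the log is truncated at checkpoints
-- and log backups are neither needed nor possible.
ALTER DATABASE [YourNewsDb] SET RECOVERY SIMPLE;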
This will work, and it is the best option if your backup schedule is a full backup every day, because in that scenario (full recovery without log backups) your log would never be trimmed and would skyrocket, just as in your case.
If you are using the grandfather-father-son backup technique (a monthly full backup, a weekly full backup, and then an incremental backup every day), then you need the full recovery model. If 1 GB of log per day is still too much, you can take incremental backups every hour or every 15 minutes. This should fix the problem of the log growing more and more.
If you run a full backup every day, you can switch to the simple recovery model and you should be fine without risking your data (if you can live with possibly losing one day of data). If you plan to use incremental backups, leave the recovery model at full.
Full backups will not help; you must regularly back up the transaction log (as well as taking the regular full and differential database backups) for it to be emptied. If you are not backing up the log and you are not in simple recovery mode, then your transaction log contains every transaction since the database was set up. If you have enough activity that you are growing by a gigabyte a day, you may also have large imports or updates affecting many records at once. It is possible you need to be in simple recovery mode, where the log is truncated at each checkpoint rather than kept for log backups. Do NOT do that, however, if you have a mix of data from imports and users. In that case you need to back up the transaction log frequently, both to keep the size manageable and to be able to restore to a point in time. We back up our transaction log every 15 minutes.
Read about transaction log backups in BOL to see how to fix the mess you have right now. Then get your backups set up and running properly. You need to read and understand this stuff thoroughly before attempting to fix. Right now, you would probably be in a world of hurt if your server failed and you had to recover the database. Transaction log backups are critical to being able to recover properly from a failure.
Do you back up your database frequently? You need to perform full and/or transaction log backups in order for SQL Server to be able to truncate and then shrink your log file.
The major reason for a large log file is bulk transactions in the database. The best option to reduce the log file size is to take transaction log backups at regular intervals.
