How does a large transaction log affect performance? - sql-server

I have noticed that our database uses the full recovery model and has a 3 GB transaction log.
As the log gets larger, how will this affect the performance of the database and of the applications that access it?

The recommended best practice is to give a SQL Server transaction log file its very own disk or LUN.
This is to avoid fragmentation of the transaction log file on disk, as other posters have mentioned, and also to avoid/minimise disk contention.
The ideal scenario is to have your DBA allocate sufficient log space for your database environment ahead of time, i.e. allocate say x GB in one go. On a dedicated disk this creates a contiguous allocation, thereby avoiding fragmentation.
If you need to grow your transaction log, you should endeavour to do so in sizeable chunks so that the new space is also allocated contiguously.
You should also avoid shrinking your transaction log file, as repeated shrinking and autogrowth can lead to fragmentation of the file on disk.
I find it best to think of the autogrowth database property as a failsafe: your DBA should proactively monitor transaction log space (perhaps by setting up alerts) and increase the log file size ahead of demand, while autogrowth remains in place so the database can continue to operate normally should unexpected growth occur.
A larger transaction log file is not in itself detrimental to performance, as SQL Server writes to the log sequentially, so provided you manage the overall log size and allocate additional space appropriately, you should not be concerned.
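As a rough illustration of pre-sizing the log and using a fixed growth increment (YourDb and YourDb_log are placeholder names; pick sizes appropriate for your workload):
-- Pre-allocate the transaction log in one go, then set a fixed (non-percentage) growth increment
-- YourDb / YourDb_log are placeholders for your database and the log file's logical name
ALTER DATABASE YourDb MODIFY FILE (NAME = YourDb_log, SIZE = 8GB);
ALTER DATABASE YourDb MODIFY FILE (NAME = YourDb_log, FILEGROWTH = 1GB);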

In a couple of ways.
If your system is configured to auto-grow the transaction log, then as the file gets bigger, SQL Server has to do more work and you will potentially be slowed down. When you finally do run out of disk space, you're out of luck and your database will stop accepting new transactions.
You need to get with your DBA (or maybe you are the DBA?) and perform frequent, periodic log backups. Save them off your server onto a dedicated backup system. As you back up the log, the space in the existing log file is reclaimed, preventing the log from growing much bigger. Backing up the transaction log also allows you to restore your database to a specific point in time after your last full or differential backup, which significantly cuts your data loss in the event of a server failure.
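A minimal sketch of such a log backup, assuming a database named YourDb and a hypothetical backup share (schedule it as often as your tolerance for data loss allows):
-- Placeholder database name and backup path
BACKUP LOG YourDb
TO DISK = N'\\BackupServer\SqlBackups\YourDb_log.trn'
WITH INIT, CHECKSUM;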

If the log gets fragmented or needs to grow, it will slow down the application.

If you don't clear the transaction log periodically by performing backups, the log will get full and consume the entire available disk space.

If the log file grows in small increments, you will end up with a lot of virtual log files.
These virtual log files slow down database startup, restore, and backup operations.
Here is an article that shows how to fix this:
http://www.sqlserveroptimizer.com/2013/02/how-to-speed-up-sql-server-log-file-by-reducing-number-of-virtual-log-file/
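To see how many VLFs a log currently has, something like this can help (sys.dm_db_log_info is available from SQL Server 2016 SP2 onwards; YourDb is a placeholder):
-- Count virtual log files (VLFs); a few dozen is normal, thousands is a problem
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID(N'YourDb'));
-- On older versions, DBCC LOGINFO('YourDb') returns one row per VLF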

To improve performance, it is good practice to separate the SQL Server log file from the SQL Server data files in order to optimize I/O efficiency.
Writes to the data files are random, but SQL Server writes to the transaction log sequentially.
With sequential I/O, SQL Server can read/write data without repositioning the disk head.


Monitoring SQL Log File

After deploying a project on the client's machine, the SQL database log file has grown to 450 GB, although the database size is less than 100 MB. The recovery model is set to Simple, and the transactions are sent from a Windows service that issues insert and update transactions every 30 seconds.
My question is: how can I find the reason for the log file growth?
I would like to know how to monitor the log file so I can see exactly which transactions cause the problem.
Should I debug the front end, or is there a way to expose the transactions that cause the log file growth?
Thank you.
Note that the simple recovery model does not allow for log backups, since it keeps the least amount of information and relies on CHECKPOINT, so if this is a critical database, consider protecting the client by using the FULL recovery model. Yes, you have to use more space, but disk space is cheap, and you gain point-in-time recovery and greater control over managing your log files. Trying to be concise:
A) Your database in Simple mode will only truncate transactions in your transaction log when a CHECKPOINT occurs.
B) Unfortunately, large or long-running uncommitted transactions, as well as BACKUP operations, creation of a database SNAPSHOT, and log scans, among other things, will prevent those checkpoints from truncating the log, and your database will be left exposed until those operations complete.
Your current system relies on having the right .bak file, which, depending on when it was taken, may mean hours of potential data loss.
In other words, the log is that ridiculous size because your database has not been able to truncate those transactions at CHECKPOINT often enough.
A little note on log files:
Foremost, log files are not automatically truncated every time a transaction is committed (otherwise, you would only have the last committed transaction to go back to). Taking frequent log backups will ensure pertinent changes are kept (point in time), and SHRINKFILE will squeeze the log file down to the smallest available/specified size.
Use DBCC SQLPERF(logspace) to see how much of your log file is in use and how large it is. Once you perform a log backup (or, under the simple recovery model, a checkpoint occurs), the log is truncated down to the remaining uncommitted/active transactions. (Do not confuse this with shrinking the file.)
Some suggestions on researching your transactions:
You can use the system DMVs to see the most expensive, most frequent, and currently active query plans.
You can read the log file using the undocumented table-valued function fn_dblog (a sketch follows after this list).
Pinal has great info on this topic, which you can read at this link:
Beginning Reading Transaction Log
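A hedged sketch of reading the tail of the log with fn_dblog (it is undocumented and can be heavy, so use it with care, ideally not on a busy production box):
-- Most recent log records: operation type, context and owning transaction
SELECT TOP (100) [Current LSN], Operation, Context, [Transaction ID], AllocUnitName
FROM fn_dblog(NULL, NULL)
ORDER BY [Current LSN] DESC;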
A log file is text, and depending on your log levels and how many errors and messages you receive, these files can grow very quickly.
You need to rotate your logs with something like logrotate, although from your question it sounds like you're using Windows, so I'm not sure what the equivalent there would be.
The basics of log rotation are taking daily/weekly versions of the logs, compressing them with gzip or similar, and trashing the uncompressed version.
As it is text with a lot of repetition, this will make the files very small in comparison, and should solve your storage issues.
Log file space won't be reused if there is an open transaction. You can verify the reason log space cannot be reused using the DMV below:
SELECT log_reuse_wait_desc, database_id FROM sys.databases;
In your case, your database is set to simple recovery and is only 100 MB, but the log has grown up to 450 GB, which is huge.
My theory is that there may be some open transactions that prevented log space reuse; the log file won't shrink back on its own once it has grown.
As of now, you can run the DMV above to see what is preventing log space reuse at this point, but you can't go back in time to find out what prevented it originally.

SQL Server database log file increasing very fast

I have a SQL Server database with one log file and it is growing very fast.
This started happening after I shrank the file; before that it was fine.
My database recovery model is FULL.
Please help with it.
Assuming you meant transaction log...
If you use the FULL recovery model, then to reclaim space in the transaction log or shrink it, the log must be backed up first. If you also use CDC or replication, even a log backup is not enough: the space can only be reclaimed after the Log Reader Agent has read that portion of the transaction log.
The transaction log grows fast if there are many database changes.
It sounds like the log file was appropriately sized for your traffic before you shrank it. To paraphrase Elsa from the movie Frozen: "Let it grow!" That is, unless you can point to a specific event that made the log much larger than it typically is, the file will simply grow back to the size your transactional volume requires.

SQL Server log file is too big

We are using SQL Server 2008 with the full recovery model. The database size is 10 GB and the log file is 172 GB. We want to clear up the log file's space internally; we did a transaction log backup, which should have cleared it up, but it is still 172 GB. What should we do?
Shrink the DB after doing the following tasks (a T-SQL sketch follows below):
Perform a full backup of your database.
Change the recovery model of your database to Simple.
Open a query window, enter "checkpoint", and execute it.
Perform another backup of the database.
Perform a final full backup of the database.
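A rough T-SQL sketch of those steps plus the final shrink (YourDb, the log file's logical name, the shrink target, and the backup paths are all placeholders):
USE master;
BACKUP DATABASE YourDb TO DISK = N'D:\Backups\YourDb_full_1.bak';  -- full backup
ALTER DATABASE YourDb SET RECOVERY SIMPLE;                         -- switch to the simple recovery model
USE YourDb;
CHECKPOINT;                                                        -- force a checkpoint so the log can be truncated
DBCC SHRINKFILE (N'YourDb_log', 1024);                             -- shrink the log file (target size in MB, placeholder)
USE master;
BACKUP DATABASE YourDb TO DISK = N'D:\Backups\YourDb_full_2.bak';  -- final full backup to restart the backup chain
-- ALTER DATABASE YourDb SET RECOVERY FULL;  -- switch back if point-in-time recovery is still required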
To understand this, you need to understand the difference between file shrinkage and truncation.
Shrinking means physically shrinking the size of the file on disk. Truncation means logically freeing up space to be used by new data, but with NO impact on the size of the file on disk.
In layman's terms for instance, if your log file is 100 GB in size and you've never backed it up, 100GB of that size is USED. If you then back up the log / database, it will be truncated, meaning the 100GB will be reserved for future use... basically meaning it's freed up for SQL Server, but still uses up the same space on disk as before. You can actually see this with the following query for instance.
DBCC SQLPERF(logspace) WITH NO_INFOMSGS
The result will show on a DB basis how large the log file is, and how much of the log file is actually used. If you now shrink that file, it will free up the reserved portion of the file on disk.
This same logic applies to all files in SQL Server (including primary and secondary data files as well as log files, for tempdb, user databases, etc.), even at table level. For instance, if you run this code
sp_spaceused 'myTableName'
You will see how much of that table is reserved and used for various things, where the "unused" column will show how much is reserved but NOT used.
So, as a conclusion, you can NOT free up space on disk without shrinking the file. The real question here is why exactly you are not allowed to shrink the log. The typical reason for the recommendation not to shrink log files is that the log will naturally grow back to its working size anyway, so there's no point. But if you're only now adopting the recommended practice of backing up the log, it makes sense to shrink the oversized log once, so that from then on your maintenance backups keep it at its natural, comparatively smaller size.
Another EXTREMELY IMPORTANT point about shrinking: shrinking data files is another matter entirely. If you shrink a data file, you will severely fragment every index in the database, making it practically mandatory to rebuild all the indexes on the entire database to avoid a catastrophic degradation in performance. So do NOT shrink data files, ever, without consulting someone who knows what they're doing and is prepared to deal with the consequences. Log files are a lot safer in this regard, since that kind of fragmentation doesn't apply to them.
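If you ever do shrink a data file, a query along these lines can show how much index fragmentation it caused (YourDb is a placeholder; the LIMITED mode keeps the scan cheap):
USE YourDb;  -- placeholder database name
-- Average fragmentation per index; heavily fragmented indexes will need a rebuild or reorganize
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;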

Does the transaction log drive need to be as fast as the database drive?

We are telling our client to put the SQL Server database file (mdf) on a different physical drive than the transaction log file (ldf). The tech company (hired by our client) wanted to put the transaction log on a slower (i.e. cheaper) drive than the database drive, because with transaction logs you are just sequentially writing to the log file.
I told them that I thought the log drive (actually a RAID configuration) needed to be fast as well, because every data-changing call to the database needs to be saved there as well as to the database itself.
After saying that, though, I realized I was not entirely sure. Does the speed of the transaction log drive make a significant difference in performance if the drive holding the database is fast?
The speed of the log drive is the most critical factor for a write-intensive database. No update can commit faster than the log can be written, so your log drive must support your maximum update rate at its spikes, and all updates generate log. Data file (MDF/NDF) updates can afford slower write rates because of two factors:
data updates are written out lazily and flushed at checkpoint, which means an update spike can be amortized over the average drive throughput
multiple updates can accumulate on a single page and thus need only a single write
So you are right that log throughput is critical (a quick latency check is sketched at the end of this answer).
But at the same time, log writes have a very specific pattern: the log is always appended at the end, sequentially. All mechanical drives have much higher throughput, for both reads and writes, for sequential operations, since they involve less physical movement of the disk heads. So what your ops guys say is also true: a slower drive can in fact offer sufficient throughput.
But all these come with some big warnings:
the slower drive (or RAID combination) must truly offer high sequential throughput
the drive must see log writes from one and only one database, and nothing else. Any other operation that could interfere with the current disk head position will damage your write throughput and result in slower database performance
the log must be write-only, never read. Keep in mind that certain components do need to read from the log, and they will move the disk heads to other positions in order to read back previously written log records:
transactional replication
database mirroring
log backup
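To check whether a log drive is actually keeping up, a query against sys.dm_io_virtual_file_stats along these lines shows the average write stall per log file (a sketch; filter to the databases you care about):
-- Per-file write latency; a high average write stall on a log file suggests the log drive is the bottleneck
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name AS file_name,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
WHERE mf.type_desc = 'LOG';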
In simplistic terms, if you are talking about an OLTP database, your throughput is determined by the speed of your writes to the transaction log. Once this performance ceiling is hit, all other dependent actions must wait for the commit to the log to complete.
This is a VERY simplistic take on the internals of the Transaction Log, to which entire books are dedicated, but the rudimentary point remains.
Now if the storage system you are working with can provide the IOPS you require to support both your transaction log and database data files together, then a shared drive/LUN will serve your needs adequately.
To provide you with a specific recommended course of action I would need to know more about your database workload and the performance you require your database server to deliver.
Get your hands on the title SQL Server 2008 Internals for a thorough look into the internals of the SQL Server transaction log; it's one of the best SQL Server titles out there, and it will pay for itself in minutes from the value you gain from reading it.
Well, the transaction log is the main structure that provides durability (the D in ACID); it can be a big bottleneck for performance, and if you do log backups regularly its required space has an upper limit, so I would put it on a safe, fast drive with just enough space plus a bit of margin.
The transaction log should be on the fastest drives: if SQL Server can just complete the write to the log, it can do the rest of the transaction in memory and let it hit disk later.

How to make a log as small as possible in a SQL database?

This question might be kind of elementary, but here goes:
I have a SQL Server database with a 4 GB log file. The DB is 16 GB and is backed up nightly.
Can I truncate the log regularly because the entire DB+Log is backed up each night?
You can add something like this to your maintenance schedule to run every night before the backup. It will try to shrink/truncate your log file down to 1 MB:
-- Back up the log (which truncates it), then shrink the log file to its minimum size (target in MB)
BACKUP LOG DBNAME
TO DISK = 'D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Backup\DBNAME.log';
DBCC SHRINKFILE('DBNAME_Log', 1);
Are you sure the log is backed up nightly and not just the database?
If so, then what does this database do? Are you deleting and refreshing whole tables? If so, your log might be the right size for the amount of transactions you have. You want the log to be large enough to handle your normal transaction load without having to grow. A small log can be a detriment to performance.
If this database is not transactional in nature (i.e., the tables are populated by full refreshes rather than one record at a time), then change the recovery model to simple. Do not do that, though, if you have transactional tables that you will need to be able to recover from the log rather than simply re-importing the data.
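For reference, switching the recovery model looks like this (YourDb is a placeholder; note that switching to simple breaks the log backup chain):
-- Only appropriate when point-in-time recovery from the log is not needed
ALTER DATABASE YourDb SET RECOVERY SIMPLE;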
If you can run log backups during the day (depending on load, etc. this may or may not be possible for you) you can keep the log file under control by doing so. This will prevent the log file itself from growing quite so large, and also provide the side benefit of giving you the ability to restore closer to the point of failure in the event of a problem.
You'll still need to shrink the log file once using DBCC SHRINKFILE, but if it's backed up regularly after that point, it should stabilize at a smaller size.
