SQL Server log file is too big

We are using SQL Server 2008 with the full recovery model. The database is 10 GB, but the log file is 172 GB. We want to reclaim the space used internally by the log file. We took a transaction log backup, which should have freed it up, but the file is still 172 GB. What should we do?

Shrink the log file after doing the following:
1. Perform a full backup of your database.
2. Change the recovery model of your database to "Simple".
3. Open a query window, enter "CHECKPOINT" and execute it.
4. Shrink the log file.
5. Change the recovery model back to "Full" and perform a final full backup of the database.
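A minimal T-SQL sketch of those steps. The database name (MyDb), log file logical name (MyDb_log), backup paths, and target size are placeholders you would substitute with your own:
USE MyDb;
BACKUP DATABASE MyDb TO DISK = N'D:\Backup\MyDb_full.bak';   -- step 1: full backup first
ALTER DATABASE MyDb SET RECOVERY SIMPLE;                     -- step 2: switch to simple recovery
CHECKPOINT;                                                  -- step 3: force a checkpoint so the log can be truncated
DBCC SHRINKFILE (MyDb_log, 1024);                            -- step 4: shrink the log (target size in MB)
ALTER DATABASE MyDb SET RECOVERY FULL;                       -- step 5: back to full recovery...
BACKUP DATABASE MyDb TO DISK = N'D:\Backup\MyDb_full2.bak';  -- ...and a final full backup to restart the backup chain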

To understand this, you need to understand the difference between file shrinkage and truncation.
Shrinking means physically shrinking the size of the file on disk. Truncation means logically freeing up space to be used by new data, but with NO impact on the size of the file on disk.
In layman's terms: if your log file is 100 GB in size and you've never backed it up, 100 GB of that size is USED. If you then back up the log/database, it will be truncated, meaning the 100 GB is marked as reusable... basically it's freed up for SQL Server, but the file still takes up the same space on disk as before. You can see this with the following query:
DBCC SQLPERF(logspace) WITH NO_INFOMSGS
The result will show on a DB basis how large the log file is, and how much of the log file is actually used. If you now shrink that file, it will free up the reserved portion of the file on disk.
This same logic applies to all files in SQL Server (including primary and secondary data files, as well as the log files of tempdb, user databases, etc.), even at table level. For instance, if you run this code
EXEC sp_spaceused 'myTableName';
you will see how much of that table is reserved and used for various things, where the "unused" column shows how much is reserved but NOT used.
So, in conclusion: you can NOT free up space on disk without shrinking the file. The real question here is why exactly are you not allowed to shrink the log? The typical reason behind the recommendation not to shrink log files is that the log will naturally grow back to its normal size anyway, so there's no point. But if you're only now adopting the recommended practice of backing up the log, it makes perfect sense to start by shrinking the oversized log, so that your maintenance backups then keep it at its natural, comparatively smaller size.
Another EXTREMELY IMPORTANT point about shrinking: shrinking data files is a different matter entirely. If you shrink a data file, you will severely fragment all the indexes and statistics in the database, making it practically mandatory to rebuild all the indexes in the entire database to avoid catastrophic performance degradation. So do NOT shrink data files, ever, without consulting someone who knows what they're doing and is prepared to deal with the consequences. Log files are a lot safer in this regard, since index fragmentation doesn't apply to them.

Very Low "Used Log Space Percentage" for SQL Server database transaction file

Prologue
I have always read/observed that we should not shrink database files, as they tend to grow back. There is also a performance penalty when the DB has to grow these files again if enough space is not already allocated.
Situation
When I execute the following query for a few of my databases:
select * from sys.dm_db_log_space_usage
Some of my databases are taking around 20 GB of log space each. The catch is that the used_log_space_in_percent column shows values between 0.1 and 10%. If I shrink these log files I can reclaim around 100 GB of space immediately. Also note that log_reuse_wait_desc is "NOTHING" for some of the DBs, if not all.
For various reasons, transaction log backups are not possible in the near future. (Convincing is in progress.)
It would be really helpful if you could provide recommendations, along with the reasoning, on whether shrinking the files is a good idea in such cases.
If you are taking the time to think about this, THANK YOU!
.ldf files grow if you don't take backups; as soon as you take a backup, SQL Server truncates the log file. Shrinking log files is not a good idea unless you have a backup.
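For reference, a minimal sketch of such a log backup (the database name and path are placeholders, and this requires the full or bulk-logged recovery model):
BACKUP LOG MyDb TO DISK = N'E:\LogBackup\MyDb_log.trn';  -- truncates (frees for reuse) the inactive portion of the log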
Please read the following:
https://www.sqlshack.com/sql-server-transaction-log-backup-truncate-and-shrink-operations/
https://www.brentozar.com/archive/2009/08/stop-shrinking-your-database-files-seriously-now/

Monitoring SQL Log File

After deploying a project on the client's machine, the SQL DB log file has grown to 450 GB, although the DB size is less than 100 MB. The recovery model is set to Simple, and the transactions are sent from a Windows service that issues insert and update transactions every 30 seconds.
My question is: how can I find the reason for the log file growth?
I would like to know how to monitor the log file to find the exact transactions that cause the problem.
Should I debug the front end? Or is there a way to expose the transactions that cause the log file growth?
Thank you.
Note that the simple recovery model does not allow log backups, since it keeps the least amount of log information and relies on CHECKPOINT, so if this is a critical database, consider protecting the client by using the FULL recovery model. Yes, you have to use more space, but disk space is cheap, and you gain point-in-time recovery and greater control over managing your log files. Trying to be concise:
A) In Simple mode, your database will only truncate transactions from the transaction log when a CHECKPOINT occurs.
B) Unfortunately, large or numerous uncommitted transactions, including BACKUP, SNAPSHOT creation, and log scans, among other things, will prevent those checkpoints from truncating the log, and your database will be left unprotected until those transactions complete.
Your current system relies on having the right version of your .bak file, which, depending on backup frequency, may mean hours of potential data loss.
In other words, the log is that ridiculous size because your database has not been able to truncate those transactions at a CHECKPOINT often enough...
A little note on log files:
First and foremost, log files are not automatically truncated every time a transaction is committed (otherwise you would only ever have the last committed transaction to go back to). Taking frequent log backups will ensure pertinent changes are kept (point in time), and SHRINKFILE will squeeze the log file down to the smallest size available or the size specified.
Use DBCC SQLPERF(logspace) to see how large your log file is and how much of it is in use. Once you back up the log, it is truncated down to the remaining uncommitted/active transactions. (Do not confuse this with shrinking the size.)
Some suggestions for researching your transactions:
You can use the system DMVs to see the most expensive cached, most frequent, and currently active plans.
You can search the log file itself using the undocumented table-valued function fn_dblog (see the sketch after this list).
Pinal Dave has great info on this topic that you can read at this link:
Beginning Reading Transaction Log
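A minimal sketch of peeking into the active log with fn_dblog. It is undocumented and unsupported, so treat the output as informational only:
SELECT TOP (100) [Current LSN], Operation, Context, [Transaction ID], AllocUnitName
FROM fn_dblog(NULL, NULL);  -- NULL, NULL = no LSN range filter; reads the current database's log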
A log file is text, and depending on your log levels and how many errors and messages you receive, these files can grow very quickly.
You need to rotate your logs with something like logrotate, although from your question it sounds like you're using Windows, so I'm not sure what the equivalent solution would be there.
The basics of log rotation are taking daily/weekly versions of the logs, compressing them with gzip or similar, and discarding the uncompressed version.
As it is text with a lot of repetition, this will make the files very small in comparison, and it should solve your storage issues.
Log file space won't be reused if there is an open transaction. You can verify the reason log space is not being reused with the DMV below:
select log_reuse_wait_desc, database_id from sys.databases
In your case, your database is set to simple recovery and the database is only 100 MB, but the log has grown up to 450 GB, which is huge.
My theory is that there may have been some open transactions that prevented log space reuse; a log file won't shrink back on its own once it has grown.
As of now, you can run the DMV above to see what is preventing log space reuse at this point, but you can't go back in time to find out what prevented it originally.
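To check for a long-running open transaction right now, DBCC OPENTRAN is a quick sketch (the database name is a placeholder):
DBCC OPENTRAN ('MyDb');  -- reports the oldest active transaction in that database, if any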

Handling unused disk space in a SQL Server DB

I have one database whose size is growing very fast. Currently its size is around 60 GB; however, after executing the sp_spaceused stored procedure I could verify that more than 40 GB is unused (unused space is different from reserved space, which I understand is kept for table growth). The actual data size is around 10-12 GB, with a few more GB in reserved space.
Now, to reclaim that unused space, I tried the shrink operation, but it turned out not to help. Searching further, I also found advice not to shrink the DB, as that causes data fragmentation, resulting in delays during disk operations. Now I am really not sure what other operation I should try to reclaim the space. I understand that due to the size, queries might be taking longer than expected, and reclaiming this space could help with performance (not sure).
While investigating, I also came across the Generate Scripts feature. It helps export the data and the schema, but I am not sure if it also helps create a script (with every user, permission and everything else) so that the script creates an as-is replica (deep copy/clone) of the DB, creating the schema and then populating it with data on another DB/server?
Any pointers would be helpful.
If your database is 60 GB, it means it had grown to 60 GB at some point. Even if the data is only 20 GB, you probably have operations that grow the data from time to time (e.g. nightly index maintenance jobs). The recommendation is to leave the database at 60 GB. Do not attempt to reclaim the space; you will only cause harm, and whatever caused the database to grow to 60 GB in the first place will likely occur again and trigger new growth.
In fact, you should do the opposite. Try to identify why it grew to 60 GB and extrapolate what will happen when your data reaches 30 GB. Will the database grow to 90 GB? If yes, then you should grow it to 90 GB now. The last thing you want is for growth to occur randomly, possibly running out of disk space at a critical moment. In fact, you should check right now whether your server has Instant File Initialization enabled.
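One way to check this from T-SQL; note that the instant_file_initialization_enabled column only exists on newer versions (SQL Server 2016 SP1 and later), so on older versions you would instead check whether the service account holds the "Perform volume maintenance tasks" privilege:
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;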
Now, of course, the question is: what would cause 3x data size growth, and how does one identify it? I don't know of any easy method. I would recommend starting by looking at your SQL Agent jobs and checking your maintenance scripts. Look into the application itself: does it have a pattern of growing and then deleting data? Look at past backups (you do have them, right?) and compare.
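As a starting point for that backup comparison, a sketch that reads the size history from msdb (the database name is a placeholder):
SELECT TOP (30) database_name, backup_start_date,
       backup_size / 1048576 AS backup_size_mb  -- backup_size is in bytes
FROM msdb.dbo.backupset
WHERE database_name = 'MyDb'
ORDER BY backup_start_date DESC;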
BTW, I assume due diligence and that you checked it is the data file that has grown to 60 GB. If it is the LOG file that has grown, then it's easy: it means you enabled the full recovery model and forgot to back up the log.

SQL Server 2005 TempDB Size

We are using SQL Server 2005. Recently, it crashed in our production environment due to a large tempdb size.
1) What could be the reason for the large tempdb size?
2) Is there any way to look at what data is there in tempdb?
2) Is there any way to look at what data is there in tempdb?
No, because the data is not kept there. Tempdb gets very special treatment, such as being dropped and recreated on every server restart.
1) What could be the reason for the large tempdb size?
Inefficient SQL, maintenance jobs, or just the data at hand. Obviously an 800 GB or 6,000 GB database may require more tempdb space than a 4 GB online CRM attempt. You don't really specify ANY size in absolute terms. What IS large? I have tempdb hardcoded at 64 GB on my smaller servers.
Typical SQL that goes into tempdb includes:
Sorts that cannot be resolved as part of the query (you need to store the keys SOMEWHERE)
DISTINCT, which needs all the returned data in tempdb to find duplicates
Certain operations, possibly during joins
Temp table usage (temporary tables). I just mention them because I often keep some hundred megabytes' worth of data in them during loads and scrubbing.
In general, you can find those queries by their huge IO stats in the query log, or simply by their being slow.
That said, maintenance plans also go in there, but with reason. In the end, your "large" is possibly my "not even worth mentioning, tiny". It really depends on what you do. Use a query trace tool to find out what takes long.
Physically, tempdb gets very special treatment: SQL Server does NOT write to the file if it does not have to (i.e. it keeps things in memory). Writes to disk are a sign of memory overflowing. This is different from normal DB write behavior. Tempdb, IF it overflows, is best put onto a decently fast SSD, which normally won't be SO expensive because it will still be relatively small.
Use the tempdb space-usage DMVs to find the offending queries; basically you are fishing in murky water here and need to try things out until you find the culprit.
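For instance, a sketch using sys.dm_db_session_space_usage to see which sessions have allocated the most tempdb pages (run it in tempdb; the counts are 8 KB pages):
SELECT session_id,
       user_objects_alloc_page_count,      -- temp tables and table variables
       internal_objects_alloc_page_count   -- sorts, hashes, spools
FROM sys.dm_db_session_space_usage
ORDER BY internal_objects_alloc_page_count DESC;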
The usual way a SQL Server database (any database, not just tempdb) grows is by having its data and log files set to autogrow (especially the log files). SQL Server is perfectly happy to grow the log and data files until they consume all the disk space available to them.
Best practice, IMHO, is to allow limited autogrowth on the data files (put an upper bound on how big they can grow) and to fix the size of the log files. You might need to do some analysis to figure out how big the log files need to be. For tempdb especially, the recovery model should be set to simple, too.
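A sketch of fixing those sizes for tempdb. The sizes are placeholders, and tempdev/templog are the default logical file names (verify yours with sys.database_files):
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 8GB, MAXSIZE = 64GB, FILEGROWTH = 512MB);  -- bounded autogrowth
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 2GB, FILEGROWTH = 0);                      -- fixed log size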
OK, tempdb is a kind of special database. Any temporary objects you use in procedures etc. are created here. So if your application uses a lot of temp tables in queries, they will all reside here, but they should clean themselves up after the connection (SPID) is reset.
The other thing that can grow tempdb is database maintenance tasks; however, those will take a larger toll on the database log files.
Tempdb is also cleared every time you restart the SQL Server service: it basically drops the database and re-creates it. I agree with @Nic about leaving tempdb as it is; don't muck around with it. Any issue with space in tempdb usually indicates another, larger problem somewhere else. More space will mask the problem, but only for so long. How much free space does the drive holding tempdb have?
Another thing: if you haven't already, try to put tempdb on its own drive, and, if possible, put the data and log files on their own separate drives too.
So, if you don't restart your SQL Server service, your drive will run out of space pretty soon.
use tempdb
select (size * 8) as FileSizeKB from sys.database_files  -- size is stored in 8 KB pages

How does a large transaction log affect performance?

I have seen that our database uses the full recovery model and has a 3 GB transaction log.
As the log gets larger, how will this affect the performance of the database and of the applications that access it?
The recommended best practice is to assign a SQL Server transaction log file its very own disk or LUN.
This is to avoid fragmentation of the transaction log file on disk, as other posters have mentioned, and also to avoid/minimise disk contention.
The ideal scenario is to have your DBA allocate sufficient log space for your database environment ahead of time, i.e. allocate, say, x GB of space in one go. On a dedicated disk this creates a contiguous allocation, thereby avoiding fragmentation.
If you need to grow your transaction log, you should again endeavour to do so in sizeable chunks so that the space is allocated contiguously.
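For example, a sketch of growing the log in one sizeable chunk rather than through many small autogrowths (the database name, logical file name, and size are placeholders):
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_log, SIZE = 16GB);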
You should also avoid shrinking your transaction log file, as repeated shrinking and autogrowth can lead to fragmentation of the file on disk.
I find it best to think of the autogrowth property as a failsafe: your DBA should proactively monitor transaction log space (perhaps by setting up alerts) so that the transaction log file size can be increased in a controlled manner to support your database usage requirements, with autogrowth in place to ensure that the database can continue to operate normally should unexpected growth occur.
A larger transaction log file is not in itself detrimental to performance, as SQL Server writes to the log sequentially; so provided you are managing your overall log size and the allocation of additional space appropriately, you should not be concerned.
In a couple of ways.
If your system is configured to auto-grow the transaction log, then as the file gets bigger your SQL Server will need to do more work and things will potentially slow down. When you finally run out of space, you're out of luck: your database will stop accepting new transactions.
You need to get together with your DBA (or maybe you are the DBA?) and perform frequent, periodic log backups. Save them off your server onto another dedicated backup system. As you back up the log, the space in the existing log file is reclaimed, preventing the log from growing much bigger. Backing up the transaction log also allows you to restore your database to a specific point in time after your last full or differential backup, which significantly cuts your data losses in the event of a server failure.
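A sketch of the point-in-time restore that log backups make possible (all names, paths, and the timestamp are placeholders):
RESTORE DATABASE MyDb FROM DISK = N'E:\Backup\MyDb_full.bak' WITH NORECOVERY;  -- restore the full backup first
RESTORE LOG MyDb FROM DISK = N'E:\Backup\MyDb_log1.trn'
    WITH STOPAT = '2013-02-01T10:30:00', RECOVERY;                             -- roll forward to the chosen moment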
If the log gets fragmented or needs to grow, it will slow down the application.
If you don't clear the transaction log periodically by performing backups, the log will fill up and consume all the available disk space.
If the log file grows larger in small steps, you will end up with a lot of virtual log files (VLFs).
These virtual log files slow down database startup, restore, and backup operations.
Here is an article that shows how to fix this:
http://www.sqlserveroptimizer.com/2013/02/how-to-speed-up-sql-server-log-file-by-reducing-number-of-virtual-log-file/
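A quick way to gauge the VLF count; DBCC LOGINFO is undocumented and returns one row per virtual log file in the current database (the database name is a placeholder):
USE MyDb;
DBCC LOGINFO;  -- hundreds or thousands of rows indicate heavy VLF fragmentation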
For better performance, it's good practice to separate the SQL Server log file from the SQL Server data file in order to optimize I/O efficiency.
Writes to the data file are random, but SQL Server writes to the transaction log sequentially.
With sequential I/O, SQL Server can read/write the log without repositioning the disk head.
