SQL Server Log File Size - sql-server

Apologies if this question has been asked before; I'm not very experienced with SQL Server.
On our SQL Server there is a log file of over 1 TB. The database is in full recovery.
I have taken an initial full backup and set up a regular transaction log backup job to stop the log file from growing too much.
So my question is: can I truncate my log file after taking the log backup?

If the growth was caused by an abnormal event like a long-running transaction or a huge data import, you can restore the previous size with the code below:
DBCC SHRINKFILE(2,TRUNCATEONLY);
ALTER DATABASE [StackOverflow] MODIFY FILE (NAME = N'StackOverflow_Log', SIZE = 256MB);
The second argument of SHRINKFILE is the file_id, which you can look up with:
SELECT *
FROM sys.database_files;
Also, sometimes having a huge log file is perfectly normal; it depends on the activity in your database, so 256 MB might be too much or too little. It is better to set a size that is large enough to handle your normal workload without growing.
You should also check how often you are backing up the log file - every 10 minutes or every hour.
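For reference, a minimal sketch of such a transaction log backup, assuming the StackOverflow database used above and a placeholder backup path:
-- Take a transaction log backup; schedule this to run regularly (e.g. every 10 minutes).
-- The database name and path are placeholders - adjust them for your environment.
BACKUP LOG [StackOverflow]
TO DISK = N'D:\Backups\StackOverflow_log.trn'
WITH CHECKSUM;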

Related

SQL Server LDF file taking too much space

Why is my database log file taking up so much space? It's taking up almost 30 GB of my HDD. Even after deleting 1,000,000 records, it's not freeing up any space.
So:
1. Why is the log file taking this much space (30 GB)?
2. How can I free up the space?
1. Why is the log file taking this much space (30 GB)?
Either because your recovery model is not SIMPLE and the LDF eventually grew to that size,
or because there was a large one-time DML operation,
or because of other reasons, as noted by #sepupic in another answer.
2. How can I free up the space?
If recovery is other than SIMPLE:
First back up the transaction log file
Then perform a shrink, e.g. DBCC SHRINKFILE(2,256) (a sketch of this sequence follows the list)
If recovery is SIMPLE:
Just shrink it to the desired size, e.g. DBCC SHRINKFILE(2,256)
If the database log still does not shrink to the target size, check the exact reason using the code snippet from #sepupic's answer
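A minimal sketch of the non-SIMPLE sequence, with a hypothetical database name and backup path; file_id 2 is the log file in most databases with a single log:
-- 1. Back up the transaction log so the inactive portion can be reused
BACKUP LOG [MyDb] TO DISK = N'D:\Backups\MyDb_log.trn';
-- 2. Shrink the log file (file_id 2) down to 256 MB
DBCC SHRINKFILE (2, 256);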
Some members still advise physically removing the LDF file.
I strongly suggest not doing this. There is a notable related post by Aaron Bertrand:
Some things you don't want to do:
Detach the database, delete the log file, and re-attach. I can't
emphasize how dangerous this can be. Your database may not come back
up, it may come up as suspect, you may have to revert to a backup (if
you have one), etc. etc.
1. Why is the log file taking this much space (30 GB)?
In my case it was because Autogrowth / Maxsize was set to 200,000 MB.
2. How can I free up the space?
As described here, I used the following commands and the file is now less than 200 MB:
ALTER DATABASE myDatabaseName
SET RECOVERY SIMPLE
GO
DBCC SHRINKFILE (myDatabaseName_log, 1)
GO
ALTER DATABASE myDatabaseName
SET RECOVERY FULL
I have also set Autogrowth / Maxsize in the database properties to 1000 MB, Limited.
The link describes more, so I recommend referring to it for a detailed description and other options.
Thanks #hadi for the link.
Why is my database log file taking up so much space?
There can be more causes, not just the two mentioned in another answer.
You can find the exact reason using this query:
select log_reuse_wait_desc
from sys.databases
where name = 'myDB';
Here is a link to the BOL article that describes all the possible causes under log_reuse_wait:
sys.databases (Transact-SQL)
How can I free up the space?
First determine the cause using the query above, then fix it; for example, if it's broken replication, remove it or repair it.
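For instance, if log_reuse_wait_desc comes back as ACTIVE_TRANSACTION, a sketch of how to find the open transaction that is holding up truncation (run it in the affected database):
USE myDB;
GO
-- Reports the oldest active transaction (and the oldest replicated transaction, if any)
-- that is preventing the log from being truncated
DBCC OPENTRAN;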
You need a maintenance job to backup the transaction log, and do it often: like every 10 minutes or so.
A FULL backup once per day isn't good enough.
Alternatively, you can change the Recovery Model of the database from FULL to SIMPLE. But if you do this, you'll lose the ability to do point-in-time restores.
In my case the DB name contained bad characters, so the script didn't work.
I found and followed this article, which worked perfectly in two steps: changing the recovery model from full to simple and shrinking the DB log file by more than 95%.

Why does my SQL Server logfile have 99% space available after giving me a full warning?

While deleting a large number of records, I get this error:
The transaction log for database 'databasename' is full
I found this answer very helpful, it recommends:
Right-click your database in SQL Server Manager, and check the Options page.
Switch Recovery Model from Full to Simple
Right-click the database again. Select Tasks > Shrink > Files. Shrink the log file to a proper size (I generally stick to 20-25% of the size of the data files)
Switch back to Full Recovery Model
Take a full database backup straight away
Question: in step 3, when I go to shrink > files and choose log from the file type dropdown menu, it tells me that 99% of the allocated space is free.
Out of ~4500MB of allocated space, there is ~4400MB free (the data file size is ~3000MB).
Does that mean I'm good to go, and there is no need to shrink?
I don't understand this. Why would that be the case, given the warning I received initially?
I'm not one for hyperbole, but there are literally billions of articles written about SQL Server transaction logs.
Reader's digest version: if you delete 1,000,000 rows at a time, the logs are going to get large because it is writing those 1,000,000 deletes in case it has to roll back the transaction. The space needed to hold those records does not get released until the transaction commits. If your logs are not big enough to hold 1,000,000 deletes, the log will get filled, throw that error you saw, and rollback the whole transaction. Then all that space will most likely get released. Now you have a big log with lots of free space.
You probably hit a limit on your log file at 4.5 GB and it won't get any bigger. To avoid filling your logs in the future, chunk down your transactions to smaller amounts, like deleting 1,000 records at a time. A shrink operation will reduce the physical size of the file, like from 4.5 GB down to 1 GB.
https://learn.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-shrinkfile-transact-sql?view=sql-server-2017
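A common way to chunk those deletes, sketched with a hypothetical table and filter:
-- Delete in batches of 1,000 rows so each transaction stays small and the
-- log space can be reused between log backups. Table and WHERE clause are placeholders.
WHILE 1 = 1
BEGIN
    DELETE TOP (1000) FROM dbo.BigTable
    WHERE CreatedDate < '20150101';
    IF @@ROWCOUNT = 0 BREAK;
END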

Why can't I shrink transaction log?

I’m converting some historic databases to read-only and trying to clean them up. I’d like to shrink the transaction logs to 1MB. I realize it’s normally considered bad practice to shrink transaction logs, but I think this is probably the exception to the rule.
The databases are set to SIMPLE recovery on SQL Server 2012 Standard. So I would have expected that after issuing a CHECKPOINT statement that the contents of the transaction log could be shrunk, but that’s not working.
I have tried:
Manually issuing CHECKPOINT commands.
Detaching/attaching files.
Backing up / restoring database.
Switching from Simple, to Full, back to Simple recovery.
Shaking my fist at it in a threatening manner.
After each of those attempts I tried running:
DBCC SHRINKFILE (N'MYDATABASE_2010_log' , 0)
DBCC SHRINKFILE (N'MYDATABASE_2010_log' , 0, TRUNCATEONLY)
DBCC SHRINKFILE (N'MYDATABASE_2010_log' , 1)
I’ve seen this error message a couple times:
Cannot shrink log file 2 (MYDATABASE_2010_log) because total number of
logical log files cannot be fewer than 2.
At one point I tried creating a dummy table and adding records to it in an attempt to get the transaction log to roll over and move to the end of the file, but that was just a wild guess.
Here are the results of DBCC SQLPERF(LOGSPACE)
Database Name Log Size (MB) Log Space Used (%) Status
MyDatabase_2010 10044.13 16.71015 0
Here are the results of DBCC LOGINFO:
RecoveryUnitId FileId FileSize StartOffset FSeqNo Status Parity CreateLSN
0 2 5266014208 8192 15656 0 64 0
0 2 5266014208 5266022400 15673 2 128 0
Does anyone have any idea what I'm doing wrong?
If you are unable to truncate and shrink the log file, the first thing you should do is check whether there is a real reason that prevents the log from being truncated. Execute this query:
SELECT name,
       log_reuse_wait,
       log_reuse_wait_desc
FROM sys.databases AS D;
You can filter by the database name.
If the value for log_reuse_wait is 0, the database log can be truncated. If the value is other than 0, then there is something that prevents the truncation. See the description of the log reuse wait values in the docs for sys.databases, or even better here: Factors That Can Delay Log Truncation. If the value is 1 you can wait for the checkpoint, or run it by hand: CHECKPOINT.
Once you have checked that there is nothing preventing log truncation, you can do the usual sequence of backups (log, full or incremental) and DBCC SHRINKDATABASE or DBCC SHRINKFILE. The file may or may not shrink.
If at this point the file is not shrunk, don't worry, the reason is the physical structure of the log file, and it can be solved:
The log file works as a circular buffer, and can only be truncated by removing the end of the file. If the used part of the circular buffer is at the end of the file, then it cannot be truncated. You simply have to wait until the used part of the transaction log advances, and moves from the end of the file to the beginning of the file. Once this happens, you can run one of the shrink commands, and your file will shrink without a glitch. This is very well explained in this page: How to shrink the SQL Server log.
If you want to force the active part of the log to move from the end to the start of the file (sketched below):
do some fairly heavy operation on the DB inside a transaction and roll it back, to move the transaction log pointer further
repeat the backup, to truncate the log
shrink the file. If the active part of the log moved far enough, the file will shrink
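A rough sketch of that sequence for the database above; the dummy update is only there to generate log records and push the active VLF forward:
-- Generate some log activity and roll it back so the active part of the log wraps around
BEGIN TRAN;
UPDATE dbo.SomeLargeTable SET SomeColumn = SomeColumn;  -- hypothetical table/column
ROLLBACK;
-- Truncate the log: CHECKPOINT is enough under SIMPLE recovery;
-- under FULL recovery take a log backup instead
CHECKPOINT;
-- Retry the shrink once the active VLF has moved
DBCC SHRINKFILE (N'MYDATABASE_2010_log', 1);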
Allowing for the usual caveats about backing up beforehand, I found the answer at SQLServerCentral:
DETACH the database, RENAME the log file, ATTACH the database using:
CREATE DATABASE xxxx ON (FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\xxxx.MDF') FOR ATTACH_REBUILD_LOG;
That's a new error to me. However, the course seems clear; expand the log file by a trivial amount to get some new VLFs created and then induce some churn in your db so that the current large VLF isn't the active one.
The first two VLFs are 5GB in size each. You somehow need to get rid of them. I can think of no sequence of shrinks and growths that would do that. I have never heard of the possibility of splitting or shrinking a VLF.
Just create a new log file. A database can have multiple. Then, delete the old log file.
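A sketch of that approach with placeholder names and paths; the old log file can only be removed once it contains no active log records:
-- Add a second, small log file
ALTER DATABASE [MYDATABASE_2010]
    ADD LOG FILE (NAME = N'MYDATABASE_2010_log2',
                  FILENAME = N'D:\Data\MYDATABASE_2010_log2.ldf',
                  SIZE = 256MB);
-- After a CHECKPOINT (or a log backup under FULL recovery) has cleared the active
-- portion of the original log file, drop it
ALTER DATABASE [MYDATABASE_2010] REMOVE FILE [MYDATABASE_2010_log];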

Microsoft SQL Server Management Studio backup size goes negative

The problem is that I need to explain the different sizes of backups that are being made of a database in a plant. Sometimes the difference between the sizes is negative, even though no data is being deleted from the system.
Date Backup file Size (KB) Diff
6/1/10 backup201006010100.bak 3355914
7/1/10 backup201007010100.bak 4333367 977453
7/2/10 backup201007020100.bak 4355963 22596
7/3/10 backup201007030100.bak 4380896 24933
7/4/10 backup201007040100.bak 4380404 -492
8/1/10 backup201008010100.bak 4507775 1151861
8/2/10 backup201008020100.bak 4507777 2
8/3/10 backup201008030100.bak 4532492 24715
8/4/10 backup201008040100.bak 4584028 51536
On 7/3/10 and 8/1/10 there was no production. On other days production is mostly consistent, so the database is expected to grow pretty much linearly in size. How is it, then, that the size goes negative?
In the maintenance plan the tasks are: Backup Database Task (Type: Full, Append Existing) -> Shrink Database (Leave 10% free space)
The last step of the backup process is to append data from the log that reflects any changes made to the database during the backup process, this could account for the difference you are seeing.
SQL Server has a two-step process for storing your data. First, your data goes into the log file, and it is not only the data that you inserted, but also the whole list of operations SQL Server performed on your data. So, if something goes wrong, SQL Server can 'replay' your transactions.
At some point a CHECKPOINT happens, and the data gets written into the data file. Log files have a tendency to grow and shrink.
During a BACKUP, SQL Server writes the data and log files EXACTLY as they look at the point of the backup. That's why you can see that difference in size.

How do I shrink the transaction log on MS SQL 2000 databases?

I have several databases where the transaction log (.LDF) is many times larger than the database file (.MDF).
What can I do to automatically shrink these or keep them from getting so large?
That should do the job
use master
go
dump transaction <YourDBName> with no_log
go
use <YourDBName>
go
DBCC SHRINKFILE (<YourDBNameLogFileName>, 100) -- where 100 is the size you may want to shrink it to in MB, change it to your needs
go
-- then you can call to check that all went fine
dbcc checkdb(<YourDBName>)
A word of warning:
You would only really use this on a test/development database where you do not need a proper backup strategy, as dumping the log will result in losing transaction history. In live systems you should use the solution suggested by Cade Roux.
Backup transaction log and shrink it.
If the DB is being backed up regularly and truncated on checkpoint, it shouldn't grow out of control, however, if you are doing a large number (size) of transactions between those intervals, it will grow until the next checkpoint.
Right click on the database in Enterprise Manager > All Tasks > Shrink Database.
DBCC SHRINKFILE.
Here for 2005.
Here for 2000.
No one here said it, so I will: NEVER EVER shrink the transaction log. It is a bad idea from the SQL Server point of view.
Keep the transaction log small by doing daily db backups and hourly (or less) transaction log backups. The transaction log backup interval depends on how busy your db is.
Another thing you can try is to set the recovery model to simple (if it is not already) for the database, which will keep the log files from growing as rapidly. We had this problem recently when our transaction log filled up and we were not permitted any more transactions.
A combination of the shrink file which is in multiple answers and simple recovery mode made sure our log file stayed a reasonable size.
Using Query Analyser:
USE yourdatabase
SELECT * FROM sysfiles
You should find something similar to:
FileID …
1 1 24264 -1 1280 1048578 0 yourdatabase_Data D:\MSSQL_Services\Data\yourdatabase_Data.MDF
2 0 128 -1 1280 66 0 yourdatabase_Log D:\MSSQL_Services\Data\yourdatabase_Log.LDF
Check the file ID of the log file (it's 2 most of the time).
Execute the CHECKPOINT command 2 or 3 times to write every page to the hard drive.
Checkpoint
GO
Checkpoint
GO
Execute the following commands to truncate the log and shrink the file to 1 MB:
DUMP TRAN yourdatabase WITH no_log
DBCC SHRINKFILE(2,1) /* (FileID, the new size = 1 MB) */
Here is what I have been using:
BACKUP LOG <CatalogName> with TRUNCATE_ONLY
DBCC SHRINKDATABASE (<CatalogName>, 1)
use <CatalogName>
go
DBCC SHRINKFILE(<CatalogName_logName>,1)
Try sp_force_shrink_log, which you can find here:
http://www.rectanglered.com/sqlserver.php
