Why can't I shrink transaction log? - sql-server

I’m converting some historic databases to read-only and trying to clean them up. I’d like to shrink the transaction logs to 1MB. I realize it’s normally considered bad practice to shrink transaction logs, but I think this is probably the exception to the rule.
The databases are set to SIMPLE recovery on SQL Server 2012 Standard, so I would have expected that after issuing a CHECKPOINT statement the contents of the transaction log could be shrunk, but that isn't working.
I have tried:
Manually issuing CHECKPOINT commands.
Detaching/attaching files.
Backing up / restoring database.
Switching from Simple, to Full, back to Simple recovery.
Shaking my fist at it in a threatening manner.
After each of those attempts I tried running:
DBCC SHRINKFILE (N'MYDATABASE_2010_log' , 0)
DBCC SHRINKFILE (N'MYDATABASE_2010_log' , 0, TRUNCATEONLY)
DBCC SHRINKFILE (N'MYDATABASE_2010_log' , 1)
I’ve seen this error message a couple times:
Cannot shrink log file 2 (MYDATABASE_2010_log) because total number of
logical log files cannot be fewer than 2.
At one point I tried creating a dummy table and adding records to it in an attempt to get the transaction log to roll over and move to the end of the file, but that was just a wild guess.
Here are the results of DBCC SQLPERF(LOGSPACE)
Database Name Log Size (MB) Log Space Used (%) Status
MyDatabase_2010 10044.13 16.71015 0
Here are the results of DBCC LOGINFO:
RecoveryUnitId FileId FileSize StartOffset FSeqNo Status Parity CreateLSN
0 2 5266014208 8192 15656 0 64 0
0 2 5266014208 5266022400 15673 2 128 0
Does anyone have any idea what I'm doing wrong?

If you are unable to truncate and shrink the log file, the first thing to do is check whether there is a real reason preventing the log from being truncated. Execute this query:
SELECT name,
       log_reuse_wait,
       log_reuse_wait_desc
FROM sys.databases AS D;
You can filter by the database name.
If the value of log_reuse_wait is 0, the database log can be truncated. If the value is anything other than 0, then something is preventing the truncation. See the descriptions of the log reuse wait values in the docs for sys.databases, or even better here: Factors That Can Delay Log Truncation. If the value is 1, you can wait for the next automatic checkpoint, or run one by hand: CHECKPOINT.
Once you have checked that nothing is preventing log truncation, you can do the usual sequence of a backup (log, full, or incremental) followed by DBCC SHRINKDATABASE or DBCC SHRINKFILE. The file may or may not shrink.
If at this point the file has not shrunk, don't worry: the reason is the physical structure of the log file, and it can be solved.
The log file works as a circular buffer, and can only be shrunk by removing space from the end of the file. If the used part of the circular buffer sits at the end of the file, the file cannot be shrunk. You simply have to wait until the active part of the transaction log advances and wraps from the end of the file back to the beginning. Once this happens, you can run one of the shrink commands, and your file will shrink without a hitch. This is very well explained on this page: How to shrink the SQL Server log.
If you want to force the active part of the log file to move from the end to the start of the buffer (a sketch follows this list):
do some quite heavy operation on the DB inside a transaction and roll it back, to move the transaction log pointer further
repeat the backup, to truncate the log
shrink the file. If the active part of the log moved far enough, the file will shrink
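A minimal sketch of that sequence, using the database and file names from the question (the scratch query is only a stand-in for some log-generating work):
USE MyDatabase_2010;
GO
-- 1. Heavy throwaway work inside a transaction, rolled back, to push
--    the active part of the log forward
BEGIN TRANSACTION;
SELECT * INTO #scratch FROM sys.objects;
ROLLBACK TRANSACTION;
GO
-- 2. Truncate the inactive log: CHECKPOINT is enough under SIMPLE
--    recovery; under FULL you would take a log backup instead
CHECKPOINT;
GO
-- 3. Shrink; if the active part of the log moved far enough, it works
DBCC SHRINKFILE (N'MYDATABASE_2010_log', 1);
GO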

Allowing for the usual caveats about backing up beforehand, I found the answer at SQLServerCentral:
DETACH the database, RENAME the log file, then ATTACH the database using:
CREATE DATABASE xxxx ON (FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\xxxx.MDF') FOR ATTACH_REBUILD_LOG;

That's a new error to me. However, the course seems clear: expand the log file by a trivial amount to get some new VLFs created, then induce some churn in your DB so that the current large VLF isn't the active one (sketch below).
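A hedged sketch of that idea, reusing the names from the question; the new size is an assumption, chosen just above the current ~10044 MB so that the growth creates a few fresh, small VLFs:
ALTER DATABASE MyDatabase_2010
MODIFY FILE (NAME = N'MYDATABASE_2010_log', SIZE = 10100MB);
GO
-- then generate some activity, CHECKPOINT, and shrink as usual, so the
-- active VLF wraps into the newly created ones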

The first two VLFs are 5 GB each. You somehow need to get rid of them, and I can think of no sequence of shrinks and growths that would do that; I have never heard of any way to split or shrink a VLF.
Just create a new log file instead; a database can have more than one. Then delete the old log file, for example:
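Something like the following sketch; the path of the new file is an assumption, and note that the old file can only be removed once it no longer contains any active log:
USE MyDatabase_2010;
GO
ALTER DATABASE MyDatabase_2010
ADD LOG FILE (NAME = N'MYDATABASE_2010_log2',
    FILENAME = N'D:\SQLData\MYDATABASE_2010_log2.ldf',
    SIZE = 10MB);
GO
-- under SIMPLE recovery, a CHECKPOINT (plus some activity) usually
-- clears the old file of active log so it can be dropped
CHECKPOINT;
GO
ALTER DATABASE MyDatabase_2010 REMOVE FILE [MYDATABASE_2010_log];
GO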

Related

SQL Server Log File Size

Apologies if this question has been asked before; I'm not very experienced with SQL Server.
On our SQL Server there is a log file of more than 1 TB, and the database is in FULL recovery.
I have taken an initial full backup and set up a regular transaction log backup job to stop the log file from growing too much.
So my question is: can I truncate my log file after taking a log backup?
If the growth was caused by an abnormal event, such as a long-running transaction or a huge data import, you can restore the previous size with the code below:
DBCC SHRINKFILE(2,TRUNCATEONLY);
ALTER DATABASE [StackOverflow] MODIFY FILE (NAME = N'StackOverflow_Log', SIZE = 256MB);
The first argument to SHRINKFILE is the file_id, which you can look up with:
SELECT *
FROM sys.database_files;
Also, having a huge log file can sometimes be perfectly normal; it depends on the activity in your database, so 256 MB might be too much or too little. It is best to set a size that is enough to handle your normal workload without autogrowing.
You should also check how often you are backing up the log file: every 10 minutes, or every hour.

Why does my SQL Server logfile have 99% space available after giving me a full warning?

While deleting a large number of records, I get this error:
The transaction log for database 'databasename' is full
I found this answer very helpful, it recommends:
Right-click your database in SQL Server Manager, and check the Options page.
Switch Recovery Model from Full to Simple
Right-click the database again. Select Tasks > Shrink > Files. Shrink the log file to a proper size (I generally stick to 20-25% of the size of the data files)
Switch back to Full Recovery Model
Take a full database backup straight away
Question: in step 3, when I go to Shrink > Files and choose Log from the file type dropdown menu, it tells me that 99% of the allocated space is free.
Out of ~4500 MB of allocated space, ~4400 MB is free (the data file size is ~3000 MB).
Does that mean I'm good to go, and there is no need to shrink?
I don't understand this. Why would that be the case, given the warning I received initially?
I'm not one for hyperbole, but there are literally billions of articles written about SQL Server transaction logs.
Reader's Digest version: if you delete 1,000,000 rows at a time, the log is going to get large, because it has to record all 1,000,000 deletes in case it needs to roll back the transaction. The space needed to hold those records is not released until the transaction commits. If your log is not big enough to hold 1,000,000 deletes, it will fill up, throw the error you saw, and roll back the whole transaction. Then all that space will most likely be released, and you are left with a big log with lots of free space.
You probably hit a limit on your log file at 4.5 GB, and it won't get any bigger. To avoid filling your log in the future, chunk your transactions down into smaller batches, for example deleting 1,000 records at a time (see the sketch below). A shrink operation reduces the physical size of the file, for example from 4.5 GB down to 1 GB.
https://learn.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-shrinkfile-transact-sql?view=sql-server-2017
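A minimal sketch of the chunked-delete approach; the table and column names (dbo.BigTable, SomeDate) are hypothetical:
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    -- each 1,000-row batch commits on its own, so the log can be
    -- reused between batches (after log backups under FULL recovery)
    DELETE TOP (1000) FROM dbo.BigTable
    WHERE SomeDate < '20150101';
    SET @rows = @@ROWCOUNT;  -- stop once nothing is left to delete
END;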

The transaction log for database 'Name' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

I am getting the following error while trying to insert 8,355,447 records with a single INSERT query (I am using sql-server-2008-r2):
INSERT INTO table
SELECT * FROM [DbName].table
Please help me solve this. Thanks!
Check the disk space on the SQL Server, as this typically occurs when the transaction log cannot expand due to a lack of free disk space.
If you are struggling for disk space, you can shrink the transaction logs of your application databases, and don't forget to shrink the transaction log of the TEMPDB database as well.
Note: posting this as a separate answer, as I am a newcomer to Stack Overflow and don't have enough points to add comments.
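If you want to check free disk space from inside SQL Server, one option (assuming SQL Server 2008 R2 SP1 or later, which introduced sys.dm_os_volume_stats) is:
SELECT DISTINCT
       vs.volume_mount_point,
       vs.available_bytes / 1048576 AS free_mb  -- free space per volume, in MB
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;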
More than one option may be available:
Is it necessary to run this insert as a single transaction? If that is not mandatory, you can insert, say, 'n' rows as a single transaction, then the next 'n', and so on (see the sketch after this list).
If you can spare some space on another drive, create an additional log file on that drive.
Check whether you can clear some space on the drive under consideration by moving some other file somewhere else.
If none of the previous options are open, shrink the SQL transaction log files with the TRUNCATEONLY option (this releases the free space available at the end of the log file to the OS).
DBCC SQLPERF('logspace') can be used to find the log files with the most free space in them.
USE those databases and apply a shrink; the command is:
DBCC SHRINKFILE (<log_file_name>, TRUNCATEONLY)
DBCC SHRINKFILE details are available here: DBCC Shrinkfile.
If you are not getting space even after that, you may have to do a more rigorous shrink by reallocating pages within the database (by specifying a target size); details of this can be taken from the link provided.
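A hedged sketch of the batched-insert option above; the table and key names are hypothetical, and it assumes the source table has an increasing numeric key Id:
DECLARE @batch INT = 100000, @fromId BIGINT = 0, @maxId BIGINT;
SELECT @maxId = MAX(Id) FROM [DbName].dbo.SourceTable;
WHILE @fromId < @maxId
BEGIN
    -- each batch is its own transaction, so log space can be reused
    -- between batches instead of accumulating for all 8 million rows
    INSERT INTO dbo.TargetTable
    SELECT * FROM [DbName].dbo.SourceTable
    WHERE Id > @fromId AND Id <= @fromId + @batch;
    SET @fromId = @fromId + @batch;
END;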
Well, clean up the transaction log. The error is extremely clear, if anyone cares to read it.

DB2 Logfile Limitation, SQLCODE: -964

I have tried a huge insert query in DB2:
INSERT INTO MY_TABLE_COPY ( SELECT * FROM MY_TABLE);
Before that, I set the following:
UPDATE DATABASE CONFIGURATION FOR MY_DB USING LOGFILSIZ 70000;
UPDATE DATABASE CONFIGURATION FOR MY_DB USING LOGPRIMARY 50;
UPDATE DATABASE CONFIGURATION FOR MY_DB USING LOGSECOND 2;
db2stop force;
db2start;
and I got this error:
DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL0964C The transaction log for the database is full. SQLSTATE=57011
SQL0964C The transaction log for the database is full.
Explanation:
All space in the transaction log is being used.
If a circular log with secondary log files is being used, an
attempt has been made to allocate and use them. When the file
system has no more space, secondary logs cannot be used.
If an archive log is used, then the file system has not provided
space to contain a new log file.
The statement cannot be processed.
User Response:
Execute a COMMIT or ROLLBACK on receipt of this message (SQLCODE)
or retry the operation.
If the database is being updated by concurrent applications,
retry the operation. Log space may be freed up when another
application finishes a transaction.
Issue more frequent commit operations. If your transactions are
not committed, log space may be freed up when the transactions
are committed. When designing an application, consider when to
commit the update transactions to prevent a log full condition.
If deadlocks are occurring, check for them more frequently.
This can be done by decreasing the database configuration
parameter DLCHKTIME. This will cause deadlocks to be detected
and resolved sooner (by ROLLBACK) which will then free log
space.
If the condition occurs often, increase the database
configuration parameter to allow a larger log file. A larger log
file requires more space but reduces the need for applications to
retry the operation.
If installing the sample database, drop it and install the
sample database again.
sqlcode : -964
sqlstate : 57011
any suggestions?
I used the maximum values for LOGFILSIZ, LOGPRIMARY, and LOGSECOND.
The max value for LOGFILSIZ may differ between Windows, Linux, etc., but you can try a very big number and the DB will tell you what the max is. In my case it was 262144.
Also, LOGPRIMARY + LOGSECOND <= 256. I tried 128 for each and it worked for my huge query.
Instead of performing trial and error with the DB CFG parameters, you can put these INSERT statements in a stored procedure that commits at intervals. This might help; refer to the following post for details:
https://prasadspande.wordpress.com/2014/06/06/plsql-ways-updatingdeleting-the-bulk-data-from-the-table/
Thanks

How do I shrink the transaction log on MS SQL 2000 databases?

I have several databases where the transaction log (.LDF) is many times larger than the database file (.MDF).
What can I do to automatically shrink these or keep them from getting so large?
That should do the job:
use master
go
dump transaction <YourDBName> with no_log
go
use <YourDBName>
go
DBCC SHRINKFILE (<YourDBNameLogFileName>, 100) -- where 100 is the size you may want to shrink it to in MB, change it to your needs
go
-- then you can call to check that all went fine
dbcc checkdb(<YourDBName>)
A word of warning: you would only really use this on a test/development database where you do not need a proper backup strategy, as dumping the log results in losing transaction history. On live systems you should use the solution suggested by Cade Roux.
Backup transaction log and shrink it.
If the DB is being backed up regularly and truncated on checkpoint, it shouldn't grow out of control; however, if you are doing a large number (or size) of transactions between those intervals, it will grow until the next checkpoint.
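In T-SQL terms, roughly (placeholders in the same style as the other answers; the backup path is an assumption):
BACKUP LOG <YourDBName> TO DISK = N'D:\Backups\YourDBName_log.bak'
GO
DBCC SHRINKFILE (<YourDBNameLogFileName>, 100) -- target size in MB
GO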
Right click on the database in Enterprise Manager > All Tasks > Shrink Database.
DBCC SHRINKFILE.
Here for 2005.
Here for 2000.
No one here said it, so I will: NEVER EVER shrink the transaction log. It is a bad idea from the SQL Server point of view.
Keep the transaction log small by doing daily DB backups and hourly (or more frequent) transaction log backups. The transaction log backup interval depends on how busy your DB is.
Another thing you can try is to set the recovery model to SIMPLE for the database (if it is not already), which will keep the log file from growing as rapidly. We had this problem recently when our transaction log filled up and we were not permitted any more transactions.
A combination of the shrink-file commands given in other answers and SIMPLE recovery mode made sure our log file stayed a reasonable size.
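For example, in the same placeholder style as the other answers:
ALTER DATABASE <YourDBName> SET RECOVERY SIMPLE
GO
-- with SIMPLE recovery the inactive log is truncated on checkpoint,
-- so a one-time shrink should then hold
DBCC SHRINKFILE (<YourDBNameLogFileName>, 100) -- target size in MB
GO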
Using Query Analyser:
USE yourdatabase
SELECT * FROM sysfiles
You should find something similar to:
FileID …
1 1 24264 -1 1280 1048578 0 yourdatabase_Data D:\MSSQL_Services\Data\yourdatabase_Data.MDF
2 0 128 -1 1280 66 0 yourdatabase_Log D:\MSSQL_Services\Data\yourdatabase_Log.LDF
Check the file ID of the log file (it's 2 most of the time).
Execute the CHECKPOINT command two or three times to write every page to the hard drive:
Checkpoint
GO
Checkpoint
GO
Execute the following commands to truncate the log file and shrink it to 1 MB:
DUMP TRAN yourdatabase WITH no_log
DBCC SHRINKFILE(2, 1) /* (FileID, new size = 1 MB) */
Here is what I have been using:
BACKUP LOG <CatalogName> with TRUNCATE_ONLY
DBCC SHRINKDATABASE (<CatalogName>, 1)
use <CatalogName>
go
DBCC SHRINKFILE(<CatalogName_logName>,1)
Try sp_force_shrink_log, which you can find here:
http://www.rectanglered.com/sqlserver.php
