After I deleted a huge amount of data from a SQL Server database table, the database size increased.
When I run sp_spaceused it shows 2625.25 MB. It used to be ~1800.00 MB before I deleted from the table.
Is there any specific reason it keeps growing even if I delete data?
Growth of the transaction log is often the reason for a notable increase in size after a huge delete.
The space inside the log will eventually be reused, but you can shrink the log file if you need to reclaim the disk space.
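If you want to confirm where the growth actually went (data file vs. log file), you can compare the file sizes. A minimal sketch, assuming your database is called YourDatabase (a hypothetical name; use whatever database you ran the delete against):

USE YourDatabase;
GO
-- size is reported in 8 KB pages, so size * 8 / 1024 gives MB
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;
GO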
I'm assuming that you are using SQL Server (sp_spaceused). Deleting is logged, so your log file has grown.
See "SQL Server 2008 log will not truncate" for how to truncate your log (depending on your DB and recovery model), and then you can run
DBCC SHRINKFILE (N'<logical log file name>')
to reclaim the lost space.
Edit: As per @Aaron, truncating is also a logged operation. Answer corrected.
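A hedged sketch of that sequence, assuming a database called YourDatabase whose log file has the logical name YourDatabase_Log (both hypothetical; check sys.database_files for the real name):

USE YourDatabase;
GO
-- find the logical name of the log file first
SELECT name, type_desc FROM sys.database_files;
GO
-- under the SIMPLE recovery model a checkpoint frees the inactive log;
-- under FULL you must take a log backup before shrinking will help
CHECKPOINT;
GO
DBCC SHRINKFILE (N'YourDatabase_Log', 1);  -- target size in MB
GO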
Related
I have a database with a log file size of 700 MB. I have now set its max file size to 1 GB.
When it reaches 1 GB, transactions fail with the error "The transaction log for database is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases".
The same happens if I uncheck Autogrowth for the log file.
When I checked the log_reuse_wait_desc column in sys.databases it says "ACTIVE_TRANSACTION".
I am not able to understand why SQL Server is not maintaining the max file size limit. Why can it not delete old log records, or something like that, to stay within the max file size?
How does this work?
What I want is to limit the log file size so that it does not exceed 1 GB in any case.
There are a few things you need to consider here, especially if you want to restrict the log file size to 1GB.
As already mentioned, you need to understand the difference between the three recovery models. Taking log backups is a key task when using the full recovery model. However, this is only part of the problem; log backups only truncate the inactive part of the log, so a single large transaction could still fill the log file with 1 GB+ of data, and then you are in the same position you are in now... even if you are in the simple recovery model (a log backup will not help you here!).
In an ideal situation, you would not restrict the log file in this way, because of this problem. If possible you want to allow it to auto-grow so, in theory, it could fill the disk.
Transaction log management is a science in itself. Kimberly Tripp has some very good advice on how to manage transaction log throughput here.
Understanding VLFs will allow you to better manage your transaction log, and could help you proportion your log file better for large transactions.
If, after everything you have read, you are still required to limit transaction log growth, then you will need to consider batching large modifications. This allows you to delete or update, say, 1000 rows at a time, meaning that only 1000 records are written to the log per transaction. SQL Server uses write-ahead logging, so in order to complete a transaction you first need enough space in the transaction log to record all of its details. Under the simple recovery model the inactive log is truncated automatically at checkpoints, meaning you don't need to back up the log. Therefore, writing 1000 records at a time (for example) will cause less of a problem than one huge 1,000,000 record delete; see the sketch after this list.
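A minimal sketch of that batching pattern, assuming a hypothetical table dbo.BigTable with a filter column CreatedDate (both names are placeholders, not from the original question):

-- delete in chunks so each transaction writes only a small amount of log
DECLARE @rows INT;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (1000) FROM dbo.BigTable
    WHERE CreatedDate < '20100101';   -- hypothetical filter

    SET @rows = @@ROWCOUNT;

    -- under SIMPLE recovery a checkpoint lets the log space be reused;
    -- under FULL recovery you would take log backups instead
    CHECKPOINT;
END
GO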
Redgate provide a free e-book to help you on your way!
EDIT:
P.S. I've just read your comment above... If you are in the full recovery model you MUST take log backups, otherwise SQL Server WILL NOT reuse the space in the log, and will continue to write to the log, causing it to expand! However note, you MUST have a full backup before transaction log backups take effect. SQL Server cannot back up the log if it doesn't have an initial restore point (i.e. the full backup).
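A hedged sketch of that backup chain, with hypothetical database name and file paths:

-- the full backup establishes the restore point the log backups need
BACKUP DATABASE YourDatabase
TO DISK = 'D:\Backups\YourDatabase_full.bak';   -- hypothetical path
GO
-- subsequent log backups mark the inactive part of the log as reusable
BACKUP LOG YourDatabase
TO DISK = 'D:\Backups\YourDatabase_log.trn';
GO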
I tried to shrink the log file but it only freed a small amount of space.
The log space of tempdb is full due to the data file.
How can I free the space?
Edit:
I have checked the following:
1. Bad queries running - no, there are no queries running.
2. DBCC OPENTRAN - no result.
3. tempdb is in the SIMPLE recovery mode.
4. The transaction log is on a separate 1 TB drive, which is now 40% free.
Observation:
When I right-click on tempdb > Tasks > Shrink > Database, it shows 99% free space available. Can I shrink the database file here?
Restarting the SQL instance will flush the tempdb and recreate it.
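Before deciding whether to shrink, it can help to see what is actually consuming tempdb. A sketch using the sys.dm_db_file_space_usage DMV (run in the tempdb context; page counts are 8 KB pages):

USE tempdb;
GO
-- break tempdb usage down by consumer
SELECT
    SUM(user_object_reserved_page_count) * 8 / 1024 AS user_objects_mb,
    SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
    SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_mb,
    SUM(unallocated_extent_page_count) * 8 / 1024 AS free_mb
FROM sys.dm_db_file_space_usage;
GO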
As we create and drop temporary tables and insert data into them, tempdb and its log grow in size without limit. They reach hundreds of GB and fill the hard disk.
This can exhaust disk space on the database server, and the application may crash.
We need to restart the SQL Express service, which I think is a bad idea.
Stopping the service causes the site/application to go down.
So what is the alternative for this problem?
You can always try shrinking the database files:
USE [tempdb]
GO
DBCC SHRINKFILE (N'templog' , 0)
GO
DBCC SHRINKFILE (N'tempdev' , 0)
GO
This will release all unused space from tempdb. But MSSQL should reuse the space anyway, so if your files are that big you need to look into your logic, find the places where you create really big temporary tables, and try to reduce their sizes and/or their lifetimes.
Also, make sure you drop temporary tables once you no longer need them.
And you can try to reduce session lifetime. That guarantees that old unused tables will be dropped.
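A small sketch of the drop-when-done habit; the table names here (#WorkingSet, dbo.SourceTable) are hypothetical:

-- create, use, then explicitly drop the temp table instead of
-- leaving it to linger for the lifetime of the session
CREATE TABLE #WorkingSet (Id INT PRIMARY KEY, Payload VARCHAR(100));

INSERT INTO #WorkingSet (Id, Payload)
SELECT Id, Payload FROM dbo.SourceTable;   -- hypothetical source

-- ... work with #WorkingSet ...

DROP TABLE #WorkingSet;
GO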
How do I increase the size of the transaction log? Is it also possible to temporarily increase the transaction log?
Let's say I have the following scenario. I have a DELETE operation that's too big for the current transaction log. I want to:
Increase the transaction log (can I detect the current size? can I tell how large I need the transaction log to be for my operation?)
(Perform my operation)
Backup the transaction log
Restore the size of the transaction log.
Short answer:
SQL 2k5/2k8 How to: Increase the Size of a Database (SQL Server Management Studio) (applies to log also), How to: Shrink a Database (SQL Server Management Studio)
SQL 2K How to increase the size of a database (Enterprise Manager), How to shrink a database (Enterprise Manager)
Long answer: you can use ALTER DATABASE ... MODIFY FILE to change the size of database files, including LOG files. You can look up master_files/sysfiles (2k) or <dbname>.sys.database_files (2k5/2k8) to get the logical name of the log. And you can use DBCC SHRINKFILE to shrink a file (if possible).
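A hedged sketch of that sequence, assuming a database called YourDatabase whose log file turns out to have the logical name YourDatabase_Log (both placeholders; look them up first):

USE YourDatabase;
GO
-- find the logical name and current size of the log file
SELECT name, size * 8 / 1024 AS size_mb
FROM sys.database_files
WHERE type_desc = 'LOG';
GO
-- grow the log up front so the DELETE does not rely on autogrow
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = 'YourDatabase_Log', SIZE = 4096MB);
GO
-- ... perform the DELETE and back up the log ...
-- then shrink back down if that really is required
DBCC SHRINKFILE (N'YourDatabase_Log', 1024);   -- target size in MB
GO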
"can I tell how large I need the transaction log to be for my operation?"
It depends on a lot of factors (is this new data? is it an update? is it a delete? what recovery model? do you have compression on SQL 2k8? etc.) but it is usually bigger than you expect. I would estimate 2.5 times the size of the update you are about to perform.
Update:
I missed that you said it is a DELETE. A rough estimate is 1.5 times the size of the data deleted (including all indexes).
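To turn that rule of thumb into a number, you could check how much space the rows and indexes occupy before deleting. A sketch with a hypothetical table name:

-- 'reserved' covers data plus indexes; multiply by roughly 1.5
-- to estimate the log space the DELETE will need
EXEC sp_spaceused N'dbo.BigTable';
GO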
The transaction log can be configured to expand as needed. You set the option to grow automatically.
However, sometimes the transaction log gets too big, running out of disk space or making SQL Server unusable. When that happens:
Back up the transaction log; SQL Server will then truncate the inactive transactions.
After the backup, the used portion of the log is reduced and the file can be shrunk if needed.
To configure autogrow:
Right-click on the database in Management Studio.
Select Properties.
Update the Autogrowth value (on the Files page).
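The same settings can be scripted in T-SQL if you prefer; a sketch assuming a database called YourDatabase with a log file whose logical name is YourDatabase_Log (both hypothetical):

-- grow the log in 256 MB steps and cap it at 4 GB
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = 'YourDatabase_Log', FILEGROWTH = 256MB, MAXSIZE = 4096MB);
GO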
The most important part is the last line of your scenario: "Restore the size of the transaction log." You mean shrink the log back to its original size.
This is really dangerous for a lot of reasons, and we've covered them in a couple of stories over at SQLServerPedia:
http://sqlserverpedia.com/wiki/Shrinking_Databases
http://sqlserverpedia.com/blog/sql-server-backup-and-restore/backup-log-with-truncate_only-like-a-bear-trap/
http://sqlserverpedia.com/blog/sql-server-bloggers/i-was-in-the-pool-dealing-with-sql-shrinkage/
This question might be kind of elementary, but here goes:
I have a SQL Server database with a 4 GB log file. The DB is 16GB and is backed up nightly.
Can I truncate the log regularly because the entire DB+Log is backed up each night?
You can add something like this to your maintenance schedule to run every night before the backup. It will back up the log and then try to shrink/truncate your log file to 1 MB:
BACKUP LOG DBNAME
TO DISK = 'D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Backup\DBNAME.log'
GO
DBCC SHRINKFILE('DBNAME_Log', 1)
GO
Are you sure the log is backed up nightly and not just the database?
If so, then what does this database do? Are you deleting and refreshing whole tables? If so, your log might be the right size for the amount of transactions you have. You want the log to be large enough to handle your normal transaction load without having to grow. A small log can be a detriment to performance.
If this database is not transactional in nature (i.e., the tables are populated by full refreshes rather than one record at a time), then change the recovery model to simple. Do not do that, though, if you have transactional tables that you will need to be able to recover from the log rather than simply re-importing the data.
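A one-line sketch of that change, with a hypothetical database name:

-- switch to the simple recovery model; point-in-time restore from
-- log backups is no longer possible after this
ALTER DATABASE YourDatabase SET RECOVERY SIMPLE;
GO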
If you can run log backups during the day (depending on load, etc. this may or may not be possible for you) you can keep the log file under control by doing so. This will prevent the log file itself from growing quite so large, and also provide the side benefit of giving you the ability to restore closer to the point of failure in the event of a problem.
You'll still need to shrink the log file once using DBCC SHRINKFILE, but if the log is backed up regularly after that point it should stabilize at a smaller size.
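To judge how often those daytime log backups need to run, you can watch how full the log actually gets; DBCC SQLPERF(LOGSPACE) reports the log size and percentage used for every database:

-- one row per database: log size in MB and the percentage in use
DBCC SQLPERF(LOGSPACE);
GO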