Does the recovery model of a database affect the size of tempdb.mdf?
We have a database that involves a lot of processing and bulk inserts. We are having problems with the tempdb file growing to extremely large sizes (over 70 GB). The database in question is set to Full Recovery. Will changing it to Simple Recovery (on the database with all the transactions, not on tempdb) prevent it from using tempdb for these large inserts and bulk loads?
The recovery model of the database doesn't affect its use of tempdb. The tempdb usage is most likely from the 'processing' part: static cursors, temp tables and table variables, sort operations and other worktable-backed query operators, and large variables and parameters.
The bulk insert part (i.e. the part that would be affected by the recovery model) has no tempdb impact.
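To see where that tempdb space is actually going, a diagnostic query along these lines can help (a sketch; the MB conversion assumes 8 KB pages):

-- User objects are temp tables and table variables; internal objects are
-- sorts, hash operations, spools, and other worktables.
SELECT session_id,
       user_objects_alloc_page_count     * 8 / 1024.0 AS user_objects_mb,
       internal_objects_alloc_page_count * 8 / 1024.0 AS internal_objects_mb
FROM   tempdb.sys.dm_db_session_space_usage
ORDER BY user_objects_alloc_page_count + internal_objects_alloc_page_count DESC;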
tempdb Size and Placement Recommendations
To achieve optimal tempdb performance, we recommend the following configuration for tempdb in a production environment:
Set the recovery model of tempdb to SIMPLE. This model automatically reclaims log space to keep space requirements small.
http://msdn.microsoft.com/en-us/library/ms175527.aspx
In order to remedy deadlocks (introduced by an indexed view), I attempted to use RCSI in SQL Server. I enabled this mode with:
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON
and verified that it is set by:
DBCC useroptions
SELECT * FROM sys.databases
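For a more targeted check than SELECT *, the relevant columns of sys.databases can be queried directly (a sketch; MyDatabase is the database from the question):

SELECT name,
       is_read_committed_snapshot_on,
       snapshot_isolation_state_desc
FROM   sys.databases
WHERE  name = 'MyDatabase';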
I have 8 tempdb data files and they are set to auto-grow by 64 MB. After ingesting thousands of records I do not see any growth in tempdb. Based on the documentation, RCSI heavily uses tempdb and increases its size considerably, so I expected to see some increase. Trace flags 1117 and 1118 are also ON, but there is no increase in tempdb size. I have not turned on ALLOW_SNAPSHOT_ISOLATION for the tempdb database.
Thanks
Based on the documentation, RCSI heavily uses tempdb and increases its size considerably.
There's a lot of baseless worry about RCSI. Note that INSERTs only create row versions if there is a trigger on the table.
From BOL
When the SNAPSHOT isolation level is enabled, each time a row is updated, the SQL Server Database Engine stores a copy of the original row in tempdb, and adds a transaction sequence number to the row.
This means that if you update one row, one row version is written to tempdb; if you alter or update an entire table, the entire table is versioned in tempdb. So it is entirely possible that your specific workload does not require large amounts of data to be versioned in tempdb. I've needed to increase the size of tempdb considerably (or turn off RCSI) during large updates to avoid this problem.
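If you want to measure how much of tempdb the version store is actually consuming, a query like this works (a sketch; the count is in 8 KB pages):

SELECT SUM(version_store_reserved_page_count) * 8 / 1024.0 AS version_store_mb
FROM   tempdb.sys.dm_db_file_space_usage;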
This question also discusses many things to consider when using TempDB.
How can we decrease the DB files size when removing lots of outdated data?
We have an application with data collected over many years. Now one customer has left the project and their data can be dropped. This customer alone accounts for 75% of the data in the database.
Disk usage is very high, and we are running in a virtualized cloud service where the pricing is VERY high. Changing to another provider is not an option, and buying more disk is not popular since, in practice, 75% of the data is no longer in use.
In my opinion it would be great if we could get rid of this customer's data, shrink the files, and be safe for years to come from reaching this disk-usage level again.
I have seen many threads warning about performance degradation due to index fragmentation.
So, how can we drop this customer's data (stored in the same tables that other customers use, indexed on customer id among other things) without causing any considerable drawbacks?
Are these steps the way to go, or are there better ways?
1. Delete this customer's data
2. DBCC SHRINKDATABASE (database, 30)
3. ALTER DATABASE database SET RECOVERY SIMPLE
4. DBCC SHRINKFILE (database_Log, 50)
5. ALTER DATABASE database SET RECOVERY FULL
6. DBCC INDEXDEFRAG
Only step 1 is required. Deleting data will free space within the database, which can be reused for new data. You should also execute ALTER INDEX...REORGANIZE or ALTER INDEX...REBUILD to compact indexes and distribute free space evenly throughout the indexes.
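As a sketch (dbo.Orders and the index name are placeholders for your own objects):

-- Reorganize compacts pages online and can be interrupted safely.
ALTER INDEX ALL ON dbo.Orders REORGANIZE;
-- A rebuild is heavier but removes fragmentation completely.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;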
EDIT:
Since your cloud provider charges for disk space used, you'll need to shrink files to release space back to the OS and avoid incurring charges. Below are some tweaks to the steps you proposed, which can be customized according to your RPO requirements; a worked T-SQL sketch follows the list.
1. ALTER DATABASE ... SET RECOVERY SIMPLE (or BULK_LOGGED)
2. Delete this customer's data
3. DBCC SHRINKFILE (database_Data, 30) --repeat for each file to the desired size
4. DBCC SHRINKFILE (database_Log, 50)
5. REORGANIZE (or REBUILD) indexes
6. ALTER DATABASE ... SET RECOVERY FULL
7. BACKUP DATABASE ...
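Put together as runnable T-SQL, the sequence might look like this sketch (MyDatabase, dbo.Orders, the customer id, the logical file names, the target sizes, and the backup path are all placeholders):

-- 1. Minimize logging for the heavy work; this breaks the log backup chain.
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;
USE MyDatabase;

-- 2. Delete the departed customer's rows in batches to keep the log small;
--    repeat until 0 rows are affected, for each table holding customer data.
DELETE TOP (10000) FROM dbo.Orders WHERE CustomerId = 4242;

-- 3 and 4. Shrink the data and log files (target sizes are in MB).
DBCC SHRINKFILE (MyDatabase_Data, 30);
DBCC SHRINKFILE (MyDatabase_Log, 50);

-- 5. Rebuild the indexes fragmented by the shrink.
ALTER INDEX ALL ON dbo.Orders REBUILD;

-- 6. Restore full logging...
ALTER DATABASE MyDatabase SET RECOVERY FULL;

-- 7. ...and take a full backup to start a new backup chain.
BACKUP DATABASE MyDatabase TO DISK = N'X:\Backups\MyDatabase.bak';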
I have a very large amount of data in the database log. After deleting all the records in the database tables, the database size is not reduced, especially the log file (.ldf):
File type   Total size (MB)   Space used (MB)
ROWS        56274.125000      55306.625000
LOG         179705.437500     179567.046875
How do I reduce the size?
Is the database in the Full recovery model? If so, you will need to implement transaction log backups before you can shrink the log file.
If you do not need transaction log backups (because your full backup is taken often enough that the business is okay with losing the data added since the previous backup), you can switch the database to the Simple recovery model.
You can find out which recovery model a database is in by right-clicking the database, selecting "Properties", and then checking the "Options" tab.
You can change the recovery model at that location, too. However, before you do, I highly recommend you to read up on the different recovery models, and the implications of changing them.
When you are ready to change the recovery model, you might want to read this first.
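The same check and change can be done in T-SQL (a sketch; MyDatabase is a placeholder):

-- The equivalent of the Properties > Options check.
SELECT name, recovery_model_desc
FROM   sys.databases
WHERE  name = 'MyDatabase';

-- Switching to Simple breaks the log backup chain, so only do this once
-- you've confirmed point-in-time recovery is not required.
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;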
I have a SQL Server 2008 Express database which is 7.8 GB in size:
DataFile 1.2 GB
LogFile 6.6 GB
Recovery Model = Full
Auto Shrink = False
On a Live database, what is the best way to reduce the size of this database?
Before you can shrink a database running in the full recovery model, you must back up the transaction log. So the procedure is to run a transaction log backup and then shrink the log file.
If you have never performed a transaction log backup then you will continue to suffer from run-away log files and shrinking it will only be a band-aid solution.
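As a sketch (the database name, the logical log file name, and the backup path are placeholders; the logical file name is listed in sys.database_files):

-- Back up the log so the inactive portion becomes reusable...
BACKUP LOG MyDatabase TO DISK = N'X:\Backups\MyDatabase.trn';

-- ...then shrink the log file to a sensible target size (in MB).
USE MyDatabase;
DBCC SHRINKFILE (MyDatabase_log, 512);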
You can also identify unused tables and remove them (if there are any).
You can create an archive database to store old, unused data.
You can normalize your database further to reduce table sizes.
Hope this information helps you.
We're taking one of our production databases and creating a copy on another server for read-only purposes. The read-only database is on SQL Server 2008. Once the database is on the new server we'd like to optimize it for read-only use.
One problem is that there are large amounts of allocated space for some of the tables that are unused. Another problem I would anticipate would be fragmentation of the indexes. I'm not sure if table fragmentation is an issue.
What are the issues involved and what's the best way to go about this? Are there stored procedures included with SQL Server that will help? I've tried running DBCC SHRINKDATABASE, but that didn't deallocate the unused space.
EDIT: The exact command I used to shrink the database was
DBCC SHRINKDATABASE (dbname, 0)
GO
It ran for a couple of hours. When I checked the table space using sp_spaceused, none of the unused space had been deallocated.
There are a couple of things you can do:
First -- don't worry about absolute allocated DB size unless you're running short on disk.
Second -- Idera has a lot of cool SQL Server tools, one of them defrags the DB.
http://www.idera.com/Content/Show27.aspx
Third -- dropping and re-creating the clustered index essentially defrags the tables, too -- and it re-creates all of the non-clustered indexes (defragging them as well). Note that this will probably EXPAND the allocated size of your database (again, don't worry about it) and take a long time (clustered index rebuilds are expensive).
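Rather than literally dropping and re-creating the clustered index (which would rebuild every nonclustered index twice), a sketch using DROP_EXISTING achieves the same defragmentation in one pass, assuming the clustered index does not back a PRIMARY KEY constraint (names are placeholders; for a constraint-backed index, use ALTER INDEX ... REBUILD instead):

-- Rebuild the clustered index in place; nonclustered indexes are left
-- alone as long as the index definition does not change.
CREATE UNIQUE CLUSTERED INDEX CIX_Orders
    ON dbo.Orders (OrderId)
    WITH (DROP_EXISTING = ON);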
One thing you may wish to consider is to change the recovery model of the database to simple. If you do not intend to perform any write activity to the database then you may as well benefit from automatic truncation of the transaction log, and eliminate the administrative overhead of using the other recovery models. You can always perform ad-hoc backups should you make any significant structural changes i.e. to indexes.
You may also wish to place the tables that are unused in a separate filegroup away from the data files that will be accessed. Perhaps consider placing the unused tables on lower-grade disk storage to benefit from cost savings.
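A sketch of the filegroup setup (names and the path are placeholders; a table is moved onto the new filegroup by rebuilding its clustered index there):

-- Create a filegroup on cheaper storage and add a file to it.
ALTER DATABASE MyDatabase ADD FILEGROUP ARCHIVE;
ALTER DATABASE MyDatabase
    ADD FILE (NAME = MyDatabase_Archive,
              FILENAME = N'E:\SlowDisk\MyDatabase_Archive.ndf')
    TO FILEGROUP ARCHIVE;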
One thing to consider with DBCC SHRINKDATABASE: you cannot shrink a database below its minimum size, i.e. the size specified when it was created or the last size set explicitly by a file-size operation.
Try issuing the statement in the following form.
DBCC SHRINKDATABASE (DBName, TRUNCATEONLY);
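With TRUNCATEONLY, unused space at the end of each file is released to the operating system without moving any pages, so it is fast and does not fragment indexes; the trade-off is that free space in the middle of the files is not reclaimed.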
Cheers, John
I think it will be OK to just recreate it from the backup.
Putting tables and indexes on separate physical disks always helps, too. Indexes will be rebuilt from scratch when you re-create them on another filegroup, and therefore won't be fragmented.
There is a built-in tool for shrinking or truncating a database in SQL Server Management Studio: right-click the database and look under Tasks > Shrink. This can be done before or after you copy the backup.
Certain forms of replication may also do what you want.