To remedy deadlocks (introduced by an indexed view), I attempted to use RCSI in SQL Server. I enabled it with:
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON
and verified that it is set by:
DBCC useroptions
SELECT * FROM sys.databases
My instance has 8 tempdb data files and they are set to auto-grow by 64 MB. After ingesting thousands of records I do not see any growth in tempdb. Based on the documentation, RCSI heavily uses tempdb and increases its size considerably, so I expected to see some increase. Trace flags 1117 and 1118 are also ON, but there is no increase in tempdb size. I have not turned on ALLOW_SNAPSHOT_ISOLATION for the tempdb database.
Thanks
Based on the documentation, RCSI heavily uses tempdb and increases its size considerably.
There's a lot of baseless worry about RCSI. And INSERTs only create row versions if there is a trigger on the table.
From BOL
When the SNAPSHOT isolation level is enabled, each time a row is updated, the SQL Server Database Engine stores a copy of the original row in tempdb, and adds a transaction sequence number to the row.
This means that if you update one row, one row will be versioned in tempdb; if you alter or update an entire table, the entire table will be versioned in tempdb. So it is entirely possible that your specific workload does not require large amounts of data to be versioned in tempdb. I've had to increase the size of tempdb considerably (or turn off RCSI) during large updates to avoid this problem.
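If you want to confirm how much versioning your workload actually generates, you could check tempdb's space-usage DMV; a minimal sketch (standard DMV, the numbers depend entirely on your workload):

-- breakdown of current tempdb usage: version store vs. temp objects vs. worktables
SELECT SUM(version_store_reserved_page_count) * 8 / 1024.0 AS version_store_mb,
       SUM(user_object_reserved_page_count) * 8 / 1024.0 AS user_objects_mb,
       SUM(internal_object_reserved_page_count) * 8 / 1024.0 AS internal_objects_mb
FROM tempdb.sys.dm_db_file_space_usage;

If version_store_mb stays near zero during your inserts, RCSI simply has nothing to version for that workload.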
This question also discusses many things to consider when using TempDB.
How can we decrease the DB files size when removing lots of outdated data?
We have an application with data collected over many years. Now one customer has left the project and their data can be dropped. This customer alone accounts for 75% of the data in the database.
The disk usage is very high, and we are running in a virtualized cloud service where the pricing is VERY high. Changing to another provider is not an option, and buying more disk is not popular since, in practice, 75% of the data is no longer in use.
In my opinion it would be great if we could get rid of this customer data, shrink the files and be safe for years to come for reaching this disk usage level again.
I have seen many threads warning of performance decreases due to index fragmentation.
So, how can we drop this customer's data (stored in the same tables that other customers use, indexed on customer id among other things) without causing any considerable drawbacks?
Are these steps the way to go, or are there better ways?
Delete this customer's data
DBCC SHRINKDATABASE (database,30)
ALTER DATABASE database SET RECOVERY SIMPLE
DBCC SHRINKFILE (database_Log, 50)
ALTER DATABASE database SET RECOVERY FULL
DBCC INDEXDEFRAG
Only step 1 is required. Deleting data will free space within the database which can be reused for new data. You should also execute ALTER INDEX ... REORGANIZE (or ALTER INDEX ... REBUILD) to compact the indexes and distribute free space evenly throughout them.
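For example, a minimal sketch of that reorganize step (the table name dbo.CustomerData is illustrative, not from the question):

-- compact the indexes of a table touched by the large delete
ALTER INDEX ALL ON dbo.CustomerData REORGANIZE WITH (LOB_COMPACTION = ON);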
EDIT:
Since your cloud provider charges for disk space used, you'll need to shrink files to release space back to the OS and avoid incurring charges. Below are some tweaks to the steps you proposed, which can be customized according to your RPO requirements; a T-SQL sketch of the whole sequence follows the list.
ALTER DATABASE ... SET RECOVERY SIMPLE (or BULK_LOGGED)
Delete this customer's data
DBCC SHRINKFILE (database_Data, 30) --repeat for each file to the desired size
DBCC SHRINKFILE (database_Log, 50)
REORGANIZE (or REBUILD) indexes
ALTER DATABASE ... SET RECOVERY FULL
BACKUP DATABASE ...
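A rough sketch of that sequence, assuming illustrative names (MyDatabase, MyDatabase_Data, MyDatabase_Log, dbo.Orders, CustomerId) and batch/target sizes you would tune yourself:

ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;

-- delete the departed customer's rows in batches so each transaction (and the log) stays small;
-- repeat for every table that carries a CustomerId column
DECLARE @DepartedCustomerId int = 42;   -- illustrative value
WHILE 1 = 1
BEGIN
    DELETE TOP (50000) FROM dbo.Orders WHERE CustomerId = @DepartedCustomerId;
    IF @@ROWCOUNT = 0 BREAK;
END

DBCC SHRINKFILE (MyDatabase_Data, 30000);   -- target size in MB, repeat per data file
DBCC SHRINKFILE (MyDatabase_Log, 5000);

ALTER INDEX ALL ON dbo.Orders REORGANIZE;   -- shrinking fragments indexes; fix that afterwards

ALTER DATABASE MyDatabase SET RECOVERY FULL;
BACKUP DATABASE MyDatabase TO DISK = N'X:\Backups\MyDatabase.bak';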
Does the recovery model of a database affect the size of the tempdb.mdf?
We have a database which involves a lot of processing and bulk inserts. We are having problems with the tempdb file growing to extremely large sizes (over 70 GB). The database in question is set to Full Recovery. Will changing it to Simple Recovery (on the database with all the transactions, not tempdb) prevent it from using tempdb on these large inserts and bulk loads?
The recovery model of the database doesn't affect its use of tempdb. The tempdb usage is most likely from the 'processing' part: static cursors, temp tables and table variables, sort operations and other worktable-backed query operators, large variables and parameters.
The bulk insert part (i.e. the part which would be affected by the recovery model) has no tempdb impact.
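To see which sessions are responsible, a sketch against the session space-usage DMV (user objects are temp tables and table variables; internal objects are sorts, hashes, spools and other worktables):

-- top tempdb consumers by session, in KB
SELECT session_id,
       user_objects_alloc_page_count * 8 AS user_objects_kb,
       internal_objects_alloc_page_count * 8 AS internal_objects_kb
FROM sys.dm_db_session_space_usage
ORDER BY internal_objects_alloc_page_count DESC;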
tempdb Size and Placement Recommendations
To achieve optimal tempdb performance, we recommend the following configuration for tempdb in a production environment:
Set the recovery model of tempdb to SIMPLE. This model automatically reclaims log space to keep space requirements small.
http://msdn.microsoft.com/en-us/library/ms175527.aspx
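You can confirm that from the catalog (tempdb should already report SIMPLE):

SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'tempdb';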
How do I increase the size of the transaction log? Is it also possible to temporarily increase the transaction log?
Let's say I have the following scenario. I have a DELETE operation that's too big for the current transaction log. I want to:
Increase the transaction log (can I detect the current size? Can I tell how large the transaction log needs to be for my operation?)
(Perform my operation)
Backup the transaction log
Restore the size of the transaction log.
Short answer:
SQL 2k5/2k8 How to: Increase the Size of a Database (SQL Server Management Studio) (applies to log also), How to: Shrink a Database (SQL Server Management Studio)
SQL 2K How to increase the size of a database (Enterprise Manager), How to shrink a database (Enterprise Manager)
Long answer: you can use ALTER DATABASE ... MODIFY FILE to change the size of database files, including LOG files. You can look up master_files/sysfiles (2k) or <dbname>.sys.database_files (2k5/2k8) to get the logical name of the log. And you can use DBCC SHRINKFILE to shrink a file (if possible).
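As a concrete sketch, with an illustrative database name (MyDatabase) and log file logical name (MyDatabase_log):

USE MyDatabase;

-- find the logical name and current size of the log (size is in 8 KB pages)
SELECT name, size * 8 / 1024 AS size_mb
FROM sys.database_files
WHERE type_desc = 'LOG';

-- grow the log ahead of the big operation
ALTER DATABASE MyDatabase MODIFY FILE (NAME = MyDatabase_log, SIZE = 20GB);

-- afterwards, shrink it back to a target size in MB, if possible
DBCC SHRINKFILE (MyDatabase_log, 1024);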
can I tell how large I need the transaction log to be for my operation?
It depends on a lot of factors (is this new data? is it an update? is it a delete? what recovery model? Do you have compression on SQL 2k8? etc etc) but is usually bigger than you expect. I would estimate 2.5 times the size of the update you are about to perform.
Update:
Missed that you said it is a DELETE. A rough estimate is 1.5 times the size of the data deleted (including all indexes).
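For the "can I detect the current size?" part of the question, there is a quick built-in check:

-- log size and percentage used, for every database on the instance
DBCC SQLPERF (LOGSPACE);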
The transaction log can be configured to expand as needed. You set the option to grow automatically.
However, the transaction log can still get too big, running out of disk space or making SQL Server unusable. When that happens:
Back up the transaction log; SQL Server will then truncate the inactive portion of the log.
Truncation alone does not shrink the physical file, so run DBCC SHRINKFILE on the log afterwards if you need the disk space back.
To configure autogrow:
Right click on the database in management studio.
Select Properties
Update Autogrowth value
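The same setting can be applied in T-SQL; a sketch with illustrative database and file names:

ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, FILEGROWTH = 512MB, MAXSIZE = 50GB);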
The most important part is the last line of your scenario: "Restore the size of the transaction log." You mean shrink the log back to its original size.
This is really dangerous for a lot of reasons, and we've covered them in a couple of stories over at SQLServerPedia:
http://sqlserverpedia.com/wiki/Shrinking_Databases
http://sqlserverpedia.com/blog/sql-server-backup-and-restore/backup-log-with-truncate_only-like-a-bear-trap/
http://sqlserverpedia.com/blog/sql-server-bloggers/i-was-in-the-pool-dealing-with-sql-shrinkage/
We're taking one of our production databases and creating a copy on another server for read-only purposes. The read-only database is on SQL Server 2008. Once the database is on the new server we'd like to optimize it for read-only use.
One problem is that there are large amounts of allocated space for some of the tables that are unused. Another problem I would anticipate would be fragmentation of the indexes. I'm not sure if table fragmentation is an issue.
What are the issues involved and what's the best way to go about this? Are there stored procedures included with SQL Server that will help? I've tried running DBCC SHRINKDATABASE, but that didn't deallocate the unused space.
EDIT: The exact command I used to shrink the database was
DBCC SHRINKDATABASE (dbname, 0)
GO
It ran for a couple hours. When I checked the table space using sp_spaceused, none of the unused space had been deallocated.
There are a couple of things you can do:
First -- don't worry about absolute allocated DB size unless you're running short on disk.
Second -- Idera has a lot of cool SQL Server tools; one of them defrags the DB.
http://www.idera.com/Content/Show27.aspx
Third -- dropping and re-creating the clustered index essentially defrags the tables, too -- and it re-creates all of the non-clustered indexes (defragging them as well). Note that this will probably EXPAND the allocated size of your database (again, don't worry about it) and take a long time (clustered index rebuilds are expensive).
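A sketch of the modern equivalent, using an illustrative table name (ALL rebuilds the non-clustered indexes as well; SORT_IN_TEMPDB keeps the sort work out of the user database):

ALTER INDEX ALL ON dbo.SomeLargeTable REBUILD WITH (SORT_IN_TEMPDB = ON);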
One thing you may wish to consider is to change the recovery model of the database to simple. If you do not intend to perform any write activity to the database then you may as well benefit from automatic truncation of the transaction log, and eliminate the administrative overhead of using the other recovery models. You can always perform ad-hoc backups should you make any significant structural changes i.e. to indexes.
You may also wish to place the tables that are unused in a separate Filegroup away from the data files that will be accessed. Perhaps consider placing the unused tables on lower grade disk storage to benefit from cost savings.
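A minimal sketch of the recovery-model suggestion, assuming the copy is named ReadOnlyCopy (marking the database READ_ONLY is an extra step not mentioned above, but it fits a purely read-only copy):

ALTER DATABASE ReadOnlyCopy SET RECOVERY SIMPLE;
-- once loading and index maintenance are done, mark the copy read-only
ALTER DATABASE ReadOnlyCopy SET READ_ONLY WITH ROLLBACK IMMEDIATE;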
Some things to consider with DBCC SHRINKDATABASE: you cannot shrink beyond the minimum size of your database.
Try issuing the statement in the following form.
DBCC SHRINKDATABASE (DBName, TRUNCATEONLY);
Cheers, John
I think it will be OK to just recreate it from the backup.
Putting tables and indexes on separate physical disks is always of help too. Indexes will be rebuilt from scratch when you recreate them on another filegroup, and therefore won't be fragmented.
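A sketch of recreating an index on a separate filegroup (the index, table, column, and filegroup names are illustrative):

-- recreate an existing index on another filegroup; DROP_EXISTING rebuilds it from scratch
CREATE NONCLUSTERED INDEX IX_SomeTable_SomeColumn
    ON dbo.SomeTable (SomeColumn)
    WITH (DROP_EXISTING = ON)
    ON FG_Indexes;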
There is a tool for shrinking or truncating a database in MSSQL Server. I think you select the properties of the database and you'll find it. This can be done before or after you copy the backup.
Certain forms of replication may do what you wish also.
The Snapshot Isolation feature helps us to solve the problem where readers lock out writers on high volume sites. It does so by versioning rows using tempdb in SqlServer.
My question is: to correctly implement this Snapshot Isolation feature, is it just a matter of executing the following on my SQL Server?
ALTER DATABASE MyDatabase
SET ALLOW_SNAPSHOT_ISOLATION ON
ALTER DATABASE MyDatabase
SET READ_COMMITTED_SNAPSHOT ON
Do I still also have to write code that includes TransactionScope, like
using (new TransactionScope(TransactionScopeOption.Required,
            new TransactionOptions { IsolationLevel = IsolationLevel.Snapshot }))
Finally, Brent pointed out his concern in this post, under the section The Hidden Costs of Concurrency, where he mentioned that as you version rows in tempdb, tempdb may run out of space and may have performance issues since it has to look up versioned rows. So my question is: I know this site uses Snapshot Isolation; does anyone else use this feature on large sites, and what is your opinion on the performance?
Thx,
Ray.
It is "just a matter of executing the following". As stated in https://msdn.microsoft.com/en-us/library/tcbchxcb(v=vs.110).aspx, "If the READ_COMMITTED_SNAPSHOT option is set to OFF, you must explicitly set the Snapshot isolation level for each session in order to access versioned rows." Your second ALTER DATABASE command sets READ_COMMITTED_SNAPSHOT ON, so your code does not need to specify that isolation level via TransactionScope.
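For completeness, this is what the explicit per-session form looks like when only ALLOW_SNAPSHOT_ISOLATION is on (the table name is illustrative):

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    SELECT TOP (10) * FROM dbo.SomeTable;   -- reads the versioned snapshot, no shared locks
COMMIT TRANSACTION;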
There are two sides to a performance coin whenever one seeks an opinion on whether performance is "sufficient" versus "insufficient": either "supply" is underwhelming or "demand" is overwhelming. For this post, "supply" could refer to the performance and space used by tempdb, while "demand" could concern the rate at which writes to tempdb occur. On the supply side, a variety of HW (from a single-spindle 5400 RPM disk to arrays of SSDs) can be used. On the demand side, this isn't a SQL Server concern (although failing to properly normalize a database design can be a factor) as much as it's a client code concern.
My SQL Servers see clients concurrently demanding approximately 50 writes/minute and 2000 batches/minute, where the writes are usually on the OLTP/short side. I have 1 TB of databases and a 30 GB tempdb, per SQL Server. All databases are in general normalized to 3rd normal form. All databases are running on SSDs. I have no concerns about the tempdb disk's IO throughput capacity being exceeded. As a result, I have had no concerns about enabling snapshot isolation on my systems. But I have seen other systems where enabling snapshot isolation was attempted, but quickly abandoned.
Your system's experience can vary from any other respondent's system, by orders of magnitude. You should seek to profile/reliably replay your system's writes, along with replaying other uses of tempdb (including sorts), in order to come up with your own conclusions for your own system (for a variety of HW with sufficient space for your system's resulting tempdb size). Load testing should not be an afterthought :). You should also benchmark your tempdb disk's IO throughput capacity - see https://technet.microsoft.com/library/Cc966412, and be prepared to spend more money if its IO throughput capacity ends up being insufficient.
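One way to spot-check the tempdb files specifically is the virtual file stats DMV (a sketch; the linked article covers more thorough benchmarking approaches):

-- cumulative IO and stall figures per tempdb file since the last restart
SELECT mf.physical_name,
       vfs.num_of_reads, vfs.io_stall_read_ms,
       vfs.num_of_writes, vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id;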