SQL Server 2005 TempDB Size - sql-server

We are using SQL Server 2005. Recently, SQL Server 2005 crashed in our production environment due to a large tempdb size.
1) What could be the reason for the large tempdb size?
2) Is there any way to see what data is in tempdb?

2) Is there any way to see what data is in tempdb?
No, because the data is not kept around. Tempdb gets very special treatment, such as being dropped and re-created on every server restart.
1) What could be the reason for the large tempdb size?
Inefficient SQL, maintenance jobs, or just the data at hand. Obviously an 800GB or 6000GB database may require more tempdb space than a 4GB online CRM attempt. You don't really specify ANY size in absolute terms. What IS large? I have tempdb databases hardcoded at 64GB on my smaller servers.
Typical SQL that goes into tempdb includes:
Sorts that cannot be resolved as part of the query (you need to store the keys SOMEWHERE)
DISTINCT. Needs all returned data in tempdb to find the duplicates.
Certain operations, possibly during joins (hash joins, for example, can spill to tempdb).
Explicit tempdb usage (temporary tables). I just mention them because I often keep some hundred megabytes' worth of data in them during loads and scrubbing.
In general you can find those queries because they show huge IO stats in the query log, or are simply slow.
That said, maintenance plans also go in there, but with reason. In the end, your "large" may well be my "tiny, not even worth mentioning". It really depends on what you do. Use the query trace tool to find out what takes long.
Physically, tempdb gets very special treatment - SQL Server does NOT write to the file if it does not have to (i.e. it keeps things in memory). Writes to disc are a sign of memory overflowing. This is different from normal database write behavior. Tempdb, IF it overflows, is best put onto a decently fast SSD... which won't normally be SO expensive, because it will still be relatively small.
Use the query here to find other queries hitting tempdb - basically you are fishing in dirty water here; you need to try things out until you find the culprit.
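If you want a concrete place to start fishing, here is a minimal sketch using the tempdb allocation DMVs that ship with SQL Server 2005 (sys.dm_db_session_space_usage and sys.dm_exec_sessions); run it while tempdb is busy to see which sessions allocate the most pages:
-- Page counts are in 8 KB pages. Internal objects cover sorts, hashes and spools;
-- user objects cover temp tables and table variables.
SELECT es.session_id, es.login_name, es.host_name, es.program_name,
       su.user_objects_alloc_page_count     AS user_pages,
       su.internal_objects_alloc_page_count AS internal_pages
FROM sys.dm_db_session_space_usage AS su
JOIN sys.dm_exec_sessions AS es ON es.session_id = su.session_id
ORDER BY su.internal_objects_alloc_page_count + su.user_objects_alloc_page_count DESC;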

The usual way a SQL Server database grows - any database, not just tempdb - is by having its data and log files set to autogrow (especially the log files). SQL Server is perfectly happy to grow the log and data files until they consume all the disk space available to them.
Best practice, IMHO, is to allow limited autogrowth on the data files (put an upper bound on how big they can grow) and fix the size of the log files. You might need to do some analysis to figure out how big the log files need to be. For tempdb especially, the recovery model should be set to simple, too.
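A rough sketch of what that looks like for tempdb (tempdev and templog are SQL Server's default logical file names; the sizes and growth increments are placeholders you would replace with the results of your own analysis):
-- Cap the data file's autogrowth; FILEGROWTH = 0 disables autogrowth for the log, fixing its size.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 8GB, MAXSIZE = 32GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 2GB, FILEGROWTH = 0);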

OK, tempdb is kind of a special database. Any temporary objects you use in procedures etc. are created here. So if your application uses a lot of temp tables in queries, they will all reside here, but they should clean themselves up after the connection (spid) is reset.
The other thing that can grow a tempdb is database maintenance tasks, however they will take a larger toll on the database log files.
Tempdb is also cleared every time you restart the SQL Service; it basically drops the database and re-creates it. I agree with #Nic about leaving tempdb as it is - don't muck around with it; any issue with space in tempdb usually indicates another, larger problem somewhere else. More space will mask the problem, but only for so long. How much free space does the drive that tempdb sits on have?
Another thing: if you haven't already, try to put tempdb on its own drive, and if possible go one step further and put the data and log files on their own separate drives.
So, if you don't restart your SQL Server service, your drive will run out of space pretty soon.

use tempdb
-- size is reported in 8 KB pages, so size * 8 gives the file size in KB
select (size*8) as FileSizeKB from sys.database_files

Related

How can I link a SQL image column to an external database?

I need a bit of help with the following.
Note: in the following scenario, I do not have access to the application's source code, therefore I can only make changes at the database level.
Our database uses dbo.[BLOB] to store all kinds of files and documents. The table uses an IMAGE (yeah, obsolete) data type. Since this particular table is growing quite fast, I was thinking of implementing some archiving feature.
My idea is to move all files older than X months to a second database, and then somehow link from the dbo.[BLOB] table to the external/archiving database.
Is this even possible? The goal is to reduce the database size, in order to improve backup and query performance.
Any ideas and hints much appreciated.
Thanks.
Fabian
There are 2 features to help you with backup speed and database size in this case:
FILESTREAM will allow you to store BLOBs as files on the file system instead of in the database file. It complicates the backup scenario - you have to back up both the database and the files - but you get a smaller database file along with faster access times to documents. It is much faster to read a file from the filesystem than from a blob column. Additionally, FILESTREAM allows for files bigger than 2GB.
Partitioning will split the table into smaller chunks at the physical level. This way you do not need to touch application code to change where particular rows are stored physically; you can decide which data needs to be accessed fast and put it on an SSD drive, and which can land on slower archive storage. You can also back up the current partition more frequently and the archive less frequently.
Prior to SQL Server 2016 SP1 this feature was available in the Enterprise edition only; from SQL Server 2016 SP1 it is available in all editions.
In your case most likely you should go with filestream first.
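A hedged sketch of what the FILESTREAM route could look like (MyDb, ArchiveFS, the file path, and the new columns are all hypothetical; an IMAGE column cannot be converted in place, so you would add a VARBINARY(MAX) FILESTREAM column plus the ROWGUIDCOL that FILESTREAM requires, then copy the data across):
-- Enable FILESTREAM at the instance level (it must also be enabled in Configuration Manager).
EXEC sp_configure 'filestream access level', 2;
RECONFIGURE;
-- Add a FILESTREAM filegroup and container to the database.
ALTER DATABASE MyDb ADD FILEGROUP ArchiveFS CONTAINS FILESTREAM;
ALTER DATABASE MyDb ADD FILE (NAME = ArchiveFSData, FILENAME = 'D:\FileStreamData') TO FILEGROUP ArchiveFS;
-- Add the columns FILESTREAM needs; the existing IMAGE data would then be migrated into Content.
ALTER TABLE dbo.[BLOB] ADD RowGuid UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID();
ALTER TABLE dbo.[BLOB] ADD Content VARBINARY(MAX) FILESTREAM NULL;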
Without modifying the application you can do, basically, nothing. You may try to see whether changing the column type will be tolerated by the application (very unlikely; 99.99% it will break the app) and then use FILESTREAM, but even if you succeed it does not give much benefit (the backup size will be the same, for example).
A second thing you can try is to replace the table with a view, using INSTEAD OF triggers for updates. It is still very likely to break the application (let's say 99.98%). The goal would be to have a distributed partitioned view (or cross-database partitioned view) which presents to the application a unified view of the 'cold' and 'hot' data. It is complex and error-prone, but it will reduce the size of the backups (as long as data is moved from hot to cold and the cold data is immutable, requiring few backups).
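A minimal sketch of the view idea (all names and columns are hypothetical: the original table is renamed, hot rows stay in it, cold rows move to an archive database, and the view takes the old name so the application keeps querying dbo.[BLOB] unchanged):
-- dbo.BLOB_Hot lives in the main database, dbo.BLOB_Cold in the archive database.
CREATE VIEW dbo.[BLOB]
AS
SELECT BlobId, FileName, CreatedOn, Document FROM dbo.BLOB_Hot
UNION ALL
SELECT BlobId, FileName, CreatedOn, Document FROM ArchiveDb.dbo.BLOB_Cold;
-- Writes through the view would need INSTEAD OF INSERT/UPDATE/DELETE triggers.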
The goal is to reduce the database size, in order to improve backup and query performance.
To reduce the backup size, as I explained above, you can do basically nothing. As for performance, you need to investigate it and address it appropriately, based on your findings. Saying that the database is slow 'because of BLOBs' is hand-waving.

Handling unused disk space in sql server db

I have one database whose size is growing very fast. Currently its size is around 60GB; however, after executing the sp_spaceused stored procedure I could verify that more than 40GB is unused (unused space is distinct from reserved space, which I understand is for table growth). The actual data size is around 10-12GB, with a few GB in reserved space.
Now, to reclaim that unused space I tried the shrink operation, but it turned out not to help. After searching further I also found advice not to shrink the DB, as that generates data fragmentation, resulting in delays during disk operations. Now I am really not sure what other operation I should try to reclaim the space and compact the DB. I understand that, due to the size, queries might be taking longer than expected, and reclaiming this space could help with performance (not sure).
While investigating I also came across the Generate Scripts feature. It helps export the schema and the data, but I am not sure whether it also helps create a script (including every user, permission and everything else) so that the script can create an as-is replica (deep copy/clone) of the DB - creating the schema and then populating it with data on another database/server?
Any pointer would be helpful.
If your database is 60GB, it means it had grown to 60GB. Even if the data is only 20GB, you probably have operations that grow the data from time to time (e.g. nightly index maintenance jobs). The recommendation is to leave the database at 60GB. Do not attempt to reclaim the space; you will only cause harm, and whatever caused the database to grow to 60GB to start with will likely occur again and trigger growth.
In fact, you should do the opposite. Try to identify why it grew to 60GB and extrapolate what will happen when your data reaches 30GB. Will the database grow to 90GB? If yes, then you should grow it now to 90GB. The last thing you want is for growth to occur randomly, and possibly to run out of disk space at a critical moment. In fact, you should check right now whether your server has Instant File Initialization enabled.
Now of course, the question is: what would cause 3x data size growth, and how do you identify it? I don't know of any easy method. I would recommend starting by looking at your SQL Agent jobs. Check your maintenance scripts. Look into the application itself: does it have a pattern of growing and then deleting data? Look at past backups (you do have them, right?) and compare.
BTW, I assume due diligence and that you checked that it is the data file that has grown to 60GB. If it is the LOG file that has grown then it is easy: it means you enabled the full recovery model and forgot to back up the log.
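A quick way to check which file actually grew (run it in the database in question; size is reported in 8 KB pages):
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;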

Should static database data be in its own Filegroup?

I'm creating a new DB and have a bunch of static data that won't change. If it does, it will be a manual process AND it will happen very rarely.
This data is a mix of varchars and Geographies.
I'm guessing it could be around 100K or so in total, over 4 or so tables.
Questions
Should I put these on a READ ONLY filegroup?
Can I create the tables in the designer and define the filegroup during creation? Or is it only possible via a script?
Once the data is in the table (on a read only filegroup), can I change it later? Is it really hard to do that?
thanks.
It is worth it for VLDB (very large databases) for assorted reasons.
For 100,000 rows or 100 KB, I wouldn't bother.
This SQL Server support engineering team article discusses one of the associated "urban legends".
There is another one (can't find it) where you need 300 GB - 1B of data before you should consider multiple files/filegroups.
But, to answer specifically
Personal choice (there is no hard and fast rule)
Yes (edit:) In SSMS 2005 design mode, go to Indexes/Keys, "data space specification". The data lives where the clustered index is. Without a clustered index, you can only do it via CREATE TABLE (..) ON filegroup
Yes, but you'll have to ALTER DATABASE myDB MODIFY FILEGROUP foo READ_WRITE with the database in single-user exclusive mode (a sketch follows below)
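A hedged sketch of points 2 and 3, with hypothetical names (MyDb, StaticFG, dbo.Country):
-- Add a filegroup and a file for the static data.
ALTER DATABASE MyDb ADD FILEGROUP StaticFG;
ALTER DATABASE MyDb ADD FILE (NAME = StaticData, FILENAME = 'D:\Data\MyDb_Static.ndf', SIZE = 10MB) TO FILEGROUP StaticFG;
-- Put the table (i.e. its clustered index) on that filegroup.
CREATE TABLE dbo.Country (
    CountryId INT NOT NULL PRIMARY KEY,
    Name NVARCHAR(100) NOT NULL
) ON StaticFG;
-- After loading the data, flip the filegroup to read-only (needs exclusive access),
-- and back to read-write whenever the rare manual update is required.
ALTER DATABASE MyDb MODIFY FILEGROUP StaticFG READ_ONLY;
-- ALTER DATABASE MyDb MODIFY FILEGROUP StaticFG READ_WRITE;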
It is unlikely to hurt to put the data into a read-only space, but I am unsure you will gain significantly. A read-only filegroup (or tablespace in Oracle) can give you two advantages: less to back up each time a full backup is taken, and a higher level of security over the data (e.g. it cannot be changed by a bug, by someone accessing the DB via another tool, etc.). The backup advantage matters most with larger DBs where backup windows are tight, so putting a small amount of effort into excluding filegroups is valuable. The security one depends on the nature of the site, data, etc. (If you do exclude the read-only space from regular backups, make sure you get a copy on any retained backup tapes. I tend to back up read-only spaces once a month.)
I am not familiar with designer.
Changing to and from read only is not onerous.
I think anything you read here is likely to be speculation, unless you have any evidence that it's been actually tried and recommended - to me it looks like a novel but unlikely idea. Do you have some reason to suspect that conventional practices will be unsatisfactory? It should be fairly easy to just try it and find out. Post your results if you get a chance.

Best Way To Prepare A Read-Only Database

We're taking one of our production databases and creating a copy on another server for read-only purposes. The read-only database is on SQL Server 2008. Once the database is on the new server we'd like to optimize it for read-only use.
One problem is that there are large amounts of allocated space for some of the tables that are unused. Another problem I would anticipate would be fragmentation of the indexes. I'm not sure if table fragmentation is an issue.
What are the issues involved and what's the best way to go about this? Are there stored procedures included with SQL Server that will help? I've tried running DBCC SHRINKDATABASE, but that didn't deallocate the unused space.
EDIT: The exact command I used to shrink the database was
DBCC SHRINKDATABASE (dbname, 0)
GO
It ran for a couple hours. When I checked the table space using sp_spaceused, none of the unused space had been deallocated.
There are a couple of things you can do:
First -- don't worry about absolute allocated DB size unless you're running short on disk.
Second -- Idera has a lot of cool SQL Server tools, one of them defrags the DB.
http://www.idera.com/Content/Show27.aspx
Third -- dropping and re-creating the clustered index essentially defrags the tables, too -- and it re-creates all of the non-clustered indexes (defragging them as well). Note that this will probably EXPAND the allocated size of your database (again, don't worry about it) and take a long time (clustered index rebuilds are expensive).
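For example, on SQL Server 2008 rebuilding every index on a table (table name hypothetical here) defragments the clustered index (i.e. the table itself) and all of the non-clustered indexes in one statement:
-- FILLFACTOR = 100 packs the pages completely, which is fine for a read-only copy.
ALTER INDEX ALL ON dbo.BigTable REBUILD WITH (FILLFACTOR = 100);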
One thing you may wish to consider is to change the recovery model of the database to simple. If you do not intend to perform any write activity to the database then you may as well benefit from automatic truncation of the transaction log, and eliminate the administrative overhead of using the other recovery models. You can always perform ad-hoc backups should you make any significant structural changes i.e. to indexes.
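Switching the recovery model is a one-liner (database name hypothetical), and you can mark the whole database read-only at the same time if that fits your use case:
ALTER DATABASE ReportingCopy SET RECOVERY SIMPLE;
ALTER DATABASE ReportingCopy SET READ_ONLY;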
You may also wish to place the tables that are unused in a separate Filegroup away from the data files that will be accessed. Perhaps consider placing the unused tables on lower grade disk storage to benefit from cost savings.
Some things to consider with DBCC SHRINKDATABASE: you cannot shrink beyond the minimum size of your database.
Try issuing the statement in the following form.
DBCC SHRINKDATABASE (DBName, TRUNCATEONLY);
Cheers, John
I think it will be OK to just recreate it from the backup.
Putting tables and indexes on separate physical disks is always of help too. Indexes will be rebuilt from scratch when you recreate them on another filegroup, and therefore won't be fragmented.
There is a tool for shrinking or truncating a database in MSSQL Server. I think you select the properties of the database and you'll find it. This can be done before or after you copy the backup.
Certain forms of replication may do what you wish also.

SQLServer tempDB growing infinitely

We have several "production environments" (three servers each, with the same version of our system; each one has a SQL Server database as the production database).
In one of these environments the tempdb transaction log has started to grow fast and without limit, and we can't find out why. Same version of the OS, SQL Server, and application. No changes in the environment.
Does anyone know how to figure out what's happening or how to fix this?
You might be in the Full recovery model - if you are doing regular backups you can change this to Simple, and it will reduce the size of the log after the backup.
Here is some more info.
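To check which recovery model each database is actually using, and to switch one to simple (database name hypothetical):
SELECT name, recovery_model_desc FROM sys.databases;
ALTER DATABASE MyAppDb SET RECOVERY SIMPLE;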
Have you tried running Profiler? This will allow you to view all of the running queries on the server. This may give you some insight into what is creating items in tempdb.
Your best bet is to fire up SQL Server Profiler and see what's going on. Look for high values in the "Writes" column or Spool operators, these are both likely to cause high temp usage.
If it is only the transaction log growing then try this; open transactions prevent the log from being shrunk as it goes. This should be run in tempdb:
DBCC OPENTRAN
OK, I think this question is the same as mine.
The tempdb grows fast. The common reason is that programmers create procedures that use temporary tables.
When we create these tables, or run other operations like triggers or DBCC commands, they all use tempdb.
When temporary tables are created, SQL Server allocates space for them through allocation pages such as GAM, SGAM and IAM, and because SQL Server must guarantee physical consistency, only one request can touch an allocation page at a time while the others wait. That is what causes tempdb to grow fast.
I found the solution from MS, roughly as follows; I hope it can help you:
1. Create as many data files for tempdb as you have CPUs, e.g. if your host has 16 CPUs, create 16 data files for tempdb, and every file must have the same size.
2. Monitor these files to make sure they do not fill up.
3. If the files are not big enough they will autogrow; keep all of them at the same size.
My English is not good. If you can't solve it, run the procedure sp_helpfile, check the output, and paste the result here (a sketch of both follows below).
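A rough sketch of the multi-file recommendation above (logical names, paths and sizes are placeholders; repeat the ADD FILE until the number of equally sized data files matches the number of CPUs):
USE tempdb;
EXEC sp_helpfile;   -- shows the current tempdb files
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 4GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 4GB, FILEGROWTH = 512MB);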
When I was in Singapore, I ran into this situation.
Good luck.
