As we create and drop temporary tables and insert data into them, tempdb and its log keep growing without limit. It reaches hundreds of GB and fills the hard disk.
This can exhaust the disk space on the database server and the application may crash.
We have to restart the SQL Express service, which I think is a bad idea.
Stopping the service causes the site/application to go down.
So what is the alternative to this problem?
You can always try shrinking the database files:
USE [tempdb]
GO
DBCC SHRINKFILE (N'templog' , 0)
GO
DBCC SHRINKFILE (N'tempdev' , 0)
GO
This will release all unused space from tempdb. But MS SQL should reuse that space anyway, so if your files are getting that big, you need to look at your logic, find the places where you create really big temporary tables, and try to reduce their size and/or their lifetime.
Also, don't neglect to drop temporary tables once they are no longer needed.
And you can try to reduce session lifetime; that guarantees old unused tables get dropped.
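For illustration only, a minimal sketch of that idea with made-up table names - create the temp table, use it, and drop it explicitly instead of leaving it around for the life of the session:
-- Hypothetical load step (names are invented): create, use, then drop straight away
CREATE TABLE #Staging (Id INT PRIMARY KEY, Payload NVARCHAR(400))

INSERT INTO #Staging (Id, Payload)
SELECT Id, Payload
FROM dbo.SourceTable            -- made-up source table

-- ... work with #Staging here ...

DROP TABLE #Staging             -- releases the tempdb space now, not at session end
GO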
We are using SQL Server 2005. Recently SQL Server crashed in our production environment due to a large tempdb.
1) What could be the reason for the large tempdb size?
2) Is there any way to see what data is in tempdb?
2) Is there any way to see what data is in tempdb?
No, because it is not kept there. Tempdb gets very special treatment, like being dropped and recreated on every server restart.
1) What could be the reason for the large tempdb size?
Inefficient SQL, maintenance jobs, or just the data at hand. Obviously an 800 GB or 6000 GB database may require more tempdb space than a 4 GB online CRM attempt. You don't really specify ANY size in absolute terms. What IS large? I have tempdb databases hardcoded at 64 GB on my smaller servers.
Typical SQL that ends up in tempdb includes:
Sorts that cannot be resolved as part of the query itself (you need to store the keys SOMEWHERE)
DISTINCT, which needs all the returned data in tempdb to find duplicates
Certain operations, possibly during joins
Explicit tempdb usage (temporary tables). I just mention them because I often keep some hundred megabytes worth of data in them during loads and scrubbing.
In general you can find those queries by looking for huge IO stats in the query log, or simply because they are slow.
That said, maintenance plans also go in there, but with reason. At the end of the day, your "large" is possibly my "not even worth mentioning, tiny". It really depends on what you do. Use the query trace tool to find out what takes long.
Physically, tempdb gets very special treatment - SQL Server does NOT write to the file if it does not have to (i.e. it keeps things in memory). Writes to disk are a sign of memory overflowing. This is different from normal database write behavior. Tempdb, IF it overflows, is best put onto a decently fast SSD... which won't normally be SO expensive because it will still be relatively small.
Use the query here to find other queries hitting tempdb - basically you are fishing in dirty water here and need to try things until you find the culprit.
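On SQL Server 2005 and later, one way to start that fishing expedition is the tempdb space-usage DMVs. A rough sketch (the counts are in 8 KB pages, so multiplying by 8 gives KB):
-- Sketch: which sessions have allocated the most tempdb space (SQL Server 2005+)
USE tempdb
GO
SELECT session_id,
       user_objects_alloc_page_count * 8     AS user_objects_kb,     -- temp tables, table variables
       internal_objects_alloc_page_count * 8 AS internal_objects_kb  -- sorts, hashes, spools
FROM sys.dm_db_session_space_usage
ORDER BY internal_objects_alloc_page_count + user_objects_alloc_page_count DESC
GO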
The usual way to grow a SQL Server database - any database, not just tempdb - is to have its data and log files set to autogrow (especially the log files). SQL Server is perfectly happy to grow the log and data files until they consume all the disk space available to them.
Best practice, IMHO, is to allow limited autogrowth on the data files (put an upper bound on how big they can grow) and fix the size of the log files. You might need to do some analysis to figure out how big the log files need to be. For tempdb especially, the recovery model should be set to simple, too.
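A hedged sketch of what that looks like in T-SQL; the database name, logical file names, and sizes below are all placeholders you would replace after doing that analysis:
-- Illustration only: cap data file growth, and give the log a fixed size with autogrowth disabled
ALTER DATABASE MyAppDb MODIFY FILE (NAME = MyAppDb_Data, FILEGROWTH = 256MB)
GO
ALTER DATABASE MyAppDb MODIFY FILE (NAME = MyAppDb_Data, MAXSIZE = 50GB)
GO
ALTER DATABASE MyAppDb MODIFY FILE (NAME = MyAppDb_Log, SIZE = 4GB)
GO
ALTER DATABASE MyAppDb MODIFY FILE (NAME = MyAppDb_Log, FILEGROWTH = 0)
GO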
OK, tempdb is a somewhat special database. Any temporary objects you use in procedures etc. are created here. So if your application uses a lot of temp tables in queries, they will all reside here, but they should clean themselves up after the connection (spid) is reset.
The other thing that can grow tempdb is database maintenance tasks; however, those take a larger toll on the database log files.
Tempdb is also cleared every time you restart the SQL service: it basically drops the database and re-creates it. I agree with #Nic about leaving tempdb as it is - don't muck around with it; any issue with space in tempdb usually indicates another, larger problem somewhere else. More space will mask the problem, but only for so long. How much free space does the drive that holds tempdb have?
One other thing: if you haven't already, try to put tempdb on its own drive, and if possible go one step further and keep the data and log files on their own separate drives.
So, if you don't restart your SQL Server/service, your drive will run out of space pretty soon.
USE tempdb
GO
-- size is stored in 8 KB pages, so size * 8 gives the file size in KB
SELECT name, (size * 8) AS FileSizeKB FROM sys.database_files
Consider a SQL script designed to copy rows from one table to another in a SQL 2000 database. The transfer involves 750,000 rows in a simple:
INSERT INTO TableB([ColA],[ColB]....[ColG])
SELECT [ColA],[ColB]....[ColG]
FROM TableA
This is a long running query, perhaps in part because ColB is of type ntext.
There are a handful of CONVERT() operations in the SELECT statement.
The difficulty is that after ~15 mins of operation, this exception is raised by SQL Server.
Could not allocate space for object '[TABLE]'.'[PRIMARY_KEY]' in database '[DB]' because the 'PRIMARY' filegroup is full.
Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.
More Info
Autogrowth is already on.
There is more than enough free space on disk (~20 GB)
The single .mdf is ~6 GB
No triggers on the source or target tables
Question
What options need to be set, either via Management Studio, or via T-SQL to allow the database to grow as required? What other remedies would you suggest?
Resolution
The DB could not grow as needed because I was hosting this database on an instance of SQL Server 2008 Express, which caps the size of each database. Upgrading to a non-neutered edition of SQL Server will solve this problem.
Best advice: Pre-size your database larger, instead of forcing it to grow on-demand (which can be a slow operation).
One reason this error can occur is if your autogrow increment is set too large. Besides the obvious (trying to grow by 25GB with only 20GB on disk), a large growth increment can take a very long time to allocate, which can cause your query to time out.
EDIT: Based on your new screenshot, it doesn't look like the growth increment is the problem. But my original advice still stands. Try to manually grow the database yourself, and see if it lets you:
ALTER DATABASE foobar
MODIFY FILE (name = foobar_data, size = 5000)   -- SIZE here is in MB when no unit is given
If you can share a screenshot or details of your PRIMARY filegroup makeup and autogrow settings (i.e. all files included in PRIMARY and the autogrow settings for each), that would be helpful as well. My first thought, without seeing anything more, is that you may have a max file size specified for one or more of the files that make up your PRIMARY filegroup, but that's just a hunch without actually seeing the information.
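If a screenshot is awkward to get, the same information can be pulled with a query along these lines (SQL Server 2005 or later, run in the database in question; just a sketch, not the only way):
-- Sketch: current size, max size, and growth setting for every file in the database
SELECT name,
       type_desc,
       size * 8 / 1024 AS size_mb,
       CASE max_size WHEN -1 THEN NULL ELSE max_size * 8 / 1024 END AS max_size_mb,  -- NULL = unlimited
       growth,              -- pages if is_percent_growth = 0, otherwise a percentage
       is_percent_growth
FROM sys.database_files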
Are there any triggers on the table in question? I've seen similar results before when there were triggers and it was actually the expansion of the log file (ldf), which hit its limit just logging all the queries being run by the triggers, rather than the mdf itself. If there are any triggers, I'd consider disabling them while you do this update and seeing if that helps (I assume this is a one-off data migration rather than a recurring event?)
I'm creating a new DB and have a bunch of static data that won't change. If it does, it will be a manual process AND it will happen very rarely.
This data is a mix of varchars and Geographies.
I'm guessing it could be around 100K or so in total, over 4 or so tables.
Questions
Should I put these on a READ ONLY filegroup?
Can I create the tables in the designer and define the filegroup during creation? Or is it only possible via a script?
Once the data is in the table (on a read only filegroup), can I change it later? Is it really hard to do that?
thanks.
It is worth it for VLDB (very large databases) for assorted reasons.
For 100,000 rows or 100 KB, I wouldn't bother.
This SQL Server support engineering team article discusses one of the associated "urban legends".
There is another one (can't find it) where you need 300 GB - 1B of data before you should consider multiple files/filegroups.
But, to answer specifically
Personal choice (there is no hard and fast rule)
Yes. (edit:) In SSMS 2005, in design mode, go to Indexes/Keys, "data space specification". The data lives where the clustered index is. Without a clustered index, you can only do it via CREATE TABLE (..) ON filegroup (see the sketch below).
Yes, but you'll have to ALTER DATABASE myDB MODIFY FILEGROUP foo READ_WRITE with the database in single-user exclusive mode.
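To make the last two points concrete, here is a hedged sketch; the database, filegroup, file path and table are all invented names:
-- Sketch only (all names invented): static data on its own filegroup, toggled read-only
ALTER DATABASE MyDb ADD FILEGROUP StaticData
GO
ALTER DATABASE MyDb ADD FILE
    (NAME = MyDb_Static, FILENAME = 'D:\Data\MyDb_Static.ndf', SIZE = 100MB)
TO FILEGROUP StaticData
GO
CREATE TABLE dbo.CountryShape
(
    CountryCode CHAR(2)   NOT NULL PRIMARY KEY,   -- clustered index, so the data lives here
    Shape       GEOGRAPHY NOT NULL
) ON StaticData
GO
ALTER DATABASE MyDb MODIFY FILEGROUP StaticData READ_ONLY    -- lock it down once loaded
GO
ALTER DATABASE MyDb MODIFY FILEGROUP StaticData READ_WRITE   -- flip back for the rare manual update
GO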
It is unlikely to hurt to put the data into a read-only space, but I am unsure you will gain significantly. A read-only filegroup (or tablespace in Oracle) can give you 2 advantages: less to back up each time a full backup is taken, and a higher level of security over the data (e.g. it cannot be changed by a bug, by accessing the DB via another tool, etc). The backup advantage is most true with larger DBs where backup windows are tight, so putting a small amount of effort into excluding filegroups is valuable. The security one depends on the nature of the site, data, etc. (If you do exclude the read-only space from regular backups, make sure you get a copy on any retained backup tapes. I tend to back up read-only spaces once a month.)
I am not familiar with designer.
Changing to and from read only is not onerous.
I think anything you read here is likely to be speculation, unless you have any evidence that it's been actually tried and recommended - to me it looks like a novel but unlikely idea. Do you have some reason to suspect that conventional practices will be unsatisfactory? It should be fairly easy to just try it and find out. Post your results if you get a chance.
I have a database nearly 1.9 GB in size, and MSDE2000 does not allow DBs that exceed 2.0 GB.
I need to shrink this DB (and many others like this at various client locations).
I have found and deleted many hundreds of thousands of records which are considered unneeded:
these records account for a large percentage of some of the main (largest) tables in the Database. Therefore it's reasonable to assume much space should now be retrievable.
So now I need to shrink the DB to account for the missing records.
I execute DBCC ShrinkDatabase('MyDB')...... No effect.
I have tried the various shrink facilities provided in MSSMS.... Still no effect.
I have backed up the database and restored it... Still no effect.
Still 1.9Gb
Why?
Whatever procedure I eventually find needs to be replayable on a client machine with access to nothing other than OSql or similar.
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE
GO
DBCC SHRINKFILE (MyDatabase_Log, 5)   -- shrink the log file down to 5 MB
GO
ALTER DATABASE MyDatabase SET RECOVERY FULL
GO
This may seem bizarre, but it's worked for me and I have written a C# program to automate this.
Step 1: Truncate the transaction log (Back up only the transaction log, turning on the option to remove inactive transactions)
Step 2: Run a database shrink, moving all the pages to the start of the files
Step 3: Truncate the transaction log again, as step 2 adds log entries
Step 4: Run a database shrink again.
My stripped down code, which uses the SQL DMO library, is as follows:
SQLDatabase.TransactionLog.Truncate();
SQLDatabase.Shrink(5, SQLDMO.SQLDMO_SHRINK_TYPE.SQLDMOShrink_NoTruncate);
SQLDatabase.TransactionLog.Truncate();
SQLDatabase.Shrink(5, SQLDMO.SQLDMO_SHRINK_TYPE.SQLDMOShrink_Default);
This is an old question, but I just happened upon it.
The really short and correct answer is already given and has the most votes. That is how you shrink a transaction log, and that was probably the OP's problem. And when the transaction log has grown out of control, it often needs to be shrunk back, but care should be taken to prevent the log from growing out of control again. This question on dba.se explains that. Basically - don't let it get that large in the first place, through a proper recovery model, transaction log maintenance, transaction management, etc.
But the bigger question in my mind when reading this question about shrinking the data file (or even the log file) is: why? And what bad things happen when you try? It appears as though shrink operations were done. Now, in this case it makes some sense - MSDE/Express editions are capped at a maximum database size. But the right answer may be to look at the right edition for your needs. And if you stumble upon this question looking to shrink your production database, and that isn't your reason, you should ask yourself that "why?" question.
I don't want someone searching the web for "how to shrink a database" coming across this and thinking it is a cool or acceptable thing to do.
Shrinking Data Files is a special task that should be reserved for special occasions. Consider that when you shrink a database, you are effectively fragmenting your indexes. Consider that when you shrink a database you are taking away the free space that a database may someday grow right back into - effectively wasting your time and incurring the performance hit of a shrink operation only to see the DB grow again.
I wrote about this concept in several blog posts about shrinking databases. The one called "Don't touch that shrink button" comes to mind first. I talk about the concepts outlined here - but also the concept of "right-sizing" your database. It is far better to decide what your database size needs to be, plan for future growth, and allocate it to that amount. With Instant File Initialization available in SQL Server 2005 and beyond for data files, the cost of growth is lower - but I still prefer a proper initial allocation - and I'm far less scared of white space in a database than I am of shrinking in general with no thought first. :)
DBCC SHRINKDATABASE works for me, but this is its full syntax:
DBCC SHRINKDATABASE ( database_name, [target_percent], [truncate] )
where target_percent is the desired percentage of free space left in the database file after the database has been shrunk.
And truncate parameter can be:
NOTRUNCATE
Causes the freed file space to be retained in the database files. If not specified, the freed file space is released to the operating system.
TRUNCATEONLY
Causes any unused space in the data files to be released to the operating system and shrinks the file to the last allocated extent, reducing the file size without moving any data. No attempt is made to relocate rows to unallocated pages. target_percent is ignored when TRUNCATEONLY is used.
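For example (MyDB is a placeholder name):
DBCC SHRINKDATABASE (MyDB, 10)            -- shrink, aiming to leave ~10% free space in each file
GO
DBCC SHRINKDATABASE (MyDB, TRUNCATEONLY)  -- only release unused space at the end of the files
GO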
...and yes, no_one is right, shrinking a database is not a very good practice because, for example:
a shrink on data files is an excellent way to introduce significant logical fragmentation, because it moves pages from the end of the allocated range of a database file to somewhere near the front of the file...
shrinking a database can have a lot of consequences for the database and the server... think hard about it before you do it!
There are plenty of blogs and articles about it on the web.
Late answer, but it might be useful for someone else.
If neither DBCC ShrinkDatabase/ShrinkFile nor SSMS (Tasks/Shrink/Database) helps, there are tools from Quest and ApexSQL that can get the job done, and can even schedule periodic shrinking if you need it.
I used the latter one on a free trial to do this some time ago, by following the short description at the end of this article:
https://solutioncenter.apexsql.com/sql-server-database-shrink-how-and-when-to-schedule-and-perform-shrinking-of-database-files/
All you need to do is install ApexSQL Backup, click "Shrink database" button in the main ribbon, select database in the window that will pop-up, and click "Finish".
You will also need to shrink the individual data files.
It is, however, not a good idea to shrink databases. For example, see here.
You should use:
dbcc shrinkdatabase (MyDB)
It will shrink the log file (keep a Windows Explorer window open and watch it happen).
Here's another solution: Use the Database Publishing Wizard to export your schema, security and data to sql scripts. You can then take your current DB offline and re-create it with the scripts.
Sounds kind of foolish, but there are a couple of advantages. First, there's no chance of losing data. Your original DB (as long as you don't delete the files when dropping it!) is safe, the new DB will be roughly as small as it can be, and you'll have two different snapshots of your current database - one ready to roll, one minified - to choose from for backups.
"Therefore it's reasonable to assume much space should now be retrievable."
Apologies if I misunderstood the question, but are you sure it's the database and not the log files that are using up the space? Check to see what recovery model the database is in. Chances are it's in Full, which means the log file is never truncated. If you don't need a complete record of every transaction, you should be able to change to Simple, which will truncate the logs. You can shrink the database during the process. Assuming things go right, the process looks like:
Backup the database!
Change to Simple Recovery
Shrink db (right-click the db, choose All Tasks > Shrink DB -> set to 10% free space)
Verify that the space has been reclaimed; if not, you might have to do a full backup
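For reference, a rough T-SQL equivalent of the steps above (MyDB and the backup path are placeholders), which also works from OSQL:
BACKUP DATABASE MyDB TO DISK = 'C:\Backups\MyDB.bak'   -- placeholder path
GO
ALTER DATABASE MyDB SET RECOVERY SIMPLE
GO
DBCC SHRINKDATABASE (MyDB, 10)                          -- aim for ~10% free space
GO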
If that doesn't work (or you get a message saying "log file is full" when you try to switch recovery modes), try this:
Backup
Kill all connections to the db
Detach db (right-click > Detach or right-click > All Tasks > Detach)
Delete the log (ldf) file
Reattach the db
Change the recovery mode
etc.
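If you do go down that road, the detach/reattach part can be scripted too; a sketch using the old stored procedures available on SQL 2000/MSDE (the name and path are placeholders, and keep that backup handy):
EXEC sp_detach_db 'MyDB'
GO
-- delete or rename the .ldf file here, outside SQL Server
EXEC sp_attach_single_file_db 'MyDB', 'C:\Data\MyDB.mdf'   -- a new, minimal log file is created
GO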
I came across this post because I needed to run SHRINKFILE on MSSQL 2012, which is a little trickier than on the 2000 or 2005 versions. After reading up on all the risks and issues involved, I ended up testing. Long story short, the best results I got were from using SQL Server Management Studio.
Right-Click the DB -> TASKS -> SHRINK -> FILES -> select the LOG file
You also have to modify the minimum size of the data and log files. DBCC SHRINKDATABASE will shrink the data inside the files you already have allocated. To shrink a file to a size smaller than its minimum size, use DBCC SHRINKFILE and specify the new size.
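For example, assuming the logical name of the data file is MyDB_Data (sp_helpfile shows the real logical names):
USE MyDB
GO
DBCC SHRINKFILE (MyDB_Data, 500)   -- target size in MB; can go below the file's original minimum size
GO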
Delete data, make sure the recovery model is simple, then shrink (either Shrink Database or Shrink Files works). If the data file is still too big, AND you use heaps to store data -- that is, no clustered index on large tables -- then you might have this problem when deleting data from heaps: http://support.microsoft.com/kb/913399
I recently did this. I was trying to make a compact version of my database for testing on the road, but I just couldn't get it to shrink, no matter how many rows I deleted. Eventually, after many other commands in this thread, I found that my clustered indexes were not getting rebuilt after deleting rows. Rebuilding my indexes made it so I could shrink properly.
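In case it helps someone in the same spot, this is roughly what that looked like per table; the table and database names are placeholders, and on SQL 2000/MSDE the older DBCC DBREINDEX does the same job:
ALTER INDEX ALL ON dbo.BigTable REBUILD   -- SQL Server 2005+ syntax
GO
-- DBCC DBREINDEX ('dbo.BigTable')        -- SQL 2000 / MSDE equivalent
DBCC SHRINKDATABASE (MyDB, 10)
GO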
Not sure how practical this would be, and depending on the size of the database, number of tables and other complexities, but I:
defrag the physical drive
create a new database according to my requirements, space, percentage growth, etc
use the simple ssms task to import all tables from the old db to the new db
script out the indexes for all tables on the old database, and then recreate the indexes on the new database. expand as needed for foreign keys etc.
rename databases as needed, confirm successful, delete old
I think you can remove all your log by switching from full to simple recovery. Right-click your database, select Properties, then Options, and change:
Recovery mode to Simple
Containment type to None
When you've set the recovery model to Simple (and enabled auto-shrink), it is still possible that SQL Server cannot shrink the log. It has to do with checkpoints in the log (or the lack thereof).
So first run
DBCC CHECKDB
on your database. After that the shrink operation should work like a charm.
Usually I use the Tasks>Shrink>Files menu and choose the logfile with the option to reorganise pages.