SQL Server filegroup full during a large INSERT INTO statement - sql-server

Consider a SQL script designed to copy rows from one table to another in a SQL Server 2000 database. The transfer involves 750,000 rows in a simple statement:
INSERT INTO TableB([ColA],[ColB]....[ColG])
SELECT [ColA],[ColB]....[ColG]
FROM TableA
This is a long-running query, perhaps in part because ColB is of type ntext.
There are a handful of CONVERT() operations in the SELECT statement.
The difficulty is that after ~15 mins of operation, this exception is raised by SQL Server.
Could not allocate space for object '[TABLE]'.'[PRIMARY_KEY]' in database '[DB]' because the 'PRIMARY' filegroup is full.
Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.
More Info
Autogrowth is already on.
There is more than enough free space on disk (~20 GB).
The single .mdf is ~6 GB.
There are no triggers on the source or target tables.
Question
What options need to be set, either via Management Studio, or via T-SQL to allow the database to grow as required? What other remedies would you suggest?
Resolution
The database could not grow as needed because I was hosting it on an instance of SQL Server 2008 Express. Upgrading to a non-neutered edition of SQL Server will solve this problem.

Best advice: Pre-size your database larger, instead of forcing it to grow on-demand (which can be a slow operation).
One reason this error could occur is if your autogrow increment is set too large. Besides the obvious (trying to grow by 25GB with only 20GB on disk), a large growth increment can take a very long time to allocate, which can cause your query to time out.
EDIT: Based on your new screenshot, it doesn't look like the increment is the problem. But my original advice still stands. Try to manually grow the database yourself, and see if it lets you:
ALTER DATABASE foobar
MODIFY FILE (name = foobar_data, size = 5000)

If you can share a screenshot/information on your PRIMARY filegroup makeup and autogrow settings (i.e. all files included in PRIMARY and the autogrow settings for each), that would be helpful as well. First thought, without seeing anything additional, would be that you potentially have a max size (MAXSIZE) specified for one or more of the files that make up your PRIMARY filegroup, but that's just a hunch without actually seeing the information.
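In the meantime, if you're on SQL Server 2005 or later, a query against sys.database_files will pull the same information; this is just a sketch (run it in the affected database):
-- List every file in the current database with its size, max size,
-- and growth settings; size and max_size are stored in 8 KB pages.
SELECT name,
       size / 128 AS size_mb,
       CASE max_size WHEN -1 THEN NULL ELSE max_size / 128 END AS max_size_mb, -- NULL = unlimited
       growth,
       is_percent_growth
FROM sys.database_files;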

Are there any triggers on the table in question? I've seen similar results before when there were triggers and it was actually the expansion of the log file (.ldf) that hit the limit, just logging all the queries being run by the triggers, rather than the .mdf itself. If there are any triggers, I'd consider disabling them while you do this update and seeing if that helps (I assume this is a one-off data migration rather than a recurring event?)
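To check whether it really is the log that's growing, DBCC SQLPERF reports log size and percent used for every database on the instance:
-- Shows transaction log size and space used per database
DBCC SQLPERF(LOGSPACE);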

Related

PRIMARY filegroup is full

Since there is no space left on my SQL server, I keep getting the following error and I have to delete data to get the system working again.
The hosting company says we do not have unlimited package options, and they suggest that we switch servers. Apart from that, I've seen 'shrink' mentioned on the internet, but will this damage the database, or will it be the definitive solution for me?
Could not allocate space for object in database because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.
As has been mentioned in the comments, Shrink Database won't help you because SQL Server will use all free space in the file before it attempts to grow it.
So your options are as follows:
Archive or delete old data
If you have audit tables, large blobs, or other data that no longer needs to be accessed within your application, you can SELECT it out of the database, store it in a file somewhere as an archive, and then delete it from the database (or retrieve it from an older backup if you ever need it).
Apply table compression
If you are running Enterprise edition, or Standard/Express/Web on SQL Server 2016 SP1 or later, you can apply compression to larger tables. Among other things, this converts CHAR columns to be physically stored like VARCHAR columns, and I have seen it gain back a third of the space on some SQL databases. However, you will run into the problem of needing free space initially in which to perform the compression, so you may need to apply this in concert with #1. A restart of the database service may also give you some space back from tempdb in which to operate, if your tempdb has filled.
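As a rough sketch (the table name dbo.BigAuditTable is made up), you could estimate the savings first and then rebuild with page compression:
-- Estimate how much space page compression would reclaim
EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo',
     @object_name = 'BigAuditTable',
     @index_id = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';
-- Apply it (needs working space, as noted above)
ALTER TABLE dbo.BigAuditTable REBUILD WITH (DATA_COMPRESSION = PAGE);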
Upgrade storage space
If your host can't meet your storage space needs and #1 and #2 are not possible, it may be time to look at changing hosts. This may be a good idea anyway as your database will no doubt eventually grow back to bigger than its current size.
Keep in mind that if you are on Microsoft SQL Server Express, the license limits you to about 10 GB per database, so you cannot grow the PRIMARY filegroup beyond that. For Express editions before SQL Server 2008 R2 the limit is even lower (4 GB).
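If you're not sure which edition you're actually running, SERVERPROPERTY will tell you:
-- Express editions enforce a per-database size cap
SELECT SERVERPROPERTY('Edition') AS edition,
       SERVERPROPERTY('ProductVersion') AS product_version;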

SQL Server - Tempdb vs. Database Log usage

This may be a very basic question, but how can you determine beforehand whether a large operation will end up using database log or tempdb space?
For instance, one large insert / update operation I did used the database log to a point where we needed to employ SSIS & bulk operations just so the space wouldn't run out, because all the changes in the script had to be deployed at one time.
So now I'm working with a massive delete operation that would fill the log ten times over. So I created a script that checks the space used by the database log file and deletes rows in smaller batches, the idea being that once the log file grows large enough, the script aborts and continues from that point the next day (allowing normal usage to continue until the next backup, without the risk of the log running out of space).
Now, instead of filling the log, the latter query started filling up tempdb (the tempdb data file, not its log file, to be specific). So I'm thinking there's a huge hole where my understanding of these two should be. :)
Thanks for any advice!
Edit:
To clarify, the question here is: why does the first example use the database log, while the latter uses the tempdb data file, to store the changes? And in general, by what logic are DML operations stored in either tempdb or the log? Normally the log should record all DB changes, while tempdb is only used to store processed data during an operation when explicitly requested (i.e. temp objects) or when the server runs out of RAM, right?
There is actually quite a bit that goes on behind the scenes when deleting records from a table. This MSDN blog link may help shed some light on why tempdb is filling up when you try to delete. Either way, the delete will fill up the transaction log as well; it just sounds like tempdb is filling up before it gets to the step of logging the transaction(s).
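Incidentally, if you stick with the batched-delete approach you describe, a minimal sketch might look like this (the table dbo.BigTable and the IsExpired column are made up); small batches keep each transaction short so the log space can be reused between backups:
DECLARE @rows int;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    -- Delete in small chunks so each transaction stays short
    DELETE TOP (10000) FROM dbo.BigTable WHERE IsExpired = 1;
    SET @rows = @@ROWCOUNT;
END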
I'm not entirely sure what your requirements are, but the following links could be somewhat enlightening on your transaction logging issues. These are all set for SQL Server 2008 R2, but you can switch to whatever version you are running.
Recovery Model Overview
Considerations for Switching from the Simple Recovery Model
Considerations for Switching from the Full or Bulk-Logged Recovery Model
You also have the option of truncating the table, but that depends on a few things. If you don't need the operation to be logged and you're deleting all the records from the table, you can truncate. If you are doing some sort of conditional delete but you're deleting more than you're keeping, you could always insert all of the records you want to keep into another "staging" table, truncate the original, and then re-insert the records from the staging table (a sketch follows). However, that really only works when you have no foreign key relationships on that table.
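A rough sketch of that keep-and-truncate approach, assuming a hypothetical dbo.Source table and a made-up KeepMe condition:
-- Copy only the rows you want to keep
SELECT * INTO dbo.Source_Staging FROM dbo.Source WHERE KeepMe = 1;
-- Fast, minimally logged wipe of the original
TRUNCATE TABLE dbo.Source;
-- Put the kept rows back, then drop the staging copy
-- (if the table has an identity column, you'll need SET IDENTITY_INSERT
-- and an explicit column list)
INSERT INTO dbo.Source SELECT * FROM dbo.Source_Staging;
DROP TABLE dbo.Source_Staging;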

Best Way To Prepare A Read-Only Database

We're taking one of our production databases and creating a copy on another server for read-only purposes. The read-only database is on SQL Server 2008. Once the database is on the new server we'd like to optimize it for read-only use.
One problem is that there are large amounts of allocated space for some of the tables that are unused. Another problem I would anticipate would be fragmentation of the indexes. I'm not sure if table fragmentation is an issue.
What are the issues involved and what's the best way to go about this? Are there stored procedures included with SQL Server that will help? I've tried running DBCC SHRINKDATABASE, but that didn't deallocate the unused space.
EDIT: The exact command I used to shrink the database was
DBCC SHRINKDATABASE (dbname, 0)
GO
It ran for a couple hours. When I checked the table space using sp_spaceused, none of the unused space had been deallocated.
There are a couple of things you can do:
First -- don't worry about absolute allocated DB size unless you're running short on disk.
Second -- Idera has a lot of cool SQL Server tools, one of which defrags the DB.
http://www.idera.com/Content/Show27.aspx
Third -- dropping and re-creating the clustered index essentially defrags the tables, too -- and it re-creates all of the non-clustered indexes (defragging them as well). Note that this will probably EXPAND the allocated size of your database (again, don't worry about it) and take a long time (clustered index rebuilds are expensive).
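On SQL Server 2005 and later you can get much the same effect without the drop/re-create dance; this is just a sketch with a made-up table name:
-- Rebuilds the clustered index and all non-clustered indexes in place
ALTER INDEX ALL ON dbo.MyLargeTable REBUILD;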
One thing you may wish to consider is to change the recovery model of the database to simple. If you do not intend to perform any write activity to the database then you may as well benefit from automatic truncation of the transaction log, and eliminate the administrative overhead of using the other recovery models. You can always perform ad-hoc backups should you make any significant structural changes i.e. to indexes.
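A possible T-SQL version of that, with a placeholder database name; you could even mark the copy read-only while you're at it:
ALTER DATABASE MyReadOnlyCopy SET RECOVERY SIMPLE;
-- Optional: refuse all writes from here on
ALTER DATABASE MyReadOnlyCopy SET READ_ONLY WITH ROLLBACK IMMEDIATE;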
You may also wish to place the tables that are unused in a separate Filegroup away from the data files that will be accessed. Perhaps consider placing the unused tables on lower grade disk storage to benefit from cost savings.
One thing to consider with DBCC SHRINKDATABASE: you cannot shrink the database beyond its minimum size.
Try issuing the statement in the following form.
DBCC SHRINKDATABASE (DBName, TRUNCATEONLY);
Cheers, John
I think it will be OK to just recreate it from the backup.
Putting tables and indexes on separate physical disks always helps too. Indexes will be rebuilt from scratch when you recreate them on another filegroup, and therefore won't be fragmented.
There is a tool for shrinking or truncating a database in SQL Server Management Studio; I think you'll find it in the database's properties. This can be done before or after you copy the backup.
Certain forms of replication may do what you wish also.

How do I shrink my SQL Server Database?

I have a database nearly 1.9 GB in size, and MSDE 2000 does not allow DBs that exceed 2.0 GB.
I need to shrink this DB (and many others like this at various client locations).
I have found and deleted many hundreds of thousands of records which are considered unneeded:
these records account for a large percentage of some of the main (largest) tables in the database. Therefore it's reasonable to assume much space should now be retrievable.
So now I need to shrink the DB to account for the missing records.
I execute DBCC ShrinkDatabase('MyDB')...... No effect.
I have tried the various shrink facilities provided in MSSMS.... Still no effect.
I have backed up the database and restored it... Still no effect.
Still 1.9Gb
Why?
Whatever procedure I eventually find needs to be replayable on a client machine with access to nothing other than OSql or similar.
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE
GO
DBCC SHRINKFILE (MyDatabase_Log, 5)
GO
ALTER DATABASE MyDatabase SET RECOVERY FULL
GO
This may seem bizarre, but it's worked for me and I have written a C# program to automate this.
Step 1: Truncate the transaction log (Back up only the transaction log, turning on the option to remove inactive transactions)
Step 2: Run a database shrink, moving all the pages to the start of the files
Step 3: Truncate the transaction log again, as step 2 adds log entries
Step 4: Run a database shrink again.
My stripped down code, which uses the SQL DMO library, is as follows:
// Step 1: truncate the transaction log
SQLDatabase.TransactionLog.Truncate();
// Step 2: shrink, repacking pages toward the front of the files
SQLDatabase.Shrink(5, SQLDMO.SQLDMO_SHRINK_TYPE.SQLDMOShrink_NoTruncate);
// Step 3: truncate again, since the shrink generated more log entries
SQLDatabase.TransactionLog.Truncate();
// Step 4: shrink again, this time releasing the freed space
SQLDatabase.Shrink(5, SQLDMO.SQLDMO_SHRINK_TYPE.SQLDMOShrink_Default);
This is an old question, but I just happened upon it.
The really short and correct answer is already given and has the most votes. That is how you shrink a transaction log, and that was probably the OP's problem. And when the transaction log has grown out of control, it often needs to be shrunk back, but care should be taken to prevent future situations of a log from growing out of control. This question on dba.se explains that. Basically - Don't let it get that large in the first place through proper recovery model, transaction log maintenance, transaction management, etc.
But the bigger question in my mind when reading this question about shrinking the data file (or even the log file) is: why? And what bad things happen when you try? It appears as though shrink operations were done. Now in this case it makes some sense, because MSDE/Express editions are capped at a maximum DB size, though the right answer may be to look at the right edition for your needs. And if you stumble upon this question looking to shrink your production database and that cap isn't the reason, you should ask yourself the "why?" question.
I don't want someone searching the web for "how to shrink a database" coming across this and thinking it is a cool or acceptable thing to do.
Shrinking Data Files is a special task that should be reserved for special occasions. Consider that when you shrink a database, you are effectively fragmenting your indexes. Consider that when you shrink a database you are taking away the free space that a database may someday grow right back into - effectively wasting your time and incurring the performance hit of a shrink operation only to see the DB grow again.
I wrote about this concept in several blog posts about shrinking databases. This one called "Don't touch that shrink button" comes to mind first. I talk about these concepts outlined here - but also the concept of "Right-Sizing" your database. It is far better to decide what your database size needs to be, plan for future growth, and allocate it to that amount. With Instant File Initialization available in SQL Server 2005 and beyond for data files, the cost of growth is lower - but I still prefer a proper initial allocation - and I'm far less scared of white space in a database than I am of shrinking in general with no thought first. :)
DBCC SHRINKDATABASE works for me, but this is its full syntax:
DBCC SHRINKDATABASE ( database_name [, target_percent] [, {NOTRUNCATE | TRUNCATEONLY}] )
where target_percent is the desired percentage of free space left in the database file after the database has been shrunk.
The truncate parameter can be:
NOTRUNCATE
Causes the freed file space to be retained in the database files. If not specified, the freed file space is released to the operating system.
TRUNCATEONLY
Causes any unused space in the data files to be released to the operating system and shrinks the file to the last allocated extent, reducing the file size without moving any data. No attempt is made to relocate rows to unallocated pages. target_percent is ignored when TRUNCATEONLY is used.
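For example (the database name is a placeholder):
-- Shrink, leaving 10% free space in the files afterwards
DBCC SHRINKDATABASE (MyDB, 10);
-- Or just release trailing unused space without moving any pages
DBCC SHRINKDATABASE (MyDB, TRUNCATEONLY);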
...and yes, no_one is right: shrinking a database is not very good practice, because, for example:
a shrink on data files is an excellent way to introduce significant logical fragmentation, because it moves pages from the end of the allocated range of a database file to somewhere at the front of the file...
shrinking a database can have a lot of consequences for the database and the server... think hard about it before you do it!
There are a lot of blogs and articles about it on the web.
Late answer, but it might be useful for someone else.
If neither DBCC ShrinkDatabase/ShrinkFile nor SSMS (Tasks > Shrink > Database) helps, there are tools from Quest and ApexSQL that can get the job done, and even schedule periodic shrinking if you need it.
I used the latter one on a free trial to do this some time ago, following the short description at the end of this article:
https://solutioncenter.apexsql.com/sql-server-database-shrink-how-and-when-to-schedule-and-perform-shrinking-of-database-files/
All you need to do is install ApexSQL Backup, click "Shrink database" button in the main ribbon, select database in the window that will pop-up, and click "Finish".
You will also need to shrink the individual data files.
It is, however, not a good idea to shrink databases; for example, see here.
You should use:
dbcc shrinkdatabase (MyDB)
It will shrink the log file (keep a Windows Explorer window open and watch it happen).
Here's another solution: Use the Database Publishing Wizard to export your schema, security and data to sql scripts. You can then take your current DB offline and re-create it with the scripts.
Sounds kind of foolish, but there are a couple of advantages. First, there's no chance of losing data: your original DB (as long as you don't delete its files when dropping it!) is safe, the new DB will be roughly as small as it can be, and you'll have two different snapshots of your current database, one ready to roll and one minified, to choose from when backing up.
"Therefore it's reasonable to assume much space should now be retrievable."
Apologies if I misunderstood the question, but are you sure it's the database and not the log files that are using up the space? Check what recovery model the database is in. Chances are it's in Full, which means the log file is never truncated. If you don't need a complete record of every transaction, you should be able to change it to Simple, which will truncate the logs. You can shrink the database during the process. Assuming things go right, the process looks like this:
Backup the database!
Change to Simple Recovery
Shrink db (right-click db, choose all tasks > shrink db -> set to 10% free space)
Verify that the space has been reclaimed; if not, you might have to do a full backup
If that doesn't work (or you get a message saying "log file is full" when you try to switch recovery modes), try this:
Backup
Kill all connections to the db
Detach db (right-click > Detach or right-click > All Tasks > Detach)
Delete the log (ldf) file
Reattach the db
Change the recovery mode
etc.
I came across this post because I needed to use SHRINKFILE on SQL Server 2012, which is a little trickier than on the 2000 or 2005 versions. After reading up on all the risks and issues involved, I ended up testing. Long story short, the best results I got were from using SQL Server Management Studio:
Right-click the DB -> Tasks -> Shrink -> Files -> select the LOG file
You also have to modify the minimum size of the data and log files. DBCC SHRINKDATABASE will shrink the data inside the files you already have allocated. To shrink a file to a size smaller than its minimum size, use DBCC SHRINKFILE and specify the new size.
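For instance (the logical file name and target size are examples; get the real name from sp_helpfile or sys.database_files):
-- Shrink the log file down to 512 MB
DBCC SHRINKFILE (MyDatabase_Log, 512);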
Delete data, make sure the recovery model is Simple, then shrink (either Shrink Database or Shrink Files works). If the data file is still too big, and you use heaps to store data (that is, no clustered index on large tables), then you might have this problem regarding deleting data from heaps: http://support.microsoft.com/kb/913399
I recently did this. I was trying to make a compact version of my database for testing on the road, but I just couldn't get it to shrink, no matter how many rows I deleted. Eventually, after trying many of the other commands in this thread, I found that my clustered indexes were not getting rebuilt after deleting rows. Rebuilding my indexes made it so I could shrink properly.
Not sure how practical this would be, and it depends on the size of the database, number of tables and other complexities, but I:
defrag the physical drive
create a new database according to my requirements: space, percentage growth, etc.
use the simple SSMS task to import all tables from the old DB into the new DB
script out the indexes for all tables on the old database, and then recreate the indexes on the new database, expanding as needed for foreign keys etc.
rename the databases as needed, confirm success, delete the old one
I think you can clear the log by switching from Full to Simple recovery. Right-click your database, select Properties, then Options, and change:
Recovery mode to Simple
Containment type to None
When you've set the recovery model to Simple (and enabled auto-shrink), it is still possible that SQL Server cannot shrink the log. It has to do with checkpoints in the log (or the lack thereof).
So first run
DBCC CHECKDB
on your database. After that, the shrink operation should work like a charm.
Usually I use the Tasks > Shrink > Files menu and choose the log file with the option to reorganise pages.

SQL Server tempdb growing infinitely

We have several "production environments" (three servers each, running the same version of our system; each has a SQL Server database as its production database).
In one of these environments the tempdb transaction log has started to grow fast and seemingly without limit, and we can't find out why. Same version of the OS, SQL Server and the application. No changes in the environment.
Does anyone know how to figure out what's happening, or how to fix this?
You might be in the Full recovery model; if you are doing regular backups, you can change this to Simple and it will reduce the size of the log after the backup.
Here is some more info.
Have you tried running Profiler? This will allow you to view all of the running queries on the server. This may give you some insight into what is creating items in tempdb.
Your best bet is to fire up SQL Server Profiler and see what's going on. Look for high values in the "Writes" column or Spool operators, these are both likely to cause high temp usage.
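On SQL Server 2005 and later, a DMV query is a lighter-weight alternative; this sketch shows which sessions are allocating the most tempdb pages right now:
-- Internal objects cover sorts, spools and work tables;
-- user objects cover temp tables and table variables.
SELECT session_id,
       SUM(user_objects_alloc_page_count) AS user_pages,
       SUM(internal_objects_alloc_page_count) AS internal_pages
FROM sys.dm_db_task_space_usage
GROUP BY session_id
ORDER BY internal_pages DESC;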
If it is only the transaction log growing, then try this: open transactions prevent the log from being shrunk down as it goes. This should be run in tempdb:
DBCC OPENTRAN
OK, I think this question is the same as mine.
tempdb grows fast, and the common reason is procedures that create temporary tables.
When we create those tables, or perform other operations such as triggers or DBCC commands, they all use tempdb.
When temporary tables are created, SQL Server allocates space for them via the GAM, SGAM and IAM allocation pages, and it must guarantee the physical consistency of those pages, so only one task can update them at a time while the others wait. That contention is what makes tempdb grow fast.
I found the solution from Microsoft; it goes roughly like this:
1. Create multiple data files for tempdb, one per CPU; e.g. if your host has 16 CPUs, create 16 data files for tempdb, and make every file the same size (a sketch of adding a file follows below).
2. Monitor these files to make sure they do not fill up.
3. If the files are not big enough they will auto-grow, so keep them all at the same size.
If you still can't solve it, run the procedure sp_helpfile against tempdb, check the output, and post the result here.
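A sketch of adding one such file (the path, file name and sizes are placeholders; repeat for each CPU, keeping all files the same size):
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'D:\SQLData\tempdev2.ndf',
          SIZE = 1024MB,
          FILEGROWTH = 256MB);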
I ran into this situation myself when I was in Singapore.
Good luck.
