I have been using DBCC SHRINKFILE with EMPTYFILE to move data from one secondary data file to another. There are currently four files in the filegroup, with over 1 TB of free space between them.
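For reference, the pattern I'm running is the standard one (the file name here is a placeholder):

DBCC SHRINKFILE (N'SecondaryDataFile2', EMPTYFILE);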
Oddly, the system has alerted me that the filegroup these files belong to is full and that no space could be allocated for a user table.
I've used this numerous times before and never had this happen, and some googling couldn't turn up any other reports of it. Anybody have any ideas?
The version of SQL Server is 2008 R2.
I had this exact problem. It turned out that a nightly maintenance process was issuing an ALTER INDEX command on an index in the same file while the DBCC SHRINKFILE WITH EMPTYFILE was running. Makes sense that ALTER INDEX wouldn't work while the file is marked to not accept new data.
In our environment, we resolved this by preventing the nightly maintenance process from issuing that command and running it manually outside the window in which we allow the SHRINKFILE to run.
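If you want to confirm the conflict, one option is to look at what is running while the shrink is active; a rough sketch using the standard DMVs (the command names shown can vary by version):

-- look for an ALTER INDEX running alongside the shrink (which shows up as DbccFilesCompact)
SELECT session_id, command, wait_type, percent_complete
FROM sys.dm_exec_requests
WHERE command IN ('DbccFilesCompact', 'ALTER INDEX');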
I was tasked with removing a bunch of files and filegroups from a SQL database. I have read multiple sites, but they all use DBCC SHRINKFILE (N'filename', EMPTYFILE).
From the Microsoft documentation on SHRINKFILE:
EMPTYFILE
Migrates all data from the specified file to other files in the same filegroup.
I do not want to migrate the data; I want to purge it. Is there any way to purge the data so I can remove the files/filegroups without needing to use SHRINKFILE?
I also do not want to take the database offline if that can be avoided. These files and filegroups are created by one of our systems, so I would prefer that the database remain online so the system can keep using it.
It creates a new filegroup each month and a file for each day. The data we want to remove is from 2019 and is no longer needed, and we need to recover the space.
TIA
What I ended up doing was setting the database to RESTRICTED_USER for a couple of minutes, setting all the filegroups to READWRITE, then setting the database back to MULTI_USER, keeping the downtime to a minimum.
After that I ran the DROP TABLE, DBCC SHRINKFILE (..., EMPTYFILE), and REMOVE FILE commands, and that cleared up the space on the drive.
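In outline, the script looked something like this (the database, filegroup, file, and table names below are placeholders for ours):

-- briefly restrict access so the read-only filegroups can be flipped to READWRITE
ALTER DATABASE [MyDb] SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE [MyDb] MODIFY FILEGROUP [FG_201901] READWRITE;
ALTER DATABASE [MyDb] SET MULTI_USER;

-- purge the data, then empty and remove each daily file
DROP TABLE [dbo].[Data_20190101];
DBCC SHRINKFILE (N'FG_201901_Day01', EMPTYFILE);
ALTER DATABASE [MyDb] REMOVE FILE [FG_201901_Day01];

-- once a filegroup has no files left, it can be dropped too
ALTER DATABASE [MyDb] REMOVE FILEGROUP [FG_201901];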
Thank you for those that commented, it helped in getting the script working. :)
I recently took over management of a database that has been in use for 2-3 years, and it had no transaction log maintenance plan in place. The DB file is 8 GB, but the transaction log file is a whopping 54 GB. I started backing up the log file, and I need to reclaim that drive space. I have compared my DB to other sites within my company that had proper maintenance plans built, and their transaction logs are roughly 4 GB, which is what I would expect. This is the first time I've run into this problem.
I performed a full backup of the DB and set up an initial transaction log maintenance plan, but I need to shrink this *.ldf file, because it is so grossly out of proportion. I searched the Stack Overflow message boards in the hope of finding a similar situation.

Based on that research, I tried DBCC SHRINKFILE, but that did not yield the results I expected. I restored the DB to its original state (oversized log file in place) and tried the Full-Simple-Full recovery technique to truncate the log, but was still unable to reclaim the space. I even tried deleting the .ldf and going through the process of clearing the (Recovery Pending) status. I went back to the DBCC CHECKDB repair function, but after clearing the (Recovery Pending) status, I was unable to back up the transaction log at all. I started receiving a msg 42000 error 50000, which also referenced error 3013. In the end, I deleted the whole mess and restored it back to its original state.

I've tried to be as detailed as possible, and will be happy to clarify or expound if necessary. As I said, this is the first time I've run into something like this, but I have always started my projects from the beginning. This is the first time I've jumped into the middle of something that was built by someone else and broken when I got it.
-- last resort: REPAIR_ALLOW_DATA_LOSS can discard data, so take a backup first
ALTER DATABASE [DBName] SET EMERGENCY;
GO
-- take exclusive access so CHECKDB can run repairs
ALTER DATABASE [DBName] SET SINGLE_USER;
GO
DBCC CHECKDB ([DBName], REPAIR_ALLOW_DATA_LOSS) WITH ALL_ERRORMSGS;
GO
-- return the database to normal access
ALTER DATABASE [DBName] SET MULTI_USER;
GO
My expected result is an uncorrupted transaction log that is appropriately sized for my database. Let this be a cautionary tale and reminder to ask all the questions before accepting management of someone else's screw-up.
If you're on SQL Server 2005 or older, you can try 'BACKUP LOG WITH TRUNCATE_ONLY' (the option was removed in SQL Server 2008).
Otherwise, switch the database to simple recovery; this will clear out the log.
Then run DBCC SHRINKFILE to shrink the file itself.
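A minimal sketch of that sequence (the log file's logical name is a guess here; check sys.database_files for yours, and take a full backup afterwards to restart the log chain):

ALTER DATABASE [MyDb] SET RECOVERY SIMPLE;
-- shrink the log to a 4 GB target (the size argument is in MB)
DBCC SHRINKFILE (N'MyDb_log', 4096);
ALTER DATABASE [MyDb] SET RECOVERY FULL;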
I have a Spring project. When I start the server, a file named 'tempdb' is created in the SQL Server directory, and it grows very large (it reaches 8 GB).
I'd like to know why this file is created. Is there a way to reduce its size?
Thanks in advance
Run this:
-- switch to the right database
USE tempdb;
GO
-- flush dirty pages to disk
CHECKPOINT;
GO
-- clear the caches so the shrink can release as much space as possible
DBCC DROPCLEANBUFFERS;
DBCC FREEPROCCACHE;
DBCC FREESYSTEMCACHE('ALL');
DBCC FREESESSIONCACHE;
GO
-- shrink the tempdb data file (logical_file_name, target_size_in_MB)
DBCC SHRINKFILE (TEMPDEV, 1024);
GO
Alternatively, you can restart the SQL Server service; tempdb is dropped and re-created at its configured initial size on every restart.
The tempdb system database is a global resource that is available to all users connected to an instance of SQL Server. It is used to store user objects, internal objects, and version stores, as well as worktables that hold intermediate results created during query processing and sorting.
You can use "ALTER DATABASE tempdb" command to limit/shrink the size of this.
The size 8Gb is suspicious though as tempdb size is seen mostly in MBs. You might want to check for any ugly queries when you're starting up your server.
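A sketch of capping it (tempdev is the default logical name of tempdb's primary data file; the sizes are just examples, and a smaller initial size takes effect after the service restarts):

ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 256MB);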
I made a mistake in my logic and let a table grow way beyond what it should have. Now the database is 90gb (where it should be 10gb). I was able to clear the table that had all this information but I can't get the database to shrink.
I've used dbcc shrinkDatabase and dbcc shrinkfile and it appears to shrink the database down to 82gb temporarily but then goes right back to 90gb after a minute or so. I'm positive that another table is not taking up the space.
Also, if I export the DB it's only about 5gb in size.
I'm thinking it may have something to do with the indexes because it happens right after I run the rebuild on the index (the application is offline so nothing is being written to the database while I am working on it).
For a location this size using my application it's typical to have 8-10gb of DB file usage.
Does anyone know how to shrink the DB back to its normal size?
Thanks to Shannon I got my answer. I needed to run the following to clean the table up:
DBCC CLEANTABLE ('databasename', 'tablename');
DBCC SHRINKDATABASE ('databasename', 10);
I then tried to rebuild the clustered index on the table, to no avail (probably because the table had grown so large; it was very much fubar), so I dropped and re-created the index.
The space usage came down to what I was expecting to see.
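For reference, the drop/re-create step was along these lines (the index, table, and column names are placeholders):

DROP INDEX [IX_BigTable] ON [dbo].[BigTable];
CREATE CLUSTERED INDEX [IX_BigTable] ON [dbo].[BigTable] ([Id]);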
First of all, I know it's better not to shrink a database. But in our situation we had to shrink the data file to claim space back.
Environment: SQL Server 2005 x64 SP3 Ent running on Windows Server 2003 Enterprise x64.
The database has a single data file and one log file. Before we ran DBCC SHRINKFILE, the data file was 640 GB, of which 400 GB was free, so the data was about 240 GB. To speed up the shrink process, we defragmented the database first, then shrank the data file.
However, after we shrank the data file using DBCC SHRINKFILE, the data had grown to 490 GB. How could that happen?
I asked around, including Paul Randal. Here's the likely reason:
When indexes are rebuilt, the old index structures are not physically removed from the data file right away; they are placed on a deferred-drop queue and dropped later, in batches. Until that happens, they still count as used space in the file.
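A quick way to watch allocated vs. actually-used space per file while the deferred drops complete (a standard query; divide by 128 to convert 8 KB pages to MB):

SELECT name,
       size / 128 AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM sys.database_files;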