DBCC SHRINKFILE EMPTYFILE stuck because of sysfiles1 table - sql-server

I'm trying to migrate my fairly big database on SQL Server 2008 from one drive to another with minimum downtime, and I'm running into some issues.
Basically, my plan is to use DBCC SHRINKFILE ('filename', EMPTYFILE) to move the extents.
Every so often, I shrink the file to avoid space problems with the log-shipped copies of the database on another server.
A huge number of extents were moved successfully, but then I got this error:
DBCC SHRINKFILE: System table SYSFILES1 Page 1:21459450 could not be moved to other files because it only can reside in the primary file of the database.
Msg 2555, Level 16, State 1, Line 3
Cannot move all contents of file "filename" to other places to complete the emptyfile operation.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
So, what I've tried already:
manually making the database bigger by adding empty space (just growing the file with ALTER DATABASE)
working a little with the files in the SECONDARY filegroup
working with the database after a full/transaction log backup
None of this worked.
Can someone help me fix this?
Thanks a lot.

As the error message states, there are some things that need to reside in the PRIMARY filegroup. Use the information in sys.allocation_units to find out what user (as opposed to system) objects are still in PRIMARY and move them with create index … with (drop_existing = on) on OTHER_FILEGROUP. Once you've moved all of your objects, you should be able to shrink the file down to as small as it can possibly be. Your final step will be to incur the downtime to move the primary file (in this case, minimal downtime does not mean "no downtime"). Luckily, what actually needs to reside in PRIMARY isn't very much data, so the downtime should be small. But you'll have a good idea once you get everything out of there.
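For example, a query along these lines (a sketch; the join condition handles both in-row and LOB allocation units) lists the user objects that still have pages allocated in PRIMARY:
-- List user objects that still have allocations in the PRIMARY filegroup.
SELECT o.name  AS object_name,
       i.name  AS index_name,
       au.type_desc,
       au.total_pages
FROM sys.allocation_units AS au
JOIN sys.partitions AS p
  ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)       -- in-row / row-overflow data
  OR (au.type = 2       AND au.container_id = p.partition_id)  -- LOB data
JOIN sys.objects    AS o  ON o.object_id = p.object_id
JOIN sys.indexes    AS i  ON i.object_id = p.object_id AND i.index_id = p.index_id
JOIN sys.filegroups AS fg ON fg.data_space_id = au.data_space_id
WHERE fg.name = 'PRIMARY'
  AND o.is_ms_shipped = 0
ORDER BY au.total_pages DESC;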
While you're at it, set the default filegroup for your database to something other than primary to avoid putting user objects there in the future.
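For example (MyDb and SECONDARY are placeholder names for your database and the other filegroup):
ALTER DATABASE MyDb MODIFY FILEGROUP [SECONDARY] DEFAULT;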

Related

Reclaim unused space from a SQL Server table

In SQL Server, in one of our databases, we have a big database table that's using over 1.2 TB of space. It has about 200 GB of actual data but over 1 TB of unused space.
This happened over 2 years as old time series data was deleted from this table daily and new data was inserted daily.
We do not expect the table size to increase much going forward.
I am looking for the best way to reclaim unused space from this table without taking the database or table offline, and without causing too much CPU overhead.
I think for this you'll need to use DBCC SHRINKFILE, possibly in several incremental steps.
First, see whether TRUNCATEONLY has an acceptable effect; it depends on how the data is distributed within the file. Note that DBCC SHRINKFILE takes the logical file name (see sys.database_files), not the table name:
DBCC SHRINKFILE (N'LogicalFileName', TRUNCATEONLY)
If the file does not shrink sufficiently, you can specify a target size to shrink to in MB, e.g.
DBCC SHRINKFILE (N'LogicalFileName', 256000)
You can monitor the impact on performance while this executes and stop it if need be, resuming again as appropriate.
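For example, a monitoring query along these lines (a sketch; run it from a second session) shows the shrink's progress:
-- Watch the progress of a running DBCC SHRINKFILE from another session.
SELECT r.session_id, r.command, r.percent_complete, r.total_elapsed_time
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE t.text LIKE '%SHRINKFILE%'
  AND r.session_id <> @@SPID;   -- exclude this monitoring query itself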

How to get the space used by SQL Server back?

I created a table in SQL Server and inserted 135 million records into it.
Then I truncated it
and tried to re-insert the same 135 million records.
But something went wrong and I had to restart the computer.
My database went into recovery mode,
which then completed.
The problem now is that the C drive has only 10 GB free (210 GB used), while before all this I had 105 GB free!
I checked the folders, but the sizes of all of them, including hidden ones, do not add up to 210 GB.
What happened, and where did those GBs go?
The space is not automatically released by the database. In the case of tempdb, restarting the SQL Server service reinitializes tempdb at its original size, but that is not the case for other databases.
If you want to reclaim the space, there are two approaches:
Cleaner approach:
As suggested by Paul Randal, move the existing tables to a new filegroup and then drop the old filegroup. Refer to his article; a sketch of these steps follows the list:
Create a new filegroup
Move all affected tables and indexes into the new filegroup using the CREATE INDEX … WITH (DROP_EXISTING = ON) ON syntax, to move the tables and remove fragmentation from them at the same time
Drop the old filegroup that you were going to shrink anyway (or shrink it way down if it's the primary filegroup)
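A minimal sketch of the first two steps, using hypothetical names (MyDb, a SECONDARY filegroup, a path on the D: drive, and dbo.BigTable with an existing unique clustered index PK_BigTable on Id):
-- Create the new filegroup and add a data file to it.
ALTER DATABASE MyDb ADD FILEGROUP [SECONDARY];
ALTER DATABASE MyDb
ADD FILE (NAME = N'MyDb_data2',
          FILENAME = N'D:\Data\MyDb_data2.ndf',
          SIZE = 10GB, FILEGROWTH = 1GB)
TO FILEGROUP [SECONDARY];
-- Rebuild the clustered index onto the new filegroup; this moves the table data.
CREATE UNIQUE CLUSTERED INDEX PK_BigTable
    ON dbo.BigTable (Id)
    WITH (DROP_EXISTING = ON)
    ON [SECONDARY];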
Brute force approach and remediation
Here, you can use DBCC SHRINKFILE to reduce the size. Keep a little extra space in the database to avoid frequent autogrowth. Beware that shrinking the file will lead to index fragmentation, and you will have to rebuild or reorganize indexes after shrinking.
As Paul Randal recommends:
If you absolutely have no choice and have to run a data file shrink
operation, be aware that you’re going to cause index fragmentation and
you might need to take steps to remove it afterwards if it’s going to
cause performance problems. The only way to remove index fragmentation
without causing data file growth again is to use DBCC INDEXDEFRAG or
ALTER INDEX … REORGANIZE. These commands only require a single 8KB
page of extra space, instead of needing to build a whole new index in
the case of an index rebuild operation (which will likely cause the
file to grow).
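A brief sketch of that remediation, with hypothetical file and table names:
-- Shrink the data file to a target size in MB.
DBCC SHRINKFILE (N'MyDb_data', 256000);
-- Remove the resulting fragmentation without growing the file again.
ALTER INDEX ALL ON dbo.BigTable REORGANIZE;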

SQL Server LDF file taking too much space

Why is my database log file taking up so much space? It's using almost 30 GB of my HDD. Even after deleting 1,000,000 records, it's not freeing up any space.
So:
1. Why is the log file taking this much space (30 GB)?
2. How can I free up the space?
1. Why is the log file taking this much space (30 GB)?
Either because your recovery model is not SIMPLE and the LDF eventually grew to that size,
or because there was a large one-time DML operation,
or for other reasons, as noted by #sepupic in another answer.
2. How can I free up the space?
If recovery is anything other than SIMPLE:
First back up the transaction log
Then perform a shrink, like DBCC SHRINKFILE(2, 256)
If recovery is SIMPLE:
Just shrink it to the desired size, like DBCC SHRINKFILE(2, 256)
If the log still does not shrink to the target size, check the exact reason using the query from #sepupic's answer.
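A sketch of the non-SIMPLE case, with a hypothetical database name and backup path (file_id 2 is the log file in a database with a single log file):
-- Back up the transaction log so the inactive portion can be reused.
BACKUP LOG MyDb TO DISK = N'D:\Backup\MyDb_log.trn';
-- Then shrink the log file to roughly 256 MB.
DBCC SHRINKFILE (2, 256);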
Some members still advise physically removing LDF files.
I strongly suggest not doing this. See this related post from Aaron Bertrand:
Some things you don't want to do:
Detach the database, delete the log file, and re-attach. I can't
emphasize how dangerous this can be. Your database may not come back
up, it may come up as suspect, you may have to revert to a backup (if
you have one), etc. etc.
1. Why is the log file taking this much space (30 GB)?
It was because Autogrowth / Maxsize was set to 200,000 MB.
2. How can I free up the space?
As described here, I used the following commands, and the file is now less than 200 MB:
ALTER DATABASE myDatabaseName
SET RECOVERY SIMPLE
GO
DBCC SHRINKFILE (myDatabaseName_log, 1)
GO
ALTER DATABASE myDatabaseName
SET RECOVERY FULL
GO
I have also set Autogrowth/Maxsize in the database properties to a limit of 1000 MB.
The link describes more options, so I recommend referring to it for a detailed description.
Thanks #hadi for the link.
Why is my database log file taking up so much space?
There can be more causes than just the two mentioned in another answer.
You can find the exact reason using this query:
select log_reuse_wait_desc
from sys.databases
where name = 'myDB';
Here is a link to the BOL article that describes all the possible causes under log_reuse_wait:
sys.databases (Transact-SQL)
How can I free up the space?
First determine the cause using the query above, then address it; for example, if it's broken replication, you should remove or repair it.
You need a maintenance job to back up the transaction log, and it should run often: every 10 minutes or so.
A FULL backup once per day isn't good enough.
Alternatively, you can change the Recovery Model of the database from FULL to SIMPLE. But if you do this, you'll lose the ability to do point-in-time restores.
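For example, a log backup along these lines (a sketch; the database name and backup path are placeholders) could be scheduled as a SQL Agent job every 10 minutes:
-- Build a timestamped file name so repeated backups don't overwrite each other.
DECLARE @stamp nvarchar(20) =
    REPLACE(REPLACE(REPLACE(CONVERT(nvarchar(19), GETDATE(), 120), '-', ''), ':', ''), ' ', '_');
DECLARE @file nvarchar(260) = N'D:\Backup\MyDb_log_' + @stamp + N'.trn';
BACKUP LOG MyDb TO DISK = @file;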
In my case the DB name contained bad characters, so the script didn't work.
I found and followed this article, which worked perfectly in two steps: changing the recovery model from full to simple, then shrinking the DB log file by more than 95%.

Database size after deleting rows does not change

I have 2 databases from which I have deleted rows in a specific table in order to decrease the size of the database.
After deleting, the size of DB.mdf does not change.
I also tried rebuilding the indexes and using DBCC CLEANTABLE, but to no effect!
ALTER INDEX ALL ON dbo.[Tablename] REBUILD
DBCC CLEANTABLE ('DBname', 'Tablename', 0)
Deleting rows in a database will not decrease the actual database file size.
You need to compact (shrink) the database after row deletion.
Look at DBCC SHRINKFILE for this.
After running this, you'll want to rebuild indexes. Shrinking typically causes index fragmentation, and that could be a significant performance cost.
I would also recommend that after you shrink, you re-grow the files so that you have some free space. That way, when new rows come in, they don't trigger autogrowth. Autogrowth has a performance cost and is something you would like to avoid whenever possible.
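A brief sketch of that sequence, with hypothetical file and table names:
-- Shrink the data file to a target size in MB.
DBCC SHRINKFILE (N'MyDb_data', 250000);
-- Rebuild indexes to remove the fragmentation the shrink introduced.
ALTER INDEX ALL ON dbo.MyLargeTable REBUILD;
-- Re-grow the file so there is free space and new rows don't immediately trigger autogrowth.
ALTER DATABASE MyDb
MODIFY FILE (NAME = N'MyDb_data', SIZE = 300000MB);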
You need to shrink the db. In SSMS, right-click the database, then Tasks -> Shrink -> Database.
I faced the same issue: my db was 40 MB, and after deleting some columns its size still was not changing.
I installed SQLManager, then opened my db and ran the 'VACUUM' command, which cleaned up my db and reduced its size to 10 MB.
I wrote this after being in the exact same scenario and needing to shrink the database. However, not wanting to use DBCC SHRINKFILE, I used Paul Randal's method of shrinking the database.
https://gist.github.com/tcartwright/ea60e0a38fac25c847e39bced10ecd04

The transaction log for database 'Name' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

I am getting the following error while trying to insert 8,355,447 records with a single insert query. I use SQL Server 2008 R2.
INSERT INTO table
SELECT * FROM [DbName].table
Please help me solve this. Thanks.
Check the disk space on the SQL Server, as this typically occurs when the transaction log cannot expand due to a lack of free disk space.
If you are struggling for disk space, you can shrink the transaction logs of your application databases, and don't forget to shrink the transaction log of the tempdb database as well.
Note: posting this separately as I am a newcomer to Stack Overflow and don't have enough reputation to add comments.
More than one option may be available:
Is it necessary to run this insert as a single transaction? If not, you can insert, say, 'n' rows as one transaction, then the next 'n', and so on (see the batching sketch at the end of this answer).
If you can spare some space on another drive, create an additional log file on that drive.
Check whether you can clear some space on the drive in question by moving some other files elsewhere.
If none of the previous options are open, shrink the SQL transaction log files with the TRUNCATEONLY option (release the free space available at the end of the log file back to the OS).
DBCC SQLPERF('logspace') can be used to find the log files with the most free space in them.
USE those databases and apply a shrink; the command is:
DBCC SHRINKFILE (<log file name>, TRUNCATEONLY)
DBCC SHRINKFILE details are available here: DBCC SHRINKFILE.
If you are still not getting space after that, you may have to do a more rigorous shrink by re-allocating pages within the file (by specifying a target size); details can be found at the link provided.
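A rough sketch of the batching approach from the first option, assuming hypothetical table names and an Id key present in both source and target:
-- Copy rows in batches so each transaction stays small and log space can be
-- reused between batches (with frequent log backups, or under SIMPLE recovery).
DECLARE @batch int = 100000, @rows int = 1;
WHILE @rows > 0
BEGIN
    INSERT INTO dbo.TargetTable (Id, Col1, Col2)
    SELECT TOP (@batch) s.Id, s.Col1, s.Col2
    FROM [DbName].dbo.SourceTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable AS t WHERE t.Id = s.Id)
    ORDER BY s.Id;

    SET @rows = @@ROWCOUNT;
END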
Well, clean up the transaction log. The error is extremely clear if anyone cares to read it.
