Shrinking databases after a database split - sql-server

Due to a new architecture I have to split the current database into 2 databases, each holding 50% of the initial database (= 15 GB).
1/ Is it a good idea to execute DBCC SHRINKDATABASE (0) on the 2 newly created databases? I'm asking because I've read many articles stating that shrinking a database leads to fragmentation.
2/ Is it a good approach to set both databases to SIMPLE recovery while doing the separation and then set them back to FULL?
What course of action do you recommend in this case?

Shrinking makes sense only if there is no chance the space gets reused. That said, the database is on the really tiny side to begin with - I have databases here that have multiple files, each one larger than this. So the gain may simply not be worth it.
But it is a valid case for shrinking IF the database will not grow back within a reasonable time and you need the space.
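If you do go ahead, a minimal sketch of the shrink-then-measure approach (the database name is a placeholder; sys.dm_db_index_physical_stats requires SQL Server 2005 or later):

USE MyDbHalf1
GO
DBCC SHRINKDATABASE (MyDbHalf1, 10)   -- leave 10% free space for near-term growth
GO
-- check what the shrink did to index fragmentation
SELECT OBJECT_NAME(object_id) AS TableName, index_id, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED')
WHERE avg_fragmentation_in_percent > 30
GO

Heavily fragmented indexes can then be rebuilt, which grows the files again - which is exactly why the gain is often not worth it.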

Handling unused disk space in sql server db

I have one database whose size is growing very fast. Currently its size is around 60 GB; however, after executing the sp_spaceused stored procedure I could verify that more than 40 GB is unused (unused space is different from reserved space, which I understand is for table growth). The actual data size is around 10-12 GB, plus a few GB in reserved space.
Now, to reclaim that unused space I tried the shrink operation, but it turned out not to help. Searching further, I also found advice not to shrink the DB, as that causes data fragments to be generated, resulting in delays during disk operations. Now I am really not sure what other operation I should try to reclaim the space and shrink the DB. I understand that due to the size, queries might be taking longer than expected, and reclaiming this space could help with performance (not sure).
While investigating I also came across the Generate Scripts feature. It helps export data and schema, but I am not sure whether it also scripts everything (users, permissions and other things) so that the script would create an as-is replica (deep copy/clone) of the DB - creating the schema and then populating it with data on another db/server?
Any pointer would be helpful.
If your database is 60 GB it means it has grown to 60 GB. Even if the data is only 20 GB, you probably have operations that grow the data from time to time (e.g. nightly index maintenance jobs). The recommendation is to leave the database at 60 GB. Do not attempt to reclaim the space; you will only cause harm, and whatever caused the database to grow to 60 GB to start with will likely occur again and trigger database growth.
In fact, you should do the opposite. Try to identify why it grew to 60 GB and extrapolate what will happen when your data reaches 30 GB. Will the database grow to 90 GB? If yes, then you should grow it now to 90 GB. The last thing you want is for growth to occur randomly, and possibly to run out of disk space at a critical moment. In fact, you should check right now whether your server has Instant File Initialization enabled.
Now of course, the question is: what would cause 3x data-size growth, and how do you identify it? I don't know of any easy method. I would recommend starting by looking at your SQL Agent jobs. Check your maintenance scripts. Look into the application itself: does it have a pattern of growing and deleting data? Look at past backups (you do have them, right?) and compare.
BTW, I assume due diligence and that you checked that it is the data file that has grown to 60 GB. If it is the LOG file that has grown, then it is easy: it means you enabled the full recovery model and forgot to back up the log.
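As a quick way to run that check, something along these lines (the database name is a placeholder) separates data-file growth from log-file growth:

SELECT name, type_desc, size * 8 / 1024 AS SizeMB
FROM sys.master_files
WHERE database_id = DB_ID('MyGrowingDb')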

SQL Server 2005 TempDB Size

We are using SQL Server 2005. Recently it crashed in our production environment due to a large tempdb size.
1) what could be reason for large tempdb size?
2) Is there any way to look what data is there in tempdb?
2) Is there any way to look what data is there in tempdb?
No, because it is not kept there. Tempdb gets very special treatment, such as being dropped and recreated on every server restart.
1) what could be reason for large tempdb size?
Inefficient SQL, maintenance jobs, or just the data at hand. Obviously an 800 GB or 6,000 GB database may require more tempdb space than a 4 GB online CRM attempt. You don't really specify ANY size in absolute terms. What IS large? I have tempdb databases hardcoded at 64 GB on my smaller servers.
Typical SQL that goes into tempdb includes:
Sorts that are not solvable as part of the query (you need to store the keys SOMEWHERE)
DISTINCT. Needs all returned data in tempdb to find duplicates.
Certain operations, possibly during joins.
Temp table usage (temporary tables). I just mention them because I often keep some hundred megabytes' worth of data in them during loads and scrubbing.
In general you can find those queries by their huge IO stats in the query log, or simply by their being slow.
That said, maintenance plans also go in there, but with reason. At the end of the day, your "large" is possibly my "not even worth mentioning, tiny". It really depends on what you do. Use the query trace tool to find out what takes long.
Physically, tempdb gets very special treatment - SQL Server does NOT write to the file if it does not have to (i.e. it keeps things in memory). Writes to disc are a sign of memory flowing over. This is different from normal db write behavior. Tempdb, IF it flows over, is best put onto a decently fast SSD... which won't normally be SO expensive, because it still will be relatively small.
Use the query here to find other queries hitting tempdb - basically you are fishing in dirty water; you need to try things out until you find the culprit.
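As a starting point, a sketch that breaks down what is actually occupying tempdb (it relies on the sys.dm_db_file_space_usage DMV, available from SQL Server 2005 on):

USE tempdb
GO
SELECT
    SUM(user_object_reserved_page_count) * 8 AS UserObjectsKB,         -- temp tables, table variables
    SUM(internal_object_reserved_page_count) * 8 AS InternalObjectsKB, -- sorts, hashes, spools
    SUM(version_store_reserved_page_count) * 8 AS VersionStoreKB,
    SUM(unallocated_extent_page_count) * 8 AS FreeKB
FROM sys.dm_db_file_space_usage
GO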
The usual way for a SQL Server database to grow - any database, not just tempdb - is to have its data and log files set to autogrow (especially the log files). SQL Server is perfectly happy to grow the log and data files until they consume all the disk space available to them.
Best practice, IMHO, is to allow limited autogrowth on the data files (put an upper bound on how big they can grow) and fix the size of the log files. You might need to do some analysis to figure out how big the log files need to be. For tempdb, especially, the recovery model should be set to simple, too.
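For illustration, capping data-file growth and fixing the log size might look roughly like this (the logical file names and sizes are hypothetical; look yours up in sys.database_files):

ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Data, FILEGROWTH = 256MB, MAXSIZE = 50GB)
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Log, FILEGROWTH = 0)   -- FILEGROWTH = 0 disables autogrow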
OK, tempdb is a kind of special database. Any temporary objects you use in procedures etc. are created here. So if your application uses a lot of temp tables in queries, they will all reside here, but they should clean themselves up after the connection (spid) is reset.
The other thing that can grow tempdb is database maintenance tasks; however, they will take a larger toll on the database log files.
Tempdb is also cleared every time you restart the SQL service. It basically drops the database and re-creates it. I agree with #Nic about leaving tempdb as it is; don't muck around with it. Any issue with space in tempdb usually indicates another, larger problem somewhere else. More space will mask the problem, but only for so long. How much free space does the drive that tempdb sits on have?
Another thing: if you haven't already, try to put tempdb on its own drive, and, if one more drive is possible, put the data and the log files on their own separate drives.
So, if you don't restart your SQL Server service, your drive will run out of space pretty soon.
use tempdb
select (size*8) as FileSizeKB from sys.database_files
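If you do decide to give tempdb its own drive as suggested above, a sketch of the move (the paths are hypothetical; tempdev and templog are the default logical names, and the change only takes effect at the next service restart):

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf')
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'L:\TempLog\templog.ldf')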

Should static database data be in its own Filegroup?

I'm creating a new DB and have a bunch of static data that won't change. If it does, it will be a manual process AND it will happen very rarely.
This data is a mix of varchars and Geographies.
I'm guessing it could be around 100K or so in total, over 4 or so tables.
Questions
Should I put these on a READ ONLY filegroup?
Can I create the tables in the designer and define the filegroup during creation? Or is it only possible via a script?
Once the data is in the table (on a read only filegroup), can I change it later? Is it really hard to do that?
Thanks.
It is worth it for VLDB (very large databases) for assorted reasons.
For 100,000 rows or 100 KB, I wouldn't bother.
This SQL Server support engineering team article discusses one of the associated "urban legends".
There is another one (can't find it) where you need 300 GB - 1B of data before you should consider multiple files/filegroups.
But, to answer specifically:
Personal choice (there is no hard and fast rule)
Yes. (Edit:) In SSMS 2005, in design mode, go to Indexes/Keys, "data space specification". The data lives where the clustered index is. Without a clustered index, you can only do it via CREATE TABLE (..) ON filegroup
Yes, but you'll have to ALTER DATABASE myDB MODIFY FILEGROUP foo READ_WRITE with the database in single-user exclusive mode (see the sketch below)
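For reference, a sketch of the script route and the read-only toggle (all names and the path are hypothetical):

ALTER DATABASE MyDB ADD FILEGROUP StaticData
ALTER DATABASE MyDB ADD FILE (NAME = StaticData1, FILENAME = 'C:\Data\MyDB_Static.ndf') TO FILEGROUP StaticData
GO
CREATE TABLE dbo.CountryRegion (Id int PRIMARY KEY, Name varchar(100)) ON StaticData
GO
ALTER DATABASE MyDB MODIFY FILEGROUP StaticData READ_ONLY    -- requires exclusive access
-- ...and back again for the rare manual update:
ALTER DATABASE MyDB MODIFY FILEGROUP StaticData READ_WRITE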
It is unlikely to hurt to put the data into a read-only space, but I am unsure you will gain significantly. A read-only filegroup (or tablespace, in Oracle) can give you 2 advantages: less to back up each time a full backup is taken, and a higher level of security over the data (e.g. it cannot be changed by a bug, or by accessing the DB via another tool, etc). The backup advantage is most true with larger DBs where backup windows are tight, so putting a small amount of effort into excluding filegroups is valuable. The security one depends on the nature of the site, data, etc. (If you do exclude the read-only space from regular backups, make sure you get a copy on any retained backup tapes. I tend to back up read-only spaces once a month.)
I am not familiar with designer.
Changing to and from read only is not onerous.
I think anything you read here is likely to be speculation, unless you have any evidence that it's been actually tried and recommended - to me it looks like a novel but unlikely idea. Do you have some reason to suspect that conventional practices will be unsatisfactory? It should be fairly easy to just try it and find out. Post your results if you get a chance.

Best Way To Prepare A Read-Only Database

We're taking one of our production databases and creating a copy on another server for read-only purposes. The read-only database is on SQL Server 2008. Once the database is on the new server we'd like to optimize it for read-only use.
One problem is that there are large amounts of allocated space for some of the tables that are unused. Another problem I would anticipate would be fragmentation of the indexes. I'm not sure if table fragmentation is an issue.
What are the issues involved and what's the best way to go about this? Are there stored procedures included with SQL Server that will help? I've tried running DBCC SHRINKDATABASE, but that didn't deallocate the unused space.
EDIT: The exact command I used to shrink the database was
DBCC SHRINKDATABASE (dbname, 0)
GO
It ran for a couple hours. When I checked the table space using sp_spaceused, none of the unused space had been deallocated.
There are a couple of things you can do:
First -- don't worry about absolute allocated DB size unless you're running short on disk.
Second -- Idera has a lot of cool SQL Server tools, one of them defrags the DB.
http://www.idera.com/Content/Show27.aspx
Third -- dropping and re-creating the clustered index essentially defrags the tables, too -- and it re-creates all of the non-clustered indexes (defragging them as well). Note that this will probably EXPAND the allocated size of your database (again, don't worry about it) and take a long time (clustered index rebuilds are expensive).
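On SQL Server 2008 the same defragmenting effect can be had in place with a rebuild; a sketch (the table name is hypothetical):

-- rebuilds the clustered index (i.e. the table itself) and all non-clustered indexes on it
ALTER INDEX ALL ON dbo.Orders REBUILD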
One thing you may wish to consider is to change the recovery model of the database to simple. If you do not intend to perform any write activity to the database then you may as well benefit from automatic truncation of the transaction log, and eliminate the administrative overhead of using the other recovery models. You can always perform ad-hoc backups should you make any significant structural changes i.e. to indexes.
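A minimal sketch of that change (the database name is hypothetical); since the copy is for read-only use, marking it READ_ONLY afterwards also fits:

ALTER DATABASE ReportCopy SET RECOVERY SIMPLE
GO
ALTER DATABASE ReportCopy SET READ_ONLY
GO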
You may also wish to place the tables that are unused in a separate Filegroup away from the data files that will be accessed. Perhaps consider placing the unused tables on lower grade disk storage to benefit from cost savings.
One thing to consider with DBCC SHRINKDATABASE: you cannot shrink the database beyond its minimum size.
Try issuing the statement in the following form.
DBCC SHRINKDATABASE (DBName, TRUNCATEONLY);
Cheers, John
I think it will be OK to just recreate it from the backup.
Putting tables and indexes on separate physical disks is always of help too. Indexes will be rebuilt from scratch when you recreate them on another filegroup, and therefore won't be fragmented.
There is a tool for shrinking or truncating a database in MSSQL Server. I think you select the properties of the database and you'll find it. This can be done before or after you copy the backup.
Certain forms of replication may do what you wish also.

How do I shrink my SQL Server Database?

I have a database nearly 1.9 GB in size, and MSDE2000 does not allow DBs that exceed 2.0 GB.
I need to shrink this DB (and many others like this at various client locations).
I have found and deleted many hundreds of thousands of records which are considered unneeded:
these records account for a large percentage of some of the main (largest) tables in the database. Therefore it's reasonable to assume much space should now be retrievable.
So now I need to shrink the DB to account for the missing records.
I execute DBCC ShrinkDatabase('MyDB')...... No effect.
I have tried the various shrink facilities provided in MSSMS.... Still no effect.
I have backed up the database and restored it... Still no effect.
Still 1.9Gb
Why?
Whatever procedure I eventually find needs to be replayable on a client machine with access to nothing other than OSql or similar.
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE   -- allow the log to be truncated without log backups
GO
DBCC SHRINKFILE (MyDatabase_Log, 5)             -- shrink the log file down to 5 MB
GO
ALTER DATABASE MyDatabase SET RECOVERY FULL     -- restore the original recovery model
GO
This may seem bizarre, but it's worked for me and I have written a C# program to automate this.
Step 1: Truncate the transaction log (Back up only the transaction log, turning on the option to remove inactive transactions)
Step 2: Run a database shrink, moving all the pages to the start of the files
Step 3: Truncate the transaction log again, as step 2 adds log entries
Step 4: Run a database shrink again.
My stripped down code, which uses the SQL DMO library, is as follows:
// Truncate the transaction log (remove inactive entries)
SQLDatabase.TransactionLog.Truncate();
// Shrink, moving pages to the front of the files without releasing the space
SQLDatabase.Shrink(5, SQLDMO.SQLDMO_SHRINK_TYPE.SQLDMOShrink_NoTruncate);
// Truncate again, since the shrink itself writes log entries
SQLDatabase.TransactionLog.Truncate();
// Final shrink, this time releasing the freed space to the OS
SQLDatabase.Shrink(5, SQLDMO.SQLDMO_SHRINK_TYPE.SQLDMOShrink_Default);
This is an old question, but I just happened upon it.
The really short and correct answer is already given and has the most votes. That is how you shrink a transaction log, and that was probably the OP's problem. And when the transaction log has grown out of control, it often needs to be shrunk back, but care should be taken to prevent future situations where the log grows out of control. This question on dba.se explains that. Basically - don't let it get that large in the first place, through a proper recovery model, transaction log maintenance, transaction management, etc.
But the bigger question in my mind when reading this question about shrinking the data file (or even the log file) is: why? And what bad things happen when you try? It appears as though shrink operations were done. Now in this case it makes sense, in a sense - because MSDE/Express editions are capped at a maximum DB size. But the right answer may be to look at the right edition for your needs. And if you stumble upon this question looking to shrink your production database and that isn't the reason, you should ask yourself the "why?" question.
I don't want someone searching the web for "how to shrink a database" coming across this and thinking it is a cool or acceptable thing to do.
Shrinking Data Files is a special task that should be reserved for special occasions. Consider that when you shrink a database, you are effectively fragmenting your indexes. Consider that when you shrink a database you are taking away the free space that a database may someday grow right back into - effectively wasting your time and incurring the performance hit of a shrink operation only to see the DB grow again.
I wrote about this concept in several blog posts about shrinking databases. The one called "Don't touch that shrink button" comes to mind first. I talk about the concepts outlined here - but also the concept of "right-sizing" your database. It is far better to decide what your database size needs to be, plan for future growth, and allocate that amount. With Instant File Initialization available in SQL Server 2005 and beyond for data files, the cost of growth is lower - but I still prefer a proper initial allocation - and I'm far less scared of white space in a database than I am of shrinking in general with no thought first. :)
DBCC SHRINKDATABASE works for me, but this is its full syntax:
DBCC SHRINKDATABASE ( database_name, [target_percent], [truncate] )
where target_percent is the desired percentage of free space left in the database file after the database has been shrunk.
And the truncate parameter can be:
NOTRUNCATE
Causes the freed file space to be retained in the database files. If not specified, the freed file space is released to the operating system.
TRUNCATEONLY
Causes any unused space in the data files to be released to the operating system and shrinks the file to the last allocated extent, reducing the file size without moving any data. No attempt is made to relocate rows to unallocated pages. target_percent is ignored when TRUNCATEONLY is used.
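For illustration, the two forms might look like this (MyDB is a placeholder):

DBCC SHRINKDATABASE (MyDB, 10, NOTRUNCATE)   -- compact pages, aim for 10% free space, keep the space in the files
DBCC SHRINKDATABASE (MyDB, TRUNCATEONLY)     -- release only the trailing free space, no page movement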
...and yes, no_one is right: shrinking the database is not very good practice, because, for example:
a shrink on the data files is an excellent way to introduce significant logical fragmentation, because it moves pages from the end of the allocated range of a database file to somewhere at the front of the file...
shrinking a database can have a lot of consequences for the database and the server... think a lot about it before you do it!
On the web there are a lot of blogs and articles about it.
A late answer, but it might be useful for someone else.
If neither DBCC ShrinkDatabase/ShrinkFile nor SSMS (Tasks/Shrink/Database) helps, there are tools from Quest and ApexSQL that can get the job done, and can even schedule periodic shrinking if you need it.
I used the latter one on a free trial to do this some time ago, following the short description at the end of this article:
https://solutioncenter.apexsql.com/sql-server-database-shrink-how-and-when-to-schedule-and-perform-shrinking-of-database-files/
All you need to do is install ApexSQL Backup, click "Shrink database" button in the main ribbon, select database in the window that will pop-up, and click "Finish".
You will also need to shrink the individual data files.
It is, however, not a good idea to shrink databases. For example, see here
You should use:
dbcc shrinkdatabase (MyDB)
It will shrink the log file (keep a Windows Explorer window open and watch it happen).
Here's another solution: Use the Database Publishing Wizard to export your schema, security and data to sql scripts. You can then take your current DB offline and re-create it with the scripts.
Sounds kind of foolish, but there are a couple of advantages. First, there's no chance of losing data. Your original db (as long as you don't delete its files when dropping it!) is safe, the new DB will be roughly as small as it can be, and you'll have two different snapshots of your current database - one ready to roll, one minified - to choose from for backups.
"Therefore it's reasonable to assume much space should now be retrievable."
Apologies if I misunderstood the question, but are you sure it's the database and not the log files that are using up the space? Check to see what recovery model the database is in. Chances are it's in Full, which means the log file is never truncated. If you don't need a complete record of every transaction, you should be able to change to Simple, which will truncate the logs. You can shrink the database during the process. Assuming things go right, the process looks like:
Backup the database!
Change to Simple Recovery
Shrink db (right-click db, choose all tasks > shrink db -> set to 10% free space)
Verify that the space has been reclaimed, if not you might have to do a full backup
If that doesn't work (or you get a message saying "log file is full" when you try to switch recovery modes), try this:
Backup
Kill all connections to the db
Detach db (right-click > Detach or right-click > All Tasks > Detach)
Delete the log (ldf) file
Reattach the db
Change the recovery mode
etc.
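For reference, the detach/reattach route above can be scripted roughly like this (the name and path are hypothetical; SQL Server builds a new minimal log file on attach):

EXEC sp_detach_db 'MyDB'
-- delete or rename MyDB_log.ldf in the file system at this point
EXEC sp_attach_single_file_db @dbname = 'MyDB', @physname = 'C:\Data\MyDB.mdf'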
I came across this post even though I needed to SHRINKFILE on MSSQL 2012, which is a little trickier than in the 2000 or 2005 versions. After reading up on all the risks and issues involved, I ended up testing. Long story short, the best results I got were from using MS SQL Server Management Studio.
Right-Click the DB -> TASKS -> SHRINK -> FILES -> select the LOG file
You also have to modify the minimum size of the data and log files. DBCC SHRINKDATABASE will shrink the data inside the files you already have allocated. To shrink a file to a size smaller than its minimum size, use DBCC SHRINKFILE and specify the new size.
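For example, something like this (the logical file name is hypothetical; yours are listed in sys.database_files):

DBCC SHRINKFILE (MyDatabase_Data, 500)   -- target size in MB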
Delete data, make sure the recovery model is simple, then shrink (either Shrink Database or Shrink Files works). If the data file is still too big, AND you use heaps to store data - that is, no clustered index on large tables - then you might have this problem regarding deleting data from heaps: http://support.microsoft.com/kb/913399
I recently did this. I was trying to make a compact version of my database for testing on the road, but I just couldn't get it to shrink, no matter how many rows I deleted. Eventually, after trying many other commands in this thread, I found that my clustered indexes were not getting rebuilt after deleting rows. Rebuilding my indexes made it so I could shrink properly.
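A sketch of such a rebuild (the table name is hypothetical; DBCC DBREINDEX is the SQL Server 2000/MSDE-era command, superseded by ALTER INDEX ... REBUILD in later versions):

DBCC DBREINDEX ('dbo.MyLargeTable')   -- rebuilds all indexes on the table, clustered index included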
Not sure how practical this would be, and it depends on the size of the database, the number of tables and other complexities, but I:
defrag the physical drive
create a new database according to my requirements: space, percentage growth, etc.
use the simple SSMS task to import all tables from the old db to the new db
script out the indexes for all tables on the old database, and then recreate the indexes on the new database. Expand as needed for foreign keys etc.
rename the databases as needed, confirm success, delete the old one
I think you can remove all your log by switching from full to simple recovery. Right-click your database, select Properties, then Options, and change:
Recovery model to Simple
Containment type to None
When you've set the recovery model to Simple (and enabled auto-shrink), it is still possible that SQL Server cannot shrink the log. It has to do with checkpoints in the log (or the lack thereof).
So first run
DBCC CHECKDB
on your database. After that the shrink operation should work like a charm.
Usually I use the Tasks > Shrink > Files menu and choose the log file with the option to reorganize pages.
