Handling unused disk space in a SQL Server DB - sql-server

I have one database whose size is growing very fast. Currently its size is around 60 GB, but after executing the sp_spaceused stored procedure I could verify that more than 40 GB is unused (unused space is different from reserved space, which I understand is kept for table growth). The actual data size is around 10-12 GB, with a few GB in reserved space.
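For reference, the numbers above came from running something along these lines (sp_spaceused with no object argument reports space for the whole database):
EXEC sp_spaceused @updateusage = N'TRUE';
-- first result set: database_size and unallocated space
-- second result set: reserved, data, index_size and unused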
Now, to reclaim that unused space, I tried the shrink operation, but it turned out not to help. Searching further, I also found advice not to shrink the DB, as that causes data fragmentation, resulting in delays during disk operations. Now I am really not sure what other operation I should try to reclaim the space and compact the DB. I understand that, due to the size, queries might be taking longer than expected, and reclaiming this space could help with performance (not sure).
While investigating, I also came across the Generate Scripts feature. It helps export the schema and the data, but I am not sure if it also helps create a script (covering every user, permission and everything else) such that the script would create an as-is replica (deep copy/clone) of the DB - creating the schema and then populating it with data on another DB/server?
Any pointer would be helpful.

If your database is 60 GB, it means it had grown to 60 GB. Even if the data is only 20 GB, you probably have operations that grow the data from time to time (e.g. nightly index maintenance jobs). The recommendation is to leave the database at 60 GB. Do not attempt to reclaim the space: you will only cause harm, and whatever caused the database to grow to 60 GB to start with will likely occur again and trigger database growth.
In fact, you should do the opposite. Try to identify why it grew to 60 GB and extrapolate what will happen when your data reaches 30 GB. Will the database grow to 90 GB? If yes, then you should grow it now to 90 GB. The last thing you want is for growth to occur randomly, and possibly to run out of disk space at a critical moment. In fact, you should check right now whether your server has Instant File Initialization enabled.
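As a quick sketch of both checks (the database and file names below are hypothetical, and the instant_file_initialization_enabled column requires SQL Server 2012 SP4 / 2016 SP1 or later):
-- Check whether Instant File Initialization is enabled for the engine service
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;
-- Grow the data file to the anticipated size in one step
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Data, SIZE = 90GB);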
Now of course, the question is: what would cause 3x data size growth, and how do you identify it? I don't know of any easy method. I would recommend starting by looking at your SQL Agent jobs. Check your maintenance scripts. Look into the application itself: does it have a pattern of growing and then deleting data? Look at past backups (you do have them, right?) and compare.
BTW, I assume due diligence and that you checked it is the data file that has grown to 60 GB. If it is the LOG file that has grown, then it's easy: it means you enabled the full recovery model and forgot to back up the log.
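A quick way to check which file actually grew, run in the context of the database in question:
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;  -- size is in 8 KB pages; ROWS = data file, LOG = log file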

Related

A big database takes up almost all the disk space but has 54% free space inside it. Do I have to shrink it?

I have taken over database management duties for a SQL Server database of more than 440 GB, and the disk is only 500 GB, so there is almost no free space on the disk to run tasks and to have decent room for temporary files. The database has more than 54% free space internally. I know that it is not a good idea to shrink a database or its files, but what other solution exists (not involving new hardware)? Do I have to "shrink" the SQL databases or files now?
I would not be scared to do so, provided you have already cleaned up the drive in question.
Shrinking a database should not be your first option, but it does do the trick if there is no other way to clean up the disk.
I think the warning is mostly there to make sure that no one shrinks a database as part of scheduled maintenance or something like that.
Yes, shrinking a DB is a bad thing and so on, but given the situation you are in, I think you are OK to shrink the database. Also keep an eye out for what is causing the database to bloat like this - maybe a process that creates some tables and drops them later.
Also, once you have shrunk the database you must rebuild the indexes, because shrinking the database fragments the indexes really badly.
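As a sketch of that sequence, with a hypothetical database MyDb and table dbo.MyBigTable (in practice you would rebuild the indexes on every table, typically via a maintenance script):
DBCC SHRINKDATABASE (MyDb, 10);             -- shrink, leaving 10% free space
ALTER INDEX ALL ON dbo.MyBigTable REBUILD;  -- repeat per table to undo the fragmentation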
In most of the articles you read online, they say it is bad practice to shrink databases. Yes, if you are shrinking databases as a scheduled task to free up disk space, that is wrong.
But in the situation you are in, it makes perfect sense to shrink the DB.

SQL Server log file is too big

We are using SQL Server 2008 with the full recovery model. The database size is 10 GB and the log file is 172 GB. We want to clear up the space of the log file internally. We did a transaction log backup, which should clear it up, but it is still 172 GB. What should we do?
Shrink the DB after doing the following tasks:
1) Perform a full backup of your database.
2) Change the recovery model of your database to "Simple".
3) Open a query window, enter "checkpoint" and execute it.
4) Perform another backup of the database.
5) Perform a final full backup of the database.
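In T-SQL, those steps look roughly like this (the database name, logical log file name and backup paths are placeholders; the final switch back to FULL assumes you want to stay on the full recovery model):
BACKUP DATABASE MyDb TO DISK = N'D:\Backup\MyDb_full.bak';   -- full backup first
ALTER DATABASE MyDb SET RECOVERY SIMPLE;                     -- stop retaining log records
USE MyDb;
CHECKPOINT;
DBCC SHRINKFILE (MyDb_log, 1024);                            -- shrink the log to ~1 GB
ALTER DATABASE MyDb SET RECOVERY FULL;
BACKUP DATABASE MyDb TO DISK = N'D:\Backup\MyDb_full2.bak';  -- final full backup restarts the log chain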
To understand this, you need to understand the difference between file shrinkage and truncation.
Shrinking means physically shrinking the size of the file on disk. Truncation means logically freeing up space to be used by new data, but with NO impact on the size of the file on disk.
In layman's terms: if your log file is 100 GB in size and you've never backed it up, 100 GB of that size is USED. If you then back up the log/database, it will be truncated, meaning the 100 GB will be reserved for future use - basically it's freed up for SQL Server, but it still takes up the same space on disk as before. You can actually see this with the following query, for instance.
DBCC SQLPERF(logspace) WITH NO_INFOMSGS
The result will show on a DB basis how large the log file is, and how much of the log file is actually used. If you now shrink that file, it will free up the reserved portion of the file on disk.
This same logic applies to all files in SQL Server (including primary and secondary data files, as well as the log files of tempdb, user databases etc.), even at table level. For instance, if you run this code
EXEC sp_spaceused 'myTableName';
You will see how much of that table is reserved and used for various things, where the "unused" column will show how much is reserved but NOT used.
So, in conclusion, you can NOT free up space on disk without shrinking the file. The real question here is: why exactly are you not allowed to shrink the log? The typical reason for the recommendation not to shrink log files is that the log will naturally grow back to its normal size anyway, so there's no point. But if you're only now adopting the recommended practice of backing up the log, it makes sense to start by shrinking the oversized log first so that, in the future, your maintenance backups will keep it at its natural, comparatively smaller size.
Another EXTREMELY IMPORTANT point about shrinking: shrinking data files is another matter entirely. If you shrink a data file, you will severely fragment all indexes and statistics in the database, making it mandatory to rebuild practically all the indexes in the entire database to avoid catastrophic performance degradation. So do NOT shrink data files, ever, without consulting someone who knows what they're doing and is prepared to deal with the consequences. Log files are a lot safer in this regard, since that fragmentation doesn't apply to them.
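If a data file shrink has already happened, you can measure how bad the fragmentation actually is before rebuilding; a sketch using the sys.dm_db_index_physical_stats DMV (run in the context of the shrunk database):
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;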

Shrinking databases in the case of a database split

Due to a new architecture, I have to split the current database into 2 databases, each of them holding 50% of the initial database (= 15 GB).
1/ Is it a good idea to execute DBCC SHRINKDATABASE (0) for the 2 newly created databases? I'm asking this as I've read many articles stating that shrinking a database leads to fragmentation.
2/ Is it a good approach to set both databases to SIMPLE recovery while doing the separation and then set them back to FULL?
What action would you recommend in this case?
Shrinking is obviously appropriate if there is no chance the space gets reused. That said, the database is on the really tiny side to start with - I have databases here that have multiple files, each one larger than that. So the gain may simply not be worth it.
But it is a valid case for shrinking IF the database will not grow back within a reasonable time and you need the space.
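A sketch of the one-off operation (the database name is hypothetical; the target percentage leaves headroom so the files do not immediately autogrow again):
ALTER DATABASE MyNewDb SET RECOVERY SIMPLE;  -- optional, for the duration of the split
DBCC SHRINKDATABASE (MyNewDb, 15);           -- shrink, leaving 15% free space
ALTER DATABASE MyNewDb SET RECOVERY FULL;    -- switch back when done
-- take a full backup afterwards: switching back to FULL restarts the log backup chain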

SQL Server 2005 TempDB Size

We are using SQL Server 2005. Recently, SQL Server 2005 crashed in our production environment due to a large tempdb size.
1) What could be the reason for the large tempdb size?
2) Is there any way to look at what data is in tempdb?
2) Is there any way to look at what data is in tempdb?
No, because it is not kept there. Tempdb gets very special treatment, like being dropped on every server restart.
1) What could be the reason for the large tempdb size?
Inefficient SQL, maintenance jobs, or just the data at hand. Obviously an 800 GB or 6000 GB database may require more tempdb space than a 4 GB online CRM attempt. You don't really specify ANY size in absolute terms. What IS large? I have tempdb databases hardcoded at 64 GB on my smaller servers.
Typical SQL that goes into tempdb includes:
Sorts that are not solvable as part of the query (you need to store keys SOMEWHERE)
DISTINCT - it needs all the returned data in tempdb to find duplicates.
Certain operations, possibly during joins.
Explicit tempdb usage (temporary tables). I just mention them because I often keep some hundred megabytes worth of data in them during loads and scrubbing.
In general, you can find those queries by their huge IO stats in the query log, or simply by them being slow.
That said, maintenance plans also go in there, but with reason. At the end of the day, your "large" is possibly my "not even worth mentioning, tiny". It really depends on what you do. Use the query trace tool to find out what takes long.
Physically, tempdb gets very special treatment - SQL Server does NOT write to the file if it does not have to (i.e. it keeps things in memory). Writes to disk are a sign of memory overflowing. This is different from normal DB write behavior. Tempdb, IF it overflows, is best put onto a decently fast SSD... which won't normally be SO expensive, because it will still be relatively small.
Use a query against the tempdb space usage DMVs (sketch below) to find the other queries hitting tempdb - basically you are fishing in dirty water here, and need to try things out until you find the culprit.
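As a sketch, the session-level space usage DMV shows which sessions are allocating tempdb pages (counts are 8 KB pages, converted to KB here):
SELECT session_id,
       user_objects_alloc_page_count * 8     AS user_objects_kb,
       internal_objects_alloc_page_count * 8 AS internal_objects_kb
FROM tempdb.sys.dm_db_session_space_usage
ORDER BY internal_objects_alloc_page_count DESC;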
The usual way for a SQL Server database to grow - any database, not just tempdb - is to have its data and log files set to autogrow (especially the log files). SQL Server is perfectly happy to grow the log and data files until they consume all the disk space available to them.
Best practice, IMHO, is to allow limited autogrowth on the data files (put an upper bound on how big they can grow) and fix the size of the log files. You might need to do some analysis to figure out how big the log files need to be. For tempdb, especially, the recovery model should be set to simple, too.
OK, tempdb is a kind of special database. Any temporary objects you use in procedures etc. are created here. So if your application uses a lot of temp tables in queries, they will all reside here, but they should clean themselves up after the connection (spid) is reset.
The other thing that can grow tempdb is database maintenance tasks; however, they will take a larger toll on the database log files.
Tempdb is also cleared every time you restart the SQL Service; it basically drops the database and re-creates it. I agree with @Nic about leaving tempdb as it is - don't muck around with it. Any issue with space in tempdb usually indicates another, larger problem somewhere else. More space will mask the problem, but only for so long. How much free space does the drive you have tempdb on have?
One other thing: if you have not already, try to put tempdb on its own drive, and, if one more drive is available, put the data and log files on their own separate drives too.
So, if you don't restart your SQL Server/Service, your drive will run out of space pretty soon.
USE tempdb;
SELECT (size * 8) AS FileSizeKB FROM sys.database_files;

How does a large transaction log affect performance?

I have seen that our database has the full recovery model and has a 3GB transaction log.
As the log gets larger, how will this affect the performance of the database and the performance of the applications that access the database?
The recommended best practice is to assign a SQL Server transaction log file its very own disk or LUN.
This is to avoid fragmentation of the transaction log file on disk, as other posters have mentioned, and to also avoid/minimise disk contention.
The ideal scenario is to have your DBA allocate sufficient log space for your database environment ahead of time, i.e. to allocate, say, x GB in one go. On a dedicated disk this creates a contiguous allocation, thereby avoiding fragmentation.
If you need to grow your transaction log, you should again endeavour to do so in sizeable chunks in order to allocate contiguously.
You should also look NOT to shrink your transaction log file, as repeated shrinking and autogrowth can lead to fragmentation of the log file on disk.
I find it best to think of the autogrowth database property as a failsafe: your DBA should proactively monitor transaction log space (perhaps by setting up alerts) so that they can increase the transaction log file size accordingly to support your database usage requirements, while the autogrowth property remains in place to ensure that your database can continue to operate normally should unexpected growth occur.
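As a sketch, fixing the log size while keeping a bounded autogrowth as the failsafe might look like this (the database name, logical file name and sizes are hypothetical):
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_log, SIZE = 8GB, MAXSIZE = 16GB, FILEGROWTH = 512MB);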
A larger transaction log file is not in itself detrimental to performance, as SQL Server writes to the log sequentially; provided you are managing your overall log size and the allocation of additional space appropriately, you should not be concerned.
In a couple of ways.
If your system is configured to auto-grow the transaction log, then as the file gets bigger your SQL Server will need to do more work and you will potentially be slowed down. When you finally do run out of space, you're out of luck and your database will stop accepting new transactions.
You need to get together with your DBA (or maybe you are the DBA?) and perform frequent, periodic log backups. Save them off your server onto another, dedicated backup system. As you back up the log, the space in your existing log file will be reclaimed, preventing the log from growing much bigger. Backing up the transaction log will also allow you to restore your database to a specific point in time after your last full or differential backup, which significantly cuts your data losses in the event of a server failure.
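A minimal example of such a log backup (the path and database name are placeholders; in practice this runs as a frequent scheduled job):
BACKUP LOG MyDb TO DISK = N'E:\LogBackups\MyDb_log.trn';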
If the log gets fragmented or needs to grow, it will slow down the application.
If you don't clear the transaction log periodically by performing backups, the log will get full and consume the entire available disk space.
If the log file grows in small steps, you will end up with a lot of virtual log files, and these virtual log files will slow down database startup, restore, and backup operations.
Here is an article that shows how to fix this:
http://www.sqlserveroptimizer.com/2013/02/how-to-speed-up-sql-server-log-file-by-reducing-number-of-virtual-log-file/
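To see how many virtual log files you currently have, a quick sketch (sys.dm_db_log_info requires SQL Server 2016 SP2 or later; on older versions DBCC LOGINFO gives the same information, one row per VLF):
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID('MyDb'));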
To increase performance, it's good to separate the SQL Server log file from the SQL Server data file in order to optimize I/O efficiency.
Writes to the data file are random, but SQL Server writes to the transaction log sequentially.
With sequential I/O, SQL Server can read/write data without repositioning the disk head.
