Reclaim unused space from a SQL Server table

In SQL Server, in one of our databases, we have a big database table that's using over 1.2 TB of space. It has about 200 GB of actual data but over 1 TB of unused space.
This happened over 2 years as old time series data was deleted from this table daily and new data was inserted daily.
We do not expect the table size to increase much going forward.
I am looking for the best way to reclaim unused space from this table without taking the database or table offline, and without causing too much CPU overhead.

I think for this you'll need to use DBCC SHRINKFILE, possibly in several incremental steps. Note that SHRINKFILE takes the logical name of a database file, not a table name; you can look up the logical names in sys.database_files.
First see if TRUNCATEONLY has an acceptable effect - it depends on how the data is distributed within the file:
DBCC SHRINKFILE (N'LogicalFileName', TRUNCATEONLY)
If the file does not shrink sufficiently, you can specify a target size to shrink to, in MB, e.g.
DBCC SHRINKFILE (N'LogicalFileName', 256000)
You can monitor the impact on performance while this executes and stop it if need be, resuming again as appropriate.
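The incremental approach can be sketched as a series of small shrink steps, each of which finishes quickly and can be stopped between runs. This is only a sketch; the logical file name and the step sizes are placeholders you'd adjust for your database:

```sql
-- Find the logical file names and current sizes for this database.
SELECT name, size / 128 AS size_mb
FROM sys.database_files;

-- Shrink in steps of roughly 10 GB so each run is short and the
-- process can be paused between steps. 'MyDataFile' is a placeholder
-- for the logical file name found above.
DBCC SHRINKFILE (N'MyDataFile', 1150000);  -- target size in MB
DBCC SHRINKFILE (N'MyDataFile', 1140000);
-- ...continue stepping down toward the final target size
```

Small steps also limit how much work is lost if you have to cancel a step because of performance impact.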

Related

Fully delete data from Clickhouse DB to save disk space

In order to free up disk space, I've gotten rid of a number of old tables in a clickhouse database via
DROP TABLE mydb.mytable
However, disk usage did not change at all. In particular I expected /var/lib/clickhouse/data/store to shrink.
What am I missing here? Is there a "postgresql.vacuum"-equivalent in clickhouse I should be doing?
Atomic databases keep dropped tables for 8 minutes before actually deleting the data.
You can use DROP TABLE mydb.mytable SYNC (or NO DELAY in older versions) to remove the data immediately.
https://kb.altinity.com/engines/altinity-kb-atomic-database-engine

How to get the space used by SQL Server back?

I created a table on SQL Server and inserted 135 million records into it.
Then I truncated it
and tried to re-insert the same 135 million records.
But something went wrong and I had to restart the computer.
The database came back up in recovery mode,
which was then fixed.
The problem now is that the C drive has only 10 GB free (210 GB used), while before all this I had 105 GB free!
I checked the folders, but the sum of all their sizes, including hidden ones, does not add up to 210 GB.
What happened, and where have these GBs gone?
The space is not automatically released by the database. In the case of tempdb, restarting the SQL Server service reinitializes tempdb to its original size; that is not the case for other databases.
If you want to reclaim the space, there are two approaches:
Cleaner approach:
As suggested by Paul Randal, create a new filegroup for the existing tables and then drop the old filegroup. Refer to his article:
Create a new filegroup
Move all affected tables and indexes into the new filegroup using the CREATE INDEX … WITH (DROP_EXISTING = ON) ON syntax, to move the
tables and remove fragmentation from them at the same time
Drop the old filegroup that you were going to shrink anyway (or shrink it way down if it's the primary filegroup)
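The steps above might be sketched as follows. All names (database, filegroup, file, table, index, path) are placeholders, and ONLINE = ON assumes an edition that supports online index operations:

```sql
-- 1. Create a new filegroup with its own data file.
ALTER DATABASE MyDb ADD FILEGROUP FG_New;
ALTER DATABASE MyDb ADD FILE
    (NAME = N'MyDb_New', FILENAME = N'D:\Data\MyDb_New.ndf', SIZE = 250GB)
TO FILEGROUP FG_New;

-- 2. Rebuild the clustered index onto the new filegroup; this moves
--    the table's data and removes fragmentation in one operation.
CREATE CLUSTERED INDEX IX_MyTable
ON dbo.MyTable (LogTime)
WITH (DROP_EXISTING = ON, ONLINE = ON)
ON FG_New;

-- 3. Once every table and index has been moved, drop the old
--    filegroup's file and then the filegroup itself.
ALTER DATABASE MyDb REMOVE FILE MyDb_Old;
ALTER DATABASE MyDb REMOVE FILEGROUP FG_Old;
```

Nonclustered indexes need their own CREATE INDEX ... WITH (DROP_EXISTING = ON) ... ON FG_New statements; only once nothing references the old filegroup can its file be removed.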
Brute force approach and remediation
Here, you can use DBCC SHRINKFILE to reduce the file size. Leave a little extra free space in the database to avoid frequent autogrowth. Be aware that shrinking a file leads to index fragmentation, and you will have to rebuild or reorganize the indexes after the shrink.
As Paul Randal recommends:
If you absolutely have no choice and have to run a data file shrink
operation, be aware that you’re going to cause index fragmentation and
you might need to take steps to remove it afterwards if it’s going to
cause performance problems. The only way to remove index fragmentation
without causing data file growth again is to use DBCC INDEXDEFRAG or
ALTER INDEX … REORGANIZE. These commands only require a single 8KB
page of extra space, instead of needing to build a whole new index in
the case of an index rebuild operation (which will likely cause the
file to grow).
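Following that advice, the brute-force path plus its remediation might look like this (the logical file name, target size, and table name are placeholders):

```sql
-- Shrink the data file to a target size in MB.
DBCC SHRINKFILE (N'MyDataFile', 256000);

-- Remove the fragmentation the shrink caused. Unlike a rebuild,
-- REORGANIZE needs only a single extra 8 KB page of working space,
-- so it will not grow the file again.
ALTER INDEX ALL ON dbo.MyTable REORGANIZE;
```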

PostgreSQL failed imports still claim hard disk space? Need to clear cache?

I have a PostgreSQL (10.0 on OS X) database with a single table for the moment. I have noticed something weird when I'm importing a csv file in that table.
When the import fails for various reasons (e.g. one extra row in the csv file or too many characters in a column for a given row), no rows are being added to the table but PostgreSQL still claims that space on my hard disk.
Now, I have a very big csv to import, and it failed several times because the csv was not compliant to begin with - so I had tons of failed imports that I fixed and tried again. What I've realized now is that my computer's storage has shrunk by 30-50 GB or so because of this, and my database is still empty.
Is that normal?
I suspect this is somewhere in my database cache. Is there a way for me to clear that cache or do I have to fully reinstall my database?
Thanks!
Inserting rows into the database will increase the table size.
Even if the COPY statement fails, the rows that have been inserted so far remain in the table, but they are dead rows since the transaction that inserted them failed.
In PostgreSQL, the SQL statement VACUUM will free that space. That typically does not shrink the table, but it makes the space available for future inserts.
Normally, this is done automatically in the background by the autovacuum daemon.
There are several possibilities:
You disabled autovacuum.
Autovacuum is not cleaning up the table fast enough, so the next load cannot reuse the space yet.
What can you do:
Run VACUUM (VERBOSE) on the table to remove the dead rows manually.
If you want to reduce the table size, run VACUUM (FULL) on the table. That will lock the table for the duration of the operation.
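To see which case applies, you can check the dead-row statistics before deciding which VACUUM to run (the table name is a placeholder):

```sql
-- How many dead rows are waiting to be cleaned up, and when did
-- autovacuum last touch the table?
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'mytable';

-- Make the dead-row space reusable (does not shrink the file):
VACUUM (VERBOSE) mytable;

-- Return the space to the operating system (locks the table
-- exclusively for the duration):
VACUUM (FULL) mytable;
```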

DB2 - Reclaiming disk space used by dropped tables

I have an application that logs to a DB2 database. Each log is stored in a daily table, meaning that I have several tables, one per each day.
Since the application is running for quite some time, I dropped some of the older daily tables, but the disk space was not reclaimed.
I understand this is normal in DB2, so I googled and found out that the following command can be used to reclaim space:
db2 alter tablespace <table space> reduce max
Since the tablespace that store the daily log tables is called USERSPACE1, I executed the following command successfully:
db2 alter tablespace userspace1 reduce max
Unfortunately the disk space used by DB2 instance is still the same...
I've read somewhere that the REORG command can be executed, but what I've seen it is used to reorganize tables. Since I dropped the tables, how can I use REORG?
Is there any other way to do this?
Thanks
Reducing the size of a tablespace is complex. The extents (sets of contiguous pages; the unit of tablespace allocation) belonging to a given table are not laid out sequentially. When you reorg a table, the rows are reorganized into pages, and the new pages are normally written at the end of the tablespace. Sometimes the high-water mark increases, and your tablespace actually gets bigger.
You need to reorg all tables in a tablespace in order to "defrag" them. Then you may have to perform a second reorg to make use of the space freed earlier, because it should now be empty space within the tablespace.
However, many factors affect how tables are organized in a tablespace: new extents get created (new rows, row overflows due to updates), and compression may kick in after a reorg.
What you can do is assign only a few tables (or just one) per tablespace; however, you will waste a lot of space (overhead, empty pages, etc.)
The command that you are using is an automatic way to do that, but it does not always work as desired: http://www-01.ibm.com/support/knowledgecenter/SSEPGG_10.5.0/com.ibm.db2.luw.admin.dbobj.doc/doc/c0055392.html
If you want to see the distribution of the tables in your tablespace, you can use db2dart. Then, you can have an idea of which table to reorg (move).
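Putting that together, a reorg-then-reduce pass from the DB2 command line might look like the following. The table and tablespace names are placeholders, and ALTER TABLESPACE ... REDUCE MAX assumes a tablespace with reclaimable storage:

```
# Reorganize each remaining table in the tablespace to consolidate
# its extents toward the beginning of the tablespace.
db2 "REORG TABLE mydb.logtable_20240101"

# Then lower the high-water mark and release the freed extents
# back to the filesystem.
db2 "ALTER TABLESPACE userspace1 REDUCE MAX"
```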
Sorry guys,
The command that I mentioned on the original post works after all, but the space was retrieved very slowly.
Thanks for the help

Ghost Records and MS Sql Express size limit

Consider an MS SQL Express database that has grown to almost 10 GB, containing a big table with a clustered index.
Now we want to free some space by deleting rows from that table. After the delete, the db size remains the same as it was, and the deleted rows become ghost records.
The question is: what will happen to those ghost records when the db size reaches the 10 GB limit? Will they be cleaned up or not?
They will not be automatically cleaned. The best option is to use
ALTER TABLE <table_name> REBUILD
to get rid of all artefacts.
Execute the following command and check the Ghost_Record_Count and Version_Ghost_Record_Count columns. If these are high (several million in some cases), then you've most probably got a ghost record cleanup issue:
SELECT * FROM sys.dm_db_index_physical_stats(DB_ID(N'<dbname>'), <ObjectID>, NULL, NULL, 'DETAILED')
The following command will force cleanup of the ghost records:
EXEC sp_clean_db_free_space @dbname = N'<dbname>'
You can use sp_spaceused to see the amount of free space in your db
Use DBCC SHRINKDATABASE or DBCC SHRINKFILE to shrink data and log files for a specific database
