Azure Hyperscale - How to reclaim unused allocated space - sql-server

Our DB has 3 TB of data, but the allocated space is 4.5 TB.
How can we reduce the allocated space?
MS documentation:
"DBCC SHRINKDATABASE or DBCC SHRINKFILE isn't currently supported for Hyperscale databases."

Correct: shrink is not supported in Hyperscale as of now, so you currently cannot reclaim the allocated space directly. The only available option is logical data movement: copy the data into another database and switch over to it.
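A rough sketch of the logical-movement workaround, assuming a target database `TargetDB` and a table `dbo.Orders` (both placeholder names). On a single SQL Server instance a cross-database INSERT...SELECT works; for Azure SQL Database you would typically export/import instead (e.g. BACPAC or bcp):

```sql
-- Copy one table into the new, right-sized database; repeat per table.
-- WITH (TABLOCK) allows minimal logging when the other conditions for it are met.
INSERT INTO TargetDB.dbo.Orders WITH (TABLOCK) (OrderID, CustomerID, Amount)
SELECT OrderID, CustomerID, Amount
FROM SourceDB.dbo.Orders;
```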

Related

A very big size of tempdb

I have a Spring project. When I start the server, a file named 'tempdb' is created in the SQL Server directory, and this file grows very large (it reaches 8 GB).
I'd like to know why this file is created. Is there a way to reduce its size?
Thanks in advance.
Run this:
-- use the proper DB
USE tempdb;
GO
-- sync
CHECKPOINT;
GO
-- drop buffers
DBCC DROPCLEANBUFFERS;
DBCC FREEPROCCACHE;
DBCC FREESYSTEMCACHE('ALL');
DBCC FREESESSIONCACHE;
GO
--shrink db (file_name, size_in_MB)
DBCC SHRINKFILE (TEMPDEV, 1024);
GO
Alternatively, you can restart the SQL Server instance (a service restart); tempdb is dropped and recreated at its initial size on every restart.
The tempdb system database is a global resource that is available to all users that are connected to an instance of SQL Server. The tempdb database is used to store the following objects: user objects, internal objects, and version stores. It is also used to store Worktables that hold intermediate results that are created during query processing and sorting.
You can use the "ALTER DATABASE tempdb" command to limit or shrink its size.
A size of 8 GB is suspicious, though, as tempdb sizes are mostly seen in MBs. You might want to check for expensive queries that run when you start up your server.
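For example, to cap tempdb's primary data file (logical name `tempdev` by default; the size values below are illustrative, not recommendations):

```sql
-- Limit tempdb growth; MAXSIZE takes effect immediately,
-- the new SIZE fully applies after the next service restart.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 512MB, MAXSIZE = 2048MB, FILEGROWTH = 64MB);
```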

One-time database shrinking with SQL Server

How can we decrease the DB files size when removing lots of outdated data?
We have an application with data collected over many years. Now one customer has left the project and their data can be dropped. This customer alone accounts for 75% of the data in the database.
The disk usage is very high, and we are running in a virtualized cloud service where the pricing is VERY high. Changing to another provider is not an option, and buying more disk is unpopular since, in practice, 75% of the data is no longer in use.
In my opinion it would be great if we could get rid of this customer's data, shrink the files, and be safe for years before reaching this disk usage level again.
I have seen many threads warning for performance decrease due to index fragmentation.
So, how can we drop this customer's data (stored in the same tables that other customers use, indexed on customer id among other things) without causing any considerable drawbacks?
Are these steps the way to go, or are there better ways?
Delete this customer's data
DBCC SHRINKDATABASE (database,30)
ALTER DATABASE database SET RECOVERY SIMPLE
DBCC SHRINKFILE (database_Log, 50)
ALTER DATABASE database SET RECOVERY FULL
DBCC INDEXDEFRAG
Only step 1 is required. Deleting data frees space within the database, which can be reused for new data. You should also execute ALTER INDEX...REORGANIZE or ALTER INDEX...REBUILD to compact indexes and distribute free space evenly throughout them.
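Since this customer holds 75% of the data, a single DELETE would create one enormous transaction. A batched loop (table, column, and variable names are placeholders) keeps each transaction small so log space can be reused between batches:

```sql
DECLARE @DepartedCustomerId int = 42;  -- placeholder id

-- Delete in chunks; with frequent log backups (or Simple recovery),
-- log space is reused between batches instead of growing.
WHILE 1 = 1
BEGIN
    DELETE TOP (50000) FROM dbo.Orders
    WHERE CustomerID = @DepartedCustomerId;

    IF @@ROWCOUNT = 0 BREAK;
END
```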
EDIT:
Since your cloud provider charges for disk space used, you'll need to shrink the files to release space back to the OS and avoid incurring charges. Below are some tweaks to the steps you proposed; customize them according to your RPO requirements.
ALTER DATABASE ... SET RECOVERY SIMPLE (or BULK_LOGGED)
Delete this customer's data
DBCC SHRINKFILE (database_Data, 30) --repeat for each file to the desired size
DBCC SHRINKFILE (database_Log, 50)
REORGANIZE (or REBUILD) indexes
ALTER DATABASE ... SET RECOVERY FULL
BACKUP DATABASE ...
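Put together, the revised sequence might look like this (database, file, and table names are placeholders; adjust target sizes to your data):

```sql
ALTER DATABASE YourDB SET RECOVERY SIMPLE;

DELETE FROM dbo.Orders WHERE CustomerID = 42;  -- the departed customer's data

DBCC SHRINKFILE (YourDB_Data, 30000);          -- repeat per data file; target size in MB
DBCC SHRINKFILE (YourDB_Log, 50);

ALTER INDEX ALL ON dbo.Orders REBUILD;         -- or REORGANIZE; repeat per table

ALTER DATABASE YourDB SET RECOVERY FULL;
BACKUP DATABASE YourDB TO DISK = N'E:\Backup\YourDB.bak';  -- restarts the log chain
```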

How to decrease size of SQL Server 2014 database

I have a SQL Server 2014 database dump which is approx. 60GB large. In SQL Server Management Studio, it is shown for the original DB, that the "ROWS Data" has an initial size of ~ 99000MB and the "LOG" has an initial size of ~ 25600MB.
Now there are a few tables in the database that are about 10 GB large and that I can flush/clean.
After deleting the data inside those tables, what is the best way to decrease the physical size of the database? Many posts I found deal with SHRINKDATABASE, but some articles recommend against it because of fragmentation and performance.
Thanks in advance.
Here is code I have used to reduce the space used and free up database space and actual drive space. Hope this helps.
USE YourDataBase;
GO
-- Truncate the log by changing the database recovery model to SIMPLE.
ALTER DATABASE YourDataBase
SET RECOVERY SIMPLE;
GO
-- Shrink the truncated log file to 1 MB.
DBCC SHRINKFILE (YourDataBase_log, 1);
GO
-- Reset the database recovery model.
ALTER DATABASE YourDataBase
SET RECOVERY FULL;
GO
Instead of DBCC SHRINKDATABASE, I suggest you use
DBCC SHRINKFILE; ideally your tables should be stored in different filegroups.
Then you can reduce the physical size of your database while keeping fragmentation under control.
Hope this helps.
You need to shrink your database using the query below.
ALTER DATABASE [DBNAME] SET RECOVERY SIMPLE WITH NO_WAIT
DBCC SHRINKFILE (DBNAME_log, 1)
ALTER DATABASE [DBNAME] SET RECOVERY FULL WITH NO_WAIT
After running this query, check your log file.
It worked for me.

Shrinking database in SQL Server

I have shrunk a large database. I shrank the database log both via SSMS and by query. The database properties show the reduced size, but the drive where it is mounted still shows the previous size. What can I do to release the space to the operating system after shrinking the database?
After you shrink the database and see that the database size is reduced, to release the unused empty space to the file system you can execute the DBCC SHRINKDATABASE command with the TRUNCATEONLY option once more.
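For example (the database name is a placeholder):

```sql
-- TRUNCATEONLY releases trailing free space at the end of each file
-- back to the OS without moving pages, so it causes no fragmentation.
DBCC SHRINKDATABASE (N'YourDataBase', TRUNCATEONLY);
```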

CHECKPOINT vs DBCC SHRINKFILE

I insert and process millions of records in a table. The DB is in Full recovery mode.
When any insert command, GROUP BY, etc. is executed, it fills the ldf file from 1 MB up to 120 GB.
Should I use CHECKPOINT, or ALTER DATABASE ... SET RECOVERY SIMPLE and then SHRINKFILE, etc.?
I think it is because I am inserting data.
CHECKPOINT will not help you reduce the transaction log size under the Full recovery model. I see two options here:
truncate the transaction log (BACKUP LOG [YourDBName] WITH NO_LOG), though this will not work with SQL Server 2008+ as that option is discontinued
switch to the Simple recovery model, which is the recommended way of freeing space in the transaction log (see the 'Transaction Log Truncation' section of the docs)
Both of the above options deal with freeing space. After using either of them you will still have to shrink the log file.
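A sketch of the second option, switching to Simple recovery and shrinking (names and target size are placeholders):

```sql
ALTER DATABASE [YourDBName] SET RECOVERY SIMPLE;
CHECKPOINT;                               -- under Simple recovery, this truncates the log
DBCC SHRINKFILE (YourDBName_log, 1024);   -- target size in MB
ALTER DATABASE [YourDBName] SET RECOVERY FULL;
BACKUP DATABASE [YourDBName] TO DISK = N'E:\Backup\YourDBName.bak';  -- restart the log chain
```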
Backup your transaction log on a very regular basis (e.g. every 5 minutes).
This allows log space to be reused, so the file won't grow as large.
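For example, a scheduled job might run (the backup path is a placeholder):

```sql
-- Frequent log backups let SQL Server reuse log space
-- instead of growing the ldf file.
BACKUP LOG [YourDBName]
TO DISK = N'E:\Backup\YourDBName_log.trn';
```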
