Very large tempdb file - sql-server

I have a Spring project. When I start the server, a file named 'tempdb' is created in the SQL Server directory, and it grows very large (up to 8 GB).
I'd like to know why this file is created. Is there a way to reduce its size?
Thanks in advance

Run this:
-- use the proper DB
USE tempdb;
GO
-- sync
CHECKPOINT;
GO
-- drop buffers
DBCC DROPCLEANBUFFERS;
DBCC FREEPROCCACHE;
DBCC FREESYSTEMCACHE('ALL');
DBCC FREESESSIONCACHE;
GO
--shrink db (file_name, size_in_MB)
DBCC SHRINKFILE (TEMPDEV, 1024);
GO
Alternatively, you can restart the SQL Server service; tempdb is dropped and recreated at its configured initial size on every restart.

The tempdb system database is a global resource that is available to all users that are connected to an instance of SQL Server. The tempdb database is used to store the following objects: user objects, internal objects, and version stores. It is also used to store Worktables that hold intermediate results that are created during query processing and sorting.
You can use "ALTER DATABASE tempdb" command to limit/shrink the size of this.
The 8 GB size is suspicious, though; on many instances tempdb stays in the hundreds of MB. You might want to check for expensive queries that run when your server starts up.
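As a sketch of that ALTER DATABASE approach (tempdev and templog are the default logical file names and the MB values are arbitrary; check your own instance first), you can cap how far tempdb is allowed to grow:

```sql
-- Check the current logical file names and sizes (size is in 8 KB pages).
SELECT name, size * 8 / 1024 AS size_mb, max_size
FROM tempdb.sys.database_files;
GO
-- Cap growth so tempdb cannot balloon to 8 GB again.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, MAXSIZE = 2048MB, FILEGROWTH = 64MB);
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, MAXSIZE = 512MB, FILEGROWTH = 64MB);
GO
```

Be aware that once the cap is reached, queries that need more tempdb space will fail, so the real fix is still finding what consumes the space.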

Related

Shrinking transactional Log file not working

I restored a production database to a test environment. In production it is configured for transactional replication; the database is around 400 GB, with the log file alone at 120 GB.
I set the database to the SIMPLE recovery model and ran DBCC SHRINKFILE, but the log file stayed the same size (I know shrinking is not an ideal solution, but I want to make it small). There are no long-running transactions and no blocking.
Here is what I followed:
* Backup database
ALTER DATABASE DatabaseName SET RECOVERY SIMPLE
GO
DBCC SHRINKFILE (DatabaseName_log, 5)
GO
ALTER DATABASE DatabaseName SET RECOVERY FULL
GO
I checked the log_reuse_wait_desc column in sys.databases and it shows "REPLICATION", which may be the reason the log file won't shrink. The problem is that there is no replication (publisher or subscriber) on the database or server.
select name, log_reuse_wait_desc from sys.databases
Do I need to set up replication and then remove it?
This is an issue I have encountered before. Sometimes it happens because replication was configured and then removed while the replication processes were still running; sometimes the cause isn't known (DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS seems to be a frequent culprit). The best way I ever discovered to fix the issue is with the information in this MSDN article.
Basically, you set up replication with SNAPSHOT replication, then remove the replication.
This will clear the Replication hold on the log, and allow you to shrink it.
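A quicker route that often clears the same flag is sp_removedbreplication, which strips all replication objects and markers from a database. A sketch (DatabaseName and DatabaseName_log are placeholders; try it on a test copy first):

```sql
-- Remove all replication configuration and markers from the database.
EXEC sp_removedbreplication 'DatabaseName';
GO
-- Verify the log reuse wait is no longer REPLICATION.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'DatabaseName';
GO
-- Now the shrink should succeed.
DBCC SHRINKFILE (DatabaseName_log, 5);
GO
```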

How to decrease size of SQL Server 2014 database

I have a SQL Server 2014 database dump which is approx. 60GB large. In SQL Server Management Studio, it is shown for the original DB, that the "ROWS Data" has an initial size of ~ 99000MB and the "LOG" has an initial size of ~ 25600MB.
Now there are a few tables in the database which are about 10 GB and which I can flush/clean.
After deleting the data in those tables, what is the best way to decrease the physical size of the database? A lot of posts I found deal with SHRINKDATABASE, but some articles recommend against it because of fragmentation and performance problems.
Thanks in advance.
Here is some code I have used to reduce the space used and free up database space and actual drive space. Hope this helps.
USE YourDataBase;
GO
-- Truncate the log by changing the database recovery model to SIMPLE.
ALTER DATABASE YourDataBase
SET RECOVERY SIMPLE;
GO
-- Shrink the truncated log file to 1 MB.
DBCC SHRINKFILE (YourDataBase_log, 1);
GO
-- Reset the database recovery model.
ALTER DATABASE YourDataBase
SET RECOVERY FULL;
GO
Instead of DBCC SHRINKDATABASE, I suggest you use DBCC SHRINKFILE; ideally your tables should be stored on different FILEGROUPs.
Then you can reduce the physical size of your database file by file, dealing with fragmentation afterwards.
Hope this helps.
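As a sketch of that per-file approach (the file and table names are placeholders): list the files, shrink the one you cleaned up, then rebuild the affected indexes, since shrinking fragments them:

```sql
-- List the logical file names and current sizes (size is in 8 KB pages).
SELECT name, size * 8 / 1024 AS size_mb
FROM sys.database_files;
GO
-- Shrink a single data file down to a 10240 MB target.
DBCC SHRINKFILE (YourDataFile, 10240);
GO
-- Rebuild indexes on the large tables afterwards to undo the
-- fragmentation the shrink introduced.
ALTER INDEX ALL ON dbo.YourLargeTable REBUILD;
GO
```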
You can shrink your database using the query below (replace [DBNAME], and replace PTechJew_LOG with your own database's logical log file name):
ALTER DATABASE [DBNAME] SET RECOVERY SIMPLE WITH NO_WAIT
DBCC SHRINKFILE(PTechJew_LOG, 1)
ALTER DATABASE [DBNAME] SET RECOVERY FULL WITH NO_WAIT
After running this query, check your log file size.
It worked for me.

In SQL Server, is there another way to clear tempdb and its log other than restarting the service?

As we create and drop temporary tables and insert data into them, tempdb and its log grow without limit. They reach hundreds of GB and fill the hard disk.
This can exhaust the space on the database server, and the application may crash.
We currently restart the SQL Express service, which I think is a bad idea: stopping the service takes the site/application down.
So what is the alternative?
You can always try shrinking the database files:
USE [tempdb]
GO
DBCC SHRINKFILE (N'templog' , 0)
GO
DBCC SHRINKFILE (N'tempdev' , 0)
GO
This will release all unused space from tempdb. But SQL Server should reuse the space anyway, so if your files get that big you need to look into your logic, find the places where you create really big temporary tables, and reduce their size and/or lifetime.
Also, don't forget to drop temporary tables once you no longer need them.
You can also try reducing the session lifetime; that guarantees old unused tables get dropped.
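For example (a sketch with a hypothetical table name), dropping a temp table as soon as you are done with it instead of letting it live for the whole session lets tempdb reclaim the space much sooner:

```sql
-- Create a temp table, use it, and drop it immediately so its space
-- can be reused instead of waiting for the session to end.
CREATE TABLE #BigIntermediate (id INT, payload VARCHAR(4000));

INSERT INTO #BigIntermediate (id, payload)
VALUES (1, 'example row');

-- ... use the intermediate results here ...

DROP TABLE #BigIntermediate;
```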

SQL server cleaning up databases, transaction logs?

I have a SQL Server 2005 database from which I'm removing several large tables to an archive database. The original database should shrink considerably.
To make the archive database, I was going to restore a copy of the original and just remove the current tables from that.
Is this the best way to go about it? What should I do with logs/shrinking to make sure the final sizes are as small as possible? The archive database may grow a little, but the original continues its normal growth.
That seems like an ok way to do it. Set the recovery model to simple, then truncate and shrink the log files. This will make it as small as possible.
See here for a good way to do it.
Note: This assumes you don't want or need to recover the archive database back to specific points in time. The reason being that Simple recovery model does not save the transactions in a transaction log. So as your archive database changes "a little" (as you said), it won't save the transactions to a log.
I use this script, and it is very useful in development (note that BACKUP LOG ... WITH TRUNCATE_ONLY works on SQL Server 2005 but was removed in SQL Server 2008 and later):
BACKUP log [CustomerServiceSystem] with truncate_only
go
DBCC SHRINKDATABASE ([CustomerServiceSystem], 10, TRUNCATEONLY)
go
Redesign the db
Try one of these sql commands:
DBCC SHRINKDATABASE
DBCC SHRINKFILE
Or right-click the database in SQL Server Management Studio's Object Explorer and select Tasks > Shrink.

DBCC SHRINKFILE operation increases data usage instead

First of all, I know it's better not to shrink a database. But in our situation we had to shrink the data file to reclaim space.
Environment: SQL Server 2005 x64 SP3 Ent running on Windows Server 2003 Enterprise x64.
The database has a single data file and one log file. Before we ran DBCC SHRINKFILE, the data file was 640 GB, of which 400 GB was free, so the data was about 240 GB. To speed up the shrink process, we defragmented the database first, then shrank the data file.
However, after we shrank the data file using DBCC SHRINKFILE, the reported data usage changed to 490 GB. How could that happen?
I asked around, including Paul Randal. Here's the likely reason:
when indexes are dropped (including the old copies dropped during a rebuild), the pages are not physically removed from the data file right away. They are put on a deferred-drop queue and deallocated later in batches, so for a while the dropped indexes still count as used space.
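One way to observe this (a sketch) is to compare reported vs. unallocated space right after the shrink and again a few minutes later, once the background deferred-drop cleanup has caught up:

```sql
-- Report database size and unallocated space with usage counters
-- refreshed; run again after a few minutes and compare.
EXEC sp_spaceused @updateusage = N'TRUE';
GO
```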