The .ldf grew to 50GB and is eating all the disk… (the database is in the SIMPLE recovery model)
I wanted to be able to roughly answer these questions:
- What's inside the .ldf? (Can I say it is just temp tables?!)
- Which command or user caused this 50GB to fill up?
- What are the potential issues if I force the file to shrink to 10GB?
I do not want this information to blame anyone, but to educate ourselves on the usage.
Check for uncommitted transactions by running the script below:
-- Sessions with currently open (uncommitted) transactions
SELECT er.session_id,
       er.open_transaction_count
FROM sys.dm_exec_requests AS er
WHERE er.open_transaction_count > 0;
You can use sp_who2 to track down the user.
If you urgently need to free up space in the log file without performing a shrink, take a log backup.
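For example, something like this (a minimal sketch; the database name and backup path are placeholders, and a log backup only applies when the database uses the FULL or BULK_LOGGED recovery model, since in SIMPLE recovery the log is truncated automatically at checkpoints):
-- Hypothetical database name and backup path
BACKUP LOG [MyDatabase]
TO DISK = N'D:\Backups\MyDatabase_log.trn';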
In SQL Server, the transaction log file needed to be shrunk, so DBCC SHRINKFILE was executed (we forgot to note down the file size before execution). How can we now check whether the shrink succeeded, especially since we don't know the initial file size before the shrink was done?
For clarity: the shrink is not currently running (this is not about checking the progress of an ongoing shrink).
Also, is there a way to get historical stats on shrink events?
TIA.
You can do that with Extended Events by tracking the database_file_size_change event.
Use a session like this:
CREATE EVENT SESSION ES_TRACK_DB_FILE_CHANGE
ON SERVER
ADD EVENT sqlserver.database_file_size_change
(ACTION(sqlserver.client_app_name,
sqlserver.client_hostname,
sqlserver.database_name,
sqlserver.nt_username,
sqlserver.server_principal_name,
sqlserver.session_nt_username,
sqlserver.sql_text,
sqlserver.username))
ADD TARGET package0.event_file
(SET filename=N'C:\XE_EVENTS\TRACK_DB_FILE_CHANGE.xel')
WITH (MAX_MEMORY=2048 KB,
EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,
MAX_DISPATCH_LATENCY=60 SECONDS,
STARTUP_STATE=ON)
GO
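Once the session exists you still need to start it and, later, read the captured events back from the file target. A minimal sketch (the session name and .xel path match the example above):
-- Start the session
ALTER EVENT SESSION ES_TRACK_DB_FILE_CHANGE ON SERVER STATE = START;
GO
-- Read the captured events back from the file target
SELECT CAST(event_data AS xml) AS event_data
FROM sys.fn_xe_file_target_read_file(N'C:\XE_EVENTS\TRACK_DB_FILE_CHANGE*.xel', NULL, NULL, NULL);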
For (recent) historical information, you can consult the default trace (background in this answer):
DECLARE @path nvarchar(260);

SELECT @path = REVERSE(SUBSTRING(REVERSE([path]),
    CHARINDEX(CHAR(92), REVERSE([path])), 260)) + N'log.trc'
FROM sys.traces
WHERE is_default = 1;

SELECT TextData, [Database] = DB_NAME(DatabaseID), LoginName
FROM sys.fn_trace_gettable(@path, DEFAULT)
WHERE EventClass = 116 -- Audit DBCC Event
  AND UPPER(CONVERT(nvarchar(max), TextData)) LIKE N'%SHRINK%';
  -- could be SHRINKDATABASE
However, this just tells you when they happened and who did it. It does not include how much shrinking happened (if any) or even if the file grew (which, yes, is possible via SHRINKFILE). It doesn't even allow you to calculate duration, which might help you infer how much shrinking happened, because the DBCC events are captured when they start.
To capture this information on an ongoing basis, I would say don't just run DBCC SHRINKFILE(), but wrap that in a script that polls for the file sizes before and after. You could use an Extended Events session as described in another answer, but that session doesn't look like it captures size.
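A rough sketch of that idea, assuming the log file_id is 2 and a target of 256 MB (adjust both for your database):
-- Capture the size before and after the shrink
DECLARE @size_before bigint, @size_after bigint;

SELECT @size_before = size * 8 / 1024   -- size is in 8 KB pages; convert to MB
FROM sys.database_files
WHERE file_id = 2;

DBCC SHRINKFILE(2, 256);

SELECT @size_after = size * 8 / 1024
FROM sys.database_files
WHERE file_id = 2;

SELECT @size_before AS size_before_mb, @size_after AS size_after_mb;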
I have a VirtualBox VM with an Oracle database, and I had 5 GB of space left. I tried to import a dmp file of around 2 GB, and it failed after the disk became full. So I tried to drop the schema by using "DROP USER ABC";
The user was dropped but the space was not recovered.
Please let me know how I can recover this space.
Thank you.
Did you use the "CASCADE" option? If the user dropped without that option, then it didn't own any database objects, so dropping it would not have recovered any space. There are other ways you could have lost the space, too, besides the data itself: archived transaction logs, space for indexes (which are not stored in the dmp files), and the growth of the TEMP tablespace come immediately to mind.
Use the DBA_SEGMENTS view to establish which objects are actually taking up space in your database, which users own them, and which tablespaces they're located in:
https://docs.oracle.com/en/database/oracle/oracle-database/19/refrn/DBA_SEGMENTS.html
http://dba-oracle.com/t_dba_segments.htm
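For example, a minimal sketch that sums space per owner and tablespace (run as a user with access to DBA_SEGMENTS):
-- Space used per owner and tablespace, largest first
SELECT owner,
       tablespace_name,
       ROUND(SUM(bytes) / 1024 / 1024) AS size_mb
FROM dba_segments
GROUP BY owner, tablespace_name
ORDER BY size_mb DESC;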
Also check your archive log location and the automatic diagnostic repository (ADR) for log and trace file growth, and see if you can reduce the size of your TEMP tablespace (if it seems to have grown).
Apologies if this question has already been asked.
I'm not very experienced with SQL Server.
On our SQL Server there is a log file of over 1 TB.
The database is in full recovery.
I took an initial full backup and set up a regular transaction log backup job to stop the log file from growing too much.
So my question is: can I truncate my log file after taking a log backup?
If there was an abnormal event like a long-running transaction or a huge data import, you can restore the previous size with the code below:
DBCC SHRINKFILE(2,TRUNCATEONLY);
ALTER DATABASE [StackOverflow] MODIFY FILE (NAME = N'StackOverflow_Log', SIZE = 256MB);
The first argument to SHRINKFILE (the 2 above) is the file_id, which you can look up with:
SELECT *
FROM sys.database_files;
Also, sometimes having a huge log file is normal. It basically depends on the activity on your database, so 256 MB might be too much or too little. It is better to set a size that is large enough to handle your normal workload without growing.
You should also check how often you are backing up the log file: every 10 minutes or every hour, for example.
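A quick way to check that is msdb's backup history; a sketch (the database name is the one from the example above, so replace it with yours):
SELECT TOP (20)
       database_name,
       backup_start_date,
       backup_finish_date
FROM msdb.dbo.backupset
WHERE database_name = N'StackOverflow'   -- replace with your database
  AND type = 'L'                         -- 'L' = transaction log backup
ORDER BY backup_start_date DESC;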
Why is my database log file taking up so much space? It's taking up almost 30 GB of my HDD. Even after deleting 1,000,000 records, it's not freeing up any space.
So,
1. Why is the log file taking this much space (30 GB)?
2. How can I free up the space?
1. Why is the log file taking this much space (30 GB)?
Either because your recovery model is not SIMPLE and the ldf eventually grew to that size,
or because there was a large one-time DML operation,
or because of other reasons, as noted by @sepupic in another answer.
2. How can I free up the space?
If the recovery model is anything other than SIMPLE:
First back up the transaction log file,
then perform a shrink, like DBCC SHRINKFILE(2, 256) (see the sketch below).
If the recovery model is SIMPLE:
Just shrink it to the desired size, like DBCC SHRINKFILE(2, 256).
If the log file still does not shrink to the target size, check the exact reason by using @sepupic's code snippet.
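A minimal sketch of that backup-then-shrink sequence for a FULL recovery database (the database name, backup path, file_id, and target size are all assumptions):
BACKUP LOG [myDB] TO DISK = N'D:\Backups\myDB_log.trn';  -- placeholder name and path
DBCC SHRINKFILE(2, 256);   -- 2 = log file_id, 256 = target size in MB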
Some members still advise physically removing LDF files.
I highly suggest not doing this. A notable related post by Aaron Bertrand:
Some things you don't want to do:
Detach the database, delete the log file, and re-attach. I can't
emphasize how dangerous this can be. Your database may not come back
up, it may come up as suspect, you may have to revert to a backup (if
you have one), etc. etc.
1. Why is the log file taking this much space (30 GB)?
It was because Autogrowth / Maxsize was set to 200,000 MB.
2. How can I free up the space?
As described here, I used the following commands and the file is now less than 200 MB:
ALTER DATABASE myDatabaseName
SET RECOVERY SIMPLE
GO
DBCC SHRINKFILE (myDatabaseName_log, 1)
GO
ALTER DATABASE myDatabaseName
SET RECOVERY FULL
I have also set Autogrowth / Maxsize in the database properties to 1000 MB, Limited.
The link describes more, so I recommend referring to it for a detailed description and other options.
Thanks @hadi for the link.
Why is my database log file taking up so much space?
There can be more causes, not only the two mentioned in another answer.
You can find the exact reason using this query:
select log_reuse_wait_desc
from sys.databases
where name = 'myDB';
Here is a link to the BOL article that describes all the possible causes under log_reuse_wait:
sys.databases (Transact-SQL)
How can I free up the space?
First you should determine the cause using the query above, then you should fix it. For example, if it's broken replication, you should remove it or repair it.
You need a maintenance job to back up the transaction log, and it should run often: every 10 minutes or so.
A FULL backup once per day isn't good enough.
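A rough sketch of such a job using SQL Server Agent (the job name, database, path, and schedule are all assumptions, and a real job would normally write to timestamped backup files):
USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'LogBackup_MyDatabase';
EXEC dbo.sp_add_jobstep
     @job_name  = N'LogBackup_MyDatabase',
     @step_name = N'Backup log',
     @subsystem = N'TSQL',
     @command   = N'BACKUP LOG [MyDatabase] TO DISK = N''D:\Backups\MyDatabase_log.trn'';';
EXEC dbo.sp_add_jobschedule
     @job_name = N'LogBackup_MyDatabase',
     @name     = N'Every 10 minutes',
     @freq_type = 4,               -- daily
     @freq_interval = 1,
     @freq_subday_type = 4,        -- units of minutes
     @freq_subday_interval = 10;   -- every 10 minutes
EXEC dbo.sp_add_jobserver @job_name = N'LogBackup_MyDatabase';
GO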
Alternatively, you can change the Recovery Model of the database from FULL to SIMPLE. But if you do this, you'll lose the ability to do point-in-time restores.
In my case the DB name contained bad characters, so the script didn't work.
I found and followed this article, which worked perfectly in two steps: changing the recovery model from full to simple, and shrinking the DB log file by more than 95%.
While deleting a large number of records, I get this error:
The transaction log for database 'databasename' is full
I found this answer very helpful; it recommends:
1. Right-click your database in SQL Server Manager, and check the Options page.
2. Switch Recovery Model from Full to Simple.
3. Right-click the database again. Select Tasks > Shrink > Files. Shrink the log file to a proper size (I generally stick to 20-25% of the size of the data files).
4. Switch back to the Full Recovery Model.
5. Take a full database backup straight away.
Question: in step 3, when I go to shrink > files and choose log from the file type dropdown menu, it tells me that 99% of the allocated space is free.
Out of ~4500MB of allocated space, there is ~4400MB free (the data file size is ~3000MB).
Does that mean I'm good to go, and there is no need to shrink?
I don't understand this. Why would that be the case, given the warning I received initially?
I'm not one for hyperbole, but there are literally billions of articles written about SQL Server transaction logs.
Reader's Digest version: if you delete 1,000,000 rows at a time, the log is going to get large because it writes those 1,000,000 deletes in case it has to roll back the transaction. The space needed to hold those records does not get released until the transaction commits. If your log is not big enough to hold 1,000,000 deletes, it will fill up, throw the error you saw, and roll back the whole transaction. Then all that space will most likely get released. Now you have a big log with lots of free space.
You probably hit a limit on your log file at 4.5 GB and it won't get any bigger. To avoid filling your log in the future, chunk your transactions down to smaller amounts, like deleting 1,000 records at a time (a sketch follows below). A shrink operation will reduce the physical size of the file, for example from 4.5 GB down to 1 GB.
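A minimal sketch of the chunked-delete pattern (the table, column, and cutoff date are hypothetical):
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (1000)
    FROM dbo.BigTable               -- hypothetical table
    WHERE CreatedDate < '20150101'; -- hypothetical cutoff
    SET @rows = @@ROWCOUNT;         -- stop when nothing is left to delete
END;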
https://learn.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-shrinkfile-transact-sql?view=sql-server-2017