I'm trying to efficiently determine if a log backup will contain any data.
The best I have come up with is the following:
DECLARE @last_lsn numeric(25,0)
SELECT @last_lsn = last_log_backup_lsn
FROM sys.database_recovery_status WHERE database_id = DB_ID()
SELECT TOP 1 [Current LSN] FROM ::fn_dblog(@last_lsn, NULL)
The problem is that when there are no transactions since the last backup, fn_dblog throws error 9003 with severity 20 (!!) and logs it to the ERRORLOG file and the event log. That makes me nervous -- I wish it just returned no records.
FYI, the reason I care is I have hundreds of small databases that can have activity at any time of day, but are typically used 8 hours/day. That means 2/3 of my log backups are empty. Those extra thousands of files can have a measurable impact on the time required for both off-site backup and recovering from a disaster.
I figured out an answer that works for my particular application. If I compare the results of the following two queries, I can determine if any activity has occurred on the database since the last log backup.
SELECT MAX(backup_start_date) FROM msdb..backupset WHERE type = 'L' AND database_name = DB_NAME();
SELECT MAX(last_user_update) FROM sys.dm_db_index_usage_stats WHERE database_id = DB_ID() AND last_user_update IS NOT NULL;
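Put together, the check I run looks roughly like this (the variable names are my own and this is only a sketch of the comparison, not production code):

DECLARE @last_log_backup datetime, @last_update datetime;

SELECT @last_log_backup = MAX(backup_start_date)
FROM msdb..backupset
WHERE type = 'L' AND database_name = DB_NAME();

SELECT @last_update = MAX(last_user_update)
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID() AND last_user_update IS NOT NULL;

-- If anything was updated after the last log backup (or there is no log backup yet),
-- the next log backup will contain data.
IF @last_update IS NOT NULL AND (@last_log_backup IS NULL OR @last_update > @last_log_backup)
    PRINT 'Activity since the last log backup -- take one.';
ELSE
    PRINT 'No tracked activity -- this log backup can be skipped.';

Keep in mind that sys.dm_db_index_usage_stats is cleared when the instance restarts, so this only works for my situation, where an occasional unnecessary backup is acceptable.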
If I run
SELECT [Current LSN] FROM ::fn_dblog(null, NULL)
it seems to return, at the top, a Current LSN that matches the last log backup.
What happens if you change the select from ::fn_dblog to a count(*)? Does that eliminate the error?
If not, maybe select the log records into a temp table (top 100 from ::fn_dblog(null, NULL), ordering by a date, if there is one) and then query that.
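If it helps, here is a rough sketch of the temp-table idea (untested; the column list is just an example):

SELECT TOP (100) [Current LSN], Operation, Context
INTO #recent_log
FROM ::fn_dblog(NULL, NULL);

SELECT COUNT(*) AS LogRecordCount FROM #recent_log;

DROP TABLE #recent_log;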
OK I know this has been asked a lot and I thought I had found the answer but it's not working.
I need to delete a large amount of data from some SQL tables. Copying the data I want to keep and truncating or deleting the old table is not an option.
The database is set to simple logging.
I'm only deleting 3,000 rows.
I've tried it inside a BEGIN/END TRANSACTION and without one, and I have a CHECKPOINT command.
My understanding was that doing this would keep the transaction log from growing, but I'm still getting a 100+ GB transaction log.
I'm looking for a way to delete and not grow the transaction log.
I understand that the log is there to roll things back if needed, but I don't need that; I just want to delete and not have the log fill up.
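For reference, the pattern I've been trying looks roughly like this (table and column names are placeholders; it's just a sketch of the batched delete plus CHECKPOINT idea):

-- Delete in small batches so the log records for each batch can be
-- cleared at the next checkpoint (database is in SIMPLE recovery).
WHILE 1 = 1
BEGIN
    DELETE TOP (500) FROM dbo.BigTable
    WHERE CreatedDate < '2020-01-01';   -- placeholder filter

    IF @@ROWCOUNT = 0 BREAK;

    CHECKPOINT;   -- in SIMPLE recovery this lets the inactive log be reused
END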
Kindly check the VLF count, and also check log_reuse_wait in the database.
SELECT [name] AS 'Database Name',
COUNT(li.database_id) AS 'VLF Count',
SUM(li.vlf_size_mb) AS 'VLF Size (MB)',
SUM(CAST(li.vlf_active AS INT)) AS 'Active VLF',
SUM(li.vlf_active*li.vlf_size_mb) AS 'Active VLF Size (MB)',
COUNT(li.database_id)-SUM(CAST(li.vlf_active AS INT)) AS 'Inactive VLF',
SUM(li.vlf_size_mb)-SUM(li.vlf_active*li.vlf_size_mb) AS 'Inactive VLF Size (MB)'
FROM sys.databases s
CROSS APPLY sys.dm_db_log_info(s.database_id) li
GROUP BY [name]
ORDER BY COUNT(li.database_id) DESC;
Then check why the log cannot be reused in each database; the wait description points you toward how to resolve the issue.
SELECT name, log_reuse_wait_desc FROM sys.databases;
I used the following query to view the database log file.
declare @templTable as table
( DatabaseName nvarchar(50),
  LogSizeMB nvarchar(50),
  LogSpaceUsedPersent nvarchar(50),
  Statusee bit
)
INSERT INTO @templTable
EXEC('DBCC SQLPERF(LOGSPACE)')
SELECT * FROM @templTable ORDER BY convert(float, LogSizeMB) desc
DatabaseName LogSizeMB LogSpaceUsedPersent
===============================================
MainDB 6579.93 65.8095
I also used the following code to view the amount of space used by the main database file.
with CteDbSizes
as
(
select database_id, type, size * 8.0 / 1024 size , f.physical_name
from sys.master_files f
)
select
dbFileSizes.[name] AS DatabaseName,
(select sum(size) from CteDbSizes where type = 1 and CteDbSizes.database_id = dbFileSizes.database_id) LogFileSizeMB,
(select sum(size) from CteDbSizes where type = 0 and CteDbSizes.database_id = dbFileSizes.database_id) DataFileSizeMB
--, (select physical_name from CteDbSizes where type = 0 and CteDbSizes.database_id = dbFileSizes.database_id) as PathPfFile
from sys.databases dbFileSizes ORDER BY DataFileSizeMB DESC
DatabaseName LogFileSizeMB DataFileSizeMB
===============================================
MainDB 6579.937500 7668.250000
But whatever I did, the database log never drops below 6 GB. Do you think there is a reason the database log has not changed in more than a month? Is there a solution to reduce this amount or not? I have also used various methods and queries to reduce the size of the log file and got good results on other databases, like the following:
USE [master]
GO
ALTER DATABASE [MainDB] SET RECOVERY SIMPLE WITH NO_WAIT
GO
USE [MainDB]
GO
DBCC SHRINKDATABASE(N'MainDB')
GO
DBCC SHRINKFILE (N'MainDB_log' , EMPTYFILE)
GO
ALTER DATABASE [MainDB] SET RECOVERY FULL WITH NO_WAIT
GO
But in this particular database, the database log is still not less than 6 GB. Please help. Thanks.
As I mentioned in the main post, after a while the size of our database log file reached about 25 GB and we could no longer even shrink the database files. After some searching, I came to the conclusion that I should back up the log and then shrink the log file. For this purpose I defined a job that takes a log backup roughly every 30 minutes; the size of these backup files usually does not exceed 150 MB. Then, after each log backup, I run the log shrink command once. With this method the size of the log file is greatly reduced, and we now have about 500 MB of log file. Of course, due to the large number of transactions on the database, this job must always stay active; if the job is not active, the log grows again.
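Roughly, the job runs something like the following (the backup path and the shrink target are placeholders for illustration):

-- Back up the log so the inactive portion can be truncated...
BACKUP LOG [MainDB]
TO DISK = N'X:\LogBackups\MainDB_log.trn';   -- placeholder path
GO
-- ...then shrink the log file back down to a small target size.
USE [MainDB];
GO
DBCC SHRINKFILE (N'MainDB_log', 512);        -- target size in MB, example value
GO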
I have a server that is running SQL Server 2019 but the databases are still on Compatibility level 110 (so that means SQL Server 2012 basically).
We take a FULL backup every night, and I indeed see the files backed up in the right folder every day. But then if I run this query, ordering by backup_finish_date desc to check when the last FULL backup was taken, I see that the date is months back:
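The kind of query I mean looks something like this (just an illustration; my exact query isn't pasted here):

SELECT TOP (20) database_name, [type], backup_start_date, backup_finish_date
FROM msdb.dbo.backupset
WHERE [type] = 'D'                  -- 'D' = full database backup
ORDER BY backup_finish_date DESC;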
So I found this guide that says it might be a bug in SQL Server and asks to run this check:
USE msdb
GO
SELECT server_name, database_name, backup_start_date, is_snapshot, database_backup_lsn
FROM backupset
"...In the result, notice the database_backup_lsn column and the is_snapshot column. An entry that represents an actual database backup operation has the following characteristics:
The value of the database_backup_lsn column is not 0.
The value of the is_snapshot column is 0."
All good to me: it looks like the database_backup_lsn column is not 0 and the is_snapshot column is 0.
Then the guide says to run this query to verify the integrity of the backup:
WITH backupInfo AS (
    SELECT database_name AS [DatabaseName],
           name AS [BackupName],
           is_damaged AS [BackupStatus],
           backup_start_date AS [backupDate],
           ROW_NUMBER() OVER (PARTITION BY database_name
                              ORDER BY backup_start_date DESC) AS BackupIDForDB
    FROM msdb..backupset
)
SELECT DatabaseName
FROM backupInfo
WHERE BackupIDForDB = 1 AND BackupStatus = 1
The result is nothing!
And the guide says: "...If this query returns any results, it means that you do not have good database backups after the reported date."
So now I'm scared that our backup is fucked up. We take the backup with CHECKSUM but we haven't run DBCC CHECKDB in ages so we are maybe (successfully and with CHECKSUM) taking backups of corrupted databases. Let's run:
DBCC CHECKDB('msdb') WITH NO_INFOMSGS, ALL_ERRORMSGS
And the result is nothing, so it seems all good.
And at the same time, the size of the log file (155 GB) appears unusually large compared to the data file size of 514 GB.
EDIT:
I take Full backup every night and Log backup every hour
EDIT 2:
Brent Ozar suggests running SELECT name, log_reuse_wait_desc FROM sys.databases; and as a result I have NOTHING nearly everywhere:
...And the solution was:
We use Always On availability groups and the backups were taken on the failover environment (the replica server).
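One way to confirm this kind of setup (an example check on my part, not from the original troubleshooting) is to ask the instance whether it is the preferred backup replica for the database:

SELECT sys.fn_hadr_backup_is_preferred_replica(N'MainDB') AS IsPreferredBackupReplica;   -- database name is a placeholder

If this returns 0 on the server where the backup job runs, backups that honor the availability group's backup preference are being taken on the other replica, which matches what was going on here.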
I see a lot of questions on the web similar to mine, and they don't really have an answer. I believe I'm not the only one who has faced this problem.
I have an application that uses Entity Framework for DB operations. For one table, performing the delete operation takes more than 3 minutes, but other similar tables don't take much time. I have debugged the code and found there is no issue with the code; executing the query directly in SQL Server also takes a long time.
Any troubleshooting steps or root causes for this issue?
My table is as below,
Id (PK, uniqueidentifier, not null)
FirstValue (real, not null)
SecondValue (real, not null)
ThirdValue (real, not null)
LastValue (int, not null)
Config_Id (FK, uniqueidentifier, not null)
Query Execution Plan
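For reference, a hypothetical DDL reconstruction of the table described above (the table name, the constraint name, and the referenced Config table are assumptions):

CREATE TABLE dbo.MeasurementValues              -- hypothetical table name
(
    Id          uniqueidentifier NOT NULL PRIMARY KEY,
    FirstValue  real             NOT NULL,
    SecondValue real             NOT NULL,
    ThirdValue  real             NOT NULL,
    LastValue   int              NOT NULL,
    Config_Id   uniqueidentifier NOT NULL,
    CONSTRAINT FK_MeasurementValues_Config
        FOREIGN KEY (Config_Id) REFERENCES dbo.Config (Id)   -- referenced table assumed
);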
Something isn't adding up here; we're not seeing the full picture...
There are a multitude of things which can slow down deletes (usually):
deleting a lot of records (which we know isn't the case here)
many indexes (which I suspect IS the case here)
deadlocks and blocking (is this a development or production database?)
triggers
cascade delete
transaction log needing to grow
many foreign keys to check (I suspect this might also be happening; see the query sketch just below)
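For example, a quick way to see which foreign keys point at the table and whether any of them cascade (the table name is a placeholder):

SELECT fk.name AS ForeignKeyName,
       OBJECT_NAME(fk.parent_object_id) AS ReferencingTable,
       c.name AS ReferencingColumn,
       fk.delete_referential_action_desc
FROM sys.foreign_keys fk
JOIN sys.foreign_key_columns fkc ON fkc.constraint_object_id = fk.object_id
JOIN sys.columns c ON c.object_id = fkc.parent_object_id
                  AND c.column_id = fkc.parent_column_id
WHERE fk.referenced_object_id = OBJECT_ID(N'dbo.MyTable');   -- replace with the real table name

If a referencing column isn't indexed, every delete has to scan that referencing table to check the constraint, which is a classic cause of slow single-row deletes.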
Can you please give us a screenshot of the "View Dependencies" feature in SSMS? To get this, right click on the table in the object explorer and select View Dependencies.
Also, can you open up a query on the master database, run the following queries and post the results:
SELECT name, value, value_in_use, minimum, maximum, [description], is_dynamic, is_advanced
FROM sys.configurations WITH (NOLOCK)
where name in (
'backup compression default',
'clr enabled',
'cost threshold for parallelism',
'lightweight pooling',
'max degree of parallelism',
'max server memory',
'optimize for ad hoc workloads',
'priority boost',
'remote admin connections'
)
ORDER BY name OPTION (RECOMPILE);
SELECT DB_NAME([database_id]) AS [Database Name],
[file_id], [name], physical_name, [type_desc], state_desc,
is_percent_growth, growth,
CONVERT(bigint, growth/128.0) AS [Growth in MB],
CONVERT(bigint, size/128.0) AS [Total Size in MB]
FROM sys.master_files WITH (NOLOCK)
ORDER BY DB_NAME([database_id]), [file_id] OPTION (RECOMPILE);
I restored two databases using two different .BAK files (different nightly backup files)
I have a row of data that disappeared from the latest restored .BAK.
Need to find out why.
Is there a way for me to read/go through the .TRN data to see what user-action might have caused the issue?
Check out ApexSQL; they provide tools to read the transaction log. It's not freeware, however.
There is also an undocumented feature inside SQL Server. See this post for more details.
DBCC LOG(databasename, typeofoutput)
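A typical call looks something like this (it's undocumented, so treat the second argument as an assumption about the level of detail):

DBCC LOG (N'MainDB', 1);   -- placeholder database name; second argument = typeofoutput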
Paul Randal wrote about using an undocumented function to find out who dropped a table by reading the transaction log; you might be able to use the same concept.
In his post he was looking for a dropped table, so I played with it on my local system and found that for a record deleted from a table you would filter with WHERE [Transaction Name] = 'DELETE'.
So this query:
SELECT [Current LSN], [Begin Time], SPID, [Database Name], [Transaction Begin], [Transaction ID], [Transaction Name], [Transaction SID], Context, Operation
FROM ::fn_dblog (null, null)
WHERE [Transaction Name] = 'DELETE'
GO
Returns this output
Current LSN Begin Time SPID Database Name Transaction Begin Transaction ID Transaction Name Transaction SID Context Operation
00000474:00000239:0001 2012/03/06 10:09:19:547 58 NULL NULL 0001:000a67be DELETE 0x010500000000000515000000628ADB6E31CC6098F269B2B9F8060000 LCX_NULL LOP_BEGIN_XACT