Count number of rollbacks in SQL Server 2008 - sql-server

I am experimenting with the SET XACT_ABORT ON option, as I am seeing a lot of sleeping sessions holding an open transaction and causing problems in the application.
Is there a way to measure whether flipping the option has an effect or not? I am thinking about something like the number of rollbacks per day. How could I collect these?
I am on SQL Server 2008 SP4.

Assuming your database is in the full recovery model, you can combine all the transaction log backups taken per day and query them...
SELECT CASE
           WHEN [Operation] = 'LOP_COMMIT_XACT' THEN 'COMMITS'
           ELSE 'ROLLBACKS'
       END AS Operations,
       COUNT(*) AS Cnt
FROM fn_dblog(NULL, NULL)
WHERE [Operation] IN ('LOP_COMMIT_XACT', 'LOP_ABORT_XACT')
GROUP BY [Operation]
I did some tests using some sample data:
Begin tran test1
insert into t1
select 3
rollback
Output:
Operations  cnt
COMMITS     4
ROLLBACKS   3
Update as per comments:
Reading the transaction log this way can be expensive, so I recommend doing it on log backups that have already been taken, not on the active log. Running it against the active transaction log has the effects below (see the fn_dump_dblog sketch after this list for the backup-file variant):
1. Since this does a log scan, transaction log truncation will be prevented.
2. Huge IO load on the server, depending on your log backup size, since the output of ::fn_dblog can easily run into millions of rows.
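If you go the backup-file route, the undocumented and unsupported fn_dump_dblog function can query a log backup directly. A sketch only: the file path is a placeholder, and the 63 trailing DEFAULTs stand in for the remaining stripes of a (potentially striped) backup media set.
SELECT CASE
           WHEN [Operation] = 'LOP_COMMIT_XACT' THEN 'COMMITS'
           ELSE 'ROLLBACKS'
       END AS Operations,
       COUNT(*) AS Cnt
FROM fn_dump_dblog(
         NULL, NULL, N'DISK', 1, N'X:\Backups\MyDb_log.trn',  -- placeholder path
         DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
         DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
         DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
         DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
         DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
         DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
         DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT)
WHERE [Operation] IN ('LOP_COMMIT_XACT', 'LOP_ABORT_XACT')
GROUP BY [Operation]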
References:
http://rusanu.com/2014/03/10/how-to-read-and-interpret-the-sql-server-log/
https://blogs.msdn.microsoft.com/dfurman/2009/11/05/reading-database-transaction-log-with-fn_dump_dblog/

Related

SQL Server :: FULL backup not recorded in msdb.dbo.backupset and log file unusually large

I have a server that is running SQL Server 2019 but the databases are still on Compatibility level 110 (so that means SQL Server 2012 basically).
We take a FULL backup every night, and I indeed see the backup files in the right folder every day. But if I query msdb and sort by backup_finish_date desc to check when the last FULL backup was taken, I see that the date is months back:
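(The exact query isn't shown above; a typical last-full-backup check against msdb.dbo.backupset looks something like this - a reconstruction, not the poster's original:)
SELECT database_name,
       MAX(backup_finish_date) AS last_full_backup
FROM msdb.dbo.backupset
WHERE type = 'D'  -- 'D' = full database backup
GROUP BY database_name
ORDER BY last_full_backup DESC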
So I found this guide that says it might be a bug in SQL Server and asks to run this check:
USE msdb
GO
SELECT server_name, database_name, backup_start_date, is_snapshot, database_backup_lsn
FROM backupset
"...In the result, notice the database_backup_lsn column and the is_snapshot column. An entry that represents an actual database backup operation has the following characteristics:
The value of the database_backup_lsn column is not 0.
The value of the is_snapshot column is 0."
All good for me: the database_backup_lsn column is not 0 and the is_snapshot column is 0.
Then the guide says to run this query to verify the integrity of the backup:
WITH backupInfo AS (
    SELECT database_name AS [DatabaseName],
           name AS [BackupName],
           is_damaged AS [BackupStatus],
           backup_start_date AS [backupDate],
           ROW_NUMBER() OVER (PARTITION BY database_name
                              ORDER BY backup_start_date DESC) AS BackupIDForDB
    FROM msdb..backupset
)
SELECT DatabaseName
FROM backupInfo
WHERE BackupIDForDB = 1 AND BackupStatus = 1
The result is nothing!
And the guide says: "...If this query returns any results, it means that you do not have good database backups after the reported date."
So now I'm scared that our backups are broken. We take the backups WITH CHECKSUM, but we haven't run DBCC CHECKDB in ages, so we may be (successfully, and with CHECKSUM) taking backups of corrupted databases. Let's run:
DBCC CHECKDB('msdb') WITH NO_INFOMSGS, ALL_ERRORMSGS
And the result is nothing, so it seems all good.
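Note that this only validates msdb itself. To rule out corruption in the database we are actually backing up, the same command has to run against it as well (YourUserDb is a placeholder name):
DBCC CHECKDB('YourUserDb') WITH NO_INFOMSGS, ALL_ERRORMSGS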
And at the same time, the size of the log file (155 GB) appears unusually large compared to the data file size of 514 GB.
EDIT:
I take a FULL backup every night and a log backup every hour.
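To see how much of that 155 GB log file is actually in use, as opposed to merely allocated, there is a quick standard check:
-- Reports log size and percent of the log in use, one row per database
DBCC SQLPERF(LOGSPACE)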
EDIT 2:
Brent Ozar suggests running SELECT name, log_reuse_wait_desc FROM sys.databases; and as a result I see NOTHING nearly everywhere:
...And the solution was:
We use Always On availability groups and the backups were taken on the failover environment (the replica server).
I see a lot of questions similar to mine on the web and they don't really have an answer. I believe I'm not the only one who has faced this problem.
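For anyone hitting the same thing with availability groups: each replica can be asked whether it is the preferred backup replica for a given database, which tells you where the backups (and their msdb history) are expected to land. A sketch, with a placeholder database name:
-- Returns 1 if the current replica is the preferred backup replica for this database
SELECT sys.fn_hadr_backup_is_preferred_replica('YourUserDb') AS is_preferred_backup_replica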

Detect the cause of SQL Server update lock

Problem:
A .NET application during business transaction executes a query like
UPDATE Order
SET Description = 'some new description'
WHERE OrderId = @p1 AND RowVersion = @p2
This query hangs until timeout (several minutes) and then I get an exception:
SqlException: Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
It reproduces when the database is under heavy load (several times per day).
I need to detect the cause of the lock of the query.
What I've tried:
Exploring Activity Monitor - it shows that the query is hanging on a lock. Filtering by head blocker does not give much; it changes frequently.
Running a SQL script that returns data similar to Activity Monitor's - almost the same result as looking at Activity Monitor. Chasing blocking_session_id leads to some session that is awaiting a command or executing some SQL with no apparent relation to the Order table. Executing the same script a second later gives a different session. I also tried some other queries/stored procedures from this article, with no result.
Building the standard SQL Server report for locked/problem transactions - this fails with errors like "Max recursion exhausted" or a local OutOfMemory exception (I have 16 GB of RAM).
Database Details
Version: SQL Server 2016
Approximate number of concurrent queries per second by apps to database: 400
Database size: 1.5 TB
Transaction isolation level: ReadUncommitted for read-only transactions, Serializable for transactions with modifications
I'm absolutely new to this kind of problem, so I have missed a lot for sure.
Any help or direction would be great!
In case anyone is interested, I found this particular query especially useful:
SELECT tl.resource_type
,OBJECT_NAME(p.object_id) AS object_name
,tl.request_status
,tl.request_mode
,tl.request_session_id
,tl.resource_description
,(select text from sys.dm_exec_sql_text(r.sql_handle)) AS query_text
FROM sys.dm_tran_locks tl
INNER JOIN sys.dm_exec_requests r ON tl.request_session_id=r.session_id
LEFT JOIN sys.partitions p ON p.hobt_id = tl.resource_associated_entity_id
WHERE tl.resource_database_id = DB_ID()
AND OBJECT_NAME(p.object_id) = '<YourTableName>'
ORDER BY tl.request_session_id
It shows the sessions that have acquired locks on <YourTableName> and the query each one is currently executing.
Try the sys.dm_exec_requests view, filtering on the blocking_session_id and wait_time columns; for example:
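A minimal sketch of that approach:
-- Requests that are currently blocked: who waits, on whom, and for how long
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,  -- milliseconds
       t.text AS current_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0
ORDER BY r.wait_time DESC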

SQL Server drop table not working

I have a table with almost 45 million rows. I was updating one of its columns with the query:
update tableName set columnX = Right(columnX, 10)
I didn't open a transaction or commit; I just ran the query directly. After about an hour of execution, a power failure unfortunately occurred, and now when I try to run a SELECT against the table it takes too long and returns nothing. Even DROP TABLE doesn't work. I don't know what the problem is.
I don't know what the problem is.
SQL Server is rolling back your update statement. You can monitor the status of the rollback in several ways:
1. KILL <session_id> WITH STATUSONLY - for a session that is already being killed/rolled back, this reports percent complete and estimated time remaining.
2. By using a DMV:
select der.session_id,
       der.command,
       der.status,
       der.percent_complete
from sys.dm_exec_requests as der
where der.command IN ('KILLED/ROLLBACK', 'ROLLBACK')
Don't try to restart SQL Server, as this may prolong the rollback.

SQL Server 2012 - Finding out which transaction logs have been applied to a NORECOVER db

I have a copy of an offsite production database used for reporting which is running on SQL Server 2012. I want to start updating it hourly with transaction logs from that offsite production database.
No big deal, restore a full backup (w/ NORECOVERY) to get things started and apply the transaction logs (w/ NORECOVERY) as they come in.
However, in the event of a problem with the restore (or with getting the log files) I could end up with several transaction log files, some of which have been applied and others that have not. When that happens, how do I figure out which file to start with in my TSQL script?
I tried looking in the restore history table like this:
select distinct
h.destination_database_name,
h.restore_date,
s.server_name,
m.physical_device_name as backup_device,
f.physical_name
from
msdb..restorehistory h
inner join
msdb..backupfile f on h.backup_set_id = f.backup_set_id
inner join
msdb..backupset s on f.backup_set_id = s.backup_set_id
inner join
msdb..backupmediafamily m on s.media_set_id = m.media_set_id
where
h.destination_database_name = 'mydb'
and h.restore_date > (GETDATE() -0.5)
order by
h.restore_date
But checking restorehistory is no good, because the NORECOVERY flag means no records have been added to that table. So is there another way to check this, via T-SQL, that works for a NORECOVERY database?
Assuming this is a rare manual operation, the simplest way is to scan the errorlog.
SQL Server's built-in log shipping (and some third party implementations) have tables, views and user interfaces that make this simpler.
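For the manual route, the undocumented-but-widely-used xp_readerrorlog procedure can filter the error log down to restore messages for one database. A sketch, using the database name from the question:
-- Args: log number (0 = current), log type (1 = SQL Server), two search strings (both must match)
EXEC master.dbo.xp_readerrorlog 0, 1, N'Restore', N'mydb'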

Trigger that only runs once per day

This trigger backs up data from dbo.Node to dbo.NodeArchive. While backups are important, I only need to do this once per day. Note that there is a field called dbo.NodeArchive.versionDate (smalldatetime).
CREATE TRIGGER [dbo].[Node_update]
ON [dbo].[Node]
FOR UPDATE
AS
BEGIN
INSERT INTO dbo.NodeArchive ([NodeID]
,[ParentNodeID]
,[Slug]
,[xmlTitle]
...
,[ModifyBy]
,[ModifyDate]
,[CreateBy]
,[CreateDate])
SELECT [deleted].[NodeID]
,[deleted].[ParentNodeID]
,[deleted].[Slug]
,[deleted].[xmlTitle]
...
,[deleted].[ModifyBy]
,[deleted].[ModifyDate]
,[deleted].[CreateBy]
,[deleted].[CreateDate]
FROM [deleted] LEFT JOIN dbo.Node
ON [deleted].NodeID = dbo.Node.NodeID
WHERE deleted.ModifyDate <> dbo.Node.ModifyDate
END
GO
I am looking to back up changes, but never more than one backup version per day. If there is no change, there is no backup.
That's not a trigger anymore - that'll be a scheduled job. Triggers by their very definition execute whenever a given operation (INSERT, DELETE, UPDATE) happens.
Use the SQL Server Agent facility to schedule that T-SQL code to run once per day.
Read all about SQL Server Agent Jobs in the SQL Server Books Online on MSDN
Update: so if I understand correctly, you want to have an UPDATE trigger - but that trigger would only record the NodeIDs that were affected, into a "these nodes need to be backed up at night" sort of table. Then, at night, a SQL Agent job would run, scan that work table, and for all NodeID values stored in there execute the T-SQL statement to copy their data into the NodeArchive table.
With this approach, if the node with NodeID = 42 changes ten times, you'll still only have a single entry for NodeID = 42 in your work table, and the nightly backup job will copy that node only once into NodeArchive.
With this approach, you can decouple the actual copying (which might take time) from the update itself. The UPDATE trigger only records which NodeID rows need processing; the actual processing then happens sometime later, at an off-peak hour, without disturbing the users of your system. A sketch follows.
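A minimal sketch of that decoupled design - the queue table name is illustrative, not from the original post, and the column lists are abbreviated just like the original trigger's:
-- Work table: one row per node awaiting archival
CREATE TABLE dbo.NodeBackupQueue (NodeID int NOT NULL PRIMARY KEY);
GO
-- Replacement trigger: just remember which nodes changed
CREATE TRIGGER [dbo].[Node_update]
ON [dbo].[Node]
FOR UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.NodeBackupQueue (NodeID)
    SELECT d.NodeID
    FROM deleted AS d
    WHERE NOT EXISTS (SELECT 1 FROM dbo.NodeBackupQueue AS q
                      WHERE q.NodeID = d.NodeID);
END
GO
-- Nightly SQL Agent job step: archive queued nodes once, then clear the queue
INSERT INTO dbo.NodeArchive (NodeID, ParentNodeID, Slug /* ...remaining columns... */)
SELECT n.NodeID, n.ParentNodeID, n.Slug /* ...remaining columns... */
FROM dbo.Node AS n
INNER JOIN dbo.NodeBackupQueue AS q ON q.NodeID = n.NodeID;

DELETE FROM dbo.NodeBackupQueue;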
