I restored two databases using two different .BAK files (different nightly backup files).
A row of data is missing from the database restored from the more recent .BAK.
I need to find out why.
Is there a way for me to read/go through the .TRN data to see what user action might have caused the issue?
Check out ApexSQL; they provide tools to read the transaction log. It's not freeware, however.
There is also an undocumented feature inside SQL Server. See This Post for more details.
DBCC LOG(databasename, typeofoutput)
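For example (hedged: the command is undocumented, the database name below is a placeholder, and the second parameter is usually described as a detail level from 0 to 4):
DBCC LOG ('YourDatabaseName', 3)   -- 3 is commonly cited as a higher-detail output level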
Paul Randal wrote about using an undocumented function to find out who dropped a table using the transaction log; you might be able to use the same concept.
In his post he was looking for a dropped table, so I played with it on my local system and found that for a record deleted from a table you would filter on WHERE [Transaction Name] = 'DELETE'.
So this query:
SELECT [Current LSN], [Begin Time], SPID, [Database Name], [Transaction Begin], [Transaction ID], [Transaction Name], [Transaction SID], Context, Operation
FROM ::fn_dblog (null, null)
WHERE [Transaction Name] = 'DELETE'
GO
Returns this output
Current LSN:        00000474:00000239:0001
Begin Time:         2012/03/06 10:09:19:547
SPID:               58
Database Name:      NULL
Transaction Begin:  NULL
Transaction ID:     0001:000a67be
Transaction Name:   DELETE
Transaction SID:    0x010500000000000515000000628ADB6E31CC6098F269B2B9F8060000
Context:            LCX_NULL
Operation:          LOP_BEGIN_XACT
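To dig further once you have that row, a hedged follow-up (the column names are those exposed by fn_dblog; the Transaction ID and SID values below are simply the ones from the output above):

-- Pull the individual row-delete records belonging to that transaction
SELECT [Current LSN], Operation, Context, AllocUnitName, [Transaction ID]
FROM fn_dblog(NULL, NULL)
WHERE [Transaction ID] = '0001:000a67be'   -- value from the DELETE row above
  AND Operation = 'LOP_DELETE_ROWS'

-- Translate the Transaction SID back to a login, if it still exists on the instance
SELECT SUSER_SNAME(0x010500000000000515000000628ADB6E31CC6098F269B2B9F8060000) AS LoginName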
Related
I have an SSIS package using CHANGE TRACKING that runs every 5 minutes to perform one-way synchronization on a table.
These are the DB's involved:
DestDB
SourceDB
DestDB contains a table called TableSyncVersions that is used to keep track of the most recent Sync version used to extract information from the table in SourceDB. This Sync Version is used for the next execution of the package to get the next batch of data.
SourceDB has Snapshot Isolation enabled and the CT Query is being executed by an "OLE DB Source" in SSIS. The Query is as follows:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
--Using OLE DB parameters to capture the current version within the transaction
SELECT ? = CAST(CHANGE_TRACKING_CURRENT_VERSION() AS NVARCHAR)
SELECT ct.KeyColumn1
, ct.KeyColumn2
, ct.KeyColumn3
, st.Column1
, st.Column2
, st.Column3
, st.Column4
, ct.SYS_CHANGE_OPERATION
FROM TABLE1 AS st
--Using OLE DB Parameters to reference the version # saved in TableSyncVersions
RIGHT OUTER JOIN CHANGETABLE(CHANGES TABLE1, ?) AS ct
ON st.KeyColumn1 = ct.KeyColumn1
AND st.KeyColumn2 = ct.KeyColumn2
AND st.KeyColumn3 = ct.KeyColumn3
COMMIT TRAN;
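For context, here is a minimal sketch of the version bookkeeping described above; the TableSyncVersions column names (TableName, SyncVersion) are assumptions for illustration, not taken from the actual package:

DECLARE @LastVersion bigint, @CurrentVersion bigint

-- Version saved by the previous run (what the second ? parameter above is bound to)
SELECT @LastVersion = SyncVersion
FROM DestDB.dbo.TableSyncVersions
WHERE TableName = 'TABLE1'

-- Version captured inside the snapshot transaction (what the first ? parameter is bound to)
SELECT @CurrentVersion = CHANGE_TRACKING_CURRENT_VERSION()

-- After the destination load succeeds, persist the captured version for the next run
UPDATE DestDB.dbo.TableSyncVersions
SET SyncVersion = @CurrentVersion
WHERE TableName = 'TABLE1'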
Here is a screen shot of the Control Flow for this package:
At least once a day the package misses 5-20 records even though it runs without error, and the records are missed at different times every day. Has anyone experienced anything like this with Change Tracking before?
Any help is greatly appreciated.
Thank you,
Tory Hill
I have an application which uses Entity Framework for DB operations. For one table, the delete operation takes more than 3 minutes, while other similar tables don't take nearly as long. I have debugged the code and found no issue there; executing the query directly in SQL Server also takes a long time.
Any troubleshooting steps/root cause for this issue?
My table is as below,
Id (PK,uniqueidentifier,not null)
FirstValue(real,not null)
SecondValue(real,not null)
ThirdValue(real,not null)
LastValue(int,not null)
Config_Id(FK,uniqueidentifier,not null)
Query Execution Plan
Something isn't adding up here; we're not seeing the full picture...
There are a multitude of things which can slow down deletes (usually):
deleting a lot of records (which we know isn't the case here)
many indexes (which I suspect IS the case here)
deadlocks and blocking (is this a development or production database?)
triggers
cascade delete
transaction log needing to grow
many foreign keys to check (I suspect this might also be happening)
Can you please give us a screenshot of the "View Dependencies" feature in SSMS? To get this, right click on the table in the object explorer and select View Dependencies.
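If a screenshot is inconvenient, a rough T-SQL equivalent is below: it lists the foreign keys that reference the table, since every one of those child tables is checked on each delete, and a referencing column without an index turns that check into a scan. 'dbo.YourTable' is a placeholder:

SELECT fk.name AS ForeignKeyName,
       OBJECT_NAME(fk.parent_object_id) AS ReferencingTable,
       c.name AS ReferencingColumn
FROM sys.foreign_keys AS fk
JOIN sys.foreign_key_columns AS fkc
    ON fkc.constraint_object_id = fk.object_id
JOIN sys.columns AS c
    ON c.object_id = fkc.parent_object_id
   AND c.column_id = fkc.parent_column_id
WHERE fk.referenced_object_id = OBJECT_ID('dbo.YourTable');
-- Check whether each ReferencingColumn is covered by an index; if not, the delete has to scan that child table.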
Also, can you open a query window against the master database, run the following queries, and post the results:
SELECT name, value, value_in_use, minimum, maximum, [description], is_dynamic, is_advanced
FROM sys.configurations WITH (NOLOCK)
where name in (
'backup compression default',
'clr enabled',
'cost threshold for parallelism',
'lightweight pooling',
'max degree of parallelism',
'max server memory',
'optimize for ad hoc workloads',
'priority boost',
'remote admin connections'
)
ORDER BY name OPTION (RECOMPILE);
SELECT DB_NAME([database_id]) AS [Database Name],
[file_id], [name], physical_name, [type_desc], state_desc,
is_percent_growth, growth,
CONVERT(bigint, growth/128.0) AS [Growth in MB],
CONVERT(bigint, size/128.0) AS [Total Size in MB]
FROM sys.master_files WITH (NOLOCK)
ORDER BY DB_NAME([database_id]), [file_id] OPTION (RECOMPILE);
Having accidentally nullified a column in MS SQL 2012, I'm looking at how to use fn_dblog for the first time. I had previously backed up the table and deleted it this morning. The database uses the full recovery model (code below for anyone in the future who would like to check):
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'model';  -- 'model' here is the system database; substitute your own database name
GO
Is it possible to roll back a DROP TABLE transaction that was committed within the past 12 hours?
The query below seems to be exactly what I want, but only the last 3431 rows are returned:
SELECT [Current LSN],
[Operation],
[Transaction ID],
[Parent Transaction ID],
[Begin Time],
[Transaction Name],
[Transaction SID]
FROM fn_dblog(NULL, NULL)
WHERE [Operation] = 'LOP_BEGIN_XACT'
How can I return earlier transactions using this query?
I am in unfamiliar territory here. What else should I be thinking of?
How do I know if logs exist and haven't been truncated?
Is it easier to reinstate a dropped table than a deleted column?
What are the dangers of using fn_dblog? In a blog post (https://raresql.com/2013/04/15/sql-server-undocumented-function-fn_dblog/) I found this:
"No doubt fn_dblog is one of the helpful undocumented functions but do not use this function in the production server unless otherwise required." What is the reason for this?
=== EDIT ===
On a side note, a very helpful introduction to MS SQL logging is here:
http://www.sqlshack.com/reading-sql-server-transaction-log/
I intend to track delete actions done on a SQL Server DB whose recovery model is simple.
Do such actions get logged when the DB is in this mode?
You can achieve your goal in many different ways. You can read delete operations from the SQL Server transaction log, but in the full recovery model you will "lose" them after each transaction log backup; in the simple recovery model you cannot control the transaction log contents at all, since the log is truncated at checkpoints.
To find delete operations for a particular table you can use the following query:
DECLARE @MonitoredTable sysname
SET @MonitoredTable = 'YourTable'
SELECT
u.[name] AS UserName
, l.[Begin Time] AS TransactionStartTime
FROM
fn_dblog(NULL, NULL) l
INNER JOIN
(
SELECT
[Transaction ID]
FROM
fn_dblog(NULL, NULL)
WHERE
AllocUnitName LIKE @MonitoredTable + '%'
AND
Operation = 'LOP_DELETE_ROWS'
) deletes
ON deletes.[Transaction ID] = l.[Transaction ID]
INNER JOIN
sysusers u
ON u.[sid] = l.[Transaction SID]
Another approach is to write an "audit trigger", or you can use SQL Server's auditing features / SQL Server Extended Events directly, as explained in this ApexSQL page:
SQL Server database auditing techniques
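For reference, a bare-bones sketch of the trigger approach; every object name here (dbo.YourTable, dbo.YourTable_DeleteAudit, KeyColumn) is a placeholder you would replace with your own:

CREATE TABLE dbo.YourTable_DeleteAudit
(
    AuditId   int IDENTITY(1,1) PRIMARY KEY,
    DeletedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
    DeletedBy sysname   NOT NULL DEFAULT SUSER_SNAME(),
    KeyColumn int       NOT NULL   -- whatever identifies the deleted row
);
GO
CREATE TRIGGER trg_YourTable_Delete
ON dbo.YourTable
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- The virtual "deleted" table holds the rows removed by the triggering statement
    INSERT INTO dbo.YourTable_DeleteAudit (KeyColumn)
    SELECT d.KeyColumn
    FROM deleted AS d;
END;
GO

This works in the simple recovery model too, since it does not depend on the transaction log surviving.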
I'm trying to efficiently determine if a log backup will contain any data.
The best I have come up with is the following:
DECLARE #last_lsn numeric(25,0)
SELECT #last_lsn = last_log_backup_lsn
FROM sys.database_recovery_status WHERE database_id = DB_ID()
SELECT TOP 1 [Current LSN] FROM ::fn_dblog(#last_lsn, NULL)
The problem is when there are no transactions since the last backup, fn_dblog throws error 9003 with severity 20(!!) and logs it to the ERRORLOG file and event log. That makes me nervous -- I wish it just returned no records.
FYI, the reason I care is I have hundreds of small databases that can have activity at any time of day, but are typically used 8 hours/day. That means 2/3 of my log backups are empty. Those extra thousands of files can have a measurable impact on the time required for both off-site backup and recovering from a disaster.
I figured out an answer that works for my particular application. If I compare the results of the following two queries, I can determine if any activity has occurred on the database since the last log backup.
SELECT MAX(backup_start_date) FROM msdb..backupset WHERE type = 'L' AND database_name = DB_NAME();
SELECT MAX(last_user_update) FROM sys.dm_db_index_usage_stats WHERE database_id = DB_ID() AND last_user_update IS NOT NULL;
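Wrapped up as a single check, something along these lines (the comparison logic is only a sketch, and sys.dm_db_index_usage_stats is cleared on an instance restart, so treat it as a heuristic):

DECLARE @LastLogBackup datetime, @LastUserUpdate datetime;

SELECT @LastLogBackup = MAX(backup_start_date)
FROM msdb..backupset
WHERE type = 'L' AND database_name = DB_NAME();

SELECT @LastUserUpdate = MAX(last_user_update)
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID() AND last_user_update IS NOT NULL;

IF @LastUserUpdate IS NOT NULL
   AND (@LastLogBackup IS NULL OR @LastUserUpdate > @LastLogBackup)
    PRINT 'Changes since the last log backup - take a new one';
ELSE
    PRINT 'No user updates recorded since the last log backup';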
If I run
SELECT [Current LSN] FROM ::fn_dblog(null, NULL)
it seems to return my current LSN at the top, matching the last log backup.
What happens if you change the select from ::fn_dblog to a count(*)? Does that eliminate the error?
If not, maybe select the log records into a temp table (top 100 from ::fn_dblog(null, NULL), ordering by a date, if there is one) and then query that.
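Something along these lines, as a sketch ([Current LSN] is a fixed-width string, so ordering on it descending gives the most recent records; [Begin Time] is only populated on LOP_BEGIN_XACT rows):

SELECT TOP (100) [Current LSN], Operation, [Transaction ID], [Begin Time]
INTO #recent_log
FROM ::fn_dblog(null, NULL)
ORDER BY [Current LSN] DESC;

-- Then filter the temp table however you need (e.g. against the last backup LSN)
-- without calling fn_dblog again.
SELECT * FROM #recent_log;
DROP TABLE #recent_log;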