While going through SQL Server interview questions in Mr. Shiv Prasad Koirala's book, I learned that data can be recovered even after using the TRUNCATE TABLE command.
Please tell me how we can recover data that was deleted using the DELETE command, and how data can be recovered if it was deleted using the TRUNCATE command.
What I know is that when we use the DELETE command, each deleted record is written to the log file, but I don't know how to recover the data from it. And since I have read that TRUNCATE TABLE does not make per-row entries in the log, how can that also be recovered?
If you can give me a good link that shows how to do it practically, step by step, that would be a great help to me.
I have SQL Server 2008.
Thanks
If you use a transaction in your code, TRUNCATE can be rolled back. If no transaction is used and the TRUNCATE operation is committed, the data cannot be retrieved from the log file in the ordinary way: TRUNCATE is a DDL operation and is only minimally logged (it logs the page deallocations, not each deleted row).
Both DELETE and TRUNCATE can be rolled back when surrounded by a transaction, as long as the transaction is still open in the current session. Once the transaction is committed, neither can be rolled back directly; the difference is that a committed DELETE leaves fully logged row data behind, which makes log-based recovery feasible.
USE tempdb
GO
-- Create Test Table
CREATE TABLE TruncateTest (ID INT)
INSERT INTO TruncateTest (ID)
SELECT 1
UNION ALL
SELECT 2
UNION ALL
SELECT 3
GO
-- Check the data before truncate
SELECT * FROM TruncateTest
GO
-- Begin Transaction
BEGIN TRAN
-- Truncate Table
TRUNCATE TABLE TruncateTest
GO
-- Check the data after truncate
SELECT * FROM TruncateTest
GO
-- Rollback Transaction
ROLLBACK TRAN
GO
-- Check the data after Rollback
SELECT * FROM TruncateTest
GO
-- Clean up
DROP TABLE TruncateTest
GO
By default, neither of the two can be reverted once committed, but there are special cases where this is possible.
Truncate: when TRUNCATE is executed, SQL Server doesn't delete the data but only deallocates the pages. This means that if you can still read those pages (using a query or a third-party tool), there is a possibility of recovering the data. However, you need to act fast, before the pages are overwritten.
Delete: if the database is in the FULL recovery model, all transactions are logged in the transaction log. If you can read the transaction log, you can in theory figure out the previous values of all affected rows and then recover the data.
Recovery methods:
- One method is using SQL queries similar to the one posted here for truncate, or using functions like fn_dblog to read the transaction log.
- Another is to use third-party tools such as ApexSQL Log, SQL Log Rescue, ApexSQL Recover, or Quest Toad.
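For the DELETE case, a minimal sketch of the fn_dblog approach is below. fn_dblog is undocumented, so its output schema can change between versions; dbo.Student is a placeholder table name, and the raw row image still has to be decoded by hand against the table's schema:

```sql
-- Find fully logged row deletions in the active portion of the log.
-- LOP_DELETE_ROWS entries carry the pre-delete row image.
SELECT [Current LSN],
       [Transaction ID],
       Operation,
       AllocUnitName,        -- object the deleted row belonged to
       [RowLog Contents 0]   -- raw row bytes; decode against the table schema
FROM sys.fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_DELETE_ROWS'
  AND AllocUnitName LIKE 'dbo.Student%';
```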
SQL Server keeps an entry (page # and file #) for the truncated records, and you can easily find those entries with the query below.
Once you get the page ID and file ID, you can pass them to DBCC PAGE to retrieve the complete record.
SELECT LTRIM(RTRIM(REPLACE([Description], 'Deallocated', ''))) AS [PAGE ID],
       [Slot ID], [AllocUnitId]
FROM sys.fn_dblog(NULL, NULL)
WHERE AllocUnitId IN
      (SELECT [allocation_unit_id]
       FROM sys.allocation_units allocunits
       INNER JOIN sys.partitions partitions
           ON (allocunits.type IN (1, 3) AND partitions.hobt_id = allocunits.container_id)
           OR (allocunits.type = 2 AND partitions.partition_id = allocunits.container_id)
       WHERE object_id = OBJECT_ID('dbo.Student'))
  AND Operation IN ('LOP_MODIFY_ROW')
  AND [Context] IN ('LCX_PFS')
  AND [Description] LIKE '%Deallocated%'
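Once you have a page ID and file ID from that query, a sketch of the DBCC PAGE call looks like this. DBCC PAGE is undocumented; the database name, file ID, and page ID below are placeholders, and trace flag 3604 is needed to route the output to the client:

```sql
DBCC TRACEON (3604);        -- send DBCC output to the client instead of the error log
DBCC PAGE ('YourDatabase',  -- placeholder database name
           1,               -- file ID taken from the query above
           407,             -- page ID (converted from hex) taken from the query above
           3);              -- option 3 dumps the page header plus per-row detail
```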
Given below is a link to an article that explains how to recover truncated records from SQL Server:
http://raresql.com/2012/04/08/how-to-recover-truncated-data-from-sql-server-without-backup/
If your database is in the FULL recovery model, you can recover data whether it was truncated, deleted, or dropped.
A complete step-by-step article is here: https://codingfry.blogspot.com/2018/09/how-to-recover-data-from-truncated.html
I've created a stored procedure to add data to a table. In mock fashion the steps are:
truncate original table
Select data into the original table
The query that selects data into the original table is quite long (it can take almost a minute to complete), which means that the table is then empty of data for over a minute.
To fix this empty table I changed the stored procedure to:
select data into #temp table
truncate Original table
insert * from #temp into Original
While the stored procedure was running, I did a select * on the original table and it was empty (refreshing, it stayed empty until the stored procedure completed).
Does the truncate happen at the beginning of the procedure no matter where it actually is in the code? If so is there something else I can do to control when the data is deleted?
A very interesting method to move data into a table very quickly is to use partition switching.
Create two staging tables, myStaging1 and myStaging2, with the new data in myStaging2. They must be in the same DB and the same filegroup (so not temp tables or table variables), with the EXACT same columns, PKs, FKs and indexes.
Then run this:
SET XACT_ABORT, NOCOUNT ON; -- force immediate rollback if session is killed
BEGIN TRAN;
ALTER TABLE myTargetTable SWITCH TO myStaging1
WITH ( WAIT_AT_LOW_PRIORITY ( MAX_DURATION = 1 MINUTES, ABORT_AFTER_WAIT = BLOCKERS ));
-- not strictly necessary to use WAIT_AT_LOW_PRIORITY but better for blocking
-- use SELF instead of BLOCKERS to kill your own session
ALTER TABLE myStaging2 SWITCH TO myTargetTable
WITH (WAIT_AT_LOW_PRIORITY (MAX_DURATION = 0 MINUTES, ABORT_AFTER_WAIT = BLOCKERS));
-- force blockers off immediately
COMMIT TRAN;
TRUNCATE TABLE myStaging1;
This is extremely fast, as it's just a metadata change.
You will ask: partitions are only supported on Enterprise Edition (or Developer), how does that help?
Switching non-partitioned tables between each other is still allowed even in Standard or Express Editions.
See this article by Kendra Little for further info on this technique.
The sp is being called by code in an HTTP Get, so I didn't want the table to be empty for over a minute during refresh. When I asked the question I was using a select * from the table to test, but just now I tested by hitting the endpoint in postman and I never received an empty response. So it appears that putting the truncate later in the sp did work.
Right now I am having an issue with a stored procedure that is locking up when running.
It's a conversion from Sybase.
The stored procedure originally would do
TRUNCATE TABLE appInfo
Then repopulate the data within the same stored procedure, but in SQL Server this seems to be causing locks to the users.
It's not a high-traffic database.
The change I tried was the to do
BEGIN TRAN
DELETE FROM appInfo
COMMIT TRAN
Then repopulate the data, but the users are getting a NO_DATA_FOUND error on this one.
So if I TRUNCATE they get data, but it causes a lock
If I do a delete there is no data found.
Anyone have any insight into this condition and a solution? I was thinking of taking the truncate out to a separate stored procedure and called from within the parent procedure, but that might just be pushing the issue down the road and not actually solving it.
Thanks in advance
When you TRUNCATE a table, the entire table is locked (from MSDN, https://technet.microsoft.com/en-us/library/ms177570%28v=sql.105%29.aspx: "TRUNCATE TABLE always locks the table and page but not each row"). When you issue a DELETE, it locks a row, deletes it, then locks the next row and deletes it. Your users keep hitting the table while this is happening. I would go with TRUNCATE because it's almost always faster.
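One way to keep readers from ever seeing an empty table is to do the TRUNCATE and the repopulation inside a single transaction; a sketch is below (the column list and the source table are placeholders). The schema-modification lock taken by TRUNCATE then blocks readers until the new data is committed, so they wait briefly instead of getting no rows:

```sql
BEGIN TRAN;
    TRUNCATE TABLE dbo.appInfo;           -- takes a Sch-M lock held to end of transaction
    INSERT INTO dbo.appInfo (Col1, Col2)  -- placeholder column list
    SELECT Col1, Col2
    FROM dbo.appInfoSource;               -- placeholder source of the new data
COMMIT TRAN;                              -- readers unblock and see the repopulated table
```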
The thing is, my stored procedure throws the error: "String or binary data would be truncated."
I traced down the offending code snippet with SQL Profiler to find the line where the error occurs; however, I would need to see the data that is being inserted.
I added code to insert that data into my temp table, thinking I would be able to read its contents from another session (while the first session is still in progress, not committed), but unfortunately the SELECT statement hangs even with the NOLOCK hint, under the READ UNCOMMITTED isolation level.
Generally, I would like to get the data that will be rolled back because of the error.
Is that possible? How would I do it?
Temp tables are session-scoped. If you check sys.tables in tempdb, you will see that your #t table is actually called something like
#t__________________________________________________________________________________________________________________000000000006
so you cannot read it from another session.
What's more, temp tables don't survive a rollback of the transaction. To be able to read the data after the rollback, use a table variable, and after the rollback save its contents into a permanent table you can query.
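A minimal sketch of that pattern (dbo.SourceTable and dbo.DebugCapture are placeholder names): the table variable is unaffected by the ROLLBACK, so the captured rows can be persisted afterwards.

```sql
DECLARE @Captured table (ID int, Payload varchar(100));

BEGIN TRAN;
    -- Capture the suspect data before the failing statement runs
    INSERT INTO @Captured (ID, Payload)
    SELECT ID, Payload FROM dbo.SourceTable;
    -- ... the statement that raises the truncation error goes here ...
ROLLBACK TRAN;

-- @Captured still holds its rows; save them somewhere queryable
SELECT * INTO dbo.DebugCapture FROM @Captured;
```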
I have a LOGGIN database and it is quite big - 400 GB. It has millions of rows.
I just ran a delete statement which took 2.5 hours and deleted probably millions of rows.
delete FROM [DB].[dbo].[table]
where [Level] not in ('info','error')
This database uses the SIMPLE recovery model, but when I ran the statement above, the log file grew to 800 GB and crashed the server. Why does the log file grow in a simple-recovery-model database?
How can I avoid this in future?
Thanks for your time - RM
I bet you tried to run the whole delete in one transaction. Correct?
Even in SIMPLE recovery, the log records of an active transaction cannot be reclaimed until that transaction completes. Because your single delete ran for hours, the log file grew until it crashed the server.
Check out my blog entry on how to delete large data:
http://craftydba.com/?p=3079
The key to the solution is the following: SIMPLE recovery model, DELETE in small batches, and a FULL backup at the end of the purge. Select whichever recovery model you want at the end.
Here is some sample code to help you on your way.
--
-- Delete in batches in SIMPLE recovery mode
--
-- Select correct db
USE [MATH]
GO
-- Set to simple mode
ALTER DATABASE [MATH] SET RECOVERY SIMPLE;
GO
-- Get count of records
SELECT COUNT(*) AS Total FROM [MATH].[dbo].[TBL_PRIMES];
GO
-- Delete in batches
DECLARE @VAR_ROWS INT = 1;
WHILE (@VAR_ROWS > 0)
BEGIN
    DELETE TOP (10000) FROM [MATH].[dbo].[TBL_PRIMES];
    SET @VAR_ROWS = @@ROWCOUNT;
    CHECKPOINT;
END;
GO
-- Set to full mode
ALTER DATABASE [MATH] SET RECOVERY FULL;
GO
Last but not least, if the amount of data remaining after the delete is really small, it might be quicker to do the following:
1 - SELECT * INTO [Temp Table] WHERE (clause = small data).
2 - DROP [Original Table].
3 - Rename [Temp Table] to [Original Table].
4 - Add any constraints or missing objects.
The DROP TABLE action does not log all the data being removed.
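Applied to the logging example above, the pattern might look like the sketch below (table and column names are assumptions based on the question):

```sql
-- 1. Keep only the small remainder
SELECT *
INTO dbo.LogTable_Keep
FROM dbo.LogTable
WHERE [Level] IN ('info', 'error');

-- 2. Drop the original; page deallocation only, so minimal logging
DROP TABLE dbo.LogTable;

-- 3. Rename the remainder into place
EXEC sp_rename 'dbo.LogTable_Keep', 'LogTable';

-- 4. Re-create indexes, constraints, triggers, and permissions here
```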
Sincerely,
John
Consider using the open-source PowerShell module sqlsizer-mssql.
It's available on GitHub and is published under the MIT license:
https://github.com/sqlsizer/sqlsizer-mssql
I think it could help you with your task; it has a "slow delete" feature.
I'm trying to write to a log file inside a transaction so that the log survives even if the transaction is rolled back.
--start code
begin tran
insert [something] into dbo.logtable
[[main code here]]
rollback
commit
-- end code
You could say just do the log before the transaction starts but that is not as easy because the transaction starts before this S-Proc is run (i.e. the code is part of a bigger transaction)
So, in short: is there a way to write a special statement inside a transaction that is not part of the transaction? I hope my question makes sense.
Use a table variable (e.g. @logInfo), not a #temp table, to hold the log info. Table variables survive a transaction rollback.
See this article.
I do this one of two ways, depending on my needs at the time. Both involve using a variable, since variables retain their values following a rollback.
1) Declare a @Log varchar(max) variable and use SET @Log = ISNULL(@Log + '; ', '') + 'Your new log info here'. Keep appending to it as you go through the transaction. I insert the @Log value into the real log table after the commit or the rollback as necessary, usually only when there is an error (in the CATCH block) or when I'm trying to debug a problem.
2) Declare a @LogTable table (RowID int identity(1,1) primary key, RowValue varchar(5000)) variable and insert into it as you progress through your transaction. I like using the OUTPUT clause to insert the actual IDs (and other columns with messages, like 'DELETE item 1234') of rows used in the transaction into this table. I insert this table into the actual log table after the commit or the rollback as necessary.
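A sketch of method 2 using the OUTPUT clause (dbo.Items, ItemID, the WHERE predicate, and dbo.RealLogTable are placeholder names):

```sql
DECLARE @LogTable table (RowID int identity(1,1) primary key,
                         RowValue varchar(5000));

BEGIN TRAN;
    DELETE FROM dbo.Items
    OUTPUT 'DELETE item ' + CAST(deleted.ItemID AS varchar(20))
    INTO @LogTable (RowValue)
    WHERE Expired = 1;                  -- placeholder predicate
    -- ... more work; suppose something here forces a rollback ...
ROLLBACK TRAN;

-- The table variable kept its rows through the rollback
INSERT INTO dbo.RealLogTable (Message)  -- placeholder log table
SELECT RowValue FROM @LogTable;
```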
If the parent transaction rolls back, the logging data will roll back as well; SQL Server does not support true nested transactions. One possibility is to use a CLR stored procedure to do the logging. It can open its own connection to the database outside the transaction and insert and commit the log data.
Log output to a table, use a time delay, and use WITH(NOLOCK) to see it.
It looks like #arvid wanted to debug the operation of the stored procedure, and is able to alter the stored proc.
The c# code starts a transaction, then calls a s-proc, and at the end it commits or rolls back the transaction. I only have easy access to the s-proc
I had a similar situation. So I modified the stored procedure to log my desired output to a table. Then I put a time delay at the end of the stored procedure
WAITFOR DELAY '00:00:12'; -- 12 second delay, adjust as desired
and in another SSMS window, quickly read the table at the READ UNCOMMITTED isolation level (the WITH (NOLOCK) hint below):
SELECT * FROM dbo.NicksLogTable WITH(NOLOCK);
It's not the solution you want if you need a permanent record of the logs (edit: including where transactions get rolled back), but it suits my purpose to be able to debug the code in a temporary fashion, especially when linked servers, xp_cmdshell, and creating file tables are all disabled :-(
Apologies for bumping a 12-year old thread, but Microsoft deserves an equal caning for not implementing nested transactions or autonomous transactions in that time period.
If you want to emulate nested-transaction behaviour, you can use savepoints, created with SAVE TRANSACTION:
begin transaction a
create table #a (i int)
select * from #a
save transaction b
create table #b (i int)
select * from #a
select * from #b
rollback transaction b
select * from #a
rollback transaction a
In SQL Server, if you want a 'sub-transaction', you should use SAVE TRANSACTION xxxx, which works like an Oracle savepoint.