Read another session's uncommitted temp table data - sql-server

The situation is this:
My stored procedure throws the error "String or binary data would be truncated."
I traced the code with SQL Profiler to find the line where it occurs, but I also need to see the data that is being inserted.
I added code to insert that data into a temp table, thinking I would be able to read its contents from another session (while the first session is still in progress, not yet committed), but unfortunately the SELECT statement hangs even with the NOLOCK hint and under the READ UNCOMMITTED isolation level.
Generally, I would like to get at the data that will be rolled back because of the error.
Is that possible? How would I do it?

Temp tables are session scoped. If you check sys.tables in tempdb, you will see that your #t table is actually called something like
#t__________________________________________________________________________________________________________________000000000006
so you cannot read it from another session.
What's more, temp table data does not survive a rollback of the transaction. To be able to read the data after the rollback, use a table variable, and after the rollback save its contents into a permanent table you can query.
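For example, a minimal sketch of that approach (the table and column names here are assumptions, not from the original procedure):

DECLARE @capture table (SomeColumn varchar(100));

BEGIN TRAN;
    -- copy the rows you want to inspect before the failing insert
    INSERT INTO @capture (SomeColumn)
    SELECT SomeColumn FROM dbo.SourceTable;
ROLLBACK TRAN;  -- table variables keep their rows after a rollback

-- persist the captured rows so any session can query them afterwards
INSERT INTO dbo.FailedRows (SomeColumn)
SELECT SomeColumn FROM @capture;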

Related

Inserted records getting committed in DB without committing it manually

I have created a database transaction and I am inserting records into Table1 of an H2 DB, with no commits done yet.
Partway through this process, after inserting half of the records, I execute one CREATE statement (creating Table2).
Table2 is created, and along with it the previous INSERT statements also get committed to the DB.
After this, I insert more records into Table1. If an insert fails, I still see the records in Table1 that were inserted before the CREATE statement for Table2.
Because of this, I see some records in the DB even after the transaction failure, whereas I was expecting ZERO records in the DB.
Why is this happening?
Because CREATE TABLE is a DDL statement, not a DML statement, and DDL statements usually commit any open transaction.
If you want to avoid this, create all the objects you need for the import before you import the first record, as in the sketch below.
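A hedged sketch of that ordering (the table names are assumptions; in H2 the transaction is normally controlled through the JDBC connection or SET AUTOCOMMIT):

CREATE TABLE Table1 (id INT, val VARCHAR(100));
CREATE TABLE Table2 (id INT, val VARCHAR(100));  -- create everything before the import starts

SET AUTOCOMMIT OFF;                              -- manage the transaction manually
INSERT INTO Table1 (id, val) VALUES (1, 'first batch');
INSERT INTO Table1 (id, val) VALUES (2, 'second batch');
-- no DDL in here, so nothing forces an implicit commit mid-import
ROLLBACK;                                        -- or COMMIT once everything has succeeded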
EDIT 2019-03-22
Although this topic is a bit old, I would like to mention one thing that could help. You could create a procedure that uses PRAGMA AUTONOMOUS_TRANSACTION and executes a SQL statement via EXECUTE IMMEDIATE:
PROCEDURE exec_sql_autonomous(p_sql VARCHAR2)
AS
    PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
    EXECUTE IMMEDIATE p_sql;
    COMMIT;
EXCEPTION
    WHEN OTHERS THEN
        ROLLBACK;
        RAISE;
END;
This way you may be able to create a table while the data-inserting transaction is in progress, without that transaction being committed by the table creation.

Truncate Vs Delete and Table Locks SQL Server 2012

Right now I am having an issue with a stored procedure that locks up when running.
It's a conversion from Sybase.
The stored procedure originally would do
TRUNCATE TABLE appInfo
and then repopulate the data within the same stored procedure, but in SQL Server this seems to be causing locks for the users.
It's not a high-traffic database.
The change I tried was to do
BEGIN TRAN
DELETE FROM appInfo
COMMIT TRAN
and then repopulate the data, but with this the users are getting a NO_DATA_FOUND error.
So if I TRUNCATE they get data, but it causes a lock;
if I do a DELETE, there is no data found.
Does anyone have any insight into this condition and a solution? I was thinking of moving the TRUNCATE out to a separate stored procedure called from within the parent procedure, but that might just be pushing the issue down the road rather than actually solving it.
Thanks in advance
When you truncate a table, the entire table is locked (from MSDN https://technet.microsoft.com/en-us/library/ms177570%28v=sql.105%29.aspx: "TRUNCATE TABLE always locks the table and page but not each row."). When you issue a DELETE, it locks a row, deletes it, then locks the next row and deletes it, while your users keep hitting the table as that happens. I would go with TRUNCATE because it is almost always faster.
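If the real requirement is that users never see an empty table, one hedged alternative is to keep the delete and the repopulation inside a single transaction (the staging table name below is an assumption):

BEGIN TRAN;
    DELETE FROM dbo.appInfo;              -- row-level deletes, fully logged
    INSERT INTO dbo.appInfo (Col1, Col2)  -- repopulate inside the same transaction
    SELECT Col1, Col2
    FROM dbo.appInfo_staging;
COMMIT TRAN;
-- readers under the default READ COMMITTED level wait for the commit
-- instead of ever seeing an empty table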

How to recover data from truncated table

While going through the SQL Server interview questions in the book by Mr. Shiv Prashad Koirala, I learned that even after using the TRUNCATE TABLE command the data can be recovered.
Please tell me how we can recover data that was deleted using the DELETE command, and how data can be recovered if it was deleted using the TRUNCATE command.
What I know is that when we use the DELETE command to delete records, an entry is made in the log file, but I don't know how to recover the data from it; and since I have read that TRUNCATE TABLE does not write row entries to the log, how can that be recovered as well?
If you can give me a good link that shows how to do it practically, step by step, that would be a great help.
I have SQL Server 2008.
Thanks
If you use TRANSACTIONS in your code, TRUNCATE can be rolled back. If no transaction is used and the TRUNCATE operation is committed, it cannot be retrieved from the log file: TRUNCATE is a DDL operation and the individual row deletions are not written to the log file.
DELETE and TRUNCATE can both be rolled back when surrounded by a TRANSACTION if the current session is not closed. If the TRUNCATE is written in the Query Editor surrounded by a TRANSACTION and the session is closed, it can no longer be rolled back, but a DELETE can be.
USE tempdb
GO
-- Create Test Table
CREATE TABLE TruncateTest (ID INT)
INSERT INTO TruncateTest (ID)
SELECT 1
UNION ALL
SELECT 2
UNION ALL
SELECT 3
GO
-- Check the data before truncate
SELECT * FROM TruncateTest
GO
-- Begin Transaction
BEGIN TRAN
-- Truncate Table
TRUNCATE TABLE TruncateTest
GO
-- Check the data after truncate
SELECT * FROM TruncateTest
GO
-- Rollback Transaction
ROLLBACK TRAN
GO
-- Check the data after Rollback
SELECT * FROM TruncateTest
GO
-- Clean up
DROP TABLE TruncateTest
GO
By default neither of the two can be reverted, but there are special cases when this is possible.
Truncate: when a truncate is executed, SQL Server does not delete the data but only deallocates the pages. This means that if you can still read those pages (using a query or a third-party tool), there is a possibility of recovering the data. However, you need to act fast, before the pages are overwritten.
Delete: if the database is in the full recovery model, all transactions are logged in the transaction log. If you can read the transaction log, you can in theory figure out what the previous values of all affected rows were and then recover the data.
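A hedged sketch of that log-reading idea for the delete case (assumes the full recovery model and a hypothetical dbo.Student table; decoding the raw row bytes back into column values is what the tools below automate):

SELECT [Transaction ID],
       Operation,
       AllocUnitName,
       [RowLog Contents 0]    -- raw bytes of each deleted row
FROM sys.fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_DELETE_ROWS'
  AND AllocUnitName LIKE 'dbo.Student%';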
Recovery methods:
One method is using SQL queries similar to the one posted here for truncate, or using functions like fn_dblog to read the transaction log.
Another is to use third-party tools such as ApexSQL Log, SQL Log Rescue, ApexSQL Recover or Quest Toad.
SQL Server keeps an entry (page # and file #) for the truncated records, and you can easily find those entries with the query below.
Once you get the page ID and file ID, you can pass them to DBCC PAGE to retrieve the complete record.
SELECT LTRIM(RTRIM(REPLACE([Description], 'Deallocated', ''))) AS [PAGE ID],
       [Slot ID],
       [AllocUnitId]
FROM sys.fn_dblog(NULL, NULL)
WHERE AllocUnitId IN
      (SELECT [Allocation_unit_id]
       FROM sys.allocation_units allocunits
       INNER JOIN sys.partitions partitions
           ON (allocunits.type IN (1, 3) AND partitions.hobt_id = allocunits.container_id)
           OR (allocunits.type = 2 AND partitions.partition_id = allocunits.container_id)
       WHERE object_id = OBJECT_ID('' + 'dbo.Student' + ''))
  AND Operation IN ('LOP_MODIFY_ROW')
  AND [Context] IN ('LCX_PFS')
  AND [Description] LIKE '%Deallocated%'
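A hedged example of feeding one of those results into DBCC PAGE (the database name, file id 1 and page id 153 are placeholders):

DBCC TRACEON (3604);                    -- send DBCC output to the client
DBCC PAGE ('YourDatabase', 1, 153, 3);  -- (database, file id, page id, print option)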
Given below is a link to an article that explains how to recover truncated records from SQL Server:
http://raresql.com/2012/04/08/how-to-recover-truncated-data-from-sql-server-without-backup/
If your database is in the full recovery model, you can recover data whether it was truncated, deleted or dropped.
A complete step-by-step article is here: https://codingfry.blogspot.com/2018/09/how-to-recover-data-from-truncated.html

TSQL logging inside transaction

I'm trying to write to a log file inside a transaction so that the log survives even if the transaction is rolled back.
-- start code
begin tran
    insert [something] into dbo.logtable
    [[main code here]]
rollback -- or commit
-- end code
You could say "just write the log entry before the transaction starts", but that is not as easy as it sounds, because the transaction starts before this stored procedure is run (i.e. the code is part of a bigger transaction).
So, in short: is there a way to write a special statement inside a transaction that is not part of the transaction? I hope my question makes sense.
Use a table variable (@temp) to hold the log info. Table variables survive a transaction rollback.
See this article.
I do this in one of two ways, depending on my needs at the time. Both involve using a variable, since variables retain their value after a rollback.
1) Declare a @Log varchar(max) variable and use SET @Log = ISNULL(@Log + '; ', '') + 'Your new log info here'. Keep appending to it as you go through the transaction. I insert the @Log value into the real log table after the commit or the rollback as necessary, and usually only when there is an error (in the CATCH block) or when I'm trying to debug a problem.
2) Declare a @LogTable table (RowID int identity(1,1) primary key, RowValue varchar(5000)) variable and insert into it as you progress through the transaction. I like using the OUTPUT clause to insert the actual IDs (and other columns with messages, like 'DELETE item 1234') of rows used in the transaction into this table. I insert this table into the actual log table after the commit or the rollback as necessary; see the sketch below.
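A minimal sketch of approach 2 (dbo.Orders and dbo.AuditLog are hypothetical names):

DECLARE @LogTable table (RowID int IDENTITY(1,1) PRIMARY KEY, RowValue varchar(5000));

BEGIN TRAN;
    DELETE dbo.Orders
    OUTPUT 'DELETE item ' + CAST(deleted.OrderID AS varchar(20)) INTO @LogTable (RowValue)
    WHERE OrderStatus = 'Expired';
ROLLBACK TRAN;                      -- the table variable keeps its rows

INSERT INTO dbo.AuditLog (Message)  -- write the surviving log rows afterwards
SELECT RowValue FROM @LogTable;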
If the parent transaction rolls back, the logging data will roll back as well; SQL Server does not support proper nested transactions. One possibility is to use a CLR stored procedure to do the logging: it can open its own connection to the database outside the transaction and insert and commit the log data.
Log output to a table, use a time delay, and use WITH (NOLOCK) to see it.
It looks like @arvid wanted to debug the operation of the stored procedure, and is able to alter it:
"The C# code starts a transaction, then calls a stored procedure, and at the end it commits or rolls back the transaction. I only have easy access to the stored procedure."
I had a similar situation, so I modified the stored procedure to log my desired output to a table, and then put a time delay at the end of the stored procedure:
WAITFOR DELAY '00:00:12'; -- 12 second delay, adjust as desired
and in another SSMS window, quickly read the table at the READ UNCOMMITTED isolation level (the WITH (NOLOCK) hint below):
SELECT * FROM dbo.NicksLogTable WITH(NOLOCK);
It's not the solution you want if you need a permanent record of the logs (edit: including where transactions get rolled back), but it suits my purpose: being able to debug the code in a temporary fashion, especially when linked servers, xp_cmdshell and creating file tables are all disabled :-(
Apologies for bumping a 12-year-old thread, but Microsoft deserves an equal caning for not implementing nested transactions or autonomous transactions in that time period.
If you want to emulate nested-transaction behaviour, you can use named transactions with savepoints:
begin transaction a
    create table #a (i int)
    select * from #a

    save transaction b          -- savepoint, not a real nested transaction
        create table #b (i int)
        select * from #a
        select * from #b
    rollback transaction b      -- rolls back to the savepoint; #b is gone

    select * from #a
rollback transaction a
In SQL Server, if you want a 'sub-transaction' you should use SAVE TRANSACTION xxxx, which works like an Oracle savepoint.

How is a T-SQL transaction not thread-safe?

The following (sanitized) code sometimes produces these errors:
Cannot drop the table 'database.dbo.Table', because it does not exist or you do not have permission.
There is already an object named 'Table' in the database.
begin transaction
if exists (select 1 from database.Sys.Tables where name ='Table')
begin drop table database.dbo.Table end
Select top 3000 *
into database.dbo.Table
from OtherTable
commit
select * from database.dbo.Table
The code can be run multiple times simultaneously. Anyone know why it breaks?
Can I ask why you're doing this in the first place? You should really consider using temporary tables or coming up with another solution.
I'm not positive that DDL statements behave the same way in transactions as DML statements, and I have seen a blog post describing weird behavior when creating stored procedures inside DDL.
Aside from that, you might want to verify your transaction isolation level and set it to SERIALIZABLE.
Edit
Based on a quick test, I ran the same SQL in two different connections; when I created the table but didn't commit the transaction, the second transaction blocked. So it looks like this should work. I would still caution against this type of design.
In what part of the code are you preventing multiple accesses to this resource?
begin transaction
if exists (select 1 from database.Sys.Tables where name ='Table')
begin drop table database.dbo.Table end
Select top 3000 *
into database.dbo.Table
from OtherTable
commit
BEGIN TRANSACTION isn't doing it. It only sets up a commit/rollback scenario for any rows added to tables.
The (if exists, drop) is a race condition, along with the re-creation of the table with (select..into). Multiple sessions dropping into that code all at once will most certainly cause all kinds of errors: some creating tables that others have just destroyed, others dropping tables that don't exist anymore, and others dropping tables that someone is busy inserting into. Ugh!
Consider the temp table suggestions of others, or use an application lock to keep everyone else out of this code while the critical resource is busy, as in the sketch below. Transactions on drop/create are not what you want.
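A hedged sketch of the application-lock idea (the resource name is an assumption):

BEGIN TRAN;

EXEC sp_getapplock @Resource = 'Rebuild_dbo_Table',
                   @LockMode = 'Exclusive',
                   @LockOwner = 'Transaction';  -- released automatically at commit/rollback

IF EXISTS (SELECT 1 FROM database.sys.tables WHERE name = 'Table')
    DROP TABLE database.dbo.Table;

SELECT TOP 3000 *
INTO database.dbo.Table
FROM OtherTable;

COMMIT TRAN;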
If you are only using this table during this process, I would suggest using a temp table or, depending on how much data there is, a RAM table. I use RAM tables frequently to avoid transaction costs and save on disk activity.
