I just want to know whether it is possible, in SQL Server 2008, to identify the last SP/query that issued a BEGIN TRANSACTION.
Thank you so much for your response
DBCC OPENTRAN returns information about the oldest active transaction:
https://learn.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-opentran-transact-sql
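Variant A: run the command directly against the current database:
DBCC OPENTRAN;  -- reports the oldest active transaction, including its SPID and start time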
Variant B: capture the output in a table:
CREATE TABLE #OpenTranStatus (
ActiveTransaction varchar(25),
Details sql_variant
);
-- Execute the command, putting the results in the table.
INSERT INTO #OpenTranStatus
EXEC ('DBCC OPENTRAN WITH TABLERESULTS, NO_INFOMSGS');
-- Display the results.
SELECT * FROM #OpenTranStatus;
Transactions are specific to sessions, so there can be many running concurrently.
Transactions can also be implicit: any standalone UPDATE/INSERT/DELETE is a self-contained transaction.
With that out of the way, you have DMVs such as sys.dm_tran_active_transactions (and related views) which expose everything about currently active transactions.
It depends what you actually need of course.
There is no transaction history though, so once committed/rolled back you have no trace.
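For example, a minimal sketch (assuming VIEW SERVER STATE permission) that joins the transaction DMVs to each session's most recent batch, which gets you close to "who opened this transaction and what did they last run":
SELECT s.session_id,
       t.name AS transaction_name,
       t.transaction_begin_time,
       txt.text AS most_recent_batch
FROM sys.dm_tran_session_transactions st
JOIN sys.dm_tran_active_transactions t ON t.transaction_id = st.transaction_id
JOIN sys.dm_exec_sessions s ON s.session_id = st.session_id
JOIN sys.dm_exec_connections c ON c.session_id = s.session_id
CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS txt;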
Related
I have an app which keeps inserting rows into a table using stored procedures.
Based on my search so far, Oracle needs an explicit commit, but SQL Server does it automatically.
I could not find any solid reference to confirm the above.
So the question: does SQL Server need a commit after every insert and delete (inside stored procedures), or is it automatic?
SQL Server commits by default. If you don't want it to commit immediately, you can begin a TRANSACTION, and then choose to COMMIT TRANSACTION or ROLLBACK TRANSACTION.
More info:
https://msdn.microsoft.com/en-us/library/ms188929.aspx
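For example, a minimal sketch (the table and procedure names are made up) of an explicit transaction inside a stored procedure; without the BEGIN TRANSACTION, each statement would auto-commit on its own:
CREATE PROC dbo.MoveRow
    @Id int
AS
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        DELETE FROM dbo.SourceTable WHERE Id = @Id;
        INSERT INTO dbo.TargetTable (Id) VALUES (@Id);
        COMMIT TRANSACTION;        -- both statements persist together
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;  -- neither statement persists
    END CATCH
END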
The answer can be complicated depending on configuration and your exact code; however, in general SQL Server commits with each statement. For all practical purposes, if you do something like:
CREATE TABLE dbo.DataTable( Value nvarchar(max) )
GO
CREATE PROC dbo.WriteData
@data NVARCHAR(MAX)
AS BEGIN
INSERT INTO dbo.DataTable( Value ) VALUES ( @data )
END
GO
EXEC dbo.WriteData 'Hello World'
SELECT *
FROM DataTable
DROP TABLE dbo.DataTable
DROP PROC dbo.WriteData
Once the proc has completed, the data is committed. Again, depending on many factors, the timing of this can change or be delayed.
However, for what it sounds like you are asking: if you INSERT, the data is committed; there is no need to finalize a transaction unless you explicitly started one.
I'm quite experienced with SQL databases but mostly with Oracle and MySQL.
Now I'm dealing with SQL Server 2012 (Management Studio 2008) and facing a weird behaviour that I cannot explain.
Consider these 3 queries against a source table of 400k rows:
SELECT ID_TARJETA
INTO [SGMENTIA_TEMP].[dbo].[borra_borra_]
FROM [DATAMART_SEGMENTIA].[DESA].[CLIENTES]
ALTER TABLE [SGMENTIA_TEMP].[dbo].[borra_borra_]
ADD PRIMARY KEY (ID_TARJETA)
SELECT COUNT(*)
FROM [SGMENTIA_TEMP].[dbo].[borra_borra_]
If I run them one after the other, it runs fine (total: ~7 sec).
If I select them all and run them as a single batch, it runs badly (total: ~60 sec).
Finally, if I wrap it all in a transaction, it runs fine again:
BEGIN TRANSACTION;
SELECT ID_TARJETA
INTO [SGMENTIA_TEMP].[dbo].[borra_borra_]
FROM [DATAMART_SEGMENTIA].[DESA].[CLIENTES]
ALTER TABLE [SGMENTIA_TEMP].[dbo].[borra_borra_]
ADD PRIMARY KEY(ID_TARJETA)
SELECT COUNT(*)
FROM [SGMENTIA_TEMP].[dbo].[borra_borra_]
COMMIT;
The whole picture makes no sense to me: since starting a transaction looks quite expensive, the first scenario should be the slow one and the second should work far better. Am I wrong?
The question is quite important for me since I'm building these packages of queries programmatically (JDBC), and I need a way to tune their performance.
The only difference between the two snippets provided is that the first uses the default transaction mode and the second uses an explicit transaction.
Since SQL Server's default transaction mode is autocommit, each individual statement is its own transaction.
You can find more information about transaction modes here.
You can try this to see if it runs in ~60 sec too:
BEGIN TRANSACTION;
SELECT ID_TARJETA
INTO [SGMENTIA_TEMP].[dbo].[borra_borra_]
FROM [DATAMART_SEGMENTIA].[DESA].[CLIENTES];
COMMIT;
BEGIN TRANSACTION;
ALTER TABLE [SGMENTIA_TEMP].[dbo].[borra_borra_]
ADD PRIMARY KEY(ID_TARJETA);
COMMIT;
BEGIN TRANSACTION;
SELECT COUNT(*)
FROM [SGMENTIA_TEMP].[dbo].[borra_borra_]
COMMIT;
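For completeness, the documentation also covers a third mode, implicit transactions; a minimal sketch (shown only to illustrate the mode, not a recommendation):
SET IMPLICIT_TRANSACTIONS ON;
SELECT ID_TARJETA
INTO [SGMENTIA_TEMP].[dbo].[borra_borra_]
FROM [DATAMART_SEGMENTIA].[DESA].[CLIENTES];  -- implicitly opens a transaction
COMMIT;  -- the transaction stays open until you commit it yourself
SET IMPLICIT_TRANSACTIONS OFF;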
I'm having issues with timeouts on a table of mine.
Example table:
CREATE TABLE dbo.TempTable (
    Id bigint,
    AppToken uniqueidentifier,
    [Status] smallint,
    CreateDate datetime,
    UpdateDate datetime
);
I'm inserting data into this table from 2 different stored procedures that are wrapped with transaction (with specific escalation) and also 1 job that executes once every 30 secs.
I'm getting timeouts from only one of them, and the weird thing is that it's the simple one:
BEGIN TRY
BEGIN TRAN
INSERT INTO [dbo].[TempTable](Id, AppToken, [Status], [CreateDate], [UpdateDate])
VALUES(#Id, NEWID(), #Status, GETUTCDATE(), GETUTCDATE() )
COMMIT TRAN
END TRY
BEGIN CATCH
IF ##TRANCOUNT > 0
ROLLBACK TRAN;
END CATCH
When there is some traffic on this table (TempTable), this procedure keeps timing out.
I checked the execution plans and it seems I haven't missed any indexes in either stored procedure.
Also, the only index on TempTable is the clustered PK on Id.
Any ideas?
If more information is needed, do tell.
The 2nd stored procedure using this table isn't causing any heavy IO.
The job, however, runs an atomic UPDATE on this table and DELETEs from it at the end; but even when I checked during high IO on this table, the job takes no longer than 3 secs.
Thanks.
It is most probably because some other process is blocking your insert operation. It could be another insert, delete, or update, some trigger, or any other SQL statement.
To find out who is blocking your operation, you can use some easily available stored procedures like:
sp_who2
sp_whoIsActive (My Preferred)
While your insert statement is being executed/hung, execute one of these procedures and see who is blocking you.
In sp_who2 you will see a column named BlkBy; get the SPID from that column and execute the following query:
DBCC INPUTBUFFER(71);  -- replace 71 with the SPID from the BlkBy column
GO
This will return the last query executed by that process ID. The statement is not well formatted, though: the whole query comes back on a single line, so you will need to reformat it in SSMS to actually be able to read it.
sp_WhoIsActive, on the other hand, returns only the queries that are blocking other processes, with each query formatted just as the user executed it. It also gives you the execution plan for that query.
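If you prefer not to install anything, a minimal sketch using the built-in DMVs (assuming VIEW SERVER STATE permission) that lists who is blocking whom:
SELECT r.session_id,
       r.blocking_session_id,   -- the session holding the lock
       r.wait_type,
       r.wait_time,
       txt.text AS running_statement
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS txt
WHERE r.blocking_session_id <> 0;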
I'm trying to write to a log file inside a transaction so that the log survives even if the transaction is rolled back.
--start code
begin tran
insert into dbo.logtable values ([something])
[[main code here]]
rollback
commit
-- end code
You could say "just write the log before the transaction starts", but that is not as easy, because the transaction starts before this S-Proc is run (i.e. the code is part of a bigger transaction).
So, in short: is there a way to write a special statement inside a transaction that is not part of the transaction? I hope my question makes sense.
Use a table variable (@TableVariable) to hold the log info. Table variables are not rolled back when the transaction is.
See this article.
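A minimal sketch of the idea (the message text is illustrative; this assumes dbo.logtable from the question has a single message column):
DECLARE @Log table (Msg varchar(200));

BEGIN TRANSACTION;
INSERT INTO @Log VALUES ('about to run the main code');
-- [[main code here]]
ROLLBACK TRANSACTION;

-- The row above survives the rollback; persist it to the real log table now.
INSERT INTO dbo.logtable SELECT Msg FROM @Log;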
I do this one of two ways, depending on my needs at the time. Both involve using a variable, which retains its value following a rollback.
1) Declare a @Log varchar(max) variable and use SET @Log = ISNULL(@Log + '; ', '') + 'Your new log info here'. Keep appending to this as you go through the transaction. I'll insert the @Log value into the real log table after the commit or the rollback as necessary, usually only when there is an error (in the CATCH block) or if I'm trying to debug a problem.
2) Declare a @LogTable table (RowID int identity(1,1) primary key, RowValue varchar(5000)) variable and insert into it as you progress through the transaction. I like using the OUTPUT clause to insert the actual IDs (and other columns with messages, like 'DELETE item 1234') of rows used in the transaction into this table. I will insert this table into the actual log table after the commit or the rollback as necessary.
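A sketch of variant 2's OUTPUT technique (dbo.Items and ItemId are made-up names):
DECLARE @LogTable table (RowID int identity(1,1) primary key, RowValue varchar(5000));
DECLARE @ItemId int = 1234;  -- hypothetical row to delete

DELETE dbo.Items
OUTPUT 'DELETE item ' + CAST(deleted.ItemId AS varchar(20))
INTO @LogTable (RowValue)
WHERE ItemId = @ItemId;

-- @LogTable keeps its rows even if the surrounding transaction rolls back.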
If the parent transaction rolls back, the logging data will roll back as well; SQL Server does not support proper nested transactions. One possibility is to use a CLR stored procedure to do the logging: it can open its own connection to the database outside the transaction, and insert and commit the log data there.
Log output to a table, use a time delay, and use WITH(NOLOCK) to see it.
It looks like @arvid wanted to debug the operation of the stored procedure, and is able to alter the stored proc.
The c# code starts a transaction, then calls a s-proc, and at the end it commits or rolls back the transaction. I only have easy access to the s-proc
I had a similar situation. So I modified the stored procedure to log my desired output to a table. Then I put a time delay at the end of the stored procedure
WAITFOR DELAY '00:00:12'; -- 12 second delay, adjust as desired
and in another SSMS window, quickly read the table at the READ UNCOMMITTED isolation level (the WITH(NOLOCK) hint below):
SELECT * FROM dbo.NicksLogTable WITH(NOLOCK);
It's not the solution you want if you need a permanent record of the logs (edit: including where transactions get rolled back), but it suits my purpose to be able to debug the code in a temporary fashion, especially when linked servers, xp_cmdshell, and creating file tables are all disabled :-(
Apologies for bumping a 12-year old thread, but Microsoft deserves an equal caning for not implementing nested transactions or autonomous transactions in that time period.
If you want to emulate nested transaction behaviour, you can use a savepoint inside a named transaction:
begin transaction a
create table #a (i int)
select * from #a
save transaction b          -- a savepoint, not a real nested transaction
create table #b (i int)
select * from #a
select * from #b
rollback transaction b      -- undoes work back to the savepoint; #b is gone
select * from #a            -- #a still exists; the outer transaction is still open
rollback transaction a      -- undoes everything
In SQL Server, if you want a "sub-transaction" you should use save transaction xxxx, which works like an Oracle savepoint.
The following (sanitized) code sometimes produces these errors:
Cannot drop the table 'database.dbo.Table', because it does not exist or you do not have permission.
There is already an object named 'Table' in the database.
begin transaction
if exists (select 1 from database.Sys.Tables where name ='Table')
begin drop table database.dbo.Table end
Select top 3000 *
into database.dbo.Table
from OtherTable
commit
select * from database.dbo.Table
The code can be run multiple times simultaneously. Anyone know why it breaks?
Can I ask why you're doing this in the first place? You should really consider using temporary tables or coming up with another solution.
I'm not positive that DDL statements behave the same way in transactions as DML statements do, and I have seen a blog post describing weird behavior when creating stored procedures within a transaction.
Aside from that, you might want to verify your transaction isolation level and set it to SERIALIZABLE.
Edit
Based on a quick test, I ran the same SQL in two different connections; when I created the table in one but didn't commit the transaction, the second transaction blocked. So it looks like this should work. I would still caution against this type of design.
In what part of the code are you preventing multiple accesses to this resource?
begin transaction
if exists (select 1 from database.Sys.Tables where name ='Table')
begin drop table database.dbo.Table end
Select top 3000 *
into database.dbo.Table
from OtherTable
commit
Begin transaction isn't doing it. It's only setting up for a commit/rollback scenario on any rows added to tables.
The (if exists, drop) is a race condition, along with the re-creation of the table with (select ... into). Multiple sessions dropping into that code all at once will most certainly cause all kinds of errors: some creating tables that others have just destroyed, others dropping tables that don't exist anymore, and others dropping tables that someone is busy inserting into. UGH!
Consider the temp table suggestions of others, or use an application lock to block others from entering this code at all while the critical resource is busy. Transactions on drop/create are not what you want.
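A minimal sketch of the application-lock approach (the resource name 'RebuildTableX' is arbitrary):
BEGIN TRANSACTION;

DECLARE @rc int;
EXEC @rc = sp_getapplock @Resource = 'RebuildTableX',
                         @LockMode = 'Exclusive',
                         @LockOwner = 'Transaction',
                         @LockTimeout = 10000;   -- ms to wait for the lock
IF @rc < 0
BEGIN
    ROLLBACK TRANSACTION;   -- could not get the lock; give up
    RETURN;
END

-- Only one session at a time reaches this point: safe to drop/recreate the table.

COMMIT TRANSACTION;   -- a transaction-owned lock is released automatically here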
If you are just using this table during this process, I would suggest using a temp table or, depending on how much data there is, a RAM table. I use RAM tables frequently to avoid any transaction costs and save on disk activity.