I'm quite experienced with SQL databases, but mostly with Oracle and MySQL.
Now I'm dealing with SQL Server 2012 (via Management Studio 2008) and facing a weird behaviour that I cannot explain.
Consider these three queries against a source table of ~400k rows:
SELECT ID_TARJETA
INTO [SGMENTIA_TEMP].[dbo].[borra_borra_]
FROM [DATAMART_SEGMENTIA].[DESA].[CLIENTES]
ALTER TABLE [SGMENTIA_TEMP].[dbo].[borra_borra_]
ADD PRIMARY KEY (ID_TARJETA)
SELECT COUNT(*)
FROM [SGMENTIA_TEMP].[dbo].[borra_borra_]
If I run them one after the other, everything is fine (total: ~7 sec).
If I select all three and run them as a single batch, performance is terrible (total: ~60 sec).
Finally, if I wrap them all in an explicit transaction, it is fast again:
BEGIN TRANSACTION;
SELECT ID_TARJETA
INTO [SGMENTIA_TEMP].[dbo].[borra_borra_]
FROM [DATAMART_SEGMENTIA].[DESA].[CLIENTES]
ALTER TABLE [SGMENTIA_TEMP].[dbo].[borra_borra_]
ADD PRIMARY KEY(ID_TARJETA)
SELECT COUNT(*)
FROM [SGMENTIA_TEMP].[dbo].[borra_borra_]
COMMIT;
The whole picture makes no sense to me: since starting a transaction looks expensive, the first scenario should be the slow one and the second should perform far better. Am I wrong?
The question is quite important for me because I'm building these packages of queries programmatically (via JDBC), and I need a way to tune their performance.
The only difference between the two snippets provided is that the first uses the default transaction mode and the second uses an explicit transaction.
Since SQL Server's default transaction mode is autocommit, each individual statement is its own transaction.
You can find more information about transaction modes here.
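The three modes can be sketched as follows (dbo.t is a hypothetical table used only for illustration):

```sql
-- Autocommit (the default): each statement is its own transaction
INSERT INTO dbo.t (id) VALUES (1);      -- committed as soon as it completes

-- Explicit: everything between BEGIN TRANSACTION and COMMIT is one transaction
BEGIN TRANSACTION;
INSERT INTO dbo.t (id) VALUES (2);
INSERT INTO dbo.t (id) VALUES (3);
COMMIT;

-- Implicit: the first statement silently opens a transaction,
-- which stays open until you COMMIT or ROLLBACK yourself
SET IMPLICIT_TRANSACTIONS ON;
INSERT INTO dbo.t (id) VALUES (4);
COMMIT;
SET IMPLICIT_TRANSACTIONS OFF;
```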
You can try the following to see whether it also runs in ~60 seconds:
BEGIN TRANSACTION;
SELECT ID_TARJETA
INTO [SGMENTIA_TEMP].[dbo].[borra_borra_]
FROM [DATAMART_SEGMENTIA].[DESA].[CLIENTES];
COMMIT;
BEGIN TRANSACTION;
ALTER TABLE [SGMENTIA_TEMP].[dbo].[borra_borra_]
ADD PRIMARY KEY(ID_TARJETA);
COMMIT;
BEGIN TRANSACTION;
SELECT COUNT(*)
FROM [SGMENTIA_TEMP].[dbo].[borra_borra_]
COMMIT;
I just want to know whether it is possible, in SQL Server 2008, to find the last stored procedure/query that issued the BEGIN TRANSACTION.
Thank you so much for your response
Information about active transactions: DBCC OPENTRAN
https://learn.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-opentran-transact-sql
Variant B
CREATE TABLE #OpenTranStatus (
ActiveTransaction varchar(25),
Details sql_variant
);
-- Execute the command, putting the results in the table.
INSERT INTO #OpenTranStatus
EXEC ('DBCC OPENTRAN WITH TABLERESULTS, NO_INFOMSGS');
-- Display the results.
SELECT * FROM #OpenTranStatus;
Transactions are specific to sessions, so there could be many concurrent ones.
Transactions can also be implicit: in autocommit mode, any standalone UPDATE/INSERT/DELETE is a self-contained transaction.
With that out of the way, you have DMVs like sys.dm_tran_active_transactions and others, which give you everything.
It depends on what you actually need, of course.
There is no transaction history, though, so once a transaction is committed or rolled back you have no trace of it.
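For example, a sketch joining two of those DMVs to list open transactions together with their owning sessions (column names per the SQL Server DMV documentation):

```sql
SELECT s.session_id,
       s.login_name,
       t.transaction_id,
       t.name,                       -- e.g. user_transaction
       t.transaction_begin_time
FROM sys.dm_tran_active_transactions AS t
JOIN sys.dm_tran_session_transactions AS st
    ON st.transaction_id = t.transaction_id
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = st.session_id;
```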
I have an update stored procedure that I call from C# code, and my code runs in 3 threads at the same time. The update statement regularly throws the error "Transaction (Process ID) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction." How can I solve this in SQL Server 2014 or in the C# code?
Update stored procedure:
ALTER PROCEDURE sp_UpdateSP
    @RecordID nvarchar(50),
    @FileNetID nvarchar(50),
    @ClassName nvarchar(150)
AS
BEGIN TRAN t1
UPDATE MYTABLE SET FilenetID=@FileNetID, DOCUMENT_TYPE=@ClassName, CONTROLID='FileAttach' WHERE OTRECORDID=@RecordID
COMMIT TRAN t1
Table Index:
Non-Unique, Non-Clustered OTRECORDID Ascending nvarchar(255)
Thanks
I suspect the problem is caused by SQL Server performing a scan over the table because it thinks that is quicker than doing a seek on the index followed by a key lookup to find the row to update.
You can prevent these scans and force SQL Server to perform a seek by using the FORCESEEK hint.
Your code would become:
BEGIN TRAN t1
UPDATE mt SET FilenetID=@FileNetID, DOCUMENT_TYPE=@ClassName, CONTROLID='FileAttach' FROM MYTABLE mt WITH(FORCESEEK) WHERE OTRECORDID=@RecordID
COMMIT TRAN t1
This will be slower than the scan but will reduce the probability of deadlocks.
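Even with the hint, deadlocks can still occur, so a common complement (a sketch, not part of the original procedure) is to retry when the statement is chosen as the victim, which raises error 1205:

```sql
-- Sketch: retry the UPDATE up to 3 times when picked as a deadlock victim
DECLARE @retries int = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRAN t1;
        UPDATE mt SET FilenetID=@FileNetID, DOCUMENT_TYPE=@ClassName, CONTROLID='FileAttach'
        FROM MYTABLE mt WITH(FORCESEEK) WHERE OTRECORDID=@RecordID;
        COMMIT TRAN t1;
        SET @retries = 0;                      -- success, leave the loop
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRAN;
        SET @retries = @retries - 1;
        IF ERROR_NUMBER() <> 1205 OR @retries = 0
            THROW;                             -- not a deadlock, or out of retries
    END CATCH
END
```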
It might not be the exact answer, but if you want a workaround to clear the immediate problem, restart the service:
Win + R > type services.msc
Find the SQL Server service (usually named SQL Server (MSSQLSERVER) if you have only one instance) and restart it; the deadlocked transaction is gone and you can keep working.
I have an app which keeps inserting rows into a table using stored procedures.
From my search so far, Oracle needs an explicit commit but SQL Server commits automatically.
I could not find any solid reference to confirm this.
So the question: does SQL Server need a commit after every insert and delete (inside stored procedures), or is it automatic?
SQL Server will commit by default. If you don't want it to commit, you can begin a TRANSACTION and then choose to COMMIT TRANSACTION or ROLLBACK TRANSACTION.
More info:
https://msdn.microsoft.com/en-us/library/ms188929.aspx?f=255&MSPPError=-2147217396
The answer can be complicated depending on configuration and your exact code; however, in general SQL Server commits with each operation. For all practical purposes, if you do something like:
CREATE TABLE dbo.DataTable( Value nvarchar(max) )
GO
CREATE PROC dbo.WriteData
    @data NVARCHAR(MAX)
AS BEGIN
    INSERT INTO DataTable( Value ) VALUES ( @data )
END
GO
EXEC dbo.WriteData 'Hello World'
SELECT *
FROM DataTable
DROP TABLE dbo.DataTable
DROP PROC dbo.WriteData
then once the proc has completed, the data is committed. Again, depending on many factors, the timing of this can change or be delayed.
But for what you seem to be asking: if you INSERT the data, it is inserted; there is no need to finalize a transaction unless you started one.
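You can see this for yourself with @@TRANCOUNT, which counts the open transactions on the current session:

```sql
SELECT @@TRANCOUNT;      -- 0: autocommit mode, nothing left open
BEGIN TRANSACTION;
SELECT @@TRANCOUNT;      -- 1: an explicit transaction is now open
ROLLBACK TRANSACTION;
SELECT @@TRANCOUNT;      -- back to 0
```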
I'm having issues with timeouts on a table of mine.
Example table:
Id BIGINT,
Token uniqueidentifier,
status smallint,
createdate datetime,
updatedate datetime
I'm inserting data into this table from 2 different stored procedures that are wrapped in transactions (with specific escalation), and also from 1 job that executes once every 30 seconds.
I'm getting timeouts from only one of them, and the weird thing is that it's the simple one:
BEGIN TRY
    BEGIN TRAN
    INSERT INTO [dbo].[TempTable](Id, AppToken, [Status], [CreateDate], [UpdateDate])
    VALUES(@Id, NEWID(), @Status, GETUTCDATE(), GETUTCDATE() )
    COMMIT TRAN
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRAN;
END CATCH
When there is some traffic on this table (TempTable), this procedure keeps timing out.
I checked the execution plans and it seems I haven't missed any indexes in either stored procedure.
Also, the only index on TempTable is the clustered PK on Id.
Any ideas?
If more information is needed, do tell.
The 2nd stored procedure using this table isn't causing any big IO or anything like that.
The job, however, performs an atomic UPDATE on this table and DELETEs from it at the end; but when I checked during high IO on this table, the job takes no longer than 3 seconds.
Thanks.
It is most probably because some other process is blocking your insert operation. It could be another insert, delete, or update, or some trigger or any other SQL statement.
To find out who is blocking your operation, you can use some easily available stored procedures:
sp_who2
sp_whoIsActive (My Preferred)
While your insert statement is executing/hung, run one of these procedures and see who is blocking you.
In sp_who2 you will see a column named Blk_by; get the SPID from that column and execute the following query:
DBCC INPUTBUFFER(71);
GO
This will return the last query executed by that process ID. The SQL statement is not well formatted, though: the whole query comes back on one line, so you will need to format it in SSMS to actually be able to read it.
On the other hand, sp_WhoIsActive will return only the queries that are blocking other processes, with each query formatted just as the user executed it. It will also give you the execution plan for that query.
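If you prefer plain DMVs over those procedures, a sketch that lists the currently blocked sessions together with the statement they are stuck on:

```sql
SELECT r.session_id,
       r.blocking_session_id,       -- the session holding the lock
       r.wait_type,
       r.wait_time,                 -- ms spent waiting so far
       t.text AS blocked_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```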
I have a table I'm using as a work queue. Essentially, it consists of a primary key, a piece of data, and a status flag (processed/unprocessed). Multiple processes try to grab the next unprocessed row, so I need to make sure they observe proper lock-and-update semantics to avoid race-condition nastiness. To that end, I've defined a stored procedure they can call:
CREATE PROCEDURE get_from_q
AS
DECLARE @queueid INT;
BEGIN TRANSACTION TRAN1;
SELECT TOP 1
    @queueid = id
FROM
    MSG_Q WITH (updlock, readpast)
WHERE
    MSG_Q.status=0;
SELECT TOP 1 *
FROM
    MSG_Q
WHERE
    MSG_Q.id=@queueid;
UPDATE MSG_Q
SET status=1
WHERE id=@queueid;
COMMIT TRANSACTION TRAN1;
Note the use of "WITH (updlock, readpast)" to make sure that I lock the target row and ignore rows that are similarly locked already.
Now, the procedure works as listed above, which is great. While I was putting this together, however, I found that if the second SELECT and the UPDATE are reversed in order (i.e. UPDATE first then SELECT), I got no data back at all. And no, it didn't matter whether the second SELECT was before or after the final COMMIT.
My question is thus why the order of the second SELECT and UPDATE makes a difference. I suspect that there is something subtle going on there that I don't understand, and I'm worried that it's going to bite me later on.
Any hints?
By default, transactions are READ COMMITTED:
"Specifies that shared locks are held while the data is being read to avoid dirty reads, but the data can be changed before the end of the transaction, resulting in nonrepeatable reads or phantom data. This option is the SQL Server default."
http://msdn.microsoft.com/en-us/library/aa259216.aspx
I think you are getting nothing in the SELECT because the record is still marked as dirty. You'd have to change the transaction isolation level OR, what I do, is do the update first and then read the record; but to do this you have to flag the record with a unique value (I use GETDATE() for batches, but a GUID is probably what you want to use).
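As a sketch of that update-first approach without needing a batch flag at all: on SQL Server 2005 and later, the OUTPUT clause lets a single statement both claim the next row and return it (the data column name here is hypothetical):

```sql
-- Atomically mark the next unprocessed row and return it in one statement
UPDATE TOP (1) MSG_Q WITH (READPAST)
SET status = 1
OUTPUT inserted.id, inserted.data
WHERE status = 0;
```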
Although this does not directly answer your question: rather than reinventing the wheel and making life difficult for yourself (unless you enjoy it, of course ;-)), may I suggest that you look at SQL Server Service Broker.
It provides an existing framework for using queues etc.
To find out more visit.
Service Broker Link
Now back to the question. I am not able to replicate your problem: as you will see if you execute the code below, data is returned regardless of the order of the SELECT/UPDATE statements.
So your example above then.
create table #MSG_Q
(id int identity(1,1) primary key, status int)
insert into #MSG_Q select 0
DECLARE @queueid INT
BEGIN TRANSACTION TRAN1
SELECT TOP 1 @queueid = id FROM #MSG_Q WITH (updlock, readpast) WHERE #MSG_Q.status=0
UPDATE #MSG_Q SET status=1 WHERE id=@queueid
SELECT TOP 1 * FROM #MSG_Q WHERE #MSG_Q.id=@queueid
COMMIT TRANSACTION TRAN1
select * from #MSG_Q
drop table #MSG_Q
Returns the Results (1,1) and (1,1)
Now swapping the statement order.
create table #MSG_Q
(id int identity(1,1) primary key, status int)
insert into #MSG_Q select 0
DECLARE @queueid INT
BEGIN TRANSACTION TRAN1
SELECT TOP 1 @queueid = id FROM #MSG_Q WITH (updlock, readpast) WHERE #MSG_Q.status=0
SELECT TOP 1 * FROM #MSG_Q WHERE #MSG_Q.id=@queueid
UPDATE #MSG_Q SET status=1 WHERE id=@queueid
COMMIT TRANSACTION TRAN1
select * from #MSG_Q
drop table #MSG_Q
Results in: (1,0), (1,1) as expected.
Perhaps you could qualify your issue further?
More experimentation leads me to conclude that I was chasing a red herring, brought about by the tools I was using to execute my stored procedure. I was initially using DbVisualizer (free edition) and NetBeans, and they both appear to be confused by something about the format of the results. DbVisualizer suggests that I'm getting multiple result sets back, which the free edition doesn't handle.
Since then, I grabbed the free MS SQL Server Management Studio Express and things work perfectly. For those interested, the URL to SMSE is here:
MS SQL Server SMSE
Don't forget to install the MSXML6 service pack, too:
MSXML Service Pack 1
So, totally my bad in this case. :-(
Major thanks and kudos to you guys for your answers, though. You helped me confirm that what I was doing should work, which led me to the change I had to make to actually "solve" the issue. Thanks ever so much!
One more point: including SET NOCOUNT ON in the stored procedure fixed things for all ODBC clients. Apparently the row count from the first SELECT was confusing the ODBC clients, and telling SQL Server not to return that value makes things work perfectly.
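For reference, the fix is just one line at the top of the procedure body:

```sql
ALTER PROCEDURE get_from_q
AS
SET NOCOUNT ON;      -- stop sending "N rows affected" counts to the client
DECLARE @queueid INT;
-- ... rest of the procedure exactly as before ...
```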