I am trying to resolve deadlocks. My application gets deadlocks all the time when there are more than 10 concurrent users.
I have tried SQL Profiler and can't figure it out.
The thing is, in SQL Profiler I have checked the Deadlock Graph event, but when I run the trace the event never gets logged. I can see many Deadlock and Deadlock Chain events, but no Deadlock Graph events. Please advise.
Thanks for the help.
If you want to see only the Deadlock Graph event, select just Locks -> Deadlock graph in the trace.
When you set up a filter on database name or database ID, the Deadlock Graph event is not captured, even if you don't check "Exclude rows that don't contain values".
If you filter on, say, Duration or NTUserName, neither of which is populated by Deadlock Graph, the event is included (as long as you don't filter on the database, that is).
Likewise, if you add Lock:Acquired and filter on DatabaseName (not populated by Lock:Acquired), the event is included.
So the problem is with this precise combination.
Refer:
https://connect.microsoft.com/SQLServer/feedback/details/240737/filtering-for-database-name-id-filters-out-deadlock-graph-when-it-shouldnt
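As a workaround, scripting the trace server-side and leaving out the database filter does capture the event. A minimal sketch (the file path is made up; 148 is the Deadlock graph event id):
declare @TraceID int, @maxfilesize bigint, @on bit;
set @maxfilesize = 50;
set @on = 1;
-- Create the trace; output goes to a .trc file at the given path.
exec sp_trace_create @TraceID output, 0, N'C:\Traces\deadlocks', @maxfilesize, NULL;
exec sp_trace_setevent @TraceID, 148, 1, @on;  -- TextData carries the deadlock XML
exec sp_trace_setevent @TraceID, 148, 12, @on; -- SPID
exec sp_trace_setevent @TraceID, 148, 14, @on; -- StartTime
exec sp_trace_setstatus @TraceID, 1;           -- start the trace (no filters set)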
I am using DbVisualizer 8.0.12 as the client tool towards MS SQL Server 2012 database.
I want to perform simple update:
update table1 set field1=0 where field2='something';
I expect exactly one row to be updated, since field2 is the primary key of table1.
Also, doing a:
select * from table1 where field2='something';
returns exactly one row.
But when executing the update SQL, DbVisualizer informs me that two updates were successfully executed:
11:16:58 [UPDATE - 1 row(s), 0.003 secs] Command processed
11:16:58 [UPDATE - 1 row(s), 0.003 secs] Command processed
... 2 statement(s) executed, 2 row(s) affected, exec/fetch time: 0.006/0.000 sec [2 successful, 0 warnings, 0 errors]
I don't understand why two updates were performed. Shouldn't there be only one?
Can anybody please advise? Thank you in advance for any kind of information.
Consider doing some work yourself. It is obvious that there are two commands issued. First, trace them - with the profiler - and check whether they are what you think they are.
SQL Server comes with a decent profiler out of the box. Old rule when you do stuff like this: NEVER assume, always validate. The statements may not even be what you think they are... as long as you do not know that, it is all a wild guess.
I have used the MS SQL Server Profiler, as @TomTom suggested.
And I also ran my SQL update using Microsoft SQL Server Management Studio.
Things I had to turn on in the profiler (for my needs) were:
1. 'Trace properties > Events Selection > Column Filters > Database name - Like: my_db_name', since we have a lot of databases on the server and I only wanted to trace my database named 'my_db_name'
2. 'Trace properties > Events Selection > Stored procedures > enable SP:StmtStarting and SP:StmtCompleted', since I wanted to trace triggers as well
It seems that this info message from DbVisualizer is misleading. This happens only for tables that have triggers - in this particular case, the trigger inserted data into another table (a so-called archive table) on every update. Actually, only one update was done, so all is fine there.
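For illustration, a trigger along these lines (the trigger and archive table names are made up) would produce the extra statement that DbVisualizer counts:
CREATE TRIGGER trg_table1_archive ON table1
AFTER UPDATE
AS
BEGIN
    -- Copy each updated row into a hypothetical archive table;
    -- this INSERT is the second statement the client tool reports.
    INSERT INTO table1_archive (field2, field1, archived_at)
    SELECT field2, field1, GETDATE()
    FROM inserted;
END;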
Microsoft SQL Server Management Studio shows the correct info: 1 update and 1 insert.
Hope this will help someone having a similar "problem". @TomTom, please post your comment as an answer so I can give you credit for it. Thank you.
[EDIT]
@TomTom
Hmmm, maybe not.
I think you have had enough time to think about it...
Your answer wasn't helpful at all (except for the little trace of light, in the confirmative form of
"Yes, SQL Server has a profiler included, DAAAH...")
with no constructive suggestions of your own and a lot of being a "smarty" guy.
An answer to a question should include some more useful information and concrete guidance if you have it; otherwise, don't be a smartass.
Since I did all the work without your help, I don't think you actually deserve credit for it.
The funny thing about it is that you ACTUALLY think you do.
No comment on that, except that I really have ZERO tolerance for smartasses like you.
[END OF EDIT]
Still, there is one more thing I would like to know about Profiler.
Is there a way you can actually see which rows (in which table) will be updated?
From the information I have above, I can only see that there was one update (so I am assuming it is the one on table1, which I expected). But I would like to see something like: in table 'tablename', these rows will be updated with these values...
Is this possible with the Profiler?
Thank you in advance for your time and answers.
I am configuring Event Notifications on a Service Broker Queue to log when various performance related events occur. One of these is Missing_Join_Predicate.
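For context, the notification is set up along these lines (the queue and service names here are illustrative, not my exact ones):
CREATE QUEUE dbo.PerfEventsQueue;

CREATE SERVICE PerfEventsService ON QUEUE dbo.PerfEventsQueue
    ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);

CREATE EVENT NOTIFICATION MissingJoinPredicateNotification
ON SERVER
FOR MISSING_JOIN_PREDICATE
TO SERVICE 'PerfEventsService', 'current database';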
The XML payload of this event holds nothing useful for me to identify the cause (TSQL, query plan, object id(s), etc.), so in the procedure that processes the queue I am trying to use the TransactionID from the event to query dm_exec_requests and dm_exec_query_plan for the query plan and the TSQL, matching on dm_exec_requests.transaction_id.
The query returns no data.
Removing the filter from the query (i.e. collecting all rows from dm_exec_requests and dm_exec_query_plan) shows that rows are returned, but none for the TransactionID in question.
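The lookup I am attempting looks roughly like this (the literal value is a placeholder for the TransactionID parsed from the event XML):
DECLARE @TransactionID bigint;
SET @TransactionID = 12345; -- placeholder for the value from the event XML

SELECT t.text, p.query_plan
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
CROSS APPLY sys.dm_exec_query_plan(r.plan_handle) AS p
WHERE r.transaction_id = @TransactionID;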
Is what I am trying to do possible? Where am I going wrong?!
The trace-based event notifications, like MISSING_JOIN_PREDICATE, are just a verbatim translation of the corresponding trace event (the Missing Join Predicate event class) and carry exactly the same info. For this particular event there's basically no useful info whatsoever; the <TransactionID> is the xact id that triggered the event, and by the time you dequeue and process the notification message, the transaction is most likely finished and gone (yay for asynchronous, queue-based processing).
When using the original trace event, e.g. with Profiler, you could also enable SQL:BatchCompleted, filter appropriately, and then catch the JOIN culprit in the act. But with Event Notifications I don't see any feasible way to automate the process to the point where you can pinpoint the problem query and application. With EN you can, at best, raise awareness of the problem, show the client that causes it (the app), and then use other means (e.g. code inspection) to actually hunt down the root cause.
Unfortunately you'll discover that there are more event notification events that suffer from similar problems.
I have a high volume of data normalized into more than 100 tables. There are multiple applications that change the underlying data in those tables, and I want to raise events on those changes. The possible options I know of are:
Change Data Capture
Change Tracking
Using Triggers on each table (bad option but possible)
Can someone share the best way of doing this, if someone has already done it before?
What I really want in the end is: if one transaction affected 12 tables of the 100, I should be able to bubble up one event instead of 12. Assume there are concurrent users changing these tables.
Two options I can think of:
Triggers ARE the right way to capture change events in the DB layer.
Codewise, I make sure in my app that each table is changed through only one place in the code, regardless of what the change is (I call it a hub for that table, as it channels many different pathways into one place); it becomes very easy to catch change events that way in the code layer (a database-side sketch of the same idea follows).
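A minimal sketch of the hub idea expressed in the database layer, assuming made-up table, column, and procedure names: one stored procedure per table through which every change passes, recording the event at that single choke point.
CREATE PROCEDURE dbo.Table1_Update
    @field2 varchar(50),
    @field1 int
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE dbo.table1 SET field1 = @field1 WHERE field2 = @field2;

    -- Single choke point: every change to table1 is recorded here,
    -- so subscribers only need to watch one hypothetical events table.
    INSERT INTO dbo.ChangeEvents (table_name, changed_key, changed_at)
    VALUES ('table1', @field2, GETDATE());
END;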
One possibility is SQL Server Query Notifications: Using Query Notifications
As long as you want to 'batch' multiple changes, I think you should follow the route of Change Data Capture or Change Tracking (depending on whether you just want to know that something changed or what the changes were).
They should be used by a 'polling' procedure, where you poll for changes every few minutes (seconds, milliseconds?) and raise events. The nice thing about this is that as long as you store the last version from the previous poll - for each table - you can check whenever you like for changes since then. You don't rely on a real-time trigger approach which, if halted, would lose all events forever. This could easily be built as a procedure that checks each table, and you would need only one more table to store the last version per table.
Also, the overhead of this approach is controlled by you and by how frequently the polling happens.
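A minimal sketch of one polling pass using Change Tracking, assuming made-up names (a dbo.SyncVersions bookkeeping table and a tracked table dbo.table1) and that change tracking has already been enabled on the database and the table:
DECLARE @last_version bigint, @current_version bigint;

-- Version recorded by the previous poll for this table.
SELECT @last_version = last_version
FROM dbo.SyncVersions
WHERE table_name = 'table1';

SET @current_version = CHANGE_TRACKING_CURRENT_VERSION();

-- Everything that changed since the last poll; one event can be
-- raised for the whole batch instead of one per row or table.
SELECT ct.field2, ct.SYS_CHANGE_OPERATION
FROM CHANGETABLE(CHANGES dbo.table1, @last_version) AS ct;

-- Remember where this poll ended so the next one picks up from here.
UPDATE dbo.SyncVersions
SET last_version = @current_version
WHERE table_name = 'table1';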
I'm trying to analyze a deadlock in the SQL Server 2008 Profiler. I know how to find the offending SQL queries, but the collected queries do not include parameter values.
In other words, I can see something like this:
DELETE FROM users WHERE id = @id
But what I would like to see is this:
DELETE FROM users WHERE id = 12345
I guess there are some additional events or columns I need to collect in the profiler, but I don't know which. I am currently using the "TSQL_LOCKS" template.
Any hints would be greatly appreciated.
Thanks,
Adrian
Disclaimer: I've asked a similar question before, but I guess it was too specific, which is why I got no replies. So I'm making another attempt with this one.
I think you need the RPC:Completed event:
http://msdn.microsoft.com/en-us/library/ms175543.aspx
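With that event captured, the statement text in the trace includes the run-time values; for a parameterized delete it would look something like this (values are illustrative):
exec sp_executesql N'DELETE FROM users WHERE id = @id', N'@id int', @id = 12345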
The Profiler will contain the parameter values in the RPC:Completed/RPC:Starting events. But you already got replies telling you this.
What I want to add is that there is very little need to know the run-time parameter values in order to analyze a deadlock graph. First, because if 'users' is involved in the deadlock, the deadlock graph itself will give away which @id is in conflict, if the conflict is on a key. Second, and more importantly, for a deadlock scenario the exact keys involved are irrelevant. It is not as if a deadlock happens when one session deletes user 123 but would not happen when it deletes user 321.
Since you decided to ask on SO in the first place, I think the best thing would be to post the actual deadlock graph and let the community have a look at it. There are many people here who can answer quite a few questions just from the deadlock graph XML.
Start a trace with the following events having all checkboxes checked:
SQL:BatchCompleted
SQL:BatchStarting
Deadlock graph
Lock:Deadlock
Lock:Deadlock Chain
After the deadlock occurs, stop the trace, then click on the deadlock graph event class.
This should give you a good idea of what's going wrong.
If you're using a stored procedure (which it looks like you are) or Hibernate/NHibernate, you might need to turn on the SP:StmtStarting and RPC:Starting events. The parameters will then show up on their own line after the query.
Something like:
SP:StmtStarting DELETE FROM users WHERE id = @id
RPC:Starting exec sp_execute 12345
I'm doing an integration on a community platform called Telligent. I'm using a 3rd-party add-on called BlogML to import blog posts from an XML file (in BlogML format) into my local Telligent site. The Telligent platform comes with many classes in their SDK so that I can programmatically add content, such as blog posts. E.g.
myWeblogService.AddPost(myNewPostObject);
The BlogML app I'm using essentially parses the XML and creates blog post objects then adds them to the site using code like the above sample line. After about 40 post imports I get a SQL error:
Exception Details: System.Data.SqlClient.SqlException:
String or binary data would be truncated.
The statement has been terminated.
I believe this error means that I'm trying to insert too much data into a db field that has a maximum size limit. Unfortunately, I cannot tell which field is the problem. I ran SQL Server Profiler while doing the import, but I cannot seem to see which stored procedure the error occurs in. Is there another way to use the profiler, or another tool, to see exactly which stored procedure and even which field the error is caused by? Are there any other tips for getting more information about where specifically to look?
Oh the joys of 3rd-party tools...
You are correct in that the exception is due to trying to stuff too much data into a character/binary-based field. Running a trace should definitely allow you to see which procedure/statement is throwing the exception if you are capturing the correct events; those you'd want to capture include:
SQL:BatchStarting
SQL:BatchCompleted
SQL:StmtStarting
SQL:StmtCompleted
RPC:Starting
RPC:Completed
SP:Starting
SP:Completed
SP:StmtStarting
SP:StmtCompleted
Exception
If you know for certain it is a stored procedure that includes the faulty code, you could do away with capturing the four SQL:Batch* and SQL:Stmt* events. Be sure you capture all associated columns in the trace as well (this should be the default if you are running a trace using the Profiler tool). The Exception class will include the actual error in your trace, which should allow you to see the immediately preceding statement within the same SPID that threw the exception. You must include the starting events in addition to the completed events, because an exception will prevent the associated completed events from firing in the trace.
If you can filter your trace to a particular database, application, host name, etc. that will certainly make it easier to debug if you are on a busy server, however if you are on an idle server you may not need to bother with the filtering.
Assuming you are using SQL 2005+, the trace will include a column called 'EventSequence', which is basically an incrementing value ordered by the sequence in which events fire. Once you run the trace and capture the output, find the 'Exception' event that fired (if you are using Profiler, the row will be in red), then you should be able to simply find the most recent SP:StmtStarting or SQL:StmtStarting event for the same SPID that occurred before the exception.
Here is a screen shot of a trace I captured reproducing an error similar to yours:
You can see the exception line in red, and the highlighted line is the immediately preceding SP:StmtStarting event that fired prior to the exception for the same SPID. If you want to find which stored procedure this statement is a part of, look for the values in the ObjectName and/or ObjectId columns.
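If the trace was saved to a file, a query along these lines lets you do the same search in T-SQL (the file path is made up):
SELECT EventSequence, SPID, ObjectName, TextData
FROM sys.fn_trace_gettable(N'C:\Traces\blog_import.trc', DEFAULT)
ORDER BY EventSequence;
-- Find the Exception row, then walk backwards to the most recent
-- SP:StmtStarting / SQL:StmtStarting row with the same SPID.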
You can also get this error from simple mistakes.
For example, if you are trying to insert a string like:
String reqName = "Food Non veg \n";
here the trailing \n is the culprit: the extra characters push the value past the column's declared length. Remove the \n from the string to get rid of this error.
I hope this will help someone.