Azure SQL DB, Extended Events using Ring Buffer stop on their own after a while - why?

I run a fairly basic Extended Events session on Azure SQL DB, using a Ring Buffer target, that looks for errors with severity over 10, based on a post Brent Ozar made. However, the session stops on its own, and I'm not sure why. I suspect the target is filling up, even though all the documentation says it's FIFO and will drop the oldest events. Am I missing something? Do I need to set something differently? Thanks!
Update: weirdly, I made a much smaller test, with max_memory = 200 and a much bigger amount of data in the failing code, to try and force it to stop/die, but I can see it looping (wrapping) as I expected. So I'm still confused about why it's stopping, but it doesn't seem to be because the target is filling up.
CREATE EVENT SESSION severity_10plus_errors_XE
ON DATABASE
ADD EVENT sqlserver.error_reported
(
    ACTION(sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.database_id, sqlserver.sql_text, sqlserver.tsql_stack, sqlserver.username)
    --ACTION (sqlserver.sql_text, sqlserver.tsql_stack, sqlserver.database_id, sqlserver.username)
    WHERE ([severity] > 10)
)
ADD TARGET package0.ring_buffer
(
    SET max_memory = 10000  -- Units of KB.
)
WITH (MAX_DISPATCH_LATENCY = 60 SECONDS);
GO

ALTER EVENT SESSION severity_10plus_errors_XE
ON DATABASE
STATE = START;
GO
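For what it's worth, on Azure SQL DB the database-scoped XE DMVs can be used to confirm whether the session is still running and to peek at what the ring buffer currently holds. A minimal sketch against the session above (the session only shows up in these views while it is started):

-- Is the session currently running?
SELECT s.name, s.create_time
FROM sys.dm_xe_database_sessions AS s
WHERE s.name = N'severity_10plus_errors_XE';

-- Peek at the ring buffer contents while the session is running.
SELECT CAST(t.target_data AS xml) AS ring_buffer_xml
FROM sys.dm_xe_database_sessions AS s
JOIN sys.dm_xe_database_session_targets AS t
    ON t.event_session_address = s.address
WHERE s.name = N'severity_10plus_errors_XE'
  AND t.target_name = N'ring_buffer';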

Related

Extended event to capture which is running view in SQL Server

I have a requirement to capture which users are hitting a view in a database. My initial thought was to use Extended Events, but for some reason when I test, nothing is being captured.
This is what I have so far. Any suggestions would be greatly appreciated.
-- Test 1
CREATE EVENT SESSION [Track_View] ON SERVER
ADD EVENT sqlserver.module_start
(
SET collect_statement=1
ACTION
(
sqlserver.client_app_name,
sqlserver.database_name,
sqlserver.session_server_principal_name,
sqlserver.username,
sqlserver.sql_text,
sqlserver.tsql_stack
)
WHERE
(
[object_type]='V '
AND [object_name]=N'MyView'
)
)
ADD TARGET package0.histogram(SET filtering_event_name=N'sqlserver.module_start',source=N'object_name',source_type=(0)),
ADD TARGET package0.event_file(SET filename=N'C:\Event_Trace\XE_Track_view.xel',max_rollover_files=(20))
WITH (MAX_MEMORY=1048576 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=5 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=PER_CPU,TRACK_CAUSALITY=ON,STARTUP_STATE=ON)
GO
-- Test 2
CREATE EVENT SESSION [Track_View] ON SERVER
ADD EVENT sqlserver.sql_batch_starting(
ACTION(sqlserver.client_app_name,sqlserver.client_hostname,sqlserver.database_name,sqlserver.nt_username,sqlserver.sql_text,sqlserver.username)
WHERE ([package0].[equal_boolean]([sqlserver].[is_system],(0))
AND [sqlserver].[like_i_sql_unicode_string]([sqlserver].[sql_text],N'myview')
AND [sqlserver].[equal_i_sql_unicode_string]([sqlserver].[database_name],N'myDB')
))
ADD TARGET package0.event_file(SET filename=N'C:\Event_Trace\XE_Track_view.xel',max_rollover_files=(20))
WITH (MAX_MEMORY=1048576 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=5 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=PER_CPU,TRACK_CAUSALITY=ON,STARTUP_STATE=ON)
GO
First, let's break down why each of the XE sessions is not capturing access to the view, and then let's see if we can make a small change so that one of them does.
The session that you've labeled "test 1" is capturing the sqlserver.module_start event. Even though views are modules (which, before writing up this answer, I didn't believe they were, but the documentation for sys.sql_modules says that they are), they don't start and end the way that, say, a stored procedure or a function does. So the event isn't firing.
The session that you've labeled as "test 2" has a subtle error. Let's look at this predicate specifically:
[sqlserver].[like_i_sql_unicode_string]([sqlserver].[sql_text],N'myview')
Because it lacks any wildcards in the search criteria, this effectively says [sqlserver].[sql_text] = N'myview'. Changing the search criteria to N'%myview%' should suffice, and in my testing† it did.
One last thing I will note is that XE may not be sufficient to capture all uses of a given object. Take, for instance, a situation where a base object is referenced indirectly through a synonym. I've had good luck using the SQL Audit feature (which, ironically, uses XE under the hood) to track object use. It's a bit more to set up, but you get most (if not all) of what you're looking for as far as execution context goes. For your use case you'd want to audit some or all of the CRUD operations against the view.
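For illustration, a minimal sketch of what the SQL Audit route could look like; the audit name, file path, database name, and dbo.MyView are placeholders to adapt to your environment:

USE master;
GO
CREATE SERVER AUDIT [Track_View_Audit]
TO FILE (FILEPATH = N'C:\AuditLogs\');   -- placeholder path, must exist
GO
ALTER SERVER AUDIT [Track_View_Audit] WITH (STATE = ON);
GO

USE MyDB;   -- placeholder database
GO
CREATE DATABASE AUDIT SPECIFICATION [Track_View_Audit_Spec]
FOR SERVER AUDIT [Track_View_Audit]
ADD (SELECT, INSERT, UPDATE, DELETE ON OBJECT::dbo.MyView BY public)  -- audit CRUD against the view
WITH (STATE = ON);
GO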
† here is the XE session that I used in my testing. I used my local copy of AdventureWorks (which is why it references vEmployee) and added a predicate for the session from which I was issuing the query (to avoid spamming the XE session). But otherwise the approach is identical.
CREATE EVENT SESSION [Track_View] ON SERVER
ADD EVENT sqlserver.sql_batch_starting(
ACTION(
sqlserver.client_app_name,
sqlserver.client_hostname,
sqlserver.database_name,
sqlserver.nt_username,
sqlserver.sql_text,
sqlserver.username
)
WHERE (
[package0].[equal_boolean]([sqlserver].[is_system],(0))
AND [sqlserver].[like_i_sql_unicode_string]([sqlserver].[sql_text],N'%vEmployee%')
AND [sqlserver].[session_id]=(70)
)
)
ADD TARGET package0.event_counter,
ADD TARGET package0.ring_buffer(SET max_events_limit=(10))
WITH (
MAX_MEMORY=4096 KB,
EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,
MAX_DISPATCH_LATENCY=30 SECONDS,
MAX_EVENT_SIZE=0 KB,
MEMORY_PARTITION_MODE=NONE,
TRACK_CAUSALITY=OFF,
STARTUP_STATE=OFF
);

SQL Random DBConcurrencyException Errors - VB.Net

I have an application and a SQL Server database that 6 users have access to. Occasionally they will get a DBConcurrencyException error. By occasionally I mean I get a copy of their error a few times a week at most, and usually not from the same user. I have determined that it is NOT because another user has also updated the same "record." I have a feeling it has something to do with the conversion of decimals(?) between the server and the user's workstation. However, I cannot replicate the error.
I have tried updating a record from the application at the same time as from the server. Whichever one finishes first wins, but there is NO error. Because I cannot replicate this error, and don't know how to "force" it, I can't come up with a cause or a solution.
So 2 questions:
Is my thinking that the error is possibly due to data conversion plausible? If so, how do I test for and/or correct this issue?
Is there a way I can force or fake the exception so I can build a proper solution? I'd like to know exactly WHY the error is happening before I start trying to force updates into the server.
Not sure what code would be helpful here, but I am using SqlDataAdapter.Update to update the server with changes.
Build update query:
SQLcmd = New SqlCommand("
    UPDATE [ACL].data_Demog_BGL SET
         [Offline] = @Offline
        ,[ID_BGL_Assay] = @Assay
        ,[Barcode_Number] = @BarNum
        ,[Result] = @Result
        ,[Result_Type] = @RType
        ,[ID_Test_Outcome] = @TOutcome
        ,[CLIR_Request] = @CReq
        ,[CLIR_Entry] = @CEntry
        ,[Tasklist] = @Tasklist
        ,[Sequence] = @Seq
        ,[CLIR_Date] = @CDate
    WHERE ID_Demog_BGL = @ID
    ", Vars.sqlConnACL)

With SQLcmd.Parameters
    .Add("@Offline", SqlDbType.Bit, 0, "Offline")
    .Add("@Assay", SqlDbType.Int, 0, "ID_BGL_Assay")
    .Add("@BarNum", SqlDbType.NVarChar, 255, "Barcode_Number")
    .Add("@Result", SqlDbType.NVarChar, 255, "Result")
    .Add("@RType", SqlDbType.NVarChar, 255, "Result_Type")
    .Add("@TOutcome", SqlDbType.Int, 0, "ID_Test_Outcome")
    .Add("@CReq", SqlDbType.Bit, 0, "CLIR_Request")
    .Add("@CEntry", SqlDbType.NVarChar, 255, "CLIR_Entry")
    .Add("@Tasklist", SqlDbType.NVarChar, 255, "Tasklist")
    .Add("@Seq", SqlDbType.NVarChar, 255, "Sequence")
    .Add("@CDate", SqlDbType.Date, 0, "CLIR_Date")
    .Add("@ID", SqlDbType.Int, 0, "ID_Demog_BGL")
End With

da_BGLAssay.UpdateCommand = SQLcmd
and...
Me.Validate()
bsBGL.EndEdit()
da_BGLAssay.Update(dt_BGLAssay)
da_BGLAssay.Fill(dt_BGLAssay)
SQL Server has a few options for tracking changes to tables, and some techniques for finding out who deleted data. The problem is that most apps these days roll their own user system and everyone (or the web server) uses a common login, so that might not help much...
If you're fortunate enough to have multiple DB users you might be set already: https://www.mssqltips.com/sqlservertip/3090/how-to-find-user-who-ran-drop-or-delete-statements-on-your-sql-server-objects/ or https://dba.stackexchange.com/questions/4269/how-to-find-out-who-deleted-some-data-sql-server
If you rolled your own user system then you might need to roll out some logging of your own. It could certainly be programmatic and minimal effort, but what sort of effort depends on how your data layer is implemented.
You could also put a trigger on the table to log the time and what data is deleted. If these records are disappearing when the office is closed, perhaps some automated job is responsible. If you have a table of login and logout times (relatively trivial to add), it can help determine the person.
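If you go the trigger route, here is a minimal sketch of a delete-logging trigger against the table from the question; the audit table, its extra columns, and the use of SUSER_SNAME() are illustrative additions, not something from the original post:

-- Hypothetical audit table for deleted rows.
CREATE TABLE [ACL].data_Demog_BGL_DeleteLog
(
    DeletedAt    datetime2     NOT NULL DEFAULT SYSDATETIME(),
    DeletedBy    nvarchar(128) NOT NULL DEFAULT SUSER_SNAME(),
    ID_Demog_BGL int           NOT NULL,
    Result       nvarchar(255) NULL,
    Tasklist     nvarchar(255) NULL
);
GO

CREATE TRIGGER trg_data_Demog_BGL_LogDeletes
ON [ACL].data_Demog_BGL
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Copy the key and a few columns of each deleted row into the log.
    INSERT INTO [ACL].data_Demog_BGL_DeleteLog (ID_Demog_BGL, Result, Tasklist)
    SELECT d.ID_Demog_BGL, d.Result, d.Tasklist
    FROM deleted AS d;
END;
GO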
Handling the DBConcurrencyException is also possible: when you catch it, in your case you know the record has been deleted. Just offer to reinstate it and, if the user says yes, change the RowState from Modified to Added and save it again, so the INSERT query is used rather than the UPDATE query.
Note that if you're just doing Update(datatable) then the adapter could be UPDATEing/INSERTing/DELETEing hundreds of rows and you might not know which one caused the exception. You can go row by row in a loop instead.
Oh, which just made me think of something else: are you aware that even though the data adapter/table adapter method for saving changes in a DataTable is called Update(), it will call INSERT/UPDATE/DELETE according to whether each RowState is Added/Modified/Deleted? I ask because some people think that Update() only does updates, but if your DataTable has some deleted rows, those rows will be removed from the DB when you call Update().
Update() would have been better off called SaveChanges. But yes, just in case you didn't know: it could be a cause of your deletes, if stray records in your DataTable are marked as deleted and you call Update() intending to save changes to a modified row.

Is there any event in the Event Library by which we can identify a fragmented index using Extended Events?

I am new to Extended Events; however, I got to know about them by reading some articles like this one.
My question: is there any event in the Event Library by which we can identify a fragmented index?
Because I got only
I am using SQL Server 2014.
Thanks in advance.
Not directly. Fragmentation exists when the physical order of an index doesn't match the logical order, and that happens when data needs to go onto a page that has no room for it, causing a page split. There is an event for page splits; however, I wouldn't use it to track fragmentation in the general case, since that event exists more for tracking activity during one-off operations. If you want to look at fragmentation, take a look at sys.dm_db_index_physical_stats.
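For example, a minimal sketch of such a check against the current database (the fragmentation and page-count thresholds are just illustrative):

-- Average fragmentation per index in the current database (LIMITED mode is the cheapest scan).
SELECT
    OBJECT_NAME(ips.object_id)          AS table_name,
    i.name                              AS index_name,
    ips.index_type_desc,
    ips.avg_fragmentation_in_percent,
    ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10   -- illustrative threshold
  AND ips.page_count > 1000                   -- ignore tiny indexes
ORDER BY ips.avg_fragmentation_in_percent DESC;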
To add to Ben Thul's answer, you can track page splits using Extended Events and thus track fragmentation indirectly. Check this awesome article by Paul Randal to familiarize yourself with the LOP_DELETE_SPLIT log operation, and then create a session that will look like this:
CREATE EVENT SESSION [Page Splits] ON SERVER
ADD EVENT sqlserver.transaction_log(SET collect_database_name = 1
WHERE (operation = $LOP_DELETE_ID$) ) --LOP_DELETE_SPLIT*
ADD TARGET package0.event_file(SET FILENAME = N'PageSplitsOutput.xel',MAX_FILE_SIZE = 200, MAX_ROLLOVER_FILES = 2, INCREMENT = 20)
WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=OFF,STARTUP_STATE=OFF);
GO
And fill $LOP_DELETE_ID$ with the result from this:
SELECT *
FROM sys.dm_xe_map_values
WHERE name = 'log_op'
AND map_value = 'LOP_DELETE_SPLIT';
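The map_key you get back varies between builds, so don't copy a number from another server. Purely as an illustration, if the query returned a map_key of 11, the placeholder line in the session above would become:

WHERE (operation = 11) ) --LOP_DELETE_SPLIT on this particular instance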

How to compare the same db at new moments in time

When developing client/server applications in Delphi + SQL Server, I continuously face the problem of getting an exact report of what an action changed in the DB.
A simple example is:
BEFORE: start "capturing" the DB
user changes a value in an editbox
the user presses a "save" button
AFTER: stop "capturing" the DB
I would like to have a tool that compares the before and after status (I capture the BEFORE and AFTER status manually).
I googled for this kind of tool, but all I found are tools for doing data or schema comparison between different data sources.
Thanks.
The following is an extract from an application we have. This code is in a BeforePost event handler that is linked to all of the Dataset components in the application; the linking is done in code, as there are a lot of datasets. It doesn't actually log the changes (it just lists the changed fields), but it should be simple enough to adapt to your objective. I don't know if this is exactly right for what you are trying to achieve, since you asked for a tool, but it would be an effective way of creating a report of all changes.
CurrentReport := Format('Table %s modified', [DataSet.Name]);
for i := 0 to DataSet.FieldCount - 1 do
begin
  XField := DataSet.Fields[i];
  if (XField.FieldKind = fkData) and (XField.Value <> XField.OldValue) then
  begin
    CurrentReport := CurrentReport + Format(', %s changed', [XField.FieldName]);
  end;
end;
Note that our code collects the report but only logs it after the post has completed successfully.
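If you'd rather do the comparison on the SQL Server side, one low-tech option is to snapshot the tables of interest before the action and diff them afterwards with EXCEPT. A minimal sketch, assuming a hypothetical dbo.Customers table:

-- BEFORE: capture a snapshot of the table(s) you care about.
SELECT * INTO #Customers_Before FROM dbo.Customers;

-- ... the user performs the action in the application ...

-- AFTER: rows that are new or were changed.
SELECT * FROM dbo.Customers
EXCEPT
SELECT * FROM #Customers_Before;

-- Rows that were changed or deleted (their old versions).
SELECT * FROM #Customers_Before
EXCEPT
SELECT * FROM dbo.Customers;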

Why is qry.Post executed in asynchronous mode?

Recently I ran into a strange problem; see the code snippet below:
var
  sqlCommand: string;
  connection: TADOConnection;
  qry: TADOQuery;
begin
  connection := TADOConnection.Create(nil);
  try
    connection.ConnectionString := 'Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Test.MDB;Persist Security Info=False';
    connection.Open();
    qry := TADOQuery.Create(nil);
    try
      qry.Connection := connection;
      qry.SQL.Text := 'Select * from aaa';
      qry.Open;
      qry.Append;
      qry.FieldByName('TestField1').AsString := 'test';
      qry.Post;
      beep;
    finally
      qry.Free;
    end;
  finally
    connection.Free;
  end;
end;
First, create a new Access database named Test.mdb and put it in the directory of this test project. In it, create a table named aaa that has a single text field named TestField1.
Set a breakpoint at the "beep" line, then launch the test application under the IDE in debug mode. When the IDE stops at the breakpoint (qry.Post has been executed), use Microsoft Access to open Test.mdb and open table aaa: you will find no changes in table aaa. If you let the IDE continue running by pressing F9, you will find a new record has been inserted into table aaa; but if you press Ctrl+F2 to terminate the application at the breakpoint, no record is inserted into table aaa. Normally, a new record should be inserted into table aaa after qry.Post has executed.
Who can explain this problem? It has troubled me for a long time. Thanks!
BTW, the IDE is Delphi 2010, and the Access .mdb file was created with Microsoft Access 2007 under Windows 7.
Access won't show you records from transactions that haven't been committed yet. At the point where you pause your program, the implicit transaction created by the connection hasn't been committed yet. I haven't experimented, but my guess would be that the implicit transaction is committed after you free the query, so if you pause just after that, you should see your record in MS Access.
After more information from Ryan (see his answer to his own question), I did a little more investigating.
Having a primary key (autonumber or otherwise) doesn't seem to affect the behaviour.
Table with autonumber column as primary key:
connection.Execute('insert into aaa (TestField1) values (''Test'')');
connection.Execute('select * from aaa');
connection.Execute('delete * from aaa');
beep;
finally
connection.Free;
end;
Stopping on the "select" does not show the new record.
Stopping on the "delete" shows the new record.
Stopping on the "beep" still shows all records in the table even after repeated refresh's.
Stopping on the "connection.Free" shows no more records in the table. Huh?
Stopping on a "select" inserted between the "delete" and the "beep" shows no more records in the table.
Same table
connection.Execute('insert into aaa (TestField1) values (''Test'')');
beep;
connection.Execute('delete * from aaa');
beep;
beep;
Stopping on each statement shows that Access doesn't receive the "command" until at least one other statement has been executed. In other words, the beep after the "Execute" statement must have been processed before the statement's effect shows up in Access (and it may take a couple of refreshes to show up; the first refresh isn't always enough). If you stop on the first beep after the "Execute" statement, nothing has happened in Access, and nothing will if you reset the program without executing any other statements.
Stepping into connection.Execute (with "Use debug DCUs" on): the effect of the executed SQL statement is now visible in Access on return to the beep. Actually, it is visible much earlier; for example, stepping into the "delete" statement, the record becomes marked #deleted somewhere still in the ADODB code.
In fact, when stepping through the AdoDb code, the record becomes visible in Access when stopped in the OnExecuteComplete handler. Not when stopped on the "begin", but when stopped on the "if Assigned" immediately thereafter. The same applies to the delete statement: its effect becomes visible in Access when stopped on the if statement in the OnExecuteComplete handler in AdoDb.
ADO does have an ExecuteOption to execute statements asynchronously. It wasn't in effect during any of this (it's not included by default). And while we are dealing with an out-of-process COM server and with callbacks such as the OnExecuteComplete handler, that handler was executed before returning to the statement right after the ConnectionObject.Execute call in the TAdoConnection.Execute method in AdoDb.
All in all, I think it isn't so much a matter of synchronous or asynchronous execution, but more a matter of when references are released (we are dealing with COM and interface reference counting), or of thread and process timing issues (in the app, in Access, and between them), or a combination thereof.
And the debugger may just be muddling things more than clarifying them. It would be interesting to see what happens in D2010 with its single-thread debugging capabilities, but I haven't got it available where I am (now and for the next two weeks).
First, Marjan, thank you for your answer. I am quite sure I clicked the refresh button at the time, but still nothing had changed.
After many experiments, I found that if I added an auto-increment ID to the table as a primary key, this strange behaviour would not happen. However, even after doing that, there is another strange behaviour, which I will show in the code snippet below:
procedure TForm9.btn1Click(Sender: TObject);
var
  sqlCommand: string;
  connection: TADOConnection;
begin
  connection := TADOConnection.Create(nil);
  try
    connection.ConnectionString := 'Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Test.MDB;Persist Security Info=False';
    connection.Open();
    connection.Execute('insert into aaa (TestField1) values (''Test'')');
    connection.Execute('select * from aaa');
    connection.Execute('delete * from aaa'); // breakpoint 1
    beep; // breakpoint 2
  finally
    connection.Free;
  end;
end;
Put two breakpoints at the "delete" line and the "beep" line. When the code stops at breakpoint 1, refresh the database and you will find the record has been inserted. Continue running; when the code stops at breakpoint 2, you will find the record is still there. If at this point you press Ctrl+F2, the record is not deleted. If connection.Execute were a truly synchronous procedure, this should not happen. Sorry for checking your answer so late; I was away for our Dragon Boat Festival.
Marjan, thanks for your response again, but I can't accept this behaviour of the connection engine. Today I found something useful on the MSDN website, see:
http://msdn.microsoft.com/en-us/library/ms719649(v=VS.85).aspx
Fortunately, I have resolved the problem based on that article. The default value of the property "Jet OLEDB:Implicit Commit Sync" is actually false, and according to the explanation of this property, false implies that the implicit transaction will use asynchronous mode. So what we can do is set this property to true using a code snippet like the one below:
connection.Properties.Item['Jet OLEDB:Implicit Commit Sync'].Value := true;
BTW, according to that article, this property can only be set through the Properties collection of the connection object; if it is set in the connection string instead, an error will occur.
