I have a requirement to capture which users are hitting a view in a database. My initial thought was to use Extended Events, but when I test, nothing is being captured.
This is what I have so far. Any suggestions would be greatly appreciated.
-- Test 1
CREATE EVENT SESSION [Track_View] ON SERVER
ADD EVENT sqlserver.module_start
(
SET collect_statement=1
ACTION
(
sqlserver.client_app_name,
sqlserver.database_name,
sqlserver.session_server_principal_name,
sqlserver.username,
sqlserver.sql_text,
sqlserver.tsql_stack
)
WHERE
(
[object_type]='V '
AND [object_name]=N'MyView'
)
)
ADD TARGET package0.histogram(SET filtering_event_name=N'sqlserver.module_start',source=N'object_name',source_type=(0)),
ADD TARGET package0.event_file(SET filename=N'C:\Event_Trace\XE_Track_view.xel',max_rollover_files=(20))
WITH (MAX_MEMORY=1048576 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=5 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=PER_CPU,TRACK_CAUSALITY=ON,STARTUP_STATE=ON)
GO
-- Test 2
CREATE EVENT SESSION [Track_View] ON SERVER
ADD EVENT sqlserver.sql_batch_starting(
ACTION(sqlserver.client_app_name,sqlserver.client_hostname,sqlserver.database_name,sqlserver.nt_username,sqlserver.sql_text,sqlserver.username)
WHERE ([package0].[equal_boolean]([sqlserver].[is_system],(0))
AND [sqlserver].[like_i_sql_unicode_string]([sqlserver].[sql_text],N'myview')
AND [sqlserver].[equal_i_sql_unicode_string]([sqlserver].[database_name],N'myDB')
))
ADD TARGET package0.event_file(SET filename=N'C:\Event_Trace\XE_Track_view.xel',max_rollover_files=(20))
WITH (MAX_MEMORY=1048576 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=5 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=PER_CPU,TRACK_CAUSALITY=ON,STARTUP_STATE=ON)
GO
First, let's break down why each of these XE sessions is not capturing access to the view, and then let's see if a small change can make one of them work.
The session you've labeled "test 1" captures the sqlserver.module_start event. Even though views are modules (which I didn't believe before writing up this answer, but the documentation for sys.sql_modules says they are), they don't start and end the way a stored procedure or a function does, so the event never fires.
The session that you've labeled as "test 2" has a subtle error. Let's look at this predicate specifically:
[sqlserver].[like_i_sql_unicode_string]([sqlserver].[sql_text],N'myview')
Because it lacks any wildcards, this search criterion effectively says [sqlserver].[sql_text] = N'myview'. Changing it to N'%myview%' should suffice; in my testing†, it was sufficient.
One last thing I will note is that XE may not be sufficient to capture all uses of a given object. Take, for instance, a situation where a base object is referenced indirectly through a synonym. I've had good luck using the SQL Audit feature (which, ironically, uses XE under the hood) to track object use. It's a bit more to set up, but you get most (if not all) of the execution context you're looking for. For your use case you'd want to audit some or all of the CRUD operations against the view.
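If you go the audit route, a minimal sketch looks something like this (the audit names, file path, and object names below are placeholders, not from the question; adjust them to your environment):

USE [master];
CREATE SERVER AUDIT [ViewAccessAudit]
TO FILE (FILEPATH = N'C:\Audit\');
ALTER SERVER AUDIT [ViewAccessAudit] WITH (STATE = ON);
GO
USE [myDB];
CREATE DATABASE AUDIT SPECIFICATION [ViewAccessAuditSpec]
FOR SERVER AUDIT [ViewAccessAudit]
ADD (SELECT ON OBJECT::[dbo].[MyView] BY [public])
WITH (STATE = ON);
GO
-- Read back what was captured, including who ran the statement
SELECT event_time, server_principal_name, database_name, statement
FROM sys.fn_get_audit_file(N'C:\Audit\*.sqlaudit', DEFAULT, DEFAULT);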
† here is the XE session that I used in my testing. I used my local copy of AdventureWorks (which is why it references vEmployee) and added a predicate for the session from which I was issuing the query (to avoid spamming the XE session). But otherwise the approach is identical.
CREATE EVENT SESSION [Track_View] ON SERVER
ADD EVENT sqlserver.sql_batch_starting(
ACTION(
sqlserver.client_app_name,
sqlserver.client_hostname,
sqlserver.database_name,
sqlserver.nt_username,
sqlserver.sql_text,
sqlserver.username
)
WHERE (
[package0].[equal_boolean]([sqlserver].[is_system],(0))
AND [sqlserver].[like_i_sql_unicode_string]([sqlserver].[sql_text],N'%vEmployee%')
AND [sqlserver].[session_id]=(70)
)
)
ADD TARGET package0.event_counter,
ADD TARGET package0.ring_buffer(SET max_events_limit=(10))
WITH (
MAX_MEMORY=4096 KB,
EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,
MAX_DISPATCH_LATENCY=30 SECONDS,
MAX_EVENT_SIZE=0 KB,
MEMORY_PARTITION_MODE=NONE,
TRACK_CAUSALITY=OFF,
STARTUP_STATE=OFF
);
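To see what the session captured, you can pull the ring_buffer target's XML back out of the DMVs (a quick sketch; the session name matches the one above):

SELECT CAST(t.target_data AS xml) AS captured_events
FROM sys.dm_xe_sessions AS s
INNER JOIN sys.dm_xe_session_targets AS t
        ON s.address = t.event_session_address
WHERE s.name = N'Track_View'
  AND t.target_name = N'ring_buffer';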
Related
I run a fairly basic XEvent session on Azure SQL DB, using a ring_buffer target, that looks for errors with severity over 10, based on a post Brent Ozar made. However, the session stops on its own, and I'm not sure why. I suspect it's filling up, even though all the documentation says the ring buffer is FIFO and will drop the oldest events. Am I missing something? Do I need to set something differently? Thanks!
Update: weirdly, I made a much smaller test, with max_memory = 200 and a much bigger amount of data in the failing code, to try to force it to stop/die, but it kept looping as I expected. So I'm still confused about why it's stopping, but it doesn't seem to be because it's filling up.
CREATE EVENT SESSION severity_10plus_errors_XE
ON DATABASE
ADD EVENT sqlserver.error_reported
(
ACTION(sqlserver.client_app_name,sqlserver.client_hostname,sqlserver.database_id,sqlserver.sql_text,sqlserver.tsql_stack,sqlserver.username)
WHERE ([severity] > 10)
)
ADD TARGET package0.ring_buffer
(SET max_memory = 10000) -- units of KB
WITH (MAX_DISPATCH_LATENCY = 60 SECONDS)
GO
ALTER EVENT SESSION severity_10plus_errors_XE
ON DATABASE
STATE = START;
GO
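One way to investigate (a sketch, using the database-scoped XE DMVs that Azure SQL DB exposes) is to check whether the session is still registered and peek at the ring buffer:

SELECT s.name
FROM sys.dm_xe_database_sessions AS s
WHERE s.name = N'severity_10plus_errors_XE';

SELECT CAST(t.target_data AS xml) AS ring_buffer_xml
FROM sys.dm_xe_database_sessions AS s
INNER JOIN sys.dm_xe_database_session_targets AS t
        ON s.address = t.event_session_address
WHERE s.name = N'severity_10plus_errors_XE'
  AND t.target_name = N'ring_buffer';

If the first query returns no row, the session has actually stopped, rather than merely dropping old events.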
Code migration due to performance issues:
SQL Server LIKE condition (before)
SQL Server Full-Text Search --> CONTAINS (before)
Elasticsearch (currently)
Achieved so far:
We have a web page created in ASP.NET Core with an autocomplete drop-down of 2.5+ million companies indexed in Elasticsearch: https://www.99corporates.com/
Due to performance issues we have successfully shifted our code from SQL Server Full-Text Search to Elasticsearch, using NEST v7.2.1 and Elasticsearch.Net v7.2.1 in our .NET code.
Still looking for a solution:
If the user does not select a company from the autocomplete list and simply enters a few characters and clicks Go, a list should be displayed, which we previously produced using SQL Server Full-Text Search's CONTAINS.
Can we call the ASP.NET web service we have created by using SQL CLR, with code like SELECT * FROM dbo.Table WHERE Name IN( dbo.SQLWebRequest('') )?
[System.Web.Script.Services.ScriptMethod()]
[System.Web.Services.WebMethod]
public static List<string> SearchCompany(string prefixText, int count)
{
}
Is there any better or alternative option?
While that solution (i.e. the SQL-APIConsumer SQLCLR project) "works", it is not scalable. It also requires setting the database to TRUSTWORTHY ON (a security risk), and it loads a few assemblies as UNSAFE, such as Json.NET. That is risky if any of them use static variables for caching and expect each caller to be isolated in its own App Domain, because SQLCLR is a single, shared App Domain: static variables are shared across all callers, and multiple concurrent threads can cause race conditions. This is not to say it is definitely happening, since I haven't seen the code; but if you haven't reviewed the code or tested with multiple concurrent threads either, then it's a gamble with regard to stability and predictable, expected behavior.
To a slight degree I am biased, given that I sell a SQLCLR library, SQL#, whose Full version contains a stored procedure that also does this but: a) handles security properly via signatures (it does not enable TRUSTWORTHY); b) allows for handling scalability; c) does not require any UNSAFE assemblies; and d) handles more scenarios (better header handling, etc.). It doesn't handle any JSON; it just returns the web service response, and you can unpack that using OPENJSON or something else if you prefer. (Yes, there is a free version of SQL#, but it does not contain INET_GetWebPages.)
HOWEVER, I don't think SQLCLR is a good fit for this scenario in the first place. In your first two versions of this project (using LIKE and then CONTAINS) it made sense to send the user input directly into the query. But now that you are using a web service to get a list of matching values from that user input, you are no longer confined to that approach. You can, and should, handle the web service / Elastic Search portion of this separately, in the app layer.
Rather than passing the user input into the query, only to have the query pause to get that list of 0 or more matching values, do the following:
1. Before executing any query, get the list of matching values directly in the app layer.
2. If no matching values are returned, skip the database call entirely; you already have your answer and can respond immediately to the user (a much faster response time when there are no matches).
3. If there are matches, execute the search stored procedure, sending that list of matches as-is via a table-valued parameter (TVP), which becomes a table variable in the stored procedure. Use that table variable to INNER JOIN against the table rather than building an IN list, since IN lists do not scale well. Also, be sure to send the TVP values to SQL Server using the IEnumerable<SqlDataRecord> method, not the DataTable approach, which merely wastes CPU, time, and memory.
For example code on how to accomplish this correctly, please see my answer to Pass Dictionary to Stored Procedure T-SQL
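On the SQL Server side, the TVP setup might look something like this (the type, procedure, and column names here are made up for illustration; match them to your schema):

CREATE TYPE dbo.CompanyList AS TABLE (CompanyName NVARCHAR(100) NOT NULL);
GO
CREATE PROCEDURE dbo.Company_Search
    @Companies dbo.CompanyList READONLY
AS
BEGIN
    SET NOCOUNT ON;
    -- INNER JOIN against the TVP instead of building an IN list
    SELECT t.*
    FROM dbo.[Table] AS t
    INNER JOIN @Companies AS c
            ON c.CompanyName = t.[Name];
END;
GO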
In C#-style pseudo-code, this would be something along the lines of the following:
List<string> companies = SearchCompany(PrefixText, Count);
if (companies.Count == 0)
{
    Response.Write("Nope");
}
else
{
    using(SqlConnection db = new SqlConnection(connectionString))
    {
        using(SqlCommand batch = db.CreateCommand())
        {
            batch.CommandType = CommandType.StoredProcedure;
            batch.CommandText = "ProcName";
            SqlParameter tvp = new SqlParameter("ParamName", SqlDbType.Structured);
            tvp.Value = MethodThatYieldReturnsList(companies);
            batch.Parameters.Add(tvp);
            db.Open();
            using(SqlDataReader results = batch.ExecuteReader())
            {
                if (results.HasRows)
                {
                    // deal with results
                    Response.Write(results....);
                }
            }
        }
    }
}

// A minimal sketch of the streaming method referenced above (assumes a
// single-column NVARCHAR TVP type on the SQL Server side):
private static IEnumerable<SqlDataRecord> MethodThatYieldReturnsList(List<string> companies)
{
    // Must match the column definition of the TVP type
    SqlMetaData[] schema = { new SqlMetaData("CompanyName", SqlDbType.NVarChar, 100) };
    foreach (string company in companies)
    {
        SqlDataRecord record = new SqlDataRecord(schema);
        record.SetString(0, company);
        yield return record; // streamed row-by-row; no DataTable materialized
    }
}
Done. Got the solution.
Used SQL CLR https://github.com/geral2/SQL-APIConsumer
exec [dbo].[APICaller_POST]
     @URL = 'https://www.-----/SearchCompany'
    ,@JsonBody = '{"searchText":"GOOG","count":10}'
Let me know if there are any other / better options to achieve this.
I am new to Extended Events; however, I got to know about them by reading some articles like this one.
My question: is there any event in the event library by which we can identify a fragmented index?
I am using SQL Server 2014.
Thanks in advance
Not directly. Fragmentation exists when the physical order of an index doesn't match its logical order, and that happens when data needs to be put on a page that has no room for it, causing a page split. There is an event for page splits; however, I wouldn't use it to track fragmentation in the general case. That event exists more for tracking activity in one-off operations. If you want to look at fragmentation, take a look at sys.dm_db_index_physical_stats.
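For example, here is a quick sketch of checking fragmentation across the current database (the 10% threshold, page-count cutoff, and the LIMITED scan mode are just illustrative choices):

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
INNER JOIN sys.indexes AS i
        ON i.object_id = ips.object_id
       AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10
  AND ips.page_count > 1000 -- tiny indexes are rarely worth worrying about
ORDER BY ips.avg_fragmentation_in_percent DESC;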
To add to @Ben Thuls's answer, you may track page splits using Extended Events and thus track fragmentation indirectly. Check this awesome article by Paul Randal to familiarize yourself with the LOP_DELETE_SPLIT log operation, and then create a session that will look like this:
CREATE EVENT SESSION [Page Splits] ON SERVER
ADD EVENT sqlserver.transaction_log(SET collect_database_name = 1
WHERE (operation = $LOP_DELETE_ID$) ) --LOP_DELETE_SPLIT*
ADD TARGET package0.event_file(SET FILENAME = N'PageSplitsOutput.xel',MAX_FILE_SIZE = 200, MAX_ROLLOVER_FILES = 2, INCREMENT = 20)
WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=OFF,STARTUP_STATE=OFF);
GO
And fill $LOP_DELETE_ID$ with the result from this:
SELECT *
FROM sys.dm_xe_map_values
WHERE name = 'log_op'
AND map_value = 'LOP_DELETE_SPLIT';
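For example, if that query returns map_key = 11, the predicate becomes WHERE (operation = (11)). Look the value up on each server rather than hard-coding it, since map keys can differ between versions and builds.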
I have three TQueries: qy_master, qy_detail, and qy_detail2. The master of qy_detail2 is qy_detail, and the master of qy_detail is qy_master. All queries have corresponding data sources. I placed the queries in a data module and activate them when the data module is created.
In another form I use those queries. When I try qy_detail.Open it says 'EDBEngine error: Table is read-only', but there was no problem opening qy_detail previously. I don't modify the SQL statements, and I don't know why this error happens.
I also tried qy_detail.Active := True; that statement raises the same error.
I am using SQL Server 2005 connected via the BDE and an ODBC data source.
Please, can anyone help me fix this?
Have you set TQuery.RequestLive = True? RequestLive is False by default, forcing the query to always return a read-only result set.
From the documentation:
A TQuery can return two kinds of result sets: "live" as with the TTable component (users can edit data with data controls, and when a call to Post occurs changes are sent to the database), and "read only" for display purposes only. To request a live result set, set a query component's RequestLive property to True...
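So in this case, setting qy_detail.RequestLive := True in the data module before the queries are opened should make the result set editable again, assuming the query itself is updatable; the BDE only grants live result sets to sufficiently simple queries (e.g. single-table, without joins or aggregates).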
Recently I met a strange problem; see the code snips below:
var
sqlCommand: string;
connection: TADOConnection;
qry: TADOQuery;
begin
connection := TADOConnection.Create(nil);
try
connection.ConnectionString := 'Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Test.MDB;Persist Security Info=False';
connection.Open();
qry := TADOQuery.Create(nil);
try
qry.Connection := connection;
qry.SQL.Text := 'Select * from aaa';
qry.Open;
qry.Append;
qry.FieldByName('TestField1').AsString := 'test';
qry.Post;
beep;
finally
qry.Free;
end;
finally
connection.Free;
end;
end;
First, create a new Access database named test.mdb and put it in the directory of this test project. In it, create a table named aaa with a single text field named TestField1.
Set a breakpoint at the "beep" line, then launch the test application in IDE debug mode. When the IDE stops at the breakpoint (qry.Post has been executed), open test.mdb in Microsoft Access and look at table aaa: you will find no changes in it. If you let the application continue by pressing F9, a new record is inserted into aaa; but if you press Ctrl+F2 to terminate the application at the breakpoint, no record is inserted. Under normal circumstances, a new record should be in table aaa as soon as qry.Post has executed.
Can anyone explain this problem? It has troubled me for a long time. Thanks!
BTW, the IDE is Delphi 2010, and the Access .mdb file was created by Microsoft Access 2007 under Windows 7.
Access won't show you records from transactions that haven't been committed yet, and at the point where you pause your program, the implicit transaction created by the connection hasn't been committed. I haven't experimented, but my guess would be that the implicit transaction is committed when you free the query. So if you pause just after that, you should see your record in MS Access.
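If you need the row to be visible right away, wrapping the edit in an explicit transaction (TADOConnection.BeginTrans before the Post and CommitTrans after it) should force the write out instead of leaving it to the implicit transaction.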
After more information from Ryan (see his answer to himself), I did a little more investigating.
Having a primary key (autonumber or otherwise) doesn't seem to affect the behaviour.
Table with autonumber column as primary key
connection.Execute('insert into aaa (TestField1) values (''Test'')');
connection.Execute('select * from aaa');
connection.Execute('delete * from aaa');
beep;
finally
connection.Free;
end;
Stopping on the "select" does not show the new record.
Stopping on the "delete" shows the new record.
Stopping on the "beep" still shows all records in the table, even after repeated refreshes.
Stopping on the "connection.Free" shows no more records in the table. Huh?
Stopping on a "select" inserted between the "delete" and the "beep" shows no more records in the table.
Same table
connection.Execute('insert into aaa (TestField1) values (''Test'')');
beep;
connection.Execute('delete * from aaa');
beep;
beep;
Stopping on each statement shows that Access doesn't receive the "command" until at least one other statement has been executed. In other words: the beep after the "Execute" statement must have been processed before the statement is processed by Access (it may take a couple of refreshes to show up; the first refresh isn't always enough). If you stop on the first beep after the "Execute" statement, nothing has happened in Access, and nothing will if you reset the program without executing any other statements.
Stepping into connection.Execute (with "Use debug DCUs" on): the effect of the executed SQL statement is now visible in Access on return to the beep. Actually, it is visible much earlier; for example, stepping into the "delete" statement, the record becomes marked #Deleted somewhere still in the ADODB code.
In fact, when stepping through the ADODB code, the record becomes visible in Access when stopped in the OnExecuteComplete handler: not when stopped on the "begin", but when stopped on the "if Assigned" immediately thereafter. The same applies to the delete statement; its effect becomes visible in Access when stopped on the if statement in the OnExecuteComplete handler in AdoDb.
ADO does have an ExecuteOption to execute statements asynchronously. It wasn't in effect during all this (it's not included by default). And while we are dealing with an out-of-process COM server and with callbacks such as the OnExecuteComplete handler, that handler was executed before returning to the statement right after the ConnectionObject.Execute call in the TAdoConnection.Execute method in AdoDb.
All in all I think it isn't so much a matter of synchronous or asynchronous execution, but more a matter of when references are released (we are dealing with COM and interface reference counting), or of thread and process timing issues (in the app, in Access, and between them), or a combination thereof.
And the debugger may just be muddling things more than clarifying them. It would be interesting to see what happens in D2010 with its single-thread debugging capabilities, but I haven't got it available where I am (now and for the next two weeks).
First, Marjan, thank you for your answer. I am very sure I clicked the refresh button at the time, but still nothing changed...
After many experiments, I found that if I added an auto-increment ID field to the table as the primary key, this strange behaviour did not happen. But even having done that, there is another strange behaviour, shown in the code snips below:
procedure TForm9.btn1Click(Sender: TObject);
var
sqlCommand: string;
connection: TADOConnection;
begin
connection := TADOConnection.Create(nil);
try
connection.ConnectionString := 'Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Test.MDB;Persist Security Info=False';
connection.Open();
connection.Execute('insert into aaa (TestField1) values (''Test'')');
connection.Execute('select * from aaa');
connection.Execute('delete * from aaa'); // breakpoint 1
beep; // breakpoint2
finally
connection.Free;
end;
end;
Put two breakpoints, at the "delete" line and the "beep" line. When the code stops at breakpoint 1, refresh the database and you will find the record was inserted. Continue running; when the code stops at breakpoint 2, you will find the record is still there... If at this point you press Ctrl+F2, the record is never deleted... If connection.Execute were a truly synchronous procedure, this should not happen. Sorry for checking your answer so late; I was away for our Dragon Boat Festival.
Marjan, thanks for your response again, but I can't accept the way the connection engine processes this. Today I found something useful on the MSDN website:
http://msdn.microsoft.com/en-us/library/ms719649(v=VS.85).aspx
Fortunately, I have resolved the problem according to that article. The default value of the "Jet OLEDB:Implicit Commit Sync" property is actually false, which, per the property's description, means the implicit transaction commits in asynchronous mode. So what we can do is set this property to true using a code snip like the one below:
connection.Properties.Item['Jet OLEDB:Implicit Commit Sync'].Value := true;
BTW, according to that article, this property can only be set through the connection object's Properties collection; if it is set in the connection string, an error occurs.