How to find out every time a stored procedure has run - sql-server

Is there a way to find out every single time a stored procedure has run in the last X number of days? There are ways to see the last time it ran, but what if I want to see the last X number of times?
I couldn't find anything when I searched. Any suggestions?

I would suggest that you take advantage of the ability of Extended Events to capture a histogram. You can set it up like this:
CREATE EVENT SESSION [ProcedureExecutionCount]
ON SERVER
ADD EVENT sqlserver.rpc_completed
(
    SET collect_statement = (0)
    WHERE (
        [sqlserver].[equal_i_sql_unicode_string]([sqlserver].[database_name], N'AdventureWorks')
        AND [object_name] = N'AddressByCity'
    )
)
ADD TARGET package0.histogram
(
    SET filtering_event_name = N'sqlserver.rpc_completed',
        source = N'object_name',
        source_type = (0)
);
That session will do nothing except count the number of times that procedure, in that database, is executed. Nice, simple, clean and easy.
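The histogram target keeps its counts in memory, so to read them you shred the target's XML. Here's a sketch, assuming the session above has been started; the paths follow the documented HistogramTarget/Slot shape of the target data:

```sql
-- Read the current per-object execution counts from the histogram target.
SELECT
    h.slot.value('(value)[1]', 'nvarchar(128)') AS [object_name],
    h.slot.value('(@count)[1]', 'bigint')       AS execution_count
FROM sys.dm_xe_session_targets AS st
JOIN sys.dm_xe_sessions AS s
    ON s.[address] = st.event_session_address
CROSS APPLY (SELECT CAST(st.target_data AS xml)) AS x(target_data)
CROSS APPLY x.target_data.nodes('/HistogramTarget/Slot') AS h(slot)
WHERE s.name = N'ProcedureExecutionCount'
  AND st.target_name = N'histogram';
```

Note that the counts live only as long as the session is running; stopping the session discards them, so persist the results somewhere if you need history across restarts.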

There are several ways to do this. This article walks through the main options: https://www.mssqltips.com/sqlservertip/3259/several-methods-to-collect-sql-server-stored-procedure-execution-history/

Related

Azure SQL DB, Extended Events using Ring Buffer stop on their own after a while - why?

I run a fairly basic XEvent session on Azure SQL DB, using a ring buffer target, that looks for errors with severity over 10, based on a post Brent Ozar made. However, the session stops on its own, and I'm not sure why. I suspect it's filling up, even though all the documentation says the ring buffer is FIFO and will drop the oldest events. Am I missing something? Do I need to set something differently? Thanks!
Update: weirdly, I made a much smaller test, with max_memory = 200 and a much bigger amount of data in the failing code, to try to force it to stop/die, but I can see it looping as I expected. So I'm still confused about why it's stopping, but it doesn't seem to be because it's filling up.
CREATE EVENT SESSION severity_10plus_errors_XE
ON DATABASE
ADD EVENT sqlserver.error_reported
(
    ACTION
    (
        sqlserver.client_app_name,
        sqlserver.client_hostname,
        sqlserver.database_id,
        sqlserver.sql_text,
        sqlserver.tsql_stack,
        sqlserver.username
    )
    WHERE ([severity] > 10)
)
ADD TARGET package0.ring_buffer
(SET max_memory = 10000) -- Units of KB.
WITH (MAX_DISPATCH_LATENCY = 60 SECONDS)
GO
ALTER EVENT SESSION severity_10plus_errors_XE
ON DATABASE
STATE = START;
GO
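One way to tell a stopped session apart from an emptied ring buffer is to check the database-scoped XE DMVs. This is a sketch, assuming Azure SQL DB (where event sessions are database-scoped): a session that appears in the catalog view but not in the dm_ view is defined but no longer running.

```sql
-- Compare defined sessions against currently running ones.
SELECT ds.name,
       CASE WHEN rs.name IS NULL THEN 'stopped' ELSE 'running' END AS session_state
FROM sys.database_event_sessions AS ds
LEFT JOIN sys.dm_xe_database_sessions AS rs
    ON rs.name = ds.name
WHERE ds.name = N'severity_10plus_errors_XE';
```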

Extended event to capture who is running a view in SQL Server

I have a requirement to capture which users are hitting a view in a database. My initial thought was to use Extended Events, but for some reason nothing is being captured when I test.
This is what I have so far. Any suggestion would be greatly appreciated.
-- Test 1
CREATE EVENT SESSION [Track_View] ON SERVER
ADD EVENT sqlserver.module_start
(
SET collect_statement=1
ACTION
(
sqlserver.client_app_name,
sqlserver.database_name,
sqlserver.session_server_principal_name,
sqlserver.username,
sqlserver.sql_text,
sqlserver.tsql_stack
)
WHERE
(
[object_type]='V '
AND [object_name]=N'MyView'
)
)
ADD TARGET package0.histogram(SET filtering_event_name=N'sqlserver.module_start',source=N'object_name',source_type=(0)),
ADD TARGET package0.event_file(SET filename=N'C:\Event_Trace\XE_Track_view.xel',max_rollover_files=(20))
WITH (MAX_MEMORY=1048576 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=5 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=PER_CPU,TRACK_CAUSALITY=ON,STARTUP_STATE=ON)
GO
-- Test 2
CREATE EVENT SESSION [Track_View] ON SERVER
ADD EVENT sqlserver.sql_batch_starting(
ACTION(sqlserver.client_app_name,sqlserver.client_hostname,sqlserver.database_name,sqlserver.nt_username,sqlserver.sql_text,sqlserver.username)
WHERE ([package0].[equal_boolean]([sqlserver].[is_system],(0))
AND [sqlserver].[like_i_sql_unicode_string]([sqlserver].[sql_text],N'myview')
AND [sqlserver].[equal_i_sql_unicode_string]([sqlserver].[database_name],N'myDB')
))
ADD TARGET package0.event_file(SET filename=N'C:\Event_Trace\XE_Track_view.xel',max_rollover_files=(20))
WITH (MAX_MEMORY=1048576 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=5 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=PER_CPU,TRACK_CAUSALITY=ON,STARTUP_STATE=ON)
GO
First, let's break down why each of the XE sessions is not capturing access to the view, and then let's see if we can make a small change so that one of them does.
The session that you've labeled "test 1" captures the sqlserver.module_start event. Even though views are modules (which, before writing this answer, I didn't believe they were, but the documentation for sys.sql_modules says that they are), they don't start and end the way that, say, a stored procedure or a function does. So the event never fires.
The session that you've labeled as "test 2" has a subtle error. Let's look at this predicate specifically:
[sqlserver].[like_i_sql_unicode_string]([sqlserver].[sql_text],N'myview')
Because it lacks any wildcards in the search criteria, this effectively says [sqlserver].[sql_text] = N'myview'. Changing the search criteria to N'%myview%' should suffice. And in my testing†, that was sufficient.
One last thing I will note is that XE may not be sufficient to capture all uses of a given object. Take, for instance, a situation where a base object is referenced indirectly through a synonym. I've had good luck using the SQL Audit feature (which, ironically, uses XE under the hood) to track object use. It's a bit more to set up, but you get most (if not all) of what you're looking for as far as execution context. For your use case you'd want to audit some or all of the CRUD operations against the view.
† here is the XE session that I used in my testing. I used my local copy of AdventureWorks (which is why it references vEmployee) and added a predicate for the session from which I was issuing the query (to avoid spamming the XE session). But otherwise the approach is identical.
CREATE EVENT SESSION [Track_View] ON SERVER
ADD EVENT sqlserver.sql_batch_starting(
ACTION(
sqlserver.client_app_name,
sqlserver.client_hostname,
sqlserver.database_name,
sqlserver.nt_username,
sqlserver.sql_text,
sqlserver.username
)
WHERE (
[package0].[equal_boolean]([sqlserver].[is_system],(0))
AND [sqlserver].[like_i_sql_unicode_string]([sqlserver].[sql_text],N'%vEmployee%')
AND [sqlserver].[session_id]=(70)
)
)
ADD TARGET package0.event_counter,
ADD TARGET package0.ring_buffer(SET max_events_limit=(10))
WITH (
MAX_MEMORY=4096 KB,
EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,
MAX_DISPATCH_LATENCY=30 SECONDS,
MAX_EVENT_SIZE=0 KB,
MEMORY_PARTITION_MODE=NONE,
TRACK_CAUSALITY=OFF,
STARTUP_STATE=OFF
);
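For completeness, here is a sketch of pulling the captured events back out of the ring_buffer target, assuming the test session above is running; the paths follow the RingBufferTarget/event XML shape, and the action names match the ones added in the session:

```sql
-- Shred captured sql_batch_starting events out of the ring buffer.
SELECT
    r.ev.value('@timestamp', 'datetimeoffset')                          AS event_time,
    r.ev.value('(action[@name="username"]/value)[1]', 'nvarchar(128)')  AS [username],
    r.ev.value('(action[@name="sql_text"]/value)[1]', 'nvarchar(max)')  AS sql_text
FROM sys.dm_xe_session_targets AS st
JOIN sys.dm_xe_sessions AS s
    ON s.[address] = st.event_session_address
CROSS APPLY (SELECT CAST(st.target_data AS xml)) AS x(target_data)
CROSS APPLY x.target_data.nodes('/RingBufferTarget/event') AS r(ev)
WHERE s.name = N'Track_View'
  AND st.target_name = N'ring_buffer';
```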

Entity Framework: Max. number of "subqueries"?

My data model has an entity Person with 3 related (1:N) entities Jobs, Tasks and Dates.
My query looks like
var persons = (from x in context.Persons
               select new {
                   PersonId = x.Id,
                   JobNames = x.Jobs.Select(y => y.Name),
                   TaskDates = x.Tasks.Select(y => y.Date),
                   DateInfos = x.Dates.Select(y => y.Info)
               }).ToList();
Everything seems to work fine, but the lists JobNames, TaskDates and DateInfos are not all filled.
For example, TaskDates and DateInfos have the correct values, but JobNames stays empty. But when I remove TaskDates from the query, then JobNames is correctly filled.
So it seems that EF can only handle a limited number of these "subqueries"? Is this correct? If so, what is the maximum number of these "subqueries" for a single statement? Is there a way to work around this issue without making more than one call to the database?
(ps: I'm not entirely sure, but I seem to remember that this query worked in LINQ2SQL - could it be?)
UPDATE
This is driving me crazy. I tried to repro the issue from the ground up using a fresh, simple project (to post the entire piece of code here, not only an oversimplified example), and I found I wasn't able to repro it. It still happens within our existing code base (apparently there's more behind this problem, but I cannot share this closed code base, unfortunately).
After hours and hours of playing around I found the weirdest behavior:
- It works great when I don't SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; before calling the LINQ statement
- It also works great (independent of the above) when I don't use .Take() to get only the first X rows
- It also works great when I add additional .Where() clauses to cut the number of rows returned from SQL Server
I didn't find any comprehensible reason for this behavior, but I started to look at the SQL: although EF generates the exact same SQL, the execution plan is different when I use READ UNCOMMITTED. It returns more rows from a specific index in the middle of the execution plan, which curiously ends in fewer rows returned for the entire SQL statement, which in turn results in the missing data that prompted my question in the first place.
This sounds very confusing and unbelievable, I know, but this is the behavior I see. I don't know what else to do, I don't even know what to google for at this point ;-).
I can fix my problem (just don't use READ UNCOMMITTED), but I have no idea why it occurs and if it is a bug or something I don't know about SQL Server. Maybe there's some "magic max number of allowed results in sub-queries" in SQL Server? At least: As far as I can see, it's not an issue with EF itself.
A little late, but does calling ToList() on each subquery produce the required effect?
var persons = (from x in context.Persons
               select new {
                   PersonId = x.Id,
                   JobNames = x.Jobs.Select(y => y.Name).ToList(),
                   TaskDates = x.Tasks.Select(y => y.Date).ToList(),
                   DateInfos = x.Dates.Select(y => y.Info).ToList()
               }).ToList();

Django Model: Best Approach to update?

I am trying to update hundreds of objects in my job, which is scheduled every two hours.
I have an articles table in my model. All articles are parsed and then different attributes are saved for each article.
First I query to get all unparsed articles, then parse the URL saved against each article and save the received attributes.
Below is my code:
articles = Articles.objects.filter(status=0)  # hundreds of articles
for art in articles:
    try:
        url = art.link
        result = ArticleParser(url)  # Custom function which will do all the parsing
        art.author = result.articleauthor
        art.description = result.articlecontent[:5000]
        art.imageurl = result.articleImage
        art.status = 1
        art.save()
    except Exception as e:
        art.author = ""
        art.description = ""
        art.imageurl = ""
        art.status = 2
        art.save()
The thing is, while this job is running, both CPU utilization and DB process utilization are very high, and I am trying to pinpoint when and where it spikes.
Question: Is this the right way to update multiple objects, or is there a better way to do it? Any suggestions?
Appreciate your help.
Regards
Edit 1: Sorry for the confusion. Some explanation is in order. Fields like author, description, etc. will be different for every article; they are returned after I parse the URL. The reason I am updating in a loop is that these fields will be different on every iteration, according to the URL. I have updated the code; I hope it helps clear the confusion.
You are doing 100s of DB operations in a relatively tight loop, so it is expected that there is some load on the DB.
If you have a lot of articles, make sure you have an index on the status column to avoid a table scan.
You can try disabling autocommit and wrapping the whole update in one transaction instead.
From my understanding, you do NOT want to set the fields author, description and imageurl to the same value on all articles, so QuerySet.update won't work for you.
Django recommends this approach when you want to update or delete multiple objects: https://docs.djangoproject.com/en/1.6/topics/db/optimization/#use-queryset-update-and-delete
1. Better not to catch a bare Exception; catch something concrete: KeyError, IndexError, etc.
2. The data can be created once. Something like this:
data = dict(
    author=articleauthor,
    description=articlecontent[:5000],
    imageurl=articleImage,
    status=1,
)
Articles.objects.filter(status=0).update(**data)
To Edit 1: You probably want to set up periodic Celery tasks, that is, a separate task per article. See the Celery documentation for help.
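The answers above point at the same shape of fix: keep the per-article parsing, but make the write path cheaper and the error handling concrete. Below is a self-contained, runnable sketch of that loop; the `Article` class and `parse` function are hypothetical stand-ins for the Django model and `ArticleParser` from the question. In a real Django job you would additionally wrap the loop in `transaction.atomic()` and pass `update_fields=[...]` to `save()` so each row becomes one narrow UPDATE inside a single transaction.

```python
# Hypothetical stand-ins for the model and parser from the question,
# so the loop's control flow can be run as-is.
class Article:
    def __init__(self, link):
        self.link = link
        self.author = self.description = self.imageurl = ""
        self.status = 0          # 0 = unparsed, 1 = parsed, 2 = failed
        self.save_calls = 0

    def save(self):
        self.save_calls += 1     # in Django, each call is one UPDATE round trip


def parse(url):
    # Stand-in for ArticleParser; fails on a recognizably bad URL.
    if "bad" in url:
        raise ValueError(url)
    return {"author": "a. author", "content": "x" * 6000, "image": "img.png"}


def process(articles):
    for art in articles:
        try:
            result = parse(art.link)
            art.author = result["author"]
            art.description = result["content"][:5000]
            art.imageurl = result["image"]
            art.status = 1
        except ValueError:       # catch something concrete, not bare Exception
            art.author = art.description = art.imageurl = ""
            art.status = 2
        art.save()               # exactly one save per article, in either branch
```

Note that `save()` is called once per article after the try/except, instead of once inside each branch; combined with `transaction.atomic()`, that turns hundreds of autocommitted transactions into one.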

Quickly finding the last item in a database Cakephp

I just inherited some CakePHP code and I am not very familiar with it (or any other PHP/server-side language). I need to set the id of the item I am adding to the database to the value of the last item plus one. Originally I did a call like this:
$id = $this->Project->find('count') + 1;
but this seems to add about 8 seconds to my page load (which seems weird because the database only has about 400 items), but that is another problem. For now I need a faster way to find the id of the last item in the database. Is there a way, using find, to quickly retrieve the last item in a given table?
That's a very bad approach to setting the id.
You do know that, for example, MySQL supports auto-increment for INT fields and will therefore set the id automatically for you?
The suggested functions getLastInsertId and getInsertId will only work after an insert, and not always.
I also can't understand why your call adds 8 seconds to your page load. If I do such a call on my table (which also has around 400 records), the call itself needs only a few milliseconds. There is no delay the user would notice.
I think there might be a problem with your database setup, as this seems very unlikely.
Also, please check whether your database supports auto-increment (I can't imagine it doesn't), as this would be the easiest way of adding your desired functionality.
I would try
$id = $this->Project->getLastInsertID();
$id++;
The method can be found in cake/libs/model/model.php in line 2768
As well as on this SO page
Cheers!
If you are looking for the CakePHP 3 solution to this, you simply use last(), i.e.:
use Cake\ORM\TableRegistry;
....
$myrecordstable = TableRegistry::get('Myrecords');
$myrecords = $myrecordstable->find()->last();
$lastId = $myrecords->id;
....
