Performance impact of running stored procedures in jobs - sql-server

At work, I have come across several SQL Server stored procedures that are only used by a single job. In that case, wouldn't it just make more sense to run the code in a job step? Is there some benefit from running statements in stored procedures?
These specific stored procedures do not require input parameters, nor do they contain commonly reused calculations; they are mostly just complex SELECT statements. I'm looking for advice on best practice and performance impact.

There should be no material performance difference.
Code in a stored procedure is stored in the user database, present in backups, owned by the database owner, and can be invoked and debugged from anywhere.
Code in a job step is stored in the msdb system database, is owned by the job owner, and can only be run through SQL Server Agent.
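For illustration, a minimal sketch of the two options using msdb.dbo.sp_add_jobstep; the job, procedure, and database names are hypothetical placeholders:

-- Option 1: the step just calls a procedure stored in the user database.
EXEC msdb.dbo.sp_add_jobstep
    @job_name      = N'Daily Summary',   -- assumes this job already exists
    @step_name     = N'Run summary proc',
    @subsystem     = N'TSQL',
    @database_name = N'YourDatabase',
    @command       = N'EXEC dbo.usp_DailySummary;';

-- Option 2: the same logic pasted inline; the code now lives in msdb as
-- part of the job definition, outside the user database and its backups.
EXEC msdb.dbo.sp_add_jobstep
    @job_name      = N'Daily Summary',
    @step_name     = N'Run summary inline',
    @subsystem     = N'TSQL',
    @database_name = N'YourDatabase',
    @command       = N'/* paste the complex SELECT here */ SELECT 1;';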

Related

Parameter sniffing - only in stored procedures?

We develop an application that works with MS SQL Server - our customers run anything from SQL 2008 Express to 2017 Standard. Our queries are not parameterised and it is impractical to rewrite the whole application so that they are. We therefore have a lot of plans for the same query. I have seen that there is an option in SSMS against the database to set Parameterisation to Forced, so that there will be fewer query plans, but that this can then cause issues with Parameter Sniffing with Stored Procedures.
Before I try changing that option, can I just clarify: Stored Procedures are pieces of code that you explicitly create and store in the database; queries run directly from the application do NOT get turned into Stored Procedures (even temporarily), so that isn't a problem?
Parameter sniffing can happen with both stored procedure calls and parameterized queries. In your case the best option is to fix your application, but that will take considerable effort. Until that can happen, setting Parameterisation to Forced will certainly help to reduce the number of plans and tighten security.
And no, this will not change your queries into stored procedures.
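For reference, a minimal sketch of flipping that setting in T-SQL rather than through the SSMS dialog; the database name is a placeholder, and ALTER permission on the database is required:

ALTER DATABASE [YourDatabase] SET PARAMETERIZATION FORCED;
GO
-- to revert to the default behaviour:
ALTER DATABASE [YourDatabase] SET PARAMETERIZATION SIMPLE;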

Is "getDate" OK as a SQL Server stored procedure name?

I'm reviewing dozens of SQL Server 2017 stored procedure query execution plans. I just noticed that one of the stored procedures is named "getDate". The procedure "seems" to work, and according to this "getDate" isn't a reserved keyword, but I'm bothered by the potential confusion with the GETDATE() function. I don't have a lot of time right now to comb through all of the potentially impacted code modules editing calls to this stored procedure. Is this something I can ignore for now and fix later, or is it likely causing problems such that I should fix it right away? I don't see any problems, apart from (presumably unrelated) super-slow running queries--which is why I'm reviewing the execution plans.
The estimated execution plan for this "getDate" stored procedure looks OK though.
It's not recommended to keep that name for the stored procedure. It will create a lot of confusion in the long run, so if you have enough time and the privileges, change the name using sp_rename.
At the same time, it will not throw an error. GETDATE() is a built-in function, not a stored procedure, so the two never collide: EXEC dbo.getDate always resolves to your procedure, while SELECT GETDATE() always resolves to the function. SQL Server lets you create user-defined objects whose names shadow built-in functions, which is why the procedure "seems" to work.
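When you do find the time, a minimal sketch of the rename; the new name is only a suggestion. Note the documented sp_rename caveat: it does not update the name inside the procedure's stored definition, so dropping and recreating the procedure is the cleaner fix.

EXEC sp_rename 'dbo.getDate', 'usp_GetDate';
-- afterwards, update the callers to EXEC dbo.usp_GetDate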

Should you use master.dbo when accessing sp_ procedures?

I'm 100% convinced this is a duplicate but after more than an hour of searching I just can't seem to find the answer.
When using special procedures (i.e. the sp_ ones like sp_executesql), is it wise to use the full 3-part identifier master.dbo (or master..) or just use them as is? I'm looking for the most performance optimized version of this:
1. sp_executesql
2. master..sp_executesql
3. master.dbo.sp_executesql
Are 2 and 3 identical in terms of performance, specifically regarding the above (i.e. referencing master), and is it safe to use master.. or should you not risk it, even on master, since someone could still create another schema there at some point?
Much appreciated.
TL;DR:
Shouldn't be any noticeable performance difference.
The long story:
Whenever you execute a stored procedure that starts with the sp_ prefix, SQL Server will first search for it in master.dbo, so all three options should have the same performance.
From an article posted by Eric Cobb in 2015 entitled Why you should not prefix your stored procedures with “sp_”
Whenever SQL Server sees “sp_” at the beginning of a stored procedure, it first tries to find the procedure in the master database. As stated in the Microsoft documentation above, “This prefix is used by SQL Server to designate system procedures“, so when SQL Server sees “sp_” it starts looking for system procedures. Only after it has searched through all of the procedures in the master database and determined that your procedure is not there will it then come back to your database to try to locate the stored procedure.
It also quotes another piece of official documentation (with a link to the 2008 version; I'm working on finding the current version):
A user-defined stored procedure that has the same name as a system stored procedure and is either nonqualified or is in the dbo schema will never be executed; the system stored procedure will always execute instead.
Even though I couldn't find that quote in the current version of the documentation, I can easily prove it.
Consider the following script:
USE <YourDatabaseNameHere> -- change to the actual name of the db, of course
GO
CREATE PROCEDURE dbo.sp_who
AS
SELECT 'Zohar peled' as myName
GO
-- change to the actual name of the db, of course
EXEC <YourDatabaseNameHere>.dbo.sp_who
EXEC dbo.sp_who
EXEC sp_who
GO
DROP PROCEDURE dbo.sp_who -- cleanup
When tested on SQL Server 2016 (the server I had available for testing), all three EXEC statements executed the system procedure. I couldn't find any way to execute my own procedure.
Now, I can't fiddle around with the master database on my server, so I can only show that this is true for existing system procedures, but I'm pretty sure it will be the same for any procedure that starts with the sp_ prefix, even if you wrote it yourself in both the master database and your own, as Aaron Bertrand illustrated in his article under the title Another side effect: Ambiguity.
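If you do have a disposable test instance where you can touch master, here is a minimal sketch of that experiment; run it and see which copy each EXEC resolves to:

USE master
GO
CREATE PROCEDURE dbo.sp_myTest AS SELECT 'master copy' AS source
GO
USE <YourDatabaseNameHere> -- change to the actual name of the db, of course
GO
CREATE PROCEDURE dbo.sp_myTest AS SELECT 'user db copy' AS source
GO
EXEC sp_myTest       -- unqualified
EXEC dbo.sp_myTest   -- schema-qualified
GO
DROP PROCEDURE dbo.sp_myTest -- cleanup
GO
USE master
GO
DROP PROCEDURE dbo.sp_myTest -- cleanup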
However, even if that wasn't the case, unless you have many procedures in the current schema and are running the stored procedure in a tight loop, I doubt you'd see any noticeable performance difference.
Later on in the same article:
As alluded to in the previous point, procedures named with “sp_” are going to perform slower. It may not always be noticeable, but it is there. Connecting to DatabaseA, jumping over to the master database and scanning every stored procedure there, then coming back to DatabaseA and executing the procedure is always going to take more time than just connecting to DatabaseA and executing the procedure.
Note that this paragraph is talking about the performance of executing a user-defined stored procedure that has the sp_ prefix - so let's reverse the process for a moment:
Suppose SQL Server had to scan all the stored procedures in the current schema first, and only then, if the procedure was not found, go to master.dbo and start looking there.
It's easy to see that the more procedures you have in the schema, the longer that lookup takes. However - have you ever noticed how long it takes SQL Server to find the procedure it needs to run?
I've been working with SQL Server since its 2000 version, and I've had my share of databases containing hundreds of procedures all crammed into the same schema - but that was never a performance issue.
In fact, in over 15 years of experience with SQL Server, I've never encountered a performance issue caused by the time it takes SQL Server to find the stored procedure it needs to run.

SQL Server - Stored procedures slow vs "Giant" script

I have a large number of stored procedures (about 200) that need to be executed sequentially. Ideally I wanted to create a single "master" stored procedure that would execute each of the individual stored procedures one after another.
However, when I execute the master stored procedure it consistently freezes after running a long time. That being said, if I take all the SQL code from the 200 individual stored procedures and create one giant SQL script file, it runs without any issue.
The SQL code queries separate tables and inserts a subset of the data into a master "summary" table.
Any ideas why this would happen? Is there something about stored procedures that take more memory? I would prefer to keep everything in stored procedures so we could manage security and updates easier.
Any ideas why this would happen?
Compilation.
The master script is likely compiled batch by batch, using the statistics valid at that point.
The stored procedure will be compiled once at the start, and if the statistics change during the run - as is typical for a sequence of loads - there you go. This bites especially hard when the statistical change during processing is significant: the statistics at the beginning, when everything was compiled, are totally off compared to the runtime statistics for some tables.
There is a RECOMPILE option that you can set on the individual statements to avoid this.
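For example, a minimal sketch with hypothetical table names; the hint forces that one statement to be compiled with the statistics as they are at execution time:

-- inside one of the 200 procedures: recompile just this statement so it
-- is planned against current statistics, not those from the initial compile
INSERT INTO dbo.SummaryTable (CustomerId, Total)
SELECT CustomerId, SUM(Amount)
FROM dbo.DetailTable
GROUP BY CustomerId
OPTION (RECOMPILE);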

Direct SQL or combine it in a procedure? Which is more efficient?

In my current project I have to run some queries through dynamic SQL, but I'm curious about the efficiency of the different approaches:
1) build the SQL statements in my application server and then send them to the database to run the query
2) send my variables to the database, combine them in a stored procedure, and finally run the query
Hope someone can help.
BTW, I use .NET and SQL Server.
Firstly, one of the main things you should do is parameterise your SQL - whether that be by wrapping it up as a stored procedure in the DB, or by creating the SQL statement in your application code and then firing the whole thing into the DB (see the sketch after this list). This will mean:
prevention against SQL injection attacks, by not directly concatenating user-entered values into a SQL statement
execution plan reuse (subsequent executions of that query, regardless of parameter values, will be able to reuse the original execution plan) (NB: this can also be achieved without parameterising yourself, via Forced Parameterisation)
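A minimal sketch of the application-side option, using sp_executesql against a hypothetical dbo.Orders table:

DECLARE @sql nvarchar(max) =
    N'SELECT OrderId, Total FROM dbo.Orders WHERE CustomerId = @CustomerId;';
-- the parameter value travels separately from the statement text, so the
-- plan can be reused and the value can never be interpreted as SQL
EXEC sp_executesql @sql, N'@CustomerId int', @CustomerId = 42;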
Stored procedures do offer some extra advantages:
security: you only need to grant EXECUTE permission on the stored procedures (illustrated below); you don't need to grant the user direct access to the underlying tables
maintainability: a change to a query does not involve an application code change, you can just change the sproc in the DB
network traffic: not necessarily a major point, but you're sending less over the wire, especially if the query is pretty large/complex
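To illustrate the security point, the same hypothetical query wrapped as a stored procedure, with EXECUTE granted to a hypothetical role app_user:

CREATE PROCEDURE dbo.usp_GetOrdersByCustomer
    @CustomerId int
AS
SELECT OrderId, Total
FROM dbo.Orders -- hypothetical table, as above
WHERE CustomerId = @CustomerId;
GO
-- the caller needs EXECUTE on the procedure only, not SELECT on the table
GRANT EXECUTE ON dbo.usp_GetOrdersByCustomer TO app_user;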
Personally, I use stored procedures most of the time. Though the times I need to build up SQL dynamically in application code, it is always parameterised.
Best is to use stored procedures and pass parameters from your application. A stored procedure's execution plan is compiled on first use and cached for reuse, which saves a lot of time on subsequent calls.
You can refer to this URL, which has more details: http://mukund.wordpress.com/2005/10/14/advantages-and-disadvantages-of-stored-procedure/
Happy coding!!
