Query runs fast, but runs slow in stored procedure - sql-server

I am doing some tests using the SQL Server 2005 Profiler.
I have a stored procedure which simply runs one SQL query.
When I run the stored procedure, it takes a long time and performs 800,000 disk reads.
When I run the same query separately from the stored procedure, it does 14,000 disk reads.
I found that if I run the same query with OPTION(recompile), it also takes 800,000 disk reads.
From this I make the (possibly erroneous) assumption that the stored procedure is being recompiled each time, and that this is causing the problem.
Can anyone shed some light onto this?
I have set ARITHABORT ON. (This solved a similar problem on stackoverflow, but didn't solve mine)
Here is the entire stored procedure:
CREATE PROCEDURE [dbo].[GET_IF_SETTLEMENT_ADJUSTMENT_REQUIRED]
@Contract_ID int,
@dt_From smalldatetime,
@dt_To smalldatetime,
@Last_Run_Date datetime
AS
BEGIN
DECLARE @rv int
SELECT @rv = (CASE WHEN EXISTS
(
select * from
view_contract_version_last_volume_update
inner join contract_version
on contract_version.contract_version_id = view_contract_version_last_volume_update.contract_version_id
where contract_version.contract_id=@Contract_ID
and volume_date >= @dt_From
and volume_date < @dt_To
and last_write_date > @Last_Run_Date
)
THEN 1 else 0 end)
-- Note that we are RETURNING a value rather than SELECTING it.
-- This means we can invoke this function from other stored procedures
return @rv
END
Here's a script I run that demonstrates the problem:
DECLARE
@Contract_ID INT,
@dt_From smalldatetime,
@dt_To smalldatetime,
@Last_Run_Date datetime,
@rv int
SET @Contract_ID=38
SET @dt_From='2010-09-01'
SET @dt_To='2010-10-01'
SET @Last_Run_Date='2010-10-08 10:59:59:070'
-- This takes over fifteen seconds
exec GET_IF_SETTLEMENT_ADJUSTMENT_REQUIRED @Contract_ID=@Contract_ID,@dt_From=@dt_From,@dt_To=@dt_To,@Last_Run_Date=@Last_Run_Date
-- This takes less than one second!
SELECT @rv = (CASE WHEN EXISTS
(
select * from
view_contract_version_last_volume_update
inner join contract_version
on contract_version.contract_version_id = view_contract_version_last_volume_update.contract_version_id
where contract_version.contract_id=@Contract_ID
and volume_date >= @dt_From
and volume_date < @dt_To
and last_write_date > @Last_Run_Date
)
THEN 1 else 0 end)
-- With recompile option. Takes 15 seconds again!
SELECT @rv = (CASE WHEN EXISTS
(
select * from
view_contract_version_last_volume_update
inner join contract_version
on contract_version.contract_version_id = view_contract_version_last_volume_update.contract_version_id
where contract_version.contract_id=@Contract_ID
and volume_date >= @dt_From
and volume_date < @dt_To
and last_write_date > @Last_Run_Date
)
THEN 1 else 0 end) OPTION(recompile)

OK, we have had similar issues like this before.
The way we fixed them was by declaring local variables inside the SP, like so:
DECLARE @LOCAL_Contract_ID int,
@LOCAL_dt_From smalldatetime,
@LOCAL_dt_To smalldatetime,
@LOCAL_Last_Run_Date datetime
SELECT @LOCAL_Contract_ID = @Contract_ID,
@LOCAL_dt_From = @dt_From,
@LOCAL_dt_To = @dt_To,
@LOCAL_Last_Run_Date = @Last_Run_Date
We then use the local variables inside the SP rather than the parameters that were passed in.
This typically fixed the issue for us.
We believe this to be due to parameter sniffing, but have no proof, sorry... X-)
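Applied to the asker's procedure, the workaround would look something like this (a sketch, untested; only the LOCAL_ variables are new):
CREATE PROCEDURE [dbo].[GET_IF_SETTLEMENT_ADJUSTMENT_REQUIRED]
@Contract_ID int,
@dt_From smalldatetime,
@dt_To smalldatetime,
@Last_Run_Date datetime
AS
BEGIN
-- copy the parameters into local variables so the optimizer
-- cannot sniff the caller's values when compiling the query
DECLARE @LOCAL_Contract_ID int,
@LOCAL_dt_From smalldatetime,
@LOCAL_dt_To smalldatetime,
@LOCAL_Last_Run_Date datetime,
@rv int
SELECT @LOCAL_Contract_ID = @Contract_ID,
@LOCAL_dt_From = @dt_From,
@LOCAL_dt_To = @dt_To,
@LOCAL_Last_Run_Date = @Last_Run_Date
SELECT @rv = (CASE WHEN EXISTS
(
select * from
view_contract_version_last_volume_update
inner join contract_version
on contract_version.contract_version_id = view_contract_version_last_volume_update.contract_version_id
where contract_version.contract_id = @LOCAL_Contract_ID
and volume_date >= @LOCAL_dt_From
and volume_date < @LOCAL_dt_To
and last_write_date > @LOCAL_Last_Run_Date
)
THEN 1 else 0 end)
return @rv
END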
EDIT:
Have a look at Different Approaches to Correct SQL Server Parameter Sniffing for some insightful examples, explanations and fixes.

As others have mentioned, this could be a 'parameter sniffing' problem. Try including the line:
OPTION (RECOMPILE)
at the end of your SQL query.
There is an article here explaining what parameter sniffing is: http://blogs.technet.com/b/mdegre/archive/2012/03/19/what-is-parameter-sniffing.aspx
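Applied to the statement inside the asker's procedure, the placement looks like this (a sketch; the hint goes at the very end of the statement it should affect):
SELECT @rv = (CASE WHEN EXISTS
(
-- ...same subquery as in the question...
)
THEN 1 else 0 end)
OPTION (RECOMPILE) -- compile a fresh plan for the actual parameter values on each run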

I guess this is caused by parameter sniffing.

The issue of why a batch takes forever to run inside a SQL stored procedure, yet runs instantaneously in SSMS, usually comes down to parameter sniffing, especially with datetime parameters.
There are several excellent articles on parameter sniffing out there.
Here's one of them (I didn't write it, just passing it on):
http://www.sommarskog.se/query-plan-mysteries.html

In my case, I ran:
exec sp_updatestats
and this sped up my SP from 120 seconds to just 3. More info about updating statistics can be found here: https://msdn.microsoft.com/en-us/library/ms173804.aspx
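If a full sp_updatestats pass is too heavy, statistics can also be refreshed per table; a sketch against one of the tables from the question (WITH FULLSCAN is optional, slower, and more thorough):
-- rebuild statistics for just this table
UPDATE STATISTICS dbo.contract_version WITH FULLSCAN;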

I ran into the same problem today. I dropped and recreated the SP and it worked. This is something to do with the SP's cached plan: when the SP is dropped, the cached plan is removed. You can try the same, or use DBCC FREEPROCCACHE to clear the cache.
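Note that DBCC FREEPROCCACHE clears every plan on the instance, which causes a burst of recompiles for all workloads. A narrower option (a sketch using the asker's procedure name) is to mark just the one procedure for recompilation:
EXEC sp_recompile N'dbo.GET_IF_SETTLEMENT_ADJUSTMENT_REQUIRED'; -- plan is rebuilt on next execution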

Related

DATETIME search predicate on DATETIME column much slower than string literal predicate

I'm doing a search on a large table of about 10 million rows. I want to specify a start and end date and return all records in the table created between those dates.
It's a straightforward query:
declare @StartDateTime datetime = '2016-06-21',
@EndDateTime datetime = '2016-06-22';
select *
FROM Archive.dbo.[Order] O WITH (NOLOCK)
where O.Created >= @StartDateTime
AND O.Created < @EndDateTime;
Created is a DATETIME column which has a non-clustered index.
This query took about 15 seconds to complete.
However, if I modify the query slightly, as follows, it takes only 1 second to return the same result:
declare @StartDateTime datetime = '2016-06-21',
@EndDateTime datetime = '2016-06-22';
select *
FROM Archive.dbo.[Order] O WITH (NOLOCK)
where O.Created >= '2016-06-21'
AND O.Created < @EndDateTime;
The only change is replacing the @StartDateTime search predicate with a string literal. Looking at the execution plan, when I used @StartDateTime it did an index scan, but when I used the string literal it did an index seek and was 15 times faster.
Does anyone know why using the string literal is so much faster?
I would have thought doing a comparison between a DATETIME column and a DATETIME variable would be quicker than comparing the column to a string representation of a date. I've tried dropping and recreating the index on the Created column and it made no difference. I notice I get similar results on the production system as I do on the test system so the weird behaviour doesn't seem specific to a particular database or SQL Server instance.
Every variable has a scope within which it is recognized.
In OOP languages, we usually distinguish static/constant variables from temporary variables by using keywords, or by the way a variable is passed into a function, where inside that instance the variable is treated as a constant, such as the following in C++:
void MyFunction(std::string& name)
//technically, `&` passes the actual location of the variable
//instead of using a logical copy. The concept is the same.
In SQL Server, the standard chose to implement this a bit differently. There are no constant data types, so instead we use literals, which are either
object names (which have similar precedence in the call as system keywords)
names with an object delimiter (including ', [])
or strings with the delimiter CHAR(39) (').
This is the reason you noticed the two queries behave differently: those variables are not constants to the optimizer, which means SQL Server has already chosen its execution path beforehand.
If you have SSMS installed, include the Actual Execution Plan (CTRL + M), and notice in the select statement what the Estimated Rows are. This is the highlight of the execution plan: the greater the difference between the Estimated and Actual rows, the more likely your query can benefit from optimization. In your example, SQL Server had to guess how many rows would match, and ended up overshooting, losing efficiency.
The solution is one and the same, but you can still encapsulate everything if you want to. We use the AdventureWorks2012 database for this example:
1) Declare the Variable in the Procedure
CREATE PROC dbo.TEST1 (@NameStyle INT, @FirstName VARCHAR(50) )
AS
BEGIN
SELECT *
FROM Person.Person
WHERE FirstName = @FirstName
AND NameStyle = @NameStyle; --NameStyle is 0
END
2) Pass the variable into Dynamic SQL
CREATE PROC dbo.TEST2 (@NameStyle INT)
AS
BEGIN
DECLARE @Name NVARCHAR(50) = N'Ken';
DECLARE @String NVARCHAR(MAX);
SET @String =
N'SELECT *
FROM Person.Person
WHERE FirstName = @Other
AND NameStyle = @NameStyle';
EXEC sp_executesql @String
, N'@Other VARCHAR(50), @NameStyle INT'
, @Other = @Name
, @NameStyle = @NameStyle
END
Both procedures will produce the same results. I could have used EXEC by itself, but sp_executesql can cache the entire select statement (plus, it's safer against SQL injection).
Notice how in both cases the scoping allowed SQL Server to transform the variable into a constant value (meaning it entered the object with a set value), and the optimizer was then able to choose the most efficient execution plan available.
-- Remove Procs
DROP PROC dbo.TEST1
DROP PROC dbo.TEST2
A great article was highlighted in the comment section of the OP, but you can see it here: Optimizing Variables and Parameters - SQLMAG
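Worth noting alongside the answer above: an OPTION (RECOMPILE) hint compiles the statement after the variables already have their values, letting the optimizer treat them like the literal. A sketch against the query from the question:
declare @StartDateTime datetime = '2016-06-21',
@EndDateTime datetime = '2016-06-22';
select *
FROM Archive.dbo.[Order] O WITH (NOLOCK)
where O.Created >= @StartDateTime
AND O.Created < @EndDateTime
OPTION (RECOMPILE); -- variable values are visible at compile time, so the index seek becomes possible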

TVF is much slower when using parameterized query

I am trying to run an inline TVF as a raw parameterized SQL query.
When I run the following query in SSMS, it takes 2-3 seconds
select * from dbo.history('2/1/15','1/1/15','1/31/15',2,2021,default)
I was able to capture the following query through SQL Profiler (parameterized, as generated by Entity Framework) and run it in SSMS.
exec sp_executesql N'select * from dbo.history(@First,@DatedStart,@DatedEnd,@Number,@Year,default)',N'@First date,@DatedStart date,@DatedEnd date,@Year int,@Number decimal(10,5)',@First='2015-02-01',@DatedStart='2015-01-01',@DatedEnd='2015-01-31',@Year=2021,@Number=2
Running the above query in SSMS takes 1:08, which is around 30x longer than the non-parameterized version.
I have tried adding option(recompile) to the end of the parameterized query, but it did absolutely nothing for performance. This is clearly an indexing issue to me, but I have no idea how to resolve it.
Looking at the execution plan, the parameterized version mostly gets hung up on an Eager Spool (46%) and then a Clustered Index Scan (30%), neither of which is present in the plan without parameters.
Perhaps there is something I am missing. Can someone please point me in the right direction as to how I can get this parameterized query to work properly?
EDIT: Parameterized query execution plan, non-parameterized plan
Maybe it's a parameter sniffing problem.
Try modifying your function so that the parameters are set to local variables, and use the local vars in your SQL instead of the parameters.
So your function would have this structure
CREATE FUNCTION history(
@First Date,
@DatedStart Date,
@DatedEnd Date,
@Maturity int,
@Number decimal(10,5))
RETURNS @table TABLE (
--tabledef
)
AS
BEGIN
Declare @FirstVar Date = @First
Declare @DatedStartVar Date = @DatedStart
Declare @DatedEndVar Date = @DatedEnd
Declare @MaturityVar int = @Maturity
Declare @NumberVar decimal(10,5) = @Number
--SQL Statement which uses the local 'Var' variables and not the parameters
RETURN;
END
;
I've had similar probs in the past where this has been the culprit, and mapping to local variables stops SQL Server from coming up with a dud execution plan.

How to execute in parallel using Transact-SQL?

I need to call a stored procedure with hundreds of different parameters in a scheduled SQL Agent job. Right now it's executed sequentially. I want to execute the stored procedure with N (e.g. N = 8) different parameters at the same time.
Is there a good way to implement this in Transact-SQL? Can SQL Server Service Broker be used for this purpose? Any other options?
There is mention in a comment on the question of a table that holds the various parameters to call the proc with, and that the execution times vary a lot across the parameter values.
If you are able to add two fields to the table of parameters (StartTime DATETIME and EndTime DATETIME), then you can create 7 more SQL Agent Jobs and have them scheduled to run at the same time.
The Job Step of each Job should be the same and should be similar to the following:
DECLARE @Params TABLE (ParamID INT, Param1 DataType, Param2 DataType, ...);
DECLARE @ParamID INT,
@Param1Variable DataType,
@Param2Variable DataType,
...;
WHILE (1 = 1)
BEGIN
UPDATE TOP (1) param
SET param.StartTime = GETDATE() -- or GETUTCDATE()
OUTPUT INSERTED.ParamID, INSERTED.Param1, INSERTED.Param2, ...
INTO @Params (ParamID, Param1, Param2, ...)
FROM Schema.ParameterTable param
WHERE param.StartTime IS NULL;
IF (@@ROWCOUNT = 0)
BEGIN
BREAK; -- no rows left to process so just exit
END;
SELECT @ParamID = tmp.ParamID,
@Param1Variable = tmp.Param1,
@Param2Variable = tmp.Param2,
...
FROM @Params tmp;
BEGIN TRY
EXEC Schema.MyProc @Param1Variable, @Param2Variable, ... ;
UPDATE param
SET param.EndTime = GETDATE() -- or GETUTCDATE()
FROM Schema.ParameterTable param
WHERE param.ParamID = @ParamID;
END TRY
BEGIN CATCH
... do something here ...
END CATCH;
DELETE FROM @Params; -- clear out last set of params
END;
That general structure should allow the 8 SQL Jobs to run until all of the parameter value sets have been executed. It accounts for the fact that some sets will run faster than others, as each Job just picks the next available set off the queue until none are left, at which point the Job cleanly exits.
Two things to consider adding to the above structure:
A way of resetting the StartTime field to be NULL so that the row can re-run later
A way of handling errors: i.e. cleaning up rows where StartTime IS NOT NULL AND EndTime IS NULL and the DATEDIFF between StartTime and GETDATE / GETUTCDATE is too large. A TRY / CATCH could do it by either setting StartTime back to NULL so the row gets re-run, OR by adding a third field, ErrorTime DATETIME, that is reset to NULL at the start of the run (like the other two fields) but only set if an error happens. Those are just some thoughts.
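A sketch of that clean-up (Schema.ParameterTable and its columns are the placeholders from this answer; the 10-minute threshold is an arbitrary assumption):
-- Reset rows that started too long ago and never finished,
-- so one of the runner jobs will pick them up again.
UPDATE param
SET param.StartTime = NULL
FROM Schema.ParameterTable param
WHERE param.StartTime IS NOT NULL
AND param.EndTime IS NULL
AND DATEDIFF(MINUTE, param.StartTime, GETDATE()) > 10;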
SQL Server has nothing native built in to issue parallel queries from a T-SQL batch. You need an external driver: something that connects on N connections.
SQL Agent can do that if you create N jobs and start them manually. It is a hack, but it will work.
It is probably easier to write a small C# app to do this and put it into Windows Task Scheduler.
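If you do stay inside SQL Server, the N runner jobs can themselves be started from one controlling step via msdb's sp_start_job, which returns as soon as the job is started, so all runners execute concurrently (the job names here are hypothetical):
EXEC msdb.dbo.sp_start_job @job_name = N'ParamRunner 1'; -- returns immediately
EXEC msdb.dbo.sp_start_job @job_name = N'ParamRunner 2';
-- ...and so on, up through N'ParamRunner 8'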

SQL Server proc running 5x slower than plain query

I have the following query:
DECLARE @DaysNotUsed int = 14
DECLARE @DaysNotPhoned int = 7
--Total Unique Students
DECLARE @totalStudents TABLE (SchoolID uniqueidentifier, TotalUniqueStudents int)
INSERT INTO @totalStudents
SELECT
SSGG.School,
COUNT(DISTINCT S.StudentID)
FROM Student S
INNER JOIN StudentStudents_GroupGroups SSGG ON (SSGG.Students = S.StudentID AND SSGG.School = S.School)
INNER JOIN [Group] G ON (G.GroupID = SSGG.Groups AND G.School = SSGG.School)
INNER JOIN SessionHistory SH ON (SH.Student = S.StudentID AND SH.School = S.School AND SH.StartDateTime > GETDATE() - @DaysNotUsed)
WHERE G.IsBuiltIn = 0
AND S.HasStartedProduct = 1
GROUP BY SSGG.School
--Last Used On
DECLARE @lastUsed TABLE (SchoolID uniqueidentifier, LastUsedOn datetime)
INSERT INTO @lastUsed
SELECT
vi.SchoolID,
MAX(sh.StartDateTime)
FROM View_Installation as vi
INNER JOIN SessionHistory as sh on sh.School = vi.SchoolID
GROUP BY vi.SchoolID
SELECT
VI.SchoolID,
INS.DateAdded,
INS.Removed,
INS.DateRemoved,
INS.DateToInclude,
VI.SchoolName AS [School Name],
VI.UsersLicensed AS [Licenses],
ISNULL(TS.TotalUniqueStudents, 0) as [Total Unique Students],
ISNULL(TS.TotalUniqueStudents, 0) * 100 / VI.UsersLicensed as [% of Students Using],
S.State,
LU.LastUsedOn,
DATEDIFF(DAY, LU.LastUsedOn, GETDATE()) AS [Days Not Used],
SI.AreaSalesManager AS [Sales Rep],
SI.CaseNumber AS [Case #],
SI.RenewalDate AS [Renewal Date],
SI.AssignedTo AS [Assigned To],
SI.Notes AS [Notes]
FROM View_Installation VI
INNER JOIN School S ON S.SchoolID = VI.SchoolID
LEFT OUTER JOIN @totalStudents TS on TS.SchoolID = VI.SchoolID
INNER JOIN @lastUsed LU on LU.SchoolID = VI.SchoolID
LEFT OUTER JOIN InactiveReports..SchoolInfo SI ON S.SchoolID = SI.SchoolID
LEFT OUTER JOIN InactiveReports..InactiveSchools INS ON S.SchoolID = INS.SchoolID
WHERE VI.UsersLicensed > 0
AND VI.LastPhoneHome > GETDATE() - @DaysNotPhoned
AND
(
(
SELECT COUNT(DISTINCT S.StudentID)
FROM Student S
INNER JOIN StudentStudents_GroupGroups SSGG ON (SSGG.Students = S.StudentID AND SSGG.School = S.School)
INNER JOIN [Group] G ON (G.GroupID = SSGG.Groups AND G.School = SSGG.School)
WHERE G.IsBuiltIn = 0
AND S.School = VI.SchoolID
) * 100 / VI.UsersLicensed < 50
OR
VI.SchoolID NOT IN
(
SELECT DISTINCT SH1.School
FROM SessionHistory SH1
WHERE SH1.StartDateTime > GETDATE() - @DaysNotUsed
)
)
ORDER BY [Days Not Used] DESC
Running just the plain SQL like this in SSMS takes about 10 seconds. When I created a stored procedure with exactly the same code, the query takes 50 seconds instead. The only difference in the actual code of the proc is a SET NOCOUNT ON that the IDE put in by default, but adding that line to the plain query doesn't have any impact. Any idea what would cause such a dramatic slowdown?
EDIT: I neglected to mention that the DECLARE statements at the beginning are not in the proc; they are parameters to it. Could this be the difference?
I agree about the potential parameter sniffing issue, but I would also check these settings.
For the procedure:
SELECT uses_ansi_nulls, uses_quoted_identifier
FROM sys.sql_modules
WHERE [object_id] = OBJECT_ID('dbo.procedure_name');
For the SSMS query window where the query is running fast:
SELECT [ansi_nulls], [quoted_identifier]
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;
If either of these don't match, you might consider dropping the stored procedure and re-creating it with those two settings matching. For example, if the procedure has uses_quoted_identifier = 0 and the session has quoted_identifier = 1, you could try:
DROP PROCEDURE dbo.procedure_name;
GO
SET QUOTED_IDENTIFIER ON;
GO
CREATE PROCEDURE dbo.procedure_name
AS
BEGIN
SET NOCOUNT ON;
...
END
GO
Ideally all of your modules will be created with the exact same QUOTED_IDENTIFIER and ANSI_NULLS settings. It's possible the procedure was created when the settings were off (the default is on for both), or it's possible that where you are executing the query, one or both options are off (you can change this behavior in SSMS under Tools/Options/Query Execution/SQL Server/ANSI).
I'm not going to make any disclaimers about the behavior of the stored procedure under the different settings (for example, you may have wanted ANSI_NULLS off so you could compare NULL = NULL); that you'll have to test. But at least you'll be comparing queries that are run with the same options, and it will help narrow down potential parameter sniffing issues. If you're intentionally using SET ANSI_NULLS OFF, however, I caution you to find another approach, as that behavior will eventually be unsupported.
Other ways around parameter sniffing:
make sure you don't inadvertently compile the procedure with atypical parameters
use the recompile option either on the procedure or on the statement that seems to be the victim (I'm not sure if all of these are valid, because I can only tell that you are using SQL Server 2005 or greater, and some of these were introduced in 2008)
declare local variables similar to your input parameters, and assign the input parameter values to them, using the local variables later in the procedure and ignoring the input parameters
The last option is my least favorite, but it's the quickest / easiest fix in the midst of troubleshooting and when users are complaining.
Also, in addition to everything else mentioned, if you are on SQL Server 2008 and up, have a look at OPTIMIZE FOR UNKNOWN http://blogs.msdn.com/b/sqlprogrammability/archive/2008/11/26/optimize-for-unknown-a-little-known-sql-server-2008-feature.aspx
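For instance, a sketch against the first statement of the query above, where @DaysNotUsed would be one of the procedure's parameters (the hint is real SQL Server 2008+ syntax; applying it to this particular statement is just an illustration):
INSERT INTO @totalStudents
SELECT SSGG.School, COUNT(DISTINCT S.StudentID)
FROM Student S
INNER JOIN StudentStudents_GroupGroups SSGG ON (SSGG.Students = S.StudentID AND SSGG.School = S.School)
INNER JOIN [Group] G ON (G.GroupID = SSGG.Groups AND G.School = SSGG.School)
INNER JOIN SessionHistory SH ON (SH.Student = S.StudentID AND SH.School = S.School AND SH.StartDateTime > GETDATE() - @DaysNotUsed)
WHERE G.IsBuiltIn = 0
AND S.HasStartedProduct = 1
GROUP BY SSGG.School
-- build the plan from average density statistics instead of the sniffed value of @DaysNotUsed
OPTION (OPTIMIZE FOR UNKNOWN)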
I would recommend recompiling the execution plan for the stored procedure.
usage: sp_recompile '[target]'
example: sp_recompile 'dbo.GetObject'
When you execute a query from SSMS, the query plan is redone every time it's executed. However, with stored procs, SQL Server caches execution plans, and it's this cached plan that gets used every time the stored proc is called.
Link for sp_recompile.
You can also change the proc to use the WITH RECOMPILE clause within the stored proc.
Example:
CREATE PROCEDURE dbo.GetObject
(
@parm1 VARCHAR(20)
)
WITH RECOMPILE
AS
BEGIN
-- Queries/work here.
END
However this will force the execution plan to be recompiled every time the stored proc is called. This is good for dev/testing where the proc and/or data changes quite frequently. Make sure you remove it when you deploy it to production, as this can have a performance hit.
sp_recompile only recompiles the execution plan once. If you need to do it again at a later date, you will need to make the call again.
Good luck!
OK, thank you all for your help. It turns out it was a terribly stupid rookie mistake. The first time I created the proc, it was created under my user's schema instead of the dbo schema. When I called the proc I was simply doing 'exec proc_name', which I now realize was using the version under my user's schema. Running 'exec dbo.proc_name' worked as expected.

Why would a stored procedure perform differently when executed remotely to locally?

We have a stored procedure that builds up some dynamic SQL and executes it via a parametrised call to sp_executesql.
Under normal conditions this works wonderfully, and has made a large improvement in execution times for the procedure (~8 seconds down to ~1 second). However, under some unknown conditions something strange happens and performance goes completely the other way (~31 seconds), but only when executed via RPC (i.e. a call from a .NET app with SqlCommand.CommandType of CommandType.StoredProcedure, or as a remote query from a linked server); if executed as a SQL batch using SQL Server Management Studio, we do not see the degradation in performance.
Altering the white-space in the generated SQL and recompiling the stored procedure seems to resolve the issue, at least in the short term, but we'd like to understand the cause, or find a way to force the execution plans to be rebuilt for the generated SQL; at the moment I'm not sure how to proceed with either.
To illustrate, the Stored Procedure, looks a little like:
CREATE PROCEDURE [dbo].[usp_MyObject_Search]
@IsActive AS BIT = NULL,
@IsTemplate AS BIT = NULL
AS
DECLARE @WhereClause NVARCHAR(MAX) = ''
IF @IsActive IS NOT NULL
BEGIN
SET @WhereClause += ' AND (svc.IsActive = @xIsActive) '
END
IF @IsTemplate IS NOT NULL
BEGIN
SET @WhereClause += ' AND (svc.IsTemplate = @xIsTemplate) '
END
DECLARE @Sql NVARCHAR(MAX) = '
SELECT svc.[MyObjectId],
svc.[Name],
svc.[IsActive],
svc.[IsTemplate]
FROM dbo.MyObject svc WITH (NOLOCK)
WHERE 1=1 ' + @WhereClause + '
ORDER BY svc.[Name] Asc'
EXEC sp_executesql @Sql, N'@xIsActive BIT, @xIsTemplate BIT',
@xIsActive = @IsActive, @xIsTemplate = @IsTemplate
With this approach, the query plan will be cached for the permutations of NULL/not-NULL, and we're getting the benefit of cached query plans. What I don't understand is why it would use a different query plan when executed remotely vs. locally after "something happens"; I also don't understand what the "something" might be?
I realise I could move away from parametrisation, but then we'd lose the benefit of caching what are normally good execution plans.
I would suspect parameter sniffing. If you are on SQL Server 2008 you could try including OPTIMIZE FOR UNKNOWN to minimise the chance that when it generates a plan it does so for atypical parameter values.
RE: What I don't understand is why it would use a different query plan when executed remotely vs. locally after "something happens"
When you execute in SSMS it won't use the same bad plan because of different SET options (e.g. SET ARITHABORT ON) so it will compile a new plan that works well for the parameter values you are currently testing.
You can see these plans with
SELECT usecounts, cacheobjtype, objtype, text, query_plan, value as set_options
FROM sys.dm_exec_cached_plans
CROSS APPLY sys.dm_exec_sql_text(plan_handle)
CROSS APPLY sys.dm_exec_query_plan(plan_handle)
CROSS APPLY sys.dm_exec_plan_attributes(plan_handle) AS epa
where text like '%FROM dbo.MyObject svc WITH (NOLOCK)%'
and attribute='set_options'
Edit
The following bit is just in response to badbod99's answer
create proc #foo @mode bit, @date datetime
as
declare @Sql nvarchar(max)
if (@mode = 1)
set @Sql = 'select top 0 * from sys.objects where create_date < @date /*44FC79BD-2AF5-4774-9674-04D6C3D4B228*/'
else
set @Sql = 'select top 0 * from sys.objects where modify_date < @date /*44FC79BD-2AF5-4774-9674-04D6C3D4B228*/'
EXEC sp_executesql @Sql, N'@date datetime',
@date = @date
go
declare @d datetime
set @d = getdate()
exec #foo 0, @d
exec #foo 1, @d
SELECT usecounts, cacheobjtype, objtype, text, query_plan, value as set_options
FROM sys.dm_exec_cached_plans
CROSS APPLY sys.dm_exec_sql_text(plan_handle)
CROSS APPLY sys.dm_exec_query_plan(plan_handle)
CROSS APPLY sys.dm_exec_plan_attributes(plan_handle) AS epa
where text like '%44FC79BD-2AF5-4774-9674-04D6C3D4B228%'
and attribute='set_options'
Returns a separate cached plan row for each of the two generated statements.
Recompilation
Any time the execution of the SP would be significantly different due to conditional statements, the execution plan cached from the last request may not be optimal for the current one.
It's all about when SQL Server compiles the execution plan for the SP. The key section regarding SP compilation in the Microsoft docs is this:
... this optimization occurs automatically the first time a stored procedure is run after SQL Server is restarted. It also occurs if an underlying table that is used by the stored procedure changes. But if a new index is added from which the stored procedure might benefit, optimization does not occur until the next time that the stored procedure is run after SQL Server is restarted. In this situation, it can be useful to force the stored procedure to recompile the next time that it executes
SQL Server does recompile execution plans at times; from the Microsoft docs:
SQL Server automatically recompiles stored procedures and triggers when it is advantageous to do this.
... but it will not do this with each call (unless using WITH RECOMPILE), so if each execution could be resulting in different SQL, you may be stuck with the same old plan for at least one call.
RECOMPILE query hint
The RECOMPILE query hint takes your parameter values into account when checking what needs to be recompiled at the statement level.
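In the OP's procedure that would be a one-line change just before the sp_executesql call, so only the generated statement is recompiled (a sketch):
-- build @Sql exactly as before, then append the statement-level hint
SET @Sql = @Sql + ' OPTION (RECOMPILE)'
EXEC sp_executesql @Sql, N'@xIsActive BIT, @xIsTemplate BIT',
@xIsActive = @IsActive, @xIsTemplate = @IsTemplate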
WITH RECOMPILE option
WITH RECOMPILE (see section F) will cause the execution plan to be compiled with each call, so you will never have a sub-optimal plan, but you will have the compilation overhead.
Restructure into multiple SPs
Looking at your specific case, the execution plan for the proc never changes, and the two SQL statements should have prepared execution plans.
I would suggest that restructuring the code to split the SPs, rather than using this conditional SQL generation, would simplify things and ensure you always have the optimal execution plan without any SQL magic sauce.
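A rough sketch of that restructuring, reusing the names from the question (the sub-procedure names and dispatch logic are invented for illustration; the remaining filter permutations would follow the same pattern):
CREATE PROCEDURE dbo.usp_MyObject_Search_All
AS
SELECT svc.[MyObjectId], svc.[Name], svc.[IsActive], svc.[IsTemplate]
FROM dbo.MyObject svc WITH (NOLOCK)
ORDER BY svc.[Name] ASC
GO
CREATE PROCEDURE dbo.usp_MyObject_Search_ByActive
@IsActive BIT
AS
SELECT svc.[MyObjectId], svc.[Name], svc.[IsActive], svc.[IsTemplate]
FROM dbo.MyObject svc WITH (NOLOCK)
WHERE svc.IsActive = @IsActive
ORDER BY svc.[Name] ASC
GO
-- The outer procedure keeps the original signature and only dispatches,
-- so each sub-procedure gets its own stable cached plan.
CREATE PROCEDURE dbo.usp_MyObject_Search
@IsActive AS BIT = NULL,
@IsTemplate AS BIT = NULL
AS
IF @IsActive IS NULL AND @IsTemplate IS NULL
EXEC dbo.usp_MyObject_Search_All
ELSE IF @IsTemplate IS NULL
EXEC dbo.usp_MyObject_Search_ByActive @IsActive
-- ...the ByTemplate and ByBoth permutations follow the same pattern
GO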
