I have a view that returns 2 ints from a table using a CTE. If I query the view like this, it runs in less than a second:
SELECT * FROM view1 WHERE ID = 1
However, if I query the view like this, it takes 4 seconds:
DECLARE @id INT = 1
SELECT * FROM View1 WHERE ID = @id
I've checked the two query plans. The first query performs a clustered index seek on the main table, returning one record, and then applies the rest of the view query to that result set, whereas the second query performs an index scan that returns about 3000 records rather than just the one I'm interested in, and only filters the result set later.
Is there anything obvious I'm missing to get the second query to use an index seek rather than an index scan? I'm using SQL Server 2008, but anything I do also needs to run on SQL Server 2005. At first I thought it was some sort of parameter sniffing problem, but I get the same results even if I clear the plan cache.
It is probably because, in the parameter case, the optimizer cannot know that the value is not null, so it needs to create a plan that returns correct results even when it is. If you are on SQL Server 2008 SP1, you can try adding OPTION (RECOMPILE) to the query.
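For example, a minimal sketch against the view and variable from the question:
DECLARE @id INT = 1
-- RECOMPILE lets the optimizer sniff the current value of @id at execution
-- time and build the plan for that specific value
SELECT * FROM View1 WHERE ID = @id OPTION (RECOMPILE)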
You could add an OPTIMIZE FOR hint to your query, e.g.
DECLARE @id INT = 1
SELECT * FROM View1 WHERE ID = @id OPTION (OPTIMIZE FOR (@id = 1))
In my case, the table column type was defined as varchar, while the parameter in the parameterized query was defined as nvarchar. This introduced a CONVERT_IMPLICIT in the actual execution plan to match the data types before comparing, and that was the culprit for the slow performance: 2 seconds vs 11 seconds. Simply correcting the parameter type made the parameterized query as fast as the non-parameterized version.
One possible way to do that is to CAST the parameters, as such:
SELECT ...
FROM ...
WHERE name = CAST(:name AS varchar)
Hope this may help someone with similar issue.
I ran into this problem myself with a view that ran in < 10 ms with a literal value (WHERE UtilAcctId = 12345), but took over 100 times as long with a variable (WHERE UtilAcctId = @UtilAcctId).
The execution plan for the latter was no different than if I had run the view against the entire table.
My solution didn't require tons of indexes, optimizer hints, or a long statistics update.
Instead, I converted the view into a table-valued function where the parameter is the value needed in the WHERE clause. That WHERE clause was nested three queries deep, and it still worked; the query was back to the < 10 ms speed.
Eventually I changed the parameter to a user-defined table TYPE holding UtilAcctIds (int). Then I can limit the WHERE clause to the list in that table:
WHERE UtilAcctId = [parameter-List].UtilAcctId
This works even better; I think table-valued functions are precompiled.
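As a rough sketch of that approach (the type, function, and base-table names here are hypothetical, not the ones from my system):
CREATE TYPE dbo.UtilAcctIdList AS TABLE (UtilAcctId int NOT NULL PRIMARY KEY);
GO
-- Inline table-valued function taking the list as a parameter; the filter
-- sits inside the function, so it is applied before the rest of the logic.
CREATE FUNCTION dbo.udf_AcctLookup (@Ids dbo.UtilAcctIdList READONLY)
RETURNS TABLE
AS
RETURN
    SELECT t.*
    FROM dbo.SomeBaseTable AS t
    WHERE t.UtilAcctId IN (SELECT i.UtilAcctId FROM @Ids AS i);
GO
-- Usage:
DECLARE @Ids dbo.UtilAcctIdList;
INSERT INTO @Ids (UtilAcctId) VALUES (12345);
SELECT * FROM dbo.udf_AcctLookup(@Ids);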
When SQL Server optimizes the plan for the query with the variable, it matches the available index against the column. In this case there was an index, so SQL Server figured it would just scan the index looking for the value. When it built the plan for the query comparing the column to a literal value, it could look at the statistics and the value to decide whether scanning the index or seeking it would be better.
Using the OPTIMIZE FOR hint with a value tells SQL Server "this is the value that will be used most of the time, so optimize for it", and a plan is stored as if that literal value had been used. Using the hint with the sub-hint UNKNOWN tells SQL Server you do not know what the value will be, so it looks at the statistics for the column, decides whether a seek or a scan will be best, and builds the plan accordingly.
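For the query from the original question, the UNKNOWN variant looks something like this:
DECLARE @id INT = 1
-- Build the plan from column statistics rather than a sniffed value
SELECT * FROM View1 WHERE ID = @id OPTION (OPTIMIZE FOR (@id UNKNOWN))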
I know this is long since answered, but I came across this same issue and have a fairly simple solution that doesn't require hints, statistics-updates, additional indexes, forcing plans etc.
Based on the comment above that "the optimizer cannot know that the value is not null", I decided to move the values from variables into a temp table:
Original Code:
declare @StartTime datetime2(0) = '10/23/2020 00:00:00'
declare @EndTime datetime2(0) = '10/23/2020 01:00:00'
SELECT * FROM ...
WHERE
C.CreateDtTm >= @StartTime
AND C.CreateDtTm < @EndTime
New Code:
declare @StartTime datetime2(0) = '10/23/2020 00:00:00'
declare @EndTime datetime2(0) = '10/23/2020 01:00:00'
CREATE TABLE #Times (StartTime datetime2(0) NOT NULL, EndTime datetime2(0) NOT NULL)
INSERT INTO #Times(StartTime, EndTime) VALUES(@StartTime, @EndTime)
SELECT * FROM ...
WHERE
C.CreateDtTm >= (SELECT MAX(StartTime) FROM #Times)
AND C.CreateDtTm < (SELECT MAX(EndTime) FROM #Times)
This performed instantly, as opposed to several minutes for the original code (obviously your results may vary).
I assume that if I changed the column in my main table to be NOT NULL it would work as well, but I was not able to test this due to system constraints.
I came across this same issue myself, and it turned out to be a missing index involved in a (left) join on the result of a subquery.
select *
from foo A
left outer join (
select x, count(*) as cnt
from bar
group by x
) B on A.x = B.x
Adding an index named bar_x on bar.x fixed it.
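Something along these lines (assuming bar lives in the default dbo schema):
CREATE NONCLUSTERED INDEX bar_x ON dbo.bar (x);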
Instead of this:
DECLARE @id INT = 1
SELECT * FROM View1 WHERE ID = @id
do this:
DECLARE @sql varchar(max)
SET @sql = 'SELECT * FROM View1 WHERE ID = ' + CAST(@id as varchar)
EXEC (@sql)
Solves your problem
I'm still in the process of getting to fully understand SQL Server. I have written a stored procedure, shown below:
ALTER PROC [dbo].[Specific_Street_Lookup]
@STR Varchar(50),
@CNT int
AS
BEGIN
SELECT DISTINCT TOP (@CNT)
street_desc, street_localitydesc, postcode_selected
FROM
Full_Streets
INNER JOIN
Postcodes ON Full_Streets.street_postcodeid = postcodes.postcode_id
WHERE
street_desc LIKE @STR+'%'
AND postcode_selected = 'TRUE'
ORDER BY
street_desc, street_localitydesc
END
It works, but it can take up to 7 seconds to return a result, and I'm not sure what I can do to speed up the query.
The full_street table has a row count of 856800
The postcode table has a row count of 856208
Both tables have a primary key (street_id & postcode_id)
The purpose of the query: in my VB.NET app, as the user types in a street to look up, it returns a number of records (@CNT) that match the partial string (LIKE @STR + '%'), and only where postcode_selected = 'TRUE'.
I'm sure there must be a quicker / better way to do this and any help would be appreciated.
Thanks
Can you try with this index?
CREATE INDEX NCI_street_desc ON Full_Streets(street_desc) INCLUDE(street_localitydesc)
The LIKE operator is evil in such a big table, and I don't think you can optimize this query with normal indexes.
Consider using the Full-Text Search functionality. With full-text search you can't search for arbitrary portions of strings (unless you make a special table where you pre-save all the possible portions of your strings), but performance is hugely superior to what you can achieve using the LIKE operator.
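A rough sketch of what that could look like (the catalog name and key-index name here are hypothetical; full-text search requires a unique key index on the table):
CREATE FULLTEXT CATALOG StreetsCatalog;
CREATE FULLTEXT INDEX ON dbo.Full_Streets (street_desc)
    KEY INDEX PK_Full_Streets ON StreetsCatalog;
GO
-- Prefix ("starts with") search with CONTAINS instead of LIKE
SELECT TOP (10) street_desc
FROM dbo.Full_Streets
WHERE CONTAINS(street_desc, '"High*"');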
I would change the postcode_selected column type to bit (TRUE = 1, FALSE = 0) and then modify the stored procedure accordingly; it will reduce the cost of the query.
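A sketch of that change, assuming the column lives on the Postcodes table (the new column name is made up):
ALTER TABLE dbo.Postcodes ADD postcode_selected_bit bit NOT NULL DEFAULT 0;
UPDATE dbo.Postcodes SET postcode_selected_bit = CASE WHEN postcode_selected = 'TRUE' THEN 1 ELSE 0 END;
-- ...then filter on the new column in the procedure:
-- AND postcode_selected_bit = 1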
EDIT: I've updated the example code and provided complete table and view implementations for reference, but the essential question remains unchanged.
I have a fairly complex view in a database that I am attempting to query. When I attempt to retrieve a set of rows from the view by hard-coding the WHERE clause to specific foreign key values, the view executes very quickly with an optimal execution plan (indexes are used properly, etc.)
SELECT *
FROM dbo.ViewOnBaseTable
WHERE ForeignKeyCol = 20
However, when I attempt to add parameters to the query, all of a sudden my execution plan falls apart. When I run the query below, I'm getting index scans instead of seeks all over the place and the query performance is very poor.
DECLARE @ForeignKeyCol int = 20
SELECT *
FROM dbo.ViewOnBaseTable
WHERE ForeignKeyCol = @ForeignKeyCol
I'm using SQL Server 2008 R2. What gives here? What is it about using parameters that is causing a sub-optimal plan? Any help would be greatly appreciated.
For reference, here are the object definitions involved.
CREATE TABLE [dbo].[BaseTable]
(
[PrimaryKeyCol] [uniqueidentifier] PRIMARY KEY,
[ForeignKeyCol] [int] NULL,
[DataCol] [binary](1000) NOT NULL
)
CREATE NONCLUSTERED INDEX [IX_BaseTable_ForeignKeyCol] ON [dbo].[BaseTable]
(
[ForeignKeyCol] ASC
)
CREATE VIEW [dbo].[ViewOnBaseTable]
AS
SELECT
PrimaryKeyCol,
ForeignKeyCol,
DENSE_RANK() OVER (PARTITION BY ForeignKeyCol ORDER BY PrimaryKeyCol) AS ForeignKeyRank,
DataCol
FROM
dbo.BaseTable
I am certain that the window function is the problem, but I am filtering my query by a single value that the window function is partitioning by, so I would expect the optimizer to filter first and then run the window function. It does this in the hard-coded example but not the parameterized example. Below are the two query plans. The top plan is good and the bottom plan is bad.
When using OPTION (RECOMPILE) be sure to look at the post-execution ('actual') plan rather than the pre-execution ('estimated') one. Some optimizations are only applied when execution occurs:
DECLARE @ForeignKeyCol int = 20;
SELECT ForeignKeyCol, ForeignKeyRank
FROM dbo.ViewOnBaseTable
WHERE ForeignKeyCol = @ForeignKeyCol
OPTION (RECOMPILE);
Pre-execution plan:
Post-execution plan:
Tested on SQL Server 2012 build 11.0.3339 and SQL Server 2008 R2 build 10.50.4270
Background & limitations
When windowing functions were added in SQL Server 2005, the optimizer had no way to push selections past these new sequence projections. To address some common scenarios where this caused performance problems, SQL Server 2008 added a new simplification rule, SelOnSeqPrj, which allows suitable selections to be pushed where the value is a constant. This constant may be a literal in the query text, or the sniffed value of a parameter obtained via OPTION (RECOMPILE).
There is no particular problem with NULLs, though the query may need to have ANSI_NULLS OFF to see this.
As far as I know, applying the simplification to constant values only is an implementation limitation; there is no particular reason it could not be extended to work with variables. My recollection is that the SelOnSeqPrj rule addressed the most commonly seen performance problems.
Parameterization
The SelOnSeqPrj rule is not applied when a query is successfully auto-parameterized. There is no reliable way to determine from SSMS whether a query was auto-parameterized; it only indicates that auto-parameterization was attempted. To be clear, the presence of place-holders like [@0] only shows that auto-parameterization was attempted. A reliable way to tell whether a prepared plan was cached for reuse is to inspect the plan cache, where the 'parameterized plan handle' provides the link between the ad-hoc and prepared plans.
For example, the following query appears to be auto-parameterized in SSMS:
SELECT *
FROM dbo.ViewOnBaseTable
WHERE ForeignKeyCol = 20;
But the plan cache shows otherwise:
WITH XMLNAMESPACES
(
DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan'
)
SELECT
parameterized_plan_handle =
deqp.query_plan.value('(//StmtSimple)[1]/@ParameterizedPlanHandle', 'nvarchar(64)'),
parameterized_text =
deqp.query_plan.value('(//StmtSimple)[1]/@ParameterizedText', 'nvarchar(max)'),
decp.cacheobjtype,
decp.objtype,
decp.plan_handle
FROM sys.dm_exec_cached_plans AS decp
CROSS APPLY sys.dm_exec_sql_text(decp.plan_handle) AS dest
CROSS APPLY sys.dm_exec_query_plan(decp.plan_handle) AS deqp
WHERE
dest.[text] LIKE N'%ViewOnBaseTable%'
AND dest.[text] NOT LIKE N'%dm_exec_cached_plans%';
If the database option for forced parameterization is enabled, we get a parameterized result, where the optimization is not applied:
ALTER DATABASE Sandpit SET PARAMETERIZATION FORCED;
DBCC FREEPROCCACHE;
SELECT *
FROM dbo.ViewOnBaseTable
WHERE ForeignKeyCol = 20;
The plan cache query now shows a parameterized cached plan, linked by the parameterized plan handle:
Workaround
Where possible, my preference is to rewrite the view as an in-line table-valued function, where the intended position of the selection can be made more explicit (if necessary):
CREATE FUNCTION dbo.ParameterizedViewOnBaseTable
(@ForeignKeyCol integer)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
SELECT
bt.PrimaryKeyCol,
bt.ForeignKeyCol,
ForeignKeyRank = DENSE_RANK() OVER (
PARTITION BY bt.ForeignKeyCol
ORDER BY bt.PrimaryKeyCol),
bt.DataCol
FROM dbo.BaseTable AS bt
WHERE
bt.ForeignKeyCol = @ForeignKeyCol;
The query becomes:
DECLARE @ForeignKeyCol integer = 20;
SELECT pvobt.*
FROM dbo.ParameterizedViewOnBaseTable(@ForeignKeyCol) AS pvobt;
With the execution plan:
You could always go the CROSS APPLY way.
ALTER VIEW [dbo].[ViewOnBaseTable]
AS
SELECT
PrimaryKeyCol,
ForeignKeyCol,
ForeignKeyRank,
DataCol
FROM (
SELECT DISTINCT
ForeignKeyCol
FROM dbo.BaseTable
) AS Src
CROSS APPLY (
SELECT
PrimaryKeyCol,
DENSE_RANK() OVER (ORDER BY PrimaryKeyCol) AS ForeignKeyRank,
DataCol
FROM dbo.BaseTable AS B
WHERE B.ForeignKeyCol = Src.ForeignKeyCol
) AS X
I think in this particular case it may be because the data types of your parameters and your table columns do not match exactly, so SQL Server has to do an implicit conversion, which is not a sargable operation.
Check your table data types and make your parameters the same type. Or do the cast yourself outside the query.
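As an illustration (the table and column here are hypothetical; the point is that the parameter's declared type matches the column's type, or that the cast happens on the parameter side rather than on the column):
-- dbo.Customers.Name is varchar(50) in this sketch
DECLARE @name varchar(50) = 'Smith'         -- same type as the column: no CONVERT_IMPLICIT
SELECT *
FROM dbo.Customers AS c
WHERE c.Name = @name

-- If the parameter has to stay nvarchar, cast the parameter, not the column:
DECLARE @nname nvarchar(50) = N'Smith'
SELECT *
FROM dbo.Customers AS c
WHERE c.Name = CAST(@nname AS varchar(50))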
I have a T-SQL statement that I am running against a table with many rows. I am seeing some strange behavior. Comparing a DateTime column against a precalculated value is slower than comparing each row against a calculation based on the GETDATE() function.
The following SQL takes 8 secs:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
GO
DECLARE @TimeZoneOffset int = -(DATEPART("HH", GETUTCDATE() - GETDATE()))
DECLARE @LowerTime DATETIME = DATEADD("HH", ABS(@TimeZoneOffset), CONVERT(VARCHAR, GETDATE(), 101) + ' 17:00:00')
SELECT TOP 200 Id, EventDate, Message
FROM Events WITH (NOLOCK)
WHERE EventDate > @LowerTime
GO
This alternate strangely returns instantly:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
GO
SELECT TOP 200 Id, EventDate, Message
FROM Events WITH (NOLOCK)
WHERE EventDate > GETDATE()-1
GO
Why is the second query so much faster?
EDITED: I updated the SQL to accurately reflect other settings I am using
After doing a lot of reading and researching, I've discovered that the issue here is parameter sniffing. SQL Server attempts to determine how best to use indexes based on the WHERE clause, but in this case it isn't doing a very good job.
See the examples below :
Slow version:
declare @dNow DateTime
Select @dNow=GetDate()
Select *
From response_master_Incident rmi
Where rmi.response_date between DateAdd(hh,-2,@dNow) AND @dNow
Fast version:
Select *
From response_master_Incident rmi
Where rmi.response_date between DateAdd(hh,-2,GetDate()) AND GetDate()
The "Fast" version runs around 10x faster than the slow version. The Response_Date field is indexed and is a DateTime type.
The solution is to tell SQL Server how best to optimise the query. Modifying the example as follows to include the OPTIMIZE FOR option resulted in it using the same execution plan as the "fast" version. The OPTIMIZE FOR option here explicitly tells SQL Server to treat the local @dNow variable as a date (as if declaring it as DateTime wasn't enough :s).
Care should be taken when doing this however because in more complicated WHERE clauses you could end up making the query perform worse than Sql Server's own optimisations.
declare @dNow DateTime
SET @dNow=GetDate()
Select ID, response_date, call_back_phone
from response_master_Incident rmi
where rmi.response_date between DateAdd(hh,-2,@dNow) AND @dNow
-- Without the hint, the optimizer does not know much about the variable, so it assumes it should perform a clustered index scan (on the clustered index ID) - this is slow
-- This hint tells the optimizer that the variable is indeed a datetime in this format (why it does not know that already, who knows)
OPTION(OPTIMIZE FOR (@dNow = '99991231'));
The execution plans must be different because SQL Server does not evaluate the value of the variable when it creates the execution plan, so it uses average statistics across all the different dates that can be stored in the table.
On the other hand, the GETDATE function is evaluated when the plan is created, so the execution plan is built using statistics for that specific date, which, of course, are more realistic than the previous ones.
If you create a stored procedure with @LowerTime as a parameter, you will get better results.
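For instance, a minimal sketch using the Events table from the question (the procedure name is made up):
CREATE PROCEDURE dbo.GetEventsSince
    @LowerTime datetime
AS
BEGIN
    SET NOCOUNT ON;
    -- The parameter value is sniffed when the plan is compiled, so the plan
    -- is built using statistics for the actual value passed in.
    SELECT TOP (200) Id, EventDate, Message
    FROM dbo.Events
    WHERE EventDate > @LowerTime;
END;
GO
EXEC dbo.GetEventsSince @LowerTime = '20201023 17:00:00';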
I want to use a clause along the lines of "CASE WHEN ... THEN 1 ELSE 0 END" in a select statement. The tricky part is that I need it to work with "value IN @List".
If I hard code the list it works fine - and it performs well:
SELECT
CASE WHEN t.column_a IN ( 'value a', 'value b' ) THEN 1 ELSE 0 END AS priority
, t.column_b
, t.column_c
FROM
table AS t
ORDER BY
priority DESC
What I would like to do is:
-- @AvailableValues would be a list (array) of strings.
DECLARE
@AvailableValues ???
SELECT
@AvailableValues = ???
FROM
lookup_table
SELECT
CASE WHEN t.column_a IN @AvailableValues THEN 1 ELSE 0 END AS priority
, t.column_b
, t.column_c
FROM
table AS t
ORDER BY
priority DESC
Unfortunately, it seems that SQL Server doesn't do this - you can't use a variable with an IN clause. So this leaves me with some other options:
Make '@AvailableValues' a comma-delimited string and use a LIKE statement. This does not perform well.
Use an inline SELECT statement against 'lookup_table' in place of the variable. Again, this doesn't perform well (I think) because it has to look up the table for each row.
Write a function wrapping around the SELECT statement in place of the variable. I haven't tried this yet (will try it now), but it seems it would have the same problem as a direct SELECT statement.
???
Are there any other options? Performance is very important for the query - it has to be really fast as it feeds a real-time search result page (i.e. no caching) for a web site.
Are there any other options here? Is there a way to improve the performance of one of the above options to get good performance?
Thanks in advance for any help given!
UPDATE: I should have mentioned that the 'lookup_table' in the example above is already a table variable. I've also updated the sample queries to better demonstrate how I'm using the clause.
UPDATE II: It occurred to me that the IN clause is operating off an NVARCHAR/NCHAR field (due to historical table design reasons). If I was to make changes that dealt with integer fields (i.e through PK/FK relationship constraints) could this have much impact on performance?
You can use a variable in an IN clause, but not in the way you're trying to do. For instance, you could do this:
declare @i int
declare @j int
select @i = 10, @j = 20
select * from YourTable where SomeColumn IN (@i, @j)
The key is that the variables cannot represent more than one value.
To answer your question, use the inline select. As long as you don't reference an outer value in the query (which could change the results on a per-row basis), the engine will not repeatedly select the same data from the table.
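In other words, something like this (the column name in lookup_table is a placeholder, since the question doesn't show it):
SELECT
CASE WHEN t.column_a IN (SELECT l.available_value FROM lookup_table AS l) THEN 1 ELSE 0 END AS priority
, t.column_b
, t.column_c
FROM
[table] AS t
ORDER BY
priority DESC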
Based on your update and assuming the lookup table is small, I suggest trying something like the following:
DECLARE @MyLookup table
(SomeValue nvarchar(100) not null)
SELECT
case when ml.SomeValue is not null then 1 else 0 end AS Priority
,t.column_b
,t.column_c
from MyTable t
left outer join @MyLookup ml
on ml.SomeValue = t.column_a
order by case when ml.SomeValue is not null then 1 else 0 end desc
(You can't reference the column alias "Priority" in the ORDER BY clause. Alternatively, you could use the ordinal position like so:
order by 1 desc
but that's generally not recommended.)
As long as the lookup table is small, this really should run fairly quickly -- but your comment implies that it's a pretty big table, and that could slow down performance.
As for n[Var]char vs. int: yes, integers would be faster, if only because the CPU has fewer bytes to juggle around... which should only be a problem when processing a lot of rows, so it might be worth trying.
I solved this problem by using the CHARINDEX function. I wanted to pass the string in as a single parameter, so I created a string with a leading and trailing comma around each value I wanted to test for. Then I concatenated a leading and trailing comma onto the string I wanted to check was "in" the parameter, and at the end I checked for CHARINDEX > 0.
DECLARE @CTSPST_Profit_Centers VARCHAR (256)
SELECT @CTSPST_Profit_Centers = ',CS5000U37Y,CS5000U48B,CS5000V68A,CS5000V69A,CS500IV69A,CS5000V70S,CS5000V79B,CS500IV79B,'
SELECT
CASE
WHEN CHARINDEX(','+ISMAT.PROFIT_CENTER+',', @CTSPST_Profit_Centers) > 0 THEN 'CTSPST'
ELSE ISMAT.DESIGN_ID + ' 1 CPG'
END AS DESIGN_ID
You can also do it in the where clause
WHERE CHARINDEX(','+ISMAT.PROFIT_CENTER+',', @CTSPST_Profit_Centers) > 0
If you were trying to compare numbers you'd need to convert the number to a text string for the CHARINDEX function to work.
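For example, if PROFIT_CENTER were numeric, the comparison might look like this (the varchar length here is arbitrary):
WHERE CHARINDEX(','+CAST(ISMAT.PROFIT_CENTER AS varchar(20))+',', @CTSPST_Profit_Centers) > 0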
This might be along the lines of what you need.
Note that this assumes that you have permissions and the input data has been sanitized.
From Running Dynamic Stored Procedures
CREATE PROCEDURE MyProc (@WHEREClause varchar(255))
AS
-- Create a variable @SQLStatement
DECLARE @SQLStatement varchar(255)
-- Enter the dynamic SQL statement into the
-- variable @SQLStatement
SELECT @SQLStatement = 'SELECT * FROM TableName WHERE ' + @WHEREClause
-- Execute the SQL statement
EXEC(@SQLStatement)
For my SQL queries, I usually do the following for SELECT statements:
SELECT ...
FROM table t
WHERE 1=1
AND t.[column1] = @param1
AND t.[column2] = @param2
This will make it easy if I need to add / remove / comment any WHERE clauses, since I don't have to care about the first line.
Is there any performance hit when using this pattern?
Additional Info:
Example for sheepsimulator and all other who didn't get the usage.
Suppose that, in the above query, I need @param1 to no longer be included in the query:
With 1=1:
...
WHERE 1=1 <-- no change
--AND t.[column1] = @param1 <-- changed
AND t.[column2] = @param2 <-- no change
...
Without 1=1:
...
WHERE <-- no change
--t.[column1] = @param1 <-- changed
{AND removed} t.[column2] = @param2 <-- changed
...
No, SQL Server is smart enough to omit this condition from the execution plan since it's always TRUE.
Same is true for Oracle, MySQL and PostgreSQL.
It is likely that if you look with the profiler, you will see that the optimizer ignores it more often than not, so in the grand scheme of things there probably won't be much in the way of performance gain or loss.
This has no performance impact, but the SQL text looks as though it has been mangled by a SQL injection attack; the '1=1' trick appears in many SQL-injection-based attacks. You run the risk that some customer of yours someday deploys a 'black box' that monitors SQL traffic and your app ends up flagged as 'hacked'. Source code analyzers may also flag this. It's a long shot, of course, but something worth putting into the balance.
One potentially mildly negative impact of this is that the AND 1=1 will stop SQL Server's simple parameterisation facility from kicking in.
Demo script
DBCC FREEPROCCACHE; /*<-- Don't run on production box!*/
CREATE TABLE [E7ED0174-9820-4B29-BCDF-C999CA319131]
(
X INT,
Y INT,
PRIMARY KEY (X,Y)
);
GO
SELECT *
FROM [E7ED0174-9820-4B29-BCDF-C999CA319131]
WHERE X = 1
AND Y = 2;
GO
SELECT *
FROM [E7ED0174-9820-4B29-BCDF-C999CA319131]
WHERE X = 2
AND Y = 3;
GO
SELECT *
FROM [E7ED0174-9820-4B29-BCDF-C999CA319131]
WHERE 1 = 1
AND X = 1
AND Y = 2
GO
SELECT *
FROM [E7ED0174-9820-4B29-BCDF-C999CA319131]
WHERE 1 = 1
AND X = 2
AND Y = 3
SELECT usecounts,
execution_count,
size_in_bytes,
cacheobjtype,
objtype,
text,
creation_time,
last_execution_time,
execution_count
FROM sys.dm_exec_cached_plans a
INNER JOIN sys.dm_exec_query_stats b
ON a.plan_handle = b.plan_handle
CROSS apply sys.dm_exec_sql_text(b.sql_handle) AS sql_text
WHERE text LIKE '%\[E7ED0174-9820-4B29-BCDF-C999CA319131\]%' ESCAPE '\'
AND text NOT LIKE '%this_query%'
ORDER BY last_execution_time DESC
GO
DROP TABLE [E7ED0174-9820-4B29-BCDF-C999CA319131]
This shows that both of the queries without the 1=1 were satisfied by a single parameterised version of the cached plan, whereas the queries with the 1=1 each compiled and stored a separate plan for the different constant values.
Ideally you shouldn't be relying on this anyway though and should be explicitly parameterising queries to ensure that the desired elements are parameterised and the parameters have the correct datatypes.
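For example, explicit parameterisation of the demo query with sp_executesql would look something like this:
DECLARE @sql nvarchar(max) = N'
SELECT *
FROM [E7ED0174-9820-4B29-BCDF-C999CA319131]
WHERE X = @X
AND Y = @Y;';

EXEC sys.sp_executesql
    @sql,
    N'@X int, @Y int',
    @X = 1,
    @Y = 2;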
There is no difference, as they are evaluated as constants and optimized out. I use both 1=1 and 0=1 in hand- and code-generated AND and OR lists, and it has no effect.
Since the condition is always true, SQL Server will ignore it. You can check by running two queries, one with the condition and one without, and comparing the two actual execution plans.
An alternative to achieve your ease of commenting requirement is to restructure your query:
SELECT ...
FROM table t
WHERE
t.[column1] = #param1 AND
t.[column2] = #param2 AND
t.[column3] = #param3
You can then add/remove/comment out lines in the where conditions and it will still be valid SQL.
No performance hit. Even if your WHERE clause is loaded with a large number of comparisons, this is tiny.
Best case scenario is that it's a bit-for-bit comparison. Worst case is that the digits are evaluated as integers.
For queries of any reasonable complexity there will be no difference. You can look at some execution plans and also compare real execution costs, and see for yourself.