I currently have a 'Filter' object that corresponds to a business object. The object has properties for each of the ways I want to be able to filter/search a list of those business objects. Each Filter object has a method that builds the contents of a WHERE clause, which is then passed to a SQL Server 2000 stored procedure and concatenated with the rest of the SELECT query. The final string is executed using EXEC.
This currently works fine, except that I worry about the performance impact of losing execution plan caching. In my research I have seen sp_executesql suggested; is that a better solution, or are there better conventions for what I am doing?
Update: I think part of the issue with using sp_executesql is that, based on a collection in my filter, I need to generate a list of OR conditions. I am not sure that a parameterized query would solve this.
Example:
var whereClause = new StringBuilder();
if (Status.Count > 0)
{
    whereClause.Append("(");
    foreach (OrderStatus item in Status)
    {
        whereClause.AppendFormat("Orders.Status = {0} OR ", (int)item);
    }
    whereClause.Remove(whereClause.Length - 4, 4); // trim the trailing " OR "
    whereClause.Append(") AND ");
}
Yes, sp_executesql will "cache" the execution plan of the query it executes.
Alternatively, instead of passing part of the query to the stored procedure, building the full query there, and executing dynamic SQL, you could build the entire query on the .NET side and execute it using an ADO.NET command object. Parameterized queries executed through ADO.NET are sent to the server via sp_executesql, so their plans are cached and reused by default as well.
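To make that concrete, here is a minimal sketch of how the Status filter from the question could be built as a fully parameterized command on the .NET side (BuildStatusFilter and the Orders schema are illustrative assumptions, not the asker's actual code). It also addresses the update about OR lists: generate one parameter per collection item. Because the command is parameterized, ADO.NET wraps it in sp_executesql, so each distinct list length gets one cached, reusable plan and the values can never inject SQL:

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Linq;

// Sketch: builds "Orders.Status IN (@s0, @s1, ...)" with one parameter per
// collection item; assumes statuses is non-empty, as in the question's guard.
static SqlCommand BuildStatusFilter(SqlConnection conn, IList<int> statuses)
{
    string[] names = statuses.Select((s, i) => "@s" + i).ToArray();
    SqlCommand cmd = conn.CreateCommand();
    cmd.CommandText =
        "SELECT * FROM Orders WHERE Orders.Status IN (" + string.Join(", ", names) + ")";
    for (int i = 0; i < statuses.Count; i++)
    {
        cmd.Parameters.Add(names[i], SqlDbType.Int).Value = statuses[i];
    }
    return cmd;
}

An IN list built this way is equivalent to the chain of ORs in the question's example.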
sp_executesql is better than EXEC because of plan reuse, and you can use parameters, which helps guard against SQL injection. sp_executesql also won't cause procedure cache bloat if used correctly.
Take a look at these two articles:
Avoid Conversions In Execution Plans By Using sp_executesql Instead of Exec
Changing exec to sp_executesql doesn't provide any benefit if you are not using parameters correctly
You should be using sp_executesql, simply because, as you say, the query plan is stored and future executions will be optimized. It also generally seems to handle dynamic SQL better than EXECUTE.
Modern RDBMSes (whether SQL Server 2000 counts as "modern" is debatable) are optimized for ad-hoc queries, so there's a negligible performance hit, if any. What bothers me is that you're using a sproc to construct dynamic SQL: that is a huge debugging/support PITA.
sp_executesql is the better option. Have you considered not using a stored procedure for this, or at least taking out some of the dynamic SQL? I think it would be much safer from any kind of injection. I write filters much like the ones you are describing, but I try to take care of the input in my code rather than in a stored procedure. I really like dynamic SQL, but maybe it's safer to go the extra mile sometimes.
I have a simple SELECT statement with a couple of columns referenced in the WHERE clause. Normally I do these simple ones in the VB code (set up a Command object, set CommandType to Text, set CommandText to the SELECT statement). However, I'm seeing timeout problems. We've optimized just about everything we can with our tables, etc.
I'm wondering if there's a big performance hit just because I'm doing the query this way, versus creating a simple stored procedure with a couple of parameters. I'm thinking maybe the inline code forces SQL Server to do extra work compiling and creating a query plan, which wouldn't occur if I used a stored procedure.
An example of the actual SQL being run:
SELECT TOP 1 * FROM MyTable WHERE Field1 = @Field1 ORDER BY ID DESC
A well-formed "inline" or "ad-hoc" SQL query - if properly used with parameters - is just as good as a stored procedure.
But this is absolutely crucial: you must use properly parameterized queries! If you don't - if you concatenate your SQL together for each request - then you don't benefit from these points...
Just like with a stored procedure, on first execution a query plan must be computed, and that execution plan is then cached in the plan cache - just like with a stored procedure.
If you call your parameterized inline SQL statement multiple times, that query plan is reused over and over again - and the "inline" query plan is subject to the same cache eviction policies as the execution plan of a stored procedure.
So from that point of view, if you really do use properly parameterized queries, there's no performance benefit to a stored procedure.
Stored procedures have other benefits (like being a "security boundary" etc.), but just raw performance isn't one of their major plus points.
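As a concrete sketch of what "properly parameterized" means (in C# rather than the asker's VB; connectionString, field1Value, and Field1's int type are placeholder assumptions):

using System.Data;
using System.Data.SqlClient;

// Sketch: the plan is compiled once for this parameterized text and then
// reused for every value of @Field1, just like a stored procedure's plan.
static void RunQuery(string connectionString, int field1Value)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "SELECT TOP 1 * FROM MyTable WHERE Field1 = @Field1 ORDER BY ID DESC", conn))
    {
        cmd.Parameters.Add("@Field1", SqlDbType.Int).Value = field1Value;
        conn.Open();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            if (reader.Read())
            {
                // consume the single row here
            }
        }
    }
}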
It is true that the DB has to do the extra work you mention, but that should not result in a big performance hit (unless you are running the query very, very frequently).
Use SQL Profiler to see what is actually getting sent to the server. Use Activity Monitor to see if there are other queries blocking yours.
Your query couldn't be simpler. Is Field1 indexed? As others have said, there is no performance hit associated with "ad-hoc" queries.
As for where to put your queries: this is one of the oldest debates in tech. I would argue that your queries belong to your application. They will be versioned with your app, tested with your app, and should disappear when your app disappears. Putting them anywhere other than in your app is walking into a world of pain. But for goodness' sake, use .sql files compiled as embedded resources.
Inline query:
A SELECT statement that appears inside another statement (for example, in its FROM clause) is called an inline query.
Cannot take parameters.
Not a database object.
Stored procedure:
Can take parameters.
Is a database object.
Can be reused globally whenever the same action needs to be performed.
I have a query in my MVC application which takes about 20 seconds to complete (using NHibernate 3.1). When I execute the query manually in Management Studio it takes 0 seconds.
I've seen similar questions on SO, so I took my test one step further.
I intercepted the query using SQL Server Profiler, and executed it using ADO.NET in my application.
The query that I got from the Profiler is something like: "exec sp_executesql N'select...."
My ADO.NET code:
SqlConnection conn = (SqlConnection) NHibernateManager.Current.Connection;
var query = @"<query from profiler...>";
var cmd = new SqlCommand(query, conn);
SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.CloseConnection);
return RedirectToAction("Index");
This query is also very fast, taking no time to execute.
Also, I've seen something very strange in the Profiler. The query, when executed from NH, has the following statistics:
reads: 281702
writes: 0
The one from ADO.NET:
reads: 333
writes: 0
Does anyone have a clue? Is there any info I can provide to help diagnose the problem?
I thought it might be related to some connection settings, but the ADO.NET version is using the same connection as NHibernate.
Thanks in advance
UPDATE:
I'm using NHibernate LINQ. The query is enormous, but it is a paging query, with just 10 records being fetched.
The parameters that are passed to the "exec sp_executesql" are:
@p0 int,@p1 datetime,@p2 datetime,@p3 bit,@p4 int,@p5 int
@p0=10,@p1='2009-12-01 00:00:00',@p2='2009-12-31 23:59:59',@p3=0,@p4=1,@p5=0
It turned out that ADO.NET and NHibernate were using different query plans, and I was suffering the effects of parameter sniffing on the NH version. Why? Because I had previously run the query with a small date interval, and the cached query plan was optimized for it.
Afterwards, when querying with a large date interval, the stored plan was used and it took ages to get a result.
I confirmed that this was in fact the problem because a simple:
DBCC FREEPROCCACHE -- clears the query-plan cache
made my query fast again.
I found 2 ways to solve this:
Injecting an "option(recompile)" hint into the query, using an NH interceptor (see the sketch below)
Adding a dummy predicate to my NH LINQ expression, like query = query.Where(x => true), when the expected result set was small (date-interval-wise). This way two different query plans are created, one for large sets of data and one for small sets.
I tried both options and both worked, but I opted for the second approach. It's a little bit hacky but works really well in my case, because the data is uniformly distributed date-wise.
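For reference, a minimal sketch of the interceptor approach (option 1), assuming NHibernate's EmptyInterceptor; as written it appends the hint to every statement, so in practice you would scope it to the affected query:

using NHibernate;
using NHibernate.SqlCommand;

// Sketch: appends OPTION (RECOMPILE) to the generated SQL so SQL Server
// compiles a fresh plan per execution instead of reusing a sniffed one.
public class RecompileInterceptor : EmptyInterceptor
{
    public override SqlString OnPrepareStatement(SqlString sql)
    {
        return sql.Append(" option (recompile)");
    }
}

// Usage: var session = sessionFactory.OpenSession(new RecompileInterceptor());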
I had the exact same problem as the OP. I tried @psousa's suggestion of injecting an "option(recompile)" hint, which did improve my performance. But in the end I found that simply updating statistics on SQL Server did the trick for me.
UPDATE STATISTICS tablename;
I ended up backing out the code that injected the "option(recompile)" hint. I realize this may not be the answer for everyone, but I wanted to share it since it was the cause of my problems.
Look at the parameters being supplied to the sp_executesql stored proc. If the parameters are supplied as nvarchar (N'value') and the columns they reference are varchar, SQL Server will use a very inefficient query plan. This has been the root cause of all the performance issues I've had that exhibit these symptoms (slow in the app, fast in SSMS).
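To illustrate with plain ADO.NET (a hypothetical helper, not the OP's NHibernate mapping; in NHibernate the usual fix is mapping the string property as AnsiString so it is sent as varchar):

using System.Data;
using System.Data.SqlClient;

// Assumes the column being compared is varchar(20).
static void AddCodeParameter(SqlCommand cmd, string customerCode)
{
    // Risky: AddWithValue infers NVARCHAR for a .NET string, so a VARCHAR
    // column is implicitly converted and an index seek can become a scan.
    // cmd.Parameters.AddWithValue("@code", customerCode);

    // Safer: declare the type and length explicitly to match the column.
    cmd.Parameters.Add("@code", SqlDbType.VarChar, 20).Value = customerCode;
}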
You didn't specify your query or the size of its result set, but there is a known issue with fetching a large number of entities with NHibernate.
Basically, the time to 'hydrate' the objects is what's taking so long.
You can try turning on the reflection optimizer, or using an IStatelessSession.
See some suggestions that I've got here.
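In case it helps, a minimal sketch of the IStatelessSession suggestion (Order stands in for whatever mapped entity you are fetching): a stateless session skips the first-level cache, change tracking, and lazy loading, so hydrating a large read-only result set is much cheaper:

using System.Collections.Generic;
using NHibernate;

// Sketch: bulk read-only fetch through a stateless session.
static IList<Order> LoadOrders(ISessionFactory factory)
{
    using (IStatelessSession session = factory.OpenStatelessSession())
    {
        return session.CreateCriteria<Order>().List<Order>();
    }
}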
I have used both a lot, and I know the difference between a SQL query and a stored procedure:
A SQL query is compiled every time it is executed.
A stored procedure is compiled only once, when it is executed for the first time.
This is a general database question, but I have one big doubt.
For example, take one dynamic piece of work: I pass a user ID to a stored procedure and it returns that user's username, password, and full details.
In this scenario the query inside the procedure still has to execute again each time, so what is the need for a stored procedure instead of a SQL query?
Please clear up this doubt.
Update: Thanks for all your answers, but I am not asking for advantages or a comparison.
Just tell me how a stored procedure executes when the work is dynamic.
For example, if I pass user ID 10, the SP reads user 10's records; if I pass 14, the SP looks up user 14's records. A normal SQL query does this same work, executing and fetching each time, so why should I go for an SP?
Regards
Stored procedures, like the name says, are stored on the database server. They are transmitted to the server and compiled when you create them, and executed when you call them.
Simple SQL queries, on the other hand, are transmitted to the server and compiled each time you use them.
So transmitting a huge query (instead of a simple "execute procedure" command) and compiling it create overhead that can be avoided by using a stored procedure.
MySQL, like other RDBMSes, has a query cache. But it avoids only the compilation, and only if the query is exactly the same as a previously executed one; the cache is not used if you execute the same query twice with different values in the WHERE clause, for example.
I see no reason for a stored procedure simply to query for all user details.
Stored procedures are functional code that you execute on the database server. I can think of three reasons why you'd use them:
1. To create an interface for users that hides the schema details from clients.
2. Performance. Extensive calculations on a large data set might be done more efficiently on the database server.
3. Sometimes it can be difficult (or impossible, depending on your skill) to express what you think you need in a declarative, set-based language like SQL. That's when some people throw up their hands and write stored procs.
Only 1. would be justifiable from your question. I would recommend sticking with SQL.
UPDATE: The new information you provided still does not justify stored procedures in my opinion. A query that returns 14 records is routine.
Does anyone know of a way to verify the correctness of the queries in all stored procedures in a database?
I'm thinking of the scenario where, if you modify something in a code file, simply doing a rebuild shows you compilation errors that point you to the places you need to fix. In a database scenario, if you modify a table and remove a column that is used in a stored procedure, you won't know anything about the problem until the first time that procedure runs.
What you describe is what unit testing is for. Stored procedures and functions often require parameters to be set, and if the stored procedure or function encapsulates dynamic SQL there's a chance that a corner case is missed.
Also, all you mention is checking for basic errors - nothing about validating the data returned. For example, I can change the precision of a numeric column...
This also gets into the basic testing that should occur for the immediate issue, and regression testing to ensure there aren't unforeseen issues.
You could create all of your objects with SCHEMABINDING, which would prevent you from changing any underlying tables without dropping and recreating the views and procedures built on top of them.
Depending on your development process, this could be pretty cumbersome. I offer it as a solution though, because if you want to ensure the correctness of all procedures in the db, this would do it.
I found this example on MSDN (SQL Server 2012). I guess it can be used in some scenarios:
USE AdventureWorks2012;
GO
SELECT p.name, r.*
FROM sys.procedures AS p
CROSS APPLY sys.dm_exec_describe_first_result_set_for_object(p.object_id, 0) AS r;
Source: sys.dm_exec_describe_first_result_set_for_object
A while ago I had a query that I ran quite a lot for one of my users. It was still being evolved and tweaked, but eventually it stabilised and ran quite quickly, so we created a stored procedure from it.
So far, so normal.
The stored procedure, though, was dog slow. No material difference between the query and the proc, but the speed change was massive.
[Background, we're running SQL Server 2005.]
A friendly local DBA (who no longer works here) took one look at the stored procedure and said "parameter spoofing!" (Edit: although it seems that it is possibly also known as 'parameter sniffing', which might explain the paucity of Google hits when I tried to search it out.)
We abstracted some of the stored procedure to a second one, wrapped the call to this new inner proc into the pre-existing outer one, called the outer one and, hey presto, it was as quick as the original query.
So, what gives? Can someone explain parameter spoofing?
Bonus credit for
highlighting how to avoid it
suggesting how to recognise possible causes
discussing alternative strategies, e.g. stats, indices, keys, for mitigating the situation
FYI - you need to be aware of something else when you're working with SQL 2005 and stored procs with parameters.
SQL Server will compile the stored proc's execution plan with the first parameter that's used. So if you run this:
usp_QueryMyDataByState 'Rhode Island'
The execution plan will work best with a small state's data. But if someone turns around and runs:
usp_QueryMyDataByState 'Texas'
The execution plan designed for Rhode-Island-sized data may not be as efficient with Texas-sized data. This can produce surprising results when the server is restarted, because the newly generated execution plan will be targeted at whatever parameter is used first - not necessarily the best one. The plan won't be recompiled until there's a big reason to do it, like if statistics are rebuilt.
This is where query plans come in, and SQL Server 2008 offers a lot of new features that help DBAs pin a particular query plan in place long-term no matter what parameters get called first.
My concern is that when you rebuilt your stored proc, you forced the execution plan to recompile. You called it with your favorite parameter, and then of course it was fast - but the problem may not have been the stored proc. It might have been that the stored proc was recompiled at some point with an unusual set of parameters and thus, an inefficient query plan. You might not have fixed anything, and you might face the same problem the next time the server restarts or the query plan gets recompiled.
Yes, I think you mean parameter sniffing, which is a technique the SQL Server optimizer uses to try to figure out parameter values/ranges so it can choose the best execution plan for your query. In some instances SQL Server does a poor job of parameter sniffing and doesn't pick the best execution plan for the query.
I believe this blog article http://blogs.msdn.com/queryoptteam/archive/2006/03/31/565991.aspx has a good explanation.
It seems that the DBA in your example chose option #4: moving the query to another sproc, in a separate procedural context.
You could also have used WITH RECOMPILE on the original sproc, or the OPTIMIZE FOR option on the parameter.
A simple way to speed that up is to reassign the input parameters to local variables at the very beginning of the sproc, e.g.
CREATE PROCEDURE uspParameterSniffingAvoidance
    @SniffedFormalParameter int
AS
BEGIN
    DECLARE @SniffAvoidingLocalParameter int
    SET @SniffAvoidingLocalParameter = @SniffedFormalParameter
    -- Work with @SniffAvoidingLocalParameter in the sproc body;
    -- the optimizer cannot sniff a local variable, so it uses average statistics.
    -- ...
END
In my experience, the best solution for parameter sniffing is dynamic SQL. Two important things to note are: 1. you should use parameters in your dynamic SQL query; 2. you should use sp_executesql (and not sp_execute), so the execution plan is cached and reused across parameter values.
Parameter sniffing is a technique SQL Server uses to optimize the query execution plan for a stored procedure. When you first call the stored procedure, SQL Server looks at the parameter values of your call and decides which indexes to use based on them.
So when the first call uses atypical parameters, SQL Server may select and store an execution plan that is sub-optimal for subsequent calls of the stored procedure.
You can work around this by either
using WITH RECOMPILE
copying the parameter values to local variables inside the stored procedure and using the locals in your queries.
I even heard that it's better to not use stored procedures at all but to send your queries directly to the server.
I recently came across the same problem and have no real solution yet.
For some queries, copying the parameters to local vars helps get back to the right execution plan; for other queries, performance degrades with local vars.
I still have to do more research on how SQL Server caches and reuses (sub-optimal) execution plans.
I had a similar problem. My stored procedure took 30-40 seconds to execute. I tried running the SP's statements in a query window and they took a few milliseconds.
Then I tried declaring local variables within the stored procedure and transferring the parameter values to them. This made the SP execution very fast: the same SP now executes within a few milliseconds instead of 30-40 seconds.
Put very simply: the query optimizer reuses an old query plan for frequently run queries. But as the size of the data grows, a newly optimized plan may be required, while the optimizer keeps using the old plan. This is called parameter sniffing.
I have also written a detailed post on this; please visit this URL:
http://www.dbrnd.com/2015/05/sql-server-parameter-sniffing/
Changing your stored procedure to execute as a batch should increase the speed.
Batch select, i.e.:
exec ('select * from Orders where OrderID = ''' + @ordersID + '''')
Instead of the normal stored procedure select:
select * from Orders where OrderID = @ordersID
Just pass in the parameter as nvarchar and you should get quicker results.