Improving query plan compilation/caching - sql-server

I have a pretty basic positional inverted index in which I store a lot of words (search terms), and I use it to implement an efficient general-purpose search.
My problem is that the query plan compilation is actually taking notably longer than the execution itself, and I wonder if there's something that can be done about that.
I'm using dynamic T-SQL (building up the query from strings)
I'm using a lot of CTEs
There's a bunch of filter check boxes whose population depends on the initial search result (take the search result and get me the count of some property of some entity), e.g. for each person found by the search text, give me the distinct organizations involved and their respective frequency (count). These need to be re-evaluated a lot.
I've done parameterization (given the parameters default sizes, not constants, which should be fine, eh?) and qualified all tables, and I rely on views where possible.
The query structurally changes every time I apply a new filter or change the number of search terms, which necessitates recompilation and takes time; other than that, the query plan works really well.
The thing is, these CTEs and filter box results are virtually the same or near-identical even when they are not structurally equivalent, so I'm wondering if there's anything that can be done to improve the compilation time.
If you want to see the T-SQL I can provide samples; it's just that it's big, roughly 100 lines of T-SQL per search. I thought I'd ask first before we go down that road; maybe the solution is a lot simpler than I believe it to be?

Have you considered applying the OPTIMIZE FOR query hint?
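For example, a minimal sketch (the table, column, and parameter names here are invented for illustration):
SELECT p.PersonId, p.Name
FROM dbo.Person AS p
WHERE p.SearchTerm = @term
-- Compile the plan for a representative value instead of the sniffed one
OPTION (OPTIMIZE FOR (@term = N'writer'));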
If you can split the large query into smaller parameterised stored procedures and combine their results, they are more likely to be cached.
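As a rough sketch (all object names invented), one of your filter-box counts could become its own small procedure:
CREATE PROCEDURE dbo.CountOrganizationsForTerm
    @searchTerm NVARCHAR(200)
AS
BEGIN
    SET NOCOUNT ON;
    -- One small, stable statement per filter box: its plan is compiled
    -- once and reused instead of being recompiled with the big query.
    SELECT h.OrganizationId, COUNT(DISTINCT h.PersonId) AS Frequency
    FROM dbo.SearchHit AS h
    WHERE h.Term = @searchTerm
    GROUP BY h.OrganizationId;
END;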
There is also the option of optimizing for ad hoc workloads in SQL Server 2008 (although this might be a last resort):
sp_configure 'show advanced options', 1;
RECONFIGURE
GO
sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE
GO
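With this enabled, the first execution of an ad hoc batch caches only a small compiled plan stub; the full plan is cached only if the same batch runs again, which keeps one-off dynamic SQL from bloating the plan cache. Note that it reduces cache pressure rather than compilation time itself.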

Related

What do you do to make sure a new index does not slow down queries?

When we add or remove an index to speed something up, we may end up slowing down something else.
To protect against such cases, after creating a new index I am doing the following steps:
start the Profiler,
run a SQL script which contains lots of queries I do not want to slow down,
load the trace from a file into a table,
analyze CPU, reads, and writes from the trace against the results from the previous runs, before I added (or removed) the index.
This is kind of automated and kind of does what I want. However, I am not sure if there is a better way to do it. Is there some tool that does what I want?
Edit 2: I googled but did not find anything that explains how adding an index can slow down selects. However, this is a well-known fact, so there should be something somewhere. If nothing comes up, I can write up a few examples later on.
Edit 3: One such example: two columns are highly correlated, like height and weight. We have an index on height, which is not selective enough for our query. We add an index on weight and run a query with two conditions: a range on height and a range on weight. Because the optimizer is not aware of the correlation, it grossly underestimates the cardinality of our query.
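A sketch of that situation (invented schema; the point is only the independence assumption in the estimates):
CREATE TABLE dbo.People (PersonId INT, HeightCm INT, WeightKg INT);
CREATE INDEX IX_People_Height ON dbo.People (HeightCm);
CREATE INDEX IX_People_Weight ON dbo.People (WeightKg);

-- The optimizer multiplies the selectivities of the two ranges as if they
-- were independent, so for correlated columns the combined row estimate
-- comes out far lower than the true count.
SELECT PersonId
FROM dbo.People
WHERE HeightCm BETWEEN 180 AND 200
  AND WeightKg BETWEEN 80 AND 110;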
Another example: adding an index on an increasing column, such as OrderDate, can seriously slow down a query with a condition like OrderDate > SomeDateAfterCreatingTheIndex, because the statistics histogram ends at the highest value present when the statistics were built, so the optimizer estimates almost no rows beyond it and may choose a poor plan.
Ultimately, what you're asking can be rephrased as: "How can I ensure that queries which already use an optimal, fast plan do not get 'optimized' into a worse execution plan?"
Whether the plan changes due to parameter sniffing, a statistics update, or metadata changes (like adding a new index), the best answer I know of for keeping the plan stable is plan guides. Deploying plan guides for critical queries that already have good execution plans is probably the best way to force the optimizer to keep using the good, validated plan. See Applying a Fixed Query Plan to a Plan Guide:
You can apply a fixed query plan to a plan guide of type OBJECT or SQL. Plan guides that apply a fixed query plan are useful when you know about an existing execution plan that performs better than the one selected by the optimizer for a particular query.
The usual warnings apply as to any possible abuse of a feature that prevents the optimizer from using a plan which may be actually better than the plan guide.
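As a minimal sketch (statement text and names are illustrative, and the hint is just an example; a truly fixed plan would instead pass the plan XML via the USE PLAN hint):
EXEC sp_create_plan_guide
    @name = N'PG_StablePersonSearch',
    @stmt = N'SELECT PersonId FROM dbo.Person WHERE Name = @name;',
    @type = N'SQL',
    @module_or_batch = NULL,   -- NULL means the batch is @stmt itself
    @params = N'@name nvarchar(100)',
    @hints = N'OPTION (KEEPFIXED PLAN)';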
How about the following approach:
Save the execution plans of all typical queries.
After applying new indexes, check which execution plans have changed.
Test the performance of the queries with modified plans.
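One hedged way to script the first two steps (run it before and after the index change into two differently named tables, then compare the query_plan_hash values):
SELECT qs.query_hash,
       qs.query_plan_hash,          -- changes when the plan shape changes
       qs.total_worker_time,
       st.text AS query_text
INTO dbo.PlanSnapshotBefore        -- e.g. dbo.PlanSnapshotAfter on the second run
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st;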
From the page "Query Performance Tuning"
Improve Indexes
This page has many helpful step-by-step hints on how to tune your indexes for best performance, and what to watch for (profiling).
As with most performance optimization techniques, there are tradeoffs. For example, with more indexes, SELECT queries will potentially run faster. However, DML (INSERT, UPDATE, and DELETE) operations will slow down significantly because more indexes must be maintained with each operation. Therefore, if your queries are mostly SELECT statements, more indexes can be helpful. If your application performs many DML operations, you should be conservative with the number of indexes you create.
Other resources:
http://databases.about.com/od/sqlserver/a/indextuning.htm
However, it's important to keep in mind that non-clustered indexes slow down the data modification and insertion process, so indexes should be kept to a minimum
http://searchsqlserver.techtarget.com/tip/Stored-procedure-to-find-fragmented-indexes-in-SQL-Server
Fragmented indexes and tables in SQL Server can slow down application performance. Here's a stored procedure that finds fragmented indexes in SQL servers and databases.
OK. First off, indexes slow down (at least) two things:
-> insert/update/delete: index maintenance
-> query planning: "shall I use that index or not?"
Someone mentioned the query planner might take a less efficient route - this is not supposed to happen.
If your optimizer is even half-decent, and your statistics and parameters are correct, there is no way it's going to pick the wrong plan.
Either way, in your case (MSSQL), you can hardly trust the optimizer and will still have to check every time.
What you're currently doing looks quite sound; you should just make sure the data you're looking at is relevant, i.e. real use-case queries in the right proportion (this can make a world of difference).
In order to do that, I always advise writing a benchmarking script based on real use, through logging of production-environment queries, a bit like I said here:
Complete db schema transformation - how to test rewritten queries?

How do I tune a query

I have a query that is running slowly. I know that, generally, to improve performance you limit joins and try to use procs instead of straight queries. Due to business rules, I cannot use procs. I've already cut the number of joins as much as I can think of.
What's the next step in query tuning?
Adding indexes is probably the number one thing you can do to improve query performance and you haven't mentioned it.
Have you looked at the execution plan to see whether that could be improved with additional indexes?
Additionally, you should make sure that your queries are written in such a way that they can use any indexes that are present effectively (e.g. avoid non-sargable constructs, avoid SELECT *).
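For instance, the same filter written two ways (hypothetical Orders table):
-- Non-sargable: the function wrapped around the column blocks an index seek
SELECT OrderId FROM dbo.Orders WHERE YEAR(OrderDate) = 2010;

-- Sargable rewrite: a plain range on the column can use an index on OrderDate
SELECT OrderId
FROM dbo.Orders
WHERE OrderDate >= '20100101' AND OrderDate < '20110101';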
The easiest thing to do is to go to Management Studio and run this command:
SET SHOWPLAN_ALL ON
then run your actual query.
You will not get your regular query result set. Instead, it gives you the execution plan (a very detailed list of what SQL Server does to execute your query) as a result set. Look over the output and try to learn what it means. I generally look for "SCAN", as that is a slow part, and I try rewriting the query so it uses an index.
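Note that the SET statement must be the only statement in its batch, so a full session looks roughly like this (table name invented):
SET SHOWPLAN_ALL ON;
GO
-- Returns the estimated plan as rows instead of the query's result set
SELECT OrderId FROM dbo.Orders WHERE CustomerId = 42;
GO
SET SHOWPLAN_ALL OFF;
GO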

What are the types and inner workings of a query optimizer?

As I understand it, most query optimizers are "cost-based". Others are "rule-based", or I believe they call it "Syntax Based". So, what's the best way to optimize the syntax of SQL statements to help an optimizer produce better results?
Some cost-based optimizers can be influenced by "hints" like FIRST_ROWS(). Others are tailored for OLAP. Is it possible to learn the detailed logic of how Informix IDS's and SE's optimizers decide the best route for processing a query, other than via SET EXPLAIN? Is there any documentation which illustrates the ranking of SELECT statements as to the fastest way to access rows, assuming they're indexed?
I would imagine that "SELECT col FROM table WHERE ROWID = n" is the fastest (rank 1).
If I'm not mistaken, Informix SE's ROWID is a SERIAL (INT), which allows for a maximum of about 2 billion rows; or maybe it uses INT8 for terabyte-scale row counts? SE's optimizer is cost-based when it has enough data, but it does not use distributions like the IDS optimizer.
IDS' ROWID isn't an INT; it is the logical address of the row's page, left-shifted 8 bits, plus the slot number on the page that contains the row's data.
IDS' optimizer is a cost-based optimizer that uses data about the index depth and width, the number of rows, the number of pages, and the data distributions created by UPDATE STATISTICS MEDIUM and HIGH to decide which query path is the least expensive, but there's no ranking of statements?
I think Oracle uses HEX values for ROWID. Too bad ROWID can't often be used, since a row's ROWID can change. So maybe ROWID could be used by the optimizer as a counter to report a query's progress, an idea I mentioned in my "Begin viewing query results before query completes" question? I feel it wouldn't be that difficult to report a query's progress while it is being processed, perhaps at the expense of some slight overhead, but it would be nice to know ahead of time: a "Google-like" estimate of how many rows meet a query's criteria, progress displayed every 100, 200, 500 or 1,000 rows, the ability to cancel it at any time, and qualifying rows displayed as they are added to the result list while the search continues. This is just one example; perhaps we could think of other neat/useful features, since the ingredients are more or less there.
Perhaps we could fine-tune each query with more granularity than is currently available? OLTP queries tend to be mostly static and predefined. The "what-ifs" are more OLAP, so let's try to add more control and intelligence there. In other words, what's needed is being able to precisely control the optimizer, not just "hint"/"influence" it. We could then have more dynamic SELECT statements for specific situations, and maybe even tell IDS to read blocks of index nodes at a time instead of one by one, etc.
I'm not really sure what you're after, but here is some info on the SQL Server query optimizer which I've recently read:
13 Things You Should Know About Statistics and the Query Optimizer
SQL Server Query Execution Plan Analysis
and one for Informix that I just found using Google:
Part 1: Tuning Informix SQL
For Oracle, your best resource would be Cost-Based Oracle Fundamentals. It's about 500 pages (and billed as Volume 1, but there haven't been any follow-ups yet).
For a (very) simple full-table scan, progress can sometimes be monitored through v$session_longops. Oracle knows how many blocks it has to scan, how many blocks it has scanned, how many it has to go, and reports on progress.
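Something along these lines (standard columns of that view):
-- Oracle: progress of long-running operations such as full table scans
SELECT opname, target, sofar, totalwork,
       ROUND(100 * sofar / totalwork, 1) AS pct_done
FROM v$session_longops
WHERE totalwork > 0
  AND sofar <> totalwork;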
Indexes are a different matter. If I search for records for a client 'Frank', and use the index, the database will make a guess at how many 'Frank' entries are in the table, but that guess can be massively off. It may be that you have 1000 'Frankenstein' and just 1 'Frank' or vice versa.
It gets even more complicated as you add in other filter and access predicates (e.g. where multiple indexes can be chosen), and makes another leap as you include table joins. And that's without getting into the complex stuff about remote databases, or domain indexes like Oracle Text and Locator.
In short, it is very complicated. It is stuff that can be useful to know if you are responsible for tuning a large application. Even for basic development, you need to have some grounding in how the database can physically retrieve the data you are interested in.
But I'd say you are going the wrong way here. The point of an RDBMS is to abstract the details so that, for the most part, they just happen. Oracle employs smart people to write query transformation logic into the optimizer so that we developers can move away from 'syntax fiddling' to get the best plans (not totally, but it is getting better).

SQL Server fields with most usage

Is there a SQL query that can tell me which fields in a given table are used in the most stored procedures, or are updated/selected most often? I am asking this because I want to figure out which fields to put indexes on.
Take a look at the missing indexes article on SQLServerPedia http://sqlserverpedia.com/wiki/Find_Missing_Indexes
I think you are looking at the problem the wrong way around.
What you first need to identify are the most expensive queries in your normal workload (cumulatively: both single runs with high cost, and lower-cost queries that run many times).
Once you have identified those queries, you can analyse their query plans and create appropriate indexes.
This SO Answer might be of use: How Can I Log and Find the Most Expensive Queries? (title and tags say SQL Server 2008, but my accepted answer applies to any version).
The most-used fields are by no means automatically index candidates. Good index candidates are those that correctly balance the extra storage requirements with SARGability and query projection coverage, as described in Index Design Basics. You should follow the advice the engine itself is giving you, using the Missing Indexes feature:
sys.dm_db_missing_index_group_stats
sys.dm_db_missing_index_groups
sys.dm_db_missing_index_details
sys.dm_db_missing_index_columns
A good action plan is to start from the most expensive queries by IO, obtained from sys.dm_exec_query_stats, and then open each query's plan with sys.dm_exec_query_plan in Management Studio; at the top of the query plan view there will be a proposed index, with the CREATE INDEX statement ready to copy and paste for execution. In fact, you don't even have to run the queries yourself to find the most expensive ones in the plan cache; there are already SSMS reports that can find them for you.
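A commonly used shape for that missing-index query (the impact formula is a rough heuristic, not an official metric):
SELECT TOP (20)
       d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       gs.user_seeks,
       gs.avg_total_user_cost * gs.avg_user_impact * gs.user_seeks AS estimated_impact
FROM sys.dm_db_missing_index_group_stats AS gs
JOIN sys.dm_db_missing_index_groups AS g
    ON g.index_group_handle = gs.group_handle
JOIN sys.dm_db_missing_index_details AS d
    ON d.index_handle = g.index_handle
ORDER BY estimated_impact DESC;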
Wish it was that easy.... You need to do a search for SQL Query Optimization. There are lots of tools to use for what you need.
And knowing which fields are used often does not tell you much about what you need to do to optimize access.
You might have some luck by searching your code for all the queries and then running EXPLAIN (or SHOWPLAN in SQL Server) against them. This should give you some glaring numbers where a query is poorly indexed. Unfortunately, it won't give you an idea of which statements are called most frequently.
Also, I've noticed some issues with type mismatching in queries. When you pass a string containing a number against a column indexed as an integer, the type mismatch can prevent the index from being used and force a scan of the entire table.
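A SQL Server flavour of that trap, sketched with an invented table (the direction of the implicit conversion matters; here the NVARCHAR literal outranks the VARCHAR column, so the column side gets converted and, depending on collation, the index may not be able to seek):
-- Assume CustomerCode is VARCHAR(20) and indexed
SELECT CustomerId FROM dbo.Customers WHERE CustomerCode = N'ABC123'; -- column converted: often a scan
SELECT CustomerId FROM dbo.Customers WHERE CustomerCode = 'ABC123';  -- matching type: a seek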

Alternatives to MS SQL 2005 FullText Catalog

I can't seem to get acceptable performance from FullText Catalogs. We have situations where we must run 100k+ queries as quickly as possible. Some of the queries use FREETEXT some don't. Here's an example of a query
IF EXISTS (SELECT 1 FROM user_data d WHERE d.userid = @userid AND FREETEXT(*, @activities)) SET @match = 1
This can take between 3 and 15 seconds; I need it to be much faster, under 1 second if possible.
I like the "flexibility" of the fulltext query in that it can search across multiple columns and the syntax is pretty intuitive. I'd rather not use a Like statement because we want to be able to match words like "Writer" and "Writing".
I've tried some of the suggestions listed here http://msdn.microsoft.com/en-us/library/ms142560(SQL.90).aspx
We've got as much memory and CPU as we can afford; unfortunately, we can't put the catalogs on their own disk controllers.
I'm stumped and ready to explore other alternatives to full-text queries. Is there anything else out there that gives that kind of "Writer"/"Writing" matching? Perhaps even something that uses the CLR?
Check out these alternatives, although I doubt they'll improve performance without isolating them onto separate hardware:
Which search technology to use with ASP.NET?
Lucene.Net and SQL Server
Due to the nature of FREETEXT, its performance is lower than CONTAINS, simply because it has to take into account less precise alternatives for the keywords given. CONTAINS can find "Writing" when you specify FORMSOF(INFLECTIONAL, write), by the way; I'm not sure if you've checked whether CONTAINS will do the trick or not.
Also be sure to avoid IF statements in SQL, as they often lead to a complete recompilation of the execution plan for every execution, which will likely contribute to the poor performance you're seeing. I'm not sure how the IF statement is used, as it's likely inside a bigger piece of SQL. Try to merge the EXISTS query with that bigger piece of SQL: you can set the @match variable from within a SELECT statement around the EXISTS, or get rid of the variable altogether and use the EXISTS clause as a predicate in the bigger query.
SQL is a set-oriented, interpreted language, so it's often faster to get rid of imperative programming constructs and use the native set operators of SQL instead.
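Putting both suggestions together, a hedged sketch using the names from the question (FORMSOF(INFLECTIONAL, ...) is what makes CONTAINS match variants such as writing/wrote):
SET @match = 0;
-- The assignment only happens if at least one row qualifies,
-- so no IF EXISTS branch is needed.
SELECT @match = 1
FROM user_data AS d
WHERE d.userid = @userid
  AND CONTAINS(*, 'FORMSOF(INFLECTIONAL, write)');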
Perhaps https://github.com/MahyTim/LuceneNetSqlDirectory can help you; it allows you to store a Lucene.NET index in SQL Server.
