OPTION (OPTIMIZE FOR (@AccountID = 148)) - sql-server

I recently saw this statement in a large, complex production query that is executed a couple hundred times per minute. It seems the original author of this query was trying to optimize it for the case when @AccountID = 148. There are a total of 811 different account ID values that could occur in the table. The value 148 accounts for 9.5% of all rows in the table (~60.1M rows total), the highest share of any account value.
I've never come across anyone doing something like this. It seems to me this only has significant value if, more often than not, the @AccountID parameter actually equals 148. Otherwise, the query plan could assume more rows are being returned than actually are, and a scan might be performed instead of a seek.
So, is there any practical value to doing this, in this particular scenario?

Assume the extreme case where account 148 alone covers 10% of the table, and every other account covers just 0.001% each. Given that this account represents 10% of your data, it also stands to reason it will be searched for more often than the other accounts. Now imagine that for any other account, a nested loop join over a small number of rows would be really fast, but for account 148 it would be hideously slow, and a hash join would be the superior choice.
Further imagine that, by some stroke of bad luck, the first query that comes into your system after a reboot/plan recycle is for an account other than 148. You are now stuck with a plan that performs extremely poorly 10% of the time, even though it also happens to be really good for the other data. In this case, you may well want the optimizer to stick with the plan that isn't a disaster 10% of the time, even if that means it's slightly less than optimal the rest of the time. This is where OPTIMIZE FOR comes in.
Alternatives are OPTIMIZE FOR UNKNOWN (if your distribution is far more even, but you need to protect against an accidental plan lock-in from unrepresentative parameters), RECOMPILE (which can have a considerable performance impact for queries executed this frequently), explicit hints (FORCESEEK, OPTION (HASH JOIN), etc.), adding fine-grained custom statistics (CREATE STATISTICS), and splitting the query (IF @AccountID = 148 ...), possibly in combination with a filtered index. But aside from all these, OPTIMIZE FOR @Param = <specific non-representative value> certainly has a place.
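To make the syntax concrete, here is a minimal sketch of how the hint is typically applied (the procedure and table names are invented for illustration, not taken from the original query):

CREATE PROCEDURE dbo.GetAccountRows
    @AccountID int
AS
BEGIN
    SELECT *
    FROM dbo.SomeLargeTable
    WHERE AccountID = @AccountID
    -- Compile the plan as if @AccountID were 148, regardless of the value
    -- actually passed in on the first execution after a plan flush.
    OPTION (OPTIMIZE FOR (@AccountID = 148));
END;

The cached plan is then always the one built for the skewed value, so the worst case stays predictable even when a rare account happens to be queried first.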

Related

Bind Aware Cursor Matching Explanation

Hi, I am having a little trouble finding a simple explanation for bind aware cursor matching in Oracle. Is bind aware cursor matching basically Oracle monitoring a query with a bind variable over time and seeing if there's an increase in CPU when using some variable values? Then, from doing this, it generates a more suitable execution plan (say a full table scan), marks the query as bind aware, and the next time the query is executed there is a choice of two execution plans? Any help will be greatly appreciated! Cheers!
In the simplest case, imagine that you have an ORDERS table. In that table is a status column. There are only a handful of status values and some are very, very popular while others are very rare. Imagine that the table has 10 million rows. For our purposes, say that 93% are "COMPLETE", 5% are "CANCELLED", and the remaining 2% are spread across 8 different statuses that track the order flow (INCOMPLETE, IN FULFILLMENT, IN TRANSIT, etc.).
If you have the most basic statistics on your table, the optimizer knows that there are 10 million rows and 10 distinct statuses. It doesn't know that some status values are more popular than others so it guesses that each status corresponds to 1 million rows. So when it sees a query like
SELECT *
FROM orders
WHERE status = :1
it guesses that it needs to fetch 1 million rows from the table regardless of the bind variable value so it decides to use a full table scan.
Now, a human comes along wondering why Oracle is being silly and doing a full table scan when he asks for the handful of orders that are in an IN TRANSIT status-- clearly an index scan would be preferable there. That human realizes that the optimizer needs more information in order to learn that some status values are more popular than others so that human decides to gather a histogram (there are options that cause Oracle to gather histograms on certain columns automatically as well but I'm ignoring those options to try to keep the story simple).
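For reference, gathering such a histogram usually looks something like this (assuming Oracle's DBMS_STATS package; the schema name here is a placeholder):

-- Gather table statistics and request a histogram on the skewed column.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'APP_OWNER',
    tabname    => 'ORDERS',
    method_opt => 'FOR COLUMNS status SIZE 254');  -- up to 254 histogram buckets
END;
/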
Once the histogram is gathered, the optimizer knows that the status value is highly skewed-- there are lots of COMPLETED orders but very few IN TRANSIT orders. If it sees a query that is using literals rather than bind variables, i.e.
SELECT *
FROM orders
WHERE status = 'IN TRANSIT'
vs
SELECT *
FROM orders
WHERE status = 'COMPLETED'
then it is very easy for the optimizer to decide to use an index in the first case and table scan in the second. When you have a bind variable, though, the optimizer's job is more difficult-- how is it supposed to determine whether to use the index or to do a table scan...
Oracle's first solution was known as "bind variable peeking". In this approach, when the optimizer sees something like
SELECT *
FROM orders
WHERE status = :1
where it knows (because of the histogram on status) that the query plan should depend on the value passed in for the bind variable, Oracle "peeks" at the first value that is passed in to determine how to optimize the statement. If the first bind variable value is 'IN TRANSIT', an index scan will be used. If the first bind variable value is 'COMPLETE', a table scan will be used.
For a lot of cases, this works pretty well. Lots of queries really only make sense for either very popular or very rare values. In our example, it's pretty unlikely that anyone would ever really want a list of all 9 million COMPLETE orders but someone might want a list of the couple thousand orders in one of the various transitory states.
But bind variable peeking doesn't work well in other cases. If you have a system where the application sometimes binds very popular values and sometimes binds very rare values, you end up with a situation where application performance depends heavily on who happens to run a query first. If the first person to run the query uses a very rare value, the index scan plan will be generated and cached. If the second person to run the query uses the very common value, the cached plan will be used and you'll get an index scan that takes forever. If the roles are reversed, the second person uses the rare value, gets the cached plan that does a full table scan, and has to scan the entire table to get the couple hundred rows they're interested in. This sort of non-deterministic behavior tends to drive DBAs and developers mad because it can be maddeningly hard to diagnose and can lead to rather odd explanations-- Tom Kyte has an excellent example of a customer that concluded they needed to reboot the database in the afternoon if it rained Monday morning.
Bind aware cursor matching is the solution to the bind variable peeking problem. Now, when Oracle sees the query
SELECT *
FROM orders
WHERE status = :1
and sees that there is a histogram on status that indicates that some values are more common than others, it is smart enough to make that cursor "bind aware". That means that when you bind a value of IN FULFILLMENT, the optimizer is smart enough to conclude that this is one of the rare values and give you the index plan. When you bind a value of COMPLETE, the optimizer is smart enough to conclude that this is one of the common values and give you the plan with the table scan. So the optimizer now knows of two different plans for the same query and when it sees a new bind value like IN TRANSIT, it checks to see whether that value is similar to others that it has seen before and either gives you one of the existing plans or creates another new plan. In this case, it would decide that IN TRANSIT is roughly as common as IN FULFILLMENT so it re-uses the plan with the index scan rather than generating a third query plan. This, hopefully, leads to everyone getting their preferred plan without having to generate and cache query plans every time a bind variable value changes.
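If you want to watch this happening (assuming Oracle 11g or later, where adaptive cursor sharing exists), the shared pool exposes flags per child cursor; this query is purely illustrative:

-- Each child cursor reports whether it is bind sensitive / bind aware.
SELECT sql_id,
       child_number,
       is_bind_sensitive,
       is_bind_aware,
       executions
FROM   v$sql
WHERE  sql_text LIKE 'SELECT * FROM orders WHERE status%';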
Of course, in reality, there are lots of additional caveats, corner cases, considerations, and complications that I'm intentionally (and unintentionally) glossing over here. But that's the basic idea of what the optimizer is trying to accomplish.

Performance of 100M Row Table (Oracle 11g)

We are designing a table for ad-hoc analysis that will capture umpteen value fields over time for claims received. The table structure is essentially (pseudo-ish-code):
table_huge (
claim_key int not null,
valuation_date_key int not null,
value_1 some_number_type,
value_2 some_number_type,
[etc...],
constraint pk_huge primary key (claim_key, valuation_date_key)
);
All value fields are numeric. The requirements are: the table shall capture a minimum of 12 recent years (hopefully more) of incepted claims; each claim shall have a valuation date for each month-end occurring between claim inception and the current date; typical claim inception volumes range from 50k-100k per year.
Adding all this up I project a table with a row count on the order of 100 million, and could grow to as much as 500 million over years depending on the business's needs. The table will be rebuilt each month. Consumers will select only. Other than a monthly refresh, no updates, inserts or deletes will occur.
I am coming at this from the business (consumer) side, but I have an interest in mitigating the IT cost while preserving the analytical value of this table. We are not overwhelmingly concerned about quick returns from the Table, but will occasionally need to throw a couple dozen queries at it and get all results in a day or three.
For argument's sake, let's assume the technology stack is, I dunno, in the 80th percentile of modern hardware.
The questions I have are:
Is there a point at which the cost-to-benefit of indices becomes excessive, considering a low frequency of queries against high-volume tables?
Does the SO community have experience with 100M+ row tables and can offer tips on how to manage?
Do I leave the database technology problem to IT to solve, or should I seriously consider curbing the business requirements (and why)?
I know these are somewhat soft questions, and I hope readers appreciate this is not a proposition I can test before building.
Please let me know if any clarifications are needed. Thanks for reading!
First of all: expect this to "just work" if you leave the tech problem to IT, especially if your budget allows for an "80% current" hardware level.
I do have experience with 200M+ rows in MySQL on entry-level and outdated hardware, and I was always positively surprised.
Some Hints:
On the monthly refresh, load the table without non-primary indices, then create them. Look for the sweet spot: how many index creations in parallel work best. In a project with much less data (ca. 10M rows), this reduced load time by 70% compared to the naive "create table, then load data" approach.
Try to get a grip on the number and complexity of concurrent queries: this influences your hardware decisions (less concurrency = less IO, more CPU).
Assuming you have 20 numeric fields of 64 bits each, times 200M rows: if I calculate correctly, this is a payload of 32 GB. Trade cheap disks against 64 GB of RAM and never have an IO bottleneck.
Make sure you set the tablespace to read only.
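A rough sketch of that monthly refresh flow (assuming Oracle; the index, staging table, and tablespace names are invented):

-- Reopen the tablespace and drop secondary indexes before the bulk load.
ALTER TABLESPACE ts_claims READ WRITE;
DROP INDEX ix_huge_value_1;

-- Direct-path load from a staging table.
INSERT /*+ APPEND */ INTO table_huge (claim_key, valuation_date_key, value_1, value_2)
SELECT claim_key, valuation_date_key, value_1, value_2
FROM   staging_huge;
COMMIT;

-- Recreate the secondary indexes after the load, in parallel.
CREATE INDEX ix_huge_value_1 ON table_huge (value_1) PARALLEL 4 NOLOGGING;

-- Read only until next month's refresh.
ALTER TABLESPACE ts_claims READ ONLY;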
You could consider an anchor modeling approach to store changes only.
Considering that there are so many expected repeated rows (~95%), bringing the row count down from 100M to only 5M removes most of your concerns.
At that point it is mostly a cache consideration: if the whole table can somehow fit into cache, things happen fairly fast.
For "low" data volumes, the following structure is slower to query than a plain table; at one point (as data volume grows) it becomes faster. That point depends on several factors, but it may be easy to test. Take a look at this white-paper about anchor modeling -- see graphs on page 10.
In anchor-modeling terms, the equivalent structure is an anchor for the claim plus one historized attribute table per value column, each storing a row only when that value changes between valuation dates.
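A rough DDL sketch of that idea for a single value column (the names and types are placeholders, not output from the modeling tool):

-- Anchor: one row per claim.
CREATE TABLE claim_anchor (
  claim_key INT NOT NULL,
  CONSTRAINT pk_claim_anchor PRIMARY KEY (claim_key)
);

-- Historized attribute: a row is stored only when value_1 changes
-- between valuation dates; repeat this pattern per value column.
CREATE TABLE claim_value_1 (
  claim_key          INT    NOT NULL,
  valuation_date_key INT    NOT NULL,
  value_1            NUMBER NOT NULL,
  CONSTRAINT pk_claim_value_1 PRIMARY KEY (claim_key, valuation_date_key),
  CONSTRAINT fk_claim_value_1 FOREIGN KEY (claim_key)
    REFERENCES claim_anchor (claim_key)
);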
The modeling tool has automatic code generation, but it seems that it currently fully supports only MS SQL Server, though there is Oracle in the drop-down too. It can still be used as a code helper.
In terms of supporting code, you will need (minimum)
Latest perspective view (auto-generated)
Point in time function (auto-generated)
Staging table from which this structure will be loaded (see tutorial for data-warehouse-loading)
Loading function, from staging table to the structure
Pruning functions for each attribute, to remove any repeating values
It is easy to create all this by following auto-generated-code patterns.
With no ongoing updates/inserts, an index NEVER has negative performance consequences, only positive (by MANY orders of magnitude for tables of this size).
More critically, the schema is seriously flawed. What you want is:
Claim
    claim_key
    valuation_date
ClaimValue
    claim_key (fk -> Claim.claim_key)
    value_key
    value
This is much more space-efficient as it stores only the values you actually have, and does not require schema changes when the number of values for a single row exceeds the number of columns you have allocated.
Using partitioning, and applying the partition key in every query you perform, will give you further performance improvements (see the sketch below).
In our company we solved a huge number of performance issues with partitioning.
One more design suggestion: if you know the table is going to be very, very big, try not to apply too many constraints on the table (handle them in the logic before you load), and don't have too many columns on the table, to avoid row-chaining issues.
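For illustration only, a partitioned variant of the table (assuming Oracle range partitioning on the valuation date key; the partition bounds are arbitrary):

CREATE TABLE table_huge (
  claim_key          INT NOT NULL,
  valuation_date_key INT NOT NULL,
  value_1            NUMBER,
  value_2            NUMBER,
  CONSTRAINT pk_huge PRIMARY KEY (claim_key, valuation_date_key)
)
PARTITION BY RANGE (valuation_date_key) (
  PARTITION p2011 VALUES LESS THAN (20120101),
  PARTITION p2012 VALUES LESS THAN (20130101),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);

-- Queries that include the partition key only touch the relevant partitions.
SELECT claim_key, value_1
FROM   table_huge
WHERE  valuation_date_key BETWEEN 20120101 AND 20121231
AND    claim_key = 12345;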

Alternatives to SQL COUNT(*) practices?

I was looking into the PostgreSQL/InnoDB MVCC COUNT(*) problem when I found an article about implementing a workaround in PostgreSQL. However, the author made a statement that caught my attention:
MySQL zealots tend to point to PostgreSQL’s slow count() as a weakness, however, in the real world, count() isn’t used very often, and if it’s really needed, most good database systems provide a framework for you to build a workaround.
Are there ways to skip using COUNT(*) in the way you design your applications?
Is it true that most applications are designed so they don't need it? I use COUNT() on most of my pages since they all need pagination. What is this guy talking about? Is that why some sites only have a "next/previous" link?
Carrying this over into the NoSQL world, is this also something that has to be done there since you can't COUNT() records very easily?
I think when the author said
however, in the real world, count() isn’t used very often
they specifically meant an unqualified count(*) isn't used very often, which is the specific case that MyISAM optimises.
My own experience backs this up -- apart from some dubious Munin plugins, I can't think of the last time I did a select count(*) from sometable.
For example, anywhere I'm doing pagination, it's usually the output of some search. Which implies there will be a WHERE clause to limit the results anyway- so I might be doing something like select count(*) from sometable where conditions followed by select ... from sometable limit n offset m. Neither of which can use the direct how-many-rows-in-this-table shortcut.
Now it's true that if the conditions are purely index conditions, then some databases can merge together the output of covering indices to avoid looking at the table data too. Which certainly decreases the number of blocks looked at... if it works. It may be that, for example, this is only a win if the query can be satisfied with a single index -- it depends on the db implementation.
Still, this is by no means always the case -- a lot of our tables have an active flag which isn't indexed, but often is filtered on, so would require a heap check anyway.
If you just need an idea of whether a table has data in it or not, PostgreSQL and many other systems do retain estimated statistics for each table: you can examine the reltuples and relpages columns in the catalogue for an estimate of how many rows the table has and how much space it is taking. This is fine so long as ~6 significant figures is accurate enough for you and some lag in the statistics being updated is tolerable. In the use case I can remember (plotting the number of items in a collection), it would have been fine for me...
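For example, the catalogue lookup is just this (PostgreSQL; the table name is a placeholder):

-- Estimated row count and size from planner statistics; no table scan needed.
SELECT reltuples::bigint AS approx_rows,
       relpages          AS pages
FROM   pg_class
WHERE  relname = 'sometable';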
Trying to maintain an accurate row counter is tricky. The article you cited caches the row count in an auxiliary table, which introduces two problems:
a race condition between SELECT and INSERT populating the auxiliary table (minor, you could seed this administratively)
as soon as you add a row to the main table, you have an update lock on the row in the auxiliary table. Now any other process trying to add to the main table has to wait.
The upshot is that concurrent transactions get serialised instead of being able to run in parallel, and you've lost the writers-don't-have-to-block-either benefits of MVCC: you should reasonably expect to be able to insert two independent rows into the same table at the same time.
MyISAM can cache the row count per table because it takes an exclusive lock on the table when someone writes to it (IIRC). InnoDB allows finer-grained locking, but it doesn't try to cache the row count for the table. Of course, if you don't care about concurrency and/or transactions, you can take shortcuts... but then you're moving away from PostgreSQL's main focus, where data integrity and ACID transactions are primary goals.
I hope this sheds some light. I must admit, I've never really felt the need for a faster "count(*)", so to some extent this is simply a "but it works for me" testament rather than a real answer.
While you're really asking an application design question more than a database question, there is more detail about how counting executes in PostgreSQL, and the alternatives to doing so, in the Slow Counting article. If you must have a fast count of something, you can maintain one with a trigger; there are examples in the references there. That costs you a bit on the insert/update/delete side in return for speeding the count up. You have to know in advance what you will eventually want a count of for that to work, though.
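A minimal sketch of that trigger approach (PostgreSQL; the table, function, and trigger names are invented, and as discussed above it serialises concurrent writers on the counter row):

-- Counter table, seeded once (note the race condition mentioned earlier).
CREATE TABLE row_counts (
    table_name text PRIMARY KEY,
    n          bigint NOT NULL
);
INSERT INTO row_counts VALUES ('sometable', (SELECT count(*) FROM sometable));

-- Keep the counter in sync on every insert/delete.
CREATE OR REPLACE FUNCTION track_count() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE row_counts SET n = n + 1 WHERE table_name = TG_TABLE_NAME;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE row_counts SET n = n - 1 WHERE table_name = TG_TABLE_NAME;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER sometable_count
    AFTER INSERT OR DELETE ON sometable
    FOR EACH ROW EXECUTE PROCEDURE track_count();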

What are the types and inner workings of a query optimizer?

As I understand it, most query optimizers are "cost-based". Others are "rule-based", or I believe they call it "Syntax Based". So, what's the best way to optimize the syntax of SQL statements to help an optimizer produce better results?
Some cost-based optimizers can be influenced by "hints" like FIRST_ROWS(). Others are tailored for OLAP. Is it possible to know more detailed logic about how Informix IDS and SE's optimizers decide what's the best route for processing a query, other than SET EXPLAIN? Is there any documentation which illustrates the ranking of SELECT statements as to what's the fastest way to access rows, assuming it's indexed?
I would imagine that "SELECT col FROM table WHERE ROWID = n" is the fastest (rank 1).
If I'm not mistaken, Informix SE's ROWID is a SERIAL (INT), which allows for a maximum of about 2 billion rows; or maybe it uses INT8 for larger row counts? SE's optimizer is cost-based when it has enough data, but it does not use distributions like the IDS optimizer does.
IDS' ROWID isn't an INT; it is the logical address of the row's page, left-shifted 8 bits, plus the slot number on the page that contains the row's data. IDS' optimizer is a cost-based optimizer that uses data about the index depth and width, the number of rows, the number of pages, and the data distributions created by UPDATE STATISTICS MEDIUM and HIGH to decide which query path is the least expensive, but there's no ranking of statements?
I think Oracle uses hex values for ROWID. Too bad ROWID can't be used more often, since a row's ROWID can change. So maybe ROWID could be used by the optimizer as a counter to report a query's progress, an idea I mentioned in my "Begin viewing query results before query completes" question? I feel it wouldn't be that difficult to report a query's progress while it is being processed, perhaps at the expense of some slight overhead, but it would be nice to know ahead of time: a "Google-like" estimate of how many rows meet a query's criteria, progress displayed every 100, 200, 500 or 1,000 rows, the ability for users to cancel at any time, and the qualifying rows displayed as they are added to the result list while the search continues. This is just one example; perhaps we could think of other neat/useful features, the ingredients are more or less there.
Perhaps we could fine-tune each query with more granularity than is currently available? OLTP queries tend to be mostly static and pre-defined. The "what-ifs" are more OLAP, so let's try to add more control and intelligence there. Being able to precisely control the optimizer, not just "hint/influence" it, is what's needed. We could then have more dynamic SELECT statements for specific situations, and maybe even tell IDS to read blocks of index nodes at a time instead of one by one, etc.
I'm not really sure what you are after, but here is some info on the SQL Server query optimizer which I've recently read:
13 Things You Should Know About Statistics and the Query Optimizer
SQL Server Query Execution Plan Analysis
and one for Informix that I just found using Google:
Part 1: Tuning Informix SQL
For Oracle, your best resource would be Cost-Based Oracle Fundamentals. It's about 500 pages (and billed as Volume 1, but there haven't been any follow-ups yet).
For a (very) simple full-table scan, progress can sometimes be monitored through v$session_longops. Oracle knows how many blocks it has to scan, how many blocks it has scanned, how many it has to go, and reports on progress.
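For instance (Oracle; illustrative only), progress of long-running scans can be read like this:

-- Progress of long-running operations such as full table scans.
SELECT opname,
       target,
       sofar,
       totalwork,
       ROUND(100 * sofar / totalwork, 1) AS pct_done,
       time_remaining
FROM   v$session_longops
WHERE  totalwork > 0
AND    sofar < totalwork;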
Indexes are a different matter. If I search for records for a client 'Frank', and use the index, the database will make a guess at how many 'Frank' entries are in the table, but that guess can be massively off. It may be that you have 1000 'Frankenstein' and just 1 'Frank' or vice versa.
It gets even more complicated as you add in other filter and access predicates (e.g. where multiple indexes can be chosen), and makes another leap as you include table joins. And that's without getting into the complex stuff about remote databases, domain indexes like Oracle Text and Locator.
In short, it is very complicated. It is stuff that can be useful to know if you are responsible for tuning a large application. Even for basic development you need to have some grounding in how the database can physically retrieve the data you are interested in.
But I'd say you are going the wrong way here. The point of an RDBMS is to abstract the details so that, for the most part, they just happen. Oracle employs smart people to write query transformation logic into the optimizer so that we developers can move away from 'syntax fiddling' to get the best plans (not totally, but it is getting better).

indexing large table in SQL SERVER

I have a large table (more than 10 million records). This table is heavily used for searches in the application, so I had to create indexes on it. However, I experience slow performance when a record is inserted or updated in the table. This is most likely because of the recalculation of the indexes.
Is there a way to improve this?
Thanks in advance
You could try reducing the number of page splits (in your indexes) by reducing the fill factor on your indexes. By default, the fill factor is 0 (same as 100 percent). So, when you rebuild your indexes, the pages are completely filled. This works great for tables that are not modified (insert/update/delete). However, when data is modified, the indexes need to change. With a Fill Factor of 0, you are guaranteed to get page splits.
By modifying the fill factor, you should get better performance on inserts and updates because the page won't ALWAYS have to split. I recommend you rebuild your indexes with a Fill Factor = 90. This will leave 10% of the page empty, which would cause less page splits and therefore less I/O. It's possible that 90 is not the optimal value to use, so there may be some 'trial and error' involved here.
By using a different value for fill factor, your select queries may become slightly slower, but with a 90% fill factor, you probably won't notice it too much.
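Something along these lines (SQL Server; the index and table names are placeholders):

-- Rebuild an existing index leaving 10% free space per page.
ALTER INDEX IX_SearchTable_SomeColumn
ON dbo.SearchTable
REBUILD WITH (FILLFACTOR = 90);

-- Or set the fill factor when creating a new index.
CREATE NONCLUSTERED INDEX IX_SearchTable_OtherColumn
ON dbo.SearchTable (OtherColumn)
WITH (FILLFACTOR = 90);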
There are a number of solutions you could choose
a) You could partition the table
b) Consider performing updates in batch at offpeak hours (like at night)
c) Since engineering is a balancing act of trade-offs, you have to choose which operation is more important (SELECT or INSERT/UPDATE/DELETE). Assuming you don't need the results in real time for an insert, you can use SQL Server Service Broker to perform the "less important" operation asynchronously:
http://msdn.microsoft.com/en-us/library/ms166043(SQL.90).aspx
Thanks
-RVZ
We'd need to see your indexes, but likely yes.
Some things to keep in mind are that you don't want to just put an index on every column, and you don't generally want just one column per index.
The best thing to do is if you can log some actual searches, track what users are actually searching for, and create indexes targeted at those specific types of searches.
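As a hedged example (SQL Server; the column names are invented), if the logs show most searches filter on a customer and a date range and return only a few columns, a narrow covering index targets exactly that:

CREATE NONCLUSTERED INDEX IX_SearchTable_Customer_Date
ON dbo.SearchTable (CustomerID, SearchDate)
INCLUDE (Status, Amount);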
This is a classic engineering trade-off... you can make the shovel lighter or stronger, never both (until a breakthrough in materials science).
More indexes mean more DML maintenance, but faster queries.
Fewer indexes mean less DML maintenance, but slower queries.
It could be possible that some of your indexes are redundant and could be combined.
Besides what Joel wrote, you also need to define the SLA for DML. Is it OK that it's slower? You noticed that it slowed down, but does it really matter versus the query improvement you've achieved? In other words, is it OK to have a shovel that light if it's that weak?
If you have a clustered index that is not on an identity numeric field, rearranging the pages can slow you down. If this is the case for you, see if you can improve speed by making it a non-clustered index (faster than no index, but it tends to be a bit slower than a clustered index, so your selects may slow down while inserts improve).
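A sketch of that change (SQL Server; the constraint and column names are made up):

-- Recreate the primary key as nonclustered so inserts no longer
-- rearrange the clustered leaf pages.
ALTER TABLE dbo.SearchTable DROP CONSTRAINT PK_SearchTable;

ALTER TABLE dbo.SearchTable
ADD CONSTRAINT PK_SearchTable PRIMARY KEY NONCLUSTERED (NaturalKeyColumn);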
I have found users more willing to tolerate a slightly slower insert or update than a slower select. That is a general rule, but if DML becomes unacceptably slow (or, worse, times out), no one can deal with that well.
