Does anybody know what hypothetical indexes are used for in SQL Server 2000? I have a table with 15+ such indexes, but have no idea what they were created for. Can they slow down deletes/inserts?
Hypothetical indexes are usually created when you run the Index Tuning Wizard; they are suggestions, and under normal circumstances they will be removed if the wizard completes successfully.
If some are left around they can cause issues; see this link for ways to remove them.
Not sure about 2000, but in 2005 hypothetical indexes, and database objects in general, are objects created by the DTA (Database Tuning Advisor).
You can check if an index is hypothetical by running this query:
SELECT *
FROM sys.indexes
WHERE is_hypothetical = 1
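On SQL Server 2000 there is no sys.indexes; a roughly equivalent check (a sketch using the documented INDEXPROPERTY function against the old system tables) would be:
-- SQL Server 2000: sysindexes has no is_hypothetical column,
-- but INDEXPROPERTY exposes the same flag
SELECT o.name AS table_name, i.name AS index_name
FROM sysobjects o
JOIN sysindexes i ON i.id = o.id
WHERE INDEXPROPERTY(o.id, i.name, 'IsHypothetical') = 1
  AND i.indid BETWEEN 1 AND 254;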
If you have given the tuning advisor good information on which to base its indexing strategy, then I would say to generally trust its results, but you should of course examine how it has allocated these before you trust it blindly. Every situation will be different.
A Google search for "sql server hypothetical indexes" returned the following article as the first result. Quote:
Hypothetical indexes and database objects in general are simply objects created by DTA (Database Tuning Advisor)
Hypothetical indexes are those generated by the Database Tuning Advisor. Generally speaking, having too many indexes is not a great idea and you should examine your query plans to prune those which are not being used.
From sys.indexes:
is_hypothetical bit
1 = Index is hypothetical and cannot be used directly as a data access path.
Hypothetical indexes hold column-level statistics.
0 = Index is not hypothetical.
They can also be created manually with the undocumented WITH STATISTICS_ONLY option:
CREATE TABLE tab(id INT PRIMARY KEY, i INT);
CREATE INDEX MyHypIndex ON tab(i) WITH STATISTICS_ONLY = 0;
/* 0 = without statistics, -1 = generate statistics */
SELECT name, is_hypothetical
FROM sys.indexes
WHERE object_id = OBJECT_ID('tab');
db<>fiddle demo
Related
Why does a complex SQL query run worse with a USE statement and implicit DB references than with a USE master statement and full references to the user DB?
I'm using SQL Server Std 64-bit Version 13.0.4466.4 running on Windows Server 2012 R2. This is an "academic" question raised by one of my users, not an impediment to production.
By "complex" I mean several WITH clauses and a CROSS APPLY, simplified query structure below. By "worse" I mean 3 min. vs. 1 sec for 239 Rows, repeatably. The "plain" Exec Plan for fast query will not show, however, the Exec Plan w/ Live Query Stats runs for both, analysis further below. Tanx in advance for any light shed on this!
USE Master versus USE <userdb>;
DECLARE @chartID INTEGER = 65;
WITH
with1 AS
( SELECT stuff FROM <userdb>.schema1.userauxtable ),
with2 AS
( SELECT lotsastuff FROM <userdb>.dbo.<views w/ JOINS> ),
with3 AS
( SELECT allstuff FROM with2 WHERE TheDate IN (SELECT MAX(TheDate) FROM with2 GROUP BY <field>, CAST(TheDate AS DATE)) ),
with4 AS
( SELECT morestuff FROM with1 WHERE with1.ChartID = @chartID )
SELECT finalstuff FROM with3
CROSS APPLY ( SELECT littlestuff FROM with4 WHERE
with3.TheDate BETWEEN with4.PreDate AND with4.AfterDate
AND with4.MainID = with3.MainID ) AS AvgCross
The Exec Plan w/ Live Query Stats for slow query has ~41% Cost ea. (83% total) in two ops:
a) Deep under the 5th Step (of 15) Hash match (Inner Join) Hash Keys Build ... 41% Cost to Index Scan (non-clustered) of ...
b) Very deep under the 4th Step (of 15) Nested Loops (Left Semi Join) -- 42% Cost to a near-identical Index Scan per (a), except for the addition of (... AND datediff(day,Date1,getdate()) ) to the Predicate.
While the Exec Plan w/ Live Query Stats for the fast query shows an 83% Cost in a Columnstore Idx Scan (non-clustered), quite deep under the 9th Step (of 12) Hash Match (Inner Join) Hash Keys Build.
It would seem that the difference is in the Columnstore Idx, but why does the Use master stmt send the Execution down that road?
There may be several possible reasons for this kind of behaviour; however, in order to identify them all, you will need people like Paul Randal or Kalen Delaney to answer this.
With my limited knowledge and understanding of MS SQL Server, I can think of at least 2 possible causes.
1. (Most plausible one) The queries are actually different
If, as you are saying, the query text is sufficiently lengthy and complex, it is completely possible to miss a single object (table, view, user-defined function, etc.) when adding database qualifiers and leave it with no DB prefix.
Now, if an object by that name somehow ended up in both the master and your UserDB databases, then different objects will be picked up depending on the current database context; the data might be different, as might the indexes and their fragmentation, even the data types... well, you get the idea.
This way, queries become different depending on the database context, and there is no point comparing their performance.
2. Compatibility level of user database
Back in the heyday of the 2005 version, I had a database with its compatibility level set to 80, so that ANSI SQL-89 outer joins generated by some antiquated ORM in legacy client apps would keep working. Most of the tasty new stuff worked too, with one notable exception, however: the PIVOT keyword.
A query with PIVOT, when executed in the context of that database, threw an error saying the keyword is not recognised. However, when I switched the context to master and prefixed everything with user database's name, it ran perfectly fine.
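A minimal repro of that behaviour (a sketch; LegacyDb and dbo.Orders are hypothetical names, and on 2005 the level is set via sp_dbcmptlevel):
EXEC sp_dbcmptlevel 'LegacyDb', 80;  -- pin at SQL Server 2000 level
GO
USE LegacyDb;
GO
-- Fails under level 80: "Incorrect syntax near 'PIVOT'. You may need to
-- set the compatibility level of the current database to a higher value."
SELECT [1], [2]
FROM (SELECT Status, ID FROM dbo.Orders) AS src
PIVOT (COUNT(ID) FOR Status IN ([1], [2])) AS p;
GO
USE master;
GO
-- The same query, fully qualified, parses and runs fine
SELECT [1], [2]
FROM (SELECT Status, ID FROM LegacyDb.dbo.Orders) AS src
PIVOT (COUNT(ID) FOR Status IN ([1], [2])) AS p;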
Of course, this is not exactly your case, but it's a good demonstration of what I'm talking about. There are lots of internal SQL Server components, invisible to the naked eye, that affect the execution plan, performance and sometimes even results (or your ability to retrieve them, as in the example above), and that depend on settings such as the database's compatibility level, trace flags and other similar things.
As a possible cause, I can think of the new cardinality estimator which was introduced in SQL Server 2014. The version of the SQL Server instance you mentioned corresponds to 2016 SP1 CU7, however it is still possible that:
your user database may be at the 2012 compatibility level (for example, if it was restored from a 2012 backup and nobody bothered to check its settings after that), or
trace flag 9481 is set either for the session or for the entire SQL Server instance, or
database scoped configuration option LEGACY_CARDINALITY_ESTIMATION is set for the database, etc. (quick checks for all three are sketched just below).
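A sketch of those checks (substitute your database name; run the second query in the context of the user database):
-- Compatibility level of the user database
SELECT name, compatibility_level
FROM sys.databases
WHERE name = 'UserDb';

-- Database-scoped configurations (SQL Server 2016+)
SELECT name, value
FROM sys.database_scoped_configurations
WHERE name = 'LEGACY_CARDINALITY_ESTIMATION';

-- Status of trace flag 9481 (forces the pre-2014 cardinality estimator)
DBCC TRACESTATUS(9481, -1);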
(Thankfully, SQL Server doesn't allow you to change the compatibility level of the master database, so it's always at the latest supported level. Which is probably good, as no one can screw up the database engine itself - not this way, at least.)
I'm pretty sure that I have only scratched the surface of the subject, so while checking the aforementioned places definitely wouldn't hurt, what you need to do is identify the actual cause of the difference (if it's not #1 above, that is). This can be done by looking at the actual execution plans of the queries (forget the estimated ones, they are worthless) with a tool other than vanilla SSMS. As an example, SentryOne Plan Explorer might be a good thing to begin with. Even without that, saving plans in .sqlplan files and opening them with any XML-capable viewer/editor will show you much more, including possible leads that might explain the difference you observe.
OK, I'm confused about SQL Server indexed views (using 2008).
I've got an indexed view called
AssignmentDetail
when I look at the execution plan for
select * from AssignmentDetail
it shows the execution plan of all the underlying indexes of all the other tables that the indexed view is supposed to abstract away.
I would think that the execution plan would simply be a clustered index scan of PK_AssignmentDetail (the name of the clustered index for my view), but it isn't.
There seems to be no performance gain with this indexed view. What am I supposed to do? Should I also create a non-clustered index with all of the columns so that it doesn't have to hit all the other indexes?
Any insight would be greatly appreciated
The Enterprise edition of SQL Server is smart enough to look for and make use of indexed views when they exist. However, if you are not running Enterprise edition, you'll need to explicitly tell it to use the indexed view, like this:
select *
from AssignmentDetail WITH (NOEXPAND)
The point of an indexed view is not to speed up
SELECT * FROM MyView
Where it will help you increase performance is an index on a column of the view itself, such as:
SELECT * FROM MyView WHERE ViewColumnA = 'A' and ViewColumnB = 'B'
You can therefore have an index on ViewColumnA and ViewColumnB which could actually exist on different tables.
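A sketch of that setup (all object names are hypothetical, and it assumes each Table1 row matches at most one Table2 row so ID stays unique in the view):
-- The view must be schema-bound and get a unique clustered index first
CREATE VIEW dbo.MyView WITH SCHEMABINDING AS
SELECT t1.ID, t1.ViewColumnA, t2.ViewColumnB
FROM dbo.Table1 t1
JOIN dbo.Table2 t2 ON t2.Table1ID = t1.ID;
GO
CREATE UNIQUE CLUSTERED INDEX PK_MyView ON dbo.MyView (ID);
-- Non-clustered index spanning columns that live in different base tables
CREATE NONCLUSTERED INDEX IX_MyView_A_B ON dbo.MyView (ViewColumnA, ViewColumnB);
With that in place, the WHERE ViewColumnA = 'A' AND ViewColumnB = 'B' filter can seek IX_MyView_A_B (with NOEXPAND on non-Enterprise editions).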
How does INSERT, UPDATE & DELETE work on a SQL Server table partition?
A technical explanation, please, of how the SQL Server engine handles a partitioned table vs a non-partitioned table.
The SQL optimiser will use the query predicates to decide how many table partitions will be affected. This makes the query run faster, as unnecessary data is not read from disk. The query will then be run against the relevant data blocks in the affected partitions. To the user this is completely transparent.
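A minimal sketch of that elimination (hypothetical objects; the predicate on the partitioning column is what lets the engine skip partitions):
-- Partition by year on the date column
CREATE PARTITION FUNCTION pfByYear (datetime)
AS RANGE RIGHT FOR VALUES ('2007-01-01', '2008-01-01');
CREATE PARTITION SCHEME psByYear
AS PARTITION pfByYear ALL TO ([PRIMARY]);

CREATE TABLE dbo.Sales (
    SaleID   int IDENTITY NOT NULL,
    SaleDate datetime NOT NULL,
    Amount   money
) ON psByYear (SaleDate);

-- The range predicate on SaleDate means only the 2008 partition is
-- read and modified; the other partitions are never touched
DELETE FROM dbo.Sales
WHERE SaleDate >= '2008-01-01' AND SaleDate < '2009-01-01';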
I found this article by Kimberly Tripp to be incredibly useful in figuring out the ins and outs of table partitioning. It's about 40 pages long, technically detailed, and a printout sits on my desk as a permanent reference.
I'm considering dropping an index from a table in a SQL Server 2005 instance. Is there a way that I can see which stored procedures might have statements that are dependent on that index?
First check whether the indexes are being used at all; you can use the sys.dm_db_index_usage_stats DMV for that. Check the user_scans and user_seeks columns.
Read this: Use the sys.dm_db_index_usage_stats DMV to check if indexes are being used
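Something along these lines (a sketch; run it in the target database):
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats s
JOIN sys.indexes i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID();
-- An index with zero seeks/scans/lookups but many updates is pure
-- write overhead and is a good candidate for dropping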
Nope. For one thing, index selection is dynamic - the indexes aren't selected until the query executes.
Barring "HINT", but let's not go there.
As le dorfier says, this depends on the execution plan SQL Server determines at runtime. I'd suggest setting up Perfmon to track table scans, or keeping SQL Profiler running after you drop the index, filtering for the column names you're indexing. Look for long-running queries.
I have a table with a lot of records (more than 2 million). It's a transaction table, but I need a report with a lot of joins. What's the best practice for indexing that table, given that it's consuming too much time?
I'm paging the table using the stored-procedure paging method, but I need an index because when I want to export the report I have to run the entire query without pagination, and to get the total record count I need to select everything.
Any help?
The SQL Server 2008 Management Studio query tool, if you turn on "Include Actual Execution Plan", will tell you what indexes a given query needs to run fast. (Assuming there's an obvious missing index that is making the query run unusually slow, that is.)
SQL Server 2008 Management Studio Query Screenshot http://img208.imageshack.us/img208/4108/image4sy8.png
We use this all the time on Stack Overflow... one of the best features of SQL 2008. It works against older SQL instances as well; just install the SQL 2008 tools and point them at a SQL 2005 instance. Not sure if it works on anything earlier, though.
As others have noted, you can also do this manually, but it takes a bit of trial and error. You'll want indexes on fields that are used in ORDER BY and WHERE clauses.
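You can also ask the server directly which indexes it thinks it is missing (a sketch using the missing-index DMVs, available from SQL Server 2005 on; treat the output as suggestions, not gospel):
SELECT d.statement AS table_name,
       d.equality_columns, d.inequality_columns, d.included_columns,
       s.user_seeks, s.avg_user_impact
FROM sys.dm_db_missing_index_details d
JOIN sys.dm_db_missing_index_groups g
  ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats s
  ON s.group_handle = g.index_group_handle
ORDER BY s.avg_user_impact * s.user_seeks DESC;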
key fields have to be everything in the where clause ???
No, that would be overkill. Indexing a field really only works if a) your WHERE clause is selective enough (that is, it only selects out about 1-2% of the values; an index on a "Gender" field, which can only be one of two or three possible values, is pointless), and b) your WHERE clause doesn't involve function calls or other magic.
In your case, TBL.Status might be a candidate - how many possible values are there? You select the '1' and '2' values - if there are hundreds of possible values, then it's a good choice.
On a side note:
this clause here: (TBL.Login IS NULL AND TBL.Login <> 'dev') is pretty pointless - if the value of TBL.Login IS NULL, then it's DEFINITELY not 'dev'... so just the "IS NULL" will be more than sufficient...
The other field you might want to consider putting an index on is the TBL.Date, since you seem to select a range of dates here - that might be a good choice.
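For example (a sketch; the index name is made up, and the column choice follows the query above - run it in the database where the table lives):
-- Date range filter first, then the status filter
CREATE NONCLUSTERED INDEX IX_TBL_Date_Status
ON dbo.TABLENAME ([Date], [Status]);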
Also, on a general note: whenever possible, DO NOT use a SELECT * FROM ... to select your fields. This causes a lot of overhead for SQL Server. SPECIFY your columns - and ONLY select those that you REALLY NEED - not just all of them for the heck of it.
Check your queries, and find which fields are used to match them. Those are usually the best candidates!
SQL Server has a 'Database Engine Tuning Advisor' that can help you. It does not exist for SQL Server Express, but it does for all other versions of SQL Server.
Load your query in a query window.
On the menu, click Query -> Analyze Query in Database Engine Tuning Advisor.
The tuning advisor will identify indexes that could be added to your table(s) to improve performance. In my experience, the tuning advisor doesn't always help, but most of the time it does. It's where I suggest you start.
OK, this is the query I'm doing:
SELECT
TBL.*
FROM
FOREINGDATABASE..TABLENAME TBL
LEFT JOIN Status S
ON TBL.Status = S.Number
WHERE
(TBL.ID = CASE @Reference WHEN 0 THEN TBL.ID ELSE @Reference END) AND
TBL.Date >= @FechaInicial AND
TBL.Date <= @FechaFinal AND
(TBL.Channel = CASE @Canal WHEN '' THEN TBL.Channel ELSE @Canal END) AND
(TBL.DocType = CASE @TipoDocumento WHEN '' THEN TBL.DocType ELSE @TipoDocumento END) AND
(TBL.Document = CASE @NumDocumento WHEN '' THEN TBL.Document ELSE @NumDocumento END) AND
(TBL.Login = CASE @Login WHEN '' THEN TBL.Login ELSE @Login END) AND
(TBL.Login IS NULL AND TBL.Login <> 'dev' ) AND
TBL.Status IN ('1','2')
key fields have to be everything in the where clause ???
If I am not mistaken (please correct me if I am), I think you should create a non-clustered index on the fields used in the conditions of the WHERE clause. (Maybe this can be useful as a starting point to get some candidates for the indexes.)
Good Luck
If an index scan is performed instead of a seek, the cause might be that the fields are not in the correct order in the index.
Put indexes on all columns that you're joining and filtering on.
The use of indexes is also determined by the selectivity of the indexed column.
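A quick way to gauge that selectivity (a sketch; substitute your own table and column):
-- Closer to 1.0 = highly selective (good index candidate);
-- near 0 (few distinct values) = the index will rarely be used
SELECT COUNT(DISTINCT [Status]) * 1.0 / COUNT(*) AS selectivity
FROM dbo.TABLENAME;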
The best way would be to show us your query so we can try to improve it.