MS SQL index rebuild fixes timeouts

We have an MS SQL query which is rather complex in terms of joins. Its purpose is to search for a specific type of entity. We recently spent some time optimizing it and setting up the right indexes.
However, at some points in time (we have not noticed any pattern, so it seems arbitrary) the web application starts timing out when it uses this query. We can then go into the DB, rebuild the indexes on 2 of the tables involved in the SQL, and it goes back to normal... That happens occasionally.
Now pardon my ignorance: shouldn't MS SQL rebuild the indexes itself at the optimal times?
Otherwise, would we need to schedule index maintenance to run once we hit some level of fragmentation?
Please feel free to ignore my questions and guide me in the right direction.
Thanks in advance.

Production systems should have regular statistics maintenance and possibly some less frequent index maintenance.
Since SQL Server does not (currently) do this for you out of the box, implement Ola Hallengren's Index and Statistics Maintenance scripts. DBAs use these.
I used to rebuild indexes weekly, but now prefer to update statistics nightly, and perform less frequent index rebuilds.
I implement a weekly rebuild of all fragmented indexes (> 50%) and a nightly job (where required) to maintain heavily used (and heavily inserted into) tables.
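For context, a minimal hand-rolled sketch of what such maintenance does under the hood (table and index names are placeholders; the 5%/30% thresholds are the commonly cited guideline, not something from the answer above):

    -- Check fragmentation for all indexes in the current database
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name                     AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 5
    ORDER BY ips.avg_fragmentation_in_percent DESC;

    -- Reorganize lightly fragmented indexes, rebuild heavily fragmented ones
    ALTER INDEX IX_YourIndex ON dbo.YourTable REORGANIZE;   -- roughly 5-30% fragmentation
    ALTER INDEX IX_YourIndex ON dbo.YourTable REBUILD;      -- above ~30% fragmentation
    UPDATE STATISTICS dbo.YourTable;                        -- REORGANIZE does not update statistics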

Related

Automatic database indexing

I have a database which is used by a multi-tenant application. In this database workloads are dynamic and change continuously, so I would have to allocate a DBA to continuously manage the database. Instead I thought of using an automated service for this task, such as Azure SQL Database Advisor - Automatic index management (the platform is not important - I am OK with using MS SQL Server, Oracle, or another RDBMS).
I want to know how these automated indexes actually work. Can I replace a database administrator with these automatic indexers? I read that whenever a query execution plan is generated it will find all the indexes that would be useful for that query. It then uses the indexes which really exist and caches some data about the indexes which don't exist. If the data for a missing index keeps being cached, the SQL advisor will show it as a recommended index. But can we rely on this? What about update and insert queries? If I have a table where records are frequently updated, will these automated indexing systems take that into account?
Note that Index Advisor is only available in SQL Database (Azure).
In the background Index Advisor is a machine learning algorithm, a relatively simple and quite effective one. It will analyze your workload and see whether you would benefit from indexes. If it thinks you would, it will show that as a recommendation - and if you turn automatic index creation/dropping on, it will actually create the index. To understand better how it works, take a look at Channel 9. Note that before you apply a recommendation you can see an estimated impact.
Now, the algorithm can make mistakes, right? So once a recommendation is applied, it can automatically be reverted based on its performance.
Also note that next to Index Advisor you can check Query Performance Insights, which will show the performance of your queries. This can help your DBA diagnose other, non-index-related problems.
But note that Index Advisor will not drop and create new indexes for you every hour; it takes a day or two. So if your database's workload is changing very fast, I am not sure any automatic management tool or DBA will react quickly enough for your workload.
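The "cached data about indexes which don't exist" mentioned in the question is exposed through SQL Server's missing-index DMVs, which this kind of advisor builds on. A hedged sketch of inspecting them yourself:

    -- Indexes the optimizer wished it had, ranked by a rough benefit estimate
    SELECT d.statement AS table_name,
           d.equality_columns,
           d.inequality_columns,
           d.included_columns,
           s.user_seeks,
           s.avg_user_impact
    FROM sys.dm_db_missing_index_details AS d
    JOIN sys.dm_db_missing_index_groups AS g
      ON g.index_handle = d.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS s
      ON s.group_handle = g.index_group_handle
    ORDER BY s.user_seeks * s.avg_user_impact DESC;

Note that these raw recommendations ignore the cost of maintaining the index on inserts and updates, which is exactly why a human (or the advisor's revert-on-regression behaviour) still has to judge them.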

Database tuning advice

Possibly some of you don't even know about these features, so you will learn a lot from this post, which will in fact help me optimize better; and some of you probably use them on a daily basis, so you can help me and other less DBA-savvy users.
I'm using SQL Server 2005 Standard.
I run SQL Server Profiler a lot. Each time, I find ad hoc queries or SPs whose execution time exceeds my thresholds - 100ms for complex queries and 30ms for short ones (the numbers don't mean much, just to give a sense of scale). After I find possibly problematic queries, I write them down so I can use the Database Engine Tuning Advisor, which replays the captured queries against the tables and, as a result, gives me the indexes I need to build in order to improve performance. Each night I execute the index rebuild function from Maintenance Plans.
Now question time!!!
1. If the Database Engine Tuning Advisor gives me 10 indexes to create while the improvement percentage is about 40%, should I follow its advice or not? A better question: what ratio of number of indexes to improvement percentage should I aim for? Indexes take space and time to rebuild.
2. If I create about 5-7 indexes for each problematic query, I can end up with 500 indexes per DB. How many indexes can I build so the DB will still perform normally? Are there any limitations?
3. Is there any other way to optimize (not re-design) your DB other than using my method or going SP by SP by hand and eye?
There's no right answer to this question as it depends heavily on your workload.
For workloads with a heavy ratio of reads (e.g. a data warehouse) it might make sense to create an index that would be positively counterproductive in an environment with a greater proportion of writes.
The DTA can help in this regard by assessing the impact on the overall workload, but you would need to try and capture a representative sample (not just the poorly performing queries). SQL Profiler is quite resource intensive, so to do this with the least possible impact on your server you would need to use a server-side SQL trace with appropriate filters to only log events related to the database of interest.
To identify the poorest performing queries in isolation: if you have at least the SQL 2005 SP1 client tools installed, you should be able to right-click the database node in Management Studio and use the Reports -> Standard Reports menu to see the plans in the cache with the highest CPU/IO.
If you are interested in this area I recommend the book SQL Server 2008 Query Performance Tuning Distilled (most of it applicable to SQL2005 as well)
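The same information behind those Standard Reports can be pulled straight from the plan cache. A hedged sketch for SQL 2005 and later:

    -- Top 10 cached statements by total CPU; order by total_logical_reads instead to rank by IO
    SELECT TOP (10)
           qs.total_worker_time / 1000 AS total_cpu_ms,
           qs.total_logical_reads,
           qs.execution_count,
           SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                     ((CASE qs.statement_end_offset
                            WHEN -1 THEN DATALENGTH(st.text)
                            ELSE qs.statement_end_offset END
                       - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;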
You can get SQL Profiler to log to a table, so it will write the queries to a table you specify. If you can, leave it running for a few hours - or however long it takes to cover as many queries/events as possible.
Next, use the Database Engine Tuning Advisor and get it to use this table of queries as its source input. You will find it looks at the whole pattern, and it will recommend you create some indices and remove others.
This is better than looking at queries one by one in isolation, although that still has its place.

Using a duplicate SQL Server database for queries

I have a very large (100+ GB) SQL Server 2005 database that receives a large number of inserts and updates, with less frequent selects. The selects require a lot of indexes to keep them performing well, but it appears the number of indexes is affecting the efficiency of the inserts and updates.
Question: Is there a method for keeping two copies of a database where one is used for the inserts and updates while the second is used for the selects? The second copy wouldn't need to be updated in real time, but shouldn't be more than an hour old. Is it possible to do this kind of replication while keeping different indexes on each database copy? Perhaps you have other solutions?
You're looking to set up a master/child database topology using replication. With SQL Server you'll need to set up replication between two databases (preferably on separate hardware). You should use the master DB for inserts and updates; the child will service all your select queries. You'll also want to optimize each database's configuration settings for the type of work it will be performing. If you have heavy select queries on the child database, you may also want to set up views that will make the queries perform better than complex joins on tables.
Some reference material on replication:
http://technet.microsoft.com/en-us/library/ms151198.aspx
Just google it and you'll find plenty of information on how to set it up and configure it:
http://search.aim.com/search/search?&query=sql+server+2005+replication&invocationType=tb50fftrab
Transactional replication can do this, as the subscriber can have a number of additional indexes compared with the publisher. But you have to bear in mind a simple fact: all inserts/updates/deletes are going to be replicated to the reporting copy (the subscriber), and the additional indexes will... slow down replication. It is actually possible to slow replication down to a rate at which it is unable to keep up, causing the distribution DB to swell. But this only happens when you have a constant high rate of updates. If the problems only occur during spikes, then the distribution DB will act as a queue that absorbs the spikes and levels them off during off-peak hours.
I would not take on this endeavour without absolute, 100% proof that it is the additional indexes that are slowing down the inserts/updates/deletes, and without testing that the inserts/updates/deletes actually perform significantly better without the extra indexes. Specifically, ensure that the culprit is not the other usual suspect: lock contention.
Generally, set-based operations (including updating indexes) are faster than non-set-based ones:
1,000 single-row inserts will most probably be slower than one insert of 1,000 records.
You can batch the updates to the second database. This will, first, make the index updating faster and, second, smooth out the peaks.
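To illustrate the point with hypothetical staging and target tables (placeholder names, just a sketch of the batching idea):

    -- Row-by-row (slow): 1,000 separate index updates and log records
    -- INSERT INTO dbo.Target (Id, Value) VALUES (@Id, @Value);   -- executed 1,000 times in a loop

    -- Set-based (fast): one statement, indexes maintained once for the whole batch
    INSERT INTO dbo.Target (Id, Value)
    SELECT Id, Value
    FROM dbo.Staging;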
You could schedule a bcp script as a task to copy the data to the other DB.
You could also try transaction log shipping to update the read only db.
Don't forget to adjust the fill factor when you create your two databases. It should be low(er) on the database with frequent updates, and 100 on your "data warehouse"/read only database.
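For example (placeholder index and table names; the exact percentage depends on your insert/update pattern):

    -- Write-heavy OLTP copy: leave free space on each page to absorb inserts and updates
    ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD WITH (FILLFACTOR = 80);

    -- Read-only reporting copy: pack pages full so scans read fewer pages
    ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD WITH (FILLFACTOR = 100);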

What can cause bad SQL server performance?

Every time I find that data retrieval from my database is slow, I try to figure out which part of my SQL query has the problem, optimize it, and also add some indexes to the table. But this does not always solve the problem.
My question is:
Are there any other tricks to make SQL Server perform better?
What are the other reasons that can make SQL Server performance worse?
Inefficient query design
Auto-growing files
Too many indexes to be maintained on a table
Too few indexes on a table
Not properly choosing your clustered index
Index fragmentation due to poor maintenance
Heap fragmentation due to no clustered index
Too high FILLFACTORs used on indexes, causing excessive page splitting
Too low of a FILLFACTOR used on indexes, causing excessive space usage and increased scanning time
Not using covered indexes where appropriate (see the example after this list)
Non-selective indexes being used
Improper maintenance of statistics (out of date statistics)
Databases not normalized properly
Transaction logs and data sharing the same drive spindles
The wrong memory configuration
Too little memory
Too little CPU
Slow hard drives
Failing hard drives or other hardware
A 3D screensaver on your database server chewing up your CPU
Sharing the database server with other processes which compete for CPU and memory
Lock contention between queries
Queries which scan entire large tables
Front end code which searches data in an inefficient manner (nested loops, row by row)
CURSORS which are not necessary and/or are not FAST_FORWARD
Not setting NOCOUNT when you have large tables being cursored through.
Using a transaction isolation level which is too high (such as using SERIALIZABLE when it's not necessary)
Too many round trips between the client and the SQL Server (a chatty interface)
An unnecessary linked server query
A linked server query which targets a table on a remote server with no primary or candidate key defined
Selecting too much data
Excessive query recompilations
oh and there might be some others, too.
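To illustrate the covered-index point from the list above (hypothetical table and column names, just a sketch):

    -- The query below filters on CustomerId and returns only OrderDate and Total
    -- SELECT OrderDate, Total FROM dbo.Orders WHERE CustomerId = @CustomerId;

    -- A covering index satisfies it without key lookups back into the clustered index
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_Covering
        ON dbo.Orders (CustomerId)
        INCLUDE (OrderDate, Total);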
When I talk to new developers who have this problem, I usually find that it is because of one of two issues. Both of them are fixed if you follow these 2 rules.
First, don't retrieve any data that you don't need. For example, if you are doing paging, then don't bring back 100 rows and then calculate which ones belong on the page. Have the stored proc figure it out and only retrieve the 10 you need.
Second, nothing is faster than work you don't do. For example, I worked on a system where the full roles and rights for a user were retrieved with every page requested - this was hundreds of rows for some users. Even just saving this to session state on the first request and then using it from there for subsequent requests took a meaningful load off of the database.
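On SQL Server 2005 the paging rule is typically implemented in the stored proc with ROW_NUMBER(), so only one page of rows ever leaves the server. A hedged sketch with hypothetical names:

    -- Return only page @PageNumber of @PageSize rows, instead of the whole result set
    DECLARE @PageNumber int, @PageSize int;
    SELECT @PageNumber = 3, @PageSize = 10;

    SELECT Id, OrderDate, Total
    FROM (
        SELECT Id, OrderDate, Total,
               ROW_NUMBER() OVER (ORDER BY OrderDate DESC) AS rn
        FROM dbo.Orders
    ) AS numbered
    WHERE rn BETWEEN (@PageNumber - 1) * @PageSize + 1 AND @PageNumber * @PageSize;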
I suggest you get a good book on performance tuning for the database you use (this is very much database-specific). This is an extremely complex subject and cannot really be answered other than in generalities on the web.
For instance, Dave Markle tells you inefficient queries can cause the problem, and there are many, many ways to write inefficient queries and many more ways to fix them.
If you're new to the database and you have access to the database engine tuning advisor, you can heuristically tune your database.
You basically capture the SQL queries being run against your DB in the SQL Profiler, then feed those to DETA. DETA effectively runs the queries (without altering your data) and then works out what information your database is missing (views, indexes, partitions, statistics etc.) to do the queries better.
It can then apply them for you and monitor them in the future. I'm not saying to assume that DETA is always right or to do things without understanding, but I've found that it's definitely a good way to see what your queries are doing, how long they take, and how you can index the DB appropriately.
PS: With all that said, it's much better to invest in a good DBA at the start of a project so that you have good structures and indexing to start with. But that's not the position you're in right now...
This is a very wide question, and there is a ton of answers already. Still, I would like to add one important factor - page splits. The problem is that there are good splits and bad splits. The following are good articles explaining how to use the transaction_log extended event to identify bad/nasty page splits:
Tracking Problematic Pages Splits in SQL Server 2012 Extended Events - Jonathan Kehayias
Tracking page splits using the transaction log - Paul Randal
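A rough, untested sketch of the kind of Extended Events session those articles build (SQL Server 2012+ only; the operation predicate is meant to match the LOP_DELETE_SPLIT log record both authors use to single out bad splits - verify the value against their posts before relying on it):

    -- Count suspected bad page splits (LOP_DELETE_SPLIT log records) per allocation unit
    CREATE EVENT SESSION PageSplits ON SERVER
    ADD EVENT sqlserver.transaction_log (
        WHERE operation = 11   -- assumed map value for LOP_DELETE_SPLIT
    )
    ADD TARGET package0.histogram (
        SET filtering_event_name = 'sqlserver.transaction_log',
            source = 'alloc_unit_id',
            source_type = 0    -- 0 = event column
    );
    ALTER EVENT SESSION PageSplits ON SERVER STATE = START;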
You mentioned:
I try to optimize it and also add some indexes
But sometimes removing unused non-clustered indexes may help to improve performance, as it helps to reduce transaction log activity. Read Top Reasons for Log Performance Problems.
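A hedged way to find candidates for removal is the index usage DMV (placeholder filtering; note its counters reset at instance restart, so make sure the server has been up long enough to be representative):

    -- Non-clustered indexes that are being written to but never read from
    SELECT OBJECT_NAME(i.object_id) AS table_name,
           i.name                   AS index_name,
           us.user_updates,
           us.user_seeks + us.user_scans + us.user_lookups AS total_reads
    FROM sys.indexes AS i
    LEFT JOIN sys.dm_db_index_usage_stats AS us
           ON us.object_id = i.object_id
          AND us.index_id = i.index_id
          AND us.database_id = DB_ID()
    WHERE i.type_desc = 'NONCLUSTERED'
      AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
      AND ISNULL(us.user_seeks + us.user_scans + us.user_lookups, 0) = 0
    ORDER BY us.user_updates DESC;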
Wait statistics, or please tell me where it hurts gives an idea about using wait statistics for performance analysis.
To see some fresh ideas for performance, take a look at
Performance Considerations - sqlmag.com
Separate tables in joins to different disks (for parallel disk I/O - filegroups).
Avoid joins on columns with few unique values.
To understand JOIN, read Advanced JOIN Techniques

How often do you update statistics in SQL Server 2000?

I'm wondering if updating statistics has helped you before and how did you know to update them?
exec sp_updatestats
Yes, updating statistics can be very helpful if you find that your queries are not performing as well as they should. This is evidenced by inspecting the query plan and noticing when, for example, table scans or index scans are being performed instead of index seeks. All of this assumes that you have set up your indexes correctly.
There is also the UPDATE STATISTICS command, but I've personally never used that.
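For reference, the syntax is straightforward (table and index names are placeholders):

    -- Update all statistics on one table, reading every row rather than sampling
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

    -- Or target the statistics of a single index
    UPDATE STATISTICS dbo.Orders IX_Orders_CustomerId;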
It's common to add your statistics update to a maintenance plan (as in an Enterprise Manager-defined Maintenance plan). That way it happens on a schedule - daily, weekly, whatever.
SQL Server 2000 uses statistics to make good decisions about query execution so they definitely help.
It's a good idea to rebuild your indexes at the same time (DBCC DBREINDEX and DBCC INDEXDEFRAG).
If you rebuild indexes, then the statistics for those indexes are automatically rebuilt.
If your timeframes allow, then running UPDATE STATISTICS as part of a maintenance plan is a good idea, as frequently as nightly (if your indexes are being rebuilt less frequently than that).
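On SQL Server 2000 those two commands look roughly like this (placeholder names; DBREINDEX is an offline rebuild of the whole table's indexes, INDEXDEFRAG works online but is less thorough):

    -- Full rebuild of every index on the table, fill factor 90
    DBCC DBREINDEX ('dbo.Orders', '', 90);

    -- Online defragmentation of a single index (current database, table, index)
    DBCC INDEXDEFRAG (0, 'dbo.Orders', 'IX_Orders_CustomerId');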
SQL Server: To determine whether out-of-date statistics are the cause of a query performing poorly, turn on 'Query -> Display Estimated Execution Plan' (Ctrl+L) in Management Studio and run the query. Open another window, paste in the same query, turn on 'Query -> Display Actual Execution Plan' (Ctrl+M), and re-run the query. If the execution plans are different, then the statistics are most likely out of date.
Updating statistics becomes necessary after the following events:
- Records are inserted into your table
- Records are deleted from your table
- Records are updated in your table
If you have a large database with millions of records that gets lots of writes per day you probably should be determining an off-peak time to schedule index updates.
Also, you need to consider your type of traffic. If you have a lot (millions) of records in tables with many foreign key dependencies and a larger proportion of writes to reads, you might want to consider turning off automatic statistics recomputation (NOTE: this feature will be removed in a future version of SQL Server, but for SQL Server 2000 you should be OK). This tells the engine not to recompute statistics on every INSERT, DELETE, or UPDATE, and makes those actions much more performant.
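On SQL Server 2000 that can be done per table with sp_autostats, or for the whole database (placeholder table and database names):

    -- Disable automatic statistics recomputation for one heavily written table
    EXEC sp_autostats 'dbo.Orders', 'OFF';

    -- Or for the entire database
    ALTER DATABASE MyDatabase SET AUTO_UPDATE_STATISTICS OFF;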
Indexes are no laughing matter. They are the heart and soul of a performant database.
