Database select queries running slow after bulk delete from tables - sql-server

Over the years our SQL Server database has accumulated a lot of data, which was causing queries to run slowly and, in turn, the application to slow down significantly.
We eventually decided to archive certain data by storing it in a different data store and deleting it from SQL Server. Note that the data is spread over 22 tables (meta-data). After deleting about 40% of the data we saw that certain queries were running significantly slower. A few queries improved slightly, but when we ran a load test we observed that response times and transactions per second had come down significantly.
As part of cleanup on the database side, we reorganized and rebuilt the indexes. We made sure that the fragmentation after the deletes was within permissible limits (ideally < 30%, and it was well below that). After this we ran update statistics with full scan.
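For reference, the per-table maintenance pass was roughly the following (dbo.Orders is a placeholder table name):

-- Sketch of the maintenance steps described above; dbo.Orders is a placeholder.
ALTER INDEX ALL ON dbo.Orders REORGANIZE;      -- defragment in place
ALTER INDEX ALL ON dbo.Orders REBUILD;         -- or rebuild the indexes outright
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;    -- then refresh statistics with a full scan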
We identified a particular query which runs significantly slower after the deletes and saw that it had a different query plan compared to before the delete (we created a new database with the pre-delete data set to compare the differences), even though the table definitions (indexes, constraints, etc.) are the same for all tables in the query.
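For anyone who wants to reproduce the comparison, something along these lines pulls the cached plan and average timings for a statement from the plan cache (the LIKE filter is a placeholder for part of the query text):

-- Sketch: fetch the cached plan and average elapsed time for a query by a text fragment.
SELECT TOP (10)
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count AS avg_elapsed_microseconds,
       st.text        AS query_text,
       qp.query_plan  AS cached_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.text LIKE '%YourSlowQuery%'               -- placeholder text filter
ORDER BY avg_elapsed_microseconds DESC;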
We ran the load tests again after these steps but still saw the same performance as before. I am not a SQL Server expert, but I am working closely with the DBAs to understand the underlying cause of the slowdown.
The DBAs are suggesting that we optimize the queries, but I am not convinced that would actually help: the same queries performed better against the larger data set, which suggests that deleting the data changed something about the tables that is now causing the slowdown.
I would really appreciate any pointers or guidance on addressing this issue.
Thanks,
Karthik

Related

One database vs multiple databases in SQL Server 2014

I have a SQL Server instance running on my machine. It contains 10 databases, say
a
b
....
z
So my question is: is 10 or more databases, or 1 single database, best for SQL Server? Do more databases cause more performance issues on a single server machine? What is recommended?
You may think something like this:
"Using multiple databases helps because they act like an outer index, which can shorten search times.
Think of it this way: when a search begins, your database server takes your query, goes first to your table, and executes the query on that table. That helps query time, because data in other tables is not examined; only your table's entry in the system table of tables is looked up. :)
In the same manner, when you group your tables into different databases, the query only has to look at the table index for that database, and because there are fewer tables listed there, finding your table's id in the table of tables completes in less time. :) "
But that is not correct! Unless you have millions of tables it is not going to have an impact, because the data structures databases use mostly access data in O(log(n)) time. That means that if (big if) a lookup over 1,000,000 entries takes 6 steps to complete, then 100,000 takes 5 steps and 1,000 takes 3. As you can see, it makes little difference.
On the other hand, using 2 databases guarantees that there have to be at least 2 connections, and connections are expensive things, which is why connection pools exist.
The most common issue is poor database design.
The causes of performance problems can be various, but the most common are a poorly designed database, an incorrectly configured system, insufficient disk space or other system resources, excessive query compilation and recompilation, bad execution plans due to missing or outdated statistics, and queries or stored procedures that have long execution times due to improper design.
Memory bottlenecks are caused by limitations in available memory and memory pressure caused by SQL Server, system, or other application activity. Poor indexing requires table scans, which in the case of large tables means that a large number of rows are read from disk and handled in memory.
Network bottlenecks are caused by overload on a server or network, so the data cannot flow as expected.
I/O issues can be caused by slow hardware, bad storage solution design, and configuration. Besides hardware components, such as disk types, disk array type, and RAID configuration, that affect I/O performance, unnecessary requests made by a database also affect I/O traffic. Frequent index scans, inefficient queries, and out-of-date statistics can also cause I/O workload and bottlenecks.
See more at: http://www.sqlshack.com/dba-guide-sql-server-performance-troubleshooting-part-1-problems-performance-metrics/
Multiple databases are not a problem for performance.
You can look at these links; I think they will help you understand performance tuning :D

Using a duplicate SQL Server database for queries

I have a very large (100+ gigs) SQL Server 2005 database that receives a large number of inserts and updates, with less frequent selects. The selects require a lot of indexes to keep them functioning well, but it appears the number of indexes is affecting the efficiency of the inserts and updates.
Question: Is there a method for keeping two copies of a database where one is used for the inserts and updates while the second is used for the selects? The second copy wouldn't need to be real-time updated, but shouldn't be more than an hour old. Is it possible to do this kind of replication while keeping different indexes on each database copy? Perhaps you have other solutions?
You're looking to set up a master/child database topology using replication. With SQL Server you'll need to set up replication between two databases (preferably on separate hardware). The master DB you should use for inserts and updates. The child will service all your select queries. You'll also want to optimize both databases' configuration settings for the type of work they will be performing. If you have heavy select queries on the child database, you may also want to set up views that will make the queries perform better than complex joins on tables.
Some reference material on replication:
http://technet.microsoft.com/en-us/library/ms151198.aspx
Just google it and you'll find plenty of information on how to set it up and configure it:
http://search.aim.com/search/search?&query=sql+server+2005+replication&invocationType=tb50fftrab
Transactional replication can do this, as the subscriber can have a number of additional indexes compared with the publisher. But you have to bear in mind a simple fact: all inserts/updates/deletes are going to be replicated to the reporting copy (the subscriber), and the additional indexes will... slow down replication. It is actually possible to slow the replication down to a rate at which it is unable to keep up, causing the distribution DB to swell. But this happens only when you have a constant high rate of updates. If the problems occur only during spikes, then the distribution DB will act as a queue that absorbs the spikes and levels them off during off-peak hours.
I would not take on this endeavour without absolute, 100% proof that it is the additional indexes that are slowing down the inserts/updates/deletes, and without testing that the inserts/updates/deletes actually perform significantly better without the extra indexes. Specifically, ensure that the culprit is not the other usual suspect: lock contention.
Generally, set-based operations (including updating indexes) are faster than non-set-based ones:
1,000 single-row inserts will most probably be slower than one insert of 1,000 records.
You can batch the updates to the second database. This will, first, make the index updating faster, and, second, smooth out the peaks.
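A minimal sketch of such a batch loop, assuming a reporting database and an Orders table (database, table, and column names are all placeholders):

-- Sketch: copy rows across in small batches so index maintenance on the second
-- database happens in cheap chunks instead of one huge transaction.
DECLARE @rows int;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    INSERT INTO ReportDB.dbo.Orders (OrderID, OrderDate, Amount)   -- placeholder columns
    SELECT TOP (5000) s.OrderID, s.OrderDate, s.Amount
    FROM SourceDB.dbo.Orders AS s
    WHERE NOT EXISTS (SELECT 1
                      FROM ReportDB.dbo.Orders AS d
                      WHERE d.OrderID = s.OrderID);
    SET @rows = @@ROWCOUNT;
END;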
You could schedule a bcp script as a task to copy the data to the other DB.
You could also try transaction log shipping to update the read only db.
Don't forget to adjust the fill factor when you create your two databases. It should be low(er) on the database with frequent updates, and 100 on your "data warehouse"/read only database.
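For example (index and table names below are placeholders):

-- Sketch: lower fill factor on the write-heavy copy, 100 on the read-only copy.
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD WITH (FILLFACTOR = 80);   -- OLTP copy
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD WITH (FILLFACTOR = 100);  -- reporting copy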

What can cause bad SQL Server performance?

Every time I find that data retrieval from my database is slow, I try to figure out which part of my SQL query has the problem, and I try to optimize it and also add some indexes to the table. But this does not always solve the problem.
My question is:
Are there any other tricks to make SQL Server performance better?
What are the other reasons which can make SQL Server performance worse?
Inefficient query design
Auto-growing files
Too many indexes to be maintained on a table
Too few indexes on a table
Not properly choosing your clustered index
Index fragmentation due to poor maintenance
Heap fragmentation due to no clustered index
Too high FILLFACTORs used on indexes, causing excessive page splitting
Too low of a FILLFACTOR used on indexes, causing excessive space usage and increased scanning time
Not using covered indexes where appropriate
Non-selective indexes being used
Improper maintenance of statistics (out-of-date statistics; a quick check is sketched after this list)
Databases not normalized properly
Transaction logs and data sharing the same drive spindles
The wrong memory configuration
Too little memory
Too little CPU
Slow hard drives
Failing hard drives or other hardware
A 3D screensaver on your database server chewing up your CPU
Sharing the database server with other processes which compete for CPU and memory
Lock contention between queries
Queries which scan entire large tables
Front end code which searches data in an inefficient manner (nested loops, row by row)
CURSORS which are not necessary and/or are not FAST_FORWARD
Not setting NOCOUNT when you have large tables being cursored through.
Using a transaction isolation level which is too high (such as using SERIALIZABLE when it's not necessary)
Too many round trips between the client and the SQL Server (a chatty interface)
An unnecessary linked server query
A linked server query which targets a table on a remote server with no primary or candidate key defined
Selecting too much data
Excessive query recompilations
oh and there might be some others, too.
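For the out-of-date statistics item above, a quick way to see how stale each table's statistics are is something like this (run in the database in question):

-- Sketch: list user-table statistics and when they were last updated.
SELECT OBJECT_NAME(s.object_id)            AS table_name,
       s.name                              AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY last_updated;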
When I talk to new developers who have this problem, I usually find that it is because of one of two problems. Both of them are fixed if you follow these two rules.
First, don’t retrieve any data that you don’t need. For example, if you are doing paging then don’t bring back 100 rows and then calculate which ones belong on the page. Have the stored proc figure it out and only retrieve the 10 you need.
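For example, a paging query can fetch only the requested page on the server side (table and column names below are placeholders; OFFSET/FETCH needs SQL Server 2012 or later, and ROW_NUMBER() can be used on older versions):

-- Sketch: return only the 10 rows for the requested page instead of 100+ rows.
DECLARE @PageNumber int = 3, @PageSize int = 10;

SELECT OrderID, OrderDate, CustomerID
FROM dbo.Orders
ORDER BY OrderDate DESC, OrderID
OFFSET (@PageNumber - 1) * @PageSize ROWS
FETCH NEXT @PageSize ROWS ONLY;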
Second, nothing is faster than work you don’t do. For example, I worked on a system where the full roles and rights for a user were retrieved with every page requested – this was 100’s of rows for some users. Even just saving this to session state on the first request and then using it from there for subsequent requests took a meaningful weight off of the database.
Suggest you get a good book on Performance tuning for the database you use (this is very much database specific). This is an extremely complex subject and cannot really be answered other than in generalities on the web.
For instance, Dave Markle tells you inefficient queries can cause the problem, and there are many, many ways to write inefficient queries and many more ways to fix them.
If you're new to the database and you have access to the database engine tuning advisor, you can heuristically tune your database.
You basically capture the SQL queries being run against your DB in the SQL Profiler, then feed those to DETA. DETA effectively runs the queries (without altering your data) and then works out what information your database is missing (views, indexes, partitions, statistics etc.) to do the queries better.
It can then apply them for you and monitor them in the future. I'm not saying to assume that DETA is always right or to do things without understanding, but I've found that it's definitely a good way to see what your queries are doing, how long they take, and how you can index the DB appropriately.
PS: With all that said, it's much better to invest in a good DBA at the start of a project so that you have good structures and indexing to start with. But that's not the position you're in right now...
This is a very wide question, and there is a ton of answers already. Still, I would like to add one important factor: page splits. The problem is that there are good splits and bad splits. The following are good articles explaining how to use the transaction_log extended event to identify bad/nasty page splits:
Tracking Problematic Pages Splits in SQL Server 2012 Extended Events - Jonathan Kehayias
Tracking page splits using the transaction log - Paul Randal
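A minimal sketch in the spirit of those articles (SQL Server 2012+; per the articles, operation value 11 corresponds to LOP_DELETE_SPLIT, i.e. a "bad" split, and the database_id value is a placeholder to change for your database):

-- Sketch: count bad page splits per allocation unit via Extended Events.
CREATE EVENT SESSION [TrackPageSplits] ON SERVER
ADD EVENT sqlserver.transaction_log (
    WHERE operation = 11        -- LOP_DELETE_SPLIT
      AND database_id = 5       -- placeholder: your database id
)
ADD TARGET package0.histogram (
    SET filtering_event_name = 'sqlserver.transaction_log',
        source_type = 0,
        source = 'alloc_unit_id'
);

ALTER EVENT SESSION [TrackPageSplits] ON SERVER STATE = START;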
You mentioned:
I try to optimize it and also add some indexes
But sometimes removing unused non-clustered indexes may help to improve performance, as it helps to reduce transaction log activity. Read Top Reasons for Log Performance Problems.
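A sketch of how to spot such indexes with the index usage DMV (run in the target database; the counters reset when the instance restarts):

-- Sketch: non-clustered indexes that are maintained on every write but rarely read.
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name                   AS index_name,
       us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS us
       ON us.object_id = i.object_id
      AND us.index_id = i.index_id
      AND us.database_id = DB_ID()
WHERE i.type_desc = 'NONCLUSTERED'
  AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
ORDER BY us.user_updates DESC;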
"Wait statistics, or please tell me where it hurts" gives an idea about using wait statistics for performance analysis.
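For instance, the cumulative top waits since the last restart can be listed like this (the SLEEP filter is a crude placeholder; the article above has a much more complete list of benign waits to exclude):

-- Sketch: which wait types the instance has spent the most time on.
SELECT TOP (10)
       wait_type,
       wait_time_ms,
       waiting_tasks_count,
       wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT LIKE 'SLEEP%'     -- crude filter; exclude benign waits properly in practice
ORDER BY wait_time_ms DESC;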
To see some fresh ideas for performance, take a look at
Performance Considerations - sqlmag.com
Separate tables in joins to different disks (for parallel disk I/O - filegroups).
Avoid joins on columns with few unique values.
To understand JOIN, read Advanced JOIN Techniques

SQL Server database with MASSIVE amount of tables

I've been asked to troubleshoot performance problems in a SQL Server 2005 database.
The challenge is not a huge amount of data, but the huge number of tables. There are more than 30,000 tables in a single database. The total data size is about 650 GB.
I don't have any control over the application that creates all those tables. The application uses roughly 2,500 tables per "division" in a larger company with 10-15 divisions.
How do you even start to check for performance problems? All the articles you find on VLDB (Very Large DB) are about the amount of data, not the amount of tables.
Any ideas? Pointers? Hints?
Start like any other kind of performance tuning. Among other things, you should not assume that the large number of tables constitutes a performance problem. It may be a red herring.
Instead, ask the users "what's slow"? Even if you measured the performance (using the Profiler, perhaps), your numbers might not match the perceived performance problem.
As others have noted, the number of tables is probably indicative of a bad design, but it is far from a slam dunk that it is the source of the performance problems.
The best advice I can give you for any performance optimization is to stop guessing about the source of the problem and go look for it. Above all else, don't start optimizing until you have positively identified the source of the problem.
I'd start by running some traces on the database and identify the poor performing queries. This would also tell you which tables are getting used the most by the application. In all likelihood a large number of those tables are probably either: A) leftover temp tables; B) no longer used; or C) working tables someone didn't clean up.
Putting the poor DB design aside, if no users are reporting slow response times then you don't currently have a performance problem.
If you do have a performance problem:
1) Check for fragmentation (dbcc showcontig)
2) Check the hardware specs, RAID/drive/file placement. Check the SQL server error logs.
If hardware seems underspecified or poorly designed, run Performance counters (see the PAL tool)
3) Gather trace data during a normal query work load and identify expensive queries (see this SO answer: How Can I Log and Find the Most Expensive Queries?)
Is the software creating all these tables? If so, maybe the same errors are being repeated over and over. Do all the tables have a primary key? Do they all have a clustered index? Are all the necessary non-clustered indexes present (those columns that are used for filtering and joins) etc etc etc.
Is upgrading to SQL Server 2008 an option? If so, you could take advantage of the new Policy-Based Management feature to enforce best practices for this large number of tables.
To start tuning now, I would use Profiler to find the statements with the longest duration, then see what you can do to improve them (adding indexes is usually the simplest way).

Will having multiple filegroups help speed up my database?

Currently, I am developing a product that does fairly intensive calculations using MS SQL Server 2005. At a high level, the architecture of my product is based on the concept of "runs" where each time I do some analytics it gets stored in a series of run tables (~100 tables per run).
The problem I'm having is that when the number of runs grows to about 1,000 or so after a few months, performance on the database really seems to drop off, and specifically simple queries like checking for the existence of tables or creating views can take up to a second or two.
I've heard that using multiple filegroups, which I'm not currently doing, could help. Is this true, and if so, why/how would that help? Also, if there are other suggestions, even ones like, use fewer tables, I'm open to them. I just want to speed the database up and hopefully get it in a state where it will scale.
In terms of performance, the big gain in using separate files/filegroups is that it lets you spread your data across multiple physical disks. This is beneficial because with several disks, multiple data requests can be handled simultaneously (parallel is generally faster than serial). All other things being equal, this would tend to benefit performance, but the question of how much depends on your particular data set and the queries you're running.
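As a sketch, adding a filegroup on another physical drive and placing an index on it looks roughly like this (database, path, table, and index names are placeholders):

-- Sketch: create a second filegroup on a different physical drive and move an index to it.
ALTER DATABASE RunDB ADD FILEGROUP FG_Indexes;

ALTER DATABASE RunDB ADD FILE (
    NAME = 'RunDB_Indexes1',
    FILENAME = 'E:\SQLData\RunDB_Indexes1.ndf',   -- a drive separate from the data files
    SIZE = 1GB,
    FILEGROWTH = 512MB
) TO FILEGROUP FG_Indexes;

CREATE NONCLUSTERED INDEX IX_RunResults_RunID
    ON dbo.RunResults (RunID)
    ON FG_Indexes;                                -- index pages now live on the new drive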
From your description, the slow operations you're concerned about are creating tables and checking for the existence of tables. If you are generating 100 tables per run, then after 1000 runs you have 100,000 tables. I don't have much experience with creating that many tables in a single database, but you may be pressing the limits of the system tables that track the database schema. In this case, you might see some benefit by spreading your tables across more than one database (these databases could still all live within the same instance of SQL Server).
In general, the SQL Profiler tool is the best starting point for finding slow queries. There are data columns which indicate the CPU and IO cost of each SQL batch, which should point you to the worst offenders. Once you have found the problem queries, I would use the Query Analyzer to generate query plans for each of these queries, and see if you can tell what's making them slow. Do this by opening a query window, entering your query, and hitting Ctrl+L. A complete discussion of what might be slow would fill an entire book, but good things to look for are table scans (very slow for large tables) and inefficient joins.
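If you prefer a script to the Ctrl+L shortcut, something like the following reports the I/O and timing cost alongside the batch (the query and table name are placeholders):

-- Sketch: measure I/O and CPU/elapsed time for a suspect query.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT COUNT(*) FROM dbo.RunResults;   -- placeholder: put the suspect query here

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;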
In the end, you may be able to improve things simply by rewriting your queries, or you may have to make broader changes to the table schema. For instance, maybe there's a way to create only one or a few tables per run, instead of 100. More specifics about your particular setup would help us give a more detailed answer.
I also recommend this website for lots of tips on how to make things faster:
http://www.sql-server-performance.com/
When you talk about 100 tables per run, do you actually mean that you're creating new SQL tables? If so, I think that the architecture of your application may be the issue. I can't imagine a situation where you would need that many new tables as opposed to reusing the same few tables multiple times and simply adding a column or two to differentiate between runs.
If you're already reusing the same group of tables and new runs just mean additional rows in those tables, then the issue could simply be that the new data over time is hurting performance in one of several ways. For example:
The tables/indexes could be fragmented after a while. Make sure that all of your tables have a clustered index. Check for fragmentation using sys.dm_db_index_physical_stats and issue ALTER INDEX with the REBUILD option if needed to defragment them.
The tables could simply be too large, so that inefficiencies which went unnoticed on small tables are now obvious on the larger tables. Look into proper indexes on the tables to improve performance.
SQL Server will cache query plans (especially for stored procedures), but if the data in a table changes significantly over time, that query plan may no longer be appropriate. Look into sp_recompile for your stored procedures to see if that's needed (a minimal example is sketched below).
#2 is the culprit that I see most often in real world situations. Developers tend to develop using only a small set of test data and overlook proper indexing because you can do almost anything with a table of 20 rows and it will look fast.
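A minimal sketch of the two maintenance fixes mentioned in the list above (table and procedure names are placeholders):

-- Sketch: defragment after heavy data churn, then throw away the stale cached plan(s).
ALTER INDEX ALL ON dbo.RunResults REBUILD;
EXEC sp_recompile N'dbo.usp_GetRunResults';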
Hope this helps
About 1000 of what? Single row writes? Multiple row transactions? Deletes?
A general tip would be to place the data files and log files on separate physical drives. SQL Server keeps track of every write to the log, so having those on different drives should generally give you better performance.
But SQL Server tuning depends on what the application is actually doing. There are general tips but you have to measure your own thing...
The filegroups being on different physical drives is what will give you the biggest performance boost; you can also split up where the indexes are housed so that table writes and index accesses hit different disks. There's a lot you can do with partitioning, but that general concept is where the biggest speed impact comes from.
It can help with performance by moving certain tables/elements to distinct file areas/portions of the disk. This can reduce, to a certain extent, the amount of external fragmentation impacting the database.
I would also look at other factors, such as SQL traces, to determine why queries etc. are slowing down; there can be other factors, such as query statistics and SP recompiles, that are easier to fix and can give you greater gains in performance.
Split the tables across separate physical drives. If you have that much disk IO, you need a decent IO solution. Raid 10, fast disks, split the logs and DBs onto separate drives.
Re-examine your architecture - can you use multiple databases? If you create 1000s of tables in a go, you will soon hit some interesting bottlenecks that I've not had to deal with before. Multiple DBs should solve that. Think about having one "Controlling" db containing all your main meta-data, and then satellite DBs containing the actual data.
You don't mention any specs about your server - but we saw a decent increase in performance when we went from 8GB to 20GB RAM.
It could, if you place them on separate drives: not logical but physical drives, so I/O is not slowing you down as much.
