Database reads varying dramatically on a query with indexes - sql-server

I have a query that has appropriate indexes and is shown in the query plan with an estimated subtree cost of circa 1.5. The plan shows an Index Seek, followed by Key Lookup - which is fine for a query expected to return 1 row from a set of between 5 and 20 rows (i.e. the Index Seek should find between 5 and 20 rows, and after 5 - 20 Key Lookups, we should return 1 row).
When run interactively, the query returns almost immediately. However, DB traces this morning show runtimes from live (a web app) that vary wildly; typically the query is taking < 100 DB Reads, and effectively 0 runtime... but we are getting a few runs that consume > 170,000 DB Reads, and runtime up to 60s (greater than our timeout value).
What could explain this variation in disk reads? I have tried comparing queries interactively and using Actual Execution plans from two parallel runs with filter values taken from fast and slow runs, but interactively these show effectively no difference in the plan used.
I also tried to identify other queries that could be locking this one, but I am not sure that would impact the DB Reads so much... and in any event this query tended to be the worst for runtime in my trace logs.
Update: Here's a sample of the plan produced when the query is run interactively:
Please ignore the 'missing index' text. It is true that changes to the current indexes could allow a faster query with fewer lookups, but that is not the issue here (there are already appropriate indexes). This is an Actual Execution Plan, where we see figures like Actual Number of Rows. For example, on the Index Seek, the Actual number of rows is 16, and the I/O cost is 0.003. The I/O cost is the same on the Key Lookup.
Update 2: The results from the trace for this query are:
exec sp_executesql N'select [...column list removed...] from ApplicationStatus where ApplicationGUID = @ApplicationGUID and ApplicationStatusCode = @ApplicationStatusCode;',N'@ApplicationGUID uniqueidentifier,@ApplicationStatusCode bigint',@ApplicationGUID='ECEC33BC-3984-4DA4-A445-C43639BF7853',@ApplicationStatusCode=10
The query is constructed using the Gentle.Framework SqlBuilder class, which builds parameterised queries like this:
SqlBuilder sb = new SqlBuilder(StatementType.Select, typeof(ApplicationStatus));
sb.AddConstraint(Operator.Equals, "ApplicationGUID", guid);
sb.AddConstraint(Operator.Equals, "ApplicationStatusCode", 10);
SqlStatement stmt = sb.GetStatement(true);
IList apps = ObjectFactory.GetCollection(typeof(ApplicationStatus), stmt.Execute());

Could the data be being removed from the cache? That may explain why, with a hot cache (data already in memory), the recorded reads are very low... and then, when the data is no longer in RAM, the reads increase because it has to be read off disk again.
Just one idea to get things moving.
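One hedged way to test this idea (not from the original answer): count how many pages of the table are sitting in the buffer pool before a fast run and before a slow run. The table name is taken from the traced query; the join below is the standard buffer-descriptor pattern.
SELECT COUNT(*) AS cached_pages
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au ON au.allocation_unit_id = bd.allocation_unit_id
JOIN sys.partitions AS p ON p.hobt_id = au.container_id
WHERE bd.database_id = DB_ID()
  AND p.object_id = OBJECT_ID('ApplicationStatus');
-- If cached_pages collapses before the slow runs, cold-cache physical reads would explain
-- the jump from <100 to >170,000 DB Reads.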

Run Profiler to see if statistics are being updated around the same time, or simply to see what else is going on.
Also, please add the SQL query as well as the client code.
Thoughts:
It sounds like your "5-20" rows could be far more than that
With a bad plan/parameter sniffing you'd get consistently bad performance
How many writes happen on this table: enough to update statistics?
Is there some datatype issue? (e.g. concatenating parameters and introducing a datatype conversion; see the sketch below)
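To check the datatype point above, here is a minimal sketch (not from the original post): compare the parameter types declared in the trace (uniqueidentifier, bigint) with the actual column types, since a mismatch can show up as CONVERT_IMPLICIT in the plan. Only the table and column names come from the trace; the rest is standard catalog-view plumbing.
SELECT c.name, t.name AS column_type, c.max_length
FROM sys.columns AS c
JOIN sys.types AS t ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID('ApplicationStatus')
  AND c.name IN ('ApplicationGUID', 'ApplicationStatusCode');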

Related

How to increase SQL Server select query performance?

I have a table with 102 columns and 43,200 rows. The Id column is an identity column and 2 columns have a unique index.
When I just execute
Select *
from MyTable
it takes 8+ minutes over the network.
This table has a Status column which contains 1 or 0. If I select with where Status = 1, I get 31,565 rows and the select takes 6+ minutes. For your information, status 1 means completed and those rows will never change again, while status 0 means work in progress, and those rows have different columns changed by different users at different stages.
When I select with Status = 0, it takes 1.43 minutes and returns 11568 rows.
How can I increase performance for completed and WIP status query separately or cumulatively? Can I somehow use caching?
SQL Server takes care of caching, at least as long as there is enough free RAM. When it takes so long to get the data in the first place, you need to find the bottleneck.
RAM: Is there enough to hold the full table? And is SQL Server configured to use it?
Is there an upper limit to RAM usage? If not, SQL Server assumes unlimited RAM, and this often ends with caching spilling into the page file, which causes massive slowdowns (see the sketch after this list)
You said "8+ minutes through network". How long does it take on local execution? Maybe the network is slow
Hard drive: When the table is too big to be held in RAM it gets read from hard drive. HDDs are somewhat slow. Maybe defragmenting the indices could help here (at least somewhat)
If none of that helps, the SQL Profiler might help show you where the bottleneck actually is.
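If the RAM ceiling turns out to be the issue, a minimal sketch for capping SQL Server's memory so the buffer pool stops spilling into the page file (the 4096 MB value is purely an example; size it to your server):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;  -- example value, not a recommendation
RECONFIGURE;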
This is an interesting question, but it's a little open-ended; more info is needed. I totally agree with allmhuran's comment that maybe you shouldn't be using "select * ..." for a large table (it could in fact be posted as an answer; it deserves upvotes).
I suspect there may be design issues - are you using BLOBs? Is the data at least partially normalized? ref https://en.wikipedia.org/wiki/Database_normalization
I suggest creating a nonclustered index on the Status column. It improves queries whose WHERE clause filters on this column.
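A minimal sketch of that index, assuming the table is named MyTable (the real name was not posted):
CREATE NONCLUSTERED INDEX IX_MyTable_Status
ON dbo.MyTable (Status);
-- Note that with SELECT * the engine still has to fetch every column, so the gain may be limited
-- unless the column list is trimmed or the index INCLUDEs the columns actually needed.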

Slow query operations on Azure database

I have 1.2 million rows in an Azure database table. The following command:
DELETE FROM _PPL_DETAIL WHERE RunId <> 229
is painfully slow.
There is an index on RunId.
I am deleting most of the data.
229 is a small number of records.
It has been running for an hour now
Should it take this long?
I am pretty sure it will finish.
Is there anything I can do to make operations like this faster?
The database does have a PK, although it is a dummy PK (not used). I already saw that suggested as an optimization for this problem (SQL Server treats a table without a PK differently -- much less efficiently), but it still takes way too long. It is still taking 1+ hours.
How about trying something like the below: keep the RunId = 229 rows in a temp table, truncate the table (which is minimally logged, unlike a large DELETE), and insert them back.
BEGIN TRAN
-- copy out only the rows to keep
SELECT * INTO #T FROM _PPL_DETAIL WHERE RunId = 229
TRUNCATE TABLE _PPL_DETAIL
-- put the kept rows back (an identity column would need SET IDENTITY_INSERT and an explicit column list)
INSERT INTO _PPL_DETAIL
SELECT * FROM #T
COMMIT TRAN
Without knowing what database tier the database where that statement runs is on, it is not easy to help you. However, let us tell you how the system works so that you can make this determination with a bit more investigation by yourself.
Currently the log commit rate is limited by the tier the database has. Deletes are fundamentally limited by the ability to write out log records (and replicate them to multiple machines in case your main machine dies). When you select records, you don't have to go over the network to N machines, and you may not even need to go to the local disk if the records are preserved in memory, so selects are generally expected to be faster than inserts/updates/deletes because of the need to harden the log for the latter. You can read about the specific limits for different reservation sizes here: DTU Limits and vCore Limits.
One common problem is to do individual operations in a loop (like a cursor or driven from the client). This implies that each statement has a single row updated and thus has to harden each log record serially because the app has to wait for the statement to return before submitting the next statement. You are not hitting that since you are running a big delete as a single statement. That could be slow for other reasons such as:
Locking - if you have other users doing operations on the table, it could block the progress of the delete statement. You can potentially see this by looking at sys.dm_exec_requests to see if your statement is blocking on other locks.
Query Plan choice. If you have to scan a lot of rows to delete a small fraction, you could be blocked on the IO to find them. Looking at the query plan shape will help here, as will SET STATISTICS TIME ON (we suggest you change the query to do TOP 100 or similar to get a sense of whether you are doing lots of logical read IOs vs. actual logical writes). This could imply that your on-disk layout is suboptimal for this problem. The general solutions would be to either pick a better indexing strategy or to use partitioning to help you quickly drop groups of rows instead of having to delete all the rows explicitly.
An additional strategy to have better performance with deletes is to perform batching.
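A minimal sketch of such batching, using the statement from the question; the 5000-row batch size is an arbitrary example chosen to keep each transaction's log write small:
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (5000) FROM _PPL_DETAIL WHERE RunId <> 229;
    SET @rows = @@ROWCOUNT;  -- stop once no qualifying rows remain
END;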
As far as I know there was a change and the default degree of parallelism is 1 on these servers, so running the query with OPTION (MAXDOP 0) could help.
Try this:
DELETE FROM _PPL_DETAIL
WHERE RunId <> 229
OPTION (MAXDOP 0);

How do I figure out what is causing Data IO spikes on my Azure SQL database?

I have an Azure SQL production database that runs at around 10-20% DTU usage on average; however, I get DTU spikes that take it upwards of 100% at times. Here is a sample from the past hour:
I realize this could be a rogue query, so I switched over to the Query Performance Insight tab, and I find the following from the past 24 hours:
This chart makes sense with regard to the CPU usage line. Query 3780 takes the majority of the CPU, as expected with my application. The Overall DTU (red) line seems to follow this correctly (minus the spikes).
However, in the DTU Components charts I can see large Data IO spikes occurring that coincide with the Overall DTU spikes. Switching over to the TOP 5 queries by Data IO, I see the following:
This seems to indicate that there are no queries that are using high amounts of Data IO.
How do I find out where this Data IO usage is coming from?
Finally, I see that there is this one, "odd ball" query (7966) listed under the TOP 5 queries by Data IO with only 5 executions. Selecting it shows the following:
SELECT StatMan([SC0], [SC1], [SC2], [SB0000])
FROM (SELECT TOP 100 PERCENT [SC0], [SC1], [SC2], step_direction([SC0]) over (order by NULL) AS [SB0000]
FROM (SELECT [UserId] AS [SC0], [Type] AS [SC1], [Id] AS [SC2] FROM [dbo].[Cipher] TABLESAMPLE SYSTEM (1.828756e+000 PERCENT)
WITH (READUNCOMMITTED) ) AS _MS_UPDSTATS_TBL_HELPER
ORDER BY [SC0], [SC1], [SC2], [SB0000] ) AS _MS_UPDSTATS_TBL
OPTION (MAXDOP 16)
What is this query?
This does not look like any query that my application has created/uses. The timestamps on the details chart seem to line up with the approximate times of the overall Data IO spikes (just prior to 6am) which leads me to think this query has something to do with all of this.
Are there any other tools can I use to help isolate this issue?
The query is updating statistics. This occurs when the AUTO UPDATE STATISTICS setting is on. It should be kept on and you shouldn't turn it off; that is a best practice.
You should update stats manually only when you see a query not performing well and the stats for that query are off.
Also, below are some rules for when SQL Server will update stats automatically for you:
When a table with no rows gets a row
When 500 rows are changed in a table that has fewer than 500 rows
When 20% + 500 rows are changed in a table with more than 500 rows
By 'change' we mean a row being inserted, updated or deleted. So, yes, even the automatically-created statistics get updated and maintained as the data changes. There were some changes to these rules in recent versions, and SQL Server can update stats more often.
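One hedged way to see how close a table is to those thresholds is sys.dm_db_stats_properties; dbo.Cipher is the table from the StatMan query above, everything else is standard DMV usage:
SELECT s.name AS stats_name, sp.last_updated, sp.rows, sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.Cipher');
-- A modification_counter approaching 20% of rows (+500) means the automatic update is about to fire.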
References:
https://www.sqlskills.com/blogs/erin/understanding-when-statistics-will-automatically-update/
It seems that query is part of the automatic update of statistics process. To mitigate the impact of this process on production you can regularly update statistics and indexes using runbooks as explained here. Run sp_updatestats to immediately try to mitigate the impact of this process.
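A minimal sketch of doing that update manually during a quiet window; the table name comes from the StatMan query, and FULLSCAN is just one sampling choice:
UPDATE STATISTICS dbo.Cipher WITH FULLSCAN;  -- refresh stats for the one table seen in the spikes
-- or, across the whole database:
EXEC sp_updatestats;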

EF: Query quite slow on first execution

We are using EF Core 1.1, in an ASP.NET Core app, where the following LINQ query takes about 45 seconds to a minute on its first execution. After the first execution, the subsequent executions seem to work fine.
Question: How can we improve the performance of this query? A user waiting 45 seconds or more gets the impression that the ASP.NET page displaying the query results is broken and moves on to another page:
var lstProjects = _context.Projects.Where(p => p.ProjectYear == FY && p.User == User_Name).OrderBy(p => p.ProjectNumber).ToList();
Execution Plan in SQL Server Query Editor: The table has 24 columns, one of which is of type varchar(255) and four of which are varchar(75). The others are of types int, smalldatetime, bit, etc. All of the columns are needed in the query, but the WHERE clause filters the data to return about 35 rows out of about 26,000.
More details on Execution Plan
Updated comment to answer.
When using Code First there still needs to be a consideration for indexing based on the common queries run in high-traffic areas of the application. The index scan across the PK amounts to little more than a table scan so an index across the Project Year + UserName would give a boost in performance and should be considered if this is expected to be used a bit or is performance sensitive. Regardless of DB First or Code First, developers need to consider profiler results against the database in order to optimize indexing based on realistic usage. Normally the execution plan will return back suggestions for indexing. Index suggestions should appear just below the SQL statement in the execution plan. From the screen shot it isn't clear whether one was suggested or not, as there might have been a scrollbar to the right of the SQL under the pop-up stats.
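A hedged sketch of that index; the table and column names (Projects, ProjectYear, User, ProjectNumber) are inferred from the LINQ query and may not match the real schema:
CREATE NONCLUSTERED INDEX IX_Projects_ProjectYear_User
ON dbo.Projects (ProjectYear, [User])
INCLUDE (ProjectNumber);  -- remaining columns come back via key lookups on roughly 35 rows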
In cases where a query is returning slow results but no suggestions, try re-running the query with altered parameters with the execution plan to exclude SQL from picking up pre-compiled queries.

Extremely High Estimated Number of Rows in Execution Plan

I have a stored procedure running 10 times slower in production than in staging. I took a look at the execution plan, and the first thing I noticed was that the cost of the Table Insert (into a table variable @temp) was 100% in production and 2% in staging.
The estimated number of rows in production showed almost 200 million rows! But in staging it was only about 33.
Although the production DB is running on SQL Server 2008 R2 while staging is on SQL Server 2012, I don't think this difference could cause such a problem.
What could be the cause of such a huge difference?
UPDATED
Added the execution plan. As you can see, the large number of estimated rows shows up in Nested Loops (Inner Join) but all it does is a clustered index seek to another table.
UPDATED2
Link for the plan XML included
plan.xml
And SQL Sentry Plan Explorer view (with estimated counts shown)
This looks like a bug to me.
There are an estimated 90,991.1 rows going into the nested loops.
The table cardinality of the table being seeked on is 24,826.
If there are no statistics for a column and the equality operator is used, SQL Server can't know the density of the column, so it uses a fixed 10 percent value.
90,991.1 * 24,826 * 10% = 225,894,504.86 which is pretty close to your estimated rows of 225,894,000
But the execution plan shows that only 1 row is estimated per seek. Not the 24,826 from above.
So these figures don't add up. I would assume that it starts off from an original 10% ball park estimate and then later adjusts it to 1 because of the presence of a unique constraint without making a compensating adjustment to the other branches.
I see that the seek is calling a scalar UDF [dbo].[TryConvertGuid]. I was able to reproduce similar behavior on SQL Server 2005, where seeking on a unique index on the inside of a nested loop with the predicate being a UDF produced a result where the number of rows estimated out of the join was much larger than would be expected by multiplying estimated seeked rows * estimated number of executions.
But, in your case, the operators to the left of the problematic part of the plan are pretty simple and not sensitive to the number of rows (neither the rowcount top operator nor the insert operator will change), so I don't think this quirk is responsible for the performance issues you noticed.
Regarding the point in the comments to another answer that switching to a temp table helped the performance of the insert: this may be because it allows the read part of the plan to operate in parallel (inserting into a table variable would block this).
Run EXEC sp_updatestats; on the production database. This updates statistics on all tables. It might produce more sane execution plans if your statistics are screwed up.
Please don't run EXEC sp_updatestats; on a large system it could take hours, or days, to complete. What you may want to do is look at the query plan that is being used on production. Try to see if there is an index that could be used but is not being used. Try rebuilding the index (as a side effect it rebuilds statistics on the index). After rebuilding, look at the query plan and note whether it is using the index. Perhaps you may need to add an index to the table. Does the table have a clustered index?
As a general rule, since 2005, SQL Server manages statistics on its own rather well. The only time you need to explicitly update statistics is if you know that the query would execute a lot faster if SQL Server used an index, but it's not doing so. You may want to run (on a nightly or weekly basis) scripts that automatically test every table and every index to see if the index needs to be reorganized or rebuilt (depending on how fragmented it is). These kinds of scripts (on a large, active OLTP system) may take a long time to run, and you should consider carefully when you have a window to run them. There are quite a few versions of this script floating around, but I have used this one often:
https://msdn.microsoft.com/en-us/library/ms189858.aspx
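A minimal sketch of the kind of check such scripts perform, assuming a table named dbo.MyTable; the reorganize-versus-rebuild split follows the commonly cited 5%/30% fragmentation guidelines rather than anything specific to this answer:
SELECT i.name, ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i ON i.object_id = ips.object_id AND i.index_id = ips.index_id;

ALTER INDEX ALL ON dbo.MyTable REORGANIZE;   -- lighter-weight option for moderate fragmentation
-- ALTER INDEX ALL ON dbo.MyTable REBUILD;   -- heavier, but also refreshes the index's statistics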
Sorry this is probably too late to help you.
Table variables are impossible for SQL Server to predict. It always estimates exactly one row coming back.
To get accurate estimates so that a better plan can be created, you need to switch your table variable to a temp table or a CTE.
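A minimal sketch of the switch, with placeholder columns and a placeholder source table since the real schema was not posted:
-- Instead of: DECLARE @temp TABLE (Id int, Value nvarchar(100));
CREATE TABLE #temp (Id int, Value nvarchar(100));
INSERT INTO #temp (Id, Value)
SELECT Id, Value FROM dbo.SourceTable;  -- SQL Server keeps statistics on #temp, so the optimizer gets real row estimates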
