I have run three queries in SSMS and cannot understand the difference in time it takes each of them to run. The first two queries extract data for a single month, the third extracts data for both months. All queries work correctly, but I cannot understand how the second one can take so long to run.
The queries, the number of records returned, and the time taken for each are as follows. Can anyone explain this?
SELECT *
FROM [vw_Movement]
WHERE ShiftDateTime BETWEEN '1 Aug 14 6:00' AND '31 Aug 14 18:00'
(Rcds returned=16,342, time=0 secs)
SELECT *
FROM [vw_Movement]
WHERE ShiftDateTime BETWEEN '1 Sep 14 6:00' AND '30 Sep 14 18:00'
(Rcds returned=14,468, time=24 secs)
SELECT *
FROM [vw_Movement]
WHERE ShiftDateTime BETWEEN '1 Aug 14 6:00' AND '30 Sep 14 18:00'
(Rcds returned=30,810, time=0 secs)
Have you looked at the execution plan (Ctrl+L in SSMS)? You don't say whether the underlying object is a table or a view, but from the vw_ prefix, I'm going to go with view.
So you might also want to look at the view definition itself, copy it into SSMS as a new query, put the WHERE clause at the end, and analyze the entire query instead of just the view (see the sketch below). Maybe by commenting out some of the joins you can narrow down the problem.
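A minimal way to pull the view definition out so it can be pasted into a new query window; dbo.vw_Movement is assumed from the queries above:
-- Script out the view body so the underlying SELECT can be analyzed directly
-- with the ShiftDateTime filter appended.
SELECT OBJECT_DEFINITION(OBJECT_ID('dbo.vw_Movement')) AS view_definition;
-- Alternatively: EXEC sp_helptext 'dbo.vw_Movement';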
It is likely related to one of the following:
(1) Statistics: you can update the statistics of the underlying tables for that view during your maintenance window. People warn against running statistics updates during normal business hours; I feel that depends on your workload. In my experience updating statistics doesn't lock the tables and should be okay, but it really depends on how much the database server/host can handle.
(2) A bad query plan cached in the procedure cache. If you search the internet you can find methods to remove your specific cached query plan (see the sketch below). A bad plan is also often the result of stale statistics.
FYI: by updating statistics, your query will automatically be recompiled due to the change in stats, so taking the action in (2) should not be needed.
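If you do want to evict just the one plan rather than clearing the whole cache, here is a sketch of the usual approach; the LIKE filter on vw_Movement is an assumption, so match it to the actual statement text:
-- Find the plan handle of the cached plan for the statement in question...
SELECT cp.plan_handle, cp.usecounts, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%vw_Movement%';

-- ...then free only that plan, passing the plan_handle value returned above.
-- DBCC FREEPROCCACHE (<plan_handle from the query above>);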
We have a DMV query that executes every 10 minutes and inserts usage statistics, like SESSION_CURRENT_DATABASE, SESSION_LAST_COMMAND_START_TIME, etc., and it has supposedly been running fine for the last 2 years.
Today we were notified by the data hyperingestion team that the last records shown were from 6/10. So we found out the job has been stuck for 14 days and hasn't been capturing new statistics since then. We immediately restarted the job and it has been executing successfully since this morning, but we've basically lost the data for this 14-day period. Is there a way for us to execute this DMV query for 6/10-6/24 against $SYSTEM.DISCOVER_SESSIONS to recover these past 14 days of data?
Or is all hope lost?
DMV query:
SELECT [SESSION_ID]
,[SESSION_SPID]
,[SESSION_CONNECTION_ID]
,[SESSION_USER_NAME]
,[SESSION_CURRENT_DATABASE]
,[SESSION_USED_MEMORY]
,[SESSION_PROPERTIES]
,[SESSION_START_TIME]
,[SESSION_ELAPSED_TIME_MS]
,[SESSION_LAST_COMMAND_START_TIME]
,[SESSION_LAST_COMMAND_END_TIME]
,[SESSION_LAST_COMMAND_ELAPSED_TIME_MS]
,[SESSION_IDLE_TIME_MS]
,[SESSION_CPU_TIME_MS]
,[SESSION_LAST_COMMAND_CPU_TIME_MS]
,[SESSION_READS]
,[SESSION_WRITES]
,[SESSION_READ_KB]
,[SESSION_WRITE_KB]
,[SESSION_COMMAND_COUNT]
FROM $SYSTEM.DISCOVER_SESSIONS
I wouldn't say it's "gone" unless the instance has been restarted or the database has been detached. For example, the DMV for procedure usage should still have data in it, but you won't be able to specifically recreate what it looked like 10 days ago.
You can get a rough idea by looking back through the 2 years of data you already have and getting a sense of whether there are spikes or consistent usage. Then grab a snapshot of the DMV today and extrapolate it back 14 days to estimate what usage was like.
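For that one-off snapshot (and as a fallback for the regular collection), here is a minimal sketch of pulling the DMV into a relational log table over a linked server; the linked server name SSAS_PROD, the table dbo.SessionUsageLog, and the abbreviated column list are all assumptions for illustration:
-- Snapshot the current contents of the SSAS session DMV into a logging table.
-- Linked server name, table name and abbreviated column list are assumptions.
INSERT INTO dbo.SessionUsageLog (SESSION_ID, SESSION_SPID, SESSION_CURRENT_DATABASE,
                                 SESSION_LAST_COMMAND_START_TIME, SESSION_LAST_COMMAND_END_TIME)
SELECT *
FROM OPENQUERY(SSAS_PROD,
    'SELECT [SESSION_ID], [SESSION_SPID], [SESSION_CURRENT_DATABASE],
            [SESSION_LAST_COMMAND_START_TIME], [SESSION_LAST_COMMAND_END_TIME]
     FROM $SYSTEM.DISCOVER_SESSIONS');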
My QA database has some troubling CPU spikes. I applied a recommended patch for SQL Server 2017, which gave some improvement but not much. I ran a report on the highest CPU use per query, and there's one query way out in front with a total time of 70 seconds (not ms; over a minute!). Since it's just a select from a single table, am I right to assume the awkward date conversion is probably the culprit? For all I know it repeats the conversion for every row in the table.
SELECT
    [XREF_LIB_IMPL].[..]
FROM [XREF_LIB_IMPL]
WHERE [XREF_LIB_IMPL].[FK.XREF_LIB_IMPL] = #1
  AND CONVERT([varchar](50), [XREF_LIB_IMPL].[DATE_EXPIRED], (120)) = #
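If the conversion is the culprit, one common fix is to convert the parameter instead of the column, so the predicate stays sargable and any index on DATE_EXPIRED can still be used. A minimal sketch; @Fk and @DateExpired are hypothetical stand-ins for the captured #1 and # parameters, and it assumes DATE_EXPIRED holds values without fractional seconds:
-- Convert the parameter once rather than converting DATE_EXPIRED on every row.
-- @Fk and @DateExpired are hypothetical stand-ins for the captured parameters.
DECLARE @Fk INT = 1;
DECLARE @DateExpired VARCHAR(50) = '2019-01-01 00:00:00';

SELECT *
FROM [XREF_LIB_IMPL]
WHERE [XREF_LIB_IMPL].[FK.XREF_LIB_IMPL] = @Fk
  AND [XREF_LIB_IMPL].[DATE_EXPIRED] = CONVERT(datetime, @DateExpired, 120);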
I have an MSSQL 2005 database with records dating back to 2004; there are currently just under 1,000,000 records in one particular table.
The thing is, if I run a report comparing 2009 data against 2010 data, 2008 against 2009, 2009 against 2009, or any combination of years before this year, then results are returned in 1-5 seconds.
If however I run a report that includes 2011 data then the report takes ~6 minutes.
I've checked the data and it looks similar to previous years and is cross-referenced against the same data used in all of the reports.
It's as if the database has exceeded some limit and the data for this year has become fragmented and therefore harder to access. I'm not saying this is the case, but it may be for all I know.
Anyone have any suggestions?
Shaun.
Update: Since posting the question I found DBCC DBREINDEX table_name, which seems to have done the trick.
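For reference, a minimal sketch of that command; the table name and fill factor are placeholders:
-- Rebuild all indexes on the table; as a side effect this also rebuilds the index
-- statistics with a full scan. 'dbo.YourReportTable' and the fill factor of 90 are placeholders.
DBCC DBREINDEX ('dbo.YourReportTable', '', 90);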
What do the execution plans look like? If they differ, you might need to manually update statistics on the table, as the newly inserted rows are likely to be disproportionately under-represented in the statistics and the optimizer might thus choose a suboptimal plan.
See this blog post for an explanation of this issue: Statistics, row estimations and the ascending date column.
Additionally, check that your 2011 query isn't encountering blocking due to concurrent inserts or updates that do not affect queries against historic data.
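A minimal sketch of the manual statistics update suggested above; the table name is a placeholder for the reporting table:
-- Refresh all statistics on the table with a full scan so the histogram covers
-- the newly inserted 2011 rows. 'dbo.YourReportTable' is a placeholder name.
UPDATE STATISTICS dbo.YourReportTable WITH FULLSCAN;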
I see a similar question here from 2013, but it was not answered so I am posting my version.
We are using SQL Server 2008 (SP4) - 10.0.6000.29 (X64) and have a database that is about 70 GB in size with about 350 tables. On a daily basis there are only a small number of updates, though a couple of times a year we dump a fair amount of data into it. There are several Windows services that constantly query the database but rarely update it. There are also several websites and desktop applications that use it (again, minimal daily updates).
The problem we have is that every once in a while a query that hits certain records will take much longer than normal. The following is a bogus example:
This query against 2 tables with less than 600 total records might take 30+ seconds:
select *
from our_program_access bpa
join our_user u on u.user_id = bpa.user_id
where u.user_id = 50 and program_name = 'SomeApp'
But when you change the user_id value to another user record, it takes less than one second:
select *
from our_program_access bpa
join our_user u on u.user_id = bpa.user_id
where u.user_id = 51 and program_name = 'SomeApp'
The real queries that are being used are a little more complex, but the idea is the same: search ID 50 takes 30+ seconds, search ID 51 takes < 1 second, but both return only 1 record out of about 600 total.
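One quick way to tell whether the slowdown comes from a cached plan (parameter sniffing) rather than from the data itself is to force a fresh compile for the slow value; a sketch using the bogus example above:
-- If this runs fast for user_id = 50, the cached plan / stale statistics are the
-- problem, not the data for that user.
SELECT *
FROM our_program_access bpa
JOIN our_user u ON u.user_id = bpa.user_id
WHERE u.user_id = 50 AND program_name = 'SomeApp'
OPTION (RECOMPILE);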
We have found that the issue seems related to statistics. When this problem occurs, we run sp_updatestats and all the queries become equally fast. So we started running sp_updatestats in a maintenance plan every night, but the problem still pops up. We also tried setting AUTO_UPDATE_STATISTICS_ASYNC on, but the problem eventually popped up again.
While the database is large, it doesn't really undergo tremendous changes, though it does face constant queries from different services.
There are several other databases on the same server such as a mail log, SharePoint, and web filtering. Overall, performance is very good until we run into this problem.
Does it make sense that a database that undergoes relatively small daily changes would need sp_updatestats run so frequently? What else can we do to resolve this issue?
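Before the next occurrence, it may help to capture when the statistics on the two tables were last updated and check whether the slow window lines up with stale statistics. A minimal sketch that works on SQL Server 2008; the dbo schema is an assumption:
-- Show every statistics object on the two tables and when it was last updated.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id IN (OBJECT_ID('dbo.our_program_access'), OBJECT_ID('dbo.our_user'))
ORDER BY last_updated;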
We have a database with about 50-60% recompiles. That value comes from [SQL Compilations/sec] compared with [Batch Requests/sec].
We think that value is a bit high.
If we look at this query:
SELECT TOP 150
qs.plan_generation_num,
qs.execution_count,
qs.statement_start_offset,
qs.statement_end_offset,
st.text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE qs.plan_generation_num > 1
ORDER BY plan_generation_num DESC
We don't see a lot of plan_generation_num increments compared to the execution counts.
What we do have is a lot of single-use plans, and I am trying to figure out why.
Our application is built in ASP.NET and we always use parameterized queries. We use both stored procedures and SQL statements in the application, but always parameterized.
The website that runs against this database is pretty big, with about 500,000 page views each day and about 10,000 requests per minute, if that information helps.
We have no long-running queries, and indexes and statistics are in order. This is one of the last things to optimize.
CPU averages 15%.
RAM is about 100 GB and is, of course, used up by SQL Server.
We use SQL Server 2014 Enterprise.
One thing I started wondering about: if I have an SQL statement like this
SELECT doors, windows, seats from cars where Wheels = #Wheels AND Active = 1
Will this plan not be reused because we don't set a parameter on this part: AND Active = 1?
Any idea how to find out why we have so many single-use plans?
The count of cached plans is about 20,000. In comparison, we have about 700 stored procedures and a lot more queries in the app.
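To get an idea of where the single-use plans come from, you can list them straight from the plan cache and look at the statement text for literals that should have been parameters. A minimal sketch along the lines of the query above:
-- List single-use ad hoc plans with their text; repeated statements that differ only
-- in embedded literal values are the usual cause of a cache full of single-use plans.
SELECT TOP 150
       cp.usecounts,
       cp.objtype,
       cp.size_in_bytes / 1024 AS size_kb,
       st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE cp.usecounts = 1
  AND cp.objtype = 'Adhoc'
ORDER BY cp.size_in_bytes DESC;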