Why is GETDATE slowing down a SELECT query if I use a variable? - sql-server

I'm doing a select on a table with about 6 million records, selecting GETDATE():
select getdate() as date, [...] from MyTable
I verified that the performance issue is with GETDATE(): even after removing all other fields, the query is still slow.
I thought that putting the value of GETDATE() into a separate variable would speed the query up:
declare @now datetime
set @now = GETDATE()
select @now as date, [...] from MyTable
It is slow as well. Why?

I'd never really noticed this before. But I am seeing the same thing.
Ran the following on a 10 million row table...
-- query #1
DECLARE @now AS DATETIME ;
SET @now = GETDATE() ;
SELECT @now AS [date], * FROM [MyTable] ;
-- cpu time = 2,563 ms
-- duration = 27,511 ms
-- query #2
SELECT GETDATE() AS [date], * FROM [MyTable] ;
-- cpu time = 2,421 ms
-- duration = 26,862 ms
-- query #3
SELECT * FROM [MyTable] ;
-- cpu time = 1,969 ms
-- duration = 23,149 ms
The CPU times and durations show a clear difference.
All three query plans are more or less the same, with negligible difference between estimated costs for the queries.
The only differences I could see between the plans were the wait stats...
Query #1
WaitType = ASYNC_NETWORK_IO
WaitCount = 77,716
WaitTimeMs = 24,234
Query #2
WaitType = ASYNC_NETWORK_IO
WaitCount = 75,261
WaitTimeMs = 23,662
Query #3
WaitType = ASYNC_NETWORK_IO
WaitCount = 55,434
WaitTimeMs = 20,280
That's an extra 3-4 seconds, between including and not including the GETDATE() column in the result set, spent just waiting for whatever is running the query to acknowledge it has consumed the data and is ready for more.
In my case, I was using SSMS to execute the queries. So, I can only put it down to SSMS dragging its heels to render that extra column, which amounted to about 75 MB (10M x 8 bytes).
Having said that, the bulk of the time is obviously taken up with scanning all 10 million rows.
Unfortunately, I think the extra execution time to include your GETDATE() column is unavoidable.
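If you want to see how much of that time is client rendering rather than server work, one hedged check (the column name below is a placeholder) is to assign the columns to variables so no rows are sent to the client; the ASYNC_NETWORK_IO waits should then largely disappear:
SET STATISTICS TIME ON;

DECLARE @d DATETIME, @x INT;            -- @x stands in for whichever column you select
SELECT @d = GETDATE(), @x = SomeColumn  -- SomeColumn is a hypothetical column of [MyTable]
FROM [MyTable];

SET STATISTICS TIME OFF;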

Two points.
ASYNC_NETWORK_IO is SQL Server saying that it is waiting for network bandwidth to be available in order to send more data down the pipe.
SSMS stores the output of the Results window in a temp file on your C:\ drive, so it will be affected by disk I/O, antivirus scanning, and other processes running on your machine. The same concept applies if you use a Linux OS.
For starters, I'd experiment with limiting the size of the data being returned (10M records can hardly be analysed by a human) and with using a different tool to pull the records (if you really do need all 10M).
Also, review the execution plan to find out where exactly the delay is. If it still points to the ASYNC_NETWORK_IO wait, then your problem could be one or more of the network components between yourself and the server. Try using a wired connection instead of WiFi. Do you have a VPN? Is there anything limiting data transfer rates? Or the reason might simply be that too much data is being pulled; a quick way to check is sketched below.
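For example, a cap like this keeps SSMS from having to render millions of rows (the ORDER BY column is an assumption; adjust it to your schema):
SELECT TOP (1000) GETDATE() AS [date], *
FROM MyTable
ORDER BY SomeKeyColumn DESC;   -- hypothetical key column, most recent rows first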

Related

SQL Server Query Performance with Timestamp and variable

I have a simple SQL query to count the number of telemetry records by clients within the last 24 hours.
With an index on TimeStamp, the following query runs in less than 1 second for about 10k rows:
select MachineName,count(Message) from Telemetry where TimeStamp between DATEADD(HOUR,-24, getutcdate()) and getutcdate() group by MachineName
However, when I tried making the hard-coded -24 configurable by adding a variable, the query took more than 5 minutes to execute.
DECLARE @cutoff int; SET @cutoff = 24
select MachineName,count(Message) from Telemetry where TimeStamp between DATEADD(HOUR, -1*@cutoff, getutcdate()) and getutcdate() group by MachineName
Is there any specific reason for the significant decrease of performance? What's the best way of adding a variable without impacting performance?
My guess is that you also have an index on MachineName - or that SQL is deciding that since it needs to group by MachineName, that would be a better way to access the records.
Updating statistics as suggested by AngularRat is a good start - but SQL often maintains those automatically. (In fact, the good performance when SQL knows the 24-hour interval in advance is evidence that the statistics are good... but when SQL doesn't know the size of the BETWEEN in advance, it thinks other approaches might be a better idea.)
Given:
CREATE TABLE Telemetry ( machineName sysname, message varchar(88), [timestamp] datetime2)
CREATE INDEX Telemetry_TS ON Telemetry([timestamp]);
First, try the OPTION (OPTIMIZE FOR (@cutoff = 24)) clause to let SQL Server know how to approach the query; if that is insufficient, then try WITH (INDEX(Telemetry_TS)). Using the INDEX hint is less desirable.
DECLARE @cutoff int = 24;
select MachineName,count(Message)
from Telemetry -- WITH (INDEX(Telemetry_TS))
where TimeStamp between DATEADD(HOUR, -1*@cutoff, getutcdate()) and getutcdate()
group by MachineName
OPTION (OPTIMIZE FOR ( @cutoff = 24 ));
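If the value of @cutoff genuinely changes from call to call, OPTION (RECOMPILE) is a commonly used alternative worth sketching here: the optimizer sees the actual value on every execution, at the cost of recompiling the statement each time.
DECLARE @cutoff int = 12;   -- any value; the plan is rebuilt per execution
select MachineName, count(Message)
from Telemetry
where TimeStamp between DATEADD(HOUR, -1*@cutoff, getutcdate()) and getutcdate()
group by MachineName
OPTION (RECOMPILE);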
Your parameter should actually work, but you MIGHT be seeing an issue where the database is using out-of-date statistics for the query plan. I'd try updating statistics for the table you are querying. Something like:
UPDATE STATISTICS TableName;
Additionally, if your code is running from within a stored procedure, you might want to recompile the procedure. Something like:
EXEC sp_recompile N'ProcedureName';
A lot of times when I have a query that seems like it should run a lot faster but isn't, it's an out-of-date statistics/query plan issue.
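For completeness, a hedged variant that forces a full statistics scan on the table from the question (FULLSCAN can be expensive on large tables), plus the table form of sp_recompile:
UPDATE STATISTICS Telemetry WITH FULLSCAN;
-- Passing a table name marks every cached plan that references it for recompilation
EXEC sp_recompile N'Telemetry';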
References:
https://msdn.microsoft.com/en-us/library/ms187348.aspx
https://msdn.microsoft.com/en-us/library/ms190439.aspx

Trouble with SQL Server locks

I am running into an issue where SQL Server is causing a significant number of locks (95 to 150) on our main table. They are typically short duration locks, lasting under 3 seconds, but I would like to eliminate those if I possibly can. We have also noticed that typically there are no blocks, but occasionally we have a situation where the blocks seem to "cascade" and then the entire system slows down considerably.
Background
We have up to 600 virtual machines processing data and we loaded a table in SQL so we could monitor any records that got stalled and records that were marked complete. We typically have between 200,000 and 1,000,000 records in this table during our processing.
What we are trying to accomplish
We are attempting to get the next available record (Status = 0). However, since there can be multiple hits on the stored proc simultaneously, we are trying to make sure each VM gets a unique record. This is important because processing takes between 1.5 and 2.5 minutes per record and we want to make this as clean as possible.
Our thought process to this point
UPDATE TOP (1) dbo.Test WITH (ROWLOCK)
SET Status = 1,
    VMID = @VMID,
    ReadCount = ReadCount + 1,
    ProcessDT = GETUTCDATE()
OUTPUT INSERTED.RowID INTO #retValue
WHERE Status = 0
This update was causing us a few issues with locks, so we reworked the process a little bit and changed the WHERE to a sub-query that returns the top 1 RowID (primary key) from the table. This seemed to help things run a little smoother, but then we occasionally get overloaded in the database again.
UPDATE TOP (1) dbo.Test WITH (ROWLOCK)
SET Status = 1,
    VMID = @VMID,
    ReadCount = ReadCount + 1,
    ProcessDT = GETUTCDATE()
OUTPUT INSERTED.RowID INTO #retValue
-- WHERE Status = 0
WHERE RowID IN (SELECT TOP 1 RowID FROM dbo.Test WHERE Status = 0 ORDER BY RowID)
We discovered that having a significant number of Status 1 and 2 records in the table causes slowdowns. We figured it was from a table scan on the Status column. We added the following index, but it did not help solve the locks.
CREATE NONCLUSTERED INDEX IX_Test_Status_RowID
ON [dbo].[Test] ([Status])
INCLUDE ([RowID])
As the final step after the UPDATE, we use the RowID returned to select out the details:
SELECT 'Test' as FileName, *, @Nick as [Nickname]
FROM Test WITH (NOLOCK)
WHERE RowID IN (SELECT id from #retValue)
Types of locks
The majority of the locks are LCK_M_U and LCK_M_S, which I would expect with that UPDATE and SELECT query. Occasionally we also had 1 or 2 LCK_M_X locks. That made me think we may still be getting collisions on our "unique" record code.
Questions
Are these locks, and the number of locks, just normal SQL operations for this type of load?
Is the sub-query causing more issues than the TOP (1) UPDATE we started with? I am trying to confirm that I can remove the ORDER BY statement and remove that extra step of processing.
Would a different index help? I wondered if the index updating was a possible cause of the locks initially, but now I am not sure.
Is there a better or more efficient way to get a unique RowID?
Is the WITH (ROWLOCK) causing more locks than leaving it off would cause? The idea is ROWLOCK would only lock the 1 specific record and allow another proc to update another record and select without locking the table or page.
Does anyone have any tools they recommend to stress test and run 100 queries simultaneously in order to test any potential solutions?
Sorry for all the questions, just trying to make sure I am as clear as possible on our process and the questions we have.
Thanks in advance for any insight as this is a really frustrating issue for us.
Hardware
We are running SQL Server 2008 R2 on a Dual Xeon CPU with 24 GB of RAM. So we should have plenty of horsepower for this process.
It looks like the best solution to the issue was to create a separate table with an identity and use the @@IDENTITY from the insert to determine the next row to process. That has solved all my lock issues so far in my stress testing. Thanks to all who pointed me in the right direction!
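A minimal sketch of that approach, assuming RowID values in dbo.Test are the same sequential 1..N values the identity generates (the claim table and column names are hypothetical; @VMID is the parameter from the stored proc described above):
-- Hypothetical claim table: each VM inserts a row; the identity value it receives
-- is the RowID it now owns, so no two VMs can claim the same record.
CREATE TABLE dbo.TestClaim
(
    ClaimID INT IDENTITY(1,1) PRIMARY KEY,
    VMID    INT NOT NULL,
    ClaimDT DATETIME NOT NULL DEFAULT GETUTCDATE()
);

DECLARE @next INT;
INSERT INTO dbo.TestClaim (VMID) VALUES (@VMID);
SET @next = SCOPE_IDENTITY();   -- @@IDENTITY, as described above, also works if no triggers fire

UPDATE dbo.Test
SET Status = 1, VMID = @VMID, ReadCount = ReadCount + 1, ProcessDT = GETUTCDATE()
WHERE RowID = @next;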

TSQL Batch insert - math doesn't work

I need to insert 1.3 million records from one table into another, and it takes a really long time (over 13 min). After some research I found that it is better to do this operation in batches, so I put together something like this (the actual query is more complicated; it is simplified here for brevity):
DECLARE @key INT; SET @key = 0;
CREATE TABLE #CURRENT_KEYS([KEY] INT)
WHILE 1=1
BEGIN
    -- Getting subset of keys
    INSERT INTO #CURRENT_KEYS([KEY])
    SELECT TOP 100000 [KEY] FROM #ALL_KEYS WHERE [KEY] > @key
    IF @@ROWCOUNT = 0 BREAK
    -- Main Insert
    INSERT INTO #RESULT([KEY], VALUE)
    SELECT MAIN_TABLE.[KEY], MAIN_TABLE.VALUE
    FROM MAIN_TABLE INNER JOIN #CURRENT_KEYS
        ON MAIN_TABLE.[KEY] = #CURRENT_KEYS.[KEY]
    SELECT @key = MAX([KEY]) FROM #CURRENT_KEYS
    TRUNCATE TABLE #CURRENT_KEYS
END
I already have an indexed list of 1.3 million keys in the #ALL_KEYS table, so the idea here is to create a smaller subset of keys in a loop for the JOIN and INSERT. The above loop executes 13 times (1,300,000 records / 100,000 records per batch). If I put a break after just one iteration, execution time is 9 seconds. I assumed the total execution time would be 9*13 seconds, but it's the same 13 minutes!
Any idea why?
NOTE: Instead of the temp table #CURRENT_KEYS, I tried using a CTE, but with the same result.
UPDATE: Some wait stats.
For this process I am seeing PAGEIOLATCH_SH and sometimes PREEMPTIVE_OS_WRITEFILEGATHER in the wait stats, occasionally over 500 ms but often under 100 ms. Also, sp_who shows the user as suspended for the duration of the query.
I'm pretty sure you're experiencing disk pressure. PREEMPTIVE_OS_WRITEFILEGATHER is an autogrowth event (a database file getting larger), and PAGEIOLATCH_SH means that the process is waiting for a latch on a buffer that is part of an I/O request (probably your file growth event).
http://blog.sqlauthority.com/2011/02/19/sql-server-preemptive-and-non-preemptive-wait-type-day-19-of-28/
http://blog.sqlauthority.com/2011/02/09/sql-server-pageiolatch_dt-pageiolatch_ex-pageiolatch_kp-pageiolatch_sh-pageiolatch_up-wait-type-day-9-of-28/
What I would recommend is pre-growing both tempdb (for your temp table) and the database that's going to hold the batch insert.
http://support.microsoft.com/kb/2091024
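As a hedged sketch of pre-growing the files (logical file names and sizes are placeholders; check sys.database_files for the real names and pick sizes that fit your workload):
-- Pre-grow the user database data file before the batch insert
ALTER DATABASE [MyDatabase]
MODIFY FILE (NAME = N'MyDatabase_Data', SIZE = 20480MB);

-- tempdb can be pre-grown the same way; 'tempdev' is the default logical name of its primary data file
ALTER DATABASE [tempdb]
MODIFY FILE (NAME = N'tempdev', SIZE = 4096MB);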

Best way of benchmarking INSERTs - all inclusive?

If I would like to benchmark how different table definitions affect row insertion speed in SQL Server, I guess it's not sufficient to just time the transaction from BEGIN to COMMIT: this only measures the time spent appending the INSERTs to the (sequential) log. Right?
But the real I/O hit comes when the INSERTs are actually applied to the real table (a clustered index which might be slightly reorganized after the INSERTs). How can I measure the total time used, all inclusive? That is, the time for all the INSERTs (written to log) + the time used for updating the "real" data structures? Is it sufficient to perform a "CHECKPOINT" before stopping the timer?
Due to lack of response I will answer this myself.
As far as I can see in various documentation, I will reach all related disk activity induced by a query by issuing a CHECKPOINT. This will force-write all dirty pages to disk.
If nothing but the query to be measured is executed, the only dirty pages will be the ones touched by the query. The experiments performed seem to support this "theory".
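A hedged timing sketch along those lines (table and column names are placeholders; DBCC DROPCLEANBUFFERS needs sysadmin and belongs on a dev/test box only):
DBCC DROPCLEANBUFFERS;   -- optional: start from a cold buffer pool

DECLARE @t0 DATETIME2 = SYSDATETIME();

INSERT INTO dbo.TargetTable (Col1, Col2)    -- hypothetical target table
SELECT Col1, Col2 FROM dbo.SourceTable;     -- hypothetical source table

CHECKPOINT;   -- force the dirty pages produced by the INSERT to disk

SELECT DATEDIFF(MILLISECOND, @t0, SYSDATETIME()) AS ElapsedMs;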
SET STATISTICS TIME ON will give you elapsed and CPU times in milliseconds for each statement you run after setting it.
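For example (the SELECT here is just a placeholder statement; the timings appear in the Messages tab):
SET STATISTICS TIME ON;
SELECT COUNT(*) FROM dbo.SourceTable;   -- hypothetical table
SET STATISTICS TIME OFF;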
edit:
Using the query below you can find out exactly how many pages are dirty in the buffer pool at the time of execution, as well as their size in MB, the configured min/max server memory, and totals.
SELECT
    ISNULL((CASE WHEN ([database_id] = 32767) THEN 'Resource Database' ELSE DB_NAME(database_id) END), 'Total Pages') AS [Database Name],
    SUM(CASE WHEN ([is_modified] = 1) THEN 1 ELSE 0 END) AS [Dirty Page Count],
    SUM(CASE WHEN ([is_modified] = 1) THEN 0 ELSE 1 END) AS [Clean Page Count],
    COUNT(*) * 8.0 / 1024.0 AS [Size in MB],
    a.value_in_use AS [Min Server Memory],
    b.value_in_use AS [Max Server Memory]
FROM sys.dm_os_buffer_descriptors
INNER JOIN sys.configurations a ON a.configuration_id = 1543
INNER JOIN sys.configurations b ON b.configuration_id = 1544
GROUP BY [database_id], a.value_in_use, b.value_in_use WITH CUBE
HAVING a.value_in_use IS NOT NULL AND b.value_in_use IS NOT NULL
ORDER BY 1;

Why is a T-SQL variable comparison slower than GETDATE() function-based comparison?

I have a T-SQL statement that I am running against a table with many rows. I am seeing some strange behavior. Comparing a DateTime column against a precalculated value is slower than comparing each row against a calculation based on the GETDATE() function.
The following SQL takes 8 secs:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
GO
DECLARE @TimeZoneOffset int = -(DATEPART(HOUR, GETUTCDATE() - GETDATE()))
DECLARE @LowerTime DATETIME = DATEADD(HOUR, ABS(@TimeZoneOffset), CONVERT(VARCHAR, GETDATE(), 101) + ' 17:00:00')
SELECT TOP 200 Id, EventDate, Message
FROM Events WITH (NOLOCK)
WHERE EventDate > @LowerTime
GO
This alternate strangely returns instantly:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
GO
SELECT TOP 200 Id, EventDate, Message
FROM Events WITH (NOLOCK)
WHERE EventDate > GETDATE()-1
GO
Why is the second query so much faster?
EDITED: I updated the SQL to accurately reflect other settings I am using
After doing a lot of reading and researching, I've discovered the issue here is parameter sniffing. SQL Server attempts to determine how best to use indexes based on the WHERE clause, but in this case it isn't doing a very good job.
See the examples below:
Slow version:
declare @dNow DateTime
Select @dNow=GetDate()
Select *
From response_master_Incident rmi
Where rmi.response_date between DateAdd(hh,-2,@dNow) AND @dNow
Fast version:
Select *
From response_master_Incident rmi
Where rmi.response_date between DateAdd(hh,-2,GetDate()) AND GetDate()
The "Fast" version runs around 10x faster than the slow version. The Response_Date field is indexed and is a DateTime type.
The solution is to tell SQL Server how best to optimise the query. Modifying the example as follows to include the OPTIMIZE FOR option resulted in it using the same execution plan as the "fast" version. The OPTIMIZE FOR hint here explicitly tells SQL Server to optimise for the local @dNow variable holding a date value (as if declaring it as DATETIME wasn't enough :s).
Care should be taken when doing this, however, because in more complicated WHERE clauses you could end up making the query perform worse than SQL Server's own optimisations.
declare @dNow DateTime
SET @dNow=GetDate()
Select ID, response_date, call_back_phone
from response_master_Incident rmi
where rmi.response_date between DateAdd(hh,-2,@dNow) AND @dNow
-- The optimizer does not know much about the variable, so it assumes it should perform a clustered index scan (on the clustered index ID) - this is slow
-- This hint tells the optimizer that the variable is indeed a datetime in this format (why it does not know that already, who knows)
OPTION(OPTIMIZE FOR (@dNow = '99991231'));
The execution plans must be different, because SQL Server does not evaluate the value of the variable when it creates the execution plan. So it uses average statistics across all the different dates that can be stored in the table.
On the other hand, the GETDATE() function is evaluated at execution time, so the execution plan is created using statistics for that specific date, which of course are more realistic than the previous ones.
If you create a stored procedure with @LowerTime as a parameter, you will get better results.
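A hedged sketch of that suggestion (the procedure name is made up; with a parameter, SQL Server can sniff the actual value when it compiles the plan):
CREATE PROCEDURE dbo.GetEventsSince
    @LowerTime DATETIME
AS
BEGIN
    SET NOCOUNT ON;
    SELECT TOP 200 Id, EventDate, Message
    FROM Events WITH (NOLOCK)
    WHERE EventDate > @LowerTime;
END;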
