If I want to benchmark how different table definitions affect row-insertion speed in SQL Server, I guess it's not sufficient to just time the transaction from BEGIN to COMMIT: that only measures the time spent appending the INSERTs to the (sequential) log. Right?
But the real I/O hit comes when the INSERTs are actually applied to the real table (a clustered index, which might be slightly reorganized by the INSERTs). How can I measure the total time used, all inclusive? That is, the time for all the INSERTs (written to the log) plus the time used for updating the "real" data structures? Is it sufficient to issue a CHECKPOINT before stopping the timer?
Due to the lack of responses I will answer this myself.
As far as I can tell from the documentation, I can capture all related disk activity induced by a query by issuing a CHECKPOINT, which force-writes all dirty pages to disk.
If nothing but the query being measured has been executed, the only dirty pages will be the ones touched by that query. The experiments I performed seem to support this "theory".
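For reference, a minimal timing sketch along those lines (the table names dbo.TargetTable and dbo.SourceData are placeholders); the CHECKPOINT is kept inside the timed window so that flushing the dirty pages counts towards the total:
DECLARE @start datetime2 = SYSDATETIME();

BEGIN TRANSACTION;
    -- the insert workload under test
    INSERT INTO dbo.TargetTable (Col1, Col2)
    SELECT Col1, Col2 FROM dbo.SourceData;
COMMIT;

CHECKPOINT;  -- force-write all dirty pages touched by the inserts

SELECT DATEDIFF(MILLISECOND, @start, SYSDATETIME()) AS total_elapsed_ms;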
SET STATISTICS TIME ON will give you the elapsed and CPU time in milliseconds for each statement you run after setting it.
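For example (a trivial sketch; dbo.TargetTable is a placeholder name):
SET STATISTICS TIME ON;
SELECT COUNT(*) FROM dbo.TargetTable;
SET STATISTICS TIME OFF;
-- The Messages tab then reports, per statement:
--   SQL Server Execution Times: CPU time = ... ms, elapsed time = ... ms.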
edit:
Using the query below, you can find out exactly how many pages are dirty in the buffer pool at the time of execution, as well as their size in MB, the configured min/max server memory, and totals.
SELECT
ISNULL((CASE WHEN ([database_id] = 32767) THEN 'Resource Database' ELSE DB_NAME (database_id) END),'Total Pages') AS [Database Name],
SUM(CASE WHEN ([is_modified] = 1) THEN 1 ELSE 0 END) AS [Dirty Page Count],
SUM(CASE WHEN ([is_modified] = 1) THEN 0 ELSE 1 END) AS [Clean Page Count],
COUNT(*) * 8.0 / 1024.0 [Size in MB], a.value_in_use [Min Server Memory],
b.value_in_use [Max Server Memory]
FROM sys.dm_os_buffer_descriptors
INNER JOIN sys.configurations a on a.configuration_id = 1543
INNER JOIN sys.configurations b on b.configuration_id = 1544
GROUP BY [database_id],a.value_in_use,b.value_in_use WITH CUBE
HAVING A.value_in_use IS NOT NULL AND B.value_in_use IS NOT NULL
ORDER BY 1;
I'm doing a select on a table with about 6 million records, selecting GETDATE():
select getdate() as date, [...] from MyTable
I verified that the performance issue is with GETDATE(): removing all the other fields, the query is still slow.
I thought that putting the value of GETDATE() into a separate variable would speed the query up:
declare @now datetime
set @now = GETDATE()
select @now as date, [...] from MyTable
It is slow as well. Why?
I'd never really noticed this before, but I am seeing the same thing.
I ran the following on a 10-million-row table...
-- query #1
DECLARE @now AS DATETIME ;
SET @now = GETDATE() ;
SELECT @now AS [date], * FROM [MyTable] ;
-- cpu time = 2,563 ms
-- duration = 27,511 ms
-- query #2
SELECT GETDATE() AS [date], * FROM [MyTable] ;
-- cpu time = 2,421 ms
-- duration = 26,862 ms
-- query #3
SELECT * FROM [MyTable] ;
-- cpu time = 1,969 ms
-- duration = 23,149 ms
And the CPU times and durations do show a difference.
All three query plans are more or less the same, with negligible difference between estimated costs for the queries.
The only differences I could see between the plans were the wait stats...
Query #1
WaitType = ASYNC_NETWORK_IO
WaitCount = 77,716
WaitTimeMs = 24,234
Query #2
WaitType = ASYNC_NETWORK_IO
WaitCount = 75,261
WaitTimeMs = 23,662
Query #3
WaitType = ASYNC_NETWORK_IO
WaitCount = 55,434
WaitTimeMs = 20,280
That's an extra 3-4 seconds, between including and not including the GETDATE() column in the result set, spent just waiting for whatever is running the query to acknowledge that it has consumed the data and is ready for more.
In my case, I was using SSMS to execute the queries, so I can only put it down to SSMS dragging its heels rendering that extra column, which amounts to about 75 MB (10M rows x 8 bytes).
Having said that, the bulk of the time is obviously taken up with scanning all 10 million rows.
Unfortunately, I think the extra execution time to include your GETDATE() column is unavoidable.
Two points.
ASYNC_NETWORK_IO is SQL Server saying that it is waiting for network bandwidth to be available in order to send more data down the pipe.
SSMS stores the output of the Results window in a temp file on your C:\ drive, so it will be affected by disk I/O, AV scanning, and other processes running on your machine. The same concept applies if you use a Linux OS.
I'd experiment with limiting the size of the data being returned (10M records can hardly be analysed by a human) and with using a different tool to pull the records (if you really need all 10M of them), for starters.
Also, review the execution plan to find out where exactly the delay is. If it still points to the ASYNC_NETWORK_IO wait, then your problem could be one or more of the network components between you and the server. Try using a wired connection instead of WiFi. Do you have a VPN? Is there anything limiting the data transfer rate? Or the reason might simply be that too much data is being pulled.
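On the first point, even something as simple as the following keeps the result set small while you investigate (the column names are placeholders):
SELECT TOP (1000) GETDATE() AS [date], SomeColumn1, SomeColumn2
FROM MyTable;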
I have a SQL Server 2008 R2 SP1 cluster on a 2008 OS. Any time a reboot or a failover happens, any processing over the next several days is extremely slow; however, if we leave the servers running they perform much better. I have been researching the possibility that when the procedure cache is flushed, all the plans need to be rebuilt, and this causes the slowness, rather than being able to go to memory for an existing plan. Has anyone else experienced this, and what did you do so that a reboot would not affect the system so negatively?
Rebuilding plans is most probably not the problem. I see similar behaviour on our system; the problem there is the HDD array. I tested it: if we moved to SSDs, it would speed up cold-start queries by more than 10x.
One thing you can do is monitor your most resource-intensive queries after the procedure cache is flushed. Once you identify which queries are taking a long time reading from disk to get put back into the buffer pool, you can schedule a job to spin up those queries immediately after the reboot, so when the first user goes to execute one it is already in the buffer pool and reads from memory instead of disk. Here is a query to find I/O-intensive queries after your reboot:
SELECT TOP 25 cp.usecounts AS [execution_count]
    ,qs.total_worker_time AS CPU
    ,qs.total_elapsed_time AS ELAPSED_TIME
    ,qs.total_logical_reads AS LOGICAL_READS
    ,qs.total_logical_writes AS LOGICAL_WRITES
    ,qs.total_physical_reads AS PHYSICAL_READS
    ,SUBSTRING(st.text,
        CASE WHEN qs.statement_start_offset = 0
               OR qs.statement_start_offset IS NULL
             THEN 1
             ELSE qs.statement_start_offset/2 + 1 END,
        CASE WHEN qs.statement_end_offset = 0
               OR qs.statement_end_offset = -1
               OR qs.statement_end_offset IS NULL
             THEN LEN(st.text)
             ELSE qs.statement_end_offset/2 END -
        CASE WHEN qs.statement_start_offset = 0
               OR qs.statement_start_offset IS NULL
             THEN 1
             ELSE qs.statement_start_offset/2 END + 1
     ) AS [Statement]
FROM sys.dm_exec_query_stats qs
JOIN sys.dm_exec_cached_plans cp ON qs.plan_handle = cp.plan_handle
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
ORDER BY qs.total_physical_reads DESC;
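If you'd rather not depend on a scheduled job firing at the right moment, another option is a startup procedure: a stored procedure created in master and flagged with sp_procoption runs automatically every time the instance starts. The body below is only a sketch of the warm-up idea described above, with hypothetical object names:
USE master;
GO
CREATE PROCEDURE dbo.usp_WarmBufferPool
AS
BEGIN
    SET NOCOUNT ON;
    -- Run the expensive statements identified by the query above so their
    -- pages are pulled back into the buffer pool before users arrive.
    -- (Placeholder statements; substitute the real ones.)
    SELECT COUNT_BIG(*) FROM YourDb.dbo.BigHotTable;
    EXEC YourDb.dbo.usp_ExpensiveReport;
END
GO
EXEC sp_procoption @ProcName = N'dbo.usp_WarmBufferPool',
                   @OptionName = 'startup',
                   @OptionValue = 'on';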
Is there a way to retrieve info on what the average insert time (or, better, a distribution of insert times) into a given table in SQL Server has been, up to the current point in time?
E.g. inserting into 'employees' took on average 1 millisecond per record.
I'm talking about historical data here, e.g. over the last year, not what I can get for specific queries when profiling.
You could also check the plan cache. From there you can calculate the average duration per statement, and assuming you're not inserting into the table with a lot of different statements (and your queries are parameterized), you should get quite good results.
Here's one example of how to query the DMVs:
select top 100
SUBSTRING(t.text, (s.statement_start_offset/2)+1,
((CASE s.statement_end_offset
WHEN -1 THEN DATALENGTH(t.text)
ELSE s.statement_end_offset
END - s.statement_start_offset)/2) + 1) as statement_text,
t.text,
s.total_logical_reads, s.total_logical_reads / s.execution_count as avg_logical_reads,
s.total_worker_time, s.total_worker_time / s.execution_count as avg_worker_time,
s.execution_count,
creation_time,
last_execution_time
--,cast(p.query_plan as xml) as query_plan
from sys.dm_exec_query_stats s
cross apply sys.dm_exec_sql_text (sql_handle) t
--cross apply sys.dm_exec_text_query_plan (plan_handle, statement_start_offset, statement_end_offset) p
order by s.execution_count desc
The commented-out parts are for query plans.
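If you specifically want the average duration per statement mentioned above, the same DMV exposes elapsed time in microseconds, so a variant like this (the LIKE filter is only an illustrative placeholder) gives milliseconds per execution for statements touching your table:
select top 100
    SUBSTRING(t.text, (s.statement_start_offset/2)+1,
        ((CASE s.statement_end_offset
            WHEN -1 THEN DATALENGTH(t.text)
            ELSE s.statement_end_offset
          END - s.statement_start_offset)/2) + 1) as statement_text,
    s.execution_count,
    s.total_elapsed_time / 1000.0 as total_elapsed_ms,
    (s.total_elapsed_time / s.execution_count) / 1000.0 as avg_elapsed_ms
from sys.dm_exec_query_stats s
cross apply sys.dm_exec_sql_text (sql_handle) t
where t.text like '%insert%employees%'   -- placeholder filter
order by avg_elapsed_ms desc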
SQL Profiler is not accurate for this; it is also marked deprecated as of SQL Server 2012.
The best tools for capturing performance-related data are Extended Events and Perfmon. I don't think Perfmon will give you object-level performance, but it will tell you whether you have bottlenecks at the I/O level. You will need to enable these tools/features for data collection, so if nothing has been enabled already, then getting historical data is probably not possible.
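As a rough sketch of the Extended Events route (SQL Server 2012+ syntax; the session name, file name, and LIKE filter are assumptions for illustration), a session like this captures completed statements matching a pattern, including their duration, and writes them to an .xel file you can query later:
CREATE EVENT SESSION [capture_insert_timings] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
(
    ACTION (sqlserver.sql_text, sqlserver.database_name)
    -- filter to statements hitting the table of interest (placeholder pattern)
    WHERE sqlserver.like_i_sql_unicode_string(sqlserver.sql_text, N'%INSERT INTO employees%')
)
ADD TARGET package0.event_file (SET filename = N'capture_insert_timings.xel');
GO
ALTER EVENT SESSION [capture_insert_timings] ON SERVER STATE = START;
-- On SQL Server 2008 the file target is named package0.asynchronous_file_target instead.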
I am running into an issue where SQL Server is taking a significant number of locks (95 to 150) on our main table. They are typically short-duration locks, lasting under 3 seconds, but I would like to eliminate them if I possibly can. We have also noticed that typically there are no blocks, but occasionally we hit a situation where the blocks seem to "cascade" and then the entire system slows down considerably.
Background
We have up to 600 virtual machines processing data, and we loaded a table in SQL Server so we could monitor any records that got stalled as well as records that were marked complete. We typically have between 200,000 and 1,000,000 records in this table during our processing.
What we are trying to accomplish
We are attempting to get the next available record (Status = 0). However, since there can be multiple simultaneous hits on the stored procedure, we are trying to make sure each VM gets a unique record. This is important because processing takes between 1.5 and 2.5 minutes per record, and we want to make this as clean as possible.
Our thought process to this point
UPDATE TOP (1) dbo.Test WITH (ROWLOCK)
SET Status = 1,
    VMID = @VMID,
    ReadCount = ReadCount + 1,
    ProcessDT = GETUTCDATE()
OUTPUT INSERTED.RowID INTO #retValue
WHERE Status = 0
This update was causing us a few issues with locks, so we reworked the process a little and changed the WHERE clause to a sub-query that returns the top 1 RowID (primary key) from the table. This seemed to help things run a little more smoothly, but then the database occasionally gets overloaded again.
UPDATE TOP (1) dbo.Test WITH (ROWLOCK)
SET Status = 1,
    VMID = @VMID,
    ReadCount = ReadCount + 1,
    ProcessDT = GETUTCDATE()
OUTPUT INSERTED.RowID INTO #retValue
-- WHERE Status = 0
WHERE RowID IN (SELECT TOP 1 RowID FROM dbo.Test WHERE Status = 0 ORDER BY RowID)
We discovered that having a significant number of Status 1 and 2 records in the table causes slowdowns. We figured it was from a table scan on the Status column. We added the following index, but it did not help solve the locks.
CREATE NONCLUSTERED INDEX IX_Test_Status_RowID
ON [dbo].[Test] ([Status])
INCLUDE ([RowID])
As the final step after the UPDATE, we use the returned RowID to select out the details:
SELECT 'Test' as FileName, *, @Nick as [Nickname]
FROM Test WITH (NOLOCK)
WHERE RowID IN (SELECT id from #retValue)
Types of locks
The majority of the locks are LCK_M_U and LCK_M_S, which I would expect with that UPDATE and SELECT query. Occasionally we also had 1 or 2 LCK_M_X locks, which made me think we may still be getting collisions on our "unique" record code.
Questions
Are these locks, and the number of locks, just normal SQL Server behaviour for this type of load?
Is the sub-query causing more issues than the TOP (1) in the UPDATE we started with? I am trying to confirm that I can remove the ORDER BY and drop that extra step of processing.
Would a different index help? I wondered whether the index updates were a possible cause of the locks initially, but now I am not sure.
Is there a better or more efficient way to get a unique RowID?
Is WITH (ROWLOCK) causing more locks than leaving it off would? The idea is that ROWLOCK only locks the 1 specific record, allowing another call to the proc to update a different record and select without locking the table or page.
Does anyone have any tools they recommend for stress testing and running 100 queries simultaneously, in order to test potential solutions?
Sorry for all the questions; I'm just trying to be as clear as possible about our process and the questions we have.
Thanks in advance for any insight as this is a really frustrating issue for us.
Hardware
We are running SQL Server 2008 R2 on a Dual Xeon CPU with 24 GB of RAM. So we should have plenty of horsepower for this process.
It looks like the best solution to the issue was to create a separate table with an identity column and use the @@IDENTITY from the insert to determine the next row to process. That has solved all my lock issues so far in my stress testing. Thanks to all who pointed me in the right direction!
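For what it's worth, a minimal sketch of that approach might look like the following (the claim table and its columns are hypothetical, and SCOPE_IDENTITY() is used rather than @@IDENTITY because it isn't affected by triggers):
CREATE TABLE dbo.TestClaim
(
    ClaimID  int IDENTITY(1,1) PRIMARY KEY,
    VMID     int NOT NULL,
    ClaimDT  datetime NOT NULL DEFAULT GETUTCDATE()
);
GO

-- Each VM inserts one claim row; the identity value it gets back is unique
-- and sequential, and maps to the next RowID to process (@VMID as in the proc above).
INSERT INTO dbo.TestClaim (VMID) VALUES (@VMID);
SELECT SCOPE_IDENTITY() AS NextRowID;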
Is it possible to get a breakdown of CPU utilization by database?
I'm ideally looking for a Task Manager type interface for SQL server, but instead of looking at the CPU utilization of each PID (like taskmgr) or each SPID (like spwho2k5), I want to view the total CPU utilization of each database. Assume a single SQL instance.
I realize that tools could be written to collect this data and report on it, but I'm wondering if there is any tool that lets me see a live view of which databases are contributing most to the sqlservr.exe CPU load.
Sort of. Check this query out:
SELECT total_worker_time/execution_count AS AvgCPU
, total_worker_time AS TotalCPU
, total_elapsed_time/execution_count AS AvgDuration
, total_elapsed_time AS TotalDuration
, (total_logical_reads+total_physical_reads)/execution_count AS AvgReads
, (total_logical_reads+total_physical_reads) AS TotalReads
, execution_count
, SUBSTRING(st.TEXT, (qs.statement_start_offset/2)+1
, ((CASE qs.statement_end_offset WHEN -1 THEN datalength(st.TEXT)
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2) + 1) AS txt
, query_plan
FROM sys.dm_exec_query_stats AS qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) AS st
cross apply sys.dm_exec_query_plan (qs.plan_handle) AS qp
ORDER BY 1 DESC
This will get you the queries in the plan cache, ordered by how much CPU they've used. You can run it periodically, like in a SQL Agent job, and insert the results into a table to make sure the data persists beyond restarts.
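For example, persisting the snapshots might look like this (the history table name is an assumption):
CREATE TABLE dbo.CpuQueryStatsHistory
(
    capture_time datetime NOT NULL DEFAULT GETDATE(),
    AvgCPU bigint, TotalCPU bigint, AvgDuration bigint, TotalDuration bigint,
    AvgReads bigint, TotalReads bigint, execution_count bigint,
    txt nvarchar(max), query_plan xml
);
GO

INSERT INTO dbo.CpuQueryStatsHistory
    (AvgCPU, TotalCPU, AvgDuration, TotalDuration, AvgReads, TotalReads, execution_count, txt, query_plan)
SELECT total_worker_time/execution_count, total_worker_time,
       total_elapsed_time/execution_count, total_elapsed_time,
       (total_logical_reads+total_physical_reads)/execution_count,
       (total_logical_reads+total_physical_reads), execution_count,
       SUBSTRING(st.TEXT, (qs.statement_start_offset/2)+1,
           ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.TEXT)
             ELSE qs.statement_end_offset END - qs.statement_start_offset)/2) + 1),
       qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp;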
When you read the results, you'll probably realize why we can't correlate that data directly back to an individual database. First, a single query can hide its true parent database by doing tricks like this:
USE msdb
DECLARE @StringToExecute VARCHAR(1000)
SET @StringToExecute = 'SELECT * FROM AdventureWorks.dbo.ErrorLog'
EXEC(@StringToExecute)
The query would be executed in msdb, but it would pull its results from AdventureWorks. Where should we assign the CPU consumption?
It gets worse when you:
Join between multiple databases
Run a transaction in multiple databases, and the locking effort spans multiple databases
Run SQL Agent jobs in MSDB that "work" in MSDB, but back up individual databases
It goes on and on. That's why it makes sense to performance tune at the query level instead of the database level.
In SQL Server 2008 R2, Microsoft introduced performance management and application management features that let us package a single database into a distributable, deployable DAC pack, and they're promising features to make it easier to manage the performance of individual databases and their applications. It still doesn't do what you're looking for, though.
For more of those, check out the T-SQL repository at Toad World's SQL Server wiki (formerly at SQLServerPedia).
Updated on 1/29 to include total numbers instead of just averages.
SQL Server (starting with 2000) installs performance counters (viewable from Performance Monitor, or Perfmon).
One of the counter categories (from a SQL Server 2005 install) is:
- SQLServer:Databases
It has one instance for each database. The available counters, however, do not provide a CPU % Utilization counter or anything similar, although there are some rate counters you could use to get a rough estimate of CPU. For example, if you have 2 databases and the measured rate is 20 transactions/sec on database A and 80 transactions/sec on database B, then you would know that A contributes roughly 20% of the total CPU and B contributes the other 80%.
There are some flaws here, as this assumes all the work being done is CPU-bound, which of course with databases it is not. But it would be a start, I believe.
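If you'd rather sample those counters from T-SQL than from Perfmon, the same per-database counters are exposed through sys.dm_os_performance_counters. Note that 'Transactions/sec' is reported as a cumulative value, so take two samples a few seconds apart and diff them to get an actual rate. A sketch:
SELECT RTRIM(instance_name) AS database_name,
       cntr_value AS transactions_cumulative
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Databases%'
  AND counter_name = 'Transactions/sec'
ORDER BY cntr_value DESC;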
Here's a query that will show the actual databases causing high load. It relies on the plan cache, which might get flushed frequently in low-memory scenarios (making the query less useful).
select dbs.name, cacheobjtype, total_cpu_time, total_execution_count
from
    (select top 10
        sum(qs.total_worker_time) as total_cpu_time,
        sum(qs.execution_count) as total_execution_count,
        count(*) as number_of_statements,
        qs.plan_handle
     from sys.dm_exec_query_stats qs
     group by qs.plan_handle
     order by sum(qs.total_worker_time) desc
    ) a
inner join
    (SELECT plan_handle, pvt.dbid, cacheobjtype
     FROM (
        SELECT plan_handle, epa.attribute, epa.value, cacheobjtype
        FROM sys.dm_exec_cached_plans
        OUTER APPLY sys.dm_exec_plan_attributes(plan_handle) AS epa
        /* WHERE cacheobjtype = 'Compiled Plan' AND objtype = 'adhoc' */
     ) AS ecpa
     PIVOT (MAX(ecpa.value) FOR ecpa.attribute IN ([dbid], [sql_handle])) AS pvt
    ) b on a.plan_handle = b.plan_handle
inner join sys.databases dbs on dbid = dbs.database_id
I think the answer to your question is no.
The issue is that one activity on a machine can cause load on multiple databases. If I have a process that is reading from a config DB, logging to a logging DB, and moving transactions in and out of various DBs based on type, how do I partition the CPU usage?
You could divide CPU utilization by the transaction load, but that is again a rough metric that may mislead you. How would you divide transaction log shipping from one DB to another, for instance? Is the CPU load in the reading or the writing?
You're better off looking at the transaction rate for a machine and the CPU load it causes. You could also profile stored procedures and see if any of them are taking an inordinate amount of time; however, this won't get you the answer you want.
With all of the above in mind:
Starting with SQL Server 2012, there is a database_id column in sys.dm_exec_sessions.
This makes it easy to calculate CPU per database for the currently connected sessions. Once a session disconnects, its numbers are gone.
select session_id, cpu_time, program_name, login_name, database_id
from sys.dm_exec_sessions
where session_id > 50;
select sum(cpu_time)/1000 as cpu_seconds, database_id
from sys.dm_exec_sessions
group by database_id
order by cpu_seconds desc;
Take a look at SQL Sentry. It does all you need and more.
Have you looked at SQL profiler?
Take the standard "T-SQL" or "Stored Procedure" template and tweak the fields to group by the database ID (I think you have to use the number; you don't get the database name, but it's easy to find out by running exec sp_databases to get the list).
Run this for a while and you'll get the total CPU counts / disk I/O / waits etc. This can give you the proportion of CPU used by each database.
If you monitor the PerfMon counter at the same time (log the data to a SQL database), and do the same for the SQL Profiler (log to database), you may be able to correlate the two together.
Even so, it should give you enough of a clue as to which DB is worth looking at in more detail. Then, do the same again with just that database ID and look for the most expensive SQL / Stored Procedures.
Please check this query:
SELECT
    DB_NAME(st.dbid) AS DatabaseName
    ,OBJECT_SCHEMA_NAME(st.objectid, st.dbid) AS SchemaName
    ,cp.objtype AS ObjectType
    ,OBJECT_NAME(st.objectid, st.dbid) AS Objects
    ,MAX(cp.usecounts) AS Total_Execution_count
    ,SUM(qs.total_worker_time) AS Total_CPU_Time
    ,SUM(qs.total_worker_time) / (MAX(cp.usecounts) * 1.0) AS Avg_CPU_Time
FROM sys.dm_exec_cached_plans cp
INNER JOIN sys.dm_exec_query_stats qs
    ON cp.plan_handle = qs.plan_handle
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE DB_NAME(st.dbid) IS NOT NULL
GROUP BY DB_NAME(st.dbid), OBJECT_SCHEMA_NAME(st.objectid, st.dbid), cp.objtype, OBJECT_NAME(st.objectid, st.dbid)
ORDER BY SUM(qs.total_worker_time) DESC