In MySQL, looking at the value of "Seconds_Behind_Master" tells you how far a slave server is lagging behind its master. Is there something similar in MSSQL, so that you can tell how far a slave server is lagging behind its master?
There is some controversy on this subject, but I like to use regularly posted tracer tokens. That is, you call the sp_posttracertoken procedure on the publisher and it will send a, well, token all the way through to the subscriber. You can see the history of all tokens in the distribution database. I wrote the following view to make the data a little easier to grok:
-- Latency of each tracer token at each hop; run this in the distribution database.
create view [dbo].[tokens] as
select
    ps.name as [publisher],
    p.publisher_db,
    p.publication,
    ss.name as [subscriber],
    da.subscriber_db,
    t.publisher_commit,
    t.distributor_commit,
    h.subscriber_commit,
    datediff(second, t.publisher_commit, t.distributor_commit) as [pub to dist (s)],
    datediff(second, t.distributor_commit, h.subscriber_commit) as [dist to sub (s)],
    datediff(second, t.publisher_commit, h.subscriber_commit) as [total latency (s)]
from MStracer_tokens t
inner join MStracer_history h
    on t.tracer_id = h.parent_tracer_id
inner join MSpublications p
    on p.publication_id = t.publication_id
inner join sys.servers ps
    on p.publisher_id = ps.server_id
inner join MSdistribution_agents da
    on h.agent_id = da.id
inner join sys.servers ss
    on da.subscriber_id = ss.server_id;
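To generate data for the view, you post a token at the publisher and then watch it flow through. A minimal sketch, assuming a publication named MyPublication:

-- At the publisher, in the publication database:
exec sys.sp_posttracertoken @publication = N'MyPublication';

-- At the distributor, in the distribution database, a little later:
select * from dbo.tokens order by publisher_commit desc;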
Another approach is to use what's commonly called a canary table. The idea is that you have a table specifically to monitor replication that typically only has one row with a datetime field. You update the column at the publisher and then you monitor how far behind the subscriber is by seeing what the value of that column is at the subscriber.
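A minimal sketch of that idea, with assumed table and column names (and assuming the two servers' clocks are reasonably in sync):

-- On the publisher, in a replicated database:
create table dbo.ReplicationCanary
(
    ID int not null primary key,
    LastUpdate datetime2 not null
);
insert dbo.ReplicationCanary (ID, LastUpdate) values (1, sysdatetime());

-- Refresh the row on a schedule, e.g. from an Agent job every minute:
update dbo.ReplicationCanary set LastUpdate = sysdatetime() where ID = 1;

-- On the subscriber, the lag is roughly the age of the replicated value:
select datediff(second, LastUpdate, sysdatetime()) as [lag (s)]
from dbo.ReplicationCanary;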
Lastly, there are some perfmon counters that you can look at. In my experience, they're not that great; the number of outstanding commands is an accurate number, but the measurement of latency as a duration is typically very inaccurate.
I'm running a SQL Server 2019 Always On Availability Group with asynchronous replication.
I use a free tool called IDERA SQL Check and I have spotted the SPID 69 which program name is Replication Distribution Agent. It's always there, staring at me like a bored cat.
This SPID 69 points to a specific database which is mirrored. I investigated it with this query:
select
    s.session_id
    ,st.text
    ,login_name
    ,login_time
    ,host_name
    ,program_name
    ,status
    ,cpu_time
    ,memory_usage
    ,total_scheduled_time
    ,total_elapsed_time
    ,last_request_start_time
    ,reads
    ,writes
    ,logical_reads
from sys.dm_exec_sessions s
inner join sys.dm_exec_connections c
    on s.session_id = c.session_id
outer apply sys.dm_exec_sql_text(c.most_recent_sql_handle) st
where s.is_user_process = 1
and s.open_transaction_count > 0;
Which gave me this response:
session_id = 69
text = begin tran
login_time = 2020-09-08 18:40:57.153
program_name = Replication Distribution Agent
status = sleeping
cpu_time = 1362772
memory_usage = 4
total_scheduled_time = 1689634
total_elapsed_time = 22354857
last_request_start_time = 2020-09-28 16:28:39.433
reads = 18607577
writes = 5166597
logical_reads = 112256365
Now, what I find on the internet says that when you see the Replication Distribution Agent everything is fine: that agent should be running and there should be no problem. But why:
The text says begin tran and nothing more?
IDERA SQL Check is labelling it as connection idling transaction?
The status is sleeping?
I'm concerned that the CPU time, reads, and writes are basically telling me that this process is frying the drive with never-ending I/O. Am I right?
This is perfectly normal.
The Replication Distribution Agent effectively runs continuously, scanning the transactions on your source so that it can send them to the replicas. Because it needs to capture and forward these as they happen, it has to stay running.
It is not frying your drive - unless your transaction rate is so high that it actually is. The reads figure grows incrementally - these are cumulative values since the session started, not a snapshot of current activity. At 8 KB per page read, 18,607,577 reads works out to roughly 141 GB over 20 days - not particularly heavy use.
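Since the counters are cumulative, you can turn them into a rate by sampling twice and taking the difference. A minimal sketch against the agent's session from the question:

-- First sample of the cumulative counters for session 69
declare @reads1 bigint, @writes1 bigint;
select @reads1 = reads, @writes1 = writes
from sys.dm_exec_sessions
where session_id = 69;

-- Wait a minute, then sample again and diff
waitfor delay '00:01:00';
select reads - @reads1 as reads_last_minute,
       writes - @writes1 as writes_last_minute
from sys.dm_exec_sessions
where session_id = 69;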
AWS has a required SSL certificate update for its RDS instances going out on the 5th. Even though I do not actually use the certificate, I went ahead and ran the update so it was done and I wouldn't have any unexpected downtime. Only the SSL cert should have been updated, as I understand it.
Instead, my CPU usage went from less than 10% while idle to over 80%. I've now isolated the cause to a query we run every few seconds to retrieve a list of recent transactions, and with some tweaking the CPU usage has returned to normal levels.
But this query has been in place for a few years without issues and it's only after this SSL update that it's caused us any grief. My concern is there is some deeper issue behind the scenes and that changing the query is merely treating a symptom. Before revising the query, I ran all pending updates and rebooted the database with no changes. There was also one other person on the AWS Forums with the same issue but neither of us were able to get any useful responses. Thankfully the rest of the system seems to be behaving itself but I want to know what's going on.
In case it can help identify why a query would suddenly use far more resources here is a (simplified) version of the query prior to my tweak.
SELECT DISTINCT TOP (@NumOfTrx) [trx].*, [c].*
FROM [dbo].[TRX_Transactions] trx
INNER JOIN #ProdSelection s
    ON ((trx.Code = s.ID and s.ID != 0)
        or (s.ID = 0 and (trx.TypeID = 1 or StatusID = 4))
        or (BatchId > 0 and s.ID in (select b.Code from [dbo].[TRX_Batch] b where b.BatchId = trx.BatchId)))
INNER JOIN CLI_Details c ON trx.UserName = c.UserName
WHERE trx.TransactionDate > DATEADD(DAY, -1, GETDATE())
    and (trx.Amount >= @Size or trx.TypeID = 1 or StatusID = 4)
    and (@Company = 0 or c.Company = @Company)
    and (@Agent = '' or [Agent] = @Agent)
ORDER BY [trx].[TransactionDate] DESC
Removing the #ProdSelection join - a filter on a list of IDs that we can operate without for the time being - was what resolved the issue.
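One way to check whether a plan change is behind a sudden CPU jump like this is to look for queries that have accumulated more than one plan in Query Store. A sketch, assuming Query Store is enabled on the database:

select q.query_id, p.plan_id, p.last_execution_time,
       convert(numeric(10, 2), rs.avg_cpu_time / 1000) as avg_cpu_time_ms
from sys.query_store_query q
inner join sys.query_store_plan p on p.query_id = q.query_id
inner join sys.query_store_runtime_stats rs on rs.plan_id = p.plan_id
where q.query_id in (select query_id
                     from sys.query_store_plan
                     group by query_id
                     having count(distinct plan_id) > 1)
order by q.query_id, p.plan_id;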
When I execute a particular query from Heidi against an MSSQL database, it takes approximately 10 times longer than executing the identical query in SSMS.
They are both being executed against the same server from the same workstation.
What can account for this difference?
Here is the exact query and relative execution times:
SELECT b.ID as BookingID, b.ReservationID, b.RoomID, b.EventName,
b.EventTypeID, b.StatusID, b.DateAdded, build.Value1 as BuildingID,
build.ValueDescription as Building
FROM EMS.dbo.tblBooking b
INNER JOIN EMS.dbo.tblRoom room
ON room.ID = b.RoomID
INNER JOIN ( SELECT deff.Value1, deff.ValueDescription
FROM tblDataExtractionFilter_Fields deff
INNER JOIN tblDataExtractionFilter def ON deff.FilterID = def.ID
WHERE def.Description = '[redacted]'
AND deff.FieldID = 28
AND deff.Show = 0) build
ON room.BuildingID = build.Value1
WHERE b.DateAdded > DATEADD(DAY,-7,GETDATE()) AND (StatusID = 1 OR StatusID = 16);
Heidi: "Duration for 1 query: 1.015 sec."
SSMS: "00:00:01"
I am obviously green, but my understanding was that the execution plan was determined server side and not application side. This leads me to suspect that there is some sort of overhead in Heidi with respect to this query (simpler queries execute MUCH faster so the overhead would not be universal).
This is just a point of curiosity for me. I am still learning. Can anyone offer a clue about what I can check/google/research to try to understand this?
Thanks!
EDIT: The times I have reported do not agree with my statement that the SSMS time is 1/10 that of Heidi. They are both approximately 1 second. My subjective wait time (wall clock time) between execution and display is MUCH faster (and much less than 1 second) in SSMS. Can this be due to SSMS caching the results?
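One way to separate server execution time from client rendering time is SET STATISTICS TIME, which reports server-side CPU and elapsed time in the Messages output regardless of how quickly the client paints the grid. A sketch:

set statistics time on;
-- run the booking query from above here
set statistics time off;
-- then compare the "SQL Server Execution Times" output between Heidi and SSMS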
I'm trying to identify which query is causing my workload to stall. According to the metrics (the Metrics (preview) tab in the Azure Portal), I see 100% DTU utilization, caused by the CPU.
But when I go to QDS I see a different picture:
And the queries reported by QDS for this period don't take anywhere near that long, even though the DTU cap is being hit.
I know that the 1 minute reported by the metrics view is the correct figure, since the operation takes that long from the user's side and I can see in the Web App telemetry that the app is not responding during this time period.
So how can I identify the query that hits the DTU limit?
P.S. The db is an S0.
UPDATE
@Alberto Morillo, I've executed the query; there are a lot of cheap queries being run (~2k), and the largest values for total_worker_time are around 54k (54 ms). On the other hand, I see the wait stats are dominated by SOS_WORK_DISPATCHER.
Does this mean that the queries are blocking because the workers can't be spawned by the scheduler that fast?
Please run the following query:
SELECT TOP 10
    q.query_id,
    p.plan_id,
    rs.count_executions,
    qsqt.query_sql_text,
    -- avg_cpu_time and avg_duration are reported in microseconds; /1000 gives milliseconds
    CONVERT(NUMERIC(10,2), rs.avg_cpu_time / 1000) as 'avg_cpu_time_ms',
    CONVERT(NUMERIC(10,2), rs.avg_duration / 1000) as 'avg_duration_ms',
    CONVERT(NUMERIC(10,2), rs.avg_logical_io_reads) as 'avg_logical_io_reads',
    CONVERT(NUMERIC(10,2), rs.avg_logical_io_writes) as 'avg_logical_io_writes',
    CONVERT(NUMERIC(10,2), rs.avg_physical_io_reads) as 'avg_physical_io_reads',
    CONVERT(NUMERIC(10,0), rs.avg_rowcount) as 'avg_rowcount'
FROM sys.query_store_query q
INNER JOIN sys.query_store_plan p ON q.query_id = p.query_id
INNER JOIN sys.query_store_runtime_stats rs ON p.plan_id = rs.plan_id
INNER JOIN sys.query_store_query_text qsqt ON q.query_text_id = qsqt.query_text_id
WHERE rs.last_execution_time > dateadd(hour, -1, getutcdate())
ORDER BY rs.avg_duration DESC;
Change the ORDER BY clause to avg_cpu_time and avg_rowcount also.
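Given that your wait stats are dominated by SOS_WORK_DISPATCHER, it may also help to attribute waits to individual queries through sys.query_store_wait_stats (available on Azure SQL Database); note that it reports wait categories rather than individual wait types. A sketch:

SELECT TOP 10
    ws.wait_category_desc,
    q.query_id,
    SUM(ws.total_query_wait_time_ms) as total_wait_ms
FROM sys.query_store_wait_stats ws
INNER JOIN sys.query_store_plan p ON ws.plan_id = p.plan_id
INNER JOIN sys.query_store_query q ON p.query_id = q.query_id
GROUP BY ws.wait_category_desc, q.query_id
ORDER BY total_wait_ms DESC;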
SELECT DISTINCT
    OBJECT_SCHEMA_NAME(sc.object_id) as [schema],
    OBJECT_NAME(sc.object_id) as [name],
    sc.*
-- FROM syscomments sc
FROM sys.sql_modules sc
WHERE sc.[definition] LIKE '%raiserror%'
  and OBJECTPROPERTY(sc.object_id, 'IsMSShipped') = 0
  and OBJECT_NAME(sc.object_id) like '%diagram%'
Why is this query returning these SPs? Aren't they from Microsoft?
sp_helpdiagramdefinition
sp_creatediagram
sp_renamediagram
sp_alterdiagram
sp_dropdiagram
IsMSShipped is set to 1 for any object that was created during SQL Server's installation. The Diagram objects are optional and are only added to a database after the initial installation.
In other words, although they are from MS, they are not Shipped from MS (at least not as MS is defining "Shipped").
Yes I know, it's dumb, everyone gets tripped up by this at least once. They should have called it something like IsMSInstalled instead. Just goes to show the importance of picking good names.
The SOP way to handle this is to also filter on the schema ("sys" is always schema_id 4).
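A sketch of the same search with that extra schema filter, joining to sys.objects (whose is_ms_shipped column is what the OBJECTPROPERTY call above reads):

SELECT DISTINCT
    OBJECT_SCHEMA_NAME(sc.object_id) as [schema],
    OBJECT_NAME(sc.object_id) as [name]
FROM sys.sql_modules sc
INNER JOIN sys.objects o ON o.object_id = sc.object_id
WHERE sc.[definition] LIKE '%raiserror%'
  and o.is_ms_shipped = 0
  and o.schema_id <> 4; -- exclude anything in the sys schema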