Query slow on SQL Server 2008 R2

I have a server with 20 clients. If a client isn't used for 1 day, the query result arrives after 2:30 minutes (over 1000 rows). After executing 5 or 6 queries, the result arrives within a few seconds.
I think it's a SQL Server scheduling problem. How can I resolve it?
Thanks
UPDATE:
This is the query:
Select * from [WWALMDB].[dbo].[v_AlarmConsolidated]
Where Critico = 1 AND ApprovatoQA = 0
AND InAttesaDiRiconoscimento Like '%param1%'
AND (Tipo Like '%param2%') AND Area Like '%param3%'
AND Nome Like '%%param4%%' AND Descrizione Like '%%param5%%'
AND (([Dataora Scatto] >= CONVERT(DATETIME,'param6',105))
AND ([Dataora Scatto] <= CONVERT(DATETIME,'param7',105))
OR( ([Dataora Rientro] >= CONVERT(DATETIME,'param6',105))
AND ([Dataora Rientro] <= CONVERT(DATETIME,'param7',105)) )
OR( ([Dataora PresoInCarico] >= CONVERT(DATETIME,'param6',105))
AND ([Dataora PresoInCarico] <= CONVERT(DATETIME,'param7',105)) ))
ORDER BY AlarmID DESC

What you describe is normal SQL Server caching behavior.
You can find more information about SQL Server caching mechanisms here.
It's impossible to tell you anything more without knowing the background (example SQL queries, the health of your SQL Server machine, and the actual database structure).
UPDATE:
Your query is so badly optimized that it's actually a miracle it only takes 2:30. The main offenders:
SELECT * (returns every column of the view, so no narrower index can cover the query)
LIKE '%%param%%' (a leading wildcard rules out index seeks and forces a full scan)
ORDER BY (sorts the entire result set on top of those scans)
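If the parameters for InAttesaDiRiconoscimento, Tipo and Area are really exact values rather than substrings, the query could be tightened along these lines (a sketch: the column list is illustrative, and @from/@to stand for the two dates passed as proper DATETIME parameters):
SELECT AlarmID, Tipo, Area, Nome, Descrizione,
       [Dataora Scatto], [Dataora Rientro], [Dataora PresoInCarico]
FROM [WWALMDB].[dbo].[v_AlarmConsolidated]
WHERE Critico = 1
  AND ApprovatoQA = 0
  AND InAttesaDiRiconoscimento = @param1    -- equality allows an index seek
  AND Tipo = @param2
  AND Area = @param3
  AND Nome LIKE '%' + @param4 + '%'         -- keep LIKE only where a substring match is truly needed
  AND Descrizione LIKE '%' + @param5 + '%'
  AND ( ([Dataora Scatto] >= @from AND [Dataora Scatto] <= @to)
     OR ([Dataora Rientro] >= @from AND [Dataora Rientro] <= @to)
     OR ([Dataora PresoInCarico] >= @from AND [Dataora PresoInCarico] <= @to) )
ORDER BY AlarmID DESC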

Related

executing clickhouse on server is 20x faster than executing it from remote

I tested this query using clickhouse-benchmark
select pageUrl,
       pageTitle,
       sum(visits) as visits,
       count(DISTINCT visitorId) as visitors,
       any(featuredImage) as featuredImage,
       any(pageType) as pageType,
       ssid
FROM PageViews FINAL
WHERE (createdAtDay >= toDate('2022-06-01')) AND (createdAtDay <= toDate('2022-10-01'))
GROUP BY pageUrl, pageTitle, ssid
ORDER BY visits DESC
When I execute the query from a terminal on the server hosting the ClickHouse instance, I get a pretty fast response, but when I execute the same query remotely (which is the usual scenario; I'm not hosting my API on the same server as ClickHouse), I get a slow response.
I tried to get logs using this query:
SELECT
event_time, query_duration_ms / 1000 AS secs,
formatReadableSize(memory_usage) AS memory,
Settings['max_threads'] AS threads,
query
FROM system.query_log
WHERE (user = 'default') AND (type = 'QueryFinish') and (query like '%select pageUrl ...%')
ORDER BY event_time DESC
Even in the database logs the execution time is very big: the last 5 entries are the remote runs, the first 5 are from the server, and the time difference is huge.
Is this normal? Is there any kind of config I must set up?
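One way to narrow this down (a sketch; address and query_duration_ms are standard system.query_log columns) is to compare the logged duration of the same query for local and remote runs. Bear in mind that query_duration_ms is measured until the query finishes, which includes streaming the result set to the client, so a slow network link can inflate even the logged "server-side" time:
-- Compare the same query's logged duration for local vs remote runs.
-- address shows where the client connected from (local runs appear as ::1 or 127.0.0.1).
SELECT
    event_time,
    query_duration_ms,
    formatReadableSize(memory_usage) AS memory,
    address
FROM system.query_log
WHERE (type = 'QueryFinish') AND (query LIKE '%PageViews%')
ORDER BY event_time DESC
LIMIT 10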

What accounts for different execution times between HeidiSQL and SSMS?

When I execute a particular query from Heidi against an MSSQL database, it takes approximately 10 times longer than executing the identical query in SSMS.
They are both being executed against the same server from the same workstation.
What can account for this difference?
Here is the exact query and relative execution times:
SELECT b.ID as BookingID, b.ReservationID, b.RoomID, b.EventName,
b.EventTypeID, b.StatusID, b.DateAdded, build.Value1 as BuildingID,
build.ValueDescription as Building
FROM EMS.dbo.tblBooking b
INNER JOIN EMS.dbo.tblRoom room
ON room.ID = b.RoomID
INNER JOIN ( SELECT deff.Value1, deff.ValueDescription
FROM tblDataExtractionFilter_Fields deff
INNER JOIN tblDataExtractionFilter def ON deff.FilterID = def.ID
WHERE def.Description = '[redacted]'
AND deff.FieldID = 28
AND deff.Show = 0) build
ON room.BuildingID = build.Value1
WHERE b.DateAdded > DATEADD(DAY,-7,GETDATE()) AND (StatusID = 1 OR StatusID = 16);
Heidi: "Duration for 1 query: 1.015 sec."
SSMS: "00:00:01"
I am obviously green, but my understanding was that the execution plan was determined server side and not application side. This leads me to suspect that there is some sort of overhead in Heidi with respect to this query (simpler queries execute MUCH faster so the overhead would not be universal).
This is just a point of curiosity for me. I am still learning. Can anyone offer a clue about what I can check/google/research to try to understand this?
Thanks!
EDIT: The times I have reported do not agree with my statement that the SSMS time is 1/10 that of Heidi. They are both approximately 1 second. My subjective wait time (wall clock time) between execution and display is MUCH faster (and much less than 1 second) in SSMS. Can this be due to SSMS caching the results?
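One way to separate server work from client overhead (a sketch; SET STATISTICS TIME is standard T-SQL) is to run the query with timing statistics in both tools and compare the server-reported figures. Differing connection SET options (ARITHABORT is the classic culprit) can also give each tool its own cached plan, which is worth ruling out:
-- Run in both HeidiSQL and SSMS. If the server-side CPU/elapsed times match,
-- the difference is client-side fetching/rendering, not the execution plan.
SET STATISTICS TIME ON;
-- ... the query above ...
SET STATISTICS TIME OFF;

-- Check whether the two tools connect with different SET options
-- (different options mean separately cached plans for the same query text).
SELECT arithabort, ansi_nulls, quoted_identifier
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;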

Automatically stop SQL Server Agent when no jobs are running [duplicate]

I have around 40 different SQL Server jobs in one instance. They all have different schedules: some run once a day, some every two minutes, some every five minutes. If I need to stop SQL Server Agent, how can I find the best time when no jobs are running, so I won't interrupt any of my jobs?
how can I find the best time when no jobs are running so I won't interrupt any of my jobs?
You basically want to find a good window to perform some maintenance. Max Vernon has blogged about it here with a handy script:
/*
Shows gaps between agent jobs
-- http://www.sqlserver.science/tools/gaps-between-sql-server-agent-jobs/
-- requires SQL Server 2012+ since it uses the LAG window function.
Note: On SQL Server 2005, SQL Server 2008, and SQL Server 2008 R2, you could replace the LastEndDateTime column definition with:
LastEndDateTime = (SELECT TOP(1) s1a.EndDateTime FROM s1 s1a WHERE s1a.rn = s1.rn - 1)
*/
DECLARE @EarliestStartDate DATETIME;
DECLARE @LatestStopDate DATETIME;
SET @EarliestStartDate = DATEADD(DAY, -1, GETDATE());
SET @LatestStopDate = GETDATE();
;WITH s AS
(
SELECT StartDateTime = msdb.dbo.agent_datetime(sjh.run_date, sjh.run_time)
, MaxDuration = MAX(sjh.run_duration)
FROM msdb.dbo.sysjobs sj
INNER JOIN msdb.dbo.sysjobhistory sjh ON sj.job_id = sjh.job_id
WHERE sjh.step_id = 0
AND msdb.dbo.agent_datetime(sjh.run_date, sjh.run_time) >= @EarliestStartDate
AND msdb.dbo.agent_datetime(sjh.run_date, sjh.run_time) <= @LatestStopDate
GROUP BY msdb.dbo.agent_datetime(sjh.run_date, sjh.run_time)
UNION ALL
SELECT StartDate = DATEADD(SECOND, -1, @EarliestStartDate)
, MaxDuration = 1
UNION ALL
SELECT StartDate = @LatestStopDate
, MaxDuration = 1
)
, s1 AS
(
SELECT s.StartDateTime
, EndDateTime = DATEADD(SECOND, s.MaxDuration - ((s.MaxDuration / 100) * 100)
+ (((s.MaxDuration - ((s.MaxDuration / 10000) * 10000))
- (s.MaxDuration - ((s.MaxDuration / 100) * 100))) / 100) * 60
+ (((s.MaxDuration - ((s.MaxDuration / 1000000) * 1000000))
- (s.MaxDuration - ((s.MaxDuration / 10000) * 10000))) / 10000) * 3600, s.StartDateTime)
FROM s
)
, s2 AS
(
SELECT s1.StartDateTime
, s1.EndDateTime
, LastEndDateTime = LAG(s1.EndDateTime) OVER (ORDER BY s1.StartDateTime)
FROM s1
)
SELECT GapStart = CONVERT(DATETIME2(0), s2.LastEndDateTime)
, GapEnd = CONVERT(DATETIME2(0), s2.StartDateTime)
, GapLength = CONVERT(TIME(0), DATEADD(SECOND, DATEDIFF(SECOND, s2.LastEndDateTime, s2.StartDateTime), 0))
FROM s2
WHERE s2.StartDateTime > s2.LastEndDateTime
ORDER BY s2.StartDateTime;
The question title scared me a bit - I thought you wanted to programmatically shut the SQL Server agent down anytime there were no jobs running. My answer to that question would be "Why?" There is no need to.
But if you are just looking to do a planned restart or shutdown, and you don't have a third-party tool like SentryOne's SQL Sentry Event Manager for visualization, I would just let the SQL Server Agent Job History and Job Activity Monitor help here. The Job Activity Monitor can show you which jobs are running right now in the status column. You can also see the last execute and next execute dates and times.
In Object Explorer in SSMS, connect to your instance, then expand SQL Server Agent; under Jobs you'll see "Job Activity Monitor" - this view should show you what you need.
Also - don't worry about shutting down before a job executes. If you do, the job will simply miss its schedule; you can either let it run when it is next due (depending on the job and its purpose) or manually right-click and execute it.
For more on the activity monitor for jobs, see Monitor Job Activity in the product documentation.
I recommend creating a script that will disable your jobs. Disabled jobs still exist but will not be launched by their schedules. Run this script (based on the procedure sp_update_job in the msdb database) to disable the jobs, wait for any currently running jobs to finish, then stop the SQL Server Agent. A similar script to re-enable the disabled jobs would be useful. You might need to plan around jobs that are, and should remain, disabled.
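A minimal sketch of such a script, assuming every currently enabled job should be disabled (the captured list makes it possible to re-enable exactly the same set afterwards; use a permanent table instead of #jobs_to_reenable if the re-enabling happens in another session):
-- Capture the enabled jobs, then disable each one via sp_update_job.
SELECT job_id INTO #jobs_to_reenable
FROM msdb.dbo.sysjobs
WHERE enabled = 1;

DECLARE @job_id UNIQUEIDENTIFIER;
DECLARE job_cursor CURSOR FOR SELECT job_id FROM #jobs_to_reenable;
OPEN job_cursor;
FETCH NEXT FROM job_cursor INTO @job_id;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC msdb.dbo.sp_update_job @job_id = @job_id, @enabled = 0;
    FETCH NEXT FROM job_cursor INTO @job_id;
END;
CLOSE job_cursor;
DEALLOCATE job_cursor;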
A complete "SQL Agent shutdown" process could be fully scripted, but I question the wisdom of doing so. A bit of research implies that there is no 100% reliable way of programmatically telling whether a given job is running, and while there is an undocumented (where "undocumented" means "you really shouldn't be using this") system procedure for stopping and starting services, doing so from within SQL Server itself seems like a pretty bad idea.
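That said, a best-effort check is possible from the msdb tables (a sketch; as noted above, not guaranteed to be 100% reliable): a job with a start time but no stop time in the current Agent session is presumably still running.
-- Jobs that appear to be running right now: started in the current Agent
-- session with no stop time recorded yet.
SELECT sj.name, sja.start_execution_date
FROM msdb.dbo.sysjobactivity AS sja
INNER JOIN msdb.dbo.sysjobs AS sj
    ON sj.job_id = sja.job_id
WHERE sja.start_execution_date IS NOT NULL
  AND sja.stop_execution_date IS NULL
  AND sja.session_id = (SELECT TOP (1) session_id
                        FROM msdb.dbo.syssessions
                        ORDER BY agent_start_date DESC);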
You can query the system tables as shown by Dattatrey Sindol in the MSSQLTips.com article Querying SQL Server Agent Job Information:
SELECT
[sJOB].[job_id] AS [JobID]
, [sJOB].[name] AS [JobName]
, [sDBP].[name] AS [JobOwner]
, [sCAT].[name] AS [JobCategory]
, [sJOB].[description] AS [JobDescription]
, CASE [sJOB].[enabled]
WHEN 1 THEN 'Yes'
WHEN 0 THEN 'No'
END AS [IsEnabled]
, [sJOB].[date_created] AS [JobCreatedOn]
, [sJOB].[date_modified] AS [JobLastModifiedOn]
, [sSVR].[name] AS [OriginatingServerName]
, [sJSTP].[step_id] AS [JobStartStepNo]
, [sJSTP].[step_name] AS [JobStartStepName]
, CASE
WHEN [sSCH].[schedule_uid] IS NULL THEN 'No'
ELSE 'Yes'
END AS [IsScheduled]
, [sSCH].[schedule_uid] AS [JobScheduleID]
, [sSCH].[name] AS [JobScheduleName]
, CASE [sJOB].[delete_level]
WHEN 0 THEN 'Never'
WHEN 1 THEN 'On Success'
WHEN 2 THEN 'On Failure'
WHEN 3 THEN 'On Completion'
END AS [JobDeletionCriterion]
FROM
[msdb].[dbo].[sysjobs] AS [sJOB]
LEFT JOIN [msdb].[sys].[servers] AS [sSVR]
ON [sJOB].[originating_server_id] = [sSVR].[server_id]
LEFT JOIN [msdb].[dbo].[syscategories] AS [sCAT]
ON [sJOB].[category_id] = [sCAT].[category_id]
LEFT JOIN [msdb].[dbo].[sysjobsteps] AS [sJSTP]
ON [sJOB].[job_id] = [sJSTP].[job_id]
AND [sJOB].[start_step_id] = [sJSTP].[step_id]
LEFT JOIN [msdb].[sys].[database_principals] AS [sDBP]
ON [sJOB].[owner_sid] = [sDBP].[sid]
LEFT JOIN [msdb].[dbo].[sysjobschedules] AS [sJOBSCH]
ON [sJOB].[job_id] = [sJOBSCH].[job_id]
LEFT JOIN [msdb].[dbo].[sysschedules] AS [sSCH]
ON [sJOBSCH].[schedule_id] = [sSCH].[schedule_id]
ORDER BY [JobName]

why does "SELECT 1 from <table>" cause a LCK_M_IX on another process doing a DELETE

I have a table listing patient clinic visits, with a SQL Server 2005 backend and an Access 2010 frontend.
Each morning this table needs to be refreshed from a data dump from the hospital mainframe.
This data dump includes information on the day before (who attended, didn't attend, or cancelled) and forward appointments for the next 6 weeks.
I therefore run a "DELETE * FROM Patient_clinic_visits WHERE Visit_Date > (a day) AND Visit_Date < (a day + 1)" to clean out the table for each day in turn, before uploading the new data after some processing.
On most days there will be only about 100-150 records for each day. There is one foreign key to Pat_ID in Master_Patient_Table, which has NO_ACTION on it.
At the moment this DELETE query is timing out in the Access frontend after processing a couple of days of data (i.e. it works okay for Monday, Tuesday, Wednesday...).
I run sp_whoisactive and get:
00 01:53:01.926 52 [[query SELECT 1 FROM "dbo"."Patient_Clinic_Visits" ]] RAHCC_User (265ms)ASYNC_NETWORK_IO 0 0 0 NULL 51 0 0 2 suspended 0 NULL SAH0020663 RAHCC_DB Microsoft MDB RAHCC 2013-04-02 09:08:33.027 0 2013-04-02 11:01:35.033
00 00:00:27.610 53 [[query --
DELETE from Patient_Clinic_Visits WHERE clinic_date >= '26-Mar-2013' AND clinic_date < '27-Mar-2013' AND Clinic_location = 'MONC' ]] HAD\jhogan05 (27596ms)LCK_M_IX 16 0 0 52 1,074 0 0 130 suspended 2 NULL SAH0048645 RAHCC_DB Microsoft SQL Server Management Studio Express - Query 2013-04-02 11:01:07.343 0 2013-04-02 11:01:35.033
This indicates my client frontend is waiting on a "SELECT 1 FROM Patient_Clinic_Visits", and this is blocking the DELETE. Apparently this is due to ASYNC_NETWORK_IO on the client (which can happen with Access frontends requesting a table and then not processing the data).
a) However, a "SELECT 1 FROM Patient_Clinic_Visits" should really only return TRUE or FALSE, shouldn't it? That is unlikely to fill anything up, and it's unclear why it would cause blocking.
b) I can't find "SELECT 1..." anywhere in my Access frontend. Is this perhaps part of a sequence of sub-selects issued by SQL Server in response to a more complex select? If so, how do I find the true select causing this situation in that process history?
cheers,
JonHD
a) The query "SELECT 1 FROM Patient_Clinic_Visits" will return either an empty result set (if the Patient_Clinic_Visits table is empty) or a result set with as many rows as the table, each row containing a 1. To produce it, SQL Server has to read the whole Patient_Clinic_Visits table, which (assuming default locking behavior) causes shared (read) locks to be issued against the rows in that table.
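A quick way to see this (a sketch with a hypothetical temp table): the statement returns one row per table row, not a boolean. If a true/false check is what's wanted, EXISTS stops at the first row instead.
CREATE TABLE #demo (id INT);
INSERT INTO #demo VALUES (10);
INSERT INTO #demo VALUES (20);
INSERT INTO #demo VALUES (30);

SELECT 1 FROM #demo;    -- returns three rows, each containing 1

SELECT CASE WHEN EXISTS (SELECT 1 FROM #demo)
            THEN 'has rows' ELSE 'empty' END;    -- touches at most one row

DROP TABLE #demo;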
b) (NOTE: the OP addressed this point in the comments) I might be missing something, but I don't see where you come up with the "SELECT 1 FROM Patient_Clinic_Visits" query based on sp_whoisactive. The best way to understand the SQL being sent from your application to the database server might be to use SQL Profiler with an appropriate filter (perhaps filtering only for connections from the host running your application).

Performance issues on ASP.NET MVC 2 with SQL Server

EDIT: I think the problem is the subquery in the LINQ-generated query; it gets all the records, but I don't know how I could fix it.
I have made a simple ASP.NET MVC 2 application that runs SELECT queries against a view, and I get really poor performance. While running a simple benchmark with JMeter (10 concurrent connections) with caching disabled (I don't want everything to rely on the non-customizable/extreme OutputCache),
I see that SQL Server gets overloaded, consuming a lot of CPU (up to 100%) and all of its reserved memory (512 MB).
Here is the action code that causes the problems (manual transactions, because otherwise it deadlocks with the other program that inserts new data into the database):
public ActionResult Index(int page = 0)
{
    IronViperEntities db = new IronViperEntities();
    db.Connection.Open();
    DbTransaction transaction = db.Connection.BeginTransaction(IsolationLevel.ReadUncommitted);
    var messages = (from globalView in db.GlobalViews
                    orderby globalView.MessagePostDate descending
                    select globalView).Skip(page * perPage).Take(perPage);
    transaction.Commit();
    db.Connection.Close();
    ViewData["page"] = page;
    ViewData["messages"] = messages;
    return View();
}
Here is the query executed on the database:
SELECT TOP (100)
[Extent1].[MessageId] AS [MessageId],
[Extent1].[MessageUuid] AS [MessageUuid],
[Extent1].[MessageData] AS [MessageData],
[Extent1].[MessagePostDate] AS [MessagePostDate],
[Extent1].[ChannelName] AS [ChannelName],
[Extent1].[UserName] AS [UserName],
[Extent1].[UserUuid] AS [UserUuid],
[Extent1].[ChannelUuid] AS [ChannelUuid]
FROM ( SELECT [Extent1].[MessageId] AS [MessageId], [Extent1].[MessageUuid] AS [MessageUuid], [Extent1].[MessageData] AS [MessageData], [Extent1].[MessagePostDate] AS [MessagePostDate], [Extent1].[ChannelName] AS [ChannelName], [Extent1].[UserName] AS [UserName], [Extent1].[UserUuid] AS [UserUuid], [Extent1].[ChannelUuid] AS [ChannelUuid], row_number() OVER (ORDER BY [Extent1].[MessagePostDate] DESC) AS [row_number]
FROM (SELECT
[GlobalView].[MessageId] AS [MessageId],
[GlobalView].[MessageUuid] AS [MessageUuid],
[GlobalView].[MessageData] AS [MessageData],
[GlobalView].[MessagePostDate] AS [MessagePostDate],
[GlobalView].[ChannelName] AS [ChannelName],
[GlobalView].[UserName] AS [UserName],
[GlobalView].[UserUuid] AS [UserUuid],
[GlobalView].[ChannelUuid] AS [ChannelUuid]
FROM [dbo].[GlobalView] AS [GlobalView]) AS [Extent1]
) AS [Extent1]
WHERE [Extent1].[row_number] > 0
ORDER BY [Extent1].[MessagePostDate] DESC
View code:
SELECT dbo.Messages.Id AS MessageId, dbo.Messages.Uuid AS MessageUuid, dbo.Messages.Data AS MessageData, dbo.Messages.PostDate AS MessagePostDate,
       dbo.Channels.Name AS ChannelName, dbo.Users.Name AS UserName, dbo.Users.Uuid AS UserUuid, dbo.Channels.Uuid AS ChannelUuid
FROM dbo.Messages
INNER JOIN dbo.Users ON dbo.Messages.UserId = dbo.Users.Id
INNER JOIN dbo.Channels ON dbo.Messages.ChannelId = dbo.Channels.Id
I don't think the server hardware is the problem; I can run an equivalent Rails/Grails application without any performance issues (dual core, 3 GB of RAM).
A SELECT COUNT(*) on GlobalView returns ~270,000 rows; indexes are rebuilt daily, and the execution plan shows it uses the clustered indexes.
I get an average HTTP response time of 8000 ms; SQL Server Management Studio shows an average CPU time of 866 ms for this query and an average logical IO of 7,592.03.
The database file size is ~180 MB.
I am using Windows Server 2008 R2 Enterprise Edition, ASP.NET MVC 2 with IIS 7.5 and SQL Server 2008 R2 Express Edition with Advanced Services. They are the only things running on this server.
What can I do?
Thank you
I guess you got the query from SQL Server Profiler. Save the result, and pass it into the Database Engine Tuning Advisor. That might help you create additional indexes and statistics.
Just out of curiosity: wouldn't appending a .ToList() to the end of the var messages = ... line help?
I've found the problem.
I replaced "orderby globalView.MessagePostDate descending" with "orderby globalView.MessageId descending", because there isn't any index on MessagePostDate, and it is much better now!
Thank you
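Switching the sort to MessageId works because that column presumably follows the clustered primary key. The alternative fix is to index the date column itself (a sketch, assuming dbo.Messages.PostDate is the column behind the view's MessagePostDate, as the view definition above suggests):
-- Lets the row_number()/paging query read rows already ordered by date
-- instead of sorting ~270,000 rows on every request.
CREATE NONCLUSTERED INDEX IX_Messages_PostDate
    ON dbo.Messages (PostDate DESC);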
