I tested this query using clickhouse-benchmark
SELECT
    pageUrl,
    pageTitle,
    ssid,
    sum(visits) AS visits,
    count(DISTINCT visitorId) AS visitors,
    any(featuredImage) AS featuredImage,
    any(pageType) AS pageType
FROM PageViews FINAL
WHERE createdAtDay >= toDate('2022-06-01') AND createdAtDay <= toDate('2022-10-01')
GROUP BY pageUrl, pageTitle, ssid
ORDER BY visits DESC
When I execute the query from a terminal on the server hosting the ClickHouse instance, I get a pretty fast response:
but when I execute the same query remotely (which is the usual scenario; I'm not hosting my API on the same server as ClickHouse),
I get a slow response:
I tried to get the logs using this query:
SELECT
    event_time,
    query_duration_ms / 1000 AS secs,
    formatReadableSize(memory_usage) AS memory,
    Settings['max_threads'] AS threads,
    query
FROM system.query_log
WHERE (user = 'default') AND (type = 'QueryFinish') AND (query LIKE '%select pageUrl ...%')
ORDER BY event_time DESC
and it seems that even in the database logs the execution time is very high:
The last 5 entries are the remote queries and the first 5 are from the server itself; the time difference is huge.
Is this normal? Is there any configuration I need to set up?
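For what it's worth, the same query_log query can be extended with the result-size columns; with a remote client, the reported duration can include the time spent sending the result over the network, so it's worth seeing how much data each run ships (a sketch):
SELECT
    event_time,
    query_duration_ms / 1000 AS secs,
    result_rows,
    formatReadableSize(result_bytes) AS result_size
FROM system.query_log
WHERE (user = 'default') AND (type = 'QueryFinish')
ORDER BY event_time DESC
LIMIT 10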
Related
Is there a way to get the number of SELECTs run against an MSSQL table?
I mean without rewriting the application to log every request and without parsing SQL profiler logs...
Is there any built-in tool (SQL request)?
SQL Server maintains index usage statistics since the last restart. You can use the user_reads column from the query below to get the statistics you want:
SELECT OBJECT_NAME(ddius.[object_id], ddius.database_id) AS [object_name] ,
ddius.index_id ,
ddius.user_seeks ,
ddius.user_scans ,
ddius.user_lookups ,
ddius.user_seeks + ddius.user_scans + ddius.user_lookups
AS user_reads ,
ddius.user_updates AS user_writes ,
ddius.last_user_scan ,
ddius.last_user_update
FROM sys.dm_db_index_usage_stats ddius
WHERE ddius.database_id > 4 -- filter out system tables
AND OBJECTPROPERTY(ddius.OBJECT_ID, 'IsUserTable') = 1
AND ddius.index_id > 0 -- filter out heaps
AND database_id = DB_ID()
ORDER BY user_reads DESC
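If you want a single figure per table rather than per index, a variant that sums the reads across all of a table's indexes (a sketch using the same DMV):
SELECT OBJECT_NAME(ddius.[object_id], ddius.database_id) AS [object_name],
       SUM(ddius.user_seeks + ddius.user_scans + ddius.user_lookups) AS user_reads
FROM sys.dm_db_index_usage_stats ddius
WHERE ddius.database_id = DB_ID()
      AND OBJECTPROPERTY(ddius.[object_id], 'IsUserTable') = 1
GROUP BY OBJECT_NAME(ddius.[object_id], ddius.database_id)
ORDER BY user_reads DESC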
I am working on getting the record count of every entity available in CRM. Many solutions are available on the internet, but I searched the database (we are on-prem) and found a table called 'RecordCountSnapshot' that has the counts (and the answer to my question). I am wondering whether we can query that table somehow and get the counts.
I have tried using the OData Query Builder; I am able to prepare a query but unable to get the result.
Query:
Result:
We are using the on-prem version of CRM 2015.
Go to Settings -> Customizations -> Developer Resources -> Service Endpoints -> Organization Data Service
Open it by clicking /XRMServices/2011/OrganizationData.svc/; it is missing the definition for RecordCountSnapshot. That means this entity is not serviceable via OData. Even if you modify another OData query URL to use RecordCountSnapshotSet, you will get a 'Not found' error. (I tried this in CRM REST Builder.)
1) Since you are on-premise, you can use this query:
SELECT TOP 1000
    rcs.[Count],
    rcs.RecordCountSnapshotId,
    ev.ObjectTypeCode,
    ev.Name
FROM [YOURCRM_MSCRM].[dbo].[RecordCountSnapshot] AS rcs
INNER JOIN EntityView AS ev
    ON ev.ObjectTypeCode = rcs.ObjectTypeCode
WHERE rcs.[Count] > 0
ORDER BY rcs.[Count] DESC
2) In the OData Query Designer, you have the Statistics tab. Use it to get the record counts.
One option to get the counts of all entities is to run this SQL query against the MSCRM database:
SELECT SO.Name, SI.rows
FROM sysindexes SI
INNER JOIN SysObjects SO ON SI.id = SO.ID
WHERE SO.Type = 'U' AND SI.indid < 2
ORDER BY SI.rows DESC
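Note that sysindexes is a deprecated compatibility view; on SQL Server 2005 and later, the same approximate counts are available from sys.dm_db_partition_stats (a sketch):
SELECT OBJECT_NAME(ps.[object_id]) AS [table_name],
       SUM(ps.row_count) AS [rows]
FROM sys.dm_db_partition_stats ps
WHERE ps.index_id IN (0, 1) -- heap or clustered index only
      AND OBJECTPROPERTY(ps.[object_id], 'IsUserTable') = 1
GROUP BY ps.[object_id]
ORDER BY [rows] DESC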
I have also built a command-line app, currently in beta testing, that counts all entities. If you're interested, let's chat.
I have a server with 20 clients. If a client has been unused for a day, the query result arrives after 2:30 minutes (over 1,000 rows). After executing 5 or 6 queries, the result arrives within a few seconds.
I think it's a scheduling problem in SQL Server. How can I resolve it?
Thanks
UPDATE
This is the query:
Select * from [WWALMDB].[dbo].[v_AlarmConsolidated]
Where Critico = 1 AND ApprovatoQA = 0
AND InAttesaDiRiconoscimento Like '%param1%'
AND (Tipo Like '%param2%') AND Area Like '%param3%'
AND Nome Like '%%param4%%' AND Descrizione Like '%%param5%%'
AND (([Dataora Scatto] >= CONVERT(DATETIME,'param6',105))
AND ([Dataora Scatto] <= CONVERT(DATETIME,'param7',105))
OR( ([Dataora Rientro] >= CONVERT(DATETIME,'param6',105))
AND ([Dataora Rientro] <= CONVERT(DATETIME,'param7',105)) )
OR( ([Dataora PresoInCarico] >= CONVERT(DATETIME,'param6',105))
AND ([Dataora PresoInCarico] <= CONVERT(DATETIME,'param7',105)) ))
ORDER BY AlarmID DESC
What you describe is normal SQL Server caching behavior.
More information about SQL Server caching mechanisms can be found here.
It's impossible to tell you anything else without knowing the background (example SQL queries, the health status of your SQL Server machine, and the actual DB data structure).
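If you want to reproduce the cold-cache case on a test box (never on production), you can flush the buffer pool and compare a cold run against a warm one, for example:
-- test systems only: write dirty pages to disk, then drop the clean buffer pool
CHECKPOINT;
DBCC DROPCLEANBUFFERS;
SET STATISTICS TIME ON;
-- re-run the query here: the first (cold) run should be much slower than the second (warm) one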
UPDATE:
Your query is probably so badly optimized that it's actually a miracle it only takes 2:30. The main offenders:
SELECT * (fetches every column of the view)
LIKE '%%*%%' (leading wildcards make the predicate unsargable, so no index seek is possible)
ORDER BY (sorts the entire result set)
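For illustration only, a sketch of a more index-friendly shape; it assumes param1-param3 are exact values rather than substrings, narrows the date filter to one column for brevity, and lists only the columns actually needed:
SELECT AlarmID, Tipo, Area, Nome, Descrizione, [Dataora Scatto]
FROM [WWALMDB].[dbo].[v_AlarmConsolidated]
WHERE Critico = 1
  AND ApprovatoQA = 0
  AND InAttesaDiRiconoscimento = 'param1' -- equality instead of LIKE '%...%'
  AND Tipo = 'param2'
  AND Area = 'param3'
  AND Nome LIKE 'param4%' -- trailing wildcard only, so an index seek stays possible
  AND [Dataora Scatto] >= CONVERT(DATETIME, 'param6', 105)
  AND [Dataora Scatto] < CONVERT(DATETIME, 'param7', 105)
ORDER BY AlarmID DESC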
I'm trying to build an XE (Extended Events) session in order to find out which of our internal apps (that don't set an application name and thus show up as .Net SQLClient Data Provider) are hitting particular servers. Ideally, I'd like to get the name of the client and the database, but I'm not sure if I can do that in one XE session.
I figured for ease of use I'd use a histogram/asynchronous_bucketizer target and save counts of what's hitting the servers and how often. However, I can't seem to get it to work on 2012, much less 2008. If I use sqlserver.existing_connection it works, but it only gives me the count of connections that exist when the session starts. I want to collect counts during the day and see how often connections come from each server, so I tried preconnect_completed. Is this the right event?
Also, part of the reason I'm using XE is that those servers can get thousands of calls a minute.
Here's what I've come up with thus far. It works, but it only captures current SSMS connections that match; obviously, I'll change that filter to the .Net SQLClient Data Provider.
CREATE EVENT SESSION UnknownAppHosts
ON SERVER
ADD EVENT sqlserver.existing_connection(
ACTION(sqlserver.client_hostname)
WHERE ([sqlserver].[client_app_name] LIKE 'Microsoft SQL Server Management%')
)
ADD TARGET package0.histogram
( SET slots = 50,
filtering_event_name='sqlserver.existing_connection',
source_type=1,
source='sqlserver.client_hostname'
)
WITH (MAX_DISPATCH_LATENCY = 1 SECONDS);
GO
Aha! It's login, not preconnect_starting or preconnect_completed.
CREATE EVENT SESSION UnknownAppHosts
ON SERVER
ADD EVENT sqlserver.login(
ACTION(sqlserver.client_hostname)
WHERE ([sqlserver].[client_app_name] LIKE 'Microsoft SQL Server Management%')
)
ADD TARGET package0.histogram
( SET slots = 50,
filtering_event_name='sqlserver.login',
source_type=1,
source='sqlserver.client_hostname'
)
WITH (MAX_DISPATCH_LATENCY = 1 SECONDS);
GO
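One gotcha: the histogram target only accumulates counts once the session has been started, so don't forget:
ALTER EVENT SESSION UnknownAppHosts
ON SERVER
STATE = START;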
Then to query it, some awesome code I made horrid:
-- Parse the session data to determine the client hostnames being counted.
SELECT slot.value('./#count', 'int') AS [Count] ,
slot.query('./value').value('.', 'varchar(20)')
FROM
(
SELECT CAST(target_data AS XML) AS target_data
FROM sys.dm_xe_session_targets AS t
INNER JOIN sys.dm_xe_sessions AS s
ON t.event_session_address = s.address
WHERE s.name = 'UnknownAppHosts'
AND t.target_name = 'Histogram') AS tgt(target_data)
CROSS APPLY target_data.nodes('/HistogramTarget/Slot') AS bucket(slot)
ORDER BY slot.value('./#count', 'int') DESC
EDIT: I think the problem is in the subquery of the LINQ-generated query: it fetches all the records... But I don't know how I could fix it.
I have made a simple ASP.NET MVC 2 application that does SELECT queries on a view, and I get really poor performance. While doing a simple benchmark with jMeter (10 concurrent connections) with the cache disabled (I don't want everything to rely on the non-customizable/extreme OutputCache),
I see that SQL Server gets overloaded, consuming a LOT of CPU (up to 100%) and all of its reserved memory space (512 MB).
Here is the action code that causes the problems (manual transactions, because otherwise it caused deadlocks with the other program that inserts new data into the database):
public ActionResult Index(int page = 0)
{
    IronViperEntities db = new IronViperEntities();
    db.Connection.Open();
    DbTransaction transaction = db.Connection.BeginTransaction(IsolationLevel.ReadUncommitted);

    // Note: this LINQ query is only defined here; it actually executes when the
    // view enumerates ViewData["messages"], i.e. after the transaction has been
    // committed and the connection closed.
    var messages = (from globalView in db.GlobalViews
                    orderby globalView.MessagePostDate descending
                    select globalView)
                   .Skip(page * perPage)
                   .Take(perPage);

    transaction.Commit();
    db.Connection.Close();

    ViewData["page"] = page;
    ViewData["messages"] = messages;
    return View();
}
Here is the query executed on the database:
SELECT TOP (100)
[Extent1].[MessageId] AS [MessageId],
[Extent1].[MessageUuid] AS [MessageUuid],
[Extent1].[MessageData] AS [MessageData],
[Extent1].[MessagePostDate] AS [MessagePostDate],
[Extent1].[ChannelName] AS [ChannelName],
[Extent1].[UserName] AS [UserName],
[Extent1].[UserUuid] AS [UserUuid],
[Extent1].[ChannelUuid] AS [ChannelUuid]
FROM ( SELECT [Extent1].[MessageId] AS [MessageId], [Extent1].[MessageUuid] AS [MessageUuid], [Extent1].[MessageData] AS [MessageData], [Extent1].[MessagePostDate] AS [MessagePostDate], [Extent1].[ChannelName] AS [ChannelName], [Extent1].[UserName] AS [UserName], [Extent1].[UserUuid] AS [UserUuid], [Extent1].[ChannelUuid] AS [ChannelUuid], row_number() OVER (ORDER BY [Extent1].[MessagePostDate] DESC) AS [row_number]
FROM (SELECT
[GlobalView].[MessageId] AS [MessageId],
[GlobalView].[MessageUuid] AS [MessageUuid],
[GlobalView].[MessageData] AS [MessageData],
[GlobalView].[MessagePostDate] AS [MessagePostDate],
[GlobalView].[ChannelName] AS [ChannelName],
[GlobalView].[UserName] AS [UserName],
[GlobalView].[UserUuid] AS [UserUuid],
[GlobalView].[ChannelUuid] AS [ChannelUuid]
FROM [dbo].[GlobalView] AS [GlobalView]) AS [Extent1]
) AS [Extent1]
WHERE [Extent1].[row_number] > 0
ORDER BY [Extent1].[MessagePostDate] DESC
View code:
SELECT dbo.Messages.Id AS MessageId, dbo.Messages.Uuid AS MessageUuid, dbo.Messages.Data AS MessageData, dbo.Messages.PostDate AS MessagePostDate,
dbo.Channels.Name AS ChannelName, dbo.Users.Name AS UserName, dbo.Users.Uuid AS UserUuid, dbo.Channels.Uuid AS ChannelUuid
FROM dbo.Messages INNER JOIN
dbo.Users ON dbo.Messages.UserId = dbo.Users.Id INNER JOIN
dbo.Channels ON dbo.Messages.ChannelId = dbo.Channels.Id
I don't think the server hardware is the problem; I can run an equivalent Rails/Grails application without any performance issues (dual core, 3 GB of RAM).
A select count(*) on GlobalView returns ~270,000 rows, the indexes are rebuilt daily, and the execution plan shows it uses all the clustered indexes.
I get an average HTTP response time of 8,000 ms; SQL Server Management Studio shows an average CPU time for this SQL query of 866 ms and an average logical IO of 7,592.03.
The database file size is ~180 MB.
I am using Windows Server 2008 R2 Enterprise Edition, ASP.NET MVC 2 on IIS 7.5, and SQL Server 2008 R2 Express Edition with Advanced Services. They are the only things running on this server.
What can I do?
Thank you
I guess you got the query from SQL Server Profiler. Save the result, and pass it into the Database Engine Tuning Advisor. That might help you create additional indexes and statistics.
Just out of curiosity: wouldn't appending a .ToList() to the end of the var messages = ... line help? Without it, the query executes lazily when the view enumerates the results, i.e. after the transaction has been committed and the connection closed; .ToList() would force it to run inside the transaction.
I've found the problem.
I replaced "orderby globalView.MessagePostDate descending" with "orderby globalView.MessageId descending", because there isn't any index on MessagePostDate, and it is muuuuch better!
Thank you
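For completeness: the other fix would be to index the date column itself. Assuming MessagePostDate maps to dbo.Messages.PostDate (as the view definition above shows), something like:
CREATE NONCLUSTERED INDEX IX_Messages_PostDate
    ON dbo.Messages (PostDate DESC);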