Extended Events - building a histogram of servers that connect with a particular Application Name? - sql-server

I'm trying to build an XE session in order to find out which of our internal apps (that don't have app names and thus show up as .Net SQLClient Data Provider) are hitting particular servers. Ideally, I'd like to get the name of the client and the database, but I'm not sure if I can do that in one XE session.
I figured for ease of use, I'd use a histogram/asynchronous_bucketizer and save counts of what's trying to hit and how often. However, I can't seem to get it to work on 2012, much less 2008. If I use sqlserver.existing_connection it works, but it only gives me the count when it connects. I want to get counts during the day and see how often connections occur from each server, so I tried preconnect_completed. Is this the right event?
Also, and part of the reason I'm using XE, is that those servers can get thousands of calls a minute.
Here's what I've come up with thus far, which works but only gives me current SSMS connections that match - obviously, I'll change that to the .Net SQLClient Data Provider.
CREATE EVENT SESSION UnknownAppHosts
ON SERVER
ADD EVENT sqlserver.existing_connection(
    ACTION(sqlserver.client_hostname)
    WHERE ([sqlserver].[client_app_name] LIKE 'Microsoft SQL Server Management%')
)
ADD TARGET package0.histogram
(   SET slots = 50,
        filtering_event_name = 'sqlserver.existing_connection',
        source_type = 1,
        source = 'sqlserver.client_hostname'
)
WITH (MAX_DISPATCH_LATENCY = 1 SECONDS);
GO

Aha! It's login, not preconnect_starting or preconnect_completed.
CREATE EVENT SESSION UnknownAppHosts
ON SERVER
ADD EVENT sqlserver.login(
    ACTION(sqlserver.client_hostname)
    WHERE ([sqlserver].[client_app_name] LIKE 'Microsoft SQL Server Management%')
)
ADD TARGET package0.histogram
(   SET slots = 50,
        filtering_event_name = 'sqlserver.login',
        source_type = 1,
        source = 'sqlserver.client_hostname'
)
WITH (MAX_DISPATCH_LATENCY = 1 SECONDS);
GO
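One housekeeping note (a minimal sketch, standard XE syntax): the session doesn't collect anything until it's started, and it can be stopped the same way once enough data has been gathered.
-- Start the session so the histogram begins collecting.
ALTER EVENT SESSION UnknownAppHosts
ON SERVER
STATE = START;
GO
-- Later, when finished collecting:
-- ALTER EVENT SESSION UnknownAppHosts ON SERVER STATE = STOP;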
Then to query it, some awesome code I made horrid:
-- Parse the histogram target data to see which client host names are connecting, and how often.
SELECT slot.value('./@count', 'int') AS [Count],
       slot.query('./value').value('.', 'varchar(20)') AS [ClientHostName]
FROM
(   SELECT CAST(target_data AS XML) AS target_data
    FROM sys.dm_xe_session_targets AS t
    INNER JOIN sys.dm_xe_sessions AS s
        ON t.event_session_address = s.address
    WHERE s.name = 'UnknownAppHosts'
      AND t.target_name = 'histogram'
) AS tgt(target_data)
CROSS APPLY target_data.nodes('/HistogramTarget/Slot') AS bucket(slot)
ORDER BY slot.value('./@count', 'int') DESC;
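As mentioned at the top, the SSMS filter is only for testing. Here is a sketch of the same session re-created with the filter pointed at the unnamed .NET apps instead; the exact client_app_name string is an assumption and worth confirming against program_name in sys.dm_exec_sessions.
-- Drop the test session, then re-create it filtering on the default .NET provider name.
-- The LIKE pattern is an assumption; verify the exact string before relying on it.
IF EXISTS (SELECT 1 FROM sys.server_event_sessions WHERE name = 'UnknownAppHosts')
    DROP EVENT SESSION UnknownAppHosts ON SERVER;
GO
CREATE EVENT SESSION UnknownAppHosts
ON SERVER
ADD EVENT sqlserver.login(
    ACTION(sqlserver.client_hostname)
    WHERE ([sqlserver].[client_app_name] LIKE '.Net%Data Provider%')
)
ADD TARGET package0.histogram
(   SET slots = 50,
        filtering_event_name = 'sqlserver.login',
        source_type = 1,
        source = 'sqlserver.client_hostname'
)
WITH (MAX_DISPATCH_LATENCY = 1 SECONDS);
GO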

Related

Identity Server with Entity Framework can't authenticate

I am trying to set up the client_credentials grant type in IdentityServer4 using the Entity Framework project.
I am using a sample config file to populate the database: https://github.com/IdentityServer/IdentityServer4.Demo/blob/master/src/IdentityServer4Demo/Config.cs
This data is entered into the database, and I attempt to connect to the token endpoint via Insomnia; here is a screenshot of my setup:
It states invalid_client, but I am not sure why. I ran a SQL Profiler trace, and every time I hit this endpoint it runs:
exec sp_executesql N'SELECT [x.Properties].[Id], [x.Properties].[ClientId], [x.Properties].[Key], [x.Properties].[Value]
FROM [ClientProperties] AS [x.Properties]
INNER JOIN (
SELECT TOP(1) [x8].[Id]
FROM [Clients] AS [x8]
WHERE [x8].[ClientId] = @__clientId_0
ORDER BY [x8].[Id]
) AS [t7] ON [x.Properties].[ClientId] = [t7].[Id]
ORDER BY [t7].[Id]',N'@__clientId_0 nvarchar(200)',@__clientId_0=N'client'
It's trying to perform a join on the ClientProperties table, but this table is currently empty. I am not sure if this is the problem or not.
Am I doing something wrong?
When you INNER JOIN two tables and one of them is empty (as you state), the result set will always be empty, so when the client can't be retrieved by client_id you get "invalid_client".
Putting some data, linked to your client, into it should resolve the issue.
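For example, a purely hypothetical sketch of such a row (the key/value pair is made up; the column names come from the generated query above, and Id is assumed to be an identity column):
-- Find the surrogate key of the 'client' row, then link a placeholder property to it
-- so the join above has something to return.
DECLARE @clientPk int =
    (SELECT TOP (1) [Id] FROM [Clients] WHERE [ClientId] = N'client');
INSERT INTO [ClientProperties] ([ClientId], [Key], [Value])
VALUES (@clientPk, N'SomeKey', N'SomeValue');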

MSD CRM: Get the count of records of all entities in CRM

I am working on getting the record count of every entity available in CRM. I have seen many solutions available on the internet, but I searched in the database (as we are on-prem) and found a table called 'RecordCountSnapshot' that has the count (and the answer to my question). I am wondering whether we can query that table somehow and get the count.
I have tried using the OData Query Builder; I am able to prepare a query but unable to get the result.
Query:
Result:
We are using CRM 2015 on-prem version.
Go to Settings -> Customizations -> Developer Resources -> Service Endpoints -> Organization Data Service
Open it by clicking /XRMServices/2011/OrganizationData.svc/; it is missing the definition for RecordCountSnapshot. That means this entity is not serviceable by OData. Even if you modify another OData query URL to use RecordCountSnapshotSet, you will get a 'Not found' error. (I tried it in CRM REST Builder.)
1) As you are on-premises, you can use this query:
SELECT TOP 1000 [Count]
     , [RecordCountSnapshotId]
     , entityview.ObjectTypeCode
     , Name
FROM [YOURCRM_MSCRM].[dbo].[RecordCountSnapshot], EntityView
WHERE entityview.ObjectTypeCode = RecordCountSnapshot.ObjectTypeCode
  AND [Count] > 0
ORDER BY [Count] DESC;
2) In the OData Query Designer, you have a Statistics tab. Use it to get the record counts.
One option to get the counts of all entities is to run this SQL query against the MSCRM database:
SELECT SO.Name, SI.rows
FROM sysindexes SI, SysObjects SO
WHERE SI.id = SO.ID AND SO.Type = 'U' AND SI.indid < 2
ORDER BY SI.rows DESC;
I have also built a command line app that's in beta testing that runs a count of all entities. If you're interested, let's chat.
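If you'd rather avoid the deprecated sysindexes compatibility view, a similar (approximate) per-table count can be pulled from sys.dm_db_partition_stats:
-- Approximate row counts per user table; heaps use index_id 0, clustered indexes use 1.
SELECT o.name AS TableName,
       SUM(ps.row_count) AS [RowCount]
FROM sys.dm_db_partition_stats AS ps
INNER JOIN sys.objects AS o
    ON o.object_id = ps.object_id
WHERE o.type = 'U'
  AND ps.index_id IN (0, 1)
GROUP BY o.name
ORDER BY [RowCount] DESC;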

Recently created index in SQL Server

How can I find details of recently created indexes in my SQL Server database? Is there a query for this?
In my database there are a lot of indexes. I want to know which of those indexes were created recently, with all their details.
You can use the Schema Changes History report to see index creation changes, along with many other changes.
Here is how you do it:
1. Right-click the server
2. Go to Reports --> Standard Reports --> Schema Changes History
Below is a screenshot from my device.
The default trace is enabled by default, unless you have turned it off.
The query below tells you if the default trace status is ON:
select * from sys.configurations where name like '%trace%'
The query below can provide object creation details from the default trace:
SELECT OBJECT_NAME(T.ObjectID),
       T.ObjectName,
       T.IndexID
FROM sys.fn_trace_gettable(
         CONVERT(VARCHAR(150),
                 (SELECT TOP 1 f.[value]
                  FROM sys.fn_trace_getinfo(NULL) AS f
                  WHERE f.property = 2)), DEFAULT) AS T
JOIN sys.trace_events AS TE
    ON T.EventClass = TE.trace_event_id
WHERE T.DatabaseName = DB_NAME()
ORDER BY T.StartTime;
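If the trace returns too much noise, a hedged refinement of the query above is to keep only the object-creation event class and sort newest first:
-- Restrict the default trace to Object:Created events in the current database, newest first.
SELECT TE.name AS EventName,
       T.DatabaseName,
       T.ObjectName,
       T.IndexID,
       T.LoginName,
       T.StartTime
FROM sys.fn_trace_gettable(
         CONVERT(VARCHAR(150),
                 (SELECT TOP 1 f.[value]
                  FROM sys.fn_trace_getinfo(NULL) AS f
                  WHERE f.property = 2)), DEFAULT) AS T
JOIN sys.trace_events AS TE
    ON T.EventClass = TE.trace_event_id
WHERE TE.name = 'Object:Created'
  AND T.DatabaseName = DB_NAME()
ORDER BY T.StartTime DESC;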

MS Access ODBC Error 6623: A Winsock virtual circuit was aborted

I tried to create a query with multiple tables in it. All work fine, except that when I add this one table I get an "ODBC call failed" error:
6623: A Winsock virtual circuit was aborted.
I used Advantage SQL to link the external database so I can create reports from it.
The SQL for the query that gives the error:
SELECT podetail.ItemPartNbr
,podetail.ItemDescription
,Sum(podetail.Qty) AS LastYearOrdQty
FROM poheader
LEFT JOIN podetail
ON poheader.PoNbr = podetail.PoNbr
WHERE poheader.PoDate >= DateSerial(Year(Date())-1,Month(Date()),1)
and poheader.PoDate <= Date()
GROUP BY podetail.ItemPartNbr
,podetail.ItemDescription;
The main SQL, where I want to combine other tables with the query above:
SELECT
itemmast.ItemPartNbr
, itemmast.Description
, Sum(iteminv.QtyOnHand) AS SumOfQtyOnHand
, itemmast.MinOrderQty
, itemmast.Cost
, Sum(iteminv.QtyAllocated) AS SumOfQtyAllocated
, itemmast.ReOrderQty
, QtyLastYearPurchase.LastYearOrdQty
FROM
(itemmast
LEFT JOIN iteminv ON itemmast.ItemPartNbr = iteminv.ItemPartNbr)
LEFT JOIN QtyLastYearPurchase ON (itemmast.Description = QtyLastYearPurchase.ItemDescription)
AND (itemmast.ItemPartNbr = QtyLastYearPurchase.ItemPartNbr)
GROUP BY
itemmast.ItemPartNbr
, itemmast.Description
, itemmast.MinOrderQty
, itemmast.Cost
, itemmast.ReOrderQty
, QtyLastYearPurchase.LastYearOrdQty;
I set the joined fields on ItemPartNbr, and I just need QtyLastYearPurchase.LastYearOrdQty from the first query above to be added to the second query.
I tried opening each of the queries/tables, including the one that gives the error, and they all open just fine individually, so it doesn't seem to have anything to do with the connection.
If I remove WHERE (((poheader.PoDate)>=DateSerial(Year(Date())-1,Month(Date()),1) And (poheader.PoDate)<=Date())) from the first query, the second query can display the records just fine, but it takes extremely long to show them.
Any recommendation to fix this? Thank you!
So I tried creating a query between the itemmast and iteminv tables, and then connecting the itemmast table with the query I just created and the first query I posted. It works for now, except that it still takes forever to open.
This is a bit troublesome since I have to create multiple queries just to combine them all into one query. I have almost 20 queries just to create three reports.

Performance issues on ASP.NET MVC 2 with SQL Server

EDIT: I think it's a problem with the subquery in the LINQ-generated query; it gets all the records... but I don't know how I could fix it.
I have made a simple ASP.NET MVC 2 application that does SELECT queries on a view, and I get really poor performance. While running a simple benchmark with JMeter (10 concurrent connections) with the cache disabled (I don't want everything to rely on the non-customizable/extreme OutputCache),
I see that SQL Server gets overloaded, consuming a LOT of CPU (up to 100%) and all of its reserved memory space (512 MB).
Here is the action code that causes the problems (manual transactions, because it caused deadlocks with the other program that inserts new data into the database):
public ActionResult Index(int page = 0)
{
    IronViperEntities db = new IronViperEntities();
    db.Connection.Open();
    DbTransaction transaction = db.Connection.BeginTransaction(IsolationLevel.ReadUncommitted);

    var messages = (from globalView in db.GlobalViews
                    orderby globalView.MessagePostDate descending
                    select globalView).Skip(page * perPage).Take(perPage);

    transaction.Commit();
    db.Connection.Close();

    ViewData["page"] = page;
    ViewData["messages"] = messages;
    return View();
}
Here is the query executed on the database :
SELECT TOP (100)
[Extent1].[MessageId] AS [MessageId],
[Extent1].[MessageUuid] AS [MessageUuid],
[Extent1].[MessageData] AS [MessageData],
[Extent1].[MessagePostDate] AS [MessagePostDate],
[Extent1].[ChannelName] AS [ChannelName],
[Extent1].[UserName] AS [UserName],
[Extent1].[UserUuid] AS [UserUuid],
[Extent1].[ChannelUuid] AS [ChannelUuid]
FROM ( SELECT [Extent1].[MessageId] AS [MessageId], [Extent1].[MessageUuid] AS [MessageUuid], [Extent1].[MessageData] AS [MessageData], [Extent1].[MessagePostDate] AS [MessagePostDate], [Extent1].[ChannelName] AS [ChannelName], [Extent1].[UserName] AS [UserName], [Extent1].[UserUuid] AS [UserUuid], [Extent1].[ChannelUuid] AS [ChannelUuid], row_number() OVER (ORDER BY [Extent1].[MessagePostDate] DESC) AS [row_number]
FROM (SELECT
[GlobalView].[MessageId] AS [MessageId],
[GlobalView].[MessageUuid] AS [MessageUuid],
[GlobalView].[MessageData] AS [MessageData],
[GlobalView].[MessagePostDate] AS [MessagePostDate],
[GlobalView].[ChannelName] AS [ChannelName],
[GlobalView].[UserName] AS [UserName],
[GlobalView].[UserUuid] AS [UserUuid],
[GlobalView].[ChannelUuid] AS [ChannelUuid]
FROM [dbo].[GlobalView] AS [GlobalView]) AS [Extent1]
) AS [Extent1]
WHERE [Extent1].[row_number] > 0
ORDER BY [Extent1].[MessagePostDate] DESC
View Code :
SELECT dbo.Messages.Id AS MessageId, dbo.Messages.Uuid AS MessageUuid, dbo.Messages.Data AS MessageData, dbo.Messages.PostDate AS MessagePostDate,
dbo.Channels.Name AS ChannelName, dbo.Users.Name AS UserName, dbo.Users.Uuid AS UserUuid, dbo.Channels.Uuid AS ChannelUuid
FROM dbo.Messages INNER JOIN
dbo.Users ON dbo.Messages.UserId = dbo.Users.Id INNER JOIN
dbo.Channels ON dbo.Messages.ChannelId = dbo.Channels.Id
I don't think the server hardware is a problem; I can run an equivalent Rails/Grails application without any performance issues (dual core, 3 GB of RAM).
A SELECT COUNT(*) on GlobalView returns ~270,000 rows, indexes are rebuilt daily, and the execution plan shows it uses all the clustered indexes.
I get an average HTTP response time of 8000 ms; SQL Server Management Studio shows an average CPU time for this SQL query of 866 ms and an average logical IO of 7,592.03.
Database file size is ~180 MB.
I am using Windows Server 2008 R2 Enterprise Edition, ASP.NET MVC 2 with IIS 7.5 and SQL Server 2008 R2 Express Edition with Advanced Services. They are the only things running on this server.
What can I do ?
Thank you
I guess you got the query from SQL Server Profiler. Save the result, and pass it into the Database Engine Tuning Advisor. That might help you create additional indexes and statistics.
Just out of curiosity: wouldn't appending a .ToList() to the end of the var messages = ... line help?
I've found the problem.
I replaced "orderby globalView.MessagePostDate descending" with "orderby globalView.MessageId descending", because there isn't any index on MessagePostDate, and that is much better!
Thank you
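If sorting by post date is actually required, a hedged alternative to switching the sort column would be indexing the column the view sorts on (dbo.Messages.PostDate, per the view definition above); the index name is illustrative.
-- Illustrative index so ORDER BY MessagePostDate DESC can be served by an index instead of a sort.
CREATE NONCLUSTERED INDEX IX_Messages_PostDate
    ON dbo.Messages (PostDate DESC);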
