Capturing reads, writes, and CPU in SQL Server replay Trace results

I'm trying to guide devs to use replay traces to A/B test code optimizations and their corresponding database impact. In my sample I use the replay trace template, and while I can include CPU, reads, and writes in the capture trace, the only way I can gather these metrics in the replay results is to run a second trace to intercept that traffic.
Is there a way to modify the replay result template to include these fields? I'd think you could, since you can get results, execution times, etc.
Edit: the target is a 2014 instance, but I also tried a 2016 instance. I've tried the 2014, 2016, and 2018 versions of SSMS just in case there were different Profiler functionalities. I haven't found much in the way of documentation.
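In the meantime, the second-trace workaround mentioned above can be scripted so it adds little friction. Below is a minimal server-side trace sketch that captures CPU, reads, and writes on the replay target; the file path and size are assumptions, and on 2014/2016 you could achieve the same with Extended Events:

    DECLARE @traceid INT, @maxsize BIGINT = 256, @on BIT = 1;

    -- Option 2 = TRACE_FILE_ROLLOVER; the .trc extension is appended for you.
    EXEC sp_trace_create @traceid OUTPUT, 2, N'C:\Traces\replay_metrics', @maxsize;

    -- Event 10 = RPC:Completed, 12 = SQL:BatchCompleted.
    -- Columns: 1 TextData, 13 Duration, 16 Reads, 17 Writes, 18 CPU.
    EXEC sp_trace_setevent @traceid, 10, 1,  @on;
    EXEC sp_trace_setevent @traceid, 10, 13, @on;
    EXEC sp_trace_setevent @traceid, 10, 16, @on;
    EXEC sp_trace_setevent @traceid, 10, 17, @on;
    EXEC sp_trace_setevent @traceid, 10, 18, @on;
    EXEC sp_trace_setevent @traceid, 12, 1,  @on;
    EXEC sp_trace_setevent @traceid, 12, 13, @on;
    EXEC sp_trace_setevent @traceid, 12, 16, @on;
    EXEC sp_trace_setevent @traceid, 12, 17, @on;
    EXEC sp_trace_setevent @traceid, 12, 18, @on;

    EXEC sp_trace_setstatus @traceid, 1;   -- start before the replay
    -- After the replay: sp_trace_setstatus @traceid, 0 (stop), then 2 (close).

Load the resulting file with fn_trace_gettable and you can aggregate CPU, reads, and writes per statement for the A and B runs.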

There are many tools available for performance tuning.
For example, Entity Framework Profiler (a commercial tool sold on a subscription basis, which limits its availability) could satisfy any developer using EF as their ORM for tracing and tuning.
On the other hand, SQL Server Profiler is an advanced tool for tracing your workload.
So, I can help if you specify which ORM you are using.

Related

Predict using MS SCOM 2012 R2 Data

Has anybody tried to predict application or server failures using MS SCOM 2012 R2 data? We would like to perform the following tasks with the data and were wondering if somebody has already done this, as we could use the guidance:
Predict whether an application or server is risking failure.
Perform Root Cause Analysis on failures for quick resolution of issues so that support engineers get guidance on where to go next.
Perform some form of clustering so that we can say that when Application A fails, B and C tend to fail right after also.
Our specific questions are:
What data/features did you have to use to build your predictive model (Events, Alerts, State, Performance)?
Which algorithms did you find most useful?
I suggest looking into whether Operational Insights can fulfill this requirement.
There is a Capacity Intelligence Pack which promises predictive analytics.
https://preview.opinsights.azure.com/

SQL Server Profiler causing CPU spike

Is it possible for SQL Profiler to cause issues for SQL Server?
We noticed one of the CPUs running at full capacity, and when we turned SQL Profiler off, the spike was gone!
Has anyone experienced this and how does it happen?
How can you use SQL profiler if it causes this issue?
SQL Profiler will impact performance to some degree, as it is effectively subscribing to events and transactions that are being processed by the SQL engine. Profiler performs tracing and filtering of events to produce output, which requires CPU; to what degree depends on the load on the server and how the trace is configured.
Here's an article I've found that provides some tips:
http://weblogs.sqlteam.com/dang/archive/2007/12/16/Avoid-Causing-Problems-with-Profiler.aspx
Summary Points - SQL Trace performance guidelines:
Run Profiler remotely instead of directly on server
Avoid including events that occur frequently (e.g. Lock:Acquired) unless absolutely needed
Include only event classes needed
Specify limiting filters to reduce the number of events
Avoid redundant data (e.g. SQL:BatchStarting and SQL:BatchCompleted)
Avoid running large traces with Profiler; consider a server-side SQL Trace instead (a minimal sketch follows this list)
Limit server-side trace file size and manage space usage
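For the last two points, here is a minimal sketch of keeping tabs on server-side traces; sys.traces exists on SQL 2005 and later, and the trace id in the commented lines is an assumption:

    -- List non-default traces, where they write, and how big they may grow.
    SELECT id, path, max_size, max_files, is_rollover, event_count, start_time
    FROM sys.traces
    WHERE is_default = 0;

    -- Stop and remove a runaway trace by id (the id 2 here is an assumption):
    -- EXEC sp_trace_setstatus @traceid = 2, @status = 0;  -- stop
    -- EXEC sp_trace_setstatus @traceid = 2, @status = 2;  -- close and delete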

Better understanding of MySQL transactions

I just realized that my application was needlessly making 50+ database calls per user request due to some hidden coding -- hidden in the sense that between LINQ, persistence frameworks and events it just so turned out that a huge number of calls were being made without me being aware.
Is there a recommended way to analyze individual transactions going to my SQL 2008 database, preferably with some integration to my Visual Studio 2010 environment? I want to be able to 'spy' on individual transactions being made, but only for certain pieces of my code, and without making serious changes to either the code or database.
In addition to SQL Server Profiler, there are a number of performance counters you can look at to see both a real-time evaluation and a historic trend:
Batch Requests/sec: Effectively measures the number of actual calls made to the SQL Server
Transactions/sec: Number of transactions in each database.
Connection resets/sec: number of new connections started from the connection pool by your site.
There are many more performance counters you can monitor, especially if you want to measure performance, but going through them all is beyond the scope here. A good starting point is Monitoring Resource Usage.
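If you'd rather sample those counters from a query window than from PerfMon, they are also exposed through sys.dm_os_performance_counters. A sketch follows; exact counter names can vary slightly between versions, so check the view if a name returns nothing:

    -- Per-second counters are cumulative since startup: sample twice
    -- and diff the values to get an actual rate.
    SELECT object_name, counter_name, instance_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN (N'Batch Requests/sec',
                           N'Transactions/sec',
                           N'Connection Reset/sec');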
You can use the SQL Profiler tool that comes with SQL Server Management Studio.
Microsoft SQL Server Profiler is a graphical user interface to SQL Trace for monitoring an instance of the Database Engine or Analysis Services. You can capture and save data about each event to a file or table to analyze later. For example, you can monitor a production environment to see which stored procedures are affecting performance by executing too slowly.
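Once you have saved a capture to a file, you can also pull it into a table for ad-hoc analysis. A sketch, where the path and the scratch table name are made up:

    -- 'default' as the second argument reads all rollover files.
    SELECT TextData, Duration, CPU, Reads, Writes, StartTime
    INTO dbo.TraceAnalysis
    FROM sys.fn_trace_gettable(N'C:\Traces\capture.trc', default);

    -- e.g. the top statements by total CPU:
    SELECT TOP (20) CAST(TextData AS NVARCHAR(4000)) AS Query,
           COUNT(*) AS Executions, SUM(CPU) AS TotalCPU
    FROM dbo.TraceAnalysis
    GROUP BY CAST(TextData AS NVARCHAR(4000))
    ORDER BY TotalCPU DESC;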
As mentioned, SQL Profiler is useful at the SQL Server level. However, it is not available with SQL Server Express editions of SSMS.
At the .NET level, LINQ to SQL and the Entity Framework both support logging. See Logging every data change with Entity Framework, http://msdn.microsoft.com/en-us/magazine/gg490349.aspx, http://peterkellner.net/2008/12/04/linq-debug-output-vs2008/.
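If you'd rather watch from the database side without touching the code, one option is to tag the suspect code path with the connection string's Application Name keyword and trace only that name. A sketch; the application name and file path here are made up:

    DECLARE @traceid INT, @maxsize BIGINT = 50, @on BIT = 1;
    EXEC sp_trace_create @traceid OUTPUT, 0, N'C:\Traces\myfeature', @maxsize;

    -- Event 12 = SQL:BatchCompleted; columns 1 = TextData, 13 = Duration.
    EXEC sp_trace_setevent @traceid, 12, 1,  @on;
    EXEC sp_trace_setevent @traceid, 12, 13, @on;

    -- Column 10 = ApplicationName; comparison operator 6 = LIKE.
    EXEC sp_trace_setfilter @traceid, 10, 0, 6, N'MyFeatureUnderTest';

    EXEC sp_trace_setstatus @traceid, 1;   -- start capturing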

Understanding SQL Profiler trace

I'm currently experiencing some problems on my DotNetNuke SQL Server 2005 Express site on Win2k8 Server. It runs smoothly most of the time. However, occasionally (on the order of once or twice an hour) it runs very slowly indeed; from a user perspective it's almost as if there's a deadlock of some description when this occurs.
To try to work out what the problem is I've run SQL Profiler against the SQL Express database.
Looking at the results, some specific questions I have are:
The SQL trace shows an Audit Logon and Audit Logoff for every RPC:Completed - does this mean Connection Pooling isn't working?
When I look in Performance Monitor at ".NET CLR Data", then none of the "SQL client" counters have any instances - is this just a SQL Express lack-of-functionality problem or does it suggest I have something misconfigured?
The queries running when the slowness occurs don't seem unusual so far; they run fast at other times. What other perfmon counters, traces, or log files can you suggest as useful tools for my further investigation?
Jumping straight to Profiler is probably the wrong first step. First, try checking the Perfmon stats on the server. I've got a tutorial online here:
http://www.brentozar.com/perfmon
Start capturing those metrics, and then after it's experienced one of those slowdowns, stop the collection. Look at the performance metrics around that time, and the bottleneck will show up. If you want to send me the csv output from Perfmon at brento@brentozar.com I can give you some insight as to what's going on.
You might still need to run Profiler afterwards, but I'd rule out the OS and hardware first. Also, just a thought - have you checked the server's System and Application event logs to make sure nothing's happening during those times? I've seen instances where, say, the antivirus client downloads new patches too often, and does a light scan after each update.
My spidey sense tells me that you may have SQL Server blocking issues. Read this article to help you monitor blocking on your server and check whether it's the cause.
If you think the issues may be performance-related and want to see what your hardware bottleneck is, gather some CPU, disk, and memory stats using perfmon and then correlate them with your Profiler trace to see if the slow responses line up.
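For a quick look at blocking without any trace at all, a minimal DMV check (SQL 2005 and later, Express included) asks who is waiting on whom; run it while the site is slow:

    SELECT r.session_id,
           r.blocking_session_id,
           r.wait_type,
           r.wait_time,                       -- milliseconds
           t.text AS current_statement
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.blocking_session_id <> 0;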
No; seeing an Audit Logon/Logoff pair per RPC does not mean pooling is broken, since pooled connections still raise these events (the EventSubClass column distinguishes pooled from non-pooled logins).
Nothing wrong with the second point either; it just shows that you're not using the .NET (CLR) functionality embedded in SQL Server.
You can check http://www.xsqlsoftware.com/Product/xSQL_Profiler.aspx for more detailed analysis of a Profiler trace. It has reports that show the top queries by time or CPU (not a single execution, but the sum across all executions of the same query).
Some other things to check:
Make sure your data files or log files are not auto-extending.
Make sure your anti-virus is set to ignore your SQL data and log files.
When looking at the Profiler output, be sure to check the queries that finished just prior to your targets; they could have been blocking.
Make sure you've turned off Auto-Close on the database; re-opening after closing takes some time. (A quick check script for the first and last items follows.)
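A sketch of checking the auto-grow and Auto-Close items from the list above, using standard catalog views available on SQL 2005 and later:

    -- Auto-grow settings per file; growth = 0 means auto-grow is off.
    SELECT DB_NAME(database_id) AS db, name AS logical_file,
           growth, is_percent_growth          -- pages of 8 KB unless percent
    FROM sys.master_files;

    -- Databases that will close and re-open on each use.
    SELECT name, is_auto_close_on
    FROM sys.databases
    WHERE is_auto_close_on = 1;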

Reducing the overhead of a SQL Trace with filters

We have a SQL 2000 server that has widely varied jobs that run at different times of day, or even different days of the month. Normally, we only use the SQL profiler to run traces for very short periods of time for performance troubleshooting, but in this case, that really wouldn't give me a good overall picture of the kinds of queries that are run against the database over the course of a day or week or month.
How can I minimize the performance overhead of a long-running SQL trace? I already know to:
Execute the trace server-side (sp_trace_create), instead of using the SQL Profiler UI.
Trace to a file, and not to a database table (which would add extra overhead to the DB server).
My question really is about filters. If I add a filter to only log queries that exceed a certain duration or number of reads, it still has to examine all activity on the server to decide whether it needs to log it, right? So even with that filter, is the trace going to create an unacceptable level of overhead for a server that is already on the edge of unacceptable performance?
Adding Filters does minimize the overhead of event collection and also prevents the server from logging transaction entries you don't need.
As for whether the trace is going to create an unacceptable level of overhead, you'll just have to test it and stop it if there are additional complaints. Feeding that production trace file into the Database Engine Tuning Advisor could improve performance for everyone tomorrow, though.
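To make the filter concrete, here is a minimal sketch of a server-side trace with a duration filter; the path and thresholds are assumptions. Note that filters are applied before the event is written, which is why they cut I/O but not the cost of inspecting each event:

    DECLARE @traceid INT, @maxsize BIGINT, @minDuration BIGINT
    SET @maxsize = 100          -- MB per rollover file
    SET @minDuration = 1000     -- ms on SQL 2000; 2005+ reports microseconds

    EXEC sp_trace_create @traceid OUTPUT, 2, N'C:\Traces\longrunners', @maxsize

    -- Column 13 = Duration; logical operator 0 = AND; comparison 4 = ">=".
    -- A similar call on column 16 (Reads) would filter on reads instead.
    EXEC sp_trace_setfilter @traceid, 13, 0, 4, @minDuration

    -- Events still need to be added with sp_trace_setevent, and the trace
    -- started with sp_trace_setstatus @traceid, 1.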
You actually should not have the server process the trace, as that can cause problems: "When the server processes the trace, no events are dropped - even if it means sacrificing server performance to capture all the events. Whereas if Profiler is processing the trace, it will skip events if the server gets too busy." (From the SQL 70-431 exam book best practices.)
I found an article that actually measures the performance impact of a SQL profiler session vs a server-side trace:
http://sqlblog.com/blogs/linchi_shea/archive/2007/08/01/trace-profiler-test.aspx
This really was my underlying question, how to make sure that I don't bog down my production server during a trace. It appears that if you do it correctly, there is minimal overhead.
It's actually possible to collect more detailed measurements than you can get from Profiler, and to do it 24x7 across an entire instance, without incurring any overhead. This avoids the need to figure out ahead of time what you should filter, which can be tricky.
Full disclosure: I work for one of the vendors who provide such tools… but whether you use ours or someone else’s… this may get you around the core issue here.
More info on our tool here http://bit.ly/aZKerz
