SQL SERVER - TRACEWRITE - How to find out why it would be slow? - sql-server

I have been looking at the performance of one of our testing environments and TRACEWRITE is at the top of the wait stats. These waits seem a bit long at 1918.8 ms, which seems rather slow.
I am running sp_BlitzFirst to give me these details.
EXEC sp_BlitzFirst @SinceStartup = 1
I am happy with the other wait stats (I know how to improve those), but I'm unsure where to go with the TRACEWRITE wait. Is this bad, and if so, what should I look at to make it faster?
After the comments we can see that this is coming from a rowset trace, whatever that is!?
This would seem to be SQL Sentry, which is used by the DBAs. Is 2 seconds acceptable for SQL Sentry?
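One generic way to see where TRACEWRITE waits are coming from (a sketch, not specific to SQL Sentry) is to list the active traces on the instance; a rowset trace streams events to a client such as Profiler or a monitoring tool instead of writing to a file, so its path is NULL:
-- List running traces; rowset (client-side) traces have is_rowset = 1 and a NULL path.
SELECT id,
       path,                 -- NULL for rowset traces
       is_rowset,
       is_default,           -- 1 = the built-in default trace
       reader_spid,          -- session consuming a rowset trace
       event_count,
       dropped_event_count
FROM   sys.traces
WHERE  status = 1;           -- running traces only
The reader_spid should point at the session belonging to whichever tool (SQL Sentry, Profiler, etc.) is consuming the trace.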

Related

DbVisualizer 8.0.12 performs two updates instead of expected one update

I am using DbVisualizer 8.0.12 as the client tool towards MS SQL Server 2012 database.
I want to perform simple update:
update table1 set field1=0 where field2='something';
I expect exactly one row to be updated, since field2 is primary key of table1.
Also, doing a:
select * from table1 where field2='something';
returns exactly one row.
But when executing the update sql, DBVisualizer informs me that there were two updates successfully executed.
11:16:58 [UPDATE - 1 row(s), 0.003 secs] Command processed
11:16:58 [UPDATE - 1 row(s), 0.003 secs] Command processed
... 2 statement(s) executed, 2 row(s) affected, exec/fetch time: 0.006/0.000 sec [2 successful, 0 warnings, 0 errors]
I don't understand why two updates are performed. Shouldn't there be only one update?
Can anybody please advise? Thank you in advance for any kind of information.
Consider doing some work yourself. It is obvious that there are 2 commands issued. First, trace them - with the profiler - and check whether they are what you think they are.
SQL Server comes with a decent profiler out of the box. An old rule when you do stuff like this: NEVER assume, always validate. The statements may not even be the same... as long as you do not know that, it's all a wild guess.
I have used the MS SQL Server Profiler, as @TomTom suggested.
And I also ran my SQL update using Microsoft SQL Server Management Studio.
Things I had to turn on for the profiler (and for my needs) were:
1. 'Trace properties > Events Selection > Column Filters > Database name - Like: my_db_name', since we have a lot of databases on the server and I only wanted to trace my database named 'my_db_name'
2. 'Trace properties > Events Selection > Stored procedures > enable SP:StmtStarting and SP:StmtCompleted', since I wanted to trace the triggers
It seems that this info message from DBVisualizer is misleading (it happens only for tables that have triggers - in this particular case the trigger inserted data into another table, a so-called archive table, on every update). Actually, only one update was done, so all fine there.
Microsoft SQL Server Management Studio shows the correct info: 1 update and 1 insert.
Hope this will help someone having a similar "problem". @TomTom, please post your comment as an answer so I can give you credit for it. Thank you.
[EDIT]
@TomTom
Hmmm, maybe not.
I think you had enough time to think about it...
Your answer wasn't helpful at all (except the little trace of light in the confirmative form of "Yes, SQL Server has a profiler included, DAAAH...") - no constructive suggestions of your own and a lot of being a "smarty" guy.
An answer to a question should include some more useful information and concrete guidance if you have it; otherwise, don't be a smartass.
Since I did all the work without your help, I think you don't actually deserve credit for it.
The funny thing about it is that you ACTUALLY think you do.
No comment on that, except that I really have ZERO (0.000... is it going to change? Hmm, let's see... well, I guess not - that's a little bit of smartass for you :) tolerance for smartasses like you.
[END OF EDIT]
Still, there is one more thing I would like to know about Profiler.
Is there a way you can actually see which rows (in which table) will be updated?
From the information I have above, I can only see that there was one update (so I am assuming it is the one on table1 that I expected). But I would like to see something like: in table 'tablename', these rows will be updated with these values.
Is this possible with the Profiler?
Thank you in advance for your time and answers.

Delay a SQL Server 2000 query until processor is below 50% utilization using a query or sproc?

I have several very expensive queries which seem to hog resources and put the system over the top.
Is there a delay function I can call to wait until processor resources come back down, in SQL Server 2000 - 2008?
My eventual goal is to go back and make these more efficient and use a sproc, but in the meantime I need to get them working ASAP because I'm rewriting legacy code.
You could try something like this:
DECLARE @Busy  int
       ,@Ticks int

SELECT @Busy  = @@CPU_BUSY
      ,@Ticks = 7777   --< you have to determine this value based on your machine

WAITFOR DELAY '00:00:10'   -- 10 seconds

WHILE @@CPU_BUSY - @Ticks > @Busy
BEGIN
    -- too busy, wait longer
    SET @Busy = @@CPU_BUSY
    WAITFOR DELAY '00:00:10'   -- 10 seconds
END

EXEC YourProcedureHere
To determine the @Ticks value, just write a loop that prints out the difference between @@CPU_BUSY values every 10 seconds. When the system is at your low load, use that difference as @Ticks.
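A rough calibration loop might look like this (illustrative only; run it while the system is at the load you consider acceptable and use the printed deltas as @Ticks):
DECLARE @prev int
DECLARE @i    int
SET @prev = @@CPU_BUSY
SET @i    = 0
WHILE @i < 6                        -- sample for about a minute
BEGIN
    WAITFOR DELAY '00:00:10'        -- 10 seconds
    PRINT @@CPU_BUSY - @prev        -- ticks of CPU busy time in that interval
    SET @prev = @@CPU_BUSY
    SET @i    = @i + 1
END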
You can't control or throttle CPU except for higher editions of SQL Server 2008.
Your best option seems to be to set options to allow only half (or fewer) of your CPUs to be used for any one query. This can be done in two ways:
at the server level, for all queries, using the "max degree of parallelism" option
per query, for the offending query, with a MAXDOP hint
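For example (the value of 4 is just a placeholder; the table and column in the second statement are also placeholders):
-- Server-wide: cap parallelism for every query
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;

-- Per query: cap parallelism only for the offending statement
SELECT SUM(Amount)
FROM   dbo.BigTable
OPTION (MAXDOP 4);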
Also see:
KB article "General guidelines to use to configure the MAXDOP option"
SO question: Control the CPU usage during TSQL query- sql 2008 (not a duplicate of this)
Edit:
The question would be: do you want to delay execution (with all the issues that brings, like CommandTimeout, user response time, etc.) or improve concurrency for all queries?
This answer should improve concurrency: I usually deal with client apps and I can't make a business user wait.
When delaying execution, you also end up delaying other queries (say, by disallowing the expensive queries from running), which reduces concurrency throughout as calls back up. And you'll have to be careful about two expensive queries starting around the same time.
The only thing I can think of here to actually kick things off in quiet times is to use Scheduled Tasks and osql to execute your statements. Scheduled Tasks has the option to run when the machine is idle.
I'm not sure about the 50% bit though.
This strategy shouldn't be too sensitive to SQL version either.

how can I test performance in Sql Server Mgmt Studio without outputting data?

Using SQL Server Management Studio.
How can I test the performance of a large select (say 600k rows) without the results window impacting my test? All things being equal it doesn't really matter, since the two queries will both be outputting to the same place. But I'd like to speed up my testing cycles and I'm thinking that the output settings of SQL Server Management Studio are getting in my way. Output to text is what I'm using currently, but I'm hoping for a better alternative.
I think this is impacting my numbers because the database is on my local box.
Edit: Had a question about doing WHERE 1=0 here (thinking that the join would happen but no output), but I tested it and it didn't work -- not a valid indicator of query performance.
You could do SET ROWCOUNT 1 before your query. I'm not sure it's exactly what you want but it will avoid having to wait for lots of data to be returned and therefore give you accurate calculation costs.
However, if you add Client Statistics to your query, one of the numbers is Wait time on server replies which will give you the server calculation time not including the time it takes to transfer the data over the network.
You can SET STATISTICS TIME ON to get a measurement of the time spent on the server. And you can use Query > Include Client Statistics (Shift+Alt+S) in SSMS to get detailed information about the client time usage. Note that SQL queries don't run and then return the result to the client when finished; instead they run as they return results, and even suspend execution if the communication channel is full.
The only context under which a query completely ignores sending the result packets back to the client is activation. But then the time to return the output to the client should be also considered when you measure your performance. Are you sure your own client will be any faster than SSMS?
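For the SET STATISTICS TIME approach, a minimal pattern looks like this (the query is just a stand-in for the one you are testing):
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

SELECT COUNT(*) FROM sys.objects;   -- your 600k-row query goes here

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
-- CPU time, elapsed time and logical reads are reported on the Messages tab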
SET ROWCOUNT 1 will stop processing after the first row is returned which means unless the plan happens to have a blocking operator the results will be useless.
Taking a trivial example
SELECT * FROM TableX
The cost of this query in practice will heavily depend on the number of rows in TableX.
Using SET ROWCOUNT 1 won't show any of that. Irrespective of whether TableX has 1 row or 1 billion rows it will stop executing after the first row is returned.
I often assign the SELECT results to variables to be able to look at things like logical reads without being slowed down by SSMS displaying the results.
SET STATISTICS IO ON

DECLARE @name nvarchar(35),
        @type nchar(3)

SELECT @name = name,
       @type = type
FROM   master..spt_values
There is a related Connect Item request Provide "Discard results at server" option in SSMS and/or TSQL
The best thing you can do is to check the Query Execution Plan (press Ctrl+L) for the actual query. That will give you the best guesstimate for performance available.
I'd think that the where clause of WHERE 1=0 is definitely happening on the SQL Server side, and not Management Studio. No results would be returned.
Is your DB engine on the same machine that you're running Mgmt Studio on?
You could:
Output to Text or
Output to File.
Close the Query Results pane.
That'd just move the cycles spent on drawing the grid in Mgmt Studio. Perhaps Results to Text would be more performant on the whole. Hiding the pane would save Mgmt Studio the cycles spent drawing the data, but it's still being returned to Mgmt Studio, so it really isn't saving a lot of cycles.
How can you test performance of your query if you don't output the results? Speeding up the testing is pointless if the testing doesn't tell you anything about how the query is going to perform. Do you really want to find out this dog of a query takes ten minutes to return data after you push it to prod?
And of course it's going to take some time to return 600,000 records. It will in your user interface as well; it will probably take longer there than in your query window because the data has to go across the network.
There are a lot of more correct answers here, but I assume the real question is the one I asked myself when I stumbled upon this question:
I have a query A and a query B on the same test data. Which is faster? And I want a quick-and-dirty check. For me the answer is temp tables (the overhead of creating a temp table here is easy to ignore). This is to be done on a perf/testing/dev server only!
Query A:
DBCC FREEPROCCACHE      -- clear the plan cache
DBCC DROPCLEANBUFFERS   -- clear clean pages from the buffer pool
SELECT * INTO #temp1 FROM ...
Query B:
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS
SELECT * INTO #temp2 FROM ...

Intriguing SQL Server performance-tuning problem

I have been working on a stored procedure performance problem for over a week now; it is related to my other post on Stack Overflow here. Let me give you some background information.
We have a nightly process which runs and is started by a stored procedure which calls many, many other stored procedures. Lots of the called stored procedures call others, etc. I have looked at some of the called procs and there is all sorts of frightening, complicated stuff in there, such as XML string processing, unnecessary over-use of cursors, over-use of NOLOCK hints, rare use of set-based processing, etc. - the list goes on; it's quite horrendous.
This nightly process in our production environment takes on average 1:15 to run. It sometimes takes 2 hours to run which is unacceptable. I have created a test environment on identical hardware to production and run the proc. It took 45 minutes the first time I ran it. If I restore the database to the exact same point and run it again, it takes longer: indeed, if I repeat this action several times (restoring and re-running), the proc takes progressively longer until it plateaus at around 2 hours. This really puzzles me because I restore the database to the exact same point every time. There are no other user databases on the server.
I thought of two lines of investigation to pursue:
Query plans and parameter spoofing
Tempdb
As a test, I restarted SQL Server to clear out both the cache and tempdb and re-ran the proc with the same database restore. The proc took 45 minutes. I repeated this several times to ensure that it was repeatable - again it took 45 minutes each time. I then embarked on several tests to try and isolate the puzzling increase in run times when SQL Server does not get restarted:
Run the initial stored procedure WITH RECOMPILE
Before running the procedure, execute DBCC FREEPROCCACHE to clear out the procedure cache
Before running the procedure, execute CHECKPOINT followed by DBCC DROPCLEANBUFFERS to ensure that the cache was empty and clean
Executed the following script to ensure all stored procedures were marked for recompilation:
DECLARE @proc_schema SYSNAME
DECLARE @proc_name   SYSNAME
DECLARE @stmt        NVARCHAR(MAX)

DECLARE prcCsr CURSOR LOCAL
FOR SELECT specific_schema,
           specific_name
    FROM   INFORMATION_SCHEMA.routines
    WHERE  routine_type = 'PROCEDURE'

OPEN prcCsr
FETCH NEXT FROM prcCsr INTO @proc_schema, @proc_name

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @stmt = N'exec sp_recompile ''[' + @proc_schema + '].['
                + @proc_name + ']'''
    -- PRINT @stmt -- DEBUG
    EXEC ( @stmt )
    FETCH NEXT FROM prcCsr INTO @proc_schema, @proc_name
END

CLOSE prcCsr
DEALLOCATE prcCsr
In all the above tests, the procedure takes longer and longer to run with the same database restore. I am really at a loss now as to what to try. Looking into the code at this point is an option, but realistically it's going to take 3-6 months to get that optimised, as there is lots of room for improvement there. What I am really interested in getting to the bottom of is why the proc execution time gets longer on each run after a database restore, even when the procedure and buffer caches have been cleaned.
I also investigated tempdb and tried to clear out the old tables in there, as described in my other Stack Overflow post, but I am unable to manually clear out the temp tables that were created from table variables, and they don't seem to want to disappear on their own (even after leaving them for 24 hours).
Any insight or suggestions for further testing would be greatly appreciated. I am running SQL Server 2005 SP3 64-bit Enterprise edition on a Windows 2003 R2 Ent. edition cluster.
Regards,
Mark.
One thing that could cause this is if the process is leaking XML documents. That would cause SQL Server to use more memory, and parts of that might be written to a page file on disk, causing the process to slow down.
Code that creates an XML document looks like:
EXEC sp_xml_preparedocument @idoc OUTPUT, @strXML
It leaks if there is no corresponding:
EXEC sp_xml_removedocument @idoc
XML documents are COM objects stored outside the configured SQL Server memory. Even if you set SQL Server to use max 5 GB, leaking XML documents grows memory usage beyond that.
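For reference, a minimal sketch of the balanced prepare/use/remove pattern (the XML content and the OPENXML mapping are made up for illustration):
DECLARE @idoc   int
DECLARE @strXML nvarchar(4000)
SET @strXML = N'<rows><row id="1"/><row id="2"/></rows>'

EXEC sp_xml_preparedocument @idoc OUTPUT, @strXML

SELECT id
FROM   OPENXML(@idoc, '/rows/row', 1)   -- flag 1 = attribute-centric mapping
       WITH (id int)

EXEC sp_xml_removedocument @idoc        -- always release the handle, even on error paths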
Reviewing all posts to-date and your related question, it certainly sounds like your strongest lead is the mystery behind those tempdb objects. Some leading questions:
After a fresh start, after the process is run how many objects are in tempdb? Is it the same number after every fresh start?
Do the numbers grow after “successive” runs? Do they grow at the same rate?
Can you determine if they occupy space?
For that matter, do your tempdb files grow with each successive run of your process?
I followed the links, but didn't find any reference discussing the actual problem. You might want to raise the issue on the Microsoft SQL Technet forums here -- they can be pretty good with the abstract stuff. (If all else fails, you can open a case with MS technical support. It might take days, but odds are very good that they will figure things out. And if it is an MS bug, they refund your money!)
You've said that rewriting the code is not an option. However, if temp table abuse is a factor, identifying and refactoring those parts of the code first might help a lot. To find which those may be, run SQL Profiler while your process executes. This kind of work is, alas, subjective and highly iterative (meaning you hardly ever get just the right set of counters on the first pass). Some thoughts:
Start with tracking SP:Starting, to see which stored procedures are being called.
SQL Profiler can be used to group data; it's awkward and I'm not sure how to describe it in mere text, but configured properly you'll get a Profiler display showing the number of times each procedure was called. Ideally, this would show the most frequently called procs, and you can analyze them for temp table abuse and refactor as necessary.
If nothing jumps out there, you can trace SP:StmtStarting and do the same thing for individual statements. The problem here is that in a 2+/- hour spaghetti-code run, you might run out of disk space, and analyzing 100s of MB of trace data can be a nightmare. (Hint: load it in a table, build indexes, then carefully delete out the cruft.) Again, the goal would be to identify overly used/abused temp table code to be refactored.
Mark-
So it might take 3-6 months to totally re-write this procedure, but that doesn't mean you can't do some relatively quick performance optimization.
Some of the routines I have to support run 30hrs+, I would be ecstatic to get them to run in 2hrs!! The kind of optimization that you do on these routines is a little different than your normal OLTP database:
Capture a trace of the entire process, making sure to capture SP:StmtCompleted and SQL:StmtCompleted events. Make sure to put a filter on Duration (>10ms or something) to eliminate all the quick, unimportant statements.
Pull this trace into a table, and do some filtering/sorting/grouping, focusing on Duration and Reads (a rough sketch of such a query follows the two scenarios below). You will likely end up with one of two situations:
(A) A handful of individual queries/statements are responsible for the bulk of the time of the procedure (good news)
(B) A whole lot of similar statements each take a short amount of time, but together they add up to a long time.
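A rough sketch of that load-and-aggregate step, assuming a server-side trace file (the path is a placeholder; on SQL 2005 the Duration column is in microseconds):
SELECT TOP (50)
       CONVERT(nvarchar(200), TextData) AS statement_start,
       COUNT(*)                         AS executions,
       SUM(Duration) / 1000             AS total_duration_ms,
       SUM(Reads)                       AS total_reads
FROM   sys.fn_trace_gettable(N'C:\traces\nightly_process.trc', DEFAULT)
WHERE  Duration IS NOT NULL
GROUP  BY CONVERT(nvarchar(200), TextData)
ORDER  BY total_duration_ms DESC;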
In scenario (A), just focus your attention on these queries. Optimize them using indexes, or using other standard techniques. I highly recommend Dan Tow's book "SQL Tuning" for a powerful technique to optimize queries, especially messy ones with complicated joins.
In scenario (B), step back a bit and look at the set of statements as a whole. Are they all similar in some way? Can you add an index on a key, common table that will improve them all? Can you eliminate a loop that executes 10,000 dynamic queries, and instead do a single set-based query?
Still two other possibilities, I suppose:
(C) 15,000 totally different dynamic SQL statements, each requiring its own painstaking optimization. In this case, try to focus on server-level optimizations, such as I/O based improvements that will benefit them all.
(D) Something else weird going on with TempDB or something mis-configured on the server. Not much else I can say here, other than find the problem, and fix it!
Hope this helps.
Can you try the following scenario on the test server:
Make two copies of the database on the server: [A] and [B]. [A] is the database in question, [B] is the copy.
Restart server
Run your process
Drop the database [A]
Rename [B] to [A]
Run your process
This would be like a hot database swap. If the second run takes longer, something on the server level is happening (tempdb, memory, I/O, etc). If the second run takes about the same time, then the problem is on the database level (locks, index fragmentation, etc).
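The swap itself is roughly this in T-SQL (database names as in the steps above; no connections may be open against either database):
DROP DATABASE [A];
ALTER DATABASE [B] MODIFY NAME = [A];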
Good luck!
Run the following script at start of test and then after each iteration:
select sum(single_pages_kb) as sum_bp_kb
, sum(multi_pages_kb) as sum_va_kb
, type
from sys.dm_os_memory_clerks
group by type
having sum(single_pages_kb+multi_pages_kb) > 16
order by sum(single_pages_kb+multi_pages_kb) desc
select sum(total_pages), type_desc
from tempdb.sys.allocation_units
group by type_desc;
select * from sys.dm_os_performance_counters
where counter_name in (
'Log Truncations'
,'Log Growths'
,'Log Shrinks'
,'Data File(s) Size (KB)'
,'Log File(s) Size (KB)'
,'Active Temp Tables');
If the results are not self-evident, you can post them somewhere and place a link here, I can look into them and see if something strikes as odd.
What does the overall process do, what is the purpose of the operation being performed?
I would assume that executing the process results in data modification within the database. Is this the case?
If this is the case, then each time you run the process the data being considered is different, so different execution plans are possible, and so too are differing execution times.
Assuming that modification of the database data is occurring, you should also investigate (see the sketch after this list):
Updating relevant database statistics between each process run.
Reviewing the level of index fragmentation between each process run and determining whether defragmentation could prove beneficial.
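A sketch of both checks for SQL 2005 (the 30% fragmentation threshold is arbitrary):
-- refresh statistics in the current database
EXEC sp_updatestats;

-- list the most fragmented indexes as defragmentation candidates
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.avg_fragmentation_in_percent
FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN   sys.indexes AS i
       ON  i.object_id = ips.object_id
       AND i.index_id  = ips.index_id
WHERE  ips.avg_fragmentation_in_percent > 30
ORDER  BY ips.avg_fragmentation_in_percent DESC;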
Apparently you want to try anything except what you really have to do, which is fix the process. Start by getting rid of the cursors. If it takes two hours right now, I'll bet you can get it down to less than ten minutes without the cursors.
I would log information into a log table, including the time it took to run each step... that will help you narrow down the issue and also help you progressively improve the process by tackling it one step at a time (starting with the procs that take the longest).
The easiest way is to simply insert a log row at the beginning and the end of each proc.
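A minimal sketch of that idea (the table and column names are made up):
CREATE TABLE dbo.process_log
(
    log_id     int IDENTITY(1,1) PRIMARY KEY,
    proc_name  sysname,
    step_label varchar(100),
    logged_at  datetime DEFAULT GETDATE()
);

-- at the top and bottom of each proc in the chain:
INSERT dbo.process_log (proc_name, step_label)
VALUES (OBJECT_NAME(@@PROCID), 'start');
-- ... body of the procedure ...
INSERT dbo.process_log (proc_name, step_label)
VALUES (OBJECT_NAME(@@PROCID), 'end');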
Cursors are not performance boosters; others have addressed that (not your decision).
Look into the temp table use/management. Are they global temp tables or session/local temp tables? The fact that they are hanging around looks interesting. tempdb is locked when temp tables are created, which might be part of the issue.
Local temp tables (#mytable syntax) should go away when the session goes out of scope, but you SHOULD have dropped these (release early) to free up resources.
Using local temp tables in a transaction that is then cancelled without a COMMIT/ROLLBACK can increase locking in tempdb, causing performance issues.
Speaking of transactions - creating temp tables inside transactions will cause locks on syscolumns, sysindexes, etc., so other executions are blocked from using the same query.
Use of temp tables created by calling procedures inside the called procedures points to a logic need - rethink and try to use relational structures instead.
IF you need temp tables (to eliminate cursors :), then avoid SELECT INTO, to avoid locks on system objects.
Use of global temp tables (##myglobaltable syntax) should be avoided, as multiple sessions accessing them can be an issue (the table hangs around until all sessions clear), and for me at least it adds no logical value (look into the use of a permanent table instead). If global, are there blocking procedures?
Are there a lot of sparse temp tables (tables that grow large but hold smaller data sets)?
Microsoft SQL Server Book Online,
“Consider using table variables instead of temporary tables. Temporary tables are useful in cases when indexes need to be created explicitly on them, or when the table values need to be visible across multiple stored procedures or functions. In general, table variables contribute to more efficient query processing.”
Of course, if the temp table needs indexes, table variables are not an option.
I don't have the answer, but here are some ideas of what I would do to isolate issues like this.
First, I would take snapshots of sys.dm_os_wait_stats before and after each execution. You subtract the two snapshots (get the deltas) and see if any particular WAIT is prominent or gets worse with each run. An easy way to calculate deltas is to copy the sys.dm_os_wait_stats values into Excel worksheets and use VLOOKUP() to subtract corresponding values. I've used this investigation technique hundreds of times. You don't know what aspect SQL Server is hung up on?! Let SQL Server "tell" you via sys.dm_os_wait_stats!
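If Excel is not handy, a quick T-SQL version of the same delta idea (the temp table name is arbitrary):
-- snapshot before the run
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
INTO   #waits_before
FROM   sys.dm_os_wait_stats;

-- ... run the nightly process ...

-- deltas after the run
SELECT a.wait_type,
       a.wait_time_ms        - b.wait_time_ms        AS wait_time_ms_delta,
       a.waiting_tasks_count - b.waiting_tasks_count AS task_count_delta
FROM   sys.dm_os_wait_stats AS a
JOIN   #waits_before        AS b ON b.wait_type = a.wait_type
WHERE  a.wait_time_ms - b.wait_time_ms > 0
ORDER  BY wait_time_ms_delta DESC;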
The other thing I might try is to adjust the behavior of the loop, to understand whether the subsequent slower executions exhibit constant throughput for all records from beginning to end, or whether they only slow down for particular sproc(s) in INFORMATION_SCHEMA.routines. Two techniques for exploring this are:
1) Add a "top N" clause to the SQL SELECT, such as "top 100" or "top 1000" (an artificial limit), to see whether you get the subsequent slowdowns for all record-count scenarios, or only when the cursor result set is large enough to include the offending sproc.
2) Instead of adding "top N", you can add more print statements (instrumentation) to calculate the throughput as it is processing.
Of course, you can do combination of both.
Maybe these diagnostics will get you closer to the root cause.
Edited to add: Btw, SQL2008 has a new performance monitor that makes it easy to "eyeball" the numbers of sys.dm_os_wait_stats. However for SQL2005, you'll have to manually calculate the deltas via Excel or a script.
These are long shots:
Quickly look through all of the stored procedures for things that are unusual and that SQL Server should not really be doing, for example sending email or writing files. SQL Server trying to send email to a non-existent email server could cause delays.
The other thing to keep in mind is that as you restore the database before each test, your disk is possibly getting more fragmented (not really sure about this though). That may explain why run times get longer each time until they plateau.
Firstly, thanks to everyone for some really great help. I much appreciate your time and expertise in helping me to solve this very strange issue. I have an update.
I started a server-side trace to try and isolate the stored procs that were running slower between iterations. What I found surprised me. 96 stored procedures are involved in the process. Most of these stored procedures ran slower the second time around - about 50 of them. The rest were very quick to run and didn't influence the overall time at all, and in fact some of these ran a little quicker (as would be expected).
I failed over the database instance to another node in my cluster and ran the tests there with the exact same results - so I can rule out any OS differences between cluster nodes - when building the clusters I was very conscious to build them identically.
1100 temp tables get created during the process and persist after it has finished - these are all table variables and I found a way to remove them. Running sp_recompile on every proc and function in the database caused all the temp tables to get cleared up. However this did not improve the run times at all. The only thing that helps the run times is a restart of the SQL Server service. Unfortunately I am out of time now to investigate this further - I have other work to do, but would like to persist with it. Perhaps I will come back to it later if I get a spare few hours. In the meantime however, I have to admit defeat with no solution and no bounty to give.
Thanks again everyone.

Clever tricks to find specific LINQ queries in SQL Profiler

Profiling LINQ queries and their execution plans is especially important due to the crazy SQL that can sometimes be created.
I often find that I need to track a specific query and have a hard time finding it in the Profiler trace. I often do this on a database which has a lot of running transactions (sometimes a production server) - so just opening Profiler is no good.
I've also found trying to use the DataContext to trace inadequate, since it doesn't give me SQL I can actually execute myself.
My best strategy so far is to add in a 'random' number to my query, and filter for it in the trace.
LINQ:
where o.CompletedOrderID != "59872547981"
Profiler filter:
'TextData' like '%59872547981'
This works fine, with a couple of caveats:
I have to be careful to remember to remove the criteria, or pick something that won't affect the query plan too much. Yes, I know leaving it in is asking for trouble.
As far as I can tell though, even with this approach I need to start a new trace for every LINQ query I need to track. If I go to 'File > Properties' for an existing trace I cannot change the filter criteria.
You can't beat running a query in your app and seeing it pop up in the Profiler without any extra effort. I was just hoping someone else had come up with a better way than this, or could at least suggest a less 'dangerous' token to search for than a predicate on a column.
Messing with the where clause is maybe not the best thing to do since it can and will affect the execution plans for your queries.
Do something funky with projection into anonymous classes instead - use a unique static column name or something that will not affect the execution plan. (That way you can leave it intact in production code in case you later need to do any profiling of production code...)
from someobject in dc.SomeTable
where someobject.xyz == 123
select new { MyObject = someobject, QueryTraceID1234132412 = "boo" }
You can use the Linq to SQL Debug Visualiser - http://weblogs.asp.net/scottgu/archive/2007/07/31/linq-to-sql-debug-visualizer.aspx and see it in your watch window.
Or you can use DataContext.GetCommand(); to see the SQL before it executes.
You can also look at the DataContext.GetChangeSet() to view what's going to be inserted/ updated or deleted.
EF Core has a feature, TagWith(), for exactly this purpose.
var nearestFriends =
    (from f in context.Friends.TagWith("This is my spatial query!")
     orderby f.Location.Distance(myLocation) descending
     select f).Take(5).ToList();
https://learn.microsoft.com/en-us/ef/core/querying/tags
Unfortunately you can't use Query Store to find them :-)
This is because comments before the query are stripped out.
Such a shame! Hope I don't have to wait another 12 years.
You can have your datacontext log out the raw SQL, which you could then search for in the profiler to examine performance.
// DebuggerWriter is a custom TextWriter helper (not part of the .NET Framework) that
// forwards writes to the debugger output window; DataContext.Log accepts any TextWriter.
yourDataContext.Log = new DebuggerWriter();
All of your SQL queries will be displayed in the debugger output window now.
