How to reduce SQL Server transaction log usage

We have an application that writes logs to Azure SQL tables. The structure of the table is as follows.
CREATE TABLE [dbo].[xyz_event_history]
(
[event_history_id] [uniqueidentifier] NOT NULL,
[event_date_time] [datetime] NOT NULL,
[instance_id] [uniqueidentifier] NOT NULL,
[scheduled_task_id] [int] NOT NULL,
[scheduled_start_time] [datetime] NULL,
[actual_start_time] [datetime] NULL,
[actual_end_time] [datetime] NULL,
[status] [int] NOT NULL,
[log] [nvarchar](max) NULL,
CONSTRAINT [PK__crg_scheduler_event_history] PRIMARY KEY NONCLUSTERED
(
[event_history_id] ASC
)
)
The table is stored as a clustered index on the scheduled_task_id column (non-unique).
CREATE CLUSTERED INDEX [IDX__xyz_event_history__scheduled_task_id] ON [dbo].[xyz_event_history]
(
[scheduled_task_id] ASC
)
The event_history_id is generated by the application; it's a random (not sequential) GUID. The application creates, updates, and removes old entities from the table. The log column usually holds 2-10 KB of data, but it can grow to 5-10 MB in some cases. Items are usually accessed by PK (event_history_id), and the most frequent sort order is event_date_time desc.
The problem we see after lowering the Azure SQL performance tier to "S3" (100 DTUs) is that we cross the transaction log rate limit. It can be seen clearly in sys.dm_exec_requests - there are requests with the wait type LOG_RATE_GOVERNOR (msdn):
Occurs when DB is waiting for quota to write to the log.
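A query along these lines is enough to spot the affected requests (just a sketch; the only assumption is the filter on wait_type):

SELECT session_id, status, command, wait_type, wait_time, total_elapsed_time
FROM sys.dm_exec_requests
WHERE wait_type = N'LOG_RATE_GOVERNOR';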
The operations I've noticed having the biggest impact on log rate are deletions from xyz_event_history and updates to the log column. The updates are made in the following fashion.
UPDATE xyz_event_history
SET [log] = COALESCE([log], '') + @log_to_append
WHERE event_history_id = @id
The recovery model for Azure SQL databases is FULL and cannot be changed.
Here are the physical index statistics - there are many records approaching the 8 KB per-row limit.
TableName AllocUnitTp PgCt AvgPgSpcUsed RcdCt MinRcdSz MaxRcdSz
xyz_event_history IN_ROW_DATA 4145 47.6372868791698 43771 102 7864
xyz_event_history IN_ROW_DATA 59 18.1995058067705 4145 11 19
xyz_event_history IN_ROW_DATA 4 3.75277983691623 59 11 19
xyz_event_history IN_ROW_DATA 1 0.914257474672597 4 11 19
xyz_event_history LOB_DATA 168191 97.592290585619 169479 38 8068
xyz_event_history IN_ROW_DATA 7062 3.65090190264393 43771 38 46
xyz_event_history IN_ROW_DATA 99 22.0080800593032 7062 23 23
xyz_event_history IN_ROW_DATA 1 30.5534964170991 99 23 23
xyz_event_history IN_ROW_DATA 2339 9.15620212503089 43771 16 38
xyz_event_history IN_ROW_DATA 96 8.70488015814184 2339 27 27
xyz_event_history IN_ROW_DATA 1 34.3711391153941 96 27 27
xyz_event_history IN_ROW_DATA 1054 26.5034840622683 43771 28 50
xyz_event_history IN_ROW_DATA 139 3.81632073140598 1054 39 39
xyz_event_history IN_ROW_DATA 1 70.3854707190511 139 39 39
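For reference, these are the kind of numbers sys.dm_db_index_physical_stats returns in DETAILED mode; a query roughly like this one (the column aliases are my own) produces them:

SELECT OBJECT_NAME(object_id) AS TableName,
       alloc_unit_type_desc AS AllocUnitTp,
       page_count AS PgCt,
       avg_page_space_used_in_percent AS AvgPgSpcUsed,
       record_count AS RcdCt,
       min_record_size_in_bytes AS MinRcdSz,
       max_record_size_in_bytes AS MaxRcdSz
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.xyz_event_history'), NULL, NULL, 'DETAILED');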
Is there a way to reduce transaction log usage?
How does SQL Server log UPDATE transactions like the one above? Is it just the "old" plus the "new" value? (That would conceivably make frequently appending small pieces of data quite inefficient in terms of transaction log size.)
UPDATE (April, 20):
I've experimented with the suggestions from the answers and was impressed by the difference that using INSERT instead of UPDATE makes.
As per the following MSDN article about SQL Server transaction log internals (https://technet.microsoft.com/en-us/library/jj835093(v=sql.110).aspx):
Log records for data modifications record either the logical operation
performed or they record the before and after images of the modified
data. The before image is a copy of the data before the operation is
performed; the after image is a copy of the data after the operation
has been performed.
This automatically makes the scenario with UPDATE ... SET X = X + 'more' highly inefficient in terms of transaction log usage - it requires capturing the "before image".
I've created a simple test suite to compare the original way of appending data to the "log" column against simply inserting each new piece of data into a separate table. The results were quite astonishing (at least for me, as someone not too experienced with SQL Server).
The test is simple: append a 1,024-character chunk of log 5,000 times - just 5 MB of text as the result (not as bad as one might think).
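For illustration, the two variants look roughly like this (just a sketch; the test table names dbo.log_as_column and dbo.log_as_rows are placeholders, not my exact harness):

-- UPDATE variant: keep appending 1,024 characters to the same row
DECLARE @i int = 0, @chunk nvarchar(max) = REPLICATE(N'x', 1024);
WHILE @i < 5000
BEGIN
    UPDATE dbo.log_as_column SET [log] = COALESCE([log], N'') + @chunk WHERE id = 1;
    SET @i += 1;
END;

-- INSERT variant: every chunk becomes its own row in a separate table
SET @i = 0;
WHILE @i < 5000
BEGIN
    INSERT INTO dbo.log_as_rows (parent_id, chunk) VALUES (1, @chunk);
    SET @i += 1;
END;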
FULL recovery mode, SQL Server 2014, Windows 10, SSD
                   UPDATE       INSERT
Duration           07:48 (!)    00:02
Data file growth   ~8MB         ~8MB
Tran. log growth   ~218MB (!)   0MB (why?!)
Just 5,000 updates appending 1 KB each can tie SQL Server up for 8 minutes (wow!) - I didn't expect that!
I think the original question is resolved at this point, but the following new ones came up:
Why does the transaction log growth look linear (not quadratic, as we could expect from simply capturing "before" and "after" images)? From the diagram we can see that "items per second" grows proportionally to the square root - which is what you'd expect if the overhead grows linearly with the number of items inserted.
Why, in the INSERT case, does the transaction log appear to be the same size as before any inserts at all?
I took a look at the transaction log (with Dell's Toad) for the INSERT case, and it looks like only the last 297 items are in there - conceivably the transaction log got truncated, but why, if it's in FULL recovery mode?
UPDATE (April, 21).
DBCC LOGINFO output for the INSERT case - before and after. The physical size of the log file matches the output - exactly 1,048,576 bytes on disk.
Why does it look like the transaction log stays put?
RecoveryUnitId FileId FileSize StartOffset FSeqNo Status Parity CreateLSN
0 2 253952 8192 131161 0 64 0
0 2 253952 262144 131162 2 64 0
0 2 253952 516096 131159 0 128 0
0 2 278528 770048 131160 0 128 0
RecoveryUnitId FileId FileSize StartOffset FSeqNo Status Parity CreateLSN
0 2 253952 8192 131221 0 128 0
0 2 253952 262144 131222 0 128 0
0 2 253952 516096 131223 2 128 0
0 2 278528 770048 131224 2 128 0
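The listings above are plain DBCC LOGINFO output; checking the log reuse reason next to it makes the picture clearer (sketch; the database name is a placeholder):

DBCC LOGINFO;

SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'TestDb';   -- placeholder database name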
For those who are interested, I've recorded "sqlservr.exe" activity using Process Monitor - I can see the log file being overwritten again and again; it looks like SQL Server treats the old log records as no longer needed for some reason: https://dl.dropboxusercontent.com/u/1323651/stackoverflow-sql-server-transaction-log.pml.
UPDATE (April, 24). It seems I've finally started to understand what is going on and want to share it with you. The reasoning above is true in general, but it has a serious caveat that also produced the confusion about the strange transaction log reuse with INSERTs.
A database will behave as if it were in SIMPLE recovery mode until the first full
backup is taken (even though it is set to FULL recovery mode).
We can treat the numbers and diagram above as valid for SIMPLE recovery mode, and I had to redo my measurements for real FULL - they are even more astonishing.
                   UPDATE       INSERT
Duration           13:20 (!)    00:02
Data file growth   8MB          11MB
Tran. log growth   55.2GB (!)   14MB
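In other words, to see the "real FULL" numbers the log chain has to be started first; a sketch of what that means (local test instance, path and database name are placeholders):

-- until this first full backup exists, log records are discarded at checkpoint,
-- exactly as in SIMPLE recovery
BACKUP DATABASE TestDb TO DISK = N'C:\temp\TestDb_full.bak';

-- from now on the log keeps growing until a log backup is taken
BACKUP LOG TestDb TO DISK = N'C:\temp\TestDb_log.trn';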

You are violating one of the basic tenets of normal form with the log field. The log field seems to be holding an ever-appended sequence of info related to the primary row. The fix is to stop doing that.
1. Create a table: xyz_event_history_LOG(event_history_id, log_sequence#, log).
2. Stop doing updates to the log field in [xyz_event_history]; instead, do inserts into xyz_event_history_LOG.
The amount of data in your transaction log will decrease greatly.
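A minimal sketch of that structure (column types are my assumption, based on the existing table; @id, @next_sequence and @log_to_append stand for whatever values the application already has):

CREATE TABLE [dbo].[xyz_event_history_LOG]
(
    [event_history_id] [uniqueidentifier] NOT NULL,
    [log_sequence]     [int]              NOT NULL,
    [log]              [nvarchar](max)    NULL,
    CONSTRAINT [PK_xyz_event_history_LOG] PRIMARY KEY ([event_history_id], [log_sequence])
);

-- instead of UPDATE ... SET [log] = [log] + @log_to_append:
INSERT INTO [dbo].[xyz_event_history_LOG] ([event_history_id], [log_sequence], [log])
VALUES (@id, @next_sequence, @log_to_append);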

The transaction log contains all the changes to a database in the order they were made, so if you update a row multiple times you will get multiple entries for that row. It does store the entire value, old and new, so you are correct that multiple small updates to a large data type such as nvarchar(max) would be inefficient; you would be better off storing the updates in separate columns if they are only small values.
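If you want to see those entries yourself, the undocumented fn_dblog function lists individual log records and their sizes (a sketch; best run in a scratch database):

SELECT [Current LSN], Operation, Context, [Log Record Length], AllocUnitName
FROM fn_dblog(NULL, NULL)
WHERE AllocUnitName LIKE N'%xyz_event_history%';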

Related

COLUMNS_UPDATED() skips a bit starting with columns in the middle of the table

I'm using COLUMNS_UPDATED() in a trigger to identify those columns whose values should be written to an audit table. The trigger / auditing had been working fine for multiple years. I noticed yesterday that the auditing is no longer working consistently.
I've listed the first forty columns of the table in question at the bottom for reference, along with the ORDINAL_POSITION from INFORMATION_SCHEMA.COLUMNS. The table has a total of 109 columns.
I added print COLUMNS_UPDATED() to my trigger to get some debug info.
When I update CurrentOnFleaTick, the 9th column, I see this printed:
0x0001000000000000000000000000
This is expected - the 9th column should be represented as the least significant bit of the second byte. Similarly, if I update HasAttackedAnotherAnimalExplanation I see this:
0x0000010000000000000000000000
Again, expected - the 17th column should be represented as the least significant bit of the third byte.
But... when I update HouseholdIncludesCats, I see this:
0x0000000200000000000000000000
Not expected! Where you see the 2 there should be a 1, as HouseholdIncludesCats ordinal position is 25, making it the first column represented in the fourth byte, which should be represented in the least significant bit of that byte.
I narrowed things down by updating every column between HasAttackedAnotherAnimalExplanation and HouseholdIncludesCats and found that the 'off by one' problem I'm having starts with HouseTrainedId, ordinal position 24. When updating HouseTrainedId I'm expecting
0x0000800000000000000000000000
but instead I get
0x0000000100000000000000000000
which I believe is wrong, and it is what I expect to be getting for updates to the HouseholdIncludesCats column.
I do not believe the mask should skip ahead. The mask is currently not using the most significant bit of the 3rd byte.
I did recently drop a column, but I don't have a record of its ordinal position. Based on the original code that would have created the table, I believe the ordinal position of the column that was dropped was NOT 24. (I think it was 7... It had been defined after the BreedIds.)
I'm not necessarily looking for a deep root cause determination. If there was something I could do to reset whatever internal data SQL Server uses that'd be fine. Sort of like a rebuild index idea for table metadata? Is there something like that that might fix this?
Thanks in advance for helpful answers! :)
COLUMN_NAME ORDINAL_POSITION
PetId 1
AdopterUserId 2
AdoptionDeadline 3
AgeMonths 4
AgeYears 5
BreedIds 6
Color 7
CreatedOn 8
CurrentOnFleaTick 9
CurrentOnHeartworm 10
CurrentOnVaccinations 11
FoodTypeId 12
GenderId 13
GuardianForMonths 14
GuardianForYears 15
HairCoatLength 16
HasAttackedAnotherAnimalExplanation 17
HasAttackedAnotherAnimalId 18
HasBeenReferredByShelter 19
HasHadTraining 20
HasMedicalConditions 21
HasRecentlyBittenExplanation 22
HasRecentlyBittenId 23
HouseTrainedId 24
HouseholdIncludesCats 25
HouseholdIncludesChildren5to10 26
HouseholdIncludesChildrenUnder5 27
HouseholdIncludesDogs 28
HouseholdIncludesOlderChildren 29
HouseholdIncludesOtherPets 30
HouseholdOtherPets 31
KnowsCommandDown 32
KnowsCommandPaw 33
KnowsCommandSit 34
KnowsCommandStay 35
KnowsOtherCommands 36
LastUpdatedOn 37
LastVisitedVetOn 38
ListingCodeId 39
LitterTypeClumping 40
So... I thought I had googled enough before posting this, but I guess I hadn't. I found this:
https://www.sqlservercentral.com/forums/topic/columns_updated-and-phantom-fields
Using COLUMNPROPERTY() to get the column ID is definitely the way to go.
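For anyone else hitting this: the fix is to stop mapping bits from ORDINAL_POSITION and use the real column id instead. A sketch of the pattern inside the trigger (the table name dbo.Pets is a placeholder for the real table):

DECLARE @colId int = COLUMNPROPERTY(OBJECT_ID(N'dbo.Pets'), N'HouseholdIncludesCats', 'ColumnId');

-- test the bit for that column id in COLUMNS_UPDATED()
IF (SUBSTRING(COLUMNS_UPDATED(), (@colId - 1) / 8 + 1, 1) & POWER(2, (@colId - 1) % 8)) > 0
    PRINT 'HouseholdIncludesCats was updated';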

Cannot repair specific tables on specific nodes in Cassandra

I'm running 5 nodes in one DC of Cassandra 3.10.
As part of maintaining those nodes, I run the following daily on every node:
nodetool repair -pr
and weekly:
nodetool repair -full
This is the only table I'm having difficulties with:
Table: user_tmp
SSTable count: 4
Space used (live): 366.71 MiB
Space used (total): 366.71 MiB
Space used by snapshots (total): 216.87 MiB
Off heap memory used (total): 5.28 MiB
SSTable Compression Ratio: 0.4690289976332873
Number of keys (estimate): 1968368
Memtable cell count: 2353
Memtable data size: 84.98 KiB
Memtable off heap memory used: 0 bytes
Memtable switch count: 1108
Local read count: 62938927
Local read latency: 0.324 ms
Local write count: 62938945
Local write latency: 0.018 ms
Pending flushes: 0
Percent repaired: 76.94
Bloom filter false positives: 0
Bloom filter false ratio: 0.00000
Bloom filter space used: 4.51 MiB
Bloom filter off heap memory used: 4.51 MiB
Index summary off heap memory used: 717.62 KiB
Compression metadata off heap memory used: 76.96 KiB
Compacted partition minimum bytes: 51
Compacted partition maximum bytes: 654949
Compacted partition mean bytes: 194
Average live cells per slice (last five minutes): 2.503074492537404
Maximum live cells per slice (last five minutes): 179
Average tombstones per slice (last five minutes): 1.0
Maximum tombstones per slice (last five minutes): 1
Dropped Mutations: 19 bytes
Percent repaired never goes above 80% for this table on this node and one other, but on the others it is above 85%. RF is 3, and the compaction strategy is SizeTieredCompactionStrategy.
gc_grace_seconds is set to 10 days, and somewhere in that period I get a write timeout on exactly this table, but the consumer that got the timeout is immediately replaced by another one and everything keeps going as if nothing happened. It's like a one-time write timeout.
My question is: do you perhaps have a suggestion for a better repair strategy? I'm kind of a noob, so every suggestion is a big win for me, as is any other advice for this table.
Maybe repair -inc instead of repair -pr?
The nodetool repair command in Cassandra 3.10 defaults to running incremental repair. There have been some major issues with incremental repair, and it's currently not recommended by the community. Please see this article for some great insight into repair and the issues with incremental repair: http://thelastpickle.com/blog/2017/12/14/should-you-use-incremental-repair.html
I would recommend, as do many others, running:
nodetool repair -full -pr
Please be aware that you need to run repair on every node in your cluster. This means that if you run repair on one node per day, you can have a maximum of 7 nodes (since with the default gc_grace you should aim to finish repair within 7 days). You also have to rely on nothing going wrong during a repair, since you would have to restart any failed jobs.
This is why tools like Reaper exist. It solves these issues with ease: it automates repair and makes life simpler. Reaper runs scheduled repairs and provides a web interface to make administration easier. I would highly recommend using Reaper for routine maintenance and nodetool repair for unplanned activities.
Edit: Link http://cassandra-reaper.io/

11 seconds to delete 240 rows in SQL Server

I am running a delete statement:
DELETE FROM TransactionEntries
WHERE SessionGUID = @SessionGUID
The actual execution plan of the delete is:
Execution Tree
--------------
Clustered Index Delete(
OBJECT:([GrobManagementSystemLive].[dbo].[TransactionEntries].[IX_TransactionEntries_SessionGUIDTransactionGUID]),
WHERE:([TransactionEntries].[SessionGUID]=[@SessionGUID])
)
The table is clustered by SessionGUID, so the 240 rows are physically together.
The table has no triggers on it.
The operation takes:
Duration: 11821 ms
CPU: 297
Reads: 14340
Writes: 1707
The table contains 11 indexes:
1 clustered index (SessionGUID)
1 unique (primary key) index
9 other non-unique, non-clustered indexes
How can I figure out why this delete operation is performing 14,340 reads and taking 11 seconds?
the Avg. Disk Read Queue Length reaches 0.8
the Avg. Disk sec/Read never exceeds 4ms
the Avg. Disk Write Queue Length reaches 0.04
the Avg. Disk sec/Write never exceeds 4ms
What are the other reads for? The execution plan gives no indication of what it's reading.
Update:
EXECUTE sp_spaceused TransactionEntries
TransactionEntries
Rows 6,696,199
Data: 1,626,496 KB (249 bytes per row)
Indexes: 7,303,848 KB (1117 bytes per row)
Unused: 91,648 KB
============
Reserved: 9,021,992 KB (1380 bytes per row)
With 1,380 bytes per row and 240 rows, that's roughly 330 KB to be deleted.
It's counterintuitive that deleting 330 KB can be so difficult.
Update Two: Fragmentation
Name Scan Density Logical Fragmentation
============================= ============ =====================
IX_TransactionEntries_Tran... 12.834 48.392
IX_TransactionEntries_Curr... 15.419 41.239
IX_TransactionEntries_Tran... 12.875 48.372
TransactionEntries17 98.081 0.0049325
TransactionEntries5 12.960 48.180
PK_TransactionEntries 12.869 48.376
TransactionEntries18 12.886 48.480
IX_TranasctionEntries_CDR... 12.799 49.157
IX_TransactionEntries_CDR... 12.969 48.103
IX_TransactionEntries_Tra... 13.181 47.127
I defragmented TransactionEntries17:
DBCC INDEXDEFRAG (0, 'TransactionEntries', 'TransactionEntries17')
since INDEXDEFRAG is an "online operation" (i.e. it only holds IS, Intent Shared, locks). I was going to manually defragment the others next, until business operations called, saying that the system was dead - and they switched to doing everything on paper.
What say you: can 50% fragmentation and only 12% scan density cause horrible index scan performance?
As @JoeStefanelli points out in comments, it's the extra non-clustered indexes.
You are deleting 240 rows from the table.
This equates to 2640 index rows, 240 of which include all fields in the table.
Depending on how wide they are and how many included fields you have, this could equate to all the extra read activity you are seeing.
The non-clustered index rows will definitely NOT be grouped together on disk, which will increase delays.
I think the indexing might be the likeliest culprit, but I wanted to throw out another possibility. You mentioned no triggers, but are there any tables that have a foreign key relationship to this table? They would have to be checked to make sure no records are in them, and if you have cascade delete turned on, those records would have to be deleted as well.
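A quick way to check for those relationships (a sketch, using only the catalog views):

SELECT fk.name AS foreign_key,
       OBJECT_NAME(fk.parent_object_id) AS referencing_table,
       fk.delete_referential_action_desc
FROM sys.foreign_keys AS fk
WHERE fk.referenced_object_id = OBJECT_ID(N'dbo.TransactionEntries');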
Having banged my head on many-a-SQL performance issue, my standard operating procedure for something like this is to:
1. Back up the data
2. Delete one of the indexes on the table in question
3. Measure the operation (see the sketch after this list)
4. Restore the DB
5. Repeat with #2 until #3 shows a drastic change. That's likely your culprit.
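Step 3 can be as simple as wrapping the delete in a transaction that you roll back, with statistics turned on (a sketch of the measurement only; @SessionGUID is the parameter from the question):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

BEGIN TRANSACTION;
DELETE FROM TransactionEntries WHERE SessionGUID = @SessionGUID;
ROLLBACK TRANSACTION;   -- keep the data so the test can be repeated

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;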

Should I add a common property of foreign keys to my table?

I have a database of test data that have been collected on behalf of agents. The test data are grouped together (after the fact) into result sets. As the tests come in, they are stored in the database with the ID of the corresponding agent:
TEST_ID TEST_OWNER TIMESTAMP RESULT_ID
1 1 0 null
2 1 15 null
3 2 30 null
4 2 32 null
5 1 34 null
The result sets are generated at a later time in such a way that groups tests that took place during a similar time frame. This judgment cannot be made as the tests come in.
RESULT_ID
1
2
3
All of the tests in a result set must belong to the same owner. I can ensure this (in code) as I assign the result IDs to the tests in my later operation, but some things would be easier if I had a TEST_OWNER field in my result set table.
Would adding this field be a violation of some normalization goal? The TEST_OWNER information will be duplicated, even though one instance of it is really implicit. I'm not a DBA, and I don't want to do things that are bad style.
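For concreteness, this is roughly the shape being discussed, with the proposed TEST_OWNER column on the result set table (SQL Server syntax; the composite key and foreign key are just one optional way to keep the duplicated owner consistent, not part of the original design):

CREATE TABLE RESULT_SET
(
    RESULT_ID  int NOT NULL PRIMARY KEY,
    TEST_OWNER int NOT NULL,   -- the proposed duplicated column
    CONSTRAINT UQ_RESULT_SET_OWNER UNIQUE (RESULT_ID, TEST_OWNER)
);

CREATE TABLE TEST
(
    TEST_ID     int NOT NULL PRIMARY KEY,
    TEST_OWNER  int NOT NULL,
    [TIMESTAMP] int NOT NULL,
    RESULT_ID   int NULL,
    -- referencing (RESULT_ID, TEST_OWNER) means a test can only be attached
    -- to a result set that records the same owner
    CONSTRAINT FK_TEST_RESULT_SET FOREIGN KEY (RESULT_ID, TEST_OWNER)
        REFERENCES RESULT_SET (RESULT_ID, TEST_OWNER)
);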
Jim, I am not completely sure if you are saying this is a table in your DB:
TEST_ID TEST_OWNER TIMESTAMP RESULT_ID
1 1 0 null
2 1 15 null
3 2 30 null
4 2 32 null
5 1 34 null
If so, the first thing I would do is pull the result attribute out of this table to achieve normalization. Or is this your Result table?
Regardless, are these results being derived from other data in the DB? If so, I don't see the need to duplicate things and store the (calculated) results as well. Just derive them as needed and keep the DB clean.
If you need further info, I need a better understanding of what you are presenting.

Performance-related: How does SQL Server process concurrent queries from multiple connections from a single .NET desktop application?

Single-threaded version description:
Program gathers a list of questions.
For each question, get model answers, and run each one through a scoring module.
Scoring module makes a number of (read-only) database queries.
Serial processing, single database connection.
I decided to multi-thread the program described above by splitting the question list into chunks and creating a thread for each one.
Each thread opens its own database connection and works on its own list of questions (about 95 questions on each of 6 threads). The application waits for all threads to finish, then aggregates the results for display.
To my surprise, the multi-threaded version ran in approximately the same time, taking about 16 seconds instead of 17.
Questions:
Why am I not seeing the kind of performance gain I would expect from executing queries concurrently on separate threads with separate connections? The machine has 8 processors.
Will SQL Server process queries concurrently when they are coming from a single application, or might it (or .net itself) be serializing them?
Might there be something misconfigured, that would make it go faster, or might I just be pushing SQL Server to its computational limits?
Current configuration:
Microsoft SQL Server Developer Edition 9.0.1406 RTM
OS: Windows Server 2003 Standard
Processors: 8
RAM: 4GB
This is just a shot in the dark, but I bet you are not seeing the performance gain because the queries serialize themselves in the database due to locking of shared resources (records). Now for the small print.
I assume your C# code is actually correct and you actually do start separate threads and issue each query in parallel. No offense, but I've seen many make that claim while the code was actually serial in the client, for various reasons. You should validate this by monitoring the server (via Profiler, or via sys.dm_exec_requests and sys.dm_exec_sessions).
I also assume that your queries are of similar weight; i.e., you do not have one thread that lasts 15 seconds and 5 that last 100 ms.
The symptoms you describe, in the absence of more details, would suggest that you have a write operation at the beginning of each thread that takes an X lock on some resource. The first thread starts and locks the resource, the other 5 wait. The 1st thread finishes and releases the resource, then the next one grabs it, and the other 4 wait. So the last thread has to wait for the execution of all the other 5. This would be extremely easy to troubleshoot by looking at sys.dm_exec_requests and monitoring what blocks the requests.
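A sketch of that check (only the column selection is mine):

SELECT r.session_id, r.status, r.command, r.wait_type, r.wait_resource,
       r.blocking_session_id, s.host_name, s.program_name
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s ON s.session_id = r.session_id
WHERE s.is_user_process = 1;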
BTW, you should consider using Asynchronous Processing=true and relying on the async methods like BeginExecuteReader to launch your commands in parallel without the overhead of client-side threads.
You can simply check Task Manager while the process is running. If it's showing 100% CPU usage, then it's CPU bound; otherwise it's I/O bound.
With hyperthreading, 50% CPU usage is roughly equal to 100% usage!
Wow, I didn't realize how old the thread was. I guess it's always good to leave a response for others looking.
How large is your database?
How fast are your HDDs / RAID / other storage?
Perhaps your DB is I/O bound?
My first inclination is that you're trying to solve an I/O problem with threads, which almost never works. I/O is I/O, and more threads don't widen the pipe. You'd be better off downloading all questions and their answers in one batch and processing the batch locally with multiple threads.
Having said that, you're probably experiencing some DB locking that is causing slowness. Since you're talking about read-only queries, try using the WITH (NOLOCK) hint on your queries to see if that helps.
Regarding SQL Server processing, it is my understanding that SQL Server will try to process as many connections concurrently as possible (one statement at a time per connection), up to the maximum connections allowed by configuration. The kind of issue you're seeing is almost never a thread issue and almost always a locking or I/O problem.
Is it possible that the threads share a connection? Did you verify that multiple SPIDs are created when this runs (sp_who)?
I ran a join query across sys.dm_os_workers, sys.dm_os_tasks, and sys.dm_exec_requests on task_address, and here are the results (some uninteresting/zero-valued fields excluded, others prefixed with ex or os to resolve ambiguities):
-COL_NAME- -Thread_1- -Thread_2- -Thread_3- -Thread_4-
task_state SUSPENDED SUSPENDED SUSPENDED SUSPENDED
context_switches_count 2 2 2 2
worker_address 0x3F87A0E8 0x5993E0E8 0x496C00E8 0x366FA0E8
is_in_polling_io_completion_routine 0 0 0 0
pending_io_count 0 0 0 0
pending_io_byte_count 0 0 0 0
pending_io_byte_average 0 0 0 0
wait_started_ms_ticks 1926478171 1926478187 1926478171 1926478187
wait_resumed_ms_ticks 1926478171 1926478187 1926478171 1926478187
task_bound_ms_ticks 1926478171 1926478171 1926478156 1926478171
worker_created_ms_ticks 1926137937 1923739218 1921736640 1926137890
locale 1033 1033 1033 1033
affinity 1 4 8 32
state SUSPENDED SUSPENDED SUSPENDED SUSPENDED
start_quantum 3074730327955210 3074730349757920 3074730321989030 3074730355017750
end_quantum 3074730334339210 3074730356141920 3074730328373030 3074730361401750
quantum_used 6725 11177 11336 6284
max_quantum 4 15 5 20
boost_count 999 999 999 999
tasks_processed_count 765 1939 1424 314
os.task_address 0x006E8A78 0x00AF12E8 0x00B84C58 0x00D2CB68
memory_object_address 0x3F87A040 0x5993E040 0x496C0040 0x366FA040
thread_address 0x7FF08E38 0x7FF8CE38 0x7FF0FE38 0x7FF92E38
signal_worker_address 0x4D7DC0E8 0x571360E8 0x2F8560E8 0x4A9B40E8
scheduler_address 0x006EC040 0x00AF4040 0x00B88040 0x00E40040
os.request_id 0 0 0 0
start_time 2009-05-26 19:39 39:43.2 39:43.2 39:43.2
ex.status suspended suspended suspended suspended
command SELECT SELECT SELECT SELECT
sql_handle 0x020000009355F1004BDC90A51664F9174D245A966E276C61 0x020000009355F1004D8095D234D39F77117E1BBBF8108B26 0x020000009355F100FC902C84A97133874FBE4CA6614C80E5 0x020000009355F100FC902C84A97133874FBE4CA6614C80E5
statement_start_offset 94 94 94 94
statement_end_offset -1 -1 -1 -1
plan_handle 0x060007009355F100B821C414000000000000000000000000 0x060007009355F100B8811331000000000000000000000000 0x060007009355F100B801B259000000000000000000000000 0x060007009355F100B801B259000000000000000000000000
database_id 7 7 7 7
user_id 1 1 1 1
connection_id BABF5455-409B-4F4C-9BA5-B53B35B11062 A2BBCACF-D227-466A-AB08-6EBB56F34FF2 D330EDFE-D49B-4148-B7C5-8D26FE276D30 649F0EC5-CB97-4B37-8D4E-85761847B403
blocking_session_id 0 0 0 0
wait_type CXPACKET CXPACKET CXPACKET CXPACKET
wait_time 46 31 46 31
ex.last_wait_type CXPACKET CXPACKET CXPACKET CXPACKET
wait_resource
open_transaction_count 0 0 0 0
open_resultset_count 1 1 1 1
transaction_id 3052202 3052211 3052196 3052216
context_info 0x 0x 0x 0x
percent_complete 0 0 0 0
estimated_completion_time 0 0 0 0
cpu_time 0 0 0 0
total_elapsed_time 54 41 65 39
reads 0 0 0 0
writes 0 0 0 0
logical_reads 78745 123090 78672 111966
text_size 2147483647 2147483647 2147483647 2147483647
arithabort 0 0 0 0
transaction_isolation_level 2 2 2 2
lock_timeout -1 -1 -1 -1
deadlock_priority 0 0 0 0
row_count 6 0 1 1
prev_error 0 0 0 0
nest_level 2 2 2 2
granted_query_memory 512 512 512 512
The query plan for all queries shows a couple of nodes: 0% for the SELECT and 100% for a clustered index seek.
Edit: The fields and values I left out were (same for all 4 threads, except for context_switch_count): exec_context_id(0), host_address(0x00000000), status(0), is_preemptive(0), is_fiber(0), is_sick(0), is_in_cc_exception(0), is_fatal_exception(0), is_inside_catch(0), context_switch_count(3-89078), exception_num(0), exception_Severity(0), exception_address(0x00000000), return_code(0), fiber_address(NULL), language(us_english), date_format(mdy), date_first(7), quoted_identifier(1), ansi_defaults(0), ansi_warnings(1), ansi_padding(1), ansi_nulls(1), concat_null_yields_null(1), executing_managed_code(0)
