I have written a relatively complicated query that returns 7k rows in 1-2 s.
But when I add a WHERE condition as follows, it takes 10-20 s:
WHERE exists (select 1 from BOM where CustomerId = @__customerId_0 and Code = queryAbove.Code)
I've checked the execution plan and the problem is not the BOM table from the WHERE clause itself (it is indexed and returns approximately 100 rows).
According to the actual execution plan, 90% of the cost is consumed by the subquery that is fast without this WHERE condition.
The question is: why are the estimates in the execution plan so inaccurate? Is this "normal", or is something off with my database?
I have tried sp_updatestats and rebuilding the indexes - it didn't have any effect.
As per the comment, here is the query and execution plan: https://www.brentozar.com/pastetheplan/?id=HkKgCWGuq and my indexes:
ALTER TABLE [dbo].[ČíselníkMateriál] ADD CONSTRAINT [PK_SkladČíselníkMateriál] PRIMARY KEY CLUSTERED
( [KódMateriálu] ASC )
ALTER TABLE [dbo].[SkladVýdajMateriál] ADD CONSTRAINT [PK_SkladVýdajMateriál] PRIMARY KEY CLUSTERED
( [ID] ASC )
ALTER TABLE [dbo].[Výrobky] ADD CONSTRAINT [Výrobky_PK] PRIMARY KEY CLUSTERED
( [ID] ASC )
ALTER TABLE [dbo].[VýrobkyMateriál] ADD CONSTRAINT [PK_VýrobkyMateriál] PRIMARY KEY CLUSTERED
( [ID] ASC )
ALTER TABLE [dbo].[Zákazky] ADD CONSTRAINT [Zákazky_PK] PRIMARY KEY CLUSTERED
( [ID] ASC )
CREATE UNIQUE NONCLUSTERED INDEX [UQ_ČíselníkMateriál_ID] ON [dbo].[ČíselníkMateriál]
( [ID] ASC ) WITH (IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
CREATE NONCLUSTERED INDEX [IX_SkladVýdajMateriál_IDVýrobky] ON [dbo].[SkladVýdajMateriál]
( [IDVýrobky] ASC )
CREATE NONCLUSTERED INDEX [IX_SkladVýdajMateriál_KódMateriálu] ON [dbo].[SkladVýdajMateriál]
( [KódMateriálu] ASC )
CREATE NONCLUSTERED INDEX [IX_Výrobky_Status] ON [dbo].[Výrobky]
( [Status] ASC )
CREATE NONCLUSTERED INDEX [NonClusteredIndex-20220527-162213] ON [dbo].[Výrobky]
( [ID] ASC ) INCLUDE([IDZákazky],[Množstvo]) WITH (DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
CREATE NONCLUSTERED INDEX [VýrobkyIDZákazky] ON [dbo].[Výrobky]
( [IDZákazky] ASC ) INCLUDE([Množstvo])
CREATE NONCLUSTERED INDEX [IX_VýrobkyMateriál_IdMateriálu] ON [dbo].[VýrobkyMateriál]
( [IdMateriálu] ASC ) INCLUDE([IdVýrobku],[Množstvo],[ID])
CREATE NONCLUSTERED INDEX [IX_VýrobkyMateriál_IdVýrobku] ON [dbo].[VýrobkyMateriál]
( [IdVýrobku] ASC ) INCLUDE([IdMateriálu],[Množstvo],[ID])
CREATE NONCLUSTERED INDEX [IDX_Zákazky_Status] ON [dbo].[Zákazky]
( [Status] ASC ) INCLUDE([ID])
why are the estimates so inaccurate?
In my experience, joining too many tables in one query can make the estimates more and more inaccurate, because the estimated row counts come from histogram statistics and formulas such as the one described in SQL Server Join Estimation using Histogram Coarse Alignment.
I would try rewriting your query first, but I don't know the logic of your tables.
There are two Key Lookups that cost extra logical reads (a lookup from the non-clustered index into the clustered index), so I would suggest modifying these two indexes.
Make IdVýrobku a key column instead of an INCLUDE column in the IX_VýrobkyMateriál_IdMateriálu index, because IdVýrobku is a JOIN key.
CREATE NONCLUSTERED INDEX [IX_VýrobkyMateriál_IdMateriálu] ON [dbo].[VýrobkyMateriál]
( [IdMateriálu] ASC, [IdVýrobku] ASC )
INCLUDE([Množstvo],[ID])
WITH (DROP_EXISTING = ON);
Add Množstvo as an INCLUDE column on IX_SkladVýdajMateriál_IDVýrobky; having that data in the leaf pages should remove the Key Lookup into the clustered index from the IX_SkladVýdajMateriál_IDVýrobky non-clustered index.
CREATE NONCLUSTERED INDEX [IX_SkladVýdajMateriál_IDVýrobky] ON [dbo].[SkladVýdajMateriál]
( [IDVýrobky] ASC )
INCLUDE ([Množstvo])
WITH (DROP_EXISTING = ON);
D-Shih is right. When you JOIN many tables, wrong estimates get multiplied into really wrong estimates. But the first wrong estimate (506 customer/materials instead of 65) is due to the distribution of data in isys.MaterialCustomers.
I'm not sure if 65 is the average number of materials per customer or just the average in that particular bucket, but you end up with 8x the number of lookups SQL Server estimated, and from there it just compounds.
SQL Server thinks it only needs to do 65 iterations on the initial query, but in fact has to do 500. Had it known in advance, maybe it would have chosen another plan. If you look for customerIds with fewer materials, you should get a faster execution, but also inaccurate estimates because now you're looping over fewer than 65 materials.
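If you want to see the histogram behind that 65-row guess, you can inspect the statistics on isys.MaterialCustomers. A sketch only - the statistics name below is hypothetical, so list the real ones via sys.stats first:
-- List the statistics objects on the table, then inspect the one whose leading column is CustomerId.
SELECT name FROM sys.stats WHERE object_id = OBJECT_ID('isys.MaterialCustomers');
DBCC SHOW_STATISTICS ('isys.MaterialCustomers', 'IX_MaterialCustomers_CustomerId')  -- hypothetical name
    WITH HISTOGRAM;
The EQ_ROWS / AVG_RANGE_ROWS values for the bucket containing your CustomerId are what the optimizer based its estimate on.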
One way to fix your query is to:
SELECT Code
INTO #CodesToConsider
FROM isys.MaterialCustomers
WHERE CustomerId = @__customer_id;
(... your huge select ...)
AS [s]
INNER JOIN #CodesToConsider ctc ON (s.MaterialCode = ctc.Code);
The temp table will provide SQL Server with statistics on how many Codes to consider, and it will recompile the second part if those statistics change. They need to change by a certain amount before a recompile kicks in, so I'm a bit unsure whether 65 -> 506 is enough, but that's for you to find out.
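One optional refinement worth testing (not required for the approach above, just a sketch): give the temp table a clustered index on Code, so the join can seek and the statistics are tied to that column:
CREATE CLUSTERED INDEX IX_CodesToConsider_Code ON #CodesToConsider (Code);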
I have a SQL Server database with an EventJournal table with the following columns:
Ordering (bigint, primary key)
PersistenceID (nvarchar(255))
SequenceNr (bigint)
Payload (varbinary(max))
Other columns are omitted for clarity. In addition to the primary key on Ordering there is a unique constraint on PersistenceID+SequenceNr.
If I run a query
select top 100 * from EventJournal where PersistenceID like 'msc:%'
... it takes a very long time to execute (the table contains more than 100M rows).
But if I add ordering to results:
select top 100 * from EventJournal where PersistenceID like 'msc:%' order by Ordering
... then it returns the result immediately.
The execution plans for both queries are the same and in essence are a clustered index scan on the PK. Then why does the first query take a long time to execute?
Here's the table definition:
CREATE TABLE [dbo].[EventJournal](
[PersistenceID] [nvarchar](255) NOT NULL,
[SequenceNr] [bigint] NOT NULL,
[IsDeleted] [bit] NOT NULL,
[Manifest] [nvarchar](500) NOT NULL,
[Payload] [varbinary](max) NOT NULL,
[Timestamp] [bigint] NOT NULL,
[Tags] [nvarchar](100) NULL,
[Ordering] [bigint] IDENTITY(1,1) NOT NULL,
[SerializerId] [int] NULL,
CONSTRAINT [PK_EventJournal] PRIMARY KEY CLUSTERED
(
[Ordering] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY],
CONSTRAINT [QU_EventJournal] UNIQUE NONCLUSTERED
(
[PersistenceID] ASC,
[SequenceNr] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
This is the 1st plan:
https://www.brentozar.com/pastetheplan/?id=SJ3kCo-Fv
And here's the 2nd one:
https://www.brentozar.com/pastetheplan/?id=Hy1B0ibtP
As I mentioned in my comments, the plans are different; the difference is in the access method: the first plan uses an unordered scan, while the second plan uses an ordered scan.
By the way, the other answer suggests a useless index.
SQL Server will NOT use this index, as it's equivalent to the non-clustered index already in place. Just as the index QU_EventJournal on ([PersistenceID], [SequenceNr]) was not used, the index on (PersistenceID, Ordering) will not be used either.
Both of these indexes have PersistenceID, Ordering in them, because Ordering is the clustered index key and is therefore present in the index on ([PersistenceID], [SequenceNr]) even if you don't see it in the definition. The suggested index will also be bigger, as it is not defined as unique, and the sizes of the other fields are the same: Ordering is bigint, SequenceNr is bigint.
It's also wrong to think that in an index on 2 fields the second field (Ordering) can be used to avoid the sort in the second query.
For example the index on PersistenceID, Ordering can have rows like these:
msc:123, 100
msc:124, 5
msc:124, 6
msc:125, 1
I hope you clearly see that the index is ordered by PersistenceID, Ordering,
but the result of the second query is expected to be
msc:125, 1
msc:124, 5
msc:124, 6
msc:123, 100
So a SORT operator is still needed, and that is why this index will not be used.
Now to your question:
shouldn't lack of ORDER BY be used by the query analyzer as an
opportunity to build more efficient execution plan without ordering
constraints
Yes, you are correct: without ORDER BY the server is free to choose either the ordered or the unordered scan. And yes, you are also right in this:
I also don't understand why using TOP without ORDER BY is a bad
practice in case I want ANY N rows from the result
When you don't need the top N ordered by anything, because you just want to see what kind of records have 'msc:' in them, you should not add ORDER BY, because it could introduce a SORT into your plan.
And to your main question:
Then why does the first query take long time to execute?
The answer is: this was pure coincidence.
Your data happens to be laid out so that the rows with 'msc:' in them come first in the order defined by Ordering. If you scan the index in a different order, those rows may just as well end up in the middle or at the end of the table.
If you search for another pattern in PersistenceID, the unordered scan may well be the faster one.
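If you want to confirm that, repeat the test with a prefix whose rows do not happen to sit at the start of the table in Ordering order; 'abc:' below is just a hypothetical value:
select top 100 * from EventJournal where PersistenceID like 'abc:%'
select top 100 * from EventJournal where PersistenceID like 'abc:%' order by Ordering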
I need to create statistics from several log tables. Most of the time every hour, but sometimes more frequently, every 5 minutes.
Selecting rows only by datetime isn't fast enough for larger logs, so I thought I would select only rows that are new since the last query, by storing the max Id and reusing it next time:
SELECT TOP(1000) * -- so that it's not too much
FROM [dbo].[Log]
WHERE Id > lastId AND [Timestamp] >= timestampMin
ORDER BY [Id] DESC
My question: is SQL Server smart enough to:
first filter the rows by Id and then by the Timestamp, even if I change the order of the conditions (or does the condition order matter), or
do I need a subquery to first select the rows by Id and then filter them by the Timestamp?
With a subquery:
SELECT *
FROM (
SELECT TOP(1000) * FROM [dbo].[Log]
WHERE Id > lastId
ORDER BY [Id] DESC
) t
WHERE t.[TimeStamp] >= timestampMin
The table schema is:
CREATE TABLE [dbo].[Log](
[Id] [int] IDENTITY(1,1) NOT NULL,
[Timestamp] [datetime2](7) NOT NULL,
-- other columns
CONSTRAINT [PK_dbo_Log] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
I tried to use the query plan to find out how it works but it turns out that I cannot read it and I don't understand it.
In your case you don't have an index on Timestamp, so SQL Server will always use the clustered index (Id) first - the Clustered Index Seek you see in the query plan - to find the first row matching Id > lastId, and then scan the remaining rows with the residual predicate [Timestamp] >= timestampMin (actually it's the other way around, since you are sorting in reverse order with DESC).
If you were to add an index on Timestamp, SQL Server might use it (see the sketch after this list) based on:
the cardinality of the predicate [Timestamp] >= timestampMin. Please note that cardinality is always an estimate based on statistics (see https://msdn.microsoft.com/en-us/library/ms190397.aspx) and the cardinality estimator (which changed from SQL 2012 to 2014+, see https://msdn.microsoft.com/en-us/library/dn600374.aspx).
how covering the non-clustered index is (since you are using the wildcard, it hardly matters anyway). If the non-clustered index is not covering, SQL Server would have to add a Key Lookup operator (see https://technet.microsoft.com/en-us/library/bb326635(v=sql.105).aspx) in order to retrieve all the fields (or perform a join). This will likely make the index not worthwhile for this query.
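For completeness, such an index would look roughly like the sketch below (the name is hypothetical); as noted above, SQL Server may still ignore it because the SELECT * makes it non-covering:
CREATE NONCLUSTERED INDEX IX_Log_Timestamp   -- hypothetical name
ON [dbo].[Log] ([Timestamp] ASC);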
Also note that your two queries - the one with the subquery and the one without - are functionally different. The first will give you the first 1000 rows that have both Id > lastId AND [Timestamp] >= timestampMin. The second will give you only the rows having [Timestamp] >= timestampMin from the first 1000 rows having Id > lastId. So, for example, you might get 1000 rows from the first query but fewer than that from the second.
I was having timeout issues when specifying a long DateTime period in the query below (the query runs from a C# application). The table had 30 million rows with a non-clustered index on ID (not a primary key).
I found that there was no primary key, so I recently made ID the primary key, and it's not giving me timeouts now. Can anyone help me create an index on more than one key for this query for the future? Should I also remove the non-clustered index from this table and create one on more than one column? Data is increasing rapidly and performance needs improvement.
select
ID, ReferenceNo, MinNo, DateTime, DataNo from tbl1
where
DateTime BETWEEN '04/09/2013' AND '20/11/2013'
and ReferenceNo = 4 and MinNo = 3 and DataNo = 14 Order by ID
This is the create script:
CREATE TABLE [dbo].[tbl1](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [ReferenceNo] [int] NOT NULL,
    [MinNo] [int] NOT NULL,
    [DateTime] [datetime] NOT NULL,
    [DataNo] [int] NOT NULL,
    CONSTRAINT [tbl1_pk] PRIMARY KEY CLUSTERED ([ID] ASC)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
It's hard to tell which index you should use without knowing more about your database and how it's used.
You may want to make the ID column a clustered index. If ID is an identity column you will get very few page splits while inserting new data. It will, however, require you to rebuild the table, and that may be a problem depending on your usage of the database. You will be looking at some downtime.
If you want a covering index it should look something like this:
CREATE NONCLUSTERED INDEX [MyCoveringIndex] ON tbl1
(
[ReferenceNo] ASC,
[MinNo] ASC,
[DataNo] ASC,
[DateTime] ASC
)
There is no need to include ID as a column, as it is already in the clustered index (clustered index columns are included in all other indexes). This will however use up quite a lot of space (somewhere in the range of 1 GB if the columns above are of types int and datetime). It will also affect your insert, update and delete performance on the table, in most cases negatively.
You can create the index in online mode if you are using the Enterprise Edition of SQL Server. In all other cases there will be a lock on the table while creating the index.
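For example, a sketch of the same hypothetical covering index created online (Enterprise Edition only):
CREATE NONCLUSTERED INDEX [MyCoveringIndex] ON [dbo].[tbl1]
(
    [ReferenceNo] ASC,
    [MinNo] ASC,
    [DataNo] ASC,
    [DateTime] ASC
)
WITH (ONLINE = ON);   -- add DROP_EXISTING = ON if the index already exists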
It's also hard to know what other queries are made against the table. You may want to tweak the order of the columns in the index to better match other queries.
Indexing all fields would be fastest, but would likely waste a ton of space. I would guess that a date index would provide the most benefit with the least storage capacity cost because the data is probably evenly spread out over a large period of time. If the MIN() MAX() dates are close together, then this will not be as effective:
CREATE NONCLUSTERED INDEX [IDX_1] ON [dbo].[tbl1] (
[DateTime] ASC
)
GO
As a side note, you can use SSMSE's "Display Estimated Execution Plan" which will show you what the DB needs to do to get your data. It will suggest missing indexes and also provide CREATE INDEX statements. These suggestions can be quite wasteful, but they will give you an idea of what is taking so long. This option is in the Standard Toolbar, four icons to the right from "Execute".
I hope you can spare some time to help me with this.
Currently, I'm using 3 tables to compare performance in getting data. These 3 tables have the same columns (LocInvID, ActivityDate, ItemID, StoreID, CustomerID) and the same data (around 13 million records):
Table LocInv1: clustered index on LocInvID (it's the primary key too). The table is partitioned on ActivityDate, and the 3 columns (ItemID, StoreID, CustomerID) are in a non-clustered index.
Table LocInv2: clustered index on LocInvID (it's the primary key too). No partitioning.
Table LocInv3: clustered index on LocInvID (it's the primary key too), and the 3 columns (ItemID, StoreID, CustomerID) are in a non-clustered index. No partitioning.
CREATE NONCLUSTERED INDEX [IX_LocInv3] ON [LocInv3]
(
[ItemID] ASC ,[StoreID] ASC, [CustomerID] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
And when I run these queries (one per table):
select ActivityDate,ItemID,StoreID,CustomerID from LocInv1 WITH (INDEX(IX_LocInv)) where ItemID=43
select ActivityDate,ItemID,StoreID,CustomerID from LocInv2 where ItemID=43
select ActivityDate,ItemID,StoreID,CustomerID from LocInv3 where ItemID=43
the result is quite weird:
Table LocInv1 is the slowest. Is that possible? Is my query string incorrect?
Table LocInv3 has the non-clustered index, but the actual execution plan shows a Clustered Index Scan. I don't understand: I filter on ItemID, so why is it a Clustered Index Scan?
Table LocInv2 has only the clustered index on LocInvID, but it got the fastest result. Is that correct?
Please advise.
Thanks.
The query optimizer chooses the fastest way it can find, depending not only on the indexes but also on the data they contain.
A search via a clustered index is usually faster, but in some cases the other way around is faster; you can test this by dropping and re-creating the index.
Not to mention that the operations the table has gone through (inserts, updates, deletes) and late index insertions will affect the search too.
These changes affect how the indexes are stored; depending on the size of the data, you might end up with indexes spanning many pages.
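If you want to see how the indexes are actually stored (page counts and fragmentation), sys.dm_db_index_physical_stats can show it; a sketch for LocInv3 (swap in LocInv1/LocInv2 to compare the three tables):
SELECT i.name AS index_name,
       ps.page_count,
       ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.LocInv3'), NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
    ON i.object_id = ps.object_id AND i.index_id = ps.index_id;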
If you can post an insert script for the data inside these tables, I can take a closer look.
If you only checked the estimated execution plan (Ctrl+L), it will not be 100% accurate.
I've got a table that contains some buy/sell data, with around 8M records in it:
CREATE TABLE [dbo].[Transactions](
[id] [int] IDENTITY(1,1) NOT NULL,
[itemId] [bigint] NOT NULL,
[dt] [datetime] NOT NULL,
[count] [int] NOT NULL,
[price] [float] NOT NULL,
[platform] [char](1) NOT NULL
) ON [PRIMARY]
Every X minutes my program gets new transactions for each itemId and I need to update them. My first solution is a two-step DELETE+INSERT:
delete from Transactions where platform=@platform and itemid=@itemid
insert into Transactions (platform,itemid,dt,count,price) values (@platform,@itemid,@dt,@count,@price)
[...]
insert into Transactions (platform,itemid,dt,count,price) values (@platform,@itemid,@dt,@count,@price)
The problem is that this DELETE statement takes 5 seconds on average. That's much too long.
The second solution I found is to use MERGE. I've created the following stored procedure, which takes a table-valued parameter:
CREATE PROCEDURE [dbo].[sp_updateTransactions]
@Table dbo.tp_Transactions readonly,
@itemId bigint,
@platform char(1)
AS
BEGIN
MERGE Transactions AS TARGET
USING @Table AS SOURCE
ON (
TARGET.[itemId] = SOURCE.[itemId] AND
TARGET.[platform] = SOURCE.[platform] AND
TARGET.[dt] = SOURCE.[dt] AND
TARGET.[count] = SOURCE.[count] AND
TARGET.[price] = SOURCE.[price] )
WHEN NOT MATCHED BY TARGET THEN
INSERT VALUES (SOURCE.[itemId],
SOURCE.[dt],
SOURCE.[count],
SOURCE.[price],
SOURCE.[platform])
WHEN NOT MATCHED BY SOURCE AND TARGET.[itemId] = @itemId AND TARGET.[platform] = @platform THEN
DELETE;
END
This procedure takes around 7 seconds on a table with 70k records, so with 8M it would probably take a few minutes. The bottleneck is the "WHEN NOT MATCHED" clause - when I commented that line out, the procedure ran in 0.01 seconds on average.
So the question is: how can I improve the performance of the delete statement?
The delete is needed to make sure that the table doesn't contain transactions that have been removed in the application. But in a real scenario this happens really rarely; the true need to delete records arises in fewer than 1 in 10,000 transaction updates.
My theoretical workaround is to create an additional column like "transactionDeleted bit", use UPDATE instead of DELETE, and then clean the table up with a batch job every X minutes or hours that executes
delete from transactions where transactionDeleted=1
It should be faster, but I would need to update all SELECT statements in other parts of the application to use only transactionDeleted=0 records, so it may also affect application performance.
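A minimal sketch of that workaround, assuming the column is added as described (the column, constraint name, and default are just illustrative):
-- One-time schema change.
ALTER TABLE dbo.Transactions ADD transactionDeleted bit NOT NULL
    CONSTRAINT DF_Transactions_transactionDeleted DEFAULT (0);
-- Hot path: mark rows instead of deleting them.
UPDATE dbo.Transactions
SET    transactionDeleted = 1
WHERE  platform = @platform AND itemid = @itemid;
-- Periodic cleanup job (every X minutes/hours).
DELETE FROM dbo.Transactions WHERE transactionDeleted = 1;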
Do you know any better solution?
UPDATE: Current indexes:
CREATE NONCLUSTERED INDEX [IX1] ON [dbo].[Transactions]
(
[platform] ASC,
[ItemId] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 50) ON [PRIMARY]
ALTER TABLE [dbo].[Transactions] ADD CONSTRAINT [IX2] UNIQUE NONCLUSTERED
(
[ItemId] DESC,
[count] ASC,
[dt] DESC,
[platform] ASC,
[price] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
OK, here is another approach as well. For a similar problem (a large scan in WHEN NOT MATCHED BY SOURCE then DELETE) I reduced the MERGE execution time from 806 ms to 6 ms!
One issue with the problem above is that the "WHEN NOT MATCHED BY SOURCE" clause is scanning the whole TARGET table.
It is not that obvious but Microsoft allows the TARGET table to be filtered (by using a CTE) BEFORE doing the merge. So in my case the TARGET rows were reduced from 250K to less than 10 rows. BIG difference.
Assuming that the above problem works with the TARGET being filtered by @itemId and @platform, the MERGE code would look like this. The index changes above would help this logic too.
WITH Transactions_CTE (itemId
,dt
,count
,price
,platform
)
AS
-- Define the CTE query that will reduce the size of the TARGET table.
(
SELECT itemId
,dt
,count
,price
,platform
FROM Transactions
WHERE itemId = @itemId
AND platform = @platform
)
MERGE Transactions_CTE AS TARGET
USING #Table AS SOURCE
ON (
TARGET.[itemId] = SOURCE.[itemId]
AND TARGET.[platform] = SOURCE.[platform]
AND TARGET.[dt] = SOURCE.[dt]
AND TARGET.[count] = SOURCE.[count]
AND TARGET.[price] = SOURCE.[price]
)
WHEN NOT MATCHED BY TARGET THEN
INSERT
VALUES (
SOURCE.[itemId]
,SOURCE.[dt]
,SOURCE.[count]
,SOURCE.[price]
,SOURCE.[platform]
)
WHEN NOT MATCHED BY SOURCE THEN
DELETE;
Using a BIT field for IsDeleted (or IsActive as many people do) is valid but it does require modifying all code plus creating a separate SQL Job to periodically come through and remove the "deleted" records. This might be the way to go but there is something less intrusive to try first.
I noticed in your set of 2 indexes that neither is CLUSTERED. Can I assume that the IDENTITY field is? You might consider making the [IX2] UNIQUE index the CLUSTERED one and changing the PK (again, I assume the IDENTITY field is a CLUSTERED PK) to be NONCLUSTERED. I would also reorder the IX2 fields to put [Platform] and [ItemID] first. Since your main operation is looking for [Platform] and [ItemID] as a set, physically ordering them this way might help. And since this index is unique, that is a good candidate for being CLUSTERED. It is certainly worth testing as this will impact all queries against the table.
Also, if changing the indexes as I have suggested helps, it still might be worth trying both ideas and hence doing the IsDeleted field as well to see if that increases performance even more.
EDIT:
I forgot to mention, by making the IX2 index CLUSTERED and moving the [Platform] field to the top, you should get rid of the IX1 index.
EDIT2:
Just to be very clear, I am suggesting something like:
CREATE UNIQUE CLUSTERED INDEX [IX2] ON [dbo].[Transactions]
(
[ItemId] DESC,
[platform] ASC,
[count] ASC,
[dt] DESC,
[price] ASC
)
And to be fair, changing which index is CLUSTERED could also negatively impact queries where JOINs are done on the [id] field which is why you need to test thoroughly. In the end you need to tune the system for your most frequent and/or expensive queries and might have to accept that some queries will be slower as a result but that might be worth this operation being much faster.
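For reference, a rough sketch of the restructuring described above, following the column order in EDIT2. The PK constraint name is hypothetical (the posted CREATE TABLE doesn't show one), so check sys.key_constraints for the real names and test this on a copy first:
-- Drop the existing unique constraint and the (assumed) clustered PK first,
-- since a table can have only one clustered index.
ALTER TABLE dbo.Transactions DROP CONSTRAINT IX2;
ALTER TABLE dbo.Transactions DROP CONSTRAINT PK_Transactions;   -- hypothetical PK name
-- Re-add the PK as NONCLUSTERED on the identity column.
ALTER TABLE dbo.Transactions ADD CONSTRAINT PK_Transactions PRIMARY KEY NONCLUSTERED ([id]);
-- Recreate IX2 as the unique CLUSTERED index.
CREATE UNIQUE CLUSTERED INDEX IX2 ON dbo.Transactions
    ([ItemId] DESC, [platform] ASC, [count] ASC, [dt] DESC, [price] ASC);
-- Per the EDIT above, IX1 on (platform, ItemId) should then be redundant.
DROP INDEX IX1 ON dbo.Transactions;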
See this https://stackoverflow.com/questions/3685141/how-to-....
would the update be the same cost as a delete? No. The update would be
a much lighter operation, especially if you had an index on the PK
(errrr, that's a guid, not an int). The point being that an update to
a bit field is much less expensive. A (mass) delete would force a
reshuffle of the data.
In light of this information, your idea to use a bit field is very valid.