different estimated rows on same index operation? - sql-server

Introduction and Background
I had to optimize a simple query (example below). After rewriting it several times, I noticed that the estimated row count for one and the same index operation differs depending on how the query is written.
Originally the query did a clustered index scan. As the table in production contains a binary column, the table is quite large (about 100 GB) and a full table scan takes too long to execute.
Question
Why is the estimated row count different on the same index operation (example will show)? What is the optimizer doing here?
The example database - I am using SQL Server 2008 R2.
I tried to create a very simplified version of my production tables that shows the behaviour.
-- CREATE THE SAMPLE TABLES
----------------------------
CREATE TABLE dbo.MasterTable(
MasterId smallint NOT NULL,
Name varchar(5) NOT NULL,
CONSTRAINT PK_MasterTable PRIMARY KEY CLUSTERED (MasterId ASC)
) ON [PRIMARY]
GO
CREATE TABLE dbo.DetailTable(
DetailId bigint IDENTITY(1,1) NOT NULL,
MasterId smallint NOT NULL,
Name nvarchar(50) NOT NULL,
CreateDate datetime NOT NULL,
CONSTRAINT PK_DetailTable PRIMARY KEY CLUSTERED (DetailId ASC)
) ON [PRIMARY]
GO
ALTER TABLE dbo.DetailTable
ADD CONSTRAINT FK1
FOREIGN KEY(MasterId) REFERENCES dbo.MasterTable (MasterId)
GO
CREATE NONCLUSTERED INDEX IX_DetailTable
ON dbo.DetailTable( MasterId ASC, Name ASC )
GO
-- INSERT SOME SAMPLE DATA
----------------------------
SET NOCOUNT ON
GO
-- These are some Codes. In our system we always use these codes to search for "types" of data.
INSERT INTO dbo.MasterTable (MasterId, Name)
VALUES (1, 'N1'), (2, 'N2'), (3, 'N3'), (4, 'N4'), (5, 'N5'), (6, 'N6'), (7, 'N7'), (8, 'N8')
GO
-- ADD ROWS TO THE DETAIL TABLE
-- Takes about 1 minute to run
-- Don't worry about the logic; it's just to get a distribution similar to the production system
----------------------------
declare @x int = 1
DECLARE @MasterID INT
while (@x <= 400000)
begin
SET @MasterID = ABS(CHECKSUM(NEWID())) % 8 + 1
INSERT INTO dbo.DetailTable(MasterId,Name,CreateDate)
VALUES(
CASE
WHEN @MasterID IN (1, 3, 4) AND @x % 20 != 0 THEN 2
WHEN @MasterID IN (5, 6) AND @x % 20 != 0 THEN 7
WHEN @MasterID = 8 AND @x % 100 != 0 THEN 7
ELSE @MasterID
END,
NEWID(),
DATEADD(DAY, - ABS(CHECKSUM(NEWID())) % 1000, GETDATE())
)
SET @x = @x + 1
end
go
-- DO THE INDEX AND STATISTIC MAINTENANCE
----------------------------
alter index all on dbo.DetailTable reorganize
alter index all on dbo.MasterTable reorganize
update statistics dbo.DetailTable WITH FULLSCAN
update statistics dbo.MasterTable WITH FULLSCAN
go
Preparation is done; let's start with the query.
Let's have a look at the statistics first. Look at RANGE_HI_KEY = 8: there are 489 EQ_ROWS.
-- CHECK THE STATISTICS
----------------------------
dbcc show_statistics ('dbo.DetailTable', IX_DetailTable)
GO
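For reference, the relevant histogram row should look roughly like this (abridged and illustrative only - the exact numbers will differ per run because the sample data is generated randomly):

RANGE_HI_KEY  RANGE_ROWS  EQ_ROWS  DISTINCT_RANGE_ROWS  AVG_RANGE_ROWS
------------  ----------  -------  -------------------  --------------
8             0           489      0                    1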
Now we run the queries. The first one is the original query I had to optimize.
Please enable the actual execution plan when executing.
Have a look at the operation "Index Seek (NonClustered) [DetailTable].[IX_DetailTable]".
-- ORIGINAL QUERY
----------------------------
SELECT d.DetailId
FROM dbo.DetailTable d
INNER JOIN dbo.MasterTable m ON d.MasterId = m.MasterId
WHERE m.Name = 'N8'
AND d.CreateDate > '20150312 11:00:00'
GO
-- FORCESEEK
----------------------------
SELECT d.DetailId
FROM dbo.DetailTable d WITH (FORCESEEK)
INNER JOIN dbo.MasterTable m ON d.MasterId = m.MasterId
WHERE m.Name = 'N8'
AND d.CreateDate > '20150312 11:00:00'
GO
-- Actual: 489, Estimated: 50,000
-- TABLE VARIABLE
----------------------------
DECLARE @MasterId AS TABLE( MasterId SMALLINT )
INSERT INTO @MasterId (MasterId)
SELECT MasterID FROM dbo.MasterTable WHERE Name = 'N8'
SELECT d.DetailId
FROM dbo.DetailTable d WITH (FORCESEEK)
INNER JOIN @MasterId m ON d.MasterId = m.MasterId
WHERE d.CreateDate > '20150312 11:00:00'
GO
-- Actual: 489, Estimated: 40,000
-- TEMP TABLE
----------------------------
CREATE TABLE #MasterId( MasterId SMALLINT )
INSERT INTO #MasterId (MasterId)
SELECT MasterID FROM dbo.MasterTable WHERE Name = 'N8'
SELECT d.DetailId
FROM dbo.DetailTable d --WITH (FORCESEEK)
INNER JOIN #MasterId m ON d.MasterId = m.MasterId
WHERE d.CreateDate > '20150312 11:00:00'
-- Actual: 489, Estimated: 489
DROP TABLE #MasterId
GO
Analysis and final question(s)
Please have a look at the operation "Index Seek (NonClustered) [DetailTable].[IX_DetailTable]".
The comments in the script above show the values I got for estimated and actual row counts.
In our production environment this table has 33 million rows, and the estimated rows in the queries above range from 3 million to 16 million.
To summarize:
when a join between the DetailTable and the MasterTable is made, the estimated row count is 12.5% (there are 8 values in the master table, so it makes sense, kind of...)
when a join between the DetailTable and the table variable is made, the estimated row count is 10%
when a join between the DetailTable and the temp table is made, the estimated row count is exactly the same as the actual row count
The question is: why do these values differ?
The statistics are up to date, so making an estimate should really be easy.
I just would like to understand this.

Since nobody has answered yet, I'll give it a try:
Please don't force the optimizer to follow you.
(1) Explanation of your original query:
SELECT d.DetailId
FROM dbo.DetailTable d
INNER JOIN dbo.MasterTable m ON d.MasterId = m.MasterId
WHERE m.Name = 'N8'
AND d.CreateDate > '20150312 11:00:00'
Why is this query slow?
The query is slow because your indexes do not cover it;
both tables are read with an index scan and then joined with a hash join.
Why is the entire MasterTable scanned?
Because the index on MasterTable is on the column MasterId, not on the column Name.
Why is the entire DetailTable scanned? Because here, too, the indexes are on
(DetailId) CLUSTERED and (MasterId ASC, Name ASC) NONCLUSTERED,
not on the CreateDate column.
A single nonclustered index on the columns (CreateDate, MasterId) will help this particular query.
If your MasterTable is large as well, you can create a nonclustered index on the (Name) column, as sketched below.
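A minimal sketch of the suggested indexes (the index names are my own; DetailId does not need to be added explicitly because the clustering key is carried in every nonclustered index automatically):

CREATE NONCLUSTERED INDEX IX_DetailTable_CreateDate
ON dbo.DetailTable (CreateDate, MasterId)
GO
CREATE NONCLUSTERED INDEX IX_MasterTable_Name
ON dbo.MasterTable (Name)
GO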
(2) Explanation on FORCESEEK :
-- FORCESEEK
SELECT d.DetailId
FROM dbo.DetailTable d WITH (FORCESEEK)
INNER JOIN dbo.MasterTable m ON d.MasterId = m.MasterId
WHERE m.Name = 'N8'
AND d.CreateDate > '20150312 11:00:00'
GO
Why did the optimizer estimate 50,000 rows?
Here you are joining on the column d.MasterId = m.MasterId, and you are FORCING the optimizer to choose a seek on the Detail table, so
the optimizer uses the index IX_DetailTable to join to your MasterTable with a LOOP join.
Since the optimizer chooses a loop join to join all rows (actually one) of the Master table to the Detail table,
it takes one key from the Master table, seeks into the index, and passes the matching rows to the next iterator.
At compile time it does not know which key that will be, so it estimates the average number of rows per value:
8 unique values in the column and a table cardinality of 400,000 rows, so
400,000 / 8 = 50,000 rows estimated (fair enough).
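You can see where that per-value average comes from by checking the density vector of the index (my addition, not part of the original answer):

-- "All density" for the MasterId column should be 1/8 = 0.125;
-- 0.125 * 400,000 rows = 50,000 estimated rows.
DBCC SHOW_STATISTICS ('dbo.DetailTable', IX_DetailTable) WITH DENSITY_VECTOR
GO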
(3) -- TABLE VARIABLE
Here is your query:
DECLARE @MasterId AS TABLE( MasterId SMALLINT )
INSERT INTO @MasterId (MasterId)
SELECT MasterID FROM dbo.MasterTable WHERE Name = 'N8'
SELECT d.DetailId
FROM dbo.DetailTable d WITH (FORCESEEK)
INNER JOIN @MasterId m ON d.MasterId = m.MasterId
WHERE d.CreateDate > '20150312 11:00:00'
GO
Statistics are not maintained on table variables, so the optimizer has no idea how many rows it is going to deal with when producing a plan; it estimates 1 row for the table variable (and here the actual count happens to be 1 as well, congrats!).
But how did the optimizer estimate 40,000 rows for the index seek?
Personally I had never checked this, and because of this question I did several tests, but I have no idea how the optimizer calculates this estimate (it happens to be 10% of the table), so it would be great if someone could come along and enlighten us.
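One thing worth trying (my suggestion, not something verified in this thread): OPTION (RECOMPILE) lets the optimizer see the table variable's actual row count at execution time, which often tightens such estimates:

DECLARE @MasterId AS TABLE( MasterId SMALLINT )
INSERT INTO @MasterId (MasterId)
SELECT MasterID FROM dbo.MasterTable WHERE Name = 'N8'
SELECT d.DetailId
FROM dbo.DetailTable d
INNER JOIN @MasterId m ON d.MasterId = m.MasterId
WHERE d.CreateDate > '20150312 11:00:00'
OPTION (RECOMPILE) -- the table variable's cardinality (1 row) becomes visible to the optimizer
GO

Note that RECOMPILE exposes only the row count, not column-level statistics, so the histogram still cannot be used.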
(4) -- TEMP TABLE
Your query:
CREATE TABLE #MasterId( MasterId SMALLINT )
INSERT INTO #MasterId (MasterId)
SELECT MasterID FROM dbo.MasterTable WHERE Name = 'N8'
SELECT d.DetailId
FROM dbo.DetailTable d --WITH (FORCESEEK)
INNER JOIN #MasterId m ON d.MasterId = m.MasterId
WHERE d.CreateDate > '20150312 11:00:00'
-- Actual: 489, Estimated: 489
DROP TABLE #MasterId
Here the optimizer chooses the same query plan as with the table variable, but the difference is that
statistics ARE maintained on temp tables. So here the optimizer has a fair idea of which rows it is actually going to join:
the key for 'N8' is 8, and the estimated number of rows for the value 8 in dbo.DetailTable (EQ_ROWS in the histogram) is 489.

Related

Select all Main table rows with detail table column constraints with GROUP BY

I have 2 tables, tblMain and tblDetail, on SQL Server, linked by tblMain.id = tblDetail.OrderID for order processing. I have not found exactly the same situation on Stack Overflow.
Here below is the sample table design:
/* create and populate tblMain: */
CREATE TABLE tblMain (
ID int IDENTITY(1,1) NOT NULL,
DateOrder datetime NULL,
CONSTRAINT PK_tblMain PRIMARY KEY
(
ID ASC
)
)
GO
INSERT INTO tblMain (DateOrder) VALUES('2021-05-20T12:12:10');
INSERT INTO tblMain (DateOrder) VALUES('2021-05-21T09:13:13');
INSERT INTO tblMain (DateOrder) VALUES('2021-05-22T21:30:28');
GO
/* create and populate tblDetail: */
CREATE TABLE tblDetail (
ID int IDENTITY(1,1) NOT NULL,
OrderID int NULL,
Gencod VARCHAR(255),
Quantity float,
Price float,
CONSTRAINT PK_tblDetail PRIMARY KEY
(
ID ASC
)
)
GO
INSERT INTO tblDetail (OrderID, Gencod, Quantity, Price) VALUES(1, '1234567890123', 8, 12.30);
INSERT INTO tblDetail (OrderID, Gencod, Quantity, Price) VALUES(1, '5825867890321', 2, 2.88);
INSERT INTO tblDetail (OrderID, Gencod, Quantity, Price) VALUES(3, '7788997890333', 1, 1.77);
INSERT INTO tblDetail (OrderID, Gencod, Quantity, Price) VALUES(3, '9882254656215', 3, 5.66);
INSERT INTO tblDetail (OrderID, Gencod, Quantity, Price) VALUES(3, '9665464654654', 4, 10.64);
GO
Here is my SELECT with grouping:
SELECT tblMain.id,SUM(tblDetail.Quantity*tblDetail.Price) AS TotalPrice
FROM tblMain LEFT JOIN tblDetail ON tblMain.id=tblDetail.orderid
WHERE (tblDetail.Quantity<>0) GROUP BY tblMain.id;
GO
This gives (computed from the sample data above):
id  TotalPrice
--  ----------
1   104.16
3   61.31
The wished output (id = 2 present with an empty total):
id  TotalPrice
--  ----------
1   104.16
2   NULL
3   61.31
We see that id = 2 is not shown even with a LEFT JOIN, as there are no records with OrderID = 2 in tblDetail.
How do I design a new query that also shows tblMain.id = 2? Meanwhile I must keep the WHERE (tblDetail.Quantity<>0) constraint. Many thanks.
EDIT:
The above query serves as a CTE (Common Table Expression) for a main query that also takes a payments table, tblPayments, into account.
After testing, both solutions work.
In my case the main table has 15K records, while the detail table has some millions. With (tblDetail.Quantity <> 0 OR tblDetail.Quantity IS NULL) AND tblDetail.IsActive = 1 added to the JOIN's ON clause it takes 37s to run, while with the first solution of @pwilcox, the condition being added to the WHERE clause, it finishes in 29s: a time gain of about 20%.
The tblDetail.IsActive column permits me to ignore detail rows that are temporarily disabled by setting it to false.
So for me it's @pwilcox's answer:
where (tblDetail.quantity <> 0 or tblDetail.quantity is null)
Change
WHERE (tblDetail.Quantity<>0)
to
where (tblDetail.quantity <> 0 or tblDetail.quantity is null)
as the former will omit id = 2, because the corresponding quantity is null in a left join.
And as HABO mentions, you can also make the condition part of your join logic instead of your WHERE clause, avoiding the need for the OR condition:
select m.id,
totalPrice = sum(d.quantity * d.price)
from tblMain m
left join tblDetail d
on m.id = d.orderid
and d.quantity <> 0
group by m.id;

Speed up view performance

I have an old view that takes 4 minutes to run, and I have been asked to speed it up. The FROM looks like this:
FROM TableA
CROSS JOIN ViewA
INNER JOIN TableB on ViewA.Name = TableB.Name
AND TableA.Code = TableB.Code
AND TableA.Location = TableB.Location
WHERE (DATEDIFF(m, ViewA.SubmitDate, GETDATE()) = 1) -- Only pull last month's rows
TableA has around 99k rows, ViewA has around 2,000 rows and TableB has around 101k rows. I think the problem is the INNER JOIN, because if I remove it the query takes 1 second.
My first thought was to see if I could cut down the number of rows in ViewA by breaking the whole thing into CTEs, but this made zero impact. I am thinking I need to index TableB, because the joins are on a bunch of varchar columns. I am now changing it to temp tables so I can index them. I cannot change the underlying tables and views. Is indexing temp tables a good way to go, or is there a better solution?
Edit to add info regarding existing indexes: the only thing with an index on it right now is TableA.Id, which is the PK and a clustered index. TableB has an Id field, but it is not the PK. ViewA is not indexed.
Edit again to correct some structure: SubmitDate is in the view, not the table.
Here is a very basic structure:
CREATE TABLE TableA
(
Id int NOT NULL PRIMARY KEY,
Section varchar(20) NULL,
Code varchar(20) NULL
)
CREATE TABLE TableB
(
Id int NOT NULL PRIMARY KEY,
Name varchar(20) NULL,
Code varchar(20) NULL,
Section varchar(20) NULL
)
CREATE TABLE TableC
(
Id int NOT NULL PRIMARY KEY,
Name varchar(20) NULL,
SubmitDate DateTime NOT NULL
)
CREATE TABLE TableD
(
Id int NOT NULL PRIMARY KEY,
Section varchar(20) NULL
)
CREATE VIEW ViewA
AS
SELECT c.Section, d.Name, c.SubmitDate
FROM TableC c
JOIN TableD d ON c.Id = d.Id
One improvement is to rewrite the WHERE clause into a sargable form. Add an index on SubmitDate if there is none (SubmitDate comes from ViewA, so a real index would have to go on the underlying TableC.SubmitDate) and change the query to:
FROM TableA
CROSS JOIN ViewA
INNER JOIN TableB on ViewA.Name = TableB.Name
AND TableA.Code = TableB.Code
AND TableA.Location = TableB.Location
WHERE
ViewA.SubmitDate >= DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()) - 1, 0) -- first day of last month
AND ViewA.SubmitDate < DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()), 0)  -- first day of this month
Also add nonclustered indexes on the Name, Code and Location columns; a sketch follows.
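A minimal sketch of those indexes (the names are my own, and note that the simplified DDL above shows Section rather than Location, so adjust to the real schema):

CREATE NONCLUSTERED INDEX IX_TableB_Name_Code_Location
ON TableB (Name, Code, Location)
GO
-- The date-range filter is served by an index on the view's base table:
CREATE NONCLUSTERED INDEX IX_TableC_SubmitDate
ON TableC (SubmitDate)
GO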

SQL query performance using CTE vs WHILE loop in very large table

I'm trying to figure out the best performing query given the scenario below.
The following table is very large: over 5 million rows. Every day a few loads occur, although exactly how many varies. Each load is identified by load_id (part of the clustered index), and load_datetimestamp is the timestamp of that load. Each load inserts about 30,000 rows.
CREATE TABLE [VeryLargeTable](
[load_id] [int] NOT NULL,
[acct_cd] [varchar](20) NOT NULL,
[acct_num] [varchar](255) NULL,
[prod_id] [varchar](50) NOT NULL,
[domestic_foreign_cd] [varchar](3) NOT NULL,
[vendor_prod_id] [varchar](15) NULL,
[prod_name] [varchar](100) NULL,
[currency_cd] [varchar](3) NULL,
[total_Qty] [int] NULL,
[mkt_price] [float] NULL,
[load_datetimestamp] [datetime] NULL,
CONSTRAINT [pk_VeryLargeTable] PRIMARY KEY CLUSTERED (
[load_id] ASC,
[acct_cd] ASC,
[prod_id] ASC,
[domestic_foreign_cd] ASC )
)
On any given evening, I would like to get all the rows from today's FIRST load. The query must be as performant as possible given the above DDL. Obviously, you don't want to start out with something like "WHERE datediff(day,load_datetimestamp,getDate())=0" on the entire table.
I wrote these 2 queries. Is either one better than the other? I know that both return the same result. Can you suggest an even better one than these two?
Query 1
With
T1 as (select
load_id,
load_datetimestamp
from dbo.VeryLargeTable
group by
load_id,load_datetimestamp),
T2 as (select
load_id,
load_datetimestamp
from T1
where
datediff(day,load_datetimestamp,getDate())=0),
T3 as (select min(load_id) as loadID from T2)
select * from dbo.VeryLargeTable
where load_id = (select loadID from T3)
Query 2
declare @found tinyint;
declare @loadID int;
declare @dateTimeStamp datetime;
-- Get max value of load id
select @loadID = max(load_id) from [dbo].[VeryLargeTable];
-- Keep looping until previous day is found or minimum load_id is reached
set @found = 0;
WHILE (@found=0)
BEGIN
select @dateTimeStamp = load_datetimestamp from [dbo].[VeryLargeTable] where load_id=@loadID;
if (@loadID=0) SET @found=1
else
BEGIN
if (DATEPART(day, @dateTimeStamp) = DATEPART(day, GetDate())) SET @loadID = @loadID - 1;
else SET @found=1;
END
END
SELECT * from [dbo].[VeryLargeTable] where load_id=(@loadID + 1);
1) As mentioned in the comments, don't use an expression like datediff(day,load_datetimestamp,getDate())=0. This expression can't use an index on the column load_datetimestamp.
Use load_datetimestamp >= CAST(CAST(GETDATE() AS date) AS datetime).
2) Create an index on column load_datetimestamp.
3) Find the first row that was loaded today and take load_id from it:
SELECT TOP(1) load_id
FROM dbo.VeryLargeTable
WHERE load_datetimestamp >= CAST(CAST(GETDATE() AS date) AS datetime)
ORDER BY load_datetimestamp
This query should be an instant seek of the index on load_datetimestamp. Check the execution plan.
4) Create an index on load_id column.
5) Use the found load_id to return all rows for that load:
WITH
CTE_FirstLoad
AS
(
SELECT TOP(1) load_id
FROM dbo.VeryLargeTable
WHERE load_datetimestamp >= CAST(CAST(GETDATE() AS date) AS datetime)
ORDER BY load_datetimestamp
)
SELECT *
FROM dbo.VeryLargeTable
WHERE load_id IN (SELECT load_id FROM CTE_FirstLoad)
;
This should use the index on load_id.
I've edited the answer and put IN instead of = in the last WHERE. With IN, the query will work even if there were no loads today at all; with =, there would most likely be an error.
Edit
Now we know that you can't create new indexes on this table. And we know that the number of loads per day is small (around 5).
With this limitation it may well be that your second query with the explicit loop would be the fastest. In any case, you have to try all suggestions on the real system and measure their performance.
Can we assume that each load starts and ends on the same day? In other words, that for each load_id the date of load_datetimestamp is the same?
Then you can try a few tweaks to your loop. They may not change the performance much, or at all, but you have to measure.
instead of:
select @loadID = max(load_id) from [dbo].[VeryLargeTable];
try:
SET @loadID =
(select TOP(1) load_id from [dbo].[VeryLargeTable] ORDER BY load_id DESC);
In the same fashion inside the loop (I doubt that the current line actually works as intended, because there are many rows for the same load_id), instead of:
select @dateTimeStamp = load_datetimestamp from [dbo].[VeryLargeTable] where load_id=@loadID;
try:
SET @dateTimeStamp =
(select TOP(1) load_datetimestamp from [dbo].[VeryLargeTable] where load_id=@loadID);
If you are allowed to create another table
I assume that load IDs increase with time, i.e. the load with ID = N has a smaller load_datetimestamp than the load with ID = N+1.
Create a separate table Loads with a column load_id and a unique clustered primary key on load_id.
This table would contain one row with the last load_id of each day. It would be much smaller than the main table.
The purpose of this table is to quickly find the "first load_id for today" without scanning the large table.
Every evening when you run your report this small Loads table would contain rows for loads from previous days, but not from today.
Can you say that by the time you run your report in the evening, all loading has finished for the day? If yes:
Get the last load from previous days from the small table:
SET @PrevLoadID =
(SELECT TOP(1) load_id FROM Loads ORDER BY load_id DESC)
Get the first load for today from the large table:
SET @TodayFirstLoadID =
(SELECT TOP(1) load_id
FROM VeryLargeTable
WHERE VeryLargeTable.load_id > @PrevLoadID
ORDER BY load_id ASC)
Get the last load for today from the large table:
SET @TodayLastLoadID =
(SELECT TOP(1) load_id
FROM VeryLargeTable
ORDER BY load_id DESC)
INSERT the @TodayLastLoadID into Loads.
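A one-line sketch (assuming the Loads table has just the load_id column):

INSERT INTO Loads (load_id) VALUES (@TodayLastLoadID);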
Use @TodayFirstLoadID to run your main report:
SELECT *
FROM VeryLargeTable
WHERE load_id = @TodayFirstLoadID
You don't need 3 CTEs here. You can replace the first two CTEs by simply selecting all distinct combinations of load_id and load_datetimestamp that satisfy the filter condition. Then you can get rid of the third CTE by moving that check directly into the subquery of your final SELECT. The resulting query would look like this:
;With
T2 as
(select distinct
load_id,
load_datetimestamp
from dbo.VeryLargeTable
where datediff(day,load_datetimestamp,getDate())=0)
select * from dbo.VeryLargeTable
where load_id = (select min(load_ID) from T2)
You need to check it in your execution plan, but if I were you I would use a temp table instead of three table expressions joined with the original table.
Also, as mentioned earlier, it is better not to use functions on columns in your WHERE clause. Instead you can use a sargable range such as date BETWEEN start AND end, as sketched below.
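For example, a sketch of the sargable form (the boundary values are computed once up front, so an index on load_datetimestamp can be used):

DECLARE @start datetime = CAST(CAST(GETDATE() AS date) AS datetime); -- today 00:00
DECLARE @end datetime = DATEADD(DAY, 1, @start); -- tomorrow 00:00
SELECT load_id, load_datetimestamp
FROM dbo.VeryLargeTable
WHERE load_datetimestamp >= @start
AND load_datetimestamp < @end;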

Make use of index when JOIN'ing against multiple columns

Simplified, I have two tables, contacts and donotcall
CREATE TABLE contacts
(
id int PRIMARY KEY,
phone1 varchar(20) NULL,
phone2 varchar(20) NULL,
phone3 varchar(20) NULL,
phone4 varchar(20) NULL
);
CREATE TABLE donotcall
(
list_id int NOT NULL,
phone varchar(20) NOT NULL
);
CREATE NONCLUSTERED INDEX IX_donotcall_list_phone ON donotcall
(
list_id ASC,
phone ASC
);
I would like to see which contacts match a phone number on a specific DoNotCall list.
For faster lookups, I have indexed donotcall on list_id and phone.
When I make the following JOIN, it takes a long time (e.g. 9 seconds):
SELECT DISTINCT c.id
FROM contacts c
JOIN donotcall d
ON d.list_id = 1
AND d.phone IN (c.phone1, c.phone2, c.phone3, c.phone4)
Execution plan on Pastebin
While if I LEFT JOIN on each phone field separately, it runs a lot faster (e.g. 1.5 seconds):
SELECT c.id
FROM contacts c
LEFT JOIN donotcall d1
ON d1.list_id = 1
AND d1.phone = c.phone1
LEFT JOIN donotcall d2
ON d2.list_id = 1
AND d2.phone = c.phone2
LEFT JOIN donotcall d3
ON d3.list_id = 1
AND d3.phone = c.phone3
LEFT JOIN donotcall d4
ON d4.list_id = 1
AND d4.phone = c.phone4
WHERE
d1.phone IS NOT NULL
OR d2.phone IS NOT NULL
OR d3.phone IS NOT NULL
OR d4.phone IS NOT NULL
Execution plan on Pastebin
My assumption is that the first snippet runs slowly because it doesn't utilize the index on donotcall.
So: how do I join against multiple columns and still have it use the index?
SQL Server might think resolving IN (c.phone1, c.phone2, c.phone3, c.phone4) via an index is too expensive.
You can test whether the index would be faster with a hint:
SELECT c.*
FROM contacts c
JOIN donotcall d with (index(IX_donotcall_list_phone))
ON d.list_id = 1
AND d.phone IN (c.phone1, c.phone2, c.phone3, c.phone4)
From the query plans you posted, the first plan is estimated to produce 40k rows but returns just 21; the second plan estimates 1 row (and of course also returns 21).
Are your statistics up to date? Out-of-date statistics can explain the optimizer making bad choices. Statistics should be updated automatically or in a weekly job. Check the age of your statistics with:
select object_name(ind.object_id) as TableName
, ind.name as IndexName
, stats_date(ind.object_id, ind.index_id) as StatisticsDate
from sys.indexes ind
order by
stats_date(ind.object_id, ind.index_id) desc
You can update them manually with:
EXEC sp_updatestats;
With this poor database structure, a UNION ALL query might be fastest; a sketch follows.
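A minimal sketch of that idea (my wording, untested against the poster's data): each branch is a plain equality join that can seek IX_donotcall_list_phone, and the outer DISTINCT removes contacts that match on more than one phone column:

SELECT DISTINCT id
FROM (
SELECT c.id FROM contacts c JOIN donotcall d ON d.list_id = 1 AND d.phone = c.phone1
UNION ALL
SELECT c.id FROM contacts c JOIN donotcall d ON d.list_id = 1 AND d.phone = c.phone2
UNION ALL
SELECT c.id FROM contacts c JOIN donotcall d ON d.list_id = 1 AND d.phone = c.phone3
UNION ALL
SELECT c.id FROM contacts c JOIN donotcall d ON d.list_id = 1 AND d.phone = c.phone4
) AS hits;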

Options for indexing a view with cte

I have a view for which I want to create an indexed view. After a lot of effort I was able to get the SQL query for the view in place, and it looks like this:
ALTER VIEW [dbo].[FriendBalances] WITH SCHEMABINDING as
WITH
trans (Amount,PaidBy,PaidFor, Id) AS
(SELECT Amount,userid AS PaidBy, PaidForUsers_FbUserId AS PaidFor, Id FROM dbo.Transactions
FULL JOIN dbo.TransactionUser ON dbo.Transactions.Id = dbo.TransactionUser.TransactionsPaidFor_Id),
bal (PaidBy,PaidFor,Balance) AS
(SELECT PaidBy,PaidFor, SUM( Amount/ transactionCounts.[_count]) AS Balance FROM trans
JOIN (SELECT Id,COUNT(*)AS _count FROM trans GROUP BY Id) AS transactionCounts ON trans.Id = transactionCounts.Id AND trans.PaidBy <> trans.PaidFor
GROUP BY trans.PaidBy,trans.PaidFor )
SELECT ISNULL(bal.PaidBy,bal2.PaidFor)AS PaidBy,ISNULL(bal.PaidFor,bal2.PaidBy)AS PaidFor,
ISNULL( bal.Balance,0)-ISNULL(bal2.Balance,0) AS Balance
FROM bal
left JOIN bal AS bal2 ON bal.PaidBy = bal2.PaidFor AND bal.PaidFor = bal2.Paidby
WHERE ISNULL( bal.Balance,0)>ISNULL(bal2.Balance,0)
Sample Data for FriendBalances View -
PaidBy PaidFor Balance
------ ------- -------
9990 9991 1000
9990 9992 2000
9990 9993 1000
9991 9993 1000
9991 9994 1000
It is mainly a join of 2 tables.
Transactions -
CREATE TABLE [dbo].[Transactions](
[Id] [int] IDENTITY(1,1) NOT NULL,
[Date] [datetime] NOT NULL,
[Amount] [float] NOT NULL,
[UserId] [bigint] NOT NULL,
[Remarks] [nvarchar](255) NULL,
[GroupFbGroupId] [bigint] NULL,
CONSTRAINT [PK_Transactions] PRIMARY KEY CLUSTERED (Id)
)
Sample data in Transactions Table -
Id Date Amount UserId Remarks GroupFbGroupId
-- ----------------------- ------ ------ -------------- --------------
1 2001-01-01 00:00:00.000 3000 9990 this is a test NULL
2 2001-01-01 00:00:00.000 3000 9990 this is a test NULL
3 2001-01-01 00:00:00.000 3000 9991 this is a test NULL
TransactionUsers -
CREATE TABLE [dbo].[TransactionUser](
[TransactionsPaidFor_Id] [bigint] NOT NULL,
[PaidForUsers_FbUserId] [bigint] NOT NULL
) ON [PRIMARY]
Sample Data in TransactionUser Table -
TransactionsPaidFor_Id PaidForUsers_FbUserId
---------------------- ---------------------
1 9991
1 9992
1 9993
2 9990
2 9991
2 9992
3 9990
3 9993
3 9994
Now I am not able to index the view because my query contains CTEs. What are the options that I have now?
If the CTEs can be removed, what should the alternative be that would still allow creating the indexed view?
Here is the error message -
Msg 10137, Level 16, State 1, Line 1 Cannot create index on view "ShareBill.Test.Database.dbo.FriendBalances" because it references common table expression "trans". Views referencing common table expressions cannot be indexed. Consider not indexing the view, or removing the common table expression from the view definition.
The concept:
Transaction mainly consists of:
an Amount that was paid
UserId of the User who paid that amount
and some more information which is not important for now.
The TransactionUser table is a mapping between the Transaction and User tables. Essentially, a transaction can be shared between multiple persons, so we store that in this table.
So we have transactions where one person pays and the others share the amount. If A pays $100 for B, then B owes $100 to A. Similarly, if B then pays $90 for A, B owes only $10 to A. Now if A pays $300 for A, B and C (a $100 share each), then B owes $110 and C owes $100 to A.
So in this particular view we aggregate the effective amount that has been paid (if any) between two users, and thus know how much one person owes another.
Okay, this gives you an indexed view (which needs an additional view on top of it to sort out the who-owes-who detail), but it may still not satisfy your requirements.
/* Transactions table, as before, but with handy unique constraint for FK Target */
CREATE TABLE [dbo].[Transactions](
[Id] [int] IDENTITY(1,1) NOT NULL,
[Date] [datetime] NOT NULL,
[Amount] [float] NOT NULL,
[UserId] [bigint] NOT NULL,
[Remarks] [nvarchar](255) NULL,
[GroupFbGroupId] [bigint] NULL,
CONSTRAINT [PK_Transactions] PRIMARY KEY CLUSTERED (Id),
constraint UQ_Transactions_XRef UNIQUE (Id,Amount,UserId)
)
Nothing surprising so far, I hope
/* Much expanded TransactionUser table, we'll hide it away and most of the maintenance is automatic */
CREATE TABLE [dbo]._TransactionUser(
[TransactionsPaidFor_Id] int NOT NULL,
[PaidForUsers_FbUserId] [bigint] NOT NULL,
Amount float not null,
PaidByUserId bigint not null,
UserCount int not null,
LowUserID as CASE WHEN [PaidForUsers_FbUserId] < PaidByUserId THEN [PaidForUsers_FbUserId] ELSE PaidByUserId END,
HighUserID as CASE WHEN [PaidForUsers_FbUserId] < PaidByUserId THEN PaidByUserId ELSE [PaidForUsers_FbUserId] END,
PerUserDelta as (Amount/UserCount) * CASE WHEN [PaidForUsers_FbUserId] < PaidByUserId THEN -1 ELSE 1 END,
constraint PK__TransactionUser PRIMARY KEY ([TransactionsPaidFor_Id],[PaidForUsers_FbUserId]),
constraint FK__TransactionUser_Transactions FOREIGN KEY ([TransactionsPaidFor_Id]) references dbo.Transactions,
constraint FK__TransactionUser_Transaction_XRef FOREIGN KEY ([TransactionsPaidFor_Id],Amount,PaidByUserID)
references dbo.Transactions (Id,Amount,UserId) ON UPDATE CASCADE
)
This table now maintains enough information to allow the view to be constructed. The rest of the work we do is to construct/maintain the data in the table. Note that, with the foreign key constraint, we've already ensured that if, say, an amount is changed in the transactions table, everything gets recalculated.
/* View that mimics the original TransactionUser table -
in fact it has the same name so existing code doesn't need to change */
CREATE VIEW dbo.TransactionUser
with schemabinding
as
select
[TransactionsPaidFor_Id],
[PaidForUsers_FbUserId]
from
dbo._TransactionUser
GO
/* Effectively the PK on the original table */
CREATE UNIQUE CLUSTERED INDEX PK_TransactionUser on dbo.TransactionUser ([TransactionsPaidFor_Id],[PaidForUsers_FbUserId])
Anything that's already written to work against TransactionUser will now work against this view, and be none the wiser. Except, they can't insert/update/delete the rows without some help:
/* Now we write the trigger that maintains the underlying table */
CREATE TRIGGER dbo.T_TransactionUser_IUD
ON dbo.TransactionUser
INSTEAD OF INSERT, UPDATE, DELETE
AS
SET NOCOUNT ON;
/* Every delete affects *every* row for the same transaction
We need to drop the counts on every remaining row, as well as removing the actual rows we're interested in */
WITH DropCounts as (
select TransactionsPaidFor_Id,COUNT(*) as Cnt from deleted group by TransactionsPaidFor_Id
), KeptRows as (
select tu.TransactionsPaidFor_Id,tu.PaidForUsers_FbUserId,UserCount - dc.Cnt as NewCount
from dbo._TransactionUser tu left join deleted d
on tu.TransactionsPaidFor_Id = d.TransactionsPaidFor_Id and
tu.PaidForUsers_FbUserId = d.PaidForUsers_FbUserId
inner join DropCounts dc
on
tu.TransactionsPaidFor_Id = dc.TransactionsPaidFor_Id
where
d.PaidForUsers_FbUserId is null
), ChangeSet as (
select TransactionsPaidFor_Id,PaidForUsers_FbUserId,NewCount,1 as Keep
from KeptRows
union all
select TransactionsPaidFor_Id,PaidForUsers_FbUserId,null,0
from deleted
)
merge into dbo._TransactionUser tu
using ChangeSet cs on tu.TransactionsPaidFor_Id = cs.TransactionsPaidFor_Id and tu.PaidForUsers_FbUserId = cs.PaidForUsers_FbUserId
when matched and cs.Keep = 1 then update set UserCount = cs.NewCount
when matched then delete;
/* Every insert affects *every* row for the same transaction
This is why the indexed view couldn't be generated */
WITH TU as (
select TransactionsPaidFor_Id,PaidForUsers_FbUserId,Amount,PaidByUserId from dbo._TransactionUser
where TransactionsPaidFor_Id in (select TransactionsPaidFor_Id from inserted)
union all
select TransactionsPaidFor_Id,PaidForUsers_FbUserId,Amount,UserId
from inserted i inner join dbo.Transactions t on i.TransactionsPaidFor_Id = t.Id
), CountedTU as (
select TransactionsPaidFor_Id,PaidForUsers_FbUserId,Amount,PaidByUserId,
COUNT(*) OVER (PARTITION BY TransactionsPaidFor_Id) as Cnt
from TU
)
merge into dbo._TransactionUser tu
using CountedTU new on tu.TransactionsPaidFor_Id = new.TransactionsPaidFor_Id and tu.PaidForUsers_FbUserId = new.PaidForUsers_FbUserId
when matched then update set Amount = new.Amount,PaidByUserId = new.PaidByUserId,UserCount = new.Cnt
when not matched then insert
([TransactionsPaidFor_Id],[PaidForUsers_FbUserId],Amount,PaidByUserId,UserCount)
values (new.TransactionsPaidFor_Id,new.PaidForUsers_FbUserId,new.Amount,new.PaidByUserId,new.Cnt);
Now that the underlying table is being maintained, we can finally write the indexed view you wanted in the first place... almost. The issue is that the totals we create may be positive or negative, because we've normalized the transactions so that we can easily sum them:
CREATE VIEW [dbo]._FriendBalances
WITH SCHEMABINDING
as
SELECT
LowUserID,
HighUserID,
SUM(PerUserDelta) as Balance,
COUNT_BIG(*) as Cnt
FROM dbo._TransactionUser
WHERE LowUserID != HighUserID
GROUP BY
LowUserID,
HighUserID
GO
create unique clustered index IX__FriendBalances on dbo._FriendBalances (LowUserID, HighUserID)
So we finally create a view, built on the indexed view above, that flips the person owed and the person owing when the balance is negative. It will use the index on the view above, which is most of the work we were seeking to save by having the indexed view:
create view dbo.FriendBalances
as
select
CASE WHEN Balance >= 0 THEN LowUserID ELSE HighUserID END as PaidBy,
CASE WHEN Balance >= 0 THEN HighUserID ELSE LowUserID END as PaidFor,
ABS(Balance) as Balance
from
dbo._FriendBalances WITH (NOEXPAND)
Now, finally, we insert your sample data:
set identity_insert dbo.Transactions on --Ensure we get IDs we know
GO
insert into dbo.Transactions (Id,[Date] , Amount , UserId , Remarks ,GroupFbGroupId)
select 1 ,'2001-01-01T00:00:00.000', 3000, 9990 ,'this is a test', NULL union all
select 2 ,'2001-01-01T00:00:00.000', 3000, 9990 ,'this is a test', NULL union all
select 3 ,'2001-01-01T00:00:00.000', 3000, 9991 ,'this is a test', NULL
GO
set identity_insert dbo.Transactions off
GO
insert into dbo.TransactionUser (TransactionsPaidFor_Id, PaidForUsers_FbUserId)
select 1, 9991 union all
select 1, 9992 union all
select 1, 9993 union all
select 2, 9990 union all
select 2, 9991 union all
select 2, 9992 union all
select 3, 9990 union all
select 3, 9993 union all
select 3, 9994
And query the final view:
select * from dbo.FriendBalances
PaidBy PaidFor Balance
------ ------- -------
9990   9991    1000
9990   9992    2000
9990   9993    1000
9991   9993    1000
9991   9994    1000
Now, there is additional work we could do if we were concerned that someone might find a way to dodge the triggers and make direct changes to the base tables. The first would be yet another indexed view, ensuring that every row for the same transaction has the same UserCount value. Finally, with a few additional columns, check constraints, FK constraints and more work in the triggers, I think we can ensure that UserCount is correct - but it may add more overhead than you want.
I can add scripts for these aspects if you want me to - it depends on how restrictive you want/need the database to be.
