I have an indexed view, but when I run queries against that view the index built on the view is not used and the query runs without it. Below is my dummy script:
Tables + view + index on the view:
CREATE TABLE P_Test
(
[PID] INT IDENTITY,
[TID] INT,
[StatusID] INT
)
CREATE TABLE T_Test
(
[TID] INT IDENTITY,
[FID] INT,
)
CREATE TABLE F_Test
(
[FID] INT IDENTITY,
[StatusID] INT
)
GO
INSERT INTO F_Test
SELECT TOP 1000 ABS(CAST(NEWID() AS BINARY(6)) %10) --below 10
FROM master..spt_values
INSERT INTO T_Test
SELECT TOP 10000 ABS(CAST(NEWID() AS BINARY(6)) %1000) --below 1000
FROM master..spt_values,
master..spt_values v2
INSERT INTO P_Test
SELECT TOP 100000 ABS(CAST(NEWID() AS BINARY(6)) %10000) --below 10000
,
ABS(CAST(NEWID() AS BINARY(6)) %10)--below 10
FROM master..spt_values,
master..spt_values v2
GO
CREATE VIEW [TestView]
WITH SCHEMABINDING
AS
SELECT P.StatusID AS PStatusID,
F.StatusID AS FStatusID,
P.PID
FROM dbo.P_Test P
INNER JOIN dbo.T_Test T
ON T.TID = P.TID
INNER JOIN dbo.F_Test F
ON T.FID = F.FID
GO
CREATE UNIQUE CLUSTERED INDEX [PK_TestView]
ON [dbo].[TestView] ( [PStatusID] ASC, [FStatusID] ASC, [PID] ASC )
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
Now when I run the following queries the [PK_TestView] index is not being applied:
SELECT PStatusID ,
FStatusID ,
PID FROM [TestView]
SELECT PStatusID ,
FStatusID ,
PID FROM [TestView]
WHERE [PStatusID]=1
SELECT COUNT(PStatusID) FROM [TestView]
WHERE [PStatusID]=1
Can you help me fix this?
You need to use the NOEXPAND hint. Unless you are on Enterprise Edition, SQL Server will not consider matching indexed views without it, even when the view is referenced by name in the query.
SELECT COUNT(PStatusID)
FROM [TestView]
WITH (NOEXPAND) -- this line
WHERE [PStatusID]=1
This should give you the first, much cheaper, plan
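The same hint applies to the other two queries from the question, and if you are unsure whether your edition performs automatic indexed view matching, SERVERPROPERTY can tell you (a sketch, reusing the objects created above):
SELECT PStatusID, FStatusID, PID
FROM [TestView] WITH (NOEXPAND)

SELECT PStatusID, FStatusID, PID
FROM [TestView] WITH (NOEXPAND)
WHERE [PStatusID] = 1

SELECT SERVERPROPERTY('Edition') -- returns the edition string, e.g. one containing 'Enterprise'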
I want to search a table's varchar column for content in another table's varchar column.
Certain words are banned and I want to identify the rows that have the banned words. I want an EXACT match on the banned word.
I'm using MS SQL Server 2016.
Table 1:
CREATE TABLE [dbo].[BlogComment](
[BlogCommentId] [int] IDENTITY(1,1) NOT NULL,
[BlogCommentContent] [varchar](max) NOT NULL,
CONSTRAINT [PK_BlogComment] PRIMARY KEY CLUSTERED
(
[BlogCommentId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS =
ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
3 rows - and the data in BlogCommentContent:
There are many of us.
This is the man.
I hear you.
Table 2:
CREATE TABLE [dbo].[BannedWords](
[BannedWordsId] [int] IDENTITY(1,1) NOT NULL,
[Description] [varchar](250) NOT NULL,
CONSTRAINT [PK_BannedWords] PRIMARY KEY CLUSTERED
(
[BannedWordsId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS =
ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
3 rows - and the data in Description:
though
man
hear
My SQL:
SELECT BlogCommentContent
FROM dbo.BlogComment,
dbo.BannedWords
WHERE ( CHARINDEX( [Description], BlogCommentContent, 1 ) ) > 1
It finds 'man' and 'hear', but it also finds 'man' inside the word 'many'.
So it returns 3 rows.
I only want EXACT matches, so it should return only 2 rows.
How do I accomplish this?
Please try the following solution.
It works on SQL Server 2017 and later.
SQL
-- DDL and sample data population, start
DECLARE @BlogComment TABLE (
BlogCommentId INT IDENTITY PRIMARY KEY,
BlogCommentContent VARCHAR(MAX));
INSERT INTO @BlogComment (BlogCommentContent) VALUES
('There are many of us.'),
('This is the man.'),
('I hear you.');
DECLARE @BannedWords TABLE (
BannedWordsId INT IDENTITY PRIMARY KEY,
Word varchar(250))
INSERT INTO @BannedWords (Word) VALUES
('though'),
('man'),
('hear');
-- DDL and sample data population, end
;WITH rs AS
(
SELECT word = TRIM('.,' FROM value )
FROM @BlogComment
CROSS APPLY STRING_SPLIT(BlogCommentContent, SPACE(1))
)
SELECT DISTINCT bw.Word
FROM rs
INNER JOIN @BannedWords bw ON rs.word = bw.Word;
SQL Server 2016
;WITH rs AS
(
--SELECT word = TRIM('.,' FROM [value])
SELECT word = REPLACE(REPLACE([value],'.',''),',','')
FROM @BlogComment
CROSS APPLY STRING_SPLIT(BlogCommentContent, SPACE(1))
)
SELECT DISTINCT bw.Word
FROM rs
INNER JOIN @BannedWords bw ON rs.word = bw.Word;
Output
+------+
| Word |
+------+
| man |
| hear |
+------+
If you want exact matches, what you mean is that there must not be another word character touching the match, so 'man' should match "man", "man and women", "mice and men", "a man or two", but not "many". You need to check (combined with OR):
BlogCommentContent = 'man'
left(BlogCommentContent, 4) = 'man '
right(BlogCommentContent, 4) = ' man'
BlogCommentContent like '% man %'
The length to use in right() and left() is len('man') + 1, to allow for the delimiting space.
The last value, for like, can be constructed with concat('% ', 'man', ' %').
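Putting the same idea into one query is possible; the following is only a sketch, reusing the BlogComment and BannedWords tables from the question. Punctuation is stripped with REPLACE so that 'This is the man.' still matches, and the comment is padded with spaces so the LIKE pattern only finds whole words:
SELECT DISTINCT b.BlogCommentContent
FROM dbo.BlogComment b
INNER JOIN dbo.BannedWords w
ON ' ' + REPLACE(REPLACE(b.BlogCommentContent, '.', ''), ',', '') + ' '
LIKE '% ' + w.[Description] + ' %'
With the sample data this returns the two expected rows ('This is the man.' and 'I hear you.') and skips 'many'.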
We have some "normal" SQL tables and graph tables, plus a script that syncs the information between them.
Everything is working ok from SSMS but when building the database project in Visual Studio using msbuild we get warnings (see code and warnings details below).
If we set TreatTSqlWarningsAsErrors to True these warnings become errors.
We don't want to ignore warnings, but it's unclear why we even get them.
Are these warnings correct?
Why are they showing in Visual Studio only and not in SSMS?
How can we resolve them without ignoring them?
The details:
We have the following two "normal" SQL tables:
CREATE TABLE [dbo].[Currency]
(
[Id] [int] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](250) NOT NULL,
[UId] [uniqueidentifier] NOT NULL,
CONSTRAINT [PK_Currency]
PRIMARY KEY CLUSTERED ([Id] ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Currency]
ADD CONSTRAINT [DF_Currency_UId] DEFAULT (NEWID()) FOR [UId]
GO
and
CREATE TABLE [dbo].[Portfolio]
(
[Id] [INT] IDENTITY(1,1) NOT NULL,
[Name] [NVARCHAR](250) NULL,
[CurrencyId] [INT] NOT NULL,
[UId] [UNIQUEIDENTIFIER] NOT NULL,
CONSTRAINT [PK_Portfolio]
PRIMARY KEY CLUSTERED ([Id] ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Portfolio]
ADD CONSTRAINT [DF_Portfolio_UId] DEFAULT (NEWID()) FOR [UId]
GO
ALTER TABLE [dbo].[Portfolio] WITH CHECK
ADD CONSTRAINT [FK_Portfolio_Currency]
FOREIGN KEY([CurrencyId]) REFERENCES [dbo].[Currency] ([Id])
GO
ALTER TABLE [dbo].[Portfolio] CHECK CONSTRAINT [FK_Portfolio_Currency]
GO
and we created the following SQL graph schema together with two node tables and one edge table:
CREATE SCHEMA [graph] ;
CREATE TABLE [graph].[Currency]
(
[Id] UNIQUEIDENTIFIER NOT NULL,
PRIMARY KEY CLUSTERED ([Id] ASC),
INDEX [GRAPH_UNIQUE_INDEX_4FFC60C0FCBE4843A7F4B9AB0729FF78] UNIQUE NONCLUSTERED ($node_id)
) AS NODE;
CREATE TABLE [graph].[Portfolio]
(
[Id] UNIQUEIDENTIFIER NOT NULL,
PRIMARY KEY CLUSTERED ([Id] ASC),
INDEX [GRAPH_UNIQUE_INDEX_F39F92BE8DD34DD791D6CA955DA0DA9A] UNIQUE NONCLUSTERED ($node_id)
) AS NODE;
CREATE TABLE [graph].[isOf]
(
[IsActive] BIT CONSTRAINT [DF_isOf_IsActive] DEFAULT ((1)) NOT NULL,
INDEX [GRAPH_UNIQUE_INDEX_AD8C5B40D277413580EDD943AC192869] UNIQUE NONCLUSTERED ($edge_id)
) AS EDGE;
CREATE UNIQUE NONCLUSTERED INDEX [UQ_FromTo]
ON [graph].[isOf] ($from_id, $to_id) ON [PRIMARY];
GO
A sync stored procedure, executed daily, populates the nodes and the edge from the "normal" tables.
Parts of it are shown below:
PRINT ( 'Sync graph.[Portfolio]' );
INSERT INTO graph.Portfolio ( Id )
SELECT P.[UId]
FROM dbo.Portfolio P
WHERE NOT EXISTS ( SELECT 1
FROM graph.Portfolio G
WHERE G.Id = P.[UId] );
PRINT ( 'Sync graph.[Currency]' );
INSERT INTO graph.Currency ( Id )
SELECT C.[UId]
FROM dbo.Currency C
WHERE NOT EXISTS ( SELECT 1
FROM graph.Currency G
WHERE G.Id = C.[UId] );
PRINT('Sync Portfolio isOf Currency edge');
;
WITH UidCTE
AS ( SELECT P.[UId] AS PortfolioUid ,
C.[UId] AS CurrencyUid
FROM dbo.Portfolio P
JOIN dbo.Currency C ON C.Id = P.CurrencyId )
MERGE graph.isOf AS TGT
USING graph.[Portfolio] AS SourceFrom
JOIN UidCTE CTE ON SourceFrom.Id = CTE.PortfolioUid
JOIN graph.[Currency] AS SourceTo ON CTE.CurrencyUid = SourceTo.Id
ON MATCH(SourceFrom-(TGT)->SourceTo)
WHEN NOT MATCHED BY TARGET THEN
INSERT ( $from_id , $to_id )
VALUES ( SourceFrom.$node_id, SourceTo.$node_id)
WHEN MATCHED AND TGT.[IsActive] = 0 THEN
UPDATE SET TGT.[IsActive] = 1;
and the following script updates the edge's IsActive column when the relationship no longer exists in the "normal" tables (we don't want to delete anything from the graph nodes/edges):
;WITH PortfolioIsOfCurrency
AS ( SELECT P.[UId] AS PortfolioUid ,
C.[UId] AS CurrencyUid
FROM dbo.Portfolio P
JOIN dbo.Currency C ON C.Id = P.CurrencyId ) ,
NodeCTE
AS ( SELECT P.$node_id AS FromNode ,
C.$node_id AS ToNode
FROM PortfolioIsOfCurrency CTE
JOIN graph.Portfolio P ON P.Id = CTE.PortfolioUid
JOIN graph.Currency C ON C.Id = CTE.CurrencyUid )
UPDATE ISOF
SET isOf.IsActive = 0
FROM graph.isOf ISOF
WHERE NOT EXISTS ( SELECT 1
FROM NodeCTE N
WHERE N.FromNode = ISOF.$from_id
AND N.ToNode = ISOF.$to_id );
We created a database project in VS 2019 with the following settings:
<DSP>Microsoft.Data.Tools.Schema.Sql.Sql150DatabaseSchemaProvider</DSP>
<TargetFrameworkVersion>v4.5</TargetFrameworkVersion>
<TreatWarningsAsErrors>False</TreatWarningsAsErrors>
<TreatTSqlWarningsAsErrors>False</TreatTSqlWarningsAsErrors>
From SSMS everything works perfectly.
The problem is with the last statement (the UPDATE of isOf.IsActive): we get the following warnings:
Warning SQL71502: Procedure: [graph].[SyncEngagementContentGraphs] has an unresolved reference to object [graph].[isOf].[N]. graph\Stored Procedures\SyncEngagementContentGraphs.sql 57
Warning SQL71502: Procedure: [graph].[SyncEngagementContentGraphs] has an unresolved reference to object [graph].[isOf].[N]. graph\Stored Procedures\SyncEngagementContentGraphs.sql 58
Warning SQL71509: The model already has an element that has the same name NodeCTE.$node_id. graph\Stored Procedures\SyncEngagementContentGraphs.sql 47
Warning SQL71509: The model already has an element that has the same name NodeCTE.$node_id. graph\Stored Procedures\SyncEngagementContentGraphs.sql 48
If we set TreatTSqlWarningsAsErrors to True, these warnings become errors, and we cannot leave it set to False in the long run.
The solution is to correctly define the NodeCTE common table expression with explicit column names.
This seems to be mandatory for a CTE over graph tables.
The correct query, without warnings, would be:
;WITH PortfolioIsOfCurrency
AS ( SELECT P.[UId] AS PortfolioUid ,
C.[UId] AS CurrencyUid
FROM dbo.Portfolio P
JOIN dbo.Currency C ON C.Id = P.CurrencyId ) ,
NodeCTE (FromNode, ToNode)
AS ( SELECT P.$node_id AS FromNode ,
C.$node_id AS ToNode
FROM PortfolioIsOfCurrency CTE
JOIN graph.Portfolio P ON P.Id = CTE.PortfolioUid
JOIN graph.Currency C ON C.Id = CTE.CurrencyUid )
UPDATE ISOF
SET isOf.IsActive = 0
FROM graph.isOf ISOF
WHERE NOT EXISTS ( SELECT 1
FROM NodeCTE N
WHERE N.FromNode = ISOF.$from_id
AND N.ToNode = ISOF.$to_id );
This change is needed only for the VS SQL project warnings. Both versions run without problems in SSMS/ADS.
I have a view which comprises 4 yearly tables:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE VIEW [dbo].[BGT_BETWAYDETAILS]
WITH SCHEMABINDING
AS
SELECT [bwd_BetTicketNr] ,
[bwd_LineID] ,
[bwd_ResultID] ,
[bwd_DateModified] ,
[bwd_DateModifiedTrunc] ,
[bwd_LineMaxPayout]
FROM [dbo].[BGT_BETWAYDETAILS_2020]
UNION ALL
SELECT [bwd_BetTicketNr] ,
[bwd_LineID] ,
[bwd_ResultID] ,
[bwd_DateModified] ,
[bwd_DateModifiedTrunc] ,
[bwd_LineMaxPayout]
FROM [dbo].[BGT_BETWAYDETAILS_2019]
UNION ALL
SELECT [bwd_BetTicketNr] ,
[bwd_LineID] ,
[bwd_ResultID] ,
[bwd_DateModified] ,
[bwd_DateModifiedTrunc] ,
[bwd_LineMaxPayout]
FROM [dbo].[BGT_BETWAYDETAILS_2018]
UNION ALL
SELECT [bwd_BetTicketNr] ,
[bwd_LineID] ,
[bwd_ResultID] ,
[bwd_DateModified] ,
[bwd_DateModifiedTrunc] ,
[bwd_LineMaxPayout]
FROM [dbo].[BGT_BETWAYDETAILS_2017];
GO
Each table has the following structure:
CREATE TABLE [dbo].[BGT_BETWAYDETAILS_2020]
(
[bwd_BetTicketNr] [bigint] NOT NULL,
[bwd_LineID] [int] NOT NULL,
[bwd_ResultID] [bigint] NOT NULL,
[bwd_DateModified] [datetime] NULL,
[bwd_DateModifiedTrunc] [date] NULL,
[bwd_LineMaxPayout] [decimal](18, 4) NULL,
CONSTRAINT [CSTR__BGT_BETWAYDETAILS_2020_CKEY]
PRIMARY KEY CLUSTERED ([bwd_BetTicketNr] ASC, [bwd_LineID] ASC, [bwd_ResultID] ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
I have added a non-clustered index on [bwd_DateModifiedTrunc]:
CREATE NONCLUSTERED INDEX [NCI__DATEMODIFIED]
ON [dbo].[BGT_BETWAYDETAILS_2020] ([bwd_DateModifiedTrunc] ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF,
ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
I am running the following 3 queries:
SELECT COALESCE(MAX([bwd_DateModifiedTrunc]), '2019-01-01') AS next_date
FROM [dbo].[BGT_BETWAYDETAILS_2020]
SELECT COALESCE(MAX([bwd_DateModifiedTrunc]), '2019-01-01') AS next_date
FROM [dbo].[BGT_BETWAYDETAILS]
SELECT COALESCE(CAST(MAX([bwd_DateModified]) AS date), '2019-01-01') AS next_date
FROM [dbo].[BGT_BETWAYDETAILS]
The first one, when run on each yearly table, runs instantly.
The second one seems to take forever, and its query plan looks very strange.
The plan shows two index scans on each yearly table.
The plan for each yearly table is what I expected to see:
Finally, the plan for the query on the non-indexed date column is also what I expected to see: a clustered index scan on each table. That query runs in ~3 minutes, which is expected.
What is the issue here? Is there some anti-pattern I am missing? Why is the scan of the non-clustered index done twice, according to the live plan? I expected the view to respond as fast as the individual tables.
For the record, I am running this on SQL Server 2017.
This just looks like an optimiser limitation. I have submitted a suggestion that this should be improved.
A simpler example is
CREATE TABLE T1(X INT NULL UNIQUE CLUSTERED);
CREATE TABLE T2(X INT NULL UNIQUE CLUSTERED);
INSERT INTO T1
OUTPUT INSERTED.X INTO T2
SELECT TOP 100000 NULLIF(ROW_NUMBER() OVER (ORDER BY 1/0),1)
FROM sys.all_objects o1,
sys.all_objects o2;
And then
WITH CTE AS
(
SELECT X FROM T1
UNION ALL
SELECT X FROM T2
)
SELECT MAX(X)
FROM CTE
OPTION (QUERYRULEOFF ScalarGbAggToTop)
This disables the query optimizer rule ScalarGbAggToTop and the query plan does a MAX on each individual table then computes a MAX of the MAX-es - so the same as
SELECT MAX(MaxX)
FROM
(
SELECT MAX(X) AS MaxX FROM T1
UNION ALL
SELECT MAX(X) AS MaxX FROM T2
) T
With the ScalarGbAggToTop rule enabled the plan now looks like this
It is effectively doing the following...
SELECT MAX(MaxX)
FROM (SELECT MAX(X) AS MaxX
FROM (SELECT TOP 1 X
FROM T1
WHERE X IS NULL
UNION ALL
SELECT TOP 1 X
FROM T1
WHERE X IS NOT NULL
ORDER BY X DESC) T1
UNION ALL
SELECT MAX(X) AS MaxX
FROM (SELECT TOP 1 X
FROM T2
WHERE X IS NULL
UNION ALL
SELECT TOP 1 X
FROM T2
WHERE X IS NOT NULL
ORDER BY X DESC) T2) T0
... but in a very inefficient way. Running the SQL above would give a plan with seeks and each branch only reading a single row.
The plan produced by ScalarGbAggToTop only has minimal changes to the stream aggregate plan. It looks like it takes the scan from that and applies a backwards ordering to it and then uses the backwards ordering for both the NOT NULL and NULL branches. And does not perform any additional exploration to see if there is a more efficient access path.
This means that in the pathological case that all of the rows are either NULL or NOT NULL one of the scans will end up reading all of the rows in the table (5 billion in your case if applicable to all 4 tables). Even if there is a mix of NULL and NOT NULL the fact that the IS NULL branch is doing a backwards scan is sub optimal because NULL is ordered first in SQL Server so would be at the beginning of the index.
The addition of the IS NULL branch in the first place seems largely unnecessary, as the query would return the same results without it. I imagine it is only needed so that it knows whether or not to display the message
Warning: Null value is eliminated by an aggregate or other SET operation.
but I doubt you care about that. In which case adding an explicit WHERE ... NOT NULL resolves the issue.
WITH CTE AS
(
SELECT X FROM T1
UNION ALL
SELECT X FROM T2
)
SELECT MAX(X)
FROM CTE
WHERE X IS NOT NULL
;
It now has a seek into the NOT NULL part of the index and reads backwards (stopping after the first row is read from each table).
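Applied to the view from the question, the workaround would look something like the following sketch. The result is unchanged, because MAX ignores NULLs anyway and COALESCE still supplies the fallback date when no rows qualify:
SELECT COALESCE(MAX([bwd_DateModifiedTrunc]), '2019-01-01') AS next_date
FROM [dbo].[BGT_BETWAYDETAILS]
WHERE [bwd_DateModifiedTrunc] IS NOT NULL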
I need to create a method of generating a unique order number. Each order number must always be greater than the last, but they should not always be consecutive. The solution must work in a web farm environment.
I currently have a stored procedure responsible for getting a new order number, which has to be seeded so that the order numbers are not consecutive. The application is now moving from a single server to a web farm, so controlling access to the stored procedure via a lock in C# is no longer viable. I have updated the stored procedure as below, but I am concerned that I am going to introduce blocking/locks/deadlocks when concurrent calls occur.
The table and index structures are as follows
MyAppSetting Table
CREATE TABLE [dbo].[MyAppSetting](
[SettingName] [nvarchar](255) NOT NULL,
[SettingValue] [nvarchar](max) NOT NULL,
CONSTRAINT [PK_MyAppSetting] PRIMARY KEY CLUSTERED
(
[SettingName] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
My Order table
CREATE TABLE [dbo].[MyOrder](
[id] [int] IDENTITY(1,1) NOT NULL,
[OrderNumber] [nvarchar](50) NOT NULL CONSTRAINT [DF_MyOrder_OrderNumber] DEFAULT (N''),
... rest of the table
CONSTRAINT [PK_MyOrder] PRIMARY KEY NONCLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
The Sql Transaction
Set Transaction Isolation Level Serializable;
Begin Transaction
--Gen random number
SELECT @Random = ROUND(((@HighSeed - @LowSeed - 1) * RAND() + @LowSeed), 0)
--Get Seed
select @Seed = [SettingValue] FROM [MyAppSetting] where [SettingName] = 'OrderNumberSeed'
--Removed concurrency check; not required as the order number should not exceed the seed number
--select @MaxOrderNumber = Max(OrderNumber) FROM MyOrder
--if @MaxOrderNumber >= @Seed Begin
--    Set @Seed = @MaxOrderNumber
--end
-- New Seed
Set @OrderNumber = @Seed + @Random
Update [MyAppSetting] Set [SettingValue] = @OrderNumber where [SettingName] = 'OrderNumberSeed'
select @OrderNumber
Commit
With the revised SQL you provided, you only select and update one table. You can do this in a single statement, which should avoid the risk of deadlocks and removes the need for an explicit transaction.
Setup:
CREATE TABLE OrderNumber ( NextOrderNumber int)
INSERT OrderNumber(NextOrderNumber) values (123)
Get Next Order Number
DECLARE @MinIncrement int = 5
DECLARE @MaxIncrement int = 50
DECLARE @Random int = ROUND(((@MaxIncrement - @MinIncrement - 1) * RAND() + @MinIncrement), 0)
DECLARE @OrderNumber int
UPDATE OrderNumber
SET @OrderNumber = NextOrderNumber, NextOrderNumber = NextOrderNumber + @Random
SELECT @OrderNumber
I changed LowSeed and HighSeed to MinIncrement and MaxIncrement as I found the term Seed here to be confusing. I would use a table dedicated to tracking the order number to avoid locking anything else on the MyAppSetting table.
I would also challenge the requirement of having an order number that always increases but is not sequential - without it, a GUID would be easier.
An alternative to consider would be to derive the order number from the time somehow, with a final digit identifying the different servers.
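As a rough sketch of that last alternative (the timestamp format and the per-server digit are hypothetical, not from the original post):
DECLARE @ServerDigit char(1) = '1' -- assumed identifier for this web-farm node
SELECT CONCAT(FORMAT(SYSUTCDATETIME(), 'yyyyMMddHHmmssfff'), @ServerDigit) AS OrderNumber
Two orders taken within the same millisecond on the same server would still collide, so this would only be a starting point.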
This is my query:
exec sp_executesql N'set arithabort off;set statistics time on; set transaction isolation level read uncommitted;With cte as (Select peta_rn = ROW_NUMBER() OVER (ORDER BY d.LastStatusChangedDateTime desc )
, d.DocumentID,
d.IsReEfiled, d.IGroupID, d.ITypeID, d.RecordingDateTime, d.CreatedByAccountID, d.JurisdictionID,
d.LastStatusChangedDateTime as LastStatusChangedDateTime
, d.IDate, d.InstrumentID, d.DocumentStatusID
, u.Username
, it.Abbreviation AS ITypeAbbreviation
, ig.Abbreviation AS IGroupAbbreviation,
d.DocumentDate
From Documents d
Inner Join ITypes it on it.ITypeID = d.ITypeID
Inner Join Users u on d.UserID = u.UserID Inner Join IGroupes ig on ig.IGroupID = d.IGroupID
Where 1=1 And ( d.DocumentStatusID = 9 ) ) Select cte.DocumentID,
cte.IsReEfiled, cte.IGroupID, cte.ITypeID, cte.RecordingDateTime, cte.CreatedByAccountID, cte.JurisdictionID,
cte.LastStatusChangedDateTime as LastStatusChangedDateTime
, cte.IDate, cte.InstrumentID, cte.DocumentStatusID,cte.IGroupAbbreviation, cte.Username, j.JDAbbreviation, inf.DocumentName,
cte.ITypeAbbreviation, cte.DocumentDate, ds.Abbreviation as DocumentStatusAbbreviation, ds.Name as DocumentStatusName,
( SELECT CAST(CASE WHEN cte.DocumentID = (
SELECT TOP 1 doc.DocumentID
FROM Documents doc
WHERE doc.JurisdictionID = cte.JurisdictionID
AND doc.DocumentStatusID = cte.DocumentStatusID
ORDER BY LastStatusChangedDateTime)
THEN 1
ELSE 0
END AS BIT)
) AS CanChangeStatus ,
Upper((Select Top 1 Stuff( (Select ''='' + dbo.GetDocumentNameFromParamsWithPartyType(Business, FirstName, MiddleName, LastName, t.Abbreviation, NameTypeID, pt.Abbreviation, IsGrantor, IsGrantee) From DocumentNames dn
Left Join Titles t
on dn.TitleID = t.TitleID
Left Join PartyTypes pt
On pt.PartyTypeID = dn.PartyTypeID
Where DocumentID = cte.DocumentID
For XML PATH('''')),1,1,''''))) as FlatDocumentName, (SELECT COUNT(*) FROM CTE) AS TotalRecords
FROM cte Left Join DocumentStatuses ds On
cte.DocumentStatusID = ds.DocumentStatusID Left Join InstrumentFiles inf On cte.DocumentID = inf.DocumentID
Left Join Jurisdictions j on j.JurisdictionID = cte.JurisdictionID Where 1=1 And
peta_rn>@7 AND peta_rn<=@8 Order by peta_rn set statistics time off; ',N'@0 int,@1 int,@2 int,@3 int,@4 int,@5 int,@6 int,@7 int,@8 int',
@0=1,@1=5,@2=9,@3=1,@4=5,@5=9,@6=1,@7=97500,@8=97550
And this is my IGroupes table definition:
CREATE TABLE [dbo].[IGroupes](
[IGroupID] [int] IDENTITY(1,1) NOT NULL,
[Name] [varchar](64) NOT NULL,
[JurisdictionID] [int] NOT NULL,
[Abbreviation] [varchar](12) NOT NULL,
CONSTRAINT [PK_IGroupes] PRIMARY KEY NONCLUSTERED
(
[IGroupID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
SET ANSI_PADDING ON
GO
/****** Object: Index [IX_IGroupes_Abbreviation] Script Date: 10/11/2013 4:21:46 AM ******/
CREATE NONCLUSTERED INDEX [IX_IGroupes_Abbreviation] ON [dbo].[IGroupes]
(
[Abbreviation] ASC
)
INCLUDE ( [IGroupID],
[Name],
[JurisdictionID]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
GO
SET ANSI_PADDING ON
GO
/****** Object: Index [IX_IGroupes_JurisdictionID] Script Date: 10/11/2013 4:21:46 AM ******/
CREATE NONCLUSTERED INDEX [IX_IGroupes_JurisdictionID] ON [dbo].[IGroupes]
(
[JurisdictionID] ASC
)
INCLUDE ( [IGroupID],
[Name],
[Abbreviation]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
GO
SET ANSI_PADDING ON
GO
/****** Object: Index [IX_IGroupes_Name] Script Date: 10/11/2013 4:21:46 AM ******/
CREATE NONCLUSTERED INDEX [IX_IGroupes_Name] ON [dbo].[IGroupes]
(
[Name] ASC
)
INCLUDE ( [IGroupID],
[JurisdictionID],
[Abbreviation]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
GO
Yet, as you can see, it is using a table scan. This operation is costing me too much. The IGroupes table has just 7 rows and the Documents table has approximately 98K records. Yet when I join on d.IGroupID = ig.IGroupID, the actual number of rows shown is above 600K! That is the problem. Please see the attached screenshot:
In case anybody is interested in the full query plan xml, here it is:
https://www.dropbox.com/s/kldx24x3j8vndpe/plan.xml
Any help is appreciated. Thanks!
None of the 3 indexes (other than the PK) you have on IGroupes are going to help this query because you are not using any of those fields in a where or join clause. Unless you need those indexes for other queries, I would delete them. They are just going to give the query optimizer more choices to test (and reject).
The index on the Primary Key PK_IGroupes should be clustered. That will allow it to do an index seek (or bookmark lookup). If it can't be clustered for some other reason, try creating an index on IGroupID and Abbreviation, in that order (or including the Abbreviation column in the existing PK index).
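A sketch of that suggestion (the index name is made up here):
CREATE NONCLUSTERED INDEX [IX_IGroupes_IGroupID_Abbreviation]
ON [dbo].[IGroupes] ([IGroupID] ASC, [Abbreviation] ASC)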
If it still doesn't pick up the right index, you can use a hint such as WITH(INDEX(0)) or WITH(INDEX('index-name')).
The 600K rows come from the fact that it is doing a nested loop join: the 98K rows multiplied by the 7 rows. If the index above doesn't work, you can try replacing the INNER JOIN IGroupes with INNER HASH JOIN IGroupes.
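For illustration, only the join keyword changes; a trimmed-down version of the CTE from the question (column list shortened, tables as in the original query) would look like this:
SELECT d.DocumentID, ig.Abbreviation AS IGroupAbbreviation
FROM Documents d
INNER JOIN ITypes it ON it.ITypeID = d.ITypeID
INNER JOIN Users u ON d.UserID = u.UserID
INNER HASH JOIN IGroupes ig ON ig.IGroupID = d.IGroupID
WHERE d.DocumentStatusID = 9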
Probably, in this case, a table scan is more efficient than using any of the indexes you have on the IGroupes table.
If you think the table scan is the bottleneck in this query (though at 3% of the cost I'm not sure it is), you can either try making PK_IGroupes a clustered index or try an index like:
CREATE UNIQUE NONCLUSTERED INDEX [IX_IGroupes_IGroupID]
ON [dbo].[IGroupes] ([IGroupID]) INCLUDE ([Abbreviation])