Unable to set MAXDOP option on a non-clustered columnstore Index - sql-server

I'm trying to set the MAXDOP option on a non-clustered columnstore index in the following ways, but it doesn't seem to stick. The server is SQL Server 2019.
Index properties -> Options page -> Maximum Degree of Parallelism. I set this to 2 and clicked OK. No error is shown, but when I check the index properties again, Maximum Degree of Parallelism shows 0.
ALTER INDEX IXCS_Test ON [DBTest].[TBLTest] REBUILD WITH (MAXDOP = 2);
This statement executes successfully without errors, but again, when I check the properties of the index, it's still 0.
After trying each of the above options, I scripted the index as a CREATE statement, and the MAXDOP option does not appear in the script.
What am I missing here? Please help.

Related

How do I set Fillfactor to 0 in SQL Server?

I have a set of indexes in my databases, and I would like to set the Fill Factor to 0. Some of these are currently set to 100. I know that as far as SQL Server is concerned, this is the same, but we have a piece of software which is comparing two databases with the same schema and it is detecting a difference between them if the Fill Factor of an index is 0 in one database and 100 in the other. This is a problem.
Whilst we go about updating our software to be a bit cleverer about this, I would like to be able to set the Fill Factor to 0 on various indexes. SQL Server won't let you specify a Fill Factor of 0 (it must be 1..100), so the only way I can think of doing this is to DROP the index and recreate it (with the server default set to 0).
But is there another (and preferably quicker) way?
If the instance-level fill factor (%) configuration value is set to zero, you can omit FILLFACTOR and recreate the indexes with CREATE INDEX ... WITH (DROP_EXISTING = ON). This is a bit more efficient than DROP/CREATE because no sort is needed to recreate the index and, in the case of a clustered index, the non-clustered indexes won't need to be rebuilt twice.
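A minimal sketch of that approach, using hypothetical table and index names (dbo.Orders, IX_Orders_CustomerId): with the instance default fill factor at 0, omitting the FILLFACTOR clause makes the recreated index report a fill factor of 0.

```sql
-- Assumes the instance-wide "fill factor (%)" option is 0 (the default).
-- DROP_EXISTING rebuilds in place, avoiding a separate DROP/CREATE and its sort.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    WITH (DROP_EXISTING = ON);  -- no FILLFACTOR clause => server default (0)
```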

SQL Server errors out prematurely when starting a batch, for an error that will not actually occur

The code below raises an error when a table [mytable] already exists and has a columnstore index, even though the table would have been dropped and recreated without the columnstore index before page compression is applied to it.
drop table if exists [mytable];

select top 0 *
into [mytable]
from [myexternaltable];

alter table [mytable] rebuild partition = all with (data_compression = page);
The error thrown:
This is not a valid data compression setting for a columnstore index. Please choose COLUMNSTORE or COLUMNSTORE_ARCHIVE compression.
At this point [mytable] has not been dropped, so SQL Server has apparently not started executing any of the code.
The code runs just fine when I run the DROP TABLE statement first and the rest of the code afterwards. SQL Server seemingly fails prematurely when it detects an inconsistency with an existing table at the start of the batch, even though that inconsistency would not necessarily persist at execution time. Yet it is perfectly happy when [mytable] does not exist at all, although applying compression to a non-existent table can hardly be called consistent either. SQL Server's consistency checking does not look particularly consistent itself.
I recall having had similar issues with references to columns that did not exist yet but were to be created by the code itself, if only SQL Server would allow the code to run instead of terminating on a wrongly predicted error.
What would be the most straightforward solution to this issue? I would not mind suppressing the error altogether - if possible - since it is obviously wrong.
I am trying to avoid work-arounds such as running the code as two separate batches, putting part of the code in an EXEC call, or trying and catching the error. The code is used in hundreds of stored procedures, so the simpler the solution the better.
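For reference, the deferred-compilation work-around mentioned above (which the poster would rather avoid) looks like this: the ALTER TABLE is not compiled until the outer batch has already dropped and recreated the table, so the compile-time columnstore check never fires.

```sql
drop table if exists [mytable];

select top 0 *
into [mytable]
from [myexternaltable];

-- Compiled only when EXEC runs, i.e. after [mytable] has been recreated
-- without the columnstore index, so the premature error is avoided.
exec ('alter table [mytable] rebuild partition = all with (data_compression = page);');
```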

do these facts really hint to use FORCESEEK or not?

The stated answer to the following question is C, to use the FORCESEEK hint. But to use a hint, we have to review the execution plan first, right? The question doesn't mention anything about the execution plan. The problem seems to be "readers block writers", so wouldn't SNAPSHOT isolation help in this kind of situation?
Question:
A database application runs slowly because of a query against a frequently updated table that has a clustered index. The query returns four columns: three columns in its WHERE clause are contained in a non-clustered index, plus one additional column. What should you do to optimize the statement?
A. Add a HASH hint to the query
B. Add a LOOP hint to the query
C. Add a FORCESEEK hint to the query
D. Add an INCLUDE clause to the index
E. Add a FORCESCAN hint to the query
F. Add a columnstore index to cover the query
G. Enable the optimize for ad hoc workloads option.
H. Convert the unique clustered index to a columnstore index.
I. Include a SET FORCEPLAN ON statement before you run the query
J. Include a SET STATISTICS PROFILE ON statement before you run the query
K. Include a SET STATISTICS SHOWPLAN_XML ON statement before you run the query
L. Include a SET TRANSACTION ISOLATION LEVEL REPEATABLE READ statement before you run the query
M. Include a SET TRANSACTION ISOLATION LEVEL SNAPSHOT statement before you run the query
N. Include a SET TRANSACTION ISOLATION LEVEL SERIALIZABLE statement before you run the query
I would go for option D, because an INCLUDE clause turns the existing non-clustered index into a covering index for the query.
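Option D in code, with hypothetical table and column names (the question gives none): the three WHERE-clause columns stay in the index key, and the one additional returned column is added as an included column, so the query is covered and no lookup into the frequently updated clustered index is needed.

```sql
-- Hypothetical schema: the query filters on (CustomerId, Status, OrderDate)
-- and also returns Total.
CREATE NONCLUSTERED INDEX IX_Orders_Covering
    ON dbo.Orders (CustomerId, Status, OrderDate)
    INCLUDE (Total)                  -- the fourth, non-key column
    WITH (DROP_EXISTING = ON);       -- replace the existing non-clustered index
```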

SQL Server 2008 R2 - Auto Change Tracking does not index some records

We have a full-text indexed table set to automatic change tracking. After inserting some rows into this table and waiting for the crawl to complete, we found that some rows were not indexed by the crawl. I even tried starting a full population, but those rows are still missing from the full-text index.
Does anyone know why this happens?
I'm not sure I understand your question, but if some records are missing from the full-text index, you should first try rebuilding the full-text index.
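A sketch of the usual repair steps, assuming a hypothetical indexed table dbo.Documents and full-text catalog ftCatalog:

```sql
-- Force a complete re-crawl of one table:
ALTER FULLTEXT INDEX ON dbo.Documents START FULL POPULATION;

-- Or rebuild the whole catalog if individual tables keep missing rows:
ALTER FULLTEXT CATALOG ftCatalog REBUILD;
```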

SQL Server query execution plan shows wrong "actual row count" on a used index and performance is terribly slow

Today I stumbled upon an interesting performance problem with a stored procedure running on SQL Server 2005 SP2, in a database running at compatibility level 80 (SQL 2000).
The proc runs for about 8 minutes, and the execution plan shows the use of an index with an actual row count of 1,339,241,423, which is about a factor of 1000 higher than the real row count of the table itself (1,144,640), as shown correctly by the estimated row count. So the actual row count given by the query plan is definitely wrong!
Interestingly enough, when I copy the proc's parameter values into local variables inside the proc and then use the local variables in the actual query, everything works fine: the proc runs in 18 seconds and the execution plan shows the right actual row count.
EDIT: As suggested by TrickyNixon, this looks like the parameter sniffing problem. But I actually get exactly the same execution plan in both cases. The same indexes are used in the same order. The only difference I see is the far too high actual row count on the PK_ED_Transitions index when using the parameter values directly.
I have already run DBCC DBREINDEX and UPDATE STATISTICS without any success.
DBCC SHOW_STATISTICS shows good data for the index, too.
The proc is created WITH RECOMPILE, so a new execution plan is compiled every time it runs.
To be more specific - this one runs fast:
CREATE PROC [dbo].[myProc] (
    @Param datetime
)
WITH RECOMPILE
AS
SET NOCOUNT ON;
DECLARE @local datetime;
SET @local = @Param;

SELECT
    some columns
FROM
    table1
WHERE
    column = @local
GROUP BY
    some other columns
And this version runs terribly slow, but produces exactly the same execution plan (apart from the far too high actual row count on the index in use):
CREATE PROC [dbo].[myProc] (
    @Param datetime
)
WITH RECOMPILE
AS
SET NOCOUNT ON;

SELECT
    some columns
FROM
    table1
WHERE
    column = @Param
GROUP BY
    some other columns
Any ideas?
Does anybody out there know where SQL Server gets the actual row count value from when calculating query plans?
Update: I tried the query on another server with compat mode set to 90 (SQL 2005). It's the same behavior. I think I will open an MS support call, because this looks like a bug to me.
OK, finally I got to it myself.
The two query plans differ in a small detail which I missed at first: the slow one uses a Nested Loops operator to join two subqueries together. That is what produces the high actual row count on the Index Scan operator; it is simply the product of the number of rows of input A and the number of rows of input B.
I still don't know why the optimizer chooses the nested loops instead of a hash match, which runs 1000 times faster in this case, but I could handle my problem by creating a new index, so that the engine does an index seek instead of an index scan under the nested loops.
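A sketch of the kind of fix described, with a hypothetical key column (the post names only the PK_ED_Transitions index, not the join column): an index keyed on the column probed inside the nested loops turns the inner-side index scan into a seek.

```sql
-- Hypothetical: TransitionId is the column probed on the inner side of
-- the Nested Loops join; seeking on it avoids rescanning the whole index
-- once per outer row.
CREATE NONCLUSTERED INDEX IX_ED_Transitions_TransitionId
    ON dbo.ED_Transitions (TransitionId);
```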
When you're comparing execution plans of the stored proc against the copy/paste query, are you using the estimated plans or the actual plans? Make sure to click Query > Include Actual Execution Plan, and then run each query. Compare those plans and see what the differences are.
It sounds like a case of Parameter Sniffing. Here's an excellent explanation along with possible solutions: I Smell a Parameter!
Here's another StackOverflow thread that addresses it: Parameter Sniffing (or Spoofing) in SQL Server
To me it still sounds as if the statistics are incorrect. Rebuilding the indexes does not necessarily update them.
Have you already tried an explicit UPDATE STATISTICS for the affected tables?
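What that suggestion looks like in T-SQL, against the table from the procedure; WITH FULLSCAN reads every row instead of sampling, which is slower but gives the optimizer exact statistics:

```sql
UPDATE STATISTICS dbo.table1 WITH FULLSCAN;
```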
Have you run sp_spaceused to check whether SQL Server has the right summary for that table? I believe that in SQL 2000 the engine used that sort of metadata when building execution plans. We used to have to run DBCC UPDATEUSAGE weekly to update the metadata on some of the rapidly changing tables, as SQL Server was choosing the wrong indexes due to the incorrect row count data.
You're running SQL 2005, and BOL says that in 2005 you shouldn't have to run UPDATEUSAGE any more, but since you're in 2000 compat mode you might find that it is still required.
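The two checks mentioned above, run against the table from the procedure (0 means the current database):

```sql
-- Report the row and page counts SQL Server has recorded for the table:
EXEC sp_spaceused 'dbo.table1';

-- Correct any drift in those recorded counts (relevant under 2000 compat mode):
DBCC UPDATEUSAGE (0, 'dbo.table1');
```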