SQL Server Primary Key With Partition Issue

I am building a table that will be partitioned and contain a FILESTREAM column. The issue I am encountering is that it appears I have to have a composite primary key (FILE_ID and FILE_UPLOADED_DATE) because FILE_UPLOADED_DATE is part of my partition scheme. Is that correct? I would prefer not to have a composite key and simply have FILE_ID as the primary key... could this just be a user error?
Any suggestions would be appreciated.
Version: SQL Server 2008 R2
Partition Schemes and Function:
CREATE PARTITION FUNCTION DocPartFunction (datetime)
AS RANGE RIGHT FOR VALUES ('20101220')
GO
CREATE PARTITION SCHEME DocPartScheme AS
PARTITION DocPartFunction TO (DATA_FG_20091231, DATA_FG_20101231);
GO
CREATE PARTITION SCHEME DocFSPartScheme AS
PARTITION DocPartFunction TO (FS_FG_20091231,FS_FG_20101231);
GO
Create Statement:
CREATE TABLE [dbo].[FILE](
[FILE_ID] [int] IDENTITY(1,1) NOT NULL,
[DOCUMENT] [varbinary](max) FILESTREAM NULL,
[FILE_UPLOADED_DATE] [datetime] NOT NULL,
[FILE_INT] [int] NOT NULL,
[FILE_EXTENSION] [varchar](10) NULL,
[DocGUID] [uniqueidentifier] ROWGUIDCOL NOT NULL UNIQUE ON [PRIMARY],
CONSTRAINT [PK_File] PRIMARY KEY CLUSTERED
( [FILE_ID] ASC
) ON DocPartScheme ([FILE_UPLOADED_DATE])
)ON DocPartScheme ([FILE_UPLOADED_DATE])
FILESTREAM_ON DocFSPartScheme;
Error if I don't include FILE_UPLOADED_DATE:
Msg 1908, Level 16, State 1, Line 1
Column 'FILE_UPLOADED_DATE' is partitioning column of the index 'PK_File'. Partition columns for a unique index must be a subset of the index key.
Msg 1750, Level 16, State 0, Line 1
Could not create constraint. See previous errors.
Thanks!

You are confusing the primary key and the clustered index. There is no reason for the two to be one and the same. You can have a clustered index on FILE_UPLOADED_DATE and a separate, non-clustered, primary key on FILE_ID. In fact you already do something similar for the DocGUID column:
CREATE TABLE [dbo].[FILE](
[FILE_ID] [int] IDENTITY(1,1) NOT NULL,
[DOCUMENT] [varbinary](max) FILESTREAM NULL,
[FILE_UPLOADED_DATE] [datetime] NOT NULL,
[FILE_INT] [int] NOT NULL,
[FILE_EXTENSION] [varchar](10) NULL,
[DocGUID] [uniqueidentifier] ROWGUIDCOL NOT NULL,
constraint UniqueDocGUID UNIQUE NONCLUSTERED ([DocGUID])
ON [PRIMARY])
ON DocPartScheme ([FILE_UPLOADED_DATE])
FILESTREAM_ON DocFSPartScheme;
CREATE CLUSTERED INDEX cdx_File
ON [FILE] (FILE_UPLOADED_DATE)
ON DocPartScheme ([FILE_UPLOADED_DATE])
FILESTREAM_ON DocFSPartScheme;
ALTER TABLE [dbo].[FILE]
ADD CONSTRAINT PK_File PRIMARY KEY NONCLUSTERED (FILE_ID)
ON [PRIMARY];
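If you go this route, it is worth confirming afterwards which indexes ended up on the partition scheme and which did not. A quick sketch (assuming the object names used above):

CREATE TABLE check: which data space does each index of dbo.[FILE] live on?
SELECT i.name        AS index_name,
       i.type_desc   AS index_type,
       ds.name       AS data_space,
       ds.type_desc  AS data_space_type  -- PARTITION_SCHEME vs ROWS_FILEGROUP
FROM sys.indexes AS i
JOIN sys.data_spaces AS ds
  ON ds.data_space_id = i.data_space_id
WHERE i.object_id = OBJECT_ID(N'dbo.FILE');

An index whose data space type is PARTITION_SCHEME is partitioned; ROWS_FILEGROUP means it sits on a plain filegroup such as [PRIMARY], i.e. it is not aligned with the table's partitions.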
However, such a design will lead to non-aligned indexes, which can cause very serious performance problems and also block all fast partition switch operations. See Special Guidelines for Partitioned Indexes:
Each sort table requires a minimum amount of memory to build. When you
are building a partitioned index that is aligned with its base table,
sort tables are built one at a time, using less memory. However, when
you are building a nonaligned partitioned index, the sort tables are
built at the same time.
As a result, there must be sufficient memory to handle these
concurrent sorts. The larger the number of partitions, the more memory
required. The minimum size for each sort table, for each partition, is
40 pages, with 8 kilobytes per page. For example, a nonaligned
partitioned index with 100 partitions requires sufficient memory to
serially sort 4,000 (40 * 100) pages at the same time. If this memory
is available, the build operation will succeed, but performance may
suffer. If this memory is not available, the build operation will fail.
Your design already has a non-aligned index for DocGUID, so the performance problems are likely already present. If you must keep your indexes aligned, then you have to accept one of the side effects of choosing a partition scheme: you can no longer have a logical primary key, nor unique-constraint enforcement, unless the key includes the partitioning key.
And finally, one must ask: why use a partitioned table? They are always slower than a non-partitioned alternative. Unless you need fast partition switch operations for ETL (which you are already punting on due to the non-aligned index on DocGUID), there is basically no incentive to use a partitioned table. (Preemptive comment: a clustered index on FILE_UPLOADED_DATE is guaranteed to be a better alternative than 'partition elimination'.)

The partitioning column must always be present in a partitioned table's clustered index. Any work-around you come up with has to factor this in.
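A minimal sketch of the usual concession, assuming the table and scheme names from the question and that PK_File has not yet been created: fold the partitioning column into the key so the index stays aligned.

-- Hedged work-around: the PK stays aligned because it contains the
-- partitioning column; FILE_ID alone remains unique in practice thanks
-- to its IDENTITY property (though this key does not enforce that).
ALTER TABLE dbo.[FILE]
ADD CONSTRAINT PK_File
    PRIMARY KEY CLUSTERED (FILE_ID, FILE_UPLOADED_DATE)
    ON DocPartScheme (FILE_UPLOADED_DATE);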

I know it's an old question, but maybe Google leads someone else to it:
A possible solution would be to partition not by the date column but by FILE_ID. Every day / week / month (or whatever time period you use) you run an Agent Job at midnight that takes MAX(FILE_ID) WHERE FILE_UPLOADED_DATE < GETDATE(), adds the next filegroup to the partition scheme, and splits on that MaxID + 1.
Of course you will still have the problem with the non-aligned index on DocGUID, unless you either add FILE_ID to that unique index too (which would no longer guarantee unique DocGUIDs) and/or check its uniqueness in an INSERT/UPDATE trigger.
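A rough sketch of that nightly job, assuming an int-based partition function named FilePartFunction, a scheme FilePartScheme, and a pre-created filegroup DATA_FG_NEXT (all hypothetical names):

DECLARE @boundary int;

-- The highest FILE_ID uploaded before today becomes the new split point.
SELECT @boundary = MAX(FILE_ID) + 1
FROM dbo.[FILE]
WHERE FILE_UPLOADED_DATE < GETDATE();

-- Tell the scheme which filegroup the new partition should land on...
ALTER PARTITION SCHEME FilePartScheme NEXT USED [DATA_FG_NEXT];

-- ...then split the function at the new boundary.
ALTER PARTITION FUNCTION FilePartFunction()
SPLIT RANGE (@boundary);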

Related

Azure Synapse Analytics: Can I use non-unique column as hash column in hash distributed tables?

I'm using Dedicated SQL Pools (AKA Azure Synapse Analytics). I'm trying to optimize a fact table, and according to the documentation fact tables should be hash-distributed for better performance.
The problem is:
My fact table has a composite primary key.
You can specify only one column as the hash distribution column.
Can I use one of those columns as the distribution column? Each of the columns has duplicates, though they are all NOT NULL.
CREATE TABLE myTable
(
[ITEM] [varchar](50) NOT NULL,
[LOC] [varchar](50) NOT NULL,
[MEASURE] [varchar](50) NOT NULL
CONSTRAINT [PK] PRIMARY KEY NONCLUSTERED
(
[LOC] ASC,
[ITEM] ASC
) NOT ENFORCED
)
WITH
(
DISTRIBUTION = HASH([ITEM]),
CLUSTERED COLUMNSTORE INDEX
)
Yes, you can! You can use any column as a hash distribution column, but be aware that this introduces a constraint into your table: you cannot drop the distribution column.
There are two reasons to use a hash distribution column: one is to prevent data movement across distributions for queries, and the other is to ensure even distribution of data across your distributions so that all the workers are used efficiently. Hash-distributing by a non-skewed column, even a non-unique one, helps with the second case.
However, if you do want to distribute by your primary key, consider creating a single column by hashing together the different columns of your composite primary key. You can hash-distribute by that hashed key, and this will also hopefully reduce data movement if you later need to upsert on it.
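A hypothetical sketch of that idea (table and column names are invented around the question's schema; as far as I know dedicated SQL pools do not support computed columns as distribution columns, so the hash is materialized at load time):

-- KeyHash carries an MD5 of the composite key and is the
-- distribution column.
CREATE TABLE myTableHashed
(
    [KeyHash] [binary](16) NOT NULL,
    [ITEM]    [varchar](50) NOT NULL,
    [LOC]     [varchar](50) NOT NULL,
    [MEASURE] [varchar](50) NOT NULL
)
WITH
(
    DISTRIBUTION = HASH([KeyHash]),
    CLUSTERED COLUMNSTORE INDEX
);

-- Populate the hash at load time; the separator guards against
-- ('AB','C') colliding with ('A','BC').
INSERT INTO myTableHashed
SELECT HASHBYTES('MD5', CONCAT([LOC], '|', [ITEM])),
       [ITEM], [LOC], [MEASURE]
FROM myTable;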

Composite clustered index as primary key vs heap table in SQL Server

To ensure uniqueness there is a composite PK (clustered) containing:
[timestamp] [datetime2]
[userId] [varchar](36)
[cost_type] [varchar](20)
There are two more columns in the table:
[cost_cent] [bigint] NULL
[consumption_cent] [bigint] NULL
Composite clustered primary keys (especially ones including varchar columns) are not ideal, but what is the alternative?
Having a heap table with a nonclustered primary key? Additionally adding another clustered index? But on what column? There is no identity column.
Background: there are constant inserts/updates on this table via MERGE statements. Table size is ~50 million rows.
Queries will mainly use the PK with a time range.
Your index key size is 58 bytes; I don't see a big issue with this size.
there is a constant insert/update on this table via Merge statements
If you go with the existing composite-key setup (58 bytes is not that huge), updating the primary key is a red flag, since:
1. You may see some fragmentation.
2. Update/delete commands will also have to touch the nonclustered indexes.
Some more options I would experiment with, since 50 million rows is not that huge:
1. Leave this table as a heap and add a nonclustered index with the timestamp column as the leading column and the rest of the columns needed by the query as included columns. If you leave this table as a heap, try answering the following questions to see whether that helps you:
Will you ever need to join this table to other tables?
Do you need a way to uniquely identify a record?
2. I would also try adding an identity column and making it the primary key.
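Option 2 above might look like this; the table name dbo.Costs is invented, and the columns are taken from the question:

-- A surrogate key becomes the clustered PK...
ALTER TABLE dbo.Costs
ADD [id] bigint IDENTITY(1,1) NOT NULL;

ALTER TABLE dbo.Costs
ADD CONSTRAINT PK_Costs PRIMARY KEY CLUSTERED ([id]);

-- ...while the natural key keeps its uniqueness guarantee and the
-- index covers the time-range queries via included columns.
CREATE UNIQUE NONCLUSTERED INDEX UX_Costs_Natural
ON dbo.Costs ([timestamp], [userId], [cost_type])
INCLUDE ([cost_cent], [consumption_cent]);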

Recreate index on column store indexed table with 31 billion rows

I have a big table on which I need to rebuild the index. The table has a clustered columnstore index (CCI), and we realized we need to sort the data according to a specific use case.
Users perform date-range and equality queries, but because the data is not sorted the way they would like to get it back, the queries are not optimal. The SQL Advisory Team recommended that the data be organized into the right rowgroups so queries can benefit from rowgroup elimination.
Table Description:
Partitioned by Timestamp1, monthly partition function
Total Rows: 31 billion
Est row size: 60 bytes
Est table size: 600 GB
Table Definition:
CREATE TABLE [dbo].[Table1](
[PkId] [int] NOT NULL,
[FKId1] [smallint] NOT NULL,
[FKId2] [int] NOT NULL,
[FKId3] [int] NOT NULL,
[FKId4] [int] NOT NULL,
[Timestamp1] [datetime2](0) NOT NULL,
[Measurement1] [real] NULL,
[Measurement2] [real] NULL,
[Measurement3] [real] NULL,
[Measurement4] [real] NULL,
[Measurement5] [real] NULL,
[Timestamp2] [datetime2](3) NULL,
[TimeZoneOffset] [tinyint] NULL
)
CREATE CLUSTERED COLUMNSTORE INDEX [Table1_ColumnStoreIndex] ON [dbo].[Table1] WITH (DROP_EXISTING = OFF)
GO
Environment:
SQL Server 2014 Enterprise Ed.
8 Cores, 32 GB RAM
VMware high-performance platform
My strategy is:
Drop the existing CCI
Create an ordinary clustered rowstore index on the right columns; this will sort the data
Recreate the CCI WITH (DROP_EXISTING = ON); this will convert the existing clustered row index into a CCI
My questions are:
Does it make sense to rebuild the index, or should I just reload the data? Reloading may take a month to complete, and rebuilding the index may take just as long, maybe...
If I drop the existing CCI, will the table expand, since it may no longer be compressed?
31 billion rows is roughly 31,000 perfect rowgroups; a rowgroup is just another form of horizontal partitioning, so it really matters when and how you load your data. SQL Server 2014 supports only offline index builds.
There are a few cons and pros when considering create index vs. reload:
Create index is a single operation, so if it fails at any point you lose your progress. I would not recommend it at your data size.
Index build will create primary dictionaries so for low cardinality dictionary encoded columns it is beneficial.
Bulk load won't create primary dictionaries, but you can reload data if for some reason your batches fail.
Both index build and bulk load will be parallel if you give enough resources, which means your ordering from the base clustered index won't be perfectly preserved, this is just something to be aware of; at your scale of data it won't matter if you have a few overlapping rowgroups.
If your data will undergo updates/deletes and you run REORGANIZE (from SQL Server 2019 the Tuple Mover will also do it), your ordering might degrade over time.
I would create an ordered clustered index partitioned on the date-range column, so that you have anything between 50 and 200 rowgroups per partition (do some experiments). Then you can create a partition-aligned clustered columnstore index and switch in one partition at a time. The partition switch will trigger an index build, so you get the benefit of primary dictionaries, and if you end up with updates/deletes on a partition you can fix up the index quality by rebuilding that partition rather than the whole table. If you decide to use REORGANIZE, you still maintain some level of ordering, because rowgroups will only be merged within the same partition.
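A hedged sketch of that switch-in cycle for one partition (all names are hypothetical; note that the staging table must match the target's schema, sit on the target partition's filegroup, and carry a CHECK constraint bounding Timestamp1 to that partition's range for the switch to be allowed):

-- 1. Load one month into the staging table, then sort it with an
--    ordinary clustered rowstore index.
CREATE CLUSTERED INDEX CI_Stage
ON dbo.Table1_Staging ([Timestamp1]);

-- 2. Convert the sorted rowstore into a columnstore; the index build
--    creates primary dictionaries and largely preserves the sort.
CREATE CLUSTERED COLUMNSTORE INDEX CI_Stage
ON dbo.Table1_Staging WITH (DROP_EXISTING = ON);

-- 3. Metadata-only switch into the partitioned target (the partition
--    number is an example).
ALTER TABLE dbo.Table1_Staging
SWITCH TO dbo.Table1 PARTITION 42;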

What's the difference between a primary key in the table definition vs. a unique clustered index

What is the difference between defining the PK as part of the table definition vs. adding it as a unique clustered index? Using the example below, both indexes show up with index_id = 1 in sys.indexes, but only Table1 has is_primary_key = 1.
I thought these were the same, but SSMS only shows the key symbol on Table1.
Thanks.
CREATE DATABASE IndexVsHeap
GO
USE [IndexVsHeap]
GO
-- Clustered index table
CREATE TABLE [dbo].[Table1](
[LogDate] [datetime2](7) NOT NULL,
[Database_Name] [nvarchar](128) NOT NULL,
[Cached_Size_MB] [decimal](10, 2) NULL,
[Buffer_Pool_Percent] [decimal](5, 2) NULL,
CONSTRAINT [PK_LogDate_DatabaseName] PRIMARY KEY(LogDate, Database_Name)
)
-- Table as heap, PK-CI added later, or did i?
CREATE TABLE [dbo].[Table2](
[LogDate] [datetime2](7) NOT NULL,
[Database_Name] [nvarchar](128) NOT NULL,
[Cached_Size_MB] [decimal](10, 2) NULL,
[Buffer_Pool_Percent] [decimal](5, 2) NULL
)
-- Adding PK-CI to table2
CREATE UNIQUE CLUSTERED INDEX [PK_LogDate_Database_Name] ON [dbo].[Table2]
(
[LogDate] ASC,
[Database_Name] ASC
)
GO
SELECT object_name(object_id), * FROM sys.index_columns
WHERE object_id IN ( object_id('table1'), object_id('table2') )
SELECT * FROM sys.indexes
WHERE name LIKE '%PK_LogDate%'
For all intents and purposes, there is no difference here.
A unique index would allow NULL, but the columns are NOT NULL anyway.
Also, a unique index (though not a unique constraint) could be declared with included columns or as a filtered index, but neither of those applies here since the index is clustered.
The primary key creates a named constraint object that is schema-scoped, so the name must be unique across the schema. An index need only be named uniquely within the table it belongs to.
I would still opt for the PK though to get the visual indicator in the tooling. It allows other developers (and possibly code) to more easily detect what is the unique row identifier.
Also remember that while a table can have only one PK, it could have multiple unique indexes (although only one can be clustered).
I can see where you might want to cluster on information that is unique in some meaningful way but might want to have a separate autogenerated nonclustered PK to make joins faster than joining on the automobile VIN number, for instance. That is why both are available.
A primary key identifies each row in a unique way (it is backed by a unique index). It can be clustered or not, but it is commonly recommended to be clustered; if it is clustered, the data is stored in key order.
A unique clustered index enforces a unique value (or combination of values), and the data is stored based on that index.
What's the advantage of a clustered index? If you have to do an index scan (scan the whole index), the data is stored together, so it's faster.
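One concrete difference worth remembering: a PRIMARY KEY forces its columns to be NOT NULL, while a unique index merely limits a nullable column to a single NULL. A small demo:

CREATE TABLE dbo.T_Unique (col1 int NULL);
CREATE UNIQUE CLUSTERED INDEX UX_T_Unique ON dbo.T_Unique (col1);
INSERT INTO dbo.T_Unique VALUES (NULL);   -- succeeds; one NULL is allowed

CREATE TABLE dbo.T_Pk (col1 int NULL);
ALTER TABLE dbo.T_Pk
ADD CONSTRAINT PK_T_Pk PRIMARY KEY (col1);
-- fails: Msg 8111, cannot define PRIMARY KEY constraint on nullable column

(In the example above the columns are declared NOT NULL anyway, so this does not change the answer, but it is the main behavioral gap between the two.)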

SQL design for various data types

I need to store data in a SQL Server 2008 database from various data sources with different data types. Data types allowed are: Bit, Numeric (1, 2 or 4 bytes), Real and String. There is going to be a value, a timestamp, a FK to the item of which the value belongs and some other information for the data stored.
The most important points are the read performance and the size of the data. There might be a couple thousand items and each item may have millions of values.
I have 5 possible options:
Separate tables for each data type (ValueBit, ValueTinyInt, ValueSmallInt, etc... tables)
Separate tables with inheritance (Value table as base table, ValueBit table just for storing the Bit value, etc...)
Single value table for all data types, with separate fields for each data type (Value table, with ValueBit BIT, ValueTinyInt TINYINT etc...)
Single table and single value field using sql_variant
Single table and single value field using UDT
With option 2, a PK is a must, and:
1,000 items * 10,000,000 values each > Int32.MaxValue, and
1,000 items * 10,000,000 values each * an 8-byte BIGINT PK is huge.
Other than that, I am considering option 1 or 3 with no PK. Will they differ in size?
I do not have experience with 4 or 5 and I do not think that they will perform well in this scenario.
Which way shall I go?
Your question is hard to answer, as you seem to be using a relational database system for something it is not designed for. The data you want to keep in the database seems too unstructured to get much benefit from a relational database system. Database designs built mostly around fields like "parameter type" and "parameter value", trying to cover very generic situations, are generally considered bad designs. Maybe you should consider using a "non-relational database" like BigTable. If you really want to use a relational database system, I'd strongly recommend reading Beginning Database Design by Clare Churcher. It's an easy read, but it gets you on the right track with respect to RDBMS design.
What are usage scenarios? Start with samples of queries and calculate necessary indexes.
Consider data partitioning as mentioned before. Try to understand your data / relations more. I believe the decision should be based on business meaning/usages of the data.
I think it's a great question - this situation is fairly common, though it is awkward to design tables to support it.
In terms of performance, a table like the one in #3 potentially wastes a huge amount of storage and RAM, because each row allocates space for a value of every type but uses only one. The new sparse-column feature of 2008 could help, but there are other issues too: it's a little hard to constrain/normalize, because you want only one of the multiple value columns to be populated per row - having values in two columns would be an error, but the design doesn't express that. I'd cross that off.
So, if it were me, I'd be looking at option 1, 2 or 4, and the decision would be driven by this: do I typically need one query returning rows with a mix of value types in the same result set, or will I almost always ask for rows by item and by type? I ask because values of different types imply to me some difference in the source or use of that data (you are unlikely, for example, to compare a string to a real, or a string to a bit). This is relevant because having a separate table per type might actually be a significant performance/scalability advantage, if partitioning the data that way makes queries faster. Partitioning data into smaller sets of more closely related data can give a performance advantage.
It's like having all the data in one massive (albeit sorted) set or having it partitioned into smaller, related sets. The smaller sets favor some types of queries, and if those are the queries you will need, it's a win.
Details:
CREATE TABLE [dbo].[items](
[itemid] [int] IDENTITY(1,1) NOT NULL,
[item] [varchar](100) NOT NULL,
CONSTRAINT [PK_items] PRIMARY KEY CLUSTERED
(
[itemid] ASC
)
)
/* This table has the problem of allowing two values
in the same row, plus allocates but does not use a
lot of space in memory and on disk (bad): */
CREATE TABLE [dbo].[vals](
[itemid] [int] NOT NULL,
[datestamp] [datetime] NOT NULL,
[valueBit] [bit] NULL,
[valueNumericA] [numeric](2, 0) NULL,
[valueNumericB] [numeric](8, 2) NULL,
[valueReal] [real] NULL,
[valueString] [varchar](100) NULL,
CONSTRAINT [PK_vals] PRIMARY KEY CLUSTERED
(
[itemid] ASC,
[datestamp] ASC
)
)
ALTER TABLE [dbo].[vals] WITH CHECK
ADD CONSTRAINT [FK_vals_items] FOREIGN KEY([itemid])
REFERENCES [dbo].[items] ([itemid])
GO
ALTER TABLE [dbo].[vals] CHECK CONSTRAINT [FK_vals_items]
GO
/* This is probably better, though casting is required
all the time. If you search with the variant as criteria,
that could get dicey as you have to be careful with types,
casting and indexing. Also everything is "mixed" in one
giant set */
CREATE TABLE [dbo].[allvals](
[itemid] [int] NOT NULL,
[datestamp] [datetime] NOT NULL,
[value] [sql_variant] NOT NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[allvals] WITH CHECK
ADD CONSTRAINT [FK_allvals_items] FOREIGN KEY([itemid])
REFERENCES [dbo].[items] ([itemid])
GO
ALTER TABLE [dbo].[allvals] CHECK CONSTRAINT [FK_allvals_items]
GO
/* This would be an alternative, but you trade multiple
queries and joins for the casting issue. OTOH the implied
partitioning might be an advantage */
CREATE TABLE [dbo].[valsBits](
[itemid] [int] NOT NULL,
[datestamp] [datetime] NOT NULL,
[val] [bit] NOT NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[valsBits] WITH CHECK
ADD CONSTRAINT [FK_valsBits_items] FOREIGN KEY([itemid])
REFERENCES [dbo].[items] ([itemid])
GO
ALTER TABLE [dbo].[valsBits] CHECK CONSTRAINT [FK_valsBits_items]
GO
CREATE TABLE [dbo].[valsNumericA](
[itemid] [int] NOT NULL,
[datestamp] [datetime] NOT NULL,
[val] numeric( 2, 0 ) NOT NULL
) ON [PRIMARY]
GO
... FK constraint ...
CREATE TABLE [dbo].[valsNumericB](
[itemid] [int] NOT NULL,
[datestamp] [datetime] NOT NULL,
[val] numeric ( 8, 2 ) NOT NULL
) ON [PRIMARY]
GO
... FK constraint ...
etc...
