How to limit the length of an array in PostgreSQL?

Is there any way to add a constraint on a column that is an array to limit its length? I want these arrays to be no longer than 6 elements. And yes, I understand that a separate table is often better than storing data in an array, but I am in a situation where an array makes more sense.

You can add a CHECK constraint to the table definition:
CREATE TABLE my_table (
    id serial PRIMARY KEY,
    arr int[] CHECK (array_length(arr, 1) < 7),
    ...
);
If the table already exists, you can add the constraint with ALTER TABLE:
ALTER TABLE my_table ADD CONSTRAINT arr_len CHECK (array_length(arr, 1) < 7);
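As a quick sanity check (illustrative values; assuming the remaining columns are nullable or have defaults), the first insert below has 6 elements and passes, while the second has 7 and is rejected:
INSERT INTO my_table (arr) VALUES ('{1,2,3,4,5,6}');   -- 6 elements, accepted
INSERT INTO my_table (arr) VALUES ('{1,2,3,4,5,6,7}'); -- 7 elements, violates the CHECK constraint
Note that NULL and empty arrays still pass, because array_length returns NULL for them and a CHECK that evaluates to NULL is not treated as a violation.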

Related

Check Constraints in SQL

I don't want to allow my DB users to enter dates later than 2017-03-18. How can I add this constraint to my table?
Is this correct?
(Year([ContractEnd])<2017) and (Month([ContractEnd])<03) and (Day([ContractEnd])<18)
You can add a constraint like that to an existing table like so:
alter table t add constraint chk_ContractEnd_lt_20170319
check (ContractEnd<'20170319');
rextester demo: http://rextester.com/FQWFMI88817
create table t (
id int not null identity(1,1)
, ContractEnd date
/* at table creation */
, constraint chk_ContractEnd_lt_20170319 check (ContractEnd<'20170319')
)
alter table t drop constraint chk_ContractEnd_lt_20170319;
/* to existing table */
alter table t add constraint chk_ContractEnd_lt_20170319
check (ContractEnd<='20170318');
insert into t values ('20161231')
insert into t values ('20170318')
/* all good */
insert into t values ('20170319')
/* -- Error, constraint violation */
Try
[ContractEnd] DATE CHECK ([ContractEnd] <= '20170318')

Ensure foreign key of a foreign key matches a base foreign key

Basically let's say I have a "Business" that owns postal codes that it services. Let's also suppose I have another relational table that sets up fees.
CREATE TABLE [dbo].[BusinessPostalCodes]
(
    [BusinessPostalCodeId] INT IDENTITY (1, 1) NOT NULL,
    [BusinessId] INT NOT NULL,
    [PostalCode] VARCHAR (10) NOT NULL
)

CREATE TABLE [dbo].[BusinessPostalCodeFees]
(
    [BusinessId] INT NOT NULL,
    [BusinessProfileFeeTypeId] INT NOT NULL,
    [BusinessPostalCodeId] INT NOT NULL,
    [Fee] SMALLMONEY NULL
)
I want to know if it's possible to set up a foreign key (or something) on BusinessPostalCodeFees that ensures that the related BusinessId of BusinessPostalCodes is the same as the BusinessId of BusinessPostalCodeFees.
I realize that I can remove BusinessId entirely, but I would much rather keep this column and have a way of guaranteeing they will be the same. Is there anything I can do?
It sounds like (and correct me if I'm wrong) you're trying to make sure that any entry in the BusinessPostalCodeFees BusinessId and BusinessPostalCodeId columns matches an entry in the BusinessPostalCodes table. If that's the case, then yes, you can definitely have a foreign key that references a compound primary key.
However, if you need to keep the BusinessId, I'd recommend normalizing your tables a step further than you have. You'll end up with duplicate data as-is.
On a side note, I would recommend against using the money data types in SQL Server.
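For reference, a rough sketch of that composite-key idea (the constraint names here are made up): add a unique constraint on BusinessPostalCodes over (BusinessId, BusinessPostalCodeId), then have BusinessPostalCodeFees reference both columns with a composite foreign key:
ALTER TABLE [dbo].[BusinessPostalCodes]
ADD CONSTRAINT UQ_BusinessPostalCodes_BusinessId_BusinessPostalCodeId
UNIQUE ([BusinessId], [BusinessPostalCodeId]);

ALTER TABLE [dbo].[BusinessPostalCodeFees]
ADD CONSTRAINT FK_BusinessPostalCodeFees_BusinessPostalCodes
FOREIGN KEY ([BusinessId], [BusinessPostalCodeId])
REFERENCES [dbo].[BusinessPostalCodes] ([BusinessId], [BusinessPostalCodeId]);
With that in place, a fee row can only use a (BusinessId, BusinessPostalCodeId) pair that actually exists in BusinessPostalCodes.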
In the end, Jeffrey's solution didn't quite work for my particular situation. Both columns in the relation have to be unique (like a composite key). Turns out the answer here (for me) is a CHECK constraint.
Create a function for the constraint to evaluate as pass or fail:
CREATE FUNCTION [dbo].[MatchingBusinessIdPostalCodeAndProfileFeeType]
(
    @BusinessId int,
    @BusinessPostalCodeId int,
    @BusinessProfileFeeTypeId int
)
RETURNS BIT
AS
BEGIN
    -- This works because BusinessPostalCodeId is a unique Id.
    -- If BusinessId doesn't match, it's filtered out.
    DECLARE @pcCount AS INT
    SET @pcCount = (SELECT COUNT(*)
                    FROM BusinessPostalCodes
                    WHERE BusinessPostalCodeId = @BusinessPostalCodeId AND
                          BusinessId = @BusinessId)

    -- This works because BusinessProfileFeeTypeId is a unique Id.
    -- If BusinessId doesn't match, it's filtered out.
    DECLARE @ftCount AS INT
    SET @ftCount = (SELECT COUNT(*)
                    FROM BusinessProfileFeeTypes
                    WHERE BusinessProfileFeeTypeId = @BusinessProfileFeeTypeId AND
                          BusinessId = @BusinessId)

    -- Both should have exactly one matching record
    IF (@pcCount = 1 AND @ftCount = 1)
        RETURN 1

    RETURN 0
END
Then just add it to your table:
CONSTRAINT [CK_BusinessPostalCodeFees_MatchingBusinessIdPostalCodeAndProfileFeeType]
CHECK (dbo.MatchingBusinessIdPostalCodeAndProfileFeeType(
BusinessId,
BusinessPostalCodeId,
BusinessProfileFeeTypeId) = 1)
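If the constraint is being added to an existing table rather than declared inline, the equivalent ALTER TABLE form would look roughly like this (WITH CHECK also validates rows already in the table):
ALTER TABLE [dbo].[BusinessPostalCodeFees] WITH CHECK
ADD CONSTRAINT [CK_BusinessPostalCodeFees_MatchingBusinessIdPostalCodeAndProfileFeeType]
CHECK (dbo.MatchingBusinessIdPostalCodeAndProfileFeeType(
    BusinessId,
    BusinessPostalCodeId,
    BusinessProfileFeeTypeId) = 1);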

Partial value constraint of SQL Server table

I have a SQL Server table with two columns id:INT and flagged:BOOLEAN. Is it possible to add a constraint that ensures that there is only one entry for (id=a, flagged=b) where b = 1?
For example:
ok
(id=1, flagged=1)
(id=1, flagged=0)
(id=1, flagged=0)
not ok
(id=1, flagged=1)
(id=1, flagged=1)
(id=1, flagged=0)
Create a unique index with a filter:
CREATE UNIQUE INDEX idx_name ON your_table(id)
WHERE flagged=1;
Demo:
SqlFiddleDemo
CREATE TABLE your_table(id INT, flagged INT);
CREATE UNIQUE INDEX idx_name ON your_table(id)
WHERE flagged=1;
INSERT INTO your_table(id, flagged)
VALUES (1, 0), (1,1), (1,0);
INSERT INTO your_table(id, flagged) -- will fail
VALUES (1,1);
/* Cannot insert duplicate key row in object 'dbo.your_table'
with unique index 'idx_name'. The duplicate key value is (1).*/

Nonclustered index on varchar column or partitioned table

I have a simple table:
CREATE TABLE dbo.Table1 (
ID int IDENTITY(1,1) PRIMARY KEY,
TextField varchar(100)
)
I have a nonclustered index on the TextField column.
I am creating a simple query which selects both columns, and in the WHERE condition I have the following:
...
WHERE SUBSTRING(TextField, 1, 1) = 'x'
Is it better to convert the query to a LIKE condition with 'x%', or to create a partition function on the TextField column?
How can partitioning affect a search condition over a varchar column, and which solution will be better for a large number of rows?
By default, SUBSTRING(TextField, 1, 1) = 'x' is not SARGable.
First, I would test the query with the following solutions (SQL Profiler > {SQL Statement | Batch} Completed events > CPU, Reads, Writes, Duration columns):
1) A non-clustered index on the TextField column:
CREATE INDEX IN_Table1_TextField
ON dbo.Table1(TextField)
INCLUDE(non-indexed columns); -- see the SELECT list columns
GO
And the query should use LIKE:
SELECT ... FROM dbo.Table1 WHERE TextField LIKE 'x%'; -- where 'x' represents one or more leading chars
Pros/cons: the B-Tree/index will have many levels because of the key length (maximum 100 chars + RowID if it isn't a UNIQUE index).
2) I would create a computed column for the first char:
-- TextField column needs to be mandatory
ALTER TABLE dbo.Table1
ADD FirstChar AS (CONVERT(CHAR(1),SUBSTRING(TextField,1,1))); -- This computed column could be non-persistent
GO
plus
CREATE INDEX IN_Table1_FirstChar
On dbo.Table1(FirstChar)
INCLUDE (non-indexed columns);
GO
In this case, the predicate could be
WHERE SUBSTRING(TextField, 1, 1) = 'x'
or
WHERE FirstChar = 'x'
Pros/cons: the B-Tree/index will have far fewer levels because of the key length (1 char + RowID). I would use this if predicate selectivity is high (only a small number of rows match) but without covered columns (see the INCLUDE clause).
3) A clustered index on the FirstChar column, like so:
CREATE TABLE dbo.Table1 (
    ID int IDENTITY(1,1) PRIMARY KEY NONCLUSTERED,
    TextField varchar(100) NOT NULL, -- This column needs to be mandatory
    FirstChar AS (CONVERT(CHAR(1), SUBSTRING(TextField, 1, 1))),
    UNIQUE CLUSTERED (FirstChar, ID)
);
In this case, the predicate could be
WHERE SUBSTRING(TextField, 1, 1) = 'x'
or
WHERE FirstChar = 'x'
Pros/cons: this should give you good performance if you have many rows. In this case, the number of B-Tree levels will be minimal to medium, since the key is just 1 CHAR + 1 INT.
Your non-clustered index cannot be utilized if a function is applied to the column (e.g. SUBSTRING). LIKE 'x%' would be preferable here.
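For completeness, a minimal sketch of the SARGable form (the column list is assumed from the question):
SELECT ID, TextField
FROM dbo.Table1
WHERE TextField LIKE 'x%'; -- can seek on an index over TextField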

Simple CHECK Constraint not so simple

2nd Edit: The source code for the involved function is as follows:
ALTER FUNCTION [Fileserver].[fn_CheckSingleFileSource] ( @fileId INT )
RETURNS INT
AS
BEGIN
    -- Declare the return variable here
    DECLARE @sourceCount INT ;

    -- Add the T-SQL statements to compute the return value here
    SELECT @sourceCount = COUNT(*)
    FROM Fileserver.FileUri
    WHERE FileId = @fileId
      AND FileUriTypeId = Fileserver.fn_Const_SourceFileUriTypeId() ;

    -- Return the result of the function
    RETURN @sourceCount ;
END
Edit: The example table is a simplification. I need this to work as a scalar function / CHECK constraint operation. The real-world arrangement is not so simple.
Original Question: Assume the following table named FileUri
FileUriId, FileId, FileTypeId
I need to write a check constraint such that FileId is unique for a FileTypeId of 1. You could insert the same FileId as many times as you want, but only a single row where FileTypeId is 1.
The approach that DIDN'T work:
1) dbo.fn_CheckFileTypeId returns INT with the following logic: SELECT COUNT(FileId) FROM FileUri WHERE FileTypeId = 1
2) ALTER TABLE FileUri ADD CONSTRAINT CK_FileUri_FileTypeId CHECK (dbo.fn_CheckFileTypeId(FileId) <= 1)
When I insert FileId 1, FileTypeId 1 twice, the second insert is allowed.
Thanks SO!
You need to create a filtered unique index (SQL Server 2008)
CREATE UNIQUE NONCLUSTERED INDEX ix ON YourTable(FileId) WHERE FileTypeId=1
or simulate this with an indexed view (2000 and 2005)
CREATE VIEW dbo.UniqueConstraintView
WITH SCHEMABINDING
AS
SELECT FileId
FROM dbo.YourTable
WHERE FileTypeId = 1
GO
CREATE UNIQUE CLUSTERED INDEX ix ON dbo.UniqueConstraintView(FileId)
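As a quick illustration of the behaviour (hypothetical data, assuming YourTable has at least FileId and FileTypeId INT columns), either the filtered index or the indexed view rejects a second FileTypeId = 1 row for the same FileId:
INSERT INTO dbo.YourTable (FileId, FileTypeId) VALUES (1, 1); -- ok: first FileTypeId = 1 row for FileId 1
INSERT INTO dbo.YourTable (FileId, FileTypeId) VALUES (1, 0); -- ok: rows with FileTypeId <> 1 are not restricted
INSERT INTO dbo.YourTable (FileId, FileTypeId) VALUES (1, 1); -- fails: duplicate key on the unique index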
Why don't you make FileTypeId and FileId together the primary key of the table?
Or at least add a unique index on the table. That should solve your problem.
