I need to add a constraint to validate that the number of rows referencing a row in a master table is not greater than a value stored in that master row. For example, we have the tables
master(master_id int pk, max_val int) and slave(slave_id int pk, master_id fk ref master(master_id)) (so slave is de facto a collection of something), and I want count(master_id) in slave to be <= max_val for that master_id. I have a constraint
constraint NO_MORE_PASS check ((select count(head_id) from parts p where
p.head_id = head_id) <= (select max_val from head where id = head_id));
(I am not sure it is even correct; in any case, SQL Server 2017 reports that subqueries are not allowed here, so...).
I have also read Check Constraint - Subqueries are not allowed in this context, so the question is: is there any other alternative (I would like to avoid using a trigger)?
I'm using this in a Spring app with Spring Data JPA (and Hibernate) - that may be useful to know, but I would like to enforce the rule on the DB side rather than in the app.
Nevertheless, the entity looks like this:
@Entity
@Table(name = "route_parts")
data class RoutePart(
    @Id
    @Column(name = "route_part_id")
    @GeneratedValue(strategy = GenerationType.AUTO)
    var id: Long? = null,
    //...
    @Column(nullable = false)
    var slots: Int? = null,
    @ManyToMany(fetch = FetchType.LAZY)
    @JoinTable(name = "route_part_passengers",
        joinColumns = [(JoinColumn(name = "route_part_id"))],
        inverseJoinColumns = [(JoinColumn(name = "user_id"))]
    )
    var passengers: Set<ApplicationUser> = setOf()
)
and in that case ApplicationUser is the slave (or rather, the join table that will be created for route_part_passengers is effectively the slave table) limited by the slots value.
So the question is...
How can I limit the number of ApplicationUser records attached to each RoutePart?
If you want a check constraint to be based on a query, you must wrap that query in a user-defined function and reference the function in the check constraint.
Here is a quick example:
Tables:
CREATE TABLE dbo.Parent
(
Id int,
MaxNumberOfChildren int NOT NULL
);
CREATE TABLE dbo.Child
(
Id int,
ParentId int
);
User-defined function (all it does is return the difference between MaxNumberOfChildren and the number of records in the Child table with the same ParentId):
CREATE FUNCTION dbo.RestrictNumbrOfChildren
(
    @ParentId int
)
RETURNS int
AS
BEGIN
    RETURN
    (
        SELECT MaxNumberOfChildren
        FROM dbo.Parent
        WHERE Id = @ParentId
    )
    -
    (
        SELECT COUNT(Id)
        FROM dbo.Child
        WHERE ParentId = @ParentId
    )
END;
Add the check constraint to the Child table:
ALTER TABLE dbo.Child
ADD CONSTRAINT chk_childCount CHECK (dbo.RestrictNumbrOfChildren(ParentId) >= 0);
And that's basically all you need, unless MaxNumberOfChildren is nullable.
In that case, you should add ISNULL() to the first query, with either 0 if null means no children are allowed, or the maximum value of int (2,147,483,647) if null means no restriction on the number of children - so it becomes SELECT ISNULL(MaxNumberOfChildren, 0)... or SELECT ISNULL(MaxNumberOfChildren, 2147483647)....
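For example, here is a sketch of the NULL-tolerant variant described above, treating NULL as "no restriction" (define the function this way from the start, or adjust the existing definition accordingly; swap in 0 if NULL should mean no children are allowed):
CREATE FUNCTION dbo.RestrictNumbrOfChildren
(
    @ParentId int
)
RETURNS int
AS
BEGIN
    RETURN
    (
        -- NULL is treated as "no restriction" here
        SELECT ISNULL(MaxNumberOfChildren, 2147483647)
        FROM dbo.Parent
        WHERE Id = @ParentId
    )
    -
    (
        SELECT COUNT(Id)
        FROM dbo.Child
        WHERE ParentId = @ParentId
    )
END;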
To test the script, let's insert some data to the Parent table:
INSERT INTO Parent (Id, MaxNumberOfChildren) VALUES
(1, 3), (2, 2), (3, 1);
And insert some valid data to the Child table:
INSERT INTO Child (Id, ParentId) VALUES
(1, 1), (2, 2);
So far, we have not exceeded the maximum number of records allowed. Now let's try to exceed it by inserting some more data into the Child table:
INSERT INTO Child (Id, ParentId) VALUES
(3, 1), (4, 1), (5, 1);
Now, this insert statement will fail with the error message:
The INSERT statement conflicted with the CHECK constraint "chk_childCount". The conflict occurred in database "<your database name here>", table "dbo.Child", column 'ParentId'.
You can see a live demo on rextester.
Related
I have been trying to create a production ERP using C# and SQL Server.
I want to create a table where an insert should only succeed when at least one of the main columns has a different value. The main columns are prod_date, order_no, mach_no, shift_no and prod_type. If all of these values are repeated a second time, the row must not be inserted.
create table p1_order(
    id int not null,
    order_no int not null,
    prod_date date not null,
    prod_type nvarchar(5),
    shift_no int not null,
    mach_no nvarchar(5) not null,
    prod_qty float not null
)
Based on the information you provided, you should check for identical values in your application code before executing the insert query. For example, in pseudocode:
if (prod_date == existingRow.prod_date && order_no == existingRow.order_no && mach_no == existingRow.mach_no) // and likewise for the other main columns
{
    // report an error about identical values
}
else
{
    // run your insert query
}
The best way to implement this is by creating a unique constraint on the table.
alter table p1_order
add constraint UC_Order unique (prod_date,order_no,mach_no,shift_no,prod_type);
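For example, with the constraint in place, a second row that repeats the same combination of those five columns is rejected (the values below are made up for illustration):
insert into p1_order (id, order_no, prod_date, prod_type, shift_no, mach_no, prod_qty)
values (1, 123, '2022-09-20', 't1', 1, 'M5', 10.5);   -- succeeds

insert into p1_order (id, order_no, prod_date, prod_type, shift_no, mach_no, prod_qty)
values (2, 123, '2022-09-20', 't1', 1, 'M5', 99.9);   -- fails: duplicate key in UC_Order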
If for some reason you are not able to create a unique constraint, you can write your insert like the following, using NOT EXISTS:
insert into p1_order (order_no , prod_date , prod_type <remaining columns>)
select 123, '2022-09-20 15:11:43.680', 't1' <remaining values>
where not exists
(select 1
from p1_order
where order_no = 123 AND prod_date = '2022-09-20 15:11:43.680' <additional condition>)
I'd like to express:
"insertion of record with 'parent' value that is not included in 'rowid' AFTER INSERTION is forbidden."
My intention is to keep the table internally consistent as a directed acyclic graph, with every record being a node referring to its parent (root nodes are their own parent). How can I do that?
Here's what I have (with rowid used as the primary key):
CREATE TABLE Heap (
name TEXT CHECK(typeof(name) = 'text')
NOT NULL
UNIQUE ,
parent INTEGER DEFAULT rowid ,
color INTEGER CHECK(color BETWEEN 0 AND 2)
);
CREATE TRIGGER parent_not_in_rowid
BEFORE INSERT ON Heap
BEGIN
SELECT RAISE(FAIL, 'parent id inconsistent') FROM Heap
WHERE NOT EXISTS(SELECT 1 FROM Heap WHERE NEW.rowid = NEW.parent);
END;
I would suggest using NULL values in the column parent for root nodes, because then all you have to do is add referential integrity to your table.
Add a column id defined as INTEGER PRIMARY KEY, so that it is an alias of the rowid, and make the column parent reference id:
CREATE TABLE Heap (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL UNIQUE CHECK(typeof(name) = 'text'),
parent INTEGER REFERENCES Heap(id),
color INTEGER CHECK(color BETWEEN 0 AND 2)
);
Now, turn on foreign key support:
PRAGMA foreign_keys = ON;
and insert rows:
INSERT INTO Heap (name, parent, color) VALUES ('name1', null, 1);
INSERT INTO Heap (name, parent, color) VALUES ('name2', 1, 1);
This will fail:
INSERT INTO Heap (name, parent, color) VALUES ('name3', 5, 2);
because there is no row in the table with id = 5.
See the demo.
I am moving a small database from MS Access into SQL Server. Each year, the users would create a new Access database and have clean data, but this change will put data across the years into one pot. The users have relied on the autonumber value in Access as a reference for records. That is very inaccurate if, say, 238 records are removed.
So I am trying to accommodate them with an id column they can control (somewhat). They will not see the real primary key in the SQL table, but I want to give them an ID they can edit, but still be unique.
I've been working with this trigger, but it has taken much longer than I expected.
Everything SEEMS TO work fine, except I don't understand why I have the same data in my INSERTED table as the table the trigger is on. (See note in code.)
ALTER TRIGGER [dbo].[trg_tblAppData]
ON [dbo].[tblAppData]
AFTER INSERT,UPDATE
AS
BEGIN
SET NOCOUNT ON;
DECLARE @NewUserEnteredId int = 0;
DECLARE @RowIdForUpdate int = 0;
DECLARE @CurrentUserEnteredId int = 0;
DECLARE @LoopCount int = 0;
--*** Loop through all records to be updated because the values will be incremented.
WHILE (1 = 1)
BEGIN
SET @LoopCount = @LoopCount + 1;
IF (@LoopCount > (SELECT Count(*) FROM INSERTED))
BREAK;
SELECT TOP 1 @RowIdForUpdate = ID, @CurrentUserEnteredId = UserEnteredId FROM INSERTED WHERE ID > @RowIdForUpdate ORDER BY ID DESC;
IF (@RowIdForUpdate IS NULL)
BREAK;
-- WHY IS THERE A MATCH HERE? HAS THE RECORD ALREADY BEEN INSERTED?
IF EXISTS (SELECT UserEnteredId FROM tblAppData WHERE UserEnteredId = @CurrentUserEnteredId)
BEGIN
SET @NewUserEnteredId = (SELECT Max(t1.UserEnteredId) + 1 FROM tblAppData t1);
END
ELSE
SET @NewUserEnteredId = @CurrentUserEnteredId;
UPDATE tblAppData
SET UserEnteredId = @NewUserEnteredId
FROM tblAppData a
WHERE a.ID = @RowIdForUpdate
END
END
Here is what I want to accomplish:
When new record(s) are added, the user-facing ID should increment from the maximum existing value.
When a user overrides a value, it should check to see the existence of that value. If found restore the existing value, otherwise allow the change.
This trigger allows for multiple rows being added at a time.
It would be great for this to be efficient, but in reality they will only add about 1,000 records a year.
I wouldn't use a trigger to accomplish this.
Here is a script you can use to create a sequence (the OP didn't tag a SQL Server version; SEQUENCE requires SQL Server 2012 or later), create the primary key, use the sequence as your user-editable id, and put a unique constraint on that column.
create table dbo.test (
testid int identity(1,1) not null primary key clustered
, myid int null constraint UQ_ unique
, somevalue nvarchar(255) null
);
create sequence dbo.myid
as int
start with 1
increment by 1;
alter table dbo.test
add default next value for dbo.myid for myid;
insert into dbo.test (somevalue)
select 'this' union all
select 'that' union all
select 'and' union all
select 'this';
insert into dbo.test (myid, somevalue)
select 33, 'oops';
select *
from dbo.test
insert into dbo.test (somevalue)
select 'oh the fun';
select *
from dbo.test
--| This should error
insert into dbo.test (myid, somevalue)
select 3, 'This is NO fun';
Here is the result set:
testid  myid  somevalue
1       1     this
2       2     that
3       3     and
4       4     this
5       33    oops
6       5     oh the fun
And at the very end there is a test insert, which will error.
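If memory serves, it fails with a unique-key violation along these lines (the exact wording varies by version):
Violation of UNIQUE KEY constraint 'UQ_'. Cannot insert duplicate key in object 'dbo.test'. The duplicate key value is (3).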
Given two tables:
TableA
(
id : primary key,
type : tinyint,
...
)
TableB
(
id : primary key,
tableAId : foreign key to TableA.id,
...
)
There is a check constraint on TableA.type with permitted values of (0, 1, 2, 3). All other values are forbidden.
Due to known limitations, records in TableB may exist only when TableB.tableAId references a record in TableA with TableA.type = 0, 1 or 2, but not 3. The latter case is forbidden and would put the system into an invalid state.
How can I guarantee that in such case the insert to TableB will fail?
Cross-table constraint using an empty indexed view:
Tables
CREATE TABLE dbo.TableA
(
id integer NOT NULL PRIMARY KEY,
[type] tinyint NOT NULL
CHECK ([type] IN (0, 1, 2, 3))
);
CREATE TABLE dbo.TableB
(
id integer NOT NULL PRIMARY KEY,
tableAId integer NOT NULL
FOREIGN KEY
REFERENCES dbo.TableA
);
The 'constraint view'
-- This view is always empty (limited to error rows)
CREATE VIEW dbo.TableATableBConstraint
WITH SCHEMABINDING AS
SELECT
Error =
CASE
-- Error condition: type = 3 and rows join
WHEN TA.[type] = 3 AND TB.tableAId = TA.id
-- For a more informative error
THEN CONVERT(bit, 'TableB cannot reference type 3 rows in TableA.')
ELSE NULL
END
FROM dbo.TableA AS TA
JOIN dbo.TableB AS TB
ON TB.tableAId = TA.id
WHERE
TA.[type] = 3;
GO
CREATE UNIQUE CLUSTERED INDEX cuq
ON dbo.TableATableBConstraint (Error);
Online demo:
-- All succeed
INSERT dbo.TableA (id, [type]) VALUES (1, 1);
INSERT dbo.TableA (id, [type]) VALUES (2, 2);
INSERT dbo.TableA (id, [type]) VALUES (3, 3);
INSERT dbo.TableB
(id, tableAId)
VALUES
(1, 1),
(2, 2);
-- Fails
INSERT dbo.TableB (id, tableAId) VALUES (3, 3);
-- Fails
UPDATE dbo.TableA SET [type] = 3 WHERE id = 1;
This is similar in concept to the linked answer to "Check constraint that ensures the values in a column of tableA are less than the values in a column of tableB", but this solution is self-contained (it does not require a separate table that must contain more than one row at all times). It also produces a more informative error message, for example:
Msg 245, Level 16, State 1
Conversion failed when converting the varchar value 'TableB cannot reference type 3 rows in TableA.' to data type bit.
Important notes
The error condition must be completely specified in the CASE expression to ensure correct operation in all cases. Do not be tempted to omit conditions implied by the rest of the statement. In this example, it would be an error to omit TB.tableAId = TA.id (implied by the join).
The SQL Server query optimizer is free to reorder predicates, and makes no general guarantees about the timing or number of evaluations of scalar expressions. In particular, scalar computations can be deferred.
Completely specifying the error condition(s) within a CASE expression ensures the complete set of tests is evaluated together, and no earlier than correctness requires. From an execution plan perspective, this means the Compute Scalar associated with the CASE tests appears on the indexed view delta-maintenance branch of the plan: in the accompanying plan image, the light-shaded area highlights the indexed view maintenance region, and the Compute Scalar containing the CASE expression is dark-shaded.
We have a table where we store all the exceptions (message, stackTrace, etc..), the table is getting big and we would like to reduce it.
There are plenty of repeated StackTraces, Messages, etc., but enabling compression produces only a modest size reduction (about 10%), while I think much bigger savings could come if SQL Server somehow interned the strings in a per-column hash table.
I could get some of the benefit if I normalized the table and extracted StackTraces into another one, but exception messages, exception types, etc. are also repeated.
Is there a way to enable string interning for some column in Sql Server?
There is no built-in way to do this. You could easily do something like:
SELECT MessageID = IDENTITY(INT, 1, 1), Message
INTO dbo.Messages
FROM dbo.HugeTable GROUP BY Message;
ALTER TABLE dbo.HugeTable ADD MessageID INT;
UPDATE h
SET h.MessageID = m.MessageID
FROM dbo.HugeTable AS h
INNER JOIN dbo.Messages AS m
ON h.Message = m.Message;
ALTER TABLE dbo.HugeTable DROP COLUMN Message;
Now you'll need to do a few things:
Change your logging procedure to perform an upsert to the Messages table (see the sketch after this list)
Add a primary key and proper indexes to the Messages table (wasn't sure of the Message data type)
Add a FK to the MessageID column
Rebuild indexes on HugeTable to reclaim space
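A rough sketch of those follow-up steps (the constraint, index and procedure names here are hypothetical, and Message is assumed to be of an indexable type such as nvarchar(2000); adjust to your actual schema):
-- Key and lookup index on the new Messages table
ALTER TABLE dbo.Messages ADD CONSTRAINT PK_Messages PRIMARY KEY CLUSTERED (MessageID);
CREATE UNIQUE NONCLUSTERED INDEX IX_Messages_Message ON dbo.Messages (Message);

-- FK from the big table to the lookup table
ALTER TABLE dbo.HugeTable
    ADD CONSTRAINT FK_HugeTable_Messages
    FOREIGN KEY (MessageID) REFERENCES dbo.Messages (MessageID);

-- Upsert-style logging procedure: reuse an existing MessageID, or add a new row first
CREATE PROCEDURE dbo.LogError
    @Message nvarchar(2000)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @MessageID int = (SELECT MessageID FROM dbo.Messages WHERE Message = @Message);

    IF @MessageID IS NULL
    BEGIN
        INSERT dbo.Messages (Message) VALUES (@Message);
        SET @MessageID = SCOPE_IDENTITY();
    END

    INSERT dbo.HugeTable (MessageID /*, other columns */)
    VALUES (@MessageID /*, other values */);
END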
Do this in a test environment first!
Aaron's posting answers the question of adding interning to a table, but afterwards you will need to modify your application code and stored procedures to work with the new schema.
...or so you might think. You can actually create a VIEW that returns data matching the old schema, and you can support INSERT operations on the view too, which are translated into child operations on the string and log tables. For readability I'll use the names InternedStrings and ExceptionLogs for the tables.
So if the old table was this:
CREATE TABLE ExceptionLogs (
LogId int IDENTITY(1,1) NOT NULL PRIMARY KEY,
Message nvarchar(1024) NOT NULL,
ExceptionType nvarchar(512) NOT NULL,
StackTrace nvarchar(4096) NOT NULL
)
And the new tables are:
CREATE TABLE InternedStrings (
StringId int IDENTITY(1,1) NOT NULL PRIMARY KEY,
Value nvarchar(max) NOT NULL
)
CREATE TABLE ExceptionLogs2 ( -- note the new name
LogId int IDENTITY(1,1) NOT NULL PRIMARY KEY,
Message int NOT NULL,
ExceptionType int NOT NULL,
StackTrace int NOT NULL
)
Add an index to InternedStrings to make the value lookups faster:
CREATE UNIQUE NONCLUSTERED INDEX IX_U_InternedStrings_Value ON InternedStrings ( Value ASC )
Then you would also have a VIEW:
-- Note: this assumes the original ExceptionLogs table has been dropped or renamed after migration
CREATE VIEW ExceptionLogs AS
SELECT
    LogId,
    MessageStrings.Value       AS Message,
    ExceptionTypeStrings.Value AS ExceptionType,
    StackTraceStrings.Value    AS StackTrace
FROM
    ExceptionLogs2
    INNER JOIN InternedStrings AS MessageStrings ON
        MessageStrings.StringId = ExceptionLogs2.Message
    INNER JOIN InternedStrings AS ExceptionTypeStrings ON
        ExceptionTypeStrings.StringId = ExceptionLogs2.ExceptionType
    INNER JOIN InternedStrings AS StackTraceStrings ON
        StackTraceStrings.StringId = ExceptionLogs2.StackTrace
And to handle INSERT operations from unmodified clients:
CREATE TRIGGER ExceptionLogsInsertHandler
ON ExceptionLogs INSTEAD OF INSERT AS
BEGIN
    -- Handles a single inserted row at a time (see the note below).
    DECLARE @message nvarchar(1024), @exceptionType nvarchar(512), @stackTrace nvarchar(4096);
    SELECT @message = Message, @exceptionType = ExceptionType, @stackTrace = StackTrace FROM inserted;

    DECLARE @messageId int = (SELECT StringId FROM InternedStrings WHERE Value = @message);
    IF @messageId IS NULL
    BEGIN
        INSERT INTO InternedStrings ( Value ) VALUES ( @message );
        SET @messageId = SCOPE_IDENTITY();
    END

    DECLARE @exceptionTypeId int = (SELECT StringId FROM InternedStrings WHERE Value = @exceptionType);
    IF @exceptionTypeId IS NULL
    BEGIN
        INSERT INTO InternedStrings ( Value ) VALUES ( @exceptionType );
        SET @exceptionTypeId = SCOPE_IDENTITY();
    END

    DECLARE @stackTraceId int = (SELECT StringId FROM InternedStrings WHERE Value = @stackTrace);
    IF @stackTraceId IS NULL
    BEGIN
        INSERT INTO InternedStrings ( Value ) VALUES ( @stackTrace );
        SET @stackTraceId = SCOPE_IDENTITY();
    END

    INSERT INTO ExceptionLogs2 ( Message, ExceptionType, StackTrace )
    VALUES ( @messageId, @exceptionTypeId, @stackTraceId );
END
Note this TRIGGER can be improved: it only supports single-row insertions, and it is not entirely concurrency-safe. Because existing rows are never mutated, the only race is two sessions trying to add the same new string at the same time - and thanks to the UNIQUE index, one of those inserts will fail rather than create a duplicate in the InternedStrings table. There are different ways to handle this, such as wrapping the lookup-and-insert in a TRANSACTION and using the UPDLOCK and HOLDLOCK hints on the lookup query.
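As an illustration only (a sketch, not drop-in code for the trigger above), the lookup-or-insert for a single value could be made safer like this; the hints keep the key range locked between the SELECT and the INSERT so two sessions cannot both insert the same string:
DECLARE @value nvarchar(4000) = N'some string';
DECLARE @stringId int;

BEGIN TRANSACTION;

SELECT @stringId = StringId
FROM InternedStrings WITH (UPDLOCK, HOLDLOCK)
WHERE Value = @value;

IF @stringId IS NULL
BEGIN
    INSERT INTO InternedStrings ( Value ) VALUES ( @value );
    SET @stringId = SCOPE_IDENTITY();
END

COMMIT TRANSACTION;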