I need to ensure the integrity of data in an MSSQL database. My data model contains two important fields which are foreign keys to other tables, for example TripId and ReservationId (random names, so don't read anything into them).
I need the ability to insert data if:
- ReservationId and TripId are not null
- ReservationId is null and TripId is not null
- ReservationId is not null and TripId is null
It is tricky, because I need to reject inserts when one of the Ids was already used in another combination. For example:
My db contains a record with RES111 and TRIP666. I must be able to insert another record with the same Ids for reservation and trip.
I must not insert data that pairs either Id with a different partner (for example, RES111 with TRIP777 must be rejected).
The same applies when only one Id is provided, for example ReservationId alone:
inserts containing an already-used ReservationId with any TripId must be rejected.
I could provide such filtering in application code, but it has to be done at the database level.
You could write a conditional insert. For example:
insert into tblname (value, name)
select 'foo', 'bar'
where not exists (select 1 from tblname
                  where ReservationId is null or TripId is null);
Of course you will need to tailor this to what you need exactly.
The basic concept is that the insert only happens when the WHERE clause is satisfied, so the subquery can encode a really complicated constraint or check.
You could do this either with a Trigger, or with a CHECK CONSTRAINT that calls a UDF that encapsulates the logic you want to enforce.
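For instance, the CHECK-plus-UDF route might look roughly like this (the table name dbo.Bookings and the varchar key types are placeholders, not taken from your post):

create function dbo.CountConflictingRows (@ReservationId varchar(20), @TripId varchar(20))
returns int
as
begin
    -- count rows where either key is already paired with a different partner
    return (
        select count(*)
        from dbo.Bookings b
        where (b.ReservationId = @ReservationId
               and isnull(b.TripId, '') <> isnull(@TripId, ''))
           or (b.TripId = @TripId
               and isnull(b.ReservationId, '') <> isnull(@ReservationId, ''))
    );
end;
go

alter table dbo.Bookings
add constraint chk_reservation_trip_combination
check (dbo.CountConflictingRows(ReservationId, TripId) = 0);

Bear in mind that UDF-based check constraints run once per modified row and do not guard against two concurrent inserts racing each other, so test this under your real workload.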
Using SQL Server I have a table with a computed column. That column concatenates 60 columns:
CREATE TABLE foo
(
Id INT NOT NULL,
PartNumber NVARCHAR(100),
field_1 INT NULL,
field_2 INT NULL,
-- and so forth up to
field_60 INT NULL
)
ALTER TABLE foo
ADD RecordKey AS CONCAT(field_1, '-', field_2, '-', /* and so on up to */ field_60) PERSISTED
CREATE INDEX ix_foo_RecordKey ON dbo.foo (RecordKey);
Why I used a persisted column:
Avoiding the need to index 60 columns
Being able to test whether a record already exists by checking just one column
This table will contain no fewer than 20 million records. Inserts and updates happen a lot, and some binaries do tens of thousands of inserts/updates/deletes per run; we want these to be quick and live.
Currently we have C# code that manages records in table foo. It has a function which concatenates the same fields, in the same order, as the computed column. If a record with that same concatenated key already exists, we might not insert, or we might insert but also call other functions that we normally would not.
Is this a bad design? The big danger I see is if the code for any reason doesn't match the concatenation order of the computed column (if one is edited but not the other).
Rules/Requirements
We want to show records in JQGrid. We already have C# that can do so if the records come from a single table or view
We need the ability to check two records to verify whether they both have the same values for all of the 60 columns (see the sketch just below)
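With the computed column, that check collapses to a single predicate (assuming the concatenated key is collision-free for our data), e.g. to find duplicate pairs:

SELECT a.Id, b.Id
FROM dbo.foo AS a
JOIN dbo.foo AS b
    ON a.RecordKey = b.RecordKey
   AND a.Id < b.Id;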
A better table design would be
parts table
-----------
id
partnumber
other_common_attributes_for_all_parts
attributes table
----------------
id
attribute_name
attribute_unit (if needed)
part_attributes table
---------------------
part_id (foreign key to parts)
attribute_id (foreign key to attributes)
attribute_value
It looks complicated, but with proper indexing this is super fast even if part_attributes contains billions of records!
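In T-SQL the outline above might translate to something like this (column types and sizes are assumptions; adjust them to your data):

CREATE TABLE parts (
    id INT IDENTITY(1,1) PRIMARY KEY,
    partnumber NVARCHAR(100) NOT NULL
    -- other common attributes for all parts
);

CREATE TABLE attributes (
    id INT IDENTITY(1,1) PRIMARY KEY,
    attribute_name NVARCHAR(100) NOT NULL,
    attribute_unit NVARCHAR(20) NULL
);

CREATE TABLE part_attributes (
    part_id INT NOT NULL REFERENCES parts (id),
    attribute_id INT NOT NULL REFERENCES attributes (id),
    attribute_value NVARCHAR(100) NULL,
    PRIMARY KEY (part_id, attribute_id)  -- the index that keeps lookups fast
);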
I have a task to generate a derived column (RestrictionID) from a table and add it to a source table if 6 columns on both tables match. The 6 columns include Country, Department, Account, etc. I decided to go with the SSIS Lookup and generate the derived column when there's a match. This ID also has a time and amount limit. Once all the records have IDs, I'm supposed to calculate a running total based on the ID to enforce the limits, which is the easy part.
The only problem is that this lookup table changes almost daily, and any or all of the 6 columns can have NULLs. Even the completely null rows have an ID. NULLs mean the restriction is open. E.g. if the Country column on one record of the lookup table is null, then the ID of that record can be assigned to records with any country on the source. If one row on the lookup has all null columns, then it is completely open and all records on the source qualify for that ID. The source table doesn't have NULLs.
Please assist if possible
Thanks
If NULL in the lookup means "any value; ignore this column", then use a stored proc: pass your values in and add this to your WHERE clause:
select lookup.ID
from lookup
where @Country = isnull(lookup.Country, @Country)  -- if the lookup value is null, the parameter compares to itself, creating a 1=1 scenario
  and @Department = isnull(lookup.Department, @Department)
  and ...
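Wrapped in a stored procedure, that could look roughly like this (parameter types are assumptions, and the remaining three columns follow the same pattern):

CREATE PROCEDURE dbo.GetRestrictionId
    @Country nvarchar(50),
    @Department nvarchar(50),
    @Account nvarchar(50)
AS
BEGIN
    SELECT lookup.ID
    FROM lookup
    WHERE @Country = isnull(lookup.Country, @Country)
      AND @Department = isnull(lookup.Department, @Department)
      AND @Account = isnull(lookup.Account, @Account);
END;

Note that more than one lookup row can match (the all-NULL row matches everything), so you may want to order the results by specificity and take only the top row.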
This is on Azure.
I have a supertype entity and several subtype entities; the subtypes need to obtain their foreign keys from the primary key of the supertype entity on each insert. In Oracle, I use a BEFORE INSERT trigger to accomplish this. How would one accomplish this in SQL Server / T-SQL?
DDL
CREATE TABLE super (
super_id int IDENTITY(1,1)
,subtype_discriminator char(4) CHECK (subtype_discriminator IN ('SUB1', 'SUB2'))
,CONSTRAINT super_id_pk PRIMARY KEY (super_id)
);
CREATE TABLE sub1 (
sub_id int IDENTITY(1,1)
,super_id int NOT NULL
,CONSTRAINT sub_id_pk PRIMARY KEY (sub_id)
,CONSTRAINT sub_super_id_fk FOREIGN KEY (super_id) REFERENCES super (super_id)
);
I wish for an insert into sub1 to fire a trigger that actually inserts a value into super and uses the super_id generated to put into sub1.
In Oracle, this would be accomplished by the following:
CREATE TRIGGER sub_trg
BEFORE INSERT ON sub1
FOR EACH ROW
DECLARE
v_super_id int; -- ignore the fact that I could have used super_id_seq.CURRVAL
BEGIN
INSERT INTO super (super_id, subtype_discriminator)
VALUES (super_id_seq.NEXTVAL, 'SUB1')
RETURNING super_id INTO v_super_id;
:NEW.super_id := v_super_id;
END;
Please advise on how I would simulate this in T-SQL, given that T-SQL lacks BEFORE INSERT triggers.
Sometimes a BEFORE trigger can be replaced with an AFTER one, but this doesn't appear to be the case in your situation, for you clearly need to provide a value before the insert takes place. So, for that purpose, the closest functionality would seem to be the INSTEAD OF trigger, as @marc_s has suggested in his comment.
Note, however, that, as the names of these two trigger types suggest, there's a fundamental difference between a BEFORE trigger and an INSTEAD OF one. In both cases the trigger fires before the triggering statement's action has taken place, but with an INSTEAD OF trigger the action is never supposed to take place at all: whatever really needs to happen must be done by the trigger itself. This is very unlike the BEFORE trigger, where the statement always executes unless you explicitly roll it back.
There's one other issue to address, actually. As your Oracle script reveals, the trigger you need to convert uses another feature unsupported by SQL Server: FOR EACH ROW. There are no per-row triggers in SQL Server, only per-statement ones, so you always need to keep in mind that the inserted data is a row set, not just a single row. That adds some complexity, although it probably concludes the list of things you need to account for.
So, it's really two things to solve then:
replace the BEFORE functionality;
replace the FOR EACH ROW functionality.
My attempt at solving these is below:
CREATE TRIGGER sub_trg
ON sub1
INSTEAD OF INSERT
AS
BEGIN
DECLARE @new_super TABLE (
super_id int
);
INSERT INTO super (subtype_discriminator)
OUTPUT INSERTED.super_id INTO @new_super (super_id)
SELECT 'SUB1' FROM INSERTED;
INSERT INTO sub1 (super_id)
SELECT super_id FROM @new_super;
END;
This is how the above works:
First, the same number of rows as are being inserted into sub1 is added to super. The generated super_id values are stored in temporary storage (a table variable called @new_super).
The newly generated super_ids are then inserted into sub1.
Nothing too difficult really, but the above will only work if you have no other columns in sub1 than those you've specified in your question. If there are other columns, the above trigger will need to be a bit more complex.
The problem is to assign the new super_ids to every inserted row individually. One way to implement the mapping could be like below:
CREATE TRIGGER sub_trg
ON sub1
INSTEAD OF INSERT
AS
BEGIN
DECLARE @new_super TABLE (
rownum int IDENTITY (1, 1),
super_id int
);
INSERT INTO super (subtype_discriminator)
OUTPUT INSERTED.super_id INTO @new_super (super_id)
SELECT 'SUB1' FROM INSERTED;
WITH enumerated AS (
SELECT *, ROW_NUMBER() OVER (ORDER BY (SELECT 1)) AS rownum
FROM inserted
)
INSERT INTO sub1 (super_id /* , other columns */)
SELECT n.super_id /* , i.other columns */
FROM enumerated AS i
INNER JOIN @new_super AS n
ON i.rownum = n.rownum;
END;
As you can see, an IDENTITY(1,1) column is added to @new_super, so the temporarily inserted super_id values will additionally be enumerated starting from 1. To provide the mapping between the new super_ids and the new data rows, the ROW_NUMBER function is used to enumerate the INSERTED rows as well. As a result, every row in the INSERTED set can now be linked to a single super_id and thus complemented to a full data row to be inserted into sub1.
Note that the order in which the new super_ids are inserted may not match the order in which they are assigned. I considered that a no-issue. All the new super rows generated are identical save for the IDs. So, all you need here is just to take one new super_id per new sub1 row.
If, however, the logic of inserting into super is more complex and for some reason you need to remember precisely which new super_id has been generated for which new sub row, you'll probably want to consider the mapping method discussed in this Stack Overflow question:
Using merge..output to get mapping between source.id and target.id
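For reference, that trick hinges on the fact that the OUTPUT clause of a MERGE, unlike that of a plain INSERT, may reference the source table's columns, so the new id can be captured next to the source row's key. A minimal sketch (the natural_key column of sub1 is hypothetical):

DECLARE @map TABLE (super_id int, src_key int);

MERGE INTO super AS t
USING inserted AS s
    ON 1 = 0  -- never matches, so every source row takes the INSERT branch
WHEN NOT MATCHED THEN
    INSERT (subtype_discriminator) VALUES ('SUB1')
OUTPUT INSERTED.super_id, s.natural_key INTO @map (super_id, src_key);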
While Andriy's proposal will work well for INSERTs of a small number of records, full table scans will be done on the final join, as both 'enumerated' and '@new_super' are unindexed, resulting in poor performance for large inserts.
This can be resolved by specifying a primary key on the @new_super table variable, as follows:
DECLARE @new_super TABLE (
rownum INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
super_id int
);
This will result in the SQL optimizer scanning through the 'enumerated' table but doing an indexed join on @new_super to get the new key.
I will do my best to lay this out in text. Essentially, we have an application that tracks actions performed by users. Each action has its own table, since each action has different parameters that need to be stored. This lets us store those extra attributes and run analytics on the actions across multiple or single users rather easily. The actions are not really associated with each other, other than by which user performed them.
Example:
ActionTableA: Id | UserId | AttributeA | AttributeB
ActionTableB: Id | UserId | AttributeC | AttributeD | AttributeE
ActionTableC: Id | UserId | AttributeF
Next, we need to allocate a value to each action performed by the user and keep a running total of those values.
Example:
ValueTable: Id | UserId | Value | ActionType | ActionId
What would be the best way to link the value in the value table to the actual action performed? We know the action type (A, B, C) - but from a SQL design perspective, I cannot see a good way to have an indexed relationship between the Values of the actions in the ActionsTable and the actual actions themselves. The only thing that makes sense would be to modify the ValueTable to the following:
ValueTable
Id | UserId | Value | ActionType | ActionAId(FK Nullable) | ActionBId(FK Nullable) | ActionCId(FK Nullable)
But the problem I have with this is that only one of the 3 action Id columns would have a value; the rest would be NULL. Additionally, as action types are added, so would columns in the value table. Lastly, to programmatically find the data, I would either (a) have to check the ActionType to get the appropriate column for the Id, or (b) scan the row and pick the non-null action Id.
Is there a better way/design, or is this just 'the way it is' for this particular issue?
EDIT
Attached is a diagram of the above setup:
Sorry for the clarity issues; typing SQL questions is always challenging. I think your comment gave me an idea: I could have a SystemActions table that essentially has an auto-generated value
SystemActions: Id | Type
Then each action table would have an additional FK to the SystemActions table. Lastly, the ValueTable would be associated with the SystemActions table as well. This would allow us to tie values to specific actions. I would need to join the action tables to the SystemActions table along these lines:
SELECT ...
FROM SystemActions
LEFT JOIN ActionTableA ON SystemActions.Id = ActionTableA.SystemActionId
LEFT JOIN ActionTableB ON SystemActions.Id = ActionTableB.SystemActionId
LEFT JOIN ActionTableC ON SystemActions.Id = ActionTableC.SystemActionId
(rough quick SQL syntax)
Is this what you were alluding to in the answer below? A snapshot of what that could potentially look like:
Your question is a little unclear, but it looks like you should either have a (denormalized) value column in each action table, or have an artificial key in the value table that is keyed to by each of the separate action tables.
You have essentially a supertype/subtype structure, or an exclusive arc. Attributes common to all actions bubble "up" into the supertype (the table "actions"). Columns unique to each subtype bubble "down" into the distinct subtypes.
create table actions (
action_id integer not null,
action_type char(1) not null check (action_type in ('a', 'b', 'c')),
user_id integer not null, -- references users, not shown.
primary key (action_id),
-- Required for data integrity. See below.
unique (action_id, action_type)
);
create table ActionTableA (
action_id integer primary key,
-- default and check constraint guarantee that only an 'a' row goes
-- in this table.
action_type char(1) not null default 'a' check (action_type = 'a'),
-- FK guarantees that this row matches only an 'a' row in actions.
-- To make this work, you need a UNIQUE constraint on these two columns
-- in the table "actions".
foreign key (action_id, action_type)
references actions (action_id, action_type),
attributeA char(1) not null,
attributeB char(1) not null
);
-- ActionTableB and ActionTableC are similar.
create table ValueTable (
action_id integer primary key,
action_type char(1) not null,
-- Since this applies to all actions, FK should reference the supertype,
-- which is the table "actions". You can reference either action_id alone,
-- which has a PRIMARY KEY constraint, or the pair {action_id, action_type},
-- which has a UNIQUE constraint. Using the pair makes some kinds of
-- accounting queries easier and faster.
foreign key (action_id, action_type)
references actions (action_id, action_type),
value integer not null default 0 check (value >= 0)
);
To round this out, build updatable views that join the supertype to each subtype, and have all users use the views instead of the base tables.
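For instance, the view for the 'a' subtype might look like this (a sketch; an INSTEAD OF INSERT trigger on the view would then route new rows into both base tables):

create view actions_a as
select a.action_id, a.user_id, s.attributeA, s.attributeB
from actions as a
join ActionTableA as s on s.action_id = a.action_id;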
I would just have a single table for actions, to be honest. Is there a reason (other than denormalization) for having multiple tables? Especially when it will increase the complexity of your business logic?
Are the attribute columns significant in the context of the schema? Could you compress it into an object storage column "attributes"?
Actions: actionID, type, attributes
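A sketch of that single-table idea, with the serialized attributes held in an nvarchar(max) column (whether that works for you depends on how often you need to query individual attributes):

create table Actions (
    actionID int identity(1,1) primary key,
    userID int not null,
    type char(1) not null,
    attributes nvarchar(max) null  -- e.g. '{"AttributeA": 1, "AttributeB": 2}'
);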
I think you need something similar to an audit trail. Can we have a simple design so that all the actions will be captured in a single table?
If the way you want it to work is that every time a user performs action A you insert a new row in ActionTableA and a row in ValueTable, with the two linked, why not have a value column in each action table? This would only work if you want to insert a new row each time the user performs the action, rather than update the value if the user performs the same action again. It seems overly complicated to have a separate table for values if they can be stored in a column. On the other hand, if a "value" is a set of different pieces of data (or if you want all values in one place), then you do need an extra table, but I would still have a foreign key column pointing from the action tables to the value table.
I am by no means a SQL programmer, and I am trying to accomplish something that I am pretty sure has been done a million times before.
I am trying to auto-generate a customer number in SQL every time a new customer is inserted, but the trigger (or sp?) should only work if at least the first name, last name and another value called case number are entered. If any of these fields are missing, the system generates an error. If the criteria are met, the system generates and assigns a unique id to that customer that begins with the letters GL- followed by a 5-digit number, so a customer John Doe would be GL-00001 and Jane Doe would be GL-00002.
I am sorry if I am asking too much but I am basically a select insert update guy and nothing more so thanks in advance for any help.
If I were in this situation, I would:
- Alter the table(s) so that first name, last name and case number are required (NOT NULL) columns. Handle your checks for required fields on the application side before submitting the record to the database.
- If it doesn't already exist, add an identity column to the customer table.
- Add a persisted computed column to the customer table that will format the identity column into the desired GL-00000 format.
/* Demo computed column for customer number */
create table #test (
id int identity,
-- left-pads the identity value with zeros to five digits: 1 -> 'GL-00001'
customer_number as 'GL-' + left('00000', 5-len(cast(id as varchar(5)))) + cast(id as varchar(5)) persisted,
name char(20)
)
insert into #test (name) values ('Joe')
insert into #test (name) values ('BobbyS')
select * from #test
drop table #test
This should satisfy your requirements without the need to introduce the overhead of a trigger.
So what do you want to do? Generate a customer number even when these fields aren't populated?
Have you looked at the SQL for the trigger? You can do this in SSMS (SQL Server Management Studio) by going to the table in question in the Object Explorer, expanding the table and then expanding triggers.
If you open up the trigger you'll see what it does to generate the customer number. If you are unsure how this code works, then post the code for the trigger.
If you are making changes to an existing system, I'd advise you to find out the implications of changing the way data is input.
For example, other parts of the application may depend on all of the initial values being populated, so after changing the trigger to allow incomplete data to be added, you may in turn break something else.
You probably have a unique constraint and/or NOT NULL constraints set on the table.
Remove/disable these (for example with SQL Server Management Studio in design mode) and then try again to insert the data. Keep in mind that you will probably not be able to re-enable the constraints after your insert, since you would be violating their conditions. Only disable or remove the constraints if you are absolutely sure that they are unnecessary.
Here's example syntax (you need to know the constraint names):
--disable
ALTER TABLE customer NOCHECK CONSTRAINT your_constraint_name
--enable
ALTER TABLE customer CHECK CONSTRAINT your_constraint_name
Caution: If I were you, I'd rather try to insert dummy values for the not null columns like this:
insert into customers
select afield, 1 as dummyvalue, 2 as dummyvalue
from yourdatasource
A very easy way to do this would be to create a table of this sort of structure:
CustomerID of type int that is a primary key, set as identity
CustomerIDPrefix of type varchar(3), which stores GL- as a default value.
Then add your other fields and set them to NOT NULL.
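A sketch of that structure (the name and case-number columns are assumptions based on the question):

create table customers (
    CustomerID int identity(1,1) primary key,
    CustomerIDPrefix varchar(3) not null default 'GL-',
    FirstName nvarchar(50) not null,
    LastName nvarchar(50) not null,
    CaseNumber nvarchar(20) not null
)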
If that way is not acceptable and you do need to write a trigger, check out these two articles:
http://msdn.microsoft.com/en-us/library/aa258254(SQL.80).aspx
http://www.kodyaz.com/articles/sql-trigger-example-in-sql-server-2008.aspx
Basically, it is all about getting the logic right to check if the fields are blank. Experiment with a test database on your local machine; this will help you get it right.
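As a rough starting point, the blank-field check in an INSTEAD OF INSERT trigger could look like this (table and column names are assumptions; pair it with the computed-column approach above to produce the GL- number):

create trigger trg_customers_validate
on dbo.customers
instead of insert
as
begin
    -- reject the whole insert if any required field is missing
    if exists (select 1 from inserted
               where first_name is null or last_name is null or case_number is null)
    begin
        raiserror('First name, last name and case number are required.', 16, 1);
        return;
    end;
    insert into dbo.customers (first_name, last_name, case_number)
    select first_name, last_name, case_number
    from inserted;
end;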