I am building an API with a table that looks like this (I am deliberately simplifying the schema below to focus only on the questionable parts):
CREATE TABLE IF NOT EXISTS some_table
(
    id       INTEGER GENERATED ALWAYS AS IDENTITY NOT NULL PRIMARY KEY,
    user_ids TEXT[] NOT NULL,
    field_1  TEXT NOT NULL,
    field_2  TEXT NOT NULL,
    field_3  TEXT NOT NULL,
    hash_id  TEXT GENERATED ALWAYS AS (MD5(field_1 || field_2 || field_3)) STORED UNIQUE NOT NULL
);
The API is a bit trickier than a conventional CRUD in that:
(1) Inserting into the table depends on whether md5(field_1 || field_2 || field_3) already exists. If it does, I need to append the value user_id to the array column user_ids. Otherwise, insert a new row.
(2) Deleting a row also depends on the state of user_ids. My current implementation lets the database handle deletions: a trigger fires on updates and deletes the row whenever cardinality(user_ids) = 0.
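A minimal sketch of that upsert using INSERT ... ON CONFLICT (the literal values and the user id 'u42' are placeholders). Note that hash_id itself cannot be supplied in the insert because it is GENERATED ALWAYS, but it can serve as the conflict target since it has a UNIQUE constraint; the WHERE clause avoids appending a user id that is already present:
```sql
INSERT INTO some_table (user_ids, field_1, field_2, field_3)
VALUES (ARRAY['u42'], 'a', 'b', 'c')
ON CONFLICT (hash_id)
DO UPDATE SET user_ids = some_table.user_ids || EXCLUDED.user_ids
WHERE NOT some_table.user_ids @> EXCLUDED.user_ids;
```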
CREATE OR REPLACE FUNCTION delete_row() RETURNS trigger AS
$$
BEGIN
    IF TG_OP = 'UPDATE' THEN
        DELETE FROM some_table WHERE id = NEW.id;
        RETURN NULL;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER some_table_delete_row
    AFTER UPDATE
    ON some_table
    FOR EACH ROW
    WHEN (CARDINALITY(NEW.user_ids) = 0)
EXECUTE PROCEDURE delete_row();
As you can see, there is no traditional deleting. What really happens is that items are removed from user_ids until the array is empty, at which point the database removes the row automatically.
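The decrement step could then look like the sketch below (again with placeholder values; array_remove removes every occurrence of the given element, and the trigger above takes care of the row once the array is empty):
```sql
UPDATE some_table
SET user_ids = array_remove(user_ids, 'u42')
WHERE hash_id = md5('a' || 'b' || 'c');
```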
I think that the PUT method is the best match for how I want to implement upserts.
It's trickier with DELETE/PUT for decrementing user_ids. Ideally, that looks like PATCH in that only one field is modified at a time and nothing is allowed to be deleted manually.
Using an auto-generated hash_id value is convenient. That said, I am not sure whether it's the best option when I think of how deletes should work. The endpoint for that is base_url/items/{hash_id}, but in this case I will also need to compute the hash in application code or, alternatively, always pass the full object in the request so that I can do WHERE hash_id = md5($field_1 || $field_2 || $field_3).
What do you think?
I have a stored procedure that updates a type of Star. The table, starValList, has a foreign key, galaxyID, referencing the galaxyValList table.
So I need to create a new galaxyID value if it is null or the empty GUID.
So I try this:
IF (@galaxyID IS NULL OR @galaxyID = '00000000-0000-0000-0000-000000000000')
BEGIN
    SELECT @galaxyID = NEWID()
END

UPDATE starValList SET
    [starIRR] = @starIRR,
    [starDesc] = @starDesc,
    [starType] = @starType,
    [galaxyID] = @galaxyID
WHERE [starID] = @starID;
And it works for the starValList table! But sometimes it fails with this error:
The UPDATE statement conflicted with the FOREIGN KEY constraint "FK_starValList_galaxyValList". The conflict occurred in database "Astro105", table "dbo.galaxyValList", column 'galaxyID'.
It fails because there may not yet be an entry for that particular galaxy in galaxyValList table.
But I still need the row in galaxyValList because it can be used later.
How can I fix my stored procedure so that it doesn't generate this error?
Thanks!
Use IF EXISTS to check whether the value exists in the table. If it does, do the update. If it doesn't, run whatever logic your requirements call for (for example, create the row) so the value can then be used in the update. Basic example below:
IF (@galaxyID IS NULL OR @galaxyID = '00000000-0000-0000-0000-000000000000')
BEGIN
    SELECT @galaxyID = NEWID()
END

IF NOT EXISTS (SELECT TOP 1 1 FROM galaxyValList WHERE galaxyID = @galaxyID)
BEGIN
    -- @galaxyID doesn't exist yet; create it so the value can be used in the update below
    INSERT INTO galaxyValList (galaxyID) SELECT @galaxyID
END
UPDATE starValList SET
    [starIRR] = @starIRR,
    [starDesc] = @starDesc,
    [starType] = @starType,
    [galaxyID] = @galaxyID
WHERE [starID] = @starID;
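On SQL Server 2008 and later, the existence check and the insert can also be collapsed into a single MERGE statement. A sketch against the question's galaxyValList table (same assumptions as above):
```sql
MERGE galaxyValList AS target
USING (SELECT @galaxyID AS galaxyID) AS source
ON target.galaxyID = source.galaxyID
WHEN NOT MATCHED THEN
    INSERT (galaxyID) VALUES (source.galaxyID);
```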
I have an INSERT statement in a stored procedure for a table whose primary key is a serial id. I want to be able to populate an additional field in the same table, during the same insert statement, with the serial id used for the primary key. Is this possible?
Unfortunately this is a solution already in place... I just have to implement it.
Regards
I can't imagine a reason why you would want a copy of the key in another column. But in order to do it, I think you'll need to follow your insert with a statement to get the value of the identity key, and then an update to put that value in the other column. Since you're already in a stored procedure, it's probably OK to have a few extra statements instead of doing it all in the very same one.
DECLARE @ID INT;
INSERT INTO TABLE_THINGY (Name, Address) VALUES ('Joe Blow', '123 Main St');
SET @ID = SCOPE_IDENTITY();
UPDATE TABLE_THINGY SET IdCopy = @ID WHERE ID = @ID;
If it's important that this be done every single time, you might want to create a Trigger to do it; beware, however, that many people hate triggers because of the obfuscation and difficulty in debugging, among other reasons.
http://blog.sqlauthority.com/2007/03/25/sql-server-identity-vs-scope_identity-vs-ident_current-retrieve-last-inserted-identity-of-record/
I agree, it is odd that you would replicate the key within the same table but with that said you could use a trigger, thus making it have no impact to current insert statements.
The trigger below is AFTER INSERT, so technically it fires milliseconds after the insert. If you truly wanted it to happen at the same time, you would use an INSTEAD OF INSERT trigger and replicate the logic used to create the serial id field into the new field.
CREATE TRIGGER triggerName ON dbo.tableName
AFTER INSERT
AS
BEGIN
    UPDATE t
    SET t.newField = i.SerialId
    FROM dbo.tableName t
    JOIN inserted i ON t.SerialId = i.SerialId;
END
GO
You could have a computed column that just returns the id column.
CREATE TABLE dbo.Products
(
ProductID int IDENTITY (1,1) NOT NULL
, OtherProductID AS ProductID
);
Having said that, data should only live in one place; duplicating it in the same table is simply bad design.
No, you cannot use the same INSERT statement to both generate the identity Id and copy that auto-generated Id into the same row.
A multi-statement approach using the OUTPUT clause, or a trigger, is your best bet.
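A sketch of the OUTPUT-based multi-statement approach, reusing the hypothetical TABLE_THINGY names from the earlier answer (ID is the identity column, IdCopy the duplicate):
```sql
DECLARE @NewIds TABLE (Id INT);

-- Capture the generated identity value at insert time
INSERT INTO TABLE_THINGY (Name, Address)
OUTPUT inserted.ID INTO @NewIds (Id)
VALUES ('Joe Blow', '123 Main St');

-- Copy it into the second column
UPDATE t
SET t.IdCopy = n.Id
FROM TABLE_THINGY t
JOIN @NewIds n ON t.ID = n.Id;
```
Unlike SCOPE_IDENTITY(), this also works when the insert affects multiple rows.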
I have a table which has a bit column and a corresponding datetime2 column which tracks when that flag was set:
CREATE TABLE MyTable
(
    Id int PRIMARY KEY IDENTITY,
    Processed bit NOT NULL,
    DateTimeProcessed datetime2
)
I've added a check constraint as follows:
ALTER TABLE MyTable
ADD CHECK ((Processed = 0 AND DateTimeProcessed IS NULL)
OR (Processed = 1 AND DateTimeProcessed IS NOT NULL))
I attempted to control setting of the DateTimeProcessed column using an AFTER UPDATE trigger:
CREATE TRIGGER tr_MyTable_AfterUpdate ON MyTable
AFTER UPDATE
AS
BEGIN
    IF (UPDATE(Processed))
    BEGIN
        UPDATE MyTable
        SET DateTimeProcessed = CASE
                                    WHEN tab.Processed = 1 THEN GETDATE()
                                    ELSE NULL
                                END
        FROM MyTable tab
        JOIN INSERTED ins ON ins.Id = tab.Id
    END
END
The problem with this is that the check constraint is enforced before the AFTER UPDATE trigger runs, so the constraint is violated when the Processed column is updated.
What would be the best way to achieve what I am trying to do here?
Now, according to the MSDN page for CREATE TABLE:
If a table has FOREIGN KEY or CHECK CONSTRAINTS and triggers, the constraint conditions are evaluated before the trigger is executed.
This rules out the possibility of using an "INSTEAD OF" trigger as well.
You should remove the CHECK constraint; ultimately it is not needed, since the AFTER trigger itself can enforce the same rule:
You are already making sure that the date field is being set upon the BIT field being set to 1.
Your CASE statement is already handling the BIT field being set to 0 by NULLing out the date field.
You can have another block to check on IF UPDATE(DateTimeProcessed) and either put it back to what it was in the DELETED table or throw an error.
If you update it back to the original value then you might need to test for recursive trigger calls and exit if it is a recursive call.
If you want to throw an error, just use something along the lines of:
IF (UPDATE(DateTimeProcessed))
BEGIN
    RAISERROR('Update of [DateTimeProcessed] field is not allowed.', 16, 1);
    ROLLBACK; -- cancel the UPDATE statement
    RETURN;
END;
Keep in mind that the UPDATE() function only indicates that the field was in the UPDATE statement; it is not an indication of the value changing. Hence, doing an UPDATE wherein you SET DateTimeProcessed = DateTimeProcessed would clearly not change the value but would cause UPDATE(DateTimeProcessed) to return "true".
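If you want the guard to fire only on an actual value change, you can combine UPDATE() with a comparison of the inserted and deleted pseudo-tables. A sketch, assuming the Id key from the question's table (the extra OR branches make the comparison NULL-safe):
```sql
IF UPDATE(DateTimeProcessed)
   AND EXISTS (SELECT 1
               FROM inserted i
               JOIN deleted d ON d.Id = i.Id
               WHERE i.DateTimeProcessed <> d.DateTimeProcessed
                  OR (i.DateTimeProcessed IS NULL AND d.DateTimeProcessed IS NOT NULL)
                  OR (i.DateTimeProcessed IS NOT NULL AND d.DateTimeProcessed IS NULL))
BEGIN
    RAISERROR('Update of [DateTimeProcessed] field is not allowed.', 16, 1);
    ROLLBACK;
    RETURN;
END;
```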
You can also handle this portion of the "rule" outside of a trigger by using a column-level DENY:
DENY UPDATE ON MyTable (DateTimeProcessed) TO {User and/or Role};
I am performing some MSSQL exercises and trying to create a trigger. The solution I have seems theoretically correct to me, but it is not working.
The aim is to create a trigger for a table that has only two columns. One column is the primary key, an identity, and does not allow NULLs. The other column allows NULL values, but only for a single row in the entire table. Basically, the trigger should fire on any insert/update that sets the column to NULL when the table already contains a NULL in that column.
I capture this condition in my trigger code as follows:
CREATE TRIGGER tr_TestUniqueNulls ON TestUniqueNulls -- header reconstructed; the trigger name is illustrative
AFTER INSERT, UPDATE
AS
SET ANSI_WARNINGS OFF
IF ((SELECT COUNT(NoDupName) FROM TestUniqueNulls WHERE NoDupName IS NULL) > 1)
BEGIN
    PRINT 'There already is a row that contains a NULL value, transaction aborted';
    ROLLBACK TRAN
END
However, the transaction executes itself nonetheless. I am not sure why this is happening and the trigger is not firing itself.
So, can anybody enlighten me here?
I also have used set ANSI_WARNINGS OFF at the start of the trigger.
COUNT(col) only counts non-NULL values, so COUNT(NoDupName) ... WHERE NoDupName IS NULL will always be zero. You would need to check COUNT(*) instead.
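A sketch of the corrected check, assuming the same table and trigger shape as in the question:
```sql
IF ((SELECT COUNT(*) FROM TestUniqueNulls WHERE NoDupName IS NULL) > 1)
BEGIN
    PRINT 'There already is a row that contains a NULL value, transaction aborted';
    ROLLBACK TRAN;
END
```
Note that the count must already be greater than 1 for the rollback to fire, which is why the check runs in an AFTER trigger: the offending row is visible inside the trigger and is removed by the rollback.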
I realise this is just a practice exercise but an indexed view might be a better mechanism for this.
CREATE VIEW dbo.NoMoreThanOneNull
WITH SCHEMABINDING
AS
SELECT NoDupName
FROM dbo.TestUniqueNulls
WHERE NoDupName IS NULL
GO
CREATE UNIQUE CLUSTERED INDEX ix ON dbo.NoMoreThanOneNull(NoDupName)
Yeah, that's a gotcha. The expression inside the parentheses of the COUNT has to evaluate to non-NULL, otherwise the row is not counted. So it is safer to use *, 1, or any non-nullable column in the expression. The most commonly encountered form is COUNT(*), although you will come across COUNT(1) as well; there is no difference between them in terms of performance. However, if you use an expression that can evaluate to NULL (like a nullable column), your counts and other aggregations can be completely off.
CREATE TABLE nulltest (a INT NULL)
GO
INSERT nulltest (a) VALUES (1), (NULL), (2)
GO
SELECT * FROM nulltest
SELECT COUNT(*) FROM nulltest -- 3
SELECT COUNT(1) FROM nulltest -- 3
SELECT COUNT(a) FROM nulltest -- 2: the NULL row is not counted
GO
DROP TABLE nulltest
SQL Server: I'm working with a table that uses a GUID for the key instead of an int, but for the integration I'm working on I need it to be an int. So I want to write something that will create an ID column if it doesn't exist and populate it with the next highest ID. I'm not really sure how to do this, though. Does anyone have code that does this?
I've tried this, but the update doesn't work because the values are null:
IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS
               WHERE TABLE_NAME = 'c_Product' AND COLUMN_NAME = 'ProductId')
BEGIN
    ALTER TABLE c_Product ADD ProductId INT
END

UPDATE c_Product SET ProductId = (SELECT MAX(ProductId) + 1 FROM c_Product)
Am I being dense here? Don't you just want to do:
IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS
               WHERE TABLE_NAME = 'c_Product' AND COLUMN_NAME = 'ProductId')
BEGIN
    ALTER TABLE c_Product ADD ProductId INT IDENTITY(1,1) NOT NULL
END
Which will assign identity values for all existing rows, and then provide new numbers as new rows are inserted.
OK, so you added a column called ProductId of type INT, which is NULL for every row. Remember that SQL Server uses three-valued logic (TRUE, FALSE, NULL), and NULL is not empty, not zero, not true and not false; it is the thing that never happens and doesn't exist. Get the point? So the row with the highest value in the newly created column is NULL. What is MAX(ProductId)? NULL. What is NULL + 1? NULL. So unless you set at least one value to 1 somewhere, the update will never produce a number.
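If you do want to backfill the plain INT column rather than use an IDENTITY, one way is ROW_NUMBER(). A sketch; ProductGuid is a hypothetical name for the table's existing GUID key, used here only to give a stable ordering:
```sql
WITH numbered AS (
    SELECT ProductId,
           ROW_NUMBER() OVER (ORDER BY ProductGuid) AS rn
    FROM c_Product
)
UPDATE numbered SET ProductId = rn;
```
Updating through the CTE is legal here because it references a single base table.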
Back to table design. Using a GUID for a primary key is a very bad no-no! It will severely hurt the performance of your table. Unless you are writing an app that has to join across multiple servers and platforms, STAY AWAY FROM GUIDs as the primary key. Long explanation short: inserts land in random places, so index page splits occur on just about every insert; searches are slower too; and spending 16 bytes per row just on the PK column is not good, a cost that is then repeated in every nonclustered index. If you can, change your table structure for good. Use an int.