Do you have any advice, best practices, or recommendations about sequences for identifiers?
I work on a database where all the identifiers or document numbers are 'complex' sequences. For example, the sequence for our invoices is INVCCC-2016-0000, where INV is fixed, CCC is the client reference, 2016 is the year, and 0000 is a counter from 1 to 9999. This number must be unique, and at the moment we keep it in a column.
When I create a new invoice, I need to find the last number created for this client this year, increment it by one, then save my data in the database.
I see two ways to do it:
I create a dedicated table that maintains the last used number for each client. Each time I have a new invoice, I read the number from this table, increment it by one, use it to save my invoice, then update the sequence table. 1 READ, 1 INSERT, 1 UPDATE (maybe an INSERT if the sequence is new).
var keyType = "INV" + ClientPrefix + "-" + Year;
// SingleOrDefault() returns null when no row exists yet for this key
var keyValue = Context.SequenceTable.SingleOrDefault(y => y.KeyType == keyType)?.KeyValue;
I check the last number in my invoice table, increment it, then save my invoice. 1 READ, 1 INSERT. I don't need to update another table, and this seems more logical to me. But my database administrator tells me this can create locks or other trouble.
var keyType = "INV" + ClientPrefix + "-" + Year;
// FirstOrDefault after OrderByDescending returns the highest number;
// LastOrDefault here would return the lowest
var keyValue = Invoices.Where(y => y.InvoiceId.StartsWith(keyType)).OrderByDescending(y => y.InvoiceId).FirstOrDefault();
Note that I use a version of SQL Server that predates the SEQUENCE feature (introduced in SQL Server 2012). I fear inconsistency with solution 1 and performance issues with solution 2.
I suggest a table that maintains the last used value for the custom sequence. This method will also ensure there are no gaps, if that is a business requirement.
The example below uses a transactional stored procedure to avoid a race condition in the event an invoice number is generated concurrently for the same client. You'll need to change the CATCH block to use RAISERROR instead of THROW if you are on a version earlier than SQL Server 2012.
CREATE TABLE InvoiceSequence (
    ClientReferenceCode char(3) NOT NULL
    , SequenceYear char(4) NOT NULL
    , SequenceNumber smallint NOT NULL
    , CONSTRAINT PK_InvoiceSequence PRIMARY KEY (ClientReferenceCode, SequenceYear)
);
GO
CREATE PROC dbo.GetNextInvoiceSequence
    @ClientReferenceCode char(3) = 'CCC'
    , @SequenceYear char(4) = '2016'
AS
SET XACT_ABORT, NOCOUNT ON;
DECLARE @SequenceNumber smallint;
BEGIN TRY
    BEGIN TRAN;
    UPDATE dbo.InvoiceSequence
    SET @SequenceNumber = SequenceNumber = SequenceNumber + 1
    WHERE
        ClientReferenceCode = @ClientReferenceCode
        AND SequenceYear = @SequenceYear;
    IF @@ROWCOUNT = 0
    BEGIN
        SET @SequenceNumber = 1;
        INSERT INTO dbo.InvoiceSequence(ClientReferenceCode, SequenceYear, SequenceNumber)
        VALUES (@ClientReferenceCode, @SequenceYear, @SequenceNumber);
    END;
    IF @SequenceNumber > 9999
    BEGIN
        RAISERROR('Invoice sequence limit reached for client %s year %s', 16, 1, @ClientReferenceCode, @SequenceYear);
    END;
    COMMIT;
    SELECT 'INV' + @ClientReferenceCode + '-' + @SequenceYear + '-' + RIGHT('000' + CAST(@SequenceNumber AS varchar(4)), 4) AS InvoiceNumber;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK;
    THROW;
END CATCH;
GO
--sample usage:
--note that the year could be assigned in the proc if it is based on the current date
EXEC dbo.GetNextInvoiceSequence
    @ClientReferenceCode = 'CCC'
    , @SequenceYear = '2016';
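The increment-and-read pattern in the procedure can be sketched outside T-SQL as well. Here is a minimal Python/SQLite illustration (SQLite and the helper name are my assumptions; a separate SELECT stands in for T-SQL's `SET @x = col = col + 1` compound assignment):

```python
import sqlite3

# In-memory database standing in for the real one (illustrative only)
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE InvoiceSequence (
    ClientReferenceCode TEXT NOT NULL,
    SequenceYear TEXT NOT NULL,
    SequenceNumber INTEGER NOT NULL,
    PRIMARY KEY (ClientReferenceCode, SequenceYear))""")

def next_invoice_number(client, year):
    with conn:  # one transaction per number, as in the stored procedure
        cur = conn.execute(
            """UPDATE InvoiceSequence SET SequenceNumber = SequenceNumber + 1
               WHERE ClientReferenceCode = ? AND SequenceYear = ?""",
            (client, year))
        if cur.rowcount == 0:  # first invoice for this client/year
            conn.execute("INSERT INTO InvoiceSequence VALUES (?, ?, 1)",
                         (client, year))
        (seq,) = conn.execute(
            """SELECT SequenceNumber FROM InvoiceSequence
               WHERE ClientReferenceCode = ? AND SequenceYear = ?""",
            (client, year)).fetchone()
        if seq > 9999:
            raise OverflowError("invoice sequence limit reached")
        return f"INV{client}-{year}-{seq:04d}"

print(next_invoice_number("CCC", "2016"))  # INVCCC-2016-0001
print(next_invoice_number("CCC", "2016"))  # INVCCC-2016-0002
```

As in the stored procedure, doing the increment and the read inside one transaction is what prevents two concurrent callers from getting the same number.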
Related
I have a table in my Oracle DB which has a column with random values:
I manually updated the first row to "V0001". Is there any way I can update the rest of the rows to "V0002", "V0003", and so on without manual intervention?
You could use a sequence for this. Create a sequence, convert the sequence's .NEXTVAL to a string, and use CONCAT() in an UPDATE, e.g.:
Table
create table demo
as
select dbms_random.string( 'x', 11 ) as vehicleid
from dual
connect by level <= 100 ;
select * from demo fetch first 10 rows only ;
-- output
VEHICLEID
LS23XFRNH5N
47DUDNOIRO9
POS5GQSQLMO
BBEEZJMQZI4
2Q8QE30HM2E
S7M5V40YNTD
N2X1YN0OIE3
...
Sequence
create sequence vehicleid_seq start with 1 increment by 1 ;
Update
update demo
set vehicleid = concat( 'V', to_char( vehicleid_seq.nextval, 'FM00000' ) ) ;
Result
select * from demo order by vehicleid fetch first 10 rows only ;
VEHICLEID
V00001
V00002
V00003
V00004
V00005
V00006
V00007
V00008
V00009
V00010
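The same renumbering can be sketched outside Oracle. A minimal Python/SQLite illustration (the table and values are made up; SQLite's rowid stands in for a stable ordering, and a plain counter stands in for the sequence):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE demo (vehicleid TEXT)")
conn.executemany("INSERT INTO demo (vehicleid) VALUES (?)",
                 [("LS23XFRNH5N",), ("47DUDNOIRO9",), ("POS5GQSQLMO",)])

# Assign V00001, V00002, ... in rowid order, mimicking
# CONCAT('V', TO_CHAR(vehicleid_seq.NEXTVAL, 'FM00000'))
rows = conn.execute("SELECT rowid FROM demo ORDER BY rowid").fetchall()
for n, (rid,) in enumerate(rows, start=1):
    conn.execute("UPDATE demo SET vehicleid = ? WHERE rowid = ?",
                 (f"V{n:05d}", rid))

result = [r[0] for r in conn.execute(
    "SELECT vehicleid FROM demo ORDER BY vehicleid")]
print(result)  # ['V00001', 'V00002', 'V00003']
```

Note that, as in the Oracle answer, the order in which rows receive their numbers is arbitrary unless you order by something meaningful.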
It is recommended that a table's identifier be numeric. What you could do is add an extra column that works as a second code, perhaps called secondary_code. You can generate it with a stored procedure; here is a small example:
DELIMITER $$
DROP PROCEDURE IF EXISTS sp_genwrar_code$$
CREATE PROCEDURE sp_genwrar_code(
    OUT p_secondary_code VARCHAR(4)
)
BEGIN
    DECLARE accountant INT;
    SET accountant = (SELECT COUNT(*) + 1 FROM product);
    IF (accountant < 10) THEN
        SET p_secondary_code = CONCAT('V00', accountant);
    ELSEIF (accountant < 100) THEN
        SET p_secondary_code = CONCAT('V0', accountant);
    ELSEIF (accountant < 1000) THEN
        SET p_secondary_code = CONCAT('V', accountant);
    END IF;
END$$
DELIMITER ;
With that you can generate codes as you need, with the structure 'V001'.
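The three CONCAT branches are equivalent to zero-padding the counter to three digits. A minimal Python sketch of that logic (the function name is mine):

```python
def secondary_code(accountant):
    """Equivalent of the procedure's three CONCAT branches: zero-pad the
    counter to 3 digits behind a 'V' prefix (valid for accountant < 1000)."""
    return "V" + str(accountant).zfill(3)

print(secondary_code(7))    # V007
print(secondary_code(42))   # V042
print(secondary_code(999))  # V999
```

One caveat worth noting: deriving the counter from COUNT(*) + 1 can repeat values once rows are deleted, and two concurrent calls can compute the same count, so a unique constraint on secondary_code is still advisable.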
I am moving a small database from MS Access into SQL Server. Each year, the users would create a new Access database and have clean data, but this change will put data across the years into one pot. The users have relied on the autonumber value in Access as a reference for records. That is very inaccurate if, say, 238 records are removed.
So I am trying to accommodate them with an id column they can control (somewhat). They will not see the real primary key in the SQL table, but I want to give them an ID they can edit, but still be unique.
I've been working with this trigger, but it has taken much longer than I expected.
Everything SEEMS to work fine, except I don't understand why I have the same data in my INSERTED table as in the table the trigger is on. (See the note in the code.)
ALTER TRIGGER [dbo].[trg_tblAppData]
ON [dbo].[tblAppData]
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @NewUserEnteredId int = 0;
    DECLARE @RowIdForUpdate int = 0;
    DECLARE @CurrentUserEnteredId int = 0;
    DECLARE @LoopCount int = 0;
    --*** Loop through all records to be updated because the values will be incremented.
    WHILE (1 = 1)
    BEGIN
        SET @LoopCount = @LoopCount + 1;
        IF (@LoopCount > (SELECT COUNT(*) FROM INSERTED))
            BREAK;
        SELECT TOP 1 @RowIdForUpdate = ID, @CurrentUserEnteredId = UserEnteredId FROM INSERTED WHERE ID > @RowIdForUpdate ORDER BY ID DESC;
        IF (@RowIdForUpdate IS NULL)
            BREAK;
        -- WHY IS THERE A MATCH HERE? HAS THE RECORD ALREADY BEEN INSERTED?
        IF EXISTS (SELECT UserEnteredId FROM tblAppData WHERE UserEnteredId = @CurrentUserEnteredId)
        BEGIN
            SET @NewUserEnteredId = (SELECT MAX(t1.UserEnteredId) + 1 FROM tblAppData t1);
        END
        ELSE
            SET @NewUserEnteredId = @CurrentUserEnteredId;
        UPDATE a
        SET UserEnteredId = @NewUserEnteredId
        FROM tblAppData a
        WHERE a.ID = @RowIdForUpdate
    END
END
Here is what I want to accomplish:
When new record(s) are added, they should take incremented values from the existing max.
When a user overrides a value, it should check for the existence of that value. If found, restore the existing value; otherwise allow the change.
This trigger allows for multiple rows being added at a time.
It would be great for this to be efficient for future use, but in reality they will only add about 1,000 records a year.
I wouldn't use a trigger to accomplish this.
Here is a script you can use to create a sequence (the OP didn't tag a version), create the primary key, use the sequence for your special id, and put a unique constraint on the column.
create table dbo.test (
testid int identity(1,1) not null primary key clustered
, myid int null constraint UQ_ unique
, somevalue nvarchar(255) null
);
create sequence dbo.myid
as int
start with 1
increment by 1;
alter table dbo.test
add default next value for dbo.myid for myid;
insert into dbo.test (somevalue)
select 'this' union all
select 'that' union all
select 'and' union all
select 'this';
insert into dbo.test (myid, somevalue)
select 33, 'oops';
select *
from dbo.test
insert into dbo.test (somevalue)
select 'oh the fun';
select *
from dbo.test
--| This should error
insert into dbo.test (myid, somevalue)
select 3, 'This is NO fun';
Here is the result set:
testid myid somevalue
1 1 this
2 2 that
3 3 and
4 4 this
5 33 oops
6 5 oh the fun
And at the very end a test, which will error.
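The same idea can be sketched in Python with SQLite: a hidden surrogate key plus a user-editable id kept unique by a constraint. SQLite has no SEQUENCE, so an in-process counter stands in for it (illustrative only):

```python
import sqlite3
from itertools import count

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE test (
    testid INTEGER PRIMARY KEY AUTOINCREMENT,  -- hidden surrogate key
    myid INTEGER UNIQUE,                       -- user-editable id
    somevalue TEXT)""")

myid_seq = count(1)  # stand-in for the SQL Server SEQUENCE

for v in ("this", "that", "and", "this"):
    conn.execute("INSERT INTO test (myid, somevalue) VALUES (?, ?)",
                 (next(myid_seq), v))
conn.execute("INSERT INTO test (myid, somevalue) VALUES (33, 'oops')")  # override
conn.execute("INSERT INTO test (myid, somevalue) VALUES (?, 'oh the fun')",
             (next(myid_seq),))
try:
    # duplicate myid: rejected by the UNIQUE constraint, as in the T-SQL test
    conn.execute("INSERT INTO test (myid, somevalue) VALUES (3, 'This is NO fun')")
except sqlite3.IntegrityError:
    print("duplicate myid rejected")

myids = [r[0] for r in conn.execute("SELECT myid FROM test ORDER BY testid")]
print(myids)  # [1, 2, 3, 4, 33, 5]
```

As with the real SEQUENCE, the counter keeps going after a manual override (5 follows 33), and only the unique constraint guards against collisions.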
I am working on a stored procedure in SQL Server.
I asked a question, "Insert zero between two IDs and make it length 10 in Stored Procedure". I got an answer, my solution now works fine, and this is my stored procedure:
ALTER PROCEDURE [dbo].[spInsertaItemPackingDetail] @ItemID INT
    ,@PackingTypeID INT
    ,@PackingSlNo INT
    ,@PackingBarCode VARCHAR(25)
    ,@active BIT
AS
BEGIN
    -- Create barcode if @PackingBarCode is null or empty
    IF @PackingBarCode IS NULL OR @PackingBarCode = ''
    BEGIN
        SET @PackingBarCode = (
            SELECT CASE
                -- Check: if total length < 10, then add zeros in between
                WHEN LEN(@ItemID) + LEN(@PackingTypeID) < 10
                    THEN CONCAT(@ItemID, RIGHT(CONCAT('0000000000', @PackingTypeID), 10 - (LEN(@ItemID) + LEN(@PackingTypeID))), @PackingTypeID)
                ELSE CONCAT(@ItemID, @PackingTypeID)
                END AS BarCode
            )
    END
    INSERT INTO aItemPackingDetail (
        ItemID
        ,PackingTypeID
        ,PackingSlNo
        ,PackingBarCode
        ,active
        )
    VALUES (
        @ItemID
        ,@PackingTypeID
        ,@PackingSlNo
        ,@PackingBarCode
        ,@active
        )
END
Now I want to check for duplicates of @PackingBarCode. I can check for a duplicate with the following:
WHERE NOT EXISTS (SELECT 1 FROM aItemPackingDetail WHERE categoryid = @cat AND PackingBarCode = @PackingBarCode)
But then the row gets rejected and is not saved. I want to save it: before saving, append @PackingSlNo to the end to make it unique, then check for a duplicate again, append @PackingSlNo once more if needed, and so on; when it is no longer a duplicate, insert it into the database.
The duplicate @PackingBarCode check needs to be done whether @PackingBarCode is generated in the stored procedure or entered by the user.
How do I get there? Please help.
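The append-until-unique idea can be sketched in plain Python; a set stands in for the duplicate lookup against aItemPackingDetail, and all values are made up:

```python
def make_unique_barcode(candidate, packing_sl_no, existing):
    """Keep appending PackingSlNo to the candidate barcode until it no
    longer collides with an existing one. `existing` is a set standing in
    for a SELECT against aItemPackingDetail (illustrative only)."""
    barcode = candidate
    while barcode in existing:
        barcode += str(packing_sl_no)
    return barcode

taken = {"100200", "1002005"}
print(make_unique_barcode("100200", 5, taken))  # appends '5' twice -> 10020055
```

In the stored procedure this would be a WHILE loop around the EXISTS check; note that without a unique constraint on PackingBarCode, two concurrent calls could still generate the same value between the check and the INSERT.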
I am looking for the performance 'sweet spot' when trying to update multiple columns for multiple rows...
Background.
I work in an MDM/abstract/hierarchical-classification/functional SQL Server environment. This question pertains to a process that runs after data-driven calculations, where I need to save the results. I pass JSON to a SQL function that automatically creates SQL for inserts/updates (and skips updates if the values match).
tblDestination looks like this:
create table tblDestination
(
    sysPrimaryKey bigint identity(1,1)
    , sysHeaderId bigint -- udt_ForeignKey
    , sysLevel1Id bigint -- udt_ForeignKey (classification level 1)
    , strText nvarchar(100) -- from here down: the values that need to be updated
    , dtmDate datetime2(7)
    , numNumeric float
    , flgFlag bit
    , intInteger bigint
    , sysRefKey bigint -- ForeignKey
    , primary key nonclustered (sysPrimaryKey)
)
/* note that the clustered index on this table exists, contains more than the columns listed above, and is physically modeled correctly. you may use any clustered/IX/UI indexes that you need if you are testing */
@JSON looks like this (the number of arbitrary names ranges between 2 and 100):
declare @JSON nvarchar(max) = '{"ARBITRARY NAME 1":"3/1/2017","ARBITRARY NAME 2": "Value", "ARBITRARY NAME 3": 45.3}'
The function cursors through the incoming @JSON and builds insert or update statements (pseudocode):
Cursor local static forward_only read_only
for
select [key] as JSON_Key, [value] from openjson(@json)
while @@FETCH_STATUS = 0
-- get the id for ARBITRARY NAME plus a classification id
select @level1Id = level1Id, @level2id = level2Id
from tblLevel1 where ProgrammingName = @JSON_Key
-- get a ProgrammingName field for the previously retrieved level2Id
select @ProgrammingName = /* INTEGER/FLAG/NUMERIC/TEXT/DATE/REFKEY */
from tblLevel2 where level2id = @level2id
-- clear variables
set @numeric = null, @integer = null, @text = null, etc.
-- check to see if an insert or update is required
select @DestinationID from tblDestination where HeaderId = @header and Level1 = @Level1
if @DestinationId is null
begin
    if @ProgrammingName = 'Numeric' begin set @Numeric = @JSON_Value end
    else if @ProgrammingName = 'Integer' begin set @Integer = @JSON_value end
    etc.
    -- dynamically build the updates here..
    /*
    'update tblDestination
    Set numeric = ' + @numeric
    + ', flag = ' + @flag
    + ', date = ' + @date
    .. etc
    + ' where HeaderId = ' + @header + ' and level1Id = ' + @Level1Id
    */
end
IE:
Update tblDestination
Set numNumeric = NULL
    , flgFlag = NULL
    , dtmDate = '3/1/2017'
Where sysPrimaryKey = 33676224
Finally... to the point of this post: has anyone here had experience with multiple row updates on multiple columns?
Something like:
Update tblDestination
Set TableNumeric
    = CASE when Level3Id = 33676224 then null
           when Level3Id = 33676225 then 3.2
           when Level3Id = 33676226 then null
      end
  , TableDate = case when Level3Id = 33676224 then '3/1/2017'
                     when Level3Id = 33676225 then null
                     when Level3Id = 33676226 then null
                end
where headerId = 23897
  and Level3Id in (33676224, 33676225, 33676226)
I know that the speed varies for Insert statements (Number of Columns inserted vs Number of records), and have that part dialed in.
I am curious to know if anyone has found that 'sweet spot' for updates.
The sweet spot meaning:
How many CASES before I should make a new update block?
Is 'Update tbl set (Column = Case (When Id = ## then ColumnValue)^n END)^n' the proper approach to reduce the number of actual Updates being fired?
Is wrapping Updates in a transaction a faster option (and how many per COMMIT)?
Legibility of the update statement is irrelevant. Nobody will actually see the code.
I have isolated the single update statement chain to approximately 70%+ of the query cost in question (compared with all inserts, at 20/80, 50/50, and 80/20 update/insert mixes).
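One way to think about the batched-CASE approach is as a small SQL generator. A hedged Python sketch (names like headerId and level3Id are taken from the examples above; the helper itself is my assumption, and values are bound with '?' placeholders rather than inlined):

```python
def build_case_update(table, key_col, header_id, changes):
    """Build one batched UPDATE with a CASE branch per changed row.
    `changes` maps a key value to a {column: new_value} dict."""
    cols = sorted({c for row in changes.values() for c in row})
    sets, params = [], []
    for col in cols:
        branches = []
        for key, row in changes.items():
            if col in row:  # only rows that actually change this column
                branches.append(f"WHEN {key_col} = ? THEN ?")
                params.extend([key, row[col]])
        # ELSE keeps the column untouched for rows not changing it
        sets.append(f"{col} = CASE {' '.join(branches)} ELSE {col} END")
    keys = list(changes)
    sql = (f"UPDATE {table} SET " + ", ".join(sets)
           + f" WHERE headerId = ? AND {key_col} IN ({', '.join('?' * len(keys))})")
    params.append(header_id)
    params.extend(keys)
    return sql, params

sql, params = build_case_update(
    "tblDestination", "level3Id", 23897,
    {33676224: {"dtmDate": "3/1/2017", "numNumeric": None},
     33676225: {"numNumeric": 3.2}})
print(sql)
```

This produces one statement per batch regardless of row count; where the sweet spot lies (branches per statement, statements per transaction) still has to be measured on the target server.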
I've done this before somewhere, I'm sure of it!
I have a SQL Server 2000 table that I need to log changes to fields on updates and inserts into a second Logging table. A simplified version of the structure I'm using is below:
MainTable
ID varchar(10) PRIMARY KEY
DESCRIPTION varchar(50)
LogTable
OLDID varchar(10)
NEWID varchar(10)
For any other field something like this would work great:
Select i.DESCRIPTION As New, d.DESCRIPTION As Old
From Inserted i
LEFT JOIN Deleted d On i.ID=d.ID
...But obviously the join would fail if ID was changed.
I cannot modify the tables in any way; the only power I have in this database is to create a trigger.
Alternatively, is there someone who can teach me time travelling, so I can go back into the past and ask myself back then how I did this? Cheers :)
Edit:
I think I need to clarify a few things here. This is not actually my database, it is a pre-existing system that I have almost no control of, other than writing this trigger.
My question is how can I retrieve the old primary key if said primary key was changed. I don't need to be told that I shouldn't change the primary key or about chasing up foreign keys etc. That's not my problem :)
DECLARE @OldKey int, @NewKey int;
SELECT @OldKey = [ID] FROM DELETED;
SELECT @NewKey = [ID] FROM INSERTED;
This only works if you have a single row; otherwise you have no "anchor" to link old and new rows. So check in your trigger that INSERTED contains no more than one row.
Is it possible to assume that the INSERTED and DELETED tables presented to you in a trigger are guaranteed to be in the same order?
I don't think it's possible. Imagine if you have 4 rows in the table:
1 Val1
2 Val2
3 Val3
4 Val4
Now issue the following update:
UPDATE MainTable SET
ID = CASE ID WHEN 1 THEN 2 WHEN 2 THEN 1 ELSE ID END,
Description = CASE ID WHEN 3 THEN 'Val4' WHEN 4 THEN 'Val3' ELSE Description END
Now, how are you going to distinguish between what happened to rows 1 & 2 and what happened to rows 3 & 4? And more importantly, can you describe what's different between them? All of the mechanisms that tell you which columns have been updated won't help you.
If it's possible in this case that there's an additional key on the table (e.g. Description is UNIQUE), and your update rules allow it, you could write the trigger to prevent simultaneous updates to both keys, and then you can use whichever key hasn't been updated to correlate the two tables.
If you must handle multiple-row inserts/updates, and there's no alternate key that's guaranteed not to change, the only way I can see to do this is to use an INSTEAD OF trigger. For example, in the trigger you could break the original insert/update command into one command per row, grabbing each old id before you insert/update.
Within triggers in SQL Server you have access to two tables: deleted and inserted. Both of these have already been mentioned. Here's how they function depending on what action the trigger is firing on:
INSERT OPERATION
deleted - not used
inserted - contains the new rows being added to the table
DELETE OPERATION
deleted - contains the rows being removed from the table
inserted - not used
UPDATE OPERATION
deleted - contains the rows as they would exist before the UPDATE operation
inserted - contains the rows as they would exist after the UPDATE operation
These function in every way like tables. Therefore, it is entirely possible to use a set-based operation such as the following (Operation exists only on the audit table, as does DateChanged):
INSERT INTO MyAuditTable
(ID, FirstColumn, SecondColumn, ThirdColumn, Operation, DateChanged)
SELECT ID, FirstColumn, SecondColumn, ThirdColumn, 'Update-Before', GETDATE()
FROM deleted
UNION ALL
SELECT ID, FirstColumn, SecondColumn, ThirdColumn, 'Update-After', GETDATE()
FROM inserted
----new----
Add an identity column to the table that the application cannot change; you can then use that new column to join the inserted and deleted tables within the trigger:
ALTER TABLE YourTableName ADD
PrivateID int NOT NULL IDENTITY (1, 1)
GO
----old----
Don't ever update/change key values. How can you do this and fix all of your foreign keys?
I wouldn't recommend ever using a trigger that can't handle a set of rows.
If you must change the key, insert a new row with the proper new key and values (use SCOPE_IDENTITY() if that is what you are doing), then delete the old row. Log for the old row that it was changed to the new row's key, which you should now have. I hope there is no foreign key on the changed key in your log...
You can create a new identity column on table MainTable (named for example correlationid) and correlate inserted and deleted tables using this column.
This new column should be transparent for existing code.
INSERT INTO LOG(OLDID, NEWID)
SELECT deleted.id AS OLDID, inserted.id AS NEWID
FROM inserted
INNER JOIN deleted
ON inserted.correlationid = deleted.correlationid
Note that you could insert duplicate records into the log table.
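The correlation join can be sketched in plain Python; dicts stand in for the deleted and inserted pseudo-tables, with made-up values:

```python
# Pair old and new ids by the unchanging correlationid, mirroring
#   ON inserted.correlationid = deleted.correlationid
deleted = {101: 1, 102: 2}    # correlationid -> old ID
inserted = {101: 2, 102: 1}   # correlationid -> new ID

log = [(deleted[c], inserted[c]) for c in deleted]  # (OLDID, NEWID) pairs
print(sorted(log))  # [(1, 2), (2, 1)]
```

Because the correlation id never changes, even a swap of two primary keys pairs up correctly, which is exactly what the join on the keys themselves cannot do.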
Of course nobody should be changing the primary key on the table -- but that is exactly what triggers are supposed to be for (in part), is to keep people from doing things they shouldn't do. It's a trivial task in Oracle or MySQL to write a trigger that intercepts changes to primary keys and stops them, but not at all easy in SQL Server.
What you of course would love to be able to do would be to simply do something like this:
if exists
(
select *
from inserted changed
join deleted old
on changed.rowID = old.rowID
where changed.id != old.id
)
... [roll it all back]
Which is why people go out googling for the SQL Server equivalent of ROWID. Well, SQL Server doesn't have it; so you have to come up with another approach.
A fast, but sadly not bombproof, version is to write an instead-of update trigger that looks to see whether any of the inserted rows have a primary key not found among the deleted rows, or vice versa. This would catch MOST, but not all, of the errors:
if exists
(
select *
from inserted lost
left join deleted match
on match.id = lost.id
where match.id is null
union
select *
from deleted lost
left join inserted match
on match.id = lost.id
where match.id is null
)
-- roll it all back
But this still doesn't catch an update like...
update myTable
set id = case
when id = 1 then 2
when id = 2 then 1
else id
end
Now, I've tried making the assumption that the inserted and deleted tables are ordered in such a way that cursoring through the inserted and deleted tables simultaneously will give you properly matching rows. And this APPEARS to work. In effect you turn the trigger into the equivalent of the for-each-row triggers available in Oracle and mandatory in MySQL...but I would imagine the performance will be bad on massive updates since this is not native behavior to SQL Server. Also it depends upon an assumption that I can't actually find documented anywhere and so am reluctant to depend on. But code structured that way APPEARS to work properly on my SQL Server 2008 R2 installation. The script at the end of this post highlights both the behavior of the fast-but-not-bombproof solution and the behavior of the second, pseudo-Oracle solution.
If anybody could point me to someplace where my assumption is documented and guaranteed by Microsoft I'd be a very grateful guy...
begin try
drop table kpTest;
end try
begin catch
end catch
go
create table kpTest( id int primary key, name nvarchar(10) )
go
begin try
drop trigger kpTest_ioU;
end try
begin catch
end catch
go
create trigger kpTest_ioU on kpTest
instead of update
as
begin
if exists
(
select *
from inserted lost
left join deleted match
on match.id = lost.id
where match.id is null
union
select *
from deleted new
left join inserted match
on match.id = new.id
where match.id is null
)
raisError( 'Changed primary key', 16, 1 )
else
update kpTest
set name = i.name
from kpTest
join inserted i
on i.id = kpTest.id
;
end
go
insert into kpTest( id, name ) values( 0, 'zero' );
insert into kpTest( id, name ) values( 1, 'one' );
insert into kpTest( id, name ) values( 2, 'two' );
insert into kpTest( id, name ) values( 3, 'three' );
select * from kpTest;
/*
0 zero
1 one
2 two
3 three
*/
-- This throws an error, appropriately
update kpTest set id = 5, name = 'FIVE' where id = 1
go
select * from kpTest;
/*
0 zero
1 one
2 two
3 three
*/
-- This allows the change, inappropriately
update kpTest
set id = case
when id = 1 then 2
when id = 2 then 1
else id
end
, name = UPPER( name )
go
select * from kpTest
/*
0 ZERO
1 TWO -- WRONG WRONG WRONG
2 ONE -- WRONG WRONG WRONG
3 THREE
*/
-- Put it back
update kpTest
set id = case
when id = 1 then 2
when id = 2 then 1
else id
end
, name = LOWER( name )
go
select * from kpTest;
/*
0 zero
1 one
2 two
3 three
*/
drop trigger kpTest_ioU
go
create trigger kpTest_ioU on kpTest
instead of update
as
begin
declare newIDs cursor for select id, name from inserted;
declare oldIDs cursor for select id from deleted;
declare #thisOldID int;
declare #thisNewID int;
declare #thisNewName nvarchar(10);
declare #errorFound int;
set #errorFound = 0;
open newIDs;
open oldIDs;
fetch newIDs into #thisNewID, #thisNewName;
fetch oldIDs into #thisOldID;
while ##FETCH_STATUS = 0 and #errorFound = 0
begin
if #thisNewID != #thisOldID
begin
set #errorFound = 1;
close newIDs;
deallocate newIDs;
close oldIDs;
deallocate oldIDs;
raisError( 'Primary key changed', 16, 1 );
end
else
begin
update kpTest
set name = #thisNewName
where id = #thisNewID
;
fetch newIDs into #thisNewID, #thisNewName;
fetch oldIDs into #thisOldID;
end
end;
if #errorFound = 0
begin
close newIDs;
deallocate newIDs;
close oldIDs;
deallocate oldIDs;
end
end
go
-- Succeeds, appropriately
update kpTest
set name = UPPER( name )
go
select * from kpTest;
/*
0 ZERO
1 ONE
2 TWO
3 THREE
*/
-- Succeeds, appropriately
update kpTest
set name = LOWER( name )
go
select * from kpTest;
/*
0 zero
1 one
2 two
3 three
*/
-- Fails, appropriately
update kpTest
set id = case
when id = 1 then 2
when id = 2 then 1
else id
end
go
select * from kpTest;
/*
0 zero
1 one
2 two
3 three
*/
-- Fails, appropriately
update kpTest
set id = id + 1
go
select * from kpTest;
/*
0 zero
1 one
2 two
3 three
*/
-- Succeeds, appropriately
update kpTest
set id = id, name = UPPER( name )
go
select * from kpTest;
/*
0 ZERO
1 ONE
2 TWO
3 THREE
*/
drop table kpTest
go