SQL Server sequence constantly restarting

I'm looking at a pattern where a SQL Server sequence is being used as a sub-index of records, and getting reset with each new set of records.
Something like:
create sequence dbo.MySequence start with 1 increment by 1;
create table dbo.Addresses (
PersonID int
,AddressSequence int
,StreetAddress varchar(100)
);
declare @PersonID int;
set @PersonID = 1;
alter sequence dbo.MySequence restart with 1;
insert dbo.Addresses (PersonID, AddressSequence, StreetAddress)
values (@PersonID, next value for dbo.MySequence, '123');
insert dbo.Addresses (PersonID, AddressSequence, StreetAddress)
values (@PersonID, next value for dbo.MySequence, '456');
set @PersonID = 2;
alter sequence dbo.MySequence restart with 1;
insert dbo.Addresses (PersonID, AddressSequence, StreetAddress)
values (@PersonID, next value for dbo.MySequence, '789');
PersonID AddressSequence StreetAddress
----------- --------------- ---------------
1 1 123
1 2 456
2 1 789
With each new person, the sequence is altered back to 1. In some scenarios, this would obviously be no good. In this particular scenario, records are inserted once and never edited, only by this one application, with no parallelism or threading, and always with all of a person's addresses inserted before moving on to the next person.
Seems like it will work just fine, given the existing scenario. Of course this means we can never change those requirements, like having multiple processes do inserts at the same time.
But assuming all that is ok, is there something here that would hurt us? I'd expect altering a database object takes a little bit more work than just incrementing or resetting an in-memory variable, but are there any other gotchas I should look into or pass on to the DBA?

There's no simple, efficient, scalable way to do this. You should just allow the AddressSequence to increment across PersonIDs. It's functionally equivalent for most purposes, e.g.:
PersonID AddressID StreetAddress
----------- --------------- ---------------
1 1 123
1 2 456
2 3 789
2 4 789
With PK (PersonID,AddressID).
And for display purposes you can always produce the AddressSequence with an expression like row_number() over (partition by PersonID order by AddressID)
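For instance, a minimal sketch of that display query (assuming the revised table stores the globally incrementing AddressID, as above):
select PersonID
,row_number() over (partition by PersonID order by AddressID) as AddressSequence
,StreetAddress
from dbo.Addresses;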

Related

IN statement inconsistency with PRIMARY KEY

So I have a simple table called temp that can be created by:
CREATE TABLE temp (value int, id int not null primary key);
INSERT INTO temp
VALUES(0,1),
(0,2),
(0,3),
(0,4),
(1,5),
(1,6),
(1,7),
(1,8);
I have a second table temp2 that can be created by:
CREATE TABLE temp2 (value int, id int);
INSERT INTO temp2
VALUES(0,1),
(0,2),
(0,3),
(0,4),
(1,5),
(1,6),
(1,7),
(1,8);
The only difference between temp and temp2 is that the id field is the primary key in temp, and temp2 has no primary key. I'm not sure how, but I am getting differing results with the following query:
select * from temp
where id in (
select id
from (
select id, ROW_NUMBER() over (partition by value order by value) rownum
from temp
) s1
where rownum = 1
)
This is the result for temp:
value id
----------- -----------
0 1
0 2
0 3
0 4
1 5
1 6
1 7
1 8
and this is what I get when temp is replaced by temp2 (THE CORRECT RESULT):
value id
----------- -----------
0 1
1 5
When running the inner-most query (s1), the expected results are retrieved:
id rownum
----------- --------------------
1 1
2 2
3 3
4 4
5 1
6 2
7 3
8 4
When just running the in statement query on both, I also get the expected result:
id
-----------
1
5
I cannot figure out what the reason for this could possibly be. Is this a bug?
Notes: temp2 was created with a simple select * into temp2 from temp. I am running SQL Server 2008. My apologies if this is a known glitch. It is difficult to search for this since it requires an in statement. An "equivalent" query that uses a join does produce the correct results on both tables.
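For reference, a join version along the lines described (my reconstruction, not the poster's original query):
select t.value, t.id
from temp t
inner join (
select id, ROW_NUMBER() over (partition by value order by value) rownum
from temp
) s1 on s1.id = t.id
where s1.rownum = 1;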
Edit: dbfiddle showing the differences:
Unexpected Results
Expected Results
I can't specifically answer your question, but changing the ORDER BY fixes the problem. partition by value order by value doesn't really make sense, and it looks like that is what is "fooling" SQL Server; as you're partitioning the rows by the same value you're ordering by, every row in a partition ties for first, so any of them could be "row number 1". Don't forget, a table has no inherent row order, even when it has a primary key (clustered or not).
If you change your ORDER BY to id instead the problem goes away.
SELECT *
FROM temp2 t2
WHERE t2.id IN (SELECT s1.id
FROM (SELECT sq.id,
ROW_NUMBER() OVER (PARTITION BY sq.value ORDER BY sq.id) AS rownum
FROM temp2 sq) s1
WHERE s1.rownum = 1);
In fact, changing the ORDER BY clause to anything else fixes the problem:
SELECT *
FROM temp2 t2
WHERE t2.id IN (SELECT s1.id
FROM (SELECT sq.id,
ROW_NUMBER() OVER (PARTITION BY sq.value ORDER BY (SELECT NULL)) AS rownum
FROM temp2 sq) s1
WHERE s1.rownum = 1);
So the problem is that you are using the same expression (column) for both your PARTITION BY and ORDER BY clauses, meaning that any of those rows could be row number 1; thus all are returned. It doesn't make sense for both to be the same, so they should be different.
Still, this problem does persist in SQL Server 2017 (and I suspect 2019) so you might want to raise a support ticket with them anyway (but as you're using 2008 don't expect it to get fixed, as your support is about to end).
As comments can be deleted without notice I wanted to add @scsimon's comment and my response:
scsimon: Interesting. Changing rownum = 2 gives expected results without changing order by. I think it's a bug.
Larnu: I agree with @scsimon. I suspect that changing the WHERE to s1.rownum = 2 effectively forces the data engine to actually determine the values of rownum, rather than assume every row is "equal"; as if that were the case none would be returned.
Even so, changing the WHERE to s1.rownum = 2 is still resigning yourself to "return a random row" if the PARTITION BY and ORDER BY clauses are the same.

Create table 6 x 6 with automatic spill from upline

I am writing code for an MMN (multi-level marketing) company. The idea is a system which has a 6 x 6 table with automatic spillover.
For example.
I register 6 new persons.
John
Peter
Mary
Lary
Anderson
Paul
When I register my 7th person, the system automatically follows the order below me and puts them into John's network. When I register the 8th, the system automatically follows the order below me and puts them into Peter's network.
Table 6 x 6
First level: 6
Second level: 36
I am trying to create a test with a stored procedure in SQL Server.
I am stuck on how to automatically place a newly registered person below me when I reach the limit of the table.
Creating a Matrix would be denormalizing your data. It is usually best practice NOT to do this, as it makes data manipulation a lot more difficult, among other reasons. How would you prevent the rows from being more than 6? You'd have to add a weird constraint like so:
create table #matrix ( ID int identity(1,1),
Name1 varchar(64),
Name2 varchar(64),
Name3 varchar(64),
Name4 varchar(64),
Name5 varchar(64),
Name6 varchar(64),
CONSTRAINT ID_PK PRIMARY KEY (ID),
CONSTRAINT Configuration_SixRows CHECK (ID <= 6))
I'm betting you aren't doing this, and thus, you can't "ensure" no more than 6 rows is inserted into your table. If you are doing this, then you'd have to insert data one row at a time which goes against everything SQL Server is about. This would be to check if the first column is full yet, then move to the second, then the third, etc... it just doesn't make sense.
Instead, I would create a ParentID column to relate your names to their respective network as you stated. This can be done with a computed column like so:
declare @table table (ID int identity(1,1),
Names varchar(64),
ParentID as case
when ID <= 6 then null
else replace(ID % 6,0,6)
end)
insert into @table
values
('John')
,('Peter')
,('Mary')
,('Lary')
,('Anderson')
,('Paul')
,('Seven')
,('Eight')
,('Nine')
,('Ten')
,('Eleven')
,('Twelve')
,('Thirteen')
,('Fourteen')
select * from @table
Then, if you wanted to display it in a matrix you would use PIVOT(), specifically Dynamic Pivot. There are a lot of examples on Stack Overflow on how to do this. This also accounts for if you want the matrix to be larger than 6 X N... perhaps the network grows so each member has 50 individuals... thus 6 (rows) X 51 (columns)
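As a rough illustration, a static PIVOT sketch over the @table variable above (a dynamic pivot would build the column list from the data; the row/column math is my assumption about the intended layout, and it must run in the same batch as the declaration):
select [Row], [1], [2], [3], [4], [5], [6]
from (
select Names
-- lay IDs out 6 per row: 1-6 on row 0, 7-12 on row 1, ...
,((ID - 1) % 6) + 1 as Col
,(ID - 1) / 6 as [Row]
from @table
) s
pivot (max(Names) for Col in ([1], [2], [3], [4], [5], [6])) p
order by [Row];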
If it's only going to be 6 columns, or not many more, then you can also use simple join logic...
select
t.ID
,t.Names
,t2.Names
,t3.Names
from @table t
left join
@table t2 on t2.ParentID = t.ID and t2.ID = t.ID + 6
left join
@table t3 on t3.ParentID = t.ID and t3.ID = t.ID + 12
--continue on
where
t.ParentID is null
You can see this in action in this online demo.
Here is some information on normalization

Get a count based on the row order

I have a table with this structure
Create Table Example (
[order] INT,
[typeID] INT
)
With this data:
order|type
1 7
2 11
3 11
4 18
5 5
6 19
7 5
8 5
9 3
10 11
11 11
12 3
I need to get the count of each type based on the order, something like:
type|count
7 1
11 2
18 1
5 1
19 1
5 2
3 1
11 2
3 1
Context
Let's say that this table is about houses, so I have a list of houses in an order. So I have
Order 1: A red house
2: A white house
3: A white house
4: A red house
5: A blue house
6: A blue house
7: A white house
So I need to show that info condensed. I need to say:
I have 1 red house
Then I have 2 white houses
Then I have 1 red house
Then I have 2 blue houses
Then I have 1 white house
So the count is based on the order. The DENSE_RANK function would help me if I were able to reset the RANK when the partition changes.
So I have an answer, but I have to warn you it's probably going to get some raised eyebrows because of how it's done. It uses something known as a "Quirky Update". If you plan to implement this, please for the love of god read through the linked article and understand that this is an "undocumented hack" which needs to be implemented precisely to avoid unintended consequences.
If you have a tiny bit of data, I'd just do it row by agonizing row for simplicity and clarity. However if you have a lot of data and still need high performance, this might do.
Requirements
Table must have a clustered index in the order you want to progress in
Table must have no other indexes (these might cause SQL to read the data from another index which is not in the correct order, causing the quantum superposition of row order to come collapsing down).
Table must be completely locked down during the operation (tablockx)
Update must progress in serial fashion (maxdop 1)
What it does
You know how people tell you there is no implicit order to the data in a table? That's still true 99% of the time. Except we know that ultimately it HAS to be stored on disk in SOME order. And it's that order that we're exploiting here. By forcing a clustered index update and the fact that you can assign variables in the same update statement that columns are updated, you can effectively scroll through the data REALLY fast.
Let's set up the data:
if object_id('tempdb.dbo.#t') is not null drop table #t
create table #t
(
_order int primary key clustered,
_type int,
_grp int
)
insert into #t (_order, _type)
select 1,7
union all select 2,11
union all select 3,11
union all select 4,18
union all select 5,5
union all select 6,19
union all select 7,5
union all select 8,5
union all select 9,3
union all select 10,11
union all select 11,11
union all select 12,3
Here's the update statement. I'll walk through each of the components below
declare @Order int, @Type int, @Grp int
update #t with (tablockx)
set @Order = _order,
@Grp = case when _order = 1 then 1
when _type != @Type then @Grp + 1
else @Grp
end,
@Type = _type,
_grp = @Grp
option (maxdop 1)
1. Update is performed with (tablockx). If you're working with a temp table, you know there's no contention on the table, but still it's a good habit to get into (if using this approach can even be considered a good habit to get into at all).
2. Set @Order = _order. This looks like a pointless statement, and it kind of is. However since _order is the primary key of the table, assigning that to a variable is what forces SQL to perform a clustered index update, which is crucial to this working.
3. Populate an integer to represent the sequential groups you want. This is where the magic happens, and you have to think about it in terms of it scrolling through the table. When _order is 1 (the first row), just set the @Grp variable to 1. If, on any given row, the column value of _type differs from the variable value of @Type, we increment the grouping variable. If the values are the same, we just stick with the @Grp we have from the previous row.
4. Update the @Type variable with the column _type's value. Note this HAS to come after the assignment of @Grp for it to have the correct value.
5. Finally, set _grp = @Grp. This is where the actual column value is updated with the results of step 3.
6. All this must be done with option (maxdop 1). This means the Maximum Degree of Parallelism is set to 1. In other words, SQL cannot do any task parallelization which might lead to the ordering being off.
Now it's just a matter of grouping by the _grp field. You'll have a unique _grp value for each consecutive batch of _type.
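For instance, a minimal sketch of that final grouping (my addition; run it after the update above):
select max(_type) as _type, count(*) as _count
from #t
group by _grp
order by max(_order)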
Conclusion
If this seems bananas and hacky, it is. As with all things, you need to take this with a grain of salt, and I'd recommend really playing around with the concept to fully understand it if you plan to implement it because I guarantee nobody else is going to know how to troubleshoot it if you get a call in the middle of the night that it's breaking.
This solution uses a recursive CTE and relies on a gapless order value. If you don't have this, you can create it with ROW_NUMBER() on the fly:
DECLARE @mockup TABLE([order] INT,[type] INT);
INSERT INTO @mockup VALUES
(1,7)
,(2,11)
,(3,11)
,(4,18)
,(5,5)
,(6,19)
,(7,5)
,(8,5)
,(9,3)
,(10,11)
,(11,11)
,(12,3);
WITH recCTE AS
(
SELECT m.[order]
,m.[type]
,1 AS IncCounter
,1 AS [Rank]
FROM @mockup AS m
WHERE m.[order]=1
UNION ALL
SELECT m.[order]
,m.[type]
,CASE WHEN m.[type]=r.[type] THEN r.IncCounter+1 ELSE 1 END
,CASE WHEN m.[type]<>r.[type] THEN r.[Rank]+1 ELSE r.[Rank] END
FROM @mockup AS m
INNER JOIN recCTE AS r ON m.[order]=r.[order]+1
)
SELECT recCTE.[type]
,MAX(recCTE.[IncCounter])
,recCTE.[Rank]
FROM recCTE
GROUP BY recCTE.[type], recCTE.[Rank];
The recursion traverses down the line, increasing the counter if the type is unchanged and increasing the rank if the type is different.
The rest is a simple GROUP BY. (Note that recursive CTEs are capped at 100 recursion levels by default, so for more than about 100 rows you'd need to add OPTION (MAXRECURSION 0) to the final query.)
I thought I'd post another approach I worked out, I think more along the lines of the dense_rank() work others were thinking about. The only thing this assumes is that _order is a sequential integer (i.e. no gaps).
Same data setup as before:
if object_id('tempdb.dbo.#t') is not null drop table #t
create table #t
(
_order int primary key clustered,
_type int,
_grp int
)
insert into #t (_order, _type)
select 1,7
union all select 2,11
union all select 3,11
union all select 4,18
union all select 5,5
union all select 6,19
union all select 7,5
union all select 8,5
union all select 9,3
union all select 10,11
union all select 11,11
union all select 12,3
What this approach does is row_number each _type so that regardless of where a _type exists, and how many times, the types will have a unique row_number in the order of the _order field. By subtracting that type-specific row number from the global row number (i.e. _order), you'll end up with groups. Here's the code for this one, then I'll walk through this as well.
;with tr as
(
select
-- Create an incrementing integer row_number over each _type (regardless of its position in the sequence)
_type_rid = row_number() over (partition by _type order by _order),
-- This shows that on rows 6-8 (the transition between type 19 and 5), naively they're all assigned the same group
naive_type_rid = _order - row_number() over (partition by _type order by _order),
-- By adding a value to the type_rid which is a function of _type, those two values are distinct.
-- Originally I just added the value, but I think squaring it ensures that there can't ever be another gap of 1
true_type_rid = (_order - row_number() over (partition by _type order by _order)) + power(_type, 2),
_type,
_order
from #t
-- order by _order -- uncomment this if you want to run the inner select separately
)
select
_grp = dense_rank() over (order by max(_order)),
_type = max(_type)
from tr
group by true_type_rid
order by max(_order)
What's Going On
First things first; I didn't have to create a separate column in the tr CTE to return _type_rid. I did that mostly for troubleshooting and clarity. Secondly, I also didn't really have to do a second dense_rank on the final selection for the column _grp. I just did that so it matched exactly the results from my other approach.
Within each type, _type_rid is unique and increments by 1. _order also increments by one. So as long as a given type is chugging along, gapped by only 1, _order - _type_rid will be the same value. Let's look at a couple of examples (this is the result of the tr CTE, ordered by _type and then _order):
_type_rid naive_type_rid true_type_rid _type _order
-------------------- -------------------- -------------------- ----------- -----------
1 8 17 3 9
2 10 19 3 12
1 4 29 5 5
2 5 30 5 7
3 5 30 5 8
1 0 49 7 1
1 1 122 11 2
2 1 122 11 3
3 7 128 11 10
4 7 128 11 11
1 3 327 18 4
1 5 366 19 6
First row in _order sequence, _order - _type_rid = 1 - 1 = 0. This assigns this row (type 7) to group 0
Second row, 2 - 1 = 1. This assigns type 11 to group 1
Third row, 3 - 2 = 1. This assigns the second sequential type 11 to group 1 also
Fourth row, 4 - 1 = 3. This assigns type 18 to group 3
... and so forth.
The groups aren't sequential, but they ARE in the same order as _order which is the important part. You'll also notice I added the value of _type to that value as well. That's because when we hit some of the later rows, groups switched, but the sequence was still incremented by 1. By adding _type, we can differentiate those off-by-one values and still do it in the right order as well.
The final outer select from tr orders by max(_order) (in both my unnecessary dense_rank() _grp modification, and just the general result order).
Conclusion
This is still a little wonky, but definitely well within the bounds of "supported functionality". Given that I ran into one gotcha in there (the off-by-one thing), there might be others I haven't considered, so again, take that with a grain of salt, and do some testing.

SQL Server 2008 T-SQL: How to implement increasing number space PER foreign key value (with thread safety)

This works but I feel there must be a better way. I don't understand table/row locking very well. I have a table and an SP to manage ever-increasing transaction numbers (1, 2, 3, etc.) PER person:
CREATE TABLE PersonTransaction (
PersonID int NOT NULL,
TransactionID int NOT NULL
)
INSERT INTO PersonTransaction VALUES (1,0), (2,0), (3,0)
CREATE PROCEDURE PersonNewTransaction
@PersonID int
AS
BEGIN
UPDATE PersonTransaction
SET TransactionID=TransactionID+1
WHERE PersonID=@PersonID
SELECT TransactionID
FROM PersonTransaction
WHERE PersonID=@PersonID
END
exec PersonNewTransaction 1
exec PersonNewTransaction 3
exec PersonNewTransaction 1
select * from PersonTransaction
PersonID TransactionID
1 2
2 0
3 1
Should I wrap the SP with a transaction and sp_getapplock and call it a day or is there a more elegant approach?
A comment from Alex:
UPDATE PersonTransaction
SET TransactionID=TransactionID+1
output inserted.TransactionID
WHERE PersonID=@PersonID
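For what it's worth, a minimal sketch of the procedure rebuilt around that comment (my reconstruction, not Alex's exact code). Because the increment and the read happen in one atomic UPDATE, two concurrent callers cannot receive the same number, so no explicit transaction or sp_getapplock should be needed:
ALTER PROCEDURE PersonNewTransaction
@PersonID int
AS
BEGIN
SET NOCOUNT ON;
-- single atomic statement: the row is locked while it is updated,
-- and OUTPUT returns the value this caller produced
UPDATE PersonTransaction
SET TransactionID = TransactionID + 1
OUTPUT inserted.TransactionID
WHERE PersonID = @PersonID;
END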

Assign Unique ID within groups of records

I have a situation where I need to add an arbitrary unique id to each of a group of records. It's easier to visualize this below.
Edited 11:26 est:
Currently the lineNum field has garbage. This is running on SQL Server 2000. The sample that follows is what the results should look like, but the actual values aren't important; the numbers could be anything as long as the two combined fields can be used as a unique key.
OrderID lineNum
AAA 1
AAA 2
AAA 3
BBB 1
CCC 1
CCC 2
The value of lineNum is not important, but the field is only 4 characters. This needs to be done in a SQL Server stored procedure. I have no problem doing it programmatically.
Assuming you're using SQL Server 2005 or better, you can use ROW_NUMBER():
select orderId,
row_number() over (partition by orderId order by orderId) as lineNum
from Orders
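If you need to persist the value into lineNum rather than just select it, a minimal updatable-CTE sketch (again 2005+, so outside the poster's SQL 2000 constraint; table and column names assumed as above):
with numbered as (
select lineNum,
row_number() over (partition by OrderID order by OrderID) as rn
from Orders
)
update numbered set lineNum = rn;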
While adding a record to the table, you could create the "linenum" field dynamically:
In Transact-SQL, something like this:
Declare @lineNum AS INT
-- Get next linenum (COALESCE goes outside MAX so the first row for an order works)
SELECT @lineNum = COALESCE(MAX(linenum), 0) FROM Orders WHERE OrderID = @OrderID
SET @lineNum = @lineNum + 1
INSERT INTO ORDERS (OrderID, linenum, .....)
VALUES (@OrderID, @lineNum, ....)
You could create a cursor that reads all values sorted, then at each change in value resets the counter to 1, then steps through incrementing it each time.
E.g.:
AAA reset 1
AAA set 1 + 1 = 2
AAA set 2 + 1 = 3
BBB reset 1
CCC reset 1
CCC set 1 + 1 = 2
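A rough T-SQL sketch of that cursor idea (my reconstruction; it builds the OrderID/lineNum pairs in a work table rather than updating in place, since the source table has no key to target an update with):
CREATE TABLE #numbered (OrderID varchar(10), lineNum int)
DECLARE @OrderID varchar(10), @prevOrderID varchar(10), @lineNum int
DECLARE c CURSOR LOCAL FAST_FORWARD FOR
SELECT OrderID FROM Orders ORDER BY OrderID
OPEN c
FETCH NEXT FROM c INTO @OrderID
WHILE @@FETCH_STATUS = 0
BEGIN
-- reset the counter at each change of OrderID, otherwise increment
IF @prevOrderID IS NULL OR @prevOrderID <> @OrderID
SET @lineNum = 1
ELSE
SET @lineNum = @lineNum + 1
INSERT INTO #numbered (OrderID, lineNum) VALUES (@OrderID, @lineNum)
SET @prevOrderID = @OrderID
FETCH NEXT FROM c INTO @OrderID
END
CLOSE c
DEALLOCATE c
SELECT * FROM #numbered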
Hmmmmm, could you create a view that returns the line number information in order and group it based on your order ID? Making sure the line number is always returned in the same order.
Either that or you could use a trigger and on the insert calculate the max id for the order?
Or perhaps you could use a select from max statement on the insert?
Perhaps none of these are satisfactory?
If you're not using SQL 2005, this is a slightly more set-based way to do this (I don't like temp tables much, but I like cursors less):
declare @out table (id tinyint identity(1,1), orderid char(4))
insert @out select orderid from THESOURCETABLE
select
o.orderid, o.id - omin.minid + 1 as linenum
from @out o
inner join
(select orderid, min(id) minid from @out group by orderid) as omin on
o.orderid = omin.orderid
