I am trying to puzzle out a trigger in a SQL Server database. I am a student working on a summer project, so I am no pro at this, but I can pick it up quickly.
This is a simplified version of my database table sorted by rank:
ID as primary key
ID | RANK
--------------
2 | NULL
1 | 1
3 | 2
4 | 3
7 | 4
My objective right now is to be able to insert/delete/update ranks while keeping them in incremental order in the database, with no gaps in the sequence and no duplicates.
/* Insert new row */
INSERT INTO TABLE (ID, RANK) VALUES (6, 4)
/* AFTER INSERT */
ID | RANK
--------------
2 | NULL
1 | 1
3 | 2
4 | 3
6 | 4 <- new
7 | 5 <- notice how the rank increased to make room for the new row
I think a trigger is the most efficient/easiest way to do this, although I may be wrong.
As an alternative to a trigger, I have a temporary solution that uses front-end code to run updates on each row whenever any rank is changed.
If you know how a trigger could do this (or whether it can at all), please share.
EDIT: Added scenarios
The rank being inserted always takes its assigned number; everything greater than or equal to it has its rank increased by one.
The rank causing the trigger always has priority to claim its number, while everything else has its rank increased to accommodate it.
If the rank is the highest number, the trigger should ensure that the number is exactly one more than the current maximum.
This may work for you. Let me know.
DROP TABLE dbo.test
CREATE TABLE dbo.test (id int, ranke int)
INSERT INTO test VALUES (2, NULL)
INSERT INTO test VALUES (1, 1)
INSERT INTO test VALUES (3, 2)
INSERT INTO test VALUES (4, 3)
INSERT INTO test VALUES (7, 4)
GO
CREATE TRIGGER t_test
ON test
AFTER INSERT
AS
UPDATE test
SET ranke += 1
WHERE ranke >= (SELECT MAX(ranke) FROM inserted)
  AND id <> (SELECT MAX(id) FROM inserted)
GO
INSERT INTO test values (6,4)
INSERT INTO test values (12,NULL)
SELECT * FROM test
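One caveat worth flagging (my addition, a sketch rather than tested production code): the trigger above reads MAX(ranke) and MAX(id) from inserted, so it only behaves correctly for single-row inserts. A variant that renumbers every ranked row after any insert also copes with multi-row inserts:

CREATE TRIGGER t_test_renumber
ON test
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Renumber all non-NULL ranks as 1..n; on ties, newly inserted rows win.
    WITH ordered AS (
        SELECT t.ranke,
               ROW_NUMBER() OVER (ORDER BY t.ranke,
                                           CASE WHEN i.id IS NULL THEN 1 ELSE 0 END,
                                           t.id) AS new_rank
        FROM test t
        LEFT JOIN inserted i ON i.id = t.id
        WHERE t.ranke IS NOT NULL
    )
    UPDATE ordered SET ranke = new_rank;
END
GO

Renumbering the whole column does more work than the targeted UPDATE above, but it also closes any gaps left by deletes, which matches the "no missing numbers" requirement.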
I have a target table for which partial data arrives at different times from 2 departments. The keys they use are the same, but the fields they provide are different. Most of the rows they provide have common keys, but there are some rows that are unique to each department. My question is about the fields, not the rows:
Scenario
the target table has a key and 30 fields.
Dept. 1 provides fields 1-20
Dept. 2 provides fields 21-30
Suppose I loaded Q1 data from Dept. 1, and that created new rows 100-199 and populated fields 1-20. Later, I receive Q1 data from Dept. 2. Can I execute the same merge code I previously used for Dept. 1 to update rows 100-199 and populate fields 21-30 without unintentionally changing fields 1-20? Alternatively, would I have to tailor separate merge code for each Dept.?
In other words, does (or can) "Merge / Update" operate only on target fields that are present in the source table while ignoring target fields that are NOT present in the source table? In this way, Dept. 1 fields would NOT be modified when merging Dept. 2, or vice-versa, in the event I get subsequent corrections to this data from either Dept.
You can use a MERGE statement, where you define a source and a target, plus what happens when a row is found in both, only in the source, or only in the target. You can even extend it with custom logic, such as "it's only in the source and it's older than X" or "it's from department Y".
-- I'm skipping the fields 2-20 and 22-30, just to make this shorter.
create table #target (
id int primary key,
field1 varchar(100), -- and so on until 20
field21 varchar(100) -- and so on until 30
)
create table #dept1 (
id int primary key,
field1 varchar(100)
)
create table #dept2 (
id int primary key,
field21 varchar(100)
)
/*
Creates some data to merge into the target.
The expected result is:
| id | field1 | field21 |
| - | - | - |
| 1 | dept1: 1 | dept2: 1 |
| 2 | | dept2: 2 |
| 3 | dept1: 3 | |
| 4 | dept1: 4 | dept2: 4 |
| 5 | | dept2: 5 |
*/
insert into #dept1 values
(1,'dept1: 1'),
--(2,'dept1: 2'),
(3,'dept1: 3'),
(4,'dept1: 4')
insert into #dept2 values
(1,'dept2: 1'),
(2,'dept2: 2'),
--(3,'dept2: 3'),
(4,'dept2: 4'),
(5,'dept2: 5')
-- Inserts the data from the first department. This could also be a merge, if necessary.
insert into #target(id, field1)
select id, field1 from #dept1
merge into #target t
using (select id, field21 from #dept2) as source_data(id, field21)
on (source_data.id = t.id)
when matched then update set field21=source_data.field21
when not matched by source and t.field21 is not null then delete -- you can even use merge to remove some records that match your criteria
when not matched by target then insert (id, field21) values (source_data.id, source_data.field21); -- Every merge statement should end with ;
select * from #target
You can see this code running on this DB Fiddle
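For completeness, here is a sketch (my addition, using the same tables as above) of the Dept. 1 load written as a MERGE as well; because it only names field1, any field21 values already in the target stay untouched:

merge into #target t
using (select id, field1 from #dept1) as source_data(id, field1)
on (source_data.id = t.id)
when matched then update set field1 = source_data.field1
when not matched by target then insert (id, field1) values (source_data.id, source_data.field1);

This is the direct answer to the question: MERGE only touches the target columns you explicitly list in its UPDATE and INSERT clauses, so each department's merge can safely ignore the other department's fields.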
I have an interesting issue with the design for user permissions on our existing database.
We have a large table with well over 10 million records that the powers that be now want individual permissions placed on.
I immediately think: OK, create a new table where each user has their own column, and access is controlled with a bit value.
[U1] [U2] [U3] [U4] etc.
=============================================
Record 1 | 1 | 0 | 1 | 0 |
Record 2 | 0 | 0 | 1 | 1 |
Record 3 | 0 | 1 | 1 | 0 |
Record 4 | 1 | 0 | 1 | 1 |
Then I was told to expect 300 users as we allow more access to the database which would mean 300 columns :/
So can anyone out there think of a better way to do this? Any thoughts or suggestions will be gratefully received.
Like I mention in the comments, use a normalised approach. Considering that the value for allow/deny is a bit, you could actually just do with two columns: the User's ID and the Record's ID:
CREATE TABLE dbo.UserAllowPermissions (UserID int,
RecordID int);
ALTER TABLE dbo.UserAllowPermissions
ADD CONSTRAINT PK_UserRecordPermission
PRIMARY KEY CLUSTERED (UserID,RecordID);
Then you can INSERT the data you already have above, like so:
INSERT INTO dbo.UserAllowPermissions (UserID,
RecordID)
VALUES (1, 1),
(1, 4),
(2, 3),
(3, 1),
(3, 2),
(3, 3),
(3, 4),
(4, 2),
(4, 4);
If you want to revoke the permission of a User, then just delete the relevant row. For example, say you want to revoke the permission for User 3 to Record 4:
DELETE
FROM dbo.UserAllowPermissions
WHERE UserID = 3
AND RecordID = 4;
And, unsurprisingly, if you want to grant a user permission, just INSERT the row:
INSERT INTO dbo.UserAllowPermissions (UserID, RecordID)
VALUES(5,1);
Which would grant the User with the ID of 5 access to the Record with an ID of 1.
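To actually filter the big table with this design, you just join to the permissions table. A quick sketch (the big table's name and key column here are assumptions, not from the question):

-- Return only the records that user 3 is allowed to see.
SELECT r.*
FROM dbo.BigRecordTable r               -- placeholder name for the 10-million-row table
INNER JOIN dbo.UserAllowPermissions p
        ON p.RecordID = r.RecordID      -- assumes RecordID is the big table's key
WHERE p.UserID = 3;

With the clustered primary key on (UserID, RecordID), that join resolves as a simple seek per user.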
Quick summary: I have a function that pulls data from table X. I'm running an UPDATE on table X and using a CROSS APPLY on the function that pulls data from X (during the update), and the function doesn't appear to return the updated data.
The real-world scenario is much more complicated, but here's a sample of what I'm seeing.
Table
create table BO.sampleData (id int primary key, data1 int, val int)
Function
create function BO.getPrevious(
    @id int
)
returns @info table (
    id int, val int
)
as
begin
    declare @val int
    declare @prevRow int = @id - 1

    -- grab data from the previous row
    insert into @info
    select @id, val
    from BO.sampleData where id = @prevRow

    -- if the previous row doesn't exist, return 3 * prev row id
    if @@rowcount = 0
        insert into @info values (@id, @prevRow * 3)

    return
end
Issue
Populate some sample data:
delete BO.sampleData
insert into BO.sampleData values (10, 20, 0)
insert into BO.sampleData values (11, 22, 0)
insert into BO.sampleData values (12, 24, 0)
insert into BO.sampleData values (13, 26, 0)
insert into BO.sampleData values (14, 28, 0)
select * from BO.sampleData
id          data1       val
----------- ----------- -----------
10          20          0
11          22          0
12          24          0
13          26          0
14          28          0
Update BO.sampleData using a CROSS APPLY on BO.getPrevious (which accesses data from BO.sampleData):
update t
set t.val = ca.val
from bo.sampleData t
cross apply BO.getPrevious(t.id) ca
where t.id = ca.id
Problem
I'm expecting the row with id 10 to have the value 27 (since there is no row 9, the function returns 9 * 3). For id 11, I assumed it would look at row 10 (which just got updated to 27) and set its val to 27, and that this would cascade down the rest of the table. But what I get is:
id          data1       val
----------- ----------- -----------
10          20          27
11          22          0
12          24          0
13          26          0
14          28          0
I'm guessing this isn't allowed/supported -- the function doesn't have access to the updated data yet? Or have I got something wrong in the syntax? In the real scenario I'm researching, the function is much more complex and does some child-table lookups, aggregates, etc. before returning a result. But this represents the basics of what I'm seeing: the function that queries BO.sampleData doesn't seem to have access to the updated values of BO.sampleData within the CROSS APPLY during the UPDATE.
Any ideas welcomed.
Thanks to @Martin Smith for identifying the issue -- i.e. "Halloween Protection". Now that my issue has a name, I did some research and found the following article, which mentions this specific scenario in SQL Server:
... update plans consist of two parts: a read cursor that identifies
the rows to be updated and a write cursor that actually performs the
updates. Logically speaking, SQL Server must execute the read cursor
and write cursor of an update plan in two separate steps or phases.
To put it another way, the actual update of rows must not affect the
selection of which rows to update.
Emphasis mine. It makes sense now. The CROSS APPLY is happening over the read cursor where all of the values are still zero.
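If the cascading behaviour is what you actually want, one possible workaround (my sketch, not from the article) is to repeat the statement until nothing changes. Halloween protection applies within a single statement, but each subsequent statement does see the previous one's writes:

DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    UPDATE t
    SET t.val = ca.val
    FROM BO.sampleData t
    CROSS APPLY BO.getPrevious(t.id) ca
    WHERE t.id = ca.id
      AND t.val <> ca.val;  -- only touch rows that still need to change
    SET @rows = @@ROWCOUNT;
END

On the sample data this converges in five passes, propagating the 27 down one row per statement.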
The data always comes from @info.
For input id = 11, the function executes:
insert into @info
select @id, val -- @id = 11; val is read from row 10, which is still 0
from BO.sampleData where id = 10
So the val that @info returns for id = 11 is 0: it was read from BO.sampleData where id = 10 before the update to that row became visible. Everything is exactly what it is in your UDF, and nothing in the UDF ever updates that val to 27. Be careful: @info is the table that gets returned.
I have a WHILE LOOP in an SQL query.
I have a table with 5 rows matching the counter.
I'm randomizing 2048 rows and want to insert the values 1 to 5 randomly across those rows into a single column. But what I'm getting is: the query loops once over the 2048 rows and inserts "1", then loops a second time and inserts "5", then "3", then "4", and finally "2".
What I want is to loop through the 2048 rows one time and insert 1 to 5 randomly across them in the single column.
Here's the SQL, which runs, but gives the wrong result.
declare @counter int
SET @counter = 1

BEGIN TRAN

WHILE (@counter <= 6)
BEGIN
    SELECT id, city, wage_level
    FROM myFirstTable
    ORDER BY NEWID()

    UPDATE myFirstTable
    SET wage_level = @counter

    SET @counter = @counter + 1

    CONTINUE
END

COMMIT TRAN
The values in the table that contains the 5 rows are irrelevant, but the fact that the IDs in that table run from 1 to 5 is what matters.
I'm close, but no cigar...
The result should be something like this:
id    city         wage_level
-----------------------------
1     Denver       2
2     Chicago      3
3     Seattle      5
4     Los Angeles  1
5     Boise        4
...
2047  Charleston   2
2048  Rochester    1
And so on...
Thanks, everyone
No need for a loop. SQL works best with a set-based approach.
Here is one way to do it:
Create and populate sample table (Please save us this step in your future questions)
CREATE TABLE myFirstTable
(
id int identity(1,1),
city varchar(20),
wage_level int
)
INSERT INTO myFirstTable (city) VALUES
('Denver'),
('Chicago'),
('Seattle'),
('Los Angeles'),
('Boise')
The update statement:
UPDATE myFirstTable
SET wage_level = (ABS(CHECKSUM(NEWID())) % 5) + 1
Check the update:
SELECT *
FROM myFirstTable
Results:
id  city         wage_level
1   Denver       3
2   Chicago      3
3   Seattle      2
4   Los Angeles  4
5   Boise        3
Explanation: use NEWID() to generate a GUID, CHECKSUM() to get a number based on that GUID, ABS() to make it non-negative, % 5 to get a value between 0 and 4, and finally + 1 to get a value between 1 and 5.
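One variation, in case the distribution matters (my sketch, not part of the answer above): the modulo approach makes every row independently random, so the counts of 1 to 5 won't come out even across 2048 rows. NTILE over a random order assigns the five values in near-equal shares:

WITH shuffled AS (
    SELECT wage_level,
           NTILE(5) OVER (ORDER BY NEWID()) AS bucket  -- 5 near-equal random groups
    FROM myFirstTable
)
UPDATE shuffled
SET wage_level = bucket;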
In SQL Server 2005, in the query builder, I select "Add group by" to automatically add the GROUP BY clause to all of the fields I selected. If one or more of those fields is of bit type, I get an error. Why is this? Is casting the column to TINYINT a good fix?
It looks like a limitation of that tool. If you just write the actual SQL yourself in SQL Server Management Studio, it will work.
Here is my test code:
CREATE TABLE Test2
(ID INT,
bitvalue bit,
flag char(1))
GO
insert into test2 values (1,1,'a')
insert into test2 values (2,1,'a')
insert into test2 values (3,1,'a')
insert into test2 values (4,1,'b')
insert into test2 values (5,1,'b')
insert into test2 values (6,1,'b')
insert into test2 values (7,1,'b')
insert into test2 values (10,0,'a')
insert into test2 values (20,0,'a')
insert into test2 values (30,0,'a')
insert into test2 values (40,0,'b')
insert into test2 values (50,0,'b')
insert into test2 values (60,0,'b')
insert into test2 values (70,0,'b')
select * from test2
select count(*),bitvalue,flag from test2 group by bitvalue,flag
OUTPUT
ID          bitvalue flag
----------- -------- ----
1           1        a
2           1        a
3           1        a
4           1        b
5           1        b
6           1        b
7           1        b
10          0        a
20          0        a
30          0        a
40          0        b
50          0        b
60          0        b
70          0        b

(14 row(s) affected)
            bitvalue flag
----------- -------- ----
3           0        a
3           1        a
4           0        b
4           1        b

(4 row(s) affected)
The tools don't allow some operations such as indexing or grouping on bit columns. Raw SQL does.
Note, you can't aggregate on bit columns; you have to cast first. Of course, averaging a bit column is kinda pointless, but MAX/MIN is useful as an OR/AND spanning multiple rows.
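For example, against the test2 table above, a sketch of that cast-first pattern (my addition):

SELECT flag,
       MAX(CAST(bitvalue AS tinyint)) AS any_set,  -- behaves like OR across the group
       MIN(CAST(bitvalue AS tinyint)) AS all_set   -- behaves like AND across the group
FROM test2
GROUP BY flag;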