I have some data like
Id, GroupId, Whatever
1, 1, 10
2, 1, 10
3, 1, 10
4, 2, 10
5, 2, 10
6, 3, 10
And I need to add a "group row id" column such as
Id, GroupId, Whatever, GroupRowId
1, 1, 10, 1
2, 1, 10, 2
3, 1, 10, 3
4, 2, 10, 1
5, 2, 10, 2
6, 3, 10, 1
Ideally it would be computed and enforced by the database. So when I do
INSERT INTO Foos (GroupId, Whatever) VALUES (1, 20)
I'd get the correct GroupRowId. Continuing the example data above, this row would then look like
Id, GroupId, Whatever, GroupRowId
7, 1, 20, 4
This data is to be shared with a 3rd party and one of the requirements is for those GroupRowIds to be fixed regardless of any different ORDER BY or WHERE clauses.
I've considered a view with a ROW_NUMBER() OVER (PARTITION BY ...), but that view could still be modified in the future, breaking previously shared data.
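That is, something along these lines (the view name is just for illustration):
CREATE VIEW FoosWithGroupRowId AS
SELECT Id, GroupId, Whatever,
       ROW_NUMBER() OVER (PARTITION BY GroupId ORDER BY Id) AS GroupRowId
FROM Foos;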
Our business rules dictate that no rows will be deleted so the GroupRowId will never need to be recomputed in this respect and there will never** be missing values.
** in the perfect world of business rules.
My thinking is that it would be preferable for this to be a physical column so that it exists within the row. It can be queried and won't change based on an ORDER BY or WHERE clause.
You might try something along these lines:
--create a test database (will be dropped at the end! Careful with real data!!)
USE master;
GO
CREATE DATABASE GroupingTest;
GO
USE GroupingTest;
GO
--Your table, I use an IDENTITY column for your Id column
CREATE TABLE dbo.tbl(Id INT IDENTITY,GroupId INT,Whatever INT);
GO
--Insert your test values
INSERT INTO tbl(GroupId, Whatever)
VALUES
(1,10)
,(1,10)
,(1,10)
,(2,10)
,(2,10)
,(3,10);
GO
--This is necessary to add the new column and to fill it initially
ALTER TABLE tbl ADD GroupRowId INT;
GO
WITH cte AS
(
SELECT GroupRowId
,ROW_NUMBER() OVER(PARTITION BY GroupId ORDER BY Id) AS NewValue
FROM tbl
)
UPDATE cte SET GroupRowId=NewValue;
--check the result
SELECT * FROM tbl ORDER BY GroupId,Id;
GO
--Now we create a trigger, which does exactly the same for new rows
--Very important: This must work with single inserts and with multiple inserts as well!
CREATE TRIGGER dbo.SetNextGroupRowId ON dbo.tbl
FOR INSERT
AS
BEGIN
WITH cte AS
(
SELECT GroupRowId
,ROW_NUMBER() OVER(PARTITION BY GroupId ORDER BY Id) AS NewValue
FROM tbl
)
UPDATE cte
SET GroupRowId=NewValue
WHERE GroupRowId IS NULL; --<-- this ensures to change only new rows
END
GO
--Now we can test this with a single value
INSERT INTO tbl(GroupId, Whatever)
VALUES(1,20);
SELECT * FROM tbl ORDER BY GroupId,Id;
--And we can test this with multiple inserts
INSERT INTO tbl(GroupId, Whatever)
VALUES
(1,30)
,(2,30)
,(2,30)
,(3,30)
,(4,30); --<-- the "4" is a new group
SELECT * FROM tbl ORDER BY GroupId,Id;
GO
--Cleaning
USE master;
GO
DROP DATABASE GroupingTest;
What you should keep in mind:
This might get into trouble with values inserted manually into GroupRowId or with any manipulation of this column by any other statement.
This might get into trouble with deleted rows.
You can think about an approach selecting MAX(GroupRowId)+1 for the given group; see the sketch below. This depends on your needs.
You might add a unique index on (GroupId, GroupRowId). This would, at least, avoid giving out the same number twice, but would lead to an error instead.
...but in your perfect world of business rules :-) this won't happen...
And to be honest: The whole issue has some smell...
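For illustration, a sketch of the MAX()+1 variant combined with the unique index (the index name is illustrative; CREATE OR ALTER needs SQL Server 2016 SP1 or later):
--a unique index, so the same GroupRowId can never be handed out twice within a group
CREATE UNIQUE INDEX IX_tbl_GroupId_GroupRowId ON dbo.tbl(GroupId, GroupRowId);
GO
--replace the trigger with a MAX()+1 based version
CREATE OR ALTER TRIGGER dbo.SetNextGroupRowId ON dbo.tbl
FOR INSERT
AS
BEGIN
    WITH cte AS
    (
        SELECT GroupRowId
              ,(SELECT ISNULL(MAX(t2.GroupRowId), 0)
                FROM dbo.tbl t2
                WHERE t2.GroupId = t.GroupId)
               + ROW_NUMBER() OVER(PARTITION BY GroupId ORDER BY Id) AS NewValue
        FROM dbo.tbl t
        WHERE GroupRowId IS NULL --<-- only number the new rows
    )
    UPDATE cte SET GroupRowId = NewValue;
END
GO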
Related
How to delete the duplicate records from a Snowflake table? Thanks.
ID Name
1 Apple
1 Apple
2 Apple
3 Orange
3 Orange
Result should be:
ID Name
1 Apple
2 Apple
3 Orange
Adding here a solution that doesn't recreate the table, because recreating a table can break a lot of existing configurations and history.
Instead we are going to delete only the duplicate rows and insert a single copy of each, within a transaction:
-- find all duplicates
create or replace transient table duplicate_holder as (
select $1, $2, $3
from some_table
group by 1,2,3
having count(*)>1
);
-- time to use a transaction to insert and delete
begin transaction;
-- delete duplicates
delete from some_table a
using duplicate_holder b
where (a.$1,a.$2,a.$3)=(b.$1,b.$2,b.$3);
-- insert single copy
insert into some_table
select *
from duplicate_holder;
-- we are done
commit;
Advantages:
Doesn't recreate the table
Doesn't modify the original table's structure
Only deletes and inserts duplicated rows (good for time travel storage costs, avoids unnecessary reclustering)
All in a transaction
If you have some primary key as such:
CREATE TABLE fruit (key number, id number, name text);
insert into fruit values (1,1, 'Apple'), (2,1,'Apple'),
(3,2, 'Apple'), (4,3, 'Orange'), (5,3, 'Orange');
then you can:
DELETE FROM fruit
WHERE key in (
SELECT key
FROM (
SELECT key
,ROW_NUMBER() OVER (PARTITION BY id, name ORDER BY key) AS rn
FROM fruit
)
WHERE rn > 1
);
But if you do not have a unique key then you cannot delete that way. In that case, you can build a deduplicated copy:
CREATE TABLE new_table_name AS
SELECT id, name FROM (
SELECT id
,name
,ROW_NUMBER() OVER (PARTITION BY id, name ORDER BY id) AS rn
FROM table_name
)
WHERE rn = 1
and then swap them
ALTER TABLE table_name SWAP WITH new_table_name
Here's a very simple approach that doesn't need any temporary tables. It will work very nicely for small tables, but might not be the best approach for large tables.
insert overwrite into some_table
select distinct * from some_table
;
The OVERWRITE keyword means that the table will be truncated before the insert takes place.
Snowflake does not enforce primary keys; their use is primarily for ERD tools.
Snowflake does not have something like a ROWID either, so there is no way to identify duplicates for deletion.
It is possible to temporarily add an "is_duplicate" column, e.g. numbering all the duplicates with the ROW_NUMBER() function, then delete all records with "is_duplicate" > 1 and finally drop the utility column, as sketched below.
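A minimal sketch of that idea, assuming a fruit(id, name) table as in the other answers (not safe under concurrent writes):
ALTER TABLE fruit ADD COLUMN is_duplicate NUMBER;
-- renumber every copy of each row, then keep only the first
INSERT OVERWRITE INTO fruit
SELECT id, name, ROW_NUMBER() OVER (PARTITION BY id, name ORDER BY id) FROM fruit;
DELETE FROM fruit WHERE is_duplicate > 1;
ALTER TABLE fruit DROP COLUMN is_duplicate;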
Another way is to create a duplicate table and swap, as others have suggested.
However, constraints and grants must be kept. One way to do this is:
CREATE TABLE new_table LIKE old_table COPY GRANTS;
INSERT INTO new_table SELECT DISTINCT * FROM old_table;
ALTER TABLE old_table SWAP WITH new_table;
The code above removes exact duplicates. If you want to end up with a row for each "PK" you need to include logic to select which copy you want to keep.
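For example, to keep the newest copy per "PK" instead of the DISTINCT insert above, assuming a trustworthy updated_at column (illustrative), something like:
INSERT INTO new_table
SELECT * FROM old_table
QUALIFY ROW_NUMBER() OVER (PARTITION BY id ORDER BY updated_at DESC) = 1;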
This illustrates the importance of adding update timestamp columns in a Snowflake data warehouse.
This has been bothering me for some time as well. Now that Snowflake has added support for QUALIFY, you can create a deduplicated table with a single statement, without subselects:
CREATE TABLE fruit (id number, nam text);
insert into fruit values (1, 'Apple'), (1,'Apple'),
(2, 'Apple'), (3, 'Orange'), (3, 'Orange');
CREATE OR REPLACE TABLE fruit AS
SELECT * FROM
fruit
qualify row_number() OVER (PARTITION BY id, nam ORDER BY id, nam) = 1;
SELECT * FROM fruit;
Of course you are left with a new table and lose the table history, primary keys, foreign keys and such.
Based on the above ideas, the following query worked perfectly in my case:
CREATE OR REPLACE TABLE SCHEMA.table
AS
SELECT
DISTINCT *
FROM
SCHEMA.table
;
Your question boils down to: how can I delete one of two perfectly identical rows? You can't. You can only do a DELETE FROM fruit WHERE ID = 1 AND Name = 'Apple';, then both rows will go away. Or you don't, and keep both.
For some databases, there are workarounds using internal row identifiers, but there isn't any in Snowflake, see https://support.snowflake.net/s/question/0D50Z00008FQyGqSAL/is-there-an-internalmetadata-unique-rowid-in-snowflake-that-i-can-reference . You cannot limit deletes either, so your only option is to create a new table and swap.
An additional note on Hans Henrik Eriksen's remark on the importance of update timestamps: this is a real help when the duplicates were added later. If, for example, you want to keep the newer values, you can then do this:
-- setup
create table fruit (ID Integer, Name VARCHAR(16777216), "UPDATED_AT" TIMESTAMP_NTZ);
insert into fruit values (1, 'Apple', CURRENT_TIMESTAMP::timestamp_ntz)
, (2, 'Apple', CURRENT_TIMESTAMP::timestamp_ntz)
, (3, 'Orange', CURRENT_TIMESTAMP::timestamp_ntz);
-- wait > 1 nanosecond
insert into fruit values (1, 'Apple', CURRENT_TIMESTAMP::timestamp_ntz)
, (3, 'Orange', CURRENT_TIMESTAMP::timestamp_ntz);
-- delete older duplicates (DESC)
DELETE FROM fruit
WHERE (ID
, UPDATED_AT) IN (
SELECT ID
, UPDATED_AT
FROM (
SELECT ID
, UPDATED_AT
, ROW_NUMBER() OVER (PARTITION BY ID ORDER BY UPDATED_AT DESC) AS rn
FROM fruit
)
WHERE rn > 1
);
A simple UNION eliminates duplicates for the use case where all columns form the key and there are no PKs, as sketched below.
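For example (a sketch against the some_table used in earlier answers; UNION, unlike UNION ALL, removes duplicate rows):
insert overwrite into some_table
select * from some_table
union
select * from some_table;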
Anyway, the problem should be solved as early as possible in the ingestion pipeline, and/or by using SCD (slowly changing dimensions) etc.
A raw magic "best way to delete" is wrong in principle; use SCD with a high-resolution timestamp, which solves any such problem.
You want to fix a massive load of duplicates? Then add a column like a batch id and remove all records loaded in that batch.
It's like being healthy; you have two approaches:
eat a lot > get fat > go to a gym to burn it off
eat well > have a healthy lifestyle and no need for a gym.
So before discussing the best gym, try changing the lifestyle.
Hope this helps; learn to put pressure upstream on data producers instead of living like Jesus Christ, trying to clean up the mess of everyone.
The following solution is effective if you are looking at one or a few columns as primary key references for the table.
-- Create a temp table to hold our duplicates (only second occurrence)
CREATE OR REPLACE TRANSIENT TABLE temp_table AS (
SELECT [col1], [col2], .. [coln]
FROM (
SELECT *, ROW_NUMBER () OVER(
PARTITION BY [pk]1, [pk]2, .. [pk]m
ORDER BY [pk]1, [pk]2, .. [pk]m) AS duplicate_count
FROM [schema].[table]
) WHERE duplicate_count = 2
);
-- Delete all the duplicate records from the table
DELETE FROM [schema].[table] t1
USING temp_table t2
WHERE
t1.[pk]1 = t2.[pk]1 AND
t1.[pk]2 = t2.[pk]2 AND
..
t1.[pk]m = t2.[pk]m;
-- Insert single copy using the temp_table in the original table
INSERT INTO [schema].[table]
SELECT *
FROM temp_table;
This is inspired by @Felipe Hoffa's answer:
-- create a table with the dupes and take the max id
create or replace transient table duplicate_holder as (
select max(S.ID) as ID, some_field, count(some_field) as numberAssets
from some_table S
group by some_field
having count(some_field)>1
);
-- join back to the original table on the field, excluding the max id in the duplicate table, and delete
delete from some_table as t
USING duplicate_holder as d
WHERE t.some_field=d.some_field
and t.id <> d.id;
Not sure if people are still interested in this, but I've used the query below, which is more elegant and seems to have worked:
create or replace table {{your_table}} as
select * from {{your_table}}
qualify row_number() over (partition by {{criteria_columns}} order by 1) = 1
I do not want the table to hold more than 15 records.
Scenario:
A new record is saved. If it would be record number 16, the first record should be deleted.
How do I remove the first record? Can it be done automatically?
If it is Entity Framework and you want to use a basic rule, here it is.
Suppose your object is person and its set is called people.
Before you do context.people.Add(new person()), apply the following logic:
obtain the count of people in the database: context.people.Count()
check if this count is greater than 15; you can do this via a single statement: if (context.people.Count() > 15)
inside the if you can write person firstperson = context.people.OrderBy(x => x.ID).First(), or if you have a date-inserted column you can use .OrderBy(x => x.dateadded) and pick the first element. Make sure you order it the correct way, using OrderBy or OrderByDescending.
place this record in a variable and call context.people.Remove(firstperson) before you do context.people.Add(new person())
Even though rows are deleted, your IDs will keep incrementing, but you can safely delete in ID order and pick the least one every time you delete.
WITH A AS
(
SELECT TOP 1 *
FROM MyTable
)
DELETE FROM A
The rows referenced in the TOP expression used with INSERT, UPDATE, or DELETE are not arranged in any order.
Therefore, you had better use the WITH construct together with an ORDER BY clause, which lets you specify more precisely which row you consider to be the first. For example:
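Assuming the table has an identity column rowID that reflects insertion order (as in the answer below):
WITH A AS
(
SELECT TOP 1 *
FROM MyTable
ORDER BY rowID ASC
)
DELETE FROM A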
This uses a trigger and an identity column to ensure only the 15 most-recently-inserted rows are kept in the table.
CREATE TABLE MyTable
(
rowID INT IDENTITY(1,1) PRIMARY KEY
,MyColumn VARCHAR(255) NOT NULL
)
GO
CREATE TRIGGER TG_MyTable_Only15
ON MyTable
AFTER INSERT
AS
BEGIN
WITH
t1
(
rowID
)
AS
(
SELECT TOP 15
rowID
FROM MyTable
ORDER BY rowID DESC
)
DELETE FROM MyTable
WHERE rowID NOT IN (SELECT rowID FROM t1)
END
GO
I have a table with an "order" column which needs to maintain a contiguous range of unique numbers. What I'm trying to create is a trigger that fires after deleting rows which updates the "order" column so that the numbers remain contiguous.
I know a lot of people would argue that an "order" column only needs to be continuous, not contiguous, however there is a lot of front end JavaScript, and other SQL, for ordering/reordering these items which depends on order being contiguous. I would prefer to simply get this trigger working rather than having to rewrite that, of course I'm open to suggestions ;)
The trigger I have works fine for a single row delete, but when a multiple row delete occurs, only the first row gets deleted and the rest remain with no error thrown.
I thought the problem may have been recursion, as it updates the table it fired from, but it's only a delete trigger so I don't think that's the problem. Turning off RECURSIVE_TRIGGERS didn't fix the issue.
Here's the code:
CREATE TABLE [dbo].[Item]
(
[ItemID] INT NOT NULL IDENTITY(1, 1),
[ItemOrder] INT NOT NULL,
[ItemName] NVARCHAR (50) NOT NULL
)
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO
CREATE TRIGGER [dbo].[trItem_odr] ON [dbo].[Item]
AFTER DELETE
AS
BEGIN
SET NOCOUNT ON ;
DECLARE @MinOrder INT
SELECT @MinOrder = MIN(ItemOrder)
FROM DELETED
DECLARE @UpdatedItems TABLE
(
ID INT IDENTITY(0, 1)
PRIMARY KEY,
ItemID INT
)
INSERT INTO @UpdatedItems (ItemID)
SELECT ItemID
FROM dbo.Item
WHERE ItemOrder > @MinOrder
AND ItemID NOT IN (SELECT ItemID
FROM DELETED)
ORDER BY ItemOrder
UPDATE dbo.Item
SET ItemOrder = (SELECT ID + @MinOrder
FROM @UpdatedItems
WHERE ItemID = Item.ItemID)
WHERE ItemID IN (SELECT ItemID
FROM @UpdatedItems)
END
GO
ALTER TABLE [dbo].[Item] ADD CONSTRAINT [PK_Item] PRIMARY KEY CLUSTERED ([ItemID])
GO
ALTER TABLE [dbo].[Item] ADD CONSTRAINT [IX_Item_1] UNIQUE NONCLUSTERED ([ItemName])
GO
CREATE UNIQUE NONCLUSTERED INDEX [IX_Item_2] ON [dbo].[Item] ([ItemOrder])
GO
INSERT INTO [dbo].[Item] ([ItemOrder], [ItemName])
SELECT 1, N'King Size Bed' UNION ALL
SELECT 2, N'Queen size bed' UNION ALL
SELECT 3, N'Double Bed' UNION ALL
SELECT 4, N'Single Bed' UNION ALL
SELECT 5, N'Filing Cabinet' UNION ALL
SELECT 6, N'Washing Machine' UNION ALL
SELECT 7, N'2 Seater Couch' UNION ALL
SELECT 8, N'3 Seater Couch' UNION ALL
SELECT 9, N'1 Seater Couch' UNION ALL
SELECT 10, N'Flat Screen TV' UNION ALL
SELECT 11, N'Fridge' UNION ALL
SELECT 12, N'Dishwasher' UNION ALL
SELECT 13, N'4 Seater couch' UNION ALL
SELECT 14, N'Lawn Mower' UNION ALL
SELECT 15, N'Dining table'
GO
Rewrite your front end. Prefer to trade more development time for less runtime.
Keeping the order column contiguous by updating all out-of-order rows in the table is tremendously inefficient (removing item 1 means updating 100,000,000 items), resulting in potentially huge update operations. It generates tremendous contention, because the updates modify many rows and thus conflict with almost any read. And it is ultimately incorrect under concurrency: you'll end up with gaps and overlaps anyway. Don't do it.
Suppose a table with two columns:
ParentEntityId int foreign key
Number int
ParentEntityId is a foreign key to another table.
Number is a local identity, i.e. it is unique within a single ParentEntityId.
Uniqueness is easily achieved via unique key over these two columns.
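For example (a sketch; the table and constraint names are illustrative):
ALTER TABLE ChildEntity ADD CONSTRAINT UQ_ChildEntity_Parent_Number UNIQUE (ParentEntityId, Number);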
How to make Number be automatically incremented in the context of the ParentEntityId on insert?
Addendum 1
To clarify the problem, here is an abstract.
ParentEntity has multiple ChildEntity rows, and each ChildEntity should have a unique incremental Number in the context of its ParentEntity.
Addendum 2
Treat ParentEntity as a Customer.
Treat ChildEntity as an Order.
So, orders for every customer should be numbered 1, 2, 3 and so on.
Well, there's no native support for this type of column, but you could implement it using a trigger:
CREATE TRIGGER tr_MyTable_Number
ON MyTable
INSTEAD OF INSERT
AS
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN;
WITH MaxNumbers_CTE AS
(
SELECT ParentEntityID, MAX(Number) AS Number
FROM MyTable
WHERE ParentEntityID IN (SELECT ParentEntityID FROM inserted)
GROUP BY ParentEntityID
)
INSERT MyTable (ParentEntityID, Number)
SELECT
i.ParentEntityID,
ROW_NUMBER() OVER
(
PARTITION BY i.ParentEntityID
ORDER BY (SELECT 1)
) + ISNULL(m.Number, 0) AS Number
FROM inserted i
LEFT JOIN MaxNumbers_CTE m
ON m.ParentEntityID = i.ParentEntityID
COMMIT
Not tested but I'm pretty sure it'll work. If you have a primary key, you could also implement this as an AFTER trigger (I dislike using INSTEAD OF triggers, they're harder to understand when you need to modify them 6 months later).
Just to explain what's going on here:
SERIALIZABLE is the strictest isolation mode; it guarantees that only one database transaction at a time can execute these statements, which we need in order to guarantee the integrity of this "sequence." Note that this irreversibly promotes the entire transaction, so you won't want to use this inside of a long-running transaction.
The CTE picks up the highest number already used for each parent ID;
ROW_NUMBER generates a unique sequence for each parent ID (PARTITION BY) starting from the number 1; we add this to the previous maximum if there is one to get the new sequence.
I probably should also mention that if you only ever need to insert one new child entity at a time, you're better off just funneling those operations through a stored procedure instead of using a trigger - you'll definitely get better performance out of it. This is how it's currently done with hierarchyid columns in SQL '08.
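A minimal sketch of such a procedure (the name is illustrative):
CREATE PROCEDURE dbo.InsertChildEntity
    @ParentEntityID int
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    BEGIN TRAN;

    INSERT MyTable (ParentEntityID, Number)
    SELECT @ParentEntityID, ISNULL(MAX(Number), 0) + 1
    FROM MyTable
    WHERE ParentEntityID = @ParentEntityID;

    COMMIT;
END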
You need to add an OUTPUT clause to the trigger for LINQ to SQL compatibility.
For example:
INSERT MyTable (ParentEntityID, Number)
OUTPUT inserted.*
SELECT
i.ParentEntityID,
ROW_NUMBER() OVER
(
PARTITION BY i.ParentEntityID
ORDER BY (SELECT 1)
) + ISNULL(m.Number, 0) AS Number
FROM inserted i
LEFT JOIN MaxNumbers_CTE m
ON m.ParentEntityID = i.ParentEntityID
This solves the question as I understand it :-)
DECLARE @foreignKey int
SET @foreignKey = 1 -- or however you get this
INSERT Tbl (ParentEntityId, Number)
VALUES (@foreignKey, ISNULL((SELECT MAX(Number) FROM Tbl WHERE ParentEntityId = @foreignKey), 0) + 1)
I'm working on a sql query that is passed a list of values as a parameter, like
select *
from ProductGroups
where GroupID in (24,12,7,14,65)
This list is constructed of relations used throughout the database, and must be kept in this order.
I would like to order the results by this list. I only need the first result, but it could be the one with GroupId 7 in this case.
I can't query like
order by (24,12,7,14,65).indexOf(GroupId)
Does anyone know how to do this?
Additional info:
Building a join works when running it in the MSSQL query editor, but...
Due to limitations of the software sending the query to MSSQL, I have to pass it to some internal query builder as one parameter, thus "24,12,7,14,65". And I don't know upfront how many numbers there will be in this list; it could be 2, could be 20.
You can also order by on a CASE:
select *
from ProductGroups
where GroupID in (24,12,7,14,65)
order by case GroupId
when 7 then 1 -- First in ordering
when 14 then 2 -- Second
else 3
end
Use a table variable or temporary table with an identity column, feed in your values and join to that, e.g.
declare #rank table (
ordering int identity(1,1)
, number int
)
insert into #rank values (24)
insert into #rank values (12)
insert into #rank values (7)
insert into #rank values (14)
insert into #rank values (65)
select pg.*
from ProductGroups pg
left outer join
#rank r
on pg.GroupId = r.number
order by
r.ordering
I think I might have found a possible solution (but it's ugly):
select *
from ProductGroups
where GroupID in (24,12,7,14,65)
order by charindex(
','+cast(GroupID as varchar)+',' ,
','+'24,12,7,14,65'+',')
This will order the rows by the position in which they occur in the list, and I can pass the string as one parameter, like I need to.
Do a join with a temporary table, in which you have the values that you want to filter by as rows. Add a column to it that has the order that you want as the second column, and sort by it.