I'm testing something in Oracle and populated a table with some sample data, but in the process I accidentally loaded duplicate records, so now I can't create a primary key using some of the columns.
How can I delete all duplicate rows and leave only one of them?
Use the rowid pseudocolumn.
DELETE FROM your_table
WHERE rowid not in
(SELECT MIN(rowid)
FROM your_table
GROUP BY column1, column2, column3);
Where column1, column2, and column3 make up the identifying key for each record. You might list all your columns.
From Ask Tom
delete from t
where rowid IN ( select rid
from (select rowid rid,
row_number() over (partition by
companyid, agentid, class , status, terminationdate
order by rowid) rn
from t)
where rn <> 1);
(fixed the missing parenthesis)
From DevX.com:
DELETE FROM our_table
WHERE rowid not in
(SELECT MIN(rowid)
FROM our_table
GROUP BY column1, column2, column3...) ;
Where column1, column2, etc. is the key you want to use.
DELETE FROM tablename a
WHERE a.ROWID > ANY (SELECT b.ROWID
FROM tablename b
WHERE a.fieldname = b.fieldname
AND a.fieldname2 = b.fieldname2)
Solution 1)
delete from emp
where rowid not in
(select max(rowid) from emp group by empno);
Solution 2)
delete from emp where rowid in
(
select rid from
(
select rowid rid,
row_number() over(partition by empno order by empno) rn
from emp
)
where rn > 1
);
Solution 3)
delete from emp e1
where rowid not in
(select max(rowid) from emp e2
where e1.empno = e2.empno );
create table t2 as select distinct * from t1;
You should do a small pl/sql block using a cursor for loop and delete the rows you don't want to keep. For instance:
declare
  prev_var my_table.var1%TYPE;
begin
  for t in (select var1, rowid rid from my_table order by var1) loop
    -- if the previous value equals the current one, delete the row; otherwise keep going
    if t.var1 = prev_var then
      delete from my_table where rowid = t.rid;
    end if;
    prev_var := t.var1;
  end loop;
end;
To select only the duplicates, the query format can be:
SELECT GroupFunction(column1), GroupFunction(column2),...,
COUNT(column1), column1, column2...
FROM our_table
GROUP BY column1, column2, column3...
HAVING COUNT(column1) > 1
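For example, a concrete instance of this template, using the emp table and empno key that other answers here assume:
SELECT COUNT(empno), empno
FROM emp
GROUP BY empno
HAVING COUNT(empno) > 1;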
So the correct query, as per the other suggestion, is:
DELETE FROM tablename a
WHERE a.ROWID > ANY (SELECT b.ROWID
FROM tablename b
WHERE a.fieldname = b.fieldname
AND a.fieldname2 = b.fieldname2
AND ....so on.. to identify the duplicate rows....)
This query will keep the oldest record in the database for the criteria chosen in the WHERE clause.
create table abcd(id number(10),name varchar2(20))
insert into abcd values(1,'abc')
insert into abcd values(2,'pqr')
insert into abcd values(3,'xyz')
insert into abcd values(1,'abc')
insert into abcd values(2,'pqr')
insert into abcd values(3,'xyz')
select * from abcd
id Name
1 abc
2 pqr
3 xyz
1 abc
2 pqr
3 xyz
Delete duplicate records but keep one distinct record in the table
DELETE
FROM abcd a
WHERE ROWID > (SELECT MIN(ROWID) FROM abcd b
WHERE b.id=a.id
);
Running the above query deletes 3 rows.
select * from abcd
id Name
1 abc
2 pqr
3 xyz
This blog post was really helpful for general cases:
If the rows are fully duplicated (all values in all columns can have copies) there are no columns to use! But to keep one you still need a unique identifier for each row in each group.
Fortunately, Oracle already has something you can use. The rowid.
All rows in Oracle have a rowid. This is a physical locator. That is, it states where on disk Oracle stores the row. This is unique to each row. So you can use this value to identify and remove copies. To do this, replace min() with min(rowid) in the uncorrelated delete:
delete films
where rowid not in (
select min(rowid)
from films
group by title, uk_release_date
)
The Fastest way for really big tables
Create exception table with structure below:
exceptions_table
ROW_ID ROWID
OWNER VARCHAR2(30)
TABLE_NAME VARCHAR2(30)
CONSTRAINT VARCHAR2(30)
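If you do not already have such a table, it can be created with that layout (Oracle ships an equivalent script, utlexcpt.sql, which creates it under the name exceptions; the name exceptions_table follows this answer):
create table exceptions_table (
row_id rowid,
owner varchar2(30),
table_name varchar2(30),
constraint varchar2(30)
);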
Try to create a unique constraint or primary key that will be violated by the duplicates. You will get an error message because you have duplicates. The exceptions table will contain the rowids of the duplicate rows.
alter table your_table add constraint dup_uk -- table and constraint names are placeholders
unique (dupfield1, dupfield2) -- or primary key
exceptions into exceptions_table;
Join your table with exceptions_table by rowid and delete dups
delete original_dups where rowid in (select ROW_ID from exceptions_table);
If the number of rows to delete is big, then create a new table (with all grants and indexes) anti-joining with exceptions_table by rowid, rename the original table to original_dups, and rename new_table_with_no_dups to the original table name.
create table new_table_with_no_dups AS (
select field1, field2 ........
from original_dups t1
where not exists ( select null from exceptions_table T2 where t1.rowid = t2.row_id )
)
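A hedged sketch of that swap, assuming the real table is called your_table (original_dups then becomes the backup that still holds the duplicates):
alter table your_table rename to original_dups;
-- ... create new_table_with_no_dups as shown above ...
alter table new_table_with_no_dups rename to your_table;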
Using rowid-
delete from emp
where rowid not in
(select max(rowid) from emp group by empno);
Using self join-
delete from emp e1
where rowid not in
(select max(rowid) from emp e2
where e1.empno = e2.empno );
Solution 4)
delete from emp where rowid in
(
select rid from
(
select rowid rid,
dense_rank() over(partition by empno order by rowid
) rn
from emp
)
where rn > 1
);
1. solution
delete from emp
where rowid not in
(select max(rowid) from emp group by empno);
2. solution
delete from emp where rowid in
(
select rid from
(
select rowid rid,
row_number() over(partition by empno order by empno) rn
from emp
)
where rn > 1
);
3. solution
delete from emp e1
where rowid not in
(select max(rowid) from emp e2
where e1.empno = e2.empno );
4. solution
delete from emp where rowid in
(
select rid from
(
select rowid rid,
dense_rank() over(partition by empno order by rowid
) rn
from emp
)
where rn > 1
);
5. solution
delete from emp where rowid in
(
select rid from
(
select rowid rid, rank() over (partition by empno order by rowid) rn from emp
)
where rn > 1
);
DELETE from table_name where rowid not in (select min(rowid) FROM table_name group by column_name);
and you can also delete duplicate records in another way
DELETE from table_name a where rowid > (select min(rowid) FROM table_name b where a.column=b.column);
DELETE FROM tableName WHERE ROWID NOT IN (SELECT MIN (ROWID) FROM tableName GROUP BY columnname);
delete from dept
where rowid in (
select rowid
from dept
minus
select max(rowid)
from dept
group by DEPTNO, DNAME, LOC
);
For best performance, here is what I wrote:
(see execution plan)
DELETE FROM your_table
WHERE rowid IN
(select t1.rowid from your_table t1
LEFT OUTER JOIN (
SELECT MIN(rowid) as rowid, column1,column2, column3
FROM your_table
GROUP BY column1, column2, column3
) co1 ON (t1.rowid = co1.rowid)
WHERE co1.rowid IS NULL
);
Check the scripts below -
1.
Create table test(id int,sal int);
2.
insert into test values(1,100);
insert into test values(1,100);
insert into test values(2,200);
insert into test values(2,200);
insert into test values(3,300);
insert into test values(3,300);
commit;
3.
select * from test;
You will see 6 records here.
4. Run the query below -
delete from
test
where rowid in
(select rowid from
(select
rowid,
row_number()
over
(partition by id order by sal) dup
from test)
where dup > 1);
select * from test;
You will see that duplicate records have been deleted.
Hope this solves your query.
Thanks :)
I didn't see any answers that use common table expressions and window functions.
This is what I find easiest to work with.
DELETE FROM
YourTable
WHERE
ROWID IN
(WITH Duplicates
AS (SELECT
ROWID RID,
ROW_NUMBER()
OVER(
PARTITION BY First_Name, Last_Name, Birth_Date
ORDER BY ROWID)
AS RN,
SUM(1)
OVER(
PARTITION BY First_Name, Last_Name, Birth_Date
ORDER BY ROWID ROWS BETWEEN UNBOUNDED PRECEDING
AND UNBOUNDED FOLLOWING)
AS CNT
FROM
YourTable
WHERE
Load_Date IS NULL)
SELECT
RID
FROM
duplicates
WHERE
RN > 1);
Some things to note:
1) We are only checking for duplication on the fields in the partition clause.
2) If you have some reason to pick one duplicate over the others, you can use an ORDER BY clause so that row gets row_number() = 1.
3) You can change the number of duplicates preserved by changing the final where clause to "WHERE RN > N" with N >= 1 (I was thinking N = 0 would delete all rows that have duplicates, but it would just delete all rows).
4) The SUM window function in the CTE tags each row with the number of rows in its group, so to select rows that have duplicates, including the first item, use "WHERE CNT > 1" (see the sketch after this list).
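A small sketch of that CNT filter, reusing the same CTE (the table and column names are the same assumptions as in the example above):
WITH Duplicates
AS (SELECT ROWID RID,
ROW_NUMBER() OVER (PARTITION BY First_Name, Last_Name, Birth_Date ORDER BY ROWID) AS RN,
SUM(1) OVER (PARTITION BY First_Name, Last_Name, Birth_Date) AS CNT
FROM YourTable
WHERE Load_Date IS NULL)
SELECT RID, RN, CNT
FROM Duplicates
WHERE CNT > 1;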
Solution:
delete from emp where rowid in
(
select rid from
(
select rowid rid,
row_number() over(partition by empno order by empno) rn
from emp
)
where rn > 1
);
create or replace procedure delete_duplicate_enq as
cursor c1 is
select *
from enquiry;
begin
for z in c1 loop
delete enquiry
where enquiry.enquiryno = z.enquiryno
and rowid > any
(select rowid
from enquiry
where enquiry.enquiryno = z.enquiryno);
end loop;
end delete_duplicate_enq;
This is similar to the top answer but gives me a much better explain plan:
delete from your_table
where rowid in (
select max(rowid)
from your_table
group by column1, column2, column3
having count(*) > 1
);
Related
How can I delete duplicate rows where no unique row id exists?
My table is
col1 col2 col3 col4 col5 col6 col7
john 1 1 1 1 1 1
john 1 1 1 1 1 1
sally 2 2 2 2 2 2
sally 2 2 2 2 2 2
I want to be left with the following after the duplicate removal:
john 1 1 1 1 1 1
sally 2 2 2 2 2 2
I've tried a few queries but I think they depend on having a row id as I don't get the desired result. For example:
DELETE
FROM table
WHERE col1 IN (
SELECT id
FROM table
GROUP BY id
HAVING (COUNT(col1) > 1)
)
I like CTEs and ROW_NUMBER as the two combined allow us to see which rows are deleted (or updated), therefore just change the DELETE FROM CTE... to SELECT * FROM CTE:
WITH CTE AS(
SELECT [col1], [col2], [col3], [col4], [col5], [col6], [col7],
RN = ROW_NUMBER()OVER(PARTITION BY col1 ORDER BY col1)
FROM dbo.Table1
)
DELETE FROM CTE WHERE RN > 1
DEMO (result is different; I assume that it's due to a typo on your part)
COL1 COL2 COL3 COL4 COL5 COL6 COL7
john 1 1 1 1 1 1
sally 2 2 2 2 2 2
This example determines duplicates by a single column col1 because of the PARTITION BY col1. If you want to include multiple columns simply add them to the PARTITION BY:
ROW_NUMBER()OVER(PARTITION BY Col1, Col2, ... ORDER BY OrderColumn)
I would prefer a CTE for deleting duplicate rows from a SQL Server table.
I strongly recommend following this article: http://codaffection.com/sql-server-article/delete-duplicate-rows-in-sql-server/
Keeping the original:
WITH CTE AS
(
SELECT *,ROW_NUMBER() OVER (PARTITION BY col1,col2,col3 ORDER BY col1,col2,col3) AS RN
FROM MyTable
)
DELETE FROM CTE WHERE RN<>1
Without keeping the original:
WITH CTE AS
(SELECT *,R=RANK() OVER (ORDER BY col1,col2,col3)
FROM MyTable)
DELETE CTE
WHERE R IN (SELECT R FROM CTE GROUP BY R HAVING COUNT(*)>1)
Without using a CTE and ROW_NUMBER() you can delete the records just by using GROUP BY with the MAX function. Here is an example:
DELETE
FROM MyDuplicateTable
WHERE ID NOT IN
(
SELECT MAX(ID)
FROM MyDuplicateTable
GROUP BY DuplicateColumn1, DuplicateColumn2, DuplicateColumn3)
If you have no references, like foreign keys, you can do this. I do it a lot when testing proofs of concept and the test data gets duplicated.
SELECT DISTINCT [col1],[col2],[col3],[col4],[col5],[col6],[col7]
INTO [newTable]
FROM [oldTable]
Go into the object explorer and delete the old table.
Rename the new table with the old table's name.
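If you prefer to script those two manual steps instead of using Object Explorer, a sketch using the same placeholder table names:
DROP TABLE [oldTable];
EXEC sp_rename 'newTable', 'oldTable';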
Remove all duplicates except the very first ones (with the min ID).
This should work equally well in other SQL databases, like Postgres:
DELETE FROM table
WHERE id NOT IN (
select min(id) from table
group by col1, col2, col3, col4, col5, col6, col7
)
DELETE from search
where id not in (
select min(id) from search
group by url
having count(*)=1
union
SELECT min(id) FROM search
group by url
having count(*) > 1
)
There are two solutions in MySQL:
A) Delete duplicate rows using DELETE JOIN statement
DELETE t1 FROM contacts t1
INNER JOIN contacts t2
WHERE
t1.id < t2.id AND
t1.email = t2.email;
This query references the contacts table twice; therefore, it uses the table aliases t1 and t2.
The output is:
Query OK, 4 rows affected (0.10 sec)
In case you want to delete duplicate rows and keep the lowest id, you can use the following statement:
DELETE c1 FROM contacts c1
INNER JOIN contacts c2
WHERE
c1.id > c2.id AND
c1.email = c2.email;
B) Delete duplicate rows using an intermediate table
The following shows the steps for removing duplicate rows using an intermediate table:
1. Create a new table with the same structure as the original table whose duplicate rows you want to delete.
2. Insert the distinct rows from the original table into the intermediate table.
3. Drop the original table and rename the intermediate table to the original name.
Step 1. Create a new table whose structure is the same as the original table:
CREATE TABLE source_copy LIKE source;
Step 2. Insert distinct rows from the original table to the new table:
INSERT INTO source_copy
SELECT * FROM source
GROUP BY col; -- column that has duplicate values
Step 3. Drop the original table and rename the intermediate table to the original one:
DROP TABLE source;
ALTER TABLE source_copy RENAME TO source;
Source: http://www.mysqltutorial.org/mysql-delete-duplicate-rows/
Please see the below way of deletion too.
Declare @table table
(col1 varchar(10),col2 int,col3 int, col4 int, col5 int, col6 int, col7 int)
Insert into @table values
('john',1,1,1,1,1,1),
('john',1,1,1,1,1,1),
('sally',2,2,2,2,2,2),
('sally',2,2,2,2,2,2)
Created a sample table variable named @table and loaded it with the given data.
Delete aliasName from (
Select *,
ROW_NUMBER() over (Partition by col1,col2,col3,col4,col5,col6,col7 order by col1) as rowNumber
From @table) aliasName
Where rowNumber > 1
Select * from @table
Note: If you include all columns in the PARTITION BY part, then the ORDER BY does not have much significance.
I know the question was asked three years ago, and my answer is another version of what Tim has posted, but I'm posting it in case it is helpful for anyone.
It can be done in many ways in SQL Server;
the simplest way to do it is:
Insert the distinct rows from the table with duplicates into a new temporary table. Then delete all the data from the table with duplicates, and insert all the data from the temporary table (which has no duplicates), as shown below.
select distinct * into #tmp From table
delete from table
insert into table
select * from #tmp
drop table #tmp
select * from table
Delete duplicate rows using a Common Table Expression (CTE)
With CTE_Duplicates as
(select id,name , row_number()
over(partition by id,name order by id,name ) rownumber from table )
delete from CTE_Duplicates where rownumber!=1
To delete the duplicate rows from the table in SQL Server, you follow these steps:
Find duplicate rows using GROUP BY clause or ROW_NUMBER() function.
Use DELETE statement to remove the duplicate rows.
Setting up a sample table
DROP TABLE IF EXISTS contacts;
CREATE TABLE contacts(
contact_id INT IDENTITY(1,1) PRIMARY KEY,
first_name NVARCHAR(100) NOT NULL,
last_name NVARCHAR(100) NOT NULL,
email NVARCHAR(255) NOT NULL
);
Insert values
INSERT INTO contacts
(first_name,last_name,email)
VALUES
('Syed','Abbas','syed.abbas@example.com'),
('Catherine','Abel','catherine.abel@example.com'),
('Kim','Abercrombie','kim.abercrombie@example.com'),
('Kim','Abercrombie','kim.abercrombie@example.com'),
('Kim','Abercrombie','kim.abercrombie@example.com'),
('Hazem','Abolrous','hazem.abolrous@example.com'),
('Hazem','Abolrous','hazem.abolrous@example.com'),
('Humberto','Acevedo','humberto.acevedo@example.com'),
('Humberto','Acevedo','humberto.acevedo@example.com'),
('Pilar','Ackerman','pilar.ackerman@example.com');
Query
SELECT
contact_id,
first_name,
last_name,
email
FROM
contacts;
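The steps above also mention GROUP BY as a way to find the duplicates; a sketch against the same contacts table:
SELECT first_name, last_name, email, COUNT(*) AS cnt
FROM contacts
GROUP BY first_name, last_name, email
HAVING COUNT(*) > 1;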
Delete duplicate rows from a table
WITH cte AS (
SELECT
contact_id,
first_name,
last_name,
email,
ROW_NUMBER() OVER (
PARTITION BY
first_name,
last_name,
email
ORDER BY
first_name,
last_name,
email
) row_num
FROM
contacts
)
DELETE FROM cte
WHERE row_num > 1;
The duplicate records should be deleted now.
Try to Use:
SELECT linkorder
,Row_Number() OVER (
PARTITION BY linkorder ORDER BY linkorder DESC
) AS RowNum
FROM u_links
Microsoft has a very neat guide on how to remove duplicates. Check out http://support.microsoft.com/kb/139444
In brief, here is the easiest way to delete duplicates when you have just a few rows to delete:
SET rowcount 1;
DELETE FROM t1 WHERE myprimarykey=1;
myprimarykey is the identifier for the row.
I set rowcount to 1 because I only had two rows that were duplicated. If I had had 3 rows duplicated then I would have set rowcount to 2 so that it deletes the first two that it sees and only leaves one in table t1.
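A sketch of that three-copies case (remember to reset ROWCOUNT to 0 afterwards, or later statements will also be limited):
SET ROWCOUNT 2;
DELETE FROM t1 WHERE myprimarykey=1;
SET ROWCOUNT 0; -- reset the limit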
with myCTE
as
(
select productName,ROW_NUMBER() over(PARTITION BY productName order by slno) as Duplicate from productDetails
)
Delete from myCTE where Duplicate>1
The suggested solutions above work for small to medium tables.
For very large tables I can suggest this solution, since it runs in iterations.
Drop all dependent views of the LargeSourceTable.
You can find the dependencies by using SQL Server Management Studio: right-click on the table and click "View Dependencies".
Rename the table:
sp_rename 'LargeSourceTable', 'LargeSourceTable_Temp';
GO
Create the LargeSourceTable again, but now add a primary key with all the columns that define the duplications, and add WITH (IGNORE_DUP_KEY = ON).
For example:
CREATE TABLE [dbo].[LargeSourceTable]
(
ID int IDENTITY(1,1),
[CreateDate] DATETIME CONSTRAINT [DF_LargeSourceTable_CreateDate] DEFAULT (getdate()) NOT NULL,
[Column1] CHAR (36) NOT NULL,
[Column2] NVARCHAR (100) NOT NULL,
[Column3] CHAR (36) NOT NULL,
PRIMARY KEY (Column1, Column2) WITH (IGNORE_DUP_KEY = ON)
);
GO
Recreate the views that you dropped in the first place, for the newly created table.
Now, run the following SQL script; you will see the results in pages of 1,000,000 rows. You can change the number of rows per page to see the results more often.
Note that I set IDENTITY_INSERT on and off because one of the columns contains an auto-incremental ID, which I'm also copying.
SET IDENTITY_INSERT LargeSourceTable ON
DECLARE @PageNumber AS INT, @RowspPage AS INT
DECLARE @TotalRows AS INT
declare @dt varchar(19)
SET @PageNumber = 0
SET @RowspPage = 1000000
select @TotalRows = count (*) from LargeSourceTable_TEMP
While ((@PageNumber - 1) * @RowspPage < @TotalRows )
Begin
begin transaction tran_inner
; with cte as
(
SELECT * FROM LargeSourceTable_TEMP ORDER BY ID
OFFSET ((@PageNumber) * @RowspPage) ROWS
FETCH NEXT @RowspPage ROWS ONLY
)
INSERT INTO LargeSourceTable
(
ID
,[CreateDate]
,[Column1]
,[Column2]
,[Column3]
)
select
ID
,[CreateDate]
,[Column1]
,[Column2]
,[Column3]
from cte
commit transaction tran_inner
PRINT 'Page: ' + convert(varchar(10), @PageNumber)
PRINT 'Transfered: ' + convert(varchar(20), @PageNumber * @RowspPage)
PRINT 'Of: ' + convert(varchar(20), @TotalRows)
SELECT @dt = convert(varchar(19), getdate(), 121)
RAISERROR('Inserted on: %s', 0, 1, @dt) WITH NOWAIT
SET @PageNumber = @PageNumber + 1
End
SET IDENTITY_INSERT LargeSourceTable OFF
-- this query will keep only one instance of a duplicate record.
;WITH cte
AS (SELECT ROW_NUMBER() OVER (PARTITION BY col1, col2, col3-- based on what? --can be multiple columns
ORDER BY ( SELECT 0)) RN
FROM Mytable)
delete FROM cte
WHERE RN > 1
You need to group the duplicate records according to the field(s), then keep one of the records and delete the rest.
For example:
DELETE prg.Person WHERE Id IN (
SELECT duplicateRow.Id FROM
(
select MIN(Id) MinId, NationalCode
from prg.Person group by NationalCode having count(NationalCode) > 1
) GroupSelect
JOIN prg.Person duplicateRow ON duplicateRow.NationalCode = GroupSelect.NationalCode
WHERE duplicateRow.Id <> GroupSelect.MinId)
Deleting duplicates from a huge (several millions of records) table might take a long time. I suggest that you do a bulk insert of the selected rows into a temp table rather than deleting.
-- Rewriting your code (take note of the 3rd line)
WITH CTE AS (SELECT NAME, ROW_NUMBER() OVER (PARTITION BY NAME ORDER BY NAME) ID FROM @TB)
SELECT * INTO #unique_records FROM CTE WHERE ID = 1;
This might help in your case
DELETE t1 FROM table t1 INNER JOIN table t2 WHERE t1.id > t2.id AND t1.col1 = t2.col1
With reference to https://support.microsoft.com/en-us/help/139444/how-to-remove-duplicate-rows-from-a-table-in-sql-server
The idea of removing duplicates involves:
a) protecting the rows that are not duplicated, and
b) retaining one of the many rows that together qualify as duplicates.
Step-by-step
1) First identify the rows that satisfy the definition of duplicate and insert them into a temp table, say #tableAll.
2) Select non-duplicate (single) or distinct rows into a temp table, say #tableUnique.
3) Delete from the source table joining #tableAll to delete the duplicates.
4) Insert into the source table all the rows from #tableUnique.
5) Drop #tableAll and #tableUnique.
A sketch of these steps follows.
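A hedged sketch of those five steps; source_table and its columns col1..col7 are placeholders, not names from the KB article:
-- 1) one representative of every duplicated row
SELECT col1, col2, col3, col4, col5, col6, col7
INTO #tableAll
FROM source_table
GROUP BY col1, col2, col3, col4, col5, col6, col7
HAVING COUNT(*) > 1;
-- 2) the distinct rows to keep (already one per group here)
SELECT DISTINCT * INTO #tableUnique FROM #tableAll;
-- 3) remove every copy of the duplicated rows from the source
DELETE s
FROM source_table s
JOIN #tableAll a
ON s.col1 = a.col1 AND s.col2 = a.col2 AND s.col3 = a.col3
AND s.col4 = a.col4 AND s.col5 = a.col5 AND s.col6 = a.col6
AND s.col7 = a.col7;
-- 4) put exactly one copy of each back
INSERT INTO source_table SELECT * FROM #tableUnique;
-- 5) clean up
DROP TABLE #tableAll;
DROP TABLE #tableUnique;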
If you have the ability to add a column to the table temporarily, this was a solution that worked for me:
ALTER TABLE dbo.DUPPEDTABLE ADD RowID INT NOT NULL IDENTITY(1,1)
Then perform a DELETE using a combination of MIN and GROUP BY
DELETE b
FROM dbo.DUPPEDTABLE b
WHERE b.RowID NOT IN (
SELECT MIN(RowID) AS RowID
FROM dbo.DUPPEDTABLE a WITH (NOLOCK)
GROUP BY a.ITEM_NUMBER,
a.CHARACTERISTIC,
a.INTVALUE,
a.FLOATVALUE,
a.STRINGVALUE
);
Verify that the DELETE performed correctly:
SELECT a.ITEM_NUMBER,
a.CHARACTERISTIC,
a.INTVALUE,
a.FLOATVALUE,
a.STRINGVALUE, COUNT(*)--MIN(RowID) AS RowID
FROM dbo.DUPPEDTABLE a WITH (NOLOCK)
GROUP BY a.ITEM_NUMBER,
a.CHARACTERISTIC,
a.INTVALUE,
a.FLOATVALUE,
a.STRINGVALUE
ORDER BY COUNT(*) DESC
The result should have no rows with a count greater than 1. Finally, remove the rowid column:
ALTER TABLE dbo.DUPPEDTABLE DROP COLUMN RowID;
Oh wow, I feel so stupid after reading all these answers; they are like experts' answers with CTEs, temp tables and so on.
And all I did to get it working was simply to aggregate the ID column using MAX.
DELETE FROM table WHERE col1 IN (
SELECT MAX(id) FROM table GROUP BY id HAVING ( COUNT(col1) > 1 )
)
NOTE: you might need to run it multiple times to remove duplicates, as this will only delete one set of duplicate rows at a time.
please simply add the keyword DISTINCT right after the SELECT command,
for example:
SELECT DISTINCT ColumnOne, ColumnTwo, ColumnThree
FROM YourTable
Another way of removing duplicated rows without losing information in one step is the following:
delete t1 from duplicated_table t1
join (
select t2.duplicated_field
, min(len(t2.field_kept)) as min_field_kept
from duplicated_table t2 (nolock)
group by t2.duplicated_field having COUNT(*)>1
) t3
on t1.duplicated_field=t3.duplicated_field
and len(t1.field_kept)=t3.min_field_kept
DECLARE @TB TABLE(NAME VARCHAR(100));
INSERT INTO @TB VALUES ('Red'),('Red'),('Green'),('Blue'),('White'),('White')
--**Delete by Rank**
;WITH CTE AS(SELECT NAME,DENSE_RANK() OVER (PARTITION BY NAME ORDER BY NEWID()) ID FROM @TB)
DELETE FROM CTE WHERE ID>1
SELECT NAME FROM @TB;
--**Delete by Row Number**
;WITH CTE AS(SELECT NAME,ROW_NUMBER() OVER (PARTITION BY NAME ORDER BY NAME) ID FROM @TB)
DELETE FROM CTE WHERE ID>1;
SELECT NAME FROM @TB;
DELETE FROM TBL1 WHERE ID IN
(SELECT ID FROM TBL1 a WHERE ID!=
(select MAX(ID) from TBL1 where DUPVAL=a.DUPVAL
group by DUPVAL
having count(DUPVAL)>1))
DELETE p1 FROM Person p1,
Person p2
WHERE
p1.Email = p2.Email AND p1.Id > p2.Id
I have a large table (130 columns). It is a monthly dataset that is separated by month (jan, feb, mar, ...). Every month I get a small set of duplicate rows. I would like to remove one of the rows; it does not matter which row is deleted.
This query seems to work ok when I only select the ID that I want to filter the dups on, but when I select everything "*" from the table I end up with all of the rows, dups included. My goal is to filter out the dups and insert the result set into a new table.
SELECT DISTINCT a.[ID]
FROM MonthlyLoan a
JOIN (SELECT COUNT(*) as Count, b.[ID]
FROM MonthlyLoan b
GROUP BY b.[ID])
AS b ON a.[ID] = b.[ID]
WHERE b.Count > 1
and effectiveDate = '01/31/2017'
Any help will be appreciated.
This will show you all duplicates per ID:
;WITH Duplicates AS
(
SELECT ID,
rn = ROW_NUMBER() OVER (PARTITION BY ID ORDER BY ID)
FROM MonthlyLoan
)
SELECT ID,
rn
FROM Duplicates
WHERE rn > 1
Alternatively, you can filter on rn = 2 to find the first duplicate of each ID.
Since your ID is duplicated, all you need is to use the HAVING clause in your aggregate.
See the below example.
declare @tableA as table
(
ID int not null
)
insert into @tableA
values
(1),(2),(2),(3),(3),(3),(4),(5)
select ID, COUNT(*) as [Count]
from @tableA
group by ID
having COUNT(*) > 1
Result:
ID Count
----------- -----------
2 2
3 3
To insert the result into a #Temporary Table:
select ID, COUNT(*) as [Count]
into #temp
from @tableA
group by ID
having COUNT(*) > 1
select * from #temp
I need to remove some duplicate entries from an intersection table.
The table is incredibly badly set up, without primary keys, so I'm having some trouble removing entries which are duplicates.
Here's just a rough overview of the table:
col1 col2
------------
1 70
1 70
1 71
Both columns carry IDs, and duplicates break stuff.
You can use ranking functions:
with cte as
(
select row_number() over(partition by col1,col2 order by col1,col2 )as rowNum
from tableName
)
delete from cte where rowNum>1
with t1dups (col1, coldups)
AS (
select col2, ROW_NUMBER() Over (Partition by col1, col2 order by col2) as dups from t1 )
delete from t1dups where coldups > 1
drop table #t
create table #t(col1 int,col2 int)
insert into #t values(1,70),(1,70),(2,71)
;with cte as
(
select [col1],[col2],rn=row_number() over(partition by col1 order by col2) from #t
)
delete from cte where rn>1
select * from #t