What does this query do? - sql-server

I am not sure how to describe, in plain language, what this query will eventually do in SQL Server.
The idea is that if an order with amount X exists in both tables #Temp and #TempDuplPos, it should be deleted from the table #Temp.
DELETE #Temp
FROM #Temp
INNER JOIN #TempDuplPos ON (#Temp.[OrderNumber] = #TempDuplPos.[OrderNumber])
AND (#Temp.[Amount] = #TempDuplPos.[Amount])
What I did to test it is the following:
SELECT 1 AS OrderNumber, 10 AS Amount
INTO #Temp
SELECT 1 AS OrderNumber, 10 AS Amount
INTO #TempDuplPos
INSERT INTO #Temp
VALUES (2,20)
INSERT INTO #TempDuplPos
VALUES (3,30)
DELETE #Temp
FROM #Temp
INNER JOIN #TempDuplPos ON (#Temp.[OrderNumber] = #TempDuplPos.[OrderNumber])
AND (#Temp.[Amount] = #TempDuplPos.[Amount])
SELECT *
FROM #Temp
SELECT *
FROM #TempDuplPos
DROP TABLE #Temp
DROP TABLE #TempDuplPos
It looks like it does the job, but I am not sure whether I am missing something that will hit me on a large data set. So my question is: is this query doing what I want? If not, what does it do? Thanks!

It deletes records from the #Temp table for which a record with the same OrderNumber and Amount exists in #TempDuplPos.
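For what it's worth, the same delete can also be written with EXISTS, which reads closer to that description and behaves identically even when #TempDuplPos holds several matching rows for one #Temp row (each #Temp row is still deleted only once):
DELETE t
FROM #Temp t
WHERE EXISTS (
    SELECT 1
    FROM #TempDuplPos d
    WHERE d.[OrderNumber] = t.[OrderNumber]
      AND d.[Amount] = t.[Amount]
);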
If you want to delete only records with a specific amount, you need to add a WHERE clause:
DELETE #Temp
FROM #Temp
INNER JOIN #TempDuplPos
ON (#Temp.[OrderNumber] = #TempDuplPos.[OrderNumber])
AND (#Temp.[Amount] = #TempDuplPos.[Amount])
WHERE #Temp.[Amount] = @Amount
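Here @Amount is a local variable holding the amount you want to match; the data type below is only an assumption, so match it to your Amount column:
DECLARE @Amount int = 10;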

Related

How to set a limit on table rows in SQL?

How do I set a limit on table rows in SQL Server?
I want to limit my table to 100 rows only.
So when the table has more than 100 rows, I want to delete the first row and then add the new row as the last row (row 100).
How can I do this?
One thing that I can assure you of:
create a trigger so that, whenever the row count goes above 100, the first (oldest) record is deleted.
See here as your guide.
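A minimal sketch of such a trigger, assuming a target table dbo.MyTable with an identity column Id that reflects insert order (both names are placeholders):
CREATE TRIGGER trg_MyTable_LimitTo100
ON dbo.MyTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Delete the oldest rows whenever the table holds more than 100 rows
    DELETE FROM dbo.MyTable
    WHERE Id NOT IN (SELECT TOP (100) Id FROM dbo.MyTable ORDER BY Id DESC);
END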
I think you have to do two things:
i) Create a trigger
declare @MaxRowLimit int = 5
declare @t table(col1 int)
insert into @t values(1),(2),(3),(4),(5)
insert into @t values(12)
-- CTE holds the @MaxRowLimit newest rows (highest col1 values) to keep
;with CTE as
(
    select top (@MaxRowLimit) col1
    from @t t1
    order by t1.col1 desc
)
-- CTE1 holds every row that is not in the keep-set
,CTE1 as
(
    select * from @t t
    where not exists
    (
        select col1
        from CTE t1 where t.col1 = t1.col1
    )
)
-- delete the surplus rows, then show what is left
delete from CTE1
select * from @t
ii) If it is a bulk insert, then you have to do the manipulation before the bulk insert:
for example, if the bulk-insert row count is greater than 100, sort the rows, keep the last 100, and discard the rest, as in the sketch below.
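A rough sketch of that pre-trim, assuming the bulk-loaded rows sit in a staging table #Staging with a column CreatedAt that defines their order, and a target table dbo.MyTable (all table and column names are placeholders):
-- Rank the staged rows newest-first and discard everything beyond the newest 100
;WITH Ranked AS (
    SELECT *, ROW_NUMBER() OVER (ORDER BY CreatedAt DESC) AS RowNr
    FROM #Staging
)
DELETE FROM Ranked WHERE RowNr > 100;

-- Only the 100 newest rows remain to be loaded into the target table
INSERT INTO dbo.MyTable (Col1, Col2, CreatedAt)
SELECT Col1, Col2, CreatedAt
FROM #Staging;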

How can I delete the duplicate rows while keeping the original record?

I have one table, shown in the picture below, which contains some duplicated rows. I can find the duplicated rows, but I am not able to delete them because there is no unique ID I can use to distinguish them. There are lots of duplicated rows like that in the same table; I only took a screenshot of a piece of it.
As a result, according to the picture below, how can I delete the duplicated rows but keep the originals?
One solution you could consider is copying all unique records into a temporary table, thus removing the duplicates. You could then truncate the original table and re-populate it from the temporary table you've created. The code would be something like this:
SELECT DISTINCT * INTO #tempTable FROM MyTable
TRUNCATE TABLE MyTable;
INSERT INTO MyTable (LocationID, UnitID, CameraID ... IsActiveHours)
SELECT LocationID, UnitID, CameraID ... IsActiveHours FROM #tempTable;
This isn't always an option due to key constraints and the amount of data, but it is useful in certain cases. Take it as you may.
You could use a CTE and Row_Number() to accomplish this. If you are satisfied with the results, remove the final select and un-comment the delete statement:
;with cte as (
Select *,RowNr=Row_Number() over (Partition By LocationId Order by Date_T)
From YourTable
)
Select * from cte Where RowNr>1
-- Delete From cte Where RowNr>1
You would be best off adding an identity column to make things easier (a sketch of that approach follows the loop below); however, it can be done without one, and without a TRUNCATE, using the following:
--GET DUPLICATE ROWS INTO A TEMP TABLE (YOU MAY NOT NEED TO USE ALL THE COLUMNS TO IDENTIFY A DUPLICATE)
SELECT ROW_NUMBER() OVER (ORDER BY ColA) AS RowNo, ColA, ColB, ColC, COUNT(*) As [Count]
INTO #TEMP1
FROM test
GROUP BY ColA, ColB, ColC
HAVING COUNT(*) > 1
--LOOP THROUGH DUPLICATES
DECLARE #RowNo INT
DECLARE #Duplicates INT
SET #RowNo = 1
WHILE EXISTS(SELECT * FROM #TEMP1)
BEGIN
--GET A COUNT OF ADDITIONAL ROWS FOR THIS DUPLICATE
SET #Duplicates = (SELECT [Count] FROM #TEMP1 WHERE RowNo = #RowNo) - 1
--DELETE THE ROWS WE DONT NEED
DELETE TOP (#Duplicates) t1
FROM test t1
JOIN #TEMP1 t2 ON t1.ColA = t2.ColA AND t1.ColB = t2.ColB AND t1.ColC = t2.ColC
WHERE t2.RowNo = #RowNo
--REMOVE THE ROW FROM THE TEMP TABLE
DELETE FROM #TEMP1 WHERE RowNo = #RowNo
--INCREASE THE ROW NO TO MOVE TO THE NEXT ROW
SET #RowNo = #RowNo + 1
END
--DROP THE TEMP TABLE
DROP TABLE #TEMP1
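For reference, a minimal sketch of the identity-column approach mentioned above, assuming the same table test and that ColA, ColB, ColC together define a duplicate (all names are placeholders). The GO separators matter: the new column must exist before the statements that reference it are compiled.
-- Add a temporary identity column so each row can be told apart
ALTER TABLE test ADD RowId int IDENTITY(1,1)
GO
-- Delete every row that has a duplicate with a smaller RowId (keeps the first occurrence)
DELETE t
FROM test t
WHERE EXISTS (
    SELECT 1
    FROM test d
    WHERE d.ColA = t.ColA AND d.ColB = t.ColB AND d.ColC = t.ColC
      AND d.RowId < t.RowId
)
GO
-- Remove the helper column again
ALTER TABLE test DROP COLUMN RowId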
This is the query that fixes this issue.
WITH X AS (
SELECT ROW_NUMBER() OVER (PARTITION BY LocationId, date_T ORDER BY LocationId DESC) AS rownum,
       LocationId, date_T AS T
FROM Counts
)
--SELECT * FROM X WHERE rownum >1
DELETE FROM X
WHERE rownum <> 1

PATINDEX all values of a column

I'm writing a query that will delete all rows from table1 whose table1.id matches table2.id.
The table1.id column is nvarchar(max) holding XML in a format like this:
<customer><name>Paulo</name><gender>Male</gender><id>12345</id></customer>
EDIT:
The id column is just a part of a huge XML document, so the ending tag may not match the starting tag.
I've tried using name.nodes, but it only applies to xml columns, and changing the column data type is not an option. So far this is my code using PATINDEX:
DELETE t1
FROM table1 t1
WHERE PATINDEX('%12345%',id) != 0
But what I need is to search for all the values from table2.id, which contains values like these:
12345
67890
10000
20000
30000
Any approach would be fine, such as sp_executesql and/or a WHILE loop. Or is there a better approach than using PATINDEX? Thanks!
Select *
--Delete A
From Table1 A
Join Table2 B on CharIndex('id>'+SomeField+'<',ID)>0
I don't know the name of the field in Table2. I am also assuming it is a varchar; if not, use cast(SomeField as varchar(25)).
EDIT - This is what I tested. It should work:
Declare #Table1 table (id varchar(max))
Insert Into #Table1 values
('<customer><name>Paulo</name><gender>Male</gender><id>12345</id></customer>'),
('<customer><name>Jane</name><gender>Female</gender><id>7895</id></customer>')
Declare #Table2 table (SomeField varchar(25))
Insert into #Table2 values
('12345'),
('67890'),
('10000'),
('20000'),
('30000')
Select *
--Delete A
From #Table1 A
Join #Table2 B on CharIndex('id>'+SomeField+'<',ID)>0
;with cteBase as (
Select *,XMLData=cast(id as xml) From Table1
)
Select *
From cteBase
Where XMLData.value('(customer/id)[1]','int') in (12345,67890,10000,20000,30000)
If you are satisfied with the results, change the final Select * to Delete
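Once you are happy with what the Select returns, that final statement would become (same CTE, deleting the matching rows from Table1):
;with cteBase as (
Select *,XMLData=cast(id as xml) From Table1
)
Delete From cteBase
Where XMLData.value('(customer/id)[1]','int') in (12345,67890,10000,20000,30000)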

Insert trigger for a specific column

I have two tables. When I insert a new value into a specific column, I want to update another column in the second table. How can I do that?
Here is a simple example, but it gives the "Incorrect syntax near the keyword 'Insert'." error, as expected.
Create trigger trigger_Insert_Months
on [Quantities]
after Insert
As
if Insert([Work Name])
begin
declare #NewWorkName varchar(200)
select #NewWorkName = [Work Name] from inserted
insert into [April]([Work Name])
values (#NewWorkName)
End
Try This:
CREATE TRIGGER trigger_Insert_Months
ON [Quantities]
AFTER INSERT
AS
BEGIN
INSERT INTO [April]([Work Name])
SELECT [Work Name] from inserted
WHERE NOT EXISTS (SELECT 1 FROM [Quantities] WHERE [Quantities].[Work Name] = INSERTED.[Work Name] AND INSERTED.PrimaryKey != [Quantities].[PrimaryKey])
End
Correct me if I am wrong: you want to insert values into table1 and update values in table2 with the inserted values.
create trigger tr1 on Table1
for insert
as
begin
if exists (select 1 from inserted)
begin
update a
set a.col1 = b.col
from table2 as a
inner join (select * from inserted) as b
on a.id = b.id
end
end
This trigger fires when an insert happens on Table1 and updates col1 of table2 with the values from the inserted rows.
Replace the id column with the primary key column shared by table1 and table2, and col1 with the column to be updated in table2.

T-SQL: Two Level Aggregation in Same Query

I have a query that joins a master and a detail table. Master table records are duplicated in the results, as expected. I aggregate on the detail table and it works fine. But I also need another aggregation on the master table at the same time, and because the master rows are duplicated, that aggregation's result is inflated too.
I demonstrate this situation below:
If Object_Id('tempdb..#data') Is Not Null Drop Table #data
Create Table #data (Id int, GroupId int, Value int)
If Object_Id('tempdb..#groups') Is Not Null Drop Table #groups
Create Table #groups (Id int, Value int)
/* insert groups */
Insert #groups (Id, Value)
Values (1,100), (2,200), (3, 200)
/* insert data */
Insert #data (Id, GroupId, Value)
Values (1,1,10),
(2,1,20),
(3,2,50),
(4,2,60),
(5,2,70),
(6,3,90)
My select query is
Select Sum(data.Value) As Data_Value,
Sum(groups.Value) As Group_Value
From #data data
Inner Join #groups groups On groups.Id = data.GroupId
The result is:
Data_Value Group_Value
300 1000
The expected result is:
Data_Value Group_Value
300 500
Please note that a derived table or sub-query is not an option. Also, Sum(Distinct groups.Value) is not suitable for my case.
If I am not wrong, you just want to sum the Value column of both tables and show the results in a single row. In that case you don't need to join them; just select each sum as a column, like this:
SELECT (SELECT SUM(VALUE) AS Data_Value FROM #DATA),
(SELECT SUM(VALUE) AS Group_Value FROM #groups)
SELECT
(
Select Sum(d.Value) From #data d
WHERE EXISTS (SELECT 1 FROM #groups WHERE Id = d.GroupId )
) AS Data_Value
,(
SELECT Sum( g.Value) FROM #groups g
WHERE EXISTS (SELECT 1 FROM #data WHERE GroupId = g.Id)
) AS Group_Value
I'm not sure what you are looking for, but it seems like you want the group's value alongside the aggregated value that represents that group in the data table.
In that case I would suggest something like this:
select Sum(t.Data_Value) as Data_Value, Sum(t.Group_Value) as Group_Value
from
(select Sum(data.Value) As Data_Value, groups.Value As Group_Value
from #data data
inner join #groups groups on groups.Id = data.GroupId
group by groups.Id, groups.Value)
as t
The edit should do the trick for you.
