How to use 'Merge' to combine rows into a single one - sql-server

Scenario:
I have a simplified version of a result set obtained from a series of complex joins. I have placed the result set in a temporary table. The result set consists of records of activity/activities in a day.
I need to join the rows that share the same date (merging a day's activities into a single row) so that the resulting result set would be
I am trying to make this work
Merge #temp as target
using #temp as source
on (target.Date = source.Date) and target.Writing is NULL
when matched then
update set target.Writing = source.Writing;
but I'm running into this error:
The MERGE statement attempted to UPDATE or DELETE the same row more
than once. This happens when a target row matches more than one source
row. A MERGE statement cannot UPDATE/DELETE the same row of the target
table multiple times. Refine the ON clause to ensure a target row
matches at most one source row, or use the GROUP BY clause to group
the source rows.
What code modifications can you suggest?
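One way to follow the error's suggestion of grouping the source rows would be something like the sketch below (untested; it assumes the Date/Writing columns from the MERGE above and collapses the source to one row per date):
MERGE #temp AS target
USING (
SELECT [Date], MAX(Writing) AS Writing
FROM #temp
WHERE Writing IS NOT NULL
GROUP BY [Date]
) AS source
ON target.[Date] = source.[Date] AND target.Writing IS NULL
WHEN MATCHED THEN
UPDATE SET target.Writing = source.Writing;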

This should do it:
SELECT dfl.mydate, dfl.firststart, dfl.lastend, fa.ActivityA, sa.ActivityB
FROM
(SELECT s.mydate, firststart, lastend FROM
(SELECT mydate, MIN(starttime) AS firststart FROM target GROUP BY mydate) s
INNER JOIN
(SELECT mydate, MAX(EndTime) AS lastend FROM target GROUP BY mydate) e
ON s.mydate = e.mydate) AS dfl
INNER JOIN
target fa on dfl.mydate = fa.mydate and dfl.firststart = fa.starttime
INNER JOIN
target sa on dfl.mydate = sa.mydate and dfl.lastend = sa.EndTime
Please note for my test I have called my table target and the columns: mydate, starttime, endtime, activitya and activityb.
No need to merge, a (relatively) simple select yields the results you want.
HTH
PS: When working with time data it helps to use a 24-hour clock. I have assumed that by 5:00 you really meant 17:00.

You don't need a MERGE statement.
DECLARE @Test TABLE ([Id] int, [Date] nvarchar(10), [TimeIn] nvarchar(10), [TimeOut] nvarchar(10), [Reading] nvarchar(10), [Writing] nvarchar(10))
INSERT INTO @Test
VALUES
(1,'08-01','8:00','5:00','Y',NULL),
(2,'08-02','8:00','5:00',NULL,'Y'),
(3,'08-02','5:00','12:00',NULL,'Y'),
(4,'08-03','8:00','5:00',NULL,'Y'),
(5,'08-04','1:00','5:00','Y',NULL),
(6,'08-04','5:00','7:00',NULL,'Y'),
(7,'08-04','7:00','10:00',NULL,'Y'),
(8,'08-04','10:00','13:00',NULL,'Y'),
(9,'08-05','8:00','5:00','Y',NULL)
;WITH CTE AS
(
SELECT
t1.[Date],
t1.TimeIn,
ISNULL(t2.TimeOut, t1.TimeOut) AS TimeOut,
ROW_NUMBER() OVER (PARTITION BY t1.[Date] ORDER BY t1.Id) AS RowNumber
FROM @Test AS t1
LEFT OUTER JOIN @Test AS t2 ON t1.TimeOut = t2.TimeIn AND t1.[Date] = t2.[Date]
)
SELECT
c.[Date],
(SELECT c2.TimeIn FROM CTE AS c2 WHERE c2.[Date] = c.[Date] AND c2.RowNumber = MIN(c.RowNumber)) AS TimeIn,
(SELECT c2.TimeOut FROM CTE AS c2 WHERE c2.[Date] = c.[Date] AND c2.RowNumber = MAX(c.RowNumber)) AS TimeOut
FROM CTE AS c
GROUP BY c.[Date]

You can use MERGE statements on tables where you have an identifying column. Identify the one or more columns that can be used to uniquely identify the row to be merged, and match on those in the ON clause.
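For example, a sketch with hypothetical table and column names, where Id uniquely identifies each target row so that every target row matches at most one source row:
MERGE dbo.TargetTable AS t
USING dbo.SourceTable AS s
ON t.Id = s.Id
WHEN MATCHED THEN
UPDATE SET t.SomeColumn = s.SomeColumn
WHEN NOT MATCHED BY TARGET THEN
INSERT (Id, SomeColumn) VALUES (s.Id, s.SomeColumn);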

Related

How to optimize the insert query from multiple tables?

I have 2 tables, Table 1 (temp table in SP) has around 400 records. Table 2 has around 30,550,284 records.
I need to run a loop on table 1 for each record and get the top 1 from table 2 based on a few conditions (where clause) and then order by modified date in decreasing order.
There is an index on the modified date.
declare @iPos int;
declare @iCount int;
select @iCount = count(*) from Table1;
set @iPos = 1;
declare @Table2 table(......)
declare @timestampLocal2 datetime
while (@iPos <= @iCount)
BEGIN
select @val1 = Col1, @timestampLocal = TimeStamp
from #Table1 where ID = @iPos
set @timestampLocal2 = DATEADD(HH,-96,@timestampLocal)
INSERT INTO #Temp3 ( .... )
select top 1 r.LastModified, r.[Col2], r.Col3, @iPos
from Table2 (NOLOCK) r
where Col1 = @val1 and
r.LastModified <= @timestampLocal
and r.LastModified >= @timestampLocal2
and (r.Col2 is not null and r.Col3 is not null)
order by LastModified desc
SELECT @iPos = @iPos + 1;
END
This query is very slow.
I have also thought to archive table 2, But I want to keep that as the second option for now.
Do I really need to add an index on the columns which are involved in the where clause?
So my question is, in terms of performance is there a better way to do this?
I believe a CROSS APPLY or OUTER APPLY may do the trick. These can be thought of as being similar to INNER JOIN or LEFT JOIN, except that they allow you to reference a subquery having more complex conditions such as TOP 1 and ORDER BY. Ideal for cases like this.
-- INSERT INTO #Temp3 ( .... )
select r.LastModified, r.[Col2], r.Col3, t1.ID
from #Table1 t1
cross apply (
SELECT TOP 1 r.*
from Table2 r -- Don't use (NOLOCK)
where r.Col1 = t.Col1
and r.LastModified <= t1.[TimeStamp]
and r.LastModified >= DATEADD(HH,-96,t1.[TimeStamp])
and (r.Col2 is not null and r.Col3 is not null)
order by r.LastModified desc
) r
For efficiency, I recommend an index on Table2(Col1,LastModified) or as an absolute minimum, an index on Table2(Col1).
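For instance (a sketch; the index name is a placeholder, and the INCLUDE list would hold the other columns the query selects):
CREATE NONCLUSTERED INDEX IX_Table2_Col1_LastModified
ON Table2 (Col1, LastModified)
INCLUDE (Col2, Col3);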
I would strongly discourage the use of (NOLOCK) or READ UNCOMMITTED in queries that update the database (like the INSERT INTO #Temp3 above). While the query may appear to work most of the time, seemingly random occurrences of missing or duplicate rows may result.
Do you need to handle cases where no matching Table2 record is found? The above will quietly ignore such cases. Changing the CROSS APPLY to an OUTER APPLY together with logic to handle null r.xxx values could be what you need.
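A sketch of that OUTER APPLY variant (untested, same placeholder names as above; the ISNULL fallback value is only illustrative):
select t1.ID,
r.LastModified, -- NULL when no Table2 row qualifies
isnull(r.Col2, 'n/a') as Col2,
r.Col3
from #Table1 t1
outer apply (
select top 1 r2.LastModified, r2.Col2, r2.Col3
from Table2 r2
where r2.Col1 = t1.Col1
and r2.LastModified <= t1.[TimeStamp]
and r2.LastModified >= DATEADD(HH, -96, t1.[TimeStamp])
and r2.Col2 is not null and r2.Col3 is not null
order by r2.LastModified desc
) r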

TSQL subquery filter performance problem

I'm working on this query. When the date filter in the subquery references the main query, the response time increases from 1 second to 4.5 minutes.
I don't know how to solve this problem and fix my query. Below are the query and the approaches I've tried.
Thank you for your help.
My query:
select
START_DATE as DATE,
[MINUTE] as MIN,
map1.LT,
ISNULL((SELECT
(SELECT CAST((main.MIN) AS FLOAT)) /
(
(nullif(
(select cast(
(select
sum(MIN2)
from fooTable2 d2
CROSS APPLY (select Top(1) LT from FooMap2 where x = d2.x) k2
where k2.LT = map1.LT
-- PROBLEM CODE START
and YEAR(d2.DATE) = YEAR(main.DATE) and MONTH(d2.DATE) = MONTH(main.DATE)
-- PROBLEM CODE END
) as float)),0))
as XX,
.......
......
from Table1 main
OUTER APPLY (select Top(1) LT from FooMap where x = main.x) map1
I tried creating a virtual table first, but it is not working:
declare @child table ([Year] smallint, [Month] smallint, [Total] float,[LTCode] nvarchar(20))
insert into @child ([LTCode],[Year],[Month],[Total])
(select
k2.LT,YEAR(d2.DATE) as YIL,MONTH(d2.DATE) as AY,sum(MIN) as SURE
from DURUS d2
CROSS APPLY (select Top(1) LT from FooMap2 where x = d2.x) k2
group by k2.LT,YEAR(d2.DATE),MONTH(d2.DATE))
...
....
(select [Total] from @child where [YEAR] = YEAR(main.DATE) and [MONTH] = MONTH(main.DATE) and [LTCode] = map1.LT)
What should I do?
The root problem is the data model. You need to filter on month and year but store your data as a DATE, DATETIME or similar. There is no easy way to make this fast:
and YEAR(d2.DATE) = YEAR(main.DATE)
and MONTH(d2.DATE) = MONTH(main.DATE)
WHERE FUNCTION(Input) = FUNCTION(Input) forces a scan against each table; having two such filters means you are touching/evaluating each value (d2.DATE and main.DATE) twice for each row in each table. To fix this, your best options include:
Adding a persisted computed column for year and month on each table, then adding the appropriate index (on year and month, with all other columns involved in your query added as INCLUDE columns); see the sketch after this list.
Use an indexed view to pre-join Durus and main, not simple but doable.
Learn how to create and utilize a correctly indexed calendar table. This will require some effort but will also change your career.
Work other filters on the Left side of your joins...
For example: add a WHERE clause after from fooTable2 d2 to filter out any additional rows before the join.
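A sketch of the computed-column option (table, column, and index names are assumed from the query above; Table1/main would get the same treatment):
ALTER TABLE fooTable2 ADD DateYear AS YEAR([DATE]) PERSISTED;
ALTER TABLE fooTable2 ADD DateMonth AS MONTH([DATE]) PERSISTED;
CREATE NONCLUSTERED INDEX IX_fooTable2_Year_Month
ON fooTable2 (DateYear, DateMonth)
INCLUDE (x, MIN2);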
Common ways to optimize:
1) remove calculations from the left-hand side of filters (a sargable rewrite is sketched below)
2) materialize subqueries into separate temporary tables or intermediate CTEs
3) do not use table variables
4) in the early stages, filter out as much data as possible before the joins
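For point 1, the two YEAR()/MONTH() predicates from the question could be replaced with a half-open date range so that an index on d2.DATE can seek (a sketch; DATEFROMPARTS needs SQL Server 2012 or later):
and d2.[DATE] >= DATEFROMPARTS(YEAR(main.[DATE]), MONTH(main.[DATE]), 1)
and d2.[DATE] < DATEADD(MONTH, 1, DATEFROMPARTS(YEAR(main.[DATE]), MONTH(main.[DATE]), 1))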

ROW_NUMBER in cross apply generating "incorrect" values based on exists clause

Here is the sql:
-- Schema
DECLARE @ModelItem TABLE (
ModelItemId UNIQUEIDENTIFIER,
MetamodelItemId UNIQUEIDENTIFIER
)
DECLARE @MetamodelItemAncestor TABLE (
MetamodelItemId UNIQUEIDENTIFIER,
ParentMetamodelItemId UNIQUEIDENTIFIER,
AncestorLevel INT
)
DECLARE @SolutionMetamodelItem TABLE (
MetamodelItemId UNIQUEIDENTIFIER,
SolutionId UNIQUEIDENTIFIER
)
INSERT INTO @ModelItem VALUES ('EC6AC6A9-684E-E611-8117-00155D026308', '2AB1F075-684E-E611-8117-00155D026308')
INSERT INTO @MetamodelItemAncestor
VALUES ('2AB1F075-684E-E611-8117-00155D026308', '2AB1F075-684E-E611-8117-00155D026308', 0),
('2AB1F075-684E-E611-8117-00155D026308', 'AA12E380-CA4D-E611-8117-00155D026308', 1)
INSERT INTO @SolutionMetamodelItem
VALUES ('2AB1F075-684E-E611-8117-00155D026308', 'f612a333-ca4d-e611-8117-00155d026308'),
('AA12E380-CA4D-E611-8117-00155D026308', 'fc160f3e-ca4d-e611-8117-00155d026308')
-- query
DECLARE @ModelItemId TABLE (EntityId UNIQUEIDENTIFIER)
DECLARE @SolutionId TABLE (EntityId UNIQUEIDENTIFIER)
INSERT INTO @ModelItemId
VALUES ('EC6AC6A9-684E-E611-8117-00155D026308')
INSERT INTO @SolutionId
VALUES ('f612a333-ca4d-e611-8117-00155d026308'), ('fc160f3e-ca4d-e611-8117-00155d026308')
SELECT mia.*
FROM (
SELECT M.EntityId AS ModelItemId, S.EntityId AS SolutionId
FROM @ModelItemId AS M
CROSS JOIN @SolutionId AS S
) AS m
CROSS APPLY (
SELECT
MI.ModelItemId,
OTA.ParentMetamodelItemId AS [MetamodelItemId],
ROW_NUMBER() OVER (PARTITION BY [MI].[ModelItemId] ORDER BY [OTA].[AncestorLevel] ASC) AS [AspectRank]
FROM @ModelItem AS MI
INNER JOIN @MetamodelItemAncestor AS OTA
ON MI.MetamodelItemId = OTA.MetamodelItemId
WHERE
MI.ModelItemId = m.ModelItemId
AND EXISTS (
SELECT 1
FROM @SolutionMetamodelItem AS MSMI
WHERE MSMI.MetamodelItemId = OTA.ParentMetamodelItemId
AND MSMI.SolutionId = m.SolutionId
)
) mia
SELECT mia.*
FROM @ModelItemId AS m
CROSS APPLY (
SELECT
MI.ModelItemId,
OTA.ParentMetamodelItemId AS [MetamodelItemId],
ROW_NUMBER() OVER (PARTITION BY [MI].[ModelItemId] ORDER BY [OTA].[AncestorLevel] ASC) AS [AspectRank]
FROM @ModelItem AS MI
INNER JOIN @MetamodelItemAncestor AS OTA
ON MI.MetamodelItemId = OTA.MetamodelItemId
WHERE
MI.ModelItemId = m.EntityId
AND EXISTS (
SELECT 1
FROM @SolutionMetamodelItem MSMI
WHERE MSMI.MetamodelItemId = OTA.ParentMetamodelItemId
AND MSMI.SolutionId IN (SELECT s.EntityId FROM @SolutionId AS s)
)
) mia
Notice the AspectRank. In the second query it has correctly increased the value sequentially based on the partition.
Looking at the execution plan for the first query, it seems like the ROW_NUMBER (sequence project) runs concurrently with the scan of the @SolutionId table, but I am still not fully sure why it has not increased the row number value, since there are duplicate items.
Could someone explain this? I need to use the first approach because the cross apply query is in fact a UDF with the ModelItemId and SolutionId as parameters.
I would assume the CROSS APPLY is executed separately for each row in your outer query, so each of the rows it returns is the 1st (and only) row of its partition.
Why do you need to have the row number inside the cross apply, instead of being in the outer query, if that's actually where your data is?
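To illustrate that suggestion, a sketch (untested) that keeps the APPLY but computes the rank in the outer query, reusing the table variables from the question:
SELECT
mia.ModelItemId,
mia.MetamodelItemId,
ROW_NUMBER() OVER (PARTITION BY mia.ModelItemId ORDER BY mia.AncestorLevel ASC) AS AspectRank
FROM (
SELECT M.EntityId AS ModelItemId, S.EntityId AS SolutionId
FROM @ModelItemId AS M
CROSS JOIN @SolutionId AS S
) AS m
CROSS APPLY (
SELECT
MI.ModelItemId,
OTA.ParentMetamodelItemId AS MetamodelItemId,
OTA.AncestorLevel
FROM @ModelItem AS MI
INNER JOIN @MetamodelItemAncestor AS OTA
ON MI.MetamodelItemId = OTA.MetamodelItemId
WHERE MI.ModelItemId = m.ModelItemId
AND EXISTS (
SELECT 1
FROM @SolutionMetamodelItem AS MSMI
WHERE MSMI.MetamodelItemId = OTA.ParentMetamodelItemId
AND MSMI.SolutionId = m.SolutionId
)
) AS mia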

SQL Select set of records from one table, join each record to top 1 record of second table matching 1 column, sorted by a column in the second table

This is my first question on here, so I apologize if I break any rules.
Here's the situation. I have a table that lists all the employees and the building to which they are assigned, plus training hours, with ssn as the id column. I have another table that lists all the employees in the company, also keyed by ssn, but including name and other personal data. The second table contains multiple records for each employee, from different points in time. What I need to do is select all the records in the first table for a certain building, then get the most recent name from the second table, and allow the result set to be sorted by any of the columns returned.
I have this in place, and it works fine, it is just very slow.
A very simplified version of the tables are:
table1 (ssn CHAR(9), buildingNumber CHAR(7), trainingHours DEC(5,2)) (7200 rows)
table2 (ssn CHAR(9), fName VARCHAR(20), lName VARCHAR(20), sequence INT) (708,000 rows)
The sequence column in table 2 is a number that corresponds to a predetermined entry date for these records; the higher the number, the more recent the entry. It is common/expected that each employee has several records, but some may not have the most recent one (i.e. '8').
My SProc is:
@BuildingNumber CHAR(7), @SortField VARCHAR(25)
BEGIN
DECLARE @returnValue TABLE(ssn CHAR(9), buildingNumber CHAR(7), fname VARCHAR(20), lName VARCHAR(20), rowNumber INT)
INSERT INTO @returnValue(...)
SELECT ssn, buildingNum, fname, lname, rowNum
FROM (SELECT ..., ROW_NUMBER() OVER (PARTITION BY buildingNumber ORDER BY CASE @SortField ... {sortField column} END) AS RowNumber
FROM table1 a
OUTER APPLY (SELECT TOP 1 fName, lName FROM table2 WHERE ssn = a.ssn ORDER BY sequence DESC) AS e
WHERE buildingNumber = @BuildingNumber) x
SELECT * FROM @returnValue ORDER BY RowNumber
END
I have indexes for the following:
table1: buildingNumber(non-unique,nonclustered)
table2: sequence_ssn(unique,nonclustered)
Like I said this gets me the correct result set, but it is rather slow. Is there a better way to go about doing this?
It's not possible to change the database structure or the way table 2 operates. Trust me if it were it would be done. Are there any indexes I could make that would help speed this up?
I've looked at the execution plans, and it has a clustered index scan on table 2(18%), then a compute scalar(0%), then an eager spool(59%), then a filter(0%), then top n sort(14%).
That's 78% of the execution so I know it's in the section to get the names, just not sure of a better(faster) way to do it.
The reason I'm asking is that table 1 needs to be updated with current data. This is done through a webpage with a radgrid control. It has a range, start index, all that, and it takes forever for the users to update their data.
I can change how the update process is done, but I thought I'd ask about the query first.
Thanks in advance.
I would approach this with window functions. The idea is to assign a sequence number to records in the table with duplicates (I think table2), such that the most recent record has a value of 1. Then just select that as the most recent record:
select t1.*, t2.*
from table1 t1 join
(select t2.*,
row_number() over (partition by ssn order by sequence desc) as seqnum
from table2 t2
) t2
on t1.ssn = t2.ssn and t2.seqnum = 1
where t1.buildingNumber = @BuildingNumber;
My second suggestion is to use a user-defined function rather than a stored procedure:
create function XXX (
@BuildingNumber int
)
returns table as
return (
select t1.ssn, t1.buildingNum, t2.fname, t2.lname, rowNum
from table1 t1 join
(select t2.*,
row_number() over (partition by ssn order by sequence desc) as seqnum
from table2 t2
) t2
on t1.ssn = t2.ssn and t2.seqnum = 1
where t1.buildingNumber = @BuildingNumber
);
(This doesn't have the logic for the ordering because that doesn't seem to be the central focus of the question.)
You can then call it as:
select *
from dbo.XXX(<building number>);
EDIT:
The following may speed it up further, because you are only selecting a small(ish) subset of the employees:
select *
from (select t1.*, t2.*, row_number() over (partition by ssn order by sequence desc) as seqnum
from table1 t1 join
table2 t2
on t1.ssn = t2.ssn
where t1.buildingNumber = @BuildingNumber
) t
where seqnum = 1;
And, finally, I suspect that the following might be the fastest:
select t1.*, t2.*, row_number() over (partition by ssn order by sequence desc) as seqnum
from table1 t1 join
table2 t2
on t1.ssn = t2.ssn
where t1.buildingNumber = @BuildingNumber and
t2.sequence = (select max(sequence) from table2 t2a where t2a.ssn = t1.ssn)
In all these cases, an index on table2(ssn, sequence) should help performance.
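For example (a sketch; the index name is a placeholder, and the INCLUDE columns make it covering for the name lookup):
CREATE NONCLUSTERED INDEX IX_table2_ssn_sequence
ON table2 (ssn, sequence DESC)
INCLUDE (fName, lName);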
Try using some temp tables instead of the table variables. Not sure what kind of system you are working on, but I have had pretty good luck. Temp tables actually write to the drive so you wont be holding and processing so much in memory. Depending on other system usage this might do the trick.
Simply define the temp table using #TableName instead of @TableName. Put the name-sorting subquery in a temp table before everything else fires off, and join to it.
Just make sure to drop the table at the end. SQL Server will drop it when the procedure's session disconnects, but it is a good idea to drop it explicitly to be on the safe side.
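A sketch of that approach inside the procedure (untested, using the simplified column names from the question):
SELECT ssn, fName, lName,
ROW_NUMBER() OVER (PARTITION BY ssn ORDER BY sequence DESC) AS seqnum
INTO #LatestNames
FROM table2;
SELECT t1.ssn, t1.buildingNumber, n.fName, n.lName
FROM table1 t1
JOIN #LatestNames n ON n.ssn = t1.ssn AND n.seqnum = 1
WHERE t1.buildingNumber = @BuildingNumber;
DROP TABLE #LatestNames;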

SQL Server Comparing Subsequent Rows for Duplicates

I am trying to write a SQL Server query but have had no luck, and I was wondering if anyone has any ideas on how to achieve it.
What I'm trying to do:
I have a table with several columns; the ones I am dealing with are TaskID, StatusCode, and Timestamp. This table holds tasks for one of our systems that run throughout the day, and when something runs it gets a timestamp and a status code reflecting the status of that task.
Sometimes the task table is updated with a new timestamp but the StatusCode has not changed since the last update of the task, so two or more consecutive rows for a given task can have the same StatusCode. When I say consecutive rows, I mean with regard to the timestamp.
So, for example, task 88 could have twenty rows at StatusCode 2, after which the status code changes to something else.
What I am trying to do, with no luck at the moment, is to retrieve a list of all the tasks, status codes, and timestamps from this table; but where a task has more than one consecutive row with the same status code, I just want to take the first row (the one with the lowest timestamp) and ignore the rest until the status code for that task changes.
To make it simpler, you can assume that I am filtering on a TaskID, so I am just looking at a single task.
Does anyone have any ideas as to how I can do this, or perhaps something I could read to help me?
Thanks
Irfan.
Here are a couple of ways of getting what you want:
SELECT
T1.task_id,
T1.status_code,
T1.status_timestamp
FROM
My_Table T1
LEFT OUTER JOIN My_Table T2 ON
T2.task_id = T1.task_id AND
T2.status_timestamp < T1.status_timestamp
LEFT OUTER JOIN My_Table T3 ON
T3.task_id = T1.task_id AND
T3.status_timestamp < T1.status_timestamp AND
T3.status_timestamp > T2.status_timestamp
WHERE
T3.task_id IS NULL AND
(T2.status_code IS NULL OR T2.status_code <> T1.status_code)
ORDER BY
T1.status_timestamp
or
SELECT
T1.task_id,
T1.status_code,
T1.status_timestamp
FROM
My_Table T1
LEFT OUTER JOIN My_Table T2 ON
T2.task_id = T1.task_id AND
T2.status_timestamp = (
SELECT
MAX(status_timestamp)
FROM
My_Table T3
WHERE
T3.task_id = T1.task_id AND
T3.status_timestamp < T1.status_timestamp)
WHERE
(T2.status_code IS NULL OR T2.status_code <> T1.status_code)
ORDER BY
T1.status_timestamp
Both methods rely on there being no exact matches of the status_timestamp values (two rows can't have the same exact status_timestamp for a given task_id.)
Something like
select TaskID,StatusCode,Min(TimeStamp)
from table
group by TaskID,StatusCode
order by 1,2
Note that if a status code can repeat for the same task later on, you will need an additional field to distinguish the runs (see the sketch below), but hopefully this can point you in the right direction...
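If that is the case, the usual trick is to group on the difference of two row numbers (a "gaps and islands" sketch; the table name is a placeholder):
SELECT TaskID, StatusCode, MIN([Timestamp]) AS FirstTimestamp
FROM (
SELECT TaskID, StatusCode, [Timestamp],
ROW_NUMBER() OVER (PARTITION BY TaskID ORDER BY [Timestamp])
- ROW_NUMBER() OVER (PARTITION BY TaskID, StatusCode ORDER BY [Timestamp]) AS grp
FROM dbo.TaskStatus
) t
GROUP BY TaskID, StatusCode, grp
ORDER BY TaskID, FirstTimestamp;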
Something like the following should get you in the right direction....
CREATE TABLE #T
(
TaskId INT
,StatusCode INT
,StatusTimeStamp DATETIME
)
INSERT INTO #T
SELECT 1, 1, '2009-12-01 14:20'
UNION SELECT 1, 2, '2009-12-01 16:20'
UNION SELECT 1, 2, '2009-12-02 09:15'
UNION SELECT 1, 2, '2009-12-02 12:15'
UNION SELECT 1, 3, '2009-12-02 18:15'
;WITH CTE AS
(
SELECT TaskId
,StatusCode
,StatusTimeStamp
,ROW_NUMBER() OVER (PARTITION BY TaskId, StatusCode ORDER BY TaskId, StatusTimeStamp DESC) AS RNUM
FROM #T
)
SELECT TaskId
,StatusCode
,StatusTimeStamp
FROM CTE
WHERE RNUM = 1
DROP TABLE #T
