I have a tree where a specific node can appear under more than one other node (node 2 in my example):
        1
       / \
      2   3
     / \   \
    4   5   6
             \
              2
             / \
            4   5
Notice that 2 is duplicated: first under 1 and again under 6.
My recursion is:
with cte (ParentId, ChildId, Field1, Field2) AS (
    select BOM.ParentId, BOM.ChildId, BOM.Field1, BOM.Field2
    from BillOfMaterials BOM
    WHERE ParentId = x
    UNION ALL
    SELECT BOM.ParentId, BOM.ChildId, BOM.Field1, BOM.Field2
    FROM BillOfMaterials BOM
    JOIN cte on BOM.ParentId = cte.ChildId
)
select * from cte;
But the problem is that in the result the relations 2-4 and 2-5 are duplicated (once reached via relation 1-2 and again via relation 6-2):
ParentId  ChildId  OtherFields
1         2
1         3
2         4        /* from 1-2 */
2         5        /* from 1-2 */
3         6
6         2
2         4        /* from 6-2 */
2         5        /* from 6-2 */
Is there any way to skip visiting duplicated relationships? I see no reason why the recursion should run over rows that are already in the result; skipping them would be faster. Something like this:
with cte (ParentId, ChildId, Field1, Field2) AS (
    select BOM.ParentId, BOM.ChildId, BOM.Field1, BOM.Field2
    from BillOfMaterials BOM
    WHERE ParentId = x
    UNION ALL
    SELECT BOM.ParentId, BOM.ChildId, BOM.Field1, BOM.Field2
    FROM BillOfMaterials BOM
    JOIN cte on BOM.ParentId = cte.ChildId
    ------> WHERE (select count(*) FROM SoFarCollectedResult WHERE ParentId = BOM.ParentId AND ChildId = BOM.ChildId) = 0
)
select * from cte;
I found this thread, but it is 8 years old.
I am using SQL Server 2016.
If this is not possible, then my question is: how can I remove duplicates from the final result, checking distinctness only on the ParentId and ChildId columns?
Edited:
Expected result is:
ParentId  ChildId  OtherFields
1         2
1         3
2         4
2         5
3         6
6         2
You can, by adding two little tricks to the SQL.
But you need an extra Id column with a sequential number, for example an identity column, or a datetime field that shows when the record was inserted.
The simple reason is that, as far as the database is concerned, there is no order among the records as they were inserted, unless you have a column that indicates that order.
Trick 1) Join the CTE record only to Ids that are higher, because if they were lower then those are the duplicates you don't want to join.
Trick 2) Use the window function ROW_NUMBER to keep only the rows that are nearest to the Id the recursion started from.
Example:
declare @BillOfMaterials table (Id int identity(1,1) primary key, ParentId int, ChildId int, Field1 varchar(8), Field2 varchar(8));

insert into @BillOfMaterials (ParentId, ChildId, Field1, Field2) values
(1,2,'A','1-2'),
(1,3,'B','1-3'),
(2,4,'C','2-4'), -- from 1-2
(2,5,'D','2-5'), -- from 1-2
(3,6,'E','3-6'),
(6,2,'F','6-2'),
(2,4,'G','2-4'), -- from 6-2
(2,5,'H','2-5'); -- from 6-2
;with cte AS
(
    select Id as BaseId, 0 as Level, BOM.*
    from @BillOfMaterials BOM
    WHERE ParentId in (1)
    UNION ALL
    SELECT cte.BaseId, cte.Level + 1, BOM.*
    FROM cte
    JOIN @BillOfMaterials BOM on (BOM.ParentId = cte.ChildId and BOM.Id > cte.Id)
)
select ParentId, ChildId, Field1, Field2
from (
    select *
    --, row_number() over (partition by BaseId, ParentId, ChildId order by Id) as RNbase
    , row_number() over (partition by ParentId, ChildId order by Id) as RN
    from cte
) q
where RN = 1
order by ParentId, ChildId;
Result:
ParentId  ChildId  Field1  Field2
--------  -------  ------  ------
1         2        A       1-2
1         3        B       1-3
2         4        C       2-4
2         5        D       2-5
3         6        E       3-6
6         2        F       6-2
Anyway, as a side note, a Parent-Child relation table is normally used differently.
More often it is just a table of unique Parent-Child combinations that are foreign keys to another table where that Id is the primary key, so that the other fields are kept in that other table.
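For illustration, a minimal sketch of that normalized layout (the table and column names here are hypothetical, not taken from the question):

-- Hypothetical layout: part details live once in Part, the link table only
-- stores unique ParentId/ChildId combinations.
create table Part (
    Id int identity(1,1) primary key,
    Field1 varchar(8),
    Field2 varchar(8)
);

create table BillOfMaterialsLink (
    ParentId int not null references Part(Id),
    ChildId  int not null references Part(Id),
    constraint PK_BillOfMaterialsLink primary key (ParentId, ChildId) -- duplicate relations are impossible
);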
Change your last query from:
select * from cte;
To:
select * from cte group by ParentId, ChildId;
This will essentially take what you have right now but go one step further and remove rows that have already appeared, which takes care of your duplicate problem. Just be sure that all * returns here is ParentId and ChildId; should it return other columns, you will need to either add them to the GROUP BY or apply some sort of aggregate to them so that the query can still group (MAX, MIN, COUNT...).
Should you have more columns that you can't aggregate or group on, you could write the query like this:
select * from cte where ID in (select MAX(ID) from cte group by ParentId, ChildId);
Here ID would be the primary key of the table behind the cte. This takes the maximum id when rows match, which would normally be your latest entry; if you want the earliest entry instead, just change MAX() to MIN().
Related
I have a table with a few hundred thousand rows, with columns containing a start and finish datetime, something like this:
ID  StartDateTime            FinishDateTime
-------------------------------------------------------
1   2001-01-01 04:05:06.789  2001-02-03 04:05:06.789
2   2001-01-01 05:05:06.789  2001-01-01 07:05:06.789
3   2001-01-01 06:05:06.789  2001-02-04 07:05:06.789
4   2001-03-01 06:05:06.789  2001-02-03 04:05:06.789
For each row, I need to count the number of 'active' rows at the start time; as in, count the rows that start before and finish after the StartDateTime of each row. For instance, for ID=3 the StartDateTime falls between the StartDateTime and FinishDateTime of ID=1 and ID=2, but not of ID=3 or ID=4, so it should return 2.
The desired output is:
ID  ActiveRows
--------------
1   0
2   1
3   2
4   0
I can get it to work using the query below, but it takes hours to run.
select
    ID,
    (select count(1)
     from table tbl2
     where tbl2.StartDateTime < tbl.StartDateTime
       and tbl2.FinishDateTime > tbl.StartDateTime) as 'ActiveRows'
from
    table tbl
I've also tried joining the table on itself, but it also seems extremely slow.
select
    tbl.ID, count(1)
from
    table tbl
    left join table tbl2
        on tbl2.StartDateTime < tbl.StartDateTime
       and tbl2.FinishDateTime > tbl.StartDateTime
group by
    tbl.ID
What is the fastest way to perform this calculation?
You can do this using the APPLY operator:
SELECT tbl.id,
       oa.activerows
FROM yourtable tbl
OUTER APPLY (SELECT Count(tbl2.id)
             FROM yourtable tbl2
             WHERE tbl2.startdatetime < tbl.startdatetime
               AND tbl2.finishdatetime > tbl.startdatetime) oa (activerows)
Also, your original join query should be using a LEFT JOIN to get the IDs with a 0 count.
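For example, a sketch of that corrected join, counting tbl2.ID instead of COUNT(1) so that IDs with no overlapping rows come out as 0 rather than 1 (yourtable is a placeholder for the real table name):

SELECT tbl.ID,
       COUNT(tbl2.ID) AS ActiveRows   -- COUNT of a column ignores the NULL-extended rows
FROM yourtable tbl
LEFT JOIN yourtable tbl2
       ON tbl2.StartDateTime < tbl.StartDateTime
      AND tbl2.FinishDateTime > tbl.StartDateTime
GROUP BY tbl.ID;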
To further improve performance you can create a nonclustered index on yourtable:
CREATE NONCLUSTERED INDEX Nix_table
ON yourtable (startdatetime, finishdatetime) INCLUDE (Id);
I have the following table in T-SQL (there are other columns too, but no identity or primary key column):
Oid  Cid
1    a
1    b
2    f
3    c
4    f
5    a
5    b
6    f
6    g
7    f
So in the above example I would like to highlight that the following Oid values are duplicates when looking at their Cid column values as "pairs":
Oid:
1 (1 matches Oid: 5)
2 (2 matches Oid: 4 and 7)
Please note that the match for Oid 2 does not include Oid 6, since Oid 6 also has the letter 'g'.
Is it possible to create a query, without using a WHILE loop, that highlights the Oid values like above, along with how many other matches exist in the database?
I am trying to find the patterns within the dataset relating to these two columns. Thank you in advance.
Here is a worked example - see comments for explanation:
--First set up your data in a table variable
declare @oidcid table (Oid int, Cid char(1));

insert into @oidcid values
(1,'a'),
(1,'b'),
(2,'f'),
(3,'c'),
(4,'f'),
(5,'a'),
(5,'b'),
(6,'f'),
(6,'g'),
(7,'f');
--This cte gets a table with all of the cids in order, for each oid
;with cte as (
    select distinct Oid,
           (select Cid + ',' from @oidcid i2
            where i2.Oid = i.Oid
            order by Cid
            for xml path('')) Cids
    from @oidcid i
)
select cte.Oid, cte.Cids
from cte
inner join (
    -- Here we get just the lists of cids that appear more than once
    select Cids, count(Oid) as OidCount
    from cte
    group by Cids
    having count(Oid) > 1
) as gcte on cte.Cids = gcte.Cids
-- And when we list them, we are showing the oids with duplicate cids next to each other
order by cte.Cids
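On SQL Server 2017 or later, STRING_AGG can replace the FOR XML PATH trick; here is a minimal sketch against the same @oidcid table variable (assuming that version is available to you):

;with cte as (
    -- build the ordered, comma-separated list of cids per oid
    select Oid, string_agg(Cid, ',') within group (order by Cid) as Cids
    from @oidcid
    group by Oid
)
select cte.Oid, cte.Cids
from cte
inner join (
    -- keep only the cid lists that belong to more than one oid
    select Cids
    from cte
    group by Cids
    having count(*) > 1
) gcte on cte.Cids = gcte.Cids
order by cte.Cids;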
select o1.Cid, o1.Oid, o2.Oid,
       count(*) over (partition by o1.Cid) + 1 as [cnt]
from @oidcid o1
join @oidcid o2
  on o1.Cid = o2.Cid
 and o1.Oid < o2.Oid
order by o1.Cid, o1.Oid, o2.Oid
Maybe like this then:
WITH CTE AS
(
    SELECT Cid, Oid,
           ROW_NUMBER() OVER (PARTITION BY Cid ORDER BY Cid) AS RN,
           SUM(1) OVER (PARTITION BY Oid) AS maxRow2,
           SUM(1) OVER (PARTITION BY Cid) AS maxRow
    FROM @oidcid
)
SELECT * FROM CTE
WHERE maxRow != 1 AND maxRow2 = 1
ORDER BY Oid
I have the following table:
ID  Name
1   Jagan Mohan Reddy868
2   Jagan Mohan Reddy869
3   Jagan Mohan Reddy
Name column size is VARCHAR(55). Now, for some other task, we need to take only a 10-character length, i.e. VARCHAR(10).
My requirement is to check whether, after taking only the first 10 characters of a Name column value, it equals another row's value. For example, if I take the Name value of ID 1, i.e. 'Jagan Mohan Reddy868', via SUBSTRING(Name, 0, 11), the result matches the Name value of the ID 3 row, whose Name is 'Jagan Mohan Reddy'. I need to make a list of those kinds of rows. Can somebody help me with how I can achieve this in SQL Server?
My main check is that the truncated values of my Name column should not match any non-truncated values of the Name column. If they do, I need to get those records.
Assuming I understand the question, I think you are looking for something like this:
Create and populate sample data (Please save us this step in your future questions)
DECLARE @T as TABLE
(
    Id int identity(1,1),
    Name varchar(15)
)

INSERT INTO @T VALUES
('Hi, I am Zohar.'),
('Hi, I am Peled.'),
('Hi, I am Z'),
('I''m Zohar peled')
Use a cte with a self inner join to get the list of ids that match the first 10 chars:
;WITH cte as
(
    SELECT T2.Id As Id1, T1.Id As Id2
    FROM @T T1
    INNER JOIN @T T2 ON LEFT(T1.Name, 10) = T2.Name AND T1.Id <> T2.Id
)
Select the records from the original table, inner joined with a union of the Id1 and Id2 from the cte:
SELECT T.Id, Name
FROM @T T
INNER JOIN
(
    SELECT Id1 As Id
    FROM cte
    UNION
    SELECT Id2
    FROM cte
) U ON T.Id = U.Id
Results:
Id          Name
----------- ---------------
1           Hi, I am Zohar.
3           Hi, I am Z
Try this
SELECT Id, Name
FROM (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY Name, LEFT(Name, 11) ORDER BY ID) RN
    FROM Table1 T
) Tmp
WHERE Tmp.RN = 1
Loop over your column for all the values and put your SUBSTRING() function inside this loop. I think in SQL the index of a string starts from 1 instead of 0. If you pass your string to CHARINDEX() like this:
CHARINDEX('Y', 'Your String')
you will see whether it starts from 0 or 1, and you can save your substring value as the value of another column with length 10.
I hope it will help you.
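For reference, here is a quick, illustrative check that T-SQL string positions are 1-based (the literals are just examples):

SELECT CHARINDEX('Y', 'Your String')            AS FirstY,   -- returns 1, so positions start at 1
       SUBSTRING('Jagan Mohan Reddy868', 1, 10) AS First10   -- returns 'Jagan Moha'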
I think this should cover all the cases you are looking for.
-- Create table
DECLARE @T as TABLE
(
    Id int identity(1,1),
    Name varchar(55)
)

-- Create data
INSERT INTO @T VALUES
('Jagan Mohan Reddy868'),
('Jagan Mohan Reddy869'),
('Jagan Mohan Reddy'),
('Mohan Reddy'),
('Mohan Reddy123551'),
('Mohan R')

-- Get matching items
select *, SUBSTRING(Name, 0, 11) as ShorterName
from @T
where SUBSTRING(Name, 0, 11) in
(
    -- get all short names with a count > 1
    select SUBSTRING(Name, 0, 11) as ShortName
    from @T
    group by SUBSTRING(Name, 0, 11)
    having COUNT(*) > 1
)
order by Name, LEN(Name)
I have a table with something like the following:
ID  Name  Color
---------------
1   Bob   Blue
2   John  Yellow
1   Bob   Green
3   Sara  Red
3   Sara  Green
What I would like to do is return a filtered list of results whereby the following data is returned:
ID  Name  Color
---------------
1   Bob   Blue
2   John  Yellow
3   Sara  Red
i.e. I would like to return one row per user (I do not mind which row is returned for a particular user; I just need the [ID] to be unique). I have something that already works but is really slow: I create a temp table holding all the distinct IDs and then use OUTER APPLY to select the top 1 row from the same table, i.e.
CREATE TABLE #tb
(
    [ID] [int]
)

INSERT INTO #tb
select distinct [ID] from MyTable

select
    T1.[ID],
    V2.[Name],
    V2.Color
from
    #tb T1
OUTER APPLY
(
    SELECT TOP 1 * FROM MyTable T2 WHERE T2.[ID] = T1.[ID]
) AS V2

DROP TABLE #tb
Can somebody suggest how I may improve it?
Thanks
Try:
WITH CTE AS
(
    SELECT ROW_NUMBER() OVER (PARTITION BY ID ORDER BY ID) AS RowNo,
           ID, Name, Color
    FROM MyTable
)
SELECT ID, Name, Color
FROM CTE
WHERE RowNo = 1
or
select *
from
(
    select
        ID, Name, Color,
        rank() over (partition by ID order by Name, Color) as Rank
    from MyTable
) HRRanks
where Rank = 1
If you're using SQL Server 2005 or higher, you could use the Ranking functions and just grab the first one in the list.
http://msdn.microsoft.com/en-us/library/ms189798.aspx
I am trying to add a running average column to a SELECT statement, based on a column's values from the n previous rows of the same result set.
Let me explain
Id  Number  Average
1   1       NULL
2   3       NULL
3   2       NULL
4   4       2       <----- average of (1, 3, 2), the Numbers from the previous 3 rows
5   6       3       <----- average of (3, 2, 4), the Numbers from the previous 3 rows
.   .       .
.   .       .
The first 3 rows of the Average column are NULL because there are not yet 3 previous rows to average. Row 4 of the Average column shows the average of the Number column from the previous 3 rows.
I need some help trying to construct a SQL Select statement that will do this.
This should do it:
--Test data
CREATE TABLE RowsToAverage
(
    ID int NOT NULL,
    Number int NOT NULL
)

INSERT RowsToAverage (ID, Number)
SELECT 1, 1
UNION ALL
SELECT 2, 3
UNION ALL
SELECT 3, 2
UNION ALL
SELECT 4, 4
UNION ALL
SELECT 5, 6
UNION ALL
SELECT 6, 8
UNION ALL
SELECT 7, 10
--The query
;WITH NumberedRows
AS
(
    SELECT rta.*, row_number() OVER (ORDER BY rta.ID ASC) AS RowNumber
    FROM RowsToAverage rta
)
SELECT nr.ID, nr.Number,
       CASE
           WHEN nr.RowNumber <= 3 THEN NULL
           ELSE (SELECT avg(Number)
                 FROM NumberedRows
                 WHERE RowNumber < nr.RowNumber
                   AND RowNumber >= nr.RowNumber - 3)
       END AS MovingAverage
FROM NumberedRows nr
Assuming that the Id column is sequential, here's a simplified query for a table named "MyTable":
SELECT
    b.Id,
    b.Number,
    (
        SELECT AVG(a.Number)
        FROM MyTable a
        WHERE a.Id >= (b.Id - 3)
          AND a.Id < b.Id
          AND b.Id > 3
    ) as Average
FROM
    MyTable b;
Edit: I missed the point that it should average the three previous records...
For a general running average, I think something like this would work:
SELECT
    id, number,
    SUM(number) OVER (ORDER BY ID) /
        ROW_NUMBER() OVER (ORDER BY ID) AS [RunningAverage]
FROM myTable
ORDER BY ID
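If you are on SQL Server 2012 or later, the "average of the previous 3 rows" can also be expressed directly with a window frame; here is a minimal sketch against the myTable placeholder used above:

SELECT
    Id, Number,
    CASE WHEN ROW_NUMBER() OVER (ORDER BY Id) <= 3 THEN NULL   -- not enough previous rows yet
         ELSE AVG(1.0 * Number) OVER (ORDER BY Id
                                      ROWS BETWEEN 3 PRECEDING AND 1 PRECEDING)
    END AS MovingAverage
FROM myTable
ORDER BY Id;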
A simple self join would seem to perform much better than a row referencing subquery
Generate 10k rows of test data:
drop table test10k

create table test10k (Id int, Number int, constraint test10k_cpk primary key clustered (Id))

;WITH digits AS (
    SELECT 0 as Number
    UNION SELECT 1
    UNION SELECT 2
    UNION SELECT 3
    UNION SELECT 4
    UNION SELECT 5
    UNION SELECT 6
    UNION SELECT 7
    UNION SELECT 8
    UNION SELECT 9
)
, numbers as (
    SELECT
        (thousands.Number * 1000)
        + (hundreds.Number * 100)
        + (tens.Number * 10)
        + ones.Number AS Number
    FROM digits AS ones
    CROSS JOIN digits AS tens
    CROSS JOIN digits AS hundreds
    CROSS JOIN digits AS thousands
)
insert test10k (Id, Number)
select Number, Number
from numbers
I would pull the special case of the first 3 rows out of the main query; you can UNION ALL those back in if you really want them in the row set. Self-join query:
;WITH NumberedRows
AS
(
    SELECT rta.*, row_number() OVER (ORDER BY rta.ID ASC) AS RowNumber
    FROM test10k rta
)
SELECT nr.ID, nr.Number,
       avg(trailing.Number) as MovingAverage
FROM NumberedRows nr
JOIN NumberedRows as trailing
    ON trailing.RowNumber between nr.RowNumber - 3 and nr.RowNumber - 1
WHERE nr.RowNumber > 3
GROUP BY nr.ID, nr.Number
On my machine this takes about 10 seconds; the subquery approach that Aaron Alton demonstrated takes about 45 seconds (after I changed it to reflect my test source table):
;WITH NumberedRows
AS
(
    SELECT rta.*, row_number() OVER (ORDER BY rta.ID ASC) AS RowNumber
    FROM test10k rta
)
SELECT nr.ID, nr.Number,
       CASE
           WHEN nr.RowNumber <= 3 THEN NULL
           ELSE (SELECT avg(Number)
                 FROM NumberedRows
                 WHERE RowNumber < nr.RowNumber
                   AND RowNumber >= nr.RowNumber - 3)
       END AS MovingAverage
FROM NumberedRows nr
If you do a SET STATISTICS PROFILE ON, you can see the self join has 10k executes on the table spool. The subquery has 10k executes on the filter, aggregate, and other steps.
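For reference, the comparison boils down to something like this:

SET STATISTICS PROFILE ON;
-- run the self-join query and the subquery version here,
-- then compare the Executes column of each plan operator
SET STATISTICS PROFILE OFF;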
Check out some solutions here. I'm sure that you could adapt one of them easily enough.
If you want this to be truly performant, and aren't afraid to dig into a seldom-used area of SQL Server, you should look into writing a custom aggregate function. SQL Server 2005 and 2008 brought CLR integration to the table, including the ability to write user aggregate functions. A custom running total aggregate would be the most efficient way to calculate a running average like this, by far.
Alternatively you can denormalize and store precalculated running values. Described here:
http://sqlblog.com/blogs/alexander_kuznetsov/archive/2009/01/23/denormalizing-to-enforce-business-rules-running-totals.aspx
Performance of selects is as fast as it gets. Of course, modifications are slower.
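A rough sketch of that idea (the MovingAverage column and the one-off backfill below are illustrative, assume SQL Server 2012+ for the window frame, and are not taken from the linked article):

-- Persist the moving average next to each row of the RowsToAverage test table.
ALTER TABLE RowsToAverage ADD MovingAverage decimal(10, 2) NULL;
GO

UPDATE rta
SET MovingAverage = calc.MovingAverage
FROM RowsToAverage rta
JOIN (
    -- rows 1-3 get a partial-window average here; wrap in a CASE on ROW_NUMBER() if NULL is required
    SELECT ID,
           AVG(1.0 * Number) OVER (ORDER BY ID
                                   ROWS BETWEEN 3 PRECEDING AND 1 PRECEDING) AS MovingAverage
    FROM RowsToAverage
) calc ON calc.ID = rta.ID;

Inserts and updates then have to keep the persisted column in sync (for example in the loading job), which is exactly where the extra cost on modifications comes from.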