I have a dataset as shown below:
ProductionOrder RootID SalesOrder Line Quantity
829602 60124786_7275 60124786 7375 1
829603 60124786_7275 60124786 7400 1
109051 60126867_10000 60126867 10000 3
109058 60126867_10000 60126867 10050 3
109063 60126867_10000 60126867 10075 3
109071 60126867_10000 60126867 10125 3
109076 60126867_10000 60126867 10150 3
I was wondering if it would be possible to "explode" this view out into its individual components for each quantity. For example, that last row (ProductionOrder: 109076) would look like this instead:
ProductionOrder RootID SalesOrder Line QtyID
109076 60126867_10000 60126867 10150 1 of 3
109076 60126867_10000 60126867 10150 2 of 3
109076 60126867_10000 60126867 10150 3 of 3
And this would be done for every line, dynamically based on that total quantity. I can achieve this with a loop, but there are thousands and thousands of rows, so I was wondering if anyone could help me with a CTE-based example. I am trying to wrap my head around it, but it has proven difficult. Any ideas?
This can be achieved very easily with a tally. If your Quantity values are small, then a JOIN to a small VALUES clause works:
--Sample data
WITH YourTable AS(
SELECT *
FROM (VALUES(829602,'60124786_7275',60124786,7375,1),
(829603,'60124786_7275',60124786,7400,1),
(109051,'60126867_10000',60126867,10000,3),
(109058,'60126867_10000',60126867,10050,3),
(109063,'60126867_10000',60126867,10075,3),
(109071,'60126867_10000',60126867,10125,3),
(109076,'60126867_10000',60126867,10150,3))V(ProductionOrder,RootID,SalesOrder,Line,Quantity))
--Solution
SELECT ProductionOrder,
RootID,
SalesOrder,
Line,
CONCAT(V.I,' of ',YT.Quantity) AS QtyID
FROM YourTable YT
JOIN (VALUES(1),(2),(3),(4),(5),(6),(7),(8),(9),(10))V(I) ON V.I <= YT.Quantity
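To see the pattern end to end, here is a minimal sketch of the same small-tally JOIN run through SQLite via Python (table and column names mirror the sample data above; SQLite takes the named VALUES list as a CTE rather than a derived-table alias):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE YourTable
    (ProductionOrder INT, RootID TEXT, SalesOrder INT, Line INT, Quantity INT)""")
conn.executemany("INSERT INTO YourTable VALUES (?,?,?,?,?)", [
    (829602, '60124786_7275', 60124786, 7375, 1),
    (109076, '60126867_10000', 60126867, 10150, 3),
])

# Each row joins to every tally number I <= Quantity, so Quantity = 3
# produces three output rows: '1 of 3', '2 of 3', '3 of 3'.
rows = conn.execute("""
    WITH V(I) AS (VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10))
    SELECT YT.ProductionOrder, YT.Line, V.I || ' of ' || YT.Quantity AS QtyID
    FROM YourTable YT
    JOIN V ON V.I <= YT.Quantity
    ORDER BY YT.ProductionOrder, V.I
""").fetchall()
for r in rows:
    print(r)
```

The JOIN condition `V.I <= YT.Quantity` is what replaces the loop: the multiplication happens as a single set-based join.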
If, however, you have much larger values for Quantity, then a larger tally will be needed:
--Sample data
WITH YourTable AS(
SELECT *
FROM (VALUES(829602,'60124786_7275',60124786,7375,1),
(829603,'60124786_7275',60124786,7400,1),
(109051,'60126867_10000',60126867,10000,3),
(109058,'60126867_10000',60126867,10050,3),
(109063,'60126867_10000',60126867,10075,3),
(109071,'60126867_10000',60126867,10125,3),
(109076,'60126867_10000',60126867,10150,101))V(ProductionOrder,RootID,SalesOrder,Line,Quantity)),
--Solution
N AS(
SELECT N
FROM(VALUES(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL))N(N)),
Tally AS(
SELECT TOP (SELECT MAX(Quantity) FROM YourTable)
ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS I
FROM N N1, N N2, N N3) --1,000 rows, add more N's for more rows
SELECT ProductionOrder,
RootID,
SalesOrder,
Line,
CONCAT(T.I,' of ',YT.Quantity) AS QtyID
FROM YourTable YT
JOIN Tally T ON T.I <= YT.Quantity
ORDER BY ProductionOrder,
T.I;
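The size of the cross-joined tally grows multiplicatively: three 10-row sets give 10 × 10 × 10 = 1,000 numbers. A quick sketch verifying that count in SQLite (window functions require SQLite 3.25+, bundled with recent Python builds):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Three cross-joined copies of a 10-row set yield 10*10*10 = 1000 rows;
# ROW_NUMBER() then numbers them 1..1000.
n_rows, max_i = conn.execute("""
    WITH N(X) AS (VALUES (NULL),(NULL),(NULL),(NULL),(NULL),
                         (NULL),(NULL),(NULL),(NULL),(NULL)),
    Tally AS (
        SELECT ROW_NUMBER() OVER () AS I
        FROM N AS N1, N AS N2, N AS N3
    )
    SELECT COUNT(*), MAX(I) FROM Tally
""").fetchone()
print(n_rows, max_i)
```

Adding a fourth `N` would take it to 10,000, and so on, which is why this scales so cheaply compared to a loop.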
This sketch should be enough to give you an idea of how to use recursion to replicate rows a set number of times.
;WITH Normalized AS
(
SELECT *, RowNumber = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM YourData
)
,ReplicateAmount AS
(
SELECT ProductionOrder , RootID, SalesOrder, Line, Quantity
FROM Normalized
UNION ALL
SELECT R.ProductionOrder , R.RootID, R.SalesOrder, R.Line, Quantity=(R.Quantity - 1)
FROM ReplicateAmount R INNER JOIN Normalized N ON R.RowNumber = N.RowNumber
WHERE R.Quantity > 1
)
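For reference, here is a runnable version of the same recursive idea in SQLite via Python (a hypothetical minimal schema; the recursive member re-emits each row while decrementing a countdown column, so each source row appears Quantity times):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE YourData (ProductionOrder INT, Line INT, Quantity INT)")
conn.executemany("INSERT INTO YourData VALUES (?,?,?)",
                 [(829602, 7375, 1), (109076, 10150, 3)])

# Anchor: each row once, with a countdown N = Quantity.
# Recursive member: re-emit the row with N - 1 until N reaches 1.
rows = conn.execute("""
    WITH RECURSIVE ReplicateAmount AS (
        SELECT ProductionOrder, Line, Quantity, Quantity AS N
        FROM YourData
        UNION ALL
        SELECT ProductionOrder, Line, Quantity, N - 1
        FROM ReplicateAmount
        WHERE N > 1
    )
    SELECT ProductionOrder, Line,
           (Quantity - N + 1) || ' of ' || Quantity AS QtyID
    FROM ReplicateAmount
    ORDER BY ProductionOrder, N DESC
""").fetchall()
```

Note that for large Quantity values a tally join typically outperforms recursion, since the recursive member runs once per generated row.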
Hi, I have a table with the following fields:
ALERTID POLY_CODE ALERT_DATETIME ALERT_TYPE
I need to query the above table for records from the last 24 hours,
then group by POLY_CODE and ALERT_TYPE and get the latest ALERT_LEVEL value ordered by ALERT_DATETIME.
I can get that far, but I also need the ALERTID of the resulting records.
Any suggestions on an efficient way of getting this?
I have written a query in SQL Server; see below:
SELECT POLY_CODE, ALERT_TYPE, X.ALERT_LEVEL AS LAST_ALERT_LEVEL
FROM
(SELECT * FROM TableA where ALERT_DATETIME >= GETDATE() -1) T1
OUTER APPLY (SELECT TOP 1 [ALERT_LEVEL]
FROM (SELECT * FROM TableA where ALERT_DATETIME >= GETDATE() -1) T2
WHERE T2.POLY_CODE = T1.POLY_CODE AND
T2.ALERT_TYPE = T1.ALERT_TYPE ORDER BY T2.[ALERT_DATETIME] DESC) X
GROUP BY POLY_CODE, ALERT_TYPE, X.[ALERT_LEVEL]
POLY_CODE ALERT_TYPE ALERT_LEVEL
04575 Elec 2
04737 Gas 3
06239 Elec 2
06552 Elec 2
06578 Elec 2
10320 Elec 2
select top 1 with ties *
from TableA
where ALERT_DATETIME >= GETDATE() -1
order by row_number() over (partition by POLY_CODE,ALERT_TYPE order by [ALERT_DATETIME] DESC)
The way this works is that within each group of POLY_CODE, ALERT_TYPE, the rows get their own ROW_NUMBER() starting from the most recent ALERT_DATETIME. The WITH TIES clause then ensures that all rows (i.e., one per group) with a ROW_NUMBER() value of 1 are returned.
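Since TOP (1) WITH TIES is SQL Server-specific, the same one-row-per-group result can be reproduced portably by filtering on the row number directly. A minimal SQLite sketch (invented sample rows; column names taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE TableA
    (ALERTID INT, POLY_CODE TEXT, ALERT_TYPE TEXT,
     ALERT_DATETIME TEXT, ALERT_LEVEL INT)""")
conn.executemany("INSERT INTO TableA VALUES (?,?,?,?,?)", [
    (1, '04575', 'Elec', '2024-01-01 08:00', 1),
    (2, '04575', 'Elec', '2024-01-01 09:00', 2),  # newer -> should win
    (3, '04737', 'Gas',  '2024-01-01 07:00', 3),
])

# One row per (POLY_CODE, ALERT_TYPE): the row numbered 1 when ordered
# by ALERT_DATETIME descending, i.e. the latest alert in each group.
latest = conn.execute("""
    SELECT ALERTID, POLY_CODE, ALERT_TYPE, ALERT_LEVEL
    FROM (SELECT *, ROW_NUMBER() OVER (
              PARTITION BY POLY_CODE, ALERT_TYPE
              ORDER BY ALERT_DATETIME DESC) AS rn
          FROM TableA)
    WHERE rn = 1
    ORDER BY POLY_CODE
""").fetchall()
```

Because the ALERTID travels with the row, there is no extra join needed to recover it.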
One way of doing it is to create a CTE with the grouping that calculates the latest datetime for each combination, and then join it back to the table to get the results. Just keep in mind that if there is more than one record with the same combination of poly_code, alert_type, alert_level, and datetime, they will all show.
WITH list AS (
SELECT ta.poly_code,ta.alert_type,MAX(ta.alert_datetime) AS LatestDatetime,
ta.alert_level
FROM dbo.TableA AS ta
WHERE ta.alert_datetime >= DATEADD(DAY,-1,GETDATE())
GROUP BY ta.poly_code, ta.alert_type,ta.alert_level
)
SELECT ta.*
FROM list AS l
INNER JOIN dbo.TableA AS ta ON ta.alert_level = l.alert_level AND ta.alert_type = l.alert_type AND ta.poly_code = l.poly_code AND ta.alert_datetime = l.LatestDatetime
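A runnable sketch of this join-back-to-the-max approach in SQLite (made-up rows; the deliberate tie in the second group comes back twice, illustrating the caveat about duplicates):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE TableA
    (alertid INT, poly_code TEXT, alert_type TEXT,
     alert_datetime TEXT, alert_level INT)""")
conn.executemany("INSERT INTO TableA VALUES (?,?,?,?,?)", [
    (1, '04575', 'Elec', '2024-01-01 08:00', 2),
    (2, '04575', 'Elec', '2024-01-01 09:00', 2),   # latest for the group
    (3, '04737', 'Gas',  '2024-01-01 07:00', 3),   # tie: same datetime/level
    (4, '04737', 'Gas',  '2024-01-01 07:00', 3),   # tie: comes back too
])

# The CTE pins the latest datetime per group; joining back on all four
# columns recovers the full rows, including any exact ties.
rows = conn.execute("""
    WITH list AS (
        SELECT poly_code, alert_type, alert_level,
               MAX(alert_datetime) AS LatestDatetime
        FROM TableA
        GROUP BY poly_code, alert_type, alert_level
    )
    SELECT ta.alertid
    FROM list l
    JOIN TableA ta
      ON ta.poly_code = l.poly_code
     AND ta.alert_type = l.alert_type
     AND ta.alert_level = l.alert_level
     AND ta.alert_datetime = l.LatestDatetime
    ORDER BY ta.alertid
""").fetchall()
```

Alert 1 is filtered out (an older datetime in its group), while both tied Gas alerts survive the join.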
I have a query that selects a count of data for each day. I want to modify the query so it gets the data for dates between two given dates.
The first Query as follows:
SELECT ROW_NUMBER() OVER (ORDER BY q.english_Name DESC) as id,
COUNT(t.id) AS ticket,
q.english_name queue_name,
ts.code current_status,
COUNT(t.assigned_to) AS assigned,
(COUNT(t.id)-COUNT(t.assigned_to)) AS not_assigned
,trunc(t.create_date) create_Date
FROM ticket t
INNER JOIN ref_queue q
ON (q.id = t.queue_id)
INNER JOIN ref_ticket_status ts
ON(ts.id=t.current_status_id)
GROUP BY q.english_name,
ts.code
,trunc(t.create_date)
but when I modify it to :
SELECT ROW_NUMBER() OVER (ORDER BY q.english_Name DESC) as id,
COUNT(t.id) AS ticket,
q.english_name queue_name,
ts.code current_status,
COUNT(t.assigned_to) AS assigned,
(COUNT(t.id)-COUNT(t.assigned_to)) AS not_assigned
,trunc(t.create_date) create_Date
FROM ticket t
INNER JOIN ref_queue q
ON (q.id = t.queue_id)
INNER JOIN ref_ticket_status ts
ON(ts.id=t.current_status_id)
where t.create_date between '18-FEB-19' and '24-FEB-19'
GROUP BY q.english_name,
ts.code
,trunc(t.create_date)
the output is
1 1 Technical Support Sec. CLOSED 0 1 19-FEB-19
2 6 Technical Support Sec. OPEN 4 2 18-FEB-19
3 1 Technical Support Sec. OPEN 0 1 21-FEB-19
4 3 Network Sec. OPEN 2 1 18-FEB-19
5 1 Network Sec. OPEN 0 1 21-FEB-19
How can I get the totals across the days, so that the output is:
1 7 Technical Support Sec. OPEN 4 3
2 4 Network Sec. OPEN 2 2
When you GROUP BY in a query, your result set includes one row for every distinct set of values in your GROUP BY list. For example, the reason you are getting two rows for the OPEN records for "Technical Support Sec." is that there are two distinct values of TRUNC(t.create_date), resulting in two groups and, therefore, two rows in your result set.
To avoid that, stop grouping by TRUNC(t.create_date).
SELECT ROW_NUMBER() OVER (ORDER BY q.english_Name DESC) as id,
COUNT(t.id) AS ticket,
q.english_name queue_name,
ts.code current_status,
COUNT(t.assigned_to) AS assigned,
(COUNT(t.id)-COUNT(t.assigned_to)) AS not_assigned
-- ,trunc(t.create_date) create_Date
FROM ticket t
INNER JOIN ref_queue q
ON (q.id = t.queue_id)
INNER JOIN ref_ticket_status ts
ON(ts.id=t.current_status_id)
where t.create_date between '18-FEB-19' and '24-FEB-19'
GROUP BY q.english_name,
ts.code
-- ,trunc(t.create_date)
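The effect of dropping a column from the GROUP BY can be sketched quickly in SQLite: grouping by (queue, status, day) gives one row per day, while grouping by (queue, status) alone rolls the days up into a single total (toy data, hypothetical column names):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ticket (queue TEXT, status TEXT, day TEXT, assigned INT)")
conn.executemany("INSERT INTO ticket VALUES (?,?,?,?)", [
    ('Tech', 'OPEN', '2019-02-18', 1),
    ('Tech', 'OPEN', '2019-02-18', 1),
    ('Tech', 'OPEN', '2019-02-21', 0),
])

# Per-day grouping: one output row per distinct day.
per_day = conn.execute("""
    SELECT queue, status, day, COUNT(*) FROM ticket
    GROUP BY queue, status, day ORDER BY day
""").fetchall()

# Drop `day` from the GROUP BY and the days collapse into one total row.
total = conn.execute("""
    SELECT queue, status, COUNT(*), SUM(assigned) FROM ticket
    GROUP BY queue, status
""").fetchall()
```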
I am trying to merge and split polygons in PostGIS. Merging works, but I don't have any idea how to divide a polygon into two parts.
I have tried using this query, but it does not divide the polygon:
insert into edited values('2','338',
(SELECT ST_ASText(geom_part2) FROM(
WITH RECURSIVE ref(geom, env, targ_area, nit) AS (
SELECT geom,
ST_Envelope(geom) As env,
ST_Area(geom)/2 As targ_area,
1000 As nit
FROM plots
WHERE plot_no = 338 limit 1
),
T(n,overlap) AS (
VALUES (CAST(0 As Float),CAST(0 As Float))
UNION ALL
SELECT n + nit, ST_Area(ST_Intersection(geom, ST_Translate(env, n+nit, 0)))
FROM T CROSS JOIN ref
WHERE ST_Area(ST_Intersection(geom, ST_Translate(env, n+nit, 0)))>
ref.targ_area
),
bi(n) AS(
SELECT n
FROM T
ORDER BY n DESC LIMIT 1
)
SELECT bi.n,
ST_Difference(geom, ST_Translate(ref.env, n,0)) As geom_part1,
ST_Intersection(geom, ST_Translate(ref.env, n,0)) As geom_part2
FROM bi CROSS JOIN ref) AS TT));
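The query's underlying idea is to sweep a translated copy of the envelope across the polygon until the overlapped area reaches half the total. That search logic can be illustrated outside PostGIS; here is a hypothetical pure-Python sketch on an L-shaped polygon, bisecting on the cut position instead of stepping (none of this is PostGIS API, it only mirrors the search):

```python
# L-shaped polygon: a 4x2 base rectangle [0,4]x[0,2] plus a 2x2 block [0,2]x[2,4].
# area_left_of(x) is the area of the part of the polygon left of a vertical cut at x.
def area_left_of(x):
    base = max(0.0, min(x, 4.0)) * 2.0    # contribution of the 4x2 base
    block = max(0.0, min(x, 2.0)) * 2.0   # contribution of the 2x2 block
    return base + block

total = area_left_of(4.0)                 # 8 + 4 = 12
target = total / 2                        # 6

# Bisection: home in on the cut where the left part's area equals target,
# the same stopping condition the recursive CTE approximates by stepping.
lo, hi = 0.0, 4.0
for _ in range(60):
    mid = (lo + hi) / 2
    if area_left_of(mid) < target:
        lo = mid
    else:
        hi = mid
cut = (lo + hi) / 2
```

In PostGIS terms, `area_left_of` plays the role of `ST_Area(ST_Intersection(...))` and the bisection replaces the fixed-step translate loop, converging in far fewer iterations.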
I have a problem with an Oracle query. In this query I want to show the status_resume, where the record is taken by the max value of the seq column and the extern column. This is my query:
select x.order_id, z.status_resume,
max(y.seq) as seq2,
max(y.extern_order_status) as extern
from t_order_demand x
JOIN t_order_log y ON x.order_id=y.order_id
JOIN p_catalog_status z ON z.status_code_sc=y.extern_order_status
and x.order_id like '%1256%'
group by x.order_id, z.status_resume;
and this is the result:
order id status_resume seq extern
1256 proccess 2 4
1256 registered 1 2
1256 pre registered 0 1
I want the result to be just the status_resume based on the max value from seq and extern. How can I do it? Thanks.
order id status_resume seq extern
1256 proccess 2 4
WITH t AS
(SELECT x.order_id
,z.status_resume
,MAX(y.seq) AS seq2
,MAX(y.extern_order_status) AS extern
FROM t_order_demand x
JOIN t_order_log y
ON x.order_id = y.order_id
JOIN p_catalog_status z
ON z.status_code_sc = y.extern_order_status
AND x.order_id LIKE '%1256%'
GROUP BY x.order_id
,z.status_resume)
SELECT *
FROM t
WHERE (t.seq || t.extern) = (SELECT MAX(tt.seq || tt.extern) FROM t tt);
This might work for you:
WITH data AS
(
SELECT x.order_id,
z.status_resume,
Max(y.seq) AS seq2,
Max(y.extern_order_status) AS extern
FROM t_order_demand x
join t_order_log y
ON x.order_id=y.order_id
join p_catalog_status z
ON z.status_code_sc=y.extern_order_status
AND x.order_id LIKE '%1256%'
GROUP BY x.order_id,
z.status_resume )
SELECT *
FROM data
WHERE seq || extern =
(select max(seq || extern)
FROM data)
/
A simple test case to validate:
SQL> WITH DATA AS(
2 SELECT 1 col1, 2 col2 FROM dual UNION ALL
3 SELECT 5, 7 FROM dual
4 )
5 SELECT * FROM DATA
6 where col1||col2 = (select max(col1||col2) from data)
7 /
COL1 COL2
---------- ----------
5 7
SQL>
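One caveat with the `||` approach, worth checking against your data: concatenation produces a string, and string comparison can misorder multi-digit values. A tiny Python illustration of the failure mode:

```python
# seq=9, extern=9 concatenates to '99'; seq=10, extern=1 concatenates to '101'.
# Numerically 101 > 99, but as strings '99' sorts higher (because '9' > '1'),
# so the wrong row would win a MAX() over the concatenated value.
pairs = [(9, 9), (10, 1)]
concatenated = [str(a) + str(b) for a, b in pairs]
string_max = max(concatenated)              # string comparison
numeric_max = max(int(s) for s in concatenated)
```

Zero-padding each part to a fixed width before concatenating (e.g. Oracle's LPAD) avoids the problem.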
You can use the analytic function RANK:
select order_id, status_resume, seq2, extern
from (
select x.order_id, z.status_resume,
max(y.seq) as seq2,
max(y.extern_order_status) as extern,
rank() over(partition by x.order_id, z.status_resume order by max(y.seq) desc, max(y.extern_order_status) desc) rnk
from t_order_demand x
JOIN t_order_log y ON x.order_id=y.order_id
JOIN p_catalog_status z ON z.status_code_sc=y.extern_order_status
and x.order_id like '%1256%'
group by x.order_id, z.status_resume
) where rnk = 1;
But it's not clear what you mean by the max of two fields. Their sum? The query above retrieves the rows with the max seq, and if several rows have the same seq, then only the rows with the max extern_order_status are retrieved.
I have a history table containing a snapshot of each time a record is changed. I'm trying to return a certain history row together with the original capture date. I am currently using this:
select
s.Description,
h.CaptureDate OriginalCaptureDate
from
HistoryStock s
left join
( select
StockId,
CaptureDate
from
HistoryStock
where
HistoryStockId in ( select MIN(HistoryStockId) from HistoryStock group by StockId )
) h on s.StockId = h.StockId
where
s.HistoryStockId = #HistoryStockId
This works, but with 1 million records it's on the slow side, and I'm not sure how to optimize this query.
How can this query be optimized?
UPDATE:
WITH OriginalStock (StockId, HistoryStockId)
AS (
SELECT StockId, min(HistoryStockId)
from HistoryStock group by StockId
),
OriginalCaptureDate (StockId, OriginalCaptureDate)
As (
SELECT h.StockId, h.CaptureDate
from HistoryStock h join OriginalStock o on h.HistoryStockId = o.HistoryStockId
)
select
s.Description,
h.OriginalCaptureDate
from
HistoryStock s left join OriginalCaptureDate h on s.StockId = h.StockId
where
s.HistoryStockId = #HistoryStockId
I've updated the code to use CTEs, but I'm no better off performance-wise; there is only a small improvement. Any ideas?
Just another note: I need to get the first record in the history table for the StockId, not the earliest capture date.
I am not certain I entirely understand how the data works from your query, but nesting queries like that is never good for performance, in my opinion. You could try something along the lines of:
WITH MinCaptureDate (StockID, OriginalCaptureDate)
AS (
SELECT HS.StockID
,MIN(HS.CaptureDate) AS OriginalCaptureDate
FROM HistoryStock HS
GROUP BY
HS.StockID
)
SELECT HS.Description
,MCD.OriginalCaptureDate
FROM HistoryStock HS
JOIN MinCaptureDate MCD
ON HS.StockID = MCD.StockID
WHERE HS.StockID = #StockID
I think I see what you are trying to achieve. You basically want the description of the specified history stock record, but with the date associated with the first history record for that stock. So if your history table looks like this:
StockId HistoryStockId CaptureDate Description
1 1 Apr 1 Desc 1
1 2 Apr 2 Desc 2
1 3 Apr 3 Desc 3
and you specify #HistoryStockId = 2, you want the following result
Description OriginalCaptureDate
Desc 2 Apr 1
I think the following query will give you slightly better performance.
WITH OriginalStock (StockId, CaptureDate, RowNumber)
AS (
SELECT
StockId,
CaptureDate,
RowNumber = ROW_NUMBER() OVER (PARTITION BY StockId ORDER BY HistoryStockId ASC)
from HistoryStock
)
select
s.Description,
h.CaptureDate
from
HistoryStock s left join OriginalStock h on s.StockId = h.StockId and h.RowNumber = 1
where
s.HistoryStockId = #HistoryStockId
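Using the sample history rows above, a quick SQLite check of this ROW_NUMBER() approach (the #HistoryStockId placeholder becomes a bound parameter):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE HistoryStock
    (HistoryStockId INT, StockId INT, CaptureDate TEXT, Description TEXT)""")
conn.executemany("INSERT INTO HistoryStock VALUES (?,?,?,?)", [
    (1, 1, 'Apr 1', 'Desc 1'),
    (2, 1, 'Apr 2', 'Desc 2'),
    (3, 1, 'Apr 3', 'Desc 3'),
])

# RowNumber = 1 marks the first history row ever written for each StockId
# (lowest HistoryStockId, not earliest CaptureDate); joining on it pairs any
# chosen history row with the original capture date.
history_stock_id = 2
row = conn.execute("""
    WITH OriginalStock AS (
        SELECT StockId, CaptureDate,
               ROW_NUMBER() OVER (PARTITION BY StockId
                                  ORDER BY HistoryStockId ASC) AS RowNumber
        FROM HistoryStock
    )
    SELECT s.Description, h.CaptureDate
    FROM HistoryStock s
    LEFT JOIN OriginalStock h
      ON s.StockId = h.StockId AND h.RowNumber = 1
    WHERE s.HistoryStockId = ?
""", (history_stock_id,)).fetchone()
```

This matches the expected result shown earlier: the description of row 2 paired with the capture date of row 1.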