Get next 2 values for sequence in single row - sql-server

MSSQL 2014
I am performing a split on records in SQL. The key of the table MyTable is generated by a sequence, MySeq. Given a set of records, I want to generate two new keys per record that I can do some work with and then use to insert child rows.
MyTable
+----+
| Id |
+----+
| 1  |
| 2  |
+----+
Now to select my two new keys:
SELECT Id,
NEXT VALUE FOR MySeq AS ChildId1,
NEXT VALUE FOR MySeq AS ChildId2
FROM MyTable
I want:
+----+----------+----------+
| Id | ChildId1 | ChildId2 |
+----+----------+----------+
| 1  | 3        | 4        |
| 2  | 5        | 6        |
+----+----------+----------+
I get:
+----+----------+----------+
| Id | ChildId1 | ChildId2 |
+----+----------+----------+
| 1  | 3        | 3        |
| 2  | 4        | 4        |
+----+----------+----------+
I think the single sequence value per row is by design: the documentation notes that multiple references to NEXT VALUE FOR for the same sequence within one statement all return the same value for a given row. It also looks like you can order the sequence generation separately from the SELECT (with an OVER clause), but that still only produces one value per row.
I have a workaround that is fine enough (update the table variable after the initial INSERT, as sketched below), but before I leave it that way, I thought I would see if there is a more natural way to get the result I am looking for.
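For reference, a minimal sketch of that workaround, assuming a table variable and the table/sequence names above; because each statement references the sequence only once, the two keys come out distinct:
-- Sketch of the "insert, then update" workaround (names assumed from the question).
DECLARE @Work TABLE (Id INT, ChildId1 INT, ChildId2 INT);

-- First key: the sequence fires once per row of MyTable.
INSERT INTO @Work (Id, ChildId1)
SELECT Id, NEXT VALUE FOR MySeq
FROM MyTable;

-- Second key: the sequence fires again, once per row of @Work.
UPDATE @Work
SET ChildId2 = NEXT VALUE FOR MySeq;

SELECT Id, ChildId1, ChildId2 FROM @Work;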

What you can do in this situation is make the sequence increment by two:
CREATE SEQUENCE MySeq AS INT
START WITH 0
INCREMENT BY 2;
and then do a:
SELECT Id,
NEXT VALUE FOR MySeq AS ChildId1,
1 + NEXT VALUE FOR MySeq AS ChildId2
FROM MyTable
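If changing the sequence definition isn't an option, another route (just a sketch, not part of this answer; it uses the documented sys.sp_sequence_get_range procedure, assumes the sequence lives in the dbo schema and increments by 1) is to reserve a whole range up front and deal out consecutive pairs with ROW_NUMBER():
-- Sketch: reserve two values per row in a single call, then assign pairs deterministically.
DECLARE @rows  BIGINT = (SELECT COUNT(*) FROM MyTable);
DECLARE @size  BIGINT = @rows * 2;
DECLARE @first SQL_VARIANT;

EXEC sys.sp_sequence_get_range
     @sequence_name     = N'dbo.MySeq',
     @range_size        = @size,
     @range_first_value = @first OUTPUT;

SELECT Id,
       CONVERT(BIGINT, @first) + (ROW_NUMBER() OVER (ORDER BY Id) - 1) * 2     AS ChildId1,
       CONVERT(BIGINT, @first) + (ROW_NUMBER() OVER (ORDER BY Id) - 1) * 2 + 1 AS ChildId2
FROM MyTable;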

I could be wrong (my MSSQL experience is limited; I mainly use other SQL products), but could you not simply solve this with a couple of selects within selects?
i.e.:
SELECT ID,
       ((SELECT MAX(ID) FROM MyTable) + (ID * 2) - 1) AS ChildId1, -- offset by 2*ID so each row gets its own pair
       ((SELECT MAX(ID) FROM MyTable) + (ID * 2)) AS ChildId2
FROM MyTable

Related

Update hierarchy after deletion of row

I have a table that contains tree-like data (hierarchic design). Here is a small sample:
+----+----------+-----------+-------+----------+---------+
| ID | ParentID | Hierarchy | Order | FullPath | Project |
+----+----------+-----------+-------+----------+---------+
| 1 | null | 1 | 1 | 1 | 1 |
| 2 | null | 2 | 2 | 2 | 1 |
| 3 | 1 | 1.1 | 1 | 1-3 | 1 |
| 4 | 1 | 1.2 | 2 | 1-4 | 1 |
| 5 | 4 | 1.2.1 | 1 | 1-4-5 | 1 |
| 6 | 2 | 2.1 | 1 | 2-6 | 1 |
| 7 | null | 3 | 1 | 1 | 2 |
+----+----------+-----------+-------+----------+---------+
Project indicates which project owns the hierarchic dataset
ParentID is the ID of the parent node, it has a foreign key on ID.
Order is the rank of the element in one branch. For example, IDs 1, 2 and 7 are on the same node while 3 and 4 are in another.
FullPath shows the order using the ID (it's for system use and performance reasons).
Hierarchy is the column displayed to the user; it presents the hierarchy in the UI. It is recalculated after every insert, update and delete, and it's the one I'm having issues with.
I created a procedure for deleting elements from the table. It receives as input the ID of the element to delete and deletes it, along with its children if any. Then it recalculates the FullPath and Order columns. That works.
The problem is when I try to update the Hierarchy column. I use this procedure:
SELECT T.ID,
       T.ParentID,
       CASE WHEN T.ParentID IS NOT NULL THEN
            CONCAT(T1.Hierarchy, '.', CAST(T.[Order] AS NVARCHAR(255)))
       ELSE
            CAST(T.[Order] AS NVARCHAR(255))
       END AS Hierarchy
INTO #tmp
FROM t_HierarchyTable T
LEFT JOIN t_HierarchyTable T1
       ON T1.ID = T.ParentID
WHERE T.Project = @Project -- variable limiting the update to the current project, for performance
ORDER BY T.FullPath
--Update the table with ID as key on tmp table
This fails when I delete items that have a lower order than others, and those other items have children.
For example, if I delete item 3, item 4's Hierarchy will be corrected (to 1.1), BUT its child's won't (it stays at 1.2.1, while it should be 1.1.1). I added the ORDER BY to make sure parents were updated first, but no change.
What is my error? I really don't know how to fix this.
I managed to update the hierarchy with a recursive CTE. Since I have the order, I can append it to the parent branch's Hierarchy, which has already been computed.
;WITH CODES(ID, Hierarchy, iLevel) AS
(
SELECT
T.[ID] AS [ID],
CONVERT(VARCHAR(8000), T.[Order]) AS [Hierarchy],
1 AS [iLevel]
FROM
[dbo].[data] AS T
WHERE
T.[ParentID] IS NULL
UNION ALL
SELECT
T.[ID] AS [ID],
P.[Hierarchy] + IIF(RIGHT(P.[Hierarchy], 1) <> '-', '-', '') + CONVERT(VARCHAR(8000), T.[Order]) AS [Hierarchy],
P.[iLevel] + 1 AS [iLevel]
FROM
[dbo].[data] AS T
INNER JOIN CODES AS P ON
P.[ID] = T.[ParentID]
WHERE
P.[iLevel] < 100
)
SELECT
[ID], [Hierarchy], [iLevel]
INTO
#CODES
FROM
CODES
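The answer above stops at building #CODES; to actually apply the recalculated values, a joined UPDATE keyed on ID can follow. A minimal sketch, assuming the same [dbo].[data] table used in the CTE and that it has a [Hierarchy] column to refresh:
-- Sketch: write the recalculated hierarchy strings back, keyed on ID.
UPDATE T
SET T.[Hierarchy] = C.[Hierarchy]
FROM [dbo].[data] AS T
INNER JOIN #CODES AS C
    ON C.[ID] = T.[ID];

DROP TABLE #CODES;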

Create SQL Server Select/Delete Query from value in other table

I have a master table named Master_Table and the columns and values in the master table are below:
| ID | Database | Schema | Table_name | Common_col | Value_ID |
+-------+------------+--------+-------------+------------+----------+
| 1 | Database_1 | Test1 | Test_Table1 | Test_ID | 1 |
| 2 | Database_2 | Test2 | Test_Table2 | Test_ID | 1 |
| 3 | Database_3 | Test3 | Test_Table3 | Test_ID2 | 2 |
I have another table, Value_Table, which consists of the values that need to be deleted.
| Value_ID | Common_col | Value |
+----------+------------+--------+
| 1 | Test_ID | 110 |
| 1 | Test_ID | 111 |
| 1 | Test_ID | 115 |
| 2 | Test_ID2 | 999 |
I need to build a query that generates a DELETE statement for each table listed in Master_Table, using the database and schema information from the same row. The column to filter on is given in the Common_col column of the master table, and the values to delete are in the Value column of Value_Table.
The generated statements should look like this:
DELETE FROM Database_1.Test1.Test_Table1 WHERE Test_ID=110;
or
DELETE FROM Database_1.Test1.Test_Table1 WHERE Test_ID in (110,111,115);
These queries should run inside a loop so that I can delete all of the rows from all of the databases and tables listed in the master table.
Queries don't really create queries.
One way to do what you're saying, which could be useful if this is a one time thing or very occasional thing, is to use SSMS to generate query statements, then copy them to the clipboard, paste them into the window, and execute there.
SELECT 'DELETE FROM Database_1.Test1.Test_Table1 WHERE '
       + Common_col
       + ' = '
       + CONVERT(VARCHAR(10), Value)
FROM Value_Table
WHERE Value_ID = 1 -- the Value_ID mapped to this table in Master_Table
This probably isn't what you want; it sounds more like you want to automate cleanup or something.
You can turn this into one big query if you don't mind repeating yourself a little:
DELETE T1
FROM Database_1.Test1.Test_Table1 T1
INNER JOIN Database_1.Test1.ValueTable VT ON
(VT.common_col = 'Test_ID' and T1.Test_ID=VT.Value) OR
(VT.common_col = 'Test_ID2' and T1.Test_ID2=VT.Value)
You can also use dynamic SQL combined with the first part ... but I hate dynamic SQL so I'm not going to put it in my answer.
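For anyone who does want to see that route, here is a rough sketch of the dynamic SQL variant (not the answerer's code; column names are taken from the question, and it assumes Value_Table lives in the database you run it from):
-- Rough sketch only: build one DELETE per master row and execute it.
DECLARE @sql NVARCHAR(MAX);

DECLARE master_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT 'DELETE FROM ' + QUOTENAME(m.[Database]) + '.' + QUOTENAME(m.[Schema]) + '.' + QUOTENAME(m.Table_name)
         + ' WHERE ' + QUOTENAME(m.Common_col)
         + ' IN (SELECT v.Value FROM Value_Table v WHERE v.Value_ID = ' + CAST(m.Value_ID AS VARCHAR(10)) + ');'
    FROM Master_Table m;

OPEN master_cur;
FETCH NEXT FROM master_cur INTO @sql;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sys.sp_executesql @sql;
    FETCH NEXT FROM master_cur INTO @sql;
END;
CLOSE master_cur;
DEALLOCATE master_cur;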

Getting a lineage of linked rows with details

I'm trying to get a "lineage" or similar, and also information about the first and last links (at least; all would be good), out of a table that has self-referential links between rows that have been "replaced" and rows that have replaced them. The table has a structure along these lines:
CREATE TABLE Thing (
Id INT PRIMARY KEY,
TStamp DATETIME,
Replaces INT NULL,
ReplacedBy INT NULL
);
I'm stuck with this structure. :-) It's sort of doubly-linked (yes, it's a bit silly): Each row has a unique Id, and then a row that has been "replaced" by another will have a non-NULL ReplacedBy giving the Id of the replacement row, and the replacement row will also have a link back to what it replaces in Replaces. So we can use either Replaces or ReplacedBy (or both) if we like.
Here's some sample data:
INSERT INTO Thing
(Id, TStamp, Replaces, ReplacedBy)
VALUES
(1, '2017-01-01', NULL, 11),
(2, '2017-01-02', NULL, 12),
(3, '2017-01-03', NULL, NULL),
(4, '2017-01-04', NULL, NULL),
(11, '2017-01-11', 1, NULL),
(12, '2017-01-12', 2, 22),
(22, '2017-01-22', 12, NULL);
So 1 was replaced by 11, 2 was replaced by 12, and 12 was replaced by 22.
I'd like to get the following information for each chain of links from this table in a reasonable way:
Details of the row that started the chain
Details of the final row in the chain
Details of the links in-between or at least how many links (total) there are in the chain
...filtered by a date range applied to the last row in the chain.
In an ideal universe, I'd get back something like this:
+−−−−−−−−−+−−−−−−−−+−−−−+−−−−−−−+−−−−−−−−−−−−+
| FirstId | LastId | Id | Links | TStamp |
+−−−−−−−−−+−−−−−−−−+−−−−+−−−−−−−+−−−−−−−−−−−−+
| 1 | 11 | 1 | 2 | 2017−01−01 |
| 1 | 11 | 11 | 2 | 2017−01−11 |
| 2 | 22 | 2 | 3 | 2017−01−02 |
| 2 | 22 | 12 | 3 | 2017−01−12 |
| 2 | 22 | 22 | 3 | 2017−01−22 |
+−−−−−−−−−+−−−−−−−−+−−−−+−−−−−−−+−−−−−−−−−−−−+
So far I have this query, which I could post-process to get the above:
WITH Data AS (
SELECT Id, TStamp, Replaces, ReplacedBy, 0 AS Depth
FROM Thing
UNION ALL
SELECT Thing.Id, Thing.TStamp, Thing.Replaces, Thing.ReplacedBy, Depth + 1
FROM Data
JOIN Thing
ON Thing.Replaces = Data.Id
)
SELECT *
FROM Data
WHERE ReplacedBy IS NOT NULL OR Depth > 0
ORDER BY
Id, Depth;
That gives me:
+−−−−+−−−−−−−−−−−−+−−−−−−−−−−+−−−−−−−−−−−−+−−−−−−−+
| Id | TStamp | Replaces | ReplacedBy | Depth |
+−−−−+−−−−−−−−−−−−+−−−−−−−−−−+−−−−−−−−−−−−+−−−−−−−+
| 1 | 2017−01−01 | NULL | 11 | 0 |
| 2 | 2017−01−02 | NULL | 12 | 0 |
| 11 | 2017−01−11 | 1 | NULL | 1 |
| 12 | 2017−01−12 | 2 | 22 | 0 |
| 12 | 2017−01−12 | 2 | 22 | 1 |
| 22 | 2017−01−13 | 12 | NULL | 1 |
| 22 | 2017−01−13 | 12 | NULL | 2 |
+−−−−+−−−−−−−−−−−−+−−−−−−−−−−+−−−−−−−−−−−−+−−−−−−−+
And I could use something like this to figure out (for instance) the final row of each chain:
WITH Data AS (
SELECT Id, Replaces, ReplacedBy, 0 AS Depth
FROM Thing
UNION ALL
SELECT Thing.Id, Thing.Replaces, Thing.ReplacedBy, Depth + 1
FROM Data
JOIN Thing
ON Thing.Replaces = Data.Id
),
MaxData AS (
SELECT Data.Id, Data.Depth
FROM Data
JOIN (
SELECT Id, MAX(Depth) AS MaxDepth
FROM Data
GROUP BY Id
) j ON data.Id = j.Id AND Data.Depth = j.MaxDepth
WHERE Depth > 0
)
SELECT *
FROM MaxData
ORDER BY
Id;
...which gives me:
+−−−−+−−−−−−−+
| Id | Depth |
+−−−−+−−−−−−−+
| 11 | 1 |
| 12 | 1 |
| 22 | 2 |
+−−−−+−−−−−−−+
...but I've lost the starting point and the points along the way.
I have the strong feeling I'm missing something really straight-forward — but clever — that would let me get this largely with the query rather than post-processing, some kind of join with a "min" and "max" query (but not like my one above). What would it be?
The table doesn't have any indexes on Replaces or ReplacedBy, but we could add any needed. The table is only lightly used (roughly 300k rows and probably only a couple of hundred updates/inserts a day).
I'm limited to SQL Server 2008 features.
Inspired by Gordon Linoff's answer and HABO's comment which highlighted something Gordon was doing that was critical, I:
Removed the SQL Server 2012+ FIRST_VALUE function, replacing it with a CROSS JOIN on an "overview" query of the data
Included the Links count in the overview query
Removed the reliance on t in Gordon's WHERE NOT EXISTS (SELECT 1 FROM Thing t2 WHERE t2.ReplacedBy = t.id), which (at least on SQL Server 2008) wasn't bound to anything
Filtered out rows that weren't replaced
Below, I also add the date filtering mentioned in the question
...filtered by a date range applied to the last row in the chain.
...which Gordon didn't cover at all, and changes our approach, but only in terms of the arrow of time.
So, first, without the date criteria, sticking fairly close to Gordon's answer:
WITH Data AS (
SELECT Id AS FirstId, Id, TStamp, Replaces, ReplacedBy, 0 AS Depth
FROM Thing
WHERE Replaces IS NULL AND ReplacedBy IS NOT NULL
UNION ALL
SELECT d.FirstId, t.Id, t.TStamp, t.Replaces, t.ReplacedBy, d.Depth + 1
FROM Data d
JOIN Thing t ON t.Replaces = d.Id
),
Overview AS (
SELECT FirstId, MAX(Id) AS LastId, COUNT(*) AS Links
FROM Data
GROUP BY
FirstId
)
SELECT d.FirstId, o.LastId, d.Id, o.Links, d.Depth, d.TStamp
FROM Data d
CROSS APPLY (
SELECT LastId, Links
FROM Overview
WHERE FirstId = d.FirstId
) o
ORDER BY
d.FirstId, d.Depth
;
The critical parts of that are grabbing the seed Id as FirstId here:
SELECT Id AS FirstId, Id, TStamp, Replaces, ReplacedBy, 0 AS Depth
FROM Thing
WHERE Replaces IS NULL AND ReplacedBy IS NOT NULL
and then propagating it through the results of the recursive join:
SELECT d.FirstId, t.Id, t.TStamp, t.Replaces, t.ReplacedBy, d.Depth + 1
FROM Data d
JOIN Thing t ON t.Replaces = d.Id
Just adding that to my original query gives us most of what I wanted. Then we add a second query to get the LastId for each FirstId (Gordon did it as a FIRST_VALUE over a partition, but I can't do that in SQL Server 2008) and using an overview query also lets me grab the number of links. We cross-apply that on the basis of the FirstId value to get the overall results I wanted.
The query above returns the following for the sample data:
+−−−−−−−−−+−−−−−−−−+−−−−+−−−−−−−+−−−−−−−+−−−−−−−−−−−−+
| FirstId | LastId | Id | Links | Depth | TStamp |
+−−−−−−−−−+−−−−−−−−+−−−−+−−−−−−−+−−−−−−−+−−−−−−−−−−−−+
| 1 | 11 | 1 | 2 | 0 | 2017-01-01 |
| 1 | 11 | 11 | 2 | 1 | 2017-01-11 |
| 2 | 22 | 2 | 3 | 0 | 2017-01-02 |
| 2 | 22 | 12 | 3 | 1 | 2017-01-12 |
| 2 | 22 | 22 | 3 | 2 | 2017-01-13 |
+−−−−−−−−−+−−−−−−−−+−−−−+−−−−−−−+−−−−−−−+−−−−−−−−−−−−+
...i.e., exactly what I wanted, plus Depth if I want it (so I know what order the intermediary links were in).
If we wanted to include rows that were never replaced, we'd just change
WHERE Replaces IS NULL AND ReplacedBy IS NOT NULL
to
WHERE Replaces IS NULL
Giving us:
+−−−−−−−−−+−−−−−−−−+−−−−+−−−−−−−+−−−−−−−+−−−−−−−−−−−−+
| FirstId | LastId | Id | Links | Depth | TStamp |
+−−−−−−−−−+−−−−−−−−+−−−−+−−−−−−−+−−−−−−−+−−−−−−−−−−−−+
| 1 | 11 | 1 | 2 | 0 | 2017-01-01 |
| 1 | 11 | 11 | 2 | 1 | 2017-01-11 |
| 2 | 22 | 2 | 3 | 0 | 2017-01-02 |
| 2 | 22 | 12 | 3 | 1 | 2017-01-12 |
| 2 | 22 | 22 | 3 | 2 | 2017-01-13 |
| 3 | 3 | 3 | 1 | 0 | 2017-01-03 |
| 4 | 4 | 4 | 1 | 0 | 2017-01-04 |
+−−−−−−−−−+−−−−−−−−+−−−−+−−−−−−−+−−−−−−−+−−−−−−−−−−−−+
But we've ignored the date criteria required by the question:
...filtered by a date range applied to the last row in the chain.
To do that without building a massive temporary result set, we have to work backward: Instead of selecting the starting point (the first entry in a chain, Replaces IS NULL), we need to select the ending point (the last entry in a chain, ReplacedBy IS NULL), and then invert our logic working back through the chain. It's largely a matter of:
Swapping FirstId with LastId
Swapping Replaces with ReplacedBy (convenient the table had both!)
Using MIN to get the first ID in the chain rather than MAX to get the last
Using d.Depth - 1 rather than d.Depth + 1
Then fixing-up Depth based on Links once we know it in our final select, to get those nice values where 0 = first link rather than some varying negative number: o.Links + d.Depth - 1 AS Depth
All of which gives us:
WITH Data AS (
SELECT Id AS LastId, Id, TStamp, Replaces, ReplacedBy, 0 AS Depth
FROM Thing
WHERE ReplacedBy IS NULL AND Replaces IS NOT NULL
-- Filtering by date of last entry would go here
UNION ALL
SELECT d.LastId, t.Id, t.TStamp, t.Replaces, t.ReplacedBy, d.Depth - 1
FROM Data d
JOIN Thing t ON t.ReplacedBy = d.Id
),
Overview AS (
SELECT LastId, MIN(Id) AS FirstId, COUNT(*) AS Links
FROM Data
GROUP BY
LastId
)
SELECT o.FirstId, d.LastId, d.Id, o.Links, o.Links + d.Depth - 1 AS Depth, d.TStamp
FROM Data d
CROSS APPLY (
SELECT FirstId, Links
FROM Overview
WHERE LastId = d.LastId
) o
ORDER BY
o.FirstId, d.Depth
;
So for instance, if we used
AND TStamp BETWEEN '2017-01-12' AND '2017-02-01'
where I have
-- Filtering by date of last entry would go here
above, with our sample data we'd get this result:
+−−−−−−−−−+−−−−−−−−+−−−−+−−−−−−−+−−−−−−−+−−−−−−−−−−−−+
| FirstId | LastId | Id | Links | Depth | TStamp |
+−−−−−−−−−+−−−−−−−−+−−−−+−−−−−−−+−−−−−−−+−−−−−−−−−−−−+
| 2 | 22 | 2 | 3 | 0 | 2017−01−02 |
| 2 | 22 | 12 | 3 | 1 | 2017−01−12 |
| 2 | 22 | 22 | 3 | 2 | 2017−01−13 |
+−−−−−−−−−+−−−−−−−−+−−−−+−−−−−−−+−−−−−−−+−−−−−−−−−−−−+
...because the last link in the Id = 1 chain is outside the date range, so we don't include that chain.
This is a little tricky. Arrange the CTE to start at the beginning of each list. That makes the subsequent processing easier:
WITH Data AS (
SELECT Id as FirstId, Id, TStamp, Replaces, ReplacedBy, 0 AS Depth
FROM Thing t
WHERE NOT EXISTS (SELECT 1 FROM Thing t2 WHERE t2.ReplacedBy = t.id)
UNION ALL
SELECT d.FirstId, t.Id, t.TStamp, t.Replaces, t.ReplacedBy, d.Depth + 1
FROM Data d JOIN
Thing t
ON t.Replaces = d.Id
)
SELECT d.*,
FIRST_VALUE(id) OVER (PARTITION BY FirstId ORDER BY Depth DESC) as LastId
FROM Data d;
Then, you can use FIRST_VALUE() with a reverse sort to get the last value in the chain.
This returns chains that have no links. You can add a filter to remove these.
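For example (a sketch, not part of the original answer), restricting the anchor member to rows that were actually replaced drops the single-row chains:
-- In the anchor member of the CTE: keep only chain starts that have a replacement.
WHERE NOT EXISTS (SELECT 1 FROM Thing t2 WHERE t2.ReplacedBy = t.id)
  AND t.ReplacedBy IS NOT NULL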

How to have a child group span across three columns?

What I'm trying to do
In my report, I am trying to get some basic data in a tablix. In this tablix there is one main summary row and detail rows inside it. What I want to do is put the details in the child row but split into three columns.
For example my tablix looks like this right now
Row11| Row12| Row13 |
1 | 5 | 4 |
| Column1 | Column2|
| 1 | 4 |
| 2 | 5 |
| 3 | 6 |
2 | 20 | 25 |
Column1 Column2 |
| 7 | 8 |
| 9 | 5 |
| 3 | 6 |
(This is just a demo table. The number of columns in my application is not necessarily this number and it should be irrelevant anyway)
How I want it to look like:
Row11| Row12| Row13 |
1 | 5 | 4 |
| Column1 | Column2| Column1 | Column2| Column1 | Column2|
| 1 | 4 | 2 | 5 | 3 | 6 |
2 | 20 | 25 |
| Column1 | Column2| Column1 | Column2| Column1 | Column2|
| 7 | 8 | 9 | 5 | 3 | 6 |
I just want to split the detail table into three columns. I have tried various approaches, but in vain.
What approaches have I tried?
The sub-report method: I attached a sub-report, divided the detail data into three separate tables, and split the columns across them. This works, except that it is terribly slow when pulling a large amount of data. I really do not want to do this.
The method mentioned here. Did not work.
I have been experimenting with the SQL itself as well, but SQL does not seem to be the issue here.
I also tried a Matrix instead of a tablix, trying to push my limits, but did not succeed.
Side note: if it matters, I am using the SSRS SDK for PHP, grabbing the PDFs from the Report Server, and designing the reports in Visual Studio.
This seems like such a simple thing, but I am stuck. Has anybody been in a situation like this before?
Please let me know if you need more clarification.
Create three detail tables, adjust which rows get shown in each, and put them in a List.
This solution works on the assumption that your raw data looks something like the flattened layout shown in the SQL-based answer further down (one row per Column1/Column2 pair, with the summary values repeated on each row).
Add a table report item and add the Column1 and Column2 data to it, leaving the grouping as just the details. Right-click the detail row, and go to Row Visibility.
Switch this to 'Show or hide based on an expression', and add this expression:
=IIF(RowNumber("tblFirstColumn") MOD 3 = 1, False, True)
This will make only the first, fourth, seventh etc. record show in that table. Paste two copies of this table next to the first, and adjust the row visibility expression on each:
=IIF(RowNumber("tblSecondColumn") MOD 3 = 2, False, True)
=IIF(RowNumber("tblThirdColumn") MOD 3 = 0, False, True)
Next add a List item. Change the row grouping of the list to group by Row11, add each row field to the top of this list (as text boxes or a non-grouped table), and move the three detail tables into the bottom of the list.
This should perform better than using subreports. I understand that when using subreports the datasets will be queried with every instance of that subreport. With all the design in one report, the queries should only run once.
Method 1: main tablix = three columns with TWO detail rows. In the 2nd detail row merge the three columns together. Create a new tablix for the detail information and put it inside the merged detail cell.
Method 2: main tablix = six columns and two detail rows. In the 1st detail row merge cells 1/2, 3/4, and 5/6 together.
I had to tackle a similar problem once, and the way I did it was by inserting a tablix inside a tablix. I believe the link below should resolve what you are looking for:
http://www.sqlcircuit.com/2012/03/ssrs-how-to-show-tablix-inside-tablix.html
What I additionally did in my report to increase the width of the nested tablix, so that it does not affect the width of the main tablix:
1) On the row above the inserted tablix, I created a column, kept it empty, and merged the cell below it where the nested tablix goes.
2) Now you can increase the size of that empty column (making its borders invisible) to whatever width you would like for the inserted tablix.
Hope this helps.
For what it's worth (I see you have already accepted an answer), I think this could be done primarily within SQL if you wished.
Assuming your raw data looks like this:
/-------------------------------------------\
| Row11 | Row12 | Row13 | Column1 | Column2 |
|-------+-------+-------+---------+---------|
| 1 | 5 | 4 | 1 | 4 |
| 1 | 5 | 4 | 2 | 5 |
| 1 | 5 | 4 | 3 | 6 |
| 2 | 20 | 25 | 3 | 6 |
| 2 | 20 | 25 | 7 | 8 |
| 2 | 20 | 25 | 9 | 5 |
\-------------------------------------------/
Let's create demo data to illustrate:
CREATE TABLE data (
Row11 INT,
Row12 INT,
Row13 INT,
Column1 INT,
Column2 INT
)
INSERT INTO data
SELECT 1,5,4,1,4
UNION
SELECT 1,5,4,2,5
UNION
SELECT 1,5,4,3,6
UNION
SELECT 2,20,25,7,8
UNION
SELECT 2,20,25,9,5
UNION
SELECT 2,20,25,3,6
You could aggregate each summary and detail row like this:
SELECT DISTINCT d.Row11,
d.Row12,
d.Row13,
dfirst.Column1,
dfirst.Column2,
dsecond.Column1,
dsecond.Column2,
dthird.Column1,
dthird.Column2
FROM data d
CROSS APPLY
(
SELECT TOP 1 Column1, Column2
FROM data d1
WHERE d1.Row11 = d.Row11 AND d1.Row12 = d.Row12 AND d1.Row13 = d.Row13
ORDER BY 1,2
) dfirst
CROSS APPLY
(
SELECT Column1, Column2
FROM
(
SELECT Column1, Column2, ROW_NUMBER() OVER (ORDER BY Column1, Column2) AS rownumber
FROM data d1
WHERE d1.Row11 = d.Row11 AND d1.Row12 = d.Row12 AND d1.Row13 = d.Row13
) drows
WHERE rownumber = 2
) dsecond
CROSS APPLY
(
SELECT TOP 1 Column1, Column2
FROM data d1
WHERE d1.Row11 = d.Row11 AND d1.Row12 = d.Row12 AND d1.Row13 = d.Row13
ORDER BY 1 DESC,2 DESC
) dthird
Which gives the results:
/-----------------------------------------------------------------------------------\
| Row11 | Row12 | Row13 | Column1 | Column2 | Column1 | Column2 | Column1 | Column2 |
|-------+-------+-------+---------+---------+---------+---------+---------+---------|
| 1 | 5 | 4 | 1 | 4 | 2 | 5 | 3 | 6 |
| 2 | 20 | 25 | 3 | 6 | 7 | 8 | 9 | 5 |
\-----------------------------------------------------------------------------------/
It should then be relatively trivial to group this in the table in your SSRS report by Row11, Row12, Row13, placing the values for Row11, Row12 and Row13 into the Group Header row and the values for all 6 Column1 and Column2 values into the detail row:
(Design and Results screenshots omitted.)
Note: this only works for 3 (or fewer) pairs of Column1/Column2 values per tuple of Row11, Row12, Row13 values.

Why am I getting an index scan for a covered query using aggregate function?

I have a query:
select min(timestamp) from table
This table has 60+ million rows, and daily I delete a few off the end. To determine whether there is any data old enough to delete, I run the query above. There is an ascending index on timestamp, containing only that one column, and the query plan in Oracle shows a full index scan. Should this not be the definition of a seek?
edit including plan:
| Id | Operation                  | Name       | Rows | Bytes | Cost (%CPU)| Time     |
|  0 | SELECT STATEMENT           |            |    1 |     8 |     4   (0)| 00:00:01 |
|  1 |  SORT AGGREGATE            |            |    1 |     8 |            |          |
|  2 |   INDEX FULL SCAN (MIN/MAX)| NEVENTS_I2 |    1 |     8 |     4 (100)| 00:00:01 |
Can you post the actual query plan? Are you sure that it is not doing a min/max index full scan? As you can see in this example, we're getting the MIN value from a 100,000 row table using a min/max index full scan with only a handful of consistent gets.
SQL> create table foo (
2 col1 date not null
3 );
Table created.
SQL> insert into foo
2 select sysdate + level
3 from dual
4 connect by level <= 100000;
100000 rows created.
SQL> create index idx_foo_col1
2 on foo( col1 );
Index created.
SQL> analyze table foo compute statistics for all indexed columns;
Table analyzed.
SQL> set autotrace on;
<<Note that I ran this statement once just to get the delayed block cleanout to
happen so that the consistent gets number wouldn't be skewed. You could run a
different query as well>>
1* select min(col1) from foo
SQL> /
MIN(COL1)
---------
02-FEB-11
Execution Plan
----------------------------------------------------------
Plan hash value: 817909383
---------------------------------------------------------------------------------------------
| Id  | Operation                  | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |              |     1 |     7 |     2   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE            |              |     1 |     7 |            |          |
|   2 |   INDEX FULL SCAN (MIN/MAX)| IDX_FOO_COL1 |     1 |     7 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------
Note
-----
- dynamic sampling used for this statement (level=2)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
2 consistent gets
0 physical reads
0 redo size
532 bytes sent via SQL*Net to client
524 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
At first I thought that the index would only be used if the column is declared NOT NULL. I tested with the following setup:
SQL> CREATE TABLE my_table (ts TIMESTAMP);
Table created
SQL> INSERT INTO my_table
2 SELECT systimestamp + ROWNUM * INTERVAL '1' SECOND
3 FROM dual CONNECT BY LEVEL <= 100000;
100000 rows inserted
SQL> CREATE INDEX ix ON my_table(ts);
Index created
SQL> EXPLAIN PLAN FOR SELECT MIN(ts) FROM my_table;
Explained
SQL> SELECT * FROM TABLE(dbms_xplan.display);
--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 13 | 69 (2)| 00:00:0
| 1 | SORT AGGREGATE | | 1 | 13 | |
| 2 | INDEX FULL SCAN (MIN/MAX)| IX | 90958 | 1154K| |
--------------------------------------------------------------------------------
Here we notice that the index is used, but all rows from the index are read. If we specify that the column is not null we get a much better plan:
SQL> ALTER TABLE my_table MODIFY ts NOT NULL;
Table altered
SQL> EXPLAIN PLAN FOR SELECT MIN(ts) FROM my_table;
Explained
SQL> SELECT * FROM TABLE(dbms_xplan.display);
--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 13 | 2 (0)| 00:00:0
| 1 | SORT AGGREGATE | | 1 | 13 | |
| 2 | INDEX FULL SCAN (MIN/MAX)| IX | 90958 | 1154K| 2 (0)| 00:00:0
--------------------------------------------------------------------------------
In fact this is the same plan that is also used if we add a WHERE clause (Oracle will read a single row from the index):
SQL> EXPLAIN PLAN FOR SELECT MIN(ts) FROM my_table WHERE ts IS NOT NULL;
Explained
SQL> SELECT * FROM TABLE(dbms_xplan.display);
--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 13 | 2 (0)| 00:00:
| 1 | SORT AGGREGATE | | 1 | 13 | |
| 2 | FIRST ROW | | 90958 | 1154K| 2 (0)| 00:00:
| 3 | INDEX FULL SCAN (MIN/MAX)| IX | 90958 | 1154K| 2 (0)| 00:00:
--------------------------------------------------------------------------------
This last plan shows (line 2) that Oracle is indeed performing a "seek".
Just to home in on the fact that an "INDEX FULL SCAN (MIN/MAX)" is simply not the same as an "INDEX FULL SCAN". An INDEX FULL SCAN really does scan the entire index (possibly with filtering). An INDEX FULL SCAN (MIN/MAX) or INDEX RANGE SCAN (MIN/MAX), however, only reads the smallest or largest leaf block (of the range), but it can only be employed as long as the column is NOT NULL (which is a bit silly, and really a bug, since a NULL value is by definition neither the smallest nor the largest value). The (MIN/MAX) optimization is an implicit FIRST_ROWS action and doesn't need the "WHERE ... IS NOT NULL" condition to kick in. Interestingly, the MIN/MAX optimization is normally not considered by the CBO for function-based indexes; that's another little bug.
