Actually, what I need is something like this: I have one row with many columns, and my script should calculate the difference between the highest value and the second highest, then the second highest and the third, and so on.
As far as I know, SQL Server can't calculate the maximum value across a row (it can calculate MAX over a column).
So I used PIVOT on my table and now I have one column. I ordered it from highest to lowest; what I need now is how to get the differences: first value minus second value, second value minus third value, and so on.
There are a couple of ways you can do this, and you should read up on SQL Server analytic and window functions.
Given
DROP TABLE T
CREATE TABLE T (ID INT)
INSERT INTO T VALUES
(1),(10),(2),(5)
You could use the LAG analytic function:
SELECT ID ,
LAG(ID, 1,0) OVER (ORDER BY ID DESC) LAGID,
ID - LAG(ID, 1,0) OVER (ORDER BY ID DESC) DIFF
FROM T
Result:
ID LAGID DIFF
----------- ----------- -----------
10 0 10
5 10 -5
2 5 -3
1 2 -1
(4 row(s) affected)
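Note that LAG(ID, 1, 0) substitutes 0 when there is no previous row, so the first row reports a DIFF of 10 that is really just its own value. If you would rather see NULL there, drop the default argument; a minimal variation on the same query:
SELECT ID ,
LAG(ID) OVER (ORDER BY ID DESC) LAGID,
ID - LAG(ID) OVER (ORDER BY ID DESC) DIFF
FROM T
This returns NULL for LAGID and DIFF on the first row, and the same values as above everywhere else.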
Or using the ROW_NUMBER() window function:
SELECT TID,TRN,UID,URN, TID - UID AS DIFF
FROM
(
SELECT T.ID TID
,ROW_NUMBER() OVER (ORDER BY T.ID DESC) TRN
FROM T
) S
LEFT JOIN
(SELECT ID UID
,ROW_NUMBER() OVER (ORDER BY ID DESC) URN
FROM T
) U
ON URN = TRN - 1
Result:
TID TRN UID URN DIFF
----------- -------------------- ----------- -------------------- -----------
10 1 NULL NULL NULL
5 2 10 1 -5
2 3 5 2 -3
1 4 2 3 -1
(4 row(s) affected)
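As an aside, LAG is only available on SQL Server 2012 and later; the ROW_NUMBER() self-join version works on SQL Server 2005 and up, so it is the fallback for older instances.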
Related
There are three columns. Wherever D_ID = 13, Value_amount holds the mode of payment, and wherever D_ID = 10, Value_amount holds the amount.
ID D_ID Value_amount
1 13 2
1 13 2
1 10 1500
1 10 1500
2 13 1
2 13 1
2 10 2000
2 10 2000
Now I have to add two more columns, amount and mode_of_payment, and the result should come out like this:
ID amount mode_of_payment
1 1500 2
1 1500 2
2 2000 1
2 2000 1
This is too long for a comment.
Simply put, your data is severely flawed. For the example data you've given you're "OK", because the rows hold the same values for the same ID, but what about when they don't? Let's assume, for example, we have data that looks like this:
ID D_ID Value_amount
1 13 1 --1
1 13 2 --2
1 10 1500 --3
1 10 1000 --4
2 13 1 --5
2 13 2 --6
2 10 2000 --7
2 10 3000 --8
I've added a "row number" next to data, for demonstration purposes only.
Here, what row is row "1" related to? Row "3" or row "4"? How do you know? There's no always-ascending value in your data, so row "3" could just as easily be row "4". In fact, if we were to order the data using ID ASC, D_ID DESC, Value_amount ASC, then rows 3 and 4 would "swap" in order. This could mean that when you attempt a solution, the order is wrong.
Tables aren't stored in any particular order; they are unordered. What determines the order the data is presented in is the ORDER BY clause, and if you don't have a value to define that "order", then the "order" is lost as soon as you INSERT the data.
If, however, we add an always-ascending value to your data, you can achieve this.
CREATE TABLE dbo.YourTable (UID int IDENTITY,
ID int,
DID int,
Value_amount int);
GO
INSERT INTO dbo.YourTable (ID, DID, Value_amount)
VALUES (1,13,1 ),
(1,13,2 ),
(1,10,1500),
(1,10,1000),
(2,13,1 ),
(2,13,2 ),
(2,10,2000),
(2,10,3000);
GO
WITH RNs AS(
SELECT ID,
DID,
Value_amount,
ROW_NUMBER() OVER (PARTITION BY ID, DID ORDER BY UID ASC) AS RN
FROM dbo.YourTable)
SELECT ID,
       MAX(CASE DID WHEN 10 THEN Value_amount END) AS Amount,
       MAX(CASE DID WHEN 13 THEN Value_amount END) AS PaymentMode
FROM RNs
GROUP BY RN,
ID;
GO
DROP TABLE dbo.YourTable;
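For what it's worth, against the sample rows above the query pairs the Nth DID = 13 row with the Nth DID = 10 row for each ID (in UID order), so it should return, in some order:
ID          Amount      PaymentMode
----------- ----------- -----------
1           1500        1
1           1000        2
2           2000        1
2           3000        2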
Of course, you need to fix your design to implement this, but you need to do that anyway.
I have a table like this:
FileID Value Version
-------------------------
1 Welle 2
1 Achse 3
2 Box 5
2 Enclosure 7
I need to "sum up" the lines with the same FileID: take the highest value from the Version column and get back the related Value.
Desired result would be:
FileID Value Version
-------------------------
1 Achse 3
2 Enclosure 7
However, using GROUP BY brings back the wrong result for Value:
SELECT
[FileID],
MAX([Value]),
MAX([Version])
FROM [ValueMist]
GROUP BY FileID
This returns:
FileID Value Version
------------------------
1 Welle 3
2 Enclosure 7
One option is WITH TIES in concert with row_number()
Example
Select top 1 with ties *
From YourTable
Order By row_number() over (partition by FileId Order By version desc)
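TOP 1 WITH TIES keeps every row that ties for the first position of the ORDER BY; since row_number() restarts at 1 within each FileId partition, exactly one row per FileId survives. Against the sample data this returns the desired result:
FileID Value Version
-------------------------
1      Achse      3
2      Enclosure  7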
You can also achieve this by using ROW_NUMBER():
;WITH CTE AS (SELECT *,
                     ROW_NUMBER() OVER (PARTITION BY FileID ORDER BY Version DESC) AS RW
              FROM [ValueMist])
SELECT FileID, Value, Version
FROM CTE
WHERE RW = 1
I have a query returning a few rows. There is a column with consecutive numbers and NULLs in it.
For example, it has values from 1-10, then 5 NULLs, then 16-30, then 10 NULLs, then 41-45, and so on.
I need to update that column, or create another column, to hold a group ID for each consecutive run.
Per the example above: for rows 1-10 the groupId can be 1, then nothing for the 5 NULLs, then groupId 2 for 16-30, then nothing for the 10 NULLs, then groupId 3 for 41-45, and so on.
Please let me know.
This was a fun one. Here is the solution with a simple table that contains just integers, but with gaps.
create table n(v int)
insert n values (1),(2),(3),(5),(6),(7),(9),(10)

select n.*, g.group_no
from n
join (
    -- island starts are values with no predecessor; island ends are values
    -- with no successor. For each start, the island's end is the smallest
    -- end at or above it (>= rather than > so a one-value island is its own end).
    select row_number() over (order by low.v) group_no, low.v as low, min(high.v) as high
    from n as low
    join n as high on high.v >= low.v
                  and not exists(select * from n h2 where h2.v = high.v + 1)
    where not exists(select * from n l2 where l2.v = low.v - 1)
    group by low.v
) g on g.low <= n.v and g.high >= n.v
Result:
v group_no
1 1
2 1
3 1
5 2
6 2
7 2
9 3
10 3
A typical islands-and-gaps solution:
select col, grp = dense_rank() over (order by grp)
from
(
select col, grp = col - dense_rank() over (order by col)
from yourtable
) d
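To see why this works, run the inner query on its own: within an island, the values and their dense ranks advance in step, so their difference is constant per island. A sketch against the n table from the previous answer (v standing in for col):
select v as col,
dense_rank() over (order by v) as dr,
v - dense_rank() over (order by v) as grp
from n
col dr grp
1  1  0
2  2  0
3  3  0
5  4  1
6  5  1
7  6  1
9  7  2
10 8  2
The outer dense_rank() then renumbers those per-island constants (0, 1, 2) into clean group IDs (1, 2, 3).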
So I have a table that has two records that need to be one. I can identify them, but I want to update them in groups (sort of like a scan: update = 1, then proceed; when some other field changes, increment the number by 1 and proceed).
Example table:
IDEvent 1 2 3 4 5
Col1 1 1 0 1 0
Col2 a a b a b
So essentially, my outcome would look like this afterwards, so that I can write a SELECT with GROUP BY Col1 to group the first two records into one but leave non-consecutive records alone. I tried WHILE loops but couldn't figure it out.
IDEvent 1 2 3 4 5
Col1 1 1 0 2 0
Col2 A A B A B
alter view PtypeGroup as
WITH q AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY idsession, comment ORDER BY ideventrecord) AS rnd,
           ROW_NUMBER() OVER (PARTITION BY idsession ORDER BY ideventrecord) AS rn
    FROM [ratedeventssorted]
)
SELECT MIN(ideventrecord) AS IDEventRecord,
       idsession,
       MIN(distancestamp) AS distancestamp,
       SUM(length) AS length,
       MIN(comment) AS comment2,
       MIN(eventscorename) AS firstptype,
       MIN(eventscoredescription) AS Ptype2,
       MIN(ideventrecord) AS first_number,
       MAX(ideventrecord) AS last_number,
       comment,
       COUNT(ideventrecord) AS numbers_count
       --into test
FROM q
WHERE eventscorename IN ('Flex', 'Chpsl')
GROUP BY idsession,
         rnd - rn,
         comment
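The rnd - rn difference is the same islands trick: within a run of adjacent rows (per idsession) that share a comment, both row numbers advance together, so their difference is constant and serves as the grouping key. A minimal sketch of just that core, with hypothetical inline data:
WITH q AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY idsession, comment ORDER BY ideventrecord) AS rnd,
           ROW_NUMBER() OVER (PARTITION BY idsession ORDER BY ideventrecord) AS rn
    FROM (VALUES (1, 1, 'A'), (1, 2, 'A'), (1, 3, 'B'), (1, 4, 'A'), (1, 5, 'B')) AS t(idsession, ideventrecord, comment)
)
SELECT idsession, comment, MIN(ideventrecord) AS first_number, MAX(ideventrecord) AS last_number
FROM q
GROUP BY idsession, rnd - rn, comment
Here events 1 and 2 (adjacent, both 'A') collapse into one group, while events 3, 4 and 5 each stay on their own.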
I have a need to SELECT all the rows from a table where each selected row's datetime is at least a given constant number of minutes after the previously selected row's. An example probably speaks best.
The following represents the table of data - we will call it myTable.
guid fkGuid myDate
------- ------- ---------------------
1 100 2013-01-10 11:00:00.0
2 100 2013-01-10 11:05:00.0
3 100 2013-01-10 11:10:00.0
4 100 2013-01-10 11:15:00.0
5 100 2013-01-10 11:20:00.0
6 100 2013-01-10 11:25:00.0
7 100 2013-01-10 11:30:00.0
8 100 2013-01-10 11:35:00.0
9 100 2013-01-10 11:40:00.0
10 100 2013-01-10 11:50:00.0
11 100 2013-01-10 11:55:00.0
What I want to do is provide a constant increment (say 10 minutes) and get back all the rows from the first that are 10 minutes or more from the previous row. So, with 10 minutes the result set should look like this:
guid myDate
------- ---------------------
1 2013-01-10 11:00:00.0
3 2013-01-10 11:10:00.0
5 2013-01-10 11:20:00.0
7 2013-01-10 11:30:00.0
9 2013-01-10 11:40:00.0
11 2013-01-10 11:55:00.0
The constant is passed in as a variable so it could be anything. Let's say it was 23 minutes, then the result set should look like this:
guid myDate
------- ---------------------
1 2013-01-10 11:00:00.0
6 2013-01-10 11:25:00.0
10 2013-01-10 11:50:00.0
The last example shows that I start at row 0's time (11:00:00), add 23 minutes, and get the next row >= that time (11:25:00); I then add 23 minutes to the new row's time and get the next (11:50:00), and so on.
I have tried doing this with a CTE, but although I can quite easily get back all of my rows or none of them, I can't seem to figure out how to get the rows I need. My current test code, with 23 minutes hard-coded into the WHERE clause:
WITH myCTE AS
(
SELECT guid,
myDate,
ROW_NUMBER() OVER (PARTITION BY guid ORDER BY myDate ASC) AS rowNum
FROM myTable
WHERE fkGuid = 100
)
SELECT currentRow.guid, currentRow.myDate
FROM myCTE AS currentRow
LEFT OUTER JOIN
myCTE AS previousRow
ON currentRow.guid = previousRow.guid
AND currentRow.rowNum = previousRow.rowNum + 1
WHERE
currentRow.myDate > DATEADD(minute, 23, previousRow.myDate)
ORDER BY
currentRow.myDate ASC
This returns nothing. If I omit the WHERE clause I get all rows back (obviously because I'm not filtering).
What am I missing?
Any and all help would be very much appreciated as it always is!
@gilly3, hardly SQL voodoo:
WITH CTE
AS
(
SELECT TOP 1
guid
,fkGuid
,myDate
,ROW_NUMBER() OVER (ORDER BY myDate) RowNum
FROM MyTable
UNION ALL
SELECT mt.guid
,mt.fkGuid
,mt.myDate
,ROW_NUMBER() OVER (ORDER BY mt.myDate)
FROM MyTable mt
INNER JOIN
CTE ON mt.myDate>=DATEADD(minute,23,CTE.myDate)
WHERE RowNum=1
)
SELECT guid
,fkGuid
,myDate
FROM CTE
WHERE RowNum=1
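One caveat: SQL Server caps recursive CTEs at 100 recursions by default, so if more than 100 qualifying rows are possible, append a MAXRECURSION hint to the final SELECT:
SELECT guid
      ,fkGuid
      ,myDate
FROM CTE
WHERE RowNum=1
OPTION (MAXRECURSION 0)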
The SQL Fiddle is here
First, your join will never return any rows, regardless of the WHERE clause. guid and rowNum are both unique keys per row, so if the guid is the same, so will be the rowNum. You can see that the join always fails by adding a field from previousRow to your select list and running your query without the WHERE clause.
Next, joining on rowNum + 1 prevents skipping rows: you will only ever compare adjacent rows that satisfy the date filter.
There may be some SQL voodoo with recursive queries that will make this work, but there will be a huge performance hit. Filter the data in your application code instead. E.g., in C#:
List<DataRow> FilterByInterval(IEnumerable<DataRow> rows, string dateColumn, int minutes)
{
    // Assumes rows arrive already sorted ascending by dateColumn.
    List<DataRow> filteredRows = new List<DataRow>();
    DateTime lastDate = DateTime.MinValue;
    foreach (DataRow row in rows)
    {
        DateTime dt = row.Field<DateTime>(dateColumn);
        TimeSpan diff = dt - lastDate;

        // Keep the row only if it is at least `minutes` after the last kept row.
        if (diff.TotalMinutes >= minutes)
        {
            filteredRows.Add(row);
            lastDate = dt;
        }
    }
    return filteredRows;
}