There are three columns. Wherever D_ID = 13, Value_amount holds the mode of payment, and wherever D_ID = 10, Value_amount holds the amount.
ID D_ID Value_amount
1 13 2
1 13 2
1 10 1500
1 10 1500
2 13 1
2 13 1
2 10 2000
2 10 2000
Now I have to add two more columns, amount and mode_of_payment, and the result should look like this:
ID amount mode_of_payment
1 1500 2
1 1500 2
2 2000 1
2 2000 1
This is too long for a comment.
Simply put, your data is severely flawed. For the example data you've given you're "OK", because the rows have the same values for the same ID, but what about when they don't? Let's assume, for example, we have data that looks like this:
ID D_ID Value_amount
1 13 1 --1
1 13 2 --2
1 10 1500 --3
1 10 1000 --4
2 13 1 --5
2 13 2 --6
2 10 2000 --7
2 10 3000 --8
I've added a "row number" next to data, for demonstration purposes only.
Here, what row is row "1" related to? Row "3" or row "4"? How do you know? There's no always-ascending value in your data, so row "3" could just as easily be row "4". In fact, if we were to order the data using ID ASC, D_ID DESC, Value_amount ASC, then rows 3 and 4 would "swap" places. This could mean that when you attempt a solution, the order is wrong.
Tables aren't stored in any particular order; they are unordered. What determines the order the data is presented in is the ORDER BY clause, and if you don't have a value to define that "order", then that "order" is lost as soon as you INSERT the data.
If, however, we add an always-ascending value to your data, you can achieve this.
CREATE TABLE dbo.YourTable (UID int IDENTITY,
                            ID int,
                            DID int,
                            Value_amount int);
GO
INSERT INTO dbo.YourTable (ID, DID, Value_amount)
VALUES (1, 13, 1),
       (1, 13, 2),
       (1, 10, 1500),
       (1, 10, 1000),
       (2, 13, 1),
       (2, 13, 2),
       (2, 10, 2000),
       (2, 10, 3000);
GO
WITH RNs AS(
    SELECT ID,
           DID,
           Value_amount,
           ROW_NUMBER() OVER (PARTITION BY ID, DID ORDER BY UID ASC) AS RN
    FROM dbo.YourTable)
SELECT ID,
       MAX(CASE DID WHEN 10 THEN Value_Amount END) AS Amount,      -- D_ID = 10 rows hold the amount
       MAX(CASE DID WHEN 13 THEN Value_Amount END) AS PaymentMode  -- D_ID = 13 rows hold the mode of payment
FROM RNs
GROUP BY RN,
         ID;
GO
DROP TABLE dbo.YourTable;
Of course, you need to fix your design to implement this, but you need to do that anyway.
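If your real table has no such column at all, one possible fix (just a sketch, reusing the dbo.YourTable name from above) is to bolt an IDENTITY column onto it. Be aware that the values it assigns to the rows that already exist are arbitrary, because, as explained above, their original insert order is already lost:
ALTER TABLE dbo.YourTable
    ADD UID int IDENTITY(1, 1) NOT NULL;
From then on, newly inserted rows get ever-increasing UID values that you can safely ORDER BY.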
Related
I want to know who has the most friends in the app I own (transactions), which means either they got paid by, or paid, many other users.
I can't make the query show me only those who have the maximum number of friends (it can be 1 or many, and it can change, so I can't use a fixed limit).
;with relationships as
(
    select
        paid as 'auser',
        Member_No as 'afriend'
    from Payments$
    union all
    select
        member_no as 'auser',
        paid as 'afriend'
    from Payments$
),
DistinctRelationships AS (
    SELECT DISTINCT *
    FROM relationships
)
select
    afriend,
    count(*) cnt
from DistinctRelationships
GROUP BY
    afriend
order by
    count(*) desc
I just can't figure it out; I've tried COUNT, MAX(COUNT), WHERE = MAX, and nothing worked.
It's a two-column table - "Member_No" and "Paid" - the member pays the money, and Paid is the one who got the money.
Member_No  Paid
14         18
17         1
12         20
12         11
20         8
6          3
2          4
9          20
8          10
5          20
14         16
5          2
12         1
14         10
It's from Excel, but I loaded it into SQL Server.
It's just a sample; there are 1,000 more rows.
It seems like you are massively over-complicating this. There is no need for a self-join.
Just unpivot each row so you have both sides of the relationship, then group by one side and count the distinct values of the other side:
SELECT
-- for just the first then SELECT TOP (1)
-- for all that tie for the top place use SELECT TOP (1) WITH TIES
v.Id,
Relationships = COUNT(DISTINCT v.Other),
TotalTransactions = COUNT(*)
FROM Payments$ p
CROSS APPLY (VALUES
(p.Member_No, p.Paid),
(p.Paid, p.Member_No)
) v(Id, Other)
GROUP BY
v.Id
ORDER BY
COUNT(DISTINCT v.Other) DESC;
db<>fiddle
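Picking up the comments at the top of that query: if you only want the member(s) in first place, a sketch of that variant is the same aggregation under TOP (1) WITH TIES (same Payments$ table and columns assumed):
SELECT TOP (1) WITH TIES
    v.Id,
    Relationships = COUNT(DISTINCT v.Other),
    TotalTransactions = COUNT(*)
FROM Payments$ p
CROSS APPLY (VALUES
    (p.Member_No, p.Paid),
    (p.Paid, p.Member_No)
) v(Id, Other)
GROUP BY
    v.Id
ORDER BY
    COUNT(DISTINCT v.Other) DESC;
WITH TIES keeps every Id that ties with the top count, which matches the "it can be 1 or many" requirement in the question.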
I have a data set produced from a UNION query that aggregates data from 2 sources.
I want to select that data based on whether the data was found in only one of those sources, or both.
The relevant parts of the data set look like this; there are a number of other columns:
row  preference  group  position
1    1           111    1
2    1           111    2
3    1           111    3
4    1           135    1
5    1           135    2
6    1           135    3
7    2           111    1
8    2           135    1
The [preference] column combined with the [group] column is what I'm trying to filter on: I want to return all the rows that have the same [preference] as the MIN([preference]) for each [group].
The desired output given the data above would be rows 1 -> 6
The [preference] column indicates the original source of the data in the UNION query so a legitimate data set could look like:
row  preference  group  position
1    1           111    1
2    1           111    2
3    1           111    3
4    2           111    1
5    2           135    1
In which case the desired output would be rows 1,2,3, & 5
What I can't work out is how to do (not real code):
SELECT * WHERE [preference] = MIN([preference]) PARTITION BY [group]
One way to do this is using RANK:
SELECT row
, preference
, [group]
, position
FROM (
SELECT row
, preference
, [group]
, position
, RANK() OVER (PARTITION BY [group] ORDER BY preference) AS seq
FROM t) t2
WHERE seq = 1
Demo here
Should be doable via a simple inner join:
SELECT t1.*
FROM t AS t1
INNER JOIN (SELECT [group], MIN(preference) AS preference
FROM t
GROUP BY [group]
) t2 ON t1.[group] = t2.[group]
AND t1.preference = t2.preference
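For completeness, the asker's pseudocode also maps almost literally onto a windowed MIN in a derived table; a sketch, assuming the same table t used in the two answers above:
SELECT row
     , preference
     , [group]
     , position
FROM (
     SELECT *
          , MIN(preference) OVER (PARTITION BY [group]) AS min_pref
     FROM t) t2
WHERE preference = min_pref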
I'm trying to randomly select a few rows for each Id stored in one table, where these Ids can have multiple rows in that table. It's difficult to explain in words, so let me show you with an example:
Example from the table :
Id Review
1 Text11
1 Text12
1 Text13
2 Text21
3 Text31
3 Text32
4 Text41
5 Text51
6 Text61
6 Text62
6 Text63
Result expected :
Id Review
1 Text11
1 Text13
2 Text21
3 Text32
4 Text41
5 Text51
6 Text62
In fact, the table contains thousands of rows. Some Ids have only one review, but others can have hundreds of reviews. I would like to select 10% of these, and select at least one row for every Id that has 1-9 reviews (I saw that SELECT TOP 10 PERCENT FROM table ORDER BY NEWID() includes a row even if it's alone).
I've read some Stack Overflow topics; I think I have to use a subquery, but I can't find the correct solution.
Thanks in advance.
Regards.
Try this:
DECLARE #t table(Id int, Review char(6))
INSERT #t values
(1,'Text11'),
(1,'Text12'),
(1,'Text13'),
(2,'Text21'),
(3,'Text31'),
(3,'Text32'),
(4,'Text41'),
(5,'Text51'),
(6,'Text61'),
(6,'Text62'),
(6,'Text63')
;WITH CTE AS
(
    SELECT
        id, Review,
        row_number() over (partition by id order by newid()) rn,  -- random position within each id
        count(*) over (partition by id) cnt                       -- total reviews for that id
    FROM #t
)
SELECT id, Review
FROM CTE
WHERE rn <= (cnt / 10) + 1 -- integer division: roughly 10% of each id's rows, and always at least one
Result(random):
id Review
1 Text12
2 Text21
3 Text31
4 Text41
5 Text51
6 Text63
So I have a table that has two records that need to become one. I can identify them, but I want to update them in groups (sort of like a scan: set the group to 1 and proceed, then when some other field changes, increment the number by 1 and proceed).
Example table:
IDEvent 1 2 3 4 5
Col1 1 1 0 1 0
Col2 a a b a b
So essentially, my outcome would look like this afterwards, so that I can write a SELECT with GROUP BY Col1 to group the first two records into one but leave non-consecutive records alone. I tried WHILE loops but couldn't figure it out.
IDEvent 1 2 3 4 5
Col1 1 1 0 2 0
Col2 A A B A B
alter view PtypeGroup as
WITH q AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY idsession, comment ORDER BY ideventrecord) AS rnd,
           ROW_NUMBER() OVER (PARTITION BY idsession ORDER BY ideventrecord) AS rn
    FROM [ratedeventssorted]
)
SELECT MIN(ideventrecord) AS IDEventRecord,
       idsession,
       MIN(distancestamp) AS distancestamp,
       SUM(length) AS length,
       MIN(comment) AS comment2,
       MIN(eventscorename) AS firstptype,
       MIN(eventscoredescription) AS Ptype2,
       MIN(ideventrecord) AS first_number,
       MAX(ideventrecord) AS last_number,
       comment,
       COUNT(ideventrecord) AS numbers_count
       --into test
FROM q
WHERE eventscorename IN ('Flex', 'Chpsl')
GROUP BY idsession,
         rnd - rn,
         comment
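For anyone puzzled by the rnd - rn term in that GROUP BY: it is the classic "gaps and islands" trick, where the difference between the two ROW_NUMBER()s stays constant within a run of consecutive rows that share the same grouping value. A minimal, self-contained sketch built from the question's example rows (the table and column names here are illustrative only):
DECLARE @demo table (IDEvent int, Col1 int, Col2 char(1));
INSERT @demo VALUES (1, 1, 'a'), (2, 1, 'a'), (3, 0, 'b'), (4, 1, 'a'), (5, 0, 'b');

WITH q AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY Col2 ORDER BY IDEvent) AS rnd, -- position within all rows sharing this Col2 value
           ROW_NUMBER() OVER (ORDER BY IDEvent)                   AS rn   -- position within all rows
    FROM @demo
)
SELECT MIN(IDEvent) AS first_event,
       MAX(IDEvent) AS last_event,
       Col2,
       COUNT(*)     AS events_in_group
FROM q
GROUP BY Col2, rnd - rn; -- constant within each consecutive run, so every "island" becomes one group
Events 1 and 2 (both 'a' and adjacent) collapse into one group, while event 4 ('a' but not adjacent) stays on its own, which is the behaviour asked for above.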
I need to SELECT all the rows from a table where each selected row's datetime is greater than the datetime of the previously selected row by a given constant number of minutes. An example probably speaks best.
The following represents the table of data - we will call it myTable.
guid fkGuid myDate
------- ------- ---------------------
1 100 2013-01-10 11:00:00.0
2 100 2013-01-10 11:05:00.0
3 100 2013-01-10 11:10:00.0
4 100 2013-01-10 11:15:00.0
5 100 2013-01-10 11:20:00.0
6 100 2013-01-10 11:25:00.0
7 100 2013-01-10 11:30:00.0
8 100 2013-01-10 11:35:00.0
9 100 2013-01-10 11:40:00.0
10 100 2013-01-10 11:50:00.0
11 100 2013-01-10 11:55:00.0
What I want to do is provide a constant increment (say 10 minutes) and get back, starting from the first row, every row that is 10 minutes or more after the previously returned row. So, with 10 minutes the result set should look like this:
guid myDate
------- ---------------------
1 2013-01-10 11:00:00.0
3 2013-01-10 11:10:00.0
5 2013-01-10 11:20:00.0
7 2013-01-10 11:30:00.0
9 2013-01-10 11:40:00.0
11 2013-01-10 11:55:00.0
The constant is passed in as a variable so it could be anything. Let's say it was 23 minutes, then the result set should look like this:
guid myDate
------- ---------------------
1 2013-01-10 11:00:00.0
6 2013-01-10 11:25:00.0
10 2013-01-10 11:50:00.0
The last example shows that I start at the first row's time (11:00:00), add 23 minutes and get the next row at or after that (11:25:00), add 23 minutes to that new row's time and get the next (11:50:00), and so on.
I have tried doing this with a CTE but although I can quite easily get back all my times or none of them, I can't seem to figure how to get the rows I need. My current test code using 23 minutes hard coded into the WHERE clause:
WITH myCTE AS
(
SELECT guid,
myDate,
ROW_NUMBER() OVER (PARTITION BY guid ORDER BY myDate ASC) AS rowNum
FROM myTable
WHERE fkGuid = 100
)
SELECT currentRow.guid, currentRow.myDate
FROM myCTE AS currentRow
LEFT OUTER JOIN
myCTE AS previousRow
ON currentRow.guid = previousRow.guid
AND currentRow.rowNum = previousRow.rowNum + 1
WHERE
currentRow.myDate > DATEADD(minute, 23, previousRow.myDate)
ORDER BY
currentRow.myDate ASC
This returns nothing. If I omit the WHERE clause I get all rows back (obviously because I'm not filtering).
What am I missing?
Any and all help would be very much appreciated as it always is!
@gilly3, hardly SQL voodoo
WITH CTE
AS
(
SELECT TOP 1
guid
,fkGuid
,myDate
,ROW_NUMBER() OVER (ORDER BY myDate) RowNum
FROM MyTable
UNION ALL
SELECT mt.guid
,mt.fkGuid
,mt.myDate
,ROW_NUMBER() OVER (ORDER BY mt.myDate)
FROM MyTable mt
INNER JOIN
CTE ON mt.myDate>=DATEADD(minute,23,CTE.myDate)
WHERE RowNum=1
)
SELECT guid
,fkGuid
,myDate
FROM CTE
WHERE RowNum=1
The SQL Fiddle is here
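One caveat that's easy to miss with the recursive approach: SQL Server stops a recursive CTE after 100 recursion levels by default, so for a longer date range the final SELECT would likely need a MAXRECURSION hint, along these lines:
SELECT guid
    ,fkGuid
    ,myDate
FROM CTE
WHERE RowNum=1
OPTION (MAXRECURSION 0); -- 0 removes the default limit of 100 recursions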
First, your join will never return any rows, regardless of the where clause. Guid and rowNum are both unique keys per row, so if the guid is the same, so will be the rowNum. You can see that the join always fails by adding a field from previousRow to your select list and running your query without the where clause.
Next, joining on rowNum + 1 prevents skipping rows. You will only select adjacent rows that satisfy the date filter.
There may be some SQL voodoo with recursive queries that will make this work, but there will be a huge performance hit. Filter the data in your application code. E.g., in C#:
List<DataRow> FilterByInterval(IEnumerable<DataRow> rows, string dateColumn, int minutes)
{
    // Assumes the incoming rows are already ordered by the date column.
    List<DataRow> filteredRows = new List<DataRow>();
    DateTime lastDate = DateTime.MinValue;
    foreach (DataRow row in rows)
    {
        DateTime dt = row.Field<DateTime>(dateColumn);
        TimeSpan diff = dt - lastDate;
        if (diff.TotalMinutes >= minutes)
        {
            filteredRows.Add(row);
            lastDate = dt;
        }
    }
    return filteredRows;
}