bring column name into row in sql server [duplicate] - sql-server

This question already has answers here:
Unpivot with column name
(3 answers)
Closed 4 years ago.
I have a table with columns column_name_1, column_name_2, column_name_3 and a single row with the values 0, 0, 1:
column_name_1 , column_name_2 , column_name_3
--------------,----------------,---------------
0 , 0 , 1
I need output like
column_name_1 | 0
column_name_2 | 0
column_name_3 | 1
Is it possible?
I have checked some unpivot examples, but they are not exactly my case,
because I need the column names to become column values and the one row to become a column.
1. Unpivot with column name
Name, Maths, Science, English
Tilak, 90, 40, 60
Raj, 30, 20, 10
changed into
Name, Subject, Marks
Tilak, Maths, 90
Tilak, Science, 40
Tilak, English, 60
As you can see there, the Name column keeps its position as a column.
2. SQL Query for generating matrix like output querying related table in SQL Server
The above link also has a Customer Name column which stays in place as it is.
But in my case, no column keeps the same position between the input and the output.
So if this can still be achieved through pivot/unpivot, please help with the code.
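For reference, a minimal setup that reproduces the table (the name dbo.YourTable is only assumed here, to match the answers below):
create table dbo.YourTable
(
    column_name_1 int,
    column_name_2 int,
    column_name_3 int
);
insert into dbo.YourTable values (0, 0, 1);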

Clearly UNPIVOT would be more performant, but if you need a dynamic approach without actually using dynamic SQL:
Example
Select C.*
From YourTable A
Cross Apply ( values (cast((Select A.* for XML RAW) as xml))) B(XMLData)
Cross Apply (
    Select Item  = a.value('local-name(.)','varchar(100)')
          ,Value = a.value('.','varchar(max)')
    From B.XMLData.nodes('/row') as C1(n)
    Cross Apply C1.n.nodes('./@*') as C2(a)
    Where a.value('local-name(.)','varchar(100)') not in ('Columns','ToExclude')  -- placeholder list of attributes to skip
) C
Returns
Item Value
column_name_1 0
column_name_2 0
column_name_3 1
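For completeness, the static UNPIVOT mentioned above would look roughly like this, assuming the question's table is dbo.YourTable with three int columns (note that UNPIVOT silently drops NULL values):
select u.col_name, u.col_value
from dbo.YourTable
unpivot (col_value for col_name in (column_name_1, column_name_2, column_name_3)) as u;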

You want APPLY:
SELECT tt.cols, tt.colsvalue
FROM table t CROSS APPLY
( VALUES ([column_name_1], 'column_name_1'),
([column_name_2], 'column_name_2'),
([column_name_3], 'column_name_3')
) tt (colsvalue, cols);

Related

Move Data from One Table To Another Table With New Layout

Please note, I asked this previously, but realized that I left out some very important information and felt it better to remove the original question and post a new one. My apologies to all.
I have a table that has the following columns:
ID
Name
2010/Jan
2010/Jan_pct
2010/Feb
2010/Feb_pct
.....
.....
2017/Nov
2017/Nov_pct
And then a pair of columns like that for every month/year combination up to the present (hopefully that makes sense). Please note though: it is NOT a given that every month/year combination is present. There might be a gap or a missing month/year. For instance, I know 2017/Jan and 2017/Feb are missing, and there could be any number missing. I just didn't want to list out every column but give a general idea of the layout.
Added to that, there isn't just one row per Name/ID; there can be multiple rows for a Name/ID, and the ID is not an identity column but can be any number.
To give an idea of how the table looks, here is some sample data (mind you, I only added two of the Year/Mon combinations, but there are dozens that do not necessarily have one for each month/year)
ID Name 2010/Jan 2010/Jan_Pct 2010/Feb 2010/Feb_Pct
10 Gold 81 0.00123 79 0.01242
134 Silver 82 0 75 0.21291
678 Iron 987 1.53252 1056 2.9897
As you can imagine, this isn't the best design, as you need to add two new columns every month. So I created a new table with the following definition:
ID - float,
Name - varchar(255),
Month - varchar(3),
Year - int,
Value - int,
Value_Pct - float
I am trying to figure out how to move the existing data from the old table into the new table design.
Any help would be greatly appreciated.
You can work with the unpivot operator to get what you need, with one added step of combining extra rows returned by the unpivot operator.
Sample Data Setup:
Given that the destination table has a value column of int datatype and the value_pct of float data type, I followed the same datatype guidance for the existing data table.
create table dbo.data_table
(
ID float not null
, [Name] varchar(255) not null
, [2010/Jan] int null
, [2010/Jan_Pct] float null
, [2010/Feb] int null
, [2010/Feb_Pct] float null
)
insert into dbo.data_table
values (10, 'Gold', 81, 0.00123, 79, 0.01242)
, (134, 'Silver', 82, 0, 75, 0.21291)
, (678, 'Iron', 987, 1.53252, 1056, 2.9897)
Answer:
--combine what was the "value" row and the "value_pct" row
--into a single row via summation
select a.ID
, a.[Name]
, a.[Month]
, a.[Year]
, sum(a.value) as value
, sum(a.value_pct) as value_pct
from (
--get the data out of the unpivot with one row for value
--and one row for value_pct.
select post.ID
, post.[Name]
, substring(post.col_nm, 6, 3) as [Month]
, cast(substring(post.col_nm, 1, 4) as int) as [Year]
, iif(charindex('pct', post.col_nm, 0) = 0, post.value_prelim, null) as value
, iif(charindex('pct', post.col_nm, 0) > 0, post.value_prelim, null) as value_pct
from (
--cast the columns that are currently INT as Float so that
--all data points can fit in one common data type (will separate back out later)
select db.ID
, db.[Name]
, cast(db.[2010/Jan] as float) as [2010/Jan]
, db.[2010/Jan_Pct]
, cast(db.[2010/Feb] as float) as [2010/Feb]
, db.[2010/Feb_Pct]
from dbo.data_table as db
) as pre
unpivot (value_prelim for col_nm in (
[2010/Jan]
, [2010/Jan_Pct]
, [2010/Feb]
, [2010/Feb_Pct]
--List all the rest of the column names here
)
) as post
) as a
group by a.ID
, a.[Name]
, a.[Month]
, a.[Year]
Final output: one row per ID/Name/Month/Year combination, with the combined value and value_pct.
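To actually move the rows into the new layout, one option is to skip UNPIVOT entirely and use CROSS APPLY (VALUES), which avoids the float-cast step and feeds an INSERT directly. This is only a sketch: the target table name dbo.data_table_new is an assumption, and the VALUES list needs one entry per year/month column pair in the old table.
create table dbo.data_table_new
(
    ID float not null
    , [Name] varchar(255) not null
    , [Month] varchar(3) not null
    , [Year] int not null
    , [Value] int null
    , [Value_Pct] float null
);

insert into dbo.data_table_new (ID, [Name], [Month], [Year], [Value], [Value_Pct])
select db.ID
    , db.[Name]
    , v.[Month]
    , v.[Year]
    , v.[Value]
    , v.[Value_Pct]
from dbo.data_table as db
cross apply (values
      ('Jan', 2010, db.[2010/Jan], db.[2010/Jan_Pct])
    , ('Feb', 2010, db.[2010/Feb], db.[2010/Feb_Pct])
    --, ('Nov', 2017, db.[2017/Nov], db.[2017/Nov_Pct])  -- one row per remaining column pair
) as v ([Month], [Year], [Value], [Value_Pct]);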

Retrieve Sorted Column Value in SQL Server

What I have:
I have a table with the following columns:
ID  SerialNo
1  101
2  102
3  103
4  104
5  105
6  116
7  117
8  118
9  119
10 120
These are just 10 dummy rows; the actual table has over 100,000 rows.
What I Want to get:
A method or formula, like some sorting technique, that returns the starting and ending element of the [SerialNo] column for every sub-series. For example:
Expected result: 101-105, 116-120
The comma separation in the above result is not important, only the starting and ending elements are important.
What I have tried:
I did it with procedural (PL/SQL-style) code, running a loop that stores the starting and ending elements in a table.
But due to the number of rows (over 100,000), the query execution takes around 2 minutes.
I have also searched for sorting techniques for SQL Server but found nothing, because processing every row individually takes far longer than a set-based approach would.
Assuming every sub-series contains 5 records, I got the expected result using the SQL below. I hope this helps.
DECLARE @subSeriesRange INT = 5;
CREATE TABLE #Temp(ID INT,SerialNo INT);
INSERT INTO #Temp VALUES(1,101),
(2,102),
(3,103),
(4,104),
(5,105),
(6,116),
(7,117),
(8,118),
(9,119),
(10,120);
SELECT STUFF((SELECT CONCAT(CASE ID%@subSeriesRange WHEN 1 THEN ',' ELSE '-' END,SerialNo)
FROM #Temp
WHERE ID%@subSeriesRange = 1 OR ID%@subSeriesRange = 0
ORDER BY ID
FOR XML PATH('')),1,1,''
);
DROP TABLE #Temp;
Just finding the start and end of each series is quite straightforward:
declare #t table (ID int not null, SerialNo int not null)
insert into #t(ID,SerialNo) values
(1 ,101), (2 ,102), (3 ,103),
(4 ,104), (5 ,105), (6 ,116),
(7 ,117), (8 ,118), (9 ,119),
(10,120)
;With Starts as (
select t1.SerialNo,ROW_NUMBER() OVER (ORDER BY t1.SerialNo) as rn
from
#t t1
left join
#t t1_no
on t1.SerialNo = t1_no.SerialNo + 1
where t1_no.ID is null
), Ends as (
select t1.SerialNo,ROW_NUMBER() OVER (ORDER BY t1.SerialNo) as rn
from
#t t1
left join
#t t1_no
on t1.SerialNo = t1_no.SerialNo - 1
where t1_no.ID is null
)
select
s.SerialNo as StartSerial,
e.SerialNo as EndSerial
from
Starts s
inner join
Ends e
on s.rn = e.rn
The logic being that a Start is a row where there is no row that has the SerialNo one less than the current row, and an End is a row where there is no row that has the SerialNo one more than the current row.
This may still perform poorly if there is no index on the SerialNo column.
Results:
StartSerial EndSerial
----------- -----------
101 105
116 120
Which is hopefully acceptable since you didn't seem to care what the specific results look like. It's also keeping things set-based.
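If window functions are available, the same start/end pairs can also be produced with the classic gaps-and-islands trick of subtracting ROW_NUMBER() from SerialNo; a sketch against the same @t table variable declared above:
select min(SerialNo) as StartSerial,
       max(SerialNo) as EndSerial
from (
    select SerialNo,
           SerialNo - row_number() over (order by SerialNo) as grp
    from @t
) s
group by grp
order by StartSerial;
SerialNo minus its row number is constant within each consecutive run, so grouping on that difference yields one row per sub-series.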

SQL Join one-to-many tables, selecting only most recent entries

This is my first post - so I apologise if it's in the wrong section!
I'm joining two tables with a one-to-many relationship using their respective ID numbers, but I only want to return the most recent record from the joined table, and I'm not entirely sure where to even start!
My original code for returning everything is shown below:
SELECT table_DATES.[date-ID], *
FROM table_CORE LEFT JOIN table_DATES ON [table_CORE].[core-ID] = table_DATES.[date-ID]
WHERE table_CORE.[core-ID] Like '*'
ORDER BY [table_CORE].[core-ID], [table_DATES].[iteration];
This returns a group of records showing every matching ID between table_CORE and table_DATES:
table_CORE date-ID iteration
1 1 1
1 1 2
1 1 3
2 2 1
2 2 2
3 3 1
4 4 1
But I need to return only the date with the maximum value in the "iteration" field as shown below
table_CORE date-ID iteration Additional data
1 1 3 MoreInfo
2 2 2 MoreInfo
3 3 1 MoreInfo
4 4 1 MoreInfo
I really don't even know where to start - obviously it's going to be a JOIN query of some sort - but I'm not sure how to get the subquery to return only the highest iteration for each item in table 2's ID field?
Hope that makes sense - I'll reword if it comes to it!
--edit--
I'm wondering how to integrate that when I need all the fields from table 1 (table_CORE in this case) and all the fields from table 2 (table_DATES) joined as well.
Both tables have additional fields that will need to be merged.
I'm pretty sure I can just add the fields into the "SELECT" and "GROUP BY" clauses, but there are around 40 fields altogether (and typing all of them will be tedious!)
Try using the MAX aggregate function like this with a GROUP BY clause.
SELECT
[ID1],
[ID2],
MAX([iteration])
FROM
table_CORE
LEFT JOIN table_DATES
ON [table_CORE].[core-ID] = table_DATES.[date-ID]
WHERE
table_CORE.[core-ID] Like '*' --LIKE '%something%' ??
GROUP BY
[ID1],
[ID2]
Your example field names don't match your sample query so I'm guessing a little bit.
Just to make sure that I have everything you’re asking for right, I am going to restate some of your question and then answer it.
Your source tables are table_core and table_dates, and your outputs are the current result set and the desired result set you showed above.
In order to make that happen all you need to do is use a subquery (or a CTE) as a “cross-reference” table. (I used temp tables to recreate your data example and _ in place of the - in your column names).
--Loading the example data
create table #table_core
(
core_id int not null
)
create table #table_dates
(
date_id int not null
, iteration int not null
, additional_data varchar(25) null
)
insert into #table_core values (1), (2), (3), (4)
insert into #table_dates values (1,1, 'More Info 1'),(1,2, 'More Info 2'),(1,3, 'More Info 3'),(2,1, 'More Info 4'),(2,2, 'More Info 5'),(3,1, 'More Info 6'),(4,1, 'More Info 7')
--select query needed for desired output (using a CTE)
; with iter_max as
(
select td.date_id
, max(td.iteration) as iteration_max
from #table_dates as td
group by td.date_id
)
select tc.*
, td.*
from #table_core as tc
left join iter_max as im on tc.core_id = im.date_id
inner join #table_dates as td on im.date_id = td.date_id
and im.iteration_max = td.iteration
select *
from
(
    SELECT *
         , row_number() over (partition by table_CORE.[core-ID] order by table_DATES.[iteration] desc) as rn
    FROM table_CORE
    LEFT JOIN table_DATES
        ON [table_CORE].[core-ID] = table_DATES.[date-ID]
    WHERE table_CORE.[core-ID] Like '*'
) tt
where tt.rn = 1
ORDER BY [core-ID]
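If listing 40-odd columns is the main worry, an OUTER APPLY with TOP (1) keeps every column from both tables without naming them. This is only a sketch using the bracketed column names from the question; adjust them to the real schema:
SELECT c.*, d.*
FROM table_CORE AS c
OUTER APPLY (
    SELECT TOP (1) td.*
    FROM table_DATES AS td
    WHERE td.[date-ID] = c.[core-ID]
    ORDER BY td.[iteration] DESC
) AS d
ORDER BY c.[core-ID];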

How do I exclude rows when an incremental value starts over?

I am a newbie poster but have spent a lot of time researching answers here. I can't quite figure out how to create a SQL result set using SQL Server 2008 R2 that should probably be using lead/lag from more modern versions. I am trying to aggregate data based on sequencing of one column, but there can be varying numbers of instances in each sequence. The only way I know a sequence has ended is when the next row has a lower sequence number. So it may go 1-2, 1-2-3-4, 1-2-3, and I have to figure out how to make 3 aggregates out of that.
Source data is joined tables that look like this (please help me format):
recordID instanceDate moduleID iResult interactionNum
1356 10/6/15 16:14 1 68 1
1357 10/7/15 16:22 1 100 2
1434 10/9/15 16:58 1 52 1
1435 10/11/15 17:00 1 60 2
1436 10/15/15 16:57 1 100 3
1437 10/15/15 16:59 1 100 4
I need to find a way to separate the first 2 rows from the last 4 rows in this example, based on values in the last column.
What I would love to ultimately get is a result set that looks like this, which averages the iResult column based on the grouping and takes the first instanceDate from the grouping:
instanceDate moduleID iResult
10/6/15 1 84
10/9/15 1 78
I can aggregate to get this result using MIN and AVG if I can just find a way to separate the groups. The data is ordered by instanceDate (please ignore the date formatting here) then interactionNum, and the group separation should happen when the query finds a row where the interactionNum is <= the previous row's (it will usually start over with '1' but not always, so I prefer to split on a lower-or-equal integer value rather than on '1').
Here is the query I have so far (includes the joins that give the above data set):
SELECT
X.*
FROM
(SELECT TOP 100 PERCENT
instanceDate, b.ModuleID, iResult, b.interactionNum
FROM
(firstTable a
INNER JOIN
secondTable b ON b.someID = a.someID)
WHERE
a.someID = 2
AND b.otherID LIKE 'xyz'
AND a.ModuleID = 1
ORDER BY
instanceDate) AS X
OUTER APPLY
(SELECT TOP 1
*
FROM
(SELECT
instanceDate, d.ModuleID, iResult, d.interactionNum
FROM
(firstTable c
INNER JOIN
secondTable d ON d.someID = c.someID)
WHERE
c.someID = 2
AND d.otherID LIKE 'xyz'
AND c.ModuleID = 1
AND d.interactionNum = X.interactionNum
AND c.instanceDate < X.instanceDate) X2
ORDER BY
instanceDate DESC) Y
WHERE
NOT EXISTS (SELECT Y.interactionNum INTERSECT SELECT X.interactionNum)
But this is returning an interim result set like this:
instanceDate ModuleID iResult interactionNum
10/6/15 16:10 1 68 1
10/6/15 16:14 1 100 2
10/15/15 16:57 1 100 3
10/15/15 16:59 1 100 4
and the problem is that interactionNum 3, 4 do not belong in this result set. They would go in the next result set when I loop over this query. How do I keep them out of the result set in this iteration? I need the result set from this query to just include the first two rows, 'seeing' that row 3 of the source data has a lower value for interactionNum than row 2 has.
Not sure how ModuleID was supposed to be used, but I guess you're looking for something like this:
select min (instanceDate), [moduleID], avg([iResult])
from (
select *,row_number() over (partition by [moduleID] order by instanceDate) as RN
from Table1
) X
group by [moduleID], RN - [interactionNum]
The idea here is to create a running number with row_number for each moduleid, and then use the difference between that and InteractionNum as grouping criteria.
Example in SQL Fiddle
Here is my solution, although it should be said, I think @JamesZ's answer is cleaner.
I created a new field called newinstance which is 1 wherever your interactionNum is 1. I then created a rolling sum of newinstance, called rollinginstance, to group on.
Change the last select to SELECT * FROM cte2 to show all the fields I added.
IF OBJECT_ID('tempdb..#tmpData') IS NOT NULL
DROP TABLE #tmpData
CREATE TABLE #tmpData (recordID INT, instanceDate DATETIME, moduleID INT, iResult INT, interactionNum INT)
INSERT INTO #tmpData
SELECT 1356,'10/6/15 16:14',1,68,1 UNION
SELECT 1357,'10/7/15 16:22',1,100,2 UNION
SELECT 1434,'10/9/15 16:58',1,52,1 UNION
SELECT 1435,'10/11/15 17:00',1,60,2 UNION
SELECT 1436,'10/15/15 16:57',1,100,3 UNION
SELECT 1437,'10/15/15 16:59',1,100,4
;WITH cte1 AS
(
SELECT *,
CASE WHEN interactionNum=1 THEN 1 ELSE 0 END AS newinstance,
ROW_NUMBER() OVER(ORDER BY recordID) as rowid
FROM #tmpData
), cte2 AS
(
SELECT *,
(select SUM(newinstance) from cte1 b where b.rowid<=a.rowid) as rollinginstance
FROM cte1 a
)
SELECT MIN(instanceDate) AS instanceDate, moduleID, AVG(iResult) AS iResult
FROM cte2
GROUP BY moduleID, rollinginstance
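As an aside, on SQL Server 2012 or later the correlated subquery in cte2 can be replaced by a windowed running sum; this variant is only a sketch and will not run on the 2008 R2 instance named in the question:
;WITH cte1 AS
(
    SELECT *,
        CASE WHEN interactionNum = 1 THEN 1 ELSE 0 END AS newinstance,
        ROW_NUMBER() OVER(ORDER BY recordID) AS rowid
    FROM #tmpData
), cte2 AS
(
    SELECT *,
        SUM(newinstance) OVER(ORDER BY rowid ROWS UNBOUNDED PRECEDING) AS rollinginstance
    FROM cte1
)
SELECT MIN(instanceDate) AS instanceDate, moduleID, AVG(iResult) AS iResult
FROM cte2
GROUP BY moduleID, rollinginstance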

Get the missing value in a sequence of numbers

I made the following query for the SQL Server backend
SELECT TOP(1) (v.rownum + 99)
FROM
(
SELECT incrementNo-99 as id, ROW_NUMBER() OVER (ORDER BY incrementNo) as rownum
FROM proposals
WHERE [year] = '12'
) as v
WHERE v.rownum <> v.id
ORDER BY v.rownum
to find the first unused proposal number.
(It's not about the lastrecord +1)
But I realized ROW_NUMBER is not supported in Access.
I looked and I can't find something similar.
Does anyone know how to get the same result as a ROW_NUMBER in access?
Maybe there's a better way of doing this.
Actually, people insert their proposal No (incrementID) with no constraint. This number looks like 13-152: xx- is the current year and -xxx is the proposal number. The last 3 digits are supposed to be incremental, but in some cases, maybe 10 times a year, they have to skip some numbers. That's why I can't use an auto increment.
So I do this query so when they open the form, the default number is the first unused.
How it works:
Because the number starts at 100, I do -99 so it starts at 1.
Then I compare the row number with the id so it looks like this
ROW NUMBER | ID
1 1 (100)
2 2 (101)
3 3 (102)
4 5 (104)<--------- WRONG
5 6 (105)
So now I know that 4 was skipped. So I return (4 + 99) = 103
If there's a better way, I don't mind changing but I really like this query.
If there's really no other way and I can't simulate a row number in Access, I will use a pass-through query.
Thank you
From your question it appears that you are looking for a gap in a sequence of numbers, so:
SELECT b.akey, (
SELECT Top 1 akey
FROM table1 a
WHERE a.akey > b.akey
ORDER BY a.akey) AS [next]
FROM table1 AS b
WHERE (
SELECT Top 1 akey
FROM table1 a
WHERE a.akey > b.akey
ORDER BY a.akey) <> [b].[akey]+1
ORDER BY b.akey
Where table1 is the table and akey is the sequenced number.
SELECT T.Value, T.next - 1 FROM (
SELECT b.Value, (
SELECT Top 1 Value
FROM tblSequence a
WHERE a.Value > b.Value
ORDER BY a.Value) AS [next]
FROM tblSequence b
) T WHERE T.next <> T.Value + 1
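If all that's needed is the single first unused number for the year, a correlated NOT EXISTS (which Access supports) gets it in one query. A sketch against the proposals table from the question, assuming incrementNo holds the numeric part, the numbering starts at 100, and number 100 already exists for that year:
SELECT MIN(b.incrementNo) + 1 AS firstUnused
FROM proposals AS b
WHERE b.[year] = '12'
AND NOT EXISTS (
    SELECT *
    FROM proposals AS a
    WHERE a.[year] = '12'
    AND a.incrementNo = b.incrementNo + 1
);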
