I have a lookup_table as follows:
| id   | Val |
+------+-----+
| 1    | A   |
| 11   | B   |
| 111  | C   |
| 1111 | D   |
I am creating words using values from lookup_table, encoding each letter as $id!, and saving them into another table.
Example: bad - $11!$1!$1111!
So my data_table will be something like,
| expression      |
+-----------------+
| $111!$1!$11!    | -- cab
| $1111!$1!$1111! | -- dad
| $11!$1!$1111!   | -- bad
I want to reverse-build the word from the data_table.
What I've tried: I used CHARINDEX on $ and ! to get the first id from the expression and tried to replace it with the matching val from lookup_table recursively using a CTE. I was not getting the exact result, but with some filtering, I got something close.
The query I've tried:
;WITH cte AS
(
    SELECT replace(expression, '$' + CONVERT(varchar(10), id) + '!', val) AS expression,
           cnt = 1
    FROM data_table
    JOIN lookup_table ON id =
        SUBSTRING(
            SUBSTRING(expression, CHARINDEX('$', expression) + 1, LEN(expression) - CHARINDEX('$', expression)),
            1, CHARINDEX('!', expression) - 2)

    UNION ALL

    SELECT replace(expression, '$' + CONVERT(varchar(10), id) + '!', val) AS expression,
           cnt = cnt + 1
    FROM cte
    JOIN lookup_table ON id =
        SUBSTRING(
            SUBSTRING(expression, CHARINDEX('$', expression) + 1, LEN(expression) - CHARINDEX('$', expression)),
            1, CHARINDEX('!', expression) - (cnt + 2))
    WHERE CHARINDEX('$', expression) > 0
)
SELECT expression
FROM cte
WHERE CHARINDEX('$', expression) = 0
Current output:
| expression |
+------------+
| DAD |
| CAB |
Expected output:
| expression |
+------------+
| DAD |
| CAB |
| BAD |
What am I doing wrong?
Edit: There was a typo in the data setup in the fiddle. The d in bad had five 1s instead of four. Thanks, DarkRob, for pointing it out.
You may try this. Instead of a recursive CTE, you can use multiple CTEs to build your expression. SQL Server has introduced the STRING_SPLIT function to convert a cell into rows on a particular delimiter, which makes our work a lot easier.
First we convert each cell value into individual rows. Then we can easily get each letter of the word with an inner join to the lookup table. At the last, using STUFF with FOR XML PATH, we reassemble the string as we need.
;WITH CTE AS (
    SELECT ROW_NUMBER() OVER (ORDER BY EXPRESSION) AS SLNO, *
    FROM data_table
),
CT AS (
    SELECT *, REPLACE(VALUE, '$', '') AS NEWVAL
    FROM CTE
    CROSS APPLY STRING_SPLIT(EXPRESSION, '!')
    WHERE VALUE <> ''
),
CTFINAL AS (
    SELECT *
    FROM CT
    INNER JOIN lookup_table AS LT ON CT.NEWVAL = LT.id
)
--SELECT * FROM CTFINAL
SELECT DISTINCT SLNO,
       STUFF((SELECT '' + VAL + ''
              FROM CTFINAL AS CFF
              WHERE CFF.SLNO = CF.SLNO
              FOR XML PATH('')), 1, 0, '') AS MYVAL
FROM CTFINAL AS CF
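If you are on SQL Server 2017 or later, STRING_AGG can replace the STUFF/FOR XML PATH reassembly. A minimal alternative sketch under that assumption (note STRING_SPLIT does not formally guarantee row order, so for strictly guaranteed letter order you would need an ordinal, e.g. the enable_ordinal argument added in SQL Server 2022):
;WITH CTE AS (
    SELECT ROW_NUMBER() OVER (ORDER BY expression) AS SLNO, expression
    FROM data_table
)
SELECT c.SLNO,
       STRING_AGG(LT.Val, '') AS MYVAL   -- concatenate the looked-up letters
FROM CTE c
CROSS APPLY STRING_SPLIT(c.expression, '!') s
INNER JOIN lookup_table AS LT ON LT.id = REPLACE(s.value, '$', '')
WHERE s.value <> ''
GROUP BY c.SLNO;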
I have text that includes, for example, the # (sharp) character. The string contains parameters. Parameters begin with # and end with #.
declare @TEXT varchar(200) = 'Dear #NAMEOFGUEST# , we glad to see youSOMEHOTEL tomorrow.'
declare @scanChar char(1) = '#'

select
SUBSTRING(@TEXT, CHARINDEX(@scanChar, @TEXT) + 1, (((LEN(@TEXT)) - CHARINDEX(@scanChar, REVERSE(@TEXT))) - CHARINDEX(@scanChar, @TEXT)))
Returns:
NAMEOFGUEST
It's the correct result.
When the string contains only one parameter, #NAMEOFGUEST#, it works. But if we also wrap SOMEHOTEL in #, as #SOMEHOTEL#, the result is not what we want.
declare @TEXT varchar(200) = 'Dear #NAMEOFGUEST# , we glad to see you #SOMEHOTEL# tomorrow.'
declare @scanChar char(1) = '#'
Returns:
NAMEOFGUEST# , we glad to see you #SOMEHOTEL
I want the same result as before: NAMEOFGUEST only.
Using CHARINDEX(@FindString, @PrintData, CHARINDEX(@FindString, @PrintData) + 1) you can find the second occurrence of the #; based on that, the remaining calculation can be done.
The following query will work.
DECLARE @PrintData AS VARCHAR (200) = 'Dear #NAMEOFGUEST# , we glad to see you #SOMEHOTEL# tomorrow.';
DECLARE @FindString AS CHAR (1) = '#';
DECLARE @LenFindString AS INT = LEN(@FindString);

SELECT SUBSTRING(@PrintData,
                 CHARINDEX(@FindString, @PrintData) + @LenFindString,
                 CHARINDEX(@FindString, @PrintData, CHARINDEX(@FindString, @PrintData) + 1) - (CHARINDEX(@FindString, @PrintData) + @LenFindString)
       );
Demo on db<>fiddle
You can use a recursive approach like this:
A mockup table to simulate a set-oriented scenario:
declare @tbl TABLE(ID INT IDENTITY, SomeComment VARCHAR(100), SomeString varchar(200));
INSERT INTO @tbl VALUES ('3 Terms','Dear #NAMEOFGUEST# , we glad to see you #SOMEHOTEL# tomorrow. And even #AThirdOne# is here.')
                       ,('1 Term','Dear #NAMEOFGUEST# , we glad to see you soon.')
                       ,('No Term','Dear Guest, nice to see you.')
                       ,('invalid 1','Dear Guest, nice to #see you.')
                       ,('invalid ?','Dear #Guest, nice# to see you.');
declare @scanChar char(1) = '#';
--the query
WITH recCTE AS
(
    SELECT t.ID
          ,t.SomeComment
          ,t.SomeString AS TextToWork
          ,1 AS TermIndex
          ,D.*
    FROM @tbl t
    OUTER APPLY(SELECT CHARINDEX(@scanChar,t.SomeString)) A(StartingAt)
    OUTER APPLY(SELECT CHARINDEX(@scanChar,t.SomeString,A.StartingAt+1)) B(EndingAt)
    OUTER APPLY(SELECT CASE WHEN A.StartingAt>0 AND B.EndingAt>0 THEN SUBSTRING(t.SomeString,A.StartingAt+1,B.EndingAt-A.StartingAt-1) END) C(TermCandidate)
    OUTER APPLY(SELECT A.StartingAt,B.EndingAt,C.TermCandidate,SUBSTRING(t.SomeString,B.EndingAt+1,1000) AS RestString) D

    UNION ALL

    SELECT t.ID
          ,t.SomeComment
          ,t.RestString
          ,t.TermIndex+1
          ,D.*
    FROM recCTE t
    OUTER APPLY(SELECT CHARINDEX(@scanChar,t.RestString)) A(StartingAt)
    OUTER APPLY(SELECT CHARINDEX(@scanChar,t.RestString,A.StartingAt+1)) B(EndingAt)
    OUTER APPLY(SELECT CASE WHEN A.StartingAt>0 AND B.EndingAt>0 THEN SUBSTRING(t.RestString,A.StartingAt+1,B.EndingAt-A.StartingAt-1) END) C(TermCandidate)
    OUTER APPLY(SELECT A.StartingAt,B.EndingAt,C.TermCandidate,SUBSTRING(t.RestString,B.EndingAt+1,1000) AS RestString) D
    -- recurse only while the rest of the string still holds an even number of delimiters
    WHERE (LEN(t.RestString) - LEN(REPLACE(t.RestString,@scanChar,'')))%2=0 AND CHARINDEX(@scanChar,t.RestString)>0
)
SELECT ID
      ,SomeComment
      ,TermIndex
      --this will exclude "Guest, nice" due to the blank
      ,CASE WHEN CHARINDEX(' ',TermCandidate)>0 THEN NULL ELSE TermCandidate END AS Term
FROM recCTE
ORDER BY ID,TermIndex;
The result
+----+-------------+-----------+-------------+
| ID | SomeComment | TermIndex | Term        |
+----+-------------+-----------+-------------+
|  1 | 3 Terms     |         1 | NAMEOFGUEST |
|  1 | 3 Terms     |         2 | SOMEHOTEL   |
|  1 | 3 Terms     |         3 | AThirdOne   |
|  2 | 1 Term      |         1 | NAMEOFGUEST |
|  3 | No Term     |         1 | NULL        |
|  4 | invalid 1   |         1 | NULL        |
|  5 | invalid ?   |         1 | NULL        |
+----+-------------+-----------+-------------+
We have a string variable that captures the string listed below:
String-like >>
Temp Table Temp | Temp1 Table1 Temp1 | Temp2 Table2 Temp2 | ABD EFG EFG
Now we need to check how many palindromes exist in this particular string.
So, can you help me with how to fetch the number of palindromes?
Note: the pipe "|" appears after every complete string.
The answer should be: 3
In the query I have written I used the REVERSE() / REPLACE() functions, but I was not able to work out how to split the string at every pipe symbol.
Please help me do that; I am a beginner in SQL Server.
It seems you are conflating your requirement with searching for palindromes, so I have put together a solution to your question as asked, as well as a few methods should anyone else come across this question looking for an answer relating to actual palindromes:
Answer to your question as asked
To do this, you can split your string on the delimiter and then split the result again on the spaces (I have included the function I've used here at the end). With this ordered list of words, you can compare the words in order to the words in reverse order to see if they are the same:
declare @s nvarchar(100) = 'Temp Table Temp | Temp1 Table1 Temp1 | Temp2 Table2 Temp2 | ABD EFG EFG';

with w as
(
    select s.item as s
          ,ss.rn
          ,row_number() over (partition by s.item order by ss.rn desc) as rrn
          ,ss.item as w
    from dbo.fn_StringSplit4k(@s,'|',null) as s
        cross apply dbo.fn_StringSplit4k(ltrim(rtrim(s.item)),' ',null) as ss
)
select w.s
      ,case when sum(case when w.w = wr.w then 1 else 0 end) = max(w.rn) then 1 else 0 end as p
from w
    join w as wr
        on w.s = wr.s
            and w.rn = wr.rrn
group by w.s
order by w.s
Which outputs:
+----------------------+---+
| s | p |
+----------------------+---+
| ABD EFG EFG | 0 |
| Temp1 Table1 Temp1 | 1 |
| Temp2 Table2 Temp2 | 1 |
| Temp Table Temp | 1 |
+----------------------+---+
Solution for actual palindromes
Firstly, to check if a string value is a proper palindrome (i.e., spelled the same forwards and backwards) is a trivial comparison of the original string with its reverse value, which in the example below correctly outputs 1:
declare @p nvarchar(100) = 'Temp Table elbaT pmeT';

select case when @p = reverse(@p)
            then 1
            else 0
       end as p
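Note that this comparison keeps spaces and follows the database collation for case sensitivity. If your definition of a palindrome should ignore spaces and case, a minimal sketch (assumption: that is the desired behaviour):
declare @p2 nvarchar(100) = 'Temp Table elbaT pmeT';

-- strip blanks and lower-case both sides before comparing
select case when replace(lower(@p2),' ','') = reverse(replace(lower(@p2),' ',''))
            then 1
            else 0
       end as p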
To do this across a set of delimited values within the same string, you should firstly feel bad for storing your data as a delimited string within your database and contemplate why you are doing this. Seriously, it's incredibly bad design and you should fix it as soon as possible. Once you have done that you can apply the above technique.
If that is genuinely unavoidable, however, you can split your string using one of many set-based table valued functions and then apply the above operation on the output:
declare @ps nvarchar(100) = 'Temp Table elbaT pmeT | Temp1 Table1 1elbaT 1pmeT | Temp2 Table2 Temp2 | ABD EFG EFG';

select ltrim(rtrim(s.item)) as s
      ,case when ltrim(rtrim(s.item)) = reverse(ltrim(rtrim(s.item))) then 1 else 0 end as p
from dbo.fn_StringSplit4k(@ps,'|',null) as s
Which outputs:
+---------------------------+---+
| s | p |
+---------------------------+---+
| Temp Table elbaT pmeT | 1 |
| Temp1 Table1 1elbaT 1pmeT | 1 |
| Temp2 Table2 Temp2 | 0 |
| ABD EFG EFG | 0 |
+---------------------------+---+
String split function
create function [dbo].[fn_StringSplit4k]
(
    @str nvarchar(4000) = ' '         -- String to split.
   ,@delimiter as nvarchar(1) = ','   -- Delimiting value to split on.
   ,@num as int = null                -- Which value to return, null returns all.
)
returns table
as
return
    -- Start tally table with 10 rows.
    with n(n) as (select 1 union all select 1 union all select 1 union all select 1 union all select 1 union all select 1 union all select 1 union all select 1 union all select 1 union all select 1)

    -- Select the same number of rows as characters in @str as incremental row numbers.
    -- Cross joins increase exponentially to a max possible 10,000 rows to cover largest @str length.
    ,t(t) as (select top (select len(isnull(@str,'')) a) row_number() over (order by (select null)) from n n1,n n2,n n3,n n4)

    -- Return the position of every value that follows the specified delimiter.
    ,s(s) as (select 1 union all select t+1 from t where substring(isnull(@str,''),t,1) = @delimiter)

    -- Return the start and length of every value, to use in the SUBSTRING function.
    -- ISNULL/NULLIF combo handles the last value where there is no delimiter at the end of the string.
    ,l(s,l) as (select s,isnull(nullif(charindex(@delimiter,isnull(@str,''),s),0)-s,4000) from s)

    select rn
          ,item
    from(select row_number() over(order by s) as rn
               ,substring(@str,s,l) as item
         from l
        ) a
    where rn = @num
    or @num is null;
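For example, with a hypothetical input string, returning all items and then only the second one:
-- all items with their positions
select rn, item from dbo.fn_StringSplit4k('a,b,c', ',', null);

-- just the second item
select item from dbo.fn_StringSplit4k('a,b,c', ',', 2);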
I'm pulling data from an API in JSON with a format like the example data below, where essentially every "row" is an array of values. The API doc defines the columns and their types in advance, so I know that col1 is, for example, a varchar, and that col2 is an int.
CREATE TEMP TABLE dat (data json);
INSERT INTO dat
VALUES ('{"COLUMNS":["col1","col2"],"DATA":[["a","1"],["b","2"]]}');
I want to transform this within PostgreSQL 9.3 such that I end up with:
col1 | col2
------------
a | 1
b | 2
Using json_array_elements I can get to:
SELECT json_array_elements(data->'DATA')
FROM dat
json_array_elements
---------------------
["a","1"]
["b","2"]
but then I can't figure out how to convert the JSON array to a PostgreSQL array so I can perform something like unnest(ARRAY['a','1']).
General case for unknown columns
To get a result like
col1 | col2
------------
a | 1
b | 2
will require a bunch of dynamic SQL, because you don't know the types of the columns in advance, nor the column names.
You can unpack the json with something like:
SELECT
json_array_element_text(colnames, colno) AS colname,
json_array_element_text(colvalues, colno) AS colvalue,
rn,
idx,
colno
FROM (
SELECT
data -> 'COLUMNS' AS colnames,
d AS colvalues,
rn,
row_number() OVER () AS idx
FROM (
SELECT data, row_number() OVER () AS rn FROM dat
) numbered
cross join json_array_elements(numbered.data -> 'DATA') d
) elements
cross join generate_series(0, json_array_length(colnames) - 1) colno;
producing a result set like:
colname | colvalue | rn | idx | colno
---------+----------+----+-----+-------
col1 | a | 1 | 1 | 0
col2 | 1 | 1 | 1 | 1
col1 | b | 1 | 2 | 0
col2 | 2 | 1 | 2 | 1
(4 rows)
You can then use this as input to the crosstab function from the tablefunc module with something like:
SELECT * FROM crosstab('
SELECT
to_char(rn,''00000000'')||''_''||to_char(idx,''00000000'') AS rowid,
json_array_element_text(colnames, colno) AS colname,
json_array_element_text(colvalues, colno) AS colvalue
FROM (
SELECT
data -> ''COLUMNS'' AS colnames,
d AS colvalues,
rn,
row_number() OVER () AS idx
FROM (
SELECT data, row_number() OVER () AS rn FROM dat
) numbered
cross join json_array_elements(numbered.data -> ''DATA'') d
) elements
cross join generate_series(0, json_array_length(colnames) - 1) colno;
') results(rowid text, col1 text, col2 text);
producing:
rowid | col1 | col2
---------------------+------+------
00000001_ 00000001 | a | 1
00000001_ 00000002 | b | 2
(2 rows)
The column names are not retained here.
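Note that crosstab ships in the tablefunc extension, so it has to be installed once per database (assuming you have the privileges):
CREATE EXTENSION IF NOT EXISTS tablefunc;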
If you were on 9.4 you could avoid the row_number() calls and use WITH ORDINALITY, making it much cleaner.
Simplified with fixed, known columns
Since you apparently know the number of columns and their types in advance the query can be considerably simplified.
SELECT
col1, col2
FROM (
SELECT
rn,
row_number() OVER () AS idx,
elem ->> 0 AS col1,
(elem ->> 1)::integer AS col2
FROM (
SELECT data, row_number() OVER () AS rn FROM dat
) numbered
cross join json_array_elements(numbered.data -> 'DATA') elem
ORDER BY 1, 2
) x;
result:
col1 | col2
------+------
a | 1
b | 2
(2 rows)
Using 9.4 WITH ORDINALITY
If you were using 9.4 you could keep it cleaner using WITH ORDINALITY:
SELECT
col1, col2
FROM (
SELECT
elem ->> 0 AS col1,
(elem ->> 1)::integer AS col2
FROM
dat
CROSS JOIN
json_array_elements(dat.data -> 'DATA') WITH ORDINALITY AS elements(elem, idx)
ORDER BY idx
) x;
This code worked fine for me; maybe it will be useful for someone.
select to_json(array_agg(t))
from (
select text, pronunciation,
(
select array_to_json(array_agg(row_to_json(d)))
from (
select part_of_speech, body
from definitions
where word_id=words.id
order by position asc
) d
) as definitions
from words
where text = 'autumn'
) t
Credits:
https://hashrocket.com/blog/posts/faster-json-generation-with-postgresql
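As a side note, PostgreSQL 9.3+ also has json_agg, which aggregates rows straight into a JSON array and shortens the outer to_json(array_agg(t)) pattern. A minimal sketch against the same (assumed) words table:
-- json_agg builds the JSON array directly, no array_agg/to_json round trip
select json_agg(t)
from (
    select text, pronunciation
    from words
    where text = 'autumn'
) t;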
; WITH cte AS
(SELECT p.BudgetNumber, t.MilestoneNumber FROM
(SELECT DISTINCT BudgetNumber FROM tblMilestones) p
CROSS JOIN
(SELECT DISTINCT MilestoneNumber FROM tblMilestoneTemplate) t)
SELECT BudgetNumber, MilestoneNumber FROM cte
EXCEPT (SELECT BudgetNumber, MilestoneNumber FROM tblMilestones)
ORDER BY BudgetNumber, MilestoneNumber
The query above creates all possible BudgetNumber and MilestoneNumber combinations using a cross join, and then attempts to locate combinations that are not in the tblMilestones table (I didn't create this database, I know the table prefixes are weird and this db isn't normalized).
There are no NULL entries in any of these fields. When I use this query with the EXCEPT clause above, I get some missing values (but not all), but I also get some non-missing values. When I change the EXCEPT to a LEFT JOIN, I get the same results. When I change the EXCEPT to a WHERE NOT EXISTS, I get no results at all. Can anyone please help?
SQLFiddle Output:
| BUDGETNUMBER | MILESTONENUMBER |
|--------------|-----------------|
| BA04001 | 0 |
| BA04001 | 99 |
| BA04005 | 0 |
| BA04005 | 99 |
| BA05001 | 0 |
| BA05001 | 99 |
| BA05002 | 0 |
| BA05002 | 99 |
Here is the way you would need to use NOT EXISTS correctly. You need to specify the WHERE clause inside the subquery, correlated on both columns, to get the correct result.
;
WITH cte
AS (
SELECT p.BudgetNumber
,t.MilestoneNumber
FROM (
SELECT DISTINCT BudgetNumber
FROM tblMilestones
) p
CROSS JOIN (
SELECT DISTINCT MilestoneNumber
FROM tblMilestoneTemplate
) t
)
SELECT BudgetNumber
,MilestoneNumber
FROM cte t
WHERE NOT EXISTS ( SELECT 1
FROM tblMilestones s
WHERE t.BudgetNumber = s.BudgetNumber
AND t.MilestoneNumber = s.MilestoneNumber )
ORDER BY BudgetNumber
,MilestoneNumber
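For comparison, the same anti-join can be written as a LEFT JOIN ... IS NULL (keeping the cte definition above); like NOT EXISTS, it must correlate on both columns:
SELECT t.BudgetNumber
      ,t.MilestoneNumber
FROM cte t
LEFT JOIN tblMilestones s
    ON  s.BudgetNumber    = t.BudgetNumber
    AND s.MilestoneNumber = t.MilestoneNumber
WHERE s.BudgetNumber IS NULL   -- keep only combinations with no match
ORDER BY t.BudgetNumber
        ,t.MilestoneNumber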
Look at the following two examples:
DECLARE @NoPrecision AS TABLE ( MyNumber DECIMAL )
INSERT INTO @NoPrecision ( MyNumber ) VALUES ( 12345.123456789 )
SELECT * FROM @NoPrecision
output: 12345
DECLARE @Precision AS TABLE ( MyNumber DECIMAL(10,4) )
INSERT INTO @Precision ( MyNumber ) VALUES ( 12345.123456789 )
SELECT * FROM @Precision
output: 12345.1235
Without an explicit precision and scale, DECIMAL defaults to DECIMAL(18,0) in SQL Server, which is why the first example loses the fractional part entirely.
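A quick way to confirm the default precision and scale of an unparameterized DECIMAL, as a small sketch:
DECLARE @d DECIMAL = 0;

-- inspect the variable's type metadata
SELECT SQL_VARIANT_PROPERTY(@d, 'Precision') AS [Precision],   -- returns 18
       SQL_VARIANT_PROPERTY(@d, 'Scale')     AS [Scale];       -- returns 0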
Applies to Microsoft SQL Server 2008 R2.
The problem is: if we have a few dozen OUTER APPLYs (30), they begin to work pretty slowly. And in the middle of each OUTER APPLY I have something more complicated than a simple select: a view.
Details
I'm building a sort of attribute system assigned to tables (in the database). Generally, a few tables hold references to a table of attributes (key, value).
The pseudo-structure looks like this:
DECLARE @Lot TABLE (
    LotId INT PRIMARY KEY IDENTITY,
    SomeText VARCHAR(8))

INSERT INTO @Lot
OUTPUT INSERTED.*
VALUES ('Hello'), ('World')

DECLARE @Attribute TABLE(
    AttributeId INT PRIMARY KEY IDENTITY,
    LotId INT,
    Val VARCHAR(8),
    Kind VARCHAR(8))

INSERT INTO @Attribute
OUTPUT INSERTED.* VALUES
(1, 'Foo1', 'Kind1'), (1, 'Foo2', 'Kind2'),
(2, 'Bar1', 'Kind1'), (2, 'Bar2', 'Kind2'), (2, 'Bar3', 'Kind3')
LotId SomeText
----------- --------
1 Hello
2 World
AttributeId LotId Val Kind
----------- ----------- -------- --------
1 1 Foo1 Kind1
2 1 Foo2 Kind2
3 2 Bar1 Kind1
4 2 Bar2 Kind2
5 2 Bar3 Kind3
I can now run a query such as:
SELECT
[l].[LotId]
, [SomeText]
, [Oa1].[AttributeId]
, [Oa1].[LotId]
, 'Kind1Val' = [Oa1].[Val]
, [Oa1].[Kind]
, [Oa2].[AttributeId]
, [Oa2].[LotId]
, 'Kind2Val' = [Oa2].[Val]
, [Oa2].[Kind]
, [Oa3].[AttributeId]
, [Oa3].[LotId]
, 'Kind3Val' = [Oa3].[Val]
, [Oa3].[Kind]
FROM @Lot AS l
OUTER APPLY(SELECT * FROM @Attribute AS la WHERE la.[LotId] = l.[LotId] AND la.[Kind] = 'Kind1') AS Oa1
OUTER APPLY(SELECT * FROM @Attribute AS la WHERE la.[LotId] = l.[LotId] AND la.[Kind] = 'Kind2') AS Oa2
OUTER APPLY(SELECT * FROM @Attribute AS la WHERE la.[LotId] = l.[LotId] AND la.[Kind] = 'Kind3') AS Oa3
LotId SomeText AttributeId LotId Kind1Val Kind AttributeId LotId Kind2Val Kind AttributeId LotId Kind3Val Kind
----------- -------- ----------- ----------- -------- -------- ----------- ----------- -------- -------- ----------- ----------- -------- --------
1 Hello 1 1 Foo1 Kind1 2 1 Foo2 Kind2 NULL NULL NULL NULL
2 World 3 2 Bar1 Kind1 4 2 Bar2 Kind2 5 2 Bar3 Kind3
I want a simple way to get a pivot of the attribute values, with results even for Lot rows that lack some attribute, such as Kind3.
I know about Microsoft's PIVOT, and it is not simple and does not fit here.
Finally: what will be faster and still give the same results?
In order to get the result you can unpivot and then pivot the data. There are two ways you can perform this. First, you can use the UNPIVOT and PIVOT functions:
select *
from
(
select LotId,
SomeText,
col+'_'+CAST(rn as varchar(10)) col,
value
from
(
select l.LotId,
l.SomeText,
cast(a.AttributeId as varchar(8)) attributeid,
cast(a.LotId as varchar(8)) a_LotId,
a.Val,
a.Kind,
ROW_NUMBER() over(partition by l.lotid order by a.attributeid) rn
from @Lot l
left join @Attribute a
on l.LotId = a.LotId
) src
unpivot
(
value
for col in (attributeid, a_Lotid, val, kind)
) unpiv
) d
pivot
(
max(value)
for col in (attributeid_1, a_LotId_1, Val_1, Kind_1,
attributeid_2, a_LotId_2, Val_2, Kind_2,
attributeid_3, a_LotId_3, Val_3, Kind_3)
) piv
See SQL Fiddle with Demo.
Or starting in SQL Server 2008+, you can use CROSS APPLY with a VALUES clause to unpivot the data:
select *
from
(
select LotId,
SomeText,
col+'_'+CAST(rn as varchar(10)) col,
value
from
(
select l.LotId,
l.SomeText,
cast(a.AttributeId as varchar(8)) attributeid,
cast(a.LotId as varchar(8)) a_LotId,
a.Val,
a.Kind,
ROW_NUMBER() over(partition by l.lotid order by a.attributeid) rn
from @Lot l
left join @Attribute a
on l.LotId = a.LotId
) src
cross apply
(
values ('attributeid', attributeid),('LotId', a_LotId), ('Value', Val), ('Kind', Kind)
) c (col, value)
) d
pivot
(
max(value)
for col in (attributeid_1, LotId_1, Value_1, Kind_1,
attributeid_2, LotId_2, Value_2, Kind_2,
attributeid_3, LotId_3, Value_3, Kind_3)
) piv
See SQL Fiddle with Demo.
The unpivot process takes the multiple columns for each LotId and SomeText and converts them into rows, giving the result:
| LOTID | SOMETEXT | COL | VALUE |
--------------------------------------------
| 1 | Hello | attributeid_1 | 1 |
| 1 | Hello | LotId_1 | 1 |
| 1 | Hello | Value_1 | Foo1 |
| 1 | Hello | Kind_1 | Kind1 |
| 1 | Hello | attributeid_2 | 2 |
I added a row_number() to the inner subquery, to be used to create the new column names to pivot. Once the names are created, the pivot can be applied to the new columns, giving the final result.
This could also be done using dynamic SQL (note it references permanent Lot and Attribute tables, since table variables are not visible inside the dynamically executed string):
DECLARE @cols AS NVARCHAR(MAX),
        @query AS NVARCHAR(MAX)

select @cols = STUFF((SELECT ',' + QUOTENAME(col+'_'+rn)
from
(
select
cast(ROW_NUMBER() over(partition by l.lotid order by a.attributeid) as varchar(10)) rn
from Lot l
left join Attribute a
on l.LotId = a.LotId
) t
cross apply (values ('attributeid', 1),
('LotId', 2),
('Value', 3),
('Kind', 4)) c (col, so)
group by col, rn, so
order by rn, so
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,1,'')
set @query = 'SELECT LotId,
      SomeText,' + @cols + '
from
(
select LotId,
SomeText,
col+''_''+CAST(rn as varchar(10)) col,
value
from
(
select l.LotId,
l.SomeText,
cast(a.AttributeId as varchar(8)) attributeid,
cast(a.LotId as varchar(8)) a_LotId,
a.Val,
a.Kind,
ROW_NUMBER() over(partition by l.lotid order by a.attributeid) rn
from Lot l
left join Attribute a
on l.LotId = a.LotId
) src
cross apply
(
values (''attributeid'', attributeid),(''LotId'', a_LotId), (''Value'', Val), (''Kind'', Kind)
) c (col, value)
) x
pivot
(
max(value)
for col in (' + @cols + ')
) p '
execute(@query)
See SQL Fiddle with Demo
All three versions will give the same result:
| LOTID | SOMETEXT | ATTRIBUTEID_1 | LOTID_1 | VALUE_1 | KIND_1 | ATTRIBUTEID_2 | LOTID_2 | VALUE_2 | KIND_2 | ATTRIBUTEID_3 | LOTID_3 | VALUE_3 | KIND_3 |
-----------------------------------------------------------------------------------------------------------------------------------------------------------
| 1 | Hello | 1 | 1 | Foo1 | Kind1 | 2 | 1 | Foo2 | Kind2 | (null) | (null) | (null) | (null) |
| 2 | World | 3 | 2 | Bar1 | Kind1 | 4 | 2 | Bar2 | Kind2 | 5 | 2 | Bar3 | Kind3 |