I'd like to know how I can replace multiple text values in a string in SQL.
I have a formula that I get from a table, but inside that formula there are some text values in apostrophes that I need to replace with numeric values from another table. Example:
Table_Values
ID | DESC | VALUE
01 | ABC  | 5
02 | DEF  | 10
03 | GHI  | 15
Table_Formula
ID | FORMULA
01 | X='ABC'+'DEF'+'GHI'
The basic idea is to get the same formula with a result like this:
X='5'+'10'+'15'
Any idea or example would be great. Thanks.
I don't know why your data is stored like that, but here is my attempt to solve your problem.
First, you need a pattern splitter to parse your FORMULA. Here is one taken from Dwain Camps' article.
-- PatternSplitCM will split a string based on a pattern of the form
-- supported by LIKE and PATINDEX
--
-- Created by: Chris Morris 12-Oct-2012
CREATE FUNCTION [dbo].[PatternSplitCM]
(
@List VARCHAR(8000) = NULL
,@Pattern VARCHAR(50)
) RETURNS TABLE WITH SCHEMABINDING
AS
RETURN
WITH numbers AS (
SELECT TOP(ISNULL(DATALENGTH(@List), 0))
n = ROW_NUMBER() OVER(ORDER BY (SELECT NULL))
FROM
(VALUES (0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) d (n),
(VALUES (0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) e (n),
(VALUES (0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) f (n),
(VALUES (0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) g (n)
)
SELECT
ItemNumber = ROW_NUMBER() OVER(ORDER BY MIN(n)),
Item = SUBSTRING(@List,MIN(n),1+MAX(n)-MIN(n)),
[Matched]
FROM (
SELECT n, y.[Matched], Grouper = n - ROW_NUMBER() OVER(ORDER BY y.[Matched],n)
FROM numbers
CROSS APPLY (
SELECT [Matched] = CASE WHEN SUBSTRING(@List,n,1) LIKE @Pattern THEN 1 ELSE 0 END
) y
) d
GROUP BY [Matched], Grouper
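Before wiring the splitter into the final query, here is a small illustrative call (my addition, not part of the original answer) showing how it tokenizes the right-hand side of the sample formula; operand and operator tokens come back as alternating rows:
-- Expected rows: 'ABC' / + / 'DEF' / + / 'GHI', with Matched = 1 on the operator rows
SELECT ItemNumber, Item, [Matched]
FROM dbo.PatternSplitCM('''ABC''+''DEF''+''GHI''', '[+-/\*]');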
Here is your final query. This uses a combination of string functions like CHARINDEX, LEFT, RIGHT and string concatenation using FOR XML PATH(''):
WITH Cte AS(
SELECT
f.*,
LHS = LEFT(f.FORMULA, CHARINDEX('=', f.FORMULA) - 1),
RHS = RIGHT(f.FORMULA, LEN(f.FORMULA) - CHARINDEX('=', f.FORMULA)),
s.*,
v.VALUE
FROM Table_Formula f
CROSS APPLY dbo.PatternSplitCM(RIGHT(f.FORMULA, LEN(f.FORMULA) - CHARINDEX('=', f.FORMULA)), '[+-/\*]') s
LEFT JOIN Table_Values v
ON v.[DESC] = REPLACE(s.Item, '''', '')
)
--SELECT * FROM Cte
SELECT
c.ID,
c.FORMULA,
LHS + '=' + STUFF((
SELECT ISNULL('''' + CONVERT(VARCHAR(5), VALUE) + '''', ITEM)
FROM Cte
WHERE ID = c.ID
ORDER BY ItemNumber
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)')
, 1, 0, '')
FROM Cte c
GROUP BY C.ID, c.FORMULA, c.LHS
SQL Fiddle
RESULT
| ID | FORMULA | |
|----|---------------------|-----------------|
| 1 | X='ABC'+'DEF'+'GHI' | X='5'+'10'+'15' |
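As a side note (my addition, not part of the original answer): on SQL Server 2017 or later the FOR XML PATH('') concatenation could be replaced with STRING_AGG, assuming the same Cte as above:
SELECT c.ID,
c.FORMULA,
Result = c.LHS + '=' + STRING_AGG(ISNULL('''' + CONVERT(VARCHAR(5), c.VALUE) + '''', c.Item), '') WITHIN GROUP (ORDER BY c.ItemNumber)
FROM Cte c
GROUP BY c.ID, c.FORMULA, c.LHS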
Given json like this...
{"setting1":"A","setting2":"B","setting3":"C"}
I would like to see results like...
+----------+-------+
| name | value |
+----------+-------+
| setting1 | A |
| setting2 | B |
| setting3 | C |
+----------+-------+
My struggle is figuring out how to extract each key's name (i.e., "setting1", "setting2", "setting3", etc.).
I could do something like the following query, but I don't know how many settings there will be and what their names will be, so I'd like something more dynamic.
SELECT
B.name,
B.value
FROM OPENJSON(@json) WITH
(
setting1 varchar(50) '$.setting1',
setting2 varchar(50) '$.setting2',
setting3 varchar(50) '$.setting3'
) A
CROSS APPLY
(
VALUES
('setting1', A.setting1),
('setting2', A.setting2),
('setting3', A.setting3)
) B (name, value)
With XML, I could do something simple like this:
DECLARE @xml XML = '<settings><setting1>A</setting1><setting2>B</setting2><setting3>C</setting3></settings>'
SELECT
A.setting.value('local-name(.)', 'VARCHAR(50)') name,
A.setting.value('.', 'VARCHAR(50)') value
FROM @xml.nodes('settings/*') A (setting)
Any way to do something similar with SQL Server's json functionality?
Aaron Bertrand has written about retrieving JSON keys and values in Advanced JSON Techniques:
SELECT x.[Key], x.[Value]
FROM OPENJSON(@Json, '$') AS x;
Returns:
Key Value
------------------
setting1 A
setting2 B
setting3 C
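For a self-contained test, assuming the @Json variable holds the sample document from the question:
DECLARE @Json NVARCHAR(MAX) = N'{"setting1":"A","setting2":"B","setting3":"C"}';
-- With no WITH clause, OPENJSON returns one row per key: [key], [value], [type]
SELECT x.[Key], x.[Value]
FROM OPENJSON(@Json, '$') AS x;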
Option Using a Table
Declare @YourTable table (ID int,JSON_String varchar(max))
Insert Into @YourTable values
(1,'{"setting1":"A","setting2":"B","setting3":"C"}')
Select A.ID
,C.*
From @YourTable A
Cross Apply (values (try_convert(xml,replace(replace(replace(replace(replace(JSON_String,'"',''),'{','<row '),'}','"/>'),':','="'),',','" '))) ) B (XMLData)
Cross Apply (
Select Name = attr.value('local-name(.)','varchar(100)')
,Value = attr.value('.','varchar(max)')
From B.XMLData.nodes('/row') as C1(r)
Cross Apply C1.r.nodes('./@*') as C2(attr)
) C
Returns
ID Name Value
1 setting1 A
1 setting2 B
1 setting3 C
Option Using a String Variable
Declare @String varchar(max) = '{"setting1":"A","setting2":"B","setting3":"C"}'
Select C.*
From (values (try_convert(xml,replace(replace(replace(replace(replace(@String,'"',''),'{','<row '),'}','"/>'),':','="'),',','" '))) ) A (XMLData)
Cross Apply (
Select Name = attr.value('local-name(.)','varchar(100)')
,Value = attr.value('.','varchar(max)')
From A.XMLData.nodes('/row') as C1(r)
Cross Apply C1.r.nodes('./@*') as C2(attr)
) C
Returns
Name Value
setting1 A
setting2 B
setting3 C
If you are open to a TVF:
The following requires my Extract UDF. This function was created because I was tired of extracting strings with PATINDEX, CHARINDEX, LEFT, RIGHT, etc. It is a modified tally parser which accepts two different delimiters.
Example
Declare @YourTable table (ID int,JSON_String varchar(max))
Insert Into @YourTable values
(1,'{"setting1":{"global":"A","type":"1"},"setting2":{"global":"B","type":"1"},"setting3":{"global":"C","type":"1"}} ')
Select A.ID
,B.Setting
,C.*
From @YourTable A
Cross Apply (
Select Setting = replace(replace(B1.RetVal,'"',''),'{','')
,B2.RetVal
From [dbo].[udf-Str-Extract](A.JSON_String,',',':{') B1
Join [dbo].[udf-Str-Extract](A.JSON_String,':{','}') B2
on B1.RetSeq=B2.RetSeq
) B
Cross Apply (
Select Name = C1.RetVal
,Value = C2.RetVal
From [dbo].[udf-Str-Extract](','+B.RetVal,',"','":') C1
Join [dbo].[udf-Str-Extract](B.RetVal+',',':"','",') C2
on C1.RetSeq=C2.RetSeq
) C
Returns
ID Setting Name Value
1 setting1 global A
1 setting1 type 1
1 setting2 global B
1 setting2 type 1
1 setting3 global C
1 setting3 type 1
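For comparison (my addition, not part of the original answer), on SQL Server 2016+ the same nested document can be shredded with two OPENJSON calls instead of the UDF; a minimal sketch:
DECLARE @YourJson NVARCHAR(MAX) = N'{"setting1":{"global":"A","type":"1"},"setting2":{"global":"B","type":"1"},"setting3":{"global":"C","type":"1"}}';
SELECT Setting = s.[key]
,Name = kv.[key]
,Value = kv.[value]
FROM OPENJSON(@YourJson) AS s -- one row per settingN; s.[value] holds the inner object
CROSS APPLY OPENJSON(s.[value]) AS kv; -- one row per key inside that object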
The UDF, if interested:
CREATE FUNCTION [dbo].[udf-Str-Extract] (@String varchar(max),@Delimiter1 varchar(100),@Delimiter2 varchar(100))
Returns Table
As
Return (
with cte1(N) As (Select 1 From (Values(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) N(N)),
cte2(N) As (Select Top (IsNull(DataLength(@String),0)) Row_Number() over (Order By (Select NULL)) From (Select N=1 From cte1 N1,cte1 N2,cte1 N3,cte1 N4,cte1 N5,cte1 N6) A ),
cte3(N) As (Select 1 Union All Select t.N+DataLength(@Delimiter1) From cte2 t Where Substring(@String,t.N,DataLength(@Delimiter1)) = @Delimiter1),
cte4(N,L) As (Select S.N,IsNull(NullIf(CharIndex(@Delimiter1,@String,s.N),0)-S.N,8000) From cte3 S)
Select RetSeq = Row_Number() over (Order By N)
,RetPos = N
,RetVal = left(RetVal,charindex(@Delimiter2,RetVal)-1)
From (
Select *,RetVal = Substring(@String, N, L)
From cte4
) A
Where charindex(@Delimiter2,RetVal)>1
)
/*
Max Length of String 1MM characters
Declare @String varchar(max) = 'Dear [[FirstName]] [[LastName]], ...'
Select * From [dbo].[udf-Str-Extract] (@String,'[[',']]')
*/
I have a table like this:
CREATE TABLE #tbl(PackId NVARCHAR(MAX),AmntRemain NVARCHAR(MAX),AmntUsed NVARCHAR(MAX),IsCount NVARCHAR(MAX),IsValue NVARCHAR(MAX))
INSERT INTO #tbl VALUES('1,2','10,20','10,20','1,0','0,1')
The output of the above table is a single row of comma-separated lists.
My concern is how to get each position in those lists as its own row instead,
i.e. how to insert the above table's data into another table with each column's individual values as independent rows.
You should never store your data like this. You should really fix your ETL processes and database schema, per all the comments on your question.
Using cross apply(values ...) to unpivot your data, splitting the strings, and using conditional aggregation to pivot the data back to rows:
In SQL Server 2016+ you can use string_split().
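One caveat (my note): before SQL Server 2022, string_split() does not expose an ordinal, which this query needs for ItemNumber; on SQL Server 2022+ the optional enable_ordinal argument covers that. A minimal sketch:
-- SQL Server 2022+: the third argument enables the ordinal column,
-- which plays the role of ItemNumber in the query below.
SELECT value, ordinal
FROM string_split('10,20', ',', 1); -- returns (10, 1) and (20, 2)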
In SQL Server pre-2016, using a CSV Splitter table valued function by Jeff Moden:
;with cte as (
select
Id = row_number() over (order by (select null)) /* adding an id to uniquely identify rows */
, *
from #tbl
)
select
cte.Id
, s.ItemNumber
, PackId = max(case when u.column_name = 'PackId' then s.item end)
, AmntRemain = max(case when u.column_name = 'AmntRemain' then s.item end)
, AmntUsed = max(case when u.column_name = 'AmntUsed' then s.item end)
, IsCount = max(case when u.column_name = 'IsCount' then s.item end)
, IsValue = max(case when u.column_name = 'IsValue' then s.item end)
from cte
cross apply (values ('PackId',PackId),('AmntRemain',AmntRemain),('AmntUsed',AmntUsed),('IsCount',IsCount),('IsValue',IsValue)) u (column_name,column_value)
cross apply dbo.delimitedsplit8K(u.column_value,',') s
group by cte.Id, s.ItemNumber
rextester demo: http://rextester.com/ZIFFQX41171
returns:
+----+------------+--------+------------+----------+---------+---------+
| Id | ItemNumber | PackId | AmntRemain | AmntUsed | IsCount | IsValue |
+----+------------+--------+------------+----------+---------+---------+
| 1 | 1 | 1 | 10 | 10 | 1 | 0 |
| 1 | 2 | 2 | 20 | 20 | 0 | 1 |
+----+------------+--------+------------+----------+---------+---------+
splitting strings reference:
Tally OH! An Improved SQL 8K “CSV Splitter” Function - Jeff Moden
Splitting Strings : A Follow-Up - Aaron Bertrand
Split strings the right way – or the next best way - Aaron Bertrand
string_split() in SQL Server 2016 : Follow-Up #1 - Aaron Bertrand
Ordinal workaround for string_split() - Solomon Rutzky
You want to insert into 2 rows. Try:
INSERT INTO #tbl VALUES
('1','10','10','1','0')
,('2','20','20','0','1')
Just about any parse/split function will do. The one supplied below also returns an item sequence number, which can be used to join the individual elements into the appropriate row.
The number of elements is NOT fixed; one record could have 2 while another has 5.
I should add that if you can't use, or don't want, a UDF, it would be a small matter to create an in-line approach (a sketch follows the example below).
Example
Declare @Staging TABLE (PackId NVARCHAR(MAX),AmntRemain NVARCHAR(MAX),AmntUsed NVARCHAR(MAX),IsCount NVARCHAR(MAX),IsValue NVARCHAR(MAX))
INSERT INTO @Staging VALUES
('1,2','10,20','10,20','1,0','0,1')
Select B.*
From @Staging A
Cross Apply (
Select PackId = B1.RetVal
,AmntRemain = B2.RetVal
,AmntUsed = B3.RetVal
,IsCount = B4.RetVal
,IsValue = B5.RetVal
From [dbo].[udf-Str-Parse-8K](A.PackId,',') B1
Join [dbo].[udf-Str-Parse-8K](A.AmntRemain,',') B2 on B1.RetSeq=B2.RetSeq
Join [dbo].[udf-Str-Parse-8K](A.AmntUsed,',') B3 on B1.RetSeq=B3.RetSeq
Join [dbo].[udf-Str-Parse-8K](A.IsCount,',') B4 on B1.RetSeq=B4.RetSeq
Join [dbo].[udf-Str-Parse-8K](A.IsValue,',') B5 on B1.RetSeq=B5.RetSeq
) B
Returns
PackId AmntRemain AmntUsed IsCount IsValue
1 10 10 1 0
2 20 20 0 1
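As mentioned above, an in-line, UDF-free variant is possible. Here is one sketch (my addition, SQL Server 2016+), which wraps each CSV column in a JSON array so that OPENJSON's [key] column supplies the position used to align the five splits:
Declare @Staging TABLE (PackId NVARCHAR(MAX),AmntRemain NVARCHAR(MAX),AmntUsed NVARCHAR(MAX),IsCount NVARCHAR(MAX),IsValue NVARCHAR(MAX))
INSERT INTO @Staging VALUES ('1,2','10,20','10,20','1,0','0,1')
Select PackId = B1.[value]
,AmntRemain = B2.[value]
,AmntUsed = B3.[value]
,IsCount = B4.[value]
,IsValue = B5.[value]
From @Staging A
Cross Apply OpenJson('["' + replace(A.PackId,',','","') + '"]') B1
Cross Apply OpenJson('["' + replace(A.AmntRemain,',','","') + '"]') B2
Cross Apply OpenJson('["' + replace(A.AmntUsed,',','","') + '"]') B3
Cross Apply OpenJson('["' + replace(A.IsCount,',','","') + '"]') B4
Cross Apply OpenJson('["' + replace(A.IsValue,',','","') + '"]') B5
Where B1.[key] = B2.[key]
and B1.[key] = B3.[key]
and B1.[key] = B4.[key]
and B1.[key] = B5.[key]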
The UDF, if interested:
CREATE FUNCTION [dbo].[udf-Str-Parse-8K] (@String varchar(max),@Delimiter varchar(25))
Returns Table
As
Return (
with cte1(N) As (Select 1 From (Values(1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) N(N)),
cte2(N) As (Select Top (IsNull(DataLength(@String),0)) Row_Number() over (Order By (Select NULL)) From (Select N=1 From cte1 a,cte1 b,cte1 c,cte1 d) A ),
cte3(N) As (Select 1 Union All Select t.N+DataLength(@Delimiter) From cte2 t Where Substring(@String,t.N,DataLength(@Delimiter)) = @Delimiter),
cte4(N,L) As (Select S.N,IsNull(NullIf(CharIndex(@Delimiter,@String,s.N),0)-S.N,8000) From cte3 S)
Select RetSeq = Row_Number() over (Order By A.N)
,RetVal = LTrim(RTrim(Substring(@String, A.N, A.L)))
From cte4 A
);
--Original Source http://www.sqlservercentral.com/articles/Tally+Table/72993/
--Much faster than str-Parse, but limited to 8K
--Select * from [dbo].[udf-Str-Parse-8K]('Dog,Cat,House,Car',',')
--Select * from [dbo].[udf-Str-Parse-8K]('John||Cappelletti||was||here','||')
You can't. Unless you're inserting data from another table, you will have to create individual INSERT statements for each row you want to create.
I need to generate combinations from a string of numbers: 3, 4, 5, 6 and 7-element combinations.
For example, from this string:
01;05;06;03;02;10;11;
there are 7 numbers, so for 3-element combinations there will be 35 of them, and they should follow the order in which the numbers appear in the string.
like
01;05;06;|
01;05;03;|
01;05;02;|
01;05;10;|
01;05;11;|
01;06;03;|
01;06;02;|
01;06;10;|
01;06;11;|
01;03;02;|
01;03;10;|
01;03;11;|
01;02;10;|
01;02;11;|
01;10;11;|
05;06;03;|
05;06;02;|
05;06;10;|
05;06;11;|
05;03;02;|
05;03;10;|
05;03;11;|
05;02;10;|
05;02;11;|
05;10;11;|
06;03;02;|
06;03;10;|
06;03;11;|
06;02;10;|
06;02;11;|
06;10;11;|
03;02;10;|
03;02;11;|
03;10;11;|
02;10;11;|
You can do this with two inner joins after splitting the string.
rextester: http://rextester.com/JJGKI77804
String Splitter for the test:
/* Jeff Moden's http://www.sqlservercentral.com/articles/Tally+Table/72993/ */
create function dbo.DelimitedSplitN4K (@pString nvarchar(4000), @pDelimiter nchar(1))
returns table with schemabinding as
return
with e1(n) as (
select 1 union all select 1 union all select 1 union all
select 1 union all select 1 union all select 1 union all
select 1 union all select 1 union all select 1 union all select 1
)
, e2(n) as (select 1 from e1 a, e1 b)
, e4(n) as (select 1 from e2 a, e2 b)
, cteTally(n) as (select top (isnull(datalength(@pString)/2,0))
row_number() over (order by (select null)) from e4)
, cteStart(n1) as (select 1 union all
select t.n+1 from cteTally t where substring(@pString,t.n,1) = @pDelimiter)
, ctelen(n1,l1) as(select s.n1
, isnull(nullif(charindex(@pDelimiter,@pString,s.n1),0)-s.n1,4000)
from cteStart s
)
select Itemnumber = row_number() over(order by l.n1)
, Item = substring(@pString, l.n1, l.l1)
from ctelen l;
go
the query
declare @str nvarchar(4000)= '01;05;06;03;02;10;11;';
with cte as (
select ItemNumber, Item
from dbo.DelimitedSplitN4K(@str,';')
where Item != ''
)
select combo=a.Item+';'+b.Item+';'+c.Item
from cte as a
inner join cte as b on a.ItemNumber<b.ItemNumber
inner join cte as c on b.ItemNumber<c.ItemNumber
order by a.ItemNumber, b.ItemNumber, c.ItemNumber;
Results, ordered by ItemNumber:
01;05;06
01;05;03
01;05;02
01;05;10
01;05;11
01;06;03
01;06;02
01;06;10
01;06;11
01;03;02
01;03;10
01;03;11
01;02;10
01;02;11
01;10;11
05;06;03
05;06;02
05;06;10
05;06;11
05;03;02
05;03;10
05;03;11
05;02;10
05;02;11
05;10;11
06;03;02
06;03;10
06;03;11
06;02;10
06;02;11
06;10;11
03;02;10
03;02;11
03;10;11
02;10;11
If you want to return a single, pipe-delimited string:
with cte as (
select ItemNumber, Item
from dbo.DelimitedSplitN4K(@str,';')
where Item != ''
)
select combo=stuff(
(select '|'+a.Item+';'+b.Item+';'+c.Item
from cte as a
inner join cte as b on a.ItemNumber<b.ItemNumber
inner join cte as c on b.ItemNumber<c.ItemNumber
order by a.ItemNumber, b.ItemNumber, c.ItemNumber
for xml path (''), type).value('.','nvarchar(max)')
,1,1,'')
results:
01;05;06|01;05;03|01;05;02|01;05;10|01;05;11|01;06;03|01;06;02|01;06;10|01;06;11|01;03;02|01;03;10|01;03;11|01;02;10|01;02;11|01;10;11|05;06;03|05;06;02|05;06;10|05;06;11|05;03;02|05;03;10|05;03;11|05;02;10|05;02;11|05;10;11|06;03;02|06;03;10|06;03;11|06;02;10|06;02;11|06;10;11|03;02;10|03;02;11|03;10;11|02;10;11
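The same pattern extends to the 4, 5, 6 and 7-element combinations the question asks for by adding one more self-join per extra element. A sketch for the 4-element case (my addition, untested, same splitter and @str as above):
with cte as (
select ItemNumber, Item
from dbo.DelimitedSplitN4K(@str,';')
where Item != ''
)
select combo=a.Item+';'+b.Item+';'+c.Item+';'+d.Item
from cte as a
inner join cte as b on a.ItemNumber<b.ItemNumber
inner join cte as c on b.ItemNumber<c.ItemNumber
inner join cte as d on c.ItemNumber<d.ItemNumber
order by a.ItemNumber, b.ItemNumber, c.ItemNumber, d.ItemNumber;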
splitting strings reference:
Tally OH! An Improved SQL 8K “CSV Splitter” Function
Splitting Strings : A Follow-Up - Aaron Bertrand
Split strings the right way – or the next best way
I had nearly the same query, but the result came out somewhat different. Please check:
/*
create table Combination (id char(2))
insert into Combination values ('01'),('05'),('06'),('03'),('02'),('10'),('11')
*/
select c1.id, c2.id, c3.id, c1.id + ';' + c2.id + ';' + c3.id Combination
from Combination c1, Combination c2, Combination c3
where
c2.id between c1.id and c3.id
and c1.id <> c2.id
and c2.id <> c3.id
order by c1.id, c2.id, c3.id
The output is
I'm looking for a join expression for matching strings from two different tables which both contain the same sub-string of 4 consecutive characters.
For example, the following should match:
String1 String2
-------- -----------
xxjohnyy abcjohnabc [common substring: "john"]
xxjohnyy johnny [common substring: "john"]
birdsings ravenbird [common substring: "bird"]
singbird a singer [common substring: "sing"]
This problem is very similar to the Longest Common Substring problem: find the longest common substring and then keep the pairs whose common substring is at least 4 characters long. Articles on the Longest Common Substring problem will be helpful here.
This is a very good exercise. Here is my attempt using a tally table.
SQL Fiddle
;WITH E1(N) AS(
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
),
E2(N) AS(SELECT 1 FROM E1 a CROSS JOIN E1 b),
E4(N) AS(SELECT 1 FROM E2 a CROSS JOIN E2 b),
E8(N) AS(SELECT 1 FROM E4 a CROSS JOIN E4 b),
Tally(N) AS(
SELECT TOP (
SELECT
CASE
WHEN MAX(LEN(String1)) > MAX(LEN(String2)) THEN MAX(LEN(String1))
ELSE MAX(LEN(String2))
END
FROM TestTable
)
ROW_NUMBER() OVER(ORDER BY (SELECT NULL))
FROM E8
),
CteTable AS( -- Added an ID to uniquely identify each row
SELECT *, Id = ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) FROM TestTable
),
CteSubStr1 AS(
SELECT
ct.*,
substr = SUBSTRING(ct.String1, t.N, 4)
FROM CteTable ct
CROSS APPLY(
SELECT N FROM Tally
WHERE N <= LEN(ct.String1) - 3
)t
),
CteSubStr2 AS(
SELECT
ct.*,
substr = SUBSTRING(ct.String2, t.N, 4)
FROM CteTable ct
CROSS APPLY(
SELECT N FROM Tally
WHERE N <= LEN(ct.String2) - 3
)t
),
CteCommon AS(
SELECT * FROM CteSubStr1 c1
WHERE EXISTS(
SELECT 1 FROM CteSubStr2
WHERE
Id = c1.Id
AND substr = c1.substr
)
)
SELECT
String1, String2, substr
FROM (
SELECT *, RN = ROW_NUMBER() OVER(PARTITION BY Id ORDER BY LEN(substr) DESC)
FROM CteCommon
)t
WHERE RN = 1
Result
| String1 | String2 | substr |
|-----------|------------|--------|
| xxjohnyy | abcjohnabc | john |
| xxjohnyy | johnny | john |
| birdsings | ravenbird | bird |
| singbird | a singer | sing |
This part looks for the longest common substring.
SELECT
String1, String2, substr
FROM (
SELECT *, RN = ROW_NUMBER() OVER(PARTITION BY Id ORDER BY LEN(substr) DESC)
FROM CteCommon
)t
WHERE RN = 1
To get all the common substrings, use this instead:
SELECT * FROM CteCommon
;with pos as(select 1 as p
union all
select p + 1 from pos where p < 100),
uni as(select *, row_number() over(order by (select null)) id from t)
select t1.s1, t1.s2, ca.s
from uni t1
cross apply(select substring(t2.s2, p, 4) s
from uni t2
cross join pos
where t1.id = t2.id and
len(substring(t2.s2, p, 4)) = 4 and
t1.s1 like '%' + substring(t2.s2, p, 4) + '%')ca
Fiddle: http://sqlfiddle.com/#!3/bd4dd/16
Just change 100 to the actual length of your columns.
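One caveat worth adding (my note, not from the original answer): SQL Server caps recursive CTEs at 100 levels by default, so if you raise the 100 you also need a MAXRECURSION hint on the statement, e.g.:
;with pos as(select 1 as p
union all
select p + 1 from pos where p < 4000) -- e.g. the real max column length
select p from pos
option (maxrecursion 4000); -- or 0 to remove the limit entirely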
UNPIVOT will not return NULLs, but I need them in a comparison query. I am trying to avoid using ISNULL in the following example (because in the real SQL there are over 100 fields):
Select ID, theValue, column_name
From
(select ID,
ISNULL(CAST([TheColumnToCompare] AS VarChar(1000)), '') as TheColumnToCompare
from MyView
where The_Date = '04/30/2009'
) MA
UNPIVOT
(theValue FOR column_name IN
([TheColumnToCompare])
) AS unpvt
Any alternatives?
To preserve NULLs, use CROSS JOIN ... CASE:
select a.ID, b.column_name
, column_value =
case b.column_name
when 'col1' then a.col1
when 'col2' then a.col2
when 'col3' then a.col3
when 'col4' then a.col4
end
from (
select ID, col1, col2, col3, col4
from table1
) a
cross join (
select 'col1' union all
select 'col2' union all
select 'col3' union all
select 'col4'
) b (column_name)
Instead of:
select ID, column_name, column_value
From (
select ID, col1, col2, col3, col4
from table1
) a
unpivot (
column_value FOR column_name IN (
col1, col2, col3, col4)
) b
A text editor with column mode makes such queries easier to write. UltraEdit has it, so does Emacs. In Emacs it's called rectangular edit.
You might need to script it for 100 columns.
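If hand-editing 100 columns is impractical even with a column-mode editor, the two lists can also be generated from INFORMATION_SCHEMA.COLUMNS. A rough sketch (my addition; 'table1' is a placeholder, and STRING_AGG needs SQL Server 2017+, older versions can use the FOR XML PATH trick shown elsewhere on this page):
DECLARE @case NVARCHAR(MAX), @names NVARCHAR(MAX), @nl NCHAR(1) = CHAR(10);
-- One CASE branch per column, and one SELECT 'colname' per column for the cross join
SELECT @case = STRING_AGG(CONVERT(NVARCHAR(MAX), 'when ''' + COLUMN_NAME + ''' then a.' + QUOTENAME(COLUMN_NAME)), @nl),
@names = STRING_AGG(CONVERT(NVARCHAR(MAX), 'select ''' + COLUMN_NAME + ''''), ' union all ')
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'table1' -- placeholder table
AND COLUMN_NAME <> 'ID';
PRINT 'case b.column_name' + @nl + @case + @nl + 'end';
PRINT @names;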
It's a real pain. You have to switch the NULLs out before the UNPIVOT, because no row is produced for ISNULL() to operate on afterwards; code generation is your friend here.
I have the same problem with PIVOT as well. Missing rows turn into NULLs, which you have to wrap in ISNULL() all the way across the row if, for example, a missing value should be treated the same as 0.0.
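For the PIVOT side of that complaint, the wrapping looks like this; a toy sketch (my addition) with a hypothetical #Sales temp table:
-- Each pivoted cell with no source row comes back as NULL, so wrap every output column
SELECT Product,
[2023] = ISNULL([2023], 0.0),
[2024] = ISNULL([2024], 0.0)
FROM (SELECT Product, Yr, Amount FROM #Sales) s
PIVOT (SUM(Amount) FOR Yr IN ([2023], [2024])) p;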
I ran into the same problem. Using CROSS APPLY with a VALUES constructor instead of UNPIVOT solved it (CROSS APPLY is available from SQL Server 2005, the VALUES row constructor used here from SQL Server 2008). I found the solution based on the article An Alternative (Better?) Method to UNPIVOT,
and I made the following example to demonstrate that CROSS APPLY will NOT ignore NULLs the way UNPIVOT does.
create table #Orders (OrderDate datetime, product nvarchar(100), ItemsCount float, GrossAmount float, employee nvarchar(100))
insert into #Orders
select getutcdate(),'Windows',10,10.32,'Me'
union
select getutcdate(),'Office',31,21.23,'you'
union
select getutcdate(),'Office',31,55.45,'me'
union
select getutcdate(),'Windows',10,null,'You'
SELECT OrderDate, product,employee,Measure,MeasureType
from #Orders orders
CROSS APPLY (
VALUES ('ItemsCount',ItemsCount),('GrossAmount',GrossAmount)
)
x(MeasureType, Measure)
SELECT OrderDate, product,employee,Measure,MeasureType
from #Orders orders
UNPIVOT
(Measure FOR MeasureType IN
(ItemsCount,GrossAmount)
)AS unpvt;
drop table #Orders
Or, in SQL Server 2008, a shorter way:
...
cross join
(values('col1'), ('col2'), ('col3'), ('col4')) column_names(column_name)
Using dynamic SQL and COALESCE, I solved the problem like this:
DECLARE @SQL NVARCHAR(MAX)
DECLARE @cols NVARCHAR(MAX)
DECLARE @dataCols NVARCHAR(MAX)
SELECT
@dataCols = COALESCE(@dataCols + ', ' + 'ISNULL(' + Name + ',0) ' + Name , 'ISNULL(' + Name + ',0) ' + Name )
FROM Metric WITH (NOLOCK)
ORDER BY ID
SELECT
@cols = COALESCE(@cols + ', ' + Name , Name )
FROM Metric WITH (NOLOCK)
ORDER BY ID
SET @SQL = 'SELECT ArchiveID, MetricDate, BoxID, GroupID, ID MetricID, MetricName, Value
FROM
(SELECT ArchiveID, [Date] MetricDate, BoxID, GroupID, ' + @dataCols + '
FROM MetricData WITH (NOLOCK)
INNER JOIN Archive WITH (NOLOCK)
ON ArchiveID = ID
WHERE BoxID = ' + CONVERT(VARCHAR(40), @BoxID) + '
AND GroupID = ' + CONVERT(VARCHAR(40), @GroupID) + ') p
UNPIVOT
(Value FOR MetricName IN
(' + @cols + ')
)AS unpvt
INNER JOIN Metric WITH (NOLOCK)
ON MetricName = Name
ORDER BY MetricID, MetricDate'
EXECUTE( @SQL )
I've found left outer joining the UNPIVOT result to the full list of fields, conveniently pulled from INFORMATION_SCHEMA, to be a practical answer to this problem in some contexts.
-- test data
CREATE TABLE _t1(name varchar(20),object_id varchar(20),principal_id varchar(20),schema_id varchar(20),parent_object_id varchar(20),type varchar(20),type_desc varchar(20),create_date varchar(20),modify_date varchar(20),is_ms_shipped varchar(20),is_published varchar(20),is_schema_published varchar(20))
INSERT INTO _t1 SELECT 'blah1', 3, NULL, 4, 0, 'blah2', 'blah3', '20100402 16:59:23.267', NULL, 1, 0, 0
-- example
select c.COLUMN_NAME, Value
from INFORMATION_SCHEMA.COLUMNS c
left join (
select * from _t1
) q1
unpivot (Value for COLUMN_NAME in (name,object_id,principal_id,schema_id,parent_object_id,type,type_desc,create_date,modify_date,is_ms_shipped,is_published,is_schema_published)
) t on t.COLUMN_NAME = c.COLUMN_NAME
where c.TABLE_NAME = '_t1'
output looks like:
+----------------------+-----------------------+
| COLUMN_NAME | Value |
+----------------------+-----------------------+
| name | blah1 |
| object_id | 3 |
| principal_id | NULL | <======
| schema_id | 4 |
| parent_object_id | 0 |
| type | blah2 |
| type_desc | blah3 |
| create_date | 20100402 16:59:23.26 |
| modify_date | NULL | <======
| is_ms_shipped | 1 |
| is_published | 0 |
| is_schema_published | 0 |
+----------------------+-----------------------+
Writing in May 2022, tested on AWS Redshift.
You can use a WITH clause in which you COALESCE the columns where NULLs are expected. Alternatively, you can use COALESCE in the SELECT statement prior to the UNPIVOT block.
And don't forget to alias the result with the original column name (not doing so won't break anything, but it keeps things consistent and saves some time).
Select ID, theValue, column_name
From
(select ID,
coalesce(CAST([TheColumnToCompare] AS VarChar(1000)), '') as TheColumnToCompare
from MyView
where The_Date = '04/30/2009'
) MA
UNPIVOT
(theValue FOR column_name IN
([TheColumnToCompare])
) AS unpvt
OR
WITH TEMP1 as (
select ID,
coalesce(CAST([TheColumnToCompare] AS VarChar(1000)), '') as TheColumnToCompare
from MyView
where The_Date = '04/30/2009'
)
Select ID, theValue, column_name
From
(select ID, TheColumnToCompare
from TEMP1
) MA
UNPIVOT
(theValue FOR column_name IN
([TheColumnToCompare])
) AS unpvt
I had the same problem and this is my quick and dirty solution.
Your query:
select
Month,Name,value
from TableName
unpivot
(
Value for Name in (Col_1,Col_2,Col_3,Col_4,Col_5
)
) u
Replace it with:
select Month,Name,value from
( select
isnull(Month,'no-data') as Month,
isnull(Name,'no-data') as Name,
isnull(value,'no-data') as value from TableName
) as T1
unpivot
(
Value
for Name in (Col_1,Col_2,Col_3,Col_4,Col_5)
) u
OK, the NULL values are replaced with a string, but all rows will be returned!
ISNULL is half the answer. Use NULLIF to translate back to NULL. E.g.
DECLARE @temp TABLE(
Foo varchar(50),
Bar varchar(50) NULL
);
INSERT INTO @temp( Foo,Bar )VALUES( 'licious',NULL );
SELECT * FROM @temp;
SELECT
Col,
NULLIF( Val,'0Null' ) AS Val
FROM(
SELECT
Foo,
ISNULL( Bar,'0Null' ) AS Bar
FROM
@temp
) AS t
UNPIVOT(
Val FOR Col IN(
Foo,
Bar
)
) up;
Here I use "0Null" as my intermediate value. You can use anything you like; however, you risk collision with user input if you choose something real-world like "Null". Garbage such as "!##34())0" works fine but may be more confusing to future coders. I am sure you get the picture.