JSON input looks like this:
{
"reporting.unit": [ "F-1", "F-2", "F-3"],
"notional.lc": [ 100.1, 140.2, 150.3]
}
Desired Output:
reporting.unit    notional.lc
F-1               100.1
F-2               140.2
F-3               150.3
Note: I have upwards of 20 columns and many more elements.
I tried:
DECLARE @json nvarchar(max);
SELECT @json = '{
"reporting.unit": [ "F-1", "F-2", "F-3"],
"notional.lc": [ 100.1, 140.2, 150.3]
}';
SELECT *
FROM OPENJSON (@json);
but the result was:
key              value                    type
reporting.unit   [ "F-1", "F-2", "F-3"]   4
notional.lc      [ 100.1, 140.2, 150.3]   4
You can use OPENJSON with multiple JOINs, joining the arrays on their [key] values, to get the values into column/row format.
DECLARE @json nvarchar(max);
SELECT @json = '{
"reporting.unit": [ "F-1", "F-2", "F-3"],
"notional.lc": [ 100.1, 140.2, 150.3]
}';
SELECT
a.value AS [reporting.unit],
b.value AS [notional.lc]
FROM OPENJSON(@json, '$."reporting.unit"') a
JOIN OPENJSON(@json, '$."notional.lc"') b
ON a.[key] = b.[key]
Result:
reporting.unit    notional.lc
F-1               100.1
F-2               140.2
F-3               150.3
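Since the question mentions upwards of 20 columns, the same pattern extends by adding one more OPENJSON call and one more JOIN per array; here is a minimal sketch, assuming the JSON also contained a hypothetical third array named "quantity":

SELECT
    a.value AS [reporting.unit],
    b.value AS [notional.lc],
    c.value AS [quantity]   -- hypothetical third array
FROM OPENJSON(@json, '$."reporting.unit"') a
JOIN OPENJSON(@json, '$."notional.lc"') b ON a.[key] = b.[key]
JOIN OPENJSON(@json, '$."quantity"') c ON a.[key] = c.[key]

Each JOIN lines the arrays up by element position via the [key] column, so adding another column is mechanical.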
Already +1 on griv's answer ... it was my first thought as well.
However, just for fun, I wanted to try an alternative which, oddly enough, had a lower batch cost: 40% vs 60% (image below).
I should also note that SQL Server 2016 requires the JSON path passed to JSON_VALUE to be a literal, while 2017+ accepts an expression.
Example
DECLARE @json nvarchar(max);
SELECT @json = '{
"reporting.unit": [ "F-1", "F-2", "F-3"],
"notional.lc": [ 100.1, 140.2, 150.3]
}';
Select ReportingUnit = JSON_VALUE(@json,concat('$."reporting.unit"[',N,']'))
      ,NotionalLC = JSON_VALUE(@json,concat('$."notional.lc"[',N,']'))
From ( Select Top 25 N=-1+Row_Number() Over (Order By (Select NULL))
From master..spt_values n1
) A
Where JSON_VALUE(@json,concat('$."reporting.unit"[',N,']')) is not null
Results
ReportingUnit NotionalLC
F-1 100.1
F-2 140.2
F-3 150.3
[Image: execution plan comparison showing the ~40% vs ~60% batch cost]
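Since the question mentions many more elements, note that the Top 25 above caps how many array positions are probed. A minimal variant of the same query, assuming you simply need a larger tally (built here by cross joining spt_values; like the original, it needs 2017+ because the JSON path is an expression):

DECLARE @json nvarchar(max);
SELECT @json = '{
"reporting.unit": [ "F-1", "F-2", "F-3"],
"notional.lc": [ 100.1, 140.2, 150.3]
}';

Select ReportingUnit = JSON_VALUE(@json,concat('$."reporting.unit"[',N,']'))
      ,NotionalLC = JSON_VALUE(@json,concat('$."notional.lc"[',N,']'))
 From ( Select Top 10000 N=-1+Row_Number() Over (Order By (Select NULL))   -- raise Top to cover the longest array
          From master..spt_values n1
          Cross Join master..spt_values n2
      ) A
 Where JSON_VALUE(@json,concat('$."reporting.unit"[',N,']')) is not null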
Related
This is the JSON definition that is going to be provided (just a short example), along with the code that I have implemented to get the expected result:
declare @json nvarchar(max)
set @json = '{
"testJson":{
"testID":"Test1",
"Value":[
{
"Value1":"",
"Value2":"",
"Value3":"",
"Type": "1A"
},
{
"Value1":"123",
"Value2":"456",
"Value3":"Automatic",
"Type": "2A"
},
{
"Value1":"789",
"Value2":"159",
"Value3":"Manual",
"Value4":"Success" ,
"Type": "3A"
}
]
}
}'
select
'ValueFields' as groupDef,
-- b.[key],
-- c.[key],
STRING_AGG( c.value , ' | ') as val
from
openjson(@json, '$.testJson.Value') as b
cross apply
openjson(b.value) as c
where
b.[key] not in (select b.[key]
from openjson(@json, '$.testJson.Value') as b
where b.value like ('%1A%'))
As you can see, each element in the array can have a different number of attributes (Value1, ..., Value4), and I only need to consider those elements whose Type attribute is not equal to "1A". The query gives me the requested result; however, I am wondering how I can improve the performance of the code, given that I'm using the LIKE operator in the sub-select and the real JSON file could contain a considerable number of array elements.
…
select b.Value --,c.value
from
openjson(@json, '$.testJson.Value')
with
(
Value nvarchar(max) '$' as json,
Type varchar(100) '$.Type'
) as b
--cross apply openjson(b.Value) as c
where b.Type <> '1A'
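If you also need the pipe-separated val column from the question, a sketch that combines this Type filter with the original CROSS APPLY and STRING_AGG (same groupDef literal as in the question):

select
    'ValueFields' as groupDef,
    STRING_AGG(c.value, ' | ') as val
from openjson(@json, '$.testJson.Value')
     with
     (
         Value nvarchar(max) '$' as json,
         Type varchar(100) '$.Type'
     ) as b
     cross apply openjson(b.Value) as c
where b.Type <> '1A'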
SELECT
'ValueFields' as groupDef,
J.value as val
FROM
OPENJSON(@json,'$.testJson.Value') J
WHERE
JSON_VALUE([value],'$.Type') <> '1A'
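A brief note on the design choice: both forms avoid the correlated sub-select with LIKE from the question. The first shreds each array element once through the WITH clause and filters on a typed Type column; the second keeps the default OPENJSON schema and re-parses each element with JSON_VALUE in the WHERE clause.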
I have a JSON like this to process in SQL
{"RowIndex":[1,2], "Data":["a","b"]}
and I want to extract the data to show it as a table like this:
RowIndex Data
1 a
2 b
I understand that I have to use OPENJSON, JSON_QUERY or JSON_VALUE, but I cannot find a way to get what I want that doesn't involve writing a query with many joins, like:
select C1.value as RowIndex,
C2.value as Data
From (select [key], value from OPENJSON(JSON_QUERY(@jsonstring, '$.RowIndex'))) C1
inner join (select [key], value from OPENJSON(JSON_QUERY(@jsonstring, '$.Data'))) C2 on C1.[key] = C2.[key]
Because if the arrays in the JSON grow, the query will become unmaintainable and slow.
One method, using a "couple" of OPENJSON clauses:
DECLARE @JSON nvarchar(MAX) = N'{"RowIndex":[1,2], "Data":["a","b"]}';
SELECT RI.[value] AS RowIndex,
D.[value] AS Data
FROM OPENJSON(@JSON)
WITH (RowIndex nvarchar(MAX) AS JSON,
Data nvarchar(MAX) AS JSON) J
CROSS APPLY OPENJSON(RowIndex) RI
CROSS APPLY OPENJSON(Data) D
WHERE RI.[key] = D.[key];
To elaborate on my comments though, it seems like you should be fixing the JSON design and have something like this:
[
{
"RowIndex": "1",
"Data": "a",
"Number": "1"
},
{
"RowIndex": "2",
"Data": "b",
"Number": "3"
}
]
Which can be far more easily queried:
DECLARE @JSON nvarchar(MAX) = N'[
{
"RowIndex": "1",
"Data": "a",
"Number": "1"
},
{
"RowIndex": "2",
"Data": "b",
"Number": "3"
}
]';
SELECT *
FROM OPENJSON(#JSON)
WITH (RowIndex int,
Data char(1),
Number int) OJ;
Declare @ResponseText nvarchar(4000)
set @responseText ='{
"submissions": [
{
"xml_id":"id_x5d94851726b470.68571510",
"fields": [
{"fieldvalue":"customerEmail#xyzdomain.com","fieldid":"57282490"},
{"fieldvalue":"123","fieldid":"57282423"},
{"fieldvalue":"12345-678900","fieldid":"57282500"},
{"fieldvalue":"Test Message here ","fieldid":"57282564"}
]
}
]
}'
SELECT *
FROM OPENJSON (@ResponseText, '$.submissions') WITH (
ID NVARCHAR(100) '$.xml_id',
$.fields.field NVARCHAR(100) ...
)
etc. for the rest of the record? I got NULL for the remaining fields under the fields array.
You can try it like this:
Declare @ResponseText nvarchar(4000)
set @responseText ='{
"submissions": [
{
"xml_id":"id_x5d94851726b470.68571510",
"fields": [
{"fieldvalue":"customerEmail#xyzdomain.com","fieldid":"57282490"},
{"fieldvalue":"123","fieldid":"57282423"},
{"fieldvalue":"12345-678900","fieldid":"57282500"},
{"fieldvalue":"Test Message here ","fieldid":"57282564"}
]
}
]
}'
--The query
SELECT A.ID
,B.*
FROM OPENJSON (@ResponseText, '$.submissions')
WITH (ID NVARCHAR(100) '$.xml_id'
,fields NVARCHAR(MAX) AS JSON) A
OUTER APPLY OPENJSON(a.fields)
WITH(fieldvalue NVARCHAR(150)
,fieldid BIGINT) B;
The result
ID fieldvalue fieldid
id_x5d94851726b470.68571510 customerEmail@xyzdomain.com 57282490
id_x5d94851726b470.68571510 123 57282423
id_x5d94851726b470.68571510 12345-678900 57282500
id_x5d94851726b470.68571510 Test Message here 57282564
The idea in short:
You started correctly by using the WITH clause to read the xml_id. The fields property is nothing other than another element on the same level, but we return it AS JSON. This allows us to add another APPLY OPENJSON(), pass in the fragment we got from $.fields, and use another WITH clause to get the two properties of the objects within the array.
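If the goal is instead to turn the fields into columns of a single row per submission (which is what the $.fields.field attempt in the question seems to be aiming at), here is a minimal sketch on top of the query above; the fieldid values come from the sample, while the column names Email, Code, Zip and Message are hypothetical:

SELECT A.ID
      ,Email   = MAX(CASE WHEN B.fieldid = 57282490 THEN B.fieldvalue END)  -- hypothetical column names
      ,Code    = MAX(CASE WHEN B.fieldid = 57282423 THEN B.fieldvalue END)
      ,Zip     = MAX(CASE WHEN B.fieldid = 57282500 THEN B.fieldvalue END)
      ,Message = MAX(CASE WHEN B.fieldid = 57282564 THEN B.fieldvalue END)
FROM OPENJSON (@ResponseText, '$.submissions')
     WITH (ID NVARCHAR(100) '$.xml_id'
          ,fields NVARCHAR(MAX) AS JSON) A
OUTER APPLY OPENJSON(A.fields)
     WITH (fieldvalue NVARCHAR(150)
          ,fieldid BIGINT) B
GROUP BY A.ID;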
I have a column in a SQL table that has a JSON value like below:
[
{"address":{"value":"A9"},
"value":{"type":11,"value":"John"}},
{"address":{"value":"A10"},
"value":{"type":11,"value":"Doe"}}]
The MSDN examples for JSON_VALUE and JSON_QUERY require a JSON object at the root. How can I query the above to return rows that have "address" equal to A9 and "value" equal to John? I'm using SQL Azure.
Something like this:
declare @json nvarchar(max) = '[
{"address":{"value":"A9"},
"value":{"type":11,"value":"John"}},
{"address":{"value":"A10"},
"value":{"type":11,"value":"Doe"}}]'
select a.*
from openjson(@json) r
cross apply openjson(r.value)
with (
address nvarchar(200) '$.address.value',
name nvarchar(200) '$.value.value'
) a
where address = N'A9'
and name = N'John'
outputs
address name
------- -----
A9 John
(1 row affected)
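Since the question mentions a column in a SQL table rather than a variable, the same shape can be applied per row with an extra CROSS APPLY; a minimal sketch, assuming a hypothetical table dbo.MyTable with the JSON stored in a column named JsonCol:

select t.*, a.address, a.name
from dbo.MyTable t                  -- hypothetical table and column names
cross apply openjson(t.JsonCol) r   -- one row per array element
cross apply openjson(r.value)
with (
    address nvarchar(200) '$.address.value',
    name nvarchar(200) '$.value.value'
) a
where a.address = N'A9'
  and a.name = N'John'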
It may not be entirely relevant to the OP's post, as the usage is different; however, it is possible to retrieve arbitrary items from a root-level unnamed JSON array, e.g.:
declare @json nvarchar(max) = '[
{"address":
{"value":"A9"},
"value":
{"type":11,"value":"John"}
},
{"address":
{"value":"A10"},
"value":
{"type":11,"value":"Doe"}
}
]'
select
JSON_VALUE(
JSON_QUERY(@json, '$[0]'),
'$.address.value') as 'First address.value',
JSON_VALUE(
JSON_QUERY(@json, '$[1]'),
'$.address.value') as 'Second address.value'
Output:
First address.value Second address.value
A9 A10
The SQL Server OPENJSON() function can take a JSON object or array and convert it into a SQL table with key/value pairs, e.g.:
DECLARE @json NVARCHAR(MAX);
SET @json = '{
"key1": "val1",
"key2": "val2",
"key3": "val3"
}';
SELECT * FROM OPENJSON(@json, '$')
Result:
key value type
--------------------
key1 val1 1
key2 val2 1
key3 val3 1
What is the best general-purpose method for converting this key/value table back into a json array?
Why? If we can do this with a single function, it opens up a range of JSON modifications which are otherwise not possible on SQL Server, e.g.:
Re-order elements
Rename properties (key names)
Split json array into smaller arrays / combine json arrays
Compare json arrays (which key/value elements exist in both jsons? What are the differences?)
Clean json (remove syntactical whitespace/newlines to compress it)
Now, I could start with a simple CONCAT('"',[key],'":"',[value]) and then do a comma-list aggregation. But if I want code that is both easy to apply across my codebase and works for all data types, this is not a simple task. Looking at the JSON format definition, the conversion should take into account a) the six different data types, b) escape characters, c) SQL NULL / JSON null handling, and d) whatever I may have overlooked. At minimum, the example below should be supported:
DECLARE @test_json NVARCHAR(MAX);
SET @test_json = '{
"myNull": null,
"myString": "start_\\_\"_\/_\b_\f_\n_\r_\t_\u2600_stop",
"myNumber": 3.14,
"myBool": true,
"myArray": ["1", 2],
"myObject": {"key":"val"}
}'
SELECT * FROM OPENJSON(@test_json, '$')
Result:
key value type
------------------------------------------------
myNull NULL 0
myString start_\_"_/___ _ _ _☀_stop 1
myNumber 3.14 2
myBool true 3
myArray ["1", 2] 4
myObject {"key":"val"} 5
For the string-aggregation part, we have long suffered the FOR XML PATH pain. Luckily we now have STRING_AGG() on SQL 2017/Azure SQL DB, and I will accept a solution that depends on STRING_AGG().
You can do this with the following command, using FOR JSON:
select * from [table] for json auto  -- [table] = your table name
My result:
[{"LogId":1,"DtLog":"2017-09-30T21:04:45.6700000","FileId":1},
{"LogId":2,"DtLog":"2017-09-30T21:08:35.8633333","FileId":3},{"LogId":3,"DtLog":"2017-09-30T21:08:36.4433333","FileId":2},{"LogId":4,"DtLog":"2017-09-30T21:08:36.9866667","FileId":12},{"LogId":5,"DtLog":"2017-09-30T21:15:22.5366667","FileId":13},{"LogId":6,"DtLog":"2017-09-30T21:38:43.7866667","FileId":17}]
I use string_agg
declare @json table ( Name varchar(80), Value varchar(max) )
insert into @json
select [Key], Value from openjson(@attributes)
insert into @json values ( 'name', @name )
insert into @json values ( 'title', @title )
insert into @json values ( 'description', @description )
set @attributes = '{' + (select STRING_AGG( '"' + Name + '":"' +
REPLACE (value, '"', '\"' ) +'"', ',') from @json) + '}'
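As a fuller sketch of the STRING_AGG rebuild the question asks for (assuming SQL 2017+), this version also uses the type column returned by OPENJSON, so strings are re-escaped with STRING_ESCAPE while numbers, booleans, nulls, arrays and objects are passed through unchanged; the sample data here is simplified and the sketch does not claim to cover every edge case:

DECLARE @json NVARCHAR(MAX) = N'{
  "myNull": null,
  "myString": "a \"quoted\" \\ string",
  "myNumber": 3.14,
  "myBool": true,
  "myArray": ["1", 2],
  "myObject": {"key":"val"}
}';

SELECT @json = N'{' + STRING_AGG(
           N'"' + STRING_ESCAPE([key], 'json') + N'":' +
           CASE [type]
               WHEN 0 THEN N'null'                                       -- SQL NULL -> JSON null
               WHEN 1 THEN N'"' + STRING_ESCAPE([value], 'json') + N'"'  -- string: quote and re-escape
               ELSE [value]                                              -- number, bool, array, object: keep as-is
           END, N',') + N'}'
FROM OPENJSON(@json);

SELECT @json;  -- the rebuilt JSON object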