I have JSON like this to process in SQL:
{"RowIndex":[1,2], "Data":["a","b"]}
and I want to extract the data and display it as a table like this:
RowIndex  Data
1         a
2         b
I understand that I have to use OPENJSON, JSON_QUERY or JSON_VALUE, but I cannot find a way to get what I want that does not involve writing a query with many joins, like:
select C1.value as RowIndex,
       C2.value as Data
from (select [key], value from OPENJSON(JSON_QUERY(@jsonstring, '$.RowIndex'))) C1
inner join (select [key], value from OPENJSON(JSON_QUERY(@jsonstring, '$.Data'))) C2 on C1.[key] = C2.[key]
because if the arrays in the JSON grow, the query will become unmaintainable and slow.
One method, using a "couple" of OPENJSON clauses:
DECLARE @JSON nvarchar(MAX) = N'{"RowIndex":[1,2], "Data":["a","b"]}';

SELECT RI.[value] AS RowIndex,
       D.[value] AS Data
FROM OPENJSON(@JSON)
     WITH (RowIndex nvarchar(MAX) AS JSON,
           Data nvarchar(MAX) AS JSON) J
     CROSS APPLY OPENJSON(J.RowIndex) RI
     CROSS APPLY OPENJSON(J.Data) D
WHERE RI.[key] = D.[key];
To elaborate on my comments, though: it seems like you should fix the JSON design and have something like this:
[
{
"RowIndex": "1",
"Data": "a",
"Number": "1"
},
{
"RowIndex": "2",
"Data": "b",
"Number": "3"
}
]
Which can be far more easily queried:
DECLARE @JSON nvarchar(MAX) = N'[
{
"RowIndex": "1",
"Data": "a",
"Number": "1"
},
{
"RowIndex": "2",
"Data": "b",
"Number": "3"
}
]';
SELECT *
FROM OPENJSON(#JSON)
WITH (RowIndex int,
Data char(1),
Number int) OJ;
JSON input looks like this:
{
"reporting.unit": [ "F-1", "F-2", "F-3"],
"notional.lc": [ 100.1, 140.2, 150.3]
}
Desired Output:

reporting.unit  notional.lc
F-1             100.1
F-2             140.2
F-3             150.3
Note: I have upwards of 20 columns and many more elements.
I tried:
DECLARE @json nvarchar(max);
SELECT @json = '{
    "reporting.unit": [ "F-1", "F-2", "F-3"],
    "notional.lc": [ 100.1, 140.2, 150.3]
}';
SELECT *
FROM OPENJSON (@json);
but the result was:
key             value                   type
reporting.unit  [ "F-1", "F-2", "F-3"]  4
notional.lc     [ 100.1, 140.2, 150.3]  4
You can use OPENJSON with multiple JOINs, joining your columns together on the array keys to get the values in a column/row format.
DECLARE @json nvarchar(max);
SELECT @json = '{
    "reporting.unit": [ "F-1", "F-2", "F-3"],
    "notional.lc": [ 100.1, 140.2, 150.3]
}';
SELECT
    a.value AS [reporting.unit],
    b.value AS [notional.lc]
FROM OPENJSON(@json, '$."reporting.unit"') a
JOIN OPENJSON(@json, '$."notional.lc"') b
    ON a.[key] = b.[key];
Result:

reporting.unit  notional.lc
F-1             100.1
F-2             140.2
F-3             150.3
Already +1 on griv's answer ... it was my first thought as well.
However, and just for fun, I wanted to try an alternative which, oddly enough, had a lower batch cost (40% vs 60%).
I should also note that SQL Server 2016 requires a literal JSON path in JSON_VALUE(), while 2017+ accepts an expression (such as the CONCAT() used below).
Example
DECLARE @json nvarchar(max);
SELECT @json = '{
    "reporting.unit": [ "F-1", "F-2", "F-3"],
    "notional.lc": [ 100.1, 140.2, 150.3]
}';
Select ReportingUnit = JSON_VALUE(@json, concat('$."reporting.unit"[', N, ']'))
      ,NotionalLC    = JSON_VALUE(@json, concat('$."notional.lc"[', N, ']'))
 From ( Select Top 25 N = -1 + Row_Number() Over (Order By (Select NULL))
          From master..spt_values n1
      ) A
 Where JSON_VALUE(@json, concat('$."reporting.unit"[', N, ']')) is not null
Results
ReportingUnit NotionalLC
F-1 100.1
F-2 140.2
F-3 150.3
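To make the 2016-vs-2017 note above concrete, a minimal sketch (reusing the @json variable declared above; the index values are only for illustration):

```sql
-- SQL Server 2016: the JSON path argument must be a string literal
SELECT JSON_VALUE(@json, '$."reporting.unit"[0]');

-- SQL Server 2017+: the path may be an expression, e.g. built with CONCAT()
DECLARE @i int = 0;
SELECT JSON_VALUE(@json, CONCAT('$."reporting.unit"[', @i, ']'));
```

On 2016 the second form fails to parse, which is why the query above would need a different approach there.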
This is the JSON definition that is going to be provided (just a short example) and the code that I have implemented to get the expected result:
declare @json nvarchar(max)
set @json = '{
"testJson":{
"testID":"Test1",
"Value":[
{
"Value1":"",
"Value2":"",
"Value3":"",
"Type": "1A"
},
{
"Value1":"123",
"Value2":"456",
"Value3":"Automatic",
"Type": "2A"
},
{
"Value1":"789",
"Value2":"159",
"Value3":"Manual",
"Value4":"Success" ,
"Type": "3A"
}
]
}
}'
select
    'ValueFields' as groupDef,
    -- b.[key],
    -- c.[key],
    STRING_AGG( c.value , ' | ') as val
from
    openjson(@json, '$.testJson.Value') as b
cross apply
    openjson(b.value) as c
where
    b.[key] not in (select b.[key]
                    from openjson(@json, '$.testJson.Value') as b
                    where b.value like ('%1A%'))
As you can see, each element in the array can have a different number of attributes (Value1, ..., Value4, ...), and I only need to consider elements whose Type attribute is not equal to "1A". The query gives me the requested result; however, I am wondering how I can improve the performance of this code, given that I'm using the LIKE operator in the sub-select and the original JSON file could have a considerable number of elements in the array.
…
select b.Value --, c.value
from
    openjson(@json, '$.testJson.Value')
    with
    (
        Value nvarchar(max) '$' as json,
        Type varchar(100) '$.Type'
    ) as b
    --cross apply openjson(b.Value) as c
where b.Type <> '1A'
SELECT
    'ValueFields' as groupDef,
    J.value as val
FROM
    OPENJSON(@json, '$.testJson.Value') J
WHERE
    JSON_VALUE([value], '$.Type') <> '1A'
I am new to SQL Server, and I am trying to get two column values in the same row as a dictionary.
Example: a table has the columns Key and Value, so the dictionary/JSON should be
Key:Value
Any help on this would be great. Thank you.
Consider the following example data...
create table dbo.Example (
ID int,
[Key] varchar(10),
Value varchar(10)
);
insert dbo.Example (ID, [Key], Value)
values
(1, 'Foo', 'Bar'),
(2, 'Baz', 'Chaz');
If you perform the following query on it:
select ID, [Key], Value
from dbo.Example
for json path;
You'll get the result where the object keys are the column names from the table:
[
{
"ID": 1,
"Key": "Foo",
"Value": "Bar"
},
{
"ID": 2,
"Key": "Baz",
"Value": "Chaz"
}
]
It sounds like you're wanting to pivot those using a query like the following:
select ID, [Foo], [Baz]
from (
select ID, [Key], Value
from dbo.Example
) src
pivot (max(Value) for [Key] in ([Foo], [Baz])) p
for json path;
Which yields the following result instead:
[
{
"ID": 1,
"Foo": "Bar"
},
{
"ID": 2,
"Baz": "Chaz"
}
]
The downside here is that you either (a) have to know all possible values of Key ahead of time so that they can be encoded in the statement, or (b) you have to write dynamic SQL statements that include the current set of values of Key.
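To illustrate option (b), here is a sketch of such a dynamic statement, assuming SQL Server 2017+ (for STRING_AGG) and the dbo.Example table from above:

```sql
-- Build the PIVOT column list from the distinct [Key] values currently
-- in the table, then execute the pivoting query dynamically.
DECLARE @cols nvarchar(max), @sql nvarchar(max);

SELECT @cols = STRING_AGG(QUOTENAME([Key]), ', ')
FROM (SELECT DISTINCT [Key] FROM dbo.Example) k;

SET @sql = N'
select ID, ' + @cols + N'
from (select ID, [Key], Value from dbo.Example) src
pivot (max(Value) for [Key] in (' + @cols + N')) p
for json path;';

EXEC sys.sp_executesql @sql;
```

QUOTENAME() guards the generated column names, but as with any dynamic SQL, keep the source of [Key] values trusted.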
Let's say that I have multiple rows in a table with an nvarchar column that contains JSON data. Each row would have some simple JSON object, like { "key": "value" }. What is the best way to compose all of these objects into a single JSON object as an array, for a group of rows, such as:
{
"data": [
{ "key": "value" },
{ "key": "value" },
{ "key": "value" }
]
}
There could be any number of groups, and any number of rows per group. Each object could be different.
Currently, my approach would be to use FOR XML PATH to concatenate them into a single string, but this is prone to odd text (e.g. XML-escaped characters) getting in there, which makes it a less than resilient approach. It seems possible that I could use JSON_MODIFY, but I'm not sure how I would use it in a way that accommodates an unknown number of rows per group.
Creating arrays of JSON data in SQL is not exactly straightforward. Take the following table as an example:
CREATE TABLE #test (id INT, jsonCol NVARCHAR(MAX))
INSERT INTO #test
(
id
,jsonCol
)
VALUES
(
1
,N'{ "make": "ford" }'
),
(
1
,N'{ "make": "mazda" }'
)
,
(
2
,N'{ "color": "black" }'
)
You can use the following query to create a single JSON object with the array as you posted in your question:
SELECT DISTINCT [data] =
JSON_QUERY( '[' + STUFF(
(SELECT ',' + jsonCol FROM #test FOR XML PATH (''))
, 1, 1, '')
+ ']')
FROM #test
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER
The output would look like this:
{
"data":[
{ "make": "ford" },
{ "make": "mazda" },
{ "color": "black" }
]
}
You can also correlate the id columns in the main and subqueries to get the json grouped by id.
Here's an example:
SELECT DISTINCT id
, [json] = (SELECT DISTINCT [data] =
JSON_QUERY( '[' + STUFF(
(SELECT ',' + jsonCol FROM #test t3 WHERE t3.id = t2.id FOR XML PATH (''))
, 1, 1, '')
+ ']')
FROM #test t2
WHERE t2.id = t1.id
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER)
FROM #test t1
Output would be:
id json
1 {"data":[{ "make": "ford" },{ "make": "mazda" }]}
2 {"data":[{ "color": "black" }]}
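For what it's worth, on SQL Server 2017+ the same grouped arrays can be built without the FOR XML PATH / STUFF trick by using STRING_AGG. A sketch against the same #test table:

```sql
-- STRING_AGG concatenates each group's JSON objects directly,
-- avoiding the XML-escaping pitfalls of FOR XML PATH.
SELECT id,
       [json] = '{"data":[' + STRING_AGG(jsonCol, ',') + ']}'
FROM #test
GROUP BY id;
```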
I have a table with two columns: [ID] and [Content] (with ISJSON constraint, so every row must have valid JSON in [Content] column).
These JSONs have an array field, which consists of objects with some specific ID (and many more fields) .
{
"departments": [ { "id": 1, "fieldA": "somevalue" },
{ "id": 2, "fieldA": "somevalue" }]
}
I'd like to perform a select query, which returns all rows with some particular id in the object from departments field.
I managed to create a script that uses a cursor, fetching the departments field from the [Content] column into a @content variable, and then:
SELECT * FROM OPENJSON(@content) WITH (id int) WHERE id IN (1, 2, 3, 9)
But it returns only the department id, and I need the whole row.
Preferably it should look like that (but code below unfortunately does not work):
SELECT * FROM ITEM I WHERE EXISTS
(SELECT * FROM OPENJSON(I.CONTENT) WITH(id int) WHERE id IN (1, 2, 3, 9))
If I understood you correctly, this could get you what you need:
declare @someId varchar(10) = '1'
declare @json nvarchar(1000) = N'{ "departments": [ { "id": 1, "fieldA": "somevalue" }, { "id": 1, "fieldA": "test" }, { "id": 2, "fieldA": "somevalue" }] }'

select ids.[key], ids.[value] id_val, depts.value content
from openjson(@json, '$.departments') as depts
     cross apply OPENJSON(depts.value) as ids
where ids.[key] = 'id' and ids.value = @someId
I have added another element to the departments array with the same id = 1, just for testing the results.
EDIT:
select ids.[key], ids.[value] id_val, depts.value content
from ITEMS i
     cross apply openjson(i.Content, '$.departments') as depts
     cross apply OPENJSON(depts.value) as ids
where ids.[key] = 'id' and ids.value = @id
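As an aside, the EXISTS shape sketched in the question appears workable once the '$.departments' path is supplied to OPENJSON. A minimal sketch, assuming the ITEM table and CONTENT column from the question:

```sql
-- Return whole rows whose departments array contains one of the ids.
SELECT I.*
FROM ITEM I
WHERE EXISTS (
    SELECT 1
    FROM OPENJSON(I.CONTENT, '$.departments')
         WITH (id int '$.id') AS d
    WHERE d.id IN (1, 2, 3, 9)
);
```

This avoids the cursor entirely, since OPENJSON is applied per row inside the correlated subquery.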