SQL Server FOR JSON Path Nested Array - sql-server

We are trying to use FOR JSON PATH in SQL Server 2016 to form a nested array from a SQL query.
SQL query:
SELECT A,
       B.name AS [child.name],
       B.date AS [child.date]
FROM [Table 1]
JOIN [Table 2] B ON [Table 1].ID = B.ID
FOR JSON PATH
Desired Output:
[{
  "A": "text",
  "child": [
    { "name": "value", "date": "value" },
    { "name": "value", "date": "value" }
  ]
}]
However, what we are getting is:
[{
  "A": "text",
  "child": { "name": "value", "date": "value" }
},
{
  "A": "text",
  "child": { "name": "value", "date": "value" }
}]
How can we use FOR JSON PATH to form a nested child array?

Instead of a join, use a nested query, e.g.:
SELECT A
     , child = (
           SELECT B.name AS [name]
                , B.date AS [date]
           FROM [Table 2] B
           WHERE B.ID = [Table 1].ID
           FOR JSON PATH
       )
FROM [Table 1]
FOR JSON PATH
(The table names in the question aren't valid identifiers as written, so the bracketed names here are a guess, but this should give you the idea.)

Assuming this schema:
create table parent(id int primary key, name varchar(100));
create table child(id int primary key, name varchar(100), parent_id int references parent(id));
Here is a working solution - albeit more convoluted - that doesn't involve correlated subqueries and only uses FOR JSON PATH (note that STRING_AGG requires SQL Server 2017 or later):
SELECT
    parent.name AS [name],
    child.json_agg AS [children]
FROM parent
JOIN (
    SELECT
        child.parent_id,
        JSON_QUERY(CONCAT('[', STRING_AGG(child.json, ','), ']')) AS json_agg
    FROM (
        SELECT
            child.parent_id,
            (SELECT
                 child.name AS [name]
             FOR JSON PATH, WITHOUT_ARRAY_WRAPPER
            ) AS json
        FROM child
    ) AS child
    GROUP BY child.parent_id
) AS child
    ON child.parent_id = parent.id
FOR JSON PATH
If you have an index on child.parent_id, then using a correlated subquery as suggested, or the equivalent with CROSS/OUTER APPLY, might be more efficient:
SELECT
    parent.name AS [name],
    child.json AS [children]
FROM parent
OUTER APPLY (
    SELECT
        name AS [name]
    FROM child
    WHERE child.parent_id = parent.id
    FOR JSON PATH
) child(json)
FOR JSON PATH
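With sample rows like these (hypothetical values chosen to match the output below):
INSERT INTO parent (id, name) VALUES (1, 'foo');
INSERT INTO child (id, name, parent_id) VALUES (1, 'bar', 1), (2, 'baz', 1);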
Both queries will return:
[
{
"name": "foo",
"children": [
{ "name": "bar" },
{ "name": "baz" }
]
}
]

Related

Creating a merge statement from a conditional INSERT select

I'm trying to create a MERGE statement where I keep all the rows in my FINAL_TABLE whose DATE is before today's date,
and insert new rows from today's date onward from my LANDING_TABLE.
The working example with a DELETE and INSERT statement can be seen here:
DELETE FROM FINAL_TABLE
WHERE "DATE" >= CURRENT_DATE();

INSERT INTO FINAL_TABLE
SELECT X, Y.value::string AS Y_SPLIT, "DATE", "PUBLIC"
FROM LANDING_TABLE,
     LATERAL FLATTEN(INPUT => STRTOK_TO_ARRAY(LANDING_TABLE.column, ', '), OUTER => TRUE) y
WHERE "PUBLIC" ILIKE 'TRUE' AND "DATE" >= CURRENT_DATE();
I'd like to keep the FLATTEN statement and the WHERE conditions while having the whole statement in a single MERGE statement.
Is it possible or should I first create a temporary table with the values I want to insert and then use that in the merge statement?
The MERGE statement can use a subquery or CTE as its source:
MERGE INTO <target_table> USING <source>
ON <join_expr> { matchedClause | notMatchedClause } [ ... ]
source:
Specifies the table or subquery to join with the target table.
MERGE INTO FINAL_TABLE
USING (
    SELECT X, Y.value::string AS Y_SPLIT, "DATE" AS col1, "PUBLIC" AS col2
    FROM LANDING_TABLE,
         LATERAL FLATTEN(INPUT => STRTOK_TO_ARRAY(LANDING_TABLE.column, ', '), OUTER => TRUE) y
    WHERE "PUBLIC" ILIKE 'TRUE' AND "DATE" >= CURRENT_DATE()
) AS SRC
ON ...
WHEN ...;
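To mirror the DELETE + INSERT above, one way to fill in the ON / WHEN clauses is sketched below; the match keys (X and "DATE") and the target column names are assumptions taken from the SELECT aliases, so adjust them to your actual schema:
MERGE INTO FINAL_TABLE TGT
USING (
    SELECT X, Y.value::string AS Y_SPLIT, "DATE", "PUBLIC"
    FROM LANDING_TABLE,
         LATERAL FLATTEN(INPUT => STRTOK_TO_ARRAY(LANDING_TABLE.column, ', '), OUTER => TRUE) y
    WHERE "PUBLIC" ILIKE 'TRUE' AND "DATE" >= CURRENT_DATE()
) AS SRC
ON TGT.X = SRC.X AND TGT."DATE" = SRC."DATE"   -- assumed match keys
WHEN MATCHED THEN
    UPDATE SET Y_SPLIT = SRC.Y_SPLIT, "PUBLIC" = SRC."PUBLIC"
WHEN NOT MATCHED THEN
    INSERT (X, Y_SPLIT, "DATE", "PUBLIC")
    VALUES (SRC.X, SRC.Y_SPLIT, SRC."DATE", SRC."PUBLIC");
Note that this updates matched rows and inserts new ones rather than deleting and re-inserting, so rows that exist in FINAL_TABLE but no longer appear in LANDING_TABLE would still need a separate DELETE.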

Querying json that starts with an array

I have a JSON that starts with an array, and I can't manage to query it.
The JSON is in this format:
[
{"#id":1,
"field1":"qwerty",
"#field2":{"name":"my_name", "name2":"my_name_2"},
"field3":{"event":[{"event_type":"OP",...}]}
},
{"#id":2..
}
]
Any suggestions on how to query this?
If I try to use LATERAL FLATTEN, I don't know what key to use:
select
'???'.Value:#id::string as id
from tabl1
,lateral flatten (tabl1_GB_RECORD:???) as gb_record
Your SQL was close but not complete; the following will give you the #id values:
with tbl1 (v) as (
select parse_json('
[
{"#id":1,
"field1":"qwerty",
"#field2":{"name":"my_name", "name2":"my_name_2"},
"field3":{"event":[{"event_type":"OP"}]}
},
{"#id":2
}
]')
)
select t.value:"#id" id from tbl1
, lateral flatten (input => v) as t
Result:
id
___
1
2
Let me know if you have any other questions
When the JSON begins with an array, you pass the parsed column itself as the FLATTEN input and then pull the fields you want out of each element. Something along these lines:
WITH x AS (
SELECT parse_json('[
{"#id":1,
"field1":"qwerty",
"#field2":{"name":"my_name", "name2":"my_name_2"},
"field3":{"event":[{"event_type":"OP"}]}
},
{"#id":2,
"field1":"qwerty",
"#field2":{"name":"my_name", "name2":"my_name_2"},
"field3":{"event":[{"event_type":"OP"}]}
}
]') as json_data
)
SELECT y.value,
y.value:"#id"::number as id,
y.value:field1::string as field1,
y.value:"#field2"::variant as field2,
y.value:field3::variant as field3,
y.value:"#field2":name::varchar as name
FROM x,
LATERAL FLATTEN (input=>json_data) y;
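If you also need the rows inside field3.event, FLATTEN calls can be chained; a sketch reusing the WITH x AS (...) sample data above (the event_type cast is an assumption about that field):
SELECT y.value:"#id"::number AS id,
       e.value:event_type::string AS event_type
FROM x,
     LATERAL FLATTEN (input => json_data) y,
     LATERAL FLATTEN (input => y.value:field3:event) e;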

How can I apply a function to each element of an array column?

I have a dataset where a column contains an ARRAY of OBJECTs, like this:
ID TAGS
1 {"tags": [{"tag": "a"}, {"tag": "b"}]}
2 {"tags": [{"tag": "c"}, {"tag": "d"}]}
I want to extract the tag field of each element of the array, so the end result would be:
ID TAGS
1 ["a","b"]
2 ["c","d"]
Assuming the following table t1:
CREATE OR REPLACE TEMPORARY TABLE t1 AS (
select 1 as ID , PARSE_JSON('{"tags": [{"tag":"a"}, {"tag":"b"}]}') AS PAYLOAD
UNION ALL
select 2, PARSE_JSON('{"tags": [{"tag":"c"}, {"tag":"d"}]}')
);
One possible solution is to create a JavaScript UDF and use JavaScript's .map() to apply a function to each element of the array:
create or replace function extract_tags(a array)
returns array
language javascript
strict
as '
return A.map(function(d) {return d.tag});
';
SELECT ID, EXTRACT_TAGS(PAYLOAD:tags) AS tags from t1;
This gives the desired result:
ID TAGS
1 [ "a", "b" ]
2 [ "c", "d" ]
A pure SQL approach would be to combine LATERAL FLATTEN and ARRAY_AGG like this:
with t2 as (
select ID, t2.value:tag as tag
from t1, LATERAL FLATTEN(input => payload:tags) t2
)
select t2.id, ARRAY_AGG(t2.tag) as tags from t2
group by ID
order by ID ASC;
t2 itself will become:
ID TAG
1 "a"
1 "b"
2 "c"
2 "d"
and after the GROUP BY ID it becomes:
ID TAGS
1 [ "a", "b" ]
2 [ "c", "d" ]

T-SQL JSON: How do I search for a value in a JSON array

We are using Azure SQL - and have a table called companies where one of the columns contains JSON. The structure for the JSON field is:
{
"DepartmentLink": "https://company.com",
"ContactName": "John Doe",
"ContactTitle": "CEO",
"ContactEmail": "john.doe#company.com",
"ContactPhone": "xx xx xx xx xx",
"ContactImage": "https://company.com/xyz.jpg",
"ZipCodes": [
"7000",
"7007",
"7017",
"7018",
"7029"
]
}
The structure of the table looks like this:
[Id] [int] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](100) NULL,
[JsonString] [nvarchar](max) NULL,
-- other fields --
where [JsonString] holds the JSON structure shown above.
Given a ZipCode, e.g. 7018, I need to find the company which has this ZipCode in the JSON array ZipCodes - and return elements from the record (these elements are all present as ordinary "fields", so I do not need to return the JSON).
I'm having problems finding out how to do this. Any suggestions? I'm quite new to JSON in SQL.
Use OPENJSON with CROSS APPLY to shred the ZipCodes array, then add a WHERE clause with your filter, something like this:
SELECT c.*, JSON_VALUE( c.JsonString, '$.DepartmentLink' ) AS DepartmentLink
FROM dbo.Companies AS c
CROSS APPLY OPENJSON( c.JsonString, '$.ZipCodes' ) AS x
WHERE x.value = '7018';
I tried this and it seems to work. Is this a recommendable way to do it?
SELECT *
FROM [dbo].[Companies]
WHERE EXISTS (
    SELECT *
    FROM OPENJSON(JsonString, '$.ZipCodes')
    WHERE value = '7018'
)
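For reference, a version of that EXISTS query that returns only the ordinary columns mentioned in the question (Id and Name come from the table definition above) might look like:
SELECT c.Id, c.Name
FROM [dbo].[Companies] AS c
WHERE EXISTS (
    SELECT 1
    FROM OPENJSON(c.JsonString, '$.ZipCodes') AS z
    WHERE z.value = '7018'
);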

Using union to do a crosstab query

I have a table which has the following structure:
id key data
1 A 10
1 B 20
1 C 30
I need to write a query so that I get these keys as columns and their values in a single row.
E.g.:
id A B C
1 10 20 30
I have tried using UNION and CASE but I get 3 rows instead of one.
Any suggestions?
The most straightforward way to do this is:
SELECT DISTINCT t."id",
    (SELECT "data" FROM Table1 WHERE "key" = 'A' AND "id" = t."id") AS "A",
    (SELECT "data" FROM Table1 WHERE "key" = 'B' AND "id" = t."id") AS "B",
    (SELECT "data" FROM Table1 WHERE "key" = 'C' AND "id" = t."id") AS "C"
FROM Table1 t
Or you can use a PIVOT:
SELECT * FROM
(SELECT "id", "key", "data" FROM Table1)
PIVOT (
MAX("data")
FOR ("key") IN ('A', 'B', 'C'));
sqlfiddle demo
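Since you already tried CASE, the missing piece is the GROUP BY; a conditional-aggregation version against the same table would be:
SELECT "id",
       MAX(CASE WHEN "key" = 'A' THEN "data" END) AS "A",
       MAX(CASE WHEN "key" = 'B' THEN "data" END) AS "B",
       MAX(CASE WHEN "key" = 'C' THEN "data" END) AS "C"
FROM Table1
GROUP BY "id"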
