I have a table login(id int, meta_skills jsonb), but the jsonb is not stored as simple key-value pairs.
The data in the meta_skills column looks like:
{
  "Cat1": [
    {
      "Skill_1": 2,
      "Skill_2": 2,
      "Skill_3": 2,
      "Skill_4": 2,
      "Skill_5": 2
    }
  ],
  "Cat2": [
    {
      "Skill_1": 3,
      "Skill_2": 2,
      "Skill_3": 3
    }
  ],
  "Cat3": [
    {
      "Skill_1": 2,
      "Skill_2": 2,
      "Skill_3": 2,
      "Skill_4": 2
    }
  ]
}
The skill values are arbitrary. I want to prepare the data as one row per combination of id, category, skill, and level.
You have to unnest the levels:
SELECT login.id,
u1.category,
u3.skill,
u3.level
FROM login
CROSS JOIN LATERAL jsonb_each(login.meta_skills) AS u1(category,v)
CROSS JOIN LATERAL jsonb_array_elements(u1.v) AS u2(v)
CROSS JOIN LATERAL jsonb_each(u2.v) AS u3(skill, level);
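To make the three lateral steps easier to follow, here is a hypothetical illustration (not part of the original answer) of the same unnesting done on a plain Python dict, so the expected row shape is easy to inspect:

```python
# Sample document shaped like the meta_skills column (abbreviated).
meta_skills = {
    "Cat1": [{"Skill_1": 2, "Skill_2": 2}],
    "Cat2": [{"Skill_1": 3, "Skill_2": 2, "Skill_3": 3}],
}

def unnest(login_id, doc):
    """Mirror the SQL: jsonb_each -> jsonb_array_elements -> jsonb_each."""
    rows = []
    for category, elements in doc.items():        # jsonb_each(meta_skills)
        for element in elements:                  # jsonb_array_elements(v)
            for skill, level in element.items():  # jsonb_each(element)
                rows.append((login_id, category, skill, level))
    return rows

rows = unnest(1, meta_skills)
# Each tuple corresponds to one output row: (id, category, skill, level)
```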
I am looking for a solution to convert table results to JSON using FOR JSON PATH.
I have a table with two columns as below. Column1 will always have a single value, but Column2 will hold up to 15 values separated by ';' (semicolon).
ID Column1 Column2
--------------------------------------
1 T1 Re;BoRe;Va
I want to convert the above column data in to below JSON Format
{
"services":
[
{ "service": "T1"}
],
"additional_services":
[
{ "service": "Re" },
{ "service": "BoRe" },
{ "service": "Va" }
]
}
I have tried creating something like the below, but cannot get to the exact format that I am looking for:
SELECT
REPLACE((SELECT d.Column1 AS services, d.column2 AS additional_services
FROM Table1 w (nolock)
INNER JOIN Table2 d (nolock) ON w.Id = d.Id
WHERE ID = 1
FOR JSON PATH), '\/', '/')
Please let me know if this is something we can achieve using T-SQL.
As I mention in the comments, I strongly recommend you fix and normalise your design. Don't store delimited data in your database; Re;BoRe;Va should be 3 rows, not 1 delimited one. That doesn't mean you can't achieve what you want with your denormalised data, just that the design is flawed and needs to be pointed out.
One way to achieve what you're after is with some nested FOR JSON calls:
SELECT (SELECT V.Column1 AS service
FOR JSON PATH) AS services,
(SELECT SS.[value] AS service
FROM STRING_SPLIT(V.Column2,';') SS
FOR JSON PATH) AS additional_services
FROM (VALUES(1,'T1','Re;BoRe;Va'))V(ID,Column1,Column2)
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER;
This results in the following JSON:
{
"services": [
{
"service": "T1"
}
],
"additional_services": [
{
"service": "Re"
},
{
"service": "BoRe"
},
{
"service": "Va"
}
]
}
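As a side-by-side check, here is an illustrative Python sketch (names taken from the sample row; this is not T-SQL) that builds the same shape the nested FOR JSON calls produce:

```python
import json

# Sample row from the question: one normal value plus a delimited column.
row = {"ID": 1, "Column1": "T1", "Column2": "Re;BoRe;Va"}

# "services" wraps the single value; "additional_services" gets one object
# per delimited element, matching the STRING_SPLIT + FOR JSON PATH output.
result = {
    "services": [{"service": row["Column1"]}],
    "additional_services": [{"service": s} for s in row["Column2"].split(";")],
}
payload = json.dumps(result)
```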
I have two tables, the second table contains a foreign key which references the first table primary key.
First table "Houses" (id,title,city,country), Second table "Images" (id,name,house_id)
I am implementing the following query:
SELECT * FROM houses INNER JOIN images ON houses.id = images.house_id;
The result is an array of rows that repeat the house data, with only id and name changing:
[
{
id:1,
title: "house1",
city:"c1",
country:"country2",
name:"image1",
house_id: 2
},
{
id:2,
title: "house1",
city:"c1",
country:"country2",
name:"image2",
house_id: 2
},
{
id:3,
title: "house1",
city:"c1",
country:"country2",
name:"image3",
house_id: 2
},
]
How could I adjust the query to get the result like the following:
[
{
id:2,
title: "house1",
city:"c1",
country:"country2",
imagesNames:["image1","image2","image3"],
house_id: 2
}
]
Is it doable using knex? I am using a PostgreSQL database.
GROUP BY all columns shared by all peers, and aggregate the names. Like:
SELECT h.id, h.title, h.city, h.country
     , array_agg(i.name) AS images_names
     , i.house_id  -- redundant, since it always equals h.id
FROM   houses h
JOIN   images i ON h.id = i.house_id
GROUP  BY h.id, h.title, h.city, h.country, i.house_id;
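The grouping step can be sketched in plain Python on the joined rows from the question (field names taken from the sample output; this is not knex code):

```python
# Joined rows as produced by the INNER JOIN in the question.
joined_rows = [
    {"house_id": 2, "title": "house1", "city": "c1", "country": "country2", "name": "image1"},
    {"house_id": 2, "title": "house1", "city": "c1", "country": "country2", "name": "image2"},
    {"house_id": 2, "title": "house1", "city": "c1", "country": "country2", "name": "image3"},
]

# Group on the non-aggregated columns, collecting names (like array_agg).
grouped = {}
for r in joined_rows:
    key = (r["house_id"], r["title"], r["city"], r["country"])
    grouped.setdefault(key, []).append(r["name"])

result = [
    {"id": hid, "title": t, "city": c, "country": co, "imagesNames": names}
    for (hid, t, c, co), names in grouped.items()
]
```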
This SQL works, but instead of joining on just the first value in the array in encounter.document -> 'account', I need to search all values in the array.
SELECT encounter.* FROM encounter JOIN account
ON (account.document -> 'identifier') #> jsonb_build_array(jsonb_build_object('value', encounter.document #> '{account, 0, identifier, value}', 'system', encounter.document #> '{account, 0, identifier, system}'))
WHERE account.foo = 'bar'
Example Encounter:
encounter.document = {"account": [{"system": "foo", "value": "bar"}, {"system": "two-foo", "value": "two-bar"}]}
Example Account:
account.foo = bar
account.document = {"identifier": [{"system": "foo", "value": "bar"}, {"system": "blah", "value": "blah"}]}
Given the above records, I would expect to get the encounter record back, because the "account" array in the encounter record contains an object that is in the "identifier" array of an account record, and that account record has a foo value = bar.
Put another way: give me all the encounter records where the "account" array contains an item that is also contained in the "identifier" array of an account record, and where that account record has foo = bar.
It seems I'm able to do it if I add several ORs to the ON clause, incrementing the index of the array, like:
ON ((account.document -> 'identifier') #> jsonb_build_array(jsonb_build_object('value', encounter.document #> '{account, 0, identifier, value}', 'system', encounter.document #> '{account, 0, identifier, system}'))
OR (account.document -> 'identifier') #> jsonb_build_array(jsonb_build_object('value', encounter.document #> '{account, 1, identifier, value}', 'system', encounter.document #> '{account, 1, identifier, system}')))
But that feels very dirty.
One way to make it more generic and bullet-proof is:
select a.*
from (
    select *
    from encounter e
    cross join jsonb_to_recordset(jsonb_extract_path(e.document, 'account')) as x (value varchar(100), system varchar(100))
) a
join (
    select *
    from account a
    cross join jsonb_to_recordset(jsonb_extract_path(a.document, 'identifier')) as y (value varchar(100), system varchar(100))
) b on a.value = b.value
   and a.system = b.system
   and b.foo = 'bar'
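The matching rule itself can be sketched in plain Python using the sample records from the question: an encounter matches when any (system, value) pair in its "account" array also appears in the account's "identifier" array, and that account has foo = 'bar'.

```python
# Sample records from the question.
encounter_doc = {"account": [{"system": "foo", "value": "bar"},
                             {"system": "two-foo", "value": "two-bar"}]}
account = {"foo": "bar",
           "document": {"identifier": [{"system": "foo", "value": "bar"},
                                       {"system": "blah", "value": "blah"}]}}

def matches(enc_doc, acct):
    # Build sets of (system, value) pairs from both arrays and intersect them,
    # which is what the record-set join does row by row.
    enc_pairs = {(e["system"], e["value"]) for e in enc_doc["account"]}
    acct_pairs = {(i["system"], i["value"]) for i in acct["document"]["identifier"]}
    return acct["foo"] == "bar" and bool(enc_pairs & acct_pairs)
```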
I have a VARIANT column that contains a JSON response from a web service. It contains a nested array with a float value that I would like to aggregate and return as an average. Here is an example SnowSQL command that I am using:
select
value:disambiguated.id,
value:mentions
from TABLE(
FLATTEN(input =>
PARSE_JSON('{ "entities": [{"count": 2,"disambiguated": {"id": 123},"label": "Coronavirus Disease 2019","mentions": [{"confidence": 0.5928,}, {"confidence": 0.5445,}],"type": "MEDICAL"}]}'):entities
)
)
Which returns:
VALUE:DISAMBIGUATED.ID    VALUE:MENTIONS
123                       [ { "confidence": 0.5928 }, { "confidence": 0.5445 } ]
What I would like to return is something with the two "confidence" values averaged to 0.56865. I was able to add a second FLATTEN statement, which isolated the "mentions" array and allowed me to extract each "confidence" value, but I cannot figure out how to group the records to calculate the average. I would love to use the built-in AVG() function if possible. Thank you in advance for any help you can provide.
Using your example, you can use LATERAL FLATTEN to create your required flattened fields, and then aggregate as you normally would. In this example, I'm grouping on the ID that is in the data, but you could also use y.index or z.index depending on which of those you wanted to group on for your AVG().
WITH x AS (
SELECT PARSE_JSON('{ "entities": [{"count": 2,"disambiguated": {"id": 123},"label": "Coronavirus Disease 2019","mentions": [{"confidence": 0.5928,}, {"confidence": 0.5445,}],"type": "MEDICAL"}]}') as json_str
)
SELECT
y.value:disambiguated.id as id,
avg(z.value:confidence)
from x,
LATERAL FLATTEN(input => json_str:entities) y,
LATERAL FLATTEN(input => y.value:mentions) z
GROUP BY id
;
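As a sanity check on the expected number, here is an illustrative Python version of the same aggregation (simplified document; not Snowflake code): flatten entities, then mentions, and average confidence per disambiguated id.

```python
import json

# Simplified version of the sample document from the question.
doc = json.loads('{"entities": [{"disambiguated": {"id": 123}, '
                 '"mentions": [{"confidence": 0.5928}, {"confidence": 0.5445}]}]}')

averages = {}
for entity in doc["entities"]:                                   # first FLATTEN
    confidences = [m["confidence"] for m in entity["mentions"]]  # second FLATTEN
    averages[entity["disambiguated"]["id"]] = sum(confidences) / len(confidences)
```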
I have received a data set from a client that is loaded in AWS S3. The data contains unnamed JSON key:value pairs. This isn't my area of expertise, so I was looking for a little help.
The structure of JSON data that I've typically worked with in the past looks similar to this:
{ "name":"John", "age":30, "car":null }
The data that I have received from my client is formatted as such:
{
"answer_id": "cc006",
"answer": {
"101086": 1,
"101087": 2,
"101089": 2,
"101090": 7,
"101091": 5,
"101092": 3,
"101125": 2
}
}
This is survey data, where the key on the left is a numeric customer identifier, and the value on the right is their response to a survey question, i.e. customer "101125" answered the survey with a value of "2". I need to be able to query the JSON data using Athena such that my result set has one row per answer, with columns for the answer_id, the customer identifier (key), and the response (value).
Cross joining the unnested children against the parent node isn't an issue. What I can't figure out is how to select all of the keys from the array "answer" without specifying that actual key name. Similarly, I want to be able to select all of the values as well.
Is it possible to create a virtual table in Athena that would allow for these results, or do I need to convert the JSON to a format this looks more similar to the following:
{
"answer_id": "cc006",
"answer": [
{ "key": "101086", "value": 1 },
{ "key": "101087", "value": 2 },
{ "key": "101089", "value": 2 },
{ "key": "101090", "value": 7 },
{ "key": "101091", "value": 5 },
{ "key": "101092", "value": 3 },
{ "key": "101125", "value": 2 }
]
}
EDIT 6/4/2020
I was able to use the code that Theon provided below along with the following table structure:
CREATE EXTERNAL TABLE answer_example (
answer_id string,
answer string
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://mybucket/'
That allowed me to use the following query to generate the results that I needed.
WITH Data AS(
SELECT
answer_id,
CAST(json_extract(answer, '$') AS MAP(VARCHAR, VARCHAR)) as answer
FROM
answer_example
)
SELECT
answer_id,
key,
element_at(answer, key) AS value
FROM
Data
CROSS JOIN UNNEST (map_keys(answer)) AS answer (key)
EDIT 6/5/2020
Taking additional advice from Theon's response below, the following DDL and query simplify this quite a bit.
DDL:
CREATE EXTERNAL TABLE answer_example (
answer_id string,
answer map<string,string>
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://mybucket/'
Query:
SELECT
answer_id,
key,
element_at(answer, key) AS value
FROM
answer_example
CROSS JOIN UNNEST (map_keys(answer)) AS answer (key)
Cross join with the keys of the answer property and then pick the corresponding value. Something like this:
WITH data AS (
SELECT
'cc006' AS answer_id,
MAP(
ARRAY['101086', '101087', '101089', '101090', '101091', '101092', '101125'],
ARRAY[1, 2, 2, 7, 5, 3, 2]
) AS answers
)
SELECT
answer_id,
key,
element_at(answers, key) AS value
FROM data
CROSS JOIN UNNEST (map_keys(answers)) AS answer (key)
You could probably do something with transform_keys to create rows of the key value pairs, but the SQL above does the trick.
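For reference, the UNNEST step corresponds to this simple operation on a map (a hypothetical Python illustration, one output row per key of the answer map):

```python
# Record shaped like the table row: an id plus a map of key -> value.
record = {"answer_id": "cc006",
          "answer": {"101086": 1, "101087": 2, "101125": 2}}

# CROSS JOIN UNNEST(map_keys(answer)) produces one row per key; element_at
# then looks up the value for that key.
rows = [(record["answer_id"], key, value)
        for key, value in record["answer"].items()]
```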