Duplicate row detected during DML action Row Values - snowflake-cloud-data-platform

Each of our messages has a unique id and several attributes; the final result should combine all of these attributes into a single message. We tried a Snowflake MERGE, but it does not work as expected. In the first run we used ROW_NUMBER with a PARTITION BY to identify unique records and inserted them. In the second run we tried to update the target with the remaining (duplicate) records per id, but received the error "Error 3: Duplicate row detected during DML action Row Values: \n".
We also tried setting the session parameter ERROR_ON_NONDETERMINISTIC_MERGE=FALSE, but then the outcome may not be reliable or consistent.
We tried a JavaScript deep merge as well, but the volume is very high and there were performance problems.
Sample code below:
create or replace table test1 (Job_Id VARCHAR, RECORD_CONTENT VARIANT);
create or replace table test2 like test1;
insert into test1(JOB_ID, RECORD_CONTENT) select 1,parse_json('{
"customer": "Aphrodite",
"age": 32,
"orders": {"product": "socks","quantity": 4, "price": "$6", "attribute1" : "a1"}
}');
insert into test1(JOB_ID, RECORD_CONTENT) select 1,parse_json('{
"customer": "Aphrodite",
"age": 32,
"orders": {"product": "shoe", "quantity": 2, "brand" : "Woodland","attribute2" : "a2"}
}');
insert into test1(JOB_ID, RECORD_CONTENT) select 1,parse_json('{
"customer": "Aphrodite",
"age": 32,
"orders": {"product": "shoe polish","brand" : "Helios", "attribute3" : "a3" }
}');
merge into test2 t2 using (
// 1. first run: "rno = 1" selected the unique values -> inserted successfully
// 2. second run: "rno > 1" selects the duplicates -> "Duplicate row detected during DML action"
select * from (
select row_number() over (partition by JOB_ID order by JOB_ID desc) as rno,
JOB_ID, RECORD_CONTENT
from test1
) where rno > 1
) t1
on t1.JOB_ID = t2.JOB_ID
WHEN MATCHED THEN
UPDATE SET t2.JOB_ID = t1.JOB_ID,
t2.RECORD_CONTENT = t1.RECORD_CONTENT
WHEN NOT MATCHED THEN
INSERT (JOB_ID, RECORD_CONTENT) VALUES (t1.JOB_ID, t1.RECORD_CONTENT);
Expected output:
select * from test2;
select parse_json('{
"customer": "Aphrodite",
"age": 32,
"orders": {"product": "shoe polish","quantity": 2, "brand" : "Helios","price": "$6",
"attribute1" : "a1","attribute2" : "a2","attribute3" : "a3" }
}');
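The "duplicate row" error occurs because the MERGE matches more than one source row (rno > 1) to the same target row, which is nondeterministic. One alternative is to combine the duplicates into a single record per id before merging. A minimal Python sketch of the deep merge the expected output implies (in Snowflake itself this would be a UDF or object aggregation; the helper name here is illustrative):

```python
import json
from functools import reduce

def deep_merge(base: dict, update: dict) -> dict:
    """Recursively merge `update` into `base`; later values win on conflicts."""
    merged = dict(base)
    for key, value in update.items():
        if key in merged and isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# The three RECORD_CONTENT values for JOB_ID = 1 from the sample inserts
rows = [
    json.loads('{"customer": "Aphrodite", "age": 32, "orders": '
               '{"product": "socks", "quantity": 4, "price": "$6", "attribute1": "a1"}}'),
    json.loads('{"customer": "Aphrodite", "age": 32, "orders": '
               '{"product": "shoe", "quantity": 2, "brand": "Woodland", "attribute2": "a2"}}'),
    json.loads('{"customer": "Aphrodite", "age": 32, "orders": '
               '{"product": "shoe polish", "brand": "Helios", "attribute3": "a3"}}'),
]

# Fold all duplicates into one record, matching the expected output
combined = reduce(deep_merge, rows)
```

Merging the pre-combined record then leaves exactly one source row per JOB_ID, so the MERGE is deterministic again.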

Related

SQL Server - return count of json array items in sql server column across multiple rows

Using SQL Server 2019, I have data in a table with the following columns:
id, alias, value
The value column contains json in this format:
[
{
"num": 1,
"description": "test 1"
},
{
"num": 2,
"description": "test 2"
},
{
"num": 3,
"description": "test 2"
}
]
I want to get a count of "num" items within the json for each row in my SQL table.
For example query result should look like
id CountOfNumFromValueColumn
------------------------------
1 3
Update - The SQL below works with the above json and gives results in the above format
SELECT id, count(Num) AS CountOfNumFromValueColumn
FROM MyTable
CROSS APPLY OPENJSON ([value], N'$')
WITH (
num int)
WHERE ISJSON([value]) > 0
GROUP BY id
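The OPENJSON-based counting can be sketched outside SQL Server as well; a Python equivalent that counts "num" items per row and, like the ISJSON filter, skips invalid JSON (the sample values are illustrative):

```python
import json

# (id, value) rows, mirroring the MyTable columns; the second row is not valid JSON
rows = [
    (1, '[{"num": 1, "description": "test 1"},'
        ' {"num": 2, "description": "test 2"},'
        ' {"num": 3, "description": "test 2"}]'),
    (2, 'not json'),
]

def count_num_items(value: str) -> int:
    """Count array objects carrying a "num" key; return 0 for invalid JSON."""
    try:
        items = json.loads(value)
    except ValueError:
        return 0
    if not isinstance(items, list):
        return 0
    return sum(1 for item in items if isinstance(item, dict) and "num" in item)

counts = {row_id: count_num_items(value) for row_id, value in rows}
```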

How to update single column in flutter sqflite?

These are my columns:
await db.execute('CREATE TABLE LocalProduct('
'id INTEGER,'
'name TEXT,'
'price TEXT,'
'image TEXT,'
'qty INTEGER,'
'product_item_count INTEGER,'
'last_fetched DATETIME DEFAULT CURRENT_TIMESTAMP,'
'created_at DATETIME)');
var _flag = storage.getItem("isFirst");
This is my query
final res = await db.rawUpdate('UPDATE LocalProduct SET qty = ? WHERE id = ?', [podData.qty, podData.id]);
This is the data coming from the API that should be updated:
{
"id": 2877,
"name": "Britannia Cheese Block, 200g",
"qty": 9,
"created_at": "2021-04-20T11:30:08.000000Z",
"updated_at": "2021-04-20T11:30:08.000000Z"
},
I only want to update the qty for that id, but I am getting 2 errors:
the data in the db has been deleted or isn't there anymore;
it doesn't update.
So can I not update only the qty?
It seems that the ID you are trying to update does not exist in the table. Make sure that podData.id exists in the LocalProduct table, because the UPDATE command itself looks fine.
I know this should be a comment, but I don't have enough reputation yet.
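The symptom is easy to reproduce with Python's sqlite3 (sqflite also runs on SQLite underneath; the table and values here are illustrative): an UPDATE whose WHERE clause matches no rows raises no error, it just affects zero rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE LocalProduct (id INTEGER, name TEXT, qty INTEGER)")
conn.execute("INSERT INTO LocalProduct VALUES (1, 'Milk', 3)")

# id 2877 was never inserted: the UPDATE succeeds but changes nothing
cur = conn.execute("UPDATE LocalProduct SET qty = ? WHERE id = ?", (9, 2877))
missing = cur.rowcount  # number of rows actually modified

# id 1 exists, so exactly one row is modified
cur = conn.execute("UPDATE LocalProduct SET qty = ? WHERE id = ?", (9, 1))
present = cur.rowcount
```

Checking the affected-row count after rawUpdate (it returns that count in sqflite) tells you whether the id existed.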

Query JSON Key:Value Pairs in AWS Athena

I have received a data set from a client that is loaded in AWS S3. The data contains unnamed JSON key:value pairs. This isn't my area of expertise, so I was looking for a little help.
The structure of JSON data that I've typically worked with in the past looks similar to this:
{ "name":"John", "age":30, "car":null }
The data that I have received from my client is formatted as such:
{
"answer_id": "cc006",
"answer": {
"101086": 1,
"101087": 2,
"101089": 2,
"101090": 7,
"101091": 5,
"101092": 3,
"101125": 2
}
}
This is survey data, where the key on the left is a numeric customer identifier, and the value on the right is their response to a survey question, i.e. customer "101125" answered the survey with a value of "2". I need to be able to query the JSON data using Athena such that my result set looks similar to:
Cross joining the unnested children against the parent node isn't an issue. What I can't figure out is how to select all of the keys from the array "answer" without specifying that actual key name. Similarly, I want to be able to select all of the values as well.
Is it possible to create a virtual table in Athena that would allow for these results, or do I need to convert the JSON to a format that looks more similar to the following:
{
"answer_id": "cc006",
"answer": [
{ "key": "101086", "value": 1 },
{ "key": "101087", "value": 2 },
{ "key": "101089", "value": 2 },
{ "key": "101090", "value": 7 },
{ "key": "101091", "value": 5 },
{ "key": "101092", "value": 3 },
{ "key": "101125", "value": 2 }
]
}
EDIT 6/4/2020
I was able to use the code that Theon provided below along with the following table structure:
CREATE EXTERNAL TABLE answer_example (
answer_id string,
answer string
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://mybucket/'
That allowed me to use the following query to generate the results that I needed.
WITH Data AS(
SELECT
answer_id,
CAST(json_extract(answer, '$') AS MAP(VARCHAR, VARCHAR)) as answer
FROM
answer_example
)
SELECT
answer_id,
key,
element_at(answer, key) AS value
FROM
Data
CROSS JOIN UNNEST (map_keys(answer)) AS answer (key)
EDIT 6/5/2020
Taking additional advice from Theon's response below, the following DDL and Query simplify this quite a bit.
DDL:
CREATE EXTERNAL TABLE answer_example (
answer_id string,
answer map<string,string>
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://mybucket/'
Query:
SELECT
answer_id,
key,
element_at(answer, key) AS value
FROM
answer_example
CROSS JOIN UNNEST (map_keys(answer)) AS answer (key)
You can cross join with the keys of the answer property and then pick the corresponding value. Something like this:
WITH data AS (
SELECT
'cc006' AS answer_id,
MAP(
ARRAY['101086', '101087', '101089', '101090', '101091', '101092', '101125'],
ARRAY[1, 2, 2, 7, 5, 3, 2]
) AS answers
)
SELECT
answer_id,
key,
element_at(answers, key) AS value
FROM data
CROSS JOIN UNNEST (map_keys(answers)) AS answer (key)
You could probably do something with transform_keys to create rows of the key value pairs, but the SQL above does the trick.
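The map-unnesting the Athena queries perform can be sketched in plain Python to show the intended row shape, one (answer_id, key, value) row per map entry (a sketch only; the sample record is from the question):

```python
import json

record = json.loads("""
{
  "answer_id": "cc006",
  "answer": {"101086": 1, "101087": 2, "101089": 2,
             "101090": 7, "101091": 5, "101092": 3, "101125": 2}
}
""")

# One output row per key, mirroring CROSS JOIN UNNEST(map_keys(answer))
# followed by element_at(answer, key)
rows = [(record["answer_id"], key, value)
        for key, value in record["answer"].items()]
```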

update value in list Postgres jsonb

I am trying to update this JSON:
[{"id": "1", "name": "myconf", "icons": "small", "theme": "light", "textsize": "large"},
{"id": 2, "name": "myconf2", "theme": "dark"}, {"name": "firstconf", "theme": "dark", "textsize": "large"},
{"id": 3, "name": "firstconxsf", "theme": "dassrk", "textsize": "lassrge"}]
and this is the table containing that JSON column:
CREATE TABLE USER_CONFIGURATIONS ( ID BIGSERIAL PRIMARY KEY, DATA JSONB );
Adding a new field is easy; I am using:
UPDATE USER_CONFIGURATIONS
SET DATA = DATA || '{"name":"firstconxsf", "theme":"dassrk", "textsize":"lassrge"}'
WHERE id = 9;
But how do I update a single element where id = 1 or 2?
Step-by-step demo: db<>fiddle
UPDATE users -- 4
SET data = s.updated
FROM (
SELECT
jsonb_agg( -- 3
CASE -- 2
WHEN ((elem ->> 'id')::int IN (1,2)) THEN
elem || '{"name":"abc", "icon":"HUGE"}'
ELSE elem
END
) AS updated
FROM
users,
jsonb_array_elements(data) elem -- 1
) s;
1. Expand the array elements into one row each.
2. If an element has a relevant id, update it with the || operator; if not, keep the original one.
3. Reaggregate the array after updating the JSON data.
4. Execute the UPDATE statement.
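The unnest/patch/reaggregate logic of that UPDATE can be sketched in Python (a sketch only, not Postgres itself; note that in the sample data one id is the string "1" and another the integer 2, so the comparison must tolerate both):

```python
import json

# Trimmed sample from the question's DATA column
data = json.loads('[{"id": "1", "name": "myconf"},'
                  ' {"id": 2, "name": "myconf2"},'
                  ' {"name": "firstconf"}]')

def update_by_id(elements, target_ids, patch):
    """Patch the elements whose id matches, keep the rest, reaggregate."""
    return [
        {**elem, **patch} if str(elem.get("id")) in target_ids else elem
        for elem in elements
    ]

updated = update_by_id(data, {"1", "2"}, {"name": "abc", "icon": "HUGE"})
```

This mirrors the SQL step by step: jsonb_array_elements is the iteration, || is the dict merge, and jsonb_agg is the surrounding list rebuild.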

BigQuery: How to flatten repeated structured property imported from datastore

Dear all,
I started to use BigQuery to analyze data in the GAE datastore this month. First, I exported the data via the "Datastore Admin" page of the GAE console to Google Cloud Storage, and then imported it from Google Cloud Storage into BigQuery. It works very smoothly except for the repeated structured property. I expected the imported record to be in this format:
parent:"James",
children: [{
name: "name1",
age: 5,
gender: "M"
}, {
name: "name2",
age: 50,
gender: "F"
}, {
name: "name3",
age: 33,
gender: "M"
},
]
I know how to flatten data in the above format, but the actual data in BigQuery seems to be in the following format:
parent: "James",
children.name:["name1", "name2", "name3"],
children.age:[5, 50, 33],
children.gender:["M", "F", "M"],
I'm wondering if it's possible to flatten the above data in BigQuery for further analysis. The ideal format of the result table in my mind is:
parentName, children.name, children.age, children.gender
James, name1, 5, "M"
James, name2, 50, "F"
James, name3, 33, "M"
Cheers!
With the recently introduced BigQuery Standard SQL, things are so much nicer!
Try the query below (make sure to uncheck the Use Legacy SQL checkbox under Show Options):
WITH parents AS (
SELECT
"James" AS parentName,
STRUCT(
["name1", "name2", "name3"] AS name,
[5, 50, 33] AS age,
["M", "F", "M"] AS gender
) AS children
)
SELECT
parentName, childrenName, childrenAge, childrenGender
FROM
parents,
UNNEST(children.name) AS childrenName WITH OFFSET AS pos_name,
UNNEST(children.age) AS childrenAge WITH OFFSET AS pos_age,
UNNEST(children.gender) AS childrenGender WITH OFFSET AS pos_gender
WHERE
pos_name = pos_age AND pos_name = pos_gender
Here the original table, parents, has the data below, with the respective schema:
[{
"parentName": "James",
"children": {
"name": ["name1", "name2", "name3"],
"age": ["5", "50", "33" ],
"gender": ["M", "F", "M"]
}
}]
and the output is
Note: the above is based solely on what I see in the original question and most likely needs to be adjusted to your specific needs.
Hope this helps in terms of the direction to go and where to start!
Added:
The query above uses row-based CROSS JOINs, meaning all the variations for the same parent are first assembled and then the WHERE clause filters out the "wrong" ones.
In contrast, the version below uses INNER JOINs to eliminate this "side effect":
WITH parents AS (
SELECT
"James" AS parentName,
STRUCT(
["name1", "name2", "name3"] AS name,
[5, 50, 33] AS age,
["M", "F", "M"] AS gender
) AS children
)
SELECT
parentName, childrenName, childrenAge, childrenGender
FROM
parents, UNNEST(children.name) AS childrenName WITH OFFSET AS pos_name
JOIN UNNEST(children.age) AS childrenAge WITH OFFSET AS pos_age
ON pos_name = pos_age
JOIN UNNEST(children.gender) AS childrenGender WITH OFFSET AS pos_gender
ON pos_age = pos_gender
Intuitively, I would expect the second version to be a little more efficient for a bigger table.
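The positional pairing that the WITH OFFSET joins perform corresponds to a plain zip over the parallel arrays; a small Python sketch using the question's sample data shows the row shape both queries produce:

```python
# Parallel arrays, as the datastore import stores the repeated property
children = {
    "name": ["name1", "name2", "name3"],
    "age": [5, 50, 33],
    "gender": ["M", "F", "M"],
}

# zip pairs elements by position -- the same effect the WITH OFFSET
# equality conditions achieve in the queries above
rows = [("James", n, a, g)
        for n, a, g in zip(children["name"], children["age"], children["gender"])]
```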
You should be able to use the 'large query results' feature to generate a new flattened table. Unfortunately, the syntax is terrifying. The basic principle is that you want to flatten each of the fields and save off the position, then filter where the position is the same.
Try something like:
SELECT parentName, children.name, children.age, children.gender,
position(children.name) as name_pos,
position(children.age) as age_pos,
position(children.gender) as gender_pos,
FROM table
SELECT
parent,
children.name,
children.age,
children.gender,
pos
FROM (
SELECT
parent,
children.name,
children.age,
children.gender,
gender_pos,
pos
FROM (
FLATTEN((
SELECT
parent,
children.name,
children.age,
children.gender,
pos,
POSITION(children.gender) as gender_pos
FROM (
SELECT
parent,
children.name,
children.age,
children.gender,
pos,
FROM (
FLATTEN((
SELECT
parent,
children.name,
children.age,
children.gender,
pos,
POSITION(children.age) AS age_pos
FROM (
FLATTEN((
SELECT
parent,
children.name,
children.age,
children.gender,
POSITION(children.name) AS pos
FROM table
),
children.name))),
children.age))
WHERE
age_pos = pos)),
children.gender)))
WHERE
gender_pos = pos;
To allow large results, if you are using the BigQuery UI, you should click the 'advanced options' button, specify a destination table, and check the 'allow large results' flag.
Note that if your data is stored as an entity that has a nested record that looks like {name, age, gender}, we should be transforming this into a nested record in BigQuery instead of parallel arrays. I'll look into why this is happening.
