How to update a single column in Flutter sqflite?

These are my columns:
await db.execute('CREATE TABLE LocalProduct('
    'id INTEGER,'
    'name TEXT,'
    'price TEXT,'
    'image TEXT,'
    'qty INTEGER,'
    'product_item_count INTEGER,'
    'last_fetched DATETIME DEFAULT CURRENT_TIMESTAMP,'
    'created_at DATETIME)');
var _flag = storage.getItem("isFirst");
This is my query:
final res = await db.rawUpdate('UPDATE LocalProduct SET qty = ? WHERE id = ?', [podData.qty, podData.id]);
This is the data coming from the API that should be updated:
{
    "id": 2877,
    "name": "Britannia Cheese Block, 200g",
    "qty": 9,
    "created_at": "2021-04-20T11:30:08.000000Z",
    "updated_at": "2021-04-20T11:30:08.000000Z"
},
I only want to update the qty for that id, but I am getting two errors:
1. The data in the DB has been deleted or is no longer there.
2. It doesn't update.
So can I not update just the qty?

It seems that the id you are trying to update does not exist in the table. Make sure that podData.id exists in the LocalProduct table, because the UPDATE command itself looks fine.
I know this should be a comment, but I don't have enough reputation yet.
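A quick way to confirm is to look the row up first and branch on the result. A minimal sketch, assuming db and podData are in scope as in the question (the insert fallback is only illustrative):

// Sketch: verify the row exists before updating it.
final rows = await db.query('LocalProduct',
    where: 'id = ?', whereArgs: [podData.id]);
if (rows.isEmpty) {
  // No row with this id, so the UPDATE would affect nothing; insert instead.
  await db.insert('LocalProduct', {'id': podData.id, 'qty': podData.qty});
} else {
  // rawUpdate returns the number of rows changed; 0 means nothing matched.
  final count = await db.rawUpdate(
      'UPDATE LocalProduct SET qty = ? WHERE id = ?',
      [podData.qty, podData.id]);
  print('updated $count row(s)');
}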

Related

Duplicate row detected during DML action Row Values

Each of our messages has a unique id and several attributes; the final result should combine all of these attributes into a single message. We tried using Snowflake MERGE, but it is not working as expected. In the first run we used ROW_NUMBER with PARTITION BY to determine the unique records and inserted them. In the second run we tried to update more than one record, but we received the error "Duplicate row detected during DML action Row Values".
We tried the session parameter ERROR_ON_NONDETERMINISTIC_MERGE=FALSE, but then the outcome may not be reliable or consistent.
We also tried a JavaScript deep merge, but the volume was very high and there were performance problems.
Sample code below:
create or replace table test1 (Job_Id VARCHAR, RECORD_CONTENT VARIANT);
create or replace table test2 like test1;
insert into test1 (JOB_ID, RECORD_CONTENT) select 1, parse_json('{
    "customer": "Aphrodite",
    "age": 32,
    "orders": {"product": "socks", "quantity": 4, "price": "$6", "attribute1": "a1"}
}');
insert into test1 (JOB_ID, RECORD_CONTENT) select 1, parse_json('{
    "customer": "Aphrodite",
    "age": 32,
    "orders": {"product": "shoe", "quantity": 2, "brand": "Woodland", "attribute2": "a2"}
}');
insert into test1 (JOB_ID, RECORD_CONTENT) select 1, parse_json('{
    "customer": "Aphrodite",
    "age": 32,
    "orders": {"product": "shoe polish", "brand": "Helios", "attribute3": "a3"}
}');
merge into test2 t2 using (
    // 1. first run: unique values inserted using "= 1" -> successfully inserted unique values
    // 2. second run: updating attributes using "> 1" -> "Duplicate row detected during DML action Row Values"
    select * from (
        select
            row_number() over (partition by JOB_ID order by JOB_ID desc) as rno,
            JOB_ID,
            RECORD_CONTENT
        from test1
    ) where rno > 1
) t1 on t1.JOB_ID = t2.JOB_ID
WHEN MATCHED THEN UPDATE
    SET t2.JOB_ID = t1.JOB_ID,
        t2.RECORD_CONTENT = t1.RECORD_CONTENT
WHEN NOT MATCHED THEN
    INSERT (JOB_ID, RECORD_CONTENT) VALUES (t1.JOB_ID, t1.RECORD_CONTENT)
Expected output:
select * from test2;
select parse_json('{
    "customer": "Aphrodite",
    "age": 32,
    "orders": {"product": "shoe polish", "quantity": 2, "brand": "Helios", "price": "$6",
               "attribute1": "a1", "attribute2": "a2", "attribute3": "a3"}
}');
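For reference, the "Duplicate row detected" error is raised because the MERGE matches more than one source row to the same target row. A minimal sketch that avoids the error by collapsing the source to one row per JOB_ID with QUALIFY; note that it only keeps the latest record per id and does not deep-merge the attributes the way the expected output above requires:

merge into test2 t2 using (
    // keep exactly one source row per JOB_ID so the MERGE is deterministic
    select JOB_ID, RECORD_CONTENT
    from test1
    qualify row_number() over (partition by JOB_ID order by JOB_ID desc) = 1
) t1 on t1.JOB_ID = t2.JOB_ID
WHEN MATCHED THEN UPDATE SET t2.RECORD_CONTENT = t1.RECORD_CONTENT
WHEN NOT MATCHED THEN INSERT (JOB_ID, RECORD_CONTENT) VALUES (t1.JOB_ID, t1.RECORD_CONTENT);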

Compare multiple date fields in JSON and use them in where clause

So I have a text field in my Postgres 10.8 DB (json_array_elements is not possible). It has a JSON structure like this:
{
    "code_cd": "02",
    "tax_cd": null,
    "earliest_exit_date": [
        {
            "date": "2023-03-31",
            "_destroy": ""
        },
        {
            "date": "2021-11-01",
            "_destroy": ""
        },
        {
            "date": "2021-12-21",
            "_destroy": ""
        }
    ],
    "enter_date": null,
    "leave_date": null
}
earliest_exit_date can also be empty, like this:
{
    "code_cd": "02",
    "tax_cd": null,
    "earliest_exit_date": [],
    "enter_date": null,
    "leave_date": null
}
Now I want to get back the earliest_exit_date where the date is after current_date and is the closest one to current_date. For the example above, the output has to be: 2021-12-21
Does anyone know how to do this?
If your table has a unique value or an id column, you can use the query below:
Sample table and data structure: dbfiddle
select distinct
    id,
    min("date") filter (where "date" > current_date) over (partition by id)
from
    test t
cross join jsonb_to_recordset(t.data::jsonb -> 'earliest_exit_date') as e("date" date)
order by id
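Since earliest_exit_date can also be an empty array, note that the CROSS JOIN drops such rows entirely. A sketch of a variant, assuming the same test table and data column as in the fiddle, that keeps those rows with a NULL date via LEFT JOIN LATERAL:

select distinct
    t.id,
    min(e."date") filter (where e."date" > current_date) over (partition by t.id)
from
    test t
left join lateral jsonb_to_recordset(t.data::jsonb -> 'earliest_exit_date')
    as e("date" date) on true
order by t.id;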

Peewee: Querying SUM of field/column

I've been trying to get the sum total of a field/column in peewee. I thought it would be straightforward, but I've been going around in circles for a couple of hours now.
All I'd like to get back from the query is sum total of the price field/column.
An example of the code I've been using is:
Model
class Package(db.Model):
    id = PrimaryKeyField()
    code = CharField(max_length=11, unique=True, null=False)
    price = DecimalField(null=False, decimal_places=2)
    description = TextField()
    created = DateTimeField(default=datetime.now, null=False)
    updated = DateTimeField(default=datetime.now, null=False)
Query
sum_total = fn.SUM(Package.price).alias('sum_total')
query = (Package
         .select(sum_total)
         .order_by(sum_total))
The outputs I'm getting are:
query.sum_total
AttributeError: 'ModelSelect' object has no attribute 'sum_total'
for q in query:
    logger.debug(json.dumps(model_to_dict(q)))
{"code": null, "created": null, "description": null, "id": null, "numberOfTickets": null, "price": null, "updated": null}
I'm sure I'm missing something really simple. I haven't been able to find any examples outside of the peewee documentation, and I've tried those, but I'm still getting nowhere.
Any ideas?
The model_to_dict() helper is not magic: it does not automatically infer that you just want to dump the sum_total column into a dict. Additionally, are you trying to get a single sum total for all rows in the db? If so, that is just a scalar value, so you can write:
total = Package.select(fn.SUM(Package.price)).scalar()
return {'sum_total': total}
If you want to group totals by some other columns, you need to select those columns and specify the appropriate group_by() - for example, this groups sum total by code:
sum_total = fn.SUM(Package.price).alias('sum_total')
query = (Package
         .select(Package.code, sum_total)
         .group_by(Package.code)
         .order_by(sum_total))
accum = []
for obj in query:
    accum.append({'code': obj.code, 'sum_total': obj.sum_total})
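As a small follow-up, peewee's .dicts() can replace the manual accumulation; a sketch reusing the grouped query above:

# .dicts() makes the query yield plain dicts keyed by the selected
# columns and aliases, so no per-row unpacking is needed.
accum = list(query.dicts())  # [{'code': ..., 'sum_total': ...}, ...]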

Query JSON Key:Value Pairs in AWS Athena

I have received a data set from a client that is loaded in AWS S3. The data contains unnamed JSON key:value pairs. This isn't my area of expertise, so I was looking for a little help.
The structure of JSON data that I've typically worked with in the past looks similar to this:
{ "name":"John", "age":30, "car":null }
The data that I have received from my client is formatted as such:
{
    "answer_id": "cc006",
    "answer": {
        "101086": 1,
        "101087": 2,
        "101089": 2,
        "101090": 7,
        "101091": 5,
        "101092": 3,
        "101125": 2
    }
}
This is survey data, where the key on the left is a numeric customer identifier, and the value on the right is their response to a survey question, i.e. customer "101125" answered the survey with a value of "2". I need to be able to query the JSON data using Athena such that my result set looks similar to:
answer_id | key    | value
cc006     | 101086 | 1
cc006     | 101087 | 2
...       | ...    | ...
cc006     | 101125 | 2
Cross joining the unnested children against the parent node isn't an issue. What I can't figure out is how to select all of the keys from the "answer" object without specifying the actual key names. Similarly, I want to be able to select all of the values as well.
Is it possible to create a virtual table in Athena that would allow for these results, or do I need to convert the JSON to a format that looks more similar to the following?
{
    "answer_id": "cc006",
    "answer": [
        { "key": "101086", "value": 1 },
        { "key": "101087", "value": 2 },
        { "key": "101089", "value": 2 },
        { "key": "101090", "value": 7 },
        { "key": "101091", "value": 5 },
        { "key": "101092", "value": 3 },
        { "key": "101125", "value": 2 }
    ]
}
EDIT 6/4/2020
I was able to use the code that Theon provided below along with the following table structure:
CREATE EXTERNAL TABLE answer_example (
    answer_id string,
    answer string
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://mybucket/'
That allowed me to use the following query to generate the results that I needed.
WITH Data AS (
    SELECT
        answer_id,
        CAST(json_extract(answer, '$') AS MAP(VARCHAR, VARCHAR)) AS answer
    FROM
        answer_example
)
SELECT
    answer_id,
    key,
    element_at(answer, key) AS value
FROM
    Data
CROSS JOIN UNNEST (map_keys(answer)) AS answer (key)
EDIT 6/5/2020
Taking additional advice from Theon's response below, the following DDL and query simplify this quite a bit.
DDL:
CREATE EXTERNAL TABLE answer_example (
    answer_id string,
    answer map<string,string>
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://mybucket/'
Query:
SELECT
    answer_id,
    key,
    element_at(answer, key) AS value
FROM
    answer_example
CROSS JOIN UNNEST (map_keys(answer)) AS answer (key)
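One further possible simplification (an assumption worth verifying on your engine version: Presto's UNNEST can expand a MAP directly into key and value columns), which drops the element_at lookup entirely:

SELECT
    answer_id,
    key,
    value
FROM
    answer_example
CROSS JOIN UNNEST (answer) AS t (key, value)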
Cross join with the keys of the answer property and then pick the corresponding value. Something like this:
WITH data AS (
SELECT
'cc006' AS answer_id,
MAP(
ARRAY['101086', '101087', '101089', '101090', '101091', '101092', '101125'],
ARRAY[1, 2, 2, 7, 5, 3, 2]
) AS answers
)
SELECT
answer_id,
key,
element_at(answers, key) AS value
FROM data
CROSS JOIN UNNEST (map_keys(answers)) AS answer (key)
You could probably do something with transform_keys to create rows of the key/value pairs, but the SQL above does the trick.

update value in list Postgres jsonb

I am trying to update this JSON:
[{"id": "1", "name": "myconf", "icons": "small", "theme": "light", "textsize": "large"},
 {"id": 2, "name": "myconf2", "theme": "dark"},
 {"name": "firstconf", "theme": "dark", "textsize": "large"},
 {"id": 3, "name": "firstconxsf", "theme": "dassrk", "textsize": "lassrge"}]
and this is the table containing that JSON column:
CREATE TABLE USER_CONFIGURATIONS ( ID BIGSERIAL PRIMARY KEY, DATA JSONB );
Adding a new field is easy; I am using:
UPDATE USER_CONFIGURATIONS
SET DATA = DATA || '{"name":"firstconxsf", "theme":"dassrk", "textsize":"lassrge"}'
WHERE id = 9;
But how do I update a single element with WHERE id = 1 or 2?
Step-by-step demo: db<>fiddle
UPDATE users                             -- 4
SET data = s.updated
FROM (
    SELECT
        jsonb_agg(                       -- 3
            CASE                         -- 2
                WHEN ((elem ->> 'id')::int IN (1, 2)) THEN
                    elem || '{"name":"abc", "icon":"HUGE"}'
                ELSE elem
            END
        ) AS updated
    FROM
        users,
        jsonb_array_elements(data) elem  -- 1
) s;
1. Expand the array elements into one row each.
2. If an element has a relevant id, update it with the || operator; if not, keep the original one.
3. Reaggregate the array after updating the JSON data.
4. Execute the UPDATE statement.
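Note that the subquery above aggregates the elements from every row in users into a single array, which is fine for a one-row table. A sketch of a correlated variant for multi-row tables, assuming the primary key column is id as in the USER_CONFIGURATIONS definition above:

UPDATE user_configurations u
SET data = s.updated
FROM (
    SELECT
        id,
        jsonb_agg(
            CASE
                WHEN ((elem ->> 'id')::int IN (1, 2)) THEN
                    elem || '{"name":"abc", "icon":"HUGE"}'
                ELSE elem
            END
        ) AS updated
    FROM
        user_configurations,
        jsonb_array_elements(data) elem
    GROUP BY id
) s
WHERE u.id = s.id;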
