Here is part of a JSON file:
{
"payload": {
"orders": [
{
"quantity": 1,
"platinum": 4,
"visible": true,
"order_type": "sell",
"user": {
"reputation": 5,
"region": "en",
"last_seen": "2022-11-17T08:15:43.360+00:00",
"ingame_name": "Noxxat",
"id": "5b50d73859d885026b523cd1",
"avatar": null,
"status": "offline"
},
"platform": "pc",
"region": "en",
"creation_date": "2020-09-04T15:30:41.000+00:00",
"last_update": "2021-11-19T09:41:43.000+00:00",
"id": "5f525da1c98cd000d7513813"
},
{
"order_type": "sell",
"visible": true,
"quantity": 2,
"platinum": 6,
"user": {
"reputation": 3,
"region": "en",
"last_seen": "2022-11-18T14:22:53.023+00:00",
"ingame_name": "Dhatman",
"id": "5b79921649262103f74b6585",
"avatar": null,
"status": "offline"
},
"platform": "pc",
"region": "en",
"creation_date": "2020-11-06T10:32:32.000+00:00",
"last_update": "2022-10-11T16:51:55.000+00:00",
"id": "5fa526406ff3660486ef556c"
},
{
"quantity": 1,
"visible": true,
"platinum": 5,
"order_type": "sell",
"user": {
"reputation": 4,
"region": "en",
"last_seen": "2022-11-18T18:31:49.199+00:00",
"ingame_name": "TheronGuardxx",
"avatar": "user/avatar/5e235e94ab7656047a86f70c.png?7b1e90d474a62c6ba3c2d3ef06aed927",
"id": "5e235e94ab7656047a86f70c",
"status": "offline"
},
"platform": "pc",
"region": "en",
"creation_date": "2020-12-17T22:46:57.000+00:00",
"last_update": "2022-10-15T23:37:01.000+00:00",
"id": "5fdbdfe13e8c4f017f5e3352"
}
]
}
}
How do I find the minimum amount of platinum in this file?
As I understand it, I need a loop that goes through the entire file and assigns a new value to the variable min whenever the current amount of platinum is less than the value currently stored in min.
But what should the code look like?
At the moment I have written a block that finds the amount of platinum, the seller's alias, and the number of items from the last element of the JSON file.
num = 1
flagSell = 0
while flagSell == 0:
    if r_json["payload"]["orders"][len(r_json["payload"]["orders"]) - num]["user"]['status'] == 'ingame':
        if r_json["payload"]["orders"][len(r_json["payload"]["orders"]) - num]["region"] == 'en':
            if r_json["payload"]["orders"][len(r_json["payload"]["orders"]) - num]["order_type"] == 'sell':
                min = r_json["payload"]["orders"][len(r_json["payload"]["orders"]) - num]["platinum"]
                author = r_json["payload"]["orders"][len(r_json["payload"]["orders"]) - num]["user"]["ingame_name"]
                quantity = r_json["payload"]["orders"][len(r_json["payload"]["orders"]) - num]["quantity"]
                flagSell = 1
            else:
                num += 1
        else:
            num += 1
    else:
        num += 1
Try the built-in function min() to find the minimum order according to the platinum key (data is your dictionary from the question):
min_order = min(data["payload"]["orders"], key=lambda o: o["platinum"])
print("Min Platinum =", min_order["platinum"])
print("Name =", min_order["user"]["ingame_name"])
print("Quantity =", min_order["quantity"])
Prints:
Min Platinum = 4
Name = Noxxat
Quantity = 1
EDIT: If you want to search for a minimum in orders where order_type == 'sell':
min_order = min(
(o for o in data["payload"]["orders"] if o["order_type"] == "sell"),
key=lambda o: o["platinum"],
)
print("Min Platinum =", min_order["platinum"])
print("Name =", min_order["user"]["ingame_name"])
print("Quantity =", min_order["quantity"])
I have the following jsonb structure in the recipients column of a table called mailing:
[
{
"text": "Text1",
"smsId": 1,
"value": "123456",
"status": "Sent"
},
{
"text": "Text1",
"smsId": 2,
"value": "23456",
"status": "Sent"
},
{
"text": "Text1",
"smsId": 3,
"value": "345678",
"status": "Sent"
}]
I need to update one field in multiple elements, so the outcome should look like this:
[
{
"text": "Text1",
"smsId": 1,
"value": "123456",
"status": "Delivered"
},
{
"text": "Text1",
"smsId": 2,
"value": "23456",
"status": "Delivered"
},
{
"text": "Text1",
"smsId": 3,
"value": "345678",
"status": "Delivered"
}]
The closest I got to a solution is this:
WITH item AS (SELECT mailing_id, ('{' || INDEX-1 || ',status}')::text[] AS PATH
FROM mailing, jsonb_array_elements(recipients) WITH ORDINALITY arr(recipient, INDEX)
WHERE recipient->>'smsId' = any(array['1', '2', '3']))
UPDATE mailing m
SET recipients = jsonb_set(recipients, item.path, '"Delivered"',FALSE)
FROM item
WHERE m.mailing_id = item.mailing_id;
But this solution updates only the first matching element, and I am not sure whether I should somehow loop this or try a different approach.
You need to aggregate modified array elements with jsonb_agg():
with new_data as (
select
mailing_id,
jsonb_agg(
case when value->>'smsId' = any('{1,2,3}') then value || '{"status": "Delivered"}'
else value
end) as recipients
from mailing
cross join jsonb_array_elements(recipients)
group by mailing_id
)
update mailing m
set recipients = n.recipients
from new_data n
where m.mailing_id = n.mailing_id;
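The || operator does the per-element work here: merging {"status": "Delivered"} into an element overwrites its existing status key. A quick standalone check (a sketch):
select '{"smsId": 1, "status": "Sent"}'::jsonb || '{"status": "Delivered"}'::jsonb;
-- {"smsId": 1, "status": "Delivered"}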
Test it in db<>fiddle.
[
{
"id": 200,
"date_created": "2021-01-14T17:15:55",
"sale": "2500.00",
},
{
"id": 201,
"date_created": "2021-01-15T15:10:30",
"sale": "2000.00",
},
{
"id": 202,
"date_created": "2021-02-4T11:14:10",
"sale": "4000.00",
}
]
I am unable to sum it on a monthly basis in React or Next.js.
Can someone guide me on how I can do it in React?
I want the output on a monthly basis:
For Jan total sum = 4500
For Feb total sum = 4000
etc...
You have to iterate over your array and add each sale.
Example:
const a = [
{
"id": 200,
"date_created": "2021-01-14T17:15:55",
"sale": "2500.00",
},
{
"id": 201,
"date_created": "2021-01-15T15:10:30",
"sale": "2000.00",
},
{
"id": 202,
"date_created": "2021-02-4T11:14:10",
"sale": "4000.00",
}
]
const sum = a.reduce((previous, current) => previous + Number(current.sale), 0)
console.log(sum) // 8500 (total of all sales)
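For the monthly totals you asked about, one option (a sketch along the same lines) is to key each sale by the "YYYY-MM" prefix of date_created:
const byMonth = a.reduce((totals, current) => {
  const month = current.date_created.slice(0, 7) // e.g. "2021-01"
  totals[month] = (totals[month] || 0) + Number(current.sale)
  return totals
}, {})
console.log(byMonth) // { "2021-01": 4500, "2021-02": 4000 }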
How can I get the data out of this array stored in a variant column in Snowflake? I don't care if it's a new table, a view, or a query. There is a second column of type varchar(256) that contains a unique ID.
If you can just help me read the "confirmed" data and the "editorIds" data, I can probably take it from there. Many thanks!
An output example would be:
UniqueID  ConfirmationID  EditorID
u3kd9     xxxx-436a-a2d7  nupd
u3kd9     xxxx-436a-a2d7  9l34c
R3nDo     xxxx-436a-a3e4  5rnj
yP48a     xxxx-436a-a477  jTpz8
yP48a     xxxx-436a-a477  nupd
[
{
"confirmed": {
"Confirmation": "Entry ID=xxxx-436a-a2d7-3525158332f0: Confirmed order submitted.",
"ConfirmationID": "xxxx-436a-a2d7-3525158332f0",
"ConfirmedOrders": 1,
"Received": "8/29/2019 4:31:11 PM Central Time"
},
"editorIds": [
"xxsJYgWDENLoX",
"JR9bWcGwbaymm3a8v",
"JxncJrdpeFJeWsTbT"
] ,
"id": "xxxxx5AvGgeSHy8Ms6Ytyc-1",
"messages": [],
"orderJson": {
"EntryID": "xxxxx5AvGgeSHy8Ms6Ytyc-1",
"Orders": [
{
"DropShipFlag": 1,
"FromAddressValue": 1,
"OrderAttributes": [
{
"AttributeUID": 548
},
{
"AttributeUID": 553
},
{
"AttributeUID": 2418
}
],
"OrderItems": [
{
"EditorId": "aC3f5HsJYgWDENLoX",
"ItemAssets": [
{
"AssetPath": "https://xxxx573043eac521.png",
"DP2NodeID": "10000",
"ImageHash": "000000000000000FFFFFFFFFFFFFFFFF",
"ImageRotation": 0,
"OffsetX": 50,
"OffsetY": 50,
"PrintedFileName": "aC3f5HsJYgWDENLoX-10000",
"X": 50,
"Y": 52.03909266409266,
"ZoomX": 100,
"ZoomY": 93.75
}
],
"ItemAttributes": [
{
"AttributeUID": 2105
},
{
"AttributeUID": 125
}
],
"ItemBookAttribute": null,
"ProductUID": 52,
"Quantity": 1
}
],
"SendNotificationEmailToAccount": true,
"SequenceNumber": 1,
"ShipToAddress": {
"Addr1": "Addr1",
"Addr2": "0",
"City": "City",
"Country": "US",
"Name": "Name",
"State": "ST",
"Zip": "00000"
}
}
]
},
"orderNumber": null,
"status": "order_placed",
"submitted": {
"Account": "350000",
"ConfirmationID": "xxxxx-436a-a2d7-3525158332f0",
"EntryID": "xxxxx-5AvGgeSHy8Ms6Ytyc-1",
"Key": "D83590AFF0CC0000B54B",
"NumberOfOrders": 1,
"Orders": [
{
"LineItems": [],
"Note": "",
"Products": [
{
"Price": "00.30",
"ProductDescription": "xxxxxint 8x10",
"Quantity": 1
},
{
"Price": "00.40",
"ProductDescription": "xxxxxut Black 8x10",
"Quantity": 1
},
{
"Price": "00.50",
"ProductDescription": "xxxxx"
},
{
"Price": "00.50",
"ProductDescription": "xxxscount",
"Quantity": 1
}
],
"SequenceNumber": "1",
"SubTotal": "00.70",
"Tax": "1.01",
"Total": "00.71"
}
],
"Received": "8/29/2019 4:31:10 PM Central Time"
},
"tracking": null,
"updatedOn": 1.598736670503000e+12
}
]
So, this is how I'd query that exact JSON assuming the data is in column var in table x:
SELECT x.var[0]:confirmed:ConfirmationID::varchar as ConfirmationID,
f.value::varchar as EditorID
FROM x,
LATERAL FLATTEN(input => var[0]:editorIds) f
;
Since your sample output doesn't match the JSON that you provided, I will assume that this is what you need.
Also, as a note, your JSON includes outer [ ], which indicates that the entire JSON string is inside an array. This is the reason for var[0] in my query. If you have multiple records inside that array, then you should remove that; in general, you should exclude the outer brackets and instead load each record into the table separately. I wasn't sure whether you could make that change, so I just wanted to note it. A sketch of the multi-record variant follows.
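A hedged sketch of that multi-record case, reusing the same table x and column var (flatten the outer array first, then each record's editorIds):
SELECT r.value:confirmed:ConfirmationID::varchar as ConfirmationID,
       f.value::varchar as EditorID
FROM x,
LATERAL FLATTEN(input => x.var) r,
LATERAL FLATTEN(input => r.value:editorIds) f;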
Does HERE have data on property parcel boundaries?
I am looking for the coordinates of individual properties so that I can overlay them on maps.
Unfortunately, we do not have this kind of data available. But maybe this is interesting for you: with the Reverse Geocoder you can request the shape of a postal district for a given latitude and longitude.
This example retrieves the shape and details of the first address found within a 150-meter radius of a specified location in Chicago (41.8839, -87.6389). The expected address is: 425 W Randolph St, Chicago, IL 60606, United States.
The addition of the additionaldata=IncludeShapeLevel,postalCode parameter ensures that the shape of the postal district is also included in the response. Reverse geocoding requests can be made using the reversegeocode endpoint and adding the prox parameter to the request URL. The number of results returned can be restricted using the maxresults parameter.
https://reverse.geocoder.ls.hereapi.com/6.2/reversegeocode.json?prox=41.8839%2C-87.6389%2C150&mode=retrieveAddresses&maxresults=1&additionaldata=IncludeShapeLevel%2CpostalCode&gen=9&apiKey=xxx
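If it helps, here is a minimal Python sketch of the same request (assuming the requests library; the API key is a placeholder):
import requests

params = {
    "prox": "41.8839,-87.6389,150",  # latitude, longitude, radius in meters
    "mode": "retrieveAddresses",
    "maxresults": 1,
    "additionaldata": "IncludeShapeLevel,postalCode",
    "gen": 9,
    "apiKey": "YOUR_API_KEY",  # placeholder
}
resp = requests.get(
    "https://reverse.geocoder.ls.hereapi.com/6.2/reversegeocode.json",
    params=params,
)
result = resp.json()["Response"]["View"][0]["Result"][0]
print(result["Location"]["Shape"]["Value"][:60])  # WKT MULTIPOLYGON of the postal district
The response looks like this: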
{
"Response": {
"MetaInfo": {
"Timestamp": "2020-07-27T09:56:24.943+0000",
"NextPageInformation": "2"
},
"View": [
{
"_type": "SearchResultsViewType",
"ViewId": 0,
"Result": [
{
"Relevance": 1,
"Distance": 16.3,
"MatchLevel": "houseNumber",
"MatchQuality": {
"Country": 1,
"State": 1,
"County": 1,
"City": 1,
"District": 1,
"Street": [
1
],
"HouseNumber": 1,
"PostalCode": 1
},
"MatchType": "pointAddress",
"Location": {
"LocationId": "NT_puy2gbuVuGd-an6zGdSyNA_xADM",
"LocationType": "address",
"DisplayPosition": {
"Latitude": 41.88403,
"Longitude": -87.63881
},
"NavigationPosition": [
{
"Latitude": 41.88401,
"Longitude": -87.63845
}
],
"MapView": {
"TopLeft": {
"Latitude": 41.8851542,
"Longitude": -87.6403199
},
"BottomRight": {
"Latitude": 41.8829058,
"Longitude": -87.6373001
}
},
"Address": {
"Label": "100 N Riverside Plz, Chicago, IL 60606, United States",
"Country": "USA",
"State": "IL",
"County": "Cook",
"City": "Chicago",
"District": "West Loop",
"Street": "N Riverside Plz",
"HouseNumber": "100",
"PostalCode": "60606",
"AdditionalData": [
{
"value": "United States",
"key": "CountryName"
},
{
"value": "Illinois",
"key": "StateName"
},
{
"value": "Cook",
"key": "CountyName"
},
{
"value": "N",
"key": "PostalCodeType"
}
]
},
"MapReference": {
"ReferenceId": "1190062166",
"MapId": "NAAM20117",
"MapVersion": "Q1/2020",
"MapReleaseDate": "2020-06-29",
"Spot": 0.59,
"SideOfStreet": "left",
"CountryId": "21000001",
"StateId": "21002247",
"CountyId": "21002623",
"CityId": "21002647",
"BuildingId": "9000000000002726912",
"AddressId": "79186499",
"RoadLinkId": "499349060"
},
"Shape": {
"_type": "WKTShapeType",
"Value": "MULTIPOLYGON (((-87.6339 41.88446, -87.6338 41.8813, -87.63239 41.88132, -87.63238 41.88067, -87.63378 41.88068, -87.63376 41.8794, -87.63377 41.87812, -87.6352 41.87811, -87.6352 41.87682, -87.63665 41.87678, -87.63663 41.87666, -87.63664 41.87658, -87.6367 41.87664, -87.63674 41.87678, -87.63706 41.87677, -87.6374 41.87807, -87.63756 41.87861, -87.63774 41.87936, -87.63794 41.88062, -87.63791 41.8819, -87.63779 41.88322, -87.63764 41.88449, -87.63727 41.88574, -87.63739 41.88602, -87.63603 41.88695, -87.63559 41.88717, -87.63248 41.8871, -87.63248 41.88703, -87.63374 41.88703, -87.63386 41.887, -87.63395 41.88702, -87.6339 41.88446)), ((-87.64102 41.87676, -87.64104 41.87804, -87.63955 41.87805, -87.63959 41.87933, -87.63966 41.88058, -87.63969 41.88187, -87.63976 41.88318, -87.6398 41.88446, -87.64022 41.88445, -87.64022 41.8846, -87.64025 41.88479, -87.64035 41.8851, -87.64047 41.88571, -87.63981 41.88572, -87.64062 41.88625, -87.64063 41.88639, -87.64064 41.88678, -87.63989 41.88679, -87.63993 41.88758, -87.6401 41.88769, -87.64035 41.88782, -87.64054 41.8879, -87.6407 41.88793, -87.64076 41.88828, -87.64085 41.88859, -87.63996 41.88847, -87.63999 41.88906, -87.63971 41.88905, -87.63961 41.88882, -87.63954 41.8887, -87.63918 41.88675, -87.63873 41.8864, -87.63841 41.88588, -87.6383 41.88573, -87.63812 41.88522, -87.63825 41.88449, -87.63845 41.88321, -87.63855 41.88231, -87.63858 41.88104, -87.63855 41.88061, -87.63836 41.87935, -87.63787 41.87794, -87.63778 41.87751, -87.63752 41.87751, -87.63752 41.87731, -87.63775 41.87728, -87.6377 41.87687, -87.63784 41.87684, -87.63778 41.87676, -87.64102 41.87676)))"
}
}
}
]
}
]
}
}
See also https://developer.here.com/blog/how-to-get-the-shape-of-an-area-using-the-here-geocoder-api
In MongoDB, I have a collection of documents, each with an array of records, that I want to group by runs of the same tag while preserving the natural order:
{
"day": "2019-01-07",
"records": [
{
"tag": "ch",
"unixTime": ISODate("2019-01-07T09:06:56Z"),
"score": 1
},
{
"tag": "u",
"unixTime": ISODate("2019-01-07T09:07:06Z"),
"score": 0
},
{
"tag": "ou",
"unixTime": ISODate("2019-01-07T09:07:06Z"),
"score": 0
},
{
"tag": "u",
"unixTime": ISODate("2019-01-07T09:07:20Z"),
"score": 0
},
{
"tag": "u",
"unixTime": ISODate("2019-01-07T09:07:37Z"),
"score": 1
}
]
}
I want to group (and aggregate) the records by consecutive runs of the same tag, NOT simply by unique tags.
Desired output:
{
"day": "2019-01-07",
"records": [
{
"tag": "ch",
"unixTime": [ISODate("2019-01-07T09:06:56Z")],
"score": 1,
"nbRecords": 1
},
{
"tag": "u",
"unixTime": [ISODate("2019-01-07T09:07:06Z")],
"score": 0,
"nbRecords": 1
},
{
"tag": "ou",
"unixTime": [ISODate("2019-01-07T09:07:06Z")],
"score": 0
},
{
"tag": "u",
"unixTime": [ISODate("2019-01-07T09:07:20Z"), ISODate("2019-01-07T09:07:37Z")],
"score": 1,
"nbRecords": 2
}
]
}
Groupby
It seems that the $group aggregation stage in MongoDB groups by the unique values of the field, discarding the natural order of the array:
db.coll.aggregate([
    {"$unwind": "$records"},
    {"$group": {
        "_id": {
            "tag": "$records.tag",
            "day": "$day"
        },
        ...
    }}
])
Returns
{
"day": "2019-01-07",
"records": [
{
"tag": "ch",
"unixTime": [ISODate("2019-01-07T09:06:56Z")],
"score": 1,
"nbRecords": 1
},
{
"tag": "u",
"unixTime": [ISODate("2019-01-07T09:07:06Z"), ISODate("2019-01-07T09:07:20Z"), ISODate("2019-01-07T09:07:37Z")],
"score": 2,
"nbRecords": 3
},
{
"tag": "ou",
"unixTime": [ISODate("2019-01-07T09:07:06Z")],
"score": 0
}
]
}
Map/reduce
As I'm currently using the pymongo driver, I implemented the solution back in Python using itertools.groupby, which, being generator-based, performs the grouping while respecting the natural order. But I ran into server timeouts (a cursor.NotFound error) because the processing takes an insanely long time.
Any idea how to use MongoDB's map-reduce directly to perform the equivalent of itertools.groupby() in Python?
Help would be much appreciated: I'm using pymongo driver 3.8 and MongoDB 4.0.
Run through the array of records adding a new integer index that increments whenever the group-by target changes, then use the Mongo grouping operation on that index.
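A minimal Python sketch of that idea, assuming doc is one document from the collection and runIndex is a hypothetical name for the added field:
run_index = 0
previous_tag = None
for record in doc["records"]:
    if record["tag"] != previous_tag:
        run_index += 1  # a new run starts whenever the tag changes
        previous_tag = record["tag"]
    record["runIndex"] = run_index
# Grouping on ("day", "runIndex") instead of ("day", "tag") now keeps
# consecutive runs separate and preserves their natural order.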
Following @Ale's recommendation, and without any tips on how to do that inside MongoDB, I switched back to a Python implementation, which solved the cursor.NotFound problem.
I imagine it could be done inside MongoDB, but this is working:
import itertools

for r in db.coll.find():
    session = []
    # itertools.groupby only merges adjacent items, so consecutive
    # records with the same tag form one group and order is preserved
    for tag, time_score in itertools.groupby(r["records"], key=lambda x: x["tag"]):
        time_score = list(time_score)
        session.append({
            "tag": tag,
            "start": time_score[0]["unixTime"],
            "end": time_score[-1]["unixTime"],
            "ca": sum(n["score"] for n in time_score),
            "nb_records": len(time_score),
        })
    db.coll.update_one(
        {"_id": r["_id"]},
        {
            "$unset": {"records": ""},
            "$set": {"sessions": session},
        },
    )