Postgres JSONB Query and Index on Nested String Array

I am having some trouble wrapping my head around how to formulate queries and provide proper indexes for the following situation. I have customer entities represented in JSON like this (only relevant properties are retained):
{
  "id": "50000",
  "address": [
    {
      "line": [
        "2nd Main Street",
        "123 Harris Plaza"
      ],
      "city": "Boston",
      "state": "Massachusetts",
      "country": "US"
    },
    {
      "line": [
        "1st Av."
      ],
      "city": "Jamestown",
      "state": "Massachusetts",
      "country": "US"
    }
  ]
}
The customers are stored in the following customer table:
CREATE TABLE Customer (
id BIGSERIAL PRIMARY KEY,
resource JSONB
);
I manage to do simple queries on the resource column, e.g. a projection query like this works (retrieve all lower-case address lines for cities starting with "bo"):
SELECT LOWER(jsonb_array_elements_text(jsonb_array_elements(c.resource #> '{address}') #> '{line}'))
FROM Customer c,
     jsonb_array_elements(c.resource #> '{address}') a
WHERE LOWER(a->>'city') LIKE 'bo%';
I have trouble doing the following: my goal is to query all customers that have at least one address line beginning with "12". Case insensitivity is a requirement for my use case. The example customer would match my query, as the first address object has an address line starting with "12". Please note that "line" is an Array of JSON Strings, not complex objects. So far the closest thing I could come up with is this:
SELECT c.resource
FROM Customer c,
     jsonb_array_elements(c.resource #> '{address}') a
WHERE a->'line' ?| array['123 Harris Plaza'];
Obviously this is not a case-insensitive LIKE query. Any help/pointers on how to formulate both query and accompanying GIN index are greatly appreciated. My first query already selects all address lines as text, so maybe this could be used in a GIN index?
I'm using Postgres 9.5, but am happy to upgrade if this can only be achieved in more recent Postgres versions.

While GIN indexes have machinery to support prefix matching, this machinery is only hooked up for tsvectors. array_ops does not have it hooked up, nor does jsonb_ops or jsonb_path_ops. So unless you want to create a new operator class/family (or normalize your data into separate tables) you will have to shoe-horn your data into a tsvector.
Here is a crude way to do that, which doesn't account for the possibility that an address line might contain literal single quotes or perhaps other meaningful characters:
create function addressline_tsvector(jsonb) returns tsvector immutable language SQL as $$
select string_agg('''' || lower(value) || '''', ' ')::tsvector
from jsonb_array_elements($1->'address') a(a),
jsonb_array_elements_text(a->'line')
$$;
create index on customer using gin (addressline_tsvector(resource));
select * from customer where addressline_tsvector(resource) @@ lower('''2nd Main'':*')::tsquery;
Given that your example table only has one row, the index will probably not actually be used unless you set enable_seqscan = off first.
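For completeness (this is not part of the answer above, just a reference point): without any index, the same predicate can be written directly against the unnested lines with a case-insensitive LIKE, using the table and column names from the question:

-- unindexed check: does any address line start with "12" (case-insensitive)?
SELECT c.resource
FROM customer c
WHERE EXISTS (
    SELECT 1
    FROM jsonb_array_elements(c.resource -> 'address') addr,      -- one row per address object
         jsonb_array_elements_text(addr -> 'line') line           -- one row per line string
    WHERE LOWER(line) LIKE '12%'
);

This always scans the whole table, so it is only useful for small data sets or as a cross-check against the indexed tsvector query.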

Related

explode() of pyspark.sql is not working as expected if even one of the records does not follow the schema

I have a JSON file which I would like to convert (to CSV, say) by expanding one of the fields into columns.
I used explode() for this, but it gives an error if even one of the many records does not follow the schema.
Input File:
{"place": "KA",
"id": "200",
"swversion": "v.002",
"events":[ {"time": "2020-05-23T22:34:32.770Z", "eid": 24, "app": "testing", "state": 0} ]}
{"place": "AP",
"id": "100",
"swversion": "v.001",
"events":[[]]}
In the above, I want to expand the fields of "events" so that they become columns.
In the ideal case, "events" is an array of struct type.
Expected Output File Columns:
*place, id, swversion, time, eid, app, state*
For this, I have used explode() from pyspark.sql, but because the second record in the input file does not follow the schema where "events" is an array of struct type, explode() fails with an error.
The code I have used to explode:
df = spark.read.json("InputFile")
ndf = df.withColumn("event", explode("events")).drop("events")
ndf.select("place", "id", "swversion", "event.*")
The last line fails because of the second record in my input file.
It should not be too difficult for explode() to handle this, I believe.
Can you suggest how to avoid the error
Cannot expand star types
If I change "events":[[]] to "events":[{}], explode() works fine since it is again an array of StructType, but since I have no control over the input data, I need to handle this.
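One possible workaround (my own sketch, not from this thread) is to pass an explicit schema instead of letting Spark infer it, so that "events" is always read as an array of structs. In the default PERMISSIVE mode, a record such as "events":[[]] then ends up with a null events column (exact behaviour varies a little between Spark versions) instead of breaking the schema, and explode_outer keeps such rows:

from pyspark.sql import SparkSession
from pyspark.sql.functions import explode_outer
from pyspark.sql.types import ArrayType, IntegerType, StringType, StructField, StructType

spark = SparkSession.builder.getOrCreate()

# Explicit schema: "events" is forced to array<struct<...>>.
event_type = StructType([
    StructField("time", StringType()),
    StructField("eid", IntegerType()),
    StructField("app", StringType()),
    StructField("state", IntegerType()),
])
schema = StructType([
    StructField("place", StringType()),
    StructField("id", StringType()),
    StructField("swversion", StringType()),
    StructField("events", ArrayType(event_type)),
])

df = spark.read.schema(schema).json("InputFile")

# explode_outer (unlike explode) keeps rows whose events array is null or empty;
# their time/eid/app/state columns simply come out as null.
ndf = df.withColumn("event", explode_outer("events")).drop("events")
ndf.select("place", "id", "swversion", "event.*").show()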

DynamoDB: How to design and query multiple fields

I have an item like this
{
  "date": "2019-10-05",
  "id": "2",
  "serviceId": "1",
  "time": {
    "endTime": "1300",
    "startTime": "1330"
  }
}
Right now the way I design this is like so:
primary key --> id
Global secondary index --> primary key : serviceId
--> sort key : date
With this design, as of now:
* I can query by id
* I can query by serviceId and a range of dates
I'd like to be able to query such that I can retrieve all items where
* serviceId = 1 AND
* date = "yyyy-mm-dd" AND
* time = {
"endTime": "1300",
"startTime": "1330"
}
I'd still like to be able to query based on the 2 previous conditions (query by id, and query by serviceId and date range).
Is there a way to do this? One way I was thinking of is to create a new field and use it as an index, e.g. combine all the data so that
combinedField: "1_yyyy-mm-dd_1300_1330"
then make that the primary key of a global secondary index, and just query it like that.
I'm just not sure whether this is the right way to do it, or if there's a better, best-practice way.
Thank you
You could either use FilterExpression or composite sort keys.
FilterExpression
Here you could retrieve the items from the GSI you described by specifying 'serviceId' and 'date' in the key condition, and then filtering on time.startTime and time.endTime in the 'FilterExpression'. The sample Python code using boto3 would be as follows:
from boto3.dynamodb.conditions import Key, Attr

response = table.query(
    KeyConditionExpression=Key('serviceId').eq('1') & Key('date').eq('2019-10-05'),
    FilterExpression=Attr('time.endTime').eq('1300') & Attr('time.startTime').eq('1330')
)
The drawback with this method is that all items matching the key condition will be read, and only then are the results filtered. So you will be charged for everything the key condition matches.
E.g. if 1000 items have 'serviceId' as 1 and 'date' as '2019-10-05' but only 10 items have 'time.startTime' as 1330, you will still be charged for reading the 1000 items even though only 10 items are returned after the FilterExpression is applied.
Composite Sort Key
I believe this is the method you mentioned in the question. Here you will need to add an attribute of the form
'yyyy-mm-dd_startTime_endTime'
and use this as the sort key in your GSI. Now your items will look like this:
{ "date": "2019-10-05",
"id": "2",
"serviceId": "1",
"time": {
"endTime": "1300",
"startTime": "1330"
}
"date_time":"2019-10-05_1330_1300"
}
Your GSI will have 'serviceId' as partition key and 'date_time' as sort key. Now you will be able to query date range as:
response = table.query(
    KeyConditionExpression=Key('serviceId').eq('1') & Key('date_time').between('2019-07-05', '2019-10-05')
)
For the query where date, start and end time are specified, you can query as:
response = table.query(
    KeyConditionExpression=Key('serviceId').eq('1') & Key('date_time').eq('2019-10-05_1330_1300')
)
This approach won't work if you need a range of dates together with start and end times, i.e. you won't be able to query for items in a particular date range that contain a specific start and end time. In that case you would have to use a FilterExpression.
Yes, the solution you suggested (add a new field which is the combination of the fields and defined a GSI on it) is the standard way to achieve that. You need to make sure that the character you use for concatenation is unique, i.e., it cannot appear in any of the individual fields you combine.

Postgres SQL: add key to all JSON array elements

In my database I have a table with a column called items; this column has jsonb type and almost every record has data like this one:
{"items": [{"id": "item-id", "sku": "some-sku", "quantity": 1, "master_sku": "some-master-sku"}]}
I need to create a migration which adds type to every item, so it should look like this:
{"items": [{"id": "item-id", "sku": "some-sku", "quantity": 1, "master_sku": "some-master-sku", "type": "product"}]}
I must add type to every record, and every record looks the same.
The problem is that I have about a million records in my table, and I cannot iterate over every item and add the new type because it could take too long and my deploy script might crash.
How can I do that in the simplest way?
As long as the format/content is always more or less the same, the fastest way to do it would probably be as a string operation, something like this:
UPDATE your_table
SET your_field = REGEXP_REPLACE(your_field::TEXT, '\}(.+?)', ', "type": "product"}\1', 'g')::JSONB
WHERE your_field IS NOT NULL;
Example: https://www.db-fiddle.com/f/fsHoFKz9szpmV5aF4dt7r/1
This just checks for any } character which is followed by something (so we know it's not the final one closing the "items" object), and replaces it with the missing key/value (plus whatever character was following).
Of course this performs practically no validation, such as whether that key/value already exists. Where you want to be between performance and correctness depends on what you know about your data.
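If you would rather stay inside the jsonb machinery (slower, but it only touches well-formed documents), here is a sketch of an alternative, assuming every row stores an "items" array as shown above:

-- rebuild the items array, merging {"type": "product"} into each element
UPDATE your_table
SET your_field = jsonb_set(
        your_field,
        '{items}',
        (SELECT jsonb_agg(elem || '{"type": "product"}')
         FROM jsonb_array_elements(your_field -> 'items') elem)
    )
WHERE your_field ? 'items'
  AND jsonb_array_length(your_field -> 'items') > 0;  -- skip empty arrays, jsonb_agg would return NULL

Either way this is still a single full-table UPDATE over a million rows, so run it in a maintenance window or batch it by id ranges if lock time is a concern.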

Load JSON file with arrays/structs and flexible schema into Hive table

Need some help loading a JSON file into a table. Here is an example of some of the JSON objects within the file:
{"asin": "0002000202", "title": "Black Berry, Sweet Juice: On Being Black and White in Canada", "price": 13.88, "imUrl": "http://ecx.images-amazon.com/images/I/51PQAYJ9EDL.jpg", "related": {"also_bought": ["0393333094"], "buy_after_viewing": ["0393333094", "1554685087"]}, "salesRank": {"Books": 3013713}, "categories": [["Books"]]}
{"asin": "0000041696", "title": "Arithmetic 2 A Beka Abeka 1994 Student Book (Traditional Arithmentic Series)", "price": 6.53, "imUrl": "http://ecx.images-amazon.com/images/I/41cGaan-BrL._SL500_.jpg", "related": {"also_viewed": ["B000KOYDUY", "B004GE1B7W", "B008SXBO88", "B001EH7Y02", "B000W7PN62", "B004H3G1X6", "B004WOEIXA", "B000AXWEEM", "0789478722", "B000MN2C56", "1402709269", "B001HHOKG0", "B000Y9TO1S", "1402711441", "0756609356", "0142400106", "1556616465", "0545021383", "B004LDD18A", "B000HZH18C", "1557996563", "B00CZTVUKI", "B001CXK8Y2", "B000QX6KY6"], "buy_after_viewing": ["B000KOYDUY", "B004GE1B7W", "B000LBXGRC", "0439827655"]}, "salesRank": {"Books": 2554321}, "categories": [["Books"]]}
As you can see, the schema varies among objects. Not all attributes are present in every object. There are also structs and arrays.
Here is my create table statement
create table amazon.products_test
(asin string,
title string,
description string,
brand string,
price float,
salesRank struct<category:string, rank:int> ,
imUrl string,
categories array<string>,
related struct<also_bought:string, also_viewed:string, buy_after_viewing:string, bought_together:string>)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe';
My load statement:
load data inpath '/user/amazon/products_test.json'
overwrite into table amazon.products_test;
Here is what I get when I try to query:
hive> select * FROM products_test;
OK
Failed with exception java.io.IOException:org.apache.hadoop.hive.serde2.SerDeException: java.io.IOException: Field name expected
Do I have the right datatypes ?
Is there a better serde ?
Do I need add TBLPROPERTIES or SERDEPROPERTIES ?
I found the answer. As suspected, I needed to use a different SerDe:
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
I saw some forums suggesting that I might need to use this SerDe, but I didn't know how to implement it and add the jars from:
https://github.com/rcongiu/Hive-JSON-Serde
Also, I needed to use a map type, not a struct, for salesRank.
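Putting those two changes together, here is a sketch of the adjusted DDL (my reconstruction rather than something posted in the thread; the jar path is a placeholder, and the array types for categories and the related lists are inferred from the sample records above):

-- register the openx SerDe first (path is a placeholder)
ADD JAR /path/to/json-serde-with-dependencies.jar;

CREATE TABLE amazon.products_test (
  asin        string,
  title       string,
  description string,
  brand       string,
  price       float,
  salesRank   map<string,int>,
  imUrl       string,
  categories  array<array<string>>,
  related     struct<also_bought:array<string>,
                     also_viewed:array<string>,
                     buy_after_viewing:array<string>,
                     bought_together:array<string>>
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe';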

Generalized way to extract JSON from a relational database?

Ok, maybe this is too broad for StackOverflow, but is there a good, generalized way to assemble data in relational tables into hierarchical JSON?
For example, let's say we have a "customers" table and an "orders" table. I want the output to look like this:
{
  "customers": [
    {
      "customerId": 123,
      "name": "Bob",
      "orders": [
        {
          "orderId": 456,
          "product": "chair",
          "price": 100
        },
        {
          "orderId": 789,
          "product": "desk",
          "price": 200
        }
      ]
    },
    {
      "customerId": 999,
      "name": "Fred",
      "orders": []
    }
  ]
}
I'd rather not have to write a lot of procedural code to loop through the main table and fetch orders a few at a time and attach them. It'll be painfully slow.
The database I'm using is MS SQL Server, but I'll need to do the same thing with MySQL soon. I'm using Java and JDBC for access. If either of these databases had some magic way of assembling these records server-side it would be ideal.
How do people migrate from relational databases to JSON databases like MongoDB?
Here is a useful set of functions for converting relational data to JSON and XML and from JSON back to tables: https://www.simple-talk.com/sql/t-sql-programming/consuming-json-strings-in-sql-server/
SQL Server 2016 is finally catching up and adding support for JSON.
The JSON support still does not match other products such as PostgreSQL, e.g. no JSON-specific data type is included. However, several useful T-SQL language elements were added that make working with JSON a breeze.
E.g. in the following Transact-SQL code a text variable containing a JSON string is defined:
DECLARE @json NVARCHAR(4000)
SET @json =
N'{
"info":{
"type":1,
"address":{
"town":"Bristol",
"county":"Avon",
"country":"England"
},
"tags":["Sport", "Water polo"]
},
"type":"Basic"
}'
and then, you can extract values and objects from JSON text using the JSON_VALUE and JSON_QUERY functions:
SELECT
    JSON_VALUE(@json, '$.type') as type,
    JSON_VALUE(@json, '$.info.address.town') as town,
    JSON_QUERY(@json, '$.info.tags') as tags
Furthermore, the OPENJSON function allows you to return elements from a referenced JSON array:
SELECT value
FROM OPENJSON(@json, '$.info.tags')
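OPENJSON can also project JSON properties into typed columns via a WITH clause, which is the piece that turns JSON back into rows and columns; for example, against the same @json variable (the column list here is just illustrative):

SELECT *
FROM OPENJSON(@json, '$.info')
WITH (
    [type] INT           '$.type',
    town   NVARCHAR(50)  '$.address.town',
    tags   NVARCHAR(MAX) '$.tags' AS JSON   -- keep the array as JSON text
);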
Last but not least, there is a FOR JSON clause that can format a SQL result set as JSON text:
SELECT object_id, name
FROM sys.tables
FOR JSON PATH
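Applied to the customers/orders shape from the question, a correlated subquery with FOR JSON PATH nests the orders inside each customer. A sketch only, assuming customers and orders tables whose columns match the example output:

SELECT c.customerId,
       c.name,
       (SELECT o.orderId, o.product, o.price
        FROM orders o
        WHERE o.customerId = c.customerId
        FOR JSON PATH) AS orders            -- nested array per customer
FROM customers c
FOR JSON PATH, ROOT('customers');

Note that customers without orders get the orders property omitted (or null) rather than an empty array; wrapping the subquery in JSON_QUERY(ISNULL(..., '[]')) is a common workaround if you need the exact shape above.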
Some references:
https://learn.microsoft.com/en-us/sql/relational-databases/json/json-data-sql-server
https://learn.microsoft.com/en-us/sql/relational-databases/json/convert-json-data-to-rows-and-columns-with-openjson-sql-server
https://blogs.technet.microsoft.com/dataplatforminsider/2016/01/05/json-in-sql-server-2016-part-1-of-4/
https://www.red-gate.com/simple-talk/sql/learn-sql-server/json-support-in-sql-server-2016/
I think one 'generalized' solution would be as follows:
Create a 'select' query which joins all the required tables to fetch the results into a two-dimensional structure (like a CSV / temporary table, etc.).
If each row of this join is unique, and the MongoDB schema and the columns have a one-to-one mapping, then it's just a matter of importing this CSV/table using the mongoimport command with the required parameters.
But a case like the above, where a given customer ID can have an array of 'orders', needs some computation before mongoimport.
You will have to write a program which can 'vertically merge' the orders for a given customer ID (see the sketch below). For a small data set, a simple Java program will work, but for larger sets, parallel processing with Spark can do the job.
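As a sketch of that 'vertical merge' step in Spark (my illustration, using the customers/orders names from the question and placeholder JDBC settings), grouping the orders per customer with collect_list produces one JSON document per line, which mongoimport can load directly:

from pyspark.sql import SparkSession
from pyspark.sql.functions import collect_list, struct

spark = SparkSession.builder.getOrCreate()

jdbc_url = "jdbc:sqlserver://..."                             # placeholder connection string
props = {"user": "...", "password": "...", "driver": "..."}   # placeholder credentials/driver

customers = spark.read.jdbc(jdbc_url, "customers", properties=props)
orders = spark.read.jdbc(jdbc_url, "orders", properties=props)

# one row per customer, with its orders collected into an array of structs
nested = (customers.join(orders, "customerId", "left")
          .groupBy("customerId", "name")
          .agg(collect_list(struct("orderId", "product", "price")).alias("orders")))

# customers with no orders end up with a single all-null struct in the array,
# so filter or post-process those rows if you need a truly empty array
nested.write.json("customers_json")   # one JSON document per line, ready for mongoimport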
SQL Server 2016 now supports reading JSON in much the same way as it has supported XML for many years, using OPENJSON to query directly; the JSON itself is stored in ordinary NVARCHAR columns rather than a dedicated JSON data type.
There is no generalized way because SQL Server doesn’t support JSON as its datatype. You’ll have to create your own “generalized way” for this.
Check out this article. There are good examples there of how to transform SQL Server data into JSON format.
https://www.simple-talk.com/blogs/2013/03/26/sql-server-json-to-table-and-table-to-json/
