Not able to query mongo repository with parameter - spring-data-mongodb

I'm trying to query the following object from MongoDB:
[
  {
    "id": "6b3a9814c1990a0578988d9e",
    "details": {
      "buyerId": "5bd450ed0307fa0a3a904376",
      "offerId": "1",
      "productId": "5b3a9814c1880a0578988d6a",
      "productTitle": "Watch",
      "amount": 50,
      "status": "Open"
    }
  }
]
I'm using spring-boot-starter-data-mongodb, so first I tried it the standard way.
Here is what's in my repository:
public interface OfferRepository extends MongoRepository<Offer, String> {
    List<Offer> findOffersByDetailsBuyerId(String buyerId);
}
I've also tried a custom query:
@Query(value = "{'details.buyerId' : ?0 }")
List<Offer> findOfferByDetails_BuyerId(@Param("buyerId") String buyerId);
Both come back with an empty array, but if I hard-code the buyerId in the query, I get the results I want.
Also, when I debug it, I see the param, but with double quotes around it.
(Screenshot from MongoDB Compass.)

In the Mongo UI you should check the datatype of the ID field. It is an ObjectId, not a String, so you need to pass org.bson.types.ObjectId instead of String in the repository method.
List<Offer> findByDetailsBuyerId(ObjectId buyerId);
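With that signature, convert the incoming String before calling the repository. A minimal sketch (the repository variable name is assumed):

import org.bson.types.ObjectId;

// buyerId still arrives as a plain String, e.g. from a request parameter
List<Offer> offers = offerRepository.findByDetailsBuyerId(new ObjectId(buyerId));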

ANSWER
@Query("{ 'details.buyerId' : ?0 }")
List<OfferOverview> findOffersByDetailsBuyerId(String buyerId);
This works, but my main issue was that the buyerId was being passed with literal double quotes around it.
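If the parameter itself carries literal double quotes (as seen in the debugger), stripping them before the lookup is one fix. A hedged sketch, using a hypothetical rawBuyerId variable:

// rawBuyerId arrived as "\"5bd450ed0307fa0a3a904376\"" (quote characters included)
String buyerId = rawBuyerId.replace("\"", "");
List<OfferOverview> offers = offerRepository.findOffersByDetailsBuyerId(buyerId);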

Related

Solr query for child documents and return parents and filtered children

I'm having trouble creating a Solr query to be able to pull out the right documents, and am starting to wonder if what I am trying to do is even possible.
I'm currently on Solr 8.9, using a managed schema, and every field is defined via a wildcard dynamic field.
First, here is what the document looks like
(names changed to redact internal business language):
{
  "id": "COUNTY:1",
  "county_name_s": "Hertfordshire",
  "coordinates_s": {
    "id": "COUNTY:1COORDINATES:!",
    "lat_s": "54.238948",
    "long_s": "54.238948"
  },
  "cities": [
    {
      "id": "COUNTY:1CITY:1",
      "city_name_s": "St Albans",
      "size": {
        "id": "COUNTY:1CITY:1SIZE:1",
        "sq_ft_s": "100",
        "sq_meters_s": "5879"
      }
    },
    {
      "id": "COUNTY:1CITY:2",
      "city_name_s": "Watford",
      "size": {
        "id": "COUNTY:1CITY:2SIZE:2",
        "sq_ft_s": "150",
        "sq_meters_s": "10000"
      }
    }
  ],
  "mayor": {
    "title_s": "Mrs.",
    "first_name_s": "Sheila",
    "last_name_s": "Smith"
  }
}
And what I want to return:
{
  "id": "COUNTY:1",
  "county_name_s": "Hertfordshire",
  "coordinates": {
    "id": "COUNTY:1COORDINATES:!",
    "lat_s": "54.238948",
    "long_s": "54.238948"
  },
  "cities": [
    {
      "id": "COUNTY:1CITY:1",
      "city_name_s": "St Albans",
      "size": {
        "id": "COUNTY:1CITY:1SIZE:1",
        "sq_ft_s": "100",
        "sq_meters_s": "5879"
      }
    }
  ],
  "mayor": {
    "title_s": "Mrs.",
    "first_name_s": "Sheila",
    "last_name_s": "Smith"
  }
}
Basically, my goal is to return more or less the entire document, but with one of the cities filtered out. For example, the condition on the city would be city_name_s:"St Albans". In other words, I want the parent and all its children, but if a child is in the cities array, its city_name_s field must equal my defined value, or that child should be dropped.
Things I've tried:
I've basically tried two approaches here:
I've tried to play around with {!child} and {!parent} to get the result I want. Currently I can only get results at the city level, or the entire document at the county level as if the filter was not there.
I've tried changing values for the childFilter option, with things like:
city_name_s:"St Albans" OR (*:* NOT city_name_s:[* TO *]) to try to say 'if the field exists, it should be this'.
Anyhow, I'm starting to run out of ideas; I've been hacking away at this for the past couple of days and haven't really got any closer.
Thanks in advance for any help; I'm bashing my head against the wall, so any suggestions are more than welcome :)
I had a similar issue in Solr 9.0.0 and this solved it for me: Apache Solr Filter on Child Documents.
In your case, just add fl=*,[child childFilter=city_name_s:"St Albans"] to your query.
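For anyone querying from Java, the same fl parameter can be set with SolrJ. A sketch, assuming a local Solr at the default port and a collection named counties:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class ChildFilterQuery {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient client =
                new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
            SolrQuery query = new SolrQuery("county_name_s:Hertfordshire");
            // Return all stored fields, plus only the child documents matching the filter.
            query.setFields("*", "[child childFilter=city_name_s:\"St Albans\"]");
            QueryResponse response = client.query("counties", query);
            response.getResults().forEach(System.out::println);
        }
    }
}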

BQ load JSON File with Array of Array

I'm trying to load a JSON file where some of the arrays are empty.
{"house_account_payable":"0.00","house_account_receivable":"0.00","gift_sales_payable":"0.00","gift_sales_receivable":"0.00","store_credit_sales_payable":"0.00","percentage_row":null,"sales_per_period":[["02:00AM - 02:59AM",{"amount":0,"qty":0}],["03:00AM - 03:59AM",{"amount":0,"qty":0}]],"revenue_centers":[],"tax_breakdowns":[]}
This is giving the error:
Error while reading table: test2, error message: Failed to parse JSON: No object found when new array is started.; BeginArray returned false; Parser terminated before end of string
Could somebody help me on this?
Are you trying to load data from your local machine or from GCS? Remember to export as JSONL (newline-delimited JSON):
{"open_orders_ids": []}
{"unpaid_orders_ids": []}
The output: (screenshot omitted.)
Take a look at the documentation about nested and repeated columns.
EDIT:
Your JSON schema should look like this:
{
  "items": [
    {
      "house_account_payable": "0.00",
      "house_account_receivable": "0.00",
      "gift_sales_payable": "0.00",
      "gift_sales_receivable": "0.00",
      "store_credit_sales_payable": "0.00",
      "percentage_row": "",
      "sales_per_period": [
        {
          "AM02_00_AM02_59": {
            "amount": "0",
            "qty": "0"
          }
        },
        {
          "AM03_00_AM03_59": {
            "amount": "0",
            "qty": "0"
          }
        }
      ]
    }
  ]
}
Following Felipe Hoffa's post, run the following commands:
jq -c .items[] <FILENAME>.json > <FILENAME>.jq.json
bq load --source_format NEWLINE_DELIMITED_JSON --autodetect <DATASET_ID>.<TABLENAME> <FILENAME>.jq.json
The schema: (screenshot omitted.)
Let me know if this is what you are looking for.
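If you would rather load programmatically than via the CLI, the google-cloud-bigquery Java client can run the same load. A hedged sketch (dataset, table, and file names are the placeholders from the commands above):

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FormatOptions;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.TableDataWriteChannel;
import com.google.cloud.bigquery.TableId;
import com.google.cloud.bigquery.WriteChannelConfiguration;
import java.io.OutputStream;
import java.nio.channels.Channels;
import java.nio.file.Files;
import java.nio.file.Paths;

public class LoadNdjson {
    public static void main(String[] args) throws Exception {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
        WriteChannelConfiguration config =
            WriteChannelConfiguration.newBuilder(TableId.of("DATASET_ID", "TABLENAME"))
                .setFormatOptions(FormatOptions.json()) // NEWLINE_DELIMITED_JSON
                .setAutodetect(true)
                .build();
        // Stream the newline-delimited file into a load job.
        TableDataWriteChannel writer = bigquery.writer(config);
        try (OutputStream stream = Channels.newOutputStream(writer)) {
            Files.copy(Paths.get("FILENAME.jq.json"), stream);
        }
        Job job = writer.getJob().waitFor(); // wait for the load job to finish
        System.out.println(job.getStatus());
    }
}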
There's no problem with the null arrays.
The problem lies in this shorter JSON:
{"sales_per_period":[["02:00AM - 02:59AM",{"amount":0,"qty":0}],["03:00AM - 03:59AM",{"amount":0,"qty":0}]]}
The arrays there hold elements of different types, and to bring it into a structured table, a different schema is needed.
For example:
{"sales_per_period":[{"a":"02:00AM - 02:59AM","b":{"amount":0,"qty":0}},{"a":"03:00AM - 03:59AM","b":{"amount":0,"qty":0}}]}
Now this loads easily into BigQuery:
bq load --source_format=NEWLINE_DELIMITED_JSON --autodetect temp.short delete.short.json
Can you change this source JSON easily outside BigQuery? Otherwise load it raw into BigQuery, and parse it with a JS UDF inside BigQuery.
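If you can change it outside BigQuery, a small Jackson program can do the restructuring. A sketch, keeping the a/b field names from the example above (file name is a placeholder):

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.ObjectNode;
import java.io.File;

public class Restructure {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        ObjectNode root = (ObjectNode) mapper.readTree(new File("input.json"));
        ArrayNode fixed = mapper.createArrayNode();
        // Each element is a two-item array: ["02:00AM - 02:59AM", {"amount":0,"qty":0}]
        for (JsonNode pair : root.get("sales_per_period")) {
            ObjectNode entry = mapper.createObjectNode();
            entry.set("a", pair.get(0)); // the time-range label
            entry.set("b", pair.get(1)); // the {amount, qty} object
            fixed.add(entry);
        }
        root.set("sales_per_period", fixed); // replace the mixed-type array
        System.out.println(mapper.writeValueAsString(root));
    }
}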

How to use a regular expression in a Redash Mongo JSON query

I have MongoDB as a data source. I want a query in Redash that is the equivalent of the following:
db.collection.find({"templateId": /XYZ$/})
This query returns all documents from the collection where the key templateId ends with the string XYZ. How can I do the same in Redash JSON?
Also, please help with using $exists in Redash.
I use this example with Redash:
{
  "collection": "Users",
  "query": {
    "Phone": {
      "$regex": "1234$",
      "$options": "i"
    }
  }
}
The i option makes it case-insensitive.
Got here from googling the same question. It seems to me that the following pipeline step works:
{"$match": {"templateId": {"$regex": "XYZ$"}}}

Generate JSON schema

I'm trying to set up a Microsoft Flow. In short, I need to take JSON data retrieved from a device and parse it so that I can reference it in the Flow steps below. In order to parse it, I need to provide the JSON schema to Flow. Microsoft Flow has an option to generate it from a sample payload (the results returned from the API call), but it's not generating it correctly. I'm hoping someone can help me; I need the correct JSON schema.
The data returned from the API:
[
  null,
  [
    {
      "user_id": 2003,
      "user_label": "Test1"
    },
    {
      "user_id": 2004,
      "user_label": "Test2"
    }
  ]
]
Schema generated in Flow from the above sample payload:
{
  "type": "array",
  "items": {}
}
I then tried to generate the schema from just the data. That seemed to work, but when the Flow runs, I get a JSON validation error.
I tried generating from just the data, like this:
{
  "user_id": 2003,
  "user_label": "Test1"
}
This generated the schema like this:
{
  "type": "object",
  "properties": {
    "user_id": {
      "type": "number"
    },
    "user_label": {
      "type": "string"
    }
  }
}
So you have two things going on: the nested object array, and the null.
You'll need another Parse JSON after the first Parse JSON. And you'll want to filter out the null before the second Parse JSON.
It took me a while to figure out, but I hope this helps.
Start by adding the Parse JSON step to whatever step is outputting the JSON.
Now, filter the array; make sure you use an 'Expression' when comparing with null.
Add the second Parse JSON. You'll notice that you won't have the option to select the output "Item" of the Filter array step, so select 'Parse JSON' - Item for now (we will change this to use the output of the Filter array step in a moment).
The step should automatically change to an 'Apply to each'. In Parse JSON 2, generate the schema with:
[
  {
    "user_id": 2003,
    "user_label": "Test1"
  },
  {
    "user_id": 2004,
    "user_label": "Test2"
  }
]
Then, modify the 'Select an output from previous steps' field and change it (from the Body of the Parse JSON step) to the Body of the Filter array step.
Finally, add an action after Parse JSON 2 and select one of the fields in Parse JSON 2; this will automatically change that step to a nested Apply to each.
You should end up with something like this: (screenshot omitted.)
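For reference, the schema generated from that array sample should come out roughly like this (a sketch; Flow may infer integer or number for user_id):

{
  "type": "array",
  "items": {
    "type": "object",
    "properties": {
      "user_id": {
        "type": "integer"
      },
      "user_label": {
        "type": "string"
      }
    }
  }
}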

Cloudant: update a document, replacing existing data

I have the following document:
{
  "_id": "9036472948305957379",
  "_rev": "162de87a696361533791aa7",
  "firstname": "xyz",
  "lastname": "abc"
}
Now I want to update the above document to the following:
{
  "_id": "9036472948305957379",
  "_rev": "162de87a696361533791aa7",
  "name": "xyz abc"
}
If I do doc['name'] = "xyz abc", it doesn't remove the firstname and lastname attributes. How do I achieve that?
You need to explicitly remove the firstname and lastname properties from your local copy of the document before saving it back in the database.
If I understand your issue correctly, you are currently sending the following document body (implicitly or explicitly) to the database when you initiate the update operation:
{
  "_id": "9036472948305957379",
  "_rev": "162de87a696361533791aa7",
  "firstname": "xyz",
  "lastname": "abc",
  "name": "xyz abc"
}
However, your payload needs to look as follows:
{
  "_id": "9036472948305957379",
  "_rev": "162de87a696361533791aa7",
  "name": "xyz abc"
}
If you are using the python-cloudant library, take a look at the field_set method at http://python-cloudant.readthedocs.io/en/latest/document.html:
static field_set(doc, field, value)
Sets or replaces a value for a field in a locally cached Document object. To remove the field set the value to None.
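If you are not using a client library, the same full-replacement semantics apply over Cloudant's HTTP API: PUT a body containing only the fields you want to keep, plus the current _rev. A sketch with java.net.http (account, database, and credentials are placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ReplaceDoc {
    public static void main(String[] args) throws Exception {
        // The body omits firstname/lastname, so the stored document loses them.
        String body = "{\"_rev\":\"162de87a696361533791aa7\",\"name\":\"xyz abc\"}";
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://ACCOUNT.cloudant.com/DATABASE/9036472948305957379"))
            .header("Content-Type", "application/json")
            .header("Authorization", "Basic CREDENTIALS")
            .PUT(HttpRequest.BodyPublishers.ofString(body))
            .build();
        HttpResponse<String> response =
            HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}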
