I have the following schema, where spending is an array.
{
"mappings": {
"entityName": {
"dynamic": "false",
"properties": {
"id": { "type": "string", "index": "not_analyzed" },
"spending": {
"dynamic": "false",
"type": "object",
"properties": {
"start": { "type": "date", "index": "not_analyzed" },
"end": { "type": "date", "index": "not_analyzed" },
"amount": { "type": "float", "index": "not_analyzed" }
}
}
}
}
}
}
Everything works great until I try to access spending.amount in a Groovy map script by index with doc['spending.amount'].values[i] - it returns a different element, because doc['spending.amount'].values is actually sorted.
So, the question is: how do I access the original (unsorted) array? I looked into this and this, but whenever I used ctx._source all I got was
No such property: ctx for class: 0c55d8cb3fce09491241ef9d60297789e92dee68
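For reference, a minimal sketch of reading the array from _source via a script field instead of doc values, on the assumption that _source is exposed in that context (ctx appears to be defined only in update scripts, which would explain the error above):
{
  "query": { "match_all": {} },
  "script_fields": {
    "spending_amounts": {
      "script": "_source.spending.collect { it.amount }",
      "lang": "groovy"
    }
  }
}
The exact script syntax differs between versions (plain string vs. inline), and whether _source is reachable from a scripted metric map script may also depend on the version, so this is only the direction, not a verified fix.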
Thanks.
I have a JSON array of arbitrary length. Each item in the array is a nested block of JSON objects; they all have the same properties but different values.
I need a JSON schema that checks whether the last block in the array has the values defined in the schema.
How should the schema be defined so that it only considers the last block in the array and ignores all the blocks before it?
My current solution successfully validates the JSON objects if there is only one block in the array. As soon as there are more blocks, it fails because the other blocks are not valid against my schema - which is, of course, the expected behaviour.
In my example, the JSON array contains two nested blocks of JSON objects. They differ in the following properties:
event.action = "[load|button]"
event.label = "[journey:device-only|submit,journey:device-only]"
type = "[page|track]"
An example of my data:
[
{
"page": {
"path": "order/checkout/summary",
"language": "en"
},
"cart": {
"ordercase": "neworder",
"product_list": [
{
"name": "Apple iPhone 14 Plus",
"quantity": 1,
"price": 1000
}
]
},
"event": {
"action": "load",
"label": "journey:device-only"
},
"type": "page"
},
{
"page": {
"path": "order/checkout/summary",
"language": "en"
},
"cart": {
"ordercase": "neworder",
"product_list": [
{
"name": "Apple iPhone 14 Plus",
"quantity": 1,
"price": 1000
}
]
},
"event": {
"action": "button",
"label": "submit,journey:device-only",
},
"type": "track"
}
]
And here is the schema I use, which works fine for the second block if it were the only one in the array:
{
"type": "array",
"$schema": "http://json-schema.org/draft-07/schema#",
"items": {
"type": "object",
"required": ["event", "page", "type"],
"properties": {
"page": {
"type": "object",
"properties": {
"path": {
"const": "order/checkout/summary"
},
"language": {
"enum": ["de", "fr", "it", "en"]
}
},
"required": ["path", "language"]
},
"event": {
"type": "object",
"additionalProperties": false,
"properties": {
"action": {
"const": "button"
},
"label": {
"type": "string",
"pattern": "^[-_:, a-z0-9]*$",
"allOf": [
{
"type": "string",
"pattern": "^\\S*(?:(submit,|,submit))\\S*$"
},
{
"type": "string",
"pattern": "^\\S*(journey:(?:(device-only|device-plus)))\\S*$"
}
]
}
},
"required": ["action", "label"]
},
"type": {
"enum": ["track", "string"]
}
}
}
}
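For reference, draft-07 has no keyword that targets only the last element of an array. One possible workaround - a sketch, assuming the last block is always the one with "type": "track" - is to make the item schema conditional with if/then, so that the earlier blocks pass trivially:
{
  "type": "array",
  "$schema": "http://json-schema.org/draft-07/schema#",
  "items": {
    "if": {
      "properties": { "type": { "const": "track" } },
      "required": ["type"]
    },
    "then": {
      "type": "object",
      "required": ["event", "page", "type"],
      "properties": { ... }
    },
    "else": true
  }
}
where the properties block under "then" is the same as in the schema above. This only works if the discriminating value ("track") never appears in the earlier blocks.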
I need to save parsed JSON data to Cosmos DB. The HTTP trigger works as it should, and so does the parsing, but I'm getting Partition key [my_dynamic_key_value] is invalid.
Did anyone have a similar issue?
I have found this article, but I'm still getting the same error.
Thanks
EDIT 1
This is the flow for adding an item to the DB.
Schema:
{
"type": "array",
"items": {
"type": "object",
"properties": {
"id": {
"type": "string"
},
"Groups": {
"type": "array",
"items": {
"type": "string"
}
},
"JobName": {
"type": "string"
},
"Link": {
"type": "string"
},
"MinSalary": {
"type": "string"
},
"MaxSalary": {
"type": "string"
},
"Hours": {
"type": "string"
},
"WorkPattern": {
"type": "string"
},
"Details": {
"type": "array",
"items": {
"type": "object",
"properties": {
"Name": {
"type": "string"
},
"Detail": {
"type": "string"
}
},
"required": [
"Name",
"Detail"
]
}
}
},
"required": [
"id",
"Groups",
"JobName",
"Link",
"MinSalary",
"MaxSalary",
"Hours",
"WorkPattern",
"Details"
]
}
}
Here is a response:
{
"code": "BadRequest",
"message": "Partition key [1bb2d44f-a066-4fa8-8a78-0cdcea1a756c] is invalid.\r\nActivityId: 345f9a99-534b-40cb-9dc0-9863dc8c90f5, \r\nRequestStartTime: 2020-04-28T08:04:46.8249255Z, RequestEndTime: 2020-04-28T08:04:46.8249255Z, Number of regions attempted:1\r\n, Microsoft.Azure.Documents.Common/2.10.0"
}
You need to set the partition key value in double quotes. Refer to the sample screenshot below.
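In other words, the partition key value has to be passed as a JSON string, i.e. with literal double quotes around it. Using the id from the error message above as an example, the partition key value should look like
"1bb2d44f-a066-4fa8-8a78-0cdcea1a756c"
rather than
1bb2d44f-a066-4fa8-8a78-0cdcea1a756c
If the value comes from dynamic content, the quotes still have to be typed around the expression in the partition key field.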
I'm new to working with Elasticsearch and I'm trying to make a simple query work. I'm using Elasticsearch 2.3; I can't use a newer version because I'm limited by another technology.
Basically I have News stored in the database with a title, a source and a publication date.
The query I'm trying to make should search for all the news that contain a certain keyword, come from either source A or source B, and have a publication date in the given range.
So far I have this:
{
"query":{
"bool":{
"must":[
{
"bool":{
"should":[
{
"match":{
"source":"SOURCE_A"
}
},
{
"match":{
"source":"SOURCE_B"
}
},
{
"match":{
"title": "keyword"
}
}
]
}
}
],
"filter":{
"range":{
"publication_date":{
"gte":"DATE_FROM",
"lte":"DATE_TO"
}
}
}
}
}
}
The problem is that if one source name starts exactly like another (for example, "SOURCE" and "SOURCE ABC"), both are included in the result. I would like to match the source exactly.
Can anyone point me in the right direction?
Thanks!
The index is being created by Django Haystack but given its limitations I need to query the database myself. The index mapping is the following:
{
"myindex": {
"mappings": {
"modelresult": {
"properties": {
"django_ct": {
"type": "string",
"index": "not_analyzed",
"include_in_all": false
},
"django_id": {
"type": "string",
"index": "not_analyzed",
"include_in_all": false
},
"id": {
"type": "string"
},
"publication_date": {
"type": "date",
"format": "strict_date_optional_time||epoch_millis"
},
"source": {
"type": "string",
"analyzer": "ascii_analyser"
},
"summary": {
"type": "string",
"analyzer": "ascii_analyser"
},
"text": {
"type": "string",
"analyzer": "ascii_analyser"
},
"title": {
"type": "string",
"analyzer": "ascii_analyser"
},
"url": {
"type": "string",
"analyzer": "ascii_analyser"
}
}
}
}
}
}
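For reference, one possible direction - a sketch, not tested against this index: because source is analyzed with ascii_analyser, a match query compares analyzed tokens, which is why "SOURCE" also matches "SOURCE ABC". Adding a not_analyzed sub-field (the raw name below is arbitrary) and filtering on it gives exact matching, at the cost of a mapping change and a reindex:
"source": {
  "type": "string",
  "analyzer": "ascii_analyser",
  "fields": {
    "raw": { "type": "string", "index": "not_analyzed" }
  }
}
The query could then keep the keyword match in must and move the exact source and date constraints into the filter:
{
  "query": {
    "bool": {
      "must": [
        { "match": { "title": "keyword" } }
      ],
      "filter": [
        { "terms": { "source.raw": ["SOURCE_A", "SOURCE_B"] } },
        { "range": { "publication_date": { "gte": "DATE_FROM", "lte": "DATE_TO" } } }
      ]
    }
  }
}
Since the index is built by Django Haystack, the sub-field would have to be added through whatever controls the mapping.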
As a beginner with NoSQL databases, I have the following problem in MongoDB:
My "schema" looks like:
Film document
{
"title": "film",
"type": "object",
"properties":{
"id":{
"type": "integer"
},
"title": {
"type": "string"
},
"genre": {
"type": "string"
},
"ratings":{
"type": "array",
"items": [
{
"type": "object",
"properties": {
"userId": {
"type": "integer"
},
"rating": {
"type": "number"
}
}
}]
}
}
}
With this query, I get the right result, but not just one object.
db.film.aggregate( [
{ $unwind: "$ratings" },
{ $group: {
_id: '$title',
SumRating: { $sum: '$ratings.rating' }
} } , {$sort:{SumRating:-1}}]);
So my problem is how to use the $max operator. I tried a few things but nothing worked for me. Does somebody have an idea how to use the operator to get only the movie with the highest/max rating?
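For reference, one way to keep only the top movie is to skip $max entirely and add a $limit stage after the $sort - a sketch built on the pipeline above:
db.film.aggregate([
  { $unwind: "$ratings" },
  { $group: {
      _id: "$title",
      SumRating: { $sum: "$ratings.rating" }
  } },
  { $sort: { SumRating: -1 } },
  { $limit: 1 }
]);
$max inside the $group (e.g. MaxRating: { $max: "$ratings.rating" }) would instead give the single highest rating per movie, which is a different question from finding the movie with the highest sum.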
I want to make a schema for a JSON file. It's for an array of products.
The JSON schema is similar to the one below:
{
"$schema": "http://json-schema.org/draft-04/schema#",
"title": "Product set",
"type": "array",
"items": {
"title": "Product",
"type": "object",
"properties": {
"id": {
"description": "The unique identifier for a product",
"type": "number"
},
"name": {
"type": "string"
},
"price": {
"type": "number",
"minimum": 0,
"exclusiveMinimum": true
},
"tags": {
"type": "array",
"items": {
"type": "string"
},
"minItems": 1,
"uniqueItems": true
},
"dimensions": {
"type": "object",
"properties": {
"length": {"type": "number"},
"width": {"type": "number"},
"height": {"type": "number"}
},
"required": ["length", "width", "height"]
},
"warehouseLocation": {
"description": "Coordinates of the warehouse with the product",
"$ref": "http://json-schema.org/geo"
}
},
"required": ["id", "name", "price"]
}
}
The array should have at least one item in it. How can I define the minimum size of the array?
Do I need to add a minimum definition?
To set the minimum number of items in an array, use "minItems".
See:
https://datatracker.ietf.org/doc/html/draft-fge-json-schema-validation-00#section-5.3.3
and
http://jsonary.com/documentation/json-schema/?section=keywords/Array%20validation
{
"$schema": "http://json-schema.org/draft-04/schema#",
"title": "Product",
"description": "A product from Acme's catalog",
"type": "object",
"properties": {
...
"tags": {
"type": "array",
"items": {
"type": "string"
},
"minItems": 1,
"maxItems": 4,
"uniqueItems": true
}
},
"required": ["id", "name", "price"]
}
It looks like draft v4 permits what you are looking for. From http://json-schema.org/example1.html:
{
"$schema": "http://json-schema.org/draft-04/schema#",
"title": "Product",
"description": "A product from Acme's catalog",
"type": "object",
"properties": {
...
"tags": {
"type": "array",
"items": {
"type": "string"
},
"minItems": 1,
"uniqueItems": true
}
},
"required": ["id", "name", "price"]
}
Notice that the "tags" property is defined as an array, with a minimum number of items (1).
I suppose not; at least looking at the working draft, minimum applies only to numeric values, not arrays.
5.1. Validation keywords for numeric instances (number and integer)
...
5.1.3. minimum and exclusiveMinimum
So you should be good with min/maxItems for arrays.
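Applied to the product schema from the question, where the array is the root instance, minItems simply sits at the top level next to "type": "array":
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "title": "Product set",
  "type": "array",
  "minItems": 1,
  "items": { ... }
}
with "items" kept exactly as defined in the question.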