"Unknown Error: mango_idx :: {no_usable_index,missing_sort_index}"} - cloudant

I have the following index:
{'type': 'text',
'name': 'album-rating-text',
'index': {'fields': [
{'type': 'string', 'name': 'user_id'},
{'type': 'string', 'name': 'album_id'},
{'type': 'number', 'name': 'timestamp'}
]}}
Here is the query:
{'sort': [
    {'user_id': 'desc'},
    {'album_id': 'desc'},
    {'timestamp': 'desc'}
 ],
 'limit': 1,
 'fields': ['user_id', 'album_id', 'timestamp'],
 'selector': {
     '$and': [
         {'user_id': {'$eq': 'a#a.com'}},
         {'album_id': {'$in': ['bf129f0d', '380e3a05']}}
     ]}}
The error:
{
"error":"unknown_error",
"reason":"Unknown Error: mango_idx :: {no_usable_index,missing_sort_index}"
}
I've seen a similar question; however, all the fields that I'm indexing on are already in my sort list.
Update:
As a workaround, I attempted to simplify by dropping the timestamp field:
{"type": "text",
"name": "album-rating-text",
"index": {"fields": [
{"type": "string", "name": "user_id"},
{"type": "string", "name": "album_id"}
]}}
And query as so ...
{"selector": {"$and": [
{"user_id": {"$eq": "a#a.com"}},
{"album_id": {"$in": ["bf129f0d", "380e3a05"]}
}]},
"fields": ["user_id", "album_id"]}
Instead of results, I get the following warning and an empty docs list:
{"warning":"no matching index found, create an index to optimize query time",
"docs":[
]}

To use the sort function on a custom field, that field needs to be registered in a query index.
Cloudant doesn't do this automatically, because it can be resource-intensive:
"The example in the editor shows how to index the field "foo" using
the json type index. You can automatically index all the fields in all
of your documents using a text type index with the syntax '{ "index":
{}, "type": "text" }', Note that indexing all fields can be resource
consuming on large data sets."
You can do this using the Cloudant dashboard. Go to your database and look for "Queryable indexes". Click Edit.
Add your field to the default template:
{
"index": {
"fields": [
"user_id"
]
},
"type": "json"
}
Press "Create index"
Field "user_id" is now queryable, and you can now use sort-function to it.
All fields need to be add manually, or you can register all fields as Query-index with:
{ "index": {}, "type": "text" }
Video instructions for creating Query-index:
https://www.youtube.com/watch?v=B3ZkxSFau8U

Try using a JSON index instead of the text index:
{
"type": "json",
"name": "album-rating-text",
"index": {
"fields": ["user_id", "album_id", "timestamp"]
}
}
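With this JSON index in place, the original query should be able to sort, since every sort field is in the index and all sort directions match. A rough sketch of creating the index and running the query over HTTP with the Python requests library (URL and credentials are placeholders); the _explain endpoint can be used to confirm which index the planner actually picks:
import requests

BASE = "https://ACCOUNT.cloudant.com/DATABASE"   # placeholders
AUTH = ("USERNAME", "PASSWORD")

# Create the JSON index suggested above.
index_def = {
    "type": "json",
    "name": "album-rating-text",
    "index": {"fields": ["user_id", "album_id", "timestamp"]},
}
requests.post(BASE + "/_index", json=index_def, auth=AUTH).raise_for_status()

# The original query: every sort field is covered by the index,
# and all sort directions are the same.
query = {
    "selector": {
        "user_id": {"$eq": "a#a.com"},
        "album_id": {"$in": ["bf129f0d", "380e3a05"]},
    },
    "fields": ["user_id", "album_id", "timestamp"],
    "sort": [{"user_id": "desc"}, {"album_id": "desc"}, {"timestamp": "desc"}],
    "limit": 1,
}
resp = requests.post(BASE + "/_find", json=query, auth=AUTH)
resp.raise_for_status()
print(resp.json()["docs"])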

If I remember correctly, my query requirements changed and I chose to use a standard Cloudant Search index instead of a Mango index.

Related

How to get all solr field names except for multivalued fields?

I'm new to Solr and I'm trying to query for field names, excluding fields with multiValued=true.
So far I have
select?q=*:*&wt=csv&rows=0&facet
which returns all the fields.
Is there a way to modify the query to check if a field is multivalued?
You can retrieve information about all the defined fields through the Schema API. The response will contain a multiValued field set to true if the field is defined as multivalued:
v1 API:
http://localhost:8983/solr/techproducts/schema/fields
v2 API:
http://localhost:8983/api/collections/techproducts/schema/fields
{
"fields": [
{
"indexed": true,
"name": "_version_",
"stored": true,
"type": "long"
},
{
"indexed": true,
"multiValued": true, <----
"name": "cat",
"stored": true,
"type": "string"
},
],
"responseHeader": {
"QTime": 1,
"status": 0
}
}
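If you then want only the single-valued field names programmatically, a small sketch that filters the Schema API response could look like this (the collection name is the "techproducts" example from above; note that a field can also inherit multiValued=true from its field type, which this simple check ignores):
import requests

# Fetch every field definition from the Schema API.
resp = requests.get("http://localhost:8983/solr/techproducts/schema/fields")
resp.raise_for_status()
fields = resp.json()["fields"]

# Keep only fields that are not declared multiValued on the field itself.
single_valued = [f["name"] for f in fields if not f.get("multiValued", False)]
print(single_valued)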

JSONSchema keyword "type" when encased inside "items" fails to validate

I'm trying to write a JSON validator to check files before runtime, but there's a really odd issue with using "type".
I know "type" is a reserved word, but JSON Schema doesn't have an issue with it as a property name, as I found in this other question: Key values of 'key' and 'type' in json schema.
The solution to their problem was wrapping "type": { "type": "string"} inside "properties", and it does work. However, my implementation requires it to be inside an array. Here's the snippet of my code:
{
  "type": "object",
  "additionalProperties": false,
  "properties": {
    "method": {
      "type": "array",
      "items": {
        "type": {
          "type": "string"
        },
        "name": {
          "type": "string"
        },
        "provider": {
          "type": "array"
        }
      }
    }
  }
}
Oddly enough, VS Code doesn't have a problem with it when the file is isolated, but when it's included in the main code it doesn't like it and offers no fix. Regardless, validating it with Python yields:
...
raise exceptions.SchemaError.create_from(error)
jsonschema.exceptions.SchemaError: {'type': 'string'} is not valid under any of the given schemas
Failed validating 'anyOf' in metaschema['allOf'][1]['properties']['properties']['additionalProperties']['$dynamicRef']['allOf'][1]['properties']['items']['$dynamicRef']['allOf'][3]['properties']['type']:
{'anyOf': [{'$ref': '#/$defs/simpleTypes'},
{'items': {'$ref': '#/$defs/simpleTypes'},
'minItems': 1,
'type': 'array',
'uniqueItems': True}]}
On schema['properties']['method']['items']['type']:
{'type': 'string'}
What further confuses me is that https://www.jsonschemavalidator.net/ tells me
"Expected array or string for 'type', got StartObject. Path 'properties.method.items.type', line 8, position 17.", yet JSON Schema Faker is able to generate a fake file without any problems. The generated fake JSON also returns the same error when validated with Python and JSONSchemaValidator.
I'm a beginner and any help or insight will be greatly appreciated, thanks for your time.
Edit: here's the snippet of the input data as requested.
{
  ...
  "method": [
    {
      "type": "action",
      "name": "name of the chaos experiment to use here",
      "provider": [
      ]
    }
  ]
}
Arrays don't have properties; arrays have items. The schema as you have included it is not valid; are you sure you don't mean to have this?
{
  "type": "object",
  "additionalProperties": false,
  "properties": {
    "method": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "type": {
            "type": "string"
          },
          "name": {
            "type": "string"
          },
          "provider": {
            "type": "array"
          }
        }
      }
    }
  }
}
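To double-check the fix, a small sketch using the jsonschema package (file names are placeholders for the schema above and the input data snippet): check_schema() raises the SchemaError seen in the traceback when run against the broken schema and passes on the corrected one, and validate() then checks the data itself.
import json
import jsonschema  # pip install jsonschema

# Placeholder file names for the corrected schema and the input data.
with open("schema.json") as fh:
    schema = json.load(fh)
with open("experiment.json") as fh:
    instance = json.load(fh)

# Fails with SchemaError on the original schema, passes on the corrected one.
jsonschema.Draft202012Validator.check_schema(schema)
# Then validate the actual input data against the schema.
jsonschema.validate(instance=instance, schema=schema)
print("schema and instance are both valid")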

Solr indexing nested objects array

We're trying to run an index in Solr (8.9.0, Schemaless Mode) on a list of items that each contain 1 or 2 arrays of objects, with 1 or more records per array. The sample below is the JSON we feed to the index:
[
{
"id": 8270861,
"type": "Product",
"title": "Stripped T-shirt"
"tags": [{
"tagId": 218,
"tagIcon": "smile,happy",
"tagHelpText": "",
"tagValue": "grand"
},
{
"tagId": 219,
"tagIcon": "frown,sad",
"tagHelpText": "",
"tagValue": "grand"
}],
"keywords": [
{
"keywordId": 742,
"type": "color"
},
{
"keywordId": 743,
"type": "size"
}]
}
]
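For reference, a payload like this would typically be sent to the core's /update handler; a rough sketch in Python (the core name "products" and the file name are placeholders, not from the original setup):
import json
import requests

# Placeholder file holding the JSON array shown above.
with open("items.json") as fh:
    docs = json.load(fh)

# Post the documents to the core's /update handler and commit immediately
# so they become searchable.
resp = requests.post(
    "http://localhost:8983/solr/products/update?commit=true",
    json=docs,
)
resp.raise_for_status()
print(resp.json())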
2 problems we run into:
PROBLEM 1:
The output of the solr query changes the format of the arrays to this (effectively removing the quotes):
...
"tags": [
"{tagIcon=smile,happy, tagHelpText=, tagId=218, tagValue=grand}",
"{tagIcon=frown,sad, tagHelpText=, tagId=219, tagValue=grand}"
],
"keywords": [
"{type=color, keywordId=742}",
"{type=size, keywordId=743}"
],
...
Is there a way to get the format of the arrays to come back the same way as fed into the index:
"tags": [
{ "tagId": 218, "tagIcon": "smile,happy", "tagHelpText": "", "tagValue": "grand" },
{ "tagId": 219, "tagIcon": "frown,sad", "tagHelpText": "", "tagValue": "grand"}
]
to avoid any conflicts when the value is a comma-separated list. Are we missing some definition adjustments in the schema file? If so, do we need to define the children of those parent keys (e.g. "tags.tagIcon")?
PROBLEM 2:
The index seems to reject an array with a single element. If we feed it the same JSON as above, but with only one entry in the keywords array (or the tags array):
...
"keywords": [
{
"keywordId": 742,
"type": "color"
}]
...
it throws an HTTP 400 error: "Unknown operation for the an atomic update: type".
Any suggestions on this would be welcome.

"There is no index available for this selector" despite the fact I made one

In my data, I have two fields that I want to use as an index together. They are sensorid (any string) and timestamp (yyyy-mm-dd hh:mm:ss).
So I made an index for these two using the Cloudant index generator. This was created successfully and it appears as a design document.
{
"index": {
"fields": [
{
"name": "sensorid",
"type": "string"
},
{
"name": "timestamp",
"type": "string"
}
]
},
"type": "text"
}
However, when I try to make the following query to find all documents with a timestamp newer than some value, I am told there is no index available for the selector:
{
"selector": {
"timestamp": {
"$gt": "2015-10-13 16:00:00"
}
},
"fields": [
"_id",
"_rev"
],
"sort": [
{
"_id": "asc"
}
]
}
What have I done wrong?
It seems to me like Cloudant Query only allows sorting on fields that are part of the selector.
Therefore your selector should include the _id field and look like:
"selector":{
"_id":{
"$gt":0
},
"timestamp":{
"$gt":"2015-10-13 16:00:00"
}
}
I hope this works for you!
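For completeness, a rough sketch of sending the adjusted query to the _find endpoint with the Python requests library (account, database, and credentials are placeholders):
import requests

# Placeholder account, database, and credentials.
URL = "https://ACCOUNT.cloudant.com/DATABASE/_find"

query = {
    "selector": {
        "_id": {"$gt": 0},
        "timestamp": {"$gt": "2015-10-13 16:00:00"},
    },
    "fields": ["_id", "_rev"],
    "sort": [{"_id": "asc"}],
}

resp = requests.post(URL, json=query, auth=("USERNAME", "PASSWORD"))
resp.raise_for_status()
print(resp.json()["docs"])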

ExtJs Menu Binding from database

Please provide some sample code or an idea about how to bind a menu dynamically from JSON results.
I get results from the database as JSON, so how do I bind the menu from that JSON (parents and children)?
Thanks in advance.
It's pretty easy, actually. When you return the data from the server, all you need to do is include a metaData field in your JSON that defines the record structure.
See this documentation: http://dev.sencha.com/deploy/ext-3.3.1/docs/?class=Ext.data.JsonReader
The example from the docs is as follows:
{
metaData: {
"idProperty": "id",
"root": "rows",
"totalProperty": "results"
"successProperty": "success",
"fields": [
{"name": "name"},
{"name": "job", "mapping": "occupation"}
],
// used by store to set its sortInfo
"sortInfo":{
"field": "name",
"direction": "ASC"
},
// paging data (if applicable)
"start": 0,
"limit": 2,
// custom property
"foo": "bar"
},
// Reader's configured successProperty
"success": true,
// Reader's configured totalProperty
"results": 2000,
// Reader's configured root
// (this data simulates 2 results per page)
"rows": [ // *Note: this must be an Array
{ "id": 1, "name": "Bill", "occupation": "Gardener" },
{ "id": 2, "name": "Ben", "occupation": "Horticulturalist" }
]
}
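On the server side, any stack can produce that shape. As a rough illustration only (Flask, the route, and the column names are my assumptions, not from the original answer), menu rows from a database could be wrapped with the metaData block like this:
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical rows as they might come back from a menu table
# (id / parentId / text are assumed column names).
MENU_ROWS = [
    {"id": 1, "parentId": None, "text": "File"},
    {"id": 2, "parentId": 1, "text": "Open"},
]

@app.route("/menu")
def menu():
    # Shape the response the way Ext.data.JsonReader expects: metaData
    # describes the record structure, "rows" carries the actual records.
    return jsonify({
        "metaData": {
            "idProperty": "id",
            "root": "rows",
            "totalProperty": "results",
            "successProperty": "success",
            "fields": [{"name": "id"}, {"name": "parentId"}, {"name": "text"}],
        },
        "success": True,
        "results": len(MENU_ROWS),
        "rows": MENU_ROWS,
    })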
