Adding a field mapping to 'base64Encode' an index field by calling REST endpoint not working: "A resource without a type name was found" - azure-cognitive-search

I'm trying to update a search index (using the Update Indexer) by sending a PUT request to https://searchservicename.search.windows.net/indexes/indexName?api-version=2017-11-11 and it's not working.
If I make the exact same request with the list of fields, the request works as expected and I receive a 200. As soon as I try to add fieldMappings as well, I get an error.
The JSON body I'm sending with Content-Type "application/json":
{
  "name": "indexName",
  "fields": [
    <List of Valid Fields w/ Valid Types>
  ],
  "fieldMappings": [
    {
      "sourceFieldName": "fieldName",
      "targetFieldName": "fieldName",
      "mappingFunction": {
        "name": "base64Encode"
      }
    }
  ]
}
When calling the API, the error I'm getting is:
{Search request failed: {"error":{"code":"","message":"The request is invalid. Details: index : A resource without a type name was found, but no expected type was specified. To allow entries without type information, the expected type must also be specified when the model is specified.\r\n"}}
I expect the request to return 200 and have the field mapping added.
The error I get seems to be related to the list of fields but as mentioned before, the request works as expected with the same body minus the field mappings.
Let me know if you need any other information from me - Thanks.

Field mappings should be added to an indexer, not an index. Based on your request, you are trying to update an index.
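For example, the same field mapping applied with the Create/Update Indexer API could look roughly like this; the indexer and data source names below are placeholders, while the field and index names are taken from your request:

PUT https://searchservicename.search.windows.net/indexers/indexerName?api-version=2017-11-11
Content-Type: application/json
api-key: <admin api-key>

{
  "name": "indexerName",
  "dataSourceName": "myDataSource",
  "targetIndexName": "indexName",
  "fieldMappings": [
    {
      "sourceFieldName": "fieldName",
      "targetFieldName": "fieldName",
      "mappingFunction": { "name": "base64Encode" }
    }
  ]
}

The index definition you PUT to /indexes should then contain only the "fields" list, without "fieldMappings".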

Related

How to index blob content with existing "content" field that is Collection(Edm.String)?

I can successfully index documents such as PDFs from blob storage with Azure Search, and the extracted text goes into a field called content by default.
But what I want to achieve is:
index the blob file content to a field called fileContent (Edm.String)
have a field for other uses called content (Collection(Edm.String))
I cannot make this work without an error. I've tried several approaches with some success, but from what I can tell it's not possible to redirect the data to a field other than content while also having a content field defined as Collection(Edm.String).
Here's what I've tried:
Have output field mappings set up so that the content goes into a field called "fileContent". For example:
"outputFieldMappings": [
{
"sourceFieldName": "/document/content",
"targetFieldName": "fileContent"
}
]
This works fine and the content of the file goes into the fileContent field defined as Edm.String. However, if I add a custom field called content to my index defined as Collection(Edm.String), I get an exception during the indexing operation:
The data field 'content' in the document with key '1234' has an invalid value of type 'Edm.String' (String maps to Edm.String). The expected type was 'Collection(Edm.String)'.
Why does it care what my data type for content is when I'm mapping this to a different field?
I have verified that if I make the content field just Edm.String I don't get an error, but now I have duplicate entries in the index since both content and fileContent contain the same information.
According to the documentation it's possible to change the field from content to something else (but then it doesn't tell you how):
A content field is common to blob content. It contains the text extracted from blobs. Your definition of this field might look similar to the one above. You aren't required to use this name, but doing so lets you take advantage of implicit field mappings. The blob indexer can send blob contents to a content Edm.String field in the index, with no field mappings required.
I've also tried using normal (non output) fieldMappings to redirect the input content field to fileContent, but I end up with the same error if content is also defined as Collection(Edm.String):
{
  "sourceFieldName": "content",
  "targetFieldName": "fileContent",
  "mappingFunction": null
}
I've also tried redirecting this content through a skillset, but even though I can capture that output in a custom field, as soon as I add the content field (Collection(Edm.String)) everything explodes.
Any pointers are much appreciated.
Update: It turns out that the above (non output) fieldMapping does work so long as the fileContent type is just Edm.String. However, if you want to add a skillset to process this data, that data needs to be redirected to yet another field. It will not allow you to redirect that back to fileContent, and you end up with an error like: "Target Parameter name: Enrichment target name 'fileContent' collides with existing '/document/fileContent'". So it seems you are required to store the raw blob document data in one field, and if you want to process it, that requires yet another field, which is quite annoying.
The indexer will try to index as much content as possible by matching index field names; that's why it attempts to put the blob content string into the content collection field of the index (and fails).
To get around this you need to add a (non output) field mapping from content to another name that's not an index field name, such as blobContent, to prevent the indexer from being too eager (a sketch of that mapping follows the skill example below). Then in the skillset you can use blobContent by either
replacing all occurrences of /document/content with /document/blobContent, or
setting a value for /document/content that is only accessible within the skillset (and output field mappings), using a conditional skill to minimize other changes to your skillset:
{
  "#odata.type": "#Microsoft.Skills.Util.ConditionalSkill",
  "context": "/document",
  "inputs": [
    { "name": "condition", "source": "= true" },
    { "name": "whenTrue", "source": "/document/blobContent" },
    { "name": "whenFalse", "source": "= null" }
  ],
  "outputs": [ { "name": "output", "targetName": "content" } ]
}
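For reference, the (non output) field mapping on the indexer could be a minimal sketch like this; blobContent is just an example name that must not match any index field:

"fieldMappings": [
  {
    "sourceFieldName": "content",
    "targetFieldName": "blobContent"
  }
]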

How to handle null or Non required column in json in Azure Logic App

sample api url: https://jsonplaceholder.typicode.com/posts
title is not a mandatory field in the received JSON. It may or may not be part of each record.
When this field is missing from a record, the expression #{items('For_each')['title']} throws an exception.
I want the value of myVariable to be set to 'N/A' in that case. How do I do this?
I'm assuming that this is an array and that you have a schema set on the HTTP trigger. If the schema is set, make sure you remove title as a required field.
With these assumptions you should be able to handle the missing field with coalesce().
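For example, assuming the loop is named For_each as in the question, the value of the Set variable action could use an expression along these lines:

coalesce(items('For_each')?['title'], 'N/A')

The ? makes the property lookup return null instead of failing when title is missing, and coalesce() then substitutes 'N/A'.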
If title is not present in the body of the HTTP request, the variable will then be equal to 'N/A'.
Testing with Postman; note the result is backwards, as it is for the first object sent in my array.
Because the data at your URL always has the title, I tested with a "When a HTTP request is received" trigger to get the JSON data instead.
In this situation the data could only look like this:
{
  "userId": 1,
  "id": 2,
  "body": "xxxxx"
}
not like the one below:
{
  "userId": 1,
  "id": 2,
  "title":,
  "body": "xxxxxxxxx"
}
I tested with the first one, and it did show the error message: property 'title' doesn't exist. So here is my solution: after the Set variable action, add another action that sets the variable to the value you want, and configure it to run after Set variable has failed, as sketched below.
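A rough code-view sketch of that fallback action (the action names here are examples; adjust them to match your workflow):

"Set_variable_NA": {
  "type": "SetVariable",
  "inputs": {
    "name": "myVariable",
    "value": "N/A"
  },
  "runAfter": {
    "Set_variable": [ "Failed" ]
  }
}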
After this configuration, if title doesn't exist, the variable will be set to N/A as you want.
Hope this could help you, if this is not what you want or you have other questions, please let me know.

Gmail API history request returning Drafts

I posted a question about the API returning invalid history IDs, and I'm trying to figure this out. I think the IDs are just not valid in a messages.get request, since these are not real messages but drafts. I don't know why history.list is returning drafts for a messagesAdded request. Can somebody tell me if this is the expected behavior?
{
  "history": [
    {
      "id": "10946109",
      "messages": [
        {
          "id": "15cc8cd840c2945a",
          "threadId": "15cc5ccf65733c7f"
        }
      ],
      "messagesAdded": [
        {
          "message": {
            ...
            "labelIds": [
              "SENT"
            ]
          }
        }
      ]
    },
    {
      "id": "10975146",
      "messages": [
        {
          ...
        }
      ],
      "messagesAdded": [
        {
          "message": {
            ...
            "labelIds": [
              "DRAFT"
            ]
          }
        }
      ]
    }
  ]
}
If I need to filter for actual messages - not drafts, do I just do labelIds does not contain DRAFT?
Your first question:
Can somebody tell me if this is the expected behavior?
Yes, this is expected behavior (I replicated it). Check the documentation regarding the history list:
Users.history: list
Lists the history of all changes to the given mailbox. History results are returned in chronological order (increasing historyId).
Your second question:
If I need to filter for actual messages - not drafts, do I just do labelIds does not contain DRAFT?
Yes, there is a filter for that. You can set the "labelId" parameter to anything except "DRAFT" so that draft results are not returned in the history.
Below is a simple guide on how to properly filter your messages without returning Draft label types:
1. To check your list of labelIds, try the Label API test link and execute the API to see your list of labels, just to be sure you will be using a valid "labelId" later in step 3.
2. Get the value of the "historyId" by executing the Message List API to retrieve a list of messages, then take one id and use the Message Get API with that ID to retrieve its "historyId". Make sure the labelId is not "DRAFT", otherwise get another id from the list so you avoid a "DRAFT" type.
3. Then execute the History API test link. Enter your "userId" and the "startHistoryId" of your message (make sure to subtract 1 from the "startHistoryId" value), set the "labelId" to one from the list of labels you retrieved from your GET API in step 2, change the "historyTypes" to "messagesAdded", then click execute.
It should return a list of messages under the "labelId" you entered, and not of the "DRAFT" type.
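As a sketch, the request in step 3 might look roughly like the following; the IDs are placeholders based on the question, and note that the value the history API accepts for "historyTypes" is the singular messageAdded, even though the response field is named messagesAdded:

GET https://www.googleapis.com/gmail/v1/users/me/history?startHistoryId=10946108&labelId=SENT&historyTypes=messageAdded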

Empty Array being returned

I have created an Ionic app and I am currently stuck trying to retrieve an array from MongoDB through Go. This is what the data in MongoDB looks like:
{
  "_id": {
    "$oid": "58a86fc7ad0457629d64f569"
  },
  "name": "ewds",
  "username": "affe#dsg.com",
  "password": "vdseaff",
  "email": "fawfef",
  "usertype": "Coaches",
  "Requests": [
    "test#t.com"
  ]
}
I am currently trying to get back the Requests field. One of the ways I tried was receiving the whole document using the following code:
// This is the struct being used.
type (
    User struct {
        Name     string
        Username string
        Password string
        Email    string
        UserType string
        Requests []string
    }
)

// u is the mgo collection handle; Cname holds the username to look up.
results := User{}
err = u.Find(bson.M{"username": Cname}).One(&results)
This only returns the following, with an empty array:
{ewds affe#dsg.com vdseaff fawfef Coaches []}
In your data the Requests field has a capital R. The bson library that converts the Mongo document to your struct type has this to say:
https://godoc.org/gopkg.in/mgo.v2/bson#Unmarshal
The lowercased field name is used as the key for each exported field, but this behavior may be changed using the respective field tag.
So your options are to either add a tag to your Requests field or change your data to use lowercase requests. If you choose the tag option, it would look like this:
Requests []string `bson:"Requests"`
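As a minimal sketch, the struct with that tag applied could look like this; only Requests needs the tag, since the other lowercased field names already match the document keys:

// Requests needs an explicit bson tag because the document key "Requests"
// does not match the default lowercased key "requests".
type User struct {
    Name     string
    Username string
    Password string
    Email    string
    UserType string
    Requests []string `bson:"Requests"`
}

results := User{}
err = u.Find(bson.M{"username": Cname}).One(&results)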

How to fetch data from couchDB using couch api?

Instead of just keys and IDs, I want to get all the docs via the CouchDB API. I have tried GET "http://localhost:5984/db-name/_all_docs" but it returned:
{
  "total_rows": 4,
  "offset": 0,
  "rows": [
    {"id":"11","key":"11","value":{"rev":"1-a0206631250822b37640085c490a1b9f"}},
    {"id":"18","key":"18","value":{"rev":"30-f0798ed72ceb3db86501c69ed4efa39b"}},
    {"id":"3","key":"3","value":{"rev":"15-0dcb22bab2b640b4dc0b19e07c945f39"}},
    {"id":"6","key":"6","value":{"rev":"4-d76008cc44109bd31dd32d26ba03125d"}}
  ]
}
From the documentation, the request below will send the data as expected, but it requires a set of keys in the request:
POST /db/_all_docs HTTP/1.1
{
  "keys": [
    "11",
    "18"
  ]
}
Thanks in advance.
The _all_docs endpoint is actually just a system-level view that uses the _id field as the index. Thus, any parameters that you can use for views also apply here.
If you read the documentation further, you'll find that adding the parameter include_docs=true to your view will include the original documents in the results. The documents will be added as the doc field alongside id, value and rev.
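For example, reusing the database name from the question:

GET http://localhost:5984/db-name/_all_docs?include_docs=true

Each row then carries the full document in a doc field, for example (abbreviated):

{"id":"11","key":"11","value":{"rev":"1-a0206631250822b37640085c490a1b9f"},"doc":{ ...full document... }}

The same include_docs=true query parameter also works with the POST /db/_all_docs form that takes a "keys" array.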
