Index field and blank out - Cloudant

Is it possible to index a field and then blank it out?
The reason for this would be that I have a plain text field and a field containing the encrypted version of the text. I'd like to index the plain text, and then remove it so only the encrypted data remains.
I tried modifying the passed doc in my index function, but it doesn't seem to affect storage.

No, it is not possible to index a field and then blank it out; this is by design. Views and indexes only reflect the latest version of the documents, so when you 'blank' a field, the corresponding view/index entry is blanked as well. The view/index is kept in sync with the documents, and there is no option to make them diverge.
To achieve the effect you want, your map or index function would need to decrypt the encrypted field and send the plain text to the index. However, the index itself is not encrypted, so that would probably defeat the purpose of having the encrypted field in your document in the first place.
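For reference, a Cloudant Search index function has the shape sketched below; the doc.plaintext field name is hypothetical. Note that even with store set to false, the indexed terms still sit unencrypted inside the index on the server, which is exactly the problem described above:

    function (doc) {
      // Hypothetical field: doc.plaintext holds the unencrypted text.
      if (doc.plaintext) {
        // "store": false keeps the value out of search results, but the
        // indexed terms themselves are still held unencrypted in the index.
        index("body", doc.plaintext, {"store": false});
      }
    }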

Related

Is it possible to specify the copyField source as a field in a different collection in Solr?

I am having an issue with partial updates in Solr. Since my collection has some non-stored fields, the values in those fields are gone after a partial update. So, is it possible to use copyField to copy the original content for the non-stored fields from a different collection?
No. copyFields are invoked when a document is submitted for indexing, so I'm not sure how that would work semantically either. In practice, a copyField instruction duplicates a field's value when the document arrives at the server, copying it into fields with other names. That model doesn't make sense if a different collection is involved: would it be invoked when documents are submitted to the other collection? And if so, what happens to the other fields local to the actual collection?
Set the fields to stored if you want to use partial updates with fields that can't support in-place updates (in-place updates have very particular requirements: the field must be non-stored, non-indexed, single-valued, and have numeric docValues).
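As a sketch, the relevant schema.xml entries would look something like this (field names are hypothetical); note that both the copyField source and dest live in the same collection's schema:

    <!-- Fields involved in atomic (partial) updates should be stored. -->
    <field name="title" type="text_general" indexed="true" stored="true"/>
    <field name="body"  type="text_general" indexed="true" stored="true"/>

    <!-- copyField only duplicates values within this collection; the dest
         is repopulated from the stored sources on each update. -->
    <field name="text_all" type="text_general" indexed="true" stored="false" multiValued="true"/>
    <copyField source="title" dest="text_all"/>
    <copyField source="body"  dest="text_all"/>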

How to get matched data into database?

I took a flat file, looked up a field in a database, and added another field as a new column to the flat file.
But when I directed the matched output to another database, the matched field is NULL when I inspect it with a SELECT statement.
What did I do wrong?
I would check for any of the following in either the flat file or the lookup data, any of which can cause a non-match:
- text data with trailing blanks
- text data with upper case vs lower case
- numeric data of varying datatypes, even just differing precisions
- probably other issues I haven't listed above - it's just ridiculously fussy
To avoid these issues, I always explicitly use SQL CAST or a Derived Column transform to make sure the key fields on both sides are all text, all upper case, and exactly the same, byte by byte.
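As a sketch, assuming a join key column named CustKey, that normalization looks like this on the SQL side (the equivalent SSIS Derived Column expression would be UPPER(TRIM((DT_STR,50,1252)CustKey))):

    -- Normalize the key on both sides of the lookup: cast to text,
    -- trim trailing blanks, and force a single case, byte for byte.
    SELECT UPPER(RTRIM(CAST(CustKey AS VARCHAR(50)))) AS CustKey
    FROM dbo.LookupTable;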

Salesforce: Update Managed Object's Auto number Field...

I would like to know if we can update an auto number field. When I tried updating one using the Developer Console, the system said: 'Field is not Writable'.
In our support product (managed package), I need to update/start the auto number field at a specific number, say 100.
I am thinking of creating a text field and imitating the auto number field with it, but I don't know how much that would hamper the existing functionality of the package.
Any ideas?
Thanks in advance.
You can't set new values on auto number fields in Salesforce. However, you can use a custom field with a number type instead, filled in automatically by a trigger. In the trigger you can implement whatever logic you need to check the uniqueness of the field and enforce its conditions. If you want to input values for this field manually, you can use validation rules to check uniqueness.
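A minimal sketch of such a trigger, assuming a hypothetical custom object Ticket__c with a custom number field Counter__c (it ignores race conditions between concurrent inserts, which a real implementation would need to handle):

    // Hypothetical object and field names; not part of any managed package.
    trigger AssignCounter on Ticket__c (before insert) {
        // Find the current maximum so numbering can start at 100.
        AggregateResult r = [SELECT MAX(Counter__c) maxVal FROM Ticket__c];
        Decimal current = (Decimal) r.get('maxVal');
        Decimal next = (current == null) ? 100 : current + 1;
        for (Ticket__c t : Trigger.new) {
            t.Counter__c = next;
            next += 1;
        }
    }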

How to access raw data in OpenSearchServer?

I searched the documentation and cannot find where it stores all the data.
I want to access all crawled data in order to do my own processing.
The StartStopListener class sets up the index directories: look for the values of the environment variables OPENSEARCHSERVER_DATA, OPENSEARCHSERVER_MULTIDATA, or OPENSHIFT_DATA_DIR.
Whether you'll be able to parse the files easily or correctly is another matter: I have never tried to open a search server's indexes by hand, and I don't know whether the index format is well documented.
By default, the crawled data is not stored; only the extracted text is. It is possible to store the crawled data as well. Here is the process:
1. Create a new field, setting the "stored" parameter to yes or to compressed.
2. Go to the Schema / Parser List.
3. Edit the HTML parser.
4. In the "Field Mapping" tab, link the parser field "htmlSource" to the new field.
5. Restart the indexation process.
Now all crawled data will be copied to this field. Don't forget to add it as a returned field in your query.

Solr indexing and reindexing

I have a schema with 10 fields. One of the fields is text (the content of a file); all the rest are custom metadata. The document text doesn't change, but the metadata changes frequently.
Is there any way to skip the document text while re-indexing? Can I index only the custom metadata? If I skip the document text when re-indexing, does it update the index by removing the text field from the indexed document?
To my knowledge there's no way to selectively update specific fields: an update operation performs a complete replace of all document data. Since Solr is open source, you could conceivably build your own component for this if you really need it.
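In other words, even a metadata-only change means re-posting the entire document, text field included; a sketch with hypothetical field names:

    <add>
      <doc>
        <field name="id">doc-1</field>
        <!-- The full text must be resent even though it hasn't changed,
             because the update replaces the whole indexed document. -->
        <field name="text">...entire file content...</field>
        <field name="author">updated-author</field>
      </doc>
    </add>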
