Azure Search: Blob-only Index Creation

We would like to enable Azure Search only for Blob data, including its Contents and Meta Attributes stamped on the blob.
Is it possible to have such an Indexer & Index without any reference to a database? How are the Fields of the Index specified in this case? Will the fields be the same as the meta attributes stamped on the blob?
Also, we have certain fields which may contain data from two different languages. Is it possible to add the same field twice in the Index, with a different language analyzer specified on each?
Is it possible to relate the same Indexer to two different Indexes?
Is it possible to specify more than one Storage Account Container as a data source for the same Index?
Ideally, we would like to be able to do the following:
Utilize the same Indexer in multiple Indexes
Enable the same Indexer/Index to search across multiple languages (with language analyzers)
Enable an Index based only on Blob content & its meta attribute data

This doc topic explains how to set up search for blob data: https://learn.microsoft.com/en-us/azure/search/search-howto-indexing-azure-blob-storage
The default dataToExtract parameter value is contentAndMetadata, meaning all text content and metadata will be indexed. You should be able to set up field mappings from metadata and contents to your index (the details are outlined in this same doc topic).
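For illustration, here is a minimal sketch of that setup using the Microsoft.Azure.Search .NET SDK; the service name, key, container, and the custom metadata/field names below are placeholders, not values from the question:

using System.Collections.Generic;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

// Connect to the search service (placeholder name and key).
var serviceClient = new SearchServiceClient("my-search-service", new SearchCredentials("<admin-api-key>"));

// Data source that points only at a blob container - no database involved.
var dataSource = DataSource.AzureBlobStorage("blob-datasource", "<storage-connection-string>", "my-container");
serviceClient.DataSources.CreateOrUpdate(dataSource);

// Indexer that maps blob metadata (built-in and custom) to index fields.
var indexer = new Indexer
{
    Name = "blob-indexer",
    DataSourceName = "blob-datasource",
    TargetIndexName = "blob-index",
    FieldMappings = new List<FieldMapping>
    {
        new FieldMapping("metadata_storage_name", "fileName"),
        new FieldMapping("department", "departmentName") // a custom meta attribute stamped on the blob
    }
};
serviceClient.Indexers.CreateOrUpdate(indexer);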
The indexer points to the index it should output to, so I don't think it would be possible to re-use the same indexer for multiple indexes; you'd have to create a copy of the indexer for each index instead.
Similarly, the indexer specifies what datasource it takes its data from, so only one data source per indexer. You'd need to aggregate your data into a single source first if you want to build an index from the data of multiple sources.
It is possible to index multiple languages in a single index, by specifying the relevant analyzer for each index field. More details can be found in this topic: https://learn.microsoft.com/en-us/azure/search/search-language-support
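For the multi-language part, a minimal sketch of an index definition with per-field language analyzers (the field names are made up for illustration):

using System.Collections.Generic;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

var serviceClient = new SearchServiceClient("my-search-service", new SearchCredentials("<admin-api-key>"));

// Two copies of the same logical field, each with its own language analyzer.
var index = new Index
{
    Name = "blob-index",
    Fields = new List<Field>
    {
        new Field("id", DataType.String) { IsKey = true },
        new Field("content", DataType.String) { IsSearchable = true },
        new Field("description_en", DataType.String) { IsSearchable = true, Analyzer = AnalyzerName.EnLucene },
        new Field("description_fr", DataType.String) { IsSearchable = true, Analyzer = AnalyzerName.FrLucene }
    }
};
serviceClient.Indexes.CreateOrUpdate(index);

The indexer's field mappings can then map the same source metadata attribute to both description_en and description_fr if needed.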

Related

In Azure Search, can an indexer combine information from different documents into a single index item without them overwriting each other?

My goal is to create a single searchable Azure Index that has all of the relevant information currently stored in many different sql tables.
I'm also using an Azure Cognitive Service to add additional info from related documents. Each document is tied to only a single item in my Index, but each item in the index will be tied to many documents.
According to my understanding, if two documents have the same value for the indexer's Key, then the index will overwrite the extracted information from the first document with the information extracted from the second. I'm hoping there's a way to append the information instead of overwriting it. For example: if two documents relate to the same index item, I want the values mapped to keyphrases for that item to include the keyphrases found in the first document and the keyphrases found in the second document.
Is this possible? Is there a different way I should be approaching this?
If it is possible, can I do it without having duplicate values?
Currently I have multiple indexes and I'm combining the search results from each one, but this seems inefficient and likely messes up the default scoring algorithm.
Every code example I find only has one document for each index item and doesn't address my problem. Admittedly, I haven't tried to set up my index as described above, because it would take a lot of refactoring, and I'm confident it would just overwrite itself.
I am currently creating my indexes and indexers programmatically using dotnet. I'm assuming my code isn't relevant to my question, but I can provide it if need be.
Thank you so much! I'd appreciate any feedback you can give.
Edit: I'm thinking about creating a custom skill to do the aggregation for me, but I don't know how the skill would access everything it needs. It needs the extracted info from the current document, and it needs the previously aggregated info from previous documents. I guess the custom skill could perform a search on the index and get the item that way, but that sounds dangerously hacky. Any thoughts would be appreciated.
Pasting from docs:
Indexing actions: upload, merge, mergeOrUpload, delete
You can control the type of indexing action on a per-document basis, specifying whether the document should be uploaded in full, merged with existing document content, or deleted.
Whether you use the REST API or an SDK, the following document operations are supported for data import:
Upload, similar to an "upsert" where the document is inserted if it is new, and updated or replaced if it exists. If the document is missing values that the index requires, the document field's value is set to null.
merge updates a document that already exists, and fails a document that cannot be found. Merge replaces existing values. For this reason, be sure to check for collection fields that contain multiple values, such as fields of type Collection(Edm.String). For example, if a tags field starts with a value of ["budget"] and you execute a merge with ["economy", "pool"], the final value of the tags field is ["economy", "pool"]. It won't be ["budget", "economy", "pool"].
mergeOrUpload behaves like merge if the document exists, and upload if the document is new.
delete removes the entire document from the index. If you want to remove an individual field, use merge instead, setting the field in question to null.
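Given that merge replaces collection fields rather than appending to them, one possible workaround (a rough sketch using the .NET SDK mentioned in the question; the index, key, and field names are placeholders) is to read the current values, union in the new ones, and mergeOrUpload the combined list:

using System.Collections.Generic;
using System.Linq;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

var serviceClient = new SearchServiceClient("my-search-service", new SearchCredentials("<admin-api-key>"));
ISearchIndexClient indexClient = serviceClient.Indexes.GetClient("my-index");

// Keyphrases extracted from the document currently being processed.
var newKeyphrases = new[] { "economy", "pool" };

// Read the existing item so its keyphrases are not lost (assumes the item was uploaded earlier).
Document existing = indexClient.Documents.Get("item-123");
var currentKeyphrases = existing.ContainsKey("keyphrases") && existing["keyphrases"] != null
    ? ((IEnumerable<object>)existing["keyphrases"]).Select(v => v.ToString())
    : Enumerable.Empty<string>();

// Union avoids duplicate values; mergeOrUpload then writes the combined list back.
var update = new Document
{
    ["id"] = "item-123",
    ["keyphrases"] = currentKeyphrases.Union(newKeyphrases).ToList()
};
indexClient.Documents.Index(IndexBatch.MergeOrUpload(new[] { update }));

Doing this aggregation outside the indexer pipeline avoids the custom-skill-queries-the-index approach mentioned in the edit, at the cost of an extra read per document.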

Creating dynamic document type in Vespa

Just as we can define an index pattern in Elasticsearch and then keep creating new indices with the same mapping, is there any way to create dynamic document types in Vespa?
Our use case is: depending upon the value of one of the keys, we need to put the document into a specific document type, so that while searching we can search a specific document type according to that key's value.
Vespa has no dynamic document type support; document types need to be configured explicitly in the application package.
If you have 5M documents with key=foo, 500M documents with key=bar, and 505M docs total, searching for key=foo will quickly restrict the search to only those documents matching key=foo (5M).

How to boost the score in Azure Search for unstructured blob data?

I am using Azure Search with the default indexing, importing unstructured data (pdf, doc, text, image files, etc.).
I didn't make any scoring profile on the default available fields.
Almost every setting in the portal is the default. If I search any text through Search explorer, the JSON results come back with very low search scores.
I read about score boosting using a scoring profile. However, the terms I want to find can be in any document at any place, so how can I decide which field to weight more?
How can I generate more custom fields from these input files? Do I need to write a document parser?
I am using SDK 4.0 and C# in my bot.
Please suggest.
To use a scoring profile, the fields you are trying to boost need to be part of the index definition, otherwise the scoring mechanism won't know about them.
You mentioned using unstructured data as your source; I assume this means your data does not have any stable or predictable structure. If that's the case, then you probably won't be able to update your index definition to match the structure of every document exactly, since different documents will likely have different and unpredictable structures. If you know what fields you want to boost, and you know how to retrieve those fields from your documents, then you could update your index definition with only the fields you care about, and then use the "merge" document API to populate those fields for each document.
https://learn.microsoft.com/en-us/rest/api/searchservice/addupdate-or-delete-documents
This would require you to retrieve all documents from the index, parse the data to extract the field you want to boost, and then use the merge API to update the index data with the data you extracted. Once you have this, you will be able to use that field as part of a scoring profile.
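As a rough sketch of that last step (the field and profile names here are invented for illustration), once the extracted field is part of the index definition, you can weight it in a scoring profile and reference that profile at query time:

using System.Collections.Generic;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

var serviceClient = new SearchServiceClient("my-search-service", new SearchCredentials("<admin-api-key>"));

// Index with the extracted field plus a scoring profile that boosts it.
var index = new Index
{
    Name = "blob-index",
    Fields = new List<Field>
    {
        new Field("id", DataType.String) { IsKey = true },
        new Field("content", DataType.String) { IsSearchable = true },
        new Field("importantText", DataType.String) { IsSearchable = true } // populated via the merge API
    },
    ScoringProfiles = new List<ScoringProfile>
    {
        new ScoringProfile
        {
            Name = "boostImportantText",
            TextWeights = new TextWeights
            {
                Weights = new Dictionary<string, double> { { "importantText", 5 }, { "content", 1 } }
            }
        }
    }
};
serviceClient.Indexes.CreateOrUpdate(index);

// At query time, ask for the scoring profile explicitly.
ISearchIndexClient indexClient = serviceClient.Indexes.GetClient("blob-index");
var results = indexClient.Documents.Search("text to find", new SearchParameters { ScoringProfile = "boostImportantText" });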

How can we retrieve the tokens of a particular property from the search engine?

Community version. When contents are added in Alfresco, the search engine tokenizes properties (name, description) and stores them in its indexes. I would like to know if there is a way to retrieve the list of those keywords associated with a particular piece of content?
Ex: fetch me the tokens from the "Name" of the "abc.txt" content.
I see there are APIs exposed by Solr to get the overall status of indexes and to fix transactions, but nothing which meets my needs.
I had a similar experience: I needed to find out what the tokenizer was doing with the indexes, because a particular file name was not found during search.
I finally used Luke, the Lucene index toolbox, which is:
Luke is a handy development and diagnostic tool, which accesses already existing Lucene indexes and allows you to display and modify their content in several ways:
browse by document number, or by term
view documents / copy to clipboard
retrieve a ranked list of most frequent terms
execute a search, and browse the results
analyze search results
selectively delete documents from the index
reconstruct the original document fields, edit them and re-insert to the index
optimize indexes
open indexes consisting of multiple parts, and/or located on Hadoop filesystem
and much more...
Simply open the index files and you will get a peek at how properties and data were tokenized.
As reported in this post, it can easily be used for Solr indexes as well.

Index file content and custom metadata separately with Solr 3.3

I am doing a POC on content/text search using Solr 3.3.
I have a requirement where documents, along with their content and custom metadata, are indexed initially. After the documents are indexed and made available for searching, users can change the custom metadata of the documents. However, once a document is added to the index, its content cannot be updated. When the user updates the custom metadata, the document index has to be updated to reflect the metadata changes in the search.
But during an index update, even though the content of the file has not changed, it is re-indexed as well, which causes delays in the metadata update.
So I wanted to check if there is a way to avoid the content indexing and update just the metadata?
Or do I have to store the content and metadata in separate indexes, i.e. documentId and content in one index, and documentId and custom metadata in another? In that case, how can I query these two different indexes and return a combined result?
"if there is a way to avoid content indexing and update just the metadata" This has been covered in solr indexing and reindexing and the answer is no.
Do remember that Solr uses a very loose schema. It's like a database where everything is put into a single table. Think sparse matrices, think Amazon SimpleDB. Two Solr indexes are considered two databases, not two tables, if you had DB-like joins in mind. I just answered on this in How to start and Stop SOLR from A user created windows service.
I would enter each file as two documents (a Solr document = a DB row). Hence, for a file on "watson":
id: docs_contents_watson
type:contents
text: text of the file
and the metadata as
id:docs_metadata_watson
type:metadata
author:A J Crown
year:1984
To search the contents of a document:
http://localhost:8080/app/select?q=type:contents AND text:"on a dark lonely night"
To do metadata searches:
http://localhost:8080/app/select?q=type:metadata AND year:1984
Note the type:xx.
This may be a kludge (an implementation that can cause headaches in the long run). Fellow SO'ers, please critique this.
We did try this and it should work. Take a snapshot of what you have, basically the SolrInputDocument object, before you send it to Lucene. Compress and serialize the object, and then assign it to one more field in your schema. Make that field a binary field.
So when you want to update this information in one of the fields, just fetch the binary field, deserialize it, append/update the values of the fields you are interested in, and re-feed it to Lucene.
Never forget to store the XML that contains the text extracted by Tika (which is used for search/indexing) as one of the fields inside the SolrInputDocument.
The only negative: your index size will grow a little bit, but you will get what you want without re-feeding the data.
