Just as we can define an index pattern in Elasticsearch and then keep creating new indices with the same mapping, is there any way to create dynamic document types in Vespa?
Our use case is: depending on the value of one of the keys, we need to put the document into a specific document type, so that while searching we can search only the document type corresponding to that key's value.
Vespa has no dynamic document type support; document types need to be configured explicitly in the application package.
If you have 5M documents with key=foo, 500M documents with key=bar and 505M docs total, searching for key=foo will quickly restrict the search to only those documents matching key=foo (5M).
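As a rough sketch of how that can be handled within a single document type (the schema name, field names and query terms below are just examples): model the key as a fast-search attribute and filter on it in the query.

schema mydoc {
    document mydoc {
        # the discriminating key; fast-search makes filtering on it cheap
        field key type string {
            indexing: attribute | summary
            attribute: fast-search
        }
        # the searchable content
        field content type string {
            indexing: index | summary
        }
    }
}

A query restricted to key=foo would then look like:

select * from sources * where key contains "foo" and content contains "some text"

This gives the same "search only the documents for this key" behaviour you would get from separate document types, without having to know the key values up front.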
We would like to enable Azure Search only for blob data, including its contents and the meta attributes stamped on the blob.
Is it possible to have such an Indexer & Index without any reference to a database? How are the fields of the Index specified in this case? Will the fields be the same as the meta attributes stamped on the blob?
Also, we have certain fields which may contain data in two different languages. Is it possible to add the same field twice to the Index, with a different language analyzer specified on each?
Is it possible to relate the same Indexer to two different Indexes?
Is it possible to specify more than one Storage Account Container as the data source for the same Index?
Ideally, we would like to be able to do the following:
Utilize the same Indexer in multiple Indexes
Enable the same Indexer/Index to search multiple languages (with language analyzers)
Enable an Index based only on a blob & its meta attribute data
This doc topic explains how to set up search for blob data: https://learn.microsoft.com/en-us/azure/search/search-howto-indexing-azure-blob-storage
The default dataToExtract parameter value is contentAndMetadata, meaning all text content and metadata will be indexed. You should be able to set up field mappings from the metadata and content to your index (the details are outlined in that same doc topic).
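For example, an indexer definition along these lines (the data source, index and target field names here are just placeholders) maps blob metadata and extracted content into index fields:

{
  "name": "blob-indexer",
  "dataSourceName": "blob-datasource",
  "targetIndexName": "blob-index",
  "parameters": {
    "configuration": { "dataToExtract": "contentAndMetadata" }
  },
  "fieldMappings": [
    { "sourceFieldName": "metadata_storage_name", "targetFieldName": "fileName" },
    { "sourceFieldName": "metadata_storage_path", "targetFieldName": "id", "mappingFunction": { "name": "base64Encode" } }
  ]
}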
The indexer points to the index it should output to, so I don't think it is possible to re-use the same indexer for multiple indexes; you'll have to copy it instead.
Similarly, the indexer specifies which data source it takes its data from, so there is only one data source per indexer. You'd need to aggregate your data into a single source first if you want to build an index from the data of multiple sources.
It is possible to index multiple languages in a single index by specifying the relevant analyzer for each index field. More details can be found in this topic: https://learn.microsoft.com/en-us/azure/search/search-language-support
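As a sketch (field names are made up), a single index can declare one content field per language, each with its own analyzer:

{
  "name": "blob-index",
  "fields": [
    { "name": "id", "type": "Edm.String", "key": true },
    { "name": "content_en", "type": "Edm.String", "searchable": true, "analyzer": "en.microsoft" },
    { "name": "content_fr", "type": "Edm.String", "searchable": true, "analyzer": "fr.microsoft" }
  ]
}

At query time you can then target the field that matches the language of the query (e.g. searchFields=content_fr).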
I am using Azure Search with the default indexing on data imported from unstructured files (PDF, DOC, text, image files, etc.).
I didn't create any scoring profile on the default available fields.
Almost every setting in the portal is the default. If I search for any text through the search explorer, the JSON result I get back has a very low search score.
I read about score boosting using a scoring profile. However, the terms I want to find can be in any document at any place, so how can I decide which field to weight more?
How can I generate more custom fields from these input files? Do I need to write a document parser?
I am using SDK 4.0 and C# in my bot.
Please suggest.
To use a scoring profile, the fields you are trying to boost need to be part of the index definition; otherwise the scoring mechanism won't know about them.
You mentioned using unstructured data as your source, so I assume your data does not have any stable or predictable structure. If that's the case, you probably won't be able to update your index definition to match the exact structure of every document, since different documents will likely have different, unpredictable structures. If you know which fields you want to boost, and you know how to retrieve those fields from your documents, you could update your index definition with only the fields you care about, and then use the "merge" document API to populate that field for each document.
https://learn.microsoft.com/en-us/rest/api/searchservice/addupdate-or-delete-documents
This would require you to retrieve all documents from the index, parse the data to extract the field you want to boost, and then use the merge API to update the index data with the data you extracted. Once you have this, you will be able to use that field as part of a scoring profile.
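Roughly, assuming you add a hypothetical keywords field to the index for the extracted terms (and with the key field name, id here, depending on your index), the merge call could look like:

POST https://{service}.search.windows.net/indexes/{index}/docs/index?api-version={api-version}
{
  "value": [
    { "@search.action": "merge", "id": "doc-1", "keywords": "terms extracted from this document" }
  ]
}

and the index definition then gets a scoring profile that weights that field more heavily:

"scoringProfiles": [
  { "name": "boostKeywords", "text": { "weights": { "keywords": 5, "content": 1 } } }
]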
I have different data sources that upload different documents to a Solr sink. Now if two data sources send a field with the same name but different data types (say integer & double), indexing of the second field fails because the data type of the first field has already been added to the managed-schema.
All I need is for both fields to get indexed properly, as they used to in Solr 4.x versions.
Since field names come at runtime, please suggest a solution that would work for me. I suppose it needs a change in solrconfig.xml, but I could not find the required configuration.
How was your Solr configured to work in 4.x? You can still do it exactly the same way in Solr 6.
On the other hand, the schemaless feature defines the type mapping the first time it sees a field. It has no way to know what will come in the future. That's also why all auto-definitions are multivalued.
However, if you want to deal with the specific mapping of integer being too narrow, you can change the definition of the UpdateRequestProcessor chain that actually does the mapping. Just merge the mappings for integer/long/number into one final tdoubles type.
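For example (a sketch based on the stock schemaless chain in solrconfig.xml; the chain name and field types should match your own config), you can point every numeric value class at the same wide type:

<updateRequestProcessorChain name="add-unknown-fields-to-the-schema">
  <processor class="solr.AddSchemaFieldsUpdateProcessorFactory">
    <str name="defaultFieldType">strings</str>
    <!-- map all numeric classes to one wide type so an int-first source
         and a double-first source no longer clash on the same field name -->
    <lst name="typeMapping">
      <str name="valueClass">java.lang.Integer</str>
      <str name="valueClass">java.lang.Long</str>
      <str name="valueClass">java.lang.Number</str>
      <str name="fieldType">tdoubles</str>
    </lst>
  </processor>
  <!-- the remaining processors of the stock chain (parse-*, log, run) stay as they are -->
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>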
Elasticsearch has Mapping Types too. According to the docs:
Mapping types are a way to divide the documents in an index into
logical groups. Think of it as tables in a database.
Is there an equivalent in Solr for this?
I have seen that some people include a new field in the documents and later use this new field to limit the search to a certain type of document, but as I understand it, those documents have to share the same schema, while (I believe) Elasticsearch mapping types don't. So, is there an equivalent?
Or, maybe a better question,
If I have multiple document types and I want to limit searches to a certain document type, which one offers a better solution?
I hope this question makes sense since I'm new to both of them.
Thanks!
You can configure multicore Solr:
http://wiki.apache.org/solr/CoreAdmin
Maybe something has changed since Solr 4.0 and it's easier now; I didn't look into it since I switched to Elasticsearch. Personally, I find the Elasticsearch index/type system much better than that.
In Solr 4+.
If you are planning to do faceting or any other calculations across multiple types, then create a single schema with a differentiator field. Then, on your business/mapping/client layer, define only the fields you actually want to look at. Use custom search handlers with the 'fl' parameter to return only the fields relevant to that object. Of course, that means all those single-type-only fields cannot be compulsory.
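For example (the handler and field names here are only illustrative), a per-type handler in solrconfig.xml can pin both the filter and the returned fields:

<requestHandler name="/select-books" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="fl">id,title,author</str>
  </lst>
  <lst name="invariants">
    <str name="fq">doc_type:book</str>
  </lst>
</requestHandler>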
If your document types are completely disjoint, you can create a core/collection per type, each with its own definition file. You have full separation, but still have only one Solr server to maintain.
I have seen that some people include a new field in the documents and later use this new field to limit the search to a certain type of document, but as I understand it, those documents have to share the same schema, while (I believe) Elasticsearch mapping types don't.
You can do exactly this in Solr. Add a field and use it to filter.
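For example, with a hypothetical doc_type field, a search limited to one type is just a filter query:

http://localhost:8983/solr/mycore/select?q=title:solr&fq=doc_type:article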
It is correct that Mapping Types in Elasticsearch do not have to share the same schema, but under the hood Elasticsearch uses only ONE schema for all mapping types. So technically it makes no difference. In fact, the mapping type is stored in an internal schema field.
I am doing a POC on content/text search using Solr 3.3.
I have a requirement where documents, along with their content and custom metadata, are indexed initially. After the documents are indexed and made available for searching, users can change the custom metadata of the documents. However, once a document is added to the index, its content cannot be updated. When a user updates the custom metadata, the document index has to be updated to reflect the metadata changes in the search.
But during an index update, even though the content of the file has not changed, the content is indexed again as well, which causes delays in the metadata update.
So I wanted to check if there is a way to avoid content indexing and update just the metadata.
Or do I have to store the content and metadata in separate index files, i.e. documentId and content in index1, and documentId and custom metadata in another index? In that case, how can I query these two different indexes and return a combined result?
"if there is a way to avoid content indexing and update just the metadata" This has been covered in solr indexing and reindexing and the answer is no.
Do remember that Solr uses a very loose schema. Its like a database where everything is put into a single table. Think sparse matrices, think Amazon SimpleDB. Two solr indexes are considered as two databases, not two tables, if you had DB-like joins in mind. I just answered on it on How to start and Stop SOLR from A user created windows service .
I would enter each file as two documents (a Solr document = a DB row). Hence, for a file on "watson":
id: docs_contents_watson
type:contents
text: text of the file
and the metadata as
id:docs_metadata_watson
type:metadata
author:A J Crown
year:1984
To search the contents of a document:
http://localhost:8080/app/select?q=text:"on a dark lonely night"&fq=type:contents
To do metadata searches:
http://localhost:8080/app/select?q=year:1984&fq=type:metadata
Note the type:xx filter in each query.
This may be a kludge (an implementation that can cause headaches in the long run). Fellow SO'ers, please critique this.
We did try this and it should work. Take a snapshot of what you have, basically the SolrInputDocument object, before you send it to Lucene. Compress and serialize the object, and then assign it to one more field in your schema. Make that field a binary field.
So when you want to update one of the fields, just fetch the binary field, deserialize it, append/update the values of the fields you are interested in, and re-feed it to Lucene.
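In schema.xml terms, that extra field might look something like this (raw_doc is just an example name):

<fieldType name="binary" class="solr.BinaryField"/>
<field name="raw_doc" type="binary" indexed="false" stored="true"/>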
Never forget to store, as one of the fields inside the SolrInputDocument, the XML containing the text extracted by Tika, which is used for search/indexing.
The only negative: your index size will grow a little, but you will get what you want without re-feeding the data.