Use Solr Schemaless feature without automatically adding unknown fields to managed-schema

I have different datasources that upload different documents to Solr Sink. If two datasources send a field with the same name but different data types (say, integer and double), then indexing of the second field fails, because the data type of the first field has already been added to managed-schema.
All I need is for both fields to be indexed properly, as they were in Solr 4.x.
Since field names arrive at runtime, please suggest a solution that would work for me. I suppose it needs a change in solrconfig.xml, but I could not find the required setting.

How was your Solr configured to work in 4.x? You can still do it exactly the same way in Solr 6.
On the other hand, the schemaless feature defines the type mapping the first time it sees a field; it has no way to know what will come in the future. That's also why all auto-definitions are multiValued.
However, if you want to deal with the specific problem of an integer mapping being too narrow, you can change the definition of the UpdateRequestProcessor chain that actually does the mapping: just merge the integer/long/number mappings into one final tdoubles type, as in the sketch below.
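For example, the stock "add-unknown-fields-to-the-schema" chain in a Solr 6 solrconfig.xml contains an AddSchemaFieldsUpdateProcessorFactory with one typeMapping per numeric class. A merged version might look like this (a sketch only; the exact chain name, field type names, and surrounding processors depend on your configset):

    <processor class="solr.AddSchemaFieldsUpdateProcessorFactory">
      <str name="defaultFieldType">strings</str>
      <lst name="typeMapping">
        <str name="valueClass">java.lang.Boolean</str>
        <str name="fieldType">booleans</str>
      </lst>
      <lst name="typeMapping">
        <str name="valueClass">java.util.Date</str>
        <str name="fieldType">tdates</str>
      </lst>
      <!-- merged numeric mapping: Integer, Long and any other Number all
           become tdoubles, so a field first seen with integer values can
           still accept doubles later -->
      <lst name="typeMapping">
        <str name="valueClass">java.lang.Integer</str>
        <str name="valueClass">java.lang.Long</str>
        <str name="valueClass">java.lang.Number</str>
        <str name="fieldType">tdoubles</str>
      </lst>
    </processor>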

Related

Solr: Normalize date field

I am new to Apache Solr and exploring some use cases that could potentially be applicable to my application.
In one of the use cases, I have multiple MongoDB instances pushing data to Solr via mongo-connector. I am able to do so by running two instances of mongo-connector against two different Mongo instances, using the same Solr core.
My question is: how do I handle a situation where a field in a Mongo collection, say "startTime", is of Date type in one Mongo instance while the other treats it as long? I want this field to be treated as long in Solr. Does Solr provide any sort of auto-conversion, or will I have to write my own analyzer?
If you want both values to normalize to the same form, you should do that in an UpdateRequestProcessor (defined in solrconfig.xml). There are quite a number of them for various purposes, including date parsing. In fact, the schemaless mode is implemented by a chain of URPs, so that's an example you can review.
To process different Mongo instances in different ways, you can just define separate Update Request Handler endpoints (in solrconfig.xml again) and set up different processing for each. Use shared definitions to avoid duplicating what's common (using a processor reference, as in the schemaless definition linked above).
It may be more useful to normalize to dates rather than back from dates, as Solr allows more interesting searches that way, such as Date Math.
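A sketch of what separate endpoints could look like in solrconfig.xml (the handler paths and chain names here are made up for illustration; ParseDateFieldUpdateProcessorFactory and its format list are the real mechanism, though the accepted date patterns vary by Solr version):

    <!-- One chain per Mongo instance; only the date-parsing one is shown. -->
    <updateRequestProcessorChain name="parse-starttime">
      <!-- turns string values like "2017-03-04T10:15:30Z" into Date objects -->
      <processor class="solr.ParseDateFieldUpdateProcessorFactory">
        <arr name="format">
          <str>yyyy-MM-dd'T'HH:mm:ss'Z'</str>
        </arr>
      </processor>
      <processor class="solr.LogUpdateProcessorFactory"/>
      <processor class="solr.RunUpdateProcessorFactory"/>
    </updateRequestProcessorChain>

    <!-- Each mongo-connector instance posts to its own endpoint. -->
    <requestHandler name="/update/mongo-a" class="solr.UpdateRequestHandler">
      <lst name="defaults">
        <str name="update.chain">parse-starttime</str>
      </lst>
    </requestHandler>
    <requestHandler name="/update/mongo-b" class="solr.UpdateRequestHandler">
      <lst name="defaults">
        <!-- a second chain (not shown) for the instance that sends longs -->
        <str name="update.chain">pass-through-longs</str>
      </lst>
    </requestHandler>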

Solr schema: Is it safe to update schema type definitions when they are not used?

When is it safe to update the Solr schema and keep the existing indexes?
I am upgrading Solr to version 7.2 now, and some type definitions in my old schema generate warnings in the log like:
Solr loaded a deprecated plugin/analysis class [solr.CurrencyField]. Please consult documentation how to replace it accordingly.
Is it safe to update this type definition to the new solr.CurrencyFieldType and keep my existing indexes:
When the type is not used in the schema for document properties.
When the type is used in the schema for document properties.
Generally, what schema change will definitely require a total reindex of the documents?
If the field isn't being used, you can do anything you like with it - the schema is Solr's way of enforcing validation and exposing certain low-level Lucene settings for field configuration. If you've never indexed any content using the field, then you can update the field definition (or, maybe better, remove it if you're not using it) without reindexing.
However, if you change the definition of an existing field to a different type (for example, when the int type changed from being a TrieInt to a Point field), the general rule is that you'll have to reindex to avoid weird, untraceable issues.
For TextFields, if you're not changing the field type - i.e. the field is still of the same type, but you're changing the analysis or tokenization for the field - you might not have to reindex. If the change is only to the query part of the analysis chain, no reindexing is needed. If the change is to the indexing part (or both), it depends on what the change is: the existing tokens stored in the index won't change, so if you have indexed content without lowercasing it and then add, for example, a lowercase filter for querying, you won't get a match for any existing tokens that contain uppercase. In that case you'll have to reindex to make your collection work properly again.
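For the CurrencyField warning specifically, a replacement definition along the lines of the Solr 7 reference guide might look like this (a sketch; the dynamic-field suffixes follow the documented convention, but check them against your own schema before relying on this):

    <!-- solr.CurrencyFieldType stores the amount and currency code in two
         dynamic fields selected by the suffixes below. -->
    <fieldType name="currency" class="solr.CurrencyFieldType"
               amountLongSuffix="_l_ns" codeStrSuffix="_s_ns"
               defaultCurrency="USD" currencyConfig="currency.xml"/>
    <dynamicField name="*_l_ns" type="plong"  indexed="true" stored="false"/>
    <dynamicField name="*_s_ns" type="string" indexed="true" stored="false"/>

Even if the type name stays the same, the on-disk representation differs from solr.CurrencyField, so if any documents were indexed with the old type, a full reindex is the safe path.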

Solr atomic update with stored copyField destination

I would like to use Solr atomic updates in combination with some stored copyField destination fields, which is not a recommended combination - so I wish to understand the risks.
The Solr documentation for Atomic Updates says (my emphasis):
The core functionality of atomically updating a document requires that all fields in your schema must be configured as stored (stored="true") or docValues (docValues="true") except for fields which are <copyField/> destinations, which must be configured as stored="false". Atomic updates are applied to the document represented by the existing stored field values. All data in copyField destinations fields must originate from ONLY copyField sources.
However, I have some copyField destinations that I would like to set stored=true so that highlighting works correctly for them (see this question, for example).
I need atomic updates so that an (unrelated) field can be modified by another process, without losing data indexed by my process.
The documentation warns that:
If destinations are configured as stored, then Solr will attempt to index both the current value of the field as well as an additional copy from any source fields. If such fields contain some information that comes from the indexing program and some information that comes from copyField, then the information which originally came from the indexing program will be lost when an atomic update is made.
But what does that mean? Can someone give an example that demonstrates this information-loss problem?
I am unsure what is meant by "some information that comes from the indexing program and some information that comes from copyField", in concrete terms.
Is it safe to make one copyField destination stored, whilst atomically updating other fields, or vice versa? I have tried this out via the Solr Admin console, and have not been able to demonstrate any issues, but would like to be clear on what circumstances would trigger the problem.
It means that on each atomic update the copyField destination gets an additional value copied from the source field, effectively turning it into a multi-valued field. If the destination isn't defined as multiValued, the field will no longer be of the right type and no further updates can be made to it until you reindex everything. I'm currently struggling with this exact issue: we need the values to come back as part of the response for the copyField, which means it needs to be stored, but doing so breaks the structure of the document when we make an atomic update on a different field.
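A minimal sketch of how the duplication shows up (the schema fragment and field names here are hypothetical):

    <!-- Hypothetical schema fragment: "title" is copied into a destination
         that is stored, against the documentation's advice. -->
    <field name="title"     type="string"       indexed="true" stored="true"/>
    <field name="title_txt" type="text_general" indexed="true" stored="true"
           multiValued="true"/>
    <field name="views"     type="plong"        indexed="true" stored="true"/>
    <copyField source="title" dest="title_txt"/>

After indexing {"id":"1", "title":"Solr"}, title_txt holds one stored value, "Solr". An atomic update such as {"id":"1", "views":{"set":42}} rebuilds the document from its stored values, so the stored title_txt value is written back and the copyField fires again on title, leaving title_txt as ["Solr", "Solr"]. Each subsequent atomic update adds another copy; if title_txt were single-valued, the update would fail instead.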

Compatible collections - SOLR 4 / SOLRCloud

I am trying to figure out what Solr means by a compatible collection, in order to be able to run the following query:
Query all shards of multiple compatible collections, explicitly specified:
http://localhost:8983/solr/collection1/select?collection=collection1_NY,collection1_NJ,collection1_CT
Does this mean that the schema.xml must be exactly the same between those collections, or just partially the same (sharing the fields used to satisfy the query)?
cheers,
/Marcin
OK, so I have tested this myself and the schema.xml doesn't need to be exactly the same; it just needs to be partially the same, as defined by the query scope. For example, if you query for description:text, all collections/shards need to have the description field, but the other fields can be completely different.

Solr equivalent to ElasticSearch Mapping Type

ElasticSearch has Mapping Types to, according to the docs:
Mapping types are a way to divide the documents in an index into logical groups. Think of it as tables in a database.
Is there an equivalent in Solr for this?
I have seen that some people include a new field in the documents and later use this field to limit the search to a certain type of documents, but as I understand it, those documents have to share the schema, while (I believe) ElasticSearch mapping types don't. So, is there an equivalent?
Or, maybe a better question,
If I have a multiple document types and I want to limit searches to a certain document type, which one should offer a better solution?
I hope this question makes sense, since I'm new to both of them.
Thanks!
You can configure multicore solr:
http://wiki.apache.org/solr/CoreAdmin
Maybe something has changed since Solr 4.0 and it's easier now; I didn't look at it since I switched to ElasticSearch. Personally, I find the ElasticSearch indexes/types system much better than that.
In Solr 4+:
If you are planning to do faceting or any other calculations across multiple types, then create a single schema with a differentiator field. Then, on your business/mapping/client layer, just define only the fields you actually want to look at. Use custom search handlers with the 'fl' parameter to return only the fields relevant to that object (see the sketch below). Of course, that means that all those single-type-only fields cannot be compulsory.
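For instance, a per-type search handler might pin the filter and field list like this (the handler name, field names, and the doc_type differentiator field are all hypothetical):

    <!-- Hypothetical handler for "product" documents: every query gets the
         type filter appended and returns only product-relevant fields. -->
    <requestHandler name="/select-products" class="solr.SearchHandler">
      <lst name="defaults">
        <str name="fl">id,name,price</str>
      </lst>
      <lst name="appends">
        <str name="fq">doc_type:product</str>
      </lst>
    </requestHandler>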
If your document types are completely disjoint, you can create a core/collection per type, each with its own definition file. You have full separation, but still have only one Solr server to maintain.
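With recent Solr versions, each such core can be created from the command line, e.g. bin/solr create -c products -d my_product_configs (the core name and configset directory here are hypothetical).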
I have seen that some people include a new field in the documents and later on they use this new field to limit the search to a certain type of documents, but as I understand it, they have to share the schema and (I believe) ElasticSearch Mapping Type doesn't.
You can do exactly this in Solr: add a field and use it to filter.
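For example, with a hypothetical doc_type field, a query restricted to one document type could look like:
http://localhost:8983/solr/collection1/select?q=title:solr&fq=doc_type:product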
It is correct that mapping types in ElasticSearch do not have to share the same schema, but under the hood ElasticSearch uses only ONE schema for all mapping types, so technically it makes no difference. In fact, the mapping type is mapped to an internal schema field.
