Howto: Reload entities in Solr

Let's say you have a Solr core with multiple entities in your document set. In my case the reason for that is that the index is fed by SQL queries and I don't want to deal with multiple cores. The downside is that if you add or change one entity's configuration, you eventually have to re-index the whole shop, which can be time-consuming.
There is a way to delete and re-index a single entity, and this is how it works:
Prerequisite: your index entries have to have a field that reflects the entity name. You can set that either via a constant in your SQL statement or by using the TemplateTransformer:
<field column="entityName" name="entityName" template="yourNameForTheEntity"/>
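For context, a minimal data-config.xml entity using the TemplateTransformer might look like this (the entity name, SQL query, and table are placeholders, not from the original post):
<entity name="yourNameForTheEntity"
        transformer="TemplateTransformer"
        query="SELECT id, title FROM your_table">
  <!-- stamp every row with a constant entity name -->
  <field column="entityName" name="entityName" template="yourNameForTheEntity"/>
</entity>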
You can use this name to remove all of the entity's items from the index via the Solr admin UI. Go to Documents and submit:
Request-Handler: /update
Document Type: JSON
Document(s): {"delete":{"query":"entityName:yourNameForTheEntity"}}
After submitting the document, all related items are gone, and you can verify that by running a query on the Query page:
{!term f=entityName}yourNameForTheEntity
Then go to the Dataimport page to re-load your entity. Uncheck the Clean checkbox, select your entity and hit Execute.
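If you prefer to script this step instead of clicking through the UI, the Dataimport page just calls the dataimport handler; something like this should be equivalent (core name is a placeholder):
http://localhost:8983/solr/yourcore/dataimport?command=full-import&entity=yourNameForTheEntity&clean=false&commit=true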
After the indexing is complete, you can go back to the query page and check the result.
That's it.
Have fun,
Christian

Related

Reloading External file field with server up

I am trying to implement an external file field in order to change ranking values in Solr.
I've defined a field and field type in the schema and, in solrconfig.xml, below the <query> tag, created the external file and added the reload listeners as described in the ref guide:
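(For reference, the reload-listener config from the ref guide is presumably along these lines, registering ExternalFileFieldReloader on searcher events:)
<listener event="newSearcher" class="org.apache.solr.core.ExternalFileFieldReloader"/>
<listener event="firstSearcher" class="org.apache.solr.core.ExternalFileFieldReloader"/>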
After server start-up, I'm able to sort the documents based on that previously created field. However, when I change the values while the server is up and make a new search query, I can't see the updated rank list (nor the updated rank scores).
I also tried adding a reload request handler as suggested in another post and tried a force commit (http://HOST:PORT/solr/update?commit=true), but it says:
DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit.
DirectUpdateHandler2 end_commit_flush
Any suggestions?
Using ExternalFileFields for scoring is really not that useful any more, since Solr and Lucene now support in-place updates for values that use docValues.
You can then use those fields directly from your document for scoring, and you can update them without having to update the whole document. That way you don't have to reload anything externally, and your caches can be managed automagically by Solr.
There are three conditions a field has to meet for in-place updates (that being said, atomic updates can also be used, but those require all your fields to be stored):
An atomic update operation is performed using this approach only when the fields to be updated meet these three conditions:
- they are non-indexed (indexed="false"), non-stored (stored="false"), single-valued (multiValued="false") numeric docValues (docValues="true") fields;
- the _version_ field is also a non-indexed, non-stored, single-valued docValues field; and,
- copy targets of updated fields, if any, are also non-indexed, non-stored, single-valued numeric docValues fields.
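As a rough sketch, a field qualifying for in-place updates could be declared and updated like this (the field name, type and core name are assumptions, not from the original post):
<field name="popularity" type="pfloat" indexed="false" stored="false" docValues="true"/>
curl 'http://localhost:8983/solr/yourcore/update?commit=true' -H 'Content-type:application/json' -d '[{"id":"1","popularity":{"set":3.5}}]'
Because popularity meets all three conditions above, the "set" is applied in place without rewriting the rest of the document.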

Solr document disappears when I update it

I am trying to update existing documents in a (Sentry-secured) Solr collection. The updates are accepted by Solr, but when I query, the document seems to have disappeared from the collection.
What is going on?
I am using Cloudera (CDH) 5.8.3, and Sentry with document-level access control enabled.
When using document-level access control, Sentry uses a field (whose name is defined in solrconfig.secure.xml, but the default is sentry_auth) to determine which roles can see that document.
If you update a document but forget to supply a sentry_auth field, then the updated document doesn't belong to any roles, so nobody can see it - it becomes essentially invisible! This is easily done, because the sentry_auth field is typically not a stored field, so it won't be returned by any queries.
You therefore cannot just retrieve a document, modify a field, then update the document - you need to know which roles that document belongs to, so you can supply a properly-populated sentry_auth field.
You can make the sentry_auth field a required field in the Solr schema, which will prevent you from accidentally omitting it.
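A hedged sketch of such a declaration (the type and flags here are assumptions; match them to your actual Sentry setup):
<field name="sentry_auth" type="string" indexed="true" stored="false" multiValued="true" required="true"/>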
However, this won't prevent you from supplying a blank sentry_auth field (or supplying incorrect roles), either of which will also make the document "disappear".
Also note that you can update a document that you do not have document-level access to, provided you have write access to the collection as a whole and you have the ID of the document. This means that users can (deliberately or accidentally) overwrite or delete documents that they cannot see. This is a design choice, made so that users cannot find out whether a particular document ID exists when they do not have document-level access to it.
See the Cloudera documentation:
http://blog.cloudera.com/blog/2014/07/new-in-cdh-5-1-document-level-security-for-cloudera-search/
https://www.cloudera.com/documentation/enterprise/5-6-x/topics/search_sentry_doc_level.html
https://www.cloudera.com/documentation/enterprise/5-9-x/topics/search_sentry.html

Solr database with multiple schemas

My Solr DB has multiple schemas, as below:
***Part of Schema 1***
<field1>
<field2>
<field3>
<field4>
<field5>
***Part of Schema 2***
<field6>
<field7>
<field8>
When I do a q=*:*, I get <field6>, <field7> and <field8>, but not the remaining fields.
I am only able to select fields 1-5 when I put field1:'value' in the q parameter.
Is there a way to know that 6-8 are part of schema-2 and 1-5 are part of schema-1?
Depending on your search handler (like (e)dismax), you can define the default search fields.
Or you can use the qf parameter to define the fields you'd like to search in: http://wiki.apache.org/solr/ExtendedDisMax#qf_.28Query_Fields.29
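For example, with edismax (core and field names assumed for illustration):
http://localhost:8983/solr/mycore/select?defType=edismax&q=value&qf=field1+field2+field3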
If you'd like to separate your DB schemas in Solr, so that fields from schema-1 don't know about the fields from schema-2, you can use two different Solr cores: one for each schema.
Is there a way to know that 6-8 are part of schema-2 and 1-5 are part of schema-1?
As far as I know, Solr does not support DB schemas. A field inside Solr is a field. There is no way to add additional (meta) information about where a field is coming from, so you will not be able to filter your queries depending on their origin - except by defining the query fields, or by separating the schemas into cores, or something like that.

How do you update data in Solr 4?

We need to update the index of Solr 4 but are getting some unexpected results. We run a C# program that uses SolrNet to do an AddRange(). In this process, we're adding new documents and also trying to update existing ones.
We're noticing that some records' fields get updated with the latest data, while others still show the old information. Should we be using the information indicated in the documentation?
The documentation indicates we can set an update="set|add|inc" attribute on the field. If we'd like the existing record to be updated, should we use set? Also, when we delete a field, do we need to shut down and restart Solr to have it removed, or set null="true"?
Can you point us to some good information on doing updates to Solr data? Thank you.
The documentation reference that you list describes the parameters for Atomic Updates in Solr 4, which are currently not supported in SolrNet - see issue 199 for more details.
Until this support has been added to SolrNet, your only option for updating documents in the index is to resend the entire document (object in C#) with the required updated/deleted fields set appropriately. Internally, Solr will re-add the document to the index with the updated fields.
Also, when you are adding/updating documents in the index, these changes will not be visible to queries until a commit has been issued. I would recommend using the CommitWithin option of AddParameters to let Solr handle commits internally; this is described in detail in the SolrWiki - CommitWithin.
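For what it's worth, CommitWithin maps to Solr's commitWithin update parameter (in milliseconds) at the HTTP level; a sketch with a placeholder document:
curl 'http://localhost:8983/solr/update?commitWithin=10000' -H 'Content-type:application/json' -d '[{"id":"1","name":"steve"}]'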

Can SOLR perform an UPSERT?

I've been attempting to do the equivalent of an UPSERT (insert, or update if it already exists) in Solr. I only know what does not work, and the Solr/Lucene documentation I have read has not been helpful. Here's what I have tried:
curl 'localhost:8983/solr/update?commit=true' -H 'Content-type:application/json' -d '[{"id":"1","name":{"set":"steve"}}]'
{"responseHeader":{"status":409,"QTime":2},"error":{"msg":"Document not found for update. id=1","code":409}}
I do up to 50 updates in one request, and a request may contain the same id with exclusive fields (title_en and title_es, for example). If there were a way of querying whether or not a list of IDs exists, I could split the data and perform separate insert and update commands... This would be an acceptable alternative, but is there already a handler that does this? I would like to avoid writing any in-house routines at this point.
Thanks.
With Solr 4.0 you can do a partial update, sending just the fields that have changed while keeping the rest of the document the same. The id must match.
Solr does not support UPSERT mechanics out of the box. You can create a record or you can update a record, and the syntax is different for each.
And if you update a record, you must make sure all your other pre-inserted fields are stored (not just indexed). Under the covers, an update creates a completely new record pre-populated with the previously stored values. But that functionality is buried very deep (probably in Lucene itself).
Have you looked at DataImportHandler? You reverse the control flow (start from Solr), but it does have support for checking which records need to be updated and which records need to be created.
Or you can just run a Solr query like http://solr.example.com:8983/solr/select?q=id%3A(ID1+ID2+ID3)&fl=id&wt=csv where you ask Solr to look for your ID records and return only the IDs of the records it does find. Then you could post-process that to segment your updates and inserts.
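A sketch of that split, using placeholder IDs and the same update endpoint as above:
# 1. Ask Solr which of the candidate IDs already exist:
curl 'localhost:8983/solr/select?q=id%3A(1+2+3)&fl=id&wt=csv'
# 2. For IDs that exist, send atomic updates with "set":
curl 'localhost:8983/solr/update?commit=true' -H 'Content-type:application/json' -d '[{"id":"1","name":{"set":"steve"}}]'
# 3. For IDs that do not exist, add full documents:
curl 'localhost:8983/solr/update?commit=true' -H 'Content-type:application/json' -d '[{"id":"4","name":"steve"}]'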
