Is there anything instead of "add-field" for the schema-default.json file in Solr when we want to check if a field exists?

I am new to Solr and I am struggling to find anything in the documentation like "add-field-if-not-exists". According to the documentation:
"The add-field command adds a new field definition to your schema. If a field with the same name exists an error is thrown."
so any time I use my updated schema with a new field, I get a bunch of errors logged for fields that are already there. Is there some kind of alternative to the "add-field" command? Or should Solr work with different versions of the schema-default.json file, so that only the latest changes appear on each schema update?
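There is no "add-field-if-not-exists" command, but a common workaround is to ask the Schema API whether the field exists first, and only send add-field when it is missing. A minimal sketch in Python (the collection name `mycollection` and the field definition are placeholders; it assumes the Schema API answers 404 for an unknown field, which is its usual behaviour):

```python
import json
import urllib.error
import urllib.request

SCHEMA_URL = "http://localhost:8983/solr/mycollection/schema"  # placeholder collection

def field_exists(name):
    """Return True if the Schema API already knows this field."""
    try:
        urllib.request.urlopen(f"{SCHEMA_URL}/fields/{name}")
        return True
    except urllib.error.HTTPError as e:
        if e.code == 404:  # unknown fields come back as 404
            return False
        raise

def add_field_command(field_def):
    """Build the JSON body for an add-field Schema API call."""
    return json.dumps({"add-field": field_def}).encode()

def ensure_field(field_def):
    """Issue add-field only when the field is missing, avoiding the error."""
    if field_exists(field_def["name"]):
        return
    req = urllib.request.Request(
        SCHEMA_URL,
        data=add_field_command(field_def),
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
```

This keeps repeated schema updates idempotent: re-running the script against a collection that already has the field is a no-op instead of an error.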

Related

Solr document disappears when I update it

I am trying to update existing documents in a (Sentry-secured) Solr collection. The updates are accepted by Solr, but when I query, the document seems to have disappeared from the collection.
What is going on?
I am using Cloudera (CDH) 5.8.3, and Sentry with document-level access control enabled.
When using document-level access control, Sentry uses a field (whose name is defined in solrconfig.secure.xml, but the default is sentry_auth) to determine which roles can see that document.
If you update a document, but forget to supply a sentry_auth field, then the updated document doesn't belong to any roles, so nobody can see it - it becomes essentially invisible! This is easily done, because the sentry_auth field is typically not a stored field, so won't be returned by any queries.
You therefore cannot just retrieve a document, modify a field, then update the document - you need to know which roles that document belongs to, so you can supply a properly-populated sentry_auth field.
You can make the sentry_auth field a "required" field, in the Solr schema, which will prevent you from accidentally omitting it.
However, this won't prevent you from supplying a blank sentry_auth field (or supplying incorrect roles), either of which will also make the document "disappear".
Also note that you can update a document that you do not have document-level access to, provided you have write-access to the collection as a whole, and you have the ID of the document. This means that users can (deliberately or accidentally) over-write or delete documents that they cannot see. This is a design choice, made so that users cannot find out whether a particular document ID exists, when they do not have document-level access to it.
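In practice this means every update must re-send the document's roles along with the changed fields. A sketch of building such an update body (Python; the field name `sentry_auth` is the default mentioned above, and the role list and field names are illustrative):

```python
import json

def build_update_doc(doc_id, changed_fields, roles):
    """Re-send a document with its sentry_auth roles included, so the
    updated version stays visible to those roles after the update."""
    doc = {"id": doc_id, "sentry_auth": roles}
    doc.update(changed_fields)  # the fields being changed in this update
    return json.dumps([doc])
```

Because sentry_auth is typically not stored, the `roles` argument has to come from wherever you track document permissions, not from a Solr query.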
See the Cloudera documentation:
http://blog.cloudera.com/blog/2014/07/new-in-cdh-5-1-document-level-security-for-cloudera-search/
https://www.cloudera.com/documentation/enterprise/5-6-x/topics/search_sentry_doc_level.html
https://www.cloudera.com/documentation/enterprise/5-9-x/topics/search_sentry.html

Solr - can't parse files using tika nested entity

I'm trying to index some data from database. There are some linked documents for each page represented in database table.
I noticed that indexing generally works, but the 'text' field from Tika is completely ignored and not fetched at all, with no meaningful exception in the logs.
My data config: http://pastebin.com/XdwenPTE, my schema: http://pastebin.com/zXEuFTHE, my solr config: http://pastebin.com/qLiuT0tq
Can you look at my configs and tell me if I omitted anything? When I query the indexed data, the 'text' field isn't even present - why?
[edit]
I changed file path passed to tika to:
url="${page_resource_list.FILE_PATH}"
But the file content is still not indexed at all. Any ideas? I have some exceptions about files not being found (which is fine, because some files are missing), but there are no exceptions about problems with existing files. And Tika didn't index anything.
It seems to be the same problem as described here: Solr's TikaEntityProcessor not working - but is this really not fixed yet?
The entity reference for FILE_PATH is ${page_resource_list.FILE_PATH}, not ${page_content.FILE_PATH} (which only has CONTENT defined as a column).
A LogTransformer can also help by giving you better debug information about the actual content of your fields while indexing.
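As a sketch (the entity and column names are assumptions based on the question, not the actual pastebin config), a LogTransformer can be attached to the Tika entity so each extracted 'text' value is written to the Solr log during import:

```xml
<!-- Hypothetical fragment: log what Tika actually extracts per file,
     so silently-empty 'text' values become visible in the log. -->
<entity name="file_content"
        processor="TikaEntityProcessor"
        url="${page_resource_list.FILE_PATH}"
        format="text"
        transformer="LogTransformer"
        logTemplate="Tika text for ${page_resource_list.FILE_PATH}: ${file_content.text}"
        logLevel="info">
  <field column="text" name="text"/>
</entity>
```

If the logged value is empty, the problem is in extraction (file path or Tika); if it is populated but missing from query results, the problem is in the schema mapping for the 'text' field.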

conversion of DateField to TrieDateField in Solr

I'm using Apache Solr for powering the search functionality in my Drupal site using a contributed module for drupal named ApacheSolr Search Integration. I'm pretty novice with Solr and have a basic understanding of it, hence wish to convey my apologies in advance if this query sounds outrageous.
I have a date field added through one of drupal's hooks named ds_myDate which I initially used for sorting the search results. I decided to use date boosting, so that the search results are displayed based on relevancy and boosted by their date rather than merely being displayed in descending order of date. Once I had updated my hook to implement this by adding a boost field as recip(ms(NOW/HOUR,ds_myDate),3.16e-11,1,1) I got an HTTP 400 error stating
Can't use ms() function on non-numeric legacy date field ds_myDate
Googling for the same suggested that I use a TrieDateField instead of the Legacy DateField to prevent this error. Adding a TrieDate field named tds_myDate following the suggested naming convention and implementing the boost as recip(ms(NOW/HOUR,tds_myDate),3.16e-11,1,1) did effectively achieve the boosting. However this requires me to reindex all the content (close to 500k records) to populate the new TrieDate field so that I may be able to use it effectively.
I'd like to know if there's a more effective workaround than re-indexing all my content, such as converting my ds_myDate to a TrieDate field the way one would run an ALTER query on a MySQL table column to change its type. Since I'm unfamiliar with how Solr works, I'd like to know if such an option is feasible and what the right thing to do would be in this case.
You may be able to achieve it by doing a partial update, but for that you need to be on Solr 4+ and storing all indexed fields.
Here is how I would go with this:
Make sure version of Solr is 4+
Make sure all indexed fields are stored (requirement for partial updates)
If the above two conditions are met, write a script (e.g. in PHP) which does the following:
1) Iterate through the full Solr index, and for each doc:
   a) read the value stored in the ds_myDate field
   b) convert it to TrieDateField format
   c) push it to Solr, via a partial update to only the tds_myDate field (see sample query)
Sample query:
curl 'localhost:8983/solr/update?commit=true' -H 'Content-type:application/json' -d '[{"id":"$id","tds_myDate":{"set":"$converted_Val"}}]'
For more details on partial updates: http://solr.pl/en/2012/07/09/solr-4-0-partial-documents-update/
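The steps above can be sketched in Python as well (the core URL and paging are assumptions; stored DateField values already come back in the ISO-8601 form that TrieDateField accepts, so the "conversion" here is a straight copy — for ~500k docs you would want larger pages or deep paging):

```python
import json
import urllib.request

SOLR = "http://localhost:8983/solr"  # assumed core URL

def make_partial_update(doc_id, date_value):
    """Build the JSON body for a partial update that only sets tds_myDate."""
    return json.dumps([{"id": doc_id, "tds_myDate": {"set": date_value}}])

def migrate(rows=500):
    """Iterate the whole index page by page, copying ds_myDate to tds_myDate."""
    start = 0
    while True:
        url = (f"{SOLR}/select?q=*:*&fl=id,ds_myDate&wt=json"
               f"&start={start}&rows={rows}")
        docs = json.load(urllib.request.urlopen(url))["response"]["docs"]
        if not docs:
            break
        for doc in docs:
            if "ds_myDate" not in doc:  # skip docs without the legacy field
                continue
            body = make_partial_update(doc["id"], doc["ds_myDate"]).encode()
            req = urllib.request.Request(
                f"{SOLR}/update?commit=true", data=body,
                headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req)
        start += rows
```

Committing once per document is slow; in practice you would commit once at the end or rely on commitWithin.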
Unfortunately, once a document has been indexed a certain way and you change the schema, you cannot have the new schema changes be applied to existing documents until those documents are re-indexed.
Please see this previous question - Does Schema Change need Reindex for additional details.

How do you update data in Solr 4?

We need to update the index of Solr 4 but are getting some unexpected results. We run a C# program that uses SolrNet to do an AddRange(). In this process, we're adding new documents and also trying to update existing ones.
We're noticing that some records' fields get updated with the latest data, while others still show the old information. Should we be using the information indicated in the documentation?
The documentation indicates we can set an update="set|add|inc" on the field. If we'd like the existing record to be updated, should we use set? Also, when we delete a field, to have it removed, do we need to shut down Solr and restart? Or set null="true"?
Can you point us to some good information on doing updates to Solr data? Thank you.
The documentation reference that you list describes the parameters for Atomic Updates in Solr 4, which are currently not supported in SolrNet - see issue 199 for more details.
Until this support has been added to SolrNet, your only option for updating documents in the index is to resend the entire document (object in C#) with the required updated/deleted fields set appropriately. Internally, Solr will re-add the document to the index with the updated fields.
Also, when you are adding/updating documents in the index, these changes will not be visible to queries against the index until a commit has been issued. I would recommend using the CommitWithin option of AddParameters to let Solr handle this internally; this is described in detail in the SolrWiki - CommitWithin.
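At the HTTP level, CommitWithin is just a parameter on the /update request; a tiny sketch of building such a URL (Python; the base URL is a placeholder):

```python
def update_url(base, commit_within_ms):
    """Build an /update URL that asks Solr to make the added/updated
    documents visible within the given number of milliseconds."""
    return f"{base}/update?commitWithin={commit_within_ms}"
```

This is what SolrNet's CommitWithin option amounts to on the wire: Solr schedules the commit itself instead of the client issuing explicit commit requests.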

Update document field with solrj

I want to edit a document field in Solr, for example the author name, so I use the following code in SolrJ:
params.set("literal.author","anaconda")
but the author field is multivalued="true" in the schema, and because of that "anaconda" does not replace the previous name but is appended to the end of the author values. Also, if I omit the multiValued attribute or set it to false, a bad-request exception occurs when re-indexing the file with the new author field. How can I solve this problem and delete or modify the previous document field in SolrJ?
Or is there some config I missed in the schema?
thanks
The only option I know of would be to query the full document (all fields, using the &fl=* parameter) into a local construct with SolrJ, update the appropriate field(s), and then submit the entire document back to Solr.
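The same fetch-modify-resend cycle can be sketched over the plain HTTP API (Python; the core URL and field names are placeholders, and internal fields such as _version_ must be stripped before re-sending):

```python
import json
import urllib.parse
import urllib.request

SOLR = "http://localhost:8983/solr"  # placeholder core URL

def replace_field(doc, field, new_values):
    """Overwrite one field locally before re-sending the whole document.
    Internal Solr fields (underscore-prefixed) must not be sent back."""
    updated = {k: v for k, v in doc.items() if not k.startswith("_")}
    updated[field] = new_values
    return updated

def update_author(doc_id, authors):
    # Fetch the full stored document (all fields need to be stored for this).
    q = urllib.parse.urlencode({"q": f"id:{doc_id}", "fl": "*", "wt": "json"})
    doc = json.load(urllib.request.urlopen(f"{SOLR}/select?{q}"))["response"]["docs"][0]
    # Re-send it with the multivalued author field fully replaced.
    body = json.dumps([replace_field(doc, "author", authors)]).encode()
    req = urllib.request.Request(f"{SOLR}/update?commit=true", data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
```

Because the whole document is re-added, the new author list replaces the old one instead of being appended to it.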
No, there is no way to update a specific field of a document in Solr, nor through any of its client APIs.
EDIT: With Solr 4.0 it is possible to partially update documents with certain fields.
This post should be the correct answer to your question (if you are using SOLR 4.x)
For Solr 4.0 you are able to update a single field on a document, but note that version is an ALPHA release, if that concerns you.
As for the update itself, I think it is only possible via cURL; I didn't find any way to update a single field on a doc from the Java side with SolrJ.
You have two options:
As stated in other answers, you can query for the original document, update the field, and then re-save which will overwrite the original document with the new values.
Your other option is to install a nightly build of Solr, where Yonik has added a patch for updateable documents. You should keep an eye on https://issues.apache.org/jira/browse/SOLR-139 as this patch is pretty new and still being worked on.
