Handling constantly updating fields in Solr and preventing constant index reloading?

I'm running Solr 6.3.0 with ZooKeeper/SolrCloud.
I have a 'product' core with a few integer/boolean fields that get updated constantly; a specific example would be inventory count. I currently have these working with external file fields. This setup accomplishes my goal of not having Solr constantly re-indexing. However, I don't like having to generate these files; I would rather have the values come directly from my database.
It seems like atomic/partial updates are what I'm looking for, but I'm having trouble understanding the efficiency ramifications of these updates versus external file fields... or perhaps there is a better approach altogether.
I would appreciate any guidance you could provide.

ExternalFileField is useful when you want to change the values of a field in many/all of the docs at the same time. If that is not the case, I would just use a normal field and update as needed (with partial updates if possible). With the external file you don't need to reindex, but you do need to reload the file...
Regarding efficiency: for sure, the less frequently you update your docs the better; you have to strike a balance.
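For reference, a partial (atomic) update from client code could look roughly like the sketch below. It assumes SolrJ, a collection named product with unique key id and an integer field inventory_count (the id and value here are made up for illustration); note that atomic updates also require your other fields to be stored or to have docValues so Solr can rebuild the document.

import java.util.Collections;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class InventoryUpdater {
    public static void main(String[] args) throws Exception {
        // One node is enough for a sketch; with SolrCloud you could use CloudSolrClient instead.
        try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
            // Atomic/partial update: only the listed fields change, the rest of the document is kept.
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "SKU-12345");                                      // unique key of an existing doc (hypothetical)
            doc.addField("inventory_count", Collections.singletonMap("set", 42)); // "set" replaces the current value

            client.add("product", doc, 10000); // commitWithin 10s rather than an explicit commit per update
        }
    }
}

Keep in mind that Solr still re-indexes the whole document internally for an atomic update, so this mainly spares you from regenerating and reloading the external file rather than from the indexing cost itself.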

Related

How to run code in WordPress to handle the database

Whenever I have to run code on the database (changing posts, terms, or whatever), I run it on a custom page template.
Since this has been working for me up to now, I didn't think about it much. But now I need to delete a ton of terms from a custom taxonomy, and I can't do it on the test page very effectively: I get 504 gateway errors all the time because the code takes too long to run and only deletes part of the terms.
So I am wondering, if I need to run custom code to change a lot of data, what is the most efficient method to use?
Many people use a plugin named Code Snippets for this. Otherwise it's often more efficient to run direct SQL queries, using phpMyAdmin for example.

Posting a large directory of files to Solr using the post tool: how to commit after every file

I am using the Java post tool for Solr to upload and index a directory of documents. There are several thousand documents. Solr only does a commit at the very end of the process, and sometimes things stop before it completes, so I lose all the work.
Does anyone have a technique to fetch the name of each doc and call post on it, so you get a commit for each document rather than one large commit of all the docs at the end?
From the help page for the post tool:
Other options:
..
-params "<key>=<value>[&<key>=<value>...]" (values must be URL-encoded; these pass through to Solr update request)
This should allow you to use -params "commitWithin=1000" to make sure each document shows up within one second of being added to the index.
Committing after each document is overkill for performance; in any case, it's quite strange that you have to resubmit everything from the start if something goes wrong. I suggest seriously reconsidering the indexing strategy you're using rather than investigating a different way to commit.
That said, if you have no option other than changing the commit configuration, I suggest configuring autoCommit in your Solr collection/index or using the commitWithin parameter, as suggested by MatsLindh. Just check whether the tool you're using allows adding this parameter.
autoCommit
These settings control how often pending updates will be automatically pushed to the index. An alternative to autoCommit is to use commitWithin, which can be defined when making the update request to Solr (i.e., when pushing documents), or in an update RequestHandler.
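If you end up driving the indexing from your own code instead of the post tool, commitWithin can be set per request. Below is a minimal SolrJ sketch; the collection name mycollection, the /update/extract (Solr Cell) handler, the directory path, and the id scheme are all assumptions for illustration.

import java.io.File;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;

public class DirectoryIndexer {
    public static void main(String[] args) throws Exception {
        try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build()) {
            File[] files = new File("/path/to/docs").listFiles();
            if (files == null) return;

            for (File file : files) {
                ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/extract");
                req.addFile(file, "application/octet-stream"); // let Tika detect the real content type
                req.setParam("literal.id", file.getName());    // each document needs a unique id (hypothetical scheme)
                req.setCommitWithin(1000);                     // searchable within ~1s, no explicit commit needed
                req.process(client);                           // one request per file, so a crash loses at most the current file
            }
        }
    }
}

Because every file goes in its own request with commitWithin, an interrupted run only loses documents that were never sent, which addresses the original concern without issuing a hard commit per document.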

Is there an easy way to delete a complete Vespa document set?

Playing with Yahoo's vespa.ai, I'm now at a point where I have a search definition with which I am happy, but I still have a bunch of garbage test documents stored.
Is there an easy way to delete/purge/drop all of them at once, a la SQL DROP TABLE or DELETE FROM X?
The only place I've found so far where deleting documents is clearly mentioned is the Document JSON format page. As far as I understand, it requires deleting documents one by one, which is fine, but gets a bit cumbersome when one is just playing around.
I tried deleting the application via the Deploy API using the default tenant, but the data is still there when issuing search requests.
Did I miss something? or is this by design?
There's no API available to do this, but the vespa-remove-index command-line tool can help you out. That is, to drop everything:
$ vespa-stop-services
$ vespa-remove-index
$ vespa-start-services
You could also play around with using garbage collection for this, but I wouldn't go down this path unless you are unable to use vespa-remove-index.

Solr Collection Creation and Custom Collection Libraries

I've run into a situation where we sometimes have to completely wipe an index and then re-index a collection. This process of course takes a lot of time. I don't want to allow any downtime in prod, or at least not extended downtime. Thus, I am researching a way in Solr to create a new Collection that is a copy of an old Collection, but without the data. I can re-index this new Collection with little or no service degradation. I then want to use aliases so that the alias our clients are using points at the new Collection, and they will start using it without even knowing.
I'm currently running 4.2, but wondering if I shouldn't upgrade to 4.7 in order to support this better. Seems like 4.2 has most of the same Collection API support.
One of the first snags I'm hitting is that the Collection I am copying has a lib folder in it with custom libraries. If possible I want to push these out to the solrhome/lib folder so that they only get loaded once. My problem with this is that if I have different versions of, say, a custom data importer, then I will run into classloader issues.
Has anyone successfully implemented this kind of scenario and could provide some insight into the pitfalls and successes that you had and what worked for you?
More Details...
I have many different collections that are part of this SolrCloud. If possible, I don't want to affect any of the other collections while making changes to the new copied collection.
I've had a similar kind of situation, where I might have to modify the Solr schema and need to re-index all the data, but I don't have much downtime to spare in production. So we came up with a solution like this:
Let's say I have SolrCloud1 (the existing one) with collection1 (it has its own structure). My application runs on a different machine, and there is a load balancer between SolrCloud1 and the application.
Now create a separate SolrCloud (say, SolrCloud2) with collection1, maintaining the same structure as before. Do the re-indexing in SolrCloud2. When it's done, make the new SolrCloud available under the load balancer, and once SolrCloud2 is up, shut down SolrCloud1.
This way you re-index the data without any production downtime, and users won't notice anything. Hope this helps.
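If you would rather stay inside a single SolrCloud, the alias swap described in the question can be driven through the Collections API. A rough Java sketch follows; the collection name products_v2, configset products_conf, shard/replica counts, and alias name products are all placeholders, and it assumes your Solr version supports CREATEALIAS (re-issuing it with an existing alias name simply repoints the alias).

import java.net.HttpURLConnection;
import java.net.URL;

public class AliasSwap {

    // Fire a Collections API call and report the HTTP status.
    static void call(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        System.out.println(url + " -> HTTP " + conn.getResponseCode());
        conn.disconnect();
    }

    public static void main(String[] args) throws Exception {
        String admin = "http://localhost:8983/solr/admin/collections";

        // 1. Create an empty copy of the collection from the same configset (no data yet).
        call(admin + "?action=CREATE&name=products_v2"
                   + "&collection.configName=products_conf&numShards=2&replicationFactor=2");

        // 2. ...re-index everything into products_v2 here...

        // 3. Repoint the alias the clients query; the switch is transparent to them.
        call(admin + "?action=CREATEALIAS&name=products&collections=products_v2");
    }
}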

How to implement Solr into Sitecore

I have to implement a Solr index in Sitecore and I would like to know what the best approach is.
I looked at the following approaches:
Capture the publish end event (or other events) and then push the item to the Solr index.
Implement a custom database crawler and get all changes from the history table, then push the data to Solr using a custom index.
The second approach sounds like the way to go (in my opinion). In this case, do I need to create a new search index, or a search manager?
If anyone has done it before, can you point me in the right direction? Also, could you post some links to articles about Sitecore-Solr implementations?
UPDATE
OK, after reading the Sitecore documentation, this is what I came up with:
Create a custom SolrConfiguration class where you can set properties like the Solr service URL and add indexes and their definitions (custom Solr indexes).
Create a SolrIndex and add it (in the config file) to your SolrConfiguration. When instantiated, the SolrIndex should subscribe to the AddEntry event of the Sitecore History Manager and communicate with the Solr crawlers.
Create a custom processor and hook it into the Sitecore initialisation pipeline. The processor should initialize the SolrConfiguration (from step 1).
Since everything in your config file will be built using reflection, you can get an instance of your configuration based on your config file.
How does that sound? Any comments would be appreciated.
We've done this on a few sites and tend to have a separate "published" Solr index and an "unpublished" index.
We intercept:
OnItemSaving
We use this event to push things into the unpublished index (you may not need this; it depends on whether you want things in preview mode).
OnPublishItemProcessed
We process additions and updates to the published index here. I'm not sure what we do about deletions here without digging right into the code, but we certainly deal with deletions in OnItemDelete (mentioned below).
OnItemDelete
We intercept here to remove things from the published and non-published indexes (I think we remove from the published index here because Sitecore makes you publish the parent node in order to publish out deletions to the web database).
I hope that helps, I'd post the code if I could (but I'd be scowled at).
In addition to the already posted answer (which I think is a good way to do things) I'll share how we do it.
We basically just took a look at the Sitecore database crawler and decided to do things kind of like how it was doing it.
We utilize a significantly modified version of the Custom Item Generator to facilitate mapping between strongly typed objects and an object that has properties that correspond to our Solr schema. For actual communication with Solr we use SolrNet.
The general idea is that we loop through all the items (starting with the site root) recursively and map them to the appropriate type based on its template. Then we go through an indexing process for that item (some items need to index multiple documents to Solr in our implementation).
This approach is working very well for us, except I will note that because we are indexing everything at once, it tends to introduce a slight lag between publishing and the site reflecting changes made to the index. One oversight we made at the beginning, which we will be working to fix soon, is that we don't have an "unpublished" index (meaning we need to publish the site to see updates). It doesn't impact our solution that much really, but I can definitely see where it would affect others, so keep that in mind.
We didn't particularly want to get into the deletion of items from the index so we do the indexing as a publish:end event.
I hope this additional insight helps you. As far as I know there's not a whole lot of information out there about this specific combination of products, but I can tell you it's definitely possible and quite useful.
