How to set up Solr replication with two search servers?

Hi, I'm developing a Rails project with Sunspot and Solr, and I want to configure Solr replication.
My environment: Rails 3.2.1, Ruby 2.1.2, Sunspot 2.1.0, Solr 4.1.6.
Why replication: I need a more stable system - the search server often goes down for maintenance, and then the web application stops working in production. So I'm thinking about running two identical search servers instead of one to make the system more stable: if one server goes down, the other keeps working.
I cannot find any good tutorial that is simple, easy to understand, and explains the details...
I'm trying to set up replication on two servers, but I do not fully understand how replication works internally:
how is data synchronized between the two servers (is it automatic)?
how are search requests balanced between the two servers?
when one server suddenly stops working, does the other become the master automatically?
are there replication features other than those listed?
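For reference, classic master-slave replication is configured in solrconfig.xml via the stock ReplicationHandler. A minimal sketch (hostnames, core name, and poll interval are placeholders) might look like:

```xml
<!-- On the master (solrconfig.xml): publish the index after commit/startup -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <str name="replicateAfter">startup</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
</requestHandler>

<!-- On the slave: poll the master periodically for a newer index version -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master-host:8983/solr/core1</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```

With this setup the data sync is automatic (the slave polls the master), but note that classic replication does not balance queries or promote a new master by itself: load balancing needs a separate load balancer in front of the servers, and failover of the master role is a manual reconfiguration step.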


Related

Is it a good idea to stop Solr replication for all search machines until the new Solr configuration is deployed on all nodes?

The question is about a legacy Solr setup (non-cloud mode).
Let's consider a hypothetical example. Say we have one index machine and two search machines.
We have some Solr schema and config changes that we want to deploy to all the machines.
We do a round-robin deployment - deploy to the index machine first, then to one search machine at a time. For the whole deployment, we disable replication from the index machine to the search machines. Can we do better, so that replication is not stopped for the entire deployment process?
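One option (a sketch using the stock ReplicationHandler commands; host and core names are placeholders) is to pause polling per slave rather than disabling replication globally, deploy to one slave at a time, then force a fetch and resume polling:

```shell
# Pause polling on slave1 only; slave2 keeps replicating normally
curl 'http://slave1:8983/solr/core1/replication?command=disablepoll'

# ... deploy the new config to slave1 and restart it ...

# Pull the latest index immediately, then resume normal polling
curl 'http://slave1:8983/solr/core1/replication?command=fetchindex'
curl 'http://slave1:8983/solr/core1/replication?command=enablepoll'
```

This way replication is only ever paused on the one machine currently being deployed to, not across the whole fleet.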

HBase Indexer for Solr

I'm trying to index data from an HBase table using the Lucidworks HBase Indexer, and I would like to know whether Solr, the HBase Indexer, and HBase have to use the same ZooKeeper.
Can my Solr instance be independent, with HBase and the HBase Indexer reporting to zookeeper1 while Solr reports to its own ZooKeeper?
I'm following the URL below:
https://community.hortonworks.com/articles/1181/hbase-indexing-to-solr-with-hdp-search-in-hdp-23.html
It is up to you whether to go with the same ZooKeeper or a separate, independent one.
For an HBase production setup, a 3-node ZooKeeper ensemble is recommended, which means three ZooKeeper servers are required anyway; you can use the same ensemble for Solr as well.
That helps reduce the number of servers.
ZooKeeper is a lightweight service used to coordinate the Solr servers, so for a production run it is good practice to keep ZooKeeper outside the Solr servers.
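For a shared 3-node ensemble, each ZooKeeper server would get the same zoo.cfg (a sketch; hostnames and paths are placeholders) plus a unique myid file:

```
# zoo.cfg - identical on all three nodes
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=zk3:2888:3888
```

HBase, the HBase Indexer, and Solr can then all be pointed at the same `zk1:2181,zk2:2181,zk3:2181` connection string (Solr conventionally under its own chroot, e.g. `/solr`).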

Sitecore 8.1 index rebuild strategy for SOLR search provider

I just read through the index update strategies document below, but couldn't get a clear answer on which strategy is best for a Solr search implementation:
https://doc.sitecore.net/sitecore_experience_platform/search_and_indexing/index_update_strategies
We have set up master and slave Solr endpoints, where the master is used for creates/updates and the slave for reads only.
I would appreciate suggestions on the indexing strategy to be used for:
Content Authoring
Content Delivery
The solution is hosted in Azure Web Apps, and Content Delivery can be scaled up or down from 1 to N instances at any time.
I'm planning to configure the following:
Only CA has OnPublishEndAsync.
The CDs will not have any indexing strategy.
I would appreciate a suggestion for a solution that has worked for you. Also, how do we disable an indexing strategy?
Thanks.
Usually when you use replication in Solr (master + slave Solr servers), it should be configured like this:
Content Authoring (CM server):
connects to the Solr master server.
It runs the syncMaster strategy for the master database and onPublishEndAsync for the web database.
Content Delivery (CD servers):
connect to the Solr slave server (or to a load balancer if there are multiple Solr slave servers).
have all indexing strategies set to manual - they should NEVER update the Solr slave servers.
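In Sitecore this is configured per index in the content search configuration. A sketch of pointing a CD index at the manual strategy (the index id and the strategy path are assumptions based on the stock Sitecore 8.x Solr configuration files; verify against your own include files):

```xml
<index id="sitecore_web_index" type="Sitecore.ContentSearch.SolrProvider.SolrSearchIndex, Sitecore.ContentSearch.SolrProvider">
  <strategies hint="list:AddStrategy">
    <!-- manual = this server never triggers index updates -->
    <strategy ref="contentSearch/indexConfigurations/indexUpdateStrategies/manual" />
  </strategies>
</index>
```

Replacing all strategies with `manual` is also the usual answer to "how do we disable an indexing strategy" on CD servers.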
With this solution, CD servers can always get results from Solr, even while a full index rebuild is in progress (the rebuild happens on the master Solr server, and the data is copied to the slaves after it finishes).
You should think about having two Solr slave servers and a load balancer in front of them. If you do this:
If the Solr master is down for some reason, the slaves still answer requests from the CD boxes. You can safely restart the master and reindex, and the only thing you lose is that search results on CD are not 100% up to date for a while.
If one of the Solr slave servers is down, the second slave still answers requests, and the load balancer should redirect all traffic to the slave that works.
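The round-robin-with-failover behaviour the load balancer provides can be sketched in a few lines of Python (a hypothetical SlavePool helper, not a real client library): rotate through the slave URLs and skip any server that fails a health check.

```python
from itertools import cycle


class SlavePool:
    """Round-robin over Solr slave URLs, skipping servers that are down.

    A minimal sketch of what the load balancer does. A real setup would
    use an actual health check (e.g. an HTTP GET of /solr/admin/ping)
    instead of the injected is_up callback.
    """

    def __init__(self, urls, is_up):
        self._urls = list(urls)
        self._cycle = cycle(self._urls)
        self._is_up = is_up  # callable(url) -> bool

    def next_server(self):
        # Try each server at most once per call, continuing the rotation
        # from where the previous call left off.
        for _ in range(len(self._urls)):
            url = next(self._cycle)
            if self._is_up(url):
                return url
        raise RuntimeError("no healthy Solr slave available")
```

A dedicated load balancer (HAProxy, an Azure load balancer, etc.) does the same thing more robustly; the sketch just makes the failover logic explicit.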

Load balancer and SolrCloud

I am wondering how a load balancer can be set up on top of SolrCloud, or whether a load balancer is not needed at all.
If the former, do the shard leaders need to be added to the load balancer? Then what happens if a shard leader changes for some reason? Or should all machines in the cluster (including replicas) be added to the load balancer?
If the latter, I guess a CNAME needs to point to the SolrCloud cluster, using round-robin DNS?
Any advice from actual SolrCloud operating experience would be really appreciated.
Usually SolrCloud is used in combination with ZooKeeper, and the client uses CloudSolrServer to access SolrCloud.
A query is processed in the following flow.
Note that I have only read the Solr source code partially, so there is a lot of guessing here. Also, what I read was the source code of Solr 4.1, so it might be outdated.
ZooKeeper holds the list of IPAddress:Port for all SolrCloud servers.
(Client side) The CloudSolrServer instance retrieves the list of servers from ZooKeeper.
(Client side) The CloudSolrServer instance chooses one of the SolrCloud servers at random and sends the query to it. (LBHttpSolrServer chooses the server round-robin?)
(Server side) The SolrCloud server that received the query chooses one replica per shard at random from the server list and forwards the query to them. (Note that every SolrCloud server holds the server list, which it receives from ZooKeeper.)
An update is handled in the same manner, but is also propagated to all replicas.
Note that in SolrCloud the difference between a leader and a replica is small, and you can send a query or update to any of the servers; it is automatically forwarded to the right ones.
In short, load balancing is done on both the client side and the server side.
So you don't need to worry about it.
A load balancer is needed, and this role is effectively provided by ZooKeeper used in conjunction with SolrCloud.
When you use SolrCloud, you must set up sharding and replication through ZooKeeper, either using the embedded ZooKeeper server that comes bundled with Solr or a stand-alone ZooKeeper ensemble (which is recommended for redundancy).
Then you would use CloudSolrClient, which reads the cluster state from ZooKeeper and routes your queries to the correct shard in your cluster. CloudSolrClient requires the addresses of all your ZooKeeper instances upon instantiation, and load balancing is handled as appropriate from there.
Please see the following excellent tutorial:
http://www.francelabs.com/blog/tutorial-solrcloud-amazon-ec2/
Solr Docs:
https://cwiki.apache.org/confluence/display/solr/Setting+Up+an+External+ZooKeeper+Ensemble
This quote refers to the latest version of Solr, which at the time of writing was 7.1:
SolrCloud - Distributed Requests
When a Solr node receives a search request, the request is routed
behind the scenes to a replica of a shard that is part of the
collection being searched.
The chosen replica acts as an aggregator: it creates internal requests
to randomly chosen replicas of every shard in the collection,
coordinates the responses, issues any subsequent internal requests as
needed (for example, to refine facets values, or request additional
stored fields), and constructs the final response for the client.
SolrCloud - Read Side Fault Tolerance
In a SolrCloud cluster each individual node load balances read
requests across all the replicas in collection. You still need a load
balancer on the 'outside' that talks to the cluster, or you need a
smart client which understands how to read and interact with Solr’s
metadata in ZooKeeper and only requires the ZooKeeper ensemble’s
address to start discovering to which nodes it should send requests.
(Solr provides a smart Java SolrJ client called CloudSolrClient.)
I am in a similar situation where I can't rely on CloudSolrServer for load balancing; a possible solution that I am evaluating is to use Airbnb's Synapse (http://nerds.airbnb.com/smartstack-service-discovery-cloud/) to dynamically reconfigure an existing HAProxy load balancer based on the status of the SolrCloud cluster as reported by ZooKeeper.
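For context, a static HAProxy backend for two Solr nodes (a sketch; hostnames, ports, and the ping handler path are placeholders) would look like the following; Synapse's job is essentially to rewrite the server lines as nodes come and go:

```
backend solr_nodes
    balance roundrobin
    # Mark a server down when the Solr ping handler stops answering
    option httpchk GET /solr/admin/ping
    server solr1 10.0.0.1:8983 check
    server solr2 10.0.0.2:8983 check
```

Since any SolrCloud node can accept any request and route it internally, all nodes (leaders and replicas alike) can be listed in the backend.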

Solr Distributed Search vs. Solr Cloud

For Solr 4.3 users, what would be the benefit of using Solr distributed search over SolrCloud?
Or should all Solr deployments after 4.x just use SolrCloud and forget about Solr distributed search?
Benefit:
There isn't any benefit of distributed search over SolrCloud. SolrCloud is currently the most efficient way to deploy a Solr cluster. It takes care of all your instances using ZooKeeper and is very good for high availability.
Efficient management:
ZooKeeper decides which of your documents goes to which instance.
I have used SolrCloud in production as well, and it works wonderfully for high-traffic scenarios.
SolrCloud itself is essentially distributed search via Solr.
No, you can still run any deployment after 4.x as a normal standalone Solr instance. Just avoid the zkHost parameter at startup.
JOINs are not supported in SolrCloud, which is a big drawback.
If you want to control the shards yourself - that is, decide which shard contains which records - go for distributed search; otherwise go for SolrCloud, which manages all the shards itself.
We can have multiple Solr instances, so in distributed search, if one fails, we can move to another. In SolrCloud, ZooKeeper manages all of this, so if ZooKeeper fails the system will be down.

Resources