Zookeeper and SolrCloud on AWS EC2 instances

I have used Solr for a while, but am new to SolrCloud. I am investigating whether it makes sense in my context to deploy SolrCloud or to have multiple Solr instances (with matching indexed content) sitting behind an ELB.
My deployment will be in AWS on EC2 instances. Our current troubleshooting strategy in AWS is to terminate misbehaving instances and allow them to be automatically recreated by an AutoScaling group (which configures new instances via scripts when they are created). In fact, we do not have access to log on to the instances once they are in production. Everything stored in Solr can be re-indexed, so there is not a concern for data loss.
When trying to understand the SolrCloud infrastructure, however, I had a few questions:
Is Zookeeper able to automatically add a new instance if I destroy one of them? Everything I have seen seems to have static IP addresses in the configurations, which would require the configs to be updated (and Zookeeper restarted) if an instance was terminated and replaced.
Is there a "master" Zookeeper instance that I should call, or can I call any of them? If I can call any of them, we would likely put an ELB in front of Zookeeper.
If we hit heavy usage and allow the AWS AutoScaling group to create additional servers that serve as SolrCloud shards, will SolrCloud gracefully add the instances and terminate them without problems? (This appears to be true, and the whole point of using SolrCloud.)

Is Zookeeper able to automatically add a new instance if I destroy one of them? Everything I have seen seems to have static IP addresses in the configurations, which would require the configs to be updated (and Zookeeper restarted) if an instance was terminated and replaced.
AN: In the ZooKeeper configuration you just list the other ZooKeeper servers, so that each ZooKeeper is aware of the other members of the ensemble. You don't need to change this config unless you plan to increase or decrease the number of ZooKeepers, and even then it can be done one server at a time without disturbing the cluster. We also keep hostnames (not IPs) in the config, so a change of IP address has no impact.
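As a minimal sketch, the relevant part of zoo.cfg on a three-node ensemble looks something like this (the hostnames are examples; a replaced instance just needs to come back up with the same hostname):
# zoo.cfg - same file on every ZooKeeper server
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
# the ensemble members - each server lists all of them, including itself
server.1=zk1.internal:2888:3888
server.2=zk2.internal:2888:3888
server.3=zk3.internal:2888:3888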
Is there a "master" Zookeeper instance that I should call, or can I call any of them? If I can call any of them, we would likely put an ELB in front of Zookeeper.
AN: ZooKeeper internally has a leader and followers, but you don't need to care which is which. You never address a single "master" ZooKeeper: Solr (and any ZooKeeper-aware client) is given the full list of ZooKeeper hosts and manages the connections itself, so there is no need for an ELB in front of ZooKeeper.
If we hit heavy usage and allow the AWS AutoScaling group to create additional servers that serve as SolrCloud shards, will SolrCloud gracefully add the instances and terminate them without problems? (This appears to be true, and the whole point of using SolrCloud.)
AN: When you add a new Solr node, you start it pointing at the same ZooKeeper ensemble (pass the same ZooKeeper hosts) so that it joins the existing cluster. Once it has joined, you still have to split a shard or move replicas onto the new node yourself to rebalance the cluster - that part is not automated as of now.
The Solr nodes are the ones you add to your ELB.
When you start a Solr node you pass it the list of ZooKeeper hosts; that is how the node knows which cluster it is part of and which other nodes are serving that cluster.
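For example (hostnames, collection and shard names are assumptions), joining a new node and then rebalancing by hand looks roughly like this:
# start the new node against the existing ZooKeeper ensemble
bin/solr start -cloud -z zk1.internal:2181,zk2.internal:2181,zk3.internal:2181 -p 8983
# then rebalance manually, e.g. split an existing shard via the Collections API
curl "http://newnode:8983/solr/admin/collections?action=SPLITSHARD&collection=mycollection&shard=shard1"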

Related

MongoDB - set replication to DocumentDB

We're setting up a local MongoDB cluster - locally we'll have one primary and one secondary node, and we want to have another node in AWS. Is it possible to have that node be the DocumentDB service instead of an EC2 instance?
Also, I know I must have an odd number of total nodes; is it possible to first add one node and then add another one?
Thanks in advance.
Also, I know I must have an odd number of total nodes
In a MongoDB replica set, you can have any number of nodes you like. It is possible to have a 2-node replica set, although it's not very practically useful since unavailability of a single node (e.g. a restart for maintenance) would make the whole deployment unavailable for writes. A 4-node replica set is a feasible construction if you wanted an additional replica somewhere (e.g. for geographically close querying from a secondary, or for analytics querying), though if you are simply doing this for redundancy you should probably stick with the standard 3-node configuration and configure proper backups.
Is it possible to first add one node and then add another one?
You can reconfigure a replica set at any time.
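For example, from the mongo shell connected to the current primary, members can be added one at a time (hostnames are examples):
rs.add("aws-node-1.example.com:27017")   // add the first new member
rs.status()                              // wait until it reports healthy
rs.add("aws-node-2.example.com:27017")   // then add another one later if needed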
Is it possible to have that node as the DocumentDB service instead of an EC2 instace?
Unlikely. DocumentDB is not MongoDB. DocumentDB presents a MongoDB-like API, but 1) it emulates an old version of MongoDB, 2) even then many features don't work, and 3) under the hood its architecture is nowhere near that of MongoDB. So if you ask a genuine MongoDB deployment to treat a DocumentDB node as a replica set member, it will probably not work.
This assumes you can even configure DocumentDB in the required manner - I suspect this won't be possible to begin with.
If you're only trying to replicate the data to DocumentDB, Database Migration Service is a good tool for the job: https://aws.amazon.com/dms/
But as others have said, this will be a separate cluster from your MongoDB setup.

Does Solr Cloud need a load balancer (e.g. HAProxy) in case of master failure

I have searched a lot but still have some basic confusion about Solr Cloud. Let's say I have three systems where SolrCloud is configured (1 master and 2 slaves) and an external ZooKeeper on the same three machines to form a quorum. The system names are
master
slave1
slave2
Public-Front
Public-Front is the system where I have configured HAProxy. It receives requests from the WWW and sends them to a backend server depending on ACLs.
According to my understanding, if I send a request to a Solr collection (i.e., to master), it routes the request to the slaves and the load is hence balanced. There is no need to specify the slaves here, is there?
Now, in Public-Front, should I configure each Solr instance as a separate backend server to load balance across, or only the master system?
If I only configure the master system as the solr-server in HAProxy and that master goes down, then I think I can no longer reach Solr through HAProxy (even though the slaves are still up, they are not configured in HAProxy).
Where am I wrong, and what is the best approach?
There is no traditional master or slave in Solr Cloud - there is a set of replicas, one of which is designated as the leader. Leader selection is automagic - i.e. the first replica that says it wants to be the leader receives that status. This is per-shard, per-collection state. In your example there are three replicas, one of which is designated as the leader. If that replica disappears, one of the two remaining replicas becomes the new leader, and everything continues as normal. The role of the leader is to be the up-to-date version of the index and to handle any updates - applying them first to its own index, then routing them to the other replicas.
There are also several types of replicas, and not all of them are suited to be promoted to leader - but in the default configuration they can be.
Here's the thing - since there isn't really a master and all three indexes contain the same data (they are all replicas of the same shard), a request doesn't have to be routed through any master. If you're using a dumb HAProxy, you can safely spread the requests across all three nodes, and each of them should be able to answer a query without contacting any other node (as long as they all contain all the shards of the collection).
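A minimal HAProxy backend along those lines might look like the following sketch (the hostnames and the health-check path are assumptions):
backend solr_nodes
    balance roundrobin
    option httpchk GET /solr/admin/info/system
    server master master:8983 check
    server slave1 slave1:8983 check
    server slave2 slave2:8983 check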
However, if you're using SolrJ or another ZooKeeper-aware client (such as CloudSolrClient), the client will keep in touch with ZooKeeper instead and read the state information for your cluster. That allows the client to know which servers are currently replicas for your collection, and to contact any of the nodes it decides has the information required for your query. In your case the result will be the same, except that your client will know not to contact nodes that disappear and will automagically learn about nodes that are added to the cluster.
The "one Solr node routing requests to a different node" scenario is only relevant if the node you're contacting doesn't hold any replica of the collection you're querying - i.e. it has to contact a different node to fetch that content. In that case an extra node-to-node request happens inside the cluster and the load on the cluster is slightly higher than necessary. When the collection is replicated to all three nodes - or when you're using SolrJ - that extra request should not happen.

Deploy SolrCloud to multiple servers

I am a little bit confused by SolrCloud. How can I deploy SolrCloud on multiple servers? Will it be multiple nodes, one per server, or will it be one SolrCloud node with multiple shards, one per server?
And how will all of this communicate with ZooKeeper? (As far as I understand, ZooKeeper also has to be deployed on a separate server - is that correct?)
I am a little bit confused by all of this. Can you help me, or maybe give a link to a good tutorial?
The SolrCloud section of the reference manual should be able to help you out about the concepts of Solr Cloud.
You can run multiple nodes on a single server, or you can run one node on each server. That's really up to you - but all the nodes running on a single server will disappear when that server goes down. The use case for running multiple nodes on a single server is usually experimenting, or very particular requirements where you try to get certain speedups from the single-threaded parts of Lucene, so unless you're doing low-level optimization, one node per server is what you want.
The exception to that rule is for development and experimenting - running multiple nodes on a single machine is fine when the data doesn't matter.
All the nodes make up a single SolrCloud cluster - so you'd be running multiple nodes, not multiple clusters.
Zookeeper should (usually) be deployed on three to five servers - depending on what kind of resiliency you want for failovers. While Solr bundles a Zookeeper instance you can use if you don't want to set up Zookeeper yourself, that is not recommended for production. In a production environment you'd run Zookeeper as a separate process - but that may not mean that you'll be running it on separate servers. Depending on how much traffic and use you'll see for Zookeeper for your nodes, running them on the same server as your cloud nodes will work perfectly fine. The point is to avoid using the bundled version to have full control over Zookeeper and its configuration, and to be able to upgrade/manage the instances outside of Solr.
If the need arises later you can move Zookeeper to its own cluster of servers then (at least three).
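As a sketch (the hostnames and the /solr chroot are assumptions), each node on each server would then be started against the external ensemble like this:
# create the chroot once (optional, but keeps Solr's data in its own ZooKeeper path)
bin/solr zk mkroot /solr -z zk1:2181,zk2:2181,zk3:2181
# start one node per server, all pointing at the same ensemble
bin/solr start -cloud -z zk1:2181,zk2:2181,zk3:2181/solr -p 8983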

Solr Cloud: Distribution of Shards across nodes

I'm currently using Solr Cloud 6.1; the following behavior can also be observed up to 7.0.
I'm trying to create a Solr collection with 5 shards and a replication factor of 2. I have 5 physical servers. Normally, this would distribute all 10 replicas evenly among the available servers.
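(For reference, the collection is created with the standard Collections API call, roughly like this - the host is just an example and the maxShardsPerNode value is an assumption:)
curl "http://solr-0:8983/solr/admin/collections?action=CREATE&name=wikipedia&numShards=5&replicationFactor=2&maxShardsPerNode=2"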
But, when starting Solr Cloud with a -h (hostname) param to give every Solr instance an individual, but constant hostname, this doesn't work any more. The distribution then looks like this:
solr-0:
wikipedia_shard1_replica1 wikipedia_shard2_replica1 wikipedia_shard3_replica2 wikipedia_shard4_replica1 wikipedia_shard4_replica2
solr-1:
solr-2:
wikipedia_shard3_replica1 wikipedia_shard5_replica1 wikipedia_shard5_replica2
solr-3:
wikipedia_shard1_replica2
solr-4:
wikipedia_shard2_replica2
I tried using Rule-based Replica Placement, but the rules seem to be ignored.
I need to use hostnames because Solr runs in a Kubernetes cluster, where IP addresses change frequently and Solr won't find its cores after a container restart. I first suspected a newer Solr version to be the cause of this, but I narrowed it down to the hostname problem.
Is there any solution for this?
The solution was actually quite simple (but not really documented):
When creating a Service in OpenShift/Kubernetes, all matching pods sit behind a load balancer. So even though every Solr instance is assigned a unique hostname, those hostnames all resolve to one single IP address (that of the load balancer).
Solr apparently can't deal with that and fails to distribute its shards evenly.
The solution is to use a headless service in Kubernetes. Headless services aren't backed by a load balancer, so every hostname resolves to a unique IP address.
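A minimal headless Service for this might look like the following sketch (the names, labels and port are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: solr-headless
spec:
  clusterIP: None   # "None" is what makes the Service headless
  selector:
    app: solr
  ports:
    - port: 8983
      name: solr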

Using solrj and LBHttpSolrClient to access a single solrcloud instance

Is using the LBHttpSolrClient within SolrJ to access a single SolrCloud cluster less robust than using the default SolrJ and ZooKeeper behavior? Can it load balance over a single SolrCloud cluster correctly?
The solrcloud instance that I have available has a collection with about 9 million documents, spread over three shards with about 3 million documents per shard. There are three nodes (servers) in the solrcloud, with 3 shards, replicationFactor is 2, and maxShardsPerNode of 2. For this solrcloud instance, there are 3 zookeeper nodes also running on these three servers.
This is the basic code that I've been told to use:
// ZooKeeper ensemble and the Solr base URLs of the three nodes
String zkUrls = "solrd1:2181,solrd2:2181,solrd3:2181";
String[] solrUrls = {"http://solrd1:8983/solr", "http://solrd2:8983/solr", "http://solrd3:8983/solr"};
// explicit load-balancing client over the three nodes...
LBHttpSolrClient.Builder lbclient =
        new LBHttpSolrClient.Builder().withBaseSolrUrls(solrUrls);
// ...handed to the ZooKeeper-aware CloudSolrClient
CloudSolrClient solr = new CloudSolrClient.Builder()
        .withLBHttpSolrClientBuilder(lbclient)
        .withZkHost(zkUrls)
        .build();
solr.setDefaultCollection(defaultCollection);
Is this LBHttpSolrClient able to properly use the provided solrUrls, given that each entry in that variable is just a node within a single SolrCloud cluster? Does this load-balancing client automatically query all the other nodes to ensure the results are complete for the whole collection, instead of covering only the shards that exist on the node it happens to hit?
If using the LBHttpSolrClient is the correct way to access a single SolrCloud cluster (better than SolrJ with ZooKeeper), is there a better way to let ZooKeeper provide the base Solr URLs? I have the impression that LBHttpSolrClient predates the whole SolrCloud setup and was a way to load balance over multiple standalone Solr instances; if that's the case, wouldn't using LBHttpSolrClient be obsolete compared to SolrJ with ZooKeeper?
References:
Is there any loss of functionality if I use load balancer which does not communicate with zookeeper in solrcloud?
This link appears to have an appropriate title that may provide some insight into the same questions I'm asking, but it has no answers.
Loadbalancer and Solrcloud
This link discusses how SolrJ and ZooKeeper work together, but does not address my questions about whether the LBHttpSolrClient is less robust or whether it will work correctly on a single small SolrCloud cluster.
SolrCloud load-balancing
Does not address whether SolrJ with ZooKeeper is better suited than use of the LBHttpSolrClient.
I think you are overcomplicating things; you can skip the LBHttpSolrClient entirely in your code, and SolrJ will create the needed instance behind the scenes.
In short, CloudSolrClient uses an LBHttpSolrClient internally to send requests to the right Solr instances. If you want to get the most out of your SolrCloud setup, use CloudSolrClient; if you use just an LBHttpSolrClient (without CloudSolrClient), you will not know that a Solr node has gone down, for instance, until you get failed requests.
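A minimal sketch of that simpler approach (the ZooKeeper hosts are taken from the question; the collection name is an assumption):
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class CloudQueryExample {
    public static void main(String[] args) throws Exception {
        // CloudSolrClient reads the cluster state from ZooKeeper and builds its own
        // load-balancing HTTP client internally - no explicit LBHttpSolrClient needed.
        try (CloudSolrClient solr = new CloudSolrClient.Builder()
                .withZkHost("solrd1:2181,solrd2:2181,solrd3:2181")
                .build()) {
            solr.setDefaultCollection("mycollection"); // collection name is an example
            QueryResponse rsp = solr.query(new SolrQuery("*:*"));
            System.out.println("numFound: " + rsp.getResults().getNumFound());
        }
    }
}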
