I have deployed SolrCloud on AKS with 5 nodes, each with 32 GB of RAM. In Solr, I created a collection with 7 shards and 3 replicas. My pods are frequently restarted, and I then see replicas go to recovering and finally to recovery failed. I checked the logs and found nothing.
What can I do to fix that?
I tried setting autoCommit to 5 minutes and to 10 minutes, but neither helped.
I have a Solr StatefulSet with 5 replicas. Each replica requests 20Gi memory / 2 CPU with limits of 32Gi / 4 CPU. I set the JVM memory to -Xms12g -Xmx16g, and each replica runs on its own Kubernetes node. Sometimes one or two Solr pods restart, and replicas on a shard then usually go from active to recovering and finally to recovery failed.
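To check whether the restarts are the kubelet OOM-killing the pods (with -Xmx16g, the JVM's total footprint of heap plus direct buffers, metaspace, and thread stacks can exceed the 20Gi request), something like this can be used. A minimal sketch; the StatefulSet name solr and the official Solr image's SOLR_JAVA_MEM variable are assumptions, not from the post:

    # Check whether earlier restarts were OOM kills (look for "Reason: OOMKilled").
    kubectl describe pod solr-0 | grep -A 5 "Last State"

    # Keep the heap well below the memory request so heap + off-heap fits.
    # "solr" and SOLR_JAVA_MEM are assumptions about this deployment.
    kubectl set env statefulset/solr SOLR_JAVA_MEM="-Xms12g -Xmx12g"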
Related
I'm running SolrCloud with 3 Solr and 3 ZooKeeper instances. For fault tolerance, I now have 3 shards and 3 replicas per Solr node.
So:
numShards [3]
maxShardsPerNode [3]
autoAddReplicas [false]
replicationFactor [3]
nrtReplicas [3]
Is this recommended? If I already have 3 shards, why do I need 3 replicas of each shard spread across the 3 instances too?
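For reference, those settings correspond to a Collections API call along these lines; the host and collection name are placeholders, and maxShardsPerNode exists only on Solr versions before 9:

    # Placeholders: localhost:8983 and "mycollection". nrtReplicas defaults to replicationFactor.
    curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=3&replicationFactor=3&maxShardsPerNode=3&autoAddReplicas=false"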
Sharding is important because:
It allows you to horizontally split or scale your content volume.
It allows you to distribute operations, for example index tracking, across shards (potentially on multiple nodes), therefore increasing performance/throughput.
Replication: The purpose of replication is both to ensure high availability and to improve search query performance, although the main purpose is often fault tolerance. This is accomplished by never storing a replica of a shard on the same node as its leader.
Advantages of replication:
Splits read and write loads and operations
Load distribution for search queries
High availability for searching
Any number of replicas can be created to scale query performance
It is advisable to set the replication factor to at least 3 so that even if something happens to a rack, one copy is always safe.
Consider that you have 3 Solr server instances called server1, server2, and server3.
You have created 3 shards for your collection.
Each server has one shard on it: shard1 on server1, shard2 on server2, and shard3 on server3.
Let's have 3 replicas of each shard, one on each server.
So server1 will have shard1 plus replicas of the other shards, shard2 and shard3, as well.
The same goes for the other servers.
If 2 servers go down, you still have one server with all the data of your collection.
That's the beauty of replication in achieving high availability.
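You can confirm such a layout with the Collections API, for example (the host and collection name are placeholders):

    # Lists every shard and its replicas, including which node each replica lives on.
    curl "http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS&collection=mycollection"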
I need to create a sharded cluster on Kubernetes using YAML files. I have one approach, but I'm having problems with the shards, so I'm looking for another option. Take into account that the sharded cluster is also created with StatefulSets.
Please help.
I have set up Solr Cloud on two machines. I created a collection, collection1, and split it into two shards with 2 replicas. I then added another Solr machine to the cloud, and on the Solr admin page under Cloud > Tree > live nodes I can see 4 live nodes, including the last Solr instance launched. However, my shards are running on the same machine, just on different ports, and even the replica still shows the leader's address.
Now I want to move the replica to the newly launched Solr instance, or just put the entire shard 1 or 2 on the other machines.
I have searched for this, but nothing tells me the exact commands.
This question is rather old, but for the sake of completeness:
In the Solr UI, go to Collections
Select your collection
Click on the shards on the right side
Click add replica
Choose your new node as the target node
Wait for the replica to be ready (watch in Cloud > Graph)
Back in the shards list, delete the old replica
If the old replica was the leader, a leader election will be triggered automatically.
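The same steps can also be done through the Collections API; a sketch, where the target node name and the replica name (core_nodeX, as reported by CLUSTERSTATUS) are placeholders:

    # Add a replica of shard1 on the new node (node names use the host:port_solr form).
    curl "http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=collection1&shard=shard1&node=newhost:8983_solr"

    # Once the new replica shows as active, remove the old one.
    curl "http://localhost:8983/solr/admin/collections?action=DELETEREPLICA&collection=collection1&shard=shard1&replica=core_node3"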
We have a SolrCloud managed by ZooKeeper. One concern we have is with updating the schema or dataConfig on the fly. All changes we are planning to make are done on the indexing server node of the SolrCloud. Once the changes to the schema or dataConfig are made, we do a full data import.
The concern is that replication of the new indexes to the slave nodes in the cloud would not happen immediately, but only after the replication interval. Also, the replication will happen at different times for the different slave nodes, which might cause inconsistent results.
For example:
The index replication interval is 5 mins.
Slave node A started at 10:00 => next index replication would be at 10:05.
Slave node B started at 10:03 => next index replication would be at 10:08.
If we make changes to the schema on the indexing server and re-index at 10:04, the results of this change would be available on node A at 10:05, but on node B only at 10:08. Requests made to the SolrCloud between 10:05 and 10:08 would return inconsistent results depending on which slave node the request gets redirected to.
Please let me know if there is any way to make the results more consistent.
@Wish, what you are describing is not the behavior of SolrCloud.
In SolrCloud, index updates are routed to shard leaders, and the leader sends copies to all of its replicas.
If at any point ZooKeeper identifies that a replica is not in sync with its leader, the replica is put into recovering mode; in this mode it does not serve any requests, including queries.
P.S.: In SolrCloud, configs are maintained in ZooKeeper, not at the node level.
I suspect you are confusing SolrCloud with master-slave mode; please confirm which setup you are on.
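As an illustration of the P.S.: config sets are pushed to ZooKeeper rather than edited on individual nodes, for example (the config name, path, and ZooKeeper host are placeholders):

    # Upload a config set to ZooKeeper; all SolrCloud nodes read it from there.
    bin/solr zk upconfig -n myconfig -d /path/to/conf -z zk1:2181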
Every time I start a new node in the Solr cluster, a shard or a shard replica is assigned to it automatically.
How can I specify which shard(s) should be replicated on this new node?
I'm trying to get to a configuration with 3 shards and 6 servers, one for each shard leader and 3 for the replicas, where shard1 has 3 replicas, one on each of those servers, while shard2 and shard3 have only one each.
How can this be achieved?
You can go to the Core Admin in the SolrCloud web GUI, unload the core that was automatically assigned to that node, and then create a new core, specifying the collection and the shard you want it assigned to. After you create that core, you should see in the cloud view that your node has been added to that specific shard, and after some time, that all documents of that shard have been synchronized with your node.
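A sketch of that Core Admin approach (the core, collection, and host names are placeholders; it relies on the legacy Core Admin parameters, and on recent Solr versions the Collections API ADDREPLICA call shown in an earlier answer is the supported route):

    # Unload the core that was assigned to the new node automatically.
    curl "http://newnode:8983/solr/admin/cores?action=UNLOAD&core=mycollection_shard2_replica1"

    # Create a core pinned to the shard you want replicated on this node.
    curl "http://newnode:8983/solr/admin/cores?action=CREATE&name=mycollection_shard1_replica4&collection=mycollection&shard=shard1"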