I have some questions regarding SolrCloud:
If I send a request directly to a Solr node that belongs to a Solr cluster, does it delegate the query to the ZooKeeper ensemble to handle it?
I want to have a single URL to send requests to SolrCloud. Is there a better way of achieving this than setting up an external load balancer that balances directly between individual Solr nodes? If 1 isn't true, this approach seems like a bad idea. On top of that, I feel like it would somewhat defeat the purpose of the ZooKeeper ensemble.
There is an option to break up a collection in shards. If I do so, how exactly does SolrCloud decide which document goes to which shard? Is there a need and/or an option to configure this process?
What happens if I send a collection of documents directly to one of the Solr nodes? Would the data set somehow distribute itself evenly across the shards? If so, how does that happen?
Thanks a lot!
Zookeeper "just" keeps configuration data available for all nodes - i.e. the state of the cluster, etc. It does not get any queries "delegated" to it; it's just a way for Solr nodes and clients to know which collections are handled by which nodes in the cluster, and to have that information stored in a resilient and available manner (i.e. delegating the hard part of managing cluster state to Zookeeper).
The best option is to use a cloud-aware Solr client - it will connect to any of the available Zookeeper nodes given in its configuration, retrieve the cluster state and connect directly to one of the nodes that has the information it needs (i.e. the collection it needs to query). If you can't do that, you can either load balance with an external load balancer across all nodes in your cluster, or let the client load balance if the client you use supports round robin, etc. - but having an external load balancer gives you other gains (such as being able to remove a node from load balancing for all clients at the same time, having dedicated HTTP caching in front of the nodes, etc.) for a bit more administration.
It will use the unique id field to decide which shard a given document should be routed to. You don't have to configure anything, but you can tell Solr to use a specific field, or a specific prefix of the id, etc. as the routing key. See Document Routing for specific information. It allows you to make sure that all documents that belong to a specific client/application are placed on the same shard (which is important for some calculations and possible operations).
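To illustrate the idea behind prefix routing, here's a toy sketch. It is not Solr's actual CompositeIdRouter (which uses MurmurHash3 to map the id, or the part before "!", into a shard's hash range); MD5 is used here only to keep the example self-contained.

```python
import hashlib

# Toy illustration of prefix-based routing. Solr's real CompositeIdRouter
# hashes the id (or the part before "!") with MurmurHash3 into the hash
# range owned by a shard; MD5 is used here only to keep the sketch short.
def pick_shard(doc_id: str, num_shards: int) -> int:
    route_key = doc_id.split("!", 1)[0]  # "tenantA!doc1" routes by "tenantA"
    h = int(hashlib.md5(route_key.encode("utf-8")).hexdigest(), 16)
    return h % num_shards

# All documents with the same "tenant!" prefix land on the same shard:
print(pick_shard("tenantA!doc1", 4) == pick_shard("tenantA!doc2", 4))  # True
```

The point is only that the routing key, not the whole id, determines the shard - which is what lets you co-locate all of one client's documents.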
It gets routed to the correct node. Whether that is evenly depends on your routing key, but by default, it'll be about as even as you can get it.
Related
I am setting up a distributed search with shards in SOLR.
Which server should I send this request to? Or does it not matter?
host1:8983/solr/core?q=*:*&shards=host1:8983/solr/core,host2:8983/solr/core
vs
host2:8983/solr/core?q=*:*&shards=host1:8983/solr/core,host2:8983/solr/core
Similarly, would it be a better idea to have a separate empty solr server to direct these searches to instead of using one of the shards?
Unless you're seeing performance issues I wouldn't be too concerned about the performance difference between those two. The queries will run on both servers anyway; it'll just be a different server that's responsible for merging the end result and returning it to the client. If you want to spread this load across both servers, that's fine - in that case I'd go with alternating between both in a round robin manner (for example by placing an HTTP load balancer in front, or letting your Solr library load balance between the available servers).
If you start getting replicas into the mix it becomes harder to keep track of, and that's where a load balancer will be useful. In that case it might be a good idea to look into Solr in cloud mode instead, where Solr will handle all this for you transparently (both load balancing and replica balancing, as long as your library is Zookeeper aware).
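If you do the alternating client-side, the round-robin part is simple; a minimal sketch (the host names are made up for illustration):

```python
from itertools import cycle

# Hypothetical Solr base URLs to alternate between; a real client would
# also retry against the next host when a request fails.
hosts = cycle(["http://host1:8983/solr/core", "http://host2:8983/solr/core"])

def next_host() -> str:
    """Return the next Solr base URL in round-robin order."""
    return next(hosts)
```

Each call hands back the other server, so the merge work described above alternates between the two.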
I have searched a lot but unfortunately still have some simple confusion about SolrCloud. Let's say I have three systems where SolrCloud is configured (1 master and 2 slaves) and an external Zookeeper ensemble on the same three machines to form a quorum. The system names are
master
slave1
slave2
Public-Front
The Public-Front is the system where I have configured HAProxy. It receives requests from the WWW and then sends them to backend servers depending on ACLs.
According to my understanding, if I send a request to a Solr collection (i.e., master), it routes the request to the slaves and it is hence load balanced. There is no need to specify the slaves here. Isn't it?
Now in Public-Front, should I configure each Solr node as a separate backend to load balance, or just the master system?
Now if I only configure the master system as the solr-server in HAProxy, then if that server (master) goes down, I think I cannot get service from Solr through HAProxy (although the slaves are still up, they are not configured in HAProxy).
Where am I wrong and what is the best approach ?
There is no traditional master or slave in SolrCloud - there is a set of replicas, one of which is elected as the leader. The leader selection is automagic - i.e. the first replica that says it wants to be the leader receives that status. This is per-collection state. In your example there are three replicas, one of which is designated as the leader. If that replica disappears, one of the two remaining replicas becomes the new leader, and everything continues as normal. The role of the leader is to be the up-to-date version of the index and handle any updates - first to its own index, then routing those updates to the replicas.
There are also several types of replicas, and not all of them are suited to be promoted to a leader - but in the default configuration they can be.
Here's the thing - since there isn't really a master, all three indexes contain the same data and they all are replicas of the same shard, the request won't have to be routed through the master. If you're using a dumb haproxy, you can safely spread the requests across all three nodes and they should be able to answer the query without contacting any other nodes (as long as they all contain all the shards of the collection).
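A minimal HAProxy sketch for that "dumb" setup, spreading requests across all three nodes (host names are taken from the question; the port and the health-check path are assumptions to adjust for your install):

```
frontend solr_front
    bind *:8080
    mode http
    default_backend solr_nodes

backend solr_nodes
    mode http
    balance roundrobin
    # Drop a node from rotation when its ping handler stops answering
    option httpchk GET /solr/admin/ping
    server master master:8983 check
    server slave1 slave1:8983 check
    server slave2 slave2:8983 check
```

Since every node holds a full copy of the index, any of the three can answer any query on its own.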
However, if you're using SolrJ or another Zookeeper-aware client, the client will keep in touch with Zookeeper instead and read the state information for your cluster. That allows the client to know which servers are currently replicas for your collection, and to contact any of the nodes that it can see have the required information for your query. In your case the result will be the same, except that your client will know not to connect to any nodes that disappear and will automagically know about nodes that are added to the cluster.
The "one Solr node routing requests to a different node" is only relevant if the node you're contacting doesn't have any replicas for the collection you're querying - i.e. it'll have to contact a different node to fetch that content. In that case a node-to-node request will happen inside the cluster and the load on the cluster will be slightly higher than necessary. When the collection is replicated to all three nodes - or when you're using SolrJ - that extra request should not happen.
As described in the title, can Solr provide search service while reindexing?
If not, is there a solution covering such a scenario?
Until you commit (or auto-commit), no changes submitted to Solr are visible to the client. So you can continue providing the search service. After commit, the searcher is reopened and the clients will see new content.
If the changes are significant, you may consider building them in a separate collection from scratch (on the same or on different server) and then - after it is done - swapping the cores for standalone or changing the alias for the SolrCloud configuration. Either approach will keep the same name but point to your new collection.
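Both switch mechanisms mentioned above are one HTTP call; a sketch of each (the core, alias and collection names are placeholders):

```
# Standalone: swap the freshly rebuilt core with the live one (CoreAdmin API)
http://localhost:8983/solr/admin/cores?action=SWAP&core=live_core&other=rebuilt_core

# SolrCloud: repoint an alias at the newly built collection (Collections API);
# CREATEALIAS on an existing alias name moves it atomically
http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=search&collections=products_v2
```

Clients keep querying the same name throughout, so the cutover is invisible to them.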
Hi, I'm developing a Rails project with Sunspot Solr and configuring SolrCloud.
My environment: rails 3.2.1, ruby 2.1.2, sunspot 2.1.0, Solr 4.1.6.
Why SolrCloud: I need a more stable system - oftentimes the search server goes down for maintenance and the web application stops working in production. So I'm thinking about how to run 2 identical search servers instead of one, to make the system more stable: if one server goes down, the other will continue working.
I cannot find any good tutorial that is simple, easy to understand and described in detail...
I'm trying to set up SolrCloud on two servers, but I do not fully understand how it works inside:
how is data synchronized between the two servers (is it an automatic action?)
how are search requests balanced between the two servers?
when one server suddenly stops working, does the other become the master automatically?
are there SolrCloud features other than those listed?
Read more about SolrCloud here: https://wiki.apache.org/solr/SolrCloud
A couple of inputs from my experience.
If your application just reads data from SOLR and does not write to SOLR in real time (but you index using an ETL or similar), then you can just go for a master-slave hierarchy.
Define one master: point all writes here. If this master is down, you will no longer be able to index data.
Create 2 (or more) slaves: this is a feature of SOLR, and it will take care of synchronizing data from the master at the interval you specify (say, every 20 seconds).
Create a load balancer over the slaves and point your application to read data through the load balancer.
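The pull-based synchronization in the second step is configured on each slave in solrconfig.xml; a minimal sketch (the masterUrl, core name and 20-second poll interval are placeholders to adapt):

```xml
<!-- On each slave core: poll the master for index changes every 20 seconds -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master:8983/solr/core1/replication</str>
    <str name="pollInterval">00:00:20</str>
  </lst>
</requestHandler>
```

The master side only needs the same handler enabled with replication turned on; the slaves do the polling.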
Pros:
With the above setup, you don't have high availability for the master (data writes), but you will have high availability for reads until the last slave goes down.
Cons:
Assume one slave went down and you brought it back after an hour; this slave will be behind the other slaves by one hour. So it's a manual task to check for data consistency with the other slaves before adding it back to the ELB.
How about SolrCloud?
There is no master here, so you can achieve high availability for writes too.
No need to worry about the data inconsistency described above; the SolrCloud architecture will take care of that.
What suits you best:
Define an external Zookeeper quorum with 3 nodes.
Define at least 2 SOLR servers.
Split your current index into 2 shards (by default, each shard will reside on one of the 2 solr nodes defined in step 2).
Define the replication factor as 2 (this will create a replica of each shard on each node).
Define an LB to point to the above solr nodes.
Point your Solr indexing as well as your application to this LB.
With the above setup, you can sustain a failover of either node.
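The shard/replica layout from those steps can be created in one Collections API call; a sketch (host and collection names are placeholders - maxShardsPerNode is needed so each of the 2 nodes may host both shards):

```
http://solr1:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=2&replicationFactor=2&maxShardsPerNode=2
```

With 2 shards at replication factor 2 across 2 nodes, each node ends up with one replica of each shard, so either node alone can still serve the whole collection.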
Let me know if you need more info on this.
Regards,
Aneesh N
-Let us learn together.
I'm working on a .NET application that uses Solr as its search engine. I configured a SolrCloud installation with two servers (one for a replica) and I didn't split the index into shards (number of shards = 1). I have read that SolrCloud (via Zookeeper) can do some load balancing, but I didn't understand how. If I call a specific address where an instance of Solr is deployed, the query appears only in the logs of that specific server.
In the SolrCloud documentation I've found the following:
Explicitly specify the addresses of shards you want to query, giving alternatives (delimited by |) used for load balancing and fail-over:
http://www.ipaddress.com:8983/solr/collection1/select?shards=www.ipaddress.com:8983/solr|www.ipaddress.com:8900/solr,www.ipaddress.com:7574/solr|www.ipaddress.com:7500/solr
I'm wondering if I can use this notation to force load balancing even if I have an entire index (only one shard), and in that case how the load balancer works.
UPDATE: I've tested this solution and it works. Adding the various shard addresses in the "shards" parameter, separated by the "|" character, forces Solr to call the internal load balancer (LBHttpSolrServer), which performs simple round-robin balancing.
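For a single-shard index, the syntax from the documentation reduces to listing the replicas of that one shard as alternatives; a sketch with assumed host names:

```
# One shard, two replicas: "|" gives Solr alternatives to balance/fail over between
http://host1:8983/solr/collection1/select?q=*:*&shards=host1:8983/solr/collection1|host2:8983/solr/collection1
```

Either replica can then end up serving the shard request, regardless of which node received the query.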
Thanks for your help.
Regards,
Jacopo
Since you only have a single shard, the server receiving the request will respond with the result; it will not perform another request to the other replica when it has the data locally. The Java CloudSolrServer client connects to ZooKeeper, knows which servers are up or down, and will perform load balancing appropriately across all active servers. I don't believe there are any .NET ports available for this specific client.