I am managing a SolrCloud cluster with 4 Solr nodes and 5 ZooKeeper instances. The Solr version is 4.4.
We are planning an upgrade to the latest release of Solr 7.
Solr node1 is started with bootstrap_conf=true. For any update to a schema or configset, we change the configset on node1 and restart it.
My issue is that I don't see that option in Solr 7. We have around 200 cores, most with their own configset. I have read that with Solr 7, ZooKeeper stores the configsets, but I don't see where ZooKeeper will load them from.
If I shut down my entire SolrCloud (including ZooKeeper), how do I reload the configsets into ZooKeeper? Do I need to track which configset is used by each core? Or is there an option like bootstrap_conf in Solr 7 that will load the respective conf into ZooKeeper?
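For reference, a minimal sketch of how configsets are typically uploaded to ZooKeeper with Solr 7's bin/solr zk tooling; the directory layout, hostnames, and configset names below are hypothetical placeholders, not from the original setup:

# Hypothetical: upload one configset per directory to ZooKeeper (Solr 7).
# /opt/solr-configs/<name>/ is assumed to hold each core's conf files.
for conf in /opt/solr-configs/*; do
  name=$(basename "$conf")
  bin/solr zk upconfig -z zk1:2181,zk2:2181,zk3:2181 -n "$name" -d "$conf"
done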
I'm trying to index data from an HBase table using the Lucidworks HBase Indexer, and I would like to know whether Solr, the HBase Indexer, and HBase have to use the same ZooKeeper.
Can my Solr instance be independent, with HBase and the HBase Indexer reporting to one ZooKeeper while Solr reports to its own ZooKeeper?
I am following the URL below:
https://community.hortonworks.com/articles/1181/hbase-indexing-to-solr-with-hdp-search-in-hdp-23.html
It is up to you whether to go with the same ZooKeeper or a separate, independent one.
For a production HBase setup, ZooKeeper recommends a 3-node ensemble, which means three ZooKeeper servers are required anyway, so you can reuse the same servers for Solr.
This helps reduce the number of servers.
That said, ZooKeeper is a lightweight service that coordinates the Solr cluster, so for a production run it is good practice to keep ZooKeeper outside the Solr servers.
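If you do share one ensemble, a common approach is to give Solr its own chroot path so its znodes stay separate from HBase's. A minimal sketch for solr.in.sh, assuming hypothetical hostnames:

# Shared ensemble sketch: HBase uses the root, Solr gets its own chroot.
# Hostnames are placeholders.
ZK_HOST="zk1:2181,zk2:2181,zk3:2181/solr"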
We have two Solr environments in production.
One Solr environment has the latest two years of data. The other has the last 10 years of archived data.
At the moment, these two Solr environments connect to separate ZooKeeper ensembles.
The collections have the same names and configurations in both Solr environments.
We want to reduce the number of servers used for ZooKeeper.
Is it feasible to have both production Solr environments connect to one ZooKeeper ensemble without overwriting each other's configs?
Or is it mandatory to have a separate ZooKeeper ensemble for each Solr environment?
You can use the same ZooKeeper ensemble to handle more than one Solr or SolrCloud instance.
However, the data must be kept separate. This is (probably) best done by using the "chroot" functionality in ZooKeeper.
Essentially, when you create the "space" in ZooKeeper for your Solr instance, you append a /some_thing_unique and keep that in the appropriate config files in Solr - then you should have no trouble.
I haven't tried moving an existing Solr instance from one ZooKeeper to another; I'd guess you would have to take Solr down, change the configs, set up the collection etc. in ZooKeeper, and restart Solr. For sure I'd get that all worked out in a test environment before doing it live.
Hope that helps...
Oh, here's how I did it when creating a new collection in ZooKeeper... You'll note I gave it a name (the name of my collection) as well as noting which version of Solr I was using. This allows me to install later versions of Solr and move my collection to the later version while keeping it all in the same ZooKeeper ensemble...
/opt/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost 10.196.12.103,10.196.12.104,10.196.22.103 -cmd makepath /myCollectionName_solr6_2
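To point Solr at that path afterwards, the chroot is appended to the ZooKeeper connection string. A sketch reusing the same hosts (the start command is illustrative, not from the original setup):

# Start Solr in cloud mode against the chrooted path created above (sketch):
bin/solr start -c -z "10.196.12.103,10.196.12.104,10.196.22.103/myCollectionName_solr6_2"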
I had to take a single Solr 4.3.1 index and migrate it to SolrCloud 5.2.1.
The new 5.2.1 architecture is 2 shards, each shard having 1 master and 1 slave (replica). My steps were:
Set up a new single-shard SolrCloud 5.2.1.
Take the big index data folder and migrate it into the single-shard SolrCloud 5.2.1, with 1 shard/core.
Split the index (see the sketch after this list).
Copy the 2 data folders to a new SolrCloud 5.2.1 installation (a fresh installation with 2 shards, each with 1 master and 1 replica).
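For step 3, one way to perform the split on the interim single-shard SolrCloud is the Collections API SPLITSHARD action; a hedged sketch, with the collection name and host as placeholders:

# Split shard1 of a collection named "mycollection" into two sub-shards (names are placeholders):
curl "http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=mycollection&shard=shard1"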
I also keep the configuration (schema.xml/solrconfig.xml) in a single ZooKeeper instance (I am aware that 1 ZK is not recommended).
Everything went fine: the shards are running, the replicas contain the data, and I can query the data from the new 2-shard SolrCloud 5.2.1.
When I add a new document, replication does not work. I have defined a ReplicationHandler, but I cannot determine who is master and who is slave, since this is managed in ZooKeeper, and ZooKeeper is responsible for all the configuration. I have 2 masters and 2 slaves, and I can't decide who is master and who is slave; that is the goal of the election process.
What should I do? Do I understand the process correctly?
I have read this: How do I configure Solr replication with multiple cores.
But my problem is that I am using ZooKeeper.
The flow is valid and finally worked.
The reason replication did not work was an invalid parameter in the updateLog section of solrconfig.xml.
The default solrconfig.xml contains the following in the updateLog:
<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
</updateLog>
The Solr guide instructs you to use the following when using legacy configuration files (and now it works):
<updateLog>
  <str name="dir">${solr.data.dir:}</str>
</updateLog>
It is advisable to read the following and mind all the details carefully:
https://cwiki.apache.org/confluence/display/solr/SolrCloud+with+Legacy+Configuration+Files
I have a SolrCloud setup running on my local machine with the internal ZooKeeper (i.e., the ZooKeeper embedded in Solr) on a single node.
My question is: when I move Solr to the production environment, is it recommended to run ZooKeeper as an isolated/separate/external instance, or is it better to go with the internal ZooKeeper instance that comes along with Solr?
Using Solr's internal ZooKeeper is discouraged for production environments. This is even stated in the SolrCloud documentation:
Although Solr comes bundled with Apache ZooKeeper, you should consider yourself discouraged from using this internal ZooKeeper in production, because shutting down a redundant Solr instance will also shut down its ZooKeeper server, which might not be quite so redundant. Because a ZooKeeper ensemble must have a quorum of more than half its servers running at any given time, this can be a problem.
The solution to this problem is to set up an external ZooKeeper ensemble. You should create this ensemble on different machines, so that if any Solr machine goes down it does not affect ZooKeeper or the rest of the Solr instances. I know you are currently running a single Solr instance.
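As an illustration, once an external ensemble is up, Solr is pointed at it via the ZK_HOST setting in solr.in.sh; a minimal sketch with hypothetical hostnames:

# Point Solr at the external 3-node ensemble (hostnames are placeholders):
ZK_HOST="zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181"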
As mentioned, running the internal ZooKeeper inside Solr is not a good idea for production, but for development it is entirely OK and very practical. For that, you just need to add these lines to your /etc/default/solr.in.sh file:
SOLR_MODE=solrcloud
ZK_CREATE_CHROOT=true
As an alternative, you can also start Solr manually with the command $SOLR_HOME_DIR/bin/solr start -c
Tested with Apache Solr 9 on a Debian-based Linux.
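To confirm the node actually came up in cloud mode, one quick check is the Collections API cluster status (a sketch; the default port is assumed):

# Query cluster status via the Collections API (default port assumed):
curl "http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS"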
I am going to move my old DataImportHandler configuration from Solr 4.3 to SolrCloud 5.0.
I have already deployed ZooKeeper on 3 virtual machines, and they are all communicating with each other. I have read about nodes, collections, shards, and replicas, but I cannot work out how to put my old DIH configurations into ZooKeeper. Currently I have 5 different DIH configurations which I need to move into SolrCloud. Does that mean I have to create 5 nodes or collections? Yes, I am confused here.
Thanks in Advance!
There is no need for an extra node for the configuration. SolrCloud is built around collections, which are sharded across the nodes, and you can create replicas of them.
These are the steps you need to follow for SolrCloud:
Run ZooKeeper
Run the Solr nodes with ZooKeeper
Upload the configuration to ZooKeeper
Create a collection referring to that configuration
To upload the configuration to ZooKeeper and create the collection:
Create a /opt/solrlibs directory
Copy /opt/solr/server/solr-webapp/webapp/WEB-INF/lib/* to it
Copy /opt/solr/server/lib/ext/* to it
Run the command java -classpath .:/opt/solrlibs/* org.apache.solr.cloud.ZkCLI -cmd upconfig -zkhost 192.168.1.1:2181,192.168.1.2:2181,192.168.1.3:2181 -confdir /opt/solrconfigs/test/conf -confname testconf
Create the collection using the following command: http://192.168.1.4:8080/solr/admin/collections?action=CREATE&name=test&collection.configName=testconf&numShards=2&replicationFactor=2
numShards and replicationFactor should be chosen based on the number of nodes you have.
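As a worked example (the node count here is hypothetical): numShards=2 with replicationFactor=2 creates 4 cores in total (2 shards x 2 replicas), so on a 4-node cluster each node hosts one core. For the 5 DIH configurations from the question, you would repeat the upconfig and CREATE steps once per configuration, ending up with 5 collections, not 5 extra nodes.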