We are using SolrCloud as a search service, and currently we run it from the Windows command prompt, but I don't know how to set up SolrCloud as a Windows service in a production environment.
I referred to the document below for this,
http://www.norconex.com/how-to-run-solr5-as-a-service-on-windows/
but it does not work as expected for SolrCloud.
Can anybody please help me with this?
Thanks,
Santosh
You need to give some information on what is not working as expected.
The link shown looks like it will get you some of the way, but you obviously need to run a couple of instances, give them different home locations, and probably set up dependencies between the services to ensure that the one with ZooKeeper starts first. You should already have all of this from running it on the command line, so you should only need to put the corresponding parameters into the corresponding fields in the GUI.
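For instance, a rough command-line sketch using NSSM (a common Windows service wrapper; the service names, paths, ports, and the assumption that ZooKeeper itself is installed as a service called "ZooKeeper" are all placeholders to adapt to your setup):

nssm install ZooKeeper "C:\zookeeper\bin\zkServer.cmd"
nssm install SolrNode1 "C:\solr-5.2.1\bin\solr.cmd" "start -f -c -p 8983 -z localhost:2181 -s C:\solr-5.2.1\example\cloud\node1\solr"
nssm install SolrNode2 "C:\solr-5.2.1\bin\solr.cmd" "start -f -c -p 8984 -z localhost:2181 -s C:\solr-5.2.1\example\cloud\node2\solr"
nssm set SolrNode1 DependOnService ZooKeeper
nssm set SolrNode2 DependOnService ZooKeeper

The DependOnService setting is what makes Windows start the ZooKeeper service before either Solr service.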
Sorry for the late reply, I was on vacation.
I can run SolrCloud as a Windows service, but to do so I first need to create a complete SolrCloud instance setup using the command:
solr -e cloud -z localhost:2181
Then I need to stop all the Solr ports running in the command prompt by closing the command prompt.
Then I configure an individual Windows service for each running Solr port, as below:
restart -c -f -p 8984 -z 0.0.0.0:2181 -s "C:/solr-5.2.1/example/cloud/node1/solr"
and so on for every port.
This way I can configure each running Solr port as a Windows service.
But I want to know: is there any command in SolrCloud which will create the SolrCloud setup as well as run all Solr ports under a single Windows service instance?
A command like "solr -f -e cloud -z localhost:2181 -noprompt" does this, but it runs only on the single default port, i.e. 8983, and I want the SolrCloud setup to cover at least two ports, just like "solr -e cloud -z localhost:2181 -noprompt" configures by default in solr.cmd.
Thanks,
Santosh
Our developers are working with a local standalone Solr server, and we have many cores in that local Solr. Now we are planning to migrate it to SolrCloud on AWS infrastructure for replication purposes, with numShards:3 and replicationFactor:3. We don't need the data to be migrated from the local Solr server to AWS SolrCloud; we only need to transfer the cores from local Solr to collections in SolrCloud. I am a newbie at this, can you please help me with it?
1) In layman's terms, we only need to transfer the contents of each core's conf folder to a SolrCloud collection; we don't need to transfer the data (the data folder).
Answering my own question, so anyone can check it if the issue arises.
Solution:
1) Create a new collection in SolrCloud with a config set name matching the core's.
2) Move the conf folder of the core from the local standalone Solr server to the SolrCloud 'Collection' folder.
3) Run the zkcli.sh commands shipped with Solr from bash to upload the conf files to all SolrCloud servers.
cd /opt/solr/server/scripts/cloud-scripts/
bash zkcli.sh -cmd upconfig -confdir /opt/solr-7.4.0/server/solr/collectionname/conf/ -z IP1:2181,IP2:2181,IP3:2181 -confname confname
Reference : https://lucene.apache.org/solr/guide/6_6/using-zookeeper-to-manage-configuration-files.html#UsingZooKeepertoManageConfigurationFiles-UploadingConfigurationFilesusingbin_solrorSolrJ
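As an illustration of step 1, a minimal sketch of creating the collection against a running SolrCloud (the collection name and config name are placeholders, and -shards 3 -replicationFactor 3 simply mirrors the numbers mentioned in the question):

/opt/solr/bin/solr create -c collectionname -n confname -shards 3 -replicationFactor 3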
I have installed Apache Solr 6.6.2 on 2 different systems, and I have to run Solr in cloud mode, which I have done successfully. Now I want to create one shard with 2 replicas. For that I have run the following command:
bin/solr create -c myCollection -d use_configs -n conf1 -replicationFactor 2
At the time of the above command's execution, there was only one node live, so it created one replica and all index data resided in the corresponding Solr home. When I started the second Solr instance (on a separate machine), it replicated the index to the second machine as well (this was expected due to replication factor 2). But after that I had to replace the second machine with a new one. I did the same setup and ran the command below on the new machine:
bin/solr start -cloud -s tmp/solr -p 7900 -z zk-ip:2181
Solr on the new machine starts successfully, but the index is not replicated to this new machine. Is there any configuration I missed on this new system?
Also, the admin dashboard shows that only one replica is available, on the first system, with no indication of the second system. Why is Solr behaving like this? I think that if I add a new system, the index should be replicated to it, since I set the replication factor to 2 when creating the shard.
I have a Cassandra instance running on Docker, and I am wondering whether it is possible to use DataStax OpsCenter to monitor it.
To connect to my Cassandra instance I run:
$ docker run -it --rm cassandra:3.0.2 bash
$ cqlsh [MY_HOST] -u USERNAME -p PASSWORD
After installing OpsCenter I don't know what to put here:
In order to use OpsCenter to monitor your Cassandra instance, you will need to have the DataStax agent running on the Cassandra instance. You then add the IP address of the running Cassandra instance to the dialog box in your post. Click Save Cluster and OpsCenter will try to connect to your Cassandra instance.
If this is the free version of OpsCenter, it will have some limitations, like only managing a single-node instance, but I have done what you're asking, so you should be able to connect to your Cassandra instance and it should come up in OpsCenter.
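A rough sketch of the agent side, assuming a package-based install of the DataStax agent on the Cassandra host: point the agent at the OpsCenter machine in /var/lib/datastax-agent/conf/address.yaml (the IP below is a placeholder for the OpsCenter host),

stomp_interface: 10.0.0.5

and then restart the agent so it registers with OpsCenter:

sudo service datastax-agent restart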
Give it a try, hope this helps.
Pat
Hello,
I run a 2-machine setup with 5 ZooKeeper instances on it. I know that normally a minimum of 3 machines is required to run a small ZooKeeper quorum, but for now I need to start with these 2 machines. Now I want to create a script which autostarts all the ZooKeeper instances automatically in case of crashes or reboots. Ultimately I want to build a stable environment which automatically recovers the following services:
solr
solrcloud
zookeeper
shardallocation
Does somebody have any experience with this?
You require a good monitoring system for this. A simpler solution would be to write cron jobs for all these boxes. These cron jobs would run curl or wget commands and check the output. If the output of a command is not as expected, restart your services. Also add the services to your startup with /etc/init.d so they start on reboot.
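As a minimal sketch of such a cron check for one box (the ports, paths, collection name, and init script names are assumptions to adapt to your own layout):

#!/bin/bash
# /usr/local/bin/check_services.sh - restart Solr/ZooKeeper when their health checks fail

# Solr: the ping handler of one of your collections ("mycollection" is a placeholder)
if ! curl -sf "http://localhost:8983/solr/mycollection/admin/ping" > /dev/null; then
    /etc/init.d/solr restart
fi

# ZooKeeper: the "ruok" four-letter command should answer "imok"
if ! echo ruok | nc -w 2 localhost 2181 | grep -q imok; then
    /etc/init.d/zookeeper restart
fi

Scheduled every minute via crontab -e:

* * * * * /usr/local/bin/check_services.sh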
How do I update an existing SolrCloud configuration file in ZooKeeper?
I am using the Solr 4 beta version with ZooKeeper 3.3.6. I have updated a configuration file and restarted the Solr instance, which uploads the configuration file to ZooKeeper. But when I check the configuration file from the SolrCloud admin console, I don't see the updates. I cannot tell whether this is an issue with the SolrCloud admin console or whether I am not successfully uploading the config file to ZooKeeper.
Can someone who is familiar with ZooKeeper tell me how to update an existing configuration file in ZooKeeper, and how to verify the change there?
Solr 4 comes with some helpful scripts:
cloud-scripts/zkcli.sh -cmd upconfig -zkhost 127.0.0.1:2181 -d solr/your_default_collection_with_the_config/conf/ -n config_name_used_by_all_collections
After that you have to reload cores.
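For example, a core can be reloaded through the CoreAdmin API, and you can verify what actually landed in ZooKeeper by pulling the config back down with downconfig (the core name, config name, and target directory below are placeholders):

curl "http://localhost:8983/solr/admin/cores?action=RELOAD&core=your_core_name"
cloud-scripts/zkcli.sh -cmd downconfig -zkhost 127.0.0.1:2181 -d /tmp/conf_check -n config_name_used_by_all_collections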
SolrCloud provides two options for uploading configuration files to ZK. If you have multiple cores, pass the option -Dbootstrap_conf=true at startup. This will upload the index configuration files for all the cores. If you only want to upload the configuration files of one core, pass two startup parameters: -Dbootstrap_confdir and -Dcollection.configName.
I had multiple cores defined in the instance. You would have to upload each configuration by changing the -Dcollection.configName argument and restarting the Solr instance every time.
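A hedged sketch of how those parameters might be passed when starting the Solr 4 example (the ZooKeeper address, conf directory path, and config name are placeholders):

# upload the configs of all cores defined in solr.xml
java -DzkHost=localhost:2181 -Dbootstrap_conf=true -jar start.jar

# upload only one core's conf directory under a chosen config name
java -DzkHost=localhost:2181 -Dbootstrap_confdir=./solr/mycore/conf -Dcollection.configName=myconf -jar start.jar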