I have a collection in SolrCloud that was created using ZooKeeper-managed configuration, and I want to retrieve all of the configuration files that were used to create it. Here are the options I found:
Copy all files manually from the SolrCloud UI.
Solr UI -> Cloud -> Tree -> /collections/<collection-name>
Download the files from ZooKeeper:
/opt/solr/server/scripts/cloud-scripts/zkcli.sh -cmd downconfig -zkhost <zk hosts>/usecasedir -confname <configuration name> -confdir <dir to download>
The 2nd option would save me a lot of time, but the problem is that my ZooKeeper has a huge list of configurations and I am not sure which configuration directory was used to create the collection.
Is there any way to figure out which configuration was used to create the collection?
The info about which config was used to create a collection is stored in ZooKeeper itself. Some bash scripting (using the great jq utility) is enough to do what you need.
Find what config was used for the given XXX collection:
CONFIGNAME=$(curl -L -s "http://localhost:8983/solr/admin/zookeeper?detail=true&path=/collections/XXX" | jq '.znode.data' | cut -d ":" -f2 | tr -d '}"\\')
now download the config:
/opt/solr/bin/solr zk downconfig -n $CONFIGNAME -d config$CONFIGNAME -z localhost:2181
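If your Solr version supports the Collections API CLUSTERSTATUS action, that is another way to read the config name; a rough sketch, assuming the same localhost instance and a collection named XXX (the jq path is my assumption about the response layout):
curl -s "http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS&collection=XXX" | jq -r '.cluster.collections.XXX.configName'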
Sets of configuration are usually found in a directory called /configs. If the ZooKeeper is dedicated to Solr, this is usually at the top level; if it's used by multiple applications, it's common to "zk chroot" the configs to a subdirectory.
Once you find the right location in ZooKeeper, one directory in the configs directory should match the name shown as "config-name" in the admin UI under Collections > name_of_your_collection.
If your project uses Gradle, uploading/downloading configs from the project (where you might want to check these things into version control) can be smoothed somewhat by a plugin (disclaimer: I wrote this plugin):
https://plugins.gradle.org/plugin/com.needhamsoftware.solr-gradle
There's an additional complication to be aware of, however: if the collection is using a managed schema, the actual schema in use will not be in schema.xml, but in a file called "managed-schema".
Fields may have been added via the Schema REST API, so "files used to create the collection" is a bit fuzzy in that respect, but the managed-schema file can be renamed to schema.xml and the Solr config modified to take things out of managed mode if you want.
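A minimal sketch of taking a downloaded config out of managed mode, assuming hypothetical paths (you also need to point the schemaFactory in solrconfig.xml at ClassicIndexSchemaFactory before re-uploading):
cd /path/to/downloaded/conf          # wherever downconfig put the files
cp managed-schema schema.xml         # keep the content under the classic name
# then edit solrconfig.xml to use <schemaFactory class="ClassicIndexSchemaFactory"/>
# and upload the config again with upconfig before reloading the collection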
My first instance:
sudo bin/solr start -p 8983 -s ../coaps
My second instance:
sudo bin/solr start -p 8984 -s ../newcoaps
Using the Python http utility, I verified the connections:
http :8983/solr/
http :8984/solr/
I can ping my first one with :8983/solr/samos/admin/ping/ but I can NOT ping the other one because the core located in ../newcoaps is not added upon startup.
The ../newcoaps directory looks like this before I started up Solr:
ls -R ../newcoaps/
../newcoaps/:
samos solr.xml
../newcoaps/samos:
conf data
../newcoaps/samos/conf:
schema.xml solrconfig.xml
../newcoaps/samos/data:
I copied the files in here directly from my other instance, which is running smoothly. Everything is default except for several fields I defined.
In the web browser, I see that the second instance has no cores, so I tried to add it manually but I get this response:
Error CREATEing SolrCore 'new_core': Unable to create core [new_core] Caused by: Can't find resource 'synonyms.txt' in classpath or '/opt/solr/newcoaps/samos'
What is going on here, and why is that file important enough to prevent me from adding this core? What steps can I take to figure out a solution to this problem?
Your schema (schema.xml) is referencing the synonyms.txt file (in a SynonymFilter definition). Remove the filter from the configuration if you're not expanding synonyms, or create an empty file named synonyms.txt to allow the core to start up.
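For example, assuming the paths from the question, an empty file is enough to let the core load:
touch ../newcoaps/samos/conf/synonyms.txt   # empty synonyms file next to schema.xml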
As a possible explanation: If you started the first node without a schema.xml present the first time, it might have switched to using the managed schema functionality instead of reading the schema.xml, but when starting the second node with the schema present, it'll try to read and parse it.
I have a SOLR / Zookeeper / Kafka setup. Each on separate VMs.
I have successfully run this all using two SOLR 4.9 vms (Ubuntu)
Now I wish to build two SOLR 5.4 vms and get it all working again.
Essentially, "Upgrade by Replacement"
I have "hacked" a solution to my problem but that makes me very nervous.
To begin, Zookeeper is running. I turn off my SOLR 4.9 vms and delete the config out of Zookeeper (not necessarily in that order... ;-) )
Now, I start up my 'solr5' VM (and SOLR in cloud mode) where I have installed SOLR 5.4 according to the "Production Install" instructions on the SOLR Wiki. I have also installed 5.4 on 'solr6', but it's not running yet.
I issue this command on the 'solr5' machine:
/opt/solr/bin/solr create -c fooCollection -d /home/john/conf -shards 1 -replicationFactor 1
and I get the following output:
Connecting to ZooKeeper at 192.168.56.5,192.168.56.6,192.168.56.7/solr ...
Re-using existing configuration directory statdx
Creating new collection 'fooCollection' using command:
http://localhost:8983/solr/admin/collections?action=CREATE&name=fooCollection&numShards=1&replicationFactor=1&maxShardsPerNode=1&collection.configName=fooCollection
{
  "responseHeader":{
    "status":0,
    "QTime":3822},
  "success":{"":{
      "responseHeader":{
        "status":0,
        "QTime":3640},
      "core":"fooCollection_shard1_replica1"}}}
Everything is working great. I turn on my microservice, and it pumps all my SOLR docs from Kafka into 'solr5'.
Now, I want to add 'solr6' to the collection. I can't find a way to do this besides my hack (which I'll describe later).
The command I used before to create a collection errors out with the observation that my collection already exists.
There seems to be no zkcli.sh or solr command that will do what I want. None of the api commands seem to do this either.
Is there not a simple way to say to (SOLR? Zookeeper?) I want to add another machine to my SOLR nodes, please configure it like the first (solr5) and begin replicating data?
Maybe I should have had both machines running when I issued the create command?
I'd be grateful for some "approved" method for doing this since I need to come up with a "solution" to do the same kind of approach in Prod every time there is a need to upgrade SOLR.
Now for my hack. Keep in mind I'm now two days trying to find clear docs on this. No flames please, I totally get that this is not the way to do things. At least, I HOPE this is not the way to do things...
1. Copy the fooCollection directory from where the create collection command put it on 'solr5' (which was /opt/solr/server/solr/fooCollection_shard1_replica1) to the same location on my 'solr6' VM.
2. Make what changes seem logical to the collection directory name (it becomes fooCollection_shard1_replica2).
3. Make what changes seem logical in the core.properties file:
For reference, here's the core.properties file that was created by the create command.
#Written by CorePropertiesLocator
#Wed Jan 20 18:59:08 UTC 2016
numShards=1
name=fooCollection_shard1_replica1
shard=shard1
collection=fooCollection
coreNodeName=core_node1
Here is what the file looked like on 'solr6' when I was done hacking.
#Written by CorePropertiesLocator
#Wed Jan 20 18:59:08 UTC 2016
numShards=1
name=fooCollection_shard1_replica2
shard=shard1
collection=fooCollection
coreNodeName=core_node2
When I did this and rebooted 'solr6' everything appeared golden. The "Cloud" web page looked right in the Admin web page - and when I added documents to 'solr5' they were available in 'solr6' if I hit it directly from the Admin web pages.
I would be grateful if someone can tell me how to achieve this without a hack like this... or if this IS the right way to do this...
=============================
In answer to #Mani and the suggested procedure
Thanks Mani - I did try this very carefully following your steps.
In the end, I get this output from the collection status query:
john#solr6:/opt/solr$ ./bin/solr healthcheck -z 192.168.56.5,192.168.56.6,192.168.56.7/solr5_4 -c fooCollection
{
  "collection":"fooCollection",
  "status":"healthy",
  "numDocs":0,
  "numShards":1,
  "shards":[{
      "shard":"shard1",
      "status":"healthy",
      "replicas":[{
          "name":"core_node1",
          "url":"http://192.168.56.15:8983/solr/fooCollection_shard1_replica1/",
          "numDocs":0,
          "status":"active",
          "uptime":"0 days, 0 hours, 6 minutes, 24 seconds",
          "memory":"31 MB (%6.3) of 490.7 MB",
          "leader":true}]}]}
This is the kind of result I've been finding in my experimentation all along. The core gets created on one of the SOLR VMs (the one where I issue the create command), but nothing gets created on the other VM -- which, based on your steps below, I believe you also expected to happen, yes?
Also, I'll note for anyone reading that in 5.4, the command is "healthcheck" and not healthstatus. The command line shows you immediately, so it's no big deal.
===============
Update 1 :: Manual add of 2nd core
If I go to the other VM and manually add the following:
sudo mkdir /opt/solr/server/solr/fooCollection_shard1_replica2
sudo mkdir /opt/solr/server/solr/fooCollection_shard1_replica2/data
nano /opt/solr/server/solr/fooCollection_shard1_replica2/core.properties
(in here I add only collection=fooCollection and then save/close)
Then I reboot my SOLR server on that same VM:
sudo /opt/solr/bin/solr restart -c -z zoo1,zoo2,zoo3/solr
I will find a second node magically appearing in my Admin console. It will be a "follower" (i.e. not the leader) and both will be branching off "shard1" in the cloud UI.
I don't know if this is "the way" but it's the only way I've found so far. I'm going to reproduce to that point and try with the Admin UI and see what I get. That would be a little easier for my IT guys when the time comes - if it works.
===============
Update 2 :: Slight modification of create command
#Mani -- I believe I have success following your steps - and like many things, it's simple once you understand.
I reset everything (deleted directories, cleared out ZooKeeper with rmr /solr) and redid everything from scratch.
I changed the "create" command slightly thus:
./bin/solr create -c fooCollection -d /home/john/conf -shards 1 -replicationFactor 2
Note the "replicationFactor 2" rather than 1.
Suddenly I did indeed have cores on both VMs.
A couple of notes:
I found that I couldn't get a happy result from the status call just by starting the SOLR 5.4 servers in Cloud mode with the Zookeeper IP addresses. The "node" in Zookeeper was not yet created.
The create command also failed at that point.
The way I found around this was to use the zkcli.sh to load the configs like this:
sudo /opt/solr/server/scripts/cloud-scripts/zkcli.sh -cmd upconfig -confdir /home/john/conf/ -confname fooCollection -z 192.168.56.5/solr
When I checked Zookeeper immediately after running this command, there was a /solr/configs/fooCollection "path".
NOW the create command works and I assume that if I had wanted to override the configs, I could have done so at that point although I haven't tried.
I'm not positive at what point, but it seems I needed to restart the SOLR servers (probably after the create command) in order to find everything on status etc... I may be misremembering that because I've been through it so many times. If in doubt after the create command, try restarting the servers with one of the commands below (the ZooKeeper hosts can be IP addresses or names that resolve correctly):
sudo /opt/solr/bin/solr restart -c -z zoo1,zoo2,zoo3/solr
sudo /opt/solr/bin/solr restart -c -z 192.168.56.5,192.168.56.6,192.168.56.7/solr
After doing these slight modifications to #Mani's recommended procedure, I get a Leader and a "follower" each on different VM's - in the /opt/solr/server/solr directory (fooCollection in this case) and I was able to send data in to one and search the other via the Admin console hitting the IP addresses.
=============
Variations
One thing anyone reading this may want to try is simply making another "node" in Zookeeper (solr5_4 for example).
I tried this and it works like a charm. Everywhere you see the /solr chroot associated with the Zookeeper ensemble, you could replace it with /solr5_4. This would allow the older SOLR VM's to keep functioning in Prod while you build out your new SOLR 5.4 "environment" and the same Zookeeper VM's could be used for both -- because a different chroot should guarantee no interaction or overlap.
Again, the "node" in Zookeeper won't be created until you do the config upload, but you need to start your SOLR process like this or you'd be in the wrong context later on. Note the "solr5_4" as the chroot.
sudo /opt/solr/bin/solr restart -c -z zoo1,zoo2,zoo3/solr5_4
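If you want the chroot znode to exist before any config upload, zkcli.sh also has a makepath command; a sketch, assuming the ZooKeeper hosts from this setup listen on the default port 2181:
sudo /opt/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost 192.168.56.5:2181,192.168.56.6:2181,192.168.56.7:2181 -cmd makepath /solr5_4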
Once done with testing, the solr5_4 "environment" becomes what matters for Prod and the SOLR 4.x VM's and Zookeeper "node" of solr can be removed. It should be a fairly simple matter to point a load balancer at the new SOLR VM's and do a switchover without users really even noticing.
This strategy will work for SOLR 6, 6.5, 7, and so on.
This command also worked to add the collections/cores. However, the solr server had to be running first.
http://192.168.56.16:8983/solr/admin/collections?action=CREATE&name=fooCollection&numShards=1&replicationFactor=2&collection.configName=fooCollection
==================
Use as Upgrade By Replacement
In case it's not obvious, this technique (especially if using the "new" chroot in Zookeeper of something like /solr5_4 or similar) gives you the luxury of leaving your older version of SOLR running for as long as you want. Allowing a re-indexing of all your data to take days if needed.
I haven't tried, but I'm guessing a backup of the index could be dropped into the new machines as well.
I just wanted readers to understand that this was an approach intended to make upgrades really low stress and straightforward. (Don't need to upgrade in place, just build new VMs and install latest version of SOLR.)
This would allow the switch-over to occur without affecting prod until you're ready to drop the hammer and re-direct your load balancer at the new SOLR ip addresses (Which you will have already tested of course...)
The one assumption here is that you have the resources to bring up a set of SOLR VMs or physical servers to match whatever you already have in Production. Obviously, if you're resource-limited to only the boxes or VMs you have, upgrade-in-place may be your only option.
This is how I would do it. I am assuming that you have the luxury of downtime and the ability to completely reindex the documents, since you are essentially upgrading from 4.9 to 5.4.
1. Stop the 4.9 Solr nodes and uninstall Solr.
2. Remove the config from the ZK nodes using zkcli.sh with the clear command.
3. Install Solr on both the solr5 and solr6 VMs.
4. Start both Solr nodes and make sure both can talk to ZK =>
On the solr5 vm: ./bin/solr start -c -z zk1:port1,zk2:port1,zk3:port1
On the solr6 vm: ./bin/solr start -c -z zk1:port1,zk2:port1,zk3:port1
5. Verify the status of SolrCloud using ./bin/solr status => this should return liveNodes as 2.
6. Now create fooCollection using the Collections API from any one of the Solr nodes. This uploads the configset to ZooKeeper and also creates the collection =>
./bin/solr create -c fooCollection -d /home/john/conf -shards 1 -replicationFactor 1
7. Verify the health status of fooCollection =>
./bin/solr healthstatus -z zk1:port1,zk2:port1,zk3:port1 -c fooCollection
8. Now verify the config is present in ZooKeeper by checking Solr Admin Console -> Cloud section -> Tree -> /configs.
9. Also check the Cloud section -> Graph showing the active status on the nodes. That indicates that everything is good.
10. Now start pushing documents into the collection.
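If you later want the collection on an additional node without recreating it, the Collections API also has an ADDREPLICA action; a sketch, with host and node name values assumed from this setup:
http://192.168.56.15:8983/solr/admin/collections?action=ADDREPLICA&collection=fooCollection&shard=shard1&node=192.168.56.16:8983_solr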
The wiki below is very helpful for doing the above:
https://cwiki.apache.org/confluence/display/solr/Solr+Start+Script+Reference
I have a standalone Solr instance with 4 different cores working fine using the embedded Jetty server. I configured the cores for v4.10.3, but I have since moved to v5.1 and everything seems to work fine without any changes.
Before going into production, I need to set it up as a Solrcloud installation, initially with 2 nodes (two different machines) with 1 shard per node (to keep it simple). I have been trying to get it to work but I have not been able to do it.
I tried to run it like this (I think using start.jar is not the preferred way), having read that Solr will look for multiple configured cores in any nested folders (which works for standalone Solr):
java -DzkRun -DnumShards=2 -Dbootstrap_confdir=solr/ -jar start.jar
but that did not work; it does not find the needed solrconfig.xml file.
My Solr directory looks like this:
My solr.xml file is the standard one:
<solr>
  <solrcloud>
    <str name="host">${host:}</str>
    <int name="hostPort">${jetty.port:8983}</int>
    <str name="hostContext">${hostContext:solr}</str>
    <int name="zkClientTimeout">${zkClientTimeout:30000}</int>
    <bool name="genericCoreNodeNames">${genericCoreNodeNames:true}</bool>
  </solrcloud>
  <shardHandlerFactory name="shardHandlerFactory" class="HttpShardHandlerFactory">
    <int name="socketTimeout">${socketTimeout:0}</int>
    <int name="connTimeout">${connTimeout:0}</int>
  </shardHandlerFactory>
</solr>
Each core looks like this:
And the core.properties just has the name of the core:
name=users
My question is:
How do I start Solrcloud v5.1 so the 4 cores are picked up?
In SolrCloud, each of your cores will become a collection.
Each collection will have its own set of config files and data.
You might find this helpful: Moving multi-core SOLR instance to cloud
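As a rough sketch of that conversion for one of the cores from the question (the users core; the ZooKeeper host and conf path are assumptions), upload its conf as a named configset and then create a collection that references it:
/opt/solr/server/scripts/cloud-scripts/zkcli.sh -cmd upconfig -zkhost zk1:2181 -confname users -confdir /path/to/solr/users/conf
http://localhost:8983/solr/admin/collections?action=CREATE&name=users&numShards=2&replicationFactor=1&collection.configName=users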
Solr 5.0 onwards has made some changes to how you create a SolrCloud setup with shards, how you add collections, etc.
Everything listed below is my understanding of the Solr Reference Guide. I highly recommend going through it thoroughly.
https://cwiki.apache.org/confluence/display/solr/Apache+Solr+Reference+Guide
I set up my servers on Linux (CentOS), but the steps can be used to set up Solr on a Windows system too. For example, there is a solr.cmd file instead of solr.sh.
Here are the steps I followed to create a simple two shard SolrCloud setup.
Set up the ZooKeeper ensemble. I am assuming you are trying to use the embedded ZK in Solr. For a production system, it is highly recommended to create an external ZK ensemble. You can find steps to install an external ensemble in this section of the reference guide.
Download solr to /opt folder.
Extract the install file ONLY.
tar xzf solr-5.0.0.tgz solr-5.0.0/bin/install_solr_service.sh --strip-components=2
This command will install solr on your system
sudo bash ./install_solr_service.sh solr-5.0.0.tgz
The above command will create a new user called "solr" if it does not exist.
These are some of the default options it will assume. You can view them in /var/solr/solr.in.sh. This is the include file where you can specify other options.
* SOLR_PID_DIR=/var/solr
* SOLR_HOME=/var/solr/data
* LOG4J_PROPS=/var/solr/log4j.properties
* SOLR_LOGS_DIR=/var/solr/logs
* SOLR_PORT=8983
Running install_solr_service in the above step will start a Solr server. Stop the server using service solr stop before making any of the changes below.
Change Java heap value
SOLR_HEAP="3g"
This will set Xmx and Xms to 3 GB. (optional)
This variable is not mentioned in the solr.in.sh file in Solr 5.1. It's a bug that has been fixed and will be released in the next version.
SOLR_MODE="solrcloud" Required
This is what you need to start Solr in cloud mode.
ZK_HOST=ZK1:2181,ZK2:2181,ZK3:2181 Required
(replace ZK1, ZK2, ZK3 with your ZooKeeper host names)
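Taken together, the relevant lines in /var/solr/solr.in.sh would look something like this (heap size and ZK hosts are the example values from above):
SOLR_HEAP="3g"
SOLR_MODE="solrcloud"
ZK_HOST="ZK1:2181,ZK2:2181,ZK3:2181"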
Running the install_solr_service.sh command also creates an init.d file at /etc/init.d/solr.
This init.d script in turn calls the /opt/solr/bin/solr script and includes all the variables from /var/solr/solr.in.sh
Once you have made the above changes, start solr again using service solr start
You can check the status using service solr status
Creating Collections Shards and Replicas
- All shard, collection, and replica related commands are now made using the Collections API.
Before creating a collection, a config folder should be uploaded to ZK.
This can be done using the zkcli.sh script in the solr folder (not on the zookeeper servers)
Folder: /opt/solr/server/scripts/cloud-scripts
The command to upload the config folder is
sh zkcli.sh -cmd upconfig -zkhost zk1:2181,zk2:2181,zk3:2181 -confname yourconfigname -confdir /var/solr/configs/conf
You will run this command 4 times, once for each of your 4 cores, each time changing the path of the conf folder and the config name (sketched below).
This will upload all the config files in the conf folder under the name 'yourconfigname' in ZooKeeper.
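A minimal sketch of doing that for four cores (the core names and conf paths here are placeholders):
cd /opt/solr/server/scripts/cloud-scripts
for core in core1 core2 core3 core4; do
  sh zkcli.sh -cmd upconfig -zkhost zk1:2181,zk2:2181,zk3:2181 -confname "$core" -confdir "/var/solr/configs/$core/conf"
done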
Creating a collection
I used the following command to create a new collection.
http://1.1.1.1:8983/solr/admin/collections?action=CREATE&name=yourcollectionname&numShards=2&replicationFactor=1&maxShardsPerNode=1&createNodeSet=1.1.1.1:8983_solr,2.2.2.2:8983_solr&collection.configName=yourconfigname
Happy Searching!
SolrCloud does not use configuration files stored in the core's conf directory. To make your cores visible in the SolrCloud structure, you need to upload the configuration files to ZooKeeper and let it manage the files for you. Every time a Solr instance comes up, it gets the configuration files stored in ZooKeeper. This way your cores don't need a conf directory to work. To upload your core configuration files to ZooKeeper, follow the link below and take a look at "Upload a configuration directory".
https://cwiki.apache.org/confluence/display/solr/Command+Line+Utilities
Currently we are using Apache Solr 4.10.3 OR Heliosearch Distribution for Solr [HDS] as a search engine to index our data.
Now after that, I got the news about the Apache Solr 5.0.0 release last month. I successfully installed Apache Solr 5.0.0 and it is now running properly on port 8983 (meaning Solr is running, but I am unable to create a core). In the UI, I'm unable to find the example core or the schema and config files under it. So I started creating a new core the way we did in old versions, but I am unable to create one. Following is the error I'm getting:
Error CREATEing SolrCore 'testcore1': Unable to create core [testcore1] Caused by: Could not find configName for collection testcore1 found:null
Note: I also see a Cloud tab on the left side of the Solr UI (i.e. at http://localhost:8983/solr/) and I don't know how it works. Meaning I don't know the location of the schema.xml and solrconfig.xml files, due to the lack of an example folder (Collection1), or how to update those files.
Is there any useful document or solution available to solve this error?
In Solr 5, creation of cores is supported by the bin/solr script provided in the distribution. Try
bin/solr create -help
for a quick introduction.
From the above help doc, you may find:
bin/solr create [-c name] [-d confdir] [-n configName] [-shards #] [-replicationFactor #] [-p port]
In Solr 5.4.0, create a new core using the following command from the solr-5.x.x folder (the Solr installation folder):
$ bin/solr create -c <name>
See this documentation of Apache Solr 5.4 https://cwiki.apache.org/confluence/display/solr/Running+Solr
In {SOLR_INSTALLATION}/server/solr/configsets/basic_configs/conf
you can find the example schema.xml and solrconfig.xml.
If you want to create a new core, create a {SOLR_INSTALLATION}/server/solr/{new core name} folder and add a conf folder with the required schema.xml and solrconfig.xml plus a blank core.properties file.
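A minimal sketch of those steps, assuming a hypothetical core named mycore and a default install location:
SOLR=/opt/solr                                       # your {SOLR_INSTALLATION}
mkdir -p $SOLR/server/solr/mycore/conf
cp $SOLR/server/solr/configsets/basic_configs/conf/* $SOLR/server/solr/mycore/conf/
touch $SOLR/server/solr/mycore/core.properties       # blank file; Solr derives the core name from the directory
# restart Solr (or use the Core Admin API) so the new core is discovered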
You can also find examples for the schema and config in
{SOLR_INSTALLATION}/example/example-DIH/solr
Create using the web interface
Go to bin directory and issue
./solr start -e cloud -noprompt
Which will start solr.
Go to http://localhost:8983
(this is assuming you are running on localhost)
Click on Core Admin and then "Add Core".
Use the provided solr script with solr user privileges to create Solr cores, e.g.
cd /opt/solr
sudo -u solr ./bin/solr create -c testcore1
Run bin/solr --help for syntax guidance.
For any other issues, please check your Solr logs (e.g. /var/solr/logs/solr.log).
Related: SOLR-7826: Permission issues when creating cores with bin/solr as root user.
You can find your solrconfig.xml and schema.xml inside the collection directory.
Go to /usr/lib/ambari-infra-solr/server/solr and you will see a folder with the same name as the collection, containing the schema and config files.
Inside the conf folder there will be a managed-schema file and other files that you have been searching for.
As for this error
Error CREATEing SolrCore 'testcore1': Unable to create core [testcore1] Caused by: Could not find configName for collection testcore1 found:null
This error typically comes up when you are creating a Solr collection from the UI.
For that, go to the location where solr.cmd is located and run the command below:
./solr create -c <name> -d <confdir> -s <shards> -r <replicas>
copy conf from solr/example/conf to solr/server/solr/.
I have set up SolrCloud replication using a standalone ZooKeeper. Now I wish to make some changes to my schema.xml and reload the core. The problem is that when I run a single-server Solr (no SolrCloud), the new schema is loaded, but I do not know how to reload the schema on all the replication servers. I tried reloading the schema on one of the servers with no desired effect. Is there a way I can reload my schema.xml in Solr in a distributed replication setup that uses ZooKeeper?
Just found the solution: we need to push the changed configuration to the ZooKeeper ensemble.
Just use
sh zkcli.sh -cmd upconfig -zkhost 127.0.0.1:2181 -collection collection1 -confname myconf -solrhome ../solr -confdir ../solr/collection1/conf
zkcli.sh is present under example/cloud-scripts
The answer marked as correct is wrong. You have to use the Solr Collections API.
Once you have uploaded the new collection (index) configuration with the Solr zkcli.sh utility, the configuration will not be reloaded automatically.
The Collections API is intended for SolrCloud, and the configuration reload is spread across the whole cluster. As far as I know, the Collections API has been available at least since Solr 4.8.
The procedure is slightly different, and with this API you can reload the configuration on the entire cluster with a single API call.
Just upload your updated configuration with the Solr zkcli.sh utility. Take care not to confuse Solr's zkcli.sh with ZooKeeper's zkCli.sh; they have nearly the same name but completely different purposes.
So, as said, use Solr's zkcli.sh (at the time of writing it is in the directory server/scripts/cloud-scripts):
./zkcli.sh -cmd upconfig -zkhost 127.0.0.1:2181 -collection collection1 -confname myconf -confdir path/to/solr/collection1/conf
Then you can reload the configuration of collection1 with:
http://server1:8983/solr/admin/collections?action=RELOAD&name=collection1
The entire cluster will be updated.
This worked for me :
bin/solr zk -upconfig -n collectionName -d pathto/Conf_directory -z localhost:2181/solr
Below is the command for Windows.
It will be almost the same on Unix; we just need to change the Solr lib paths and the classpath separator (';' on Windows, ':' on Unix). Because it's a Java command, it should run on Unix as well.
java -Dlog4j.configuration="file:E:/solr-5.5.1/server/scripts/cloud-scripts/log4j.properties" -classpath .;E:/solr-5.5.1/server/solr-webapp/webapp/WEB-INF/lib/*;E:/solr-5.5.1/server/lib/ext/* org.apache.solr.cloud.ZkCLI -cmd upconfig -zkhost 192.168.42.13:2787 -confdir E:/New_Solor_Conf -confname Solor_conf
Brief details about the command follow:
Configuration of log4j for logging.
-Dlog4j.configuration="file:E:/solr-5.5.1/server/scripts/cloud-scripts/log4j.properties
Classpath to run the org.apache.solr.cloud.ZkCLI class.
Note that Unix and Windows use different classpath separators: ':' on Unix and ';' on Windows.
-classpath .;E:/solr-5.5.1/server/solr-webapp/webapp/WEB-INF/lib/*;E:/solr-5.5.1/server/lib/ext/*
-zkhost 192.168.42.13:2787 (Remote Host and port where Solr Zookeeper is running)
-confdir E:/New_Solor_Conf (Local directory what we need to upload.)
-confname Solor_conf (the name the config set will have on the remote ZooKeeper)
If you do not use the correct classpath, you will get errors like:
Error: Could not find or load main class org.apache.solr.cloud.ZkCLI
or
Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory
    at org.apache.solr.common.cloud.SolrZkClient.<clinit>(SolrZkClient.java:71)
    at org.apache.solr.cloud.ZkCLI.main(ZkCLI.java:183)
Caused by: java.lang.ClassNotFoundException: org.slf4j.LoggerFactory
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
I was able to upload my local configuration changes without physically logging in to the remote Solr box. Hope it works for others too.