I've got a test ZooKeeper configuration up and running and can use the zkcli command to create a running cluster (in this case managing Solr).
Is there any way to do this by passing a configuration file to ZooKeeper rather than piping commands through zkcli?
I'm trying to set up 3 Solr (8.4.0) servers with a ZooKeeper (3.7.0) ensemble on Windows Server 2019. Each server has one Solr instance and one ZooKeeper instance installed. The problem I'm facing is that I get an error when trying to start Solr pointing at multiple ZooKeeper IPs:
.\solr start -c -z "172.29.70.47:2181,172.29.70.48:2181"
Console output:
Invalid command-line option: 172.29.70.48:2181
I have tried various combinations of this command, with and without quotes, with and without ports, etc., but it fails every time. If I only specify one ZooKeeper IP and port, the command runs fine. As soon as I specify more than one IP, it fails.
I've tried setting ZK_HOST in solr.in.cmd but it also fails to start. Even in the docs (https://solr.apache.org/guide/8_4/setting-up-an-external-zookeeper-ensemble.html#using-the-z-parameter-with-binsolr) it shows that configuring multiple IPs should be possible using the -z parameter.
What am I missing?
Thanks to MatsLindh I was able to figure out what the issue was. When using PowerShell, the double quotes need to be wrapped in single quotes, so the command should look like:
.\solr start -c -z '"172.29.70.47:2181,172.29.70.48:2181,172.29.70.49:2181"'
Using Command Prompt on Windows, double quotes work as expected and the command should be:
solr start -c -z "172.29.70.47:2181,172.29.70.48:2181,172.29.70.49:2181"
So, I have two Solr nodes running along with an embedded ZooKeeper on a single machine, following the Set up SolrCloud guide. Now I want to add a new machine to this cluster. I run bin\solr start -cloud -s ./solr -h newMachineIP -p 9000 -z oldMachineIP:9983. It shows a successful startup, but when I create a new collection it gives me an error saying "Server refused connection at: http://newMachineIp:9000/solr"
Just a guess, but... does C:\path\to\dir\solr-7.1.0\solr-7.1.0\server\solr\gettingstarted contain any spaces? If so, install Solr into a path with no spaces. This has been an issue on Windows before, and it's possible it still is in some code paths. Solr on Windows gets much less testing than on Linux.
I can successfully run a gsutil command with a Windows domain account from the command line in Windows (setting up the service account key etc.). When I try to run the same command from a SQL Agent Job using a CmdExec task, the job hangs and doesn't complete. I can't see any logging, so I have no clue what it's waiting for. I've set up the job to run with the same proxy user that I use to run the gsutil command manually.
Any ideas how I can get this to work or how to see more logging?
Are you using standalone gsutil? Or did you get it as part of installing the Cloud SDK (gcloud)?
If the job hangs for a long time, it could be stuck retrying multiple times. To test if this is the case, you can set the num_retries option to be very small, but above 0 (e.g. 1), either in your .boto file or in the command arguments via this option:
gsutil -o 'Boto:num_retries=1' <rest of command here...>
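If you'd rather set it in the .boto file instead of on the command line, the equivalent entry (a sketch) would be:

```
[Boto]
num_retries = 1
```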
A second thing to note (at least for the version of gsutil that doesn't come with gcloud) is that gsutil looks for your boto config file (which specifies the credentials it should use) in your home directory by default. If you're running gsutil as a different user (maybe your SQL Agent Job runs as its own dedicated user?), it will look for a .boto file in that user's home directory. The same should apply for the gcloud version -- gcloud uses credentials based on the user executing it. You can avoid this by copying your .boto file to somewhere that the job has permission to read from, along with setting the BOTO_CONFIG environment variable to that path before running gsutil. From the cmd shell, this would look something like:
set BOTO_CONFIG=C:\some\path\.boto && gsutil <rest of command here...>
Note: If you're not sure which boto config file you're normally using, you can find out by running gsutil version -l and looking at the line that displays your config path(s).
I have a git repository containing a solr/conf folder with solrconfig.xml and schema.xml. I've managed to create a local solr core and copy these files into it, but I expect there is an easier way than what I did, which was basically:
solr create -c mycorename
cp solr/conf/schema.xml /usr/local/Cellar/solr/5.5.0/server/solr/mycorename/conf
cp solr/conf/solrconfig.xml /usr/local/Cellar/solr/5.5.0/server/solr/mycorename/conf
...and restart the core to have the changes take effect.
My solution is not that complicated, but it requires a lot of specific knowledge of folders etc. and I'd like something simpler. Ideally, I would prefer that the core is created in-place in my existing folder.
If that is not possible I would like to have a simpler way that does not require knowledge of the specific solr folders on a developer's workstation. Maybe a couple of curl commands.
Your question is about best practice for creating a core from the command line.
You already use
bin\solr create -c mycorename
but then you need a restart, because you change the config after creation.
Solr can copy your config files and create the core in one step:
bin\solr create_core -c mycore -d c:/tmp/myconfig
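If you'd rather not depend on the bin/solr script's location at all, the CoreAdmin API can create a core over HTTP. This is a sketch with a placeholder core name and instanceDir; note that depending on your Solr version, the instanceDir may have to live under the Solr home directory, and it must already contain a conf/ folder with your solrconfig.xml and schema.xml:

```
curl "http://localhost:8983/solr/admin/cores?action=CREATE&name=mycore&instanceDir=mycore"
```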
If you are using SolrCloud, you can work even more folder-independently:
Add configuration folder to zookeeper
Create collection with this configuration
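For the SolrCloud route, those two steps look roughly like this (a sketch: the config name, collection name, paths, and ZooKeeper address are placeholders, and the exact zk subcommand syntax differs slightly between Solr 5.x and later releases):

```
bin/solr zk upconfig -n myconfig -d ./solr/conf -z localhost:9983
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=1&collection.configName=myconfig"
```

After this, configuration changes are made by re-uploading the folder to ZooKeeper and reloading the collection, so no developer needs to know where Solr keeps its core directories on disk.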
see also How to create new core in Solr 5?
To create a Solr core, use the solr script run with the solr user's privileges, e.g.
sudo -u solr ./bin/solr create -c mycorename
I have installed ZooKeeper on 3 machines and set up the zoo.cfg file. It's running with this configuration. I have also installed Solr on these 3 machines.
Now I need to know how to run SolrCloud on this group of machines with a number of shards.
My ZooKeeper configuration on all machines:
tickTime=2000
dataDir=/var/zookeeper
clientPort=2181
initLimit=5
syncLimit=2
server.1=instance-1:2888:3888
server.2=instance-2:2889:3889
server.3=instance-3:2890:3890
ZooKeeper command to run it on all machines:
bin/zkServer.sh start conf/zoo.cfg
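One thing worth double-checking with a config like the one above: each ZooKeeper server also needs a myid file in its dataDir whose contents match its server.N number, or the ensemble members cannot identify themselves and will not form a quorum. On instance-1, /var/zookeeper/myid would contain just:

```
1
```

and correspondingly 2 on instance-2 and 3 on instance-3.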
Solr command to run it:
bin/solr start -e cloud -z instance-1:2181,instance-2:2182,instance-3:2183 -noprompt
It's just creating 2 shards on each machine individually on the default port, and the machines are not able to connect to each other.