I'm a newbie to Solr and there's a problem I can't solve so far: when I start SolrCloud with ZooKeeper, I'd like to create a collection with my own schema. However, Solr only loads the default 'example-data-driven-schema'.
Any suggestions on what I should do in order to apply my own schema to it?
In order to create a new collection with your own schema, you need to use zkcli.sh and the SolrCloud Collections API.
In particular, you could:
a) upload the configuration directory for your new collection to ZooKeeper (using Solr's zkcli), for instance as
<my_new_config>
Examples of Solr zkcli commands to upload your changes to ZooKeeper can be found here.
In particular, if you want to upload your configuration directory to ZooKeeper, you can:
STEP 1) run the command:
./server/scripts/cloud-scripts/zkcli.sh -zkhost 127.0.0.1:9983 -cmd upconfig -confname my_new_config -confdir server/solr/configsets/basic_configs/conf
STEP 2) Restart your Solr nodes so they can pick up the configuration changes.
Please remember that if you wish to replace an existing file in ZooKeeper, you will need to use the zkcli.sh clear command to delete the existing one and then the putfile command to add the new one.
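For example, to replace a solrconfig.xml that is already stored in ZooKeeper (the zkhost and paths below are illustrative, not taken from the question):
./server/scripts/cloud-scripts/zkcli.sh -zkhost 127.0.0.1:9983 -cmd clear /configs/my_new_config/solrconfig.xml
./server/scripts/cloud-scripts/zkcli.sh -zkhost 127.0.0.1:9983 -cmd putfile /configs/my_new_config/solrconfig.xml ./conf/solrconfig.xml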
b) call the following API from your browser:
/admin/collections?action=CREATE&name=<my_collection_name>&collection.configName=<my_new_config>
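For example, assuming a node listening on localhost:8983 and the config uploaded in step a), the full call might look like this (the collection name and shard/replica counts are placeholders):
http://localhost:8983/solr/admin/collections?action=CREATE&name=my_collection&numShards=1&replicationFactor=1&collection.configName=my_new_config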
Related
org.apache.solr.common.SolrException: There is conflicting information about the leader of shard: shard2 our state says:http://xxxxx:9003/solr/collectionname_shard2_replica1/ but zookeeper says:http://xxxxxx:9006/solr/collectionname_shard2_replica1/
at org.apache.solr.cloud.ZkController.getLeader(ZkController.java:1013)
at org.apache.solr.cloud.ZkController.register(ZkController.java:940)
at org.apache.solr.cloud.ZkController.register(ZkController.java:883)
at org.apache.solr.core.ZkContainer$2.run(ZkContainer.java:184)
The above error is displayed in the Solr admin console. 9003 is the valid instance. I want to remove 9006 from clusterstate.json and the leader file. How?
Look into your Solr GUI under Cloud -> Tree. Make sure that the folder /overseer_elect/election contains only your current Solr instances.
A simple way to recognize whether there are dead Solr instances in the /overseer_elect/election folder is to shut down Solr and then use the zkCli.sh ZooKeeper script to look into the /overseer_elect/election folder. If you still have files in this folder, you have dead Solr instances. To solve this issue, remove these instances with the zkCli.sh script and restart Solr.
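As a sketch, with Solr shut down you could inspect and clean that folder with ZooKeeper's zkCli.sh like this (host, port, and the election node name are only examples; your node names will differ):
./zkCli.sh -server localhost:2181
and then, at the zkCli prompt:
ls /overseer_elect/election
delete /overseer_elect/election/98765432109876543-127.0.0.1:9006_solr-n_0000000003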
We have a cluster of standalone Solr cores (Solr 4.3) for which we had built some custom plugins. I'm now trying to prototype converting the cluster to a SolrCloud cluster. This is how I am trying to deploy the cores (in 4.7.2).
Start Solr with embedded ZooKeeper:
java -DzkRun -Djetty.port=8985 -jar start.jar
Upload a config into ZooKeeper (the same config as the standalone cores):
zkcli.bat -zkhost localhost:9985 -cmd upconfig -confdir myconfig -confname myconfig
Create a new collection (mycollection) of 2 shards using the Collections API:
http://localhost:8985/solr/admin/collections?action=CREATE&name=mycollection&numShards=2&replicationFactor=1&maxShardsPerNode=2&collection.configName=myconfig
So at this point I have two shards under my solr directory with the appropriate core.properties
But when I go to http://localhost:8985/solr/#/~cloud, I see that the two shards' status is "Down" when they are supposed to be active by default.
And when I try to index documents in them using SolrJ (via the CloudSolrServer API), I get the error "No live SolrServers available to handle this request". I restarted Solr, but the issue remains.
import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.common.SolrInputDocument;

// Connect through the ZooKeeper ensemble that coordinates the cluster
private CloudSolrServer cloudSolr;
cloudSolr = new CloudSolrServer(zkHOST);
cloudSolr.setZkClientTimeout(zkClientTimeout);
cloudSolr.setDefaultCollection(collectionName);
cloudSolr.connect();
cloudSolr.add(doc); // doc is a SolrInputDocument; this is where the error occurs
What am I doing wrong? I did a lot of digging around and saw an old Jira bug saying that SolrCloud shards won't be active until there are some documents in the index. If that is the reason, that's kind of a catch-22, isn't it?
So anyway, I also tried adding some test documents manually and committed to see if things improved. Now the shard statistics page correctly gives me the numDocs count, but when I try to query it says "no servers hosting shard". I next tried passing shards.tolerant=true as a query parameter and searching, but no cigar. It says 0 documents found.
Any help would be appreciated. My main objective is to rebuild the old standalone cores using SolrCloud and test whether our custom request handlers still work as expected. At this point, I can't index documents into the 4.7 SolrCloud collection I have created.
Thanks and Regards
I am using Solr 4.9, not adding any additional shards, just using whatever defaults it comes with. I have created a collection and tried to delete it using the following API:
http://<host>/solr/admin/collections?action=DELETE&name=collectionName
but it returns an error:
Solr instance is not running in SolrCloud mode
My Solr is not running in SolrCloud mode, but how do I delete my collection?
Go to the Solr folder and do this:
bin/solr delete -c collection_name
and restart solr with
bin/solr restart
n.b. Tested against Solr 4.9, should work with newer versions.
curl -v "http://localhost:8983/solr/admin/cores?action=UNLOAD&deleteInstanceDir=true&core=collectionName"
You can delete the Solr Collection in two ways.
1) From the command prompt:
Launch the command prompt from where you have extracted Apache Solr and run the command below:
Solr\bin>solr delete -c My_Collection
2) From the Solr Admin Console:
/admin/collections?action=DELETE&name=collection
For more information, see the Apache Solr Collections API documentation.
According to this guide (https://cwiki.apache.org/confluence/display/solr/Collections+API), this API only works when you are running in SolrCloud mode.
If you want to just delete a core, or just delete all docs in that core, take a look here - https://wiki.apache.org/solr/CoreAdmin
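For instance, to wipe all documents from a single core without deleting the core itself, a standard update request should work (the core name is a placeholder):
curl "http://localhost:8983/solr/mycore/update?commit=true" -H "Content-Type: text/xml" --data-binary "<delete><query>*:*</query></delete>"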
Or do this with the API instead:
http://localhost:8983/solr/admin/collections?action=DELETE&name=collection1
You can delete a collection in four ways in recent versions of Solr.
You can delete the collection manually by using the bin/solr tool
You can delete the collection manually via Solr Admin
You can delete the collection by using the Collections API
You can delete the collection by using the V2 API
Deleting a collection using the bin/solr tool is simple. You go to your Solr installation directory and you run:
bin/solr delete -c COLLECTION_NAME
To delete a collection using the Collections API you would run a command like this:
curl 'localhost:8983/solr/admin/collections?action=DELETE&name=COLLECTION_NAME'
Finally, to delete a collection using the V2 API you would do the following:
curl -XDELETE 'http://localhost:8983/api/c/COLLECTION_NAME'
If you plan on removing collections very rarely, you can do that manually. If it is something done commonly - for example with aliases and time-based data - I would suggest using the V2 API, as it is the newest one and will probably replace the old APIs at some point.
You could use curl
curl -X GET -H "Content-Type: application/json" "http://localhost:8983/solr/admin/cores?wt=json&action=UNLOAD&core=gettingstarted"
Where gettingstarted is the name of the core that you want to delete.
Please note that the above assumes that solr is running on port 8983.
Though Adam's response is correct, here's the documentation to help use it: https://lucene.apache.org/solr/guide/7_7/collections-api.html#delete
I got stuck on this one for a while and kept finding only the delete-the-cores answer, which, I'm guessing, used to work for collections but does not in newer versions.
I've got a setup with 3 ZooKeeper nodes and 4 SolrCloud nodes.
This is all working: all nodes see each other, and I initially had a default collection.
From there, I used the Collections API to create a new collection, which completed successfully; it's sharded across 2 nodes, with the other 2 being used for replicas. I can also successfully save documents to that collection. Browsing the Solr web GUI on any of the boxes works, with no speed issues.
However, any time I try to use the Collections API I get timeouts. Creating a new collection, reloading one of the existing collections, deleting a collection... all of them time out.
Any thoughts on why would be much appreciated
Cheers
I have also faced a similar issue:
Solr process 24214 running on port 8983
Failed to get system information from http://localhost:8983/solr/ due to: org.apache.solr.client.solrj.SolrServerException: clusterstatus the collection time out:180s
at org.apache.solr.util.SolrCLI.getJson(SolrCLI.java:537)
at org.apache.solr.util.SolrCLI.getJson(SolrCLI.java:471)
at org.apache.solr.util.SolrCLI$StatusTool.getCloudStatus(SolrCLI.java:721)
at org.apache.solr.util.SolrCLI$StatusTool.reportStatus(SolrCLI.java:704)
at org.apache.solr.util.SolrCLI$StatusTool.runTool(SolrCLI.java:662)
at org.apache.solr.util.SolrCLI.main(SolrCLI.java:215)
To solve this issue, I followed these steps:
Stop all Solr instances
Stop all Zookeeper instances
Start all Zookeeper instances
Start Solr instances one at a time.
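As a rough sketch, assuming a recent install with the stock bin/solr and zkServer.sh scripts (hostnames, ports, and paths are examples; run each command on the relevant machine):
bin/solr stop -all # on every Solr node
./zkServer.sh stop # on every ZooKeeper node
./zkServer.sh start # on every ZooKeeper node
bin/solr start -cloud -z zk1:2181,zk2:2181,zk3:2181 # on each Solr node, one at a time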
Such timeouts can occur when Solr is not able to obtain the cluster state. If the following call also results in a timeout, then this is the case:
http://solr-hostname:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json
This may be caused by incorrect entries present in /clusterstate.json
To fix this:
get clusterstate from ZooKeeper by calling
zkcli.sh -zkhost localhost:2181 -cmd get /clusterstate.json > clusterstate.json
edit the extracted clusterstate.json file and remove the sections with wrong IPs or non-existing hosts
clear the clusterstate in ZooKeeper by calling
zkcli.sh -zkhost localhost:2181 -cmd clear /clusterstate.json
save corrected state in ZooKeeper by sending updated JSON file
zkcli.sh -zkhost localhost:2181 -cmd putfile /clusterstate.json ./clusterstate.json
restart Solr instances
After that, if your cluster state shows the correct info, you should no longer have timeouts when accessing the Collections API.
Note
Be careful when editing the clusterstate JSON; limit your changes to removing non-existing hosts/replicas/shards.
I also had timeout issues with the collections API. To fix this problem, I added the server's IP address to the solr.xml file that you find in /var/solr/data/solr.xml. My setup consists of 3 Ubuntu servers that run ZooKeeper (3.4.6) and SolrCloud (5.2.1) on each server.
Ended up being a ZooKeeper config mismatch.
I'm following this tutorial on setting up django-haystack and solr: http://django-haystack.readthedocs.org/en/latest/tutorial.html
I hit a stumbling block here:
If you’re using the Solr backend, you have an extra step. Solr’s
configuration is XML-based, so you’ll need to manually regenerate the
schema. You should run ./manage.py build_solr_schema first, drop the
XML output in your Solr’s schema.xml file and restart your Solr
server.
Where is my schema.xml file located? It says it should be in the Solr home directory, in the conf folder. But where is the Solr home directory, and/or how do I configure its location?
The solr home is the place where you can find your schema.xml and solrconfig.xml, as well as some other files depending on the text analysis you're using (dictionaries for stemming, stopwords etc.), and where your index gets created by default.
There are a couple of ways to configure the solr home, since it is located outside of the servlet container:
the solr.solr.home Java system property (the most used one)
java:comp/env/solr/home for a JNDI lookup
You can either check your servlet container configuration or go to the Solr admin page at http://host:port/solr/admin, which prints out the actual Solr home location together with other information about the running Solr instance.
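For instance, with the Jetty-based example distribution you can pass the property at startup like this (the path is a placeholder for your own Solr home):
java -Dsolr.solr.home=/opt/solr/home -jar start.jar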
First check whether your Solr instance is working.
Go to http://localhost:8983/solr
If you can see the Solr web panel, you have a live Solr instance.
Now go to Java Properties.
Here you will see the variables; this is where you can find the home directories.
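As an aside, on a Unix-like system you can usually also spot that property on the running process from a shell (a sketch, not guaranteed for every setup):
ps aux | grep -o 'solr\.solr\.home=[^ ]*'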
Note: the schema is now managed. If you want to override this, you will have to hack it a bit; check here.