We set up an external ZooKeeper ensemble with 3 nodes and 3 Solr 7 instances.
I am trying to use the schema.xml file from my old project, which was created with Solr 4.
I followed the steps below:
Rename the managed-schema file to schema.xml.
Modify solrconfig.xml to replace the schemaFactory class.
a. Remove any ManagedIndexSchemaFactory definition if it exists.
b. Add a ClassicIndexSchemaFactory definition (see the solrconfig.xml snippet after these steps).
Uploaded the configuration using upconfig:
sudo ./zkcli.sh -cmd upconfig -confdir /home/pc2/Desktop/solrconfig/conf-readData -confname readData -zkhost 192.168.1.120:2181,192.168.1.100:2181,192.168.1.105:2181
sudo ./zkcli.sh -cmd linkconfig -collection readData -confname readData -zkhost 192.168.1.120:2181,192.168.1.100:2181,192.168.1.105:2181
curl 'http://192.168.1.85:8983/solr/admin/collections?action=CREATE&name=readData&numShards=3&replicationFactor=3&maxShardsPerNode=3'
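For reference, the schemaFactory change in step 2 amounts to something like the following in solrconfig.xml (a minimal sketch; everything else in the file stays as it is in the project):
<!-- remove or comment out any managed schema factory, e.g.: -->
<!--
<schemaFactory class="ManagedIndexSchemaFactory">
  <bool name="mutable">true</bool>
  <str name="managedSchemaResourceName">managed-schema</str>
</schemaFactory>
-->
<!-- and use the classic, file-based schema.xml instead -->
<schemaFactory class="ClassicIndexSchemaFactory"/>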
When I check the schema for readData in the Solr admin UI, it does not show the fields I defined in schema.xml.
The fields created were _root_, _text_, _version_ and many dynamic fields.
What am I missing?
Solr version: 7.3.0
ZooKeeper version: 3.4.12
I see two problems in what you are doing:
The collection.configName parameter is missing when you create the collection.
You must first create the collection and then link the configuration.
linkconfig is useful when you want to replace the current configuration of an existing collection with a new one, but you must specify the collection.configName parameter when creating a new collection; otherwise, the _default configset is used.
This also explains why you see only _root_, _version_, _text_, etc.: those are the default fields configured in the _default configset.
I suggest creating the collection in the following way:
curl "http://192.168.1.85:8983/solr/admin/collections?action=CREATE&name=readData&collection.configName=readData&numShards=3&replicationFactor=3&maxShardsPerNode=3"
Or use the Solr admin console.
Related
I added a misconfigured dynamic field to a Solr core. Since then, I cannot update anything to fix the error because Solr fails to load the core.
The erroneous query:
http://solr.dev.fr:8983/solr/zCollection/schema
{
  "add-dynamic-field":{
    "name":"*_alz*",
    "type":"customFieldType",
    "stored":true,
    "indexed":true
  }
}
The exception:
org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: Could not load conf for core zCollection_shard1_replica1: Can't load schema managed-schema: Dynamic field name '*_alz*' should have either a leading or a trailing asterisk, and no others.
The problem:
I can't find this dynamic field anywhere because I'm using data_driven_schema_configs.
I can't use schema API to delete it; I get 404 Not Found in response.
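For reference, a normal Schema API call to remove the field would look like the sketch below, but in this state it just returns 404 because the core cannot be loaded:
# hypothetical delete request against the broken collection; currently answered with 404
curl -X POST -H 'Content-type:application/json' \
  --data-binary '{"delete-dynamic-field":{"name":"*_alz*"}}' \
  http://solr.dev.fr:8983/solr/zCollection/schema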
The Question:
Where can I find this element and delete it?
PS: I ran
grep -rnw '/opt/lucidworks-hdpsearch/' -e '_alz'
but nothing came up.
Update 1:
I found the field in the ZooKeeper files using:
./zkcli.sh -zkhost hmaster.dev.fr:2181 -cmd list
I downloaded the file:
./zkcli.sh -zkhost hmaster.dev.fr:2181 -cmd get /configs/zCollection/managed-schema
Fixed the erroneous field and uploaded the file to ZooKeeper again:
./zkcli.sh -zkhost hmaster.dev.fr:2181 -cmd putfile /configs/zCollection/managed-schema managed-schema
And it finally works!
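If the change does not show up immediately, reloading the collection forces Solr to re-read the configuration (a sketch using the Collections API RELOAD action that also appears later in this thread):
# reload the collection so all replicas pick up the fixed managed-schema
curl "http://solr.dev.fr:8983/solr/admin/collections?action=RELOAD&name=zCollection"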
I have a collection in SolrCloud that was created using a ZooKeeper-managed configuration, and I want all of the configuration files that were used to create the collection. Here are the options I found:
Manually copy all files from the SolrCloud UI:
solrUI->cloud->tree->/collections/<collection-name>
Download the files from ZooKeeper:
/opt/solr/server/scripts/cloud-scripts/zkcli.sh -cmd downconfig -zkhost <zk hosts>/usecasedir -confname <configuration name> -confdir <dir to download>
The second option would save me a lot of time, but the problem is that my ZooKeeper has a huge list of configurations and I am not sure which configuration directory was used to create the collection.
Is there any way to figure out which configuration was used to create a collection?
The info about which config was used to create a collection is stored in ZooKeeper itself. Some bash scripting (using the great jq utility) is enough to do what you need.
Find which config was used for the given XXX collection:
CONFIGNAME=$(curl -L -s "http://localhost:8983/solr/admin/zookeeper?detail=true&path=/collections/XXX" | jq '.znode.data' | cut -d ":" -f2 | tr -d '}"\\')
Now download the config:
/opt/solr/bin/solr zk downconfig -n $CONFIGNAME -d config$CONFIGNAME -z localhost:2181
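If you would rather not parse the raw znode data, the Collections API CLUSTERSTATUS action also reports the config name per collection; a sketch, assuming Solr runs on localhost:8983 and the collection is called XXX:
# configName is part of the cluster status entry for each collection
curl -s "http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS&collection=XXX" \
  | jq -r '.cluster.collections.XXX.configName'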
Sets of configurations are usually found in a directory called /configs. If the ZooKeeper ensemble is dedicated to Solr, this is usually at the top level; if it is shared by multiple applications, it is common to "zk chroot" the configs into a subdirectory.
Once you find the right location in ZooKeeper, one directory in the configs directory should match the name shown as "config-name" in the admin UI under Collections > name_of_your_collection.
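To actually look inside that /configs directory, one option (depending on your Solr version) is the bin/solr zk helper; a sketch, assuming ZooKeeper on localhost:2181 with no chroot:
# list the configsets stored in ZooKeeper
/opt/solr/bin/solr zk ls /configs -z localhost:2181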
If your project uses Gradle, uploading and downloading configs from the project (where you might want to check these things into version control) can be smoothed somewhat by a plugin (disclaimer: I wrote this plugin):
https://plugins.gradle.org/plugin/com.needhamsoftware.solr-gradle
There's an additional complication to be aware of, however: if the collection is using a managed schema, the actual schema in use will not be in schema.xml but in a file called "managed-schema".
Fields may have been added via the Schema REST API, so "files used to create the collection" is a bit fuzzy in that respect, but the managed-schema file can be renamed to schema.xml and the Solr config modified to take things out of managed mode if you want.
I have a SolrCloud instance running with a single core / collection.
I am attempting to download the configuration for this collection with the following command:
/opt/solr-5.3.0/server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:9983 -cmd downconfig -confdir /opt/solr/collection1 -confname *
However, I get the following error:
Exception in thread "main" java.io.IOException: Error downloading files from zookeeper path /configs/bin to /opt/solr/collection1
at org.apache.solr.common.cloud.ZkConfigManager.downloadFromZK(ZkConfigManager.java:107)
at org.apache.solr.common.cloud.ZkConfigManager.downloadConfigDir(ZkConfigManager.java:131)
at org.apache.solr.cloud.ZkCLI.main(ZkCLI.java:230)
Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /configs/bin
at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1472)
at org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:328)
at org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:325)
at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
at org.apache.solr.common.cloud.SolrZkClient.getChildren(SolrZkClient.java:325)
at org.apache.solr.common.cloud.ZkConfigManager.downloadFromZK(ZkConfigManager.java:92)
I do not know the confname, so I am providing * as its value. Is that the cause of the issue?
All I wish to know is how to download the configuration for the existing core / collection (which I then intend to upload to my own local installation).
I found out the cause of the issue: it was the value passed to the confname option.
The confname option is mandatory when attempting to download the configuration of an existing core / collection.
It turns out that when a configuration is uploaded to ZooKeeper, you don't have to specify the confname option; in such a case, the collection name itself is used as the configuration name.
My collection was named Collection1 and thus, by providing that, I managed to download the configuration successfully.
The final command was:
/opt/solr-5.3.0/server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:9983
-cmd downconfig -confdir /opt/solr/collection1 -confname Collection1
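If you are unsure which config names exist in ZooKeeper in the first place, listing the znodes with the same zkcli.sh (the list command shown earlier in this thread) can help:
# dumps the ZooKeeper tree, including the names under /configs
/opt/solr-5.3.0/server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:9983 -cmd list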
Currently we are using Apache Solr 4.10.3 or the Heliosearch Distribution for Solr [HDS] as a search engine to index our data.
Then I heard about the Apache Solr 5.0.0 release last month. I successfully installed Apache Solr 5.0.0 and it is now running properly on port 8983 (that is, Solr itself runs, but I am unable to create a core). In the UI, I cannot find the example core, nor the schema or config files under it. So I started creating a new core the way we did in the old versions, but I was unable to create one. The following is the error I am getting:
Error CREATEing SolrCore 'testcore1': Unable to create core [testcore1] Caused by: Could not find configName for collection testcore1 found:null
Note: I also see a Cloud tab on the left side of the Solr UI (i.e. at http://localhost:8983/solr/) and don't know how it works. In other words, because the example folder (Collection1) is missing, I don't know the location of the schema.xml and solrconfig.xml files or how to update them.
Is there any useful document or solution available to solve this error?
In Solr 5, creation of cores is supported by the bin/solr script provided in the distribution. Try
bin/solr create -help
for a quick introduction.
From the above help doc, you may find:
bin/solr create [-c name] [-d confdir] [-n configName] [-shards #] [-replicationFactor #] [-p port]
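For example, to create the core from the question with the bundled basic_configs configset (a sketch; adjust the name, configset, and counts to your setup):
# creates a core/collection named testcore1 based on the basic_configs configset
bin/solr create -c testcore1 -d basic_configs -shards 1 -replicationFactor 1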
In Solr 5.4.0, create a new core using the following command from the Solr-5.x.x folder (the Solr installation folder):
$ bin/solr create -c <name>
See this documentation for Apache Solr 5.4: https://cwiki.apache.org/confluence/display/solr/Running+Solr
In {SOLR_INSTALLATION}/server/solr/configsets/basic_configs/conf
you can find the example schema.xml and solrconfig.xml.
If you want to create a new core, create a {SOLR_INSTALLATION}/server/solr/{new core name} folder, add a conf folder with the required schema.xml and solrconfig.xml, and add a blank core.properties file (a sketch of these steps follows below).
You can also find example schema and config files in
{SOLR_INSTALLATION}/example/example-DIH/solr
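A minimal sketch of the manual steps above, assuming the basic_configs configset and a new core named testcore1:
# copy an example configset into a new core directory and add an empty core.properties
cd {SOLR_INSTALLATION}/server/solr
mkdir -p testcore1/conf
cp -r configsets/basic_configs/conf/* testcore1/conf/
touch testcore1/core.properties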
Create using the web interface
Go to the bin directory and issue:
./solr start -e cloud -noprompt
which will start Solr.
Go to http://localhost:8983
(this is assuming you are running on localhost)
Click on Core Admin and then "Add Core".
Use the provided solr script with solr user privileges to create Solr cores, e.g.:
cd /opt/solr
sudo -u solr ./bin/solr create -c testcore1
Run bin/solr --help for syntax guidance.
For any other issues, please check your Solr logs (e.g. /var/solr/logs/solr.log).
Related: SOLR-7826: Permission issues when creating cores with bin/solr as root user.
You can find your solrconfig.xml and schema.xml inside the collection directory.
Go to /usr/lib/ambari-infra-solr/server/solr and you will see a folder with the same name as the collection, containing the schema and config files.
Inside its conf folder there will be a managed-schema file and the other files that you have been searching for.
As for this error:
Error CREATEing SolrCore 'testcore1': Unable to create core [testcore1] Caused by: Could not find configName for collection testcore1 found:null
This error usually appears when you create a Solr collection from the UI.
To fix it, go to the location of solr.cmd and run something like:
./solr create -c <name> -d <confdir> -shards <#> -replicationFactor <#>
Copy conf from solr/example/conf to solr/server/solr/.
I have set up SolrCloud replication using a standalone ZooKeeper. Now I wish to make some changes to my schema.xml and reload the core. The problem is that when I run a single-server Solr (no SolrCloud), the new schema is loaded, but I do not know how to reload the schema on all the replica servers. I tried reloading the schema on one of the servers with no visible effect. Is there a way to reload my schema.xml in a distributed replication setup that uses ZooKeeper?
Just found the solution: we need to push the changed configuration to the ZooKeeper ensemble.
Just use
sh zkcli.sh -cmd upconfig -zkhost 127.0.0.1:2181 -collection collection1 -confname myconf -solrhome ../solr -confdir ../solr/collection1/conf
zkcli.sh is present under example/cloud-scripts.
The answer marked as correct is wrong. You have to use the Solr Collections API.
Once you have uploaded the new collection (index) configuration with the Solr zkcli.sh utility, the configuration will not be reloaded automatically.
The Collections API is meant for SolrCloud, and the configuration reload is propagated to the whole cluster. As far as I know, the Collections API has been available at least since Solr 4.8.
The procedure is slightly different, and with this API you can reload the configuration on the entire cluster with a single API call.
Just upload your updated configuration with the Solr zkcli.sh utility. Be careful not to confuse Solr's zkcli.sh with ZooKeeper's zkCli.sh; they have almost the same name but completely different purposes.
So, as said, use the Solr zkcli.sh (at the time of writing it is in the directory server/scripts/cloud-scripts):
./zkcli.sh -cmd upconfig -zkhost 127.0.0.1:2181 -collection collection1 -confname myconf -confdir path/to/solr/collection1/conf
Then you can reload the configuration of collection1 with:
http://server1:8983/solr/admin/collections?action=RELOAD&name=collection1
The entire cluster will be updated.
This worked for me:
bin/solr zk -upconfig -n collectionName -d pathto/Conf_directory -z localhost:2181/solr
Below is the command for Windows.
It will be almost the same on Unix; you only need to change the Solr lib paths and the classpath separator (";" on Windows, ":" on Unix). Because it is a plain java command, it should run on Unix as well (see the Unix sketch after the breakdown below).
java -Dlog4j.configuration="file:E:/solr-5.5.1/server/scripts/cloud-scripts/log4j.properties" -classpath .;E:/solr-5.5.1/server/solr-webapp/webapp/WEB-INF/lib/*;E:/solr-5.5.1/server/lib/ext/* org.apache.solr.cloud.ZkCLI -cmd upconfig -zkhost 192.168.42.13:2787 -confdir E:/New_Solor_Conf -confname Solor_conf
Brief details about the command:
Configuration of log4j for logging:
-Dlog4j.configuration="file:E:/solr-5.5.1/server/scripts/cloud-scripts/log4j.properties"
Classpath required to run the org.apache.solr.cloud.ZkCLI class.
Note that Unix and Windows use different classpath separators: ":" on Unix, ";" on Windows.
-classpath .;E:/solr-5.5.1/server/solr-webapp/webapp/WEB-INF/lib/*;E:/solr-5.5.1/server/lib/ext/*
-zkhost 192.168.42.13:2787 (remote host and port where the ZooKeeper used by Solr is running)
-confdir E:/New_Solor_Conf (local directory that we need to upload)
-confname Solor_conf (the remote configuration name)
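For reference, the Unix equivalent would look roughly like this (a sketch, assuming Solr 5.5.1 is installed under /opt/solr-5.5.1 and the configuration to upload sits in /tmp/New_Solor_Conf):
# same ZkCLI invocation with Unix paths and ":" as the classpath separator;
# the classpath is quoted so the shell does not expand the wildcards itself
java -Dlog4j.configuration="file:/opt/solr-5.5.1/server/scripts/cloud-scripts/log4j.properties" \
  -classpath ".:/opt/solr-5.5.1/server/solr-webapp/webapp/WEB-INF/lib/*:/opt/solr-5.5.1/server/lib/ext/*" \
  org.apache.solr.cloud.ZkCLI -cmd upconfig -zkhost 192.168.42.13:2787 \
  -confdir /tmp/New_Solor_Conf -confname Solor_conf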
If you do not use the correct classpath, you will get errors like:
Error: Could not find or load main class org.apache.solr.cloud.ZkCLI
or
Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory
    at org.apache.solr.common.cloud.SolrZkClient.<clinit>(SolrZkClient.java:71)
    at org.apache.solr.cloud.ZkCLI.main(ZkCLI.java:183)
Caused by: java.lang.ClassNotFoundException: org.slf4j.LoggerFactory
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
I am able to upload my local configuration changes without physically logging in to the remote Solr box. I hope it works for others as well.