I added a misconfigured dynamic field to a Solr core. Since then, I cannot update anything to fix the error because Solr fails to load the core.
The erroneous query:
http://solr.dev.fr:8983/solr/zCollection/schema
{
"add-dynamic-field":{
"name":"*_alz*",
"type":"customFieldType",
"stored":true,
"indexed":true
}
}
The exception:
org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: Could not load conf for core zCollection_shard1_replica1: Can't load schema managed-schema: Dynamic field name '*_alz*' should have either a leading or a trailing asterisk, and no others.
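For reference, the rule in the error message means the dynamic field name may contain exactly one asterisk, either leading or trailing. A corrected payload (assuming a trailing wildcard was intended; the field type is taken from the question) would look like:

```json
{
  "add-dynamic-field":{
    "name":"*_alz",
    "type":"customFieldType",
    "stored":true,
    "indexed":true
  }
}
```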
The problem:
I can't find this dynamic field anywhere on disk because I'm using data_driven_schema_configs.
I can't use the Schema API to delete it; I get a 404 Not Found response.
The Question:
Where can I find this element so I can delete it?
PS: I ran
grep -rnw '/opt/lucidworks-hdpsearch/' -e '_alz'
But it returned nothing.
Update 1 :
I found the field in Zookeeper files using:
./zkcli.sh -zkhost hmaster.dev.fr:2181 -cmd list
I downloaded the file:
./zkcli.sh -zkhost hmaster.dev.fr:2181 -cmd get /configs/zCollection/managed-schema
Fixed the erroneous field and uploaded the file to ZooKeeper again:
./zkcli.sh -zkhost hmaster.dev.fr:2181 -cmd putfile /configs/zCollection/managed-schema managed-schema
And it finally works!
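The edit step in the workflow above can be sketched as a one-line sed fix on the downloaded file. Here it is applied to a sample line rather than a real managed-schema; rewriting the invalid name *_alz* to the trailing-wildcard form *_alz is an assumption about the intent:

```shell
# Work on a sample copy of the offending managed-schema line
printf '<dynamicField name="*_alz*" type="customFieldType" stored="true" indexed="true"/>\n' > managed-schema.sample
# Drop the trailing asterisk so the name has exactly one wildcard
sed -i 's/name="\*_alz\*"/name="*_alz"/' managed-schema.sample
cat managed-schema.sample
```

After this local edit, the corrected file is what gets pushed back with the putfile command shown above.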
Related
We set up an external ZooKeeper ensemble with 3 nodes and 3 Solr 7 instances.
I am trying to use the schema.xml file from my old project, created with Solr 4.
I followed the steps below:
Rename the managed-schema file to schema.xml.
Modify solrconfig.xml to replace the schemaFactory class.
a. Remove any ManagedIndexSchemaFactory definition if it exists.
b. Add a ClassicIndexSchemaFactory
Upload the configuration using upconfig:
sudo ./zkcli.sh -cmd upconfig -confdir /home/pc2/Desktop/solrconfig/conf-readData -confname readData -zkhost 192.168.1.120:2181,192.168.1.100:2181,192.168.1.105:2181
sudo ./zkcli.sh -cmd linkconfig -collection readData -confname readData -zkhost 192.168.1.120:2181,192.168.1.100:2181,192.168.1.105:2181
curl 'http://192.168.1.85:8983/solr/admin/collections?action=CREATE&name=readData&numShards=3&replicationFactor=3&maxShardsPerNode=3'
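For step 2, the schemaFactory change in solrconfig.xml would look roughly like this (a sketch; any existing ManagedIndexSchemaFactory block must be removed first, as noted above):

```xml
<!-- Use the classic schema.xml instead of the managed schema -->
<schemaFactory class="ClassicIndexSchemaFactory"/>
```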
When I check schema for readData from Solr admin, it is not showing fields that I mentioned in schema.xml.
Fields created were _root_, _text_, _version_ and many more dynamic fields.
What am I missing?
Solr version: 7.3.0
ZooKeeper version: 3.4.12
I see two problems in what you are doing:
The collection.configName parameter is missing when you create the collection.
You must first create the collection and then link the configuration.
linkconfig is useful when you want to swap the current configuration for a new one, but you must specify the collection.configName parameter when creating a new collection; otherwise, the _default collection config is used.
This also explains why you see only _root_, _version_, _text_, etc. Those are the default fields configured in the _default collection configuration.
I suggest creating the collection in the following way:
curl "http://192.168.1.85:8983/solr/admin/collections?action=CREATE&name=readData&collection.configName=readData&numShards=3&replicationFactor=3&maxShardsPerNode=3"
Or use the Solr admin console.
I have a collection in SolrCloud that was created using a ZooKeeper-managed configuration, and I want all the configuration files that were used to create it. Here are the options I found:
Copy all the files manually from the SolrCloud UI:
solrUI->cloud->tree->/collections/<collection-name>
Download the files from ZooKeeper:
/opt/solr/server/scripts/cloud-scripts/zkcli.sh -cmd downconfig -zkhost <zk hosts>/usecasedir -confname <configuration name> -confdir <dir to download>
The 2nd option would save me a lot of time, but the problem is that my ZooKeeper has a huge list of configurations and I am not sure which configuration directory was used to create the collection.
Is there any way to figure out which collection configuration was used to create collection?
The info about which config was used to create a collection is stored in ZK itself. Some bash scripting (using the great jq utility) is enough to do what you need:
Find which config was used for the given XXX collection:
CONFIGNAME=$(curl -L -s "http://localhost:8983/solr/admin/zookeeper?detail=true&path=/collections/XXX" | jq '.znode.data' | cut -d ":" -f2 | tr -d '}"\\')
Now download the config:
/opt/solr/bin/solr zk downconfig -n $CONFIGNAME -d config$CONFIGNAME -z localhost:2181
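The cut/tr pipeline above just unwraps the JSON string stored in the znode's data field. If jq is available anyway, the same extraction can be done entirely in jq using fromjson. Shown here against a sample of the response shape rather than a live Solr; the configName value is hypothetical:

```shell
# Sample of what /solr/admin/zookeeper?detail=true&path=/collections/XXX returns:
# the znode data is itself a JSON string holding the configName.
RESPONSE='{"znode":{"data":"{\"configName\":\"readData\"}"}}'
# Parse the embedded JSON and pull out configName directly
CONFIGNAME=$(printf '%s' "$RESPONSE" | jq -r '.znode.data | fromjson | .configName')
echo "$CONFIGNAME"    # -> readData
```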
Sets of configurations are usually found in a directory called /configs. If the ZooKeeper is dedicated to Solr, this is usually at the top level; if it's used by multiple applications, it's common to "zk chroot" the configs into a subdirectory.
Once you find the right location in zookeeper, one directory in the configs directory should match the name shown as "config-name" in the admin UI under Collections > name_of_your_collection
If your project uses Gradle, the upload/download of configs from the project (where you might want to check these things into version control) can be smoothed somewhat by a plugin (disclaimer: I wrote this plugin):
https://plugins.gradle.org/plugin/com.needhamsoftware.solr-gradle
There's an additional complication to be aware of, however: if the collection is using a managed schema, the actual schema in use will not be in schema.xml but in a file called "managed-schema".
Fields may have been added via the Schema REST API, so "files used to create the collection" is a bit fuzzy in that respect, but managed-schema can be renamed to schema.xml and the Solr config modified to take things out of managed mode if you want.
My first instance:
sudo bin/solr start -p 8983 -s ../coaps
My second instance:
sudo bin/solr start -p 8984 -s ../newcoaps
Using the Python http utility, I verified the connections:
http :8983/solr/
http :8984/solr/
I can ping the first one at :8983/solr/samos/admin/ping/, but I can NOT ping the other one because the core located in ../newcoaps is not added on startup.
The ../newcoaps directory looks like this before I started up Solr:
ls -R ../newcoaps/
../newcoaps/:
samos solr.xml
../newcoaps/samos:
conf data
../newcoaps/samos/conf:
schema.xml solrconfig.xml
../newcoaps/samos/data:
I copied the files in here directly from my other instance, which is running smoothly. Everything is default except for several fields I defined.
In the web browser, I see that the second instance has no cores, so I tried to add it manually but I get this response:
Error CREATEing SolrCore 'new_core': Unable to create core [new_core] Caused by: Can't find resource 'synonyms.txt' in classpath or '/opt/solr/newcoaps/samos'
What is going on here, and why is that file important enough to prevent me from adding this core? What steps can I take to figure out a solution to this problem?
Your schema (schema.xml) is referencing the synonyms.txt file (in a SynonymFilter definition). Remove the filter from the configuration if you're not expanding synonyms, or create an empty file named synonyms.txt to allow the core to start up.
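A minimal sketch of the second workaround; the conf path mirrors the one in the error message but is a stand-in here:

```shell
# Create an empty synonyms.txt so the SynonymFilter reference resolves
CONF_DIR="$(mktemp -d)/samos/conf"   # stand-in for /opt/solr/newcoaps/samos/conf
mkdir -p "$CONF_DIR"
touch "$CONF_DIR/synonyms.txt"
ls "$CONF_DIR"
```

With the (empty) file in place, the core can be created; the synonym filter simply expands nothing.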
As a possible explanation: If you started the first node without a schema.xml present the first time, it might have switched to using the managed schema functionality instead of reading the schema.xml, but when starting the second node with the schema present, it'll try to read and parse it.
I have a SolrCloud instance running with a single core / collection.
I am attempting to download the configuration for this collection with the following command:
/opt/solr-5.3.0/server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:9983 -cmd downconfig -confdir /opt/solr/collection1 -confname *
However, I get the following error:
Exception in thread "main" java.io.IOException: Error downloading files from zookeeper path /configs/bin to /opt/solr/collection1
at org.apache.solr.common.cloud.ZkConfigManager.downloadFromZK(ZkConfigManager.java:107)
at org.apache.solr.common.cloud.ZkConfigManager.downloadConfigDir(ZkConfigManager.java:131)
at org.apache.solr.cloud.ZkCLI.main(ZkCLI.java:230)
Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /configs/bin
at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1472)
at org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:328)
at org.apache.solr.common.cloud.SolrZkClient$6.execute(SolrZkClient.java:325)
at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
at org.apache.solr.common.cloud.SolrZkClient.getChildren(SolrZkClient.java:325)
at org.apache.solr.common.cloud.ZkConfigManager.downloadFromZK(ZkConfigManager.java:92)
I do not know the confname, so I am providing * as its value. Is that the cause of the issue?
All I wish to know is how to download the configuration for the existing core / collection (which I then intend to upload to my own local installation).
I found out the cause of the issue: it was the value passed to the confname option.
The confname option is a mandatory option when attempting to download the configurations of an existing core / collection.
It turns out that when a configuration is uploaded to ZooKeeper, you don't have to specify the confname option; in that case, the collection name itself is used as the configuration name.
My collection was named Collection1, and thus, by providing that, I managed to download the configuration successfully.
The final command was:
/opt/solr-5.3.0/server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:9983 -cmd downconfig -confdir /opt/solr/collection1 -confname Collection1
I have set up SolrCloud replication using a standalone ZooKeeper. Now I wish to make some changes to my schema.xml and reload the core. The problem is that when I run a single-server Solr (no SolrCloud), the new schema is loaded, but I do not know how to reload the schema on all the replication servers. I tried reloading the schema on one of the servers with no visible effect. Is there a way to reload schema.xml in a distributed replication setup that uses ZooKeeper?
Just found the solution: we need to push the changed configuration to the ZooKeeper ensemble.
Just use
sh zkcli.sh -cmd upconfig -zkhost 127.0.0.1:2181 -collection collection1 -confname myconf -solrhome ../solr -confdir ../solr/collection1/conf
zkcli.sh is present under example/cloud-scripts
The answer marked as correct is wrong. You have to use the Solr Collections API.
Once you have uploaded the new collection (index) configuration with the Solr zkcli.sh utility, the configuration will not be reloaded automatically.
The Collections API is intended for SolrCloud, and the configuration reload is spread across the whole cluster. As far as I know, the Collections API has been available at least since Solr 4.8.
The procedure is slightly different, and with this API you can reload the configuration on the entire cluster with a single call.
Just upload your updated configuration with the Solr zkcli.sh utility. Pay attention not to confuse Solr's zkcli.sh with ZooKeeper's zkCli.sh: they have almost the same name but completely different purposes.
So, as said, use Solr zkcli.sh (at the time of writing it is in the directory server/scripts/cloud-scripts):
./zkcli.sh -cmd upconfig -zkhost 127.0.0.1:2181 -collection collection1 -confname myconf -confdir path/to/solr/collection1/conf
Then you can reload the configuration of collection1 with:
http://server1:8983/solr/admin/collections?action=RELOAD&name=collection1
The entire cluster will be updated.
This worked for me :
bin/solr zk -upconfig -n collectionName -d pathto/Conf_directory -z localhost:2181/solr
Below is the command for Windows.
It will be almost the same on Unix; we just need to change the path of the Solr lib and the classpath separator (; on Windows, : on Unix). Because it's a Java command, it should run on Unix as well.
java -Dlog4j.configuration="file:E:/solr-5.5.1/server/scripts/cloud-scripts/log4j.properties" -classpath .;E:/solr-5.5.1/server/solr-webapp/webapp/WEB-INF/lib/*;E:/solr-5.5.1/server/lib/ext/* org.apache.solr.cloud.ZkCLI -cmd upconfig -zkhost 192.168.42.13:2787 -confdir E:/New_Solor_Conf -confname Solor_conf
Brief details about the command:
Configuration of log4j for logging:
-Dlog4j.configuration="file:E:/solr-5.5.1/server/scripts/cloud-scripts/log4j.properties"
Classpath needed to run the org.apache.solr.cloud.ZkCLI class.
Make sure to use the right separator: : on Unix, ; on Windows.
-classpath .;E:/solr-5.5.1/server/solr-webapp/webapp/WEB-INF/lib/*;E:/solr-5.5.1/server/lib/ext/*
-zkhost 192.168.42.13:2787 (the remote host and port where Solr's ZooKeeper is running)
-confdir E:/New_Solor_Conf (the local directory we need to upload)
-confname Solor_conf (the remote configuration name)
If you do not use the correct classpath, you will get an error like:
Error: Could not find or load main class org.apache.solr.cloud.ZkCLI
or
Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory
        at org.apache.solr.common.cloud.SolrZkClient.<clinit>(SolrZkClient.java:71)
        at org.apache.solr.cloud.ZkCLI.main(ZkCLI.java:183)
Caused by: java.lang.ClassNotFoundException: org.slf4j.LoggerFactory
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
I was able to upload my local configuration changes without physically logging in to the remote Solr box. I hope it will work for others too.