I read the documentation, which says 7199 is the JMX port, 8983 is the Solr port, and 9160 is the Cassandra client port. But if I start
dse cassandra -s
it starts Solr. If I then start the Cassandra client on the same machine with
dse cassandra -f
it fails with:
Error: Exception thrown by the agent : java.rmi.server.ExportException: Port already in use: 7199; nested exception is:
java.net.BindException: Address already in use
So I understand that both try to use the same JMX port.
Is there any way to specify two different port numbers, one for Solr and one for Cassandra, or is there some other way to run both on the same machine?
I am using the DataStax 2.2.2 tarball setup.
Any ideas?
You only need to start DSE once. It runs Search and Cassandra in the same JVM and serves on all the ports you mentioned above.
As you mentioned above, use this command for a tarball install to start DSE in Search mode. Do this across your cluster (rolling restart, no downtime required):
bin/dse cassandra -s
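As a rough sanity check after each node comes back (assuming the default ports from your question and the usual tarball layout), you can confirm that the single JVM is serving both interfaces:

# Cassandra ring status (connects over JMX on 7199)
bin/nodetool status
# Solr admin interface answering on 8983
curl http://localhost:8983/solr/

If nodetool reports the node as UN (Up/Normal) and the Solr URL responds, both services are being served by the one DSE process.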
I'm trying to configure a Galera Cluster running under Ubuntu 20.04 in a Proxmox CT container. At the moment I'm stuck with the following error from the cluster:
WSREP: Recovered position 00000000-0000-0000-0000-000000000000:-1
[Note] /usr/sbin/mysqld (mysqld 10.3.37-MariaDB-0ubuntu0.20.04.1) starting as process 5351 ..
mariadb.service: Main process exited, code=exited, status=1/FAILURE
I think the problem is due to a wrong NAT configuration.
What have I already tried?
I configured two ports, one for the MariaDB server and another for the Galera cluster traffic between the two nodes. I wrote a PREROUTING rule to forward them to the correct machine, and I tested that the firewall works.
Any suggestions for galera.cnf?
The parameters I have configured at the moment are:
wsrep_cluster_address="gcomm://IP-ADDRESS:PORT,IP-ADDRESS2:PORT2";
wsrep_node_address="IP-ADDRESS:PORT:PORT"
and a similar configuration for the second machine.
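For comparison, a minimal galera.cnf for one node usually looks something like the sketch below. This is only an illustration using Galera's default ports (4567 for group communication, 4568 for IST, 4444 for SST) and placeholder addresses, not your actual NAT mapping:

[galera]
wsrep_on                 = ON
# Provider path can differ depending on which galera package is installed
wsrep_provider           = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name       = "my_cluster"
# Every node of the cluster; 4567 is the default group-communication port
wsrep_cluster_address    = "gcomm://NODE1_IP:4567,NODE2_IP:4567"
# Address (and optionally one port) this node advertises to the others
wsrep_node_address       = "THIS_NODE_IP:4567"
binlog_format            = row
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2
bind-address             = 0.0.0.0

If the nodes sit behind NAT, all three Galera ports (4567, 4568, 4444) need to be forwarded, not just the MariaDB client port 3306.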
In our project's current architecture, we use Solr to gather, store, and index documents from different sources and make them searchable in near real time.
Our web applications run on Tomcat and connect to Solr to create/modify the documents.
Solr uses ZooKeeper to keep the configuration centralized.
There are 5 servers in our cluster running Solr.
When ZooKeeper restarts on one of the servers, the daemon thread created on that server doesn't complete its execution, and as a result
we are getting continuous logs with the exception below while trying to connect to ZooKeeper from the Tomcat instance:
org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading Illegal access: this web application instance has been stopped already. Could not load [org.apache.zookeeper.ClientCnxn$SendThread]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
After some time the server runs out of threads.
Can someone please help me with the question below?
Why doesn't the daemon thread complete its execution when we restart ZooKeeper?
Solr version: 8.5.1
ZooKeeper version: 3.5.5
Since I'm new to the Solr server, I don't know how to find the port number of Solr using CMD on Windows. If anyone knows, please help. Thanks in advance.
Usually, Solr runs on port 8983. However, your Solr server may be running on a different port. In that case you can check the status of the Solr server by running:
bin\solr.cmd status
which will provide information like this if you have something running:
Found 1 Solr nodes:
Solr process 9713 running on port 8983
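If you want to double-check from plain CMD without the Solr scripts, you can also look at what is listening on the port (8983 below is just the default, used as an example):

rem Show whether anything is listening on the default Solr port
netstat -ano | findstr :8983

The last column of the netstat output is the PID, which you can match against the Solr java.exe process in Task Manager or tasklist.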
I'm currently working with Cassandra in Solr mode and have started running Cassandra with Solr enabled.
Using DSE 4.7
Cassandra 2.1.8
./dsetool create_core vin_service_development.vinid_search1 generateResources=true reindex=true
The indexes were created successfully and I am able to see the table in the Core Selector list at http://10.14.210.22:8983/solr/#/
I changed the schema.xml field type from "TextField" to "StrField" and want to reload the changes made to the schema.xml file.
I then executed the command below:
./dsetool reload_core vin_service_development.vinid_search1 reindex=true solrconfig=solr.xml
solr.xml is placed in the same directory as dsetool.
Error Info:
brsblcdb012:/apps/apg-data.cassandra/bin ./dsetool reload_core vin_service_development.vinid_search1 reindex=true solrconfig=solr.xml
WARN 20:21:14 Error while computing token map for datacenter datacenter1: could not achieve replication factor 1 (found 0 replicas only), check your keyspace replication settings. Note that this can affect the performance of the driver.
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error in xpath:/config/luceneMatchVersion for solrconfig.xml
at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:665)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:303)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:294)
at com.datastax.bdp.tools.SearchDseToolCommands.createOrReloadCore(SearchDseToolCommands.java:383)
at com.datastax.bdp.tools.SearchDseToolCommands.access$200(SearchDseToolCommands.java:53)
at com.datastax.bdp.tools.SearchDseToolCommands$ReloadCore.execute(SearchDseToolCommands.java:201)
at com.datastax.bdp.tools.DseTool.run(DseTool.java:114)
at com.datastax.bdp.tools.DseTool.run(DseTool.java:51)
at com.datastax.bdp.tools.DseTool.main(DseTool.java:174)
Is this the correct way to reload the core in Solr after making changes to the XML files?
Update:
One of my keyspaces was using NetworkTopologyStrategy earlier. I changed it to SimpleStrategy, so now all the keyspaces use SimpleStrategy in the Solr datacenter.
After executing the same command, I got this error:
brsblcdb012:/apps/apg-data.cassandra/bin ./dsetool reload_core vin_service_development.vinid_search1 reindex=true solrconfig=solr.xml
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error in xpath:/config/luceneMatchVersion for solrconfig.xml
at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:665)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:303)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:294)
at com.datastax.bdp.tools.SearchDseToolCommands.createOrReloadCore(SearchDseToolCommands.java:383)
at com.datastax.bdp.tools.SearchDseToolCommands.access$200(SearchDseToolCommands.java:53)
at com.datastax.bdp.tools.SearchDseToolCommands$ReloadCore.execute(SearchDseToolCommands.java:201)
at com.datastax.bdp.tools.DseTool.run(DseTool.java:114)
at com.datastax.bdp.tools.DseTool.run(DseTool.java:51)
at com.datastax.bdp.tools.DseTool.main(DseTool.java:174)
What would be the recommended change now?
To sum up the conversation:
The keyspace replication configuration was initially wrong (updated to SimpleStrategy RF2):
Your nodes are now in Datacenter 'Solr' but one of your keyspaces is configured with NetworkTopologyStrategy and a replication factor referencing 'datacenter1'.
You had accidentally replaced your solrconfig with the wrong XML, which caused this error. To fix this you can recreate your Solr core.
In DSE 4.8 you can remove your Solr core using unload_core and recreate it. If you're on an older version of DSE, you can follow 'Remove core from Datastax Solr'.
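Roughly, the recovery could look like the sketch below. The keyspace and core names are taken from the question above, and the exact unload_core options vary between DSE versions, so treat this as an outline rather than the exact commands:

# Point the keyspace at the replication you actually want (SimpleStrategy, RF 2, as discussed above)
cqlsh -e "ALTER KEYSPACE vin_service_development WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 2};"
# Drop the broken core, then recreate it so DSE generates a fresh solrconfig.xml and schema.xml
./dsetool unload_core vin_service_development.vinid_search1
./dsetool create_core vin_service_development.vinid_search1 generateResources=true reindex=true

After the core is recreated you can re-apply the TextField to StrField change against the generated schema.xml and reload, rather than passing solr.xml as the solrconfig.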
We are using Solr 4.2.1 and ZooKeeper 3.4.5 and there are 2 Solr servers.
Solr is reporting "No registered leader was found" and "WARNING ZkStateReader ZooKeeper watch triggered, but Solr cannot talk to ZK".
ZooKeeper is reporting "Exception when following the leader".
After restarting both, it works for some time and then reports the issue again.
Here are some additional logs from Solr:
SEVERE ZkController There was a problem finding the leader in
zk:org.apache.solr.common.SolrException: Could not get leader props
org.apache.solr.common.SolrException: No registered leader was found, collection:www-live slice:shard1
SEVERE: shard update error StdNode: http://10.23.3.47:8983/solr/www-live/:org.apache.solr.client.solrj.SolrServerException: Server refused connection at: http://10.23.3.47:8983/solr/www-live
SEVERE: Recovery failed - trying again... (5) core=www-live
From ZooKeeper
2016-01-14 11:25:08,423 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Follower#89] - Exception when following the leader
java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:375)
at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83)
at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:108)
at org.apache.zookeeper.server.quorum.Learner.readPacket(Learner.java:152)
at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:85)
at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:740)
Any help is much appreciated.
Thank you.
How many ZooKeeper servers do you have?
The ensemble must have an odd number of servers for leader election. If you have an even number, please change it to an odd number and try again.
Three ZooKeeper servers is the minimum recommended size for an ensemble, and we also recommend that they run on separate machines. For reliable ZooKeeper service, you should deploy ZooKeeper in a cluster known as an ensemble. As long as a majority of the ensemble are up, the service will be available. Because Zookeeper requires a majority, it is best to use an odd number of machines. For example, with four machines ZooKeeper can only handle the failure of a single machine; if two machines fail, the remaining two machines do not constitute a majority. However, with five machines ZooKeeper can handle the failure of two machines.
http://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html
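For reference, a three-node ensemble is declared in each node's zoo.cfg along these lines (hostnames and dataDir are placeholders):

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
# One line per ensemble member: peer port 2888, leader-election port 3888
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888

Each server also needs a myid file in dataDir containing its own number (1, 2 or 3).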