My question is: when we use CloudSolrServer, we specify a single zkHost address and an LBHttpSolrServer. CloudSolrServer then extracts information about live and dead nodes from ZooKeeper (zkHost) and serves the requests.
But what if the zkHost specified as the argument itself goes down? I think CloudSolrServer should accept more than one zkHost, as is the case with LBHttpSolrServer, which accepts more than one Solr server URL.
Any ideas?
Thanks
According to this: http://comments.gmane.org/gmane.comp.jakarta.lucene.solr.user/71075
You can pass a comma-delimited list of the ZooKeeper addresses in your ensemble, such as:
zk1:2181,zk2:2181,zk3:2181
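As a rough sketch (assuming SolrJ 7.x or later; the collection name below is just a placeholder, and the host names are the ones from the example above), the whole ensemble can be handed to the CloudSolrClient builder, so the client keeps working even if one of the listed ZooKeeper nodes goes down:
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import org.apache.solr.client.solrj.impl.CloudSolrClient;

public class ZkEnsembleExample {
    public static void main(String[] args) throws Exception {
        // Pass every ZooKeeper node of the ensemble, not just one.
        List<String> zkHosts = Arrays.asList("zk1:2181", "zk2:2181", "zk3:2181");
        try (CloudSolrClient client =
                 new CloudSolrClient.Builder(zkHosts, Optional.empty()).build()) {
            client.setDefaultCollection("mycollection");
            // Issue queries/updates here; the client keeps working as long as
            // the ZooKeeper ensemble has a quorum, even if one listed node is down.
        }
    }
}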
Related
How can I query Solr, using the HTTP API, for information about a collection? I'm not talking about the collection's indexes, which I could query using the COLSTATUS command. I'm just talking about the basic details of a collection, which you can see when you click on a collection in the Solr web admin page, such as config name.
When wondering where information provided in the web interface comes from, the easiest way to find out is to bring up your browser's development tools and go to the Network section. Since the interface is a small JavaScript application, it uses the available REST API in the background - the same one you'd query yourself.
Extensive collection information can be retrieved by querying:
/solr/admin/collections?action=CLUSTERSTATUS&wt=json
(Any _ parameter appended to the URL is just there for cache busting.)
This will return a list of all the collections present and their metadata, such as which config set they use and what shards the collection consists of. This is the same API endpoint that the web interface uses.
collections":{
"aaaaa":{
"pullReplicas":"0",
"replicationFactor":"1",
"shards":{"shard1":{
"range":"80000000-7fffffff",
"state":"active",
"replicas":{"core_node2":{
"core":"aaaaa_shard1_replica_n1",
"base_url":"http://...:8983/solr",
"node_name":"...:8983_solr",
"state":"down",
"type":"NRT",
"force_set_state":"false",
"leader":"true"}}}},
"router":{"name":"compositeId"},
"maxShardsPerNode":"1",
"autoAddReplicas":"false",
"nrtReplicas":"1",
"tlogReplicas":"0",
"znodeVersion":7,
"configName":"_default"},
...
}
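If you prefer to fetch the same information from SolrJ rather than over raw HTTP, here is a minimal sketch (assuming SolrJ 7.x or later; the ZooKeeper hosts and collection name are placeholders) that issues the same CLUSTERSTATUS call through CollectionAdminRequest:
import java.util.Arrays;
import java.util.Optional;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.client.solrj.response.CollectionAdminResponse;

public class ClusterStatusExample {
    public static void main(String[] args) throws Exception {
        try (CloudSolrClient client = new CloudSolrClient.Builder(
                Arrays.asList("zk1:2181", "zk2:2181", "zk3:2181"), Optional.empty()).build()) {
            // Same request as /solr/admin/collections?action=CLUSTERSTATUS
            CollectionAdminResponse response =
                CollectionAdminRequest.getClusterStatus()
                                      .setCollectionName("aaaaa")
                                      .process(client);
            // The nested structure mirrors the JSON above, e.g. the config set name:
            Object configName = response.getResponse()
                .findRecursive("cluster", "collections", "aaaaa", "configName");
            System.out.println(configName);
        }
    }
}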
Please try the below code.
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import org.apache.solr.client.solrj.impl.CloudSolrClient;

public String getConfigName(String collectionName) throws Exception {
    // Provide the list of ZooKeeper instances
    List<String> zkHosts = Arrays.asList("zk1:2181", "zk2:2181", "zk3:2181");
    // Build the Solr cloud client
    try (CloudSolrClient cloudSolrClient =
             new CloudSolrClient.Builder(zkHosts, Optional.empty()).build()) {
        cloudSolrClient.connect();
        // Read the config set name for the collection from ZooKeeper
        return cloudSolrClient.getZkStateReader().readConfigName(collectionName);
    }
}
Please handle the exception(s) from your end.
Is it possible to load balance in a Camel route without knowing the number of endpoints before runtime?
As an example, certain incoming requests have to be load balanced over certain servers, and those servers are defined in configuration.
Using .loadBalance().failover().to(), how can I dynamically set the number of to() endpoints?
I have tried it with toD() and a string of comma-separated endpoints, but that sends the request to all servers and does not load balance.
To do this, you'll need to use the Java DSL.
// Inside a RouteBuilder's configure() method:
// Set up the dynamic list of endpoints
List<String> toDefs = new ArrayList<String>();
toDefs.add("mock:a");
toDefs.add("mock:b");

// We need to modify the definition, so keep a reference to it
LoadBalanceDefinition loadBalanceDefinition =
    from("direct:start")
        .loadBalance()
        .failover();

// Dynamically add the list of endpoints
for (String toDef : toDefs) {
    loadBalanceDefinition = loadBalanceDefinition.to(toDef);
}

// Finalize the route
loadBalanceDefinition.to("direct:end");
Is it possible to get the query configuration values (defaults + request parameters) using SolrJ?
For example: if I direct a request to a RequestHandler using SolrJ, I would like to get the list of parameters (defaults + overridden request parameters) used for the query. I need this to log the configuration that was in effect when the query was made.
Try adding the parameter echoParams=all.
The echoParams parameter tells Solr what kinds of request parameters should be included in the response for debugging purposes. Legal values include:
none - don't include any request parameters for debugging
explicit - include the parameters explicitly specified by the client in the request
all - include all parameters involved in this request, either specified explicitly by the client, or implicit because of the request handler configuration.
Take a look at Common Query Parameters
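As a rough SolrJ sketch (the collection name and query are placeholders, and the client is assumed to be built elsewhere), you can set echoParams on the request and then read the effective parameters back from the responseHeader:
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.util.NamedList;

public class EchoParamsExample {
    public static void logEffectiveParams(SolrClient client) throws Exception {
        SolrQuery query = new SolrQuery("*:*");
        // Ask Solr to echo back every parameter it used, including
        // defaults from the request handler configuration.
        query.set("echoParams", "all");

        QueryResponse response = client.query("mycollection", query);

        // The echoed parameters come back under responseHeader/params.
        NamedList<?> header = (NamedList<?>) response.getResponse().get("responseHeader");
        System.out.println(header.get("params"));
    }
}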
Queue queue = QueueFactory.getDefaultQueue();
queue.add(TaskOptions.Builder.withUrl("/worker").param("key", "ABC"));
How do I read the key value in the worker? Using the code below, or some other approach?
request.getParameter("key");
Absolutely. The /worker endpoint is configured in your web.xml as a servlet. The request object will be passed to your servlet method, and you can use the standard methods like the one you have specified, i.e. getParameter(...).
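A minimal sketch of such a worker servlet (the class name is made up; push queue tasks are delivered as POST requests by default):
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Mapped to /worker in web.xml
public class WorkerServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        // Task parameters added via TaskOptions.param(...) arrive as
        // ordinary form parameters.
        String key = request.getParameter("key"); // "ABC" in the example above
        // ... do the actual work with the key here ...
        response.setStatus(HttpServletResponse.SC_OK);
    }
}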
I am trying to use rabbitmq-c on CentOS 5.6 and test the C client, following the steps of this tutorial: http://www.rabbitmq.com/tutorials/tutorial-one-java.html.
However, it fails when I use the default exchange.
For example, I want to send a message, "Hello world", to a queue named "myqueue" via the default exchange whose name is "(AMQP default)".
In Java, here is the code:
channel.basicPublish("", QUEUE_NAME, null, message.getBytes());
But in C, when I run rmq_new_task.c (almost the same as amqp_sendstring.c), following the examples at https://github.com/liuhaobupt/rabbitmq_work_queues_demo-with-rabbit-c-client-lib:
queuename="myqueue";
......
die_on_error(amqp_basic_publish(conn, amqp_cstring_bytes(exchange),
amqp_cstring_bytes(routingkey), &props, amqp_cstring_bytes("Hello world")),
"Publishing");
In the Java client, we just set the "exchange" parameter to "" to tell the server to send the message via the default exchange to the queue named the same as the routing key.
So what value should I give the second parameter "exchange" in the C client to use the default exchange? I tried setting it to "" and to "amq.direct". Neither showed any error while running, and both seemed to work.
However, when I checked the RabbitMQ management UI (http://localhost:55672/#/queues), the queue named "myqueue" did not exist!
Would someone please point me in the right direction? I'd really appreciate it!
Take a look at http://www.rabbitmq.com/tutorials/amqp-concepts.html and specifically look for the section entitled Default Exchange.
The usage of the default exchange is very simple.
In Java you would do:
channel.basicPublish("", "hello", null, message.getBytes());
By specifying "" it says to use the default exchange. (There should be no need to use amq.direct.)
As the article above states:
The default exchange is a direct exchange with no name (empty string) pre-declared by the broker. It has one special property that makes it very useful for simple applications: every queue that is created is automatically bound to it with a routing key which is the same as the queue name.
So that means publishing to the default exchange will only work if you have already created the queue that you want to publish to.
So you will need to create your queue before you can publish to the default exchange. Once you've done that you will start seeing your messages.
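As a minimal Java sketch of that declare-then-publish pattern (the connection settings are placeholders; in the C client the equivalent step is calling amqp_queue_declare before amqp_basic_publish):
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class DefaultExchangeExample {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // Declare the queue first; this also binds it to the default
        // exchange under its own name.
        channel.queueDeclare("myqueue", false, false, false, null);

        // Publish to the default exchange ("") with the queue name as routing key.
        channel.basicPublish("", "myqueue", null, "Hello world".getBytes());

        channel.close();
        connection.close();
    }
}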