Using Camel 2.19.3 ...
I want to read from a TOPIC (IBM-MQ). I set both a
"durableSubscriptionName" and a client ID.
from ("jms:topic:TEST/TOPIC1?durableSubscriptionName=TestSubscription1&clientId=101021&exchangePattern=InOnly")
However, the DefaultJmsMessageContainerFactory gives me an error:
JMSCC0101: The clientID cannot be null
I've tried the same configuration using Spring JmsTemplate directly, setting
the clientId on the connection, and that works.
Do I need to specify a custom "connectionFactory"? Looking at the code for
DefaultJmsMessageContainerFactory, it looks like it should handle setting
the clientID on the underlying connection.
Any thoughts on what I should look for?
What worked for us was to assign a client ID to the connection factory, not to the Camel JMS component nor to the specific consumer. That level of granularity is all we need in our use case.
Since we use IBM Liberty, we added a property in server.xml, but there are probably other ways to accomplish the same thing.
<jmsConnectionFactory ..... >
<properties.wmqJms ... clientId="99999" ... />
</jmsConnectionFactory>
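If you are not on Liberty, the same idea works in plain Java by putting the client ID on the connection factory before handing it to Camel. A minimal sketch, assuming an existing CamelContext; createMqConnectionFactory is a hypothetical stand-in for however you build the IBM MQ ConnectionFactory:
import javax.jms.ConnectionFactory;
import org.apache.camel.component.jms.JmsComponent;
import org.springframework.jms.connection.SingleConnectionFactory;

// Wrap the vendor factory so every connection it hands out carries the client ID
ConnectionFactory mqFactory = createMqConnectionFactory(); // hypothetical helper
SingleConnectionFactory factory = new SingleConnectionFactory(mqFactory);
factory.setClientId("99999"); // the client ID belongs to the connection, not the consumer

// Register the JMS component with Camel using the wrapped factory
camelContext.addComponent("jms", JmsComponent.jmsComponentAutoAcknowledge(factory));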
We are using Pub/Sub and a Cloud Function to process a stream of incoming data. I am setting up a dead-letter topic to handle cases where a message cannot be processed, as described at Cloud Pub/Sub > Guides > Handling message failures.
I've configured a subscription on the dead-letter topic to retain messages for 7 days. We're doing this using Terraform:
resource "google_pubsub_subscription" "dead_letter_monitoring" {
project = var.project_id
name = "var.dead_letter_sub_name
topic = google_pubsub_topic.dead_letter.name
expiration_policy { ttl = "" }
message_retention_duration = "604800s" # 7 days
retain_acked_messages = true
ack_deadline_seconds = 600
}
We've tested our cloud function robustly, so we expect messages to appear on this dead-letter topic very rarely, perhaps never. Nevertheless, we're putting it in place to make sure that we catch any anomalies.
Given the rarity with which we expect messages to appear on the dead-letter topic, we need to set up an alert to send an email when such a message appears. Is it possible to do this? I've looked through the alerts one can create at https://console.cloud.google.com/monitoring/alerting/policies/create, but I didn't see anything that could accomplish this.
I know that I could write a cloud function to consume a message from the subscription and act upon it accordingly; however, I'd rather not have to do that. A monitoring alert feels like a much more elegant way of achieving this.
Is this possible?
Yes, you can use Cloud Monitoring for that. Create a new policy and perform the following configuration:
Select the Pub/Sub Topic resource type and the Published message metric. Observe the value every minute and count it (the aligner, under the advanced options). Then set the condition to trigger when the most recent value is above 0.
To restrict the alert to your topic, add a filter on topic_id with your topic name.
Then, configure your alert to send an email. It should work!
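For reference, the condition boils down to a metric filter roughly like the following (the metric and label names are taken from what the console generates, so treat the exact strings as assumptions to verify):
metric.type = "pubsub.googleapis.com/topic/send_message_operation_count"
resource.type = "pubsub_topic"
resource.label.topic_id = "your-dead-letter-topic"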
How can I query Solr, using the HTTP API, for information about a collection? I'm not talking about the collection's indexes, which I could query using the COLSTATUS command. I'm just talking about the basic details of a collection, which you can see when you click on a collection in the Solr web admin page, such as config name.
When wondering where information provided in the web interface comes from, the easiest way to find out is to bring up your browser's development tools and go to the Network section. Since the interface is a small JavaScript application, it uses the available REST API in the background, the same one you would query yourself.
Extensive collection information can be retrieved by querying:
/solr/admin/collections?action=CLUSTERSTATUS&wt=json
(Any _ parameter is just present for cache busting).
This will return a list of all the collections present and their metadata, such as which config set they use and what shards the collection consists of. This is the same API endpoint that the web interface uses.
collections":{
"aaaaa":{
"pullReplicas":"0",
"replicationFactor":"1",
"shards":{"shard1":{
"range":"80000000-7fffffff",
"state":"active",
"replicas":{"core_node2":{
"core":"aaaaa_shard1_replica_n1",
"base_url":"http://...:8983/solr",
"node_name":"...:8983_solr",
"state":"down",
"type":"NRT",
"force_set_state":"false",
"leader":"true"}}}},
"router":{"name":"compositeId"},
"maxShardsPerNode":"1",
"autoAddReplicas":"false",
"nrtReplicas":"1",
"tlogReplicas":"0",
"znodeVersion":7,
"configName":"_default"},
...
}
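If you'd rather fetch the same data from Java, SolrJ wraps this endpoint. A minimal sketch (the base URL is an assumption; add error handling as needed):
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.client.solrj.response.CollectionAdminResponse;

// Issue the same CLUSTERSTATUS call through SolrJ
HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build();
CollectionAdminResponse response = CollectionAdminRequest.getClusterStatus().process(client);
// The "cluster" entry holds the collections and their metadata
System.out.println(response.getResponse().get("cluster"));
client.close();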
Please try the code below.
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import org.apache.solr.client.solrj.impl.CloudSolrClient;

public String getConfigName(String collectionName) throws Exception {
    // Provide the list of ZooKeeper instances
    List<String> zkHosts = Arrays.asList("zk1:2181", "zk2:2181");
    // Build the SolrCloud client (no ZooKeeper chroot)
    CloudSolrClient cloudSolrClient = new CloudSolrClient.Builder(zkHosts, Optional.empty()).build();
    try {
        cloudSolrClient.connect();
        // Read the config set name for the collection from ZooKeeper
        return cloudSolrClient.getZkStateReader().readConfigName(collectionName);
    } finally {
        cloudSolrClient.close();
    }
}
Please handle the exception(s) from your end.
Use case: I would like to pick messages from an HTTP endpoint and route them to a JMS endpoint.
My route configuration looks like:
from("jetty:http://0.0.0.0:9080/quote")
.convertBodyTo(String.class)
.to("stream:out")
.to(InOut, "wmq:queue:" + requestQueue +
"?replyTo=" + responseQueue +
"&replyToType=" + replyToType +
"&useMessageIDAsCorrelationID=true");
My understanding is that this way I will get a request-response pattern for the JMS endpoint, and the correlation ID would be the unique identifier used to match the response to the corresponding request.
This works well when I have only one instance of the application running; however, when I have more than one instance running simultaneously, responses are picked up by whichever instance gets to them first, not necessarily the producer.
For example, A and B are two instances of the route (and listener) with exactly the same configuration, listening for responses on a shared queue. At times A gets its response, but at times it also picks up the response for a message produced by B.
Appreciate any help/pointers on this. Thanks!
Basically the issue was with the use of replyToType, which was set to Exclusive; this resulted in the listener picking up everything instead of using JMS selectors to match correlation IDs.
I changed the value of replyToType to Shared and it's all working fine now.
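For reference, the working endpoint configuration looks like this (queue names are placeholders):
from("jetty:http://0.0.0.0:9080/quote")
    .convertBodyTo(String.class)
    .to(InOut, "wmq:queue:REQUEST.QUEUE"
        + "?replyTo=RESPONSE.QUEUE"
        + "&replyToType=Shared" // match replies via JMS selector on correlation ID
        + "&useMessageIDAsCorrelationID=true");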
This is well defined in section "Request-Reply over JMS" at http://camel.apache.org/jms.html
I need to configure some Camel routes based on some configuration files.
All configured routes will need to split a message into one or two sub-messages, do some JMS integration work on the first one, and then aggregate the JMS reply together with the optional second message. In a simplified picture it looks like this:
message -- > split --> message 1 --> JMS request/reply --> aggregate --> more processing
\--> message 2 /
The aggregation will be done on completion size, which I know upfront (1 or 2) from the route metadata. When the second message is present, no other processing is needed before it is merged back with the JMS reply.
So in short I need a split followed by a routing followed by an aggregation, which is quite a common pattern. The only particularity is that when the second split message is present I don't need to do anything with it before aggregating it back.
In Java DSL it would look something like this:
from("direct:abc")
// The splitter below will set the JmsIntegration flag
.split().method(MySplitter.class, "split")
.choice()
.when(header("JmsIntegration"))
.inOut("jms:someQueue"))
.otherwise()
// what should I have on here?
.to(???)
.end()
.aggregate(...)to(...);
So my questions would be:
What should I put on the otherwise branch?
What I need in fact is an if: if the split message needs JMS, go to JMS and then move to the aggregator; if not, go straight to the aggregator. I am considering creating a dummy processor which does nothing, but this seems to me a naive approach.
Am I on the wrong path? If so, what would be the alternative?
Initially I was thinking about a message enricher, but I would not like to send the original message to the JMS endpoint.
I also considered putting my aggregation strategy inside my splitter, but again I could not put it all together.
Based on your post it looks like you are trying to have the return of your enrichment merge with the original message, but you want to send a custom message to the JMS endpoint. I would recommend storing your original message in a bean or a cache or something of the sort, leveraging all of your conversions with Camel, and then having your aggregation strategy use that storage to return your desired format.
from("direct:abc")
.split().method(MySplitter.class, "split")
.choice()
.when(header("JmsIntegration"))
.beanRef("MyStorageBean", "storeOriginal")
.convertBodyTo(MyJmsFormat.class)
//This aggregation strategy could have a reference
//to your storage bean and retrieve the instance
.enrich("jms:someQueue", myCustomAggreationStrategyInstance)
.otherwise()
.end()
.aggregate(...)
.to("direct:continueProcessing");
Option #2: Based on your comment saying you need the original message that the direct:abc endpoint received, this can be simplified a lot. In this example we can use Camel's existing original-message store to retrieve the message that was passed into direct:abc. If your message after the split has a JmsIntegration header, we convert the body to the desired format for the JMS call, use the enrich statement to make the call, and use a custom aggregator that gives you access to the message used to call the JMS endpoint, the message that came back, and the original message that direct:abc received. If your flow does not have a JmsIntegration header, the message goes to the otherwise branch, which does no additional processing before ending the choice statement; the split messages are then aggregated back together with whatever custom strategy you need.
from("direct:abc")
.split().method(MySplitter.class, "split")
.choice()
.when(header("JmsIntegration"))
.convertBodyTo(MyJmsFormat.class)
//See aggregationStrategy sample below
.enrich("jms:someQueue", myAggStrat)
.otherwise()
//Non JmsIntegration header messages come here,
//but receive no work and are passed on.
.end()
.aggregate(...)
.to("direct:continueProcessing");
//Your Custom Aggregator
public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
//This logic will retrieve the original message passed into direct:abc
Message originalMessage = newExchange.getUnitOfWork().getOriginalInMessage();
//TODO logic for manipulating your exchanges and returning the desired result
}
You said you considered using the Enricher, but you don't want to send the raw message. You can resolve this neatly by using a pre-JMS route:
from("direct:abc")
.enrich("direct:sendToJms", new MyAggregation());
.to("direct:continue");
from("direct:sendToJms")
// do marshalling or conversion here as necessary
.convertBodyTo(MyJmsRequest.class)
.to("jms:someQueue");
public class MyAggregation implements AggregationStrategy {
public Exchange aggregate(Exchange original, Exchange resource) {
MyBody originalBody = original.getIn().getBody(MyBody.class);
MyJmsResponse resourceResponse = resource.getIn().getBody(MyJmsResponse.class);
Object mergeResult = ... // combine original body and resource response
original.getIn().setBody(mergeResult);
return original;
}
}
The Splitter automatically aggregates split exchanges back together. However, the default aggregation strategy (since 2.3) is to return the original exchange. You can easily override the default strategy with your own by specifying it directly on the Splitter. Furthermore, if you don't have an alternative flow for your Choice, it's much easier to use a Filter. Example:
from("direct:abc")
.split().method(MySplitter.class, "split").aggregationStrategy(new MyStrategy())
.filter(header("JmsIntegration"))
.inOut("jms:someQueue"))
.end()
.end()
.to(...);
You still need to implement MyStrategy to combine the two messages.
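For completeness, a minimal sketch of such a strategy, assuming string bodies (the merge logic is a placeholder; adapt it to your payloads):
import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.AggregationStrategy;

public class MyStrategy implements AggregationStrategy {
    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        // First split message to arrive: nothing to merge with yet
        if (oldExchange == null) {
            return newExchange;
        }
        // Merge the JMS reply with the optional second split message;
        // string concatenation stands in for your real merge logic
        String merged = oldExchange.getIn().getBody(String.class)
                + newExchange.getIn().getBody(String.class);
        oldExchange.getIn().setBody(merged);
        return oldExchange;
    }
}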
I am trying to use rabbitmq-c on CentOS 5.6 and test its functionality with the C client, following the steps of the tutorial: http://www.rabbitmq.com/tutorials/tutorial-one-java.html.
However, it fails when I use the default exchange.
For example, I want to send a message, "Hello world", to a queue named "myqueue" via the default exchange whose name is "(AMQP default)".
In Java, here is the code:
channel.basicPublish("", QUEUE_NAME, null, message.getBytes());
But in C, when I run rmq_new_task.c (almost the same as amqp_sendstring.c), following the examples at https://github.com/liuhaobupt/rabbitmq_work_queues_demo-with-rabbit-c-client-lib:
queuename="myqueue";
......
die_on_error(amqp_basic_publish(conn, amqp_cstring_bytes(exchange),
amqp_cstring_bytes(routingkey), &props, amqp_cstring_bytes("Hello world")),
"Publishing");
In the Java client, we just set the parameter "exchange" to "" to tell the server to send the message to a specified queue, named the same as the routing key, via the default exchange.
So what value should I give the second parameter "exchange" in the C client (using the default exchange)? I tried setting it to "" or "amq.direct". It did not show any error while running and seemed to work well.
However, when I checked in the rabbitmq-management UI (http://localhost:55672/#/queues), the queue named "myqueue" did not exist!
Would someone please point me in the right direction? I'd really appreciate it!
Take a look at http://www.rabbitmq.com/tutorials/amqp-concepts.html and specifically look for the section entitled Default Exchange.
The usage of the default exchange is very simple.
In Java you would do:
channel.basicPublish("", "hello", null, message.getBytes());
By specifying "" it says to use the default exchange. (There should be no need to use amq.direct.)
As per the article above it states:
The default exchange is a direct exchange with no name (empty string)
pre-declared by the broker. It has one special property that makes it
very useful for simple applications: every queue that is created is
automatically bound to it with a routing key which is the same as the
queue name.
So that means publishing to the default exchange will only work if you have already created the queue that you want to publish to.
So you will need to create your queue before you can publish to the default exchange. Once you've done that you will start seeing your messages.
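To illustrate with the Java client from the tutorial (the C client follows the same order: amqp_queue_declare first, then amqp_basic_publish):
// Declare the queue first; the default exchange automatically routes
// to it with a routing key equal to the queue name
channel.queueDeclare("myqueue", false, false, false, null);
channel.basicPublish("", "myqueue", null, "Hello world".getBytes());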