Camel netty consumer always sends immediate response even if route not finished - apache-camel

I am using Apache Camel 3.8 and trying to build a simple TCP server that sends the request message back to the client (the client I am using is PacketSender).
The problem is, if the process() method takes too long, an empty response is sent back by Netty in the background after about 15 ms, even while process() is still sleeping in Thread.sleep.
If I do not let the method (thread) sleep, the response is sent with the received content immediately.
How can I make Netty wait until my processing is finished and then send the response I set in the exchange message body?
Here is the route I am using:
fromF("netty:tcp://%s:%d?sync=true&synchronous=true&disconnectOnNoReply=false&connectTimeout=100000", host, receivePort)
.bean(HL7Request.class, "process", BeanScope.Request);
The bean's process method looks like this; to simulate my long-running processing I used Thread.sleep:
public void process(Exchange exchange) throws Exception {
    try {
        CamelContext context = exchange.getContext();
        exchange.setException(null);
        Thread.sleep(5000); // <-- Here the method stops for 5 seconds but the response is sent by netty anyway
        String content = exchange.getMessage().getBody(String.class);
        System.out.println(content);
    }
    catch (Exception e) {
        exchange.setException(e);
    }
}
PacketSender receives an empty response
Thank you.
Regards,
Florian

I think it was an issue with the PacketSender software. It works perfectly using command-line telnet.

Related

How to manually ack/nack a PubSub message in Camel Route

I am setting up a Camel route with ackMode=NONE, meaning acknowledgements are not done automatically. How do I explicitly acknowledge the message in the route?
In my Camel Route definition I've set ackMode to NONE. According to the documentation, I should be able to manually acknowledge the message downstream:
https://github.com/apache/camel/blob/master/components/camel-google-pubsub/src/main/docs/google-pubsub-component.adoc
"AUTO = exchange gets ack’ed/nack’ed on completion. NONE = downstream process has to ack/nack explicitly"
However I cannot figure out how to send the ack.
from("google-pubsub:<project>:<subscription>?concurrentConsumers=1&maxMessagesPerPoll=1&ackMode=NONE")
.bean("processingBean");
My PubSub subscription has an acknowledgement deadline of 10 seconds and so my message keeps getting re-sent every 10 seconds due to ackMode=NONE. This is as expected. However I cannot find a way to manually acknowledge the message once processing is complete and stop the re-deliveries.
I was able to dig through the Camel components and figure out how it is done. First I created a GooglePubsubConnectionFactory bean:
@Bean
public GooglePubsubConnectionFactory googlePubsubConnectionFactory() {
    GooglePubsubConnectionFactory connectionFactory = new GooglePubsubConnectionFactory();
    connectionFactory.setCredentialsFileLocation(pubsubKey);
    return connectionFactory;
}
Then I was able to reference the ack id of the message from the header:
@Header(GooglePubsubConstants.ACK_ID) String ackId
Then I used the following code to acknowledge the message:
List<String> ackIdList = new ArrayList<>();
ackIdList.add(ackId);
AcknowledgeRequest ackRequest = new AcknowledgeRequest().setAckIds(ackIdList);
Pubsub pubsub = googlePubsubConnectionFactory.getDefaultClient();
pubsub.projects().subscriptions().acknowledge("projects/<my project>/subscriptions/<my subscription>", ackRequest).execute();
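Putting those pieces together, a processing bean could look roughly like the sketch below. This is only a sketch based on the snippets above: the class and method names are placeholders, and the imports assume the older camel-google-pubsub component that exposes GooglePubsubConnectionFactory and the Google Pubsub v1 client.
import java.util.Collections;

import com.google.api.services.pubsub.Pubsub;
import com.google.api.services.pubsub.model.AcknowledgeRequest;

import org.apache.camel.Body;
import org.apache.camel.Header;
import org.apache.camel.component.google.pubsub.GooglePubsubConnectionFactory;
import org.apache.camel.component.google.pubsub.GooglePubsubConstants;

// hypothetical bean referenced as .bean("processingBean") in the route above
public class ProcessingBean {

    private final GooglePubsubConnectionFactory googlePubsubConnectionFactory;

    public ProcessingBean(GooglePubsubConnectionFactory googlePubsubConnectionFactory) {
        this.googlePubsubConnectionFactory = googlePubsubConnectionFactory;
    }

    public void process(@Body String body,
                        @Header(GooglePubsubConstants.ACK_ID) String ackId) throws Exception {
        // ... do the actual processing of the message body first ...

        // then acknowledge explicitly so PubSub stops re-delivering the message
        AcknowledgeRequest ackRequest = new AcknowledgeRequest()
                .setAckIds(Collections.singletonList(ackId));
        Pubsub pubsub = googlePubsubConnectionFactory.getDefaultClient();
        pubsub.projects().subscriptions()
                .acknowledge("projects/<my project>/subscriptions/<my subscription>", ackRequest)
                .execute();
    }
}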
I think it is best if you look at how the Camel component does it with ackMode=AUTO. Have a look at this class (method acknowledge).
But why do you want to do this extra work? Camel is your friend; it simplifies integration by abstracting away low-level code.
So when you use ackMode=AUTO, Camel automatically commits your successfully processed messages (once the message has successfully passed through the whole route) and rolls back the messages it cannot process.

Store the file body without interrupting the route

Basically I have a route like the following.
from("servlet://test/?matchOnUriPrefix=true&servletName=testservlet")
.log("Wire tap beginning")
.streamCaching()
.wireTap("seda:tap").copy(true).end()
.log("End of wiretap")
.log("request sent to provider ")
.to("https://someservice.com" + "?bridgeEndpoint=true&throwExceptionOnFailure=false")
.log("request sent to END");
The above route forwards the request to "https://someservice.com".
The request to "https://someservice.com" is a POST call which accepts
- text/plain; charset=UTF-8
- gzip file body
My intention is to save the gzip body without interrupting the actual route; I used a wire tap to achieve this, i.e. to save the request body in a separate thread.
When I make a request, I don't see "https://someservice.com" being invoked in a separate thread; the execution happens in the following way:
1. the wiretap endpoint ("seda:tap") is invoked first, and only after its processing is finished,
2. "https://someservice.com" is invoked.
Here is the code for the seda wiretap routes:
from("seda:tap")
.unmarshal().gzip()
.to("seda:storedata");
from("seda:storedata")
.process(new Processor() {
    @Override
    public void process(Exchange exchange) throws Exception {
        // store the data
        Message message = exchange.getIn();
        String result = message.getBody(String.class);
    }
});
How to achieve this?
Tell seda not to wait for a reply:
.wireTap("seda:tap?waitForTaskToComplete=Never").copy(true).end()
http://camel.apache.org/seda
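Put into the original route, that would look roughly like this (a sketch based on the route above; the main flow then continues to the HTTP endpoint without waiting for the tap):
from("servlet://test/?matchOnUriPrefix=true&servletName=testservlet")
    .streamCaching()
    // fire-and-forget copy: the seda producer returns immediately with waitForTaskToComplete=Never
    .wireTap("seda:tap?waitForTaskToComplete=Never").copy(true).end()
    .log("request sent to provider")
    .to("https://someservice.com" + "?bridgeEndpoint=true&throwExceptionOnFailure=false")
    .log("request sent to END");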

Apache Camel: how to consume messages from two or more JMS queues

From a programming point of view, I have a very simple business case. However, I can't figure out how to implement it using Apache Camel... Well, I have 2 JMS queues: one to receive commands, and another to store a large number of messages which should be delivered to an external system in batches of 1000 or less.
Here is the conceptual message exchange algorithm:
1. Upon receiving a command message in the 1st JMS queue, I prepare an XML message.
2. Send the XML message to an external SOAP web service to obtain a userToken.
3. Using the userToken, prepare another XML message and send it to a REST service to obtain a jobToken.
4. Loop:
4.1. aggregate messages from the 2nd JMS queue into batches of 1000, stopping aggregation on timeout (see the aggregation sketch after this list)
4.2. for every batch, convert it to a CSV file
4.3. send the CSV via HTTP POST to a REST service
4.4. retain the batchToken assigned to each batch
5. Using the jobToken, prepare an XML message and send it to the REST service to commit the batches.
6. Using the batchTokens, check the execution status of each batch via an XML message to the REST service.
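For step 4.1, Camel's Aggregator EIP can collect messages from the second queue into batches by size or timeout. A minimal sketch, assuming a queue named dataQueue and a follow-up endpoint that does the CSV conversion and upload (both names are placeholders):
from("jms:queue:dataQueue")
    // single group: every message correlates to the same batch
    .aggregate(constant(true), new GroupedExchangeAggregationStrategy())
        .completionSize(1000)      // close the batch at 1000 messages ...
        .completionTimeout(30000)  // ... or after 30 seconds of inactivity
    .to("direct:convertBatchToCsvAndPost");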
While looking at Camel I created a sample project where I could model out steps 1-3 and 5:
from("file:src/data?noop=true")
.setHeader("sfUsername", constant("a#fd.com"))
.setHeader("sfPwd", constant("12345"))
.to("velocity:com/eip/vm/bulkPreLogin.vm?contentCache=false")
.setHeader(Exchange.CONTENT_TYPE, constant("text/xml; charset=UTF-8"))
.setHeader("SOAPAction", constant("login"))
.setHeader("CamelHttpMethod", constant("POST"))
.to("http4://bulklogin") // send login
.to("xslt:com/eip/xslt/bulkLogin.xsl") //xslt transformation to retrieve userToken
.process(new Processor() {
    @Override
    public void process(Exchange exchange) throws Exception {
        String body = (String) exchange.getIn().getBody();
        String[] bodyParts = body.split(",");
        exchange.getProperties().put("userToken", bodyParts[0]);
        .....
    }
})
.to("velocity:com/eip/vm/jobInsertTeamOppStart.vm")
.setHeader(Exchange.CONTENT_TYPE, constant("application/xml; charset=UTF-8"))
.setHeader("X-Session", property("userToken"))
.setHeader("CamelHttpMethod", constant("POST"))
.to("http4://scheduleJob") //schedule job
.to("xslt:com//eip/xslt/jobInfoTransform.xsl")
.process(new Processor() {
    @Override
    public void process(Exchange exchange) throws Exception {
        String body = (String) exchange.getIn().getBody();
        exchange.getProperties().put("jobToken", body.trim());
    }
})
//add batches in a loop ???
.to("velocity:com/eip/vm/jobInsertTeamOppEnd.vm")
.setHeader(Exchange.HTTP_URI, simple("https://na15.com/services/async/job/${property.jobToken}"))
.setHeader(Exchange.CONTENT_TYPE, constant("application/xml; charset=UTF-8"))
.setHeader("X-ID-Session", property("userToken"))
.setHeader("CamelHttpMethod", constant("POST"))
.to("http4://closeJob") //schedule job
//check batch?
.bean(new SomeBean());
So, my question is:
How can I read messages from my 2nd JMS queue?
This doesn't strike me as a very good use case for a single Camel route. I think you should implement the main functionality in a POJO and use Camel's Bean Integration for consuming and producing messages. That results in code that is much easier to maintain, and also in easier exception handling.
See https://camel.apache.org/pojo-consuming.html
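A minimal sketch of that approach, using the Camel 2-style @Consume annotation; the queue name, class name and method are placeholders:
import org.apache.camel.Consume;

public class CommandHandler {

    // POJO consuming: Camel invokes this method for every message arriving
    // on the command queue, passing the JMS text body as the argument
    @Consume(uri = "jms:queue:commandQueue")
    public void onCommand(String commandXml) {
        // orchestrate steps 1-6 here in plain Java, calling out to the
        // SOAP/REST services via injected ProducerTemplates or services
    }
}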

Camel RabbitMQ Acknowledgement

I am using Camel for my messaging application. In my use case I have a producer (which is RabbitMQ here), and the Consumer is a bean.
from("rabbitmq://127.0.0.1:5672/exDemo?queue=testQueue&username=guest&password=guest&autoAck=false&durable=true&exchangeType=direct&autoDelete=false")
.throttle(100).timePeriodMillis(10000)
.process(new Processor() {
    @Override
    public void process(Exchange exchange) throws Exception {
        MyCustomConsumer.consume(exchange.getIn().getBody());
    }
});
Apparently, when autoAck is false, acknowledgement is sent when the process() execution is finished (please correct me if I am wrong here)
Now I don't want to acknowledge when the process() execution is finished, I want to do it at a later stage. I have a BlockingQueue in my MyCustomConsumer where consume() is putting messages, and MyCustomConsumer has different mechanism to process them. I want to acknowledge message only when MyCustomConsumer finishes processing messages from BlockingQueue. How can I achieve this?
You can consider using the Camel AsyncProcessor API and calling the callback's done method only once you have processed the message taken from the BlockingQueue.
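A minimal sketch of what that could look like; the queue of pending work is an assumption, and note the next answer points out that the RabbitMQ consumer drives the processor synchronously, so this only delays the ack if the consumer actually uses the async API:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import org.apache.camel.AsyncCallback;
import org.apache.camel.AsyncProcessor;
import org.apache.camel.Exchange;

public class DeferredAckProcessor implements AsyncProcessor {

    // exchange + callback pairs, drained by your own worker thread
    private final BlockingQueue<Object[]> pending = new LinkedBlockingQueue<>();

    @Override
    public boolean process(Exchange exchange, AsyncCallback callback) {
        // park the message; the worker calls callback.done(false) when it has
        // really finished, which is when Camel completes the exchange
        pending.offer(new Object[] { exchange, callback });
        return false; // false = the exchange will be completed asynchronously
    }

    @Override
    public void process(Exchange exchange) throws Exception {
        // synchronous variant required by the Processor interface
        process(exchange, doneSync -> { });
    }
}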
I bumped into the same issue.
The Camel RabbitMQConsumer.RabbitConsumer implementation does
consumer.getProcessor().process(exchange);
long deliveryTag = envelope.getDeliveryTag();
if (!consumer.endpoint.isAutoAck()) {
log.trace("Acknowledging receipt [delivery_tag={}]", deliveryTag);
channel.basicAck(deliveryTag, false);
}
So it's just expecting a synchronous processor.
If you bind this to a seda route for instance, the process method returns immediately and you're pretty much back to the autoAck situation.
My understanding is that we need to make our own RabbitMQ component to do something like
consumer.getAsyncProcessor().process(exchange, new AsyncCallback() {
    public void done(boolean doneSync) {
        if (!consumer.endpoint.isAutoAck()) {
            long deliveryTag = envelope.getDeliveryTag();
            log.trace("Acknowledging receipt [delivery_tag={}]", deliveryTag);
            channel.basicAck(deliveryTag, false);
        }
    }
});
Even then, the semantics of the "doneSync" parameter are not clear to me. I think it's merely a marker to identify whether we're dealing with a real async processor or a synchronous processor that was automatically wrapped into an async one.
Maybe someone can validate or invalidate this solution?
Is there a lighter/faster/stronger alternative?
Or could this be suggested as the default implementation for the RabbitMQConsumer?

dismiss message in Apache Camel

Hope this doesn't sound ridiculous, but how can I discard a message in Camel on purpose?
Until now I sent them to the Log component, but meanwhile I don't even want to log the discarded messages.
Is there a /dev/null Endpoint in Camel?
You can use the Message Filter EIP to filter out unwanted messages.
http://camel.apache.org/message-filter
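A minimal sketch of that, with placeholder endpoints and a placeholder header:
from("direct:start")
    // only exchanges matching the predicate continue; everything else is dropped
    .filter(header("keep").isEqualTo(true))
    .to("direct:continue");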
There is no dev/null component.
Also there is a <stop/> you can use in the route; when a message hits it, it will not be routed any further.
And the closest we have to a dev/null is to route to a log where you set logLevel=OFF as an option.
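For example, either of these discards the message without logging it (endpoint names are placeholders):
// stop routing the message explicitly
from("direct:discardByStop")
    .stop();

// or send it to a log endpoint that logs nothing
from("direct:discardByLog")
    .to("log:discarded?level=OFF");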
With credit to my colleague (code name: cayha)...
You can use the Stub Component as a camel endpoint that is equivalent to /dev/null.
e.g.
activemq:route?abc=xyz
becomes
stub:activemq:route?abc=xyz
Although I am not aware of the inner workings of this component (and whether there are dangers of memory leaks, etc.), it works for me and I can see no drawbacks in doing it this way.
One can put the URI or a mock URI into the configuration using the Properties component:
<camelContext ...>
<propertyPlaceholder id="properties" location="ref:myProperties"/>
</camelContext>
// properties
cool.end=mock:result
# cool.end=result
// route
from("direct:start").to("properties:{{cool.end}}");
I'm a little late to the party but you can set a flag on the exchange and use that flag to skip only that message (by calling stop) if it doesn't meet your conditions.
@Override
public void configure() throws Exception {
    from()
        .process(new Processor() {
            @SuppressWarnings("unchecked")
            @Override
            public void process(Exchange exchange) throws Exception {
                exchange.setProperty("skip", false);
                byte[] messageBytes = exchange.getIn().getBody(byte[].class);
                if (<shouldNotSkip>) {
                } else { // skip
                    exchange.setProperty("skip", true);
                }
            }
        }).choice()
            .when(exchangeProperty("skip").isEqualTo(true))
                .stop()
            .otherwise()
                .to();
}
I am using an ActiveMQ route and need to send a reply in normal cases, so the exchange pattern is InOut. When I configure a filter in the route I find that, even if it does not pass the message to the next step, the callback is executed (sending the reply), the same behavior as when calling stop(). And it sends the same message back to the reply queue, which is not desirable.
What I do is change the exchange pattern to InOnly conditionally and stop if I want to filter out the message, so no reply is sent. MAIN_ENDPOINT is a direct:main endpoint I defined to contain the normal business logic.
from("activemq:queue:myqueue" + "?replyToSameDestinationAllowed=true")
.log(LoggingLevel.INFO, "Correlation id is: ${header.JMSCorrelationID}; will ignore if not null")
.choice()
.when(simple("${header.JMSCorrelationID} == null"))
.to(MAIN_ENDPOINT)
.endChoice()
.otherwise()
.setExchangePattern(ExchangePattern.InOnly)
.stop()
.endChoice()
.end();
Note that this message is also consumed and is no longer in the queue. If you want to preserve the message in the queue (not consume it), you may just stop() or just filter() so that the callback (sending the reply, which is the original message) runs, putting the message back onto the queue.
Using only filter() would be much simpler:
from("activemq:queue:myqueue" + "?replyToSameDestinationAllowed=true")
.log(LoggingLevel.INFO, "Correlation id is: ${header.JMSCorrelationID}; will ignore if not null")
.filter(simple("${header.JMSCorrelationID} == null"))
.to(MAIN_ENDPOINT);
