Stop Camel after too many retries - apache-camel

I am trying to implement more advanced Apache Camel error handling:
if there are too many pending retries, stop processing altogether and log all collected exceptions somewhere.
The first part (stopping on too many retries) is already implemented by the following helper method, which reads the size of the retry queue; I simply stop the context if the queue grows beyond some limit:
static Long getToRetryTaskCount(CamelContext context) {
    Long retryTaskCount = null;
    ScheduledExecutorService errorHandlerExecutor = context.getErrorHandlerExecutorService();
    if (errorHandlerExecutor instanceof SizedScheduledExecutorService) {
        SizedScheduledExecutorService svc = (SizedScheduledExecutorService) errorHandlerExecutor;
        ScheduledThreadPoolExecutor executor = svc.getScheduledThreadPoolExecutor();
        BlockingQueue<Runnable> queue = executor.getQueue();
        retryTaskCount = (long) queue.size();
    }
    return retryTaskCount;
}
But this code smells to me, and I also don't see any way here to collect the exceptions that caused all these retries.

There is also a new control bus component in Camel 2.11 which could do what you want (source):
template.sendBody("controlbus:route?routeId=foo&action=stop", null);

I wouldn't try to shut down the CamelContext, just the route in question... that way the rest of your app can still function, you can get route stats, and you can view/move messages to alternate queues, etc.
see https://camel.apache.org/how-can-i-stop-a-route-from-a-route.html
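For illustration, here is a minimal sketch of that idea. The route id "foo", the JMS endpoint, and the retry limit are assumptions for the example, not taken from the question:

from("jms:queue:inbox").routeId("foo")
    .onException(Exception.class)
        .maximumRedeliveries(5)
        .handled(true)
        .log("Giving up after redeliveries: ${exception.message}")
        // async=true lets the route request its own stop without blocking
        // on the in-flight exchange it is currently processing
        .to("controlbus:route?routeId=foo&action=stop&async=true")
    .end()
    .to("bean:inboxService");

Note the async=true option: a route that stops itself synchronously through the control bus would otherwise wait on its own in-flight exchange.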

Related

How to manually ack/nack a PubSub message in Camel Route

I am setting up a Camel route with ackMode=NONE, meaning acknowledgements are not done automatically. How do I explicitly acknowledge the message in the route?
In my Camel Route definition I've set ackMode to NONE. According to the documentation, I should be able to manually acknowledge the message downstream:
https://github.com/apache/camel/blob/master/components/camel-google-pubsub/src/main/docs/google-pubsub-component.adoc
"AUTO = exchange gets ack’ed/nack’ed on completion. NONE = downstream process has to ack/nack explicitly"
However I cannot figure out how to send the ack.
from("google-pubsub:<project>:<subscription>?concurrentConsumers=1&maxMessagesPerPoll=1&ackMode=NONE")
.bean("processingBean");
My PubSub subscription has an acknowledgement deadline of 10 seconds and so my message keeps getting re-sent every 10 seconds due to ackMode=NONE. This is as expected. However I cannot find a way to manually acknowledge the message once processing is complete and stop the re-deliveries.
I was able to dig through the Camel component and figure out how it is done. First I created a GooglePubsubConnectionFactory bean:
@Bean
public GooglePubsubConnectionFactory googlePubsubConnectionFactory() {
    GooglePubsubConnectionFactory connectionFactory = new GooglePubsubConnectionFactory();
    connectionFactory.setCredentialsFileLocation(pubsubKey);
    return connectionFactory;
}
Then I was able to reference the ack id of the message from the header:
@Header(GooglePubsubConstants.ACK_ID) String ackId
Then I used the following code to acknowledge the message:
List<String> ackIdList = new ArrayList<>();
ackIdList.add(ackId);
AcknowledgeRequest ackRequest = new AcknowledgeRequest().setAckIds(ackIdList);
Pubsub pubsub = googlePubsubConnectionFactory.getDefaultClient();
pubsub.projects().subscriptions()
    .acknowledge("projects/<my project>/subscriptions/<my subscription>", ackRequest)
    .execute();
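Putting those pieces together, a minimal processing bean might look like the sketch below. The constructor wiring and the method name process are assumptions for illustration; the subscription path placeholders are kept from the snippet above:

import java.util.Collections;
import org.apache.camel.Body;
import org.apache.camel.Header;
import org.apache.camel.component.google.pubsub.GooglePubsubConstants;
import com.google.api.services.pubsub.Pubsub;
import com.google.api.services.pubsub.model.AcknowledgeRequest;

public class ProcessingBean {

    private final GooglePubsubConnectionFactory connectionFactory;

    public ProcessingBean(GooglePubsubConnectionFactory connectionFactory) {
        this.connectionFactory = connectionFactory;
    }

    public void process(@Body String body,
                        @Header(GooglePubsubConstants.ACK_ID) String ackId) throws Exception {
        // ... do the actual work first ...

        // Acknowledge explicitly afterwards, since ackMode=NONE disables auto ack
        AcknowledgeRequest ackRequest = new AcknowledgeRequest()
                .setAckIds(Collections.singletonList(ackId));
        Pubsub pubsub = connectionFactory.getDefaultClient();
        pubsub.projects().subscriptions()
                .acknowledge("projects/<my project>/subscriptions/<my subscription>", ackRequest)
                .execute();
    }
}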
I think it is best if you look at how the Camel component does it with ackMode=AUTO. Have a look at this class (method acknowledge).
But why do you want to do this extra work? Camel is your friend: it simplifies integration by abstracting away low-level code.
So when you use ackMode=AUTO, Camel automatically commits (acks) your successfully processed messages, once a message has successfully passed through the whole route, and rolls back (nacks) the messages it cannot process.

Infinite AMQP Consumer with Alpakka

I'm trying to implement a very simple service connected to an AMQP broker with Alpakka. I just want it to consume messages from its queue as a stream at the moment they are pushed on a given exchange/topic.
Everything seemed to work fine in my tests, but when I tried to start my service, I realized that my stream consumed the available messages once and then exited.
Basically I'm using the code from the Alpakka documentation:
def consume() = {
  val amqpSource = AmqpSource.committableSource(
    TemporaryQueueSourceSettings(connectionProvider, exchangeName)
      .withDeclaration(exchangeDeclaration)
      .withRoutingKey(topic),
    bufferSize = prefetchCount
  )
  val amqpSink = AmqpSink.replyTo(AmqpReplyToSinkSettings(connectionProvider))
  amqpSource.mapAsync(4)(msg => onMessage(msg)).runWith(amqpSink)
}
I tried scheduling the consume() execution every second, but I ran into OutOfMemoryError issues.
Is there any proper way to make this code run as an infinite loop?
If you want to have a Source restarted when it fails or is cancelled, wrap it with RestartSource.withBackoff.
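For instance, a sketch against the Akka 2.5-era scaladsl API (the backoff values are arbitrary choices for the example):

import scala.concurrent.duration._
import akka.stream.scaladsl.RestartSource

def consume() = {
  val restartingSource = RestartSource.withBackoff(
    minBackoff = 1.second,
    maxBackoff = 30.seconds,
    randomFactor = 0.2  // adds jitter so restarts do not synchronize
  ) { () =>
    AmqpSource.committableSource(
      TemporaryQueueSourceSettings(connectionProvider, exchangeName)
        .withDeclaration(exchangeDeclaration)
        .withRoutingKey(topic),
      bufferSize = prefetchCount
    )
  }
  val amqpSink = AmqpSink.replyTo(AmqpReplyToSinkSettings(connectionProvider))
  restartingSource.mapAsync(4)(msg => onMessage(msg)).runWith(amqpSink)
}

The factory function is invoked again on every restart, so a fresh source (and temporary queue) is created each time the stream fails or completes unexpectedly.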

How to schedule JMS consuming in Apache Camel?

I need to consume JMS messages with Camel every day at 9pm (or from 9pm to 10pm, to give it time to consume all the messages).
I can't see any "scheduler" option for URIs like "cMQConnectionFactory:queue:myQueue", while it exists for "file://" or "ftp://" URIs.
If I put a cTimer before it, it just sends an empty message to the queue; it doesn't schedule the consumer.
You can use a route policy, where you can set up, for example, a cron expression to tell when the route is started and when it's stopped.
http://camel.apache.org/scheduledroutepolicy.html
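For example, a sketch of that policy (assuming camel-quartz2 is on the classpath; the queue name and processing bean are placeholders, and the Quartz cron expressions fire daily at 21:00 and 22:00):

CronScheduledRoutePolicy policy = new CronScheduledRoutePolicy();
policy.setRouteStartTime("0 0 21 * * ?");  // start consuming at 9pm
policy.setRouteStopTime("0 0 22 * * ?");   // stop again at 10pm

from("jms:queue:myQueue")
    .routePolicy(policy)
    .autoStartup(false)  // let the policy decide when the route runs
    .to("bean:myMessageProcessor");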
Other alternatives are to start/stop the route via the Java API or JMX etc., with some other logic that knows when to do that according to the clock; a sketch of the Java API variant follows.
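This sketch uses the Camel 2.x CamelContext methods; the route id and the surrounding scheduler are assumptions:

// invoked by whatever scheduler knows the clock (Quartz job, ScheduledExecutorService, ...)
camelContext.startRoute("jmsConsumerRoute");  // at 21:00
// ...
camelContext.stopRoute("jmsConsumerRoute");   // at 22:00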
This is something that has caused me a significant amount of trouble. There are a number of ways of skinning this cat, and none of them are great as far as I can see.
One is to set the route not to start automatically, and use a schedule to start the route and then stop it again after a short time using the control bus EIP: http://camel.apache.org/controlbus.html
I didn't like this approach because I didn't trust that it would drain the queue completely once and only once per trigger.
Another is to use a pollEnrich to query the queue, but that only seems to pick up one item from the queue, whereas I wanted to drain it completely (and only once).
I wrote a custom bean that uses consumer and producer templates to read all the entries in a queue with a specified time-out.
I found an example on the internet somewhere, but it took me a long time to find, and quickly searching again I can't find it now.
So what I have is:
from("timer:myTimer...")
.beanRef( "myConsumerBean", "pollConsumer" )
from("direct:myProcessingRoute")
.to("whatever");
And a simple pollConsumer method:
public void pollConsumer() throws Exception {
    if (consumerEndpoint == null) {
        consumerEndpoint = consumer.getCamelContext().getEndpoint(endpointUri);
    }
    consumer.start();
    producer.start();
    while (true) {
        // Wait up to one second for the next message; null means the queue is drained
        Exchange exchange = consumer.receive(consumerEndpoint, 1000);
        if (exchange == null) {
            break;
        }
        // Forward to the producer template's configured default endpoint
        producer.send(exchange);
        consumer.doneUoW(exchange);
    }
    producer.stop();
    consumer.stop();
}
where the producer is a DefaultProducerTemplate, consumer is a DefaultConsumerTemplate, and these are configured in the bean configuration.
This seems to work for me, but if anyone gives you a better answer I'll be very interested to see how it can be done better.

Job Queue using Google PubSub

I want to have a simple task queue. There will be multiple consumers running on different machines, but I only want each task to be consumed once.
If I have multiple subscribers taking messages from a topic using the same subscription ID, is there a chance that the message will be read twice?
I've tested something along these lines successfully but I'm concerned that there could be synchronization issues.
SubscriberClient client = SubscriberClient.create(SubscriberSettings.defaultBuilder().build());
SubscriptionName subName = SubscriptionName.create(projectId, "Queue");
client.createSubscription(subName, topicName, PushConfig.getDefaultInstance(), 0);

Thread subscriber = new Thread() {
    public void run() {
        while (!interrupted()) {
            PullResponse response = client.pull(subName, false, 1);
            List<ReceivedMessage> messages = response.getReceivedMessagesList();
            ReceivedMessage mess = messages.get(0);
            client.acknowledge(subName, ImmutableList.of(mess.getAckId()));
            doSomethingWith(mess.getMessage().getData().toStringUtf8());
        }
    }
};
subscriber.start();
In short, yes, there is a chance that some messages will be duplicated: GCP promises at-least-once delivery, and exactly-once delivery is theoretically impossible in any distributed system. You should design your doSomethingWith code to be idempotent if possible, so duplicate messages are not a problem.
You should also only acknowledge a message once you have finished processing it: what would happen if your machine died after acknowledge but before doSomethingWith returned? Your message would be lost! (This fundamental issue is why exactly-once delivery is impossible.)
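As a sketch of that ordering, reusing the names from the question's code plus an assumed persistent processedIds set as an idempotency guard:

// Process first, then acknowledge: a crash mid-processing now causes a
// redelivery (at-least-once) instead of a lost message.
ReceivedMessage mess = messages.get(0);
String msgId = mess.getMessage().getMessageId();
if (processedIds.add(msgId)) {  // skip work already done for a duplicate delivery
    doSomethingWith(mess.getMessage().getData().toStringUtf8());
}
client.acknowledge(subName, ImmutableList.of(mess.getAckId()));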
If losing messages is preferable to processing them twice, you could add a locking step (write a "processed" token to a consistent database), but this can fail if the write happens before the message is fully processed. At that point you might look for a messaging technology that is designed for at-most-once delivery, rather than one optimised for reliability.

Polling consumer thread stopping automatically

I have a scenario where I am using a consumer template to receive a file from an endpoint. The endpoint could be either a file system or an FTP site. Currently I am using only the file system, with the following endpoint URL:
file://D:/metamodel/Seach.json?noop=true&idempotent=false
On every hit to the following code:
Exchange exchange = consumerTemplate.receive(endPointURI, timeout);
if (exchange != null) {
    String body = exchange.getIn().getBody(String.class);
    consumerTemplate.doneUoW(exchange);
    return body;
}
it creates a new Camel context thread, and after some hits it gives the following error:
java.util.concurrent.RejectedExecutionException: PollingConsumer on Endpoint[file://D:/metamodel/Seach.json?noop=true&idempotent=false] is not started, but in state:Stopped
I am not sure why this is happening, and it's sporadic in nature.
Any suggestion on this would be of great help.
