Camel K SEDA component throws out of memory error - apache-camel

I am getting an OutOfMemoryError in my Camel K route. I am using SEDA (one producer and multiple consumers) and feeding it about 10,000 events per second. Even with multiple consumers the error is thrown. Does anyone know how to improve the performance?
I have tried increasing the number of consumers, but the issue hasn't been resolved, and when I tried increasing the memory, the in-memory queue still takes too much of it.
Error Message:
2021-12-01 17:42:58,401 syslog-basic-68d776c9b4-js4cd io.quarkus.bootstrap.runner.QuarkusEntryPoint[1] WARN [io.net.cha.AbstractChannelHandlerContext] (Camel (camel-1) thread #1 - NettyConsumerExecutorGroup) An exception 'java.lang.OutOfMemoryError: Java heap space' [enable DEBUG level for full stacktrace] was thrown by a user handler's exceptionCaught() method while handling the following exception:: java.lang.OutOfMemoryError: Java heap space
Here's my source code for the SEDA routes:
from("netty:udp://" + HOST + ":" + PORT + "?sync=false&receiveBufferSize=16777216")
// .log("in: ${body}")
.to("seda:next?size=5000&timeout=0&blockWhenFull=true");
from("seda:next?concurrentConsumers=5")
.unmarshal(myDataFormat)
.routePolicy(myPolicy)
.process(myProcessor)
.choice()
.when(body().contains(MISSING_REQUIRED_FIELDS)).to("log:warning").otherwise()
.log("log:${body}");

Related

Setting handled as true in onException block prevents redeliveries

I am reading a file from a directory, and trying to call an API based on the data in the file.
While trying to handle the exceptions, I am facing an issue. I am trying to configure the onException block to redeliver 3 times, with a delay of 5 seconds. The issue occurs, when I am setting handled(true). This configuration does not redeliver, and stops as soon as the exception occurs.
This is my onException block:
onException(HttpOperationFailedException.class)
    .log(LoggingLevel.ERROR, logger, "Error occurred while connecting to API for file ${header.CamelFileName} :: ${exception.message}")
    .log("redelivery counter :: ${header.CamelRedeliveryCounter}")
    .maximumRedeliveries(3)
    .redeliveryDelay(5000)
    .handled(true);
How do I do both, i.e. handle as well as redeliver?
Unless you use a buggy version of Camel, the redeliveries are performed as expected whether the exception is handled or not.
The only difference between handled and not handled is what is sent back to the client once the retries are exhausted: either the exception (not handled) or the result of your onException route (handled).
Your mistake is assuming that the log EIPs defined in your onException are called for each retry, when they are actually called only once the retries are exhausted.
If you want to see the retries in your logs, you can use retryAttemptedLogLevel as follows:
onException(HttpOperationFailedException.class)
    .maximumRedeliveries(3)
    .redeliveryDelay(5000)
    .retryAttemptedLogLevel(LoggingLevel.WARN);
You will then get warning messages of type:
Failed delivery for (MessageId: X on ExchangeId: Y). On delivery attempt: Z caught: ...
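Putting the two together, a block that both redelivers and handles could look roughly like this (a sketch reusing the logger and messages from the question):
onException(HttpOperationFailedException.class)
    .maximumRedeliveries(3)
    .redeliveryDelay(5000)
    // log each failed delivery attempt at WARN level
    .retryAttemptedLogLevel(LoggingLevel.WARN)
    // mark the exchange as handled once the retries are exhausted
    .handled(true)
    // this log EIP runs only after the retries are exhausted
    .log(LoggingLevel.ERROR, logger, "Error occurred while connecting to API for file ${header.CamelFileName} :: ${exception.message}");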

Polling byte / large messages from ActiveMQ Artemis using Apache Camel ConsumerTemplate

I'm struggling with a problem in an application based on Apache Camel when connecting to ActiveMQ Artemis via JMS. At the end of one of the Camel routes, messages are stored in an Artemis JMS queue. A legacy component running in the same application picks them up from there periodically using a ConsumerTemplate.
This works fine for Camel messages with plain text bodies, but causes errors for byte array bodies: it seems Artemis treats any message with a byte body as a "large message", which is streamed instead of kept in memory. Receiving via the ConsumerTemplate works, but as soon as the body or headers are accessed, an exception like the following is raised:
org.apache.camel.RuntimeCamelException: Failed to extract body due to: javax.jms.IllegalStateException: AMQ119023: The large message lost connection with its session, either because of a rollback or a closed session. Message: ActiveMQMessage[ID:90c4d1d5-3233-11ea-b0cc-44032c68a56f]:PERSISTENT/ClientLargeMessageImpl[messageID=2974, durable=true, address=mytest,userID=90c4d1d5-3233-11ea-b0cc-44032c68a56f,properties=TypedProperties[firedTime=Wed Jan 08 17:26:03 CET 2020,__AMQ_CID=90b4f34e-3233-11ea-b0cc-44032c68a56f,breadcrumbId=ID-NB045-evolit-co-at-1578500762151-0-1,_AMQ_ROUTING_TYPE=1,_AMQ_LARGE_SIZE=3]]
at org.apache.camel.component.jms.JmsBinding.extractBodyFromJms(JmsBinding.java:172) ~[camel-jms-2.22.1.jar:2.22.1]
at org.apache.camel.component.jms.JmsMessage.createBody(JmsMessage.java:221) ~[camel-jms-2.22.1.jar:2.22.1]
at org.apache.camel.impl.MessageSupport.getBody(MessageSupport.java:54) ~[camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.example.cdi.JmsPoller.someMethod(JmsPoller.java:36) ~[classes/:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_171]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_171]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_171]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_171]
at org.apache.camel.component.bean.MethodInfo.invoke(MethodInfo.java:481) ~[camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.component.bean.MethodInfo$1.doProceed(MethodInfo.java:300) ~[camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.component.bean.MethodInfo$1.proceed(MethodInfo.java:273) ~[camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.component.bean.AbstractBeanProcessor.process(AbstractBeanProcessor.java:188) ~[camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.component.bean.BeanProcessor.process(BeanProcessor.java:53) ~[camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.component.bean.BeanProducer.process(BeanProducer.java:41) ~[camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:148) ~[camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:548) [camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201) [camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:201) [camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.component.timer.TimerConsumer.sendTimerExchange(TimerConsumer.java:197) [camel-core-2.22.1.jar:2.22.1]
at org.apache.camel.component.timer.TimerConsumer$1.run(TimerConsumer.java:79) [camel-core-2.22.1.jar:2.22.1]
at java.util.TimerThread.mainLoop(Timer.java:555) [?:1.8.0_171]
at java.util.TimerThread.run(Timer.java:505) [?:1.8.0_171]
Caused by: javax.jms.IllegalStateException: AMQ119023: The large message lost connection with its session, either because of a rollback or a closed session
at org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl.saveBuffer(LargeMessageControllerImpl.java:273) ~[artemis-core-client-2.6.2.jar:2.6.2]
at org.apache.activemq.artemis.core.client.impl.ClientLargeMessageImpl.saveToOutputStream(ClientLargeMessageImpl.java:115) ~[artemis-core-client-2.6.2.jar:2.6.2]
at org.apache.activemq.artemis.jms.client.ActiveMQMessage.saveToOutputStream(ActiveMQMessage.java:853) ~[artemis-jms-client-2.6.2.jar:2.6.2]
at org.apache.activemq.artemis.jms.client.ActiveMQMessage.setObjectProperty(ActiveMQMessage.java:693) ~[artemis-jms-client-2.6.2.jar:2.6.2]
at org.apache.camel.component.jms.JmsBinding.createByteArrayFromBytesMessage(JmsBinding.java:251) ~[camel-jms-2.22.1.jar:2.22.1]
at org.apache.camel.component.jms.JmsBinding.extractBodyFromJms(JmsBinding.java:163) ~[camel-jms-2.22.1.jar:2.22.1]
... 21 more
Caused by: org.apache.activemq.artemis.api.core.ActiveMQIllegalStateException: AMQ119023: The large message lost connection with its session, either because of a rollback or a closed session
at org.apache.activemq.artemis.core.client.impl.LargeMessageControllerImpl.saveBuffer(LargeMessageControllerImpl.java:273) ~[artemis-core-client-2.6.2.jar:2.6.2]
at org.apache.activemq.artemis.core.client.impl.ClientLargeMessageImpl.saveToOutputStream(ClientLargeMessageImpl.java:115) ~[artemis-core-client-2.6.2.jar:2.6.2]
at org.apache.activemq.artemis.jms.client.ActiveMQMessage.saveToOutputStream(ActiveMQMessage.java:853) ~[artemis-jms-client-2.6.2.jar:2.6.2]
at org.apache.activemq.artemis.jms.client.ActiveMQMessage.setObjectProperty(ActiveMQMessage.java:693) ~[artemis-jms-client-2.6.2.jar:2.6.2]
at org.apache.camel.component.jms.JmsBinding.createByteArrayFromBytesMessage(JmsBinding.java:251) ~[camel-jms-2.22.1.jar:2.22.1]
at org.apache.camel.component.jms.JmsBinding.extractBodyFromJms(JmsBinding.java:163) ~[camel-jms-2.22.1.jar:2.22.1]
... 21 more
The problem also occurs for messages that do not exceed Artemis's minLargeMessageSize; in a test program it happened even with a 3-byte body.
Coincidentally, the same problem occurred in a standalone application used for testing. There, I was able to solve the issue by keeping the JMS session and receiver open until the JMS message body and headers had been completely read. With Camel, that's abstracted away in the Spring JmsTemplate that Camel is based on.
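For illustration, the plain-JMS workaround in the standalone test roughly amounted to this (a sketch, not the actual code; connectionFactory and the queue name are taken from the reproduction further below):
// Keep the session and consumer open until the (potentially "large") body has
// been fully read; only afterwards is it safe to close them.
try (Connection connection = connectionFactory.createConnection();
     Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)) {
    connection.start();
    MessageConsumer consumer = session.createConsumer(session.createQueue("mytest"));
    BytesMessage message = (BytesMessage) consumer.receive(1000L);
    if (message != null) {
        byte[] body = new byte[(int) message.getBodyLength()];
        message.readBytes(body); // body is read while the session is still open
    }
} // session and consumer are closed only after the body has been consumed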
I consulted the user documentation of the Camel JMS component to find configuration options that might help me. I've tried the following:
eagerLoadingOfProperties=true on consumer side: no effect, only seems to affect MessageListenerContainer. The documentation says:
It uses [...] Spring’s JmsTemplate for sending and a MessageListenerContainer for consuming.
However, while debugging it seemed that MessageListenerContainer is only used when consuming messages from a JMS endpoint in a Camel route. Consuming via a ConsumerTemplate, as in my case, uses a JmsTemplate instead.
messageConverter and mapJmsMessage on consumer side: no effect, they are executed when the session has already been closed
alwaysCopyMessage on producer side: I thought maybe copying prevents use of streamed large messages, no effect
streamMessageTypeEnabled=false on producer side: no effect
jmsMessageType=Bytes on both producer and consumer side: no effect
transferExchange=true on both producer and consumer side: this does seem to solve my specific case, but it feels like a workaround. Documentation advises to use the option with caution.
So right now, transferExchange seems to be my best bet, assuming it really solves my issue in all test cases. Nevertheless, I'd be glad to get better understanding on the issue or different solutions:
Why does Artemis treat small byte array messages as large messages anyway?
Does Camel ConsumerTemplate support streamed large messages at all?
My versions are Camel 2.22.1 and Artemis 2.10.1.
I've been able to reproduce my problem by modifying the Camel Example camel-example-cdi from the release package of Camel to have the minimal classes shown below.
In addition I've added camel-jms and Artemis dependencies and started Artemis locally, both like described in the camel-example-artemis-large-messages example.
public class MyRoutes extends RouteBuilder {

    @Override
    public void configure() {
        setupJmsComponent();

        from("timer:writeTimer?period=6000")
            .log("writing to JMS")
            .setBody(() -> new byte[]{0, 1, 2})
            .to(JmsPoller.ENDPOINT);

        from("timer:pollTimer?period=3000")
            .to("bean:jmsPoller");
    }

    private void setupJmsComponent() {
        ActiveMQJMSConnectionFactory connectionFactory = new ActiveMQJMSConnectionFactory("tcp://localhost:61616");
        JmsComponent jmsComponent = new JmsComponent();
        jmsComponent.setConnectionFactory(connectionFactory);
        getContext().addComponent("jms", jmsComponent);
    }
}

@Singleton
@Named("jmsPoller")
public class JmsPoller {

    static final String ENDPOINT = "jms:queue:mytest";

    @Inject
    private ConsumerTemplate consumerTemplate;

    public void someMethod(String body) {
        Exchange exchange = consumerTemplate.receive(ENDPOINT, 1000L);
        System.out.println("Received " + (exchange == null ? null : exchange.getIn().getBody()));
    }
}
ActiveMQ Artemis doesn't treat just any message with a byte body as a "large" message. It's worth noting that the broker ultimately treats all message bodies as an array of bytes because that's exactly what they are. However, in order to be considered "large" the message has to exceed a certain size. The documentation states:
Any message larger than a certain size is considered a large message. Large messages will be split up and sent in fragments. This is determined by the URL parameter minLargeMessageSize.
Note:
Apache ActiveMQ Artemis messages are encoded using 2 bytes per character so if the message data is filled with ASCII characters (which are 1 byte) the size of the resulting Apache ActiveMQ Artemis message would roughly double. This is important when calculating the size of a "large" message as it may appear to be less than the minLargeMessageSize before it is sent, but it then turns into a "large" message once it is encoded.
The default value is 100KiB.
It looks like the application's use-case simply doesn't fit with the semantics of large message support in ActiveMQ Artemis since the session which the message came from is being closed before the message's body is fully received.
Therefore, I recommend that you either keep the session open until the body is read or increase the minLargeMessageSize on the URL of the application which is sending the message so that no messages are ever considered "large." The latter option may result in greater memory usage on the broker since the entire message body will be held in memory at once.
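For example, on the sending application's connection URL (a sketch based on the reproduction code above; the 10 MiB threshold is just an illustrative value):
// Messages below minLargeMessageSize are never converted into "large" messages;
// the trade-off is that the whole body is kept in memory on the broker.
ActiveMQJMSConnectionFactory connectionFactory =
        new ActiveMQJMSConnectionFactory("tcp://localhost:61616?minLargeMessageSize=10485760");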

CamelExchangeException Invalid Correlation Key on exchange header

Apache Camel is throwing Invalid Correlation Key exception when trying to aggregate messages from my AWS SQS queue.
The messages were placed in the queue using ZipSplitter and they all appear in the queue with matching "parentId" values (which I added using a random UUID as part of the splitting; I've tried CamelSourceFile as well). I get the exception repeatedly until the retries are exhausted.
My aggregate expression:
from(--queue--).aggregate(header("parentId"), customAggregationStrategy).completionTimeout(3000).process(new Processor() {...}).to(--next queue--);
There is no logging emitted from my customAggregationStrategy nor from any of the subsequent processors. It fails to aggregate:
... DeadLetterChannel - Failed delivery for (MessageId: ...). On Delivery attempt: 0 caught ...CamelExchangeException: Invalid correlation key. Exchange[ID...]
The delivery attempt is 0 through 9 for my retry attempts.
The infuriating thing is that the code works everywhere but locally... which you'd think would narrow things down, but neither the exception nor anything else in the logs sheds any light on what is going on here.
You could try using the Camel simple language to express the correlation key, i.e.:
.aggregate(simple("${headers.parentId}"), customAggregationStrategy)
This way, the exceptions might be silently ignored?
Did you activate the Camel tracer (http://camel.apache.org/tracer.html) to analyse your exchanges and ease the debugging?
I suspect you have an Exchange which does NOT have the "parentId" header. If you want to skip them, just activate the ignoreInvalidCorrelationKeys option (see http://camel.apache.org/aggregator2.html).
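For reference, a sketch of the aggregation with that option enabled (placeholders kept from the question):
.aggregate(header("parentId"), customAggregationStrategy)
    .completionTimeout(3000)
    // exchanges without a "parentId" header are skipped instead of failing
    // with "Invalid correlation key"
    .ignoreInvalidCorrelationKeys()
.process(new Processor() {...})
.to(--next queue--);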

Camel onException redelivery Clarification

So I'm a little unsure of how this will work. I have a "Broken Pipe" Exception that occurs every now and then. It triggers 2 exceptions to be thrown (according to the log): org.apache.camel.component.file.GenericFileOperationFailedException due to the file not being able to reach its endpoint and java.net.SocketException because that is the root cause of why the file wasn't able to reach the endpoint.
So to deal with that I have an <onException> block that looks like this:
<onException>
    <exception>org.apache.camel.component.file.GenericFileOperationFailedException</exception>
    <exception>java.net.SocketException</exception>
    <redeliveryPolicy maximumRedeliveries="2" redeliveryDelay="5000"/>
</onException>
So from what I understand, Camel should select the GenericFileOperationFailedException and then try to perform 2 redeliveries, 5000 milliseconds apart.
Well, what happens then if it is unable to redeliver in those 2 attempts? Will Camel then select the SocketException due to the nature of the error thrown?
Meaning Camel would attempt 4 redeliveries in total, taking up a total of 20000 milliseconds?
Camel will initially select one based on the order you have written them: "..the order in which the onException is configured takes precedence. Camel will test from first...last defined."
Straight from the documentation:
So if an exception is thrown with this hierarchy:
+ RuntimeCamelException (wrapper exception by Camel)
  + OrderFailedException
    + IOException
      + FileNotFoundException
Then Camel will try testing the exception in this order: FileNotFoundException, IOException, OrderFailedException and RuntimeCamelException. As we have defined an onException(IOException.class), Camel will select this as it's the closest match.
So based on the example, I would assume that SocketException will get triggered and redelivery started, because it is an exact match and it is lower down in the stack trace, which is where Camel starts finding matches from.
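For reference, a minimal Java DSL equivalent of the XML block from the question (a sketch, with the exceptions kept in the same order):
// Camel tests the thrown exception from the most specific type in the cause
// hierarchy upwards and applies the first configured match; the SocketException
// root cause is an exact match here, so this policy is selected.
onException(GenericFileOperationFailedException.class, SocketException.class)
    .maximumRedeliveries(2)
    .redeliveryDelay(5000);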

java.lang.OutOfMemoryError when processing large pgp file

I want to use Camel 2.12.1 to decrypt some potentially large PGP files. The following flow results in an out-of-memory error, and the call stack shows that the PGPDataFormat.unmarshal() function is trying to build a byte array, which is bound to fail if the file is large. Is there a way to pass streams around during unmarshalling?
My route:
from("file:///home/cps/camel/sftp-in?"
+ "include=.*&" // find files using this pattern
+ "move=/home/cps/camel/sftp-archive&" // after done adding records to queue, move file to archive
+ "delay=5000&"
+ "readLock=rename&" // readLock parameters prevent picking up file which is currently changing
+ "readLockCheckInterval=5000")
.choice()
.when(header(Exchange.FILE_NAME_ONLY).regex(".*pgp$|.*PGP$|.*gpg$|.*GPG$")).to("direct:decrypt")
.otherwise()
.to("file:///home/cps/camel/input");
from("direct:decrypt").unmarshal().pgp("file:///home/cps/.gnupg/secring.gpg", "developer", "set42now")
.setHeader(Exchange.FILE_NAME).groovy("request.headers.get('CamelFileNameOnly').replace('.gpg', '')")
.to("file:///home/cps/camel/input/")
.to("log:done");
The exception which shows the converter trying to create a ByteArray:
java.lang.OutOfMemoryError: Java heap space
at org.apache.commons.io.output.ByteArrayOutputStream.needNewBuffer(ByteArrayOutputStream.java:128)
at org.apache.commons.io.output.ByteArrayOutputStream.write(ByteArrayOutputStream.java:158)
at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1026)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:999)
at org.apache.commons.io.IOUtils.toByteArray(IOUtils.java:218)
at org.apache.camel.converter.crypto.PGPDataFormat.unmarshal(PGPDataFormat.java:238)
at org.apache.camel.processor.UnmarshalProcessor.process(UnmarshalProcessor.java:65)
Try with 2.13 or 2.12-SNAPSHOT, as we have improved the data formats and streaming recently, so it is likely to be better in the next release.
