Camel showing negative in-flight exchanges in hawtio - apache-camel

I am using Camel version 2.24.1. I am reading a CSV file and sending those records to an HTTP endpoint with the Jetty client. The problem is that in hawtio, when I look at how many exchanges are in flight, it always shows a negative number. Does anyone know why this is happening?

Related

Camel GCP Pub/Sub attribute maxMessagesPerPoll is creating problems when I set the number higher than 1

My Apache Camel app takes messages from the GCP Pub/Sub subscription producer-sub, and after processing and translating the messages, publishes them to the GCP Pub/Sub topic topic1.
Here is my Camel subscription code:
from("google-pubsub://mygcpcloudpubsub-270721:producer-sub?concurrentConsumers=10&maxMessagesPerPoll=1")
    .process(new TranslationProcessor(route))
    .to("google-pubsub://mygcpcloudpubsub-270721:topic1");
Everything looks fine as long as maxMessagesPerPoll=1.
The moment I raise it, say to maxMessagesPerPoll=100, all hell breaks loose.
For example, if my test code pumps 1000 messages to producer-topic (producer-sub is the subscriber), my app ends up publishing 1300 messages to topic1, sometimes 1430, and so on. My Camel route is messed up!
It appears there is a bug in the maxMessagesPerPoll parameter of the GCP Pub/Sub component of Apache Camel.
Please let me know if I am missing anything.
I found the problem. With a huge number of messages, my ack deadline of 10 seconds was too small, so the messages were redelivered by Pub/Sub. I increased the deadline to 10 minutes, and now there are no duplicate messages!
Thanks again
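For reference, the deadline change described above can be sketched with the google-cloud-pubsub Java admin client. This is a minimal sketch, not the poster's actual code; it assumes the project and subscription names from the question, and 600 seconds is the maximum ack deadline Pub/Sub allows (the "10 minutes" mentioned above).

```java
import com.google.cloud.pubsub.v1.SubscriptionAdminClient;
import com.google.protobuf.FieldMask;
import com.google.pubsub.v1.ProjectSubscriptionName;
import com.google.pubsub.v1.Subscription;
import com.google.pubsub.v1.UpdateSubscriptionRequest;

public class RaiseAckDeadline {
    public static void main(String[] args) throws Exception {
        try (SubscriptionAdminClient client = SubscriptionAdminClient.create()) {
            // Names taken from the question; 600 s = 10 min is Pub/Sub's maximum
            Subscription subscription = Subscription.newBuilder()
                    .setName(ProjectSubscriptionName.format("mygcpcloudpubsub-270721", "producer-sub"))
                    .setAckDeadlineSeconds(600)
                    .build();
            // Update only the ack deadline field of the existing subscription
            client.updateSubscription(UpdateSubscriptionRequest.newBuilder()
                    .setSubscription(subscription)
                    .setUpdateMask(FieldMask.newBuilder().addPaths("ack_deadline_seconds").build())
                    .build());
        }
    }
}
```

The same change can also be made in the Cloud Console or with the gcloud CLI.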

Large messages with Camel + ActiveMQ Artemis

I'm trying to post a large message (JSON format with over 210k characters) to an Artemis queue through a REST endpoint with Camel.
When I add a Camel component with the ActiveMQ connection factory (org.apache.activemq.ActiveMQConnectionFactory - version 5.15.6), I'm able to post the message successfully.
But when I use the Artemis connection factory (org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory - version 2.6.2), the entire message isn't posted to the queue. The message was cut off, with only 106725 characters remaining.
Repository where I've created the examples: https://github.com/vitorvr/camel-amq
Thanks.
You should check out the Artemis documentation page on large message support.
There is an attribute, minLargeMessageSize, that defaults to 100 KiB (roughly the remaining message size you mention). It means that Artemis treats every message larger than that threshold as a large message and therefore handles it differently.
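If the large-message handling is what's truncating your payload, one option is to raise the client-side threshold so a ~210k-character body is sent as a regular message. A minimal sketch, assuming the Artemis JMS client from the question; the host, port, and 1 MiB value are illustrative:

```java
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

// minLargeMessageSize is a connection factory URL parameter (in bytes).
// 1048576 = 1 MiB, comfortably above the ~210k-character JSON message.
ActiveMQConnectionFactory factory =
        new ActiveMQConnectionFactory("tcp://localhost:61616?minLargeMessageSize=1048576");
```

The same property can be set via the setMinLargeMessageSize setter on the factory instead of the URL.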

Wildfly 10 Undertow reports a 513 response but looking at the threads I only see 3 active sessions

I have a Wildfly 10.1 deployment which uses undertow to receive a stream of JPEG images. The sending client is reporting a 513 response from the Wildfly server. However, when I do a thread dump of the Wildfly server I see only 3 active sessions that seem to be responding to requests (I verify this by looking at threads that are executing my REST code). There are a number of threads that appear to be undertow related but they are idle waiting for something to happen. The inbound requests are REST requests.
Any idea why undertow is sending a 513 response? How can I go about looking at how many sessions undertow believes it has? Any other suggestions on how to debug this issue?
Thanks, David
Undertow contains a built-in handler called RequestLimitingHandler. It limits the maximum number of concurrent requests; further requests are blocked and put in a queue. If the queue is full, requests are rejected with code 513 (in the default implementation).
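To see the behavior in isolation, here is a minimal sketch of that handler in an embedded Undertow server. The limits (3 concurrent requests, queue of 10) are illustrative, chosen to mirror the 3 active sessions observed above, and are not taken from the Wildfly configuration:

```java
import io.undertow.Undertow;
import io.undertow.server.handlers.RequestLimitingHandler;

public class LimitedServer {
    public static void main(String[] args) {
        Undertow.builder()
                .addHttpListener(8080, "0.0.0.0")
                // At most 3 requests run concurrently; up to 10 more wait in the
                // queue; anything beyond that is rejected (513 by default).
                .setHandler(new RequestLimitingHandler(3, 10,
                        exchange -> exchange.getResponseSender().send("ok")))
                .build()
                .start();
    }
}
```

In Wildfly itself the equivalent limits live in the undertow subsystem configuration, which is where you would look for the values producing the 513s.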

Apache Camel: complete exchanges when an aggregated exchange is completed

In my Apache Camel application, I have a very simple route:
from("aws-sqs://...")
.aggregate(constant(true), new AggregationStrategy())
.completionSize(100)
.to("SEND_AGGREGATE_VIA_HTTP");
That is, it takes messages from AWS SQS, groups them in batches of 100, and sends them via HTTP somewhere.
Exchanges with messages from SQS are completed successfully upon entering the aggregate stage, and the SqsConsumer deletes them from the queue at that point.
The problem is that something might go wrong with an aggregated exchange (it might fail to be delivered), and the messages will be lost. I would really like the original exchanges to be completed successfully (messages to be deleted from the queue) only when the aggregated exchange they're part of has also completed successfully (the batch of messages was delivered). Is there a way to do this?
Thank you.
You could set deleteAfterRead to false and manually delete the messages after you've sent them to your HTTP endpoint; you could use a bean or a processor to send the proper SQS delete requests through the AWS SDK library. It's a workaround, granted, but I don't see a better way of doing it.
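A rough sketch of that workaround, to go inside a RouteBuilder's configure(). The GroupedReceiptsStrategy class (an AggregationStrategy assumed to collect each message's CamelAwsSqsReceiptHandle header into a "receiptHandles" list header), the queue URL, and the endpoint names are hypothetical:

```java
// Consume without auto-delete, so redelivery is possible if the batch fails.
from("aws-sqs://my-queue?deleteAfterRead=false")
    .aggregate(constant(true), new GroupedReceiptsStrategy()) // hypothetical strategy
        .completionSize(100)
    .to("SEND_AGGREGATE_VIA_HTTP")
    // Only reached if the HTTP call did not throw: now it is safe to delete.
    .process(exchange -> {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"; // hypothetical
        @SuppressWarnings("unchecked")
        List<String> receipts = exchange.getIn().getHeader("receiptHandles", List.class);
        for (String receipt : receipts) {
            sqs.deleteMessage(queueUrl, receipt);
        }
    });
```

Note that SQS receipt handles expire with the queue's visibility timeout, so the timeout has to be longer than the time it takes to fill and deliver a batch of 100.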

Are there any good polling strategies to use with Apache Camel?

I'm new to Apache Camel and currently reading Camel in Action. I'm building a system that will receive JSON messages via RESTful web services, and I plan to turn them into ASCII files and transmit them to another system via SFTP.
The problem is that the response file takes over 10 minutes to return per request, so I need to come up with a polling/monitoring strategy that will keep track of the request state.
Can anyone point me in the right direction if there is a specific EIP that handles this sort of problem?
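One relevant pattern here is the Polling Consumer EIP (combined with Async Request-Reply for correlating responses to requests), and Camel's SFTP component is itself a polling consumer. A minimal sketch of the response-side poll, to go inside a RouteBuilder's configure(); the host, directory, file-name pattern, and the direct endpoint are all hypothetical:

```java
// Poll the remote response directory every 60 s, pick up only response files,
// and remove each file once it has been consumed.
from("sftp://user@remote-host/outbound/responses?delay=60000&include=resp-.*\\.txt&delete=true")
    .process(exchange -> {
        // Recover the original request id from the file name, e.g. resp-<id>.txt,
        // so the pending request's state can be updated.
        String fileName = exchange.getIn().getHeader("CamelFileName", String.class);
        exchange.getIn().setHeader("requestId", fileName.replaceAll("^resp-|\\.txt$", ""));
    })
    .to("direct:markRequestComplete"); // hypothetical route that updates request state
```

The request state itself (submitted, awaiting response, complete) would live in whatever store your REST layer uses, keyed by the correlation id embedded in the file name.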
