How to immediately shut down Apache Camel transactions - apache-camel

Apache Camel has a graceful shutdown feature that waits up to 300 seconds for in-flight exchanges to complete, and it's really getting in my way. During local testing I get errors where a request will hang, and I want to abort it by shutting my app down, but then I'm stuck waiting 5 minutes for all in-flight transactions to finish.
I want to disable (or drastically shorten) this graceful shutdown waiting period for my local testing so I can just kill the whole process and start over. Any advice would be appreciated.

You can set the shutdown timeout to a lower value. There are several ways to set it:
Spring Boot property - camel.springboot.shutdownTimeout = 1
ShutdownStrategy API - getContext().getShutdownStrategy().setTimeout(1)
Environment variable (camel-main only) - CAMEL_MAIN_SHUTDOWNTIMEOUT=1 java ...
At runtime with the JMX operation setTimeout() on the MBean org.apache.camel:context=MyCamel,type=context,name="MyCamel"
For more details see Graceful shutdown.
In Camel 3.1 and later the default shutdown timeout is reduced to 45 seconds - CAMEL-14336.
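For the local-testing case, here is a minimal sketch of the ShutdownStrategy approach using the plain Camel API (the timer route is just a placeholder):
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class QuickShutdownExample {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        // Wait at most 1 second for in-flight exchanges (the unit defaults to seconds)
        context.getShutdownStrategy().setTimeout(1);
        context.addRoutes(new RouteBuilder() {
            public void configure() {
                from("timer:tick?period=1000").log("tick"); // placeholder route
            }
        });
        context.start();
        Thread.sleep(5000);
        context.stop(); // returns after at most ~1 second, even with exchanges in flight
    }
}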

Here's the documentation of the Spring Boot property: https://camel.apache.org/camel-spring-boot/3.7.x/spring-boot.html
camel.springboot.shutdown-timeout - Timeout in seconds to gracefully shut down Camel. Default: 300. Type: Integer.
Setting camel.springboot.shutdownTimeout = 1 in the application test properties was the easiest way to adjust this.
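A sketch of wiring that into a Spring Boot test (assumes the camel-spring-boot starter and an existing @SpringBootApplication class; the test name is illustrative):
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.apache.camel.CamelContext;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest(properties = "camel.springboot.shutdown-timeout=1")
class FastShutdownTest {

    @Autowired
    CamelContext camelContext;

    @Test
    void shutdownTimeoutIsOverridden() {
        // On context close, Camel now waits at most 1 second for in-flight exchanges
        assertEquals(1, camelContext.getShutdownStrategy().getTimeout());
    }
}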

Related

Stopping Camel Route when no more messages in ActiveMQ

Is there a way to stop a Camel route when there are no more messages in ActiveMQ? My required scenario:
1. Fetch all the messages from the ActiveMQ queue and process them.
2. Poll 2-3 more times to check whether any new messages have arrived; if yes, execute step #1.
3. If no message is present, stop the route and start it back after, say, 5 minutes (which I guess can be achieved with a polling strategy).
Have a look at this answer. It polls the queue with a scheduler and a polling strategy (POJO).
With the scheduler you can choose the polling interval.
With the timeout of the polling consumer you can stop consuming (e.g. if no message arrives for 5 seconds, the queue is probably empty).
If you want to stop/start the consumer completely, you can add the Camel Control Bus to the mix. You could then start and stop the consumer route, as in the sketch below.
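A minimal sketch of that combination (the queue name, route id, and processing bean are hypothetical):
import org.apache.camel.builder.RouteBuilder;

public class DrainQueueRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Poll on a schedule; pollEnrich waits up to 5 seconds for a message
        from("timer:drain?period=5000").routeId("drainRoute")
            .pollEnrich("activemq:queue:myQueue", 5000)
            .choice()
                .when(body().isNull())
                    // Nothing arrived within 5s, so the queue is probably empty.
                    // Stop this route via the Control Bus (async=true is required
                    // when a route stops itself).
                    .to("controlbus:route?routeId=drainRoute&action=stop&async=true")
                .otherwise()
                    .to("bean:processMessage");
    }
}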

Apache Camel throttling working intermittently

We need our Camel app to raise an exception and reject an incoming request if the message count exceeds a specified threshold (i.e., allow 4 requests in a 10-second time span).
Below is the configuration we have in our Camel context file, right after the Jetty front-side HTTP listener.
<throttle timePeriodMillis="10000" rejectExecution="true">
<constant>4</constant>
<to uri="bean:someEndPoint"/>
</throttle>
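For reference, the same configuration in the Java DSL (a sketch; the Jetty endpoint URI is illustrative):
import org.apache.camel.builder.RouteBuilder;

public class ThrottledRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("jetty:http://0.0.0.0:8080/service") // hypothetical listener
            .throttle(4).timePeriodMillis(10000)
                // throw ThrottlerRejectedExecutionException instead of queueing callers
                .rejectExecution(true)
            .to("bean:someEndPoint");
    }
}
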
When we invoke the Camel app via JMeter, throttling kicks in on the 5th request and subsequent requests, as expected. However, sometimes the throttle keeps rejecting forever, even after the 10-second time span has expired, while at other times it doesn't reject at all during later 10-second windows.

WebSphere HTTP 500 while copying 10 GB file

Configuration:
We have an iPlanet web server that sits in front of a WebSphere Portal 6.1 cluster (2 nodes) deployed on Linux machines.
When a user copies a 10 GB file across file systems (both NFS mounts), we use the Java runtime to shell out to cp, hoping that it would be faster than using any Java libraries.
proc = rt.exec("cp " + fileName + " " + outFileName);
The deployed application is a JSF portlet application.
a) The session timeout is 60 minutes on both the app server and the application.
b) We have an Ajax call from the client page to keep the session alive.
The user receives an HTTP 500 within 3 minutes, while our logs show that the file is still copying. After 10 minutes or so the file is copied, and when the user clicks refresh he can proceed. Not sure what is causing WebSphere to send this HTTP 500.
WebContainer threads are not supposed to be used for long-running tasks.
You're getting the 500 after 3 minutes because that is when WebSphere decides the thread is hung.
What you should be doing is using a WorkManager to perform that long task; the client can then poll to check the status of the task.
If you are considering upgrading to WAS v8/v8.5 in the near future, a good option would be to use asynchronous servlets for this.
Your client receiving an HTTP 500 error after a few minutes can happen for a few reasons. Without a stack trace and some relevant logging, it is impossible to know which component within WebSphere "woke up" after 3 minutes and stopped everything. It might be WebSphere's timeout setting for the Web Container thread pool, or it might be some other timeout - that should be easy to conclude from the logs.
To fix this, you can do one of the following:
Adjust the relevant timeout value (depending, again, on which timeout it is exactly).
Change your design so long-running tasks are executed in the background. You can use WebSphere's Work Manager API for that, or asynchronous beans / servlets; see the sketch below.
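A minimal sketch of the Work Manager approach (CommonJ API; the JNDI name assumes a matching work manager and resource-ref are configured, and status tracking is left out):
import javax.naming.InitialContext;
import commonj.work.Work;
import commonj.work.WorkManager;

public class BackgroundCopy {

    public void startCopy(final String fileName, final String outFileName) throws Exception {
        // WebSphere's default work manager, looked up via a resource reference
        WorkManager wm = (WorkManager) new InitialContext().lookup("java:comp/env/wm/default");
        wm.schedule(new Work() {
            public void run() {
                try {
                    // The copy now runs off the WebContainer thread, so WebSphere's
                    // hung-thread handling no longer kicks in after 3 minutes
                    Process proc = Runtime.getRuntime()
                            .exec(new String[] { "cp", fileName, outFileName });
                    proc.waitFor();
                    // record success somewhere the client's status poll can see
                } catch (Exception e) {
                    // record the failure for the status poll
                }
            }
            public void release() { }
            public boolean isDaemon() { return false; }
        });
    }
}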

Can Nagios send alerts for periodic events?

Is there any plugin or facility in Nagios which can do this:
For example, the CPU load rising to 80% for 2 seconds would not be a problem; I want to get an alert only if it stays at 80% or above for at least 5 minutes. Is that possible?
Yes. Just set your retry_interval to 1 and your max_check_attempts to 5, and Nagios will retry the check 5 times (5 minutes) before sending out an alert. If the problem persists after all of the retries, it will send the alert.
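A sketch of a service definition along those lines (the host, service, and check command names are hypothetical):
define service {
    host_name             web01
    service_description   CPU Load
    check_command         check_nrpe!check_cpu
    check_interval        1    ; minutes between checks while the service is OK
    retry_interval        1    ; minutes between re-checks while in a SOFT problem state
    max_check_attempts    5    ; 5 consecutive bad checks (~5 min) before the alert fires
}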

Tomcat 6 response writing

I am observing strange behavior from my Tomcat server; it seems like Tomcat is not writing responses to the client fast enough. Here is what I am seeing:
When firing around 200 requests at the same time at my Tomcat server, my application logs show that my servlet's doGet() finishes processing each request in about 500 ms. However, on the client side the average response time is about 30 seconds (meaning the client starts seeing the response from Tomcat after 30 seconds)!
Does anyone have any idea why there is such a long delay between the end of my servlet's processing time and the time when the client receives the response?
My server is hosted on Rackspace VM.
Found the culprit. I observed that the hosting server showed abnormally high CPU usage even for only a few requests, so I attached JConsole to Tomcat and found that all my worker threads had high block counts and were constantly in the blocked state. Looking at the stack traces, the locking happened during JAXBContext instantiation. Digging further, the application was creating a JAXBContext, which is relatively expensive, for each request.
So in summary, the problem was caused by per-request JAXBContext instantiation. The solution was to ensure the JAXBContext is created once per application.
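A minimal sketch of that fix (the Order type is a hypothetical bound class): the JAXBContext is thread-safe and expensive, so it is built once; Marshallers are cheap but not thread-safe, so one is created per call:
import java.io.StringWriter;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlRootElement;

public class OrderXml {

    // Hypothetical bound type
    @XmlRootElement
    public static class Order {
        public String id;
    }

    // Built once per application; JAXBContext is thread-safe
    private static final JAXBContext CONTEXT;
    static {
        try {
            CONTEXT = JAXBContext.newInstance(Order.class);
        } catch (JAXBException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static String toXml(Order order) throws JAXBException {
        // Marshallers must not be shared across threads; create one per request
        Marshaller marshaller = CONTEXT.createMarshaller();
        StringWriter out = new StringWriter();
        marshaller.marshal(order, out);
        return out.toString();
    }
}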
