We start our Camel (Kestrel queue) consumers inside a Spring context, launched from Maven with mvn camel:run.
We used to kill them with pkill -9 -f camel, but now we are moving more critical components onto the queue and cannot afford to kill consumers midway through processing a message.
Camel has provisions for graceful shutdown, but how do we stop the consumers? Will a plain pkill -f camel shut them down gracefully? What is the common practice for shutting down Camel consumers?
What about sending SIGTERM (a plain kill or pkill, without -9) to signal that you want the process to stop? When using camel:run there is a JVM shutdown hook that triggers on SIGTERM and issues a graceful shutdown of Camel. kill -9 (SIGKILL) bypasses shutdown hooks entirely, which is why it kills consumers mid-exchange.
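For illustration, here is a minimal sketch of the same idea as a standalone main class, assuming the camel-kestrel component; the endpoint URI, queue name, and 60-second timeout are placeholders rather than settings from the question:

    import org.apache.camel.CamelContext;
    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;

    public class ConsumerMain {
        public static void main(String[] args) throws Exception {
            CamelContext context = new DefaultCamelContext();
            // Give in-flight exchanges up to 60 seconds to finish on shutdown
            context.getShutdownStrategy().setTimeout(60);
            context.addRoutes(new RouteBuilder() {
                @Override
                public void configure() {
                    // Placeholder Kestrel endpoint; adjust host and queue name
                    from("kestrel://localhost:22133/work").to("log:consumed");
                }
            });
            // A plain SIGTERM lets this hook run, so Camel drains in-flight
            // exchanges before the JVM exits; SIGKILL (-9) skips it
            Runtime.getRuntime().addShutdownHook(new Thread(() -> {
                try {
                    context.stop();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }));
            context.start();
            Thread.currentThread().join();
        }
    }

camel:run wires up an equivalent hook for you, so in practice switching from pkill -9 -f camel to a plain pkill -f camel is usually enough.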
When the thread pool mechanism is enabled in Nginx, some AIO tasks are offloaded to a thread pool, which notifies the main thread when a task is done.
But what happens if a request times out while it is being processed by a worker thread? ngx_event_expire_timers invokes ev->handler(ev) when an event times out. How does Nginx prevent this race condition?
I have a relatively simple service written in Python that does asynchronous pulls from a Pub/Sub subscription and then runs a subprocess on each message it receives. I'm currently just calling result(), blocking indefinitely, and letting the background thread manage everything. What's the best and cleanest way to handle signals that the service may get? (E.g., I'd like to log startup and shutdown of the service.) Should I just catch the signal and call cancel()?
Catching the signal and calling cancel() should work. This is done in the example Python quickstart for receiving messages.
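A minimal sketch of that pattern, assuming the google-cloud-pubsub client; the project and subscription names, the log lines, and the callback body are illustrative:

    import signal

    from google.cloud import pubsub_v1

    # Placeholder names, not taken from the question
    subscriber = pubsub_v1.SubscriberClient()
    subscription_path = subscriber.subscription_path("my-project", "my-subscription")

    def callback(message):
        # ... run the subprocess on the message here ...
        message.ack()

    streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)
    print("service started")

    def shutdown(signum, frame):
        print("service stopping")
        streaming_pull_future.cancel()  # stop the background pull

    signal.signal(signal.SIGTERM, shutdown)
    signal.signal(signal.SIGINT, shutdown)

    with subscriber:
        streaming_pull_future.result()  # blocks until cancel() completes the shutdown
    print("service stopped")

The handler only calls cancel(); result() in the main thread then returns once the streaming pull has shut down, so you get a natural place to log the shutdown.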
I have two separate Spring Boot apps using Camel - let's call them producer and consumer.
In the producer process, the Camel ProducerTemplate sends messages to activemq:queue:consumer.
The consumer process listens to that queue.
When I kill the consumer process (and I can see in the ActiveMQ console that the queue has no consumers), the producer sends about 1,000 messages to the queue and then blocks.
If I purge the queue, the producer unblocks, sends about another 1,000 messages, and then the cycle repeats.
Why does the producer back off when the downstream queue gets backed up, and how do I fix it?
I am using Spring Boot 1.5.6, Camel 2.18.0, and Apache ActiveMQ 5.11.0.redhat-630187.
It sounds like your broker is using Producer Flow Control, which means the producer is blocked from sending more messages than the broker can handle. To remedy this you can enable Message Cursors, which write pending messages to disk and thus delay the block until the allocated disk space has been filled up.
You can read more about Producer Flow Control here, and about Message Cursors here.
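As a sketch, these broker-side knobs live in the destination policy in activemq.xml; the ">" wildcard and the memory limit below are placeholder values to adapt, not settings taken from the question:

    <destinationPolicy>
      <policyMap>
        <policyEntries>
          <!-- ">" matches every queue; tune per destination as needed -->
          <policyEntry queue=">" producerFlowControl="false" memoryLimit="64mb">
            <!-- spool pending messages to disk instead of blocking the producer -->
            <pendingQueuePolicy>
              <fileQueueCursor/>
            </pendingQueuePolicy>
          </policyEntry>
        </policyEntries>
      </policyMap>
    </destinationPolicy>

Disabling producerFlowControl trades the block for disk usage: producers only stall once the broker's configured disk limits are exhausted.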
Let's assume I have a File Consumer that polls a directory every 10 seconds and does some sort of processing on the files it finds there.
That processing may take 40 seconds per file. Does this mean the consumer will poll the directory again during that interval and start another, overlapping run?
Is there any way to avoid that, so the consumer does not poll again until the previous poll has finished?
The file consumer is single threaded, so it will not poll while it is already processing files.
When the consumer finishes, it delays for 10 seconds before polling again. This is controlled by the useFixedDelay option, which you can read more about in the JDK ScheduledExecutorService that Camel uses as its scheduler.
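A minimal illustration in the Java DSL; the inbox directory and the log endpoint are stand-ins for the real directory and the 40-second processing step:

    import org.apache.camel.builder.RouteBuilder;

    public class PollingRoute extends RouteBuilder {
        @Override
        public void configure() {
            // With useFixedDelay=true the 10s delay is measured from the end of
            // the previous poll, so a 40s batch simply pushes the next poll back;
            // polls never overlap because the consumer is single threaded.
            from("file:inbox?delay=10000&useFixedDelay=true")
                .to("log:processed");
        }
    }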
I have set a delayer(30000) instruction in a route. When I perform a graceful shutdown of this route during those 30 seconds, the message is immediately transferred to the next instruction. Is that normal?
Actually it is pretty smart, but how can I still delay the exchange?
PS: sorry, Camel 2.2.0.
Camel 2.2.0 is a very old version.
I don't think you can get free support for it here.