I have two separate Spring Boot apps using Camel; let's call them producer and consumer.
In the producer process, a Camel ProducerTemplate sends messages to activemq:queue:consumer.
The consumer process listens to that queue.
When I kill the consumer process (and the AMQ console confirms there are no consumers of the queue), the producer sends about 1000 messages to the queue and then blocks.
If I purge the queue, the producer unblocks, processes about another 1000 messages, and then we repeat.
Why does the producer process block when the downstream queue gets backed up? And how do I fix this?
I am using Spring Boot 1.5.6, Camel 2.18.0, and Apache ActiveMQ 5.11.0.redhat-630187.
It sounds to me like your broker is using Producer Flow Control, which basically means that the producer is blocked from sending more messages than the broker can hold. To remedy this you can enable Message Cursors, which write the messages to disk, delaying the block until all the allocated disk space has filled up.
You can read more about Producer Flow Control here, and Message Cursors here.
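If blocking the producer is never what you want, flow control can also be disabled per destination in the broker's activemq.xml. A minimal sketch (the `>` wildcard matching all queues is an assumption; adjust the pattern and memory limits to your setup):

```xml
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- Disable producer flow control for all queues;
           producers will no longer block when a queue backs up. -->
      <policyEntry queue=">" producerFlowControl="false"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```

Note that with flow control off, an unbounded backlog will instead consume broker store space, so make sure your system usage limits are sized accordingly.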
I'm trying to architect the main event handling of a libuv-based application. The application is composed of one (or more) UDP receivers sharing a socket, whose job is to delegate processing of incoming messages to a common worker pool.
As the protocol handled is stateful, all packets coming from any given server should always be directed to the same worker – this constraint seems to make using libuv's built-in worker pool impossible.
The workers should also be able to send packets themselves.
As such, and as I am new to LibUV, I wanted to share with you the intended architecture, in order to get feedback and best practices about it.
– Each worker runs its very own libuv loop, allowing it to send packets directly over the network. Additionally, each worker has a dedicated concurrent queue through which messages are delivered to it.
– When a packet is received, its source address is hashed to select the corresponding worker from the pool.
– The receiver creates a unique async handle on the receiver loop, to act as a callback for when processing has finished.
– The receiver notifies the worker through an async handle that a new message is available; this wakes up the worker, which starts processing all enqueued messages.
– When done, the worker thread fires the async handle on the receiver loop, which causes the receiver to return the buffer to the pool and free all allocated resources (as such, the pool does not need to be thread-safe).
The main questions I have would be:
– What is the overhead of creating an async handle for each received message? Is it a good design?
– Is there any built-in way to send a message to another event loop?
– Would it be better to send outgoing packets using another loop, instead of doing it right from the worker loop?
Thanks.
Let's assume that I have a file consumer that polls a directory every 10 seconds and does some sort of processing on the files it finds there.
This processing may take 40 seconds per file. Does this mean that during that interval the consumer will poll the directory again and start another, similar process?
Is there any way I can avoid that, and prevent the consumer from polling again until the previous poll has finished?
The file consumer is single-threaded, so it will not poll while it is already processing files.
When the consumer finishes, it will delay for 10s before polling again. This is controlled by the useFixedDelay option, which you can read more about in the JDK ScheduledExecutorService that Camel uses as its scheduler.
The following is the definition of a producer and a consumer given in the Camel in Action book.
The consumer could be receiving the message from an external service, polling
for the message on some system, or even creating the message itself. This message
then flows through a processing component, which could be an enterprise integration
pattern (EIP), a processor, an interceptor, or some other custom creation. The message
is finally sent to a target endpoint that’s in the role of a producer. A route may
have many processing components that modify the message or send it to another location,
or it may have none, in which case it would be a simple pipeline.
My doubts:
What is an External Service?
How does a consumer come into play before the producer produces the message? My understanding is that a producer produces and transforms a message in an exchange so that the message is compatible with the consumer's endpoint.
Why does a consumer have to do a producer's work (that is, transform a message and send it onward)? Shouldn't it be vice versa?
Thanks!
An external service could be, for example, an external web service, an external REST service, an EJB, and so on.
A Consumer could be consuming from any of those services, or it could be listening for a file (or files) to be created in a specific place on the file system, it could be consuming from a message queue (JMS), etc, etc - there are endless possibilities limited only by the components and endpoints available.
Basically, with Apache Camel, you are designing a message bus (an ESB), right? You can think of it like this: the "consumer" takes stuff from the outside world and puts it on the bus.
Then, your message will go through various routes (most probably being translated and modified along the way, via EIPs) and eventually it has to go some place else "out there" in the real world – that's when the producer does its job.
Consumer consumes on to the bus / Producer produces off of the bus.
Usually, you don't need to think too much about whether an endpoint is operating as a producer or as a consumer – just use .from and .to as you need and everything should work fine from there.
Also have a read of this answer: Apache Camel producers and consumers
I hope this helps!
I have a C program which communicates with PHP through Unix Sockets. The procedure is as follows: PHP accepts a file upload by the user, then sends a "signal" to C which then dispatches another process (fork) to unzip the file (I know this could be handled by PHP alone, this is just an example; the whole problem is more complex).
The problem is that I don't want more than, say, 4 processes running at the same time. I think this could be solved like this: C, when it gets a new "task" from PHP, dumps it on a queue and handles tasks one by one (ensuring that no more than 4 are running) while still listening on the socket.
I'm unsure how to achieve this, though, as I cannot do that in the same process (or can I?). I have thought I could have another child process for managing the queue, which would be accessible to the parent through shared memory, but that seems overly complicated. Is there another way around this?
Thanks in advance.
If you need to have a separate process for each task handler, then you might consider having five separate processes. The first one is the listener: it handles new incoming tasks and places each one into a queue. Each task handler sends a request for work when it starts, and again whenever it finishes processing a task. When the listener receives such a request, it delivers the next task on the queue to the task handler, or, if the task queue is empty, places the task handler on a handler queue. When the task queue transitions from empty to non-empty, the listener checks whether there is a ready task handler in the handler queue. If so, it takes that handler out of the queue and delivers the task from the task queue to it.
The PHP process would put tasks to the listener, while the task handlers would get tasks from the listener. The listener simply waits for put or get requests, and processes them. You can think of the listener as a simple web server, but each of the socket connections to the PHP process and to each task handler can be persistent.
Since the number of sockets is small and they are persistent, any of the multiplexing calls could work (select, poll, epoll, kqueue, or whatever is best and/or available on your system), but it may be easiest to use a separate thread to handle each socket synchronously. The ready-task-handler queue would then be a semaphore or a condition variable on the task queue. The thread that handles puts from the PHP process would place tasks on the task queue and up the semaphore. Each thread that handles a ready task handler would down the semaphore, then take a task off the task queue. The task queue itself may need mutual-exclusion protection, depending on how it is implemented.
I am programming a multithreaded client/server program that communicates between processes using message queues.
The server handles the messages sent by the clients, and then hands the work off to threads that continue processing them.
Every client will have a different message queue.
After accepting the first client's connection and spawning a thread to handle it, calling pthread_join blocks the main thread, so I can no longer receive new connections. How can I fix this? What I want is:
– Receiving new messages in the main thread (or another solution if possible)
– Handing a client's messages off to threads, and afterwards
– Getting back to receiving new messages
Very simple:
Make the threads you create detached, which means you don't need to pthread_join them anymore. The main thread then loops, receiving new connections and new requests for existing connections; if it's a new connection it starts a new thread, and if it's a request for an existing connection it just adds the request to that thread's queue (locking its mutex, of course).