The following is the definition of a producer and a consumer given in the Camel in Action book.
The consumer could be receiving the message from an external service, polling
for the message on some system, or even creating the message itself. This message
then flows through a processing component, which could be an enterprise integration
pattern (EIP), a processor, an interceptor, or some other custom creation. The message
is finally sent to a target endpoint that’s in the role of a producer. A route may
have many processing components that modify the message or send it to another location,
or it may have none, in which case it would be a simple pipeline.
My doubts:
What is an External Service?
How does a consumer come into play before a producer produces the message? My understanding is that a producer produces and transforms a message in an exchange so that the message is compatible with the consumer's endpoint.
Why does a consumer have to do a producer's work (that is, transforming a message and sending it to the producer again)? Shouldn't it be the other way around?
Thanks!
An external service could be, for example, an external web service, an external REST service, an EJB, and so on.
A consumer could be consuming from any of those services, or it could be listening for a file (or files) to be created in a specific place on the file system, consuming from a message queue (JMS), and so on; there are endless possibilities, limited only by the components and endpoints available.
Basically, with Apache Camel you are designing a message bus (ESB), right? You can think of it like this: the "consumer" takes stuff from the outside world and puts it on the bus.
Then your message will go through various routes (most probably being translated and modified along the way, via EIPs) and eventually it has to go somewhere else "out there" in the real world; that's when the producer does its job.
The consumer consumes onto the bus; the producer produces off of the bus.
Usually, you don't need to think too much about whether an endpoint is operating as a producer or as a consumer; just use .from and .to as you need and everything should work fine from there.
Also have a read of this answer: Apache Camel producers and consumers
I hope this helps!
I would like to know what the execution pattern of multiple threads in a server should be to implement TCP in the request-response cycle of a high-performance server (for example, reading dozens of packets with a single system call, or none, on Linux using PACKET_MMAP or some other mechanism).
Design 1) For simplicity, start two threads in main at the start of the server program. One thread just reads packets directly from the network interface(s), such as wlan0/eth0, and once a number of packets has been read in one cycle (using a while loop with poll() on Linux), it wakes up the other thread by signalling a condition variable. After waking up, the other thread (the sender) processes the packets and sends them as TCP responses. (A sketch of this design is shown after the comparison below.)
Design 2) Start a receiver thread at the start of the main program. The packet receiver thread reads packets from the interfaces using a while loop and poll(). When a number of packets has been received, it creates a sender thread and passes the packets received in that cycle to the sender as a parameter. The sender thread processes the packets and responds with TCP responses.
(I think Design 2 will be easier to implement, but the question is whether there is any design issue or possible performance issue with this approach.) The buffer passed from the receiver thread to the sender thread needs to be allocated before the packets are received, so I know the size of the buffer to allocate. Also, in this execution pattern I am creating a new thread each time (which returns and ends execution after processing the packets and sending the TCP responses). I would like to know what the performance impact of this approach is, since I am creating a new thread every time I get a batch of packets from the interfaces.
In the first approach I am not creating more than two threads (or a limited number of threads, which can be tracked easily for logging and debugging since I know how many threads are created initially). In the second approach I don't know how many threads are hanging around and executing concurrently.
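For what it is worth, here is a minimal sketch of Design 1 with two long-lived threads and a condition variable. It uses a plain datagram socket and poll() instead of PACKET_MMAP, and every name and size in it is made up for illustration:
#include <poll.h>
#include <pthread.h>
#include <sys/socket.h>
#include <sys/types.h>

#define BATCH   64
#define PKT_MAX 2048

/* Shared batch buffer: the receiver fills it, the sender drains it. */
static char   packets[BATCH][PKT_MAX];
static size_t lengths[BATCH];
static int    count = 0;                       /* packets currently batched */
static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
static int sock_fd;                            /* set up in main() */

static void *receiver_thread(void *arg)
{
    struct pollfd pfd = { .fd = sock_fd, .events = POLLIN };
    for (;;) {
        if (poll(&pfd, 1, -1) <= 0)            /* wait for the interface */
            continue;
        pthread_mutex_lock(&lock);
        while (count < BATCH) {                /* drain this cycle's packets */
            ssize_t n = recv(sock_fd, packets[count], PKT_MAX, MSG_DONTWAIT);
            if (n <= 0)
                break;                         /* nothing left right now */
            lengths[count++] = (size_t)n;
        }
        if (count > 0)
            pthread_cond_signal(&ready);       /* wake the sender */
        pthread_mutex_unlock(&lock);
        /* If the batch is full, poll() returns immediately and we retry
           once the sender has drained it; a real server would use a second
           buffer instead of holding the lock around recv(). */
    }
    return NULL;
}

static void *sender_thread(void *arg)
{
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0)
            pthread_cond_wait(&ready, &lock);  /* sleep until a batch is ready */
        for (int i = 0; i < count; i++) {
            /* placeholder: build and send the TCP response for packet i,
               e.g. process_and_respond(packets[i], lengths[i]); */
        }
        count = 0;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    sock_fd = socket(AF_INET, SOCK_DGRAM, 0);  /* placeholder socket; a real
                                                  server would bind it or set
                                                  up PACKET_MMAP here */
    pthread_t rx, tx;
    pthread_create(&tx, NULL, sender_thread, NULL);
    pthread_create(&rx, NULL, receiver_thread, NULL);
    pthread_join(rx, NULL);
    return 0;
}
The sketch only shows the thread structure; buffer ownership, batching details and the PACKET_MMAP ring itself are left out.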
I would appreciate any advice on how real websites like YouTube or others may have handled this in their high-performance, front-facing servers, if they followed this way of implementing them.
First, when looking at a 'real' website, the magic lies in having load balancers and a whole bunch of worker nodes to take the load; you easily exceed the boundary of a single system. For example, take a look at the following AWS reference architecture for serving web pages at scale: AWS Cloud Architecture for serving web whitepaper.
That being said, taking this one level down, it is always interesting to look at how other well-known products have solved this issue. For example, NGINX has an excellent infographic and a matching blog post available describing its architecture and threading.
I'm trying to send incoming messages to multiple stateful functions, but I couldn't fully understand how to do it. For the sake of clarity, let's say one of my stateful functions receives some integers and sends them to a couple of remote functions. These functions add the integers to their state values and save the result as the new state.
When one of these 2 remote functions fails, the other should continue to work the same way.
When the failed function recovers, it should process the messages that it could not process during the failure.
I thought about sending them one after another, as below, but I don't think it will work:
context.send(RemoteFuncType1,someID,someInteger);
context.send(RemoteFuncType2,someID,someInteger);
...
How can I do this in a fault-tolerant way?
If possible, how does it work in the background?
The way you are suggesting to do it is the correct way!
StateFun delivers the messages to the remote functions in a consistent manner. If one of the functions is experiencing a short downtime, StateFun retries sending the message until:
it successfully delivers it (with backoff), or
a maximum timeout for the retries is reached. When the timeout is reached, the whole StateFun job is rewound to a previously consistent checkpoint.
Since StateFun manages both message delivery and the state of the functions (remote functions included), it makes sure that a consistent state and messages are delivered to each function.
In your example, the second remote function would receive someInteger, with whatever state it had before, once it has recovered.
To get a deeper understanding of how checkpointing works in Flink and how it enables exactly-once processing, I’d recommend the following:
https://ci.apache.org/projects/flink/flink-docs-stable/internals/stream_checkpointing.html
I'm trying to architect the main event handling of a libuv-based application. The application is composed of one (or more) UDP receivers sharing a socket, whose job is to delegate the processing of incoming messages to a common worker pool.
As the protocol handled is stateful, all packets coming from any given server should always be directed to the same worker; this constraint seems to make using libuv's built-in worker pool impossible.
The workers should also be able to send packets themselves.
As such, and as I am new to libuv, I wanted to share with you the intended architecture, in order to get feedback and best practices about it.
– Each worker runs its very own libuv loop, allowing it to send packets directly over the network. Additionally, each worker has a dedicated concurrent queue for sending it messages.
– When a packet is received, its source address is hashed to select the corresponding worker from the pool.
– The receiver creates a unique async handle on the receiver loop, to act as a callback for when processing has finished.
– The receiver notifies the worker through an async handle that a new message is available, which wakes up the worker, which then starts to process all enqueued messages (a minimal sketch of this notification step follows the list below).
– The worker thread then triggers the async handle on the receiver loop, which causes the receiver to return the buffer to the pool and free all allocated resources (as such, the pool does not need to be thread-safe).
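For illustration, here is a minimal sketch of the receiver-to-worker notification described above. The queue type and the worker struct are hypothetical; only uv_loop_init, uv_async_init, uv_async_send and uv_run are actual libuv calls, and the per-message completion handle going back to the receiver is left out:
#include <uv.h>

/* Hypothetical thread-safe queue of received packets (implementation not shown). */
typedef struct packet_queue packet_queue_t;
void  packet_queue_push(packet_queue_t *q, void *pkt);
void *packet_queue_pop(packet_queue_t *q);        /* returns NULL when empty */

typedef struct {
    uv_loop_t       loop;     /* this worker's private libuv loop */
    uv_async_t      wakeup;   /* handle the receiver signals */
    packet_queue_t *inbox;    /* messages pushed by the receiver thread */
} worker_t;

/* Runs on the worker's loop thread after uv_async_send() was called. */
static void on_wakeup(uv_async_t *handle)
{
    worker_t *w = handle->data;
    void *pkt;
    /* uv_async_send() coalesces signals, so drain everything queued. */
    while ((pkt = packet_queue_pop(w->inbox)) != NULL) {
        /* process the packet, possibly sending replies on w->loop ... */
    }
}

static void worker_init(worker_t *w, packet_queue_t *inbox)
{
    uv_loop_init(&w->loop);
    w->inbox = inbox;
    uv_async_init(&w->loop, &w->wakeup, on_wakeup);
    w->wakeup.data = w;
    /* the worker thread then calls uv_run(&w->loop, UV_RUN_DEFAULT) */
}

/* Called from the receiver thread: hand a packet to the chosen worker. */
static void dispatch_to_worker(worker_t *w, void *pkt)
{
    packet_queue_push(w->inbox, pkt);
    uv_async_send(&w->wakeup);    /* thread-safe; wakes the worker's loop */
}
Note that uv_async_send() coalesces calls that happen before the callback runs, which is why the sketch drains the whole queue in the callback; it is also a reason a single long-lived handle per worker tends to be preferred over creating one async handle per message.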
The main questions I have would be:
– What is the overhead of creating an async handle for each received message? Is it a good design?
– Is there any built-in way to send a message to another event loop?
– Would it be better to send outgoing packets using another loop, instead of doing it right from the worker loop?
Thanks.
How can a dbus user get notified when the dbus service it is using exits/crashes/restarts?
The tutorial suggests there is a way to do this, but in the specification I only found a signal intended for the name owner.
The dbus tutorial says:
Names have a second important use, other than routing messages. They are used to track lifecycle. When an application exits (or crashes), its connection to the message bus will be closed by the operating system kernel. The message bus then sends out notification messages telling remaining applications that the application's names have lost their owner. By tracking these notifications, your application can reliably monitor the lifetime of other applications.
The dbus specification has a section about the NameLost signal:
org.freedesktop.DBus.NameLost
This signal is sent to a specific application when it loses ownership of a name.
One way to find this out is to listen for the org.freedesktop.DBus.NameOwnerChanged signal, as specified in the D-Bus specification.
Your client needs to have some logic implemented to analyse the arguments of the signal to figure out when a name has been claimed, when a service has been restarted, when it is gone, etc. But the above signal can be used to receive the relevant information at least.
In your handler function you can check if the name argument matches the service name you want to know about. If the old_owner argument is empty, then the service has just claimed the name on the bus. If new_owner is empty then the service has gone away from the bus (for whatever reason).
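As an illustration only, with the low-level libdbus C API the match rule and the argument check could look roughly like this (com.example.SomeService is a placeholder name, and error handling is mostly omitted):
#include <dbus/dbus.h>
#include <stdio.h>

int main(void)
{
    DBusError err;
    dbus_error_init(&err);

    DBusConnection *conn = dbus_bus_get(DBUS_BUS_SESSION, &err);
    if (conn == NULL) {
        fprintf(stderr, "connection failed: %s\n", err.message);
        return 1;
    }

    /* Ask the bus to deliver NameOwnerChanged for the name we care about. */
    dbus_bus_add_match(conn,
        "type='signal',sender='org.freedesktop.DBus',"
        "interface='org.freedesktop.DBus',member='NameOwnerChanged',"
        "arg0='com.example.SomeService'",          /* placeholder service name */
        &err);
    dbus_connection_flush(conn);

    while (dbus_connection_read_write(conn, -1)) {
        DBusMessage *msg;
        while ((msg = dbus_connection_pop_message(conn)) != NULL) {
            if (dbus_message_is_signal(msg, "org.freedesktop.DBus",
                                       "NameOwnerChanged")) {
                const char *name, *old_owner, *new_owner;
                if (dbus_message_get_args(msg, &err,
                        DBUS_TYPE_STRING, &name,
                        DBUS_TYPE_STRING, &old_owner,
                        DBUS_TYPE_STRING, &new_owner,
                        DBUS_TYPE_INVALID)) {
                    if (new_owner[0] == '\0')
                        printf("%s left the bus (exit or crash)\n", name);
                    else if (old_owner[0] == '\0')
                        printf("%s claimed its name (start or restart)\n", name);
                }
            }
            dbus_message_unref(msg);
        }
    }
    return 0;
}
Higher-level bindings (GDBus, sd-bus, python-dbus and friends) expose the same NameOwnerChanged signal with much less boilerplate.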
Context: this is a web/SQLite application. One process receives new data over TCP and feeds it to a SQLite database. Other processes (the number is variable) are launched as required as clients connect and request updates over HTML5's server-sent events interface (this might change to WebSocket in the future).
The idea is to force the client apps to block, and to find a way for the server to create a notification that will wake up all awaiting clients.
Note that the clients aren't fork'ed from the server.
I'm hoping for a solution that:
doesn't require clients to register themselves to the server
allows the server to broadcast even if no client is listening - and doesn't create a huge pile of unprocessed notifications
allows clients to detect that server isn't present
allows clients to define a custom timeout (maximum wait time for an event)
Solutions checked:
sqlite3_update_hook() - only works within a single process (damned, that would have been sleek)
signals: I still have nightmares about the last time I used signals. Maybe signalfd would be better (the server creates a folder, clients create unique files there, and the server notifies every client that has a file in that folder)
inotify - didn't read enough on this one
semaphores / locks / shared memory - can't think of a non-hacked way to use these. The server could update a shared memory area with the row ID of the line just inserted in the DB, but then what?
I'm sure I'm missing something obvious - but what? At this time, polling seems to be the best option!
Thanks.
Just as a suggestion, can you try message queues? Multiple clients can connect to the same queue and receive a broadcast message, and each client can have its own message queue if it requires communication with the server.
Message queues are implemented by the Linux kernel and they are very reliable. I personally use message queues to pass messages from several clients to a central routing daemon, with the clients being responsible for processing and returning the altered data.
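If you try POSIX message queues, note that a given message is consumed by a single reader, so the per-client-queue variant mentioned above is the simpler way to reach every client. A rough sketch of the notification path, with made-up queue names, sizes and message format, could be:
/* Link with -lrt. POSIX queue names must start with '/'. */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <time.h>

/* Server side: push the rowid of a freshly inserted row to one client queue. */
static int notify_client(const char *queue_name, long rowid)
{
    mqd_t q = mq_open(queue_name, O_WRONLY | O_NONBLOCK);
    if (q == (mqd_t)-1)
        return -1;                         /* client (or its queue) is gone */
    char msg[32];
    snprintf(msg, sizeof msg, "%ld", rowid);
    /* O_NONBLOCK keeps the server from blocking behind a stalled client. */
    mq_send(q, msg, strlen(msg) + 1, 0);
    mq_close(q);
    return 0;
}

/* Client side: create its own queue and wait with a custom timeout. */
static int wait_for_update(const char *queue_name, int timeout_sec)
{
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 32 };
    mqd_t q = mq_open(queue_name, O_CREAT | O_RDONLY, 0600, &attr);
    if (q == (mqd_t)-1)
        return -1;

    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);
    ts.tv_sec += timeout_sec;              /* per-client maximum wait time */

    char buf[32];
    ssize_t n = mq_timedreceive(q, buf, sizeof buf, NULL, &ts);
    mq_close(q);
    if (n < 0)
        return -1;                         /* timed out (or error) */
    printf("new row: %s\n", buf);
    return 0;
}
mq_timedreceive gives each client its own maximum wait time, and O_NONBLOCK on the server side plus the small mq_maxmsg keeps notifications from piling up behind a stalled client; the server still needs some way to discover the client queues (for example by listing /dev/mqueue), so it does not completely avoid registration.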