Generating UUID in response to message sent by Kafka Producer - c

I am trying to develop an application that reads byte arrays (each representing a C structure and associated with a UUID) from a cache and sends them to a Kafka server through a producer application written in C.
The Kafka producer application accumulates a fixed number of such packets and sends them at once.
What I wish to accomplish is to get an acknowledgment of which messages in the batch were successfully delivered, and to get their UUIDs back so that I can clear them from my cache application. I am new to Kafka; please guide me as to the best way to get this done.

Why not create another topic that a consumer 'listens' to for messages that have been acknowledged, and then removes them from the cache? Then the producer can just push messages to that topic when required.

When sending messages to Kafka, you can configure your producer to request acknowledgments (usually exposed as a config called acks). That allows the producer to know whether a message was delivered successfully or not.
For example, with librdkafka (in my opinion the best C Kafka client) you can get a delivery report callback when a produce request completes. It contains the message and an error in case of failure. That should enable you to easily identify which messages were sent successfully and mark them as done.
See the rd_kafka_conf_set_dr_msg_cb method to configure a delivery report callback. The simple producer example demonstrates its usage.
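Below is a minimal sketch of that pattern with librdkafka, assuming a local broker and a hypothetical topic name; the UUID literal and the cache_remove() call are placeholders for the asker's cache. The UUID is attached to each message as its opaque pointer so the delivery report can hand it back:

```c
#include <librdkafka/rdkafka.h>
#include <stdio.h>
#include <string.h>

/* Delivery report callback: librdkafka calls this once per produced message
 * after the broker has acknowledged (or rejected) it. The per-message opaque
 * pointer passed at produce time is available as rkmessage->_private. */
static void dr_msg_cb(rd_kafka_t *rk,
                      const rd_kafka_message_t *rkmessage,
                      void *opaque) {
    const char *uuid = (const char *)rkmessage->_private; /* our UUID */

    if (rkmessage->err) {
        fprintf(stderr, "delivery failed for %s: %s\n",
                uuid, rd_kafka_err2str(rkmessage->err));
    } else {
        printf("delivered %s, safe to remove from the cache\n", uuid);
        /* cache_remove(uuid);  hypothetical cache eviction call */
    }
}

int main(void) {
    char errstr[512];
    rd_kafka_conf_t *conf = rd_kafka_conf_new();

    rd_kafka_conf_set(conf, "bootstrap.servers", "localhost:9092",
                      errstr, sizeof(errstr));
    rd_kafka_conf_set(conf, "acks", "all", errstr, sizeof(errstr));
    rd_kafka_conf_set_dr_msg_cb(conf, dr_msg_cb);

    rd_kafka_t *rk = rd_kafka_new(RD_KAFKA_PRODUCER, conf,
                                  errstr, sizeof(errstr));
    if (!rk) {
        fprintf(stderr, "producer creation failed: %s\n", errstr);
        return 1;
    }

    const char *uuid = "1b4e28ba-2fa1-11d2-883f-0016d3cca427"; /* from the cache */
    const char *payload = "raw C struct bytes";                /* from the cache */

    rd_kafka_producev(rk,
                      RD_KAFKA_V_TOPIC("cache-packets"),
                      RD_KAFKA_V_VALUE((void *)payload, strlen(payload)),
                      RD_KAFKA_V_OPAQUE((void *)uuid), /* handed back in dr_msg_cb */
                      RD_KAFKA_V_END);

    rd_kafka_flush(rk, 10 * 1000); /* serve delivery reports before exiting */
    rd_kafka_destroy(rk);
    return 0;
}
```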

Related

How to process and propagate data to two different types of clients?

I have the following use case and I am not sure how to approach it.
A Kafka producer generates data to a topic; let's call the data X.
A Kafka consumer reads it.
The consumer sends the data to a server for further actions on it (a heavy task); let's say the resulting object is now X1.
I want the server that performed the actions to inform another client (a React client) about the result.
Here are some solutions I have thought about:
WebSockets: I need a connection initiated by the client (correct me if I am wrong), which is not part of the flow I'm building.
Long polling against registry-like storage (e.g. Mongo): when a new key has been identified (let's say by insertion date), grab the associated data.
Queueing: the server will put the processed result in a queue (or maybe in another Kafka topic). React will consume that queue, maybe via a WebSocket backend endpoint (as connecting Kafka directly to React doesn't sound good) that polls the queue periodically, with React receiving the data over the WebSocket.
How do those sound? Any other ideas are appreciated. My eventual architecture goal is to have a good enough solution that can be implemented relatively fast.

How to publish data synchronously using mosquitto_publish?

I have written code (mosquitto_publish()) using Mosquitto to publish data to AWS.
My problem is the sequence in which data arrives at the MQTT broker. In the Paho client, I see waitForCompletion(), but nothing similar in Mosquitto. Would anyone please help me deal with this problem?
Based on the mosquitto_publish documentation, the function returns when sending has been "successful". MQTT does not guarantee the order in which messages arrive, so you should arguably watch for the arrival rather than the sending, and avoid having two messages race each other to the broker. With QoS 0, the client never knows if a message arrived; that requires QoS 1 or 2, for which additional communications are exchanged. Raise the quality of service, and you can use mosquitto_max_inflight_messages_set(mosq, 1) so that the client queues any additional messages until it receives confirmation from the server. This may be even more efficient than "waiting" for completion, since non-MQTT operations can continue. The queue might pile up if you send bursts of many messages.
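As a rough sketch of that approach with the Mosquitto client library (the broker host, topic name, and the fixed sleep before disconnecting are placeholder assumptions, not a complete program):

```c
#include <mosquitto.h>
#include <stdio.h>
#include <unistd.h>

/* Called when the broker acknowledges a QoS 1 publish (PUBACK received). */
static void on_publish(struct mosquitto *mosq, void *userdata, int mid)
{
    printf("message %d confirmed by the broker\n", mid);
}

int main(void)
{
    mosquitto_lib_init();

    struct mosquitto *mosq = mosquitto_new("ordered-publisher", true, NULL);
    if (!mosq) return 1;

    /* Allow only one message in flight: further publishes are queued by the
     * client until the previous one has been acknowledged, preserving order. */
    mosquitto_max_inflight_messages_set(mosq, 1);
    mosquitto_publish_callback_set(mosq, on_publish);

    if (mosquitto_connect(mosq, "broker.example.com", 1883, 60) != MOSQ_ERR_SUCCESS)
        return 1;
    mosquitto_loop_start(mosq); /* background thread services the acks */

    for (int i = 0; i < 5; i++) {
        char payload[32];
        int len = snprintf(payload, sizeof(payload), "message %d", i);
        /* QoS 1: the broker must confirm receipt, so with max inflight = 1
         * each message reaches the broker before the next leaves the queue. */
        mosquitto_publish(mosq, NULL, "sensors/data", len, payload, 1, false);
    }

    sleep(2); /* crude: real code should wait for all on_publish callbacks */

    mosquitto_disconnect(mosq);
    mosquitto_loop_stop(mosq, false);
    mosquitto_destroy(mosq);
    mosquitto_lib_cleanup();
    return 0;
}
```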
The more complex alternative is to send messages unrestricted, but include an index with each, so that the subscriber can sort them upon receipt (for which it would need its own queue and delay). Not recommended if this burden is going to fall on multiple subscribers.

Could we maintain order of messages in AWS-IoT at subscriber end?

We have created a thing using the AWS IoT service and created a topic in that particular thing. A subscriber has subscribed to that topic and a publisher is sending messages to it.
Below is the publisher messaging order:
message 0
message 1
message 2
message 3
message 4
At the subscriber end the sequence of messages is not maintained. It's showing like this:
message 0
message 1
message 4
message 2
message 3
True, in AWS IoT the message broker does not guarantee the order in which messages are delivered to devices.
The reason is that in a typical distributed systems architecture, a single message from the publisher to the subscriber may take multiple paths so that the system remains highly available and scalable. In the case of AWS IoT, the Device Gateway supports the publish/subscribe messaging pattern and enables scalable, low-latency, and low-overhead communication.
However, depending on the use case, there are several possible solutions that can be worked out. The coordination has to be done by the publishers themselves. One simple, generic approach is to add a sequence number on the device side, which is usually sufficient to handle the ordering of messages between publisher and subscriber. On the receiver, logic that processes or discards each message after checking its sequence number should be enough (a sketch follows the quoted documentation below).
As written in the AWS documentation:
"The message broker does not guarantee the order in which messages and ACK are received."
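A minimal sketch in plain C of that sequence-number idea (the JSON-style payload layout and the drop-or-warn handling of gaps are illustrative assumptions, not a full re-ordering buffer):

```c
#include <stdio.h>
#include <stdint.h>

/* Publisher side: embed a monotonically increasing sequence number in every
 * payload so the subscriber can detect reordering. */
static uint32_t next_seq = 0;

static int build_payload(char *buf, size_t len, const char *body)
{
    return snprintf(buf, len, "{\"seq\":%u,\"data\":\"%s\"}",
                    (unsigned)next_seq++, body);
}

/* Subscriber side: process only in-order messages; warn about gaps and drop
 * anything older than what has already been processed. */
static uint32_t expected_seq = 0;

static void handle_message(uint32_t seq, const char *body)
{
    if (seq < expected_seq)
        return; /* duplicate or stale message: discard */

    if (seq > expected_seq)
        fprintf(stderr, "gap detected: expected %u, got %u\n",
                (unsigned)expected_seq, (unsigned)seq);

    printf("processing seq %u: %s\n", (unsigned)seq, body);
    expected_seq = seq + 1;
}

int main(void)
{
    char payload[128];
    build_payload(payload, sizeof(payload), "message 0");
    printf("publishing: %s\n", payload);

    handle_message(0, "message 0");
    handle_message(2, "message 2"); /* arrived out of order */
    handle_message(1, "message 1"); /* stale by now, discarded */
    return 0;
}
```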
I guess it's too late to answer this question, but I'll still go ahead so others facing this issue can have a workaround. I faced a similar scenario and did the following to make sure that order is maintained.
I added a sequence ID or timestamp to the payload sent to the broker from my IoT device (it can be any kind of client).
I then configured the IoT rules engine (add actions) to send the messages directly to DynamoDB, where the data was automatically stored in sorted order (it needs to be configured to sort by the sequence ID).
Then I used Lambda to pull the data out of DynamoDB for my further workflow, but you can use whatever service suits yours.

With Service Broker, how to know when all sent messages have been processed by a target service?

Pretty new to SQL Server Service Broker, I'm not able to find a simple way to know when all sent messages have been processed.
I'm using Service Broker to multithread a task by splitting it into many small pieces, but the execution flow needs all the atomic tasks to have been successfully processed in order to continue its way.
Any suggestions about the manner we can structure things around to achieve this aim?
You must explicitly send a response from the target, acknowledging the processing. And remember that it is perfectly valid for the target to process the message a month after you sent it. So don't block waiting for a response; the initiator should be event driven and respond to messages in its queue.

Implement a persistent message queue using a plain file in Linux

I want to implement a persistent queue in C.
I want to save messages to the persistent queue and then send them.
If my embedded device restarts, then when it starts again I also want to be able to send the pending messages from the persistent message queue.
Does anyone have an idea of how I can implement this and how it would work?
Thanks
Store it on some persistent storage.
There's not much more to tell you with the information you provided.
If you want it to be persistent then you have to store the data on a hard drive. What I would suggest is using http://www.sqlite.org/ for it. There are bindings for many languages.
A persistent message is a message that must not be lost, even if the broker fails.
A persistent queue is able to write messages to disk, so that they will not be lost in the event of system shut down or failure.
Now, messages can be either persistent or non-persistent that travel through the persistent queues.
When a sender sends a persistent message to the broker, it routes it to the recipient queue and waits for the message to be written to the persistent store, before acknowledging delivery to the actual sender.
If a queue is not persistent, messages on the queue are not written to disk.
If a message is not persistent, it is not written to disk even if it is on a persistent queue.
When a receiver reads a message from a persistent queue, the message is not removed from the queue until the receiver acknowledges it.
Now you have to put in a journaling mechanism to keep a record of the states of the messages and the brokers on disk. Then you have to manage caches for the messages and the journals, in the proper order.
This is a simple idea of what a persistent queue is supposed to be and how to write one.
Persistent queues are used by many proprietary software systems like IBM WebSphere, Red Hat's MRG, etc. Refer to them for more ideas.
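For illustration, here is a minimal sketch of the append-and-acknowledge idea using a plain data file plus a separate offset file; the file names, the fixed record size, and the omission of fsync()/locking are simplifying assumptions:

```c
#include <stdio.h>
#include <string.h>

#define QUEUE_FILE  "queue.dat"
#define OFFSET_FILE "queue.off"
#define RECORD_SIZE 256   /* fixed-size records keep offset arithmetic simple */

/* Append one record and flush it; it now survives a restart. */
static int queue_put(const char *msg)
{
    char record[RECORD_SIZE] = {0};
    strncpy(record, msg, RECORD_SIZE - 1);

    FILE *f = fopen(QUEUE_FILE, "ab");
    if (!f) return -1;
    fwrite(record, 1, RECORD_SIZE, f);
    fflush(f);
    /* fsync(fileno(f)) would be needed on POSIX for real durability */
    fclose(f);
    return 0;
}

/* Read the oldest unacknowledged record; returns 0 if one was found. */
static int queue_peek(char *out)
{
    long off = 0;
    FILE *fo = fopen(OFFSET_FILE, "rb");
    if (fo) { fread(&off, sizeof(off), 1, fo); fclose(fo); }

    FILE *f = fopen(QUEUE_FILE, "rb");
    if (!f) return -1;
    fseek(f, off, SEEK_SET);
    size_t n = fread(out, 1, RECORD_SIZE, f);
    fclose(f);
    return n == RECORD_SIZE ? 0 : -1;
}

/* Advance the consumed offset only after the message was acknowledged. */
static void queue_ack(void)
{
    long off = 0;
    FILE *fo = fopen(OFFSET_FILE, "rb");
    if (fo) { fread(&off, sizeof(off), 1, fo); fclose(fo); }
    off += RECORD_SIZE;

    fo = fopen(OFFSET_FILE, "wb");
    if (fo) { fwrite(&off, sizeof(off), 1, fo); fclose(fo); }
}

int main(void)
{
    queue_put("hello");          /* survives a reboot once written */

    char msg[RECORD_SIZE];
    if (queue_peek(msg) == 0) {
        printf("sending pending message: %s\n", msg);
        queue_ack();             /* only after a successful send */
    }
    return 0;
}
```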
