I have an ADC task that uses 4 channels and transfers the data via DMA. I also have a streaming client task that streams the ADC data over a TCP socket. I made the ADC task lower priority than the streaming client.
I send an integer that selects which ADC channel is in use through a message queue to the streaming client.
The problem is that I get a queue overflow when sending that ADC channel integer.
ADC TASK
if(bufferSelect != BUFFERS_NOT_READY)
{
    if(xQueueSend(g_adcQueue, &bufferSelect, 0) != pdPASS)
    {
        throwError(ERROR_MESSAGE_QUEUE_FULL);
        PRINTF("%s\r\n", getErrorMessage(ERROR_MESSAGE_QUEUE_FULL));
    }
    bufferSelect = BUFFERS_NOT_READY;
}
Streaming client task
/* obtain next buffer ready event */
if(xQueueReceive(g_adcQueue, &bufferSelect, 0) == pdFALSE)
{
    g_stopStreaming = true;
    continue;
}
You seem to treat the queue-full status as an error, which it normally isn't. One of the purposes of a queue is to apply back-pressure to the producer, and that is exactly what you should do here: if the streaming task cannot digest the data you are throwing at it, you are simply producing too much.
The priority of the consumer only helps keep the queue fill level reasonable when there are no inactive (waiting-for-I/O) periods in the consumer code. As soon as your consumer has such wait periods, priority alone doesn't relieve you from accepting that the queue can become full.
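One way to let the queue apply that back-pressure is to give the producer a finite block time instead of 0. A minimal sketch of the ADC task with this change (the 10 ms timeout is an arbitrary example value, not something from your code):
if(bufferSelect != BUFFERS_NOT_READY)
{
    /* Block for up to 10 ms if the queue is full, giving the consumer a chance to catch up. */
    if(xQueueSend(g_adcQueue, &bufferSelect, pdMS_TO_TICKS(10)) != pdPASS)
    {
        /* Still full after waiting: the consumer genuinely cannot keep up, so drop
           this buffer (or slow the ADC down) rather than treat it as a fault. */
        PRINTF("queue full, dropping buffer\r\n");
    }
    bufferSelect = BUFFERS_NOT_READY;
}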
Related
I'm writing a module that contains a task with the highest priority; it should stay blocked until it receives a message from another task, and then start doing its duty as the highest-priority task. It uses a mailbox mechanism for signaling.
My problem is:
I want the task that sends the signal to the higher-priority task to get a message back, also in blocking mode.
Here is my question:
Should I post through mailbox 1 and then fetch from mailbox 2, or is there a better solution?
I use FreeRTOS, if that helps.
EDIT
I think I described the problem badly.
What I mean is: do I need two mailboxes to communicate task-to-task or ISR-to-task, or can I get away with just one mailbox and a different implementation?
For your edited question:
You have to use two message queues, one for each task; otherwise you won't be able to wait correctly.
So for your blocking message transfer, the code looks like this:
High priority task:
while(1)
{
    /* Block until the low-priority task (or an ISR) posts a request. */
    xQueueReceive(high_prio_queue, &msg, portMAX_DELAY);
    /* [your complex code] */
    /* Send the reply back to the low-priority task. */
    xQueueSend(low_prio_queue, &return_msg, timeout);
}
Low priority task:
/* Post the request to the high-priority task. */
xQueueSend(high_prio_queue, &msg, timeout);
/* Will only wait here if the high-priority task blocks before sending its reply. */
xQueueReceive(low_prio_queue, &return_msg, portMAX_DELAY);
From ISR:
xQueueSendFromISR(high_prio_queue, &msg, &unblocked);
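To make sure the woken high-priority task runs as soon as the interrupt returns, the usual FreeRTOS pattern is to pass that unblocked flag to portYIELD_FROM_ISR. A sketch, using the same msg and queue names as above (the ISR name is illustrative):
void my_isr(void)
{
    BaseType_t unblocked = pdFALSE;

    /* msg is whatever data the ISR wants to hand to the high-priority task. */
    xQueueSendFromISR(high_prio_queue, &msg, &unblocked);

    /* Request a context switch on exit if the send woke a higher-priority task. */
    portYIELD_FROM_ISR(unblocked);
}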
It is very simple. For example, using queues in FreeRTOS:
One task waits on the queue; while waiting it is in the Blocked state:
while(1)
{
    xQueueReceive(queue, &object, portMAX_DELAY);
    /* ... process the received object ... */
}
Another task sends data to the queue:
xQueueSend(queue, &object, timeout);
When data is received, the waiting task is given control. It then checks whether anything else is in the queue; if not, it waits again in the Blocked state.
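Putting the two tasks together, a minimal self-contained sketch could look like this (queue length, priorities and task names are arbitrary choices, not requirements):
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

static QueueHandle_t queue;

static void receiver_task(void *arg)
{
    int object;
    while(1)
    {
        /* Blocks (task is in the Blocked state) until something arrives. */
        xQueueReceive(queue, &object, portMAX_DELAY);
        /* ... process object ... */
    }
}

static void sender_task(void *arg)
{
    int object = 0;
    while(1)
    {
        /* Unblocks the receiver; waits up to 100 ticks if the queue is full. */
        xQueueSend(queue, &object, 100);
        object++;
        vTaskDelay(pdMS_TO_TICKS(500));
    }
}

int main(void)
{
    queue = xQueueCreate(8, sizeof(int));
    xTaskCreate(receiver_task, "rx", configMINIMAL_STACK_SIZE, NULL, 2, NULL);
    xTaskCreate(sender_task, "tx", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    vTaskStartScheduler();
    return 0;
}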
I have a DSL graph that is connected to RabbitMQ (source and sink).
If I start the service with 10 messages already in the queue and akka.stream.materializer.max-input-buffer-size set to 1, and I trigger the killingSwitch after one message has been processed and another is in flight, it seems that I lose the message sitting in the akka-streams buffer. (The stream does not shut down until all in-flight jobs complete.)
I end up with 7 messages remaining in the queue.
Any idea how that buffer works? Or how to get access to it, or how to process that message as well?
Example:
Messages in the queue at start:
5646245d2b0000251a9fe92b
56def590430000fd1dac3e47
542560eae4b0ba04ec469e12 (the message that will get lost)
55835213e4b03eb77098e88e
569edf2851000098027fdad8
6cb975919f254472b61c012d0b76e119
53667258e4b09a003032bcb3
92e4765c5dae4c8485b0a3aa088b8c1b
5326b1c4e4b0b5ce16824303
5623f7912c000072223bc3af
Acknowledged messages:
5646245d2b0000251a9fe92b
56def590430000fd1dac3e47
542560eae4b0ba04ec469e12 (lost message)
Processed messages:
5646245d2b0000251a9fe92b
56def590430000fd1dac3e47
Messages in the RabbitMQ buffer that are requeued:
55835213e4b03eb77098e88e
Messages left in the queue:
55835213e4b03eb77098e88e
569edf2851000098027fdad8
6cb975919f254472b61c012d0b76e119
53667258e4b09a003032bcb3
92e4765c5dae4c8485b0a3aa088b8c1b
5326b1c4e4b0b5ce16824303
5623f7912c000072223bc3af
I want to have a simple task queue. There will be multiple consumers running on different machines, but I only want each task to be consumed once.
If I have multiple subscribers taking messages from a topic using the same subscription ID, is there a chance that a message will be read twice?
I've tested something along these lines successfully but I'm concerned that there could be synchronization issues.
client = SubscriberClient.create(SubscriberSettings.defaultBuilder().build());
subName = SubscriptionName.create(projectId, "Queue");
client.createSubscription(subName, topicName, PushConfig.getDefaultInstance(), 0);

Thread subscriber = new Thread() {
    public void run() {
        while (!interrupted()) {
            PullResponse response = client.pull(subName, false, 1);
            List<ReceivedMessage> messages = response.getReceivedMessagesList();
            ReceivedMessage mess = messages.get(0);
            client.acknowledge(subName, ImmutableList.of(mess.getAckId()));
            doSomethingWith(mess.getMessage().getData().toStringUtf8());
        }
    }
};
subscriber.start();
In short, yes, there is a chance that some messages will be duplicated: GCP promises at-least-once delivery. Exactly-once delivery is theoretically impossible in any distributed system. You should design your doSomethingWith code to be idempotent if possible, so that duplicate messages are not a problem.
You should also acknowledge a message only once you have finished processing it: what would happen if your machine died after acknowledge but before doSomethingWith returned? Your message would be lost! (This fundamental issue is why exactly-once delivery is impossible.)
If losing messages is preferable to processing them twice, you could add a locking step (write a "processed" token to a consistent database), but this can still fail if the write lands before the message is actually processed. At that point you might be better served by a messaging technology that is designed for at-most-once delivery rather than one optimised for reliability.
I am coding the communication between 2 DSPs over SPI. The starting code is quite simple: DSP-1 sends and DSP-2 receives. (Of course, DSP-1 also receives and DSP-2 also sends, but I don't care about that so far.)
That works fine. One thousand 16-bit words were sent and received correctly.
However, when I add a random delay on the DSP-1 (master) side, DSP-2 begins to lose some data. This confuses me, because I didn't change anything on the DSP-2 receive side and I am polling quite often.
So, any idea why a delay on the sender's side might affect the receiver? (I double-checked that DSP-1 sends the correct sequence.)
I am also thinking of converting to an interrupt mechanism; will that solve this kind of issue once and for all?
My DSP-2 polling code is:
for(;;)  // my main program for receiving
{
    spi_xmit(data);                             // for sending, not of interest so far
    while(SpiaRegs.SPIFFRX.bit.RXFFST == 0) {}  // poll until the RX FIFO has data
    while(SpiaRegs.SPIFFRX.bit.RXFFST != 0)     // drain everything currently in the FIFO
    {
        rdata[seq] = SpiaRegs.SPIRXBUF;
        seq++;
    }
    if(seq > 1000) break;
}
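For reference, the interrupt mechanism mentioned above could look roughly like this on a TI C2000-style device (which the SpiaRegs usage suggests). The register and PIE names follow TI's DSP2833x header files and are an assumption; verify them against the headers for your specific part:
interrupt void spiRxFifoIsr(void);               // forward declaration

// Setup (once, during init): fire the RX FIFO interrupt after 4 received words.
EALLOW;
PieVectTable.SPIRXINTA = &spiRxFifoIsr;          // hook the SPI-A RX interrupt vector
EDIS;
SpiaRegs.SPIFFRX.bit.RXFFIL = 4;                 // interrupt level: 4 words in the FIFO
SpiaRegs.SPIFFRX.bit.RXFFIENA = 1;               // enable the RX FIFO interrupt
PieCtrlRegs.PIEIER6.bit.INTx1 = 1;               // SPI-A RX is PIE group 6, interrupt 1
IER |= M_INT6;
EINT;

interrupt void spiRxFifoIsr(void)
{
    // Drain the FIFO so no word is left waiting on a slow poll.
    while(SpiaRegs.SPIFFRX.bit.RXFFST != 0)
    {
        rdata[seq++] = SpiaRegs.SPIRXBUF;
    }
    SpiaRegs.SPIFFRX.bit.RXFFOVFCLR = 1;         // clear a pending overflow flag
    SpiaRegs.SPIFFRX.bit.RXFFINTCLR = 1;         // clear the RX FIFO interrupt flag
    PieCtrlRegs.PIEACK.all |= PIEACK_GROUP6;     // re-enable PIE group 6 interrupts
}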
I have 2 questions. The following is the scenario:
There are 2 different processes, Process A and Process B.
Process A enqueues messages into the message queue.
Process B dequeues messages from the message queue.
1) Process B shuts down for some time, but Process A continues to enqueue messages. When Process B comes back online, how does it dequeue the messages that Process A posted while Process B was offline?
2) The queue I am using is a multiple-consumer queue, since there needs to be more than one Process B dequeuing messages. The reason behind the design is that if one Process B dies, the other Process B instances can still continue processing messages. At the same time, if one instance of Process B has picked up a message, it should notify the other Process B instances not to process that message.
I couldn't find any samples. Any help is greatly appreciated.
I just completed a project with fairly similar requirements.
Problem 1)
I created a Windows service timer that periodically invokes a WCF RESTful service. The WCF service then dequeues anything enqueued (up to 500 messages per invocation). Anything enqueued is processed in order, so even if the timer stops, once it is restarted it picks up where it left off.
Problem 2)
I was replicating data from Oracle to Couchbase, so I had a timestamp recorded when the retrieval process started and a timestamp on the data already saved in Couchbase; if the former was older than the latter, the save was skipped. (This was to take care of race conditions.)
In Oracle I also had a trigger that, whenever something was enqueued, copied the ID and enqueue time to a second table. This second table is checked periodically; if an item has been dequeued from the queue table but the WCF service has not updated the second table to reflect that within a certain time frame, the data is re-enqueued, since something failed in the process.
In case it is helpful, here is an example of the WCF RESTful service dequeue loop using ODP.NET:
OracleAQQueue _queueObj;
OracleConnection _connObj;
string _connString = ConfigurationManager.ConnectionStrings["connectionstring"].ToString();
_connObj = new OracleConnection(_connString);
_queueObj = new OracleAQQueue("QUEUENAME", _connObj);
_connObj.Open();

int i = 0;
bool messageAvailable = true;
while (messageAvailable && i < 500)
{
    OracleTransaction _txn = _connObj.BeginTransaction();
    // Makes the dequeue part of the transaction
    _queueObj.DequeueOptions.Visibility = OracleAQVisibilityMode.OnCommit;
    _queueObj.DequeueOptions.ConsumerName = "CONSUMERNAME";
    try
    {
        // Number of seconds to wait for a dequeue; the default is to wait forever
        _queueObj.DequeueOptions.Wait = 2;
        _queueObj.MessageType = OracleAQMessageType.Raw;
        _queueObj.DequeueOptions.ProviderSpecificType = true;
        OracleAQMessage _depMsq = _queueObj.Dequeue();
        var _binary = (OracleBinary)_depMsq.Payload;
        byte[] byteArray = _binary.Value;
        // ... process byteArray ...
        _txn.Commit();
        i++;
    }
    catch (Exception ex)
    {
        // This catch always fires once all messages have been dequeued
        messageAvailable = false;
        if (ex.Message.IndexOf("end-of-fetch during message dequeue") == -1)
        {
            // An actual error is present
            log.Info("Problem occurred during dequeue process : " + ex.Message);
        }
    }
}
_queueObj.Dispose();
_connObj.Close();
_connObj.Dispose();
_connObj = null;