I've got a producer and a consumer. The producer writes fixed-size items into a given shared memory area, and the consumer retrieves them.
The producer can be noticeably slower or faster than the consumer, randomly.
What we want is this:
If the producer is running faster than the consumer, then when it fills the circular buffer it keeps writing over the oldest frames (other than the one the consumer is currently consuming, of course; I stress this point: producer and consumer must be synchronized in the solution, because they are unrelated processes).
If, instead, the consumer is faster than the producer, it must wait for a new frame and consume it when it's there.
I found implementations of producer/consumer with circular buffers, but only ones that didn't respect the first requirement (i.e., if the circular buffer is full, they wait for the consumer to finish, whereas what I want is to overwrite the oldest frames).
I'd prefer not to roll my own (bug-prone) solution, but to use a pre-canned, tested one. Can someone point me to a good C implementation? (C++ is also OK.)
Many thanks.
Basically, when the consumers are slow, it means that no one is using the buffer, so there is no difference between dropping the new frames and overwriting the old frames. So maybe the following code can help. When producerRTLock cannot lock the buffer because consumers are currently using it, you can decide at the application level to drop the frames.
#include <mutex>
#include <condition_variable>

using namespace std;

class SampleSynchronizer {
    mutex mux;
    condition_variable con_cond;   // signalled when a consumer leaves
    unsigned int con_num;          // number of consumers currently inside
    condition_variable pro_cond;   // signalled when the producer leaves
    bool prod;                     // true while the producer holds the buffer

public:
    SampleSynchronizer(): con_num(0), prod(false) {
    }

    void consumerLock() {
        unique_lock<mutex> locker(mux);
        while (prod)
            pro_cond.wait(locker);
        con_num++;
    }

    void consumerUnlock() {
        lock_guard<mutex> locker(mux);
        con_num--;
        con_cond.notify_one();
    }

    void producerLock() {
        unique_lock<mutex> locker(mux);
        while (con_num > 0)
            con_cond.wait(locker);
        prod = true;
    }

    // Non-blocking variant: returns false if consumers currently hold the
    // buffer, so the caller can decide to drop the frame instead of waiting.
    bool producerRTLock() {
        lock_guard<mutex> locker(mux);
        if (con_num > 0)
            return false;
        prod = true;
        return true;
    }

    void producerUnlock() {
        lock_guard<mutex> locker(mux);
        prod = false;
        pro_cond.notify_all();
    }
};
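As a rough usage sketch (Frame, captureFrame, bufferWriteOldestSlot, bufferReadNewestSlot and process are hypothetical placeholders for your shared-memory circular buffer, not part of the class above; waiting for a new frame when the buffer is empty would still have to be handled at the application level):

SampleSynchronizer sync;

void producerLoop() {
    for (;;) {
        Frame f = captureFrame();            // hypothetical frame source
        if (sync.producerRTLock()) {         // never blocks behind a consumer
            bufferWriteOldestSlot(f);        // overwrite the oldest frame
            sync.producerUnlock();
        }
        // else: a consumer holds the buffer, so drop this frame
    }
}

void consumerLoop() {
    for (;;) {
        sync.consumerLock();
        Frame f = bufferReadNewestSlot();    // hypothetical read of the newest frame
        sync.consumerUnlock();
        process(f);
    }
}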
A C++ wrapper around a FreeRTOS queue can be simplified into something like this:
#include "FreeRTOS.h"
#include "queue.h"

template<typename T>
class Queue
{
public:
    bool push(const T& item)
    {
        return xQueueSendToBack(handle, &item, 0) == pdTRUE;
    }

    bool pop(T& target)
    {
        return xQueueReceive(handle, &target, 0) == pdTRUE;
    }

private:
    QueueHandle_t handle;
};
The documentation of xQueueSendToBack states:
The item is queued by copy, not by reference.
Unfortunately, it is literally a byte-wise copy, because it all ends in a memcpy, which makes sense since it is a C API. While this works well for plain old data, more complex items such as the following event message cause serious problems.
#include <memory>

class ISocket;   // interface for the underlying socket

class ConnectionStatusEvent
{
public:
    ConnectionStatusEvent() = default;

    ConnectionStatusEvent(std::shared_ptr<ISocket> sock)
        : sock(sock)
    {
    }

    const std::shared_ptr<ISocket>& get_socket() const
    {
        return sock;
    }

private:
    const std::shared_ptr<ISocket> sock;
    bool connected;
};
The problem is obviously the std::shared_ptr, which doesn't work at all with a memcpy: its copy constructor/assignment operator isn't called when the item is copied onto the queue, resulting in premature deletion of the held object when the event message, and thus the shared_ptr, goes out of scope.
I could solve this by using dynamically allocated T-instances and change the queues to only contain pointers to the instance, but I'd rather not do that since this shall run on an embedded system and I very much want to keep the memory static at run-time.
My current plan is to change the queue to contain pointers into a locally held memory area in the wrapper class, in which I can implement full C++ object copying. But as I'd also need to protect that memory area against multi-threaded access, it essentially defeats the already thread-safe implementation of the FreeRTOS queues (which surely are more efficient than anything I could write myself), so I might as well skip them entirely.
Finally, the question:
Before I implement my own queue, are there any tricks I can use to make the FreeRTOS queues function with C++ object instances, in particular std::shared_ptr?
The issue is what happens to the original once you put the pointer into the queue.
Copying seems trivial but not optimal.
To get around this issue I use a mailbox instead of a queue:
T* data = (T*) osMailAlloc(m_mail, osWaitForever);
...
osMailPut (m_mail, data);
Here you allocate the block explicitly to begin with, and just add the pointer to the mailbox.
And to retrieve:
osEvent ev = osMailGet(m_mail, osWaitForever);
...
osStatus freeStatus = osMailFree(m_mail, p);
All of this can be neatly wrapped into C++ template methods.
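For example, a minimal sketch of such a wrapper (assuming the CMSIS-RTOS v1 mail API; the class and member names are just illustrative, and T must be copy-constructible and copy-assignable):

#include <new>          // placement new
#include "cmsis_os.h"   // CMSIS-RTOS v1 mail API

template<typename T>
class Mailbox
{
public:
    explicit Mailbox(const osMailQDef_t* def)
        : m_mail(osMailCreate(def, NULL))
    {
    }

    // Allocate a mail block, copy-construct the item into it, then post the pointer.
    bool push(const T& item)
    {
        void* block = osMailAlloc(m_mail, osWaitForever);
        if (block == NULL)
            return false;
        new (block) T(item);                     // proper C++ copy construction
        return osMailPut(m_mail, block) == osOK;
    }

    // Wait for a mail block, copy it out, then destroy and free the block.
    bool pop(T& target)
    {
        osEvent ev = osMailGet(m_mail, osWaitForever);
        if (ev.status != osEventMail)
            return false;
        T* p = static_cast<T*>(ev.value.p);
        target = *p;                             // proper C++ copy assignment
        p->~T();                                 // run the destructor explicitly
        osMailFree(m_mail, p);
        return true;
    }

private:
    osMailQId m_mail;
};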
I'm attempting to use the mbed OS scheduler for a small project.
As mbed OS is asynchronous, I need to avoid blocking code.
However, the library for my wireless receiver uses a blocking line:
while (!(wireless.isRxData()));
Is there an alternative way to do this that won't block all the code until a message is received?
static void listen(void) {
    wireless.quickRxSetup(channel, addr1);
    sprintf(ackData, "Ack data \r\n");
    wireless.acknowledgeData(ackData, strlen(ackData), 1);

    while (!(wireless.isRxData()));

    len = wireless.getRxData(msg);
}

static void motor(void) {
    pc.printf("Motor\n");
    m.speed(1);
    n.speed(1);
    led1 = 1;
    wait(0.5);
    m.speed(0);
    n.speed(0);
}

static void sendData() {
    wireless.quickTxSetup(channel, addr1);
    strcpy(accelData, "Robot");
    wireless.transmitData(accelData, strlen(accelData));
}

void app_start(int, char**) {
    minar::Scheduler::postCallback(listen).period(minar::milliseconds(500)).tolerance(minar::milliseconds(1000));
    minar::Scheduler::postCallback(motor).period(minar::milliseconds(500));
    minar::Scheduler::postCallback(sendData).period(minar::milliseconds(500)).delay(minar::milliseconds(3000));
}
You should remove the while (!(wireless.isRxData())); loop in your listen function. Replace it with:
if (wireless.isRxData()) {
    len = wireless.getRxData(msg);
    // Process data
}
Then, you can process your data in that if statement, or you can call postCallback on another function that will do your processing.
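Putting it together, listen() might look roughly like this (a sketch reusing the globals from your question; since the scheduler already calls listen() every 500 ms, the if-check effectively becomes your poll):

static void listen(void) {
    wireless.quickRxSetup(channel, addr1);
    sprintf(ackData, "Ack data \r\n");
    wireless.acknowledgeData(ackData, strlen(ackData), 1);

    // Poll instead of block: if nothing has arrived yet, just return and
    // let the scheduler call us again on the next period.
    if (wireless.isRxData()) {
        len = wireless.getRxData(msg);
        // Process the data here, or post another callback to do the processing.
    }
}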
Instead of looping until data is available, you'll want to poll for data. If RX data is not available, exit the function and set a timer to go off after a short interval. When the timer goes off, check for data again. Repeat until data is available. I'm not familiar with your OS so I can't offer any specific code. This may be as simple as adding a short "sleep" call inside the while loop, or may involve creating another callback from the scheduler.
I am trying to implement a mutex in C using the fetch-and-increment algorithm (sort of like the bakery algorithm). I have implemented the fetch-and-add part atomically. Every thread obtains a ticket number and waits for its number to be "displayed". However, I have not found a way to tackle the issue of waiting for your ticket to be displayed. I have thought of using a queue to store your thread ID and deschedule/yield yourself until someone who has the lock wakes you up. However, I would need a lock for the queue as well! :(
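For reference, the plain busy-waiting version of the ticket lock I'm describing looks roughly like this (a sketch using GCC __atomic builtins; the names are illustrative only):

typedef struct {
    unsigned int next_ticket;   // fetch-and-increment to take a ticket
    unsigned int now_serving;   // the number currently "displayed"
} ticket_lock_t;

void ticket_lock(ticket_lock_t *lk) {
    // Atomically fetch and increment to obtain our ticket number.
    unsigned int my_ticket = __atomic_fetch_add(&lk->next_ticket, 1, __ATOMIC_ACQUIRE);

    // Busy-wait until our number is displayed -- this spin is exactly
    // what I want to replace with deschedule()/make_runnable().
    while (__atomic_load_n(&lk->now_serving, __ATOMIC_ACQUIRE) != my_ticket)
        ;
}

void ticket_unlock(ticket_lock_t *lk) {
    // Display the next ticket number.
    __atomic_fetch_add(&lk->now_serving, 1, __ATOMIC_RELEASE);
}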
Are there any recommendations on what I could do to make the queue insertion safe or perhaps a different approach to using a queue?
Here is some code of my initial implementation:
void mutex_lock( mutex_t *mp ) {
    while (compareAndSwap(&(mp->guard), 0, 1) == 1) {
        // This will loop for a short period of time, need to change this <--
    }
    if ( mp->lock == 1 ) {
        queue_elem_t elem;
        elem.data.tid = gettid();
        enq( &(mp->queue), &(elem) );
        mp->guard = 0;
        deschedule();
    }
    else {
        mp->lock = 1; // Lock the mutex
        mp->guard = 0; // Allow others to enq themselves
    }
}
Also, let's ignore for now the potential race condition where someone can call make_runnable before you call deschedule; I can write another system call that signals we are about to deschedule, so that make_runnable calls get queued.
I have an issue in my use of critical sections. My app has a large number of threads, say 60, which all need access to a global resource. I therefore protect that resource with a critical section. This works perfectly during operation, however when my application shuts down, I trigger the threads to quit, and then destroy the critical section.
The problem comes if some of those threads are waiting on the critical section at exit time, and thus are blocked from quitting themselves.
I've written a wrapper around the Windows critical section calls that has an 'Initialised' flag, which I set to true when the crit is created, and set to false when I'm about to free the crit (in both cases the flag is set while inside the crit). This flag is checked before the 'enter crit' wrapper function tries entering the crit, bypassing the request if the flag is false. The flag is also checked the moment any thread successfully enters the crit, making that thread immediately leave the crit if the flag is false.
What I do prior to deleting the crit is set the flag to false, then wait for any waiting threads to: be allowed into the crit; see that the Initialised flag is false; then leave the crit (which should be a pretty quick operation for each thread).
I check the number of threads waiting for access to the crit by checking the LockCount inside the CRITICAL_SECTION struct, and waiting until it hits 0 (in XP, that's LockCount - (RecursionCount-1); in 2003 server and above, the lock count is ((-1) - (LockCount)) >> 2), before I then destroy the critical section.
This should be sufficient, however I'm finding that the LockCount reaches 0 when there's still one thread (always just one thread, never more) waiting to enter the crit, meaning if I delete the crit at that point, the other thread subsequently wakes up from waiting on the crit, and causes a crash as the CRITICAL_SECTION object has by that time been destroyed.
If I keep my own internal lock count of threads waiting for access, I have the correct count; however this isn't ideal as I have to increment this count outside of the crit, meaning that value isn't protected and therefore can't be entirely relied upon at any one time.
Does anyone know why the LockCount in the CRITICAL_SECTION struct would be out by 1? If I use my own lock count, then check the CRITICAL_SECTION's lock count after that last thread has exited (and before I destroy the crit), it's still 0...
Or, is there a better way for me to protect the global resource in my app with that many threads, other than a critical section?
This is my wrapper struct:
typedef struct MY_CRIT {
    BOOL Initialised;
    CRITICAL_SECTION Crit;
    int MyLockCount;
} MY_CRIT;
Here's my Crit init function:
BOOL InitCrit( MY_CRIT *pCrit )
{
    if (pCrit)
    {
        InitializeCriticalSection( &pCrit->Crit );
        pCrit->Initialised = TRUE;
        pCrit->MyLockCount = 0;
        return TRUE;
    }
    // else invalid pointer
    else
        return FALSE;
}
This is my enter crit wrapper function:
BOOL EnterCrit( MY_CRIT *pCrit )
{
    // if pointer valid, and the crit is initialised
    if (pCrit && pCrit->Initialised)
    {
        pCrit->MyLockCount++;
        EnterCriticalSection( &pCrit->Crit );
        pCrit->MyLockCount--;

        // if still initialised
        if (pCrit->Initialised)
        {
            return TRUE;
        }
        // else someone's trying to close this crit - jump out now!
        else
        {
            LeaveCriticalSection( &pCrit->Crit );
            return FALSE;
        }
    }
    else // crit pointer is null
        return FALSE;
}
And here's my FreeCrit wrapper function:
void FreeCrit( MY_CRIT *pCrit )
{
    LONG WaitingCount = 0;

    if (pCrit && (pCrit->Initialised))
    {
        // set Initialised to FALSE to stop any more threads trying to get in from now on:
        EnterCriticalSection( &pCrit->Crit );
        pCrit->Initialised = FALSE;
        LeaveCriticalSection( &pCrit->Crit );

        // loop until all waiting threads have gained access and finished:
        do {
            EnterCriticalSection( &pCrit->Crit );

            // check if any threads are still waiting to enter:
            // Windows XP and below:
            if (IsWindowsXPOrBelow())
            {
                if ((pCrit->Crit.LockCount > 0) && ((pCrit->Crit.RecursionCount - 1) >= 0))
                    WaitingCount = pCrit->Crit.LockCount - (pCrit->Crit.RecursionCount - 1);
                else
                    WaitingCount = 0;
            }
            // Windows 2003 Server and above:
            else
            {
                WaitingCount = ((-1) - (pCrit->Crit.LockCount)) >> 2;
            }

            // hack: if our own lock count is higher, use that:
            WaitingCount = max( WaitingCount, pCrit->MyLockCount );

            // if some threads are still waiting, leave the crit and sleep a bit, to give them a chance to enter & exit:
            if (WaitingCount > 0)
            {
                LeaveCriticalSection( &pCrit->Crit );
                // don't hog the processor:
                Sleep( 1 );
            }
            // when no other threads are waiting to enter, we can safely delete the crit (and leave the loop):
            else
            {
                DeleteCriticalSection( &pCrit->Crit );
            }
        } while (WaitingCount > 0);
    }
}
You are responsible for making sure the CS is no longer in use before you destroy it. Let us say no other thread is currently trying to enter, but there is a chance that one will attempt to very soon. Now you destroy the CS: what is that concurrent thread going to do? At full pace it hits a deleted critical section, causing a memory access violation.
The actual solution depends on your current app design, but if you are destroying threads, you will perhaps want to flag your request to stop those threads, then wait on their handles for them to finish, and only delete the critical section once you are sure the threads are done.
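A rough sketch of that shutdown order (assuming the worker handles are kept in an array and the workers check a stop event; these names are placeholders, not your code):

// Ask every worker to quit (each worker checks this event in its loop).
SetEvent(g_StopEvent);

// Wait for all workers to actually terminate (WaitForMultipleObjects
// accepts up to 64 handles per call, so ~60 threads fit in one call).
WaitForMultipleObjects(nThreads, hThreads, TRUE, INFINITE);

// Only now is it safe to tear down the shared resource and its lock.
DeleteCriticalSection(&g_Crit);

for (DWORD i = 0; i < nThreads; i++)
    CloseHandle(hThreads[i]);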
Note that it is unsafe to rely on CS member values such as .LockCount, and having done things the right way you will perhaps not even need things like IsWindowsXPOrBelow. The critical section API expects you to treat the CRITICAL_SECTION structure as a "black box", leaving the internals implementation-specific.
I am building a FreeRTOS application. I created a module which registers FreeRTOS queue handles from other modules, and when an interrupt occurs in this module, it sends a message to all the registered queues. But it seems I am able to send the message to the queue but not able to receive it in the other module.
Here is my code.
remote module:-
CanRxMsg RxMessage;

can_rx0_queue = xQueueCreate( 10, sizeof(CanRxMsg) );   // can_rx0_queue is globally defined

// Register my queue with can module
if (registerRxQueueWithCAN(can_rx0_queue) == -1)
{
    TurnLedRed();
}

while(1)
{
    if(can_rx0_queue){
        while( xQueueReceive( can_rx0_queue, ( void * ) &RxMessage, portMAX_DELAY))
        {
        }
        .....
Here is the registration module
#define MAX_NUMBER_OF_RX_QUEUES 2

//xQueueHandle rxQueueStore[MAX_NUMBER_OF_RX_QUEUES];
typedef struct QUEUE_REGISTRY_ITEM
{
//  signed char *pcQueueName;
    xQueueHandle xHandle;
} xQueueRegistryItem;

xQueueRegistryItem rxQueueStore[MAX_NUMBER_OF_RX_QUEUES];
int numberOfQueuesRegistered;

#define cError -1
#define cSuccess 0

void processInterrupt()
{
    for(int i=0; i < numberOfQueuesRegistered; i++)
    {
        if(xQueueSendFromISR(rxQueueStore[i].xHandle, (void *) &RxMessage, &tmp) != pdTRUE)
            TurnLedRed();

        if(tmp) resched_needed = pdTRUE;
    }

    portEND_SWITCHING_ISR(resched_needed);
}

int registerRxQueueWithCAN(xQueueHandle myQueue)
{
    if(numberOfQueuesRegistered == MAX_NUMBER_OF_RX_QUEUES)
    {
        // Overflow of registrations
        TurnLedRed();
        return cError;
    }
    else
    {
        rxQueueStore[numberOfQueuesRegistered].xHandle = myQueue;
        numberOfQueuesRegistered++;
    }
    return cSuccess;
}
A few points:
xQueueHandle is typedefed to "void *".
The code works if I remove the registration mechanism and just use the queue pointer directly in xQueueSendFromISR, taking the pointer via extern.
Any advice, or is any more information required?
At first glance I cannot see anything obviously wrong. The problem might lie outside of the code you have shown, e.g. how can_rx0_queue is declared, how the interrupt is entered, which port you are using, etc.
There is a FreeRTOS support forum, linked to from the FreeRTOS home page http://www.FreeRTOS.org
Regards.
I think Richard is right. The problem could be due to issues that are not within the code you have posted here.
Are you calling any form of suspension on the receiving Task that is waiting on the Queue? When you invoke a vTaskSuspend() on a Task that is blocked waiting on a Queue, the Task that is suspended will be moved to the pxSuspendedTaskList and it will "forget" that it is waiting on an Event Queue because the pvContainer of xEventListItem in that Task will be set to NULL.
You might want to check if your receiving Task is ever suspended while waiting on a Queue. Hope that helped. Cheers!
Your shared memory should at least be declared volatile:
volatile xQueueRegistryItem rxQueueStore[MAX_NUMBER_OF_RX_QUEUES] ;
volatile int numberOfQueuesRegistered ;
otherwise the compiler may optimise out reads or writes to these, because it has no concept of the different threads of execution (the ISR and the main thread).
Also, I recall that some PIC C run-time start-up options do not apply zero-initialisation of static data in order to minimise start-up time; if you are using such a start-up, you should explicitly initialise numberOfQueuesRegistered. I would suggest that doing so is a good idea in any case.
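For example (combined with the volatile qualifier above):
volatile int numberOfQueuesRegistered = 0;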
It is not clear from your code whether RxMessage in the ISR is the same as RxMessage in the 'remote module'; they should not be shared, since that would allow the ISR to potentially modify the data while the receiving thread was processing it. If they could be shared, there would be no reason to have a queue in the first place, since shared memory and a semaphore would suffice.
As a side note, there is never any need to cast a pointer to void*, and you should generally avoid doing so, since it will prevent the compiler from issuing an error if you were to pass something other than a pointer. The whole point of void* is that it can accept any pointer type.
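For instance, the call in your ISR can simply drop the cast:
if (xQueueSendFromISR(rxQueueStore[i].xHandle, &RxMessage, &tmp) != pdTRUE)
    TurnLedRed();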