I'm trying to understand the use case for EV_DISABLE and EV_ENABLE in kqueue.
int KQueue = kqueue();
struct kevent ev = {
    .ident  = fd,
    .filter = EVFILT_READ,
    .flags  = EV_ADD | EV_DISABLE,
    .udata  = somePtr
};
kevent(KQueue, &ev, 1, NULL, 0, NULL);
...
struct kevent ev = {
    .ident  = fd,
    .filter = EVFILT_READ,
    .flags  = EV_ENABLE
};
kevent(KQueue, &ev, 1, &ev, 1, NULL);
Now, when the last call to kevent() returns, ev.udata is NULL instead of somePtr. If kevent() updates the udata pointer even though EV_ADD isn't set, instead of just enabling the event, what is the reason for allowing you to add a disabled event in the first place?
Another use case for EV_ENABLE is in conjunction with EV_DISPATCH. This is necessary in multi-threaded scenarios where you have multiple threads waiting for events in a kevent() call. When an event occurs without EV_DISPATCH, all of your threads would be woken up by the same event, causing a thundering-herd problem. With EV_DISPATCH, the event is delivered to one thread and disabled right after that (i.e., atomically from the userspace point of view). That thread then handles the event and may re-enable it.
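A minimal sketch of that pattern, assuming an existing kqueue descriptor kq, the fd and somePtr from the question, and a hypothetical handle_read() handler; error handling omitted:
#include <sys/event.h>

/* Registration: deliver to one waiter at a time, auto-disable on delivery. */
struct kevent reg;
EV_SET(&reg, fd, EVFILT_READ, EV_ADD | EV_DISPATCH, 0, 0, somePtr);
kevent(kq, &reg, 1, NULL, 0, NULL);

/* In each worker thread: */
struct kevent out;
while (kevent(kq, NULL, 0, &out, 1, NULL) == 1) {
    handle_read(out.ident, out.udata);   /* hypothetical handler */

    /* Re-arm the (now disabled) event so another thread can take the next one. */
    struct kevent re;
    EV_SET(&re, fd, EVFILT_READ, EV_ENABLE, 0, 0, somePtr);
    kevent(kq, &re, 1, NULL, 0, NULL);
}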
kqueue did not update udata — YOU updated udata by leaving it uninitialized. A changelist entry re-registers the filter with the new values you pass, udata included. The point of udata is to carry your pointer across the kernel and back; you can just as well keep your own pointer in userland.
The point of disabling an event is that you don't want kqueue to return when the event triggers right now, but only at some later call, after you re-enable it.
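So, in the snippet above, the practical consequence is: if you want to keep udata, pass it again when you enable the event (reusing the question's KQueue, fd and somePtr):
struct kevent ev = {
    .ident  = fd,
    .filter = EVFILT_READ,
    .flags  = EV_ENABLE,
    .udata  = somePtr   /* re-specify it, or the registration is updated with NULL */
};
kevent(KQueue, &ev, 1, &ev, 1, NULL);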
We have several tasks running on an STM32 MCU. In the main.c file we call all the init functions for the various threads. Currently there is one auto-reloading xTimer that triggers a periodic callback (which, at present, does nothing except print a message that it was called). Declarations are as follows, outside any function:
TimerHandle_t xMotorTimer;
StaticTimer_t xMotorTimerBuffer;
EventGroupHandle_t MotorEventGroupHandle;
In the init function for the thread:
xMotorTimer = xTimerCreateStatic("MotorTimer",
                                 xTimerPeriod,
                                 uxAutoReload,
                                 ( void * ) 0,
                                 MotorTimerCallback,
                                 &xMotorTimerBuffer);
xTimerStart(xMotorTimer, 100);
One thread starts an infinite loop that pauses on an xEventGroupWaitBits() to determine whether to enter an inner loop, which is then governed by its own state:
DeclareTask(MotorThread)
{
    bool done = false;
    EventBits_t event;

    for (;;)
    {
        Packet * pkt = NULL;

        event = xEventGroupWaitBits( MotorEventGroupHandle,
                                     EVT_MOTOR_START | EVT_MOTOR_STOP, // EventBits_t uxBitsToWaitFor
                                     pdTRUE,                           // BaseType_t xClearOnExit
                                     pdFALSE,                          // BaseType_t xWaitForAllBits
                                     portMAX_DELAY                     // TickType_t xTicksToWait
                                   );

        if (event & EVT_MOTOR_STOP)
        {
            MotorStop(true);
        }
        if (event & EVT_MOTOR_START)
        {
            EnableMotor(MOTOR_ALL);
            done = false;
            while (!done && !abortTest)
            {
                xQueueReceive(motorQueue, &pkt, portMAX_DELAY);
                if (pkt == NULL)
                {
                    done = true;
                } else {
                    done = MotorExecCmd(pkt);
                    done = ( uxQueueMessagesWaiting(motorQueue) == ( UBaseType_t ) 0);
                    FreePacket(pkt);
                }
            }
        }
    }
}
xEventGroupWaitBits() fires successfully once, the inner loop enters, then exits when the program state meets the expected conditions. The outer loop repeats as it should, but when it arrives again at the xEventGroupWaitBits() call, it crashes almost instantly. In fact, it crashes a few lines down into the wait function, at a call to uxTaskResetEventItemValue(). I can't even step the debugger into the function, as if calling a bad address. But if I check the disassembly, the memory address for the BL instruction hasn't changed since the previous loop, and that address is valid. The expected function is actually there.
I can prevent this chain of events happening altogether by not calling that xTimerStart() and leaving everything else as-is. Everything runs just fine, so it's definitely not xEventGroupWaitBits() (or at least not just that). We tried switching to xEventGroupGetBits() and adding a short osDelay to the loop just as an experiment. That also froze the whole system.
So, the main question: are we doing something FreeRTOS is not meant to do here, using xEventGroupWaitBits() with xTimers running? Or is there supposed to be something between xEventGroupWaitBits() calls, possibly some kind of state reset, that we've overlooked? Reviewing the docs, I can't see it, but I could have missed a detail.
I have a multithreaded program and each thread focuses on its own work. When a thread finishes its work, it will deliver an event to the main thread.
So I want to use a pthread condition variable to implement this (i.e. pthread_cond_t and pthread_mutex_t). The data structure looks like:
typedef struct peer_info {
    int ip;
    int port;
    pthread_mutex_t peer_mutex;
    pthread_cond_t peer_cond;
    bool write_v;
    bool read_v;
    bool create_v;
    bool destroy_v;
} peer_info;
Suppose that:
thread1 changes write_v and signals the main thread,
thread2 changes read_v and signals the main thread,
thread3 changes create_v and signals the main thread,
thread4 changes destroy_v and signals the main thread.
Is it possible to use only one pthread_mutex and pthread_cond_t to implement the above scenario? Will it cause a deadlock?
Yes, you can safely wait on a "compound predicate," where one signal means "this state changed or that state changed or some other state changed or ..."
In your case, those states are the *_v flags you set. Presuming your main thread is in a loop, and that the *_v flags are initialized to false, it'd look something like this:
// main loop:
pthread_mutex_lock(&pi.peer_mutex);
while (! (pi.write_v || pi.read_v || ... )) {
    pthread_cond_wait(&pi.peer_cond, &pi.peer_mutex);
}

if (pi.write_v) {
    pi.write_v = false;   // clear this so we're not confused next time we wake up
    handle_write_event(&pi);
}
if (pi.read_v) {
    pi.read_v = false;
    handle_read_event(&pi);
}
if (...) ...              // handle other events

pthread_mutex_unlock(&pi.peer_mutex);   // done handling; release the lock
Note that this will handle events while the mutex is locked, which may or may not be OK for you.
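If that is a concern, one option (a sketch, not part of the original answer, shown with just the write/read flags) is to snapshot and clear the flags under the lock and handle them after unlocking:
pthread_mutex_lock(&pi.peer_mutex);
while (!(pi.write_v || pi.read_v))
    pthread_cond_wait(&pi.peer_cond, &pi.peer_mutex);

// Copy and clear the flags while we still hold the lock.
bool do_write = pi.write_v;
bool do_read  = pi.read_v;
pi.write_v = pi.read_v = false;
pthread_mutex_unlock(&pi.peer_mutex);

// Handle the events without holding the mutex.
if (do_write) handle_write_event(&pi);
if (do_read)  handle_read_event(&pi);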
As an aside, I find all those boolean flags a bit cumbersome, and might use bit flags instead:
#define PEER_EVENT_READ  (1 << 0)
#define PEER_EVENT_WRITE (1 << 1)
#define PEER_EVENT_ ...

struct peer_info {
    short events_pending;   // initialized to zero
    ...
};
So that I could concisely write:
while (! pi.events_pending) {
    pthread_cond_wait(&pi.peer_cond, &pi.peer_mutex);
}
if (pi.events_pending & PEER_EVENT_READ) { ... }
if ....
pi.events_pending = 0;   // clear them all
But it's semantically the same thing, so up to you.
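For completeness, the signaling side in each worker thread would look something like this (a sketch using the bit-flag variant; the mutex and condition variable are the ones from the struct above):
// In thread1, when its write work is done:
pthread_mutex_lock(&pi.peer_mutex);
pi.events_pending |= PEER_EVENT_WRITE;    // record which event happened
pthread_cond_signal(&pi.peer_cond);       // wake the main thread
pthread_mutex_unlock(&pi.peer_mutex);
As long as every thread follows the same lock → update → signal → unlock sequence on that one mutex, a single mutex and condition variable are enough and there is no deadlock risk.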
I'm using the transfer queue to upload data to GPU-local memory to be used by the graphics queue. I believe I need 3 barriers: one to release the texture object from the transfer queue, one to acquire it on the graphics queue, and one to transition it from TRANSFER_DST_OPTIMAL to SHADER_READ_ONLY_OPTIMAL. I think my barriers are what's incorrect, as this is the error I get; I do still see the correct rendered output, but I'm on Nvidia hardware. Is there any synchronization missing?
UNASSIGNED-CoreValidation-DrawState-InvalidImageLayout(ERROR / SPEC): msgNum: 1303270965 -
Validation Error: [ UNASSIGNED-CoreValidation-DrawState-InvalidImageLayout ] Object 0:
handle = 0x562696461ca0, type = VK_OBJECT_TYPE_COMMAND_BUFFER; | MessageID = 0x4dae5635 |
Submitted command buffer expects VkImage 0x1c000000001c[] (subresource: aspectMask 0x1 array
layer 0, mip level 0) to be in layout VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL--instead,
current layout is VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL.
I believe what I'm doing wrong is not properly specifying the stageMasks:
VkImageMemoryBarrier tex_barrier = {0};
/* layout transition - UNDEFINED -> TRANSFER_DST */
tex_barrier.srcAccessMask = 0;
tex_barrier.dstAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT;
tex_barrier.oldLayout = VK_IMAGE_LAYOUT_UNDEFINED;
tex_barrier.newLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
tex_barrier.srcQueueFamilyIndex = -1;
tex_barrier.dstQueueFamilyIndex = -1;
tex_barrier.subresourceRange = (VkImageSubresourceRange) { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };
vkCmdPipelineBarrier(transfer_cmdbuffs[0],
                     VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,
                     VK_PIPELINE_STAGE_TRANSFER_BIT,
                     0,
                     0, NULL, 0, NULL, 1, &tex_barrier);
/* queue ownership transfer */
tex_barrier.srcAccessMask = 0;
tex_barrier.dstAccessMask = 0;
tex_barrier.oldLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
tex_barrier.newLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
tex_barrier.srcQueueFamilyIndex = device.transfer_queue_family_index;
tex_barrier.dstQueueFamilyIndex = device.graphics_queue_family_index;
vkCmdPipelineBarrier(transfer_cmdbuffs[0],
                     VK_PIPELINE_STAGE_TRANSFER_BIT,
                     VK_PIPELINE_STAGE_TRANSFER_BIT,
                     0,
                     0, NULL, 0, NULL, 1, &tex_barrier);
tex_barrier.srcAccessMask = 0;
tex_barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
tex_barrier.oldLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
tex_barrier.newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
tex_barrier.srcQueueFamilyIndex = device.transfer_queue_family_index;
tex_barrier.dstQueueFamilyIndex = device.graphics_queue_family_index;
vkCmdPipelineBarrier(transfer_cmdbuffs[0],
                     VK_PIPELINE_STAGE_TRANSFER_BIT,
                     VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
                     0,
                     0, NULL, 0, NULL, 1, &tex_barrier);
Doing an ownership transfer is a two-way process: the source of the transfer has to release the resource, and the receiver has to acquire it. And by "the source" and "the receiver", I mean the queues themselves. You can't merely have a queue take ownership of a resource; the receiving queue must issue a command to claim ownership of it.
You need to submit a release barrier operation on the source queue. It must specify the source queue family as well as the destination queue family. Then you have to submit an acquire barrier operation on the receiving queue, using the same source and destination queue families. And you must ensure the ordering of these operations via a semaphore, so the vkQueueSubmit call for the acquire has to wait on the semaphore signaled by the submission of the release operation (a timeline semaphore would work too).
Now, since these are pipeline/memory barriers, you are free to also specify a layout transition. You don't need a third barrier to change the layout, but both the release and the acquire barrier have to specify the same oldLayout/newLayout pair.
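A sketch of what that release/acquire pair might look like for this texture, folding the layout transition into the ownership transfer. texture_image and graphics_cmdbuf are assumed handles; the queue family indices and transfer_cmdbuffs come from the question:
/* Release, recorded on the transfer queue's command buffer.
 * dstAccessMask is ignored for a release to another queue family. */
VkImageMemoryBarrier release = {0};
release.sType               = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
release.srcAccessMask       = VK_ACCESS_TRANSFER_WRITE_BIT;
release.dstAccessMask       = 0;
release.oldLayout           = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
release.newLayout           = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
release.srcQueueFamilyIndex = device.transfer_queue_family_index;
release.dstQueueFamilyIndex = device.graphics_queue_family_index;
release.image               = texture_image;   /* assumed image handle */
release.subresourceRange    = (VkImageSubresourceRange){ VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };

vkCmdPipelineBarrier(transfer_cmdbuffs[0],
                     VK_PIPELINE_STAGE_TRANSFER_BIT,
                     VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT,
                     0, 0, NULL, 0, NULL, 1, &release);

/* Acquire, recorded on a graphics-queue command buffer, with the same queue
 * families and the same layout pair. srcAccessMask is ignored for an acquire. */
VkImageMemoryBarrier acquire = release;
acquire.srcAccessMask = 0;
acquire.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;

vkCmdPipelineBarrier(graphics_cmdbuf,            /* assumed command buffer */
                     VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,
                     VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
                     0, 0, NULL, 0, NULL, 1, &acquire);

/* The vkQueueSubmit of the acquire must wait on a semaphore signaled by the
 * submit of the release, so the release is guaranteed to happen first. */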
I'm trying to implement a simple firewall which filters network connections made by Windows processes.
The firewall should either allow/block the connection.
In order to intercept connections by any process, I created a kernel driver which makes use of Windows Filtering Platform.
I registered a ClassifyFn (FWPS_CALLOUT_CLASSIFY_FN1) callback at the filtering layer FWPM_LAYER_ALE_AUTH_CONNECT_V4:
FWPM_CALLOUT m_callout = { 0 };
m_callout.applicableLayer = FWPM_LAYER_ALE_AUTH_CONNECT_V4;
...
status = FwpmCalloutAdd(filter_engine_handle, &m_callout, NULL, NULL);
The decision regarding whether to allow or block the connection should be made at user level. I communicate with user level using FltSendMessage, which cannot be used at IRQL DISPATCH_LEVEL.
Following the instructions of the Microsoft documentation on how to process callouts asynchronously, I call FwpsPendOperation0 before calling FltSendMessage. After the call to FltSendMessage, I resume packet processing by calling FwpsCompleteOperation0.
The FwpsPendOperation0 documentation states that calling this function should make it possible to perform calls at PASSIVE_LEVEL:
A callout can pend the current processing operation on a packet when the callout must perform processing on one of these layers that may take a long interval to complete or that should occur at IRQL = PASSIVE_LEVEL if the current IRQL > PASSIVE_LEVEL.
However, when the ClassifyFn callback is called at DISPATCH_LEVEL, I do sometimes still get a BSOD on FltSendMessage (INVALID_PROCESS_ATTACH_ATTEMPT).
I don't understand what's wrong.
Thank you in advance for any advice that could point me in the right direction.
Here is the relevant code of the ClassifyFn callback:
/*************************
   ClassifyFn Function
**************************/
void example_classify(
    const FWPS_INCOMING_VALUES * inFixedValues,
    const FWPS_INCOMING_METADATA_VALUES * inMetaValues,
    void * layerData,
    const void * classifyContext,
    const FWPS_FILTER * filter,
    UINT64 flowContext,
    FWPS_CLASSIFY_OUT * classifyOut)
{
    NTSTATUS status;
    BOOLEAN bIsReauthorize = FALSE;
    BOOLEAN SafeToOpen = TRUE;   // Value returned by user level which signals whether to allow/deny the packet

    classifyOut->actionType = FWP_ACTION_PERMIT;

    remote_address = inFixedValues->incomingValue[FWPS_FIELD_ALE_AUTH_CONNECT_V4_IP_REMOTE_ADDRESS].value.uint32;
    remote_port = inFixedValues->incomingValue[FWPS_FIELD_ALE_AUTH_CONNECT_V4_IP_REMOTE_PORT].value.uint16;

    bIsReauthorize = IsAleReauthorize(inFixedValues);
    if (!bIsReauthorize)
    {
        // First time receiving the packet (not a reauthorized packet)
        // Communicate with user level asynchronously
        HANDLE hCompletion;
        status = FwpsPendOperation0(inMetaValues->completionHandle, &hCompletion);

        //
        // FltSendMessage call here
        // ERROR HERE:
        // INVALID_PROCESS_ATTACH_ATTEMPT BSOD on the FltSendMessage call when at IRQL DISPATCH_LEVEL
        //

        FwpsCompleteOperation0(hCompletion, NULL);
    }

    if (!SafeToOpen) {
        // Packet blocked
        classifyOut->actionType = FWP_ACTION_BLOCK;
    }
    else {
        // Packet allowed
    }

    return;
}
You need to invoke FltSendMessage() on another thread running at PASSIVE_LEVEL. You can use IoQueueWorkItem() or implement your own mechanism to process it on a system worker thread created via PsCreateSystemThread().
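A minimal sketch of the IoQueueWorkItem() approach the answer mentions, not the asker's code: g_device_object and the PEND_CONTEXT type are assumptions made up for the example, and the FltSendMessage parameters are elided. Work items always run at PASSIVE_LEVEL, which is the point of deferring the call.
typedef struct _PEND_CONTEXT {
    HANDLE       completionContext;   /* returned by FwpsPendOperation0 */
    PIO_WORKITEM workItem;
    UINT32       remoteAddress;
    UINT16       remotePort;
} PEND_CONTEXT;

static VOID ClassifyWorker(PDEVICE_OBJECT DeviceObject, PVOID Context)
{
    PEND_CONTEXT *ctx = (PEND_CONTEXT *)Context;
    UNREFERENCED_PARAMETER(DeviceObject);

    /* Runs at PASSIVE_LEVEL, so FltSendMessage is legal here:
     * FltSendMessage(filter_handle, &client_port, ..., &SafeToOpen, ...); */

    FwpsCompleteOperation0(ctx->completionContext, NULL);   /* resume classification */
    IoFreeWorkItem(ctx->workItem);
    ExFreePoolWithTag(ctx, 'tcpF');
}

/* Called from the classifyFn instead of calling FltSendMessage directly. */
static NTSTATUS QueueUserModeCheck(const FWPS_INCOMING_METADATA_VALUES *inMetaValues,
                                   UINT32 remoteAddress, UINT16 remotePort)
{
    NTSTATUS status;
    PEND_CONTEXT *ctx = ExAllocatePoolWithTag(NonPagedPoolNx, sizeof(*ctx), 'tcpF');
    if (ctx == NULL)
        return STATUS_INSUFFICIENT_RESOURCES;

    ctx->workItem = IoAllocateWorkItem(g_device_object);   /* assumed device object */
    if (ctx->workItem == NULL) {
        ExFreePoolWithTag(ctx, 'tcpF');
        return STATUS_INSUFFICIENT_RESOURCES;
    }

    status = FwpsPendOperation0(inMetaValues->completionHandle, &ctx->completionContext);
    if (!NT_SUCCESS(status)) {
        IoFreeWorkItem(ctx->workItem);
        ExFreePoolWithTag(ctx, 'tcpF');
        return status;
    }

    ctx->remoteAddress = remoteAddress;
    ctx->remotePort    = remotePort;
    IoQueueWorkItem(ctx->workItem, ClassifyWorker, DelayedWorkQueue, ctx);
    return STATUS_PENDING;
}
How the user-mode verdict is then fed back into the permit/block decision is left out here; the point is only that the FltSendMessage round trip happens at PASSIVE_LEVEL.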
I am migrating some code of CYBOI from Xlib to XCB. CYBOI uses a couple of threads for different communication channels like serial_port, terminal, socket, x_window_system. However, it uses these threads only for signal/event/data detection; the actual receiving and sending is done in the main thread, in order to avoid any multi-threading conflicts in the shared address space.
For the x_window_system channel, I previously detected events in a thread:
int n = XEventsQueued(display, QueuedAfterReading);
Upon detection of an event, an "interrupt flag" was set.
Afterwards, the main thread was reading the actual event using:
XNextEvent(display, &event);
When no more events were available, the main thread stopped receiving events and the x_window_system channel thread started listening with XEventsQueued again.
Now I am migrating the code to the X C Binding (XCB). There is a blocking function xcb_wait_for_event, which is fine for reading an event. What I am missing is some function that "peeks ahead" to see whether events are pending, WITHOUT actually returning/removing an event from the queue.
I have been searching the web for a couple of hours now, but am not able to find such a function. xcb_poll_for_event does not help; blocking is fine for me, since my event detection runs in its own thread. xcb_request_check, the third input function, does not seem to be what I want either.
Could somebody help me out?
Thanks,
Christian
Are you looking for xcb_poll_for_queued_event(xcb_connection_t *c) which returns the next event without reading from the connection?
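A minimal sketch of how that could be used in the detection thread, assuming connection is an open xcb_connection_t* (note that, as the follow-up below points out, this does dequeue the event when one exists):
#include <xcb/xcb.h>
#include <stdlib.h>

/* Check for an already-queued event without touching the socket. */
xcb_generic_event_t *e = xcb_poll_for_queued_event(connection);
if (e != NULL) {
    /* An event was already in XCB's internal queue. */
    free(e);   /* events returned by XCB must be freed by the caller */
}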
First, thanks to Julien for his reply.
I have studied the XCB 1.9 sources and found out that the xcb_poll_for_queued_event function is not what I need. The functions xcb_poll_for_event and xcb_poll_for_queued_event both call poll_for_next_event. The functions poll_for_next_event and xcb_wait_for_event both call get_event. If get_event finds an event, it changes the internal linked list to point to the next event. However, I would prefer NOT to change the event queue AT ALL, independently of whether or not events are available.
I therefore propose to add a function like the following to XCB:
void* NULL_POINTER = (void*) 0;

int xcb_test_for_event(xcb_connection_t* c) {
    int r = 0;
    if (c != NULL_POINTER) {
        struct _xcb_in in = c->in;
        struct event_list* l = in.events;
        if (l != NULL_POINTER) {
            xcb_generic_event_t* e = l->event;
            if (e != NULL_POINTER) {
                r = 1;
            }
        }
    }
    return r;
}
This would allow me to write an endless loop like:
while (!xcb_test_for_event(connection)) {
    sleep(t);
}
This is comparable to the old Xlib function:
int n = XEventsQueued(d, QueuedAfterReading);
which just checks the number of events in the event queue. XEventsQueued always returns immediately, WITHOUT doing any input/output, if there are events already in the queue.
Thanks
Christian