I would like to model an event-driven finite state machine in C as proposed here:
http://en.wikipedia.org/wiki/Event-driven_finite_state_machine
But I would also like the 'external' events to be handled in various threads.
Can I find such code somewhere? Or any advice?
Message queues are a way to solve your problem.
If you want to feed your state machine with external events from other threads, they can write these events in a message queue that will be read by your state machine.
If you want other threads to trigger on actions from your state machine, it can write to various message queues, each associated with a thread that reads from its own queue.
One drawback is that events are delivered strictly in the order they were posted. If your state machine is not in the mood to handle the event it just read from the queue, you have to decide what to do with that event: discard it, put it back on the queue, remember it for future use...
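The original question asks for C, but the queue-fed state machine itself is language-neutral; below is a minimal sketch of the shape it takes, written here in Java to keep it short and self-contained. The Event/State names and the single IDLE/ACTIVE transition are purely illustrative; in C the same structure maps to a mutex- and condition-variable-protected queue (or a POSIX message queue) feeding one thread that owns all the state.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueFedFsm {
    // Hypothetical event and state sets, for illustration only.
    enum Event { CONNECT, DATA, TIMEOUT, DISCONNECT }
    enum State { IDLE, ACTIVE }

    private final BlockingQueue<Event> events = new LinkedBlockingQueue<>();
    private State state = State.IDLE;

    // Called from any producer thread to hand an external event to the machine.
    public void post(Event e) {
        events.offer(e);
    }

    // The state machine runs in a single thread and is the only code that touches 'state'.
    public void run() throws InterruptedException {
        while (true) {
            Event e = events.take();   // blocks until an event arrives
            switch (state) {
                case IDLE:
                    if (e == Event.CONNECT) state = State.ACTIVE;
                    break;             // other events are simply ignored in this state
                case ACTIVE:
                    if (e == Event.DISCONNECT) state = State.IDLE;
                    break;             // DATA/TIMEOUT handling would go here
            }
        }
    }
}

Only the run() loop ever touches the current state, so the transitions themselves need no locking; producer threads just call post() from wherever the external events originate.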
Maybe the Quantum Framework is what you are looking for? See http://state-machine.com/ for further information. There are ports for many microcontrollers as well as for Linux and Windows.
Clearly, states of a machine will be abstracted into tasks, but how are transitions controlled?
The functionality I'm looking for is that only one of the state tasks is active at a time, while the rest block. The task that is running must block itself, and unblock whichever task is next in the state transition model.
The method I thought of is creating an indexed array of binary semaphores, one per task, and simply giving the semaphore of whichever task is to be transitioned to.
Alternatively, I could handle all state machine functionality in one task, and regulate which functionality is executed by a switch statement?
Which is more efficient or better practice?
Not sure SO is the right place to ask this, as this is a really general question. Anyway: imho the simplest way to get started is to utilise the existing tools, the state machine design pattern in this case. C is not the perfect language to implement it, but it can be done; see for example: https://stackoverflow.com/a/44955234/4885321
or https://www.adamtornhill.com/Patterns%20in%20C%202,%20STATE.pdf
In the context of FreeRTOS the FSM is most likely going to end up as a single task.
Description:
Currently I am working on using Flink with an IoT setup. Essentially, devices are sending data such as (device_id, device_type, event_timestamp, etc.) and I don't have any control over when the messages get sent. I then key the stream by device_id and device_type to perform aggregations. I would like to use event time, given that it ensures the timers which are set fire deterministically after a failure. However, given that this isn't always a high-throughput stream, a window could be opened for a 10 minute aggregation period but not receive its next point until approximately 40 minutes later. Although the aggregation would eventually be completed, it would output my desired result extremely late.
So my workaround for this is to create an additional external source that does nothing other than pump fake messages. By having these fake messages pumped out in alignment with my 10 minute aggregation period, even if a device hadn't sent any data, the event-time windows would have something to force them closed. The critical part here is to make it possible for all parallel instances / operators to have access to this fake message, because I need to close all the windows with this single fake message. I was thinking that broadcast state might be the most appropriate way to accomplish this goal, given: "Broadcast state is replicated across all parallel instances of a function, and might typically be used where you have two streams, a regular data stream alongside a control stream that serves rules, patterns, or other configuration messages." (Quote Source)
Questions:
Is broadcast state the best method for ensuring all parallel instances (e.g. windows) receive my fake messages?
Once the operators have access to this fake message via the broadcast state can this fake message then be used to advance the event time watermark?
You can make this work with broadcast state, along the lines you propose, but I'm not convinced it's the best solution.
In an ideal world I'd suggest you arrange for the devices to send occasional keepalive messages, but assuming that's not possible, I think a custom Trigger would work well here. You can extend the EventTimeTrigger so that in addition to the event time timer it creates via
ctx.registerEventTimeTimer(window.maxTimestamp());
you also create a processing time timer, as a fallback, and you FIRE the window if the window still exists when that processing time timer fires.
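To make that concrete, here is a minimal sketch of such a trigger for non-merging event-time windows (tumbling windows, say). The class name IdleTolerantTrigger and the FALLBACK_MS delay are made up for illustration; a production version would also track the processing-time timer so it can be deleted in clear().

import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

// Hypothetical trigger: normal event-time firing plus a wall-clock fallback.
public class IdleTolerantTrigger extends Trigger<Object, TimeWindow> {

    private static final long FALLBACK_MS = 10 * 60 * 1000; // illustrative fallback delay

    @Override
    public TriggerResult onElement(Object element, long timestamp, TimeWindow window, TriggerContext ctx) {
        // Same as EventTimeTrigger: fire when the watermark passes the end of the window...
        ctx.registerEventTimeTimer(window.maxTimestamp());
        // ...plus a processing-time fallback in case the watermark stalls because the device goes quiet.
        ctx.registerProcessingTimeTimer(ctx.getCurrentProcessingTime() + FALLBACK_MS);
        return TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onEventTime(long time, TimeWindow window, TriggerContext ctx) {
        return time == window.maxTimestamp() ? TriggerResult.FIRE : TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onProcessingTime(long time, TimeWindow window, TriggerContext ctx) {
        // The returned FIRE has no effect if the window's contents were already purged;
        // otherwise the fallback forces the window to emit its result now.
        return TriggerResult.FIRE;
    }

    @Override
    public void clear(TimeWindow window, TriggerContext ctx) {
        ctx.deleteEventTimeTimer(window.maxTimestamp());
    }
}

You would then attach it on the windowed stream with something like .window(TumblingEventTimeWindows.of(Time.minutes(10))).trigger(new IdleTolerantTrigger()).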
I'm recommending this approach because it's simpler and more directly addresses the specific need. With the broadcast state approach you'll have to introduce a source for these messages, add a broadcast state descriptor and stream, add special fake watermarks for the non-broadcast stream (set to Watermark.MAX_WATERMARK), connect the broadcast and non-broadcast streams and implement a BroadcastProcessFunction (that probably doesn't really do anything), etc. It's a lot of moving parts spread across several different operators.
I wonder if there is a way in Flink to broadcast an event (or something like that) to all the task managers when a specific event is read from the source.
To be more specific: I am aggregating state data with a map state, and when certain events are read from the source I want all task managers to perform a specific action.
Is it possible?
Yes, this is possible. The broadcast state pattern is meant for exactly this sort of use case.
As David noted, using a broadcast stream is the right way to send data to all (parallel) sub-tasks. As for only broadcasting some data, take a look at side outputs as a way to do special processing for a subset of your data. So you could have a ProcessFunction that passes through all data unmodified, and if an incoming event is one that should be broadcast, you also emit it as a side output.
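For what it's worth, a rough sketch of how those two pieces could fit together is below. The "CTRL:" prefix test, the socket source, the state descriptor, and the (mostly empty) KeyedBroadcastProcessFunction are placeholders for whatever your real control events and broadcast-side logic look like.

import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.streaming.api.functions.co.KeyedBroadcastProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

public class BroadcastControlSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> source = env.socketTextStream("localhost", 9999); // any source will do

        // Records prefixed with "CTRL:" stand in for "events that should be broadcast".
        final OutputTag<String> controlTag = new OutputTag<String>("control") {};

        SingleOutputStreamOperator<String> main = source.process(new ProcessFunction<String, String>() {
            @Override
            public void processElement(String value, Context ctx, Collector<String> out) {
                out.collect(value);                // every record passes through unmodified
                if (value.startsWith("CTRL:")) {
                    ctx.output(controlTag, value); // control records additionally go to the side output
                }
            }
        });

        MapStateDescriptor<String, String> controlDesc =
                new MapStateDescriptor<>("control", String.class, String.class);

        // Broadcast only the control records and connect them to the keyed main stream;
        // processBroadcastElement() then runs on every parallel sub-task.
        main.keyBy(v -> v)
            .connect(main.getSideOutput(controlTag).broadcast(controlDesc))
            .process(new KeyedBroadcastProcessFunction<String, String, String, String>() {
                @Override
                public void processElement(String value, ReadOnlyContext ctx, Collector<String> out) {
                    out.collect(value);
                }
                @Override
                public void processBroadcastElement(String value, Context ctx, Collector<String> out) {
                    // every parallel instance sees each control record here and can react to it
                }
            })
            .print();

        env.execute("broadcast-control-sketch");
    }
}

The side output keeps the main stream untouched, while only the marked records pay the cost of being replicated to every sub-task.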
Sorry, I answered my own question - it actually IS just SEDA. I assumed when I saw 'BlockingQueue' that SEDA would block until the queue had been read ... which of course is nonsense. SEDA is exactly what I need. Question answered.
I've got a problem that's completely screwing me. I've been provided a custom Endpoint by the company we connect to, but the endpoint maintains a heartbeat to a feed, and when it sends messages above a certain size they take so long to process on the route that it blocks, the heartbeat gets lost, and the connection goes down.
Obviously this is analogous to processing events on a non-graphics thread to keep a smooth operation going, but I'm unsure how I'd achieve this in Camel. Essentially I want to queue the results and have them processed on a separate thread.
from( "custom:endpoint" )
.process( MyProcesor )
.to( "some-endpoint")
As suggested, camel-seda is a simple way to perform async/multi-threaded processing; beware that the blocking queues are in-memory only (lost if the VM is stopped, etc.). If you need guaranteed messaging support, use camel-jms.
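For example, the route from the question could be split in two with a seda: queue in between, so the thread servicing the custom endpoint returns as soon as the exchange has been queued and the heavy work runs on the SEDA consumer threads instead (the queue name and the concurrentConsumers value below are arbitrary):

// Inside a RouteBuilder.configure(); myProcessor and the endpoint URIs are taken from the question above.
from("custom:endpoint")
    .to("seda:incoming");                        // returns as soon as the message is placed on the in-memory queue

from("seda:incoming?concurrentConsumers=4")      // slow processing happens on these consumer threads
    .process(myProcessor)
    .to("some-endpoint");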
I'm guessing I'm going to need to do threading, but before I teach myself some bad practices I wanted to make sure I'm going about this the correct way.
Basically I have a "chat" application that can be told to listen or ping the recipient's IP address:port (in my current case just 127.0.0.1:1300). When I open up my application twice (the first one to listen, the second to send a ping), I pick one and tell it to listen (which is a while loop that just constantly listens until it gets a ping message) and the other one will ping it. It works just peachy!
The problem is when I click the "Listen for ping" button it goes into a glued "down" mode and freezes up "visually"; however, it prints the UDP packet message to the console, so I know it's not actually frozen. So my question is: how do I make it so I can click the "Listen" button and have it "listen" while at the same time having a "working" cancel button, so the user can cancel the process if it's taking too long?
This most likely happens because you use synchronous (blocking) socket IO. Your server application most likely blocks in recv()/read(), which halts your thread's execution until some data arrives; it then processes the data and returns to the blocked state. Hence, your button is rendered by GTK as pushed.
There are, basically, two generic approaches to this problem. The first one is threading. But I would recommend against it in simpler applications; this approach is generally error-prone and pretty complicated to implement properly.
The second approach is asynchronous IO. First, you may use select()/poll() functions to wait for one of multiple FDs to be signalled (on such events as 'data received', 'data sent', 'connection accepted'). But in a GUI application where the main loop is not immediately available (I'm not sure about GTK, but this is the case in many GUI toolkits), this is usually impossible. In such cases, you may use generic asynchronous IO libraries (like boost asio). With GLIB, IIRC, you can create channels for socket interaction (g_io_channel_unix_new()) and then assign callbacks to them (g_io_add_watch()) which will be called when something interesting happens.
The idea behind asynchronous IO is pretty simple: you ask the OS to do something (send data, wait for events) and then you do other important things (GUI interaction, etc.) until something you requested is done (you have to be able to receive notifications of such events).
So, here's what you may want to study next:
select()/poll() (the latter is generally easier to use)
boost asio library
GLIB channels and asynchronous IO