Event queue cleanup (C)

In my Tcl extension, a secondary thread is filling the Tcl event queue with events; the events contain pointers to structures with a dynamic life time.
What is the right strategy for ensuring that no events with dangling pointers to de-allocated structures remain in the event queue? I can prevent the secondary thread from creating new events; currently, after ensuring that no new events can be created and before de-allocating the structure, I call Tcl_DoOneEvent(TCL_DONTWAIT) in a loop until it returns 0 (i.e., the event queue is empty).
Is that the right way to do it?
On a related note, I am unsure of the purpose of Tcl_ThreadAlert(): if this is needed after every call to Tcl_ThreadQueueEvent(), why isn't the alert included in Tcl_ThreadQueueEvent()?
Finally, my code does not call Tcl_CreateEventSource(), since it doesn't seem to need a setup or a check procedure when the events come from a second thread. Is that cause for concern?

On the first point, that seems OK to me. It is very much like running the update command at the Tcl level.
I'm not sure about the second point, as it isn't part of the API that I have explored a lot. It might be that way to allow multiple events to be scheduled per notification, or because there are other uses for the call, but I really don't know.
On the third point, it sounds fine. I think you never need special event sources just to do inter-thread messaging.
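
Here is a minimal sketch of the two pieces discussed above: the secondary thread queues an event that carries a pointer and alerts the target thread, and the owning thread drains its queue before freeing the pointed-to structure. The struct and function names (MyEvent, QueueToThread, DrainEventQueue, payload) are illustrative, not from the asker's code.

#include <tcl.h>

typedef struct {
    Tcl_Event header;   /* must be first; Tcl manages this part */
    void     *payload;  /* pointer to a structure with a dynamic lifetime */
} MyEvent;

/* Runs in the target (queue-owning) thread when the event is serviced. */
static int MyEventProc(Tcl_Event *evPtr, int flags)
{
    MyEvent *ev = (MyEvent *) evPtr;
    /* ... use ev->payload ... */
    return 1;           /* 1 = handled, remove the event from the queue */
}

/* Called from the secondary thread. */
static void QueueToThread(Tcl_ThreadId target, void *payload)
{
    MyEvent *ev = (MyEvent *) Tcl_Alloc(sizeof(MyEvent));
    ev->header.proc = MyEventProc;
    ev->payload     = payload;
    Tcl_ThreadQueueEvent(target, (Tcl_Event *) ev, TCL_QUEUE_TAIL);
    Tcl_ThreadAlert(target);   /* wake the target's notifier if it is blocked waiting */
}

/* Called in the target thread after the secondary thread has been told to
 * stop producing events and before the payload structures are freed. */
static void DrainEventQueue(void)
{
    while (Tcl_DoOneEvent(TCL_DONTWAIT) != 0) {
        /* keep servicing events until the queue is empty */
    }
}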

How to make a GTK application that receives data from a TCP socket

I am building a GUI (using C) which receives data to display from another application sending the data over a TCP socket. How do I do this using GTK (just a general overview of the approach I should take)? I have done a lot of searching and came across material about multithreading, GIOChannel, etc., and now I'm more confused than ever. There don't seem to be any conclusive articles or guides about how to actually achieve this.
There is basically one important rule:
You must call all gtk_* functions from the main thread.
If you update any widgets from another thread, you might get inconsistent results.
Of course, you don't want to wait for TCP data in that thread.
Therefore I would suggest you create a separate thread for the communication. In that thread you can wait for data, and if anything arrives that should affect what you show in your GUI, you tell the main thread to do the required work.
A simple way to do this is to use g_idle_add() to enqueue a callback function. That callback is then executed in the context of the main thread and can update your widgets.
The information about what needs to be updated can be stored in newly allocated memory that is passed to the callback, which frees it after use.
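
A minimal sketch of this pattern, assuming GTK 3: the worker thread here produces placeholder strings instead of reading a real TCP socket (a blocking recv() loop would go in its place), and the Update struct and the apply_update()/reader_thread() names are illustrative.

#include <gtk/gtk.h>

typedef struct {
    GtkWidget *label;  /* widget to update (touched only in the main thread) */
    char      *text;   /* freshly allocated message, freed in the callback */
} Update;

/* Runs in the main thread; it is safe to touch widgets here. */
static gboolean apply_update(gpointer data)
{
    Update *u = data;
    gtk_label_set_text(GTK_LABEL(u->label), u->text);
    g_free(u->text);
    g_free(u);
    return G_SOURCE_REMOVE;        /* one-shot idle callback */
}

/* Worker thread: pretend to receive TCP data, then hand it off. */
static gpointer reader_thread(gpointer user_data)
{
    GtkWidget *label = user_data;
    for (int i = 0; i < 10; i++) {
        g_usleep(G_USEC_PER_SEC);  /* stand-in for a blocking recv() */
        Update *u = g_new(Update, 1);
        u->label = label;
        u->text  = g_strdup_printf("message %d", i);
        g_idle_add(apply_update, u);   /* callback will run in the main thread */
    }
    return NULL;
}

int main(int argc, char **argv)
{
    gtk_init(&argc, &argv);
    GtkWidget *win   = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    GtkWidget *label = gtk_label_new("waiting...");
    gtk_container_add(GTK_CONTAINER(win), label);
    g_signal_connect(win, "destroy", G_CALLBACK(gtk_main_quit), NULL);
    gtk_widget_show_all(win);
    g_thread_new("reader", reader_thread, label);
    gtk_main();
    return 0;
}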

Extended windows

I have an always-on application listening to a Kafka stream and processing events. Events are part of a session, and I need to do calculations based on a session's data. I am running into a problem trying to run my calculations correctly because of the length of my sessions: 90% of my sessions are done after 5 minutes and 99% are done after 1 hour, but sessions may last more than a day, and because this is a real-time system there is no determined end. Sessions are unique and should never collide.
I am looking for a way to process a window multiple times, either with an initial wait period and processing of any later events after that, or a pure process-per-event structure. I will need to keep all previous events around (ListState), as well as previously processed values (ValueState).
I previously thought allowedLateness would allow me to do this, but it seems the lateness is only considered relative to when the event should have been processed; it does not extend an actual window. GlobalWindows may also work, but I am unsure if there is a way to process a window multiple times. I believe I can use an evictor with GlobalWindows to purge the windows after a period of inactivity (although admittedly I have not researched this yet, because I was unsure how to trigger a GlobalWindow multiple times).
Any suggestions on how to achieve what I am looking to do would be greatly appreciated; I would also be happy to clarify any points as needed.
If SessionWindows won't do the job, then you can use GlobalWindows with a custom Trigger and Evictor. The Trigger interface has onElement and timer-based callbacks that can fire whenever and as often as you like. If you go down this route, then yes, you'll also need to implement an Evictor to dispose of elements when they are no longer needed.
The documentation and the source code are helpful when trying to understand how this all fits together.

Libev: how to schedule a callback to be called as soon as possible

I'm learning libev and I've stumbled upon this question. Assume that I want to process something as soon as possible, but not now (i.e. not in the currently executing function). For example, I want to divide some big synchronous job into multiple pieces that will be queued so that other callbacks can fire in between. In other words, I want to schedule a callback with a timeout of 0.
So the first idea is to use ev_timer with timeout 0. The first question is: is that efficient? Is libev capable of transforming 0 timeout timer into an efficient "call as soon as possible" job? I assume it is not.
I've been digging through libev's docs and I found other options as well:
it can artificially delay invoking the callback by using a prepare or idle watcher
So the idle watcher is probably not going to be good here because
Idle watchers trigger events when no other events of the same or higher priority are pending
Which probably is not what I want. Prepare watchers might work here. But why not a check watcher? Is there any crucial difference in the context I'm talking about?
The other option these docs suggest is:
or more sneakily, by reusing an existing (stopped) watcher and pushing it into the pending queue:
ev_set_cb (watcher, callback);
ev_feed_event (EV_A_ watcher, 0);
But that would require me to always have a stopped watcher around. Also, since I don't know a priori how many calls I want to schedule at the same time, I would have to keep multiple watchers, track them in some kind of list, and grow that list when needed.
So am I on the right track? Are these all possibilities or am I missing something simple?
You may want to check out the ev_prepare watcher. That one is scheduled for execution as the last handler in a given event loop iteration. It can be used for 'execute this task ASAP' implementations. You can create a dedicated watcher for each task you want to execute, or you can implement a queue with a single prepare watcher that is started once the queue contains at least one task.
Alternatively, you can implement a similar mechanism using an ev_idle watcher, but in that case it will be executed only if the application isn't processing any 'higher priority' watcher handlers.
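
A minimal sketch of the queue-plus-single-prepare-watcher idea, under the assumption that a simple singly linked list is good enough as the queue; the task struct and the schedule_asap()/asap_cb() names are made up for the example.

#include <ev.h>
#include <stdlib.h>

typedef struct task {
    void (*run)(void *arg);
    void *arg;
    struct task *next;
} task;

static task *task_head, *task_tail;
static ev_prepare asap_watcher;   /* init once with ev_prepare_init(&asap_watcher, asap_cb) */

/* Runs late in each loop iteration while the watcher is started. */
static void asap_cb(EV_P_ ev_prepare *w, int revents)
{
    while (task_head) {
        task *t = task_head;
        task_head = t->next;
        if (!task_head)
            task_tail = NULL;
        t->run(t->arg);
        free(t);
    }
    ev_prepare_stop(EV_A_ w);     /* queue drained, go dormant */
}

/* Call from any libev callback to run 'run(arg)' as soon as possible. */
static void schedule_asap(EV_P_ void (*run)(void *), void *arg)
{
    task *t = malloc(sizeof *t);
    t->run = run;
    t->arg = arg;
    t->next = NULL;
    if (task_tail)
        task_tail->next = t;
    else
        task_head = t;
    task_tail = t;
    ev_prepare_start(EV_A_ &asap_watcher);  /* no-op if already active */
}

Note that this version drains tasks enqueued from inside another task in the same invocation; if you want the remaining pieces to wait for the next loop iteration instead, stop the watcher first and process only the tasks that were queued when the callback started.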

Observable.Timer(DateTimeOffset) Process Exit

I have used Observable.Timer(TimeSpan) multiple times, but in a couple of places I have used Observable.Timer(DateTimeOffset) to trigger the event at that time, and I believe it is stopping my process from exiting.
DateTimeOffset offset = new DateTimeOffset(minStart);
Observable.Timer(offset)
    .Subscribe(_ =>
    {
        UpdateActive();
    });
This piece of code is in my ViewModel, and on Window Closed the process is still running in the background. Normally, wherever I use Observable.Timer(TimeSpan), the subscriptions get disposed automatically, so why doesn't this one?
Am I doing something wrong, or is it a bug? Or am I missing something?
Given that you're using one of the Subscribe() extension methods, assuming you're using a recent version of RX, the observable should be releasing any subscribers when it completes. Is your observable completing in one case but not the other?
If your observable has not completed (i.e. if the time represented by offset hasn't happened yet) by the time you close your window, nothing is going to automatically unsubscribe for you. Here's what the introtorx site has to say on this matter (emphasis mine):
Considering this, I thought it was prudent to note that subscriptions will not be automatically disposed of. You can safely assume that the instance of IDisposable that is returned to you does not have a finalizer and will not be collected when it goes out of scope. If you call a Subscribe method and ignore the return value, you have lost your only handle to unsubscribe. The subscription will still exist, and you have effectively lost access to this resource, which could result in leaking memory and running unwanted processes.
Using the overload of Observable.Timer that accepts DateTimeOffset will not on its own cause a process to be held open; something else is responsible for this.
However, the consequence of not disposing of subscriptions to Observable.Timer is that you will leak the timer resource.
You should retain the IDisposable subscription handles for timer- and event-based observables and dispose of them appropriately; most WPF frameworks provide a suitable event for this.
In general, I track and dispose of all my window-scoped subscriptions, just to be on the safe side. See Convert Polling Web Service to RX for an example.

Multiple Async Request Synchronization

I'm developing a Silverlight app that makes multiple async requests to a number of web services. I want a modal "loading" dialog to stay active until all the requests have completed. I'm managing the situation with a counter variable that gets incremented on each async request start event and decremented on each async complete event (which doesn't seem thread-safe to me). When the counter is zero, a property bound to the UI turns the dialog off. Is there a better/more general way of dealing with this problem than my counter solution?
Your counter solution is a valid one. Whatever you do, you will have to keep track of all your requests and understand when they have all arrived (when the count hits zero).
You can do different things to clean up your code, like putting all of this implementation in some MultiAsyncWaiter class which raises an event when complete. But the fundamental implementation will remain the same: keep track of the requests until they all return.
You are right about the thread-unsafety of the int. If you use interlocked operations (see comments) or lock on the variable, you can keep your implementation thread safe.
Why the volatile keyword won't work: with multiple threads changing the variable, an interlocked operation is required for the decrement, which is technically a read + write operation. This is because another thread can change the value between the read and the write.
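
To illustrate that last point, here is the counter pattern written in C with C11 atomics (the question is Silverlight/C#, where Interlocked.Increment and Interlocked.Decrement play the role of the atomic calls below); hide_loading_dialog() is a hypothetical UI call standing in for the bound property update.

#include <stdatomic.h>

static atomic_int pending_requests;

void hide_loading_dialog(void);  /* hypothetical: flips the UI-bound flag */

void request_started(void)
{
    atomic_fetch_add(&pending_requests, 1);
}

void request_completed(void)
{
    /* A plain pending_requests-- is a separate read and write, so two
     * completions racing can both read 2 and both write 1. The atomic
     * decrement below is a single indivisible read-modify-write and returns
     * the previous value, so exactly one caller sees the count drop to zero. */
    if (atomic_fetch_sub(&pending_requests, 1) == 1) {
        hide_loading_dialog();
    }
}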
