Multiple Async Request Synchronization - Silverlight

I'm developing a Silverlight app that makes multiple async requests to a number of web services. I want a modal "loading" dialog to stay active until all the requests have completed. I'm managing the situation with a counter variable that gets incremented on each async request start event and decremented on each async complete event (which doesn't seem thread safe to me). When the counter reaches zero, a property bound to the UI turns the dialog off. Is there a better/more general way of dealing with this problem than my counter solution?

Your counter solution is a valid one. Whatever you do, you will have to keep track of all your requests and know when they have all returned (when the count hits zero).
You can do different things to clean up your code, like putting all of this implementation into a MultiAsyncWaiter class that raises an event when everything has completed. But the fundamental implementation will remain the same: keep track of the requests until they all return.
You are right that the plain int counter is not thread safe. If you use Interlocked operations (see comments) or take a lock around the increment and decrement, you can keep your implementation thread safe.
Why the volatile keyword won't work: with multiple threads changing the variable, an interlocked operation is required for the decrement, which is technically a read plus a write, because another thread can change the value between the read and the write. volatile only guarantees visibility of the latest value, not atomicity of a read-modify-write.
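For illustration only, here is a minimal sketch of the counting pattern in C using C11 atomics (the question is about Silverlight/C#, where the equivalent calls would be Interlocked.Increment and Interlocked.Decrement; the function names below are just placeholders):

#include <stdatomic.h>
#include <stdbool.h>

static atomic_int pending_requests = 0;

/* Call when an async request is started. */
void request_started(void)
{
    atomic_fetch_add(&pending_requests, 1);
}

/* Call from each completion callback; returns true when the last outstanding
 * request has finished, i.e. when the loading dialog can be hidden. */
bool request_completed(void)
{
    /* fetch_sub returns the previous value, so 1 means this was the last one */
    return atomic_fetch_sub(&pending_requests, 1) == 1;
}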

Related

Event queue cleanup

In my Tcl extension, a secondary thread is filling the Tcl event queue with events; the events contain pointers to structures with a dynamic lifetime.
What is the right strategy for ensuring that no events with dangling pointers to de-allocated structures remain in the event queue? I can prevent the secondary thread from creating new events; currently, after ensuring that no new events can be created and before de-allocating the structures, I call Tcl_DoOneEvent(TCL_DONT_WAIT) in a loop until it returns 0 (i.e., the event queue is empty).
Is that the right way to do it?
On a related note, I am unsure of the purpose of Tcl_ThreadAlert(): if this is needed after every call to Tcl_ThreadQueueEvent(), why isn't the alert included in Tcl_ThreadQueueEvent()?
Finally, my code does not call Tcl_CreateEventSource(), since it doesn't seem to need a setup or a check procedure, given that a second thread is involved. Is that cause for concern?
On the first point, that seems OK to me. It is very much like running the update command at the Tcl level.
I'm not sure about the second point, as it isn't part of the API that I have explored a lot. It might be that way to allow multiple events to be scheduled per notification, or because there are other uses for the call, but I really don't know.
On the third point, it sounds fine. I think you never need special event sources just to do inter-thread messaging.
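To make that concrete, here is a minimal C sketch of the pattern discussed above; the MyEvent struct, the payload field, and the helper names are illustrative, not from the original post:

#include <tcl.h>

typedef struct MyEvent {
    Tcl_Event header;               /* must be the first field */
    void *payload;                  /* pointer to a structure with a dynamic lifetime */
} MyEvent;

static int MyEventProc(Tcl_Event *evPtr, int flags)
{
    MyEvent *ev = (MyEvent *) evPtr;
    /* ... use ev->payload ... */
    (void) flags;
    return 1;                       /* 1 = handled; Tcl removes the event from the queue */
}

/* Secondary thread: queue an event, then wake the target thread's notifier. */
static void queue_event(Tcl_ThreadId mainThread, void *payload)
{
    MyEvent *ev = (MyEvent *) Tcl_Alloc(sizeof(MyEvent));
    ev->header.proc = MyEventProc;
    ev->payload = payload;
    Tcl_ThreadQueueEvent(mainThread, (Tcl_Event *) ev, TCL_QUEUE_TAIL);
    Tcl_ThreadAlert(mainThread);    /* without this, the event may not be noticed promptly */
}

/* Main thread: once the producer is stopped, drain the queue before de-allocating payloads. */
static void drain_queue(void)
{
    while (Tcl_DoOneEvent(TCL_ALL_EVENTS | TCL_DONT_WAIT) != 0) {
        /* keep processing until the queue is empty */
    }
}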

Libev: how to schedule a callback to be called as soon as possible

I'm learning libev and I've stumbled upon this question. Assume that I want to process something as soon as possible but not now (i.e. not in the current executing function). For example I want to divide some big synchronous job into multiple pieces that will be queued so that other callbacks can fire in between. In other words I want to schedule a callback with timeout 0.
So the first idea is to use an ev_timer with a timeout of 0. The first question is: is that efficient? Is libev capable of transforming a 0-timeout timer into an efficient "call as soon as possible" job? I assume it is not.
I've been digging through libev's docs and I found other options as well:
it can artificially delay invoking the callback by using a prepare or idle watcher
So the idle watcher is probably not going to be good here because
Idle watchers trigger events when no other events of the same or higher priority are pending
Which is probably not what I want. Prepare watchers might work here. But why not a check watcher? Is there any crucial difference in the context I'm talking about?
The other option these docs suggest is:
or more sneakily, by reusing an existing (stopped) watcher and pushing it into the pending queue:
ev_set_cb (watcher, callback);
ev_feed_event (EV_A_ watcher, 0);
But that would require always having a stopped watcher available. Also, since I don't know a priori how many calls I will want to schedule at the same time, I would have to keep multiple watchers, track them in some kind of list, and grow that list when needed.
So am I on the right track? Are these all possibilities or am I missing something simple?
You may want to check out the ev_prepare watcher. It is scheduled for execution as the last handler in a given event loop iteration, and it can be used for "execute this task ASAP" implementations. You can create a dedicated watcher for each task you want to execute, or you can implement a queue with a single prepare watcher that is started once the queue contains at least one task (sketched below).
Alternatively, you can implement a similar mechanism using an ev_idle watcher, but in that case the handler will be executed only when the application is not processing any higher-priority watcher handlers.
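Here is a minimal sketch of that queue-plus-single-prepare-watcher approach; the schedule()/drain_cb() names and the task list are illustrative, not part of libev, and error handling is omitted:

#include <ev.h>
#include <stdlib.h>

/* A tiny "run this soon" queue drained by one ev_prepare watcher. */
typedef void (*task_fn)(void *arg);

typedef struct task {
    task_fn fn;
    void *arg;
    struct task *next;
} task;

static task *queue_head = NULL, *queue_tail = NULL;
static ev_prepare drain_watcher;

static void drain_cb(struct ev_loop *loop, ev_prepare *w, int revents)
{
    task *t = queue_head;
    queue_head = queue_tail = NULL;   /* detach the list so tasks can re-schedule safely */
    (void) revents;
    while (t) {
        task *next = t->next;
        t->fn(t->arg);                /* run the queued callback */
        free(t);
        t = next;
    }
    if (!queue_head)                  /* stop only if nothing new was scheduled while draining */
        ev_prepare_stop(loop, w);
}

/* Queue a callback to run later in the loop, after currently pending events. */
static void schedule(struct ev_loop *loop, task_fn fn, void *arg)
{
    task *t = malloc(sizeof *t);
    t->fn = fn;
    t->arg = arg;
    t->next = NULL;
    if (queue_tail) queue_tail->next = t; else queue_head = t;
    queue_tail = t;
    if (!ev_is_active(&drain_watcher))
        ev_prepare_start(loop, &drain_watcher);
}

int main(void)
{
    struct ev_loop *loop = EV_DEFAULT;
    ev_prepare_init(&drain_watcher, drain_cb);
    /* schedule(loop, some_callback, some_arg); */
    ev_run(loop, 0);
    return 0;
}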

Observable.Timer(DateTimeOffset) Process Exit

I have used Observable.Timer(TimeSpan) multiple times, but in a couple of places I have used Observable.Timer(DateTimeOffset) to trigger the event at a specific time, and I believe it is stopping my process from exiting.
DateTimeOffset offset = new DateTimeOffset(minStart);
Observable.Timer(offset)
    .Subscribe(_ =>
    {
        UpdateActive();
    });
This piece of code is in my ViewModel, and after the window is closed the process is still running in the background. Normally, wherever I use Observable.Timer(TimeSpan), the subscriptions seem to get cleaned up automatically; why doesn't this one?
Am I doing something wrong or is it a bug? Or am I missing something?
Given that you're using one of the Subscribe() extension methods, and assuming you're using a recent version of Rx, the observable should release any subscribers when it completes. Is your observable completing in one case but not the other?
If your observable has not completed by the time you close your window (i.e. if the time represented by offset hasn't arrived yet), nothing is going to automatically unsubscribe for you. Here's what the IntroToRx site has to say on this matter (emphasis mine):
Considering this, I thought it was prudent to note that subscriptions will not be automatically disposed of. You can safely assume that the instance of IDisposable that is returned to you does not have a finalizer and will not be collected when it goes out of scope. If you call a Subscribe method and ignore the return value, you have lost your only handle to unsubscribe. The subscription will still exist, and you have effectively lost access to this resource, which could result in leaking memory and running unwanted processes.
Using the overload of Observable.Timer that accepts a DateTimeOffset will not, on its own, cause a process to be held open; something else is responsible for that.
However, the consequence of not disposing subscriptions to Observable.Timer is that you will leak the timer resource.
You should retain the IDisposable subscription handles for timer and event based observables and dispose them appropriately; most WPF frameworks provide a suitable event for this.
In general, I track and dispose of all my window-scoped subscriptions, just to be on the safe side. See Convert Polling Web Service to RX for an example.

Differences between events and semaphores

I already searched for this subject but couldn't understand it very well. What are the main differences between events and semaphores?
An event generally has only two states, unsignaled or signaled. A semaphore has a count, and is considered unsignaled if the count is zero, and signaled if the count is not zero. In the case of Windows, ReleaseSemaphore() increments a semaphore count, and WaitForSingleObject(...) with a handle of a semaphore will wait (unless the timeout parameter is set to zero) for a non-zero count, then decrement the count before returning.
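To make the Windows behavior above concrete, here is a small C illustration contrasting an auto-reset event with a counted semaphore (a sketch for illustration, not production code):

#include <windows.h>

int main(void)
{
    /* Auto-reset event: only signaled/unsignaled, no count. */
    HANDLE ev  = CreateEvent(NULL, FALSE, FALSE, NULL);
    /* Semaphore: initial count 0, maximum count 10. */
    HANDLE sem = CreateSemaphore(NULL, 0, 10, NULL);

    SetEvent(ev);                        /* signal */
    SetEvent(ev);                        /* signaling again does not accumulate */
    WaitForSingleObject(ev, INFINITE);   /* succeeds and resets the event */
    /* A second wait here would block: the extra SetEvent was not counted. */

    ReleaseSemaphore(sem, 2, NULL);      /* add 2 to the count */
    WaitForSingleObject(sem, INFINITE);  /* count 2 -> 1 */
    WaitForSingleObject(sem, INFINITE);  /* count 1 -> 0; a third wait would block */

    CloseHandle(ev);
    CloseHandle(sem);
    return 0;
}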
Do you need to know this in a specific context? That would help make it more understandable.
Typically a semaphore is a token that must be obtained before executing an action, e.g. a lock on an execution unit that is protected from concurrent access.
Events, on the other hand, are notifications in a message/subscriber pattern.
So they are somewhat related, but not really comparable.
A typical confusing/complex scenario you may face is that one event triggers two different subscribers, which then both want simultaneous access to some resource. They should each acquire a semaphore token and release it after use to let the other subscriber have a go.

Many-to-one gatekeeper task synchronization

I'm working on a design that uses a gatekeeper task to access a shared resource. The basic design I have right now is a single queue that the gatekeeper task is receiving from and multiple tasks putting requests into it.
This is a memory limited system, and I'm using FreeRTOS (Cortex M3 port).
The problem is as follows: handling these requests asynchronously is fairly simple. The requesting task queues its request and goes about its business, polling, processing, or waiting for other events. To handle these requests synchronously, I need a mechanism the requesting task can block on, such that once the request has been handled, the gatekeeper can wake the task that made the request.
The easiest design I can think of would be to include a semaphore in each request, but given the memory limitations and the rather large size of a semaphore in FreeRTOS, this isn't practical.
What I've come up with is using the task suspend and resume feature to manually block the task, passing the gatekeeper a handle with which it can resume the task when the request is completed. There are some issues with suspend/resume, though, and I'd really like to avoid them. A single resume call will wake a task no matter how many times it has been suspended by other calls, and this can create undesired behavior.
Some simple pseudo-C to demonstrate the suspend/resume method.
void gatekeeper_blocking_request(void)
{
    put_request_in_queue(request);
    task_suspend(this_task);
}

void gatekeeper_request_complete_callback(request)
{
    task_resume(request->task);
}
A workaround that I plan to use in the meantime is to use the asynchronous calls and implement the blocking entirely in each requesting task. The gatekeeper will execute a supplied callback when the operation completes, and that can then post to the task's main queue or a specific semaphore, or whatever is needed. Having the blocking calls for requests is essentially a convenience feature so each requesting task doesn't need to implement this.
Pseudo-C to demonstrate the task-specific blocking, but this needs to be implemented in each task.
void requesting_task(void)
{
    while(1)
    {
        gatekeeper_async_request(callback);
        pend_on_semaphore(sem);
    }
}

void callback(request)
{
    post_to_semaphore(sem);
}
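For concreteness, here is a hedged sketch of that per-task blocking pattern with real FreeRTOS calls; the request_t struct and the gatekeeper_async_request() signature are assumptions, not the project's actual API:

#include "FreeRTOS.h"
#include "semphr.h"

/* Hypothetical request structure; the real one carries whatever the gatekeeper needs. */
typedef struct {
    SemaphoreHandle_t done;     /* per-task binary semaphore to pend on */
    /* ... request parameters ... */
} request_t;

/* Assumed asynchronous gatekeeper API: queues the request; the gatekeeper later
 * invokes the completion callback for it. */
extern void gatekeeper_async_request(request_t *req);

/* Requesting task: queue the request, then pend until the gatekeeper signals completion. */
static void requesting_task(void *params)
{
    request_t req;
    req.done = xSemaphoreCreateBinary();           /* created once per task and reused */
    (void) params;

    for (;;)
    {
        gatekeeper_async_request(&req);
        xSemaphoreTake(req.done, portMAX_DELAY);   /* block until the callback gives it */
    }
}

/* Completion callback, run in the gatekeeper's context when the request has been serviced. */
static void request_complete_callback(request_t *req)
{
    xSemaphoreGive(req->done);
}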
Maybe the best solution is just to not implement blocking in the gatekeeper and API, and force each task to handle it. That will increase the complexity of each task's flow, though, and I was hoping I could avoid it. For the most part, all calls will want to block until the operation is finished.
Is there some construct that I'm missing, or even just a better term for this type of problem that I can google? I haven't come across anything like this in my searches.
Additional remarks - Two reasons for the gatekeeper task:
Large stack space required. Rather than adding this requirement to each task, the gatekeeper can have a single stack with all the memory required.
The resource is not always accessible in the CPU. It is synchronizing not only tasks in the CPU, but tasks outside the CPU as well.
Use a mutex and make the gatekeeper a subroutine instead of a task.
It's been six years since I posted this question, and at the time I struggled to get the synchronization working the way I needed; there were some terrible abuses of OS constructs involved. I've considered updating this code, even though it works, to be less abusive, and so I've looked at more elegant ways to handle this. FreeRTOS has also added a number of features in the last six years, one of which I believe provides a lightweight way to accomplish the same thing.
Direct-to-Task Notifications
Revisiting my original proposed method:
void gatekeeper_blocking_request(void)
{
    put_request_in_queue(request);
    task_suspend(this_task);
}

void gatekeeper_request_complete_callback(request)
{
    task_resume(request->task);
}
The reason this method was avoided is that the FreeRTOS task suspend/resume calls do not keep count, so several suspend calls can be negated by a single resume call. At the time, the suspend/resume feature was being used elsewhere in the application, so this was a real possibility.
Beginning with FreeRTOS 8.2.0, direct-to-task notifications essentially provide a lightweight binary semaphore built into each task. When a notification is sent to a task, a notification value may also be set. The notification lies dormant until the notified task calls some variant of xTaskNotifyWait(), or wakes the task immediately if it has already made such a call.
The above code can be slightly reworked as follows:
void gatekeeper_blocking_request(void)
{
    put_request_in_queue(request);
    xTaskNotifyWait( ... );
}

void gatekeeper_request_complete_callback(request)
{
    xTaskNotify( ... );
}
This is still not an ideal method: if task notifications are also used elsewhere, you can run into the same problem as with suspend/resume, where the task is woken by a different source than the one it is expecting. But given that, for me, this was a new feature, it may work out in the revised code.
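For reference, here is a hedged, fleshed-out version of the reworked code using the ulTaskNotifyTake()/xTaskNotifyGive() convenience variants, which behave like a binary semaphore; the request_t struct and the queueing call are assumptions carried over from the pseudo-code:

#include "FreeRTOS.h"
#include "task.h"

/* Hypothetical request structure carrying the handle of the task to wake. */
typedef struct {
    TaskHandle_t task;
    /* ... request parameters ... */
} request_t;

/* Assumed queueing call from the pseudo-code above. */
extern void put_request_in_queue(request_t *request);

/* Requesting task: queue the request, then wait for the gatekeeper's notification. */
static void gatekeeper_blocking_request(request_t *request)
{
    request->task = xTaskGetCurrentTaskHandle();
    put_request_in_queue(request);
    ulTaskNotifyTake(pdTRUE, portMAX_DELAY);   /* acts like taking a binary semaphore */
}

/* Gatekeeper: wake the requesting task once its request has been serviced. */
static void gatekeeper_request_complete_callback(request_t *request)
{
    xTaskNotifyGive(request->task);            /* acts like giving a binary semaphore */
}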
