When I run this:
using (SqlConnection connection = new SqlConnection(connectionString))
{
    await connection.OpenAsync();
}
It hangs on the connection.OpenAsync() line.
If I look in SQL Server Management Studio at how many connections are active for the database, there's only one: presumably the one this code uses. So I don't think I'm exhausting the connection pool.
What am I doing wrong?
The problem was not the connection at all. The problem was that I shot myself in the foot with a deadlock on my threads. I was trying to make a synchronous call to the method containing the connection.OpenAsync(), like this:
Task task = MyAsyncMethod();
task.Wait();
By calling task.Wait() I was blocking the thread. When await connection.OpenAsync() completes, the rest of the method wants to resume on the very context/thread I had just blocked, so the task never finishes and task.Wait() never returns.
The solution:
Because nothing in my async method required it to stay on the thread that called it, I simply used await connection.OpenAsync().ConfigureAwait(false), which lets the remainder of the method continue on a thread-pool thread rather than the one I blocked with task.Wait().
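For illustration, a minimal sketch of the reworked method (MyAsyncMethod is the method named above; taking the connection string as a parameter is just for the example):

using System.Data.SqlClient;
using System.Threading.Tasks;

public async Task MyAsyncMethod(string connectionString)
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        // Resume on a thread-pool thread instead of the captured context,
        // so a caller that blocks with task.Wait() can't deadlock the continuation.
        await connection.OpenAsync().ConfigureAwait(false);

        // ... use the connection ...
    }
}

The cleaner long-term fix is to avoid task.Wait() entirely and await the method all the way up the call chain.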
Another reason it could hang is that the implementation is poor. OpenAsync(CancellationToken) doesn't even use the cancellation token for the Open operation, so you can't actually cancel it; you have to wait for it to time out. All it does is return a cancelled task if the cancellationToken was already set when you called the method, something you could check yourself without any special implementation. So this "async" overload is actually useless.
I have implemented a Source by extending RichSourceFunction for our message queue, which Flink doesn't support out of the box.
I implemented the run method, whose signature is:
override def run(sc: SourceFunction.SourceContext[String]): Unit = {
  val msg = read_from_mq
  sc.collect(msg)
}
When the run method is called and there is no new message in the message queue, should I (1) return without calling sc.collect, or (2) wait until new data arrives (in which case run will block)?
I would prefer the second option, but I'm not sure whether this is the correct usage.
The run method of a Flink source should loop, endlessly producing output until its cancel method is called. When there's nothing to produce, then it's best if you can find a way to do a blocking wait.
The Apache NiFi source connector is another reasonable example to use as a model. You will note that it sleeps for a configurable interval when there's nothing for it to do.
As you probably know, both options are functionally correct and will yield correct results.
That being said, the second one is preferred because you're not holding the thread in a busy loop. In fact, if you take a look at the RabbitMQ connector implementation, you'll notice that this is exactly how it is implemented: inside its run it indirectly waits for messages to be placed on a BlockingQueue.
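A minimal sketch of such a cancellable, blocking run loop (readFromMq stands in for your MQ client's blocking read):

import org.apache.flink.streaming.api.functions.source.{RichSourceFunction, SourceFunction}

class MqSource extends RichSourceFunction[String] {

  @volatile private var running = true

  override def run(ctx: SourceFunction.SourceContext[String]): Unit = {
    while (running) {
      val msg = readFromMq()  // blocks until a message is available
      if (msg != null) {
        // hold the checkpoint lock while emitting
        ctx.getCheckpointLock.synchronized {
          ctx.collect(msg)
        }
      }
    }
  }

  override def cancel(): Unit = {
    running = false
  }

  // stand-in for the actual MQ client call
  private def readFromMq(): String = ???
}

If the blocking read has no timeout, make sure it can be interrupted (or give it a timeout), otherwise cancel may not take effect promptly.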
Google's doc on async tasks assumes knowledge of the difference between regular and asynchronously added tasks.
add_async(task, transactional=False, rpc=None)
Asynchronously add a Task or a list of Tasks to this Queue.
How is adding tasks asynchronously different from adding them regularly?
I.e., what is the difference between using add(task, transactional=False) and add_async(task, transactional=False, rpc=None)?
I've heard that adding tasks regularly blocks certain things. Any explanation of what it blocks and how, and how async tasks don't block would be greatly appreciated.
Either way, the tasks themselves are scheduled and run elsewhere.
The async bit refers to the fact that the call returns immediately: your code does not wait for the round trip of the RPC that submits the task to the queue. You still have to check or wait for the result before the end of the request, but it means you can do other work and then confirm that the call completed before you exit.
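For illustration, a hedged sketch of the difference (queue name, URL, and parameters here are made up):

from google.appengine.api import taskqueue

task = taskqueue.Task(url='/worker', params={'key': 'value'})
queue = taskqueue.Queue('default')

# Synchronous: this request blocks until the enqueue RPC round-trips.
queue.add(task)

# Asynchronous: the RPC is sent and control returns immediately.
rpc = queue.add_async(task)
# ... do other work for this request ...
rpc.get_result()  # wait here (or before the request ends) for the add to finish

Either way the task itself still executes later on the task queue; only the enqueue call differs.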
If I call an async data store operation such as the one shown below but then end the request without calling get on the future, what will happen?
Will my operation still execute?
Will my response be sent before the operation has completed execution?
AsyncDatastoreService datastore = DatastoreServiceFactory.getAsyncDatastoreService();
Entity entity = new Entity("Employee", "Alfred");
// ... populate entity properties

// Make the call via the async interface, but never wait on the returned Future
Future<Key> keyFuture = datastore.put(entity);

// Return response without calling keyFuture.get()
The rpc will be sent immediately; when your app is ready to send a response to the client, it will block until the rpc is done.
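If you want to make sure the write has completed (and surface any errors) before your handler returns, a minimal sketch using the standard AsyncDatastoreService API is to hold the returned Future and call get() explicitly (the method name here is made up):

import java.util.concurrent.Future;
import com.google.appengine.api.datastore.*;

void putAndRespond() throws Exception {
    AsyncDatastoreService datastore = DatastoreServiceFactory.getAsyncDatastoreService();
    Entity entity = new Entity("Employee", "Alfred");
    // ... populate entity properties

    Future<Key> keyFuture = datastore.put(entity);  // the RPC goes out now

    // ... do other work for this request ...

    Key key = keyFuture.get();  // block until the write is applied, then respond
}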
I've done this in Python by accident, and the result was that nothing was written to the datastore.
Your operation may still execute, but it seems that will happen only if the response handler is still active when the operation gets around to executing. If not, nothing seems to happen at all.
Yes, the response will be sent before the operation has completed execution; that is the main feature of a future: it's non-blocking.
Many of my handlers add a task to a task queue to do non-critical background processing. Since this processing isn't critical, if the call to taskqueue.add() throws an exception, my code just ignores it.
Tonight the task queue seemed to be down for around half an hour. Although my handlers correctly ignored the failure, the taskqueue.add() call took about 5 seconds to time out before moving on to processing the rest of the page. This therefore made my site run very slowly.
So, is it possible to enqueue a task asynchronously - meaning a way to add a task, without waiting to see if the addition succeeded?
Alternatively, is there a way to reduce that timeout from 5 seconds down to eg 1 second?
Thanks.
You can use the new taskqueue methods create_rpc and add_async. If you don't care if the add succeeds, simply call add_async and ignore the result. If you care, but only want to wait 1 second, set the deadline when calling create_rpc, and use the return value as the RPC argument to add_async. Call get_result to find out if the tasks were successfully added.
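A hedged sketch of that approach (queue name and handler URL are made up):

from google.appengine.api import taskqueue

rpc = taskqueue.create_rpc(deadline=1)  # give up on the add after 1 second
taskqueue.Queue('default').add_async(
    taskqueue.Task(url='/worker', params={'key': 'value'}), rpc=rpc)

# Fire-and-forget: simply never call rpc.get_result().
# If you do want to know whether the add succeeded:
try:
    rpc.get_result()
except taskqueue.Error:
    pass  # non-critical background work, ignore the failure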
I think you can't do anything about it because the RPC call underneath the add method is a synchronous blocking API call.
You could try to add some check using the Capabilities API.
I am pretty sure GAE announced that TQ adds will be async with the next release (experimental feature).
I'm working on a design that uses a gatekeeper task to access a shared resource. The basic design I have right now is a single queue that the gatekeeper task is receiving from and multiple tasks putting requests into it.
This is a memory limited system, and I'm using FreeRTOS (Cortex M3 port).
The problem is as follows: To handle these requests asynchronously is fairly simple. The requesting task queues its request and goes about its business, polling, processing, or waiting for other events. To handle these requests synchronously, I need a mechanism for the requesting task to block on such that once the request has been handled, the gatekeeper can wake up the task that called that request.
The easiest design I can think of would be to include a semaphore in each request, but given the memory limitations and the rather large size of a semaphore in FreeRTOS, this isn't practical.
What I've come up with is using the task suspend and task resume feature to manually block the task, passing a handle to the gatekeeper with which it can resume the task when the request is completed. There are some issues with suspend/resume, though, and I'd really like to avoid them. A single resume call will wake up a task no matter how many times it has been suspended by other calls, and this can create undesired behavior.
Some simple pseudo-C to demonstrate the suspend/resume method.
void gatekeeper_blocking_request(void)
{
    put_request_in_queue(request);
    task_suspend(this_task);
}

void gatekeeper_request_complete_callback(request)
{
    task_resume(request->task);
}
A workaround that I plan to use in the meantime is to use the asynchronous calls and implement the blocking entirely in each requesting task. The gatekeeper will execute a supplied callback when the operation completes, and that can then post to the task's main queue or a specific semaphore, or whatever is needed. Having the blocking calls for requests is essentially a convenience feature so each requesting task doesn't need to implement this.
Pseudo-C to demonstrate the task-specific blocking, but this needs to be implemented in each task.
void requesting_task(void)
{
    while(1)
    {
        gatekeeper_async_request(callback);
        pend_on_semaphore(sem);
    }
}

void callback(request)
{
    post_to_semaphore(sem);
}
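For reference, a more concrete FreeRTOS-flavored sketch of that workaround, assuming one binary semaphore per requesting task (gatekeeper_async_request and request are the same stand-ins as in the pseudo-code above):

#include "FreeRTOS.h"
#include "semphr.h"
#include "task.h"

static SemaphoreHandle_t req_done;      /* one per requesting task */

static void callback(void *request)
{
    xSemaphoreGive(req_done);           /* signal the requesting task */
}

void requesting_task(void *params)
{
    req_done = xSemaphoreCreateBinary();

    for (;;)
    {
        gatekeeper_async_request(callback);
        /* Block here until the gatekeeper's callback signals completion. */
        xSemaphoreTake(req_done, portMAX_DELAY);
    }
}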
Maybe the best solution is just to not implement blocking in the gatekeeper and API, and force each task to handle it. That will increase the complexity of each task's flow, though, and I was hoping I could avoid it. For the most part, all calls will want to block until the operation is finished.
Is there some construct that I'm missing, or even just a better term for this type of problem that I can google? I haven't come across anything like this in my searches.
Additional remarks - Two reasons for the gatekeeper task:
Large stack space required. Rather than adding this requirement to each task, the gatekeeper can have a single stack with all the memory required.
The resource is not always accessible in the CPU. It is synchronizing not only tasks in the CPU, but tasks outside the CPU as well.
Use a mutex and make the gatekeeper a subroutine instead of a task.
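A sketch of that suggestion, assuming the resource access can run on the caller's stack (request_t and do_resource_access are made-up names):

#include "FreeRTOS.h"
#include "semphr.h"

static SemaphoreHandle_t resource_mutex;    /* created once at startup */

void resource_init(void)
{
    resource_mutex = xSemaphoreCreateMutex();
}

/* Called directly from any task; the mutex serializes access. */
void resource_request(request_t *request)
{
    xSemaphoreTake(resource_mutex, portMAX_DELAY);
    do_resource_access(request);            /* the actual work on the shared resource */
    xSemaphoreGive(resource_mutex);
}

Note that this trades the gatekeeper's single large stack for running the resource code on each caller's stack, which is exactly what the additional remarks above were trying to avoid.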
It's been six years since I posted this question, and I struggled with getting the synchronization working how I needed it to. There were some terrible abuses of OS constructs used. I've considered updating this code, even though it works, to be less abusive, and so I've looked at more elegant ways to handle this. FreeRTOS has also added a number of features in the last six years, one of which I believe provides a lightweight method to accomplish the same thing.
Direct-to-Task Notifications
Revisiting my original proposed method:
void gatekeeper_blocking_request(void)
{
    put_request_in_queue(request);
    task_suspend(this_task);
}

void gatekeeper_request_complete_callback(request)
{
    task_resume(request->task);
}
The reason this method was avoided is that the FreeRTOS task suspend/resume calls do not keep a count, so several suspend calls can be negated by a single resume call. At the time, the suspend/resume feature was already being used elsewhere in the application, so this was a real possibility.
Beginning with FreeRTOS 8.2.0, direct-to-task notifications essentially provide a lightweight binary semaphore built into each task. When a notification is sent to a task, a notification value may also be set. The notification lies dormant until the notified task calls some variant of xTaskNotifyWait(), or, if the task is already blocked in such a call, it is woken immediately.
The above code can be slightly reworked into the following:
void gatekeeper_blocking_request(void)
{
    put_request_in_queue(request);
    xTaskNotifyWait( ... );
}

void gatekeeper_request_complete_callback(request)
{
    xTaskNotify( ... );
}
This is still not an ideal method: if the task notifications are used elsewhere, you may run into the same problem as with suspend/resume, where the task is woken by a source other than the one it is expecting. But since, for me, notifications were a new feature, it may work out in the revised code.
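For reference, a concrete sketch of what those calls might look like using the give/take notification helpers added in FreeRTOS 8.2.0 (storing the requesting task's handle on the request is an assumption, as is the request_t type):

#include "FreeRTOS.h"
#include "task.h"

void gatekeeper_blocking_request(request_t *request)
{
    /* Remember which task to wake, then queue the request. */
    request->task = xTaskGetCurrentTaskHandle();
    put_request_in_queue(request);

    /* Sleep until the gatekeeper notifies this task; pdTRUE clears the
       notification count on exit so it behaves like a binary semaphore. */
    ulTaskNotifyTake(pdTRUE, portMAX_DELAY);
}

void gatekeeper_request_complete_callback(request_t *request)
{
    xTaskNotifyGive(request->task);
}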