How do I cancel an asynchronous operation in Silverlight/WCF?

I am calling an asynchronous service from my Silverlight app and I want to be able to cancel that call after it is made. There is an e.Cancelled flag available once the service has finished (i.e. If e.Cancelled Then), but how do you set that flag to true after you have made the call? How do you cancel that asynchronous call?
Let me clarify a bit... what I am trying to do is call the SAME method twice, one right after the other, and get the results of the last call into my collection. If I call an asynchronous method twice there is no guarantee that the responses come back in the order the calls were made, so I may end up with the results of the first call arriving last and having the wrong results in my collection. So what I would like to do is cancel the first call when I make the second, so I don't get results back from the first call. Seeing as there is a Cancelled flag in the completed event args, I figure you should be able to do this. But how?

It's async... the transfer is passed off to a remote server and it does not return until the server is done with it.
Basically the server will keep going, but you don't have to wait for the response. Disconnect your service completed event handler and pretend it was never called. That will give the effect of cancelling the operation.
If you really need to cancel something in progress on the server you would need to make another call to the server to cancel the first call. Assuming the first call is a very slow one, this might be possible.
Update (as question changed)
In the case you specify, it will be up to the server to cancel an operation in progress if a second one comes through, not up to the client. e.Cancelled is set server-side.
However... :)
You have exposed a client usability issue. Shouldn't you also delay sending any service request until an idle period has passed? That way rapid selections will not result in multiple service calls.
Also... :>
You may also want to send a sequence number to your service calls and return that as part of the result. Then you will know if it is the latest request or not.

It sounds like what you really want to do is ignore the responses of all but the most recent call.
Set a unique ID (a request number, a Guid, a timestamp, or whatever) on the request, and make sure the service sends that same value back. Keep around the ID of the most recent request and ignore responses that don't match that ID.
This is safer than cancelling the first request, because if the service has already started sending its response before the cancel request arrives, you still end up with the stale result.
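To make the bookkeeping concrete, here is the "latest request wins" idea sketched in Python rather than Silverlight (the WCF plumbing is omitted and every name below is illustrative, not from this thread):
import asyncio
import itertools
import random

async def call_service(query):
    # Hypothetical stand-in for the async service call, with variable latency.
    await asyncio.sleep(random.random())
    return "results for " + query

request_ids = itertools.count(1)
latest_request_id = 0
results = []

async def search(query):
    global latest_request_id
    request_id = next(request_ids)
    latest_request_id = request_id           # remember the most recent request
    response = await call_service(query)
    if request_id != latest_request_id:      # a newer request exists; drop this response
        return
    results[:] = [response]                  # only the latest response lands here

async def main():
    # Two rapid calls: the response belonging to the second call is kept,
    # no matter which one actually arrives first.
    await asyncio.gather(search("first"), search("second"))
    print(results)

asyncio.run(main())
The same pattern translates directly to a completed-event handler: stamp each request with an ID, remember the newest one, and drop any completion whose ID no longer matches.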

Related

How to Make a Synchronous Call an Asynchronous Call in Angular?

I have an app that shows statuses for internal processes. It also has a separate view that allows you to set up new records. To set up a new record, you fill out a form, and on submit a call is made to my Node.js server that:
inserts the record into a table
kicks off a stored procedure
routes you back to the status page
The issue here is that the page hangs while this happens, as sometimes the stored procedure takes a minute or two to run. So you wait for a couple of minutes, and then are routed back to the status page.
Now, I don't actually care to see any exit code for this stored proc on the front end, as you will see the status of it on the status page. I'm wondering if there's a way for me to kick this process off, but not have the front end care about the return.
I've tried adding in the $location.path() before the $http call to the server, but then the $http call never happens.
Any ideas?
Thanks!
You can wrap the stored procedure call in a promise. The browser will make the call and continue on without waiting for it to complete, and you can react appropriately in the resolve or reject callbacks. You can use Angular's $q service:
insertRecord();

// Kick off the stored procedure call inside a promise; don't wait on it here.
$q(function (resolve, reject) {
    storedProcCall();   // wire resolve/reject to its outcome if you ever need it
    resolve();
});

redirect();

REST optimistic-locking and multiple PUTs

As far as I understand, a PUT request is not supposed to return any content.
Consider the client wants to run this pseudo code:
x = resource.get({id: 1});
x.field1 = "some update";
resource.put(x);
x.field2 = "another update";
resource.put(x);
(Imagine I have an input control and a "Save" button; this lets me change part of object "x" shown in the input control, then on button click PUT the changes to the server, then continue editing and maybe "save" another change to "x".)
Following different proposals on how to implement optimistic locking in REST APIs, the above code MUST fail, because the version mark (however implemented) for "x" as returned by get() will become stale after the first put().
Then how do you people usually make it work?
Or do you just re-GET objects after every PUT?
You can use "conditional" actions with HTTP, for example the If-Match header described here:
https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.24
In short: the server delivers an ETag with the GET response, and you supply that ETag back to the server in the If-Match header of the PUT. The server will respond with a failure (412 Precondition Failed) if the resource you are trying to PUT now has a different ETag. You can also use simple timestamps with the If-Unmodified-Since header.
Of course you will have to make your server code understand conditional requests.
For multiple steps, the PUT can indeed return the new representation, and it can therefore include the new ETag or timestamp too. Even if the server does not return the new representation for a PUT, you could still use the timestamp from the PUT response with an If-Unmodified-Since conditional PUT.
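To make the round trip concrete, here is a rough sketch of that flow using Python's requests library; the endpoint, field names, and error handling are illustrative assumptions, not something from this thread:
import requests

BASE = "https://api.example.com"   # hypothetical endpoint

# GET the resource; the server returns its current ETag.
resp = requests.get(BASE + "/things/1")
etag = resp.headers["ETag"]
thing = resp.json()

# First update: send the ETag back in an If-Match header.
thing["field1"] = "some update"
resp = requests.put(BASE + "/things/1", json=thing, headers={"If-Match": etag})
if resp.status_code == 412:
    raise RuntimeError("someone else changed the resource; re-GET and retry")

# If the server returns a new ETag on the PUT response, pick it up so the
# next PUT does not fail with 412 against our own earlier change.
etag = resp.headers.get("ETag", etag)

# Second update, conditional on the refreshed ETag.
thing["field2"] = "another update"
requests.put(BASE + "/things/1", json=thing, headers={"If-Match": etag})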
Here is probably what I was looking for: https://www.rfc-editor.org/rfc/rfc7231#section-4.3.4
They implicitly say that we CAN return an ETag from PUT, though only in the case where the server applied the changes exactly as they were given, without any corrections.
However this raises yet another question. In a real-world app the PUT caller will run asynchronously in a JS GUI, as in my example in the question. So the Save button might be pressed several times, with or without entering any changes. If we don't use optimistic locking, then the supposed idempotency of PUT makes it safe to send another PUT request with each button click, as long as the last one wins (but if there were intervening changes that is actually not guaranteed, so the question remains).
But with optimistic locking, when the first PUT succeeds, it returns an updated ETag, right? And if there is another PUT request already running, still with the outdated ETag, that latter request will get a 412 and the user will see a message "someone else changed the resource" - when actually it was our own former change.
What do you usually do to prevent that? Disable the Save button until its request has fully completed? What if it times out? Or do you think it's acceptable to show a concurrent-change error message if it was a timeout, because stability is already compromised anyway?

How to prevent ndb from batching a put_async() call and make it issue the RPC immediately?

I have a request handler that updates an entity, saves it to the datastore, then needs to perform some additional work before returning (like queuing a background task and json-serializing some results). I want to parallelize this code, so that the additional work is done while the entity is being saved.
Here's what my handler code boils down to:
class FooHandler(webapp2.RequestHandler):
    @ndb.toplevel
    def post(self):
        foo = yield Foo.get_by_id_async(some_id)
        # Do some work with foo
        # Don't yield, as I want to perform the code that follows
        # while foo is being saved to the datastore.
        # I'm in a toplevel, so the handler will not exit as long as
        # this async request is not finished.
        foo.put_async()
        taskqueue.add(...)
        json_result = generate_result()
        self.response.headers["Content-Type"] = "application/json; charset=UTF-8"
        self.response.write(json_result)
However, Appstats shows that the datastore.Put RPC is being done serially, after taskqueue.Add.
A little digging around in ndb.context.py shows that a put_async() call ends up being added to an AutoBatcher instead of the RPC being issued immediately.
So I presume that the _put_batcher ends up being flushed when the toplevel waits for all async calls to be complete.
I understand that batching puts has real benefits in certain scenarios, but in my case here I really want the put RPC to be sent immediately, so I can perform other work while the entity is being saved.
If I do yield foo.put_async(), then I get the same waterfall in Appstats, but with datastore.Put being done before the rest.
This is to be expected, as yield makes my handler wait for the put_async() call to complete before executing the rest of the code.
I have also tried adding a call to ndb.get_context().flush() right after foo.put_async(), but the datastore.Put and taskqueue.BulkAdd calls are still not being made in parallel according to Appstats.
So my question is: how can I force the call to put_async() to bypass the auto batcher and issue the RPC immediately?
There's no supported way to do it. Maybe there should be. Can you try if this works?
loop = ndb.eventloop.get_event_loop()
while loop.run_idle():
    pass
You may have to look at the source code of ndb/eventloop.py to see what else you could try -- basically you want to try most of what run0() does except waiting for RPCs. In particular, it's possible that you would have to do this:
while loop.current:
    loop.run0()
while loop.run_idle():
    pass
(This still isn't supported, because there are other conditions you may have to handle too, but those don't seem to occur in your example.)
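For orientation only, this is roughly how that unsupported suggestion might be spliced into the handler from the question, between put_async() and the taskqueue call (untested; whether the RPCs really overlap would still need to be confirmed in Appstats):
foo.put_async()

# Drain the event loop's idlers so the auto-batcher flushes and the
# datastore.Put RPC goes out now instead of at the end of the toplevel
# (unsupported, as the answer above notes).
loop = ndb.eventloop.get_event_loop()
while loop.current:
    loop.run0()
while loop.run_idle():
    pass

taskqueue.add(...)   # with luck, issued while the put RPC is in flight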
Try this; I'm not 100% certain it will help:
foo = yield Foo.get_by_id_async(some_id)
future = foo.put_async()
future.done()
ndb requests get put into the auto-batcher, and the batch gets sent as an RPC when you need a result. Since you don't need the result of foo.put_async(), it doesn't get sent until you make another ndb call (you don't) or until the @ndb.toplevel ends.
Calling future.done() does not block, but I'm guessing it might trigger the request.
Another thing to try to force the operation is:
ndb.get_context().flush()

What if I never call get on the future from an Async Data Store operation?

If I call an async data store operation such as the one shown below but then end the request without calling get on the future, what will happen?
Will my operation still execute?
Will my response be sent before the operation has completed execution?
AsyncDatastoreService datastore = DatastoreServiceFactory.getAsyncDatastoreService();
Entity entity = new Entity("Employee", "Alfred");
// ... populate entity properties
// Make an async call; the returned Future is never retrieved
datastore.put(entity);
// Return response
The RPC will be sent immediately; when your app is ready to send a response to the client, it will block until the RPC is done.
I've done this in python by accident and the result was nothing was written to the datastore.
Your operation may still execute but it seems that'll happen only if the response handler is still active when it decides to execute. If not, nothing seems to happen at all.
Yes, the response will be sent before the operation has completed execution - this is the main feature of a future: it's non-blocking.

App Engine: Is it possible to enqueue tasks asynchronously?

Many of my handlers add a task to a task queue to do non-critical background processing. Since this processing isn't critical, if the call to taskqueue.add() throws an exception, my code just ignores it.
Tonight the task queue seemed to be down for around half an hour. Although my handlers correctly ignored the failure, each taskqueue.add() call took about 5 seconds to time out before the handler moved on to processing the rest of the page. This made my site run very slowly.
So, is it possible to enqueue a task asynchronously - meaning a way to add a task, without waiting to see if the addition succeeded?
Alternatively, is there a way to reduce that timeout from 5 seconds down to, say, 1 second?
Thanks.
You can use the new taskqueue methods create_rpc and add_async. If you don't care if the add succeeds, simply call add_async and ignore the result. If you care, but only want to wait 1 second, set the deadline when calling create_rpc, and use the return value as the RPC argument to add_async. Call get_result to find out if the tasks were successfully added.
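A rough sketch of both variants, assuming the Queue.add_async and create_rpc signatures from the SDK; the queue name, task URL, and one-second deadline are illustrative only:
from google.appengine.api import taskqueue

queue = taskqueue.Queue('background')   # hypothetical queue name

# Variant 1 - fire-and-forget: start the add and never check whether it succeeded.
queue.add_async(taskqueue.Task(url='/tasks/process', params={'key': 'abc'}))

# Variant 2 - bounded wait: give the add roughly 1 second, then check the outcome.
rpc = taskqueue.create_rpc(deadline=1)
queue.add_async(taskqueue.Task(url='/tasks/process', params={'key': 'abc'}), rpc=rpc)
try:
    rpc.get_result()
except taskqueue.Error:
    pass   # non-critical background work; ignore the failure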
I think you can't do anything about it because the RPC call underneath the add method is a synchronous blocking API call.
You could try to add some check using the Capabilities API.
I am pretty sure GAE announced that TQ adds will be async with the next release (experimental feature).
