Delay in receiving response or no response when using ExecuteQueryAsync - silverlight

I am using the ExecuteQueryAsync on multiple lists to fill data into a List<ListItemCollection>. Once all the responses are received, I bind the data onto my Silverlight Application.
ExecuteQueryAsync can complete with either success or failure. I keep track of the total number of responses received across both the success and failure callbacks, and once that count matches the total number of requests, I bind the data. I show a wait icon until all the responses are received and the data is bound to the application. The problem is that when I make, for example, 12 requests, I receive only 10 or 11 responses back. I don't understand why I don't receive a response for the last few requests; they go to neither the success nor the failure callback.
Has anyone else faced this issue? Could you help me understand what is causing it and why the response arrives as neither a success nor a failure? If I perform the same operation again, it works. This keeps happening every now and then and I am not sure how to fix it.
int requestCount = 0;
int responseCount = 0;
List<ListItemCollection> bindingData;

public void GetData()
{
    showWaitIcon();
    //Some Code here to form the CAML query
    bindingData = new List<ListItemCollection>();
    for (int i = 0; i < 10; i++)
    {
        //Some Code here to create the Client Context to query each Document Library or List
        clientContext.RequestTimeout = -1;
        clientContext.Load(clientContext.Web);
        ListItemCollection _lstItems = list.GetItems(camlQuery); //list and camlQuery come from the code above
        clientContext.Load(_lstItems);
        clientContext.ExecuteQueryAsync(onQuerySucceeded, onQueryFailed);
        requestCount++;
    }
}

private void onQuerySucceeded(object sender, ClientRequestSucceededEventArgs args)
{
    responseCount++;
    //Some code here to add the loaded items to bindingData
    this.Dispatcher.BeginInvoke(BindData);
}

private void onQueryFailed(object sender, ClientRequestFailedEventArgs args)
{
    responseCount++;
    this.Dispatcher.BeginInvoke(BindData);
}

private void BindData()
{
    if (requestCount == responseCount)
    {
        hideWaitIcon();
        bindToSilverlightView(bindingData);
    }
}

The most probable cause I can think of, without looking through your code, is that you have a synchronization issue, that is, two or more of your requests are receiving the result at nearly the exact same time and trying to modify the variable where you keep track of the number of answered requests. You may need to use a "lock" or similar structure to make sure that block of code is not accessed by two threads at once.
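For example, a minimal sketch of that idea using Interlocked (one option among several; the member names here are illustrative, not taken from your code):
private int requestCount;
private int responseCount;

private void OnQueryCompleted()
{
    // Interlocked.Increment is atomic, so two callbacks arriving at the same
    // moment cannot both read the old count and write back the same value.
    int answered = System.Threading.Interlocked.Increment(ref responseCount);
    if (answered == requestCount)
    {
        Dispatcher.BeginInvoke(BindData);
    }
}
Anything else the callbacks share, such as the bindingData list, needs the same treatment (for example a lock around the Add call).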

Related

Is there a simple way to determine the end of client streaming in asynchronous gRPC C++?

I'm currently learning bidirectional streaming in asynchronous gRPC C++.
Thanks to https://github.com/Mityuha/grpc_async, I got a lot of useful information about how this mode is implemented, but I have a question about it.
Without further ado, the code is as follows.
The server:
if (!ok || mcounter >= greeting.size()) //ctx_.IsCancelled() doesn't work
{
    std::cout << "[ProceedMM]: Trying finish" << std::endl;
    status_ = FINISH;
    responder_.Finish(Status(), (void*)this);
}
The client:
void AsyncCompleteRpc()
{
    void* got_tag;
    bool ok = false;
    while (cq_.Next(&got_tag, &ok))
    {
        AbstractAsyncClientCall* call = static_cast<AbstractAsyncClientCall*>(got_tag);
        call->Proceed(ok);
    }
    std::cout << "Completion queue is shutting down." << std::endl;
}
In this server, the end of the client stream is detected from the bool value ok, which is driven by the client. This is unlike synchronous gRPC, where the end of the stream is detected by repeatedly calling bool Read(RequestType* request) on ServerReaderWriter and checking its return value. It was strange to find that the corresponding method on ServerAsyncReaderWriter is void Read(R* msg, void* tag); I understand this is a consequence of the asynchronous model. But if I don't know in advance how many streamed messages will arrive, and I can't rely on checking ok, how can I detect the end of the client stream the way the synchronous API does? I ask because I am testing performance against Java code that is the same for the synchronous and asynchronous cases and does not have this ok value in the asynchronous path.
So can someone help me, or suggest a way to deal with this, or a way to run the gRPC C++ performance test with Bazel from my other question?
I'm not 100% sure that I get the question, but what ok tells you is (when false) that the operation you requested couldn't be completed and nothing else will ever complete successfully on that side of the stream. So if you issue a Read operation and the Next gives you a !ok value, then you can be sure that no more data will ever come back from the client. A more detailed explanation is given in the comments for the CompletionQueue class.
Thanks and good luck with gRPC.
When an asynchronous gRPC client receives a stream, it uses a ClientAsyncReader<> class to read the data. A different class is used when both send and receive are streams, but the logic is the same.
This class has a Finish() method which you need to call after you have finished sending your RPC data to the server. When the response stream from the server is finished, a message corresponding to this method is added to the CompletionQueue. Finish() delivers the final status once its tag comes back from the CQ, and that is how you find out that your stream is finished. Your code will look similar to this:
response_reader_ = stub->PrepareAsyncXYZ(ctx_, req, cq);
response_reader_->StartCall(&start_data_);
response_reader_->Finish(&status_, &finish_data_);
In this sample, the message in the CQ will carry the finish_data_ tag, which you can use to handle it properly. You will probably need to manage the messages for Finish() and Read() with reference counting, because you will likely get an additional failed read message too. When the message tagged finish_data_ is received from the CQ, status_ will hold a valid status.
At least that is how I wrote it.

ReactiveCommand in WPF/ReactiveUI app blocks UI thread

I am quite new to ReactiveUI and am seeing strange behavior with a ReactiveCommand.
I want to query data from a database that currently does not support asynchronous operations. Since we want to replace the database in the future with one that offers an asynchronous interface, I want to write everything as if the database already allowed async operations. As far as I understand, that means wrapping my database calls at the lowest level in a Task.
I have a button which is bound to a ReactiveCommand, and the command starts the database query. While the query runs I want to show some sort of animation.
The problem is that, whatever I try, the query blocks my UI thread.
Here is part of my code:
public ReactiveCommand<Unit, Unit> StartExportCommand { get; }

//The constructor of my view model
public ExportDataViewModel(IDataRepository dr)
{
    this.dr = dr;
    //...
    StartExportCommand = ReactiveCommand.CreateFromTask(() => StartExport());
    //...
}

private async Task StartExport()
{
    try
    {
        Status = "Querying data from database...";
        //Interestingly without this call the Status message would not even be shown!
        //The delay seems to give the system the opportunity to at least update the
        //label in the UI that is bound to "Status".
        await Task.Delay(100);
        //### This is the call that blocks the UI thread for several seconds ###
        var result = await dr.GetValues();
        //do something with result...
        Status = "Successfully completed";
    }
    catch (Exception ex)
    {
        Status = "Failed!";
        //do whatever else is necessary
    }
}

//This is the GetValues method of the implementation of the IDataRepository.
//The dictionary maps measured values to measuring points but that should not matter here.
//ValuesDto is just some container for the values.
public Task<IDictionary<int, ValuesDto>> GetValues()
{
    //...
    return Task<IDictionary<int, ValuesDto>>.Factory.StartNew(() =>
    {
        //### here is where the blocking calls to the database
        //### specific APIs take place
        return result;
    }, TaskCreationOptions.LongRunning);
}
I don't understand why this code is blocking the UI thread although I am wrapping
the long running query in a Task.
Is there something wrong with this pattern or should I go another way with Observables?
Edit 1
I am aware of the fact that async != threads. I thought, however, that Task with the TaskCreationOptions.LongRunning would make the blocking code run on a thread pool thread.
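For reference, here is roughly what I understand the equivalent with Task.Run would look like (just a sketch; QueryDatabaseBlocking stands in for the actual blocking database calls):
public Task<IDictionary<int, ValuesDto>> GetValues()
{
    // Task.Run also queues the delegate to the thread pool; the UI thread
    // should only be awaiting the returned task, not executing the delegate.
    return Task.Run<IDictionary<int, ValuesDto>>(() =>
    {
        IDictionary<int, ValuesDto> result = QueryDatabaseBlocking();
        return result;
    });
}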
Edit 2
As recommended by Andy, I set a breakpoint inside my task and had a look at the Threads window in the debugger. It tells me that the Task is running on a worker thread. Still, my UI is blocking.

What does the error "Exceeded maximum allowed number of Trackers" mean?

I'm following this tutorial to implement object tracking on iOS 11. I'm able to track objects perfectly until a certain point, and then this error appears in the console:
Throws: Error Domain=com.apple.vis Code=9 "Internal error: Exceeded maximum allowed number of Trackers for a tracker type: VNObjectTrackerType" UserInfo={NSLocalizedDescription=Internal error: Exceeded maximum allowed number of Trackers for a tracker type: VNObjectTrackerType}
Am I using the API incorrectly, or perhaps Vision has trouble handling too many consecutive object tracking tasks? Curious if anyone has insight into why this is happening.
It appears that you hit the limit on the number of trackers that can be active in the system. The first thing to note is that a new tracker is created every time an observation with a new uuid property is used. Rather than creating new observations, you should recycle the one you used to start the tracker for as long as you want to keep using it, by feeding what you got from "results" for time T into the subsequent request you make for time T+1. When you no longer want to use that tracker (for example because the confidence score gets too low), there is a "lastFrame" property that can be set, which lets the Vision framework know that you are done with that tracker. Trackers also get released when the sequence request handler is released.
To track the rectangle you feed consecutive observations to the same VNSequenceRequestHandler instance, say handler. When the rectangle is lost, i.e. the new observation is nil in your handler function / callback, or you get some other tracking error, just re-instantiate the handler and continue, e.g. (sample code to show the idea):
private var handler = VNSequenceRequestHandler()
// <...>
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard
        let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
        let lastObservation = self.lastObservation
    else {
        self.handler = VNSequenceRequestHandler()
        return
    }
    let request = VNTrackObjectRequest(detectedObjectObservation: lastObservation, completionHandler: self.handleVisionRequestUpdate)
    request.trackingLevel = .accurate
    do {
        try self.handler.perform([request], on: pixelBuffer)
    } catch {
        print("Throws: \(error)")
    }
}
Note that handler is var, not a constant.
Also, you may re-instantiate the handler in the actual handler function (like func handleVisionRequestUpdate(_ request: VNRequest, error: Error?)) in case the new observation object is invalid.
My problem with this was that I had a function calling perform on the same VNSequenceRequestHandler that the tracking was also calling perform on; because of that, too many calls to try self.visionSequenceHandler.perform(trackRequests, on: ciimage) were running concurrently. Make sure the VNSequenceRequestHandler is not getting hit at the same time by multiple perform calls.

WCF/Silverlight: How can I foul a Channel?

I was told that I shouldn't cache channels in Silverlight/WCF because they may become faulted and unusable. Can someone show me sample code that demonstrates this happening?
Call a service to prove the connection can work (i.e. no bogus URL)
Make a second call that fouls the channel by causing it to go into a faulted condition
Repeat the first call, which would fail.
In my own testing, the key is whether the binding you're using is session-oriented or not. If you're using a stateless binding like BasicHttpBinding, you can muck up the channel all you want and you're good. For instance, I've got a WCF service using the BasicHttpBinding that looks like this -- note specifically the Channel.Abort() call in SayGoodbye():
public class HelloWorldService : IHelloWorldService
{
    public string SayHello()
    {
        return "Hello.";
    }

    public string SayGoodbye()
    {
        OperationContext.Current.Channel.Abort();
        return "Goodbye.";
    }
}
And the Silverlight client code looks like this (ugly as hell, sorry).
public partial class ServiceTestPage : Page
{
    HelloWorldServiceClient client;

    public ServiceTestPage()
    {
        InitializeComponent();
        client = new HelloWorldServiceClient();
        client.SayHelloCompleted += new EventHandler<SayHelloCompletedEventArgs>(client_SayHelloCompleted);
        client.SayGoodbyeCompleted += new EventHandler<SayGoodbyeCompletedEventArgs>(client_SayGoodbyeCompleted);
        client.SayHelloAsync();
    }

    void client_SayHelloCompleted(object sender, SayHelloCompletedEventArgs e)
    {
        if (e.Error == null)
        {
            Debug.WriteLine("Called SayHello() with result: {0}.", e.Result);
            client.SayGoodbyeAsync();
        }
        else
        {
            Debug.WriteLine("Called SayHello() with the error: {0}", e.Error.ToString());
        }
    }

    void client_SayGoodbyeCompleted(object sender, SayGoodbyeCompletedEventArgs e)
    {
        if (e.Error == null)
        {
            Debug.WriteLine("Called SayGoodbye() with result: {0}.", e.Result);
        }
        else
        {
            Debug.WriteLine("Called SayGoodbye() with the error: {0}", e.Error.ToString());
        }
        client.SayHelloAsync(); // start over
    }
}
And it'll loop around infinitely as long as you want.
But if you're using a session-oriented binding like Net.TCP or HttpPollingDuplex, you've got to be much more careful about your channel handling. If that's the case, then of course you're caching your proxy client, right? What you have to do in that instance is to catch the Channel_Faulted event, abort the client, and then recreate it, and of course, re-establish all your event-handlers. Kind of a pain.
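For example, a rough sketch of that recovery pattern (assuming a generated duplex proxy called DuplexServiceClient with a SomethingHappenedReceived callback event; both names are illustrative):
private DuplexServiceClient client;

private void CreateClient()
{
    client = new DuplexServiceClient();
    // InnerChannel implements ICommunicationObject, which exposes Faulted.
    client.InnerChannel.Faulted += OnChannelFaulted;
    // Re-establish every server-callback handler on the fresh proxy.
    client.SomethingHappenedReceived += OnSomethingHappened;
}

private void OnChannelFaulted(object sender, EventArgs e)
{
    client.Abort();   // a faulted channel can only be aborted, not closed
    CreateClient();   // rebuild the proxy and re-wire the handlers
}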
On a side note, when it comes to using a duplex binding, the best approach that I've found (I'm open to others) is to create a wrapper around my proxy client that does three things:
(1) Transforms the obnoxious event-raising code generated by the "Add Service Reference" dialog box into a far-more-useful continuation-passing pattern (a sketch follows at the end of this answer).
(2) Wraps each of the events raised from the server-side, so that the client can subscribe to the event on my wrapper, not the event on the proxy client itself, since the proxy client itself may have to be deleted and recreated.
(3) Handles the ChannelFaulted event, and (several times, with a timeout) attempts to recreate the proxy client. If it succeeds, it automatically resubscribes all of its event wrappers, and if it fails, it throws a real ClientFaulted event which in effect means, "You're screwed, try again later."
It's a pain, since it seems like this is the sort of thing that should have been included with the MS-generated code in the first place. But it sure fixes a whole lot of problems. One of these days I'll see if I can get this wrapper working with T4 templates.
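As a rough illustration of point (1), using the HelloWorldServiceClient from the example above (the wrapper method itself is hypothetical, and client is the cached proxy field):
public void SayHello(Action<string> onResult, Action<Exception> onError)
{
    EventHandler<SayHelloCompletedEventArgs> handler = null;
    handler = (s, e) =>
    {
        client.SayHelloCompleted -= handler;   // one-shot subscription
        if (e.Error != null)
            onError(e.Error);
        else
            onResult(e.Result);
    };
    client.SayHelloCompleted += handler;
    client.SayHelloAsync();
}
Callers then pass their continuations directly instead of wiring up Completed events, which also makes it easier for the wrapper to swap out the underlying proxy when the channel faults.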

Calling a webservice synchronously from a Silverlight 3 application?

I am trying to reuse some .NET code that performs some calls to a data-access-layer type service. I have managed to package up both the input to the method and the output from the method, but unfortunately the service is called from inside code that I really don't want to rewrite in order to be asynchronous.
Unfortunately, the webservice code generated in Silverlight only produces asynchronous methods, so I was wondering if anyone had working code that managed to work around this?
Note: I don't need to execute the main code path here on the UI thread, but the code in question will expect that calls it makes to the data access layers are synchronous in nature, but the entire job can be mainly executing on a background thread.
I tried the recipe found here: The Easy Way To Synchronously Call WCF Services In Silverlight, but unfortunately it times out and never completes the call.
Or rather, what seems to happen is that the completed event handler is called, but only after the method returns. I suspect that the event handler is invoked via a dispatcher or similar, and since I'm blocking the main thread here, it never completes until the code is actually back in the GUI loop.
Or something like that.
Here's my own version that I wrote before I found the above recipe, but it suffers from the same problem:
public static object ExecuteRequestOnServer(Type dalInterfaceType, string methodName, object[] arguments)
{
    string securityToken = "DUMMYTOKEN";
    string input = "DUMMYINPUT";
    object result = null;
    Exception resultException = null;
    object evtLock = new object();
    var evt = new System.Threading.ManualResetEvent(false);
    try
    {
        var client = new MinGatServices.DataAccessLayerServiceSoapClient();
        client.ExecuteRequestCompleted += (s, e) =>
        {
            resultException = e.Error;
            result = e.Result;
            lock (evtLock)
            {
                if (evt != null)
                    evt.Set();
            }
        };
        client.ExecuteRequestAsync(securityToken, input);
        try
        {
            var didComplete = evt.WaitOne(10000);
            if (!didComplete)
                throw new TimeoutException("A data access layer web service request timed out (" + dalInterfaceType.Name + "." + methodName + ")");
        }
        finally
        {
            client.CloseAsync();
        }
    }
    finally
    {
        lock (evtLock)
        {
            evt.Close();
            evt = null;
        }
    }
    if (resultException != null)
        throw resultException;
    else
        return result;
}
Basically, both recipes do this:
Set up a ManualResetEvent
Hook into the Completed event
The event handler grabs the result from the service call, and signals the event
The main thread now starts the web service call asynchronously
It then waits for the event to become signalled
However, the event handler is not called until the method above has returned, hence my code that checks for evt != null and such, to prevent a TargetInvocationException from killing my program after the method has timed out.
Does anyone know:
... if it is possible at all in Silverlight 3
... what I have done wrong above?
I suspect that the MinGatServices thingy is trying to be helpful by ensuring the ExecuteRequestCompleted is dispatched on the main UI thread.
I also suspect that your code is already executing on the main UI thread, which you have blocked. Never block the UI thread in Silverlight; if you need to block the UI, use something like the BusyIndicator control.
The knee-jerk answer is "code asynchronously" but that doesn't satisfy your question's requirement.
One possible solution that may be less troublesome is to start the whole chunk of code from whatever user action invokes it on a different thread, using say the BackgroundWorker.
Of course the MinGatServices might be ensuring the callback occurs on the same thread that executed ExecuteRequestAsync in which case you'll need to get that to run on a different thread (jumping back to the UI thread would be acceptable):-
Deployment.Current.Dispatcher.BeginInvoke(() => client.ExecuteRequestAsync(securityToken, input));
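For the BackgroundWorker route mentioned above, a rough sketch might look like this (OnExportClick, IMyDal and DisplayResult are placeholder names, not from your code):
private void OnExportClick(object sender, RoutedEventArgs e)
{
    var worker = new System.ComponentModel.BackgroundWorker();
    worker.DoWork += (s, args) =>
    {
        // Runs on a thread-pool thread, so blocking on the ManualResetEvent
        // inside ExecuteRequestOnServer no longer stalls the UI dispatcher.
        args.Result = ExecuteRequestOnServer(typeof(IMyDal), "SomeMethod", new object[0]);
    };
    worker.RunWorkerCompleted += (s, args) =>
    {
        // Raised back on the UI thread; safe to touch controls here.
        if (args.Error == null)
            DisplayResult(args.Result);
    };
    worker.RunWorkerAsync();
}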
