I have the situation below, which gives me an error that looks like a timeout. Some record inserts are missing.
The error is as follows:
IdeaBlade.EntityModel.AsyncProcessor`1.<>c__DisplayClass2.<.ctor>b__0(TArgs args)
at IdeaBlade.EntityModel.AsyncProcessor`1.Signal()
at IdeaBlade.EntityModel.AsyncProcessor`1.b__5(Object x)
InnerException:
[HttpRequestTimedOutWithoutDetail]
Arguments:
Debugging resource strings are unavailable. Often the key and arguments provide sufficient information to diagnose the problem. See http://go.microsoft.com/fwlink/?linkid=106663&Version=5.0.10411.00&File=System.ServiceModel.dll&Key=HttpRequestTimedOutWithoutDetail
at IdeaBlade.EntityModel.EntityServerProxy.<>c__DisplayClass14.b__13()
at IdeaBlade.EntityModel.EntityServerProxy.ExecFunc[T](Func`1 func, Boolean funcWillHandleException)
at IdeaBlade.EntityModel.EntityServerProxy.ExecuteOnServer[T](Func`1 func, Boolean funcWillHandleException)
at IdeaBlade.EntityModel.EntityServerProxy.InvokeServerMethod(SessionBundle sessionBundle, ITypeWrapper entityManagerType, String typeName, String methodName, Object[] args)
at IdeaBlade.EntityModel.EntityMa
Any idea how to handle it?
Thanks :)
......
.ExecuteAsync(op =>
{
var cust = Customers.Where(p => p.IsSelected).ToList();
..........................
Ships.ForEach(.......
...........
EntityManager.SalesGetSalesQuery(
..............
.ExecuteAsync(opn =>
{
................
});
p.UpdateOrders(copyOrders);
Orders.Add(copyOrders);
Save();
});
A timeout can happen in several places, so you will want to increase all possible timeout values.
In this case, you should look at increasing the query timeouts (CommandTimeout and the transaction timeout), the communication timeouts, and the IIS executionTimeout.
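If it helps, here is a rough sketch of where each of those knobs lives in plain .NET. This is illustrative only; DevForce surfaces these settings through its own configuration (see the documentation link below), so treat the names here as the underlying mechanisms, not DevForce API:
using System;
using System.Data.SqlClient;
using System.Transactions;
static class TimeoutKnobs
{
    // 1. Query timeout: how long a single command may run (seconds, default 30).
    public static void RaiseCommandTimeout(SqlConnection conn)
    {
        using (var cmd = new SqlCommand("SELECT 1", conn))
        {
            cmd.CommandTimeout = 300;
            // ... execute ...
        }
    }
    // 2. Transaction timeout: how long the enclosing transaction may live.
    public static void RunInLongTransaction(Action work)
    {
        var options = new TransactionOptions { Timeout = TimeSpan.FromMinutes(5) };
        using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
        {
            work();
            scope.Complete();
        }
    }
    // 3. Communication timeouts live on the WCF binding (sendTimeout/receiveTimeout
    //    in the client and server config), and 4. the IIS/ASP.NET executionTimeout
    //    is set in web.config: <httpRuntime executionTimeout="600" />.
}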
DevForce has a documentation page that talks about troubleshooting timeouts. It's at http://drc.ideablade.com/devforce-2012/bin/view/Documentation/understand-timeouts.
I noticed that your nested query ("SalesGetSalesQuery") is a StoredProcQuery. There is an outstanding bug where StoredProcQueries do not respect the Transaction timeout value if it differs from the default (120 seconds). We are working on a fix, but unfortunately there's no workaround in the meantime.
If it's not the StoredProcQuery that's timing out, then the link above will help you resolve it.
Job number 1 is to increase the timeout period while you figure out what is taking so long.
This will help: https://stackoverflow.com/questions/4877315/silverlight-4-ria-services-timeout-issues
I don't think the issue is that the async calls are nested. Remember that the second (i.e. nested) async call will only be executed once the first completes.
Which async call is timing out exactly? Is it the StoredProcQuery? (Any of them, since you're calling them in a loop.) If yes, then it's an outstanding bug that we are working on fixing. As I mentioned in the previous post, there is no workaround. However, since this particular stored proc takes a date range as arguments, one possibility would be 'breaking' this date range into smaller date ranges and issuing multiple async calls (perhaps in a parallel coroutine). Note that this 'workaround' is not foolproof, since all orders could fall within one small range and the async call for that particular range would still time out.
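To illustrate the date-range idea, here is a minimal sketch. Only the chunking pattern is the point; the query name and parameters are hypothetical stand-ins for your generated SalesGetSalesQuery:
using System;
using System.Collections.Generic;
static class DateRanges
{
    // Split [start, end) into consecutive sub-ranges of at most 'chunk' length.
    public static IEnumerable<Tuple<DateTime, DateTime>> Split(
        DateTime start, DateTime end, TimeSpan chunk)
    {
        for (var from = start; from < end; from += chunk)
        {
            var to = from + chunk < end ? from + chunk : end;
            yield return Tuple.Create(from, to);
        }
    }
}
// Then issue one async call per sub-range instead of one big call, e.g.:
// foreach (var r in DateRanges.Split(start, end, TimeSpan.FromDays(7)))
//     EntityManager.SalesGetSalesQuery(r.Item1, r.Item2 /* , ... */)
//                  .ExecuteAsync(op => { /* merge partial results */ });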
sbelini.
I am a newbie in CANopen. I wrote a program that reads the actual position via PDO1 (the default is statusword + actual position).
void canopen_init() {
// code 1: set up PDO mapping
nmtPreOperation();
disablePDO(PDO_TX1_CONFIG_COMM);
setTransmissionTypePDO(PDO_TX1_CONFIG_COMM, 1);
setInhibitTimePDO(PDO_TX1_CONFIG_COMM, 0);
setEventTimePDO(PDO_TX1_CONFIG_COMM, 0);
enablePDO(PDO_TX1_CONFIG_COMM);
setCyclePeriod(1000);
setSyncWindow(100);
// code 2: enable operation
readyToSwitchOn();
switchOn();
enableOperation();
motionStart();
// code 3
nmtActiveNode();
}
int main (void) {
canopen_init();
while (1) {
delay_ms(1);
send_sync();
}
}
If I remove "code 2" (the servo is in the Switch_on_disable state), I can read the position each time a SYNC is sent. But if I use "code 2", the drive reports a "sync frame timeout" error. I don't know whether the driver or my code has the problem. Does my code have a problem? Thank you!
I don't know what protocol stack this is or how it works, but these:
setCyclePeriod(1000);
setSyncWindow(100);
likely correspond to these OD entries:
Object 1006h: Communication cycle period (CiA 301 7.5.2.6)
Object 1007h: Synchronous window length (CiA 301 7.5.2.7)
They set the SYNC interval and time window for synchronous PDOs respectively. The latter is described by the standard as:
If the synchronous window length expires all synchronous TPDOs may be discarded and an EMCY message may be transmitted; all synchronous RPDOs may be discarded until the next SYNC message is received. Synchronous RPDO processing is resumed with the next SYNC message.
Now if you set this sync time window to 100 µs but have a sloppy busy-wait delay delay_ms(1), then that doesn't add up. If you write zero to Object 1007h, you disable the sync window feature; I suppose setSyncWindow(0); might do that. You can try that to see if it's the issue. If so, you have to drop your busy-wait in favour of proper hardware timers: one for the SYNC period and one for the PDO timeout (if you must use that feature).
Problem fixed. Excessive EMI from the servo made my controller misbehave. After isolating it, everything worked very well :)
There are a number of sync and async operations for files in dart:io:
file.deleteSync() and file.delete()
file.readAsStringSync() and file.readAsString()
file.writeAsBytesSync(bytes) and file.writeAsBytes(bytes)
and many, many more.
What are the considerations that I should keep in mind when choosing between the sync and async options? I seem to recall seeing somewhere that the sync option is faster if you have to wait for it to finish anyway (await file.delete() for example). But I can't remember where I saw that or if it is true.
Is there any difference between this method:
Future deleteFile(File file) async {
await file.delete();
print('deleted');
}
and this method:
Future deleteFile(File file) async {
file.deleteSync();
print('deleted');
}
Let me try to summarize an answer based on the comments to my question. Correct me where I'm wrong.
Running code in an async method doesn't make it run on another thread.
Dart is a single threaded system.
Code gets run on an event loop.
Performing long-running synchronous tasks will block the system whether or not they are in an async method.
An isolate is a single thread.
If you want to run tasks on another thread, then you need to run them on another isolate.
Starting another isolate is called spawning the isolate.
There are a few options for running tasks on another isolate, including compute, IsolateChannel, and writing your own isolate communication code.
For file IO, the synchronous versions are faster than the asynchronous versions.
For heavy file IO, prefer the asynchronous versions because they do their work on a separate thread.
For light file IO (like file.exists()), using the synchronous version is an option since it is likely to be fast.
Further reading
Isolates and Event Loops
Single Thread Dart, What? — Part 1
Single Thread Dart, What? — Part 2
avoid_slow_async_io lint
The sync variants, unlike the async ones, block the current isolate: no event handlers run (the event loop is stalled) until the operation is complete.
Using sync:
void main() {
final file = File('...');
Future(() => print('1')); // Adding to the event queue
file.readAsBytesSync();
print('2');
}
Output:
2
1
Using async:
void main() async {
final file = File('...');
Future(() => print('1')); // Adding to the event queue
await file.readAsBytes();
print('2');
}
Output:
1
2
I'm following this tutorial to implement object tracking on iOS 11. I'm able to track objects perfectly until a certain point; then this error appears in the console.
Throws: Error Domain=com.apple.vis Code=9 "Internal error: Exceeded maximum allowed number of Trackers for a tracker type: VNObjectTrackerType" UserInfo={NSLocalizedDescription=Internal error: Exceeded maximum allowed number of Trackers for a tracker type: VNObjectTrackerType}
Am I using the API incorrectly, or does Vision perhaps have trouble handling too many consecutive object-tracking tasks? I'm curious if anyone has insight into why this is happening.
It appears that you hit the limit on the number of trackers that can be active in the system. The first thing to note is that a new tracker is created every time an observation with a new uuid property is used. You should be recycling the initial observation you used when you started the tracker, for as long as you want to use it, by feeding what you got from "results" for time T into the subsequent request you make for time T+1. When you no longer want to use that tracker (maybe the confidence score gets too low), there is a "lastFrame" property that can be set, which lets the Vision framework know that you are done with it. Trackers are also released when the sequence request handler is released.
To track the rectangle, you feed consecutive observations to the same VNSequenceRequestHandler instance, say handler. When the rectangle is lost, i.e. the new observation is nil in your handler function/callback, or you are getting some other tracking error, just re-instantiate the handler and continue, e.g. (sample code to show the idea):
private var handler = VNSequenceRequestHandler()
// <...>
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
guard
let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
let lastObservation = self.lastObservation
else {
self.handler = VNSequenceRequestHandler()
return
}
let request = VNTrackObjectRequest(detectedObjectObservation: lastObservation, completionHandler: self.handleVisionRequestUpdate)
request.trackingLevel = .accurate
do {
try self.handler.perform([request], on: pixelBuffer)
} catch {
print("Throws: \(error)")
}
}
Note that handler is var, not a constant.
Also, you may re-instantiate the handler in the actual handler function (like func handleVisionRequestUpdate(_ request: VNRequest, error: Error?)) in case the new observation object is invalid.
My problem with this was that I had a function calling perform on the same VNSequenceRequestHandler that the tracking was also calling perform on; because of that, I was running too many try self.visionSequenceHandler.perform(trackRequests, on: ciimage) calls concurrently. Make sure the VNSequenceRequestHandler is not getting hit at the same time by multiple performs.
Why does Breeze keep throwing 'Concurrent saves are not allowed' even with the manager.enableSaveQueuing(true) option enabled?
Simply put, because you're trying to issue multiple saves at the same time.
By default, Breeze does not allow concurrent saves.
In your case, you can override that option to allow concurrent saves as follows:
var so = new breeze.SaveOptions({ allowConcurrentSaves: true });
return manager.saveChanges(null, so)
    .then(saveSucceeded)
    .fail(saveFailed);
EDIT
Since you are using the "saveQueuing" plugin, ignore my first answer above; it only applies to concurrent saves.
I don't know how your code works, but you might take a few things into consideration when using save queuing:
Avoid issuing manager.saveChanges() more than once at a time in your code.
On the server side (where you would normally override methods like BeforeSaveEntity()), you can guard your SaveChanges() method with a mutual-exclusion lock statement; your code may look something like this:
// A single shared lock object for all save requests.
private static readonly object __lock = new object();

public void SaveChanges(SaveWorkState saveWorkState)
{
    lock (__lock) // blocks any attempt to issue concurrent saves on the same rows
    {
        // saving operations go here
    }
}
You might want to look at it in the NoDB Sample.
We're migrating SQL Server to Azure. Our DAL is Entity Framework 4.x based. We want to use the Transient Fault Handling Block to add retry logic for SQL Azure.
Overall, we're looking for the best 80/20 rule (or maybe more of a 95/5, but you get the point): we're not looking to spend weeks refactoring/rewriting code (there's a LOT of it). I'm fine re-implementing our DAL's framework, but not all of the code written and generated against it, any more than we have to, since this is here only to address a minority case. Mitigation >>> elimination of this edge case for us.
Looking at the possible options explained here on MSDN, Case #3 there seems the "quickest" to implement, but only at first glance. Upon pondering this solution a bit, it struck me that we might have problems with connection management, since it circumvents Entity Framework's built-in processes for managing connections (i.e. always closing them). It seems the "solution" is to make sure 100% of the Contexts we instantiate use using blocks, but with our architecture this would be difficult.
So my question: Going with Case #3 from that link, are hanging connections a problem, or is there some magic going on somewhere that I don't know about?
I've done some experimenting and it turns out that this brings us back to the old "managing connections" situation we're used to from the past, only this time the connections are abstracted away from us a bit and we must now "manage Contexts" similarly.
Let's say we have the following OnContextCreated implementation:
private void OnContextCreated()
{
const int maxRetries = 4;
const int initialDelayInMilliseconds = 100;
const int maxDelayInMilliseconds = 5000;
const int deltaBackoffInMilliseconds = initialDelayInMilliseconds;
var policy = new RetryPolicy<SqlAzureTransientErrorDetectionStrategy>(maxRetries,
TimeSpan.FromMilliseconds(initialDelayInMilliseconds),
TimeSpan.FromMilliseconds(maxDelayInMilliseconds),
TimeSpan.FromMilliseconds(deltaBackoffInMilliseconds));
policy.ExecuteAction(() =>
{
try
{
Connection.Open();
var storeConnection = (SqlConnection) ((EntityConnection) Connection).StoreConnection;
new SqlCommand("declare @i int", storeConnection).ExecuteNonQuery();
//Connection.Close();
// throw new ApplicationException("Test only");
}
catch (Exception e)
{
Connection.Close();
Trace.TraceWarning("Attempted to open connection but failed: " + e.Message);
throw;
}
}
);
}
In this scenario, we forcibly open the Connection (which was the goal here). As a result, the Context keeps it open across many calls, so we must tell the Context when to close the connection. Our primary mechanism for doing that is calling the Dispose method on the Context. So if we just let garbage collection clean up our contexts, we allow connections to remain hanging open.
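Concretely, that means every context instance needs deterministic disposal. A minimal sketch, where "MyEntities" and its members are placeholders for your generated ObjectContext type:
// "MyEntities" is a placeholder for your generated ObjectContext type.
using (var context = new MyEntities())
{
    var customers = context.Customers.ToList();
    // ... work with the context ...
} // Dispose() runs here and closes the connection we forcibly opened above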
I tested this by toggling the comments on the Connection.Close() in the try block and running a bunch of unit tests against our database. Without calling Close, we jumped up to ~275-300 active connections (from SQL Server's perspective). By calling Close, that number hovered at ~12. I then repeated this with a small number of unit tests, both with and without a using block for the Context, and got the same result (different numbers; I forget what they were).
I was using the following query to count my connections:
SELECT s.session_id, s.login_name, e.connection_id,
s.last_request_end_time, s.cpu_time,
e.connect_time
FROM sys.dm_exec_sessions AS s
INNER JOIN sys.dm_exec_connections AS e
ON s.session_id = e.session_id
WHERE login_name='myuser'
ORDER BY s.login_name
Conclusion: If you call Connection.Open() with this work-around to enable the Transient Fault Handling Block, then you MUST use using blocks for all contexts you work with; otherwise you will have problems that, with SQL Azure, will cause your database to be "throttled" and ultimately taken offline for hours!
The problem with this approach is it only takes care of connection retries and not command retries.
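If you also need command retries, one partial option is to wrap the materializing call itself in the same retry policy. A sketch with placeholder names ('policy' is built as in the answer above, and the context/property names are hypothetical); note this is only safe for idempotent reads:
// Wrap query *execution* in the retry policy, not just Connection.Open(),
// so transient failures during the command are also retried.
var customers = policy.ExecuteAction(() =>
    context.Customers.Where(c => c.IsActive).ToList());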
If you use Entity Framework 6 (currently in alpha), then there is some new built-in support for transient retries with Azure SQL Database (with a little bit of configuration): http://entityframework.codeplex.com/wikipage?title=Connection%20Resiliency%20Spec
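For what it's worth, that EF6 configuration boils down to registering an execution strategy in a DbConfiguration class. This is based on the EF6 connection resiliency spec linked above; verify it against the version you end up using:
using System.Data.Entity;
using System.Data.Entity.SqlServer;

// EF discovers DbConfiguration subclasses in the model's assembly automatically.
public class AzureDbConfiguration : DbConfiguration
{
    public AzureDbConfiguration()
    {
        // Retry transient SQL Azure failures with exponential backoff.
        SetExecutionStrategy("System.Data.SqlClient",
            () => new SqlAzureExecutionStrategy());
    }
}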
I've created a library which allows you to configure Entity Framework to retry using the Fault Handling Block without needing to change every database call; generally you will only need to change your config file and possibly one or two lines of code.
It works with Entity Framework or LINQ to SQL.
https://github.com/robdmoore/ReliableDbProvider