I have an array of items (to test I used around 250). Within each item is an ID that I am trying to look up in Cosmos DB. I am doing so in a simple for loop:
for (let i = 0; i < arr.length; i++) {
    var func = find(context, arr[i].id)
}
Within find I simply call Cosmos DB to read the document. This works fine on individual items or with small arrays (20-50 items); however, with large arrays I get the following error:
{ FetchError: request to mycosmossite/docs failed, reason: connect ETIMEDOUT
message:
'request to mycosmossite/docs failed, reason: connect ETIMEDOUT',
type: 'system',
errno: 'ETIMEDOUT',
code: 'ETIMEDOUT',
headers:
{ 'x-ms-throttle-retry-count': 0,
'x-ms-throttle-retry-wait-time-ms': 0 } }
I am not sure why this is happening. I also get the error below when using request-promise from time to time, but if I try again without changing anything it often works. I am not sure if the two are linked:
Exception: RequestError: Error: connect ETIMEDOUT
Can someone offer a solution so I can work on larger arrays here? Is this a throttling issue?
Thanks
I maintain the Azure Cosmos DB JS SDK. Are you using the SDK to make these calls? We don't throw ETIMEDOUT anywhere inside the SDK, so it is bubbling up from the Node.js or browser layer. Possibly you are overwhelming the networking stack or event loop by opening many downstream connections and promises. As currently written, your code will open arr.length concurrent backend requests. Did you mean to await the result of each request? Example:
// inside an async function
for (let i = 0; i < arr.length; i++) {
    var func = await find(context, arr[i].id)
}
You could also batch the requests with a package like p-map, using its concurrency option.
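For example, here is a rough sketch with p-map (findAll is just a hypothetical wrapper, the concurrency value of 10 is illustrative, and this assumes find returns a promise):
// p-map v4 can be require()'d; newer versions are ESM-only (use import instead)
const pMap = require('p-map')

async function findAll(context, arr) {
    // at most 10 lookups in flight at a time, instead of arr.length all at once
    return pMap(arr, item => find(context, item.id), { concurrency: 10 })
}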
=== SAD PANDA ===
TypeError: Failed to fetch
=== SAD PANDA ===
While executing a Flow Cadence transaction in React, I got the above error.
My intention is that when I click the mintToken button, this transaction executes and mints the NFT.
const mintToken = async () => {
    console.log(form.name)
    const encoded = await fcl.send([
        fcl.proposer(fcl.currentUser().authorization),
        fcl.payer(fcl.authz),
        fcl.authorizations([fcl.authz]),
        fcl.limit(50),
        fcl.args([
            fcl.arg(form.name, t.String),
            fcl.arg(form.velocity, t.String),
            fcl.arg(form.angle, t.String),
            fcl.arg(form.rating, t.String),
            fcl.arg(form.uri, t.String)
        ]),
        fcl.transaction`
            import commitContract from 0xf8d6e0586b0a20c7

            // parameters must be declared so the fcl.args above can bind to them
            transaction(name: String, velocity: String, angle: String, rating: String, uri: String) {
                let receiverRef: &{commitContract.NFTReceiver}
                let minterRef: &commitContract.NFTMinter

                prepare(acct: AuthAccount) {
                    self.receiverRef = acct.getCapability<&{commitContract.NFTReceiver}>(/public/NFTReceiver)
                        .borrow()
                        ?? panic("Could not borrow receiver reference")
                    self.minterRef = acct.borrow<&commitContract.NFTMinter>(from: /storage/NFTMinter)
                        ?? panic("could not borrow minter reference")
                }

                execute {
                    let metadata: {String: String} = {
                        "name": name,
                        "swing_velocity": velocity,
                        "swing_angle": angle,
                        "rating": rating,
                        "uri": uri
                    }
                    let newNFT <- self.minterRef.mintNFT()
                    self.receiverRef.deposit(token: <-newNFT, metadata: metadata)
                    log("NFT Minted and deposited to Account 2's Collection")
                }
            }
        `
    ]);
    await fcl.decode(encoded);
}
This error being so useless is my fault, but I can explain what is happening here, because it also only happens in a really specific situation.
The Sad Panda error is a catch-all that happens when there is a catastrophic failure while fcl tries to resolve the signatures and it fails in a completely unexpected way. At the time of writing this, it usually shows up when people are writing their own authorization functions, so that was the first thing I looked at in your code example. Since you are using fcl.authz and fcl.currentUser().authorization (both of those are the same, by the way), your situation here isn't because of a custom authorization function. That leads me to believe this is either a configuration issue (fcl.authz is having a hard time doing its job correctly) or what fcl is getting back from the wallet doesn't line up with what it is expecting internally (most likely because of an out-of-date version of fcl).
I have also seen this come up when the version of the sdk that fcl uses doesn't line up with the version of the sdk that is installed (because some people have added @onflow/sdk as well as @onflow/fcl), so I would also check to make sure you only have fcl in your package.json and not the sdk as well (everything you should need from the sdk is exposed from fcl directly, meaning you shouldn't need the sdk as a direct dependency of your application).
I would first recommend making sure you are using the latest version of fcl (your code should still all work), then I would make sure you are only using fcl and not inadvertently using an older version of the sdk. If you are still getting the same error after that, could you create an issue on the GitHub repo so we can dedicate some resources to helping sort this out (and make it so you and others don't see this cryptic error in future versions of fcl)?
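For example, the dependencies in your package.json would list fcl but not the sdk (the version below is just a placeholder; use whatever the latest release is):
"dependencies": {
    "@onflow/fcl": "^1.0.0"
}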
I have an AngularJS application that I intend to have receive communications via SignalR from the server, most notably when data changes and I want the client to refresh itself.
The following is my hub logic:
[HubName("update")]
public class SignalRHub : Hub
{
public static void SendDataChangedMessage(string changeType)
{
var context = GlobalHost.ConnectionManager.GetHubContext<SignalRHub>();
context.Clients.All.ReceiveDataChangedMessage(changeType);
}
}
I use the following within my API after the data operation has successfully occurred to send the message to the clients:
SignalRHub.SendDataChangedMessage("newdata");
Within my AngularJS application, I create a service for SignalR with the following JavaScript, which is referenced in the HTML page:
angular.module('MyApp').value('signalr', $.connection.update);
Within the root for the AngularJS module, I set this up with the following so that it starts and I can see the debug output:
$(function () {
    $.connection.hub.logging = true;
    $.connection.hub.start();
});

$.connection.hub.error(function (err) {
    console.log('An error occurred: ' + err);
});
Then I've got my controller. It's got all sorts of wonderful things in it, but I'll show the basics as relate to this issue:
angular.module('MyApp').controller('MyController', function ($scope, signalr) {
    signalr.client.ReceiveDataChangedMessage = function dataReceived(changeType) {
        console.log('DataChangedUpdate: ' + changeType);
    };
});
Unfortunately, when I set a breakpoint in the JavaScript, this handler never executes, though the rest of the program works fine (including performing the operation in the API).
Some additional (hopefully) helpful information:
If I set a breakpoint in the SignalRHub class, the method is successfully called as expected and throws no exceptions.
If I look at Fiddler, I can see the polling operations but never see any sign of the call being sent to the client.
The Chrome console shows that the AngularJS client negotiates the websocket endpoint, it opens it, initiates the start request, transitions to the connected state, and monitors the keep alive with a warning and connection lost timeout. There's no indication that the client ever disconnects from the server.
I reference the proxy script available at http://localhost:port/signalr/hubs in my HTML file, so I disregard the first error I receive stating that no hubs have been subscribed to. Partly that's because the very next message in the console is the negotiation with the server, and because if I later inspect '$.connection.hub' in the console, I see the populated object.
I appreciate any help you can provide. Thanks!
It's not easy to reproduce this here, but it's likely that the controller function is invoked after the connection has started. You can verify that with a couple of breakpoints: one on the first line of the controller and one on the start call. If I'm right, that's why you are never called back: the callback on the client member must be defined before starting the connection. Try restructuring your code a bit to ensure the right order.
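For example, something along these lines (reusing the 'update' hub and handler name from your question; broadcasting on $rootScope is just one option for getting the event to controllers, and the 'dataChanged' event name is made up here):
angular.module('MyApp').run(function ($rootScope) {
    var hub = $.connection.update;

    // Register the client callback BEFORE starting the connection,
    // otherwise the client never subscribes to the hub's server-to-client messages.
    hub.client.ReceiveDataChangedMessage = function (changeType) {
        console.log('DataChangedUpdate: ' + changeType);
        $rootScope.$broadcast('dataChanged', changeType);
    };

    $.connection.hub.logging = true;
    $.connection.hub.start();
});
The controller can then react with $scope.$on('dataChanged', ...) instead of assigning to signalr.client after the connection is already up.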
I have the following piece of code using Restlet to talk to Google App Engine from an Android client.
ClientResource clientResource = new ClientResource(RESTLET_TEST_URL);
ProductResource productResource = clientResource.wrap(ProductResource.class);
productResource.store(mProduct);
Status status = clientResource.getResponse().getStatus();
Toast.makeText(this, "Status: "+ status.getDescription(), Toast.LENGTH_SHORT).show();
clientResource.release();
The .store() method is analogous to a PUT request. The weird thing is, this works fine when I connect to the development server, but against the actual App Engine site nothing is actually stored; I just get Status: OK, indicating that the request went through.
I can't troubleshoot because I can only do that on the dev server, and that is working fine.
Any ideas on what the problem may be or how to approach this ?
For reference, the code at the server end is :
if (product != null) {
    if (new DataStore().putToDataStore(product)) {
        log.warning("Product written to datastore");
    } else {
        log.warning("Product not found in datastore");
    }
}
This is just a simple write to the datastore using Objectify.
Turns out this is a known issue. See here.
The solution is to use clientResource.setEntityBuffering(true). However, note that this method is only available in the release candidate of the Android client and not in the stable release.
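Based on the snippet in the question, the call would go roughly here:
ClientResource clientResource = new ClientResource(RESTLET_TEST_URL);
// Buffer the request entity before it is sent (only available in the Android RC build)
clientResource.setEntityBuffering(true);
ProductResource productResource = clientResource.wrap(ProductResource.class);
productResource.store(mProduct);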
I am using the App Engine Connected Android plugin support and customized the sample project shown at Google I/O, and I ran it successfully. I wrote some Tasks from an Android device to the cloud successfully using the following code.
CloudTasksRequestFactory factory = (CloudTasksRequestFactory) Util
        .getRequestFactory(CloudTasksActivity.this, CloudTasksRequestFactory.class);
TaskRequest request = factory.taskRequest();
TaskProxy task = request.create(TaskProxy.class);
task.setName(taskName);
task.setNote(taskDetails);
task.setDueDate(dueDate);
request.updateTask(task).fire();
This works well and I have tested it.
What I am trying to do now: I have an array, String[][] addArrayServer, and I want to write all of its elements to the server. The approach I am using is:
NoteSyncDemoRequestFactory factory = Util.getRequestFactory(activity, NoteSyncDemoRequestFactory.class);
NoteSyncDemoRequest request = factory.taskRequest();
TaskProxy task;
for (int ik = 0; ik < addArrayServer.length; ik++) {
    task = request.create(TaskProxy.class);
    Log.d(TAG, "inside uploading task:" + ik + ":" + addArrayServer[ik][1]);
    task.setTitle(addArrayServer[ik][1]);
    task.setNote(addArrayServer[ik][2]);
    task.setCreatedDate(addArrayServer[ik][3]);
    // made one task
    request.updateTask(task).fire();
}
One task is uploaded for sure (the first element of the array), but it hangs when creating a new task instance. I am pretty new to Google App Engine. What's the right way to call the RPC so I can upload multiple entities quickly?
Thanks.
Well, the answer to this question is that request.fire() can be called only once per request object, but I was calling it every time in the loop, so it would only update once. The simple solution is to call fire() outside the loop.
NoteSyncDemoRequestFactory factory = Util.getRequestFactory(activity, NoteSyncDemoRequestFactory.class);
NoteSyncDemoRequest request = factory.taskRequest();
TaskProxy task;
for (int ik = 0; ik < addArrayServer.length; ik++) {
    task = request.create(TaskProxy.class);
    Log.d(TAG, "inside uploading task:" + ik + ":" + addArrayServer[ik][1]);
    task.setTitle(addArrayServer[ik][1]);
    task.setNote(addArrayServer[ik][2]);
    task.setCreatedDate(addArrayServer[ik][3]);
    // made one task
    request.updateTask(task);
}
request.fire(); // call fire() only once, after all the actions are queued
For more info, check out this question: GWT RequestFactory and multiple requests.
I've written a Silverlight class to consume the Bing Maps Routing Service. I'm creating an array of Waypoint objects from lat/long data that I have stored in a database and sending that to the CalculateRoute method of the web service, but I am unable to get a route back. The response always contains the error "An error occurred while processing the request." I'm stumped. Any ideas about how I could solve this, or at least get a more helpful error/exception out of the service? Here's the method that calls the service:
public void CalculateRoute(Waypoint[] waypoints)
{
    request = new RouteRequest();
    request.Waypoints = new ObservableCollection<Waypoint>();
    for (int idx = 0; idx < waypoints.Length; idx++)
    {
        request.Waypoints.Add(waypoints[idx] as Waypoint);
    }

    request.ExecutionOptions = new ExecutionOptions();
    request.ExecutionOptions.SuppressFaults = true;

    request.Options = new RouteOptions();
    request.Options.Optimization = RouteOptimization.MinimizeTime;
    request.Options.RoutePathType = RoutePathType.Points;
    request.Options.Mode = TravelMode.Walking;
    request.Options.TrafficUsage = TrafficUsage.TrafficBasedRouteAndTime;

    _map.CredentialsProvider.GetCredentials(
        (Credentials credentials) =>
        {
            request.Credentials = credentials;
            RouteClient.CalculateRouteAsync(request);
        });
}
I then have a callback that handles the response, but I have been unable to get a successful response. I've tried making sure maxBufferSize and maxReceivedMessageSize are set correctly, and that the timeouts are, too, but to no avail. Any help would be much appreciated.
It appears that this line:
request.Options.TrafficUsage = TrafficUsage.TrafficBasedRouteAndTime;
was the culprit. Apparently, if you have that option set and you request a route somewhere that doesn't have traffic data, the request dies rather than just ignoring the option.
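So one workaround is to leave that option off (or only enable it where you know traffic data exists), for example (assuming the enum exposes a None value):
request.Options = new RouteOptions();
request.Options.Optimization = RouteOptimization.MinimizeTime;
request.Options.RoutePathType = RoutePathType.Points;
request.Options.Mode = TravelMode.Walking;
// Omit TrafficUsage, or set it explicitly to the neutral value:
request.Options.TrafficUsage = TrafficUsage.None;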