Error: `insertOne()` buffering timed out after 10000ms - database

I am at the very beginning of learning Mongoose, and I am having trouble saving a model to my database. I keep getting this error:
`insertOne()` buffering timed out after 10000ms
How can I fix this? This is my code:

const mongoose = require("mongoose");

const options = {
    autoIndex: false, // Don't build indexes
    maxPoolSize: 10, // Maintain up to 10 socket connections
    serverSelectionTimeoutMS: 5000, // Keep trying to send operations for 5 seconds
    socketTimeoutMS: 45000, // Close sockets after 45 seconds of inactivity
    family: 4 // Use IPv4, skip trying IPv6
};
mongoose.connect(uri, options); // uri is your MongoDB connection string
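For context: Mongoose buffers model operations until a connection is established, so this error usually means the connection was never made (bad URI, unreachable server) or the save ran before connecting. A minimal sketch of the usual fix, assuming a hypothetical Kitten model, is to await the connection before touching any model:

const mongoose = require("mongoose");

const Kitten = mongoose.model("Kitten", new mongoose.Schema({ name: String }));

async function run() {
    // Wait for the connection before any model operation; otherwise the
    // operation sits in Mongoose's buffer until the 10000ms timeout fires.
    await mongoose.connect(uri, options); // uri/options as defined above
    const kitten = new Kitten({ name: "Silence" });
    await kitten.save();
}

run().catch(console.error);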

Related

After adding more products to a WooCommerce WordPress store, GraphQL fails

Does anyone know how to fix this GraphQL error? It appeared after I added more WooCommerce products. The URL seems to be fine, because after deleting part of the WooCommerce products everything starts to work normally again.
ERROR
timeout of 30000ms exceeded
ERROR #gatsby-source-wordpress_111006
gatsby-source-wordpress It took too long for https://my-web-url/graphql to respond (longer than 30 seconds).
Either your URL is wrong, you need to increase server resources, or you need to decrease the amount of resources each
request takes.
You can configure how much resources each request takes by lowering your `options.schema.perPage` value from the default
of 100 nodes per request.
Alternatively you can increase the request timeout by setting a value in milliseconds to `options.schema.timeout`, the
current setting is 30000.
GraphQL request to https://my-web-url/graphql failed.
The output is quite self-explanatory: you've hit the timeout threshold because there is more data to fetch. As the message suggests, you can pass options to gatsby-source-wordpress to customize that limit:
{
    resolve: `gatsby-source-wordpress`,
    options: {
        schema: {
            timeout: 30000,
        },
    },
}
The timeout defaults to 30000ms.
Additionally, you can change the number of nodes fetched per page (perPage).
Mixing both customizations:
{
    resolve: `gatsby-source-wordpress`,
    options: {
        schema: {
            timeout: 30000,
            perPage: 100,
        },
    },
}
Play around with increasing those default values to see if your requests succeed.
To fix this issue you need to raise the timeout in your gatsby-config.js file by adding schema options:
options: { schema: { timeout: 1000000 } }
But doing only this will probably not be enough: if you are getting a timeout error, your WordPress server is either already overloaded or will be shortly. You also need to raise the memory allocated to your WordPress server. You can do that with FTP software like FileZilla, adding this line to your wp-config.php file:
define( 'WP_MEMORY_LIMIT', '512M' );
If you don't have that much data, you should choose a lower value like 256M.
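For orientation, a sketch of where these options sit in gatsby-config.js, assuming the plugin's url option points at the site's GraphQL endpoint (the values here are illustrative starting points, not recommendations):

// gatsby-config.js
module.exports = {
    plugins: [
        {
            resolve: `gatsby-source-wordpress`,
            options: {
                url: `https://my-web-url/graphql`, // endpoint from the error output above
                schema: {
                    timeout: 60000, // raise from the 30000ms default
                    perPage: 50,    // fetch fewer nodes per request
                },
            },
        },
    ],
};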

Cosmos DB error connect ETIMEDOUT when trying to call many times

I have an array of items (around 250 in my test). Each item contains an ID that I am trying to read from Cosmos DB. I am doing so in a simple for-loop:
for (let i = 0; i < arr.length; i++) {
    var func = find(context, arr[i].id)
}
Within find I simply call Cosmos DB to read the document. This works fine on individual items, or with small arrays (20-50), but with large arrays I get the following error:
{ FetchError: request to mycosmossite/docs failed, reason: connect ETIMEDOUT
    message: 'request to mycosmossite/docs failed, reason: connect ETIMEDOUT',
    type: 'system',
    errno: 'ETIMEDOUT',
    code: 'ETIMEDOUT',
    headers:
        { 'x-ms-throttle-retry-count': 0,
          'x-ms-throttle-retry-wait-time-ms': 0 } }
I am not sure why this is happening. I also get this error when using request-promise from time to time, but if I try again without changing anything it often works. I am not sure if the two are linked:
Exception: RequestError: Error: connect ETIMEDOUT
Can someone offer a solution so I can work on larger arrays here? Is this a throttling issue?
Thanks
I maintain the Azure Cosmos DB JS SDK. Are you using the SDK to make these calls? We don't throw ETIMEDOUT anywhere inside the SDK, so it is bubbling up from the Node.js or browser layer. Possibly you are overwhelming the networking stack or event loop by opening many downstream connections and promises.
As currently written, your code opens arr.length concurrent backend requests. Did you mean to await the result of each request? Example:
// note: await is only valid inside an async function
for (let i = 0; i < arr.length; i++) {
    var func = await find(context, arr[i].id)
}
You could also batch the requests using a package like p-map with its concurrency parameter.
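A sketch of that approach, assuming p-map is installed and reusing arr, context, and find from the question (the concurrency of 10 is an arbitrary starting point):

import pMap from 'p-map';

// Cap the number of in-flight requests instead of firing arr.length at once.
async function processAll() {
    const results = await pMap(
        arr,
        item => find(context, item.id),
        { concurrency: 10 } // tune to stay under the connection limit
    );
    return results;
}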

Set timeout for get request with HTTP.jl

I need to scan IP ranges and want to reduce the time spent waiting on timeouts.
How do I specify the request timeout with Julia's HTTP.jl package?
I found the readtimeout option in the docs for v0.6.15:
conf = (readtimeout = 10,
        pipeline_limit = 4,
        retry = false,
        redirect = false)
HTTP.get("http://httpbin.org/ip"; conf...)
But in the current stable version, v0.8.6, readtimeout seems to only appear on the server side.
Test code with readtimeout=2 on v0.8.6:
@time begin
    try
        HTTP.get("http://2.160.0.0:80/"; readtimeout=2)
    catch e
        @info e
    end
end
Output:
113.642150 seconds (6.69 k allocations: 141.328 KiB)
┌ Info: IOError(Base.IOError("connect: connection timed out (ETIMEDOUT)", -4039) during request(http://2.160.0.0:80/))
└ # Main In[28]:5
So the request took about 114 seconds, hence I think this option is currently unsupported.
Edit
I checked the source code (HTTP.jl) of the stable release:
Timeout options
- `readtimeout = 60`, close the connection if no data is received for this many
seconds. Use `readtimeout = 0` to disable.
with these examples given:
HTTP.request("GET", "http://httpbin.org/ip"; retries=4, cookies=true)
HTTP.get("http://s3.us-east-1.amazonaws.com/"; aws_authorization=true)
conf = (readtimeout = 10,
        pipeline_limit = 4,
        retry = false,
        redirect = false)
HTTP.get("http://httpbin.org/ip"; conf...)
HTTP.put("http://httpbin.org/put", [], "Hello"; conf...)
So it should be working...
It definitely does not do what is expected.
There are multiple things going on here:
First off, the default for idempotent requests in HTTP.jl is to retry 4 times, so an HTTP.get will only fail after 5 * readtimeout (with readtimeout = 2, that is 10 seconds at best). You can change this by passing retry = false in the arguments.
The second thing I noticed is that the check for timed-out connections runs on a very long interval (see the TimeoutRequest layer): it only checks every 8 to 12 seconds, so a timeout below 8 seconds does nothing. (I suspect these should be 8-12 milliseconds, not seconds as implemented.)
And finally, the docs for HTTP.request are missing in 0.8.6. I already made a PR to fix this.

Go HTTP connection reuse doesn't seem to work when uploading files

When I use my Go (1.8) HTTP library for normal GET/POST requests it works fine, but if I upload files to the server with it, the client creates a lot of sockets. In my test, files are cut into pieces and uploaded in 5 goroutines, and the client keeps around 250 sockets open. I already added defer resp.Body.Close(). Here is the key code:
const (
    MaxIdleConns        int = 40
    MaxIdleConnsPerHost int = 40
)

transport := &http.Transport{
    MaxIdleConns:          MaxIdleConns,
    MaxIdleConnsPerHost:   MaxIdleConnsPerHost,
    IdleConnTimeout:       15 * time.Second,
    ResponseHeaderTimeout: time.Hour,
}
client := &http.Client{
    Transport: transport,
    Timeout:   time.Second * 30,
}
Those 250 sockets are only recycled after the client exits.
It was just a silly mistake: my code didn't reuse the http.Client. Thanks for the help. Here is my new transport definition:
const (
    MaxIdleConnsPerHost = 10
    MaxIdleConns        = 100
)

transport := &http.Transport{
    DialContext: (&net.Dialer{
        Timeout:   30 * time.Second,
        KeepAlive: 30 * time.Second,
        DualStack: true,
    }).DialContext,
    MaxIdleConns:          MaxIdleConns,
    MaxIdleConnsPerHost:   MaxIdleConnsPerHost,
    IdleConnTimeout:       90 * time.Second,
    TLSHandshakeTimeout:   10 * time.Second,
    ExpectContinueTimeout: 5 * time.Second,
}
I now cache the http.Client instead of creating a new one for every HTTP request.
The Go documentation recommends reusing the client:
// The Client's Transport typically has internal state (cached TCP
// connections), so Clients should be reused instead of created as
// needed. Clients are safe for concurrent use by multiple goroutines.
https://golang.org/src/net/http/client.go

Throttling PubNub requests in AngularJS

I came across this article by Brian Ford, which talks about throttling Socket.io requests to help with digests in a large app: http://briantford.com/blog/huuuuuge-angular-apps.html
I recently built a Factory to support PubNub's JS API and am having difficulty implementing a throttle in JS to prevent an apply/digest from happening every time a message is received. Here is a working Plunker of the Factory in action: http://plnkr.co/edit/0w6dWQ4lcqTtdxbOa1EL?p=preview
I think the main issue I'm having is understanding how Brian's example works with regard to Socket.io's syntax, and how that can apply to the way PubNub handles message callbacks.
Thanks!
PubNub Rate Limiting vs. Throttling Messages in AngularJS
Before diving into the solution, we want to talk about two variations of potentially desirable behavior: rate limiting and throttling. To start, you may need to limit the frequency at which the UI is updated, or at which a function() is invoked.
If you want to skip to the plunker source code: http://plnkr.co/edit/Kv698u?p=preview
Rate limiting enforces a maximum messages-per-second by evenly spacing the events triggered by arriving messages. Rather than firing an event at whatever rate messages arrive in the pipe, you spread the event triggering evenly over X milliseconds. No messages are dropped.
Throttling is different: it intentionally drops messages, keeping only the most recent one received. Throttling goes one step further than rate limiting and excludes older messages altogether, tossing each one away and leaving only the most recent available to be processed at a set interval.
There is also the concept of capping, which over a timespan only allows X messages to arrive and then pauses events until the timespan completes. Capping is not the same as rate limiting or throttling: messages are processed at the same rate they arrive rather than being evenly distributed over an interval, all messages are recognized, and dropping after the quota is exceeded is optional.
Plunker JavaScript Source Editor for AngularJS
http://plnkr.co/edit/Kv698u?p=preview - Code View with AngularJS bindings.
Use Plunker to preview a working example!
PubNub Rate Limiting in AngularJS
This process requires a queue to receive and store all messages until they are processed in a slow, steady fashion. This method will not drop any messages; instead it slowly chews through each message at a set rate until all are processed, regardless of the network receive rate.
//
// Listen for messages and process all messages at a steady rate limit.
//
pubnub.subscribe({
    channel : "hello_world",
    message : limit( function(message) {
        // process your messages at a set interval here.
    }, 500 /* milliseconds */ )
});

//
// Rate Limit Function
//
function limit( fun, rate ) {
    var queue = [];
    setInterval( function() {
        var msg = queue.shift();
        msg && fun(msg);
    }, rate );
    return function(message) { queue.push(message); };
}
Notice the function of interest is limit() and how it is used in the message response callback of the subscribe call.
PubNub Throttling in AngularJS
This is a process which keeps only the most recent message, processing it at a regular interval while purposefully dropping all older messages received within that window of time.
//
// Listen for events and process the last message every 500ms.
//
pubnub.subscribe({
    channel : "hello_world",
    message : throttle( function(message) {
        // process your last message here every 500ms.
    }, 500 /* milliseconds */ )
});

//
// Throttle Function
//
function throttle( fun, rate ) {
    var last = null; // start as null so empty ticks are skipped
    setInterval( function() {
        last !== null && fun(last);
        last = null;
    }, rate );
    return function(message) { last = message; };
}
The throttle() function drops messages received within each window, always processing only the last received message at the set interval.
Or Combine Both
//
// Listen for events and process the last message each 500ms,
// along with the full queue of messages received in that window.
//
pubnub.subscribe({
    channel : "hello_world",
    message : thrimit( function( last_message, all_messages ) {
        // process your last message here each 500ms.
    }, 500 /* milliseconds */ )
});

//
// Throttle + Limit Function
//
function thrimit( fun, rate ) {
    var last = null;
    var queue = [];
    setInterval( function() {
        last !== null && fun( last, queue );
        last = null;
        queue = [];
    }, rate );
    return function(message) {
        last = message;
        queue.push(message);
    };
}
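Tying this back to the original digest question: a sketch of how the rate-limited callback might wrap its scope updates, assuming an AngularJS controller with $scope and a messages array in scope (those names are illustrative, not part of the PubNub API):

pubnub.subscribe({
    channel : "hello_world",
    message : limit( function(message) {
        // One $apply, and therefore one digest cycle, per 500ms tick,
        // no matter how quickly messages arrive on the channel.
        $scope.$apply(function() {
            $scope.messages.push(message);
        });
    }, 500 /* milliseconds */ )
});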
