Go HTTP connection reuse doesn't seem to work when uploading files

When I use my Go (1.8) HTTP library to make normal GET/POST requests, it works fine. But if I upload a file to the server with the same library, the client creates a lot of sockets. In my test, files are cut into pieces and uploaded in 5 goroutines, and the client keeps around 250 sockets open. I already added defer resp.Body.Close(). Here is the key code:
const (
    MaxIdleConns        int = 40
    MaxIdleConnsPerHost int = 40
)
transport := &http.Transport{
    MaxIdleConns:          MaxIdleConns,
    MaxIdleConnsPerHost:   MaxIdleConnsPerHost,
    IdleConnTimeout:       15 * time.Second,
    ResponseHeaderTimeout: time.Hour,
}
client := &http.Client{
    Transport: transport,
    Timeout:   time.Second * 30,
}
Those 250 sockets are only reclaimed after the client exits.

It was just a silly mistake: my code didn't reuse the http.Client. Thanks for the help. Here is my new transport definition:
const (
    MaxIdleConnsPerHost = 10
    MaxIdleConns        = 100
)
transport := &http.Transport{
    DialContext: (&net.Dialer{
        Timeout:   30 * time.Second,
        KeepAlive: 30 * time.Second,
        DualStack: true,
    }).DialContext,
    MaxIdleConns:          MaxIdleConns,
    MaxIdleConnsPerHost:   MaxIdleConnsPerHost,
    IdleConnTimeout:       90 * time.Second,
    TLSHandshakeTimeout:   10 * time.Second,
    ExpectContinueTimeout: 5 * time.Second,
}
I now cache the http.Client instead of creating a new http.Client for every request.
The Go documentation recommends reusing the client:
// The Client's Transport typically has internal state (cached TCP
// connections), so Clients should be reused instead of created as
// needed. Clients are safe for concurrent use by multiple goroutines.
https://golang.org/src/net/http/client.go

Related

Error: insertOne() buffering timed out after 10000ms

I am at the very beginning of learning Mongoose, and I am having trouble saving this model to my database; I keep getting this error:
insertOne() buffering timed out after 10000ms
This is my code — how can I find a solution for this?
const options = {
    autoIndex: false, // Don't build indexes
    maxPoolSize: 10, // Maintain up to 10 socket connections
    serverSelectionTimeoutMS: 5000, // Keep trying to send operations for 5 seconds
    socketTimeoutMS: 45000, // Close sockets after 45 seconds of inactivity
    family: 4 // Use IPv4, skip trying IPv6
};
mongoose.connect(uri, options);
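A common cause of this error is that the model operation runs before mongoose.connect() has actually established a connection: Mongoose buffers the operation and gives up after 10 seconds. A minimal sketch of waiting for the connection first (the User model and the uri value are hypothetical placeholders, not from the question):

```javascript
const mongoose = require("mongoose");

async function run() {
    // Wait for the connection before issuing any queries. If connect()
    // never resolves (wrong URI, database unreachable), buffered
    // operations such as save()/insertOne() time out after 10000 ms.
    await mongoose.connect(uri, options);

    const user = new User({ name: "test" }); // hypothetical model
    await user.save(); // runs only once the connection is up
}

run().catch(console.error);
```

If the timeout persists even with the connection awaited, the URI or server selection settings are usually the next thing to check.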

Gatling with Scala and Gradle: how to set requestTimeout

I'm using the Gatling Gradle plugin and I'm trying to increase the default timeout.
https://gatling.io/docs/current/extensions/gradle_plugin/
This doesn't work.
I constantly get:
> i.g.h.c.i.RequestTimeoutException: Request timeout to localhost/127.0.0.1:8080 after 60000 ms    47 (94,00%)
My gatling.conf is:
gatling {
    core {
        http {
            pooledConnectionIdleTimeout = 600000 # Timeout in millis for a connection to stay idle in the pool
            requestTimeout = 1000000 # Timeout in millis for performing an HTTP request
        }
    }
}
I tried corrupting my gatling.conf on purpose, and the build fails:
/build/resources/gatling/gatling.conf: 8: Key 'qd qw qd qd' may not be followed by token: 'core' (if you intended 'core' to be part of a key or string value, try enclosing the key or value in double quotes)
So Gatling really does read my file, but it doesn't override the setting.
Does anyone know how to override it?
Your configuration file is wrong.
Your file, properly formatted:
gatling {
    core {
        http {
            requestTimeout = 1000000
        }
    }
}
How it should be, as shown in the documentation:
gatling {
    core {
        # core options
    }
    http {
        requestTimeout = 1000000
    }
}

Set timeout for a GET request with HTTP.jl

I need to scan IP ranges and want to reduce the time spent waiting on timeouts.
How do I specify the request timeout with Julia's HTTP.jl package?
I have found the readtimeout option in the docs for v0.6.15:
conf = (readtimeout = 10,
        pipeline_limit = 4,
        retry = false,
        redirect = false)
HTTP.get("http://httpbin.org/ip"; conf...)
But in the current stable version, v0.8.6, readtimeout seems to only appear on the server side.
Test code with readtimeout=2 and v0.8.6:
@time begin
    try
        HTTP.get("http://2.160.0.0:80/"; readtimeout=2)
    catch e
        @info e
    end
end
Output:
113.642150 seconds (6.69 k allocations: 141.328 KiB)
┌ Info: IOError(Base.IOError("connect: connection timed out (ETIMEDOUT)", -4039) during request(http://2.160.0.0:80/))
└ # Main In[28]:5
So the request took about 114 seconds; hence I think this option is currently unsupported.
Edit
I checked the source code (HTTP.jl) of the stable release:
Timeout options
- `readtimeout = 60`, close the connection if no data is received for this many
  seconds. Use `readtimeout = 0` to disable.
with these examples given:
HTTP.request("GET", "http://httpbin.org/ip"; retries=4, cookies=true)
HTTP.get("http://s3.us-east-1.amazonaws.com/"; aws_authorization=true)
conf = (readtimeout = 10,
        pipeline_limit = 4,
        retry = false,
        redirect = false)
HTTP.get("http://httpbin.org/ip"; conf...)
HTTP.put("http://httpbin.org/put", [], "Hello"; conf...)
So it should be working...
It definitely does not do what is expected.
There are multiple things going on here:
First off, the default for idempotent requests in HTTP.jl is to retry 4 times, so an HTTP.get will only fail after 5 * readtimeout. You can change this by passing retry = false in the arguments.
Second, I noticed that the check for timed-out connections runs on a very long interval (see the TimeoutRequest layer): it only checks every 8 to 12 seconds, so a timeout below 8 seconds does nothing. (I suspect these should be 8 to 12 milliseconds, not seconds as implemented.)
And finally, the docs for HTTP.request are missing in 0.8.6. I have already made a PR to fix this.

Is there any way to implement server-side caching using Apollo-server 2.x?

Currently I am using "GrandStack" for my application. The challenge I am facing is with caching: I want to maintain a cache on both sides, client and back end.
At Client:
I am using React JS with Apollo Client. By default, Apollo Client maintains a store at the application level (the whole application is wrapped with the client using ApolloProvider).
My question here: if I navigate to any already-visited page, that data should be served from the cache, even on a page refresh.
At Backend:
I am using Apollo Server 2 + Express + Neo4j as the DB.
Is there any way to cache client requests at the server, so that if a user sends the same request again, the data comes from the server cache?
Please help me with some reference code. Thanks in advance.
For the same scenario I implemented an LRU cache, storing the query as the key and the response as the value.
const LRU = require("lru-cache")
const lruCache = new LRU({
    max: 1000, // maximum number of entries
    length: (n, key) => {
        return n * 2 + key.length
    },
    dispose: function (key, n) {
        // n.close()
    },
    maxAge: 1000 * 60 * 60
})
To set a value: lruCache.set(key, value); to get one: const data = lruCache.get(key).
The "key" is your request.
There are other options as well, but this is the most popular (that's what I believe).
URL - https://www.npmjs.com/package/lru-cache

XSockets does not connect from Firefox

I need to use WebSockets for some interactions with the user. I have pretty much copy-pasted the solution from here - http://xsockets.net/blog/angular-js-xsocketsnet - and ran into an issue with Firefox (27.0.1).
When I try to make this call (TwoWayBinding is my XSockets controller; I'm using .NET MVC on the host side):
var connect = function (url) {
    var deferred = $q.defer();
    socket = new XSockets.WebSocket(url);
    socket.on(XSockets.Events.open, function (conn) {
        $rootScope.$apply(function () {
            deferred.resolve(conn);
        });
    });
    return deferred.promise;
};
connect("ws://localhost:49200/TwoWayBinding").then(function (ctx) {
    isConnected = true;
    queued.forEach(function (msg, i) {
        publish(msg.t, msg.d);
    });
    queued = [];
});
I always get this error in Firebug:
Firefox can't establish a connection to the server at ws://localhost:49200/TwoWayBinding.
this.webSocket = new window.WebSocket(url, subprotocol || "XSocketsNET");
The same code works well in Chrome: it connects and I receive the messages sent from the host. The mentioned methods are wrapped in an Angular service, but that all works, so I don't think it is the problem.
One thing I was able to figure out from Fiddler was this:
Chrome:
#     Result  Protocol  Host                       URL                                                                        Body  Process
3     200     HTTP      Tunnel to localhost:49200                                                                             0     chrome:3976
6     101     HTTP      localhost:49200            /TwoWayBinding?XSocketsClientStorageGuid=5cf5c99aafd141d1b247ed70107659e0  0     chrome:3976
Firefox:
#     Result  Protocol  Host                       URL             Body  Process
1740  200     HTTP      Tunnel to localhost:49200                  0     firefox:1420
1741  -       HTTP      localhost:49200            /TwoWayBinding  -1    firefox:1420
Simply put, there is an additional parameter, XSocketsClientStorageGuid, in the Chrome session which does not occur in the Firefox one. I'm not sure if that has any impact or if I'm completely wrong, but I will appreciate any advice if somebody has experienced the same issue.
Update:
It looks like the critical line is this one:
socket = new XSockets.WebSocket(url);
as the socket is not created properly in Firefox. But I still don't know the cause.
What version are you running? Did you make a new installation of XSockets.NET using the NuGet package, or did you use the git example mentioned in the question above?
I just did a test on FF 26.0 and 27.0.1, and it went well using this piece of example code:
http://xsockets.github.io/XSockets.JavaScript.API/test/index.html
I will have a look at the old Angular example ASAP and make sure it is fixed if there is a problem!
Kind regards
Magnus
