My server listens on port 1234 for incoming connections. I keep an array of sockets and loop through it looking for an already free socket (Closed = 0), growing the array to hold a new socket if no free one is available; the array index later identifies each connection on its own. I'm not sure whether that is a good approach, but it worked fine and stayed stable until I stress-tested the server with about 15 clients opened at the same time. The problem I am facing is that some of the client apps get a connection time-out and the server becomes unstable while handling those simultaneous connections. I am not sending much data, only 20 bytes on each connect event received from the server. I have tried increasing the backlog value in the listen call, but that didn't help either.
I have to wait longer for those clients to connect, but they do connect eventually, and the server still responds to new connections later on (if I close all those client apps, for example, and open a new client app, it connects immediately). These connections also stay open; I do not close the socket from the client side.
I am using the CSocketPlus class.
Connection request event
Private Sub Sockets_ConnectionRequest(ByVal Index As Variant, ByVal requestID As Long)
    Dim sckIndex As Integer
    sckIndex = GetFreeSocketIndex(Sockets)
    Sockets.Accept sckIndex, requestID
End Sub
Function GetFreeSocketIndex(Sockets As CSocketPlus) As Integer
    Dim i As Integer, blnFound As Boolean
    If Sockets.ArrayCount = 1 Then 'First we check if we have not loaded any arrays yet (First connector)
        Sockets.ArrayAdd 1
        GetFreeSocketIndex = 1
        ReDim UserInfo(1)
        Exit Function
    Else
        'Else we loop through all arrays and find a free (Not connected one)
        For i = 1 To Sockets.ArrayCount - 1
            If Sockets.State(i) <> sckConnected Then
                'Found one use it
                blnFound = True
                Sockets.CloseSck i
                GetFreeSocketIndex = i
                If UBound(UserInfo) < i Then
                    ReDim Preserve UserInfo(i)
                End If
                Exit Function
            End If
        Next i
        'Didn't find any load one and use it
        If blnFound = False Then
            Sockets.ArrayAdd i
            Sockets.CloseSck i
            GetFreeSocketIndex = i
            ReDim Preserve UserInfo(i)
        End If
    End If
End Function
Does anyone know why there are performance issues when multiple connections come in at the same time? Why does the server become slow to respond?
What would make the server accept connections faster?
I have also tried not accepting the connections at all, i.e. not calling Accept in the connection request event, but I still get the same issue.
EDIT: I added a debug variable to print the socket number in the FD_ACCEPT event of the CSocket class, and it looks like WndProc is slow to post the messages when there are a lot of simultaneous connections.
OK, the problem seems to be my connection. I moved the server to my RDP box, which has a 250 Mbps download / 200 Mbps upload connection, and it works very well there. I tested it with 100 clients making connections, and every one of them connected immediately! I wonder why I have such issues when my home connection is 40/40 Mbps... Does anyone know why that happens?
EDIT: It turns out to be the router's options:
Enable Firewall
Enable DoS protection
I disabled the firewall (just for testing purposes) and everything works flawlessly!
So basically the router thinks there is some kind of DoS attack going on and slows down the traffic.
I have a Grails/Groovy web app with database connection pooling. The settings are set up like this:
dataSource:
    url: "jdbc:postgresql://192.168.100.53:5432/bhub_dev"
    properties:
        jmxEnabled: true
        initialSize: 5
        maxActive: 25
        minIdle: 5
        maxIdle: 15
        maxWait: 10000
        maxAge: 600000
        timeBetweenEvictionRunsMillis: 5000
        minEvictableIdleTimeMillis: 60000
        validationQuery: SELECT 1
        validationQueryTimeout: 3
        validationInterval: 15000
        testOnBorrow: true
        testWhileIdle: true
        testOnReturn: false
        jdbcInterceptors: ConnectionState
        defaultTransactionIsolation: 2 # TRANSACTION_READ_COMMITTED
I'm using java-melody for diagnostics and monitoring, and I'm noticing some weird behavior. For example, when executing jobs that query the DB, the number of connections can go over the maxActive property. Why is this even possible?
Do I need to explicitly close the Grails connections? The job calls a service method that simply does a DB query via a withCriteria Grails call, like:
def activities = WorkActivity.withCriteria {
    eq("workCategoryActive", true)
    order("id", "desc")
}
and it seems that every time this is run, two new connections are opened, and they do not always close properly.
Also, on every page refresh there are some calls to the backend that execute queries, and sometimes even on a refresh two new connections seem to open up.
I'm pretty new to Grails dev, so I have no idea if I have to/can even close this withCriteria database connection.
Any bit of help is appreciated. The database is PGSQL.
EDIT: OK, so now I'm looking at the thread diagnostics in java-melody, and it seems the Tomcat pool cleaner is in a waiting state; is that why the connection count isn't going down? It also seems that every time that job runs, two threads start and one of them gets stuck waiting. What is going on?
You should take a look at how a connection pool works: basically, connections are not closed; they are opened once and then reused. When the pool reaches its maximum (maxActive in your config), further requests wait for a connection to become free. It works this way because opening a connection is expensive. When the docs say a connection is "closed", that means it goes back to the pool; it is not actually closed.
My MongoDB database has a fast-growing number of active connections.
I wrote some code to test how the connection creation/closing flow works. It sums up how I use the mgo library in my project.
package main

import (
    "time"
    "fmt"
    "gopkg.in/mgo.v2"
)

func main() {
    // No connections
    // db.serverStatus().connections.current = 6
    mongoSession := connectMGO("localhost", "27017", "admin")
    // 1 new connection created
    // db.serverStatus().connections.current = 7

    produceDataMGO(mongoSession)
    produceDataMGO(mongoSession)
    produceDataMGO(mongoSession)
    produceDataMGO(mongoSession)
    // 4 new connections created and closed
    // db.serverStatus().connections.current = 7

    go produceDataMGO(mongoSession)
    go produceDataMGO(mongoSession)
    go produceDataMGO(mongoSession)
    go produceDataMGO(mongoSession)
    // 4 new connections created and closed concurrently
    // db.serverStatus().connections.current = 10

    time.Sleep(time.Hour * 24) // wait any amount of time
    // db.serverStatus().connections.current = 10
}

func connectMGO(host, port, dbName string) *mgo.Session {
    session, _ := mgo.DialWithInfo(&mgo.DialInfo{
        Addrs:    []string{fmt.Sprintf("%s:%s", host, port)},
        Timeout:  10 * time.Second,
        Database: dbName,
        Username: "",
        Password: "",
    })
    return session
}

func produceDataMGO(conn *mgo.Session) {
    dbConn := conn.Copy()
    dbConn.DB("").C("test").Insert("")
    dbConn.Close()
}
I've noticed something pretty weird that I don't understand: the behaviour differs depending on whether we create the new connections synchronously or asynchronously (in goroutines).
If we create a connection synchronously, MongoDB closes that new connection immediately after the .Close() method is called.
If we create a connection asynchronously, MongoDB keeps that new connection alive even after the .Close() method is called.
Why is that?
Is there any other way to force-close the connection socket?
Will these open connections be auto-closed after a certain amount of time?
Is there any way to set a limit on the number of connections MongoDB can expand its pool to?
Is there any way to set up automatic trimming of the pool after a certain amount of time without high load?
It's connection pooling. When you "close" a session, it isn't necessarily closed; it may just be returned to the pool for re-use. In the synchronous example, it doesn't need to expand the pool; you're only using one connection at a time. In the concurrent example, you're using several connections at a time, so it may decide it does need to expand the pool. I would not consider 10 open connections to be cause for concern.
Try it with a larger test - say, 10 batches of 10 goroutines - and see how many connections are open afterward. If you have 100 connections open, something has gone wrong; if you have 10~20, then pooling is working correctly.
I want to create two functions: the first one connects to the DB, and the second one performs a full reconnection if the first one fails.
In my experiment I turn the DB off at startup so that the connect block fails and the reconnect block is called. After that I turn the DB on, expecting the connection block to succeed, but I get an exception.
Here is my code:
bool connect()
{
    if(connection is null)
    {
        scope(failure) reconnect(); // call reconnect if fail
        this.connection = mydb.lockConnection();
        writeln("connection done");
        return true;
    }
    else
        return false;
}

void reconnect()
{
    writeln("reconnection block");
    if(connection is null)
    {
        while(!connect) // continue till connection will not be established
        {
            Thread.sleep(3.seconds);
            connectionsAttempts++;
            logError("Connection to DB is not active...");
            logError("Reconnection to DB attempt: %s", connectionsAttempts);
            connect();
        }
        if(connection !is null)
        {
            logWarn("Reconnection to DB server done");
        }
    }
}
The log (turning the DB on after a few seconds):
reconnection block
reconnection block
connection done
Reconnection to DB server done
object.Exception#C:\Users\Dima\AppData\Roaming\dub\packages\vibe-d-0.7.30\vibe-d\source\vibe\core\drivers\libevent2.d(326): Failed to connect to host 194.87.235.42:3306: Connection timed out [WSAETIMEDOUT ]
I can't understand why I am getting an exception after: Reconnection to DB server done
There are two main problems here.
First of all, there shouldn't be any need for automatic retry attempts at all. If it didn't work the first time, and you don't change anything, there's no reason doing the same exact thing should suddenly work the second time. If your network is that unreliable, then you have much bigger problems.
Secondly, if you are going to retry automatically anyway, that code is not going to work:
For one thing, reconnect calls connect TWICE on every failure: once at the end of the loop body and then immediately again in the loop condition, regardless of whether the connection succeeded. That's probably not what you intended.
But more importantly, you have a potentially infinite recursion going on there: connect calls reconnect if it fails. Then reconnect calls connect repeatedly, and each of those times connect calls reconnect AGAIN on failure, looping forever until the connection configuration that didn't work somehow magically starts working (or, perhaps more likely, until you blow the stack and crash).
Honestly, I'd recommend simply throwing that all away: Just call lockConnection (if you're using vibe.d) or new Connection(...) (if you're not using vibe.d) and be done with it. If your connection settings are wrong, then trying the same connection settings again isn't going to fix them.
lockConnection -- Is there supposed to be a matching "unlock"? – Rick James
No, the connection pool in question comes from vibe.d. When the fiber which locked the connection exits (usually meaning "when your server is done processing a request"), any connections the fiber locked automatically get returned to the pool.
With Gatling 2, is it possible to repeat with connection re-use? How?
I have the code below, but it appears to open a new connection every time. I want to maintain x connections for some time.
val httpProtocol = http
  .baseURL("http://mysrv.pvt")
  .inferHtmlResources()

val uri1 = "http://mysrv.pvt"

val scn = scenario("Simulation").repeat(50) {
  pause(2 seconds, 20 seconds).
    exec(http("request_0")
      .get("/s1/serve.html")
    )
}

setUp(scn.inject(
  atOnceUsers(20000)
).protocols(httpProtocol))
First, your question is not accurate enough.
By default, Gatling has one connection pool per virtual user, so each of them re-uses connections between sequential requests and can have more than one concurrent connection open when fetching resources, which yours do since you enabled inferHtmlResources. This way, virtual users behave like independent browsers.
You can change this behavior and share a common connection pool (see the documentation). However, make sure this makes sense in your case: your workload profile will be very different, and the toll on the TCP stack on both the client/Gatling side and the server/your app side will be far lower, so check that this is actually how your application is used in production.
I'm writing an HTTP server in C using sockets. It can listen on multiple ports and works on a one-thread-per-port basis to run the listening loops, and each loop spawns another thread to deliver a response.
The code works perfectly when delivering standard HTTP responses. I have it set up to respond with an HTML page with JavaScript code that just refreshes the browser repeatedly in order to stress test the server. I've tested this with my computer running as the server and 4 other devices spamming it with requests at the same time.
No crashes, no dropped connections and no memory leaks. CPU usage never jumps beyond 5% running on a 2.0 GHz Intel Core 2 Duo in HTTP mode with 4 devices spamming requests.
I just added OpenSSL yesterday so it can deliver secure responses over HTTPS. That went fairly smoothly, as it seems that all I had to do was replace some standard socket calls with their OpenSSL counterparts for secure mode (based on the solution to this question: Turn a simple socket into an SSL socket).
There is one SSL context and SSL struct per connection. It does work but not very reliably. Again, each response happens on its own thread but multiple/rapid/concurrent requests in secure mode are getting dropped seemingly at random, though there are still no crashes or memory leaks in my code.
When a connection is dropped, the browser either says it's waiting for a response that never arrives (Chrome) or just says the connection was reset (Firefox).
For reference, here is the updated connection creation and closing code.
Connection creation code (main part of the listening loop):
// Note: sslCtx and sslConnection exist
// elsewhere in memory allocated specifically
// for each connection.
struct sockaddr_in clientAddr; // memset-ed to 0 before accept
int clientAddrLength = sizeof(clientAddr);
...
int clientSocketHandle = accept(serverSocketHandle, (struct sockaddr *)&clientAddr, &clientAddrLength);
...
if (useSSL)
{
    int use_cert, use_privateKey, accept_result;

    sslCtx = SSL_CTX_new(SSLv23_server_method());
    SSL_CTX_set_options(sslCtx, SSL_OP_SINGLE_DH_USE);

    use_cert = SSL_CTX_use_certificate_file(sslCtx, sslCertificatePath, SSL_FILETYPE_PEM);
    use_privateKey = SSL_CTX_use_PrivateKey_file(sslCtx, sslCertificatePath, SSL_FILETYPE_PEM);

    sslConnection = SSL_new(sslCtx);
    SSL_set_fd(sslConnection, clientSocketHandle);

    accept_result = SSL_accept(sslConnection);
}
... // Do other things and spawn request handling thread
... // Do other things and spawn request handling thread
Connection closing code:
int recvResult = 0;

if (!useSSL)
{
    shutdown(clientSocketHandle, SHUT_WR);
    while (TRUE)
    {
        recvResult = recv(clientSocketHandle, NULL, 0, 0);
        if (recvResult <= 0) break;
    }
}
else
{
    SSL_shutdown(sslConnection);
    while (TRUE)
    {
        recvResult = SSL_read(sslConnection, NULL, 0);
        if (recvResult <= 0) break;
    }
    SSL_free(sslConnection);
    SSL_CTX_free(sslCtx);
}

closesocket(clientSocketHandle);
Again, this works 100% perfectly for HTTP responses. What could be going wrong for HTTPS responses?
Update
I've updated the code with OpenSSL callbacks for multi-threaded environments, and the server is slightly more reliable, using code from an answer to this question: OpenSSL and multi-threads.
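For reference, the locking-callback setup from that answer boils down to roughly the sketch below. This is only a sketch: it assumes pthreads and OpenSSL 1.0.x or earlier (1.1.0+ handles locking internally), and the names ssl_locks / ssl_thread_setup are just illustrative, not from my actual code.

#include <openssl/crypto.h>
#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t *ssl_locks;   /* one mutex per OpenSSL internal lock */

static void ssl_locking_cb(int mode, int n, const char *file, int line)
{
    if (mode & CRYPTO_LOCK)
        pthread_mutex_lock(&ssl_locks[n]);
    else
        pthread_mutex_unlock(&ssl_locks[n]);
}

static unsigned long ssl_id_cb(void)
{
    return (unsigned long)pthread_self();   /* identify the calling thread */
}

void ssl_thread_setup(void)
{
    int i, n = CRYPTO_num_locks();
    ssl_locks = malloc(n * sizeof(pthread_mutex_t));
    for (i = 0; i < n; i++)
        pthread_mutex_init(&ssl_locks[i], NULL);
    CRYPTO_set_id_callback(ssl_id_cb);
    CRYPTO_set_locking_callback(ssl_locking_cb);
}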
I wrote a small command line program to spam the server with HTTPS requests, and it does not drop any connections with 5 instances of it running at the same time. Multiple instances of Firefox also don't appear to drop any connections.
What is interesting, however, is that connections are still being dropped by modern WebKit-based browsers. Chrome starts dropping connections after less than 30 seconds of spamming; Safari on an iPhone 4 (iOS 5.1) rarely makes it past 3 refreshes before saying the connection was lost; and Safari on an iPad 2 (iOS 5.0) seems to cope the longest but ultimately ends up dropping connections as well.
You should call SSL_accept() in your request handling thread. This will allow your listening thread to process the TCP accept/listen queue more quickly, and reduce the chance of new connections getting a RESET from the TCP stack because of a full accept/listen queue.
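In other words, keep only accept() in the listening loop and move the SSL_new / SSL_set_fd / SSL_accept sequence into the per-connection thread. A rough sketch of that split, assuming POSIX threads and a single SSL_CTX built once at startup (g_sslCtx and handle_client are illustrative names, not taken from your code):

#include <openssl/ssl.h>
#include <pthread.h>
#include <unistd.h>

extern SSL_CTX *g_sslCtx;   /* created once at startup, shared by all threads */

static void *handle_client(void *arg)
{
    int fd = (int)(long)arg;
    SSL *ssl = SSL_new(g_sslCtx);
    SSL_set_fd(ssl, fd);

    if (SSL_accept(ssl) == 1)   /* handshake happens here, off the listener thread */
    {
        /* ... read the request with SSL_read(), send the response with SSL_write() ... */
        SSL_shutdown(ssl);
    }
    SSL_free(ssl);
    close(fd);
    return NULL;
}

/* inside the listening loop: */
void on_accepted(int clientSocketHandle)
{
    pthread_t tid;
    pthread_create(&tid, NULL, handle_client, (void *)(long)clientSocketHandle);
    pthread_detach(tid);   /* listener goes straight back to accept() */
}

Building the SSL_CTX once also means the certificate and private key files are read once at startup rather than on every connection, as the connection creation code above currently does.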
The SSL handshake is compute-intensive. I would guess that your spammer is probably not using an SSL session cache, so it forces your server to use the maximum amount of CPU. That leaves the server CPU-starved when it comes to servicing the other connections, or new incoming ones.
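Session resumption can only help if the server keeps a session cache around between connections, which in turn requires the SSL_CTX to be shared rather than recreated per connection as in the code above. A sketch of what that shared-context setup might look like (create_server_ctx and the session-id string are illustrative, and error checking is omitted):

#include <openssl/ssl.h>

SSL_CTX *create_server_ctx(const char *certPath)
{
    static const unsigned char sid_ctx[] = "my-http-server";   /* illustrative value */
    SSL_CTX *ctx = SSL_CTX_new(SSLv23_server_method());

    SSL_CTX_use_certificate_file(ctx, certPath, SSL_FILETYPE_PEM);
    SSL_CTX_use_PrivateKey_file(ctx, certPath, SSL_FILETYPE_PEM);

    /* keep the internal server-side session cache, bounded in size,
       so repeat visitors can resume instead of doing a full handshake */
    SSL_CTX_set_session_cache_mode(ctx, SSL_SESS_CACHE_SERVER);
    SSL_CTX_set_session_id_context(ctx, sid_ctx, sizeof(sid_ctx) - 1);
    SSL_CTX_sess_set_cache_size(ctx, 1024);

    return ctx;
}

The context would be created once at startup and handed to every connection thread; whether resumption actually kicks in then depends on the clients (browsers generally attempt it, a naive benchmarking tool may not).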