My MongoDB database has a fast-growing number of active connections.
I wrote some code to test how the connection creation/closing flow works. This code sums up how I use the mgo library in my project.
package main

import (
    "fmt"
    "log"
    "time"

    "gopkg.in/mgo.v2"
    "gopkg.in/mgo.v2/bson"
)

func main() {
    // No connections yet:
    // db.serverStatus().connections.current = 6
    mongoSession := connectMGO("localhost", "27017", "admin")
    // 1 new connection created:
    // db.serverStatus().connections.current = 7

    produceDataMGO(mongoSession)
    produceDataMGO(mongoSession)
    produceDataMGO(mongoSession)
    produceDataMGO(mongoSession)
    // 4 new connections created and closed sequentially:
    // db.serverStatus().connections.current = 7

    go produceDataMGO(mongoSession)
    go produceDataMGO(mongoSession)
    go produceDataMGO(mongoSession)
    go produceDataMGO(mongoSession)
    // 4 new connections created and closed concurrently:
    // db.serverStatus().connections.current = 10

    time.Sleep(time.Hour * 24) // wait any amount of time
    // db.serverStatus().connections.current = 10
}

func connectMGO(host, port, dbName string) *mgo.Session {
    session, err := mgo.DialWithInfo(&mgo.DialInfo{
        Addrs:    []string{fmt.Sprintf("%s:%s", host, port)},
        Timeout:  10 * time.Second,
        Database: dbName,
        Username: "",
        Password: "",
    })
    if err != nil {
        log.Fatal(err)
    }
    return session
}

func produceDataMGO(conn *mgo.Session) {
    dbConn := conn.Copy() // may open a new socket if none is free in the pool
    defer dbConn.Close()  // releases the socket back to the pool

    if err := dbConn.DB("").C("test").Insert(bson.M{"value": ""}); err != nil {
        log.Println(err)
    }
}
I noticed something pretty weird that I don't understand: the behaviour differs depending on whether we create new connections synchronously or asynchronously.
If we create a connection synchronously, mongo closes the new connection immediately after .Close() is called.
If we create a connection asynchronously, mongo keeps the new connection alive even after .Close() is called.
Why is that?
Is there any other way to force-close the connection socket?
Will these open connections be auto-closed after a certain amount of time?
Is there any way to set a limit on the number of connections MongoDB can expand its pool to?
Is there any way to set up automatic trimming of the pool after a certain amount of time without high load?
It's connection pooling. When you "close" a session, it isn't necessarily closed; it may just be returned to the pool for re-use. In the synchronous example the pool never needs to expand, because you only use one connection at a time. In the concurrent example you use several connections at once, so the pool may decide it does need to expand. I would not consider 10 open connections to be cause for concern.
Try it with a larger test - say, 10 batches of 10 goroutines, as sketched below - and see how many connections are open afterward. If you have 100 connections open, something has gone wrong; if you have 10-20, pooling is working correctly.
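Here is a minimal sketch of that larger test, assuming the same localhost setup as the question; the runBatch helper and the batch/goroutine counts are mine, purely for illustration. It also uses Session.SetPoolLimit, which caps how far mgo will grow the pool (its default limit is 4096) and so answers the "can I limit the pool" question above:

package main

import (
    "log"
    "sync"
    "time"

    "gopkg.in/mgo.v2"
    "gopkg.in/mgo.v2/bson"
)

// runBatch runs n concurrent copies of work and waits for all of them.
func runBatch(n int, work func()) {
    var wg sync.WaitGroup
    for i := 0; i < n; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            work()
        }()
    }
    wg.Wait()
}

func main() {
    session, err := mgo.Dial("localhost:27017/admin")
    if err != nil {
        panic(err)
    }
    defer session.Close()

    // Cap the pool at 20 sockets instead of mgo's default of 4096.
    session.SetPoolLimit(20)

    // 10 batches of 10 goroutines: 100 concurrent inserts in total.
    for batch := 0; batch < 10; batch++ {
        runBatch(10, func() {
            s := session.Copy()
            defer s.Close()
            if err := s.DB("").C("test").Insert(bson.M{"value": ""}); err != nil {
                log.Println(err)
            }
        })
    }

    // Check db.serverStatus().connections.current on the server now; with
    // pooling working you should see on the order of 10-20 sockets, not 100.
    time.Sleep(time.Minute)
}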
An application is throwing the exception "Timeout period elapsed prior to obtaining a connection from the pool". I am using the code below; this snippet is called by many concurrent users and can receive up to 10,000 hits per second. I use Dapper to fetch the results from an Azure MS SQL database.
public async Task<List<Results>> GetDBResults(string Id, int skip, int take)
{
    var values = new DynamicParameters();
    values.Add("Id", Id);
    values.Add("Skip", skip);
    values.Add("Take", take);
    using var connection = GetConnection(AppSettingsProvider.TryGet("ConnectionString"));
    try
    {
        var sw = Stopwatch.StartNew();
        //connection.Open(); // not needed: Dapper opens the connection itself
        // QueryAsync is from Dapper
        var dbResult = await connection.QueryAsync<Results>("SP_name", param: values,
            commandType: CommandType.StoredProcedure, commandTimeout: Constants.CommandTimeout);
        var result = dbResult?.ToList();
        Console.WriteLine("execution time = {0} seconds\n", sw.Elapsed.TotalSeconds);
        return result;
    }
    finally
    {
        connection.Close();
    }
}

private SqlConnection GetConnection(string connectionString)
{
    var sqlConnection = new SqlConnection(connectionString);
    return sqlConnection;
}
I know 'using' will close and dispose the connection object. The connection is getting closed, but the pooled connection is not immediately available to the next request, so I also close the connection explicitly in the finally block. That let a few more concurrent requests execute successfully, but after that I get the timeout exception again.
Connection.Open is managed by Dapper, so there is no connection.Open() call in the code snippet.
We are getting the timeout issue once concurrent users cross more than about 200 hits.
Please let me know how to resolve this problem.
This would appear to be a pool-exhaustion issue. You have 10,000 requests per second, and the default connection pool size is 100, so even if everything else works perfectly (no CPU starvation, no GC stalls, etc.) each of the 100 connections must handle 100 requests per second, which leaves you only 10ms for the entire "acquire, execute, release" loop. That's pretty ambitious, to be honest, even on hardware with a good LAN - and if you're in the cloud ("Azure MS SQL"): forget it, unless you're paying for very, very high-tier services. You could try increasing the pool size, but I honestly wonder whether some kind of redesign is the way to look here. The fact that you're measuring the time in seconds emphasizes the scale of the problem:
Console.WriteLine("execution time = {0} seconds\n", sw.Elapsed.TotalSeconds);
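To put a number on it, you can size a pool with Little's law: connections needed = request rate × average time each request holds a connection. Assuming, purely for illustration, that each stored-procedure call holds its connection for 50ms:

10,000 req/s × 0.05 s = 500 connections

That is five times the default pool size, and raising Max Pool Size in the connection string that far mostly just moves the bottleneck onto the database server. This is why a redesign (caching, batching, queueing requests) is usually the better lever at this scale.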
I have a Grails/Groovy web app with database connection pooling. The settings are set up like this:
dataSource:
    url: "jdbc:postgresql://192.168.100.53:5432/bhub_dev"
    properties:
        jmxEnabled: true
        initialSize: 5
        maxActive: 25
        minIdle: 5
        maxIdle: 15
        maxWait: 10000
        maxAge: 600000
        timeBetweenEvictionRunsMillis: 5000
        minEvictableIdleTimeMillis: 60000
        validationQuery: SELECT 1
        validationQueryTimeout: 3
        validationInterval: 15000
        testOnBorrow: true
        testWhileIdle: true
        testOnReturn: false
        jdbcInterceptors: ConnectionState
        defaultTransactionIsolation: 2 # TRANSACTION_READ_COMMITTED
I'm using java-melody for diagnostics and monitoring, and I'm noticing some weird behavior. For example, when executing jobs that query the DB, the connection count can exceed the maxActive property. Why is this even possible?
Do I need to close the Grails connections explicitly? The job calls a service method that simply does a DB query via a withCriteria Grails call, like:
def activities = WorkActivity.withCriteria{
eq("workCategoryActive", true)
order("id", "desc");
}
and it seems that every time this runs, 2 new connections are opened, and they don't always close properly.
Also, on every page refresh there are some calls to the backend that execute queries, and sometimes even on a refresh 2 new connections seem to open up.
I'm pretty new to Grails development, so I have no idea whether I have to (or even can) close this withCriteria database connection.
Any bit of help is appreciated. The database is PostgreSQL.
EDIT: OK, so now I'm looking at the thread diagnostics in java-melody, and it seems the Tomcat pool-cleaner is in a waiting state - is that why the connection count isn't going down? Also, it seems that every time the job runs, 2 threads start and one gets stuck waiting. What the hell is going on?
You should take a look at how a connection pool works: basically, connections are not closed; they are kept open and reused. When the pool reaches its maximum (maxActive: 25 in your case), further requests wait for a free connection. It works this way because opening a connection is expensive. So when the docs say a connection is "closed", that means it goes back to the pool - it is NOT physically closed. See the Tomcat JDBC pool documentation for a fuller description.
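If connections really are being leaked (borrowed and never returned), the Tomcat JDBC pool that Grails uses can also reclaim them. As a sketch only - the property names are real Tomcat JDBC pool settings, but the values are illustrative, not a recommendation - you could extend the properties block from the question:

    properties:
        removeAbandoned: true        # reclaim connections that were borrowed but never returned
        removeAbandonedTimeout: 60   # seconds a connection may be held before it counts as abandoned
        logAbandoned: true           # log the stack trace of the borrowing code, to find the leak

With logAbandoned enabled, the pool reports exactly which code path borrowed the connection that never came back, which is usually the fastest way to answer "do I need to close this myself?".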
I want to create two functions: the first connects to the DB, and the second does a complete reconnection if the first fails.
In my experiment I turn off the DB at the start, so that the connect block fails and calls the reconnect block. After that I turn the DB on, expecting the connect block to succeed, but I get an exception.
Here is my code:
bool connect()
{
    if(connection is null)
    {
        scope(failure) reconnect(); // call reconnect on failure
        this.connection = mydb.lockConnection();
        writeln("connection done");
        return true;
    }
    else
        return false;
}

void reconnect()
{
    writeln("reconnection block");
    if(connection is null)
    {
        while(!connect) // keep trying until a connection is established
        {
            Thread.sleep(3.seconds);
            connectionsAttempts++;
            logError("Connection to DB is not active...");
            logError("Reconnection to DB attempt: %s", connectionsAttempts);
            connect();
        }
        if(connection !is null)
        {
            logWarn("Reconnection to DB server done");
        }
    }
}
The log (turning the DB on after a few seconds):
reconnection block
reconnection block
connection done
Reconnection to DB server done
object.Exception@C:\Users\Dima\AppData\Roaming\dub\packages\vibe-d-0.7.30\vibe-d\source\vibe\core\drivers\libevent2.d(326): Failed to connect to host 194.87.235.42:3306: Connection timed out [WSAETIMEDOUT ]
I can't understand why I am getting the exception after "Reconnection to DB server done".
There are two main problems here.
First of all, there shouldn't be any need for automatic retry attempts at all. If it didn't work the first time, and you don't change anything, there's no reason doing the same exact thing should suddenly work the second time. If your network is that unreliable, you have much bigger problems.
Secondly, if you are going to retry automatically anyway, that code's not going to work:
For one thing, reconnect calls connect TWICE on every failure: once at the end of the loop body and then immediately again in the loop condition, regardless of whether the connection succeeded. That's probably not what you intended.
But more importantly, you have a potentially infinite recursion going on there: connect calls reconnect if it fails. Then reconnect calls connect repeatedly, and each of those calls invokes reconnect AGAIN on failure, looping forever until the connection configuration that didn't work somehow magically starts working (or, perhaps more likely, until you blow the stack and crash).
Honestly, I'd recommend simply throwing all of that away: just call lockConnection (if you're using vibe.d) or new Connection(...) (if you're not) and be done with it. If your connection settings are wrong, trying the same settings again isn't going to fix them.
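If you do decide you need automatic retries anyway, the fix for both problems is to keep the logic flat and bounded: one loop, one connection attempt per iteration, no mutual recursion. Here is the shape of it, sketched in Go rather than D purely to illustrate the control flow (connectOnce and maxAttempts are made-up names):

package main

import (
    "errors"
    "log"
    "time"
)

// connectWithRetry makes at most maxAttempts calls to connectOnce,
// sleeping between attempts, and never recurses.
func connectWithRetry(connectOnce func() error, maxAttempts int) error {
    var err error
    for attempt := 1; attempt <= maxAttempts; attempt++ {
        if err = connectOnce(); err == nil {
            return nil // connected: exactly one dial per attempt
        }
        log.Printf("connection attempt %d failed: %v", attempt, err)
        time.Sleep(3 * time.Second) // back off before trying again
    }
    return err // bounded: give up instead of looping or recursing forever
}

func main() {
    // Stand-in dial function that always fails, to show the bounded retry.
    err := connectWithRetry(func() error {
        return errors.New("dial failed")
    }, 5)
    log.Println("final result:", err)
}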
lockConnection -- Is there supposed to be a matching "unlock"? – Rick James
No, the connection pool in question comes from vibe.d. When the fiber which locked the connection exits (usually meaning "when your server is done processing a request"), any connections the fiber locked automatically get returned to the pool.
My server listens on port 1234 for incoming connections. I keep an array of sockets and loop through it looking for a free socket (Closed = 0), growing the array to hold new incoming sockets when no free one is available; the index into this array later identifies each connection. I'm not sure that's a good approach, but it works fine and is stable until I stress the server with about 15 clients opened at the same time. The problem I'm facing is that some of the client apps get a connection time-out, and the server becomes unstable handling those simultaneous connections. I'm not sending much data, only 20 bytes on each connect event received from the server. I tried increasing the Backlog value in the listen call, but that didn't help either.
The clients do connect eventually, I just have to wait longer, and the server still responds to new connections later on (if I close all the client apps, for example, or open a new client app, it connects immediately). These connections also stay open; I do not close the socket from the client side.
I am using the CSocketPlus class.
Connection request event:
Private Sub Sockets_ConnectionRequest(ByVal Index As Variant, ByVal requestID As Long)
    Dim sckIndex As Integer
    sckIndex = GetFreeSocketIndex(Sockets)
    Sockets.Accept sckIndex, requestID
End Sub

Function GetFreeSocketIndex(Sockets As CSocketPlus) As Integer
    Dim i As Integer, blnFound As Boolean
    If Sockets.ArrayCount = 1 Then 'First check whether no arrays have been loaded yet (first connector)
        Sockets.ArrayAdd 1
        GetFreeSocketIndex = 1
        ReDim UserInfo(1)
        Exit Function
    Else
        'Otherwise loop through all arrays and find a free (not connected) one
        For i = 1 To Sockets.ArrayCount - 1
            If Sockets.State(i) <> sckConnected Then
                'Found one, use it
                blnFound = True
                Sockets.CloseSck i
                GetFreeSocketIndex = i
                If UBound(UserInfo) < i Then
                    ReDim Preserve UserInfo(i)
                End If
                Exit Function
            End If
        Next i
        'Didn't find any; load one and use it
        If blnFound = False Then
            Sockets.ArrayAdd i
            Sockets.CloseSck i
            GetFreeSocketIndex = i
            ReDim Preserve UserInfo(i)
        End If
    End If
End Function
Does anyone know why there are performance issues when multiple connections occur at the same time? Why does the server become slow to respond?
What would make the server accept faster?
I have also tried not accepting the connection (i.e. not calling Accept in the connection request event), but the issue is still the same.
EDIT: I added a debug variable to print the socket number in the FD_ACCEPT event of the CSocket class, and it seems that WndProc delays posting the messages when there are a lot of connections.
OK, the problem seems to be my connection. I moved my server to my RDP host, which has a 250 Mbps download / 200 Mbps upload connection, and it works very well there. I tested it with 100 clients making connections and every one of them connected immediately! I wonder why I have such issues when my home connection is 40/40 Mbps... hmm. Does anyone know why that happens?
Edit: It turned out to be these router options:
Enable Firewall
Enable DoS protection
I disabled the firewall (just for testing purposes) and everything works flawlessly!
So basically the router thinks there is some kind of DoS attack and slows down the traffic.
Possible Duplicate:
Entity framework 4 not closing connection in sql server 2005 profiler
Well, lots of developers on Stack Overflow are saying that I should not worry about closing my connection: my using statement will close the connection for me, here and here and all over the site. Unfortunately, I do not see it happening. Here is my code:
[Test, Explicit]
public void ReconnectTest()
{
    const string connString = "Initial Catalog=MyDb;Data Source=MyServer;Integrated Security=SSPI;Application Name=ReconnectTest;";
    for (int i = 0; i < 2000; i++)
    {
        try
        {
            using (var conn = new SqlConnection(connString))
            {
                conn.Open();
                using (var command = conn.CreateCommand())
                {
                    command.CommandText = "SELECT 1 as a";
                    command.CommandType = CommandType.Text;
                    command.ExecuteNonQuery();
                }
                //conn.Close();
                // optional breakpoint 1 below
            }
        }
        catch (SqlException e)
        {
            // breakpoint 2 below
            Console.WriteLine(e);
        }
        // breakpoint 3 below
    }
}
When I enable all breakpoints and start my test, the first iteration succeeds, and I hit breakpoint 3. At this point, the connection is still open: I see it in the Profiler, and sp_who2 outputs it too.
Let's suppose that at this point I am out at lunch, so my connection sits idle and our production server kills it. To imitate that, I kill the connection from SSMS.
So when I hit F5 and run the second iteration, my connection has been killed. Unfortunately, it does not reopen automatically, so ExecuteNonQuery throws the following exception: "transport-level error has occurred". When I run the third iteration, my connection actually opens: I see it as an event in Profiler, and sp_who2 outputs it as well.
Even when I uncomment my conn.Close() call, the connection still does not close, and when I kill it from SSMS, the next iteration still blows up.
What am I missing? Why doesn't the using statement close my connection? Why does Open() not give me a working connection right after the kill, yet succeed on the attempt after that?
This question originated from my previous one.
When you call SqlConnection.Dispose(), which you do via the using block, the connection is not closed per se; it is released back to the connection pool.
To avoid constantly building and tearing down connections, the connection pool keeps connections open for your application to reuse, so it makes perfect sense that the connection would still show as open.
As for the failed iteration: the pool has no way of knowing the server killed its connection until you try to use it, at which point you get the transport-level error and ADO.NET discards the broken connection from the pool; the next iteration then opens a fresh one, which is why the third attempt succeeds. Keeping a connection open is not a problem in itself - your application can certainly make more than a single concurrent connection.
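The same close-means-release semantics show up in most pooled database clients, not just ADO.NET. Purely as a cross-language illustration (the driver import and DSN here are placeholder assumptions, not part of the question), Go's database/sql behaves identically:

package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/lib/pq" // placeholder driver choice, for illustration only
)

func main() {
    db, err := sql.Open("postgres", "host=localhost dbname=test sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    var one int
    // QueryRow borrows a connection from the pool and returns it when done...
    if err := db.QueryRow("SELECT 1").Scan(&one); err != nil {
        log.Fatal(err)
    }

    // ...but "returned" means idle in the pool, not closed: the database
    // server still sees an open connection, exactly as Profiler does above.
    fmt.Println("open connections:", db.Stats().OpenConnections) // typically 1, not 0
}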