I have a Grails/Groovy web app with database connection pooling. The settings are set up like this:
dataSource:
    url: "jdbc:postgresql://192.168.100.53:5432/bhub_dev"
    properties:
        jmxEnabled: true
        initialSize: 5
        maxActive: 25
        minIdle: 5
        maxIdle: 15
        maxWait: 10000
        maxAge: 600000
        timeBetweenEvictionRunsMillis: 5000
        minEvictableIdleTimeMillis: 60000
        validationQuery: SELECT 1
        validationQueryTimeout: 3
        validationInterval: 15000
        testOnBorrow: true
        testWhileIdle: true
        testOnReturn: false
        jdbcInterceptors: ConnectionState
        defaultTransactionIsolation: 2 # TRANSACTION_READ_COMMITTED
I'm using JavaMelody for diagnostics and monitoring, and I'm noticing some weird behavior. For example, when executing jobs that query the DB, the number of active connections can exceed the maxActive property. Why is this even possible?
Do I need to explicitly close the Grails connections? The job calls a service method that simply does a DB query via a withCriteria Grails call, like:
def activities = WorkActivity.withCriteria {
    eq("workCategoryActive", true)
    order("id", "desc")
}
and it seems that every time this is run, 2 new connections are opened, and they are not always released properly.
Also, on every page refresh there are some calls to the backend that execute queries, and sometimes even on a refresh 2 new connections seem to open up.
I'm pretty new to Grails development, so I have no idea whether I have to (or even can) close the connection used by withCriteria.
Any bit of help is appreciated. The database is PostgreSQL.
EDIT: OK, so now I'm looking at the thread diagnostics in JavaMelody, and it seems like the Tomcat pool cleaner is in a waiting state, and that's why the connection count isn't going down? Also, it seems that every time that job is run, 2 threads start and one gets stuck waiting. What is going on here?
You should take a look at how a connection pool works: connections are not physically closed after use; they are reused. Opening a connection is expensive work, so the pool keeps connections around. When the pool reaches its maximum (maxActive, 25 in your config), further requests wait for a connection to become free. When the docs say a connection is "closed", they mean it is returned to the pool, not that the underlying connection is actually closed.
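As a loose illustration of those semantics (this sketch uses Go's standard database/sql pool, not the Tomcat JDBC pool Grails uses here; the driver choice, credentials and limits are placeholders mirroring the config above): "closing" only hands the connection back to the pool, and once the maximum is reached, further requests block until one is returned.

package main

import (
    "database/sql"
    "log"

    _ "github.com/lib/pq" // an assumed PostgreSQL driver choice
)

func main() {
    // Placeholder DSN pointing at the same database as the config above.
    db, err := sql.Open("postgres",
        "postgres://user:pass@192.168.100.53:5432/bhub_dev?sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    db.SetMaxOpenConns(25) // roughly analogous to maxActive: 25
    db.SetMaxIdleConns(15) // roughly analogous to maxIdle: 15

    rows, err := db.Query("SELECT 1")
    if err != nil {
        log.Fatal(err)
    }
    rows.Close() // does NOT close the TCP connection; it goes back to the pool

    // A 26th concurrent query would block here until one of the 25
    // in-use connections is returned to the pool.
}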
Related
My MongoDB database has a fast-growing number of active connections.
I wrote some code to test how the connection creation/closing flow works. This code sums up how I use the mgo library in my project.
package main

import (
    "fmt"
    "time"

    "gopkg.in/mgo.v2"
)

func main() {
    // No connections
    // db.serverStatus().connections.current = 6
    mongoSession := connectMGO("localhost", "27017", "admin")
    // 1 new connection created
    // db.serverStatus().connections.current = 7

    produceDataMGO(mongoSession)
    produceDataMGO(mongoSession)
    produceDataMGO(mongoSession)
    produceDataMGO(mongoSession)
    // 4 new connections created and closed
    // db.serverStatus().connections.current = 7

    go produceDataMGO(mongoSession)
    go produceDataMGO(mongoSession)
    go produceDataMGO(mongoSession)
    go produceDataMGO(mongoSession)
    // 4 new connections created and closed concurrently
    // db.serverStatus().connections.current = 10

    time.Sleep(time.Hour * 24) // wait any amount of time
    // db.serverStatus().connections.current = 10
}

func connectMGO(host, port, dbName string) *mgo.Session {
    session, _ := mgo.DialWithInfo(&mgo.DialInfo{
        Addrs:    []string{fmt.Sprintf("%s:%s", host, port)},
        Timeout:  10 * time.Second,
        Database: dbName,
        Username: "",
        Password: "",
    })
    return session
}

func produceDataMGO(conn *mgo.Session) {
    dbConn := conn.Copy()
    dbConn.DB("").C("test").Insert("")
    dbConn.Close()
}
I noticed something pretty weird that I don't understand: the behaviour differs depending on how we create the new connections (sync/async).
If we create connections synchronously, MongoDB closes each new connection immediately after .Close() is called.
If we create connections asynchronously, MongoDB keeps the new connections alive even after .Close() is called.
Why is it so?
Is there any other way to force-close connection socket?
Will it auto-close these open connections after a certain amount of time?
Is there any way to set a limit on the number of connections MongoDB can expand its pool to?
Is there any way to set up automatic trimming of the pool after a certain amount of time without high load?
It's connection pooling. When you "close" a session, it isn't necessarily closed; it may just be returned to the pool for re-use. In the synchronous example, it doesn't need to expand the pool; you're only using one connection at a time. In the concurrent example, you're using several connections at a time, so it may decide it does need to expand the pool. I would not consider 10 open connections to be cause for concern.
Try it with a larger test - say, 10 batches of 10 goroutines - and see how many connections are open afterward. If you have 100 connections open, something has gone wrong; if you have 10~20, then pooling is working correctly.
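A rough sketch of that larger test, reusing connectMGO and produceDataMGO from the question's program (it lives in the same package and additionally needs "sync" in the import block; the batch counts are just the numbers suggested above):

// stressTestMGO runs 10 batches of 10 concurrent produceDataMGO calls
// against a shared session. Call it before the time.Sleep in main, then
// inspect db.serverStatus().connections.current from the mongo shell.
func stressTestMGO(mongoSession *mgo.Session) {
    for batch := 0; batch < 10; batch++ {
        var wg sync.WaitGroup
        for i := 0; i < 10; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                produceDataMGO(mongoSession) // Copy + Insert + Close, as above
            }()
        }
        wg.Wait() // finish this batch before starting the next one
    }
    // Roughly 10-20 open connections afterwards means the pool is
    // reusing sockets as expected; ~100 would indicate a leak.
}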
My app slows down on iOS 11 on an iPhone 6 Plus. (Other iOS versions run as expected.)
I know the SecTrustEvaluate() call is the reason the app slows down.
Running it on the main thread takes about 3 seconds, so I use GCD to move it to a background thread.
- (void)URLSession:(NSURLSession *)session didReceiveChallenge:(NSURLAuthenticationChallenge *)challenge completionHandler:(void (^)(NSURLSessionAuthChallengeDisposition disposition, NSURLCredential * _Nullable credential))completionHandler {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        BOOL allowConnect = NO; // server trust evaluation (SecTrustEvaluate) in here
        dispatch_async(dispatch_get_main_queue(), ^{
            if (allowConnect) {
                // call completionHandler with the server trust credential
            } else {
                // cancel the challenge
            }
        });
    });
}
Then it does not block the UI, but the server trust evaluation takes about 20 seconds.
Does anyone know about this issue? Please help me. Thanks.
I figured out my problem. It is not related to iOS 11; it was my fault.
I was creating one NSURLSession for each secure image download request to the same host.
Because establishing a TLS session is computationally expensive, that is what made my app slow down.
My solution is to create only one session for all download requests.
That way the result of the server trust evaluation is cached, and subsequent requests to the same host and port don't need to evaluate server trust again.
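The same idea sketched in Go rather than Objective-C (an assumed analogy, not the actual iOS code): reuse one shared client so the TLS handshake is paid once per host instead of once per request.

package main

import (
    "io"
    "log"
    "net/http"
)

// One shared client for all requests. Its transport keeps connections
// alive, so repeated HTTPS requests to the same host reuse the already
// established TLS connection instead of redoing the handshake.
var sharedClient = &http.Client{}

func fetchImage(url string) error {
    resp, err := sharedClient.Get(url)
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    // Drain the body so the connection can be returned for reuse.
    _, err = io.Copy(io.Discard, resp.Body)
    return err
}

func main() {
    // Hypothetical URL; every call after the first reuses the warm connection.
    for i := 0; i < 4; i++ {
        if err := fetchImage("https://example.com/image.png"); err != nil {
            log.Println(err)
        }
    }
}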
More info:
https://developer.apple.com/library/content/qa/qa1727/_index.html
Why is a HTTPS NSURLSession connection only challenged once per domain?
My server is listening on port 1234 for incoming connections. I made an array of sockets, and I loop through that array looking for an already free socket (Closed = 0), growing the array to hold a new incoming socket if no free socket is available. That array index later identifies each connection. I'm not sure if that is a good approach, but it works fine and stays stable until I try to stress my server with about 15 clients opened at the same time. The problem I am facing is that some of the client apps get a connection timeout, and the server becomes unstable handling those multiple connections at the same time. I am not sending much data, only 20 bytes on each connect event received from the server. I have tried to increase the backlog value on the listen call, but that didn't help either.
I have to wait longer for those clients to connect, but they connect eventually, and the server still responds to new connections later on (if I close all those client apps, for example, or open a new client app, it connects immediately). Also, these connections stay open and I do not close the socket from the client side.
I am using the CSocketPlus class.
Connection request event
Private Sub Sockets_ConnectionRequest(ByVal Index As Variant, ByVal requestID As Long)
    Dim sckIndex As Integer
    sckIndex = GetFreeSocketIndex(Sockets)
    Sockets.Accept sckIndex, requestID
End Sub

Function GetFreeSocketIndex(Sockets As CSocketPlus) As Integer
    Dim i As Integer, blnFound As Boolean
    If Sockets.ArrayCount = 1 Then 'First we check if we have not loaded any arrays yet (first connector)
        Sockets.ArrayAdd 1
        GetFreeSocketIndex = 1
        ReDim UserInfo(1)
        Exit Function
    Else
        'Else we loop through all arrays and find a free (not connected) one
        For i = 1 To Sockets.ArrayCount - 1
            If Sockets.State(i) <> sckConnected Then
                'Found one, use it
                blnFound = True
                Sockets.CloseSck i
                GetFreeSocketIndex = i
                If UBound(UserInfo) < i Then
                    ReDim Preserve UserInfo(i)
                End If
                Exit Function
            End If
        Next i
        'Didn't find any, load one and use it
        If blnFound = False Then
            Sockets.ArrayAdd i
            Sockets.CloseSck i
            GetFreeSocketIndex = i
            ReDim Preserve UserInfo(i)
        End If
    End If
End Function
Does anyone know why there is a performance issue when multiple connections arrive at the same time? Why does the server become slow to respond?
What would make the server accept connections faster?
I have also tried not accepting the connections (i.e. not calling Accept in the connection request event), but the issue is still the same.
EDIT: I added a debug variable to print the socket number in the FD_ACCEPT event in the CSocket class, and it seems that WndProc is slow to post the messages when there are a lot of connections.
OK, the problem seems to be my connection. I moved my server to my RDP machine, which has 250 Mbps download and 200 Mbps upload speed, and it works very well there. I tested it with 100 clients making connections and every one of them connected immediately! I wonder why I have such issues when my home connection is 40/40 Mbps... Does anyone know why that happens?
Edit: It seems to be these router options:
Enable Firewall
Enable DoS protection
I disabled the firewall (just for testing purposes) and everything works flawlessly!
So basically the router thinks there is some kind of DoS attack and slows down the traffic.
With Gatling 2, is it possible to repeat with connection re-use? How?
I have the code below, but it appears to open a new connection every time. I want to maintain x connections for some time.
val httpProtocol = http
  .baseURL("http://mysrv.pvt")
  .inferHtmlResources()

val uri1 = "http://mysrv.pvt"

val scn = scenario("Simulation").repeat(50) {
  pause(2 seconds, 20 seconds)
    .exec(http("request_0")
      .get("/s1/serve.html"))
}

setUp(scn.inject(
  atOnceUsers(20000)
).protocols(httpProtocol))
First, your question is not accurate enough.
By default, Gatling has one connection pool per virtual user, so each of them reuses connections between sequential requests and can have more than one concurrent connection when fetching resources, which yours do since you enabled inferHtmlResources. This way, virtual users behave like independent browsers.
You can change this behavior and share a common connection pool; see the documentation. However, you have to make sure this makes sense in your case: your workload profile will be very different, and the toll on the TCP stack on both the client (Gatling) and the server (your app) will be much lower, so make sure that's how your application is actually used in production.
I use the H2 database in embedded mode with SORM.
If the database is busy, SORM just continues to wait. There is no exception; nothing happens. This is misleading. :(
So how can I set the DB connection timeout?
How come it's misleading, and how come an exception would be better? If you need the call to be non-blocking, just use a Future, like so:
future{ Db.query[Artist].fetch() }.foreach{ artists => ... }
Consider this to be a non-blocking version of the following:
val artists = Db.query[Artist].fetch()