500 HTTP sessions result in low database connection utilization - sql-server

We have an Alfresco server running on Tomcat 6, and a look at the manager/html page tells us that there are approx. 500 active HTTP sessions during a usual workday.
Is it normal/expected that this kind of load utilizes only one of the 8 open JDBC connections?
I would expect a lot more load on the database, as all the node metadata cannot already be in the Ehcache.
My assumption would be that there is load on approximately 30 DB connections.
Am I completely wrong on this?
Alfresco 4.0.2.9, Tomcat 6, Java 6, Windows Server 2008 R2, MSSQL
#alfresco-global.properties
db.pool.initial=30
db.pool.max=300
db.pool.idle=-1
hibernate.jdbc.fetch_size=150

For HTTP, 500 simultaneous connections is not really that many. Remember that HTTP 1.1 keeps the connection open after the current request completes, to make subsequent requests faster, so these sessions are not necessarily doing anything.
Rather than simultaneous connections, you should consider how many simultaneous requests there are - that is, how often the server is actually processing two or more requests at the same moment. Only then is there a reason to use more than one database connection.
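To illustrate why idle sessions don't pin connections, here is a minimal sketch of the usual pattern (not Alfresco's actual code; the query is just illustrative), where a request borrows a pooled connection only while it runs:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

public class NodeCount {
    private final DataSource pool; // the container-managed JDBC pool

    public NodeCount(DataSource pool) {
        this.pool = pool;
    }

    public long countNodes() throws Exception {
        // The connection is held only for the duration of this try block and
        // goes straight back to the pool, so 500 idle HTTP sessions do not
        // translate into 500 (or even 30) busy JDBC connections.
        try (Connection c = pool.getConnection();
             PreparedStatement ps = c.prepareStatement("SELECT COUNT(*) FROM alf_node");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getLong(1);
        }
    }
}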

Related

Openliberty datasource always uses 1 database connection

I have configured Open Liberty (version 21) with an Oracle database connection as follows in the server.xml:
<dataSource jndiName="jdbc/myds" transactional="true">
    <connectionManager maxPoolSize="20" minPoolSize="5" agedTimeout="120s" connectionTimeout="10s"/>
    <jdbcDriver libraryRef="jdbcLib" />
    <properties.oracle URL="jdbc:oracle:thin:@..." user="..." password="..."/>
</dataSource>
The server starts and I can make queries to the database via my REST API, but I have noticed that only 1 database connection is ever active, and parallel HTTP requests result in database queries queuing up over that 1 connection.
I have verified this by monitoring the active open database connections in combination with slow queries (I make several REST calls in parallel). Only 1 connection is opened, and one query is processed after the other. How do I open a connection pool with, for example, 5-20 connections for parallel operation?
Based on your described usage, the connection pool should be creating connections as requests come in if there are no connections available in the free pool.
Your connectionTimeout is configured to be 10 seconds. One way to verify that your test is really running in parallel is to make two simultaneous requests to the server, where each request creates a connection, uses it, waits 11 seconds, and then closes the connection.
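For example (a minimal sketch; the resource path and the injected JNDI lookup are assumptions based on the server.xml above):

import java.sql.Connection;
import javax.annotation.Resource;
import javax.sql.DataSource;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

@Path("/conn-test")
public class ConnectionHoldResource {
    @Resource(lookup = "jdbc/myds") // the dataSource defined in server.xml
    DataSource ds;

    @GET
    public String hold() throws Exception {
        try (Connection c = ds.getConnection()) { // borrow one pooled connection
            Thread.sleep(11_000); // hold it longer than connectionTimeout (10s)
            return "released";
        }
    }
}

Hit the endpoint from two clients at once and compare the outcomes described below.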
If your requests are NOT running in parallel, you will not get any exception, since the second request won't start until after the first one has finished - that would be an issue with your test procedure.
If your requests are running in parallel and you do not get any exception output from Liberty, then Liberty is likely making multiple connections; that can be confirmed by enabling J2C trace.
See: https://openliberty.io/docs/21.0.0.9/log-trace-configuration.html
Enable: J2C=ALL
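For example, in server.xml (a minimal sketch; add trace file settings as needed):

<logging traceSpecification="*=info:J2C=all"/>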
If your requests are running in parallel and no more than one connection is being created, then you will get a ConnectionWaitTimeoutException. This could be caused by the driver not being able to create more than one connection, incorrect use of the Oracle Universal Connection Pool (UCP), or a number of other factors; I would need more information to debug that issue.

Finding out sources of connections to MongoDB cluster

The "Real Time Metrics" panel of my MongoDB Atlas cluster, shows 36 connections, even though I terminated all server apps that were supposed to be connected to it. Currently nothing should be connected to it, but I still see those 36 connections. I tried pausing the cluster and then resuming it - the connections came back. Is there any way for me to find out where are they coming from? OR, terminating all connections.
Each connection is supposed to send what is called "app metadata". This should always include:
The driver identifier (e.g. pymongo 1.2.3)
The platform of the client (e.g. linux amd64)
Additionally, you can provide your own information to be sent as part of the client metadata, which you can use to identify your application. See e.g. the :app_name option at https://docs.mongodb.com/ruby-driver/master/tutorials/ruby-driver-create-client/.
Atlas has internal processes that connect to cluster nodes, and cluster nodes also communicate with each other. All of these add to the connection count seen on each node.
To figure out where connections are coming from:
Read the server logs (which you have to download first) to obtain the client metadata sent with each connection.
Hopefully this will provide enough clues to identify cluster-to-cluster connections. You should also be able to tell those by their source IPs, which you should be able to dig out of the cluster configuration.
Atlas connections should be using either the Go or the Java driver; if you don't use those in your own applications, this is an easy way of telling them apart.
Add an app name to all of your application connections to eliminate those from the unknown ones, as sketched below.
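For instance, with the Java driver (a minimal sketch; the URI and application name are placeholders):

import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;

public class TaggedClient {
    public static void main(String[] args) {
        // applicationName is sent in the connection's client metadata, so it
        // shows up in the server logs and identifies this app's connections.
        MongoClientSettings settings = MongoClientSettings.builder()
                .applyConnectionString(new ConnectionString("mongodb+srv://user:pass@cluster0.example.net/"))
                .applicationName("inventory-service") // hypothetical app name
                .build();
        try (MongoClient client = MongoClients.create(settings)) {
            System.out.println(client.listDatabaseNames().first());
        }
    }
}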
There is no facility provided by the MongoDB server to terminate connections from clients. You can kill operations and sessions, but the connections used for those operations remain until the clients close them. When a client closes connections depends on the particular driver and its connection pool settings; see e.g. https://docs.mongodb.com/ruby-driver/master/tutorials/ruby-driver-create-client/#connection-pooling.

Connection timeout with SlashDB and Azure SQL DB

I've just installed SlashDB and connected to an Azure SQL DB successfully. Querying works and everything is fine. However, after a while, if I retry my previously working query, I get an error from SlashDB:
500 Internal Server Error (pyodbc.OperationalError) ('08S01', u'[08S01] [FreeTDS][SQL Server]Write to the server failed (20006) (SQLExecDirectW)')
I'm not writing anything to the server, if that matters. But if I retry the query immediately, it works. My deep analysis (= guess) is that SQL Server terminates the idle connection. Now, I'd like SlashDB to retry when this happens, instead of returning the error to the client. Is this possible?
Apparently Azure SQL DB could be breaking connections due to either its redundancy limitations or its 30-minute idle timeout.
https://azure.microsoft.com/en-us/blog/connections-and-sql-azure/
For performance reasons SlashDB does not establish a new connection for every request, but instead maintains a pool of connections.
MySQL has a similar, widely known behavior (a 60-minute idle timeout), and SlashDB actually has logic to attempt a reconnect in that case. It should implement the same for all database types, but I need to confirm that with the development team (and fix it if that is not the case).
In the meantime, you could either retry on the client side (see the sketch below) or send a periodic request to keep the connection from idling out.
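A minimal client-side retry sketch (the query URL is a placeholder, and matching on a bare 500 status is an assumption about how the error surfaces to the client):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RetryingQuery {
    private static final HttpClient HTTP = HttpClient.newHttpClient();

    // Retry once, since the question notes that an immediate retry succeeds
    // after SlashDB hits the stale pooled connection.
    static String fetch(String url) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> resp = HTTP.send(req, HttpResponse.BodyHandlers.ofString());
        if (resp.statusCode() == 500) {
            resp = HTTP.send(req, HttpResponse.BodyHandlers.ofString());
        }
        return resp.body();
    }
}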

Does UTL_HTTP have any performance issues if remote URI isn't available?

I'm using Oracle UTL_HTTP (in 11g) from a trigger to call an HTTP API, which works fine under normal circumstances.
If the remote HTTP API becomes unavailable, will the UTL_HTTP timeouts cause any performance issues for the Oracle database (i.e. do they affect any DB connections)?
For example, if my trigger would normally fire 60 times a minute and each HTTP call completes in 25 ms, this is fine. What happens if each HTTP call takes 30 seconds to time out? After 30 seconds there would be 30 HTTP calls waiting to time out - does Oracle keep this isolated enough, or would it begin to impact other DB users?
It will only affect the connection that is waiting on the UTL_HTTP call; there is no impact on other connections.
Did you see HTTP_REQUEST_TIME_OUT?
http://docs.oracle.com/cd/E11882_01/appdev.112/e40758/u_http.htm#ARPLS70957

SQL Server Timeouts First Time Application Loads

I've recently switched to running my development environment over our company's VPN using NetExtender. It now seems that my database-driven applications time out the first time they try to hit the database. After the timeout (30 sec or so) plus an additional 5-10 seconds, all DB calls succeed; during those 5-10 seconds, the timeout error response comes back immediately. It seems to be related to SQL Server needing to create a new database session for me: each time I need to be assigned a new client process ID, I time out. This is a huge problem when using ReSharper + NUnit as a test harness, as each test run creates a new instance of ReSharper's unit test runner, thus causing me to time out again. The server timeout seems to be in the area of 30 seconds, which is certainly generous enough for a connection to be established.
It sounds to me like it could be a DNS issue. If the primary DNS server is not properly configured and is inaccessible from the VPN client, lookups will time out there before passing on to the secondary.
Additionally, some VPNs allow you to access some local resources - this could put the DNS server on your own local network in play.
I think I'd try changing the DNS order and see if that does the trick.
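To confirm that the delay really is in establishing new sessions rather than in the queries, you could time a few fresh connections in a row (a minimal sketch; the host, credentials, and the mssql-jdbc driver on the classpath are assumptions):

import java.sql.Connection;
import java.sql.DriverManager;

public class ConnectTimer {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; loginTimeout is in seconds for the mssql-jdbc driver.
        String url = "jdbc:sqlserver://dbhost;databaseName=master;user=dev;password=secret;loginTimeout=90";
        // If only the first connection is slow, the cost is in session
        // establishment (e.g. a DNS lookup timing out), not in the queries.
        for (int i = 0; i < 3; i++) {
            long start = System.nanoTime();
            try (Connection c = DriverManager.getConnection(url)) {
                long ms = (System.nanoTime() - start) / 1_000_000;
                System.out.println("connection " + (i + 1) + " took " + ms + " ms");
            }
        }
    }
}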
