Tomcat physical memory usage keeps increasing and doesn't go down - tomcat6

I have set the Java options for Tomcat 6.0.35 to
"-Xms2048m -Xmx2048m -XX:PermSize=1024m -XX:MaxPermSize=1024m -XX:NewSize=512m"
Physical memory keeps increasing with every request; about 70 requests come into the server at a time.
It has grown to 2,384,012K without crashing.
I restarted the server when it reached 2,384,012K. There was no impact on the system and I don't see any errors.
I have a Java application that connects to MQ Series via MQQueueManager. It creates a new connection for every request and then closes it.
It also uses a JDBC connection to get details from the database.
Does Tomcat generate any log if there is a memory leak?
Please advise on how to validate this.
Thanks in advance.
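The per-request connect/close pattern described above can leak native resources if the close is skipped on an exception path. A minimal sketch of the safe pattern, using a stand-in AutoCloseable in place of the real MQQueueManager (which needs a live queue manager to construct):

```java
// Sketch: guarantee per-request cleanup with try-with-resources.
// FakeQueueManager is a stand-in for com.ibm.mq.MQQueueManager here.
public class RequestCleanupDemo {
    static class FakeQueueManager implements AutoCloseable {
        static int openCount = 0;
        FakeQueueManager() { openCount++; }                     // "connect"
        void put(String msg) {
            if (msg == null) throw new IllegalArgumentException("no message");
        }
        @Override
        public void close() { openCount--; }                    // "disconnect"
    }

    // Returns true if the resource was released even though the body threw.
    static boolean cleansUpOnError() {
        try (FakeQueueManager qmgr = new FakeQueueManager()) {
            qmgr.put(null); // simulate a failure mid-request
        } catch (IllegalArgumentException expected) {
            // request failed, but close() has already run
        }
        return FakeQueueManager.openCount == 0;
    }

    public static void main(String[] args) {
        System.out.println(cleansUpOnError()); // prints: true
    }
}
```

If any exception path skips the disconnect, the native MQ connection handle (and its memory) outlives the request, which would show up as exactly this kind of steady growth.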

Related

Connection tab of Google Cloud SQL instance taking forever to load the console interface

I want to access my cloud database from my computer, but the connection tab never finishes loading so that I can enter my IPv6 address. This is the second time I'm experiencing this issue, and my network is strong enough. It's now been 20 minutes, but the three dots are still indicating progress that never ends.
The first time it happened I had to leave my computer and go for a walk. This really frustrates me since it's in production and rapid updates should not be delayed.
How can I fix this?
POSSIBLE CAUSE:
It happens after I re-open MySQL Workbench and it fails because my IPv6 address has changed, possibly by my Internet Service Provider (ISP); I don't know of other possible reasons. After MySQL Workbench fails, I go to the console to update the new address, but then this problem occurs.
I think Cloud SQL's security layer (I don't know the exact name) is treating this as a malicious access attempt and so initiating this weird delay for immediately subsequent access. If so, this is impractical, since my computer does not tell me that my IPv6 address has changed; besides, normal, regular IPv6 updates shouldn't be treated as malicious, lest developers continue to suffer this issue.
EDIT: This time it finished loading after approximately 50 minutes.
Have you considered using the Cloud SQL proxy to connect to your instance instead of white-listing an IP? White-listing an IP can be insecure since it provides anyone on your network access, and inconvenient (as you have discovered) because if your IP changes you lose access.
The proxy uses a service account to provide authenticated access to your instance, so it will work regardless of your IP (as long as your service account has the correct permissions). Check out these instructions for a guide on starting it up.
(As a side note, it's difficult to tell why your connectivity tab is failing to load. It might be a browser add-on, or even a networking failure in your local network that is interfering. You can check the browser dev console to see if any errors appear.)

IIS 7 consuming connections and not releasing them

We have an ASP.NET website hosted by IIS 7. The website functions normally without any problems.
But after a while, like 1 or 2 months, the Sybase database that the website connects to produces this error whenever it is accessed by any application: Maximum number of connections already opened
At first we didn't realize that it was the website's application pool causing this, so we restarted the database and everything went back to normal.
But then it happened a second time, a third time... and we became aware that we just need to restart the application pool for those unreleased connections to be released.
I checked the source code; there is only one place with database-connection code, and it is very simple: it connects to the DB, gets the results, then closes the connection:
con.open()
x = con.getdata()
con.close()
By the way, after checking the application's log there weren't any errors or exceptions, so I'm pretty sure that con.close() is reached and executed.
So if we can rule out the possibility of unclosed connections in the source code, is there any other explanation for this?

How best to close connections and avoid inactive sessions while using C3P0?

I am using c3p0 for my connection pooling. The ComboPooledDataSource I use is configured as below.
@Bean
public DataSource dataSource() throws PropertyVetoException {
    ComboPooledDataSource dataSource = new ComboPooledDataSource();
    dataSource.setUser("user");
    dataSource.setDriverClass("oracle.jdbc.OracleDriver"); // declares PropertyVetoException
    dataSource.setJdbcUrl("test");
    dataSource.setPassword("test");
    dataSource.setMinPoolSize(10);
    dataSource.setMaxPoolSize(20);
    dataSource.setMaxStatements(100);
    return dataSource;
}
I am facing some issues with this. I get warnings that it might leak connections, and also see the error below from time to time as all the connections get used up.
java.sql.SQLException: Io exception: Got minus one from a read call
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:146)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:255)
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:387)
at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:439)
at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:165)
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:35)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:801)
at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:135)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:182)
at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:171)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:137)
at com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1014)
at com.mchange.v2.resourcepool.BasicResourcePool.access$800(BasicResourcePool.java:32)
at com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask.run(BasicResourcePool.java:1810)
at com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:547)
And from the DB stats I can see almost 290 inactive connections. I have around 8 applications deployed across two servers, all connecting to the same DB.
My queries are
How do I make sure the connections are closed, so that I don't end up with this many inactive connections?
Would configuring idle time and timeout resolve this issue?
What would happen if the server is brought down or Tomcat is shut down? Will the connections remain open?
Connections are mainly used during startup to load the cache, so is there a way of not using these connections afterwards?
What should I do about the existing inactive connections?
Given a maxPoolSize of 20 and eight deployments, you should expect to see up to 160 Connections (8 × 20), which may be inactive if the application has seen periods of traffic that have now subsided. You have configured nothing to encourage a fast scaling down of your pools: set maxIdleTime and/or maxIdleTimeExcessConnections and/or maxConnectionAge.
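For example (the values are illustrative, not recommendations), the scale-down settings named above map to these c3p0 setters, each taking seconds:

```java
// Illustrative c3p0 scale-down settings; tune the values for your workload.
ComboPooledDataSource dataSource = new ComboPooledDataSource();
dataSource.setMaxIdleTime(300);                 // close any Connection idle for > 5 minutes
dataSource.setMaxIdleTimeExcessConnections(60); // close Connections beyond minPoolSize idle > 1 minute
dataSource.setMaxConnectionAge(3600);           // retire any Connection older than 1 hour
```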
You should probably tell Spring how to close the DataSource you've defined: use @Bean(destroyMethod="close") instead of @Bean alone above your dataSource() method.
You have not configured any sort of Connection testing, so even broken Connections might remain in the pool. Please see Simple Advice On Connection Testing.
If the issue were a Connection leak, clients would eventually hang indefinitely, as the pool would be out of Connections to check out, but would already have reached maxPoolSize, and so wouldn't be able to acquire more from the DBMS. Are you seeing clients hang like that?
The way you avoid Connection leaks is, post-Java7, to always acquire Connections from your DataSource via try-with-resources. That is, use...
try ( Connection conn = myDataSource.getConnection() ) {
...
}
rather than just calling getConnection() outside a try-with-resources, in a method or try block that might throw an Exception. If you are using an older version of Java, you need to use the robust resource cleanup idiom: acquire the Connection in a try block and make sure conn.close() is always called in the finally block, regardless of any other failures. If you are not working with the DataSource directly, but letting Spring utilities work with it, hopefully those utilities are doing the right thing. But you should post whatever warning you are receiving that warns you of potential Connection leaks!
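For pre-Java-7 code, that cleanup idiom looks like the sketch below (a stand-in resource is used so the example is self-contained; with a real DataSource the resource would be a java.sql.Connection):

```java
// Pre-Java-7 cleanup idiom: close() always runs, even when the body throws.
public class FinallyCleanupDemo {
    static class StubConnection {
        static boolean closed = false;
        void query() { throw new RuntimeException("simulated query failure"); }
        void close() { closed = true; }
    }

    static boolean closesDespiteFailure() {
        StubConnection conn = new StubConnection(); // acquire
        try {
            conn.query();                           // may throw
        } catch (RuntimeException expected) {
            // handle or log the failure
        } finally {
            conn.close();                           // always executed
        }
        return StubConnection.closed;
    }

    public static void main(String[] args) {
        System.out.println(closesDespiteFailure()); // prints: true
    }
}
```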
If your application has little use for Connections after it has "warmed up", and you want to minimize the resource footprint, set minPoolSize to a very low number, and use maxIdleTime and/or maxIdleTimeExcessConnections and/or maxConnectionAge as above to ensure that the pool promptly scales down when Connections are no longer in demand. Alternatively you might close() the DataSource when you are done with its work, but you are probably leaving that to Spring.

File created from base64 string causes memory leak on server

The company that I work for has come across a pretty significant issue with one of our releases that has brought our project to a screeching halt.
A third-party application that we manage generates Word documents from base64-encoded strings stored in our SQL Server. The issue we are having is that in some cases, when one of these documents is sent via SMTP and the file is opened by the user, the file fails to open.
When the file fails, the server locks up. Memory and CPU usage then grow exponentially on the server, to the point that the only option is to kill the process server-side in order to prevent failure and downtime for the rest of the users on the network.
We are using Windows 7 with Microsoft Office 2013 and the latest version of SQL Server.
What is apparent is that the word document created from the base64 string is corrupt. What isn't apparent is how this appears to bring the entire server system down in one fell swoop.
Has anyone come across this issue before and if so, what was the solution that you came up with? We do not have access to the binaries of the 3rd party application that generates the files. We aren't able to reproduce the issue manually in order to come up with a working testcase to present to the 3rd party, so we are stumped. Any ideas?
I would need more details to understand your scenario. You say this is the order of events:
1. Word file is sent via SMTP (presumably an email to an Outlook client)
2. User receives email; opens attached file
3. Memory and CPU on server go to 100 percent. This creates downtime for rest of the users.
4. Need to kill this process to recover.
Since Outlook is a client-side application, it must be the Word document attachment that is causing this problem. Can you post a sample document in a public place, like a free OneDrive account? Presumably this document creates the problem. Maybe it has some VBA code? Try this with a blank document.

Intermittent connection timeouts to Solr server using SolrNet

I have a production webserver hosting a search, and another machine which hosts the Solr search server (on a subnet which is in the same room, so no network problems). All is fine >90% of the time, but I consistently get a small number of The operation has timed out errors.
I've increased the timeout in the SolrNet init to 30 seconds (!)
SolrNet.Startup.Init<SolrDataObject>(
new SolrNet.Impl.SolrConnection(
System.Configuration.ConfigurationManager.AppSettings["URL"]
) {Timeout = 30000}
);
...but all that happened is that I started getting this message instead of the Unable to connect to the remote server error I was seeing before. It seems to have made no difference to the number of timeout errors.
I can see nothing in any log (believe me I've looked!) and clearly my configuration is correct because it works most of the time. Anyone any ideas how I can find more information on this problem?
EDIT:
I have now increased the number of HttpRequest connections from 2 to 'a large number' (I see up to 10 connections) - but this has had no discernible effect on this problem.
The firewall is set to allow ANY connections between the two machines.
We've also checked the hardware with our server host and there are no problems on the connections, according to them.
EDIT 2:
We're still seeing this issue.
We're now logging the timeouts and they're mostly just over 30s - which is the SolrNet layer's timeout; some are 20s, though - which is the Tomcat default timeout period - which suggests it's something in the actual connection between the machines.
Not sure where to go from here, though - they're on a VLAN and we're specifically using the VLAN address - response time from pings is ALWAYS <1ms.
Without more information, I can only guess a few possible reasons:
You're fetching tons of documents per query, and it times out while transferring data.
You're hitting the ServicePoint.ConnectionLimit. If so, just increase this value. See also How can I programmatically remove the 2 connection limit in WebClient
You have some very facet-heavy requests or misusing Solr (e.g. not using filter queries). Check the qtime in the response. See the Solr performance wiki for more details.
Try setting this in .NET:
ServicePointManager.Expect100Continue = false;
or this:
ServicePointManager.SetTcpKeepAlive(true, 200000, 200000);
The latter sends keep-alive probes to the server to keep the connection open.
