I am using a Spring-based application with MS SQL Server, and I use Hibernate or native SQL to fetch data from the database. However, one of the problems I am facing is a session timeout after around 2 minutes. The session timeout configured for the application in web.xml is 20 minutes, and ideally an idle session should not get logged out. But whenever there is a database-related operation that takes more than 2 minutes to execute, the session is killed. Can someone help me with this?
Using JBoss 7.3
Related
I'm trying to connect to Salesforce using the MuleSoft out-of-the-box (OOB) connector with OAuth JWT, but I intermittently get timeouts (connect timeout and read timeout are both set to 10 seconds). Is there a way I can find out how much time it is taking to connect to Salesforce, so that the root cause of these timeouts can be found?
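For context, what I mean by "time to connect" is roughly the TCP connect plus TLS handshake time. Below is a small Python sketch of how that could be measured from the same network, independently of the MuleSoft connector (login.salesforce.com is assumed as the standard login endpoint; adjust for a My Domain):

```python
# Rough measurement of TCP connect and TLS handshake time to the Salesforce
# login endpoint. The host is an assumption (standard login URL); adjust as needed.
import socket
import ssl
import time

host, port = "login.salesforce.com", 443

start = time.monotonic()
raw = socket.create_connection((host, port), timeout=10)  # TCP connect
tcp_done = time.monotonic()

ctx = ssl.create_default_context()
tls = ctx.wrap_socket(raw, server_hostname=host)           # TLS handshake
tls_done = time.monotonic()
tls.close()

print(f"TCP connect:   {(tcp_done - start) * 1000:.0f} ms")
print(f"TLS handshake: {(tls_done - tcp_done) * 1000:.0f} ms")
```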
I'm currently developing a project with Python 3.7, Django 2.1, and MySQL as the database.
I'm deploying it in the Google App Engine standard environment, and for the database I'm using a Cloud SQL MySQL second-generation instance.
The application works well; however, when I analyze the logs I see these errors:
"aborted connection - Got an error reading communication packets"
In this case the connection is being closed by my app (Django). If I configure my app to use persistent connections and set wait_timeout (e.g. 60) in the Cloud SQL configuration, the error is:
"aborted connection - Got timeout reading communication packets".
I just determined that it's not a problem with Cloud SQL or with the configuration of my application, but an App Engine problem. I came to this conclusion in the following way:
If I connect to the Cloud SQL instance through MySQL Workbench, no connection is aborted.
Similarly, if I run my application on a local server but connect to Cloud SQL (through the cloud_sql_proxy), no error is generated and everything works perfectly.
So my conclusion is that it is a problem with how App Engine connects to the Cloud SQL instance.
Why does this happen? How could it be solved?
The "Aborted connection" messages you're seeing, are usually triggered when a connection is closed improperly or there is a networking anomaly between the server and the client.
Sometimes Cloud SQL instances and GAE have long-lived idle connections. To address this, it is recommended to set the "wait_timeout" flag below 600 seconds, as you've already attempted to do.
Another possible solution is to implement application-level keepalives; SQLAlchemy provides "pre-ping" for this. Otherwise, generate activity on all open connections by sending a simple SQL statement such as "SELECT 1;" regularly, at least once every 5 minutes. Also consider using statements in your code like "with db.connect() as conn:" to control the connection's lifetime.
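A minimal SQLAlchemy sketch of these ideas, for illustration only (the connection URL, instance name, and recycle interval are placeholders):

```python
# Application-level keepalive sketch: "pre-ping" each pooled connection before
# use, recycle connections before the server-side wait_timeout kicks in, and
# scope work with a context manager so connections return to the pool promptly.
import sqlalchemy
from sqlalchemy import text

engine = sqlalchemy.create_engine(
    "mysql+pymysql://user:password@/dbname?unix_socket=/cloudsql/PROJECT:REGION:INSTANCE",
    pool_pre_ping=True,  # test each connection on checkout, reconnect if stale
    pool_recycle=540,    # drop connections before the server closes them
)

with engine.connect() as conn:      # controls the connection's lifetime
    conn.execute(text("SELECT 1"))  # cheap keepalive / health-check query
```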
I believe this is because requests from App Engine applications to Cloud SQL are subject to the following time and connection limits:
For apps running in the App Engine standard environment, all database requests must finish within the HTTP request timer, around 60 seconds. For apps running in the flexible environment, all database requests must finish within 60 minutes.
Offline requests like cron tasks have a time limit of 10 minutes.
Requests to Cloud SQL have limitations based on the scaling type of the App Engine module and how long an instance can remain in memory (residence).
Each App Engine instance running in a standard environment cannot have more than 60 concurrent connections to a Cloud SQL instance. For applications written in Java 8 or Go 1.8, the limit is 100.
Connection Issues: If you see errors containing "Aborted connection nnnn to db:", it usually indicates that your application is not terminating connections properly. It could also be caused by network issues. This error does not mean that there are problems with your Cloud SQL instance.
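Given the per-instance connection limits above, it can also help to cap the client-side pool explicitly. A rough sketch, again using SQLAlchemy as an example (URL and numbers are placeholders, not recommendations for your workload):

```python
# Cap the pool so each App Engine instance stays well below the 60 (or 100)
# concurrent-connection limit to the Cloud SQL instance.
import sqlalchemy

engine = sqlalchemy.create_engine(
    "mysql+pymysql://user:password@/dbname?unix_socket=/cloudsql/PROJECT:REGION:INSTANCE",
    pool_size=5,      # steady-state connections held by this instance
    max_overflow=2,   # temporary extra connections allowed under load
    pool_timeout=30,  # seconds to wait for a free connection before raising
)
```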
We are working in a .NET MVC application environment.
The technologies being used are .NET Framework 4.7, SQL Server 12.0, and ASP.NET with C#.
It is hosted on Amazon Web Services (AWS): the application runs on an Amazon EC2 instance and SQL Server on Amazon RDS.
Our product is supported globally, and clients access the application 24/7 from different time zones.
We have schedulers such as Auto Invoice Creation, Auto Payment, and Auto Email Reminder.
These schedulers run every half hour via SQL Server Agent jobs and the Windows Task Scheduler (.EXE).
While the schedulers are running, inserts and updates happen in the database transaction tables.
End users have page-load problems while the schedulers are running, and sometimes pages become unresponsive.
We are using the database logic below while executing stored procedures:
TRANSACTION, ROLLBACK, RAISERROR, COMMIT,
SET DEADLOCK_PRIORITY HIGH, SET DEADLOCK_PRIORITY LOW
Is there any solution to overcome this issue in the application while the schedulers are running?
Please keep us posted and thanks in advance.
We are a Premier Account user of Google App Engine. We have two front-end servers running on Google App Engine, developed using Java.
Front-end server A communicates with another front-end server B using URLFetch (java.net) requests.
We have also set the maximum connection timeout and read timeout (60000 ms) for URLFetch in server A.
All the requests from server A to B work successfully 70% of the time.
But in the remaining 30% we see a weird scenario: server A issues a URLFetch request to server B and gets java.net.SocketTimeoutException: Timeout while fetching URL.
Note: the request from server A to B remains dead air for 60 seconds, and no corresponding request is received at server B. There are no instance restarts on either server.
But when the client retries the request, it runs successfully.
We have 2 idle instances, and the instance class is configured to F4 on both servers.
Can you please tell us why the request from server A times out without any request being received at the other end, i.e. server B?
Thanks.
Regards,
Anantha Krishnan
I have a WCF service for a Silverlight application that can perform a few thousand inserts into a SQL Server database. Normally everything works fine, but when we have a large set of data the service takes longer to do the work.
On my local IIS server (Win XP, IIS 5) and my ASP.NET dev server (even when not in debug mode) I can run the request for a long time, until the job is done. I have set the client-side Silverlight endpoint timeouts/buffer sizes to large values (20 minutes, etc.).
When we deploy to live (IIS 7 / Windows Server 2008), for some reason the service times out at exactly 2 minutes. It's not that the service has stopped running, because the log file is still being written to while Silverlight shows a "Not Found" error for the async callback method.
Where on earth is the setting in IIS that controls this timeout? I know there are five or so different timeout settings that can be configured in web.config, but no setting I change in IIS has any effect (forms authentication has a timeout, WCF has one, etc.).
Please bear in mind that this app and web.config work perfectly on IIS 5 / Win XP. Finally, do I have to restart IIS for any timeout changes to take effect? I am reluctant to do this because it's a live server and there are other applications running on it.
Try changing the connectionTimeout for the web site. Looks like the default is two minutes.