Our performance testing team is running tests on our WPF-WCF-SQL Server application, and they are hitting connection timeouts once the load goes above 75 users.
Error -27796: Failed to connect to server "81.171.180.119:4567": [10060] Connection timed out
I would like to know what steps I can take to locate the bottleneck causing this, whether it is a LoadRunner setting or a bottleneck in our code.
Thanks
Based on what you wrote, my first guess would be a configuration setting on the server that only allows 75 active sessions to connect to it. I would start by looking at the server's configuration for concurrent threads/sessions/connections.
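Since your stack includes WCF, one concrete place to check is the service's throttling behavior. As a minimal sketch (the behavior name and the values below are illustrative assumptions, not taken from your configuration), the limits live in the serviceThrottling element:

    <system.serviceModel>
      <behaviors>
        <serviceBehaviors>
          <behavior name="ThrottledBehavior">
            <!-- Illustrative values only; tune to your hardware and load profile -->
            <serviceThrottling maxConcurrentCalls="200"
                               maxConcurrentSessions="200"
                               maxConcurrentInstances="200" />
          </behavior>
        </serviceBehaviors>
      </behaviors>
    </system.serviceModel>

If the test plateaus near one of these limits, raising it (and checking SQL Server's own connection limits) is a good next step.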
You need to change the run-time settings in LoadRunner (press F4): under Miscellaneous > Options, change the connection timeout to around 240 seconds, depending on how your application is responding.
LoadRunner will then wait about 4 minutes for a response before timing out.
Also check the similar step download timeout setting if your application performs downloads.
I am doing some load testing on a service run with Apache2, and my load testing tool has a default timeout of 30 seconds. When I run the tool for a minute at 1 request per second, it reports that 40 requests succeeded with a 200 OK response and 20 were cancelled because the client timeout was exceeded while awaiting headers.
Now, I am trying to spot this on the server side. I can't see the timeouts logged in either the Apache access logs or the Gunicorn access logs. Note that I am interested in connections that weren't accepted as well as ones that were accepted and then timed out.
I have some experience working on similar services on Windows. The http.sys error logs would show connection-dropped errors, so we would know if our server was dropping connections.
When a client times out, all the server knows is that the client has aborted the connection. In mod_log_config, the %X format specifier logs the status of the client connection when the response is completed, which is exactly what you want to know in this case.
Configure your logs to use %X, and look for the X character in the log lines.
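As a minimal sketch (the format name is arbitrary and the log path is an assumption), appending %X to a log format and pointing the access log at it looks like this:

    # Common log format with the connection status (%X) appended
    LogFormat "%h %l %u %t \"%r\" %>s %b %X" common_conn_status
    CustomLog logs/access_log common_conn_status

In that last field, X means the connection was aborted before the response completed, + means it may be kept alive, and - means it will be closed after the response.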
Bonus: I even found the discussion about this feature in Apache's dev forum, from 20 years ago.
Update:
Regarding refused connections: these cannot be logged by Apache. Connection refusal is done by the kernel, in the TCP stack, not by Apache. The closest Apache-only approach I can think of is keeping track of the number of open connections (using mod_status); if it reaches the maximum, you know you might be refusing connections. Otherwise, you'd need to set up a monitoring solution that tracks TCP resets sent by the kernel.
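If you go the mod_status route, a minimal sketch (the URL path and access rule are assumptions, and the module must be loaded) looks like this in an Apache 2.4 configuration:

    # Expose a live status page with the current connection/worker counts
    <Location "/server-status">
        SetHandler server-status
        Require local
    </Location>

Comparing the busy-worker count there against MaxRequestWorkers tells you when you are close to the point where new connections start being refused or queued.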
I have an application where a user can upload a PDF using angular-file-upload.js
This library does not support file chunking: https://github.com/nervgh/angular-file-upload/issues/41
My Elastic Load Balancer is configured with an idle timeout of 10 seconds, and other parts of the application depend on keeping this value.
The issue is that if the file upload takes longer than 10 seconds, the user receives a 504 Gateway Timeout in the browser and an error message. However, the file still reaches the server after some time.
How can I ignore or not show the user this 504 Gateway Timeout that comes from the ELB? Is there another way around this issue?
The issue you have is that an ELB is always going to close the connection unless it gets some traffic back from your server. See below from AWS docs. It's the same behaviour for an ALB or a Classic load balancer.
By default, Elastic Load Balancing sets the idle timeout to 60 seconds for both connections. Therefore, if the instance doesn't send some data at least every 60 seconds while the request is in flight, the load balancer can close the connection. To ensure that lengthy operations such as file uploads have time to complete, send at least 1 byte of data before each idle timeout period elapses, and increase the length of the idle timeout period as needed.
So to get around this, you have two options:
Change the server processing to start sending some data back as soon as the connection is established, on an interval of less than 10 seconds.
Use another library for your uploads, or use vanilla JavaScript. There are plenty of examples out there, e.g. this one.
Edit: Third option
Thanks to #colde for making the valid point that you can simply work around your load balancer altogether. This has the added benefit of freeing up server resources that otherwise get tied up with lengthy uploads. In our implementation of this we used pre-signed URLs to achieve it securely.
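As a minimal sketch of that third option (assuming a Java backend with the AWS SDK for Java v1 and that the files end up in S3; the bucket and key names are made up), the server hands the browser a short-lived pre-signed PUT URL and the upload goes straight to S3, bypassing the ELB and its idle timeout:

    import com.amazonaws.HttpMethod;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
    import java.net.URL;
    import java.util.Date;

    public class PresignedUploadUrl {
        public static void main(String[] args) {
            // Hypothetical bucket and object key, for illustration only
            String bucket = "example-upload-bucket";
            String key = "uploads/report.pdf";

            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

            // URL valid for 15 minutes; the browser PUTs the file directly to S3,
            // so the request never passes through the load balancer
            Date expiration = new Date(System.currentTimeMillis() + 15 * 60 * 1000);
            GeneratePresignedUrlRequest request =
                    new GeneratePresignedUrlRequest(bucket, key)
                            .withMethod(HttpMethod.PUT)
                            .withExpiration(expiration);

            URL url = s3.generatePresignedUrl(request);
            System.out.println("PUT the file to: " + url);
        }
    }

The client-side change is small: instead of posting the file to your own endpoint, request a URL like the one above and upload with a plain XMLHttpRequest PUT.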
Is it possible to schedule a task that will kill a specific browser tab every 15 minutes?
Our operatives all access a reports dashboard, but because of the limited number of licenses we keep finding ourselves unable to log on, as people leave the screen open even when they are not using it.
If there were a scheduled task that ran every 15 minutes, perhaps one that kicked off a batch file that looked for a browser tab always called 'Dashboard' and killed it, that would be great.
Can anyone help please?
Thanks
I don't think this can be done easily, if it can be done at all.
Does the application you use (Dashboard?) have a setting to treat a session as closed after some period of inactivity, just as if the page or the browser had been closed?
It would be better to control the session and license management from the server side rather than from the client side.
Do you use OpenStack Dashboard (Horizon)? I found the page "Horizon does not implement a browser session timeout", which indicates that such a feature now exists.
I need to understand a complex web project, and for that I need to stay in a debugging session for a much longer time. To that end I have set the application pool's "Ping Enabled" setting to False in IIS 7.5, so that IIS does not terminate the worker process and I can continue debugging.
However, if I continue debugging for a long time, I do not receive the response in the browser; instead I get a blank page, even though no exception was raised while debugging and everything went properly.
What other configuration is needed to remain in a debugging session for a longer time?
Can anyone suggest?
There are two timeouts that you must consider: the server's and the client's.
You have already resolved the server timeout, but the client (browser) gives up and shows a white page, usually after about 1 minute.
You need to increase the timeout that the browser is willing to wait for.
There are steps to do this in IE here: http://support.microsoft.com/kb/813827
Configuration:
We have an iPlanet web server that sits in front of a WebSphere Portal 6.1 cluster (2 nodes) deployed on Linux machines.
When a user tries to copy a 10 GB file across file systems (NFS mounted), we use the Java runtime to copy the file across to a different NFS mount, hoping that it would be faster than using any other Java libraries.
proc = rt.exec("cp " + fileName + " " + outFileName);
The deployed application is a JSF portlet application.
a) The session timeout is 60 minutes on both the app server and the application.
b) We have an Ajax call from the client page to keep the session alive.
The user receives an HTTP 500 within 3 minutes, while our logs show that the file is still copying. Not sure why WebSphere is sending the HTTP 500?
After 10 minutes or so the file is copied, and when the user clicks refresh he can proceed.
Not sure what is causing this HTTP 500.
Web container threads are not supposed to be used for long-running tasks.
He's getting the 500 after 3 minutes because that is the point at which WebSphere decides the thread is hung.
What you should be doing is using a WorkManager to perform that long task, and have the client poll to check the status of the task.
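A minimal sketch of that approach with the commonj WorkManager API (the JNDI resource reference wm/default and the cp command line are assumptions, chosen to match the question) could look like this:

    import javax.naming.InitialContext;
    import commonj.work.Work;
    import commonj.work.WorkItem;
    import commonj.work.WorkManager;

    public class CopyLauncher {

        // Wraps the long-running copy so it runs on a WorkManager thread,
        // not on a web container thread
        static class CopyWork implements Work {
            private final String source;
            private final String target;
            volatile boolean done;          // polled by the portlet/servlet

            CopyWork(String source, String target) {
                this.source = source;
                this.target = target;
            }

            public void run() {
                try {
                    Process proc = Runtime.getRuntime()
                            .exec(new String[] { "cp", source, target });
                    proc.waitFor();
                } catch (Exception e) {
                    // log and surface the failure to the polling client
                } finally {
                    done = true;
                }
            }

            public void release() { /* asked to stop early; nothing to do */ }
            public boolean isDaemon() { return false; }
        }

        public WorkItem startCopy(String source, String target) throws Exception {
            // wm/default must be declared as a resource-ref in the deployment descriptor
            WorkManager wm = (WorkManager) new InitialContext()
                    .lookup("java:comp/env/wm/default");
            return wm.schedule(new CopyWork(source, target));
        }
    }

The request that starts the copy returns immediately, and a lightweight Ajax poll (you already have one for keep-alive) can check the done flag or the WorkItem status until the copy finishes.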
If you are considering upgrading to WAS v8/v8.5 in the near future, a good option would be to use asynchronous servlets for this.
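For completeness, here is a minimal sketch of the Servlet 3.0 asynchronous approach available on WAS v8+ (the URL pattern, file paths and timeout are assumptions):

    import java.io.IOException;
    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet(urlPatterns = "/copy", asyncSupported = true)
    public class AsyncCopyServlet extends HttpServlet {

        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            final AsyncContext ctx = req.startAsync();
            ctx.setTimeout(30 * 60 * 1000L);   // allow up to 30 minutes for the copy

            // The copy runs on a container-managed async thread; the web container
            // thread is released immediately, so it is not flagged as hung
            ctx.start(new Runnable() {
                public void run() {
                    try {
                        Process proc = Runtime.getRuntime().exec(
                                new String[] { "cp", "/mnt/src/big.file", "/mnt/dst/big.file" });
                        proc.waitFor();
                        ctx.getResponse().getWriter().write("copy finished");
                    } catch (Exception e) {
                        // log the failure
                    } finally {
                        ctx.complete();
                    }
                }
            });
        }
    }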
The HTTP 500 error your client receives after a few minutes can happen for a few reasons. Without a stack trace and some relevant logging, it is impossible to know which component within WebSphere "woke up" after 3 minutes and stopped everything. It might be WebSphere's timeout setting for the web container thread pool, or it could be some other timeout; that should be easy to conclude from the logs.
To fix this, you can do one of the following:
Adjust the relevant timeout value (depending, again, on which timeout it is exactly).
Change your design so long-running tasks are executed in the background. You can use WebSphere's Work Manager API for that, or asynchronous beans / servlets.