I am using Selenoid with Ggr and 10 hosts. Per my understanding, Ggr divides the load across all the host machines based on quota.
My question is: if I set the thread count to 5 in .srprofile, will 50 scenarios be executed at once (i.e. 5 threads invoked per host)?
I am not clear on how this works with Selenoid.
Every request to create a new browser causes Ggr to randomly choose a host and create a session there; the overall session distribution is quasi-uniform. If every thread sends one new-session request to Ggr, then only 5 browser sessions will be created in parallel, not 50.
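To illustrate, here is a minimal Java sketch of what each of those threads does (the Ggr endpoint and the quota credentials are placeholders, not values from the question): every new RemoteWebDriver is a single new-session request to Ggr, which then picks one of the 10 Selenoid hosts for that session.

import java.net.URL;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GgrSessionSketch {
    public static void main(String[] args) throws Exception {
        // One RemoteWebDriver = one new-session request; Ggr decides which host serves it.
        URL ggr = new URL("http://test:test-password@ggr.example.com:4444/wd/hub"); // placeholder quota user and host
        RemoteWebDriver driver = new RemoteWebDriver(ggr, new ChromeOptions());
        try {
            driver.get("https://example.com");
            System.out.println(driver.getTitle());
        } finally {
            driver.quit(); // frees the slot on whichever host Ggr chose
        }
    }
}

Running 5 such threads therefore gives exactly 5 concurrent sessions spread quasi-uniformly over the hosts; to occupy 50 slots you would need 50 sessions open at the same time.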
I have a situation where a sign-in button actually fires calls to the following actions:
calls an auth service and gets a token
calls service A with token
calls service B with token
calls service C with token
Please note again that all these actions are made (serially) on clicking the single sign-in button.
I am actually trying to tune the system by applying some metrics monitoring. The problem is that I want to load test the sign-in process with 100 concurrent users to confirm that the tuning works. I tried using JMeter with a Concurrency Thread Group after recording the process above into a JMeter script by means of the BlazeMeter Chrome plugin, but when I ran the test the threads just kept hitting the URLs involved in sign-in in an arbitrary order. I don't want that. What I want is: I have 100 * 4 threads, the groups of 4 threads should run concurrently with each other, but within each group the threads should run serially, and the token used in each group should be the one received from that group's auth call. Is it possible to achieve this?
Each JMeter thread (virtual user) executes Samplers sequentially from top to bottom (or according to the Logic Controllers), so if you don't need to run requests 2-4 in parallel you basically don't have to do anything.
If you're confused by the order of the requests, you can add the ${__threadNum} function as a prefix (or suffix) to the sampler names and confirm that each virtual user executes the requests in the order they appear in the Thread Group.
If you need to get the token first and then execute requests 2-4 at the same time, put them under the Parallel Controller.
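As a rough illustration (the sampler names and the ${token} variable are made up for the example), a Thread Group laid out like this gives 100 concurrent users, each performing its own auth call first and then the three service calls serially with its own token, since JMeter variables are local to each thread:

Thread Group (100 threads)
  ${__threadNum} - Auth service (get token)
    JSON Extractor (stores the token as ${token})
  ${__threadNum} - Service A (Authorization header built from ${token})
  ${__threadNum} - Service B (Authorization header built from ${token})
  ${__threadNum} - Service C (Authorization header built from ${token})

If the three service calls must fire simultaneously instead, keep the auth sampler first and move the A/B/C samplers under a Parallel Controller (available via the JMeter Plugins).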
I have an application where a user can upload a PDF using angular-file-upload.js
This library does not support file chunking: https://github.com/nervgh/angular-file-upload/issues/41
My elastic load balancer is configured to have an idle timeout of 10 seconds and other parts of the application depend on keeping this parameter.
The issue is that if the file upload takes longer than 10 seconds, the user receives a 504 Gateway Timeout in the browser and an error message. However, the file still reaches the server after some time.
How can I ignore or not show the user this 504 Gateway Timeout that comes from the ELB? Is there another way around this issue?
The issue you have is that an ELB is always going to close the connection unless it gets some traffic back from your server. See below from AWS docs. It's the same behaviour for an ALB or a Classic load balancer.
By default, Elastic Load Balancing sets the idle timeout to 60 seconds for both connections. Therefore, if the instance doesn't send some data at least every 60 seconds while the request is in flight, the load balancer can close the connection. To ensure that lengthy operations such as file uploads have time to complete, send at least 1 byte of data before each idle timeout period elapses, and increase the length of the idle timeout period as needed.
So to get around this, you have two options:
Change the server processing to start sending some data back as soon as the connection is established, on an interval of less than 10 seconds.
Use another library for doing your uploads, or use vanilla javascript. There are plenty of examples out there, e.g. this one.
Edit: Third option
Thanks to @colde for making the valid point that you can simply work around your load balancer altogether. This has the added benefit of freeing up the server resources that get tied up by lengthy uploads. In our implementation we used pre-signed URLs to achieve this securely.
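As a sketch of the pre-signed URL approach (assuming the files land in an S3 bucket; the bucket name, key, and expiry below are placeholders, and the example uses the AWS SDK for Java v1): the server hands the browser a short-lived PUT URL and the browser uploads straight to S3, so the upload never passes through the ELB and its idle timeout no longer matters.

import java.net.URL;
import java.util.Date;
import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class PresignedUploadUrl {
    // Returns a URL the browser can PUT the PDF to directly, bypassing the load balancer.
    public static URL createUploadUrl(String objectKey) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        Date expiration = new Date(System.currentTimeMillis() + 15 * 60 * 1000); // valid for 15 minutes
        GeneratePresignedUrlRequest request =
                new GeneratePresignedUrlRequest("uploads-bucket", objectKey) // bucket name is a placeholder
                        .withMethod(HttpMethod.PUT)
                        .withExpiration(expiration);
        return s3.generatePresignedUrl(request);
    }
}

The endpoint that returns this URL responds well within the 10-second idle timeout, and the large transfer itself goes directly to S3.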
My requirement is: I want 10 users logging in (using login credentials from a CSV file), with 5 users logging in simultaneously, and each user traversing a different path depending on which user has logged in. Below is my Test Plan for the same:
Below are the Synchronizing Timer settings which I have used:
I have clubbed my requests in a Transaction Controller since each main request has multiple concurrent sub-requests, plus I want to group the requests for all JS, CSS, and image files under one parent request. I am treating each Transaction Controller as 1 request that includes all the requests within it:
As per my Test Plan, if my understanding is correct, the 1st user will log in and the request continues to the "If Controller" for User1. Here the requests will wait until 5 requests have been queued, as per the setting in the Synchronizing Timer, and all 5 requests will be sent to the server at once. Then the 2nd user will log in, the requests of the second user will be processed, and so on.
The above Test Plan executes successfully if the Synchronizing Timer is not used. Once I add the Synchronizing Timer, the test plan execution hangs indefinitely.
As per my understanding of the Synchronizing Timer, processing should continue since I have used a timeout value of 200000 milliseconds. I am unable to understand why the Test Plan hangs when the Synchronizing Timer is used.
What I actually want is: first, all 10 users should log in with 5 simultaneous logins, and then each user should continue with their respective requests as per the condition specified in the If Controller (${__groovy(vars.get("username") == "user1" )}), with 10 simultaneous requests.
So, how do I design my Test Plan, along with the Synchronizing Timer, to achieve the desired result?
I will greatly appreciate inputs from seasoned JMeter experts. Thanks!
It seems that you want the Synchronizing Timer to apply specifically when the 10 users are entering the If Controller.
Because timers are executed before every sampler in scope (the JMeter documentation states that "timers are processed before each sampler in the scope in which they are found"), in your case you just need to move the timer under Request 1 inside the controller.
Currently you are trying to synchronize all samplers in the flow, and you don't need to wait before every sampler.
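In outline form (the names below are illustrative, based on the description in the question), the suggested placement looks roughly like this; as a child of Request 1 the timer applies only to that sampler, whereas placed directly under the controller it would delay every sampler in its scope:

If Controller (${__groovy(vars.get("username") == "user1")})
  Request 1 (Transaction Controller)
    Synchronizing Timer        <- child of Request 1, so only Request 1 waits for the rendezvous
  Request 2 (Transaction Controller)
  Request 3 (Transaction Controller)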
I have a website deployed on App Engine that has an API.
On my computer I have a Node.js script that sends data to the website using a POST request through the API.
The problem is that while the website sometimes processes the requests quickly (an average of 1 per second), at other times it is very slow (an average of 1 request per minute).
After some digging, I've found that when this happens, 4 requests are processed very quickly, then the website does nothing for 4 minutes, and the cycle repeats, which gives the average of 1 request per minute mentioned above. What could be causing this?
I don't know whether it is relevant, but I have a free App Engine account.
Looks like a bug in your node.js script.
App Engine has a limit of 60 seconds for external requests; it is not possible for one request to block an instance for more than those 60 seconds. (And even if the instance were blocked, GAE would spawn another one.)
Other guesses:
check the access logs for first/warmup requests (e.g. slow instance startup)
check the access logs to ensure that GAE actually received the requests
Configuration:
We have an iPlanet web server which sits in front of a WebSphere Portal 6.1 cluster (2 nodes) deployed on Linux machines.
When a user tries to copy a 10 GB file across file systems (NFS mounted), we use the Java Runtime to copy the file to a different NFS mount, hoping that it would be faster than using other Java libraries:
proc = rt.exec("cp " + fileName + " " + outFileName);
The deployed application is a JSF portlet application.
a) The session timeout is 60 minutes on both the app server and the application.
b) We have an Ajax call from the client page to keep the session alive.
The user receives an HTTP 500 within 3 minutes, while our logs show that the file is still copying. We are not sure why WebSphere is sending the HTTP 500.
After 10 minutes or so the file is copied, and when the user clicks refresh he can proceed.
Not sure what is causing this HTTP 500.
Web container threads are not supposed to be used for long-running tasks.
The user is getting the 500 after 3 minutes because that is the point at which WebSphere decides the thread is hung.
What you should be doing is using a WorkManager to perform that long-running task, and the client can poll to check the status of the task.
If you are considering upgrading to WAS v8/v8.5 in the near future, a good option would be to use asynchronous servlets for this.
Your client can receive an HTTP 500 error after a few minutes for a few different reasons. Without a stack trace and some relevant logging, it is impossible to know which component within WebSphere "woke up" after 3 minutes and stopped everything. It might be WebSphere's timeout setting for the web container thread pool, or it could be some other timeout; it should be easy to conclude from the logs.
To fix this, you can do one of the following:
Adjust the relevant timeout value (depending, again, on which timeout it is exactly).
Change your design so that long-running tasks are executed in the background. You can use WebSphere's Work Manager API for that, or asynchronous beans / servlets (see the sketch below).
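A minimal sketch of the background-task approach, assuming the CommonJ Work Manager API that WebSphere provides and the default work manager exposed through a resource reference at java:comp/env/wm/default (the class names, the JNDI name, and the polling detail are illustrative, not taken from the question):

import javax.naming.InitialContext;
import commonj.work.Work;
import commonj.work.WorkItem;
import commonj.work.WorkManager;

public class FileCopyService {

    // The long-running copy, executed off the web container thread.
    static class CopyWork implements Work {
        private final String source;
        private final String target;
        private volatile boolean done;

        CopyWork(String source, String target) {
            this.source = source;
            this.target = target;
        }

        public void run() {
            try {
                // Same "cp" as in the question, but it no longer ties up the request thread.
                Process proc = Runtime.getRuntime().exec(new String[] {"cp", source, target});
                proc.waitFor();
            } catch (Exception e) {
                // log the failure so the polling side can report it
            } finally {
                done = true;
            }
        }

        public void release() { /* called on shutdown; nothing to cancel here */ }
        public boolean isDaemon() { return false; }
        public boolean isDone() { return done; }
    }

    // Called from the portlet action: schedules the copy and returns immediately.
    public WorkItem submitCopy(String source, String target) throws Exception {
        WorkManager wm = (WorkManager) new InitialContext().lookup("java:comp/env/wm/default");
        return wm.schedule(new CopyWork(source, target));
    }
}

The portlet request then returns within seconds, and the existing Ajax call (or a separate polling request) can ask whether the copy has finished instead of keeping the session alive on a blocked thread.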