I have this JMeter job for which I need to build some think time between certain HTTP Requests. But during that think time I still need to send a keep-alive request at a specific interval.
For example:
User logs in.
Gets some profile information.
Then starts to do some work.
Each unit of work is delayed by a random delay varying from 1 to 30 minutes.
During that time we still need to send the server an ImAlive request at a fixed interval (like 5 minutes).
Once the think time expires, which could be at 17m12s for example, the loop exits.
To simulate the delay you can use a Runtime Controller, which will run its children for the number of seconds you define, and put the keep-alive request inside it. Inside the Runtime Controller add a Gaussian Random Timer to add the delay between keep-alive requests.
You can use a While Controller with a condition like:
${__groovy(${__time(,)} - ${TESTSTART.MS} < 1032000,)}
Where:
__time() function - returns the current time in milliseconds since the start of the Unix epoch
${TESTSTART.MS} - a pre-defined JMeter property holding the test start time
__groovy() function - allows execution of arbitrary Groovy code
1032000 - milliseconds representation of 17m12s - (17 * 60 + 12) * 1000
So the children of the While Controller will be executed for 17 minutes and 12 seconds after test start. If needed you can add another condition in case you want to exit the loop earlier. See the Using the While Controller in JMeter guide for more details.
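If the think time needs to be picked at random for each iteration (1 to 30 minutes, as in the original example) rather than hard-coded, one option is a JSR223 Sampler placed just before the While Controller that computes when the think time should end. A minimal Groovy sketch - the thinkEndMillis variable name is just illustrative:

// JSR223 Sampler (Groovy): pick a random think time between 1 and 30 minutes
// and remember the moment it should end (variable name is arbitrary)
def random = new Random()
long thinkMillis = (1 + random.nextInt(30)) * 60 * 1000L
vars.put('thinkEndMillis', String.valueOf(System.currentTimeMillis() + thinkMillis))

The While Controller condition then becomes ${__groovy(System.currentTimeMillis() < (vars.get('thinkEndMillis') as long),)} and a Constant Timer of 300000 ms as a child of the ImAlive sampler gives the fixed 5-minute keep-alive interval.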
I'm trying to implement the following in JMeter: send 100 identical requests, wait for 1 minute, send the same requests again... for 30 minutes. I can't figure out how to add a delay/wait/pause between groups of requests in JMeter. Timers don't work since they introduce pauses between individual requests, not groups of requests. Any ideas?
Add a Test Action sampler.
Add a Synchronizing Timer as a child of the Test Action sampler.
The Synchronizing Timer will act as a "rendezvous" point where all the threads "meet" and wait together for 60 seconds before moving on.
Timers obey JMeter's Scoping rules:
Some elements in the test trees are strictly hierarchical (Listeners, Config Elements, Post-Processors, Pre-Processors, Assertions, Timers)
You need to put the Timer under a Flow Control Action (was: Test Action),
which will be placed after the 100 requests.
For variable delays, set the pause time to zero, and add a Timer as a child.
New to the Gatling world but an experienced LoadRunner user.
I created a sample simulation to run two scenarios, each with 10 users, and want to run it for 10 minutes. Below is what I have in my setUp function. But each time I run the simulation, it only runs for 136 seconds. The holdFor doesn't seem to take effect.
setUp(
  scn.inject(rampUsers(10) over (10 seconds)),
  scen.inject(rampUsers(10) over (10 seconds))
)
  .protocols(httpProtocol)
  .throttle(
    reachRps(2) in (10 seconds),
    holdFor(10 minutes)
  )
I am using the Gatling 2.2.2 bundle.
Output: Simulation computerdatabase.BasicSimulation completed in 136 seconds
The throttle works as a bottleneck, effectively acting as an upper bound on how many requests will be sent. If your scenarios + injection profiles aren't able to generate as many requests as you would like in the first place, the ones that are generated simply pass through the throttle unhindered. The throttle cannot increase the load to match the desired RPS; it can only decrease it.
You will need to inject enough users into your scenarios for them to be able to generate the 2 RPS you want in the first place, and keep adding more of them over the course of the simulation, in order for the throttle to do what you are looking for.
Try changing your injection profiles to something like this, for example (and adjust the constantUsersPerSec value as needed); I believe this should give you a load profile a step closer to what you are looking for:
scn.inject(constantUsersPerSec(1) during (10 minutes)),
scen.inject(constantUsersPerSec(1) during (10 minutes))
The example above was just a very quick and dirty way to illustrate the point of having to inject users over time, but as chance would have it, injecting 600 users in total over 10 minutes into a scenario is 10 users every ten seconds, which should be exactly what you want - unless I'm making a basic arithmetic error and/or misunderstanding something.
It will also naturally ramp up and down to some extent, although you can control the ramp-up more explicitly by chaining injection steps if you need to, for example like this:
scn.inject(
  rampUsers(10) over (1 minute),
  constantUsersPerSec(1) during (10 minutes)
)
For another approach to more explicitly control the ramp over time, you could also play around with a configuration like this:
scn.inject(
  splitUsers(600) into(rampUsers(10) over(10 seconds)) separatedBy(10 seconds)
)
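Putting it together with the throttle from the question, the full setUp could then look something like this (a sketch that simply reuses the scn, scen and httpProtocol names from your simulation along with the injection profile suggested above):

setUp(
  scn.inject(constantUsersPerSec(1) during (10 minutes)),
  scen.inject(constantUsersPerSec(1) during (10 minutes))
)
  .protocols(httpProtocol)
  .throttle(
    reachRps(2) in (10 seconds),
    holdFor(10 minutes)
  )

With one user per second arriving in each scenario there should be enough raw load for the throttle's 2 RPS cap to actually come into play.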
My current environment: JMeter v2.11, remote Oracle 12, JDK 7
I have a recorded script for 200 users to log in to a web application within one Thread Group, but I need to keep this going for several hours, which means keeping the 200 users' sessions live for that time. If there is no interaction the HTTP sessions will expire, so I decided to use a Loop Controller to simply resubmit the same HTTP request every 14.5 minutes, once the user's session has been established by logging in.
Because I need to stop the script running after a certain duration, I specified the Duration on the Thread Group. I noticed that if the HTTP requests being executed when the Duration value was reached were before the Loop Controller in the script, the script stopped. However, if the HTTP requests being exercised when the 'Duration' was reached were inside the Loop Controller, the Loop Controller overrode the Duration setting and the script ran until the number of loops had completed.
I found the following posts https://sqa.stackexchange.com/questions/8378/how-to-run-jmeter-test-plan-for-specified-amount-of-time and
https://sqa.stackexchange.com/questions/1660/how-to-stop-thread-in-jmeter
and followed the instructions to create a second, separate Thread Group containing a Test Action (with a Constant Timer child) which will stop ALL threads.
Again (as when specifying the 'Duration' via the Thread Group property value), the Stop Test Action stops the script in the other Thread Group if the HTTP requests being executed are not inside the Loop Controller. If they are, the Stop Test Action does not work - i.e. the Loop Controller overrides the Stop Test Action's Constant Timer duration and runs until the Loop Count has completed.
My workings are below:
Thread Group 1 : No. of Threads-->200, Ramp Up-->1, Loop Count-->Forever, Duration-->900 seconds
-HTTP Request Defaults
-Recording Controller
--HTTP Request (GET) - Login Page Launched
--HTTP Request (POST) - Login Details submitted
--HTTP Request (POST) - Home Page displayed
---Loop Controller : Loop Count --> 2
----HTTP Request (POST) - Relaunch Home page
-----Constant Timer : Thread Delay --> 870000 ms
----HTTP Request (POST) - Select 'Yes' to View Home Page Again
Thread Group 2 : No. of Threads-->1, Ramp Up-->1, Loop Count-->Forever, Duration-->900 seconds
-Test Action: Stop, All Threads
--Constant Timer --> 900000 ms
Note: I used 15 minutes / 900 seconds / 900000 milliseconds to test my boundaries above.
Can anyone provide any insight into how I can stop the thread running after a certain duration despite the Loop Controller settings? That is, can anyone describe a way to override the Loop Controller's settings so the thread stops after a certain duration, rather than it stopping once the loop count has been reached?
Many Thanks!
I have identified what was causing my problem: the Loop Controller value. It needs to be set to 'Forever' so that it doesn't override the 'Duration' settings in either the parent Thread Group or the separate Stop Test Action (with child Constant Timer) Thread Group.
Once the Loop Controller is set to 'Forever', it appears JMeter then runs up to the 'Duration' setting.
Thanks
Apologies if this request is similar to others - I am new to JMeter and have searched for other relevant posts but couldn't find anything - or maybe I just didn't understand them!
I'm performance testing a system with a web based application. The front end system will be processing records submitted into the system via MQ - the front end allows the user to pick up a record from the queue, validate some detail, make changes and submit the changes.
There will be 20 users using the front end to do this message validation, update and submission.
Each user is expected to need 30 seconds to pick a message from the queue, make changes and resubmit - so we are expecting 1 user to process 120 records/hour, and 20 users will be expected to process 2400 records/hour.
Picking up the record off the queue, changing it and submitting the changes will be done via 3 individual web pages.
So - think time across the 3 pages has been defined as 24 seconds (leaving 6 seconds of the 30-second limit for rendering, server responses, db calls etc.).
However I don't know how to specify this within JMeter. From my reading I can see that I can add a Timer as a child of a sampler, and I assume I can add a Timer as a child of the Recording Controller? But I need to be able to specify that the 24-second think time is spread across those 3 different pages.
I read a post elsewhere suggesting that if I record using the proxy after adding a Gaussian Random Timer as a child of the Test Plan (parent to everything else), then the HTTP proxy will record the think time as a ${T} variable in the Gaussian Random Timer. I tried this and it didn't work (also I don't want to rely on this - I'd like to be able to understand and set think times properly rather than relying on JMeter to do it for me).
To reiterate - 20 users, 30 seconds for 1 user to complete a transaction, think time defined as 24 seconds - I am struggling with which Timer to use and where to put it so that the think time is spread across the samplers that equate to the GETs associated with the 3 pages the user will navigate through.
Apologies for the lengthy post - I just wanted to be clear and concise.
Many thanks in advance,
As per the JMeter Timers documentation:
Note that timers are processed before each sampler in the scope in which they are found; if there are several timers in the same scope, all the timers will be processed before each sampler.
Timers are only processed in conjunction with a sampler. A timer which is not in the same scope as a sampler will not be processed at all.
To apply a timer to a single sampler, add the timer as a child element of the sampler. The timer will be applied before the sampler is executed. To apply a timer after a sampler, either add it to the next sampler, or add it as the child of a Test Action Sampler.
Now regarding "what timer to use"
There are 2 scenarios:
Virtual-user-oriented scenario - when you try to simulate N users working together.
Goal-oriented scenario - when you try to produce a load of N hits per second.
In case of scenario 1 even a Constant Timer can be quite enough; besides, it will provide repeatability of results. See the quote above for information on where to put your timer(s) - for instance, a Constant Timer of 8000 ms as a child of each of the 3 page samplers would spread the 24 seconds of think time evenly.
In case of scenario 2 you'll need a Constant Throughput Timer. If 20 users process 2400 records per hour and each record assumes 3 web page calls, it means that 7200 requests will be made in one hour, which in turn is 120 requests per minute (this is what you should enter into the timer's "Target throughput" field) or 2 requests per second.
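For reference, the arithmetic above as a quick check (plain Groovy, nothing JMeter-specific):

// 20 users x 120 records/hour, 3 web page calls per record
def requestsPerHour = 20 * 120 * 3                    // 7200
def requestsPerMinute = requestsPerHour.intdiv(60)    // 120 - the value for the Constant Throughput Timer
def requestsPerSecond = requestsPerMinute.intdiv(60)  // 2
println "${requestsPerMinute} requests/minute (${requestsPerSecond} requests/second)"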
I want to download some information from a URL every 20 seconds and update a view based on that info (2-3 labels change their text values). I'm using AFNetworking for making request operations in my app.
Should I use an NSTimer and make it call a method with an AFNetworking request every 20 seconds? Or is there some better way to implement this?
Thanks
You can use an NSTimer. There is a repeats parameter in NSTimer's scheduledTimerWithTimeInterval method for setting up a repeating request.
Alternatively, you can define a method that is called every 20 seconds and, inside that method, decide whether to make the request based on some logic (like a boolean tracking whether the previous request was successful or not). This can be useful if there is a server problem, so you don't keep requesting the server unnecessarily.
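As a rough illustration of the repeating-timer approach (a sketch in Swift, with URLSession standing in for AFNetworking; the view controller, label and URL names are made-up placeholders):

import UIKit

class InfoViewController: UIViewController {
    // Hypothetical label and endpoint - replace with your own view and URL
    @IBOutlet weak var statusLabel: UILabel!
    private let infoURL = URL(string: "https://example.com/info")!
    private var pollTimer: Timer?
    private var requestInFlight = false   // skip a tick if the previous request hasn't finished

    override func viewDidLoad() {
        super.viewDidLoad()
        // Fire every 20 seconds; `repeats: true` is the equivalent of NSTimer's repeats parameter
        pollTimer = Timer.scheduledTimer(withTimeInterval: 20, repeats: true) { [weak self] _ in
            self?.refresh()
        }
        pollTimer?.fire()   // also fetch immediately on load
    }

    private func refresh() {
        guard !requestInFlight else { return }
        requestInFlight = true
        URLSession.shared.dataTask(with: infoURL) { [weak self] data, _, _ in
            DispatchQueue.main.async {
                self?.requestInFlight = false
                if let data = data, let text = String(data: data, encoding: .utf8) {
                    self?.statusLabel.text = text
                }
            }
        }.resume()
    }

    deinit {
        pollTimer?.invalidate()   // stop polling when the view controller goes away
    }
}

Invalidating the timer when the screen goes away, and skipping a tick while a request is still in flight (the boolean idea mentioned above), keeps the polling from piling up requests if the server is slow or unreachable.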