So, I've written a few Gatling tests and know how to write a test setup with a max duration.
setUp(testScenario.inject(atOnceUsers(3))).maxDuration(5 minutes)
Now, I want to achieve something along these lines:
setUp(testScenario.inject(atOnceUsers(3))).maxRequests(1000 requests)
How should I approach that?
Here, instead of limiting by time, I'm limiting my test setup by a total number of requests.
Any assistance is appreciated. Thanks.
In general, there is no maxRequests() option. You should think of each injected user as an actual user that independently executes some steps and finishes their work, rather than as a thread that executes steps in a loop. With that approach it is as simple as setting up an appropriate injection strategy, e.g. inject(constantUsersPerSec(10) during(100 seconds)). This way you simulate actual user behavior (real users are independent and do not rely on other users). Of course, there may be cases where you want to simulate users that make a lot of requests, but in that case you should write a scenario that executes a certain number of requests, e.g. with a repeat loop:
val floodingScenario = scenario("Flood").repeat(250) {
  // some execs here
}

setUp(
  floodingScenario.inject(
    atOnceUsers(4) // each user executes the steps 250 times = 1000 executions in total
  )
)
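For completeness, here is a self-contained sketch of that idea, assuming the Gatling 2.x Scala DSL (in Gatling 3+ some names differ, e.g. baseUrl); the base URL and the request are placeholders, not from the question:

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class FloodSimulation extends Simulation {

  // placeholder protocol and request - replace with your own
  val httpProtocol = http.baseURL("http://localhost:8080")

  val floodingScenario = scenario("Flood")
    .repeat(250) {
      exec(http("some request").get("/some/path"))
    }

  setUp(
    floodingScenario.inject(
      atOnceUsers(4) // 4 users x 250 repetitions = 1000 requests in total
    )
  ).protocols(httpProtocol)
    .maxDuration(5 minutes) // optional safety net, as in your original setup
}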
Currently we are having an issue with the CPU limit. We have a lot of processes that are most likely not optimized; I have already combined some processes for the same object, but it is not enough. I am trying to understand the logs right now. As you can see in the screenshots, there is one process that is being called multiple times (I assume once for each created record). Even if I create, for example, 60 records in one operation/DML statement, the Process Builder still gets called 60 times? (This is what I think is happening.) Is that the problem we are having right now? If so, is there a better way to do it? Right now we need the updates from the Process Builder to run, but I expected it to get bulkified or something like that. I was also thinking there might be some looping between processes. If there is more information you need, please let me know. Thank you.
Well, yes, the Process Builder will be invoked 60 times, 1 record at a time. But that shouldn't be your problem. The final update / child-record creation / email send (or whatever your action is) will be bulkified; it won't save 1 record at a time. If the process calls some Apex actions, they're supposed to support passing a collection of records, not just a single record.
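For illustration only, here is a minimal sketch of a bulk-friendly invocable Apex action; the class name, object and logic are invented, but the shape (a static method taking a collection) is what lets one bulk operation result in a single Apex call rather than 60:

public with sharing class RecalculateTotals {
    // Invented example: Process Builder / Flow passes all records from the
    // bulk operation as one collection, so the query and DML below run once
    // for the whole batch, not once per record.
    @InvocableMethod(label='Recalculate Totals')
    public static void recalculate(List<Id> recordIds) {
        List<Account> accounts = [SELECT Id FROM Account WHERE Id IN :recordIds];
        // ... recalculation logic over the whole list ...
        update accounts;
    }
}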
You may be looking in the wrong place. CPU time suggests code problems, not config (flow, workflow, Process Builder... although if you're doing updates of fields on "this" record, it's possible you'd benefit from before-save flows). Try to compare the timestamps related to METHOD_BEGIN and METHOD_END for triggers and code methods (including invocable action / process plugin interfaces).
Maybe there's code that doesn't need to run because key fields didn't change and there's nothing to recalculate or roll up. It's hard to say without seeing the debug log.
Maybe the operation doesn't have to be immediate. Consider whether you can offload some work to "scheduled actions", "time-based workflows", or, in Apex terms, @future, batchable, or queueable. But they'd have to be relatively safe to run: if there's an error, it won't display to the user because the action runs in the background, so you'd need to handle errors manually (send an email, create a record, make a Chatter post or bell notification).
You could try uploading the log to https://apextimeline.herokuapp.com/ and make sense out of that Gantt-chart-like output. Or capture the log the "pro" way, with https://help.salesforce.com/s/articleView?id=sf.code_dev_console_solving_problems_using_system_log.htm&type=5 or https://marketplace.visualstudio.com/items?itemName=financialforce.lana (you'll likely need a developer's help to make sense of it).
Is there a way to set a global variable/constant in .feature files in Behave?
For an analytical service, I have many scenarios like this one:
Scenario: Some scenario
Given do some action
And wait for 90 seconds while the action results are ready
Then verifying some result
Recently the requirements were updated and the service can now take longer. This requirement may change again in the future. Is there a way to avoid finding and replacing all the "wait for 90 seconds" steps, and instead have some constant in a feature file that I can update in one place?
My current approach is to refactor the step into "wait for a reasonable time while the action results are ready" and define that reasonable time as a constant in Python. But with this approach, it's not clear from the test logs what the reasonable time was for a specific run.
Waiting a constant amount of time is bad practice.
A correct scenario definition would be:
Scenario: Some scenario
Given do some action
And wait until the action results are ready
Then verifying some result
In the step implementation of "wait until the action results are ready", perform an active wait (polling) that ends when the results are ready, as sketched below.
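A minimal sketch of such a step in Python (Behave); the results_are_ready() helper, the 120-second ceiling and the 5-second poll interval are assumptions, not part of the original question:

import time

from behave import step


@step("wait until the action results are ready")
def step_wait_for_results(context):
    timeout = 120        # upper bound so a broken run still fails reasonably fast
    poll_interval = 5    # seconds between checks
    started = time.monotonic()
    while time.monotonic() - started < timeout:
        if results_are_ready(context):  # hypothetical check against the service
            return
        time.sleep(poll_interval)
    raise AssertionError("results were not ready within %d seconds" % timeout)

Logging the elapsed time inside such a step would also address your original concern about the "reasonable time" not being visible in the test logs.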
New to the Gatling world but an experienced LoadRunner user.
I created a sample simulation to run two scenarios, each with 10 users, and want to run it for 10 minutes. Below is what I have in my setUp call. But each time I run the simulation, it only runs for 136 seconds. The holdFor doesn't seem to take effect.
setUp(
  scn.inject(rampUsers(10) over (10 seconds)),
  scen.inject(rampUsers(10) over (10 seconds))
)
  .protocols(httpProtocol)
  .throttle(
    reachRps(2) in (10 seconds),
    holdFor(10 minutes)
  )
I am using the Gatling 2.2.2 bundle.
Output: Simulation computerdatabase.BasicSimulation completed in 136 seconds
The throttle acts as a bottleneck, effectively an upper bound on how many requests will be sent. If your scenarios + injection profiles aren't able to generate as many requests as you would like in the first place, the ones that are generated simply pass through the throttle unhindered. The throttle cannot increase the load to match the desired RPS; it can only decrease it.
You will need to inject enough users into your scenarios for them to be able to generate the 2 RPS you want in the first place, and keep adding more of them over the course of the simulation, in order for the throttle to do what you are looking for.
Try changing your injection profiles to, for example, something like this (and adjust the constantUsersPerSec value as needed); I believe this might give you a load profile a step closer to what you are looking for:
scn.inject(constantUsersPerSec(1) during (10 minutes)),
scen.inject(constantUsersPerSec(1) during (10 minutes))
The example above is just a very quick and dirty way to illustrate the point of having to inject users over time, but as chance would have it, injecting 600 users in total over 10 minutes into a scenario is 10 users every ten seconds, which should be exactly what you want, unless I'm making a basic arithmetic error and/or misunderstanding something.
It will also naturally ramp up and down to some extent, although you can more explicitly control the ramp up by chaining injection steps if you need, for example like this:
scn.inject(
  rampUsers(10) over (1 minute),
  constantUsersPerSec(1) during (10 minutes)
)
For another approach to more explicitly control the ramp over time, you could also play around with a configuration like this:
scn.inject(
  splitUsers(600) into (rampUsers(10) over (10 seconds)) separatedBy (10 seconds)
)
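Putting it together, the whole setUp could look like the sketch below (not tested against 2.2.2; scn, scen and httpProtocol are the values from your simulation, and the maxDuration cap is an optional extra, not something you need for the throttle to work):

setUp(
  scn.inject(constantUsersPerSec(1) during (10 minutes)),
  scen.inject(constantUsersPerSec(1) during (10 minutes))
)
  .protocols(httpProtocol)
  .throttle(
    reachRps(2) in (10 seconds),
    holdFor(10 minutes)
  )
  .maxDuration(11 minutes) // optional hard cap so the run ends even if some users lag behind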
Apologies if this request is similar to others - I am new to JMeter and have searched for other relevant posts but couldn't find anything - or maybe I just didn't understand them!
I'm performance testing a system with a web based application. The front end system will be processing records submitted into the system via MQ - the front end allows the user to pick up a record from the queue, validate some detail, make changes and submit the changes.
There will be 20 users using the front end to do this message validation, update and submission.
Each user is expected to need 30 seconds to pick a message from the queue, make changes and resubmit, so we expect 1 user to process 120 records/hour, and therefore 20 users to process 2400 records/hour.
Picking the record up off the queue, changing it, and submitting the changes will be done via 3 individual web pages.
So think time across the 3 pages has been defined as 24 seconds (leaving 6 of the 30-second limit for rendering, server responses, DB calls, etc.).
However, I don't know how to specify this within JMeter. From my reading I can see that I can add a Timer as a parent to a sampler, and I assume I can add a Timer as a parent of the Recording Controller? But I need to be able to specify that the 24-second think time is spread across those 3 different pages.
I read a post elsewhere suggesting that if I record using the proxy after adding a Gaussian Random Timer as a child of the Test Plan (the parent of everything else), then the HTTP proxy will record the think time as a ${T} variable in the Gaussian Random Timer. I tried this and it didn't work (also, I don't want to rely on this; I'd like to be able to understand and adjust think time properly rather than relying on JMeter to do it for me).
To reiterate: 20 users, 30 seconds for 1 user to complete a transaction, think time defined as 24 seconds. I am struggling with which Timer to use and where to put it so that the think time is spread across the samplers that correspond to the GETs for the 3 pages the user will navigate through.
Apologies for the lengthy post - I just wanted to be clear and concise.
Many thanks in advance,
As per the JMeter Timers documentation:
Note that timers are processed before each sampler in the scope in which they are found; if there are several timers in the same scope, all the timers will be processed before each sampler.
Timers are only processed in conjunction with a sampler. A timer which is not in the same scope as a sampler will not be processed at all.
To apply a timer to a single sampler, add the timer as a child element of the sampler. The timer will be applied before the sampler is executed. To apply a timer after a sampler, either add it to the next sampler, or add it as the child of a Test Action Sampler.
Now, regarding "what timer to use":
There are 2 scenarios:
Virtual-User-oriented scenario - when you try to simulate N users working together
Goal-oriented scenario - when you try to produce a load of N hits per second.
In the case of scenario 1, even a Constant Timer can be quite enough; besides, it will provide repeatability of results. See the quote above for information on where to put your timer(s); a rough layout is sketched below.
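For example, assuming the 24 seconds is split evenly at 8 seconds per page (the even split is an assumption; any split that adds up to 24 seconds works), the test plan tree could look roughly like this:

Test Plan
└─ Thread Group (20 users)
   ├─ HTTP Request - Page 1
   │  └─ Constant Timer: 8000 ms   (fires before its sampler, per the documentation quote above)
   ├─ HTTP Request - Page 2
   │  └─ Constant Timer: 8000 ms
   └─ HTTP Request - Page 3
      └─ Constant Timer: 8000 ms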
In the case of scenario 2, you'll need a Constant Throughput Timer. If 20 users process 2400 records per hour and each record involves 3 web page calls, that means 7200 requests will be made in one hour, which in turn is 120 requests per minute (this is what you should enter into the timer's "Target throughput" field) or 2 requests per second.
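Based on that arithmetic, one reasonable timer configuration would be the following (the "all active threads" mode is an assumption; choose whichever mode matches how your load is spread across thread groups):

Constant Throughput Timer
    Target throughput (in samples per minute): 120.0
    Calculate Throughput based on: all active threads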
I am trying to figure out how to make a request in C every N seconds. I want it to be asynchronous, meaning new requests are made even if the previous ones have not been responded to.
I want to achieve this in order to test a server.
Any ideas?
Thank you.
Use the multi interface. Add a new handle and start a new request every N seconds and let it take its time. It'll handle "any" number of simultaneous transfers for you ("any" because there's probably a limit on the number of open sockets a process is allowed to use, depending on the environment you want this for). A rough sketch follows.
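A minimal sketch using libcurl's multi interface (the URL, the 5-second interval and the total request count are placeholders, and error checking is omitted for brevity):

#include <curl/curl.h>
#include <time.h>

#define INTERVAL_SECONDS 5   /* N: start a new request every N seconds */
#define TOTAL_REQUESTS   20  /* how many requests to fire in total */

static void add_request(CURLM *multi, const char *url)
{
    CURL *easy = curl_easy_init();
    curl_easy_setopt(easy, CURLOPT_URL, url);
    curl_multi_add_handle(multi, easy);
}

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURLM *multi = curl_multi_init();

    time_t next_start = time(NULL);
    int started = 0, still_running = 0;

    do {
        /* Start a new transfer every INTERVAL_SECONDS, regardless of
           whether earlier transfers have finished. */
        if (started < TOTAL_REQUESTS && time(NULL) >= next_start) {
            add_request(multi, "http://localhost:8080/"); /* placeholder URL */
            started++;
            next_start += INTERVAL_SECONDS;
        }

        /* Drive all transfers that are in flight. */
        curl_multi_perform(multi, &still_running);

        /* Wait up to one second for activity on any of the sockets. */
        curl_multi_wait(multi, NULL, 0, 1000, NULL);

        /* Reap completed transfers so their handles get cleaned up. */
        CURLMsg *msg;
        int msgs_left;
        while ((msg = curl_multi_info_read(multi, &msgs_left))) {
            if (msg->msg == CURLMSG_DONE) {
                curl_multi_remove_handle(multi, msg->easy_handle);
                curl_easy_cleanup(msg->easy_handle);
            }
        }
    } while (started < TOTAL_REQUESTS || still_running > 0);

    curl_multi_cleanup(multi);
    curl_global_cleanup();
    return 0;
}

Build with something like gcc poller.c -lcurl (the file name is arbitrary).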