Parallel execution of multiple scenarios - Gatling

What is the best practice for parallel execution of multiple scenarios? For example, 30% of users execute scenario1 and 70% execute scenario2.
Is the code below the right way, or is it better to have one scenario with conditional execution of the REST calls?
class MySimulation extends Simulation {
  val userIdsData = csv(userIdsCSV).queue
  ...
  val scenario1 = scenario("Scenario 1")
    .feed(userIdsData)
    .exec(http(...).get(...))
  val scenario2 = scenario("Scenario 2")
    .feed(userIdsData)
    .exec(http(...).get(...))
    .exec(http(...).post(...))

  setUp(
    scenario1.inject(rampUsers(30) over (ramp seconds))
      .protocols(HttpConfig.value(baseURL)),
    scenario2.inject(rampUsers(70) over (ramp seconds))
      .protocols(HttpConfig.value(baseURL))
  )
}

What you are doing is absolutely fine.
With the setup you have, you will see that the two scenarios run in parallel.

Gatling will run each item within setUp in parallel, whereas the steps defined within a scenario are run sequentially. As the documentation states:
The definition of the injection profile of users is done with the
inject method. This method takes as argument a sequence of injection
steps that will be processed sequentially.
So the code above will ramp Scenario 1 to 30 users and Scenario 2 to 70 users over the given ramp duration, with both scenarios running in parallel.

You can also try a closed injection model, as below:
scenario1.inject(rampConcurrentUsers(0) to (6) during (10 seconds), constantConcurrentUsers(6) during (60 seconds)),
scenario2.inject(rampConcurrentUsers(0) to (4) during (10 seconds), constantConcurrentUsers(4) during (60 seconds))
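
If you would rather have the single scenario with conditional execution that you asked about, Gatling's randomSwitch does exactly that: each virtual user is routed to one of several chains according to the given weights. A minimal sketch, with placeholder request names and endpoint paths (they are not taken from your code):

val combined = scenario("Combined")
  .feed(userIdsData)
  .randomSwitch(
    30.0 -> exec(http("scenario 1 request").get("/endpoint1")),
    70.0 -> exec(http("scenario 2 get").get("/endpoint2"))
      .exec(http("scenario 2 post").post("/endpoint3"))
  )

setUp(combined.inject(rampUsers(100) over (ramp seconds))
  .protocols(HttpConfig.value(baseURL)))

Both approaches are valid: separate scenarios in setUp give you per-scenario statistics, while randomSwitch keeps one user population with a probabilistic split.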

Related

C# .NET system timer hiccup

private System.Timers.Timer timerSys = new System.Timers.Timer();
Ticks are set to start at the beginning of each multiple of 5 seconds.
entry           exit
10:5:00.146 10:5:00.205
10:5:05.129 10:5:05.177
10:5:10.136 10:5:10.192
10:5:15.140 10:5:15.189
10:5:20.144 10:5:20.204
and then a delay of 28 seconds; note that Windows 10 compensates for missing ticks
by firing them at close intervals:
10:5:48.612 10:5:48.692
10:5:48.695 10:5:48.745
10:5:48.748 10:5:48.789
10:5:48.792 10:5:49.90
10:5:43.93 10:5:49.131
and another delay of 27 seconds; again Windows 10 crams ticks together to compensate,
but this time there is an event inside the second tick
that lasts about 28 seconds, which makes that tick very long:
10:6:16.639 10:6:16.878
this one is very long
10:6:16.883 10:6:42.980
10:6:42.984 10:6:43.236
10:6:43.241 10:6:43.321
10:6:43.326 10:6:43.479
The PC is running just two applications, both of which I wrote.
They communicate via files and also via SQL tables.
This event happens maybe once every two months.
Questions:
What could be happening?
Is there a way to create a log file of all processes over time? My applications record times down to the millisecond, so if there were a way of logging processes, I could match them up.
Alternatively, is there a way for my app to know what the OS is doing?

RxJS interval() varies wildly in MS Edge

I have an Angular 6 application that performs an API call every 10 seconds to update price quotes. The timing of the API calls is managed using RxJS interval().
For some reason, on MS Edge, the timings vary wildly, from a couple of seconds to minutes. Any ideas what might be the cause?
Here is the code:
import { interval } from 'rxjs';
import { startWith, flatMap } from 'rxjs/operators';

const refreshPriceInterval = interval(10000);
let startTime = new Date(), endTime: Date;
refreshPriceInterval.pipe(
  startWith(0),
  flatMap(() => {
    endTime = new Date();
    let timeDiff = endTime.getTime() - startTime.getTime(); // in ms
    // strip the ms
    timeDiff /= 1000;
    console.log(timeDiff);
    startTime = new Date();
    return this.timeseriesService.getQuote(symbol, this.userInfo.apiLanguage);
  })
).subscribe();
Here is the console output:
0.001
18.143
4.111
11.057
13.633
12.895
3.003
12.394
7.336
31.616
20.221
10.461
Is there a way to increase the accuracy?
EDIT:
Performance degrades over time.
Reducing the code in the interval() to only a console.log does not perform any better.
Might be an Angular issue.
It is up to the browser to decide how many CPU cycles are allocated per browser tab. Depending on resources (for instance, battery) or activity (background tab vs. foreground tab), your browser page will receive more or fewer CPU slices.
some background: https://github.com/WICG/interventions/issues/5
This is now shipped in Edge as well (as of EdgeHTML 14): timers in background tabs are clamped to 1 Hz, but nothing more intensive than that.
Apart from this, you are also measuring the latency of your timeseriesService.getQuote() call, so it might also be that this call simply takes some time.
It was indeed an Angular issue. Timed processes cause Angular to constantly re-render the app, which can be quite resource-intensive.
I used runOutsideAngular() to circumvent this issue. It turned out that I only had to run one function outside of Angular:
this.ngZone.runOutsideAngular(() => {
this.handleMouseEvents();
});
Thanks to Mark van Straten for your input!

How can I abort a Gatling simulation if the test system is not in the right state?

The target system I am load testing has a mode that indicates if it's suitable for running a load test against.
I want to check that mode once only at the beginning of my simulation (i.e. I don't want to do the check over and over for each user in the sim).
This is what I've come up with, but System.exit() seems pretty harsh.
I define an execution chain that checks if the mode is the value I want:
def getInfoCheckNotRealMode: ChainBuilder = exec(
  http("mode check").get("/modeUrl")
    .check(jsonPath("$.mode").saveAs("mode"))
).exec { sess =>
  val mode = sess("mode").as[String]
  println(s"sendingMode $mode")
  if (mode == "REAL") {
    log.error("cannot allow simulation to run against system in REAL mode")
    System.exit(1)
  }
  sess
}
Then I run the "check" scenario in parallel with the real scenario, like this:
val sim = setUp(
  newUserScene.inject(loadProfile)
    .protocols(mySvcHttp),
  scenario("Check Sending mode")
    .exec(getInfoCheckNotRealMode)
    .inject(atOnceUsers(1))
    .protocols(mySvcHttp)
)
Problems that I see with this:
It seems over-complicated for simply checking that the system under test is suitable for testing against.
The scenarios actually run in parallel, so if the check takes a while, load is still generated against a system in the wrong mode.
I need to consider and test what happens if the mode check itself misbehaves.
Is there a better way?
Is there some kind of "before simulation begins" phase where I can put this check?
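
There is such a phase: Gatling's Simulation exposes before and after hooks that run exactly once, before injection starts and after it finishes. The Gatling DSL is not available inside them, so the check has to be done with plain HTTP. Below is a minimal sketch using java.net.HttpURLConnection; the baseUrl value and the naive string check on the JSON body are assumptions, not your actual API:

class MySimulation extends Simulation {
  before {
    // Runs once before any users are injected; the Gatling DSL cannot be used here.
    val conn = new java.net.URL(baseUrl + "/modeUrl")
      .openConnection().asInstanceOf[java.net.HttpURLConnection]
    val body = scala.io.Source.fromInputStream(conn.getInputStream).mkString
    conn.disconnect()
    // Naive mode extraction; a real implementation would parse the JSON.
    if (body.contains("\"mode\":\"REAL\""))
      throw new IllegalStateException("system under test is in REAL mode")
  }
  // ... setUp(...) as before
}

An exception thrown from before aborts the run before any load is generated, which avoids both the parallel check scenario and System.exit().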

How to run an Erlang process periodically with precise timing (i.e. 10 ms)

I want to run a periodic Erlang process every 10 ms (based on wall clock time), and the 10 ms should be as accurate as possible. What is the right way to implement this?
If you want a really reliable and accurate periodic process, you should anchor it to the actual elapsed time using erlang:monotonic_time/0,1. If you use the method in Stratus3D's answer, you will eventually fall behind.
start_link(Period) when Period > 0, is_integer(Period) ->
    gen_server:start_link({local, ?SERVER}, ?MODULE, Period, []).
...
init(Period) ->
    StartT = erlang:monotonic_time(millisecond),
    self() ! tick,
    {ok, {StartT, Period}}.
...
handle_info(tick, {StartT, Period} = S) ->
    %% Time remaining until the next period boundary, measured from StartT,
    %% so scheduling errors do not accumulate.
    Next = Period - (erlang:monotonic_time(millisecond) - StartT) rem Period,
    _Timer = erlang:send_after(Next, self(), tick),
    do_task(),
    {noreply, S}.
You can test in the shell:
spawn(fun() ->
    P = 1000,
    StartT = erlang:monotonic_time(millisecond),
    self() ! tick,
    (fun F() ->
        receive
            tick ->
                Next = P - (erlang:monotonic_time(millisecond) - StartT) rem P,
                erlang:send_after(Next, self(), tick),
                io:format("X~n", []),
                F()
        end
    end)()
end).
If you really want to be as precise as possible, and you are sure your task will take less time than the interval it is performed at, you could have one long-running process instead of spawning a process every 10 ms. Erlang could spawn a new process every 10 ms, but unless there is a reason you cannot reuse the same process, it is usually not worth the overhead (even though it is very little).
I would do something like this in an OTP gen_server:
-module(periodic_task).
... module exports
start_link() ->
    gen_server:start_link({local, ?SERVER}, ?MODULE, [], []).
... Rest of API and other OTP callbacks
init([]) ->
    Timer = erlang:send_after(0, self(), check),
    {ok, Timer}.
handle_info(check, OldTimer) ->
    erlang:cancel_timer(OldTimer),
    Timer = erlang:send_after(10, self(), check),
    do_task(), % A function that executes your task
    {noreply, Timer}.
Then start the gen_server like this:
periodic_task:start_link().
As long as the gen_server is running (if it crashes, so will the parent process, since they are linked), the function do_task/0 will be executed roughly every 10 milliseconds. Note that this will not be perfectly accurate: there will be drift in the execution times. The actual interval will be 10 ms plus the time it takes to receive the timer message, cancel the old timer, and start the new one.
If you want to start a separate process every 10 ms, you could have do_task/0 spawn a process. Note that this adds overhead, but it won't necessarily make the interval between spawns less accurate.
My example was taken from this answer: What's the best way to do something periodically in Erlang?

Is there a way to schedule a task at a specific time or with an interval?

Is there a way to run a task in Rust, ideally in its own thread, at a specific time or repeatedly at an interval?
So that I can run my function every 5 minutes, or every day at 12 o'clock.
In Java there is the TimerTask, so I'm searching for something like that.
You can use Timer::periodic to create a channel that gets sent a message at regular intervals, e.g.
use std::old_io::Timer;
use std::time::Duration;

let mut timer = Timer::new().unwrap();
let ticks = timer.periodic(Duration::minutes(5));

for _ in ticks.iter() {
    your_function();
}
Receiver::iter blocks, waiting for the next message, and those messages are 5 minutes apart, so the body of the for loop runs at those regular intervals. NB: this dedicates a whole thread to that single function, but I believe one can generalise to any fixed number of functions with different intervals by creating multiple timer channels and using select! to work out which function should execute next.
I'm fairly sure that running every day at a specified time, correctly, isn't possible with the current standard library. E.g. a simple Timer::periodic(Duration::days(1)) won't handle the system clock changing, for example when the user moves between timezones or goes in/out of daylight savings.
For the latest Rust nightly version (at the time of writing):
use std::old_io::Timer;
use std::time::Duration;

let mut timer1 = Timer::new().unwrap();
let mut timer2 = Timer::new().unwrap();

let tick1 = timer1.periodic(Duration::seconds(1));
let tick2 = timer2.periodic(Duration::seconds(3));

loop {
    select! {
        _ = tick1.recv() => do_something1(),
        _ = tick2.recv() => do_something2()
    }
}
