Ramp up number of calls/users until it breaks - gatling

I struggle with what seems like an easy thing to do.
I'd like to ramp up the number of concurrent calls to an api over time until it breaks.
From my understanding this is rampUsersPerSec with a low initial rate, increasing to something high enough to break it, over a duration long enough to see when it actually breaks.
This is my code
val httpProtocol = http
  .baseUrl("http://some.url")
  .userAgentHeader("breaking test")
  .shareConnections()

val scn = scenario("Ralph breaks it")
  .exec(
    http("root page")
      .get("/index.html")
      .check(status.is(200))
  )

setUp(
  scn.inject(
    rampUsersPerSec(1) to 100000 during (10 minutes)
  ).protocols(httpProtocol)
)
Two things happen:
I get lots of exceptions when I run it: request timeouts, connection timeouts and so on. How can I stop the test when that happens?
Active Users seems much higher than expected. Did I understand rampUsersPerSec wrong? What is the correct way to scale the number of calls per second linearly?

When you use rampUsersPerSec you are defining the arrival rate of users. So with rampUsersPerSec(1) to 100000 during (10 minutes), Gatling will inject 1 user per second at the start and gradually increase the rate until, 10 minutes in, it is injecting 100,000 users per second.
Depending on the time it takes for your call to /index.html to respond, this can very quickly get out of hand, because Gatling isn't waiting for the users already injected to finish - it just keeps adding them regardless. So (roughly) in the first second Gatling might inject 1 user, but in the 2nd it might inject around 167, in the 3rd around 334, and so on. If your scenario takes a few seconds to respond, the number of concurrent users can increase rapidly.
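The growth described above can be sanity-checked with a quick back-of-the-envelope calculation (plain Python, not Gatling itself): the arrival rate is a linear ramp, and the cumulative number of injected users is the area under it.

```python
def arrival_rate(t, start=1.0, end=100_000.0, duration=600.0):
    """Users injected per second at time t (seconds) on a linear ramp."""
    return start + (end - start) * t / duration

def users_injected(t, start=1.0, end=100_000.0, duration=600.0):
    """Cumulative users injected by time t: area under the linear ramp
    (average of the start rate and the rate at t, times t)."""
    return (start + arrival_rate(t, start, end, duration)) / 2 * t

# The rate climbs by roughly 167 users/sec every second, so:
print(round(arrival_rate(2)))     # rate after 2 seconds, already in the hundreds
print(round(users_injected(60)))  # total users injected after just one minute
```

Running this shows roughly 300,000 users injected in the first minute alone, which is why the open model blows up so quickly when responses are slow.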
Unfortunately, I don't think there's any way to have the simulation detect when you've hit a defined error rate and stop. You would be better off with a much slower ramp over a longer duration. Alternatively, you could use the closed-model injection methods, which target a given level of concurrency rather than an arrival rate.

I'm doing a similar thing and I managed to solve it using the code below:
private val injection = incrementConcurrentUsers(1)
  .times(56)
  .eachLevelLasting(1700 millis)
  .startingFrom(10)

Related

How do I request a scan on a Wi-Fi device using libnm?

The documentation suggests I use nm_device_wifi_request_scan_async(), but I don't understand how to tell when it has finished scanning. What's the correct second parameter I should pass, how is it constructed, and what do I pass to nm_device_wifi_request_scan_finish()?
I've tried using nm_device_wifi_get_last_scan() and determining whether the scan has just happened or the last scan was a long time ago, but it doesn't seem to update the time of the scan - i.e., after requesting the scan and printing out the difference between nm_utils_get_timestamp_msec() and the last scan, it only increases, and decreases only if I restart the whole program, for some reason...
All I really need is to request a scan every x seconds and understand whether it has happened or not. The deprecated synchronous functions seem to have allowed this with callback functions, but I don't understand async :(
nm_device_wifi_request_scan_async() starts the scan, it will take a while until the result is ready.
You will know when the result is ready when the nm_device_wifi_get_last_scan() timestamp gets bumped.
it only increases, and decreases only if I restart the whole program, for some reason...
last-scan should be on the clock_gettime(CLOCK_BOOTTIME) scale. That is supposed to never decrease (except across a reboot). See man clock_gettime.
but I don't understand async
The sync method does not differ from the async method in this regard. They both only kick off a new scan, and when they complete, NetworkManager is about to scan (or will start shortly). The sync method is deprecated for other reasons (https://smcv.pseudorandom.co.uk/2008/11/nonblocking/).
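The "wait until the last-scan timestamp gets bumped" pattern can be sketched generically. This is a sketch with injected callables: get_last_scan and now_msec here stand in for nm_device_wifi_get_last_scan() and nm_utils_get_timestamp_msec() (both on the CLOCK_BOOTTIME scale); real code would sit inside a GLib main loop rather than a sleep loop.

```python
import time

def wait_for_scan_result(get_last_scan, now_msec, timeout_msec=10_000, poll_msec=200):
    """Poll until the last-scan timestamp is newer than when we started.

    get_last_scan() and now_msec() are stand-ins for the libnm calls;
    since both use CLOCK_BOOTTIME, they never move backwards, so a
    simple >= comparison against our start time is enough.
    """
    started = now_msec()
    while now_msec() - started < timeout_msec:
        if get_last_scan() >= started:
            return True   # a scan completed after we started waiting
        time.sleep(poll_msec / 1000.0)
    return False          # timed out waiting for the timestamp to bump
```

So the flow is: call the async request to kick the scan off, then watch for last-scan to move past your own start timestamp; that bump is the "scan finished" signal.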

React Profiler: What do the timings mean?

I am using react profiler to make my app more efficient. It will commonly spit out a graph like this:
I am confused because the timings do not add up. For example, it would make sense if the total commit time for "Shell" was 0.3ms then "Main" was "0.2ms of 0.3ms." But that is not the case.
What precisely do these timings mean and how do they add up?
(note: I have read "Introducing the React Profiler" but it appears from this section that this time-reporting convention is new since that article.)
The first number (0.2ms) is the self duration and the second number (0.3ms) is the actual duration. Roughly, the self duration is the actual duration minus the time spent on the children.
I have noticed that the numbers don't always add up perfectly, which I would guess is either a rounding artifact or because some time is spent on hidden work. For example, in your case, the Shell has an actual duration of 3.1ms and a self duration of 0.3ms, which means the two children (Navbar and Main) should add up to 3.1ms - 0.3ms, or 2.8ms. However, we see that the Navbar is not re-rendered, so it's 0ms, but the actual duration for Main is only 2.7ms, not 2.8ms. It won't have any impact in practical terms when you're performance tuning, but it does violate expectations a bit.
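The relationship can be written down directly: self duration = actual duration minus the children's actual durations. Using the numbers quoted above as sample data:

```python
def self_duration(actual, children_actual):
    """Self duration = a component's actual duration minus the sum of
    its children's actual durations."""
    return actual - sum(children_actual)

# Shell: actual 3.1ms; children are Navbar (0ms, memoized, not
# re-rendered) and Main (2.7ms actual).
shell_self = self_duration(3.1, [0.0, 2.7])
# This comes out at ~0.4ms, while the profiler reports 0.3ms -- the
# same small mismatch the answer attributes to rounding/hidden work.
```

The formula is exact in principle; the profiler's displayed numbers just carry some rounding slack.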

Intentionally slow-loading Python app on AppEngine

I have discovered, as have many before me, that AppEngine has a 60 second execution deadline.
Process terminated because the request deadline was exceeded.
My use case is a bit different to the others I've seen. I have a web form that lets you move a toggle switch; there's a page you can GET which represents this toggle state with a 1 or 0. A Raspberry Pi hits this page every 10 seconds and makes a light at my front gate match the state of the toggle. I'm doing all of this over HTTP (the Pi is on a 4G modem which firewalls traffic on other ports).
I had the idea earlier today of making a "has the state changed" handler. The Pi would get and match the state at first boot, but after that it would hit an (often very slow to load) handler that did something like this:
iterations = 0
current_state = get_state()
while iterations < 600:
    if get_state() != current_state:
        return "Change!"
    iterations += 1
    time.sleep(1)
return "No change"
This would reduce my 4G overhead to a single request every ten minutes, but - if the state changed - the page would finish loading immediately and I could act on it straight away. If nothing changed, I would just call the process again - but now I'd be doing it once every 10 min instead of once every 10 sec.
Even with a 50s upper limit, I can build this and it will save me some overhead + improve my response latency. But is there something I'm missing about how deadlines work which would let me do this in GAE for longer periods of time?
It is possible to reach a 60-minute timeout by switching to the flexible environment. From Comparing high-level features:
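The long-poll handler from the question can also be written as a self-contained function. This is a sketch, not GAE-specific code: get_state is injected so the loop can be exercised without a server, and the time budget is kept below the standard environment's 60-second deadline.

```python
import time

def long_poll(get_state, max_seconds=50, interval=1.0):
    """Block until the toggle state changes or the time budget runs out.

    get_state is a zero-argument callable returning the current toggle
    value; max_seconds is kept under App Engine's 60-second request
    deadline so the handler returns cleanly instead of being killed.
    """
    initial = get_state()
    deadline = time.monotonic() + max_seconds
    while time.monotonic() < deadline:
        if get_state() != initial:
            return "Change!"   # caller acts on the change immediately
        time.sleep(interval)
    return "No change"         # caller simply polls again
```

The Pi then loops on this endpoint: each request either returns early on a change or times out harmlessly, cutting the request rate from one every 10 seconds to roughly one per time budget.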

Timer to represent AI reaction times

I'm creating a card game in pygame for my college project, and a large aspect of the game is how the game's AI reacts to the current situation. I have a function that randomly generates a number between two bounds, and this is how long I want the program to wait.
All of the code for my AI is contained within an if statement, and once called I want the program to wait the generated amount of time and then make its decision on what to do.
Originally I had:
pygame.time.delay(calcAISpeed(AIspeed))
This would work well if it didn't pause the rest of the program while the AI is waiting, stopping the user from interacting with it. This also means I cannot use while loops to create my timer.
What is the best way to work around this without going into multi-threading or other complex solutions? My project is due in soon and I don't want to make massive changes. I've tried using pygame.time.Clock functions to compare the current time to the generated one, but resetting the clock once the operation has been performed has proved troublesome.
Thanks for the help and I look forward to your input.
The easiest way around this would be to have a variable within your AI called something like "wait" and set it to a random number (it will have to be tweaked to your program's speed... I'll explain in the code below). Then in your update function, have a conditional that checks whether that wait number is zero or below, and if not, subtract a certain amount of time from it. Below is a basic set of code to explain this...
class AI(object):
    def __init__(self):
        # Put the stuff you want in your AI in here
        self.currentwait = 100
        # ^^^ All you need is this variable defined somewhere
        # If you want a static number as your wait time, add this variable
        self.wait = 100  # Your number here

    def updateAI(self):
        # If the wait number is zero or less then do stuff
        if self.currentwait <= 0:
            pass  # Do your AI stuff here
        else:
            # Based on your game's tick speed and how long you want
            # your AI to wait, you can change the amount removed from
            # your "current wait" variable
            self.currentwait -= 100  # Your number here
To give you an idea of what is going on above, you have a variable called currentwait. This variable holds the time the program still has to wait. If this number is greater than 0, there is still time to wait, so nothing will get executed. However, time is subtracted from this variable, so every tick there is less time to wait. You can control this rate via the clock tick rate. For example, if your clock rate is set to 60, you can make the program wait 1 second by setting currentwait to 60 and taking 1 off every tick until the number reaches zero.
Like I said this is very basic so you will probably have to change it to fit your program slightly, but it should do the trick. Hope this helps you and good luck with your project :)
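A variant of the same countdown idea that is frame-rate independent: subtract the real elapsed milliseconds each frame (e.g. the value pygame's clock.tick() returns) instead of a fixed constant. Sketched without pygame so the logic stands on its own; AITimer is a made-up name for illustration.

```python
import random

class AITimer:
    def __init__(self, min_ms, max_ms):
        # Pick a random reaction time between the two bounds, in ms.
        self.remaining = random.uniform(min_ms, max_ms)

    def update(self, dt_ms):
        """Call once per frame with the elapsed milliseconds.

        Returns True exactly once, on the frame the reaction time
        elapses; False before that and on every frame after.
        """
        if self.remaining <= 0:
            return False         # already fired earlier
        self.remaining -= dt_ms
        return self.remaining <= 0
```

In the game loop this becomes: dt = clock.tick(60), then if ai_timer.update(dt): make the AI's decision. Because real elapsed time is used, the wait no longer depends on the tick rate.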
The other option is to create a timer event on the event queue and listen for it in the event loop: How can I detect if the user has double-clicked in pygame?

Process scheduler

So I have to implement a discrete event CPU scheduler for my OS class, but I don't quite understand how it works. Every explanation/textbook I've read puts things in terms a little too abstract for me to figure out how it actually works, nor does it put things in terms of CPU bursts and IO bursts (some did, but still not helpfully enough).
I'm not posting any of the code I have (I wrote a lot, actually, but I think I'm going to rewrite it after I figure out (in the words of Trump) what is actually going on). Instead I just want help figuring out a sort of pseudocode I can then implement.
We are given multiple processes with an Arrival Time (AT), Total Cpu (TC), Cpu burst (CB), and Io burst (IO).
Suppose that I was given: p1 (AT=1, TC=200, CB=10, IO=20) and p2 (AT=1000, TC=200, CB=20, IO=10). And suppose I am implementing a First Come First Serve scheduler.
I also put question marks (?) where I'm not sure.
Put all processes into eventQueue
initialize all processes.state = CREATED
while (eventQueue not empty):
    process = eventQueue.getFront()
    if process.state == CREATED:  # it can transition to ready
        clock = process.AT
        process.state = READY
        then I add it back to the end (?) of the eventQueue
    if process.state == READY:  # it can transition to run
        clock = process.AT + process.CPU_time_had + process.IO_time_had (?)
        CPU_Burst = process.CB * Rand(b/w 0 and process.CB)
        if (CPU_Burst >= process.TC - process.CPU_time_had):
            # then it's done, I don't add it back
            process.finish_time = clock + CPU_Burst
            continue
        else:
            process.CPU_time_had += CPU_Burst
            # (?) Not sure if I put the process into BLOCK or READY here
            # Add it to the back of the eventQueue (?)
    if process.state == BLOCK:
        # No idea what happens (?)
        # Or do things never get blocked in FCFS (which would make sense)?
Also how do IO bursts enter into this picture???
Thanks for the help guys!
Look at the arrival time of each thread; you can sort the queue so that earlier arrival times appear before threads with later arrival times. Run the thread at the front of the queue (this is a thread scheduler). Run the thread one burst at a time; when the burst's CPU time is up, enter a new event at the back of the queue with an arrival time of the current time plus the burst's IO time (then sort the queue again on arrival times). This way other threads can execute while a thread is performing IO.
(My answer is assuming you are in the same class as me. [CIS*3110])
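Putting that description together: each process alternates CPU bursts and IO bursts until its total CPU time is consumed, and the event queue stays ordered by event time. A minimal single-CPU FCFS sketch (fixed-length bursts instead of the randomized ones in the question; the field names AT/TC/CB/IO follow the question, everything else is illustrative):

```python
import heapq

def fcfs(processes):
    """Simulate FCFS over processes given as dicts with name, AT, TC, CB, IO.

    Returns {name: finish_time}. A process runs CB units of CPU at a
    time, then blocks for IO units and re-enters the queue as a new
    event, until all TC units of CPU have been consumed.
    """
    events = []   # (ready_time, sequence, name, remaining_cpu)
    seq = 0
    for p in processes:
        heapq.heappush(events, (p["AT"], seq, p["name"], p["TC"]))
        seq += 1
    by_name = {p["name"]: p for p in processes}
    cpu_free_at = 0
    finish = {}
    while events:
        ready, _, name, remaining = heapq.heappop(events)
        p = by_name[name]
        start = max(ready, cpu_free_at)      # wait for the single CPU
        burst = min(p["CB"], remaining)      # final burst may be shorter
        cpu_free_at = start + burst
        remaining -= burst
        if remaining == 0:
            finish[name] = cpu_free_at       # all CPU time consumed: done
        else:
            # Block for IO, then arrive back in the queue later;
            # meanwhile the CPU is free for other processes.
            seq += 1
            heapq.heappush(events, (cpu_free_at + p["IO"], seq, name, remaining))
    return finish
```

This is where IO bursts enter the picture: a process that still has CPU time left goes to BLOCK by being re-queued with a later arrival time (now + IO), and other READY processes run in the meantime.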
