Artillery.io - is there a limit to the number of requests you can send per flow?

Is there a limit to the number of requests you can send per flow in Artillery? My scenario flow seems to stop after two requests (one GET and one POST).

Related

Gatling: Keep fixed number of users/requests at any instant

How can we keep a fixed number of active concurrent users/requests at a time for a scenario?
I have a unique testing problem where I am required to do performance testing of services with a fixed number of requests at any given moment, for a given time period such as 10 minutes, 30 minutes, or 1 hour.
I am not looking for a per-second rate. What I am looking for is to start with N requests and, as any one of those N requests completes, add one more, so that at any given moment there are exactly N concurrent requests.
Things I have tried: rampUsers(100) over 10 seconds, but what I see is that sometimes there are more than 50 users at a given instant.
constantUsersPerSec(20) during (1 minute) also took the number of requests to 50+ for some time.
atOnceUsers(20) seems related, but I don't see any way to keep it running for a given number of seconds, adding more requests as previous ones complete.
Thank you in advance, community; I am hoping for some direction.
There is a throttling mechanism (https://gatling.io/docs/3.0/general/simulation_setup/#throttling) which allows you to set a maximum number of requests per second, but you must remember that users are injected into the simulation independently of the throttle: you must inject enough users to produce that maximum number of requests, otherwise you will end up with a lower req/s. Also, users that are injected but cannot send a request because of throttling will wait in a queue for their turn. This may result in a huge load spike just after the throttle ends, or may extend your simulation, so it is always better to make the throttle period longer than the injection period and to add the maxDuration() option to the simulation setup.
You should also keep in mind that a throttled simulation is far from the natural way users behave. Real users never wait for another user to finish before opening a page or taking an action, so in real life you will always end up with a variable number of requests per second.
Use the closed workload model injection supported by Gatling 3.0. In your case, to simulate and maintain 20 active users/requests for a minute, you can use an injection like:
Script.<Controller>.<Scenario>.inject(constantConcurrentUsers(20) during (60 seconds))
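Outside of Gatling, the closed workload idea itself is straightforward to sketch: keep exactly N request loops running, where each loop starts a new request only when its previous one completes. Here is a minimal Python illustration under that assumption; `request_fn` and `closed_workload` are hypothetical names, not an Artillery or Gatling API:

```python
import threading
import time

def closed_workload(request_fn, concurrency, duration_s):
    """Maintain exactly `concurrency` in-flight requests: each worker
    fires a new request as soon as its previous one completes."""
    stop_at = time.monotonic() + duration_s
    completed = []
    lock = threading.Lock()

    def worker():
        while time.monotonic() < stop_at:
            result = request_fn()          # one request at a time per worker
            with lock:
                completed.append(result)

    threads = [threading.Thread(target=worker) for _ in range(concurrency)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return completed

# Example: 5 concurrent "users" for ~0.2 s, each request taking ~10 ms.
results = closed_workload(lambda: time.sleep(0.01) or "ok",
                          concurrency=5, duration_s=0.2)
```

Because a new request starts only when a previous one finishes, the number in flight never exceeds N, which is exactly the property the open models (rampUsers, constantUsersPerSec) cannot guarantee.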

Pubsub pull at a steady rate?

Is there a way to enforce a steady poll rate using the google-cloud-pubsub client? I want to avoid scenarios where a spike in the publish rate causes the pull request rate to increase as well.
The client provides FlowControl settings, via the maxOutstanding messages setting. From my understanding, this sets the maximum batch size during a pull operation.
I want to understand how to create a constant pull rate, say 1000 RPS.
Message Flow Control can be used to set the maximum number of messages being processed at a given time (i.e., setting max_messages in the case of the Python client), which indirectly sets the maximum rate at which messages are received.
While it doesn't allow you to directly set the exact number of messages received per second (that depends on the time it takes to process a message and on how many messages are being processed), it should avoid scenarios where a spike in the publish rate propagates to your pull rate.
If you really need to set a rate in messages received per second, AFAIK it's not made available directly in the client libraries, so you'd have to implement it yourself using an asynchronous pull and some timers to acknowledge the messages at your desired rate.
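As a rough illustration of the timer-based approach, here is a minimal Python sketch that paces processing at a fixed rate. The `RateLimiter` class is illustrative, and the message stream is a plain range standing in for messages from an asynchronous pull, not the real google-cloud-pubsub API:

```python
import time

class RateLimiter:
    """Allow at most `rate` operations per second by spacing them evenly."""
    def __init__(self, rate):
        self.interval = 1.0 / rate
        self.next_slot = time.monotonic()

    def acquire(self):
        # Sleep until the next permitted slot, then reserve the one after it.
        now = time.monotonic()
        if now < self.next_slot:
            time.sleep(self.next_slot - now)
        self.next_slot = max(self.next_slot, now) + self.interval

# Pace a stand-in message stream at 100 msgs/sec (Pub/Sub itself is mocked).
limiter = RateLimiter(rate=100)
processed = []
start = time.monotonic()
for msg in range(20):          # pretend these came from an async pull
    limiter.acquire()
    processed.append(msg)      # process and ack at the limited rate
elapsed = time.monotonic() - start
```

In a real subscriber you would call something like `limiter.acquire()` before processing and acknowledging each received message, which caps the effective receive rate regardless of publish spikes.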

MWS API throttling - Maximum request quota

I don't understand something about MWS throttling.
For example with this api:
http://docs.developer.amazonservices.com/en_US/products/Products_GetMatchingProductForId.html
The maximum request quota is 20, so I understood that I could submit 20 different Ids in each request. But the table says 'Maximum: Five Id values'.
So what does the 20 represent?
20 represents the maximum number of requests you can make at a time. Each request can have a maximum of 5 Id values in the IdList. So, essentially, you can submit requests for 100 (20 × 5) product Ids at a time. Then you have to wait until the quota is restored, at a rate of 5 per second. You are also capped by an hourly request quota, in this case 18,000 requests per hour.
Do some math to figure out how many requests you need to make and space them out so enough time is given for the restore to kick in.
There are typically 2 or 3 components to Amazon throttling.
They use a modified leaky bucket algorithm. The Quota is how many separate requests you can submit at a given instant, assuming you have not already consumed any requests. This is how much the bucket can hold.
For every request you submit, the bucket 'leaks' one unit.
The restore rate is how quickly the bucket refills.
For the API call you linked, how many requests can hypothetically be sent in 1 second? If my math is right (give or take 1), you should be able to make 25 requests in that first second: you exhaust the 20-request bucket, and during that same second it also refills by 5 requests.
Keep in mind that Amazon caps you on many API calls with an Hourly / Daily cap.
Edit
Keep in mind that the throttling caps how many requests you can make, not how many Ids, reports, etc. can be submitted inside each request.
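The modified leaky bucket described above can be sketched deterministically. The class and the simulated clock below are illustrative, not part of the MWS client; with a quota of 20 and a restore rate of 5 per second (the GetMatchingProductForId numbers), attempting a request every 10 ms yields 25 successes over the first second, matching the math above:

```python
from fractions import Fraction

class TokenBucket:
    """Amazon-style quota: `capacity` is the maximum request quota and
    `restore_rate` is how many requests per second are restored."""
    def __init__(self, capacity, restore_rate):
        self.capacity = capacity
        self.restore_rate = restore_rate
        self.tokens = Fraction(capacity)   # exact arithmetic for the demo
        self.last = Fraction(0)

    def try_request(self, now):
        # The bucket refills continuously at restore_rate, capped at capacity.
        self.tokens = min(Fraction(self.capacity),
                          self.tokens + (now - self.last) * self.restore_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1               # each request "leaks" one unit
            return True
        return False

# GetMatchingProductForId: request quota 20, restore rate 5 per second.
bucket = TokenBucket(capacity=20, restore_rate=5)
# Attempt a request every 10 ms from t = 0 to t = 1 s inclusive.
sent = sum(bucket.try_request(Fraction(i, 100)) for i in range(101))
# sent == 25: the 20-unit bucket plus 5 restored during the second.
```

After that initial burst the bucket is empty, so you are limited to the restore rate (5 requests per second) until you pause long enough for it to refill.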

How strictly enforced is the Parse.com API call limit?

Suppose my parse.com api call limit is 30 api calls per second (the free tier). Suppose also that when opening an app I've created, I issue five api calls (1 call to the cloud code, three queries, and one save object).
Suppose 60 users happen to open the app at the same time. Would Parse begin rejecting some API calls?
The typical use case for my app would be 1 or maybe 2 API calls per second with 1000 active users. However, it is possible in some rare situations that I may issue 45 API calls per second. Is there a way around this without having to pay for a large number of API calls per second? It feels like I'm paying for cable TV (24 hours of 200 channels while I only watch 2-3 channels 1 hour a day).
One of the Parse team members mentioned recently that they count calls per minute, so the limit is actually 30 × 60 = 1,800 calls per minute.
This allows for short bursts of activity to not cause problems.
After the 1,800th call in a minute, all further calls will be rejected.
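That per-minute accounting can be sketched as a simple fixed-window counter. The class and the fake clock below are illustrative, not Parse's actual implementation:

```python
import time

class MinuteQuota:
    """Fixed-window counter: allow up to `per_minute` calls in each
    clock minute, as the answer above describes (30 rps -> 1,800/min)."""
    def __init__(self, per_minute, clock=time.monotonic):
        self.per_minute = per_minute
        self.clock = clock
        self.window = None
        self.count = 0

    def allow(self):
        window = int(self.clock() // 60)   # which minute are we in?
        if window != self.window:
            self.window, self.count = window, 0   # new minute, reset counter
        if self.count < self.per_minute:
            self.count += 1
            return True
        return False                        # 1,801st call in this minute

# Simulate a burst with a fake clock: 2,000 calls in the same minute.
fake_now = [0.0]
quota = MinuteQuota(per_minute=1800, clock=lambda: fake_now[0])
accepted = sum(quota.allow() for _ in range(2000))
```

Under this scheme a short burst well above 30 calls per second is fine, as long as the minute's total stays under 1,800; once the clock rolls into the next minute, the counter resets and calls are accepted again.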

Multithreading in Modbus? How can I send a query every 500 ms while keeping response time at 1 second?

I have implemented MODBUS master/slave communication, with function codes 6, 16 & 3. I set 1 second as the response time. The problem is that I want to send a query every 500 ms, but because of this 1-second response time I have to wait up to 1 second before sending the second query. How can I send a query every 500 ms while keeping the response time at 1 second?
Is it possible to send a new query while we are still waiting for the response to the previous one?
How to communicate with slower device over MODBUS?
See section 2.1 of the MODBUS over serial line specification and implementation guide V1.02, which states:
The master node initiates only one MODBUS transaction at the same
time.
This should inform any decision on how you sequence the commands. The other specification documents on the site are also helpful to ensure your implementation is conforming.
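That one-transaction-at-a-time rule can be sketched as a dispatcher that drains a queue serially: queries produced every 500 ms simply wait until the previous transaction finishes. The `transact` function below is a stand-in for a real serial exchange, not a MODBUS library API:

```python
import queue
import threading

def modbus_master(transactions, transact):
    """Serialize MODBUS transactions: the master sends one query and
    waits for its response (or timeout) before starting the next, per
    section 2.1 of the serial line spec. Queries produced faster than
    responses arrive simply wait in the queue."""
    pending = queue.Queue()
    responses = []

    def dispatcher():
        while True:
            q = pending.get()
            if q is None:               # sentinel: no more queries
                break
            responses.append(transact(q))   # blocks until response/timeout

    t = threading.Thread(target=dispatcher)
    t.start()
    for q in transactions:              # in real code, enqueue every 500 ms
        pending.put(q)
    pending.put(None)
    t.join()
    return responses

# Stand-in transaction: echo the query id as its "response".
out = modbus_master(range(5), transact=lambda q: ("resp", q))
```

The producer side can run on its own 500 ms timer, but the bus itself only ever carries one outstanding transaction, which is what the specification requires; if responses consistently take longer than 500 ms, the queue will grow and you must either slow the producer or shorten the timeout.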
You would probably save yourself several person-months by using an existing open implementation. There are a number described at Modbus Technical Resources.
