I am using the API to send messages in batches.
I'm getting many responses with codes 400 and 500.
Do I need to control the time between requests when sending multiple batches?
Example:
messages.get = 5 per second
If I send 100 messages in one batch request, do I have to wait 20 seconds before sending the next batch?
Or
do I need to send 20 requests with 5 messages each?
At this point, batches of 10 messages each are probably best. You can change the per-user limit to 50/sec using the Developers Console. Even then, you can exceed the limit for a little while before you start getting quota errors. Either way, if you get quota errors for a user, you'll want to do some type of backoff for that user.
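As a sketch of the batching-and-pacing approach (the send_batch callable and the one-second delay are assumptions to tune against your own quota, not part of the API):

import time

BATCH_SIZE = 10               # batches of ~10 messages, per the advice above
DELAY_BETWEEN_BATCHES = 1.0   # seconds; assumed value, tune to your quota

def send_in_batches(messages, send_batch):
    # send_batch is a hypothetical callable that submits one batch
    # to the API; pausing between calls keeps the request rate steady
    for start in range(0, len(messages), BATCH_SIZE):
        send_batch(messages[start:start + BATCH_SIZE])
        time.sleep(DELAY_BETWEEN_BATCHES)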
Using Gmail API, I keep getting hundreds of error 429 "User-rate limit exceeded".
My script sends 100 emails at a time, every 10 minutes.
Shouldn't this be within the API limits?
The error pops up after only 5 or 6 successful sends.
Thanks
Have a look at the Gmail usage limits: sending a message costs 100 quota units, and the per-user limit is 250 quota units per second.
So if you send 100 emails in a very short time, this corresponds to 10,000 units, which is 40 times the allowed per-second quota.
So while short bursts are allowed, exceeding the quota by that much goes beyond the scope of the allowed burst.
In this case you should implement exponential backoff as recommended by Google.
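A minimal backoff sketch, assuming a zero-argument send_message callable and a stand-in RateLimitError (in practice you would catch the client's own error, e.g. an HTTP 429 from the Gmail client):

import random
import time

class RateLimitError(Exception):
    # stand-in for your client's quota/rate-limit error
    pass

def send_with_backoff(send_message, max_retries=5):
    # retry with exponentially growing waits plus jitter,
    # per Google's exponential backoff recommendation
    for attempt in range(max_retries):
        try:
            return send_message()
        except RateLimitError:
            time.sleep((2 ** attempt) + random.random())  # 1s, 2s, 4s, ... + jitter
    raise RuntimeError("still rate-limited after %d retries" % max_retries)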
How can I check, in my Gatling 3 report, my requirement that 100 requests are processed in less than 1 second? I ran this using Jenkins.
My simulation looks like this:
rampConcurrentUsers(1) to (100) during (161 second),
constantConcurrentUsers(100) during (1 minute)
Below are my response-time percentile graphs for the two executions, plotted over one-second intervals.
[response-time percentile graphs omitted]
What do the min and max here tell us? I am assuming the 25%-99% columns are the response-time percentiles for completed requests.
Those graph sections are not what you're after - they show the distribution of response times and the number of active users.
So min is the fastest response time
max is the longest
95% is the response time that 95% of your requests came in under
and so on...
So what you could do is look at the section of your graph corresponding to the 100 constant concurrent users injection stage. In that part you would require that the max response time always be under 1 second.
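If you want Jenkins to fail the build on that requirement rather than reading the graph by eye, Gatling 3 assertions can do it. A sketch, assuming your scenario and protocol are named scn and httpProtocol (note the assertion also covers the ramp-up phase):

setUp(
  scn.inject(
    rampConcurrentUsers(1) to (100) during (161 second),
    constantConcurrentUsers(100) during (1 minute)
  ).protocols(httpProtocol)
).assertions(
  global.responseTime.max.lt(1000) // fail the run if any request takes over 1 second
)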
(Note: there's something odd with your 2nd report - I assume it didn't come from running the stated injection profile as it has more than 100 concurrent users active)
I send emails using a cron job and a task queue. The job runs every 15 minutes, and the queue has the following setup:
- name: send-emails
  rate: 1/m
  max_concurrent_requests: 1
  retry_parameters:
    task_retry_limit: 0
But quite often an apiproxy_errors.OverQuotaError exception is raised. I am checking Quota Details and see that I am still within the daily quotas (Recipients Emailed, Attachment Data Sent, etc.), and I believe I couldn't be over a per-minute limit, since the rate I use is just 1 task per minute (i.e. send no more than 1 mail per minute).
Where am I wrong and what should I check?
How many emails are you sending? You have not set a bucket_size, so it defaults to 5. The rate controls how fast tokens are added back to the bucket, and bucket_size caps the burst. So with your current configuration the queue can burst up to 5 tasks in quick succession, which can momentarily exceed a per-minute mail quota even though sustained throughput is only 1 task per minute. And since the sustained rate is 1 per minute, enqueuing more than about 15 emails every 15 minutes means the backlog keeps growing.
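If the goal is strictly one mail per minute with no bursts, setting bucket_size explicitly should do it; a sketch of the adjusted queue.yaml:

- name: send-emails
  rate: 1/m
  bucket_size: 1              # at most one token available, so no bursts
  max_concurrent_requests: 1
  retry_parameters:
    task_retry_limit: 0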
I have not tried this myself, but when you catch the apiproxy_errors.OverQuotaError exception, does the message contain any detail as to why it is over quota/which quota has been exceeded?
import logging
from google.appengine.runtime import apiproxy_errors

try:
    send_mail_here  # placeholder for your actual mail-sending call
except apiproxy_errors.OverQuotaError as message:
    # the exception text sometimes names the quota that was exceeded
    logging.error(message)
I have a route which, when sent a message, invokes a refresh service.
I only want the service to be invoked at most every 1 minute
If the refresh service takes longer than 1 minute (e.g. 11 minutes) I don't want requests for it to queue up
The first part, at most every minute, is easy: I just create an aggregator with a completionTimeout of 1 minute.
The part about stopping requests from queueing up is not so easy, and I can't figure out how to construct it.
e.g.
from(seda_in)
    .aggregate(constant(A), /* blank aggregation strategy */)
    .completionTimeout(1000)
    .process(whatever)...
If the process takes 15 seconds, then potentially 15 new invoke messages could be waiting for it when it finishes. I want at most 1 to be waiting, however long the process takes (it's hard to predict).
How can I avoid this, or structure the route better to achieve my objectives?
I believe you would be interested in the Throttler pattern, which is documented here: http://camel.apache.org/throttler.html
Hope this helps :)
EDIT - If you want to eliminate excess requests, you can also investigate setting a TTL (time-to-live) header within JMS and using a concurrent consumer count of 1 on your route, which means any expired excess messages will be discarded.
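A minimal Throttler sketch in the Java DSL (the endpoint URIs here are assumptions); note that by default the throttler delays excess exchanges rather than dropping them, so pair it with the TTL idea above if stale requests should be discarded:

from("seda:in")
    // let at most one exchange through per 60-second window
    .throttle(1).timePeriodMillis(60000)
    .to("bean:refreshService");   // hypothetical refresh endpoint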
We recently got some data back on a benchmarking test from a software vendor, and I think I'm missing something obvious.
If there were 17 transactions per second (I assume they mean successfully completed requests), and 1500 of these requests could be served in 5 minutes, then how do I get the response time for a single user? Is this sort of thing even possible with benchmarking? I have a lot of other data from them, including Apache config settings, but I'm not sure how to do all the math.
Given the server setup they sent, I want to know how I can deduce the user response time. I have looked at other similar benchmarking tests, but I'm having trouble relating request rates to response times. What other data do I need to provide here to get that?
If only 1500 of these can be served per 5 minutes then:
1500 / 5 = 300 transactions per min can be served
300 / 60 = 5 transactions per second can be served
so how are they getting 17 completed transactions per second? Last time I checked, 5 < 17!
This doesn't seem to fit. Or am I looking at it wrongly?
I presume by user response time, you mean the time it takes to serve a single transaction:
If they can serve 5 per second, then it takes 200ms (1/5) per transaction
If they can serve 17 per second, then it takes 59ms (1/17) per transaction
(This assumes transactions are handled one at a time; with concurrent processing, latency is not simply the reciprocal of throughput.)
That is all we can tell from the given data. Perhaps clarify how many transactions are actually being done per second.