Throttle limits for MWS Orders API - amazon-mws

In the documentation it states:
The ListOrders and ListOrdersByNextToken operations together share a maximum request quota of 6 and a restore rate of 60.
It was my understanding that this means I could do something like this:
Call ListOrders: request quota = 5, orders downloaded = 100
Call ListOrdersByNextToken: request quota = 4, orders downloaded = 200
Call ListOrdersByNextToken: request quota = 3, orders downloaded = 300
Call ListOrdersByNextToken: request quota = 2, orders downloaded = 400
Call ListOrdersByNextToken: request quota = 1, orders downloaded = 500
Call ListOrdersByNextToken: request quota = 0, orders downloaded = 600
Then, since the restore rate restores one request per minute, after 6 minutes my request quota would be back to 6 and I could repeat the process. If I submit all the requests back-to-back, I could pull 600 orders every 6 minutes per merchant.
QUESTIONS:
Is my understanding of the throttle limit correct?
If it is correct, why have I been able to pull over 1000 orders in less than a minute? The only reason the program stops is that the merchant has no more orders to be pulled.
Thanks!

In my testing in February 2017, requests for ListOrdersByNextToken do, in fact, count against your request quota for ListOrders.

It is my understanding that the ...NextToken calls do not count towards the throttling limits. That means you've only made one call as far as throttling is concerned.
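Whichever of these is right, the limit the docs describe behaves like a token bucket: a fixed-size quota that drains one unit per call and refills at the restore rate. A minimal Python sketch of that model (illustrative only, not MWS client code; the numbers mirror the quota and restore rate quoted above):

import time

class TokenBucket:
    """Toy model of the throttle the MWS docs describe: a maximum
    request quota that drains one unit per call and restores over time."""

    def __init__(self, max_quota=6, restore_seconds=60):
        self.max_quota = max_quota               # burst capacity (6 requests)
        self.restore_seconds = restore_seconds   # one request restored per minute
        self.quota = max_quota
        self.last_restore = time.time()

    def try_request(self):
        # Restore one unit of quota for each full interval that has elapsed.
        elapsed = time.time() - self.last_restore
        restored = int(elapsed // self.restore_seconds)
        if restored:
            self.quota = min(self.max_quota, self.quota + restored)
            self.last_restore += restored * self.restore_seconds
        if self.quota > 0:
            self.quota -= 1   # spend quota on ListOrders / ListOrdersByNextToken
            return True
        return False          # throttled: wait for the restore rate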

Related

Gmail API: Does using a batch request count just once or n (items) against the quota?

If I use the Gmail API to batch-fetch 100 mails, does that count as 500 quota units or 5? If it's 500, then what should the queue setting in my App Engine queue.yaml be so I don't hit the 250 quota units/sec rate limit?
It will count as n items, not one. Also, the 250 quota units per second limit is per user, so if your batch request covers different users, each user has its own 250-unit limit.
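To make the arithmetic concrete (a sketch; the 5-units-per-message cost is taken from the question, not verified against current Gmail quotas):

UNITS_PER_GET = 5            # cost of one messages.get, per the question
BATCH_SIZE = 100
UNITS_PER_SEC_LIMIT = 250    # per-user rate limit cited above

units_per_batch = UNITS_PER_GET * BATCH_SIZE   # 500 units
min_seconds_between_batches = units_per_batch / float(UNITS_PER_SEC_LIMIT)  # 2.0 s
# In queue.yaml terms, that is roughly rate: 30/m (one batch every 2 seconds).
print(units_per_batch, min_seconds_between_batches)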

Google Custom Search JSON API quota and billing

I'm deciding between GSS (Google Site Search) and CSE (Custom Search Engine) with the JSON API, but I'm a little confused about JSON API billing.
My approved starting budget is $100 per year, which allows 20,000 queries/year in GSS, but how many queries will I get with the JSON API, and how must I set the quota so as not to exceed the budget?
My understanding of how Google bills is:
The price of 1 query is $0.005 = $5 / 1,000 queries => https://developers.google.com/custom-search/json-api/v1/overview#pricing
Google sums the daily queries beyond the 100 free ones and then bills monthly, so my quota has to be set to 154 (100 free + 54):
54 queries per day * 31 days * 12 months = 20,088 queries * $0.005 = $100.44, which is the maximum I will pay (possibly less).
Am I right, or does Google bill in a different way?
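The arithmetic above checks out; a quick sanity check in Python (treating every month as 31 days, as the question does, which slightly overestimates the cost):

PRICE_PER_QUERY = 5.0 / 1000     # $5 per 1,000 queries
paid_per_day = 54                # quota of 154 = 100 free + 54 paid

yearly_paid_queries = paid_per_day * 31 * 12          # 20088 queries
yearly_cost = yearly_paid_queries * PRICE_PER_QUERY   # $100.44
print(yearly_paid_queries, yearly_cost)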

How to identify reason of OverQuotaError when sending emails?

I send emails using a cron job and a task queue. The job executes every 15 minutes, and the queue has the following setup:
- name: send-emails
  rate: 1/m
  max_concurrent_requests: 1
  retry_parameters:
    task_retry_limit: 0
But quite often an apiproxy_errors.OverQuotaError exception occurs. I have checked Quota Details and see that I am still within the daily quotas (Recipients Emailed, Attachment Data Sent, etc.), and I believe I couldn't be over the maximum per-minute limit, since the rate I use is just 1 task per minute (i.e. send no more than 1 mail per minute).
Where am I wrong and what should I check?
How many emails are you sending? You have not set a bucket_size, so it defaults to 5. Your rate sets how quickly the bucket is refilled: with rate: 1/m, the queue can burst 5 tasks and then execute only about one per minute, i.e. roughly 20 every 15 minutes. If you are adding emails to the queue faster than that, the queue will fill up and eventually go over quota.
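If you do want roughly five emails per minute sustained, one option (a sketch using documented queue.yaml fields; the numbers are illustrative, not a recommendation) is to make the rate explicit:

- name: send-emails
  rate: 5/m                     # refill five tokens per minute
  bucket_size: 5                # allow bursts of up to five tasks
  max_concurrent_requests: 1
  retry_parameters:
    task_retry_limit: 0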
I have not tried this myself, but when you catch the apiproxy_errors.OverQuotaError exception, does the message contain any detail as to why it is over quota/which quota has been exceeded?
from google.appengine.runtime import apiproxy_errors
import logging

try:
    send_mail_here()  # your existing mail-sending call
except apiproxy_errors.OverQuotaError as message:
    logging.error(message)

Count Method Throwing 500 error

I have got into a strange situation. I want to know the count of Fetch entities on a daily, weekly, monthly, and all-time basis. In the Datastore, the count is about 2,368,348. Whenever I try to get the count, either through the Model or a GqlQuery, I get a 500 error. When there are fewer rows, the code below works fine.
Can anyone correct me or point me to the right solution, please? I am using Python.
The Model:
class Fetch(db.Model):
    adid = db.IntegerProperty()
    ip = db.StringProperty()
    date = db.DateProperty(auto_now_add=True)
Stats Codes:
adid = cgi.escape(self.request.get('adid'))
...
query = "SELECT __key__ FROM Fetch WHERE adid = " + adid + " AND date >= :1"
rows = db.GqlQuery(query, monthlyDate)
fetch_count = 0
for row in rows:
    fetch_count += 1
self.response.out.write(fetch_count)
It looks like your query is taking longer than GAE allows a query to run (typically ~60 seconds). From the count() documentation:
Unless the result count is expected to be small, it is best to specify a limit argument; otherwise the method will continue until it finishes counting or times out.
From the Request Timer documentation:
A request handler has a limited amount of time to generate and return a response to a request, typically around 60 seconds. Once the deadline has been reached, the request handler is interrupted.
If a DeadlineExceededError is being raised, this is your problem. If you need to run this query, consider using Backends in GAE. With Backends there is no time limit for generating and returning a request.
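If Backends are not an option, another workaround is to count in batches with query cursors, so that no single fetch runs long enough to hit the deadline. A sketch against the db API from the question (the helper name is illustrative; for millions of entities you would still want to spread the batches across task queue tasks):

from google.appengine.ext import db

def count_fetches(adid, since, batch_size=1000):
    # Count keys in batches; each fetch() is a short RPC, and the cursor
    # lets the next batch resume exactly where the last one ended.
    query = db.GqlQuery(
        "SELECT __key__ FROM Fetch WHERE adid = :1 AND date >= :2",
        adid, since)
    total = 0
    while True:
        batch = query.fetch(batch_size)
        total += len(batch)
        if len(batch) < batch_size:
            return total
        query.with_cursor(query.cursor())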

How do I measure response time in seconds given the following benchmarking data?

We recently got some data back on a benchmarking test from a software vendor, and I think I'm missing something obvious.
If there were 17 transactions (I assume they mean successfully completed requests) per second, and 1500 of these requests could be served in 5 minutes, then how do I get the response time for a single user? Is this sort of thing even possible with benchmarking? I have a lot of other data from them, including apache config settings, but I'm not sure how to do all the math.
Given the server setup they sent, I want to know how I can deduce the user response time. I have looked at other similar benchmarking tests, but I'm having trouble relating request throughput to response time. What other data do I need to provide here to get that?
If only 1500 of these can be served per 5 minutes then:
1500 / 5 = 300 transactions per min can be served
300 / 60 = 5 transactions per second can be served
so how are they getting 17 completed transactions per second? Last time I checked, 5 < 17!
This doesn't seem to fit. Or am I looking at it wrongly?
I presume by user response time, you mean the time it takes to serve a single transaction:
If they can serve 5 per second, then it takes 200ms (1/5) per transaction
If they can serve 17 per second, then it takes 59ms (1/17) per transaction
That is all we can tell from the given data. Perhaps clarify how many transactions are being done per second.
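For what it's worth, here are the two readings side by side (a quick check in Python; note that latency = 1/throughput only holds if transactions are served strictly one at a time):

SERVED = 1500              # requests served...
WINDOW_SECONDS = 5 * 60    # ...in five minutes
REPORTED_TPS = 17          # vendor-reported transactions per second

measured_tps = SERVED / float(WINDOW_SECONDS)   # 5.0 transactions per second
latency_at_5_tps = 1.0 / measured_tps           # 0.200 s = 200 ms per transaction
latency_at_17_tps = 1.0 / REPORTED_TPS          # ~0.059 s = 59 ms per transaction
print(measured_tps, latency_at_5_tps, latency_at_17_tps)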
