I have a batch process which makes a lot of REST API calls to upsert Salesforce records. The REST API calls start failing after a while with the error below.
I did check the API limit for my account, and I can confirm I am well within the 24-hour API limits.
Is there an hourly limit on API calls as well? I tried searching the Salesforce documentation, but everywhere I only see mention of the 24-hour limits; I could not find any per-hour limit.
{
"errorCode": "REQUEST_LIMIT_EXCEEDED",
"message": "You have reached the Connect API's hourly request limit for this user and application. Please try again later."
}
You haven't hit the Salesforce API limit, which is generally quite high. You've hit the limit for the Chatter REST API, also known as the Connect API. This API has a per user, per application, per hour request limit:
Chatter REST API requests are subject to rate limiting. Chatter REST API has a different rate limit than other Salesforce APIs. Chatter REST API has a per user, per application, per hour rate limit. When you exceed the rate limit, all Chatter REST API resources return a 503 Service Unavailable error code.
The linked document has a number of recommendations for avoiding this rate limit:
If you hit limits when running tests, use multiple users to simulate a real-world scenario.
When polling for feed updates, do not exceed one poll per minute (60 polls per hour). To return more results in 1 request, increase the page size.
When polling for private messages, do not exceed 60 polls per hour.
To avoid making multiple requests, cache metered static assets such as file and dashboard renditions (group and user profile pictures are not metered).
Each developer on a team should set up 2 connected apps: one for automated testing and one for manual testing and development. Don’t share connected apps with other developers.
Use a unique connected app for the production environment.
Don’t share connected apps between applications.
Review the list of Chatter REST API resources to determine what you're calling that is subject to these enhanced limits.
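The real fix is to keep the batch under that hourly budget (for example by throttling and spreading calls across the hour), but it also helps to back off when the 503 comes back instead of failing the whole batch. A minimal Python sketch, assuming the standard REST upsert-by-external-ID endpoint; the instance URL, token, object, and field names are placeholders:

import time
import requests

INSTANCE_URL = "https://yourInstance.salesforce.com"   # placeholder
ACCESS_TOKEN = "YOUR_OAUTH_ACCESS_TOKEN"               # placeholder

def upsert_with_backoff(sobject, ext_field, ext_id, payload, max_retries=5):
    """PATCH upsert that backs off when the hourly request limit is hit."""
    url = f"{INSTANCE_URL}/services/data/v52.0/sobjects/{sobject}/{ext_field}/{ext_id}"
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    delay = 30  # seconds; doubled on every rate-limited attempt
    for attempt in range(max_retries):
        resp = requests.patch(url, json=payload, headers=headers)
        if resp.status_code != 503 and "REQUEST_LIMIT_EXCEEDED" not in resp.text:
            return resp  # success, or a different error the caller should inspect
        time.sleep(delay)
        delay *= 2
    raise RuntimeError("Still rate limited after retries")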
Related
I am making a discord bot using discord.js and am using the YouTube API for music commands.
However, I managed to use up all of my quota just in testing. It can load playlists, search for songs, and use URLs, and it uses the ytdl-core module to download the videos. Is it normal to use this much quota, or is something wrong? I also noticed that you cannot get an extension for your quota without having an organisation, which I do not have.
I'm running into the same problem. The YouTube Data API has a crazy low quota/limit. While the quota sounds reasonable, 10,000 units per day, it's actually much lower. First, each search query is counted as 100 units, so divide 10,000 by 100 and that's how many searches you are really allowed to make. Furthermore, results are paginated with a maximum of 50 results (videos) per page. So in reality, YouTube will let you access data on 5,000 videos per day using the YouTube Data API's search functionality. To confirm my math, I just checked my quota for the day: after querying the API for 10 pages of results (50 videos per page, 500 videos total), my usage is listed at 1,000 out of the 10,000 allotment per day. Essentially, the YouTube Data API is useless.
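Given that pricing, one way to stretch the quota is to always request the maximum page size, since the search call appears to cost the same ~100 units whether it returns 1 result or 50. A minimal sketch with google-api-python-client and a placeholder API key:

from googleapiclient.discovery import build

API_KEY = "YOUR_API_KEY"   # placeholder
youtube = build("youtube", "v3", developerKey=API_KEY)

# search().list is billed per call, not per result, so ask for the maximum
# page size (50) and only the parts you actually need.
response = youtube.search().list(
    q="lofi hip hop",
    part="id",
    type="video",
    maxResults=50,
).execute()

video_ids = [item["id"]["videoId"] for item in response["items"]]

Also worth noting: if you already have a video URL or ID, videos().list appears to cost only about 1 unit per call, so avoiding search wherever possible stretches the quota much further.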
Our application makes use of a service account that has been authorized for the entire domain by the admin. With this service account our application accesses the domain user's emails with Gmail APIs like GetMessage.
All of a sudden, starting this week, we have started receiving errors like the following intermittently:
Quota exceeded for quota metric 'Queries' and limit 'Queries per minute per user' of service 'gmail.googleapis.com' for consumer 'project_number:XYZ'
There is no change in our application or in the frequency at which we access emails. We are using a batch size of 10 when calling the API.
The 'Quota exceeded errors count (10 sec) - Queries per minute' graph in the GCP dashboard is empty. So we are really not sure what is going on and why we are suddenly hitting the quota limits.
Also, I am not sure how the 'per-user' limit is applied when my app accesses the user mailboxes with the service account. The documentation around this is vague, at least to me.
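A minimal sketch of this kind of delegated access is shown below (Python, with placeholder file names and addresses; the actual implementation differs in detail). The open question is whether the per-minute "per user" bucket is keyed to the service account identity or to the impersonated mailbox passed as the subject:

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/gmail.readonly"]

# Domain-wide delegated credentials: the service account impersonates a
# specific mailbox via `subject`. It is unclear (to me) whether the
# per-user quota is counted against this subject or against the service
# account itself.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
).with_subject("someuser@yourdomain.com")

gmail = build("gmail", "v1", credentials=creds)
message = gmail.users().messages().get(userId="me", id="MESSAGE_ID").execute()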
These errors are really impacting our ability to serve our customers. Moreover not knowing why we are getting these errors is shaking our confidence in the Gmail APIs.
Any help in this regard is highly appreciated.
Thanks
UPDATE:
Today we are seeing lots of
"User-rate limit exceeded. Retry after <timestamp>"
errors. It seems like this time we are hitting some quota limit other than 'queries per minute'. While I look into my client implementation and figure out why this is happening, feel free to share any recommendations you may have.
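One obvious mitigation I'm looking at is to honor that 'Retry after' hint with exponential backoff. A rough sketch, assuming the googleapiclient library (whose HttpError exposes the HTTP status code); in production you would also inspect the error reason, since 403 can mean other things entirely:

import random
import time

from googleapiclient.errors import HttpError

def execute_with_backoff(request, max_retries=5):
    """Execute a googleapiclient request, backing off on rate-limit errors."""
    for attempt in range(max_retries):
        try:
            return request.execute()
        except HttpError as err:
            # Only retry rate-limit style errors (403/429); re-raise the rest.
            if err.resp.status not in (403, 429):
                raise
            time.sleep((2 ** attempt) + random.random())  # exponential backoff + jitter
    raise RuntimeError(f"Still rate limited after {max_retries} retries")

# usage: msg = execute_with_backoff(gmail.users().messages().get(userId="me", id=msg_id))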
Thanks.
Google Analytics has limits and quotas on API requests. You can request an increase. If you don't want to do that, daily quotas are refreshed at midnight Pacific Standard Time.
I have used Microsoft Flow to check for new emails with specific labels using their Gmail connector trigger 'when a new email arrives'.
Up until now I have had four flows running perfectly for over a year, until last Friday when errors started occurring. I am now testing with just one flow, single concurrency, which runs every minute (Flow Plan 2). The mailbox being checked receives at most 200 new messages per day, fairly spread out.
This is the error message, which happens after the flow runs successfully 2-3 times, a minute apart:
{
"statusCode": 403,
"message": "Out of call volume quota. Quota will be replenished in 23:01:41."
}
Microsoft claims this is related to the Gmail API limit of no more than 60 calls per 60 seconds, even though the message above suggests the quota will be replenished in something close to 24 hours. It seems more like we are hitting some kind of daily limit, but the only one I could find on the Gmail usage limits page is 1,000,000 quota units per day, and I'm certain we are far short of that.
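To sanity-check that, here is a rough back-of-the-envelope estimate, using the per-method costs listed on the Gmail usage limits page and assuming the connector issues roughly one list call per poll (both figures are assumptions about how the connector works):

# Assumed per-method costs from the Gmail API usage limits page:
# messages.list ~ 5 quota units, messages.get ~ 5 quota units.
polls_per_day = 24 * 60          # the Flow trigger fires once a minute
new_messages_per_day = 200       # worst case for this mailbox

daily_units = polls_per_day * 5 + new_messages_per_day * 5
print(daily_units)               # 8200 -- nowhere near 1,000,000 units/day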
I have tried accessing quota usage from https://console.developers.google.com, but since I didn't create a project and everything is set up from Microsoft Flow, there is no quota data shown.
Q: How can I verify the number of API calls being made from Microsoft Flow via the Gmail connector, if that's even possible?
Note, I also started a thread on the Microsoft Power Users forum to get help, but I figure if it's Gmail API related then I may get better answers here.
Edit 27th Aug: The resolution was to export the flows to a new Microsoft account. For whatever reason, the flows run fine on the new account (even though it has the same Flow subscription). Microsoft still cannot explain why the flows would halt on one account and not on another.
I have just developed a mobile app which basically lets users upload and download photos; add, update, search, delete, and refresh transactions; and query reports. Every action needs to submit a request to the App Engine server.
I am using Cloud Endpoints, OAuth 2.0, and Objectify to implement this App Engine backend. When I'm testing alone, 40% of the instance hours quota has already been used. How much instance billing can I expect if 100 people use this app? How are the instance hours calculated? Per request submitted, or by the time an instance spends working on multiple requests?
Is it worth it?
If my target is more than 100 users for my app, is it worth it? Could you please share what exactly I have misunderstood about these instances?
Thanks
As others have commented, the question is very hard to answer. The easiest answer I can think of is to look at the response header "X-AppEngine-Estimated-CPM-US-Dollars". You have to be a member of the Cloud Platform project (see the Permissions page in the Cloud Platform developers console) to see this header (you can check it in your browser).
The header tells you what the cost of the request was in US Dollars multiplied by 1000.
But think of it as an indication. If your request spawns other processes such as tasks, those costs are not included in the number you see in that header.
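If you want to capture it outside the browser, the header can be read off any response to a request App Engine attributes to a project member. A hypothetical sketch with a placeholder URL and token; whether a bearer token is enough for App Engine to recognize you as a project member may depend on your setup, so treat this purely as an illustration:

import requests

resp = requests.get(
    "https://your-app-id.appspot.com/some/endpoint",           # placeholder
    headers={"Authorization": "Bearer YOUR_OAUTH_TOKEN"},       # placeholder
)
# Per the explanation above, this is the request's cost in USD multiplied by 1,000.
print(resp.headers.get("X-AppEngine-Estimated-CPM-US-Dollars"))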
The relationship between Frontend instance hours and the number of requests is not linear either. For one, you will be charged a number of minutes (not sure if it's 15 minutes) when the instance spins up. And there are other performance settings that determine how this works.
Your best bet is to run the app for a while against real users and find out what the costs were in a given month or so.
I read somewhere that the Salesforce API has a 10 request limit. If we write code to integrate with Salesforce:
1. What is the risk of this limit?
2. How can we write code to negate this risk?
My real concern is that I don't want to build our customer this great standalone website that integrates with Salesforce, only to have users 11 and 12 kicked out to wait until requests 1-10 are complete.
Edit:
Some more details on the specifics of the limitation can be found at http://www.salesforce.com/us/developer/docs/api/Content/implementation_considerations.htm. Look at the section titled Limits.
"Limits
There is a limit on the number of queries that a user can execute concurrently. A user can have up to 10 query cursors open at a time. If 10 QueryLocator cursors are open when a client application, logged in as the same user, attempts to open a new one, then the oldest of the 10 cursors is released. This results in an error in the client application.
Multiple client applications can log in using the same username argument. However, this increases your risk of getting errors due to query limits.
If multiple client applications are logged in using the same user, they all share the same session. If one of the client applications calls logout(), it invalidates the session for all the client applications. Using a different user for each client application makes it easier to avoid these limits."
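One way to write code that stays clear of that cursor limit is to drain each query's result set completely before issuing the next one, rather than leaving many result sets half-consumed. The quote describes SOAP QueryLocators, but the same idea should apply to the REST query endpoint's pagination; a minimal Python sketch with placeholder instance URL and token:

import requests

INSTANCE = "https://yourInstance.salesforce.com"     # placeholder
HEADERS = {"Authorization": "Bearer ACCESS_TOKEN"}    # placeholder token

def query_all(soql):
    """Run a SOQL query and follow nextRecordsUrl until the cursor is exhausted."""
    resp = requests.get(f"{INSTANCE}/services/data/v52.0/query",
                        headers=HEADERS, params={"q": soql}).json()
    records = resp["records"]
    while not resp["done"]:
        # nextRecordsUrl is a relative path to the next page of this cursor
        resp = requests.get(INSTANCE + resp["nextRecordsUrl"], headers=HEADERS).json()
        records.extend(resp["records"])
    return records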
Not sure which limit you're referring to, but the governor limits are all listed in the Apex documentation. These limits apply to code running in a given Apex transaction (i.e. in response to a trigger/web service call etc), so adding more users won't hurt you - each transaction gets its own allocation of resources.
There are also limits on the number of long-running concurrent API requests and total API calls in a day. Most of these are per-license, so, again, as the number of users rises, so do the limits.
A few comments on:
I don't want to build our customer this great standalone website that integrates with Salesforce only to have users 11 and 12 kicked out to wait until requests 1-10 are complete?
There are two major things you need to consider when planning real-time Sfdc integration, besides the API call limits mentioned in metadaddy's answer (and if you make a lot of queries it's easy to hit these limits):
Sfdc has routine maintenance outage periods.
Querying Sfdc will always be significantly slower than querying a local data source.
You may want to consider a local mirror where you replicate your Sfdc data.
Cheers,
Tymek
All API usage limits are calculated over a 24-hour period.
Limits apply to the whole organization, so if you have several users connecting through the API, all of them count against the same limit.
You get 1,000 API requests per Salesforce user. Even Unlimited Edition is actually limited to 5,000.
If you want to check your current API usage status, go to Your Name | Setup | Company Profile | Company Information
You can purchase additional API calls
You can read more at Salesforce API Limits documentation
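If you would rather watch usage from code than from the Setup page, the REST API also reports it. A minimal sketch with placeholder instance URL and token; the header format and limits resource are as I recall them from the REST API documentation:

import requests

INSTANCE = "https://yourInstance.salesforce.com"     # placeholder
HEADERS = {"Authorization": "Bearer ACCESS_TOKEN"}    # placeholder token

# Every REST response carries a Sforce-Limit-Info header,
# e.g. "api-usage=18/15000" (calls used / 24-hour allowance).
resp = requests.get(f"{INSTANCE}/services/data/v52.0/sobjects", headers=HEADERS)
print(resp.headers.get("Sforce-Limit-Info"))

# Newer API versions also expose a limits resource with the same numbers as JSON.
limits = requests.get(f"{INSTANCE}/services/data/v52.0/limits", headers=HEADERS).json()
print(limits["DailyApiRequests"])   # {"Max": ..., "Remaining": ...}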