Salesforce API 10 request limit - salesforce

I read somewhere that the Salesforce API has a 10 request limit. If we write code to integrate with Salesforce:
1. What is the risk of this limit
2. How can we write code to negate this risk?
My real concern is that I don't want to build our customer this great standalone website that integrates with Salesforce, only to have users 11 and 12 locked out until requests 1-10 complete.
Edit:
Some more details on the specifics of the limitation can be found at http://www.salesforce.com/us/developer/docs/api/Content/implementation_considerations.htm. Look at the section titled limits.
"Limits
There is a limit on the number of queries that a user can execute concurrently. A user can have up to 10 query cursors open at a time. If 10 QueryLocator cursors are open when a client application, logged in as the same user, attempts to open a new one, then the oldest of the 10 cursors is released. This results in an error in the client application.
Multiple client applications can log in using the same username argument. However, this increases your risk of getting errors due to query limits.
If multiple client applications are logged in using the same user, they all share the same session. If one of the client applications calls logout(), it invalidates the session for all the client applications. Using a different user for each client application makes it easier to avoid these limits."

Not sure which limit you're referring to, but the governor limits are all listed in the Apex documentation. These limits apply to code running in a given Apex transaction (i.e. in response to a trigger/web service call etc), so adding more users won't hurt you - each transaction gets its own allocation of resources.
There are also limits on the number of long-running concurrent API requests and total API calls in a day. Most of these are per-license, so, again, as the number of users rises, so do the limits.
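If a request does fail with a limit error (for example REQUEST_LIMIT_EXCEEDED, or the concurrent-cursor error quoted above), a common client-side mitigation is retrying with exponential backoff. A minimal sketch, where the callable and the RuntimeError-based error check are stand-ins for whatever your actual Salesforce client raises:

```python
import random
import time

def call_with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry a callable when it raises a rate-limit error, sleeping
    exponentially longer (with jitter) between attempts."""
    for attempt in range(max_retries):
        try:
            return call()
        except RuntimeError as exc:  # stand-in for your API client's limit error
            if "REQUEST_LIMIT_EXCEEDED" not in str(exc) or attempt == max_retries - 1:
                raise
            # Backoff: base, 2x base, 4x base, ... plus a little randomness
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```

The jitter matters when several clients share one user: without it, they all retry at the same instant and collide again.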

Few comments on:
I don't want to build our customer this great standalone website that integrates with Salesforce only to have user 11 and 12 kicked out to wait until requests 1-10 are complete?
There are two major things you need to consider when planning real-time Sfdc integration, besides the API call limits mentioned in metadaddy's answer (and if you make a lot of queries it's easy to hit these limits):
Sfdc has routine maintenance outage periods.
Querying Sfdc will always be significantly slower than querying a local datasource.
You may want to consider keeping a local mirror where you replicate your Sfdc data.
Cheers,
Tymek

All API usage limits are calculated over a 24-hour period.
Limits apply to the whole organization, so if you have several users connecting through the API, they all count against the same limit.
You get 1,000 API requests per Salesforce user license. Even Unlimited Edition is actually limited to 5,000 per license.
To check your current API usage, go to Your Name | Setup | Company Profile | Company Information.
You can purchase additional API calls
You can read more at Salesforce API Limits documentation
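As an alternative to the Setup UI, newer API versions also expose a REST Limits resource (GET /services/data/vXX.X/limits/) whose JSON response includes a DailyApiRequests entry. A sketch of reading that response; the payload below is an illustrative example shape, not live data:

```python
def remaining_api_requests(limits_json):
    """Extract remaining vs. maximum daily API calls from a
    Limits-resource response (shape assumed from the REST API docs)."""
    daily = limits_json["DailyApiRequests"]
    return daily["Remaining"], daily["Max"]

# Illustrative response (values made up for the example):
sample = {"DailyApiRequests": {"Remaining": 14990, "Max": 15000}}
remaining, maximum = remaining_api_requests(sample)
print(f"{remaining} of {maximum} daily API requests left")
```

Polling this from your integration lets you alert before hitting the cap rather than after.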

Related

SalesForce Rest API calls Hourly limit

I have a batch process which makes a lot of REST API calls to upsert Salesforce records. The REST API calls start failing after a while with the error below.
I did check the API limit for my account, and I can confirm I am well within the 24-hour API limits.
Is there an hourly limit on API calls as well? I tried searching the Salesforce documentation, but everywhere I just see mention of 24-hour limits; I could not find any per-hour limit.
{
  "errorCode": "REQUEST_LIMIT_EXCEEDED",
  "message": "You have reached the Connect API's hourly request limit for this user and application. Please try again later."
}
You haven't hit the Salesforce API limit, which is generally quite high. You've hit the limit for the Chatter REST API, also known as the Connect API. This API has a per user, per application, per hour request limit:
Chatter REST API requests are subject to rate limiting. Chatter REST API has a different rate limit than other Salesforce APIs. Chatter REST API has a per user, per application, per hour rate limit. When you exceed the rate limit, all Chatter REST API resources return a 503 Service Unavailable error code.
The linked document has a number of recommendations for avoiding this rate limit:
If you hit limits when running tests, use multiple users to simulate a real-world scenario.
When polling for feed updates, do not exceed one poll per minute (60 polls per hour). To return more results in 1 request, increase the page size.
When polling for private messages, do not exceed 60 polls per hour.
To avoid making multiple requests, cache metered static assets such as file and dashboard renditions (group and user profile pictures are not metered).
Each developer on a team should set up 2 connected apps: one for automated testing and one for manual testing and development. Don’t share connected apps with other developers.
Use a unique connected app for the production environment.
Don’t share connected apps between applications.
Review the list of Chatter REST API resources to determine what you're calling that is subject to these enhanced limits.
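To stay under the one-poll-per-minute guidance above, you can enforce a minimum interval between polls on the client side. A minimal sketch; fetch is whatever function performs the actual Chatter request:

```python
import time

class ThrottledPoller:
    """Refuse to poll more often than min_interval seconds
    (60s = at most 60 polls per hour)."""

    def __init__(self, fetch, min_interval=60.0, clock=time.monotonic):
        self.fetch = fetch
        self.min_interval = min_interval
        self.clock = clock  # injectable for testing
        self._last = None

    def poll(self):
        now = self.clock()
        if self._last is not None and now - self._last < self.min_interval:
            return None  # too soon; skip this poll
        self._last = now
        return self.fetch()
```

Combine this with a larger page size, per the recommendation above, so each permitted poll returns more results.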

How do I monitor gmail API usage from Microsoft Flow?

I have used Microsoft Flow to check for new emails with specific labels using their gmail connector trigger 'when a new email arrives'.
Up until now I have had four flows running perfectly for over a year, until last Friday when errors started occurring. I am now testing with just one flow, single concurrency, which runs every minute (Flow Plan 2). The mailbox being checked receives at most 200 new messages per day, fairly spread out.
This is the error message, which happens after the flow runs successfully 2-3 times a minute apart:
{
"statusCode": 403,
"message": "Out of call volume quota. Quota will be replenished in 23:01:41."
}
Microsoft claims this is related to the Gmail API limit of no more than 60 calls per 60 seconds, even though the message above suggests the quota refreshes in something close to 24 hours. It seems more like we are hitting some kind of daily limit, but the only one I could find on the Gmail usage limits page is 1,000,000 quota units per day, and I'm certain we are far short of that.
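A rough back-of-envelope check supports that: assuming a per-call cost on the order of 5 quota units for a list/get request (an assumption; verify the current figures on the Gmail usage-limits page), one poll per minute is nowhere near the daily cap:

```python
# Hypothetical cost figures -- verify against Gmail's usage-limits page.
UNITS_PER_POLL = 5           # assumed cost of one messages.list call
POLLS_PER_DAY = 24 * 60      # one poll per minute
DAILY_QUOTA = 1_000_000      # quota units per day, per the question

used = UNITS_PER_POLL * POLLS_PER_DAY
print(f"{used} units/day = {100 * used / DAILY_QUOTA:.2f}% of quota")
```

Even with four flows running, usage would stay well under 1% of the stated daily quota, which suggests the 403 comes from somewhere other than the Gmail daily limit.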
I have tried accessing quota usage from https://console.developers.google.com but since I didn't make a project and everything is setup from the Microsoft flow, there is no quota data shown.
Q: How can I verify the number of api calls being made from Microsoft Flow via the gmail connector, if that's even possible?
Note, I also started a thread on the Microsoft Power users forum to get help, but I figure if it's gmail API related then I may get better answers here.
Edit 27th Aug: Resolution was to export the flows to a new Microsoft account. For whatever reason, the flows run fine on the new account (even though it has the same Flow subscription). Microsoft still cannot explain why the flows would halt on one account and not on another.

Upper limit on the number of Azure logic apps that can be created in a resource group/subscription?

Is there any limit on the number of logic apps that can be created in a resource group/subscription?
Is there any limit on the number of logic apps that can be created per minute?
We essentially want to create logic apps (many per minute) with scheduled triggers that drop messages on a service bus, and delete these logic apps when they are no longer needed.
Workflows per region per subscription: 1,000 (from here)
This would be related to the Azure Resource Manager.
For each subscription and tenant, Resource Manager limits read requests to 15,000 per hour and write requests to 1,200 per hour.
These limits apply to each Azure Resource Manager instance; there are multiple instances in every Azure region, and Azure Resource Manager is deployed to all Azure regions. So, in practice, limits are effectively much higher than those listed above, as user requests are generally serviced by many different instances.
If your application or script reaches these limits, you need to throttle your requests.
from here.
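Those write limits also bound how fast you can churn through Logic Apps. Assuming, hypothetically, that each create and each delete counts as one ARM write request, a quick calculation shows the sustainable rate:

```python
WRITE_LIMIT_PER_HOUR = 1200   # ARM write requests per subscription per hour
WRITES_PER_APP = 2            # assumed: one create + one delete per short-lived app

apps_per_hour = WRITE_LIMIT_PER_HOUR // WRITES_PER_APP
apps_per_minute = apps_per_hour / 60
print(f"~{apps_per_hour} apps/hour, ~{apps_per_minute:.0f} apps/minute sustained")
```

So "many per minute" is feasible only up to roughly ten apps per minute sustained, and that ignores any other ARM writes the subscription is making; a single long-lived logic app parameterized per message may scale better than create/delete churn.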
HTH

Google App Engine : How can I access sessions inside a cloud endpoint?

I have developed a standard Google App Engine backend application for my Android client. Now, there is search functionality in the app, and for one request I plan to return 20 results but actually search for more in advance (like 100), so that on the next hit I can just search within those records and return. So, I need a mechanism to save these 80 records so that the same user can get them quickly.
I searched for it and found out that we can enable sessions in appengine-web.xml, but all the session access examples are done in doPost() and doGet(), while my code is entirely Google Cloud Endpoints (like Spring).
Another thing is that I would like to persist the data both inside the Datastore and some cache(like Memcache).
My end goal is storing this data across search sessions. Is there any mechanism that will allow me to do this?
The usual approach here is to provide a code value in the response which the user can send in the next request to "continue" viewing the same results. This is called a "cursor".
For example, you might store the 80 records under some random key in your cache, and then send that random key to the user as part of the response. Then, when the user makes a new request including the key, you just fetch the records and return them.
Cookie-based sessions don't usually work well with APIs; they introduce unnecessary statefulness.
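A sketch of that cursor pattern, using a plain dict as a stand-in for Memcache (all names here are illustrative, not an Endpoints API):

```python
import uuid

cache = {}  # stand-in for Memcache


def run_search(query, limit):
    """Stand-in for the real datastore search."""
    return [f"{query}-{i}" for i in range(limit)]


def search(query, page_size=20):
    """First request: fetch 100, return 20, stash the other 80 under a cursor."""
    results = run_search(query, limit=100)
    first, rest = results[:page_size], results[page_size:]
    cursor = uuid.uuid4().hex
    cache[cursor] = rest
    return {"results": first, "cursor": cursor}


def next_page(cursor, page_size=20):
    """Follow-up request: serve the next page from the cached remainder."""
    rest = cache.pop(cursor, [])
    first, remainder = rest[:page_size], rest[page_size:]
    new_cursor = None
    if remainder:
        new_cursor = uuid.uuid4().hex
        cache[new_cursor] = remainder
    return {"results": first, "cursor": new_cursor}
```

Because the cursor is just an opaque value in the request, this works identically from an Endpoints method, with no servlet session needed; back the cache with Memcache (and optionally the Datastore) for the persistence mentioned in the question.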

How many AppEngine instance hours should I expect?

I have just developed a mobile app which basically lets users upload and download photos, add, update, search, delete, refresh transactions, and query reports. Every action submits a request to the App Engine server.
I am using Cloud Endpoints, OAuth 2.0 and Objectify to implement this App Engine backend. When I'm testing alone, 40% of the instance hours quota has already been used. How much would the instance billing be if 100 people were using this app? How are the instance hours calculated? Per submitted request, or by the time instances spend working on multiple requests?
Is it worth it?
If my target is more than 100 users, is it worth it? Could you please share what exactly I've misunderstood about instances.
Thanks
As others have commented, the question is very hard to answer. The easiest answer I can think of is by looking at the response header "X-AppEngine-Estimated-CPM-US-Dollars". You have to be a member of the Cloud Platform Project (see the Permissions page in Cloud Platform developers console) to see this header (you can check it in your browser).
The header tells you what the cost of the request was in US Dollars multiplied by 1000.
But think of it as an indication. If your request spawns other processes such as tasks, those costs are not included in the number you see in that header.
The relationship between Frontend instance hours and the number of requests is not linear either. For one, you will be charged a number of minutes (not sure if it's 15 minutes) when the instance spins up. And there are other performance settings that determine how this works.
Your best bet is to run the app for a while against real users and find out what the costs were in a given month or so.
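Reading that header is straightforward once you're authenticated as a project member; the conversion, per the "multiplied by 1000" note above, looks like this (the header value is assumed to be a dollar amount such as "$0.021" - CPM meaning cost per 1000 requests):

```python
def per_request_cost(header_value):
    """Convert an X-AppEngine-Estimated-CPM-US-Dollars value
    (cost per 1000 requests) into a per-request cost in dollars."""
    return float(header_value.lstrip("$")) / 1000.0

print(per_request_cost("$0.021"))
```

Multiply by your expected request volume per user to get the rough per-user estimate, keeping in mind the caveats above about spawned tasks and instance spin-up not being included.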
