How many App Engine instance hours should I expect?

I have just developed a mobile app that basically lets users upload and download photos; add, update, search, delete, and refresh transactions; and query reports. Every action submits a request to the App Engine server.
I am using Cloud Endpoints, OAuth 2.0, and Objectify to implement this App Engine backend. When I'm testing alone, the instance hours have already used up 40% of the quota. What instance billing should I expect if 100 people use this app? How are instance hours calculated? By each submitted request, or by the time an instance spends working on multiple requests?
Is it worth it?
My target is more than 100 users for my app. Is it worth it? Could you please share what exactly I have misunderstood about instances?
Thanks

As others have commented, the question is very hard to answer. The easiest approach I can think of is looking at the response header "X-AppEngine-Estimated-CPM-US-Dollars". You have to be a member of the Cloud Platform project (see the Permissions page in the Cloud Platform developers console) to see this header (you can check it in your browser).
The header tells you the cost of the request in US dollars, multiplied by 1,000.
But think of it as an indication. If your request spawns other processes such as tasks, those costs are not included in the number you see in that header.
The relationship between Frontend instance hours and the number of requests is not linear either. For one, you will be charged a number of minutes (not sure if it's 15 minutes) when the instance spins up. And there are other performance settings that determine how this works.
Your best bet is to run the app for a while against real users and find out what the costs were in a given month or so.
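If you want to automate that check, here is a minimal sketch using Python's requests library (the endpoint URL is hypothetical, and you would also need to send your project-member credentials, which are omitted here):

import requests

# Hypothetical URL; replace with an endpoint of your deployed app.
# Authentication as a Cloud Platform project member is assumed.
resp = requests.get("https://your-app.appspot.com/api/transactions")

# Reports the estimated cost of 1,000 requests like this one, in US dollars.
cpm = resp.headers.get("X-AppEngine-Estimated-CPM-US-Dollars")
print("Estimated cost per 1,000 requests: %s" % cpm)

Multiplying that figure by the request volume you expect per user gives a rough monthly estimate.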

Related

Number of online users and app visits in single page application

I am making an SPA (single-page application) and want to show the number of online users and app visits in the admin panel, like this:
Online users: 5 people.
Today's visits: 12 people.
This year's visits: 5,263 people.
I don't know what the proper procedure is for implementing this.
What I'm doing as a solution is to create an endpoint on the server that computes the number of online users and visits. On the client, I generate a random string as a UserKey and save it in localStorage. Then I use setInterval to send a request with the UserKey payload to the endpoint every 5 minutes. The endpoint calculates the number of online users and visits based on the UserKey and the current date and time. The calculation is not 100% accurate, but it's satisfactory for me.
I'm not sure if I'm on the right track. Please share your advice.
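A minimal sketch of the server side of the heartbeat scheme the question describes (Flask and an in-memory dict are illustrative choices, not from the question):

from datetime import datetime, timedelta
from flask import Flask, request

app = Flask(__name__)
last_seen = {}  # maps each client's random UserKey to its last heartbeat time

@app.route("/heartbeat", methods=["POST"])
def heartbeat():
    last_seen[request.json["userKey"]] = datetime.utcnow()
    return {"ok": True}

@app.route("/online-users")
def online_users():
    # A user counts as online if they pinged within the last 5 minutes,
    # matching the client's setInterval period.
    cutoff = datetime.utcnow() - timedelta(minutes=5)
    return {"online": sum(1 for t in last_seen.values() if t >= cutoff)}

Flask serializes the returned dicts to JSON. A real deployment would persist the keys and timestamps (for the daily and yearly visit counts) rather than hold them in memory.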
You can simply do it with Navigator.onLine:
WebSockets (e.g. Pusher) would be a good solution for your case. That way you don't have to poll every 5 minutes.
See https://pusher.com/docs/channels/getting_started/javascript/?ref=docs-index
It is free, but only for up to 50 users. If you want more users, you can also choose to run your own WebSocket server, which is a little more complicated (without external libraries, of course).

How do I monitor Gmail API usage from Microsoft Flow?

I have used Microsoft Flow to check for new emails with specific labels, using their Gmail connector's trigger 'when a new email arrives'.
I had four flows running perfectly for over a year, until last Friday when errors started occurring. I am now testing with just one flow, single concurrency, which runs every minute (Flow Plan 2). The mailbox being checked receives at most 200 new messages per day, fairly spread out.
This is the error message, which appears after the flow runs successfully 2-3 times a minute apart:
{
  "statusCode": 403,
  "message": "Out of call volume quota. Quota will be replenished in 23:01:41."
}
Microsoft claims this is related to the Gmail API limit of no more than 60 calls per 60 seconds, even though the message above suggests the quota refreshes in something close to 24 hours. It seems more like we are hitting some kind of daily limit, but the only one I could find on the Gmail usage limits page is 1,000,000 quota units per day, and I'm certain we are far short of that.
I have tried accessing quota usage from https://console.developers.google.com but since I didn't make a project and everything is setup from the Microsoft flow, there is no quota data shown.
Q: How can I verify the number of API calls being made from Microsoft Flow via the Gmail connector, if that's even possible?
Note, I also started a thread on the Microsoft Power users forum to get help, but I figure if it's gmail API related then I may get better answers here.
Edit 27th Aug: The resolution was to export the flows to a new Microsoft account. For whatever reason, the flows run fine on the new account (even though it has the same Flow subscription). Microsoft still cannot explain why the flows would halt on one account and not on another.

How to unpublish an iCal (*.ics) feed?

One feature on my site allows registered users to create calendars for their organization. We provide a dynamically-generated iCal feed for these calendars through a URL with query-string parameters. Anyone can subscribe to these feeds by entering the provided URL into Google Calendar, Outlook, iPhone, etc...
This has been working well enough for a few years, but we now have a problem with stale or deleted calendars. If a registered user significantly alters or deletes their account, the calendar will no longer exist and the feed is useless. We currently return a "404 - Not Found" error for those requests (recently changed from "400 - Bad Request").
My question is, other than returning the 404, is there any way to get subscribers to stop requesting a bad feed? This is a similar question, where the accepted answer suggests returning 404 or 410 and hoping the clients will see the error and manually remove the subscription.
That doesn't seem to be working so far. We get ~100k feed requests an hour, and a full 30% of those are for deleted calendars.
Do Google, Apple, et al not give up when they repeatedly get a 404 for a feed? How have others handled this issue?
If this was just a problem with log pollution I wouldn't worry too much about it. However, since the feeds are dynamically generated, each request hits the backend db. The processing is trivial and doesn't appear to be affecting performance, but the situation can only get worse.
Apologies if this belongs on ServerFault. While the issue affects my servers, I believe the solution is programmatic.
I don't believe there is an easy answer - I think it's been asked before.
It's like having to deal with all the traffic when hackers use your site for target practice on logins or xmlrpc, or just probe for vulnerabilities; or when spammers try a scatter-gun approach sending emails; or when a web spider decides to crawl your site excessively. You have to size for all that non-useful traffic.
You could generate, and keep up to date, a list of the bad .ics URLs outside of the database, and have a script check it and bounce the request before it gets near the database. Basically, try to deal with the problem as efficiently as possible.
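A minimal sketch of that bounce idea (Flask and an in-memory set are illustrative; the route and parameter names are assumptions, not from the question):

from flask import Flask, abort, request

app = Flask(__name__)

# Kept outside the main database, e.g. reloaded periodically from a flat file.
DELETED_CALENDARS = {"org-123", "org-456"}

@app.before_request
def bounce_dead_feeds():
    # Reject requests for deleted feeds before any database work happens.
    if request.path == "/ical" and request.args.get("calendar") in DELETED_CALENDARS:
        abort(410)  # 410 Gone: the feed is permanently removed

@app.route("/ical")
def ical_feed():
    # ...generate the feed from the database as before...
    return "BEGIN:VCALENDAR\nEND:VCALENDAR\n", 200, {"Content-Type": "text/calendar"}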
You could also try adding a step to account deletion that asks the user to go to their calendar programs and delete the feed before continuing. However, that might cause bad vibes and probably would not totally fix it anyway.

How to send mail to all users in an App Engine application

I work on a Google App Engine application which currently has about 4,000 users, and I want to write a handler to send an email to all of them.
The problem is that App Engine has limitations on getting entities from the datastore. For example, the maximum number of rows that can be returned from the datastore in one fetch is 1,000.
I can get all users incrementally by using a loop with the limit and offset parameters of GQL, but then the maximum lifetime of a request handler, which is 30 seconds, limits me.
I did some research to overcome this problem and ended up at backends, but backends seem to be meant for something different; they don't look appropriate for this operation.
How can I achieve this task?
Thanks in advance.
from google.appengine.api import mail

mail.send_mail(sender="Example.com Support <support@example.com>",
               to="Albert Johnson <Albert.Johnson@example.com>",
               subject="Your account has been approved",
               body="""
Dear Albert:

Your example.com account has been approved. You can now visit
http://www.example.com/ and sign in using your Google Account to
access new features.

Please let us know if you have any questions.

The example.com Team
""")
Task Queues give you a 10-minute deadline; see the documentation.
You can get more than 1,000 items in one request. Just avoid using fetch and try this:

entities = Entity.all()  # <-- no fetch(); iterating pulls results in batches
for e in entities:
    mail.send_mail(sender=sender, to=e.email,
                   subject=subject, body=body)  # same fields as above

This will keep on getting users until the 10-minute limit runs out: a lot of entities, and more than enough for 4,000 users.
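For completeness, a minimal sketch of moving that loop onto a task queue with the deferred library (the User model and the message fields are assumptions for the example):

from google.appengine.ext import deferred
from google.appengine.api import mail

def mail_all_users():
    # Runs under the task queue's 10-minute deadline instead of the
    # 30-second request handler deadline.
    for user in User.all():  # assumed db.Model with an email property
        mail.send_mail(sender="Example.com Support <support@example.com>",
                       to=user.email,
                       subject="Your account has been approved",
                       body="...")

# In the request handler, enqueue the job and return immediately:
deferred.defer(mail_all_users)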

Salesforce API 10 request limit

I read somewhere that the Salesforce API has a 10-request limit. If we write code to integrate with Salesforce:
1. What is the risk of hitting this limit?
2. How can we write code to mitigate this risk?
My real concern is that I don't want to build our customer this great standalone website that integrates with Salesforce, only to have users 11 and 12 kicked out to wait until requests 1-10 are complete.
Edit:
Some more details on the specifics of the limitation can be found at http://www.salesforce.com/us/developer/docs/api/Content/implementation_considerations.htm. Look at the section titled "Limits":
"Limits
There is a limit on the number of queries that a user can execute concurrently. A user can have up to 10 query cursors open at a time. If 10 QueryLocator cursors are open when a client application, logged in as the same user, attempts to open a new one, then the oldest of the 10 cursors is released. This results in an error in the client application.
Multiple client applications can log in using the same username argument. However, this increases your risk of getting errors due to query limits.
If multiple client applications are logged in using the same user, they all share the same session. If one of the client applications calls logout(), it invalidates the session for all the client applications. Using a different user for each client application makes it easier to avoid these limits."
Not sure which limit you're referring to, but the governor limits are all listed in the Apex documentation. These limits apply to code running in a given Apex transaction (i.e. in response to a trigger, web service call, etc.), so adding more users won't hurt you: each transaction gets its own allocation of resources.
There are also limits on the number of long-running concurrent API requests and total API calls in a day. Most of these are per-license, so, again, as the number of users rises, so do the limits.
A few comments on:
"I don't want to build our customer this great standalone website that integrates with Salesforce only to have users 11 and 12 kicked out to wait until requests 1-10 are complete"
There are two major things you need to consider when planning real-time Sfdc integration, besides the API call limits mentioned in metadaddy's answer (and if you make a lot of queries, it's easy to hit those limits):
Sfdc has routine maintenance outage periods.
Querying Sfdc will always be significantly slower than querying a local data source.
You may want to consider a local mirror where you replicate your Sfdc data.
Cheers,
Tymek
All API usage limits are calculated over a 24-hour period.
Limits apply to the whole organization, so if you have several users connecting through the API, all of them count against the same limit.
You get 1,000 API requests per Salesforce user license. Even Unlimited Edition is actually limited to 5,000.
If you want to check your current API usage status, go to Your Name | Setup | Company Profile | Company Information.
You can purchase additional API calls
You can read more at Salesforce API Limits documentation
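To illustrate "write code to mitigate this risk", here is a minimal retry-with-backoff sketch around a Salesforce REST query (using Python's requests; the instance URL, API version, and treating a 403 as a limit error are assumptions for the example):

import time
import requests

def sf_query(instance_url, token, soql, retries=5):
    # Run a SOQL query, backing off and retrying when a limit error comes back.
    url = instance_url + "/services/data/v52.0/query"
    headers = {"Authorization": "Bearer " + token}
    for attempt in range(retries):
        resp = requests.get(url, headers=headers, params={"q": soql})
        if resp.status_code != 403:
            resp.raise_for_status()
            return resp.json()
        time.sleep(2 ** attempt)  # assumed limit error; wait and retry
    raise RuntimeError("Salesforce query kept hitting limits")

Serializing queries through a wrapper like this also keeps a single integration user from holding many cursors open at once.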
