Google Cloud Natural Language Processing API Spending Limit - google-app-engine

I have a small hobby project using Google Cloud's Natural Language Processing API. I also made sure to set up a daily budget for the project of just $2.00 USD.
My question is: what happens when/if the spending limit is reached? Does the API stop serving queries? In short, does having a spending limit mean I don't have to worry about additional charges to the project in question?
Thanks!

Yes, if your daily spending limit is hit, services that cost money will cease to function until the limit resets.
See When a resource is depleted for details:
For resources that are required to initiate a request, when the resource is depleted, App Engine by default returns an HTTP 403 or 503 error code for the request instead of calling a request handler.
For all other resources, when the resource is depleted, an attempt in the application to consume the resource results in an exception. This exception can be caught by the application and handled, such as by displaying a friendly error message to the user.
In the Python API, this exception is apiproxy_errors.OverQuotaError.
In the API for Java, this exception is com.google.apphosting.api.ApiProxy.OverQuotaException.
In the Go API, the appengine.IsOverQuota function reports whether an error represents an API call failure due to insufficient available quota.
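In Python, the pattern is to catch that exception and degrade gracefully rather than let the request fail. Below is a minimal, self-contained sketch of the idea: on real App Engine the exception class is `apiproxy_errors.OverQuotaError` from `google.appengine.runtime`, but a stand-in class (and a stand-in billable call) is defined here so the example runs anywhere.

```python
class OverQuotaError(Exception):
    """Stand-in for apiproxy_errors.OverQuotaError."""


def analyze_sentiment(text, quota_remaining):
    """Stand-in for a billable API call; fails once the daily budget is spent."""
    if quota_remaining <= 0:
        raise OverQuotaError("required more quota than is available")
    return {"text": text, "sentiment": 0.0}


def handle_request(text, quota_remaining):
    try:
        return analyze_sentiment(text, quota_remaining)
    except OverQuotaError:
        # Degrade gracefully instead of letting the request fail with a 500.
        return {"error": "Daily quota exhausted; try again after the quota resets."}
```

The key point is that only the billable call is wrapped; requests that don't touch a depleted resource keep working normally.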

Related

Getting 408 API request has timed out while accessing Watson discovery

For the past few days I have been trying to access my Discovery profile, but it shows "408 API request has timed out". I don't understand what this means; I have tried different browsers and different systems.
A 408 HTTP response code is admittedly confusing in this case, as 400-level errors typically indicate a client problem: the browser took too long to send the necessary information, so the server timed out the connection after a preconfigured duration was exceeded.
In this particular case, however, a 504 Gateway Timeout HTTP response would be more appropriate. There are multiple complex interactions happening with some of the pages in Watson Discovery Tooling, and sometimes the service experiences slowness. In general, I would check the status page for the affected IBM Cloud services to determine whether there are any customer-impacting events.
To check the status of Watson Discovery and any IBM provided service, I would check https://console.bluemix.net/status to see if any of the slowness or errors line up with your experiences.

Why is my Google App Engine site over quota?

I'm getting "Over Quota
This application is temporarily over its serving quota. Please try again later." on my GAE app. It's not billing-enabled. I ran a security scan against it today, which presumably triggered the over quota, but I can't explain why based on the information in the console.
Note that 1.59G has been used responding to 4578 requests. That's an average of about 347k per request, but none of my responses should ever be that large.
By filtering my logs I can see that there was no request today whose response size was greater than 25k. So although the security scan generated a lot of small requests over its 14-minute run, it couldn't possibly account for 1.59G. Can anyone explain this?
Note: what follows is mostly supposition ...
The Impact of Security Scanner on logs section mentions:
Some traces of the scan will appear in your log files. For instance, the security scanner generates requests for unlikely strings such as "~sfi9876" and "/sfi9876" in order to examine your application's error pages; these intentionally invalid page requests will show up in your logs.
My interpretation is that some of the scan requests will not appear in the app's logs.
I guess it's not impossible for some of the scanner's requests to similarly not be counted in the app's request stats, which might explain the suspicious computation results you reported. I don't see any mention of this in the docs to validate or invalidate this theory. However...
In the Pricing, costs, and traffic section I see:
Currently, a large scan stops after 100,000 test requests, not including requests related to site crawling. (Site crawling requests are not capped.)
A couple of other quotes from Google Cloud Security Scanner doc:
The Google Cloud Security Scanner identifies security vulnerabilities in your Google App Engine web applications. It crawls your application, following all links within the scope of your starting URLs, and attempts to exercise as many user inputs and event handlers as possible.
Because the scanner populates fields, pushes buttons, clicks links, and so on, it should be used with caution. The scanner could potentially activate features that change the state of your data or system, with undesirable results. For example:
In a blog application that allows public comments, the scanner may post test strings as comments on all your blog articles.
In an email sign-up page, the scanner may generate large numbers of test emails.
These quotes suggest that, depending on your app's structure and functionality, the number of scan requests can be fairly high. Your app would need to be really basic for the quoted kinds of activities to be covered by just 4578 requests, which somewhat supports the above theory that some scanner requests might not be counted in the app's stats.

Cannot delete data from Appengine Datastore due to error: "API call datastore_v3.Put() required more quota than is available"

A Google App Engine application has reached the free resource limits for Datastore Stored Data. (All other quotas are OK.) Hence I'm trying to delete data from the Datastore (on the Datastore Admin page).
Only problem is, I cannot delete data because I get this error:
Delete Job Status
There was a problem kicking off the jobs. The error was:
The API call datastore_v3.Put() required more quota than is available.
How to break out from this vicious circle?
You need to wait until the current billing day is over for your datastore operation quotas to reset; then you will be able to delete entities.
If you're getting this error after enabling billing, you need to set a daily budget. Check out this answer: https://stackoverflow.com/a/31693372/1942593
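Once the quota has reset, it also helps to delete in small batches so each RPC stays cheap. In a real app you would run a keys-only query and pass each chunk to `ndb.delete_multi`; the sketch below simulates the key list and the delete call so that just the batching logic is shown, with 500 used as the per-RPC batch limit.

```python
def chunked(items, size):
    """Yield successive chunks of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]


deleted = []


def delete_multi(keys):
    """Stand-in for ndb.delete_multi(keys)."""
    deleted.extend(keys)


# Stand-in for the results of a keys-only query over the kind to purge.
all_keys = ["key%d" % i for i in range(1234)]

for batch in chunked(all_keys, 500):  # 500 keys per datastore RPC
    delete_multi(batch)
```

Spreading the batches out over time (e.g. from a cron job) keeps the daily delete traffic under the free quota.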

status of AppEngine

Is there another way to find out the status of services on AppEngine other than the link given in the error below?
LogAndContinueErrorHandler handleServiceError: Service error in memcache
com.google.appengine.api.memcache.MemcacheServiceException: Memcache getIdentifiables: exception getting multiple keys
...
Caused by: com.google.apphosting.api.ApiProxy$CapabilityDisabledException: The API call memcache.Set() is temporarily unavailable: Memcache is temporarily unavailable. Please see http://code.google.com/status/appengine for more information.
I check it but it shows an error rate of 0% for all categories.
The specific memcache issue now appears to be resolved, but in general you should use the Capabilities API (Java and Python) to check if an API is currently unavailable.
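In the Python API the check is `capabilities.CapabilitySet('memcache').is_enabled()` from `google.appengine.api.capabilities`. The sketch below uses a stand-in `CapabilitySet` class so it is self-contained, but the guard-then-fall-back shape is the same as with the real API.

```python
class CapabilitySet(object):
    """Stand-in for google.appengine.api.capabilities.CapabilitySet."""

    def __init__(self, package, enabled=True):
        self.package = package
        self._enabled = enabled

    def is_enabled(self):
        return self._enabled


def get_cached(key, cache_enabled):
    # Check the capability before touching memcache, so a service outage
    # degrades to a cache miss instead of a CapabilityDisabledException.
    if not CapabilitySet("memcache", enabled=cache_enabled).is_enabled():
        return None  # caller falls back to the datastore
    return "cached:" + key  # stand-in for memcache.get(key)
```

Treating a disabled capability like a cache miss keeps the app serving during a memcache outage.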

Google App Engine RemoteApiServlet/remote_api handler errors

Recently, I have come across an error (quite frequently) with the RemoteApiServlet as well as the remote_api handler.
While bulk loading large amounts of data using the Bulk Loader, I start seeing random HTTP 500 errors with the following details (in the log file):
Request was aborted after waiting too long to attempt to service your request.
This may happen sporadically when the App Engine serving cluster is under
unexpectedly high or uneven load. If you see this message frequently, please
contact the App Engine team.
Can someone explain what I might be doing wrong? This error prevents the Bulk Loader from uploading any further data, and I have to start all over again.
Related thread in Google App Engine forums is at http://groups.google.com/group/google-appengine-python/browse_thread/thread/bee08a70d9fd89cd
This isn't specific to remote_api. What's happening is that your app is getting a lot of requests that take a long time to execute, and App Engine will not scale up the number of instances your app runs on if the request latency is too high. As a result, requests are being queued until a handler is available to serve them; if none become available, a 500 is returned and this message is logged.
Simply reduce the rate at which you're bulkloading data, or decrease the batch size so remote_api requests execute faster.
