My managed VM was deployed and working fine. Then about a week ago it stopped working and started returning 500 for all requests. This is an Ubuntu container which is running a flask application.
The instances were showing as "restarting" in the developer console, but they were stuck that way for a long time. When I tried deleting the instances I got an error message saying the instances could not be deleted.
I tried to deploy the app again. The image creation process succeeded, after which I got this error:
Updating service [abc]...failed.
ERROR: (gcloud.app.deploy) INVALID_ARGUMENT: The following quotas
were exceeded: BACKEND_SERVICES (quota: 0, used: 0 + needed: 1),
IN_USE_ADDRESSES (quota: 0, used: 0 + needed: 2), INSTANCES (quota:
0, used: 0 + needed: 2).
The app has billing enabled and a working credit card attached, and there are no pending invoices. Does anyone know why this happens?
My billing details were up to date in the account, but the account was still using the promotional credits I got as part of signing up. I saw a notification saying that the promotional credits had expired and I needed to upgrade the account to continue using it. I hit the upgrade button and it said the account was upgraded.
It turned out the upgrade takes a while to actually take effect. Perhaps a background job had to run and reset the quota limits before I could deploy the app again.
It started working fine after about a day.
Related
I successfully deployed a Twitter screenshot bot on Google App Engine.
This is my first time deploying.
The first thing I noticed was that the app didn't start running until I clicked the link.
When I did, the app worked successfully (replied to tweets with screenshots) as long as the tab was loading and open.
When I closed the tab, the bot stopped working.
Also, in the Cloud Shell log, I saw:
Handling signal: term
[INFO] Worker exiting (pid 18)
This behaviour surprises me, as I expected it to keep running on Google's servers indefinitely.
My bot works by streaming with the Twitter API. The "Worker exiting" line above also surprises me.
Here is the relevant code:
def get_stream(set):
    global servecount
    with requests.get(f"https://api.twitter.com/2/tweets/search/stream?tweet.fields=id,author_id&user.fields=id,username&expansions=author_id,referenced_tweets.id", auth=bearer_oauth, stream=True) as response:
        print(response.status_code)
        if response.status_code == 429:
            print(f"returned code 429, waiting for 60 seconds to try again")
            print(response.text)
            time.sleep(60)
            return
        if response.status_code != 200:
            raise Exception(
                f"Cannot get stream (HTTP {response.status_code}): {response.text}"
            )
        for response_line in response.iter_lines():
            if response_line:
                json_response = json.loads(response_line)
                print(json.dumps(json_response, indent=4))
                if json_response['data']['referenced_tweets'][0]['type'] != "replied_to":
                    print(f"that was a {json_response['data']['referenced_tweets'][0]['type']} tweet not a reply. Moving on.")
                    continue
                uname = json_response['includes']['users'][0]['username']
                tid = json_response['data']['id']
                reply_tid = json_response['includes']['tweets'][0]['id']
                or_uid = json_response['includes']['tweets'][0]['author_id']
                print(uname, tid, reply_tid, or_uid)
                followers = api.get_follower_ids(user_id='1509540822815055881')
                uid = int(json_response['data']['author_id'])
                if uid not in followers:
                    try:
                        client.create_tweet(text=f"{uname}, you need to follow me first :)\nPlease follow and retry. \n\n\nIf there is a problem, please speak with my creator, #JoIyke_", in_reply_to_tweet_id=tid, media_ids=[mid])
                    except:
                        print("tweet failed")
                    continue
                mid = getmedia(uname, reply_tid)
                #try:
                client.create_tweet(text=f"{uname}, here is your screenshot: \n\n\nIf there is a problem, please speak with my creator, #JoIyke_", in_reply_to_tweet_id=tid, media_ids=[mid])
                #print(f"served {servecount} users with screenshot")
                #servecount += 1
                #except:
                #    print("tweet failed")
                editlogger()

def main():
    servecount, tries = 1, 1
    rules = get_rules()
    delete = delete_all_rules(rules)
    set = set_rules(delete)
    while True:
        print(f"starting try: {tries}")
        get_stream(set)
        tries += 1
If this is important, my app.yaml file has only one line:
runtime: python38
and I deployed the app from Cloud Shell with gcloud app deploy app.yaml
What can I do?
I have searched and can't seem to find a solution. Also, this is my first time deploying an app successfully.
Thank you.
Google App Engine works on demand, i.e. it runs when it receives an HTTP(S) request.
Neither warmup requests nor min_instances > 0 will meet your needs. A warmup request tries to 'start up' an instance before your requests come in. min_instances > 0 simply says not to kill the instance, but you still need an HTTP request to invoke the service (which is what you did by opening a browser tab and entering your app's URL).
You may ask: since you've 'started up' the instance by opening a browser tab, why doesn't it keep running afterwards? The answer is that every request to a Google App Engine (Standard) app must complete within 1 to 10 minutes, depending on the type of scaling your app is using (see the documentation). For Google App Engine Flexible, the timeout goes up to 60 minutes. This means your service will time out after at most 10 minutes on GAE Standard or 60 minutes on GAE Flexible.
I think the best solution for you on GCP is Google Compute Engine (GCE). Spin up a virtual machine (pick the lowest configuration so you can stay within the free tier), deploy your code to it, and start it. Your code then runs continuously.
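If it helps, here is a minimal sketch of the kind of long-running entry point you could run on such a VM, reusing the get_rules(), delete_all_rules(), set_rules() and get_stream() functions from the question; the retry/backoff values are illustrative assumptions, not something GCE prescribes:

# Illustrative long-running entry point for a GCE VM. It reuses the
# question's helper functions; the backoff numbers are arbitrary choices.
import time

def run_forever():
    stream_rules = set_rules(delete_all_rules(get_rules()))
    backoff = 5
    while True:
        try:
            # Blocks while the Twitter stream connection stays open.
            get_stream(stream_rules)
            backoff = 5  # stream ended normally, reset the backoff
        except Exception as exc:
            # Network drop or API error: wait, then reconnect.
            print(f"stream failed ({exc}); retrying in {backoff}s")
            time.sleep(backoff)
            backoff = min(backoff * 2, 300)

if __name__ == "__main__":
    run_forever()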
App Engine works on demand, i.e. it is only up while there are requests to the app (this is why the app works when you click on the URL). While you can set one instance to be "running all the time" (min_instances), that would be an anti-pattern for what you want to accomplish with App Engine. Please read How Instances are Managed.
Looking at your code, you're pulling data from Twitter every minute, so the best option for you is Cloud Scheduler + Cloud Functions.
Cloud Scheduler will call your function, which checks whether there is data to process; if there is none, the process terminates. This helps you save costs because, instead of having something running all the time, the function only runs for the time it actually needs.
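To make that concrete, here is a minimal sketch of an HTTP-triggered Cloud Function (first-generation Python signature) that Cloud Scheduler could call on a schedule; the recent-search query, the BEARER_TOKEN environment variable and the process_tweet() helper are assumptions for illustration, not part of the original bot:

import os
import requests

SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"

def process_tweet(tweet):
    # Placeholder: this is where the screenshot reply would be built and
    # posted, as the original get_stream() loop does.
    print(f"would reply to tweet {tweet['id']} from user {tweet['author_id']}")

def check_mentions(request):
    """Entry point that Cloud Scheduler invokes over HTTP.

    Polls Twitter once, handles whatever is new, then returns, so the
    function is only billed for the seconds it actually runs.
    """
    headers = {"Authorization": f"Bearer {os.environ['BEARER_TOKEN']}"}
    params = {"query": "to:my_bot_handle", "tweet.fields": "id,author_id"}
    resp = requests.get(SEARCH_URL, headers=headers, params=params, timeout=30)
    if resp.status_code != 200:
        return f"Twitter returned {resp.status_code}", 500
    for tweet in resp.json().get("data", []):
        process_tweet(tweet)
    return "ok", 200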
On the other hand, I'm not an expert on the Twitter API, but if there is a way for Twitter to call your function directly instead of you pulling data from Twitter, that would be even better: you could optimize costs further, and the function would only run when there is data to process instead of checking every n minutes.
As general advice, first review all the options you have in GCP (or whichever provider you use), then choose the best one for your use case. Simply picking one that works with your programming language will not necessarily behave as you expect, as in this case.
I recently updated my cron.yaml file and now my cron tasks fail with no entries in the logs.
It is acting as if the Java servlet at the URL is not being run.
I can paste the url into a browser and the servlet runs fine.
My cron.yaml file:
cron:
- description: Daily revenues report
  url: /revenues
  schedule: every day 07:35
  timezone: America/Denver
I deploy it using the deploycron.sh script below:
PROJECT_ID='my-project-id'
gcloud config set project ${PROJECT_ID}
gcloud info
gcloud app deploy cron.yaml
Is there an error in my .yaml?
Is there a special task queue set up required?
Is some other configuration or permissions piece missing?
It was running fine last week. I have tried deleting and starting over to no avail.
https://console.cloud.google.com/cloudscheduler?project=project-id
It shows the job, with the Result column reading 'Failed'.
Logs 'View' link shows:
protoPayload.taskName="01661931846119241031" protoPayload.taskQueueName="__cron"
with no log entries.
Is __cron not automatic?
I am at a loss.
App Engine Standard. Java 8.
After installing the latest gcloud update locally and re-running the deploy cron script, the cron jobs now run as before. 02/02/2021.
'Failed' means that the endpoint /revenues is not returning a success HTTP status code.
Logs 'View' link shows: protoPayload.taskName="01661931846119241031" protoPayload.taskQueueName="__cron" with no log entries
Maybe don't use the premade filter; just try filtering for /revenues or viewing all the logs at 07:35 am (when it was supposed to have run).
Is there an error in my .yaml?
If there was, then gcloud app deploy cron.yaml would fail.
Is there a special task queue set up required?
You shouldn't need to do anything; I didn't.
I can paste the url into a browser and the servlet runs fine.
When you paste the URL into the browser, is there any redirecting (like from /revenues to /revenues/) or anything else that your browser is handling for you? Maybe /revenues now expects cookies to be present.
Are there any special app.yaml or dispatch.yaml rules that /revenues would be hitting?
Is /revenues being handled by a service other than the default service?
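For reference, this is roughly what App Engine cron expects from the handler: an explicit success status code, optionally after checking the X-Appengine-Cron header that cron-dispatched requests carry. The sketch below is in Python/Flask purely for illustration (the asker's handler is a Java 8 servlet), and the /revenues route name is taken from the question:

from flask import Flask, abort, request

app = Flask(__name__)

@app.route("/revenues")
def revenues():
    # App Engine adds X-Appengine-Cron: true to cron-dispatched requests and
    # strips it from external traffic, so it can be trusted for this check.
    if request.headers.get("X-Appengine-Cron") != "true":
        abort(403)
    # ... generate the daily revenues report here ...
    return "report generated", 200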
I had a similar problem: CRON tasks fail without any logs.
The root cause was that the IP address of App Engine was blocked by the App Engine Firewall. Thus I had to update the allow-list, as described here: https://cloud.google.com/appengine/docs/standard/nodejs/scheduling-jobs-with-cron-yaml#validating_cron_requests
I started having the same problem a few days ago on my existing CRON schedules. I've tried everything including tearing my code down to the bare minimum and creating a new GAE project with the Hello World quick start. It still fails. Nothing useful in the logs and the UI just says 'Failed'. I'm pulling my hair out.
Sorry I don't have an answer to contribute but your post makes me think it's on Google's side. I know they're moving CRON jobs to Cloud Scheduler->App Engine Cron Jobs. My gut tells me it's a permissions issue related to this move and IAM. I'm really at a loss.
I'm using Google App Engine Flexible and some kind of rolling restart happened last night and my instance failed right after.
Looking at logs, I see this on new instance creation:
ZONE_RESOURCE_POOL_EXHAUSTED
This appears to be related to this:
https://status.cloud.google.com/incident/compute/18012
But the status shows fixed and I'm still having issues.
I am getting this error about 20% of the time. I've dumped and compared traffic on successful and failed requests and there is no noticeable difference:
There's nothing in the AppEngine logs or dashboard, and also no way to catch exceptions on requests that hit "/_ah" URLs. I've attached a script that tries the login every 5 minutes, as well as the traffic dumps for successful and failed requests.
I would really appreciate it if someone from Google could take a look at this. The error definitely occurs deep in the bowels of the AppEngine OpenID implementation, and there is no way for an outsider to see such errors.
Thanks,
Graeme
https://dl.dropboxusercontent.com/u/6618078/AppEngine%20OpenID%20error/error.dump
https://dl.dropboxusercontent.com/u/6618078/AppEngine%20OpenID%20error/success.dump
https://dl.dropboxusercontent.com/u/6618078/AppEngine%20OpenID%20error/test.sh
It could be related to this bug, which is only 3.5 years old and not fixed, as they don't consider it a "production issue":
https://code.google.com/p/googleappengine/issues/detail?id=3589
The bug is about non-Gmail accounts, but I have the same server error with Gmail accounts (it started today for me, 01/12/2014).
No error acknowledged here https://code.google.com/status/appengine.
I'm trying to add timeouts to GWT sessions by using the following code to check if a session is alive:
public boolean isSessionAlive() {
    return System.currentTimeMillis() - getThreadLocalRequest().getSession()
            .getLastAccessedTime() < timeout;
}
I based this code on the many examples I saw on the web for GWT sessions, such as this.
The above code works great while running on a local web server, but after deploying the project to App Engine it doesn't. The following always returns 0 on App Engine:
getThreadLocalRequest().getSession().getLastAccessedTime()
As far as I understand, the last accessed time is updated on each RPC call.
I made several calls, but this value still remains zero and an incorrect result is returned.
Does anybody know how to fix this issue?
Things change after the app is deployed on GAE.
Just today I attended the session on App Engine by #roman irani.
Remember that App Engine is a distributed architecture, so one difference from Java EE is that you are never guaranteed to hit the same application server instance while processing a request as you did for the previous request. While the session object is serialized correctly in memcache, you still have to call setAttribute() every time you modify it, because memory is not shared between instances.
There is a clear-cut picture here of how to handle the session.
I have found a workaround. Adding the following code in war/WEB-INF/web.xml will cause the session to expire after 30 minutes:
<!-- timeout in minutes -->
<session-config>
    <session-timeout>30</session-timeout>
</session-config>
Reference: Session Timeouts with GWT RPC calls.