I successfully deployed a Twitter screenshot bot on Google App Engine.
This is my first time deploying.
The first thing I noticed was that the app didn't start running until I clicked the link.
When I did, the app worked (it replied to tweets with screenshots) as long as the tab was open and loading.
When I closed the tab, the bot stopped working.
Also, in the cloud shell log, I saw:
Handling signal: term
[INFO] Worker exiting (pid 18)
This behaviour surprises me, as I expected it to keep running on Google's servers indefinitely.
My bot works by streaming with the Twitter API, so the "Worker exiting" line above also surprises me.
Here is the relevant code:
def get_stream(set):
    global servecount
    with requests.get(f"https://api.twitter.com/2/tweets/search/stream?tweet.fields=id,author_id&user.fields=id,username&expansions=author_id,referenced_tweets.id", auth=bearer_oauth, stream=True) as response:
        print(response.status_code)
        if response.status_code == 429:
            print(f"returned code 429, waiting for 60 seconds to try again")
            print(response.text)
            time.sleep(60)
            return
        if response.status_code != 200:
            raise Exception(
                f"Cannot get stream (HTTP {response.status_code}): {response.text}"
            )
        for response_line in response.iter_lines():
            if response_line:
                json_response = json.loads(response_line)
                print(json.dumps(json_response, indent=4))
                if json_response['data']['referenced_tweets'][0]['type'] != "replied_to":
                    print(f"that was a {json_response['data']['referenced_tweets'][0]['type']} tweet not a reply. Moving on.")
                    continue
                uname = json_response['includes']['users'][0]['username']
                tid = json_response['data']['id']
                reply_tid = json_response['includes']['tweets'][0]['id']
                or_uid = json_response['includes']['tweets'][0]['author_id']
                print(uname, tid, reply_tid, or_uid)
                followers = api.get_follower_ids(user_id='1509540822815055881')
                uid = int(json_response['data']['author_id'])
                if uid not in followers:
                    try:
                        client.create_tweet(text=f"{uname}, you need to follow me first :)\nPlease follow and retry. \n\n\nIf there is a problem, please speak with my creator, #JoIyke_", in_reply_to_tweet_id=tid, media_ids=[mid])
                    except:
                        print("tweet failed")
                    continue
                mid = getmedia(uname, reply_tid)
                #try:
                client.create_tweet(text=f"{uname}, here is your screenshot: \n\n\nIf there is a problem, please speak with my creator, #JoIyke_", in_reply_to_tweet_id=tid, media_ids=[mid])
                #print(f"served {servecount} users with screenshot")
                #servecount += 1
                #except:
                #    print("tweet failed")
                editlogger()

def main():
    servecount, tries = 1, 1
    rules = get_rules()
    delete = delete_all_rules(rules)
    set = set_rules(delete)
    while True:
        print(f"starting try: {tries}")
        get_stream(set)
        tries += 1
If this is important, my app.yaml file has only one line:
runtime: python38
and I deployed the app from Cloud Shell with gcloud app deploy app.yaml
What can I do?
I have searched and can't seem to find a solution. Also, this is my first time successfully deploying an app.
Thank you.
Google App Engine works on demand, i.e. it runs when it receives an HTTP(S) request.
Neither warmup requests nor min_instances > 0 will meet your needs. A warmup request 'starts up' an instance before your requests come in. min_instances > 0 simply says not to kill the instance, but you still need an HTTP request to invoke the service (which is what you did by opening a browser tab and entering your App's URL).
You may ask: since you've 'started up' the instance by opening a browser tab, why doesn't it keep running afterwards? The answer is that every request to a Google App Engine (Standard) app must complete within 1 to 10 minutes, depending on the type of scaling your App is using (see the documentation). For Google App Engine Flexible, the timeout goes up to 60 minutes. This means your service will time out after at most 10 minutes on GAE Standard or 60 minutes on GAE Flexible.
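For illustration, this is roughly what min_instances would look like in your app.yaml (a sketch; it keeps one instance warm but, again, does not remove the request timeout):

runtime: python38
automatic_scaling:
  min_instances: 1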
I think the best solution for you on GCP is to use Google Compute Engine (GCE). Spin up a virtual server (pick the lowest configuration so you can stay within the free tier). With GCE, you spin up a Virtual Machine (VM), deploy your code to it, and kick off your code. Your code then runs continuously.
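Creating a small VM could look like this (a sketch; the instance name and zone are placeholders, and you should confirm the current free-tier machine types and regions):

gcloud compute instances create twitter-bot --machine-type=e2-micro --zone=us-central1-a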
App Engine works on demand, i.e. it will only be up if there are requests to the app (this is why the app works when you click the URL). While you can set min_instances to keep one instance "running all the time", that would be an anti-pattern for what you want to accomplish with App Engine. Please read How Instances are Managed.
Looking at your code, you're pulling data from Twitter every minute, so the best option for you is Cloud Scheduler + Cloud Functions.
Cloud Scheduler will call your Function, which checks whether there is data to process and, if not, terminates. This helps you save costs: instead of having something running all the time, the function runs only for as long as it needs.
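A minimal sketch of that pattern (the helper names are hypothetical; assumes an HTTP-triggered function that Cloud Scheduler invokes on a cron schedule):

def check_mentions(request):
    # hypothetical helper: query the Twitter API for new mentions
    tweets = fetch_new_mentions()
    if not tweets:
        return "nothing to do", 200  # exit immediately; no idle cost
    for tweet in tweets:
        # hypothetical helper: your existing screenshot-and-reply logic
        reply_with_screenshot(tweet)
    return f"processed {len(tweets)} tweets", 200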
On the other hand, I'm not an expert with the Twitter API, but if there is a way for Twitter to call your function directly, instead of you pulling data from Twitter, that would be even better: the function would run only when there is data to process rather than checking every n minutes, optimizing your costs further.
As advice: first review all the options you have in GCP (or whichever provider you use), then choose the best one for your use case. Picking one just because it works with your programming language will not necessarily behave as you expect, as in this case.
Related
I have been racking my brains on this for a few weeks now, trying different variations from Google Cloud service offerings but can't seem to find the proper one.
I have a python script with dependencies etc. that I have containerized, pushed, and deployed to GCR.
The script is a bot that connects to an external websocket receiving signals perpetually to then do other processing via API against another external service.
What would be the best service offering from Google Cloud to run this?
So far, I've tried:
GCR Web Service - requires a listening service (:8080), which I do not provide in this use case, and it scales your service down when there is no traffic, so no go.
GCR Job Service - seems like the next ideal option (no HTTP port requirement). However, since the script (my entry point) doesn't 'return' unless it quits, launching the service just lets it run for a minute or so until the Jobs API declares it 'failed'; basically it launches my entry point, which executes the script as if it were running locally, and my script isn't meant to return anything.
To try and get around this, I went Google's recommended way and built a main.py with their standard boilerplate, as a wrapper to act as a launcher for the actual script. I did this via a simple subprocess.Popen, using their sample main.py as shown below.
main.py
import json
import os
import sys
import subprocess

# Retrieve Job-defined env vars
TASK_INDEX = os.getenv("CLOUD_RUN_TASK_INDEX", 0)
TASK_ATTEMPT = os.getenv("CLOUD_RUN_TASK_ATTEMPT", 0)

# Define main script
def main():
    print(f"Starting Task #{TASK_INDEX}, Attempt #{TASK_ATTEMPT}...")
    subprocess.Popen(["python3", "myscript.py"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    print(f"Completed Task #{TASK_INDEX}.")

# Start script
if __name__ == "__main__":
    try:
        main()
    except Exception as err:
        message = f"Task #{TASK_INDEX}, " \
            + f"Attempt #{TASK_ATTEMPT} failed: {str(err)}"
        print(json.dumps({"message": message, "severity": "ERROR"}))
        sys.exit(1)  # Retry Job Task by exiting the process
My thinking was that this would let the job execute my script and be marked as completed while the actual script keeps running. Also, since subprocess.Popen sets its stdout and stderr to PIPE, my thinking was that the output would get caught by Google's logging and I would see it.
The job runs and is marked as succeeded; however, I see no indication of the actual script executing anywhere.
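(For reference, subprocess.Popen returns immediately rather than waiting for the child process. A blocking variant of the wrapper body, sketched below, would keep the task alive for the script's lifetime and, with stdout/stderr inherited instead of piped, let the output reach the job's logs.)

# blocking alternative (sketch): run() waits for myscript.py to exit,
# and the child inherits stdout/stderr so its output shows up in the logs
subprocess.run(["python3", "myscript.py"], check=True)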
I had a similar issue with Google Cloud Functions. Jobs seemed like an ideal option, since I can run them on a schedule to make sure they launch, say, every hour (my script uses a lock file so it doesn't run again if it's already running).
Am I just missing the point of how these cloud services run?
Are offerings like Google Cloud Run Jobs/Functions, etc. meant to execute jobs that return and quit until launched again on whatever schedule?
Do I need to consider Google Compute Engine as an option for this use case, i.e. a full running VM instead of the stateless/serverless options?
I am trying to do this in a containerized, scale-as-needed fashion, to both make my project portable and minimize costs as much as possible, given the always-running nature of the job.
Lastly, I know services like PythonAnywhere (and I am sure others) make this kind of thing easier, but I would like to learn how to do this via standard cloud offerings like GCR, AWS, etc.
Thanks for any insight/advice!
Cloud Run's best fit is serving stateless HTTP/REST APIs. There are also Jobs, currently in beta.
One of Run's top features is that it scales to 0 when there are no requests to your service (your service instance gets destroyed entirely).
If your bot needs to stay alive forever, Run is not for you (even though you can configure Run to keep at least one instance alive).
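For completeness, keeping one instance alive is a one-flag change (the service name below is a placeholder), though it works against Run's scale-to-zero pricing model:

gcloud run services update my-bot-service --min-instances=1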
I would instead consider App Engine or Compute Engine.
I have a GAE app (PHP72, env: standard) which hangs intermittently (once or twice a day, for about 5 minutes).
When this occurs, I see a large spike in the GAE dashboard's Traffic Sent graph.
I've reviewed all uses of file_get_contents and curl_exec within the app's scripts (not including those in /vendor/) and don't believe these to be the cause.
Is there a simple way in which I can review more info on these outbound requests?
There is no way to get more details in that dashboard. You're going to need to check your logs at the corresponding times (an example query follows the list). Obscure things to check for:
Cron jobs coming in at the same times
Task Queues spinning up
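For example, a sketch of pulling the app's logs around one of the spikes with the gcloud CLI (the timestamps are placeholders for a five-minute window):

gcloud logging read 'resource.type="gae_app" AND timestamp>="2021-06-01T12:00:00Z" AND timestamp<="2021-06-01T12:05:00Z"' --limit=100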
I have been using Google Cloud Video Intelligence annotation functionality with Google App Engine Flex. When I try to use Video Intelligence with a two-hour video, it takes 60 minutes for the AnnotateVideo call to respond.
gs_video_path = 'gs://' + bucket_name + '/' + videodata.video.path + videodata.video.name
print(gs_video_path)
video_client = videointelligence.VideoIntelligenceServiceClient()
features = [videointelligence.enums.Feature.OBJECT_TRACKING]
operation = video_client.annotate_video(gs_video_path, features=features)
Currently, the only place I can execute this is Google App Engine Flex. Yet App Engine Flex keeps an instance idle all the time, which makes it very similar to running a VM in terms of cost.
Google App Engine has a 540-second timeout, and similarly Google Cloud Run has a timeout of 900 seconds and Google Cloud Functions has a max timeout of 600 seconds, as far as I understand.
In these circumstances, which Google Cloud product should I use for a one-hour process, while avoiding an idle instance when there is no usage?
(Please don't respond suggesting GKE or other VM-based solutions; solutions with idle instances are not accepted.)
Cloud Run's 900-second timeout is subject to change soon to satisfy your needs (up to an hour). There's a feature in the works; I'll update here once it's available in beta. Stay tuned.
You can specify output_uri in the original request. This writes the final result to your GCS bucket, so you don't have to wait for the long-running operation to finish on your VM. The initial request only takes a few seconds, so you can use a Google Cloud Function.
When the operation finishes an hour later, process the output JSON files by setting up a trigger on your output GCS bucket.
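A sketch of that request, reusing the names from the code above (the output bucket path is a placeholder, and it assumes the v1 client accepts output_uri alongside features):

# fire-and-forget: the call returns in seconds, and the results land in
# GCS when annotation completes, so nothing has to stay alive waiting
video_client.annotate_video(
    gs_video_path,
    features=features,
    output_uri='gs://my-output-bucket/annotations.json',  # placeholder path
)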
I don't think Google has a service that matches your needs.
Probably you should implement a custom workflow (a sketch of the start/stop calls follows the list), like:
From a "short-lived" environment, like a Function, Cloud Run, or App Engine, do the following:
Publish an event for your long-running task to Pub/Sub
Use the Compute Engine API to start a VM
When the VM starts, its startup script should pull the latest item from Pub/Sub and start your long-running task
When the task is done, the VM terminates itself using the Compute Engine API, or calls a Function that invokes shutdown
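A rough sketch of those start/stop calls with the Python API client (the project, zone, and instance names are placeholders):

from googleapiclient import discovery

compute = discovery.build('compute', 'v1')

# from the short-lived environment: start the pre-created worker VM
compute.instances().start(
    project='my-project', zone='us-central1-a', instance='long-task-vm'
).execute()

# from the VM itself, once the task finishes: stop to end billing
compute.instances().stop(
    project='my-project', zone='us-central1-a', instance='long-task-vm'
).execute()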
I'm trying to add timeouts to GWT sessions by using the following code to check whether a session is alive:
public boolean isSessionAlive() {
return System.currentTimeMillis() - getThreadLocalRequest().getSession()
.getLastAccessedTime() < timeout;
}
I based this code on many examples I saw on the web for GWT sessions, such as this.
The above code works great on a local web server, but it doesn't after deploying the project to App Engine. The following always returns 0 on App Engine:
getThreadLocalRequest().getSession().getLastAccessedTime()
As far as I understand, the last accessed time should be updated on each RPC call.
I made several calls, but this value still remains zero and an incorrect result is returned.
Does anybody know how to fix this issue?
Things change after you deploy to GAE.
Just today I attended a session on App Engine by Roman Irani.
Remember that App Engine is a distributed architecture, so one difference from Java EE is that you are never guaranteed to get the same application server instance that processed the previous request. While the session object is serialized correctly into memcache, you still have to call setAttribute() every time you change it, because memory is not shared between instances.
The link here gives a clear-cut picture of how to handle the session.
I have found a workaround. Adding the following code in war/WEB-INF/web.xml will cause the session to expire after 30 minutes:
<!-- timeout in minutes -->
<session-config>
<session-timeout>30</session-timeout>
</session-config>
Reference: Session Timeouts with GWT RPC calls.
I know that when your Google App Engine (GAE) app has 0 instances running (because it has been idle for a bit) and a user requests a page, the user has to wait for the instance to boot up and do all of its instantiation, which can make the user wait a significant amount of time.
My question is about the situation when your GAE app already has 1 instance running, but begins to experience heavy load and starts booting up a second instance.
In this case, which will happen:
1. Will a user end up having to wait for the second instance to instantiate before getting their request responded to?
2. Will no requests be sent to the second instance until it has fully instantiated, thus not making a user wait an extended amount of time?
EDIT: It's unfortunate that currently the answer to this is #1. However, there is a feature request to change the behavior to #2. Please star this feature request (http://code.google.com/p/googleappengine/issues/detail?id=2690) to help bring it to the attention of the App Engine developers.
This blog post may give you some hints. In short, the answer seems to be #1.