Google Cloud Engine: PubSub instead of RabbitMQ - google-app-engine

My project has a microservice architecture running in Google Cloud. I'm thinking about moving from a container running RabbitMQ to the Pub/Sub engine.
The question is: is it possible to receive messages one by one? My code is written in Go, and the docs say
The callback is invoked concurrently by multiple goroutines,
maximizing throughput.
But how many goroutines can be invoked? How can I set the maximum allowed? For example, one of my workers talks to a third-party API that allows only one connection per IP, so I can have only one task at a time for this worker.
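For reference, I found ReceiveSettings on the subscription in the Go client; is this the right knob? A minimal sketch of what I'm trying (project and subscription IDs are placeholders):

package main

import (
	"context"
	"log"

	"cloud.google.com/go/pubsub"
)

func main() {
	ctx := context.Background()

	// Placeholder project and subscription IDs.
	client, err := pubsub.NewClient(ctx, "my-project")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sub := client.Subscription("my-subscription")

	// Flow control: process at most one message at a time.
	sub.ReceiveSettings.NumGoroutines = 1
	sub.ReceiveSettings.MaxOutstandingMessages = 1
	sub.ReceiveSettings.Synchronous = true

	err = sub.Receive(ctx, func(ctx context.Context, m *pubsub.Message) {
		// With the settings above, only one callback runs at a time.
		log.Printf("got message: %s", m.Data)
		m.Ack()
	})
	if err != nil {
		log.Fatal(err)
	}
}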

The correct solution is App Engine Task Pull Queues:
https://cloud.google.com/appengine/docs/standard/go/taskqueue/overview-pull
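With a pull queue, your worker leases tasks explicitly, so you control exactly how many are in flight at a time. A rough sketch using the google.golang.org/appengine/taskqueue package (the queue name, lease duration, and handler wiring are assumptions):

import (
	"log"
	"net/http"

	"google.golang.org/appengine"
	"google.golang.org/appengine/taskqueue"
)

// leaseWorker leases at most one task at a time from a pull queue,
// so only one unit of work hits the rate-limited third-party API.
func leaseWorker(w http.ResponseWriter, r *http.Request) {
	ctx := appengine.NewContext(r)

	// Lease up to 1 task from the "api-worker" pull queue for 60 seconds.
	tasks, err := taskqueue.Lease(ctx, 1, "api-worker", 60)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	for _, t := range tasks {
		// Call the third-party API with t.Payload here.
		log.Printf("processing task %s", t.Name)

		// Delete the task once it has been handled successfully.
		if err := taskqueue.Delete(ctx, t, "api-worker"); err != nil {
			log.Printf("delete failed: %v", err)
		}
	}
}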

Related

Google App Engine streaming data into BigQuery: GCP architecture

I'm working with a Django web app deployed on the Google App Engine flexible environment.
I'm streaming my data while processing requests in my views using bigquery.Client(), but I don't think this is the best way to do it. Should I delegate this process outside of the view (using Pub/Sub, tasks, Cloud Functions, etc.)? If so, please suggest a suitable architecture: which GCP product should I use, how to connect it, and what to read.
Based on your comment, I can recommend Cloud Run.
Cloud Run is a serverless, container-based product. You write a webserver (that handles your POST request), wrap it in a container, and deploy it on Cloud Run.
With a brand-new feature named 'CPU always allocated' (always on), the CPU is not throttled after the response is sent (which is the normal behavior). With always on, you keep the full CPU until the Cloud Run instance is offloaded (usually after 15 minutes, but it can be quicker).
The benefit of this feature is that you can return the response to the client immediately, then continue to process your data asynchronously and store it in BigQuery (in streaming mode).
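The question is about Django, but the pattern is language-agnostic; here is a minimal sketch in Go (the dataset, table, and row type are placeholders), assuming the Cloud Run service is deployed with CPU always allocated so the background goroutine keeps running after the response is sent:

package main

import (
	"context"
	"log"
	"net/http"
	"os"

	"cloud.google.com/go/bigquery"
)

// row is a placeholder schema for the streamed record.
type row struct {
	Name  string
	Value int
}

var bq *bigquery.Client

func handler(w http.ResponseWriter, r *http.Request) {
	// Reply to the client immediately.
	w.WriteHeader(http.StatusAccepted)
	w.Write([]byte("received"))

	// Continue asynchronously; this only works reliably on Cloud Run
	// when "CPU always allocated" is enabled.
	go func() {
		ins := bq.Dataset("my_dataset").Table("my_table").Inserter()
		if err := ins.Put(context.Background(), []*row{{Name: "example", Value: 1}}); err != nil {
			log.Printf("bigquery insert failed: %v", err)
		}
	}()
}

func main() {
	var err error
	bq, err = bigquery.NewClient(context.Background(), os.Getenv("GOOGLE_CLOUD_PROJECT"))
	if err != nil {
		log.Fatal(err)
	}

	http.HandleFunc("/ingest", handler)
	log.Fatal(http.ListenAndServe(":"+os.Getenv("PORT"), nil))
}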

Google Cloud Platform: Cloud Functions vs App Engine

This may be the wrong place for this question, so please re-direct me if necessary.
I have deployed a couple of simple functions using Google Cloud Functions that do the following:
1. Read files from AWS and write to Cloud SQL
2. Aggregate Cloud SQL data and write a CSV file to a Cloud Storage bucket
3. Run a simple OLS prediction model on the aggregated data
I have these as separate functions because (1) often takes longer than the Cloud Function maximum timeout. Because of this, I am considering moving this whole thing to App Engine as a service. My questions about App Engine Standard are:
What do the request timeouts mean? If I were to run this service, do I still have a short time-limit after which it will no longer run?
Is App Engine the best thing to use for this task?
Thanks for all your help
According to the Google documentation, GAE Standard has a maximum timeout of 1 minute for HTTP requests and 10 minutes for cron/tasks in the older environments; newer environments allow 10 minutes for both HTTP requests and tasks. If your functions take longer than that, GAE Standard won't work for you. In that case, take a look at GAE Flex (see this Google documentation, which compares Flex to Standard).
Secondly, it seems to me that what you have are endpoints that are only hit occasionally or at specific scheduled times. If that is the case, I would also recommend taking a look at Cloud Run. We have a blog article about it, which says:
....Another thing to note about Cloud Run is that it only runs when it receives an HTTP request. It plays dead and comes alive to execute your code when an HTTP request comes in. When it is done executing the request, it goes 'dead' again till the next request comes in. This means you're not paying for time spent idling i.e. when it is not doing anything....
You can keep your Cloud Functions and the strong cohesion implemented by each of your 3 functions, then use Cloud Workflows, a serverless solution, to orchestrate the 3 CF calls. The drawback: you pay for 3 CF invocations and 3 Workflows steps. But does it matter? 2 million CF invocations and 5,000 Workflows steps are free.
As proposed by @NoCommandLine, Cloud Run is indeed an alternative, with its timeout of 3600 s (1 hour). The drawback: you need to wrap your code behind an HTTP endpoint and provide a webserver like Express or gunicorn (a minimal sketch follows below).
A hack is to build a Docker container for your code with no need for a webserver and run it using Cloud Build, which has a timeout of 24 hours.
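For the Cloud Run option, the webserver wrapper can be tiny; a minimal sketch in Go (the route and the runJob function are placeholders), where the existing batch logic is simply triggered by an HTTP request:

package main

import (
	"log"
	"net/http"
	"os"
)

// runJob is a placeholder for the existing batch work
// (read from AWS, write to Cloud SQL, aggregate, predict, ...).
func runJob() error {
	// ... existing logic goes here ...
	return nil
}

func main() {
	http.HandleFunc("/run", func(w http.ResponseWriter, r *http.Request) {
		// Cloud Run lets the request run up to the configured timeout
		// (max 3600 s), so the whole pipeline can finish here.
		if err := runJob(); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.Write([]byte("done"))
	})

	// Cloud Run injects the port to listen on via the PORT env variable.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}
	log.Fatal(http.ListenAndServe(":"+port, nil))
}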

What Google Cloud component to use for my client-server pub/sub?

I'm building an app on Google Cloud that takes users' audio recordings. Users record one or more audio clips and upload them to the backend, which processes the clips, runs a Machine Learning prediction model I built, and returns an integer to the user for each uploaded clip. Processing and predicting one piece of audio takes about 10 seconds. Users can upload 20 audio clips at a time.
What I have so far is:
HTML, JavaScript, and CSS on the client side. The upload functionality is async, using fetch, and returns a promise.
The backend runs on Google App Engine (Python 3.7), with Firebase Authentication, Google Cloud Storage, and Cloud Logging.
The processing and prediction run on Google Cloud Functions.
My question is as follows:
Since it might take up to 200-300 seconds for the processing to complete, how should I handle the task once users hit the upload button? Is a simple request-response enough?
I have investigated the following:
Google Cloud Tasks. This seems inappropriate, because the client actually needs to know when the processing is done, and there is really no callback when the task finishes.
Google Cloud Pub/Sub. There is a callback for when the job is done (subscribe), but it's server-side. This seems more appropriate for server-to-server communication rather than client-server.
What is the appropriate piece of tech to use in this case?
There are two ways to improve the user experience.
Firstly, on the processing side, you can perform parallel processing. All predictions should be handled by the same Cloud Function. In App Engine, use multi-threaded processing that invokes your Cloud Function for a single audio clip, and do that 20 times in parallel. I don't know exactly how to achieve that with async in Python, but I know it can be done (see the sketch after this answer).
Then, if you implement that, you wait for all the audio clips to finish processing before sending a response to your users; the total should be between 15 and 20 seconds.
If you use Cloud Run, you can use streaming (or partial HTTP responses). That way, you can send a partial response each time you get a response from your Cloud Function, whichever audio clip it is (the 3rd can finish before the 1st).
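The backend here is Python, but the fan-out itself is straightforward in any language; a sketch in Go (the Cloud Function URL and clip list are placeholders) that calls the function once per clip, 20 clips in parallel, and collects the results:

package main

import (
	"fmt"
	"io"
	"net/http"
	"sync"
)

// predict calls the Cloud Function for a single clip and returns its raw response.
// The URL is a placeholder for the deployed function's trigger URL.
func predict(clip string) (string, error) {
	resp, err := http.Get("https://REGION-PROJECT.cloudfunctions.net/predict?clip=" + clip)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	clips := []string{"clip1", "clip2", "clip3"} // up to 20 uploaded clips

	results := make([]string, len(clips))
	var wg sync.WaitGroup

	// One goroutine per clip: total latency is roughly the slowest clip (~10 s),
	// not the sum of all clips (~200-300 s).
	for i, c := range clips {
		wg.Add(1)
		go func(i int, c string) {
			defer wg.Done()
			r, err := predict(c)
			if err != nil {
				r = fmt.Sprintf("error: %v", err)
			}
			results[i] = r
		}(i, c)
	}
	wg.Wait()

	for i, r := range results {
		fmt.Printf("clip %d -> %s\n", i, r)
	}
}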
As you note, Cloud Pub/Sub is more appropriate for server-to-server communication since there is currently a limit of 10,000 subscriptions per-topic/per-project (this may change in the future).
Firebase Cloud Messaging may be a more appropriate solution for client-server messaging.
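For completeness, a rough sketch of the server-side push with the Firebase Admin SDK for Go (the device registration token and payload are placeholders); the client would receive this once its clip has been processed:

import (
	"context"
	"fmt"

	firebase "firebase.google.com/go/v4"
	"firebase.google.com/go/v4/messaging"
)

// notifyClient pushes the prediction result to the user's device once the
// audio clip has been processed. deviceToken is the FCM registration token
// the client sent along with the upload (placeholder here).
func notifyClient(ctx context.Context, deviceToken string, score int) error {
	app, err := firebase.NewApp(ctx, nil) // uses application default credentials
	if err != nil {
		return err
	}
	client, err := app.Messaging(ctx)
	if err != nil {
		return err
	}

	_, err = client.Send(ctx, &messaging.Message{
		Token: deviceToken,
		Data: map[string]string{
			"clip":  "clip1",
			"score": fmt.Sprint(score),
		},
	})
	return err
}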

Google App Engine - RabbitMQ alternative

I'm looking to move a messaging system that we have over to the Google App Engine environment, but I have a few questions that I'm hoping someone can help me with.
Our current messaging environment uses RabbitMQ to process messages, with about 10 consumers that connect to the queue to send the messages. This works well for us, as having 10 consumer instances to process the messages dramatically increases delivery rates.
I understand that App Engine doesn't support RabbitMQ, so I was wondering what the best alternative would be to achieve the same result. I see that you can run tasks in the background, which is great, but this would only act as one instance, which will slow down delivery rates.
Are there any other options?
I have never used RabbitMQ before, but your requirement looks like a good fit for the Task Queue and Pipeline APIs on App Engine.
The Task Queue API provides the ability to set up consumers and configure their processing rate.
https://developers.google.com/appengine/docs/python/taskqueue/
With the Task Queue API, applications can perform work outside of a user request, initiated by a user request. If an app needs to execute some background work, it can use the Task Queue API to organize that work into small, discrete units, called tasks. The app adds tasks to task queues to be executed later.
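The linked docs are Python, but the idea is the same in Go; a sketch of enqueuing work to a named queue (the queue name, worker URL, and the rate/concurrency configured for that queue in queue.yaml are assumptions):

import (
	"net/http"
	"net/url"

	"google.golang.org/appengine"
	"google.golang.org/appengine/taskqueue"
)

// enqueueMessage adds one outgoing message to a named push queue. The
// "messages" queue and its rate (e.g. rate: 10/s, max_concurrent_requests: 10
// in queue.yaml) are assumptions -- that configuration is the analogue of
// running 10 RabbitMQ consumers.
func enqueueMessage(w http.ResponseWriter, r *http.Request) {
	ctx := appengine.NewContext(r)

	t := taskqueue.NewPOSTTask("/worker/send", url.Values{
		"recipient": {"user@example.com"},
		"body":      {"hello"},
	})

	if _, err := taskqueue.Add(ctx, t, "messages"); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.WriteHeader(http.StatusAccepted)
}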
The Pipeline API is built on the task queue and provides more features for controlling the flow.
https://code.google.com/p/appengine-pipeline/
The Google App Engine Pipeline API connects together complex, time-consuming workflows (including human tasks). The goals are flexibility, workflow reuse, and testability. A primary use-case of the API is connecting together various App Engine MapReduces into a computational pipeline.

Using Amazon Web Services as a Google App Engine back end

I am currently using Google App Engine as my mobile application back end. I have a few tasks that cannot be performed in the GAE environment (mainly image recognition using OpenCV). My intention is to retain GAE and use AWS to perform these specific tasks.
Is there a simple way to pass specific tasks from GAE to AWS, e.g. a task queue?
You could either push tasks from GAE towards AWS, or have your AWS instances pull tasks from GAE.
If you push tasks from GAE towards AWS, you could use URLFetch to push your data towards your AWS instances (see the sketch after this answer).
If you prefer to have your AWS instances pull tasks from GAE, you could have your GAE instances put their tasks in the GAE Pull Queue, and then have your AWS instances use the Task Queue REST API to lease tasks from the queue.
In either case, the AWS instance could report back the processing result through a simple POST request to your GAE servlets, or by inserting tasks via the above-mentioned REST API, which would later be leased by your GAE instances. The latter could be useful if you want to control the rate at which your GAE app processes the results.
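For the push direction, a minimal sketch in Go on the first-generation standard runtime, where outbound HTTP goes through URLFetch (the AWS endpoint and payload are placeholders):

import (
	"bytes"
	"net/http"

	"google.golang.org/appengine"
	"google.golang.org/appengine/urlfetch"
)

// pushToAWS sends a unit of work (e.g. an image reference) to an AWS
// endpoint for OpenCV processing. The endpoint URL is a placeholder.
func pushToAWS(w http.ResponseWriter, r *http.Request) {
	ctx := appengine.NewContext(r)
	client := urlfetch.Client(ctx)

	payload := []byte(`{"image": "gs://my-bucket/photo.jpg"}`)
	resp, err := client.Post("https://aws.example.com/process", "application/json",
		bytes.NewReader(payload))
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()

	w.WriteHeader(resp.StatusCode)
}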
Disclaimer: I'm a lead developer on the AppScale project.
One way that you could go is with AppScale - it's an open source implementation of the App Engine APIs that runs over Amazon EC2 (as well as other clouds). Since it's open source, you could alter the AppServer that we ship with it to enable OpenCV to be used. This would require you to run your App Engine app in AWS, but you could get creative and have a copy of your app running with Google, and have it send Task Queue requests to the version of your app running in AWS only when you need to use the OpenCV libraries.
Have you considered using Amazon Simple Queue Service? http://aws.amazon.com/sqs/
You should be able to add items to the queue from GAE using a standard HTTP client.
Sure. AppEngine has a Task Queue, where you can put in your tasks by simply implementing DeferredTask. In that task you can make requests to AWS.
Your intention to retain the application in GAE and use AWS to perform a few tasks that cannot be performed in GAE seems to me like the right scenario.
I'd like to share a few ideas along with some resources to answer the main part of your question:
Is there a simple way to pass specific tasks from GAE to AWS? E.g. a task queue?
If you need GAE and AWS to perform the task all the time (24/7), then your application will definitely depend on a batch schedule or task queue, both of which are available in GAE.
However, if you can arrange for the task to be prepared in GAE and performed by AWS on an interval basis (say, twice a day for less than an hour each), you may not need them, as long as you can have GAE put the data on Google Cloud Storage (GCS) as public.
For this scenario, you need to set up an AWS EC2 instance on an on/off schedule and have the instance run a boot script via cloud-init that collects the data through your domain pointed at GCS (c.storage.googleapis.com), like so:
wget -q --read-timeout=0.0 --waitretry=5 --tries=400 \
    --background http://your.domain.com/yourfile?q=XXX...
Once AWS has the data from GCS, it can perform these specific tasks. Then have it call GAE to clean up the data and put the result back on GCS, ready to be used by your mobile application back end.
Following are some options to consider:
Note that not all EC2 types are suitable for an on/off schedule. I recommend EC2-VPC/EBS if you want to set up an AWS EC2 instance on an on/off schedule.
You may not need to set up EC2 at all if AWS Lambda can perform the task without it. The cost is lower: a task running twice a day for typically less than 3 seconds, with memory consumption up to 128 MB, usually costs less than $0.0004 USD/month (roughly 60 invocations x 3 s x 0.125 GB ≈ 22.5 GB-seconds, billed at about $0.0000167 per GB-second).
Since rearranging your application in GAE and setting AWS to perform some of the tasks might ultimately raise your billing rates, try to optimize the instance class in GAE.
