Testing Google Cloud PubSub push endpoints locally - google-cloud-pubsub

Trying to figure out the best way to test PubSub push endpoints locally. We tried with ngrok.io, but you must own the domain in order to whitelist (the tool for doing so is also broken… resulting in an infinite redirect loop). We also tried emulating PubSub locally. I am able to publish and pull, but I cannot get the push subscriptions working. We are using a local Flask webserver like so:
@app.route('/_ah/push-handlers/events', methods=['POST'])
def handle_message():
    print request.json
    return jsonify({'ok': 1}), 200
The following produces no result:
client = pubsub.Client()
topic = client.topic('events')
topic.create()
subscription = topic.subscription('test_push', push_endpoint='http://localhost:5000/_ah/push-handlers/events')
subscription.create()
topic.publish('{"test": 123}')
It does not yell at us when we attempt to create a subscription to an HTTP endpoint (whereas live PubSub will if you do not use HTTPS). Perhaps this is by design? Pull works just fine… Any ideas on how to best develop PubSub push endpoints locally?

Following the latest PubSub library documentation at the time of writing, the example below creates a subscription with a push configuration.
Requirements
I have tested with the following requirements:
Google Cloud SDK 285.0.1 (for the PubSub local emulator)
Python 3.8.1
Python packages (requirements.txt):
flask==1.1.1
google-cloud-pubsub==1.3.1
Run the PubSub emulator locally
export PUBSUB_PROJECT_ID=fake-project
gcloud beta emulators pubsub start --project=$PUBSUB_PROJECT_ID
By default, the PubSub emulator starts on port 8085.
The project argument can be anything; its value does not matter.
Flask server
Considering the following server.py:
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route('/_ah/push-handlers/events', methods=['POST'])
def handle_message():
    print(request.json)
    return jsonify({'ok': 1}), 200


if __name__ == "__main__":
    app.run(port=5000)
Run the server (starts on port 5000):
python server.py
PubSub example
Considering the following pubsub.py:
import sys

from google.cloud import pubsub_v1

if __name__ == "__main__":
    project_id = sys.argv[1]

    # 1. create topic (events)
    publisher_client = pubsub_v1.PublisherClient()
    topic_path = publisher_client.topic_path(project_id, "events")
    publisher_client.create_topic(topic_path)

    # 2. create subscription (test_push with push_config)
    subscriber_client = pubsub_v1.SubscriberClient()
    subscription_path = subscriber_client.subscription_path(
        project_id, "test_push"
    )
    subscriber_client.create_subscription(
        subscription_path,
        topic_path,
        push_config={
            'push_endpoint': 'http://localhost:5000/_ah/push-handlers/events'
        }
    )

    # 3. publish a test message
    publisher_client.publish(
        topic_path,
        data='{"test": 123}'.encode("utf-8")
    )
Finally, run this script:
PUBSUB_EMULATOR_HOST=localhost:8085 \
PUBSUB_PROJECT_ID=fake-project \
python pubsub.py $PUBSUB_PROJECT_ID
Results
Then, you can see the results in the Flask server's log:
{'subscription': 'projects/fake-project/subscriptions/test_push', 'message': {'data': 'eyJ0ZXN0IjogMTIzfQ==', 'messageId': '1', 'attributes': {}}}
127.0.0.1 - - [22/Mar/2020 12:11:00] "POST /_ah/push-handlers/events HTTP/1.1" 200 -
Note that you can retrieve the sent message, which is base64-encoded here (message.data):
$ echo "eyJ0ZXN0IjogMTIzfQ==" | base64 -d
{"test": 123}
Of course, you can also do the decoding in Python.
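For instance, here is a minimal sketch of doing the same decoding in Python; the envelope dict below is just the example push message shown in the log above:
import base64
import json

# Example push envelope, as received by the Flask handler above.
envelope = {'message': {'data': 'eyJ0ZXN0IjogMTIzfQ=='}}

# Decode the base64-encoded payload back into the original JSON message.
payload = json.loads(base64.b64decode(envelope['message']['data']).decode('utf-8'))
print(payload)  # {'test': 123}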

This could be a known bug (fix forthcoming) in the emulator where push endpoints created along with the subscription don't work. The bug only affects the initial push config; modifying the push config for an existing subscription should work. Can you try that?
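If you want to try that, here is a minimal sketch (not from the original answer) using google-cloud-pubsub 1.3.1, assuming the test_push subscription from the example above already exists and PUBSUB_EMULATOR_HOST is set so the client talks to the emulator:
from google.cloud import pubsub_v1

subscriber_client = pubsub_v1.SubscriberClient()
subscription_path = subscriber_client.subscription_path("fake-project", "test_push")

# Replace the push config on the already-created subscription.
push_config = pubsub_v1.types.PushConfig(
    push_endpoint="http://localhost:5000/_ah/push-handlers/events"
)
subscriber_client.modify_push_config(subscription_path, push_config)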

I failed to get the PubSub emulator to work in my local environment (it fails with various Java exceptions). I didn't even get to try features like push with auth, etc. So I ended up using ngrok to expose my local dev server and used the public HTTPS URL from ngrok in the PubSub subscription.
I had no issues with whitelisting or redirects like those described in the question.
So this might be helpful for anyone else.

Related

Flink REST API: /jars/upload returning 404

Following is my code snippet for uploading a jar to Flink. I am getting a 404 response for this POST request; the output of the request is shown below. I also tried updating the URL to /v1/jars/upload, but got the same response. All of the jar-related APIs give me the same response. I am running this code inside an AWS Lambda which is in the same VPC as the EMR cluster that is running my Flink job. APIs like /config and /jobs work from this Lambda; only APIs like upload jar and submit job are not working and return 404.
<Response [404]> {"errors":["Not found: /jars/upload"]}
Also tried the same thing by directly logging into job manager node and running curl command, but got the same response. I am using Flink 1.14.2 version on EMR cluster
curl -X POST -H "Expect:" -F \
  "jarfile=@/home/hadoop/test-1.0-global-14-dyn.jar" \
  http://ip-10-0-1-xxx:8081/jars/upload
{"errors":["Not found: /jars/upload"]}
import json
import requests
import boto3
import botocore
import os


def lambda_handler(event, context):
    config = dict(
        service_name="s3",
        region_name="us-east-1"
    )
    s3_ = boto3.resource(**config)
    bucket = "dv-stream-processor-na-gamma"
    prefix = ""
    file = "Test-1.0-global-14-dyn.jar"
    path = "/tmp/" + file
    try:
        s3_.Bucket(bucket).download_file(f"{file}", "/tmp/" + file)
    except botocore.exceptions.ClientError as e:
        print(e.response['Error']['Code'])
        if e.response['Error']['Code'] == "404":
            print("The object does not exist.")
    print(os.path.isfile('/tmp/' + file))
    response = requests.post(
        "http://ip-10-0-1-xx.ec2.internal:8081/jars/upload",
        files={
            "jarfile": (
                os.path.basename(path),
                open(path, "rb"),
                "application/x-java-archive"
            )
        }
    )
    print(response)
    print(response.text)
The reason the jar upload was not working for me was that I was using Flink's "Per Job" cluster mode, which does not allow submitting jobs via the REST API. I updated the cluster mode to "Session" mode and it started working.
References for Flink cluster mode information:
https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/deployment/overview/
Code you can refer to for starting a cluster in session mode: https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/resource-providers/yarn/#starting-a-flink-session-on-yarn
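Once the cluster runs in session mode, the upload call from the question works as-is. As an illustration only, here is a minimal sketch of uploading and then running a jar via the Flink REST API; the JobManager address and jar path are placeholders:
import os
import requests

jobmanager = "http://ip-10-0-1-xxx:8081"  # placeholder JobManager address
path = "/tmp/Test-1.0-global-14-dyn.jar"  # placeholder jar path

# Upload the jar; the response's "filename" field ends with the generated jar id.
upload = requests.post(
    jobmanager + "/jars/upload",
    files={"jarfile": (os.path.basename(path), open(path, "rb"), "application/x-java-archive")},
)
jar_id = upload.json()["filename"].split("/")[-1]

# Submit a job from the uploaded jar.
run = requests.post(jobmanager + "/jars/" + jar_id + "/run")
print(run.status_code, run.text)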

FastAPI on Google App Engine: Why am I getting duplicate cloud tasks?

I have deployed a FastAPI ML service to Google App Engine but it's exhibiting some odd behavior. The FastAPI service is intended to receive requests from a main service (via Cloud Tasks) and then send responses back. And that does happen. But it appears the route in the FastAPI service that handles these requests gets called four times instead of just once.
My assumption was that GAE, gunicorn, or FastAPI would ensure that the handler runs once per cloud task. But it appears that multiple workers, or some other issue in my config, is causing the handler to get called four times. Here are a few more details and some specific questions:
The FastAPI app is deployed to Google App Engine (flex) via gcloud app deploy app.yaml
The app.yaml file includes GUNICORN_ARGS: "--graceful-timeout 3540 --timeout 3600 -k gevent -c gunicorn.gcloud.conf.py main:app"
The Dockerfile in the FastAPI project root (which is used for the gcloud deploy) also includes the final command gunicorn -c gunicorn.gcloud.conf.py main:app
Here's the gunicorn conf:
import multiprocessing
import os

bind = ":" + os.environ["PORT"]
workers = multiprocessing.cpu_count() * 2 + 1
worker_class = "uvicorn.workers.UvicornWorker"
forwarded_allow_ips = "*"
max_requests = 1000
max_requests_jitter = 100
timeout = 200
graceful_timeout = 6000
So I'm confused:
Does GUNICORN_ARGS in app.yaml or the gunicorn argument in the Dockerfile take precedence?
Should I be using multiple workers or is that precisely what's causing multiple tasks?
Happy to provide any other relevant info.
GAE Flex defines environment variables in the app.yaml file [1].
Looking at Docker Compose: "In the case of environment, labels, volumes, and devices, Compose “merges” entries together with locally-defined values taking precedence." [2] And depending on whether a .env file is used, "Values in the shell take precedence over those specified in the .env file." [3]
[1] https://cloud.google.com/appengine/docs/flexible/custom-runtimes/configuring-your-app-with-app-yaml#defining_environment_variables
[2] https://docs.docker.com/compose/extends/
[3] https://docs.docker.com/compose/environment-variables/
The issue is unlikely to be a Cloud Tasks duplication issue: "in production, more than 99.999% of tasks are executed only once." [4] You can investigate the calling source.
[4] https://cloud.google.com/tasks/docs/common-pitfalls#duplicate_execution
You can also investigate the log contents to see if there are unique identifiers, or if they are the same logs.
For the second question, on uvicorn [0] workers, you can try hard-coding the value of “workers” to 1 and verify whether the repetition stops.
[0] https://www.uvicorn.org/
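For instance, a minimal sketch of the gunicorn config from the question with a single worker; all other settings are kept from the question, and this is only a debugging aid, not a recommended production setting:
import os

bind = ":" + os.environ["PORT"]
workers = 1  # hard-code a single worker to check whether multiple workers cause the duplicate handling
worker_class = "uvicorn.workers.UvicornWorker"
forwarded_allow_ips = "*"
timeout = 200
graceful_timeout = 6000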

Create new instances of a Google App Engine Service

I have a Google App Engine Java 11 service using the Standard Environment.
I have deployed it specifying in the corresponding app.yaml file manual scaling, setting the number of instances to 1.
Is there a way that I can increase the number of instances for this service without having to upload again all the files in the service?
So I have one instance. Now I want 2 instances. How do I do this?
I have not found a way to do this in either the console or the gcloud utilities.
Also, just calling gcloud app deploy with the modified app.yaml file creates a broken version of the service.
app.yaml:
service: headergrabber
runtime: java11
instance_class: B8
manual_scaling:
  instances: 1
Use REST API to patch the number of instances of your manual scaling app.
Here's the HTTP request:
PATCH https://appengine.googleapis.com/v1/{name=apps/*/services/*/versions/*}
You will have to pass the manualScaling.instances field to update the number of instances you prefer.
Here's an example with curl using a token that should only be used for local testing. I tested it on my side and it works:
curl -X PATCH -H "Content-Type: application/json" \
-d "{ 'manualScaling': { 'instances': 2 } }" \
-H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
https://appengine.googleapis.com/v1/apps/PROJECT_ID/services/SERVICE/versions/VERSION?updateMask=manualScaling.instances
Where:
PROJECT_ID = Project ID
SERVICE = Service Name
VERSION = Version Name
You need to log in to your account and set the project if you're using the Cloud SDK, or you can run the command in Cloud Shell.
An alternative is to use a client library so you can write applications that can update your App Engine instance.
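For example, here is a minimal sketch (not from the original answer) using the google-api-python-client library, assuming application-default credentials are available; PROJECT_ID, SERVICE, and VERSION are the same placeholders as above:
from googleapiclient import discovery

# Build an App Engine Admin API client with application-default credentials.
appengine = discovery.build("appengine", "v1")

# Patch the manual-scaling instance count of the given service version.
appengine.apps().services().versions().patch(
    appsId="PROJECT_ID",
    servicesId="SERVICE",
    versionsId="VERSION",
    updateMask="manualScaling.instances",
    body={"manualScaling": {"instances": 2}},
).execute()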
Take note that doing this requires the App Engine Admin API to be enabled on your project. This API provides programmatic access to several of the App Engine administrative operations that are found in the Google Cloud Console.

Consume SQS tasks from App Engine

I'm attempting to integrate with a third party that is posting messages on an Amazon SQS queue. I need my GAE backend to receive these messages.
Essentially, I want the following script to launch and always be running
import boto3

sqs_client = boto3.client('sqs',
                          aws_access_key_id=KEY,
                          aws_secret_access_key=SECRET,
                          region_name=REGION)

while True:
    msgs_response = sqs_client.receive_message(QueueUrl=QUEUE_URL, WaitTimeSeconds=60)
    for message in msgs_response.get('Messages', []):
        deferred.defer(process_and_delete_message, message)
My main App Engine web app is on Automatic Scaling (with the 60-second & 10-minute task timeouts), but I'm thinking of setting up a micro-service set to either Manual Scaling or Basic Scaling because:
Requests can run indefinitely. A manually-scaled instance can choose to handle /_ah/start and execute a program or script for many hours without returning an HTTP response code. Task queue tasks can run up to 24 hours.
https://cloud.google.com/appengine/docs/standard/python/an-overview-of-app-engine
Apparently both Manual & Basic Scaling also allow "Background Threads", but I am having a hard time finding documentation for them, and I'm thinking this may be a relic from the days before Backends were deprecated in favor of Modules (although I did find this: https://cloud.google.com/appengine/docs/standard/python/refdocs/modules/google/appengine/api/background_thread/background_thread#BackgroundThread).
Is Manual or Basic Scaling suited for this? If so, what should I use to listen on sqs_client.receive_message()? One thing I'm concerned about is this task/background thread dying and not relaunching itself.
This may be a possible solution:
Try to use a Google Compute Engine micro instance to run that script continuously and send a REST call to your app engine app. Easy Python Example For Compute Engine
OR:
I have used modules that run instance type B2/B1 for long-running jobs, and I have never had any trouble, but those jobs do start and stop. I use basic scaling with max_instances set to 1. The jobs I run take around 6 hours to complete.
I ended up creating a manual-scaling App Engine Standard micro-service for this. This micro-service has a handler for /_ah/start that never returns and runs indefinitely (many days at a time); when it does get stopped, App Engine restarts it immediately.
Requests can run indefinitely. A manually-scaled instance can choose
to handle /_ah/start and execute a program or script for many hours
without returning an HTTP response code. Task queue tasks can run up
to 24 hours.
https://cloud.google.com/appengine/docs/standard/python/an-overview-of-app-engine
My /_ah/start handler listens to the SQS queue, and creates Push Queue tasks that my default service is set up to listen for.
I was looking into the Compute Engine route as well as the App Engine Flex route (which is essentially Compute Engine managed by App Engine), but there were other complexities, like not getting access to ndb and the taskqueue SDK, and I didn't have time to dive into that.
Below are all of the files for this micro-service, not included is my lib folder that contains the source code for boto3 & some other libraries I needed.
I hope this is helpful for someone.
gaesqs.yaml:
application: my-project-id
module: gaesqs
version: dev
runtime: python27
api_version: 1
threadsafe: true

manual_scaling:
  instances: 1

env_variables:
  theme: 'default'
  GAE_USE_SOCKETS_HTTPLIB: 'true'

builtins:
- appstats: on #/_ah/stats/
- remote_api: on #/_ah/remote_api/
- deferred: on

handlers:
- url: /.*
  script: gaesqs_main.app

libraries:
- name: jinja2
  version: "2.6"
- name: webapp2
  version: "2.5.2"
- name: markupsafe
  version: "0.15"
- name: ssl
  version: "2.7.11"
- name: pycrypto
  version: "2.6"
- name: lxml
  version: latest
gaesqs_main.py:
#!/usr/bin/env python
import json
import logging

import appengine_config

try:
    # This is needed to make local development work with SSL.
    # See http://stackoverflow.com/a/24066819/500584
    # and https://code.google.com/p/googleappengine/issues/detail?id=9246 for more information.
    from google.appengine.tools.devappserver2.python import sandbox
    sandbox._WHITE_LIST_C_MODULES += ['_ssl', '_socket']

    import sys
    # this is socket.py copied from a standard python install
    from lib import stdlib_socket
    socket = sys.modules['socket'] = stdlib_socket
except ImportError:
    pass

import boto3
import os
import webapp2
from webapp2_extras.routes import RedirectRoute

from google.appengine.api import taskqueue

app = webapp2.WSGIApplication(debug=os.environ['SERVER_SOFTWARE'].startswith('Dev'))#, config=webapp2_config)

KEY = "<MY-KEY>"
SECRET = "<MY-SECRET>"
REGION = "<MY-REGION>"
QUEUE_URL = "<MY-QUEUE_URL>"


def process_message(message_body):
    queue = taskqueue.Queue('default')
    task = taskqueue.Task(
        url='/task/sqs-process/',
        countdown=0,
        target='default',
        params={'message': message_body})
    queue.add(task)


class Start(webapp2.RequestHandler):
    def get(self):
        logging.info("Start")
        for loggers_to_suppress in ['boto3', 'botocore', 'nose', 's3transfer']:
            logger = logging.getLogger(loggers_to_suppress)
            if logger:
                logger.setLevel(logging.WARNING)
        logging.info("boto3 loggers suppressed")

        sqs_client = boto3.client('sqs',
                                  aws_access_key_id=KEY,
                                  aws_secret_access_key=SECRET,
                                  region_name=REGION)
        while True:
            msgs_response = sqs_client.receive_message(QueueUrl=QUEUE_URL, WaitTimeSeconds=20)
            logging.info("msgs_response: %s" % msgs_response)
            for message in msgs_response.get('Messages', []):
                logging.info("message: %s" % message)
                process_message(message['Body'])
                sqs_client.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message['ReceiptHandle'])


_routes = [
    RedirectRoute('/_ah/start', Start, name='start'),
]

for r in _routes:
    app.router.add(r)
appengine_config.py:
import os

from google.appengine.ext import vendor
from google.appengine.ext.appstats import recording

appstats_CALC_RPC_COSTS = True

# Add any libraries installed in the "lib" folder.
# Use pip with the -t lib flag to install libraries in this directory:
# $ pip install -t lib gcloud
# https://cloud.google.com/appengine/docs/python/tools/libraries27
try:
    vendor.add('lib')
except:
    print "Unable to add 'lib'"


def webapp_add_wsgi_middleware(app):
    app = recording.appstats_wsgi_middleware(app)
    return app


if os.environ.get('SERVER_SOFTWARE', '').startswith('Development'):
    print "gaesqs development"
    import imp
    import os.path
    import inspect
    from google.appengine.tools.devappserver2.python import sandbox
    sandbox._WHITE_LIST_C_MODULES += ['_ssl', '_socket']
    # Use the system socket.
    real_os_src_path = os.path.realpath(inspect.getsourcefile(os))
    psocket = os.path.join(os.path.dirname(real_os_src_path), 'socket.py')
    imp.load_source('socket', psocket)
    os.environ['HTTP_HOST'] = "my-project-id.appspot.com"
else:
    print "gaesqs prod"
    # Doing this on dev_appserver/localhost seems to cause outbound https requests to fail
    from lib import requests
    from lib.requests_toolbelt.adapters import appengine as requests_toolbelt_appengine
    # Use the App Engine Requests adapter. This makes sure that Requests uses
    # URLFetch.
    requests_toolbelt_appengine.monkeypatch()

GAE: AssertionError: No api proxy found for service "datastore_v3"

I'm writing simple code to access the dev server. Both the dev server and the datastore emulator have been started locally.
from google.appengine.ext import ndb


class Account(ndb.Model):
    name = ndb.StringProperty()


acc = Account(name=u"test").put()
print(acc)
Error:
AssertionError: No api proxy found for service "datastore_v3"
I tried setting export DATASTORE_EMULATOR_HOST=localhost:8760, but it does not help.
$ dev_appserver.py ./app.yaml
WARNING 2017-02-20 06:40:23,130 application_configuration.py:176] The "python" runtime specified in "./app.yaml" is not supported - the "python27" runtime will be used instead. A description of the differences between the two can be found here:
https://developers.google.com/appengine/docs/python/python25/diff27
INFO 2017-02-20 06:40:23,131 devappserver2.py:764] Skipping SDK update check.
INFO 2017-02-20 06:40:23,508 api_server.py:268] Starting API server at: http://localhost:53755
INFO 2017-02-20 06:40:23,514 dispatcher.py:199] Starting module "default" running at: http://localhost:8080
INFO 2017-02-20 06:40:23,517 admin_server.py:116] Starting admin server at: http://localhost:8000
GAE app code cannot run as a standalone Python app; it can only run inside a GAE app, which runs inside your dev server, typically as part of handler code triggered via HTTP requests to the dev server.
You need to put that code inside one of your app handlers. For example, inside the get() method of the MainPage handler from main.py in the Quickstart for Python App Engine Standard Environment (actually it'd be better in a post() method, since your code is writing to the datastore).
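As an illustration, here is a minimal sketch (assuming the webapp2-based main.py from the quickstart) that moves the datastore write into a handler method:
import webapp2
from google.appengine.ext import ndb


class Account(ndb.Model):
    name = ndb.StringProperty()


class MainPage(webapp2.RequestHandler):
    def post(self):
        # Runs inside the dev server, so the datastore API proxy is available.
        key = Account(name=u"test").put()
        self.response.write(str(key))


app = webapp2.WSGIApplication([('/', MainPage)], debug=True)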
