I have a mobile app with a comment system, backed by App Engine. When user A replies to user B's comment, user B gets a notification. Everything works over HTTP.
Right now I have the client device polling App Engine every minute for updates. It works, but on average there's a 30-second delay before the notification appears.
I would like to close this gap by having App Engine send a packet to user B's device immediately after user A posts the reply. I can make this happen by moving the wait(60) command from the client to the server -- the client will run a tight loop, making another request as soon as it gets a response; App Engine sits on every request for 60 seconds before responding.
But if the user gets a notification, App Engine responds before the 60 seconds are up. Essentially, user A's request handler wakes up user B's sleeping request handler and causes it to return non-null data.
Is there a name for this technique as applied to HTTP? Can it be coded efficiently? If so, how can I implement the wait/notify code?
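The handshake described above — the server holding each request open until it either has data or times out — is the core of long polling. A minimal sketch of the server-side wait/notify logic in Python using a condition variable (class and method names are illustrative, not from any framework):

```python
import threading

class NotificationBox:
    """Holds pending notifications for one user; handlers block on wait()."""

    def __init__(self):
        self._cond = threading.Condition()
        self._messages = []

    def post(self, message):
        # Called from user A's request handler: queue the notification
        # and wake user B's sleeping long-poll handler.
        with self._cond:
            self._messages.append(message)
            self._cond.notify_all()

    def wait(self, timeout=60):
        # Called from user B's long-poll handler: block until a message
        # arrives or the timeout expires, then return whatever is queued
        # (possibly nothing, which tells the client to poll again).
        with self._cond:
            if not self._messages:
                self._cond.wait(timeout)
            messages, self._messages = self._messages, []
            return messages
```

Because `Condition.wait` takes a timeout, the 60-second cap and the immediate wake-up are handled by the same call: the handler returns early if `post` fires, and returns an empty list if the timer runs out.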
In lieu of sockets, App Engine offers the Channel API, which should deliver updates nearly instantly without the need to poll.
Google describes basic scaling like this:

Basic scaling creates instances when your application receives requests. Each instance will be shut down when the application becomes idle.
I don't really have any other choice: I'm using a B1 instance, so automatic scaling is not allowed.
That raises a question, though: if I have an endpoint that takes a variable amount of time (could be minutes, could be hours) and I basically have to set an idle_timeout, does App Engine calculate idle_timeout from the time the request was made in the first place, or from when the app is done handling the request?
If the former is right, then it feels a bit unfair to have to guess how long requests will take, when thread activity is a usable indicator of whether or not to initiate shutdown of an app.
Here you are mixing up two different terms.
idle_timeout is the time an instance will wait after receiving its last request before shutting down.
Request timeout is the amount of time that App Engine will wait for your app to return a response to a request.
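For reference, idle_timeout is configured under basic_scaling in app.yaml; a minimal fragment (the values shown are illustrative):

```yaml
# app.yaml -- basic scaling on a B1 instance class (values are illustrative)
instance_class: B1
basic_scaling:
  max_instances: 1
  idle_timeout: 10m   # shut the instance down 10 minutes after its last request
```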
As per the documentation:
Requests can run for up to 24 hours. A manually-scaled instance can choose to handle /_ah/start and execute a program or script for many hours without returning an HTTP response code. Task queue tasks can run up to 24 hours.
I'm using Spring Boot 2.1.4, Kafka, and React as a frontend UI. I have a user registration process initiated from the UI which requires a backend process and its data before the registration is complete.
The flow is like this:
1. The frontend UI makes a request to an API, which returns a token and puts a message onto a request Kafka queue
2. The message is processed by a backend process (which takes approximately 1 minute)
3. When the process is finished, a message with the token and data is written to a reply Kafka queue, which indicates the process is complete
What I want is the frontend UI to make the initial API request which returns immediately, show a loading screen and display a ready message when the registration process is complete.
I have thought of a couple of options:
Attach a KafkaListener to the reply queue. Once the reply message appears, store the response and token in a datastore (e.g. Redis). Provide an API to the UI which checks the datastore for the token. The UI will poll this API every 10 seconds. If the response is not available after 2 mins, the user will be asked to check back later.
Use WebSockets with React. I've not used WebSockets before but the only thing I'm unsure of is if I have multiple instances of the registration microservice, will this cause any issues with client/api communication.
Any recommendations or any other options on the best way to handle this?
Attach a KafkaListener to the reply queue. Once the reply message appears, store the response and token in a datastore (e.g. Redis). Provide an API to the UI which checks the datastore for the token. The UI will poll this API every 10 seconds. If the response is not available after 2 mins, the user will be asked to check back later.
This will work. I would use the built-in RocksDB for storage, though, just for simplicity. Below is the documentation for exposing a state store to be queryable from outside Kafka Streams.
https://kafka.apache.org/20/documentation/streams/developer-guide/interactive-queries.html
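Whichever store you pick, the polling endpoint itself stays small. A sketch in Python, using a plain dict to stand in for the chosen store (Redis or a state store); the function names and response shape are illustrative:

```python
import json

# Stand-in for the datastore: token -> registration result
# written by the listener on the reply queue.
results = {}

def on_reply_message(token, data):
    """Called when the backend writes its completion message to the reply queue."""
    results[token] = data

def poll_status(token):
    """Handler for the status API the UI polls every 10 seconds."""
    if token in results:
        return json.dumps({"status": "complete", "data": results[token]})
    return json.dumps({"status": "pending"})
```

The UI keeps calling `poll_status` with its token until the status flips to complete (or its 2-minute budget runs out).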
Use WebSockets with React. I've not used WebSockets before but the only thing I'm unsure of is if I have multiple instances of the registration microservice, will this cause any issues with client/api communication.
It can potentially cause issues, depending on the implementation of the registration service. You won't know which instance of the registration service a client will establish a connection with. Either session state needs to be managed in an external datastore like Redis, or you would have to use a load balancer that supports sticky sessions (a bit of an archaic solution).
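To make the external-datastore option concrete: each instance records which users are connected to it, and any instance can route a notification through that shared registry. A minimal in-memory sketch (in production the registry would live in Redis; all names here are illustrative):

```python
# Shared registry: user id -> the instance holding that user's WebSocket.
# In production this mapping would live in Redis so every instance can read it.
socket_registry = {}

class Instance:
    """Stand-in for one replica of the registration microservice."""

    def __init__(self, name):
        self.name = name
        self.delivered = []   # messages pushed over this instance's sockets

    def register(self, user_id):
        # Called when a client's WebSocket connects to this instance.
        socket_registry[user_id] = self

    def push(self, user_id, message):
        self.delivered.append((user_id, message))

def notify(user_id, message):
    """Any instance can call this; the registry locates the right socket."""
    instance = socket_registry.get(user_id)
    if instance is None:
        return False
    instance.push(user_id, message)
    return True
```

With this shape, the KafkaListener that sees the reply message simply calls `notify(token_owner, ...)` and doesn't care which replica the browser happens to be connected to.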
I have an IoT device that only sends data to the cloud (a Google Cloud Function) infrequently. The data includes a timestamp. Once I turn off the IoT device or it loses its internet connection, it can't send a shutdown notice to the cloud.
I would like to send a notification after not receiving any data for something like 10 minutes. Can my cloud function have a re-settable trigger that would send me a notification if it doesn't get reset by my IoT device checking in within that time? How do I create this delay?
I would prefer not to pay for the idle time.
Not knowing Cloud Functions well, my initial thought would be to use Task Queues on the App Engine.
On each incoming request from your device, you could enqueue a task with an ETA of X minutes. When the task runs, it would check whether any data has been written in the past X minutes. If not, it would send a notification and potentially queue up a fresh task to check again.
https://cloud.google.com/appengine/docs/standard/python/taskqueue/
My assumption here is that you can access the data written by your cloud function from an App Engine application.
Is Asynchronous URLFetch the fastest mechanism to get out of the App Engine sandbox?
http://ikaisays.com/2010/06/29/using-asynchronous-urlfetch-on-java-app-engine/
We had experienced very slow URLFetches in the past, but think Pull Queues would introduce too much latency.
Our Google App Engine app needs to send UDP messages in near real-time.
Since App Engine supports only HTTP on port 80, we plan to use HTTP POST to EC2/Rackspace instances that in turn send the UDP message.
At the end of the day, the time spent actually fetching the URL is the same whether you do it synchronously or asynchronously.
The difference lies in whether your app needs to wait for the result (blocking until it comes), or whether it can fire off a request and then do other things while it waits. With asynchronous fetches, your app can fire off a request and do other things (including firing off more requests) while it waits for the result to come back.
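The fire-then-collect pattern looks the same in any stack. A sketch in plain Python, using `concurrent.futures` to stand in for App Engine's `urlfetch.make_fetch_call` / `rpc.get_result` pair (the fake `fetch` and its delay are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def fetch(url):
    # Stand-in for the actual URLFetch; pretend the network takes 0.2 s.
    time.sleep(0.2)
    return "response from %s" % url

with ThreadPoolExecutor(max_workers=2) as pool:
    # Fire off the request -- the equivalent of urlfetch.make_fetch_call(...).
    future = pool.submit(fetch, "http://example.com/a")
    # ...do other work (or fire more requests) while it is in flight...
    other_work = sum(range(1000))
    # Block only when the result is actually needed, like rpc.get_result().
    result = future.result()
```

The total fetch time is unchanged; what you save is the wall-clock time your handler would otherwise have spent blocked and idle.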
We wrote a screen-sharing application in C++ based on sending screenshots.
It works by establishing a TCP connection between the server and client, where the server forwards every new screenshot it receives for a user through the connection, and the client pops it up.
Now we are trying to host this on Google App Engine, and therefore need to 'servlet'-ize and 'sandbox' the server code, implementing this forwarding through HTTP requests.
I imagine the following:
1. POST request with the screenshot as a multipart form (Apache uploads ..).
But now the server needs to contact the specified client (who is logged in) to send it/forward the screenshot.
I'm not sure how to 'initiate' such connection from the servlet to the client. The client doesn't run any servlet environment (of course).
I know HTTP 1.1 maintains a persistent TCP connection, but it seems GAE won't let me use it.
One approach that comes to mind is to send a 100 Continue to every logged-in user at login, and respond with the screenshot once it arrives. Upon receipt, the client makes another request, and so on.
An alternative (inspired by setting the refresh header for a browser) would be to have the app poll on a regular basis (every 5 seconds).
You're not going to be able to do this effectively on GAE.
Problem 1: All output is buffered until your handler returns.
Problem 2: Quotas & Limits:
Some features impose limits unrelated to quotas to protect the stability of the system. For example, when an application is called to serve a web request, it must issue a response within 30 seconds. If the application takes too long, the process is terminated and the server returns an error code to the user. The request timeout is dynamic, and may be shortened if a request handler reaches its timeout frequently to conserve resources.
Comet support is on the product roadmap, but to me your app still seems like a poor fit for a GAE application.
Long polling is the technique used for this kind of asynchronous communication between server and client.
In long polling, the servlet keeps a map of clients to their pending messages: the key is the client id, and the value is the list of messages to be sent to that client. When a client opens a connection with the server (sends a request to the servlet), the servlet checks the map for any messages addressed to it. If it finds some, it sends the messages to the client and returns from the method; on receiving the messages, the client opens a new connection to the server. If the servlet does not find any messages for the given client, it waits until the map is updated with messages for that client.
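The map-plus-wait scheme described above can be sketched in Python with one condition variable per client (a sketch of the pattern, not servlet code; all names are illustrative):

```python
import threading
from collections import defaultdict

pending = defaultdict(list)                     # client id -> queued messages
conditions = defaultdict(threading.Condition)   # client id -> its condition

def send(client_id, message):
    """Server side: queue a message and wake that client's waiting poll."""
    cond = conditions[client_id]
    with cond:
        pending[client_id].append(message)
        cond.notify_all()

def poll(client_id, timeout=30):
    """Long-poll handler: return queued messages, waiting if none are there yet."""
    cond = conditions[client_id]
    with cond:
        if not pending[client_id]:
            cond.wait(timeout)
        messages = pending[client_id][:]
        pending[client_id].clear()
        return messages
```

A per-client condition means a message for one client never wakes the handlers of the others; the timeout bounds how long an idle connection is held open before the client reconnects.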
This is a late reply, I'm aware, but I believe that Google have an answer for this requirement: the Channel API.