How to push to multiple endpoints in GCP pub/sub? - google-cloud-pubsub

I have multiple microservices that would need to be notified of certain events. All these microservices are running on Cloud Run. Given that Cloud Run can scale down to 0 instances, I’ve assumed that the Push model is better for our situation.
The problem arises with needing to get the notification to multiple endpoints. For example, we have an OrderService, an InventoryService, and a PaymentService. The latter two both need to listen for the “OrderPlaced” event. The question is, how can we push this message to both services without having to call them explicitly (e.g. with Task Queues) or creating dependencies within the OrderService? Is there a “Google Cloud” way of solving this issue?
I’ve thought about creating “pull” listeners, but the problem is that Cloud Run instances can scale to 0, which means they would effectively stop receiving events, right?

When a message is published to a topic, each subscription on that topic is made aware of it, and a push subscription delivers a copy of the message to the application registered to process it. If we have multiple applications that wish to be notified when a message is published, we can create multiple subscriptions to the same topic. Each subscription will independently cause a copy of the published message to be pushed to its own registered endpoint, in parallel.
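A minimal sketch of that fan-out, assuming the google-cloud-pubsub Python client and hypothetical resource names and Cloud Run URLs:

```python
from google.cloud import pubsub_v1

# Hypothetical project and resource names, for illustration only.
project_id = "my-project"
publisher = pubsub_v1.PublisherClient()
subscriber = pubsub_v1.SubscriberClient()

topic_path = publisher.topic_path(project_id, "order-placed")
publisher.create_topic(request={"name": topic_path})

# One push subscription per consuming service; each independently receives
# its own copy of every "OrderPlaced" message.
for sub_id, endpoint in [
    ("inventory-service-sub", "https://inventory-service-xyz.a.run.app/pubsub"),
    ("payment-service-sub", "https://payment-service-xyz.a.run.app/pubsub"),
]:
    subscriber.create_subscription(
        request={
            "name": subscriber.subscription_path(project_id, sub_id),
            "topic": topic_path,
            "push_config": pubsub_v1.types.PushConfig(push_endpoint=endpoint),
        }
    )
```

The OrderService only publishes to the topic; adding a third consumer later is just another subscription, with no change to the publisher.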

Related

Pubsub push to pull subscriptions

I am moving from push to pull subscriptions. Given that I have several instances of my service running, both push and pull will be in play during the deployment rollout until all instances are updated. I do not want to lose message events or have events both pushed and pulled. Would it be best practice to simply have separate versions of both topics and subscriptions for pull, and then remove the old push ones in a second deployment after the topics are drained? Or is there a better way to do this?
In transitioning from push to pull, you should not lose any messages; Cloud Pub/Sub handles this transition. However, there would be no way to guarantee that events are not received by both a push subscriber and a pull subscriber during the transition if they are running simultaneously, because Cloud Pub/Sub only has at-least-once delivery guarantees and the transition from push to pull is an eventually consistent change across the system.
If that is a strict requirement, then there are a couple of options:
Use a separate topic and subscription and publish messages to only one of the topics. This does mean you need to transition your publishers to the new topic.
Change the subscription from a push subscription to a pull subscription (by removing the push endpoint from the subscription configuration) and wait until the push subscriber stops receiving messages. This should probably take a few minutes. Once that has happened, it means the transition from push to pull has completed. After that, you could bring up your pull subscribers. This does mean a brief period of downtime for your subscribers during the transition.
The choice comes down to either updating your publishers to send messages to a different topic or accepting a transient period of downtime for message processing in the subscribers.
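For the second option, removing the push endpoint amounts to replacing the subscription's push config with an empty one; a minimal sketch, assuming the google-cloud-pubsub Python client and a hypothetical subscription name:

```python
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
# Hypothetical project and subscription names, for illustration only.
subscription_path = subscriber.subscription_path("my-project", "orders-sub")

# An empty PushConfig clears the push endpoint, turning the subscription into
# a pull subscription; messages accumulate on it until pull subscribers start.
subscriber.modify_push_config(
    request={
        "subscription": subscription_path,
        "push_config": pubsub_v1.types.PushConfig(),
    }
)
```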

Salesforce platform event duplicate events with ComeTd Client

I have built a Salesforce platform event client using CometD for Java; it is similar to the EMP-Connector example provided by forcedotcom.
I installed this client on OpenShift and my app runs with 2 pods. The problem I am facing is that since there are two pods, each running the same Docker image, both receive the same copy of every event. That means duplicate events.
Per my understanding, Salesforce platform events should behave like a Kafka subscriber.
I am unable to find a solution for how to avoid getting the same copy of events. Any suggestions here would be a great help.
Note: As of now I have a client-side solution that drops duplicate copies of events, which is not an optimal solution.
I have to run my app with at least 2 pods; that's a limit I have on my cloud.
This is expected / by design. In CometD, when a message is published on a broadcast channel all subscribers listening on this channel will receive a copy of this message. The broadcast channel behaves like a messaging topic where one sender wants to send the same info to multiple recipients. There are other types of channels in CometD with different semantics. The broadcast channel and one-to-many message semantics is what you get with platform events available via CometD in Salesforce.
In your case it sounds like you have multiple subscribers, thus what you're seeing is expected. You can deduplicate the message stream on the client side as you have done or you can change your architecture so that you have a single subscription.
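If deduplication is the route taken, note that the check has to run against state both pods can see (a shared database or cache); an in-memory set in one pod cannot know what the other pod already processed. A rough sketch, keyed on the event's replay ID and shown in Python for brevity, where `shared_store` and `process` are hypothetical stand-ins:

```python
def handle_event(event, shared_store):
    """Process a platform event only if no other pod has claimed it yet.

    `shared_store` stands in for something both pods can reach, e.g. a cache
    or database table offering an atomic "set if absent" operation.
    """
    replay_id = event["event"]["replayId"]  # replay ID carried by platform events
    # Atomically claim the replay ID; only the first pod to claim it proceeds.
    if not shared_store.set_if_absent(f"seen:{replay_id}", True):
        return  # the other pod (or a redelivery) already handled this copy
    process(event)  # hypothetical business logic
```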

Programatically listing and sending requests to dynamic App Engine instances

I want to send a particular HTTP request (or otherwise communicate a message) to every (dynamic/autoscaled) instance which is currently running for a particular App Engine application.
My goal is to trigger each instance to discard some locally cached data (because I have just modified the underlying data and want them to reload it).
One possible solution is to store a value in Memcache, and have instances check this each time they handle a request to see if they should flush their cache. But this adds latency to every request.
Another possible solution would be to somehow stop all running instances. No fixed overhead, but some impact while instances are restarted.
An even less desirable solution would be to redeploy the application code in order to cause all instances to be stopped. This now adds additional delay on my end as a deployment takes some time.
You could use the management API to list instances for a given version, but I'd suggest that you'd probably want to use something like the Pub/Sub API to create a subscription on each of your App Engine instances. Since each instance has its own subscription, any messages sent to the monitored topic will be received by all instances.
You can create the subscription at startup (the /_ah/start endpoint may be useful), and then delete it at shutdown (using the /_ah/stop endpoint).
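A rough sketch of that per-instance-subscription idea, assuming a Python runtime with Flask, the google-cloud-pubsub client, and a hypothetical cache-flush topic:

```python
import os
import uuid

from flask import Flask
from google.cloud import pubsub_v1

app = Flask(__name__)
subscriber = pubsub_v1.SubscriberClient()

PROJECT = os.environ.get("GOOGLE_CLOUD_PROJECT", "my-project")  # assumed env var
TOPIC = f"projects/{PROJECT}/topics/cache-flush"                # hypothetical topic
# A uniquely named subscription per instance, so every instance gets its own copy.
SUBSCRIPTION = subscriber.subscription_path(
    PROJECT, f"cache-flush-{os.environ.get('GAE_INSTANCE', uuid.uuid4().hex)}"
)

@app.route("/_ah/start")
def start():
    subscriber.create_subscription(request={"name": SUBSCRIPTION, "topic": TOPIC})
    # A background streaming pull on SUBSCRIPTION would then clear the local
    # cache whenever a flush message arrives (omitted here for brevity).
    return "ok"

@app.route("/_ah/stop")
def stop():
    subscriber.delete_subscription(request={"subscription": SUBSCRIPTION})
    return "ok"
```

Subscriptions orphaned by instances that die without hitting /_ah/stop can be cleaned up with a subscription expiration policy or a periodic sweep.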

Queue publish calls with PubNub when offline

I'm dabbling with using PubNub for various parts of my app. I'm using their AngularJS library for this.
Right now, I'm just testing it for doing "analytics". Basically, I want to track every move a user makes in the app - buttons pressed, states navigated to, etc. So, I track actions and publish them on a channel.
It all works great - when the user is online. However, when offline, I lose all this tracking. I was sort of hoping that PubNub client would automatically queue all the publish requests. It does not seem to do this.
So, I'm thinking I'll have a service to collect all publish requests and put them in a queue if the device is offline. Once the device is back online, I'll publish any queued requests.
Is this the best approach? Does anyone have a better suggestion? Does PubNub already have this ability and I'm just not finding it?
Yes, currently, this is the best way to achieve this.
There are different scenarios for queuing / retrying, for example -- depending on the content of the message (eg expiration/timeliness of the message), and depending on the reason (no internet, channel permissions) you may want to re-queue/retry some and not others, etc.
So if you can implement your own retry logic custom to your use case, that's ideal. We may provide more productized options on this moving forward...
geremy
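For what it's worth, a rough, SDK-agnostic sketch of that queue-and-flush idea (shown in Python; the AngularJS version would follow the same shape), where `publish` and `is_online` are stand-ins for the real PubNub publish call and a connectivity check:

```python
import collections
import time

class OfflinePublishQueue:
    """Buffer publishes while offline and flush them once connectivity returns."""

    def __init__(self, publish, is_online, max_age_seconds=3600):
        self.publish = publish          # stand-in for the real PubNub publish call
        self.is_online = is_online      # stand-in for a connectivity check
        self.max_age = max_age_seconds  # drop analytics events that are too stale
        self.pending = collections.deque()

    def track(self, channel, message):
        if self.is_online():
            self.publish(channel, message)
        else:
            self.pending.append((time.time(), channel, message))

    def flush(self):
        """Call when the device comes back online (e.g. from a reconnect handler)."""
        while self.pending and self.is_online():
            queued_at, channel, message = self.pending.popleft()
            if time.time() - queued_at > self.max_age:
                continue  # expired, per the timeliness point above
            self.publish(channel, message)
```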

Should I need a database to ensure immediate consistency with a message-oriented middleware?

App A wants to send domain events to App B through a middleware like RabbitMQ.
Let's take the example of one domain event called UserHasBeenRegistered, triggered by the creation of the User entity.
By sending this event, A would inform B that the latter should send a welcome email.
I have in mind two workflows:
First:
- App A registers the user and the event is generated.
- App A sends the event directly to B through a queue provided by RabbitMQ
Second:
- App A registers the user and the event is generated.
- App A saves the event in some kind of event store as a database table (if relational) in the same local transaction used for persisting in database this new user.
- An asynchronous scheduler queries the event store, find this new user registration and sends the message through the RabbitMQ's queue.
Do you see the difference?
Yes, one is longer than the other... but the second is far safer, albeit less performant.
Indeed, what if, in the first case, the registration is rolled back because an exception is thrown just after the event was published? The mail would be sent even though the user was never persisted.
This could be fixed by implementing a global XA transaction (two-phases commit), but it is well known that some middleware don't support it.
Therefore, is the second workflow mostly used in critical applications?
What are its drawbacks?
I plan to implement one of these two solutions for my project.
I had the same task and it was done as a mix of your two workflows:
App A registers the user and the event is generated.
App A sends the event, with its TTL set to a non-zero value, directly to B through a queue provided by RabbitMQ.
App B receives the event, sends the welcome message to the user, and stores a flag indicating the welcome message was sent.
A background script checks whether there are newly registered users from the last TTL + 1 time interval who did not receive messages.
You can remove the background script and flag storage and stick with the first workflow from your question. The cases where messages are lost (or any other failures occur) are extremely rare (with welcome message sending it might be 1 failure per 1 billion users), and unnecessary application complexity may give you more errors.
The second workflow also looks stable, but why are you using RabbitMQ then?
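For reference, the asker's second workflow is essentially the transactional outbox pattern. A bare-bones sketch, assuming a relational outbox table and a hypothetical `rabbitmq_publish` helper (sqlite3 stands in for the application database):

```python
import json
import sqlite3  # stand-in for the application's relational database

def register_user(conn, email):
    """Persist the user and the UserHasBeenRegistered event in one transaction."""
    with conn:  # both inserts commit together or roll back together
        cur = conn.execute("INSERT INTO users (email) VALUES (?)", (email,))
        conn.execute(
            "INSERT INTO outbox (type, payload, published) VALUES (?, ?, 0)",
            ("UserHasBeenRegistered", json.dumps({"user_id": cur.lastrowid})),
        )

def relay_outbox(conn, rabbitmq_publish):
    """Scheduler job: forward unpublished events to RabbitMQ, then mark them."""
    rows = conn.execute(
        "SELECT id, type, payload FROM outbox WHERE published = 0 ORDER BY id"
    ).fetchall()
    for event_id, event_type, payload in rows:
        rabbitmq_publish(event_type, payload)  # hypothetical publish helper
        with conn:
            conn.execute("UPDATE outbox SET published = 1 WHERE id = ?", (event_id,))
```

If the relay crashes between publishing and marking, the event is published again on the next run, so consumers still need to tolerate at-least-once delivery.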
