Salesforce Notifications - Reliable Integration

I need to develop a system that listens for changes to Salesforce objects and transfers them to my end.
Initially I considered the Salesforce Streaming API, which allows exactly that: create a PushTopic that subscribes to object notifications, then have a set of clients read them using long polling.
However, this approach doesn't guarantee durability and reliable delivery of notifications, which I need.
What architecture would allow me to implement the same functionality reliably?
One approach I have in mind is to create a Force.com application that uses Salesforce triggers to subscribe to changes and then sends them over HTTPS to the cloud or to my data server. Is this a valid option, or are there better ones?
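To make the trigger-and-HTTPS option concrete, below is a minimal sketch of the receiving end on "my data server": an endpoint that accepts JSON notifications POSTed by an Apex trigger callout. The endpoint path, payload shape, and port are assumptions for illustration; the key point is to persist the event before returning 200, since the 200 is what tells the sender the notification was delivered.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # stand-in for durable storage; a real system would persist


class NotificationHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        try:
            event = json.loads(self.rfile.read(length))
        except ValueError:
            self.send_response(400)  # reject malformed payloads
            self.end_headers()
            return
        received.append(event)       # store before acknowledging
        self.send_response(200)      # sender treats 200 as "delivered"
        self.end_headers()

    def log_message(self, *args):    # silence per-request stderr logging
        pass


def serve(port=0):
    """Start the listener; port 0 picks a free port. Caller runs
    server.serve_forever() (typically on a background thread)."""
    return HTTPServer(("127.0.0.1", port), NotificationHandler)
```

For reliability you would put a durable queue behind `received` and have the Apex side retry until it sees the 200.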

I found two very good questions on salesforce.stackexchange.com covering this very topic in detail:
https://salesforce.stackexchange.com/questions/16587/integrating-a-real-time-notification-application-with-salesforce
https://salesforce.stackexchange.com/questions/20600/best-approach-for-a-package-to-respond-to-dml-events-dynamically-without-object

Related

Sharing data to client whenever API has results available?

I have the following scenario; can anyone suggest the best approach?
Front End —> Rest API —> SOAP API (Legacy applications)
The legacy applications behave unpredictably: sometimes they're very slow, sometimes fast.
The following is what needs to be achieved:
- As and when data is available to Rest API, the results should be made available to the client
- Whatever info is available, show the intermediate results.
Can anyone share insights on how to design this system?
You have several options:
- Polling from the UI - requires some changes to the API: the initial call returns a URL where results will be available, and the UI checks it periodically.
- WebSockets - also requires changing the API.
- Server-sent events - essentially keeping the HTTP connection open and pushing new results as they become available; this sounds closest to what you want.
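The server-sent events option boils down to framing each intermediate result in the `text/event-stream` format and writing it to the still-open response. Here is a sketch of just the framing; the HTTP plumbing depends on your framework, and the function names below are illustrative, not from any particular library.

```python
import json


def sse_event(data, event=None, event_id=None):
    """Frame one payload according to the text/event-stream format."""
    lines = []
    if event_id is not None:
        lines.append(f"id: {event_id}")
    if event is not None:
        lines.append(f"event: {event}")
    lines.append(f"data: {json.dumps(data)}")
    return "\n".join(lines) + "\n\n"   # a blank line terminates the event


def stream_results(results, write):
    """Push each partial result to the client as soon as it arrives,
    then signal completion. `write` sends one chunk down the response."""
    for i, partial in enumerate(results):
        write(sse_event(partial, event="partial", event_id=i))
    write(sse_event({"done": True}, event="complete"))
```

The `id:` field lets a reconnecting client resume via the `Last-Event-ID` header, which matters here because the legacy backend is slow and connections may drop mid-stream.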
You want some sort of event-based API that the API consumers can subscribe to.
Event-driven architectures come in many forms - from event notification ('hey, I have new data, come and get it') to full message/payload delivery, and from full-on publish/subscribe solutions that let consumers subscribe to one or more "topics", with event back-up and replay functionality, down to relatively basic ones.
If you don't want a full-on eventing platform, you could look at WebHooks.
A great way to get started is to familiarize yourself with some event-based architecture patterns. Chris Richardson's website has a lot of great information on such architectures and is well worth a look.
In terms of defining the event API: if you're familiar with OpenAPI, AsyncAPI is the async equivalent.
In terms of solutions, there are a few well-known platforms, including open-source ones. The big cloud providers (Azure, GCP, and AWS) also have async / event-based services you can use.
For more background there's the Wikipedia page on event-driven architecture (which I have not read, so I can't speak to its quality, but it does look detailed).
Update: Webhooks
Webhooks are a bit like an iceberg: there's more to them than might appear at first glance. A full-on eventing solution will have a very steep learning curve but will solve problems that you'd otherwise have to address separately (writing your own code, etc.). Two big areas to think about:
Consumer management. How will you onboard new consumers? Is it a small handful of internal systems / URLs that you can manage through some basic config, manually? Or is it external facing for public third parties? If it's the latter, will you need to provide auto-provisioning through a secure developer portal or get them to email/submit details for manual set-up at your end?
Error handling & missed events. Let's say you have an event, you call the subscribing webhook - but there's no response (or an error). What do you do? Do you retry? If so, how often, for how long? Once the consumer is back up what do you do - did you save the old events to replay? How many events? How do you track who has received what?
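As a sketch of the error-handling questions above, here is one way to structure delivery with bounded retries, exponential backoff, and a dead-letter store for events that never get through. The function names and the retry budget are assumptions for illustration, not a standard:

```python
import time


def deliver(event, post, retries=4, base_delay=1.0, dead_letter=None,
            sleep=time.sleep):
    """Try to POST an event to a subscribing webhook, backing off on failure.

    `post` returns True on a 2xx response. Events that exhaust the retry
    budget go to `dead_letter` so they can be replayed once the consumer
    recovers (this is the "did you save the old events?" question)."""
    delay = base_delay
    for attempt in range(retries):
        if post(event):
            return True
        sleep(delay)        # back off: 1s, 2s, 4s, ... between attempts
        delay *= 2
    if dead_letter is not None:
        dead_letter.append(event)
    return False
```

Tracking who has received what then becomes a matter of recording, per subscriber, the last event successfully delivered plus the dead-letter backlog.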
Polling
@Arnon is right to mention polling as an approach, but I'd only do it if you have no other choice, or if you have a very small number of internal systems doing the polling - i.e., it incurs low load and you control both "ends" of the polling; in such a scenario it's a valid approach.
But if it's for an external API, you'll need to implement throttling to protect your systems, as you'll have limited control over who's calling you and how often. Caching is another obvious topic to explore in a polling approach.
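For the throttling point, a common building block is a token bucket per caller: it allows short bursts but caps the sustained polling rate. A minimal sketch (the rate numbers are illustrative):

```python
import time


class TokenBucket:
    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.clock = clock
        self.last = clock()

    def allow(self):
        """Return True if this request is within the rate limit."""
        now = self.clock()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # caller would get a 429 Too Many Requests
```

You'd keep one bucket per API key or client IP and check `allow()` before doing any real work on the request.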

Subscribing to changes in AWS DynamoDB

I have made an app in React Native that posts some data to an AppSync GraphQL endpoint.
Using a Raspberry Pi, I would like to subscribe to changes made to the underlying DynamoDB table.
I have access to the AWS ecosystem and I'm comfortable in JS and Python.
My question is: what is the best way to subscribe to DynamoDB changes from the Raspberry Pi?
As already commented, a good start would be DynamoDB Streams - I'd suggest binding a Lambda function to the stream. In theory there are other ways to process the update stream (Kinesis, ...), but to start, a Lambda may be a good choice for your case. With the Lambda you can store the messages in a service where your device can pick them up.
The second part is how you get the updates to your Raspberry Pi. I'd suggest using SQS (you can use long polling) or an IoT topic (with MQTT).
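A sketch of the Lambda in the middle of that pipeline: it receives DynamoDB stream records and turns each into a flat message to drop on the queue the Pi long-polls. The `sqs_send` parameter stands in for the boto3 `send_message` call (the wiring is assumed), and the attribute flattening below handles only simple scalar types; real records may need the full DynamoDB deserializer.

```python
import json


def _plain(image):
    """Collapse DynamoDB's typed attribute format ({'S': 'x'}, {'N': '3'})
    into plain Python values, for the simple scalar types only."""
    out = {}
    for key, typed in image.items():
        (t, v), = typed.items()
        out[key] = float(v) if t == "N" else v
    return out


def handler(event, sqs_send):
    """Entry point for a Lambda triggered by a DynamoDB stream."""
    for record in event["Records"]:
        message = {
            "action": record["eventName"],   # INSERT / MODIFY / REMOVE
            "item": _plain(record["dynamodb"].get("NewImage", {})),
        }
        sqs_send(json.dumps(message))
```

On the Pi side you'd then loop on `sqs.receive_message(..., WaitTimeSeconds=20)` (long polling) and delete each message after processing it.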

Same ZMQ_IDENTITY for multiple Subscribers

I am building a zeromq PUB-SUB pattern with lots of subscribers to one publisher.
I want to build durable subscribers, so I found out about the ZMQ_IDENTITY socket option.
What should I take into account when choosing the value for the Identity?
And can I take the same for all subscribers?
Since they are all the same type but on different machines, that shouldn't be a problem, right?
Furthermore, are UUIDs only generated via the inproc service, as explained in the zguide example?
Don't use the IDENTITY option. Durable subscribers were removed from ZeroMQ/3.x so your application would be stuck on the 2.x version.
The reason durable subscribers are so bad that we removed them is that they consume arbitrary amounts of memory on the publisher (the messages have to stay somewhere), which leads all too easily to out-of-memory - the fastest way to kill your pub-sub system for all subscribers.
If you want a durable subscriber model, you can construct it quite easily on top of ZeroMQ, and there's a full worked example in the Guide (the Clone pattern).
I'm not sure if anyone's built a durable pubsub broker over ZeroMQ, it would be a fun exercise and not so difficult.
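To give a feel for "constructing it on top of ZeroMQ", here is a sketch of just the bookkeeping such a broker needs, independent of the wire transport (the ZeroMQ sockets and all networking are deliberately left out, and the class name is made up): a bounded, sequence-numbered log plus per-subscriber cursors, so a reconnecting subscriber can replay what it missed without the publisher's memory growing without bound.

```python
from collections import deque


class DurableLog:
    def __init__(self, max_messages=1000):
        # Bounded on purpose: this avoids the unbounded publisher-side
        # memory growth that got durable subscribers removed from ZeroMQ.
        self.log = deque(maxlen=max_messages)
        self.seq = 0

    def publish(self, payload):
        self.seq += 1
        self.log.append((self.seq, payload))
        return self.seq

    def replay_from(self, last_seen):
        """Everything a reconnecting subscriber has not yet acknowledged.
        Messages older than the buffer are simply gone - an explicit
        trade-off rather than an out-of-memory crash."""
        return [(s, p) for s, p in self.log if s > last_seen]
```

Each subscriber tracks its own `last_seen` sequence number and sends it when reconnecting; the broker answers with `replay_from(last_seen)` before resuming live delivery.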

Real-time synchronization of database data across all the clients

What's the best strategy to keep all the clients of a database server synchronized?
The scenario involves a database server and a dynamic number of clients that connect to it, viewing and modifying the data.
I need real-time synchronization of the data across all the clients - if data is added, deleted, or updated, I want all the clients to see the changes in real-time without putting too much strain on the database engine by continuous polling for changes in tables with a couple of million rows.
Now I am using a Firebird database server, but I'm willing to adopt the best technology for the job, so I want to know if there is any kind of already existing framework for this kind of scenario, what database engine does it use and what does it involve?
Firebird has a feature called EVENT that you may be able to use to notify clients of changes to the database. The idea is that when data in a table is changed, a trigger posts an event. Firebird takes care of notifying all clients who have registered an interest in the event by name. Once notified, each client is responsible for refreshing its own data by querying the database.
The client can't get info from the event about the new or old values. This is by design, because there's no way to resolve this with transaction isolation. Nor can your client register for events using wildcards. So you have to design your server-to-client notification pretty broadly, and let the client update to see what exactly changed.
See http://www.firebirdsql.org/doc/whitepapers/events_paper.pdf
You don't mention what client platform or language you're using, so I can't advise on the specific API you would use. I suggest you google for instance "firebird event java" or "firebird event php" or similar, based on the language you're using.
Since you say in a comment that you're using WPF, here's a link to a code sample of some .NET application code registering for notification of an event:
http://www.firebirdsql.org/index.php?op=devel&sub=netprovider&id=examples#3
Re your comment: Yes, the Firebird event mechanism is limited in its ability to carry information. This is necessary because any information it might carry could be canceled or rolled back. For instance if a trigger posts an event but then the operation that spawned the trigger violates a constraint, canceling the operation but not the event. So events can only be a kind of "hint" that something of interest may have happened. The other clients need to refresh their data at that time, but they aren't told what to look for. This is at least better than polling.
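The client side of that "hint" model can be summed up in a few lines: since the event carries no data, each notification just triggers a re-query and a diff. A sketch, where `query_fn` stands in for whatever Firebird driver call your platform uses (the class name is made up):

```python
class HintedCache:
    def __init__(self, query_fn):
        self.query = query_fn   # re-runs the query of interest
        self.rows = {}          # last known state, keyed by row id

    def on_event(self, _event_name=None):
        """Called when Firebird delivers the named event. The event only
        says that *something* may have changed, so refresh everything of
        interest and work out the delta locally."""
        fresh = {row["id"]: row for row in self.query()}
        changed = [r for rid, r in fresh.items()
                   if self.rows.get(rid) != r]
        self.rows = fresh
        return changed
```

This also degrades gracefully when an event was a false alarm (the trigger's transaction rolled back): the re-query simply finds nothing changed.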
So you're basically describing a publish/subscribe mechanism -- a message queue. I'm not sure I'd use an RDBMS to implement a message queue. It can be done, but you're basically reinventing the wheel.
Here are a few message queue products that are well-regarded:
Microsoft MSMQ (seems to be part of Windows Professional and Server editions)
RabbitMQ (free open-source)
Apache ActiveMQ (free open-source)
IBM WebSphere MQ (probably overkill in your case)
This means that when one client modifies data in a way that others may need to know about, that client also has to post a message to the message queue. When consumer clients see the message they're interested in, they know to refresh their copy of some data.
SQL Server 2005 and higher support notification-based cache expiry for data sources (query notifications).

Implementing Comet on the database-side

This is more out of curiosity and "for future reference" than anything, but how is Comet implemented on the database-side? I know most implementations use long-lived HTTP requests to "wait" until data is available, but how is this done on the server-side? How does the web server know when new data is available? Does it constantly poll the database?
What DB are you using? If it supports triggers, which many RDBMSs do in some shape or form, then you could have the trigger fire an event that tells the HTTP request handler to send out the appropriate response.
Triggers remove the need to poll... polling is generally not the best idea.
PostgreSQL seems to have pretty good support (even PL/Python).
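To show the "trigger fires an event into application code" idea concretely without a database server, here is a sketch using stdlib sqlite3 as a stand-in (PostgreSQL would use LISTEN/NOTIFY instead): a trigger calls a registered Python function, which is exactly the hook where a Comet server would wake the waiting HTTP responses.

```python
import sqlite3

notifications = []   # stand-in for "push to the open Comet connections"

conn = sqlite3.connect(":memory:")
# Register a Python function the trigger can call from SQL.
conn.create_function("notify_change", 1,
                     lambda rowid: notifications.append(rowid))
conn.executescript("""
    CREATE TABLE messages (id INTEGER PRIMARY KEY, body TEXT);
    CREATE TRIGGER messages_ai AFTER INSERT ON messages
    BEGIN
        SELECT notify_change(NEW.id);  -- fires into application code
    END;
""")
```

After this, every `INSERT INTO messages ...` lands a row id in `notifications` synchronously, with no polling anywhere; the Comet server would react inside `notify_change` instead of appending to a list.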
This is very much application-dependent. The most likely implementation is some sort of messaging system.
Most likely, your server side code will consist of quite a few parts:
a few app servers that handle incoming requests,
a (separate) comet server that deals with all the open connections to clients,
the database, and
some sort of messaging infrastructure
The last one, the messaging infrastructure, is really the key. It provides a way for the app servers to talk to the Comet server: when a request comes in, the app server puts a message into the message queue telling the Comet server to notify the correct client(s).
How messaging is implemented is, again, very much application dependent. A very simple implementation would just use a database table called messages and poll that.
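That very simple implementation amounts to a messages table plus a last-seen cursor, polled on an interval. A sketch (sqlite3 is used here only to keep the example self-contained; any RDBMS works the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE messages "
    "(id INTEGER PRIMARY KEY, recipient TEXT, body TEXT)"
)


def post(recipient, body):
    """App server side: enqueue a message for the Comet server."""
    conn.execute("INSERT INTO messages (recipient, body) VALUES (?, ?)",
                 (recipient, body))


def poll(recipient, last_id):
    """One polling pass: fetch anything newer than the last id we saw,
    and return the advanced cursor along with the message bodies."""
    rows = conn.execute(
        "SELECT id, body FROM messages "
        "WHERE recipient = ? AND id > ? ORDER BY id",
        (recipient, last_id),
    ).fetchall()
    new_last = rows[-1][0] if rows else last_id
    return [body for _, body in rows], new_last
```

The monotonically increasing `id` doubles as the cursor, so the poller never re-reads or skips messages; the obvious downside, as noted above, is the polling latency and load.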
But depending on the stack you plan to use, there should be more sophisticated tools available.
In Rails I'm using Juggernaut, which simply listens on some network port. Whenever there is data to send, the Rails application server opens a connection to this Juggernaut push server and tells it what to send to the clients.