WebSphere MQ - Topic subscription with multiple consumers - C

I have a micro-service which subscribes to a topic in WebSphere MQ. The subscription is managed and durable. I explicitly set the subscription name so that it can be used to connect back to the queue after recovering from any micro-service failure. The subscription works as expected.
But I might have to scale up the micro-service and run multiple instances, in which case I will end up with multiple consumers on the same topic. Here it fails with error 2429: MQRC_SUBSCRIPTION_IN_USE. I am not able to run more than one consumer against the topic subscription. Note: a message should be delivered to only one of the consumers.
Any thoughts?
IBM WebSphere MQ version: 7.5
I use the C client API to connect to MQ.

When using a subscriber, what you describe is only supported via the IBM MQ classes for JMS API. In v7.0 and later you can use cloned subscriptions (an IBM extension to the JMS spec); in MQ v8.0 and later you can alternatively use shared subscriptions, which are part of the JMS 2.0 spec. With either option, multiple subscribers can be connected to the same subscription and each published message will be received by only one of them.
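As an illustration, here is a minimal sketch of a shared durable subscription using the IBM MQ classes for JMS (JMS 2.0, so MQ v8.0+). The host, port, channel, queue manager, topic string and subscription name are all placeholders; run several instances with the same subscription name and each message will be delivered to only one of them.

import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.Topic;
import com.ibm.msg.client.jms.JmsConnectionFactory;
import com.ibm.msg.client.jms.JmsFactoryFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class SharedSubscriber {
    public static void main(String[] args) throws Exception {
        JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
        JmsConnectionFactory cf = ff.createConnectionFactory();
        cf.setStringProperty(WMQConstants.WMQ_HOST_NAME, "mqhost");          // placeholder
        cf.setIntProperty(WMQConstants.WMQ_PORT, 1414);                      // placeholder
        cf.setStringProperty(WMQConstants.WMQ_CHANNEL, "DEV.APP.SVRCONN");   // placeholder
        cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, "QM1");         // placeholder
        cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);

        try (JMSContext ctx = cf.createContext()) {
            Topic topic = ctx.createTopic("topic://SOME/TOPIC");             // placeholder
            // All instances use the SAME subscription name; each published
            // message is then delivered to only ONE of the consumers.
            JMSConsumer consumer = ctx.createSharedDurableConsumer(topic, "XYZ");
            while (true) {
                System.out.println("Got: " + consumer.receiveBody(String.class));
            }
        }
    }
}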
UPDATE 20170710
According to APAR IV96489 (XMS.NET DOESN'T ALLOW SHARED SUBSCRIPTIONS EVEN WHEN CLONESUP PROPERTY IS ENABLED), XMS.NET is also supposed to support cloned subscriptions, but due to a defect this will not work until 8.0.0.8 or 9.0.0.2, or unless you request the IFIX for the APAR above.
You can accomplish something similar with other APIs, such as C, by converting your micro-service to get from a queue instead of subscribing to a topic.
To get the published messages to the queue you have two options:
Set up an administrative subscription on the queue manager. You can do this a few different ways; the example below uses an MQSC command.
DEFINE SUB('XYZ') TOPICSTR('SOME/TOPIC') DEST(SOME.QUEUE)
Create a utility app that can open a queue and create a durable subscription with that provided queue. The only purpose of this app would be to subscribe and unsubscribe a provided queue; it would not be used to consume any of the published messages.
With either method, each published message can be read (GET) from the queue by only one process or thread.

Related

How do FIWARE commands work? Can we, as users, use the IoT Agent instead of the Orion Context Broker?

We are working on the theory that, as users, we must issue a command to the Context Broker to change the state of a device: Image 1
In our case, this command already works if we issue it from the IoT Agent; however, if we execute it against the Context Broker through a PATCH, it does not reach the IoT Agent.
Do you know why this could be happening?
Our Context Broker request is the following: Image 2
And finally, the request we make from the IoT Agent, which is the one that works, is this: Image 3
Another doubt that arises is: if the IoT Agent updates all the information in the Context Broker, why not execute the request from there instead of from the Context Broker?
Your request to the Context Broker seems to be OK. Sometimes the lack of ?type in the request causes problems (see for instance this post), but that doesn't seem to be your case.
I'd suggest checking the registrations at Orion. Registration is the mechanism on which the request forwarding from Orion to the IOTAgent is based (more info in the Orion documentation). The IOTAgent should create and manage them, but something could be failing. You can get the existing registrations in Orion with the GET /v2/registrations operation.
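For example, a quick sketch of that check with Java's built-in HTTP client (the Orion host/port and the Fiware-Service / Fiware-ServicePath headers are placeholders for your deployment):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ListRegistrations {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("http://orion.example.com:1026/v2/registrations")) // placeholder host
                .header("Fiware-Service", "myservice")   // placeholder tenant
                .header("Fiware-ServicePath", "/")       // placeholder path
                .GET()
                .build();
        HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.body()); // JSON array; an empty [] means no registration exists
    }
}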
With regard to:
Another doubt that arises is: if the IoT Agent updates all the information in the Context Broker, why not execute the request from there instead of from the Context Broker?
The FIWARE data management model is context-centric. Thus, the Context Broker is the central piece of the architecture, mediating between context producers and context consumers. Commands are a kind of "context production", so it makes sense that the Context Broker deals with commands. Note that the client issuing the command may not even be able to access the IOTAgent directly (IOTAgents tend to be "close" to the physical devices they manage and are not typically open to direct client requests).

How to subscribe to Salesforce connected app webhooks?

I want to implement a connected OAuth app in Salesforce which should trigger push events when certain entities change, for example when an opportunity is closed.
Zapier implemented something similar
https://zapier.com/apps/salesforce/integrations/webhook
I could not find what I need, which is a simple way to subscribe to entity changes using the OAuth client's token and passing a webhook endpoint. I have read about Apex callouts, the Streaming API, and outbound messages.
Yeah, we solved this exact problem at Fusebit and I can help you understand the process as well.
Typically speaking, here's what you need to do:
Create triggers on the Salesforce objects you want to get updates for
Upload an Apex class that will send an outgoing message to a predetermined URL
Enable a Remote Site Setting for the domain you want to send the message to
Add secret verification (or another auth method) to prevent spamming of your external URL
If you're leveraging JavaScript, you can use the jsforce SDK and the Salesforce Tooling API to push the code into the Salesforce instance AFTER the auth flow has occurred AND on Salesforce instances that have API access enabled (typically Enterprise edition and above, or Professional with API access enabled).
This will be helpful for you to look through: https://jamesward.com/2014/06/30/create-webhooks-on-salesforce-com/
FYI - Zapier's webhook implementation actually polls every 15 minutes rather than receiving real-time events.
In which programming language?
For consuming outbound messages you just need to be able to accept an XML message and send back an "Ack" message to acknowledge receipt; otherwise SF will keep trying to resend it for 24 hours.
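As a minimal sketch (assuming a Java servlet container; the class name is hypothetical), a listener just needs to read the SOAP notification and reply with the Ack envelope Salesforce expects:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class OutboundMessageServlet extends HttpServlet {
    // The Ack envelope Salesforce expects from an outbound-message listener.
    private static final String ACK =
        "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">"
      + "<soapenv:Body>"
      + "<notificationsResponse xmlns=\"http://soap.sforce.com/2005/09/outbound\">"
      + "<Ack>true</Ack>"
      + "</notificationsResponse>"
      + "</soapenv:Body></soapenv:Envelope>";

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // req.getInputStream() holds the XML notification; parse/process it here.
        resp.setContentType("text/xml;charset=UTF-8");
        resp.getWriter().write(ACK); // without this Ack, SF keeps retrying for 24h
    }
}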
For consuming platform events / the Streaming API / Change Data Capture (CDC) you'll need to raise the event in SF (a platform event you can raise from code, Flow, or Process Builder; CDC happens automatically, you just tell it which objects it should track).
And then in the client app you'd need to log in to SF (SOAP or REST API) and subscribe to the channel (any library that supports CometD should be fine). Have you seen "EMP Connector", mentioned for example in https://trailhead.salesforce.com/en/content/learn/modules/change-data-capture/subscribe-to-events?trail_id=architect-solutions-with-the-right-api ?
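In case it helps, here is a rough Java sketch of such a CometD subscription to a platform event channel (the instance URL, API version, access token and event name are placeholders; EMP Connector wraps this same pattern and adds replay support):

import java.util.HashMap;
import org.cometd.client.BayeuxClient;
import org.cometd.client.transport.LongPollingTransport;
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.Request;

public class PlatformEventListener {
    public static void main(String[] args) throws Exception {
        HttpClient httpClient = new HttpClient();
        httpClient.start();

        // Attach the OAuth access token to every CometD request.
        LongPollingTransport transport =
            new LongPollingTransport(new HashMap<>(), httpClient) {
                @Override
                protected void customize(Request request) {
                    request.header("Authorization", "Bearer <access_token>"); // placeholder
                }
            };

        BayeuxClient client = new BayeuxClient(
            "https://myinstance.salesforce.com/cometd/45.0", transport);      // placeholders
        client.handshake();
        client.waitFor(10_000, BayeuxClient.State.CONNECTED);

        client.getChannel("/event/My_Event__e").subscribe((channel, message) ->
            System.out.println("Received: " + message.getDataAsMap()));       // placeholder event

        Thread.currentThread().join(); // keep listening
    }
}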
Picking the right messaging approach is an art; there's a free course that can help: https://trailhead.salesforce.com/en/content/learn/trails/architect-solutions-with-the-right-api
And a pretty awesome PDF if you want to study for the certification: https://resources.docs.salesforce.com/sfdc/pdf/integration_patterns_and_practices.pdf

Salesforce platform event duplicate events with CometD client

I have built a Salesforce platform event client using CometD Java; it's similar to the EMP-Connector example provided by forcedotcom.
I installed this client on the OpenShift cloud and my app is running with 2 pods. The problem I am facing is that, since there are two pods each running the same Docker image, both are getting the same copy of each event. That means duplicate events.
Per my understanding, Salesforce platform events should behave like a Kafka subscriber.
I am unable to find a solution for how to avoid getting the same copy of events. Any suggestions here would be a great help.
Note: as of now I have a client-side solution which drops duplicate copies of events, which is not an optimal solution.
I have to run my app with at least 2 pods; that's a limit I have on my cloud.
This is expected / by design. In CometD, when a message is published on a broadcast channel, all subscribers listening on that channel will receive a copy of the message. The broadcast channel behaves like a messaging topic where one sender wants to send the same info to multiple recipients. There are other types of channels in CometD with different semantics, but the broadcast channel and its one-to-many message semantics are what you get with platform events available via CometD in Salesforce.
In your case it sounds like you have multiple subscribers, so what you're seeing is expected. You can deduplicate the message stream on the client side, as you have done, or you can change your architecture so that you have a single subscription.
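As a hedged sketch of the first option: every CometD delivery of a platform event carries a replayId (under event.replayId in the message data), so deduplication can be keyed on it. Note that with two pods the "seen" set has to live in shared storage (Redis, a database, etc.) to work across pods; the in-memory version below only illustrates the idea within one process.

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ReplayIdDeduplicator {
    // In a multi-pod deployment this set must be backed by shared storage.
    private final Set<Long> seen = ConcurrentHashMap.newKeySet();

    /** Returns true only for the first delivery of a given replayId. */
    public boolean firstDelivery(long replayId) {
        return seen.add(replayId);
    }
}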

User Destinations with Stomp, Spring WebSockets, an External Broker and an External Consumer

My question centers around this slide from one of Rossen Stoyanchev's webinars.
When using a simpleBroker I can send messages to individual users with the /user/** destination format, which is picked up in UserDestination handling and converted. I can also use it to send to a specific session, or to all sessions of a specific user.
This is also possible when using an external broker like ActiveMQ or RabbitMQ, as long as the sender is also able to use /user/** or its helper annotations (@SendToUser etc.).
But if I am not processing these messages locally and I have another consumer connected to the external message broker (Apache Camel, for example), how do I handle user-specific messages and also reply at a user and session level?
If the other consumer is in the same JVM, you can have the "brokerMessagingTemplate" bean injected and use it to send messages to user-prefixed destinations.
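For example, a minimal sketch of such a sender (the destination and payload are placeholders; brokerMessagingTemplate is the SimpMessagingTemplate bean registered by the @EnableWebSocketMessageBroker setup):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.stereotype.Service;

@Service
public class UserNotifier {
    private final SimpMessagingTemplate template;

    @Autowired
    public UserNotifier(@Qualifier("brokerMessagingTemplate") SimpMessagingTemplate template) {
        this.template = template;
    }

    public void notifyUser(String username) {
        // Resolves to the user's session-specific destination(s), i.e. what a
        // client subscribed to /user/queue/updates will receive.
        template.convertAndSendToUser(username, "/queue/updates", "Hello!"); // placeholders
    }
}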
For 4.2 we plan to support user destinations in a deployment with multiple web application servers connected to an external broker (see https://jira.spring.io/browse/SPR-11620). So if the other consumer is in a different JVM, you could declare the @EnableWebSocketMessageBroker setup in that JVM as well, or you could simply extend AbstractMessageBrokerConfiguration if you don't need the WebSocket client bits.
HTH

Distributed ActiveMQ with Camel

I am in the process of learning ActiveMQ and Camel, with the goal to create a little prototype system that works something like this:
(source: paulstovell.com)
When an order is placed in the Orders system, a message is sent out to any subscribers (a pub/sub system), and they can play their part in processing the order. The Orders, Shipping and Invoicing applications have their own ActiveMQ installations, so that if any of the three systems are offline, the others can continue to function. Something takes care of moving messages between the ActiveMQ installs.
Getting Apache Camel to move messages from one queue to another via routes is quite easy, if they are on the same ActiveMQ instance. So this works for managing the subscription queues.
The next challenge is pushing messages from one ActiveMQ instance to another, and it's the bit where I am not sure what to look at next.
Can Camel route between different ActiveMQ installations? (I can't figure out what the JMS endpoint URI would be if they are on different machines.)
I understand ActiveMQ has store and forward capabilities. Is this what I would use to move messages between Orders and Shipping/Invoicing?
Or is this what Apache ServiceMix is meant to solve?
This is a pretty straightforward asynchronous, event-driven application that is well-suited for ActiveMQ and Camel.
Actually, you do not move messages explicitly from one ActiveMQ instance to another. The way it works is by using what's known as a network of brokers. In your case, you'd have three brokers: ActiveMQ-purple, ActiveMQ-green and ActiveMQ-blue. ActiveMQ-purple creates a uni-directional broker network with ActiveMQ-green and ActiveMQ-blue. This allows ActiveMQ-purple to store and forward messages to ActiveMQ-green and ActiveMQ-blue based on consumer demand.
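To make the topology concrete, here is a hedged sketch of that uni-directional network configured on an embedded broker (normally you'd do the same in activemq.xml with a networkConnector element; hostnames and ports are placeholders):

import org.apache.activemq.broker.BrokerService;

public class PurpleBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("activemq-purple");
        broker.addConnector("tcp://0.0.0.0:61616");
        // Uni-directional, demand-driven store-and-forward bridges.
        broker.addNetworkConnector("static:(tcp://green-host:61616)"); // placeholder host
        broker.addNetworkConnector("static:(tcp://blue-host:61616)");  // placeholder host
        broker.start();
    }
}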
The Orders app accepts orders on the orders queue on ActiveMQ-purple. The Orders app uses Camel to consume and process each message to determine whether it is an invoicing message or a shipping message. Camel routes the messages to either the invoicing queue or the shipping queue on ActiveMQ-purple.
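That routing step might look like the following Camel Java DSL sketch; the "orderType" header is an assumption about how the messages are tagged:

import org.apache.camel.builder.RouteBuilder;

public class OrderRouter extends RouteBuilder {
    @Override
    public void configure() {
        from("activemq:queue:orders")
            .choice()
                .when(header("orderType").isEqualTo("invoice"))  // assumed header
                    .to("activemq:queue:invoicing")
                .when(header("orderType").isEqualTo("shipping"))
                    .to("activemq:queue:shipping")
                .otherwise()
                    .to("activemq:queue:unroutable");
    }
}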
Consumer demand comes from the Invoicing app and the Shipping app. The Invoicing app uses Camel to consume messages from the invoicing queue on ActiveMQ-green. The Shipping app uses Camel to consume messages from the shipping queue on ActiveMQ-blue. Because of the broker network, and because of the consumer demand on the invoicing queue on ActiveMQ-green and the shipping queue on ActiveMQ-blue, messages will be forwarded from ActiveMQ-purple to the appropriate broker and queue. There is no need to explicitly route messages to a specific broker.
I hope this answers your questions. Let me know if you have any more.
Bruce
Hmmmm, I've only dabbled at best, and not for a fair while, but I'll try to offer something.
ActiveMQ can route between different installations and, to my knowledge, just uses standard URIs, so I'm not sure what the problem is here. I would think that using TCP you'd be fine. Using ServiceMix (you mention it later), you'd just specify a connectionFactory and then provide the URI in that. This link shows some examples: http://servicemix.apache.org/servicemix-jms-new-endpoints.html
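For instance, a hedged sketch of pointing Camel's ActiveMQ component at a broker on another machine over TCP (host and port are placeholders):

import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;

public class RemoteBrokerSetup {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        // Register a component per broker; endpoint URIs then look like
        // "activemq-remote:queue:orders".
        context.addComponent("activemq-remote",
            ActiveMQComponent.activeMQComponent("tcp://other-machine:61616")); // placeholder
        context.start();
    }
}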
Camel has support for the Durable Subscriber pattern if that's what you were after (http://camel.apache.org/durable-subscriber.html). This pattern will ensure that if the subscriber is offline when the message is ready, the message will be held until the subscriber is back online. This is also supported by ServiceMix (see the link given above and look for 'subscriptionDurable').
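In Camel's Java DSL a durable topic subscription is just a matter of endpoint options; a small sketch (the clientId, subscription name and queue are placeholders):

import org.apache.camel.builder.RouteBuilder;

public class DurableSubscriberRoute extends RouteBuilder {
    @Override
    public void configure() {
        // The broker holds messages for this named subscription while
        // the consumer is offline.
        from("activemq:topic:orders?clientId=shipping1&durableSubscriptionName=shippingOrders")
            .to("activemq:queue:shipping.inbox");
    }
}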
