I have primary and secondary TIBCO EMS servers and need to send messages to their queues. The secondary stays in standby mode until the primary goes down.
From the Camel code, I need to handle the failover scenario: if the primary EMS goes down, the application should send messages to the secondary instance.
I have been searching for a sample covering this scenario and have only found something for ActiveMQ using
brokerURL=failover:(endpoint1,endpoint2)
Can someone help with how to achieve this for the EMS provider?
Should it be something like this for EMS?
connectionFactory.setServerUrl(endpoint1,endpoint2)
The connection string for an EMS fault-tolerant (HA) pair is of the form:
tcp://host1:7222,tcp://host2:7222
Only one of them is active at any point in time; the client will figure it out automatically (the failover logic is in the JMS(2).jar provided by TIBCO).
Here is a nice tutorial (not by me).
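For the Camel side, here is a minimal sketch of wiring the fault-tolerant URL into a route. It assumes the EMS client jar (tibjms.jar) and camel-jms are on the classpath; the reconnect tuning values and queue name are hypothetical:

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jms.JmsComponent;
import org.apache.camel.impl.DefaultCamelContext;
import com.tibco.tibjms.TibjmsConnectionFactory;

public class EmsFailoverRoute {
    public static void main(String[] args) throws Exception {
        // Fault-tolerant URL: the EMS client tries host1, then host2,
        // and reconnects to whichever server is currently active.
        TibjmsConnectionFactory factory = new TibjmsConnectionFactory(
                "tcp://host1:7222,tcp://host2:7222");
        factory.setReconnAttemptCount(10);    // hypothetical tuning values
        factory.setReconnAttemptDelay(1000);  // ms between reconnect attempts

        CamelContext context = new DefaultCamelContext();
        context.addComponent("ems",
                JmsComponent.jmsComponentAutoAcknowledge(factory));
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // "ORDERS.IN" is a placeholder queue name
                from("direct:send").to("ems:queue:ORDERS.IN");
            }
        });
        context.start();
    }
}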
I have an application using Rebus with SQL Server as the transport layer (yes, I know this is not the ideal transport layer), and I'm trying to find an out-of-the-box tool to do real-time monitoring of the queues. I tried Rebus Snoop, but I found that it is not compatible with SQL Server.
Does anyone know a way to monitor Rebus queues?
Thank you very much.
"Rebus queues" is not really a thing, so you most likely want to turn to whatever broker you are using and use some kind of monitoring tool for that.
E.g. if you're using Azure Service Bus, you can use the Azure Portal's built-in metrics to figure out queue lengths, request rates, etc.
But since it looks like you're using SQL Server as the transport, there doesn't really (to my knowledge) exist any kind of compatible "queue monitoring tool". You'd probably find loads of tools (e.g. from Redgate) that can help with looking at your SQL Server's performance metrics, but they don't know that you happen to be using your SQL Server as a message broker.
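If a rough do-it-yourself check is enough, you can always poll the transport table directly. A minimal JDBC sketch, assuming the transport table is named RebusMessages (the actual name and the connection string depend on how your SQL transport was configured):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RebusQueuePoller {
    public static void main(String[] args) throws Exception {
        // Table name and connection string are assumptions; check the
        // table name passed to your Rebus SQL transport configuration.
        try (Connection db = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=RebusDb;user=monitor;password=secret");
             Statement st = db.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM RebusMessages")) {
            rs.next();
            System.out.println("pending messages: " + rs.getInt(1));
        }
    }
}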
You can of course also turn to Rebus Pro that comes with Fleet Manager, which can help with some of the things. It's a commercial product though, so it might not be the answer you were hoping for. 😉
I have a Mosquitto broker which receives positioning information from remote devices.
I need to store this data somewhere to be processed by other micro-services.
At present, there is a Node.js process which subscribes to the broker, and writes to the Postgres database in batches.
Devices -> Mosquitto -> DB writer -> (source-of-truth) Postgres
(source-of-truth) -> Service A
(source-of-truth) -> Service B
But the problem I see is that now any other service which needs to process this position data, needs to query the Postgres database.
Constraint: This is for on-premise deployments so ideally we want to maintain as little as possible. One VM with a database, and perhaps a link to a customer-maintained database.
An alternative to the database as the source of truth for the sensor data is a Kafka-like event log / event-sourcing approach. Then there would be one subscriber to the broker, and all microservices could read from it, and pick up where they left off if they go down.
Because it is on-premise I want something more lightweight than Kafka, and have found NATS Streaming Server.
Now, the NATS event log can be persisted by configuring it with a data store. It currently supports a simple file store and a SQL store.
If I used the SQL store, it seems like a waste to store raw messages to a database, read them from the database, then store them again, and it's bad for performance too. The SQL store interface also has its own batching implemented. I'm not sure how much I trust the file store as the source of truth either.
So, is this a viable approach?
You can consume messages in batches in NATS Streaming by creating your subscription with MaxInflight and ManualAckMode. The server will not send more than MaxInflight messages without receiving the corresponding acks from the clients.
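As a sketch of what that looks like with the java-nats-streaming client (the cluster id, client id, and subject below are placeholders):

import io.nats.streaming.Message;
import io.nats.streaming.StreamingConnection;
import io.nats.streaming.StreamingConnectionFactory;
import io.nats.streaming.SubscriptionOptions;

public class BatchedSubscriber {
    public static void main(String[] args) throws Exception {
        StreamingConnectionFactory cf =
                new StreamingConnectionFactory("test-cluster", "db-writer");
        StreamingConnection sc = cf.createConnection();

        // With manual acks the server stops at maxInFlight unacked messages,
        // which gives the subscriber natural batch-sized chunks.
        SubscriptionOptions opts = new SubscriptionOptions.Builder()
                .durableName("db-writer")  // resume where we left off after a crash
                .maxInFlight(100)
                .manualAcks()
                .build();

        sc.subscribe("positions", (Message msg) -> {
            // buffer msg.getData(), flush the batch to Postgres, then ack
            try {
                msg.ack();
            } catch (java.io.IOException e) {
                // unacked messages are redelivered after the ack wait expires
            }
        }, opts);
    }
}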
If you need to do a transformation before storing, I understand your process. However, if you just don't trust the FileStore or SQLStore from the NATS Streaming server, why would you be using NATS Streaming in the first place? After all, the stores have been implemented by the same people (including me) that wrote the NATS Streaming server ;-)
We use MQ as the primary route to transfer messages; it is integral to how our system works. At times the message broker fails, and with it all the associated queues. Is there a way, in Camel, to kickstart a failover, and revert back to the master when it's up?
In general, messaging systems do not want to interrupt client -> server communication for any reason. Once a failover connection is re-established, the preference is to stay connected to that server. IBM MQ supports client failover as well, but I do not know of a way to do a rebalance after a failover has occurred.
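For reference, the client-side failover piece in the IBM MQ JMS client looks roughly like this (host names, queue manager, and channel are placeholders); note it only covers reconnecting, not rebalancing back afterwards:

import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class MqReconnectFactory {
    public static MQConnectionFactory create() throws Exception {
        MQConnectionFactory cf = new MQConnectionFactory();
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
        // The client walks this list until it finds a running instance.
        cf.setConnectionNameList("mqhost1(1414),mqhost2(1414)");
        cf.setQueueManager("QM1");
        cf.setChannel("APP.SVRCONN");
        // Automatically reconnect after a failure, for up to 30 minutes.
        cf.setClientReconnectOptions(WMQConstants.WMQ_CLIENT_RECONNECT);
        cf.setClientReconnectTimeout(1800); // seconds
        return cf;
    }
}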
Media Driver's Integrated Console supports rebalancing ActiveMQ/JBoss A-MQ clients after a failover or before a maintenance window: https://mediadriver.com/software/ (see: Client rebalancing video at the bottom)
Disclaimer: I am a co-founder of Media Driver and head up the product development of the Integrated Console.
I am looking for a best practice or example of how I might generate events for all update events on a given SQL Server 2008 R2 database. To be more descriptive, I am working on a POC where I would essentially publish update events to a queue (RabbitMQ in my case) that could then be consumed by various consumers. This would be the first part of implementing a CQRS query-only data model via event sourcing. By placing the events on the queue, anybody could then subscribe to them for replication into any number of query-only data models. This part is clear and fairly well-defined. The problem I am having is determining the best approach for generating the events out of SQL Server. I have been given a few ideas, such as monitoring the transaction log and SSIS. However, I'm not entirely sure if these options are advisable or even feasible.
Does anybody have any experience with this sort of thing, or have any notions on how to go about such an adventure? Any help or guidance would be greatly appreciated.
You cannot monitor the log because, even if you were able to understand it, you have the problem of the log being recycled before you had a chance to read it. Unless the log is somehow marked not to be truncated, it will be reused. For instance, when transactional replication is enabled, the log will be pinned until it is read by the replication agent and only then truncated.
SSIS is a very broad concept, and saying 'use SSIS to detect changes' is akin to saying 'I'll use a programming language to solve my problem'. The detail is in how you would use SSIS. There is no way, with or without SSIS, to reliably detect data changes on an arbitrary schema. Even data models specifically designed to allow for detecting changes have issues, especially at detecting deletes.
However, there are viable alternatives. You can deploy Change Data Capture and delegate to the engine itself the tracking of changes. Consuming these detected changes and publishing them to consumers (via RabbitMQ, if that's your fancy) is something SSIS would be good at. But you have to understand that SSIS does not lend itself well to continuous, real-time tasks. It is designed to run periodically on batches, so your change-notification consumers will be notified in spikes, with long delays (minutes), whenever the SSIS jobs run.
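To make that concrete, here is a minimal sketch of a poller that reads CDC changes over JDBC and publishes them to RabbitMQ. It assumes CDC has already been enabled on a hypothetical dbo.Orders table (via sys.sp_cdc_enable_table), and the checkpoint handling is stubbed out:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.ConnectionFactory;

public class CdcToRabbit {
    public static void main(String[] args) throws Exception {
        ConnectionFactory rmq = new ConnectionFactory();
        rmq.setHost("localhost");
        try (Connection db = DriverManager.getConnection(
                 "jdbc:sqlserver://localhost;databaseName=Shop;user=cdcreader;password=secret");
             com.rabbitmq.client.Connection conn = rmq.newConnection();
             Channel channel = conn.createChannel()) {

            channel.queueDeclare("orders.changes", true, false, false, null);

            byte[] fromLsn = loadCheckpoint(); // last LSN we processed
            PreparedStatement ps = db.prepareStatement(
                "SELECT __$start_lsn, __$operation, OrderId, Status " +
                "FROM cdc.fn_cdc_get_all_changes_dbo_Orders(" +
                "sys.fn_cdc_increment_lsn(?), sys.fn_cdc_get_max_lsn(), N'all')");
            ps.setBytes(1, fromLsn);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // __$operation: 1=delete, 2=insert, 3/4=update (before/after image)
                    String event = rs.getInt("__$operation") + ":" + rs.getLong("OrderId");
                    channel.basicPublish("", "orders.changes", null, event.getBytes());
                    fromLsn = rs.getBytes("__$start_lsn");
                }
            }
            saveCheckpoint(fromLsn); // persist so the next run resumes here
        }
    }
    // Stubs: in a real run the checkpoint must be a valid LSN within the
    // capture range, persisted e.g. in a small checkpoint table.
    static byte[] loadCheckpoint() { return new byte[10]; }
    static void saveCheckpoint(byte[] lsn) { }
}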
For a real-time approach, a better solution is Service Broker. One possibility is to SEND Service Broker messages from triggers, but I would not recommend it. A better design is to have the application itself publish the changes by SEND-ing the message explicitly when it does the data modification. With SQL Server 2012 it is possible to multicast Service Broker messages to other SQL Server consumers (including SQL Server Express). SSB message delivery is fully transactional (no message gets sent if the transaction rolls back) and does not require a two-phase commit with a message-store resource manager. But to broadcast via RabbitMQ you would need to bridge the communication, i.e. RECEIVE the SSB messages and transform them into RabbitMQ notifications.
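The bridge itself can be as simple as a loop that blocks on RECEIVE and republishes each message, here sketched over JDBC (the queue name is hypothetical; the RabbitMQ publish side would look like the CDC sketch above):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class BrokerBridge {
    public static void main(String[] args) throws Exception {
        try (Connection db = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=Shop;user=bridge;password=secret")) {
            // Block up to 5s waiting for the next Service Broker message.
            // RECEIVE removes the message; in production, wrap the RECEIVE
            // and the republish in one transaction so nothing is lost.
            PreparedStatement ps = db.prepareStatement(
                "WAITFOR (RECEIVE TOP (1) " +
                "  CAST(message_body AS NVARCHAR(MAX)) AS body " +
                "  FROM dbo.ChangeNotificationQueue), TIMEOUT 5000");
            while (true) {
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        String body = rs.getString("body");
                        // republish 'body' to RabbitMQ via channel.basicPublish(...)
                    }
                }
            }
        }
    }
}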
I'm learning about MSMQ and am successfully using it to queue email and text messages from a consumer-facing ASP.NET MVC website, to be handled by a separate client application.
In the event of a missing SQL Server database, perhaps while swapping drives or after a broken database deploy, would it make sense to queue non time-critical inserts in a local MSMQ queue to improve uptime?
Theoretically, I can then pause/resume queue processing (persistence) while making database changes. Has anyone tried this or is there a better way?
If you're looking at higher availability by queueing locally, then you should consider Service Broker deployed on SQL Express instances collocated with your IIS/ASP.NET instances. The advantage of using SSB over MSMQ is that you have consistency between your message store and your data store (one consistent backup/restore, one consistent failover unit). It scales much better than MSMQ under load, and it does not require a two-phase-commit DTC transaction to coordinate the MSMQ dequeue with the DB insert (you can use one local DB transaction to dequeue and insert). It also offers queryability of the pending messages (SELECT .. FROM queue), is integrated with the DB HA/DR solution (cluster failover/mirroring), gives you DB-contained activation, and it all works from the familiar T-SQL programming environment. MSMQ's main advantage is support for a client-side C#/.NET API.
I was on a team that implemented this for purposes of guaranteed delivery. We used MSMQ to forward the insert requests to the database server, which had its own service running that dequeued the requests and ran the inserts, then acknowledged the message (to ensure delivery). It's been running for over a year now, and we've never been asked to come figure out why it isn't working...seems pretty solid to me.
This is very subjective because it depends on what your application does and how. Generally, something like MSMQ is not used for this purpose; rather, you want to set up some kind of high-availability clustering on your database of choice. A database going completely down is rare in most cases, and when it happens it is generally a bigger problem for most LOB applications than just needing somewhere to store data entered while the DB is down.
There's also overhead to think about. An INSERT operation against a database is relatively quick (in the larger scheme of things); writing a serialized something into a queue and having something else pick it up and do that insert is going to add a large amount of lag to your application, not to mention that you'll now have to account for everything being asynchronous.
That said, MSMQ can be used to ensure delivery of stuff from one end of an application to another, so I suppose there are instances where this scenario might be desirable. Most of the time though you're just better off trusting your DB and using MSMQ to enable asynchronous processing and performing interprocess and intermachine communication.