Could I create a trigger that sends a record as a message to JMS? If yes, how can I do it?
Thanks in advance!
I would summarize your options as follows:
Databases Supporting JMS
Oracle is the only database that I am aware of that supports JMS natively, in the form of Oracle Advanced Queuing. If your message receiver is not too keen on that JMS implementation, it is usually possible to find some sort of messaging bridge that will transform and forward messages from one JMS implementation to another. For example:
Apache ActiveMQ
JBoss Messaging
Databases Supporting Java
Some databases such as Oracle and DB2 have a built-in Java Virtual Machine and support the loading of third-party libraries (jars) and custom classes that can be invoked by trigger code. Depending on the requirements of your JMS client, this may be an issue on account of the version of Java supported (for example, if your client needs Java 5+ but the database's JVM only supports Java 1.3). Also keep in mind that threading in some of these embedded JVMs is not what you might expect it to be, although sending JMS messages is likely to be more forgiving of that than receiving them.
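As a rough illustration of the "Java in the database" approach, a sketch along the following lines could be loaded into the database's JVM and exposed as a stored procedure that a trigger then calls. It assumes a JMS 1.1 client library is available inside that JVM and that a ConnectionFactory can be looked up via JNDI; the class, JNDI and queue names are purely illustrative.

    import javax.jms.*;
    import javax.naming.InitialContext;

    public class TriggerJmsSender {

        // Intended to be exposed as a stored procedure and invoked from a trigger body.
        // Opening a connection per call keeps the sketch simple; a real implementation
        // would probably cache the ConnectionFactory.
        public static void send(String queueName, String payload) throws Exception {
            InitialContext ctx = new InitialContext();
            ConnectionFactory factory =
                    (ConnectionFactory) ctx.lookup("jms/ConnectionFactory"); // illustrative JNDI name
            Connection connection = factory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(session.createQueue(queueName));
                producer.send(session.createTextMessage(payload));
            } finally {
                connection.close();
            }
        }
    }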
Databases Supporting External Invocations (but not in Java)
Several databases support different means of pushing asynchronous events out to connected clients, which can in turn forward JMS messages built from the payload of the event (a Postgres-based sketch follows the list):
Oracle: DBMS_ALERT (synchronous), DBMS_PIPE, DCN (Database Change Notification)
Postgres: NOTIFY / LISTEN
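For the Postgres case, a minimal sketch of the client side might look like the following. It assumes the PostgreSQL JDBC driver and some JMS sending helper (such as the TriggerJmsSender sketch above); the channel name, connection details and polling interval are illustrative.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import org.postgresql.PGConnection;
    import org.postgresql.PGNotification;

    public class NotifyForwarder {

        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "secret"); // illustrative
            conn.createStatement().execute("LISTEN record_events");        // channel name is illustrative

            PGConnection pg = conn.unwrap(PGConnection.class);
            while (true) {
                PGNotification[] events = pg.getNotifications(); // non-blocking; null if nothing arrived
                if (events != null) {
                    for (PGNotification event : events) {
                        // The NOTIFY payload (set by the trigger) becomes the JMS message body.
                        TriggerJmsSender.send("DB.EVENTS", event.getParameter());
                    }
                }
                Thread.sleep(500);
            }
        }
    }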
Some databases (all of the above, plus SQL Server) allow you to send SMTP messages from database procedural code (which can be invoked by triggers). While this is not JMS, a mail listener could listen for those mail messages (which might conveniently carry a JSON or XML body) and forward the content as a JMS message.
A variation on this is database packages that allow HTTP POSTs to call out to an external endpoint, where you might have a servlet listening and forwarding the submitted content as a JMS message.
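The external side of that HTTP approach could be as simple as the following servlet sketch, which relays whatever the database POSTs as a JMS TextMessage. The JNDI and queue names are again illustrative, and the servlet container is assumed to provide the JMS connection factory.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.util.stream.Collectors;
    import javax.jms.*;
    import javax.naming.InitialContext;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class DbEventRelayServlet extends HttpServlet {

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // The body is whatever the database built in the trigger (e.g. JSON or XML).
            String body;
            try (BufferedReader reader = req.getReader()) {
                body = reader.lines().collect(Collectors.joining("\n"));
            }
            try {
                ConnectionFactory factory = (ConnectionFactory)
                        new InitialContext().lookup("jms/ConnectionFactory"); // illustrative JNDI name
                Connection connection = factory.createConnection();
                try {
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageProducer producer = session.createProducer(session.createQueue("DB.EVENTS"));
                    producer.send(session.createTextMessage(body));
                } finally {
                    connection.close();
                }
                resp.setStatus(HttpServletResponse.SC_ACCEPTED);
            } catch (Exception e) {
                throw new ServletException("Could not forward database event as JMS", e);
            }
        }
    }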
Other databases such as Postgres support non-Java languages such as Perl, Python and Tcl, where you might employ some clever scripting to send a message to an external message transformer that will forward it as JMS. ActiveMQ (and therefore its message bridge) supports a multi-language client that includes Python and Perl (and many others).
Lowest Common Denominator
Short of all that, your trigger can write an event to a table and an external client can poll the contents of the table, looking for new data and forwarding JMS messages when it finds it. The JMS message can either include the content, or simply indicate that content exists and what the PK is, and the consumer can come and get it.
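A bare-bones sketch of that polling client, using plain JDBC: the trigger is assumed to insert into an EVENTS table with ID, PAYLOAD and SENT columns (all names illustrative), and the actual JMS send is left to a helper like the one sketched earlier.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class EventTablePoller {

        // Called on a schedule; picks up rows the trigger wrote and forwards them.
        public static void pollOnce(Connection db) throws SQLException {
            try (Statement stmt = db.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT id, payload FROM events WHERE sent = 'N'")) {
                while (rs.next()) {
                    long id = rs.getLong("id");
                    String payload = rs.getString("payload");

                    // Forward the content (or just the PK) as a JMS message here,
                    // then mark the row so it is not sent again.
                    try (PreparedStatement mark = db.prepareStatement(
                            "UPDATE events SET sent = 'Y' WHERE id = ?")) {
                        mark.setLong(1, id);
                        mark.executeUpdate();
                    }
                }
            }
        }
    }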
This is a technique widely supported in Apache Camel which has adapters (technically called Components) specifically for polling databases:
Hibernate
JDBC
SQL
iBatis
Events read from a database table by Camel can then be transformed and routed to a variety of destinations, including a JMS server (in the form of a JMS message). Implementing this in Camel is fairly straightforward and well documented, so this is not a bad way to go.
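To give a feel for it, a Camel route using the sql and jms components might look roughly like this. The query, table and queue names are illustrative, the datasource and JMS connection factory must be registered with the context, and exact option names vary a little between Camel versions.

    import org.apache.camel.builder.RouteBuilder;

    public class EventsToJmsRoute extends RouteBuilder {

        @Override
        public void configure() {
            // Poll rows written by the trigger and publish each one as a JMS message,
            // marking the row after it has been consumed.
            from("sql:select id, payload from events where sent = 'N'"
                    + "?onConsume=update events set sent = 'Y' where id = :#id")
                .to("jms:queue:DB.EVENTS");
        }
    }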
I hope this was helpful.
IBM z and i series servers have DB2 with native SQL functions such as MQSEND to send to IBM MQ and MQREAD to read from IBM MQ. On these servers you can also create triggers that call a program.
I'm starting to learn websockets and I would like to know if they're supported by a database like DB2 (or some other data source).
Say I have a Spring Boot application that provides data to a UI as a service. Typically, I would run SQL SELECT statements every few seconds from the Java application. However, I want to have a stream of the data in the table (or perhaps a stream of just the changes made to the table), similar to having an open websocket connection to a Kafka topic.
Is it possible to use something like a STOMP websocket to open a connection to a DB2 table that will stay open and continuously pull data? Does the data source have to support websockets in order for that to work?
No they do not. RDBMS client-server protocols are more involved than just streaming a load of bytes for the client to interpret.
Having said that, database connections are already persistent, duplex, and stateful, and had been long before the WebSocket protocol was conceived.
Looking to create a service composed of JMS clients that support multiple JMS vendors and JMS versions. Ideally running in a single service instance/JVM.
Mule, Apache Camel, IBM WebSphere MQ and the like have done something similar. It's a feature available in ESBs, though I'm not clear on how they have architected multiple JMS clients running on a single platform; they may or may not have built it out on a single service instance.
What sort of knowledge and tooling is out there that can help guide me in this quest? Prefer JVM solutions.
For the three who closed due to "not about coding"... JMS, the "Java Message Service" is a Java messaging framework https://docs.oracle.com/javaee/6/tutorial/doc/bncdq.html ... the solution will certainly be code based.
I'm trying to create a router to integrate a number of JMS topics and queues. I am constrained by the fact that the client I am working for can't change the JMS implementation (TIBCO EMS with some custom client libraries) and the fact that they have written their own XA transaction manager, which doesn't quite conform to the JTA spec. It is very important that message delivery is guaranteed.
I've done a lot of reading and experimenting with Camel and I've realised that I probably need to write my own JMS component, as the standard JMS component is not going to integrate with the JMS client libraries or TM I have.
I need to be able to put hooks into the route lifecycle at the following points:
During the route startup, I need to identify all JMS connections and enlist them as XA resources with the TM implementation
When a message is received at the consumer, I need to start a transaction including all the JMS connections in the route
When a routing decision is made, I need to send the message to the producer and commit the transaction
Given the above, I think I can implement a very simplified version of the camel-jms component which strips out all the Spring parts and only contains the bare minimum required to interact with my JMS libraries.
Where would be the best place to initialise the transaction manager? I've been looking at DefaultCamelContext, RoutePolicy and RouteContext but I can't find a place where all the endpoints are resolved and initialised.
I solved this problem by implementing the UserTransaction and TransactionManager interfaces and creating a PlatformTransactionManager which the Camel JMS component uses to create the DefaultMessageListenerContainer.
One important point to note is that the transacted property on the Camel JMSComponent refers to local transactions, not XA transactions. If you set this property to true after passing a PlatformTransactionManager to the component, the DMLC will effectively try to commit your transaction twice, which won't work.
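A rough sketch of how the Camel side can be wired, assuming the custom PlatformTransactionManager (wrapping the in-house TM) and the XA-capable connection factory already exist; the names are illustrative, and the key point is that transacted stays false.

    import javax.jms.ConnectionFactory;
    import org.apache.camel.CamelContext;
    import org.apache.camel.component.jms.JmsComponent;
    import org.springframework.transaction.PlatformTransactionManager;

    public class XaJmsComponentSetup {

        public static void register(CamelContext context,
                                    ConnectionFactory xaConnectionFactory,
                                    PlatformTransactionManager customXaTransactionManager) {
            JmsComponent jms = new JmsComponent();
            jms.setConnectionFactory(xaConnectionFactory);
            jms.setTransactionManager(customXaTransactionManager);

            // Leave local JMS transactions off: with an external transaction manager,
            // transacted=true would make the DefaultMessageListenerContainer try to
            // commit the work a second time, as noted above.
            jms.setTransacted(false);

            context.addComponent("jms", jms);
        }
    }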
This leaves me with a nice working example consuming from one JMS broker and producing to another, but it is very slow - ~5 messages per second. Unfortunately Spring JMS does not support batching so it seems the best solution here is to adjust the JMS topic configurations such that routing only takes place between topics on the same broker.
I am implementing a callback in Java to store messages in a database. I have a client subscribing to '#'. The problem is that when this '#' client disconnects and reconnects, it adds duplicate entries of retained messages to the database. Checking for previous entries becomes expensive in computing power as the tables grow. Should I allocate a separate table for each sensor, or one per broker? I would really appreciate it if you could suggest better designs.
Subscribing to the wildcard ('#') with a single client is definitely an anti-pattern. The reasons for that are:
Wildcard subscribers get all messages of the MQTT broker. Most client libraries can't handle that load, especially not when transforming / persisting messages.
If your wildcard subscriber dies, you will lose messages (unless the broker queues endlessly for you, which also doesn't work).
You essentially have a single point of failure in your system. Use MQTT brokers which are hardened for production use; these are much more robust single points of failure than your hand-written clients. (You can overcome the SPOF through clustering and load balancing, though.)
So to solve the problem, I suggest the following:
Use a broker which can handle shared subscriptions (like HiveMQ or MessageSight), so you can balance all messages between many clients (see the sketch after the link below)
Use a custom plugin for doing the persistence at the broker instead of the client.
You can also read more about that topic here: http://www.hivemq.com/blog/mqtt-sql-database
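For the shared-subscription route, a minimal sketch with the Eclipse Paho client might look like the following. Several identical workers subscribe under the same $share group so the broker balances messages across them; the broker URL, group name and QoS are illustrative, and the $share syntax depends on the broker supporting shared subscriptions (it is standard in MQTT 5).

    import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
    import org.eclipse.paho.client.mqttv3.MqttCallback;
    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttException;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    public class SharedSubscriptionWorker {

        public static void main(String[] args) throws MqttException {
            // One of several identical workers; the broker balances messages across
            // every client subscribed under the same $share group.
            MqttClient client = new MqttClient("tcp://broker:1883", "persist-worker-" + args[0]);
            client.setCallback(new MqttCallback() {
                public void connectionLost(Throwable cause) { /* reconnect handling goes here */ }
                public void deliveryComplete(IMqttDeliveryToken token) { }
                public void messageArrived(String topic, MqttMessage message) {
                    // Hand off to the database writer rather than persisting inline.
                    System.out.println(topic + " -> " + new String(message.getPayload()));
                }
            });
            client.connect();
            client.subscribe("$share/persistence-group/#", 1);
        }
    }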
Also consider using QoS = 2 for all messages to make sure each one is delivered exactly once. You may also consider time-stamping each message to avoid inserting duplicates if the QoS guarantee is not met.
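If you do go the timestamp route, the duplicate check can also be pushed down to the database itself. A sketch, assuming PostgreSQL syntax and a unique constraint on (topic, published_at); table and column names are illustrative.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.sql.Timestamp;

    public class DedupMessageWriter {

        // Re-delivered (duplicate) messages with the same topic and publish timestamp
        // are silently ignored by the conflict clause.
        public static void store(Connection db, String topic, long publishedAtMillis, byte[] payload)
                throws SQLException {
            String sql = "INSERT INTO messages (topic, published_at, payload) VALUES (?, ?, ?) "
                       + "ON CONFLICT (topic, published_at) DO NOTHING";
            try (PreparedStatement ps = db.prepareStatement(sql)) {
                ps.setString(1, topic);
                ps.setTimestamp(2, new Timestamp(publishedAtMillis));
                ps.setBytes(3, payload);
                ps.executeUpdate();
            }
        }
    }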
This is more out of curiosity and "for future reference" than anything, but how is Comet implemented on the database-side? I know most implementations use long-lived HTTP requests to "wait" until data is available, but how is this done on the server-side? How does the web server know when new data is available? Does it constantly poll the database?
What DB are you using? If it supports triggers, which many RDBMSs do in some shape or form, then you could have the trigger fire an event that actually tells the HTTP request to send out the appropriate response.
Triggers remove the need to poll... polling is generally not the best idea.
PostgreSQL seems to have pretty good support (even PL/Python).
This is very much application dependent. The most likely implementation is some sort of messaging system.
Most likely, your server side code will consist of quite a few parts:
a few app servers that handle incoming requests,
a (separate) comet server that deals with all the open connections to clients,
the database, and
some sort of messaging infrastructure
The last one, the messaging infrastructure, is really the key. This provides a way for the app servers to talk to the comet server. So when a request comes in, the app server will put a message into the message queue telling the comet server to notify the correct client(s).
How messaging is implemented is, again, very much application dependent. A very simple implementation would just use a database table called messages and poll that.
But depending on the stack you plan on using, there should be more sophisticated tools available.
In Rails I'm using Juggernaut, which simply listens on some network port. Whenever there is data to send, the Rails application server opens a connection to this Juggernaut push server and tells it what to send to the clients.
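To make the app-server-to-comet-server hop concrete, here is a small sketch using a JMS topic as the messaging infrastructure. The topic name, the clientId property and the pushToClient helper are all illustrative; the app servers would publish to the same topic with a matching producer.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageListener;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    public class CometNotifier implements MessageListener {

        // Comet-server side: listen on a notification topic and push to the right
        // held-open HTTP connection when something arrives.
        public void start(ConnectionFactory factory) throws JMSException {
            Connection connection = factory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(session.createTopic("comet.notifications"));
            consumer.setMessageListener(this);
            connection.start();
        }

        @Override
        public void onMessage(Message message) {
            try {
                String clientId = message.getStringProperty("clientId");
                String payload = ((TextMessage) message).getText();
                pushToClient(clientId, payload);
            } catch (JMSException e) {
                e.printStackTrace();
            }
        }

        private void pushToClient(String clientId, String payload) {
            // Look up the open connection for clientId and flush the payload to it.
        }
    }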