I have a requirement to write code so that whenever there is a new entry in a JMS queue, that entry is persisted to a MySQL database. I read that this is possible using the Apache Camel project. Could anyone point me to examples or documentation related to this?
Lokesh
Yes, it's rather straightforward. At least the JMS and database parts are.
from("jms:queue:someQueue")
.bean(SomeTransformerBean.class) // transform the message, custom code etc in
.to("sql:insert into FOO X VALUES(#)"); // need to enter some valid SQL statement here
Read more here
http://camel.apache.org/sql-component.html
and here
http://camel.apache.org/jms
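If it helps to see the pieces wired together, here is a rough end-to-end sketch in the Java DSL. It assumes Camel 3.x, an ActiveMQ broker on tcp://localhost:61616, a MySQL database mydb with a table FOO(X), and the camel-jms, camel-sql and MySQL driver jars on the classpath; all of these names are placeholders to adjust for your environment.

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jms.JmsComponent;
import org.apache.camel.impl.DefaultCamelContext;
import com.mysql.cj.jdbc.MysqlDataSource;

public class JmsToMySqlExample {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();

        // JMS component backed by a (locally running) ActiveMQ broker; the URL is a placeholder
        context.addComponent("jms", JmsComponent.jmsComponentAutoAcknowledge(
                new ActiveMQConnectionFactory("tcp://localhost:61616")));

        // MySQL DataSource that the SQL component looks up by the name "myDataSource"
        MysqlDataSource ds = new MysqlDataSource();
        ds.setURL("jdbc:mysql://localhost:3306/mydb");
        ds.setUser("user");
        ds.setPassword("secret");
        context.getRegistry().bind("myDataSource", ds);

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("jms:queue:someQueue")
                    // .bean(SomeTransformerBean.class) // optional transformation step from the answer above
                    // '#' placeholders are filled from the exchange body (e.g. a List of parameter values)
                    .to("sql:insert into FOO (X) values (#)?dataSource=#myDataSource");
            }
        });

        context.start();
        Thread.sleep(60_000); // keep the route running for a minute in this demo
        context.stop();
    }
}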
Following on from this (apologies, had a different user): Kafka Key access on Ingress of a Python Flink Stateful function
Our use case is that we make use of the Kafka headers as a means of tracing and lineage as well as required metadata. Looking at this:
https://github.com/apache/flink-statefun/blob/master/statefun-flink/statefun-flink-io-bundle/src/main/java/org/apache/flink/statefun/flink/io/kafka/binders/ingress/v1/RoutableKafkaIngressDeserializer.java#L45-L61
It looks like the headers are dropped when using the standard deserializer.
Effectively, what I'd want is a way to inject my own deserializer that would return a message containing these headers and any other metadata from the record. I'd want to add something like the UniversalKafkaIngress so that I could configure it using a remote module.
Looking at the code, I can see that I could register a new ExtensionModule, and replace the deserializer (and create a custom kind). Is this recommended? If so - are there any docs on this (if not, how could I configure statefun to pick this up)?
Or, is there another preferred method?
Thanks again...
Ah - found out where I was going wrong.
You can load an ExtensionModule using the standard module SPI process, and therefore register it as a new 'universal' ingress so that it can be loaded remotely. I had a typo, which is why I was struggling.
There are a few gotchas - and I'll post a gist a little later to show how it can be done.
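Until the gist is up, here is roughly what the SPI wiring looks like. Note that the fully qualified interface name and the binding details below are assumptions based on the statefun-flink sources linked above, so verify them against the StateFun version you are running.

// Registration is plain Java SPI (ServiceLoader): add a resource file named after the
// extension interface (assumed here to be org.apache.flink.statefun.extensions.ExtensionModule;
// check your StateFun version) containing the fully qualified name of your module class:
//
//   src/main/resources/META-INF/services/org.apache.flink.statefun.extensions.ExtensionModule
//     com.example.statefun.HeaderAwareKafkaIngressModule
//
package com.example.statefun;

public final class HeaderAwareKafkaIngressModule /* implements the ExtensionModule interface */ {
    // In the module's configure(...) method, bind a component binder under your own
    // TypeName (e.g. "com.example/kafka-with-headers.ingress.v1"), mirroring how the
    // built-in binder registers the universal Kafka ingress, and have its deserializer
    // copy record.headers() into the produced message instead of dropping them (the
    // lines linked in the question are where they currently get lost).
}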
I'm new to Camel, and have some basic questions I couldn't find answers to online. Please help; I'd appreciate it.
I have read many examples online, and saw a bunch of examples like this:
from("direct:A").to("jms:queue:B")
But I didn't see any configuration for them. My question is: what will happen if the direct endpoint doesn't exist? How about from("jms:queue:A").to("direct:A")? And what about the other components?
For this example, what's the execution order? Does it pass the original message to B first, then process and pass to C?
from("direct:A")
    .to("jms:B")
    .process(something)
    .to("jms:C")
Direct is a synchronous in-memory endpoint provided by Camel. Prior to Camel 3 it was bundled with the camel-core module and you did not need any configuration at all in order to use the direct component. However, for the sake of modularity, since Camel 3 direct has been made its own component, and in order to use it camel-direct needs to be added as a dependency.
JMS, on the other hand, is a generic component with which you can connect to different JMS providers such as ActiveMQ (though Camel also has a dedicated activemq component), IBM MQ, WebLogic JMS server, and others.
For your first question: if the direct component doesn't exist, you need to add it to your build. If the endpoint referenced by the URI is not present yet, Camel will create it. This is true for most Camel components. One of the most common examples is the file component, which is used to pick up files from a given directory; if that directory is not present, Camel is smart enough to create it. Obviously, these are default behaviours and you have a lot of control to pick and choose how you want your route to behave.
For your second question, the route will be processed entirely in order; that is, the message will be picked up from direct:A, then sent to jms:B. After that it will be processed by the something processor and will finally be sent to jms:C.
The thing to note here is that direct:A is just an example to show the syntax of a route. You can use any component that can act as a consumer.
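To make the ordering concrete, here is a runnable version of that route with each step annotated in the order it executes; the processor body below is a made-up stand-in for the "something" in the question.

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;

public class OrderingExampleRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:A")                       // 1. consumer side: runs when something sends to direct:A
            .to("jms:B")                       // 2. the current message is sent to destination B (a queue by default)
            .process(new Processor() {         // 3. only after that is the processor invoked
                @Override
                public void process(Exchange exchange) {
                    // placeholder for "something": whatever this sets on the body
                    // is what the next step will send
                    exchange.getIn().setBody(exchange.getIn().getBody(String.class) + " [processed]");
                }
            })
            .to("jms:C");                      // 4. finally the (possibly modified) message goes to C
    }
}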
I am trying to implement a replay mechanism with Camel, i.e. I have to retrieve all the messages already persisted and forward them to the appropriate Camel route for reprocessing. This will be triggered by a Quartz scheduler.
I achieved this as follows:
1) Once the Quartz scheduler is triggered, forward to a processor which queries the DB, builds the messages into a list and sets that list as a Camel exchange property.
2) Use a LoopProcessor in the Camel route to set the appropriate XML on the exchange in each iteration.
3) Forward it to ActiveMQ, from where it is forwarded to the appropriate Camel route for reprocessing.
Everything works fine.
However, I see the following two issues:
a) There might be 'n' messages (10,000+) held in the Camel exchange properties as a list. I can remove each message once it has been sent for processing, but I don't think that does much good for performance and memory usage.
b) I don't want to forward all 10,000+ messages to ActiveMQ at once, which I guess would overwhelm it. Is there a better mechanism for forwarding 10,000+ messages to ActiveMQ?
I am thinking of using SEDA/VM (with different Camel contexts). How well would that work, considering the questions above?
Thanks.
Regards
Senthil Kumar Sekar
If the number of messages is a problem, then not all messages should be loaded at once.
Process as follows (see also my answer to your other SO question):
1. Limit the number of results when querying the DB.
2. Set a marker (e.g. a processedFlag) on the DB entries that have been processed.
3. Go back to step 1 and query only the entries that have not yet been processed, until all records are processed.
However, you should test the ActiveMQ approach as well, to see whether 10,000+ messages are really a problem or not.
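For what it's worth, here is a rough sketch of that loop in the Java DSL. It assumes camel-quartz, camel-sql and an ActiveMQ component are configured, a DataSource registered as myDataSource, and a table replay_messages(id, payload, processed); every table, column and queue name here is invented for illustration.

import org.apache.camel.builder.RouteBuilder;

public class BatchedReplayRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Trigger every 5 minutes (quartz component in Camel 3; quartz2 on Camel 2.x)
        from("quartz://replay/batch?cron=0+0/5+*+*+*+?")
            // 1. Load only a limited batch of unprocessed rows (LIMIT syntax depends on your DB)
            .to("sql:select id, payload from replay_messages where processed = false limit 500"
                + "?dataSource=#myDataSource")
            // 2. The select returns a List of Maps; split it so each row becomes its own exchange.
            //    streaming() avoids keeping the whole batch in memory at once.
            .split(body()).streaming()
                .setHeader("replayId", simple("${body[id]}"))
                .setBody(simple("${body[payload]}"))
                // 3. Hand the message to the broker; the existing routes reprocess it from there
                .to("activemq:queue:replay")
                // 4. Mark the row as processed so the next quartz run skips it
                .to("sql:update replay_messages set processed = true where id = :#${header.replayId}"
                    + "?dataSource=#myDataSource")
            .end();
    }
}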
I am familiar with Camel-SMPP and it works great for my consumer and producer routes. I am using the Selenium SMPP SIM to test it.
from uri="smpp://smppclient#127.0.0.1:8056?password=password&systemType=consumer"/>
to uri="smpp://smppclient#localhost:2775?password=password&&systemType=producer"/>
However, I would like to have my Camel application run as a server which accepts SMS from numerous clients. My current from route is tightly coupled to one SMS sender. How can I modify this into a generic server? Is this possible in Camel?
If I understand your question right, you have:
127.0.0.1:8056 as SMS client
localhost:2775 as SMS sender
It looks like this:
from:client1 ----> to:sender1
Let's say you want to connect more SMS clients to your SMS sender.
from:client1 -----> to:sender1
from:client2 ----/
from:client3 ---/
All you need to do is add more from nodes.
I think you are using a Spring-style XML file to configure Camel. That means you do it in a declarative way and Camel does exactly as much as you declare in your XML file; no for loops or anything like that. So, literally, you need to add more <from uri="smpp://smppclient#127.0.0.1:8056?password=password&amp;systemType=consumer"/> lines to your XML. Alternatively, you can use the Camel Java API to configure/add your nodes dynamically, so you could configure or add your nodes from a DB or wherever.
The catch is that you would then have to add just as many <to uri="smpp://smppclient#localhost:2775?password=password&amp;systemType=producer"/> nodes, which is not exactly what we want. To fix this, we add an abstraction node in between. It will look like:
from:client1 -----> direct:sender ----> to:sender1
from:client2 ----/
from:client3 ---/
So your code will be:
from uri="smpp://smppclient#127.0.0.1:8056?password=password&systemType=consumer"/>
to uri="direct://sender"
from uri="smpp://smppclient2#...."/>
to uri="direct://sender"
from uri="smpp://smppclient3#..."/>
to uri="direct://sender"
from uri="direct://sender"
to uri="smpp://smppclient#localhost:2775?password=password&&systemType=producer"/>
You can consider using seda instead of direct so that you get queuing quite easily.
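If you ever move to (or mix in) the Java DSL, the same fan-in shape looks roughly like this. The client and sender URIs are the ones from your XML, the second client (port 8057) is made up, and seda:sender is just a name I picked:

import org.apache.camel.builder.RouteBuilder;

public class SmppFanInRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Each SMPP client gets its own consumer route; all of them feed one internal queue.
        from("smpp://smppclient#127.0.0.1:8056?password=password&systemType=consumer")
            .to("seda:sender");

        from("smpp://smppclient2#127.0.0.1:8057?password=password&systemType=consumer") // hypothetical second client
            .to("seda:sender");

        // One route drains the internal queue and forwards everything to the SMS sender.
        from("seda:sender")
            .to("smpp://smppclient#localhost:2775?password=password&systemType=producer");
    }
}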
I want to be able to copy the file I receive, which comes in as XML, to a new folder location on the server. Essentially I want to hold a backup of the input files in a new folder.
What I have done so far is try to follow what has been said on this forum post - link text
At first I tried the last method, which didn't do anything (file renaming while reading). So I tried one of the other options: I altered the orchestration and put a Send shape just after the Receive shape, so the same message that comes in is sent out to the logical port. I exported the MSI and created a Send Port in the Admin console which points to my copy location. It copies the file, but it keeps creating a new one every second. The Event Viewer also reports warnings saying "The file exists". I have set the Copy Mode of the port to 'Overwrite' and to 'Create New'; neither works.
I have looked on Google but nothing helps. BTW, I support BizTalk but I have no idea how pipelines and ports work, so any help would be appreciated.
Thanks for the quick responses.
As David has suggested, I want to be able to track the message off the wire before BizTalk does any processing on it.
I have tried the CodePlex link that Ben supplied, and it points to 'Atomic-Scope's BizTalk Message Archiving Pipeline Component', which it looks like my client will have to pay for. I have downloaded the trial and will see if I have any luck.
David - I agree that the orchestration should represent the business flow, and making a copy of a file isn't part of the business process. I just assumed when I started tinkering around that I could do it myself in the orchestration, as suggested in the link I posted.
I'd also rather not rely on the BizTalk tracking within the message box database, as I suppose the tracked messages will need to be pruned on a regular basis. Is that correct, or am I talking nonsense?
However, is there a way I can do what Atomic-Scope has done that may be cheaper?
**Hi again - I have figured it out from David's original post. As indicated, I also created a Send Port which just has a "Filter" expression like BTS.ReceivePortName == ReceivePortName.
Thanks all**
As the post you linked to suggests, there are several ways of achieving this sort of result.
The first question is: What do you need to track?
It sounds like there are two possible answers to that question in your case, which I'll address separately.
You need to track the message as received off the wire before BizTalk touches it
This scenario often arises where you need to be able to prove that your BizTalk solution is not the source of any message corruption or degradation being seen in messages.
There are two common approaches to this:
Use a pipeline component such as the one Ben Runchey suggests
There is another example of an archiving pipeline component here on codebetter.com. It looks good; just be careful, if you use other components, about where you place this one, so that you are still following proper BizTalk streaming-model practices. BizTalk pipelines are all forward-only streaming, meaning your stream is read once, and all the work on it happens in an eventing manner.
This is a good approach, but with the following caveats:
You need to be careful about the streaming employed within the pipeline component
You are not actually tracking the on-the-wire message - what your pipeline actually sees is the message after it has gone through the BizTalk adapter (e.g. the HTTP adapter, the File adapter, etc.)
Rely upon BizTalk's out of the box tracking
BizTalk automatically persists all messages to the message box database and if you turn on BizTalk tracking you can make BizTalk keep these messages around.
The main downside here is that enabling this tracking will result in some performance degradation on your server. Depending on the exact scenario this may not be a huge hit, but it can be significant.
You can track the message after it has gone through the initial receive pipeline
With this approach there are two main options: use a pure messaging send port subscribing to the receive port, or use an orchestration send port.
I personally do not like the idea of using an orchestration send port. Orchestrations are generally best used to model the business flow needed. Unless this archiving is part of the business flow as understood by standard users, it could simply confuse what does what in your solution.
The approach I tend to use is to create a messaging send port in the BizTalk admin console that subscribes to your receive port. The send port will then just use a standard BizTalk file adapter, with a pass through pipeline.
I think you should look at the BizTalk Message Archiving pipeline component. You can find it on CodePlex (http://www.codeplex.com/btsmsgarchcomp).
You will have to create a new pipeline and deploy it to your BizTalk group. Then update your receive pipeline to archive the file to a location that the host instance running this receive location has access to.