ActiveMQ Artemis Anycast and Multicast prefix - apache-camel

I'm using ActiveMQ Artemis version 2.6.2 and using Apache Camel to route messages.
When I connect Camel to ActiveMQ Artemis with the JMS component, for some reason the new queue is created in the broker with jms.queue. as a prefix.
I know that adding the following settings to the acceptor in broker.xml solves the problem, but sadly I don't have access to do that.
anycastPrefix=jms.queue.;multicastPrefix=jms.topic.
Is there a way to solve this in Java code? I tried these steps without success:
from("amq:QUEUE.TEST").setProperty("anycastPrefix", simple("jms.queue."))
from("amq:jms:queue:QUEUE.TEST")

The reason that the queue is being created with the jms.queue. prefix is almost certainly because Camel is using an Artemis 1.x client instead of a 2.x client. The 1.x client is hard-coded to use the jms.queue. and jms.topic. prefixes.
As you note, the simplest way to solve this issue is by configuring the prefixes on the appropriate acceptor in broker.xml. I don't know of any way to solve this issue in Java code. I think your best bet would be just to upgrade the Artemis client implementation which Camel is using.
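For reference, once Camel is using an Artemis 2.x client, the queue shows up without a prefix. A minimal sketch of that wiring, assuming the component name amq and a broker at tcp://localhost:61616 (both assumptions, not taken from the question):

import javax.jms.ConnectionFactory;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jms.JmsComponent;
import org.apache.camel.impl.DefaultCamelContext;

public class ArtemisClientSetup {
    public static void main(String[] args) throws Exception {
        // Artemis 2.x JMS client - does not hard-code the jms.queue./jms.topic. prefixes
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616"); // assumed broker URL

        CamelContext context = new DefaultCamelContext();
        // register the JMS component under the same name used in the routes ("amq" here)
        context.addComponent("amq", JmsComponent.jmsComponentAutoAcknowledge(cf));

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("amq:QUEUE.TEST")           // queue is created as QUEUE.TEST, no prefix
                    .log("received: ${body}");
            }
        });

        context.start();
        Thread.sleep(5000);
        context.stop();
    }
}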

Related

Configure HA for WildFly 10 Artemis

We have two WildFly 10.1.0 servers with ActiveMQ Artemis running. We want the two Artemis instances to stay in sync with messages, and if one goes down the other should take over sending and receiving. There is a lack of documentation out there, as Artemis is still fairly new.
Also, is it possible to keep both active for load balancing?
I am assuming that you are using the internal instance of Artemis in WildFly. The WildFly Quickstarts include a good demo on messaging clustering, which also works for 10.1. It shows how to configure it using standalone-ha or domain mode:
https://github.com/wildfly/quickstart/tree/11.x/messaging-clustering

Apache Camel to invoke EJB 2

Can I use Apache Camel to invoke remote EJBs (EJB 2.0)? How do I pass parameters to these EJBs? The example on the Camel website is not very clear. Also, I'm not using Spring. Can someone please help?
To call remote EJBs you can just use plain Java code, and let Camel call that Java code.
If you want to try the camel-ejb component, you need to configure the component for remote EJBs, which is not so easy - there is a JIRA ticket to improve this in a future release.
So I suggest just using Java code - e.g. call these remote EJBs as you would from regular Java code, without Apache Camel. A sketch of that approach is shown below.
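To make that concrete, here is a minimal sketch of the "plain Java code" approach. The home/remote interfaces, JNDI name and InitialContext settings are all made up for illustration - replace them with the ones from your application server.

import java.rmi.RemoteException;
import java.util.Properties;
import javax.ejb.CreateException;
import javax.ejb.EJBHome;
import javax.ejb.EJBObject;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.rmi.PortableRemoteObject;
import org.apache.camel.builder.RouteBuilder;

// Hypothetical EJB 2.x interfaces - replace with the real home/remote interfaces of your bean.
interface MyServiceHome extends EJBHome {
    MyService create() throws CreateException, RemoteException;
}

interface MyService extends EJBObject {
    String process(String input) throws RemoteException;
}

public class RemoteEjbCaller {

    // Plain Java lookup-and-invoke; Camel simply calls this method as a bean.
    public String callEjb(String body) throws Exception {
        Properties env = new Properties();
        // These values depend on your application server and are assumptions.
        env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
        env.put(Context.PROVIDER_URL, "jnp://remotehost:1099");

        Context ctx = new InitialContext(env);
        Object ref = ctx.lookup("ejb/MyService"); // assumed JNDI name
        MyServiceHome home = (MyServiceHome) PortableRemoteObject.narrow(ref, MyServiceHome.class);
        MyService service = home.create();
        return service.process(body); // the Camel message body becomes the EJB call parameter
    }
}

// Route that hands the message body to the helper bean:
class EjbRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:invokeEjb")
            .bean(RemoteEjbCaller.class, "callEjb");
    }
}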

Set TTL Apache Camel Java DSL

How do you set the TTL for a message when using Java DSL?
I have something like this:
...
from("timer:something?delay=3000&period=15000")
...
.to("{{some.property}}")
.end()
...
I want to set a time to live on the message being sent.
I ended up setting the JMSExpiration header field of the messages being created, similar to the following:
.setHeader("JMSExpiration", constant(System.currentTimeMillis() + 1000))
We are using Apache ActiveMQ 5.7.
I assume TTL means Time to Live.
In Camel, how this is handled is component specific: some components support it and others do not.
You should check the documentation for the component you use to see what it supports.
If you use the JMS component then it has the timeToLive option, as documented at http://camel.apache.org/jms
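For example, a sketch of the timer route from the question with a TTL set on the JMS producer endpoint (the activemq component name and queue name are assumptions):

import org.apache.camel.builder.RouteBuilder;

public class TtlRoute extends RouteBuilder {
    @Override
    public void configure() {
        // timeToLive is in milliseconds; here each message expires 15 seconds after it is sent
        from("timer:something?delay=3000&period=15000")
            .setBody(constant("ping"))
            .to("activemq:queue:SOME.QUEUE?timeToLive=15000");
    }
}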
Also be mindful of the problem that client and server clocks can be out of sync; there are some details about this on the Camel JMS page. Some message brokers have ways to sync the clocks, such as Apache ActiveMQ with its timestamp plugin: http://activemq.apache.org/timestampplugin.html

A scalable bus with multiple Camel instances

My idea is to use Camel to decouple modules. In order to support scalability and failover, I am wondering if the following architecture is advisable.
I have two applications with Camel embedded AppCamel1 and AppCamel2. Then I have standalone camel nodes Camel1 and Camel2.
AppCamel1 would have a route with fail-over/load balancing to Camel1 and Camel2. This way, if Camel1 crashes for example, Camel2 is used for failover.
Camel1 and Camel2 would do a REST call with the HTTP component, for example. There would also be request-reply from AppCamel1 up to Camel1 or Camel2.
Is it a valid scenario?
What should I use to interconnect the different Camel instances (AppCamel1 to Camel1 or Camel2)? I would like to know if it's possible to avoid another component, like a JMS server, in the middle.
Thank you!
Edited following Boday's answer
The REST calls are from Camel1/2. I'd like to interconnect AppCamel1/2 to Camel1/2 and see if I can avoid anything in between. I guess Mina is a possibility, or even HTTP, but in that case AppCamel1 and AppCamel2 need to know about Camel1/2, which is not so good.
This is also being discussed on the Camel mailing list, where there are also some pointers and suggestions:
http://camel.465427.n5.nabble.com/scalable-bus-with-multiple-Camel-instances-tp5606593p5606593.html
If you are trying to load balance HTTP requests to your AppCamel1/2, then you'd need a proxy server in between (Apache mod_proxy, Perlbal, etc.). To load balance from AppCamel1/2 to Camel1/2, you can use Camel's load balancer or even JMS request/reply...
From AppCamel1/2 to Camel1/2, it sounds like you are using REST as the interface. If you need more complex communication between the instances, then I'd use JMS (via camel-activemq) for messaging and Hazelcast (via camel-hazelcast) for distributed caching/locking, etc.
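A sketch of the Camel load balancer option mentioned above, using round-robin with failover between the two worker nodes (the HTTP endpoints are assumptions):

import org.apache.camel.builder.RouteBuilder;

public class FailoverRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:toWorkers")
            // failover(maximumFailoverAttempts, inheritErrorHandler, roundRobin):
            // retry once on failure, skip the route's error handler, round-robin between nodes
            .loadBalance().failover(1, false, true)
                .to("http://camel1:8080/service")   // assumed endpoint on Camel1
                .to("http://camel2:8080/service")   // assumed endpoint on Camel2
            .end();
    }
}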
If you use JMS to communicate then you do not need a special load balancer. Just use one queue and let both Camel1/2 listen to it; they will then automatically fail over and load balance.
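In other words, the same consumer route deployed on both Camel1 and Camel2 gives you competing consumers on one queue. A sketch, with the queue name and component name as assumptions:

import org.apache.camel.builder.RouteBuilder;

// Deploy this identical route on Camel1 and Camel2; the broker spreads messages
// across both consumers, and the surviving node keeps consuming if one goes down.
public class WorkerRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("activemq:queue:APP.REQUESTS")
            .log("processing ${body} on this node");
    }
}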
I would definitely go for JMS middleware. ActiveMQ is the natural choice (Camel is even considered a subproject of ActiveMQ). It is trivial to embed ActiveMQ along with your Camel instances and cluster them. ActiveMQ will then be able to handle both load balancing and failover for you.

How do you debug CXF endpoint publishing?

Given the "cxf-osgi" example from fuse source's apache-servicemix-4.4.1-fuse-00-08, built with maven 3.0.3, when deploying it to apache karaf 2.2.4 and CXF 2.4.3 the web service is never published and never visible to the CXF servlet (http://localhost:8181/cxf/). There are no errors in the karaf log. How would one go about debugging such behavior?
It's worth turning up the log level(s) - you can do this permanently in the etc/org.ops4j.pax.logging.cfg or in the console with log:set TRACE org.apache.cxf - IIRC this will show some useful information.
Also check that it's actually published on localhost/127.0.0.1 - it may well be published on another interface (the machine's network IP) but not on localhost. Try using 0.0.0.0 as the address; that way it will bind to all available interfaces.
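As an illustration of the bind-address point, here is a standalone (non-OSGi) sketch that publishes a CXF endpoint on 0.0.0.0; the service interface, implementation and port are made up for the example:

import javax.jws.WebService;
import org.apache.cxf.jaxws.JaxWsServerFactoryBean;

public class PublishExample {

    // Minimal hypothetical service, just for the example.
    @WebService
    public interface Greeter {
        String greet(String name);
    }

    public static class GreeterImpl implements Greeter {
        @Override
        public String greet(String name) {
            return "Hello " + name;
        }
    }

    public static void main(String[] args) {
        JaxWsServerFactoryBean factory = new JaxWsServerFactoryBean();
        factory.setServiceClass(Greeter.class);
        factory.setServiceBean(new GreeterImpl());
        // 0.0.0.0 binds the HTTP listener to all local interfaces, so the service
        // is reachable on localhost as well as the machine's external IPs.
        factory.setAddress("http://0.0.0.0:9090/greeter");
        factory.create();
    }
}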
As you're using Maven, you can download the CXF source (easily in Eclipse) and connect a remote debugger to the Karaf instance; with some strategically placed breakpoints you should be able to get a handle on what's going on.
Try changing to Equinox instead of the default Felix. There is a bug in CXF 2.4.3 where it doesn't work well with Felix. Alternatively, CXF 2.4.4 is now available and should also fix it.
Take a look at this issue I filed this week: https://issues.apache.org/jira/browse/CXF-4058
What I found is that if my beans.xml is loaded before the CXF bundle jar, then the endpoints are registered with CXF but not with the OSGi HTTP service. So everything looks good in the logs, but the endpoints are never accessible. This is a race condition.
I did two workarounds: 1) in the short term, just move my own jars later in the boot order (I use Karaf features) so Spring and CXF are fully loaded before my beans.xml is read, and 2) abandon Spring and roll my own binding code based loosely on this approach: http://eclipsesource.com/blogs/2012/01/23/an-osgi-jax-rs-connector-part-1-publishing-rest-services/
I just implemented solution #2 yesterday and I'm already extremely happy with it. It's solved all of my classloader issues (before I had to manually add a lot of Import-Package lines because BND doesn't see beans.xml references) and fixed my boot race condition.
