Clustered Apache Camel to run Jira routes - apache-camel

Background: We have written a Spring Boot, Apache Camel based ingestion service which runs Camel routes that ingest data from a shared directory (Excel files) and from Jira (API calls). The Jira-based routes are fired by a scheduler at a pre-defined frequency. Users configure multiple integrations in the system, and each integration maps to one Camel route. In production, there will be 10 instances of the ingestion service running.
Problem Statement: For each Jira-based integration, only one ingestion instance should fire and process the route; the other instances should skip it if that specific route is already running elsewhere.
Question: How can we make sure only one ingestion instance processes a route while the rest ignore it (i.e. they may start but stop after doing nothing)?
Analysis: It seems the Camel Cluster component could be used, but we are not sure whether it works in conjunction with the scheduler component. In addition, since the cluster component can rely on standalone backends such as a cache, the file system, etc., the preferred solution would be one that does not require any new components in the architecture. A custom solution may also be possible, but the preference is for an out-of-the-box solution.
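For reference, a minimal sketch of what the cluster-based approach might look like, assuming Camel 3.x with camel-master, camel-file, camel-scheduler and camel-http on the classpath; the lock directory, namespace and Jira URL are hypothetical:

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.file.cluster.FileLockClusterService;
import org.apache.camel.impl.DefaultCamelContext;

public class SingleInstanceJiraRoute {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();

        // Cluster service backed by file locks in a shared directory;
        // only the instance holding the lock becomes leader for a namespace.
        FileLockClusterService clusterService = new FileLockClusterService();
        clusterService.setRoot("/shared/camel-cluster"); // hypothetical shared path
        context.addService(clusterService);

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // "master:" wraps the scheduled consumer: all 10 instances may
                // start the route, but only the current leader actually polls.
                from("master:jira-integration-1:scheduler://jiraPoll?delay=60000")
                    .to("https://example.atlassian.net/rest/api/2/search"); // hypothetical Jira call
            }
        });

        context.start();
        Thread.sleep(Long.MAX_VALUE);
    }
}
```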

Related

Implementing multiple version flow in single Mule project

Trying to create a Mule project with support for multiple version endpoints. To begin with, I have started with two API specifications with the same endpoints but different versions.
hello-world-v1.raml (version: 1, GET /hello)
hello-world-v2.raml (version: 2, GET /hello)
Then I used both RAML files to create the Mule project. By default it created two Listeners on different ports. But I want to run the app on a single server and port, and route to a flow based on the version in the path, e.g.
https://www.custom-greetings.com/api/v1/hello will serve based on the first RAML specification, whereas
https://www.custom-greetings.com/api/v2/hello will serve based on the second RAML specification.
The reason I want a single Mule project with both versions is so that my client can use the same domain instead of
https://www.custom-greetings-v1.com vs https://www.custom-greetings-v2.com
I am pretty sure there is an efficient way to do this, but I am not finding any related example or guidance.
Any help/pointer is appreciated.
Thanks.
If you are deploying to a standalone Mule server you can move the HTTP Listener configuration to a Mule Domain and share it with both applications. That way both listen on the same port but on different URI paths. This method cannot be used in CloudHub or Runtime Fabric deployments because they don't support domains.
Another alternative is to manually combine both RAMLs into a single one and create a single application with a single HTTP Listener for both APIs. This alternative is compatible with CloudHub and Runtime Fabric.
Yet another option is to put a load balancer in front of both applications. For a standalone Mule installation you need to provide your own load balancer. CloudHub provides a feature called Dedicated Load Balancer for this. Runtime Fabric uses the Kubernetes ingress mechanism.

Apache Camel single instance of a route running in cluster using database

I have an Apache Camel application deployed on two servers, and they consume from a JMS endpoint. I want to make sure that only one Camel route is consuming from the JMS endpoint at a time. The only option that I can use for clustering is using a database as a lock store. Does Apache Camel provide such a feature?
I think the easiest way is to consume from a topic and not a queue.
On connection, use the same subscription name. Only the first connection will be allowed, as far as I know.
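For illustration, a minimal sketch of this topic-based approach, assuming camel-jms with an ActiveMQ 5.x connection factory; the broker URL, client id, and topic/subscription names are hypothetical:

```java
import javax.jms.ConnectionFactory;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jms.JmsComponent;
import org.apache.camel.impl.DefaultCamelContext;

public class SingleConsumerTopicRoute {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();

        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://broker:61616");
        context.addComponent("jms", JmsComponent.jmsComponentAutoAcknowledge(cf));

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Both servers use the same clientId/durableSubscriptionName;
                // the broker rejects a second connection with the same client id,
                // so only one instance consumes at a time.
                from("jms:topic:orders?clientId=ingestion&durableSubscriptionName=ingestion-sub")
                    .to("log:consumed");
            }
        });

        context.start();
        Thread.sleep(Long.MAX_VALUE);
    }
}
```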

Best solution to "hot-deploy" Apache Camel routes and beans classes?

We have been using Apache Camel as a standalone application for about 2 years. It works very well, but the need to restart the process to upgrade the application each time we add new routes has become an issue.
We are searching for a new deployment solution that would allow us to deploy new routes without having to restart the main process.
There is no problem for us to rewrite our Java DSL routes in XML, but the issue is that most of them (and probably future ones too) make use of custom beans, processors, components, etc. to inject logic that is too complex to be expressed in a pure XML/Java DSL route.
After searching through the Camel documentation, hot deploying XML routes seems to be possible with Spring Boot or with Karaf/OSGi.
But I have no idea whether it is possible to "hot-deploy" the bean, processor and component classes that are needed by these XML routes. OSGi/Karaf looks promising, but I have never used either technology and it is not easy to grasp their purpose at first glance.
Which deployment method and which technology would allow us to "hot deploy" routes and bean classes?
If you want to hot-deploy Java code, then you need an application-server-like platform such as Apache Karaf/ServiceMix/JBoss Fuse, or a traditional one like Tomcat, JBoss, WildFly, etc. (for WAR files).
Then you can do a "hot deployment" by redeploying the application.
Hot-deploying a single class or a few classes inside a running JVM is harder, and you would need special tooling such as JRebel.
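If only the XML route definitions change (not the bean classes they reference), one alternative sometimes used is to load new definitions into the running CamelContext. A minimal sketch, assuming the Camel 2.x ModelCamelContext API and a hypothetical file path:

```java
import java.io.FileInputStream;
import java.io.InputStream;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.model.RoutesDefinition;

public class XmlRouteReloader {
    // Parses a <routes>...</routes> XML file and adds the definitions to a
    // running context; this only swaps route definitions, not bean classes.
    public static void reload(DefaultCamelContext context) throws Exception {
        try (InputStream is = new FileInputStream("/etc/camel/routes.xml")) {
            RoutesDefinition routes = context.loadRoutesDefinition(is);
            context.addRouteDefinitions(routes.getRoutes());
        }
    }
}
```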
You could try using camel-blueprint to set up the context/routes.
By exposing your beans as OSGi services, you can use those beans in routes.
I would suggest you take a look at the Apache Camel blueprint Maven archetype and the Camel component archetype to get started.
Hot deploy in Apache Karaf is simple: just drop the bundle into $KARAF_HOME/deploy and it will be reloaded automatically.
References: camel-archetype-component, camel-archetype-blueprint
Do let me know if this helps.
PS: I don't have enough reputation for commenting, hence the answer.

Is there a way to programmatically find the available CamelContext

I am running Camel inside the Play Framework and it all works pretty well, but when the Play server is running in development mode it dynamically reloads classes and starts a new Camel context each time.
I can hook into the Play restart and shut down the Camel context by calling stop() on the CamelContext, but I would prefer to be able to check whether there is already a context running and, if so, just use that.
This must be possible, as hawtio shows me a list of the Camel contexts.
I don't use Spring to configure Camel.
You can use JMX to see what other CamelContexts are in the JVM MBean server. This is what hawtio uses to detect which Camel instances are running in the JVM.
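For illustration, a minimal sketch of the JMX lookup, assuming JMX management is enabled for the contexts (the default "org.apache.camel" management domain):

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class FindCamelContexts {
    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();

        // Camel registers each context under org.apache.camel:type=context,...
        Set<ObjectName> contexts =
            mbs.queryNames(new ObjectName("org.apache.camel:type=context,*"), null);

        for (ObjectName name : contexts) {
            // The CamelId attribute holds the context name.
            System.out.println(name + " -> " + mbs.getAttribute(name, "CamelId"));
        }
    }
}
```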
As an alternative, you may fiddle with the Container SPI to receive events when a CamelContext is created. But this requires a way to hook into it: https://github.com/apache/camel/blob/master/camel-core/src/main/java/org/apache/camel/spi/Container.java

Apache-Camel, ActiveMQ, camel-jms and Fuse -> why do I need them?

I am still struggling to understand some of Camel's main features and limitations.
My objective is to implement a demo application that can migrate Camel endpoints. To achieve this, everyone suggested that I should use the Camel load-balancer pattern with the failover construct.
To achieve this objective, people have suggested Fuse and ActiveMQ. Some have even suggested JBoss, but I am lost.
I understand that Camel needs an implementation of a JMS server. For this I can use ActiveMQ, a free implementation of a JMS server.
However, Camel also provides the jms component. What is this? Is it a client implementation of JMS? If so, should I not be using an ActiveMQ client for JMS? Could someone provide a working example?
With ActiveMQ and JMS understood, I can then try to find out why people suggest Fuse. I want my implementation to be as simple as possible. Why do I need Fuse? The Camel + ActiveMQ combination has the load balancer pattern with the failover mechanism, right?
I am lost in this sea of new technologies; if someone could give me a direction I would be thankful.
Camel provides two components. The first is the jms component, which is a generic API for working against any JMS server. The other is the activemq component, which uses the ActiveMQ API for working with ActiveMQ message brokers. The activemq component is the default component within things like ServiceMix/Fuse, using an internal broker (not a networked/external broker).
If you are connecting to ActiveMQ, you can use either the activemq component or the jms component. The jms component will not start up a broker automatically; you would need to do this yourself.
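For illustration, a minimal sketch of pointing the activemq component at an external broker, assuming the activemq-camel module is on the classpath; broker URLs and queue names are hypothetical:

```java
import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class ExternalBrokerExample {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();

        // The failover: transport makes the JMS client reconnect to whichever
        // broker in the list is currently available.
        context.addComponent("activemq",
            ActiveMQComponent.activeMQComponent("failover:(tcp://broker1:61616,tcp://broker2:61616)"));

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("activemq:queue:demo.in")
                    .log("received: ${body}")
                    .to("activemq:queue:demo.out");
            }
        });

        context.start();
        Thread.sleep(Long.MAX_VALUE);
    }
}
```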
FuseSource == JBoss Fuse == Apache ServiceMix + some add-ons. For argument's sake, I'm going to refer to all three of these as ServiceMix.
ServiceMix is an enterprise service bus; you can look up the term on Wikipedia if you're not familiar with the concept. It uses Apache Camel to define routes between your components, implementing a number of integration patterns as you need. ServiceMix deploys by default with Apache CXF, for JAX-RS and JAX-WS services, and Apache ActiveMQ, a JMS message broker. Using Camel, you can tell ServiceMix that when a REST API is called, it should run a series of steps, each step decoupled from the one before it.
JBoss Fuse (the enterprisey, costs-money edition) comes with some additional components around failover. Some of these are present in ServiceMix (namely, you can run ServiceMix in a hot-standby mode, waiting for the primary to go down). The Camel load balancer pattern doesn't really mean anything regarding replication, except that a message coming from one endpoint can be delivered to any one of a set of N endpoints: http://camel.apache.org/load-balancer.html
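For illustration, a tiny sketch of that load balancer pattern in the Java DSL, with hypothetical endpoint URIs: the message goes to the first endpoint and only falls over to the next one when an exception occurs.

```java
import org.apache.camel.builder.RouteBuilder;

public class FailoverRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:start")
            .loadBalance().failover()
                .to("http://service-a/api")   // tried first
                .to("http://service-b/api")   // used only if service-a fails
            .end();
    }
}
```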
On the flip side, take a look at ServiceMix's failover: http://servicemix.apache.org/docs/4.4.x/users-guide/failover.html
I think, based on your question, you're referring to failover after a system failure (needing to work against a new instance) and not the Camel load balancer component (which is likely where the confusion is coming from, on both the community side and your side).
Start by reading these: Camel in Action, ActiveMQ in Action.
