I am running Camel inside Play Framework and it all works pretty well, but when the Play server runs in development mode it does dynamic class reloading and starts a new Camel context on each reload.
I can hook into the Play restart and shut down the Camel context by calling stop() on the CamelContext, but I would prefer to check whether there is already a context running and, if so, just use that.
This must be possible, as hawtio shows me a list of the running Camel contexts.
I don't use Spring to configure Camel.
You can use JMX to see what other CamelContexts are registered in the JVM's MBean server. This is what hawtio uses to detect which Camel applications are running in the JVM.
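A minimal sketch of that check, querying the platform MBean server (this assumes Camel's JMX management is enabled, which it is by default):

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class FindCamelContexts {
    public static void main(String[] args) throws Exception {
        // Camel registers each context in the "org.apache.camel" JMX domain
        // with type=context; query the platform MBean server for all of them.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        Set<ObjectName> contexts = server.queryNames(
                new ObjectName("org.apache.camel:type=context,*"), null);

        for (ObjectName name : contexts) {
            // The "name" key property holds the CamelContext name.
            System.out.println("Found CamelContext MBean: " + name);
        }
    }
}
```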
As an alternative, you can use the Container SPI to get a callback whenever a CamelContext is created. But this requires a way to hook into it: https://github.com/apache/camel/blob/master/camel-core/src/main/java/org/apache/camel/spi/Container.java
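A minimal sketch of that SPI as it exists in Camel 2.x (the class name here is just illustrative):

```java
import org.apache.camel.CamelContext;
import org.apache.camel.spi.Container;

// Register a global Container so Camel calls back whenever a new
// CamelContext is created anywhere in the JVM.
public class ContextTracker implements Container {

    @Override
    public void manage(CamelContext camelContext) {
        // Called once per CamelContext as it is created; you could keep a
        // reference here and reuse the context instead of starting a new one.
        System.out.println("CamelContext created: " + camelContext.getName());
    }

    public static void install() {
        Container.Instance.set(new ContextTracker());
    }
}
```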
Background: We have written a Spring Boot Apache Camel based ingestion service which runs Camel routes that ingest data from a shared directory (Excel files) and Jira (API calls). The Jira-based routes are fired by a scheduler at a pre-defined frequency. Users configure multiple integrations in the system, and each integration maps to one Camel route. In production, there will be 10 instances of the ingestion service running.
Problem Statement: For each integration using Jira, only one ingestion instance should fire and process the route; the others should not, if there is already a running instance for that specific route.
Question: How do I make sure only one ingestion instance processes a route while the rest ignore it (i.e. they may start but stop after doing nothing)?
Analysis: It seems the Camel cluster component could be used, but I am not sure whether it works in conjunction with the scheduler component. In addition, since the cluster component relies on backing components such as a cache, the file system, etc., the preferred solution would be one that does not require any new components in the architecture. A custom solution may also be possible, but the preference is for an out-of-the-box solution.
I have an Apache Camel application deployed on two servers, and both consume from a JMS endpoint. I want to make sure that only one Camel route is consuming from the JMS endpoint at a time. The only clustering option available to me is using a database as a lock store. Does Apache Camel provide such a feature?
I think the easiest way is to consume from a topic instead of a queue.
On connection, use the same subscription name (a durable subscription). As far as I know, only the first connection will be allowed.
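A minimal sketch of what that looks like in the Camel Java DSL, assuming the camel-jms component and illustrative topic/subscription names:

```java
import org.apache.camel.builder.RouteBuilder;

public class SingleActiveConsumerRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Both application instances use the same clientId and durable
        // subscription name, so the broker only accepts one of the two
        // connections; the second instance effectively does nothing.
        from("jms:topic:orders"
                + "?clientId=orders-consumer"
                + "&durableSubscriptionName=orders-sub")
            .to("log:orders");
    }
}
```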
I have previous experience with Apache Camel and JBoss Fuse, but I am new to OpenShift 3.x. I am trying to deploy a Camel application developed with the Java DSL and Spring DI.
I am using an external properties file to load the consumer and producer endpoints in Camel. In JBoss Fuse I used the ConfigAdmin service with update-strategy="reload", as shown below in my blueprint.xml:
<!-- OSGI blueprint property placeholder -->
<cm:property-placeholder id="routesConfig" persistent-id="org.sample.camel.routes.config" update-strategy="reload"/>
The above configuration reloads the CamelContext automatically whenever the properties file changes.
How can I achieve the same functionality using the fis-java-openshift:1.0 template image in OpenShift 3.x?
We wrote some docs on how to work with configuration.
Generally speaking, on Kubernetes the use of service discovery and Kubernetes Secrets avoids most of the use cases for environment-specific configuration.
Ideally we would use the same Continuous Delivery pipelines to change code or configuration to ensure things are properly tested before they hit production.
However, if you really want to reload configuration on the fly in Java containers, you can store the configuration in a ConfigMap and mount it as a file inside the pod, then have the Java code watch that file (e.g. with Spring Boot or ConfigAdmin).
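If you go that route, here is a minimal sketch of the file-watching part in plain Java NIO (the mount path is an assumption about where the ConfigMap volume ends up inside the pod):

```java
import java.nio.file.*;

public class ConfigMapWatcher {
    public static void main(String[] args) throws Exception {
        // Assumed mount point of the ConfigMap volume inside the pod.
        Path config = Paths.get("/deployments/config/application.properties");

        WatchService watcher = FileSystems.getDefault().newWatchService();
        config.getParent().register(watcher,
                StandardWatchEventKinds.ENTRY_MODIFY,
                StandardWatchEventKinds.ENTRY_CREATE);

        while (true) {
            WatchKey key = watcher.take(); // blocks until the directory changes
            for (WatchEvent<?> event : key.pollEvents()) {
                if (config.getFileName().equals(event.context())) {
                    // Reload the properties and restart/refresh the CamelContext here.
                    System.out.println("Configuration changed, reloading...");
                }
            }
            key.reset();
        }
    }
}
```

Note that Kubernetes typically updates mounted ConfigMap files by swapping symlinks, so watching the containing directory (rather than the file handle alone) is the safer option.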
I saw that hawtio has a dashboard that shows the flow of a route into each of its processors, along with a count for each call made. I looked into Apache Camel, and I believe you are reading the JVM to get the metrics for the routes and their processors, but what I don't understand is how you are able to construct the block diagram and the exact flow into each processor.
Can someone help me out with this? I am trying to build a UI similar to hawtio, specifically for Apache Camel, and I want to know how it can be done.
Hawtio gets its application insights via Jolokia, which provides an HTTP bridge to JMX. In other words, all the information you need is exposed by Camel's MBeans via JMX.
So, you have two options to get hold of Camel's JMX info:
1. Base your own UI on Jolokia as well (there is a sketch of this below).
2. Go old school and use a JSR-160 connector.
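For the Jolokia option, here is a minimal sketch of reading route counters over its HTTP/JSON bridge (the host, port and /jolokia path are assumptions about where the agent is exposed; the MBean pattern is the one Camel registers its route MBeans under):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class JolokiaRouteStats {
    public static void main(String[] args) throws Exception {
        // Jolokia "read" request for the ExchangesCompleted/ExchangesFailed
        // attributes of every route MBean, across all Camel contexts.
        String url = "http://localhost:8181/jolokia/read/"
                + "org.apache.camel:context=*,type=routes,*"
                + "/ExchangesCompleted,ExchangesFailed";

        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // The JSON body contains one entry per route MBean with its counters.
        System.out.println(response.body());
    }
}
```

The route structure needed to draw the diagram is available the same way; for example, the CamelContext MBean exposes a dumpRoutesAsXml operation that returns the route definitions.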
I am still struggling to understand some of Camel's main features and limitations.
My objective is to implement a demo application that can migrate Camel endpoints. To achieve this, everyone suggested that I use the Camel load-balancer pattern with the failover construct.
To achieve this objective, people have suggested Fuse and ActiveMQ. Some have even suggested JBoss, but I am lost.
I understand that Camel needs an implementation of a JMS server. For this I can use ActiveMQ, a free JMS server implementation.
However, Camel also provides the jms component. What is this? Is it a JMS client implementation? If so, should I not be using an ActiveMQ client for JMS? Could someone provide a working example?
Once I understand ActiveMQ and JMS, I can then try to find out why people suggest Fuse. I want my implementation to be as simple as possible. Why do I need Fuse? The Camel + ActiveMQ combination has the load-balancer pattern with the failover mechanism, right?
I am lost in this sea of new technologies; if someone could give me a direction I would be thankful.
Camel provides two components here. The first is the jms component, a generic API for working against any JMS server. The other is the activemq component, which uses the ActiveMQ API for working with ActiveMQ message brokers. The activemq component is the default in distributions like ServiceMix/Fuse, where it uses an internal broker (not a networked/external broker).
If you are connecting to ActiveMQ, you can use either the activemq component or the jms component. The jms component will not start a broker automatically; you would need to do that yourself.
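For a concrete starting point, here is a minimal sketch of using the activemq component purely as a client against an external broker (the broker URL and queue name are placeholders):

```java
import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class ActiveMQClientExample {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();

        // Point the activemq component at an external (networked) broker;
        // nothing here starts a broker inside this JVM.
        context.addComponent("activemq",
                ActiveMQComponent.activeMQComponent("tcp://localhost:61616"));

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("activemq:queue:orders")
                    .log("Received: ${body}");
            }
        });

        context.start();
        Thread.sleep(60_000); // keep the JVM alive briefly for the demo
        context.stop();
    }
}
```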
FuseSource == JBoss Fuse == Apache ServiceMix + some add-ons. For argument's sake, I'm going to refer to all three of these as ServiceMix.
ServiceMix is an enterprise service bus; you can look up the term on Wikipedia if you're not familiar with the concept. It uses Apache Camel to define routes between your components, implementing whichever integration patterns you need. ServiceMix ships by default with Apache CXF, for JAX-RS and JAX-WS services, and with Apache ActiveMQ, a JMS message broker. Using Camel, you can tell ServiceMix that when a REST API is called, it should perform a series of steps, each step decoupled from the one before it.
JBoss Fuse (the enterprise, costs-money edition) comes with some additional components around failover. Some of these are present in ServiceMix (namely, you can run ServiceMix in hot-standby mode, waiting for the primary to go down). The Camel load-balancer pattern doesn't really mean anything with regard to replication; it just means that a message coming from one endpoint can be delivered to any one of a set of N endpoints: http://camel.apache.org/load-balancer.html
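For completeness, a minimal sketch of that failover load balancer in the Java DSL (the endpoint URIs are illustrative); it retries a message against the next endpoint only when the previous one throws an exception, and it does not replicate anything:

```java
import org.apache.camel.builder.RouteBuilder;

public class FailoverRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:start")
            .loadBalance().failover()
                // Messages go to the first endpoint; only if it fails does
                // delivery fall over to the next one in the list.
                .to("http://serviceA/orders")
                .to("http://serviceB/orders")
            .end();
    }
}
```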
On the flip side, take a look at ServiceMix's failover: http://servicemix.apache.org/docs/4.4.x/users-guide/failover.html
Based on your question, I think you're referring to failover after a system failure (i.e. needing to work against a new instance), and not the Camel load-balancer component (which is likely where the confusion is coming from, both on the community side and yours).
Start by reading these: Camel in Action and ActiveMQ in Action.