Stop Apache Camel from starting LevelDB until the route using it is started - apache-camel

I have an Apache Camel route that uses LevelDB as the aggregation repository. My problem is that when the Camel context is started, the LevelDBAggregationRepository is started automatically by Camel even if the route that uses it is switched off and not started.
Is there a way of preventing this?
Why is this important for me? I want my application to be highly available, so I want to share the same LevelDB between nodes. But the LevelDBAggregationRepository unfortunately does not support access by multiple processes at a time, and I have no SQL database available for the JDBC aggregation repository.
So, my current solution attempt is to use a route policy that ensures that only one node at a time has the Camel Route enabled (determined by leader election with Zookeeper). However, when I start a second node with the route turned off, its Camel Context tries to launch the LevelDB anyway and then all hell breaks loose.
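For illustration, a sketch of that setup in the Java DSL, assuming camel-zookeeper's ZooKeeperRoutePolicy for the leader election and camel-leveldb; the endpoint, file path, and correlation header are made up, and note that this alone does not solve the problem above, since the repository is still created (and started) with the context:

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.leveldb.LevelDBAggregationRepository;
import org.apache.camel.component.zookeeper.policy.ZooKeeperRoutePolicy;
import org.apache.camel.processor.aggregate.UseLatestAggregationStrategy;

public class HaAggregationRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Only the elected leader (enabledCount = 1) keeps this route running.
        ZooKeeperRoutePolicy policy =
                new ZooKeeperRoutePolicy("zookeeper:localhost:2181/camel/leader", 1);

        LevelDBAggregationRepository repo =
                new LevelDBAggregationRepository("orders", "data/leveldb.dat");

        from("direct:in")
            .routePolicy(policy)
            .autoStartup(false) // route is off, but the repository still starts with the context
            .aggregate(header("corrId"), new UseLatestAggregationStrategy())
                .aggregationRepository(repo)
                .completionSize(10)
                .to("log:aggregated");
    }
}
```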

Related

How should Camel SFTP .preMove work during a graceful shutdown?

We are using Apache Camel SFTP 2.25.4 as a poller (JSch) to read XML files. There are two Spring Boot 2.6.10 applications (the same application deployed twice for redundancy) reading from the same SFTP folder 'inbound/orders' with the configuration:
sftp://user@localhost:2222/inbound/orders?preMove=$simple{file:parent}/.processing_$simple{sys.hostname}/$simple{file:onlyname}
When either of the applications is shut down for maintenance (graceful shutdown), the preMove file becomes orphaned. Is there a way to ensure Camel fully consumes this 'preMove' file before shutting down the route?
I expect some may suggest an idempotent component to handle this, which is something we are considering, but (dare I say) we are trying to avoid the overhead of a cache lookup during the read operation (we consider this a tier 0 service, so we need to avoid any dependencies).
I have tried other styles of control such as the markerFile and rename strategy but none seem to work as well as the preMove. The preMove works really well.
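One lever worth trying for the graceful-shutdown side (a sketch, not a confirmed fix for the orphaned preMove case) is the per-route shutdownRunningTask option, which asks Camel to let the task in flight finish within the shutdown timeout; in the Java DSL the $simple{...} placeholders from the properties file become plain ${...}:

```java
import org.apache.camel.ShutdownRunningTask;
import org.apache.camel.builder.RouteBuilder;

public class SftpOrderRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("sftp://user@localhost:2222/inbound/orders"
                + "?preMove=${file:parent}/.processing_${sys.hostname}/${file:onlyname}")
            // On graceful shutdown, finish the work in flight instead of
            // stopping between the preMove and the final commit.
            .shutdownRunningTask(ShutdownRunningTask.CompleteAllTasks)
            .to("direct:process");
    }
}
```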

JCachePolicy in Camel 2

I want to implement some content-caching on my Camel 2.23.2 routes. While researching I came across Camel's JCache component which, according to the documentation, should have a JCachePolicy that would:
"The JCachePolicy is an interceptor around a route that caches the "result of the route" - the message body - after the route is completed. If next time the route is called with a "similar" Exchange, the cached value is used on the Exchange instead of executing the route."
Which is basically exactly what I'm looking for. However, as it turns out, this policy is only available in Camel 3.x.
So my question is, how might I recreate this functionality in Camel 2.23.2?
The solution was quite simple. In the Camel 3.x branch the policy package contains only two files: the actual policy and the processor.
As it turns out, pending more testing, these files work very well with little adjustment.
On the policy you need to change the method definitions for "wrap" and "beforeWrap": in Camel 2 these take a RouteContext, not the Route, but getting from the RouteContext to the route is simple enough.
On the processor, the main change is using the correct package for the "DelegateAsyncProcessor" it extends.
With those changes everything seemingly works as documented. I used the Ehcache Spring Boot starter in my pom, without any further change, to have it work with Ehcache as its cache manager.
One other remark: the model you want to cache needs to be serializable.
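A sketch of the signature change described above, against the Camel 2 Policy SPI (the method bodies, elided here, are the ones copied over from the Camel 3 branch):

```java
import org.apache.camel.Processor;
import org.apache.camel.model.ProcessorDefinition;
import org.apache.camel.spi.Policy;
import org.apache.camel.spi.RouteContext;

// Camel 2 backport: wrap/beforeWrap take a RouteContext instead of a Route.
public class JCachePolicy implements Policy {

    @Override
    public void beforeWrap(RouteContext routeContext, ProcessorDefinition<?> definition) {
        // unchanged from the Camel 3 original
    }

    @Override
    public Processor wrap(RouteContext routeContext, Processor processor) {
        // routeContext.getRoute() gets you back to the route definition,
        // so the Camel 3 body can be reused largely as-is; it should return
        // the backported caching processor wrapping 'processor'.
        return processor;
    }
}
```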

Liquibase applies patches after Camel starts, leading to an error

I'm struggling to get Liquibase and Camel to work seamlessly.
It happens that Camel starts its routes before Liquibase applies its patches, which leads to an error if a route accesses a table that is not in the database yet.
As a workaround, I've moved the start of the routes into a delayed thread. It works, but not in every case: e.g. Weld does not propagate context into new threads, so I cannot do anything complicated in them.
Is there a way to delay Camel's start, or to make Liquibase apply its patches earlier?
Set autoStartup=false on your Camel routes and start them once Liquibase has patched the database. Depending on your use case, you can use a timer, or check the LOCKED flag on the Liquibase-generated table 'databasechangeloglock' to see whether Liquibase is still patching.
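A minimal sketch of that answer in Camel 2, with a made-up route id and endpoint; how you trigger startDbRoutes (a timer, polling the LOCKED flag, or an application event fired when Liquibase finishes) depends on your setup:

```java
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;

public class DelayedStartRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Registered with the context, but not started with it.
        from("jpa:com.example.Order").routeId("dbRoute")
            .autoStartup(false)
            .to("log:orders");
    }

    // Call once Liquibase has finished patching (e.g. when the LOCKED flag
    // on 'databasechangeloglock' goes back to false).
    public static void startDbRoutes(CamelContext context) throws Exception {
        context.startRoute("dbRoute"); // Camel 3: context.getRouteController().startRoute("dbRoute")
    }
}
```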

Clarification required on how Camel Quartz works

I have a simple question that I can't find information on regarding Apache Camel-Quartz: for Camel-Quartz to work, do you have to deploy inside a web container like Tomcat, so that the application is always alive and knows when to run?
I'm asking because if you deploy your Camel application in a standalone JVM, I don't see how the application will be smart enough to know when to run.
thanks
Quartz is embedded in your Camel application, so when you start Camel, Quartz is also started. It then knows when to run, for as long as you keep the Camel application running.
There is no magic in there. It's just Java code that runs, and Quartz is also just Java code. It does not require a special server: Quartz is just a library (some JAR files) that you run together with your own application.
Quartz simply has the logic for scheduling jobs (it's like a big clockwork): it knows what time it is and triggers jobs according to how you tell it to.
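To make the standalone case concrete, here is a minimal sketch using the quartz2 component and Camel's own Main class, with a made-up cron expression; run it as a plain java process and the trigger fires for as long as the JVM lives:

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;

public class QuartzTickRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Fires every 30 seconds; no web container involved.
        from("quartz2://myGroup/myTimer?cron=0/30+*+*+*+*+?")
            .to("log:tick");
    }

    public static void main(String[] args) throws Exception {
        Main main = new Main();
        main.addRouteBuilder(new QuartzTickRoute());
        main.run(); // blocks; Quartz keeps triggering until the JVM exits
    }
}
```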

Configuring logging for ActiveMQ 5.5 on Tomcat 6 with web app using SLF4j and logback

I would like my web app to log using SLF4J and Logback. However, I am using ActiveMQ, which requires that some of its jars go in /usr/share/tomcat6/lib (this is because the queues are defined outside of the web app, so the classes to support them must be at container level).
ActiveMQ 5.5+ requires slf4j-api, so that jar has to go in too. Because SLF4J is now starting at container level, it needs a logging backend added or it will simply no-op. Thus, logback-core and logback-classic go in as well.
After quite some frustration I got this working well enough that I can tidy it up shortly. I needed to configure logback to use a JNDI lookup to get the context. Then it can lookup logback-kenobi.xml in my web app and have a separate configuration there.
However, I'm wondering if this is the best way to do this. For one, the context handling appears not to support the groovy format. I did have a logback.groovy in my web app that logged to console when I was developing locally (which means that Eclipse WTP works nicely) but logs to file and to Splunk Storm when everywhere else. I'm going to want to do something similar with this setup but I'm not sure if I should do that by overwriting the logback-kenobi.xml or some other method.
Note that I don't, currently, need Tomcat itself to log with SLF4J, although I am planning to do that. Nor do I really need ActiveMQ to log with SLF4J, but I did need it to stop spewing debug messages every 30s as it was doing. I am aware of tomcat-slf4j-logback, but I don't believe it is directly useful here, since the issue is ActiveMQ requiring logging, not Tomcat.
However, I'm wondering if this is the best way to do this.
Best is an opinion, working is a fact.