I'm struggling to get Liquibase and Camel to work seamlessly.
It happens that Camel starts its routes before Liquibase applies its patches, leading to an error whenever a route accesses a table that is not in the database yet.
As a workaround, I've moved the start of the routes into a delayed thread. It works, but not in every case: e.g. Weld does not propagate its context into new threads, so I cannot do anything complicated in them.
Is there a way to delay Camel's startup, or to make Liquibase apply its patches earlier?
Set autoStartup=false on your Camel routes and start them once Liquibase has patched the database. Depending on your use case, you can use a timer, or check the LOCKED flag on the Liquibase-generated table 'databasechangeloglock' to see whether Liquibase is still patching.
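A minimal sketch of that idea in the Java DSL, assuming a datasource bean named `myDataSource` and made-up route ids (the control bus starts the real route once the lock table reports unlocked):

```java
import org.apache.camel.builder.RouteBuilder;

public class StartAfterLiquibaseRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // The route that touches Liquibase-managed tables: do not start it automatically.
        from("direct:orders").routeId("ordersRoute")
            .autoStartup(false)
            .to("sql:select * from orders?dataSource=#myDataSource");

        // Poll the Liquibase lock table; once LOCKED is false, start the
        // real route and stop this checker via the control bus.
        from("timer:liquibaseCheck?period=2000").routeId("liquibaseCheck")
            .to("sql:select LOCKED from DATABASECHANGELOGLOCK"
                + "?dataSource=#myDataSource&outputType=SelectOne")
            .filter(simple("${body} == false"))
                .to("controlbus:route?routeId=ordersRoute&action=start")
                // async so the checker route can stop itself safely
                .to("controlbus:route?routeId=liquibaseCheck&action=stop&async=true");
    }
}
```

A plain timer with a fixed delay works too, but polling the lock table avoids guessing how long the patches take.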
We are using the Apache Camel SFTP component 2.25.4 as a poller (JSch) to read XML files. There are two Spring Boot 2.6.10 applications (the same application, deployed twice for redundancy) reading from the same SFTP folder 'inbound/orders' with the configuration:
sftp://user@localhost:2222/inbound/orders?preMove=$simple{file:parent}/.processing_$simple{sys.hostname}/$simple{file:onlyname}
When either of the applications is shut down for maintenance (graceful shutdown), the preMove file becomes orphaned. Is there a way to ensure Camel fully consumes this preMove file before shutting down the route?
I expect some may suggest an idempotent repository to handle this, which is something we are considering, but (dare I say) we are trying to avoid the overhead of a cache lookup during the read operation (we consider this a tier 0 service, so we need to avoid any dependencies).
I have tried other styles of control, such as markerFile and the rename strategy, but none work as well as preMove. The preMove works really well.
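For reference, one hedged avenue is leaning on Camel's graceful shutdown, which waits for in-flight exchanges up to a timeout; the sketch below combines the question's endpoint with `shutdownRunningTask=CompleteCurrentTaskOnly` (a real consumer option) and a generous timeout. Whether this closes the orphaned-preMove window in every case would need testing; the timeout value and route id are assumptions:

```java
import org.apache.camel.builder.RouteBuilder;

public class SftpOrdersRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Give in-flight exchanges up to 5 minutes to finish before a forced shutdown.
        getContext().getShutdownStrategy().setTimeout(300);

        from("sftp://user@localhost:2222/inbound/orders"
                + "?preMove=$simple{file:parent}/.processing_$simple{sys.hostname}/$simple{file:onlyname}"
                // finish the file currently being processed, but pick up no new ones
                + "&shutdownRunningTask=CompleteCurrentTaskOnly")
            .routeId("sftpOrders")
            .to("direct:processOrder");
    }
}
```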
I want to implement some content-caching on my Camel 2.23.2 routes. While researching I came across Camel's JCache component which, according to the documentation, should have a JCachePolicy that would:
"The JCachePolicy is an interceptor around a route that caches the "result of the route" - the message body - after the route is completed. If next time the route is called with a "similar" Exchange, the cached value is used on the Exchange instead of executing the route."
Which is basically exactly what I'm looking for. However, as it turns out this policy is only available in Camel 3.x and higher.
So my question is, how might I recreate this functionality in Camel 2.23.2?
The solution was quite simple. In the Camel 3.x branch the policy package contains only two files: the actual policy and the processor.
As it turns out, pending more testing, these files work very well with little adjustment.
On the policy you need to change the signatures of the "wrap" and "beforeWrap" methods: in 2.x they take a RouteContext, not the Route. But getting from the RouteContext to the route is simple enough.
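The adjusted 2.x signatures end up looking roughly like this (a sketch only: `JCachePolicyProcessor` is the second backported file, its constructor arguments and the cache wiring are omitted here):

```java
import org.apache.camel.Processor;
import org.apache.camel.model.ProcessorDefinition;
import org.apache.camel.spi.Policy;
import org.apache.camel.spi.RouteContext;

// Backport sketch: Camel 2.x Policy methods receive a RouteContext
// where the 3.x originals receive a Route.
public class JCachePolicy implements Policy {

    @Override
    public void beforeWrap(RouteContext routeContext, ProcessorDefinition<?> definition) {
        // no-op, as in the 3.x original
    }

    @Override
    public Processor wrap(RouteContext routeContext, Processor processor) {
        // If route information is needed, the RouteContext exposes it
        // (e.g. routeContext.getRoute() returns the route definition in 2.x).
        return new JCachePolicyProcessor(/* cache, */ processor);
    }
}
```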
On the processor, the main change is importing DelegateAsyncProcessor, which it extends, from the correct 2.x package.
With those changes everything seemingly works as documented. I used the Ehcache Spring Boot starter in my pom, without any further changes, to have it use Ehcache as its cache manager.
One other remark: to use this, the model you want to cache needs to be serializable.
I have an Apache Camel Route that uses LevelDB as the aggregation repository. My problem is that when the Camel Context is started the LevelDBAggregationRepository is automatically started by Camel even if the Camel Route it is used in is off and not started.
Is there a way of preventing this?
Why is this important for me? I want my application to be highly available, so I want to share the same LevelDB between nodes. But the LevelDBAggregationRepository unfortunately does not support using multiple processes at a time, and I have no SQL DB available for the JDBC Aggregation Repository.
So, my current solution attempt is to use a route policy that ensures that only one node at a time has the Camel Route enabled (determined by leader election with Zookeeper). However, when I start a second node with the route turned off, its Camel Context tries to launch the LevelDB anyway and then all hell breaks loose.
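The leader-election part of that attempt can be sketched with camel-zookeeper's route policy (the ZooKeeper address and znode path below are made up). Note that the policy only gates the route, which is exactly the problem described: any aggregation repository wired into the route is still created and started with the CamelContext on every node.

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.zookeeper.policy.ZooKeeperRoutePolicy;

public class LeaderOnlyRoute extends RouteBuilder {
    @Override
    public void configure() {
        // At most 1 instance (the elected leader) runs this route at a time.
        ZooKeeperRoutePolicy policy =
                new ZooKeeperRoutePolicy("zookeeper:zk1:2181/camel/leader", 1);

        // Only the leader consumes; followers keep the route stopped,
        // but the context (and its repositories) still starts everywhere.
        from("direct:orders")
                .routePolicy(policy)
                .to("mock:aggregated");
    }
}
```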
The application has several Camel Contexts, each doing its own thing; they do not need to communicate with each other. They are in the same module because they share some classes.
Are there any issues to watch out for with multiple contexts in a single OSGi module?
What is the recommendation and best-practice in this case ?
It is fairly subjective. IMHO, the two big things to consider are process control and upgrade impact. Remember: during a bundle upgrade, all the contexts will stop and then restart.
You still have fine-grained process control (start, stop, pause, resume) at the Camel Context and route level without having to rely on bundle start/stop.
If you want fine-grained upgrades, you can put the Java classes in their own bundle and export the packages, then put the Camel Contexts in their own bundles and import the Java classes from the shared bundle. You can then upgrade individual Camel Contexts without having to upgrade all of them at once (and force them all to stop).
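In MANIFEST.MF terms the split looks roughly like this (bundle and package names are placeholders); first the shared-classes bundle, then one of the per-Context bundles:

```
Bundle-SymbolicName: com.example.shared
Export-Package: com.example.shared.model;version="1.0.0"

Bundle-SymbolicName: com.example.context-a
Import-Package: com.example.shared.model;version="[1.0,2.0)",
 org.apache.camel
```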
One single recommendation: have stateless beans/processors/aggregators.
All state related to the processing of your message body must live in the Exchange headers/properties.
static final constants are good.
Configuration read only properties are fine too.
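As a sketch of the rule above (the header name is made up), a stateless processor keeps no mutable fields and reads/writes everything through the Exchange:

```java
import org.apache.camel.Exchange;
import org.apache.camel.Processor;

public class RetryCountingProcessor implements Processor {

    // OK: static final constants and read-only configuration
    private static final String HEADER_RETRIES = "myRetryCount";

    // NOT OK: a mutable instance field would be shared across exchanges
    // and break on restart or when scaling out.

    @Override
    public void process(Exchange exchange) {
        // per-message state comes from a header, defaulting to 0
        Integer retries = exchange.getIn().getHeader(HEADER_RETRIES, 0, Integer.class);
        // ... do the work ...
        exchange.getIn().setHeader(HEADER_RETRIES, retries + 1);
    }
}
```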
I'm digging into a project that uses Camel routes with the Quartz scheduler. I'm a little unfamiliar with the environment, but I'm trying to figure out what's happening and how everything fits together while making a change in functionality. I'm just not sure how.
The component is a job manager deployed to Apache Karaf. If I have a schedule (Quartz cron) for a job that is active, the job runs when the cron expression matches. The schedule can be disabled (which toggles the autoStartup flag, from what I can tell). This works as expected.
If I disable a schedule, wait for a match on the cron expression, and then re-enable the schedule, the job runs anyway. I'd like to change this behaviour so that schedules only execute for cron matches that occur while the schedule is active, and do not "catch up" on matches missed while it was disabled. Is this possible?
I see a similar question was asked last October, but never answered - Camel Quartz route undesired job execution at route startup
On the Quartz trigger there is a misfire instruction. For a CronTrigger, MISFIRE_INSTRUCTION_DO_NOTHING (which equals 2) skips missed fire times and simply waits for the next matching one, whereas MISFIRE_INSTRUCTION_IGNORE_MISFIRE_POLICY (which equals -1) fires all the missed executions as soon as possible.
Unfortunately, I don't know for certain how to set this from the Camel Quartz component, but adding something like trigger.misfireInstruction=2 to the endpoint URI might work.
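Assuming the component's trigger.* URI options are passed through to the trigger (worth verifying against your Camel version), the endpoint could look like this; the group, job name, and cron expression are placeholders, and 2 is CronTrigger.MISFIRE_INSTRUCTION_DO_NOTHING:

```
quartz2://jobs/orderJob?cron=0+0/5+*+*+*+?&trigger.misfireInstruction=2
```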