Apache Camel endpoint scheme duplication

In Apache Camel, endpoints are associated with scheme strings like "cxf", "ahc", "http" and the like. What happens if two components are built using the same scheme? I don't see any validation in the Camel framework that prevents deploying components with duplicate schemes. Should there be such validation in the first place, or is this by design?
I have a need to re-use available Camel components from the community but change the endpoint scheme to make it unique. For example, I want to use the AMQP component, but in my Blueprint route I would like to use a unique scheme, e.g. <from uri="mydomain-amqp:..."/>. This way I can re-use AMQP as a new component while others keep using the amqp scheme as is.

Camel prevents adding a component using a name/scheme that already exists. Adding your component under a different name would be something like this:
getContext().addComponent("mydomain-amqp", new AMQPComponent());
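For illustration, a minimal sketch of that idea (the broker URL, queue name and class name here are assumptions, and it assumes the standard camel-amqp component is on the classpath):

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.amqp.AMQPComponent;

public class MyDomainAmqpRoutes extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // Register a second AMQP component instance under a custom scheme;
        // the broker URL is just a placeholder assumption.
        getContext().addComponent("mydomain-amqp",
                AMQPComponent.amqpComponent("amqp://localhost:5672"));

        // Routes can now use the unique scheme, while other routes/bundles
        // keep using the standard "amqp" scheme as before.
        from("mydomain-amqp:queue:mydomain.orders")
                .to("log:mydomain");
    }
}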

Related

React context between microfrontends

TL;DR: How do I use a single React context across micro frontends?
The application is divided into multiple micro frontends (React apps), each running on a different port. A container hosts the other micro frontends; each one is a separate React app and the integration happens at runtime. (I followed the martinfowler.com example to implement the micro frontends.)
Currently I pass some data via the URL and browser storage (localStorage/cookies) to the other micro frontends.
I need to pass data across these React apps (MFEs) using React Context.
I have defined a React Context provider in the container (ReactApp1) and stored a value (say color=black). To access this color inside a lower-level micro frontend (ReactApp2), the context needs to be available from any micro frontend. How do I make it available?
(NOTE: I don't want to use localStorage or cookies for global data sharing.)
<Container>
<LowerLevelMFE1/>
<LowerLevelMFE2/>
...
</Container>
Webpack 5's Module Federation (via the ModuleFederationPlugin) is the best way I've found to achieve this architectural pattern.
In essence, you create a "host" application that wraps the React context provider around a "remote" component. This "remote" is actually a component in a standalone application that is exposed as a remote through its webpack config file. In this second application you consume the context as normal using React.useContext.
You can find a full example of this setup here on github.
A few things to note:
1. Both applications will need to consume this Context's source code, so moving it into its own package is preferred.
2. Your context-consuming application will still need what I call, in MFEs, "developer scaffolding". In this case that is a simple component that wraps your <ContextConsumer /> in a <ContextProvider /> imported from the shared Context package mentioned in 1.
3. Remember that this app with scaffolding only exposes the <ContextConsumer />, which you need to make sure to specify in the webpack config.
I think sharing context between micro frontends is an anti-pattern and should be avoided if you can. If you use context to share data, you automatically couple the MFEs that depend on that context, eliminating the benefits of independent deployments by introducing coordination and a dependency.
My advice is that each micro frontend loads the data it needs, and if communication is required, you need an API or a contract to handle it.
I think if you need that type of communication between the MFEs, then they are split wrong.
As Ruben says, it is an anti-pattern.

Sharing state in Camel?

It appears I am running into issues sharing information between routes.
What is the Camel pattern for passing information around?
I looked at exchange properties, but those don't seem to stick around between routes, I think...
For example:
one file has some configurations,
I have a route to read this file,
and several other routes that act based on those configs.
How do I accomplish this?
I thought of putting the values in a singleton bean, but that seems kind of ugly...
Exchange properties are preserved across routes inside Camel (but there are some limitations and special cases, e.g. when using the splitter/aggregator).
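A small sketch of what that looks like in practice (the file location, property name and value are made up): a property set in the first route is still visible in the second route reached via direct:, because the same Exchange flows through both:

import org.apache.camel.builder.RouteBuilder;

public class ConfigSharingRoutes extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // Route 1: read the config file and remember a value as an exchange property.
        from("file:configs?fileName=app.properties&noop=true")
                .setProperty("configuredColor", constant("black"))
                .to("direct:worker");

        // Route 2: the property set above is still present here,
        // because the same Exchange was passed along via direct:.
        from("direct:worker")
                .log("color = ${exchangeProperty.configuredColor}");
    }
}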
Assign IDs to all the sub-routes that act based on the config. Then get the corresponding Route or RouteDefinition from the CamelContext and check whether you can advise or pass information to the route accordingly.
ModelCamelContext modelContext;
modelContext.getRouteDefinition(String routeId) or modelContext.getRoute(String routeId)
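For example, a rough sketch of looking a route up by its ID before acting on it (the helper class and route ID are hypothetical, and this uses the Camel 2-era API; in Camel 3 the route status/lifecycle methods moved to context.getRouteController()):

import org.apache.camel.CamelContext;
import org.apache.camel.Route;
import org.apache.camel.ServiceStatus;

public class RouteLookupHelper {

    // Hypothetical helper: check that a route with this ID exists and is running
    // before trying to advise it or send it configuration.
    public static boolean isRunning(CamelContext context, String routeId) {
        Route route = context.getRoute(routeId);   // null if no such route
        if (route == null) {
            return false;
        }
        ServiceStatus status = context.getRouteStatus(routeId);   // Camel 2 API
        return status != null && status.isStarted();
    }
}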

Apache Camel route deployment - independently?

Suppose I have 10 different Camel routes in my application. Is it possible to stop one particular route during an issue, make changes to it (in one of the Java processors), and deploy it again without affecting the other routes?
Also, can I create and deploy a new route on the fly while the other routes are already running?
If this is not the default behaviour, what options are available to achieve it?
Karaf (as do Apache ServiceMix / JBoss Fuse) has hot deployment (nowadays this may be supported in JBoss AS / WildFly as well). Meaning, you can create your routes as independent Blueprint XML files in the deploy folder (just XMLs). You can have an XML file for every route; whenever you change an XML file, it is redeployed automatically.
This approach has a few drawbacks: it gets complex if you have to deal with JPA or if your route depends on custom processors/classes.
Check out the examples in the Apache ServiceMix / JBoss Fuse projects.
I would recommend this approach especially if you want to take a microcontainer approach - something like lightweight Apache Karaf + Camel route XML files + Docker.
I did this a few years back; it may be achievable in other containers as well, but I am not sure.
You can stop a route via org.apache.camel.CamelContext.stopRoute(id), and you can modify it by building a new route and adding it to the context. This lets you change the logic of a route at runtime.
This wouldn't automatically let you hot-deploy a new Java processor. I think this aspect of your question isn't Camel-specific - there seem to be a few options for this, including OSGi/Karaf mentioned by #gnanaguru.
Perhaps moving the logic that you think might change out of a Java processor and into somewhere more dynamic (like some JavaScript in an external file, or into the route itself) would be a simpler solution to your problem.
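A rough sketch of that stop-and-replace approach (Camel 2-style API as referenced above - in Camel 3 stopRoute moved to context.getRouteController() - and the route ID and endpoints are made up):

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;

public class RouteSwapper {

    // Replace one route's logic at runtime without touching the other routes.
    public static void replaceOrderRoute(CamelContext context) throws Exception {
        context.stopRoute("orderRoute");     // Camel 2 API
        context.removeRoute("orderRoute");

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("direct:orders").routeId("orderRoute")
                        .log("running the updated logic")
                        .to("mock:result");
            }
        });
    }
}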

Transacted camel route with auto-startup set to false

I am in the process of developing a message router which has a bunch of routes that are started and stopped at runtime based on certain conditions.
By default all these routes are configured with auto-startup=false.
Now I am trying to add transactional support to these routes, and it seems that you cannot define a transacted route and control its startup behavior at the same time. This is because RouteDefinition.transacted() returns a TransactedDefinition instance, which does not have an autoStartup(boolean autoStartup) method.
I am sure I am not the only one who needs this kind of functionality, and I am wondering what the Camel way of addressing such a requirement is.
Thank you in advance for your input.
Maybe just set autoStartup first, e.g.:
from("direct:start").autoStartup(false)
.transacted()
.to("mock:result");

Disable Dynamic Routing capability in Camel

With Camel, it is possible to add routes to the context dynamically, and it appears the context is always accessible from the exchange.
Is there a way to prevent applications from adding routes at runtime? I looked at Shiro security but did not find anything along those lines.
The only thing I can think of is to wrap interactions with these applications using POJO bean binding, which only passes the body of the Exchange around and limits direct access to the Exchange...
see http://camel.apache.org/bean-binding.html
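A short sketch of that idea (class and method names are made up): the application code only ever sees the message body, never the Exchange or the CamelContext:

import org.apache.camel.builder.RouteBuilder;

// Plain POJO handed to the application team: it only ever receives the body,
// so it has no handle on the Exchange or the CamelContext.
class OrderHandler {
    public String handle(String body) {
        return "processed: " + body;
    }
}

public class BeanBindingRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("direct:orders")
                // bean binding passes just the message body into OrderHandler.handle(String)
                .bean(OrderHandler.class, "handle")
                .to("mock:out");
    }
}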
Maybe you can extend DefaultCamelContext and add a security rule there.
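That could look roughly like the sketch below (an assumption on my part, not a documented feature): override the route-adding method and reject calls once the context is started. Note it only guards this one entry point; other ways of adding route definitions would need the same treatment.

import org.apache.camel.RoutesBuilder;
import org.apache.camel.impl.DefaultCamelContext;

// Sketch: a context that refuses new routes once it has been started.
public class LockedDownCamelContext extends DefaultCamelContext {

    @Override
    public void addRoutes(RoutesBuilder builder) throws Exception {
        if (getStatus().isStarted()) {
            throw new UnsupportedOperationException("adding routes at runtime is disabled");
        }
        super.addRoutes(builder);
    }
}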
