Get information about all CamelContexts defined in a VM

I have a system which dynamically adds Camel contexts to a running system.
Retrieving information about the context from within a processor is easy and convenient, but I have not figured out a way to do so for any context other than the one the processor is defined in.
Is there any way to retrieve the status of all contexts using a single component?

Thanks to @petter and @claus-ibsen! I've taken the MBean approach. Because I stay within my own VM, I can work against the local MBeanServer:
import java.lang.management.ManagementFactory;
import java.util.*;
import javax.management.*;

// The MBean attribute names to read from each ManagedCamelContext
String[] attributes = { "CamelId", "MinProcessingTime", "MeanProcessingTime", "MaxProcessingTime" };
List<Map<String, String>> values = new ArrayList<>();
// Match only the ManagedCamelContext MBeans among everything registered under org.apache.camel
QueryExp qe = Query.isInstanceOf(new StringValueExp("org.apache.camel.management.mbean.ManagedCamelContext"));
MBeanServer ms = ManagementFactory.getPlatformMBeanServer();
Set<ObjectName> contexts = ms.queryNames(new ObjectName("org.apache.camel:*"), qe);
for (ObjectName context : contexts) {
    Map<String, String> curMap = new HashMap<>();
    AttributeList al = ms.getAttributes(context, attributes);
    for (Attribute attribute : al.asList()) {
        String val = attribute.getValue() != null ? attribute.getValue().toString() : "";
        curMap.put(attribute.getName(), val);
    }
    values.add(curMap);
}
With attributes such as "CamelId", "MinProcessingTime", "MeanProcessingTime" and "MaxProcessingTime" (as in the snippet above) I can retrieve exactly the information I need.
Camel really shines here ;-)

Yes, you can use JMX as Petter says. Apache Camel exposes a number of JMX MBeans for managing Camel apps: http://camel.apache.org/maven/current/camel-core/apidocs/org/apache/camel/api/management/mbean/package-summary.html
I would, however, also point to Jolokia (http://jolokia.org/), which makes using JMX much easier, as Jolokia can expose JMX as REST services. This makes it trivial for a client to access that information, as it's just a REST call (e.g. over HTTP).
We use this in the hawtio web console to build an HTML5 web app for managing Java apps, which has a Camel plugin as well. Using those REST services, it allows us to manage all the Camels running in a JVM or in remote JVMs.
http://hawt.io/

There is a nice piece of software that installs/uninstalls/starts/stops Camel contexts on the fly that you might want to try: Apache Karaf. There are some guidelines here.
That said, yes - you can easily access other Camel contexts using JMX. The contexts are exposed as MBeans. You might need to add JMX support to your dynamic runtime for this to be possible.
You can explore what options you have, and whether your JMX exposure works, using jconsole. Of course, you can access the same operations from code using the JMX API; a sketch is shown below.
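For instance, a minimal sketch of calling those operations from code, assuming you run inside the same JVM (so the platform MBeanServer can be used) and that the attribute and operation names match your Camel version:
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class CamelJmxFromCode {
    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        // Every CamelContext in the JVM is registered under type=context
        for (ObjectName ctx : mbs.queryNames(new ObjectName("org.apache.camel:type=context,*"), null)) {
            System.out.println(ctx + " is " + mbs.getAttribute(ctx, "State"));       // read an attribute
            String routes = (String) mbs.invoke(ctx, "dumpRoutesAsXml", null, null); // invoke an operation
            System.out.println(routes);
        }
    }
}
This is exactly what jconsole does behind the scenes, so anything you can click there can be scripted this way.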

Related

Does the latest version of IdentityServer4 support dynamically adding clients?

I assume I can build an additional API that registers users/apps/containers.
But is there a simpler way to accept multiple clients dynamically?
For example, if my IDP is in the UK, I would like to allow predefined containers to "add themselves" to the client list of my IDP.
I have achieved a simple "User -> Client -> IDP" authentication flow, but would like to automate the process.
Thank you, fellow coders.
In short, yes - but you'd have to create the mechanism to do so yourself.
If you're using a database to back your client storage (rather than the default static/config-file-based in-memory store), then you're free to implement that any way you like.
In our solution we have an API that allows for this as well as a more limited self-serve UI capability.
There is an OpenID Connect spec for this that may provide some inspiration: https://openid.net/specs/openid-connect-registration-1_0.html
You can implement your own client store using the IClientStore interface.
Something like:
internal class MyCustomClientStore : IClientStore
{
    public Task<Client> FindClientByIdAsync(string clientId)
    {
        // Look up the client in your own data store here (database, API, etc.)
        throw new System.NotImplementedException();
    }
}
You can store client data anywhere you want; the interface is pretty simple.
This implementation can be registered using DI with:
services
    .AddIdentityServer()...
    .AddClientStore<MyCustomClientStore>()

Can we use Solr as a persistent store for Apache Ignite?

I have been working on integrating Solr and Apache Ignite. When I try to run the program, the following error is shown:
class org.apache.ignite.IgniteCheckedException: Cannot enable write-behind (writer or store is not provided) for cache
// Key type String and value type TextMeta, matching setIndexedTypes below
CacheConfiguration<String, TextMeta> textMetaConfig = new CacheConfiguration<>("textMetaCache");
textMetaConfig.setWriteThrough(true);
textMetaConfig.setReadThrough(true);
textMetaConfig.setAtomicityMode(CacheAtomicityMode.ATOMIC);
textMetaConfig.setWriteBehindEnabled(true);
textMetaConfig.setWriteBehindFlushSize(40960);
textMetaConfig.setWriteBehindFlushFrequency(1);
textMetaConfig.setWriteBehindFlushThreadCount(5);
textMetaConfig.setCacheMode(CacheMode.PARTITIONED);
textMetaConfig.setIndexedTypes(String.class, TextMeta.class);
This is how I have configured the cache.
You can implement the CacheStore interface to integrate with any kind of persistence storage. Out of the box Ignite provides Cassandra store implementation and JDBC store implementation which covers most of the regular relational databases. For anything else you will have to create your own implementation. And in any case, the store must be configured via CacheConfiguration.setCacheStoreFactory(..) configuration property. Please refer to this page for details: https://apacheignite.readme.io/docs/persistent-store
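For illustration, a rough sketch of what a Solr-backed store could look like, assuming SolrJ is on the classpath, a Solr core named "textmeta" is reachable at http://localhost:8983/solr, and TextMeta has a matching constructor and a getText() accessor (all of these are assumptions about your setup, not requirements of Ignite or Solr):
import javax.cache.Cache;
import javax.cache.integration.CacheLoaderException;
import javax.cache.integration.CacheWriterException;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrInputDocument;

public class SolrTextMetaStore extends CacheStoreAdapter<String, TextMeta> {

    // Lazily created SolrJ client; URL and core name are assumptions about your setup
    private transient SolrClient solr;

    private SolrClient solr() {
        if (solr == null)
            solr = new HttpSolrClient.Builder("http://localhost:8983/solr/textmeta").build();
        return solr;
    }

    @Override public TextMeta load(String key) {              // read-through
        try {
            SolrDocument doc = solr().getById(key);
            // Map the Solr fields back onto TextMeta (depends on your actual class)
            return doc == null ? null : new TextMeta((String) doc.getFieldValue("text"));
        } catch (Exception e) {
            throw new CacheLoaderException(e);
        }
    }

    @Override public void write(Cache.Entry<? extends String, ? extends TextMeta> entry) {  // write-through / write-behind
        try {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", entry.getKey());
            doc.addField("text", entry.getValue().getText()); // assumed TextMeta accessor
            solr().add(doc);
            solr().commit();
        } catch (Exception e) {
            throw new CacheWriterException(e);
        }
    }

    @Override public void delete(Object key) {
        try {
            solr().deleteById(key.toString());
            solr().commit();
        } catch (Exception e) {
            throw new CacheWriterException(e);
        }
    }
}
It would then be plugged into the configuration from the question with:
textMetaConfig.setCacheStoreFactory(FactoryBuilder.factoryOf(SolrTextMetaStore.class));
(FactoryBuilder here is javax.cache.configuration.FactoryBuilder.) That registration is what makes the "writer or store is not provided" error go away when write-behind is enabled.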

Distribute Solr Using Replication without Using SolrCloud

I want to use Solr replication without using SolrCloud.
I have three Solr servers; one is the master and the others are slaves.
How do I dispatch search queries to the Solr server that isn't busy?
Which tools should I use, and how should I set this up?
You can use any load balancer - Solr talks HTTP, which makes any existing load balancing technology available. HAProxy, Varnish, nginx, etc. will all work as you expect, and you'll be able to use all the advanced features that the different packages offer. It'll also be independent of the client, meaning that you're not limited to the LBHttpSolrServer class from SolrJ or whatever your particular client offers. Certain LB solutions also offer high-throughput caching (Varnish) or dynamic, real-time failover between live nodes.
Another option we've also used successfully is to replicate the core to each web node, allowing us to always query localhost for searching.
You have configured Solr in master-slave mode. I think you can use LBHttpSolrServer from the SolrJ API for querying Solr; see the sketch below. You need to send the update requests to the master node explicitly. LBHttpSolrServer will provide load balancing among all the specified nodes. In master-slave mode, the slaves are responsible for keeping themselves updated with the master.
Do NOT use this class for indexing in master/slave scenarios since documents must be sent to the correct master; no inter-node routing is done. In SolrCloud (leader/replica) scenarios, this class may be used for updates since updates will be forwarded to the appropriate leader.
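A minimal sketch of that approach, assuming an older SolrJ (4.x/5.x, where the class is still called LBHttpSolrServer; newer versions use LBHttpSolrClient) and made-up host and core names:
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.impl.LBHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrInputDocument;

public class MasterSlaveClient {
    public static void main(String[] args) throws Exception {
        // Queries are round-robined across the slaves
        LBHttpSolrServer slaves = new LBHttpSolrServer(
                "http://slave1:8983/solr/mycore",
                "http://slave2:8983/solr/mycore");
        QueryResponse rsp = slaves.query(new SolrQuery("*:*"));
        System.out.println("hits: " + rsp.getResults().getNumFound());

        // Updates go to the master explicitly; the slaves pull them via replication
        HttpSolrServer master = new HttpSolrServer("http://master:8983/solr/mycore");
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "1");
        master.add(doc);
        master.commit();
    }
}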
I hope this will help.
Apache Camel can also be used as a general-purpose load balancer, like this:
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class LoadBalancer {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            public void configure() {
                // Round-robin incoming HTTP requests between the two backend servers
                from("jetty://http://localhost:8080")
                    .loadBalance().roundRobin()
                        .to("http://172.28.39.138:8080", "http://172.168.20.118:8080");
            }
        });
        context.start();
        Thread.sleep(100000);
        context.stop();
    }
}
There are some other materials that may be useful:
Basic Apache Camel LoadBalancer Failover Example
http://camel.apache.org/load-balancer.html
But it seems there is no straightforward Solr-Camel integration, because Camel can also be used to balance requests over Java "bean" components:
http://camel.apache.org/loadbalancing-mina-example.html
There is another useful example:
https://svn.apache.org/repos/asf/camel/trunk/camel-core/src/test/java/org/apache/camel/processor/CustomLoadBalanceTest.java
And you can use Camel as a proxy between a client and a server:
http://camel.apache.org/how-to-use-camel-as-a-http-proxy-between-a-client-and-server.html
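A minimal sketch of such a proxy route (the listen port and the Solr host are made-up values); the surrounding CamelContext/RouteBuilder setup would be the same as in the LoadBalancer example above:
from("jetty:http://0.0.0.0:8282?matchOnUriPrefix=true")
    // bridgeEndpoint passes the incoming request through to the target unchanged
    .to("http://solr-host:8983/solr?bridgeEndpoint=true&throwExceptionOnFailure=false");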
There are also some presentations for getting started with Apache Camel, its approach and architecture:
http://www.slideshare.net/ieugen222/eip-cu-apache-camel

Camel: change route policy at runtime via JMX

Is it possible to change the route policy at runtime? For instance, if I have the code below:
CronScheduledRoutePolicy startPolicy = new CronScheduledRoutePolicy();
startPolicy.setRouteStartTime("* 0 * * * ?");
startPolicy.setRouteStopTime("* 30 * * * ?");
from("direct:foo").routeId("myRoute").routePolicy(startPolicy).autoStartup(false).to("does://not-matter");
I would like to change the cron parameters while Camel is running. In JConsole I can only access getRoutePolicyList, which returns
CronScheduledRoutePolicy(0x6dc7efb5)
Is there some way to access the startPolicy object and re-instantiate it with a new value? Do I have to extend Camel's MBean class with some getters and setters?
No, not out of the box. But yeah, it would be a nice new feature to register the CronScheduledRoutePolicy as a JMX MBean so people can adjust it at runtime with JMX.
I have logged a ticket: https://issues.apache.org/jira/browse/CAMEL-6334
What you can do is stop the route, adjust the startPolicy settings, and then start the route again.
There are JMX operations for starting and stopping routes; see the sketch below. What you may need is to expose some JMX operations of your own to adjust the cron policy.
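For example, a rough sketch (assuming the code runs in the same JVM as Camel and uses the route id "myRoute" from the question; the exact ObjectName layout varies between Camel versions, hence the pattern query):
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class RestartRouteViaJmx {
    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        // The route MBean name also contains the context name, so query by pattern
        for (ObjectName route : mbs.queryNames(new ObjectName("org.apache.camel:type=routes,name=\"myRoute\",*"), null)) {
            mbs.invoke(route, "stop", null, null);    // stop the route
            // ...adjust the CronScheduledRoutePolicy settings here (not exposed via JMX out of the box)...
            mbs.invoke(route, "start", null, null);   // start it again
        }
    }
}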
I managed to do this using hawt.io. But for this to work, you need to upgrade to Camel version 2.13.0.
Using hawt.io, you can change cron expressions at runtime in a very user-friendly way.

Apache Camel: Keeping routing information completely independent of the Java Code

First of all thanks to folks who are currently involved in the development of Camel, I am grateful for all the hard work they have put in.
I am looking for some design advice.
The architecture is something like this:
I have a bunch of Java classes which when instantiated are required to connect to each other and send messages using Apache Camel. The design constraints require me to create a framework such that all routing information, producers, consumers, endpoints etc should be a part of the camel-context.xml.
An individual should have the capability to modify such a file and completely change the existing route without having the Java code available to him. (The Java code would not be provided, only the compiled JAR.)
For example, in one setup:
Bean A -> Bean B -> Bean C -> file -> email
and in another:
Bean B -> Bean A -> Bean C -> ftp -> file -> email
We have tried various approaches, but if the originating bean is not implemented via the Java DSL, the message rate is very high because Camel constantly invokes Bean A in the first example and Bean B in the second (they being the sources).
Bean A and Bean B originate messages and are event driven. When the required event occurs, the beans send out a notification message.
My transformations are very simple and I do not require the power of Java DSL at all.
To summarize, I have the following questions:
1) Considering the above constraints, how do I ensure that all routing information, including destination addresses, is part of the Camel context file?
2) Are there examples I can look at for keeping the routing information completely independent of the Java code?
3) How do I ensure Camel does not constantly invoke the originating bean?
4) Does Camel constantly invoke just the originating bean, or any bean it sends messages to, irrespective of the position of the bean in the entire messaging chain?
I have run out of options trying various ways to set this up. Any help would be appreciated.
Read about hiding the middleware on the Camel wiki pages. This allows you to let clients use an interface to send/receive messages but totally unaware of Camel (no Camel API used at all).
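As a rough illustration of that idea (the OrderService interface and the direct:orders endpoint are made-up names, not anything Camel provides): the client is handed a plain Java interface, while the route behind it lives entirely in camel-context.xml:
import org.apache.camel.CamelContext;
import org.apache.camel.builder.ProxyBuilder;

public class HideTheMiddleware {

    // The client-facing contract: no Camel API anywhere in here
    public interface OrderService {
        String send(String order);
    }

    public static OrderService createClient(CamelContext camelContext) throws Exception {
        // Proxy the interface onto an endpoint; what actually happens behind
        // direct:orders is defined in camel-context.xml and can be changed freely
        return new ProxyBuilder(camelContext)
                .endpoint("direct:orders")
                .build(OrderService.class);
    }
}
A call to send(...) is turned into an exchange on direct:orders, so the routing behind it can be changed in the XML without recompiling the client code.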
Even better consider buying the Camel in Action book and read chapter 14 which talks about this.
http://www.manning.com/ibsen/
If you consider using ServiceMix or FuseESB, you might want to separate your routes into two parts.
The first part would be the event-driven bean that triggers the route. It could push messages onto the NMR (see http://camel.apache.org/nmr.html).
The other part would be left to the framework users, using the Spring DSL. It would just listen for messages on the NMR (pushed by the other route) and do whatever they want with them.
Of course, endpoint definitions could be externalized as properties using the ServiceMix configuration service (see http://camel.apache.org/properties.html#Properties-UsingBlueprintpropertyplaceholderwithCamelroutes).

Resources