The "User Guide" for Drools 6 states that a camel endpoint for drools shall be in the below format:
<to uri="kie:{1}/{2}" />
where
{1}: Execution Node identifier that is registered in the CamelContext
{2}: Knowledge Session identifier that was registered in the Execution Node with identifier {1}
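For example, with an execution node registered as "node1" and a session registered in it as "ksession1" (names are illustrative):
<to uri="kie:node1/ksession1" />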
Doubt #1:
If the sessions are created before the endpoints are built, how will incremental changes to the kmodule be picked up by the sessions that were created earlier?
Statement about the KieScanner from the document:
If the KieScanner finds, in the Maven repository, an updated version
of the Kie project used by that KieContainer it automatically
downloads the new version and triggers an incremental build of the new
project. From this moment all the new KieBases and KieSessions created
from that KieContainer will use the new project version.
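(For reference, attaching a KieScanner to a KieContainer looks roughly like this; the release coordinates are placeholders:)
KieServices ks = KieServices.Factory.get();
KieContainer kContainer = ks.newKieContainer(ks.newReleaseId("org.example", "my-kjar", "1.0.0-SNAPSHOT"));
KieScanner kScanner = ks.newKieScanner(kContainer);
kScanner.start(10000L); // poll the Maven repository every 10 seconds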
Doubt #2:
I am trying to configure this endpoint to route to ksessions dynamically.
Traversing the source code, I tried declaring the URI as kie:dynamic and adding new sessions to KieEndpoint.executorsByName, like:
KieEndpoint endPoint = (KieEndpoint)camel.getEndpoint("kie:dynamic");
endPoint.executorsByName.put(sessionName, kSession);
Is this the right way of adding dynamic sessions? I didn't find any hint in the user guide for this.
My company is using WSO2 IS version 5.2. We have implemented it clustered with 1 manager node and 3 worker nodes. We do not use multiple tenants. We are implementing a SAML approach to authentication. Our first implementation was in a development environment and included quite a bit of manual (UI-based) configuration. The following was done using the management console:
adding custom claims
adding service providers (we have 3 currently)
assigning custom claims to SPs
configuring the resident IdP
We now must set up and configure 50 more development, QA, and UAT environments. We would like to be able to do this entirely through XML configuration with no human data entry. Is there a specific resource that can walk me through the above 4 items? Note: we have determined how to add our own custom claims through XML config, so item #1 is no longer an issue, but I included it for reference. I am really mostly interested in items 2, 3, and 4.
We did find the following topic in the docs:
https://docs.wso2.com/display/IS520/Configuring+a+SP+and+IdP+Using+Configuration+Files
However, the above link does not go far enough to explain how to map custom claims to SPs. We also found this, which asks a very similar question but gives only part of what we are looking for.
Thanks for any assistance.
You could set up a basic environment and copy the database from the directory conf/repository/database.
In our Apache Camel project, we are consuming a REST service which requires a .jks file.
Currently we store the .jks file in a physical location and refer to it from the Camel project. But this does not always work, since we may only have access to the Fuse Management Console and not to a physical location reachable from it.
Another option is to store the key file within the bundle, which cannot be employed because the certificate may change based on the environment.
In this scenario, what would be a better solution for storing the key file?
Note
One option I thought about was storing the .jks file within a Fabric profile, but I couldn't find any way to do that. Is it possible to store a file in a Fabric profile?
What about storing the .jks in a Java package and reading it as a resource?
Your bundle imports org.niyasc.jks and loads the file from there. The bundle need not change between environments.
Then you write two bundles that provide the same package org.niyasc.jks, one with the production file and one with the test file.
Production env:
RestConsumerBundle + ProductionJksProviderBundle
Test env:
RestConsumerBundle + TestJksProviderBundle
Mind that deploying both of them may be possible, and RestConsumerBundle will be bound to whichever was deployed first. You can eventually play with OSGi directives to give priority to one of them.
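For illustration, a minimal sketch of loading the keystore from the bundle's classpath (the class name, file name, and password are placeholders):
import java.io.InputStream;
import java.security.KeyStore;

KeyStore keyStore = KeyStore.getInstance("JKS");
try (InputStream in = MyRestClient.class.getResourceAsStream("/org/niyasc/jks/client.jks")) {
    keyStore.load(in, "changeit".toCharArray()); // placeholder password
}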
EDIT:
A more elegant solution would be creating an OSGi service which exposes the .jks as an InputStream or byte[]. You can even play with JNDI if you feel to.
From Blueprint declare the dependency as mandatory, so your bundle will not start if the service is not available.
<!-- RestConsumerBundle -->
<reference id="jksProvider"
interface="org.niyasc.jks.Provider"
availability="mandatory"/>
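The service interface itself can be as simple as this sketch (the method name is illustrative):
package org.niyasc.jks;

import java.io.InputStream;

public interface Provider {
    // returns a fresh stream over the environment-specific .jks file
    InputStream openKeyStore();
}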
Storing the JKS files in the Fuse profile could be a good idea.
If you have a broker profile created, such as "mq-broker-Group.BrokerName", take a look at it via the Fuse Web Console.
You can then access the jks file as a resource in the property file, as in "truststore.file=profile:truststore.jks"
And also check the "Customizing the SSL keystore.jks and truststore.jks file" section of this chapter:
https://access.redhat.com/documentation/en-us/red_hat_jboss_fuse/6.3/html/fabric_guide/mq#MQ-BrokerConfig
It has some good pointers.
Regarding how to add files to a Fabric profile, you can store any resources under src/main/fabric8 and use the fabric8 Maven plugin. For more, see:
https://fabric8.io/gitbook/mavenPlugin.html
-Codrin
I am totally new to Jackrabbit and Jackrabbit Oak. I worked a lot with Alfresco though, another JCR-compliant open-source content repository.
I want to start a standalone Jackrabbit Oak repo, then connect to it via Java code. Unfortunately the Oak documentation is quite scarce.
I checked out the Oak repo, built it with mvn clean install, and then ran the standalone server (an in-memory repository is fine for me at the moment for testing) via:
$ java -jar oak-run-1.6-SNAPSHOT.jar server
Apache Jackrabbit Oak 1.6-SNAPSHOT
Starting Oak-Memory repository -> http://localhost:8080/
13:14:38.317 [main] WARN o.a.j.s.r.d.ProtectedRemoveManager - protectedhandlers-config is missing -> DIFF processing can fail for the Remove operation if the content to remove is protected!
When I open http://localhost:8080/ the browser shows a blank page, although viewing the page source reveals the HTML/XHTML output.
I try to connect via Java code:
JcrUtils.getRepository("http://localhost:8080");
// or
JcrUtils.getRepository("http://localhost:8080/rmi");
but I get:
Connecting to http://localhost:8080
Exception in thread "main" javax.jcr.RepositoryException: Unable to access a repository with the following settings:
org.apache.jackrabbit.repository.uri: http://localhost:8080
The following RepositoryFactory classes were consulted:
org.apache.jackrabbit.oak.jcr.OakRepositoryFactory: declined
org.apache.jackrabbit.commons.JndiRepositoryFactory: declined
Perhaps the repository you are trying to access is not available at the moment.
at org.apache.jackrabbit.commons.JcrUtils.getRepository(JcrUtils.java:223)
at org.apache.jackrabbit.commons.JcrUtils.getRepository(JcrUtils.java:263)
at Main.main(Main.java:26)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
(The Oak documentation is not as complete as the Jackrabbit documentation, but I am also not sure how much of Jackrabbit 2 is still valid for Oak, since it's a complete rewrite.)
I found the same question in the mailing list/Nabble, but the answer provided there does not use a remote, standalone repository; it uses a local one running in the same servlet container and even the same app (only the MongoDB / node store is eventually configured as remote, but that would mean the Mongo ports would need to be open). So that app creates the repository itself, which is not my case (I got that case working fine in Oak as well).
In Jackrabbit 2 (not Oak), I can simply connect via
Repository repo = new URLRemoteRepository("http://localhost:8080/rmi");
and it works fine, but this method does not seem to be available for Oak.
Is RMI not enabled by default in Oak? Is there a different URI to use?
However, the Oak documentation says "Oak comes with a runnable jar", and the runnable jar offers the server command to start the server, so I assume that my scenario above is a valid one.
The blank page is a result of your browser being unable to parse the <title/> tag.
Go into developer mode to see how the browser incorrectly interpreted that tag.
(Screenshot: incorrect interpretation of the title tag)
I have never seen an example of Jackrabbit Oak working like this. Are you sure it is possible to start Oak outside of your application?
How do you set up the persistent store? (Which one are you going to use?)
Here is the link showing how you normally set up Jackrabbit Oak: https://jackrabbit.apache.org/oak/docs/construct.html
For example, if you use MongoDB as the backend (which is the most powerful), you first connect to the DB via
DB db = new MongoClient(ip, port).getDB("testDB");
where ip is the IP address of your MongoDB server and port its port. This server doesn't need to be on the same machine where your Java code is running. You can even use a replica set instead of a single MongoDB instance.
The same is valid for a relational DB. Only if you choose the tar-file backend are you limited to your local machine.
Then, in a second step, you create a JCR repository based on the chosen backend (see the link).
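A minimal sketch following the construction docs linked above (connection details are placeholders):
import javax.jcr.Repository;
import com.mongodb.DB;
import com.mongodb.MongoClient;
import org.apache.jackrabbit.oak.Oak;
import org.apache.jackrabbit.oak.jcr.Jcr;
import org.apache.jackrabbit.oak.plugins.document.DocumentMK;
import org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore;

DB db = new MongoClient("127.0.0.1", 27017).getDB("oak"); // placeholder host/port/db name
DocumentNodeStore ns = new DocumentMK.Builder().setMongoDB(db).getNodeStore();
Repository repo = new Jcr(new Oak(ns)).createRepository();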
I began to refactor/rebuild an XML-based Camel project into a Java-based one (I need to strictly separate configuration from functional code).
I am new to Camel, so I am stumbling over the very first route, an FTP route. The FTP URL and credentials come from configuration, but all the rest should be set in Java.
At the moment the URI looks as follows:
ftp://<fromConfig>&stepwise=true&delay=1000&move=${file:name}.trans&recursive=true&binary=true&filter=#doneFilter&maxMessagesPerPoll=200&eagerMaxMessagesPerPoll=false&sorter=#pcrfSorter
So how do I do this in Java, especially the parts that reference beans with "#"?
Thanks in advance.
The URI is the same in the Java or XML DSL. Only in XML, mind that you need to XML-escape the & so it becomes &amp; etc.
The # is a lookup in the registry, see more here: http://camel.apache.org/how-do-i-configure-endpoints.html
So the lookup happens in the Camel registry, which can be a facade for JNDI / Spring etc. So it depends on which container you run Camel in.
You can find a bit more details about Camel registry at: https://camel.apache.org/registry.html
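A rough sketch of the same endpoint in the Java DSL, with the "#" beans bound in a SimpleRegistry (MyDoneFilter and MyPcrfSorter are placeholders for your own filter and sorter implementations):
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.impl.SimpleRegistry;

SimpleRegistry registry = new SimpleRegistry();
registry.put("doneFilter", new MyDoneFilter()); // hypothetical file filter bean
registry.put("pcrfSorter", new MyPcrfSorter()); // hypothetical sorter bean
CamelContext context = new DefaultCamelContext(registry);

String fromConfig = "user@host/inbox?password=secret"; // placeholder for your configured value
context.addRoutes(new RouteBuilder() {
    @Override
    public void configure() {
        from("ftp://" + fromConfig
                + "&stepwise=true&delay=1000&move=${file:name}.trans"
                + "&recursive=true&binary=true&filter=#doneFilter"
                + "&maxMessagesPerPoll=200&eagerMaxMessagesPerPoll=false"
                + "&sorter=#pcrfSorter")
            .to("log:ftpRoute");
    }
});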
Hello all,
I'm trying to build my first Hibernate project for a web app, but I'm having some issues.
I'm trying to find out where to place this code:
AnnotationConfiguration config = new AnnotationConfiguration();
config.addAnnotatedClass(Object.class);
config.configure();
I have some Java beans decorated with annotations. Shall I just insert it in the same class where the bean is?
Thank you
Ideally, you'd call this only if you are developing a standalone application. In a Java EE environment, you'd just define a persistence.xml file (or hibernate.cfg.xml) in your deployment archive and the container (like JBoss AS) would make a @PersistenceContext (EntityManager) available to you.
In a standalone application, you'd call this in your "Bootstrap" code. The one which sets up the environment.
In "non-Java EE" web applications (seriously, who still uses that?), you'd have to resort to some "hacks", like doing some initialization during context startup (so that you won't need to run this for all requests, as it's an expensive operation).
Partenon is right, you should bootstrap JPA with a persistence.xml.
The Stripes web framework itself does not offer any persistence services. But to make life easier there is a Stripersist extension that offers an out-of-the-box session-in-view pattern (it starts a transaction before the action bean runs and rolls back after the request is handled). Very good examples of how to use and configure Stripersist can be found in the book Stripes: ...and Java Web Development Is Fun Again.