What jars are needed to work via RMI with a Jackrabbit repository?

When RMI is used via JcrUtils.getRepository("http://$SERVER_ADDRESS:$PORT/$CONTEXT"), what jars are needed in the classpath to be able to work with all JCR & Jackrabbit features?
Should all Jackrabbit jars be included or can we limit them only to some "interfaces"?

After some investigation I found:
http://mvnrepository.com/artifact/org.apache.jackrabbit/jackrabbit-jcr-rmi/2.4.3
Concretely, I had to add this list of jars to the Eclipse project:
jcr-2.0.jar
jackrabbit-jcr-commons-2.4.3.jar
jackrabbit-jcr-rmi-2.4.3.jar
logback-classic-1.0.0.jar
jackrabbit-api-2.4.3.jar
jackrabbit-core-2.4.3.jar
slf4j-api-1.6.4.jar
logback-core-1.0.0.jar
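With those jars on the classpath, connecting over the remoting layer is just the plain JCR API. A minimal smoke-test sketch, assuming the default Jackrabbit webapp context and admin credentials (the URL, user, and password are placeholders for your deployment):

import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;
import org.apache.jackrabbit.commons.JcrUtils;

public class RemoteRepoSmokeTest {
    public static void main(String[] args) throws Exception {
        // Hypothetical server URL and credentials -- replace with your own values.
        Repository repository = JcrUtils.getRepository("http://localhost:8080/jackrabbit-webapp/server");
        Session session = repository.login(new SimpleCredentials("admin", "admin".toCharArray()));
        try {
            // Print the repository name to confirm the remote connection works.
            System.out.println("Connected to: " + repository.getDescriptor(Repository.REP_NAME_DESC));
        } finally {
            session.logout();
        }
    }
}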

Related

Flink logging - Using Log4j2

We are running a Flink (1.9.1) application on AWS EMR (5.29) using YARN. We are using a common logging adaptor throughout all the components (including the Flink application) in our project, and it uses Log4j2.
From the documentation, I see that there are 3 configuration files.
log4j.properties
log4j-yarn-session.properties
log4j-cli.properties
I understand that I will have to modify log4j.properties for the job manager and task manager logs, and log4j-cli.properties for code that does not run on the cluster.
Now given this situation,
How do I pass my log4j2.properties?
Do we replace the logging jars in the lib folder with log4j2 jars?
Not a solid solution, but here is a workaround: if the log4j.properties file in the /conf folder is deleted, the log4j2 properties file inside the jar on the classpath is used instead. But be careful when you have multiple jars on the classpath that each contain a log4j2 properties file.

Karaf cluster synchronization

I have some questions about Karaf Cellar cluster replication.
We have 4 nodes, and all of them are members of the cluster.
First problem:
If we install a feature in the cluster (cluster:feature-install) and run cluster:bundle-list default, we see that all bundles in this feature have the local type in the Located column, while the feature itself has the cluster/local type.
How can I make the bundles have the cluster/local type as well?
Second problem:
When we install a feature in the cluster (cluster:feature-install), all bundles exist on all nodes (auto sync), but when we then synchronize manually (cluster:sync -g default) on, for example, 3 of the nodes, we see that the bundles no longer exist there.
To get the feature bundles back onto those 3 nodes we have to uninstall and reinstall the feature in the cluster.
Please comment on this strange behaviour.
Versions:
apache-servicemix-7.0.1
Karaf 4.0.9
Cellar 4.0.4
Camel 2.16.5
CXF 3.1.9

Flink S3 Hadoop 2.8.0

We were trying to use S3 for the Flink state backend and checkpoints, using a bucket in Frankfurt (V4 authentication). It gave the error I posted here (Cannot access S3 bucket with Hadoop), and it was due to Hadoop. Hadoop 2.8.0 works, but there is no Flink support for it yet.
I guess my question is: when will Flink offer a version based on Hadoop 2.8.0?
Flink will probably offer a Hadoop 2.8.0 version once that Hadoop version is released.
In the meantime, you can build Flink yourself with a custom Hadoop version:
mvn clean install -DskipTests -Dhadoop.version=2.8.0
Hadoop 2.7.x does work with Frankfurt and V4 API endpoints. If you are having problems, check your joda-time version: an odd combination of old joda-time JARs and Java versions causes AWS to receive a wrongly formatted timestamp, which it then rejects with the ubiquitous "bad auth" message.
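Once the Hadoop/S3 filesystem issue is sorted out, pointing Flink at S3 happens in the job itself. A minimal sketch, assuming a Flink 1.x streaming job with an S3-capable Hadoop filesystem on the classpath; the bucket path and checkpoint interval are hypothetical:

import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class S3CheckpointJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Checkpoint every 60 seconds (example value).
        env.enableCheckpointing(60_000);
        // Store checkpoints and state snapshots in S3; the bucket/path is hypothetical.
        env.setStateBackend(new FsStateBackend("s3://my-flink-bucket/checkpoints"));
        // Trivial pipeline just to make the job runnable.
        env.fromElements(1, 2, 3).print();
        env.execute("s3-checkpoint-example");
    }
}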

How to install JTS in Solr 4?

Using the Solr 4 spatial field types seems to require an external library, the Java Topology Suite. How does one install this suite for use with Solr 4.1.0 on Ubuntu Server 12.04 with Java 1.6.0_24?
Thank you.
If you are running Solr in Tomcat on your Ubuntu Server and have deployed the Solr WAR into your <path to Tomcat>/webapps folder, then according to the Lucene/Solr 4 spatial documentation on the Solr Wiki you just need to copy all the jar files from the JTS distribution's /lib folder to the WEB-INF/lib folder where Solr is running.
Update
Since you are using Jetty to run Solr, you will need to include the location of the JTS jar files on the classpath. Based on the Jetty classloading documentation, something like the following should work:
java -Dsolr.solr.home=/mnt/SolrFiles/solr
-Djetty.class.path=<insert path to JTS here> -jar /opt/solr-4.1.0/example/start.jar
The JTS JAR file needs to be placed in the Solr web application's WEB-INF/lib folder. Otherwise you may encounter a NoClassDefFoundError: com/vividsolutions/jts/geom/Geometry when starting Solr.

Using JSVC to daemonize a Java app packaged with the Maven One-Jar Plugin

Here is the problem:
I have packaged my Java application into a single jar using the Maven plugin One-Jar.
Now I want to run the application as a Unix Daemon using JSVC, i.e. Apache Commons Daemon.
I am using JSVC as follows (which works for Jars made with the Maven assembly plugin, etc):
jsvc -user $USER -home $HOME -pidfile $PID_PATH -cp $PATH_TO_ONE_JAR my.package.MyClass
The error is this:
jsvc.exec error: Cannot find daemon loader org/apache/commons/daemon/support/DaemonLoader
jsvc.exec error: Service exit with a return value of 1
Does anyone know if it is even possible to use JSVC and One-Jar together, since One-Jar uses a custom class loader? The jar runs just fine when I run java -jar my-one-jar.jar.
What can be done?
Thank you for any insight!
I had to add all the jar dependencies to the classpath option of jsvc. It seems jsvc doesn't use the jars inside another jar.
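For reference, jsvc drives the entry class through the Commons Daemon DaemonLoader and normally expects it to implement org.apache.commons.daemon.Daemon, which is also why commons-daemon itself (and every other dependency) must be visible on the classpath passed to jsvc rather than nested inside the one-jar. A minimal sketch of such an entry class; the class name is a hypothetical stand-in for my.package.MyClass:

import org.apache.commons.daemon.Daemon;
import org.apache.commons.daemon.DaemonContext;

public class MyDaemon implements Daemon {
    private volatile boolean running;
    private Thread worker;

    public void init(DaemonContext context) {
        // Create (but do not start) the worker thread; jsvc calls start() later.
        worker = new Thread(new Runnable() {
            public void run() {
                while (running) {
                    // ... actual application work would go here ...
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            }
        });
    }

    public void start() {
        running = true;
        worker.start();
    }

    public void stop() throws Exception {
        running = false;
        worker.join(5000);
    }

    public void destroy() {
        worker = null;
    }
}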
If you use the (poorly documented) Maven Shade Plugin instead of One-Jar (they can achieve similar results), it should solve your problems. It unpacks the dependent jars and stores the class files directly in the fat jar (rather than having jars within the jar). I have used it to create an executable jar for running under JSVC with some success.
Of course, things are seldom as simple as they sound. With the Shade plugin, you may have to do some work to relocate classes when there are conflicts in your dependency tree, or use resource transformers to handle your non-Java resource files. But hopefully not.
(Of course Mkyong.com has a guide on this)

Resources