How do you debug CXF endpoint publishing?

Given the "cxf-osgi" example from FuseSource's apache-servicemix-4.4.1-fuse-00-08, built with Maven 3.0.3, when deploying it to Apache Karaf 2.2.4 and CXF 2.4.3 the web service is never published and never visible to the CXF servlet (http://localhost:8181/cxf/). There are no errors in the Karaf log. How would one go about debugging such behavior?

It's worth turning up the log level(s) - you can do this permanently in etc/org.ops4j.pax.logging.cfg or in the console with log:set TRACE org.apache.cxf - IIRC this will show some useful information.
Also check that it's actually published on localhost/127.0.0.1 - it may well be published on another interface (the machine's LAN IP) rather than on localhost. Try using 0.0.0.0 as the address, so that it binds to all available interfaces.
As you're using Maven, you can download the CXF source (easily in Eclipse) and connect a remote debugger to the Karaf instance; with some strategically placed breakpoints you should be able to get a handle on what's going on.
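If you want to rule out an interface-binding problem outside of OSGi entirely, a minimal plain JAX-WS sketch can publish a throwaway implementation on 0.0.0.0 (MyServiceImpl here is just a placeholder, not the class from the cxf-osgi example):
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

@WebService
public class MyServiceImpl {
    public String sayHi(String name) {
        return "Hello " + name;
    }

    public static void main(String[] args) {
        // Publishing on 0.0.0.0 binds to every interface, so the endpoint is
        // reachable via localhost as well as the machine's LAN address.
        Endpoint.publish("http://0.0.0.0:9000/myService", new MyServiceImpl());
        System.out.println("WSDL: http://localhost:9000/myService?wsdl");
    }
}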

Try changing to Equinox instead of the default of Felix. There is a bug in CXF 2.4.3 that makes it not work well with Felix. Alternatively, CXF 2.4.4 is now available and should also fix it.

Take a look at this issue I filed this week: https://issues.apache.org/jira/browse/CXF-4058
What I found is that if my beans.xml is loaded before the CXF bundle jar, then the endpoints are registered with CXF but not with the OSGi HTTP service. So everything looks good in the logs, but the endpoints are never accessible. This is a race condition.
I used two workarounds: 1) in the short term, move my own jars later in the boot order (I use Karaf features) so Spring and CXF are fully loaded before my beans.xml is read, and 2) abandon Spring and roll my own binding code based loosely on this approach: http://eclipsesource.com/blogs/2012/01/23/an-osgi-jax-rs-connector-part-1-publishing-rest-services/
I just implemented solution #2 yesterday and I'm already extremely happy with it. It solved all of my classloader issues (before, I had to manually add a lot of Import-Package lines because BND doesn't see beans.xml references) and fixed my boot race condition.
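A rough sketch of what workaround #2 can look like, assuming placeholder class names and that CXF's OSGi HTTP transport resolves relative addresses under the /cxf servlet: publish the endpoint programmatically from a BundleActivator instead of from beans.xml, so registration happens only after the bundle (and therefore CXF) has started.
import javax.xml.ws.Endpoint;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class EndpointActivator implements BundleActivator {
    private Endpoint endpoint;

    public void start(BundleContext context) {
        // Publishing here avoids the Spring/CXF boot-order race: by the time
        // start() runs, this bundle's CXF dependencies are resolved.
        // A relative address relies on CXF's OSGi HTTP transport; adjust to
        // your environment.
        endpoint = Endpoint.publish("/myService", new MyServiceImpl());
    }

    public void stop(BundleContext context) {
        if (endpoint != null) {
            endpoint.stop();
        }
    }
}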

interceptFrom and interceptSendToEndpoint not working

I was trying to intercept from/to a specific RabbitMQ route, something like the following:
interceptFrom("rabbitmq:localhost/someExchangeName?queue=someRabbitMqQueueName").to("log:hello");
but I'm not getting anything there.
I've tested
intercept().to("log:hello") and I can confirm it's working. Can anyone let me know if there's something else I need to configure to make the interceptFrom/interceptSendToEndpoint work?
We're using Java DSL and Google Guice for dependency injection.
Some of the project setup is as follows:
camel version: 2.18.3 (tried also 2.19.1)
camel-guice: 2.18.3
guice-multibindings: 4.1.0
camel-rabbitmq: 2.18.3
maven-compiler-plugin: 1.7
This was also asked on the Camel mailing list.
Make sure the pattern can match the actual URL - they must be exactly the same if you are not using a wildcard (*). You can just do
interceptFrom("rabbitmq:localhost/xxx*")
or try with
interceptFrom("rabbitmq:localhost/xxx?queue=foo*")
See also the Camel documentation on intercept, and the wildcard pattern (at the bottom of the page): http://camel.apache.org/intercept
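A minimal RouteBuilder sketch along those lines, reusing the (assumed) exchange and queue names from the question:
import org.apache.camel.builder.RouteBuilder;

public class InterceptRoutes extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // The trailing * lets the pattern match the consumer endpoint even
        // though the route URI carries extra options such as ?queue=...
        interceptFrom("rabbitmq:localhost/someExchangeName*")
            .to("log:hello");

        from("rabbitmq:localhost/someExchangeName?queue=someRabbitMqQueueName")
            .to("log:body");
    }
}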

Jackrabbit Oak: Getting started and connect to a standalone repository via RMI

I am totally new to Jackrabbit and Jackrabbit Oak. I worked a lot with Alfresco though, another JCR compliant open-source content repo.
I want to start a standalone Jackrabbit Oak repo, then connect to it via Java code. Unfortunately the Oak documentation is quite scarce.
I checked out the Oak repo, built it with mvn clean install and then ran the standalone server (an in-memory repository is fine for my testing at the moment) via:
$ java -jar oak-run-1.6-SNAPSHOT.jar server
Apache Jackrabbit Oak 1.6-SNAPSHOT
Starting Oak-Memory repository -> http://localhost:8080/
13:14:38.317 [main] WARN o.a.j.s.r.d.ProtectedRemoveManager - protectedhandlers-config is missing -> DIFF processing can fail for the Remove operation if the content to remove is protected!
When I open http://localhost:8080/ I see a blank page, but viewing the page source shows the HTML/XHTML output.
I try to connect via Java code:
JcrUtils.getRepository("http://localhost:8080");
// or
JcrUtils.getRepository("http://localhost:8080/rmi");
but getting:
Connecting to http://localhost:8080
Exception in thread "main" javax.jcr.RepositoryException: Unable to access a repository with the following settings:
org.apache.jackrabbit.repository.uri: http://localhost:8080
The following RepositoryFactory classes were consulted:
org.apache.jackrabbit.oak.jcr.OakRepositoryFactory: declined
org.apache.jackrabbit.commons.JndiRepositoryFactory: declined
Perhaps the repository you are trying to access is not available at the moment.
at org.apache.jackrabbit.commons.JcrUtils.getRepository(JcrUtils.java:223)
at org.apache.jackrabbit.commons.JcrUtils.getRepository(JcrUtils.java:263)
at Main.main(Main.java:26)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
(The Oak documentation is not as complete as the Jackrabbit documentation, but I am also not sure how much of Jackrabbit 2 is still valid for Oak, since it's a complete rewrite.)
I found the same question on the mailing list/Nabble, but the answer provided there does not use a remote, standalone repository; it uses a local one running in the same servlet container and even the same app (only the MongoDB node store is eventually configured as remote, which would mean the Mongo ports need to be open). So the app creates the repository itself, which is not my case (I got that case working fine in Oak as well).
In Jackrabbit 2 (not Oak), I can simply connect via
Repository repo = new URLRemoteRepository("http://localhost:8080/rmi");
and it's working fine, but this method is not available for Oak, it seems.
Is RMI not enabled by default in Oak? Is there a different URI to use?
However, the documentation of Oak says "Oak comes with a runnable jar" and the runnable jar offers the server method to start the server, so I assume that my scenario above is a valid one.
The blank page is a result of your browser being unable to parse the <title/> tag.
Go into developer mode to see how the browser incorrectly interpreted that tag.
(Screenshot: incorrect interpretation of the title tag.)
I never saw an example of Jackrabbit Oak working like this... are you sure it is possible to start Oak outside of your application?
How do you set up the persistent store (which one are you going to use)?
Here is the link showing how you normally set up Jackrabbit Oak: https://jackrabbit.apache.org/oak/docs/construct.html
For example, if you use MongoDB as the backend (which is the most powerful), you first connect to the database via
DB db = new MongoClient(ip, port).getDB("testDB");
where ip is the IP address of your MongoDB server and port is its port. This server doesn't need to be on the same machine your Java code runs on. You can even use a replica set instead of a single MongoDB instance.
The same is valid when using a relational DB; only with the TAR file-system backend are you limited to your local machine.
Then, in a second step, you create a JCR repository based on the chosen backend (see the link and the sketch below).
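For reference, the recipe from that page looks roughly like this when embedding the repository in your own application (the host, port and database name below are placeholders):
import javax.jcr.Repository;
import com.mongodb.DB;
import com.mongodb.MongoClient;
import org.apache.jackrabbit.oak.Oak;
import org.apache.jackrabbit.oak.jcr.Jcr;
import org.apache.jackrabbit.oak.plugins.document.DocumentMK;
import org.apache.jackrabbit.oak.plugins.document.DocumentNodeStore;

public class OakMongoSetup {
    public static void main(String[] args) throws Exception {
        // Connect to MongoDB (the server may be remote or a replica set).
        DB db = new MongoClient("127.0.0.1", 27017).getDB("oak");
        // Build a DocumentNodeStore on top of it.
        DocumentNodeStore ns = new DocumentMK.Builder().setMongoDB(db).getNodeStore();
        // Wrap it as a JCR repository the application can log into.
        Repository repo = new Jcr(new Oak(ns)).createRepository();
        System.out.println("Repository created: " + repo);
        ns.dispose();
    }
}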

NoSuchMethodError exception on SSLSocketImpl.receivedChangeCipherSpec when using javapns in GAE

I am using javapns with Google App Engine. Everything was working fine until this morning. Now, it raises this exception:
java.lang.NoSuchMethodError: sun.security.ssl.SSLSocketImpl.receivedChangeCipherSpec()Z
at sun.security.ssl.Handshaker.receivedChangeCipherSpec(Handshaker.java:356)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:347)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:901)
at sun.security.ssl.Handshaker.process_record(Handshaker.java:837)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1026)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1324)
at sun.security.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:712)
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:122)
at java.io.OutputStream.write(OutputStream.java:75)
at javapns.notification.PushNotificationManager.sendNotification(PushNotificationManager.java:402)
at javapns.notification.PushNotificationManager.sendNotification(PushNotificationManager.java:350)
at javapns.notification.PushNotificationManager.sendNotification(PushNotificationManager.java:320)
at javapns.Push.sendPayload(Push.java:177)
at javapns.Push.payload(Push.java:149)
Any idea? I have seen the missing method in JDK7u but I think I am using JDK7. Not sure if this is related.
I contacted Google Support regarding this issue and got the following response:
This is a known issue that is already resolved.
They did not disclose the root cause.
I was trying to use the BigTable client and ran into the same issue. It's due to the Google API using HTTP/2 with TLS. The ALPN library used to support TLS modifies bytecode at boot and is tightly coupled to the version of the JRE/JDK you are running. Check the "Versions" table at http://www.eclipse.org/jetty/documentation/current/alpn-chapter.html to match the specific version of ALPN to your JRE and you should be good.
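To see exactly which JRE build you need to match against that table, you can print the runtime version from the environment that actually runs the client:
public class JreVersionCheck {
    public static void main(String[] args) {
        // The ALPN boot jar has to match this exact JRE update release.
        System.out.println(System.getProperty("java.runtime.version"));
    }
}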

Camel Route for Javamail Not Stopping on Shutdown

I have a simple route:
from("myQuartz://EMAIL_Route?cron=0+0/5+*+*+*+?")
    .routeId("EMAIL_Route")
    .shutdownRunningTask(ShutdownRunningTask.CompleteCurrentTaskOnly)
    .beanRef("errorReportProcessor")
    .filter(body().isNotNull())
    .to("smtp://smtpHost?From=someone&to=someoneElse&Subject=something")
    .end();
Even if I shut down the application in WebSphere Application Server, I still continue to get emails. The scheduler thread is not stopping. In my Quartz properties file, I also tried
org.quartz.scheduler.makeSchedulerThreadDaemon=true
but it was fruitless. The Camel, Quartz and Mail component version is 2.12.4, with Spring 3.2.5.RELEASE and WebSphere 8.
The SystemOut.log file clearly states that the application stopped without errors. However, I can still see a java.exe instance running in Task Manager.
OK, I found the issue was a missing "root-app-context". Once I configured the "root-app-context", the cron scheduler now stops and there are no more stranded threads. :)
Even the extra makeSchedulerThreadDaemon configuration was not required:
org.quartz.scheduler.makeSchedulerThreadDaemon=true
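For reference, a minimal sketch of registering a root application context programmatically (assuming a Servlet 3.0 container and Spring's WebApplicationInitializer support; root-app-context.xml is a placeholder file name), so that the Camel/Quartz context is stopped together with the web application:
import javax.servlet.ServletContext;
import org.springframework.web.WebApplicationInitializer;
import org.springframework.web.context.ContextLoaderListener;
import org.springframework.web.context.support.XmlWebApplicationContext;

public class RootContextInitializer implements WebApplicationInitializer {
    @Override
    public void onStartup(ServletContext servletContext) {
        XmlWebApplicationContext rootContext = new XmlWebApplicationContext();
        rootContext.setConfigLocation("classpath:root-app-context.xml");
        // ContextLoaderListener closes the root context on application stop,
        // which in turn shuts down the CamelContext and its Quartz scheduler.
        servletContext.addListener(new ContextLoaderListener(rootContext));
    }
}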

JBossWS & Stateless WebServices, OutFaultInterceptor being ignored

We are trying to use a WebService OutFaultInterceptor as per this blog post and it doesn't seem to work in JBoss 7.x.
The problem is simply that the @OutFaultInterceptor annotation is ignored. I tested this by putting in an erroneous interceptor name and it didn't error out. Logging within the interceptor is simply never called (when the interceptor name is correct).
I have also tried using WEB-INF/jboss-webservices.xml to define our interceptors, but that also seems to get ignored.
Removing the @Stateless annotation does not seem to help either.
This was working fine on JBoss 5.1 but simply does not work on JBoss 7.x. What am I missing here?
Is there an alternative way to "translate" exceptions into SOAP faults?
In order to use Apache CXF APIs and implementation classes, you need to add a dependency on the org.apache.cxf (API) module and/or the org.apache.cxf.impl (implementation) module:
Dependencies: org.apache.cxf services
According to the documentation:
When using annotations on your endpoints / handlers such as the Apache CXF ones (@InInterceptor, @GZIP, ...), remember to add the proper module dependency in your manifest. Otherwise your annotations are not picked up and added to the annotation index by JBoss Application Server 7, resulting in them being completely and silently ignored.
See also: JBoss Modules
I hope this helps.
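For illustration, a rough sketch (all class names are placeholders) of wiring a CXF out-fault interceptor onto a stateless endpoint, which only takes effect once the Dependencies manifest entry above is in place:
import javax.ejb.Stateless;
import javax.jws.WebService;
import org.apache.cxf.binding.soap.SoapMessage;
import org.apache.cxf.binding.soap.interceptor.AbstractSoapInterceptor;
import org.apache.cxf.interceptor.Fault;
import org.apache.cxf.interceptor.OutFaultInterceptors;
import org.apache.cxf.phase.Phase;

public class MyFaultInterceptor extends AbstractSoapInterceptor {
    public MyFaultInterceptor() {
        super(Phase.MARSHAL);
    }

    @Override
    public void handleMessage(SoapMessage message) throws Fault {
        // Inspect or rewrite the outgoing fault here, e.g. map internal
        // exceptions to sanitized SOAP fault details.
    }
}

@Stateless
@WebService
@OutFaultInterceptors(interceptors = "com.example.MyFaultInterceptor")
class MyEndpoint {
    public String ping() {
        return "pong";
    }
}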
