JBossWS & Stateless WebServices, OutFaultInterceptor being ignored - cxf

We are trying to use a web service OutFaultInterceptor as described in a blog post, and it doesn't seem to work on JBoss 7.x.
The problem is simply that the @OutFaultInterceptor annotation is ignored. I tested this by putting in an erroneous interceptor name, and it didn't error out. Logging within the interceptor is never called (when the interceptor name is correct).
I have also tried using WEB-INF/jboss-webservices.xml to define out interceptors, but that also seems to be ignored.
Removing the @Stateless annotation does not help either.
This was working fine on JBoss 5.1 but simply does not work on JBoss 7.x. What am I missing here?
Is there an alternative way to "translate" exceptions into SOAP faults?

In order to use Apache CXF APIs and implementation classes, you need to add a dependency on the org.apache.cxf (API) module and/or the org.apache.cxf.impl (implementation) module, for example in META-INF/MANIFEST.MF:
Dependencies: org.apache.cxf services
According to the documentation:
When using annotations on your endpoints / handlers such as the Apache
CXF ones (@InInterceptor, @GZIP, ...) remember to add the proper
module dependency in your manifest. Otherwise your annotations are not
picked up and added to the annotation index by JBoss Application
Server 7, resulting in them being completely and silently ignored.
See also: JBoss Modules
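For reference, here is a minimal sketch of the kind of endpoint and out-fault interceptor being discussed, assuming hypothetical class names (MyEndpoint, MyFaultInterceptor); the essential part is the module dependency above, without which the CXF annotations are never indexed:

// META-INF/MANIFEST.MF of the deployment must contain:
// Dependencies: org.apache.cxf services
import javax.ejb.Stateless;
import javax.jws.WebService;
import org.apache.cxf.interceptor.OutFaultInterceptors;

@Stateless
@WebService
@OutFaultInterceptors(interceptors = {"com.example.MyFaultInterceptor"})
public class MyEndpoint {
    // web methods ...
}

// In a separate file: inspects the exception and adjusts the outgoing fault.
import org.apache.cxf.binding.soap.SoapMessage;
import org.apache.cxf.binding.soap.interceptor.AbstractSoapInterceptor;
import org.apache.cxf.interceptor.Fault;
import org.apache.cxf.phase.Phase;

public class MyFaultInterceptor extends AbstractSoapInterceptor {

    public MyFaultInterceptor() {
        super(Phase.MARSHAL); // runs while the outgoing fault is being written
    }

    @Override
    public void handleMessage(SoapMessage message) throws Fault {
        Exception cause = message.getContent(Exception.class);
        // translate 'cause' into the desired SOAP fault details here
    }
}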
I hope this helps.

Related

Option 'getWithBody' of HttpComponent is not working in Apache Camel

The version of Apache Camel is 2.18.1
In the documentation for 2.x (https://camel.apache.org/components/2.x/http4-component.html), the getWithBody and deleteWithBody options are listed as query parameters:
deleteWithBody (producer): Whether the HTTP DELETE should include the message body or not. By default HTTP DELETE does not include any HTTP body. However, in some rare cases users may need to be able to include the message body. Default: false
getWithBody (producer): Whether the HTTP GET should include the message body or not. By default HTTP GET does not include any HTTP body. However, in some rare cases users may need to be able to include the message body. Default: false
But when I append one of these two parameters to the end of the endpoint URI, it is not recognized as an option. Instead, it is passed to the target endpoint as an ordinary query parameter, while the other documented options are recognized and consumed by the component rather than forwarded.
When I inspect the source code, I see that options are recognized by matching them against the fields and methods of the HttpEndpoint (org.apache.camel.component.http4) and HttpCommonEndpoint (org.apache.camel.http.common) classes. The getWithBody and deleteWithBody fields do not exist in these classes, while the other options can be found among their fields.
Can I assume that the documentation is wrong? If so, how can I send a body with Camel's HttpComponent (Http4Component) when the HTTP method is GET or DELETE?
Option deleteWithBody was introduced in Apache Camel 2.19.0. See CAMEL-10916.
Option getWithBody was introduced in Apache Camel 3.0.0 and backported to 2.25.0. See CAMEL-14118.
For such an old version, use the docs archived on GitHub (they are not published on the website): https://github.com/apache/camel/blob/camel-2.18.x/components/camel-http4/src/main/docs/http4-component.adoc
You need to update to a newer version or implement a custom component that overrides some methods of the HTTP4 component. There is no option to enable this out of the box in 2.18.1.
I agree with the answer given by @Bedla.
To add something, this is what we did.
We checked the code and debugged it. This allowed us to realize that, at the point where Camel builds the HTTP call, it drops the body if a GET request has one, and the request is sent as a plain GET, because only Camel 3.x.x supports a GET with a body.
We tried different ways to forcefully add the body; that also failed, because whatever we added was discarded by Camel.
We were using Camel v2.22.1 at the time. Going to a higher version such as Camel 3.x.x would be a big leap, since it brings in multiple changes, so luckily for us the Camel team backported the ability to send a GET request with a body (getWithBody from here on) to Camel v2.25.0.
Code-level changes:
Append getWithBody=true to the request URI (see the sketch after this list)
Updated Camel modules:
camel-core-2.25.0.jar
camel-cxf-transport-2.25.0.jar
camel-cxf-2.25.0.jar
camel-core-xml-2.25.0.jar
camel-http-common-2.25.0.jar
camel-jaxb-2.25.0.jar
camel-spring-2.25.0.jar
camel-soap-2.25.0.jar
camel-cdi-2.25.0.jar
camel-jdbc-2.25.0.jar
camel-http4-2.25.0.jar
Special note:
Under each module folder location there is a modules.xml file. You need to open it and change the jar file version number to the one you want to use, in this case 2.25.0.
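For completeness, a minimal sketch of the kind of route this enables once the 2.25.0 jars are in place (endpoint URIs are illustrative, not from our actual project):

import org.apache.camel.builder.RouteBuilder;

public class GetWithBodyRoute extends RouteBuilder {
    @Override
    public void configure() {
        // getWithBody=true tells camel-http4 (2.25.0+) to keep the message
        // body on the outgoing request even though the method is GET
        from("direct:lookup")
            .setHeader("CamelHttpMethod", constant("GET"))
            .to("http4://example.org/api/search?getWithBody=true");
    }
}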
That’s it and Happy Coding !

Multiple endpoint support in camel 3.0.0-m2 required?

Earlier, in version 2.1x, from() accepted a String[] parameter, but as of 3.0.0-m2 that feature has been removed (only one endpoint can be given).
Could you please advise how to handle multiple endpoints in from()?
This was never fully implemented or endorsed in Camel 2.x, by the way. In Camel 3 it has been removed, and you should stick to one endpoint per route.
Also make sure to read the migration guide, which you can find on GitHub.
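The usual replacement is one route per input endpoint, with all of them delegating to a shared route; a minimal sketch with illustrative endpoint URIs:

import org.apache.camel.builder.RouteBuilder;

public class MultipleInputsRoute extends RouteBuilder {
    @Override
    public void configure() {
        // one route per former from() endpoint, both feeding a shared route
        from("file:inbox").to("direct:process");
        from("jms:queue:orders").to("direct:process");

        // the common processing lives behind a single direct: endpoint
        from("direct:process").log("processing ${body}");
    }
}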

interceptFrom and interceptSendToEndpoint not working

I was trying to intercept from/to a specific RabbitMQ route, something like the following:
interceptFrom("rabbitmq:localhost/someExchangeName?queue=somerRabbitMqQueueName").to("log:hello");
and I'm not getting anything there.
I've tested
intercept().to("log:hello") and I can confirm it's working. Can anyone let me know if there's something else I need to configure to make the intercept from/to work?
We're using the Java DSL and Google Guice for dependency injection.
Some of the project setup is as follows:
camel version: 2.18.3 (tried also 2.19.1)
camel-guice: 2.18.3
guice-multibindings: 4.1.0
camel-rabbitmq: 2.18.3
maven-compiler-plugin: 1.7
This was also asked on the Camel mailing list.
Make sure the pattern matches the actual URI; they must be exactly the
same unless you use a wildcard (*). For example, you can just do
interceptFrom("rabbitmq:localhost/xxx*")
or try
interceptFrom("rabbitmq:localhost/xxx?queue=foo*")
See also the Camel documentation, and the section about the wildcard pattern (at the bottom of the page): http://camel.apache.org/intercept
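A minimal sketch of how this looks in the Java DSL (URI and queue name are illustrative); note that interceptFrom has to be declared before the regular routes in the RouteBuilder:

import org.apache.camel.builder.RouteBuilder;

public class RabbitInterceptRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // wildcard match covers the query part of the consumer URI as well
        interceptFrom("rabbitmq:localhost/someExchangeName*")
            .to("log:hello");

        from("rabbitmq:localhost/someExchangeName?queue=somerRabbitMqQueueName")
            .to("log:normalRoute");
    }
}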

authentication/http headers support in forge.file trigger.io module?

In the official trigger.io docs there seems to be no provision for custom HTTP headers when it comes to the forge.file module. I need this so I can download files behind an HTTP authentication scheme. This seems like an easy thing to add, if support is not already there.
Any workarounds? Any chance of a quick fix in the next update? I know I could use forge.request instead, but I'd like to keep a local copy (saveURL).
Thanks.
Unfortunately the file module just uses simple "download url" methods rather than a full HTTP request library, which makes it a fairly big task to add support for custom headers.
I've added a task to our backlog for this, but I don't have a timeframe for it being added.
Currently on iOS you can do basic auth by using URLs of the form http://user:password@url.com, in case that helps.
Maybe to avoid this you can configure your server differently, or put a proxy server in front that lets you pass authentication details as GET parameters?

How do you debug CXF endpoint publishing?

Given the "cxf-osgi" example from fuse source's apache-servicemix-4.4.1-fuse-00-08, built with maven 3.0.3, when deploying it to apache karaf 2.2.4 and CXF 2.4.3 the web service is never published and never visible to the CXF servlet (http://localhost:8181/cxf/). There are no errors in the karaf log. How would one go about debugging such behavior?
It's worth turning up the log level(s) - you can do this permanently in the etc/org.ops4j.pax.logging.cfg or in the console with log:set TRACE org.apache.cxf - IIRC this will show some useful information.
Also check that it's actually published on localhost/127.0.0.1; it may well be published on another interface, i.e. the IP of the local network but not localhost. Try using 0.0.0.0 as the address; that way it will bind to all available interfaces.
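As a quick sanity check outside OSGi, you can publish the service implementation standalone and bind it to all interfaces; a minimal sketch, assuming a hypothetical HelloWorldImpl class annotated with @WebService:

import javax.xml.ws.Endpoint;

public class PublishTest {
    public static void main(String[] args) {
        // 0.0.0.0 binds to every network interface, not just localhost
        Endpoint.publish("http://0.0.0.0:9000/HelloWorld", new HelloWorldImpl());
        System.out.println("WSDL at http://localhost:9000/HelloWorld?wsdl");
    }
}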
As you're using maven, you can download the CXF source (easily in Eclipse) and connect a remote debugger to the Karaf instance, with some strategically placed breakpoints you should be able to get a handle on what's going on.
Try changing to Equinox instead of the default, Felix. There is a bug in 2.4.3 in that it doesn't work well with Felix. Alternatively, CXF 2.4.4 is now available, which should also fix it.
Take a look at this issue I filed this week: https://issues.apache.org/jira/browse/CXF-4058
What I found is that if my beans.xml is loaded before the cxf bundle jar, then the endpoints are registered with CXF but not with the OSGi http service. So everything looks good from the logs but the endpoints are never accessible. This is a race condition.
I did two workarounds: 1) in the short term, just move my own jars later in the boot order (I use Karaf features) so Spring and CXF are fully loaded before my beans.xml is read and 2) abandon Spring and roll my own binding code based loosely on this approach: http://eclipsesource.com/blogs/2012/01/23/an-osgi-jax-rs-connector-part-1-publishing-rest-services/
I just implemented solution #2 yesterday and I'm already extremely happy with it. It's solved all of my classloader issues (before I had to manually add a lot of Import-Package lines because BND doesn't see beans.xml references) and fixed my boot race condition.
