SOLR JDBC failing with "Could not initialize class org.apache.solr.handler.sql.SolrRules" at org.apache.solr.handler.sql.SolrTableScan.register

When trying to run the SQL query
SELECT count(*) FROM products;
we see the exception below from the Solr server. Ours is a SolrCloud setup.
Solr version: solr-8.8.2-PATCH2
SolrJ version: solr-solrj-8.8.2
The complete stack trace is below:
2023-02-09 14:21:34.824 ERROR (qtp1209411469-15) [c:products s:shard3 r:core_node12 x:otmm_shard3_replica_n10] o.a.s.s.HttpSolrCall java.lang.RuntimeException: java.lang.NoClassDefFoundError: Could not initialize class org.apache.solr.handler.sql.SolrRules
at org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:746)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:592)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:427)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:357)
at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:201)
at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:548)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:602)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1624)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1434)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:501)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1594)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1349)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:191)
at org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:177)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at org.eclipse.jetty.server.Server.handle(Server.java:516)
at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:388)
at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:633)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:380)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:386)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.solr.handler.sql.SolrRules
at org.apache.solr.handler.sql.SolrTableScan.register(SolrTableScan.java:72)
at org.apache.calcite.plan.AbstractRelOptPlanner.onNewClass(AbstractRelOptPlanner.java:239)
at org.apache.calcite.plan.volcano.VolcanoPlanner.onNewClass(VolcanoPlanner.java:464)
at org.apache.calcite.plan.AbstractRelOptPlanner.registerClass(AbstractRelOptPlanner.java:230)
at org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1224)
at org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:589)
at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:604)
at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:84)
at org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:268)
at org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1132)
at org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:589)
at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:604)
at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:84)
at org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:268)
at org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1132)
at org.apache.calcite.plan.volcano.VolcanoPlanner.setRoot(VolcanoPlanner.java:265)
at org.apache.calcite.tools.Programs.lambda$standard$3(Programs.java:262)
at org.apache.calcite.tools.Programs$SequenceProgram.run(Programs.java:331)
at org.apache.calcite.prepare.Prepare.optimize(Prepare.java:166)
at org.apache.calcite.prepare.Prepare.prepareSql(Prepare.java:297)
at org.apache.calcite.prepare.Prepare.prepareSql(Prepare.java:208)
at org.apache.calcite.prepare.CalcitePrepareImpl.prepare2_(CalcitePrepareImpl.java:642)
at org.apache.calcite.prepare.CalcitePrepareImpl.prepare_(CalcitePrepareImpl.java:508)
at org.apache.calcite.prepare.CalcitePrepareImpl.prepareSql(CalcitePrepareImpl.java:478)
at org.apache.calcite.jdbc.CalciteConnectionImpl.parseQuery(CalciteConnectionImpl.java:231)
at org.apache.calcite.jdbc.CalciteMetaImpl.prepareAndExecute(CalciteMetaImpl.java:556)
at org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:675)
at org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:156)
at org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:227)
at org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:278)
at org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:52)
at org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:465)
at org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:82)
at org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
at org.apache.solr.common.util.TextWriter.writeMap(TextWriter.java:216)
at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
at org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:153)
at org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:387)
at org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:293)
at org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
at org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
at org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
at org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:890)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:583)
... 40 more
Kindly let me know what is going wrong here.
I am using SolrJ, and I have an application that supports only JDBC, so I am in a situation where I must query Solr over JDBC via SolrJ.
It is basically a BIRT reporting tool,
where we define SQL and the output is automatically mapped to the report.
This is what we are trying to do.
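For reference, a minimal sketch of the JDBC access we are attempting (the ZooKeeper host/port and collection name are placeholders; the driver class and URL format are the ones from the Parallel SQL / JDBC documentation):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SolrJdbcSmokeTest {
    public static void main(String[] args) throws Exception {
        // The Solr JDBC driver ships in solr-solrj and is normally auto-registered;
        // loading it explicitly here just for clarity.
        Class.forName("org.apache.solr.client.solrj.io.sql.DriverImpl");
        // The URL points at the ZooKeeper ensemble of the SolrCloud cluster.
        String url = "jdbc:solr://zkhost1:9983?collection=products";
        try (Connection con = DriverManager.getConnection(url);
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT count(*) FROM products")) {
            while (rs.next()) {
                System.out.println(rs.getLong(1));
            }
        }
    }
}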
I added the following to solr.in.sh:
SOLR_OPTS="$SOLR_OPTS -Dsolr.modules=sql"
In the Solr admin UI, under Args in the System dashboard, I can see:
-Dsolr.modules=sql
But it is still not working. For reference, here is the request-handler configuration the Config API returns:
{
  "responseHeader": {
    "status": 0,
    "QTime": 2
  },
  "config": {
    "requestHandler": {
      "/export": {
        "class": "solr.ExportHandler",
        "useParams": "_EXPORT",
        "components": ["query"],
        "invariants": {
          "rq": "{!xport}",
          "distrib": false
        },
        "name": "/export",
        "_useParamsExpanded_": {"_EXPORT": "[NOT AVAILABLE]"},
        "_effectiveParams_": {
          "distrib": "false",
          "rq": "{!xport}"
        }
      }
    }
  }
}

Edit: I just realized you had linked to the wrong docs for your version. The way you are trying to add modules was not available until 9.0 (https://issues.apache.org/jira/browse/SOLR-15914) 8.8 docs are at: https://solr.apache.org/guide/8_8/ Note that until recently this feature was called Parallel Sql https://solr.apache.org/guide/8_8/parallel-sql-interface.html
For 8.8 you shouldn't need to configure modules since at that time /sql was an implicitly loaded request handler.
https://solr.apache.org/guide/8_8/implicit-requesthandlers.html
You may need to verify if the implicit handlers have been (mis)configured via the request parameters API (https://solr.apache.org/guide/8_8/implicit-requesthandlers.html#how-to-edit-implicit-handler-paramsets)
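For example (host and port are illustrative), the effective configuration of the implicit /sql handler can be pulled the same way as the /export output shown in the question:
http://localhost:8983/solr/products/config/requestHandler?componentName=/sql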
The following refers to the 9.x versions of Solr.
java.lang.NoClassDefFoundError: Could not initialize class org.apache.solr.handler.sql.SolrRules
This probably indicates that you have not successfully loaded the sql module. Normally one enables modules at the very bottom of solr.in.sh, not solr.sh as you said in your comments; solr.in.sh is the place where Solr intends for you to set up environment variables. One really shouldn't ever need to modify solr.sh directly, and doing so may make upgrades difficult in the future.
Check that there isn't another set of enablements in solr.in.sh that is overwriting what you've tried to do in solr.sh. Also check that you aren't enabling any other modules in other ways (several methods are shown here: https://solr.apache.org/guide/solr/latest/configuration-guide/solr-modules.html). You should pick one way of enabling modules (sysprops, solr.in.sh, solr.xml or solrconfig.xml) and then enable all modules that way, to avoid having to reason about any complicated precedence logic. I don't know the precedence order, but I can probably figure it out if you have other modules and absolutely can't avoid using more than one method.
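For example, for 9.x, consolidating everything at the bottom of solr.in.sh might look like this (the module list is illustrative; put every module you actually need in the one variable):
SOLR_MODULES=sql,ltr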
Also, you still haven't told us where you got the version ending in -PATCH2. This version spec sounds like you are working with some folks who are compiling their own custom Solr, so you should be sure to understand what they've changed in case they've customized something about how or where Solr loads jar files (not very likely, but one never knows).

Related

Updating fields in solr using SOLRNET - field data change [duplicate]


Solr throws error on partial update after upgrade to 8.8

I'm doing a simple partial update scenario which worked with versions 6.x and 7.x of Solr. After upgrading both Solr and SolrJ to 8.8, I'm getting the following exception:
2021-02-23 14:57:58.201 ERROR (qtp-459670553-28) [ x:core1] o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: TransactionLog doesn't know how to serialize class org.apache.lucene.document.LazyDocument$LazyField; try implementing ObjectResolver?
at org.apache.solr.update.TransactionLog$1.resolve(TransactionLog.java:100)
at org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:266)
at org.apache.solr.common.util.JavaBinCodec$BinEntryWriter.put(JavaBinCodec.java:441)
at org.apache.solr.common.ConditionalKeyMapWriter$EntryWriterWrapper.put(ConditionalKeyMapWriter.java:44)
at org.apache.solr.common.MapWriter$EntryWriter.putNoEx(MapWriter.java:101)
at org.apache.solr.common.MapWriter$EntryWriter.lambda$getBiConsumer$0(MapWriter.java:161)
at org.apache.solr.common.MapWriter$EntryWriter$$Lambda$548/0000000000000000.accept(Unknown Source)
at org.apache.solr.common.SolrInputDocument.lambda$writeMap$0(SolrInputDocument.java:59)
at org.apache.solr.common.SolrInputDocument$$Lambda$549/0000000000000000.accept(Unknown Source)
.....
The SolrJ code is similar to the sample provided here (roughly like the sketch below) and was working before the upgrade. The operation is an 'add' with a simple integer field for a document whose id is provided.
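A minimal sketch of the kind of atomic ("partial") update in question (field name, core name, and values here are placeholders, not my real schema):

import java.util.HashMap;
import java.util.Map;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class PartialUpdateExample {
    public static void main(String[] args) throws Exception {
        try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/core1").build()) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "42"); // id of the existing document
            Map<String, Object> setOp = new HashMap<>();
            setOp.put("set", 7); // atomic 'set' of a simple integer value
            doc.addField("count_i", setOp);
            client.add(doc);
            client.commit();
        }
    }
}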
Note that this is different from a previous question on Stack Overflow, since I'm passing a simple integer field and on the Solr/Lucene side it is replaced with org.apache.lucene.document.LazyDocument$LazyField.
This seems to be a bug in Solr (https://issues.apache.org/jira/browse/SOLR-13034), to be fixed in the next version of Solr 8 (8.9).
Until it's released, the workaround is to set <enableLazyFieldLoading>false</enableLazyFieldLoading> in solrconfig.xml.
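If it helps, that element normally lives in the <query> section of solrconfig.xml, so the change looks roughly like this (surrounding settings omitted):
<config>
  <query>
    <!-- work around SOLR-13034 until 8.9 -->
    <enableLazyFieldLoading>false</enableLazyFieldLoading>
  </query>
</config>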

[flink] Task manager initialization failed

I am new to Flink. I am trying to run the Flink example on my local PC (Windows).
However, after I run start-cluster.bat and log in to the dashboard, it shows that the number of task managers is 0.
I checked the log, and it seems initialization fails:
2020-02-21 23:03:14,202 ERROR org.apache.flink.runtime.taskexecutor.TaskManagerRunner - TaskManager initialization failed.
org.apache.flink.configuration.IllegalConfigurationException: Failed to create TaskExecutorResourceSpec
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:72)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.startTaskManager(TaskManagerRunner.java:356)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.<init>(TaskManagerRunner.java:152)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManager(TaskManagerRunner.java:308)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.lambda$runTaskManagerSecurely$2(TaskManagerRunner.java:322)
at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManagerSecurely(TaskManagerRunner.java:321)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.main(TaskManagerRunner.java:287)
Caused by: org.apache.flink.configuration.IllegalConfigurationException: The required configuration option Key: 'taskmanager.cpu.cores' , default: null (fallback keys: []) is not set
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.checkConfigOptionIsSet(TaskExecutorResourceUtils.java:90)
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.lambda$checkTaskExecutorResourceConfigSet$0(TaskExecutorResourceUtils.java:84)
at java.util.Arrays$ArrayList.forEach(Arrays.java:3880)
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.checkTaskExecutorResourceConfigSet(TaskExecutorResourceUtils.java:84)
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:70)
... 7 more
2020-02-21 23:03:14,217 INFO org.apache.flink.runtime.blob.TransientBlobCache - Shutting down BLOB cache
Basically, it looks like the required option 'taskmanager.cpu.cores' is not set. However, I can't find this property in flink-conf.yaml or in the documentation (https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/config.html) either.
I am using Flink 1.10.0. Any help would be highly appreciated!
That configuration option is intended for internal use only -- it shouldn't be user configured, which is why it isn't documented.
The windows start-cluster.bat is failing because of a bug introduced in Flink 1.10. See https://jira.apache.org/jira/browse/FLINK-15925.
One workaround is to use the bash script, start-cluster.sh, instead.
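For example, from a bash-capable shell on Windows (WSL, Cygwin, or Git Bash; which of these you have available is an assumption on my part):
./bin/start-cluster.sh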
See also this mailing list thread: https://lists.apache.org/thread.html/r7693d0c06ac5ced9a34597c662bcf37b34ef8e799c32cc0edee373b2%40%3Cdev.flink.apache.org%3E

Flink, odd behavior when using Hadoop Compatibility

I've added the Flink Hadoop Compatibility dependency to a project that reads a sequence file from an HDFS path:
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-hadoop-compatibility_2.11</artifactId>
    <version>1.5.6</version>
</dependency>
Here's the Java code snippet:
DataSource<Tuple2<NullWritable, BytesWritable>> input = env.createInput(HadoopInputs.readHadoopFile(
        new org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat<NullWritable, BytesWritable>(),
        NullWritable.class, BytesWritable.class, path));
This works fine when I run it inside Eclipse, but when I submit it via the command line ('flink run ...'), it complains:
The type returned by the input format could not be automatically determined. Please specify the TypeInformation of the produced type explicitly by using the 'createInput(InputFormat, TypeInformation)' method instead.
OK, so I updated my code to add the type information:
DataSource<Tuple2<NullWritable, BytesWritable>> input = env.createInput(HadoopInputs.readHadoopFile(
        new org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat<NullWritable, BytesWritable>(),
        NullWritable.class, BytesWritable.class, path),
        TypeInformation.of(new TypeHint<Tuple2<NullWritable, BytesWritable>>() {}));
Now it complains,
Caused by: java.lang.RuntimeException: Could not load the TypeInformation for the class 'org.apache.hadoop.io.Writable'. You may be missing the 'flink-hadoop-compatibility' dependency.
Some people suggest copying flink-hadoop-compatibility_2.11-1.5.6.jar to FLINK_HOME/lib, but it doesn't help; I still get the same error.
Does anyone have any clue?
My Flink is a standalone installation, version 1.5.6.
UPDATE:
Sorry, I had copied flink-hadoop-compatibility_2.11-1.5.6.jar to the wrong place; after fixing that, it works.
Now my question is: is there any other way to go? Copying that jar file to FLINK_HOME/lib is definitely not a good option to me, especially when talking about a big Flink cluster.
This was fixed in version 1.9.0; see https://issues.apache.org/jira/browse/FLINK-12163 for details.

Apache CXF 2.7.11 on WebSphere 8.5

I have an application that exposes web services for clients via CXF. This side of things works perfectly.
The application also needs to act as a client itself and contact other servers; this is where I am running into problems.
With "Parent First" classloading I get this:
Caused by: javax.xml.ws.WebServiceException: Error: Maintain Session is enabled but none of the session properties (Cookies, Over-written URL) are returned.
at org.apache.axis2.jaxws.ExceptionFactory.createWebServiceException(ExceptionFactory.java:173) ~[org.apache.axis2.jar:na]
at org.apache.axis2.jaxws.ExceptionFactory.makeWebServiceException(ExceptionFactory.java:70) ~[org.apache.axis2.jar:na]
at org.apache.axis2.jaxws.ExceptionFactory.makeWebServiceException(ExceptionFactory.java:118) ~[org.apache.axis2.jar:na]
at org.apache.axis2.jaxws.BindingProvider.setupSessionContext(BindingProvider.java:355) ~[org.apache.axis2.jar:na]
at org.apache.axis2.jaxws.BindingProvider.checkMaintainSessionState(BindingProvider.java:322) ~[org.apache.axis2.jar:na]
at org.apache.axis2.jaxws.client.proxy.JAXWSProxyHandler.invokeSEIMethod(JAXWSProxyHandler.java:393) ~[org.apache.axis2.jar:na]
at ...
With "Parent last" classloading the application can't even expose its own services:
[23/06/15 15:33:12:985 BST] 000002d3 servlet E com.ibm.ws.webcontainer.servlet.ServletWrapper service Uncaught service() exception thrown by servlet cxf: java.lang.VerifyError: JVMVRFY013 class loading constraint violated; class=org/apache/cxf/jaxb/attachment/JAXBAttachmentUnmarshaller, method=getAttachmentAsDataHandler(Ljava/lang/String;)Ljavax/activation/DataHandler;, pc=0
at java.lang.J9VMInternals.verifyImpl(Native Method)
at java.lang.J9VMInternals.verify(J9VMInternals.java:85)
at java.lang.J9VMInternals.initialize(J9VMInternals.java:162)
I have tried disabling WebSphere's own JAX-WS engine via the WAR's MANIFEST.MF, and no matter what I try with "Parent last" classloading I always get some error like the above: a different class depending on which JAR I have moved or replaced, but always a verify error.
I have also gone through the official Apache documentation, various IBM guides, and countless blog and forum posts, to no avail. I am at my wit's end with this.
The same WAR runs perfectly on Tomcat, JBoss and WebLogic.
This is a complete list of all third-party JAR files:
activation-1.1.jar
antisamy-1.4.3.jar
aopalliance-1.0.jar
asm-3.3.1.jar
batik-css-1.7.jar
batik-ext-1.7.jar
batik-util-1.7.jar
bcprov-jdk15-1.46.jar
bsh-core-2.0b4.jar
commons-beanutils-1.7.0.jar
commons-beanutils-core-1.7.0.jar
commons-codec-1.3.jar
commons-collections-3.2.jar
commons-configuration-1.5.jar
commons-dbutils-1.6.jar
commons-digester-1.8.jar
commons-fileupload-1.3.1.jar
commons-httpclient-3.1.jar
commons-io-2.4.jar
commons-jexl-2.1.1.jar
commons-lang-2.4.jar
commons-logging-1.1.1.jar
cxf-api-2.7.11.jar
cxf-rt-bindings-soap-2.7.11.jar
cxf-rt-bindings-xml-2.7.11.jar
cxf-rt-core-2.7.11.jar
cxf-rt-databinding-jaxb-2.7.11.jar
cxf-rt-frontend-jaxws-2.7.11.jar
cxf-rt-frontend-simple-2.7.11.jar
cxf-rt-transports-http-2.7.11.jar
cxf-rt-ws-addr-2.7.11.jar
cxf-rt-ws-policy-2.7.11.jar
dom4j-1.6.1.jar
esapi-2.0.1.jar
FastInfoset-1.0.2.jar
geronimo-javamail_1.4_spec-1.7.1.jar
hamcrest-all-1.3.jar
hsqldb-1.8.0.10.jar
httpclient-4.3.6.jar
httpcore-4.3.3.jar
jaxen-1.1-beta-8.jar
jaxrpc-api-1.1.jar
jaxrpc-impl-1.1.3_01.jar
jaxrpc-spi-1.1.3_01.jar
joda-time-2.2.jar
js-1.7R2.jar
log4j-1.2.16.jar
logback-classic-0.9.21.jar
logback-core-0.9.21.jar
mail-1.4.7.jar
mailapi-1.4.3.jar
nekohtml-1.9.12.jar
not-yet-commons-ssl-0.3.9.jar
opensaml-2.6.1.jar
openws-1.5.1.jar
quartz-1.8.6.jar
saaj-api-1.3.5.jar
saaj-impl-1.3.jar
serializer-2.7.1.jar
slf4j-api-1.6.0.jar
slf4j-log4j12-1.6.0.jar
spring-aop-3.2.6.RELEASE.jar
spring-beans-3.2.6.RELEASE.jar
spring-context-3.2.6.RELEASE.jar
spring-core-3.2.6.RELEASE.jar
spring-expression-3.2.6.RELEASE.jar
spring-web-3.2.6.RELEASE.jar
stax2-api-3.1.4.jar
velocity-1.7.jar
vuelinkcore-20.2.3.jar
vueservlet-20.2.3.jar
woodstox-core-asl-4.2.1.jar
wsdl4j-1.6.3.jar
xml-apis-ext-1.3.04.jar
xml-resolver-1.2.jar
xmlsec-1.5.6.jar
xmltooling-1.4.1.jar
xom-1.1.jar
Does anyone know how to get Apache CXF 2.7.11 on WebSphere 8.5 to be able to act as a server and as a client?
We had the same problem using WAS 8.5 (JDK 1.7 64-bit), CXF, JAXB & XMLBeans:
JAXB is the default XML/Java binding used by CXF. WAS 8.5 uses an endorsed JAXB API definition, version 2.2.2 (in <WebSphere-dir>\AppServer\endorsed_apis\jaxb-api.jar), and the standard implementation (in the JRE's rt.jar).
XMLBeans 2.4.x holds org.w3c.* classes that are already present in WAS (<WebSphere-dir>\AppServer\java_1.7_64\jre\lib\xml.jar).
In the end we solved it as follows:
first, following the instructions here:
http://www.ibm.com/developerworks/websphere/library/techarticles/1001_thaker/1001_thaker.html
then deleting the following JARs from our deployment (one Maven-based way to keep them out is sketched below):
activation-*,
stax-api-* (but not stax2-api!),
jaxb-api-*,
jaxb-impl-*,
xercesImpl-*,
xml-apis-*
and lastly deleting all org.w3c.* classes inside xmlbeans-2.x.jar.
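If your build is Maven-based, one way to keep such JARs out of the WAR is an exclusion on whichever dependency drags them in (a sketch only; the coordinates below are illustrative, so check your own tree with mvn dependency:tree first):
<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-frontend-jaxws</artifactId>
    <version>2.7.11</version>
    <exclusions>
        <!-- rely on the classes the container provides instead of shipping our own -->
        <exclusion>
            <groupId>javax.activation</groupId>
            <artifactId>activation</artifactId>
        </exclusion>
    </exclusions>
</dependency>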
This is a complete list of all third-party JAR files we are successfully using:
cxf-*-2.7.11.jar
dom4j-1.6.1.jar
ehcache-2.8.2.jar
ehcache-core-2.5.1.jar
jettison-1.1.jar
neethi-3.0.3.jar
ognl-3.0.6.jar
opensaml-2.6.1.jar
openws-1.5.1.jar
spring-*-3.2.13.RELEASE.jar
stax2-api-3.1.1.jar
woodstox-core-asl-4.2.1.jar
wsdl4j-1.6.3.jar
wss4j-1.6.10.jar
xml-resolver-1.2.jar
xmlbeans-2.3.0-now3c.jar
xmlpull-1.1.3.1.jar
xmlschema-core-2.1.0.jar
xmlsec-1.5.4.jar
xmltooling-1.4.1.jar
xpp3_min-1.1.4c.jar
xstream-1.4.7.jar
We hope this is helpful.
PARENT_LAST:
Maybe you have a third-party library in your deployment that contains the javax.activation.DataHandler class. Try removing activation-1.1.jar from your deployment.
This post can be useful for you: LinkageError whilst trying to invoke CXF/SOAP webservice
