WebLogic managed node not coming up - WebLogic 11g

I am getting the error below while starting one of the managed nodes in my cluster. There are around six managed nodes, and the other five are coming up fine; however, one node is giving the error below. Can someone let me know where I need to look for the cause of this?
<Mar 3, 2017 9:26:10 AM CST> <Error> <Deployer> <WL-149231> <Unable to set the activation state to true for the application 'apsp-ear-trp'.
weblogic.application.ModuleException: Could not setup environment
at weblogic.servlet.internal.WebAppModule.activateContexts(WebAppModule.java:1516)
at weblogic.servlet.internal.WebAppModule.activate(WebAppModule.java:444)
at weblogic.application.internal.flow.ModuleStateDriver$2.next(ModuleStateDriver.java:375)
at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:52)
at weblogic.application.internal.flow.ModuleStateDriver.activate(ModuleStateDriver.java:95)
Truncated. see log file for complete stacktrace
Caused By: weblogic.deployment.EnvironmentException: [J2EE:160101]Error: The ejb-link 'UserManager' declared in the ejb-ref or ejb-local-ref 'ejb/UserManager' in the application module 'admin-tools.war' could not be resolved. The target EJB for the ejb-ref could not be found. Please ensure the link is correct.
at weblogic.deployment.BaseEnvironmentBuilder.addEJBLinkRef(BaseEnvironmentBuilder.java:469)
at weblogic.deployment.EnvironmentBuilder.addEJBReferences(EnvironmentBuilder.java:496)
at weblogic.servlet.internal.CompEnv.activate(CompEnv.java:157)
at weblogic.servlet.internal.WebAppServletContext.activate(WebAppServletContext.java:3164)
at weblogic.servlet.internal.WebAppModule.activateContexts(WebAppModule.java:1514)
Truncated. see log file for complete stacktrace
<Mar 3, 2017 9:26:10 AM CST> <Error> <Deployer> <WL-149250> <Unable to unprepare application 'apsp-ear-trp'.
java.lang.NoClassDefFoundError: org/hibernate/event/EventListeners$2
at org.hibernate.event.EventListeners.destroyListeners(EventListeners.java:215)
at org.hibernate.impl.SessionFactoryImpl.close(SessionFactoryImpl.java:850)
at org.hibernate.ejb.EntityManagerFactoryImpl.close(EntityManagerFactoryImpl.java:46)
at weblogic.deployment.BasePersistenceUnitInfoImpl.close(BasePersistenceUnitInfoImpl.java:656)
at weblogic.deployment.PersistenceUnitInfoImpl.close(PersistenceUnitInfoImpl.java:19)
Truncated. see log file for complete stacktrace
Caused By: java.lang.NoClassDefFoundError: org/hibernate/event/EventListeners$2
at org.hibernate.event.EventListeners.destroyListeners(EventListeners.java:215)
at org.hibernate.impl.SessionFactoryImpl.close(SessionFactoryImpl.java:850)
at org.hibernate.ejb.EntityManagerFactoryImpl.close(EntityManagerFactoryImpl.java:46)
at weblogic.deployment.BasePersistenceUnitInfoImpl.close(BasePersistenceUnitInfoImpl.java:656)
at weblogic.deployment.PersistenceUnitInfoImpl.close(PersistenceUnitInfoImpl.java:19)
Truncated. see log file for complete stacktrace
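For context, the "Caused By" above means the ejb-ref declared in admin-tools.war points at an ejb-link that cannot be matched to any EJB packaged in the EAR. A reference of this kind is declared in the module's web.xml roughly as follows (the ref and link names are taken from the error message; the interface class names are hypothetical placeholders):
<ejb-local-ref>
<ejb-ref-name>ejb/UserManager</ejb-ref-name>
<ejb-ref-type>Session</ejb-ref-type>
<local-home>com.example.UserManagerLocalHome</local-home> <!-- hypothetical -->
<local>com.example.UserManagerLocal</local> <!-- hypothetical -->
<ejb-link>UserManager</ejb-link> <!-- must match an <ejb-name> in an EJB module of the EAR -->
</ejb-local-ref>
Since the other five nodes start fine, one thing to check is whether the EJB module that defines the matching <ejb-name> is actually targeted to this particular managed server.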

Related

quarkus-cxf native build fails - UnresolvedElementException - Discovered unresolved method during parsing

Good day to all :-)
I am using the quarkus-cxf extension and have now encountered the following problem.
In JVM mode everything works fine. Thank you very much for your library.
But I get errors in native mode. I would be very grateful for a tip on what I am probably doing wrong …
Quarkus Version: 1.7.1.Final
quarkus-cxf Version: https://github.com/shumonsharif/quarkus-cxf/blob/master/pom.xml
The error occurs on:
mvn clean package -Dquarkus.native.container-build=true -Dquarkus.container-image.build=true -Dquarkus.container-image.registry=nfrt-docker-staging-local.repo.pnet.ch -Dquarkus.container-image.tag=latest -Pnative
Caused by: com.oracle.graal.pointsto.constraints.UnsupportedFeatureException: com.oracle.graal.pointsto.constraints.UnresolvedElementException: Discovered unresolved method during parsing: org.apache.cxf.staxutils.validation.W3CMultiSchemaFactory.<init>(). To diagnose the issue you can use the --allow-incomplete-classpath option. The missing method is then reported at run time when it is accessed the first time.
Detailed message:
Trace:
at parsing org.apache.cxf.staxutils.validation.Stax2ValidationUtils.getValidator(Stax2ValidationUtils.java:164)
Call path from entry point to org.apache.cxf.staxutils.validation.Stax2ValidationUtils.getValidator(Endpoint, ServiceInfo):
at org.apache.cxf.staxutils.validation.Stax2ValidationUtils.getValidator(Stax2ValidationUtils.java:136)
at org.apache.cxf.staxutils.validation.Stax2ValidationUtils.setupValidation(Stax2ValidationUtils.java:115)
at org.apache.cxf.staxutils.validation.WoodstoxValidationImpl.setupValidation(WoodstoxValidationImpl.java:66)
at org.apache.cxf.databinding.source.XMLStreamDataReader.validate(XMLStreamDataReader.java:231)
at org.apache.cxf.databinding.source.XMLStreamDataReader.read(XMLStreamDataReader.java:115)
at org.apache.cxf.databinding.source.XMLStreamDataReader.read(XMLStreamDataReader.java:83)
at org.apache.cxf.databinding.source.XMLStreamDataReader.read(XMLStreamDataReader.java:67)
at org.apache.cxf.wsdl.interceptors.BareInInterceptor.handleMessage(BareInInterceptor.java:131)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:308)
at org.apache.cxf.transport.MultipleEndpointObserver.onMessage(MultipleEndpointObserver.java:98)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream$1.run(HTTPConduit.java:1201)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:834)
at com.oracle.svm.core.thread.JavaThreads.threadStartRoutine(JavaThreads.java:517)
at com.oracle.svm.core.posix.thread.PosixJavaThreads.pthreadStartRoutine(PosixJavaThreads.java:193)
at com.oracle.svm.core.code.IsolateEnterStub.PosixJavaThreads_pthreadStartRoutine_e1f4a8c0039f8337338252cd8734f63a79b5e3df(generated:0)
at com.oracle.graal.pointsto.constraints.UnsupportedFeatures.report(UnsupportedFeatures.java:126)
at com.oracle.svm.hosted.NativeImageGenerator.runPointsToAnalysis(NativeImageGenerator.java:750)
... 8 more
Caused by: com.oracle.graal.pointsto.constraints.UnresolvedElementException: Discovered unresolved method during parsing: org.apache.cxf.staxutils.validation.W3CMultiSchemaFactory.<init>(). To diagnose the issue you can use the --allow-incomplete-classpath option. The missing method is then reported at run time when it is accessed the first time.
at com.oracle.svm.hosted.phases.SharedGraphBuilderPhase$SharedBytecodeParser.reportUnresolvedElement(SharedGraphBuilderPhase.java:259)
at com.oracle.svm.hosted.phases.SharedGraphBuilderPhase$SharedBytecodeParser.handleUnresolvedMethod(SharedGraphBuilderPhase.java:249)
at com.oracle.svm.hosted.phases.SharedGraphBuilderPhase$SharedBytecodeParser.handleUnresolvedInvoke(SharedGraphBuilderPhase.java:203)
at jdk.internal.vm.compiler/org.graalvm.compiler.java.BytecodeParser.genInvokeSpecial(BytecodeParser.java:1811)
at jdk.internal.vm.compiler/org.graalvm.compiler.java.BytecodeParser.genInvokeSpecial(BytecodeParser.java:1801)
at jdk.internal.vm.compiler/org.graalvm.compiler.java.BytecodeParser.processBytecode(BytecodeParser.java:5339)
at jdk.internal.vm.compiler/org.graalvm.compiler.java.BytecodeParser.iterateBytecodesForBlock(BytecodeParser.java:3423)
at jdk.internal.vm.compiler/org.graalvm.compiler.java.BytecodeParser.processBlock(BytecodeParser.java:3230)
at jdk.internal.vm.compiler/org.graalvm.compiler.java.BytecodeParser.build(BytecodeParser.java:1088)
at jdk.internal.vm.compiler/org.graalvm.compiler.java.BytecodeParser.buildRootMethod(BytecodeParser.java:982)
at jdk.internal.vm.compiler/org.graalvm.compiler.java.GraphBuilderPhase$Instance.run(GraphBuilderPhase.java:84)
at jdk.internal.vm.compiler/org.graalvm.compiler.phases.Phase.run(Phase.java:49)
at jdk.internal.vm.compiler/org.graalvm.compiler.phases.BasePhase.apply(BasePhase.java:214)
at jdk.internal.vm.compiler/org.graalvm.compiler.phases.Phase.apply(Phase.java:42)
at jdk.internal.vm.compiler/org.graalvm.compiler.phases.Phase.apply(Phase.java:38)
at com.oracle.graal.pointsto.flow.MethodTypeFlowBuilder.parse(MethodTypeFlowBuilder.java:225)
at com.oracle.graal.pointsto.flow.MethodTypeFlowBuilder.apply(MethodTypeFlowBuilder.java:352)
at com.oracle.graal.pointsto.flow.MethodTypeFlow.doParse(MethodTypeFlow.java:322)
at com.oracle.graal.pointsto.flow.MethodTypeFlow.ensureParsed(MethodTypeFlow.java:311)
at com.oracle.graal.pointsto.flow.MethodTypeFlow.addContext(MethodTypeFlow.java:112)
at com.oracle.graal.pointsto.DefaultAnalysisPolicy$DefaultSpecialInvokeTypeFlow.onObservedUpdate(DefaultAnalysisPolicy.java:373)
at com.oracle.graal.pointsto.flow.TypeFlow.notifyObservers(TypeFlow.java:470)
at com.oracle.graal.pointsto.flow.TypeFlow.update(TypeFlow.java:542)
at com.oracle.graal.pointsto.BigBang$2.run(BigBang.java:530)
at com.oracle.graal.pointsto.util.CompletionExecutor.lambda$execute$0(CompletionExecutor.java:173)
at java.base/java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1426)
... 5 more
Error: Image build request failed with exit status 1
You can retry with the latest version at https://github.com/quarkiverse/quarkiverse-cxf
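As a diagnostic step, the flag recommended by the error message itself can be passed through Quarkus' native build arguments. A minimal sketch, assuming a standard application.properties (note this only defers the failure to run time; it does not supply the missing class):
# application.properties; diagnostic only
quarkus.native.additional-build-args=--allow-incomplete-classpath
Equivalently, it can be appended to the mvn command line as -Dquarkus.native.additional-build-args=--allow-incomplete-classpath.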

[flink] Task manager initialization failed

I am new to Flink. I am trying to run the Flink example on my local PC (Windows).
However, after I run start-cluster.bat and log in to the dashboard, it shows that the number of task managers is 0.
I checked the log, and it seems initialization fails:
2020-02-21 23:03:14,202 ERROR org.apache.flink.runtime.taskexecutor.TaskManagerRunner - TaskManager initialization failed.
org.apache.flink.configuration.IllegalConfigurationException: Failed to create TaskExecutorResourceSpec
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:72)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.startTaskManager(TaskManagerRunner.java:356)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.<init>(TaskManagerRunner.java:152)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManager(TaskManagerRunner.java:308)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.lambda$runTaskManagerSecurely$2(TaskManagerRunner.java:322)
at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManagerSecurely(TaskManagerRunner.java:321)
at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.main(TaskManagerRunner.java:287)
Caused by: org.apache.flink.configuration.IllegalConfigurationException: The required configuration option Key: 'taskmanager.cpu.cores' , default: null (fallback keys: []) is not set
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.checkConfigOptionIsSet(TaskExecutorResourceUtils.java:90)
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.lambda$checkTaskExecutorResourceConfigSet$0(TaskExecutorResourceUtils.java:84)
at java.util.Arrays$ArrayList.forEach(Arrays.java:3880)
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.checkTaskExecutorResourceConfigSet(TaskExecutorResourceUtils.java:84)
at org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:70)
... 7 more
2020-02-21 23:03:14,217 INFO org.apache.flink.runtime.blob.TransientBlobCache - Shutting down BLOB cache
Basically, it looks like the required option 'taskmanager.cpu.cores' is not set. However, I can't find this property in flink-conf.yaml or in the documentation (https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/config.html) either.
I am using flink 1.10.0. Any help would be highly appreciated!
That configuration option is intended for internal use only; it shouldn't be user-configured, which is why it isn't documented.
The Windows start-cluster.bat is failing because of a bug introduced in Flink 1.10. See https://jira.apache.org/jira/browse/FLINK-15925.
One workaround is to use the bash script, start-cluster.sh, instead.
See also this mailing list thread: https://lists.apache.org/thread.html/r7693d0c06ac5ced9a34597c662bcf37b34ef8e799c32cc0edee373b2%40%3Cdev.flink.apache.org%3E
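A minimal sketch of that workaround, assuming a bash environment is available on Windows (e.g. WSL, Git Bash, or Cygwin) and the standard Flink 1.10.0 distribution layout:
cd flink-1.10.0
./bin/start-cluster.sh
# The dashboard at http://localhost:8081 should then show 1 task manager.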

Quartz clustering in Camel Spring DSL

I am trying to achieve "request recovery" in a fail-over scenario on two different machines, with their clocks also in sync.
My configuration is as below:
Step 1: camel-context.xml
I have defined the route below in the camel-context.xml file.
<route id="quartz" trace="true">
<from uri="quartz2://cluster/quartz?cron=0+0/2+++*+?&durableJob=true&stateful=true&recoverableJob=true">
<route>
Step 2: quartz.properties
I have enabled:
org.quartz.jobStore.isClustered = true
org.quartz.scheduler.instanceId = AUTO
org.quartz.scheduler.instanceName = ClusteredScheduler
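For completeness, a clustered JDBC job store normally also needs the store and data source configured. A sketch using standard Quartz property names (the data source name and URL here are placeholders for your environment):
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.dataSource = quartzDS
org.quartz.jobStore.clusterCheckinInterval = 20000
org.quartz.dataSource.quartzDS.URL = jdbc:mysql://dbhost:3306/quartz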
Currently I am running the same Camel application in two different instances locally, and clustering is working fine. But when I try to test "request recovery", I am getting the below exception.
Exception:
[QuartzScheduler_ClusteredScheduler-camelContext-16308243724_ClusterManager] INFO org.quartz.impl.jdbcjobstore.JobStoreTX - ClusterManager: detected 1 failed or restarted instances.
[QuartzScheduler_ClusteredScheduler-camelContext-16308243724_ClusterManager] INFO org.quartz.impl.jdbcjobstore.JobStoreTX - ClusterManager: Scanning for instance "6308270818"'s failed in-progress jobs.
[QuartzScheduler_ClusteredScheduler-camelContext-16308243724_ClusterManager] INFO org.quartz.impl.jdbcjobstore.JobStoreTX - ClusterManager: ......Scheduled 1 recoverable job(s) for recovery.
[ClusteredScheduler-camelContext_Worker-1] WARN org.apache.camel.component.quartz2.CamelJob - Cannot find existing QuartzEndpoint with uri: quartz2://cluster/quartz?cron=0+0%2F2+*+*+*+%3F&durableJob=true&recoverableJob=true&stateful=true. Creating new endpoint instance.
[ClusteredScheduler-camelContext_Worker-1] ERROR org.apache.camel.component.quartz2.CamelJob - Failed to execute CamelJob.
org.apache.camel.ResolveEndpointFailedException: Failed to resolve endpoint: quartz2://cluster/quartz?cron=0+0%2F2+*+*+*+%3F&durableJob=true&recoverableJob=true&stateful=true due to: Trigger key cluster.quartz is already in used by Endpoint[quartz2://cluster/quartz?cron=0+0%2F2+*+*+*+%3F&durableJob=true&recoverableJob=true&stateful=true]
at org.apache.camel.impl.DefaultCamelContext.getEndpoint(DefaultCamelContext.java:545)
at org.apache.camel.impl.DefaultCamelContext.getEndpoint(DefaultCamelContext.java:558)
at org.apache.camel.component.quartz2.CamelJob.lookupQuartzEndpoint(CamelJob.java:123)
at org.apache.camel.component.quartz2.CamelJob.execute(CamelJob.java:49)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
Caused by: java.lang.IllegalArgumentException: Trigger key cluster.quartz is already in used by Endpoint[quartz2://cluster/quartz?cron=0+0%2F2+*+*+*+%3F&durableJob=true&recoverableJob=true&stateful=true]
at org.apache.camel.component.quartz2.QuartzEndpoint.ensureNoDupTriggerKey(QuartzEndpoint.java:272)
at org.apache.camel.component.quartz2.QuartzEndpoint.addJobInScheduler(QuartzEndpoint.java:254)
at org.apache.camel.component.quartz2.QuartzEndpoint.doStart(QuartzEndpoint.java:202)
at org.apache.camel.support.ServiceSupport.start(ServiceSupport.java:61)
at org.apache.camel.impl.DefaultCamelContext.startService(DefaultCamelContext.java:2158)
at org.apache.camel.impl.DefaultCamelContext.doAddService(DefaultCamelContext.java:1016)
at org.apache.camel.impl.DefaultCamelContext.addService(DefaultCamelContext.java:977)
at org.apache.camel.impl.DefaultCamelContext.addService(DefaultCamelContext.java:973)
at org.apache.camel.impl.DefaultCamelContext.getEndpoint(DefaultCamelContext.java:541)
... 5 more
After shutting down instance 1, which is currently executing the job, instance 2 tries to recover the job immediately but fails to execute it. It then picks up the same job in the next interval (which is fine).
My requirement is that the active node immediately recovers the failed job.
Thanks in advance.
I think we can avoid the ensureNoDupTriggerKey check if recoverableJob is true. I just created a JIRA issue, CAMEL-8076, for it.
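A sketch of that idea, hypothetical and simplified (this is not the actual Camel source; the method name comes from the stack trace above):
// Hypothetical, simplified illustration of the proposed change:
// skip the duplicate-trigger check when the job is recoverable.
class TriggerKeyCheck {
    void ensureNoDupTriggerKey(boolean recoverableJob, boolean triggerKeyInUse) {
        if (recoverableJob) {
            // A job being recovered on another node legitimately reuses the
            // trigger key of the failed node's endpoint, so skip the check.
            return;
        }
        if (triggerKeyInUse) {
            throw new IllegalArgumentException("Trigger key is already in use");
        }
    }
}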

"Committed before 500 null" error in Solr 3.6.1

In Solr 3.6.1, at some point I am getting the following error when concurrent requests (a concurrent load test) are performed against the Solr server.
org.apache.solr.common.SolrException log
SEVERE: org.mortbay.jetty.EofException
Caused by: java.net.SocketException: Broken pipe
and
Committed before 500 null
org.mortbay.jetty.EofException
at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:791)
at org.mortbay.jetty.AbstractGenerator$Output.flush(AbstractGenerator.java:569)
at org.mortbay.jetty.HttpConnection$Output.flush(HttpConnection.java:1012)
at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:278)
at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
at org.apache.solr.common.util.FastWriter.flush(FastWriter.java:115)
at org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:353)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:273)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at org.mortbay.io.ByteArrayBuffer.writeTo(ByteArrayBuffer.java:368)
at org.mortbay.io.bio.StreamEndPoint.flush(StreamEndPoint.java:129)
at org.mortbay.io.bio.StreamEndPoint.flush(StreamEndPoint.java:161)
at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:714)
... 25 more
Can anyone kindly suggest an idea to resolve this error in Solr?
I don't think it's your Solr; the broken pipe happens (happened to me, at least) because of a timeout problem on the client.
Check your curl timeout value, and try to explicitly set a keep-alive on the server so you can avoid this situation again.
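If the load test drives Solr with curl, a minimal sketch of explicit client-side timeout and keep-alive settings (the URL and values are placeholders):
curl --max-time 300 --keepalive-time 60 "http://localhost:8983/solr/select?q=*:*"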
Quick update (just to give a hint; configuration may vary): in your Jetty folder, look for a folder named WEB-INF, which should contain a file named jetty-web.xml (or web-jetty.xml).
Adding these lines:
<session-config>
<session-timeout>720</session-timeout>
</session-config>
should help (change 720 to whatever value you prefer).
There's also the option
<Set name="maxIdleTime">300000</Set>
which may do the trick. You'll have to dig into Jetty's documentation quite a bit to figure this out for your case.

Tomcat cluster fails and generates tons of logs

Periodically, I'm getting problems with my Tomcat 6 cluster (2 nodes). One of the nodes would just go haywire and generate a ton of logs repeating the following:
Aug 25, 2009 11:44:10 AM org.apache.catalina.ha.session.DeltaRequest reset
SEVERE: Unable to remove element
java.util.NoSuchElementException
at java.util.LinkedList.remove(LinkedList.java:788)
at java.util.LinkedList.removeFirst(LinkedList.java:134)
at org.apache.catalina.ha.session.DeltaRequest.reset(DeltaRequest.java:201)
at org.apache.catalina.ha.session.DeltaRequest.execute(DeltaRequest.java:195)
at org.apache.catalina.ha.session.DeltaManager.handleSESSION_DELTA(DeltaManager.java:1364)
at org.apache.catalina.ha.session.DeltaManager.messageReceived(DeltaManager.java:1320)
at org.apache.catalina.ha.session.DeltaManager.messageDataReceived(DeltaManager.java:1083)
at org.apache.catalina.ha.session.ClusterSessionListener.messageReceived(ClusterSessionListener.java:87)
at org.apache.catalina.ha.tcp.SimpleTcpCluster.messageReceived(SimpleTcpCluster.java:916)
at org.apache.catalina.ha.tcp.SimpleTcpCluster.messageReceived(SimpleTcpCluster.java:897)
at org.apache.catalina.tribes.group.GroupChannel.messageReceived(GroupChannel.java:264)
at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:79)
at org.apache.catalina.tribes.group.interceptors.TcpFailureDetector.messageReceived(TcpFailureDetector.java:110)
at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:79)
at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:79)
at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:79)
at org.apache.catalina.tribes.group.ChannelCoordinator.messageReceived(ChannelCoordinator.java:241)
at org.apache.catalina.tribes.transport.ReceiverBase.messageDataReceived(ReceiverBase.java:225)
at org.apache.catalina.tribes.transport.nio.NioReplicationTask.drainChannel(NioReplicationTask.java:188)
at org.apache.catalina.tribes.transport.nio.NioReplicationTask.run(NioReplicationTask.java:91)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
at java.lang.Thread.run(Thread.java:619)
That's the only thing it shows. The other node in the cluster is still active at this time. There's nothing to do but restart. The large volume of logs has caused disk-space issues more than a couple of times, too.
Does anybody have any idea what's wrong here?
Thanks!
Wong
Appears to be a bug in Tomcat 6. If you look at the source at:
http://www.java2s.com/Open-Source/Java-Document/Sevlet-Container/apache-tomcat-6.0.14/org/apache/catalina/ha/session/DeltaRequest.java.htm (line 225)
you'll see that the reset() method can potentially throw this exception. I suggest that you contact the Tomcat developers regarding this issue.
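For illustration, the NoSuchElementException at the top of the trace is exactly what java.util.LinkedList throws when removeFirst() is called on an empty list. A minimal standalone reproduction (hypothetical class name, unrelated to Tomcat's code):
import java.util.LinkedList;

public class RemoveFirstDemo {
    public static void main(String[] args) {
        LinkedList<String> actions = new LinkedList<String>();
        // Removing from an empty LinkedList throws NoSuchElementException,
        // matching the top frames of the stack trace above.
        actions.removeFirst();
    }
}
This suggests the DeltaRequest's action list is being drained more times than it has elements, which would fit the endlessly repeating "Unable to remove element" log entries.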
