null:org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException

I see the following log messages in the Logging section of the Solr Admin UI.
Any suggestions on the root cause and how to fix such issues?
The collection runs on SolrCloud (Solr version 4.10) with 2 shards.
2/12/2016, 3:27:40 PM  ERROR  SolrDispatchFilter
null:org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Blocklist for /tmp/user1/solr/collection1/data/index/_a_Lucene41_0.tim has changed!
null:org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Blocklist for /tmp/user1/solr/collection1/data/index/_a_Lucene41_0.tim has changed!
at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:621)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:229)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:225)
at org.apache.solr.handler.component.HttpShardHandler$1.call(HttpShardHandler.java:157)
at org.apache.solr.handler.component.HttpShardHandler$1.call(HttpShardHandler.java:119)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

I can see that your collection's final index dir is /tmp/user1/solr/collection1/data/index; I hope this is a test collection. If you want to get out of this issue, stop the Solr instance and delete the entire index dir, or move the index dir to a backup location; a restart should then work.
The actual problem is likely that the .tim file (the terms dictionary) in your index has been corrupted for some reason; you may find the cause if you recall what you did before you first saw this error.
There are open source tools for checking and repairing corrupted Lucene indexes, as sketched below.
Hope this helps.
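For example, with the Solr instance stopped, you can either move the index aside so Solr starts with a fresh one, or run Lucene's own CheckIndex tool against it (a minimal sketch; the jar name/version is illustrative, and -fix permanently drops corrupted segments along with the documents in them):

# Option 1: move the index aside; Solr creates a fresh, empty index on restart
mv /tmp/user1/solr/collection1/data/index /tmp/user1/solr/collection1/data/index.bak

# Option 2: inspect and repair the existing index in place
java -cp lucene-core-4.10.4.jar org.apache.lucene.index.CheckIndex /tmp/user1/solr/collection1/data/index -fix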

Related

AppEngine RemoteAPI SocketTimeoutException

I'm using RemoteAPI to fetch entities from GAE Datastore, 300 at a time.
I'm doing something along the lines of:
import com.google.appengine.api.datastore.*;
import com.google.appengine.api.datastore.Query.Filter;
import com.google.appengine.api.datastore.Query.FilterOperator;
import com.google.appengine.api.datastore.Query.FilterPredicate;

// ds is a DatastoreService; getEmails() returns the next batch of addresses
while (!(emails = getEmails()).isEmpty()) {
    Filter filter = new FilterPredicate("email", FilterOperator.IN, emails);
    Query query = new Query("MyEntity").setFilter(filter);
    QueryResultIterable<Entity> result = ds.prepare(query).asQueryResultIterable();
    for (Entity entity : result) {
        System.out.println(entity.getProperty("name"));
    }
}
I'm processing something like 50k emails. The first time I ran this code it got maybe 3/4 of the way through, then it threw the following exception. Now it throws after a single loop iteration.
com.google.appengine.tools.remoteapi.RemoteApiException: remote API call: I/O error
at com.google.appengine.tools.remoteapi.RemoteRpc.makeException(RemoteRpc.java:160)
at com.google.appengine.tools.remoteapi.RemoteRpc.callImpl(RemoteRpc.java:104)
at com.google.appengine.tools.remoteapi.RemoteRpc.call(RemoteRpc.java:50)
at com.google.appengine.tools.remoteapi.RemoteDatastore.runQuery(RemoteDatastore.java:156)
at com.google.appengine.tools.remoteapi.RemoteDatastore.handleRunQuery(RemoteDatastore.java:115)
at com.google.appengine.tools.remoteapi.RemoteDatastore.handleDatastoreCall(RemoteDatastore.java:93)
at com.google.appengine.tools.remoteapi.RemoteApiDelegate.makeDefaultSyncCall(RemoteApiDelegate.java:57)
at com.google.appengine.tools.remoteapi.StandaloneRemoteApiDelegate.makeSyncCall(StandaloneRemoteApiDelegate.java:47)
at com.google.appengine.tools.remoteapi.StandaloneRemoteApiDelegate$1.call(StandaloneRemoteApiDelegate.java:58)
at com.google.appengine.tools.remoteapi.StandaloneRemoteApiDelegate$1.call(StandaloneRemoteApiDelegate.java:54)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:442)
at sun.security.ssl.InputRecord.read(InputRecord.java:480)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:934)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:891)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:102)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:690)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1324)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:468)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:338)
at com.google.appengine.repackaged.com.google.api.client.http.javanet.NetHttpResponse.<init>(NetHttpResponse.java:37)
at com.google.appengine.repackaged.com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:94)
at com.google.appengine.repackaged.com.google.api.client.http.HttpRequest.execute(HttpRequest.java:972)
at com.google.appengine.tools.remoteapi.OAuthClient.post(OAuthClient.java:54)
at com.google.appengine.tools.remoteapi.RemoteRpc.callImpl(RemoteRpc.java:102)
... 12 more
I can't figure out what the problem is, but the code seems to be evaluating the for() condition before throwing the exception.
Could this be a quota problem? The quota details screen doesn't show any problems and I can't find any relevant information in the documentation.
For future readers of this question, if you see occurrences of RemoteApiException: remote API call: I/O error which are happening consistently and not intermittently, this could be related to a disruption in network connectivity or possibly a remote issue on the App Engine side.
If the first possibility is ruled out, the best course of action is to report the issue on the Google App Engine issue tracker.
To fix this, first check your Internet connection. Then clean all artifacts and build them again (in IntelliJ):
Go to Build => Build Artifacts...
Focus on All Artifacts => Clean
Focus on All Artifacts => Build
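If instead the timeouts are only intermittent, a pragmatic option is to retry the failing call with a short backoff. A minimal sketch (the attempt limit and backoff are arbitrary choices; ds and query come from the question's snippet, and RemoteApiException is unchecked, so it can be caught and rethrown directly):

int attempts = 0;
while (true) {
    try {
        // re-run the whole batch; safe here because the loop only reads
        for (Entity entity : ds.prepare(query).asQueryResultIterable()) {
            System.out.println(entity.getProperty("name"));
        }
        break; // batch processed successfully
    } catch (com.google.appengine.tools.remoteapi.RemoteApiException e) {
        if (++attempts >= 3) {
            throw e; // give up after three tries
        }
        try {
            Thread.sleep(1000L * attempts); // linear backoff before retrying
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
            throw e;
        }
    }
}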

DSE 3.2 SOLR FileNotFoundException

Just updated to DSE 3.2 from 3.1 using the guide to run the update, and now the logs are littered with this exception. When querying via Solr we are getting missing data; however, when querying using cqlsh or the CLI, the data is there.
ERROR [IndexPool work thread-6] 2013-11-18 22:32:18,748 AbstractSolrSecondaryIndex.java (line 912) _yaqn8_Lucene41_0.tip
java.io.FileNotFoundException: _yaqn8_Lucene41_0.tip
at org.apache.lucene.store.bytebuffer.ByteBufferDirectory.fileLength(ByteBufferDirectory.java:129)
at org.apache.lucene.store.NRTCachingDirectory.sizeInBytes(NRTCachingDirectory.java:158)
at org.apache.lucene.store.NRTCachingDirectory.doCacheWrite(NRTCachingDirectory.java:289)
at org.apache.lucene.store.NRTCachingDirectory.createOutput(NRTCachingDirectory.java:199)
at org.apache.lucene.store.TrackingDirectoryWrapper.createOutput(TrackingDirectoryWrapper.java:62)
at org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.<init>(CompressingStoredFieldsWriter.java:107)
at com.datastax.bdp.cassandra.index.solr.CassandraStoredFieldsWriter.<init>(CassandraStoredFieldsWriter.java:25)
at com.datastax.bdp.cassandra.index.solr.CassandraStoredFieldsFormat.fieldsWriter(CassandraStoredFieldsFormat.java:39)
at org.apache.lucene.index.StoredFieldsProcessor.initFieldsWriter(StoredFieldsProcessor.java:86)
at org.apache.lucene.index.StoredFieldsProcessor.finishDocument(StoredFieldsProcessor.java:119)
at org.apache.lucene.index.TwoStoredFieldsConsumers.finishDocument(TwoStoredFieldsConsumers.java:65)
at org.apache.lucene.index.DocFieldProcessor.finishDocument(DocFieldProcessor.java:274)
at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:274)
at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:376)
at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1485)
at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:201)
at com.datastax.bdp.cassandra.index.solr.CassandraDirectUpdateHandler2.addDoc(CassandraDirectUpdateHandler2.java:103)
at com.datastax.bdp.cassandra.index.solr.AbstractSolrSecondaryIndex.doIndex(AbstractSolrSecondaryIndex.java:929)
at com.datastax.bdp.cassandra.index.solr.AbstractSolrSecondaryIndex.doUpdateOrDelete(AbstractSolrSecondaryIndex.java:586)
at com.datastax.bdp.cassandra.index.solr.ThriftSolrSecondaryIndex.updateColumnFamilyIndex(ThriftSolrSecondaryIndex.java:114)
at com.datastax.bdp.cassandra.index.solr.AbstractSolrSecondaryIndex$3.run(AbstractSolrSecondaryIndex.java:896)
at com.datastax.bdp.cassandra.index.solr.concurrent.IndexWorker.run(IndexWorker.java:38)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
also this:
ERROR 22:53:01,426 auto commit error...:org.apache.solr.common.SolrException: org.apache.solr.common.SolrException: Error opening new searcher
at com.datastax.bdp.cassandra.index.solr.CassandraDirectUpdateHandler2.commit(CassandraDirectUpdateHandler2.java:318)
at org.apache.solr.update.CommitTracker.run(CommitTracker.java:216)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1457)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1569)
at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:557)
at com.datastax.bdp.cassandra.index.solr.CassandraDirectUpdateHandler2.commit(CassandraDirectUpdateHandler2.java:276)
... 9 more
Caused by: java.io.FileNotFoundException: _xfgfw_Lucene41_0.tim
at org.apache.lucene.store.bytebuffer.ByteBufferDirectory.fileLength(ByteBufferDirectory.java:129)
at org.apache.lucene.store.NRTCachingDirectory.sizeInBytes(NRTCachingDirectory.java:158)
at org.apache.lucene.store.NRTCachingDirectory.doCacheWrite(NRTCachingDirectory.java:289)
at org.apache.lucene.store.NRTCachingDirectory.createOutput(NRTCachingDirectory.java:199)
at org.apache.lucene.store.TrackingDirectoryWrapper.createOutput(TrackingDirectoryWrapper.java:62)
at org.apache.lucene.codecs.lucene42.Lucene42FieldInfosWriter.write(Lucene42FieldInfosWriter.java:49)
at org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:88)
at org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:493)
at org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:422)
at org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:559)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:365)
at org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:270)
at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:255)
at org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:250)
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1393)
... 12 more
This is a known issue that is fixed in DSE 3.2.1.
We just released 3.2.1, which should address your issues. Our developers were able to replicate the stack trace and have resolved it. We also addressed the issue with indexes not being handled properly after a restart.
That looks like some files did not flush correctly on shutdown. You will have to do a full re-index (with deletion) on nodes showing those errors to get the Lucene indexes to rebuild.
This page shows how to initiate a re-index: http://www.datastax.com/docs/datastax_enterprise3.2/solutions/dse_search_upload#reloading-a-solr-core
A workaround for this is to change your Solr config to use the following (we are working on a proper fix):
<directoryFactory name="DirectoryFactory" class="solr.StandardDirectoryFactory"/>
If the problem continues, then the CF needs to be re-indexed; see the sketch below.
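The re-index itself goes through the Solr core admin API, as described in the DataStax page linked above. A sketch (host, port, and the keyspace.table core name are placeholders; deleteAll=true wipes the existing Lucene index before rebuilding it from Cassandra):

curl "http://localhost:8983/solr/admin/cores?action=RELOAD&name=keyspace.table&reindex=true&deleteAll=true"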

Committed before 500 null error in Solr 3.6.1

In Solr 3.6.1, at some point I get the following error when concurrent requests (a concurrent load test) are performed against the Solr server.
org.apache.solr.common.SolrException log
SEVERE: org.mortbay.jetty.EofException
Caused by: java.net.SocketException: Broken pipe
and
Committed before 500 null
org.mortbay.jetty.EofException
at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:791)
at org.mortbay.jetty.AbstractGenerator$Output.flush(AbstractGenerator.java:569)
at org.mortbay.jetty.HttpConnection$Output.flush(HttpConnection.java:1012)
at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:278)
at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
at org.apache.solr.common.util.FastWriter.flush(FastWriter.java:115)
at org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:353)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:273)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at org.mortbay.io.ByteArrayBuffer.writeTo(ByteArrayBuffer.java:368)
at org.mortbay.io.bio.StreamEndPoint.flush(StreamEndPoint.java:129)
at org.mortbay.io.bio.StreamEndPoint.flush(StreamEndPoint.java:161)
at org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:714)
... 25 more
Can anyone suggest how to resolve this error in Solr?
I don't think it's your Solr; the broken pipe happens (it happened to me, at least) because of a timeout problem on the client side.
Check your curl timeout value and try to set a keep-alive explicitly on the server so you can avoid this situation again.
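For the curl side, the timeout and keep-alive behavior can be set explicitly on the command line (a sketch; the URL and values are illustrative):

curl --max-time 300 --keepalive-time 60 "http://localhost:8983/solr/select?q=*:*"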
A quick update (just to give a hint; configuration may vary):
In your Jetty folder, look for a directory named WEB-INF that should contain a file named jetty-web.xml (or web-jetty.xml).
Adding these lines:
<session-config>
    <session-timeout>720</session-timeout>
</session-config>
should help you (change 720 to whatever value suits you).
There's also the option
<Set name="maxIdleTime">300000</Set>
that may do the trick. You'll have to dig into Jetty's documentation to figure out the right settings for your case.
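Since the trace above shows Jetty 6 (the org.mortbay classes), maxIdleTime is set on the connector in jetty.xml; a minimal sketch, with the port and timeout values illustrative:

<Call name="addConnector">
    <Arg>
        <New class="org.mortbay.jetty.bio.SocketConnector">
            <Set name="port">8983</Set>
            <!-- keep idle connections open for 5 minutes before closing them -->
            <Set name="maxIdleTime">300000</Set>
        </New>
    </Arg>
</Call>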

Jetty - Form too large Error

I have been working on Solr and running some load tests on it. After a certain point, I keep getting
Nov 29, 2012 3:34:43 PM org.apache.solr.common.SolrException log
SEVERE: null:java.lang.IllegalStateException: Form too large275768>200000
at org.eclipse.jetty.server.Request.extractParameters(Request.java:279)
at org.eclipse.jetty.server.Request.getParameterMap(Request.java:705)
at org.apache.solr.request.ServletSolrParams.<init>(ServletSolrParams.java:29)
at org.apache.solr.servlet.StandardRequestParser.parseParamsAndFillStreams(SolrRequestParsers.java:394)
at org.apache.solr.servlet.SolrRequestParsers.parse(SolrRequestParsers.java:115)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:260)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1337)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:484)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:119)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:524)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:233)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1065)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:413)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:192)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:999)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:250)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:149)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:111)
at org.eclipse.jetty.server.Server.handle(Server.java:351)
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:454)
at org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:47)
at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:900)
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:954)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:857)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:230)
at org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:66)
at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:254)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:599)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:534)
at java.lang.Thread.run(Unknown Source)
Basically I searched on Google and Stack Overflow too, and all I could find was this; applying the solutions there didn't help at all.
I have tried to modify that value from org.apache.solr.client.solrj.embedded.JettySolrRunner too, but even changing the value in that file didn't help at all.
Does anyone know how to change the maximum allowed form size for an embedded Jetty?
After checking the source code of Solr, I found one place where I can set the form size. The class I modified is org.apache.solr.client.solrj.embedded.JettySolrRunner.java, basically adding some large number for the form size...
Although it works, I am still confused why I can't set this value via config files.
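For anyone hitting this with embedded Jetty, the limit can also be raised programmatically instead of patching Solr; a minimal sketch against the plain Jetty API (the 500000-byte limit and the port are illustrative, and this is not how JettySolrRunner itself wires things up):

import org.eclipse.jetty.server.Server;

public class FormSizeDemo {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8983);
        // Jetty's Request.extractParameters() consults this server attribute
        // when checking whether a form body exceeds the limit (default 200000).
        server.setAttribute("org.eclipse.jetty.server.Request.maxFormContentSize", 500000);
        server.start();
        server.join();
    }
}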

Tomcat cluster fails and generates tons of logs

Periodically, I'm getting problems with my Tomcat 6 cluster (2 nodes). One of the nodes would just go haywire and generate a ton of logs repeating the following:
Aug 25, 2009 11:44:10 AM org.apache.catalina.ha.session.DeltaRequest reset
SEVERE: Unable to remove element
java.util.NoSuchElementException
at java.util.LinkedList.remove(LinkedList.java:788)
at java.util.LinkedList.removeFirst(LinkedList.java:134)
at org.apache.catalina.ha.session.DeltaRequest.reset(DeltaRequest.java:201)
at org.apache.catalina.ha.session.DeltaRequest.execute(DeltaRequest.java:195)
at org.apache.catalina.ha.session.DeltaManager.handleSESSION_DELTA(DeltaManager.java:1364)
at org.apache.catalina.ha.session.DeltaManager.messageReceived(DeltaManager.java:1320)
at org.apache.catalina.ha.session.DeltaManager.messageDataReceived(DeltaManager.java:1083)
at org.apache.catalina.ha.session.ClusterSessionListener.messageReceived(ClusterSessionListener.java:87)
at org.apache.catalina.ha.tcp.SimpleTcpCluster.messageReceived(SimpleTcpCluster.java:916)
at org.apache.catalina.ha.tcp.SimpleTcpCluster.messageReceived(SimpleTcpCluster.java:897)
at org.apache.catalina.tribes.group.GroupChannel.messageReceived(GroupChannel.java:264)
at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:79)
at org.apache.catalina.tribes.group.interceptors.TcpFailureDetector.messageReceived(TcpFailureDetector.java:110)
at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:79)
at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:79)
at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:79)
at org.apache.catalina.tribes.group.ChannelCoordinator.messageReceived(ChannelCoordinator.java:241)
at org.apache.catalina.tribes.transport.ReceiverBase.messageDataReceived(ReceiverBase.java:225)
at org.apache.catalina.tribes.transport.nio.NioReplicationTask.drainChannel(NioReplicationTask.java:188)
at org.apache.catalina.tribes.transport.nio.NioReplicationTask.run(NioReplicationTask.java:91)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:885)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
at java.lang.Thread.run(Thread.java:619)
That's the only thing that it shows. The other node in the cluster is still active at this time. There's nothing to do but to restart. The large amount of logs has caused disk space issues more than a couple of times too.
Does anybody have any idea what's wrong here?
Thanks!
Wong
Appears to be a bug in Tomcat 6. If you look at the source at:
http://www.java2s.com/Open-Source/Java-Document/Sevlet-Container/apache-tomcat-6.0.14/org/apache/catalina/ha/session/DeltaRequest.java.htm (line 225)
you'll see that the reset() method can potentially throw this exception. I suggest that you contact the Tomcat developers regarding this issue.
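For context, the failure mode in the trace is easy to reproduce in isolation: DeltaRequest.reset() ends up calling LinkedList.removeFirst() on a list that is already empty. A minimal illustration in plain Java (not Tomcat code):

import java.util.LinkedList;

public class EmptyRemoveDemo {
    public static void main(String[] args) {
        LinkedList<String> actions = new LinkedList<String>();
        // removeFirst() on an empty LinkedList throws
        // java.util.NoSuchElementException, exactly as in the log above.
        actions.removeFirst();
    }
}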
