This is a simple example from the Spring Data MongoDB reference documentation (https://docs.spring.io/spring-data/mongodb/docs/current/reference/html/#mongo-template.aggregation-update):
Aggregation.newUpdate().set("average").toValue(ArithmeticOperators.valueOf("tests").avg());
When I run it, the result is:
Caused by: java.lang.UnsupportedOperationException: null
at java.base/java.util.AbstractList.add(AbstractList.java:153) ~[na:na]
at java.base/java.util.AbstractList.add(AbstractList.java:111) ~[na:na]
at org.springframework.data.mongodb.core.aggregation.AggregationUpdate.set(AggregationUpdate.java:142) ~[spring-data-mongodb-3.0.0.RELEASE.jar:3.0.0.RELEASE]
at org.springframework.data.mongodb.core.aggregation.AggregationUpdate$1.toValue(AggregationUpdate.java:207) ~[spring-data-mongodb-3.0.0.RELEASE.jar:3.0.0.RELEASE]
Has anyone tried to use aggregation pipeline updates in Spring Data?
It looks like a bug in the Spring documentation; a valid example is shown in the AggregationUpdate Javadoc: https://docs.spring.io/spring-data/mongodb/docs/current/api/org/springframework/data/mongodb/core/aggregation/AggregationUpdate.html
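For reference, here is a minimal sketch, assuming the working form goes through the AggregationUpdate.update() factory rather than the no-arg Aggregation.newUpdate() shown in the reference guide; the "students" collection name is a placeholder, not something from the original question:

import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.aggregation.AggregationUpdate;
import org.springframework.data.mongodb.core.aggregation.ArithmeticOperators;
import org.springframework.data.mongodb.core.query.Query;

public class AverageUpdate {

    // Recomputes the "average" field from the "tests" array for all documents
    // in a hypothetical "students" collection.
    public static void applyTo(MongoTemplate mongoTemplate) {
        AggregationUpdate update = AggregationUpdate.update()
                .set("average").toValue(ArithmeticOperators.valueOf("tests").avg());

        mongoTemplate.updateMulti(new Query(), update, "students");
    }
}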
Related
Please find the error below, encountered while trying to pass the parameters during a build:
java.lang.UnsupportedOperationException: Refusing to marshal com.cwctravel.hudson.plugins.extended_choice_parameter.ExtendedChoiceParameterValue for security reasons; see https://jenkins.io/redirect/class-filter/
at hudson.util.XStream2$BlacklistedTypesConverter.marshal(XStream2.java:541)
at com.thoughtworks.xstream.core.AbstractReferenceMarshaller.convert(AbstractReferenceMarshaller.java:69)
at com.thoughtworks.xstream.core.TreeMarshaller.convertAnother(TreeMarshaller.java:58)
at com.thoughtworks.xstream.core.TreeMarshaller.convertAnother(TreeMarshaller.java:43)
at com.thoughtworks.xstream.core.AbstractReferenceMarshaller$1.convertAnother(AbstractReferenceMarshaller.java:88)
at com.thoughtworks.xstream.converters.collections.AbstractCollectionConverter.writeItem(AbstractCollectionConverter.java:64)
at com.thoughtworks.xstream.converters.collections.CollectionConverter.marshal(CollectionConverter.java:74)
at com.thoughtworks.xstream.core.AbstractReferenceMarshaller.convert(AbstractReferenceMarshaller.java:69)
at com.thoughtworks.xstream.core.TreeMarshaller.convertAnother(TreeMarshaller.java:58)
at com.thoughtworks.xstream.core.AbstractReferenceMarshaller$1.convertAnother(AbstractReferenceMarshaller.java:84)
at hudson.util.RobustReflectionConverter.marshallField(RobustReflectionConverter.java:264)
at hudson.util.RobustReflectionConverter$2.writeField(RobustReflectionConverter.java:251)
Caused: java.lang.RuntimeException: Failed to serialize hudson.model.ParametersAction#parameters for class hudson.model.ParametersAction
at hudson.util.RobustReflectionConverter$2.writeField(RobustReflectionConverter.java:255)
at hudson.util.RobustReflectionConverter$2.visit(RobustReflectionConverter.java:223)
at com.thoughtworks.xstream.converters.reflection.PureJavaReflectionProvider.visitSerializableFields(PureJavaReflectionProvider.java:138)
at hudson.util.RobustReflectionConverter.doMarshal(RobustReflectionConverter.java:209)
at hudson.util.RobustReflectionConverter.marshal(RobustReflectionConverter.java:150)
at com.thoughtworks.xstream.core.AbstractReferenceMarshaller.convert(AbstractReferenceMarshaller.java:69)
at com.thoughtworks.xstream.core.TreeMarshaller.convertAnother(TreeMarshaller.java:58)
at com.thoughtworks.xstream.core.TreeMarshaller.convertAnother(TreeMarshaller.java:43)
at com.thoughtworks.xstream.core.AbstractReferenceMarshaller$1.convertAnother(AbstractReferenceMarshaller.java:88)
at com.thoughtworks.xstream.converters.collections.AbstractCollectionConverter.writeItem(AbstractCollectionConverter.java:64)
at com.thoughtworks.xstream.converters.collections.CollectionConverter.marshal(CollectionConverter.java:74)
at com.thoughtworks.xstream.core.AbstractReferenceMarshaller.convert(AbstractReferenceMarshaller.java:69)
at com.thoughtworks.xstream.core.TreeMarshaller.convertAnother(TreeMarshaller.java:58)
at com.thoughtworks.xstream.core.AbstractReferenceMarshaller$1.convertAnother(AbstractReferenceMarshaller.java:84)
at hudson.util.RobustReflectionConverter.marshallField(RobustReflectionConverter.java:264)
at hudson.util.RobustReflectionConverter$2.writeField(RobustReflectionConverter.java:251)
Caused: java.lang.RuntimeException: Failed to serialize hudson.model.Actionable#actions for class org.jenkinsci.plugins.workflow.job.WorkflowRun
at hudson.util.RobustReflectionConverter$2.writeField(RobustReflectionConverter.java:255)
at hudson.util.RobustReflectionConverter$2.visit(RobustReflectionConverter.java:223)
at com.thoughtworks.xstream.converters.reflection.PureJavaReflectionProvider.visitSerializableFields(PureJavaReflectionProvider.java:138)
at hudson.util.RobustReflectionConverter.doMarshal(RobustReflectionConverter.java:209)
at hudson.util.RobustReflectionConverter.marshal(RobustReflectionConverter.java:150)
at com.thoughtworks.xstream.core.AbstractReferenceMarshaller.convert(AbstractReferenceMarshaller.java:69)
at com.thoughtworks.xstream.core.TreeMarshaller.convertAnother(TreeMarshaller.java:58)
at com.thoughtworks.xstream.core.TreeMarshaller.convertAnother(TreeMarshaller.java:43)
at com.thoughtworks.xstream.core.TreeMarshaller.start(TreeMarshaller.java:82)
at com.thoughtworks.xstream.core.AbstractTreeMarshallingStrategy.marshal(AbstractTreeMarshallingStrategy.java:37)
at com.thoughtworks.xstream.XStream.marshal(XStream.java:1026)
at com.thoughtworks.xstream.XStream.marshal(XStream.java:1015)
at com.thoughtworks.xstream.XStream.toXML(XStream.java:988)
at hudson.XmlFile.write(XmlFile.java:195)
at hudson.model.Run.save(Run.java:2077)
at org.jenkinsci.plugins.workflow.cps.EnvActionImpl.forRun(EnvActionImpl.java:136)
at org.jenkinsci.plugins.workflow.cps.EnvActionImpl$Binder.getValue(EnvActionImpl.java:149)
at org.jenkinsci.plugins.workflow.cps.EnvActionImpl$Binder.getValue(EnvActionImpl.java:142)
at org.jenkinsci.plugins.workflow.cps.CpsScript.getProperty(CpsScript.java:121)
at org.codehaus.groovy.runtime.InvokerHelper.getProperty(InvokerHelper.java:174)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.getProperty(ScriptBytecodeAdapter.java:456)
at org.kohsuke.groovy.sandbox.impl.Checker$7.call(Checker.java:355)
at org.kohsuke.groovy.sandbox.GroovyInterceptor.onGetProperty(GroovyInterceptor.java:68)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onGetProperty(SandboxInterceptor.java:354)
at org.kohsuke.groovy.sandbox.impl.Checker$7.call(Checker.java:353)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:357)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:333)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:333)
at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.getProperty(SandboxInvoker.java:29)
at com.cloudbees.groovy.cps.impl.PropertyAccessBlock.rawGet(PropertyAccessBlock.java:20)
Caused: java.io.IOException
at hudson.XmlFile.write(XmlFile.java:202)
at hudson.model.Run.save(Run.java:2077)
at org.jenkinsci.plugins.workflow.cps.EnvActionImpl.forRun(EnvActionImpl.java:136)
at org.jenkinsci.plugins.workflow.cps.EnvActionImpl$Binder.getValue(EnvActionImpl.java:149)
at org.jenkinsci.plugins.workflow.cps.EnvActionImpl$Binder.getValue(EnvActionImpl.java:142)
at org.jenkinsci.plugins.workflow.cps.CpsScript.getProperty(CpsScript.java:121)
at org.codehaus.groovy.runtime.InvokerHelper.getProperty(InvokerHelper.java:174)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.getProperty(ScriptBytecodeAdapter.java:456)
at org.kohsuke.groovy.sandbox.impl.Checker$7.call(Checker.java:355)
at org.kohsuke.groovy.sandbox.GroovyInterceptor.onGetProperty(GroovyInterceptor.java:68)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onGetProperty(SandboxInterceptor.java:354)
at org.kohsuke.groovy.sandbox.impl.Checker$7.call(Checker.java:353)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:357)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:333)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedGetProperty(Checker.java:333)
at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.getProperty(SandboxInvoker.java:29)
at com.cloudbees.groovy.cps.impl.PropertyAccessBlock.rawGet(PropertyAccessBlock.java:20)
at WorkflowScript.run(WorkflowScript:8)
at ___cps.transform___(Native Method)
at com.cloudbees.groovy.cps.impl.PropertyishBlock$ContinuationImpl.get(PropertyishBlock.java:74)
at com.cloudbees.groovy.cps.LValueBlock$GetAdapter.receive(LValueBlock.java:30)
at com.cloudbees.groovy.cps.impl.PropertyishBlock$ContinuationImpl.fixName(PropertyishBlock.java:66)
at jdk.internal.reflect.GeneratedMethodAccessor629.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
at com.cloudbees.groovy.cps.Next.step(Next.java:83)
at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:174)
at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:163)
at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:129)
at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:268)
at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:163)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:19)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:35)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:32)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:237)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:32)
at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:174)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:331)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$100(CpsThreadGroup.java:82)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:243)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:231)
at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:64)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:136)
at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Finished: FAILURE
We have started to see a security issue related to extended choice parameters in Jenkins, which take a JSON array as input. Please find the error above. Can you please help with this? FYI, there was no Jenkins or plugin update done recently. This started happening all of a sudden (two weeks back), and all the "extended choice parameters" got deleted without any clue in the Jenkins build.
Extended Choice Parameter plugin version: 0.78
Jenkins version: 2.263.1
The issue has been fixed by replacing the Extended Choice Parameter plugin jar with the latest version, which contains a fix for the extended_choice_parameter.ExtendedChoiceParameterValue parameters. Also, make sure to whitelist the class by adding the following to the .bash_profile of your Jenkins machine:
export JDK_JAVA_OPTIONS="-Dhudson.remoting.ClassFilter=com.cwctravel.hudson.plugins.extended_choice_parameter.ExtendedChoiceParameterValue"
I'm doing a simple partial update scenario which worked with versions 6.x and 7.x of Solr. After upgrading both Solr and SolrJ to 8.8, I'm getting the following exception:
2021-02-23 14:57:58.201 ERROR (qtp-459670553-28) [ x:core1] o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: TransactionLog doesn't know how to serialize class org.apache.lucene.document.LazyDocument$LazyField; try implementing ObjectResolver?
at org.apache.solr.update.TransactionLog$1.resolve(TransactionLog.java:100)
at org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:266)
at org.apache.solr.common.util.JavaBinCodec$BinEntryWriter.put(JavaBinCodec.java:441)
at org.apache.solr.common.ConditionalKeyMapWriter$EntryWriterWrapper.put(ConditionalKeyMapWriter.java:44)
at org.apache.solr.common.MapWriter$EntryWriter.putNoEx(MapWriter.java:101)
at org.apache.solr.common.MapWriter$EntryWriter.lambda$getBiConsumer$0(MapWriter.java:161)
at org.apache.solr.common.MapWriter$EntryWriter$$Lambda$548/0000000000000000.accept(Unknown Source)
at org.apache.solr.common.SolrInputDocument.lambda$writeMap$0(SolrInputDocument.java:59)
at org.apache.solr.common.SolrInputDocument$$Lambda$549/0000000000000000.accept(Unknown Source)
.....
The SolrJ code is similar to the sample provided here and was working before the upgrade. The operation is 'add' with a simple integer field for a document whose id is provided.
Note that this is different from a previous question on Stack Overflow, since I'm passing a simple integer field, and on the Solr/Lucene side it's replaced with org.apache.lucene.document.LazyDocument$LazyField.
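For context, a minimal SolrJ sketch of the kind of partial (atomic) update described above; the core URL, document id, and field name are placeholders, not the actual ones used:

import java.util.Collections;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class PartialUpdateExample {

    public static void main(String[] args) throws Exception {
        // "core1" matches the core named in the log; host and port are assumptions.
        SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/core1").build();

        // Atomic update: id of an existing document plus a map holding the
        // 'add' operation for a single integer field.
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "some-existing-id");
        doc.addField("count_i", Collections.singletonMap("add", 1));

        client.add(doc);
        client.commit();
        client.close();
    }
}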
This seems to be a bug in Solr (https://issues.apache.org/jira/browse/SOLR-13034), to be fixed in the next version of Solr 8 (8.9).
Until it's released, the workaround is to set <enableLazyFieldLoading>false</enableLazyFieldLoading> in solrconfig.xml.
I am using Flink's Table API. I receive data from Kafka, register it as a table, process it with a SQL statement, and finally convert the result back to a stream and write it to a directory. The code looks like this:
def main(args: Array[String]): Unit = {
  val sEnv = StreamExecutionEnvironment.getExecutionEnvironment
  sEnv.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
  val tEnv = TableEnvironment.getTableEnvironment(sEnv)
  tEnv.connect(
    new Kafka()
      .version("0.11")
      .topic("user")
      .startFromEarliest()
      .property("zookeeper.connect", "")
      .property("bootstrap.servers", "")
  )
    .withFormat(
      new Json()
        .failOnMissingField(false)
        .deriveSchema() // use the table's schema
    )
    .withSchema(
      new Schema()
        .field("username_skey", Types.STRING)
    )
    .inAppendMode()
    .registerTableSource("user")
  val userTest: Table = tEnv.sqlQuery(
    """
      select ** from ** join **""".stripMargin)
  val endStream = tEnv.toRetractStream[Row](userTest)
  endStream.writeAsText("/tmp/sqlres", WriteMode.OVERWRITE)
  sEnv.execute("Test_New_Sign_Student")
}
I was successful in the local test, but when I submit the job on the cluster, I get the following error:
=======================================================
org.apache.flink.client.program.ProgramInvocationException: The main method caused an error.
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:546)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:426)
at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:804)
at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:280)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:215)
at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1044)
at org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1120)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1120)
Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.DeserializationSchemaFactory' in the classpath.
Reason: No factory implements 'org.apache.flink.table.factories.DeserializationSchemaFactory'.
The following properties are requested:
connector.properties.0.key=zookeeper.connect
....
schema.9.name=roles
schema.9.type=VARCHAR
update-mode=append
The following factories have been considered:
org.apache.flink.table.sources.CsvBatchTableSourceFactory
org.apache.flink.table.sources.CsvAppendTableSourceFactory
org.apache.flink.table.sinks.CsvBatchTableSinkFactory
org.apache.flink.table.sinks.CsvAppendTableSinkFactory
org.apache.flink.streaming.connectors.kafka.Kafka011TableSourceSinkFactory
at org.apache.flink.table.factories.TableFactoryService$.filterByFactoryClass(TableFactoryService.scala:176)
at org.apache.flink.table.factories.TableFactoryService$.findInternal(TableFactoryService.scala:125)
at org.apache.flink.table.factories.TableFactoryService$.find(TableFactoryService.scala:100)
at org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.scala)
at org.apache.flink.streaming.connectors.kafka.KafkaTableSourceSinkFactoryBase.getDeserializationSchema(KafkaTableSourceSinkFactoryBase.java:259)
at org.apache.flink.streaming.connectors.kafka.KafkaTableSourceSinkFactoryBase.createStreamTableSource(KafkaTableSourceSinkFactoryBase.java:144)
at org.apache.flink.table.factories.TableFactoryUtil$.findAndCreateTableSource(TableFactoryUtil.scala:50)
at org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.scala:44)
at org.clay.test.Test_New_Sign_Student$.main(Test_New_Sign_Student.scala:64)
at org.clay.test.Test_New_Sign_Student.main(Test_New_Sign_Student.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
===================================
Can someone tell me what caused this? I am very confused.
If you are using the maven-shade-plugin, make sure the SPI transformer is in place.
Flink uses the Java Service Provider Interface (SPI) to discover source/sink connectors.
Without this transformer, you will encounter "org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory", which is what happened to me.
The Flink documentation points this out officially; search for "SPI" on this page: https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/connect.html#update-mode
<transformers>
  <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
</transformers>
You have to add the JAR dependencies of the connectors (Kafka) and formats (JSON) that you are using to the classpath of your program, i.e., either build a fat JAR that includes them or provide them to the classpath of the Flink cluster by copying them in the ./lib folder.
Check the Flink documentation for links to download the respective dependencies.
I have met the same problem; just adding the parameter --connector.type kafka when you run your application will solve this.
Just updated from DSE 3.1 to 3.2 using the upgrade guide, and now the logs are littered with this exception. When querying via Solr we are getting missing data; however, when querying using cqlsh or the CLI, the data is there.
ERROR [IndexPool work thread-6] 2013-11-18 22:32:18,748 AbstractSolrSecondaryIndex.java (line 912) _yaqn8_Lucene41_0.tip
java.io.FileNotFoundException: _yaqn8_Lucene41_0.tip
at org.apache.lucene.store.bytebuffer.ByteBufferDirectory.fileLength(ByteBufferDirectory.java:129)
at org.apache.lucene.store.NRTCachingDirectory.sizeInBytes(NRTCachingDirectory.java:158)
at org.apache.lucene.store.NRTCachingDirectory.doCacheWrite(NRTCachingDirectory.java:289)
at org.apache.lucene.store.NRTCachingDirectory.createOutput(NRTCachingDirectory.java:199)
at org.apache.lucene.store.TrackingDirectoryWrapper.createOutput(TrackingDirectoryWrapper.java:62)
at org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.<init>(CompressingStoredFieldsWriter.java:107)
at com.datastax.bdp.cassandra.index.solr.CassandraStoredFieldsWriter.<init>(CassandraStoredFieldsWriter.java:25)
at com.datastax.bdp.cassandra.index.solr.CassandraStoredFieldsFormat.fieldsWriter(CassandraStoredFieldsFormat.java:39)
at org.apache.lucene.index.StoredFieldsProcessor.initFieldsWriter(StoredFieldsProcessor.java:86)
at org.apache.lucene.index.StoredFieldsProcessor.finishDocument(StoredFieldsProcessor.java:119)
at org.apache.lucene.index.TwoStoredFieldsConsumers.finishDocument(TwoStoredFieldsConsumers.java:65)
at org.apache.lucene.index.DocFieldProcessor.finishDocument(DocFieldProcessor.java:274)
at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:274)
at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:376)
at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1485)
at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:201)
at com.datastax.bdp.cassandra.index.solr.CassandraDirectUpdateHandler2.addDoc(CassandraDirectUpdateHandler2.java:103)
at com.datastax.bdp.cassandra.index.solr.AbstractSolrSecondaryIndex.doIndex(AbstractSolrSecondaryIndex.java:929)
at com.datastax.bdp.cassandra.index.solr.AbstractSolrSecondaryIndex.doUpdateOrDelete(AbstractSolrSecondaryIndex.java:586)
at com.datastax.bdp.cassandra.index.solr.ThriftSolrSecondaryIndex.updateColumnFamilyIndex(ThriftSolrSecondaryIndex.java:114)
at com.datastax.bdp.cassandra.index.solr.AbstractSolrSecondaryIndex$3.run(AbstractSolrSecondaryIndex.java:896)
at com.datastax.bdp.cassandra.index.solr.concurrent.IndexWorker.run(IndexWorker.java:38)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Also this:
ERROR 22:53:01,426 auto commit error...:org.apache.solr.common.SolrException: org.apache.solr.common.SolrException: Error opening new searcher
at com.datastax.bdp.cassandra.index.solr.CassandraDirectUpdateHandler2.commit(CassandraDirectUpdateHandler2.java:318)
at org.apache.solr.update.CommitTracker.run(CommitTracker.java:216)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1457)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1569)
at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:557)
at com.datastax.bdp.cassandra.index.solr.CassandraDirectUpdateHandler2.commit(CassandraDirectUpdateHandler2.java:276)
... 9 more
Caused by: java.io.FileNotFoundException: _xfgfw_Lucene41_0.tim
at org.apache.lucene.store.bytebuffer.ByteBufferDirectory.fileLength(ByteBufferDirectory.java:129)
at org.apache.lucene.store.NRTCachingDirectory.sizeInBytes(NRTCachingDirectory.java:158)
at org.apache.lucene.store.NRTCachingDirectory.doCacheWrite(NRTCachingDirectory.java:289)
at org.apache.lucene.store.NRTCachingDirectory.createOutput(NRTCachingDirectory.java:199)
at org.apache.lucene.store.TrackingDirectoryWrapper.createOutput(TrackingDirectoryWrapper.java:62)
at org.apache.lucene.codecs.lucene42.Lucene42FieldInfosWriter.write(Lucene42FieldInfosWriter.java:49)
at org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:88)
at org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:493)
at org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:422)
at org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:559)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:365)
at org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:270)
at org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:255)
at org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:250)
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1393)
... 12 more
This is a known issue that is fixed in DSE 3.2.1.
We just released 3.2.1, which should address your issues. Our developers were able to replicate the stack trace and resolved it. We also addressed the issue with indexes not being properly handled after a restart.
That looks like some files did not flush correctly on shutdown. You will have to do a full re-index (with deletion) on the nodes showing those errors to get the Lucene indexes to rebuild.
This page shows how to initiate a re-index: http://www.datastax.com/docs/datastax_enterprise3.2/solutions/dse_search_upload#reloading-a-solr-core
A workaround for this is to change your Solr config to use the following (we are working on a proper fix):
<directoryFactory name="DirectoryFactory" class="solr.StandardDirectoryFactory"/>
If the problem continues, then the CF needs to be re-indexed.