DataStax search node warnings after upgrade to DSE 4.8.7 - Solr

Upgraded one of our dev nodes today from DSE 4.8.0 to 4.8.7 and now I'm seeing a ton of these warnings in system.log. Any insight into why this would occur and how to resolve it?
WARN [main_development.skus Index WorkPool scheduler thread-0] 2016-05-18 13:51:11,037 WorkPool.java:672 - Listener com.datastax.bdp.search.solr.AbstractSolrSecondaryIndex$SSIIndexPoolListener#1d132e91 failed for pool main_development.skus Index with exception: SolrCore 'main_development.skus' is not available due to init failure: Unique key fields must not be tokenized. Problematic type: text_en_splitting_tight{class=org.apache.solr.schema.TextField,analyzer=org.apache.solr.analysis.TokenizerChain,args={autoGeneratePhraseQueries=true, positionIncrementGap=100, class=solr.TextField}} for field: sku
org.apache.solr.common.SolrException: SolrCore 'main_development.skus' is not available due to init failure: Unique key fields must not be tokenized. Problematic type: text_en_splitting_tight{class=org.apache.solr.schema.TextField,analyzer=org.apache.solr.analysis.TokenizerChain,args={autoGeneratePhraseQueries=true, positionIncrementGap=100, class=solr.TextField}} for field: sku
at org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:742) ~[solr-uber-with-auth_2.0-4.10.3.1.1021.jar:na]
at com.datastax.bdp.search.solr.core.CassandraCoreContainer.getCore(CassandraCoreContainer.java:170) ~[dse-search-4.8.7.jar:4.8.7]
at com.datastax.bdp.search.solr.AbstractSolrSecondaryIndex.getCore(AbstractSolrSecondaryIndex.java:550) ~[dse-search-4.8.7.jar:4.8.7]
at com.datastax.bdp.search.solr.AbstractSolrSecondaryIndex$SSIIndexPoolListener.onBackPressure(AbstractSolrSecondaryIndex.java:1461) ~[dse-search-4.8.7.jar:4.8.7]
at com.datastax.bdp.concurrent.WorkPool.onBackPressure(WorkPool.java:668) [dse-core-4.8.7.jar:4.8.7]
at com.datastax.bdp.concurrent.WorkPool.access$300(WorkPool.java:57) [dse-core-4.8.7.jar:4.8.7]
at com.datastax.bdp.concurrent.WorkPool$BackPressureTask.run(WorkPool.java:694) [dse-core-4.8.7.jar:4.8.7]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_92]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_92]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_92]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_92]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_92]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_92]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_92]
Caused by: org.apache.solr.common.SolrException: Unique key fields must not be tokenized. Problematic type: text_en_splitting_tight{class=org.apache.solr.schema.TextField,analyzer=org.apache.solr.analysis.TokenizerChain,args={autoGeneratePhraseQueries=true, positionIncrementGap=100, class=solr.TextField}} for field: sku
at com.datastax.bdp.search.solr.core.CassandraCoreContainer.load(CassandraCoreContainer.java:236) ~[dse-search-4.8.7.jar:4.8.7]
at com.datastax.bdp.search.solr.core.SolrCoreResourceManager.loadCore(SolrCoreResourceManager.java:257) ~[dse-search-4.8.7.jar:4.8.7]
at com.datastax.bdp.search.solr.AbstractSolrSecondaryIndex$4.run(AbstractSolrSecondaryIndex.java:1011) ~[dse-search-4.8.7.jar:4.8.7]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_92]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_92]
... 3 common frames omitted
Caused by: com.datastax.bdp.search.solr.CassandraIndexSchema$ValidationException: Unique key fields must not be tokenized. Problematic type: text_en_splitting_tight{class=org.apache.solr.schema.TextField,analyzer=org.apache.solr.analysis.TokenizerChain,args={autoGeneratePhraseQueries=true, positionIncrementGap=100, class=solr.TextField}} for field: sku
at com.datastax.bdp.search.solr.CassandraIndexSchema.validateUniqueKey(CassandraIndexSchema.java:479) ~[dse-search-4.8.7.jar:4.8.7]
at com.datastax.bdp.search.solr.CassandraIndexSchema.validate(CassandraIndexSchema.java:123) ~[dse-search-4.8.7.jar:4.8.7]
at com.datastax.bdp.search.solr.core.CassandraCoreContainer.load(CassandraCoreContainer.java:232) ~[dse-search-4.8.7.jar:4.8.7]
... 7 common frames omitted

This is by design: the unique key (primary key) in the Solr schema must not be a tokenized field such as a TextField. Use a StrField instead.
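A minimal sketch of the schema fix, assuming schema.xml declares sku as the unique key (the surrounding field and type definitions here are assumptions, not taken from the question):

<!-- Hedged sketch: the unique key must use an untokenized type such as solr.StrField. -->
<fieldType name="string" class="solr.StrField" sortMissingLast="true"/>
<field name="sku" type="string" indexed="true" stored="true"/>
<!-- Optional: keep full-text search on the value by copying it to a tokenized field. -->
<field name="sku_text" type="text_en_splitting_tight" indexed="true" stored="false"/>
<copyField source="sku" dest="sku_text"/>
<uniqueKey>sku</uniqueKey>

After uploading the corrected schema, reloading the core (for example with dsetool reload_core main_development.skus reindex=true) should clear the init failure.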

Related

java.io.IOException: Could not connect to BlobServer at address localhost/127.0.0.1:46385

Flink version: 1.15.3, JDK version: 1.8
My conf/flink-conf.yaml:
jobmanager.rpc.address: test1
jobmanager.rpc.port: 6123
jobmanager.bind-host: test1
taskmanager.bind-host: test1
taskmanager.host: test1
taskmanager.numberOfTaskSlots: 2
parallelism.default: 2
Start a local cluster:
./bin/start-cluster.sh
Start the SQL Client CLI by calling:
./bin/sql-client.sh embedded
Then enter the simple query below and press Enter to execute it.
SET 'sql-client.execution.result-mode' = 'tableau';
SET 'execution.runtime-mode' = 'batch';
SELECT
name,
COUNT(*) AS cnt
FROM
(VALUES ('Bob'), ('Alice'), ('Greg'), ('Bob')) AS NameTable(name)
GROUP BY name;
I got this error:
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.runtime.rest.util.RestClientException: [org.apache.flink.runtime.rest.handler.RestHandlerException: Could not upload job files.
at org.apache.flink.runtime.rest.handler.job.JobSubmitHandler.lambda$uploadJobGraphFiles$4(JobSubmitHandler.java:201)
at java.util.concurrent.CompletableFuture.biApply(CompletableFuture.java:1105)
at java.util.concurrent.CompletableFuture$BiApply.tryFire(CompletableFuture.java:1070)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1595)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.util.FlinkException: Could not upload job files.
at org.apache.flink.runtime.client.ClientUtils.uploadJobGraphFiles(ClientUtils.java:86)
at org.apache.flink.runtime.rest.handler.job.JobSubmitHandler.lambda$uploadJobGraphFiles$4(JobSubmitHandler.java:195)
... 11 more
Caused by: java.io.IOException: Could not connect to BlobServer at address localhost/127.0.0.1:35138
at org.apache.flink.runtime.blob.BlobClient.<init>(BlobClient.java:103)
at org.apache.flink.runtime.rest.handler.job.JobSubmitHandler.lambda$null$3(JobSubmitHandler.java:199)
at org.apache.flink.runtime.client.ClientUtils.uploadJobGraphFiles(ClientUtils.java:82)
... 12 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:606)
at org.apache.flink.runtime.blob.BlobClient.<init>(BlobClient.java:97)
... 14 more
]
Why does this happen?
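One hedged thing to check (an inference from the addresses above, not confirmed by these logs): the REST client is told to upload blobs to localhost, while the JobManager services bind only to test1. A sketch of conf/flink-conf.yaml that binds the services to all interfaces instead:

# Hedged sketch: bind to all interfaces so clients that resolve the
# JobManager as either "test1" or "localhost" can reach the BlobServer.
jobmanager.rpc.address: test1
jobmanager.bind-host: 0.0.0.0
taskmanager.bind-host: 0.0.0.0
taskmanager.host: test1
rest.address: test1
rest.bind-address: 0.0.0.0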

Kafka Connect JDBC sink for Microsoft SQL Server: it works with multiple keys for record_value, but this error pops up for record_key

I was using the JDBC sink connector from Kafka Connect. It creates the table with one primary key, but when I try to add two pk.fields it gives me this error:
java.lang.NullPointerException
at io.confluent.connect.jdbc.util.TableDefinitions.refresh(TableDefinitions.java:86)
at io.confluent.connect.jdbc.sink.DbStructure.createOrAmendIfNecessary(DbStructure.java:65)
at io.confluent.connect.jdbc.sink.BufferedRecords.add(BufferedRecords.java:85)
at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:66)
at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:74)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:538)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
It worked with a single primary key.
My Kafka Connect worker configuration:
bootstrap.servers=localhost:9092
group.id=connect-cluster
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
avro.compatibility.level=none
auto.register.schemas=true
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-statuses
config.storage.replication.factor=1
offset.storage.replication.factor=1
status.storage.replication.factor=1
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
rest.host.name=kafka01.xxxxxxxxx.com
rest.port=8083
plugin.path=xxx/kafka/confluent-5.2.1/share/java,xxxx/kafka/confluent-5.2.1/share/java
producer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor
consumer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor
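For reference, a hedged sketch of the sink connector configuration this worker would run with a composite key taken from the record key; the connector name, topic, connection URL, and field names are illustrative assumptions, not taken from the question:

# Hedged sketch; all names below are placeholders.
name=jdbc-sink-mssql
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
topics=orders
connection.url=jdbc:sqlserver://localhost:1433;databaseName=test
insert.mode=upsert
pk.mode=record_key
pk.fields=order_id,line_id
auto.create=true
auto.evolve=true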

Solr 6.6.0: Error 500 when requesting more than 4 facets

I get a 500 error in Solr with this query:
q=*&wt=json&rows=0&fq=date:[2014-01-01T09:15:57.000Z/SECOND TO *]&facet=true&facet.field=text&facet.limit=5
I noticed that if I set the parameter "facet.limit" to less than 5 it works, but otherwise it does not. Here is the stack trace from Solr:
msg : "Error from server at http://localhost:8983/solr/core_replica1: Exception during facet.field: text"
trace : "org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:
Error from server at http://localhost:8983/solr/core_replica1: Exception during facet.field: text\n\tat
Do you know why this kind of error would happen?
I use Solr 6.6.0.
The field I am faceting on is an org.apache.solr.schema.TextField with these schema properties: Indexed, Tokenized, Stored, TermVector Stored, Store Offset With TermVector, Store Position With TermVector.
Thanks!
EDIT: Solr logs:
2018-03-29 15:26:24.137 INFO (qtp42121758-601) [c:core s:shard2 r:core_node1 x:core_shard2_replica1] o.a.s.c.S.Request [core_shard2_replica1]
webapp=/solr path=/select params={facet.field={!terms%3D$text__terms}text&distrib=false&qt=weblab_facet&fl=id,score&shards.purpose=32&text__terms=est&fq=date:
[2014-01-01T09:15:57.000Z/SECOND+TO+*]&shard.url=http://localhost:8983/solr/core_shard2_replica1/&rows=0&version=2&f.mainDate.facet.range.end=NOW/DAY&q=*&facet.limit=5&f.mainDate.facet.range.gap=%2B1DAY&
NOW=1522329984131&isShard=true&facet=true&f.mainDate.facet.range.start=NOW/DAY-5DAYS&wt=javabin&facet.sort=count} hits=5 status=500 QTime=0
2018-03-29 15:26:24.137 ERROR (qtp42121758-601) [c:core s:shard2 r:core_node1 x:core_shard2_replica1] o.a.s.s.HttpSolrCall null:org.apache.solr.common.SolrException: Exception during facet.field: text
at org.apache.solr.request.SimpleFacets.lambda$getFacetFieldCounts$0(SimpleFacets.java:809)
And :
2018-03-29 15:26:24.137 ERROR (qtp42121758-601) [c:core s:shard2 r:core_node1 x:core_shard2_replica1]
o.a.s.h.RequestHandlerBase
org.apache.solr.common.SolrException: Exception during facet.field: text
at org.apache.solr.request.SimpleFacets.lambda$getFacetFieldCounts$0(SimpleFacets.java:809)
Caused by: java.lang.NullPointerException
at org.apache.lucene.search.IndexSearcher.rewrite(IndexSearcher.java:683)
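As a hedged experiment (an assumption, not a confirmed fix): the JSON Facet API uses a different code path than SimpleFacets, so running the same count through it can help isolate the NullPointerException. The host and core name are placeholders:

curl http://localhost:8983/solr/core/select -d 'q=*:*&rows=0&json.facet={text_counts:{type:terms,field:text,limit:5}}'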

Exception with transacted set on activemq endpoint

I would like to understand why I would be getting the following exception when "transacted" is set on a route.
errorHandler(deadLetterChannel("activemq:EXCEPTION")
.useOriginalMessage()
.maximumRedeliveries(1));
from("activemq:TRIGGER").routeId("Trigger")
.transacted()
.bean("helloService")
.to("log:out");
I would like any exceptions that are thrown to be sent to an EXCEPTION queue. It is my understanding that "transacted" needs to be set to make the route a transaction. When I remove that configuration I don't get the exception below.
The exception occurs when the string for the Hello class is set to longer than 5 characters.
Example code:
https://github.com/zachariahyoung/docker-camel
2015-12-03 21:59:26.706 WARN 9900 --- [nsumer[TRIGGER]] o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: 22001, SQLState: 22001
2015-12-03 21:59:26.706 ERROR 9900 --- [nsumer[TRIGGER]] o.h.engine.jdbc.spi.SqlExceptionHelper : Value too long for column "TEXT VARCHAR(5)": "'sdsfasdfasdf' (12)"; SQL statement:
insert into hello (id, text) values (null, ?) [22001-190]
2015-12-03 21:59:26.707 WARN 9900 --- [nsumer[TRIGGER]] o.a.c.s.spi.TransactionErrorHandler : Transaction rollback (0x7e2bd5e6) redelivered(true) for (MessageId: ID:CHARLA-49686-1449193952655-71:1:1:1:1 on ExchangeId: ID-CHARLA-63325-1449200820961-0-37) caught: Could not commit JPA transaction; nested exception is javax.persistence.RollbackException: Transaction marked as rollbackOnly
2015-12-03 21:59:26.708 WARN 9900 --- [nsumer[TRIGGER]] o.a.c.c.jms.EndpointMessageListener : Execution of JMS message listener failed. Caused by: [org.apache.camel.RuntimeCamelException - org.springframework.transaction.TransactionSystemException: Could not commit JPA transaction; nested exception is javax.persistence.RollbackException: Transaction marked as rollbackOnly]
org.apache.camel.RuntimeCamelException: org.springframework.transaction.TransactionSystemException: Could not commit JPA transaction; nested exception is javax.persistence.RollbackException: Transaction marked as rollbackOnly
at org.apache.camel.util.ObjectHelper.wrapRuntimeCamelException(ObjectHelper.java:1642) ~[camel-core-2.16.0.jar:2.16.0]
at org.apache.camel.component.jms.EndpointMessageListener$EndpointMessageListenerAsyncCallback.done(EndpointMessageListener.java:186) ~[camel-jms-2.15.3.jar:2.15.3]
at org.apache.camel.component.jms.EndpointMessageListener.onMessage(EndpointMessageListener.java:107) ~[camel-jms-2.15.3.jar:2.15.3]
at org.springframework.jms.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:746) ~[spring-jms-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.jms.listener.AbstractMessageListenerContainer.invokeListener(AbstractMessageListenerContainer.java:684) ~[spring-jms-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.jms.listener.AbstractMessageListenerContainer.doExecuteListener(AbstractMessageListenerContainer.java:651) ~[spring-jms-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:315) [spring-jms-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:233) [spring-jms-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:1150) [spring-jms-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.executeOngoingLoop(DefaultMessageListenerContainer.java:1142) [spring-jms-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:1039) [spring-jms-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_45]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_45]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
Caused by: org.springframework.transaction.TransactionSystemException: Could not commit JPA transaction; nested exception is javax.persistence.RollbackException: Transaction marked as rollbackOnly
at org.springframework.orm.jpa.JpaTransactionManager.doCommit(JpaTransactionManager.java:526) ~[spring-orm-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:761) ~[spring-tx-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:730) ~[spring-tx-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:150) ~[spring-tx-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.apache.camel.spring.spi.TransactionErrorHandler.doInTransactionTemplate(TransactionErrorHandler.java:174) ~[camel-spring-2.16.0.jar:2.16.0]
at org.apache.camel.spring.spi.TransactionErrorHandler.processInTransaction(TransactionErrorHandler.java:134) ~[camel-spring-2.16.0.jar:2.16.0]
at org.apache.camel.spring.spi.TransactionErrorHandler.process(TransactionErrorHandler.java:103) ~[camel-spring-2.16.0.jar:2.16.0]
at org.apache.camel.spring.spi.TransactionErrorHandler.process(TransactionErrorHandler.java:112) ~[camel-spring-2.16.0.jar:2.16.0]
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:190) ~[camel-core-2.16.0.jar:2.16.0]
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:190) ~[camel-core-2.16.0.jar:2.16.0]
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:109) ~[camel-core-2.16.0.jar:2.16.0]
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:87) ~[camel-core-2.16.0.jar:2.16.0]
at org.apache.camel.component.jms.EndpointMessageListener.onMessage(EndpointMessageListener.java:103) ~[camel-jms-2.15.3.jar:2.15.3]
... 11 common frames omitted
Caused by: javax.persistence.RollbackException: Transaction marked as rollbackOnly
at org.hibernate.jpa.internal.TransactionImpl.commit(TransactionImpl.java:74) ~[hibernate-entitymanager-4.3.11.Final.jar:4.3.11.Final]
at org.springframework.orm.jpa.JpaTransactionManager.doCommit(JpaTransactionManager.java:517) ~[spring-orm-4.2.3.RELEASE.jar:4.2.3.RELEASE]
... 23 common frames omitted
The exception is thrown because Camel is propagating the rollback. If you call this route from another route, Camel will roll back that route too. You must catch the exception that marked the route for rollback and tell Camel to mark only this route as rollback-only. Before .transacted(), add:
.onException(Exception.class).markRollbackOnlyLast().end()
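Putting it together with the route from the question, a sketch:

from("activemq:TRIGGER").routeId("Trigger")
    // Route-scoped handler: mark only this route as rolled back
    // instead of propagating the rollback to any calling route.
    .onException(Exception.class).markRollbackOnlyLast().end()
    .transacted()
    .bean("helloService")
    .to("log:out");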

Solr failed to create an index for a Cassandra 2.1.1 list that has a user-defined data type

I'm trying to integrate Cassandra 2.1.1 with Solr, but Solr failed to create the index with the following error message.
16767 [qtp297774990-12] INFO org.apache.solr.handler.dataimport.DataImporter – Loading DIH Configuration: dataconfigCassandra.xml
16779 [qtp297774990-12] INFO org.apache.solr.handler.dataimport.DataImporter – Data Configuration loaded successfully
16788 [Thread-15] INFO org.apache.solr.handler.dataimport.DataImporter – Starting Full Import
16789 [qtp297774990-12] INFO org.apache.solr.core.SolrCore – [Entity_dev] webapp=/solr path=/dataimport params={optimize=false&indent=true&clean=true&commit=true&verbose=false&command=full-import&debug=false&wt=json} status=0 QTime=27
16810 [qtp297774990-12] INFO org.apache.solr.core.SolrCore – [Entity_dev] webapp=/solr path=/dataimport params={indent=true&command=status&_=1416042006354&wt=json} status=0 QTime=0
16831 [Thread-15] INFO org.apache.solr.handler.dataimport.SimplePropertiesWriter – Read dataimport.properties
16917 [Thread-15] INFO org.apache.solr.search.SolrIndexSearcher – Opening Searcher#6214b0dc[Entity_dev] realtime
16945 [Thread-15] INFO org.apache.solr.handler.dataimport.JdbcDataSource – Creating a connection for entity Entity with URL: jdbc:cassandra://10.234.31.153:9160/galaxy_dev
17082 [Thread-15] INFO org.apache.solr.handler.dataimport.JdbcDataSource – Time taken for getConnection(): 136
17429 [Thread-15] ERROR org.apache.solr.handler.dataimport.DocBuilder – Exception while processing: Entity document : SolrInputDocument(fields: []):org.apache.solr.handler.dataimport.DataImportHandlerException: Unable to execute query: select * from entity Processing Document # 1
at org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:71)
at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.<init>(JdbcDataSource.java:283)
at org.apache.solr.handler.dataimport.JdbcDataSource.getData(JdbcDataSource.java:240)
at org.apache.solr.handler.dataimport.JdbcDataSource.getData(JdbcDataSource.java:44)
at org.apache.solr.handler.dataimport.SqlEntityProcessor.initQuery(SqlEntityProcessor.java:59)
at org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:73)
at org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:243)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:476)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:415)
at org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:330)
at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
at org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:416)
at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:480)
at org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:461)
Caused by: java.lang.NullPointerException
at org.apache.cassandra.cql.jdbc.ListMaker.compose(ListMaker.java:61)
at org.apache.cassandra.cql.jdbc.TypedColumn.<init>(TypedColumn.java:68)
at org.apache.cassandra.cql.jdbc.CassandraResultSet.createColumn(CassandraResultSet.java:1174)
at org.apache.cassandra.cql.jdbc.CassandraResultSet.populateColumns(CassandraResultSet.java:240)
at org.apache.cassandra.cql.jdbc.CassandraResultSet.<init>(CassandraResultSet.java:200)
at org.apache.cassandra.cql.jdbc.CassandraStatement.doExecute(CassandraStatement.java:169)
at org.apache.cassandra.cql.jdbc.CassandraStatement.execute(CassandraStatement.java:205)
at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.<init>(JdbcDataSource.java:276)
... 12 more
CREATE TABLE dev.entity (
id uuid PRIMARY KEY,
begining int,
domain text,
domain_type text,
template_name text,
field_values list<frozen<fieldmap>>
)
User-defined type:
CREATE TYPE galaxy_dev.fieldmap (
key text,
value text );
Please let me know which driver or JAR to use to create a Solr index for the latest Cassandra.
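As a hedged workaround sketch (the NullPointerException in ListMaker.compose suggests the old cql-jdbc driver cannot decode the frozen UDT list; this is an inference, not a confirmed diagnosis): restricting the DIH entity query to the scalar columns of the table above may let the import proceed:

select id, begining, domain, domain_type, template_name from entity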
