What makes an invalid core name? - solr

While devising a naming scheme for core names, I tried naming a core "search/live" and received this exception when trying to start Solr:
java.lang.RuntimeException: Invalid core name: search/live
at org.apache.solr.core.CoreContainer.registerCore(CoreContainer.java:411)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:499)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:255)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:249)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Evidently using / in a core name makes it invalid. What are the restricted characters that make a core name invalid? I can't seem to find any documentation on this.

The valid characters for a core name appear to be undocumented. According to the source of org.apache.solr.core.CoreContainer#registerCore(String, SolrCore, boolean) in Solr 4.10.4, the only invalid characters are:
Forward slash: /
Backslash: \
The following character is also problematic, causing issues in the admin interface and when performing general queries:
Colon: :
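For reference, the check boils down to something like the sketch below. This is only an approximation of what registerCore does in the 4.10.x source, written to illustrate which characters are rejected; it is not the actual Solr code.
// Approximation of the core-name check in CoreContainer#registerCore (Solr 4.10.x).
// Not the exact Solr source; shown only to illustrate which characters are rejected.
static void validateCoreName(String name) {
    if (name == null || name.indexOf('/') >= 0 || name.indexOf('\\') >= 0) {
        throw new RuntimeException("Invalid core name: " + name);
    }
}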

Related

Kylin Build Cube sometimes fails at "#19 Step Name: Hive Cleanup" with java.lang.RuntimeException: Failed to read kylin_hive_conf.xml

The error occurs intermittently; after restarting Kylin (kylin.sh stop and then kylin.sh start), it finds the conf dir location and passes this step.
I am using Kylin version "2.6.2", and KYLIN_CONF="/opt/kylin/conf" is already set correctly.
The error hints differ; I have encountered the following:
1.
java.lang.RuntimeException: Failed to read kylin_hive_conf.xml at '/opt/apache-kylin-2.6.2-bin-hadoop3/bin/meta/kylin_hive_conf.xml'
at org.apache.kylin.common.util.SourceConfigurationUtil.loadXmlConfiguration(SourceConfigurationUtil.java:88)
at org.apache.kylin.common.util.SourceConfigurationUtil.loadHiveConfiguration(SourceConfigurationUtil.java:61)
at org.apache.kylin.common.util.HiveCmdBuilder.<init>(HiveCmdBuilder.java:48)
at org.apache.kylin.source.hive.GarbageCollectionStep.cleanUpIntermediateFlatTable(GarbageCollectionStep.java:63)
at org.apache.kylin.source.hive.GarbageCollectionStep.doWork(GarbageCollectionStep.java:49)
at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:167)
at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:71)
at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:167)
at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:114)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2.
java.lang.RuntimeException: Failed to read kylin_hive_conf.xml at '/opt/apache-kylin-2.6.2-bin-hadoop3/bin/meta/kylin_hive_conf.xml'
3.
java.lang.RuntimeException: Failed to read kylin_hive_conf.xml at '/opt/apache-kylin-2.6.2-bin-hadoop3/conf/meta/kylin_hive_conf.xml'
Could anyone kindly help me find the root cause and fix this problem?
Thanks in advance.
I hope you have already solved the issue. I encountered the same problem and investigated it.
Refer to https://github.com/apache/kylin/blob/kylin-2.6.2/engine-mr/src/main/java/org/apache/kylin/engine/mr/common/AbstractHadoopJob.java#L481
When MapReduce is used, KYLIN_CONF is set to a different folder:
System.setProperty(KylinConfig.KYLIN_CONF, metaDir.getAbsolutePath());
I think that to work around it, we have to create symbolic links for all the XML configurations (a sketch follows below).
Try checking your Kylin log:
cat YOUR_PATH/apache-kylin-2.6.3-bin-hbase1x/logs/kylin.log | grep "The absolute path"
You will possibly see a result like:
2019-10-14 23:47:04,438 INFO [LocalJobRunner Map Task Executor #0] common.AbstractHadoopJob:482 : The absolute path for meta dir is /SOME_FOLDER/meta
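For what it's worth, the symlink workaround can be scripted. The sketch below is a minimal Java version (the same can be done with ln -s); the Kylin paths are assumptions taken from the error messages above and must be adjusted to your installation.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class LinkKylinHiveConf {
    public static void main(String[] args) throws IOException {
        // Real conf directory of the Kylin installation (assumed path).
        Path confDir = Paths.get("/opt/apache-kylin-2.6.2-bin-hadoop3/conf");
        // Meta directories that KYLIN_CONF was reset to in the errors above (assumed paths).
        Path[] metaDirs = {
                Paths.get("/opt/apache-kylin-2.6.2-bin-hadoop3/bin/meta"),
                Paths.get("/opt/apache-kylin-2.6.2-bin-hadoop3/conf/meta")
        };
        for (Path metaDir : metaDirs) {
            Files.createDirectories(metaDir);
            Path link = metaDir.resolve("kylin_hive_conf.xml");
            if (Files.notExists(link)) {
                // Point the expected location back at the real configuration file.
                Files.createSymbolicLink(link, confDir.resolve("kylin_hive_conf.xml"));
            }
        }
    }
}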

Why does the error "all scheduled cores encountered errors in user code" appear? Is it related to the server's processor cores?

We are analyzing sequencing data; while filtering and trimming fastq files we encountered the following error. Is it due to the unavailability of cores for processing the commands?
Error in `colnames<-`(`*tmp*`, value = c("cs103_R1_dada.fastq", "cs110_R1_dada.fastq", :
  attempt to set 'colnames' on an object with less than two dimensions
In addition: Warning message:
In mclapply(seq_len(n), do_one, mc.preschedule = mc.preschedule, :
  all scheduled cores encountered errors in user code
>
As pengchy suggested, there may be something wrong with the function.
Try the same call using lapply and the error message will be more informative.
To clarify what #f2003596 and #HelloWorld said: this just means that a crash occurred within the function you called, i.e. while it was executing that function. It does not necessarily mean that your function is incorrect; for example, you get the same error when a variable has not been found.
That would mean your R function has a crash.
Note: if you pass an unexpected argument to mclapply you can also get this error message. I typed mC.cores instead of mc.cores by mistake and got it.

Warning causing Android build to fail

I have native Android code that I bundle with my app. It has been working for many months, but today the same code is failing with warnings. I think the last successful build was two days ago:
--
Note: there were 5 references to unknown classes.
You should check your configuration for typos.
(http://proguard.sourceforge.net/manual/troubleshooting.html#unknownclass)
Note: there were 1927 unkept descriptor classes in kept class members.
You should consider explicitly keeping the mentioned classes
(using '-keep').
(http://proguard.sourceforge.net/manual/troubleshooting.html#descriptorclass)
Note: there were 2 unresolved dynamic references to classes or interfaces.
You should check if you need to specify additional program jars.
(http://proguard.sourceforge.net/manual/troubleshooting.html#dynamicalclass)
Note: there were 4 class casts of dynamically created class instances.
You might consider explicitly keeping the mentioned classes and/or
their implementations (using '-keep').
(http://proguard.sourceforge.net/manual/troubleshooting.html#dynamicalclasscast)
Warning: there were 23 unresolved references to program class members.
Your input classes appear to be inconsistent.
You may need to recompile the code.
(http://proguard.sourceforge.net/manual/troubleshooting.html#unresolvedprogramclassmember)
Exception while processing task
java.io.IOException: Please correct the above warnings first.
at proguard.Initializer.execute(Initializer.java:473)
at proguard.ProGuard.initialize(ProGuard.java:233)
at proguard.ProGuard.execute(ProGuard.java:98)
at proguard.gradle.ProGuardTask.proguard(ProGuardTask.java:1074)
at com.android.build.gradle.tasks.AndroidProGuardTask.doMinification(AndroidProGuardTask.java:139)
at com.android.build.gradle.tasks.AndroidProGuardTask$1.run(AndroidProGuardTask.java:115)
at com.android.builder.tasks.Job.runTask(Job.java:48)
at com.android.build.gradle.tasks.SimpleWorkQueue$EmptyThreadContext.runTask(SimpleWorkQueue.java:41)
at com.android.builder.tasks.WorkQueue.run(WorkQueue.java:227)
at java.lang.Thread.run(Thread.java:745)
:proguardRelease (Thread[Daemon worker,5,main]) completed. Took 6.197 secs.
:dexRelease (Thread[Daemon worker,5,main]) started.
:dexRelease
Executing task ':dexRelease' (up-to-date check took 0.058 secs) due to:
---
So, I have found out that the ProGuard task will halt execution if there are warnings. Searching the net reveals various ways of dealing with these (one common approach is sketched below), but the bottom line is to make sure you fix the warnings.
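For example, one commonly suggested workaround is to silence specific notes/warnings in the ProGuard configuration after verifying they are harmless; the package and class names below are placeholders, not taken from this build, and fixing the underlying references is still the better option.
# Hypothetical ProGuard entries: suppress notes/warnings only for libraries you have
# confirmed are safe; otherwise fix the inconsistent/unresolved references instead.
-dontnote some.library.package.**
-dontwarn some.other.library.**
# Keep classes that are only reached via reflection or dynamic class loading.
-keep class some.dynamically.created.Class { *; }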

Indexing Wikipedia with Solr doesn't work

I'm trying to index the English Wikipedia, around 40 GB, but it's not working. I've followed the tutorial at http://wiki.apache.org/solr/DataImportHandler#Configuring_DataSources and other related Stack Overflow questions like Indexing wikipedia with solr and Indexing wikipedia dump with solr.
I was able to import the Simple English Wikipedia (about 150k documents) and the Portuguese Wikipedia (more than 1 million documents) using the configuration explained in the tutorial. The problem happens when I try to index the English Wikipedia (more than 8 million documents). It gives the following error:
Full Import failed:java.lang.RuntimeException: java.lang.RuntimeException: org.apache.solr.handler.dataimport.DataImportHandlerException: java.lang.OutOfMemoryError: Java heap space
at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:270)
at org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:411)
at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:476)
at org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:457)
Caused by: java.lang.RuntimeException: org.apache.solr.handler.dataimport.DataImportHandlerException: java.lang.OutOfMemoryError: Java heap space
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:410)
at org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:323)
at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:231)
... 3 more
Caused by: org.apache.solr.handler.dataimport.DataImportHandlerException: java.lang.OutOfMemoryError: Java heap space
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:539)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:408)
... 5 more
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.index.ParallelPostingsArray.<init>(ParallelPostingsArray.java:34)
at org.apache.lucene.index.FreqProxTermsWriterPerField$FreqProxPostingsArray.<init>(FreqProxTermsWriterPerField.java:254)
at org.apache.lucene.index.FreqProxTermsWriterPerField$FreqProxPostingsArray.newInstance(FreqProxTermsWriterPerField.java:279)
at org.apache.lucene.index.ParallelPostingsArray.grow(ParallelPostingsArray.java:48)
at org.apache.lucene.index.TermsHashPerField$PostingsBytesStartArray.grow(TermsHashPerField.java:307)
at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:324)
at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:185)
at org.apache.lucene.index.DocInverterPerField.processFields(DocInverterPerField.java:165)
at org.apache.lucene.index.DocFieldProcessor.processDocument(DocFieldProcessor.java:248)
at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:253)
at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:453)
at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1520)
at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:217)
at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
at org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:569)
at org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:705)
at org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:435)
at org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:100)
at org.apache.solr.handler.dataimport.SolrWriter.upload(SolrWriter.java:70)
at org.apache.solr.handler.dataimport.DataImportHandler$1.upload(DataImportHandler.java:235)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:504)
... 6 more
I'm using a MacBook Pro with 4 GB of RAM and more than 120 GB of free space on the HD. I've already tried to change the 256 value in solrconfig.xml, but no success so far.
Could anyone help me, please?
Edit
Just in case someone has the same problem: I used the command java -Xmx1g -jar start.jar, suggested by Cheffe, to solve my problem.
Your Java VM is running out of memory. Give it more memory, as explained in this SO question: Increase heap size in Java
java -Xmx1024m myprogram
Further detail on the -Xmx parameter can be found in the docs; just search for -Xmxsize:
Specifies the maximum size (in bytes) of the memory allocation pool in bytes. This value must be a multiple of 1024 and greater than 2 MB. Append the letter k or K to indicate kilobytes, m or M to indicate megabytes, g or G to indicate gigabytes. The default value is chosen at runtime based on system configuration. For server deployments, -Xms and -Xmx are often set to the same value. For more information, see Garbage Collector Ergonomics at http://docs.oracle.com/javase/8/docs/technotes/guides/vm/gc-ergonomics.html
The following examples show how to set the maximum allowed size of allocated memory to 80 MB using various units:
-Xmx83886080
-Xmx81920k
-Xmx80m
The -Xmx option is equivalent to -XX:MaxHeapSize.
If you have Tomcat 6, you can increase the Java heap size in the file
/etc/default/tomcat6
by changing the -Xmx parameter in the line (e.g. from -Xmx128m to -Xmx256m):
JAVA_OPTS="-Djava.awt.headless=true -Xmx256m -XX:+UseConcMarkSweepGC"
During the import, watch the Admin Dashboard web page, where you can see the actual JVM memory allocated.

Unable to set password protection for a PDF file using Java (iText jar used)

PdfWriter writer = PdfWriter.getInstance(document,
        new FileOutputStream("C:\\Documents and Settings\\abc\\Desktop\\Test.pdf"));
writer.setEncryption("123".getBytes(), "123".getBytes(),
        PdfWriter.ALLOW_PRINTING, PdfWriter.ENCRYPTION_AES_128);
I am using itextpdf-5.4.4.jar.
When executing the setEncryption() method I get the following error:
Exception in thread "main" java.lang.NoClassDefFoundError: org/bouncycastle/asn1/ASN1Primitive
Please suggest some solutions.
If I use itextpdf-5.2.1.jar then the above code works without any exceptions.
itextpdf 5.2.1 depends on the BouncyCastle library bctsp-jdk15 1.46, while itextpdf 5.4.4 depends on two BouncyCastle libraries: bcpkix-jdk15on 1.49 and bcprov-jdk15on 1.49. ASN1Primitive was introduced to bcprov-jdk15on starting with version 1.47, so both 1.49 jars need to be on the classpath alongside itextpdf-5.4.4.jar.
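As a quick sanity check (a sketch only, using the class name from the stack trace above), you can probe for the class before calling setEncryption():
// If this prints the message, bcprov-jdk15on (1.47 or later) is missing from the
// classpath and setEncryption() will fail with the NoClassDefFoundError shown above.
try {
    Class.forName("org.bouncycastle.asn1.ASN1Primitive");
} catch (ClassNotFoundException e) {
    System.err.println("bcprov-jdk15on 1.47+ is not on the classpath");
}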
