Background:
I have a Bitnami Solr image installed on Google Compute Engine
I have a custom core with a customized schema
I had updated the core with approximately 100 documents
Everything was running fine for about 3 weeks. I then decided to restart the server as a part of routine maintenance.
When I restarted, all documents in the core had disappeared. The core is empty. The core configuration is there, the schema configuration is there, but the documents are gone.
I also checked the file storage area under solr/mycore/data/index and there isn't much there.
I am a Solr newbie and my usage of it is fairly simple, but I am concerned that I may be doing something wrong.
Can someone please advise what the error could be?
Update:
I observed that reloading a core causes all documents in the core to be lost, so I think I may be doing something incorrect in terms of persisting documents.
Update 2:
After further reading, I figured out that my autoCommit settings in solrconfig.xml may not be right, so I tried fiddling with them: I set maxTime to 1000 milliseconds and changed openSearcher to true.
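For reference, the autoCommit block in my solrconfig.xml now looks roughly like this (maxTime and openSearcher are the values I actually set; the element names are the standard solrconfig.xml ones):

<autoCommit>
  <maxTime>1000</maxTime>
  <openSearcher>true</openSearcher>
</autoCommit>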
After doing the above, I tried adding a bunch of documents via the admin console and got the error below. I am stumped now!
auto commit error...:java.io.FileNotFoundException: /opt/bitnami/apache-solr/solr/mycore/data/index/_0.fnm (Permission denied)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(Unknown Source)
at java.io.FileOutputStream.<init>(Unknown Source)
at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:389)
at org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:282)
at org.apache.lucene.store.NRTCachingDirectory.unCache(NRTCachingDirectory.java:247)
at org.apache.lucene.store.NRTCachingDirectory.sync(NRTCachingDirectory.java:182)
at org.apache.lucene.index.IndexWriter.startCommit(IndexWriter.java:4528)
at org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:3001)
at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3104)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3071)
at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:582)
at org.apache.solr.update.CommitTracker.run(CommitTracker.java:216)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(Unknown Source)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Just had a similar issue. I'm using SolrCloud; make sure zookeeper/conf/zoo.cfg has dataDir set to something outside of /tmp (which is used in many of the examples), since /tmp is cleared on restart by many Linux distributions.
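For example, a minimal zoo.cfg along these lines (the dataDir path here is only an illustration; any location that survives a reboot will do):

tickTime=2000
clientPort=2181
# keep the data directory out of /tmp so it survives restarts
dataDir=/var/lib/zookeeper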
Well, it seems you don't have write permissions on the disk. You should check whether the OS user running your Solr instance is allowed to write to the disk. Note that I don't know anything about GCE; just check whether an administration console provided by Google gives you options for managing permissions on the file system.
Another option would be to move your indexes somewhere else on the file system where you do have write permissions.
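As a rough sketch of what to check from a shell (the service user name is a guess; the index path is taken from the exception above):

# find out which user the Solr/Tomcat process is running as
ps -ef | grep -i solr
# check who owns the index directory from the error message
ls -ld /opt/bitnami/apache-solr/solr/mycore/data/index
# if the owner does not match the service user, fix it (replace "solr" with the actual user)
sudo chown -R solr:solr /opt/bitnami/apache-solr/solr/mycore/data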
Make sure you don't have two vhosts in Catalina using the same Solr home. I've found that this wipes the index on service stop.
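To illustrate, Solr home on Tomcat is usually set per context via a JNDI environment entry; two context descriptors pointing at the same directory, as sketched below with hypothetical paths, will end up sharing (and clobbering) one index:

<!-- context A -->
<Context docBase="/opt/solr/solr.war">
  <Environment name="solr/home" type="java.lang.String" value="/opt/solr/home" override="true"/>
</Context>
<!-- context B: same solr/home value - this is the misconfiguration to avoid -->
<Context docBase="/opt/solr/solr.war">
  <Environment name="solr/home" type="java.lang.String" value="/opt/solr/home" override="true"/>
</Context>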
Related
I have Stargate 1.0.38 running fine on my DEV server. I am able to use the Stargate REST API to get an auth_token and run insert and select queries.
Yesterday I created an index Cql3SolrSecondaryIndex for a table in my Cassandra DSE 6.8. Then I saw the error below in the Stargate log. After that I dropped the index, but even after dropping it I still see the same error in the Stargate log. I also tried stopping and starting Stargate, but the error remains.
ERROR [MigrationStage:1] 2021-10-15 00:47:13,593 PullRequestScheduler.java:245 - Configuration exception merging remote schema
org.apache.cassandra.exceptions.ConfigurationException: Unable to find custom indexer class 'com.datastax.bdp.search.solr.Cql3SolrSecondaryIndex'
at org.apache.cassandra.utils.FBUtilities.classForName(FBUtilities.java:493)
at org.apache.cassandra.schema.IndexMetadata.getCustomIndexClass(IndexMetadata.java:190)
at org.apache.cassandra.schema.IndexMetadata.validate(IndexMetadata.java:131)
at org.apache.cassandra.schema.Indexes.lambda$validate$2(Indexes.java:168)
at java.lang.Iterable.forEach(Iterable.java:75)
at org.apache.cassandra.schema.Indexes.validate(Indexes.java:168)
at org.apache.cassandra.schema.TableMetadata.validate(TableMetadata.java:512)
at java.lang.Iterable.forEach(Iterable.java:75)
at org.apache.cassandra.schema.KeyspaceMetadata.validate(KeyspaceMetadata.java:112)
at org.apache.cassandra.schema.KeyspaceMetadata.<init>(KeyspaceMetadata.java:85)
at org.apache.cassandra.schema.KeyspaceMetadata.create(KeyspaceMetadata.java:167)
at org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:1154)
at org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspaces(SchemaKeyspace.java:1769)
at org.apache.cassandra.schema.SchemaManager.merge(SchemaManager.java:893)
at org.apache.cassandra.schema.SchemaManager.mergeAndAnnounceVersion(SchemaManager.java:877)
at org.apache.cassandra.schema.PullRequestScheduler.lambda$sendPullRequest$2(PullRequestScheduler.java:240)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:88)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
at org.apache.cassandra.utils.concurrent.InlinedThreadLocalThread.run(InlinedThreadLocalThread.java:251)
Caused by: java.lang.ClassNotFoundException: com.datastax.bdp.search.solr.Cql3SolrSecondaryIndex not found by io.stargate.db.dse [1]
at org.apache.felix.framework.BundleWiringImpl.findClassOrResourceByDelegation(BundleWiringImpl.java:1597)
at org.apache.felix.framework.BundleWiringImpl.access$300(BundleWiringImpl.java:79)
at org.apache.felix.framework.BundleWiringImpl$BundleClassLoader.loadClass(BundleWiringImpl.java:1982)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.cassandra.utils.FBUtilities.classForName(FBUtilities.java:489)
... 24 common frames omitted
Because of this error, I am still able to get an auth_token, but all select queries return this error:
{
"description": "Resource not found: keyspace 'test' not found",
"code": 404
}
Please help me to fix this issue.
Stargate does not currently support advanced workloads like Search and Graph. I think you might have to drop and recreate that keyspace without the Solr index for it to work again since the schema still exists on the other nodes.
This issue has been documented here. There has also been a request made to support Solr here.
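If it helps, dropping the offending index is a single CQL statement run against the DSE nodes (the index name below is hypothetical; the keyspace is the 'test' one from your error, assuming the index was created as a custom secondary index):

DROP INDEX IF EXISTS test.my_solr_index;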
#David, I was able to drop the DSE Search index, perform a rolling restart of my DSE node(s) and then the Stargate node(s), and it started up just fine without any errors. I confirmed all my prior data was still there and was able to validate basic CRUD operations using the Stargate REST, GraphQL & Document APIs without any issues after that.
I have been indexing for quite a long time, but now I am not able to do so. I keep getting the following error.
INFO [Thread-80] (00002SB6) [SolrIndexerJob] Started indexer cronjob.
ERROR [Thread-80] (00002SB6) [Job] Caught throwable de/hybris/platform/solrfacetsearch/config/IndexConfig
java.lang.NoClassDefFoundError: de/hybris/platform/solrfacetsearch/config/IndexConfig
at ma.glasnost.orika.generated.Orika_FacetSearchConfig_FacetSearchConfig_Mapper45623933135018$4.mapAtoB(Orika_FacetSearchConfig_FacetSearchConfig_Mapper45623933135018$4.java)
at ma.glasnost.orika.impl.mapping.strategy.UseCustomMapperStrategy.map(UseCustomMapperStrategy.java:67)
at ma.glasnost.orika.impl.MapperFacadeImpl.map(MapperFacadeImpl.java:735)
at ma.glasnost.orika.impl.MapperFacadeImpl.map(MapperFacadeImpl.java:714)
at ma.glasnost.orika.impl.ConfigurableMapper.map(ConfigurableMapper.java:150)
at de.hybris.platform.solrfacetsearch.config.impl.DefaultFacetSearchConfigService.getConfiguration(DefaultFacetSearchConfigService.java:51)
at de.hybris.platform.solrfacetsearch.indexer.cron.AbstractIndexerJob.getFacetSearchConfig(AbstractIndexerJob.java:70)
at de.hybris.platform.solrfacetsearch.indexer.cron.SolrIndexerJob.performIndexingJob(SolrIndexerJob.java:49)
at de.hybris.platform.multicountry.solr.indexer.cron.impl.MulticountrySolrIndexerJob.performIndexingJob(MulticountrySolrIndexerJob.java:73)
at de.hybris.platform.solrfacetsearch.indexer.cron.AbstractIndexerJob.perform(AbstractIndexerJob.java:40)
at de.hybris.platform.servicelayer.internal.jalo.ServicelayerJob.performCronJob(ServicelayerJob.java:38)
at de.hybris.platform.cronjob.jalo.Job.execute(Job.java:1390)
at de.hybris.platform.cronjob.jalo.Job.performImpl(Job.java:814)
at de.hybris.platform.cronjob.jalo.Job.performImpl(Job.java:732)
at de.hybris.platform.cronjob.jalo.Job.perform(Job.java:644)
at de.hybris.platform.servicelayer.cronjob.impl.DefaultCronJobService.performCronJob(DefaultCronJobService.java:86)
at de.hybris.platform.solrfacetsearchbackoffice.wizards.BaseSolrIndexerWizardStep$WizardCronJobAsyncOperation.execute(BaseSolrIndexerWizardStep.java:158)
at com.hybris.cockpitng.engine.impl.DefaultWidgetInstanceManager$1.getResult(DefaultWidgetInstanceManager.java:206)
at com.hybris.cockpitng.engine.operations.ResultLongOperation.execute(ResultLongOperation.java:52)
at com.hybris.cockpitng.engine.operations.LongOperation.run(LongOperation.java:205)
at java.lang.Thread.run(Thread.java:748)
I am not sure which change is causing this issue.
I tried setting up a fresh hybris suite and still got the same issue. I followed hybris answers that suggested performing ant clean all and restarting the server, which didn't work. I created a new index and performed indexing again, still the same. I am able to open the Solr admin but not able to index anything.
Any help would be really appreciated.
Just restarted the server. It worked for me!
I am setting up a web app using GeoServer and PostgreSQL. I created a PostGIS datastore and configured all layers and layer groups. I didn't even shut down my computer, but when I started working with GeoServer again I couldn't reach the layer preview, and I noticed there is no option visible for adding a PostGIS datastore. Before that I had installed the CSS and backup & restore extensions. It may not be relevant, but my computer also shut down suddenly because of a power outage; even so, I was still able to reach the datastore after the power came back. Additionally, I renamed the datastore that I had created.
I tried reinstalling GeoServer and PostGIS, but that did not fix it.
Here is the error:
Caused by: java.io.IOException: Failed to find the datastore factory for kadikoygis_itrf, did you forget to install the store extension jar?
at org.geoserver.catalog.ResourcePool.getDataStore(ResourcePool.java:535)
at org.geoserver.catalog.ResourcePool.getCacheableFeatureType(ResourcePool.java:916)
at org.geoserver.catalog.ResourcePool.tryGetFeatureType(ResourcePool.java:901)
at org.geoserver.catalog.ResourcePool.getFeatureType(ResourcePool.java:893)
at org.geoserver.catalog.ResourcePool.getFeatureType(ResourcePool.java:878)
at org.geoserver.catalog.impl.FeatureTypeInfoImpl.getFeatureType(FeatureTypeInfoImpl.java:123)
at jdk.internal.reflect.GeneratedMethodAccessor275.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at org.geoserver.catalog.impl.ModificationProxy.invoke(ModificationProxy.java:127)
at com.sun.proxy.$Proxy36.getFeatureType(Unknown Source)
at org.geoserver.wms.map.GetMapKvpRequestReader.checkStyle(GetMapKvpRequestReader.java:1215)
... 102 more
After removing and reinstalling it, the problem was solved.
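If it happens again, one quick thing to check before a full reinstall (assuming a Tomcat-style layout; the jar name pattern is from memory and may differ by version) is whether the PostGIS datastore jar is actually present in the GeoServer webapp:

ls webapps/geoserver/WEB-INF/lib | grep -i postgis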
I am new to Solr.
I have created two cores from the admin page, let's call them "books" and "libraries", and imported some data there. Everything works without a hitch until I restart the server. When I do so, one of these cores disappears, and the logging screen in the admin page contains:
SEVERE CoreContainer null:java.lang.NoClassDefFoundError: net/arnx/jsonic/JSONException
SEVERE SolrCore REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore#454055ac (papers) has a reference count of 1
I was testing my query in the admin interface; when I refreshed it, the "libraries" core was gone, even though I could query it normally just a minute earlier. The contents of solr.xml are intact. Even if I restart Tomcat, it remains gone.
Additionally, I was trying to build a query similar to this: "Find books matching 'war peace' in libraries in Atlanta or New York". So given the cores "books" and "libraries", I would issue the following query to "books" (which might be wrong; if it is, please correct me):
(title:(war peace) blurb:(war peace))
AND _query_:"{!join
fromIndex=libraries from=libraryid to=libraryid
v='city:(new york) city:(atlanta)'}"
When I do so, the query fails and the "libraries" core disappears, with the above symptoms. If I re-add it, I can continue working (as long as I don't restart the server or issue another join query).
I am using Solr 4.0; if anyone has a clue what is happening, I would be very grateful. I could not find anything about the meaning of the error message, so if anyone could suggest where to look, or how to go about debugging this, that would be really great. I can't even find where the log file itself is located...
I would avoid the Debian package, which may be misconfigured and quirky. It also contains (a very early build of?) Solr 4.0, which itself may have lingering issues, being the first release in a new major version; the package maintainer may not have incorporated the latest and safest Solr release into the package.
A better way is to download Solr 4.1 yourself and set it up yourself with Tomcat or another servlet container.
In case you are looking to install Solr 4.0 and configure it, you can follow the installation procedure from here.
Update the Solr config so that the cores are persistent.
In your solr.xml, change <solr> or <solr persistent="false"> to <solr persistent="true">.
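A minimal sketch of the legacy (pre-4.4) solr.xml with persistence enabled, using the core names from your question:

<solr persistent="true">
  <cores adminPath="/admin/cores">
    <core name="books" instanceDir="books" />
    <core name="libraries" instanceDir="libraries" />
  </cores>
</solr>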
I have a SolrJ client with an infinite timeout (Solr 4):
server.server.setSoTimeout(0)
server.server.setConnectionTimeout(0)
When I index my data, I get many timeouts on the server side.
Where can I update the server-side timeouts, in solrconfig.xml or possibly in the Tomcat config?
Client side exception:
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
Server side exception:
Jan 31, 2013 8:55:54 PM org.apache.solr.common.SolrException log
SEVERE: org.apache.solr.common.SolrException: Read timed out
at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:159)
at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1699)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:751)
We had the same problem with Solr 4. We solved it after reading a blog post by Uwe Schindler (a Solr committer).
With Solr 4 and several Solr 3 versions, you have to leave a significant share of your RAM free so that the system can properly use the mmap system call. This can be subtle depending on your system configuration (the blog post gives plenty of information on that point). In our case this solved the problem: we could finally index without any more timeout issues.
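As an illustration only (the numbers depend entirely on your machine and index size): on a box with 8 GB of RAM you might cap the Tomcat JVM heap well below the physical total, for example in bin/setenv.sh, so the OS page cache has room for the mmap'd index files:

# keep the heap modest; the remaining RAM goes to the OS page cache for mmap'd index files
CATALINA_OPTS="$CATALINA_OPTS -Xms2g -Xmx2g"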
The info for the Tomcat server.xml config will solve this. We got the same stack trace, and the link below solved it for us:
http://forums.alfresco.com/ja/node/8458
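For reference, the relevant setting is the connectionTimeout attribute on the HTTP Connector in Tomcat's server.xml; the value below is in milliseconds and is only an example:

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="60000"
           redirectPort="8443" />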