Alfresco - Configure 2 groupSearchBases for Active Directory

How do I configure two groupSearchBases for Alfresco?
Right now I have this property in my global.properties:
ldap.synchronization.groupSearchBase=CN\=Alfresco users,OU\=Users,OU\=AWE,DC\=main,DC\=awe
But I need to configure a second search base with the path
CN=Alfresco users,OU=Labs,OU=AWE,DC=main,DC=awe.
What I have tried is to configure the property with an OR statement, like this:
ldap.synchronization.groupSearchBase=(|(CN\=Alfresco users,OU\=Users,OU\=AWE,DC\=main,DC\=awe)(CN\=Alfresco users,OU\=Labs,OU\=AWE,DC\=main,DC\=awe))
This setting gave me an error:
00:30:07,147 ERROR [org.alfresco.repo.security.sync.ChainingUserRegistrySynchronizer] Synchronization aborted due to error
org.alfresco.error.AlfrescoRuntimeException: 02290000 Error during LDAP Search. Reason: null
...
Caused by: javax.naming.PartialResultException [Root exception is javax.naming.NamingException: LDAP response read timed out, timeout used:5000ms. [Root exception is com.sun.jndi.ldap.LdapReferralException: Continuation Reference; remaining name 'DC\=main,DC\=awe']; remaining name '']
...
Caused by: javax.naming.NamingException: LDAP response read timed out, timeout used:5000ms. [Root exception is com.sun.jndi.ldap.LdapReferralException: Continuation Reference; remaining name 'DC\=main,DC\=awe']; remaining name ''
...
Caused by: com.sun.jndi.ldap.LdapReferralException: Continuation Reference; remaining name 'DC\=main,DC\=awe'
I also tried shortening the searchBase path to a common parent that includes both directories, like this:
ldap.synchronization.groupSearchBase=CN\=Alfresco users,OU\=AWE,DC\=main,DC\=awe
But this also gave me an error:
org.alfresco.error.AlfrescoRuntimeException: 02310000 Error during LDAP Search. Reason: [LDAP: error code 32 - 0000208D: NameErr: DSID-03100238, problem 2001 (NO_OBJECT), data 0, best match of: 'OU=AWE,DC=main,DC=awe'
...
Caused by: javax.naming.NameNotFoundException: [LDAP: error code 32 - 0000208D: NameErr: DSID-03100238, problem 2001 (NO_OBJECT), data 0, best match of:'OU=AWE,DC=main,DC=awe'
What am I doing wrong, and how can I make Alfresco search both groupSearchBases (the easiest way possible)? Thanks in advance.

As mentioned in the comments, the search base is an LDAP Distinguished Name (DN), not a query. This means that you should set the search base for your user and group queries to a path to which both organizational units are subordinate: OU=AWE,DC=main,DC=awe.
Then you need to build the user and group queries so that only the expected groups and users are returned. E.g. the person query can look like this:
(&
  (objectCategory\=Person)
  (|
    (memberOf\:1.2.840.113556.1.4.1941\:\=CN\=Alfresco users,OU\=Users,OU\=AWE,DC\=main,DC\=awe)
    (memberOf\:1.2.840.113556.1.4.1941\:\=CN\=Alfresco users,OU\=Labs,OU\=AWE,DC\=main,DC\=awe)
  )
  (userAccountControl\:1.2.840.113556.1.4.803\:\=512)
)
For the group search you should do the same; a sketch follows.
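A corresponding group query could look like this (a sketch modeled on the person query above, an assumption rather than a tested filter: it matches the two groups themselves plus any groups nested beneath them):
(&
  (objectClass\=group)
  (|
    (distinguishedName\=CN\=Alfresco users,OU\=Users,OU\=AWE,DC\=main,DC\=awe)
    (distinguishedName\=CN\=Alfresco users,OU\=Labs,OU\=AWE,DC\=main,DC\=awe)
    (memberOf\:1.2.840.113556.1.4.1941\:\=CN\=Alfresco users,OU\=Users,OU\=AWE,DC\=main,DC\=awe)
    (memberOf\:1.2.840.113556.1.4.1941\:\=CN\=Alfresco users,OU\=Labs,OU\=AWE,DC\=main,DC\=awe)
  )
)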
Hint: 1.2.840.113556.1.4.1941 is an Active Directory-specific matching rule for retrieving nested groups (recursive resolution of all members of a DN). For more info, check Active Directory: LDAP Syntax Filters | MS TechNet.
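Putting it together, the relevant alfresco-global.properties entries might look like this (a sketch: the property names are the standard Alfresco LDAP-AD subsystem ones, and each query must stay on a single line in the properties file):
# Search bases: the common parent of both OUs
ldap.synchronization.userSearchBase=OU\=AWE,DC\=main,DC\=awe
ldap.synchronization.groupSearchBase=OU\=AWE,DC\=main,DC\=awe
# Person query restricted to (nested) members of either group
ldap.synchronization.personQuery=(&(objectCategory\=Person)(|(memberOf\:1.2.840.113556.1.4.1941\:\=CN\=Alfresco users,OU\=Users,OU\=AWE,DC\=main,DC\=awe)(memberOf\:1.2.840.113556.1.4.1941\:\=CN\=Alfresco users,OU\=Labs,OU\=AWE,DC\=main,DC\=awe))(userAccountControl\:1.2.840.113556.1.4.803\:\=512))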

extension 'inboundservices' not found within de.hybris.bootstrap.typesystem.YTypeSystem

I am getting this error even after removing all the orphaned types.
ERROR [hybrisHTTP6] [XMLContentHandler] error in taglistener de.hybris.bootstrap.typesystem.xml.ItemTypeTagListener#6db66316 at line 425 : error parsing system integrationservices at lines [400-425] : extension 'inboundservices' not found within de.hybris.bootstrap.typesystem.YTypeSystem#58732e23
Full error log:
de.hybris.bootstrap.xml.UnknownParseError: error parsing system integrationservices at lines [400-425] : extension 'inboundservices' not found within de.hybris.bootstrap.typesystem.YTypeSystem#58732e23
java.lang.IllegalArgumentException: extension 'inboundservices' not found within de.hybris.bootstrap.typesystem.YTypeSystem#58732e23
    at de.hybris.bootstrap.typesystem.xml.AbstractTypeSystemTagListener.processError(AbstractTypeSystemTagListener.java:44)
    at de.hybris.bootstrap.xml.DefaultTagListener.endElement(DefaultTagListener.java:293)
    at de.hybris.bootstrap.xml.XMLContentHandler.endElement(XMLContentHandler.java:197)
This happens when itemtypes are moved from one extension to another. In your case you had itemtypes defined in inboundservices-items.xml that are now in integrationservices-items.xml.
To find out which exact itemtypes are causing the issue, go to the lines mentioned in the stacktrace, integrationservices-items.xml lines 400 to 425.
To fix this, you will need to manually delete some entries from the database with the following queries (SQL, not FlexibleSearch) via the HAC.
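Before running the deletes, you can preview the rows they would remove with a matching SELECT, e.g. (using the same placeholder list of type codes as the queries below):
select pk, internalcodelowercase
from composedtypes
where internalcodelowercase in ('list','of','affected','attributes');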
Delete the attributeDescriptors of the moved attributes
delete
from attributedescriptors
where ownerpkstring in (select pk
from composedtypes
where internalcodelowercase in ('list','of','affected','attributes'));
Delete the composedtypes
delete
from composedtypes
where internalcodelowercase in ('list','of','affected','attributes');
And in case you have moved a deployment table, delete the ydeployments as well
delete
from ydeployments
where ExtensionName = 'inboundservices'
AND TableName = 'affectedItemTypeTable'
Some additional information that might help can be found here

LDAP 000021B1: SvcErr: DSID-0315154A, problem 5005 (UNABLE_TO_PROCEED)

I'm using a Rust program to perform a modify_replace command on an Active Directory group. This command modify_replaces around 30,000 users. I verified the user has read/write access to the group. I'm modifying the member attribute on a group object.
The largest successful modify_replace so far is about 8,000 objects.
The error I receive is:
2022-08-26T17:02:55.001Z ERROR [groupsyncer::ldap::ad] "000021B1:
SvcErr: DSID-0315154A, problem 5005 (UNABLE_TO_PROCEED),
The issue for me was that a few users in the modify_replace could not be added to the group. By adding them one at a time, I could narrow down which ones. For safety I chose modify_add as the only option; a sketch of that approach follows.
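For reference, a hedged sketch of that batch-and-isolate approach using the ldap3 crate (host, bind credentials, group DN, and batch size are all placeholders; the original program isn't shown):
use ldap3::{LdapConn, Mod};
use std::collections::HashSet;

// Add members in small batches with modify_add; when a batch fails, the
// offending member DNs are easy to isolate by shrinking the batch further.
fn add_members_in_batches(member_dns: &[String]) -> ldap3::result::Result<()> {
    let mut ldap = LdapConn::new("ldap://dc.example.com")?; // placeholder host
    ldap.simple_bind("cn=svc,dc=example,dc=com", "secret")?.success()?;
    for chunk in member_dns.chunks(100) {
        let values: HashSet<&str> = chunk.iter().map(|s| s.as_str()).collect();
        let result = ldap.modify(
            "cn=BigGroup,ou=Groups,dc=example,dc=com", // placeholder group DN
            vec![Mod::Add("member", values)],
        )?;
        if let Err(e) = result.success() {
            eprintln!("batch starting at {:?} failed: {}", chunk.first(), e);
        }
    }
    ldap.unbind()
}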

Items not grouped correctly - CoGroupByKey

CoGroupByKey problem
Data description.
I have two datasets.
Records - the first contains around 0.5-1M records per (key, day). For testing I use 2-3 keys and 5-10 days of data. What I shoot for is 1000+ keys. Each record contains a key, a timestamp in microseconds, and some other data.
Configs - the second is rather small. It describes each key over time; you can think of it as a list of tuples: (key, start date, end date, description).
For the exploration I've encoded the data as files of length-prefixed, binary-encoded Protocol Buffer messages. Additionally, the files are gzip-compressed. Data is sharded by date. Each file is around 10MB.
Pipeline
I use Apache Beam to express a pipeline.
First I add keys to both datasets. For the Records dataset it's (key, timestamp rounded to the day). For Configs the key is (key, day), where day is each date between the start date and end date (pointing at midnight).
The datasets are merged using CoGroupByKey; see the sketch below.
As a key type I use org.apache.flink.api.java.tuple.Tuple2 with a Tuple2Coder from the repo github.com/orian/tuple-coder.
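For reference, a minimal sketch of that join (the Record and Config element types and the two keyed PCollections are placeholders, and the import package names follow current Apache Beam rather than the 2016 Dataflow SDK):
import org.apache.beam.sdk.transforms.join.CoGbkResult;
import org.apache.beam.sdk.transforms.join.CoGroupByKey;
import org.apache.beam.sdk.transforms.join.KeyedPCollectionTuple;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TupleTag;
import org.apache.flink.api.java.tuple.Tuple2;

// Tags identify each side of the join inside the resulting CoGbkResult.
static final TupleTag<Record> RECORDS = new TupleTag<>();
static final TupleTag<Config> CONFIGS = new TupleTag<>();

static PCollection<KV<Tuple2<String, Long>, CoGbkResult>> joinRecordsWithConfigs(
        PCollection<KV<Tuple2<String, Long>, Record>> keyedRecords,
        PCollection<KV<Tuple2<String, Long>, Config>> keyedConfigs) {
    // Per key, the CoGbkResult holds all Records and all Configs for that
    // (key, day); downstream code reads them with result.getAll(RECORDS) etc.
    return KeyedPCollectionTuple.of(RECORDS, keyedRecords)
            .and(CONFIGS, keyedConfigs)
            .apply(CoGroupByKey.create());
}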
The problem
If the Records dataset is tiny, like 5 days, everything seems fine (check normal_run.log).
INFO [main] (FlinkPipelineRunner.java:124) - Final aggregator values:
INFO [main] (FlinkPipelineRunner.java:127) - item count : 4322332
INFO [main] (FlinkPipelineRunner.java:127) - missing val1 : 0
INFO [main] (FlinkPipelineRunner.java:127) - multiple val1 : 0
When I run the pipeline against 10+ days, I encounter an error indicating that for some Records there's no Config (wrong_run.log).
INFO [main] (FlinkPipelineRunner.java:124) - Final aggregator values:
INFO [main] (FlinkPipelineRunner.java:127) - item count : 8577197
INFO [main] (FlinkPipelineRunner.java:127) - missing val1 : 6
INFO [main] (FlinkPipelineRunner.java:127) - multiple val1 : 0
Then I've added some extra logging messages:
(a.java:144) - 68643 items for KeyValue3 on: 1462665600000000
(a.java:140) - no items for KeyValue3 on: 1463184000000000
(a.java:123) - missing for KeyValue3 on: 1462924800000000
(a.java:142) - 753707 items for KeyValue3 on: 1462924800000000 marked as no-loc
(a.java:123) - missing for KeyValue3 on: 1462752000000000
(a.java:142) - 749901 items for KeyValue3 on: 1462752000000000 marked as no-loc
(a.java:144) - 754578 items for KeyValue3 on: 1462406400000000
(a.java:144) - 751574 items for KeyValue3 on: 1463011200000000
(a.java:123) - missing for KeyValue3 on: 1462665600000000
(a.java:142) - 754758 items for KeyValue3 on: 1462665600000000 marked as no-loc
(a.java:123) - missing for KeyValue3 on: 1463184000000000
(a.java:142) - 694372 items for KeyValue3 on: 1463184000000000 marked as no-loc
You can spot that in the first line 68643 items were processed for KeyValue3 and time 1462665600000000.
Later on, in line 9, it seems the operation processes the same key again, but reports that no Config was available for these Records.
Line 10 informs that they've been marked as no-loc.
Line 2 says that there were no items for KeyValue3 and time 1463184000000000, but in line 11 you can read that the items for this (key, day) pair were processed later, and they lacked a Config.
Some clues
During one of the exploration runs I got an exception (exception_thrown.log).
05/26/2016 03:49:49 GroupReduce (GroupReduce at GroupByKey)(1/5) switched to FAILED
java.lang.Exception: The data preparation for task 'GroupReduce (GroupReduce at GroupByKey)' , caused an error: Error obtaining the sorted input: Thread 'SortMerger spilling thread' terminated due to an exception: Error obtaining the sorted input: Thread 'SortMerger Reading Thread' terminated due to an exception: tried to access field com.esotericsoftware.kryo.io.Input.inputStream from class org.apache.flink.api.java.typeutils.runtime.NoFetchingInput
at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:455)
at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:345)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Error obtaining the sorted input: Thread 'SortMerger spilling thread' terminated due to an exception: Error obtaining the sorted input: Thread 'SortMerger Reading Thread' terminated due to an exception: tried to access field com.esotericsoftware.kryo.io.Input.inputStream from class org.apache.flink.api.java.typeutils.runtime.NoFetchingInput
at org.apache.flink.runtime.operators.sort.UnilateralSortMerger.getIterator(UnilateralSortMerger.java:619)
at org.apache.flink.runtime.operators.BatchTask.getInput(BatchTask.java:1079)
at org.apache.flink.runtime.operators.GroupReduceDriver.prepare(GroupReduceDriver.java:94)
at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:450)
... 3 more
Caused by: java.io.IOException: Thread 'SortMerger spilling thread' terminated due to an exception: Error obtaining the sorted input: Thread 'SortMerger Reading Thread' terminated due to an exception: tried to access field com.esotericsoftware.kryo.io.Input.inputStream from class org.apache.flink.api.java.typeutils.runtime.NoFetchingInput
at org.apache.flink.runtime.operators.sort.UnilateralSortMerger$ThreadBase.run(UnilateralSortMerger.java:799)
Caused by: java.lang.RuntimeException: Error obtaining the sorted input: Thread 'SortMerger Reading Thread' terminated due to an exception: tried to access field com.esotericsoftware.kryo.io.Input.inputStream from class org.apache.flink.api.java.typeutils.runtime.NoFetchingInput
at org.apache.flink.runtime.operators.sort.UnilateralSortMerger.getIterator(UnilateralSortMerger.java:619)
at org.apache.flink.runtime.operators.sort.LargeRecordHandler.finishWriteAndSortKeys(LargeRecordHandler.java:263)
at org.apache.flink.runtime.operators.sort.UnilateralSortMerger$SpillingThread.go(UnilateralSortMerger.java:1409)
at org.apache.flink.runtime.operators.sort.UnilateralSortMerger$ThreadBase.run(UnilateralSortMerger.java:796)
Caused by: java.io.IOException: Thread 'SortMerger Reading Thread' terminated due to an exception: tried to access field com.esotericsoftware.kryo.io.Input.inputStream from class org.apache.flink.api.java.typeutils.runtime.NoFetchingInput
at org.apache.flink.runtime.operators.sort.UnilateralSortMerger$ThreadBase.run(UnilateralSortMerger.java:799)
Caused by: java.lang.IllegalAccessError: tried to access field com.esotericsoftware.kryo.io.Input.inputStream from class org.apache.flink.api.java.typeutils.runtime.NoFetchingInput
at org.apache.flink.api.java.typeutils.runtime.NoFetchingInput.readBytes(NoFetchingInput.java:122)
at com.esotericsoftware.kryo.io.Input.readBytes(Input.java:297)
at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.read(DefaultArraySerializers.java:35)
at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.read(DefaultArraySerializers.java:18)
at com.esotericsoftware.kryo.Kryo.readObjectOrNull(Kryo.java:706)
at com.esotericsoftware.kryo.serializers.FieldSerializer$ObjectField.read(FieldSerializer.java:611)
at com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:221)
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:732)
at org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:228)
at org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:242)
at org.apache.flink.api.java.typeutils.runtime.TupleSerializer.deserialize(TupleSerializer.java:144)
at org.apache.flink.api.java.typeutils.runtime.TupleSerializer.deserialize(TupleSerializer.java:30)
at org.apache.flink.api.java.typeutils.runtime.TupleSerializer.deserialize(TupleSerializer.java:144)
at org.apache.flink.api.java.typeutils.runtime.TupleSerializer.deserialize(TupleSerializer.java:30)
at org.apache.flink.runtime.io.disk.InputViewIterator.next(InputViewIterator.java:43)
at org.apache.flink.runtime.operators.sort.UnilateralSortMerger$ReadingThread.go(UnilateralSortMerger.java:973)
at org.apache.flink.runtime.operators.sort.UnilateralSortMerger$ThreadBase.run(UnilateralSortMerger.java:796)
Work-around (after more testing, it doesn't work; staying with Tuple2)
I've switched from using Tuple2 to a Protocol Buffer message:
message KeyDay {
  // Raw key bytes (exposed as ByteString in the generated Java class).
  optional bytes key = 1;
  // Day-rounded timestamp in microseconds.
  optional int64 timestamp_usec = 2;
}
But using Tuple2.of() was just easier than: KeyDay.newBuilder().setKey(...).setTimestampUsec(...).build().
When I switched the key to a class derived from protobuf.Message, the problem disappeared for 10-15 days of data (a size that was a problem for Tuple2), but increasing the data size to 20 days revealed it's still there.
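As an aside, a small factory method can make the protobuf key almost as terse to build as Tuple2.of(); a hypothetical sketch (keyDay is not from the original code; KeyDay is the class generated from the message above):
import com.google.protobuf.ByteString;

// Hypothetical convenience factory around the generated builder.
static KeyDay keyDay(ByteString key, long timestampUsec) {
    return KeyDay.newBuilder()
            .setKey(key)
            .setTimestampUsec(timestampUsec)
            .build();
}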

WSO2 Message Broker Error while adding Queue - Invalid Object Name

I have just set up a WSO2 Message Broker 3.0.0 connecting to a SQL Server DB.
The DB for the Carbon MB component has been created successfully as well.
The DB for the Message Broker Data store is created and contains the table MB_QUEUE_MAPPING.
However, when adding a Queue via the MB UI, I see the following error in the stack trace:
[2015-12-16 15:00:41,472] ERROR {org.wso2.andes.store.rdbms.RDBMSMessageStoreImpl} - Error occurred while retrieving destination queue id for destination queue TestQ
java.sql.SQLException: Invalid object name 'MB_QUEUE_MAPPING'.
at net.sourceforge.jtds.jdbc.SQLDiagnostic.addDiagnostic(SQLDiagnostic.java:372)
at net.sourceforge.jtds.jdbc.TdsCore.tdsErrorToken(TdsCore.java:2988)
at net.sourceforge.jtds.jdbc.TdsCore.nextToken(TdsCore.java:2421)
at net.sourceforge.jtds.jdbc.TdsCore.getMoreResults(TdsCore.java:671)
at net.sourceforge.jtds.jdbc.JtdsStatement.executeSQLQuery(JtdsStatement.java:505)
at net.sourceforge.jtds.jdbc.JtdsPreparedStatement.executeQuery(JtdsPreparedStatement.java:1029)
at org.wso2.andes.store.rdbms.RDBMSMessageStoreImpl.getQueueID(RDBMSMessageStoreImpl.java:1324)
at org.wso2.andes.store.rdbms.RDBMSMessageStoreImpl.getCachedQueueID(RDBMSMessageStoreImpl.java:1298)
at org.wso2.andes.store.rdbms.RDBMSMessageStoreImpl.addQueue(RDBMSMessageStoreImpl.java:1634)
at org.wso2.andes.store.FailureObservingMessageStore.addQueue(FailureObservingMessageStore.java:445)
at org.wso2.andes.kernel.AMQPConstructStore.addQueue(AMQPConstructStore.java:116)
at org.wso2.andes.kernel.AndesContextInformationManager.createQueue(AndesContextInformationManager.java:154)
at org.wso2.andes.kernel.disruptor.inbound.InboundQueueEvent.updateState(InboundQueueEvent.java:151)
at org.wso2.andes.kernel.disruptor.inbound.InboundEventContainer.updateState(InboundEventContainer.java:167)
at org.wso2.andes.kernel.disruptor.inbound.StateEventHandler.onEvent(StateEventHandler.java:67)
at org.wso2.andes.kernel.disruptor.inbound.StateEventHandler.onEvent(StateEventHandler.java:41)
at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:128)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
The "Add Queue" screen does not go away however the Queue does get added to the MB_QUEUE table just fine in the DB. Both tables MB_QUEUE_MAPPING & MB_QUEUE_COUNTER are blank.
The "List Queues" screen does blank despite a number of Queues in the MB_QUEUE table. Stack trace also shows errors but is not included as its not relevant to the error above.
I can create a Topic just fine however.
I want to know why MB would say the table MB_QUEUE_MAPPING is an Invalid object name when the table clearly exists ?
I suspect the way you have configured the database is incorrect, so you should try one of the two scenarios below to verify this:
1) starting the server for the first time with the -Dsetup parameter (see the command sketch after this list), or
2) referring to the documentation (https://docs.wso2.com/display/MB300/Configuring+MySQL), "Configuring MySQL", and following the step-by-step instructions given there in order.
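For the first option, the flag is simply passed to the normal startup script, e.g. on Linux (assuming the standard product layout):
cd <WSO2MB_HOME>/bin
./wso2server.sh -Dsetup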
I have tried out the second scenario and did not get any exception when adding a queue. The document I mentioned will have to be updated as below:
You can see this command in step 3.
mysql -u <db_user_name> -p -D<database_name> < '<WSO2MB_HOME>/dbscripts/mb-store/mysql-mb.sql ';
db_user_name - the username of the DB user.
database_name - the database name that you created in step 1.
WSO2MB_HOME - the home directory path of MB.
Hope this helps you to resolve the issue.
It seems the user connecting to the MSSQL database does not have the correct permissions, most probably SELECT permission. The reason I say this is that when you add a queue, it does get added, which means the user has INSERT permission. Once the queue is added, the page redirects to the Queue List page, and the user must have SELECT permission to retrieve the queue list. Topics are not added to the database; they are kept in the registry. You can verify which user connects to MSSQL from the configuration in wso2mb-3.0.0/repository/conf/datasources/master-datasources.xml, like below.
<datasource>
   <name>WSO2_MB_STORE_DB</name>
   <jndiConfig>
       <name>WSO2MBStoreDB</name>
   </jndiConfig>
   <definition type="RDBMS">
         <configuration>
                    <url>jdbc:jtds:sqlserver://localhost:1433/wso2_mb</url>
                    <username>sa</username>
                    <password>sa</password>
                    <driverClassName>net.sourceforge.jtds.jdbc.Driver</driverClassName>
                    <maxActive>200</maxActive>
                    <maxWait>60000</maxWait>
                    <minIdle>5</minIdle>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                    <defaultAutoCommit>false</defaultAutoCommit>
         </configuration>
     </definition>
</datasource>
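If missing permissions turn out to be the cause, granting them to the connecting login should fix it; a hedged T-SQL sketch (mb_user is a hypothetical login, replace it with the username from the datasource above; the tables are assumed to live in the default dbo schema):
USE wso2_mb;
-- Grant the DML permissions the broker needs on the message store tables
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO mb_user;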

Neo4j: How do you rebuild the label scan store?

I shut down my Neo4J instance every night to do a backup. This morning I found that it failed to start up again:
2015-12-05 03:38:49.326+0000 INFO Successfully shutdown Neo4j Server
2015-12-05 03:38:49.330+0000 ERROR Failed to start Neo4j: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase#7728902c' was successfully initialized, but failed to start. Please see attached cause exception. Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase#7728902c' was successfully initialized, but failed to start. Please see attached cause exception.
org.neo4j.server.ServerStartupException: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase#7728902c' was successfully initialized, but failed to start. Please see attached cause exception.
at org.neo4j.server.exception.ServerStartupErrors.translateToServerStartupError(ServerStartupErrors.java:67)
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:234)
at org.neo4j.server.Bootstrapper.start(Bootstrapper.java:97)
at org.neo4j.server.CommunityBootstrapper.start(CommunityBootstrapper.java:48)
at org.neo4j.server.CommunityBootstrapper.main(CommunityBootstrapper.java:35)
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.server.database.LifecycleManagingDatabase#7728902c' was successfully initialized, but failed to start. Please see attached cause exception.
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:462)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:111)
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:194)
... 3 more
Caused by: java.lang.RuntimeException: Error starting org.neo4j.kernel.impl.factory.CommunityFacadeFactory, /lustre/scratch116/vr/vrpipe/neo4j/production/db
at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.newFacade(GraphDatabaseFacadeFactory.java:143)
at org.neo4j.kernel.impl.factory.CommunityFacadeFactory.newFacade(CommunityFacadeFactory.java:43)
at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.newFacade(GraphDatabaseFacadeFactory.java:108)
at org.neo4j.server.CommunityNeoServer$1.newGraphDatabase(CommunityNeoServer.java:66)
at org.neo4j.server.database.LifecycleManagingDatabase.start(LifecycleManagingDatabase.java:95)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:452)
... 5 more
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.kernel.api.impl.index.LuceneLabelScanStore#28c94a12' failed to initialize. Please see attached cause exception.
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.init(LifeSupport.java:434)
at org.neo4j.kernel.lifecycle.LifeSupport.init(LifeSupport.java:66)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:102)
at org.neo4j.kernel.NeoStoreDataSource.start(NeoStoreDataSource.java:600)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:452)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:111)
at org.neo4j.kernel.impl.transaction.state.DataSourceManager.start(DataSourceManager.java:112)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:452)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:111)
at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.newFacade(GraphDatabaseFacadeFactory.java:139)
... 10 more
Caused by: java.io.IOException: Label scan store could not be read, and needs to be rebuilt. To trigger a rebuild, ensure the database is stopped, delete the files in '/lustre/scratch116/vr/vrpipe/neo4j/production/db/schema/label/lucene', and then start the database again.
at org.neo4j.kernel.api.impl.index.LuceneLabelScanStore.init(LuceneLabelScanStore.java:259)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.init(LifeSupport.java:424)
... 19 more
I followed its advice to delete db/schema/label/lucene/*, and the database started up fine, but I can't query any existing nodes or relationships. The web front end says I have no node labels or relationship types. I tried doing match (n)-[r]-() return n,r, but that returns nothing.
How do I get my database back? Perhaps I need to force rebuilding of the lucene indexes somehow?
You took a backup before you deleted it?
You only deleted that directory?
What does the new startup log look like?
How much data do you have in your db?
What does this return? match (n) return count(*)
