Catalog 'myDB' not found in database 'db' - benerator

I am trying to run Benerator to populate a database (the shop demo, which fills database schemas based on a setup file). While running it, I get the error below:
15:25:50,232 INFO (main) [DefaultDBSystem] Fetching table details and ordering tables by dependency
15:25:50,554 ERROR (main) [DescriptorRunner] Error in Benerator execution
org.databene.commons.ConfigurationError: Catalog 'myDB' not found in database 'db'
at org.databene.platform.db.DBSystem.findTableInConfiguredCatalogAndSchema(DBSystem.java:819)
at org.databene.platform.db.DBSystem.getTable(DBSystem.java:791)
at org.databene.platform.db.DBSystem.getWriteColumnInfos(DBSystem.java:744)
at org.databene.platform.db.DBSystem.persistOrUpdate(DBSystem.java:831)
at org.databene.platform.db.DBSystem.store(DBSystem.java:360)
at org.databene.benerator.storage.StorageSystemInserter.startProductConsumption(StorageSystemInserter.java:53)
at org.databene.benerator.consumer.AbstractConsumer.startConsuming(AbstractConsumer.java:47)
at org.databene.benerator.consumer.ConsumerProxy.startConsuming(ConsumerProxy.java:62)
at org.databene.benerator.engine.statement.ConsumptionStatement.execute(ConsumptionStatement.java:53)
at org.databene.benerator.engine.statement.GenerateAndConsumeTask.execute(GenerateAndConsumeTask.java:159)
at org.databene.task.TaskProxy.execute(TaskProxy.java:59)
at org.databene.task.StateTrackingTaskProxy.execute(StateTrackingTaskProxy.java:52)
at org.databene.task.TaskExecutor.runWithoutPage(TaskExecutor.java:136)
at org.databene.task.TaskExecutor.runPage(TaskExecutor.java:126)
at org.databene.task.TaskExecutor.run(TaskExecutor.java:101)
at org.databene.task.TaskExecutor.run(TaskExecutor.java:77)
at org.databene.task.TaskExecutor.execute(TaskExecutor.java:71)
at org.databene.benerator.engine.statement.GenerateOrIterateStatement.executeTask(GenerateOrIterateStatement.java:156)
at org.databene.benerator.engine.statement.GenerateOrIterateStatement.execute(GenerateOrIterateStatement.java:99)
at org.databene.benerator.engine.statement.LazyStatement.execute(LazyStatement.java:58)
at org.databene.benerator.engine.statement.StatementProxy.execute(StatementProxy.java:46)
at org.databene.benerator.engine.statement.TimedGeneratorStatement.execute(TimedGeneratorStatement.java:70)
at org.databene.benerator.engine.statement.SequentialStatement.executeSubStatements(SequentialStatement.java:52)
at org.databene.benerator.engine.statement.SequentialStatement.execute(SequentialStatement.java:47)
at org.databene.benerator.engine.BeneratorRootStatement.execute(BeneratorRootStatement.java:63)
at org.databene.benerator.engine.DescriptorRunner.execute(DescriptorRunner.java:127)
at org.databene.benerator.engine.DescriptorRunner.runWithoutShutdownHook(DescriptorRunner.java:109)
at org.databene.benerator.engine.DescriptorRunner.run(DescriptorRunner.java:102)
at org.databene.benerator.main.Benerator.runFile(Benerator.java:94)
at org.databene.benerator.main.Benerator.runFromCommandLine(Benerator.java:75)
at org.databene.benerator.main.Benerator.main(Benerator.java:68)
15:25:50,611 INFO (main) [CachingDBImporter] Exporting Database meta data of ___temp to cache file
15:25:50,635 INFO (main) [CONFIG] Max. committed heap size: 15 MB
Inside my 'db' folder I have the file user.ben.xml, which starts with:
<database id="db" url="jdbc:oracle:thin:@localhost:1521:mirev" driver="oracle.jdbc.driver.OracleDriver" user="myDB" tableFilter="DB_.*" />
I am new to Benerator. Could anyone please tell me why this error is thrown?

By default, Oracle DB does not support catalogs. Make sure your DB has a catalog enabled and defined; if not, remove the catalog from your configuration.
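For example, a minimal sketch of the <database> element with the schema stated explicitly and no catalog set (the schema attribute value MYDB is an assumption here; Oracle normally maps the user name to the schema):
<!-- no catalog attribute; schema="MYDB" is illustrative -->
<database id="db" url="jdbc:oracle:thin:@localhost:1521:mirev"
          driver="oracle.jdbc.driver.OracleDriver"
          user="myDB" schema="MYDB" tableFilter="DB_.*" />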

I tried the same today...
It seems the Oracle user/schema (= catalog in JDBC terms) needs to be alphabetically first for the example to work. I created a user 'A1000' and the example then worked.
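For completeness, a sketch of creating such a user in SQL*Plus (the password and grants are illustrative):
-- 'A1000' sorts alphabetically before other schemas; the password is a placeholder
CREATE USER A1000 IDENTIFIED BY secret;
GRANT CONNECT, RESOURCE TO A1000;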

Related

Connection to SQL Server database using sql_exporter and Prometheus but can't execute custom metrics on db

I've gone through many of the options to get this set up, and none worked with my SQL Server setup except sql_exporter. The connection succeeds and I can read all the built-in metrics, but when I try my own query on a specific database and its table, there is always something wrong with my query, such as "Invalid Object" when trying to reach the database. I have tried many resources, but mostly I would like a custom metric like the one in https://sysdig.com/blog/monitor-sql-server-prometheus/.
sql_exporter.yml:
# The target to monitor and the collectors to execute on it.
target:
  # Data source name always has a URI schema that matches the driver name. In some cases (e.g. MySQL)
  # the schema gets dropped or replaced to match the driver expected DSN format.
  data_source_name: 'sqlserver://username:password@localhost:1433'

  # Collectors (referenced by name) to execute on the target.
  collectors: [mssql_standard]

# Collector files specifies a list of globs. One collector definition is read from each matching file.
collector_files:
  - "*.collector.yml"
prometheus.yml:
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["localhost:9090"]

  - job_name: 'sql_server'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9966']
When I tried the custom metric from the post I linked, sql_exporter crashed instantly with no errors. My database is found by the standard metrics of https://github.com/free/sql_exporter, but I am unsure of the syntax to execute a simple SELECT db_value FROM db_table. I understand there are ways out there and I have tried them, so I need assistance. Thank you in advance!
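An untested sketch of a collector file for free/sql_exporter, using the hypothetical db_table/db_value names from the question. Saved as, say, db_value.collector.yml next to sql_exporter.yml, the "*.collector.yml" glob should pick it up:
# db_value.collector.yml -- all names here are illustrative
collector_name: db_value_custom
metrics:
  - metric_name: db_value
    type: gauge
    help: 'Current value read from db_table.'
    values: [db_value]
    query: |
      SELECT db_value FROM db_table
The collector then has to be referenced in sql_exporter.yml, e.g. collectors: [mssql_standard, db_value_custom], before restarting the exporter.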

Debezium error, schema isn't known to this connector

I have a project using Debezium, mostly based on this example, which is then connected to an Apache Pulsar.
I have changed a few configurations. The file now looks like this:
database.history=io.debezium.relational.history.MemoryDatabaseHistory
connector.class=io.debezium.connector.mysql.MySqlConnector
offset.storage=org.apache.kafka.connect.storage.FileOffsetBackingStore
offset.storage.file.filename=offset.dat
offset.flush.interval.ms=5000
name=mysql-dbz-connector
database.hostname={ip}
database.port=3308
database.user={user}
database.password={pass}
database.dbname=database
database.server.name=test
table.whitelist=database.history_table,database.project_table
snapshot.mode=schema_only
schemas.enable=false
include.schema.changes=false
pulsar.topic=persistent://public/default/{0}
pulsar.broker.address=pulsar://{ip}:6650
database.history=io.debezium.relational.history.MemoryDatabaseHistory
As you may understand, what I'm trying to do is monitor modifications to history_table and project_table in the database and then write payloads onto an Apache Pulsar.
My problem is as follows: whatever snapshot mode I use, once an offset has been written I can't restart Debezium without getting an error on the next database update.
Encountered change event for table database.history_table whose schema isn't known to this connector
It only happens with an existing offset.dat file. I think this is because the schema is null within the offset.dat file. Take this one for example:
¨Ìsrjava.util.HashMap⁄¡√`—F
loadFactorI thresholdxp?#wur[B¨Û¯T‡xpG{"schema":null,"payload":["mysql-dbz-connector",{"server":"test"}]}uq~U{"ts_sec":1563802215,"file":"database-bin.000005","pos":79574,"server_id":1,"event":1}x
I first suspected the schemas.enable=false or the include.schema.changes=false parameters that I used to make the JSON more concise, but their values don't change anything in the offset.dat file.
The problem lies in the line database.history=io.debezium.relational.history.MemoryDatabaseHistory. The in-memory history will not survive a restart, which is why the connector no longer knows the table's schema. You should use FileDatabaseHistory instead.
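A minimal sketch of the change in the connector properties file (the dbhistory.dat path is illustrative):
# file-backed history survives restarts, unlike MemoryDatabaseHistory
database.history=io.debezium.relational.history.FileDatabaseHistory
database.history.file.filename=dbhistory.dat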

Progress Error: Connect to database in $DLC (1379)

How do I connect to a database in $DLC in Progress OpenEdge? Connecting fails with error (1379).
Thanks,
Purushottam
Databases in $DLC (the directory that Progress was installed in) are templates -- you must make a copy of the template db in some other directory in order to use it. You cannot run databases directly from $DLC.
Usually you use a command such as:
proenv> prodb sports sports
to make a local copy of the default "sports" db.
Or you can just type "prodb" and you will be prompted for the new db name and the template name. The new name can be different from the template name.
You have to create a copy of the sports database in another directory (not in the OpenEdge installation directory) using the procopy or prodb command.
For example, in proenv:
procopy Sports2000 D:\spdb
or
prodb D:\spdb Sports2000
Now you can easily connect to the database.
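Putting it together, a sketch of a proenv session (the D:\spdb path is illustrative):
proenv> procopy Sports2000 D:\spdb
proenv> pro D:\spdb
The first command copies the Sports2000 template out of $DLC; the second starts a single-user session against the copy to verify that it connects.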

WSO2 Message Broker Error while adding Queue - Invalid Object Name

I have just set up a WSO2 Message Broker 3.0.0 connecting to a SQL Server DB.
The DB for the Carbon MB component has been created successfully as well.
The DB for the Message Broker Data store is created and contains the table MB_QUEUE_MAPPING.
However when adding a Queue via the MB UI I see the following error in the stack trace:
[2015-12-16 15:00:41,472] ERROR {org.wso2.andes.store.rdbms.RDBMSMessageStoreImpl} - Error occurred while retrieving destination queue id for destination queue TestQ
java.sql.SQLException: Invalid object name 'MB_QUEUE_MAPPING'.
at net.sourceforge.jtds.jdbc.SQLDiagnostic.addDiagnostic(SQLDiagnostic.java:372)
at net.sourceforge.jtds.jdbc.TdsCore.tdsErrorToken(TdsCore.java:2988)
at net.sourceforge.jtds.jdbc.TdsCore.nextToken(TdsCore.java:2421)
at net.sourceforge.jtds.jdbc.TdsCore.getMoreResults(TdsCore.java:671)
at net.sourceforge.jtds.jdbc.JtdsStatement.executeSQLQuery(JtdsStatement.java:505)
at net.sourceforge.jtds.jdbc.JtdsPreparedStatement.executeQuery(JtdsPreparedStatement.java:1029)
at org.wso2.andes.store.rdbms.RDBMSMessageStoreImpl.getQueueID(RDBMSMessageStoreImpl.java:1324)
at org.wso2.andes.store.rdbms.RDBMSMessageStoreImpl.getCachedQueueID(RDBMSMessageStoreImpl.java:1298)
at org.wso2.andes.store.rdbms.RDBMSMessageStoreImpl.addQueue(RDBMSMessageStoreImpl.java:1634)
at org.wso2.andes.store.FailureObservingMessageStore.addQueue(FailureObservingMessageStore.java:445)
at org.wso2.andes.kernel.AMQPConstructStore.addQueue(AMQPConstructStore.java:116)
at org.wso2.andes.kernel.AndesContextInformationManager.createQueue(AndesContextInformationManager.java:154)
at org.wso2.andes.kernel.disruptor.inbound.InboundQueueEvent.updateState(InboundQueueEvent.java:151)
at org.wso2.andes.kernel.disruptor.inbound.InboundEventContainer.updateState(InboundEventContainer.java:167)
at org.wso2.andes.kernel.disruptor.inbound.StateEventHandler.onEvent(StateEventHandler.java:67)
at org.wso2.andes.kernel.disruptor.inbound.StateEventHandler.onEvent(StateEventHandler.java:41)
at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:128)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
The "Add Queue" screen does not go away however the Queue does get added to the MB_QUEUE table just fine in the DB. Both tables MB_QUEUE_MAPPING & MB_QUEUE_COUNTER are blank.
The "List Queues" screen does blank despite a number of Queues in the MB_QUEUE table. Stack trace also shows errors but is not included as its not relevant to the error above.
I can create a Topic just fine however.
I want to know why MB would say the table MB_QUEUE_MAPPING is an Invalid object name when the table clearly exists ?
I suspect the way you have configured the MySQL database is incorrect, so try one of the two scenarios below to verify this (see the command sketch after the list):
1) start the server for the first time with the -Dsetup parameter, or
2) refer to the "Configuring MySQL" documentation (https://docs.wso2.com/display/MB300/Configuring+MySQL) and follow the step-by-step instructions in order.
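For scenario 1, a sketch of the startup command (assuming a Linux install; on Windows it would be wso2server.bat -Dsetup):
./bin/wso2server.sh -Dsetup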
I tried the second scenario and did not get any exception when adding a queue. The document mentioned above should be updated as follows.
You can see this command in step 3:
mysql -u <db_user_name> -p -D<database_name> < '<WSO2MB_HOME>/dbscripts/mb-store/mysql-mb.sql'
db_user_name - username of the db.
database_name - database name that you created in step 1.
WSO2MB_HOME - home directory path for MB.
Hope this helps you resolve the issue.
It seems the user connecting to the MSSQL database does not have the correct permissions, most probably SELECT. The reason I say this is that when you add a queue, it does get added, which means the user has INSERT permission. Once the queue is added, the page redirects to the Queue List page, and the user must have SELECT permission to retrieve the queue list. Topics are not added to the database; they are kept in the registry. You can verify which user connects to MSSQL in the configuration in wso2mb-3.0.0/repository/conf/datasources/master-datasources.xml, like below:
<datasource>
    <name>WSO2_MB_STORE_DB</name>
    <jndiConfig>
        <name>WSO2MBStoreDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:jtds:sqlserver://localhost:1433/wso2_mb</url>
            <username>sa</username>
            <password>sa</password>
            <driverClassName>net.sourceforge.jtds.jdbc.Driver</driverClassName>
            <maxActive>200</maxActive>
            <maxWait>60000</maxWait>
            <minIdle>5</minIdle>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
            <defaultAutoCommit>false</defaultAutoCommit>
        </configuration>
    </definition>
</datasource>
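If that user is indeed missing the right, a minimal T-SQL sketch to grant SELECT (your_mb_user is a placeholder; substitute whatever user your datasource actually uses):
-- run against the message-store database from the datasource URL
USE wso2_mb;
GRANT SELECT ON SCHEMA::dbo TO your_mb_user;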

Neo4j: How do you rebuild the label scan store?

I shut down my Neo4J instance every night to do a backup. This morning I found that it failed to start up again:
2015-12-05 03:38:49.326+0000 INFO Successfully shutdown Neo4j Server
2015-12-05 03:38:49.330+0000 ERROR Failed to start Neo4j: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase#7728902c' was successfully initialized, but failed to start. Please see attached cause exception. Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase#7728902c' was successfully initialized, but failed to start. Please see attached cause exception.
org.neo4j.server.ServerStartupException: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase#7728902c' was successfully initialized, but failed to start. Please see attached cause exception.
at org.neo4j.server.exception.ServerStartupErrors.translateToServerStartupError(ServerStartupErrors.java:67)
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:234)
at org.neo4j.server.Bootstrapper.start(Bootstrapper.java:97)
at org.neo4j.server.CommunityBootstrapper.start(CommunityBootstrapper.java:48)
at org.neo4j.server.CommunityBootstrapper.main(CommunityBootstrapper.java:35)
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.server.database.LifecycleManagingDatabase#7728902c' was successfully initialized, but failed to start. Please see attached cause exception.
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:462)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:111)
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:194)
... 3 more
Caused by: java.lang.RuntimeException: Error starting org.neo4j.kernel.impl.factory.CommunityFacadeFactory, /lustre/scratch116/vr/vrpipe/neo4j/production/db
at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.newFacade(GraphDatabaseFacadeFactory.java:143)
at org.neo4j.kernel.impl.factory.CommunityFacadeFactory.newFacade(CommunityFacadeFactory.java:43)
at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.newFacade(GraphDatabaseFacadeFactory.java:108)
at org.neo4j.server.CommunityNeoServer$1.newGraphDatabase(CommunityNeoServer.java:66)
at org.neo4j.server.database.LifecycleManagingDatabase.start(LifecycleManagingDatabase.java:95)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:452)
... 5 more
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.kernel.api.impl.index.LuceneLabelScanStore#28c94a12' failed to initialize. Please see attached cause exception.
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.init(LifeSupport.java:434)
at org.neo4j.kernel.lifecycle.LifeSupport.init(LifeSupport.java:66)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:102)
at org.neo4j.kernel.NeoStoreDataSource.start(NeoStoreDataSource.java:600)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:452)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:111)
at org.neo4j.kernel.impl.transaction.state.DataSourceManager.start(DataSourceManager.java:112)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:452)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:111)
at org.neo4j.kernel.impl.factory.GraphDatabaseFacadeFactory.newFacade(GraphDatabaseFacadeFactory.java:139)
... 10 more
Caused by: java.io.IOException: Label scan store could not be read, and needs to be rebuilt. To trigger a rebuild, ensure the database is stopped, delete the files in '/lustre/scratch116/vr/vrpipe/neo4j/production/db/schema/label/lucene', and then start the database again.
at org.neo4j.kernel.api.impl.index.LuceneLabelScanStore.init(LuceneLabelScanStore.java:259)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.init(LifeSupport.java:424)
... 19 more
I followed its advice and deleted db/schema/label/lucene/*; the database then started up fine, but I can't query any existing nodes or relationships. The web front end says I have no node labels or relationship types. I tried match (n)-[r]-() return n,r, but that returns nothing.
How do I get my database back? Perhaps I need to force a rebuild of the Lucene indexes somehow?
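For reference, the rebuild procedure from the exception message, written out as shell commands (a sketch assuming a standard tarball install; the data path is taken from the log above):
./bin/neo4j stop
rm -rf /lustre/scratch116/vr/vrpipe/neo4j/production/db/schema/label/lucene
./bin/neo4j start
The label scan store is derived data that Neo4j rebuilds from the node store on startup, so this step by itself should not lose nodes or relationships.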
You took a backup before you deleted it?
You only deleted that directory?
What does the new startup log look like?
How much data do you have in your db?
What does this return? match (n) return count(*)
