WSO2 API Manager - Setting 'cacheId' when clustering with SQL Server

I'm clustering WSO2 API Manager (v1.10.0) across three servers (Gateway + Publisher/Store + Key Store) by following this guide:
https://docs.wso2.com/display/CLUSTER44x/Clustering+API+Manager+1.10.0
I am on Step 11a of the 'Installing and configuring the databases' section. This states the following:
To give the Publisher and Store components access to the registry database, open the /repository/conf/registry.xml file in each of these two components and configure them as follows:
a. In the Publisher component's registry.xml file, add or modify the dataSource attribute of the <dbConfig name="govregistry"> element as follows:
<dbConfig name="govregistry">
    <dataSource>jdbc/WSO2REG_DB</dataSource>
</dbConfig>
<remoteInstance url="https://publisher.apim-wso2.com">
    <id>gov</id>
    <cacheId>user#jdbc:mysql://regdb.mysql-wso2.com:3306/regdb</cacheId>
    <dbConfig>govregistry</dbConfig>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
</remoteInstance>
<mount path="/_system/governance" overwrite="true">
    <instanceId>gov</instanceId>
    <targetPath>/_system/governance</targetPath>
</mount>
<mount path="/_system/config" overwrite="true">
    <instanceId>gov</instanceId>
    <targetPath>/_system/config</targetPath>
</mount>
However, I'm using Microsoft SQL Server, rather than MySQL, so the cacheId value doesn't look right to me.
How should the cacheId be configured for SQL Server, please?
I have taken a look through the commented-out descriptions in the registry.xml file, but cannot figure this out.
Here is my WSO2REG_DB configuration:
<datasource>
    <name>WSO2REG_DB</name>
    <description>The datasource used by the registry</description>
    <jndiConfig>
        <name>jdbc/WSO2REG_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:sqlserver://***SERVER***:1433;databaseName=***DATABASE_NAME***</url>
            <username>WS02RegUser</username>
            <password>***REMOVED***</password>
            <defaultAutoCommit>false</defaultAutoCommit>
            <driverClassName>com.microsoft.sqlserver.jdbc.SQLServerDriver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>

cacheId - This is the cache id of the remote instance. Here the cache
id should be in the format of $database_username#$database_url, where
$database_username is the username of the remote instance database and
$database_url is the remote instance database URL.
Reference: https://docs.wso2.com/display/Governance460/Remote+Instance+and+Mount+Configuration+Details#RemoteInstanceandMountConfigurationDetails-JDBC-basedRemoteInstanceConfiguration
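Applying that $database_username#$database_url format to the WSO2REG_DB datasource above, the SQL Server cacheId would presumably just join the datasource's username and JDBC URL with '#' (placeholders kept as-is; a sketch, not a verified value):
<remoteInstance url="https://publisher.apim-wso2.com">
    <id>gov</id>
    <cacheId>WS02RegUser#jdbc:sqlserver://***SERVER***:1433;databaseName=***DATABASE_NAME***</cacheId>
    <dbConfig>govregistry</dbConfig>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
</remoteInstance>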

Related

Connection to SQL Server database using sql_exporter and Prometheus but can't execute custom metrics on db

So, I've gone through many of the options to get this set up, and none worked with my SQL Server setup except sql_exporter. The connection succeeds and I can read all the built-in metrics, but when I try my own query against a specific database and its table there is always something wrong with my query, such as "Invalid Object" when trying to reach the database. I have attempted many resources, but what I would mostly like is a custom metric like the one in https://sysdig.com/blog/monitor-sql-server-prometheus/.
sql_exporter.yml:
# The target to monitor and the collectors to execute on it.
target:
  # Data source name always has a URI schema that matches the driver name. In some cases (e.g. MySQL)
  # the schema gets dropped or replaced to match the driver expected DSN format.
  data_source_name: 'sqlserver://username:password@localhost:1433'
  # Collectors (referenced by name) to execute on the target.
  collectors: [mssql_standard]
# Collector files specifies a list of globs. One collector definition is read from each matching file.
collector_files:
  - "*.collector.yml"
prometheus.yml:
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["localhost:9090"]
  - job_name: 'sql_server'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9966']
When I tried the custom metric in the post I linked, sql_exporter crashes instantly with no errors. My database shows up in the standard metrics of https://github.com/free/sql_exporter, but I am unsure of the syntax to execute a simple SELECT db_value FROM db_table. I understand there are ways out there and I've tried them, so I will need assistance. Thank you in advance!
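For reference, a minimal custom collector in the file format used by free/sql_exporter might look like the sketch below; the collector, metric, table, and column names are assumptions, the file name must match one of the collector_files globs, and the collector name must be added to target.collectors alongside mssql_standard:
# custom.collector.yml - a hypothetical collector (names are placeholders)
collector_name: custom_standard
metrics:
  - metric_name: db_value
    type: gauge
    help: 'Value read from db_table.'
    values: [db_value]
    query: |
      SELECT db_value FROM db_table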

Azure Stream Analytics output to Azure Cosmos DB

Stream Analytics job (IoT Hub to Cosmos DB output): the "Start" command is failing with the following error.
[12:49:30 PM] Source 'cosmosiot' had 1 occurrences of kind
'OutputDataConversionError.RequiredColumnMissing' between processing
times '2019-04-17T02:49:30.2736530Z' and
'2019-04-17T02:49:30.2736530Z'.
I followed the instructions and am not sure what is causing this error.
Any suggestions, please? Here is the Stream Analytics query:
SELECT
[bearings temperature],
[windings temperature],
[tower sway],
[position sensor],
[blade strain gauge],
[main shaft strain gauge],
[shroud accelerometer],
[gearbox fluid levels],
[power generation],
[EventProcessedUtcTime],
[EventEnqueuedUtcTime],
[IoTHub].[CorrelationId],
[IoTHub].[ConnectionDeviceId]
INTO
cosmosiot
FROM
TurbineData
If you're specifying fields in your query (i.e. SELECT Name, ModelNumber ...) rather than just using SELECT * ..., the field names are converted to lowercase by default when using Compatibility Level 1.0, which throws off Cosmos DB. In the portal, open your Stream Analytics job, go to 'Compatibility level' under the 'Configure' section, and select v1.1 or higher; that should fix the issue. You can read more about compatibility levels in the Stream Analytics documentation here: https://learn.microsoft.com/en-us/azure/stream-analytics/stream-analytics-compatibility-level
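To illustrate the casing behaviour (the field names here are hypothetical):
-- Under Compatibility Level 1.0, a projection such as:
SELECT DeviceId, Temperature INTO cosmosiot FROM TurbineData
-- lands in Cosmos DB lowercased, e.g. {"deviceid": ..., "temperature": ...},
-- so a column the output requires with its original casing is reported as missing.
-- Under level 1.1 or higher, DeviceId and Temperature keep their casing.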

Progress Error:Connect to database in $DLC (1379)

How do I connect to a database in $DLC in Progress OpenEdge? For details, see the image below.
Thanks,
Purushottam
Databases in $DLC (the directory that Progress was installed in) are templates -- you must make a copy of the template db in some other directory in order to use it. You cannot run databases directly from $DLC.
Usually you use a command such as:
proenv> prodb sports sports
to make a local copy of the default "sports" db.
Or you can just type "prodb" and you will be prompted for the new db name and the template name. The new name can be different from the template name.
You must create a copy of the sports database in another directory (not in the OpenEdge installation directory) using the procopy or prodb command.
For example, in proenv:
procopy Sports2000 D:\spdb
or:
prodb D:\spdb Sports2000
Now, you can easily connect to the database...
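Once copied, a minimal sketch of connecting (the broker port is an assumption, and the exact client command name can vary by platform):
proenv> pro D:\spdb                  # open a single-user session against the copy
proenv> proserve D:\spdb -S 10000    # or start a multi-user broker on an assumed port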

WSO2 Message Broker Error while adding Queue - Invalid Object Name

I have just set up a WSO2 Message Broker 3.0.0 connecting to a SQL Server DB.
The DB for the Carbon MB component has been created successfully as well.
The DB for the Message Broker Data store is created and contains the table MB_QUEUE_MAPPING.
However when adding a Queue via the MB UI I see the following error in the stack trace:
[2015-12-16 15:00:41,472] ERROR {org.wso2.andes.store.rdbms.RDBMSMessageStoreImpl} - Error occurred while retrieving destination queue id for destination queue TestQ
java.sql.SQLException: Invalid object name 'MB_QUEUE_MAPPING'.
at net.sourceforge.jtds.jdbc.SQLDiagnostic.addDiagnostic(SQLDiagnostic.java:372)
at net.sourceforge.jtds.jdbc.TdsCore.tdsErrorToken(TdsCore.java:2988)
at net.sourceforge.jtds.jdbc.TdsCore.nextToken(TdsCore.java:2421)
at net.sourceforge.jtds.jdbc.TdsCore.getMoreResults(TdsCore.java:671)
at net.sourceforge.jtds.jdbc.JtdsStatement.executeSQLQuery(JtdsStatement.java:505)
at net.sourceforge.jtds.jdbc.JtdsPreparedStatement.executeQuery(JtdsPreparedStatement.java:1029)
at org.wso2.andes.store.rdbms.RDBMSMessageStoreImpl.getQueueID(RDBMSMessageStoreImpl.java:1324)
at org.wso2.andes.store.rdbms.RDBMSMessageStoreImpl.getCachedQueueID(RDBMSMessageStoreImpl.java:1298)
at org.wso2.andes.store.rdbms.RDBMSMessageStoreImpl.addQueue(RDBMSMessageStoreImpl.java:1634)
at org.wso2.andes.store.FailureObservingMessageStore.addQueue(FailureObservingMessageStore.java:445)
at org.wso2.andes.kernel.AMQPConstructStore.addQueue(AMQPConstructStore.java:116)
at org.wso2.andes.kernel.AndesContextInformationManager.createQueue(AndesContextInformationManager.java:154)
at org.wso2.andes.kernel.disruptor.inbound.InboundQueueEvent.updateState(InboundQueueEvent.java:151)
at org.wso2.andes.kernel.disruptor.inbound.InboundEventContainer.updateState(InboundEventContainer.java:167)
at org.wso2.andes.kernel.disruptor.inbound.StateEventHandler.onEvent(StateEventHandler.java:67)
at org.wso2.andes.kernel.disruptor.inbound.StateEventHandler.onEvent(StateEventHandler.java:41)
at com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:128)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
The "Add Queue" screen does not go away however the Queue does get added to the MB_QUEUE table just fine in the DB. Both tables MB_QUEUE_MAPPING & MB_QUEUE_COUNTER are blank.
The "List Queues" screen does blank despite a number of Queues in the MB_QUEUE table. Stack trace also shows errors but is not included as its not relevant to the error above.
I can create a Topic just fine however.
I want to know why MB would say the table MB_QUEUE_MAPPING is an Invalid object name when the table clearly exists ?
I suspect the way you have configured the MySQL database is incorrect, so you can try one of the two scenarios below to verify this issue:
1) Start the server for the first time with the -Dsetup parameter (see the example command after this answer), or
2) Refer to the "Configuring MySQL" documentation (https://docs.wso2.com/display/MB300/Configuring+MySQL) and follow the step-by-step instructions given in order.
I tried the second scenario and did not get any exception when adding a queue. The document I mentioned will have to be updated as below; you can see this command in step 3:
mysql -u <db_user_name> -p -D<database_name> < '<WSO2MB_HOME>/dbscripts/mb-store/mysql-mb.sql';
db_user_name - the username of the DB.
database_name - the database name that you created in step 1.
WSO2MB_HOME - the home directory path for MB.
Hope this helps you resolve the issue.
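For the first scenario, the -Dsetup flag is passed to the standard startup script, for example:
<WSO2MB_HOME>/bin/wso2server.sh -Dsetup
(On Windows, the equivalent is <WSO2MB_HOME>\bin\wso2server.bat -Dsetup.)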
It seems the user connecting to the MSSQL database does not have the correct permissions, most probably SELECT. The reason I say this is that when you add a queue, it does get added, which means the user has INSERT permission. Once the queue is added, the page redirects to the Queue List page, and the user must have SELECT permission to retrieve the queue list. (Topics are not added to the database; they are kept in the registry.) You can verify which user connects to MSSQL from the configuration in wso2mb-3.0.0/repository/conf/datasources/master-datasources.xml, like below:
<datasource>
   <name>WSO2_MB_STORE_DB</name>
   <jndiConfig>
       <name>WSO2MBStoreDB</name>
   </jndiConfig>
   <definition type="RDBMS">
         <configuration>
                    <url>jdbc:jtds:sqlserver://localhost:1433/wso2_mb</url>
                    <username>sa</username>
                    <password>sa</password>
                    <driverClassName>net.sourceforge.jtds.jdbc.Driver</driverClassName>
                    <maxActive>200</maxActive>
                    <maxWait>60000</maxWait>
                    <minIdle>5</minIdle>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                    <defaultAutoCommit>false</defaultAutoCommit>
         </configuration>
     </definition>
</datasource>
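If missing permissions are indeed the cause, a minimal T-SQL sketch of granting them (the database name follows the URL above; the user name 'wso2mb_user' is an assumption for a dedicated, non-sa login):
USE wso2_mb;
-- grant the MB store user read/write access to the dbo schema tables
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO wso2mb_user;
GO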

Catalog 'myDB' not found in database 'db'

I was trying to run Benerator to populate a database (the shop demo, which fills database schemas based on a setup file). While running it, I am getting the error below.
15:25:50,232 INFO (main) [DefaultDBSystem] Fetching table details and ordering tables by dependency
15:25:50,554 ERROR (main) [DescriptorRunner] Error in Benerator execution
org.databene.commons.ConfigurationError: Catalog 'myDB' not found in database 'db'
at org.databene.platform.db.DBSystem.findTableInConfiguredCatalogAndSchema(DBSystem.java:819)
at org.databene.platform.db.DBSystem.getTable(DBSystem.java:791)
at org.databene.platform.db.DBSystem.getWriteColumnInfos(DBSystem.java:744)
at org.databene.platform.db.DBSystem.persistOrUpdate(DBSystem.java:831)
at org.databene.platform.db.DBSystem.store(DBSystem.java:360)
at org.databene.benerator.storage.StorageSystemInserter.startProductConsumption(StorageSystemInserter.java:53)
at org.databene.benerator.consumer.AbstractConsumer.startConsuming(AbstractConsumer.java:47)
at org.databene.benerator.consumer.ConsumerProxy.startConsuming(ConsumerProxy.java:62)
at org.databene.benerator.engine.statement.ConsumptionStatement.execute(ConsumptionStatement.java:53)
at org.databene.benerator.engine.statement.GenerateAndConsumeTask.execute(GenerateAndConsumeTask.java:159)
at org.databene.task.TaskProxy.execute(TaskProxy.java:59)
at org.databene.task.StateTrackingTaskProxy.execute(StateTrackingTaskProxy.java:52)
at org.databene.task.TaskExecutor.runWithoutPage(TaskExecutor.java:136)
at org.databene.task.TaskExecutor.runPage(TaskExecutor.java:126)
at org.databene.task.TaskExecutor.run(TaskExecutor.java:101)
at org.databene.task.TaskExecutor.run(TaskExecutor.java:77)
at org.databene.task.TaskExecutor.execute(TaskExecutor.java:71)
at org.databene.benerator.engine.statement.GenerateOrIterateStatement.executeTask(GenerateOrIterateStatement.java:156)
at org.databene.benerator.engine.statement.GenerateOrIterateStatement.execute(GenerateOrIterateStatement.java:99)
at org.databene.benerator.engine.statement.LazyStatement.execute(LazyStatement.java:58)
at org.databene.benerator.engine.statement.StatementProxy.execute(StatementProxy.java:46)
at org.databene.benerator.engine.statement.TimedGeneratorStatement.execute(TimedGeneratorStatement.java:70)
at org.databene.benerator.engine.statement.SequentialStatement.executeSubStatements(SequentialStatement.java:52)
at org.databene.benerator.engine.statement.SequentialStatement.execute(SequentialStatement.java:47)
at org.databene.benerator.engine.BeneratorRootStatement.execute(BeneratorRootStatement.java:63)
at org.databene.benerator.engine.DescriptorRunner.execute(DescriptorRunner.java:127)
at org.databene.benerator.engine.DescriptorRunner.runWithoutShutdownHook(DescriptorRunner.java:109)
at org.databene.benerator.engine.DescriptorRunner.run(DescriptorRunner.java:102)
at org.databene.benerator.main.Benerator.runFile(Benerator.java:94)
at org.databene.benerator.main.Benerator.runFromCommandLine(Benerator.java:75)
at org.databene.benerator.main.Benerator.main(Benerator.java:68)
15:25:50,611 INFO (main) [CachingDBImporter] Exporting Database meta data of ___temp to cache file
15:25:50,635 INFO (main) [CONFIG] Max. committed heap size: 15 MB
Inside my 'db' folder, I have the file user.ben.xml and it starts with,
<database id="db" url="jdbc:oracle:thin:#localhost:1521:mirev" driver="oracle.jdbc.driver.OracleDriver" user="myDB" tableFilter="DB_.*" />
I am new to Benerator. Could anyone please tell me why this error is thrown?
By default, Oracle DB does not support catalogs. Make sure your DB has a catalog enabled and defined; if not, remove the catalog from your configuration.
I tried the same today...
It seems the Oracle user/schema (= catalog in JDBC terms) needs to be alphabetically first for the example to work. I created a user 'A1000' to make it work.
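For reference, a sketch of the connection element with the usual '@' restored in the thin URL and the schema made explicit, so Benerator does not have to infer it from the user name (the schema attribute is an assumption based on Benerator's descriptor format; adjust or drop it per the answers above):
<database id="db" url="jdbc:oracle:thin:@localhost:1521:mirev"
          driver="oracle.jdbc.driver.OracleDriver" user="myDB"
          schema="MYDB" tableFilter="DB_.*" />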