I am trying out Opserver to monitor SQL Server instances. I have no issue configuring standalone instances, but when I tried to configure SQL Server clusters using the method documented here: http://www.patrickhyatt.com/2013/10/25/setting-up-stackexchanges-opserver.html
I was confused about where to put the SQL Server cluster named instance and the Windows node servers.
In the JSON code below:
{
"defaultConnectionString": "Data Source=$ServerName$;Initial Catalog=master;Integrated Security=SSPI;",
"clusters": [
{
"name": "SDCluster01",
"nodes": [
{ "name": "SDCluster01\\SDCluster01_01" },
{ "name": "SDCluster02\\SDCluster01_02" },
]
},
],
I assume SDCluster01 is the instance DNS name and SDCluster01_01 and SDCluster01_02 are the Windows node server names.
But what if I have a named instance (clustered) like SDCluster01\instance1?
I tried to configure it like this:
{
"defaultConnectionString": "Data Source=$ServerName$;Initial Catalog=master;Integrated Security=SSPI;",
"clusters": [
{
"name": "SDCluster01\instance1",
"nodes": [
{ "name": "SDCluster01\\SDCluster01_01" },
{ "name": "SDCluster02\\SDCluster01_02" },
]
},
],
But after deploying to Opserver it gave me this error message:
[NullReferenceException: Object reference not set to an instance of an object.]
Any ideas on how to configure the JSON file correctly for SQL Server clusters?
I'm using Azure Data Factory,
with SQL Server as the source and Postgres as the target. The goal is to copy 30 tables from SQL Server, with transformation, to 30 tables in Postgres. The hard part is that I have 80 source and 80 target databases, all with exactly the same layout but different data. It's one database per customer, so 80 customers, each with their own database.
Linked services don't allow parameters for Postgres.
I have one dataset per source and target, using parameters for the schema and table names.
I have one pipeline per table, with a SQL Server source and a Postgres target.
I can parameterize the SQL Server source in the linked service, but not the Postgres target.
The problem is: how can I copy 80 source databases to 80 target databases without adding 80 target linked services and 80 target datasets? Plus, I'd have to repeat all 30 pipelines per target database.
BTW, I'm only familiar with the UI; however, anything else that does the job is acceptable.
Any help would be appreciated.
There is a simple way to implement this. Essentially, you need a single linked service that reads the connection string out of Key Vault. You can then parameterize the source and target as Key Vault secret names and easily switch between data sources by just changing the secret name. This relies on all connection-related information being contained within a single connection string.
I will provide a simple overview for PostgreSQL, but the same logic applies to SQL Server as the source.
Implement a Linked Service for Azure Key Vault.
Add a linked service for Azure PostgreSQL that uses Key Vault to store the connection string in the format: Server=your_server_name.postgres.database.azure.com;Database=your_database_name;Port=5432;UID=your_user_name;Password=your_password;SSL Mode=Require;Keepalive=600; (I advise using the server name as the secret name).
Pass this parameter, which is essentially the correct secret name, in the pipeline (you can also implement a loop that accepts an array of secret names and processes n elements at a time; see the ForEach sketch after the pipeline definition below).
Linked Service Definition for KeyVault:
{
"name": "your_keyvault_name",
"properties": {
"description": "KeyVault",
"annotations": [],
"type": "AzureKeyVault",
"typeProperties": {
"baseUrl": "https://your_keyvault_name.vault.azure.net/"
}
}
}
Linked Service Definition for Postgresql:
{ "name": "generic_postgres_service".
"properties": {
"type": "AzurePostgreSql",
"parameters": {
"pg_database": {
"type": "string",
"defaultValue": "your_database_name"
}
},
"annotations": [],
"typeProperties": {
"connectionString": {
"type": "AzureKeyVaultSecret",
"store": {
"referenceName": "KeyVaultName",
"type": "LinkedServiceReference"
},
"secretName": "#linkedService().secret_name_for_server"
}
},
"connectVia": {
"referenceName": "AutoResolveIntegrationRuntime",
"type": "IntegrationRuntimeReference"
}
}
}
Dataset Definition for Postgresql:
{
"name": "your_postgresql_dataset",
"properties": {
"linkedServiceName": {
"referenceName": "generic_postgres_service",
"type": "LinkedServiceReference",
"parameters": {
"secret_name_for_server": {
"value": "#dataset().secret_name_for_server",
"type": "Expression"
}
}
},
"parameters": {
"secret_name_for_server": {
"type": "string"
}
},
"annotations": [],
"type": "AzurePostgreSqlTable",
"schema": [],
"typeProperties": {
"schema": {
"value": "#dataset().schema_name",
"type": "Expression"
},
"table": {
"value": "#dataset().table_name",
"type": "Expression"
}
}
}
}
Pipeline Definition for Postgresql:
{
"name": "your_postgres_pipeline",
"properties": {
"activities": [
{
"name": "Copy_Activity_1",
"type": "Copy",
"dependsOn": [],
"policy": {
"timeout": "7.00:00:00",
"retry": 0,
"retryIntervalInSeconds": 30,
"secureOutput": false,
"secureInput": false
},
...
... (rest of the copy activity definition skipped)
...
"inputs": [
{
"referenceName": "your_postgresql_dataset",
"type": "DatasetReference",
"parameters": {
"secret_name_for_server": "secret_name"
}
}
]
}
],
"annotations": []
}
}
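If you go with the loop mentioned in the steps above, here is a minimal sketch of how the Copy activity could sit inside a ForEach activity that iterates over an array of secret names. This is not part of the original setup: the pipeline array parameter secret_names and the activity name ForEach_Server are made-up placeholders, and the rest of the Copy activity (source, sink, outputs) stays as in the pipeline definition above.
{
"name": "ForEach_Server",
"type": "ForEach",
"dependsOn": [],
"typeProperties": {
"items": {
"value": "@pipeline().parameters.secret_names",
"type": "Expression"
},
"isSequential": false,
"activities": [
{
"name": "Copy_Activity_1",
"type": "Copy",
"inputs": [
{
"referenceName": "your_postgresql_dataset",
"type": "DatasetReference",
"parameters": {
"secret_name_for_server": {
"value": "@item()",
"type": "Expression"
}
}
}
]
}
]
}
}
With isSequential set to false, the batchCount property controls how many servers are copied in parallel.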
I'm evaluating a Quarkus application on App Engine.
The application needs a Postgres DB on Cloud SQL, where I named the instance 'quarkus'.
But I'm stuck on this access error:
Not authorized to access instance: addlogic-foodiefnf-1:quarkus
The service account addlogic-foodiefnf-1@appspot.gserviceaccount.com has these roles:
Cloud SQL Admin
Cloud SQL Service Agent
Editor
What am I missing?
{
"protoPayload": {
"#type": "type.googleapis.com/google.cloud.audit.AuditLog",
"status": {
"code": 7,
"message": "Not authorized to access instance: addlogic-foodiefnf-1:quarkus "
},
"authenticationInfo": {
"principalEmail": "addlogic-foodiefnf-1#appspot.gserviceaccount.com",
"serviceAccountDelegationInfo": [
{
"firstPartyPrincipal": {
"principalEmail": "app-engine-appserver#prod.google.com"
}
}
],
"principalSubject": "serviceAccount:addlogic-foodiefnf-1#appspot.gserviceaccount.com"
},
"requestMetadata": {
"callerIp": "107.178.230.54",
"requestAttributes": {
"time": "2021-09-27T06:18:33.283490Z",
"auth": {}
},
"destinationAttributes": {}
},
"serviceName": "cloudsql.googleapis.com",
"methodName": "cloudsql.instances.connect",
"authorizationInfo": [
{
"resource": "instances/quarkus ",
"permission": "cloudsql.instances.connect",
"granted": true,
"resourceAttributes": {
"service": "sqladmin.googleapis.com",
"name": "projects/addlogic-foodiefnf-1/instances/quarkus ",
"type": "sqladmin.googleapis.com/Instance"
}
}
],
"resourceName": "instances/quarkus ",
"request": {
"#type": "type.googleapis.com/google.cloud.sql.v1beta4.SqlInstancesCreateEphemeralCertRequest",
"instance": "europe-west3~quarkus ",
"project": "addlogic-foodiefnf-1",
"body": {}
},
"response": {}
},
"insertId": "-il5zyxe1b1rn",
"resource": {
"type": "cloudsql_database",
"labels": {
"project_id": "addlogic-foodiefnf-1",
"database_id": "addlogic-foodiefnf-1:quarkus ",
"region": "europe-west3"
}
},
"timestamp": "2021-09-27T06:18:33.270158Z",
"severity": "ERROR",
"logName": "projects/addlogic-foodiefnf-1/logs/cloudaudit.googleapis.com%2Factivity",
"receiveTimestamp": "2021-09-27T06:18:33.799357464Z"
}
Background story:
I've set up my Quarkus application according to
https://quarkus.io/guides/deploying-to-google-cloud
but the class 'PostgreSQL10Dialect' failed to load:
See Why is class PostgreSQL10Dialect not found on Quarkus in Google App Engine java11?
In this post I'd like to learn how to debug the access error from Google App Engine to Cloud SQL.
The Cloud SQL instance is set up with a public IP. Is there any more setup needed on the Cloud SQL instance?
As said above, the App Engine standard service account has the role 'Cloud SQL Admin' as required by
https://cloud.google.com/sql/docs/postgres/connect-app-engine-standard#java
Any help appreciated.
I understand that using 'quarkus.datasource.db-kind=postgresql' triggers Hibernate auto-configuration, and therefore the connection cannot be established.
I have to use quarkus.datasource.db-kind=other to prevent the Quarkus auto-configuration and the access problems.
(While this solves the question here, my issue at Why is class PostgreSQL10Dialect not found on Quarkus in Google App Engine java11? is still open.)
I'm working on a project where I'm converting an application's database from MSSqlServer to Oracle. I'm using Serilog for the logging, and the Serilog.Sinks.Oracle project for further help.
This is the program.cs code for my MSSQLServer implementation:
var configuration = new ConfigurationBuilder()
.AddJsonFile("appsettings.json")
.Build();
Log.Logger = new LoggerConfiguration()
.ReadFrom.Configuration(configuration)
.CreateLogger();
This is a snippet of the appsettings.json code, also for the MSSqlServer implementation (the part not shown is the configuration of the custom columns):
{
"AllowedHosts": "*",
"Serilog": {
"MinimumLevel": {
"Default": "Information",
"Override": {
"Microsoft": "Warning",
"System": "Warning"
}
},
"WriteTo": [
{
"Name": "Console"
},
{
"Name": "MSSqlServer",
"Args": {
"connectionString": "connString",
"schemaName": "dbo",
"tableName": "TestLog5",
"autoCreateSqlTable": false,
How would I go about recycling this code to change it to work with Oracle? The Serilog.Sinks.Oracle project gives instructions on how to implement it from Program.cs, but I really need the configuration to come from appsettings.json.
Basically, this possibility does not seem to exist.
You can create an extension method like this:
using Serilog;
using Serilog.Configuration;
// plus the using directives for the Serilog.Sinks.Oracle types (BatchLoggerConfiguration)

// This class must live in the assembly you list under "Using" in appsettings.json.
public static class OracleLoggerConfigurationExtensions
{
    public static LoggerConfiguration Oracle(this LoggerSinkConfiguration loggerConfiguration, string connectionString)
    {
        // Build the Oracle sink and register it with the logger configuration.
        var sink = new BatchLoggerConfiguration()
            .WithSettings(connectionString)
            .UseBurstBatch()
            .CreateSink();
        return loggerConfiguration.Sink(sink);
    }
}
The arguments of your extension method have to correspond to the JSON properties under the Args property, and the Name in the JSON should correspond to your extension method's name.
Sample configuration file:
"Serilog": {
"Using": [
"Serilog.Sinks.File",
"<your assembly name>"
],
"WriteTo": [
{
"Name": "File",
"Args": {
"path": "..output-.log",
"rollingInterval": "Day",
"fileSizeLimitBytes": "10485760"
}
},
{
"Name": "Oracle",
"Args": {
"connectionString": "Data Source=xxxx"
}
}
]
}
I have an Azure Logic App with a SQL Server connector through an on-premises data gateway; the connection is made using SQL Server Authentication. It works fine from the Logic App Designer.
No details about the connection are stored in the ARM template of the SQL Server connection, so if I want to automate the deployment of the Logic App, I need to add some values to the ARM template. The documentation for this is really poor, but I was able to write this template:
{
"type": "MICROSOFT.WEB/CONNECTIONS",
"apiVersion": "2018-07-01-preview",
"name": "[parameters('sql_2_Connection_Name')]",
"location": "[parameters('logicAppLocation')]",
"properties": {
"api": {
"id": "[concat(subscription().id, '/providers/Microsoft.Web/locations/', parameters('logicAppLocation'), '/managedApis/', 'sql')]"
},
"displayName": "[parameters('sql_2_Connection_DisplayName')]",
"parameterValues": {
"server": "[parameters('sql_2_server')]",
"database": "[parameters('sql_2_database')]",
"username": "[parameters('sql_2_username')]",
"password": "[parameters('sql_2_password')]",
"authType": "[parameters('sql_2_authtype')]",
"sqlConnectionString": "[parameters('sql_2_sqlConnectionString')]",
"gateway": {
"id": "[concat('subscriptions/', subscription().subscriptionId, '/resourceGroups/', parameters('dataGatewayResourceGroup'), '/providers/Microsoft.Web/connectionGateways/', parameters('dataGatewayName'))]"
}
}
}
}
But I can't find the correct value for the authType property corresponding to "SQL Server Authentication". The values windows and basic are accepted, but I couldn't find the one for "SQL Server Authentication".
Can someone please tell me what's the value for the authType property corresponding to "SQL Server Authentication"?
Use the following properties JSON inside your Microsoft.Web/connections resource:
"properties": {
"api": {
"id": "/subscriptions/<YourSubscriptionIDHere>/providers/Microsoft.Web/locations/australiaeast/managedApis/sql"
},
"parameterValueSet": {
"name": "sqlAuthentication",
"values": {
"server": {
"value": "SampleServer"
},
"database": {
"value": "WideWorldImporters"
},
"username": {
"value": "sampleuser"
},
"password": {
"value": "somepasssword"
},
"gateway": {
"value": {
"id": "/subscriptions/<subscriptionIDGoesHere>/resourceGroups/az-integration-study-rg/providers/Microsoft.Web/connectionGateways/<NameofTheGatewayHere>"
}
}
}
}
},
"location": "australiaeast"
That should do the trick
How are you?
I am deploying a Go backend API to ECS using Docker.
And I am using CircleCI for it.
I need to set database config environment variables to run the backend API, but I don't know how to set that info in CircleCI.
I am initializing the AWS resources using Terraform. Do I need to set the DB config environment variables in Terraform, or can I set them in the CircleCI config.yml?
Thanks
You can define environment variables in the task definition so they will be available to your Docker container in ECS.
resource "aws_ecs_task_definition" "backend-app" {
family = "backend"
container_definitions = <<EOF
[
{
"portMappings": [
{
"hostPort": 80,
"protocol": "tcp",
"containerPort": 3000
}
],
"environment":
[
{
"name": "NODE_ENV",
"value":"production"
},
{
"name": "DB_HOST",
"value": "HOST_ADDRESS"
},
{
"name": "DB_PASS",
"value": "DB_PASSWORD"
}
],
"cpu": 1000,
"memory": 1000,
"image": "***.dkr.ecr.us-west-2.amazonaws.com/backend:latest",
"name": "backend",
}
]
EOF
}
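If you would rather not keep the database password in plain text in the task definition, a common alternative (not part of the answer above, shown here only as a sketch) is the container definition's secrets block, which makes ECS inject the value from AWS Secrets Manager or SSM Parameter Store at runtime. The ARN below is a placeholder, and the task execution role needs permission to read the secret:
"secrets": [
{
"name": "DB_PASS",
"valueFrom": "arn:aws:secretsmanager:us-west-2:123456789012:secret:backend-db-password"
}
]
This fragment goes next to the environment array inside the container definition JSON above.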