I'm working on a project converting an application's database from Microsoft SQL Server to Oracle. I'm using Serilog for logging, together with the Serilog.Sinks.Oracle package.
This is the Program.cs code for my SQL Server implementation:
var configuration = new ConfigurationBuilder()
.AddJsonFile("appsettings.json")
.Build();
Log.Logger = new LoggerConfiguration()
.ReadFrom.Configuration(configuration)
.CreateLogger();
This is a snippet of the appsettings.json, also for the SQL Server implementation (the part not shown configures the custom columns):
{
"AllowedHosts": "*",
"Serilog": {
"MinimumLevel": {
"Default": "Information",
"Override": {
"Microsoft": "Warning",
"System": "Warning"
}
},
"WriteTo": [
{
"Name": "Console"
},
{
"Name": "MSSqlServer",
"Args": {
"connectionString": "connString",
"schemaName": "dbo",
"tableName": "TestLog5",
"autoCreateSqlTable": false,
How would I go about recycling this code to make it work with Oracle? The Serilog.Sinks.Oracle project gives instructions on how to configure it from Program.cs, but I really need the configuration to come from appsettings.json.
Out of the box, this possibility doesn't exist.
You can create an extension method like this:
using Serilog;
using Serilog.Configuration;
// ...plus the namespace that contains BatchLoggerConfiguration (Serilog.Sinks.Oracle package).

// The method must live in a static class so Serilog.Settings.Configuration can discover it.
public static class OracleLoggerConfigurationExtensions
{
    public static LoggerConfiguration Oracle(this LoggerSinkConfiguration loggerConfiguration, string connectionString)
    {
        // Build the sink with the fluent API from Serilog.Sinks.Oracle.
        var sink = new BatchLoggerConfiguration()
            .WithSettings(connectionString)
            .UseBurstBatch()
            .CreateSink();
        return loggerConfiguration.Sink(sink);
    }
}
The arguments of your extension method have to correspond to the JSON properties under the Args property, and Name in the JSON has to correspond to your extension method's name. Your Program.cs doesn't need to change: ReadFrom.Configuration finds the method through the assemblies listed under Using.
Sample configuration file:
"Serilog": {
"Using": [
"Serilog.Sinks.File",
"<your assembly name>"
],
"WriteTo": [
{
"Name": "File",
"Args": {
"path": "..output-.log",
"rollingInterval": "Day",
"fileSizeLimitBytes": "10485760"
}
},
{
"Name": "Oracle",
"Args": {
"connectionString": "Data Source=xxxx"
}
}
]
}
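If you need the JSON to drive more than the connection string, you can add parameters to the extension method and mirror them under Args. A minimal sketch of that idea, going in the same static class as above; the restrictedToMinimumLevel parameter is my own addition (it uses Serilog's standard Sink() overload), not something the Oracle sink requires:

using Serilog.Core;   // LevelAlias
using Serilog.Events; // LogEventLevel

// Extra parameters map one-to-one to the properties under "Args";
// Serilog.Settings.Configuration converts strings like "Warning" to LogEventLevel.
public static LoggerConfiguration Oracle(
    this LoggerSinkConfiguration loggerConfiguration,
    string connectionString,
    LogEventLevel restrictedToMinimumLevel = LevelAlias.Minimum)
{
    var sink = new BatchLoggerConfiguration()
        .WithSettings(connectionString)
        .UseBurstBatch()
        .CreateSink();

    // This Sink() overload drops events below the given level for this sink only.
    return loggerConfiguration.Sink(sink, restrictedToMinimumLevel);
}

The matching JSON would then be "Args": { "connectionString": "Data Source=xxxx", "restrictedToMinimumLevel": "Warning" }.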
I'm using Azure Data Factory, with SQL Server as the source and Postgres as the target. The goal is to copy 30 tables from SQL Server, with transformation, to 30 tables in Postgres. The hard part is that I have 80 source and 80 target databases, all with exactly the same layout but different data. It's one database per customer, so 80 customers, each with their own database.
Linked Services don't allow parameters for Postgres.
I have one dataset each for source and target, using parameters for schema and table names.
I have one pipeline per table, with a SQL Server source and a Postgres target.
I can parameterize the SQL Server source in the linked service, but not Postgres.
The problem is: how can I copy 80 source databases to 80 target databases without adding 80 target linked services and 80 target datasets? Plus I'd have to repeat all 30 pipelines per target database.
BTW, I'm only familiar with the UI; however, anything else that does the job is acceptable.
Any help would be appreciated.
There is a simple way to implement this. Essentially you need a single Linked Service which reads the connection string out of Key Vault. You can then parameterize source and target as Key Vault secret names, and easily switch between data sources by just changing the secret name. This relies on all connection-related information being enclosed within a single connection string.
I will provide a simple overview for PostgreSQL, but the same logic applies to SQL Server as the source.
1. Implement a Linked Service for Azure Key Vault.
2. Add a Linked Service for Azure PostgreSQL that uses Key Vault to store the connection string, in the format Server=your_server_name.postgres.database.azure.com;Database=your_database_name;Port=5432;UID=your_user_name;Password=your_password;SSL Mode=Require;Keepalive=600; (I advise using the server name as the secret name).
3. Pass this parameter, which is essentially the correct secret name, in the pipeline (you can also implement a loop that accepts an array of secret names and fans out n elements at a time into separate pipeline runs; see the ForEach sketch after the pipeline definition below).
Linked Service Definition for KeyVault:
{
"name": "your_keyvault_name",
"properties": {
"description": "KeyVault",
"annotations": [],
"type": "AzureKeyVault",
"typeProperties": {
"baseUrl": "https://your_keyvault_name.vault.azure.net/"
}
}
}
Linked Service Definition for Postgresql:
{ "name": "generic_postgres_service".
"properties": {
"type": "AzurePostgreSql",
"parameters": {
"pg_database": {
"type": "string",
"defaultValue": "your_database_name"
}
},
"annotations": [],
"typeProperties": {
"connectionString": {
"type": "AzureKeyVaultSecret",
"store": {
"referenceName": "KeyVaultName",
"type": "LinkedServiceReference"
},
"secretName": "#linkedService().secret_name_for_server"
}
},
"connectVia": {
"referenceName": "AutoResolveIntegrationRuntime",
"type": "IntegrationRuntimeReference"
}
}
}
Dataset Definition for Postgresql:
{
"name": "your_postgresql_dataset",
"properties": {
"linkedServiceName": {
"referenceName": "generic_postgres_service",
"type": "LinkedServiceReference",
"parameters": {
"secret_name_for_server": {
"value": "#dataset().secret_name_for_server",
"type": "Expression"
}
}
},
"parameters": {
"secret_name_for_server": {
"type": "string"
}
},
"annotations": [],
"type": "AzurePostgreSqlTable",
"schema": [],
"typeProperties": {
"schema": {
"value": "#dataset().schema_name",
"type": "Expression"
},
"table": {
"value": "#dataset().table_name",
"type": "Expression"
}
}
}
}
Pipeline Definition for Postgresql:
{
"name": "your_postgres_pipeline",
"properties": {
"activities": [
{
"name": "Copy_Activity_1",
"type": "Copy",
"dependsOn": [],
"policy": {
"timeout": "7.00:00:00",
"retry": 0,
"retryIntervalInSeconds": 30,
"secureOutput": false,
"secureInput": false
},
... (I skipped the rest of the copy activity definition) ...
"inputs": [
{
"referenceName": "your_postgresql_dataset",
"type": "DatasetReference",
"parameters": {
"secret_name_for_server": "secret_name"
}
}
]
}
],
"annotations": []
}
}
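To fan this out across the 80 customer databases, the copy pipeline can be wrapped in a ForEach activity that iterates over an array of secret names, as mentioned in step 3. A rough sketch, with two assumptions: the customer_secrets pipeline parameter is hypothetical, and your_postgres_pipeline is assumed to expose secret_name_for_server as a pipeline parameter instead of hardcoding it:

{
    "name": "ForEach_Customer",
    "type": "ForEach",
    "typeProperties": {
        "items": {
            "value": "@pipeline().parameters.customer_secrets",
            "type": "Expression"
        },
        "activities": [
            {
                "name": "Copy_One_Customer",
                "type": "ExecutePipeline",
                "typeProperties": {
                    "pipeline": {
                        "referenceName": "your_postgres_pipeline",
                        "type": "PipelineReference"
                    },
                    "parameters": {
                        "secret_name_for_server": "@item()"
                    },
                    "waitOnCompletion": true
                }
            }
        ]
    }
}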
I have an Azure Logic App with a SQL Server connector through an On-Premises Data Gateway; the connection is made using SQL Server Authentication. It works fine from the Logic App Designer.
No details about the connection are stored in the ARM template of the SQL Server connection, so if I want to automate the deployment of the Logic App, I need to add some values to the ARM template. The documentation for this is really poor, but I was able to write this template:
{
"type": "MICROSOFT.WEB/CONNECTIONS",
"apiVersion": "2018-07-01-preview",
"name": "[parameters('sql_2_Connection_Name')]",
"location": "[parameters('logicAppLocation')]",
"properties": {
"api": {
"id": "[concat(subscription().id, '/providers/Microsoft.Web/locations/', parameters('logicAppLocation'), '/managedApis/', 'sql')]"
},
"displayName": "[parameters('sql_2_Connection_DisplayName')]",
"parameterValues": {
"server": "[parameters('sql_2_server')]",
"database": "[parameters('sql_2_database')]",
"username": "[parameters('sql_2_username')]",
"password": "[parameters('sql_2_password')]",
"authType": "[parameters('sql_2_authtype')]",
"sqlConnectionString": "[parameters('sql_2_sqlConnectionString')]",
"gateway": {
"id": "[concat('subscriptions/', subscription().subscriptionId, '/resourceGroups/', parameters('dataGatewayResourceGroup'), '/providers/Microsoft.Web/connectionGateways/', parameters('dataGatewayName'))]"
}
}
}
}
But I can't find the correct value for the authType property corresponding to "SQL Server Authentication". The values windows and basic are accepted, but I couldn't find the one for "SQL Server Authentication".
Can someone please tell me what the value of the authType property is for "SQL Server Authentication"?
Use the following properties JSON inside your web API connection resource:
"properties": {
"api": {
"id": "/subscriptions/<YourSubscriptionIDHere>/providers/Microsoft.Web/locations/australiaeast/managedApis/sql"
},
"parameterValueSet": {
"name": "sqlAuthentication",
"values": {
"server": {
"value": "SampleServer"
},
"database": {
"value": "WideWorldImporters"
},
"username": {
"value": "sampleuser"
},
"password": {
"value": "somepasssword"
},
"gateway": {
"value": {
"id": "/subscriptions/<subscriptionIDGoesHere>/resourceGroups/az-integration-study-rg/providers/Microsoft.Web/connectionGateways/<NameofTheGatewayHere>"
}
}
}
}
},
"location": "australiaeast"
That should do the trick.
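If you want to keep the template reusable rather than hardcoding values, the same parameterValueSet can be expressed with ARM template parameters; a sketch reusing the parameter names from the question, not a verbatim export:

"parameterValueSet": {
    "name": "sqlAuthentication",
    "values": {
        "server": { "value": "[parameters('sql_2_server')]" },
        "database": { "value": "[parameters('sql_2_database')]" },
        "username": { "value": "[parameters('sql_2_username')]" },
        "password": { "value": "[parameters('sql_2_password')]" },
        "gateway": {
            "value": {
                "id": "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', parameters('dataGatewayResourceGroup'), '/providers/Microsoft.Web/connectionGateways/', parameters('dataGatewayName'))]"
            }
        }
    }
}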
I'm working with the Couchbase Lite .NET SDK, and I got an example from the URL below.
My configuration file is like below:
{
"log": ["HTTP+"],
"adminInterface": "0.0.0.0:4985",
"interface": "0.0.0.0:4984",
"databases": {
"db": {
"server": "walrus:data",
"bucket": "todo",
"users": {
"GUEST": {"disabled": false, "admin_channels": ["*"] }
}
}
}
}
When I run the WPF app, I get an error (screenshot omitted).
Please help me; I'm not sure how to set up Couchbase Sync Gateway.
I fixed the issue.
I added a shadow property to the configuration JSON file.
You can read more information in these links:
https://groups.google.com/forum/#!topic/mobile-couchbase/NWd8xqPOjsc
https://github.com/couchbase/sync_gateway/wiki/Bucket-Shadowing
{
"interface": ":4984",
"adminInterface": ":4985",
"log": [ "*" ],
"databases": {
"sync_gateway": {
"server": "walrus:",
"bucket": "sync_gateway",
"users": {
"GUEST": {
"disabled": false,
"admin_channels": [ "*" ]
},
"user": {
"admin_channels": [ "*" ],
"password": "user"
}
},
"sync": `function(doc){ "channel(doc.channels); }`,
,
"shadow": {
"server": "http://couchbase-dev.thisisdmg.com:8091",
"bucket": "sales_agent"
}
}
}
}
I am trying out Opserver to monitor SQL Server instances. I had no issues configuring standalone instances, but I got stuck when I tried to configure SQL Server clusters using the method documented here: http://www.patrickhyatt.com/2013/10/25/setting-up-stackexchanges-opserver.html
I am confused about where to put the SQL Server cluster's named instance and the Windows node servers in the JSON code below:
{
"defaultConnectionString": "Data Source=$ServerName$;Initial Catalog=master;Integrated Security=SSPI;",
"clusters": [
{
"name": "SDCluster01",
"nodes": [
{ "name": "SDCluster01\\SDCluster01_01" },
{ "name": "SDCluster02\\SDCluster01_02" },
]
},
],
I assume SDCluster01 is the instance DNS name and SDCluster01_01 and SDCluster01_02 are the Windows node server names.
But what if I have a clustered named instance like SDCluster01\instance1?
I tried to configure it like this:
{
"defaultConnectionString": "Data Source=$ServerName$;Initial Catalog=master;Integrated Security=SSPI;",
"clusters": [
{
"name": "SDCluster01\instance1",
"nodes": [
{ "name": "SDCluster01\\SDCluster01_01" },
{ "name": "SDCluster02\\SDCluster01_02" },
]
},
],
But after deploying to Opserver it gave me this error message:
[NullReferenceException: Object reference not set to an instance of an object.]
Any ideas on how to configure the JSON file correctly for SQL Server clusters?
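One side note on the second snippet: in JSON a single backslash starts an escape sequence, so "SDCluster01\instance1" is not valid JSON as written; a literal backslash has to be doubled, the same way the node names do it:

"name": "SDCluster01\\instance1",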
I have successfully installed TileStache. I have also successfully added a layer using a shapefile from here. But when I try to use my own shapefile, the server always returns an empty FeatureCollection. I have tried adding ST_Transform() to the query, but it still returns empty features.
My config file:
{
"cache":
{
"name": "Test",
"path": "/tmp/stache",
"umask": "0000"
},
"layers":
{
"osm-processed_p1": {
"allowed origin": "*",
"provider": {
"class": "TileStache.Goodies.VecTiles:Provider",
"kwargs": {
"dbinfo": {
"host": "127.0.0.1",
"user": "postgres",
"database": "ts_data"
},
"queries": [
"SELECT gid, geom AS __geometry__ FROM osm.kalimantan"
]
}
}
}
}
}
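The ST_Transform variant of the query looked something like this (reprojecting to 900913, the spherical Mercator SRID that the VecTiles provider serves tiles in):

"queries": [
    "SELECT gid, ST_Transform(geom, 900913) AS __geometry__ FROM osm.kalimantan"
]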
How can I fix this?