How to run Logic App Standard in different environments without editing the parameters.json file

We have parameterized our connections.json file by adding configurations in the parameters.json file.
So now we have this:
connections.json
{
  "AzureBlob-2": {
    "displayName": "iam-automation-builtin-blob-mi-2",
    "parameterValues": "@parameters('blob-connection-parameter-values')",
    "serviceProvider": {
      "id": "/serviceProviders/AzureBlob"
    }
  }
}
parameters.json
{
  "blob-connection-parameter-values": {
    "type": "Object",
    "value": {
      "authProvider": {
        "Type": "ManagedServiceIdentity"
      },
      "blobStorageEndpoint": "@appsetting('AzureBlob_blobStorageEndpoint')"
    }
  }
}
parameters.development.json
{
  "blob-connection-parameter-values": {
    "type": "Object",
    "value": {
      "connectionString": "@appsetting('AzureBlob_connectionString')"
    }
  }
}
This requires us to edit the parameters.json file before running in a different environment. Is there a way to avoid this?
We need the ability to run the Logic App in all environments without changing parameters.json manually, similar to what we can do in an MVC app with appsettings.json and its per-environment overrides.
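For context on the @appsetting(...) references above: when running locally those values resolve from local.settings.json, and in Azure they resolve from the Logic App's app settings (configuration), so they can differ per environment without touching parameters.json. A rough sketch of a local.settings.json, assuming the setting names from the snippets above (all values are placeholders):
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AzureBlob_blobStorageEndpoint": "https://yourdevstorage.blob.core.windows.net/",
    "AzureBlob_connectionString": "UseDevelopmentStorage=true"
  }
}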

Related

Copying into 80 Postgres databases with Azure Data Factory

I'm using Azure Data Factory, with SQLServer as the source and Postgres as the target. The goal is to copy 30 tables from SQLServer, with transformation, to 30 tables in Postgres. The hard part is that I have 80 databases on both the source and target side, all with the exact same layout but different data. It's one database per customer, so 80 customers each with their own database.
Linked Services doesn't allow parameters for Postgres.
I have one dataset per source and target using parameters for schema and table names.
I have one pipeline per table with SQLServer source and Postgres target.
I can parameterize the SQLServer source in the linked service, but not the Postgres one.
The problem is: how can I copy 80 source databases to 80 target databases without adding 80 target linked services and 80 target datasets? Plus I'd have to repeat all 30 pipelines per target database.
BTW, I'm only familiar with the UI; however, anything else that does the job is acceptable.
Any help would be appreciated.
There is a simple way to implement this. Essentially you need a single Linked Service that reads the connection string out of Key Vault. You can then parameterize source and target as Key Vault secret names, and easily switch between data sources by just changing the secret name. This relies on all connection-related information being enclosed within a single connection string.
I will provide a simple overview for Postgresql, but the same logic applies to MSSQL servers as the source.
Implement a Linked Service for Azure Key Vault.
Add a Linked Service for Azure Postgresql that reads the connection string from Key Vault, stored in the format: Server=your_server_name.postgres.database.azure.com;Database=your_database_name;Port=5432;UID=your_user_name;Password=your_password;SSL Mode=Require;Keepalive=600; (I advise using the server name as the secret name)
Pass this parameter, which is essentially the secret name, into the pipeline (you can also implement a loop that accepts an array of secret names and fans each one out to a separate pipeline run; see the sketch after the pipeline definition below).
Linked Service Definition for KeyVault:
{
  "name": "your_keyvault_name",
  "properties": {
    "description": "KeyVault",
    "annotations": [],
    "type": "AzureKeyVault",
    "typeProperties": {
      "baseUrl": "https://your_keyvault_name.vault.azure.net/"
    }
  }
}
Linked Service Definition for Postgresql:
{ "name": "generic_postgres_service".
"properties": {
"type": "AzurePostgreSql",
"parameters": {
"pg_database": {
"type": "string",
"defaultValue": "your_database_name"
}
},
"annotations": [],
"typeProperties": {
"connectionString": {
"type": "AzureKeyVaultSecret",
"store": {
"referenceName": "KeyVaultName",
"type": "LinkedServiceReference"
},
"secretName": "#linkedService().secret_name_for_server"
}
},
"connectVia": {
"referenceName": "AutoResolveIntegrationRuntime",
"type": "IntegrationRuntimeReference"
}
}
}
Dataset Definition for Postgresql:
{
  "name": "your_postgresql_dataset",
  "properties": {
    "linkedServiceName": {
      "referenceName": "generic_postgres_service",
      "type": "LinkedServiceReference",
      "parameters": {
        "secret_name_for_server": {
          "value": "@dataset().secret_name_for_server",
          "type": "Expression"
        }
      }
    },
    "parameters": {
      "secret_name_for_server": {
        "type": "string"
      },
      "schema_name": {
        "type": "string"
      },
      "table_name": {
        "type": "string"
      }
    },
    "annotations": [],
    "type": "AzurePostgreSqlTable",
    "schema": [],
    "typeProperties": {
      "schema": {
        "value": "@dataset().schema_name",
        "type": "Expression"
      },
      "table": {
        "value": "@dataset().table_name",
        "type": "Expression"
      }
    }
  }
}
Pipeline Definition for Postgresql:
{
  "name": "your_postgres_pipeline",
  "properties": {
    "activities": [
      {
        "name": "Copy_Activity_1",
        "type": "Copy",
        "dependsOn": [],
        "policy": {
          "timeout": "7.00:00:00",
          "retry": 0,
          "retryIntervalInSeconds": 30,
          "secureOutput": false,
          "secureInput": false
        },
        ... (rest of the Copy activity definition skipped) ...
        "inputs": [
          {
            "referenceName": "your_postgresql_dataset",
            "type": "DatasetReference",
            "parameters": {
              "secret_name_for_server": "secret_name"
            }
          }
        ]
      }
    ],
    "annotations": []
  }
}
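As mentioned in the steps above, a loop can fan this out across all 80 servers instead of cloning pipelines. A rough sketch of such a ForEach activity (names are illustrative; it assumes the copy pipeline is changed to take a secret_name_for_server parameter instead of the hard-coded value, and that secret_names is an Array pipeline parameter holding the 80 Key Vault secret names):
{
  "name": "loop_over_servers",
  "type": "ForEach",
  "typeProperties": {
    "items": {
      "value": "@pipeline().parameters.secret_names",
      "type": "Expression"
    },
    "isSequential": false,
    "activities": [
      {
        "name": "Run_Copy_Pipeline",
        "type": "ExecutePipeline",
        "typeProperties": {
          "pipeline": {
            "referenceName": "your_postgres_pipeline",
            "type": "PipelineReference"
          },
          "waitOnCompletion": true,
          "parameters": {
            "secret_name_for_server": "@item()"
          }
        }
      }
    ]
  }
}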

Users Get Access Key and Secret Key Stored to Secrets Manager in CloudFormation

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Installing CloudAuth Application in Ubuntu 18.04 LTS",
  "Parameters": {
    "secretname": {
      "Type": "String",
      "Description": "A descriptive name that helps you find your secret later"
    },
    "myuser": {
      "Type": "String",
      "Description": "Enter existing user name"
    }
  },
  "Resources": {
    "myaccesskey": {
      "Type": "AWS::IAM::AccessKey",
      "Properties": {
        "UserName": {
          "Ref": "myuser"
        }
      }
    },
    "mysecrets": {
      "Type": "AWS::SecretsManager::Secret",
      "Properties": {
        "Name": {
          "Ref": "secretname"
        },
        "SecretString": "{\"Access_Key\":\"${myaccesskey}\",\"Secret_Key\":\"${myaccesskey.SecretAccessKey}\"}"
      }
    }
  }
}
From my understanding, you are trying to pass the access key and secret key into the SecretString of the Secrets Manager resource.
Instead of using:
"SecretString":
"{"Access_Key":"${myaccesskey}","Secret_Key":"${myaccesskey.SecretAccessKey}"}"
Try the following format (the Fn::Sub intrinsic function):
SecretString: !Sub '{ "access_key": "${AccessKey}",
"secret_key": "${AccessKey.SecretAccessKey}" }'
I used the YAML format for the stack creation.
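The key point is that the ${...} substitution only happens inside Fn::Sub (!Sub in YAML); a plain string is stored literally. If you want to keep the JSON template from the question, the equivalent snippet using the myaccesskey logical ID would be:
"SecretString": {
  "Fn::Sub": "{\"Access_Key\":\"${myaccesskey}\",\"Secret_Key\":\"${myaccesskey.SecretAccessKey}\"}"
}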

AWS IoT JIT provisioning template with Fn::Join

I am trying to add registrationConfig for my CA certificate in AWS IoT. I would like to do some manipulation of data for the Thing attributes, but I can't seem to get JITP to work if the template body has Fn::Join in it.
The following are extracts of the template body (strings unescaped for readability):
NOT working:
"Resources": {
"thing": {
"Type": "AWS::IoT::Thing",
"Properties": {
"ThingName": {
"Ref": "AWS::IoT::Certificate::CommonName"
},
"ThingTypeName" : "w2-device",
"ThingGroups" : ["w2-devices"],
"AttributePayload": {
"location": {
"Fn::Join":["",["ThingPrefix_",{"Ref":"SerialNumber"}]]
},
"organization": {
"Ref": "AWS::IoT::Certificate::Organization"
},
"version": "w2",
"country": {
"Ref": "AWS::IoT::Certificate::Country"
}
}
}
},
In the above, when I have Fn::Join in AttributePayload/location, it fails to create the Thing during JITP. I don't see any errors in CloudWatch either.
Working:
"Resources": {
"thing": {
"Type": "AWS::IoT::Thing",
"Properties": {
"ThingName": {
"Ref": "AWS::IoT::Certificate::CommonName"
},
"ThingTypeName" : "w2-device",
"ThingGroups" : ["w2-devices"],
"AttributePayload": {
"location": {
"Ref": "AWS::IoT::Certificate::StateName"
},
"organization": {
"Ref": "AWS::IoT::Certificate::Organization"
},
"version": "w2",
"country": {
"Ref": "AWS::IoT::Certificate::Country"
}
}
}
},
Note: I have also asked this in the AWS forum, but there is no answer there yet.
Provisioning templates for JITP define a set of parameters beginning with AWS::IoT::Certificate.
The AWS::IoT::Certificate::SerialNumber parameter should be used instead of just SerialNumber in the attribute payload. e.g.
"AttributePayload": {
"location": {
"Fn::Join":["",["ThingPrefix_",{"Ref":"AWS::IoT::Certificate::SerialNumber"}]]
},
https://docs.aws.amazon.com/iot/latest/developerguide/jit-provisioning.html lists the defined parameters for JITP as:
AWS::IoT::Certificate::Country
AWS::IoT::Certificate::Organization
AWS::IoT::Certificate::OrganizationalUnit
AWS::IoT::Certificate::DistinguishedNameQualifier
AWS::IoT::Certificate::StateName
AWS::IoT::Certificate::CommonName
AWS::IoT::Certificate::SerialNumber
AWS::IoT::Certificate::Id
The SerialNumber examples in the AWS documentation (without the AWS::IoT::Certificate prefix) are used for the bulk registration process.

IBM Cloud Functions action took too long to respond in IBM Watson chat dialog

Hi, I am creating a chatbot. I developed an IBM Cloud Functions action in IBM Cloud.
This is the action code:
{
  "context": {
    "my_creds": {
      "user": "ssssssssssssssssss",
      "password": "sssssssssssssssssssssss"
    }
  },
  "output": {
    "generic": [
      {
        "values": [
          {
            "text": ""
          }
        ],
        "response_type": "text",
        "selection_policy": "sequential"
      }
    ]
  },
  "actions": [
    {
      "name": "ssssssssssss/user-detail",
      "type": "server",
      "parameters": {
        "name": "<?input.text?>",
        "lastname": "<?input.text?>"
      },
      "credentials": "$my_creds",
      "result_variable": "$my_result"
    }
  ]
}
Now my user-detail action gives a response when I invoke it directly.
But when I check the output with my chatbot, I get "execution of cloud functions action took too long".
There is currently a 5 second limitation on processing time for a cloud function being called from a dialog node. If your process will need longer than this, you'll need to do it client side through your application layer.
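If the lookup really does take longer than 5 seconds, one option (a sketch only, not part of the original answer) is to have your application layer call the Cloud Functions action itself over its REST API and then feed the result back into the conversation. The host, namespace, and action name below are placeholders, and CF_API_KEY is assumed to be the user:password style key from IBM Cloud Functions:
// Sketch (Node.js 18+): call the action directly from the application layer,
// where the 5-second dialog limit does not apply.
const apiKey = process.env.CF_API_KEY; // "user:password" style API key (placeholder)
const url = "https://us-south.functions.cloud.ibm.com/api/v1" +
            "/namespaces/YOUR_NAMESPACE/actions/user-detail?blocking=true&result=true";

async function callUserDetail(name, lastname) {
  const res = await fetch(url, {
    method: "POST",
    headers: {
      "Authorization": "Basic " + Buffer.from(apiKey).toString("base64"),
      "Content-Type": "application/json"
    },
    body: JSON.stringify({ name: name, lastname: lastname })
  });
  if (!res.ok) {
    throw new Error("Action call failed with status " + res.status);
  }
  return res.json(); // the action result, which your app can send back to the user
}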

NoSQL Structure for handling labeled tags

Currently I have hundreds of thousands of files like so:
{
  "_id": "1234567890",
  "type": "file",
  "name": "Demo File",
  "file_type": "application/pdf",
  "size": "1400",
  "timestamp": "1491421149",
  "folder_id": "root"
}
Currently, I index all the names, and a client can search for files based on the name of the file. These files also have tags that need to be associated with the file but they also have specific labels.
An example would be:
{
  "tags": [
    { "client": "john doe" },
    { "office": "virginia" },
    { "ssn": "1234" }
  ]
}
Is adding the tags array to my above file object the ideal solution if I want to be able to search thousands of files with a client of John Doe?
The only other solution I can think of is having an object per tag, with an array of file IDs associated with each tag, like so:
{
  "_id": "11111111",
  "type": "tag",
  "label": "client",
  "items": [
    "1234567890",
    "1222222222",
    "1333333333"
  ]
}
With this being a LOT of objects I need to add tags to, I'd rather do it the most efficient way possible FIRST so I don't have to backtrack in the near future when I start running into issues.
Any guidance would be greatly appreciated.
Your original design, with a tags array, works well with Cloudant Search: https://console.ng.bluemix.net/docs/services/Cloudant/api/search.html#search.
With this approach you would define a single design document that will index any tag in the tags array. You do not have to create different views for different tags and you can use the Lucene syntax for queries: http://lucene.apache.org/core/4_3_0/queryparser/org/apache/lucene/queryparser/classic/package-summary.html#Overview.
So, using your example, if you have a document that looks like this with tags:
{
  "_id": "1234567890",
  "type": "file",
  "name": "Demo File",
  "file_type": "application/pdf",
  "size": "1400",
  "timestamp": "1491421149",
  "folder_id": "root",
  "tags": [
    { "client": "john doe" },
    { "office": "virginia" },
    { "ssn": "1234" }
  ]
}
You can create a design document that indexes each tag like so:
{
  "_id": "_design/searchFiles",
  "views": {},
  "language": "javascript",
  "indexes": {
    "byTag": {
      "analyzer": "standard",
      "index": "function (doc) {\n if (doc.type === \"file\" && doc.tags) {\n for (var i=0; i<doc.tags.length; i++) {\n for (var name in doc.tags[i]) {\n index(name, doc.tags[i][name]);\n }\n }\n }\n}"
    }
  }
}
The function looks like this:
function (doc) {
  if (doc.type === "file" && doc.tags) {
    for (var i=0; i<doc.tags.length; i++) {
      for (var name in doc.tags[i]) {
        index(name, doc.tags[i][name]);
      }
    }
  }
}
Then you would search like this:
https://your_cloudant_account.cloudant.com/your_db/_design/searchFiles/_search/byTag
?q=client:jack+OR+office:virginia
&include_docs=true
The solution that comes to my mind would be using map/reduce functions (views).
To do that, you would add the tags to your original document:
{
  "_id": "1234567890",
  "type": "file",
  "name": "Demo File",
  "file_type": "application/pdf",
  "size": "1400",
  "timestamp": "1491421149",
  "folder_id": "root",
  "client": "john",
  ...
}
Afterwards, you can create a design document that looks like this:
{
  "_id": "_design/query",
  "views": {
    "byClient": {
      "map": "function(doc) { if(doc.client) { emit(doc.client, doc._id) }}"
    }
  }
}
After the view is processed, you can open it with
GET /YOURDB/_design/query/_view/byClient?key="john"
By adding the query parameter include_docs=true, the whole document will be returned instead of just the ID.
You can also write your tags into a tags attribute, but you then have to update the map function to match the new design (a sketch of such a function is shown below).
More information about views can be found here:
http://docs.couchdb.org/en/2.0.0/api/ddoc/views.html
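For example, keeping the tags array from the question, the map function might be adapted along these lines (a sketch only; it emits a composite [label, value] key so every label can be queried from a single view, stored as a new view such as byTag in the same design document):
function (doc) {
  if (doc.type === "file" && doc.tags) {
    for (var i = 0; i < doc.tags.length; i++) {
      for (var label in doc.tags[i]) {
        // one row per tag, keyed by [label, value], e.g. ["client", "john doe"]
        emit([label, doc.tags[i][label]], doc._id);
      }
    }
  }
}
Querying it would then look like GET /YOURDB/_design/query/_view/byTag?key=["client","john doe"]&include_docs=true (the key must be URL-encoded JSON).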
