I have to add some IP addresses to the whitelist for Azure Logic App IP restrictions through a PowerShell script. I have searched for a command that adds IP restrictions to an Azure Logic App to whitelist those IPs, but I could not find any such command. Is there any way to do it through PowerShell?
The access control properties are set in the ARM template for the app, so that is how you would update them via PowerShell. Since you already have the app created, the simplest method is to export the template from the existing app, make your edits, and then run the PowerShell deployment commands.
In particular, you need to set the accessControl property:
"accessControl": {
"triggers": {
"allowedCallerIpAddresses": [{
"addressRange": "192.168.1.0-192.168.1.100"
}
]
},
"actions": {
"allowedCallerIpAddresses": [{
"addressRange": "192.168.1.0-192.168.1.100"
}
]
}
}
Once you've made your template changes, the template can be deployed with PowerShell or the Azure CLI (the quickstart URI below is just an example; point these commands at your edited template instead):
New-AzResourceGroupDeployment -ResourceGroupName <Azure-resource-group-name> -TemplateUri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-logic-app-create/azuredeploy.json
or
az group deployment create -g <Azure-resource-group-name> --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-logic-app-create/azuredeploy.json
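Alternatively, if you would rather patch the existing Logic App in place than redeploy a template, the generic Az.Resources cmdlets can update the accessControl property directly. This is a minimal sketch, assuming the Az PowerShell module is installed; the resource group and workflow names are placeholders:

# Read the existing workflow resource (placeholder names)
$resource = Get-AzResource -ResourceGroupName "my-rg" `
    -ResourceType "Microsoft.Logic/workflows" -Name "my-logic-app"

# Build the allowed caller IP ranges for both triggers and actions
$ranges = @(@{ addressRange = "192.168.1.0-192.168.1.100" })
$accessControl = @{
    triggers = @{ allowedCallerIpAddresses = $ranges }
    actions  = @{ allowedCallerIpAddresses = $ranges }
}

# Attach accessControl to the existing properties and push the update
$props = $resource.Properties
$props | Add-Member -NotePropertyName accessControl -NotePropertyValue $accessControl -Force
Set-AzResource -ResourceId $resource.ResourceId -Properties $props -Force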
I'm creating an App Engine app using the following module: google_app_engine_flexible_app_version.
By default, Google creates a Default App Engine Service Account with roles/editor permissions.
I want to reduce the permissions of my App Engine app.
Therefore, I want to remove the roles/editor permission and add my custom role instead.
To remove it, I know I can use the gcloud projects remove-iam-policy-binding CLI command.
But I want this to be part of my Terraform plan.
If you are using https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/app_engine_flexible_app_version to create your infrastructure, then you must have seen the following line in it:
role = "roles/compute.networkUser"
This role is used when setting up your infrastructure, and you can adjust it after referring to https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/iam_deny_policy
Note: when changing roles, please ensure valid permissions remain in place so your App Engine app keeps working properly.
I. Using the provided Terraform code as a template and adjusting it
One simple approach I would suggest is to (1) first set up your infrastructure with the basic Terraform code you have, then (2) update your infrastructure to match your expectations, and (3) run terraform refresh and terraform plan to find the differences you need to fold back into your code.
The code below is unrelated to App Engine; it only serves as an example of the workflow.
resource "google_dns_record_set" "default" {
name = google_dns_managed_zone.default.dns_name
managed_zone = google_dns_managed_zone.default.name
type = "A"
ttl = 300
rrdatas = [
google_compute_instance.default.network_interface.0.access_config.0.nat_ip
]
}
Above is the code for creating a DNS record using Terraform. After performing steps 1, 2 & 3 above, I get the following differences to apply to my code:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # google_dns_record_set.default will be updated in-place
  ~ resource "google_dns_record_set" "default" {
        id           = "projects/mmterraform03/managedZones/example-zone-googlecloudexample/rrsets/googlecloudexample.com./A"
        name         = "googlecloudexample.com."
      ~ ttl          = 360 -> 300
        # (4 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
II. Using Terraform Import
gcloud (the Google Cloud Platform CLI), Terraform, and several other open-source tools available today can read your existing infrastructure and write the Terraform code for you.
So you can check out terraform import or Google's docs - https://cloud.google.com/docs/terraform/resource-management/import
But to use this method, you have to set up your infrastructure first. You either do it completely manually from the Google Console UI, or use Terraform first and then update it.
As a III option, you can reach out to or hire a Terraform expert to do this task for you, but options I and II work best for many cases.
On a different note, please see https://stackoverflow.com/help/how-to-ask and https://stackoverflow.com/help/minimal-reproducible-example. Opinion-based and open-ended how-to questions are usually discouraged on Stack Overflow.
This is one situation where you might consider using google_project_iam_policy.
That could be used to knock out the Editor role, but it will knock out everything else you don't explicitly list in the policy!
Beware - There is a risk of locking yourself out of your project if you are not sure what you are doing.
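For illustration, here is a minimal sketch of such an authoritative policy, assuming hypothetical project and role names; anything not listed in it is stripped from the project, so keep a binding for your own admin account:

# Authoritative project policy: bindings NOT listed here are removed,
# including the default Editor grant -- use with extreme care.
data "google_iam_policy" "project" {
  binding {
    role = "projects/my-project/roles/myCustomRole" # hypothetical custom role
    members = [
      "serviceAccount:my-project@appspot.gserviceaccount.com",
    ]
  }

  # Keep a binding for yourself, or you will lock yourself out of the project!
  binding {
    role    = "roles/owner"
    members = ["user:admin@example.com"]
  }
}

resource "google_project_iam_policy" "project" {
  project     = "my-project"
  policy_data = data.google_iam_policy.project.policy_data
}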
Another option would be to use a custom service account.
Use terraform to create the account and apply the desired roles.
Use gcloud app deploy --service-account={custom-sa} to deploy a service to app engine that uses the custom account.
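A minimal sketch of that account creation, assuming a hypothetical custom role and a placeholder project ID:

# Create a dedicated service account for the App Engine service
resource "google_service_account" "appengine" {
  account_id   = "app-engine-custom"
  display_name = "Custom App Engine service account"
}

# Grant it only the (hypothetical) custom role instead of roles/editor
resource "google_project_iam_member" "appengine" {
  project = "my-project"
  role    = "projects/my-project/roles/myCustomRole"
  member  = "serviceAccount:${google_service_account.appengine.email}"
}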
But you may still wish to remove the Editor role from the default service account. Given that you already have the gcloud command to do it (gcloud projects remove-iam-policy-binding), you could use the terraform-google-gcloud module to execute that command from Terraform.
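A sketch of that module invocation, based on its documented create_cmd interface; the project ID and member are placeholders:

# Run a one-off gcloud command from Terraform via the gcloud module
module "remove_editor_binding" {
  source  = "terraform-google-modules/gcloud/google"
  version = "~> 3.0"

  create_cmd_entrypoint = "gcloud"
  create_cmd_body       = "projects remove-iam-policy-binding my-project --member=serviceAccount:my-project@appspot.gserviceaccount.com --role=roles/editor"
}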
See also this feature request.
I have a logic app that was deploying from Visual Studio without issue a few weeks ago.
Today it's throwing the following error on deployment:
17:27:10 - "error": {
17:27:10 - "code": "IntegrationAccountAssociationRequired",
17:27:10 - "message": "The workflow must be associated with an integration account to use the workflow run action 'Liquid transform' of type 'Liquid'."
17:27:10 - }
My logic app has a parameter that references the integration account:
"IntegrationAccountRef": {
"type": "string",
"minLength": 1,
"defaultValue": "/subscriptions/99x99x9x-9xx9-x99x-x99x-x99x99x99x99/resourcegroups/devResourceGroup/providers/Microsoft.Logic/integrationAccounts/devIntegrationAccount"
},
I also reference this parameter in the parameters section of the logic app resource, so the logic app knows it's the integration account:
"integrationAccount": {
"id": "[parameters('IntegrationAccountRef')]"
}
Yet it still throws the error mentioned at the top.
Has something changed in how Logic Apps reference integration accounts in an ARM template?
Appreciate any advice and expertise.
To summarize the steps from the comments for the reference of others:
Even though we have set the reference to the integration account in the template code, it also needs to be set in the Logic App properties within Visual Studio:
Click anywhere in the white space of the Visual Studio Logic App designer.
Look in the Properties window for the Integration Account selection dropdown.
Select the Integration Account you want to use and save your Logic App.
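For reference, the association set this way ultimately lands under the workflow resource's properties in the generated template. A minimal sketch (the logicAppName parameter and apiVersion here are illustrative, and the workflow definition is elided):

{
    "type": "Microsoft.Logic/workflows",
    "apiVersion": "2017-07-01",
    "name": "[parameters('logicAppName')]",
    "location": "[resourceGroup().location]",
    "properties": {
        "integrationAccount": {
            "id": "[parameters('IntegrationAccountRef')]"
        },
        "definition": {}
    }
}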
I created a Logic Apps custom connector successfully (via the portal, not ARM) and it's in use (demo), working fine. It's a wrapper for an Azure Function, existing to provide better usability up front to less tech-savvy users, i.e. exposing properties instead of requiring raw JSON.
Anyhow, once it is created, my query is a simple one. Can it be edited (1) in the portal, or (2) via ARM (if it was created by ARM)? E.g. I want to add a better icon.
When I view the Logic Apps custom connector in the portal and click EDIT, however, all it does is populate the Connector Name and no more. All the original configuration, parameters etc. is missing.
So, my queries:
1. Is this the norm?
2. On export of the custom connector (Azure portal menu item), the template really has nothing in it - no content for the connector details either?
3. Is there an ARM template to deploy this?
4. If yes to 3, how do you go about modifying it in the scenario where you have to?
5. I also understand that using it in a logic app creates an API Connection reference. Does this stand alone, almost derived off the custom connector? And would further uses of a modified connector create different API connections?
I feel I'm just missing some of the basic knowledge on how these are implemented, which in turn would explain the deployment and maintenance.
Anyone :) ?
EDIT:
Think I've come to learn the portal is very buggy. The swagger editor loaded no content either and broke the screen. I've since tried a simpler connector, i.e. one without sample markup containing escaped regex patterns, and it seems happy going back into it to edit :) (Maybe one to report as a bug after all this.)
That said - yes, editing should be possible, but the other queries regarding ARM, export, redeploy and current connections still stand :)
You can deploy the Logic Apps custom connector really easily. You need to do the following steps:
1. Configure your custom connector with proper settings and update it.
2. Once updated, click on the download link available at the top of the connector.
3. Download the ARM template skeleton using Export Template.
4. In the properties section, add a new property called swagger and paste the swagger you downloaded in step 2.
5. Parameterise your ARM template.
6. Deploy using your choice of deployment method - Azure DevOps, PowerShell etc.
Please refer to the following ARM template for your perusal.
{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "customApis_tempconnector_name": {
            "defaultValue": "tempconnector",
            "type": "String"
        }
    },
    "variables": {},
    "resources": [
        {
            "type": "Microsoft.Web/customApis",
            "apiVersion": "2016-06-01",
            "name": "[parameters('customApis_tempconnector_name')]",
            "location": "australiaeast",
            "properties": {
                "connectionParameters": {
                    "api_key": {
                        "type": "securestring",
                        "uiDefinition": {
                            "displayName": "API Key",
                            "description": "The API Key for this api",
                            "tooltip": "Provide your API Key",
                            "constraints": {
                                "tabIndex": 2,
                                "clearText": false,
                                "required": "true"
                            }
                        }
                    }
                },
                "backendService": {
                    "serviceUrl": "http://petstore.swagger.io/v2"
                },
                "description": "This is a sample server Petstore server. You can find out more about Swagger at [http://swagger.io](http://swagger.io) or on [irc.freenode.net, #swagger](http://swagger.io/irc/). For this sample, you can use the api key `special-key` to test the authorization filters.",
                "displayName": "[parameters('customApis_tempconnector_name')]",
                "iconUri": "/Content/retail/assets/default-connection-icon.e6bb72160664a5e37b9923c3d9f50ca5.2.svg",
                "swagger": { "Enter Swagger Downloaded from Step 2 here" }
            }
        }
    ]
}
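Once parameterised, the deployment itself could look like this - a minimal sketch assuming the Az PowerShell module, with placeholder names for the resource group and template file:

# Deploy the exported custom connector template; template parameters
# surface as dynamic parameters on New-AzResourceGroupDeployment
New-AzResourceGroupDeployment -ResourceGroupName "my-rg" `
    -TemplateFile ".\customApis.json" `
    -customApis_tempconnector_name "tempconnector"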
I created a .bacpac from the original SQL Azure DB that I want to import into a new database during my deployment process. To support this, I want to have a GitHub page with the usual Deploy to Azure button that, in as close to one click as possible, performs the deployment and sets up my entire application.
To do this, however, I need to set up some initial data in the database. After consulting the internet, I found the post Using Azure Resource Manager to Copy Azure SQL Databases, which covers a similar issue.
Right now, I have an MSDeploy extension running in the ARM template that deploys a website from a public Azure blob. I'd ideally like to do the same with the database, but the command seems to require the storageKeyType and storageKey parameters to be filled in.
Is there any way to get around this limitation? Should I just give up and have my application perform the initial setup of the database? Sharing the storage key in a public GitHub template does not seem like a very good plan!
Here's a code snippet:
"resources": [
{
"name": "Import",
"type": "extensions",
"apiVersion": "2014-04-01-preview",
"dependsOn": [
"[variables('sqlsrvmymisName')]",
"[variables('sqldbmymisName')]"
],
"properties": {
"storageUri": "https://publicblob.blob.core.windows.net/artifacts/publicblob.bacpac",
"administratorLogin": "MasterAccount",
"administratorLoginPassword": "P#ssw0rd",
"operationMode": "Import",
"storageKeyType": "Primary",
"storageKey": ""
}
}
]
If you really want it to be public, try this:

"storageKeyType": "SharedAccessKey",
"storageKey": "?",

Because the blob is publicly readable, an effectively empty SAS token ("?") satisfies the parameter validation without exposing any real credentials.
I was trying to install a new theme in IBM WebSphere Portal 8.0.0.1. Using the AnyClient WebDAV client, I am uploading the theme's static resources to the theme list, but it doesn't upload all the files.
Do I have to grant any permissions to WebDAV in the portal?
I have already granted 'All Portal Users' access to THEME MANAGEMENT in Portal Access Control; even so, it doesn't upload the files.
I have also tried the WebDrive and BitKinex clients, and still it doesn't upload the files.
In addition to access rights, you need to enable write access for non-administrative users. See the instructions below.
To enable write access for all authenticated users, proceed as follows:
Add the property filestore.writeaccess.allowed to the WP ConfigService resource environment provider in the WebSphere® Application Server administrative console.
Set the value of the property to true.
Restart the portal server for the new setting to take effect.
https://serverfault.com/questions/555638/how-do-i-enable-webdav-write-access-in-websphere-portal-8-0
Does it upload no files at all, or only some of them? If the theme is only partially uploaded, it is probably a limitation of the WebDAV client; try using the ConfigEngine task to upload the zip to the WebDAV store instead:
http://www-01.ibm.com/support/knowledgecenter/SSYJ99_8.0.0/dev/csa2r_cfgtsk_webdavdplzip.dita
ConfigEngine.sh webdav-deploy-zip-file -DTargetURI=dav:fstype1/themes// -DZipFilePath=/tmp/YourTheme.zip