Azure Logic App not updated in destination Resource Group (PowerShell)

I am trying to deploy an existing Logic App from a source Resource Group to a destination Resource Group via PowerShell:
New-AzResourceGroupDeployment -ResourceGroupName <DestinationResourceGroup> -TemplateFile <TemplateFile> -TemplateParameterFile <ParametersFile>
The deployment is "sucessful" but the destination LogicApp is not getting updated
Is there any way to update the Logic App in the destination Resource Group? Do I have to delete and create it again every time?

Here are a few workarounds that you can try.
WAY-1
Use Set-AzLogicApp, which modifies a logic app in a resource group:
Set-AzLogicApp -ResourceGroupName <YOURRESOURCEGROUP> -Name <YOURLOGICAPP> -DefinitionFilePath <PATHOFYOURDEFINITIONFILE> -ParameterFilePath <PATHOFYOURPARAMETERFILE>
WAY-2
You can store the workflow definition in a variable and pass it to Set-AzLogicApp:
$json = '{
"$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
"actions": {},
"contentVersion": "1.0.0.0",
"outputs": {},
"parameters": {},
"triggers": {}
}'
Set-AzLogicApp -ResourceGroupName <YOURRESOURCEGROUPNAME> -ResourceName <YOURLOGICAPPNAME> -Definition $json
WAY-3
Manually copy and paste the Logic App's code view into the destination every time after a successful run (see the sketch below for a scripted alternative).
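If you want to script that copy instead of doing it by hand, here is a minimal sketch (the resource group and Logic App names are placeholders) that reads the definition from the source Logic App with Get-AzLogicApp and pushes it to the destination with Set-AzLogicApp:
# Rough sketch: copy the workflow definition from the source Logic App
# to the Logic App in the destination Resource Group.
$source = Get-AzLogicApp -ResourceGroupName "SourceRG" -Name "MyLogicApp"
# Save the definition to a temporary file so it can be passed to -DefinitionFilePath.
$definitionPath = Join-Path $env:TEMP "logicapp-definition.json"
$source.Definition.ToString() | Out-File -FilePath $definitionPath -Encoding utf8
Set-AzLogicApp -ResourceGroupName "DestinationRG" -Name "MyLogicApp" -DefinitionFilePath $definitionPath
If the workflow definition references parameters, pass a matching -ParameterFilePath as well.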
REFERENCES:
Set-AzLogicApp
How to set azure logic app definition using powershell

Related

Is it possible to add Microsoft Graph delegated permissions to Azure AD app via Powershell?

I registered an application in Azure AD from PowerShell using the below script.
# Create the new application
$myapp = New-AzureADApplication -DisplayName "MyApp"
$myappId = $myapp.AppId
# Set the Application ID URI
Set-AzureADApplication -ObjectId $myapp.ObjectId -IdentifierUris "api://$myappId"
# Retrieve details of the new application
Get-AzureADApplication -Filter "DisplayName eq 'MyApp'"
Now I want to set delegated API permissions (Calendars.Read, Application.Read.All, Directory.Read.All) for this app.
From Azure Portal, I know how to assign these. But is it possible to add these permissions via PowerShell? If yes, can anyone help me with the script or cmdlets?
Any help will be appreciated. Thank you.
Yes, it's possible to set delegated API permissions via PowerShell.
First, note the ObjectId of the new application, which can be retrieved with the cmdlet below:
Get-AzureADApplication -Filter "DisplayName eq 'MyApp'"
Check whether you have a service principal named "Microsoft Graph", and store it in a variable for later use:
$MSGraph = Get-AzureADServicePrincipal -All $true | ? { $_.DisplayName -eq "Microsoft Graph" }
To assign API permissions via PowerShell, you need the GUIDs of those delegated permissions, which can be listed with the cmdlet below:
$MSGraph.Oauth2Permissions | FT ID, Value
Note the IDs of the required permissions: Calendars.Read, Application.Read.All and Directory.Read.All.
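For example, a quick way to pull just those three IDs from the $MSGraph variable defined above (a small sketch, not part of the original answer):
# List only the delegated permissions needed here
$MSGraph.Oauth2Permissions |
    Where-Object { $_.Value -in "Calendars.Read", "Application.Read.All", "Directory.Read.All" } |
    Format-Table Id, Value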
Please find the complete script below:
$myapp = New-AzureADApplication -DisplayName MyApp
$myappId=$myapp.ObjectId
Get-AzureADApplication -Filter "DisplayName eq 'MyApp'"
$MSGraph = Get-AzureADServicePrincipal -All $true | ? { $_.DisplayName -eq "Microsoft Graph" }
$MSGraph.Oauth2Permissions | FT ID, Value
# Create a Resource Access resource object and assign the service principal’s App ID to it.
$Graph = New-Object -TypeName "Microsoft.Open.AzureAD.Model.RequiredResourceAccess"
$Graph.ResourceAppId = $MSGraph.AppId
# Create a set of delegated permissions using noted IDs
$Per1 = New-Object -TypeName "Microsoft.Open.AzureAD.Model.ResourceAccess" -ArgumentList "c79f8feb-a9db-4090-85f9-90d820caa0eb","Scope"
$Per2 = New-Object -TypeName "Microsoft.Open.AzureAD.Model.ResourceAccess" -ArgumentList "465a38f9-76ea-45b9-9f34-9e8b0d4b0b42","Scope"
$Per3 = New-Object -TypeName "Microsoft.Open.AzureAD.Model.ResourceAccess" -ArgumentList "06da0dbc-49e2-44d2-8312-53f166ab848a","Scope"
$Graph.ResourceAccess = $Per1, $Per2, $Per3
# Set the above resource access object to your application ObjectId so permissions can be assigned.
Set-AzureADApplication -ObjectId $myappId -RequiredResourceAccess $Graph
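As an optional sanity check (not part of the original script), you can read the assignment back from the application object:
# Verify that the required resource access entries were saved on the application
(Get-AzureADApplication -ObjectId $myappId).RequiredResourceAccess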
Reference:
How to assign Permissions to Azure AD App by using PowerShell?

Training job in SageMaker gives an error locating files from S3 in the Docker image path

I am trying to use scikit_bring_your_own/container/decision_trees/train. Running it locally via the AWS CLI I had no issues, but when replicating it by creating a SageMaker training job, I am facing an issue loading data from S3 into the Docker image path.
In the CLI workflow we ran docker run -v $(pwd)/test_dir:/opt/ml --rm ${image} train, which is where the input gets mounted from.
In the training job, I specified the S3 bucket location for the input data and the output path for the model artifacts.
The run fails with the exception raised in container/decision_trees/train:
raise ValueError(('There are no files in {}.\n' +
'This usually indicates that the channel ({}) was incorrectly specified,\n' +
'the data specification in S3 was incorrectly specified or the role specified\n' +
'does not have permission to access the data.').format(training_path, channel_name))
Traceback (most recent call last):
File "/opt/program/train", line 55, in train
'does not have permission to access the data.').format(training_path, channel_name))
So I do not understand whether any tweaking is required or whether some access is missing.
Kindly help.
If you set the InputDataConfig in the CreateTrainingJob API like this:
"InputDataConfig": [
    {
        "ChannelName": "train",
        "DataSource": {
            "S3DataSource": {
                "S3DataDistributionType": "FullyReplicated",
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://<bucket>/a.csv"
            }
        },
        "InputMode": "File"
    },
    {
        "ChannelName": "eval",
        "DataSource": {
            "S3DataSource": {
                "S3DataDistributionType": "FullyReplicated",
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://<bucket>/b.csv"
            }
        },
        "InputMode": "File"
    }
]
SageMaker downloads the data specified above from S3 to the /opt/ml/input/data/channel_name directory in the Docker container. In this case, the algorithm container should be able to find the input data under:
/opt/ml/input/data/train/a.csv
/opt/ml/input/data/eval/b.csv
You can find more details in https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-training-algo.html

How to pass the output of one ARM template to the input parameter of the next ARM template in Continuous Deployment using VSTS?

I have a ServiceBuswithQueue ARM template that has the output section like this below:
"outputs": {
"serviceBusNamespaceName": {
"type": "string",
"value": "[parameters('serviceBusNamespaceName')]"
},
"namespaceConnectionString": {
"type": "string",
"value": "[listkeys(variables('authRuleResourceId'), variables('sbVersion')).primaryConnectionString]"
},
"sharedAccessPolicyPrimaryKey": {
"type": "string",
"value": "[listkeys(variables('authRuleResourceId'), variables('sbVersion')).primaryKey]"
},
"serviceBusQueueName": {
"type": "string",
"value": "[parameters('serviceBusQueueName')]"
}
}
For that I created Continuous Integration (CI) and Continuous Deployment (CD) in VSTS. In CD I used a PowerShell task to deploy the above ARM template. But I want to pass an output of this ARM template, like "$(serviceBusQueueName)", to an input parameter of the next ARM template in Continuous Deployment.
I know the above scenario can be achieved using the ARM Outputs task between two ARM deployment tasks in Continuous Deployment, but I don't want that because I am currently using a PowerShell task to deploy the ARM template.
Before posting this question, I researched and found the following links, but they did not help resolve my issue.
Azure ARM templates - using the output of other deployments
How do I use ARM 'outputs' values another release task?
Can anyone please suggest how to resolve the above issue?
You can override parameters of the next template by publishing the outputs of the first deployment as release variables.
Capture the deployment outputs and publish them as variables in the script:
# Start the deployment
Write-Host "Starting deployment...";
$outputs = New-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroupName -Mode Incremental -TemplateFile $templateFilePath -TemplateParameterFile $parametersFilePath;
foreach ($key in $outputs.Outputs.Keys) {
    $type = $outputs.Outputs.Item($key).Type
    $value = $outputs.Outputs.Item($key).Value
    Write-Host "##vso[task.setvariable variable=$key;]$value"
}
You can display all the environment variables in a subsequent script:
Write-Host "Environment variables:"
gci env:* | sort-object name
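A subsequent PowerShell task can then consume that variable, for example to override a parameter of the next template. This is only a sketch: it assumes the next template defines a serviceBusQueueName parameter (New-AzureRmResourceGroupDeployment exposes template parameters as dynamic cmdlet parameters), and $nextTemplateFilePath is a placeholder.
# The variable published via ##vso[task.setvariable] is available to later tasks
# as an environment variable (or as $(serviceBusQueueName) in task inputs).
New-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroupName `
    -Mode Incremental `
    -TemplateFile $nextTemplateFilePath `
    -serviceBusQueueName $env:SERVICEBUSQUEUENAME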

How to set permissions for an Azure Active Directory application in Azure Data Lake Store using PowerShell commands

Hi,
I am trying to set AAD (Azure Active Directory) application permissions (read/write/execute and other settings) in ADLS (Azure Data Lake Store) using PowerShell commands.
I tried the PowerShell command below:
Set-AzureRmDataLakeStoreItemAclEntry -AccountName "adls" -Path /
-AceType User -Id (Get-AzureRmADApplication -ApplicationId 490eee0-2ee1-51ee-88er-0f53aerer7b).ApplicationId -Permissions All
But this command only sets/displays the ApplicationId under the "Access" properties in ADLS with read/write/execute access, and the result is not the same as what I get when I perform the manual steps for service authentication in ADLS.
Is there any other way to set permissions of AAD application in ADLS?
The -Id parameter of the Set-AzureRmDataLakeStoreItemAclEntry command should be the object ID of the Azure Active Directory user, group, or service principal for which to modify an ACE.
You can refer to the command below to assign the permission:
Set-AzureRmDataLakeStoreItemAclEntry -AccountName "accountName" -Path / -AceType User -Id
(Get-AzureRmADServicePrincipal -ServicePrincipalName "{applicationId}").Id -Permissions All
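To make the distinction explicit, here is the same thing as a small two-step sketch (the account name and application ID are placeholders):
# Resolve the service principal of the AAD application first.
$sp = Get-AzureRmADServicePrincipal -ServicePrincipalName "{applicationId}"
# $sp.ApplicationId is the application (client) ID; $sp.Id is the object ID
# that the ACL entry expects.
Set-AzureRmDataLakeStoreItemAclEntry -AccountName "accountName" -Path / -AceType User `
    -Id $sp.Id -Permissions All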
For more detail about this command, refer to the link below:
Set-AzureRmDataLakeStoreItemAclEntry
You need to pass the ObjectId (not the application ID) as the -Id parameter to Set-AzureRmDataLakeStoreItemAclEntry:
Set-AzureRmDataLakeStoreItemAclEntry -AccountName "adls" -Path / -AceType User `
    -Id (Get-AzureRmADApplication -ApplicationId 490eee0-2ee1-51ee-88er-0f53aerer7b).ObjectId -Permissions All

Automatic AWS DynamoDB to S3 export failing with "role/DataPipelineDefaultRole is invalid"

Precisely following the step-by-step instructions on this page, I am trying to export the contents of one of my DynamoDB tables to an S3 bucket. I created a pipeline exactly as instructed, but it fails to run. It seems to have trouble identifying/running an EC2 resource to do the export. When I access EMR through the AWS Console, I see entries like this:
Cluster: df-0..._#EmrClusterForBackup_2015-03-06T00:33:04   Terminated with errors   EMR service role arn:aws:iam::...:role/DataPipelineDefaultRole is invalid
Why am I getting this message? Do I need to set up/configure something else for the pipeline to run?
UPDATE: Under IAM -> Roles in the AWS console I am seeing this for DataPipelineDefaultResourceRole:
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "s3:List*",
            "s3:Put*",
            "s3:Get*",
            "s3:DeleteObject",
            "dynamodb:DescribeTable",
            "dynamodb:Scan",
            "dynamodb:Query",
            "dynamodb:GetItem",
            "dynamodb:BatchGetItem",
            "dynamodb:UpdateTable",
            "rds:DescribeDBInstances",
            "rds:DescribeDBSecurityGroups",
            "redshift:DescribeClusters",
            "redshift:DescribeClusterSecurityGroups",
            "cloudwatch:PutMetricData",
            "datapipeline:PollForTask",
            "datapipeline:ReportTaskProgress",
            "datapipeline:SetTaskStatus",
            "datapipeline:PollForTask",
            "datapipeline:ReportTaskRunnerHeartbeat"
        ],
        "Resource": ["*"]
    }]
}
And this for DataPipelineDefaultRole:
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "s3:List*",
            "s3:Put*",
            "s3:Get*",
            "s3:DeleteObject",
            "dynamodb:DescribeTable",
            "dynamodb:Scan",
            "dynamodb:Query",
            "dynamodb:GetItem",
            "dynamodb:BatchGetItem",
            "dynamodb:UpdateTable",
            "ec2:DescribeInstances",
            "ec2:DescribeSecurityGroups",
            "ec2:RunInstances",
            "ec2:CreateTags",
            "ec2:StartInstances",
            "ec2:StopInstances",
            "ec2:TerminateInstances",
            "elasticmapreduce:*",
            "rds:DescribeDBInstances",
            "rds:DescribeDBSecurityGroups",
            "redshift:DescribeClusters",
            "redshift:DescribeClusterSecurityGroups",
            "sns:GetTopicAttributes",
            "sns:ListTopics",
            "sns:Publish",
            "sns:Subscribe",
            "sns:Unsubscribe",
            "iam:PassRole",
            "iam:ListRolePolicies",
            "iam:GetRole",
            "iam:GetRolePolicy",
            "iam:ListInstanceProfiles",
            "cloudwatch:*",
            "datapipeline:DescribeObjects",
            "datapipeline:EvaluateExpression"
        ],
        "Resource": ["*"]
    }]
}
Do these need to be modified somehow?
I ran into the same error.
In IAM, attach the AWSDataPipelineRole managed policy to DataPipelineDefaultRole.
I also had to update the trust relationship to the following (it needed ec2, which is not in the documentation):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "ec2.amazonaws.com",
                    "elasticmapreduce.amazonaws.com",
                    "datapipeline.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
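If you prefer to script these steps, roughly the same thing can be done with AWS Tools for PowerShell. This is only a sketch: it assumes the module is installed and credentials are configured, that the trust document above is saved as trust.json, and the managed policy ARN should be verified in the IAM console.
# Attach the AWSDataPipelineRole managed policy to DataPipelineDefaultRole
Register-IAMRolePolicy -RoleName "DataPipelineDefaultRole" `
    -PolicyArn "arn:aws:iam::aws:policy/service-role/AWSDataPipelineRole"
# Replace the role's trust relationship with the document shown above
Update-IAMAssumeRolePolicy -RoleName "DataPipelineDefaultRole" `
    -PolicyDocument (Get-Content -Raw "trust.json")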
There is a similar question in the AWS forum, and it seems to be related to an issue with managed policies:
https://forums.aws.amazon.com/message.jspa?messageID=606756
In that thread, they recommend using specific inline policies for both the access and trust policies to define those roles, changing some permissions. Oddly enough, the specific inline policies can be found at:
http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-iam-roles.html
I had the same issue. The managed policies were correct in my case, but I had to update the trust relationships for both the DataPipelineDefaultRole and DataPipelineDefaultResourceRole roles using the documentation Gonfva linked to above as they were out of date.
The issue might be with the IAM roles. The following might help, although not in all cases.
I had the same problem when I was trying to export DynamoDB data to S3 using Data Pipeline. The issue is with the two roles used by the pipeline:
Resource Role: DataPipelineDefaultResourceRole
Role: DataPipelineDefaultRole
Solution
Go to IAM -> Roles -> DataPipelineDefaultResourceRole and attach AmazonDynamoDBFullAccess and AmazonS3FullAccess policies to this role.
Do the same for DataPipelineDefaultRole.
Please note: You should give restricted DynamoDB and S3 access based upon your use case.
Try running your data pipeline now. It should be in the Running state.
