I have the structure below coming in via a webhook, and I'm having trouble working out whether there is a built-in action in Logic Apps to get a nicely formatted array of objects from the rows inside the table, where each item has the column name and its associated value.
"SearchResults": {
"tables": [
{
"name": "PrimaryResult",
"columns": [
{
"name": "TimeGenerated",
"type": "datetime"
},
{
"name": "ResourceGroup",
"type": "string"
},
{
"name": "ActivityStatusValue",
"type": "string"
},
{
"name": "d_resource",
"type": "dynamic"
},
{
"name": "c_title",
"type": "dynamic"
},
{
"name": "c_details",
"type": "dynamic"
}
],
"rows": [
[
"2020-06-18T16:30:07.89Z",
"USEASTPROD",
"Updated",
"aueglbwvhypap07",
"Remote disk disconnected",
"We're sorry, your virtual machine is unavailable because of connectivity loss to the remote disk. An unexpected problem is preventing us from automatically recovering your virtual machine."
],
[
"2020-06-18T16:30:07.89Z",
"USEASTPROD",
"Updated1",
"agggggypap07",
"Remote disk disconnected",
"We're sorry, your virtual machine is unavailable because of connectivity loss to the remote disk. An unexpected problem is preventing us from automatically recovering your virtual machine."
]
]
}
]
}
I'd like it to be an array where each column name is the key and each value from the rows is its value, like below:
"rows": [
[
"timeGenerated" :"2020-06-18T16:30:07.89Z",
"ResourceGroup": "USEASTPROD",
"ActivityStatusValue":"Updated",
"d_resource" : "aueglbwvhypap07",
"c_title" : "Remote disk disconnected",
"c_details": "We're sorry, your virtual machine is unavailable because of connectivity loss to the remote disk. An unexpected problem is preventing us from automatically recovering your virtual machine."
],
[
"timeGenerated" :"2020-06-18T16:30:07.89Z",
"ResourceGroup": "USEASTPROD",
"ActivityStatusValue":"Updated1",
"d_resource" : "agggggypap07",
"c_title" : "Remote disk disconnected",
"c_details": "We're sorry, your virtual machine is unavailable because of connectivity loss to the remote disk. An unexpected problem is preventing us from automatically recovering your virtual machine."
]
]
For this requirement, we can use Liquid with an integration account in the logic app to implement it. Please refer to the steps below:
1. We need to create an integration account in the Azure portal first and link it to your logic app. You can refer to this tutorial.
2. In my logic app, I initialize a variable and store your data (wrapped in an extra {} so it is valid JSON) to simulate your situation.
3. Then use the "Parse JSON" action to parse the string above.
4. Create a Liquid map locally (I named it testRow.liquid) with the code below:
{% assign rows = content.SearchResults.tables[0].rows %}
{
"rows": [
{% for item in rows %}
{
"timeGenerated": "{{item[0]}}",
"ResourceGroup": "{{item[1]}}",
"ActivityStatusValue": "{{item[2]}}",
"d_resource": "{{item[3]}}",
"c_title": "{{item[4]}}",
"c_details": "{{item[5]}}"
}{% unless forloop.last %},{% endunless %}
{% endfor %}
]
}
Upload the Liquid map (testRow.liquid) to your integration account; for this step you can refer to this tutorial.
5. Then use the "Transform JSON to JSON" action, choose the Body from the "Parse JSON" action above, and select the map you uploaded.
6. After running the logic app, we can get the result as below:
The whole result JSON is:
{
"rows": [
{
"timeGenerated": "6/18/2020 4:30:07 PM",
"ResourceGroup": "USEASTPROD",
"ActivityStatusValue": "Updated",
"d_resource": "aueglbwvhypap07",
"c_title": "Remote disk disconnected",
"c_details": "We're sorry, your virtual machine is unavailable because of connectivity loss to the remote disk. An unexpected problem is preventing us from automatically recovering your virtual machine."
},
{
"timeGenerated": "6/18/2020 4:30:07 PM",
"ResourceGroup": "USEASTPROD",
"ActivityStatusValue": "Updated1",
"d_resource": "agggggypap07",
"c_title": "Remote disk disconnected",
"c_details": "We're sorry, your virtual machine is unavailable because of connectivity loss to the remote disk. An unexpected problem is preventing us from automatically recovering your virtual machine."
}
]
}
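Just to illustrate what the map is doing (outside the Logic App): the reshaping is simply a pairing of each column name with the row value at the same position. A rough Python sketch, assuming the webhook body is available as a string named payload_text and wrapped in an outer {} as in step 2:
import json

# Illustration only, not part of the Logic App flow.
# payload_text is assumed to hold the webhook body, wrapped in an outer {}.
data = json.loads(payload_text)
table = data["SearchResults"]["tables"][0]
column_names = [column["name"] for column in table["columns"]]
rows_as_objects = [dict(zip(column_names, row)) for row in table["rows"]]
print(rows_as_objects)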
By the way:
The format of the expected output you provided is not valid JSON; we must use {} instead of [] for each item.
The Liquid map will convert your datetime values to another format automatically.
Hope it helps~
I am looking into Azure AD SCIM provisioning and I have a question I am hoping to get some help with. My use case is as follows:
1. I created a Group in Azure AD and added "John Smith" and "Jane Smith" as members to it.
2. I went over to my Non-Gallery application, added the Group created above to my application, and triggered an On-Demand provisioning.
3. Both "John Smith" and "Jane Smith" were successfully created in my local database.
4. I removed "John Smith" from my group and triggered an On-Demand provisioning again.
My expectation was that the following PATCH request would be sent by Azure AD:
"Operations": [
{
"op": "Remove",
"path": "members",
"value": "john-smith-id"
}
]
but instead Azure AD sends a PATCH request to /Users with the following body
"schemas": [
"urn:ietf:params:scim:api:messages:2.0:PatchOp"
],
"Operations": [
{
"op": "Add",
"path": "displayName",
"value": "John Smith"
}
]
and another PATCH request to /Groups with the following body
"schemas": [
"urn:ietf:params:scim:api:messages:2.0:PatchOp"
],
"Operations": [
{
"op": "Add",
"path": "externalId",
"value": "some-guid"
}
]
Is this correct? I feel like I am messing something up when removing the member from the Group, which isn't triggering the desired PATCH request.
After step #4, I would recommend checking if the user has successfully been removed from the group.
Also, make sure that you're using the right rule ID in the on-demand provisioning request. One easy way to do this is to try it through the UI and look at the network traffic (Ctrl+Shift+I).
The rule ID can be found in the schema.
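If you want to trigger on-demand provisioning outside the portal UI and inspect what is sent, a rough sketch of the underlying Microsoft Graph call is below. The servicePrincipalId, jobId, ruleId, and group objectId are placeholders you would take from your own tenant (the ruleId comes from the synchronization schema):
# Hedged sketch of calling provisionOnDemand via Microsoft Graph with the requests library.
# All IDs and the token are placeholders.
import requests

token = "<access token with the synchronization permissions your tenant requires>"
sp_id = "<servicePrincipalId of the provisioning app>"
job_id = "<synchronization jobId>"

body = {
    "parameters": [
        {
            "ruleId": "<ruleId from the synchronization schema>",
            "subjects": [
                {"objectId": "<group objectId>", "objectTypeName": "Group"}
            ],
        }
    ]
}

resp = requests.post(
    f"https://graph.microsoft.com/v1.0/servicePrincipals/{sp_id}"
    f"/synchronization/jobs/{job_id}/provisionOnDemand",
    headers={"Authorization": f"Bearer {token}"},
    json=body,
)
print(resp.status_code, resp.json())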
For some of our collections, when we run a Collections API DELETE synchronously, followed immediately by a Configset API DELETE for the underlying configset, we end up with a broken collection state.
I have been unable to reproduce this issue in a test environment; it only happens inconsistently on the live production instances, so it may be load/race-condition related.
Running a COLSTATUS against the broken collection provides the following response:
{
"responseHeader": {
"status": 404,
"QTime": 33
},
"collection_19744": {
"stateFormat": 2,
"znodeVersion": 51,
"properties": {
"autoAddReplicas": "false",
"maxShardsPerNode": "1",
"nrtReplicas": "3",
"pullReplicas": "0",
"replicationFactor": "3",
"router": {
"name": "compositeId"
},
"tlogReplicas": "0"
},
"activeShards": 1,
"inactiveShards": 0
},
"error": {
"metadata": [
"error-class",
"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException",
"root-error-class",
"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException"
],
"msg": "Error from server at http://solr1.prod-internal:8983/solr/collection_19744_shard1_replica_n4: Expected mime type application/octet-stream but got text/html. <html>\n<head>\n<meta http-equiv=\"Content-Type\" content=\"text/html;charset=utf-8\"/>\n<title>Error 404 Not Found</title>\n</head>\n<body><h2>HTTP ERROR 404</h2>\n<p>Problem accessing /solr/collection_19744_shard1_replica_n4/admin/segments. Reason:\n<pre> Not Found</pre></p>\n</body>\n</html>\n",
"code": 404
}
}
The underlying shard data for the collection has been successfully removed from disk and is not present on any of the Solr nodes. The /collections/collection_19744 node has also been successfully deleted from ZooKeeper, which I verified using the zkcli script: it returns a NoNode for /collections/collection_19744 message.
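(For reference, an equivalent check from a script, assuming the kazoo client and direct access to the ZooKeeper ensemble, looks roughly like this; the hosts are placeholders:)
# Sketch: confirm the collection znode is really gone, using kazoo (assumed available).
from kazoo.client import KazooClient

zk = KazooClient(hosts="zk1:2181,zk2:2181,zk3:2181")  # placeholder ensemble
zk.start()
print(zk.exists("/collections/collection_19744"))  # prints None if the node is absent
zk.stop()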
As the COLSTATUS is broken, we cannot delete the associated configset; doing so results in a "Can not delete ConfigSet as it is currently being used by collection [collection_19744]" message, which is false.
Where exactly does COLSTATUS get its collection meta information, given that the /collections/collection_19744 node is absent in ZooKeeper?
I want to remove the broken collection metadata so I can then remove the configset and recreate the collection with the original naming.
I created an event through a shared mailbox in Graph API.
https://graph.microsoft.com/v1.0/users/{shared-user-id}/calendars/{shared-calendar-id}/events
{
"subject": "New Event Test",
"body": {
"contentType": "HTML",
"content": "Mail FLow Test"
},
"start": {
"dateTime": "2021-01-29T12:00:00",
"timeZone": "Eastern Standard Time"
},
"end": {
"dateTime": "2021-01-30T14:00:00",
"timeZone": "Eastern Standard Time"
},
"attendees": [
{
"emailAddress": {
"address":"calendar#contoso.com",
"name": "Calendar Organizer"
},
"type": "required"
}
]
}
This creates the event successfully, and after that, I patched the event with extended data using an open extension.
https://graph.microsoft.com/v1.0/users/{user-id}/calendars/{calendar-id}/events/{just-created-event-id}
{
"extensions": [
{
"#odata.type": "microsoft.graph.openTypeExtension",
"extensionName": "Com.Contoso.Events",
"courseId": 22,
"materialId": 75,
"courseType": "video"
}
]
}
This doesn't seem to work; it responds with "Access is denied."
https://graph.microsoft.com/v1.0/users/{shared-user-id}/calendars/{shared-calendar-id}/events?$expand=extensions($filter=id eq 'Microsoft.OutlookServices.OpenTypeExtension.Com.Contoso.Events')
It responds with ErrorAccessDenied and the error message "Access is denied. Check credentials and try again".
But if I try this without expanding extensions, then it works.
At first I couldn't even create an event, because it responded with the same error message, "Access is denied. Check credentials and try again", so I added the API permission MailboxSettings.ReadWrite in my Azure AD, which made event creation through the shared mailbox work.
Why can I create or get events but not add or expand extensions?
Moving my comment here so that this question is treated as answered.
The method you are using is incorrect. Please refer to this sample to create the open extension.
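In short, for an existing event the open extension is created with a POST to the event's extensions navigation property, using @odata.type (not #odata.type) and without wrapping it in an extensions array. A rough sketch with placeholder token and event id, targeting the signed-in user's own calendar in line with the conclusion below:
# Sketch: create an open extension on an event via Microsoft Graph (requests library).
# Token and event id are placeholders; this targets the signed-in user's calendar (/me).
import requests

token = "<delegated access token with Calendars.ReadWrite>"
event_id = "<just-created-event-id>"

extension = {
    "@odata.type": "microsoft.graph.openTypeExtension",
    "extensionName": "Com.Contoso.Events",
    "courseId": 22,
    "materialId": 75,
    "courseType": "video",
}

resp = requests.post(
    f"https://graph.microsoft.com/v1.0/me/events/{event_id}/extensions",
    headers={"Authorization": f"Bearer {token}"},
    json=extension,
)
print(resp.status_code, resp.json())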
But based on my test, we cannot use an admin (or a delegated user or a shared mailbox member) to create the extension for the shared mailbox (even if I have added the Calendars.ReadWrite.Shared permission). It will give the 403 error you have encountered.
When I sign in as the shared mailbox user, it can create the open extension for itself.
So the conclusion is: when we use delegated permissions (a user token), we can only create an open extension for the currently signed-in user.
I have a Security System with the traits action.devices.traits.ArmDisarm and action.devices.traits.StatusReport, and some other sensors: a WaterLeak Sensor, a Door Sensor, ...
I report errors about the other devices through the StatusReport state.
For example: when the door sensor detects that the door is open, the security system must report a deviceOpen error.
When I say "Is my security system ok?", my server's response to the query intent is the JSON below, but Google Assistant says that it couldn't reach my action (an unexpected error happened).
Is there anything wrong with this response?
{
"requestId": "10417064006786362499",
"payload": {
"devices": {
"3rL3QL7Kq2HrQjs53Y7o": {
"isArmed": true,
"currentStatusReport": [
{
"blocking": true,
"deviceTarget": "4BCIpzBWpgLA24mMI7r2",
"priority": 0,
"statusCode": "deviceOpen"
},
{
"blocking": true,
"deviceTarget": "MxRCd6ERRSWzYzyNTE8S",
"priority": 0,
"statusCode": "waterLeakDetected"
}
],
"status": "EXCEPTIONS",
"online": true
}
}
}
}
In Firebase Console there are no errors.
Logs in Firebase Console
Your response to the query intent looks right, but there might be an error in other parts of the process. You can follow the Troubleshooting Guide to see how your failed intent is counted in the Smart Home metrics and what details show up in your logs. (Firebase logs only give info about your server; the logging mentioned in the guide, Google Cloud Logging, is different and more comprehensive for intent handling.)
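If it helps, a minimal sketch for pulling recent error entries out of Cloud Logging for the project linked to your Action, assuming the google-cloud-logging Python client and default credentials (the project id is a placeholder, and you can narrow the filter further as described in the guide):
# Sketch: list recent error entries from Google Cloud Logging (assumptions noted above).
from google.cloud import logging as gcloud_logging

client = gcloud_logging.Client(project="<your-actions-project-id>")
entries = client.list_entries(filter_="severity>=ERROR", order_by=gcloud_logging.DESCENDING)
for i, entry in enumerate(entries):
    if i >= 10:  # just look at the ten most recent
        break
    print(entry.timestamp, entry.payload)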
Precisely following the step-by-step instructions on this page, I am trying to export the contents of one of my DynamoDB tables to an S3 bucket. I create a pipeline exactly as instructed, but it fails to run. It seems to have trouble identifying/running an EC2 resource to do the export. When I access EMR through the AWS Console, I see entries like this:
Cluster: df-0..._#EmrClusterForBackup_2015-03-06T00:33:04
Terminated with errors
EMR service role arn:aws:iam::...:role/DataPipelineDefaultRole is invalid
Why am I getting this message? Do I need to set up/configure something else for the pipeline to run?
UPDATE: Under IAM -> Roles in the AWS console I am seeing this for DataPipelineDefaultResourceRole:
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": [
"s3:List*",
"s3:Put*",
"s3:Get*",
"s3:DeleteObject",
"dynamodb:DescribeTable",
"dynamodb:Scan",
"dynamodb:Query",
"dynamodb:GetItem",
"dynamodb:BatchGetItem",
"dynamodb:UpdateTable",
"rds:DescribeDBInstances",
"rds:DescribeDBSecurityGroups",
"redshift:DescribeClusters",
"redshift:DescribeClusterSecurityGroups",
"cloudwatch:PutMetricData",
"datapipeline:PollForTask",
"datapipeline:ReportTaskProgress",
"datapipeline:SetTaskStatus",
"datapipeline:PollForTask",
"datapipeline:ReportTaskRunnerHeartbeat"
],
"Resource": ["*"]
}]
}
And this for DataPipelineDefaultRole:
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": [
"s3:List*",
"s3:Put*",
"s3:Get*",
"s3:DeleteObject",
"dynamodb:DescribeTable",
"dynamodb:Scan",
"dynamodb:Query",
"dynamodb:GetItem",
"dynamodb:BatchGetItem",
"dynamodb:UpdateTable",
"ec2:DescribeInstances",
"ec2:DescribeSecurityGroups",
"ec2:RunInstances",
"ec2:CreateTags",
"ec2:StartInstances",
"ec2:StopInstances",
"ec2:TerminateInstances",
"elasticmapreduce:*",
"rds:DescribeDBInstances",
"rds:DescribeDBSecurityGroups",
"redshift:DescribeClusters",
"redshift:DescribeClusterSecurityGroups",
"sns:GetTopicAttributes",
"sns:ListTopics",
"sns:Publish",
"sns:Subscribe",
"sns:Unsubscribe",
"iam:PassRole",
"iam:ListRolePolicies",
"iam:GetRole",
"iam:GetRolePolicy",
"iam:ListInstanceProfiles",
"cloudwatch:*",
"datapipeline:DescribeObjects",
"datapipeline:EvaluateExpression"
],
"Resource": ["*"]
}]
}
Do these need to be modified somehow?
I ran into the same error.
In IAM, attach the AWSDataPipelineRole managed policy to DataPipelineDefaultRole.
I also had to update the Trust Relationship to the following (ec2.amazonaws.com was needed, which is not in the documentation):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"ec2.amazonaws.com",
"elasticmapreduce.amazonaws.com",
"datapipeline.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
}
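If you'd rather script those two changes than click through the console, a rough boto3 sketch is below; the managed-policy ARN path and the trust-document file name are my assumptions, so verify the exact ARN in IAM:
# Sketch: attach the managed policy and update the trust relationship with boto3.
import boto3

iam = boto3.client("iam")

iam.attach_role_policy(
    RoleName="DataPipelineDefaultRole",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSDataPipelineRole",  # verify the exact ARN/path in IAM
)

with open("trust.json") as f:  # the trust document shown above, saved locally
    trust_doc = f.read()

iam.update_assume_role_policy(
    RoleName="DataPipelineDefaultRole",
    PolicyDocument=trust_doc,
)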
There is a similar question in the AWS forum, and it seems it is related to an issue with managed policies:
https://forums.aws.amazon.com/message.jspa?messageID=606756
In that question, they recommend using specific inline policies for both the access and trust policies to define those roles, changing some permissions. Oddly enough, the specific inline policies can be found at:
http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-iam-roles.html
I had the same issue. The managed policies were correct in my case, but I had to update the trust relationships for both the DataPipelineDefaultRole and DataPipelineDefaultResourceRole roles using the documentation Gonfva linked to above, as they were out of date.
The issue might be with the IAM roles.
The following might help, although not in all cases.
I had the same problem when I was trying to export DynamoDB data to S3 using Data Pipeline. The issue is with the roles used in the Data Pipeline:
Resource Role - DataPipelineDefaultResourceRole
Role - DataPipelineDefaultRole
Solution
Go to IAM -> Roles -> DataPipelineDefaultResourceRole and attach AmazonDynamoDBFullAccess and AmazonS3FullAccess policies to this role.
Do the same for DataPipelineDefaultRole.
Please note: You should give restricted DynamoDB and S3 access based upon your use case.
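If you prefer to do the attachment from a script rather than the console, a rough boto3 equivalent is below (the same caveat about scoping the access down applies):
# Sketch: attach the two managed policies to both Data Pipeline roles with boto3.
import boto3

iam = boto3.client("iam")
policies = [
    "arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess",
    "arn:aws:iam::aws:policy/AmazonS3FullAccess",
]
for role in ("DataPipelineDefaultResourceRole", "DataPipelineDefaultRole"):
    for arn in policies:
        iam.attach_role_policy(RoleName=role, PolicyArn=arn)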
Try running your data pipeline now. It should be in the Running state.