I'm looking to limit a device to just its own resources (its shadow) inside AWS IoT, based on the certificate it uses to authenticate.
Device1 is attached to Cert1. I want a single generic policy that only lets Device1 update the shadow of Device1 and not Device2,
with everything keyed off the certificate the device uses to authenticate.
The policy below doesn't seem to work - any help?
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "iot:Connect",
"Resource": "*",
"Condition": {
"Bool": {
"iot:Connection.Thing.IsAttached": [
"true"
]
}
}
},
{
"Effect": "Allow",
"Action": [
"iot:Publish",
"iot:Receive"
],
"Resource": [
"arn:aws:iot:us-east-1:xxxxxx:topic/${iot:Connection.Thing.ThingTypeName}/${iot:Connection.Thing.ThingName}",
"arn:aws:iot:us-east-1:xxxxxx:topic/${iot:Connection.Thing.ThingTypeName}/${iot:Connection.Thing.ThingName}/*"
]
},
{
"Effect": "Allow",
"Action": [
"iot:Subscribe"
],
"Resource": [
"arn:aws:iot:us-east-1:xxxxxx:topicfilter/${iot:Connection.Thing.ThingTypeName}/${iot:Connection.Thing.ThingName}",
"arn:aws:iot:us-east-1:xxxxxx:topicfilter/${iot:Connection.Thing.ThingTypeName}/${iot:Connection.Thing.ThingName}/*"
]
}
]
}
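One thing worth checking with the policy above (an assumption on my part, since the connection details aren't shown): thing policy variables such as ${iot:Connection.Thing.ThingName} and ${iot:Connection.Thing.ThingTypeName} only resolve when the device connects with a client ID that matches a registered thing name, the certificate is attached to that thing, and, for ThingTypeName, the thing actually has a thing type. A quick boto3 sketch to verify those preconditions (region and thing name are placeholders):
import boto3

# Placeholders: adjust the region and thing name to your setup.
iot = boto3.client("iot", region_name="us-east-1")
thing_name = "Device1"

# Thing policy variables resolve from the MQTT client ID, so the device must
# connect with its thing name as the client ID.
thing = iot.describe_thing(thingName=thing_name)
print("Thing type:", thing.get("thingTypeName"))  # missing -> ThingTypeName won't resolve

# The certificate the device authenticates with must be attached to the thing,
# otherwise iot:Connection.Thing.IsAttached is false.
principals = iot.list_thing_principals(thingName=thing_name)
print("Attached certificates:", principals["principals"])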
This is what I ended up using to limit the device to its own resources and to require the client ID to be the name of the AWS thing as well:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "iot:Connect",
"Resource": "*",
"Condition": {
"Bool": {
"iot:Connection.Thing.IsAttached": [
"true"
]
},
"ForAnyValue:StringEquals": {
"iot:ClientId": [
"${iot:Connection.Thing.ThingName}"
]
}
}
},
{
"Effect": "Allow",
"Action": "iot:Publish",
"Resource": "arn:aws:iot:us-east-1:xxx:topic/$aws/things/${iot:Connection.Thing.ThingName}/*"
},
{
"Effect": "Allow",
"Action": "iot:Subscribe",
"Resource": "arn:aws:iot:us-east-1:xxx:topicfilter/$aws/things/${iot:Connection.Thing.ThingName}/*"
},
{
"Effect": "Allow",
"Action": "iot:Receive",
"Resource": "arn:aws:iot:us-east-1:xxx:topic/$aws/things/${iot:Connection.Thing.ThingName}/*"
}
]
}
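For completeness, the policy above only takes effect once it is attached to the certificate, and the certificate is attached to the thing. A minimal boto3 sketch of that wiring (the policy name, certificate ARN and thing name are placeholders):
import boto3

iot = boto3.client("iot", region_name="us-east-1")

# Placeholders for illustration only.
policy_name = "device-scoped-shadow-policy"
cert_arn = "arn:aws:iot:us-east-1:xxxxxx:cert/abc123"
thing_name = "Device1"

# Attach the generic policy to the certificate the device authenticates with...
iot.attach_policy(policyName=policy_name, target=cert_arn)

# ...and attach the certificate to the thing so iot:Connection.Thing.IsAttached evaluates to true.
iot.attach_thing_principal(thingName=thing_name, principal=cert_arn)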
Related
I've created a Docker image for a web service that I'm trying to get running on SageMaker as an inference service.
I've created an execution role with the following permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::arn:aws:s3:::XXXXX-ai-sagemaker"
]
},
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::arn:aws:s3:::XXXXX-ai-sagemaker/*"
]
},
{
"Effect": "Allow",
"Action": [
"sagemaker:BatchPutMetrics",
"ecr:GetAuthorizationToken",
"ecr:ListImages"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage"
],
"Resource": [
"arn:aws:ecr:eu-central-1:553XXXXXXXX:repository/similarity"
]
},
{
"Effect": "Allow",
"Action": "cloudwatch:PutMetricData",
"Resource": "*",
"Condition": {
"StringLike": {
"cloudwatch:namespace": [
"*SageMaker*",
"*Sagemaker*",
"*sagemaker*"
]
}
}
},
{
"Effect": "Allow",
"Action": [
"logs:CreateLogStream",
"logs:PutLogEvents",
"logs:CreateLogGroup",
"logs:DescribeLogStreams"
],
"Resource": "arn:aws:logs:*:*:log-group:/aws/sagemaker/*"
}
]
}
The Docker image is hosted in a private ECR repository, and the role has permission to pull images from it.
When running the following Python code:
import boto3
from time import gmtime, strftime

model_name = 'similarity-model' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())

image_config = {
    'RepositoryAccessMode': 'Vpc'
}

vpc_config = {
    'SecurityGroupIds': ['sg-1ebfXXXX'],
    'Subnets': ['subnet-8a47XXXX', 'subnet-b707XXX']
}

primary_container = {
    'ContainerHostname': 'ModelContainer',
    'Image': '553XXXXXX.dkr.ecr.eu-central-1.amazonaws.com/similarity:latest',
    'ImageConfig': image_config
}

sm = boto3.client('sagemaker')
execution_role_arn = 'arn:aws:iam::553XXXXXX:role/service-role/SageMaker-sagemaker'

try:
    resp = sm.create_model(
        ModelName=model_name,
        PrimaryContainer=primary_container,
        ExecutionRoleArn=execution_role_arn,
        VpcConfig=vpc_config,
    )
except Exception as e:
    print(f'error calling CreateModel operation: {e}')
else:
    print(resp)
I receive the following error from create_model():
An error occurred (ValidationException) when calling the CreateModel operation: Using ECR image "553XXXXXXX.dkr.ecr.eu-central-1.amazonaws.com/similarity:latest" with Vpc repository access mode is not supported.
Any suggestions would be good at this point :)
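A likely cause (an assumption based on the error text, not something confirmed in the question): 'Vpc' repository access mode is meant for images pulled from a private Docker registry reachable through your VPC, while images stored in ECR use the default 'Platform' access mode. A minimal sketch of the same call with that change, reusing the variables from the snippet above (the IDs remain placeholders from the question):
# Sketch only: ECR-hosted images use the default 'Platform' access mode;
# 'Vpc' is for a private Docker registry inside your VPC.
image_config = {
    'RepositoryAccessMode': 'Platform'
}

primary_container = {
    'ContainerHostname': 'ModelContainer',
    'Image': '553XXXXXX.dkr.ecr.eu-central-1.amazonaws.com/similarity:latest',
    'ImageConfig': image_config  # or omit ImageConfig entirely, since 'Platform' is the default
}

resp = sm.create_model(
    ModelName=model_name,
    PrimaryContainer=primary_container,
    ExecutionRoleArn=execution_role_arn,
    VpcConfig=vpc_config,  # the model containers can still run inside the VPC
)
print(resp)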
I'm trying to deploy a React app in AWS Elastic and I added some parameters. However, when trying to build via the pipeline and CodeBuild, I'm getting the error below:
Phase context status code: Decrypted Variables Error Message: AccessDeniedException: User: arn:aws:sts::MYCODE:assumed-role/codebuild-QA-service-role/AWSCodeBuild-4701b85f-fc5b-49c8-b2f9-f634930aca4f is not authorized to perform: ssm:GetParameters on resource: arn:aws:ssm:sa-east-1:MYCODE:parameter/PARAMETERVALUE because no identity-based policy allows the ssm:GetParameters action
I tried attaching the policy below to the codebuild-QA-service-role role to allow it, but I'm still getting that error:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ssm:DescribeParameters"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ssm:GetParameters"
],
"Resource": "arn:aws:ssm:sa-east-1:CODEHERE:parameter/*"
}
]
}
If your parameters are of type SecureString, you also need permission to decrypt them. Check the docs for the policy needed:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ssm:GetParameter*"
],
"Resource": "arn:aws:ssm:sa-east-1:CODEHERE:parameter/*"
},
{
"Effect": "Allow",
"Action": [
"kms:Decrypt"
],
"Resource": "arn:aws:kms:sa-east-1:CODEHERE:key/YOURKEY"
}
]
}
https://docs.aws.amazon.com/kms/latest/developerguide/services-parameter-store.html#parameter-store-policies
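Once the role has both statements, a quick sanity check (a verification sketch; the parameter name and region are taken from the error message and may need adjusting) is to fetch the parameter with decryption using credentials for the CodeBuild service role:
import boto3

# Run this with credentials for the codebuild-QA-service-role (e.g. via an assumed-role session).
ssm = boto3.client("ssm", region_name="sa-east-1")

resp = ssm.get_parameter(Name="PARAMETERVALUE", WithDecryption=True)
print(resp["Parameter"]["Value"])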
I am creating an AWS IoT thing dynamically that can publish to any topic and listen to any topic on the AWS IoT Core broker.
The policy I am using is very broad, and this thing can perform all operations on the server:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "iot:*",
"Resource": "*"
}
]
}
Now I want to narrow this down. I want to allow this thing to publish only to the topics TOPICS-TEST/# and to subscribe ONLY to the topics TOPICS-TEST/#. Even though we have many different topics on the broker, I want this thing to only have access to topics that start with TOPICS-TEST/.
In order to do that I was checking the documentation and I created this policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"iot:Connect"
],
"Resource": [
"arn:aws:iot:us-east-1:xxxx:client/${iot:Connection.Thing.ThingName}"
]
},
{
"Effect": "Allow",
"Action": [
"iot:Subscribe"
],
"Resource": [
"arn:aws:iot:us-east-1:xxxx:topicfilter/TOPICS-TEST/*"
]
},
{
"Effect": "Allow",
"Action": [
"iot:Receive"
],
"Resource": [
"arn:aws:iot:us-east-1:xxxx:topicfilter/TOPICS-TEST/*"
]
},
{
"Effect": "Allow",
"Action": [
"iot:Publish"
],
"Resource": [
"arn:aws:iot:us-east-1:xxxx:topicfilter/TOPICS-TEST/*"
]
}
]
}
The previous policy is not working.
I can't see anything and I can't publish anything.
What am I missing?
I figured out a way to do it:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"iot:Connect"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"iot:Subscribe"
],
"Resource": [
"arn:aws:iot:us-east-1:xxxxxxxx:topicfilter/TOPICS-TEST*"
]
},
{
"Effect": "Allow",
"Action": [
"iot:Receive"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"iot:Publish"
],
"Resource": [
"arn:aws:iot:us-east-1:xxxxxxxx:topic/TOPICS-TEST/*"
]
}
]
}
The previous policy allows the thing to receive messages from AWS IoT Core, to connect, to publish ONLY to the sub-topics TOPICS-TEST/... and to subscribe to TOPICS-TEST/.... This thing will not be able to see other topics on this broker.
My mistake was using ...:topicfilter/... for Publish. It should be ...:topic/....
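A quick way to exercise the policy end to end is to connect with the device certificate and try a publish and subscribe inside and outside the allowed prefix. A sketch using paho-mqtt 1.x (an assumption that it's installed; the endpoint, client ID and certificate paths are placeholders):
import paho.mqtt.client as mqtt

# Placeholders: your account's IoT data endpoint and the device's certificate files.
ENDPOINT = "xxxxxxxx-ats.iot.us-east-1.amazonaws.com"

client = mqtt.Client(client_id="my-thing")  # paho-mqtt 1.x constructor
client.tls_set(ca_certs="AmazonRootCA1.pem",
               certfile="device.pem.crt",
               keyfile="private.pem.key")

client.connect(ENDPOINT, 8883)
client.loop_start()

client.subscribe("TOPICS-TEST/#")            # allowed by the topicfilter statement
client.publish("TOPICS-TEST/demo", "hello")  # allowed by the topic statement
client.publish("OTHER/demo", "hello")        # not allowed; AWS IoT typically drops the connection

client.loop_stop()
client.disconnect()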
I agree with the solution, but what if you have millions of things and different topics for different shadows?
Do we have to attach this policy to every device?
In my opinion there should be some IAM-style policy that would apply in this case. I find this tutorial project very useful:
https://youtu.be/1Pk05kpBX2A
At the moment I am using the setup below in my firebase.json file. The service-worker.js file would cause the bundles to be cleared out of cache and downloaded fresh, but it required a hard Ctrl+R refresh to do so; it didn't happen automatically. My wildcard pattern matching on **/*.#(js|css) is now also causing my no-cache rule for /service-worker.js to stop working. It's no good to force the user to hard refresh with Ctrl+R whenever I deploy the next version, so I need a better solution than setting max-age on my bundles to 0.
Any ideas?
"hosting": {
"public": "build",
"ignore": [
"firebase.json",
"**/.*",
"**/node_modules/**"
],
"rewrites": [
{
"source": "**",
"destination": "/index.html"
}
],
"headers": [
{
"source": "**/*.#(eot|otf|ttf|ttc|woff|font.css)",
"headers": [
{
"key": "Access-Control-Allow-Origin",
"value": "*"
}
]
},
{
"source": "**/*.#(js|css)",
"headers": [
{
"key": "Cache-Control",
"value": "max-age=86400"
}
]
},
{
"source": "**/*.#(jpg|jpeg|gif|png)",
"headers": [
{
"key": "Cache-Control",
"value": "max-age=604800"
}
]
},
{
"source": "/service-worker.js",
"headers": [
{
"key": "Cache-Control",
"value": "no-cache"
}
]
}
]
}
}
I had a similar question but figured out how to do it. NOTE: in my case the service worker file is sw.js. You can find my config below. I just added a separate rule for sw.js before the wildcard rule matching all the JS and CSS files:
{
"hosting": {
"public": "public",
"ignore": ["firebase.json", "**/.*", "**/node_modules/**"],
"rewrites": [
{
"source": "**",
"destination": "/index.html"
}
],
"headers": [
{
"source": "**/*.#(jpg|jpeg|gif|png|svg|ico)",
"headers": [
{
"key": "Cache-Control",
"value": "max-age=7200"
}
]
},
{
"source": "sw.js",
"headers": [
{
"key": "Cache-Control",
"value": "max-age=0"
}
]
},
{
"source": "*/*.#(js|css)",
"headers": [
{
"key": "Cache-Control",
"value": "max-age=3600"
}
]
},
{
"source": "manifest.json",
"headers": [
{
"key": "Cache-Control",
"value": "max-age=86400"
}
]
}
]
}
}
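One way to confirm the rules are matching in the intended order after a deploy is to inspect the Cache-Control header Hosting actually returns for the service worker versus a bundle (a quick check sketch; the hostname and bundle path are placeholders):
import requests

# Placeholder hostname and bundle path for your Firebase Hosting site.
BASE = "https://your-app.web.app"

for path in ("/sw.js", "/static/js/main.js"):
    resp = requests.get(BASE + path)
    print(path, "->", resp.headers.get("Cache-Control"))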
I'm trying to create a JSON object that holds multiple Network Security Groups (NSGs), in order to build them and apply them to vNet subnets using "count" to minimize the template code. The Microsoft documentation covers how to create an object for a single NSG's settings under the "Using a property object in a copy loop" section, but that would require me to create a new parameter object for each NSG I need, and lengthy template code for each NSG.
I'm currently using the following parameter object to hold all information about a virtual network, including the NSGs. NSGs will be tied to subnets, with the first subnet, "GatewaySubnet", excluded from needing an NSG:
"vNetProperties": {
"value": {
"vNetAddressSpace": "10.136.0.0/16",
"subnetNames": [
"GatewaySubnet",
"Kemp-frontend-subnet",
"AD-backend-subnet"
],
"subnetRanges": [
"10.136.0.0/27",
"10.136.1.0/24",
"10.136.2.0/24"
],
"networkSecurityGroups": {
"value": {
"kempNSG": {
"value": {
"securityRules": [
{
"name": "HTTPS",
"description": "allow HTTPS connections",
"direction": "Inbound",
"priority": 100,
"sourceAddressPrefix": "*",
"destinationAddressPrefix": "10.0.0.0/24",
"sourcePortRange": "*",
"destinationPortRange": "443",
"access": "Allow",
"protocol": "Tcp"
},
{
"name": "HTTP",
"description": "allow HTTP connections",
"direction": "Inbound",
"priority": 100,
"sourceAddressPrefix": "*",
"destinationAddressPrefix": "10.0.0.0/24",
"sourcePortRange": "*",
"destinationPortRange": "80",
"access": "Allow",
"protocol": "Tcp"
}
]
}
},
"adNSG": {
"value": {
"securityRules": [
{
"name": "RDPAllow",
"description": "allow RDP connections",
"direction": "Inbound",
"priority": 100,
"sourceAddressPrefix": "*",
"destinationAddressPrefix": "10.0.0.0/24",
"sourcePortRange": "*",
"destinationPortRange": "3389",
"access": "Allow",
"protocol": "Tcp"
}
]
}
}
}
}
}
}
My template code to process the object is as follows:
{
"apiVersion": "2016-06-01",
"type": "Microsoft.Network/networkSecurityGroups",
"name": "[concat(parameters('vNetProperties').subnetNames[copyIndex(1)], '-nsg')]",
"location": "[resourceGroup().location]",
"copy": {
"name": "NSGs",
"count": "[length(array(parameters('vNetProperties').networkSecurityGroups))]"
},
"properties": {
"copy": [
{
"name": "securityRules",
"count": "[length(array(parameters('vNetProperties').networkSecurityGroups[copyIndex('securityRules')]))]",
"input": {
"description": "[parameters('vNetProperties').networkSecurityGroups[0].securityRules[0].description]",
"priority": "[parameters('vNetProperties').networkSecurityGroups[copyIndex('NSGs')].securityRules[copyIndex('securityRules')].priority]",
"protocol": "[parameters('vNetProperties').networkSecurityGroups[copyIndex('NSGs')].securityRules[copyIndex('securityRules')].protocol]",
"sourcePortRange": "[parameters('vNetProperties').networkSecurityGroups[copyIndex('NSGs')].securityRules[copyIndex('securityRules')].sourcePortRange]",
"destinationPortRange": "[parameters('vNetProperties').networkSecurityGroups[copyIndex('NSGs')].securityRules[copyIndex('securityRules')].destinationPortRange]",
"sourceAddressPrefix": "[parameters('vNetProperties').networkSecurityGroups[copyIndex('NSGs')].securityRules[copyIndex('securityRules')].sourceAddressPrefix]",
"destinationAddressPrefix": "[parameters('vNetProperties').networkSecurityGroups[copyIndex('NSGs')].securityRules[copyIndex('securityRules')].destinationAddressPrefix]",
"access": "[parameters('vNetProperties').networkSecurityGroups[copyIndex('NSGs')].securityRules[copyIndex('securityRules')].access]",
"direction": "[parameters('vNetProperties').networkSecurityGroups[copyIndex('NSGs')].securityRules[copyIndex('securityRules')].direction]"
}
}
]
}
}
My code right now most definitely does not work. I'm at the point where I need to validate whether this type of logic is even possible in ARM at this time. Is it possible to have an array where each item is itself an array, and to reference both levels in a fashion like array1[i].array2[j].name?
This approach won't work; you cannot have a resource copy loop and a properties copy loop in the same resource and reference objects like that (sadly).
Your workaround would be to create a nested deployment for each parent object (networkSecurityGroups) and, in that deployment, create a properties copy loop (security rules). That will work because each deployment then only has a single copy loop.
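If the nested deployments get unwieldy, another option (a sketch under the assumption that generating the template offline fits your workflow, rather than doing everything in ARM template language) is to expand the parameter object into one networkSecurityGroups resource per NSG with a short script:
import json

# Hypothetical: a trimmed copy of the "networkSecurityGroups" block from vNetProperties above.
nsgs = {
    "kempNSG": {"value": {"securityRules": [
        {"name": "HTTPS", "description": "allow HTTPS connections", "direction": "Inbound",
         "priority": 100, "sourceAddressPrefix": "*", "destinationAddressPrefix": "10.0.0.0/24",
         "sourcePortRange": "*", "destinationPortRange": "443", "access": "Allow", "protocol": "Tcp"},
    ]}},
}

resources = []
for name, wrapper in nsgs.items():
    resources.append({
        "apiVersion": "2016-06-01",
        "type": "Microsoft.Network/networkSecurityGroups",
        "name": f"{name}-nsg",
        "location": "[resourceGroup().location]",
        "properties": {
            # ARM expects each rule as {"name": ..., "properties": {...}}.
            "securityRules": [
                {"name": rule["name"],
                 "properties": {k: v for k, v in rule.items() if k != "name"}}
                for rule in wrapper["value"]["securityRules"]
            ]
        },
    })

print(json.dumps({"resources": resources}, indent=2))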