I'm trying to create three NSGs in a copy loop (this works) and then add three different security rules that contain multiple IP address ranges per security rule. I can make it work when specifying just a single IP address range per rule, and I can specify multiple ranges directly in the ARM template when not using a parameter, like below:
"sourceAddressPrefixes": [
"10.100.139.96/28",
"10.100.139.64/27"
],
But when I try to specify an array with multiple strings it doesn't work. So my question is: what should the parameter nsgPrefixes look like so that multiple ranges can be added per security rule?
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"location": {
"type": "string"
},
"nsgNames": {
"type": "array"
},
"nsgPrefixes": {
"type": "array"
}
},
"variables": {},
"resources": [
{
"type": "Microsoft.Network/networkSecurityGroups",
"apiVersion": "2020-11-01",
"name": "[concat(parameters('nsgNames')[copyIndex()])]",
"location": "[resourceGroup().location]",
"properties": {
"securityRules": [
{
"name": "DenyInternalSubnetInbound",
"properties": {
"protocol": "*",
"sourcePortRange": "*",
"destinationPortRange": "*",
"destinationAddressPrefix": "*",
"access": "Deny",
"priority": 4096,
"direction": "Inbound",
"sourcePortRanges": [],
"destinationPortRanges": [],
"sourceAddressPrefixes": [
"[concat(parameters('nsgPrefixes')[copyIndex()])]"
],
"destinationAddressPrefixes": []
}
}
]
},
"copy": {
"name": "NSGcopy",
"count": "[length(parameters('nsgNames'))]"
}
}
]
}
parameters file:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"virtualNetworks_vnet_conn_weu_001_name": {
"value": "vnet-conn-weu-001"
},
"location": {
"value": "westeurope"
},
"nsgNames": {
"value": [
"nsg-snet-weu-001",
"nsg-snet-weu-002",
"nsg-snet-weu-003"
]
},
//this works:
"nsgPrefixes": {
"value": [
"10.100.139.0/26",
"10.100.139.64/27",
"10.100.139.96/28"
]
},
//this does not work:
"nsgPrefixes2": {
"value": [
"10.100.139.0/26", "10.100.139.64/27"
"10.100.139.64/27", "10.100.139.96/28"
"10.100.139.96/28", "10.100.139.0/26"
]
},
}
}
With some assistance from Microsoft, I got the answer:
The parameter nsgPrefixes should be configured as an array of arrays and look like below (in the parameters file):
"nsgPrefixes": {
"value": [
["10.100.139.0/26", "10.100.139.64/27"],
["10.100.139.64/27", "10.100.139.96/28"],
["10.100.139.96/28", "10.100.139.0/26"]
]
}
In the template file I had a pair of outer brackets [] too many; they have been removed, so it now looks like below:
"sourceAddressPrefixes":
"[concat(parameters('nsgPrefixes')[copyIndex()])]",
That's it. This works and the IP address ranges are added to the security rule as expected.
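As a side note (my own observation, not part of the original answer), the concat() wrapper is not strictly necessary here: with the nested-array parameter, the element at copyIndex() is already an array of strings, so the rule could presumably also reference the parameter directly:
"sourceAddressPrefixes": "[parameters('nsgPrefixes')[copyIndex()]]",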
I have a JSON array of arbitrary length. Each item in the array is a nested block of JSON objects, they all have same properties but different values.
I need a JSON schema to check the array if the last block in the array has the values defined in the schema.
How should the scheme be defined so that it only considers the last block in the array and ignores all the blocks before in the array?
My current solution successfully validates the JSON objects if there is only one block in the array. As soon as I have more blocks, it fails because the other blocks are not valid against my schema - which is, of course, the expected behaviour.
In my example, the JSON array contains two nested blocks of JSON objects. These differ in the following items:
event.action = "[load|button]"
event.label = "[journey:device-only|submit,journey:device-only]"
type = "[page|track]"
An example of my data is:
[
{
"page": {
"path": "order/checkout/summary",
"language": "en"
},
"cart": {
"ordercase": "neworder",
"product_list": [
{
"name": "Apple iPhone 14 Plus",
"quantity": 1,
"price": 1000
}
]
},
"event": {
"action": "load",
"label": "journey:device-only"
},
"type": "page"
},
{
"page": {
"path": "order/checkout/summary",
"language": "en"
},
"cart": {
"ordercase": "neworder",
"product_list": [
{
"name": "Apple iPhone 14 Plus",
"quantity": 1,
"price": 1000
}
]
},
"event": {
"action": "button",
"label": "submit,journey:device-only",
},
"type": "track"
}
]
And here is the schema I use, which works fine for the second block if it were the only one in the array:
{
"type": "array",
"$schema": "http://json-schema.org/draft-07/schema#",
"items": {
"type": "object",
"required": ["event", "page", "type"],
"properties": {
"page": {
"type": "object",
"properties": {
"path": {
"const": "order/checkout/summary"
},
"language": {
"enum": ["de", "fr", "it", "en"]
}
},
"required": ["path", "language"]
},
"event": {
"type": "object",
"additionalProperties": false,
"properties": {
"action": {
"const": "button"
},
"label": {
"type": "string",
"pattern": "^[-_:, a-z0-9]*$",
"allOf": [
{
"type": "string",
"pattern": "^\\S*(?:(submit,|,submit))\\S*$"
},
{
"type": "string",
"pattern": "^\\S*(journey:(?:(device-only|device-plus)))\\S*$"
}
]
}
},
"required": ["action", "label"]
},
"type": {
"enum": ["track", "string"]
}
}
}
}
This seems to be the most authoritative documentation that I've found so far: https://docs.metaplex.com/nft-standard
{
"name": "Solflare X NFT",
"symbol": "",
"description": "Celebratory Solflare NFT for the Solflare X launch",
"seller_fee_basis_points": 0,
"image": "https://www.arweave.net/abcd5678?ext=png",
"animation_url": "https://www.arweave.net/efgh1234?ext=mp4",
"external_url": "https://solflare.com",
"attributes": [
{ "trait_type": "web", "value": "yes" },
{ "trait_type": "mobile", "value": "yes" },
{ "trait_type": "extension", "value": "yes" }
],
"collection": { "name": "Solflare X NFT", "family": "Solflare" },
"properties": {
"files": [
{
"uri": "https://www.arweave.net/abcd5678?ext=png",
"type": "image/png"
},
{
"uri": "https://watch.videodelivery.net/9876jkl",
"type": "unknown",
"cdn": true
},
{ "uri": "https://www.arweave.net/efgh1234?ext=mp4", "type": "video/mp4" }
],
"category": "video",
"creators": [
{ "address": "SOLFLR15asd9d21325bsadythp547912501b", "share": 100 }
]
}
}
These same docs state clearly that many fields are optional and should be omitted when not used. But which fields are required and which ones are optional?
It depends on what you want to use it for. The simplest requirements I have used were:
{
"name": "Solflare X NFT",
"seller_fee_basis_points": 0,
"image": "https://www.arweave.net/abcd5678?ext=png",
"properties": {
"files": [
{
"uri": "https://www.arweave.net/abcd5678?ext=png",
"type": "image/png"
}
],
"category": "image",
"creators": [
{ "address": "SOLFLR15asd9d21325bsadythp547912501b", "share": 100 }
]
}
}
There is no reason not to include the rest, as the cost of hosting this off-chain is minimal. I think most things would be optional, but the important ones for an NFT would be the image attribute, as otherwise the NFT won't be able to be displayed anywhere, and probably the properties field, because some wallets, DApps and marketplaces might use these fields to check the file type. Creators should also be added if you want to receive royalties; leaving this field out could result in your collection failing to be listed on marketplaces.
The short answer, though, is that the minimum is not defined anywhere, as removing certain things could break certain third-party DApps. Depending on how/where you want to use your NFT, I would find out the requirements if you are desperately trying to minimise the metadata. Otherwise try to keep most of it.
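For illustration only (a hedged sketch; the addresses and values below are placeholders, not taken from the docs quoted above): royalties are expressed through seller_fee_basis_points (e.g. 500 = 5%) and are split across the creators entries, whose share values should add up to 100:
"seller_fee_basis_points": 500,
"properties": {
"creators": [
{ "address": "<creator-wallet-1>", "share": 60 },
{ "address": "<creator-wallet-2>", "share": 40 }
]
}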
I am creating an indexer that takes a document, runs the KeyPhraseExtractionSkill and outputs the key phrases back to the index.
For many documents, this works out of the box. But for records that are over 50,000 characters, it does not work. OK, no problem; this is clearly stated in the docs.
What the docs suggest is to use the Text Split skill. So what I've done is use the Text Split skill to split the original document into pages and pass all pages to the KeyPhraseExtractionSkill. Then we need to merge them back, as we'd end up with an array of arrays of strings. Unfortunately, it seems that the Merge skill does not accept an array of arrays, just an array.
https://i.imgur.com/dBD4qgb.png <- Link to the skillset hierarchy.
This is the error reported by Azure:
Required skill input was not of the expected type 'StringCollection'. Name: 'itemsToInsert', Source: '/document/content/pages/*/keyPhrases'. Expression language parsing issues:
What I want to achieve at the end of the day is to run the KeyPhraseExtractionSkill on text larger than 50,000 characters and eventually add the key phrases back to the index.
JSON for the skillset:
{
"#odata.context": "https://-----------.search.windows.net/$metadata#skillsets/$entity",
"#odata.etag": "\"0x8D957466A2C1E47\"",
"name": "devalbertcollectionfilesskillset2",
"description": null,
"skills": [
{
"#odata.type": "#Microsoft.Skills.Text.SplitSkill",
"name": "SplitSkill",
"description": null,
"context": "/document/content",
"defaultLanguageCode": "en",
"textSplitMode": "pages",
"maximumPageLength": 1000,
"inputs": [
{
"name": "text",
"source": "/document/content"
}
],
"outputs": [
{
"name": "textItems",
"targetName": "pages"
}
]
},
{
"#odata.type": "#Microsoft.Skills.Text.EntityRecognitionSkill",
"name": "EntityRecognitionSkill",
"description": null,
"context": "/document/content/pages/*",
"categories": [
"person",
"quantity",
"organization",
"url",
"email",
"location",
"datetime"
],
"defaultLanguageCode": "en",
"minimumPrecision": null,
"includeTypelessEntities": null,
"inputs": [
{
"name": "text",
"source": "/document/content/pages/*"
}
],
"outputs": [
{
"name": "persons",
"targetName": "people"
},
{
"name": "organizations",
"targetName": "organizations"
},
{
"name": "entities",
"targetName": "entities"
},
{
"name": "locations",
"targetName": "locations"
}
]
},
{
"#odata.type": "#Microsoft.Skills.Text.KeyPhraseExtractionSkill",
"name": "KeyPhraseExtractionSkill",
"description": null,
"context": "/document/content/pages/*",
"defaultLanguageCode": "en",
"maxKeyPhraseCount": null,
"modelVersion": null,
"inputs": [
{
"name": "text",
"source": "/document/content/pages/*"
}
],
"outputs": [
{
"name": "keyPhrases",
"targetName": "keyPhrases"
}
]
},
{
"#odata.type": "#Microsoft.Skills.Text.MergeSkill",
"name": "Merge Skill - keyPhrases",
"description": null,
"context": "/document",
"insertPreTag": " ",
"insertPostTag": " ",
"inputs": [
{
"name": "itemsToInsert",
"source": "/document/content/pages/*/keyPhrases"
}
],
"outputs": [
{
"name": "mergedText",
"targetName": "keyPhrases"
}
]
}
],
"cognitiveServices": {
"#odata.type": "#Microsoft.Azure.Search.CognitiveServicesByKey",
"key": "------",
"description": "/subscriptions/13abe1c6-d700-4f8f-916a-8d3bc17bb41e/resourceGroups/mde-dev-rg/providers/Microsoft.CognitiveServices/accounts/mde-dev-cognitive"
},
"knowledgeStore": null,
"encryptionKey": null
}
Please let me know if there is anything else that I can add to improve the question. Thanks!
You don't have to merge the key phrase outputs to insert them to the index.
Assuming your index already has a field called mykeyphrases of type Collection(Edm.String), to populate it with the key phrase outputs, add this indexer output field mapping:
"outputFieldMappings": [
...
{
"sourceFieldName": "/document/content/pages/*/keyPhrases/*",
"targetFieldName": "mykeyphrases"
},
...
]
The /* at the end of sourceFieldName is important for flattening the array of arrays of strings. This will also work as a skill input if you want to pass an array of strings to another skill for further enrichment.
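For reference, a minimal sketch of what the target index field assumed above could look like (mykeyphrases is just the placeholder name used in this answer):
{
"name": "mykeyphrases",
"type": "Collection(Edm.String)",
"searchable": true,
"retrievable": true
}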
I am trying to implement a copy function in an ARM template used to deploy a network security group.
I have previously deployed templates using this format, but because Microsoft decided to use two distinct property names depending on whether the property is a single item or a list, I am unable to use the copy function.
I have looked into using if statements to ignore null parameters if present in a loop, which I have not been able to achieve.
So my question is: how do I go through a loop and ignore a specific property if it is not present in that iteration?
The two properties in question are sourceAddressPrefix and sourceAddressPrefixes.
This is causing problems in the second iteration, where I get this error message:
The language expression property 'sourceAddressPrefixes' doesn't exist
(If I switch the order in the parameter file, i.e. sourceAddressPrefixes first, then the error message will point to 'sourceAddressPrefix'.)
Parameter file (as you can see, there are two security rules, one using sourceAddressPrefix and the other sourceAddressPrefixes):
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"location": {
"value": "westeurope"
},
"SecurityRule":{
"value": [
{
"name": "AllowSyncWithAzureAD",
"protocol": "Tcp",
"sourcePortRange": "*",
"destinationPortRange": "443",
"sourceAddressPrefix": "*",
"destinationAddressPrefix": "*",
"access": "Allow",
"priority": 101,
"direction": "Inbound"
},
{
"name": "AllowPSRemotingSliceP",
"protocol": "Tcp",
"sourcePortRange": "*",
"destinationPortRange": "5986",
"sourceAddressPrefixes": "[variables('PSRemotingSlicePIPAddresses')]",
"destinationAddressPrefix": "*",
"access": "Allow",
"priority": 301,
"direction": "Inbound"
}
]
}
}
}
In the template file I have added both properties with if statements, but clearly I have not written them correctly. The intended outcome is: if the property does not exist in a given iteration of the loop, ignore that property.
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"location": {
"type": "string",
"defaultValue": "[resourceGroup().location]",
"metadata": {
"description": "Location for all resources."
}
},
"SecurityRule": {
"type": "array"
}
},
"variables": {
"domainServicesNSGName": "AGR01MP-NSGAADDS01",
"PSRemotingSlicePIPAddresses": [
"52.182.100.238",
"52.180.177.87"
],
"RDPIPAddresses": [
"210.66.188.40/27",
"15.156.75.52/27",
"134.104.124.36/27",
"144.122.4.96/27"
],
"PSRemotingSliceTIPAddresses": [
"56.180.182.67",
"56.180.121.39",
"56.175.228.121"
]
},
"resources": [
{
"apiVersion": "2018-10-01",
"type": "Microsoft.Network/networkSecurityGroups",
"name": "[variables('domainServicesNSGName')]",
"location": "[parameters('location')]",
"properties": {
"copy": [
{
"name":"securityRules",
"count": "[length(parameters('securityRule'))]",
"mode": "serial",
"input": {
"name": "[concat(parameters('securityRule')[copyIndex('securityRules')].name)]",
"properties": {
"protocol": "[concat(parameters('securityRule')[copyIndex('securityRules')].protocol)]",
"sourcePortRange": "[concat(parameters('securityRule')[copyIndex('securityRules')].sourcePortRange)]",
"destinationPortRange": "[concat(parameters('securityRule')[copyIndex('securityRules')].destinationPortRange)]",
"sourceAddressPrefixes": "[if(equals(parameters('securityRule')[copyIndex('securityRules')].sourceAddressPrefixes,''), json('null'), parameters('securityRule')[copyIndex('securityRules')].sourceAddressPrefixes)]",
"sourceAddressPrefix": "[if(equals(parameters('securityRule')[copyIndex('securityRules')].sourceAddressPrefix,''), json('null'), parameters('securityRule')[copyIndex('securityRules')].sourceAddressPrefix)]",
"destinationAddressPrefix": "[concat(parameters('securityRule')[copyIndex('securityRules')].destinationAddressPrefix)]",
"access": "[concat(parameters('securityRule')[copyIndex('securityRules')].access)]",
"priority": "[concat(parameters('securityRule')[copyIndex('securityRules')].priority)]",
"direction": "[concat(parameters('securityRule')[copyIndex('securityRules')].direction)]"
}
}
}
]
}
}
],
"outputs": {}
}
I found a solution:
"sourceAddressPrefix": "[if(equals(parameters('SecurityRule')[copyIndex('securityRules')].name, 'SyncWithAzureAD'), parameters('SecurityRule')[copyIndex('securityRules')].sourceAddressPrefix, json('null'))]" ,
"sourceAddressPrefixes": "[if(contains(parameters('SecurityRule')[copyIndex('securityRules')].name, 'Allow'), parameters('SecurityRule')[copyIndex('securityRules')].sourceAddressPrefixes, json('null'))]" ,
The above code allows the deployment to ignore a null value in the array. Though I had to change AllowSyncWithAzureAD to SyncWithAzureAD, in order for it not to be picked up by the second line.
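As an alternative sketch (my own suggestion, untested against this exact template), you could use contains() on the rule object to check for the property itself instead of matching on rule names, which would avoid having to rename AllowSyncWithAzureAD:
"sourceAddressPrefix": "[if(contains(parameters('SecurityRule')[copyIndex('securityRules')], 'sourceAddressPrefix'), parameters('SecurityRule')[copyIndex('securityRules')].sourceAddressPrefix, json('null'))]",
"sourceAddressPrefixes": "[if(contains(parameters('SecurityRule')[copyIndex('securityRules')], 'sourceAddressPrefixes'), parameters('SecurityRule')[copyIndex('securityRules')].sourceAddressPrefixes, json('null'))]"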
I have a logic app which makes an HTTP call to a Key Vault URI to get the secret needed to connect to an external system. I have developed this in the dev resource group. I want to know how to set up the key vault in the other resource groups (test/prod), and also how to migrate the logic app and get the secret per environment.
:) The solution is to use ARM templates and an Azure DevOps (or any other) pipeline. You can create ARM templates with different parameter values for different environments and use them to deploy your Logic App and Key Vault to each environment.
Logic App Template sample:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
// Template parameters
"parameters": {
"<template-parameter-name>": {
"type": "<parameter-type>",
"defaultValue": "<parameter-default-value>",
"metadata": {
"description": "<parameter-description>"
}
}
},
"variables": {},
"functions": [],
"resources": [
{
// Start logic app resource definition
"properties": {
<other-logic-app-resource-properties>,
"definition": {
"$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
"actions": {<action-definitions>},
// Workflow definition parameters
"parameters": {
"<workflow-definition-parameter-name>": {
"type": "<parameter-type>",
"defaultValue": "<parameter-default-value>",
"metadata": {
"description": "<parameter-description>"
}
}
},
"triggers": {
"<trigger-name>": {
"type": "<trigger-type>",
"inputs": {
// Workflow definition parameter reference
"<attribute-name>": "#parameters('<workflow-definition-parameter-name')"
}
}
},
<...>
},
// Workflow definition parameter value
"parameters": {
"<workflow-definition-parameter-name>": {
"value": "[parameters('<template-parameter-name>')]"
}
},
"accessControl": {}
},
<other-logic-app-resource-definition-attributes>
}
// End logic app resource definition
],
"outputs": {}
}
Key Vault template:
{
"name": "string",
"type": "Microsoft.KeyVault/vaults",
"apiVersion": "2018-02-14",
"location": "string",
"tags": {},
"properties": {
"tenantId": "string",
"sku": {
"family": "A",
"name": "string"
},
"accessPolicies": [
{
"tenantId": "string",
"objectId": "string",
"applicationId": "string",
"permissions": {
"keys": [
"string"
],
"secrets": [
"string"
],
"certificates": [
"string"
],
"storage": [
"string"
]
}
}
],
"vaultUri": "string",
"enabledForDeployment": "boolean",
"enabledForDiskEncryption": "boolean",
"enabledForTemplateDeployment": "boolean",
"enableSoftDelete": "boolean",
"createMode": "string",
"enablePurgeProtection": "boolean",
"networkAcls": {
"bypass": "string",
"defaultAction": "string",
"ipRules": [
{
"value": "string"
}
],
"virtualNetworkRules": [
{
"id": "string"
}
]
}
},
"resources": []
}
Moreover, you can read the article Integrate ARM templates with Azure Pipelines to learn more about setting up your Azure DevOps pipelines.
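As a rough sketch of the per-environment approach (the parameter names and values below are illustrative only, not taken from the question), you would keep one parameters file per environment and point the pipeline at the matching file at deployment time, e.g. parameters.dev.json:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"logicAppName": { "value": "logic-orders-dev" },
"keyVaultName": { "value": "kv-orders-dev" },
"location": { "value": "westeurope" }
}
}
A parameters.test.json and parameters.prod.json would contain the same parameter names with environment-specific values.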