How to pass a variable to the file() function in Terraform

I need to create a CloudFront public key using Terraform. The public key differs per environment and is stored as {env_name}.pem in a directory named public-key-cf, where env_name can be dev, stage, or prod.
To achieve this, the following Terraform block is used:
resource "aws_cloudfront_public_key" "documents-signing-key" {
  name        = "cf-public-key"
  comment     = "Public Key"
  encoded_key = file("${path.module}/public-key-cf/"${var.environment}".pem)"
}
I am getting this error:
This character is not used within the language.
How can I fix this issue?
Thanks.

You seem to have syntax issues in your code: the quotes are in the wrong places. Please refer to String Templates for string interpolation in Terraform.
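To illustrate the quoting rule with a minimal sketch (the local value here is hypothetical): the whole template must live inside a single pair of quotes, and each ${...} interpolation sits inside that one string:

```hcl
# Hypothetical sketch: one pair of quotes wraps the entire template;
# every ${...} interpolation lives inside that single string.
locals {
  key_path = "${path.module}/public-key-cf/${var.environment}.pem"
}

# file() then receives that single string:
# encoded_key = file(local.key_path)
```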
This is the structure I have used to simulate your query.
.
├── dependencies.tf
├── file_function_variable.tf
├── main.tf
└── public-key-cf
    └── dev.pub
Where file_function_variable.tf is the file we will mostly focus on.
## File function within a string input (multiple string interpolations).
resource "aws_security_group" "file_function_variable" {
  name        = "allow_tls"
  description = "Allow TLS inbound traffic with ${file("${path.module}/public-key-cf/${var.environment}.pub")}"
  vpc_id      = local.vpc_id

  tags = {
    Name = "allow_tls"
  }
}
## usage of an explicit file function call.
resource "aws_cloudfront_public_key" "documents-signing-key" {
  name        = "cf-public-key"
  comment     = "Public Key"
  encoded_key = file("${path.module}/public-key-cf/${var.environment}.pub")
}

variable "environment" {
  type        = string
  description = "(optional) Environment for the deployment"
  default     = "dev"
}
The above code generated the plan below, so we can verify what the result will look like.
➜ stackoverflow tf plan
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
  + create

Terraform will perform the following actions:

  # aws_cloudfront_public_key.documents-signing-key will be created
  + resource "aws_cloudfront_public_key" "documents-signing-key" {
      + caller_reference = (known after apply)
      + comment          = "Public Key"
      + encoded_key      = <<-EOT
            ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII3EZdb2OUzuMtgxCp5nyR3SmXs1Fml1Z6/kk1cyEuWf
        EOT
      + etag             = (known after apply)
      + id               = (known after apply)
      + name             = "cf-public-key"
      + name_prefix      = (known after apply)
    }

  # aws_security_group.file_function_variable will be created
  + resource "aws_security_group" "file_function_variable" {
      + arn                    = (known after apply)
      + description            = <<-EOT
            Allow TLS inbound traffic with ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII3EZdb2OUzuMtgxCp5nyR3SmXs1Fml1Z6/kk1cyEuWf
        EOT
      + egress                 = (known after apply)
      + id                     = (known after apply)
      + ingress                = (known after apply)
      + name                   = "allow_tls"
      + name_prefix            = (known after apply)
      + owner_id               = (known after apply)
      + revoke_rules_on_delete = false
      + tags                   = {
          + "Name" = "allow_tls"
        }
      + tags_all               = {
          + "Name" = "allow_tls"
        }
      + vpc_id                 = (known after apply)
    }
Conclusion:
As mentioned in another answer, it's better to use plugins/extensions while working with Terraform.
For VS Code there is an official HashiCorp Terraform extension, which supports syntax highlighting and much more.

encoded_key = file("${path.module}/public-key-cf/"${var.environment}".pem)"
It seems to me that you made a syntactic mistake by placing the quotes in the wrong places; I think you meant to write:
encoded_key = file("${path.module}/public-key-cf/${var.environment}.pem")
If it's the same in your actual code, that's likely the reason behind that rather cryptic-looking error message.
Consider installing a plugin for syntax checking if you haven't yet; it simplifies writing Terraform code (and code in general) by a lot.

Related

Get oauth2_permissions from azuread_application using Terraform

I have an app registration which defines two oauth2_permissions blocks, e.g. (other details elided)
resource "azuread_application" "myapp" {
  oauth2_permissions {
    is_enabled = true
    type       = "User"
    value      = "Permission.One"
  }
  oauth2_permissions {
    is_enabled = true
    type       = "User"
    value      = "Permission.Two"
  }
}
Which, when applied, works just fine. I then want to refer to those permissions in another app registration, e.g.:
resource "azuread_application" "myotherapp" {
  required_resource_access {
    resource_app_id = azuread_application.myapp.application_id
    resource_access {
      id   = ??
      type = "Scope"
    }
  }
}
For the id here, I have tried:
id = lookup(azuread_application.myapp.oauth2_permissions[0], "id")
which gives This value does not have any indices. As does
id = azuread_application.myapp.oauth2_permissions.0.id
I can define a data block and get the output of oauth2_permissions from myapp:
data "azuread_application" "myapp" {
  application_id = azuread_application.myapp.application_id
}

output "myapp-perms" {
  value = data.azuread_application.myapp.oauth2_permissions
}
And on apply, that will correctly show an array of the two permission blocks. If I try to refer to the data block instead of the application block, i.e.
id = lookup(data.azuread_application.myapp.oauth2_permissions[0], "id")
This gives me a different error: The given key does not identify an element in this collection value
If I apply those two permissions manually on the console, everything works fine. From reading around I was fairly sure that at least one of the above methods should work but I am clearly missing something.
For completeness, provider definition:
provider "azurerm" {
  version = "~> 2.12"
}

provider "azuread" {
  version = "~> 0.11.0"
}
Based on comments.
The solution is to use tolist. The reason is that the multiple oauth2_permissions blocks are represented as a set of objects, which can't be accessed by index.
id = tolist(azuread_application.myapp.oauth2_permissions)[0].id
However, sets have no guaranteed ordering, so special attention should be paid to this.
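Since indexing into an unordered set is fragile, a sketch (using the resource names from the question) that selects a permission by its value rather than its position may be safer; a for expression with a filter works on sets in Terraform 0.12+:

```hcl
# Pick the permission whose value is "Permission.One", regardless of set order.
locals {
  permission_one_id = [
    for p in azuread_application.myapp.oauth2_permissions : p.id
    if p.value == "Permission.One"
  ][0]
}
```

This stays correct even if the provider returns the set elements in a different order on a later run.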

How can I associate NSG's and Subnets being created by loops in Terraform?

Here is the code I am using to create subnets and NSGs. I want to associate each NSG with its subnet in the same script, but I am unable to understand how to get the subnet IDs and NSG IDs being produced here and use them in the association resource. Thanks in advance for the help!
First part of the code; this is used to create any number of subnets and NSGs, depending on the parameters:
provider "azurerm" {
  version  = "2.0.0"
  features {}
}

resource "azurerm_resource_group" "new-rg" {
  name     = var.rg_name
  location = "West Europe"
}

resource "azurerm_virtual_network" "new-vnet" {
  name                = var.vnet_name
  address_space       = ["${var.vnet_address_space}"]
  location            = azurerm_resource_group.new-rg.location
  resource_group_name = azurerm_resource_group.new-rg.name
}

resource "azurerm_subnet" "test" {
  count                = "${length(var.subnet_prefix)}"
  name                 = "${element(var.subnet_subnetname, count.index)}"
  resource_group_name  = azurerm_resource_group.new-rg.name
  virtual_network_name = azurerm_virtual_network.new-vnet.name
  address_prefix       = "${element(var.subnet_prefix, count.index)}"
}

resource "azurerm_network_security_group" "new-nsg" {
  count               = "${length(var.subnet_prefix)}"
  name                = "${element(var.subnet_subnetname, count.index)}-nsg"
  location            = azurerm_resource_group.new-rg.location
  resource_group_name = azurerm_resource_group.new-rg.name
}
Below is the resource where I have to pass the parameters to create the association for the subnets and NSGs created above.
Second part of the code. I need to make the code below usable for any number of associations:
resource "azurerm_subnet_network_security_group_association" "example" {
  subnet_id                 = azurerm_subnet.example.id
  network_security_group_id = azurerm_network_security_group.example.id
}
How can I associate the n subnets and NSGs being created using the second part of the code? I can't find my way to that.
This seems like a good case for for_each. Here is some code I'm using for AWS (the same logic applies as far as I can tell).
(var.nr_azs is just an int; formatlist is used because for_each only accepts strings.)
locals {
  az_set = toset(formatlist("%s", range(var.nr_azs))) # create a list of numbers and convert them to strings
}

resource "aws_subnet" "private" {
  for_each                = local.az_set
  availability_zone       = random_shuffle.az.result[each.key]
  cidr_block              = cidrsubnet(aws_vpc.main.cidr_block, 8, each.key)
  vpc_id                  = aws_vpc.main.id
  map_public_ip_on_launch = false
}

resource "aws_eip" "nat_gw" {
  # one EIP per gateway: a single EIP cannot be attached to several NAT gateways
  for_each = aws_subnet.private
  vpc      = true
}

resource "aws_nat_gateway" "gw" {
  for_each      = aws_subnet.private
  allocation_id = aws_eip.nat_gw[each.key].id
  subnet_id     = each.value.id
}

resource "aws_route_table" "private_egress" {
  for_each = aws_nat_gateway.gw
  vpc_id   = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = each.value.id
  }
}

resource "aws_route_table_association" "private" {
  for_each       = local.az_set
  subnet_id      = aws_subnet.private[each.key].id
  route_table_id = aws_route_table.private_egress[each.key].id
}
So I was able to solve the issue I mentioned above; the following code contains the solution for the described scenario.
resource "azurerm_subnet_network_security_group_association" "snet-nsg-association" {
  count                     = length(var.subnet_subnetname)
  subnet_id                 = element(azurerm_subnet.multi-snet.*.id, count.index)
  network_security_group_id = element(azurerm_network_security_group.new-nsg.*.id, count.index)
}

concatenate filepath prefix and file name in terraform code

I'm trying to create policies in AWS with Terraform.
variable "path" {
  type = "string"
}

variable "policies" {
  type = list(object({
    name            = string
    plcyfilename    = string
    asmplcyfilename = string
    desc            = string
    ownner          = string
  }))
  default = []
}

resource "aws_iam_policy" "policy" {
  count       = length(var.policies)
  name        = lookup(var.policies[count.index], "name")
  policy      = file(lookup(var.policies[count.index], concat("var.path", "plcyfilename")))
  description = "Policy for ${lookup(var.policies[count.index], "desc")}"
}
and this is how my tfvars looks like:
path = "./../t2/scripts/"
policies = [
  { name = "cwpolicy", plcyfilename = "cw.json", asmplcyfilename = "csasm.json", desc = "vpcflowlogs", ownner = "vpc" },
]
The error that is thrown while I do this is like this:
Error: Invalid function argument

  on main.tf line 13, in resource "aws_iam_policy" "policy":
  13:   policy = file(lookup(var.policies[count.index], "${concat("${var.path}","plcyfilename")}"))

Invalid value for "seqs" parameter: all arguments must be lists or tuples; got
string.
I'm using terraform 0.12.
It works as expected if I change the variable to contain the complete file path: plcyfilename = "./../t2/scripts/cw.json".
However, I want to keep the file path separate from the file names.
Can someone point out where I am going wrong?
The concat function is for concatenating lists, not for concatenating strings.
To concatenate strings in Terraform, we use template interpolation syntax:
policy = file("${var.path}/${var.policies[count.index].policy_filename}")
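To illustrate the difference with hypothetical locals: concat merges lists, while interpolation or format joins strings:

```hcl
locals {
  # interpolation -- the idiomatic way to join a path prefix and a file name
  path_a = "${var.path}/cw.json"

  # format() builds the same string
  path_b = format("%s/%s", var.path, "cw.json")

  # concat() operates on lists, not strings
  both_files = concat(["cw.json"], ["csasm.json"]) # ["cw.json", "csasm.json"]
}
```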
Since your collection of policies is not a sequence where the ordering is significant, I'd recommend also changing this to use resource for_each, which will ensure that Terraform tracks the policies using the policy name strings rather than using the positions in the list:
variable "policies" {
  type = map(object({
    policy_filename        = string
    assume_policy_filename = string
    description            = string
    owner                  = string
  }))
  default = {}
}

resource "aws_iam_policy" "policy" {
  for_each    = var.policies
  name        = each.key
  policy      = file("${var.path}/${each.value.policy_filename}")
  description = "Policy for ${each.value.description}"
}
In this case the policies variable is redefined as being a map, so you'd now present the name of each policy as the key within the map rather than as one of the attributes:
policies = {
  cw = {
    policy_filename        = "cw.json"
    assume_policy_filename = "csasm.json"
    description            = "vpcflowlogs"
    owner                  = "vpc"
  }
  # ...
}
Because the for_each value is the policies map, each.key inside the resource block is a policy name and each.value is the object representing that policy, making the resulting expressions easier to read and understand.
By using for_each, we will cause Terraform to create resource instance addresses like aws_iam_policy.policy["cw"] rather than like aws_iam_policy.policy[1], and so adding and removing elements from the map will cause Terraform to add and remove corresponding instances from the resource, rather than try to update instances in-place to respect the list ordering as it would've done with your example.

Akka.Net PersistenceQuery not returning all results

I am using Akka.Net (v 1.3.2) and am trying to query the event journal for all events with a specific tag. I only want the events that exist at the time I query the journal. Inside an actor, I have the following code:
var readJournal = PersistenceQuery.Get(Context.System)
    .ReadJournalFor<SqlReadJournal>(SqlReadJournal.Identifier);
var stream = readJournal.CurrentEventsByTag("The Tag Name", Offset.NoOffset());
var materializer = ActorMaterializer.Create(Context.System);
stream.RunForeach(envelope =>
{
    // Do some stuff with the EventEnvelope
}, materializer).Wait();
This will successfully query the event journal. However, the problem is it will only return the first 100 events. I need all of them that match the query!
Question: How do I remove the limit/filter that exists when querying the event journal by tag name?
If you need it, here is my akka.persistence configuration:
var config = Akka.Configuration.ConfigurationFactory.ParseString(@"
    akka.persistence {
        journal {
            plugin = ""akka.persistence.journal.sql-server""
            sql-server {
                class = ""Akka.Persistence.SqlServer.Journal.SqlServerJournal, Akka.Persistence.SqlServer""
                connection-string = """ + connectionString + @"""
                schema-name = dbo
                table-name = __akka_EventJournal
                metadata-table-name = __akka_Metadata
                auto-initialize = on
            }
        }
        snapshot-store {
            plugin = ""akka.persistence.snapshot-store.sql-server""
            sql-server {
                class = ""Akka.Persistence.SqlServer.Snapshot.SqlServerSnapshotStore, Akka.Persistence.SqlServer""
                connection-string = """ + connectionString + @"""
                schema-name = dbo
                table-name = __akka_SnapshotStore
                auto-initialize = on
            }
        }
    }"
);
There are two things to check out:
You can set the maximum number of messages returned in one query via the akka.persistence.query.journal.sql.max-buffer-size value (see: reference.conf).
Use readJournal.EventsByTag instead of readJournal.CurrentEventsByTag to get a continuous stream of events. Just keep in mind that it won't complete by itself, but will live on waiting for new events to arrive. You can stop it explicitly, e.g. by using a KillSwitch.
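A minimal sketch of the first option; the default for this setting is 100, which matches the 100-event cut-off you're seeing (500 here is just an example value):

```hocon
# in your HOCON configuration, alongside the journal settings
akka.persistence.query.journal.sql {
  max-buffer-size = 500
}
```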

How to tell Stylus not to fail on a missing @import

I have a project that can be packaged for two targets (mobile and desktop). I still want to keep my source files in the same place since only a few of them differ, but the difference is too big to handle with responsive methods alone (pages missing on mobile, or totally different on desktop, ...), and I want to keep the packaged app as small as possible.
So I created a loader.mobile.styl and a loader.desktop.styl, knowing that the packager will import one or the other depending on the target/platform it's building for:
TARGET = 'mobile' // or 'desktop' for loader.desktop.styl
@import '_import' // my import helper
import('_application') // my application main stylus file requiring all others
and in _import.styl:
import(file)
  @import file
  @import file + '.' + TARGET
So the goal is, when you call import('_application') for example, to first import _application.styl and then _application.mobile.styl (or _application.desktop.styl if the target is desktop)
It is working great, except that in most cases only the shared _application.styl or only the specific _application.mobile.styl exists, not both.
So I am trying without success to find a way to do an "import if it exists" with Stylus. If something like fileExists were available I could do it, or a try...catch even without the catch block, so that a failed import wouldn't matter.
After some research I ended up writing a plugin which replaces the @import directive by defining a custom import function. For those whom it might help, here is how I did it in my own case:
In file plugins.js:
var sysPath = require('path');
var fs = require('fs');
// here is where I defined some helpers to know which target is currently building
var conf = require('../config');

var plugin = function () {
    return function (style) {
        var nodes = this.nodes;
        style.define('import', function (param) {
            var target = conf.currentTarget(),
                realPath = sysPath.dirname(conf.ROOT + sysPath.sep + param.filename),
                baseName = param.string.replace(/\.styl$/, ''),
                targetFile = baseName + '.' + target + '.styl',
                file = baseName + '.styl',
                res = new nodes.Root();
            // first include the file (myFile.styl for example) if it exists
            if (fs.existsSync(realPath + sysPath.sep + file)) {
                res.push(new nodes.Import(new nodes.String(file)));
            }
            // then include the target-specific file (myFile.mobile.styl for example) if it exists
            if (fs.existsSync(realPath + sysPath.sep + targetFile)) {
                res.push(new nodes.Import(new nodes.String(targetFile)));
            }
            return res;
        });
    };
};

module.exports = plugin;
in file loader.styl:
use('plugins.js')
import('application')
So then any import('xyz') would import xyz.styl if it exists and xyz.mobile.styl (or xyz.desktop.styl if desktop is the target) if it exists.
