Limit access to Google Endpoint URL - google-app-engine

I am trying to build a service-to-service API call. I am using Google Cloud Endpoints with OpenAPI.
My problem is that after adding "securityDefinitions:" I am still able to access the endpoint without authentication (from anywhere).
How do I make sure that only a Compute Engine instance running as a particular service account can access the REST API?
swagger: "2.0"
info:
  description: "A simple Google Cloud Endpoints API example."
  title: "Endpoints Example"
  version: "1.0.0"
host: "echo-api.endpoints.projectid.cloud.goog"
x-google-endpoints:
- name: "echo-api.endpoints.projectid.cloud.goog"
  target: "SOME IP"
# [END swagger]
basePath: "/"
consumes:
- "application/json"
produces:
- "application/json"
schemes:
- "http"
paths:
  "/list":
    get:
      description: "list"
      operationId: "Project.get"
      produces:
      - "application/json"
      responses:
        200:
          description: "lists"
          schema:
            $ref: "#/definitions/Project"
      parameters:
      - description: "Project Name"
        in: body
        name: project_id
        required: true
        schema:
          $ref: "#/definitions/Res"
      security:
      - google_jwt_client-1: []
definitions:
  Res:
    properties:
      apierrmsg:
        type: "string"
      apiresults:
        type: "array"  # "Array" is not a valid Swagger type; arrays also need an items schema
        items:
          type: "string"
  Project:
    properties:
      project_id:
        type: "string"
securityDefinitions:
  google_jwt_client-1:
    authorizationUrl: ""
    flow: "implicit"
    type: "oauth2"
    x-google-issuer: "some#projectid.iam.gserviceaccount.com"
    x-google-jwks_uri: "https://www.googleapis.com/robot/v1/metadata/x509/some#projectid.iam.gserviceaccount.com"
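For the caller's side: with `x-google-issuer`/`x-google-jwks_uri` configured and `security` applied to the operation, the Extensible Service Proxy should only admit requests carrying a JWT signed with that service account's private key. A stdlib-only sketch of the claim set such a token carries (the service-account email, audience, and key id below are placeholders; producing the real token requires signing this input with the account's RSA key via, e.g., PyJWT or google-auth):

```python
import base64
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def build_jwt_signing_input(sa_email: str, audience: str, key_id: str) -> str:
    """Build the header.payload part of a JWT; an RS256 signature over this
    string (made with the service account's private key) completes the token."""
    header = {"alg": "RS256", "typ": "JWT", "kid": key_id}
    now = int(time.time())
    claims = {
        "iss": sa_email,   # must match x-google-issuer in the OpenAPI spec
        "sub": sa_email,
        "aud": audience,   # typically the Endpoints service name
        "iat": now,
        "exp": now + 3600, # at most one hour ahead
    }
    return b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())

# Hypothetical values for illustration only:
signing_input = build_jwt_signing_input(
    "my-sa@projectid.iam.gserviceaccount.com",
    "echo-api.endpoints.projectid.cloud.goog",
    "key-1")
# The completed token is then sent as:  Authorization: Bearer <jwt>
```

The caller identity is thus enforced cryptographically; restricting *which machine* may call additionally requires network-level controls (e.g. firewall rules), since any holder of the key can mint such a token.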

Related

Unable to set custom headers on AWS Amplify

I'm trying to set these headers for my app on Amplify but without success:
customHeaders:
  - pattern: /*
    headers:
      - key: Cross-Origin-Opener-Policy
        value: same-origin
      - key: Cross-Origin-Embedder-Policy
        value: require-corp
      - key: Access-Control-Allow-Origin
        value: '*'
      - key: Access-Control-Allow-Methods
        value: GET
I've tried setting it from "App settings" >> "Custom headers" (customHttp.yml) and from the build script (amplify.yml) but no luck.
Okay, apparently I solved it by editing the pattern like this (only in "customHttp.yml"); I removed the custom headers from "amplify.yml":
customHeaders:
  - pattern: '**'
    headers:
      - key: Cross-Origin-Opener-Policy
        value: same-origin
      - key: Cross-Origin-Embedder-Policy
        value: require-corp
      - key: Access-Control-Allow-Origin
        value: '*'
      - key: Access-Control-Allow-Methods
        value: GET
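A plausible explanation for why `/*` failed: Amplify's matcher appears to treat `*` as stopping at path-segment boundaries while `**` crosses them, so `/*` would never cover nested routes like /static/app.js. A toy Python matcher illustrating that assumed semantics:

```python
import re

def amplify_like_match(pattern: str, path: str) -> bool:
    """Toy matcher mimicking the assumed Amplify semantics:
    '**' matches anything (including '/'), '*' matches within one segment."""
    out = []
    i = 0
    while i < len(pattern):
        if pattern.startswith("**", i):
            out.append(".*")
            i += 2
        elif pattern[i] == "*":
            out.append("[^/]*")
            i += 1
        else:
            out.append(re.escape(pattern[i]))
            i += 1
    return re.fullmatch("".join(out), path) is not None

# '/*' only reaches one segment below the root...
assert amplify_like_match("/*", "/index.html")
assert not amplify_like_match("/*", "/static/app.js")
# ...while '**' matches every path, which is why it fixed the headers:
assert amplify_like_match("**", "/static/app.js")
```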

Google Cloud Api Gateway - {"message":"no healthy upstream","code":503}

I am building an API that is hosted on App Engine Standard (Python).
I tested it with curl and was able to make a successful POST request:
curl --header "Content-Type: application/json" \
--request POST \
--data '{"amount":xxxx,"currency":"xxx","date":xxxx,"name":"xxxx","symbol":"xxx"}' \
https://xxxxxxxxxx.appspot.com/api/add/
Then I deployed an API Gateway to access my App Engine Standard backend.
Getting started with API Gateway and App Engine
I tested and it worked fine for GET requests.
Now I am having an issue performing a POST request. This is the relevant code from my config.yaml file:
/add:
  post:
    summary: Creates a new transaction.
    operationId: create-transaction
    consumes:
    - application/json
    parameters:
    - in: body
      name: transaction
      description: The transaction to create.
      schema:
        type: object
        required:
        - name
        - symbol
        - currency
        - amount
        - date
        properties:
          name:
            type: string
          symbol:
            type: string
          currency:
            type: string
          amount:
            type: number
          date:
            type: integer
    x-google-backend:
      address: https://xxxxxxx.appspot.com
      path_translation: [ APPEND_PATH_TO_ADDRESS ]
      jwt_audience: 272804158038-6kc250fms52o33b7dcfjrl1f9d8rripb.apps.googleusercontent.com
    responses:
      '201':
        description: A successful transaction created
        schema:
          type: string
When I try to run the same curl command:
curl --header "Content-Type: application/json" \
  --request POST \
  --data '{"amount":xxxx,"currency":"xxx","date":xxxx,"name":"xxxx","symbol":"xxxx"}' \
  https://saxxx-xxxxxxx-gateway.dev/add/
I receive :
{"message":"no healthy upstream","code":503}
Can somebody please help me troubleshoot this error message? Again, I am able to run GET requests successfully on the same gateway.
This is the log I found in Logging:
{
  httpRequest: {
    latency: "0s"
    protocol: "http"
    remoteIp: ""
    requestMethod: "POST"
    requestSize: "936"
    requestUrl: "/add"
    responseSize: "158"
    status: 503
  }
  insertId: ""
  jsonPayload: {
    api_key_state: "NOT CHECKED"
    api_method: "1.xxxxxx_2ere0etrrw81sxxxxxxxxxxxxxx_cloud_goog.Createtransaction"
    api_name: "1.xxxxxxxxxxx_api_2exxxxxx_cloud_goog"
    api_version: "1.0.0"
    location: ""
    log_message: "1.sxxxxxxxxxxx.Createtransaction is called"
    producer_project_id: "xxxxxx"
    response_code_detail: "no_healthy_upstream"
    service_agent: "ESPv2/2.21.0"
    service_config_id: "sxxxxxxxx4jvuvmlcb"
    timestamp: 1616270864.269634
  }
  logName: "projects/sagexxxxxxxx.apigateway.xxxxxxxxx.cloud.goog%2Fendpoints_log"
  receiveTimestamp: "2021-03-20T20:07:46.372838475Z"
  resource: {
    labels: {
      location: ""
      method: "1.xxxxxxxxxxxx_goog.Createtransaction"
      project_id: ""
      service: "xxxxxxxxxxxxx.cloud.goog"
      version: "1.0.0"
    }
    type: "api"
  }
  severity: "ERROR"
  timestamp: "2021-03-20T20:07:44.269633934Z"
}
The problem was in my config.yaml file; this is the corrected version:
/add:
  post:
    summary: Creates a new transaction.
    operationId: create-transaction
    consumes:
    - application/json
    produces:
    - application/json
    parameters:
    - in: body
      name: body
      required: false
      schema:
        $ref: '#/definitions/Model0'
    x-google-backend:
      address: https://appspot.com
      path_translation: [ APPEND_PATH_TO_ADDRESS ]
      jwt_audience: .googleusercontent.com
    responses:
      '201':
        description: A successful transaction created
        schema:
          type: string
definitions:
  Model0:
    properties:
      amount:
        type: number
        format: double
      currency:
        type: string
      date:
        type: integer
        format: int64
      name:
        type: string
      symbol:
        type: string
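Separately from the gateway fix, mismatched field types in the POST body are easy to overlook. A minimal stdlib sketch that checks a request body against the Model0 shape above before sending (the field list is copied from the definition; the validation logic is illustrative, not what the gateway itself runs):

```python
import json

# Expected field types taken from the Model0 definition above
MODEL0_FIELDS = {"amount": float, "currency": str, "date": int, "name": str, "symbol": str}

def validate_transaction(raw_body: str) -> list:
    """Return a list of problems with the JSON body; an empty list means OK."""
    problems = []
    try:
        data = json.loads(raw_body)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    for field, ftype in MODEL0_FIELDS.items():
        if field not in data:
            problems.append(f"missing field: {field}")
        elif ftype is float and not isinstance(data[field], (int, float)):
            problems.append(f"{field} should be a number")
        elif ftype is not float and not isinstance(data[field], ftype):
            problems.append(f"{field} should be {ftype.__name__}")
    return problems

body = '{"amount": 12.5, "currency": "USD", "date": 1616270864, "name": "t", "symbol": "ABC"}'
assert validate_transaction(body) == []
```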

Wiremock - how to match all permutations of array elements of a multipart/form-data in JSON?

I'm trying to mock a YAML-described API with a JSON WireMock stub for a PUT of multipart/form-data. The multipart body contains two arrays of metadata. How can I match a set of specific values (or regexes) in each array, regardless of their order?
We are bound to use Swagger 2.0 (in case you wonder), which is why we have these two parallel arrays for the metadata. I've been able to successfully match specific values in the array (for example, for fileMetadataName I can match "permissions,owner"), but I haven't found how to match the full set of potential values (5 values in all possible permutations; see the YAML below).
Here is the JSON Wiremock file that can match one case of the array:
{
  "request": {
    "method": "PUT",
    "urlPath": "/files",
    "headers": {
      "Content-Type": { "contains": "multipart/form-data" },
      "Source": { "matches": "POC(.+)" }
    },
    "multipartPatterns": [
      {
        "headers": {
          "Content-Disposition": {
            "contains": "name=\"typeOfFile\""
          }
        },
        "bodyPatterns": [ {
          "matches": "PDF"
        } ]
      },
      {
        "headers": {
          "Content-Disposition": {
            "contains": "name=\"fileMetadataName\""
          }
        },
        "bodyPatterns": [ {
          "matches": "permissions,owner"
        } ]
      }
    ]
  },
  "response": {
    "status": 201,
    "jsonBody": {
      "DocumentId": "123456789-123456789"
    }
  }
}
And here is an extract of the YAML that describes the multipart input:
paths:
  '/files':
    put:
      tags:
      - ProofOfConcept
      summary: Upload a file in the files repository
      description: Do the job
      operationId: putFile
      consumes:
      - multipart/form-data
      produces:
      - application/json
      parameters:
      - name: Source
        description: ID of the sender
        in: header
        type: string
        required: true
      - name: theFile
        description: The file to be uploaded
        in: formData
        required: true
        type: file
      - name: typeOfFile
        description: 'File type: PDF, JPG...'
        in: formData
        required: true
        type: string
      - name: fileMetadataName
        description: 'Metadata name. Possible values are: permissions, owner, group, creationDate, appGeneratedId (format: <app-name>;<id>)'
        in: formData
        type: array
        items:
          type: string
      - name: fileMetadataValue
        description: Value associated to the corresponding metadata name
        in: formData
        type: array
        items:
          type: string
      responses:
        '201':
          description: Created
          schema:
            $ref: '#/definitions/DocumentId'
I expect to be able to match, for fileMetadataName, all permutations of:
permissions, owner, group, creationDate, appGeneratedId
And for fileMetadataValue, I expect to be able to match regex values for all permutations (e.g. ([0-9]{3,3}) for permissions).
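Short of enumerating all 120 orderings in the stub, one option is a single regex built from lookaheads: each required value must occur as a complete comma-separated element, in any order. A Python sketch that generates such a pattern (WireMock's `matches` uses Java regex, which supports the same lookahead syntax, so the generated string could be pasted into the `bodyPatterns` entry):

```python
import re
from itertools import permutations

def any_order_pattern(values):
    """Regex matching a comma-separated list that contains every value, in any order."""
    # One lookahead per value: the value must appear either at the start of the
    # list or right after a comma, and be followed by a comma or the end.
    lookaheads = "".join(rf"(?=(?:.*,)?{re.escape(v)}(?:,|$))" for v in values)
    # After the lookaheads, accept the whole comma-separated list.
    return f"^{lookaheads}[^,]+(?:,[^,]+)*$"

names = ["permissions", "owner", "group", "creationDate", "appGeneratedId"]
pat = re.compile(any_order_pattern(names))

for perm in permutations(names):
    assert pat.match(",".join(perm))       # every permutation matches
assert not pat.match("permissions,owner")  # an incomplete set is rejected
```

For fileMetadataValue, the same construction works with per-value regexes (e.g. `[0-9]{3}` in place of a literal), as long as each fragment cannot itself contain a comma.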

Swagger array of objects

I am having some issues with Swagger: I have an array of objects (Address) described this way in the .yaml file:
Address:
  properties:
    street:
      type: string
    city:
      type: string
    state:
      type: string
    country:
      type: string
and this is the other YAML file with the definitions of the API (addresses is a parameter):
- name: addresses
  in: formData
  description: List of addresses of the user. The first one is the default one.
  type: array
  items:
    $ref: '#/definitions/Address'
And this is the text I put in the swagger UI:
[
  {
    "street": "Bond street",
    "city": "Torino",
    "state": "Italy",
    "country": "Italy"
  }
]
but in node.js, if I print what I receive:
{"addresses":["["," {"," \"street\": \"Bond street\",","
\"city\": \"Torino\","," \"state\": \"Italy\","," \"country\":
\"Italy\""," }","]"]}
And I get a parsing error... there are extra [ and " characters. It seems that Swagger parses it as a string(?)
To send JSON data, you need to use an in: body parameter (not in: formData) and specify that the operation consumes application/json. formData parameters are used for operations that consume application/x-www-form-urlencoded or multipart/form-data.
paths:
  /something:
    post:
      consumes:
      - application/json
      parameters:
      - in: body
        name: addresses
        required: true
        schema:
          type: array
          items:
            $ref: "#/definitions/Address"  # if "Address" is in the same file
            # or
            # $ref: "anotherfile.yaml#/definitions/Address"  # if "Address" is in another file
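On the client side the consequence is the same: the array must travel as the JSON request body with Content-Type: application/json, not as a form field. A stdlib sketch of building such a request (the URL is a placeholder):

```python
import json
from urllib.request import Request

def build_addresses_request(url: str, addresses: list) -> Request:
    """Build a POST whose body is the JSON-serialized address array."""
    body = json.dumps(addresses).encode("utf-8")
    return Request(url, data=body, method="POST",
                   headers={"Content-Type": "application/json"})

req = build_addresses_request(
    "https://example.com/something",  # placeholder endpoint
    [{"street": "Bond street", "city": "Torino", "state": "Italy", "country": "Italy"}])
# urllib.request.urlopen(req) would send it; the server then receives a real
# JSON array instead of stringified form-field lines.
```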

CloudFormation template for creating WordPress on 4 EC2 instances with a single RDS MySQL instance

Currently I am writing a CloudFormation template that installs WordPress on 4 instances backed by a single RDS MySQL instance. So far I have written code that launches a complete WordPress site on one EC2 instance with its database on an RDS instance. I want to do this for 3 more instances with 3 different databases (or the same database would also do the trick), all on the same single RDS instance. How should I proceed? What should I add, and how? Here is the code, in YAML, which launches 1 EC2 instance and 1 RDS instance with WordPress installed. Thank you.
CloudFormation template:
AWSTemplateFormatVersion: "2010-09-09"
Description: "Wordpress: highly available and scalable, a cloudonaut.io template"
Parameters:
  BlogID:
    Description: "A unique identifier for your blog. For internal use only."
    Type: String
    AllowedPattern: "[A-Za-z0-9\\-]+"
    ConstraintDescription: "Only letters, digits or dash allowed."
  BlogTitle:
    Description: "The title of the Wordpress blog."
    Type: String
    Default: "Just another Wordpress blog"
  BlogAdminUsername:
    Description: "A username for the Wordpress admin."
    Type: String
    Default: "admin"
  BlogAdminPassword:
    Description: "A password for the Wordpress admin."
    Type: String
    NoEcho: "true"
  BlogAdminEMail:
    Description: "The email address of the Wordpress admin."
    Type: String
  WebServerKeyName:
    Description: "The key pair to establish a SSH connection to the web servers."
    Type: "AWS::EC2::KeyPair::KeyName"
  WebServerInstanceType:
    Description: "The instance type of web servers (e.g. t2.micro)."
    Type: String
    Default: "t2.micro"
  DBServerInstanceType:
    Description: "The instance type of database server (e.g. db.t2.micro)."
    Type: String
    Default: "db.t2.micro"
Mappings:
  EC2RegionMap:
    ap-northeast-1: { AmazonLinuxAMIHVMEBSBacked64bit: "ami-cbf90ecb" }
    ap-southeast-1: { AmazonLinuxAMIHVMEBSBacked64bit: "ami-68d8e93a" }
    ap-southeast-2: { AmazonLinuxAMIHVMEBSBacked64bit: "ami-fd9cecc7" }
    eu-central-1: { AmazonLinuxAMIHVMEBSBacked64bit: "ami-a8221fb5" }
    eu-west-1: { AmazonLinuxAMIHVMEBSBacked64bit: "ami-a10897d6" }
    sa-east-1: { AmazonLinuxAMIHVMEBSBacked64bit: "ami-b52890a8" }
    us-east-1: { AmazonLinuxAMIHVMEBSBacked64bit: "ami-1ecae776" }
    us-west-1: { AmazonLinuxAMIHVMEBSBacked64bit: "ami-d114f295" }
    us-west-2: { AmazonLinuxAMIHVMEBSBacked64bit: "ami-e7527ed7" }
Resources:
  VPC:
    Type: "AWS::EC2::VPC"
    Properties:
      CidrBlock: "172.31.0.0/16"
      EnableDnsHostnames: "true"
  InternetGateway:
    Type: "AWS::EC2::InternetGateway"
    Properties: {}
  VPCGatewayAttachment:
    Type: "AWS::EC2::VPCGatewayAttachment"
    Properties:
      VpcId: {"Ref": "VPC"}
      InternetGatewayId: {"Ref": "InternetGateway"}
  SubnetA:
    Type: "AWS::EC2::Subnet"
    Properties:
      AvailabilityZone: {"Fn::Select": ["0", {"Fn::GetAZs": ""}]}
      CidrBlock: "172.31.38.0/24"
      VpcId: {"Ref": "VPC"}
  SubnetB:
    Type: "AWS::EC2::Subnet"
    Properties:
      AvailabilityZone: {"Fn::Select": ["1", {"Fn::GetAZs": ""}]}
      CidrBlock: "172.31.37.0/24"
      VpcId: {"Ref": "VPC"}
  RouteTable:
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId: {"Ref": "VPC"}
  RouteTableAssociationA:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: {"Ref": "SubnetA"}
      RouteTableId: {"Ref": "RouteTable"}
  RouteTableAssociationB:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: {"Ref": "SubnetB"}
      RouteTableId: {"Ref": "RouteTable"}
  RoutePublicNATToInternet:
    Type: "AWS::EC2::Route"
    Properties:
      RouteTableId: {"Ref": "RouteTable"}
      DestinationCidrBlock: "0.0.0.0/0"
      GatewayId: {"Ref": "InternetGateway"}
    DependsOn: "VPCGatewayAttachment"
  NetworkAcl:
    Type: "AWS::EC2::NetworkAcl"
    Properties:
      VpcId: {"Ref": "VPC"}
  SubnetNetworkAclAssociationA:
    Type: "AWS::EC2::SubnetNetworkAclAssociation"
    Properties:
      SubnetId: {"Ref": "SubnetA"}
      NetworkAclId: {"Ref": "NetworkAcl"}
  SubnetNetworkAclAssociationB:
    Type: "AWS::EC2::SubnetNetworkAclAssociation"
    Properties:
      SubnetId: {"Ref": "SubnetB"}
      NetworkAclId: {"Ref": "NetworkAcl"}
  NetworkAclEntryIngress:
    Type: "AWS::EC2::NetworkAclEntry"
    Properties:
      NetworkAclId: {"Ref": "NetworkAcl"}
      RuleNumber: "100"
      Protocol: "-1"
      RuleAction: "allow"
      Egress: "false"
      CidrBlock: "0.0.0.0/0"
  NetworkAclEntryEgress:
    Type: "AWS::EC2::NetworkAclEntry"
    Properties:
      NetworkAclId: {"Ref": "NetworkAcl"}
      RuleNumber: "100"
      Protocol: "-1"
      RuleAction: "allow"
      Egress: "true"
      CidrBlock: "0.0.0.0/0"
  LoadBalancer:
    Type: "AWS::ElasticLoadBalancing::LoadBalancer"
    Properties:
      Subnets: [{"Ref": "SubnetA"}, {"Ref": "SubnetB"}]
      LoadBalancerName: {"Ref": "BlogID"}
      Listeners:
      - InstancePort: "80"
        InstanceProtocol: "HTTP"
        LoadBalancerPort: "80"
        Protocol: "HTTP"
      HealthCheck:
        HealthyThreshold: "2"
        Interval: "5"
        Target: "TCP:80"
        Timeout: "3"
        UnhealthyThreshold: "2"
      SecurityGroups: [{"Ref": "LoadBalancerSecurityGroup"}]
      Scheme: "internet-facing"
      CrossZone: "true"
  LoadBalancerSecurityGroup:
    Type: "AWS::EC2::SecurityGroup"
    Properties:
      GroupDescription: "wordpress-elb"
      VpcId: {"Ref": "VPC"}
      SecurityGroupIngress:
      - CidrIp: "0.0.0.0/0"
        FromPort: 80
        IpProtocol: "tcp"
        ToPort: 80
  WebServerSecurityGroup:
    Type: "AWS::EC2::SecurityGroup"
    Properties:
      GroupDescription: "wordpress-ec2"
      VpcId: {"Ref": "VPC"}
      SecurityGroupIngress:
      - CidrIp: "0.0.0.0/0"
        FromPort: 22
        IpProtocol: "tcp"
        ToPort: 22
      - FromPort: 80
        IpProtocol: "tcp"
        SourceSecurityGroupId: {"Ref": "LoadBalancerSecurityGroup"}
        ToPort: 80
  DatabaseSecurityGroup:
    Type: "AWS::EC2::SecurityGroup"
    Properties:
      GroupDescription: "wordpress-rds"
      VpcId: {"Ref": "VPC"}
      SecurityGroupIngress:
      - IpProtocol: "tcp"
        FromPort: "3306"
        ToPort: "3306"
        SourceSecurityGroupId: {"Ref": "WebServerSecurityGroup"}
  Database:
    Type: "AWS::RDS::DBInstance"
    Properties:
      AllocatedStorage: "5"
      BackupRetentionPeriod: "0"
      DBInstanceClass: {"Ref": "DBServerInstanceType"}
      DBInstanceIdentifier: {"Ref": "BlogID"}
      DBName: "wordpress"
      Engine: "MySQL"
      MasterUsername: "wordpress"
      MasterUserPassword: "wordpress"
      VPCSecurityGroups: [{"Fn::GetAtt": ["DatabaseSecurityGroup", "GroupId"]}]
      DBSubnetGroupName: {"Ref": "DBSubnetGroup"}
      MultiAZ: "true"
      StorageType: "gp2"
  DBSubnetGroup:
    Type: "AWS::RDS::DBSubnetGroup"
    Properties:
      DBSubnetGroupDescription: "DB subnet group"
      SubnetIds: [{"Ref": "SubnetA"}, {"Ref": "SubnetB"}]
  S3Bucket:
    Type: "AWS::S3::Bucket"
    Properties:
      BucketName: {"Ref": "BlogID"}
  IAMUser:
    Type: "AWS::IAM::User"
    Properties:
      Path: "/"
      Policies:
      - PolicyName: "UploadToS3"
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
          - Effect: "Allow"
            Action: ["s3:*"]
            Resource:
            - {"Fn::Join": ["", ["arn:aws:s3:::", {"Ref": "BlogID"} ,"/*"]]}
  IAMAccessKey:
    Type: "AWS::IAM::AccessKey"
    Properties:
      UserName: {"Ref": "IAMUser"}
  LaunchConfiguration:
    Type: "AWS::AutoScaling::LaunchConfiguration"
    Metadata:
      "AWS::CloudFormation::Init":
        config:
          packages:
            yum:
              "php": []
              "php-mysql": []
              "mysql": []
              "httpd": []
          sources: {"/var/www/html": "https://wordpress.org/wordpress-4.2.4.tar.gz"}
          files:
            "/root/config.sh":
              content:
                "Fn::Join":
                - ""
                - [
                    "#!/bin/bash -ex\n",
                    "cp wp-config-sample.php wp-config.php\n",
                    "sed -i \"s/'database_name_here'/'wordpress'/g\" wp-config.php\n",
                    "sed -i \"s/'username_here'/'wordpress'/g\" wp-config.php\n",
                    "sed -i \"s/'password_here'/'wordpress'/g\" wp-config.php\n",
                    "sed -i \"s/'localhost'/'", {"Fn::GetAtt": ["Database", "Endpoint.Address"]}, "'/g\" wp-config.php\n",
                    "echo \"define('AWS_ACCESS_KEY_ID', '", {"Ref": "IAMAccessKey"},"'); \" >> wp-config.php \n",
                    "echo \"define('AWS_SECRET_ACCESS_KEY', '", {"Fn::GetAtt": ["IAMAccessKey", "SecretAccessKey"]},"'); \" >> wp-config.php \n",
                    "echo \"define( 'DISALLOW_FILE_MODS', true ); \" >> wp-config.php \n",
                    "echo \"define( 'WP_AUTO_UPDATE_CORE', false ); \" >> wp-config.php \n",
                    "chmod -R 777 wp-content/ \n",
                    "curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar \n",
                    "php wp-cli.phar core install --url=\"", {"Fn::GetAtt": ["LoadBalancer", "DNSName"]}, "\" --title=\"", {"Ref": "BlogTitle"}, "\" --admin_user=\"", {"Ref": "BlogAdminUsername"}, "\" --admin_password=\"", {"Ref": "BlogAdminPassword"}, "\" --admin_email=\"", {"Ref": "BlogAdminEMail"}, "\" \n",
                    "php wp-cli.phar plugin install --activate amazon-web-services \n",
                    "php wp-cli.phar plugin install --activate amazon-s3-and-cloudfront \n",
                    "CHARCOUNT=`printf \"",{"Ref": "BlogID"} ,"\" | wc -c` \n",
                    "php wp-cli.phar db query \"DELETE FROM wp_options WHERE option_name = 'tantan_wordpress_s3'; INSERT INTO wp_options (option_name, option_value, autoload) VALUES('tantan_wordpress_s3', 'a:15:{s:17:\\\"post_meta_version\\\";i:1;s:6:\\\"bucket\\\";s:", "$CHARCOUNT", ":\\\"", {"Ref": "BlogID"},"\\\";s:6:\\\"region\\\";s:0:\\\"\\\";s:6:\\\"domain\\\";s:9:\\\"subdomain\\\";s:7:\\\"expires\\\";s:1:\\\"0\\\";s:10:\\\"cloudfront\\\";s:0:\\\"\\\";s:13:\\\"object-prefix\\\";s:19:\\\"wp-content/uploads/\\\";s:10:\\\"copy-to-s3\\\";s:1:\\\"1\\\";s:13:\\\"serve-from-s3\\\";s:1:\\\"1\\\";s:17:\\\"remove-local-file\\\";s:1:\\\"0\\\";s:3:\\\"ssl\\\";s:7:\\\"request\\\";s:12:\\\"hidpi-images\\\";s:1:\\\"0\\\";s:17:\\\"object-versioning\\\";s:1:\\\"0\\\";s:21:\\\"use-yearmonth-folders\\\";s:1:\\\"1\\\";s:20:\\\"enable-object-prefix\\\";s:1:\\\"1\\\";}', 'yes');\" \n"
                  ]
              mode: "000500"
              owner: "root"
              group: "root"
          commands:
            01_mv:
              command: "mv * ../"
              cwd: "/var/www/html/wordpress"
            02_config:
              command: "/root/config.sh"
              cwd: "/var/www/html"
          services:
            sysvinit:
              httpd:
                enabled: "true"
                ensureRunning: "true"
    Properties:
      ImageId: {"Fn::FindInMap": ["EC2RegionMap", {"Ref": "AWS::Region"}, "AmazonLinuxAMIHVMEBSBacked64bit"]}
      InstanceType: {"Ref": "WebServerInstanceType"}
      SecurityGroups: [{"Ref": "WebServerSecurityGroup"}]
      KeyName: {"Ref": "WebServerKeyName"}
      AssociatePublicIpAddress: "true"
      UserData:
        "Fn::Base64":
          "Fn::Join":
          - ""
          - [
              "#!/bin/bash -ex\n",
              "yum update -y aws-cfn-bootstrap\n",
              "/opt/aws/bin/cfn-init -v --stack ", {"Ref": "AWS::StackName"}, " --resource LaunchConfiguration --region ", {"Ref": "AWS::Region"}, "\n",
              "/opt/aws/bin/cfn-signal -e $? --stack ", {"Ref": "AWS::StackName"}, " --resource AutoScalingGroup --region ", {"Ref": "AWS::Region"}, "\n"
            ]
      InstanceMonitoring: "true"
  AutoScalingGroup:
    Type: "AWS::AutoScaling::AutoScalingGroup"
    Properties:
      LoadBalancerNames: [{"Ref": "LoadBalancer"}]
      LaunchConfigurationName: {"Ref": "LaunchConfiguration"}
      MinSize: "2"
      MaxSize: "4"
      DesiredCapacity: "2"
      Cooldown: "60"
      HealthCheckGracePeriod: "120"
      HealthCheckType: "ELB"
      VPCZoneIdentifier: [{"Ref": "SubnetA"}, {"Ref": "SubnetB"}]
      Tags:
      - PropagateAtLaunch: "true"
        Value: "wordpress"
        Key: "Name"
    CreationPolicy:
      ResourceSignal:
        Timeout: "PT10M"
  ScalingUpPolicy:
    Type: "AWS::AutoScaling::ScalingPolicy"
    Properties:
      AdjustmentType: "PercentChangeInCapacity"
      MinAdjustmentStep: "1"
      AutoScalingGroupName: {"Ref": "AutoScalingGroup"}
      Cooldown: "300"
      ScalingAdjustment: "25"
  CPUHighAlarm:
    Type: "AWS::CloudWatch::Alarm"
    Properties:
      EvaluationPeriods: "1"
      Statistic: "Average"
      Threshold: "75"
      AlarmDescription: "Alarm if CPU load is high."
      Period: "60"
      AlarmActions: [{"Ref": "ScalingUpPolicy"}]
      Namespace: "AWS/EC2"
      Dimensions:
      - Name: "AutoScalingGroupName"
        Value: {"Ref": "AutoScalingGroup"}
      ComparisonOperator: "GreaterThanThreshold"
      MetricName: "CPUUtilization"
  ScalingDownPolicy:
    Type: "AWS::AutoScaling::ScalingPolicy"
    Properties:
      AdjustmentType: "PercentChangeInCapacity"
      MinAdjustmentStep: "1"
      AutoScalingGroupName: {"Ref": "AutoScalingGroup"}
      Cooldown: "300"
      ScalingAdjustment: "-25"
  CPULowAlarm:
    Type: "AWS::CloudWatch::Alarm"
    Properties:
      EvaluationPeriods: "1"
      Statistic: "Average"
      Threshold: "25"
      AlarmDescription: "Alarm if CPU load is low."
      Period: "60"
      AlarmActions: [{"Ref": "ScalingDownPolicy"}]
      Namespace: "AWS/EC2"
      Dimensions:
      - Name: "AutoScalingGroupName"
        Value: {"Ref": "AutoScalingGroup"}
      ComparisonOperator: "LessThanThreshold"
      MetricName: "CPUUtilization"
Outputs:
  URL:
    Value: {"Fn::Join": ["", ["http://", {"Fn::GetAtt": ["LoadBalancer", "DNSName"]}]]}
    Description: "URL to Wordpress"
The template you provided (which seems to be from cloudonaut) does not create a single EC2 instance, it creates an Auto Scaling Group containing 2-4 EC2 instances in order to provide high-availability and load-based scaling to the Wordpress installation.
You will need to copy the following resources once for each new Wordpress installation you require (e.g., the resource named LoadBalancer should be copied with new names like LoadBalancer2, LoadBalancer3, etc):
LoadBalancer
LaunchConfiguration
AutoScalingGroup
ScalingUpPolicy
CPUHighAlarm
ScalingDownPolicy
CPULowAlarm
You will also need to change the reference to the database from wordpress to wordpress2, wordpress3, etc in each updated LaunchConfiguration user-data script, in the following line:
"sed -i \"s/'database_name_here'/'wordpress'/g\" wp-config.php\n",
This should give you a working template, though it will have a lot of repetition across each installation's resources. You could probably refactor the duplicated resources into a nested stack that you can reuse across each instance, passing the database reference through stack outputs, and so on.
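If hand-copying seven resources per site feels error-prone, the rename can also be scripted. A rough sketch using plain dicts (a real template would first be loaded with a JSON/YAML parser; this toy version only rewrites `Ref`s that point at the duplicated resources, not Fn::GetAtt or the per-site database name):

```python
import copy

# The resources the answer says must be copied once per WordPress site
PER_SITE_RESOURCES = ["LoadBalancer", "LaunchConfiguration", "AutoScalingGroup",
                      "ScalingUpPolicy", "CPUHighAlarm", "ScalingDownPolicy", "CPULowAlarm"]

def rename_refs(node, suffix):
    """Recursively rewrite {"Ref": X} nodes that point at a duplicated
    resource so each copy references its own siblings."""
    if isinstance(node, dict):
        if list(node) == ["Ref"] and node["Ref"] in PER_SITE_RESOURCES:
            return {"Ref": node["Ref"] + suffix}
        return {k: rename_refs(v, suffix) for k, v in node.items()}
    if isinstance(node, list):
        return [rename_refs(v, suffix) for v in node]
    return node

def duplicate_site(template, n):
    """Add a copy of the per-site resources with names like LoadBalancer2."""
    suffix = str(n)
    for name in PER_SITE_RESOURCES:
        template["Resources"][name + suffix] = rename_refs(
            copy.deepcopy(template["Resources"][name]), suffix)
    return template

# Toy template containing just the cross-referencing resources:
tpl = {"Resources": {
    "LoadBalancer": {"Type": "AWS::ElasticLoadBalancing::LoadBalancer"},
    "LaunchConfiguration": {"Type": "AWS::AutoScaling::LaunchConfiguration"},
    "AutoScalingGroup": {"Type": "AWS::AutoScaling::AutoScalingGroup",
                         "Properties": {"LoadBalancerNames": [{"Ref": "LoadBalancer"}],
                                        "LaunchConfigurationName": {"Ref": "LaunchConfiguration"}}},
    "ScalingUpPolicy": {"Type": "AWS::AutoScaling::ScalingPolicy"},
    "CPUHighAlarm": {"Type": "AWS::CloudWatch::Alarm"},
    "ScalingDownPolicy": {"Type": "AWS::AutoScaling::ScalingPolicy"},
    "CPULowAlarm": {"Type": "AWS::CloudWatch::Alarm"},
}}
duplicate_site(tpl, 2)  # adds LoadBalancer2, AutoScalingGroup2, ... wired together
```

As the answer notes, a nested stack parameterized by site number is the cleaner long-term shape; this script is just a stopgap for the copy-paste step.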
