In my previous blog post, I investigated how to create custom metrics for an EC2 instance using Python and Boto3, the AWS SDK for Python. At the time I was specifically interested in demonstrating that it is not necessary to use the “old fashioned” Perl scripts to post custom metrics from your EC2 instance; you can do this rather easily yourself using Python. All steps in that process were performed manually, but that’s not really something you’d like to do whenever there are more than [insert your threshold here] instances to provision.
In a nutshell
I want to create a simple EC2 instance running Amazon Linux, install an Apache web server to serve a very simple page, and set up a custom Python script that periodically sends custom metrics data to CloudWatch. To top things off, I also want to create a custom dashboard showing these custom metrics for my newly created instance.
CloudFormation to the rescue
In AWS, there are multiple tools with which you can provision such an instance; OpsWorks (Chef/Puppet) would be one option. Alternatively, building an instance manually and creating a “golden AMI” from it would be another. Both options only solve the problem of creating the instance; you’d still have to do a few things beyond installing and configuring the software:
- creating an IAM role to allow the instance to send data to CloudWatch
- creating a custom dashboard for the new metrics.
In theory, you could probably do all of this by installing the AWS CLI, but that would amount to a lot of complex and error-prone command line scripting. AWS offers a much better alternative called AWS CloudFormation: “Model and provision all your cloud infrastructure resources”, or “infrastructure as code”:

Image taken from https://aws.amazon.com/cloudformation/
AWS CloudFormation is the one-stop solution for your infrastructure and provisioning:
- It is free! (I am Dutch, so this feels like an important benefit.) You only pay for the resources you create, not for CloudFormation itself
- Supports both creation and changes to the stack
- Will determine in which order the resources will be provisioned
- Both YAML and JSON are supported for templates
- Offers a UI to model or validate the template
- Provides an overview of change sets to be applied
- Supports over 100 different AWS resource types, plus custom resources with which you can extend CloudFormation’s functionality
- Lots of templates available in documentation and elsewhere on the internet.
Automation Goal
In the next sections I will develop a so-called CloudFormation template. CloudFormation uses a template to provision a stack, which is nothing more than a set of AWS resources. Template definitions consist of a combination of different blocks:
- Parameters – this section enables the user to specify variable values, e.g. names, port numbers or address ranges. Everything that should be dynamically user-definable should be a parameter.
- Resources – this is the meat of the template definition, defining what AWS resources should be created by this template. This is also the only mandatory section in the template (a question that used to occur frequently on the AWS certification exams).
- Mappings – mappings provide a way to translate different values. A classic example is the actual Amazon Machine Image (AMI) identifier, which differs between regions. When selecting an AMI by name, the mappings section can be used to retrieve the correct AMI identifier for the user’s current region.
- Outputs – outputs expose relevant properties of the created resources, like a website address, an IP address, etc. Values can also be exported so that they can be reused (imported) in other templates! This allows for a more modular template design, like off-loading the design of the VPC to networking experts.
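To make the anatomy of these blocks concrete, here is a minimal sketch in Python: a template represented as a plain dictionary (the resource names are illustrative, not from the actual template), with a hypothetical check for the one mandatory section:

```python
# Sketch only: a CloudFormation template modelled as a Python dict.
# Resources is the only mandatory section; the others are optional.
template = {
    "Parameters": {"KeyName": {"Type": "AWS::EC2::KeyPair::KeyName"}},
    "Resources": {"WebServerHost": {"Type": "AWS::EC2::Instance"}},
    "Outputs": {"InstanceId": {"Value": {"Ref": "WebServerHost"}}},
}

def has_mandatory_sections(tpl: dict) -> bool:
    """A template is only valid when it defines at least one resource."""
    return bool(tpl.get("Resources"))

print(has_mandatory_sections(template))            # True
print(has_mandatory_sections({"Parameters": {}}))  # False
```

This mirrors the exam question mentioned above: a template with only Parameters, Mappings and Outputs would be rejected, because there is nothing to provision.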
My goal for this blog post is to radically automate every step of the process that I described in the aforementioned previous blog post on custom metrics. From the creation of the instance, via the installation of the script and its dependencies and the collection of the statistics, to the provisioning of the dashboard: EVERYTHING.SHOULD.BE.AUTOMATED! So no more manual steps. Let’s get started.
What will it do?
What I want this template to take care of is the complete provisioning of my environment. When I use the template in CloudFormation, after providing the parameters and waiting a couple of minutes, the following steps should have been performed successfully:
- provisioning an EC2 instance (into my default VPC), supporting SSH logins
- installing Apache and configuring it to serve a simple webpage (accessible from the internet)
- installing Python and the dependencies needed to send metrics
- retrieving the latest source for the metrics script from source control and installing it
- scheduling the script to send metrics to CloudWatch every single minute
- as a bonus: creating and scheduling a script to generate some load and disk I/O, so we can see some actual moving metrics
- creating a new dashboard in CloudWatch for the newly provisioned EC2 instance
Making amends (for past shortcuts)
In my previous blog post, where I developed the script, I initially used a “plain vanilla” EC2 instance, i.e. an EC2 instance without a specific IAM role attached to it. This setup required me to configure the AWS CLI so that Boto3 could obtain the required authorization keys. However, when an IAM role is attached to the EC2 instance, the AWS SDK (Boto3) can obtain temporary credentials from it. So an additional item on the to-do list is setting up a proper IAM role for EC2, with the correct permissions to allow the instance to call the CloudWatch PutMetricData API.
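With the role attached, the metrics script needs no configured access keys at all; Boto3 picks up the instance’s temporary credentials automatically. As a hedged sketch (the namespace, metric name and instance id below are illustrative), the PutMetricData request the script sends looks like this:

```python
# Sketch: with an IAM role attached to the instance, boto3 resolves
# temporary credentials itself -- no access keys need to be configured:
#
#   import boto3
#   cloudwatch = boto3.client("cloudwatch")
#   cloudwatch.put_metric_data(**request)
#
# The request payload for the PutMetricData API can be built like this
# (names and values are illustrative, not taken from the actual script):
def build_put_metric_request(instance_id: str, value: float) -> dict:
    return {
        "Namespace": "CustomMetrics",
        "MetricData": [{
            "MetricName": "MemoryInUse",
            "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
            "Unit": "Percent",
            "Value": value,
        }],
    }

request = build_put_metric_request("i-0123456789abcdef0", 42.5)
print(request["Namespace"])  # CustomMetrics
```

The call only succeeds if the role’s policy allows the cloudwatch:PutMetricData action, which is exactly what the IAM role defined below provides.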
Developing the template
Getting started with CloudFormation involves one important choice: will you use YAML or JSON as the markup language for your template? CloudFormation supports both formats. My preference went to YAML almost immediately, as in my opinion it is much more “human friendly” than JSON. Fortunately, starter templates for both are available in abundance. I picked one of the templates provided as part of Stephane Maarek’s AWS CloudFormation Masterclass on Udemy.
Actually defining the template is the hard work; using the template to create the stack is simple. So without further ado, let’s get started with the template!
Parameter definitions
Below is the block with parameter definitions used in the CloudFormation template. The use of a parameters block is optional, but it allows for greater reuse by exposing dynamic values to be entered by the CloudFormation user:
KeyName:
  Description: Name of an existing EC2 KeyPair to enable SSH access to the instances
  Type: AWS::EC2::KeyPair::KeyName
  ConstraintDescription: must be the name of an existing EC2 KeyPair.
SSHLocation:
  Description: The IP address range that can be used to SSH to the EC2 instances
  Type: String
  MinLength: '9'
  MaxLength: '18'
  Default: 0.0.0.0/0
  AllowedPattern: "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})"
  ConstraintDescription: must be a valid IP CIDR range of the form x.x.x.x/x.
LatestAmiId:
  Type: 'AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>'
  Default: '/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2'
MetricsScriptURI:
  Description: The URI of the script that collects the metrics
  Type: String
  Default: https://raw.githubusercontent.com/mnuman/CFN-CustomMetrics/master/Python-Custom-Metrics/metrics.py
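The SSHLocation parameter shows how CloudFormation validates input before any resource is created: the value must satisfy both the length constraints and the AllowedPattern regular expression. A small Python sketch of that validation logic (the helper name is mine, not part of CloudFormation):

```python
import re

# The AllowedPattern from the SSHLocation parameter: CloudFormation
# implicitly anchors the pattern, so fullmatch() mimics its behaviour.
ALLOWED_PATTERN = r"(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})/(\d{1,2})"

def is_valid_cidr_param(value: str) -> bool:
    """Mimic CloudFormation's parameter checks: MinLength/MaxLength + pattern."""
    return 9 <= len(value) <= 18 and re.fullmatch(ALLOWED_PATTERN, value) is not None

print(is_valid_cidr_param("0.0.0.0/0"))   # True
print(is_valid_cidr_param("not-a-cidr"))  # False
```

Note that the pattern only checks the shape of the value, not whether the octets are in range; a stricter template could tighten the regular expression further.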
Resources
The resources block is the only mandatory element in the CloudFormation template – without a definition of resources to be provisioned, there is no valid CloudFormation template.
IAM Role
As I explained earlier, attaching a custom IAM role to your EC2 instance makes life much easier with respect to the authentication and authorization required for the AWS SDK call to CloudWatch that pushes your custom metrics. The Boto3 documentation lists the options for the SDK to obtain credentials, and even though the IAM role attached to the EC2 instance is the eighth and last location it searches, it is by far the easiest and most maintainable option. As you can see in the resource definition, there are several !Ref functions present; these resolve to identifiers of the referenced resources, e.g. “!Ref EC2CloudWatchMetricsRole” in the RolePolicies stanza inserts a reference to the actual IAM role “EC2CloudWatchMetricsRole” into its definition.
EC2CloudWatchMetricsRole:
  Type: "AWS::IAM::Role"
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: "Allow"
          Principal:
            Service:
              - "ec2.amazonaws.com"
          Action:
            - "sts:AssumeRole"
    Path: "/"
RolePolicies:
  Type: "AWS::IAM::Policy"
  Properties:
    PolicyName: "EC2CloudWatchMetricsPolicy"
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: "Allow"
          Action: "cloudwatch:PutMetricData"
          Resource: "*"
    Roles:
      - !Ref EC2CloudWatchMetricsRole
CloudWatchMetricsRoleProfile:
  Type: "AWS::IAM::InstanceProfile"
  Properties:
    Path: "/"
    Roles:
      - !Ref EC2CloudWatchMetricsRole
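The !Ref mechanism used twice above can be illustrated with a tiny resolver. This is a deliberately simplified sketch: the real service substitutes a resource-specific physical id (an ARN, an instance id, etc.), whereas here a !Ref simply looks up a value in a dictionary:

```python
# Simplified sketch of !Ref resolution. Assumption: each referenced
# resource maps to one resolved value; real CloudFormation returns the
# physical id or ARN appropriate to the resource type.
def resolve_refs(node, resources: dict):
    """Recursively replace {"Ref": name} nodes with their resolved value."""
    if isinstance(node, dict):
        if set(node) == {"Ref"}:
            return resources[node["Ref"]]
        return {key: resolve_refs(value, resources) for key, value in node.items()}
    if isinstance(node, list):
        return [resolve_refs(item, resources) for item in node]
    return node

# The Roles property of RolePolicies, in JSON form:
policy = {"Roles": [{"Ref": "EC2CloudWatchMetricsRole"}]}
resolved = resolve_refs(policy, {"EC2CloudWatchMetricsRole": "my-metrics-role"})
print(resolved)  # {'Roles': ['my-metrics-role']}
```

This is also why CloudFormation can work out the provisioning order by itself: every !Ref is an explicit dependency edge between two resources.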
Provisioning an EC2 instance
Provisioning an EC2 instance is straightforward, and configuring the new instance is done by updating and installing packages, creating files and running scripts. This can also be made part of the template definition by using the Amazon-provided cfn-init script; you then signal to CloudFormation that the scripts have finished by communicating a completion status using cfn-signal.
The entire setup is a block of boilerplate code defined in the EC2 UserData section:
UserData:
  "Fn::Base64":
    !Sub |
      #!/bin/bash -xe
      # Get the latest CloudFormation package
      yum update -y aws-cfn-bootstrap
      # Start cfn-init
      /opt/aws/bin/cfn-init -s ${AWS::StackId} -r WebServerHost --region ${AWS::Region} || error_exit 'Failed to run cfn-init'
      # Start up the cfn-hup daemon to listen for changes to the EC2 instance metadata
      /opt/aws/bin/cfn-hup || error_exit 'Failed to start cfn-hup'
      # All done so signal success
      /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackId} --resource WebServerHost --region ${AWS::Region}
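The ${AWS::StackId} and ${AWS::Region} placeholders in this block are filled in by the Fn::Sub intrinsic function before the script reaches the instance. A minimal Python sketch of that substitution step (the stack id below is an obviously fake placeholder):

```python
import re

# Sketch of the ${...} substitution that Fn::Sub performs on the
# UserData script. Assumption: only simple variable lookups, no
# nested functions or literal ${!...} escapes.
def fn_sub(template: str, variables: dict) -> str:
    return re.sub(r"\$\{([^}]+)\}", lambda m: variables[m.group(1)], template)

user_data = fn_sub(
    "/opt/aws/bin/cfn-init -s ${AWS::StackId} --region ${AWS::Region}",
    {"AWS::StackId": "arn:aws:cloudformation:fake-stack-id", "AWS::Region": "eu-west-1"},
)
print(user_data)
```

After substitution, Fn::Base64 encodes the result, which is the form EC2 expects for user data.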
Configuration Sections
Within the CloudFormation Init section of the EC2 instance definition, it is possible to define multiple configuration sections. These configuration sections support miscellaneous OS-related tasks, like:
- packages – define OS packages to be installed or updated
- files – create files on the filesystem, with specified permissions, ownership and content
- groups/users – create OS groups or users
- services – define various UNIX services to be created, enabled or started
- commands – provide some command line scripting
Configuration sections can be combined into configuration sets, and steps within a configuration set are executed in the order specified. You can specify which configuration set to run as an argument to the cfn-init script in the user data. Alternatively, if you do not pass a configuration set argument, the configuration set called ‘default’ will be executed – if present:
AWS::CloudFormation::Init:
  configSets:
    default:
      - "os-packages"
      - "apache-configuration"
      - "script-setup"
      - "schedule-python-metrics"
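Conceptually, cfn-init resolves the requested (or default) configSet to an ordered list of sections and runs them one by one; a sketch of that lookup in Python:

```python
# Sketch of configSet resolution as performed by cfn-init.
# Assumption: plain lists of section names, no nested configSet refs.
config_sets = {
    "default": ["os-packages", "apache-configuration",
                "script-setup", "schedule-python-metrics"],
}

def sections_to_run(config_sets, requested=None):
    """Return the ordered sections for the requested set ('default' if none)."""
    return config_sets[requested or "default"]

print(sections_to_run(config_sets))
```

The ordering matters here: the OS packages must be in place before Apache is configured, and the metrics script must exist before it can be scheduled.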
script-setup
This cfn-init section configures the scripts that we need to have in place. First of all, it creates a “config” file in the user’s .aws directory. The purpose of this file is to provide configuration for the Boto3 SDK, as it requires the region in which it is running, and this is not provided as part of the instance metadata. In the absence of this file, all service calls fail.
Furthermore, this section creates the shell script that is scheduled to generate load on the web server and I/O activity on the filesystem. Properties declared in this section (apart from the actual content) include the ownership and file permissions. As you can see from the snippet below, the AWS CLI configuration and load generator files have their content specified inline, whereas the metrics script is pulled from a URL (pointing to the source of the script in my GitHub repository):
script-setup:
  files:
    "/home/ec2-user/.aws/config":
      content: !Sub |
        [default]
        region=${AWS::Region}
      mode: "000400"
      owner: "ec2-user"
      group: "ec2-user"
    "/home/ec2-user/generateSomeLoad.sh":
      content: |
        #!/usr/bin/env bash
        typeset -i i j
        ... more scripting content omitted ...
        # your mama does not work here - clean up after yourself!
        rm -f test*.img
      mode: "000700"
      owner: "ec2-user"
      group: "ec2-user"
    "/home/ec2-user/metrics.py":
      source: !Ref MetricsScriptURI
      mode: "000700"
      owner: "ec2-user"
      group: "ec2-user"
  commands:
    install-python-deps:
      command: "sudo pip3 install psutil boto3 requests"
      cwd: "~"
schedule-python-metrics
This configuration section actually schedules the scripts for periodic execution via the user crontab. I could not think of another way than writing the crontab task specification into a file and submitting that to the user’s crontab. Of course, I dispose of this temporary file after using it.
If you can think of another more declarative way to achieve this, I’d like to know!
schedule-python-metrics:
  files:
    "/tmp/crontab":
      content: !Sub |
        */1 * * * * /home/ec2-user/metrics.py
        */8 * * * * /home/ec2-user/generateSomeLoad.sh
      mode: "000400"
      owner: "root"
      group: "root"
  commands:
    schedule-python-metrics:
      command: "crontab -u ec2-user /tmp/crontab && rm /tmp/crontab"
      cwd: "~"
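For clarity, the content of the temporary /tmp/crontab file can be reproduced in a few lines of Python; the schedules match the section above (metrics every minute, load generation every eight minutes):

```python
# Sketch: render the crontab written to /tmp/crontab by cfn-init.
jobs = [
    ("*/1 * * * *", "/home/ec2-user/metrics.py"),
    ("*/8 * * * *", "/home/ec2-user/generateSomeLoad.sh"),
]

crontab_content = "\n".join(f"{schedule} {command}"
                            for schedule, command in jobs) + "\n"
print(crontab_content)
```

Installing the file via `crontab -u ec2-user` replaces the user’s whole crontab, which is why the file contains both jobs rather than being appended one job at a time.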
Custom Dashboard Provisioning
Provisioning the custom dashboard is not hard, it’s just a little awkward. This is because the dashboard’s definition is made up of a block of JSON, into which you need to inject the actual instance identifier to show the correct data. The custom dashboard definition below consists of just two fields: the first is the dashboard’s name, the second is the actual definition of the dashboard. As you can see from the code sample, the definition itself is collated from blocks of JSON code and CloudFormation references to the WebServerHost (the name of the EC2 resource being provisioned), which returns the actual EC2 instance identifier:
CustomMetricDashboard:
  Type: AWS::CloudWatch::Dashboard
  Properties:
    DashboardName: !Sub Dashboard_Stack_${AWS::StackName}
    DashboardBody: !Join
      - ""
      - - '{"widgets": [ { "type": "metric", "x": 0, "y": 0, "width": 6, "height": 6, "properties": { "metrics": [ [ "CustomMetrics", "ApacheMemory", "InstanceId","'
        - !Ref WebServerHost
        - '", "InstanceType", "t2.micro", { "period": 60 } ] ], "view": "timeSeries", "stacked": false, "region": "eu-west-1", "period": 300 } }, { "type": "metric", "x": 6, "y": 0, "width": 6, "height": 6, "properties": { "metrics": [ [ "CustomMetrics", "DiskspaceUsed", "InstanceId","'
        - !Ref WebServerHost
        - '", "InstanceType", "t2.micro", { "period": 60 } ] ], "view": "timeSeries", "stacked": true, "region": "eu-west-1", "period": 300 } }, { "type": "metric", "x": 12, "y": 0, "width": 6, "height": 3, "properties": { "metrics": [ [ "CustomMetrics", "MemoryInUse", "InstanceId", "'
        - !Ref WebServerHost
        - '", "InstanceType", "t2.micro", { "period": 60 } ] ], "view": "singleValue", "region": "eu-west-1", "period": 300 } } ] }'
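Collating raw JSON strings like this is error-prone; to check what the !Join actually has to produce, the same dashboard body can be built programmatically and validated. This Python sketch mirrors the widget layout above (the instance id is a placeholder standing in for the !Ref WebServerHost value):

```python
import json

# Sketch: build the DashboardBody as Python data instead of collated
# JSON fragments, then serialize. Layout mirrors the template above.
def metric_widget(x, metric_name, instance_id, height=6,
                  view="timeSeries", stacked=False):
    return {
        "type": "metric", "x": x, "y": 0, "width": 6, "height": height,
        "properties": {
            "metrics": [["CustomMetrics", metric_name,
                         "InstanceId", instance_id,
                         "InstanceType", "t2.micro", {"period": 60}]],
            "view": view, "stacked": stacked,
            "region": "eu-west-1", "period": 300,
        },
    }

instance_id = "i-0123456789abcdef0"  # in the template: !Ref WebServerHost
dashboard_body = json.dumps({"widgets": [
    metric_widget(0, "ApacheMemory", instance_id),
    metric_widget(6, "DiskspaceUsed", instance_id, stacked=True),
    metric_widget(12, "MemoryInUse", instance_id, height=3, view="singleValue"),
]})

# A malformed body would fail here; valid JSON round-trips cleanly.
parsed = json.loads(dashboard_body)
print(len(parsed["widgets"]))  # 3
```

Running a check like this locally catches missing quotes or commas before CloudFormation rejects the dashboard resource.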
Outputs
The final section in the template definition is the Outputs section; here, the template developer can specify which values should be exposed, both on the stack overview in the console and to the outside world for further use.
As a result of the stack creation, we expose the relevant identifications for the webhost (instance identifier, URL for the webserver and its public IPv4 address) and the name of its custom metrics dashboard:
Outputs:
  InstanceId:
    Description: The instance ID of the web server
    Value:
      Ref: WebServerHost
  WebsiteURL:
    Value:
      !Sub 'http://${WebServerHost.PublicDnsName}'
    Description: URL for newly created LAMP stack
  PublicIP:
    Description: Public IP address of the web server
    Value:
      !GetAtt WebServerHost.PublicIp
  Dashboard:
    Description: Custom dashboard created for stack
    Value:
      Ref: CustomMetricDashboard
Provisioning a stack
First step: specify which template to use for the stack. You can either point to a template in an existing S3 bucket, upload a template, or even create a new one using the CloudFormation Designer. Here, we upload the template:

Next, specify the parameter values with which to create the actual stack; this includes the stack name, and any parameters you may have defined as part of the template:

I have specified that I want to use my “Ireland” keypair (for setting up an SSH connection to log into the instance), kept the default value for the latest & greatest Amazon Linux image (the AMI, which defines what OS and packages the virtual machine will be provisioned with), the location of the metrics script, and the CIDR block for hosts from which an SSH connection into the instance is allowed.
Next up is a page with advanced options, where the “Rollback on failure” option turned out to be pretty useful. One of the nice features of CloudFormation is that it rolls back a failed stack creation: all resources provisioned or modified during the process are removed when the stack creation fails. The upside is that you do not have to clean up yourself (across all kinds of consoles) and you will not incur any more charges after the clean-up has completed; the downside is that it becomes more difficult to debug a faulty CloudFormation template. By disabling “Rollback on failure” you prevent deletion of the resources (don’t worry – you can also delete them from the CloudFormation console) and you get the opportunity to debug the process.

The final screen of CloudFormation provides an overview of the information entered, and in this case a mandatory acknowledgement: you have to allow CloudFormation to make security-related changes, as you’re provisioning an IAM role in this template:

After submitting the parametrized template, CloudFormation gets started and periodically provides feedback on its progress on the stack creation dashboard:

After completion, you can get an overview of the AWS resources CloudFormation has created on your behalf, with their unique identifiers:

And finally, there’s the section of outputs. These are the properties that are being returned by CloudFormation after (successfully) creating the stack:

Dashboard
The definitive proof that all of this works correctly is the functioning dashboard, showing the periodically varying amount of disk space used and the growing amount of memory taken by the Apache processes (which does not seem to be released):

Conclusions
Using CloudFormation, it is fairly simple to create a template to provision an entire set of resources for your application’s requirements. Installing patches, pulling in source code, running scripts, creating files and the like can all be done easily and declaratively.
There is a real plethora of CloudFormation configuration ‘blocks’ available in the AWS documentation and on the internet. It really feels like building your infrastructure as a script in LEGO blocks!

You can find the complete CloudFormation template as well as the Python script in my Github repository.