Writing an AWS CloudFormation Resource Provider in Python: Step by Step

19/12/2019 | Ben Bridts

Disclaimer: The python plugin for the CloudFormation CLI (cloudformation-cli-python-plugin) is still in developer preview. Use this at your own risk.

Last month AWS announced a new way to manage third party resources in CloudFormation. Ian Mckay wrote a great (java based) walkthrough when this came out, and since then the python plugin has been released as a developer preview. In this blog post we will do something very similar to what Ian did, but in python. We will follow the same structure as his blog post, and focus on calling out the differences (both in our approaches and in the plugins), rather than making a direct python translation.

If you want more background on the resource providers themselves, read the “How it works” and “CloudFormation registry console” parts in Ian’s post. Let’s dive in immediately with the python stuff:

Setting up your environment

The cloudformation-cli tool is the same for every language, but we do need the python plugin for our language. On OSX with homebrew, installing the shared parts looks like this:

brew update
brew install python awscli
pip3 install cloudformation-cli

Because the python plugin is still in developer preview, we need to install it from the github source instead of getting it from pypi (the python package index).
Note: if you use pipenv, make sure to install it in editable mode, or the dependencies will not be included in your Pipfile.lock.

pip3 install git+https://github.com/aws-cloudformation/aws-cloudformation-rpdk-python-plugin.git#egg=cloudformation-cli-python-plugin

The CloudFormation cli

Like with the java plugin, you run cfn init and specify a type name. Make sure you select one of the python versions; we are going to use python3.7 (the plugin supports both python3.6 and python3.7).

If you get the error “‘cfn init’ terminated by signal SIGABRT (Abort)”, congrats on running OSX Catalina. You can make cfn work by running it in a virtualenv, or by using the workaround from this issue.

There is of course a difference in which files are generated; for python it’s interesting to look at these:

  • requirements.txt: This is where you declare your dependencies. There is already one dependency defined, we will come back to this later.
  • corpname-group-thing.json: Exactly the same as for every other provider. Here we define our resource provider specification/schema.
  • corpname_group_thing/handlers.py: The python source file where we can implement the behaviour of our handlers.
  • models.py: A file with model classes, generated from the specification.

Writing your schema

As the schema is independent of the language used to implement the provider, we started from Ian’s schema. We made a few tweaks to better reflect how our implementation works (you can see the full file here):

  • We added a pattern to the KeyName. While the ec2 api documentation says it will accept any ASCII string here, we had an easier time getting things working by restricting it to alphanumeric characters (plus dash and underscore):
    "KeyName": {
        "description": "The name for the key pair.",
        "type": "string",
        "pattern": "^[a-zA-Z0-9_-]+$",
        "minLength": 1,
        "maxLength": 255
    },
    
  • We defined “PublicKey” as a “writeOnlyProperty”. The AWS documentation describes these as “Resource properties that can be specified by the user, but cannot be returned by a read or list request. Write-only properties are often used to contain passwords, secrets, or other sensitive data”. While the public key does not contain sensitive data, the ec2 api does not return it when describing a key. Defining it as write only means we do not have to figure out a way to store and retrieve it.
    "writeOnlyProperties": [
        "/properties/PublicKey"
    ],
    
  • We made both the KeyName and PublicKey “createOnlyProperties”. This means that changing these properties will always trigger the creation of a new resource. Since those are the only two properties that a user can set, this also means that we do not have to write code to handle updates: every update will first create a new resource (in the UPDATE_IN_PROGRESS phase) and then delete the old one (in the UPDATE_COMPLETE_CLEANUP_IN_PROGRESS phase).
    "createOnlyProperties": [
        "/properties/PublicKey",
        "/properties/KeyName"
     ],
    
  • We removed the update handler and added ec2:DescribeKeyPairs permissions to the delete handler. We will talk about why we did that below.
     "handlers": {
        "create": {
            "permissions": [
                "ec2:ImportKeyPair"
            ]
        },
        "read": {
            "permissions": [
                "ec2:DescribeKeyPairs"
            ]
        },
        "delete": {
            "permissions": [
                "ec2:DeleteKeyPair",
                "ec2:DescribeKeyPairs"
            ]
        },
        "list": {
            "permissions": [
                "ec2:DescribeKeyPairs"
            ]
        }
    }
    

Since we edited our schema, we also need to regenerate some code that was created for us. We can do this with

cfn generate
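
Regenerating updates models.py to match the edited schema. As a rough illustration (an assumption on our part, not the exact generated output), the ResourceModel for this schema could look something like the simplified sketch below; the real generated file also contains (de)serialization helpers.

# Heavily simplified sketch of the generated ResourceModel (illustration only)
from dataclasses import dataclass
from typing import Optional

@dataclass
class ResourceModel:
    KeyName: Optional[str]
    PublicKey: Optional[str]
    Fingerprint: Optional[str]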

Adding dependencies

Before we can write our handlers, we’d like to have all our dependencies set up correctly, mostly so our IDE can be smart and link things together. Normally you would run `pip3 install -r requirements.txt`, but that gives you an error (because the plugin is still in preview). We can install the dependency directly from source with:

pip3 install 'git+https://github.com/aws-cloudformation/aws-cloudformation-rpdk-python-plugin.git#egg=cloudformation_cli_python_lib&subdirectory=src'

Keep in mind that you want to keep this in sync with the plugin version you’re using to generate code.
The cloudformation_cli_python_lib already installs boto3 as a dependency, so we do not have to worry about extra dependencies yet. If we had extra dependencies we could add them to requirements.txt.

Writing the handlers

We will write our handlers directly in corpname_group_thing/handlers.py using the already provided functions.

Create Handler (create_handler)

We get a session, the request (which contains the desired resource state as a model), and a callback context passed to this handler.

We also get some example code that shows us how to work with the session.

We want to take the following actions in our own code:

  1. Create a boto3 client from the session
    ec2 = session.client('ec2')
    
  2. Read properties from the model (included in the request) and use them to execute our request to the EC2 API.
    response = ec2.import_key_pair(KeyName=model.KeyName, PublicKeyMaterial=model.PublicKey)
    
  3. Set the fingerprint on our model so it can be retrieved in our CloudFormation template:
    model.Fingerprint = response['KeyFingerprint']
    
  4. We also set the PublicKey to None, because we defined it as a writeOnlyProperty:
    model.PublicKey = None
    
  5. Return our updated model to CloudFormation and tell the service we finished successfully.
    # Setting Status to success will signal to cfn that the operation is complete
    progress.resourceModel = model
    progress.status = OperationStatus.SUCCESS
    return progress
    

After adding error handling to tell CloudFormation when someone tries to create an already existing KeyPair, our code looks like this. We’ll talk more about those exceptions when we talk about testing below.

# inside create_handler; ClientError comes from botocore.exceptions, while
# exceptions, OperationStatus and ProgressEvent come from cloudformation_cli_python_lib
model = request.desiredResourceState
ec2 = session.client("ec2")

try:
    response = ec2.import_key_pair(
        KeyName=model.KeyName, PublicKeyMaterial=model.PublicKey
    )
except ClientError as e:
    if e.response.get("Error", {}).get("Code") == "InvalidKeyPair.Duplicate":
        raise exceptions.AlreadyExists(TYPE_NAME, model.KeyName)
    else:
        # raise the original exception
        raise

model.Fingerprint = response["KeyFingerprint"]

# Setting Status to success will signal to CloudFormation that the operation is complete
return ProgressEvent(status=OperationStatus.SUCCESS, resourceModel=model)

We can use the same steps (but with different code) to implement the other handlers. You can find the complete file here. We do not implement an update handler: EC2 has no UpdateKeyPair API, so every change is a replacement. Leaving out the update handler means CloudFormation will always replace our resource when it changes, and marking everything as a create only property in our schema achieves the same thing.
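
As an illustration, a read handler could look roughly like the sketch below. It follows the same pattern as the create handler; the decorator, the resource object, TYPE_NAME and ResourceModel come from the skeleton that cfn init generated, and the exact code in our repository may differ slightly.

# Sketch of a read handler (imports as in the create handler above;
# Action also comes from cloudformation_cli_python_lib)
@resource.handler(Action.READ)
def read_handler(session, request, callback_context):
    model = request.desiredResourceState
    ec2 = session.client("ec2")

    try:
        key = ec2.describe_key_pairs(KeyNames=[model.KeyName])["KeyPairs"][0]
    except ClientError as e:
        if e.response.get("Error", {}).get("Code") == "InvalidKeyPair.NotFound":
            raise exceptions.NotFound(TYPE_NAME, model.KeyName)
        raise

    model.Fingerprint = key["KeyFingerprint"]
    # PublicKey is a writeOnlyProperty, so we never return it
    model.PublicKey = None
    return ProgressEvent(status=OperationStatus.SUCCESS, resourceModel=model)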

Testing

Ian talked about unit tests; the documentation mentions both testing with test events and running the contract tests. We will dive a bit deeper into the latter.

The tested contract

The idea of the contract tests is to make you more confident that a) your implementation matches your specification/schema and b) your implementation behaves the way CloudFormation expects. Passing all of them does not mean your code and behaviour are perfect, but failures are a good indication of things you missed.

The former is tested by generating inputs based on your schema. For that reason it’s important to define a pattern for your inputs, so that the tests do not send data you can’t use. In our case the EC2 API will also return an error if the PublicKey is not a real/valid key. To make the tests use a valid value for the public key, we can define one in overrides.json:

{
  "CREATE": {
    "/PublicKey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDmD5aF1a3R7yPRUebshvL9KFgNDPA7y22lAHt+q7sFyojTA8ukXT05la9+h2tRSyrm1WSYTp8gTuLqAa4A4Hb+DQBOaOh0uD6Bj4FFa5EIYy7NtZNQfNb9w7vUi/BknyIvLxLYLNfsnomq4DhKPC+g/VmyPkc5V1mocM3TfGUpukLpPTFYZhhNdD++yq+EOQbG6bia49j5W+f1OGZjLV69/J0ycktaYUl9e9Dj2UEg65Xux0MGuK7VLppPvoozQVIi3zmGFcfjgou/WhkwUQy0GOo7RSeEQl20zluqn/7/uwkqapM3utXl1AFYxce7eA12whV2G0ByJLVZEKs40tNX Ben@Cloudar"
  }
}

The behaviour is covered by multiple tests, most of which are about returning the right status and model to CloudFormation. Returning the right thing on success means returning OperationStatus.SUCCESS and the right values in the ResourceModel(s). One caveat here is that we cannot return values in the create handler that we do not return in the list handler. This is the reason we set our writeOnlyProperty to None in the create handler: we can’t retrieve it in the list call.
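
For example, a list handler that stays consistent with the create handler (never returning the PublicKey) could look roughly like this sketch; as before, the decorator and names come from the generated skeleton and the real code may differ in detail.

# Sketch of a list handler that returns every key pair in the account/region
@resource.handler(Action.LIST)
def list_handler(session, request, callback_context):
    ec2 = session.client("ec2")
    models = [
        ResourceModel(
            KeyName=key["KeyName"],
            Fingerprint=key["KeyFingerprint"],
            PublicKey=None,  # writeOnlyProperty, never returned
        )
        for key in ec2.describe_key_pairs()["KeyPairs"]
    ]
    return ProgressEvent(status=OperationStatus.SUCCESS, resourceModels=models)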

Returning the right thing on failure means raising the right exception (depending on why things fail). All exceptions that CloudFormation understands are defined in a library that’s part of the python plugin. For our provider and its tests, the important ones are:

  • AlreadyExists: A resource can’t be created, because there already exists one with the same primaryIdentifier.
  • NotFound: An action was tried on a resource that does not exist (anymore). Note that this is also expected on a Delete action (for a resource that might already be deleted). In our case this means that we have to provide more information than what the DeleteKeyPair API returns. We solve this by doing a read before we delete (and having that read raise the right exception), as sketched below.
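
A rough sketch of that read-before-delete approach is shown below. Here the describe call is inlined; you could just as well call your read handler and let it raise the exception. As with the other sketches, the surrounding skeleton comes from cfn init and the real implementation may differ in detail.

# Sketch of a delete handler that checks for existence first, because
# DeleteKeyPair also reports success for keys that do not exist
@resource.handler(Action.DELETE)
def delete_handler(session, request, callback_context):
    model = request.desiredResourceState
    ec2 = session.client("ec2")

    try:
        ec2.describe_key_pairs(KeyNames=[model.KeyName])
    except ClientError as e:
        if e.response.get("Error", {}).get("Code") == "InvalidKeyPair.NotFound":
            raise exceptions.NotFound(TYPE_NAME, model.KeyName)
        raise

    ec2.delete_key_pair(KeyName=model.KeyName)
    # On success we only report the status back to CloudFormation
    return ProgressEvent(status=OperationStatus.SUCCESS)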

Building and testing

First we need to get a build of cloudformation-cli-python-lib. This can be done by cloning the source code of aws-cloudformation-rpdk-python-plugin and running ./package_lib.sh in its root directory. After that you can copy the resulting tarball into your own project.

This also means that, as long as the plugin is in developer preview, you might have to rebuild this library when the upstream repo is updated.

# assuming the myorg_foo_bar directory with our resource provider is the root directory of our source code
cd ..
git clone https://github.com/aws-cloudformation/aws-cloudformation-rpdk-python-plugin.git
cd aws-cloudformation-rpdk-python-plugin
./package_lib.sh
cd ../myorg_foo_bar
cp ../aws-cloudformation-rpdk-python-plugin/cloudformation-cli-python-lib-0.0.1.tar.gz .

If we have this, we can create a testable build by dry running the submit process (you probably need docker running for this):

cfn submit --dry-run

To run the test we need two terminals (or run one of these processes in the background). First we have to make our build listen for events with sam local:

sam local start-lambda

And then we can run our contract tests against this. You will need to have credentials in your environment for this (or your default profile configured with credentials). These steps will actually try to create/delete resources with the provided credentials. You can use --role-arn to specify a different role to assume.

cfn test

If we did not make any mistakes, we should see all the contract tests passing.

Submission

We already did a dry-run of the submit; the actual submit means removing the --dry-run flag, and it also needs credentials available, similar to the test.

cfn submit

After this stops polling and returns the message “Finished submit”, you can start using the resource in your own template (here is an example template to test with).

If you submit multiple times, you can use `cfn submit --set-default` to immediately update the version used by CloudFormation.

Summary

In this blog post we went through the steps to write a CloudFormation resource provider in python. We also looked in more detail at the contract tests and have a full example available on github. The python plugin is still in developer preview, so test it out and give your feedback in the issue tracker. We look forward to seeing what you build with this!
