No valley too deep, no mountain too high
No no limits, won’t give up the fight
We do what we want and we do it with pride
No no, no no no no, no no no no, no no there’s no limit!
(2 Unlimited – https://www.youtube.com/watch?v=RkEXGgdqMz8)
No limit? Well, actually there is. Several, in fact. And that became painfully clear yesterday, when I was scripting the new environment for one of our customers. Not using Troposphere this time, so the templates can more easily be managed by people who aren't Python-savvy.
What they need is not that special. They want to be able to deploy identical environments quickly and easily. Not very complex environments either: mainly EC2 and RDS. Say 10 servers and 5 DB instances.
But you know how it goes. All servers in an environment have different disk layouts. Different instance types. Different availability zones. And while the requirement now is to deploy completely identical environments, you know the day will come when someone walks up to you and asks: why are we using SSD disks in our Dev environment? Why are those partitions so large in Test? So it's best to be prepared and allow for some flexibility. The plan was to create a CloudFormation template and deploy it using Ansible. All configurable parameters can then be kept in Ansible in an easy YAML structure instead of, for example, a JSON parameter file.
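That Ansible side could look something like this. A sketch only: the variable names and file layout are made up, and the task uses Ansible's cloudformation module with whatever stack and template names fit your setup.

```yaml
# group_vars/dev.yml -- hypothetical per-environment variables
env: dev
servers:
  server1:
    instance_type: t2.medium
    availability_zone: eu-central-1a
    root_volume_gb: 50
db_instances:
  db1:
    instance_class: db.t2.medium
    storage_gb: 100
```

```yaml
# A playbook task feeding those variables to CloudFormation (names are illustrative)
- name: Deploy the environment stack
  cloudformation:
    stack_name: "myapp-{{ env }}"
    state: present
    template: templates/environment.json.template
    template_parameters:
      Env: "{{ env }}"
```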
So I started writing the code to create one server and its backend RDS instance, thinking: if I get this straightened out, it's just a matter of copy-pasting it for most other servers and instances, and setting server-specific parameter values in Ansible. Well, pretty soon I hit the first AWS limit: one can only have 60 parameters in a CloudFormation template. I had many more. Bummer. I first looked into nested stacks to overcome this limit, but since the parent stack still has to declare every parameter it wants to pass down to a child stack, they were not the answer here. They are an answer to a different problem though, but more on that later.
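You can catch this before CloudFormation rejects the template, with a few lines of Python. A sketch, assuming the template is plain JSON on disk; 60 is the documented parameter limit mentioned above.

```python
import json

# Documented CloudFormation parameter limit at the time of writing.
MAX_PARAMETERS = 60

def count_parameters(template_path):
    """Return the number of entries in a template's Parameters section."""
    with open(template_path) as f:
        template = json.load(f)
    return len(template.get("Parameters", {}))

def check_parameter_limit(template_path):
    """Raise if the template declares more parameters than allowed."""
    n = count_parameters(template_path)
    if n > MAX_PARAMETERS:
        raise ValueError(f"{n} parameters exceeds the limit of {MAX_PARAMETERS}")
    return n
```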
Mappings to the rescue: environment-specific values can be hardcoded in a map and looked up with Fn::FindInMap, so they no longer need to be parameters.

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Deploy new AWS environment",
  "Parameters": {
    "Env": {
      "Type": "String",
      "Description": "AWS Environment",
      "AllowedValues": ["dev", "uat"]
    }
  },
  "Mappings": {
    "Envs": {
      "dev": {
        "SubnetPrivate1a": "subnet-1234xxxx",
        "SubnetPrivate1b": "subnet-1234yyyy"
      },
      "uat": {
        "SubnetPrivate1a": "subnet-4321xxxx",
        "SubnetPrivate1b": "subnet-4321yyyy"
      }
    }
  },
  "Resources": {
    "DBSubnets": {
      "Type": "AWS::RDS::DBSubnetGroup",
      "Properties": {
        "DBSubnetGroupDescription": "Database Subnet group",
        "SubnetIds": [
          { "Fn::FindInMap": ["Envs", { "Ref": "Env" }, "SubnetPrivate1a"] },
          { "Fn::FindInMap": ["Envs", { "Ref": "Env" }, "SubnetPrivate1b"] }
        ]
      }
    },
    ...
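If Fn::FindInMap feels opaque, it is essentially just a nested dictionary lookup. In Python terms (a sketch for illustration, not AWS code):

```python
def find_in_map(mappings, map_name, top_key, second_key):
    """Mimic CloudFormation's Fn::FindInMap: map name -> top-level key -> attribute."""
    return mappings[map_name][top_key][second_key]

# The Mappings section from the template above, as plain data.
mappings = {
    "Envs": {
        "dev": {"SubnetPrivate1a": "subnet-1234xxxx"},
        "uat": {"SubnetPrivate1a": "subnet-4321xxxx"},
    }
}

# With Env=dev, the DBSubnets lookup resolves to:
find_in_map(mappings, "Envs", "dev", "SubnetPrivate1a")  # "subnet-1234xxxx"
```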
So yeah, I was pretty pleased with the result. I was able to rewrite my code and move a lot of parameters to mappings. A new environment would now just mean creating a new entry in the map. Not that big a deal. And hey, one can have a hundred mappings per template. We will never have that many environments. We are golden! Well… until I started to copy and paste all the mapping entries… There I hit the second limit: a mapping can hold at most 63 attributes. OK, that is 33 more than what is stated in the official documentation, but with the variables I wanted and the number of servers, that was not nearly enough.
This is where nested stacks came back into the picture: give each server its own child template, with its own mappings, and pass the environment name down as a parameter. In the parent stack:

"Resources": {
  "Server1": {
    "Type": "AWS::CloudFormation::Stack",
    "Properties": {
      "TemplateURL": "https://s3.eu-central-1.amazonaws.com/xxx/xxx.json.template",
      "TimeoutInMinutes": "60",
      "Parameters": {
        "CEnv": { "Ref": "Env" },
        ...
The child template declares that parameter:

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Deploy Server1",
  "Parameters": {
    "CEnv": {
      "Type": "String"
    },
    ...

…and uses it for its own map lookups:

{ "Fn::FindInMap": ["Envs", { "Ref": "CEnv" }, "KeyToLookFor"] }