Terraform: Get Data with Python

Update 2017-12-26: There’s now a more complete, step-by-step example of how to use terraform’s external data source, pip, and this decorator in the source repo.

Good morning!

Sometimes I have data I need to assemble during terraform’s apply phase, and I like to use Python helper scripts to do that. Awesomely, terraform natively supports using Python to populate an external data source:

data "external" "cars_count" {
  program = ["python", "${path.module}/get_cool_data.py"]

  query = {
    thing_to_count = "cars"
  }
}

output "cars_count" {
  value = "${data.external.cars_count.result.cars}"
}

A slick, easy way to drop out of terraform and use Python to grab what you need (although it can get you into trouble if you abuse it).

The Python script has to follow a protocol that defines input and output formats, error handling, etc. It’s minimal, but it’s fiddly, and if you need more than one external data script it’s better to modularize than copy and paste. So I wrote a pip-installable package with a decorator that implements the protocol for you:

from terraform_external_data import terraform_external_data

@terraform_external_data
def get_cool_data(query):
    return {query['thing_to_count']: '3'}

if __name__ == '__main__':
    get_cool_data()
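
If you’re curious what the decorator is doing for you, here’s a rough sketch of the raw protocol (an illustration, not the package’s actual code). Terraform passes the query to the program as a JSON object on stdin, expects a JSON object of string values back on stdout, and treats a non-zero exit with a message on stderr as an error:

import json
import sys


def get_cool_data():
    # Terraform passes the query as a JSON object on stdin.
    query = json.load(sys.stdin)
    try:
        result = {query['thing_to_count']: '3'}
    except KeyError as error:
        # On failure: human-readable message to stderr, non-zero exit.
        sys.stderr.write('Missing key in query: {}\n'.format(error))
        sys.exit(1)
    # Terraform reads the result as JSON from stdout; values must be strings.
    json.dump(result, sys.stdout)


if __name__ == '__main__':
    get_cool_data()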

And it’s available on PyPI! Just pip install terraform_external_data. Here’s the source.

Happy terraforming,

Adam

AWS Certification and Networking

Hello!

Recently I’ve been working on the AWS Certification exams, and I’ve found they require a much deeper understanding of networking on the platform than I had. For example, ICMP is a stateless protocol, so to ping between two servers, do you need ingress and egress rules on both Security Groups? I knew from past experience with iptables that the answer varies by setup, but I didn’t know how it worked in EC2.

For me, gnarly networking is easiest to learn hands-on. Docs get me part of the way but I really need to engineer it myself before I’ll remember it. To prep for certification I ended up building a sandbox environment in my AWS account where I could play around. It took some doing; many AWS patterns come pre-baked with Security Groups, ACLs, etc. that make everything work, but I wanted everything turned off so I could verify what was really needed for different traffic flows. If I delete the egress rule on one side of a connection, does traffic still flow? Hard to validate if there are broad, generic rules in place. Easy to validate if only exactly what’s needed is present.
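
To make that kind of experiment concrete, here’s a minimal sketch using boto3 (it’s illustrative only, not how the sandbox repo builds things, and the security group IDs are placeholders). It strips the default allow-all egress rule from the target’s group and allows only ICMP in from the pinger’s group, so the only rules in play are the ones you added on purpose:

import boto3

ec2 = boto3.client('ec2')

# Placeholder IDs for the two test instances' security groups.
PINGER_SG = 'sg-00000000000000001'
PINGEE_SG = 'sg-00000000000000002'

# Remove the default allow-all egress rule from the target's group.
ec2.revoke_security_group_egress(
    GroupId=PINGEE_SG,
    IpPermissions=[{
        'IpProtocol': '-1',
        'IpRanges': [{'CidrIp': '0.0.0.0/0'}],
    }],
)

# Allow ICMP in on the target, and only from the pinger's group.
ec2.authorize_security_group_ingress(
    GroupId=PINGEE_SG,
    IpPermissions=[{
        'IpProtocol': 'icmp',
        'FromPort': -1,  # -1 means all ICMP types
        'ToPort': -1,    # -1 means all ICMP codes
        'UserIdGroupPairs': [{'GroupId': PINGER_SG}],
    }],
)

# Now ping the target from the pinger and see whether replies make it
# back with no egress rule at all on the target's group.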

Since it was tricky, I published the automation for the sandbox I’ve been using. If you want to do your own deep dive into networking in AWS, hopefully this will help you out.

github.com/operatingops/aws_study

Happy Operating!

Adam

Production-ready Scripts and Python

Production is hard. Even a simple script that looks up queue state and sends it to an API gets complex in prod. Without tests, the divide-by-zero case you missed will mask queue overloads. Someone will miss that required argument you didn’t enforce and break everything by accidentally publishing a null value. You’ll forget to timestamp one of your output lines, and then when the queue goes down you won’t be able to correlate queue status to network events.

Python can help! Out of the box it can give you essential but often-skipped features like these (there’s a small sketch right after the list):

  • Automated tests for multiple platforms.
  • A --simulate option.
  • Command line sanity like a --help option and enforcement of required arguments.
  • Informative log output.
  • An easy way to build and package.
  • An easy way to install a build without a git clone.
  • A command that you can just run like any other command. No weird shell setup or invocation required.
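
Here’s a minimal sketch of what those bullets look like in practice (it’s an illustration, not the demo project’s code): argparse gives you --help and enforces the required argument, --simulate is just a flag, and the logging module timestamps every line:

import argparse
import logging


def main():
    parser = argparse.ArgumentParser(
        description='Look up queue state and send it to an API.')
    parser.add_argument('--queue-name', required=True)  # enforced for you
    parser.add_argument('--simulate', action='store_true',
                        help="Log what would be sent without sending it.")
    args = parser.parse_args()

    # Timestamps on every log line, so output correlates with other events.
    logging.basicConfig(
        level=logging.INFO,
        format='%(asctime)s %(levelname)s %(message)s')

    if args.simulate:
        logging.info('Simulating: would send state of %s', args.queue_name)
        return
    logging.info('Sending state of %s', args.queue_name)
    # ... look up the queue state and POST it to the API here ...


if __name__ == '__main__':
    main()

The last three bullets (building, installing without a git clone, and a command you can just run) come from standard setuptools and pip tooling, like a console_scripts entry point that turns a function like this into an installable command; the demo project is where to look for the full wiring.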

It can be a little tricky if you haven’t done it before, though, so I wrote a project that demonstrates it for you. It includes an example of a script that isn’t ready for prod.

Hopefully this will save you from some of the many totally avoidable, horrible problems that bad scripts have caused in my prods.

Thanks for reading!

Adam

Pear: A Better Way to Deploy to AWS

Update 29 September 2018: Since the release of AWS Step Functions, this pattern is out of date. Orchestrating deployment with a slim lambda function is still a solid pattern, but Step Functions could make the implementation simpler and could provide more features (like graphical display of deployment status). You can even see some of the ways this might work in AWS’s DevOps blog. Those implementation changes would drive some re-architecture, too. Because those are major changes, for now Pear is an interesting exploration of a pattern but isn’t ready for use in new deployments.

A while back I wanted to put a voice interface in front of my deployment automation. I think passwords on voice interfaces are annoying and aren’t secure, so I wanted an unauthenticated system. I didn’t want to lie awake at night worried about all the things people could break with it, so I set out to engineer a deployment infrastructure that I could put behind voice control and still feel was secure and reliable.

After a lot of learning (the journey towards this is where the Life Coach and the Better Alexa Quick Start came from) and several revisions, I had built my new infrastructure. For reasons I can’t remember, I called it Pear.

Pear is designed to make it easy to add slick features like voice interfaces while giving you enough control to stay secure and enough stability to operate in production. It’s meant for infrastructures too complex to fit in tools like Heroku or Elastic Beanstalk, for situations where you need to be able to turn all the knobs. I think it achieves those goals, so I decided to publish it. Check out the repo for an example implementation and more complete documentation of its features, but to give you a taste here’s a diagram of the basic architecture:

[diagram: Pear architecture]

Cheers!

Adam

The Better Alexa Quick Start

You’ve seen the Life Coach. That was my second Alexa project. The one I used to learn the platform.

I began with Amazon’s quick start tutorial, but I didn’t like the alexa-skills-kit-color-expert-python example code. It feels like a spaghetti-nest and is more complicated than it needs to be for a quick start (it’s 200 lines, the Life Coach is 40).

Hoping to make the intro process smoother for you, I’ve turned the Life Coach into my own quick start for Alexa. It gives you an example that’s simple but still has the tools you need to do your own development: essential things like error log capturing and scripts to build and publish code updates, which aren’t included in the Amazon guide.

Especially without an Ops or AWS background, setting up AWS well enough that you’re not pulling your hair out can be harder than writing your first skill. I’ve included automation to handle the infrastructure. You can skip that and just use the Life Coach code if you’d rather.

I’ve also included the better lambda packaging practices from my previous post.

Hopefully this makes your Alexa journey easier. Check it out!

If you want an intro to how Alexa works before you dive into code, check out the slides from my lightning talk on this project at the May 2017 meeting of the San Diego Python group.

Talk to you soon,

Adam