Securing IAM Policies


Since the beginning, writing IAM policies with the minimum necessary permissions has been hard. Some services don’t have resource-level permissions (you have to grant to *), but then later they add them. When a service has resource-level permissions, it may support them for only some of its actions (the rest still need *). Some services have their own Condition Operators (separate from the global ones) that may or may not help you tighten control. Et cetera. The details are documented differently for each service, and it’s a lot of hunting and testing to put together a tight policy.

Amazon made it easier! There’s new magic in the IAM UI to help you create policies. It has some limitations, but it’s a big improvement. Here are some of the things it can do that I used to have to do myself:

  • Knows which S3 permissions require the resource list to include a bucket name and which require the bucket name and an object path.
  • Tries to group permissions and resources into statements when the result is equivalent access (but sometimes ends up granting extra access; see below).
  • Knows when a service doesn’t support resource-level permissions.
  • Knows about the Condition Operators specific to each service (not just the global ones).

There are some limitations:

  • Doesn’t deduplicate. If you add permissions, it doesn’t go back and fold them into existing statements; it just adds new statements that may duplicate parts of old ones.
  • Only generates JSON, so if you’re writing a YAML CloudFormation template you’ll have to translate.
  • Seems to have limited form validation on Condition Operators. You can put in strings that will never match because the API calls for that service can’t contain what you entered (making the statement a no-op).
  • Can end up grouping permissions in a way that makes some resource restrictions meaningless and grants more access than you might expect.
  • Sometimes it messes up the syntax. This seems to happen if you don’t put exactly what it expects into the forms.
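As a hypothetical illustration of the grouping problem above (the actions and grouping here are my own example, not actual editor output): suppose you want s3:ListAllMyBuckets, which only supports a * resource, plus s3:GetObject on one bucket. Grouped into a single statement, the * resource applies to both actions and the object restriction disappears:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListAllMyBuckets", "s3:GetObject"],
      "Resource": "*"
    }
  ]
}
```

Split into two statements (one granting s3:ListAllMyBuckets on *, another granting s3:GetObject on the bucket’s object ARN), the policy grants only what you intended.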


So there are a few problems, but this is still way better than it was before! My plan is to use the visual editor to write policies, then go through and touch it up afterward. Based on what I’ve seen so far, this cuts the time it takes me to develop policies by about 30%.

Happy securing,

Adam

Beating EC2 Security Groups



Today I’ll show you how to pass traffic through an EC2 Security Group that’s configured not to allow that traffic.

This isn’t esoteric hacking; it’s a detail in the difference between config and state that’s easy to miss when you’re operating infrastructure.

Like I showed in a previous post, EC2 Security Groups are stateful. They know the difference between the first packet of a new connection and packets that are part of connections that are already established.

This statefulness is why you can let host A SSH to host B just by allowing outgoing SSH on A’s SG and incoming SSH on B’s SG. B doesn’t need to allow outgoing SSH because it knows the return traffic is part of a connection that was already allowed. Similarly for A and incoming SSH.

Here’s the detail of today’s post: if the Security Group sees traffic as part of an established connection, it’ll allow it even if its rules say not to. OK, now let’s break a Security Group.

The Lab

Two hosts, testa and testb. One SG for each, both allowing all outgoing traffic. Testb’s SG allows incoming TCP on port 4321 (an arbitrary port I picked for this test):

(screenshot: testb’s SG rules, with port 4321 allowed)

To test traffic flow, I’m going to use nc. It’s a common Linux utility that sends and receives TCP traffic:

  • Listen: nc -l [port]
  • Send: nc [host] [port]
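If you’d rather script the check than type into two terminals, the same listen/send pair can be sketched with Python sockets over loopback (this is a local stand-in for the two-host lab, using the same port 4321):

```python
import socket
import threading

received = []
ready = threading.Event()

def listen(port):
    # equivalent of `nc -l [port]`: accept one connection and record what arrives
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        ready.set()  # tell the sender the listener is up
        conn, _ = srv.accept()
        with conn:
            received.append(conn.recv(1024))

t = threading.Thread(target=listen, args=(4321,))
t.start()
ready.wait()

# equivalent of `nc [host] [port]`, then typing one message
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect(("127.0.0.1", 4321))
    client.sendall(b"hello from testa\n")

t.join()
print(received[0])  # b'hello from testa\n'
```

In the real lab the listener runs on testb and the sender on testa, so the traffic actually crosses the Security Groups.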

Test Steps:

(screenshots of shell output below)

  1. Listen on port 4321 on testb.
  2. Start a connection from testa to port 4321 on testb.
  3. Send a message. It’s delivered, as expected.
  4. Remove testb’s SG rule allowing port 4321. (screenshot: testb’s SG rules, port 4321 removed)
  5. Send another message through the connection. It gets through! There’s no longer a rule to allow it, but it’s delivered anyway.

WAT.

To show nothing else was going on, let’s redo the test with the security group as it is now (no rule allowing 4321).

  1. Quit nc on testa to close the connection. You’ll see it also close on testb.
  2. Listen on port 4321 on testb.
  3. Start a connection from testa to port 4321 on testb.
  4. Send a message. Not delivered. This time there was no established connection, so the traffic was compared to the SG’s rules. There was no rule to allow it, so it was denied.

Testb Output

(where we listened)


Only two messages got through.

Testa Output

(where we sent)


We sent three messages. The last two were sent while the SG had identical rules, yet the second message was allowed (it rode the established connection) and the third was denied (it needed a new one).

Beware!

The rules in EC2 Security Groups don’t apply to open (established) TCP connections. If you need to ensure traffic isn’t flowing between two instances you can’t just remove rules from your SGs. You have to close all open connections.

Happy securing,

Adam

The Fallacy of Rest

Hello!

A while back I made a bad scheduling mistake. I knew about the anti-pattern that caused it, but didn’t see myself using it. It forced me to push out dates, which cost me some money.

Later I looked back to see what went wrong. It was exactly what I have advised others not to do. It’s easy to miss! I’m writing this article to re-expose the anti-pattern I used.

The project was Move to a New City. I would be taking my job with me. This is the schedule I wrote:

  • Week 1
    • Pack
    • Work
  • Week 2
    • Weekdays
      • Pack
      • Work
      • Clean
    • Weekend
      • Clean
      • Say goodbye to friends
  • Week 3
    • Monday (Vacation Day)
      • Exercise and rest
      • Say goodbye to friends
    • Tuesday (Vacation Day)
      • Return keys
      • Drive to new city (5 hours on the road)
      • Check in to AirBnB
      • Hang out with friend who lives in new city
    • Wednesday through Friday
      • Work
      • Look at new housing

Seems fine! I even budgeted time to exercise.

Tuesday of week 3. 100% on schedule. It’s bedtime and I’m watching an episode of The Dick Van Dyke Show on my laptop and laughing myself to sleep with Mary Tyler Moore’s performance. I feel awesome. I sleep like I’ve just run a marathon.

Wednesday. Mild headache (whatever – I’m an engineer, we get headaches). I catch up on work, message about a couple rentals, and attend the morning meetings. As the meetings are wrapping up I get a reply on a rental with a proposed time to view it. I can just barely make it, so I head out.

See the mistake yet? I still hadn’t. Wednesday was a busy day and I felt rushed, but I’ve had lots of busy days. I just kept going. I didn’t make the mistake on Wednesday.

That afternoon I got one more email about a rental. It was a wafer-thin mint (see Monty Python’s The Meaning of Life ⬅️ this is how I am making the post about Python). Suddenly getting through the rest of my inbox felt like climbing a mountain. I was burnt out.

The mistake happened when I first wrote the schedule. Here’s the fallacy I used:

People are like horses. Rest them two hours a day and one full day every week or so and they’re fine. Feed and water three times a day.

People are not like horses. They can’t sustain themselves on periodic rest intervals.

Here’s how people work:

Productive workers have a budget of hours per week. When those hours are spent, they spend themselves to keep going. Once too much of themselves is gone, they stop producing.

I wrote a schedule in the mindset of making sure I had rest intervals, but I should have figured out the hours the move needed and divided them by my sustainable weekly hours (a number I’ve learned during two decades of working). That would give the total weeks really needed to complete the move.
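That calculation is simple enough to sketch (the numbers here are made up for illustration; the method is what matters):

```python
# total hours of real work the move requires, including the non-obvious
# work like saying goodbye to friends
hours_needed = 160

# sustainable hours per week, computed from real history of past work
sustainable_hours_per_week = 40

# the schedule length that's actually realistic
weeks_needed = hours_needed / sustainable_hours_per_week
print(weeks_needed)  # 4.0

# what I actually did: cram the same work into two weeks
scheduled_weeks = 2
load = hours_needed / (scheduled_weeks * sustainable_hours_per_week)
print(f"{load:.0%} of sustainable capacity")  # 200% of sustainable capacity
```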

Going back over the hours I spent I found I had scheduled 200% of my sustainable capacity and had expected to sustain that for most of a month. (╯°□°)╯︵ ┻━┻

Another way to look at my mistake is that I didn’t count saying goodbye to friends as work (just like I sometimes forget to count attending meetings as work). In the context of human capacity, leaving behind your friends is absolutely work (just like sitting in a frustrating meeting is). It drains your budget of hours. If you do too much of it, you exhaust.

To write a schedule that workers can reliably complete, budget based on what they can do per week, and make sure you take that number from their real history of work. Don’t make it up; look back at the past and compute it.

I’m going to bed. Happy scheduling!

Adam

3 Tools to Validate CloudFormation

Hello!

Note: If you just want the script and don’t need the background, go to the gist.

If you found this page, SEO means you probably already found the AWS page on validating CloudFormation templates. If you haven’t, read that first. It’s a better starting place.

I run three tools before applying CF templates.

#1 AWS CLI’s validator

This is the native tool. It’s ok, but it’s really only a syntax checker; there are plenty of errors you won’t see until you apply a template to a stack. Still, it’s fast and catches some things.

aws cloudformation validate-template --template-body file://./my_template.yaml

Notes:

  • The CLI has to be configured with access keys or it won’t run the validator.
  • If the template is JSON, this will ignore some requirements (e.g. it’ll allow trailing commas). However, the CF service ignores the same things.

#2 Python’s JSON library

Because the AWS CLI validator ignores some JSON requirements, I like to pass JSON templates through Python’s parser to make sure they’re valid. In the past, I’ve had to do things like load and search templates for unused parameters, etc. That’s not ideal but it’s happened a couple times while doing cleanup and refactoring of legacy code. It’s easier if the JSON is valid JSON.

It’s fiddly to run this in a shell script. I do it with a heredoc so I don’t have to write multiple scripts to the filesystem:

python - <<END
import json
with open('my_template.json') as f:
    json.load(f)
END

Notes:

  • I use Python for this because it’s a dependency of the AWS CLI so I know it’s already installed. You could use jq or another tool, though.
  • I don’t do the YAML equivalent of this because it errors on CF-specific syntax like !Ref.
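As a sketch of the “search templates for unused parameters” chore mentioned above (the template here is a toy inline example; a real one would be loaded from a file, and this only follows plain Ref usage):

```python
import json

# a toy template with one used and one unused parameter
template_body = """
{
  "Parameters": {
    "Env": {"Type": "String"},
    "OldParam": {"Type": "String"}
  },
  "Resources": {
    "Bucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": {"BucketName": {"Ref": "Env"}}
    }
  }
}
"""

template = json.loads(template_body)  # also proves it's valid JSON

def refs(node):
    # walk the template and yield every name used in a plain Ref
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "Ref" and isinstance(value, str):
                yield value
            else:
                yield from refs(value)
    elif isinstance(node, list):
        for item in node:
            yield from refs(item)

# scan everything except the Parameters block itself
body = {k: v for k, v in template.items() if k != "Parameters"}
used = set(refs(body))
unused = set(template.get("Parameters", {})) - used
print(unused)  # {'OldParam'}
```

A production version would also need to follow Fn::Sub and pseudo-parameter references, but this covers the common case.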

#3 cfn-nag

This is a linter for CloudFormation. It’s not perfect. I’ve seen it generate false positives like “don’t use * in IAM policy resources” even when * is the only option because it’s all that’s supported by the service I’m writing a policy for. Still, it’s one more way to catch things before you deploy, and it catches some good stuff.

cfn_nag_scan --input-path my_template.yaml

Notes:

  • Annoyingly, this is a Ruby gem so you need a new dependency chain to install it. I highly recommend setting up RVM and creating a gemset to isolate this from your system and other projects (just like you’d do with a Python venv).

Happy automating!

Adam

Python on Mac OS X: One of the Good Ways

Good morning!

When I start Python development on a new Mac, I immediately hit two problems:

  1. I need a version of Python that’s not installed.
  2. I need to install a bunch of packages from PyPI for ProjectA and a different bunch for ProjectB.

Virtualenv is not the answer! That’s the first tool you’ll hear about but it only partially solves one of these problems. You need more. There are a ton of tools and a ton of different ways to use them. Here’s how I do it on Apple’s Mac OS X.

If you’re asking questions like, “Why do you need multiple versions installed? Isn’t latest enough?” or “Why not just pip install all the packages for ProjectA and ProjectB?” then this article probably isn’t where you should start. Great answers to those questions have already been written. This is just a disambiguation page that shows you which tools to use for which problems and how to use them.

Installing Python Versions

I use pyenv, which is available in homebrew. It allows me to install arbitrary versions of Python and switch between them without replacing what’s included with the OS.

Note: You can use homebrew to install other versions of Python, but only a single version of Python 2 and a single version of Python 3 at a time. You can’t easily switch between two projects each frozen at 3.4 and 3.6 (for example). There’s also a limited list of versions available.

Install pyenv:

$ brew update
$ brew install pyenv

Ensure pyenv loads when you log in by adding this line to ~/.profile:

eval "$(pyenv init -)"

Activate pyenv now by either closing and re-opening Terminal or running:

$ source ~/.profile

List which versions are available and install one:

$ pyenv install --list
$ pyenv install 3.6.4

If the version you wanted was missing, update pyenv via homebrew:

$ brew update && brew upgrade pyenv

If you get weird errors about missing gcc or zlib, install the XCode Command Line Tools and try again:

$ xcode-select --install

I always set my global (aka default) version to the latest 3:

$ pyenv global 3.6.4

Pyenv has lots of great features, like support for setting a different version whenever you’re in a specific directory. Check out its commands reference.

Installing PyPI Packages

In the old days, virtualenv was always the right solution. Today, it depends on the version of Python you’re using.

Python < 3.3 (Including 2)

This is legacy Python, from before environment management was native. In these ancient times, you needed a third-party tool called virtualenv.

$ pyenv global 2.7.14
$ pip install virtualenv
$ virtualenv ~/my_env
$ source ~/my_env/bin/activate
(my_env) $ pip install <project dependency>

This installs the virtualenv Python package into the root environment for the legacy version of Python I need, then creates a virtual Python environment where I can install project-specific dependencies.

Python >= 3.3

In Python 3.3, PEP 405 added an environment manager called venv to the standard library. It works pretty much like virtualenv.

Note: Virtualenv works with newer versions of Python, but it’s better to use a core library than to add a dependency. I only use the third party tool when I have to.

$ pyenv global 3.6.4
$ python -m venv ~/my_env
$ source ~/my_env/bin/activate
(my_env) $ pip install <project dependency>

Happy programming!

Adam

A Book from 2017: Stretch Goals and Prescriptions

Happy New Year!

Today’s post is a little outside my usual DevOps geekery, but it’s been an influence on my work and my career choices this year so I wanted to share it.

For the record, I have zero connections to 3M.

In my teens, I noticed that whenever I bought something with the 3M logo it was noticeably better than the other brands. I didn’t know what 3M was, but this pattern kept repeating and I started to always choose them. Years later, deep inside a career in technology, I was still choosing 3M. I started to ask myself how they did it. Why were all their products better than everyone else’s?

I didn’t know anyone at 3M, so I found a book. The 3M Way to Innovation: Balancing People and Profit.

the3mwaytoinnovation.jpg

Balance? At work? And still better than everyone else? Bring it on.

The book approaches 3M through their innovations. They built hugely successful product lines in everything from sandpaper to projectors, and it turns out other companies have long looked to them as the top standard for the innovation that drives such diverse success. As I worked through the book, one thing really stuck with me: 3M’s definition of Stretch Goals.

I’ve seen a lot of managers ask their teams what can be accomplished in the next unit of time (sprint, quarter, etc.). Often, the team replies with a list that’s shorter than the manager would like. The manager then over-assigns the team by adding items as “stretch goals”. If the team works hard enough and accomplishes enough, they’ll have time to stretch themselves to meet these goals. The outcome I usually see is pressure for teams to work longer hours (with no extra pay) so they can deliver more product (at no extra cost to the company).

This book described 3M’s stretch goals very differently, which I’ll summarize in my own words because it’s characterized throughout the book and there’s no single quote that I think captures it. 3M sets these goals to stretch an aspect of the business that’s needed for it to remain a top competitor, and they’re deliberately ambitious. For example, one that 3M actually used: 30% of annual sales should come from products introduced in the last four years. Goals like these drive innovation because they’re too big to meet with the company’s current practices.

The key difference is that 3M isn’t trying to stretch the capacity of individuals. They’re not trying to increase Scrum points by pushing everyone to work late. They’re setting targets for the company that are impossible to meet unless the teams find new ways to work. They’re driving change by looking for things that can only be done with new approaches; things that can’t be done just by working longer hours. And after they set these goals, they send deeply committed managers out into the trenches to help their teams find and implement these changes. Most of the book is about what happens in those trenches. I highly recommend it.

There’s one other thing from the book I want to highlight: the process of innovation doesn’t simplify into management practices you can choose off a menu. There’s more magic to it than that. It takes skilled leaders and a delicate combination of freedom and pressure to build a company where the best engineers can do their best work, and trying to reduce that to a prescription doesn’t work. Here’s a quote from Dick Lidstad, one of the 3M leaders interviewed for the book, talking about staff from other companies who come to 3M looking to learn some of the innovation practices so they can implement them in their own teams:

They want to take away one or two things that will help them to innovate. … We say that maintaining a climate in which innovation flourishes may be the single biggest factor overall. As the conversation winds down, it becomes clear that what they want is something that is easily transferable. They want specific practices or policies, and get frustrated because they’d like to go away with a clear prescription.

I heard truth in that quote. Despite being a believer in the value of tools like Scrum, which are supposed to foster creativity and innovation, I’ve spent a lot of my career held back by the overhead of process that’s good in principle but applied with too little care to be effective. Ever spent an entire day in Scrum ceremonies? There’s more value in the experience of 3M’s teams overall than there is in any list of process.

This book was written in 2000, but not only has 3M stock continued to perform well, I found many parallels between the stories this author tells and my own experience in the modern tech world. It’s heavy with references and first-hand interviews, and I think it’s a valuable read for anyone in tech today.

If you read it, let me know what you think!

Adam