Today’s article demonstrates a surprisingly easy way to tighten the network-layer permissions in an AWS VPC. (If you’re in AWS but you’re not in a VPC: 😡)
Security Groups have ingress and egress rules (also called inbound and outbound rules). In most SGs, the egress rules allow all traffic to everywhere. You’ve probably seen this:
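Expressed in the IpPermissions shape that boto3's EC2 API uses for rules (a sketch for illustration, not pulled from any real SG), that default allow-all egress rule looks like:

```python
# A sketch of the default "allow all" egress rule, in the IpPermissions
# shape that boto3's authorize_security_group_egress expects.
# IpProtocol "-1" means "all protocols" (which also implies all ports).
allow_all_egress = {
    "IpProtocol": "-1",
    "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "All traffic to anywhere"}],
}

print(allow_all_egress["IpRanges"][0]["CidrIp"])  # 0.0.0.0/0
```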
That’s a problem because, someday, you will get hacked. Breaches are inevitable; perfect security doesn’t exist. Someone or some bot will get access to whatever that SG was protecting (an EC2 instance, a Fargate ENI, whatever). When that happens, they can send out anything they want. Instead, you want them to have the most limited capabilities possible. They should find walls everywhere they turn. It’s not just about what’s coming in, it’s also about what’s going out.
I like AWS’s directive from the third bullet of the Security Pillar in their Well-Architected Framework: “Apply security at all layers”. Incoming and outgoing.
Fortunately, SG outbound rules are easy to tighten!
Imagine an Auto Scaling Group of Linux instances. They handle background jobs, and sometimes engineers SSH into them to run diagnostics (for this example, imagine you haven’t set up SSM Session Manager). At boot time they `yum install` some packages.
As usual, you need an incoming rule to allow SSH:
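In the same boto3 IpPermissions shape, the inbound SSH rule looks roughly like this. The source CIDR here is a hypothetical placeholder; scope it to wherever your engineers actually connect from, not 0.0.0.0/0, if you can:

```python
# Sketch of the inbound SSH rule in boto3's IpPermissions shape.
# 203.0.113.0/24 is a hypothetical placeholder for your engineers' network.
ssh_ingress = {
    "IpProtocol": "tcp",
    "FromPort": 22,
    "ToPort": 22,
    "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "SSH from engineers"}],
}

print(ssh_ingress["FromPort"])  # 22
```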
Now, here’s how to determine the outgoing rules you need: if the resource protected by the SG starts the connection, you need an outgoing rule. If it only replies to connections started by someone else, you don’t need an outgoing rule. Details farther down.
For anyone who’s forgotten, yum uses HTTP/S (ports 80/443) and FTP (ports 20/21).
The instance receives and then replies to SSH requests. It didn’t start those connections, so we don’t need an outgoing rule for SSH. But it does start HTTP/S and FTP connections, because it has to download packages from the yum servers, so we need outgoing rules for those:
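As a sketch in the same boto3 IpPermissions shape, the outgoing rules look like this. You’d pass them as the `IpPermissions` parameter to `authorize_security_group_egress`:

```python
# Sketch of the outbound rules yum needs: HTTP, HTTPS, and FTP (ports 20-21).
# Destination is 0.0.0.0/0 because we don't know the yum mirrors' IPs in advance.
yum_egress = [
    {"IpProtocol": "tcp", "FromPort": 80,  "ToPort": 80,
     "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTP for yum"}]},
    {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
     "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS for yum"}]},
    {"IpProtocol": "tcp", "FromPort": 20,  "ToPort": 21,
     "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "FTP for yum"}]},
]

# In real code you'd apply these with something like:
#   ec2 = boto3.client("ec2")
#   ec2.authorize_security_group_egress(GroupId=sg_id, IpPermissions=yum_egress)
print([rule["FromPort"] for rule in yum_egress])  # [80, 443, 20]
```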
That’s it. If you attach this SG to the instances, engineers will still be able to SSH into them and they will still be able to `yum install` packages. But if an instance tries to do something else, like connect to a MySQL database (port 3306), the SG will block that traffic and the connection will time out. When something Evil breaks into this instance, it won’t be able to reach that database.
These three rules are enough because Security Groups are stateful. To dramatically oversimplify statefulness: an SG knows whether traffic passing through it is part of a connection the instance has already agreed to, and if it is, the SG passes the traffic whether or not a rule allows it.
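Here’s a toy model of that idea. This is purely an illustration of why reply traffic doesn’t need its own rule; AWS’s real connection tracking is far more involved:

```python
# Toy model of stateful filtering. Illustration only -- not how AWS
# actually implements connection tracking.
egress_rules = {80, 443, 20, 21}   # destination ports our SG allows outbound
tracked = set()                    # connections the SG has already seen

def accept_inbound(peer, port):
    """When the SG allows an inbound connection, it starts tracking it."""
    tracked.add((peer, port))

def outbound_allowed(peer, port):
    """Outbound traffic passes if an egress rule allows it, OR if it's
    part of a connection the SG is already tracking (i.e., a reply)."""
    return port in egress_rules or (peer, port) in tracked

accept_inbound("203.0.113.5", 22)            # engineer SSHes in
print(outbound_allowed("203.0.113.5", 22))   # True: SSH replies pass without an egress rule
print(outbound_allowed("192.0.2.10", 3306))  # False: MySQL attempt is blocked
print(outbound_allowed("192.0.2.20", 443))   # True: yum over HTTPS
```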
I’m skipping a ton of details. This is meant to help you quickly do one round of tightening on your network. There are advantages and disadvantages to stateful filtering, and the details can take you deep into the weeds, but most of the time it’s enough to know what rules you need. If you want to go farther with this on your own, check out this project for a demonstration environment where you can experiment with variations.
- We have to allow yum traffic to `0.0.0.0/0` because we don’t know the IP addresses of the upstream yum servers.
- VPC subnet ACLs are not stateful, so you need different rules for those. I’ll cover that in another article.
In this example we weren’t able to stop whatever Evil Thing broke into your instance from sending your Super Secret Stuff to some Evil Webserver somewhere, but we did take away some of their other tools. We’ve put up one more wall they might run into, and every wall they hit might stop them. Walls at every turn.
If this was helpful and you want to get the latest Cloud DevOps security best practices in your inbox, subscribe here. If you don’t want to wait for the next one, check out these: