Thursday, April 14, 2016

Docker and Network Security

Docker is great. Containers are awesome. But we still have to be mindful of security when using them.

I have been getting more and more into Docker and Linux Containers of late. They make the old schroot functionality extremely easy to use (though the same caveats apply), while also making that functionality easy to distribute and reproducible to build.

Docker Compose takes it a step further, enabling multiple containers to be built and interlinked via the Docker Network. Just don't forget about your firewall.

On my dev boxes, I have a firewall that by default rejects all traffic and then allows SSH so I can work on them. I've been using Docker containers on one of them lately and noticed that some of the containers were receiving requests from outside sources. That shouldn't have happened - I hadn't configured the firewall to allow it. So I checked iptables, and sure enough, there it was:

root@dev:~/project# iptables --list DOCKER
Chain DOCKER (1 references)
 target     prot opt source               destination
 ACCEPT     tcp  --  anywhere             172.17.0.2        tcp dpt:6379


The problem is the source column. Since it is set to "anywhere", traffic from any IP or interface can reach the container. That's not what I wanted.

After asking around, I learned there's an "--iptables=false" flag that can be passed to the Docker daemon. Using it prevents the iptables rules from being created at all. But then the container can't be accessed; it's isolated unless I write the rules myself - something I also don't want to do, since I'm more likely to get them wrong than Docker is.
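For reference, here's a sketch of how that flag can be set. This assumes a Debian/Ubuntu-style install; the file locations may differ on your distribution:

```shell
# Option 1: add the flag to the daemon's startup options
# (e.g., DOCKER_OPTS in /etc/default/docker on Debian/Ubuntu):
DOCKER_OPTS="--iptables=false"

# Option 2: recent Docker releases also read /etc/docker/daemon.json:
#   { "iptables": false }

# Then restart the daemon for the change to take effect:
# sudo service docker restart
```

Again, with this set you take on full responsibility for writing the forwarding rules yourself.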

From a security perspective, the above should be the following:

root@dev:~/project# iptables --list DOCKER
Chain DOCKER (1 references)
 target     prot opt source               destination
 ACCEPT     tcp  --  127.0.0.0/8          172.17.0.2        tcp dpt:6379
 ACCEPT     tcp  --  172.17.0.0/16        172.17.0.2        tcp dpt:6379


This limits traffic to the containers to (a) anything from the local host, and (b) anything from within the Docker network. Alternatively, it could be resolved by matching on the Docker bridge network device (e.g., docker0) and the loopback interface (lo), so that only traffic bound to those interfaces would be allowed. Either way, it would be a dramatic security improvement over the current situation.
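In the meantime, Docker does let you restrict which host address a published port binds to, which achieves much of the same effect per container. A minimal sketch, using a Redis container like the one behind the rule shown above:

```shell
# Publish port 6379 on the loopback interface only, instead of 0.0.0.0.
# Outside hosts can no longer reach it; local processes still can.
docker run -d -p 127.0.0.1:6379:6379 redis

# Compare with the default form, which listens on all interfaces:
# docker run -d -p 6379:6379 redis
```

The downside is that you have to remember to do this for every published port, which is exactly the kind of per-case vigilance a good default would make unnecessary.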

So here's an example.

Say you have an application that requires a database and provides a RESTful API, and you want to use a tool like nginx to terminate SSL connections. Normally, only the SSL port would be exposed to the public; the ports for the database and the RESTful API should stay hidden inside the container network, exposed only to each other so the containers can communicate. You dockerize all of this, then check the firewall - and see that all three ports are exposed to the public network.
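To make that concrete, here's a sketch of what such a docker-compose.yml might look like (the service names, images, and port numbers are hypothetical):

```yaml
nginx:
  image: nginx
  ports:
    - "443:443"       # the only port published to the host
  links:
    - api
api:
  build: .
  expose:
    - "8080"          # visible to linked containers only
  links:
    - db
db:
  image: postgres
  expose:
    - "5432"          # never published to the host
```

Using expose instead of ports keeps the database and API off the host entirely - but any port you do publish (here, 443) is still opened to all interfaces by the DOCKER chain, which is the heart of the problem.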

The issue is at least twofold:

1. It's an issue for devs because they may be doing this on systems on random networks (if using a laptop) or publicly available systems (if using a cloud server). Nefarious actors can then target the devs and probe services that will eventually be in production, learning things you don't want them to know.

2. It's an issue for deployments if you're not careful. The only ways to resolve it are (a) disable firewall modifications by Docker and manage it all yourself, or (b) put the entire system in a private network. That also assumes you actually have the control to do so, instead of using a service that just takes a specification (e.g., docker-compose.yml) and builds out and hosts the site for you.

I've filed a bug/feature request against Docker on the issue. Hopefully we can get some attention and help to get this fixed and enable everyone to use Docker more securely - preferably by default, but even a non-default option would be an improvement.

Just to be clear - does this mean you shouldn't use containers or Docker? Absolutely NOT. Just be careful when doing so, and take precautions when using it for development and especially for production deployments.

Friday, February 19, 2016

Releasing Python Packages with PBR...

So it's been a while since I've had to release one of my Python-based projects and publish it to PyPI. Publishing packages is generally really easy:


$ python setup.py sdist build
...
Writing myproj-x.y.z.tar.gz
...
$ twine upload -r pypi dist/myproj-x.y.z.tar.gz

However, I also use OpenStack's PBR (Python Build Reasonableness), as it makes setup.py and related functionality very easy to manage. Unfortunately, it also complicates the above...

$ python setup.py sdist build
...
Writing myproj-x.y.z-devNNN.tar.gz
...
$

What to do?

If you look closely at the PBR documentation, you'll find some notes for packagers - http://docs.openstack.org/developer/pbr/packagers.html. Among them is a statement about the environment variable PBR_VERSION - easy to overlook, given the non-obvious link to the package you're trying to release.
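If you're curious what version PBR would otherwise calculate (it derives one from git tags and commit history), you can check before building. A quick sketch, assuming a PBR-enabled setup.py in the current directory:

```shell
# Print the version PBR computes from the git metadata;
# between tags this will be a dev version like x.y.z.devNNN.
$ python setup.py --version
```

Seeing a devNNN suffix here is the tell-tale sign that PBR, not your intended release number, is driving the version.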

In the end, you just have to set PBR_VERSION to bypass whatever version calculation PBR itself does, like so:

$ export PBR_VERSION=x.y.z
$ python setup.py sdist build
$ twine upload -r pypi dist/myproj-x.y.z.tar.gz

And voilà - it's the correct package for the version, and now it's up on PyPI.