Future of Docker networking

Disclaimer: I do not work for Docker, nor for any company whose business is tied to Docker in any way.

Recently there has been a lot of discussion about the future of Docker networking in various communication channels, driven by the huge increase in Docker usage over the past year. Users are now running into Docker’s networking limitations, which are becoming more and more apparent. The truth is, and I’m sure we can all agree on this, that the current networking capabilities of Docker are not sufficient for more complicated setups: the current model is plagued with performance issues and is not flexible enough. For basic usage it does a pretty good job, but we never stay within the “basic usage” realm for long. In the cloud and in the current world of microservices, that is almost impossible. So the time has come to move forward. As one of many contributors to libcontainer, I feel like I need to express my opinion about where I stand in all these discussions, and this blog post is about that.

As with every project, there is a bit of history behind it. Early adopters and tinkerers realised the insufficiencies of the current networking model very quickly. When they needed to assign more than one IP address to their Docker containers, they often couldn’t understand why this was possible with LXC but not with Docker, which at the time was using LXC as its container engine. Whilst containers have been around for a long time, it is only recently, thanks to Docker, that more and more people started learning about Linux namespaces, including the network namespace, which is the cornerstone of Linux container networking. Thanks to the many blog posts by Docker Tinkerer Extraordinaire and his awesome pipework tool, more people gained a better understanding of the topic and could build more complicated network setups with Docker. Still, we all knew that pipework, awesome and helpful as it was, remained a “hack-around” tool. Much more was needed from the core of Docker.
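If you have never played with network namespaces yourself, here is a minimal Go sketch, purely illustrative and nothing to do with Docker’s actual code, of what they boil down to: a single unshare(CLONE_NEWNET) call, after which the calling thread sees its own private network stack containing nothing but a down loopback interface.

```go
package main

import (
	"fmt"
	"log"
	"net"
	"runtime"
	"syscall"
)

func main() {
	// Namespaces apply to OS threads, so pin this goroutine to
	// its thread before unsharing.
	runtime.LockOSThread()
	defer runtime.UnlockOSThread()

	// Detach from the host's network namespace; this needs root
	// (CAP_SYS_ADMIN). From here on, this thread has its own
	// private network stack.
	if err := syscall.Unshare(syscall.CLONE_NEWNET); err != nil {
		log.Fatalf("unshare(CLONE_NEWNET): %v (are you root?)", err)
	}

	// The fresh namespace contains nothing but a loopback
	// interface, and even that one starts out down.
	ifaces, err := net.Interfaces()
	if err != nil {
		log.Fatal(err)
	}
	for _, iface := range ifaces {
		fmt.Printf("%s (flags: %v)\n", iface.Name, iface.Flags)
	}
}
```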

I became interested in this topic at the dawn of version 0.7, but instead of diving into the source code straight away, I tried to get my hands dirty with LXC networking first, as I hoped it would help me understand the core of the problem so that I could then help by contributing to Docker. I got in touch with Jerome and we agreed on some golang-pipework hack which could potentially be embedded into Docker in the future. The rest is history. All of this happened over the span of seven months, and in those seven months there wasn’t much activity in the Docker-network-land. A few network-related issues were opened on GitHub, and when I opened a PR to “fix” the netlink issues, I realised not much had changed in that time. The Docker guys were focussing on more crucial issues (rightly so!), and since we had pipework and pipework-inspired tools, as well as various orchestration tools, we could hack around the limitations of Docker networking. People did not seem too interested in networking, as only a small number of them contributed by fixing the existing issues or adding new functionality.
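To make the “more than one IP address” complaint above concrete, this is roughly the kind of thing pipework-style tools do under the hood, talking netlink instead of shelling out to the ip command. A rough Go sketch using the third-party github.com/vishvananda/netlink bindings; the interface name and address here are of course made up:

```go
package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// "eth0" is a placeholder: inside a container's network
	// namespace this would be the container-side veth interface.
	link, err := netlink.LinkByName("eth0")
	if err != nil {
		log.Fatal(err)
	}

	// Add a second address to the interface, something the early
	// Docker networking model offered no way to express.
	addr, err := netlink.ParseAddr("10.0.42.2/24")
	if err != nil {
		log.Fatal(err)
	}
	if err := netlink.AddrAdd(link, addr); err != nil {
		log.Fatal(err)
	}

	// Make sure the interface is up.
	if err := netlink.LinkSetUp(link); err != nil {
		log.Fatal(err)
	}
}
```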

Fast forward X months and things are finally starting to change. We now have a first proper official networking proposal and we even have a #libnetwork IRC room, although its existence is rather premature at this point. The discussion about the future of networking in Docker is now happening with a much larger audience than just a few people concentrated in GitHub issues, IRC conversations and private email threads. The most important thing, though, will be the outcome of these discussions and how it will affect Docker users, both companies and individuals. This is, in my opinion, incredibly important for Docker, and I’m glad we as a community are starting to realise it more.

The topic of Docker networking is quite complex. It touches both the underlay and the overlay side of things. It affects literally every project, from the smallest ones like local dev environments to large monsters like the awesome Kubernetes project. Feel free to read through the Kubernetes networking design document and you will get a pretty good idea of what I’m talking about. Over the past year we have developed lots of tools and libraries to solve these issues, so I really hope the outcome of the ongoing discussion does not fuck up any of the existing work, but rather creates a much nicer environment for it and for the newcomers. I chose the F word deliberately because, having been in the industry for a fairly long time, I have had a chance to see how bad decisions can turn into total dismay. And that slightly worries me with regards to Docker as well. Maybe it’s really just a result of my experience with other projects, but you know the old saying: “History teaches us nothing…”, so I am on “alert” :-)

To me, Docker is not just another means of shipping software. It’s not just another DevOps enabler. To me, Docker is a tool, but also, maybe even more importantly, a platform. An open platform, that is. A platform which people can build on top of. Not a platform in the sense we are normally used to when talking about software platforms, but a platform nevertheless. If it is to stay open to users (and companies) building their solutions on top of it, it must not tie itself to another project. I have a big hope it won’t, because the Docker guys learnt that lesson when they got rid of the dependency on LXC and started the libcontainer project as a home-grown replacement. Similar thinking should be applied to the decision about the future of networking.

There are some visible initiatives trying to tie Docker to the awesome OVS project, which has been part of the mainline Linux kernel since version 3.3. I hope I’m just being a little paranoid here. I have nothing against OVS. In fact, I find it to be one of the most awesome and advanced networking toolkits out there. It has a bit of a rocky setup and learning curve for a beginner, but once you master it, you gain some serious networking power! One of the arguments I’ve heard on this topic was “if OVS just works, Docker should just work fine with it”. Let me tell you something, my friend: if there is one thing I have learnt over the years in this industry, it is that “NOTHING JUST WORKS”. I don’t want to sound cynical, but some of us have seen even the ls utility crash under certain circumstances. So building an argument on an assumption like that is simply crazy, and there are much smarter people than me who would agree. Yes, you can hack anything to work FOR YOU; we all do it on a daily basis. Yes, you can make SysAdmins or NetOps happy, but probably not both, let alone programmers too. This is a complex topic and of course there is no silver bullet solution, so we should not approach it as if there were one.

But this post is not about whether or not to make OVS part of Docker. I chose to talk about OVS above because it seems to be the most vocally discussed “solution”. This is about the more generic question of tying Docker to any third-party project, regardless of whether it’s open source or not. So far the Docker guys have done a pretty good job of avoiding this. This is important because, apart from the aforementioned problems of project dependencies, once you make a decision like that your focus will inevitably shift to the chosen path, which can harm the ecosystem that would otherwise grow organically from your platform if you kept it free of such dependencies. Right now we can talk about projects like flannel, weave, docket and probably many others I have not heard of, which grew out of the current platform, some born of necessity, others bringing something new.

There is currently a proposal for pluggable networking backends which looks promising, but it will take a bit of time until it materialises, as Docker seems to be going through a phase of designing a solid pluggable architecture, not just for networking but for other important parts as well. This must happen in the core of Docker before the project moves on to networking. It is, however, hugely important in my opinion that Docker ships with sane default backends whilst leaving the decisions about more advanced solutions to the users, rather than forcing them down a certain path. Forcing them would inevitably lead to hack-arounds as we know them from other projects and would arguably harm current and possibly future users. We all know you can’t make everyone happy, but you can create a nice “playground” for users where they can build their own solutions without harming each other’s work. If you’re thinking unified pluggable API, then you might just be in the same boat as me!
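To give you an idea of what I mean by a unified pluggable API, here is a hypothetical Go interface. I want to stress that this is entirely my own sketch, not anything from the actual proposal: the point is simply that the core would only ever talk to an interface like this, and a plain bridge backend, an OVS backend, or a weave/flannel-style overlay would each implement it as a plugin.

```go
package network

// Driver is a hypothetical pluggable networking backend. Docker's
// core would program networks only through this interface; whether
// the backend is a Linux bridge, OVS, or an overlay like weave or
// flannel would be invisible to the rest of the engine.
type Driver interface {
	// CreateNetwork sets up a named network (e.g. a bridge or an
	// OVS switch) and returns an opaque handle for it.
	CreateNetwork(name string, opts map[string]string) (NetworkID, error)

	// Join plugs a container into a network, returning the
	// configuration (interface name, addresses, gateway) that the
	// core should apply inside the container's namespace.
	Join(net NetworkID, containerID string) (*EndpointConfig, error)

	// Leave detaches the container and releases its endpoint.
	Leave(net NetworkID, containerID string) error

	// DeleteNetwork tears the network down once it is unused.
	DeleteNetwork(net NetworkID) error
}

// NetworkID is an opaque, driver-issued network handle.
type NetworkID string

// EndpointConfig is what a driver hands back for one container.
type EndpointConfig struct {
	InterfaceName string   // e.g. "eth0" inside the container
	Addresses     []string // CIDR addresses to assign
	Gateway       string
}
```

A sane default driver, say the current bridge behaviour, would keep simple setups simple, whilst anyone unhappy with it could swap in their own implementation without patching the core.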

I’m really excited about the future of networking in Docker and I have a big hope that Docker will make the best decision. Not that I ever doubted it, but I felt like I had to get this post off my chest, having been involved with Docker networking and many discussions about it which made me sense some odd lobbies sneaking in. Most importantly, I hope this post will inspire even more people to get involved in these conversations and help the project become even more awesome than it already is.

