The PivotNine Blog

Solo.io and Networking Options in Kubernetes


Towards the end of Solo's presentation on service mesh at Cloud Field Day 16 I wondered aloud: why does any of this exist?

Why isn't service mesh, and networking more generally, just baked into Kubernetes?

The answer is illuminating and demonstrates an important challenge in enterprise technology: there often isn't one obviously right answer. You only get one obviously right answer when the question is trivial.

Choices

Choice and flexibility are worthwhile things, particularly in the enterprise. I often remind vendors and enterprise architects that mergers and acquisitions are a thing that happens in enterprise businesses. Whatever uniform, elegant, holistic design you've come up with, the messy day-to-day reality of business will try to break it in creative and frustrating ways.

It's why multicloud is a thing: if one company goes all-in on AWS and another chooses Azure as the only way to do things, and then one buys the other, you now have multicloud. The same thing happens if an all-cloud company buys one that has zero cloud, or is only partway through moving to the cloud. Instant hybrid cloud!

And yet the Kubernetes experience demonstrates why removing choices is sometimes beneficial: too much choice, particularly when there's little apparent difference between options, makes choosing hard. Customers will take longer to decide, or may defer deciding altogether until it becomes easier to do.

Kubernetes attempts to address this challenge by having standard interfaces for common problems. "We need to store data" means there needs to be a storage interface, so we have the Container Storage Interface (CSI). Containers need to communicate with each other, so we need networking: behold the Container Network Interface (CNI).
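To make that concrete, a CNI plugin is usually selected by dropping a JSON config file into a directory like /etc/cni/net.d/ on each node; the kubelet's container runtime picks it up from there. Here's a minimal sketch using the standard `bridge` reference plugin with `host-local` IPAM — the network name and subnet are illustrative values, not anything from a real deployment:

```json
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```

Swapping in Calico, Cilium, Flannel, or any other CNI implementation is, at this level, just a different config file and plugin binary — which is exactly the kind of pluggability the interface exists to provide.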

Picking Winners

My question about "why isn't this just part of Kubernetes" exposes some philosophical choices the Kubernetes community have made. If there is a baked-in way of doing things, you have to pick a winner. The default option will get used more than anything else because it requires no decision-making. It's the path of least effort, and humans tend to choose that absent other pressures.

Kubernetes doesn't want to pick winners for anything other than container orchestration. That is what Kubernetes is for and it tries hard not to get too opinionated about anything else. This makes Kubernetes a more attractive platform to build on, because you're not having to compete against a default option that gets preferential treatment by virtue of being the default. And so we get a proliferation of options all trying to convince us that they are the way we should do networking or storage or whatever.

This is, mostly, a good thing. Yes, it adds complexity and makes deciding harder, but unless there's a broad consensus that there's really only one way to do networking between containers, we can't really have a default. Not until we figure out what "good enough" means for most people most of the time.

We've gone through this process before, which helps us understand why things are happening this way again. History doesn't repeat, but it rhymes.

Network evolution

Communication between containers doesn't actually need a network, if you stay within a minimal toy example of Kubernetes where all the containers are in the same pod on a single node and can talk to each other over local sockets or ports, just as processes can on a single Unix server.
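As a minimal sketch of that toy case (names and images are illustrative): containers in the same pod share a network namespace, so one can reach the other on localhost without any cluster networking at all.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: localhost-demo    # hypothetical pod name
spec:
  containers:
  - name: web
    image: nginx:1.25             # serves on port 80 inside the shared namespace
  - name: sidecar
    image: curlimages/curl:8.5.0
    command: ["sleep", "infinity"]  # kept alive so we can exec into it
```

From the sidecar, `kubectl exec localhost-demo -c sidecar -- curl -s http://localhost:80` reaches nginx over the pod's shared loopback. The moment the two workloads live in different pods, that trick stops working and you need an actual network — which is where the CNI, and everything built on top of it, comes in.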

Once you decide you want to communicate between servers, you now have choices. Today we've broadly settled on Ethernet as the main method, but InfiniBand exists, CXL is coming, and people still use Fibre Channel. You can use IP for everything, but IPv4 or IPv6? Will you use static routes or RIPv2 or EIGRP or OSPF or BGP or some new exotic method?

The complexity of Kubernetes networking mirrors the inherent complexity of a new frontier where we haven't really reached a consensus on the best way to do things. And we may never reach consensus! We might always need to have several tools in our toolbox, each suited to a different kind of challenge we want to address. Hitting everything with the hammer of Banyan Vines doesn't make a lot of sense.

Solo is providing one set of options to address the challenge of Kubernetes networking. It's a good set of options, but we don't know enough yet to decide if we should throw away all our other tools. We don't want to get caught with nothing but a hammer if we need to paint a house one day.

I was a guest of GestaltIT/TechFieldDay at Cloud Field Day 16