The PivotNine Blog

Containers Are The New App Servers

21 February 2017
Justin Warren

Two conversations over the past couple of weeks have pleasantly surprised me about what infrastructure companies are doing with containers.

The first was with Atlantis Computing, which is known for its VDI-centric hyper-converged appliances. Patrick Brennan, Atlantis' Senior Product Marketing Manager, pointed out that there are a lot of similarities between VDI workloads and containers.

Consider that virtual desktops are essentially stateless (more on this in a moment). You don't care which server the desktop is running on in the back end, and if it dies, you just restart it and continue on where you left off. You run hundreds (or thousands) of them, and they're all pretty much identical. They're usually based off a master image, as well.

Containers are very similar: they have no state (again, more on this shortly), you run lots of them and you treat them more as cattle than pets.

If you've built infrastructure that works well for VDI, then a lot of the requirements for running containers are going to be pretty similar, right? I mean, hyper-converged infrastructure started out because Google and Facebook and friends were running these vast herds of simple, commodity servers with storage built in, right? They weren't using it to run virtual desktops. Kubernetes came from Google. You've read the (excellent, btw) Google SRE book, right?

I'm probably very late to this realization, but of course hyper-converged gear will be good for containers! And Atlantis is already bringing its decade-plus of experience with doing VDI on HCI to container-land by partnering with Rancher.

I see that as a better long-term direction for the company than its ill-fated attempt to expand beyond its strengths in VDI and the government, education, financial, and insurance verticals.

Then there was my chat with Diamanti, a new hyper-converged player that has started in container-land with an HCI appliance that runs bare-metal containers. There's no hypervisor, and their secret sauce is a special I/O controller card and software to make the storage inside the HCI easy for containers to use, and for the external network to look familiar to existing enterprises. The parallels to SimpliVity, with its special accelerator card, are obvious.

Mark Balch, Diamanti's VP of Products and Marketing, explained that their appliance approach is aimed at companies who want to deploy containers on infrastructure they can trust, rather than try to roll their own from commodity components. As we recalled together, Google and Facebook spend a lot of effort building their own highly-customised server farms out of commodity components, and most enterprises don't have the time, skills, or economies of scale to make that worthwhile.

There's clear appeal in a “just plug it in and go” approach to infrastructure that developers can then start to use. As I mentioned previously, developers hate infrastructure, so give them something that works well for containers and let them go.

Diamanti's appliance is also all-NVMe inside. Death to SCSI, say I! I look forward to the coming wave of separating compute from storage, as NVMe fabrics let us put compute servers in one set of racks and use RDMA networking to talk to storage in a different set of racks. Smells a lot like FC-SAN, doesn't it?

As Balch pointed out, we're still a while away from the NVMe standards settling down and prices dropping far enough for this to become a mass-adoption possibility, but I firmly believe it's coming.

Containers Aren't VDI

While HCI might look good for both VDI and container workloads, containers are definitely not VDI. Containers are used for a wider variety of workloads and will, I believe, come to supplant most of—if not completely replace—virtualisation in the VMware sense. They function very much like application servers in the (now old hat) three-tier model of web server, app server, database server style applications.

But, and it's a big but, the main functions of this three-tier style are still in use. The web layer handles simple traffic management and flow control: inbound connections need to be routed to services, be they thicker, full-service apps or micro-services and functions. The app layer is the micro-services or functions themselves: business logic that transforms information as it flows through. The only state here is configuration, whether it lives in NVRAM in the load-balancer/firewall appliance, in a container image, or in a text file, but it does still need to be stored.
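The web layer's traffic-management role can be sketched in a few lines: match an inbound request path to a backend service by longest prefix. This is a toy illustration, not any particular product's routing logic, and the route table and service names are invented.

```python
# Toy sketch of the web layer's job: route an inbound request path to
# a backend service by longest matching prefix. Routes and service
# names are invented for illustration.
ROUTES = {
    "/orders": "orders-service",
    "/orders/export": "orders-export-service",
    "/users": "users-service",
}

def route(path, routes=ROUTES):
    """Return the service for the longest matching prefix, or None."""
    # Check longer prefixes first so the most specific route wins.
    for prefix in sorted(routes, key=len, reverse=True):
        if path.startswith(prefix):
            return routes[prefix]
    return None

print(route("/orders/export/today"))  # orders-export-service
```

Note that the router itself holds no per-request state: the route table is pure configuration, which is exactly the point made above.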

Which brings us to the tricky part: storage of state. Containers started life as stateless entities that can come and go, but they need to access state somewhere to be useful. Some of that state is configuration required locally (which port should I listen on for incoming connections?) and some of it is state that the container acts on remotely: a database, an object store, or a simple filesystem full of cat gifs.
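The "local configuration" flavour of state often amounts to a container reading its settings from the environment at startup. A minimal sketch, assuming a conventional `PORT` variable (the name is a common convention, used here purely as an illustration):

```python
import os

# Minimal sketch of local configuration state: a container reads its
# listening port from the environment at startup, falling back to a
# default. The PORT variable name is an assumed convention.
def listen_port(default=8080):
    """Return the port to listen on, falling back to a default."""
    return int(os.environ.get("PORT", default))
```

The configuration still has to be stored somewhere, whether that's the image, an env file, or the orchestrator's config store; the environment variable is just the delivery mechanism.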

The container becomes the app server, and we see that—once again—there is nothing new under the sun.

This is far from a bad thing. It means we have a set of options that are already well-known and well-tested that can be brought to bear on the problems we encounter with this ‘new’ environment of containers.

Now we get to see if the heritage companies can update their existing stable of well-known and well-tested options to match the new era.

Strap yourselves in. It's going to be a bumpy ride.

This article first appeared on Forbes.com here.