HashiCorp today announced the availability of the final two parts of its suite of DevOps workflow tools, Otto and Nomad, completing a multi-year journey to provide a set of tools for developers and operators of the modern datacenter.
Nomad is a scheduler: a tool for ensuring that the right number of components of your application are running at any one time. It monitors the environment and adds new webserver nodes, application nodes, API endpoints, load balancers, database instances, and so on, depending on the architecture of your application. Thanks to its integration with HashiCorp’s Terraform tool, Nomad can provision resources into multiple clouds, as well as on-site environments like [entity display=”VMware” type=”organization” subtype=”company” active=”true” key=”vmware” ticker=”VMW” exchange=”NYSE” natural_id=”fred/company/5897″]VMware[/entity].
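As a sketch of what this looks like in practice, a Nomad job file declares the desired state — which task to run, with which driver, and how many copies — and the scheduler keeps reality matching it. The job name, image, and counts below are hypothetical, not taken from any real deployment:

```hcl
# A minimal, illustrative Nomad job file. Nomad reads this declaration
# and schedules the requested number of task instances across the
# cluster, restarting or rescheduling them if nodes fail.
job "webapp" {
  datacenters = ["us-east-1", "us-west-1"]

  group "web" {
    # Nomad keeps three instances of this task running at all times.
    count = 3

    task "server" {
      driver = "docker"

      config {
        image = "example/webapp:1.0"
      }

      resources {
        cpu    = 500 # MHz
        memory = 256 # MB
      }
    }
  }
}
```

Submitting the file with `nomad run` hands responsibility to the scheduler; there is no need to decide by hand which machines the containers land on.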
We can be agnostic to the technology, and focus on the workflow instead.
— Kevin Fishner, Head of Customer Success, HashiCorp
Nomad is multi-datacenter and multi-region aware, which is useful if your cloud provider happens to go down, as AWS did recently.
The Otto tool is an abstraction of all the other HashiCorp tools — Vagrant, Packer, Terraform, Serf, Consul, Vault, and now Nomad — providing a single, simplified interface for managing development and deployment. A rich ecosystem of tools is useful, but when they can be connected together in myriad ways, the complexity of managing the stack inevitably grows. As in other areas of technology, an abstraction layer helps make the ecosystem more tractable for mere humans.

HashiCorp decided not to produce reams of documentation on so-called best practices. “We wanted to give people the best tool, not best practice,” says HashiCorp Head of Customer Success Kevin Fishner.
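The abstraction shows up in Otto's command-line workflow: a handful of verbs that drive the underlying tools without the user invoking them directly. This is a sketch of the intended flow, assuming Otto is installed and run from an application's directory:

```shell
# Illustrative Otto workflow. Each step delegates to another
# HashiCorp tool behind the scenes.
otto compile   # inspect the application and generate its plans
otto dev       # spin up a local dev environment (backed by Vagrant)
otto infra     # provision target infrastructure (backed by Terraform)
otto build     # produce a deployable artifact (backed by Packer)
otto deploy    # push the artifact onto the provisioned infrastructure
```

The point is that a developer learns one small vocabulary rather than the full surface area of five or six separate tools.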
You may not have heard of HashiCorp, but your developers are almost certainly using one of its tools: the company boasts some 500,000 monthly active users of its open-source tools.
One of the strengths of the HashiCorp ecosystem is its modularity. Each tool does a specific job and can connect to others through an easily understandable interface, following the Unix philosophy of simple, modular, composable parts. Unlike some other operations tools, you don’t need to buy into the entire HashiCorp ecosystem at once; you can choose the components you need, when you need them.
“Imagine telling a company ‘Alright, I’m going to completely change the way you deploy in your company.’ There’s no way!” says Fishner. “There’s no way you’re going to go to GE, or JPMorgan, or any of these large enterprises and completely overhaul their deployment workflow in one snap. It’d take years!”
The HashiCorp approach means you can adopt a way of working that functions regardless of the specific technologies used underneath. Developers can use whichever languages they deem best for the job, and applications can be deployed onto whatever infrastructure works best. Today it might be on-site and virtualized in [entity display=”VMware” type=”organization” subtype=”company” active=”false” key=”vmware” ticker=”VMW” exchange=”NYSE” natural_id=”fred/company/5897″]VMware[/entity]; tomorrow, components are moved into the cloud while you experiment with technologies like Docker or whatever the new flavor of the month is. “We can be agnostic to the technology, and focus on the workflow instead,” says Fishner.