Clumio has emerged from stealth, launching its enterprise-focused backup-as-a-service offering on the back of $51 million raised across two funding rounds.
“What we’ve built at Clumio is essentially a cloud data fabric that is completely built on top of the public cloud to take advantage of the cloud the right way,” said Clumio founder and CEO Poojan Kumar. Kumar was previously CEO of PernixData, which was acquired by Nutanix in 2016.
The initial offering supports VMware workloads, either on-site or running in VMware-on-AWS. To back them up, Clumio supplies a data mover VM as part of the sign-up process; you install it into your VMware cluster, where it connects to the Clumio service to provide data movement, while administration is handled via the Clumio SaaS portal. Data is sent via the Clumio service to S3 object stores.
Clumio takes care of the backend service details, including the egress data charges if you need to restore, which is a hidden gotcha with some alternative approaches to S3-based backup. Charging is based on the number of VMs backed up, not the amount of data under management.
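To put the egress point in context, a back-of-the-envelope calculation shows why restore charges can sting with a roll-your-own S3 backup. The $0.09/GB figure below is an illustrative historical AWS data-transfer-out rate, not a quote of Clumio's or AWS's current pricing:

```python
# Rough illustration of restore egress cost for a DIY S3-based backup.
# $0.09/GB is an illustrative historical AWS data-transfer-out rate;
# check current pricing before relying on it.
EGRESS_RATE_PER_GB = 0.09


def restore_egress_cost(restore_gb):
    """Cost to pull restore_gb of backup data back out of the cloud."""
    return restore_gb * EGRESS_RATE_PER_GB


# Restoring a modest 5 TB after an incident:
print(f"${restore_egress_cost(5 * 1024):,.2f}")  # $460.80
```

Not ruinous for a one-off restore, but it is an unbudgeted surprise, and it scales linearly with data volume, which is exactly the sort of line item a flat per-VM charge makes predictable.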
Clumio plans to take on far more than backup. “Conceptually you can think of Clumio as building a data fabric,” said Kumar. “Over time, we’ll be delivering more services on top of this data fabric.”
“We picked backup as the first service because it’s obviously a huge market,” he said. “Our goal is to get into data management via the backup service as we start managing more and more of our customers’ data.”
Clumio’s data fabric concept echoes the data fabric noises made by various storage and backup vendors in recent years. Becoming the central point for all data management has been a goal of many vendors for many years, though none seems to have succeeded thus far.
Choosing backup as the first use-case for a company with broader data management ambitions is now a well-trodden path. What’s not clear to me is how Clumio stands out in a field that is getting fairly crowded. Druva has been running AWS-based data-protection-as-a-service for some time, and I’m unclear on what advantage Clumio has over the Druva approach.
Rubrik and Cohesity both walked this path some years ago, and have now expanded into other areas. Both have added more cloud-native capabilities, including the ability to back up NoSQL databases via acquisitions, as well as supporting the huge amount of enterprise data that still lives on-site.
Veeam started life exclusively backing up VMware workloads, but long ago added support for non-VM workloads, and picked up cloud-native workloads with its N2WS acquisition over a year ago. Even industry veterans NetBackup, Commvault, and NetWorker have had cloud-target options for a while now. Clumio is enterprise-targeted, so we can set aside the plethora of mid-market offerings such as Acronis, Backblaze, and Carbonite.
My impression is that Clumio is designed with a fairly traditional catalog server and data mover architecture. The catalog server runs as a cloud service, providing the software-as-a-service angle, and the data movers run as VMs within a customer’s VMware environment to link Clumio’s service into the customer’s network. There’s nothing wrong with this, but the mismatch between the cloud-focused rhetoric and the architectural reality gives me pause.
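The catalog-server/data-mover split is a long-standing backup pattern, and a toy sketch makes the division of labour clear. Everything here is hypothetical and illustrative of the general pattern, not Clumio's actual design: the catalog (the cloud-hosted piece) holds only metadata about which objects make up each backup, while the data mover (the on-site VM) handles the actual bytes:

```python
# Toy sketch of the classic catalog-server / data-mover backup pattern.
# All names are hypothetical; this illustrates the general architecture,
# not Clumio's implementation.


class Catalog:
    """Cloud-hosted index: which object keys make up each VM's backup."""

    def __init__(self):
        self.backups = {}  # vm_name -> ordered list of object keys

    def record(self, vm_name, key):
        self.backups.setdefault(vm_name, []).append(key)

    def keys_for(self, vm_name):
        return list(self.backups[vm_name])


class DataMover:
    """On-site component: moves bytes between VMs and object storage."""

    def __init__(self, catalog, object_store):
        self.catalog = catalog
        self.store = object_store  # key -> bytes, standing in for S3

    def back_up(self, vm_name, data, chunk_size=4):
        # Ship the snapshot in fixed-size chunks; record each in the catalog.
        for i in range(0, len(data), chunk_size):
            key = f"{vm_name}/{i}"
            self.store[key] = data[i:i + chunk_size]
            self.catalog.record(vm_name, key)

    def restore(self, vm_name):
        # The catalog says *which* objects to fetch; the mover fetches them.
        return b"".join(self.store[k] for k in self.catalog.keys_for(vm_name))


catalog = Catalog()
mover = DataMover(catalog, object_store={})
mover.back_up("db-01", b"backup payload!!")
print(mover.restore("db-01") == b"backup payload!!")  # True
```

The pattern works, and running the catalog as a service is what earns the SaaS label, but note that the heavy lifting still happens in a VM inside the customer's datacentre, which is the rhetoric/architecture gap I mean.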
It’s clear that the market for backup is enormous and able to support a large number of players. Whether or not it can do so profitably remains to be seen, particularly given the eye-watering amount of funding some of those players have managed to take on. Taking them on head-on would be a substantial challenge at this early stage of Clumio’s journey, so I’m keen to see a more nimble approach that shows off some real points of difference.