For all the hype about Software Defined Storage, and the many vendors purporting to offer it, the reality is that most organisations still have storage hardware.
We have yet to see a true hypervisor for storage appear, but that’s exactly what FalconStor are trying to achieve with their FreeStor offering. It aims to be an “intelligent abstraction layer” that supplies data services independently of the physical storage itself. I spoke to FalconStor President and CEO Gary Quinn, and VP of Global Marketing and Enablement Tim Sheet, during VMworld 2015 about what FalconStor are trying to do.
“Everybody really wants, just like they commoditized the servers, people want to commoditize their storage,” says Quinn. “Buy a raw box for the performance, or the SLA, or the economics you’re looking for, and then move your data across those boxes.” You then add FreeStor on top of whatever underlying storage you have, and it provides the data services in an abstracted way. The servers are decoupled from the storage, which is the exact opposite of the hyper-converged approach of VMware VSAN, Nutanix, SimpliVity, Atlantis Computing, Scale Computing, and the like.
The FreeStor architecture is a collection of active/active pairs of FreeStor Storage Servers in the data path, with FreeStor Management Servers for the control plane. I’d prefer to see a scale-out architecture where I could scale up or down by adding or removing nodes, whether hardware, software, or both, but active/active is a far simpler design, and it meant FalconStor could get this new product to market quicker.
FalconStor are also going with a subscription approach where you pay for the storage you use on an annual ‘True-up’ basis. The lack of individual license keys and nit-picking about temporary bursts of usage appeals to me, but a consumption model for what is basically just software isn’t a new idea. Adding more storage to the environment still requires buying physical hardware from *someone*, and there was no suggestion that FalconStor would be renting physical storage to customers.
Unfortunately, from where I sit, the promise of truly virtualized storage is not here yet with FreeStor. Dynamically moving storage around on physical servers is possible, yes, but only if you’re using block protocols to address it. The FreeStor platform currently supports Fibre Channel, iSCSI, and FCoE, but no file or object protocols. That’s a shame, because what I really want is “some storage” where I no longer have to worry about the way I get at it, just that it exists, and performs, and has the features I want.
That’s the promise of software defined storage. The hardware is already abstracted away; I don’t write raw byte strings to a specific cylinder on a specific platter, or to a specific cell range on flash. No. I just open a file in my local filesystem and write the data out, or send it over a network using an API.
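That layering can be sketched in a few lines of code: the application addresses data through a generic interface (a file path, or a key), and the placement of bytes on disk or flash is somebody else’s problem. This is an illustrative sketch only, not FreeStor or any vendor’s API; the `ObjectStore` class and its `put`/`get` methods are hypothetical stand-ins for an object protocol.

```python
import os
import tempfile

# Hypothetical object-style interface: the caller names the data,
# not the device, cylinder, or flash cell it lands on.
class ObjectStore:
    def __init__(self):
        self._objects = {}  # the backing store is an implementation detail

    def put(self, key, data):
        self._objects[key] = bytes(data)

    def get(self, key):
        return self._objects[key]

# File abstraction: open a path and write bytes; the filesystem and
# device driver decide where the data physically goes.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "report.txt")
    with open(path, "wb") as f:
        f.write(b"quarterly numbers")
    with open(path, "rb") as f:
        assert f.read() == b"quarterly numbers"

# Object abstraction: the same data, addressed by key rather than path.
store = ObjectStore()
store.put("reports/q3", b"quarterly numbers")
assert store.get("reports/q3") == b"quarterly numbers"
```

In both cases the consumer never touches blocks; that is the level of indifference to the underlying hardware that software defined storage promises, and that a block-only platform doesn’t yet deliver.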
This isn’t even a new idea. I’m old enough to remember Rainfinity, which tried to do much the same thing, but for file storage instead of block, and was acquired by EMC back in 2005. Notably, no one seems to use this concept now, largely because scale-out storage options (like Isilon) appeared to solve the same problem without this extra layer of abstraction and overhead. Rainfinity itself was re-positioned as a data migration tool, rather than a storage virtualisation layer.
When you think about it, it’s somewhat odd that we’ve reached 2015 without something like VMware for storage.
Maybe FalconStor will finally get us there when they finish adding features to the product.
This article first appeared on Forbes.com.