Druva, Cohesity Crow About Independent Research Results

16 September 2019
Justin Warren

Druva and Cohesity are both very pleased with their placement in the 2019 Forrester Wave™ for Data Resiliency Solutions.

Released on Thursday 12 September, the research report features Druva for the first time, snagging a Strong Performer ranking with what Forrester believes is a strong strategy.

Cohesity is ranked as a Leader, also with a strong strategy, and, judging by the communications I've received over the past couple of days, is particularly proud of its score for customer feedback.

Commvault stands out in the Leader zone with both a strong current offering and a strong strategy, while Veeam and Rubrik round out the Leaders, Rubrik with a particularly strong strategy, according to Forrester.

It's when we dig into the details of the report that things become a little murkier. Druva told me that the way scalability was evaluated discounted the data it backs up for endpoint customers, which contributed to its low scalability score.

In the report, Forrester discusses the customer count and data under management requirements for inclusion, but the link to the scalability score isn't clear. The requirements here favour established players targeting large enterprises, which makes the achievements of relative newcomers to this market all the more remarkable.

More perplexing to me are the ratings for recoverability, for which multiple vendors scored a one out of five, and Micro Focus somehow managed to score a zero. I'm told the recoverability rating is based on the ability to detect and guard against ransomware attacking the backup system itself, rather than the ability to recover data. Recovery of data is, in my humble opinion, somewhat of a core function of a data protection system, so I find Forrester's choice of terminology here rather odd.

Many customers, particularly enterprises, value these kinds of analyst reports as a shortcut for product selection. Detailed evaluation is time-consuming and difficult to do well, and customers trust analyst firms to perform a lot of this challenging work on their behalf. This makes sense from a triage perspective, but it's important to look at the details of these reports, not just the headline positions in a specific region of a proprietary branded matrix.

A challenge for vendors is the fear that failure to appear in a particular location in one of these reports (or, horror, not at all) might exclude them from being considered by certain customers. While I have known some customers to require vendor presence in particular analyst reports, I've known just as many to use such presence as a counter-indicator, believing (for whatever reason) that the position in a report isn't fairly earned.

Neither of these responses is fair to the vendors or the analyst firms. These reports should be used as an information source to be evaluated on its own merits. Analysts have to draw the lines somewhere, and once these somewhat arbitrary lines are drawn, any reputable firm conducts the analysis and measurement without fear or favour.

Customers need to familiarise themselves with the methods used for each report type, and how each analyst firm operates, to get the most use out of these reports. Someone on the product evaluation team should be reading these reports in detail, and understanding them in their own customer context. Blindly choosing a vendor partner because they're in a specific report category would be unwise.

However, analyst firms could do more to assist customers reading these reports by including clearer explanations of how the evaluations are performed. Reading a report in detail should increase understanding, not provoke confusion about why certain terminology was used or what it means. Clearer explanations would also help research firms guard against criticism from vendors who feel unfairly treated, and help maintain trust with customers who rely on their expertise.

I encourage customers to look beyond the headline results and speak to their vendors candidly about why they were scored as they were. If nothing else, customers will learn more about the challenges analysts face in summarising a complex body of knowledge for general consumption, and how vendors face the difficult prospect of fitting their own pegs into analyst-report-shaped holes.