How sound is your cloud?

7 December 2015

Every public cloud likes to proclaim it is the cheapest, fastest, and most convenient to use, but obviously not everyone can be fastest. Hence the need for good, comprehensive public cloud benchmarks, not only to inform buying decisions but also to keep cloud vendors honest about their claims.

However, cloud resources are tough to benchmark, given their notoriously fickle and inconsistent performance. The question, then, is how best to attempt it.

Do it yourself…
Right now there are three ways to go about obtaining benchmarks: Run your own applications on different clouds and see what happens; run someone else’s benchmarks; or get someone in the business of benchmarking to run the tests for you. All three approaches have shortcomings — in large part because it is tough to create a single, consistent standard for what to test and how.

The do-it-yourself route is the least appealing and the scenario most people want to avoid. The second approach, deploying someone else's benchmarks, is slightly more appealing, since a number of third-party test suites are available.

In 2010, Yahoo produced the Yahoo Cloud Serving Benchmark (YCSB), which it still revises and keeps current. Its main focus is benchmarking how big-data database systems behave in the cloud, with support for stores ranging from HBase, Cassandra, and MongoDB to Redis, GemFire, and DynamoDB, and many others in between. That narrow focus is both a boon and a drawback: it is useful if you want to test the behaviour of a single class of services commonly hosted in the cloud, but by itself it is not an end-to-end metric.
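
For a sense of how YCSB is driven in practice, here is a minimal sketch that shells out to its standard load and run phases against a MongoDB instance; the target store, workload file, and paths are illustrative assumptions rather than details from the article.

    # Illustrative sketch: drive YCSB's load and run phases from Python.
    # The MongoDB target, workload file, and launcher path are assumptions.
    import subprocess

    YCSB = "./bin/ycsb"               # launcher inside an unpacked YCSB release
    WORKLOAD = "workloads/workloada"  # the 50/50 read/update mix YCSB ships with

    # Phase 1: load the initial records into the store under test.
    subprocess.run([YCSB, "load", "mongodb", "-s", "-P", WORKLOAD], check=True)

    # Phase 2: run the workload and capture the throughput/latency summary.
    result = subprocess.run([YCSB, "run", "mongodb", "-s", "-P", WORKLOAD],
                            capture_output=True, text=True, check=True)
    print(result.stdout)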

Earlier this year Google tried a more comprehensive approach with PerfKit. Written in Python, it rolls a bushel of benchmark suites — including YCSB — into one, and provides visualisation tools to get a grip on the final results. That’s useful, but simply combining a number of tests isn’t prescriptive. Whoever’s analysing the tests has to be able to look at the results and figure out if there’s anything actionable.
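
PerfKit Benchmarker is driven from the command line; a minimal sketch of launching a run, assuming Google's cloud as the target and the bundled fio and iperf suites (the machine type and flag values here are illustrative, not a recommended configuration), might look like this:

    # Illustrative sketch: launch a PerfKit Benchmarker run from Python.
    # Cloud, benchmark selection, and machine type are assumptions for
    # illustration; pkb.py accepts many more flags.
    import subprocess

    subprocess.run([
        "python", "pkb.py",
        "--cloud=GCP",                   # provider to provision and test
        "--benchmarks=fio,iperf",        # disk and network suites bundled with PerfKit
        "--machine_type=n1-standard-1",  # instance size to spin up
    ], check=True)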

… or trust someone else to do it
That is where third-party benchmarking firms come into the picture: to provide perspective. They not only run benchmarks on public clouds but provide additional analysis about price vs. performance, and track changes in performance for different offerings over time. The trick, though, is to ensure they’re being honest about how they test and what they report.

Two such vendors, Cloud Spectator and Cloudscreener, publish regular reports that benchmark the behaviours of various vendors' clouds. Cloudscreener also provides a visual tool that lets you create a sample system configuration and determine what kind of performance you're likely to get from each of the major providers, and how much you can expect to spend.

With third-party commercial benchmarking, though, the danger is that the benchmarks will be opaque, whether in the benchmarks themselves or in the testing methodology. Cloud Spectator uses Geekbench and fio, two common stress tests for system performance. The latter is open source, but source code for the former can only be read with a corporate licence. Cloudscreener uses fio and Sysbench, both open source. Cloudscreener co-founder and CEO Anthony Sollinger stated in an email that the company is “fully transparent about the methodology” and that “this kind of benchmark and ranking methodology has to be fully transparent and explained.”
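
As a rough illustration of the raw measurements underneath such reports, the sketch below runs a short fio random-read job and pulls the reported IOPS out of its JSON output; the job parameters are assumptions for illustration, not either firm's published methodology.

    # Illustrative sketch: run a short fio random-read job and extract IOPS
    # from its JSON report. Job parameters are assumptions, not a vendor's
    # actual test plan.
    import json
    import subprocess

    out = subprocess.run([
        "fio", "--name=randread", "--rw=randread", "--bs=4k",
        "--size=256m", "--runtime=30", "--time_based",
        "--output-format=json",
    ], capture_output=True, text=True, check=True)

    report = json.loads(out.stdout)
    print("read IOPS:", report["jobs"][0]["read"]["iops"])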

InfoWorld contributor Steven J Vaughan-Nichols has expressed scepticism about the usefulness of cloud benchmarking and believes the only benchmark that really matters is one's own application. But having testing methodologies for the public cloud is useful above and beyond getting an idea of how one's apps might run: it is about ensuring that cloud vendors offer what they claim and keeping them above board. If it falls to third parties to do that, so be it, so long as they, too, can be kept honest.

 

Serdar Yegulalp, IDG News Service
