Fair benchmarking for cloud computing systems
Compute resource benchmarks are an established part of the high performance computing (HPC) research landscape, and are also in general use in non-HPC settings. As Cloud Computing technology becomes more widely adopted, there will be an increasing need for well-understood benchmarks that offer a fair evaluation of such general-purpose systems in comparison with other kinds of computing systems that have been optimized for specific purposes. Cloud Computing benchmarks need to account for all parts of the lifecycle of cloud system use, and most existing benchmarks do not allow for this.
Cloud-specific benchmarks will increase in importance because clouds support a wider range of applications than HPC offers, and because the variety of options and configurations of cloud systems, together with the effort needed to reach the point at which traditional benchmarks can be run, all affect the fairness of any comparison.
In this pilot, we set out to create an academically focused cloud benchmarking site that accounts fairly for such variations. The principal outcome will be a web portal that embodies these considerations and can be used to access data about benchmark runs, and potentially to adapt benchmarks to run on other Cloud systems. The portal will offer a service to a knowledgeable user that returns the closest matches, based on the closest portfolio of benchmark elements, to a set of requirements specified about their own application as a Service Level Agreement (SLA). The portal will also offer access to bundled benchmark tests (virtual machines containing such applications) constructed during the project, which will spare other researchers from repeating this work and from the associated costs, in Cloud usage as well as in effort, of doing so.
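The closest-match service described above could take many forms; one minimal sketch, under the assumption that each cloud system's benchmark portfolio and the user's SLA requirements are expressed as normalised scores over named benchmark elements, is a simple distance-based ranking (the function names and data here are illustrative, not the project's actual design):

```python
import math

def closest_matches(sla_requirements, benchmark_portfolios, top_n=3):
    """Rank candidate cloud systems by the Euclidean distance between
    their benchmark scores and the user's SLA requirements, over the
    benchmark elements they share (lower distance = closer match)."""
    ranked = []
    for system, scores in benchmark_portfolios.items():
        shared = set(sla_requirements) & set(scores)
        if not shared:
            continue  # no comparable benchmark elements for this system
        dist = math.sqrt(sum((sla_requirements[k] - scores[k]) ** 2
                             for k in shared))
        ranked.append((dist, system))
    return [system for _, system in sorted(ranked)[:top_n]]

# Hypothetical benchmark data, scores normalised to the range [0, 1]
portfolios = {
    "cloud_a": {"io_throughput": 0.8, "cpu": 0.6, "startup_time": 0.9},
    "cloud_b": {"io_throughput": 0.4, "cpu": 0.9, "startup_time": 0.5},
}
requirements = {"io_throughput": 0.7, "cpu": 0.7}
print(closest_matches(requirements, portfolios))  # → ['cloud_a', 'cloud_b']
```

A production service would need richer matching (weighted elements, hard SLA constraints, confidence intervals on benchmark runs), but the shape of the query, requirements in, ranked systems out, stays the same.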
- Professor Mark Baker, University of Reading, UK
- Dr Terry Harmer, Belfast e-Science Centre, UK
- EoverI Ltd, Belfast, UK / Mediasmiths International Ltd.