Crowd-sourced benchmarks

Colin Gillespie

System benchmarking

R benchmarking made easy. The package contains a number of benchmarks, heavily based on the benchmarks at https://mac.R-project.org/benchmarks/R-benchmark-25.R, for assessing the speed of your system.

Overview

A straightforward way of speeding up your analysis is to buy a better computer. Modern desktops are relatively cheap, especially compared to user time. However, it isn’t clear whether upgrading your computer is worth the cost. The benchmarkme package provides a set of benchmarks to help quantify your system. More importantly, it allows you to compare your timings with other systems.

Installation

The package is on CRAN and can be installed in the usual way

install.packages("benchmarkme")

There are two groups of benchmarks:

The benchmark_std() function

This benchmarks numerical operations such as loops and matrix operations. It comprises three separate benchmarks: prog, matrix_fun, and matrix_cal. If you have less than 3 GB of RAM (run get_ram() to find out how much is available on your system), you should kill any memory-hungry applications, e.g. Firefox, and set runs = 1 as an argument.
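For example, a minimal sketch of choosing the number of runs from the available RAM (assuming get_ram() reports the amount in bytes, so 3 GB is 3 * 1024^3):

library("benchmarkme")
## Available RAM; coerce to numeric for the comparison
ram = as.numeric(get_ram())
## Use a single run on low-memory machines, three runs otherwise
runs = if (!is.na(ram) && ram < 3 * 1024^3) 1 else 3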

To benchmark your system, use

library("benchmarkme")
## Increase runs if you have a higher spec machine
res = benchmark_std(runs = 3)

and upload your results

## You can control exactly what is uploaded. See details below.
upload_results(res)

You can compare your results to other users via

plot(res)

The benchmark_io() function

This function benchmarks reading and writing a 5 MB or 50 MB file (if you have less than 4 GB of RAM, reduce the number of runs to 1). Run the benchmark using

res_io = benchmark_io(runs = 3)
upload_results(res_io)
plot(res_io)

By default, the files are written to a temporary directory generated by

tempdir()

which depends on the value of

Sys.getenv("TMPDIR")

You can alter this via the tmpdir argument. This is useful for comparing local hard drive access with network drive access.

res_io = benchmark_io(tmpdir = "some_other_directory")

Parallel benchmarks

The benchmark functions above have a parallel option: simply specify the number of cores you want to test. For example, to test using four cores

res_io = benchmark_std(runs = 3, cores = 4)

For a pseudo function benchmark_x(cores = n), the parallel benchmarking process is (see the sketch after this list):

  - initialise the parallel environment;
  - start the timer;
  - run job x on cores 1, 2, …, n simultaneously;
  - stop the timer when all jobs finish;
  - stop the parallel environment.

This procedure is repeated runs times.
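As an illustration (not the package's internal code), the procedure can be sketched with the parallel package; one_job() is a hypothetical stand-in for a single benchmark run:

library("parallel")
## Hypothetical stand-in for a single benchmark job
one_job = function(i) {
  m = matrix(rnorm(500 * 500), ncol = 500)
  m %*% m
}
cl = makeCluster(4)                 # initialise the parallel environment
start = Sys.time()                  # start the timer
res = parLapply(cl, 1:4, one_job)   # run the job on cores 1, ..., 4 simultaneously
elapsed = Sys.time() - start        # stop the timer once all jobs have finished
stopCluster(cl)                     # stop the parallel environment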

Previous versions of this package

This package was started around 2015. However, multiple changes to the byte compiler over the last few years have made it very difficult to use previous results, so we have had to start from scratch.

The previous data can be obtained via

data(past_results, package = "benchmarkmeData")

Machine specs

The package has a few useful functions for extracting system specs, for example get_ram() and get_cpu().

These functions have been tested on a number of systems. If they don’t work on your system, please raise a GitHub issue.
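For example, to query the amount of RAM and the CPU details on the current machine:

library("benchmarkme")
## Amount of RAM
get_ram()
## CPU details
get_cpu()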

Uploaded data sets

A summary of the uploaded data sets is available in the benchmarkmeData package

data(past_results_v2, package = "benchmarkmeData")

A column of this data set contains the unique identifier returned by the upload_results() function.
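For example, to inspect the first few rows of the summary (the exact columns may vary between releases):

## The data set is loaded by the data() call above
head(past_results_v2)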

What’s uploaded

Two objects are uploaded:

  1. Your benchmarks from benchmark_std() or benchmark_io();
  2. A summary of your system information (get_sys_details()).

The get_sys_details() function returns a list of information about your system, including the output of Sys.info().

The function Sys.info() does include the user and node names. In the public release of the data, this information will be removed. If you don’t wish to upload certain information, just set the corresponding argument, e.g.

upload_results(res, args = list(sys_info = FALSE))
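To see exactly what would be sent, you can inspect the system summary before uploading; get_sys_details() returns a list, so names() shows its components:

## Inspect the system information that accompanies the benchmark results
sys = get_sys_details()
names(sys)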

Development of this package was supported by Jumping Rivers.