Getting Started Guide

Our evaluation consists of microbenchmarks and real-world benchmarks, both of which are fully automated. The system requirements to execute them are as follows:

The benchmarks require heap sizes of 4 GB; machines with at least 8 GB RAM are therefore recommended.
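For reference, a heap of this size is requested with standard JVM flags. The following line is a generic illustration with a hypothetical JAR name, not our actual invocation; the real flags are set by the Makefile:

java -Xms4g -Xmx4g -jar benchmarks.jar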

We assume familiarity with UNIX terminals and command-line tools, and we do not go into detail on how to install the required tools. We used our artifact under both Apple OS X and Linux.
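As a quick sanity check before starting, you can verify that a sufficient toolchain is installed. The tool set below (make, a JDK, R) is our reading of what this guide relies on, inferred from the Makefile, the benchmark JAR, and the benchmarks.r script mentioned later:

make --version
java -version
Rscript --version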

We wanted to make the use of our artifact as simple as possible. If the system requirements are fulfilled, reproducing our results requires the execution of four commands in a console/terminal:

Moving into the artifacts directory:

cd oopsla15-artifacts

Setting up and compiling the artifacts:

make prepare

Running microbenchmarks and real-world benchmarks:

make run

Running result analysis and post-processing:

make postprocessing

The first command consumes no time. The second command should take approximately five minutes and should complete without errors. The third command, however, will take several hours or even days: for statistical testing we invoke every benchmark multiple times, and in our real-world evaluation the slowest single invocation alone takes 30 minutes. The fourth command, the analysis and post-processing, usually takes around a minute or less.

As an alternative to make prepare and make run, we provide a make run_prebuilt command that runs a prebuilt benchmark JAR file. If you experience any issues running the experiments, you might start with the make run_prebuilt command.
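A complete fallback session that avoids compiling the benchmarks yourself would thus combine the commands introduced above:

cd oopsla15-artifacts
make run_prebuilt
make postprocessing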

We further include all results that we obtained from step three. Consequently, our results can be evaluated without executing our automated benchmark suite. We provide an extra command for this purpose:

make postprocessing_cached
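Evaluating the cached results from a fresh checkout should thus reduce to the following two commands (we assume here that postprocessing_cached does not require a prior make prepare):

cd oopsla15-artifacts
make postprocessing_cached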

Furthermore, the section "Running the Benchmarks on Smaller Samples" below points out how to run the experiments on smaller subsets that consume less time.

To manually inspect what the make commands do, have a look at oopsla15-artifacts/Makefile.
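For a quick overview, the declared targets can also be listed with a generic grep; the pattern below is an assumption about the Makefile's formatting, not a documented interface:

grep -E '^[a-zA-Z_]+:' oopsla15-artifacts/Makefile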

Key Data Items of our Evaluation

Our cached results are contained in the folder oopsla15-benchmarks/resources/r.
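To verify that the cached results are in place, list that folder; the CSV files described next should appear there:

ls oopsla15-benchmarks/resources/r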

The following files contain data from the microbenchmarks that are discussed in Section 6 of the paper:

These CSV files are then processed by benchmarks.r, an R script, which directly produces the boxplots of Figures 4, 5, 6, and 7 of the paper. The boxplots are named all-benchmarks-vf_pdbpersistent(current|memoized)_byvf(scala|clojure)-(set|map)-boxplot.pdf.
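make postprocessing and make postprocessing_cached are the supported entry points for this step. Should they fail, the script can in principle be run manually; the working directory below, and the assumption that benchmarks.r sits next to the CSV files, are ours rather than documented:

cd oopsla15-benchmarks/resources/r
Rscript benchmarks.r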

The following files contain data from the real-world benchmarks that are discussed in Section 7 of the paper:

Key Source Items of our Artifact

For readers interested in manually inspecting our implementations: our CHAMP hash trie implementations can be found under pdb.values/src/org/eclipse/imp/pdb/facts/util/Trie(Set|Map), and the MEMCHAMP variants reside in the same directory.

Projects pdb.values.persistent.(clojure|scala) contain simple interface facades that enable cross-library benchmarks under a common API.

The benchmark implementations can be found in the oopsla15-benchmarks project. Files Dominators(Champ|Clojure).java and DominatorsScala_Default.scala implement the real-world experiment (Section 7 of the paper). For CHAMP and Scala there are additional dominator implementations with the postfix LazyHashCode for the normalized experiments.

Files Jmh(Set|Map) measure the runtimes of individual operations, whereas a separate benchmark performs the footprint measurements (cf. Section 6, Figures 4, 5, 6, and 7). Note that the benchmarks contain default parameters for their invocation; the actual parameters are set in two separate files.
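The Jmh(Set|Map) naming suggests the benchmarks are built on JMH. If so, the prebuilt JAR can in principle also be invoked directly with JMH's standard command-line options; the JAR name, benchmark pattern, and iteration counts below are illustrative assumptions, not our defaults:

java -Xmx4g -jar benchmarks.jar 'Jmh(Set|Map)' -wi 5 -i 10 -f 1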

Running the Benchmarks on Smaller Samples

To run the microbenchmarks on smaller examples, we recommend (some of) the following changes in

To run the real-world benchmarks on smaller examples, we recommend (some of) the following changes in