
nataliepopescu/bencher_scrape


Setting up the Environment

Requirements

make

python3

cmake >= 3.13.4

scrapy >= 2.0.0

numpy >= 1.16.1

dash >= 0.42.0
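
The Python packages can be installed with pip; a minimal sketch, assuming pip is wired to your python3:

$ python3 -m pip install "scrapy>=2.0.0" "numpy>=1.16.1" "dash>=0.42.0"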

LLVM

Clone this LLVM repository and make sure you're in the right branch:

$ git checkout match-version-from-rust
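
To confirm the checkout landed on the expected branch (standard git, nothing repo-specific):

$ git branch --show-current   # should print match-version-from-rust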

Configure

Configure LLVM with your desired build system (we used Unix Makefiles) and these flags:

$ mkdir build && cd build && cmake -G "Unix Makefiles" \
	-DCMAKE_INSTALL_PREFIX="/path/you/have/read/write/access/to" \
	-DLLVM_ENABLE_PROJECTS="clang" \
	-DCMAKE_BUILD_TYPE=Release ../llvm

Build and Install

$ make install-llvm-headers && make -j$(nproc)
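
As a quick sanity check, you can query the freshly built llvm-config from the build directory; this is the binary the Rust config.toml below should point at:

$ ./bin/llvm-config --version   # run from the build directory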

Rust

Clone this Rust repository.

Configure

Make the following changes to your config.toml:

[install]
...
prefix = "/another/path/you/have/read/write/access/to"
sysconfdir = "etc"
...

[target.*]
...
llvm-config = "path/to/local/llvm-config"
...
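
If you don't have a config.toml yet, the Rust repository ships a template you can copy first (named config.toml.example on older checkouts, config.example.toml on newer ones):

$ cp config.toml.example config.toml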

Build and Install

$ ./x.py build && ./x.py install && ./x.py install cargo && ./x.py doc
$ cargo install cargo-edit
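
The toolchain lands under the prefix set in config.toml, so put its bin directory on your PATH and confirm the right rustc is picked up; a sketch, assuming the prefix from the config above:

$ export PATH="/another/path/you/have/read/write/access/to/bin:$PATH"
$ rustc --version && cargo --version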

Benchmarking

Clone this repository and run:

$ python3 tool.py -h

Example Workflow: Benchmarking Reverse Dependencies of Criterion

  1. Scrape crates.io and download the latest set of Criterion reverse dependencies by running the following command from the top-level directory of this repository:
$ python3 tool.py --scrape 200

This will create and populate a directory called "criterion_rev_deps" with the 200 most downloaded reverse dependencies of the criterion benchmarking crate.

  2. Now you can pre-compile the benchmarks with:
$ python3 tool.py --compile
  3. Finally, run the benchmarks by passing the number of rounds you want each benchmark to run for:
$ python3 tool.py --bench 10

Note: you can also run steps 1-3 in a single command, depending on how the benchmarks will be run:

$ python3 tool.py --scrape 200 --compile --bench 10
  4. If you would like to consolidate per-crate results from all runs on the current node, you can run this instead of the command above:
$ python3 tool.py --scrape 200 --compile --bench 10 --local

or run the consolidation step separately:

$ python3 tool.py --local

You will find the consolidated results in the "crunched.data" file in the results directory, which can then be visualized (see Visualizing Results below).

  5. If you would like to consolidate all results across one or more remote machines, assuming benchmarks were run on the remote machines with some version of the benchmarking command above (step 3 or 4) and every remote machine contains results for the same number of runs, run:
$ python3 tool.py --remote <filename>

where <filename> should contain a list of the remote ssh destination nodes from which to get results and an absolute path pointing to the location of this repository on the remote nodes. If this repository lives in the same place on all nodes, a single absolute path can be used. Otherwise, an absolute path must be specified per remote node. See remote_same.example and remote.example for what such files should look like.

Visualizing Results

$ python3 result_presenter.py
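
Since the presenter is built on Dash (see the requirements above), it serves an interactive page locally; unless the script overrides the host or port, Dash apps listen on http://127.0.0.1:8050 by default.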

Coarse Findings

Top 200 crates: 8,811 Lines of Unsafe Rust Code (LoURC), or 0.83% of total lines

Top 500 crates: 17,260 LoURC, or 0.84% of total lines
