make
python3
cmake >= 3.13.4
scrapy >= 2.0.0
numpy >= 1.16.1
dash >= 0.42.0
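The Python dependencies above can be captured in a requirements file (the filename requirements.txt is our suggestion, not something this repository prescribes):

```
scrapy>=2.0.0
numpy>=1.16.1
dash>=0.42.0
```

and installed in one step with:
$ pip3 install -r requirements.txt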
Clone this LLVM repository and make sure you're in the right branch:
$ git checkout match-version-from-rust
Configure LLVM with your desired build system (we used Unix Makefiles) and these flags:
$ mkdir build && cd build && cmake -G "Unix Makefiles" \
-DCMAKE_INSTALL_PREFIX="/path/you/have/read/write/access/to" \
-DLLVM_ENABLE_PROJECTS="clang" \
-DCMAKE_BUILD_TYPE=Release ../llvm
$ make install-llvm-headers && make -j$(nproc)
Clone this Rust repository.
Make the following changes to your config.toml:
[install]
...
prefix = "/another/path/you/have/read/write/access/to"
sysconfdir = "etc"
...
[target.*]
...
llvm-config = "path/to/local/llvm-config"
...
$ ./x.py build && ./x.py install && ./x.py install cargo && ./x.py doc
$ cargo install cargo-edit
Clone this repository and run:
$ python3 tool.py -h
Example Workflow: Benchmarking Reverse Dependencies of Criterion
- Scrape crates.io and download the latest set of Criterion reverse dependencies, by running the following command from the top-level directory in this repository:
$ python3 tool.py --scrape 200
This will create and populate a directory called "criterion_rev_deps" with the 200 most downloaded reverse dependencies of the criterion benchmarking crate.
- Now you can pre-compile the benchmarks with:
$ python3 tool.py --compile
- Finally, run the benchmarks by passing the number of rounds you want each benchmark to run for:
$ python3 tool.py --bench 10
Note: you can also run steps 1-3 in a single command, depending on how the benchmarks will be run:
$ python3 tool.py --scrape 200 --compile --bench 10
- If you would like to consolidate per-crate results from all runs on the current node, you can run this instead of the above step:
$ python3 tool.py --scrape 200 --compile --bench 10 --local
or consolidate separately afterwards with:
$ python3 tool.py --local
You will find the consolidated results in the "crunched.data" file in the results directory, which can then be visualized (see this section).
- If you would like to consolidate all results across one or more remote machines, run:
$ python3 tool.py --remote <filename>
This assumes the benchmarks were run on the remote machines with some version of the command from step 2, and that every remote machine contains results for the same number of runs. Here <filename> should contain a list of the remote ssh destination nodes from which to get results, along with an absolute path pointing to the location of this repository on the remote nodes. If this repository lives in the same place on all nodes, a single absolute path can be used; otherwise, an absolute path must be specified per remote node. See remote_same.example and remote.example for what such files should look like.
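A sketch of what such a file might contain (the hostnames and path below are placeholders; consult remote_same.example and remote.example in this repository for the authoritative format):

```
user@node1.example.com /home/user/benchmark-tool
user@node2.example.com /home/user/benchmark-tool
```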
Visualize the consolidated results with:
$ python3 result_presenter.py
Top 200 crates: 8811 Lines of Unsafe Rust Code (LoURC) - 0.83%
Top 500 crates: 17260 LoURC - 0.84%
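As a rough sanity check on the figures above, the total line counts they imply can be recovered by dividing each LoURC count by its fraction (a back-of-the-envelope calculation, assuming the percentage is LoURC over total lines of Rust code):

```python
# Implied total lines of Rust code, from LoURC count / fraction.
lourc_top200 = 8811
lourc_top500 = 17260

total_top200 = lourc_top200 / 0.0083  # roughly 1.06 million lines
total_top500 = lourc_top500 / 0.0084  # roughly 2.05 million lines

print(f"Implied totals: {total_top200:,.0f} and {total_top500:,.0f} lines")
```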