Benchmarking nearest neighbors

This project contains some tools to benchmark various implementations of approximate nearest neighbor (ANN) search.

Evaluated

Data sets

Motivation

Fast nearest-neighbor search in high-dimensional spaces is an increasingly important problem, but so far there has been little effort to compare methods objectively.

Install

Clone the repo and run bash install.sh. This will install all libraries and could take a while. It has been tested on Ubuntu 12.04 and 14.04.

To download and preprocess the data sets, run bash install/glove.sh and bash install/sift.sh.

There is also a Docker image available under erikbern/ann containing all libraries and data sets.
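
For example, the full setup looks roughly like this (a sketch only; the clone URL assumes this fork of the repo, so adjust it to whichever copy you use):

```sh
# Install the benchmarked libraries (may take a while; tested on Ubuntu 12.04/14.04)
git clone https://github.com/fulQuan/ann-benchmarks.git
cd ann-benchmarks
bash install.sh

# Download and preprocess the data sets
bash install/glove.sh
bash install/sift.sh

# Alternatively, pull the prebuilt Docker image with all libraries and data sets
docker pull erikbern/ann
```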

Principles

  • Everyone is welcome to submit pull requests with tweaks and changes to how each library is being used.
  • In particular: if you are the author of any of these libraries, and you think the benchmark can be improved, consider making the improvement and submitting a pull request.
  • This is meant to be an ongoing project and to represent the current state.
  • Make everything easy to replicate, including installing and preparing the datasets.
  • To make it simpler, look only at the precision-performance tradeoff.
  • Try many different parameter settings for each library and ignore the points that are not on the precision-performance frontier (see the sketch after this list).
  • Use high-dimensional datasets with approximately 100-1000 dimensions. This is challenging but also realistic. Not more than 1000 dimensions, because such problems should probably be tackled with a separate dimensionality-reduction step.
  • Use single-core benchmarks. I believe most real-world scenarios can be parallelized in other ways (e.g. by running multiple queries in parallel).
  • Avoid extremely costly index building (more than several hours).
  • Focus on datasets that fit in RAM. Out of core ANN could be the topic of a later comparison.
  • Do a proper train/test split of index data and query points.
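
As an illustration of the frontier idea above, here is a minimal Python sketch (not code from this repository; the function name and the (recall, queries-per-second) representation of results are assumptions) that keeps only the parameter settings not dominated in both precision and performance:

```python
# Hypothetical helper, for illustration only: each result is a
# (recall, queries_per_second) pair for one parameter setting of one library.
def precision_performance_frontier(results):
    """Return the points not dominated by any other point, i.e. for which
    no other point has both higher recall and higher queries per second."""
    # Sort by recall descending, breaking ties by throughput descending.
    ordered = sorted(results, key=lambda p: (-p[0], -p[1]))
    frontier = []
    best_qps = float("-inf")
    for recall, qps in ordered:
        # Keep a point only if it is faster than every point with >= recall.
        if qps > best_qps:
            frontier.append((recall, qps))
            best_qps = qps
    return frontier

if __name__ == "__main__":
    points = [(0.50, 12000), (0.80, 9000), (0.80, 4000), (0.95, 1500), (0.99, 200)]
    print(precision_performance_frontier(points))
    # [(0.99, 200), (0.95, 1500), (0.8, 9000), (0.5, 12000)]
```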

Results

This is very much a work in progress... more results coming later!

1.19M vectors from GloVe (100 dimensions, trained on tweets), cosine similarity, run on a c4.2xlarge instance on EC2.

1M SIFT features (128 dimensions), Euclidean distance, run on a c4.2xlarge instance.

Note that KGraph has a substantial performance regression in the latest version. Once the author has confirmed and fixed the regression, I will rerun the KGraph benchmarks.

Also note that NMSLIB saves its indices in the indices directory. If the benchmarks are re-run with a different seed and/or a different number of queries, the contents of this directory should be deleted first.
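
One way to do that (assuming the command is run from the repository root, where the indices directory is created):

```sh
# Clear cached NMSLIB indices before re-running with a different seed
# or a different number of queries
rm -rf indices/*
```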

Testing

The project is fully tested using Travis, with unit tests run for all different libraries and algorithms.
