PyCave


PyCave provides well-known machine learning models for use with large-scale datasets. This is achieved by leveraging PyTorch's ability to easily perform computations on a GPU and by implementing batch-wise training for all models.

As a result, PyCave's models can work with datasets orders of magnitude larger than those commonly used with Sklearn. At the same time, PyCave provides an API that is familiar to users of both Sklearn and PyTorch.

Internally, PyCave's capabilities are heavily supported by PyBlaze, which enables seamless batch-wise GPU training without additional code.

Features

PyCave currently includes the following models:

All of these models can be trained on a (single) GPU and in a batch-wise fashion.

Installation

PyCave is available on PyPi and can simply be installed as follows:

pip install pycave

Quickstart

A simple guide is available in the documentation.
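
For orientation, here is a minimal usage sketch in the spirit of the Sklearn-style API described above. The import path, the class name `GMM`, and the constructor arguments `num_components` and `num_features` are assumptions made for illustration and may not match the actual interface; please refer to the documentation for the exact API.

```python
import torch

# NOTE: the import path, class name, and argument names below are assumptions
# made for illustration; check the PyCave documentation for the actual API.
from pycave.bayes import GMM

# 10,000 four-dimensional datapoints as a PyTorch tensor.
data = torch.randn(10_000, 4)

# Construct and fit a Gaussian mixture model, Sklearn-style.
gmm = GMM(num_components=8, num_features=4)
gmm.fit(data)

# Cluster assignments for the training data.
labels = gmm.predict(data)
```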

Benchmarks

To demonstrate PyCave's potential, we compared its runtime on both CPU and GPU against the runtime of Sklearn's Gaussian Mixture Model.

We train on 100k 128-dimensional datapoints sampled from a "ground truth" GMM with 512 components. Both PyCave's GMM and Sklearn's then minimize the negative log-likelihood (NLL) of the data. For PyCave's GMM, the convergence threshold on the per-datapoint NLL was set to 0.01; for Sklearn, it had to be set to 1e-5. Initialization was random rather than K-Means-based, since K-Means would need to be run via Sklearn and hence on the CPU.
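
The following sketch shows how the Sklearn baseline of such a benchmark might be set up. The construction of the "ground truth" GMM here (random component means, unit-variance components) is a simplifying assumption, not necessarily the exact procedure used for the numbers below.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Sample 100k 128-dimensional points from a "ground truth" mixture with
# 512 components (random means, unit-variance components as a simplification).
n_components, n_features, n_samples = 512, 128, 100_000
means = rng.normal(scale=10.0, size=(n_components, n_features))
assignments = rng.integers(n_components, size=n_samples)
data = means[assignments] + rng.normal(size=(n_samples, n_features))

# Sklearn baseline: random initialization (no K-Means) and a tolerance of 1e-5.
# Fitting 512 full-covariance components on the CPU takes several minutes.
sk_gmm = GaussianMixture(n_components=n_components, init_params="random", tol=1e-5)
sk_gmm.fit(data)

# score() returns the average log-likelihood per sample; negate it for the NLL.
print("Sklearn per-datapoint NLL:", -sk_gmm.score(data))
```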

| Implementation | Avg. Train Duration | Speedup |
| -------------- | ------------------- | ------- |
| Sklearn (CPU)  | 299.04s             | -       |
| PyCave (CPU)   | 31.29s              | x9.56   |
| PyCave (GPU)   | 0.35s               | x864.36 |

By moving to PyCave's GPU implementation of GMMs, you can therefore expect speedups by a factor of several hundred.

For huge datasets, PyCave's GMM also supports mini-batch training on a GPU. We ran PyCave's GMM on the same kind of data as described above, but on 100 million instead of 100k datapoints, using a batch size of 750k to train on a GPU.
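
A possible mini-batch setup is sketched below. Whether `fit` accepts a PyTorch `DataLoader` directly, as well as the `GMM` class and argument names, are assumptions made for illustration; the documentation describes the actual interface.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# NOTE: the import path, class name, and the DataLoader-based fit call below
# are assumptions made for illustration; see the documentation for the real API.
from pycave.bayes import GMM

# Stand-in for the full 100M-point dataset used in the benchmark.
data = torch.randn(1_500_000, 128)
loader = DataLoader(TensorDataset(data), batch_size=750_000)

gmm = GMM(num_components=512, num_features=128)
gmm.fit(loader)  # batch-wise training; GPU usage depends on the library's configuration
```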

| Implementation           | Training Time |
| ------------------------ | ------------- |
| PyCave (GPU, mini-batch) | 373.23s       |

As can be observed, GMM training scales almost linearly with the number of datapoints: 1,000 times the 0.35s from the table above would be 350s, close to the 373.23s measured.

We ran the benchmarks on 8 cores of an Intel Xeon E5-2630 at 2.2 GHz and a single GeForce GTX 1080 GPU with 11 GB of memory.

License

PyCave is licensed under the MIT License.
