
DVC logo

Website • Docs • Blog • Twitter • Chat (Community & Support) • Tutorial • Mailing List

Release GHA Tests Code Climate Codecov Donate DOI

PyPI deb|pkg|rpm|exe Homebrew Conda-forge Chocolatey Snapcraft

Data Version Control or DVC is an open-source tool for data science and machine learning projects. Key features:

  1. Simple command-line, Git-like experience. Does not require installing and maintaining any databases. Does not depend on any proprietary online services.
  2. Management and versioning of datasets and machine learning models. Data can be saved in S3, Google Cloud, Azure, Alibaba Cloud, an SSH server, HDFS, or even a local HDD RAID.
  3. Makes projects reproducible and shareable, helping to answer questions about how a model was built.
  4. Helps manage experiments with Git tags/branches and metrics tracking.

DVC aims to replace spreadsheet and document-sharing tools (such as Excel or Google Docs), which are frequently used as both knowledge repositories and team ledgers. It also replaces ad-hoc scripts for tracking, moving, and deploying different model versions, as well as ad-hoc data file suffixes and prefixes.


How DVC works

We encourage you to read our Get Started guide to better understand what DVC is and how it can fit your scenarios.

The easiest (if not perfect) analogy to describe it: DVC is Git (or Git-LFS, to be precise) and Makefiles done right, tailored specifically for ML and data science scenarios.

  1. Git/Git-LFS part - DVC helps store and share data artifacts and models, connecting them with a Git repository.
  2. Makefiles part - DVC describes how one data or model artifact was built from other data and code.

DVC usually runs along with Git. Git is used as usual to store and version code (including DVC meta-files). DVC helps to store data and model files seamlessly outside of Git, while preserving almost the same user experience as if they were stored in Git itself. To store and share the data cache, DVC supports multiple remotes: any cloud (S3, Azure, Google Cloud, etc.) or any on-premise network storage (via SSH, for example).

[Figure: how DVC works]
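A minimal sketch of the workflow described above (the file name, remote name, and bucket path are placeholders, not part of the original example):

  $ dvc init                                          # set up DVC inside an existing Git repository
  $ dvc add data.csv                                  # track a data file; creates data.csv.dvc and updates .gitignore
  $ git add data.csv.dvc .gitignore
  $ git commit -m "Track data.csv with DVC"
  $ dvc remote add -d storage s3://mybucket/dvcstore  # configure a default remote for the data cache
  $ dvc push                                          # upload the cached data to the remote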

The DVC pipelines (computational graph) feature connects code and data together. It is possible to explicitly specify all steps required to produce a model: input dependencies including data, commands to run, and output information to be saved. See the quick start section below or the Get Started tutorial to learn more.

Quick start

Please read the Get Started guide for the full version. Common workflow commands include:

  • Track data:
    $ git add train.py
    $ dvc add images.zip
  • Connect code and data by commands:
    $ dvc run -d images.zip -o images/ unzip -q images.zip
    $ dvc run -d images/ -d train.py -o model.p python train.py
  • Make changes and reproduce:
    $ vi train.py
    $ dvc repro model.p.dvc
  • Share code:
    $ git add .
    $ git commit -m 'The baseline model'
    $ git push
  • Share data and ML models:
    $ dvc remote add myremote -d s3://mybucket/image_cnn
    $ dvc push
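On another machine, a collaborator can then typically retrieve both the code and the data (a minimal sketch; the repository URL is a placeholder):

  $ git clone https://github.com/example/project.git
  $ cd project
  $ dvc pull    # download data and models from the DVC remote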

Installation

There are several options to install DVC: pip, Homebrew, Conda (Anaconda), Snap, Chocolatey, or an OS-specific package. Full instructions are available here.

Snap (Snapcraft/Linux)

Snapcraft

snap install dvc --classic

This corresponds to the latest tagged release. Add --beta for the latest tagged release candidate, or --edge for the latest master version.
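For example, to follow the release-candidate channel instead (a small sketch based on the channel flags described above):

snap install dvc --classic --beta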

Choco (Chocolatey/Windows)

Chocolatey

choco install dvc

Brew (Homebrew/Mac OS)

Homebrew

brew install dvc

Conda (Anaconda)

Conda-forge

conda install -c conda-forge mamba # installs much faster than conda
mamba install -c conda-forge dvc

Depending on the remote storage type you plan to use to keep and share your data, you might need to install optional dependencies: dvc-s3, dvc-azure, dvc-gdrive, dvc-gs, dvc-oss, dvc-ssh.
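For example, to add S3 and SSH support (a minimal sketch using two of the plugin packages listed above):

mamba install -c conda-forge dvc-s3 dvc-ssh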

pip (PyPI)

PyPI

pip install dvc

Depending on the remote storage type you plan to use to keep and share your data, you might need to specify one of the optional dependencies: s3, gs, azure, oss, ssh. Or all to include them all. The command should look like this: pip install dvc[s3] (in this case AWS S3 dependencies such as boto3 will be installed automatically).
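For example (a minimal sketch; the quotes keep shells such as zsh from expanding the brackets):

pip install "dvc[s3]"
pip install "dvc[s3,gs]"
pip install "dvc[all]"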

To install the development version, run:

pip install git+git://github.com/iterative/dvc

Package

deb|pkg|rpm|exe

Self-contained packages for Linux, Windows, and Mac are available. The latest version of the packages can be found on the GitHub releases page.

Ubuntu / Debian (deb)

sudo wget https://dvc.org/deb/dvc.list -O /etc/apt/sources.list.d/dvc.list
sudo apt-get update
sudo apt-get install dvc

Fedora / CentOS (rpm)

sudo wget https://dvc.org/rpm/dvc.repo -O /etc/yum.repos.d/dvc.repo
sudo yum update
sudo yum install dvc

Comparison to related technologies

  1. Data engineering tools such as Airflow, Luigi, and others - in DVC, data, models, and ML pipelines represent a single ML project and focus on the data scientist's experience, while a data engineering pipeline orchestrates multiple data projects and focuses on efficient execution. A DVC project can be used from a data pipeline as a single execution step. Airflow DVC is an example of such an integration.
  2. Git-annex - DVC uses the idea of storing the content of large files (which should not be in a Git repository) in a local key-value store, and uses file hardlinks/symlinks instead of copying/duplicating files.
  3. Git-LFS - DVC is compatible with any remote storage (S3, Google Cloud, Azure, SSH, etc.). DVC also uses reflinks or hardlinks to avoid copy operations on checkouts, thus handling large data files much more efficiently.
  4. Makefile (and analogues including ad-hoc scripts) - DVC tracks dependencies (in a directed acyclic graph).
  5. Workflow Management Systems - DVC is a workflow management system designed specifically to manage machine learning experiments. DVC is built on top of Git.
  6. DAGsHub - an online service for Git+DVC repositories with pipeline and metrics visualization, and DVC-specific cloud storage.
  7. DVC Studio - an online service from the DVC team for visualizing DVC repositories, integrated with CML (CI/CD for ML) for training models in the cloud and on Kubernetes.

Contributing

Code Climate Donate

Contributions are welcome! Please see our Contributing Guide for more details.

Mailing List

Want to stay up to date? Want to help improve DVC by participating in our occasional polls? Subscribe to our mailing list. No spam, really low traffic.

Copyright

This project is distributed under the Apache license version 2.0 (see the LICENSE file in the project root).

By submitting a pull request to this project, you agree to license your contribution under the Apache license version 2.0 to this project.

Citation

DOI

Iterative, DVC: Data Version Control - Git for Data & Models (2020). DOI:10.5281/zenodo.3677553.

Barrak, A., Eghan, E.E., and Adams, B. On the Co-evolution of ML Pipelines and Source Code - Empirical Study of DVC Projects, in Proceedings of the 28th IEEE International Conference on Software Analysis, Evolution, and Reengineering, SANER 2021, Hawaii, USA.
