
A workflow management tool that accelerates DS/ML experimentation


ploomber


Click here for documentation

ploomber is a workflow management tool that accelerates experimentation and facilitates building production systems. It does so by providing incremental builds, interactive execution, and tools to inspect pipelines, while facilitating testing and reducing boilerplate code.

Install

If you want to try out everything ploomber has to offer:

pip install "ploomber[all]"

Note that installing everything will attempt to install pygraphviz, which depends on graphviz, so you have to install graphviz first:

# if you are using conda (recommended)
conda install graphviz
# if you are using homebrew
brew install graphviz
# for other systems, see: https://www.graphviz.org/download/

If you want to start with the minimal amount of dependencies:

pip install ploomber

Example

import pandas as pd
from pathlib import Path

from ploomber import DAG
from ploomber.products import File
from ploomber.tasks import PythonCallable, SQLDump
from ploomber.clients import SQLAlchemyClient

# placeholders: replace with your own output directory and database URI
tmp_dir = Path('/tmp/ploomber-example')
uri = 'sqlite:///my.db'
dag = DAG()

# the first task dumps data from the db to the local filesystem
task_dump = SQLDump('SELECT * FROM example',
                    File(tmp_dir / 'example.csv'),
                    dag,
                    name='dump',
                    client=SQLAlchemyClient(uri),
                    chunksize=None)

def _add_one(upstream, product):
    """Add one to column a
    """
    df = pd.read_csv(str(upstream['dump']))
    df['a'] = df['a'] + 1
    df.to_csv(str(product), index=False)

# we convert the Python function to a Task
task_add_one = PythonCallable(_add_one,
                              File(tmp_dir / 'add_one.csv'),
                              dag,
                              name='add_one')

# declare how tasks relate to each other
task_dump >> task_add_one

# run the pipeline - incremental builds: ploomber will keep track of each
# task's source code and will only execute outdated tasks in the next run
dag.build()

# a DAG also serves as a tool to interact with your pipeline, for example,
# status will return a summary table
dag.status()
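The incremental-build behavior mentioned in the comments above can be sketched in plain Python. The snippet below is a simplified illustration of the idea (track a fingerprint of each task's source and skip up-to-date tasks), not ploomber's actual implementation, which also tracks products and upstream dependencies:

```python
import hashlib


def source_fingerprint(fn):
    """Hash a function's compiled bytecode to detect source changes."""
    return hashlib.sha256(fn.__code__.co_code).hexdigest()


class IncrementalTask:
    """Runs its function only if the source changed since the last build."""

    def __init__(self, fn):
        self.fn = fn
        self._last_fingerprint = None

    def build(self):
        current = source_fingerprint(self.fn)
        if current == self._last_fingerprint:
            return 'skipped'  # source unchanged: nothing to do
        self.fn()
        self._last_fingerprint = current
        return 'executed'


def make_report():
    return 'report ready'


task = IncrementalTask(make_report)
print(task.build())  # executed (first run)
print(task.build())  # skipped (source unchanged)
```

In ploomber itself, the fingerprint comparison extends to SQL scripts and notebook sources as well, which is what lets `dag.build()` re-run only the outdated parts of a pipeline.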
