A Scala / Java / Python library for cleaning, transforming, and performing other preparation tasks on large datasets with Apache Spark.
It is currently maintained by a team of developers from ThoughtWorks.
Post questions and comments to the Google group, or email them directly to data-commons-toolchain@googlegroups.com.
Docs are available at http://data-commons.github.io/prep-buddy, or check out the [Javadocs](http://data-commons.github.io/prep-buddy/javadocs) or the Python docs (coming soon!).
Our aim is to provide a set of algorithms for cleaning and transforming very large datasets, inspired by predecessors such as OpenRefine, pandas, and scikit-learn.
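For a sense of the kind of task this covers, a common preparation step is imputing missing values with a column mean. The snippet below is a minimal local sketch in plain Python, not prep-buddy's actual API; the function and variable names are illustrative only:

```python
# Minimal local sketch of mean imputation, one common data-cleaning task.
# This is NOT prep-buddy's API; all names here are hypothetical.

def impute_with_mean(values):
    """Replace None entries with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

ages = [23.0, None, 31.0, None, 42.0]
print(impute_with_mean(ages))  # -> [23.0, 32.0, 31.0, 32.0, 42.0]
```

In the library, an operation like this would run distributed over a Spark RDD or DataFrame rather than a local list.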
- Official source code repo: https://github.com/data-commons/prep-buddy
- Javadocs (development version): http://data-commons.github.io/prep-buddy/javadocs
- Download releases: Coming soon!
- Issue tracker: https://github.com/data-commons/prep-buddy/issues
- Mailing list: data-commons-toolchain@googlegroups.com
- Slack channel: Coming soon!
This library is currently built against Spark 1.6.0 and is also compatible with Spark 1.4.1.
The library depends on the following Java libraries:
- Apache Commons Math for general math and statistics functionality.
- Apache Spark for all the distributed computation capabilities.
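Spark provides those distributed capabilities by executing transformations partition by partition across a cluster. As a rough local illustration of that model (plain Python, no Spark dependency; the partitioning scheme and names are hypothetical), a dataset can be split into chunks and a cleaning function mapped over each one independently:

```python
# Local sketch of partition-wise execution, the model Spark uses under the hood.
# No Spark required; the chunking and function names are illustrative only.

def partition(data, n):
    """Split data into n roughly equal chunks."""
    size = (len(data) + n - 1) // n
    return [data[i:i + size] for i in range(0, len(data), size)]

def normalize_chunk(chunk):
    """Trim whitespace and lowercase each record in one partition."""
    return [s.strip().lower() for s in chunk]

records = ["  Alice ", "BOB", " Carol", "dave  "]
parts = partition(records, 2)                             # two "partitions"
cleaned = [r for p in parts for r in normalize_chunk(p)]  # map, then collect
print(cleaned)  # -> ['alice', 'bob', 'carol', 'dave']
```

On a real cluster, each partition would be processed on a different executor and the results gathered back, which is what makes the same per-record logic scale to very large datasets.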
- Latest stable release (0.5): Coming soon!