MicroDrill

A simple Apache Drill alternative built on PySpark, inspired by PyDAL.

Setup

Run pip install microdrill in a terminal.

Dependencies

PySpark (tested with Spark 1.6)

Usage

Defining a Parquet Query Table

ParquetTable(table_name, schema_index_file=file_name)

  • table_name: Reference name for the table.
  • file_name: File name to search for the table schema.

Using Parquet DAL

ParquetDAL(file_uri, sc)

Connecting to tables

parquet_conn = ParquetDAL(file_uri, sc)
parquet_table = ParquetTable(table_name, schema_index_file=file_name)
parquet_conn.set_table(parquet_table)

Queries

Returning Table Object

parquet_conn(table_name)

Returning Field Object

parquet_conn(table_name)(field_name)
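The two call patterns above can be sketched with a toy implementation (plain Python, not microdrill's actual classes; the names `Table` and `DAL` are illustrative): the connection object is callable with a table name, and the returned table is callable with a field name.

```python
class Table:
    """Toy table: calling it with a field name returns a field handle."""
    def __init__(self, name):
        self.name = name
        self.fields = {}

    def __call__(self, field_name):
        # Reuse the same handle for repeated lookups of one field.
        return self.fields.setdefault(field_name, f"{self.name}.{field_name}")


class DAL:
    """Toy connection: calling it with a table name returns the table."""
    def __init__(self):
        self.tables = {}

    def set_table(self, table):
        self.tables[table.name] = table

    def __call__(self, table_name):
        return self.tables[table_name]


conn = DAL()
conn.set_table(Table("users"))
print(conn("users")("age"))  # users.age
```

Implementing `__call__` on both objects is what makes the compact `conn(table_name)(field_name)` chain possible.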

Basic Query

parquet_conn.select().where(field_object==value) to select all fields
parquet_conn.select(field_object, [field_object2, ...]).where(field_object==value)
parquet_conn.select(field_object1, field_object2).where((field_object1==value1) & ~(field_object2==value2))
parquet_conn.select(field_object1, field_object2).where((field_object1!=value1) | field_object1.regexp(reg_exp))
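Parentheses around each comparison matter because Python's `&`, `|`, and `~` bind more tightly than `==` and `!=`. A minimal sketch (toy classes, not microdrill's internals) of how such overloaded operators can build a condition:

```python
class Field:
    """Toy field whose comparisons produce condition expressions."""
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        return Expr(f"({self.name} = {other!r})")


class Expr:
    """Toy condition supporting & (AND) and ~ (NOT)."""
    def __init__(self, sql):
        self.sql = sql

    def __and__(self, other):
        return Expr(f"({self.sql} AND {other.sql})")

    def __invert__(self):
        return Expr(f"(NOT {self.sql})")


age, name = Field("age"), Field("name")
# Wrapping each comparison first ensures `&` combines two conditions,
# not a condition and a raw value.
cond = (age == 30) & ~(name == "bob")
print(cond.sql)  # ((age = 30) AND (NOT (name = 'bob')))
```

Without the parentheses, `age == 30 & ~(name == "bob")` would ask Python to evaluate `30 & ...` first, which is not a field comparison at all.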

Grouping By

parquet_conn.groupby(field_object1, [field_object2, ...])

Ordering By

parquet_conn.orderby(field_object1, [field_object2, ...])
parquet_conn.orderby(~field_object) for descending order

Limiting

parquet_conn.limit(number)

Executing

df = parquet_conn.execute()

execute() returns a PySpark DataFrame.
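The chained calls compose because each query method returns an object the next call can extend. A toy sketch of that fluent pattern (illustrative only; microdrill actually builds and runs a Spark SQL query):

```python
class Query:
    """Toy fluent query builder: each method returns self for chaining."""
    def __init__(self):
        self.parts = []

    def select(self, *fields):
        self.parts.append("SELECT " + (", ".join(fields) or "*"))
        return self

    def where(self, cond):
        self.parts.append(f"WHERE {cond}")
        return self

    def limit(self, n):
        self.parts.append(f"LIMIT {n}")
        return self

    def execute(self):
        # The real library would submit the assembled query to Spark
        # and return a DataFrame; here we just return the query text.
        return " ".join(self.parts)


q = Query().select("age").where("age = 30").limit(5).execute()
print(q)  # SELECT age WHERE age = 30 LIMIT 5
```

Returning `self` from each builder method is the standard way to let `select(...).where(...).limit(...).execute()` read as one expression.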

Returning Field Names From Schema

parquet_conn(table_name).schema()

Developers

Install the latest JDK and run make setup in a terminal.
