
pyspark-s3-parquet-example

This repository demonstrates some of the mechanics necessary to load a sample Parquet formatted file from an AWS S3 bucket. A Python job is then submitted to a local Apache Spark instance, which uses a SQLContext to load the Parquet file contents into a DataFrame and register it as a temporary table. SQL queries can then be run against the in-memory temporary table. SparkSQL has a lot to explore, and this repo will serve as a cool place to check things out.
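A minimal sketch of those mechanics is shown below. The application name, file path, and table name are hypothetical placeholders rather than the ones used in the repo's scripts, and the snippet assumes Spark 1.6.1 with PySpark available.

```python
from pyspark import SparkContext
from pyspark.sql import SQLContext

# Hypothetical app name and file path; adjust to wherever the sample file lives.
sc = SparkContext("local", "parquet-sql-sketch")
sqlContext = SQLContext(sc)

# Load the Parquet file into a DataFrame and expose it as a temporary table.
nations_df = sqlContext.read.parquet("/path/to/nations.parquet")
nations_df.printSchema()
nations_df.registerTempTable("nations")

# Plain SQL against the in-memory temporary table.
sqlContext.sql("SELECT * FROM nations LIMIT 10").show()
```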

The sample Parquet file was pulled from the following repository. Thanks a bunch!

Running the Examples

AWS EMR using a Zeppelin Notebook

The following script can be copied and pasted inside a Zeppelin notebook running in AWS EMR.
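Inside Zeppelin, the Spark interpreter already provides `sc` and `sqlContext`, so the notebook paragraph only needs to read the file from S3 and register the table. A rough sketch, assuming a hypothetical bucket name and key:

```python
%pyspark
# sc and sqlContext are provided by Zeppelin's Spark interpreter on EMR.
# Replace the bucket and key with wherever you copied the Parquet file.
nations_df = sqlContext.read.parquet("s3://your-bucket/nations.parquet")
nations_df.registerTempTable("nations")
sqlContext.sql("SELECT * FROM nations LIMIT 10").show()
```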

Prerequisites

  1. AWS account created
  2. AWS access key and secret access key stored in the ~/.aws/credentials file (a sample layout is sketched after this list)
  3. EMR cluster configured with Spark 1.6.1 and Apache Zeppelin
  4. The Parquet file copied to an S3 bucket in your AWS account.
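The credentials file follows the standard AWS layout; the values below are placeholders, not real keys:

```
[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>
```

Copying the sample file up to your bucket can be done with the AWS CLI, e.g. `aws s3 cp nations.parquet s3://your-bucket/` (the file name and bucket here are placeholders).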

Steps

  1. Configure the Spark Interpreter in Zeppelin.
  2. Copy the script into a new Zeppelin Notebook.
  3. Run the script with the "arrow button".
  4. Profit and play around with PySpark in the safety of the Zeppelin notebook.

Sample Output

The following output is from one of my sample runs ...

Run against a local instance of Spark

The following script has been configured to run against a local instance of Spark. The Parquet file is also served from the local filesystem rather than from S3. Other than that, the script does the same thing as the AWS script and the output will be the same.
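Concretely, the only parts that change relative to the EMR flow are the Spark master and the Parquet location. A minimal local sketch, with a hypothetical app name and file path:

```python
from pyspark import SparkContext
from pyspark.sql import SQLContext

# Local master instead of the EMR cluster.
sc = SparkContext("local", "nations-parquet-sql-local")
sqlContext = SQLContext(sc)

# Read the Parquet file from the local filesystem instead of an s3:// URI.
nations_df = sqlContext.read.parquet("file:///path/to/nations.parquet")
nations_df.registerTempTable("nations")
sqlContext.sql("SELECT * FROM nations").show()
```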

Prerequisites

  1. Apache Spark 1.6.1 installed locally.
  2. Python 2.7.11

Steps

  1. From the repository root, cd into pyspark-scripts
  2. Run python nations-parquet-sql-local.py (the exact commands are sketched after this list)
  3. Once again, profit and play around.
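Putting steps 1 and 2 together from a shell (assuming the python on your PATH is the 2.7 interpreter Spark is configured to use):

```
cd pyspark-scripts
python nations-parquet-sql-local.py
```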
