Big-data-hadoop

The project has three parts:

-> We set up our own cluster in our lab with 7-8 computers, where one machine was the NameNode and the rest were DataNodes.

  1. We first implemented HDFS and uploaded a file of around 4 GB to the cluster.
  2. We performed MapReduce, running a word-count program on that file.
  3. We implemented Hive, creating a database and performing certain operations on it.
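The word-count job in step 2 follows the classic map → shuffle → reduce pattern. A minimal Python sketch of that logic (an illustration only, not the actual Hadoop job, which would typically be written in Java and run with `hadoop jar`):

```python
from collections import defaultdict

def mapper(line):
    # Map phase: emit a (word, 1) pair for every word in the line.
    for word in line.split():
        yield word.lower(), 1

def shuffle(pairs):
    # Shuffle phase: group all values by key, as Hadoop does
    # between the map and reduce stages.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reducer(word, counts):
    # Reduce phase: sum the counts emitted for one word.
    return word, sum(counts)

def word_count(lines):
    pairs = [kv for line in lines for kv in mapper(line)]
    return dict(reducer(w, c) for w, c in shuffle(pairs).items())
```

For example, `word_count(["hello world", "hello hadoop"])` returns `{"hello": 2, "world": 1, "hadoop": 1}`. In the real cluster job, the mappers and reducers run in parallel on the DataNodes over HDFS blocks of the 4 GB input file.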

-> We created our cluster using AWS instances. At run time it asks the user for the number of instances to launch, then makes one of those instances the NameNode and the others DataNodes.
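The run-time flow above — ask for an instance count, launch, then pick one NameNode and make the rest DataNodes — can be sketched as follows. This shows only the role-assignment logic; the actual instance launching would go through the AWS API (e.g. boto3's `run_instances`), which is omitted here, and the instance IDs are hypothetical:

```python
def assign_roles(instance_ids):
    # Designate the first launched instance as the NameNode
    # and every remaining instance as a DataNode.
    if not instance_ids:
        raise ValueError("need at least one instance")
    return {"namenode": instance_ids[0], "datanodes": instance_ids[1:]}

# Example with placeholder AWS instance IDs:
roles = assign_roles(["i-0aaa", "i-0bbb", "i-0ccc"])
# roles["namenode"] is "i-0aaa"; roles["datanodes"] is ["i-0bbb", "i-0ccc"]
```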

-> We created our cluster using Docker.

Screenshots of my project

First login

screenshot from 2018-07-14 18-43-37

After a successful login, you have the three options described above

screenshot from 2018-07-14 18-43-41

All three options offer the three features described above, i.e. MapReduce, HDFS, and Hive

screenshot from 2018-07-14 18-43-46

MapReduce features implemented

screenshot from 2018-07-14 18-45-51

HDFS services (in AWS)

screenshot from 2018-07-14 18-43-55

Other Screenshots

screenshot from 2018-07-14 18-45-59

screenshot from 2018-07-14 18-46-06

screenshot from 2018-07-14 18-44-18
