Apache Hadoop: An Introduction

  • Posted on: October 4, 2013
  • Modified: November 5, 2020

  • Author: SPEC INDIA
  • Category: Big Data & Database

Apache Hadoop is an open source framework that enables users to write and run distributed applications that process large amounts of data. Distributed computing is a wide and varied field, and it is closely tied to Big Data: very large collections of data sets. As we all know, the amount of data in our world has been exploding, and in the future the ability to analyze large data sets will become a key basis of competition, productivity, growth, innovation, market prediction, and consumer surplus. Apache Hadoop is well suited to processing such large data sets for analysis.

Characteristics That Attract Enterprises To Deploy Hadoop

  • Accessible

Hadoop runs on large clusters of commodity machines or on cloud computing services such as Amazon’s Elastic Compute Cloud (EC2).

  • Robust

Because Hadoop is intended to run on commodity hardware, it is architected with the assumption that hardware malfunctions will be frequent. It can gracefully handle most such failures like a self-healing system; the HDFS replication sketch after this list shows the mechanism behind this.

  • Scalable

Hadoop scales linearly to handle larger data sets: users can add more nodes to the cluster as the data grows, without any modification to their applications, which is better than vertical scaling.

  • Simple

Hadoop allows users to quickly write efficient parallel code using the MapReduce framework; a minimal example follows this list.
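Much of Hadoop's robustness comes from HDFS block replication: every block of a file is stored on several DataNodes, and the NameNode re-replicates blocks whenever a node fails. Here is a minimal sketch using the HDFS Java API; the file path and replication factor are illustrative, and the file is assumed to already exist in HDFS.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReplicationDemo {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Illustrative path; assumes the file already exists in HDFS.
        Path file = new Path("/data/example.txt");

        // Ask HDFS to keep 3 copies of every block of this file. If a
        // DataNode dies, the NameNode schedules re-replication elsewhere
        // to restore the target count: the "self-healing" behavior.
        fs.setReplication(file, (short) 3);
      }
    }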
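To give a flavor of that simplicity, here is the classic word-count job written against the org.apache.hadoop.mapreduce API. This is a generic sketch, not code from a particular deployment; input and output HDFS paths are taken from the command line.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Mapper: emits (word, 1) for every token in its input split.
      public static class TokenizerMapper
          extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);
          }
        }
      }

      // Reducer: sums the counts emitted for each word.
      public static class IntSumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values,
            Context context) throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // local pre-aggregation
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

Packaged into a jar, it would typically be launched with something like: hadoop jar wordcount.jar WordCount /input /output.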

Hadoop Ecosystem

[Figure: the Hadoop ecosystem of related projects and tools]

Challenges To Keep In Mind When Deploying Hadoop

  • Distributed System:

Deploying, configuring, managing, monitoring, and debugging a single-threaded, single-process system is far easier than doing the same for a multi-threaded, multi-process, multi-host (distributed) system. Hadoop has a ton of moving parts, and while it gets better with each release, it is still a complex system that requires specialized knowledge.

  • Huge Ecosystem:

Dozens of open source and commercial products and projects have grown up around Hadoop and interoperate with it in some way. More than a single (distributed) system, Hadoop is an entire world unto itself. This can be extremely overwhelming at first.

  • Hadoop Is Evolving:

Apache Hadoop is an open source project, so it is evolving and changing at an extremely rapid pace. You need to stay aware of each change if you have a Hadoop cluster running in production.

  • Security:

Hadoop implements the Kerberos authentication protocol to authenticate client requests, which is complex to configure in a production cluster. Moreover, the NameNode and JobTracker web interfaces have no authentication mechanism for end users. The sketch below shows what a Kerberos client login looks like.
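For illustration, a Kerberos-secured client login from Java largely comes down to pointing Hadoop at a principal and its keytab. A minimal sketch follows; the principal name and keytab path are placeholders, and a real cluster needs a matching KDC setup.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.UserGroupInformation;

    public class SecureClient {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Switch Hadoop's security mode from the default "simple" to Kerberos.
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);

        // Authenticate with a service principal and its keytab file
        // (placeholder values for illustration only).
        UserGroupInformation.loginUserFromKeytab(
            "hdfs-client@EXAMPLE.COM", "/etc/security/keytabs/client.keytab");
      }
    }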

At SPEC INDIA, we have a dedicated team of experts for Big Data solutions. We have set up our own Hadoop cluster in a local environment, which helps us deliver robust solutions as well as serve clients within our own environment.
