Hadoop Explained: HDFS

In our continuing coverage of Hadoop, our subject today is the Hadoop Distributed File System (HDFS), a Java-based file system that provides scalable, reliable data storage designed to span large clusters of commodity servers. HDFS, MapReduce, and YARN form the core of Apache Hadoop. In production, HDFS has demonstrated scalability of up to 200 PB of storage in a single cluster of 4,500 servers, supporting close to a billion files and blocks. That is a scale that traditional relational databases such as Oracle and SQL Server simply can't match.


HDFS Details

HDFS is a scalable, fault-tolerant, distributed storage system that works closely with MapReduce. It is designed to "just work" under a variety of physical and systemic circumstances. Because storage and computation are distributed across many servers, the combined storage resource can grow with demand while remaining economical at every size.

These features keep Hadoop clusters highly functional and highly available:

  • Rack awareness takes a node's physical location into account when allocating storage and scheduling tasks.
  • Minimal data motion. MapReduce moves compute processes to the data on HDFS, not the other way around, so processing tasks can run on the physical node where the data resides. This significantly reduces network I/O, keeps most I/O on the local disk or within the same rack, and provides very high aggregate read/write bandwidth.
  • Utilities diagnose the health of the file system and can rebalance data across nodes (a monitoring sketch follows this list).
  • Rollback lets system operators bring back the previous version of HDFS after an upgrade, in case of human or system error.
  • A standby NameNode provides redundancy and supports high availability.
  • Highly operable. Hadoop handles many kinds of failures and routine events that might otherwise require operator intervention, which allows a single operator to maintain a cluster of thousands of nodes.
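
For the health-monitoring point above, operators typically reach for tools like hdfs fsck and hdfs balancer, but you can also check basic cluster capacity from the Java client. Here is a minimal sketch; the cluster address hdfs://namenode:8020 is a placeholder you would replace with your own.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsStatus;

public class ClusterHealth {
    public static void main(String[] args) throws Exception {
        // Point the client at the cluster; the address is a placeholder.
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020");

        FileSystem fs = FileSystem.get(conf);
        FsStatus status = fs.getStatus();

        // Raw capacity, used, and remaining bytes across all DataNodes.
        System.out.printf("capacity:  %d bytes%n", status.getCapacity());
        System.out.printf("used:      %d bytes%n", status.getUsed());
        System.out.printf("remaining: %d bytes%n", status.getRemaining());
        fs.close();
    }
}
```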

How It Works

An HDFS cluster consists of a NameNode, which manages the cluster metadata, and DataNodes, which store the data. Files and directories are represented on the NameNode by inodes. Inodes record attributes such as permissions, modification and access times, and namespace and disk-space quotas.
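
Those inode attributes are visible to clients through the Hadoop Java API as a FileStatus object. Here is a minimal sketch that reads them back; the cluster address and the path /data/events.log are hypothetical placeholders.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class InodeMetadata {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020"); // placeholder address

        FileSystem fs = FileSystem.get(conf);
        // /data/events.log is a hypothetical file path.
        FileStatus st = fs.getFileStatus(new Path("/data/events.log"));

        System.out.println("owner:       " + st.getOwner() + ":" + st.getGroup());
        System.out.println("permissions: " + st.getPermission());
        System.out.println("modified:    " + st.getModificationTime()); // epoch millis
        System.out.println("accessed:    " + st.getAccessTime());       // epoch millis
        fs.close();
    }
}
```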

File content is split into large blocks (typically 128 megabytes), and each block is independently replicated to multiple DataNodes, which store the replicas as ordinary files on their local file systems. The NameNode actively monitors the number of replicas of each block: when a replica is lost to a DataNode or disk failure, the NameNode creates another replica of the block. The NameNode maintains the namespace tree and the mapping of blocks to DataNodes, holding the entire namespace image in RAM.
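
Clients can inspect this block layout themselves. The following sketch asks the NameNode where each block of a (again hypothetical) file lives, then requests a different replica count for it; the address and path are placeholders as before.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockReport {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020"); // placeholder address

        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/data/events.log"); // hypothetical file

        // Ask the NameNode which DataNodes hold each block of the file.
        FileStatus st = fs.getFileStatus(file);
        for (BlockLocation block : fs.getFileBlockLocations(st, 0, st.getLen())) {
            System.out.printf("offset %d, length %d, hosts %s%n",
                    block.getOffset(), block.getLength(),
                    String.join(",", block.getHosts()));
        }

        // Request a different replica count for this file; the NameNode
        // schedules the extra copies (or deletions) in the background.
        fs.setReplication(file, (short) 3);
        fs.close();
    }
}
```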


The NameNode does not directly send requests to DataNodes. Instead, it piggybacks instructions on its replies to the heartbeats those DataNodes send. The instructions include commands to replicate blocks to other nodes, remove local block replicas, re-register and send an immediate block report, or shut down the node.
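
To make that pattern concrete, here is a toy sketch of the reply-to-heartbeat design. Every type and method in it is invented for illustration; this is not Hadoop's internal DataNode protocol API.

```java
// A toy sketch of the reply-to-heartbeat pattern described above.
// These types are illustrative only; they are NOT Hadoop's internal API.
enum Command { REPLICATE_BLOCKS, REMOVE_REPLICAS, REREGISTER, SHUTDOWN }

class Heartbeat {
    final String dataNodeId;
    Heartbeat(String dataNodeId) { this.dataNodeId = dataNodeId; }
}

class NameNodeSketch {
    // The NameNode never calls out to DataNodes on its own; it only
    // piggybacks work onto its answer to each periodic heartbeat.
    Command[] handleHeartbeat(Heartbeat hb) {
        if (isStale(hb.dataNodeId)) {
            return new Command[] { Command.REREGISTER };
        }
        if (hasUnderReplicatedBlocks(hb.dataNodeId)) {
            return new Command[] { Command.REPLICATE_BLOCKS };
        }
        return new Command[0]; // nothing to do this interval
    }

    private boolean isStale(String id) { return false; }                  // stub
    private boolean hasUnderReplicatedBlocks(String id) { return false; } // stub
}
```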

Scalability

If you need more processing power or more storage, you just add more systems to the cluster and more hard drives to those systems. The trade-off is that a bigger cluster also brings added complexity and management overhead.
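
One small sanity check after growing a cluster is to confirm that the new DataNodes have registered with the NameNode. The HDFS-specific client class DistributedFileSystem can report per-node usage, as in this sketch (the cluster address is a placeholder):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class ListDataNodes {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020"); // placeholder address

        FileSystem fs = FileSystem.get(conf);
        if (fs instanceof DistributedFileSystem) {
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            // One entry per DataNode, so a newly added node should
            // show up here once it registers with the NameNode.
            for (DatanodeInfo node : dfs.getDataNodeStats()) {
                System.out.printf("%s  used %d of %d bytes%n",
                        node.getHostName(), node.getDfsUsed(), node.getCapacity());
            }
        }
        fs.close();
    }
}
```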

You can learn more about HDFS from the Apache Hadoop documentation.
