
Hadoop / HBase and DNS


Hadoop and HBase (especially HBase) are very picky about DNS entries. When setting up a Hadoop cluster, one doesn't always have access to a DNS server. So here is a 'poor developer's' guide to getting DNS right.

Following these simple steps can avoid a few thorny issues down the line:
- set the hostname
- verify hostname --> IP address resolution is working (DNS resolution)
- verify IP address --> hostname resolution is working (reverse DNS)
- run a DNS verification tool across the cluster

1) HOSTNAME

I like to set these to FULLY QUALIFIED NAMES.
So 'hadoop1.lab.mycompany.com' is good;
just 'hadoop1' is not.

On CentOS, set this in '/etc/sysconfig/network':
    HOSTNAME=hadoop1.lab.mycompany.com

On Ubuntu, set this in '/etc/hostname':
      hadoop1.lab.mycompany.com

To be safe, just reboot the host for the hostname setting to take effect.
Do this on every node.
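
To confirm the change after a reboot, here is a quick check (a minimal sketch using standard Linux tools; both commands should print the fully qualified name you set):

# should print the fully qualified hostname, e.g. hadoop1.lab.mycompany.com
hostname
# '-f' asks the resolver for the FQDN; it should match the value above
hostname -f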

2) DNS ENTRIES WHEN YOU DON'T HAVE A DNS SERVER

So you don't want to (or can't) mess around with a DNS server?  No worries.  We can use the '/etc/hosts' file to make our own tiny DNS for our hadoop cluster.

File: /etc/hosts
Add the following *AFTER* the entries already present in /etc/hosts.

### hadoop cluster
# format
# ip_addres    fully_qualified_hostname     alias

10.1.1.101     hadoop1.lab.mycompany.com    hadoop1
10.1.1.102     hadoop2.lab.mycompany.com    hadoop2
# and so on....


A few things to note:
- The content of this section has to be distributed to all machines in the cluster. DO NOT copy the whole file onto the target machines; the hadoop section needs to be APPENDED to the existing /etc/hosts (see below for a quick script to do it).
- The first entry is the IP address (usually an internal IP address).
- The second entry is the FULLY QUALIFIED HOST NAME. This makes sure a reverse DNS lookup picks up the correct hostname.
- The third entry is a shorthand alias; it saves me some typing, so I can just type 'ssh hadoop1' rather than 'ssh hadoop1.lab.mycompany.com'.

A common mistake here is swapping the host alias and the fully qualified hostname. The following is NOT correct:
            10.1.1.101    hadoop1     hadoop1.lab.mycompany.com
Aliases should follow the fully qualified host name.
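
A quick way to sanity-check the entries on any one node (a small sketch; 'getent' goes through the system resolver and so honors /etc/hosts, whereas tools like dig query DNS servers directly and skip it):

# forward lookup: should print the IP, the fully qualified name and the alias
getent hosts hadoop1.lab.mycompany.com
# reverse lookup: the fully qualified name should come back, not the alias
getent hosts 10.1.1.101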

The hadoop cluster section of the /etc/hosts file has to be distributed to all cluster nodes.

HOW TO DISTRIBUTE THE DNS ENTRIES ACROSS THE CLUSTER?

There are configuration management systems (CMS) like Chef and Puppet that make distributing config files across a cluster easy. For a very large cluster, using a CMS is probably the way to go.

Here is a quick way to distribute the files:

If you have password-less SSH login set up between the master and the slaves, the scripts below will work.
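
If password-less SSH is not set up yet, here is a minimal sketch (it assumes 'ssh-copy-id' is available and that root logins over SSH are permitted; adjust the user and hostnames to your cluster):

# on the master, generate a key pair once (accept the defaults, empty passphrase)
ssh-keygen -t rsa
# copy the public key to each node so ssh/scp stop prompting for a password
ssh-copy-id root@hadoop1.lab.mycompany.com
ssh-copy-id root@hadoop2.lab.mycompany.com
# ... and so on for every node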

1) backup the existing hosts file (do this only ONCE!)
Run the following script with ROOT privileges:

#!/bin/bash
# read the node list from the Hadoop slaves file, skipping comment lines
hosts=$(grep -v '#' "$HADOOP_HOME/conf/slaves")
for host in $hosts
do
    echo "------------------ $host ------------"
    # keep a pristine copy of /etc/hosts on each node before modifying it
    ssh -o StrictHostKeyChecking=no  root@"$host"  "cp /etc/hosts  /etc/hosts.orig"
done


2) save the hadoop-specific DNS entries into a file

Say 'hadoop_hosts' is our file, with the following content:
### hadoop cluster
10.1.1.101     hadoop1.lab.mycompany.com    hadoop1
10.1.1.102     hadoop2.lab.mycompany.com    hadoop2
# and so on....


3) run the following script; it copies the custom hosts file to each node and rebuilds /etc/hosts from the pristine backup plus the hadoop entries:

#!/bin/bash
# read the node list from the Hadoop slaves file, skipping comment lines
hosts=$(grep -v '#' "$HADOOP_HOME/conf/slaves")
for host in $hosts
do
    echo "------------------ $host ------------"
    # copy the hadoop entries to the node, then rebuild /etc/hosts from the
    # pristine backup plus the hadoop section (appending to the live file
    # instead would duplicate the original entries on every run)
    scp -o StrictHostKeyChecking=no  hadoop_hosts  root@"$host":
    ssh -o StrictHostKeyChecking=no  root@"$host"  "cat /etc/hosts.orig hadoop_hosts > /etc/hosts"
done
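
To confirm the entries landed on every node, here is a quick check along the same lines (a sketch; it just greps each node's /etc/hosts for the '### hadoop cluster' marker added above, so adjust the '-A' line count to the size of your section):

#!/bin/bash
# print the hadoop section of /etc/hosts from every node
hosts=$(grep -v '#' "$HADOOP_HOME/conf/slaves")
for host in $hosts
do
    echo "------------------ $host ------------"
    ssh -o StrictHostKeyChecking=no  root@"$host"  "grep -A 10 'hadoop cluster' /etc/hosts"
done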

CHECKING DNS ACROSS THE CLUSTER

Here is a simple Java utility to verify that DNS is working correctly on ALL cluster machines.

Here are some features:

- It is written in Java, so it resolves hostnames just like Hadoop / HBase would (or at least close enough).
- It is pure Java and doesn't use any third-party libraries, so it is very easy to compile and run. If you are running Hadoop, you already have a JDK installed anyway.
- It does both IP lookup and reverse DNS lookup.
- It will also check whether the machine's own hostname resolves correctly.
- It can run on a single machine.
- It can run on machines across the cluster (as long as password-less SSH is enabled).
To run this, say from the hadoop master:
- get the code (using git: git clone git@github.com:sujee/hadoop-dns-checker.git)
- compile: ./compile.sh (it should create a jar file 'a.jar')
- create a hosts file ('my_hosts') containing all machines in your hadoop cluster:
hadoop1.domain.com
hadoop2.domain.com
hadoop3.domain.com
- first run it in single-machine mode:
./run.sh my_hosts
Here is a sample output:

==== Running on : c2107.pcs.hds.com/172.17.34.99 =====
# self check...
-- host : c2107.pcs.hds.com
   host lookup : success (172.17.34.99)
   reverse lookup : success (c2107.pcs.hds.com)
   is reachable : yes
# end self check

-- host : c2107.pcs.hds.com
   host lookup : success (172.17.34.99)
   reverse lookup : success (c2107.pcs.hds.com)
   is reachable : yes

-- host : c2108.pcs.hds.com
   host lookup : success (172.17.34.100)
   reverse lookup : success (c2108.pcs.hds.com)
   is reachable : yes

Great. Now we can run this across the cluster. It will log in to each machine specified in the hosts file and run the same checks:
./run-on-cluster.sh my_hosts
If any error is encountered, it prints '*** FAIL ***', so it is easy to spot problems.
