wiki:waue/Hadoop_DRBL

Version 4 (modified by waue, 15 years ago)

--

Main reference: Jazz's DRBL_Hadoop

  • DRBL server environment:
    debian etch (4.0) server - 64 bit
  • install DRBL
  • install Java 6

Add the non-free component and the backports repository to /etc/apt/sources.list so that sun-java6 can be installed:

deb http://opensource.nchc.org.tw/debian/ etch main contrib non-free
deb-src http://opensource.nchc.org.tw/debian/ etch main contrib non-free
deb http://security.debian.org/ etch/updates main contrib non-free
deb-src http://security.debian.org/ etch/updates main contrib non-free
deb http://www.backports.org/debian etch-backports main non-free
deb http://free.nchc.org.tw/drbl-core drbl stable

Install the archive key and Java 6:

$ wget http://www.backports.org/debian/archive.key
$ sudo apt-key add archive.key
$ sudo apt-get update
$ sudo apt-get install sun-java6-bin sun-java6-jdk sun-java6-jre

Hadoop Install

  • download Hadoop 0.18.3
    $ cd /opt
    $ wget http://ftp.twaren.net/Unix/Web/apache/hadoop/core/hadoop-0.18.3/hadoop-0.18.3.tar.gz
    $ tar zxvf hadoop-0.18.3.tar.gz
    $ ln -sf hadoop-0.18.3 hadoop
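The version-numbered directory plus a `hadoop` symlink lets later upgrades swap in a new release without changing any configured paths. A minimal sketch of that layout, run in a temporary directory so it is safe to try anywhere:

```shell
# Recreate the /opt layout in a scratch directory (illustration only;
# the real commands above operate on /opt itself).
d=$(mktemp -d)
mkdir "$d/hadoop-0.18.3"
ln -sf hadoop-0.18.3 "$d/hadoop"   # relative link, as in the step above
readlink "$d/hadoop"               # prints the target: hadoop-0.18.3
```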
    
  • setup JAVA_HOME environment variable
    $ echo "export JAVA_HOME=/usr/lib/jvm/java-6-sun" >> ~/.bash_profile
    $ source ~/.bash_profile
    
  • edit hadoop-0.18.3/conf/hadoop-env.sh — replace the commented-out JAVA_HOME with the following exports:

      # The java implementation to use.  Required.
      # export JAVA_HOME=/usr/lib/j2sdk1.5-sun
      export JAVA_HOME=/usr/lib/jvm/java-6-sun
      export HADOOP_HOME=/opt/hadoop-0.18.3
      export HADOOP_CONF_DIR=$HADOOP_HOME/conf

      # Extra Java CLASSPATH elements.  Optional.
      # export HADOOP_CLASSPATH=
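Since HADOOP_CONF_DIR is defined in terms of HADOOP_HOME, the order of the exports matters. The fragment can be sourced in isolation to confirm the expansion; this sketch writes it to a temp file rather than touching the real conf directory:

```shell
# Source the three hadoop-env.sh exports from a temp copy and check
# that HADOOP_CONF_DIR expands as intended.
env_file=$(mktemp)
cat > "$env_file" << 'EOF'
export JAVA_HOME=/usr/lib/jvm/java-6-sun
export HADOOP_HOME=/opt/hadoop-0.18.3
export HADOOP_CONF_DIR=$HADOOP_HOME/conf
EOF
. "$env_file"
echo "$HADOOP_CONF_DIR"   # prints /opt/hadoop-0.18.3/conf
```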
  • edit hadoop-0.18.3/conf/hadoop-site.xml — add the two properties inside <configuration>:

      <configuration>
        <property>
          <name>fs.default.name</name>
          <value>hdfs://192.168.1.254:9000/</value>
          <description>
            The name of the default file system. Either the literal string
            "local" or a host:port for NDFS.
          </description>
        </property>
        <property>
          <name>mapred.job.tracker</name>
          <value>hdfs://192.168.1.254:9001</value>
          <description>
            The host and port that the MapReduce job tracker runs at. If
            "local", then jobs are run in-process as a single map and
            reduce task.
          </description>
        </property>
      </configuration>
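Both values must point at the DRBL server's internal NIC (192.168.1.254 in the environment below), since the clients reach the namenode and jobtracker only over the private network. A quick grep-based sanity check, run against a temp copy of the two values so it works anywhere:

```shell
# Pull the host part out of both configured URLs; they should agree
# and match the DRBL server's internal IP.
conf=$(mktemp)
cat > "$conf" << 'EOF'
<value>hdfs://192.168.1.254:9000/</value>
<value>hdfs://192.168.1.254:9001</value>
EOF
grep -o '192\.168\.1\.254' "$conf" | sort -u   # prints a single host
```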

DRBL setup

Environment

******************************************************
          NIC    NIC IP                    Clients
+------------------------------+
|         DRBL SERVER          |
|                              |
|    +-- [eth0] X.X.X.X        +- to WAN
|                              |
|    +-- [eth1] 192.168.1.254  +- to clients group 1 [ 16 clients, their IP
|                              |             from 192.168.1.1 - 192.168.1.16]
+------------------------------+
******************************************************
Total clients: 16
******************************************************

ssh

  • Hadoop uses SSH for communication between nodes, so we have to set up passwordless SSH key authentication.
    $ ssh-keygen
    $ cp .ssh/id_rsa.pub .ssh/authorized_keys
    $ sudo apt-get install dsh
    $ mkdir -p .dsh
    $ for ((i=1;i<=16;i++)); do echo "192.168.1.$i" >> .dsh/machines.list; done
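Note the `for ((…))` loop above is a bashism; a POSIX-sh equivalent using `seq` produces the same list. A sketch, writing to a temp file so the real ~/.dsh/machines.list is untouched:

```shell
# Generate the 16 client addresses (192.168.1.1 .. 192.168.1.16)
# into a scratch file and confirm the range.
list=$(mktemp)
for i in $(seq 1 16); do echo "192.168.1.$i" >> "$list"; done
head -1 "$list"    # first client: 192.168.1.1
tail -1 "$list"    # last client:  192.168.1.16
wc -l < "$list"    # 16 entries, matching "Total clients: 16" above
```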
    

DRBL Server as Hadoop namenode

  • edit /etc/rc.local for DRBL Server as Hadoop namenode — append the following before the final "exit 0":

      echo 3 > /proc/sys/vm/drop_caches
      /opt/hadoop-0.18.3/bin/hadoop namenode -format
      /opt/hadoop-0.18.3/bin/hadoop-daemon.sh start namenode
      /opt/hadoop-0.18.3/bin/hadoop-daemon.sh start jobtracker
      /opt/hadoop-0.18.3/bin/hadoop-daemon.sh start tasktracker
      exit 0
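One caveat with the rc.local lines above: `hadoop namenode -format` runs on every boot, which re-initializes HDFS and discards any existing data. A hedged sketch of a guard (the NAME_DIR path is an assumption based on Hadoop 0.18's default hadoop.tmp.dir layout under /tmp; adjust it to your actual dfs.name.dir):

```shell
# Only format HDFS when the namenode's name directory does not exist yet.
# NAME_DIR is hypothetical here, not from the original wiki page.
NAME_DIR=${NAME_DIR:-/tmp/hadoop-root/dfs/name}
if [ ! -d "$NAME_DIR" ]; then
    echo "formatting HDFS"
    # /opt/hadoop-0.18.3/bin/hadoop namenode -format
else
    echo "HDFS already formatted, skipping"
fi
```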
  • create an init script, hadoop_datanode, to start and stop the datanode on each DRBL client
    $ cat > hadoop_datanode << EOF
    
#! /bin/sh
set -e

# /etc/init.d/hadoop_datanode: start and stop Hadoop DFS datanode for DRBL Client

export PATH="\${PATH:+\$PATH:}/usr/sbin:/sbin"

case "\$1" in
  start)
        echo -n "starting datanode:"
        /opt/hadoop-0.18.3/bin/hadoop-daemon.sh start datanode
        echo "[OK]"
        ;;
  stop)
        echo -n "stopping datanode:"
        /opt/hadoop-0.18.3/bin/hadoop-daemon.sh stop datanode
        echo "[OK]"
        ;;

  *)
        echo "Usage: /etc/init.d/hadoop_datanode {start|stop}"
        exit 1
esac

exit 0
EOF
$ chmod a+x hadoop_datanode
$ sudo /opt/drbl/sbin/drbl-cp-host hadoop_datanode /etc/init.d/
$ sudo /opt/drbl/bin/drbl-doit update-rc.d hadoop_datanode defaults 99