- DRBL server environment: Debian etch (4.0), 64-bit server
- Install DRBL
- Install Java 6
Add the non-free component and the backports repository to /etc/apt/sources.list; both are needed to install sun-java6:
deb http://opensource.nchc.org.tw/debian/ etch main contrib non-free
deb-src http://opensource.nchc.org.tw/debian/ etch main contrib non-free
deb http://security.debian.org/ etch/updates main contrib non-free
deb-src http://security.debian.org/ etch/updates main contrib non-free
deb http://www.backports.org/debian etch-backports main non-free
deb http://free.nchc.org.tw/drbl-core drbl stable
Install the archive key and Java 6:
$ wget http://www.backports.org/debian/archive.key
$ sudo apt-key add archive.key
$ apt-get update
$ apt-get install sun-java6-bin sun-java6-jdk sun-java6-jre
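Once the packages are installed, a quick sanity check (not part of the original steps; update-alternatives is only needed if another JVM is already installed and java does not resolve to Sun Java 6):

$ java -version
$ sudo update-alternatives --config java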
Hadoop Install
- download Hadoop 0.18.3
$ cd /opt
$ wget http://ftp.twaren.net/Unix/Web/apache/hadoop/core/hadoop-0.18.3/hadoop-0.18.3.tar.gz
$ tar zxvf hadoop-0.18.3.tar.gz
$ ln -sf hadoop-0.18.3 hadoop
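To confirm the archive unpacked and the symlink resolves, the bundled version command can be run (a quick check added here, not in the original steps):

$ /opt/hadoop/bin/hadoop version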
- set the JAVA_HOME environment variable
$ echo "export JAVA_HOME=/usr/lib/jvm/java-6-sun" >> ~/.bash_profile $ source ~/.bash_profile
- edit hadoop-0.18.3/conf/hadoop-env.sh
hadoop-0.18.3/conf/hadoop-env.sh (changed and added lines):

# The java implementation to use. Required.
export JAVA_HOME=/usr/lib/jvm/java-6-sun
export HADOOP_HOME=/opt/hadoop-0.18.3
export HADOOP_CONF_DIR=$HADOOP_HOME/conf
- edit hadoop-0.18.3/conf/hadoop-site.xml
hadoop-0.18.3/conf/hadoop-site.xml (properties added inside <configuration>):

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.1.254:9000/</value>
    <description>
      The name of the default file system. Either the literal string
      "local" or a host:port for NDFS.
    </description>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>hdfs://192.168.1.254:9001</value>
    <description>
      The host and port that the MapReduce job tracker runs at. If
      "local", then jobs are run in-process as a single map and
      reduce task.
    </description>
  </property>
</configuration>
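Before going further, the edited file can be checked for XML well-formedness (an optional step; assumes the libxml2-utils package, which provides xmllint):

$ sudo apt-get install libxml2-utils
$ xmllint --noout /opt/hadoop-0.18.3/conf/hadoop-site.xml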
DRBL setup
Environment
******************************************************
         NIC    NIC IP                     Clients
+------------------------------+
|          DRBL SERVER         |
|                              |
|  +-- [eth2] 140.110.xxx.130  +- to WAN
|                              |
|  +-- [eth1] 192.168.1.254    +- to clients group 1 [16 clients, their IP
|                              |  from 192.168.1.1 - 192.168.1.16]
+------------------------------+
******************************************************
Total clients: 16
******************************************************
ssh
- edit /etc/ssh/ssh_config and add:
StrictHostKeyChecking no
- run
$ ssh-keygen -t rsa -b 1024 -N "" -f ~/.ssh/id_rsa
$ cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
- write an automation script, auto.shell, and run it
#!/bin/bash
for ((i=1;i<=16;i++)); do
  scp -r ~/.ssh/ "192.168.1.$i":~/
  scp /etc/ssh/ssh_config "192.168.1.$i":/etc/ssh/ssh_config
  ssh "192.168.1.$i" /etc/init.d/ssh restart
done
- if everything is correct, you can now log in to every client without a password
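A one-line loop verifies key-based login on every client; each command should print the client's hostname without asking for a password (an illustrative check):

$ for ((i=1;i<=16;i++)); do ssh "192.168.1.$i" hostname; done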
dsh
$ sudo apt-get install dsh
$ mkdir -p .dsh
$ for ((i=1;i<=16;i++)); do echo "192.168.1.$i" >> .dsh/machines.list; done
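With the machine list in place, a command can be broadcast to every client, e.g. (an illustrative example: -a targets all machines in machines.list, -M prefixes each output line with the machine name):

$ dsh -a -M -- uptime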
DRBL Server as Hadoop namenode
- edit /etc/rc.local for DRBL Server as Hadoop namenode
/etc/rc.local (lines added before the final "exit 0"):

echo 3 > /proc/sys/vm/drop_caches
/opt/hadoop-0.18.3/bin/hadoop namenode -format
/opt/hadoop-0.18.3/bin/hadoop-daemon.sh start namenode
/opt/hadoop-0.18.3/bin/hadoop-daemon.sh start jobtracker
/opt/hadoop-0.18.3/bin/hadoop-daemon.sh start tasktracker
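After the server reboots, the jps tool from the Sun JDK lists the running Hadoop daemons; NameNode, JobTracker and TaskTracker should all appear (a quick check, not part of the original page):

$ jps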
- create a hadoop_datanode init script so that each DRBL client runs a datanode
$ cat > hadoop_datanode << EOF
#! /bin/sh
set -e
# /etc/init.d/hadoop_datanode: start and stop the Hadoop DFS datanode on a DRBL client
export PATH="\${PATH:+\$PATH:}/usr/sbin:/sbin"

case "\$1" in
  start)
    echo -n "starting datanode:"
    /opt/hadoop-0.18.3/bin/hadoop-daemon.sh start datanode
    echo "[OK]"
    ;;
  stop)
    echo -n "stopping datanode:"
    /opt/hadoop-0.18.3/bin/hadoop-daemon.sh stop datanode
    echo "[OK]"
    ;;
  *)
    echo "Usage: /etc/init.d/hadoop_datanode {start|stop}"
    exit 1
esac
exit 0
EOF
$ chmod a+x hadoop_datanode
$ sudo /opt/drbl/sbin/drbl-cp-host hadoop_datanode /etc/init.d/
$ sudo /opt/drbl/bin/drbl-doit update-rc.d hadoop_datanode defaults 99
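Since DRBL keeps each client's root filesystem under /tftpboot/nodes/<client IP>/ on the server, the copy can be spot-checked there (this path assumes the default DRBL layout):

$ ls /tftpboot/nodes/192.168.1.1/etc/init.d/hadoop_datanode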
- shut down the DRBL clients
- reboot DRBL server
- use "Wake on LAN" for DRBL clients
- browse http://192.168.1.254:50070 for DFS status
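The same status is available from the command line; once all clients finish booting, 16 datanodes should be reported (a quick check using the standard dfsadmin tool):

$ /opt/hadoop/bin/hadoop dfsadmin -report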
Troubleshooting
- DRBL installation does not go smoothly
drblsrv -i fails with the following error message:
Kernel 2.6 was found, so default to use initramfs.
The requested kernel "" 2.6.18-6-amd64 kernel files are NOT found in
/tftpboot/node_root/lib/modules/s and /tftpboot/node_root/boot in the server!
The necessary modules in the network initrd can NOT be created!
Client will NOT remote boot correctly!
Program terminated!
Done!
P.S. The cause: the apt mirror had not finished syncing its data, so the new kernel could not be installed, which led to this error.
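A possible recovery, based on the diagnosis above (a sketch, not from the original page): switch /etc/apt/sources.list to a mirror that has finished syncing, then reinstall the kernel and rerun the DRBL server setup:

$ sudo apt-get update
$ sudo apt-get install linux-image-2.6.18-6-amd64
$ sudo /opt/drbl/sbin/drblsrv -i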