}}}
[[PageOutline]]
== Preface ==
* You have two computers at hand. Assume that the computer you are working on is '''"node1"''' and the other computer is '''"node2"'''. The environment used in the following steps is:
|| || '''HDFS roles''' || '''MapReduce roles''' ||
|| '''"node1"''' || namenode + datanode || jobtracker + tasktracker ||
|| '''"node2"''' || datanode || tasktracker ||
* This lab sets up Hadoop on a cluster. If your computer still has the single-node environment from Lab1, please run '''Step 0''' first to remove the old settings.
* 確認您"主機一"的 hostname 與 "主機二" 的 hostname,並將下面指令有 主機一與主機二 的地方作正確的取代[[BR]]Please replace "node1" and "node2" with the hostname of your computers.
* As good practice, set a super user (root) password on the machines you are about to use, since Ubuntu does not configure one by default.
{{{
~$ sudo passwd
}}}
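* Optional quick check (a minimal sketch, assuming a standard Ubuntu shell): print each machine's hostname and confirm name resolution, so you know exactly what to substitute for "node1" and "node2" in the commands that follow.
{{{
~$ hostname            # the name to use in place of "node1" (run the same command on the other machine for "node2")
~$ cat /etc/hosts      # make sure both hostnames resolve to the correct IP addresses
}}}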
== Step 0: Erase the Hadoop installation from Lab1 ==
* 在 "主機一" (有操作過 實作一 的電腦)上操作[[BR]]On "node1", please remove the folder of /opt/hadoop and /var/hadoop created in Lab1
{{{
node1:~$ cd ~
node1:~$ /opt/hadoop/bin/stop-all.sh
node1:~$ rm -rf /var/hadoop
node1:~$ sudo rm -rf /opt/hadoop
node1:~$ rm -rf ~/.ssh
}}}
== Step 1: Set up passwordless SSH login (key exchange) ==
* Note that in our lab environment /etc/ssh/ssh_config already sets "StrictHostKeyChecking" to "no". You can verify this with the command below; if your setting is different, editing that file first will make the following steps smoother (a per-user alternative is sketched after the check).
{{{
node1:~$ cat /etc/ssh/ssh_config |grep StrictHostKeyChecking
StrictHostKeyChecking no
}}}
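* If you cannot edit the system-wide /etc/ssh/ssh_config (it requires root), a per-user ~/.ssh/config can set the same option. This is an optional, minimal sketch, assuming OpenSSH:
{{{
node1:~$ mkdir -p ~/.ssh
node1:~$ printf 'Host *\n    StrictHostKeyChecking no\n' >> ~/.ssh/config
node1:~$ chmod 600 ~/.ssh/config
}}}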
* 在"主機一" 上操作,並將 key 產生並複製到其他節點上[[br]]On "node1", generate SSH key and copy these keys to "node2".
{{{
node1:~$ ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ""
node1:~$ cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
node1:~$ scp -r ~/.ssh node2:~/
}}}
* Test that you can now log in without a password by trying the following commands.
{{{
node1:~$ ssh node2
node2:~$ ssh node1
node1:~$ exit
node2:~$ exit
node1:~$
}}}
== Step 2: Install Java ==
* Install the Java runtime on both computers (a quick version check is sketched after the commands below).
{{{
node1:~$ sudo apt-get purge java-gcj-compat
node1:~$ sudo apt-get install sun-java6-bin sun-java6-jdk sun-java6-jre
node1:~$ ssh node2
node2:~$ export LC_ALL=C
node2:~$ sudo apt-get update
node2:~$ sudo apt-get purge java-gcj-compat
node2:~$ sudo apt-get install sun-java6-bin sun-java6-jdk sun-java6-jre
node2:~$ exit
node1:~$
}}}
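* Optionally, confirm that Java installed correctly on both machines (a simple check, assuming the sun-java6 packages installed without errors):
{{{
node1:~$ java -version
node1:~$ ssh node2 java -version
}}}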
== Step 3: Download and install Hadoop on "node1" ==
* 先在"主機一" 上安裝,其他node的安裝等設定好之後在一起作[[BR]]We'll first install at '''node1''', then copy the entire folder including settings to '''node2''' later.
{{{
node1:~$ cd /opt
node1:/opt$ sudo wget http://ftp.twaren.net/Unix/Web/apache/hadoop/core/hadoop-0.20.2/hadoop-0.20.2.tar.gz
node1:/opt$ sudo tar zxvf hadoop-0.20.2.tar.gz
node1:/opt$ sudo mv hadoop-0.20.2/ hadoop
node1:/opt$ sudo chown -R hadooper:hadooper hadoop
node1:/opt$ sudo mkdir /var/hadoop
node1:/opt$ sudo chown -R hadooper:hadooper /var/hadoop
}}}
== Step 4: Configure hadoop-env.sh ==
* "主機一" 上用 gedit 編輯 conf/hadoop-env.sh [[BR]] Use gedit to configure conf/hadoop-env.sh
{{{
/opt$ cd hadoop/
/opt/hadoop$ gedit conf/hadoop-env.sh
}}}
Paste the following lines into conf/hadoop-env.sh:
{{{
#!sh
export JAVA_HOME=/usr/lib/jvm/java-6-sun
export HADOOP_HOME=/opt/hadoop
export HADOOP_CONF_DIR=/opt/hadoop/conf
export HADOOP_LOG_DIR=/tmp/hadoop/logs
export HADOOP_PID_DIR=/tmp/hadoop/pids
}}}
* Note: in this lab we additionally set the '''HADOOP_PID_DIR''' and '''HADOOP_LOG_DIR''' variables. This is not strictly necessary; the purpose is to introduce more of the parameters available in hadoop-env.sh and to keep frequently changing data such as logs and PID files separate from the Hadoop home directory.
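* A quick optional sanity check, assuming the sun-java6-jdk package from Step 2 installed to its usual location: confirm that the JAVA_HOME path set above actually exists before moving on.
{{{
/opt/hadoop$ ls /usr/lib/jvm/java-6-sun/bin/java
}}}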
== Step 5: Configure *-site.xml ==
* Next we edit three configuration files: core-site.xml, hdfs-site.xml and mapred-site.xml. The examples in the official documentation do not run as-is, so the settings below are adapted from the [http://hadoop.apache.org/core/docs/current/quickstart.html Hadoop Quick Start] guide.
{{{
/opt/hadoop$ gedit conf/core-site.xml
}}}
Replace the original contents with the following:
{{{
#!xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://node1:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/hadoop/hadoop-${user.name}</value>
  </property>
</configuration>
}}}
{{{
/opt/hadoop$ gedit conf/hdfs-site.xml
}}}
Replace the original contents with the following:
{{{
#!xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
}}}
{{{
/opt/hadoop$ gedit conf/mapred-site.xml
}}}
Replace the original contents with the following:
{{{
#!xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>node1:9001</value>
  </property>
</configuration>
}}}
* Note! fs.default.name = hdfs://node1:9000/ while mapred.job.tracker = node1:9001. Spot the difference: one value starts with hdfs:// and the other does not. This is important and easy to confuse (a quick check is sketched below).
* fs.default.name = hdfs://node1:9000/
* mapred.job.tracker = node1:9001
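* Optionally (a minimal check, assuming the three files were saved under /opt/hadoop/conf), list the property names and values you just entered and confirm that both addresses refer to node1 before anything is copied to node2:
{{{
/opt/hadoop$ grep -A 1 "<name>" conf/core-site.xml conf/hdfs-site.xml conf/mapred-site.xml
}}}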
== Step 6: Configure masters and slaves ==
* Next we specify which hosts run the worker daemons. The hosts listed in conf/slaves are where the start scripts will launch a DataNode and TaskTracker; the NameNode and JobTracker run on the host where you execute those scripts (node1 here). A note on conf/masters follows the example below.
* Edit conf/slaves
{{{
/opt/hadoop$ gedit conf/slaves
}}}
The default content is a single line "localhost"; delete it and replace it with the hostnames (or IP addresses) of "node1" and "node2":
{{{
#!sh
node1
node2
}}}
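* Note: this lab leaves conf/masters at its default ("localhost"). In Hadoop 0.20 that file does not name the NameNode; it lists the hosts on which start-dfs.sh launches the SecondaryNameNode. If you prefer to make that explicit, an optional edit could be:
{{{
/opt/hadoop$ echo "node1" > conf/masters   # SecondaryNameNode will be started on node1
}}}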
== Step 7: Copy the entire HADOOP_HOME to the other computer ==
* 連線到"主機二" 作開資料夾/opt/hadoop及權限設定 [[BR]] SSH to node2 and create related folder for Hadoop and change permissions.
{{{
hadooper@node1:/opt/hadoop$ ssh node2
hadooper@node2:~$ sudo mkdir /opt/hadoop
hadooper@node2:~$ sudo chown -R hadooper:hadooper /opt/hadoop
hadooper@node2:~$ sudo mkdir /var/hadoop
hadooper@node2:~$ sudo chown -R hadooper:hadooper /var/hadoop
}}}
* 複製"主機一" 的hadoop資料夾到"主機二" 上 [[BR]]
{{{
hadooper@node2:~$ scp -r node1:/opt/hadoop/* /opt/hadoop/
hadooper@node2:~$ exit
hadooper@node1:/opt/hadoop$
}}}
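* Optionally, back on node1, verify that the copy matches the local installation (a simple check; the second command uses bash process substitution):
{{{
hadooper@node1:/opt/hadoop$ ssh node2 ls /opt/hadoop/conf
hadooper@node1:/opt/hadoop$ diff conf/core-site.xml <(ssh node2 cat /opt/hadoop/conf/core-site.xml)
}}}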
== Step 8: Format HDFS ==
* The fully distributed Hadoop cluster is now installed and configured. Before starting any Hadoop services, we need to format HDFS. On "node1", format the NameNode:
{{{
hadooper@node1:/opt/hadoop$ bin/hadoop namenode -format
}}}
You should see output similar to this:
{{{
09/03/23 20:19:47 INFO dfs.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = node1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.20.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.18 -r 736250; compiled by 'ndaley' on Thu Jan 22 23:12:08 UTC 2009
************************************************************/
09/03/23 20:19:47 INFO fs.FSNamesystem: fsOwner=hadooper,hadooper
09/03/23 20:19:47 INFO fs.FSNamesystem: supergroup=supergroup
09/03/23 20:19:47 INFO fs.FSNamesystem: isPermissionEnabled=true
09/03/23 20:19:47 INFO dfs.Storage: Image file of size 82 saved in 0 seconds.
09/03/23 20:19:47 INFO dfs.Storage: Storage directory /tmp/hadoop/hadoop-hadooper/dfs/name has been successfully formatted.
09/03/23 20:19:47 INFO dfs.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node1
************************************************************/
}}}
== Step 9: Start Hadoop ==
* The bin/start-dfs.sh script reads ${HADOOP_CONF_DIR}/slaves on the namenode and starts a DataNode on every host listed there.
* 在"主機一" 上,執行下面的命令啟動HDFS:[[BR]] On node1, you can use start-dfs.sh to start HDFS related services.
{{{
/opt/hadoop$ bin/start-dfs.sh
}}}
------
* http://node1:50070/ - Hadoop DFS status page
* [[Image(wiki:0428Hadoop_Lab3:datanode.png)]]
------
* Note: the JobTracker is not running yet, so http://node1:50030/ will not load at this point.
* The bin/start-mapred.sh script reads ${HADOOP_CONF_DIR}/slaves on the jobtracker and starts a TaskTracker on every host listed there.
* 在"主機一"執行下面的命令啟動Map/Reduce:[[BR]] You can use start-mapred.sh to start MapReduce related services.
{{{
/opt/hadoop$ /opt/hadoop/bin/start-mapred.sh
}}}
* Now the JobTracker and TaskTrackers are running as well; a quick way to verify all daemons is sketched below.
------
* http://node1:50030/ - Hadoop MapReduce administration page
* [[Image(wiki:0428Hadoop_Lab3:job.png)]]
------
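* As an optional check, you can list the running Java daemons on each node with the jps tool that ships with the Sun JDK (assuming it is on your PATH). With the layout from the Preface you would expect something like this:
{{{
/opt/hadoop$ jps             # expect NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker
/opt/hadoop$ ssh node2 jps   # expect DataNode and TaskTracker
}}}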
== Note: Stop Hadoop ==
* 在"主機一" 上,執行下面的命令停止HDFS:[[BR]] You can use stop-dfs.sh to stop HDFS related services.
{{{
/opt/hadoop$ bin/stop-dfs.sh
}}}
* 在"主機一" 上,執行下面的命令停止Map/Reduce:[[BR]] You can use stop-mapred.sh to stop MapReduce related services.
{{{
/opt/hadoop$ bin/stop-mapred.sh
}}}
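* Hadoop 0.20 also ships combined scripts: bin/stop-all.sh (already used in Step 0) stops both the HDFS and Map/Reduce services at once, and bin/start-all.sh is the matching script to start everything.
{{{
/opt/hadoop$ bin/stop-all.sh
}}}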