 * Enter the hadoop directory for further configuration. We need to modify two files. The first is '''hadoop-env.sh''', where three environment variables must be set: JAVA_HOME, HADOOP_HOME, and HADOOP_CONF_DIR.
{{{
/opt$ cd hadoop/
/opt/hadoop$ sudo su
/opt/hadoop# cat >> conf/hadoop-env.sh << EOF
export JAVA_HOME=/usr/lib/jvm/java-6-sun
export HADOOP_HOME=/opt/hadoop
export HADOOP_CONF_DIR=/opt/hadoop/conf
EOF
}}}
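 * As a quick sanity check (our own sketch, not part of the original steps), you can write the same three exports to a throwaway copy, source it, and confirm the variables resolve as expected. The paths are the ones assumed by this guide; adjust them for your install.

```shell
# Sketch: write the same exports to a temp file, source it, and check.
# The paths (/usr/lib/jvm/java-6-sun, /opt/hadoop) come from the guide above.
cat > /tmp/hadoop-env-check.sh << 'EOF'
export JAVA_HOME=/usr/lib/jvm/java-6-sun
export HADOOP_HOME=/opt/hadoop
export HADOOP_CONF_DIR=/opt/hadoop/conf
EOF
. /tmp/hadoop-env-check.sh
echo "HADOOP_HOME=$HADOOP_HOME"
# The conf dir should sit under HADOOP_HOME.
[ "$HADOOP_CONF_DIR" = "$HADOOP_HOME/conf" ] && echo "conf dir OK"
```

Running the real `hadoop-env.sh` through the same kind of check catches typos in the paths before any daemon complains about a missing JAVA_HOME.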
 * The second configuration file is '''hadoop-site.xml'''. Since the sample shipped with the official distribution does not run as-is, we made the following changes based on the online documentation.
{{{
/opt/hadoop# cat > conf/hadoop-site.xml << EOF
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000/</value>
    <description>
      The name of the default file system. Either the literal string
      "local" or a host:port for NDFS.
    </description>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
    <description>
      The host and port that the MapReduce job tracker runs at. If
      "local", then jobs are run in-process as a single map and
      reduce task.
    </description>
  </property>
</configuration>
EOF
}}}
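 * Before starting the daemons it is worth confirming the file you just generated is well-formed XML; a stray character inside a here-doc is easy to miss. The sketch below (our own addition, not from the original guide) reproduces the two key properties in a temp copy so it does not require the /opt/hadoop layout, and assumes python3 is available for the parse check.

```shell
# Sketch: reproduce the two key properties in a temp file and validate it.
cat > /tmp/hadoop-site-check.xml << 'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000/</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
EOF
# Well-formedness check via Python's stdlib parser (python3 assumed installed).
python3 -c "import xml.etree.ElementTree as ET; ET.parse('/tmp/hadoop-site-check.xml'); print('XML OK')"
# Show the value each daemon will read.
grep -A1 'fs.default.name' /tmp/hadoop-site-check.xml | grep '<value>'
```

The same `python3 -c` one-liner pointed at `/opt/hadoop/conf/hadoop-site.xml` validates the real file in place.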