[[PageOutline]]
{{{
#!html
<div style="text-align: center;"><big
style="font-weight: bold;"><big><big> Hadoop Development (Eclipse Plugin) </big></big></big></div>
}}}
= 0. Environment Setup =

== 0.1 Environment Overview ==
 * ubuntu 8.10
 * sun-java-6
   * [http://www.java.com/zh_TW/download/linux_manual.jsp?locale=zh_TW&host=www.java.com:80 Java download]
   * [https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_Developer-Site/en_US/-/USD/ViewProductDetail-Start?ProductRef=jdk-6u10-docs-oth-JPR@CDS-CDS_Developer JavaDoc]
 * eclipse 3.3.2
   * Eclipse release archive: [http://archive.eclipse.org/eclipse/downloads/]
 * hadoop 0.18.3
   * Hadoop release downloads: [http://ftp.twaren.net/Unix/Web/apache/hadoop/core/]

== 0.2 Directory Layout ==

 * User: hadooper
 * User home directory: /home/hadooper
 * Project directory: /home/hadooper/workspace
 * Hadoop directory: /opt/hadoop

= 1. Installation =

The installation does not have to match exactly what is shown here; it is provided for reference only. As long as Java, Hadoop, and Eclipse are installed and you know your own paths, you are set.

== 1.1 Installing Java ==

First install the basic Java packages:

{{{
$ sudo apt-get install java-common sun-java6-bin sun-java6-jdk sun-java6-jre
}}}

=== 1.1.1 Installing sun-java6-doc ===

1. Download the Javadoc archive (jdk-6u10-docs.zip) from the
[https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_Developer-Site/en_US/-/USD/ViewProductDetail-Start?ProductRef=jdk-6u10-docs-oth-JPR@CDS-CDS_Developer download page]
[[Image(wiki:waue/2009/0617:1-1.png)]]

2. After downloading, move the file to /tmp/ so that the sun-java6-doc package installed in step 3 can find it (assuming it was downloaded to /home/hadooper/tools/):
{{{
$ mv /home/hadooper/tools/jdk-*-docs.zip /tmp/
}}}

3. Run:

{{{
$ sudo apt-get install sun-java6-doc
}}}

== 1.2 Installing and Configuring SSH ==


== 1.3 Installing Hadoop ==


== 1.4 Installing Eclipse ==

 * Get the eclipse 3.3.2 archive (assuming it was downloaded to /home/hadooper/tools/) and run the following commands:

{{{
$ cd ~/tools/
$ tar -zxvf eclipse-SDK-3.3.2-linux-gtk.tar.gz
$ sudo mv eclipse /opt
$ sudo ln -sf /opt/eclipse/eclipse /usr/local/bin/
}}}

= 2. Creating a Project =

== 2.1 Installing the Hadoop Eclipse Plugin ==

 * Copy the Hadoop Eclipse plugin into Eclipse's plugins directory:

{{{
$ cd /opt/hadoop
$ sudo cp /opt/hadoop/contrib/eclipse-plugin/hadoop-0.18.3-eclipse-plugin.jar /opt/eclipse/plugins
}}}

Note: you may also want to review the contents of eclipse.ini (optional):

{{{
$ sudo vim /opt/eclipse/eclipse.ini
}}}

{{{
#!sh
-startup
plugins/org.eclipse.equinox.launcher_1.0.101.R34x_v20081125.jar
--launcher.library
plugins/org.eclipse.equinox.launcher.gtk.linux.x86_1.0.101.R34x_v20080805
-showsplash
org.eclipse.platform
--launcher.XXMaxPermSize
512m
-vmargs
-Xms40m
-Xmx512m
}}}

== 2.2 Starting Eclipse ==

 * Launch eclipse:

{{{
$ eclipse &
}}}

On first launch you are asked where to put the workspace; here we accept the default.
[[Image(wiki:waue/2009/0617:2-1.png)]]
-------

'''PS: the steps from here on are performed in the Eclipse GUI.'''

-------

== 2.3 Switching Perspective ==

|| Window -> || Open Perspective -> || Other... -> || Map/Reduce ||

[[Image(wiki:waue/2009/0617:win-open-other.png)]]

-------

Select the Map/Reduce perspective.
[[Image(wiki:waue/2009/0617:2-2.png)]]

---------

The interface after switching to the Map/Reduce perspective.
[[Image(wiki:waue/2009/0617:2-3.png)]]

--------

== 2.4 Creating the Project ==

|| File -> || New -> || Project -> || Map/Reduce -> || Map/Reduce Project -> || Next ||
[[Image(wiki:waue/2009/0617:file-new-project.png)]]

--------

Creating the MapReduce project (1)

[[Image(wiki:waue/2009/0617:2-4.png)]]

-----------

Creating the MapReduce project (2)
{{{
#!sh
Project name -> enter: icas (any name will do)
Use default hadoop -> Configure Hadoop install directory... -> enter: "/opt/hadoop" -> ok
Finish
}}}

[[Image(wiki:waue/2009/0617:2-4-2.png)]]


--------------

== 2.5 Configuring the Project ==

The icas project we just created now shows up in the left panel. Right-click the project folder and select Properties.

--------------

Step 1. Right-click the project and choose Properties for the detailed settings.

[[Image(wiki:waue/2009/0617:2-5.png)]]

----------

Step 2. Enter the project's settings page.

Hadoop Javadoc settings (1)
[[Image(wiki:waue/2009/0617:2-5-1.png)]]

 * Java Build Path -> Libraries -> hadoop-0.18.3-ant.jar
 * Java Build Path -> Libraries -> hadoop-0.18.3-core.jar
 * Java Build Path -> Libraries -> hadoop-0.18.3-tools.jar
 * The settings for hadoop-0.18.3-core.jar are shown below; configure the others the same way:

{{{
#!sh
source ...-> enter: /opt/hadoop/src/core
javadoc ...-> enter: file:/opt/hadoop/docs/api/
}}}

------------
Step 3. After the Hadoop Javadoc settings are done (2)
[[Image(wiki:waue/2009/0617:2-5-2.png)]]

------------
Step 4. Set Java's own Javadoc (3)

 * Javadoc location -> enter: file:/usr/lib/jvm/java-6-sun/docs/api/

[[Image(wiki:waue/2009/0617:2-5-3.png)]]

-----
When done, return to the main Eclipse window.

== 2.6 Connecting to the Hadoop Server ==

--------
Step 1. In the "Map/Reduce Locations" tab at the bottom right of the window (yellow elephant icon), click the blue elephant icon next to the gear:
[[Image(wiki:waue/2009/0617:2-6.png)]]

-------------
Step 2. Configure the connection between Eclipse and Hadoop (2)
[[Image(wiki:waue/2009/0617:2-6-1.png)]]

{{{
#!sh
Location Name -> enter: hadoop (any name will do)
Map/Reduce Master -> Host -> enter: localhost
Map/Reduce Master -> Port -> enter: 9001
DFS Master -> Port -> enter: 9000
Finish
}}}
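
These two endpoints are the same ones Hadoop reads from its own configuration: the DFS Master corresponds to `fs.default.name` and the Map/Reduce Master to `mapred.job.tracker`. As a rough, optional sketch (not part of the original tutorial; the class name is made up for illustration), a standalone program can talk to the same HDFS endpoint and list its root directory:

{{{
#!java
// Illustrative sketch only: connect to the DFS Master configured above and
// list the HDFS root. Assumes the hadoop 0.18 jars are on the classpath and
// the daemons are running on localhost.
package Sample;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListHdfsRoot {
	public static void main(String[] args) throws Exception {
		Configuration conf = new Configuration();
		conf.set("fs.default.name", "hdfs://localhost:9000"); // DFS Master
		conf.set("mapred.job.tracker", "localhost:9001");     // Map/Reduce Master
		FileSystem fs = FileSystem.get(conf);
		for (FileStatus status : fs.listStatus(new Path("/"))) {
			System.out.println(status.getPath());
		}
	}
}
}}}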
----------------

Once configured, a blue elephant appears in the panel below, and expanding the folder tree on the left shows the file structure inside HDFS.
[[Image(wiki:waue/2009/0617:2-6-2.png)]]
-------------

= 3. Writing the Example Program =

 * We already created the icas project in Eclipse, so its directory is:
   * /home/hadooper/workspace/icas
 * Inside this directory there are two folders:
   * src: holds the source code
   * bin: holds the compiled class files
 * This keeps sources and compiled files apart, which will be very helpful when we build the jar file later.
 * Here we write an example program: WordCount.

== 3.1 Map.java ==

1. new

|| File -> || New -> || Mapper ||
[[Image(wiki:waue/2009/0617:file-new-mapper.png)]]

-----------

2. create

[[Image(wiki:waue/2009/0617:3-1.png)]]
{{{
#!sh
Source folder -> enter: icas/src
Package: Sample
Name -> : Map (the file name must match the public class in the code below)
}}}
----------

3. modify

{{{
#!java
package Sample;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
	private final static IntWritable one = new IntWritable(1);
	private Text word = new Text();

	public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
		String line = value.toString();
		StringTokenizer tokenizer = new StringTokenizer(line);
		while (tokenizer.hasMoreTokens()) {
			word.set(tokenizer.nextToken());
			output.collect(word, one);
		}
	}
}
}}}
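
The heart of the mapper is plain `StringTokenizer`; as a minimal sketch (not part of the original tutorial; the class name is made up for illustration), you can see what map() would emit for one line of input without running Hadoop at all:

{{{
#!java
// Illustrative sketch: print the (word, 1) pairs that map() would emit for a
// single input line, outside of Hadoop.
package Sample;

import java.util.StringTokenizer;

public class TokenizeCheck {
	public static void main(String[] args) {
		StringTokenizer tokenizer = new StringTokenizer("hello world hello");
		while (tokenizer.hasMoreTokens()) {
			// map() would call output.collect(word, one) here
			System.out.println(tokenizer.nextToken() + "\t1");
		}
		// prints hello, world, hello each with count 1; reduce later sums
		// these into hello 2, world 1
	}
}
}}}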

After creating Map.java, paste in the code above.
[[Image(wiki:waue/2009/0617:3-2.png)]]

------------

== 3.2 Reduce.java ==

1. new

 * File -> New -> Reducer
[[Image(wiki:waue/2009/0617:file-new-reducer.png)]]

-------
2. create
[[Image(wiki:waue/2009/0617:3-3.png)]]

{{{
#!sh
Source folder -> enter: icas/src
Package: Sample
Name -> : Reduce (the file name must match the public class in the code below)
}}}

-----------

3. modify

{{{
#!java
package Sample;

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
	public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
		int sum = 0;
		while (values.hasNext()) {
			sum += values.next().get();
		}
		output.collect(key, new IntWritable(sum));
	}
}
}}}

----------

== 3.3 WordCount.java (main function) ==

1. new

 * File -> New -> Map/Reduce Driver
[[Image(wiki:waue/2009/0617:file-new-mr-driver.png)]]

------------

Create WordCount.java. This file drives the mapper and the reducer, so choose Map/Reduce Driver.
[[Image(wiki:waue/2009/0617:3-4.png)]]
------------

2. create

{{{
#!sh
Source folder -> enter: icas/src
Package: Sample
Name -> : WordCount
}}}

-------
3. modify

{{{
#!java
package Sample;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

public class WordCount {

	public static void main(String[] args) throws Exception {
		JobConf conf = new JobConf(WordCount.class);
		conf.setJobName("wordcount");

		conf.setOutputKeyClass(Text.class);
		conf.setOutputValueClass(IntWritable.class);

		conf.setMapperClass(Map.class);
		conf.setCombinerClass(Reduce.class);
		conf.setReducerClass(Reduce.class);

		conf.setInputFormat(TextInputFormat.class);
		conf.setOutputFormat(TextOutputFormat.class);

		FileInputFormat.setInputPaths(conf, new Path(args[0]));
		FileOutputFormat.setOutputPath(conf, new Path(args[1]));

		JobClient.runJob(conf);
	}
}
}}}

Here Reduce serves as both the combiner and the reducer; that is safe because summing counts is associative, so partial sums computed on the map side give the same final totals.

Once all three files are saved, the program is complete.
[[Image(wiki:waue/2009/0617:3-5.png)]]

-------

 * With all three files saved, the src and bin folders of the icas project both contain files. Let's check from the command line:

{{{
$ cd workspace/icas
$ ls src/Sample/
Map.java  Reduce.java  WordCount.java
$ ls bin/Sample/
Map.class  Reduce.class  WordCount.class
}}}

= 4. Testing the Example Program =

Here we provide two ways to run the code compiled in Eclipse.

Method one operates directly through the Eclipse GUI; see 4.1.

Method two produces a jar file and drives it with a Makefile; see 4.2.


== 4.1 Method 1: Run Inside Eclipse ==

 * Right-click the project folder icas -> Run As -> Run on Hadoop

[[Image(wiki:waue/2009/0617:run-on-hadoop.png)]]


== 4.2 Method 2: Jar File with an Automated Makefile ==

 * Eclipse can export a jar file:

File -> Export -> Java -> JAR file [[br]]
-> Next ->
--------
Select the project to export ->
JAR file: /home/hadooper/mytest.jar -> [[br]]
Next ->
--------
Next ->
--------
Main class: select the class containing main -> [[br]]
Finish
--------

 * The steps above produce mytest.jar in /home/hadooper/.
 * But code changes often, and clicking through these dialogs every time gets tedious, so let's see how '''the command line beats the GUI for this''':

=== 4.2.1 Create the Makefile ===
{{{
$ cd /home/hadooper/workspace/icas/
$ gedit Makefile
}}}

 * Enter the following Makefile content (note that each recipe line must start with a tab):
{{{
#!sh

JarFile="sample-0.1.jar"
MainFunc="Sample.WordCount"
LocalOutDir="/tmp/output"

all:help
jar:
	jar -cvf ${JarFile} -C bin/ .

run:
	hadoop jar ${JarFile} ${MainFunc} input output

clean:
	hadoop fs -rmr output

output:
	rm -rf ${LocalOutDir}
	hadoop fs -get output ${LocalOutDir}
	gedit ${LocalOutDir}/part-00000 &

help:
	@echo "Usage:"
	@echo " make jar - Build Jar File."
	@echo " make clean - Clean up Output directory on HDFS."
	@echo " make run - Run your MapReduce code on Hadoop."
	@echo " make output - Download and show output file"
	@echo " make help - Show Makefile options."
	@echo " "
	@echo "Example:"
	@echo " make jar; make run; make output; make clean"

}}}

=== 4.2.2 Run ===

 * To use the Makefile, cd into the project directory and run make [target]. If you don't know the targets, run make or make help. Note that make run expects an input directory named input to already exist on HDFS.
 * Usage summary for make:

{{{
$ cd /home/hadooper/workspace/icas/
$ make
Usage:
 make jar - Build Jar File.
 make clean - Clean up Output directory on HDFS.
 make run - Run your MapReduce code on Hadoop.
 make output - Download and show output file
 make help - Show Makefile options.

Example:
 make jar; make run; make output; make clean
}}}

 * The individual make targets are described below.

=== make jar ===
 * 1. Compile and produce the jar file:

{{{
$ make jar
}}}

=== make run ===
 * 2. Run our WordCount on Hadoop:

{{{
$ make run
}}}

 * If make run completes without errors, the code we compiled in Eclipse runs correctly on the hadoop 0.18.3 platform.

 * Back in the Eclipse window, the finished job shows up in the panel at the bottom, and the left panel now contains an output folder; part-00000 is our result file.

[[Image(wiki:waue/2009/0617:4-1.png)]]
------
 * Because the Javadoc locations were fully configured, Eclipse can show detailed documentation and assistance.
[[Image(wiki:waue/2009/0617:4-2.png)]]

=== make output ===
 * 3. This target downloads the result file from HDFS to the local side and opens it with gedit:

{{{
$ make output
}}}

=== make clean ===
 * 4. This target removes the output folder on HDFS. If you want to run make run again, execute make clean first; otherwise Hadoop will refuse to start the job because the output folder already exists.

{{{
$ make clean
}}}
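
As an alternative to make clean, the same cleanup can be done programmatically with the FileSystem API. Below is an optional sketch (not part of the original tutorial; the class name and default path are made up for illustration) of a small utility equivalent to `hadoop fs -rmr output`:

{{{
#!java
// Optional sketch: delete a stale output directory on HDFS so a re-run of
// the job won't be refused. Equivalent to "hadoop fs -rmr output".
package Sample;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CleanOutput {
	public static void main(String[] args) throws Exception {
		Configuration conf = new Configuration();
		conf.set("fs.default.name", "hdfs://localhost:9000"); // same DFS Master as section 2.6
		FileSystem fs = FileSystem.get(conf);
		Path outputDir = new Path(args.length > 0 ? args[0] : "output");
		if (fs.exists(outputDir)) {
			fs.delete(outputDir, true); // true = recursive, like -rmr
			System.out.println("Deleted " + outputDir);
		} else {
			System.out.println(outputDir + " does not exist; nothing to do");
		}
	}
}
}}}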

= 5. Conclusion =

 * Paired with Eclipse, we can develop for Hadoop much more efficiently.
 * hadoop 0.20 changed both the API and the configuration compared with earlier versions; see [wiki:waue/2009/0617 hadoop 0.20 coding (eclipse)].