= 5. Using Eclipse to build a MapReduce jar that runs on Hadoop =

1. Open a Map/Reduce project

|| Window actions || Settings in the dialog || Notes ||
|| '''File''' > '''New''' > '''Map/Reduce Project''' > '''Next''' || '''Project name''': ''sample'' [[br]] '''Configure Hadoop install directory''': /opt/hadoop [[br]] => '''Finish''' || The sample project is created and Eclipse switches to the Map/Reduce perspective ||

2. Add the file WordCount.java

|| Window actions || Settings in the dialog || Result ||
|| Right-click the sample project > '''New''' > '''File''' || sample > '''src''' [[br]] '''File Name''': WordCount.java [[br]] => '''Finish''' || A WordCount.java file is added to the project ||

3. Fill in the content of WordCount.java ([wiki:WordCount code])
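As a rough illustration of what the linked WordCount code computes (this is a plain-Java sketch, not the Hadoop job itself): the map phase tokenizes the text and emits a count of 1 per word, and the reduce phase sums the counts for each word.

```java
import java.util.Map;
import java.util.StringTokenizer;
import java.util.TreeMap;

// Plain-Java sketch of what WordCount computes. In the real Hadoop job,
// the tokenizing happens in the Mapper and the summation in the Reducer
// (and in a combiner on each mapper's local output); this class only
// illustrates the end result on a single string.
public class WordCountSketch {
    public static Map<String, Integer> countWords(String text) {
        Map<String, Integer> counts = new TreeMap<>();
        StringTokenizer itr = new StringTokenizer(text);
        while (itr.hasMoreTokens()) {
            // "reduce" step: sum the per-word counts as we go
            counts.merge(itr.nextToken(), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(countWords("a b a")); // prints {a=2, b=1}
    }
}
```

In the Hadoop version the same summation also runs as a combiner on each mapper's output, which shrinks the data shuffled to the reducer.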

4. Run

|| Window actions || Settings in the dialog || Result ||
|| '''Run''' > '''Run Configurations...''' || '''Main''' tab: [[br]] '''Name''': '''WordCount''' [[br]] '''Project''': sample [[br]] '''Main class''': WordCount [[br]] '''Arguments''' tab: [[br]] '''Program arguments''': /opt/hadoop/log /opt/hadoop/test2 => '''Apply''' => '''Run''' || The execution results appear in the Console view ||

* Eclipse simulates the Hadoop environment locally when running this code, so nothing is uploaded to HDFS and handed to Hadoop's JobTracker for MapReduce. The absence of any job record at http://localhost:50030 confirms this.
* Since the job runs on the local machine, the Program arguments '''/opt/hadoop/input /opt/hadoop/output''' are directories on the local filesystem.
* Make sure the input directory contains plain-text data and that the output directory does not yet exist (the run creates it and writes the results into it)
* If no error messages appear in the Console view, the program ran correctly on the local machine:
{{{
09/02/06 17:18:35 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
09/02/06 17:18:35 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
09/02/06 17:18:35 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
09/02/06 17:18:35 INFO mapred.FileInputFormat: Total input paths to process : 1

... (omitted) ...

09/02/06 17:18:36 INFO mapred.JobClient: Map output bytes=445846
09/02/06 17:18:36 INFO mapred.JobClient: Map input bytes=320950
09/02/06 17:18:36 INFO mapred.JobClient: Combine input records=37943
09/02/06 17:18:36 INFO mapred.JobClient: Map output records=37943
09/02/06 17:18:36 INFO mapred.JobClient: Reduce input records=9284
}}}

Troubleshooting:

* Make sure the input directory contains plain-text data
* Make sure the output directory does not yet exist (the run creates it and writes the results into it)
* Check that the settings under "Run Configurations" > "Java Application" > "WordCount" are correct
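The first two checks above can be sketched in code. A minimal plain-Java version for the local (non-HDFS) run; the class and method names are illustrative, not part of the tutorial's sources:

```java
import java.io.File;

// Sketch of the pre-run checks from the troubleshooting list: the input
// directory must exist and contain files, and the output directory must
// not exist yet (the job creates it and fails if it is already there).
public class PreFlightCheck {
    public static boolean okToRun(File input, File output) {
        String[] contents = input.isDirectory() ? input.list() : null;
        boolean inputOk = contents != null && contents.length > 0;
        boolean outputMissing = !output.exists();
        return inputOk && outputMissing;
    }

    public static void main(String[] args) {
        // Example paths from the tutorial's local run
        System.out.println(okToRun(new File("/opt/hadoop/input"),
                                   new File("/opt/hadoop/output")));
    }
}
```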

5. Package into a JAR

|| Window actions || Settings in the dialog || Result ||
|| '''File''' > '''Export''' > Java > Runnable JAR file || '''Launch configuration''': '''WordCount - sample''' [[br]] '''Export destination''': /opt/hadoop/WordCount.jar => Finish => OK || The file WordCount.jar can be found under /opt/hadoop/ ||

* The final OK packs in the Hadoop libraries the job depends on, which is why the exported WordCount.jar is about 4.3 MB

6. Run WordCount on HDFS

Commands:
{{{
$ cd /opt/hadoop
$ bin/hadoop jar WordCount.jar /user/waue/input /user/waue/out/
}}}

* With bin/hadoop jar, do not use '''-jar'''; conversely, to run a jar with plain java you must use '''$ java -jar XXX.jar''', not jar alone
* /user/waue/input and /user/waue/out/ are the input and output arguments. Both are paths on HDFS, so make sure /user/waue/input on HDFS contains plain-text files and that the /user/waue/out/ directory does not exist.
* To run the job again after a successful run, change the output directory name; otherwise it fails with an error because the directory already exists.

Sample output:
{{{
09/02/06 18:13:14 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
09/02/06 18:13:14 INFO mapred.FileInputFormat: Total input paths to process : 1
09/02/06 18:13:14 INFO mapred.FileInputFormat: Total input paths to process : 1
09/02/06 18:13:15 INFO mapred.JobClient: Running job: job_200902051032_0009
09/02/06 18:13:16 INFO mapred.JobClient: map 0% reduce 0%
09/02/06 18:13:20 INFO mapred.JobClient: map 100% reduce 0%
09/02/06 18:13:23 INFO mapred.JobClient: Job complete: job_200902051032_0009
09/02/06 18:13:23 INFO mapred.JobClient: Counters: 16
09/02/06 18:13:23 INFO mapred.JobClient: File Systems
09/02/06 18:13:23 INFO mapred.JobClient: HDFS bytes read=320950
09/02/06 18:13:23 INFO mapred.JobClient: HDFS bytes written=130568
09/02/06 18:13:23 INFO mapred.JobClient: Local bytes read=168448
09/02/06 18:13:23 INFO mapred.JobClient: Local bytes written=336932
09/02/06 18:13:23 INFO mapred.JobClient: Job Counters
09/02/06 18:13:23 INFO mapred.JobClient: Launched reduce tasks=1
09/02/06 18:13:23 INFO mapred.JobClient: Launched map tasks=1
09/02/06 18:13:23 INFO mapred.JobClient: Data-local map tasks=1
09/02/06 18:13:23 INFO mapred.JobClient: Map-Reduce Framework
09/02/06 18:13:23 INFO mapred.JobClient: Reduce input groups=9284
09/02/06 18:13:23 INFO mapred.JobClient: Combine output records=18568
09/02/06 18:13:23 INFO mapred.JobClient: Map input records=7868
09/02/06 18:13:23 INFO mapred.JobClient: Reduce output records=9284
09/02/06 18:13:23 INFO mapred.JobClient: Map output bytes=445846
09/02/06 18:13:23 INFO mapred.JobClient: Map input bytes=320950
09/02/06 18:13:23 INFO mapred.JobClient: Combine input records=47227
09/02/06 18:13:23 INFO mapred.JobClient: Map output records=37943
09/02/06 18:13:23 INFO mapred.JobClient: Reduce input records=9284
}}}

* http://localhost:50030 records the job that was just run
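The job writes its results into the output directory as plain-text lines, one "word<TAB>count" pair per word (Hadoop's default text output; the file name is typically something like part-00000, an assumption based on Hadoop's defaults rather than stated in this page). A small sketch for parsing such lines back into a map; the class name is illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: parse WordCount output lines ("word<TAB>count") back into a
// map. In practice the lines would come from a part file fetched out of
// /user/waue/out/ on HDFS.
public class WordCountOutput {
    public static Map<String, Integer> parse(Iterable<String> lines) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String line : lines) {
            String[] kv = line.split("\t", 2);      // key, value
            counts.put(kv[0], Integer.parseInt(kv[1].trim()));
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(parse(java.util.Arrays.asList(
                "hadoop\t3", "jar\t1"))); // prints {hadoop=3, jar=1}
    }
}
```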
= 6. References =