Changes between Version 2 and Version 3 of Hinet130923/Lab11


Timestamp: Sep 24, 2013, 12:18:48 PM
Author: jazz
  • Hinet130923/Lab11

= Lab 11 =

{{{
#!html
<div style="text-align: center;"><big style="font-weight: bold;"><big>Basic Commands of Hadoop MapReduce</big></big></div>
}}}

== Sample 1: WordCount ==

 * As its name suggests, the WordCount example counts every word that appears in the input documents and sorts the results from a to z. (See the note after the commands about the output directory.)
{{{
$ hadoop fs -put /etc/hadoop/conf lab5_input
$ hadoop fs -rmr lab5_out2
$ hadoop jar hadoop-examples.jar wordcount lab5_input lab5_out2
}}}
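 * Note: the '''-rmr''' step above removes any previous output directory first, because a MapReduce job will not start if its output directory already exists; if the directory is not there yet, the command simply prints an error that can be ignored. A minimal sketch (using the same lab5_out2 path as above) that deletes the directory only when it is present:
{{{
$ hadoop fs -ls lab5_out2 && hadoop fs -rmr lab5_out2
}}}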
 * As before, check the computed result of '''wordcount''' on HDFS:
{{{
$ hadoop fs -ls lab5_out2
$ hadoop fs -cat lab5_out2/part-r-00000
}}}
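 * You can also copy the whole result directory back to the local file system for easier browsing. A minimal sketch, assuming a hypothetical local directory name lab5_out2_local:
{{{
$ hadoop fs -get lab5_out2 lab5_out2_local
$ head lab5_out2_local/part-r-00000
}}}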
 * You should see results like this:
{{{
"".     4
"*"     9
"127.0.0.1"     3
"AS     2
"License");     2
"_logs/history/"        1
"alice,bob      9

( ... skip ... )
}}}
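 * To run WordCount on your own data, upload any local text file and point the job at it. A sketch using hypothetical file and directory names (mytext.txt, lab11_input, lab11_out):
{{{
$ hadoop fs -put mytext.txt lab11_input
$ hadoop jar hadoop-examples.jar wordcount lab11_input lab11_out
$ hadoop fs -cat lab11_out/part-r-00000
}}}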

== Sample 2: grep ==

 * grep is a command for extracting specific strings from documents. In the Hadoop examples, the '''grep''' program extracts the strings that match a given regular expression from the input files and counts the matches. (A sketch for trying a different pattern follows the commands below.)
{{{
$ hadoop fs -ls lab5_input
$ hadoop jar hadoop-examples.jar grep lab5_input lab5_out3 'dfs[a-z.]+'
}}}
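 * The last argument is a regular expression, so the same job can search for any pattern. A sketch with a different pattern and a hypothetical output directory lab5_out4:
{{{
$ hadoop jar hadoop-examples.jar grep lab5_input lab5_out4 'mapred[a-z.]+'
$ hadoop fs -cat lab5_out4/part-00000
}}}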
 * You should see progress output like this:
{{{
11/04/19 10:00:20 INFO mapred.FileInputFormat: Total input paths to process : 25
11/04/19 10:00:20 INFO mapred.JobClient: Running job: job_201104120101_0645
11/04/19 10:00:21 INFO mapred.JobClient:  map 0% reduce 0%
( ... skip ... )
}}}
 * Next, check the computed result of '''grep''' on HDFS:
{{{
$ hadoop fs -ls lab5_out3
Found 2 items
drwx------   - hXXXX supergroup          0 2011-04-19 10:00 /user/hXXXX/lab5_out3/_logs
-rw-r--r--   2 hXXXX supergroup       1146 2011-04-19 10:00 /user/hXXXX/lab5_out3/part-00000
$ hadoop fs -cat lab5_out3/part-00000
}}}
 * You should see results like this:
{{{
4       dfs.permissions
4       dfs.replication
4       dfs.name.dir
3       dfs.namenode.decommission.interval.
3       dfs.namenode.decommission.nodes.per.interval
3       dfs.
( ... skip ... )
}}}
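 * When you are finished, you can remove the output directories so the same commands can be run again later. A sketch using the paths from this lab:
{{{
$ hadoop fs -rmr lab5_out2
$ hadoop fs -rmr lab5_out3
}}}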

== More Examples ==

 Here is a list of the Hadoop example programs you can run (a sample invocation of the '''pi''' example appears at the end of this page):

 || aggregatewordcount || An Aggregate based map/reduce program that counts the words in the input files. ||
 || aggregatewordhist || An Aggregate based map/reduce program that computes the histogram of the words in the input files. ||
 || grep || A map/reduce program that counts the matches of a regex in the input. ||
 || join || A job that effects a join over sorted, equally partitioned datasets. ||
 || multifilewc || A job that counts words from several files. ||
 || pentomino || A map/reduce tile laying program to find solutions to pentomino problems. ||
 || pi || A map/reduce program that estimates Pi using the Monte Carlo method. ||
 || randomtextwriter || A map/reduce program that writes 10GB of random textual data per node. ||
 || randomwriter || A map/reduce program that writes 10GB of random data per node. ||
 || sleep || A job that sleeps at each map and reduce task. ||
 || sort || A map/reduce program that sorts the data written by the random writer. ||
 || sudoku || A sudoku solver. ||
 || wordcount || A map/reduce program that counts the words in the input files. ||

You can find more details in [http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/examples/package-summary.html org.apache.hadoop.examples].
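
A quick one to try is the '''pi''' example, which takes the number of map tasks and the number of samples per map as its arguments. A minimal sketch (the argument values 10 and 100000 are only illustrative):
{{{
$ hadoop jar hadoop-examples.jar pi 10 100000
}}}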