Lab 3: Running Basic MapReduce Operations
1. Hadoop example command: grep
- grep extracts text matching a given pattern from files. In the Hadoop examples, this job extracts every string in the input files that matches the given regular expression and counts how many times each match occurs.
$ cd /opt/hadoop
$ bin/hadoop fs -put conf lab3_input
$ bin/hadoop fs -ls lab3_input
$ bin/hadoop jar hadoop-*-examples.jar grep lab3_input lab3_out1 'dfs[a-z.]+'
The console output during the run looks like the following:
09/03/24 12:33:45 INFO mapred.FileInputFormat: Total input paths to process : 9
09/03/24 12:33:45 INFO mapred.FileInputFormat: Total input paths to process : 9
09/03/24 12:33:45 INFO mapred.JobClient: Running job: job_200903232025_0003
09/03/24 12:33:46 INFO mapred.JobClient:  map 0% reduce 0%
09/03/24 12:33:47 INFO mapred.JobClient:  map 10% reduce 0%
09/03/24 12:33:49 INFO mapred.JobClient:  map 20% reduce 0%
09/03/24 12:33:51 INFO mapred.JobClient:  map 30% reduce 0%
09/03/24 12:33:52 INFO mapred.JobClient:  map 40% reduce 0%
09/03/24 12:33:54 INFO mapred.JobClient:  map 50% reduce 0%
09/03/24 12:33:55 INFO mapred.JobClient:  map 60% reduce 0%
09/03/24 12:33:57 INFO mapred.JobClient:  map 70% reduce 0%
09/03/24 12:33:59 INFO mapred.JobClient:  map 80% reduce 0%
09/03/24 12:34:00 INFO mapred.JobClient:  map 90% reduce 0%
09/03/24 12:34:02 INFO mapred.JobClient:  map 100% reduce 0%
09/03/24 12:34:10 INFO mapred.JobClient:  map 100% reduce 10%
09/03/24 12:34:12 INFO mapred.JobClient:  map 100% reduce 13%
09/03/24 12:34:15 INFO mapred.JobClient:  map 100% reduce 20%
09/03/24 12:34:20 INFO mapred.JobClient:  map 100% reduce 23%
09/03/24 12:34:22 INFO mapred.JobClient: Job complete: job_200903232025_0003
09/03/24 12:34:22 INFO mapred.JobClient: Counters: 16
09/03/24 12:34:22 INFO mapred.JobClient:   File Systems
09/03/24 12:34:22 INFO mapred.JobClient:     HDFS bytes read=48245
09/03/24 12:34:22 INFO mapred.JobClient:     HDFS bytes written=1907
09/03/24 12:34:22 INFO mapred.JobClient:     Local bytes read=1549
09/03/24 12:34:22 INFO mapred.JobClient:     Local bytes written=3584
09/03/24 12:34:22 INFO mapred.JobClient:   Job Counters
......
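- Note that a MapReduce job will not write into an output directory that already exists; if lab3_out1 is left over from a previous attempt, the job fails at submission. A minimal cleanup sketch, assuming you want to re-run the grep job with the same paths as above:
$ bin/hadoop fs -rmr lab3_out1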
- Next, check the results:
$ bin/hadoop fs -ls lab3_out1
$ bin/hadoop fs -cat lab3_out1/part-00000
The results are as follows:
3 dfs.class
3 dfs.
2 dfs.period
1 dfs.http.address
1 dfs.balance.bandwidth
1 dfs.block.size
1 dfs.blockreport.initial
1 dfs.blockreport.interval
1 dfs.client.block.write.retries
1 dfs.client.buffer.dir
1 dfs.data.dir
1 dfs.datanode.address
1 dfs.datanode.dns.interface
1 dfs.datanode.dns.nameserver
1 dfs.datanode.du.pct
1 dfs.datanode.du.reserved
1 dfs.datanode.handler.count
1 dfs.datanode.http.address
1 dfs.datanode.https.address
1 dfs.datanode.ipc.address
1 dfs.default.chunk.view.size
1 dfs.df.interval
1 dfs.file
1 dfs.heartbeat.interval
1 dfs.hosts
1 dfs.hosts.exclude
1 dfs.https.address
1 dfs.impl
1 dfs.max.objects
1 dfs.name.dir
1 dfs.namenode.decommission.interval
1 dfs.namenode.decommission.interval.
1 dfs.namenode.decommission.nodes.per.interval
1 dfs.namenode.handler.count
1 dfs.namenode.logging.level
1 dfs.permissions
1 dfs.permissions.supergroup
1 dfs.replication
1 dfs.replication.consider
1 dfs.replication.interval
1 dfs.replication.max
1 dfs.replication.min
1 dfs.replication.min.
1 dfs.safemode.extension
1 dfs.safemode.threshold.pct
1 dfs.secondary.http.address
1 dfs.servers
1 dfs.web.ugi
1 dfsmetrics.log
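- If you prefer to inspect the results with local tools, you can also copy the output directory from HDFS to the local filesystem. A small sketch (the local directory name lab3_out1_local is only an example):
$ bin/hadoop fs -get lab3_out1 lab3_out1_local
$ cat lab3_out1_local/part-00000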
2. Hadoop example command: WordCount
- As its name suggests, WordCount counts the occurrences of every word in the input files and lists the results in alphabetical order (a-z).
/opt/hadoop$ bin/hadoop jar hadoop-*-examples.jar wordcount lab3_input lab3_out2
Check the output the same way as before:
$ bin/hadoop fs -ls lab3_out2
$ bin/hadoop fs -cat lab3_out2/part-00000
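Because WordCount emits one line per distinct word, the output can be long. Piping it through local commands is a convenient way to look at only part of it; a small sketch using the lab3_out2 output above:
$ bin/hadoop fs -cat lab3_out2/part-00000 | head -n 20
$ bin/hadoop fs -cat lab3_out2/part-00000 | grep dfs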
3. Browsing information with the web GUI
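- The Hadoop daemons of this generation serve status pages directly. Assuming the default ports, the JobTracker page at http://localhost:50030/ shows running and completed MapReduce jobs (including the lab3 jobs above), and the NameNode page at http://localhost:50070/ shows HDFS status and a file browser for directories such as lab3_out1 and lab3_out2.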
4. More example commands
The list of available example commands:
aggregatewordcount   An Aggregate based map/reduce program that counts the words in the input files.
aggregatewordhist    An Aggregate based map/reduce program that computes the histogram of the words in the input files.
grep                 A map/reduce program that counts the matches of a regex in the input.
join                 A job that effects a join over sorted, equally partitioned datasets
multifilewc          A job that counts words from several files.
pentomino            A map/reduce tile laying program to find solutions to pentomino problems.
pi                   A map/reduce program that estimates Pi using monte-carlo method.
randomtextwriter     A map/reduce program that writes 10GB of random textual data per node.
randomwriter         A map/reduce program that writes 10GB of random data per node.
sleep                A job that sleeps at each map and reduce task.
sort                 A map/reduce program that sorts the data written by the random writer.
sudoku               A sudoku solver.
wordcount            A map/reduce program that counts the words in the input files.
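- For example, the pi estimator takes the number of map tasks and the number of samples per map as its arguments. A small example run (the argument values here are arbitrary):
$ cd /opt/hadoop
$ bin/hadoop jar hadoop-*-examples.jar pi 10 100000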