
Hadoop Programming: the Complete Formula

Type parameters, in order: < input key , input value , output key , output value >

Mapper  < A , B , C , D >
    map( A key, B value, OutputCollector<C, D> output, Reporter reporter )
        output.collect( c , d )

Reducer < C , D , E , F >
    reduce( C key, Iterator<D> values, OutputCollector<E, F> output, Reporter reporter )
        output.collect( e , f )
  • A, B, C, D, E, F each stand for the class (type) that can go in that slot; c, d, e, f stand for objects created from C, D, E, F
  • With this table, when we plan an M/R program:
    • First decide which classes the Map input <key, value> should belong to; that fixes A and B
    • Decide the Map output <key, value>, and C and D are set as well
    • Next, think about which classes the final output <key, value> should be; that determines E and F
    • Once A through F are filled in, the overall structure of the program is already there; what remains is how you implement it (see the WordCount example below)
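In the classic WordCount shown next, the slots are filled as A = LongWritable (the byte offset of each input line), B = Text (the line itself), C = Text (a word) and D = IntWritable (the count 1 emitted per word), while the final output uses E = Text and F = IntWritable for the <word, total count> pairs: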
public static class Map extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, IntWritable> {
  private final static IntWritable one = new IntWritable(1);
  private Text word = new Text();

  public void map(LongWritable key, Text value,
      OutputCollector<Text, IntWritable> output, Reporter reporter)
      throws IOException {
    String line = value.toString();
    StringTokenizer tokenizer = new StringTokenizer(line);
    while (tokenizer.hasMoreTokens()) {
      word.set(tokenizer.nextToken());
      output.collect(word, one);               // emit <word, 1> for every token
    }
  }
}

public static class Reduce extends MapReduceBase
    implements Reducer<Text, IntWritable, Text, IntWritable> {
  public void reduce(Text key, Iterator<IntWritable> values,
      OutputCollector<Text, IntWritable> output, Reporter reporter)
      throws IOException {
    int sum = 0;
    while (values.hasNext()) {
      sum += values.next().get();              // add up the 1s for this word
    }
    output.collect(key, new IntWritable(sum)); // emit <word, total count>
  }
}
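
The code above covers only the Map and Reduce classes; the job still needs a driver that registers them and declares the final output classes (the E, F slots). Below is a minimal driver sketch against the same old org.apache.hadoop.mapred API. The enclosing class name WordCount, the use of a combiner, and the TextInputFormat/TextOutputFormat choices are illustrative assumptions, not part of this page.

import java.io.IOException;
// imports needed by the Map and Reduce classes above
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

// Illustrative driver sketch (class name and format choices are assumptions)
public class WordCount {

  // ... the Map and Reduce classes shown above go here ...

  public static void main(String[] args) throws IOException {
    JobConf conf = new JobConf(WordCount.class);
    conf.setJobName("wordcount");

    // E, F : the job's final output <key, value> classes
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);

    conf.setMapperClass(Map.class);
    conf.setCombinerClass(Reduce.class);  // optional: pre-sum counts on the map side
    conf.setReducerClass(Reduce.class);

    // A, B are fixed by the input format:
    // TextInputFormat feeds the mapper <LongWritable, Text> pairs
    conf.setInputFormat(TextInputFormat.class);
    conf.setOutputFormat(TextOutputFormat.class);

    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    JobClient.runJob(conf);
  }
}

Run it with two arguments, the input and output paths. Because TextInputFormat delivers <LongWritable, Text> records, the A and B slots of the Mapper above are exactly LongWritable and Text, which closes the loop on the formula table at the top of the page.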