[[PageOutline]]
{{{
#!html
<div style="text-align: center;"><big style="font-weight: bold;"><big><big> Hadoop Programming (Eclipse Plugin) </big></big></big></div>
}}}
= 0. Environment Setup =

== 0.1 Environment ==
 * Ubuntu 8.10
 * sun-java-6
   * [http://www.java.com/zh_TW/download/linux_manual.jsp?locale=zh_TW&host=www.java.com:80 Java download page]
   * [https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_Developer-Site/en_US/-/USD/ViewProductDetail-Start?ProductRef=jdk-6u10-docs-oth-JPR@CDS-CDS_Developer JavaDoc]
 * Eclipse 3.3.2
   * Eclipse archived releases: [http://archive.eclipse.org/eclipse/downloads/]
 * Hadoop 0.18.3
   * Hadoop releases: [http://ftp.twaren.net/Unix/Web/apache/hadoop/core/]

== 0.2 Directory Layout ==

 * User: hadooper
 * User home directory: /home/hadooper
 * Project directory: /home/hadooper/workspace
 * Hadoop directory: /opt/hadoop

= 1. Installation =

Your installation does not have to match these steps exactly; they are provided for reference only. As long as Java, Hadoop, and Eclipse are installed and you know where each of them lives, you are set.

== 1.1 Installing Java ==

First, install the basic Java packages:

{{{
$ sudo apt-get install java-common sun-java6-bin sun-java6-jdk sun-java6-jre
}}}
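
 * To confirm the packages installed correctly, you can check the version (a minimal check; the exact update level reported will vary):

{{{
$ java -version
$ javac -version
}}}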

=== 1.1.1 Installing sun-java6-doc ===

1. Download the Javadoc archive (jdk-6u10-docs.zip) and place it under /tmp/

 * In the lab environment it is already available in /home/hadooper/tools/; copy it to /tmp:
{{{
$ cp /home/hadooper/tools/jdk-*-docs.zip /tmp/
}}}

 * Or download the Javadoc (jdk-6u10-docs.zip) from the official website and put it in /tmp:
 [https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_Developer-Site/en_US/-/USD/ViewProductDetail-Start?ProductRef=jdk-6u10-docs-oth-JPR@CDS-CDS_Developer download link]
[[Image(wiki:waue/2009/0617:1-1.png)]]

2. Run:

{{{
$ sudo apt-get install sun-java6-doc
$ sudo ln -sf /usr/share/doc/sun-java6-jdk/html /usr/lib/jvm/java-6-sun/docs
}}}
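
 * A quick check that the symlink points at the installed docs:

{{{
$ ls -l /usr/lib/jvm/java-6-sun/docs
}}}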

== 1.2 Installing and Configuring SSH ==

[http://trac.nchc.org.tw/cloud/wiki/Hadoop_Lab1 See Lab 1 for details]

== 1.3 Installing Hadoop ==

[http://trac.nchc.org.tw/cloud/wiki/Hadoop_Lab1 See Lab 1 for details]

== 1.4 Installing Eclipse ==

 * Get the Eclipse 3.3.2 archive (assumed already downloaded into /home/hadooper/tools/) and run the following commands:

{{{
$ cd ~/tools/
$ tar -zxvf eclipse-SDK-3.3.2-linux-gtk.tar.gz
$ sudo mv eclipse /opt
$ sudo ln -sf /opt/eclipse/eclipse /usr/local/bin/
}}}
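
 * With the symlink in place, eclipse should now be on the PATH; a quick check:

{{{
$ which eclipse
/usr/local/bin/eclipse
}}}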

= 2. Creating a Project =

== 2.1 Installing the Hadoop Eclipse Plugin ==

 * Copy the Hadoop Eclipse plugin into Eclipse's plugins directory:

{{{
$ cd /opt/hadoop
$ sudo cp /opt/hadoop/contrib/eclipse-plugin/hadoop-0.18.3-eclipse-plugin.jar /opt/eclipse/plugins
}}}
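
 * A quick check that the plugin is in place before starting Eclipse:

{{{
$ ls /opt/eclipse/plugins/ | grep hadoop
hadoop-0.18.3-eclipse-plugin.jar
}}}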

Note: you may also take a look at the contents of eclipse.ini (optional):

{{{
$ sudo cat /opt/eclipse/eclipse.ini
}}}

{{{
#!sh
-showsplash
org.eclipse.platform
-vmargs
-Xms40m
-Xmx256m
}}}

== 2.2 Launching Eclipse ==

 * Launch Eclipse:

{{{
$ eclipse &
}}}

On first launch you are asked where to put the workspace; here we keep the default.

[[Image(wiki:waue/2009/0617:2-1.png)]]
-------

'''Note: the steps from here on are performed in the Eclipse GUI.'''

-------

== 2.3 Choosing the Perspective ==

|| Window -> || Open Perspective -> || Other... -> || Map/Reduce ||

[[Image(wiki:waue/2009/0617:win-open-other.png)]]

-------

Select the Map/Reduce perspective.

[[Image(wiki:waue/2009/0617:2-2.png)]]

---------

The interface after switching to the Map/Reduce perspective:

[[Image(wiki:waue/2009/0617:2-3.png)]]

--------

== 2.4 Creating the Project ==

|| File -> || New -> || Project -> || Map/Reduce -> || Map/Reduce Project -> || Next ||
[[Image(wiki:waue/2009/0617:file-new-project.png)]]

--------

Creating the MapReduce project (1)

[[Image(wiki:waue/2009/0617:2-4.png)]]

-----------

Creating the MapReduce project (2)
{{{
#!sh
Project name -> enter: icas (any name will do)
Use default Hadoop -> Configure Hadoop install directory... -> enter: "/opt/hadoop" -> OK
Finish
}}}

[[Image(wiki:waue/2009/0617:2-4-2.png)]]

--------------

== 2.5 Configuring the Project ==

Since we just created the icas project, it now appears as a folder in the left pane. Right-click that folder and choose Properties.

--------------

Step 1. Right-click the project and choose Properties for the detailed settings.

[[Image(wiki:waue/2009/0617:2-5.png)]]

----------

Step 2. Enter the project's detailed settings page.

Configuring the Hadoop Javadoc (1)

[[Image(wiki:waue/2009/0617:2-5-1.png)]]

 * Java Build Path -> Libraries -> hadoop-0.18.3-ant.jar
 * Java Build Path -> Libraries -> hadoop-0.18.3-core.jar
 * Java Build Path -> Libraries -> hadoop-0.18.3-tools.jar
 * The settings for hadoop-0.18.3-core.jar are shown below; configure the others the same way.

{{{
#!sh
Source attachment -> enter: /opt/hadoop/src/core
Javadoc location  -> enter: file:/opt/hadoop/docs/api/
}}}

------------
Step 3. The Hadoop Javadoc settings when finished (2)
[[Image(wiki:waue/2009/0617:2-5-2.png)]]

------------
Step 4. Configuring the Javadoc for Java itself (3)

 * Javadoc location -> enter: file:/usr/lib/jvm/java-6-sun/docs/api/

[[Image(wiki:waue/2009/0617:2-5-3.png)]]

-----
Once done, return to the main Eclipse window.


== 2.6 Connecting to the Hadoop Server ==

--------
Step 1. In the "Map/Reduce Locations" tab at the bottom right (yellow elephant icon), click the blue elephant icon next to the gear:
[[Image(wiki:waue/2009/0617:2-6.png)]]

-------------
Step 2. Configure the connection between Eclipse and Hadoop (2)
[[Image(wiki:waue/2009/0617:2-6-1.png)]]

{{{
#!sh
Location Name -> enter: hadoop (any name will do)
Map/Reduce Master
  -> Host -> enter: localhost
  -> Port -> enter: 9001
DFS Master
  -> Port -> enter: 9000
Finish
}}}
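
The two ports must match the values Hadoop was configured with in Lab 1. Assuming the usual pseudo-distributed setup from Lab 1, /opt/hadoop/conf/hadoop-site.xml contains entries along these lines:

{{{
$ cat /opt/hadoop/conf/hadoop-site.xml
}}}

{{{
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
</property>
}}}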
----------------

After the setup you will see a new blue elephant location at the bottom, and expanding the folder tree on the left shows the file structure inside HDFS.
[[Image(wiki:waue/2009/0617:2-6-2.png)]]
-------------

= 3. Writing the Sample Program =

 * We created the icas project in Eclipse earlier, so its directory is:
   * /home/hadooper/workspace/icas
 * This directory contains two folders:
   * src: the source code
   * bin: the compiled class files
 * Keeping sources and compiled classes apart is a big help later when producing the jar file.
 * Here we will write a sample program: WordCount

== 3.1 mapper.java ==

1. New

|| File -> || New -> || Mapper ||
[[Image(wiki:waue/2009/0617:file-new-mapper.png)]]

-----------

2. Create

[[Image(wiki:waue/2009/0617:3-1.png)]]
{{{
#!sh
Source folder -> enter: icas/src
Package: Sample
Name: mapper
}}}
----------

3. Modify

{{{
#!java
package Sample;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class mapper extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
  private final static IntWritable one = new IntWritable(1);
  private Text word = new Text();

  // Split each input line into tokens and emit (word, 1) for every token
  public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
    String line = value.toString();
    StringTokenizer tokenizer = new StringTokenizer(line);
    while (tokenizer.hasMoreTokens()) {
      word.set(tokenizer.nextToken());
      output.collect(word, one);
    }
  }
}
}}}

After creating mapper.java, paste in the code above.
[[Image(wiki:waue/2009/0617:3-2.png)]]

------------

== 3.2 reducer.java ==

1. New

 * File -> New -> Reducer
[[Image(wiki:waue/2009/0617:file-new-reducer.png)]]

-------
2. Create
[[Image(wiki:waue/2009/0617:3-3.png)]]

{{{
#!sh
Source folder -> enter: icas/src
Package: Sample
Name: reducer
}}}

-----------

3. Modify

{{{
#!java
package Sample;

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class reducer extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
  // Sum the counts collected for each word and emit (word, total)
  public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
    int sum = 0;
    while (values.hasNext()) {
      sum += values.next().get();
    }
    output.collect(key, new IntWritable(sum));
  }
}
}}}
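
To see how the two classes cooperate: the mapper emits one (word, 1) pair per token of each input line, and the reducer then sums the values gathered for each word. A hand-traced illustration, not program output:

{{{
input line   : hello hello world
mapper emits : (hello, 1) (hello, 1) (world, 1)
reducer gets : hello -> [1, 1]   world -> [1]
reducer emits: (hello, 2) (world, 1)
}}}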

 * File -> New -> Map/Reduce Driver
[[Image(wiki:waue/2009/0617:file-new-mr-driver.png)]]
----------

== 3.3 WordCount.java (main function) ==

1. New

Create WordCount.java; this file drives the mapper and the reducer, so choose Map/Reduce Driver.

[[Image(wiki:waue/2009/0617:3-4.png)]]
------------

2. Create

{{{
#!sh
Source folder -> enter: icas/src
Package: Sample
Name: WordCount
}}}

-------
3. Modify

{{{
#!java
package Sample;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

public class WordCount {

  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(WordCount.class);
    conf.setJobName("wordcount");

    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);

    conf.setMapperClass(mapper.class);
    conf.setCombinerClass(reducer.class);
    conf.setReducerClass(reducer.class);

    conf.setInputFormat(TextInputFormat.class);
    conf.setOutputFormat(TextOutputFormat.class);

    // Take the input/output paths from the command line when given
    // (as the Makefile in section 4.2 does); otherwise fall back to defaults
    String input = args.length > 0 ? args[0] : "/user/hadooper/input";
    String output = args.length > 1 ? args[1] : "output";
    FileInputFormat.setInputPaths(conf, new Path(input));
    FileOutputFormat.setOutputPath(conf, new Path(output));

    JobClient.runJob(conf);
  }
}
}}}

Once all three files are written and saved, the whole program is complete.
[[Image(wiki:waue/2009/0617:3-5.png)]]

-------

 * With all three files saved, both src and bin under the icas project now contain generated files; let's check from the command line:

{{{
$ cd workspace/icas
$ ls src/Sample/
mapper.java reducer.java WordCount.java
$ ls bin/Sample/
mapper.class reducer.class WordCount.class
}}}

= 4. Testing the Sample Program =

Here are two ways to run the code we compiled in Eclipse.

Method 1 runs it directly through the Eclipse GUI; see 4.1.

Method 2 exports a jar file and drives it with a Makefile; see 4.2.
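
Both methods assume an input directory already exists on HDFS (Lab 1 prepared one). If yours does not, here is a minimal sketch to upload some text files first; using /opt/hadoop/conf as the source is just an example:

{{{
$ /opt/hadoop/bin/hadoop fs -put /opt/hadoop/conf input
$ /opt/hadoop/bin/hadoop fs -ls input
}}}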
== 4.1 Method 1: Run from Eclipse ==

 * Right-click the project folder icas -> Run As -> Run on Hadoop

[[Image(wiki:waue/2009/0617:run-on-hadoop.png)]]

== 4.2 Method 2: Build a JAR with a Makefile ==

 * Eclipse can export a jar file:

File -> Export -> Java -> JAR file [[br]]
-> Next ->
--------
Select the project to export ->
JAR file: /home/hadooper/mytest.jar -> [[br]]
Next ->
--------
Next ->
--------
Main class: select the class containing main -> [[br]]
Finish
--------

 * The steps above produce mytest.jar under /home/hadooper/.
 * But code changes often, and repeating these clicks every time gets tedious; let's see how '''the command line can be more convenient than the GUI''' here.

=== 4.2.1 Creating the Makefile ===
{{{
$ cd /home/hadooper/workspace/icas/
$ gedit Makefile
}}}

 * Enter the following Makefile content (note: the command lines under each target must be indented with a tab, not spaces)
{{{
JarFile="sample-0.1.jar"
MainFunc="Sample.WordCount"
LocalOutDir="/tmp/output"
HADOOP_BIN="/opt/hadoop/bin"

all:jar run output clean

jar:
	jar -cvf ${JarFile} -C bin/ .

run:
	${HADOOP_BIN}/hadoop jar ${JarFile} ${MainFunc} input output

clean:
	${HADOOP_BIN}/hadoop fs -rmr output

output:
	rm -rf ${LocalOutDir}
	${HADOOP_BIN}/hadoop fs -get output ${LocalOutDir}
	gedit ${LocalOutDir}/part-00000 &

help:
	@echo "Usage:"
	@echo " make jar - Build Jar File."
	@echo " make clean - Clean up Output directory on HDFS."
	@echo " make run - Run your MapReduce code on Hadoop."
	@echo " make output - Download and show output file"
	@echo " make help - Show Makefile options."
	@echo " "
	@echo "Example:"
	@echo " make jar; make run; make output; make clean"
}}}
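
 * For reference, make jar and make run boil down to these two commands, executed from the project directory:

{{{
$ jar -cvf sample-0.1.jar -C bin/ .
$ /opt/hadoop/bin/hadoop jar sample-0.1.jar Sample.WordCount input output
}}}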

 * Or simply download this [http://trac.nchc.org.tw/cloud/raw-attachment/wiki/Hadoop_Lab5/Makefile Makefile]:
{{{
$ cd /home/hadooper/workspace/icas/
$ wget http://trac.nchc.org.tw/cloud/raw-attachment/wiki/Hadoop_Lab5/Makefile
}}}

=== 4.2.2 Running ===

 * To use the Makefile, change into the project directory and run make [target]; if you are unsure of the targets, just run make or make help.
 * Usage of make:

{{{
$ cd /home/hadooper/workspace/icas/
$ make
Usage:
 make jar - Build Jar File.
 make clean - Clean up Output directory on HDFS.
 make run - Run your MapReduce code on Hadoop.
 make output - Download and show output file
 make help - Show Makefile options.

Example:
 make jar; make run; make output; make clean
}}}

 * The individual make targets are described below.

=== make jar ===
 * 1. Build the jar file

{{{
$ make jar
}}}

=== make run ===
 * 2. Run our WordCount on Hadoop

{{{
$ make run
}}}

 * make run should work through to the end without errors, which shows that the program we compiled in Eclipse runs correctly on the Hadoop 0.18.3 platform.

 * Back in the Eclipse window, the finished job appears in the bottom pane, and an output folder shows up in the left pane; part-00000 is our result file.

[[Image(wiki:waue/2009/0617:4-1.png)]]
------
 * Because the Javadoc was fully configured, detailed documentation and code assistance are available.
[[Image(wiki:waue/2009/0617:4-2.png)]]

=== make output ===
 * 3. This target downloads the result file from HDFS to the local machine and opens it with gedit.

{{{
$ make output
}}}
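
 * To just peek at the result directly on HDFS without downloading it, you can also use fs -cat:

{{{
$ /opt/hadoop/bin/hadoop fs -cat output/part-00000
}}}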

=== make clean ===
 * 4. This target removes the output folder on HDFS. If you want to run make run again, run make clean first; otherwise Hadoop will refuse to start the job because the output folder already exists!

{{{
$ make clean
}}}

= 5. Conclusion =

 * With Eclipse, we can develop for Hadoop much more efficiently.
 * Hadoop 0.20 changed both the API and the configuration relative to earlier versions; see [wiki:waue/2009/0617 hadoop 0.20 coding (eclipse)].

= 6. Exercise: Import a Project =
 * Import [http://trac.nchc.org.tw/cloud/raw-attachment/wiki/Hadoop_Lab5/hadoop_sample_codes.zip nchc-sample] into Eclipse and start developing!
[[WikiInclude(waue/2009/0617)]]