Changes between Version 1 and Version 2 of NCTU110329/Lab4


Timestamp:
Apr 19, 2011, 8:48:49 AM
Author:
jazz

  • NCTU110329/Lab4

--- v1
+++ v2
 }}}
 
+ * 以下練習,請連線至 hadoop.nchc.org.tw 操作。[[BR]]For the following exercises, please connect to hadoop.nchc.org.tw.
+
 == Content 1: HDFS Shell 基本操作 ==
 == Content 1: Basic HDFS Shell Commands ==
     
 
 {{{
-/opt/hadoop$ bin/hadoop fs -ls
-/opt/hadoop$ bin/hadoop fs -lsr
+~$ hadoop fs -ls
+~$ hadoop fs -lsr
 }}}
 
     
 
 {{{
-/opt/hadoop$ bin/hadoop fs -put conf input
+~$ hadoop fs -put conf input
 }}}
 
     
 
 {{{
-/opt/hadoop$ bin/hadoop fs -ls
-/opt/hadoop$ bin/hadoop fs -ls input
+~$ hadoop fs -ls
+~$ hadoop fs -ls input
 }}}
 
     
 
 {{{
-/opt/hadoop$ bin/hadoop fs -get input fromHDFS
+~$ hadoop fs -get input fromHDFS
 }}}
 
     
 
 {{{
-/opt/hadoop$ ls -al | grep fromHDFS
-/opt/hadoop$ ls -al fromHDFS
+~$ ls -al | grep fromHDFS
+~$ ls -al fromHDFS
 }}}
 
     
 
 {{{
-/opt/hadoop$ bin/hadoop fs -ls input
-/opt/hadoop$ bin/hadoop fs -rm input/masters
+~$ hadoop fs -ls input
+~$ hadoop fs -rm input/masters
 }}}
 
     
 
 {{{
-/opt/hadoop$ bin/hadoop fs -ls input
-/opt/hadoop$ bin/hadoop fs -cat input/slaves
+~$ hadoop fs -ls input
+~$ hadoop fs -cat input/slaves
 }}}
 
     
 
 {{{
-hadooper@vPro:/opt/hadoop$ bin/hadoop fs
+hadooper@vPro:~$ hadoop fs
 
 Usage: java FsShell
     
 -archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.
 The general command line syntax is
-bin/hadoop command [genericOptions] [commandOptions]
+hadoop command [genericOptions] [commandOptions]
 }}}
 
     
 == Content 3: More about HDFS Shell ==
 
- * bin/hadoop fs <args> ,下面則列出 <args> 的用法[[BR]]Following are the examples of hadoop fs related commands.
+ * hadoop fs <args> ,下面則列出 <args> 的用法[[BR]]Following are the examples of hadoop fs related commands.
  * 以下操作預設的目錄在 /user/<$username>/ 下[[BR]]By default, your working directory will be at /user/<$username>/.
 {{{
-$ bin/hadoop fs -ls input
+$ hadoop fs -ls input
 Found 4 items
 -rw-r--r--   2 hadooper supergroup  115045564 2009-04-02 11:51 /user/hadooper/input/1.txt
     
  * 完整的路徑則是 '''hdfs://node:port/path''' 如:[[BR]]Or you have to give an __''absolute path''__, such as '''hdfs://node:port/path'''
 {{{
-$ bin/hadoop fs -ls hdfs://gm1.nchc.org.tw:9000/user/hadooper/input
+$ hadoop fs -ls hdfs://gm1.nchc.org.tw:9000/user/hadooper/input
 Found 4 items
 -rw-r--r--   2 hadooper supergroup  115045564 2009-04-02 11:51 /user/hadooper/input/1.txt
     
  * 將路徑指定文件的內容輸出到 STDOUT [[BR]] Print given file content to STDOUT
 {{{
-$ bin/hadoop fs -cat quota/hadoop-env.sh
+$ hadoop fs -cat quota/hadoop-env.sh
 }}}
 
     
  * 改變文件所屬的組 [[BR]] Change '''owner group''' of given file or folder
 {{{
-$ bin/hadoop fs -chgrp -R hadooper own
+$ hadoop fs -chgrp -R hadooper own
 }}}
 
     
  * 改變文件的權限 [[BR]] Change '''read and write permission''' of given file or folder
 {{{
-$ bin/hadoop fs -chmod -R 755 own
+$ hadoop fs -chmod -R 755 own
 }}}
 
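The octal mode 755 used in the chmod example above follows the same notation as POSIX chmod. As a local-shell sketch (plain chmod on a scratch directory, not hadoop fs; `stat -c` assumes GNU coreutils), the digits decode to rwx bits:

```shell
# Local analogy only: hadoop fs -chmod accepts the same octal modes as POSIX chmod.
d=$(mktemp -d)          # scratch directory
chmod 755 "$d"          # 7 = rwx (owner), 5 = r-x (group), 5 = r-x (others)
stat -c '%a %A' "$d"    # prints the octal and symbolic form, e.g. "755 drwxr-xr-x"
```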
     
  * 改變文件的擁有者 [[BR]] Change '''owner''' of given file or folder
 {{{
-$ bin/hadoop fs -chown -R hadooper own
+$ hadoop fs -chown -R hadooper own
 }}}
 
     
  * 從 local 放檔案到 hdfs [[BR]] Both commands will copy given file or folder from local to HDFS
 {{{
-$ bin/hadoop fs -put input dfs_input
+$ hadoop fs -put input dfs_input
 }}}
 
     
  * 把hdfs上的檔案下載到 local [[BR]] Both commands will copy given file or folder from HDFS to local
 {{{
-$ bin/hadoop fs -get dfs_input input1
+$ hadoop fs -get dfs_input input1
 }}}
 
     
  * 將文件從 hdfs 原本路徑複製到 hdfs 目標路徑 [[BR]] Copy given file or folder from HDFS source path to HDFS target path
 {{{
-$ bin/hadoop fs -cp own hadooper
+$ hadoop fs -cp own hadooper
 }}}
 
     
  * 顯示目錄中所有文件的大小 [[BR]] Display the size of files in given folder
 {{{
-$ bin/hadoop fs -du input
+$ hadoop fs -du input
 
 Found 4 items
     
  * 顯示該目錄/文件的總大小 [[BR]] Display total size of given folder
 {{{
-$ bin/hadoop fs -dus input
+$ hadoop fs -dus input
 
 hdfs://gm1.nchc.org.tw:9000/user/hadooper/input 143451003
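The -du / -dus pair above mirrors the local du command: per-entry sizes versus one summarized total. A minimal local-shell sketch (plain du on scratch files, not hadoop fs; `-b` assumes GNU du):

```shell
# Local analogy only: fs -du lists sizes per entry, fs -dus prints one total,
# much like du vs du -s on a local directory.
d=$(mktemp -d)
printf 'aaaa' > "$d/one"   # 4 bytes
printf 'bb'   > "$d/two"   # 2 bytes
du -b "$d"/*               # per-file apparent sizes in bytes
du -bs "$d"                # one summarized total for the whole folder
```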
     
  * 清空垃圾桶 [[BR]] Empty the trash
 {{{
-$ bin/hadoop fs -expunge
+$ hadoop fs -expunge
 }}}
 
     
  * 將來源目錄<src>下所有的文件都集合到本地端一個<localdst>檔案內 [[BR]] Merge all files in HDFS source folder <src> into one local file
 {{{
-$ bin/hadoop fs -getmerge <src> <localdst>
+$ hadoop fs -getmerge <src> <localdst>
 }}}
 {{{
 $ echo "this is one; " >> in1/input
 $ echo "this is two; " >> in1/input2
-$ bin/hadoop fs -put in1 in1
-$ bin/hadoop fs -getmerge in1 merge.txt
+$ hadoop fs -put in1 in1
+$ hadoop fs -getmerge in1 merge.txt
 $ cat ./merge.txt
 }}}
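What the getmerge example above should print can be sketched entirely locally: concatenating every file under in1 into one file (assuming, as Hadoop does for directory listings, sorted name order):

```shell
# Local sketch of -getmerge semantics, no hadoop required:
# concatenate all files under in1 into a single local file.
mkdir -p in1
echo "this is one; " >> in1/input
echo "this is two; " >> in1/input2
cat in1/* > merge.txt   # stands in for: hadoop fs -getmerge in1 merge.txt
cat ./merge.txt         # both lines appear, input before input2
```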
     
  * 目錄名 <dir> 修改日期 修改時間 權限 用戶ID 組ID [[BR]] <folder name> <modified date> <modified time> <permission> <user id> <group id>
 {{{
-$ bin/hadoop fs -ls
+$ hadoop fs -ls
 }}}
 
     
  * ls 命令的遞迴版本 [[BR]] Recursive version of the ls command
 {{{
-$ bin/hadoop fs -lsr /
+$ hadoop fs -lsr /
 }}}
 
     
  * 建立資料夾 [[BR]] Create directories
 {{{
-$ bin/hadoop fs -mkdir a b c
+$ hadoop fs -mkdir a b c
 }}}
 
     
  * 將 local 端的資料夾剪下移動到 hdfs 上 [[BR]] Move local files or folders to HDFS (the local copy is deleted)
 {{{
-$ bin/hadoop fs -moveFromLocal in1 in2
+$ hadoop fs -moveFromLocal in1 in2
 }}}
 
     
  * 更改資料的名稱 [[BR]] Change file name or folder name.
 {{{
-$ bin/hadoop fs -mv in2 in3
+$ hadoop fs -mv in2 in3
 }}}
 
     
  * 刪除指定的檔案(不可資料夾)[[BR]] Remove given files (not folders)
 {{{
-$ bin/hadoop fs -rm in1/input
+$ hadoop fs -rm in1/input
 }}}
 === -rmr ===
     
  * 遞迴刪除資料夾(包含在內的所有檔案) [[BR]] Remove given folders and all files within them recursively
 {{{
-$ bin/hadoop fs -rmr in1
+$ hadoop fs -rmr in1
 }}}
 
     
  * 設定副本係數 [[BR]] Set the replication factor of given files or folder
 {{{
-$ bin/hadoop fs -setrep [-R] [-w] <rep> <path/file>
-}}}
-{{{
-$ bin/hadoop fs -setrep -w 2 -R input
+$ hadoop fs -setrep [-R] [-w] <rep> <path/file>
+}}}
+{{{
+$ hadoop fs -setrep -w 2 -R input
 Replication 2 set: hdfs://gm1.nchc.org.tw:9000/user/hadooper/input/1.txt
 Replication 2 set: hdfs://gm1.nchc.org.tw:9000/user/hadooper/input/2.txt
     
  * 印出時間資訊 [[BR]] Print the time stamp of given folder
 {{{
-$ bin/hadoop fs -stat input
+$ hadoop fs -stat input
 2009-04-02 03:51:29
 }}}
     
  * 用法  Usage
 {{{
-bin/hadoop fs -tail [-f] 檔案 (-f 參數用來顯示如果檔案增大,則秀出被append上得內容)
-bin/hadoop fs -tail [-f] <path/file> (-f is used when file had appended)
-}}}
-{{{
-$ bin/hadoop fs -tail input/1.txt
+hadoop fs -tail [-f] 檔案 (-f 參數用來顯示如果檔案增大,則秀出被append上的內容)
+hadoop fs -tail [-f] <path/file> (-f keeps printing content appended to the file as it grows)
+}}}
+{{{
+$ hadoop fs -tail input/1.txt
 }}}
 
     
  * 用法 Usage
 {{{
-$ bin/hadoop fs -test -[ezd] URI
-}}}
- 
-{{{
-$ bin/hadoop fs -test -e /user/hadooper/input/5.txt
-$ bin/hadoop fs -test -z /user/hadooper/input/5.txt
+$ hadoop fs -test -[ezd] URI
+}}}
+ 
+{{{
+$ hadoop fs -test -e /user/hadooper/input/5.txt
+$ hadoop fs -test -z /user/hadooper/input/5.txt
 test: File does not exist: /user/hadooper/input/5.txt
-$ bin/hadoop fs -test -d /user/hadooper/input/5.txt
+$ hadoop fs -test -d /user/hadooper/input/5.txt
 
 test: File does not exist: /user/hadooper/input/5.txt
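Like hadoop fs -test above, the local POSIX test utility answers yes/no questions through its exit status (0 = true). A minimal local-shell sketch of the same -e / -d checks (the path /no/such/file is just an illustrative missing path):

```shell
# Local analogy only: POSIX test signals its answer via exit status, like fs -test.
f=$(mktemp)                                        # an existing regular file
if test -e "$f"; then echo "exists"; fi            # prints: exists
if ! test -d "$f"; then echo "not a directory"; fi # prints: not a directory
if ! test -e /no/such/file; then echo "missing"; fi # prints: missing
```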
     
  * ps : 目前沒支援zip的函式庫 [[BR]] PS. It does not support zip files yet.
 {{{
-$ bin/hadoop fs -text b/a.txt.zip
+$ hadoop fs -text b/a.txt.zip
 PK
 ���:��H{
     
  * 建立一個空文件 [[BR]] Create an empty file
 {{{
-$ bin/hadoop fs -touchz b/kk
-$ bin/hadoop fs -test -z b/kk
+$ hadoop fs -touchz b/kk
+$ hadoop fs -test -z b/kk
 $ echo $?
 1
-$ bin/hadoop fs -test -z b/a.txt.zip
+$ hadoop fs -test -z b/a.txt.zip
 $ echo $?
 0
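The touchz example above can be mimicked locally: touch (or mktemp) creates a zero-length file, and the local test -s check ("is this file non-empty?", exit 0 = non-empty) is roughly the dual of fs -test -z. A sketch, not hadoop itself:

```shell
# Local analogy only: create an empty file, then check its size via exit status.
f=$(mktemp)                                     # mktemp creates a zero-length file
if ! test -s "$f"; then echo "empty"; fi        # prints: empty
echo data > "$f"
if test -s "$f"; then echo "non-empty"; fi      # prints: non-empty
```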