
Hadoop Distributed File System Usage Guide, Part 2

Upgrade

  • Switching versions would otherwise replace the settings under conf/ as well, so the current approach is: move conf to /opt/conf and point both hadoop 0.16 and hadoop 0.18 at it with an ln symlink (a sketch of the link follows). Since conf is no longer inside hadoop_home, remember to source conf/hadoop-env.sh:
    $ source /opt/conf/hadoop-env.sh
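
  • A sketch of the symlink swap itself (the /opt/hadoop-0.16 and /opt/hadoop-0.18 install paths below are illustrative only):
    $ ln -s /opt/conf /opt/hadoop-0.16/conf
    $ ln -s /opt/conf /opt/hadoop-0.18/conf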
    
  • Check the upgrade status first
    $ bin/hadoop dfsadmin -upgradeProgress status
    
    There are no upgrades in progress.
    
  • Stop HDFS
    • Note: do not stop it with bin/stop-all.sh
      $ bin/stop-dfs.sh
      
  • Deploy the new version of Hadoop
    • Note: every node must run the same version, otherwise problems will occur
  • Start HDFS with the upgrade option
    $ bin/start-dfs.sh -upgrade
    
  • The namenode admin web page will show the upgrade status
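
  • Progress can also be watched from the command line, and once the new version has been verified the upgrade can be finalized so the pre-upgrade checkpoint is discarded (after finalizing, the rollback described below is no longer possible); a minimal sketch using the standard dfsadmin options:
    $ bin/hadoop dfsadmin -upgradeProgress details
    $ bin/hadoop dfsadmin -finalizeUpgrade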

Rollback

  • Stop the cluster
    $ bin/stop-dfs.sh
    
  • Deploy the old version of Hadoop
  • Roll back to the previous version
    $ bin/start-dfs.sh -rollback
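
  • To confirm the cluster came back up, the datanode summary can be checked; a quick sketch using the standard dfsadmin report option:
    $ bin/hadoop dfsadmin -report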
    

Checking the HDFS disk status

  • fsck checks the health of the entire file system and reports problems such as under-replicated or corrupt blocks
    $ bin/hadoop fsck /
    
    .
    /user/waue/input/1.txt:  Under replicated blk_-90085106852013388_1001. Target Replicas is 3 but found 2 replica(s).
    /user/waue/input/1.txt:  Under replicated blk_-4027196261436469955_1001. Target Replicas is 3 but found 2 replica(s).
    .
    /user/waue/input/2.txt:  Under replicated blk_-2300843106107816641_1002. Target Replicas is 3 but found 2 replica(s).
    .
    /user/waue/input/3.txt:  Under replicated blk_-1561577350198661966_1003. Target Replicas is 3 but found 2 replica(s).
    .
    /user/waue/input/4.txt:  Under replicated blk_1316726598778579026_1004. Target Replicas is 3 but found 2 replica(s).
    Status: HEALTHY
     Total size:	143451003 B
     Total dirs:	8
     Total files:	4
     Total blocks (validated):	5 (avg. block size 28690200 B)
     Minimally replicated blocks:	5 (100.0 %)
     Over-replicated blocks:	0 (0.0 %)
     Under-replicated blocks:	5 (100.0 %)
     Mis-replicated blocks:		0 (0.0 %)
     Default replication factor:	3
     Average block replication:	2.0
     Corrupt blocks:		0
     Missing replicas:		5 (50.0 %)
     Number of data-nodes:		2
     Number of racks:		1
    The filesystem under path '/' is HEALTHY
    
  • Different options serve different purposes; for example, -files lists every file that was checked:
    $ bin/hadoop fsck / -files
    
    /tmp <dir>
    /tmp/hadoop <dir>
    /tmp/hadoop/hadoop-waue <dir>
    /tmp/hadoop/hadoop-waue/mapred <dir>
    /tmp/hadoop/hadoop-waue/mapred/system <dir>
    /user <dir>
    /user/waue <dir>
    /user/waue/input <dir>
    /user/waue/input/1.txt 115045564 bytes, 2 block(s):  Under replicated blk_-90085106852013388_1001. Target Replicas is 3 but found 2 replica(s).
     Under replicated blk_-4027196261436469955_1001. Target Replicas is 3 but found 2 replica(s).
    /user/waue/input/2.txt 987864 bytes, 1 block(s):  Under replicated blk_-2300843106107816641_1002. Target Replicas is 3 but found 2 replica(s).
    /user/waue/input/3.txt 1573048 bytes, 1 block(s):  Under replicated blk_-1561577350198661966_1003. Target Replicas is 3 but found 2 replica(s).
    /user/waue/input/4.txt 25844527 bytes, 1 block(s):  Under replicated blk_1316726598778579026_1004. Target Replicas is 3 but found 2 replica(s).
    Status: HEALTHY
    ....(same as above)
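
  • Further options can be combined for more detail; for example -blocks, -locations and -racks (standard fsck options) also report each file's blocks and which datanodes and racks hold the replicas; a short sketch on the input folder used above:
    $ bin/hadoop fsck /user/waue/input -files -blocks -locations -racks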
    

Hadoop command reference

$ hadoop [--config confdir] [COMMAND] [GENERIC_OPTIONS] [COMMAND_OPTIONS]

[GENERIC_OPTIONS] :

-conf <configuration file> Specify the application configuration file.
-D <property=value> Set the given property to value.
-fs <local|namenode:port> Specify the namenode.
-jt <local|jobtracker:port> Specify the job tracker. Applies only to jobs.
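
For example, the generic options come right after the command name; a small sketch pointing the fs shell at this page's namenode and at the relocated configuration file (this assumes the command accepts the generic options, as commands built on Tool do):

$ bin/hadoop fs -conf /opt/conf/hadoop-site.xml -ls /user/waue
$ bin/hadoop fs -fs hdfs://gm1.nchc.org.tw:9000 -ls /user/waue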

archive

  • archive packs data into a single archive file; during packing, the directory structure of the archived content is also recorded in its index and masterindex files.
  • Since each file uploaded to HDFS occupies at least one block, the four files in my input folder each take up a block of their own; archiving them packs them together, so the number of blocks used is determined by the total packed size instead (see the fsck check at the end of this section).
  • hadoop archive -archiveName name <src>* <dest>
    $ bin/hadoop archive -archiveName foo.har input/* output
    09/04/02 14:02:30 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
    09/04/02 14:02:30 INFO mapred.JobClient: Running job: job_200904021140_0001
    09/04/02 14:02:31 INFO mapred.JobClient:  map 0% reduce 0%
    09/04/02 14:02:44 INFO mapred.JobClient:  map 20% reduce 0%
    09/04/02 14:02:49 INFO mapred.JobClient:  map 100% reduce 0%
    09/04/02 14:02:56 INFO mapred.JobClient: Job complete: job_200904021140_0001
    ... (truncated)
    
  • Inspect the file structure inside the har
    $ bin/hadoop dfs -lsr /user/waue/output/foo.har
    
  • View the contents of a file inside the har
    $ bin/hadoop dfs -cat /user/waue/output/foo.har/part-0
    
  • P.S. The form given in the official documentation, hadoop dfs -lsr har:///user/hadoop/output/foo.har, produces an error!
    lsr: could not get get listing for 'har:/user/waue/output/foo.har/user/waue' : File: har://hdfs-gm1.nchc.org.tw:9000/user/waue/output/foo.har/user/waue/input does not exist in har:///user/waue/output/foo.har
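
  • To verify the block savings mentioned earlier, the fsck command from the previous section can be run against the archive directory; a short sketch combining options already shown on this page:
    $ bin/hadoop fsck /user/waue/output/foo.har -files -blocks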
    
    

distCp

  • A tool for copying data at scale, both within a cluster and between clusters
  • Uses Map/Reduce for file distribution, error handling and recovery, and report generation
  • For example:
    hadoop distcp hdfs://nn1:8020/foo/bar hdfs://nn2:8020/bar/foo
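
  • Extra options adjust how the copy behaves; for instance -update (copy only files that are missing or differ) and -overwrite are standard distcp options; a brief sketch reusing the paths above:
    $ bin/hadoop distcp -update hdfs://nn1:8020/foo/bar hdfs://nn2:8020/bar/foo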
    

?? However, port 8020 is not actually open on our machines, and shouldn't the file already be spread evenly across the nodes? How does distcp know that nn1 has this file and that it should be copied to nn2? (Note: 8020 is just the default namenode RPC port used in the official example and should be replaced by whatever fs.default.name specifies, e.g. 9000 here; distcp talks to the namenodes rather than to individual datanodes, and the source namenode supplies the block locations from which the copy's map tasks read the data.)