= How to deploy GPFS nodes massively using DRBL =

rock, rock@nchc.org.tw

[[BR]]
[[BR]]
== 1. Introduction ==
As time goes by, people's digital data needs ever more space to store. Hard disk capacities keep growing, but a single disk still has physical limits. GPFS (General Parallel File System) is a high-performance shared-disk file system that provides a virtual view merging multiple disks into one large disk. DRBL is a diskless remote boot mechanism: you install the OS and essential software only on the DRBL server and nothing on the clients (a client just enables PXE in its BIOS and, after a reboot, joins the DRBL environment). In this article we use DRBL to massively deploy GPFS nodes, which has two advantages: (1) you can use DRBL commands to manage your storage cluster, and (2) you can use GPFS to make effective use of the clients' disks (if your clients have disks).

[[BR]]
[[BR]]
== 2. Software ==
We use the following software:
 * Debian
||Linux distribution.
http://www.debian.org/||

 * GPFS
||The IBM General Parallel File System™ (GPFS™) is a high-performance shared-disk file management solution that provides fast, reliable access to a common set of file data from two computers to hundreds of systems. GPFS integrates into your environment by bringing together mixed server and storage components to provide a common view to enterprise file data. GPFS provides online storage management, scalable access and integrated information lifecycle tools capable of managing petabytes of data and billions of files.
http://www-03.ibm.com/systems/clusters/software/gpfs/index.html||

 * DRBL
||Diskless Remote Boot in Linux (DRBL) provides a diskless or systemless environment for client machines. It works on Debian, Ubuntu, Mandriva, Red Hat, Fedora, CentOS and SuSE. DRBL uses distributed hardware resources and makes it possible for clients to fully access local hardware. It also includes Clonezilla, a partitioning and disk cloning utility similar to Symantec Ghost®.
http://drbl.sourceforge.net/||

[[BR]]
[[BR]]
== 3. Install GPFS ==
=== 3.1 Install required packages ===
{{{
$ sudo aptitude install ksh xutils-dev alien
$ sudo aptitude install libstdc++5-3.3-dev
}}}

=== 3.2 Configure the Linux Environment ===
{{{
Because GPFS only supports the SuSE and Red Hat environments, some paths must be adjusted on Debian.
$ sudo ln -s /usr/bin /usr/X11R6/bin
$ sudo ln -s /usr/bin/sort /bin/sort
$ sudo ln -s /usr/bin/awk /bin/awk
$ sudo ln -s /usr/bin/grep /bin/grep
$ sudo ln -s /usr/bin/rpm /bin/rpm
}}}

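If you run the setup more than once, ln -s will fail on links that already exist. A minimal sketch that creates each link only when it is missing (same paths as above):
{{{
#!/bin/sh
# Create the compatibility links GPFS expects, skipping any that already exist.
[ -e /usr/X11R6/bin ] || sudo ln -s /usr/bin /usr/X11R6/bin
for tool in sort awk grep rpm; do
    [ -e /bin/$tool ] || sudo ln -s /usr/bin/$tool /bin/$tool
done
}}}
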
=== 3.3 Download GPFS ===
 * Method 1: (Download GPFS 3.1 from http://www14.software.ibm.com/webapp/set2/sas/f/gpfs/download/systemx.html ; this version supports the 2.6.18 kernel.)
{{{
$ sudo alien *.rpm
$ sudo dpkg -i *.deb
$ wget http://0rz.tw/e13Jo
$ sudo ./gpfs.shell
}}}

 * Method 2: (Use our team's patched GPFS 3.1, which supports the 2.6.20 kernel. Download it from http://0rz.tw/a63rY .)
{{{
$ cd /usr
$ sudo wget http://0rz.tw/a63rY
$ sudo tar zxvf gpfs_ker262015_v0625.tar.gz
}}}

=== 3.4 Configure & Install GPFS ===
{{{
$ cd /usr/lpp/mmfs/src/config/
$ sudo cp site.mcr.proto site.mcr
$ sudo vim site.mcr
edit the lines below:
(
LINUX_DISTRIBUTION = KERNEL_ORG_LINUX
#define LINUX_KERNEL_VERSION 2062015
)

$ su
$ vim ~/.bashrc
add the lines below, then log in again as root:
(
export PATH=$PATH:/usr/lpp/mmfs/bin
export SHARKCLONEROOT=/usr/lpp/mmfs/src
)

$ cd /usr/lpp/mmfs/src
$ make World
$ make InstallImages
}}}

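The value of LINUX_KERNEL_VERSION encodes the running kernel: 2062015 stands for 2.6.20-15 (one digit for the major version, two digits each for the minor version, patch level and Debian revision). A small helper sketch, assuming a Debian-style `X.Y.Z-N-...` string from uname -r:
{{{
#!/bin/sh
# Turn e.g. "2.6.20-15-server" into "2062015" for site.mcr.
KVER=$(uname -r)
MAJOR=$(echo $KVER | cut -d. -f1)                 # 2
MINOR=$(echo $KVER | cut -d. -f2)                 # 6
PATCH=$(echo $KVER | cut -d. -f3 | cut -d- -f1)   # 20
EXTRA=$(echo $KVER | cut -d- -f2)                 # 15
printf '%d%02d%02d%02d\n' $MAJOR $MINOR $PATCH $EXTRA   # 2062015
}}}
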
[[BR]]
[[BR]]
== 4. Install DRBL ==
=== 4.1 Add the APT source ===
{{{
$ sudo vim /etc/apt/sources.list
add the line below:
(deb http://free.nchc.org.tw/drbl-core drbl stable)

$ wget http://drbl.nchc.org.tw/GPG-KEY-DRBL
$ sudo apt-key add GPG-KEY-DRBL
$ sudo apt-get update
}}}

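To confirm that the repository and key were added correctly, you can ask APT where it would fetch the drbl package from:
{{{
$ apt-cache policy drbl
(the candidate version should be listed with http://free.nchc.org.tw/drbl-core as its source)
}}}
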
=== 4.2 Install DRBL ===
{{{
Before we install DRBL, we must plan our DRBL environment clearly. The layout below is ours: eth0 connects to the WAN, and eth1 serves the internal DRBL clients.

         NIC     NIC IP                    Clients
+------------------------------+
|         DRBL SERVER          |
|                              |
| +-- [eth0] 140.110.X.X       +- to WAN
|                              |
| +-- [eth1] 192.168.1.254     +- to clients group 1 [ 7 clients, their IPs from 192.168.1.1 to 192.168.1.7 ]
|                              |
+------------------------------+

$ sudo aptitude install drbl
(DRBL will be installed in the directory /opt/drbl )

$ sudo /opt/drbl/sbin/drblsrv -i
$ sudo /opt/drbl/sbin/drblpush-offline -s `uname -r`
(This command guides the user through the installation interactively. It installs the related packages (NFS, DHCP, TFTP, ...) and creates the /tftpboot directory, which includes:
nbi_img: kernel, initrd image and GRUB menu
node_root: a copy of the server's directories
nodes: each node's individual directories)

$ sudo /opt/drbl/sbin/drblpush -i
(This command deploys the client environment: client names, DRBL mode, swap, ...)
}}}

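The layout above assumes eth1 already has the static address 192.168.1.254 before drblpush runs. A minimal sketch of a matching Debian /etc/network/interfaces; the WAN address 140.110.X.X is a placeholder as above, and the gateway is hypothetical, so adjust both to your site:
{{{
# /etc/network/interfaces (fragment)
auto eth0
iface eth0 inet static
    address 140.110.X.X       # WAN-facing address (placeholder)
    netmask 255.255.255.0
    gateway 140.110.X.254     # hypothetical gateway, adjust to your site

auto eth1
iface eth1 inet static
    address 192.168.1.254     # DRBL-internal address for the clients
    netmask 255.255.255.0
}}}
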
[[BR]]
[[BR]]
== 5. Test DRBL and GPFS ==
=== 5.1 Set up automatic login in the DRBL environment ===
{{{
GPFS commands must be executed as root.
$ su
$ ssh {client_node}
(The server must SSH to every node once to accept each host's authenticity, e.g. ssh gpfs01 .)

$ ssh-keygen -t rsa
(Every node needs this step.)

$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ cat /tftpboot/nodes/{client ip}/root/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
(Every node's public key must be appended to this authorized_keys file, e.g. cat /tftpboot/nodes/192.168.1.1/root/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys .)

$ cp ~/.ssh/authorized_keys /tftpboot/nodes/{client ip}/root/.ssh/
$ cp ~/.ssh/known_hosts /tftpboot/nodes/{client ip}/root/.ssh/
(These two files must be copied to every node.)
}}}

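Repeating the cat and cp steps for seven clients is tedious. A minimal sketch that automates them on the server, assuming the node directories live under /tftpboot/nodes/ as shown above and that every node has already run ssh-keygen:
{{{
#!/bin/sh
# Run as root on the DRBL server.
# Gather the server's and every node's public key into one authorized_keys,
# then push authorized_keys and known_hosts back to every node directory.
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
for dir in /tftpboot/nodes/*/root/.ssh; do
    cat $dir/id_rsa.pub >> ~/.ssh/authorized_keys
done
for dir in /tftpboot/nodes/*/root/.ssh; do
    cp ~/.ssh/authorized_keys ~/.ssh/known_hosts $dir/
done
}}}
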
=== 5.2 Configure the GPFS Environment ===
{{{
First, check your /etc/hosts to know your machines' information.
This is our /etc/hosts content:
(
192.168.1.254 gpfs00
192.168.1.1 gpfs01
192.168.1.2 gpfs02
192.168.1.3 gpfs03
192.168.1.4 gpfs04
192.168.1.5 gpfs05
192.168.1.6 gpfs06
192.168.1.7 gpfs07
)

$ mkdir /home/gpfs

$ vim gpfs.nodes
Edit your node information; you can refer to your /etc/hosts.
You can assign "quorum" to your GPFS servers.
(
gpfs00:quorum
gpfs01:quorum
gpfs02:
gpfs03:
gpfs04:
gpfs05:
gpfs06:
gpfs07:
)

$ vim gpfs.disks
Before editing this file, you must know how many hard disks you want to use. Because our environment is DRBL, we can use all of the clients' disks.
This is our disk information:
(
/dev/sda:gpfs01::dataAndMetadata::
/dev/sdb:gpfs01::dataAndMetadata::
/dev/sda:gpfs02::dataAndMetadata::
/dev/sdb:gpfs02::dataAndMetadata::
/dev/sda:gpfs03::dataAndMetadata::
/dev/sdb:gpfs03::dataAndMetadata::
/dev/sda:gpfs04::dataAndMetadata::
/dev/sdb:gpfs04::dataAndMetadata::
/dev/sda:gpfs05::dataAndMetadata::
/dev/sdb:gpfs05::dataAndMetadata::
/dev/sda:gpfs06::dataAndMetadata::
/dev/sdb:gpfs06::dataAndMetadata::
/dev/sda:gpfs07::dataAndMetadata::
/dev/sdb:gpfs07::dataAndMetadata::
)
}}}

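With a regular naming scheme like ours, gpfs.disks can be generated instead of typed by hand. A minimal sketch, assuming nodes gpfs01 to gpfs07 each contribute /dev/sda and /dev/sdb as above:
{{{
#!/bin/sh
# Write one dataAndMetadata descriptor per disk per node into gpfs.disks.
> gpfs.disks
for i in 1 2 3 4 5 6 7; do
    for disk in sda sdb; do
        printf '/dev/%s:gpfs0%d::dataAndMetadata::\n' $disk $i >> gpfs.disks
    done
done
}}}
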
[[BR]]
[[BR]]
=== 5.3 Run GPFS ===
{{{
$ cd /home/gpfs
$ mmcrcluster -n gpfs.nodes -p gpfs00 -s gpfs01 -r `which ssh` -R `which scp`
(
-n: Node file
-p: Primary server
-s: Secondary server
-r: Remote shell
-R: Remote copy
)

$ mmlscluster
$ mmlsnode
These two commands show your GPFS node information and let you verify the result of mmcrcluster.
Our display:
(
gpfs-server:/home/gpfs# mmlsnode
GPFS nodeset    Node list
-------------   -------------------------------------------------------
   gpfs00       gpfs00 gpfs01 gpfs02 gpfs03 gpfs04 gpfs05 gpfs06 gpfs07
)

$ mmcrnsd -F gpfs.disks
(
-F: disk file
)
Set up your disks.

$ mmlsnsd
Check your disk information.
Our display:
(
gpfs-server:/home/gpfs# mmlsnsd
File system   Disk name    Primary node             Backup node
---------------------------------------------------------------------------
 gpfs0         gpfs1nsd     gpfs01
 gpfs0         gpfs2nsd     gpfs01
 gpfs0         gpfs3nsd     gpfs02
 gpfs0         gpfs4nsd     gpfs02
 gpfs0         gpfs5nsd     gpfs03
 gpfs0         gpfs6nsd     gpfs03
 gpfs0         gpfs7nsd     gpfs04
 gpfs0         gpfs8nsd     gpfs04
 gpfs0         gpfs9nsd     gpfs05
 gpfs0         gpfs10nsd    gpfs05
 gpfs0         gpfs11nsd    gpfs06
 gpfs0         gpfs12nsd    gpfs06
 gpfs0         gpfs13nsd    gpfs07
 gpfs0         gpfs14nsd    gpfs07
)

$ mmstartup -a
This command loads the GPFS kernel modules and starts the GPFS service on all nodes.

$ mmgetstate
$ tsstatus
These two commands check your GPFS service status.
}}}

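Before creating the file system, every node should report the "active" state. mmgetstate takes -a to query all nodes at once, which makes a quick cluster-wide check possible:
{{{
$ mmgetstate -a
(every node should show "active"; if one stays "down", check that its GPFS
 kernel modules are loaded, e.g. ssh gpfs01 lsmod | grep mmfs)
}}}
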
=== 5.4 Mount GPFS and Enjoy the Large Space ===
{{{
$ mmcrfs /home/gpfs_mount gpfs0 -F gpfs.disks -B 1024K -m 1 -M 2 -r 1 -R 2
(
-F: Disk file
-B: Block size
-m: Default metadata replicas
-M: Max metadata replicas
-r: Default data replicas
-R: Max data replicas
If you want fault tolerance, set the -m and -r values to 2.
)

$ mmmount /dev/gpfs0 /home/gpfs_mount -a

$ df
Check your disk volume. Below is our display:
(
gpfs-server:/home/gpfs# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              19G  7.2G   11G  41% /
tmpfs                 1.5G     0  1.5G   0% /lib/init/rw
udev                   10M   68K   10M   1% /dev
tmpfs                 1.5G  8.0K  1.5G   1% /dev/shm
/dev/sdb1             294G   13G  266G   5% /home/mount
/dev/gpfs0            3.1T  137G  3.0T   5% /home/gpfs_mount
)
}}}

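As a final sanity check, write a file through the new mount, confirm it is visible from another node, and get a rough feel for the throughput with dd; this assumes the /home/gpfs_mount mount point from above:
{{{
$ dd if=/dev/zero of=/home/gpfs_mount/test.img bs=1M count=1024
(writes a 1 GB test file through GPFS; dd reports the write speed)
$ ssh gpfs01 ls -l /home/gpfs_mount/test.img
(the same file should be visible from every node)
$ rm /home/gpfs_mount/test.img
}}}
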
[[BR]]
[[BR]]
== 6. Reference ==
IBM GPFS, http://www-03.ibm.com/systems/clusters/software/gpfs/index.html
DRBL, http://drbl.sourceforge.net/