'''Open:''' We will demo "How to deploy GPFS nodes massively using Diskless Remote Boot Linux".

'''Context:''' ||We start with 7 nodes and 11 disks (6*160G + 5*320G = 2.5T), then we dynamically add 3 disks (1*160G + 2*320G = 0.8T). Finally, we will see 3.3T in our environment. ||

 * Installation
 * DRBL Test
{{{
$ sudo /opt/drbl/sbin/dcs
}}}
 * GPFS Test
{{{
$ sudo su
$ cat gpfs.nodes
gpfs00:quorum
gpfs01:quorum
gpfs02:
gpfs03:
gpfs04:
gpfs05:
gpfs06:
gpfs07:
$ mmcrcluster -N gpfs.nodes -p gpfs00 -s gpfs01 -r /usr/bin/ssh -R /usr/bin/scp
$ mmlscluster
$ mmlsnode -a
$ cat gpfs.disks
/dev/sda:gpfs01::dataAndMetadata::
/dev/sdb:gpfs01::dataAndMetadata::
/dev/sda:gpfs02::dataAndMetadata::
/dev/sdb:gpfs02::dataAndMetadata::
/dev/sda:gpfs03::dataAndMetadata::
/dev/sdb:gpfs03::dataAndMetadata::
/dev/sda:gpfs04::dataAndMetadata::
/dev/sdb:gpfs04::dataAndMetadata::
/dev/sda:gpfs05::dataAndMetadata::
/dev/sdb:gpfs05::dataAndMetadata::
/dev/sda:gpfs06::dataAndMetadata::
$ mmcrnsd -F gpfs.disks
($ mmcrnsd -F gpfs.disks -v no) --> use "-v no" if the disks were previously used for GPFS
$ mmlsnsd
$ mmstartup -a
$ mmgetstate -a
$ tsstatus
$ mmcrfs /home/gpfs_mount gpfs0 -F gpfs.disks -B 1024K -r 1 -R 2 -m 1 -M 2
($ mmcrfs /home/gpfs_mount gpfs0 -F gpfs.disks -B 1024K -r 2 -R 2 -m 2 -M 2) --> enable data replication
$ mmmount gpfs0 /home/gpfs_mount
$ df -h
$ cat gpfs.adddisks
/dev/sdb:gpfs06::dataAndMetadata::
/dev/sda:gpfs07::dataAndMetadata::
/dev/sdb:gpfs07::dataAndMetadata::
$ mmadddisk gpfs0 -F gpfs.adddisks -r
$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              19G  7.1G   11G  41% /
tmpfs                 1.5G     0  1.5G   0% /lib/init/rw
udev                   10M   68K   10M   1% /dev
tmpfs                 1.5G  8.0K  1.5G   1% /dev/shm
/dev/sdb1             294G   13G  266G   5% /home/mount
/dev/gpfs0            3.1T  473M  3.1T   1% /home/gpfs_mount
}}}

[[BR]]
[[BR]]
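The capacity figures quoted in the context (2.5T initial, 0.8T added, 3.3T total) can be sanity-checked with a quick bit of shell arithmetic. This is just a sketch of the arithmetic in decimal GB; the 3.1T shown by `df -h` afterwards is the usable figure, which is slightly lower than the raw total.

```shell
# Sanity-check the capacity figures quoted above (decimal GB).
initial=$(( 6*160 + 5*320 ))        # 2560 GB, roughly 2.5 TB
added=$(( 1*160 + 2*320 ))          # 800 GB, roughly 0.8 TB
total=$(( initial + added ))        # 3360 GB, roughly 3.3 TB
echo "initial=${initial}G added=${added}G total=${total}G"
```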
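The `gpfs.disks` NSD descriptor file shown above is regular enough that it could be generated with a small loop instead of typed by hand. A minimal sketch, assuming the demo's layout: nodes gpfs01 through gpfs05 each export `/dev/sda` and `/dev/sdb`, and gpfs06 initially exports `/dev/sda` only.

```shell
# Sketch: generate the 11-line gpfs.disks NSD descriptor file used above.
# Layout assumption: gpfs01..gpfs05 export sda+sdb; gpfs06 exports sda only.
for n in 01 02 03 04 05; do
  for d in sda sdb; do
    echo "/dev/$d:gpfs$n::dataAndMetadata::"
  done
done > gpfs.disks
echo "/dev/sda:gpfs06::dataAndMetadata::" >> gpfs.disks
wc -l gpfs.disks    # expect 11 descriptor lines
```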