[[PageOutline]]

= GPFS Performance Report (using the iozone command) =
zsjheng, rock
[[BR]]
[[BR]]

== 0. Machine Information ==
||Node ||8 nodes (1 server, 7 clients providing disks)||
||CPU ||Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz (each node)||
||Memory ||2GB DDR2 667 (each node)||
||Disk ||320GB + 160GB (each node); all nodes: (320GB + 160GB) * 7 = 3.36TB||
||NIC ||Intel Corporation 82566DM Gigabit Network Connection||
||Switch ||D-Link 24-port GE switch||
[[BR]]
[[BR]]

== 1. 8 Nodes, No Replication, Adjusted Parameters ==
The iozone options are: -a runs the full automatic test set, -R produces an Excel-compatible report, -b writes that report to the given file, and -g 16g caps the maximum test file size at 16 GB.
{{{
$ iozone -g 16g -aRb test.wks
}}}
[[BR]]

== 2. 8 Nodes, With Replication, Adjusted Parameters ==
{{{
$ iozone -g 16g -aRb test2.wks
}}}
[[BR]]

== 3. Compare ==
The following six diagrams show the results of using IOzone as the benchmark to evaluate how GPFS performs with and without data replication. The six I/O operations covered by the evaluation are write, re-write, read, re-read, random read, and random write. In each diagram, the left surface chart represents the test with data replication enabled, and the right one the test without.

The first diagram shows the write operation test on GPFS. There is a difference of about 130 MB/s in maximum throughput, and the maximum throughput of the left chart (replication enabled) is higher than that of the right. This result is contrary to our expectation that writing without data replication should perform better than writing with replication enabled.

[[Image(cmp_1.jpg)]]
[[Image(cmp_2.jpg)]]
[[Image(cmp_3.jpg)]]
[[Image(cmp_4.jpg)]]
[[Image(cmp_5.jpg)]]
[[Image(cmp_6.jpg)]]
[[BR]]

== 4. Running Information ==
{{{
gpfs-server:/home/gpfs_mount# ls
iozone.tmp  iozone.tmp.DUMMY  test.wks
gpfs-server:/home/gpfs_mount# du -h
69M     .
gpfs-server:/home/gpfs_mount# du -h
129M    .
gpfs-server:/home/gpfs_mount# du -h
129M    .
gpfs-server:/home/gpfs_mount# du -h
25M     .
gpfs-server:/home/gpfs_mount# du -h
129M    .
gpfs-server:/home/gpfs_mount# du -h
107M    .
gpfs-server:/home/gpfs_mount# dstat -cdn -M gpfs -N eth1
----total-cpu-usage---- -dsk/total- --net/eth1- --gpfs-i/o-
usr sys idl wai hiq siq| read  writ| recv  send| read write
  0   0  98   0   0   2| 622B   13k|   0     0 | 0.3     0
  1   2  94   0   1   2|   0     0 |   0     0 | 16M   16M
  1   3  90   0   1   6|   0     0 | 43M   53M |   0   24M
  0   1  98   0   0   1|   0     0 |   0     0 |   0   11M
  0   2  92   0   1   5|   0     0 |5653k  120M|   0   19M
  0   2  93   0   1   3|   0     0 |   0     0 |   0   25M
  1   4  88   0   2   6|   0     0 | 13M  187M |   0   33M
  1   3  92   0   1   3|   0     0 |   0     0 | 58M     0
  1   4  86   0   2   8|   0     0 |   0     0 | 86M     0
  0   2  95   0   0   2|   0     0 |140M   34M | 29M     0
  1   6  86   0   2   6|   0     0 |   0     0 | 87M  128M
  1   4  87   0   1   6|   0     0 |119M  4232k| 50M     0
  1   4  87   0   2   6|   0     0 |   0     0 | 51M     0
}}}
[[Image(dstat_iozone_R_A.png)]]
[[BR]]
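
== 5. Appendix: Toggling GPFS Replication (reference sketch) ==
For completeness, the replication setting that separates sections 1 and 2 is a GPFS file-system attribute rather than an iozone option. The sketch below shows one typical way to switch it with the standard GPFS administration commands; the device name gpfs0 and the replica counts are assumptions for illustration, since the report does not record the actual file-system creation parameters.
{{{
# Assumed device name "gpfs0"; the file system must have been created with
# MaxDataReplicas/MaxMetadataReplicas >= 2 (mmcrfs ... -R 2 -M 2) for a
# replication factor of 2 to be allowed later.

# Show the current default data (-r) and metadata (-m) replication factors
$ mmlsfs gpfs0 -r -m

# Section 2 ("With Replication"): keep two copies of data and metadata by default
$ mmchfs gpfs0 -r 2 -m 2
# Re-replicate files that already exist so they match the new default settings
$ mmrestripefs gpfs0 -R

# Section 1 ("No Replication"): back to a single copy
$ mmchfs gpfs0 -r 1 -m 1
$ mmrestripefs gpfs0 -R
}}}
Files created after mmchfs pick up the new defaults automatically; mmrestripefs -R is only needed to bring pre-existing files in line with the changed replication factor before re-running the benchmark.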