= GPFS Performance Report (Uses gpfsperf Command) =

rock, rock@nchc.org.tw

[[BR]]
[[BR]]
== Machine Information ==
||'''Node'''||8 nodes (1 server, 7 clients providing disks)||
||'''CPU'''||Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz (each node)||
||'''Memory'''||2GB DDR2 667 (each node)||
||'''Disk'''||320G + 160G (each node); all nodes: WD 320G * 7 + 160G * 7 = 3.36T||
||'''NIC'''||Intel Corporation 82566DM Gigabit Network Connection||
||'''Switch'''||D-Link 24-port GE switch||

== 1. 8 Nodes, Replication, Adjusted Parameters ==

Context: Create 16G data (sequential)
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G -n 16g -r 1m
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 1
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
    Data rate was 54649.55 Kbytes/sec, thread utilization 1.000
}}}

Context: Read 16G data (sequential)
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G -n 16g -r 1m
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 1
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
    Data rate was 83583.30 Kbytes/sec, thread utilization 1.000
}}}

Context: Write 16G data (sequential)
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G -n 16g -r 1m
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 1
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
    Data rate was 50898.76 Kbytes/sec, thread utilization 1.000
}}}
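
The three runs above differ only in the operation keyword, so they are easy to script. A minimal sketch in shell (the loop and script layout are ours; paths and flags are exactly those used above):
{{{
#!/bin/bash
# Run the sequential create/read/write tests back to back with the
# same parameters as above: 16 GB total (-n 16g), 1 MB records (-r 1m).
PERF=/usr/lpp/mmfs/samples/perf/gpfsperf
FILE=/home/gpfs_mount/gpfsperf_16G

for op in create read write; do
    "$PERF" "$op" seq "$FILE" -n 16g -r 1m
done
}}}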

== 2. 8 Nodes, No Replication, Adjusted Parameters ==

Context: Create 16G data (sequential)
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_2 -n 16g -r 1m
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_2
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 1
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
    Data rate was 108330.24 Kbytes/sec, thread utilization 1.000
}}}

Context: Read 16G data (sequential)
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_2 -n 16g -r 1m
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_2
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 1
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
    Data rate was 82420.96 Kbytes/sec, thread utilization 1.000
}}}

Context: Write 16G data (sequential)
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_2 -n 16g -r 1m
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_2
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 1
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
    Data rate was 108820.45 Kbytes/sec, thread utilization 1.000
}}}
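
For reference, the replication setting that distinguishes tests 1 and 2 is a file-system attribute, not a gpfsperf option. A hedged sketch of how it can be inspected and changed (the device name /dev/gpfs0 is a placeholder; mmlsfs, mmchfs, and mmrestripefs are the standard GPFS administration commands):
{{{
# Show the default data (-r) and metadata (-m) replica counts.
mmlsfs /dev/gpfs0 -r -m

# Enable replication: keep 2 copies of data and metadata, then
# restripe so existing files pick up the new replication factor.
mmchfs /dev/gpfs0 -r 2 -m 2
mmrestripefs /dev/gpfs0 -R
}}}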

== 3. Multi-thread ==

=== 3.1 Create Operation ===

Context: Create 16G data, 1 thread
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 1
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 1
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
    Data rate was 50800.95 Kbytes/sec, thread utilization 1.000
}}}

Context: Create 16G data, 2 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 2
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 2
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
    Data rate was 50297.13 Kbytes/sec, thread utilization 0.999
}}}

Context: Create 16G data, 4 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 4
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 4
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
    Data rate was 50848.45 Kbytes/sec, thread utilization 0.998
}}}

Context: Create 16G data, 8 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 8
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 8
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
    Data rate was 50469.88 Kbytes/sec, thread utilization 0.963
}}}

Context: Create 16G data, 16 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 16
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 16
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
    Data rate was 52578.33 Kbytes/sec, thread utilization 0.919
}}}

Context: Create 16G data, 32 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 32
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 32
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
    Data rate was 53107.28 Kbytes/sec, thread utilization 0.966
}}}

Context: Create 16G data, 64 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 64
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 64
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
    Data rate was 53019.53 Kbytes/sec, thread utilization 0.978
}}}
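
Each run in this sweep changes only the -th value, so the whole series can be driven by a loop. A minimal sketch (the log file name create_th.log is ours):
{{{
#!/bin/bash
# Sweep the thread count for the sequential create test and keep
# only the data-rate summary line from each run.
PERF=/usr/lpp/mmfs/samples/perf/gpfsperf
FILE=/home/gpfs_mount/gpfsperf_16G_3

for th in 1 2 4 8 16 32 64; do
    echo "=== ${th} thread(s) ==="
    "$PERF" create seq "$FILE" -n 16g -r 1m -th "$th" | grep 'Data rate'
done | tee create_th.log
}}}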

=== 3.2 Read Operation ===

Context: Read 16G data, 1 thread
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 1
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 1
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
    Data rate was 81685.18 Kbytes/sec, thread utilization 1.000
}}}

Context: Read 16G data, 2 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 2
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 2
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
    Data rate was 90844.61 Kbytes/sec, thread utilization 0.999
}}}

Context: Read 16G data, 4 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 4
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 4
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
    Data rate was 89538.89 Kbytes/sec, thread utilization 0.997
}}}

Context: Read 16G data, 8 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 8
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 8
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
    Data rate was 87044.97 Kbytes/sec, thread utilization 0.994
}}}

Context: Read 16G data, 16 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 16
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 16
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
    Data rate was 94899.75 Kbytes/sec, thread utilization 0.990
}}}

Context: Read 16G data, 32 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 32
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 32
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
    Data rate was 90657.18 Kbytes/sec, thread utilization 0.983
}}}

Context: Read 16G data, 64 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 64
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 64
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
    Data rate was 89751.67 Kbytes/sec, thread utilization 0.983
}}}
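
To turn a saved sweep log into the figures tabulated in section 4, the summary lines can be extracted mechanically. A small sketch (read_th.log is an assumed log produced by a loop like the one in 3.1):
{{{
# "Data rate was N Kbytes/sec, ..." -- field 4 is the rate itself.
grep 'Data rate' read_th.log | awk '{print $4 " KB/s"}'
}}}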

=== 3.3 Write Operation ===

Context: Write 16G data, 1 thread
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 1
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 1
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
    Data rate was 50819.17 Kbytes/sec, thread utilization 1.000
}}}

Context: Write 16G data, 2 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 2
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 2
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
    Data rate was 50588.81 Kbytes/sec, thread utilization 1.000
}}}

Context: Write 16G data, 4 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 4
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 4
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
    Data rate was 50694.87 Kbytes/sec, thread utilization 0.999
}}}

Context: Write 16G data, 8 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 8
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 8
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
    Data rate was 51648.90 Kbytes/sec, thread utilization 0.985
}}}

Context: Write 16G data, 16 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 16
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 16
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
    Data rate was 53019.51 Kbytes/sec, thread utilization 0.924
}}}

Context: Write 16G data, 32 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 32
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 32
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
    Data rate was 53003.69 Kbytes/sec, thread utilization 0.966
}}}

Context: Write 16G data, 64 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 64
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 64
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
    Data rate was 53590.98 Kbytes/sec, thread utilization 0.971
}}}
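
Combining the two sketches above, the full operation-by-thread-count matrix behind the tables in section 4 can be collected in one pass (script layout and output format are ours):
{{{
#!/bin/bash
# Run every operation at every thread count and emit one
# "operation threads rate" line per run, ready to tabulate.
PERF=/usr/lpp/mmfs/samples/perf/gpfsperf
FILE=/home/gpfs_mount/gpfsperf_16G_3

for op in create read write; do
    for th in 1 2 4 8 16 32 64; do
        rate=$("$PERF" "$op" seq "$FILE" -n 16g -r 1m -th "$th" \
               | awk '/Data rate/ {print $4}')
        echo "$op $th ${rate} KB/s"
    done
done
}}}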

== 4. Compare ==

'''All operations (sequential)'''

|| ||'''Replication & Adjusted Parameters'''||'''No Replication & Adjusted Parameters'''||
||'''Create'''||54649.55 KB/s||108330.24 KB/s||
||'''Read'''||83583.30 KB/s||82420.96 KB/s||
||'''Write'''||50898.76 KB/s||108820.45 KB/s||

As expected, replication roughly halves create and write throughput, since each block is written twice (e.g. 108330.24 / 54649.55 ≈ 1.98), while read throughput is essentially unchanged.

'''Multi-thread (sequential)'''

||'''Threads'''||'''1'''||'''2'''||'''4'''||'''8'''||'''16'''||'''32'''||'''64'''||
||'''Create'''||50800.95||50297.13||50848.45||50469.88||52578.33||53107.28||53019.53||
||'''Read'''||81685.18||90844.61||89538.89||87044.97||94899.75||90657.18||89751.67||
||'''Write'''||50819.17||50588.81||50694.87||51648.90||53019.51||53003.69||53590.98||

All data rates in KB/s.