Changes between Version 1 and Version 2 of GPFS_Per_gpfsperf


Timestamp: Feb 27, 2008, 2:58:43 PM
Author: rock
[[BR]]
[[BR]]
== Machine Information ==
||Node ||8 nodes (1 server, 7 clients providing disks)||
||CPU ||Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz (each node)||
||Memory ||2GB DDR2 667 (each node)||
||Disk ||320G + 160G (each node); all nodes: (320G + 160G) * 7 = 3.36T||
||NIC ||Intel Corporation 82566DM Gigabit Network Connection||
||Switch ||D-Link 24-port GE switch||

[[BR]]
[[BR]]
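Sections 1 and 2 below compare the same workload on a filesystem with replication enabled and on one without it. Replication is selected when the GPFS filesystem is created; the sketch below shows roughly how a 2-replica filesystem could be set up (the NSD descriptor file and device name are illustrative, not the ones used in this test).
{{{
# Sketch only: register the disks as NSDs, then create a filesystem that keeps
# two copies of both data and metadata. "disk.desc" and "gpfs0" are illustrative.
mmcrnsd -F /tmp/disk.desc
mmcrfs /home/gpfs_mount gpfs0 -F /tmp/disk.desc -m 2 -M 2 -r 2 -R 2

# For the "No Replicate" case the defaults (-m 1 -r 1) apply.
# Verify what is actually in effect:
mmlsfs gpfs0 -m -r
}}}
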
== 1. 8 Nodes, Replicate, Adjust Parameters ==

 * Context: Create 16G data (sequence)
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G -n 16g -r 1m
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G

  no fsync at end of test
    Data rate was 54649.55 Kbytes/sec, thread utilization 1.000
}}}
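
All runs on this page use the same gpfsperf arguments and differ only in the operation, the target file, and (in section 3) the thread count. A commented template of the invocation (the file name here is just a placeholder):
{{{
# gpfsperf invocation pattern used on this page (file name is a placeholder):
#   create|read|write seq   operation, sequential access
#   -n 16g                  total amount of data to transfer
#   -r 1m                   record (request) size
#   -th N                   threads per process (section 3 varies this)
./gpfsperf create seq /home/gpfs_mount/somefile -n 16g -r 1m -th 1
}}}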

 * Context: Read 16G data (sequence)
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G -n 16g -r 1m
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G

  not releasing byte-range token after open
    Data rate was 83583.30 Kbytes/sec, thread utilization 1.000
}}}

 * Context: Write 16G data (sequence)
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G -n 16g -r 1m
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G

  no fsync at end of test
    Data rate was 50898.76 Kbytes/sec, thread utilization 1.000
}}}

[[BR]]
[[BR]]
== 2. 8 Nodes, No Replicate, Adjust Parameters ==

 * Context: Create 16G data (sequence)
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_2 -n 16g -r 1m
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_2

  no fsync at end of test
    Data rate was 108330.24 Kbytes/sec, thread utilization 1.000
}}}

 * Context: Read 16G data (sequence)
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_2 -n 16g -r 1m
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_2

  not releasing byte-range token after open
    Data rate was 82420.96 Kbytes/sec, thread utilization 1.000
}}}

 * Context: Write 16G data (sequence)
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_2 -n 16g -r 1m
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_2

  no fsync at end of test
    Data rate was 108820.45 Kbytes/sec, thread utilization 1.000
}}}
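
If in doubt which test files actually carry two data replicas, the per-file replication factors can be listed directly (same paths as above; output omitted here):
{{{
# Sketch: show metadata/data replication factors of the two test files.
mmlsattr /home/gpfs_mount/gpfsperf_16G /home/gpfs_mount/gpfsperf_16G_2
}}}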

[[BR]]
[[BR]]
== 3. Multi-thread ==
=== 3.1 Create Operation ===

 * Context: Create 16G data, 1 thread
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 1
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3

  no fsync at end of test
    Data rate was 50800.95 Kbytes/sec, thread utilization 1.000
}}}

 * Context: Create 16G data, 2 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 2
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3

  no fsync at end of test
    Data rate was 50297.13 Kbytes/sec, thread utilization 0.999
}}}

 * Context: Create 16G data, 4 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 4
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3

  no fsync at end of test
    Data rate was 50848.45 Kbytes/sec, thread utilization 0.998
}}}

 * Context: Create 16G data, 8 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 8
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3

  no fsync at end of test
    Data rate was 50469.88 Kbytes/sec, thread utilization 0.963
}}}

 * Context: Create 16G data, 16 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 16
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3

  no fsync at end of test
    Data rate was 52578.33 Kbytes/sec, thread utilization 0.919
}}}

 * Context: Create 16G data, 32 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 32
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3

  no fsync at end of test
    Data rate was 53107.28 Kbytes/sec, thread utilization 0.966
}}}

 * Context: Create 16G data, 64 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 64
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3

  no fsync at end of test
    Data rate was 53019.53 Kbytes/sec, thread utilization 0.978
}}}

[[BR]]
=== 3.2 Read Operation ===

 * Context: Read 16G data, 1 thread
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 1
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 1
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
    Data rate was 81685.18 Kbytes/sec, thread utilization 1.000
}}}

 * Context: Read 16G data, 2 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 2
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 2
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
    Data rate was 90844.61 Kbytes/sec, thread utilization 0.999
}}}

 * Context: Read 16G data, 4 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 4
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 4
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
    Data rate was 89538.89 Kbytes/sec, thread utilization 0.997
}}}

 * Context: Read 16G data, 8 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 8
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 8
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
    Data rate was 87044.97 Kbytes/sec, thread utilization 0.994
}}}

 * Context: Read 16G data, 16 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 16
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 16
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
    Data rate was 94899.75 Kbytes/sec, thread utilization 0.990
}}}

 * Context: Read 16G data, 32 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 32
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3

  not releasing byte-range token after open
    Data rate was 90657.18 Kbytes/sec, thread utilization 0.983
}}}

 * Context: Read 16G data, 64 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 64
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3

  not releasing byte-range token after open
    Data rate was 89751.67 Kbytes/sec, thread utilization 0.983
}}}

[[BR]]
=== 3.3 Write Operation ===

 * Context: Write 16G data, 1 thread
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 1
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 1
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
    Data rate was 50819.17 Kbytes/sec, thread utilization 1.000
}}}

 * Context: Write 16G data, 2 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 2
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 2
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
    Data rate was 50588.81 Kbytes/sec, thread utilization 1.000
}}}

 * Context: Write 16G data, 4 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 4
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 4
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
    Data rate was 50694.87 Kbytes/sec, thread utilization 0.999
}}}

 * Context: Write 16G data, 8 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 8
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 8
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
    Data rate was 51648.90 Kbytes/sec, thread utilization 0.985
}}}

 * Context: Write 16G data, 16 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 16
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 16
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
    Data rate was 53019.51 Kbytes/sec, thread utilization 0.924
}}}

 * Context: Write 16G data, 32 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 32
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 32
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
    Data rate was 53003.69 Kbytes/sec, thread utilization 0.966
}}}

 * Context: Write 16G data, 64 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 64
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 64
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
    Data rate was 53590.98 Kbytes/sec, thread utilization 0.971
}}}
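
The runs in section 3 were started one at a time; a small loop like the one below (a sketch, not part of the original test) automates the thread-count sweep and keeps the logs that feed the comparison tables in section 4. The log file names are illustrative.
{{{
#!/bin/sh
# Sketch: repeat the write test for each thread count and save the output.
cd /usr/lpp/mmfs/samples/perf
for th in 1 2 4 8 16 32 64; do
    ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th $th \
        > /tmp/gpfsperf_write_${th}.log 2>&1
done
}}}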

[[BR]]
[[BR]]
== 4. Compare ==

 * All operations (sequence)
||Operation ||Replicate & Adjust Parameters ||No Replicate & Adjust Parameters||
||Create ||54649.55 KB/s ||108330.24 KB/s||
||Read ||83583.30 KB/s ||82420.96 KB/s||
||Write ||50898.76 KB/s ||108820.45 KB/s||

 * Multi-thread (sequence)
||Threads ||1 ||2 ||4 ||8 ||16 ||32 ||64 ||
||Create ||50800.95 KB/s ||50297.13 KB/s ||50848.45 KB/s ||50469.88 KB/s ||52578.33 KB/s ||53107.28 KB/s ||53019.53 KB/s ||
||Read ||81685.18 KB/s ||90844.61 KB/s ||89538.89 KB/s ||87044.97 KB/s ||94899.75 KB/s ||90657.18 KB/s ||89751.67 KB/s ||
||Write ||50819.17 KB/s ||50588.81 KB/s ||50694.87 KB/s ||51648.90 KB/s ||53019.51 KB/s ||53003.69 KB/s ||53590.98 KB/s ||
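
The tables above were filled in by hand from the individual logs. With logs saved as in the sketch at the end of section 3, the rate column can be pulled out with a one-liner such as:
{{{
# Sketch: extract the Kbytes/sec figure from each saved log (file names illustrative).
awk '/Data rate was/ {print FILENAME, $4, "KB/s"}' /tmp/gpfsperf_write_*.log
}}}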

[[BR]]