= GPFS_Per_gpfsperf =
[[BR]]
[[BR]]
== Machine Information ==
||Node ||8 nodes (1 server, 7 clients providing disks) ||
||CPU ||Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz (each node) ||
||Memory ||2GB DDR2 667 (each node) ||
||Disk ||320G + 160G (each node); all nodes: (320G + 160G) * 7 = 3.36T ||
||NIC ||Intel Corporation 82566DM Gigabit Network Connection ||
||Switch ||D-Link 24-port GE switch ||

[[BR]]
[[BR]]
== 1. 8 Nodes, Replicate, Adjust Parameters ==

 * Context: Create 16G data (sequential)
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G -n 16g -r 1m
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G
...
no fsync at end of test
Data rate was 54649.55 Kbytes/sec, thread utilization 1.000
}}}

 * Context: Read 16G data (sequential)
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G -n 16g -r 1m
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G
...
not releasing byte-range token after open
Data rate was 83583.30 Kbytes/sec, thread utilization 1.000
}}}

 * Context: Write 16G data (sequential)
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G -n 16g -r 1m
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G
...
no fsync at end of test
Data rate was 50898.76 Kbytes/sec, thread utilization 1.000
}}}

[[BR]]
[[BR]]
== 2. 8 Nodes, No Replicate, Adjust Parameters ==

 * Context: Create 16G data (sequential)
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_2 -n 16g -r 1m
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_2
...
no fsync at end of test
Data rate was 108330.24 Kbytes/sec, thread utilization 1.000
}}}

 * Context: Read 16G data (sequential)
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_2 -n 16g -r 1m
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_2
...
not releasing byte-range token after open
Data rate was 82420.96 Kbytes/sec, thread utilization 1.000
}}}

 * Context: Write 16G data (sequential)
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_2 -n 16g -r 1m
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_2
...
no fsync at end of test
Data rate was 108820.45 Kbytes/sec, thread utilization 1.000
}}}
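Sections 1 and 2 run the same tests against the same disks, with and without replication. The page does not record how the two file systems were created; as a rough sketch only, assuming GPFS 3.x syntax with a hypothetical device name (gpfs0) and disk descriptor file (disks.desc), the two layouts would look something like:

{{{
# Sketch only -- the actual file system creation commands are not recorded on this page.
# Device name (gpfs0) and descriptor file (disks.desc) are hypothetical.

# Section 1 layout: two replicas of data and metadata
mmcrfs /home/gpfs_mount gpfs0 -F disks.desc -m 2 -M 2 -r 2 -R 2

# Section 2 layout: no replication (single copy of data and metadata)
mmcrfs /home/gpfs_mount gpfs0 -F disks.desc -m 1 -M 1 -r 1 -R 1
}}}

With two data replicas every block is written twice, which matches the create/write rates in section 1 being roughly half of those in section 2, while read rates are essentially unchanged.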
[[BR]]
[[BR]]
== 3. Multi-thread ==
=== 3.1 Create Operation ===

 * Context: Create 16G data, 1 thread
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 1
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3
...
no fsync at end of test
Data rate was 50800.95 Kbytes/sec, thread utilization 1.000
}}}

 * Context: Create 16G data, 2 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 2
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3
...
no fsync at end of test
Data rate was 50297.13 Kbytes/sec, thread utilization 0.999
}}}

 * Context: Create 16G data, 4 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 4
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3
...
no fsync at end of test
Data rate was 50848.45 Kbytes/sec, thread utilization 0.998
}}}

 * Context: Create 16G data, 8 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 8
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3
...
no fsync at end of test
Data rate was 50469.88 Kbytes/sec, thread utilization 0.963
}}}

 * Context: Create 16G data, 16 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 16
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3
...
no fsync at end of test
Data rate was 52578.33 Kbytes/sec, thread utilization 0.919
}}}

 * Context: Create 16G data, 32 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 32
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3
...
no fsync at end of test
Data rate was 53107.28 Kbytes/sec, thread utilization 0.966
}}}

 * Context: Create 16G data, 64 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 64
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3
...
no fsync at end of test
Data rate was 53019.53 Kbytes/sec, thread utilization 0.978
}}}
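All seven create runs above use identical gpfsperf arguments and differ only in the `-th` thread count; the read and write series in 3.2 and 3.3 below follow the same pattern. A small loop (a convenience sketch, not part of the original runs) reproduces the sweep:

{{{
#!/bin/sh
# Sweep the sequential create test over the thread counts used in 3.1.
# The gpfsperf arguments match the individual runs recorded above.
cd /usr/lpp/mmfs/samples/perf
for th in 1 2 4 8 16 32 64; do
    ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th $th
done
}}}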
[[BR]]
=== 3.2 Read Operation ===

 * Context: Read 16G data, 1 thread
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 1
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3
recSize 1M nBytes 16G fileSize 16G
nProcesses 1 nThreadsPerProcess 1
file cache flushed before test
not using data shipping
not using direct I/O
offsets accessed will cycle through the same file segment
not using shared memory buffer
not releasing byte-range token after open
Data rate was 81685.18 Kbytes/sec, thread utilization 1.000
}}}

 * Context: Read 16G data, 2 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 2
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3
recSize 1M nBytes 16G fileSize 16G
nProcesses 1 nThreadsPerProcess 2
file cache flushed before test
not using data shipping
not using direct I/O
offsets accessed will cycle through the same file segment
not using shared memory buffer
not releasing byte-range token after open
Data rate was 90844.61 Kbytes/sec, thread utilization 0.999
}}}

 * Context: Read 16G data, 4 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 4
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3
recSize 1M nBytes 16G fileSize 16G
nProcesses 1 nThreadsPerProcess 4
file cache flushed before test
not using data shipping
not using direct I/O
offsets accessed will cycle through the same file segment
not using shared memory buffer
not releasing byte-range token after open
Data rate was 89538.89 Kbytes/sec, thread utilization 0.997
}}}

 * Context: Read 16G data, 8 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 8
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3
recSize 1M nBytes 16G fileSize 16G
nProcesses 1 nThreadsPerProcess 8
file cache flushed before test
not using data shipping
not using direct I/O
offsets accessed will cycle through the same file segment
not using shared memory buffer
not releasing byte-range token after open
Data rate was 87044.97 Kbytes/sec, thread utilization 0.994
}}}

 * Context: Read 16G data, 16 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 16
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3
recSize 1M nBytes 16G fileSize 16G
nProcesses 1 nThreadsPerProcess 16
file cache flushed before test
not using data shipping
not using direct I/O
offsets accessed will cycle through the same file segment
not using shared memory buffer
not releasing byte-range token after open
Data rate was 94899.75 Kbytes/sec, thread utilization 0.990
}}}

 * Context: Read 16G data, 32 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 32
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3
...
not releasing byte-range token after open
Data rate was 90657.18 Kbytes/sec, thread utilization 0.983
}}}

 * Context: Read 16G data, 64 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 64
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3
...
not releasing byte-range token after open
Data rate was 89751.67 Kbytes/sec, thread utilization 0.983
}}}
[[BR]]
=== 3.3 Write Operation ===

 * Context: Write 16G data, 1 thread
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 1
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
recSize 1M nBytes 16G fileSize 16G
nProcesses 1 nThreadsPerProcess 1
file cache flushed before test
not using data shipping
not using direct I/O
offsets accessed will cycle through the same file segment
not using shared memory buffer
not releasing byte-range token after open
no fsync at end of test
Data rate was 50819.17 Kbytes/sec, thread utilization 1.000
}}}

 * Context: Write 16G data, 2 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 2
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
recSize 1M nBytes 16G fileSize 16G
nProcesses 1 nThreadsPerProcess 2
file cache flushed before test
not using data shipping
not using direct I/O
offsets accessed will cycle through the same file segment
not using shared memory buffer
not releasing byte-range token after open
no fsync at end of test
Data rate was 50588.81 Kbytes/sec, thread utilization 1.000
}}}

 * Context: Write 16G data, 4 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 4
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
recSize 1M nBytes 16G fileSize 16G
nProcesses 1 nThreadsPerProcess 4
file cache flushed before test
not using data shipping
not using direct I/O
offsets accessed will cycle through the same file segment
not using shared memory buffer
not releasing byte-range token after open
no fsync at end of test
Data rate was 50694.87 Kbytes/sec, thread utilization 0.999
}}}
 * Context: Write 16G data, 8 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 8
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
recSize 1M nBytes 16G fileSize 16G
nProcesses 1 nThreadsPerProcess 8
file cache flushed before test
not using data shipping
not using direct I/O
offsets accessed will cycle through the same file segment
not using shared memory buffer
not releasing byte-range token after open
no fsync at end of test
Data rate was 51648.90 Kbytes/sec, thread utilization 0.985
}}}

 * Context: Write 16G data, 16 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 16
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
recSize 1M nBytes 16G fileSize 16G
nProcesses 1 nThreadsPerProcess 16
file cache flushed before test
not using data shipping
not using direct I/O
offsets accessed will cycle through the same file segment
not using shared memory buffer
not releasing byte-range token after open
no fsync at end of test
Data rate was 53019.51 Kbytes/sec, thread utilization 0.924
}}}
 * Context: Write 16G data, 32 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 32
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
recSize 1M nBytes 16G fileSize 16G
nProcesses 1 nThreadsPerProcess 32
file cache flushed before test
not using data shipping
not using direct I/O
offsets accessed will cycle through the same file segment
not using shared memory buffer
not releasing byte-range token after open
no fsync at end of test
Data rate was 53003.69 Kbytes/sec, thread utilization 0.966
}}}

 * Context: Write 16G data, 64 threads
{{{
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 64
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
recSize 1M nBytes 16G fileSize 16G
nProcesses 1 nThreadsPerProcess 64
file cache flushed before test
not using data shipping
not using direct I/O
offsets accessed will cycle through the same file segment
not using shared memory buffer
not releasing byte-range token after open
no fsync at end of test
Data rate was 53590.98 Kbytes/sec, thread utilization 0.971
}}}
[[BR]]
[[BR]]
== 4. Compare ==

 * All operations (sequential)

||Operation ||Replicate & Adjust Parameters ||No Replicate & Adjust Parameters ||
||Create ||54649.55 KB/s ||108330.24 KB/s ||
||Read ||83583.30 KB/s ||82420.96 KB/s ||
||Write ||50898.76 KB/s ||108820.45 KB/s ||

 * Multi-thread (sequential)

||Operation ||1 thread ||2 threads ||4 threads ||8 threads ||16 threads ||32 threads ||64 threads ||
||Create ||50800.95 KB/s ||50297.13 KB/s ||50848.45 KB/s ||50469.88 KB/s ||52578.33 KB/s ||53107.28 KB/s ||53019.53 KB/s ||
||Read ||81685.18 KB/s ||90844.61 KB/s ||89538.89 KB/s ||87044.97 KB/s ||94899.75 KB/s ||90657.18 KB/s ||89751.67 KB/s ||
||Write ||50819.17 KB/s ||50588.81 KB/s ||50694.87 KB/s ||51648.90 KB/s ||53019.51 KB/s ||53003.69 KB/s ||53590.98 KB/s ||

[[BR]]
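The figures in both tables are the "Data rate was ..." lines collected from the logs above. If each run's output is saved to its own log file (the file names below are hypothetical), the rates can be pulled out with a short loop instead of by hand:

{{{
# Extract the Kbytes/sec figure from saved gpfsperf logs.
# Log file naming is hypothetical; adjust the glob to match.
for f in create_th*.log read_th*.log write_th*.log; do
    awk -v f="$f" '/Data rate was/ {print f ": " $4 " KB/s"}' "$f"
done
}}}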