GPFS Performance Report (Using the gpfsperf Command)
rock, rock@…
0. Machine Information
Node | 8 nodes (1 server, 7 clients providing disks) |
CPU | Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40 GHz (each node) |
Memory | 2 GB DDR2-667 (each node) |
Disk | 320 GB + 160 GB (each node); all nodes: (320 GB + 160 GB) * 7 = 3.36 TB |
NIC | Intel Corporation 82566DM Gigabit Network Connection |
Switch | D-Link 24-port GE switch |
1. 8 Nodes, Replication Enabled, Tuned Parameters
- Context: Create 16 GB of data (sequential)
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G -n 16g -r 1m
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 1
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
  Data rate was 54649.55 Kbytes/sec, thread utilization 1.000
- Context: Read 16 GB of data (sequential)
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G -n 16g -r 1m
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 1
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  Data rate was 83583.30 Kbytes/sec, thread utilization 1.000
- Context: Write 16 GB of data (sequential)
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G -n 16g -r 1m
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 1
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
  Data rate was 50898.76 Kbytes/sec, thread utilization 1.000
2. 8 Nodes, Replication Disabled, Tuned Parameters
- Context: Create 16 GB of data (sequential)
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_2 -n 16g -r 1m
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_2
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 1
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
  Data rate was 108330.24 Kbytes/sec, thread utilization 1.000
- Context: Read 16 GB of data (sequential)
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_2 -n 16g -r 1m
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_2
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 1
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  Data rate was 82420.96 Kbytes/sec, thread utilization 1.000
- Context: Write 16 GB of data (sequential)
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_2 -n 16g -r 1m
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_2
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 1
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
  Data rate was 108820.45 Kbytes/sec, thread utilization 1.000
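The report does not show how replication was switched between Sections 1 and 2. For reference, the default GPFS replication factors are normally inspected and changed with mmlsfs/mmchfs. The commands below are a minimal sketch only: the file system device name "gpfs0" is an assumption (not taken from this report), it presumes the file system was created with a maximum replication factor of at least 2, and the "tuned parameters" used for these runs are not shown here.
# Show the current default data (-r) and metadata (-m) replica counts
# (device name "gpfs0" is assumed, not taken from this report)
mmlsfs gpfs0 -r -m
# Two copies of data and metadata for newly created files (the "replication enabled" case)
mmchfs gpfs0 -r 2 -m 2
# Back to a single copy (the "replication disabled" case)
mmchfs gpfs0 -r 1 -m 1
# Re-apply the current replication settings to files that already exist
mmrestripefs gpfs0 -R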
3. Multi-threaded Tests
3.1 Create Operation
- Context: Create 16 GB of data, 1 thread
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 1
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 1
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
  Data rate was 50800.95 Kbytes/sec, thread utilization 1.000
- Context: Create 16 GB of data, 2 threads
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 2
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 2
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
  Data rate was 50297.13 Kbytes/sec, thread utilization 0.999
- Context: Create 16 GB of data, 4 threads
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 4
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 4
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
  Data rate was 50848.45 Kbytes/sec, thread utilization 0.998
- Context: Create 16 GB of data, 8 threads
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 8
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 8
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
  Data rate was 50469.88 Kbytes/sec, thread utilization 0.963
- Context: Create 16 GB of data, 16 threads
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 16
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 16
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
  Data rate was 52578.33 Kbytes/sec, thread utilization 0.919
- Context: Create 16 GB of data, 32 threads
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 32
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 32
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
  Data rate was 53107.28 Kbytes/sec, thread utilization 0.966
- Context: Create 16 GB of data, 64 threads
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th 64
./gpfsperf create seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 64
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
  Data rate was 53019.53 Kbytes/sec, thread utilization 0.978
3.2 Read Operation
- Context: Read 16 GB of data, 1 thread
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 1
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 1
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  Data rate was 81685.18 Kbytes/sec, thread utilization 1.000
- Context: Read 16 GB of data, 2 threads
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 2
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 2
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  Data rate was 90844.61 Kbytes/sec, thread utilization 0.999
- Context: Read 16 GB of data, 4 threads
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 4
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 4
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  Data rate was 89538.89 Kbytes/sec, thread utilization 0.997
- Context: Read 16 GB of data, 8 threads
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 8
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 8
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  Data rate was 87044.97 Kbytes/sec, thread utilization 0.994
- Context: Read 16 GB of data, 16 threads
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 16
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 16
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  Data rate was 94899.75 Kbytes/sec, thread utilization 0.990
- Context: Read 16 GB of data, 32 threads
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 32
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 32
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  Data rate was 90657.18 Kbytes/sec, thread utilization 0.983
- Context: Read 16 GB of data, 64 threads
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 64
./gpfsperf read seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 64
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  Data rate was 89751.67 Kbytes/sec, thread utilization 0.983
3.3 Write Operation
- Context: Write 16 GB of data, 1 thread
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 1
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 1
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
  Data rate was 50819.17 Kbytes/sec, thread utilization 1.000
- Context: Write 16 GB of data, 2 threads
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 2
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 2
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
  Data rate was 50588.81 Kbytes/sec, thread utilization 1.000
- Context: Write 16 GB of data, 4 threads
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 4
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 4
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
  Data rate was 50694.87 Kbytes/sec, thread utilization 0.999
- Context: Write 16 GB of data, 8 threads
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 8
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 8
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
  Data rate was 51648.90 Kbytes/sec, thread utilization 0.985
- Context: Write 16 GB of data, 16 threads
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 16
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 16
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
  Data rate was 53019.51 Kbytes/sec, thread utilization 0.924
- Context: Write 16 GB of data, 32 threads
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 32
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 32
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
  Data rate was 53003.69 Kbytes/sec, thread utilization 0.966
- Context: Write 16 GB of data, 64 threads
gpfs-server:/usr/lpp/mmfs/samples/perf# ./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3 -r 1m -n 16g -th 64
./gpfsperf write seq /home/gpfs_mount/gpfsperf_16G_3
  recSize 1M nBytes 16G fileSize 16G
  nProcesses 1 nThreadsPerProcess 64
  file cache flushed before test
  not using data shipping
  not using direct I/O
  offsets accessed will cycle through the same file segment
  not using shared memory buffer
  not releasing byte-range token after open
  no fsync at end of test
  Data rate was 53590.98 Kbytes/sec, thread utilization 0.971
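Each of the runs above was started by hand. The same sweep can be scripted; the sketch below uses only the gpfsperf options that appear in the runs above (-n, -r, -th) and the same test file, and keeps just the final throughput line of each run.
#!/bin/sh
# Sweep gpfsperf over the same operations and thread counts as above.
cd /usr/lpp/mmfs/samples/perf || exit 1
for op in create read write; do
    for th in 1 2 4 8 16 32 64; do
        echo "== $op, $th thread(s) =="
        ./gpfsperf "$op" seq /home/gpfs_mount/gpfsperf_16G_3 -n 16g -r 1m -th "$th" \
            | grep "Data rate"
    done
done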
4. Comparison
- All operations (sequential)
Operation | Replication enabled, tuned parameters | Replication disabled, tuned parameters |
Create | 54649.55 KB/s | 108330.24 KB/s |
Read | 83583.30 KB/s | 82420.96 KB/s |
Write | 50898.76 KB/s | 108820.45 KB/s |
- Multi-threaded (sequential), replication enabled, tuned parameters
Threads | 1 | 2 | 4 | 8 | 16 | 32 | 64 |
Create | 50800.95 KB/s | 50297.13 KB/s | 50848.45 KB/s | 50469.88 KB/s | 52578.33 KB/s | 53107.28 KB/s | 53019.53 KB/s |
Read | 81685.18 KB/s | 90844.61 KB/s | 89538.89 KB/s | 87044.97 KB/s | 94899.75 KB/s | 90657.18 KB/s | 89751.67 KB/s |
Write | 50819.17 KB/s | 50588.81 KB/s | 50694.87 KB/s | 51648.90 KB/s | 53019.51 KB/s | 53003.69 KB/s | 53590.98 KB/s |
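The figures in both tables come from the "Data rate was ... Kbytes/sec" line that gpfsperf prints at the end of each run. If the runs are captured to files (the *.log naming here is an assumption; the report pastes the output directly), the numbers can be pulled out in one pass:
# Extract the throughput figure from each captured gpfsperf run
for f in *.log; do
    rate=$(awk '/Data rate was/ {print $4}' "$f")
    echo "$f: $rate Kbytes/sec"
done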
Attachments (1)
- GPFS_Performance_report_gpfsperf.pdf (54.9 KB) - added by rock 17 years ago.