
Parallel File Systems: File Systems for HPC Clusters

1. Key Points

1.1 The author divides cluster file systems (Cluster FS) into two categories:

  • DFS (Distributed File Systems)
    • Use a single server and are not necessarily parallel, but they can give the user "parallel access" to a file system.
  • PFS (Parallel File Systems)
    • "Parallel" in that they utilize multiple data servers.

2.2 DFS

  • 2.2.1 DFS are network based (i.e., the actual storage hardware is not necessarily on the nodes) but not necessarily parallel (i.e., there may not be multiple servers delivering the file system).
  • 2.2.2 The author first introduces NFS:
    • NFS:
      • The primary file system for clusters, and pretty much "plug and play" on most *nix systems.
      • It was the first popular file system that allowed distributed systems to share data.
    • NFSv3:
      • The most popular version of NFS. Released around 1995, it added several features including support for 64-bit file sizes and offsets (so it can handle files larger than 4GB), asynchronous write support, and TCP as a transport layer.
    • NFSv4:
      • Released around 2003 with some improvements. In particular, it added some speed improvements, strong security (with compatibility for multiple security protocols), and NFS became a stateful protocol.
    • NFS performance:
      • The good news for NFS and NAS is that many codes don’t require lots of I/O for good performance. These codes will run very well using NFS as the storage protocol even for large runs (100+ nodes or several hundred cores). NFS provides adequate performance until the input and output files for these codes become extremely large, or if the code is run across a very large number of processors (in the thousands).
      • NFS still lacks the performance and scalability required by many large clusters, but that is about to change.
    • pNFS (NFSv4.1):
      • Adds PFS capability to the NFS protocol. The goal is to improve performance and scalability while making the changes within a standard (recall that NFS is the only true shared file system standard).
      • This standard is designed to be used with file-based, block-based, and object-based storage devices, with an eye toward freeing customers from vendor lock-in.
      • pNFS Architecture (figure "pNFS_arch.png"; image not attached)
        • The pNFS server sits between the clients and the storage. When a client wants to access a file, it first queries the pNFS server for the metadata that locates the file, then connects directly to the storage that holds the file.
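
Below is a minimal sketch of that two-step access pattern. It is a toy simulation, not the real NFSv4.1 API: the class and method names (MetadataServer, get_layout, DataServer.read) are hypothetical stand-ins for the layout and data-access operations the protocol defines.

```python
# Toy simulation of the pNFS access pattern: the metadata server hands out a
# layout (which data server holds which stripe), and the client then reads the
# stripes directly from the data servers. Names are hypothetical, not NFSv4.1 API.

class DataServer:
    def __init__(self):
        self.objects = {}                 # object_id -> bytes stored on this server

    def read(self, object_id):
        return self.objects[object_id]

class MetadataServer:
    def __init__(self, layouts):
        self.layouts = layouts            # filename -> [(data_server, object_id), ...]

    def get_layout(self, filename):
        return self.layouts[filename]

def pnfs_read(mds, filename):
    layout = mds.get_layout(filename)     # step 1: ask the metadata server
    # step 2: read each stripe directly from its data server, in order
    return b"".join(ds.read(oid) for ds, oid in layout)

# Usage: a file striped over two data servers.
ds1, ds2 = DataServer(), DataServer()
ds1.objects["f.0"] = b"hello "
ds2.objects["f.1"] = b"pNFS"
mds = MetadataServer({"bigfile.dat": [(ds1, "f.0"), (ds2, "f.1")]})
print(pnfs_read(mds, "bigfile.dat"))      # b'hello pNFS'
```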
  • 2.2.3 The second example the author gives is Clustered NAS:
    • Clustered NAS systems were developed to make NAS systems more scalable and to give them more performance
    • Uses several filer heads instead of a single one. The filer heads are then connected to storage.
    • Two architectures:
      1. Several filer heads each have some storage assigned to them.
        • This approach is used by NetApp (NetApp-GX).
      2. The filer heads are really gateways from the clients to a parallel file system.
        • The filer heads communicate with the clients using NFS over the client network but access the parallel file system over a private storage network.
        • This allows the Clustered NAS to be scaled quite large because you can just add more gateways, which also increases aggregate performance because there are more NFS gateways.
        • Used by Isilon, and also by Panasas, IBM’s GPFS, and other parallel file systems when they are running in NFS mode.
    • The problem is that performance to each client is limited because NFS is the communication protocol. Most Clustered NAS solutions use a single GigE connection per client, so each client is limited to about 90-100 MB/s at most.
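
A rough back-of-the-envelope estimate of that limit (the numbers below are assumptions for illustration, not measurements): a GigE link carries at most 125 MB/s raw, protocol overhead brings that to roughly 90-100 MB/s per client, and aggregate throughput grows only by adding more gateways.

```python
# Back-of-the-envelope estimate of clustered-NAS throughput.
# Assumed numbers for illustration only: GigE raw rate and ~80% efficiency.

GIGE_RAW_MB_S = 1000 / 8        # 1 Gb/s link = 125 MB/s raw
EFFICIENCY = 0.8                # assumed TCP/NFS protocol overhead

per_client = GIGE_RAW_MB_S * EFFICIENCY     # ~100 MB/s ceiling per client link

for gateways in (1, 4, 16):
    aggregate = gateways * per_client       # more NFS gateways -> more aggregate bandwidth
    print(f"{gateways:2d} gateway(s): per-client <= {per_client:.0f} MB/s, "
          f"aggregate ~ {aggregate:.0f} MB/s")
```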


2.3 PFS

  • Provide lots of I/O for clusters
  • Provide a centralized file system for clusters
    • Centralized file systems can ease the management burden and improve the scalability of cluster storage
  • PFS are distinguished from DFS because the clients contact multiple storage devices instead of a single device or a gateway.
  • The author divides PFS into two types:
    1. The first type uses more traditional methods such as file locking as part of the file system (block-based, or even file-based, schemes).
      • GPFS
        • Originally GPFS ran only on AIX; IBM later ported it to Linux. At first it could be used only on IBM hardware, but since 2005 non-IBM machines are also supported. Currently only one OEM offers GPFS (Linux Networx).
        • The storage is direct-attached storage (DAS) or some type of Storage Area Network (SAN) storage. In some cases, you can combine various types of storage.
        • GPFS -> high-speed, parallel, distributed file system. GPFS achieves high performance by striping data across multiple disks on multiple storage devices.
        • Three striping methods are used (see the sketch after this list):
          1. Round Robin
          2. Random
          3. Balanced Random
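
The sketch below illustrates how these three placement policies could assign consecutive file blocks to disks. It shows the general idea only and is not GPFS's actual allocator.

```python
import random

# Illustrative block-placement policies for striping a file across disks.
# General idea only; not GPFS's actual allocator.

def round_robin(num_blocks, disks):
    # Block i always goes to disk i mod len(disks).
    return [disks[i % len(disks)] for i in range(num_blocks)]

def random_placement(num_blocks, disks, seed=0):
    # Each block goes to a uniformly random disk (load may be uneven).
    rng = random.Random(seed)
    return [rng.choice(disks) for _ in range(num_blocks)]

def balanced_random(num_blocks, disks, seed=0):
    # Random order within each round, but every disk gets one block per
    # round, so the load stays balanced.
    rng = random.Random(seed)
    placement = []
    while len(placement) < num_blocks:
        rnd = disks[:]
        rng.shuffle(rnd)
        placement.extend(rnd)
    return placement[:num_blocks]

disks = ["disk0", "disk1", "disk2", "disk3"]
print(round_robin(8, disks))
print(random_placement(8, disks))
print(balanced_random(8, disks))
```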
        • To further improve performance, GPFS uses client-side caching and deep prefetching such as read-ahead and write-behind. It recognizes standard access patterns (sequential, reverse sequential, and random) and will optimize I/O accesses for these particular patterns. Furthermore GPFS can read or write large blocks of data in a single I/O operation.
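
As a toy illustration of sequential-pattern detection plus read-ahead (not GPFS's real algorithm): a small client-side cache that, once it sees consecutive block requests, prefetches the next few blocks before they are asked for.

```python
# Toy model of client-side caching with sequential read-ahead.
# Illustrates detecting a sequential access pattern and prefetching;
# it is not GPFS's actual prefetch algorithm.

class ReadAheadCache:
    def __init__(self, fetch_block, window=4):
        self.fetch_block = fetch_block   # function: block number -> data
        self.window = window             # how many blocks to prefetch
        self.cache = {}
        self.last_block = None

    def read(self, block_no):
        if block_no not in self.cache:
            self.cache[block_no] = self.fetch_block(block_no)
        # Sequential pattern detected: prefetch the next few blocks.
        if self.last_block is not None and block_no == self.last_block + 1:
            for b in range(block_no + 1, block_no + 1 + self.window):
                self.cache.setdefault(b, self.fetch_block(b))
        self.last_block = block_no
        return self.cache[block_no]

# Usage: pretend each block fetch is an expensive I/O operation.
fetched = []
def slow_fetch(b):
    fetched.append(b)
    return f"block-{b}"

c = ReadAheadCache(slow_fetch)
for b in range(6):             # six sequential reads
    c.read(b)
print(sorted(fetched))         # blocks 0-9 fetched: 0-5 requested plus read-ahead
```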
        • Block sizes of 16KB, 64KB, 256KB, 512KB, 1MB, and 2MB are supported, with 256KB being the most common
          • large block sizes helps improve performance when large data accesses are common
          • Small block sizes are used when small data accesses are common
          • GPFS subdivides the blocks into 32 sub-blocks.
          • A block is the largest chunk of contiguous data that can be accessed. A sub-block is the smallest contiguous chunk of data that can be accessed. Sub-blocks are useful for files that are smaller than a block, which are stored using sub-blocks. This can help the performance of applications that use lots of small data files (e.g., life sciences applications).
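
A quick worked example of the space accounting, using the common 256KB block size from above. This is a simplified model that just rounds allocations up to whole 8KB sub-blocks.

```python
import math

# Worked example of block vs. sub-block space accounting, assuming the
# common 256KB block size and 32 sub-blocks per block (simplified model).

BLOCK = 256 * 1024
SUB_BLOCKS_PER_BLOCK = 32
SUB_BLOCK = BLOCK // SUB_BLOCKS_PER_BLOCK      # 8KB smallest allocatable unit

def space_used(file_size):
    # Round the file up to whole sub-blocks rather than whole blocks.
    return math.ceil(file_size / SUB_BLOCK) * SUB_BLOCK

for size in (1 * 1024, 10 * 1024, 100 * 1024, 300 * 1024):
    print(f"{size // 1024:>3} KB file -> {space_used(size) // 1024} KB allocated "
          f"(vs {math.ceil(size / BLOCK) * BLOCK // 1024} KB with whole blocks)")
```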
        • High Availability (HA) -> GPFS uses distributed metadata so that there is no single point of failure, nor a performance bottleneck. GPFS can be configured to use logging and replication. GPFS will log (journal) the metadata of the file system.
          • GPFS can also be configured for fail-over both at a disk level and at a server level
        • GPFS is still in use today; in the Linux world, there are GPFS clusters with over 2,400 nodes (clients). One aspect of GPFS that should be mentioned in this context is that GPFS is priced per node for both I/O nodes and clients.
        • GPFS versions:
          • Version 3 -> only uses TCP as the transport protocol
          • Version 4 -> has native IB protocols
          • In addition, the I/O nodes of GPFS can act as NFS servers if NFS is required.
        • Feature -> multi-cluster. This allows two different GPFS file systems to be connected over a network. This is a great feature for groups in disparate locations to share data.
        • Last feature -> GPFS Open Source Portability Layer. The portability layer allows the GPFS kernel modules to communicate with the Linux kernel. It is a way to create a bridge from the GPL kernel to a non-GPL set of kernel modules, and it actually serves a very useful purpose.

      • IBRIX
      • EMC MPFS
    2. The second type uses object-based file systems.


Thoughts

  • This paper is a survey; it can be used as a reference for the literature review when writing a DFS paper (see: DFS from wiki).
