Changes between Version 2 and Version 3 of rock/paper/PFS_HPC/PFS-type2

Timestamp: Mar 6, 2009, 2:39:19 PM
Author: rock

* The advantage of an object-based file system is that allocation management of data is distributed over many nodes, avoiding a central bottleneck.
* Lustre has a '''''metadata component, a data component, and a client part'''''
  * __'''!MetaData Servers (MDS)'''__
    * These components can be put on different machines, or on a single machine (usually only for home clusters or for testing)
    * The metadata can be distributed across machines called '''''!MetaData Servers (MDS)''''', to ensure that the failure of one machine will not cause the file system to crash.
    * MDSes support failover as well. In Lustre 1.x, you can use up to two MDS machines (one in active mode and one in standby mode), while in Lustre 2.x the goal is to have tens or even hundreds of MDS machines.
  * __'''Object Storage Servers (OSS)'''__
    * The file system data itself is stored as objects on the Object Storage Server (OSS) machines
    * The data can be spread across the OSS machines in a round-robin fashion (striped in a RAID-0 sense) to allow parallel data access across many nodes, resulting in higher throughput
    * Lustre mount points can be put into /etc/fstab or in an automounter
* Lustre uses an open network protocol to allow the components of the file system to communicate: a protocol called Portals, originally developed at Sandia National Laboratories.
  * This allows the networking portion of Lustre to be abstracted so that new networks can be easily added
  * Supports TCP networks (Fast Ethernet, Gigabit Ethernet, 10GigE), Quadrics Elan, Myrinet GM, Scali SDP, and InfiniBand. Lustre also uses Remote Direct Memory Access (RDMA) and OS-bypass capabilities to improve I/O performance.
* Some people complained that the client needed a patched kernel in order to use Lustre. Very few people could take the patches, apply them to a Kernel.org kernel, and successfully build and run Lustre.
  * People had to rely on ClusterFS for kernels. This was a burden on both the users and ClusterFS.
  * With the advent of Lustre 1.6.x, ClusterFS now has a patchless kernel for the client if you use a 2.6.15-16 kernel or greater. According to ClusterFS, there may be a performance reduction when using the patchless client, but for many people this is worthwhile since they can quickly rebuild a Lustre client kernel if there is a security problem.
  * Lustre follows a slightly unusual open source model so that development for the project can be paid for
    * The newest version of Lustre is only available from Cluster File Systems, the company developing Lustre, while the previous version is available freely from www.lustre.org.
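
The client mounting and RAID-0-style striping described above can be sketched with a few administrative commands. This is only a sketch: the MGS address `192.168.1.10@tcp0`, the file system name `lustre`, and the directory paths are hypothetical, and the exact `lfs setstripe` option syntax varies between Lustre releases.

```sh
# Mount a Lustre file system on a client (hypothetical MGS node and fsname):
mount -t lustre 192.168.1.10@tcp0:/lustre /mnt/lustre

# Equivalent /etc/fstab entry, with _netdev so the mount waits for the network:
# 192.168.1.10@tcp0:/lustre  /mnt/lustre  lustre  defaults,_netdev  0 0

# Stripe new files in a directory across 4 OSTs (round-robin, RAID-0 sense):
lfs setstripe -c 4 /mnt/lustre/striped_dir

# Inspect the resulting striping layout:
lfs getstripe /mnt/lustre/striped_dir
```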