.TH mmcrfs 8 "12/04/06" GPFS "mmcrfs Command" .SH "Name" .PP \fBmmcrfs\fR - Creates a GPFS file system. .SH "Synopsis" .PP \fBmmcrfs\fR \fIMountpoint \fR \fIDevice\fR {"\fIDiskDesc\fR[;\fIDiskDesc\fR...]" | \fB-F\fR \fIDescFile\fR} [\fB-A\fR {\fB\fIyes\fR\fR | \fBno | automount\fR}] [\fB-D {\fR\fBnfs4\fR | \fB\fIposix\fR\fR\fB}\fR] [\fB-B\fR \fIBlockSize\fR] [\fB-E\fR {\fB\fIyes\fR\fR | \fBno\fR}] [\fB-j {\fR\fBcluster | scatter}\fR] [\fB-k {\fR\fB\fIposix\fR\fR\fB | nfs4 | all}\fR] [\fB-K {\fR\fBno\fR | \fB\fIwhenpossible\fR\fR | \fBalways}\fR] [\fB-m\fR \fIDefaultMetadataReplicas\fR] [\fB-M \fR \fIMaxMetadataReplicas\fR] [\fB-n\fR \fINumNodes\fR] [\fB-N\fR \fINumInodes\fR[:\fINumInodesToPreallocate\fR]] [\fB-Q\fR {\fByes\fR | \fB\fIno\fR\fR}] [\fB-r \fR \fIDefaultDataReplicas\fR] [\fB-R\fR \fIMaxDataReplicas\fR] [\fB-S\fR {\fByes\fR | \fB\fIno\fR\fR}] [\fB-v\fR {\fB\fIyes\fR\fR | \fBno\fR}] [\fB-z\fR {\fByes\fR | \fB\fIno\fR\fR}] .SH "Description" .PP Use the \fBmmcrfs\fR command to create a GPFS file system. The first three arguments \fImust\fR be \fIMountpoint\fR, \fIDevice\fR, and either \fIDiskDescList\fR or \fIDescFile\fR, and they \fImust\fR be specified in that order. The block size and replication factors chosen affect file system performance. A maximum of 32 file systems, including remote file systems, may be mounted in a GPFS cluster at one time. .PP When deciding on the maximum number of files (number of inodes) in a file system, consider that for file systems that will be doing parallel file creates, file system access may slow down if the number of free inodes drops below 5% of the total number of inodes. The total number of inodes can be increased using the \fBmmchfs\fR command. .PP When deciding on a block size for a file system, consider these points: .RS +3 .HP 3 1. Supported block sizes are 16 KB, 64 KB, 256 KB, 512 KB, 1 MB, 2 MB, and 4 MB. .HP 3 2. The GPFS block size determines: .RS +3 .HP 3 \(bu The minimum disk space allocation unit. The minimum amount of space that file data can occupy is a subblock. A subblock is 1/32 of the block size. .HP 3 \(bu The maximum size of a read or write request that GPFS sends to the underlying disk driver. .RE .HP 3 3. From a performance perspective, set the GPFS block size to match the application buffer size, the RAID stripe size, or a multiple of the RAID stripe size. If the GPFS block size does not match the RAID stripe size, performance may be severely degraded, especially for write operations. .HP 3 4. In file systems with a high degree of variance in file sizes, using a small block size has a large impact on performance when accessing large files. In this kind of system it is suggested that you use a block size of 256 KB (8 KB subblock). Even if only 1% of the files are large, the amount of space taken by the large files usually dominates the amount of space used on disk, and the waste in the subblock used for small files is usually insignificant. For further performance information, see the GPFS white papers at www.ibm.com/servers/eserver/clusters/library/wp_aix_lit.html. .HP 3 5. The effect of block size on file system performance largely depends on the application I/O pattern. .RS +3 .HP 3 \(bu A larger block size is often beneficial for large sequential read and write workloads. .HP 3 \(bu A smaller block size is likely to offer better performance for small file, small random read and write, and metadata-intensive workloads. .RE .HP 3 6. The efficiency of many algorithms that rely on caching file data in a GPFS page pool depends more on the number of blocks cached than on the absolute amount of data. For a page pool of a given size, a larger file system block size means fewer blocks cached. Therefore, when you create file systems with a block size larger than the default 256 KB, it is recommended that you increase the page pool size in proportion to the block size. .HP 3 7. The file system block size may not exceed the value of the GPFS \fBmaxBlockSize\fR configuration parameter. The \fBmaxBlockSize\fR parameter is set to 1 MB by default. If a larger block size is desired, use the \fBmmchconfig\fR command to increase \fBmaxBlockSize\fR before starting GPFS (see the example following this list). .RE
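.PP For example, to create a file system with a 4 MB block size while \fBmaxBlockSize\fR is still at its 1 MB default, you might first raise the limit and increase the page pool in proportion. This is a sketch only; the device name \fBgpfs2\fR, the descriptor file \fBdisks.lst\fR, and the 1 GB page pool value are illustrative: .sp
.nf
# per item 7, increase maxBlockSize before starting GPFS
mmchconfig maxblocksize=4M
# per item 6, scale the page pool with the larger block size (value illustrative)
mmchconfig pagepool=1G
# start GPFS, then create the file system with the matching block size
mmstartup -a
mmcrfs /gpfs2 gpfs2 -F disks.lst -B 4M
.fi
.sp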
.PP \fBResults\fR .PP Upon successful completion of the \fBmmcrfs\fR command, these tasks are completed on all GPFS nodes: .RS +3 .HP 3 \(bu The mount point directory is created. .HP 3 \(bu The file system is formatted. .RE .SH "Parameters" .PP .RS +3 \fB\fIMountpoint\fR \fR .RE .RS +9 The mount point directory of the GPFS file system. .RE .PP .RS +3 \fB\fIDevice\fR \fR .RE .RS +9 The device name of the file system to be created. .PP File system names need not be fully qualified. \fBfs0\fR is as acceptable as \fB/dev/fs0\fR. However, file system names must be unique within a GPFS cluster. Do not specify an existing entry in \fB/dev\fR. .RE .PP .RS +3 \fB-D {nfs4 | \fB\fIposix\fR\fR} \fR .RE .RS +9 Specifies whether a 'deny-write open lock' blocks writes, which NFS V4 expects and requires. File systems supporting NFS V4 must have \fB-D nfs4\fR set. The option \fB-D posix\fR allows NFS writes even in the presence of a deny-write open lock. If you intend to export the file system using NFS V4 or Samba, you must use \fB-D nfs4\fR. For NFS V3 (or if the file system is not NFS exported at all) use \fB-D posix\fR. The default is \fB-D posix\fR. .RE .PP .RS +3 \fB-F \fIDescFile\fR \fR .RE .RS +9 Specifies a file containing a list of disk descriptors, one per line. You may use the rewritten \fIDiskDesc\fR file created by the \fBmmcrnsd\fR command, create your own file, or enter the disk descriptors on the command line. When using the \fIDiskDesc\fR file created by the \fBmmcrnsd\fR command, the values supplied on input to that command for \fIDiskUsage\fR and \fIFailureGroup\fR are used. When creating your own file or entering the descriptors on the command line, you must specify these values or accept the system defaults. A sample file can be found in \fB/usr/lpp/mmfs/samples/diskdesc\fR. .RE
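.PP As an illustration, a hand-written descriptor file for three disks, all holding both data and metadata and split across two failure groups, might look like the following. The NSD names here are hypothetical, and the descriptor format itself is defined under \fIDiskDesc\fR below (the second, third, and sixth fields are reserved and left empty): .sp
.nf
gpfs1nsd:::dataAndMetadata:1::system:
gpfs2nsd:::dataAndMetadata:1::system:
gpfs3nsd:::dataAndMetadata:2::system:
.fi
.sp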
.PP .RS +3 \fB"\fIDiskDesc\fR[;\fIDiskDesc\fR...]" \fR .RE .RS +9 A descriptor for each disk to be included. Each descriptor is separated by a semicolon (;). The entire list must be enclosed in quotation marks (' or "). .PP The current maximum number of disk descriptors that can be defined for any single file system is 268 million. However, to achieve this maximum limit, you must recompile GPFS. The actual number of disks in your file system may be constrained by products other than GPFS that you have installed. Refer to the individual product documentation. .PP A disk descriptor is defined as (second, third and sixth fields reserved): .sp
.nf
DiskName:::DiskUsage:FailureGroup::StoragePool:
.fi
.sp .PP .RS +3 \fB\fIDiskName\fR \fR .RE .RS +9 .PP You must specify the name of the NSD previously created by the \fBmmcrnsd\fR command. For a list of available disks, issue the \fBmmlsnsd -F\fR command. .RE .PP .RS +3 \fB\fIDiskUsage\fR \fR .RE .RS +9 Specify a disk usage or accept the default: .PP .RS +3 \fBdataAndMetadata \fR .RE .RS +9 Indicates that the disk contains both data and metadata. This is the default. .RE .PP .RS +3 \fBdataOnly \fR .RE .RS +9 Indicates that the disk contains data and does not contain metadata. .RE .PP .RS +3 \fBmetadataOnly \fR .RE .RS +9 Indicates that the disk contains metadata and does not contain data. .RE .PP .RS +3 \fBdescOnly \fR .RE .RS +9 Indicates that the disk contains no data and no file metadata. Such a disk is used solely to keep a copy of the file system descriptor, and can be used as a third failure group in certain disaster recovery configurations. For more information, see \fIGeneral Parallel File System: Advanced Administration\fR and search on \fISynchronous mirroring utilizing GPFS replication\fR. .RE .RE .PP .RS +3 \fB\fIFailureGroup\fR \fR .RE .RS +9 A number identifying the failure group to which this disk belongs. You can specify any value from -1 (where -1 indicates that the disk has no point of failure in common with any other disk) to 4000. If you do not specify a failure group, the value defaults to the primary server node number plus 4000. If an NSD server node is not specified, the value defaults to -1. GPFS uses this information during data and metadata placement to ensure that no two replicas of the same block are written in such a way as to become unavailable due to a single failure. All disks that are attached to the same NSD server or adapter should be placed in the same failure group. .PP If the replication factor set with \fB-m\fR or \fB-r\fR is 2, each storage pool must contain disks in at least two failure groups for the command to work properly. .RE .PP .RS +3 \fB\fIStoragePool\fR \fR .RE .RS +9 Specifies the storage pool to which the disk is to be assigned. If this name is not provided, the default is \fBsystem\fR. .PP Only the \fBsystem\fR pool may contain \fBdescOnly\fR, \fBmetadataOnly\fR or \fBdataAndMetadata\fR disks. .RE .RE
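.PP Tying these fields together, the following sketch (the NSD names \fBnsd1\fR and \fBnsd2\fR and the pool name \fBdatapool\fR are hypothetical) places metadata on a \fBmetadataOnly\fR disk in the \fBsystem\fR pool, and data on a \fBdataOnly\fR disk in a user pool, with the two disks in different failure groups: .sp
.nf
mmcrfs /gpfs1 gpfs1 "nsd1:::metadataOnly:1::system:;nsd2:::dataOnly:2::datapool:"
.fi
.sp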
.SH "Options" .PP .RS +3 \fB-A {\fB\fIyes\fR\fR | no | automount} \fR .RE .RS +9 Indicates when the file system is to be mounted: .PP .RS +3 \fByes \fR .RE .RS +9 When the GPFS daemon starts. This is the default. .RE .PP .RS +3 \fBno \fR .RE .RS +9 Manual mount. .RE .PP .RS +3 \fBautomount \fR .RE .RS +9 When the file system is first accessed. .RE .RE .PP .RS +3 \fB-B \fIBlockSize\fR \fR .RE .RS +9 Size of data blocks. Must be 16 KB, 64 KB, 256 KB (the default), 512 KB, 1 MB, 2 MB, or 4 MB. Specify this value with the character \fBK\fR or \fBM\fR, for example 512K. .RE .PP .RS +3 \fB-E {\fB\fIyes\fR\fR | no} \fR .RE .RS +9 Specifies whether to report \fIexact\fR \fBmtime\fR values (\fB-E yes\fR), or to periodically update the \fBmtime\fR value for a file system (\fB-E no\fR). To display exact modification times for a file system, specify or accept the default, \fB-E yes\fR. .RE .PP .RS +3 \fB-j {cluster | scatter} \fR .RE .RS +9 Specifies the block allocation map type. When allocating blocks for a given file, GPFS first uses a round-robin algorithm to spread the data across all disks in the file system. After a disk is selected, the location of the data block on the disk is determined by the block allocation map type. If \fBcluster\fR is specified, GPFS attempts to allocate blocks in clusters. Blocks that belong to a given file are kept adjacent to each other within each cluster. If \fBscatter\fR is specified, the location of the block is chosen randomly. .PP The \fBcluster\fR allocation method may provide better disk performance for some disk subsystems in relatively small installations. The benefits of clustered block allocation diminish when the number of nodes in the cluster or the number of disks in a file system increases, or when the file system free space becomes fragmented. The \fBcluster\fR allocation method is the default for GPFS clusters with eight or fewer nodes or file systems with eight or fewer disks. .PP The \fBscatter\fR allocation method provides more consistent file system performance by averaging out performance variations due to block location (for many disk subsystems, the location of the data relative to the disk edge has a substantial effect on performance). This allocation method is appropriate in most cases and is the default for GPFS clusters with more than eight nodes or file systems with more than eight disks. .PP The block allocation map type cannot be changed after the file system has been created. .RE .PP .RS +3 \fB-k {\fB\fIposix\fR\fR | nfs4 | all} \fR .RE .RS +9 Specifies the type of authorization supported by the file system: .PP .RS +3 \fBposix \fR .RE .RS +9 Traditional GPFS ACLs only (NFS V4 ACLs are not allowed). Authorization controls are unchanged from earlier releases. The default is \fB-k posix\fR. .RE .PP .RS +3 \fBnfs4 \fR .RE .RS +9 Support for NFS V4 ACLs only. Users are not allowed to assign traditional GPFS ACLs to any file system objects (directories and individual files). .RE .PP .RS +3 \fBall \fR .RE .RS +9 Any supported ACL type is permitted. This includes traditional GPFS (\fBposix\fR) and NFS V4 ACLs (\fBnfs4\fR). .PP This allows a mixture of ACL types. For example, \fBfileA\fR may have a \fBposix\fR ACL, while \fBfileB\fR in the same file system may have an NFS V4 ACL, implying different access characteristics for each file depending on the ACL type that is currently assigned. .RE .PP Neither \fBnfs4\fR nor \fBall\fR should be specified here unless the file system is going to be exported to NFS V4 clients. NFS V4 ACLs affect file attributes (mode) and have access and authorization characteristics that are different from traditional GPFS ACLs. .RE .PP .RS +3 \fB-K {no | \fB\fIwhenpossible\fR\fR | always} \fR .RE .RS +9 Specifies whether strict replication is to be enforced: .PP .RS +3 \fBno \fR .RE .RS +9 Strict replication is not enforced. GPFS tries to create the needed number of replicas, but still returns EOK as long as it can allocate at least one replica. .RE .PP .RS +3 \fBwhenpossible \fR .RE .RS +9 Strict replication is enforced provided the disk configuration allows it. If the number of failure groups is insufficient, strict replication is not enforced. This is the default value. .RE .PP .RS +3 \fBalways \fR .RE .RS +9 Strict replication is enforced. .RE .RE .PP .RS +3 \fB-m \fIDefaultMetadataReplicas\fR \fR .RE .RS +9 Default number of copies of inodes, directories, and indirect blocks for a file. Valid values are 1 and 2, but cannot be greater than the value of \fIMaxMetadataReplicas\fR. The default is 1. .RE .PP .RS +3 \fB-M \fIMaxMetadataReplicas\fR \fR .RE .RS +9 Default maximum number of copies of inodes, directories, and indirect blocks for a file. Valid values are 1 and 2, but cannot be less than \fIDefaultMetadataReplicas\fR. The default is 1. .RE .PP .RS +3 \fB-n \fINumNodes\fR \fR .RE .RS +9 The estimated number of nodes that will mount the file system. This is used as a best guess for the initial size of some file system data structures. The default is 32. This value cannot be changed after the file system has been created. .PP When you create a GPFS file system, you might want to overestimate the number of nodes that will mount the file system. GPFS uses this information for creating data structures that are essential for achieving maximum parallelism in file system operations (see \fIAppendix A: GPFS architecture\fR in \fIGeneral Parallel File System: Concepts, Planning, and Installation Guide\fR). Although a large estimate consumes additional memory, underestimating the data structure allocation can reduce the efficiency of a node when it processes some parallel requests such as the allotment of disk space to a file. If you cannot predict the number of nodes that will mount the file system, allow the default value to be applied. If you are planning to add nodes to your system, you should specify a number larger than the default. However, do not make unrealistic estimates. Specifying an excessive number of nodes may have an adverse effect on buffer operations. .RE .PP .RS +3 \fB-N \fINumInodes\fR[:\fINumInodesToPreallocate\fR] \fR .RE .RS +9 The \fINumInodes\fR parameter specifies the maximum number of files in the file system. This value defaults to the size of the file system at creation divided by 1 MB, and is also constrained by the formula: .PP \fBmaximum number of files = (total file system space / 2) / (inode size + subblock size)\fR .PP For file systems that will be doing parallel file creates, if the total number of free inodes is not greater than 5% of the total number of inodes, there is the potential for slowdown in file system access. Take this into consideration when creating your file system. .PP The \fINumInodesToPreallocate\fR parameter specifies the number of inodes that the system immediately preallocates. If you do not specify a value for \fINumInodesToPreallocate\fR, GPFS dynamically allocates inodes as needed. .PP You can specify the \fINumInodes\fR and \fINumInodesToPreallocate\fR values with a suffix, for example 100K or 2M. .RE
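.PP As a worked instance of this formula, consider a 2 TB file system with a 256 KB block size, and therefore an 8 KB (8192-byte) subblock. The 512-byte inode size used here is an assumption for illustration only; check the actual inode size of your installation: .sp
.nf
(2 TB / 2) / (512 + 8192)
  = 1,099,511,627,776 / 8,704
  = approximately 126 million files
.fi
.sp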
.PP .RS +3 \fB-Q {yes | \fB\fIno\fR\fR} \fR .RE .RS +9 Activates quotas automatically when the file system is mounted. The default is \fB-Q no\fR. .PP To activate GPFS quota management after the file system has been created (see the example following this list): .RS +3 .HP 3 1. Mount the file system. .HP 3 2. To establish default quotas: .RS +3 .HP 3 a. Issue the \fBmmdefedquota\fR command to establish default quota values. .HP 3 b. Issue the \fBmmdefquotaon\fR command to activate default quotas. .RE .HP 3 3. To activate explicit quotas: .RS +3 .HP 3 a. Issue the \fBmmedquota\fR command to establish explicit quota values. .HP 3 b. Issue the \fBmmquotaon\fR command to activate quota enforcement. .RE .RE .RE
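.PP A sketch of this sequence for user quotas follows. The device name \fBgpfs1\fR and the user name \fBjsmith\fR are illustrative, and the exact options accepted by each command are described in the respective command descriptions: .sp
.nf
mmmount gpfs1            # 1.  mount the file system
mmdefedquota -u gpfs1    # 2a. edit default user quota values
mmdefquotaon gpfs1       # 2b. activate default quotas
mmedquota -u jsmith      # 3a. establish explicit quota values for one user
mmquotaon gpfs1          # 3b. activate quota enforcement
.fi
.sp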
.PP .RS +3 \fB-r \fIDefaultDataReplicas\fR \fR .RE .RS +9 Default number of copies of each data block for a file. Valid values are 1 and 2, but cannot be greater than \fIMaxDataReplicas\fR. The default is 1. .RE .PP .RS +3 \fB-R \fIMaxDataReplicas\fR \fR .RE .RS +9 Default maximum number of copies of data blocks for a file. Valid values are 1 and 2, but cannot be less than \fIDefaultDataReplicas\fR. The default is 1. .RE .PP .RS +3 \fB-S {yes | \fB\fIno\fR\fR} \fR .RE .RS +9 Suppresses the periodic updating of the value of \fBatime\fR as reported by the \fBgpfs_stat()\fR, \fBgpfs_fstat()\fR, \fBstat()\fR, and \fBfstat()\fR calls. The default value is \fB-S no\fR. When \fB-S yes\fR is specified for a new file system, the \fBatime\fR value reported by these calls is the time the file system was created. .RE .PP .RS +3 \fB-v {\fB\fIyes\fR\fR | no} \fR .RE .RS +9 Verifies that the specified disks do not belong to an existing file system. The default is \fB-v yes\fR. Specify \fB-v no\fR only when you want to reuse disks that are no longer needed for an existing file system. If the command is interrupted for any reason, you must use the \fB-v no\fR option on the next invocation of the command. .RE .PP .RS +3 \fB-z {\fByes\fR | \fB\fIno\fR\fR} \fR .RE .RS +9 Enables or disables DMAPI on the file system. The default is \fB-z no\fR. For further information on DMAPI for GPFS, see \fIGeneral Parallel File System: Data Management API Guide\fR. .RE .SH "Exit status" .PP .RS +3 \fB0 \fR .RE .RS +9 Successful completion. .RE .PP .RS +3 \fBnonzero \fR .RE .RS +9 A failure has occurred. .RE .SH "Security" .PP You must have root authority to run the \fBmmcrfs\fR command. .PP You may issue the \fBmmcrfs\fR command from any node in the GPFS cluster. .PP When using the \fBrcp\fR and \fBrsh\fR commands for remote communication, a properly configured \fB.rhosts\fR file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the \fBmmcrcluster\fR or the \fBmmchcluster\fR command, you must ensure: .RS +3 .HP 3 1. Proper authorization is granted to all nodes in the GPFS cluster. .HP 3 2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages. .RE .PP When considering data replication for files accessible to SANergy, see \fISANergy export considerations\fR in \fIGeneral Parallel File System: Advanced Administration Guide\fR. .SH "Examples" .PP This example creates a file system named \fBgpfs1\fR, using three disks, with a block size of 512 KB, allowing metadata and data replication factors of up to 2, and turning quotas on: .sp
.nf
mmcrfs /gpfs1 gpfs1 "hd3vsdn100;sdbnsd;sdensd" -B 512K -M 2 -R 2 -Q yes
.fi
.sp .PP The system displays output similar to: .sp
.nf
The following disks of gpfs1 will be formatted on node k5n95.kgn.ibm.com:
    hd3vsdn100: size 17760256 KB
    sdbnsd: size 70968320 KB
    sdensd: size 70968320 KB
Formatting file system ...
Disks up to size 208 GB can be added to storage pool system.
Creating Inode File
Creating Allocation Maps
Clearing Inode Allocation Map
Clearing Block Allocation Map
Completed creation of file system /dev/gpfs1.
mmcrfs: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
.fi
.sp .SH "See also" .PP mmchfs Command .PP mmdelfs Command .PP mmdf Command .PP mmedquota Command .PP mmfsck Command .PP mmlsfs Command .SH "Location" .PP \fB/usr/lpp/mmfs/bin\fR