.TH mmcrfs 8 "12/04/06"
mmcrfs Command
.SH "Name"
.PP
\fBmmcrfs\fR - Creates a GPFS file system.
.SH "Synopsis"
.PP
\fBmmcrfs\fR \fIMountpoint\fR \fIDevice\fR
{"\fIDiskDesc\fR[;\fIDiskDesc\fR...]" | \fB-F\fR \fIDescFile\fR}
[\fB-A\fR {\fIyes\fR | \fBno\fR | \fBautomount\fR}]
[\fB-D\fR {\fBnfs4\fR | \fIposix\fR}] [\fB-B\fR \fIBlockSize\fR]
[\fB-E\fR {\fIyes\fR | \fBno\fR}]
[\fB-j\fR {\fBcluster\fR | \fBscatter\fR}]
[\fB-k\fR {\fIposix\fR | \fBnfs4\fR | \fBall\fR}]
[\fB-K\fR {\fBno\fR | \fIwhenpossible\fR | \fBalways\fR}]
[\fB-m\fR \fIDefaultMetadataReplicas\fR]
[\fB-M\fR \fIMaxMetadataReplicas\fR] [\fB-n\fR \fINumNodes\fR]
[\fB-N\fR \fINumInodes\fR[:\fINumInodesToPreallocate\fR]]
[\fB-Q\fR {\fByes\fR | \fIno\fR}] [\fB-r\fR \fIDefaultDataReplicas\fR]
[\fB-R\fR \fIMaxDataReplicas\fR] [\fB-S\fR {\fByes\fR | \fIno\fR}]
[\fB-v\fR {\fIyes\fR | \fBno\fR}] [\fB-z\fR {\fByes\fR | \fIno\fR}]
.SH "Description"
.PP
Use the \fBmmcrfs\fR command to create a GPFS file system. The first
three arguments \fImust\fR be \fIMountpoint\fR, \fIDevice\fR, and
either a \fIDiskDesc\fR list or \fB-F\fR \fIDescFile\fR, and they
\fImust\fR appear in that order. The block size and replication
factors chosen affect file system performance. A maximum of 32 file
systems, including remote file systems, may be mounted in a GPFS
cluster at one time.
.PP
When deciding on the maximum number of files (number of inodes) in a
file system, consider that file systems performing parallel file
creates may slow down once the number of free inodes falls below 5% of
the total number of inodes. The total number of inodes can be
increased later with the \fBmmchfs\fR command.
.PP
When deciding on a block size for a file system, consider these points:
.RS +3
.HP 3
1. Supported block sizes are 16 KB, 64 KB, 256 KB, 512 KB, 1 MB, 2 MB,
and 4 MB.
.HP 3
2. The GPFS block size determines:
.RS +3
.HP 3
\(bu The minimum disk space allocation unit. The minimum amount of space
that file data can occupy is a subblock. A subblock is 1/32 of the
block size.
.HP 3
\(bu The maximum size of a read or write request that GPFS sends to the
underlying disk driver.
.RE
.HP 3
3. From a performance perspective, set the GPFS block size to match
either the application buffer size, the RAID stripe size, or a multiple
of the RAID stripe size. If the GPFS block size does not match the RAID
stripe size, performance may be severely degraded, especially for write
operations.
.HP 3
4. In file systems with a high degree of variance in file sizes, a
small block size would have a large impact on performance when
accessing large files. In this kind of system it is suggested that you
use a block size of 256 KB (8 KB subblock). Even if only 1% of the
files are large, the space taken by the large files usually dominates
the space used on disk, and the waste in the subblock used for small
files is usually insignificant. For further performance information,
see the GPFS white papers at
www.ibm.com/servers/eserver/clusters/library/wp_aix_lit.html.
.HP 3
5. The effect of block size on file system performance largely depends
on the application I/O pattern.
.RS +3
.HP 3
\(bu A larger block size is often beneficial for large sequential read
and write workloads.
.HP 3
\(bu A smaller block size is likely to offer better performance for
small file, small random read and write, and metadata-intensive
workloads.
.RE
.HP 3
6. The efficiency of many algorithms that rely on caching file data in
the GPFS page pool depends more on the number of blocks cached than on
the absolute amount of data. For a page pool of a given size, a larger
file system block size means fewer blocks cached. Therefore, when you
create file systems with a block size larger than the default 256 KB,
it is recommended that you increase the page pool size in proportion to
the block size.
.HP 3
7. The file system block size may not exceed the value of the GPFS
\fBmaxBlockSize\fR configuration parameter, which is set to 1 MB by
default. If a larger block size is desired, use the \fBmmchconfig\fR
command to increase \fBmaxBlockSize\fR before starting GPFS, as shown
in the example following this list.
.RE
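.PP
For example (the value shown is illustrative), to permit a 2 MB block
size, raise \fBmaxBlockSize\fR before starting GPFS:
.sp
.nf
mmchconfig maxblocksize=2M
.fi
.sp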
.PP
\fBResults\fR
.PP
Upon successful completion of the \fBmmcrfs\fR command, these tasks are
completed on all GPFS nodes:
.RS +3
.HP 3
\(bu
The mount point directory is created.
.HP 3
\(bu
The file system is formatted.
.RE
.SH "Parameters"
.PP
.RS +3
\fB\fIMountpoint\fR
\fR
.RE
.RS +9
The mount point directory of the GPFS file system.
.RE
.PP
.RS +3
\fB\fIDevice\fR
\fR
.RE
.RS +9
The device name of the file system to be created.
.PP
File system names need not be fully qualified; \fBfs0\fR is as
acceptable as \fB/dev/fs0\fR. However, file system names must be
unique within a GPFS cluster. Do not specify an existing entry in
\fB/dev\fR.
.RE
.PP
.RS +3
\fB-D\fR {\fBnfs4\fR | \fIposix\fR}
\fR
.RE
.RS +9
Specifies whether a 'deny-write open lock' will block writes, which is
expected and required by NFS V4. File systems supporting NFS V4 must
have \fB-D nfs4\fR set. The option \fB-D posix\fR allows NFS writes
even in the presence of a deny-write open lock. If you intend to
export the file system using NFS V4 or Samba, you must use
\fB-D nfs4\fR. For NFS V3 (or if the file system is not NFS exported
at all) use \fB-D posix\fR. The default is \fB-D posix\fR.
.RE
.PP
.RS +3
\fB-F \fIDescFile\fR
\fR
.RE
.RS +9
Specifies a file containing a list of disk descriptors, one per line.
You may use the rewritten \fIDiskDesc\fR file created by the
\fBmmcrnsd\fR command, create your own file, or enter the disk
descriptors on the command line. When using the \fIDiskDesc\fR file
created by the \fBmmcrnsd\fR command, the values supplied on input to
that command for \fIDiskUsage\fR and \fIFailureGroup\fR are used. When
creating your own file or entering the descriptors on the command line,
you must specify these values or accept the system defaults. A sample
file can be found in \fB/usr/lpp/mmfs/samples/diskdesc\fR.
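.PP
For example (the path is illustrative), to create a file system from a
descriptor file written by \fBmmcrnsd\fR:
.sp
.nf
mmcrfs /gpfs1 gpfs1 -F /tmp/gpfs1.desc
.fi
.sp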
.RE
.PP
.RS +3
\fB"\fIDiskDesc\fR[;\fIDiskDesc\fR...]"
\fR
.RE
.RS +9
A descriptor for each disk to be included. Each descriptor is
separated by a semicolon (;). The entire list must be enclosed in
quotation marks (' or ").
.PP
The current maximum number of disk descriptors that can be defined for
any single file system is 268 million. However, to achieve this
maximum limit, you must recompile GPFS. The actual number of disks in
your file system may be constrained by products other than GPFS that
you have installed. Refer to the individual product documentation.
.PP
A disk descriptor is defined as follows (second, third and sixth fields
reserved); a sample descriptor file appears after the field
descriptions:
.sp
.nf
DiskName:::DiskUsage:FailureGroup::StoragePool:
.fi
.sp
.PP
.RS +3
\fB\fIDiskName\fR
\fR
.RE
.RS +9
.PP
You must specify the name of the NSD previously created by the
\fBmmcrnsd\fR command. For a list of available disks, issue the
\fBmmlsnsd -F\fR command.
.RE
.PP
.RS +3
\fB\fIDiskUsage\fR
\fR
.RE
.RS +9
Specify a disk usage or accept the default:
.PP
.RS +3
\fBdataAndMetadata
\fR
.RE
.RS +9
Indicates that the disk contains both data and metadata. This is the
default.
.RE
.PP
.RS +3
\fBdataOnly
\fR
.RE
.RS +9
Indicates that the disk contains data and does not contain metadata.
.RE
.PP
.RS +3
\fBmetadataOnly
\fR
.RE
.RS +9
Indicates that the disk contains metadata and does not contain data.
.RE
.PP
.RS +3
\fBdescOnly
\fR
.RE
.RS +9
Indicates that the disk contains no data and no file metadata. Such a
disk is used solely to keep a copy of the file system descriptor, and
can be used as a third failure group in certain disaster recovery
configurations. For more information, see \fIGeneral Parallel File
System: Advanced Administration\fR and search on \fISynchronous
mirroring utilizing GPFS replication\fR.
.RE
.RE
.PP
.RS +3
\fB\fIFailureGroup\fR
\fR
.RE
.RS +9
A number identifying the failure group to which this disk belongs.
You can specify any value from -1 (where -1 indicates that the disk has
no point of failure in common with any other disk) to 4000. If you do
not specify a failure group, the value defaults to the primary server
node number plus 4000. If an NSD server node is not specified, the
value defaults to -1. GPFS uses this information during data and
metadata placement to ensure that no two replicas of the same block are
written in such a way as to become unavailable due to a single failure.
All disks that are attached to the same NSD server or adapter should be
placed in the same failure group.
.PP
If replication of \fB-m\fR or \fB-r\fR is set to 2, storage pools must
have two failure groups for the commands to work properly.
.RE
.PP
.RS +3
\fB\fIStoragePool\fR
\fR
.RE
.RS +9
Specifies the storage pool to which the disk is to be assigned. If
this name is not provided, the default is \fBsystem\fR.
.PP
Only the \fBsystem\fR pool may contain \fBdescOnly\fR,
\fBmetadataOnly\fR or \fBdataAndMetadata\fR disks.
.RE
.RE
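.PP
A sample descriptor file (the NSD names, failure groups, and pool names
are illustrative):
.sp
.nf
gpfs1nsd:::dataAndMetadata:1::system:
gpfs2nsd:::dataOnly:2::pool1:
gpfs3nsd:::metadataOnly:1::system:
.fi
.sp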
.SH "Options"
.PP
.RS +3
\fB-A\fR {\fIyes\fR | \fBno\fR | \fBautomount\fR}
\fR
.RE
.RS +9
Indicates when the file system is to be mounted:
.PP
.RS +3
\fByes
\fR
.RE
.RS +9
When the GPFS daemon starts. This is the default.
.RE
.PP
.RS +3
\fBno
\fR
.RE
.RS +9
Manual mount.
.RE
.PP
.RS +3
\fBautomount
\fR
.RE
.RS +9
When the file system is first accessed.
.RE
.RE
.PP
.RS +3
\fB-B \fIBlockSize\fR
\fR
.RE
.RS +9
The size of data blocks. Must be 16 KB, 64 KB, 256 KB (the default),
512 KB, 1 MB, 2 MB, or 4 MB. Specify this value with the character
\fBK\fR or \fBM\fR, for example 512K.
.RE
.PP
.RS +3
\fB-E\fR {\fIyes\fR | \fBno\fR}
\fR
.RE
.RS +9
Specifies whether to report \fIexact\fR \fBmtime\fR values
(\fB-E yes\fR), or to periodically update the \fBmtime\fR value for a
file system (\fB-E no\fR). If exact modification times are desired for
a file system, specify or use the default \fB-E yes\fR option.
.RE
.PP
.RS +3
\fB-j\fR {\fBcluster\fR | \fBscatter\fR}
\fR
.RE
.RS +9
Specifies the block allocation map type. When allocating blocks for a
given file, GPFS first uses a round-robin algorithm to spread the data
across all disks in the file system. After a disk is selected, the
location of the data block on the disk is determined by the block
allocation map type. If \fBcluster\fR is specified, GPFS attempts to
allocate blocks in clusters. Blocks that belong to a given file are
kept adjacent to each other within each cluster. If \fBscatter\fR is
specified, the location of the block is chosen randomly.
.PP
The \fBcluster\fR allocation method may provide better disk performance
for some disk subsystems in relatively small installations. The
benefits of clustered block allocation diminish when the number of
nodes in the cluster or the number of disks in a file system increases,
or when the file system free space becomes fragmented. The
\fBcluster\fR allocation method is the default for GPFS clusters with
eight or fewer nodes or file systems with eight or fewer disks.
.PP
The \fBscatter\fR allocation method provides more consistent file
system performance by averaging out performance variations due to block
location (for many disk subsystems, the location of the data relative
to the disk edge has a substantial effect on performance). This
allocation method is appropriate in most cases and is the default for
GPFS clusters with more than eight nodes or file systems with more than
eight disks.
.PP
The block allocation map type cannot be changed after the file system
has been created.
.RE
.PP
.RS +3
\fB-k\fR {\fIposix\fR | \fBnfs4\fR | \fBall\fR}
\fR
.RE
.RS +9
Specifies the type of authorization supported by the file system:
.PP
.RS +3
\fBposix
\fR
.RE
.RS +9
Traditional GPFS ACLs only (NFS V4 ACLs are not allowed).
Authorization controls are unchanged from earlier releases. The
default is \fB-k posix\fR.
.RE
.PP
.RS +3
\fBnfs4
\fR
.RE
.RS +9
Support for NFS V4 ACLs only. Users are not allowed to assign
traditional GPFS ACLs to any file system objects (directories and
individual files).
.RE
.PP
.RS +3
\fBall
\fR
.RE
.RS +9
Any supported ACL type is permitted. This includes traditional GPFS
(\fBposix\fR) and NFS V4 ACLs (\fBnfs4\fR).
.PP
This setting allows a mixture of ACL types. For example, \fBfileA\fR
may have a \fBposix\fR ACL, while \fBfileB\fR in the same file system
may have an NFS V4 ACL, implying different access characteristics for
each file depending on the ACL type that is currently assigned.
.RE
.PP
Neither \fBnfs4\fR nor \fBall\fR should be specified here unless the
file system is going to be exported to NFS V4 clients. NFS V4 ACLs
affect file attributes (mode) and have access and authorization
characteristics that are different from traditional GPFS ACLs.
.RE
.PP
.RS +3
\fB-K\fR {\fBno\fR | \fIwhenpossible\fR | \fBalways\fR}
\fR
.RE
.RS +9
Specifies whether strict replication is to be enforced:
.PP
.RS +3
\fBno
\fR
.RE
.RS +9
Strict replication is not enforced. GPFS will try to create the
needed number of replicas, but will still return EOK as long as it can
allocate at least one replica.
.RE
.PP
.RS +3
\fBwhenpossible
\fR
.RE
.RS +9
Strict replication is enforced provided the disk configuration allows
it. If the number of failure groups is insufficient, strict
replication will not be enforced. This is the default value.
.RE
.PP
.RS +3
\fBalways
\fR
.RE
.RS +9
Strict replication is enforced.
.RE
.RE
.PP
.RS +3
\fB-m \fIDefaultMetadataReplicas\fR
\fR
.RE
.RS +9
The default number of copies of inodes, directories, and indirect
blocks for a file. Valid values are 1 and 2, but cannot be greater
than the value of \fIMaxMetadataReplicas\fR. The default is 1.
.RE
.PP
.RS +3
\fB-M \fIMaxMetadataReplicas\fR
\fR
.RE
.RS +9
The default maximum number of copies of inodes, directories, and
indirect blocks for a file. Valid values are 1 and 2, but cannot be
less than \fIDefaultMetadataReplicas\fR. The default is 1.
.RE
.PP
.RS +3
\fB-n \fINumNodes\fR
\fR
.RE
.RS +9
The estimated number of nodes that will mount the file system. This is
used as a best guess for the initial size of some file system data
structures. The default is 32. This value cannot be changed after the
file system has been created.
.PP
When you create a GPFS file system, you might want to overestimate the
number of nodes that will mount the file system. GPFS uses this
information for creating data structures that are essential for
achieving maximum parallelism in file system operations (see
\fIAppendix A: GPFS architecture\fR in \fIGeneral Parallel File System:
Concepts, Planning, and Installation Guide\fR). Although a large
estimate consumes additional memory, underestimating the data structure
allocation can reduce the efficiency of a node when it processes some
parallel requests such as the allotment of disk space to a file. If
you cannot predict the number of nodes that will mount the file system,
allow the default value to be applied. If you are planning to add
nodes to your system, specify a number larger than the default.
However, do not make unrealistic estimates; specifying an excessive
number of nodes may have an adverse effect on buffer operations.
.RE
.PP
.RS +3
\fB-N \fINumInodes\fR[:\fINumInodesToPreallocate\fR]
\fR
.RE
.RS +9
The \fINumInodes\fR parameter specifies the maximum number of files in
the file system. This value defaults to the size of the file system at
creation divided by 1 MB, and is also constrained by the formula:
.PP
\fBmaximum number of files = (total file system space / 2) / (inode
size + subblock size)\fR
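.PP
For example, on a 100 GB file system with a 512-byte inode size (a
typical value; yours may differ) and a 256 KB block size (8 KB
subblock), the formula allows at most (100 GB / 2) / (512 + 8192
bytes), or roughly 6.2 million files.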
.PP
For file systems that will be doing parallel file creates, if the total
number of free inodes is not greater than 5% of the total number of
inodes, there is the potential for slowdown in file system access.
Take this into consideration when creating your file system.
.PP
The \fINumInodesToPreallocate\fR parameter specifies the number of
inodes that the system will immediately preallocate. If you do not
specify a value for \fINumInodesToPreallocate\fR, GPFS will dynamically
allocate inodes as needed.
.PP
You can specify the \fINumInodes\fR and \fINumInodesToPreallocate\fR
values with a suffix, for example 100K or 2M.
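.PP
For example, \fB-N 2M:500K\fR sets the maximum number of files to about
two million and preallocates about 500,000 inodes.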
.RE
.PP
.RS +3
\fB-Q\fR {\fByes\fR | \fIno\fR}
\fR
.RE
.RS +9
Activates quotas automatically when the file system is mounted. The
default is \fB-Q no\fR.
.PP
To activate GPFS quota management after the file system has been
created (see the example following these steps):
.RS +3
.HP 3
1. Mount the file system.
.HP 3
2. To establish default quotas:
.RS +3
.HP 3
a. Issue the \fBmmdefedquota\fR command to establish default quota
values.
.HP 3
b. Issue the \fBmmdefquotaon\fR command to activate default quotas.
.RE
.HP 3
3. To activate explicit quotas:
.RS +3
.HP 3
a. Issue the \fBmmedquota\fR command to set explicit quota values.
.HP 3
b. Issue the \fBmmquotaon\fR command to activate quota enforcement.
.RE
.RE
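.PP
A sketch of that sequence (the device name and user are illustrative;
see each command's documentation for exact syntax):
.sp
.nf
mmmount gpfs1          # 1.  mount the file system
mmdefedquota -u gpfs1  # 2a. establish default user quota values
mmdefquotaon gpfs1     # 2b. activate default quotas
mmedquota -u smith     # 3a. set explicit quota values for user smith
mmquotaon gpfs1        # 3b. activate quota enforcement
.fi
.sp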
.RE
.PP
.RS +3
\fB-r \fIDefaultDataReplicas\fR
\fR
.RE
.RS +9
The default number of copies of each data block for a file. Valid
values are 1 and 2, but cannot be greater than \fIMaxDataReplicas\fR.
The default is 1.
.RE
.PP
.RS +3
\fB-R \fIMaxDataReplicas\fR
\fR
.RE
.RS +9
The default maximum number of copies of data blocks for a file. Valid
values are 1 and 2, but cannot be less than \fIDefaultDataReplicas\fR.
The default is 1.
.RE
.PP
.RS +3
\fB-S\fR {\fByes\fR | \fIno\fR}
\fR
.RE
.RS +9
Suppresses the periodic updating of the value of \fBatime\fR as
reported by the \fBgpfs_stat()\fR, \fBgpfs_fstat()\fR, \fBstat()\fR,
and \fBfstat()\fR calls. The default value is \fB-S no\fR. Specifying
\fB-S yes\fR for a new file system results in these calls reporting the
time the file system was created.
.RE
.PP
.RS +3
\fB-v\fR {\fIyes\fR | \fBno\fR}
\fR
.RE
.RS +9
Verifies that the specified disks do not belong to an existing file
system. The default is \fB-v yes\fR. Specify \fB-v no\fR only when
you want to reuse disks that are no longer needed for an existing file
system. If the command is interrupted for any reason, you must use the
\fB-v no\fR option on the next invocation of the command.
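.PP
For example (names and path are illustrative), to rerun an interrupted
create:
.sp
.nf
mmcrfs /gpfs1 gpfs1 -F /tmp/gpfs1.desc -v no
.fi
.sp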
.RE
.PP
.RS +3
\fB-z\fR {\fByes\fR | \fIno\fR}
\fR
.RE
.RS +9
Enables or disables DMAPI on the file system. The default is
\fB-z no\fR. For further information on DMAPI for GPFS, see
\fIGeneral Parallel File System: Data Management API Guide\fR.
.RE
.SH "Exit status"
.PP
.RS +3
\fB0
\fR
.RE
.RS +9
Successful completion.
.RE
.PP
.RS +3
\fBnonzero
\fR
.RE
.RS +9
A failure has occurred.
.RE
.SH "Security"
.PP
You must have root authority to run the \fBmmcrfs\fR command.
.PP
You may issue the \fBmmcrfs\fR command from any node in the GPFS
cluster.
.PP
When using the \fBrcp\fR and \fBrsh\fR commands for remote
communication, a properly configured \fB.rhosts\fR file must exist in
the root user's home directory on each node in the GPFS cluster. If
you have designated the use of a different remote communication program
on either the \fBmmcrcluster\fR or the \fBmmchcluster\fR command, you
must ensure:
.RS +3
.HP 3
1. Proper authorization is granted to all nodes in the GPFS cluster.
.HP 3
2. The nodes in the GPFS cluster can communicate without the use of a
password, and without any extraneous messages.
.RE
.PP
When considering data replication for files accessible to SANergy, see
\fISANergy export considerations\fR in \fIGeneral Parallel File System:
Advanced Administration Guide\fR.
.SH "Examples"
.PP
This example creates a file system named \fBgpfs1\fR, using three
disks, with a block size of 512 KB, allowing metadata and data
replication of 2, and turning quotas on:
.sp
.nf
mmcrfs /gpfs1 gpfs1 "hd3vsdn100;sdbnsd;sdensd" -B 512K -M 2 -R 2 -Q yes
.fi
.sp
.PP
The system displays output similar to:
.sp
.nf
The following disks of gpfs1 will be formatted on node k5n95.kgn.ibm.com:
hd3vsdn100: size 17760256 KB
sdbnsd: size 70968320 KB
sdensd: size 70968320 KB
Formatting file system ...
Disks up to size 208 GB can be added to storage pool system.
Creating Inode File
Creating Allocation Maps
Clearing Inode Allocation Map
Clearing Block Allocation Map
Completed creation of file system /dev/gpfs1.
mmcrfs: Propagating the cluster configuration data to all
affected nodes.  This is an asynchronous process.
.fi
.sp
.SH "See also"
.PP
mmchfs Command
.PP
mmdelfs Command
.PP
mmdf Command
.PP
mmedquota Command
.PP
mmfsck Command
.PP
mmlsfs Command
.SH "Location"
.PP
\fB/usr/lpp/mmfs/bin\fR