.TH mmfsctl 02/16/06 mmfsctl Command
.SH "Name"
.PP
\fBmmfsctl\fR - Issues a file system control request.
.SH "Synopsis"
.PP
\fBmmfsctl\fR \fIDevice\fR \fB{suspend | resume}\fR
.PP
Or,
.PP
\fBmmfsctl\fR \fIDevice\fR \fB{exclude | include}\fR \fB{\fR\fB-d "\fR\fIDiskName\fR\fB[;\fR\fIDiskName\fR\fB...]" |\fR \fB-F\fR \fIDiskFile\fR | \fB-G\fR \fIFailureGroup\fR\fB}\fR
.PP
Or,
.PP
\fBmmfsctl\fR \fIDevice\fR \fBsyncFSconfig\fR \fB{\fR\fB-n\fR \fIRemoteNodesFile\fR | \fB-C\fR \fIremoteClusterName\fR\fB}\fR [\fB-S\fR \fISpecFile\fR]
.SH "Description"
.PP
Use the \fBmmfsctl\fR command to issue control requests to a particular GPFS file system. The command temporarily suspends and later resumes the processing of all application I/O requests, and synchronizes the file system's configuration state between peer clusters in disaster recovery environments.
.PP
See \fIEstablishing disaster recovery for your GPFS cluster\fR in \fIGeneral Parallel File System: Advanced Administration Guide\fR.
.PP
Before creating a FlashCopy image of the file system, the user must run \fBmmfsctl suspend\fR to temporarily quiesce all file system activity and flush the internal buffers on all nodes that mount this file system. This brings the on-disk metadata to a consistent state, which ensures the integrity of the FlashCopy snapshot. Any application I/O request issued to the file system after the invocation of this command is suspended until the user issues \fBmmfsctl resume\fR.
.PP
Once the FlashCopy image has been taken, the \fBmmfsctl resume\fR command can be issued to resume normal operation and complete any pending I/O requests.
.PP
The \fBmmfsctl syncFSconfig\fR command extracts the configuration information for the file system from the local GPFS configuration data, transfers this data to one of the nodes in the peer cluster, and attempts to import it there.
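.PP
For example, assuming the file system is named \fBfs0\fR and the contact nodes of the peer cluster are listed in a file named \fB/tmp/peer_nodes\fR (both hypothetical names), the configuration could be propagated with:
.sp
.nf
mmfsctl fs0 syncFSconfig -n /tmp/peer_nodes
.fi
.sp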
.PP
Once the GPFS file system has been defined in the primary cluster, users run this command to import the configuration of this file system into the peer recovery cluster. After producing a FlashCopy image of the file system and propagating it to the peer cluster using Peer-to-Peer Remote Copy (PPRC), users similarly run this command to propagate any relevant configuration changes made in the cluster since the previous snapshot.
.PP
The primary cluster configuration server of the peer cluster must be available and accessible using remote shell and remote copy when the \fBmmfsctl syncFSconfig\fR command is invoked. Also, the peer GPFS clusters should be defined to use the same remote shell and remote copy mechanism, and they must be set up to allow nodes in peer clusters to communicate without the use of a password.
.PP
Not all administrative actions performed on the file system necessitate this type of resynchronization. It is required only for those actions that modify the file system information maintained in the local GPFS configuration data. These actions include:
.RS +3
.HP 3
\(bu Additions, removals, and replacements of disks (commands \fBmmadddisk\fR, \fBmmdeldisk\fR, \fBmmrpldisk\fR)
.HP 3
\(bu Modifications to disk attributes (command \fBmmchdisk\fR)
.HP 3
\(bu Changes to the file system's mount point (command \fBmmchfs -T\fR)
.HP 3
\(bu Changes to the file system device name (command \fBmmchfs -W\fR)
.RE
.PP
The process of synchronizing the file system configuration data can be automated by using the \fBsyncfsconfig\fR user exit.
.PP
The \fBmmfsctl exclude\fR command is to be used only in a disaster recovery environment, only after a disaster has occurred, and only after ensuring that the disks in question have been physically disconnected. Otherwise, unexpected results may occur.
.PP
The \fBmmfsctl exclude\fR command can be used to manually override the file system descriptor quorum after a site-wide disaster.
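.PP
For instance, assuming the disks at the failed site are the NSDs \fBgpfs1nsd\fR and \fBgpfs2nsd\fR (hypothetical names), they could be excluded from a file system named \fBfs0\fR with:
.sp
.nf
mmfsctl fs0 exclude -d "gpfs1nsd;gpfs2nsd"
.fi
.sp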
See \fIEstablishing disaster recovery for your GPFS cluster\fR in \fIGeneral Parallel File System: Advanced Administration Guide\fR. This command enables users to restore normal access to the file system with fewer than a quorum of available file system descriptor replica disks, by effectively excluding the specified disks from all subsequent operations on the file system descriptor. After repairing the disks, the \fBmmfsctl include\fR command can be issued to restore the initial quorum configuration.
.SH "Parameters"
.PP
.RS +3
\fB\fIDevice\fR \fR
.RE
.RS +9
The device name of the file system. File system names need not be fully qualified; \fBfs0\fR is just as acceptable as \fB/dev/fs0\fR. If \fBall\fR is specified with the \fBsyncFSconfig\fR option, this command is performed on all GPFS file systems defined in the cluster.
.RE
.PP
.RS +3
\fBexclude \fR
.RE
.RS +9
Instructs GPFS to exclude the specified group of disks from all subsequent operations on the file system descriptor, and to change their availability state to \fBdown\fR, if the conditions in the Note below are met.
.PP
If necessary, this command assigns additional disks to serve as the disk descriptor replica holders, and migrates the disk descriptor to the new replica set. The excluded disks are not deleted from the file system, and still appear in the output of the \fBmmlsdisk\fR command.
.RS +3
\fBNote:\fR
.RE
.RS +9
The \fBmmfsctl exclude\fR command is to be used only in a disaster recovery environment, only after a disaster has occurred, and only after ensuring that the disks in question have been physically disconnected. Otherwise, unexpected results may occur.
.RE
.RE
.PP
.RS +3
\fBinclude \fR
.RE
.RS +9
Informs GPFS that the previously excluded disks have become operational again. This command writes the up-to-date version of the disk descriptor to each of the specified disks, and clears the \fBexcl\fR tag.
.RE
.PP
.RS +3
\fBresume \fR
.RE
.RS +9
Instructs GPFS to resume the normal processing of I/O requests on all nodes.
.RE
.PP
.RS +3
\fBsuspend \fR
.RE
.RS +9
Instructs GPFS to flush the internal buffers on all nodes, bring the file system to a consistent state on disk, and suspend the processing of all subsequent application I/O requests.
.RE
.PP
.RS +3
\fBsyncFSconfig \fR
.RE
.RS +9
Synchronizes the configuration state of a GPFS file system between the local cluster and its peer in two-cluster disaster recovery configurations.
.RE
.PP
.RS +3
\fB-C \fIremoteClusterName\fR \fR
.RE
.RS +9
Specifies the name of the GPFS cluster that owns the remote GPFS file system.
.RE
.PP
.RS +3
\fB-d "\fIDiskName\fR\fB[;\fR\fIDiskName\fR\fB...]"\fR \fR
.RE
.RS +9
Specifies the names of the NSDs to be included or excluded by the \fBmmfsctl\fR command. Separate the names with semicolons (;) and enclose the list of disk names in quotation marks.
.RE
.PP
.RS +3
\fB-F \fIDiskFile\fR \fR
.RE
.RS +9
Specifies a file containing the names of the NSDs, one per line, to be included or excluded by the \fBmmfsctl\fR command.
.RE
.PP
.RS +3
\fB-G \fIFailureGroup\fR \fR
.RE
.RS +9
Specifies a number identifying the failure group of the disks to be included or excluded by the \fBmmfsctl\fR command.
.RE
.PP
.RS +3
\fB-n \fIRemoteNodesFile\fR \fR
.RE
.RS +9
Specifies a list of contact nodes in the peer recovery cluster that GPFS uses when importing the configuration data into that cluster. Although any node in the peer cluster can be specified here, users are advised, for efficiency, to specify the identities of the peer cluster's primary and secondary cluster configuration servers.
.RE
.PP
.RS +3
\fB-S \fISpecFile\fR \fR
.RE
.RS +9
Specifies the description of changes to be made to the file system in the peer cluster during the import step. The format of this file is identical to that of the \fIChangeSpecFile\fR used as input to the \fBmmimportfs\fR command.
This option can be used, for example, to define the assignment of the NSD servers for use in the peer cluster.
.RE
.SH "Options"
.PP
None.
.SH "Exit status"
.PP
.RS +3
\fB0 \fR
.RE
.RS +9
Successful completion.
.RE
.PP
.RS +3
\fBnonzero \fR
.RE
.RS +9
A failure has occurred.
.RE
.SH "Results"
.PP
The \fBmmfsctl\fR command returns 0 if successful.
.SH "Security"
.PP
You must have root authority to run the \fBmmfsctl\fR command.
.PP
You may issue the \fBmmfsctl\fR command from any node in the GPFS cluster.
.PP
When using the \fBrcp\fR and \fBrsh\fR commands for remote communication, a properly configured \fB.rhosts\fR file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the \fBmmcrcluster\fR or the \fBmmchcluster\fR command, you must ensure that:
.RS +3
.HP 3
1. Proper authorization is granted to all nodes in the GPFS cluster.
.HP 3
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
.RE
.SH "Examples"
.PP
This sequence of commands creates a FlashCopy image of the file system and propagates this image to the recovery cluster using the Peer-to-Peer Remote Copy technology. The following configuration is assumed:
.br
.sp
.RS +0.1i
.nf
.TS
tab(~);
l l.
Site~LUNs
Primary cluster (site A)~lunA1, lunA2
Recovery cluster (site B)~lunB1
.TE
.sp
.fi
.RE
.PP
.RS +3
\fBlunA1 \fR
.RE
.RS +9
FlashCopy source
.RE
.PP
.RS +3
\fBlunA2 \fR
.RE
.RS +9
FlashCopy target, PPRC source
.RE
.PP
.RS +3
\fBlunB1 \fR
.RE
.RS +9
PPRC target
.RE
.PP
A single GPFS file system named \fBfs0\fR has been defined in the primary cluster over lunA1.
.RS +3
.HP 3
1.
In the primary cluster, suspend all file system I/O activity and flush the GPFS buffers:
.sp
.nf
mmfsctl fs0 suspend
.fi
.sp
The output is similar to this:
.sp
.nf
Writing dirty data to disk
Quiescing all file system operations
Writing dirty data to disk again
.fi
.sp
.HP 3
2. Establish a FlashCopy pair using lunA1 as the source and lunA2 as the target.
.HP 3
3. Resume the file system I/O activity:
.sp
.nf
mmfsctl fs0 resume
.fi
.sp
The output is similar to this:
.sp
.nf
Resuming operations.
.fi
.sp
.HP 3
4. Establish a Peer-to-Peer Remote Copy (PPRC) path and a synchronous PPRC volume pair lunA2-lunB1 (primary-secondary). Use the 'copy entire volume' option and leave the 'permit read from secondary' option disabled.
.HP 3
5. Wait for the completion of the FlashCopy background task, and for the PPRC pair to reach the duplex (fully synchronized) state.
.HP 3
6. Terminate the PPRC volume pair lunA2-lunB1.
.HP 3
7. If this is the first time a snapshot has been taken, or if the configuration state of \fBfs0\fR has changed since the previous FlashCopy snapshot, propagate the most recent configuration to site B:
.sp
.nf
mmfsctl fs0 syncFSconfig -n recovery_clust_nodelist
.fi
.sp
.RE
.SH "Location"
.PP
\fB/usr/lpp/mmfs/bin\fR
.PP