.TH mmcrcluster 02/16/06 mmcrcluster Command .SH "Name" .PP \fBmmcrcluster\fR - Creates a GPFS cluster from a set of nodes. .SH "Synopsis" .PP \fBmmcrcluster\fR \fB-N\fR {\fINodeDesc\fR[,\fINodeDesc\fR...] | \fINodeFile\fR} \fB-p\fR \fIPrimaryServer\fR [\fB-s\fR \fISecondaryServer\fR] [\fB-r\fR \fIRemoteShellCommand\fR] [\fB-R\fR \fIRemoteFileCopyCommand\fR] [\fB-C\fR \fIClusterName\fR] [\fB-U\fR \fIDomainName\fR] [\fB-A\fR] [\fB-c\fR \fIConfigFile\fR] .SH "Description" .PP Use the \fBmmcrcluster\fR command to create a GPFS cluster. .PP Upon successful completion of the \fBmmcrcluster\fR command, the \fB/var/mmfs/gen/mmsdrfs\fR and the \fB/var/mmfs/gen/mmfsNodeData\fR files are created on each of the nodes in the cluster. Do not delete these files under any circumstances. For further information, see the \fIGeneral Parallel File System: Concepts, Planning, and Installation Guide\fR. .PP You must follow these rules when creating your GPFS cluster: .RS +3 .HP 3 \(bu While a node may mount file systems from multiple clusters, the node itself may be added to only a single cluster using the \fBmmcrcluster\fR or \fBmmaddnode\fR command. .HP 3 \(bu The nodes must be available for the command to be successful. If any of the nodes listed are not available when the command is issued, a message listing those nodes is displayed. You must correct the problem on each node and issue the \fBmmaddnode\fR command to add those nodes. .HP 3 \(bu You must designate at least one node as a quorum node. You are strongly advised to designate the cluster configuration servers as quorum nodes. The total number of quorum nodes you will have depends on whether you intend to use the node quorum with tiebreaker algorithm or the regular node-based quorum algorithm. For more details, see the \fIGeneral Parallel File System: Concepts, Planning, and Installation Guide\fR and search for \fIdesignating quorum nodes\fR. 
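.sp The quorum rule above can be checked mechanically before the cluster is created. As a sketch (the file name and its contents are hypothetical, using the node descriptor format described under \fB-N\fR), this counts the descriptors that designate a quorum node:

```shell
# Sketch: count quorum designations in a node descriptor file.
# The file name and its contents are hypothetical examples.
cat > /tmp/nodelist <<'EOF'
k164n04.kgn.ibm.com:quorum
k164n05.kgn.ibm.com:manager-quorum
k164n06.kgn.ibm.com
EOF
# Designations form a '-'-separated list after the first ':', so a quorum
# role is always preceded by ':' or '-'; 'nonquorum' is preceded by the
# letter 'n' and is therefore not counted.
grep -cE '[:-]quorum' /tmp/nodelist
```

With the contents above this prints 2; the cluster needs at least one.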
.RE .SH "Parameters" .PP .RS +3 \fB\fB-A\fR \fR .RE .RS +9 Specifies that GPFS daemons are to be automatically started when nodes come up. The default is not to start daemons automatically. .RE .PP .RS +3 \fB\fB-C\fR \fIClusterName\fR \fR .RE .RS +9 Specifies a name for the cluster. If the user-provided name contains dots, it is assumed to be a fully qualified domain name. Otherwise, to make the cluster name unique, the domain of the primary configuration server will be appended to the user-provided name. .PP If the \fB-C\fR flag is omitted, the cluster name defaults to the name of the primary GPFS cluster configuration server. .RE .PP .RS +3 \fB\fB-c\fR \fIConfigFile\fR \fR .RE .RS +9 Specifies a file containing GPFS configuration parameters with values different than the documented defaults. A sample file can be found in \fB/usr/lpp/mmfs/samples/mmfs.cfg.sample\fR. See the \fBmmchconfig\fR command for a detailed description of the different configuration parameters. .PP The \fB-c\fR \fB\fIConfigFile\fR\fR parameter should be used only by experienced administrators. Use this file to set up only those parameters that appear in the \fBmmfs.cfg.sample\fR file. Changes to any other values may be ignored by GFPS. When in doubt, use the \fBmmchconfig\fR command instead. .RE .PP .RS +3 \fB-N \fINodeDesc\fR[,\fINodeDesc\fR...] | \fINodeFile\fR \fR .RE .RS +9 \fINodeFile\fR specifies the file containing the list of node descriptors (see below), one per line, to be included in the GPFS cluster. .RE .PP .RS +3 \fB\fINodeDesc\fR[,\fINodeDesc\fR...] \fR .RE .RS +9 Specifies the list of nodes and node designations to be included in the GPFS cluster. Node descriptors are defined as: .sp .nf NodeName:NodeDesignations:AdminNodeName .fi .sp .PP where: .RS +3 .HP 3 1. \fBNodeName\fR is the hostname or IP address to be used by the GPFS daemon for node to node communication. 
.sp The hostname or IP address must refer to the communications adapter over which the GPFS daemons communicate. Alias interfaces are not allowed. Use the original address or a name that is resolved by the \fBhost\fR command to that original address. You may specify a node using any of these forms: .br .sp .RS +0.1i .nf .TS tab(~); l l. Format~Example \fBShort hostname\fR~k145n01 \fBLong hostname\fR~k145n01.kgn.ibm.com \fBIP address\fR~9.119.19.102 .TE .sp .fi .RE .HP 3 2. \fBNodeDesignations\fR is an optional, '-' separated list of node roles. .RS +3 .HP 3 \(bu \fBmanager\fR | \fB\fIclient\fR\fR Indicates whether a node is part of the pool of nodes from which configuration managers, file system managers, and token managers are selected. The default is \fBclient\fR. .HP 3 \(bu \fBquorum\fR | \fB\fInonquorum\fR\fR Indicates whether a node is counted as a quorum node. The default is \fBnonquorum\fR. .RE .HP 3 3. \fBAdminNodeName\fR is an optional field that consists of a node name to be used by the administration commands to communicate between nodes. .sp If \fBAdminNodeName\fR is not specified, the \fBNodeName\fR value is used. .RE .PP You must provide a descriptor for each node to be added to the GPFS cluster. .RE .PP .RS +3 \fB-p \fIPrimaryServer\fR \fR .RE .RS +9 Specifies the primary GPFS cluster configuration server node used to store the GPFS configuration data. This node must be a member of the GPFS cluster. .RE .PP .RS +3 \fB-R \fIRemoteFileCopyCommand\fR \fR .RE .RS +9 Specifies the fully-qualified path name for the remote file copy program to be used by GPFS. The default value is \fB/usr/bin/rcp\fR. .PP The remote copy command must adhere to the same syntax format as the \fBrcp\fR command, but may implement an alternate authentication mechanism. .RE .PP .RS +3 \fB-r \fIRemoteShellCommand\fR \fR .RE .RS +9 Specifies the fully-qualified path name for the remote shell program to be used by GPFS. The default value is \fB/usr/bin/rsh\fR. 
.PP The remote shell command must adhere to the same syntax format as the \fBrsh\fR command, but may implement an alternate authentication mechanism. .RE .PP .RS +3 \fB-s \fISecondaryServer\fR \fR .RE .RS +9 Specifies the secondary GPFS cluster configuration server node used to store the GPFS cluster data. This node must be a member of the GPFS cluster. .PP It is suggested that you specify a secondary GPFS cluster configuration server to prevent the loss of configuration data in the event your primary GPFS cluster configuration server goes down. When the GPFS daemon starts up, at least one of the two GPFS cluster configuration servers must be accessible. .PP If your primary GPFS cluster configuration server fails and you have not designated a secondary server, the GPFS cluster configuration files are inaccessible, and any GPFS administration commands that are issued fail. File system mounts or daemon startups also fail if no GPFS cluster configuration server is available. .RE .PP .RS +3 \fB-U \fIDomainName\fR \fR .RE .RS +9 Specifies the UID domain name for the cluster. .PP A detailed description of the GPFS user ID remapping convention is contained in \fIUID Mapping for GPFS In a Multi-Cluster Environment\fR at www.ibm.com/servers/eserver/clusters/library/wp_aix_lit.html. .RE .SH "Exit status" .PP .RS +3 \fB0 \fR .RE .RS +9 Successful completion. .RE .PP .RS +3 \fBnonzero \fR .RE .RS +9 A failure has occurred. .RE .SH "Security" .PP You must have root authority to run the \fBmmcrcluster\fR command. .PP You may issue the \fBmmcrcluster\fR command from any node in the GPFS cluster. .PP A properly configured \fB.rhosts\fR file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the \fBmmcrcluster\fR or the \fBmmchcluster\fR command, you must ensure: .RS +3 .HP 3 1. Proper authorization is granted to all nodes in the GPFS cluster. .HP 3 2. 
The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages. .RE .SH "Examples" .PP To create a GPFS cluster made of all of the nodes listed in the file \fB/u/admin/nodelist\fR, using node \fBk164n05\fR as the primary server, and node \fBk164n04\fR as the secondary server, issue: .sp .nf mmcrcluster -N /u/admin/nodelist -p k164n05 -s k164n04 .fi .sp .PP where \fB/u/admin/nodelist\fR has these contents: .sp .nf k164n04.kgn.ibm.com:quorum k164n05.kgn.ibm.com:quorum k164n06.kgn.ibm.com .fi .sp .PP The output of the command is similar to: .sp .nf Mon Aug 9 22:14:34 EDT 2004: 6027-1664 mmcrcluster: Processing node k164n04.kgn.ibm.com Mon Aug 9 22:14:38 EDT 2004: 6027-1664 mmcrcluster: Processing node k164n05.kgn.ibm.com Mon Aug 9 22:14:42 EDT 2004: 6027-1664 mmcrcluster: Processing node k164n06.kgn.ibm.com mmcrcluster: Command successfully completed mmcrcluster: 6027-1371 Propagating the changes to all affected nodes. This is an asynchronous process. 
.fi .sp .PP To confirm the creation, issue this command: .sp .nf mmlscluster .fi .sp .PP The system displays information similar to: .sp .nf GPFS cluster information ======================== GPFS cluster name: k164n05.kgn.ibm.com GPFS cluster id: 680681562214606028 GPFS UID domain: k164n05.kgn.ibm.com Remote shell command: /usr/bin/rsh Remote file copy command: /usr/bin/rcp GPFS cluster configuration servers: ----------------------------------- Primary server: k164n05.kgn.ibm.com Secondary server: k164n04.kgn.ibm.com Node Daemon node name IP address Admin node name Designation --------------------------------------------------------------------- 1 k164n04.kgn.ibm.com 198.117.68.68 k164n04.kgn.ibm.com quorum 2 k164n05.kgn.ibm.com 198.117.68.71 k164n05.kgn.ibm.com quorum 3 k164n06.kgn.ibm.com 198.117.68.70 k164n06.kgn.ibm.com .fi .sp .SH "See also" .PP mmaddnode Command .PP mmchconfig Command .PP mmdelnode Command .PP mmlscluster Command .PP mmlsconfig Command .SH "Location" .PP \fB/usr/lpp/mmfs/bin\fR .PP
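.PP The examples above use the default remote commands \fB/usr/bin/rsh\fR and \fB/usr/bin/rcp\fR. A common variant substitutes \fBssh\fR and \fBscp\fR through the \fB-r\fR and \fB-R\fR flags. The sketch below is not taken from this manual: the OpenSSH paths are assumptions to verify on your system, and the guard simply enforces the fully-qualified path name requirement stated under \fB-r\fR and \fB-R\fR:

```shell
# Sketch, not from the man page: choose ssh/scp instead of the rsh/rcp
# defaults. The paths below are assumptions; confirm them on your system
# (for example with 'command -v ssh').
RSH=/usr/bin/ssh
RCP=/usr/bin/scp

# -r and -R require fully qualified path names, so reject relative ones
# before handing them to mmcrcluster.
check_path() {
  case "$1" in
    /*) echo "ok: $1" ;;
    *)  echo "not fully qualified: $1" ;;
  esac
}

check_path "$RSH"      # ok: /usr/bin/ssh
check_path scp         # not fully qualified: scp

# Then (node list file and server names reused from the example above):
# mmcrcluster -N /u/admin/nodelist -p k164n05 -s k164n04 -r "$RSH" -R "$RCP"
```

As the Security section notes, whichever programs you choose must allow the nodes to communicate without a password and without extraneous messages.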