.TH mmchconfig 12/01/06 mmchconfig Command .SH "Name" .PP \fBmmchconfig\fR - Changes GPFS configuration parameters. .SH "Synopsis" .PP \fBmmchconfig\fR \fIAttribute\fR=\fIvalue\fR[,\fIAttribute\fR=\fIvalue\fR...] [\fB-i\fR | \fB-I\fR] [\fB-N {\fR\fINode\fR[,\fINode\fR...] | \fINodeFile\fR | \fINodeClass\fR}] .SH "Description" .PP Use the \fBmmchconfig\fR command to change the GPFS configuration attributes on a single node, a set of nodes, or globally for the entire cluster. .PP The \fIAttribute\fR=\fIvalue\fR flags must come before any operand. .PP When changing both \fBmaxblocksize\fR and \fBpagepool\fR, the command fails unless these conventions are followed, as shown in the example after this list: .RS +3 .HP 3 \(bu When increasing the values, \fBpagepool\fR must be specified first. .HP 3 \(bu When decreasing the values, \fBmaxblocksize\fR must be specified first. .RE
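.PP For example, to increase both values with a single invocation, specify \fBpagepool\fR before \fBmaxblocksize\fR in the comma-separated list (the sizes shown are illustrative only): .sp .nf mmchconfig pagepool=256M,maxblocksize=1024K .fi .sp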
.PP \fBResults\fR .PP The configuration is updated on each node in the GPFS cluster. .SH "Parameters" .PP .RS +3 \fB-N {\fINode\fR[,\fINode\fR...] | \fINodeFile\fR | \fINodeClass\fR} \fR .RE .RS +9 Specifies the set of nodes to which the configuration changes apply. .PP The \fB-N\fR flag is valid only for the \fBautomountDir\fR, \fBdataStructureDump\fR, \fBdesignation\fR, \fBdmapiEventTimeout\fR, \fBdmapiMountTimeout\fR, \fBdmapiSessionFailureTimeout\fR, \fBmaxblocksize\fR, \fBmaxFilesToCache\fR, \fBmaxStatCache\fR, \fBnsdServerWaitTimeWindowOnMount\fR, \fBnsdServerWaitTimeForMount\fR, \fBpagepool\fR, \fBprefetchThreads\fR, \fBunmountOnDiskFail\fR, and \fBworker1Threads\fR attributes. .PP This command does not support a \fINodeClass\fR of \fBmount\fR. .RE .SH "Options" .PP .RS +3 \fB\fIAttribute\fR \fR .RE .RS +9 The name of the attribute to be changed to the specified \fIvalue\fR. More than one attribute and value pair, in a comma-separated list, can be changed with one invocation of the command. .PP To restore the GPFS default setting for any given attribute, specify \fBDEFAULT\fR as its \fIvalue\fR. .RE .PP .RS +3 \fBautoload \fR .RE .RS +9 Specifies whether GPFS is to be started automatically whenever the nodes are rebooted. Valid values are \fByes\fR or \fBno\fR. .RE .PP .RS +3 \fBautomountDir \fR .RE .RS +9 Specifies the directory to be used by the Linux automounter for GPFS file systems that are being automatically mounted. The default directory is \fB/gpfs/myautomountdir\fR. This parameter does not apply to AIX environments. .RE .PP .RS +3 \fBcipherList \fR .RE .RS +9 Controls whether GPFS network communications are secured. If \fBcipherList\fR is not specified, or if the value \fBDEFAULT\fR is specified, GPFS does not authenticate or check authorization for network connections. If the value \fBAUTHONLY\fR is specified, GPFS does authenticate and check authorization for network connections, but data sent over the connection is not protected. Before setting \fBcipherList\fR for the first time, you must establish a public/private key pair for the cluster by using the \fBmmauth genkey new\fR command. .PP See the Frequently Asked Questions at: publib.boulder.ibm.com/infocenter/clresctr/topic/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html for a list of the ciphers supported by GPFS.
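.PP For example, after establishing the cluster's key pair with \fBmmauth genkey new\fR, this minimal sketch enables authenticated connections without data protection: .sp .nf mmchconfig cipherList=AUTHONLY .fi .sp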
.RE .PP .RS +3 \fBdataStructureDump \fR .RE .RS +9 Specifies a path for the storage of dumps. The default is to store dumps in \fB/tmp/mmfs\fR. Specify \fBno\fR if you do not want dumps stored. .PP It is suggested that you create a directory for the placement of certain problem determination information. This can be a symbolic link to another location if more space can be found there. Do not place it in a GPFS file system, because it might not be available if GPFS fails. If a problem occurs, GPFS may write 200 MB or more of problem determination data into the directory. These files must be manually removed when problem determination is complete. This should be done promptly so that a \fBNOSPACE\fR condition is not encountered if another failure occurs. .RE .PP .RS +3 \fBdesignation \fR .RE .RS +9 Specifies a '-' separated list of node roles: .RS +3 .HP 3 \(bu \fBmanager\fR or \fBclient\fR - Indicates whether a node is part of the pool of nodes from which configuration managers, file system managers, and token managers are selected. .HP 3 \(bu \fBquorum\fR or \fBnonquorum\fR - Indicates whether a node is to be counted as a quorum node. .RE .PP GPFS must be stopped on any quorum node that is being changed to nonquorum. GPFS does not have to be stopped when the designation changes from nonquorum to quorum. .PP For more information on the roles of a node as the file system manager, see the \fIGeneral Parallel File System: Concepts, Planning, and Installation Guide\fR and search for \fIfile system manager\fR. .PP For more information on explicit quorum node designation, see the \fIGeneral Parallel File System: Concepts, Planning, and Installation Guide\fR and search for \fIdesignating quorum nodes\fR.
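.PP For example, a sketch of designating a node as both a manager node and a quorum node; the node name k164n05 is illustrative: .sp .nf mmchconfig designation=manager-quorum -N k164n05 .fi .sp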
.RE .PP .RS +3 \fBdistributedTokenServer \fR .RE .RS +9 Specifies whether the token server role for a file system should be limited to only the file system manager node (\fBno\fR), or distributed to other nodes for better file system performance (\fByes\fR). The default is \fByes\fR. .PP Using multiple token servers requires designating the nodes that should serve tokens as manager nodes (\fBmanager\fR keyword in the \fBmmcrcluster\fR node list or \fBmmchconfig designation=...\fR commands). If no manager nodes are designated, the node chosen as file system manager will act as the only token server, regardless of the setting of the \fBdistributedTokenServer\fR parameter. .PP The \fBmaxFilesToCache\fR and \fBmaxStatCache\fR parameters are indirectly affected by the \fBdistributedTokenServer\fR parameter, because distributing the tokens across multiple nodes may allow keeping more tokens than without this feature. See \fIGeneral Parallel File System: Concepts, Planning, and Installation Guide\fR and search on \fIThe GPFS token system's affect on cache settings\fR. .RE .PP .RS +3 \fBdmapiEventTimeout \fR .RE .RS +9 Controls the blocking of NFS file operation threads while they wait in the kernel for the handling of a DMAPI synchronous event. The parameter value is the maximum time, in milliseconds, the thread will block. When this time expires, the file operation returns \fBENOTREADY\fR, and the event continues asynchronously. The NFS server is expected to repeatedly retry the operation, which will eventually find the response to the original event and continue. This mechanism applies only to read, write, and truncate event types, and only when such events come from NFS server threads. The timeout value is given in milliseconds. The value 0 indicates immediate timeout (fully asynchronous event). A value greater than or equal to 86400000 (which is 24 hours) is considered \fIinfinity\fR (no timeout, fully synchronous event). The default value is 86400000. .PP For further information regarding DMAPI for GPFS, see \fIGeneral Parallel File System: Data Management API Guide\fR. .RE .PP .RS +3 \fBdmapiMountTimeout \fR .RE .RS +9 Controls the blocking of \fBmount\fR operations, waiting for a disposition for the mount event to be set. This timeout is activated, at most once on each node, by the first external mount of a file system that has DMAPI enabled, and only if there has never before been a mount disposition. Any \fBmount\fR operation on this node that starts while the timeout period is active will wait for the mount disposition. The parameter value is the maximum time, in seconds, that the \fBmount\fR operation will wait for a disposition. When this time expires and there is still no disposition for the mount event, the \fBmount\fR operation fails, returning the \fBEIO\fR error. The timeout value is given in full seconds. The value 0 indicates immediate timeout (immediate failure of the mount operation). A value greater than or equal to 86400 (which is 24 hours) is considered \fIinfinity\fR (no timeout, indefinite blocking until there is a disposition). The default value is 60. .PP For further information regarding DMAPI for GPFS, see \fIGeneral Parallel File System: Data Management API Guide\fR. .RE .PP .RS +3 \fBdmapiSessionFailureTimeout \fR .RE .RS +9 Controls the blocking of file operation threads, while in the kernel, waiting for the handling of a DMAPI synchronous event that is enqueued on a session that has experienced a failure. The parameter value is the maximum time, in seconds, the thread will wait for the recovery of the failed session. When this time expires and the session has not yet recovered, the event is cancelled and the file operation fails, returning the \fBEIO\fR error. The timeout value is given in full seconds. The value 0 indicates immediate timeout (immediate failure of the file operation). A value greater than or equal to 86400 (which is 24 hours) is considered \fIinfinity\fR (no timeout, indefinite blocking until the session recovers). The default value is 0. .PP For further information regarding DMAPI for GPFS, see \fIGeneral Parallel File System: Data Management API Guide\fR. .RE .PP .RS +3 \fBmaxblocksize \fR .RE .RS +9 Changes the maximum file system block size. Valid values include 64 KB, 256 KB, 512 KB, and 1024 KB. The default value is 1024 KB. Specify this value with the character \fBK\fR or \fBM\fR, for example 512K. .PP File systems with block sizes larger than the specified value cannot be created or mounted unless the block size is increased. .RE .PP .RS +3 \fBmaxFilesToCache \fR .RE .RS +9 Specifies the number of inodes to cache for recently used files that have been closed. .PP Storing a file's inode in cache permits faster re-access to the file. The default is 1000, but increasing this number may improve throughput for workloads with high file reuse. However, increasing this number excessively may cause paging at the file system manager node. The value should be large enough to handle the number of concurrently open files plus allow caching of recently used files. .RE .PP .RS +3 \fBmaxMBpS \fR .RE .RS +9 Specifies an estimate of how many megabytes of data can be transferred per second into or out of a single node. The default is 150 MB per second. The value is used in calculating the amount of I/O that can be done to effectively prefetch data for readers and write-behind data from writers. By lowering this value, you can artificially limit how much I/O one node can put on all of the disk servers. .PP This is useful in environments in which a large number of nodes can overrun a few virtual shared disk servers. Setting this value too high usually does not cause problems because of other limiting factors, such as the size of the pagepool, the number of prefetch threads, and so forth.
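.PP For example, a sketch of lowering this value to limit the I/O load that one node can place on the disk servers; the value shown is illustrative: .sp .nf mmchconfig maxMBpS=100 .fi .sp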
.RE .PP .RS +3 \fBmaxStatCache \fR .RE .RS +9 Specifies the number of inodes to keep in the stat cache. The stat cache maintains only enough inode information to perform a query on the file system. The default value is: .PP \fB4 x maxFilesToCache\fR .RE .PP .RS +3 \fBnsdServerWaitTimeForMount \fR .RE .RS +9 When mounting a file system whose disks depend on NSD servers, this option specifies the number of seconds to wait for those servers to come up. The decision to wait is controlled by the criteria managed by the \fBnsdServerWaitTimeWindowOnMount\fR option. .PP Valid values are between 0 and 1200 seconds. The default is 300. A value of zero indicates that no waiting is done. The interval for checking is 10 seconds. If \fBnsdServerWaitTimeForMount\fR is 0, \fBnsdServerWaitTimeWindowOnMount\fR has no effect. .PP The mount thread waits when the daemon delays for safe recovery. The wait for NSD servers to come up, which is covered by this option, occurs after the recovery wait expires and the mount thread is allowed to proceed. .RE .PP .RS +3 \fBnsdServerWaitTimeWindowOnMount \fR .RE .RS +9 Specifies a window of time (in seconds) during which a mount can wait for NSD servers as described for the \fBnsdServerWaitTimeForMount\fR option. The window begins when quorum is established (at cluster startup or subsequently), or at the last known failure times of the NSD servers required to perform the mount. .PP Valid values are between 1 and 1200 seconds. The default is 600. If \fBnsdServerWaitTimeForMount\fR is 0, \fBnsdServerWaitTimeWindowOnMount\fR has no effect. .PP When a node rejoins the cluster after having been removed for any reason, the node resets all the failure time values that it knows about. Therefore, when a node rejoins the cluster it believes that the NSD servers have not failed. From the node's perspective, old failures are no longer relevant. .PP GPFS checks the cluster formation criteria first. If that check falls outside the window, GPFS then checks for NSD server fail times being within the window. .RE .PP .RS +3 \fBpagepool \fR .RE .RS +9 Changes the size of the cache on each node. The default value is 64 M. The minimum allowed value is 4 M. The maximum allowed value depends on the amount of available physical memory and your operating system. Specify this value with the character \fBM\fR, for example, 60M. .RE .PP .RS +3 \fBprefetchThreads \fR .RE .RS +9 Controls the maximum possible number of threads dedicated to prefetching data for files that are read sequentially, or to handle sequential write-behind. .PP Functions in the GPFS daemon dynamically determine the actual degree of parallelism for prefetching data. The minimum value is 2. The default value is 72. The maximum value of \fBprefetchThreads\fR plus \fBworker1Threads\fR is: .RS +3 .HP 3 \(bu On 32-bit kernels, 164 .HP 3 \(bu On 64-bit kernels, 550 .RE .RE .PP .RS +3 \fBsubnets \fR .RE .RS +9 Specifies subnets used to communicate between nodes in a GPFS cluster. .PP Enclose the subnets in quotes and separate them by spaces. The order in which they are specified determines the order that GPFS uses these subnets to establish connections to the nodes within the cluster. For example, \fBsubnets="192.168.2.0"\fR refers to IP addresses 192.168.2.0 through 192.168.2.255 inclusive. .PP An optional list of cluster names may also be specified, separated by commas. The names may contain wildcards similar to those accepted by shell commands. If specified, these names override the list of private IP addresses. For example, \fBsubnets="10.10.10.0/remote.cluster;192.168.2.0"\fR. .PP This feature cannot be used to establish fault tolerance or automatic failover. If the interface corresponding to an IP address in the list is down, GPFS does not use the next one on the list. For more information about subnets, see \fIGeneral Parallel File System: Advanced Administration Guide\fR and search on \fIUsing remote access with public and private IP addresses\fR. .RE .PP .RS +3 \fBtiebreakerDisks \fR .RE .RS +9 Controls whether GPFS will use the node quorum with tiebreaker algorithm in place of the regular node based quorum algorithm. See \fIGeneral Parallel File System: Concepts, Planning, and Installation Guide\fR and search for \fInode quorum with tiebreaker\fR. To enable this feature, specify the names of one or three disks. Separate the NSD names with a semicolon (;) and enclose the list in quotes. The disks do not have to belong to any particular file system, but must be directly accessible from the quorum nodes. For example: .sp .nf tiebreakerDisks="gpfs1nsd;gpfs2nsd;gpfs3nsd" .fi .sp .PP To disable this feature, use: .sp .nf tiebreakerDisks=no .fi .sp .PP When changing \fBtiebreakerDisks\fR, GPFS must be down on all nodes in the cluster. .RE .PP .RS +3 \fBuidDomain \fR .RE .RS +9 Specifies the UID domain name for the cluster. .PP A detailed description of the GPFS user ID remapping convention is contained in \fIUID Mapping for GPFS in a Multi-Cluster Environment\fR at www.ibm.com/servers/eserver/clusters/library/wp_aix_lit.html. .RE .PP .RS +3 \fBunmountOnDiskFail \fR .RE .RS +9 Controls how the GPFS daemon responds when a disk failure is detected. Valid values are \fByes\fR or \fBno\fR. .PP When \fBunmountOnDiskFail\fR is set to \fBno\fR, the daemon marks the disk as failed and continues as long as it can without using the disk. All nodes that are using this disk are notified of the disk failure. The disk can be made active again by using the \fBmmchdisk\fR command. This is the suggested setting when metadata and data replication are used because the replica can be used until the disk is brought online again. .PP When \fBunmountOnDiskFail\fR is set to \fByes\fR, any disk failure will cause only the local node to force-unmount the file system that contains that disk. Other file systems on this node and other nodes continue to function normally, if they can. The local node can try to remount the file system when the disk problem has been resolved. This is the suggested setting when using SAN-attached disks in large multinode configurations, and when replication is not being used. This setting should also be used on a node that hosts \fBdescOnly\fR disks. See \fIEstablishing disaster recovery for your GPFS cluster\fR in \fIGeneral Parallel File System: Advanced Administration Guide\fR. .RE .PP .RS +3 \fBworker1Threads \fR .RE .RS +9 Controls the maximum number of concurrent file operations at any one instant. If there are more requests than that, the excess will wait until a previous request has finished. .PP It is primarily used for random read or write requests that cannot be prefetched, random I/O requests, or small file activity. The minimum value is 1. The default value is 48. The maximum value of \fBprefetchThreads\fR plus \fBworker1Threads\fR is: .RS +3 .HP 3 \(bu On 32-bit kernels, 164 .HP 3 \(bu On 64-bit kernels, 550 .RE .RE .PP .RS +3 \fB-I \fR .RE .RS +9 Specifies that the changes take effect immediately but do not persist when GPFS is restarted. This option is valid only for the \fBdataStructureDump\fR, \fBdmapiEventTimeout\fR, \fBdmapiSessionFailureTimeout\fR, \fBdmapiMountTimeout\fR, \fBmaxMBpS\fR, \fBunmountOnDiskFail\fR, and \fBpagepool\fR attributes. .RE .PP .RS +3 \fB-i \fR .RE .RS +9 Specifies that the changes take effect immediately and are permanent. This option is valid only for the \fBdataStructureDump\fR, \fBdmapiEventTimeout\fR, \fBdmapiSessionFailureTimeout\fR, \fBdmapiMountTimeout\fR, \fBmaxMBpS\fR, \fBunmountOnDiskFail\fR, and \fBpagepool\fR attributes.
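.PP For example, a sketch of changing \fBpagepool\fR so that the change takes effect immediately and persists across restarts; the size shown is illustrative: .sp .nf mmchconfig pagepool=100M -i .fi .sp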
.RE .SH "Exit status" .PP .RS +3 \fB0 \fR .RE .RS +9 Successful completion. .RE .PP .RS +3 \fBnonzero \fR .RE .RS +9 A failure has occurred. .RE .SH "Security" .PP You must have root authority to run the \fBmmchconfig\fR command. .PP You may issue the \fBmmchconfig\fR command from any node in the GPFS cluster. .PP When using the \fBrcp\fR and \fBrsh\fR commands for remote communication, a properly configured \fB.rhosts\fR file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the \fBmmcrcluster\fR or the \fBmmchcluster\fR command, you must ensure: .RS +3 .HP 3 1. Proper authorization is granted to all nodes in the GPFS cluster. .HP 3 2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages. .RE .SH "Examples" .PP To change the maximum file system block size allowed to 512 KB, issue this command: .sp .nf mmchconfig maxblocksize=512K .fi .sp .PP To confirm the change, issue this command: .sp .nf mmlsconfig .fi .sp .PP The system displays information similar to: .sp .nf Configuration data for cluster cluster.kgn.ibm.com: ----------------------------------------------------------- clusterName cluster.kgn.ibm.com clusterId 680681562216850737 clusterType lc multinode yes autoload no useDiskLease yes maxFeatureLevelAllowed 901 maxblocksize 512K File systems in cluster cluster.kgn.ibm.com: ---------------------------------------------------- /dev/fs1 .fi .sp .SH "See also" .PP mmaddnode Command .PP mmcrcluster Command .PP mmdelnode Command .PP mmlsconfig Command .PP mmlscluster Command .SH "Location" .PP \fB/usr/lpp/mmfs/bin\fR