.TH mmchconfig 8 "12/01/06"
mmchconfig Command
.SH "Name"
.PP
\fBmmchconfig\fR - Changes GPFS configuration parameters.
.SH "Synopsis"
.PP
\fBmmchconfig\fR
\fIAttribute\fR=\fIvalue\fR[,\fIAttribute\fR=\fIvalue\fR...]
[\fB-i\fR | \fB-I\fR]  [\fB-N
{\fR\fINode\fR[,\fINode\fR...] |
\fINodeFile\fR | \fINodeClass\fR}]
.SH "Description"
.PP
Use the \fBmmchconfig\fR command to change the GPFS configuration
attributes on a single node, a set of nodes, or globally for the entire
cluster.
.PP
The \fIAttribute\fR=\fIvalue\fR flags must come before any
operand.
.PP
When changing both \fBmaxblocksize\fR and \fBpagepool\fR, the command
fails unless these conventions are followed (see the example after this
list):
.RS +3
.HP 3
\(bu When increasing the values, \fBpagepool\fR must be specified
first.
.HP 3
\(bu When decreasing the values, \fBmaxblocksize\fR must be specified
first.
.RE
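.PP
For example (the values are illustrative only, and the ordering is assumed
to apply within the comma-separated list), increasing both attributes in a
single invocation might look like:
.sp
.nf
# pagepool listed first because both values are being increased
mmchconfig pagepool=128M,maxblocksize=1024K
.fi
.sp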
.PP
\fBResults\fR
.PP
The configuration is updated on each node in the GPFS cluster.
.SH "Parameters"
.PP
.RS +3
\fB-N {\fINode\fR[,\fINode\fR...] |
\fINodeFile\fR | \fINodeClass\fR}
\fR
.RE
.RS +9
Specifies the set of nodes to which the configuration changes
apply.
.PP
The \fB-N\fR flag is valid only for the \fBautomountDir\fR,
\fBdataStructureDump\fR, \fBdesignation\fR, \fBdmapiEventTimeout\fR,
\fBdmapiMountTimeout\fR, \fBdmapiSessionFailureTimeout\fR,
\fBmaxblocksize\fR, \fBmaxFilesToCache\fR, \fBmaxStatCache\fR,
\fBnsdServerWaitTimeWindowOnMount\fR, \fBnsdServerWaitTimeForMount\fR,
\fBpagepool\fR, \fBprefetchThreads\fR, \fBunmountOnDiskFail\fR, and
\fBworker1Threads\fR attributes.
.PP
This command does not support a \fINodeClass\fR of
\fBmount\fR.
.RE
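.RS +9
.PP
For example, to apply a change only to two nodes (the node names
\fBk145n04\fR and \fBk145n05\fR are hypothetical), one might issue:
.sp
.nf
# restrict the change to the listed nodes
mmchconfig pagepool=100M -N k145n04,k145n05
.fi
.sp
.RE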
.SH "Options"
.PP
.RS +3
\fB\fIAttribute\fR
\fR
.RE
.RS +9
The name of the attribute to be changed to the specified
\fIvalue\fR. More than one attribute and value pair, in a
comma-separated list, can be changed with one invocation of the
command.
.PP
To restore the GPFS default setting for any given attribute, specify
\fBDEFAULT\fR as its \fIvalue\fR.
.RE
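.RS +9
.PP
For example, to restore the default setting of an attribute
(\fBmaxMBpS\fR is used here only as an illustration):
.sp
.nf
# DEFAULT restores the GPFS default value
mmchconfig maxMBpS=DEFAULT
.fi
.sp
.RE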
.PP
.RS +3
\fBautoload
\fR
.RE
.RS +9
Starts GPFS automatically whenever the nodes are rebooted. Valid
values are \fByes\fR or \fBno\fR.
.RE
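.RS +9
.PP
For example, to have GPFS start automatically when the nodes are
rebooted:
.sp
.nf
mmchconfig autoload=yes
.fi
.sp
.RE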
.PP
.RS +3
\fBautomountDir
\fR
.RE
.RS +9
Specifies the directory to be used by the Linux automounter for
GPFS file systems that are being automatically mounted. The default
directory is \fB/gpfs/myautomountdir\fR. This parameter does not
apply to AIX environments.
.RE
.PP
.RS +3
\fBcipherList
\fR
.RE
.RS +9
Controls whether GPFS network communications are secured. If
\fBcipherList\fR is not specified, or if the value \fBDEFAULT\fR is
specified, GPFS does not authenticate or check authorization for network
connections. If the value \fBAUTHONLY\fR is specified, GPFS does
authenticate and check authorization for network connections, but data sent
over the connection is not protected. Before setting
\fBcipherList\fR for the first time, you must establish a public/private
key pair for the cluster by using the \fBmmauth genkey new\fR
command.
.PP
See the Frequently Asked Questions at:
publib.boulder.ibm.com/infocenter/clresctr/topic/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html
for a list of the ciphers supported by GPFS.
.RE
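.RS +9
.PP
For example, after generating the cluster key pair with \fBmmauth genkey
new\fR, authentication and authorization checking without data protection
can be enabled with:
.sp
.nf
mmchconfig cipherList=AUTHONLY
.fi
.sp
.RE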
.PP
.RS +3
\fBdataStructureDump
\fR
.RE
.RS +9
Specifies a path for the storage of dumps. The default is to store
dumps in \fB/tmp/mmfs\fR. Specify \fBno\fR to not store
dumps.
.PP
It is suggested that you create a directory for the placement of certain
problem determination information. This can be a symbolic link to
another location if more space can be found there. Do not place it in a
GPFS file system, because it might not be available if GPFS fails. If a
problem occurs, GPFS may write 200 MB or more of problem determination data
into the directory. These files must be manually removed when problem
determination is complete. This should be done promptly so that a
\fBNOSPACE\fR condition is not encountered if another failure
occurs.
.RE
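.RS +9
.PP
For example, to collect problem determination data in a dedicated directory
outside of any GPFS file system (the path shown is hypothetical):
.sp
.nf
# /var/adm/mmfsdumps is an illustrative path
mmchconfig dataStructureDump=/var/adm/mmfsdumps
.fi
.sp
.RE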
.PP
.RS +3
\fBdesignation
\fR
.RE
.RS +9
Specifies a '-' separated list of node roles.
.RS +3
.HP 3
\(bu \fBmanager\fR or \fBclient\fR - Indicates whether a node is
part of the pool of nodes from which configuration managers, file system
managers, and token managers are selected.
.HP 3
\(bu \fBquorum\fR or \fBnonquorum\fR - Indicates whether a node is to be
counted as a quorum node.
.RE
.PP
GPFS must be stopped on any quorum node that is being changed to
nonquorum. GPFS does not have to be stopped when the designation changes
from nonquorum to quorum.
.PP
For more information on the roles of a node as the file system manager, see
the \fIGeneral Parallel File System: Concepts, Planning, and
Installation Guide\fR and search for \fIfile system manager\fR.
.PP
For more information on explicit quorum node designation, see the
\fIGeneral Parallel File System: Concepts, Planning, and Installation
Guide\fR and search for \fIdesignating quorum nodes\fR.
.RE
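.RS +9
.PP
For example, to designate a node as both a quorum node and a manager node
(the node name \fBk145n06\fR is hypothetical):
.sp
.nf
# roles are given as a '-' separated list
mmchconfig designation=quorum-manager -N k145n06
.fi
.sp
.RE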
.PP
.RS +3
\fBdistributedTokenServer
\fR
.RE
.RS +9
Specifies whether the token server role for a file system should be
limited to only the file system manager node (\fBno\fR), or distributed to
other nodes for better file system performance (\fByes\fR). The
default is \fByes\fR.
.PP
Using multiple token servers requires designating the nodes that should
serve tokens as manager nodes (\fBmanager\fR keyword in the
\fBmmcrcluster\fR node list or \fBmmchconfig
designation=...\fR commands). If no manager
nodes are designated, the node chosen as file system manager will act as the
only token server, regardless of the setting of the
\fBdistributedTokenServer\fR parameter.
.PP
The \fBmaxFilesToCache\fR and \fBmaxStatCache\fR parameters are
indirectly affected by the \fBdistributedTokenServer\fR parameter, because
distributing the tokens across multiple nodes may allow keeping more tokens
than without this feature. See \fIGeneral Parallel File System:
Concepts, Planning, and Installation Guide\fR and search on \fIThe GPFS
token system's effect on cache settings\fR.
.RE
.PP
.RS +3
\fBdmapiEventTimeout
\fR
.RE
.RS +9
Controls the blocking of file operation threads of NFS, while in the
kernel waiting for the handling of a DMAPI synchronous event. The
parameter value is the maximum time, in milliseconds, the thread will
block. When this time expires, the file operation returns
\fBENOTREADY\fR, and the event continues asynchronously. The NFS
server is expected to repeatedly retry the operation, which eventually will
find the response of the original event and continue. This mechanism
applies only to read, write, and truncate event types, and only when such
events come from NFS server threads. The timeout value is given in
milliseconds. The value 0 indicates immediate timeout (fully
asynchronous event). A value greater than or equal to 86400000 (which
is 24 hours) is considered \fIinfinity\fR (no timeout, fully synchronous
event). The default value is 86400000.
.PP
For further information regarding DMAPI for GPFS, see \fIGeneral Parallel
File System: Data Management API Guide\fR.
.RE
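.RS +9
.PP
For example, to have read, write, and truncate events from NFS server
threads time out after one minute (an illustrative value, expressed in
milliseconds):
.sp
.nf
mmchconfig dmapiEventTimeout=60000
.fi
.sp
.RE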
.PP
.RS +3
\fBdmapiMountTimeout
\fR
.RE
.RS +9
Controls the blocking of \fBmount\fR operations, waiting for a
disposition for the mount event to be set. This timeout is activated,
at most once on each node, by the first external mount of a file system that
has DMAPI enabled, and only if there has never before been a mount
disposition. Any \fBmount\fR operation on this node that starts
while the timeout period is active will wait for the mount disposition.
The parameter value is the maximum time, in seconds, that the \fBmount\fR
operation will wait for a disposition. When this time expires and there
is still no disposition for the mount event, the \fBmount\fR operation
fails, returning the \fBEIO\fR error. The timeout value is given in
full seconds. The value 0 indicates immediate timeout (immediate
failure of the mount operation). A value greater than or equal to 86400
(which is 24 hours) is considered \fIinfinity\fR (no timeout, indefinite
blocking until there is a disposition). The default value is
60.
.PP
For further information regarding DMAPI for GPFS, see \fIGeneral Parallel
File System: Data Management API Guide\fR.
.RE
.PP
.RS +3
\fBdmapiSessionFailureTimeout
\fR
.RE
.RS +9
Controls the blocking of file operation threads, while in the kernel,
waiting for the handling of a DMAPI synchronous event that is enqueued on a
session that has experienced a failure. The parameter value is the
maximum time, in seconds, the thread will wait for the recovery of the failed
session. When this time expires and the session has not yet recovered,
the event is cancelled and the file operation fails, returning the
\fBEIO\fR error. The timeout value is given in full seconds.
The value 0 indicates immediate timeout (immediate failure of the file
operation). A value greater than or equal to 86400 (which is 24 hours)
is considered \fIinfinity\fR (no timeout, indefinite blocking until the
session recovers). The default value is 0.
.PP
For further information regarding DMAPI for GPFS, see \fIGeneral Parallel
File System: Data Management API Guide\fR.
.RE
.PP
.RS +3
\fBmaxblocksize
\fR
.RE
.RS +9
Changes the maximum file system block size. Valid values include 64
KB, 256 KB, 512 KB, and 1024 KB. The default value is 1024 KB.
Specify this value with the character \fBK\fR or \fBM\fR, for example
512K.
.PP
File systems with block sizes larger than the specified value cannot be
created or mounted unless \fBmaxblocksize\fR is increased.
.RE
.PP
.RS +3
\fBmaxFilesToCache
\fR
.RE
.RS +9
Specifies the number of inodes to cache for recently used files that have
been closed.
.PP
Storing a file's inode in cache permits faster re-access to the
file. The default is 1000, but increasing this number may improve
throughput for workloads with high file reuse. However, increasing this
number excessively may cause paging at the file system manager node.
The value should be large enough to handle the number of concurrently open
files plus allow caching of recently used files.
.RE
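.RS +9
.PP
For example, to raise the inode cache above the default of 1000 for a
workload with high file reuse (the value shown is illustrative):
.sp
.nf
mmchconfig maxFilesToCache=2000
.fi
.sp
.RE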
.PP
.RS +3
\fBmaxMBpS
\fR
.RE
.RS +9
Specifies an estimate of how many megabytes of data can be transferred per
second into or out of a single node. The default is 150 MB per
second. The value is used in calculating the amount of I/O that can be
done to effectively prefetch data for readers and write-behind data from
writers. By lowering this value, you can artificially limit how much
I/O one node can put on all of the disk servers.
.PP
This is useful in environments in which a large number of nodes can overrun
a few virtual shared disk servers. Setting this value too high usually
does not cause problems because of other limiting factors, such as the size of
the pagepool, the number of prefetch threads, and so forth.
.RE
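.RS +9
.PP
For example, to lower the per-node I/O estimate to half of the default and
have the change take effect immediately and persist (the value shown is
illustrative):
.sp
.nf
mmchconfig maxMBpS=75 -i
.fi
.sp
.RE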
.PP
.RS +3
\fBmaxStatCache
\fR
.RE
.RS +9
Specifies the number of inodes to keep in the stat cache. The stat
cache maintains only enough inode information to perform a query on the file
system. The default value is:
.PP
\fB4 x maxFilesToCache\fR
.RE
.PP
.RS +3
\fBnsdServerWaitTimeForMount
\fR
.RE
.RS +9
When mounting a file system whose disks depend on NSD servers, this option
specifies the number of seconds to wait for those servers to come up.
The decision to wait is controlled by the criteria managed by the
\fBnsdServerWaitTimeWindowOnMount\fR option.
.PP
Valid values are between 0 and 1200 seconds. The default is
300. A value of zero indicates that no waiting is done. The
interval for checking is 10 seconds. If
\fBnsdServerWaitTimeForMount\fR is 0,
\fBnsdServerWaitTimeWindowOnMount\fR has no effect.
.PP
The mount thread waits when the daemon delays for safe recovery. The
wait for NSD servers to come up, which is covered by this option, occurs
after the recovery wait expires and the mount thread is allowed to
proceed.
.RE
.PP
.RS +3
\fBnsdServerWaitTimeWindowOnMount
\fR
.RE
.RS +9
Specifies a window of time (in seconds) during which a mount can wait for
NSD servers as described for the \fBnsdServerWaitTimeForMount\fR
option. The window begins when quorum is established (at cluster
startup or subsequently), or at the last known failure times of the NSD
servers required to perform the mount.
.PP
Valid values are between 1 and 1200 seconds. The default is
600. If \fBnsdServerWaitTimeForMount\fR is 0,
\fBnsdServerWaitTimeWindowOnMount\fR has no effect.
.PP
When a node rejoins the cluster after having been removed for any reason,
the node resets all the failure time values that it knows about.
Therefore, when a node rejoins the cluster it believes that the NSD servers
have not failed. From the node's perspective, old failures are no
longer relevant.
.PP
GPFS checks the cluster formation criteria first. If that check
falls outside the window, GPFS then checks for NSD server fail times being
within the window.
.RE
.PP
.RS +3
\fBpagepool
\fR
.RE
.RS +9
Changes the size of the cache on each node. The default value is 64
M. The minimum allowed value is 4 M. The maximum allowed value
depends on the amount of available physical memory and your operating
system. Specify this value with the character \fBM\fR, for example,
60M.
.RE
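.RS +9
.PP
For example, to increase the cache to 100 MB on every node, effective
immediately and persistent across restarts (the size shown is
illustrative):
.sp
.nf
mmchconfig pagepool=100M -i
.fi
.sp
.RE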
.PP
.RS +3
\fBprefetchThreads
\fR
.RE
.RS +9
Controls the maximum possible number of threads dedicated to prefetching
data for files that are read sequentially, or to handle sequential
write-behind.
.PP
Functions in the GPFS daemon dynamically determine the actual degree of
parallelism for prefetching data. The minimum value is 2. The
default value is 72. The maximum value of \fBprefetchThreads\fR plus
\fBworker1Threads\fR is:
.RS +3
.HP 3
\(bu On 32-bit kernels, 164
.HP 3
\(bu On 64-bit kernels, 550
.RE
.RE
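.RS +9
.PP
For example, to raise both thread limits while keeping their sum within the
32-bit kernel maximum of 164 (the values shown are illustrative):
.sp
.nf
# 100 + 64 = 164, the 32-bit kernel limit
mmchconfig prefetchThreads=100,worker1Threads=64
.fi
.sp
.RE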
.PP
.RS +3
\fBsubnets
\fR
.RE
.RS +9
Specifies subnets used to communicate between nodes in a GPFS
cluster.
.PP
Enclose the subnets in quotes and separate them by spaces. The order
in which they are specified determines the order that GPFS uses these subnets
to establish connections to the nodes within the cluster. For example,
\fBsubnets="192.168.2.0"\fR refers to IP addresses
192.168.2.0 through 192.168.2.255
inclusive.
.PP
An optional list of cluster names may also be specified, separated by
commas. The names may contain wild cards similar to those accepted by
shell commands. If specified, these names override the list of private
IP addresses. For example,
\fBsubnets="10.10.10.0/remote.cluster;192.168.2.0"\fR.
.PP
This feature cannot be used to establish fault tolerance or automatic
failover. If the interface corresponding to an IP address in the list
is down, GPFS does not use the next one on the list. For more
information about subnets, see \fIGeneral Parallel File System:
Advanced Administration Guide\fR and search on \fIUsing remote access with
public and private IP addresses\fR.
.RE
.PP
.RS +3
\fBtiebreakerDisks
\fR
.RE
.RS +9
Controls whether GPFS will use the node quorum with tiebreaker algorithm
in place of the regular node based quorum algorithm. See \fIGeneral
Parallel File System: Concepts, Planning, and Installation Guide\fR
and search for \fInode quorum with tiebreaker\fR. To enable this
feature, specify the names of one or three disks. Separate the NSD
names with semicolon (;) and enclose the list in quotes. The disks
do not have to belong to any particular file system, but must be directly
accessible from the quorum nodes. For example:
.sp
.nf
tiebreakerDisks="gpfs1nsd;gpfs2nsd;gpfs3nsd"
.fi
.sp
.PP
To disable this feature, use:
.sp
.nf
tiebreakerDisks=no
.fi
.sp
.PP
When changing the \fBtiebreakerDisks\fR, GPFS must be down on all nodes
in the cluster.
.RE
.PP
.RS +3
\fBuidDomain
\fR
.RE
.RS +9
Specifies the UID domain name for the cluster.
.PP
A detailed description of the GPFS user ID remapping convention is
contained in \fIUID Mapping for GPFS in a Multi-Cluster
Environment\fR at
www.ibm.com/servers/eserver/clusters/library/wp_aix_lit.html.
.RE
.PP
.RS +3
\fBunmountOnDiskFail
\fR
.RE
.RS +9
Controls how the GPFS daemon will respond when a disk failure is
detected. Valid values are \fByes\fR or \fBno\fR.
.PP
When \fBunmountOnDiskFail\fR is set to \fBno\fR, the daemon marks the
disk as failed and continues as long as it can without using the disk.
All nodes that are using this disk are notified of the disk failure.
The disk can be made active again by using the \fBmmchdisk\fR
command. This is the suggested setting when metadata and data
replication are used because the replica can be used until the disk is brought
online again.
.PP
When \fBunmountOnDiskFail\fR is set to \fByes\fR, any disk failure
will cause only the local node to force-unmount the file system that contains
that disk. Other file systems on this node and other nodes continue to
function normally, if they can. The local node can try to remount the
file system when the disk problem has been resolved. This is the
suggested setting when using SAN-attached disks in large multinode
configurations, and when replication is not being used. This setting
should also be used on a node that hosts \fBdescOnly\fR disks. See
\fIEstablishing disaster recovery for your GPFS cluster\fR in
\fIGeneral Parallel File System: Advanced Administration
Guide\fR.
.RE
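.RS +9
.PP
For example, to enable this behavior only on a node that hosts
\fBdescOnly\fR disks (the node name \fBk145n07\fR is hypothetical):
.sp
.nf
mmchconfig unmountOnDiskFail=yes -N k145n07
.fi
.sp
.RE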
.PP
.RS +3
\fBworker1Threads
\fR
.RE
.RS +9
Controls the maximum number of concurrent file operations at any one
instant. If there are more requests than that, the excess will wait
until a previous request has finished.
.PP
The primary use is for random read or write requests that cannot be
prefetched, random I/O requests, or small file activity. The minimum
value is 1. The default value is 48. The maximum value of
\fBprefetchThreads\fR plus \fBworker1Threads\fR is:
.RS +3
.HP 3
\(bu On 32-bit kernels, 164
.HP 3
\(bu On 64-bit kernels, 550
.RE
.RE
.PP
.RS +3
\fB-I
\fR
.RE
.RS +9
Specifies that the changes take effect immediately but do not persist when
GPFS is restarted. This option is valid only for the
\fBdataStructureDump\fR, \fBdmapiEventTimeout\fR,
\fBdmapiSessionFailureTimeout\fR, \fBdmapiMountTimeout\fR,
\fBmaxMBpS\fR, \fBunmountOnDiskFail\fR, and \fBpagepool\fR
attributes.
.RE
.PP
.RS +3
\fB-i
\fR
.RE
.RS +9
Specifies that the changes take effect immediately and are
permanent. This option is valid only for the
\fBdataStructureDump\fR, \fBdmapiEventTimeout\fR,
\fBdmapiSessionFailureTimeout\fR, \fBdmapiMountTimeout\fR,
\fBmaxMBpS\fR, \fBunmountOnDiskFail\fR, and \fBpagepool\fR
attributes.
.RE
.SH "Exit status"
.PP
.RS +3
\fB0
\fR
.RE
.RS +9
Successful completion.
.RE
.PP
.RS +3
\fBnonzero
\fR
.RE
.RS +9
A failure has occurred.
.RE
.SH "Security"
.PP
You must have root authority to run the \fBmmchconfig\fR command.
.PP
You may issue the \fBmmchconfig\fR command from any node in the GPFS
cluster.
.PP
When using the \fBrcp\fR and \fBrsh\fR commands for remote
communication, a properly configured \fB.rhosts\fR file must exist
in the root user's home directory on each node in the GPFS
cluster. If you have designated the use of a different remote
communication program on either the \fBmmcrcluster\fR or the
\fBmmchcluster\fR command, you must ensure:
.RS +3
.HP 3
1. Proper authorization is granted to all nodes in the GPFS cluster.
.HP 3
2. The nodes in the GPFS cluster can communicate without the use of a
password, and without any extraneous messages.
.RE
.SH "Examples"
.PP
To change the maximum file system block size allowed to 512 KB, issue this
command:
.sp
.nf
mmchconfig maxblocksize=512K
.fi
.sp
.PP
To confirm the change, issue this command:
.sp
.nf
mmlsconfig
.fi
.sp
.PP
The system displays information similar to:
.sp
.nf
Configuration data for cluster cluster.kgn.ibm.com:
-----------------------------------------------------------
clusterName cluster.kgn.ibm.com
clusterId 680681562216850737
clusterType lc
multinode yes
autoload no
useDiskLease yes
maxFeatureLevelAllowed 901
maxblocksize 512K
File systems in cluster cluster.kgn.ibm.com:
----------------------------------------------------
/dev/fs1
.fi
.sp
.SH "See also"
.PP
mmaddnode Command
.PP
mmcrcluster Command
.PP
mmdelnode Command
.PP
mmlsconfig Command
.PP
mmlscluster Command
.SH "Location"
.PP
\fB/usr/lpp/mmfs/bin\fR