.TH mmchconfig 12/01/06
mmchconfig Command
.SH "Name"
.PP
\fBmmchconfig\fR - Changes GPFS configuration parameters.
.SH "Synopsis"
.PP
\fBmmchconfig\fR
\fIAttribute\fR=\fIvalue\fR[,\fIAttribute\fR=\fIvalue\fR...]
[\fB-i\fR | \fB-I\fR] [\fB-N
{\fR\fINode\fR[,\fINode\fR...] |
\fINodeFile\fR | \fINodeClass\fR}]
.SH "Description"
.PP
Use the \fBmmchconfig\fR command to change the GPFS configuration
attributes on a single node, a set of nodes, or globally for the entire
cluster.
.PP
The \fIAttribute\fR=\fIvalue\fR flags must come before any
operand.
.PP
When changing both \fBmaxblocksize\fR and \fBpagepool\fR, the command
fails unless these conventions are followed (see the example after this
list):
.RS +3
.HP 3
\(bu When increasing the values, \fBpagepool\fR must be specified
first.
.HP 3
\(bu When decreasing the values, \fBmaxblocksize\fR must be specified
first.
.RE
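.PP
For example, assuming both values are being increased (the values shown are
illustrative only), \fBpagepool\fR is specified first:
.sp
.nf
mmchconfig pagepool=256M,maxblocksize=1M
.fi
.sp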
.PP
\fBResults\fR
.PP
The configuration is updated on each node in the GPFS cluster.
.SH "Parameters"
.PP
.RS +3
\fB-N {\fINode\fR[,\fINode\fR...] |
\fINodeFile\fR | \fINodeClass\fR}
\fR
.RE
.RS +9
Specifies the set of nodes to which the configuration changes
apply.
.PP
The \fB-N\fR flag is valid only for the \fBautomountDir\fR,
\fBdataStructureDump\fR, \fBdesignation\fR, \fBdmapiEventTimeout\fR,
\fBdmapiMountTimeout\fR, \fBdmapiSessionFailureTimeout\fR,
\fBmaxblocksize\fR, \fBmaxFilesToCache\fR, \fBmaxStatCache\fR,
\fBnsdServerWaitTimeWindowOnMount\fR, \fBnsdServerWaitTimeForMount\fR,
\fBpagepool\fR, \fBprefetchThreads\fR, \fBunmountOnDiskFail\fR, and
\fBworker1Threads\fR attributes.
.PP
This command does not support a \fINodeClass\fR of
\fBmount\fR.
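.PP
For example, to change the \fBpagepool\fR value on two particular nodes only
(the node names shown are illustrative), issue:
.sp
.nf
mmchconfig pagepool=128M -N node1,node2
.fi
.sp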
.RE
.SH "Options"
.PP
.RS +3
\fB\fIAttribute\fR
\fR
.RE
.RS +9
The name of the attribute to be changed to the specified
\fIvalue\fR. More than one attribute and value pair, in a
comma-separated list, can be changed with one invocation of the
command.
.PP
To restore the GPFS default setting for any given attribute, specify
\fBDEFAULT\fR as its \fIvalue\fR.
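.PP
For example, to restore the default setting of the \fBmaxMBpS\fR attribute,
issue:
.sp
.nf
mmchconfig maxMBpS=DEFAULT
.fi
.sp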
.RE
.PP
.RS +3
\fBautoload
\fR
.RE
.RS +9
Starts GPFS automatically whenever the nodes are rebooted. Valid
values are \fByes\fR or \fBno\fR.
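.PP
For example, to have GPFS start automatically whenever the nodes are
rebooted, issue:
.sp
.nf
mmchconfig autoload=yes
.fi
.sp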
.RE
.PP
.RS +3
\fBautomountDir
\fR
.RE
.RS +9
Specifies the directory to be used by the Linux automounter for
GPFS file systems that are being automatically mounted. The default
directory is \fB/gpfs/automountdir\fR. This parameter does not
apply to AIX environments.
.RE
.PP
.RS +3
\fBcipherList
\fR
.RE
.RS +9
Controls whether GPFS network communications are secured. If
\fBcipherList\fR is not specified, or if the value \fBDEFAULT\fR is
specified, GPFS does not authenticate or check authorization for network
connections. If the value \fBAUTHONLY\fR is specified, GPFS does
authenticate and check authorization for network connections, but data sent
over the connection is not protected. Before setting
\fBcipherList\fR for the first time, you must establish a public/private
key pair for the cluster by using the \fBmmauth genkey new\fR
command.
.PP
See the Frequently Asked Questions at:
publib.boulder.ibm.com/infocenter/
clresctr/topic/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html
for a list of the ciphers supported by GPFS.
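.PP
For example, to enable authentication and authorization checking without
protecting the data sent over the connections, a sequence such as the
following can be used (shown for illustration):
.sp
.nf
mmauth genkey new
mmchconfig cipherList=AUTHONLY
.fi
.sp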
.RE
.PP
.RS +3
\fBdataStructureDump
\fR
.RE
.RS +9
Specifies a path for the storage of dumps. The default is to store
dumps in \fB/tmp/mmfs\fR. Specify \fBno\fR if you do not want dumps
to be stored.
.PP
It is suggested that you create a directory for the placement of certain
problem determination information. This can be a symbolic link to
another location if more space can be found there. Do not place it in a
GPFS file system, because it might not be available if GPFS fails. If a
problem occurs, GPFS may write 200 MB or more of problem determination data
into the directory. These files must be manually removed when problem
determination is complete. This should be done promptly so that a
\fBNOSPACE\fR condition is not encountered if another failure
occurs.
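.PP
For example, to place problem determination data in a directory created for
that purpose (the path shown is illustrative only), issue:
.sp
.nf
mmchconfig dataStructureDump=/bigdisk/mmfsdump
.fi
.sp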
.RE
.PP
.RS +3
\fBdesignation
\fR
.RE
.RS +9
Specifies a '-' separated list of node roles.
.RS +3
.HP 3
\(bu \fBmanager\fR or \fBclient\fR - Indicates whether a node is
part of the pool of nodes from which configuration managers, file system
managers, and token managers are selected.
.HP 3
\(bu \fBquorum\fR or \fBnonquorum\fR - Indicates whether a node is to be
counted as a quorum node.
.RE
.PP
GPFS must be stopped on any quorum node that is being changed to
nonquorum. GPFS does not have to be stopped when the designation changes
from nonquorum to quorum.
.PP
For more information on the roles of a node as the file system manager, see
the \fIGeneral Parallel File System: Concepts, Planning, and
Installation Guide\fR and search for \fIfile system manager\fR.
.PP
For more information on explicit quorum node designation, see the
\fIGeneral Parallel File System: Concepts, Planning, and Installation
Guide\fR and search for \fIdesignating quorum nodes\fR.
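.PP
For example, to designate a node as both a quorum node and a manager node
(the node name shown is illustrative only), issue:
.sp
.nf
mmchconfig designation=quorum-manager -N node5
.fi
.sp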
.RE
.PP
.RS +3
\fBdistributedTokenServer
\fR
.RE
.RS +9
Specifies whether the token server role for a file system should be
limited to only the file system manager node (\fBno\fR), or distributed to
other nodes for better file system performance (\fByes\fR). The
default is \fByes\fR.
.PP
Using multiple token servers requires designating the nodes that should
serve tokens as manager nodes (\fBmanager\fR keyword in the
\fBmmcrcluster\fR node list or \fBmmchconfig
designation=...\fR commands). If no manager
nodes are designated, the node chosen as file system manager will act as the
only token server, regardless of the setting of the
\fBdistributedTokenServer\fR parameter.
.PP
The \fBmaxFilesToCache\fR and \fBmaxStatCache\fR parameters are
indirectly affected by the \fBdistributedTokenServer\fR parameter, because
distributing the tokens across multiple nodes may allow keeping more tokens
than without this feature. See \fIGeneral Parallel File System:
Concepts, Planning, and Installation Guide\fR and search on \fIThe GPFS
token system's effect on cache settings\fR.
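.PP
For example, to limit the token server role to the file system manager node,
issue:
.sp
.nf
mmchconfig distributedTokenServer=no
.fi
.sp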
.RE
.PP
.RS +3
\fBdmapiEventTimeout
\fR
.RE
.RS +9
Controls the blocking of file operation threads of NFS, while in the
kernel waiting for the handling of a DMAPI synchronous event. The
parameter value is the maximum time, in milliseconds, the thread will
block. When this time expires, the file operation returns
\fBENOTREADY\fR, and the event continues asynchronously. The NFS
server is expected to repeatedly retry the operation, which eventually will
find the response of the original event and continue. This mechanism
applies only to read, write, and truncate event types, and only when such
events come from NFS server threads. The timeout value is given in
milliseconds. The value 0 indicates immediate timeout (fully
asynchronous event). A value greater than or equal to 86400000 (which
is 24 hours) is considered \fIinfinity\fR (no timeout, fully synchronous
event). The default value is 86400000.
.PP
For further information regarding DMAPI for GPFS, see \fIGeneral Parallel
File System: Data Management API Guide\fR.
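.PP
For example, to have NFS-initiated file operations block for at most one
second (1000 milliseconds) before the event continues asynchronously (the
value shown is illustrative only), issue:
.sp
.nf
mmchconfig dmapiEventTimeout=1000
.fi
.sp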
.RE
.PP
.RS +3
\fBdmapiMountTimeout
\fR
.RE
.RS +9
Controls the blocking of \fBmount\fR operations, waiting for a
disposition for the mount event to be set. This timeout is activated,
at most once on each node, by the first external mount of a file system that
has DMAPI enabled, and only if there has never before been a mount
disposition. Any \fBmount\fR operation on this node that starts
while the timeout period is active will wait for the mount disposition.
The parameter value is the maximum time, in seconds, that the \fBmount\fR
operation will wait for a disposition. When this time expires and there
is still no disposition for the mount event, the \fBmount\fR operation
fails, returning the \fBEIO\fR error. The timeout value is given in
full seconds. The value 0 indicates immediate timeout (immediate
failure of the mount operation). A value greater than or equal to 86400
(which is 24 hours) is considered \fIinfinity\fR (no timeout, indefinite
blocking until there is a disposition). The default value is
60.
.PP
For further information regarding DMAPI for GPFS, see \fIGeneral Parallel
File System: Data Management API Guide\fR.
.RE
.PP
.RS +3
\fBdmapiSessionFailureTimeout
\fR
.RE
.RS +9
Controls the blocking of file operation threads, while in the kernel,
waiting for the handling of a DMAPI synchronous event that is enqueued on a
session that has experienced a failure. The parameter value is the
maximum time, in seconds, the thread will wait for the recovery of the failed
session. When this time expires and the session has not yet recovered,
the event is cancelled and the file operation fails, returning the
\fBEIO\fR error. The timeout value is given in full seconds.
The value 0 indicates immediate timeout (immediate failure of the file
operation). A value greater than or equal to 86400 (which is 24 hours)
is considered \fIinfinity\fR (no timeout, indefinite blocking until the
session recovers). The default value is 0.
.PP
For further information regarding DMAPI for GPFS, see \fIGeneral Parallel
File System: Data Management API Guide\fR.
.RE
.PP
.RS +3
\fBmaxblocksize
\fR
.RE
.RS +9
Changes the maximum file system block size. Valid values include 64
KB, 256 KB, 512 KB, and 1024 KB. The default value is 1024 KB.
Specify this value with the character \fBK\fR or \fBM\fR, for example
512K.
.PP
File systems with block sizes larger than the specified value cannot be
created or mounted unless the block size is increased.
.RE
.PP
.RS +3
\fBmaxFilesToCache
\fR
.RE
.RS +9
Specifies the number of inodes to cache for recently used files that have
been closed.
.PP
Storing a file's inode in cache permits faster re-access to the
file. The default is 1000, but increasing this number may improve
throughput for workloads with high file reuse. However, increasing this
number excessively may cause paging at the file system manager node.
The value should be large enough to handle the number of concurrently open
files plus allow caching of recently used files.
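.PP
For example, to cache more inodes for a workload with high file reuse (the
value shown is illustrative only), issue:
.sp
.nf
mmchconfig maxFilesToCache=2000
.fi
.sp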
.RE
.PP
.RS +3
\fBmaxMBpS
\fR
.RE
.RS +9
Specifies an estimate of how many megabytes of data can be transferred per
second into or out of a single node. The default is 150 MB per
second. The value is used in calculating the amount of I/O that can be
done to effectively prefetch data for readers and write-behind data from
writers. By lowering this value, you can artificially limit how much
I/O one node can put on all of the disk servers.
.PP
This is useful in environments in which a large number of nodes can overrun
a few virtual shared disk servers. Setting this value too high usually
does not cause problems because of other limiting factors, such as the size of
the pagepool, the number of prefetch threads, and so forth.
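.PP
For example, to limit the I/O that a single node can put on the disk servers
(the value shown is illustrative only), issue:
.sp
.nf
mmchconfig maxMBpS=100
.fi
.sp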
.RE
.PP
.RS +3
\fBmaxStatCache
\fR
.RE
.RS +9
Specifies the number of inodes to keep in the stat cache. The stat
cache maintains only enough inode information to perform a query on the file
system. The default value is:
.PP
\fB4 x maxFilesToCache\fR
.RE
.PP
.RS +3
\fBnsdServerWaitTimeForMount
\fR
.RE
.RS +9
When mounting a file system whose disks depend on NSD servers, this option
specifies the number of seconds to wait for those servers to come up.
The decision to wait is controlled by the criteria managed by the
\fBnsdServerWaitTimeWindowOnMount\fR option.
.PP
Valid values are between 0 and 1200 seconds. The default is
300. A value of zero indicates that no waiting is done. The
interval for checking is 10 seconds. If
\fBnsdServerWaitTimeForMount\fR is 0,
\fBnsdServerWaitTimeWindowOnMount\fR has no effect.
.PP
The mount thread waits when the daemon delays for safe recovery. The
wait for NSD servers to come up, which is covered by this option, occurs
after the recovery wait expires and the mount thread is allowed to
proceed.
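.PP
For example, to have mounts wait up to 10 minutes for the NSD servers to
come up (the value shown is illustrative only), issue:
.sp
.nf
mmchconfig nsdServerWaitTimeForMount=600
.fi
.sp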
.RE
.PP
.RS +3
\fBnsdServerWaitTimeWindowOnMount
\fR
.RE
.RS +9
Specifies a window of time (in seconds) during which a mount can wait for
NSD servers as described for the \fBnsdServerWaitTimeForMount\fR
option. The window begins when quorum is established (at cluster
startup or subsequently), or at the last known failure times of the NSD
servers required to perform the mount.
.PP
Valid values are between 1 and 1200 seconds. The default is
600. If \fBnsdServerWaitTimeForMount\fR is 0,
\fBnsdServerWaitTimeWindowOnMount\fR has no effect.
.PP
When a node rejoins the cluster after having been removed for any reason,
the node resets all the failure time values that it knows about.
Therefore, when a node rejoins the cluster it believes that the NSD servers
have not failed. From the node's perspective, old failures are no
longer relevant.
.PP
GPFS checks the cluster formation criteria first. If that check
falls outside the window, GPFS then checks for NSD server fail times being
within the window.
.RE
.PP
.RS +3
\fBpagepool
\fR
.RE
.RS +9
Changes the size of the cache on each node. The default value is 64
M. The minimum allowed value is 4 M. The maximum allowed value
depends on the amount of available physical memory and your operating
system. Specify this value with the character \fBM\fR, for example,
60M.
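.PP
For example, to set the cache size on each node to 100 MB (the value shown
is illustrative only), issue:
.sp
.nf
mmchconfig pagepool=100M
.fi
.sp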
.RE
.PP
.RS +3
\fBprefetchThreads
\fR
.RE
.RS +9
Controls the maximum possible number of threads dedicated to prefetching
data for files that are read sequentially, or to handle sequential
write-behind.
.PP
Functions in the GPFS daemon dynamically determine the actual degree of
parallelism for prefetching data. The minimum value is 2. The
default value is 72. The maximum value of \fBprefetchThreads\fR plus
\fBworker1Threads\fR is:
.RS +3
.HP 3
\(bu On 32-bit kernels, 164
.HP 3
\(bu On 64-bit kernels, 550
.RE
.RE
.PP
.RS +3
\fBsubnets
\fR
.RE
.RS +9
Specifies subnets used to communicate between nodes in a GPFS
cluster.
.PP
Enclose the subnets in quotes and separate them by spaces. The order
in which they are specified determines the order that GPFS uses these subnets
to establish connections to the nodes within the cluster. For example,
\fBsubnets="192.168.2.0"\fR refers to IP addresses
192.168.2.0 through 192.168.2.255
inclusive.
.PP
An optional list of cluster names may also be specified, separated by
commas. The names may contain wild cards similar to those accepted by
shell commands. If specified, these names override the list of private
IP addresses. For example,
\fBsubnets="10.10.10.0/remote.cluster;192.168.2.0"\fR.
.PP
This feature cannot be used to establish fault tolerance or automatic
failover. If the interface corresponding to an IP address in the list
is down, GPFS does not use the next one on the list. For more
information about subnets, see \fIGeneral Parallel File System:
Advanced Administration Guide\fR and search on \fIUsing remote access with
public and private IP addresses\fR.
.RE
.PP
.RS +3
\fBtiebreakerDisks
\fR
.RE
.RS +9
Controls whether GPFS will use the node quorum with tiebreaker algorithm
in place of the regular node based quorum algorithm. See \fIGeneral
Parallel File System: Concepts, Planning, and Installation Guide\fR
and search for \fInode quorum with tiebreaker\fR. To enable this
feature, specify the names of one or three disks. Separate the NSD
names with a semicolon (;) and enclose the list in quotes. The disks
do not have to belong to any particular file system, but must be directly
accessible from the quorum nodes. For example:
.sp
.nf
tiebreakerDisks="gpfs1nsd;gpfs2nsd;gpfs3nsd"
.fi
.sp
.PP
To disable this feature, use:
.sp
.nf
tiebreakerDisks=no
.fi
.sp
.PP
When changing the \fBtiebreakerDisks\fR, GPFS must be down on all nodes
in the cluster.
.RE
.PP
.RS +3
\fBuidDomain
\fR
.RE
.RS +9
Specifies the UID domain name for the cluster.
.PP
A detailed description of the GPFS user ID remapping convention is
contained in \fIUID Mapping for GPFS in a Multi-Cluster
Environment\fR at
www.ibm.com/servers/eserver/clusters/library/wp_aix_lit.html.
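.PP
For example, to set the UID domain name for the cluster (the domain name
shown is illustrative only), issue:
.sp
.nf
mmchconfig uidDomain=mydomain.example.com
.fi
.sp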
.RE
.PP
.RS +3
\fBunmountOnDiskFail
\fR
.RE
.RS +9
Controls how the GPFS daemon will respond when a disk failure is
detected. Valid values are \fByes\fR or \fBno\fR.
.PP
When \fBunmountOnDiskFail\fR is set to \fBno\fR, the daemon marks the
disk as failed and continues as long as it can without using the disk.
All nodes that are using this disk are notified of the disk failure.
The disk can be made active again by using the \fBmmchdisk\fR
command. This is the suggested setting when metadata and data
replication are used because the replica can be used until the disk is brought
online again.
.PP
When \fBunmountOnDiskFail\fR is set to \fByes\fR, any disk failure
will cause only the local node to force-unmount the file system that contains
that disk. Other file systems on this node and other nodes continue to
function normally, if they can. The local node can try to remount the
file system when the disk problem has been resolved. This is the
suggested setting when using SAN-attached disks in large multinode
configurations, and when replication is not being used. This setting
should also be used on a node that hosts \fBdescOnly\fR disks. See
\fIEstablishing disaster recovery for your GPFS cluster\fR in
\fIGeneral Parallel File System: Advanced Administration
Guide\fR.
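.PP
For example, to set this behavior on a single node that hosts
\fBdescOnly\fR disks (the node name shown is illustrative only), issue:
.sp
.nf
mmchconfig unmountOnDiskFail=yes -N node3
.fi
.sp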
.RE
.PP
.RS +3
\fBworker1Threads
\fR
.RE
.RS +9
Controls the maximum number of concurrent file operations at any one
instant. If there are more requests than that, the excess will wait
until a previous request has finished.
.PP
The primary use is for random read or write requests that cannot be
prefetched, random I/O requests, or small file activity. The minimum
value is 1. The default value is 48. The maximum value of
\fBprefetchThreads\fR plus \fBworker1Threads\fR is:
.RS +3
.HP 3
\(bu On 32-bit kernels, 164
.HP 3
\(bu On 64-bit kernels, 550
.RE
.RE
.PP
.RS +3
\fB-I
\fR
.RE
.RS +9
Specifies that the changes take effect immediately but do not persist when
GPFS is restarted. This option is valid only for the
\fBdataStructureDump\fR, \fBdmapiEventTimeout\fR,
\fBdmapiSessionFailureTimeout\fR, \fBdmapiMountTimeout\fR,
\fBmaxMBpS\fR, \fBunmountOnDiskFail\fR, and \fBpagepool\fR
attributes.
.RE
.PP
.RS +3
\fB-i
\fR
.RE
.RS +9
Specifies that the changes take effect immediately and are
permanent. This option is valid only for the
\fBdataStructureDump\fR, \fBdmapiEventTimeout\fR,
\fBdmapiSessionFailureTimeout\fR, \fBdmapiMountTimeout\fR,
\fBmaxMBpS\fR, \fBunmountOnDiskFail\fR, and \fBpagepool\fR
attributes.
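.PP
For example, to change \fBpagepool\fR immediately and have the change
persist across GPFS restarts (the value shown is illustrative only), issue:
.sp
.nf
mmchconfig pagepool=128M -i
.fi
.sp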
.RE
.SH "Exit status"
.PP
.RS +3
\fB0
\fR
.RE
.RS +9
Successful completion.
.RE
.PP
.RS +3
\fBnonzero
\fR
.RE
.RS +9
A failure has occurred.
.RE
.SH "Security"
.PP
You must have root authority to run the \fBmmchconfig\fR command.
.PP
You may issue the \fBmmchconfig\fR command from any node in the GPFS
cluster.
.PP
When using the \fBrcp\fR and \fBrsh\fR commands for remote
communication, a properly configured \fB.rhosts\fR file must exist
in the root user's home directory on each node in the GPFS
cluster. If you have designated the use of a different remote
communication program on either the \fBmmcrcluster\fR or the
\fBmmchcluster\fR command, you must ensure:
.RS +3
.HP 3
1. Proper authorization is granted to all nodes in the GPFS cluster.
.HP 3
2. The nodes in the GPFS cluster can communicate without the use of a
password, and without any extraneous messages.
.RE
.SH "Examples"
.PP
To change the maximum file system block size allowed to 512 KB, issue this
command:
.sp
.nf
mmchconfig maxblocksize=512K
.fi
.sp
.PP
To confirm the change, issue this command:
.sp
.nf
mmlsconfig
.fi
.sp
.PP
The system displays information similar to:
.sp
.nf
Configuration data for cluster cluster.kgn.ibm.com:
-----------------------------------------------------------
clusterName cluster.kgn.ibm.com
clusterId 680681562216850737
clusterType lc
multinode yes
autoload no
useDiskLease yes
maxFeatureLevelAllowed 901
maxblocksize 512K
File systems in cluster cluster.kgn.ibm.com:
----------------------------------------------------
/dev/fs1
.fi
.sp
.SH "See also"
.PP
mmaddnode Command
.PP
mmcrcluster Command
.PP
mmdelnode Command
.PP
mmlsconfig Command
.PP
mmlscluster Command
.SH "Location"
.PP
\fB/usr/lpp/mmfs/bin\fR