.TH mmfsctl 02/16/06
.SH "Name"
.PP
\fBmmfsctl\fR - Issues a file system control request.
.SH "Synopsis"
.PP
\fBmmfsctl\fR \fIDevice\fR \fB{suspend | resume}\fR
.PP
Or,
.PP
\fBmmfsctl\fR \fIDevice\fR \fB{exclude | include}\fR
\fB{\fR\fB-d
"\fR\fIDiskName\fR\fB[;\fR\fIDiskName\fR\fB...]"
|\fR \fB-F\fR \fIDiskFile\fR | \fB-G\fR
\fIFailureGroup\fR\fB}\fR
.PP
Or,
.PP
\fBmmfsctl\fR \fIDevice\fR \fBsyncFSconfig\fR
\fB{\fR\fB-n\fR \fIRemoteNodesFile\fR | \fB-C\fR
\fIremoteClusterName\fR\fB}\fR [\fB-S\fR \fISpecFile\fR]
.SH "Description"
.PP
Use the \fBmmfsctl\fR command to issue control requests to a
particular GPFS file system: to temporarily suspend the processing of all
application I/O requests and later resume it, and to synchronize the file
system's configuration state between peer clusters in disaster recovery
environments.
.PP
See \fIEstablishing disaster recovery for your GPFS cluster\fR in
\fIGeneral Parallel File System: Advanced Administration Guide\fR.
.PP
Before creating a FlashCopy image of the file system, run
\fBmmfsctl suspend\fR to temporarily quiesce all file system activity and
flush the internal buffers on all nodes that mount this file system.
This brings the on-disk metadata to a consistent state, which ensures the
integrity of the FlashCopy snapshot. Any request issued to the file
system by an application after the invocation of this command is suspended
by GPFS until the user issues \fBmmfsctl resume\fR.
.PP
Once the FlashCopy image has been taken, issue the \fBmmfsctl resume\fR
command to resume normal operation and complete any pending
I/O requests.
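.PP
For example, a minimal quiesce sequence, using \fBfs0\fR as an
illustrative file system name, looks like this:
.sp
.nf
mmfsctl fs0 suspend
   (create the FlashCopy image of the file system's disks)
mmfsctl fs0 resume
.fi
.sp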
.PP
The \fBmmfsctl syncFSconfig\fR command extracts the file system's
related information from the local GPFS configuration data, transfers this
data to one of the nodes in the peer cluster, and attempts to import it
there.
.PP
Once the GPFS file system has been defined in the primary cluster, users run
this command to import the configuration of this file system into the peer
recovery cluster. After producing a FlashCopy image of the file system
and propagating it to the peer cluster using Peer-to-Peer Remote Copy (PPRC),
users similarly run this command to propagate any relevant configuration
changes made in the cluster after the previous snapshot.
.PP
The primary cluster configuration server of the peer cluster must be
available and accessible using remote shell and remote copy at the time of the
invocation of the \fBmmfsctl syncFSconfig\fR command. Also, the peer
GPFS clusters should be defined to use the same remote shell and remote copy
mechanism, and they must be set up to allow nodes in peer clusters to
communicate without the use of a password.
.PP
Not all administrative actions performed on the file system necessitate this
type of resynchronization. It is required only for those actions that
modify the file system information maintained in the local GPFS configuration
data, which includes the following (an example follows this list):
.RS +3
.HP 3
\(bu Additions, removals, and replacements of disks (commands
\fBmmadddisk\fR, \fBmmdeldisk\fR, \fBmmrpldisk\fR)
.HP 3
\(bu Modifications to disk attributes (command \fBmmchdisk\fR)
.HP 3
\(bu Changes to the file system's mount point (command \fBmmchfs -T\fR)
.HP 3
\(bu Changes to the file system device name (command \fBmmchfs -W\fR)
.RE
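.PP
For example (a sketch; the disk descriptor file and node list file names
are illustrative), after adding a disk to the file system in the primary
cluster, the change can be propagated to the peer cluster as follows:
.sp
.nf
mmadddisk fs0 -F newdisk.desc
mmfsctl fs0 syncFSconfig -n recovery_clust_nodelist
.fi
.sp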
.PP
The process of synchronizing the file system configuration data can
be automated by using the \fBsyncfsconfig\fR user exit.
.PP
The \fBmmfsctl exclude\fR command is to be used only in a disaster recovery
environment, only after a disaster has occurred, and only after ensuring that
the disks in question have been physically disconnected. Otherwise,
unexpected results may occur.
.PP
The \fBmmfsctl exclude\fR command can be used to manually
override the file system descriptor quorum after a site-wide disaster.
See \fIEstablishing disaster recovery for your GPFS cluster\fR in
\fIGeneral Parallel File System: Advanced Administration Guide\fR.
This command enables users to restore normal access to the file system with
less than a quorum of available file system descriptor replica disks, by
effectively excluding the specified disks from all subsequent operations on
the file system descriptor. After repairing the disks, the \fBmmfsctl
include\fR command can be issued to restore the initial quorum
configuration.
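.PP
For example, to exclude all disks in failure group 1 after a disaster at
the site holding that failure group, and to restore them after repair
(the file system name and failure group number are illustrative):
.sp
.nf
mmfsctl fs0 exclude -G 1
   (repair or reconnect the disks)
mmfsctl fs0 include -G 1
.fi
.sp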
.SH "Parameters"
.PP
.RS +3
\fB\fIDevice\fR
\fR
.RE
.RS +9
The device name of the file system. File system names need not be
fully-qualified. \fBfs0\fR is just as acceptable as
\fB/dev/fs0\fR. If \fBall\fR is specified with the
\fBsyncFSconfig\fR option, this command is performed on all GPFS file
systems defined in the cluster.
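.PP
For example, to synchronize the configuration of every file system defined
in the cluster (the node list file name is illustrative):
.sp
.nf
mmfsctl all syncFSconfig -n recovery_clust_nodelist
.fi
.sp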
.RE
.PP
.RS +3
\fBexclude
\fR
.RE
.RS +9
Instructs GPFS to exclude the specified group of disks from all
subsequent operations on the file system descriptor, and change their
availability state to \fBdown\fR, if the conditions in the Note below are
met.
.PP
If necessary, this command assigns additional disks to serve as the file
system descriptor replica holders, and migrates the file system descriptor
to the new replica set. The excluded disks are not deleted from the file
system, and still appear in the output of the \fBmmlsdisk\fR command.
.RS +3
\fBNote:\fR
.RE
.RS +9
The \fBmmfsctl exclude\fR command is to be used only in a disaster
recovery environment, only after a disaster has occurred, and only after
ensuring that the disks in question have been physically disconnected.
Otherwise, unexpected results may occur.
.RE
.RE
.PP
.RS +3
\fBinclude
\fR
.RE
.RS +9
Informs GPFS that the previously excluded disks have become operational
again. This command writes the up-to-date version of the file system
descriptor to each of the specified disks, and clears the \fBexcl\fR
tag.
.RE
.PP
.RS +3
\fBresume
\fR
.RE
.RS +9
Instructs GPFS to resume the normal processing of I/O requests on all
nodes.
.RE
.PP
.RS +3
\fBsuspend
\fR
.RE
.RS +9
Instructs GPFS to flush the internal buffers on all nodes, bring the file
system to a consistent state on disk, and suspend the processing of all
subsequent application I/O requests.
.RE
.PP
.RS +3
\fBsyncFSconfig
\fR
.RE
.RS +9
Synchronizes the configuration state of a GPFS file system between the
local cluster and its peer in two-cluster disaster recovery
configurations.
.RE
.PP
.RS +3
\fB-C \fIremoteClusterName\fR
\fR
.RE
.RS +9
Specifies the name of the GPFS cluster that owns the remote GPFS file
system.
.RE
.PP
.RS +3
\fB-d
"\fIDiskName\fR\fB[;\fR\fIDiskName\fR\fB...]"\fR
\fR
.RE
.RS +9
Specifies the names of the NSDs to be included or excluded by the
\fBmmfsctl\fR command. Separate the names with semicolons (;)
and enclose the list of disk names in quotation marks.
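.PP
For example (the NSD names are illustrative):
.sp
.nf
mmfsctl fs0 exclude -d "gpfs1nsd;gpfs2nsd"
.fi
.sp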
.RE
.PP
.RS +3
\fB-F \fIDiskFile\fR
\fR
.RE
.RS +9
Specifies a file containing the names of the NSDs, one per line, to be
included or excluded by the \fBmmfsctl\fR command.
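.PP
For example, the file might contain (hypothetical NSD names):
.sp
.nf
gpfs1nsd
gpfs2nsd
gpfs3nsd
.fi
.sp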
.RE
.PP
.RS +3
\fB-G \fIFailureGroup\fR
\fR
.RE
.RS +9
Specifies a number identifying the failure group of the disks to be
included or excluded by the \fBmmfsctl\fR command.
.RE
.PP
.RS +3
\fB-n \fIRemoteNodesFile\fR
\fR
.RE
.RS +9
Specifies a list of contact nodes in the peer recovery cluster that GPFS
uses when importing the configuration data into that cluster. Although
any node in the peer cluster can be specified here, users are advised to
specify the identities of the peer cluster's primary and secondary
cluster configuration servers, for efficiency reasons.
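.PP
For example, assuming the file lists one contact node per line (the node
names are illustrative):
.sp
.nf
nodeB1
nodeB2
.fi
.sp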
.RE
.PP
.RS +3
\fB-S \fISpecFile\fR
\fR
.RE
.RS +9
Specifies the description of changes to be made to the file system in the
peer cluster during the import step. The format of this file is
identical to that of the \fIChangeSpecFile\fR used as input to the
\fBmmimportfs\fR command. This option can be used,
for example, to define the assignment of the NSD servers for use in the peer
cluster.
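.PP
For example (the cluster and specification file names are illustrative):
.sp
.nf
mmfsctl fs0 syncFSconfig -C remote.cluster -S nsdservers.spec
.fi
.sp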
.RE
.SH "Options"
.PP
None.
.SH "Exit status"
.PP
.RS +3
\fB0
\fR
.RE
.RS +9
Successful completion.
.RE
.PP
.RS +3
\fBnonzero
\fR
.RE
.RS +9
A failure has occurred.
.RE
.SH "Results"
.PP
The \fBmmfsctl\fR command returns 0 if successful.
.SH "Security"
.PP
You must have root authority to run the \fBmmfsctl\fR command.
.PP
You may issue the \fBmmfsctl\fR command from any node in the GPFS
cluster.
.PP
When using the \fBrcp\fR and \fBrsh\fR commands for remote
communication, a properly configured \fB.rhosts\fR file must exist
in the root user's home directory on each node in the GPFS
cluster. If you have designated the use of a different remote
communication program on either the \fBmmcrcluster\fR or the
\fBmmchcluster\fR command, you must ensure:
.RS +3
.HP 3
1. Proper authorization is granted to all nodes in the GPFS cluster.
.HP 3
2. The nodes in the GPFS cluster can communicate without the use of a
password, and without any extraneous messages.
.RE
.SH "Examples"
.PP
This sequence of commands creates a FlashCopy image of the file system and
propagates this image to the recovery cluster using Peer-to-Peer Remote
Copy (PPRC). The following configuration is assumed:
.br
.sp
.RS +0.1i
.nf
.TS
tab(~);
l l.
Site~LUNs
Primary cluster (site A)~lunA1, lunA2
Recovery cluster (site B)~lunB1
.TE
.sp
.fi
.RE
.PP
.RS +3
\fBlunA1
\fR
.RE
.RS +9
FlashCopy source
.RE
.PP
.RS +3
\fBlunA2
\fR
.RE
.RS +9
FlashCopy target, PPRC source
.RE
.PP
.RS +3
\fBlunB1
\fR
.RE
.RS +9
PPRC target
.RE
.PP
A single GPFS file system named \fBfs0\fR has been defined in the
primary cluster over lunA1.
.RS +3
.HP 3
1. In the primary cluster, suspend all file system I/O activity and flush the
GPFS buffers:
.sp
.nf
mmfsctl fs0 suspend
.fi
.sp
The output is similar to this:
.sp
.nf
Writing dirty data to disk
Quiescing all file system operations
Writing dirty data to disk again
.fi
.sp
.HP 3
2. Establish a FlashCopy pair using lunA1 as the source and lunA2 as the
target.
.HP 3
3. Resume the file system I/O activity:
.sp
.nf
mmfsctl fs0 resume
.fi
.sp
The output is similar to this:
.sp
.nf
Resuming operations.
.fi
.sp
.HP 3
4. Establish a PPRC path and a synchronous PPRC
volume pair lunA2-lunB1 (primary-secondary). Use the 'copy entire
volume' option and leave the 'permit read from secondary'
option disabled.
.HP 3
5. Wait for the completion of the FlashCopy background task, and for
the PPRC pair to reach the duplex (fully synchronized) state.
.HP 3
6. Terminate the PPRC volume pair lunA2-lunB1.
.HP 3
7. If this is the first time the snapshot is taken, or if the configuration
state of \fBfs0\fR has changed since the previous FlashCopy snapshot,
propagate the most recent configuration to site B:
.sp
.nf
mmfsctl fs0 syncFSconfig -n recovery_clust_nodelist
.fi
.sp
.RE
.SH "Location"
.PP
\fB/usr/lpp/mmfs/bin\fR