.TH mmfsctl 8 "02/16/06"
mmfsctl Command
.SH "Name"
.PP
\fBmmfsctl\fR - Issues a file system control request.
.SH "Synopsis"
.PP
\fBmmfsctl\fR \fIDevice\fR \fB{suspend | resume}\fR
.PP
Or,
.PP
\fBmmfsctl\fR \fIDevice\fR \fB{exclude | include}\fR
\fB{\fR\fB-d "\fR\fIDiskName\fR\fB[;\fR\fIDiskName\fR\fB...]" |\fR \fB-F\fR \fIDiskFile\fR | \fB-G\fR \fIFailureGroup\fR\fB}\fR
.PP
Or,
.PP
\fBmmfsctl\fR \fIDevice\fR \fBsyncFSconfig\fR \fB{\fR\fB-n\fR \fIRemoteNodesFile\fR | \fB-C\fR \fIremoteClusterName\fR\fB}\fR [\fB-S\fR \fISpecFile\fR]
.SH "Description"
.PP
Use the \fBmmfsctl\fR command to issue control requests to a particular GPFS file system: to temporarily suspend the processing of all application I/O requests and later resume it, and to synchronize the file system's configuration state between peer clusters in disaster recovery environments.
.PP
See \fIEstablishing disaster recovery for your GPFS cluster\fR in \fIGeneral Parallel File System: Advanced Administration Guide\fR.
.PP
Before creating a FlashCopy image of the file system, run \fBmmfsctl suspend\fR to temporarily quiesce all file system activity and flush the internal buffers on all nodes that mount the file system. This brings the on-disk metadata to a consistent state, which ensures the integrity of the FlashCopy snapshot. Any application request issued to the file system after the invocation of this command is suspended indefinitely, until the user issues \fBmmfsctl resume\fR.
.PP
Once the FlashCopy image has been taken, issue the \fBmmfsctl resume\fR command to resume normal operation and complete any pending I/O requests.
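.PP
For illustration, the minimal suspend and resume bracket around the copy step; the device name \fBfs0\fR is hypothetical, and the FlashCopy step depends on your storage subsystem:
.sp
.nf
mmfsctl fs0 suspend     # quiesce I/O and flush buffers on all nodes
# ... establish the FlashCopy pair for the file system's LUNs ...
mmfsctl fs0 resume      # resume normal I/O processing
.fi
.sp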
.PP
The \fBmmfsctl syncFSconfig\fR command extracts the configuration information for the file system from the local GPFS configuration data, transfers this data to one of the nodes in the peer cluster, and attempts to import it there.
.PP
Once the GPFS file system has been defined in the primary cluster, run this command to import the configuration of the file system into the peer recovery cluster. After producing a FlashCopy image of the file system and propagating it to the peer cluster using Peer-to-Peer Remote Copy (PPRC), run this command again to propagate any relevant configuration changes made in the cluster since the previous snapshot.
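.PP
For example, a minimal invocation; the device name \fBfs0\fR and the peer cluster name \fBrecoveryCluster\fR are hypothetical:
.sp
.nf
mmfsctl fs0 syncFSconfig -C recoveryCluster
.fi
.sp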
.PP
The primary cluster configuration server of the peer cluster must be available and accessible through remote shell and remote copy at the time the \fBmmfsctl syncFSconfig\fR command is invoked. Also, the peer GPFS clusters should be defined to use the same remote shell and remote copy mechanism, and they must be set up to allow nodes in the peer clusters to communicate without the use of a password.
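.PP
A quick way to verify this prerequisite, assuming \fBssh\fR is the configured remote shell program and \fBpeernode1\fR is a hypothetical contact node in the peer cluster:
.sp
.nf
# Should print the remote date with no password prompt
# and no extraneous output:
ssh peernode1 date
.fi
.sp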
.PP
Not all administrative actions performed on the file system necessitate this type of resynchronization. It is required only for those actions that modify the file system information maintained in the local GPFS configuration data, which includes the following (see the sketch after this list):
.RS +3
.HP 3
\(bu Additions, removals, and replacements of disks (commands \fBmmadddisk\fR, \fBmmdeldisk\fR, \fBmmrpldisk\fR)
.HP 3
\(bu Modifications to disk attributes (command \fBmmchdisk\fR)
.HP 3
\(bu Changes to the file system's mount point (command \fBmmchfs -T\fR)
.HP 3
\(bu Changes to the file system's device name (command \fBmmchfs -W\fR)
.RE
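.PP
For instance, after adding a disk in the primary cluster, propagate the change to the recovery cluster. A minimal sketch; the device name \fBfs0\fR, the NSD name, and the node list file are hypothetical:
.sp
.nf
mmadddisk fs0 gpfs10nsd
mmfsctl fs0 syncFSconfig -n recovery_clust_nodelist
.fi
.sp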
.PP
The process of synchronizing the file system configuration data can be automated by using the \fBsyncfsconfig\fR user exit.
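.PP
A hypothetical sketch of such an automation script; the install path \fB/var/mmfs/etc/syncfsconfig\fR and the fixed node list file are assumptions, not confirmed by this page:
.sp
.nf
#!/bin/sh
# syncfsconfig user exit: push the configuration of all
# GPFS file systems to the peer recovery cluster whenever
# GPFS invokes this exit after a configuration change.
/usr/lpp/mmfs/bin/mmfsctl all syncFSconfig -n /etc/recovery_clust_nodelist
.fi
.sp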
.PP
The \fBmmfsctl exclude\fR command is to be used only in a disaster recovery environment, only after a disaster has occurred, and only after ensuring that the disks in question have been physically disconnected. Otherwise, unexpected results may occur.
.PP
The \fBmmfsctl exclude\fR command can be used to manually override the file system descriptor quorum after a site-wide disaster. See \fIEstablishing disaster recovery for your GPFS cluster\fR in \fIGeneral Parallel File System: Advanced Administration Guide\fR. This command enables users to restore normal access to the file system with less than a quorum of available file system descriptor replica disks, by effectively excluding the specified disks from all subsequent operations on the file system descriptor. After repairing the disks, issue the \fBmmfsctl include\fR command to restore the initial quorum configuration.
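.PP
For illustration, a minimal exclude and include sequence after a site failure; the device name \fBfs0\fR and failure group \fB2\fR are hypothetical:
.sp
.nf
mmfsctl fs0 exclude -G 2   # failed site's disks are disconnected
mmfsctl fs0 include -G 2   # after the disks have been repaired
.fi
.sp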
.SH "Parameters"
.PP
.RS +3
\fB\fIDevice\fR
.RE
.RS +9
The device name of the file system. File system names need not be fully qualified; \fBfs0\fR is just as acceptable as \fB/dev/fs0\fR. If \fBall\fR is specified with the \fBsyncFSconfig\fR option, the command is performed on all GPFS file systems defined in the cluster.
.RE
.PP
.RS +3
\fBexclude\fR
.RE
.RS +9
Instructs GPFS to exclude the specified group of disks from all subsequent operations on the file system descriptor, and to change their availability state to \fBdown\fR, if the conditions in the Note below are met.
.PP
If necessary, this command assigns additional disks to serve as the disk descriptor replica holders, and migrates the disk descriptor to the new replica set. The excluded disks are not deleted from the file system, and still appear in the output of the \fBmmlsdisk\fR command.
.RS +3
\fBNote:\fR
.RE
.RS +9
The \fBmmfsctl exclude\fR command is to be used only in a disaster recovery environment, only after a disaster has occurred, and only after ensuring that the disks in question have been physically disconnected. Otherwise, unexpected results may occur.
.RE
.RE
.PP
.RS +3
\fBinclude\fR
.RE
.RS +9
Informs GPFS that the previously excluded disks have become operational again. This command writes the up-to-date version of the disk descriptor to each of the specified disks, and clears the \fBexcl\fR tag.
.RE
.PP
.RS +3
\fBresume\fR
.RE
.RS +9
Instructs GPFS to resume the normal processing of I/O requests on all nodes.
.RE
.PP
.RS +3
\fBsuspend\fR
.RE
.RS +9
Instructs GPFS to flush the internal buffers on all nodes, bring the file system to a consistent state on disk, and suspend the processing of all subsequent application I/O requests.
.RE
.PP
.RS +3
\fBsyncFSconfig\fR
.RE
.RS +9
Synchronizes the configuration state of a GPFS file system between the local cluster and its peer in two-cluster disaster recovery configurations.
.RE
.PP
.RS +3
\fB-C \fIremoteClusterName\fR
.RE
.RS +9
Specifies the name of the GPFS cluster that owns the remote GPFS file system.
.RE
.PP
.RS +3
\fB-d "\fIDiskName\fR\fB[;\fR\fIDiskName\fR\fB...]"\fR
.RE
.RS +9
Specifies the names of the NSDs to be included or excluded by the \fBmmfsctl\fR command. Separate the names with semicolons (;) and enclose the list of disk names in quotation marks.
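.PP
For example, excluding two disks by name; the NSD names are hypothetical:
.sp
.nf
mmfsctl fs0 exclude -d "gpfs1nsd;gpfs2nsd"
.fi
.sp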
.RE
.PP
.RS +3
\fB-F \fIDiskFile\fR
.RE
.RS +9
Specifies a file containing the names of the NSDs, one per line, to be included or excluded by the \fBmmfsctl\fR command.
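.PP
For example, building such a file and passing it to the command; the file name and NSD names are hypothetical:
.sp
.nf
cat >/tmp/disks.list <<EOF
gpfs1nsd
gpfs2nsd
EOF
mmfsctl fs0 include -F /tmp/disks.list
.fi
.sp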
.RE
.PP
.RS +3
\fB-G \fIFailureGroup\fR
.RE
.RS +9
Specifies a number identifying the failure group of the disks to be included or excluded by the \fBmmfsctl\fR command.
.RE
.PP
.RS +3
\fB-n \fIRemoteNodesFile\fR
.RE
.RS +9
Specifies a list of contact nodes in the peer recovery cluster that GPFS uses when importing the configuration data into that cluster. Although any node in the peer cluster can be specified here, users are advised to specify the identities of the peer cluster's primary and secondary cluster configuration servers, for efficiency reasons.
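.PP
For example, naming the peer cluster's configuration servers as the contact nodes, assuming one node name per line (the file and node names are hypothetical):
.sp
.nf
cat >recovery_clust_nodelist <<EOF
recnode1
recnode2
EOF
mmfsctl fs0 syncFSconfig -n recovery_clust_nodelist
.fi
.sp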
.RE
.PP
.RS +3
\fB-S \fISpecFile\fR
.RE
.RS +9
Specifies the description of changes to be made to the file system in the peer cluster during the import step. The format of this file is identical to that of the \fIChangeSpecFile\fR used as input to the \fBmmimportfs\fR command. This option can be used, for example, to define the assignment of the NSD servers for use in the peer cluster.
.RE
.SH "Options"
.PP
None.
.SH "Exit status"
.PP
.RS +3
\fB0\fR
.RE
.RS +9
Successful completion.
.RE
.PP
.RS +3
\fBnonzero\fR
.RE
.RS +9
A failure has occurred.
.RE
.SH "Results"
.PP
The \fBmmfsctl\fR command returns 0 if successful.
.SH "Security"
.PP
You must have root authority to run the \fBmmfsctl\fR command.
.PP
You may issue the \fBmmfsctl\fR command from any node in the GPFS cluster.
.PP
When using the \fBrcp\fR and \fBrsh\fR commands for remote communication, a properly configured \fB.rhosts\fR file must exist in the root user's home directory on each node in the GPFS cluster. If you have designated the use of a different remote communication program on either the \fBmmcrcluster\fR or the \fBmmchcluster\fR command, you must ensure:
.RS +3
.HP 3
1. Proper authorization is granted to all nodes in the GPFS cluster.
.HP 3
2. The nodes in the GPFS cluster can communicate without the use of a password, and without any extraneous messages.
.RE
.SH "Examples"
.PP
This sequence of commands creates a FlashCopy image of the file system and propagates this image to the recovery cluster using the Peer-to-Peer Remote Copy technology. The following configuration is assumed:
.br
.sp
.RS +0.1i
.nf
.TS
tab(~);
 l l.
Site~LUNs
Primary cluster (site A)~lunA1, lunA2
Recovery cluster (site B)~lunB1
.TE
.sp
.fi
.RE
.PP
.RS +3
\fBlunA1\fR
.RE
.RS +9
FlashCopy source
.RE
.PP
.RS +3
\fBlunA2\fR
.RE
.RS +9
FlashCopy target, PPRC source
.RE
.PP
.RS +3
\fBlunB1\fR
.RE
.RS +9
PPRC target
.RE
.PP
A single GPFS file system named \fBfs0\fR has been defined in the primary cluster over lunA1.
.RS +3
.HP 3
1. In the primary cluster, suspend all file system I/O activity and flush the GPFS buffers:
.sp
.nf
mmfsctl fs0 suspend
.fi
.sp
The output is similar to this:
.sp
.nf
Writing dirty data to disk
Quiescing all file system operations
Writing dirty data to disk again
.fi
.sp
.HP 3
2. Establish a FlashCopy pair using lunA1 as the source and lunA2 as the target.
.HP 3
3. Resume the file system I/O activity:
.sp
.nf
mmfsctl fs0 resume
.fi
.sp
The output is similar to this:
.sp
.nf
Resuming operations.
.fi
.sp
.HP 3
4. Establish a Peer-to-Peer Remote Copy (PPRC) path and a synchronous PPRC volume pair lunA2-lunB1 (primary-secondary). Use the 'copy entire volume' option and leave the 'permit read from secondary' option disabled.
.HP 3
5. Wait for the completion of the FlashCopy background task. Wait for the PPRC pair to reach the duplex (fully synchronized) state.
.HP 3
6. Terminate the PPRC volume pair lunA2-lunB1.
.HP 3
7. If this is the first time the snapshot is taken, or if the configuration state of \fBfs0\fR has changed since the previous FlashCopy snapshot, propagate the most recent configuration to site B:
.sp
.nf
mmfsctl fs0 syncFSconfig -n recovery_clust_nodelist
.fi
.sp
.RE
.SH "Location"
.PP
\fB/usr/lpp/mmfs/bin\fR