IBM Elastic Storage System 3200 Service Manual page 37

mmvdisk: Checking resources for specified nodes.
mmvdisk: Node class 'ess3200_x86_64_mmvdisk_78E400K' has a shared recovery group disk
topology.
mmvdisk: Using 'ess3200.shared' RG configuration for topology 'ESS 3200 FN1 24 NVMe'.
mmvdisk: Updating configuration for node class 'ess3200_x86_64_mmvdisk_78E400K' (recovery
group 'ess3200_78E400K').
mmvdisk: Recording post-conversion cluster configuration in /var/mmfs/tmp/
mmvdisk.configure.after.20210816
mmvdisk: Restarting GPFS daemon on node 'ess3200rw3a-hs.gpfs.ess'.
mmvdisk: Restarting GPFS daemon on node 'ess3200rw3b-hs.gpfs.ess'.
Important: This command automatically stops and restarts GPFS on each canister server serially by using the --recycle 1 option. If you do not want GPFS to be stopped and restarted automatically, configure the server without the --recycle 1 option. The customer can then restart GPFS manually on each canister server at each step. For more information about manually restarting GPFS, see "Manually restarting GPFS on the Elastic Storage System 3200 canisters example" on page 29.
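The manual restart described above can be sketched as a per-canister sequence of the standard GPFS administration commands mmshutdown, mmstartup, and mmgetstate. The following is a dry-run sketch that only prints the command sequence for review; the host names in the example are taken from the log output above and stand in for your own canister names.

```shell
#!/bin/sh
# Hedged sketch: print the per-canister GPFS restart sequence that would be
# run when the --recycle 1 option is not used. This prints commands only;
# it does not execute any GPFS administration command.
restart_canisters() {
    # $@ = canister server host names, restarted one at a time (serially)
    for node in "$@"; do
        echo "mmshutdown -N $node"    # stop GPFS on this canister only
        echo "mmstartup -N $node"     # start GPFS again on the same canister
        echo "mmgetstate -N $node"    # confirm the daemon reports 'active'
    done
}

# Example (host names from the restart log shown above):
restart_canisters ess3200rw3a-hs.gpfs.ess ess3200rw3b-hs.gpfs.ess
```

Restarting the canisters one at a time keeps one server of the pair available while the other recycles.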
13. Verify that the newly added space is available to the system.
# mmvdisk pdisk list --recovery-group <recovery group name>
Example
# mmvdisk pdisk list --recovery-group ess3200_78E400K
recovery group   pdisk  declustered array  paths  capacity  free space  FRU (type)       state
--------------   -----  -----------------  -----  --------  ----------  ---------------  -----
ess3200_78E400K  e1s01  DA1                    2  3576 GiB    2270 GiB  3.84TB NVMe Tie  ok
ess3200_78E400K  e1s02  DA1                    2  3576 GiB    2272 GiB  3.84TB NVMe Tie  ok
ess3200_78E400K  e1s03  DA1                    2  3576 GiB    2272 GiB  3.84TB NVMe Tie  ok
ess3200_78E400K  e1s04  DA1                    2  3576 GiB    2268 GiB  3.84TB NVMe Tie  ok
ess3200_78E400K  e1s05  DA1                    2  3576 GiB    2268 GiB  3.84TB NVMe Tie  ok
ess3200_78E400K  e1s06  DA1                    2  3576 GiB    2268 GiB  3.84TB NVMe Tie  ok
ess3200_78E400K  e1s07  DA1                    2  3576 GiB    2270 GiB  3.84TB NVMe Tie  ok
ess3200_78E400K  e1s08  DA1                    2  3576 GiB    2270 GiB  3.84TB NVMe Tie  ok
ess3200_78E400K  e1s09  DA1                    2  3576 GiB    2272 GiB  3.84TB NVMe Tie  ok
ess3200_78E400K  e1s10  DA1                    2  3576 GiB    2270 GiB  3.84TB NVMe Tie  ok
ess3200_78E400K  e1s11  DA1                    2  3576 GiB    2268 GiB  3.84TB NVMe Tie  ok
ess3200_78E400K  e1s12  DA1                    2  3576 GiB    2270 GiB  3.84TB NVMe Tie  ok
ess3200_78E400K  e1s13  DA1                    2  3576 GiB    2270 GiB  3.84TB NVMe Tie  ok
ess3200_78E400K  e1s14  DA1                    2  3576 GiB    2270 GiB  3.84TB NVMe Tie  ok
ess3200_78E400K  e1s15  DA1                    2  3576 GiB    2274 GiB  3.84TB NVMe Tie  ok
ess3200_78E400K  e1s16  DA1                    2  3576 GiB    2272 GiB  3.84TB NVMe Tie  ok
ess3200_78E400K  e1s17  DA1                    2  3576 GiB    2270 GiB  3.84TB NVMe Tie  ok
ess3200_78E400K  e1s18  DA1                    2  3576 GiB    2272 GiB  3.84TB NVMe Tie  ok
ess3200_78E400K  e1s19  DA1                    2  3576 GiB    2274 GiB  3.84TB NVMe Tie  ok
ess3200_78E400K  e1s20  DA1                    2  3576 GiB    2268 GiB  3.84TB NVMe Tie  ok
ess3200_78E400K  e1s21  DA1                    2  3576 GiB    2270 GiB  3.84TB NVMe Tie  ok
ess3200_78E400K  e1s22  DA1                    2  3576 GiB    2274 GiB  3.84TB NVMe Tie  ok
ess3200_78E400K  e1s23  DA1                    2  3576 GiB    2274 GiB  3.84TB NVMe Tie  ok
ess3200_78E400K  e1s24  DA1                    2  3576 GiB    2270 GiB  3.84TB NVMe Tie  ok
Issue the following command to display the total available space combining all pdisks in the declustered array:
# mmvdisk recoverygroup list --recovery-group <recovery group name> --declustered-array
The following example displays the total available space combining all pdisks in the declustered array:
# mmvdisk recoverygroup list --recovery-group ess3200_78E400K --declustered-array
declustered  needs                          vdisks          pdisks           capacity
   array     service  type  trim  user log  total spare rt  total raw free raw  background task
-----------  -------  ----  ----  ---- ---  ----- ----- --  --------- --------  ---------------
DA1          no       NVMe  no       4   5     24     2  2     76 TiB   45 TiB  scrub 14d (85%)
mmvdisk: Total capacity is the raw space before any vdisk set definitions.
mmvdisk: Free capacity is what remains for additional vdisk set definitions.
Chapter 1. Servicing (customer tasks) 25
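The verification in step 13 can be scripted as a quick scan of the pdisk listing, flagging any pdisk whose state is not "ok". The following is a sketch that assumes the state is the last column, as in the listing above; the here-document holds two abbreviated sample rows (the FRU column is omitted to keep the sample short).

```shell
#!/bin/sh
# Hedged sketch: scan `mmvdisk pdisk list` output on stdin and report any
# pdisk whose final (state) column is not 'ok'. Exits nonzero if one is found.
check_pdisks() {
    awk 'NR > 2 && $NF != "ok" { print $2, "state:", $NF; bad = 1 }
         END { exit bad }'
}

# Abbreviated sample rows in the column layout shown above:
check_pdisks <<'EOF'
recovery group   pdisk  declustered array  paths  capacity  free space  state
--------------   -----  -----------------  -----  --------  ----------  -----
ess3200_78E400K  e1s01  DA1                    2  3576 GiB    2270 GiB  ok
ess3200_78E400K  e1s02  DA1                    2  3576 GiB    2272 GiB  ok
EOF
echo "exit status: $?"   # 0 means every listed pdisk reported 'ok'
```

In practice the real listing would be piped in, for example: mmvdisk pdisk list --recovery-group ess3200_78E400K | check_pdisks.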
