
6. Otherwise, if none of the disks are up, or if some of the disks are not up, use
the lsvdisk CLI command to check whether all of the file volumes that should be
online are online. Note that the names of the file volumes are the same as the
names of the disks. (To list only the offline volumes, see the filtering sketch
that follows step 11.) For example:
[kd52v6h.ibm]$ lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state
0 IFS1350385068630 0 io_grp0 online 1 meta1 100.00GB striped 6005076802AD80227800000000000000 0 1 not_empty
1 IFS1350385068806 0 io_grp0 online 1 meta1 100.00GB striped 6005076802AD80227800000000000001 0 1 not_empty
2 IFS1350385089739 0 io_grp0 online 2 meta2 100.00GB striped 6005076802AD80227800000000000002 0 1 not_empty
3 IFS1350385089889 0 io_grp0 online 2 meta2 100.00GB striped 6005076802AD80227800000000000003 0 1 not_empty
4 IFS1350385108175 0 io_grp0 online 0 mdiskgrp0 341.00GB striped 6005076802AD80227800000000000004 0 1 not_empty
7. If any file volumes are offline, refer to "Recovering when a file volume
does not come back online."
8. If none of the disks are up but all file volumes are online, the
multipathing driver in the file modules might have failed. The best way to
recover is to reboot the file modules one after the other, using the procedure
below.
9. If some of the disks are not up but the volumes are online, restart all
disks used by a file system before you continue to mount it.
10. Use the chdisk CLI command to restart all disks used by the file system
(see the worked example after step 11). For example:
chdisk <comma separated list of disk names> --action start
11. Use the mountfs CLI command to mount the file system. For example:
mountfs <file system name>
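To spot offline file volumes in step 6 without scanning the full listing, the
lsvdisk output can be filtered. A minimal sketch, assuming the -filtervalue
option of the block CLI; if the command returns no rows, all file volumes are
online:
[kd52v6h.ibm]$ lsvdisk -filtervalue status=offline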
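As a worked example of steps 10 and 11, assume a file system named gpfs0 whose
disks are named gpfs0_disk1, gpfs0_disk2, and gpfs0_disk3. These names are
hypothetical; substitute the names that are reported for your system:
chdisk gpfs0_disk1,gpfs0_disk2,gpfs0_disk3 --action start
mountfs gpfs0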
What to do next
Rebooting the file modules if none of the disks are up but all file volumes are
online:
To reboot the file modules when the multipathing driver might have failed
following a recovery of the control enclosure, complete the following steps:
1. Identify the passive and the active management nodes from the Description
column in the output from the CLI command:
lsnode -r
Reboot the file module that is the passive management node using the CLI
command:
stopcluster -node <node name> -restart
2. Wait until both nodes show OK in the Connection status column of the output
from the CLI command:
lsnode -r
3. Resume the file module into the cluster using the CLI command:
resumenode <node name>
4. Reboot the file module that is the active management node using the
following CLI command. The active management role fails over to the file
module that you rebooted first.
stopcluster -node <node name> -restart
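As an end-to-end sketch of this reboot procedure, assume the file modules are
named mgmt001st001 (currently the active management node) and mgmt002st001
(the passive management node). The names are hypothetical; take the real names
and roles from the Description column of the lsnode -r output:
lsnode -r
stopcluster -node mgmt002st001 -restart
lsnode -r
resumenode mgmt002st001
stopcluster -node mgmt001st001 -restart
Run the second lsnode -r repeatedly until both nodes show OK in the Connection
status column before you resume the node. After the final reboot, it is
assumed that you wait for OK status and resume mgmt001st001 in the same way to
complete the procedure.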