IBM Storwize V7000 Maintenance Manual page 152

The node error does not persist across restarts of the
node software and operating system.
User response: Follow troubleshooting procedures to
reload the software:
1. Get a support package (snap), including dumps,
from the node using the management GUI or the
service assistant.
2. If more than one node is reporting this error,
contact IBM technical support for assistance. The
support package from each node will be required.
3. Check the support site to see whether the issue is
known and whether a software upgrade exists to
resolve the issue. Update the cluster software if a
resolution is available. Use the manual upgrade
process on the node that reported the error first.
4. If the problem remains unresolved, contact IBM
technical support and send them the support
package.
Possible Cause-FRUs or other:
v None
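Where the service assistant CLI is reachable, the support package in step 1 can typically be collected from the command line. The sketch below assumes the usual Storwize `sainfo`/`satask` service CLI; `<panel_name>` is a placeholder, and exact syntax should be confirmed against the CLI reference for your software level.

```
# List the node canisters visible to the service assistant, and note
# the panel name of the node that is reporting the error.
sainfo lsservicenodes

# Collect a support package (snap), including dumps, from that node.
# <panel_name> is a placeholder for the node's panel name.
satask snap -dump <panel_name>
```

The resulting snap file can then be downloaded from the service assistant and sent to IBM technical support.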
565
The internal drive of the node is failing.
Explanation: The internal drive within the node is
reporting too many errors. It is no longer safe to rely
on the integrity of the drive. Replacement is
recommended.
User response: Follow troubleshooting procedures to
fix the hardware:
1. The drive of the node canister cannot be replaced
individually. Follow the hardware remove and
replace instructions to change the node canister.
Possible Cause-FRUs or other:
v Node canister (100%)
573
The node software is inconsistent.
Explanation: Parts of the node software package are
receiving unexpected results; there may be an
inconsistent set of subpackages installed, or one
subpackage may be damaged.
User response: Follow troubleshooting procedures to
reload the software.
1. Follow the procedure to run a node rescue.
2. If the error occurs again, contact IBM technical
support.
Possible Cause-FRUs or other:
v None
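Node rescue in step 1 can be started from the service assistant GUI or, as a sketch assuming the service CLI is reachable on the node, from the command line. The command name follows the usual Storwize `satask` convention; `<panel_name>` is a placeholder, and the syntax should be verified against the CLI reference for your software level.

```
# Start a node rescue on the node that is reporting the error.
# This reinstalls the node software from the partner node or the
# internal recovery image. <panel_name> is a placeholder.
satask rescuenode -force <panel_name>
```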
574
The node software is damaged.
Explanation: A checksum failure has indicated that
the node software is damaged and needs to be
reinstalled.
User response: If the other node canister is
operational, run node rescue. Otherwise, install new
software using the service assistant. Node rescue
failures or the repeated return of this node error after
reinstallation is symptomatic of a hardware fault with
the node canister.
Possible Cause-FRUs or other:
v None
576
The cluster state and configuration data
cannot be read.
Explanation: The node has been unable to read the
saved cluster state and configuration data from its
internal drive because of a read or medium error.
User response: Follow troubleshooting procedures to
fix the hardware:
1. The drive of the node canister cannot be replaced
individually. Follow the hardware remove and
replace instructions to change the node canister.
Possible Cause-FRUs or other:
v None
578
The state data was not saved following
a power loss.
Explanation: On startup, the node was unable to read
its state data. When this happens, it expects to be
automatically added back into a cluster. However, if it
has not joined a cluster within 60 seconds, it raises
this node
error. This is a critical node error and user action is
required before the node can become a candidate to
join a cluster.
User response: Follow troubleshooting procedures to
correct connectivity issues between the cluster nodes
and the quorum devices.
1. Manual intervention is required once the node
reports this error.
2. Attempt to reestablish the cluster using other nodes.
This may involve fixing hardware issues on other
nodes or fixing connectivity issues between nodes.
3. If you are able to reestablish the cluster, remove the
cluster data from the node showing 578 so that it
goes to candidate state; it will then be automatically
added back to the cluster. If the node is not
automatically added back to the cluster, note the
name and I/O group of the node, delete the node
from the cluster configuration (if this has not
already happened), and then add the node back to
the cluster using the same name and I/O group.
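Assuming the cluster CLI is available from the configuration node, step 3 can be sketched as follows. Command names follow the SVC/Storwize `svcinfo`/`svctask`/`satask` convention; the angle-bracket names are placeholders, and the exact syntax should be confirmed against the CLI reference for your software level.

```
# Note the name and I/O group of the node that is showing error 578.
svcinfo lsnode

# On the node showing 578, clear its saved cluster data so that it
# returns to candidate state (run on the service CLI of that node;
# <panel_name> is a placeholder):
satask leavecluster -force <panel_name>

# If the candidate node is not added back automatically, delete it
# from the cluster configuration and re-add it with the SAME name
# and I/O group it had before:
svctask rmnode <node_id>
svctask addnode -panelname <panel_name> -iogrp <io_group> -name <node_name>
```

Reusing the original name and I/O group is what allows host I/O to resume through the node without reconfiguration.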
