3. Click Ignore to continue rebuilding the disk and return to the Web User Interface.
The Disks page shows the progress of the rebuilding process.
Alternatively, if the information on this page is not correct, click Scan to scan the storage system again
and update the page.
You can also click Shut Down to shut down the storage system. When you restart the storage system,
this page re-appears.

Removing Hard Disks or Responding to Disk Failure

The effect of removing a hard disk from your storage system, or of a disk failure, depends on the disk
configuration you chose when you configured the system and on the current state of the existing disks.
For example, in a linear configuration, when you remove a disk or a disk fails, the data associated with that
disk is no longer available, but the data on all the other disks remains available.
In a RAID configuration, the effect of disk removal/failure varies, depending on the RAID level and whether
the RAID is in a normal or degraded state. You can determine the effect of disk removal/failure by looking
at the Hotplug Indicator on the Disks page. If this indicator is GREEN, disk removal/failure will have no
effect on the RAID. If this indicator is YELLOW, disk removal/failure will cause RAID degradation, but you
will still be able to access all the data. If the indicator is RED, disk removal/failure will cause the entire
RAID to fail.
For example, in a RAID 5 configuration, the indicator for every disk is YELLOW. Removing any one of them
will cause the RAID to be degraded, but all the data will still be available. However, after you remove one
disk, the indicators for all the other disks turn RED, since removing any one of them at that point will cause
the entire RAID to fail.
Note: In a linear configuration, the Hotplug Indicator is RED for all the disks because removing any
one of them will remove data from the storage system. While this will not adversely affect the
data on any of the other disks, it will affect the integrity of any file that resides partially on the
removed disk and partially on a remaining disk.
In addition, while a disk is being rebuilt, the indicators for all the other disks are RED, since
removing any one of them at that point will cause the RAID to fail.
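The color rules above amount to a simple lookup based on the disk configuration and the current state of the array. The following Python sketch is purely illustrative: it is not part of the USR8700 firmware, and the configuration and state names are assumptions. It also does not cover the GREEN case, since the manual does not spell out here exactly when a disk can be removed with no effect.

    def hotplug_indicator(config: str, state: str) -> str:
        """Return the Hotplug Indicator color for each disk in the array.

        config: "linear", "raid0", "raid1", or "raid5" (assumed names)
        state:  "normal", "degraded", or "rebuilding"
        """
        if config == "linear":
            # Removing any disk loses the data stored on that disk.
            return "RED"
        if state in ("degraded", "rebuilding"):
            # One more removal would cause the entire RAID to fail.
            return "RED"
        if config == "raid0":
            # RAID 0 has no redundancy; any removal fails the RAID.
            return "RED"
        # RAID 1 and RAID 5 in a normal state tolerate one missing disk.
        return "YELLOW"

    print(hotplug_indicator("raid5", "normal"))    # YELLOW
    print(hotplug_indicator("raid5", "degraded"))  # RED
    print(hotplug_indicator("linear", "normal"))   # RED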
If you remove a viable disk and cause only RAID degradation, you can re-install the same disk and resume
normal operation. (For information about adding a disk, refer to "Adding Hard Disks" on page 126.)
Note: If you remove two or more disks, you must re-install them in the reverse order to help maintain
data integrity. For example, if you remove disk A from slot 1 and then remove disk B from slot 2,
you must re-install disk B first, then disk A. You can put the disks back into different slots, but
they must be re-installed in the opposite order from which they were removed.
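This reverse-order rule is last-in, first-out: the removals behave like a stack. As a memory aid, here is a short, purely illustrative Python sketch (the disk names come from the example above) that models the rule:

    removed_disks = []  # removals are tracked on a stack

    def remove_disk(disk: str) -> None:
        removed_disks.append(disk)

    def next_disk_to_reinstall() -> str:
        # The last disk removed must be the first one re-installed.
        return removed_disks.pop()

    remove_disk("disk A (from slot 1)")
    remove_disk("disk B (from slot 2)")

    print(next_disk_to_reinstall())  # disk B (from slot 2) -- re-install first
    print(next_disk_to_reinstall())  # disk A (from slot 1) -- re-install second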
If you remove one or more viable disks and cause the entire RAID to fail, you can shut down the storage
system, re-install the same disks, and then restart the storage system. As long as you re-install the original
disks, the storage system should be able to resume proper operation, although the integrity of the data
cannot be guaranteed. However, if you replace the removed disks with new disks, you must reconfigure
the disks.
