H3C UniServer R6900 G3 Server User Guide
New H3C Technologies Co., Ltd.
http://www.h3c.com
Document version: 6W101-20191010...
The information in this document is subject to change without notice. All contents in this document, including statements, information, and recommendations, are believed to be accurate, but they are presented without warranty of any kind, express or implied. H3C shall not be liable for technical or editorial errors or omissions contained herein.
Preface This preface includes the following topics about the documentation: • Audience. • Conventions. • Documentation feedback. Audience This documentation is intended for: • Network planners. • Field technical support and servicing engineers. • Server administrators working with the R6900 G3 Server. Conventions The following information describes the conventions used in the documentation.
Symbols
Convention    Description
WARNING!      An alert that calls attention to important information that if not understood or followed can result in personal injury.
CAUTION:      An alert that calls attention to important information that if not understood or followed can result in data loss, data corruption, or damage to hardware or software.
IMPORTANT:    An alert that calls attention to essential information.
Documentation feedback You can e-mail your comments about product documentation to info@h3c.com. We appreciate your comments.
Installing SAS/SATA drives ······································································································· 6-1 Installing NVMe drives ············································································································· 6-3 Installing power supplies ··········································································································· 6-4 Installing a compute module ······································································································ 6-5 Installing air baffles·················································································································· 6-7 Installing the low mid air baffle or GPU module air baffle to a compute module ······························ 6-7 ...
Procedure ····················································································································· 7-20 Verifying the replacement ································································································· 7-21 Replacing the power fail safeguard module for a storage controller ·················································· 7-21 Replacing a GPU module ······································································································· 7-23 Replacing the GPU module in a compute module ·································································· 7-23 ...
10 Appendix A Server specifications ··············································· 10-1 Server models and chassis view ······························································································ 10-1 Technical specifications ·········································································································· 10-2 Components ························································································································ 10-3 Front panel ·························································································································· 10-4 Front panel view of the server ··························································································· 10-4 Front panel view of a compute module ················································································ 10-6 ...
Table 1-1 Safety signs Sign Description Circuit or electricity hazards are present. Only H3C authorized or professional server engineers are allowed to service, repair, or upgrade the server. WARNING! To avoid bodily injury or damage to circuits, do not open any components marked with the electrical hazard sign unless you have authorization to do so.
General operating safety To avoid bodily injury or damage to the server, follow these guidelines when you operate the server: • Only H3C authorized or professional server engineers are allowed to install, service, repair, operate, or upgrade the server. •...
ESD prevention Electrostatic charges that build up on people and tools might damage or shorten the lifespan of boards, the midplane, and electrostatic-sensitive components. Preventing electrostatic discharge To prevent electrostatic damage, follow these guidelines: • Transport or store the server with the components in antistatic bags. •...
Battery safety The server's management module contains a system battery, which is designed with a lifespan of 5 to 10 years. If the server no longer automatically displays the correct date and time, you might need to replace the battery. When you replace the battery, follow these safety guidelines: •...
Table 2-1 Installation limits for different rack depths Rack depth Installation limits • H3C cable management arm (CMA) is not supported. • A clearance of 60 mm (2.36 in) is reserved from the server rear to the rear rack door for cabling.
Figure 2-1 Installation suggestions for a 1200 mm deep rack (top view) (1) 1200 mm (47.24 in) rack depth (2) A minimum of 50 mm (1.97 in) between the rack front posts and the front rack door (3) 830 mm (32.68 in) between the rack front posts and the rear of the chassis, including power supply handles at the server rear (not shown in the figure) (4) 830 mm (32.68 in) server depth, including chassis ears (5) 950 mm (37.40 in) between the front rack posts and the CMA...
• The air intake and outlet vents of the server are not blocked. • The front and rear rack doors are adequately ventilated to allow ambient room air to enter the cabinet and allow the warm air to escape from the cabinet. •...
Table 2-3 Harmful gas limits in an equipment room Maximum concentration (mg/m³): 0.006, 0.04, 0.05, 0.01 Grounding requirements Correctly connecting the server grounding cable is crucial to lightning protection, anti-interference, and ESD prevention. The server can be grounded through the grounding wire of the power supply system and no external grounding cable is required.
Picture Name Description Multimeter For resistance and voltage measurement. ESD wrist strap For ESD prevention when you operate the server. Antistatic gloves For ESD prevention when you operate the server. Antistatic clothing For ESD prevention when you operate the server. Ladder For high-place operations.
Installing or removing the server Installing the server As a best practice, install hardware options to the server (if needed) before installing the server in the rack. For more information about how to install hardware options, see "Installing hardware options." Installing the chassis rails and slide rails Install the chassis rails to the server and the slide rails to the rack.
Figure 3-2 Rack-mounting the server Secure the server, as shown in Figure 3-3: a. Push the server until the chassis ears are flush against the rack front posts, as shown by callout 1. b. Unlock the latches of the chassis ears, as shown by callout 2. c.
Install the removed security bezel. For more information, see "Installing the security bezel." (Optional) Installing the CMA Install the CMA if the server is shipped with the CMA. For information about how to install the CMA, see the installation guide shipped with the CMA. Connecting external cables Cabling guidelines WARNING!
Figure 3-4 Connecting a VGA cable Connect the other plug of the VGA cable to the VGA connector on the monitor, and fasten the screws on the plug. Connect the mouse and keyboard. For a USB mouse and keyboard, directly connect the USB connectors of the mouse and keyboard to the USB connectors on the server.
Connecting an Ethernet cable About this task Perform this task before you set up a network environment or log in to the HDM management interface through the HDM network port to manage the server. Prerequisites Install an mLOM or PCIe Ethernet adapter. For more information, see "Installing Ethernet adapters."...
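Before you log in to HDM through the HDM dedicated network port, verify that the client can reach the port's factory-default address (192.168.1.2/24, listed in "Rear panel"). The following commands are a minimal sketch for a Linux client; the interface name eth0 and the client address 192.168.1.10 are assumptions that you must adapt to your environment.

# Give the client an address in the same subnet as the HDM dedicated network port (assumed interface: eth0)
ip addr add 192.168.1.10/24 dev eth0
# Verify that the HDM default address is reachable
ping -c 3 192.168.1.2
# If the ping succeeds, open the HDM web interface in a browser (typically https://192.168.1.2) and log in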
USB devices are hot swappable. However, to connect a USB device to the internal USB connector or remove a USB device from the internal USB connector, power off the server first. As a best practice for compatibility, purchase H3C certified USB devices. Connecting a USB device to the internal USB connector Power off the server.
Connecting the power cord Guidelines WARNING! To avoid damage to the equipment or even bodily injury, use the power cord that ships with the server. Before connecting the power cord, make sure the server and components are installed correctly. Procedure Insert the power cord plug into the power receptacle of a power supply at the rear panel, as shown in Figure...
Figure 3-8 Sliding the cable clamp backward b. Open the cable clamp, place the power cord through the opening in the cable clamp, and then close the cable clamp, as shown by callouts 1, 2, 3, and 4 in Figure 3-9.
Figure 3-10 Sliding the cable clamp forward Securing cables Securing cables to the CMA For information about how to secure cables to the CMA, see the installation guide shipped with the CMA. Securing cables to slide rails by using cable straps You can secure cables to either left slide rails or right slide rails by using the cable straps provided with the server.
Figure 3-11 Securing cables to a slide rail Removing the server from a rack Power down the server. For more information, see "Powering off the server." Disconnect all peripheral cables from the server. Extend the server from the rack, as shown in Figure 3-12.
Powering on and powering off the server Important information If the server is connected to external storage devices, make sure the server is the first device to power off and then the last device to power on. This restriction prevents the server from mistakenly identifying the external storage devices as faulty devices.
In the navigation pane, select Power Manager > Meter Power. The meter power configuration page opens. Click the Automatic power-on tab and then select Always power on. Click Save. To configure automatic power-on from the BIOS, set AC Restore Settings to Always Power On. For more information, see the BIOS user guide for the server.
Configuring the server The following information describes the procedures to configure the server after the server installation is complete. Configuration flowchart Figure 5-1 Configuration flowchart Powering on the server Power on the server. For information about the procedures, see "Powering on the server."...
Configuring basic BIOS settings You can set the server boot order and the BIOS user and administrator passwords from the BIOS setup utility of the server. Setting the server boot order The server has a default boot order. To change the server boot order, access the Boot menu in the BIOS setup utility.
Updating firmware IMPORTANT: Verify the hardware and software compatibility before firmware update. For information about the hardware and software compatibility, see the software release notes. You can update the following firmware from FIST or HDM: • HDM. • BIOS. • CPLD.
Installing hardware options If you are installing multiple hardware options, read their installation procedures and identify similar steps to streamline the entire installation procedure. Installing the security bezel Press the right edge of the security bezel into the groove in the right chassis ear on the server, as shown by callout 1 in Figure 6-1.
• For efficient use of storage, use drives that have the same capacity to build a RAID. If the drives have different capacities, the lowest capacity is used across all drives in the RAID. Whether a drive with extra capacity can be used to build other RAIDs depends on the storage controller model.
Figure 6-4 Installing a drive Install the security bezel. For more information, see "Installing the security bezel." Verifying the installation Use the following methods to verify that the drive is installed correctly: • Verify the drive properties (including capacity) and state by using one of the following methods: Log in to HDM.
Install the drive. For more information, see "Installing SAS/SATA drives." Install the removed security bezel. For more information, see "Installing the security bezel." Verifying the installation Use the following methods to verify that the drive is installed correctly: • Observe the drive LEDs to verify that the drive is operating correctly. For more information, see "Drive LEDs."...
Figure 6-6 Installing a power supply Connect the power cord. For more information, see "Connecting the power cord." Verifying the installation Use one of the following methods to verify that the power supply is installed correctly: • Observe the power supply LED to verify that the power supply is operating correctly. For more information about the power supply LED, see LEDs in "Rear panel."...
Figure 6-7 Removing the compute module blank Install the compute module: a. Press the clips at both ends of the compute module inward to release the locking levers, as shown in Figure 6-8. Figure 6-8 Releasing the locking levers b. Push the module gently into the slot until you cannot push it further. Then, close the locking levers at both ends to secure the module in place, as shown in Figure 6-9.
Figure 6-9 Installing the compute module Install the removed security bezel. For more information, see "Installing the security bezel." Connect the power cord. For more information, see "Connecting the power cord." Power on the server. For more information, see "Powering on the server."...
Figure 6-10 Installing the low mid air baffle To install the GPU module air baffle, align the two pin holes with the guide pins (near the compute module front panel) in the compute module. Then, gently press down the air baffle onto the main board and push it forward until you cannot push it any further, as shown in Figure 6-11.
Figure 6-11 Installing the GPU module air baffle Install a riser card and a PCIe module in the compute module. For more information, see "Installing a riser card and a PCIe module in a compute module." Install the compute module access panel. For more information, see "Replacing a compute module access panel."...
Install PCIe modules to the riser card. For more information, see "Installing riser cards and PCIe modules at the server rear." If an installed PCIe module requires external cables, remove the air baffle panel closer to the PCIe module installation slot for the cables to pass through. Figure 6-12 Removing an air baffle panel from the GPU module air baffle Install the GPU module air baffle to the rear riser card.
Installing riser cards and PCIe modules The server provides two PCIe riser connectors and three PCIe riser bays. The three riser bays are at the server rear and each compute module provides one riser connector. For more information about the locations of the bays and connectors, see "Rear panel view" and "Main board components", respectively.
Remove the high mid air baffle. For more information, see "Replacing air baffles in a compute module." Install the low mid air baffle. For more information, see "Installing the low mid air baffle or GPU module air baffle to a compute module."...
Figure 6-16 Installing the riser card to the compute module c. Connect PCIe module cables, if any, to the drive backplane. Install the compute module access panel. For more information, see "Replacing a compute module access panel." 10. Install the compute module. For more information, see "Installing a compute module."...
Figure 6-17 Removing the rear riser card blank Install the PCIe module to the riser card: a. Remove the riser card air baffle, if the PCIe module to be installed needs to connect cables. For more information, see "Replacing a riser card air baffle."...
Figure 6-18 Removing the PCIe module blank d. Insert the PCIe module into the PCIe slot along the guide rails, and then close the riser card cover, as shown in Figure 6-19. 6-15...
Figure 6-19 Installing a PCIe module to the riser card e. Connect PCIe module cables, if any, to the PCIe module. f. Install the removed riser card air baffle. For more information, see "Replacing a riser card air baffle." Install the riser card to the server. a.
b. Install the riser card to the server. As shown in Figure 6-21, gently push the riser card into the bay until you cannot push it further, and then close the ejector lever to secure the riser card. Figure 6-21 Installing the riser card to the server Connect PCIe module cables, if any.
• RAID-LSI-9460-8i(4G). • RAID-LSI-9460-16i(4G). Procedure The procedure is the same for installing storage controllers of different models. This section uses the RAID-LSI-9361-8i(1G)-A1-X storage controller as an example. To install a storage controller: Power off the server. For more information, see "Powering off the server."...
Figure 6-23 Installing the flash card Install the storage controller to the riser card: a. (Optional.) Connect the flash card cable (P/N 0404A0VU) to the flash card. Figure 6-24 Connecting the flash card cable to the flash card b. Install the storage controller to the riser card. For more information, see "Installing riser cards and PCIe modules at the server rear."...
Figure 6-25 Installing the supercapacitor holder e. Connect one end of the supercapacitor cable (P/N 0404A0VT) provided with the flash card to the supercapacitor cable, as shown in Figure 6-26. Figure 6-26 Connecting the supercapacitor cable f. Insert the cableless end of the supercapacitor into the holder. Pull a clip on the holder, insert the other end of the supercapacitor into the holder, and then release the clip, as shown in by callouts 1, 2, and 3 in Figure...
Figure 6-27 Installing the supercapacitor and connecting the supercapacitor cable h. Install the compute module access panel. For more information, see "Replacing a compute module access panel." i. Install the compute module. For more information, see "Installing a compute module." j.
For the GPU module to take effect, make sure processor 2 of the compute module is in position. Procedure The procedure is the same for installing GPU modules GPU-P4-X, GPU-P40-X, GPU-T4, GPU-P100, GPU-V100, and GPU-V100-32G. This section uses the GPU-P100 as an example. To install a GPU module in a compute module: Power off the server.
Figure 6-29 Installing a GPU module Install the riser card on PCIe riser connector 0. Align the pin holes on the riser card with the guide pins on the main board, and place the riser card on the main board. Then, fasten the captive screws to secure the riser card into place, as shown in Figure 6-30.
Installing a GPU module to a rear riser card Guidelines You can install GPU modules only to the riser card in riser bay 1 or 3. To install only one GPU module to a rear riser card, install the GPU module in PCIe slot 2. To install two GPU modules to a rear riser card, install the GPU modules in PCIe slots 2 and 6.
Install the mLOM Ethernet adapter to the riser card: a. Remove the screw from the mLOM Ethernet adapter slot and then remove the blank, as shown in Figure 6-31. Figure 6-31 Removing the screw b. Open the riser card cover. For more information, see "Installing riser cards and PCIe modules at the server rear."...
Figure 6-32 Installing an mLOM Ethernet adapter to the riser card If you have removed the riser card air baffle, install the removed riser card air baffle. For more information, see "Replacing a riser card air baffle." Install the riser card to the server. For more information, see "Installing riser cards and PCIe modules at the server rear."...
Install the PCIe Ethernet adapter to the riser card. For more information, see "Installing riser cards and PCIe modules." (Optional.) If the PCIe Ethernet adapter supports NCSI, connect the NCSI cable from the PCIe Ethernet adapter to the NCSI connector on the riser card. For more information about the NCSI connector location, see "Riser cards."...
Figure 6-33 Installing the internal threaded stud b. Insert the connector of the SSD into the socket, and push down the other end of the SSD. Then, fasten the screw provided with the transfer module to secure the SSD into place, as shown in Figure 6-34.
11. Install the compute module. For more information, see "Installing a compute module." 12. Install the removed security bezel. For more information, see "Installing the security bezel." 13. Connect the power cord. For more information, see "Connecting the power cord." 14.
Figure 6-35 Installing an SD card Install the extended module to the management module. Align the two blue clips on the extended module with the bracket on the management module, and slowly insert the extended module downwards until it snaps into place, as shown in Figure 6-36.
Installing an NVMe SSD expander module Guidelines A riser card in a compute module is required when you install an NVMe SSD expander module. An NVMe SSD expander module is required only when NVMe drives are installed. For configurations that require an NVMe expander module, see "Drive configurations and numbering." Procedure The procedure is the same for installing a 4-port NVMe SSD expander module and an 8-port NVMe SSD expander module.
14. Power on the server. For more information, see "Powering on the server." Installing the NVMe VROC module Identify the NVMe VROC module connector on the management module. For more information, see "Management module components." Power off the server. For more information, see "Powering off the server."...
To install a 4SFF drive backplane: Power off the server. For more information, see "Powering off the server." Remove the security bezel, if any. For more information, see "Replacing the security bezel." Remove the compute module. For more information, see "Removing a compute module."...
Identify the diagnostic panel cable before you install the diagnostic panel. The P/N for the cable is 0404A0SP. Procedure Power off the server. For more information, see "Powering off the server." Remove the security bezel, if any. For more information, see "Replacing the security bezel."...
Installing processors Guidelines • To avoid damage to the processors or main board, only H3C-authorized personnel and professional server engineers are allowed to install a processor. • Make sure the processors are the same model if multiple processors are installed.
Table 6-2 Processor installation locations Number of processors Installation locations Socket 1 of compute module 1. • For processors of model 5xxx, install a processor in socket 1 of compute module 1 and the other in socket 1 of compute module 2. •...
Figure 6-42 Installing a processor onto the retaining bracket Install the retaining bracket onto the heatsink: CAUTION: When you remove the protective cover over the heatsink, be careful not to touch the thermal grease on the heatsink. a. Lift the cover straight up until it is removed from the heatsink, as shown in Figure 6-43.
b. Install the retaining bracket onto the heatsink. As shown in Figure 6-44, align the alignment triangle on the retaining bracket with the cut-off corner of the heatsink. Place the bracket on top of the heatsink, with the four corners of the bracket clicked into the four corners of the heatsink.
CAUTION: Use an electric screwdriver and set the torque to 1.4 Nm (12 in-lbs) when fastening the screws. Failure to do so may result in poor contact of the processor and the main board or damage to the pins in the processor socket. Figure 6-46 Attaching the retaining bracket and heatsink to the processor socket 11.
Guidelines WARNING! The DIMMs are not hot swappable. You can install a maximum of 12 DIMMs for each processor, six DIMMs per memory controller. For more information, see "DIMM slots." For a DIMM to operate at 2933 MHz, make sure the following conditions are met: •...
NOTE: If the DIMM configuration does not meet the requirements for the configured memory mode, the system uses the default memory mode (Independent mode). For more information about memory modes, see the BIOS user guide for the server. Figure 6-47 DIMM population schemes (one processor present)
Figure 6-49 DIMM population schemes (two processors of model 5xxx present)
• As a best practice, install DCPMMs symmetrically across the two memory processing units for a processor. • To install both DRAM DIMM and DCPMM in a channel, install the DRAM DIMM in the white slot and the DCPMM in the black slot. To install only one DIMM in a channel, install the DIMM in the white slot if the DIMM is DCPMM.
Figure 6-52 Installing a DIMM Install the removed air baffles. For more information, see "Replacing air baffles in a compute module." 10. Reconnect the cable between the supercapacitor and the main board. 11. Install the removed riser card in the compute module. For more information, see "Installing a riser card and a PCIe module in a compute module."...
• H3C is not liable for blocked data access caused by improper use of the TCM or TPM. For more information, see the encryption technology feature documentation provided by the operating system.
To install a TPM: Power off the server. For more information, see "Powering off the server." Disconnect all the cables from the management module. Remove the management module. For more information, see "Removing the management module." Install the TPM: a. Press the TPM into the TPM connector on the management module, as shown in Figure 6-54.
Install the management module. For more information, see "Installing the management module." Reconnect the cables to the management module. Connect the power cord. For more information, see "Connecting the power cord." Power on the server. For more information, see "Powering on the server."...
Replacing hardware options If you are replacing multiple hardware options, read their replacement procedures and identify similar steps to streamline the entire replacement procedure. Replacing the security bezel Insert the key provided with the bezel into the lock on the bezel and unlock the security bezel, as shown by callout 1 in Figure 7-1.
Observe the drive LEDs to verify that the drive is not selected by the storage controller and is not performing a RAID migration or rebuilding. For more information about drive LEDs, see "Drive LEDs." Remove the drive, as shown in Figure 7-2: a.
Install the removed security bezel, if any. For more information, see "Installing the security bezel." Verifying the replacement For information about the verification, see "Installing NVMe drives." Replacing a compute module and its main board WARNING! To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.
Figure 7-4 Removing a compute module Removing the main board of a compute module Remove the compute module. For more information, see "Removing a compute module." Remove the components in the compute module: a. Remove the drives. For more information, see "Replacing a SAS/SATA drive."...
Figure 7-5 Removing the screws on a main board b. Lift the cable clamp and the riser card bracket from the main board, as shown by callouts 1 and 2 in Figure 7-6. c. Lift the main board slowly out of the compute module, as shown by callout 3 in Figure 7-6.
Figure 7-7 Installing the main board, cable clamp, and riser card bracket c. Fasten the 16 screws on the main board, as shown in Figure 7-8. Figure 7-8 Securing the screws on a main board Install the components in the compute module: a.
h. Install the compute module access panel. For more information, see "Replacing a compute module access panel." Install the compute module. For more information, see "Installing a compute module." Install the removed security bezel. For more information, see "Installing the security bezel."...
Figure 7-9 Removing the access panel Install a new compute module access panel, as shown in Figure 7-10: a. Place the access panel on top of the compute module. Make sure the pegs inside the access panel are aligned with the grooves on both sides of the compute module. b.
Remove the server from the rack, if the space over the server is insufficient. For more information, see "Removing the server from a rack." Remove the chassis access panel. The removal process is the same for the compute module access panel and chassis access panel. For more information, see "Replacing a compute module access panel."...
Figure 7-12 Removing the power cord Holding the power supply by its handle and pressing the retaining latch with your thumb, pull the power supply slowly out of the slot, as shown in Figure 7-13. Figure 7-13 Removing the power supply Install a new power supply.
Replacing air baffles WARNING! To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them. Replacing air baffles in a compute module Power off the server. For more information, see "Powering off the server."...
Figure 7-15 Installing the high mid air baffle To install the low mid air baffle, see "Installing the low mid air baffle or GPU module air baffle to a compute module" for more information. To install a left or right air baffle, place the air baffle in the compute module, as shown in Figure 7-16, with the standouts on the air baffle aligned with the notches on the side of the compute module.
10. Install the compute module access panel. For more information, see "Replacing a compute module access panel." 11. Install the compute module. For more information, see "Installing a compute module." 12. Install the removed security bezel. For more information, see "Installing the security bezel."...
Figure 7-18 Installing the power supply air baffle Install the chassis access panel. For more information, see "Replacing the chassis access panel." Mount the server in a rack. For more information, see "Installing the server." Connect the power cord. For more information, see "Connecting the power cord."...
Figure 7-19 Removing the riser card air baffle Install a new air baffle. Squeeze the clips at both sides of the air baffle, place the air baffle into place, and release the clips, as shown in Figure 7-20. Make sure the standouts at both ends of the air baffle are aligned with the notches inside the riser card.
Figure 7-20 Installing a riser card air baffle Install the riser card. For more information, see "Installing riser cards and PCIe modules at the server rear." Reconnect the external cables to the riser card. Connect the power cord. For more information, see "Connecting the power cord."...
a. Disconnect all PCIe cables from the riser card. b. Loosen the captive screw on the riser card, and lift the riser card slowly out of the compute module, as shown in Figure 7-21. Figure 7-21 Removing the RS-FHHL-G3 riser card Hold and rotate the latch upward to unlock the riser card, and then pull the PCIe module out of the slot, as shown in Figure...
Procedure Power off the server. For more information, see "Powering off the server." Disconnect external cables from the riser card, if the cables hinder riser card replacement. Remove the riser card, as shown in Figure 7-23: a. As shown by callout 1, press the latch upward to release the ejector lever on the riser card. b.
Figure 7-24 Removing a PCIe module in the riser card Install a new PCIe module to the riser card. For more information, see "Installing riser cards and PCIe modules at the server rear." Install the riser card. For more information, see "Installing riser cards and PCIe modules at the server rear."...
Guidelines To replace the storage controller with a controller of a different model, reconfigure RAID after the replacement. For more information, see the storage controller user guide for the server. To replace the storage controller with a controller of the same model, make sure the following configurations remain the same after replacement: •...
Verifying the replacement Log in to HDM to verify that the storage controller is in a correct state. For more information, see HDM online help. Replacing the power fail safeguard module for a storage controller WARNING! To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.
10. Remove the supercapacitor, as shown in Figure 7-26. a. Disconnect the cable between the main board and the supercapacitor, as shown by callout 1. b. Pull the clip on the supercapacitor holder, take the supercapacitor out of the holder, and then release the clip, as shown by callouts 2 and 3.
Verifying the replacement Log in to HDM to verify that the flash card and the supercapacitor are in a correct state. For more information, see HDM online help. Replacing a GPU module WARNING! To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them.
Figure 7-29 Removing a GPU module Install a new GPU module. For more information, see "Installing a GPU module in a compute module." Install the compute module access panel. For more information, see "Replacing a compute module access panel." Install the compute module. For more information, see "Installing a compute module."...
Replacing an mLOM Ethernet adapter The procedure is the same for mLOM Ethernet adapters of different models. This section uses the NIC-GE-4P-360T-L3-M mLOM Ethernet adapter as an example. Procedure Power off the server. For more information, see "Powering off the server."...
Reconnect the external cables to the riser card. Connect the power cord. For more information, see "Connecting the power cord." Power on the server. For more information, see "Powering on the server." Verifying the replacement Log in to HDM to verify that the mLOM Ethernet adapter is in a correct state. For more information, see HDM online help.
Remove the compute module access panel. For more information, see "Replacing a compute module access panel." Remove the M.2 transfer module from the riser card. For more information, see "Replacing the riser card and PCIe module in a compute module." Remove the PCIe M.2 SSD: a.
Install a new M.2 transfer module and a new PCIe M.2 SSD. For more information, see "Installing a PCIe M.2 SSD in a compute module." Install the compute module access panel. For more information, see "Replacing a compute module access panel."...
Figure 7-33 Removing an SD card Install a new SD card. For more information, see "Installing SD cards." Install the management module. For more information, see "Installing the management module." Reconnect the cables to the management module. Connect the power cord. For more information, see "Connecting the power cord."...
Figure 7-34 Removing the dual SD card extended module Remove the SD cards installed on the extended module, as shown in Figure 7-33. Install a new dual SD card extended module to the management module and install the removed SD cards. For more information, see "Installing SD cards."...
a. Disconnect the expander module from the front drive backplanes by removing the cables from the front drive backplanes. b. Remove the PCIe riser card that holds the NVMe SSD expander module. For more information, see "Replacing the riser card and PCIe module in a compute module."...
Figure 7-36 Removing the NVMe VROC module Install a new NVMe VROC module. For more information, see "Installing the NVMe VROC module." Install the management module. For more information, see "Installing the management module." Reconnect the cables to the management module. Connect the power cord.
Figure 7-37 Removing a fan module Install a new fan module. Insert the fan module into the slot, as shown in Figure 7-38. Figure 7-38 Installing a fan module Install the chassis access panel. For more information, see "Replacing the chassis access panel."...
Guidelines • To avoid damage to a processor or a compute module main board, only H3C authorized or professional server engineers can install, replace, or remove a processor. • Make sure the processors on the server are the same model.
Figure 7-39 Removing a processor heatsink Remove the processor retaining bracket from the heatsink, as shown in Figure 7-40: a. Insert a flat-head tool (such as a flat-head screwdriver) into the notch marked with TIM BREAKER to pry open the retaining bracket, as shown by callout 1. b.
Figure 7-40 Removing the processor retaining bracket Separate the processor from the retaining bracket with one hand pushing down and the other hand tilting the processor, as shown in Figure 7-41. Figure 7-41 Separating the processor from the retaining bracket Installing a processor Install the processor onto the retaining bracket.
Paste the bar code label supplied with the processor over the original processor label on the heatsink. IMPORTANT: This step is required for you to obtain H3C's processor servicing. Install the removed air baffles in the compute module. For more information, see "Replacing air baffles in a compute module."...
Remove the security bezel, if any. For more information, see "Replacing the security bezel." Remove the compute module. For more information, see "Removing a compute module." Remove the compute module access panel. For more information, see "Replacing a compute module access panel."...
The server comes with a system battery (Panasonic BR2032) installed on the management module, which supplies power to the real-time clock and has a lifespan of 5 to 10 years. If the server no longer automatically displays the correct date and time, you might need to replace the battery. As a best practice, use the Panasonic BR2032 battery to replace the old one.
Figure 7-45 Installing the system battery Install the management module. For more information, see "Installing the management module." Connect cables to the management module. Connect the power cord. For more information, see "Connecting the power cord." Power on the server. For more information, see "Powering on the server."...
Remove the air baffles that might hinder the replacement in the compute module. For more information, see "Replacing air baffles in a compute module." Disconnect all the cables from the backplane. Loosen the captive screw on the backplane, slowly lift the backplane, and then pull it out of the compute module, as shown in Figure 7-46.
Figure 7-47 Removing the management module Remove the dual SD card extended module. For more information, see "Replacing the dual SD card extended module." Remove the NVMe VROC module. For more information, see "Replacing the NVMe VROC module." Installing the management module Install the NVMe VROC module.
Removing the PDB Power off the server. For more information, see "Powering off the server." Remove the server from the rack, if the space over the server is insufficient. For more information, see "Removing the server from a rack." Remove the chassis access panel. For more information, see "Replacing the chassis access panel."...
Figure 7-50 Unlocking the PDB c. Pull up the extension handles on the ejector levers. Hold the handles and rotate the ejector levers downward, as shown by callouts 1 and 2 in Figure 7-51. d. Pull the PDB out of the slot, as shown by callout 3 in Figure 7-51.
Figure 7-52 Installing the PDB d. Connect cables to the PDB, as shown in Figure 7-53. Figure 7-53 Connecting cables to the PDB Install the management module. For more information, see "Installing the management module." Install the removed power supplies. For more information, see "Installing power supplies."...
Replacing the midplane WARNING! To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them. Removing the midplane Procedure Power off the server. For more information, see "Powering off the server." Remove the server from the rack. For more information, see "Removing the server from a rack."...
Installing the midplane Procedure Install a midplane, as shown in Figure 7-55: a. Insert the midplane into the server along the slide rails, and push the midplane toward the server front until you cannot push it further, as shown by callout 1. b.
Replacing the diagnostic panel WARNING! To avoid bodily injury from hot surfaces, allow the server and its internal modules to cool before touching them. To replace the diagnostic panel: Power off the server. For more information, see "Powering off the server."...
To replace the left chassis ear: Power off the server. For more information, see "Powering off the server." Remove the server from the rack, if the space over the server is insufficient. For more information, see "Removing the server from a rack."...
17. Power on the server. For more information, see "Powering on the server." Replacing the TPM/TCM To avoid system damage, do not remove the installed TPM/TCM. If the installed TPM/TCM is faulty, remove the management module, and contact H3C Support for management module and TPM/TCM replacement. 7-50...
Connecting internal cables Properly route the internal cables and make sure they are not squeezed. Connecting drive cables Connecting drive cables in compute modules 24SFF SAS/SATA drive cabling Connect SAS port 1 on the 24SFF drive backplane to SAS port A1 on the main board and connect SAS port 2 to SAS port A2, as shown in Figure 8-1.
8SFF SAS/SATA drive cabling Figure 8-2 8SFF SAS/SATA drive backplane connected to the main board (1) and (3) AUX signal cables (2) and (4) Power cords (5) SAS/SATA data cable 1 (for drive cage bay 2/4) (6) SAS/SATA data cable 2 (for drive cage bay 1/3) 8SFF NVMe drive cabling To install 8SFF NVMe drives, you must install an 8-port NVMe SSD expander module to riser card 0 in the compute module.
NOTE: In the figure, A1 to A4 and B1 to B4 represent data ports NVMe A1 to NVMe A4 and data ports NVMe B1 to B4 on the NVMe SSD expander module. NVMe 1 to NVMe 4 represent the labels on NVMe data cables.
4SFF SAS/SATA drive cabling Figure 8-5 4SFF SAS/SATA drive backplane connected to the main board (1) AUX signal cable (2) Power cord (3) SAS/SATA data cable 4SFF NVMe drive cabling To install 4SFF NVMe drives, you must install a 4-port NVMe SSD expander module to riser card 0 in the compute module.
Storage controller cabling in riser cards at the server rear When connecting storage controller data cables, make sure you connect the corresponding peer ports with the correct storage controller data cable. Use Table 8-1 and Table 8-2 to determine the ports to be connected and the cable to use. Table 8-1 Storage controller cabling method (for all storage controllers except for the RAID-LSI-9460-16i(4G)) Location of...
Figure 8-7 Connecting the storage controller cable in slot 3 (for all storage controllers except for the RAID-LSI-9460-16i(4G)) Figure 8-8 Connecting the storage controller cable in slot 6 (for all storage controllers except for the RAID-LSI-9460-16i(4G))
Figure 8-9 Connecting the storage controller cables in slot 3 (for the RAID-LSI-9460-16i(4G)) Figure 8-10 Connecting the storage controller cables in slot 6 (for the RAID-LSI-9460-16i(4G))
Connecting the flash card on a storage controller When connecting a flash card cable and a supercapacitor cable, make sure you connect the correct supercapacitor connectors in the riser card and on the main board. Use Table 8-3 to determine the method for flash card and supercapacitor cabling.
Figure 8-12 Connecting the flash card on a storage controller in slot 6 Connecting the GPU power cord The method for connecting the GPU power cord is the same for different GPU models. This section uses the GPU-P100 as an example. Figure 8-13 Connecting the GPU power cord...
Connecting the front I/O component cable from the right chassis ear Figure 8-15 Connecting the front I/O component cable Connecting the cable for the front VGA and USB 2.0 connectors on the left chassis ear Figure 8-16 Connecting the cable for the front VGA and USB 2.0 connectors on the left chassis ear 8-11...
Maintenance The following information describes the guidelines and tasks for daily server maintenance. Guidelines • Keep the equipment room clean and tidy. Remove unnecessary devices and objects from the equipment room. • Make sure the temperature and humidity in the equipment room meet the server operating requirements.
The cables are in good condition and are not twisted or corroded at the connection point. Technical support If you encounter any complicated problems during daily maintenance or troubleshooting, contact H3C Support. Before contacting H3C Support, collect the following server information to facilitate troubleshooting: • Log and sensor information: Log information: −...
Server models and chassis view H3C UniServer R6900 G3 servers are 4U rack servers with two dual-processor compute modules communicating through the midplane. The servers are suitable for cloud computing, distributed storage, and video storage, as well as enterprise infrastructure and telecommunications applications.
Components Figure 10-2 R6900 G3 server components Table 10-2 R6900 G3 server components Item Description (1) Chassis access panel (2) Power supply air baffle Provides ventilation aisles for power supplies. (3) Dual SD card extended module Provides two SD card slots. (4) System battery Supplies power to the system clock.
Item Description (14) Riser card air baffle Provides ventilation aisles for PCIe modules in riser cards at the server rear. (15) Chassis (16) Chassis ears Attach the server to the rack. The right ear is integrated with the front I/O component, and the left ear is integrated with VGA and USB 2.0 connectors.
Figure 10-3 48SFF front panel (1) Serial label pull tab module (2) USB 2.0 connectors (3) VGA connector (4) Compute module 1 (5) SAS/SATA drive or diagnostic panel (optional) (6) USB 3.0 connector (7) Compute module 2 Figure 10-4 32SFF front panel (1) Serial label pull tab module (2) USB 2.0 connectors (3) VGA connector...
Figure 10-7 8SFF compute module front panel (1) Drive cage bay 1/3 for 4SFF SAS/SATA or NVMe drives (optional) (2) Diagnostic panel (optional) (3) Drive cage bay 2/4 for 4SFF SAS/SATA or NVMe drives (optional) (4) Diagnostic panel (optional) NOTE: Drive cage bays 1 and 2 are for compute module 1, and drive cage bays 3 and 4 are for compute module 2.
Table 10-3 LEDs and buttons on the front panel
Power on/standby button and system power LED:
• Steady green—The system has started.
• Flashing green (1 Hz)—The system is starting.
• Steady amber—The system is in Standby state.
• Off—No power is present. Possible reasons: No power source is connected.
Rear panel Rear panel view Figure 10-9 shows the rear panel view. Figure 10-9 Rear panel components (1) Power supply 1 (2) Power supply 2 (3) VGA connector (4) BIOS serial port (5) HDM dedicated network port (1 Gbps, RJ-45, default IP address 192.168.1.2/24) (6) USB 3.0 connectors (7) Power supply 3 (8) Power supply 4...
Figure 10-10 Rear panel LEDs (1) Power supply LED for power supply 1 (2) Power supply LED for power supply 2 (3) UID LED (4) Link LED of the Ethernet port (5) Activity LED of the Ethernet port (6) Power supply LED for power supply 3 (7) Power supply LED for power supply 4 Table 10-5 LEDs on the rear panel Status...
Activity LED of the Ethernet port:
• Flashing green (1 Hz)—The port is receiving or sending data.
• Off—The port is not receiving or sending data.
Ports
Table 10-6 Ports on the rear panel
HDM dedicated network port (RJ-45): Establishes a network connection to manage HDM
Figure 10-11 Main board components (1) SAS port B2 (×4 SAS ports) for PCIe riser bay 3 (2) SAS port B1 (×4 SAS ports) for PCIe riser bay 3 (3) Supercapacitor connector 2 for PCIe riser bay 3 (4) PCIe riser connector 0 for processor 2 (5) Supercapacitor connector 1 for PCIe riser bay 1 (6) SAS port A1 (×4 SAS ports) for PCIe riser bay 1 (7) SAS port A2 (×4 SAS ports) for PCIe riser bay 1...
8SFF and 24SFF compute modules have the same physical layout of the DIMM slots on the main board, as shown in Figure 10-12. For more information about the DIMM slot population rules, see the guidelines in "Installing DIMMs." Figure 10-12 DIMM physical layout Management module Management module components Figure 10-13 Management module components...
System maintenance switches Use the system maintenance switch if you forget HDM username, HDM password, or BIOS password, or need to restore default BIOS settings, as described in Table 10-8. To identify the location of the switch, see Figure 10-13. Table 10-8 System maintenance switch Item Description...
About component model names The model name of a hardware option in this document might differ slightly from its model name label. A model name label might add a prefix or suffix to the hardware-coded model name for purposes such as identifying the matching server brand or applicable region.
Model  Base frequency  Power  Cache (L3)  UPI speed  Supported max. data rate
6154   3.0 GHz  200 W  24.75 MB  10.4 GT/s  2666 MHz
6152   2.1 GHz  140 W  30.25 MB  10.4 GT/s  2666 MHz
6150   2.7 GHz  165 W...
Model  Form factor  Capacity  Interface  Rate  Link width
SSD-NVME-3.2T-PBlaze5  HHHL  3.2 TB  PCIe  8 Gbps  ×8
SSD-NVME-4T-P4500  HHHL  4 TB  PCIe  8 Gbps  ×4
SSD-NVME-4T-P4600  HHHL  4 TB  PCIe  8 Gbps  ×4
SSD-NVME-6.4T-PBlaze5  HHHL  6.4 TB  PCIe  8 Gbps  ×8
Drive LEDs
The server supports SAS, SATA, and NVMe drives, of which SAS and SATA drives support hot swapping.
Fault/UID LED status / Present/Active LED status / Description:
• Steady amber / Steady green or flashing green (4.0 Hz): The drive is faulty. Replace the drive immediately.
• Steady blue / Steady green or flashing green (4.0 Hz): The drive is operating correctly and selected by the RAID controller.
• Flashing green (4.0 Hz): The drive is performing a RAID migration or rebuilding, or the system is reading or...
Figure 11-3 Drive numbering for 48SFF drive configurations (48SFF server) NOTE: For the location of the compute modules, see "Front panel view of the server." 32SFF server Table 11-8 presents the drive configurations available for the 32SFF server and their compatible types of storage controllers and NVMe SSD expander modules.
Drive cage bay 1 in Drive cage bay 2 in Drive configuration Compute module 2 compute module 1 compute module 1 32SFF 4 SFF NVMe drives 4 SFF NVMe drives (24 SFF SAS/SATA drives and 8 SFF NVMe drives) 28SFF 4 SFF SAS/SATA drives (28 SFF SAS/SATA drives)
Storage controllers For some storage controllers, you can order a power fail safeguard module to prevent data loss from power outages. This module provides a flash card and a supercapacitor. When a system power failure occurs, the supercapacitor provides power for a minimum of 20 seconds. During this interval, the storage controller can transfer data from DDR memory to the flash card, where the data remains indefinitely or until the controller retrieves the data.
HBA-LSI-9311-8i Item Specifications Form factor Connectors One ×8 mini-SAS-HD connector Number of internal ports 8 internal SAS ports (compatible with SATA) Drive interface 12 Gbps SAS 3.0 or 6 Gbps SATA 3.0 PCIe interface PCIe3.0 ×8 RAID levels 0, 1, 10, 1E Built-in cache memory •...
Item Specifications PCIe interface PCIe3.0 ×8 RAID levels 0, 1, 5, 6, 10, 50, 60 Built-in cache memory 1 GB internal cache module (DDR3-1866 MHz) • SAS HDD • SAS SSD • SATA HDD Supported drives • SATA SSD The controller supports a maximum of 24 drives. BAT-LSI-G2-4U-B-X Power fail safeguard module The power fail safeguard module is optional.
RAID-LSI-9460-8i(2G) Item Specifications Form factor Connectors One ×8 mini-SAS-HD connector Number of internal ports 8 internal SAS ports (compatible with SATA) Drive interface 12 Gbps SAS 3.0 or 6 Gbps SATA 3.0 PCIe interface PCIe3.1 ×8 RAID levels 0, 1, 5, 6, 10, 50, 60 Built-in cache memory 2 GB internal cache module (DDR4-2133 MHz) •...
RAID-LSI-9460-16i(4G) Item Specifications Form factor Connectors Four ×4 mini-SAS-HD connector Number of internal ports 16 internal SAS ports (compatible with SATA) Drive interface 12 Gbps SAS 3.0 or 6 Gbps SATA 3.0 PCIe interface PCIe3.1 ×8 RAID levels 0, 1, 5, 6, 10, 50, 60 Built-in cache memory 4 GB internal cache module (DDR4-2133 MHz) •...
NVMe SSD expander modules Model Specifications 4-port NVMe SSD expander module, which supports a maximum of 4 EX-4NVMe-B NVMe SSD drives. 8-port NVMe SSD expander module, which supports a maximum of 8 EX-8NVMe-B NVMe SSD drives. GPU modules The GPU-V100 and GPU-V100-32G modules require PCIe I/O resources. GPU-P4-X Item Specifications...
Item Specifications Memory bandwidth 732 GB/s Power connector Available GPU-T4 Item Specifications PCIe interface PCIe3.0 ×16 Form factor LP, single-slot wide Maximum power consumption 70 W Memory size 16 GB GDDR6 Memory bus width 256 bits Memory bandwidth 320 GB/s Power connector GPU-V100 Item...
mLOM Ethernet adapters In addition to mLOM Ethernet adapters, the server also supports PCIe Ethernet adapters (see "PCIe Ethernet adapters"). By default, port 1 on an mLOM Ethernet adapter acts as an HDM shared network port. NIC-GE-4P-360T-L3-M Item Specifications Form factor Ports Connector RJ-45...
Riser card guidelines Each PCIe slot in a riser card can supply a maximum of 75 W power to the PCIe module. You must connect a separate power cord to the PCIe module if it requires more than 75 W power. If a processor is faulty or absent, the corresponding PCIe slots are unavailable.
Riser card name  Ports on the riser card  Corresponding ports on the system board
Riser card 1:
• SAS port A1: SAS port A1 on compute module 1
• SAS port A2: SAS port A2 on compute module 1
• SAS port B1: SAS port A1 on compute module 2
• SAS port B2: SAS port A2 on compute module 2
Figure 11-11 Fan module layout Air baffles Compute module air baffles Each compute module comes with two bilateral air baffles (a right air baffle and a left air baffle). You must install a low mid air baffle, high mid air baffle, or GPU module air baffle as required. Table 11-13 lists air baffles available for a compute module and their installation locations and usage scenarios.
Name  Installation location  Usage scenario
GPU module air baffle  Above the DIMMs between the two processors.  A GPU module is installed in the compute module.
NOTE: For more information about the air baffle locations, see "Main board components."
Power supply air baffle
The server comes with one power supply air baffle installed over the fan modules for heat dissipation of the power supplies.
Name  Installation location  Functions
GPU module air baffle  Between the NCSI connector and the mLOM Ethernet adapter connector.  Provides ventilation aisles in an RS-4*FHHL-G3 riser card.
Power supplies
The power supplies have an overtemperature protection mechanism. A power supply stops working when an overtemperature occurs and automatically recovers when the overtemperature condition is removed.
Item  Specifications
Efficiency at 50% load: 94%, 80 Plus platinum level
Temperature requirements: Operating temperature 0°C to 50°C (32°F to 122°F); storage temperature –40°C to +70°C (–40°F to +158°F)
Operating humidity: 5% to 90%
Maximum altitude: 5000 m (16404.20 ft)
Redundancy: N+N redundancy
Hot swappable
Item  Specifications
Efficiency at 50% load: 94%, 80 Plus platinum level
Temperature requirements: Operating temperature 0°C to 50°C (32°F to 122°F); storage temperature –40°C to +70°C (–40°F to +158°F)
Operating humidity: 5% to 90%
Maximum altitude: 5000 m (16404.20 ft)
Redundancy: N+N redundancy
Hot swappable
Diagnostic panels Diagnostic panels provide diagnostics and troubleshooting capabilities. You can locate and troubleshoot component failures by using the diagnostic panels in conjunction with the event log generated in HDM. NOTE: A diagnostic panel displays only one component failure at a time. When multiple component failures exist, the diagnostic panel displays all these failures one by one at intervals of 4 seconds.
TEMP LED
LED status: Flashing red
Error code: Temperature sensor ID
Description: A severe temperature warning is present on the component monitored by the sensor. This warning might occur because the temperature of the component has exceeded the upper threshold or dropped below the lower threshold.
Error code  Faulty item
A1 through A9, AA, Ab, or AC  Compute module 1:
• A1 through A9—DIMMs in slots A1 through A9
• AA—DIMM in slot A10
• Ab—DIMM in slot A11
• AC—DIMM in slot A12
Error code Faulty item PDB P5V voltage PDB P3V3_STBY voltage Management module P1V05_PCH_STBY voltage Management module PVNN_PCH_STBY voltage Management module P1V8_PCH_STBY voltage Compute module 1 HPMOS voltage Compute module 1 PVCCIO_CPU1 voltage Compute module 1 PVCCIN_CPU1 voltage Compute module 1 PVCCSA_CPU1 voltage Compute module 1 PVCCIO_CPU2 voltage Compute module 1 PVCCIN_CPU2 voltage Compute module 1 PVCCSA_CPU2 voltage...
NOTE: • The term "CPU" in this table refers to processors. • For the location of riser cards at the server rear, see "Riser cards." Fiber transceiver modules Central Max transmission Model Connector wavelength distance SFP-25G-SR-MM850-1-X 850 nm 100 m (328.08 ft) SFP-XG-SX-MM850-A1-X 850 nm 300 m (984.25 ft)
Appendix C Managed hot removal of NVMe drives Managed hot removal of NVMe drives enables you to remove NVMe drives safely while the server is operating. Use Table 12-1 to determine the managed hot removal method depending on the VMD status and the operating system.
Figure 12-1 Removing an NVMe drive Observe the Fault/UID LED on the drive. If the Fault/UID LED turns steady blue and the drive is removed from the Devices list, remove the drive from the server. For more information about the removal procedure, see "Replacing an NVMe drive." Performing a managed hot removal in Linux ®...
Figure 12-2 Identifying the drive letter of the NVMe drive to be removed Execute the ledctl locate=/dev/drive_letter command to turn on the Fault/UID LED on the drive. The drive_letter argument represents the drive letter, for example, nvme0n1. Execute the echo 1 > /sys/block/drive_letter/device/device/remove command to unmount the drive from the operating system.
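The commands above can be run as one short sequence. This is a minimal sketch only; it assumes a Linux system with the ledctl utility installed and root privileges, and the drive letter nvme0n1 is an example that must be replaced with the drive letter identified in Figure 12-2.

# List NVMe block devices to confirm the drive letter of the drive to be removed
ls /dev/nvme*
# Turn on the Fault/UID LED so the drive can be located in the drive cage
ledctl locate=/dev/nvme0n1
# Unmount the drive from the operating system before physically removing it
echo 1 > /sys/block/nvme0n1/device/device/remove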
Figure 12-4 Viewing operating NVMe drives Click the light bulb icon to turn on the Fault/UID LED on the drive, as shown in Figure 12-5. Figure 12-5 Turning on the drive Fault/UID LED Click the removal icon, as shown in Figure 12-6.
Figure 12-6 Removing an NVMe drive In the dialog box that opens, click Yes. Figure 12-7 Confirming the removal Remove the drive from the server. For more information about the removal procedure, see "Replacing an NVMe drive." 12-5...
Appendix D Environment requirements About environment requirements The operating temperature requirements for the server vary depending on the server model and hardware configuration. When the general and component-based requirements conflict, use the component-based requirement. Be aware that the actual maximum operating temperature of the server might be lower than what is stated because of poor site cooling performance.
Table 13-1 Temperature requirements for the server with 8 SFF drives in each compute module
Hardware options  Maximum server operating temperature
GPU module GPU-V100 or GPU-V100-32G  30°C (86°F)
A faulty fan, NVMe drives, or GPU module GPU-P4-X, GPU-P40-X, GPU-P100, or GPU-T4  35°C (95°F)
NOTE: With GPU-P4-X, GPU-P40-X, GPU-P100, or GPU-T4 installed, the server
Appendix E Product recycling New H3C Technologies Co., Ltd. provides product recycling services for its customers to ensure that hardware at the end of its life is recycled. Vendors with product recycling qualification are contracted to New H3C to process the recycled hardware in an environmentally responsible way.
Hot swapping  A technique that allows a module to be installed or removed while the server is running without affecting the system operation.
iFIST  Integrated Fast Intelligent Scalable Toolkit is a management tool embedded in an H3C server. It allows users to manage the server it resides in and provides features such as RAID configuration, OS and driver installation, and health status monitoring.
Item  Description
RAID  Redundant array of independent disks (RAID) is a data storage virtualization technology that combines multiple physical hard drives into a single logical unit to improve storage and security performance.
Redundancy  A mechanism that ensures high availability and business continuity by providing backup modules.
DRAM  Dynamic Random Access Memory
FIST  Fast Intelligent Scalable Toolkit
GPU  Graphics Processing Unit
HBA  Host Bus Adapter
HDD  Hard Disk Drive
HDM  H3C Device Management
IDC  Internet Data Center
iFIST  integrated Fast Intelligent Scalable Toolkit
KVM  Keyboard, Video, Mouse
LRDIMM  Load Reduced Dual Inline Memory Module
Acronym  Full name
PDU  Power Distribution Unit
POST  Power-On Self-Test
RAID  Redundant Array of Independent Disks
RDIMM  Registered Dual Inline Memory Module
SAS  Serial Attached Small Computer System Interface
SATA  Serial ATA
SD  Secure Digital
SDS  Secure Diagnosis System
SFF  Small Form Factor
SSD  Solid State Drive
TCM  Trusted Cryptography Module
TPM  Trusted Platform Module
UID  Unit Identification