NetApp HCI configuration and management documentation
NetApp
March 06, 2021
This PDF was generated from https://docs.netapp.com/us-en/hci/docs/index.html on March 06, 2021. Always check docs.netapp.com for the latest.
NetApp HCI provides both storage and compute resources, combining them to build a VMware vSphere environment backed by the capabilities of NetApp Element software. You can upgrade, expand, and monitor your system with the NetApp Hybrid Cloud Control interface and manage NetApp HCI resources with NetApp Element Plug-in for vCenter Server.
NetApp Element Plug-in for vCenter Server 4.5 availability The NetApp Element Plug-in for vCenter Server 4.5 is available outside of the management node 12.2 and NetApp HCI 1.8P1 releases. To upgrade the plug-in, follow the instructions in the NetApp HCI Upgrades documentation.
all data stored on the SSDs in the storage nodes and causes only a very small (~2%) performance impact on client IO. The following are Element API methods related to software encryption at rest (the Element API Reference Guide has more information): •...
You can find links to the latest and earlier release notes for various components of the NetApp HCI and Element storage environment. You will be prompted to log in using your NetApp Support Site credentials. NetApp HCI • NetApp HCI 1.8P1 Release Notes •...
NetApp SolidFire Active IQ. You log in to NetApp Hybrid Cloud Control by browsing to the IP address of the management node. • The NetApp Element Plug-in for vCenter Server (VCP) is a web-based tool integrated with the vSphere user interface (UI).
As part of your normal support contract, NetApp Support monitors this data and alerts you to any performance bottlenecks or potential system issues. You need to create a NetApp Support account if you do not already have one (even if you have an existing SolidFire Active IQ account) so that you can take advantage of this service.
• Volume accounts, specific only to the storage cluster on which they were created. Storage cluster administrator accounts There are two types of administrator accounts that can exist in a storage cluster running NetApp Element software: • Primary cluster administrator account: This administrator account is created when the cluster is created.
The limitation around authentication and authorization is that users from the authoritative cluster can execute actions on other clusters tied to NetApp Hybrid Cloud Control even if they are not a user on the other storage clusters. Before proceeding with managing multiple storage clusters, you should ensure that users defined on the authoritative clusters are defined on all other storage clusters with the same permissions.
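A quick way to verify this is to compare cluster-admin lists across clusters before you begin. The sketch below diffs stubbed results shaped like the Element ListClusterAdmins method's output (ListClusterAdmins is a real Element API method, but the response shape and account names here are simplified, illustrative assumptions):

```python
def missing_admins(authoritative_admins, other_admins):
    """Return users defined on the authoritative cluster that are absent
    (or have different access rights) on another storage cluster."""
    other = {a["username"]: set(a["access"]) for a in other_admins}
    mismatches = []
    for admin in authoritative_admins:
        access = other.get(admin["username"])
        if access is None or access != set(admin["access"]):
            mismatches.append(admin["username"])
    return mismatches

# Stubbed, illustrative responses shaped like ListClusterAdmins results
auth_cluster = [
    {"username": "hcc-admin", "access": ["administrator"]},
    {"username": "ops", "access": ["reporting", "volumes"]},
]
second_cluster = [
    {"username": "hcc-admin", "access": ["administrator"]},
    {"username": "ops", "access": ["reporting"]},  # permissions differ
]

print(missing_admins(auth_cluster, second_cluster))  # ['ops']
```

In practice you would fetch each list over the Element JSON-RPC API with your cluster credentials and run the same comparison for every cluster tied to NetApp Hybrid Cloud Control.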
Synchronous and asynchronous replication between clusters For clusters running NetApp Element software, real-time replication enables the quick creation of remote copies of volume data. You can pair a storage cluster with up to four other storage clusters. You can replicate volume data synchronously or asynchronously from either cluster in a cluster pair for failover and failback scenarios.
SnapMirror runs natively on the NetApp ONTAP controllers and is integrated into Element, which runs on NetApp HCI and SolidFire clusters. The logic to control SnapMirror resides in ONTAP software; therefore, all SnapMirror relationships must involve at least one ONTAP system to perform the coordination work. Users manage relationships between Element and ONTAP clusters primarily through the Element UI;...
◦ In a cluster where each node is in a separate chassis, these two levels are functionally identical. You can manually enable protection domain monitoring using the NetApp Element Configuration extension point in the NetApp Element Plug-in for vCenter Server. You can select a protection domain threshold based on node or chassis domains.
NetApp Hybrid Cloud Control. To find out which cluster is the authoritative cluster, use the management node API: GET /mnode/about
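The lookup above can be sketched as follows. The token_url field is part of the /mnode/about response in recent management services versions, and its host is the MVIP of the authoritative cluster; the sample payload below is illustrative, not captured from a live system:

```python
import json
from urllib.parse import urlparse

# Illustrative GET /mnode/about response (field names follow the mnode API,
# but verify against your management services version)
sample_about = json.loads("""
{
  "mnode_bundle_version": "2.14",
  "token_url": "https://10.117.30.50/auth/connect/token"
}
""")

# The host in token_url is the MVIP of the authoritative storage cluster
authoritative_mvip = urlparse(sample_about["token_url"]).hostname
print(authoritative_mvip)  # 10.117.30.50
```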
• Two-node storage clusters are best suited for small-scale deployments with workloads that do not require large capacity or high performance.
• In addition to two storage nodes, a two-node storage cluster also includes two NetApp HCI Witness Nodes.
Nodes are hardware or virtual resources that are grouped into a cluster to provide block storage and compute capabilities. NetApp HCI and Element software define various node roles for a cluster. The four node roles are management node, storage node, compute node, and NetApp HCI Witness Node.
Witness Nodes NetApp HCI Witness Nodes are virtual machines that run on compute nodes in parallel with an Element software-based storage cluster. Witness Nodes do not host slice or block services. A Witness Node enables storage cluster availability in the event of a storage node failure. You can manage and upgrade Witness Nodes in the same way as other storage nodes.
Fibre Channel clients. The NetApp Element Plug-in for vCenter Server enables you to create, view, edit, delete, clone, back up, or restore volumes for user accounts. You can also manage each volume on a cluster, and add or remove volumes in volume access groups.
Find more information • Manage volumes • NetApp Element Plug-in for vCenter Server • SolidFire and Element Software Documentation Center Volume access groups A volume access group is a collection of volumes that users can access using either iSCSI or Fibre Channel initiators.
Compute Node) NetApp HCI and ONTAP Select licensing If you were provided a version of ONTAP Select for use in conjunction with a purchased NetApp HCI system, the following additional limitations apply: • The ONTAP Select license, which is bundled with a NetApp HCI system sale, may only be used in conjunction with NetApp HCI compute nodes.
If you exceed these tested maximums, you might experience issues with NetApp Hybrid Cloud Control, such as a slower user interface and API responses or functionality being unavailable. If you engage NetApp for product support with NetApp Hybrid Cloud Control in environments that are configured beyond the configuration maximums, NetApp Support will ask that you change the configuration to be within the documented configuration maximums.
Multi-factor authentication (MFA) enables you to require users to present multiple types of evidence to authenticate with the NetApp Element web UI or storage node UI upon login. You can configure Element to accept only multi-factor authentication for logins integrating with your existing user management system and identity provider.
Performance and Quality of Service A SolidFire storage cluster has the ability to provide Quality of Service (QoS) parameters on a per-volume basis. You can guarantee cluster performance measured in inputs and outputs per second (IOPS) using three configurable parameters that define QoS: Min IOPS, Max IOPS, and Burst IOPS.
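To make the relationship between the three parameters concrete, here is a hedged sketch that builds (but does not send) an Element JSON-RPC ModifyVolume request; the method name and qos field names follow the Element API, while the volume ID and IOPS values are invented for illustration:

```python
import json

def modify_volume_qos(volume_id, min_iops, max_iops, burst_iops):
    """Build an Element JSON-RPC request body that sets per-volume QoS.

    Element expects minIOPS <= maxIOPS <= burstIOPS, so validate up front.
    """
    if not (min_iops <= max_iops <= burst_iops):
        raise ValueError("expected minIOPS <= maxIOPS <= burstIOPS")
    return {
        "method": "ModifyVolume",
        "params": {
            "volumeID": volume_id,
            "qos": {
                "minIOPS": min_iops,
                "maxIOPS": max_iops,
                "burstIOPS": burst_iops,
            },
        },
        "id": 1,
    }

# Illustrative values: guarantee 500 IOPS, cap at 15,000, allow bursts to 20,000
payload = modify_volume_qos(5, 500, 15000, 20000)
print(json.dumps(payload, indent=2))
```

You would POST a body like this to https://<Cluster_MVIP>/json-rpc/<version> with cluster administrator credentials; the Hybrid Cloud Control and vCenter Plug-in UIs drive the same three parameters.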
sustained.

QoS value limits
Here are the possible minimum and maximum values for QoS.

Parameters | Min value | Default | 4KB | 8KB | 16KB | 262KB
Min IOPS | 50 | 50 | 15,000 | 9,375* | 5,556* | 385*
Max IOPS | 100 | 15,000 | 200,000** | 125,000 | 74,074 | 5,128
Burst IOPS | 100 | 15,000 | 200,000** | 125,000 | ...
Custom QoS settings override QoS policy values for a volume's QoS settings. The selected cluster must be running Element 10.0 or later to use QoS policies; otherwise, QoS policy functions are not available.

Find more information
• NetApp Element Plug-in for vCenter Server
• NetApp HCI Resources page...
Ensure that you implement the following requirements and recommendations before you begin deployment. Before you receive your NetApp HCI hardware, ensure that you complete the checklist items in the pre- deployment workbook from NetApp Professional Services. This document contains a comprehensive list of tasks you need to complete to prepare your network and environment for a successful NetApp HCI deployment.
The NetApp Hybrid Cloud Control web UI and API download software packages from the NetApp online software repository, which uses JFrog Bintray as a distribution hub and Akamai Technologies for file hosting. Because of this, some URLs or IP addresses might resolve to other URLs or IP addresses based on the content delivery network.
Source | Destination | Port | Description
Compute node BMC/IPMI | Management node | 623 UDP | Remote Management Control Protocol (RMCP) port. Required for NetApp Hybrid Cloud Control compute firmware upgrades.
Management node | Compute node BMC/IPMI | 139 | NetApp Hybrid Cloud Control API communication
Management node | ...
Source | Destination | Port | Description
Management node | Baseboard management controller (BMC) | | Hardware monitoring and inventory connection (Redfish and IPMI commands)
Management node | 23.32.54.122, 216.240.21.15 | | Element software upgrades
Management node | Witness Node | 9442 | Per-node configuration API service
Management node | vCenter Server | 9443 | vCenter Plug-in registration.
| Compute node SIP | | Compute node API, configuration and validation, and access to software inventory
System administrator PC | Storage node MIP (NetApp HCI only) | | Landing page of NetApp Deployment Engine
System administrator PC | Management node | | HTTPS UI access to management node...
See your switch documentation for specific instructions on implementing each of the following requirements for your environment. A NetApp HCI deployment requires at least three network segments, one for each of the following types of traffic: • Management •...
1/10GbE network via two Cat 5e/6 cables (one additional Cat 5e/6 cable is optional for out-of-band management). • Ensure the network cables you use to connect the NetApp HCI system to your network are long enough to comfortably reach your switches.
Number of IP addresses needed per NetApp HCI deployment The NetApp HCI storage network and management network should each use separate contiguous ranges of IP addresses. Use the following table to determine how many IP addresses you need for your deployment:...
NetApp HCI requires a minimum of three network segments: management, storage, and virtualization traffic (which includes virtual machines and VMware vMotion traffic). You can also separate virtual machine and vMotion traffic. These network segments usually exist as logically separated VLANs in the NetApp HCI network infrastructure.
You configure virtual machine networks using vCenter. The default virtual machine network (port group "VM_Network") in NetApp HCI deployments is configured without a VLAN ID. If you plan to use multiple tagged virtual machine networks (VLAN IDs 200 and 201 in the preceding example), ensure you include them in the initial network planning.
Distributed Switches that require VMware vSphere Enterprise Plus licensing. NetApp HCI documentation uses letters to refer to network ports on the back panel of H-series nodes. Here are the network ports and locations on the H410C compute node:
H410S storage nodes. All switch ports in this example share the same configuration. Example switch commands You can use the following example commands to configure all switch ports used for NetApp HCI nodes. These commands are based on a Cisco configuration, but might require only small changes to apply to Mellanox switches.
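As a hedged illustration only (the interface name, VLAN IDs, MTU, and description are placeholders, not NetApp's published example commands), a Cisco-style trunk configuration for one node port might look like:

```
! Illustrative only - adjust interfaces, VLANs, and MTU to your environment
interface Ethernet1/1
  description HCI node port
  switchport mode trunk
  switchport trunk native vlan 100
  switchport trunk allowed vlan 100,105,200,201
  mtu 9216
  spanning-tree port type edge trunk
  no shutdown
```

Repeat an equivalent configuration on every switch port connected to NetApp HCI nodes, keeping the configuration identical across ports.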
VLAN tagging. You can use this configuration with vSphere Standard Switches or vSphere Distributed Switches (which require VMware vSphere Enterprise Plus licensing). NetApp HCI documentation uses letters to refer to network ports on the back panel of H-series nodes. Here are the network ports and locations on the H410C compute node:...
Example switch commands You can use the following example commands to configure all switch ports used for NetApp HCI nodes. These commands are based on a Cisco configuration, but might require only small changes to apply to Mellanox switches. See your switch documentation for the specific commands you need to implement this configuration.
Here are the network ports and locations on the H410C compute node:
Here are the network ports and locations on the H410S storage node:
Here are the network ports and locations on the H610S storage node:

VLAN configuration for H410C, H410S, and H610S nodes
This topology option uses the following VLAN configuration on H410C, H410S, and H610S nodes:

Node ports used | Network name
Example switch commands You can use the following example switch commands to configure switch ports used for the NetApp HCI nodes. These commands are based on a Cisco configuration, but might require only minimal changes to apply to Mellanox switches.
Pointer (PTR) record and one Address (A) record for vCenter Server on any DNS servers in use before deployment. • If you are deploying NetApp HCI with a new vSphere installation using only IP addresses, you do not need to create new DNS records for vCenter.
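When DNS is used, the PTR and A records described above might look like the following BIND-style sketch (the zone names and addresses are hypothetical, chosen from the documentation IP range):

```
; forward zone (example.com)
vcenter      IN  A    192.0.2.10

; reverse zone (2.0.192.in-addr.arpa)
10           IN  PTR  vcenter.example.com.
```

Create both records on every DNS server your deployment will use before you run the NetApp Deployment Engine.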
Environmental requirements Ensure that the power for the rack used to install NetApp HCI is supplied by AC power outlets, and that your datacenter provides adequate cooling for the size of your NetApp HCI installation. For detailed capabilities of each component of NetApp HCI, see the NetApp HCI datasheet.
Release Notes for your NetApp HCI version. When the NetApp HCI installation process installs Witness Nodes, a virtual machine template is stored in VMware vCenter that you can use to redeploy a Witness Node in case it ...
Monitor or upgrade NetApp HCI with the Hybrid Cloud Control

Prepare for installation
Before you begin the installation, complete the NetApp HCI Installation Discovery Workbook pre-flight checklist sent to you prior to receiving the hardware.

Prepare the network and installation sites
Here is a simplified NetApp HCI network topology for installation:
This is the simplified network topology for a single storage node and single compute node.
• NDE requires separate networks for management, iSCSI, and vMotion that are preconfigured on the switch network.

Validate network readiness with NetApp Active IQ Config Advisor
To ensure network readiness for NetApp HCI, install NetApp Active IQ Config Advisor 5.8.1 or later. This network validation tool is located with other NetApp Support Tools.
Work with your NetApp team Your NetApp team uses the NetApp Active IQ Config Advisor report and the Discovery Workbook to validate that your network environment is ready. Install NetApp HCI hardware NetApp HCI can be installed in different configurations: •...
Launch the NDE UI NetApp HCI uses a storage node management network IPv4 address for initial access to the NDE. As a best practice, connect from the first storage node. Prerequisites • You already assigned the initial storage node management network IP address manually or by using DHCP.
(optional). 10. Record node serial numbers in the NetApp HCI Installation Discovery Workbook. 11. Specify a VLAN ID for the vMotion Network and any network that requires VLAN tagging. See the NetApp HCI Installation Discovery Workbook. 12. Download your configuration as a .CSV file.
Monitor or upgrade NetApp HCI with the Hybrid Cloud Control You can optionally use the NetApp HCI Hybrid Cloud Control to monitor, upgrade, or expand your system. You log in to NetApp Hybrid Cloud Control by browsing to the IP address of the management node.
1. Open a web browser and browse to the IP address of the management node. For example: https://<ManagementNodeIP> 2. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI storage cluster administrator credentials. The NetApp Hybrid Cloud Control interface appears.
H410C and H410S

H610C and H615C
The terms "node" and "chassis" are used interchangeably in the case of H610C and H615C, because node and chassis are not separate components, unlike in the case of a 2U, four-node chassis.
H610S
The terms "node" and "chassis" are used interchangeably in the case of H610S, because node and chassis are not separate components, unlike in the case of a 2U, four-node chassis.
Prepare for installation In preparation for installation, inventory the hardware that was shipped to you, and contact NetApp Support if any of the items are missing.
Ensure that you have the following items at your installation location:
• Rack space for the system.

Node type | Rack space
H410C and H410S nodes | Two rack units (2U)
H610C node | Two rack units (2U)
H615C and H610S nodes | One rack unit (1U)

• SFP28/SFP+ direct-attach cables or transceivers
•...
1. Align the front of the rail with the holes on the front post of the rack. 2. Push the hooks on the front of the rail into the holes on the front post of the rack and then down, until the spring-loaded pegs snap into the rack holes.
You install the H410C compute node and H410S storage node in a 2U, four-node chassis. For H610C, H615C, and H610S, install the chassis/node directly onto the rails in the rack. Starting with NetApp HCI 1.8, you can set up a storage cluster with two or three storage nodes. ...
2. Install drives for H410S storage nodes. H610C node/chassis In the case of H610C, the terms "node" and "chassis" are used interchangeably because node and chassis are not separate components, unlike in the case of the 2U, four-node chassis. Here is an illustration for installing the node/chassis in the rack: H610S and H615C node/chassis In the case of H615C and H610S, the terms "node"...
Install the switches
If you want to use Mellanox SN2010, SN2100, or SN2700 switches in your NetApp HCI installation, follow the instructions provided here to install and cable the switches:
• Mellanox hardware user manual
• TR-4836: NetApp HCI with Mellanox SN2100 and SN2700 Switch Cabling Guide (login required)
For ports D and E, connect two SFP28/SFP+ cables or transceivers for shared management, virtual machines, and storage connectivity. (Optional, recommended) Connect a CAT5e cable in the IPMI port for out-of-band management connectivity. Here is the six-cable configuration: For ports A and B, connect two CAT5e or higher cables in ports A and B for management connectivity. For ports C and F, connect two SFP28/SFP+ cables or transceivers for virtual machine connectivity.
For ports A and B, connect two CAT5e or higher cables in ports A and B for management connectivity. For ports C and D, connect two SFP28/SFP+ cables or transceivers for storage connectivity. (Optional, recommended) Connect a CAT5e cable in the IPMI port for out-of-band management connectivity.
H615C compute node Here is the cabling for the H615C node: H615C nodes are deployed only in the two-cable configuration. Ensure that all the VLANs are present on ports A and B. For ports A and B, connect the node to a 10/25GbE network using two SFP28/SFP+ cables. (Optional, recommended) Connect the node to a 1GbE network using an RJ45 connector in the IPMI port.
Expand an existing NetApp HCI installation

New NetApp HCI installation
Steps
1. Configure an IPv4 address on the management network (Bond1G) on one NetApp HCI storage node. If you are using DHCP on the management network, you can connect to the DHCP-acquired IPv4 address of the storage system.
2. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI storage cluster administrator credentials. 3. Follow the steps in the wizard to add storage and/or compute nodes to your NetApp HCI installation. To add H410C compute nodes, the existing installation must run NetApp HCI 1.4 or later.
Install Active IQ Config Advisor Download and install Active IQ Config Advisor on a PC that has access to the NetApp HCI networks. Steps 1. In a web browser, select Tools from the NetApp Support menu, search for Active IQ Config Advisor, and download the tool.
6. Select Solution Based in the Collection Type drop-down menu. 7. Select NetApp HCI Pre Deployment in the Profile drop-down menu. 8. For each type of device in the Type column, select the number of that type of device in your NetApp HCI network in the Actions drop-down menu.
Three rows appear, one for each Cisco switch you identified. If you are using Mellanox switches and NetApp Professional Services is configuring them as part of deployment, you do not need to provide switch information. 9. For any switches that you identified, enter the management IP address and administrator credentials.
Manually assign the IPMI port IP address Dynamic Host Configuration Protocol (DHCP) is enabled by default for the IPMI port of each NetApp HCI node. If your IPMI network does not use DHCP, you can manually assign a static IPv4 address to the IPMI port.
11. Connect one end of an Ethernet cable to the IPMI port and the other end to a switch. The IPMI port for this node is ready to use. 12. Repeat this procedure for any other NetApp HCI nodes with IPMI ports that are not configured. Change the default IPMI password for H410C and H410S nodes You should change the default password for the IPMI administrator account on each compute and storage node as soon as you configure the IPMI network port.
8. Enter a new, strong password in the Password and Confirm Password fields. 9. Click Save at the bottom of the page. 10. Repeat this procedure for any other NetApp HCI H610C, H615C, or H610S nodes with default IPMI passwords.
Access the NetApp Deployment Engine Access the NetApp Deployment Engine To deploy NetApp HCI, you need to access the NetApp Deployment Engine on one of the NetApp H-Series storage nodes via the IPv4 address assigned to the Bond1G interface, which is the logical interface that combines ports A and B for storage nodes. This storage node becomes the controlling storage node for the deployment process.
Steps 1. Plug a KVM into the back of one of the NetApp HCI storage nodes (this node will become the controlling storage node). 2. Configure the IP address, subnet mask, and gateway address for Bond1G in the user interface. You can also configure a VLAN ID for the Bond1G network if needed.
This configuration is not recommended. Steps 1. Plug a KVM into the back of one of the NetApp HCI storage nodes (this node will become the controlling storage node). 2. Configure the IP address, subnet mask, and gateway address for Bond1G and Bond10G in the user interface.
You can install and configure a new vSphere deployment, which also installs the NetApp Element Plug-in for vCenter Server, or you can join and extend an existing vSphere deployment. Be aware of the following caveats when you use the NetApp Deployment Engine to install a new vSphere deployment:...
NetApp HCI compute resources, will fail. Ensure that the NetApp HCI cluster is directly under the datacenter in the vSphere web client inventory tree, and is not stored in a folder. See the NetApp Knowledgebase article for more information.
• Obtain the network details and administrator credentials for your existing vSphere deployment. About this task If you join multiple vCenter Server systems that are connected using vCenter Linked Mode, NetApp HCI only recognizes one of the vCenter Server systems.
• NetApp Hybrid Cloud Control (HCC) or Element UI: To log in to NetApp HCC or the Element user interface upon successful deployment, use the user name and password specified in this deployment step.
• VMware vCenter: To log in to vCenter (if installed as part of deployment), use the user name with the suffix @vsphere.local...
When deploying or expanding NetApp HCI, you can mix nodes with different reported levels of encryption, but NetApp HCI only supports the more basic form of encryption in this situation. For example, if you mix a storage node that is FIPS encryption capable with nodes that only support SED encryption, SED encryption is supported with this configuration, but FIPS drive encryption is not.
GPU-enabled compute node, CPU-only compute nodes become unselectable, and vice versa. • The software version running on the compute node must match the major and minor version of the NetApp Deployment Engine hosting the deployment. If this is not the case, you need to reimage the compute node using the RTFI process.
SolidFire and Element Software Documentation Center Configure network settings NetApp HCI provides a network settings page with an easy form to simplify network configuration. When you complete the easy form, NetApp HCI automatically populates much of the rest of the information on the network settings page. You can then enter final network settings and verify that the network configuration is correct before proceeding.
configured VLAN IDs on compute and storage nodes so they are discoverable by the NetApp Deployment Engine, ensure you use the correct VLANs when configuring network settings in the NetApp Deployment Engine. If you are deploying using a two-node or three-node storage cluster, you can complete IP address information for Witness Nodes on the Network Settings page after using the easy form.
6. Click Apply to Network Settings. 7. Click Yes to confirm. This populates the Network Settings page with the settings you entered in the easy form. NetApp HCI validates the IP addresses you entered. You can disable this validation with the Disable Live Network Validation button.
Click Continue until you reach the Review page. Your previous settings are saved on each page. d. Repeat steps 2 and 3 to make any other necessary changes. 4. If you do not want to send cluster statistics and support information to NetApp-hosted SolidFire Active IQ servers, clear the final checkbox.
Post-deployment tasks Depending on your choices during the deployment process, you need to complete some final tasks before your NetApp HCI system is ready for production use, such as updating firmware and drivers and making any needed final configuration changes.
• Change the vmnic interface failover order for any additional port groups you have added

H300E, H500E, H700E, and H410C compute nodes
NetApp HCI expects the following network configuration for H300E, H500E, H700E, and H410C nodes. The following is a six-interface configuration with VMware vSphere Distributed Switching (VDS). This configuration is only supported when used with VMware vSphere Distributed Switches, and requires VMware vSphere Enterprise Plus licensing.
By default, the service periodically polls the drives in your compute nodes. You should disable this service on all compute nodes after you deploy NetApp HCI. Steps 1. Using SSH or a local console session, log in to VMware ESXi on the compute node using root credentials.
VMware KB article 2133286 Keep VMware vSphere up to date After deploying NetApp HCI, you should use VMware vSphere Lifecycle Manager to apply the latest security patches for the version of VMware vSphere used with NetApp HCI. Use the Interoperability Matrix Tool to ensure that all versions of software are compatible.
3. Extract the driver package on your computer. The resulting .VIB file is the uncompressed driver file.
4. Copy the .VIB driver file from your computer to ESXi running on the compute node. The following example commands for each version assume that the driver is located in the $HOME/NVIDIA/ESX6.x/ directory on the management host.
SolidFire and Element Software Documentation Center Configure Fully Qualified Domain Name web UI access NetApp HCI with Element 12.2 or later enables you to access storage cluster web interfaces using the Fully Qualified Domain Name (FQDN). If you want to use the FQDN...
CreateClusterInterfacePreference API method, and insert the cluster MVIP FQDN for the preference value:
◦ Name: mvip_fqdn
◦ Value: <Fully Qualified Domain Name for the Cluster MVIP>
For example, the FQDN here is storagecluster.my.org:
https://<Cluster_MVIP>/json-rpc/12.2?method=CreateClusterInterfacePreference&name=mvip_fqdn&value=storagecluster.my.org
3. Change the management node settings using the REST API on the management node: a.
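Step 2's CreateClusterInterfacePreference call can also be scripted. This sketch only assembles the URL described above rather than issuing the request; the cluster MVIP is a placeholder, and storagecluster.my.org is the example FQDN from the text:

```python
from urllib.parse import urlencode

cluster_mvip = "10.117.30.50"   # placeholder MVIP for your storage cluster
fqdn = "storagecluster.my.org"  # example FQDN from the text

# Assemble the JSON-RPC call as a query string, exactly as shown above
query = urlencode({
    "method": "CreateClusterInterfacePreference",
    "name": "mvip_fqdn",
    "value": fqdn,
})
url = f"https://{cluster_mvip}/json-rpc/12.2?{query}"
print(url)
```

Issue the assembled request with cluster administrator credentials (for example, with curl or an HTTP client) against your cluster's MVIP.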
1. Open a web browser and browse to the IP address of the management node. For example: https://<ManagementNodeIP> 2. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI storage cluster administrator credentials. The NetApp Hybrid Cloud Control interface appears.
NetApp Hybrid Cloud Control will display a red error banner if it cannot communicate with the associated VMware vCenter instance in the NetApp HCI installation. VMware vCenter will display ESXi account lockout messages for individual ESXi hosts as a result of NetApp Hybrid Cloud Control using outdated credentials.
NetApp HCI. 3. Click Authorize or any lock icon and complete the following: a. Enter the NetApp SolidFire cluster administrative user name and password. b. Enter the client ID as mnode-client. c. Click Authorize to begin a session.
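Steps a through c amount to an OAuth2 password-grant request against the authoritative cluster's token endpoint. Here is a hedged sketch of the form body that the Authorize step submits; the grant type and client ID follow the steps above, while the credentials are placeholders:

```python
from urllib.parse import urlencode

# Form body for the token request (credentials are placeholders)
token_request = urlencode({
    "client_id": "mnode-client",   # client ID from step b
    "grant_type": "password",
    "username": "admin",           # storage cluster admin (placeholder)
    "password": "examplepass",     # placeholder
})
print(token_request)
```

You would POST this body to the authoritative cluster's token endpoint (the token_url reported by GET /mnode/about) and pass the returned bearer token on subsequent API calls.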
NetApp Element Plug-in for vCenter Server
• NetApp HCI Resources Page

Manage NetApp HCI storage

Manage NetApp HCI storage overview
With NetApp HCI, you can manage these storage assets by using NetApp Hybrid Cloud Control:
• Create and manage user accounts
•...
To use LDAP for any user account, you must first enable LDAP. Steps 1. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI or Element storage cluster administrator credentials. 2. From the Dashboard, click on the top right Options icon and select User Management.
Steps 1. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI or Element storage cluster administrator credentials. 2. From the Dashboard, click on the top right Options icon and select User Management. 3. Select Create User. 4. Select the authentication type of cluster or LDAP.
You cannot delete the primary administrator user account for the authoritative cluster. Steps 1. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI or Element storage cluster administrator credentials. 2. From the Dashboard, click on the icon in the top right and select User Management.
Deleting or locking an account associated with the management node results in an inaccessible management node. Steps 1. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI or Element storage cluster administrator credentials. 2. From the Dashboard, select Storage > Volumes.
• Remove a storage cluster Add a storage cluster You can add a storage cluster to the management node assets inventory using NetApp Hybrid Cloud Control. This allows you to manage and monitor the cluster using the HCC UI. Steps 1.
7. If you are adding Element eSDS clusters, enter or upload your SSH private key and SSH user account. Confirm storage cluster status You can monitor the connection status of storage clusters assets using the NetApp Hybrid Cloud Control UI. Steps 1.
You can create a storage volume using NetApp Hybrid Cloud Control. Steps 1. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI or Element storage cluster administrator credentials. 2. From the Dashboard, expand the name of your storage cluster on the left navigation menu.
You can apply a QoS policy to an existing storage volume by using NetApp Hybrid Cloud Control. Steps 1. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI or Element storage cluster administrator credentials. 2. From the Dashboard, expand the name of your storage cluster on the left navigation menu.
6. Select Save. Edit a volume Using NetApp Hybrid Cloud Control, you can edit volume attributes such as QoS values, volume size, and the unit of measurement by which byte values are calculated. You can also modify account access for replication usage or to restrict access to the volume.
Cloned volumes do not inherit volume access group membership from the source volume. Steps 1. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI or Element storage cluster...
administrator credentials. 2. From the Dashboard, expand the name of your storage cluster on the left navigation menu. 3. Select the Volumes > Overview tab. 4. Select each volume you want to clone and click the Clone button that appears. 5.
Steps 1. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI or Element storage cluster administrator credentials. 2. From the Dashboard, expand the name of your storage cluster on the left navigation menu.
1. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI or Element storage cluster administrator credentials. 2. From the Dashboard, expand the name of your storage cluster on the left navigation menu. 3. Select Volumes > Overview.
9. Select Create Access Group. Edit a volume access group You can edit the properties of an existing volume access group by using NetApp Hybrid Cloud Control. You can make changes to the name, associated initiators, or associated volumes of an access group.
What you’ll need • You have cluster administrator credentials. • You have upgraded your management services to at least version 2.17. NetApp Hybrid Cloud Control initiator management is not available in earlier service bundle versions. Options •...
Steps 1. Log in to NetApp Hybrid Cloud Control by providing the Element storage cluster administrator credentials. 2. From the Dashboard, expand the name of your storage cluster on the left navigation menu.
1. Log in to NetApp Hybrid Cloud Control by providing the Element storage cluster administrator credentials. 2. From the Dashboard, expand the name of your storage cluster on the left navigation menu. 3. Select Volumes. 4. Select the Initiators tab.
This task describes how to assign a QoS policy to an individual volume by changing its settings. The latest version of NetApp Hybrid Cloud Control does not have a bulk assign option for more than one volume. Until the functionality to bulk assign is provided in a future release, you can use the Element web UI or vCenter Plug-in UI to bulk assign QoS policies.
QoS policy. Steps 1. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI or Element storage cluster administrator credentials. 2. From the Dashboard, expand the menu for your storage cluster. 3. Select Storage > Volumes.
QoS values previously defined by the policy but as individual volume QoS. Any association with the deleted QoS policy is removed. Steps 1. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI or Element storage cluster administrator credentials. 2. From the Dashboard, expand the menu for your storage cluster.
Perform tasks with the management node REST APIs: • Management node REST API UI overview Disable or enable remote SSH functionality or start a remote support tunnel session with NetApp Support to help you troubleshoot: • Enable remote NetApp Support connections •...
• (Management node 12.0 and 12.2 with proxy server) You have updated NetApp Hybrid Cloud Control to management services version 2.16 before configuring a proxy server.

About this task

The Element 12.2 management node is an optional upgrade. It is not required for existing deployments.
2. If you downloaded the OVA, follow these steps: a. Deploy the OVA. b. If your storage cluster is on a separate subnet from your management node (eth0) and you want to use persistent volumes, add a second network interface controller (NIC) to the VM on the storage subnet (for example, eth1) or ensure that the management network can route to the storage network.
sudo service ntpd stop

c. Edit the NTP configuration file /etc/ntp.conf:
i. Comment out the default servers (for example, server 0.gentoo.pool.ntp.org) by adding a # in front of each.
ii. Add a new line for each default time server you want to add. The default time servers must be the same NTP servers used on the storage cluster that you will use in a later step.
iii. In vSphere, verify that the Synchronize guest time with host box is unchecked in the VM options. Do not enable this option if you make future changes to the VM.

Set up the management node

1. Configure and run the management node setup command: ...
Your installation has a base asset configuration that was created during installation or upgrade. 2. (NetApp HCI only) Locate the hardware tag for your compute node in vSphere: a. Select the host in the vSphere Web Client navigator.
3. Add a vCenter controller asset for NetApp HCI monitoring (NetApp HCI installations only) and Hybrid Cloud Control (for all installations) to the management node known assets: a. Access the mnode service API UI on the management node by entering the management node IP...
Before you begin • You know your eth0 IP address. • Your cluster version is running NetApp Element software 11.3 or later. • You have deployed a management node 11.3 or later. Configuration options Choose the option that is relevant for your environment: •...
You can deploy a new OVA and run a redeploy script to pull configuration data from a previously installed management node running version 11.3 and later. What you’ll need • Your previous management node was running NetApp Element software version 11.3 or later with persistent volumes functionality engaged.
Download ISO or OVA and deploy the VM Configure the network Configure time sync Configure the management node Download ISO or OVA and deploy the VM 1. Download the OVA or ISO for your installation from the NetApp Support Site: Element software: https://mysupport.netapp.com/site/products/all/details/element-software/downloads-tab NetApp HCI: https://mysupport.netapp.com/site/products/all/details/netapp-hci/downloads-tab a.
controller (NIC) to the VM on the storage subnet (eth1) or ensure that the management network can route to the storage network. Do not power on the virtual machine prior to the step indicating to do so later in this procedure.
vi /etc/ntp.conf

#server 0.gentoo.pool.ntp.org
#server 1.gentoo.pool.ntp.org
#server 2.gentoo.pool.ntp.org
#server 3.gentoo.pool.ntp.org
server <insert the hostname or IP address of the default time server>

iii. Save the configuration file when complete.
d. Force an NTP sync with the newly added server.

sudo ntpd -gq

e.
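The manual ntp.conf edit above can also be scripted. The following is a minimal sketch, not product code: it assumes the stock Gentoo pool-server defaults shown above, comments them out, and appends the cluster's NTP server.

```python
def retarget_ntp_conf(conf_text: str, new_server: str) -> str:
    """Mirror the manual /etc/ntp.conf edit described above.

    Each stock 'server N.gentoo.pool.ntp.org' line gets a leading '#',
    every other line is kept as-is, and one new 'server <host>' line is
    appended at the end.
    """
    out = []
    for line in conf_text.splitlines():
        if line.strip().startswith("server ") and "pool.ntp.org" in line:
            out.append("#" + line)  # comment out a default pool server
        else:
            out.append(line)        # leave unrelated directives untouched
    out.append(f"server {new_server}")
    return "\n".join(out) + "\n"

# Example: retarget the stock defaults to a (hypothetical) cluster time server.
conf = "server 0.gentoo.pool.ntp.org\nserver 1.gentoo.pool.ntp.org\n"
print(retarget_ntp_conf(conf, "time.example.com"))
```

The hostname time.example.com is a placeholder; substitute the same NTP server that the storage cluster uses, as the procedure requires.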
mkdir -p /sf/etc/mnode/mnode-archive

2. Download the management services bundle (version 2.15.28 or later) that was previously installed on the existing management node and save it in the /sf/etc/mnode/ directory.
3. Extract the downloaded bundle using the following command, replacing the value in [ ] brackets (including the brackets) with the name of the bundle file:

tar -C /sf/etc/mnode -xvf /sf/etc/mnode/[management services bundle file]
If you access Element or NetApp HCI web interfaces (such as the management node or NetApp Hybrid Cloud Control) using the Fully Qualified Domain Name (FQDN) of the system, reconfigure authentication for the management node. If you had previously disabled SSH functionality on the management node, you need to ...
2. Enter the management node user name and password when prompted. Access the management node REST API UI From the REST API UI, you can access a menu of service-related APIs that control management services on the management node. Steps 1.
2. Click Authorize or any lock icon and enter cluster admin credentials for permissions to use APIs.

Find more information

• Enable the Active IQ collector service for SolidFire all-flash storage
• NetApp Element Plug-in for vCenter Server
• NetApp HCI Resources Page

Work with the management node UI

Management node UI overview

With the management node UI (https://<managementNodeIP>:442), you can make...
You can configure settings to monitor alerts on your NetApp HCI system. NetApp HCI alert monitoring forwards NetApp HCI storage cluster system alerts to vCenter Server, enabling you to view all alerts for NetApp HCI from the vSphere Web Client interface.
Controls the flow of support and monitoring data from VMware vCenter to NetApp SolidFire Active IQ. Options are the following: • Enabled: All vCenter alarms, NetApp HCI storage alarms, and support data are sent to NetApp SolidFire Active IQ. This enables NetApp to...
• Create and manage storage cluster assets • Remove an asset from the management node • Use the REST API to collect NetApp HCI logs • Verify management node OS and services versions • Getting logs from management services Find more information •...
REST API. What you’ll need • Ensure that your storage cluster version is running NetApp Element software 11.3 or later. • Ensure that you have deployed a management node running version 11.3 or later. Storage cluster asset management options Choose one of the following options: •...
6. From the code 200 response body, save the value in the "id" field, which you can find in the list of installations. This is the installation ID. For example:

"installations": [
  {
    "id": "1234a678-12ab-35dc-7b4a-1234a5b6a7ba",
    "name": "my-hci-installation",
    "_links": {
      "collection": "https://localhost/inventory/1/installations",
      "self": "https://localhost/inventory/1/installations/1234a678-12ab-35dc-7b4a-1234a5b6a7ba"
    }
  }
]
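Step 6 can also be done programmatically once you have the response body. The following is an illustrative sketch only (the helper name is ours, not part of the product); the response shape follows the documented example, with a top-level "installations" list whose entries each carry an "id" field:

```python
import json

def installation_ids(response_body: str) -> list:
    """Return the 'id' of each entry in an inventory /installations response."""
    data = json.loads(response_body)
    return [inst["id"] for inst in data.get("installations", [])]

# Sample body shaped like the documented example.
sample = """
{
  "installations": [
    {
      "id": "1234a678-12ab-35dc-7b4a-1234a5b6a7ba",
      "name": "my-hci-installation"
    }
  ]
}
"""
print(installation_ids(sample))  # ['1234a678-12ab-35dc-7b4a-1234a5b6a7ba']
```

This avoids copying the ID by hand when you script against the inventory service.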
https://[management node IP]/storage/1/ 8. Click Authorize or any lock icon and complete the following: a. Enter the cluster user name and password. b. Enter the client ID as mnode-client. c. Click Authorize to begin a session. d. Close the window. 9.
4. Click Try it out.
5. Enter the new storage cluster’s information in the following parameters in the Request body field:

{
  "installationId": "a1b2c34d-e56f-1a2b-c123-1ab2cd345d6e",
  "mvip": "10.0.0.1",
  "password": "admin",
  "userId": "admin"
}

Parameter        Type      Description
installationId   string    The installation in which to add the new storage cluster.
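The request body in step 5 is plain JSON, so it can be assembled with a short helper before pasting it into the REST API UI or sending it with an HTTP client. A sketch (the field names come from the example above; the helper itself and the sample values are ours):

```python
import json

def add_cluster_body(installation_id: str, mvip: str, user_id: str, password: str) -> str:
    """Build the POST /clusters request body shown in step 5."""
    return json.dumps(
        {
            "installationId": installation_id,
            "mvip": mvip,
            "password": password,
            "userId": user_id,
        },
        indent=2,
    )

# Example with the placeholder values from the documented request body.
print(add_cluster_body("a1b2c34d-e56f-1a2b-c123-1ab2cd345d6e", "10.0.0.1", "admin", "admin"))
```

Generating the body this way keeps credentials out of shell history and makes the payload easy to reuse across clusters.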
2. Click Authorize or any lock icon and complete the following: a. Enter the cluster user name and password. b. Enter the client ID as mnode-client. c. Click Authorize to begin a session. d. Close the window. 3. Click PUT /clusters/{storageId}. 4.
The command to configure a proxy server updates and then returns the current proxy settings for the management node. The proxy settings are used by Active IQ, the NetApp HCI monitoring service that is deployed by the NetApp Deployment Engine, and other Element software utilities that are installed on the management node, including the reverse support tunnel for NetApp Support.
REST API in the management node. What you’ll need • Your cluster is running NetApp Element software 11.3 or later. • You have deployed a management node running version 11.3 or later. Options •...
b. Select Try it out.
c. Select the status as Running.
d. Select Execute.

The services that are running on the management node are indicated in the response body.

Find more information
• NetApp Element Plug-in for vCenter Server
• NetApp HCI Resources Page...
REST API. You can pull logs from all public services or specify specific services and use query parameters to better define the return results. What you’ll need • Your cluster version is running NetApp Element software 11.3 or later. • You have deployed a management node running version 11.3 or later. Steps 1.
You can open a TCP port for an SSH reverse tunnel connection with NetApp Support. This connection enables NetApp Support to log in to your management node. If your management node is behind a proxy server, the following TCP ports are required in the sshd.config file:...
• Cluster administrator permissions: You have permissions as administrator on the storage cluster. • Element software: Your cluster is running NetApp Element software 11.3 or later. • Management node: You have deployed a management node running version 11.3 or later.
Use the following tasks to power off or power on your NetApp HCI system as required. You might need to power off your NetApp HCI system under a number of different circumstances, such as: • Scheduled outages...
• Firmware upgrades • Storage or compute resource expansion The following is an overview of the tasks you need to complete to power off a NetApp HCI system: • Power off all virtual machines except the VMware vCenter server (vCSA).
3. Navigate to Cluster > Nodes > Active, and record the node IDs for all of the active nodes in the cluster. 4. To power off the NetApp HCI storage cluster, open a web browser and use the following URL to invoke the...
2. When all the compute nodes are operational, log in to the ESXi host that was running the vCSA. 3. Log in to the compute host and verify that it sees all the NetApp HCI datastores. For a typical NetApp HCI...
1. Open a web browser and browse to the IP address of the management node. For example: https://[management node IP address] 2. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI storage cluster administrator credentials. 3. View the Hybrid Cloud Control Dashboard.
To see the most recent storage cluster data, use the Storage Clusters page, where polling occurs more frequently than on the Dashboard. Monitor compute resources Use the Compute pane to see your total NetApp H-series compute environment. You can monitor the number of compute clusters and total compute nodes.
To view cluster health, also look at the SolidFire Active IQ Dashboard. See Monitor performance, capacity, and cluster health in NetApp SolidFire Active IQ.

Steps
1. Click the RAW tab to see the total physical storage space used and available in your cluster.
Alternatively, consider expanding your system. See Expansion overview.
3. For further analysis and historical context, look at NetApp SolidFire Active IQ details.

Monitor storage performance

You can look at how much IOPS or throughput you can get out of a cluster without surpassing the useful performance of that resource by using the Storage Performance pane.
b. Throughput tab: Monitor patterns or spikes in throughput. Also monitor for continuously high throughput values, which might indicate that you are nearing the maximum useful performance of the resource. c. Utilization tab: Monitor the utilization of IOPS in relation to the total IOPS available summed up at the cluster level.
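Utilization on that tab is essentially consumed IOPS as a share of the cluster-wide IOPS available. A back-of-the-envelope sketch of that calculation (the numbers are illustrative, not product code):

```python
def iops_utilization(current_iops: float, max_cluster_iops: float) -> float:
    """Return IOPS utilization as a percentage of the cluster-wide maximum."""
    if max_cluster_iops <= 0:
        raise ValueError("max_cluster_iops must be positive")
    return 100.0 * current_iops / max_cluster_iops

# Example: 150,000 IOPS consumed on a cluster rated for 200,000 IOPS.
print(iops_utilization(150_000, 200_000))  # 75.0
```

Sustained values near 100% are the signal described above that the cluster is approaching its maximum useful performance.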
2. For further analysis, look at storage performance by using the NetApp Element Plug-in for vCenter Server. See Performance shown in the NetApp Element Plug-in for vCenter Server.

Monitor compute utilization

In addition to monitoring IOPS and throughput of your storage resources, you also might want to view the CPU and memory usage of your compute assets.
You can view both your storage and compute assets in your system and determine their IP addresses, names, and software versions. You can view storage information for your multiple node systems and any NetApp HCI Witness Nodes associated with two-node or three-node clusters.
BMC Connection Status column. An error message is displayed in this column only if the connection attempt fails for a compute node. To view the number of storage and compute resources, look at the NetApp Hybrid Cloud Control (HCC) Dashboard. See Monitor storage and compute resources with the HCC Dashboard.
1. Open a web browser and browse to the IP address of the management node. For example: https://[management node IP address] 2. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI storage cluster administrator credentials. 3. In the left navigation blue box, select the NetApp HCI installation.
Use the REST API to edit BMC information You can edit the stored BMC credentials using the NetApp Hybrid Cloud Control REST API. Steps 1. Locate the compute node hardware tag and BMC information: a.
"nodes": [ { "bmcDetails": { "bmcAddress": "10.117.1.111", "credentialsAvailable": false, "credentialsValidated": false "chassisSerialNumber": "221111019323", "chassisSlot": "C", "hardwareId": null, "hardwareTag": "00000000-0000-0000-0000-ac1f6ab4ecf6", "id": "8cd91e3c-1b1e-1111-b00a-4c9c4900b000", 2. Open the hardware service REST API UI on the management node: https://[management node IP]/hardware/2/ 3.
You can also see details on active and deleted volumes. With this view, you might first want to monitor the Used capacity column. You can access this information only if you have NetApp Hybrid Cloud Control administrative privileges. Steps 1. Open a web browser and browse to the IP address of the management node. For example:...
IP address] 2. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI storage cluster administrator credentials. 3. In the left navigation blue box, select the NetApp HCI installation. The Hybrid Cloud Control Dashboard appears. 4. In the left navigation, select the cluster and select Storage > Volumes.
You can obtain continually updated historical views of cluster-wide statistics. You can set up notifications to alert you about specified events, thresholds, or metrics on a cluster so that they can be addressed quickly. As part of your normal support contract, NetApp Support monitors this data and alerts you to potential system issues.
6. From the SolidFire Active IQ interface, verify that the NetApp HCI compute and storage nodes are reporting telemetry correctly to Active IQ: a. If you have more than one NetApp HCI installation, click Select a Cluster and choose the cluster from the list.
IP address] 2. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI or Element storage cluster administrator credentials. 3. From the Dashboard, click the menu on the upper right. 4. Select Collect Logs. The Collect Logs page appears. If you have collected logs before, you can download the existing log package, or begin a new log collection.
a. Click POST /bundle.
b. Click Try it out.
c. Change the values of the following parameters in the Request body field depending on which type of logs you need to collect and for what time range:

Parameter        Type          Description
modifiedSince    Date string   Only include logs modified after...
After execution, a download link appears in the response body area. e. Click Download file and save the resulting file to your computer. The log package is in a compressed UNIX .tgz file format. Find more information • NetApp Element Plug-in for vCenter Server • NetApp HCI Resources Page...
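The POST /bundle request body used in the log-collection steps above can also be assembled outside the UI. A sketch: only the modifiedSince parameter comes from the documented table; the helper itself and the choice of ISO-8601 timestamps are our assumptions.

```python
import json
from datetime import datetime, timedelta, timezone

def log_bundle_body(days_back: int) -> str:
    """Build a POST /bundle request body limiting collection to recent logs.

    Only 'modifiedSince' is taken from the documented parameter table; the
    full set of accepted fields is not reproduced here.
    """
    since = datetime.now(timezone.utc) - timedelta(days=days_back)
    return json.dumps({"modifiedSince": since.isoformat()}, indent=2)

# Example: restrict collection to logs modified in the last 2 days.
print(log_bundle_body(2))
```

Limiting the time range this way keeps the resulting .tgz package small when you only need logs from a recent incident.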
NetApp HCI system or SolidFire all-flash storage system. The System upgrade sequences content describes the tasks that are needed to complete a NetApp HCI system or SolidFire all-flash storage system upgrade. Ideally, these procedures are performed as part of the larger upgrade sequence and not in isolation.
higher, you can simply upgrade the management services to the latest version to perform Element upgrades using NetApp Hybrid Cloud Control. Follow the management node upgrade procedure for your scenario if you would like to upgrade the management node operating system for other reasons, such as security remediation.
You can keep your SolidFire Element storage system up-to-date after deployment by sequentially upgrading all NetApp storage components. These components include management services, HealthTools, NetApp Hybrid Cloud Control (HCC), Element software, management node, and (depending on your installation) the Element Plug-in for vCenter Server.
Update options You can update management services using the NetApp Hybrid Cloud Control (HCC) UI or the management node REST API: • Update management services using Hybrid Cloud Control (Recommended method) •...
After the upgrade begins, you can see the upgrade status on this page. During the upgrade, you might lose connection with NetApp Hybrid Cloud Control and have to log back in to see the results of the upgrade. Update management services using the management node API Users should ideally perform management services updates from NetApp Hybrid Cloud Control.
Power on the management node VM. • Your cluster version is running NetApp Element software 11.3 or later. • You have upgraded your management services to at least version 2.1.326. NetApp Hybrid Cloud Control upgrades are not available in earlier service bundles.
Update management services using the management node API for dark sites Users should ideally perform management services updates from NetApp Hybrid Cloud Control. You can however manually upload, extract, and deploy a service bundle update for management services to the management node using the REST API.
• To use HealthTools with dark sites, you need to do these additional steps: ◦ Download a JSON file from the NetApp Support Site on a computer that is not the management node and rename it to metadata.json. ◦ Have the management node up and running at the dark site.
• Your cluster version is running NetApp Element software 11.3 or later. Health check options You can run health checks using NetApp Hybrid Cloud Control (HCC) UI, HCC API, or the HealthTools suite: • Use NetApp Hybrid Cloud Control to run Element storage health checks prior to upgrading storage (Preferred method) •...
Storage health checks made by the service Use NetApp Hybrid Cloud Control to run Element storage health checks prior to upgrading storage Using NetApp Hybrid Cloud Control (HCC), you can verify that a storage cluster is ready to be upgraded. Steps 1.
e. Click Execute. f. From the response, copy the installation asset ID ("id"). g. From the REST API UI, click GET /installations/{id}. h. Click Try it out. i. Paste the installation asset ID into the id field. j. Click Execute. k.
downloaded during HealthTools upgrades to run successfully. About this task This procedure describes how to address upgrade checks that yield one of the following results: • Running the sfupgradecheck command runs successfully. Your cluster is upgrade ready. • Checks within the sfupgradecheck tool fail with an error message.
Test Description: Verify no pending nodes in cluster
More information: https://kb.netapp.com/support/s/article/ka11A0000008ltOQAQ/pendingnodes

check_cluster_faults:
Test Description: Report any cluster faults

check_root_disk_space:
Test Description: Verify node root directory has at least 12 GBs of available disk space
Passed node IDs: 1, 2, 3
More information: https://kb.netapp.com/support/s/article/ka11A0000008ltTQAQ/
Test Description: Verify node root directory has at least 12 GBs of available disk space
Severity: ERROR
Failed node IDs: 2
Remedy: Remove unneeded files from root drive
More information: https://kb.netapp.com/support/s/article/ka11A0000008ltTQAQ/SolidFire-Disk-space-error

check_pending_nodes:
Test Description: Verify no pending nodes in cluster
More information: https://kb.netapp.com/support/s/article/ka11A0000008ltOQAQ/pendingnodes
Unable to verify latest available version of healthtools. 2. Download a JSON file from the NetApp Support Site on a computer that is not the management node and rename it to metadata.json. 3. Run the following command: sfupgradecheck -l --metadata=<path-to-metadata-json>...
NetApp HCI Resources Page Upgrade Element software To upgrade NetApp Element software, you can use the NetApp Hybrid Cloud Control UI, REST API, or the HealthTools suite of tools. Certain operations are suppressed during an Element software upgrade, such as adding and removing nodes, adding and removing drives, and commands associated with initiators, volume access groups, and virtual networks, among others.
UI (https://[IP related to time skew. • System ports: If you are using NetApp Hybrid Cloud Control for upgrades, you have ensured that the necessary ports are open. See Network ports for more information.
1. Open a web browser and browse to the IP address of the management node: https://<ManagementNodeIP> 2. Log in to NetApp Hybrid Cloud Control by providing the storage cluster administrator credentials. 3. Click Upgrade near the top right of the interface.
Option Steps Your management node is within a dark site without external connectivity. 1. Click Browse to upload the upgrade package that you downloaded. 2. Wait for the upload to complete. A progress bar shows the status of the upload. The file upload will be lost if you 
Error An error has occurred during the upgrade. You can download the error log and send it to NetApp Support. After you resolve the error, you can return to the page, and click Resume. When you resume the upgrade,...
Option Steps Your management node has external connectivity. 1. Verify the repository connection: a. Open the management node REST API UI on the management node: https://[management node IP]/package-repository/1/ b. Click Authorize and complete the following: i. Enter the cluster user name and password.
Your management node is within a dark site without external connectivity. 1. Download the storage upgrade package to a device that is accessible to the management node: ◦ For NetApp HCI systems, go to the NetApp HCI software download page and download the latest storage node image.
2. Locate the storage cluster ID: a. Open the management node REST API UI on the management node: https://[management node IP]/inventory/1/ b. Click Authorize and complete the following: i. Enter the cluster user name and password. ii. Enter the client ID as mnode-client. iii.
"config": {}, "packageId": "884f14a4-5a2a-11e9-9088-6c0b84e211c4", "storageId": "884f14a4-5a2a-11e9-9088-6c0b84e211c4" g. Click Execute to initiate the upgrade. The response should indicate the state as initializing: "_links": { "collection": "https://localhost:442/storage/upgrades", "self": "https://localhost:442/storage/upgrades/3fa85f64-1111- 4562-b3fc-2c963f66abc1", "log": https://localhost:442/storage/upgrades/3fa85f64-1111-4562- b3fc-2c963f66abc1/log }, ...
"kb": "string", "description": "string", "remedy": "string", "severity": "string", "data": {}, "nodeID": 0 }, "taskId": "123f14a4-1a1a-11e9-7777-6c0b84e123b2", "dateCompleted": "2020-04-21T22:10:57.057Z", "dateCreated": "2020-04-21T22:10:57.057Z" h. Copy the upgrade ID ("upgradeId") that is part of the response. 4.
What happens if an upgrade fails using NetApp Hybrid Cloud Control If a drive or node fails during an upgrade, the Element UI will show cluster faults. The upgrade process does not proceed to the next node, and waits for the cluster faults to resolve. The progress bar in the UI shows that...
At this stage, clicking Pause in the UI will not work, because the upgrade waits for the cluster to be healthy. You will need to engage NetApp Support to assist with the failure investigation.
/tmp/solidfire-rtfi-sodium-11.0.0.345.iso
2018-10-01 16:52:15: Newer version of sfinstall available. This version: 2018.09.01.130, latest version: 2018.06.05.901. The latest version of the HealthTools can be downloaded from: https://mysupport.netapp.com/NOW/cgi-bin/software/ or rerun with --skip-version-check

See the following sample excerpt from a successful pre-stage operation: ...
flabv0004 ~ # sfinstall -u admin 10.117.0.87 solidfire-rtfi-sodium-patch3-11.3.0.14171.iso --stage 2019-04-03 13:19:58: sfinstall Release Version: 2019.01.01.49 Management Node Platform: Ember Revision: 26b042c3e15a Build date: 2019-03-12 18:45 2019-04-03 13:19:58: Checking connectivity to MVIP 10.117.0.87 2019-04-03 13:19:58: Checking connectivity to node 10.117.0.86 2019-04-03 13:19:58: Checking connectivity to node 10.117.0.87 2019-04-03 13:19:58: Successfully connected to cluster and all nodes 2019-04-03 13:20:00: Do you want to continue? ['Yes', 'No']: Yes 2019-04-03 13:20:55: Staging install pack on cluster nodes...
2018-10-01 16:52:15: Newer version of sfinstall available. This version: 2018.09.01.130, latest version: 2018.06.05.901. The latest version of the HealthTools can be downloaded from: https://mysupport.netapp.com/NOW/cgi-bin/software/ or rerun with --skip-version-check

See the following sample excerpt from a successful upgrade. Upgrade events can be used to monitor the progress of the upgrade.
(phase 2) are not required. Upgrade Element software at dark sites using HealthTools You can use the HealthTools suite of tools to update NetApp Element software at a dark site. What you’ll need 1. For NetApp HCI systems, go to the NetApp HCI software download page.
Steps 1. You should stop sfinstall with Ctrl+C. 2. Contact NetApp Support to assist with the failure investigation. 3. Resume the upgrade with the same sfinstall command.
If the drives are ungracefully removed, adding the drives back during an upgrade will require manual intervention by NetApp Support. The node might be taking longer to do firmware updates or post update syncing activities.
UI (https://[IP related to time skew. • System ports: If you are using NetApp Hybrid Cloud Control for upgrades, you have ensured that the necessary ports are open. See Network ports for more information.
Use NetApp Hybrid Cloud Control UI to upgrade storage firmware You can use the NetApp Hybrid Cloud Control UI to upgrade the firmware of the storage nodes in your cluster. What you’ll need • If your management node is not connected to the internet, you have downloaded the package from the relevant location: ◦...
Option Steps Your management node has external connectivity. 1. Click the drop-down arrow next to the cluster you are upgrading. 2. Click Firmware Only, and select from the upgrade versions available. 3. Click Begin Upgrade. The Upgrade Status changes during the upgrade to reflect the status of the process.
On-screen messages also show node-level faults and display the node ID of each node in the cluster as the upgrade progresses. You can monitor the status of each node using the Element UI or the NetApp Element plug-in for vCenter Server UI.
At this stage, clicking Pause in the UI will not work, because the upgrade waits for the cluster to be healthy. You will need to engage NetApp Support to assist with the failure investigation.
1. Do one of the following depending on your connection: Option Steps Your management node has external connectivity. 1. Verify the repository connection: a. Open the management node REST API UI on the management node: https://[management node IP]/package-repository/1/ b. Click Authorize and complete the following: i.
◦ For NetApp HCI systems, go to the NetApp HCI software download page and download the latest storage firmware image. ◦ For SolidFire storage systems, go to the...
2. Locate the installation asset ID: a. Open the management node REST API UI on the management node: https://[management node IP]/inventory/1/ b. Click Authorize and complete the following: i. Enter the cluster user name and password. ii. Enter the client ID as mnode-client. iii.
a. Open the storage REST API UI on the management node: https://[management node IP]/storage/1/ b. Click Authorize and complete the following: i. Enter the cluster user name and password. ii. Enter the client ID as mnode-client. iii. Click Authorize to begin a session. iv.
Option Steps You need to correct cluster health issues due to failedHealthChecks message in the response body. 1. Go to the specific KB article listed for each issue or perform the specified remedy. 2. If a KB is specified, complete the process described in the relevant KB article.
you can simply upgrade the management services to the latest version to perform Element upgrades using NetApp Hybrid Cloud Control. Follow the management node upgrade procedure for your scenario if you would like to upgrade the management node operating system for other reasons, such as security remediation.
XX.X.X.XXXX.iso 4. Check the integrity of the download by running md5sum on the downloaded file and compare the output to what is available on NetApp Support Site for NetApp HCI or Element software, as in the following example: sudo md5sum <path...
sudo cp -r /mnt/* /upgrade
6. Change to the home directory, and unmount the ISO file from /mnt:
sudo umount /mnt
7. Delete the ISO to conserve space on the management node:
sudo rm <path to iso>/solidfire-fdva-<Element release>-patchX-XX.X.X.XXXX.iso
8. (For configurations without persistent volumes only) Copy the contents of the container folder for backup:
sudo cp -r /var/lib/docker/volumes /sf/etc/mnode
9.
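The integrity check described in these steps can be scripted. The following is a minimal sketch, assuming a Linux management node with coreutils; the file name and contents here are placeholders rather than a real Element ISO, and in practice the expected checksum is copied from the NetApp Support Site:

```shell
# Sketch of the md5 integrity check; the ISO name and "expected" value are
# placeholders. In practice, copy the expected checksum from the Support Site.
iso="solidfire-fdva-example.iso"
printf 'placeholder contents' > "$iso"        # stand-in for the real download
expected=$(md5sum "$iso" | awk '{print $1}')  # would come from the Support Site
actual=$(md5sum "$iso" | awk '{print $1}')
if [ "$expected" = "$actual" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH - do not proceed" >&2
fi
```

If the sums differ, download the ISO again before mounting or installing it.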
The name of the ISO is similar to XX.X.X.XXXX.iso
4. Check the integrity of the download by running md5sum on the downloaded file and compare the output to what is available on the NetApp Support Site for NetApp HCI or Element software, as in the following example:...
sudo md5sum <path to iso>/solidfire-fdva-<Element release>-patchX-XX.X.X.XXXX.iso
5. Mount the management node ISO image and copy the contents to the file system using the following commands:
sudo mkdir -p /upgrade
sudo mount <solidfire-fdva-<Element release>-patchX-XX.X.X.XXXX.iso> /mnt
sudo cp -r /mnt/* /upgrade
6. Change to the home directory, and unmount the ISO file from /mnt:
sudo umount /mnt
7.
4. Check the integrity of the download by running md5sum on the downloaded file and compare the output to what is available on the NetApp Support Site for NetApp HCI or Element software, as in the following example:
sudo md5sum -b <path to iso>/solidfire-fdva-<Element release>-patchX-...
5. Mount the management node ISO image and copy the contents to the file system using the following commands:
sudo mkdir -p /upgrade
sudo mount solidfire-fdva-<Element release>-patchX-XX.X.X.XXXX.iso /mnt
sudo cp -r /mnt/* /upgrade
6. Change to the home directory, and unmount the ISO file from /mnt:
sudo umount /mnt
7.
/sf/packages/mnode/upgrade-mnode -mu <mnode user> -pvm <mvip for persistent volumes>
10. (For all NetApp HCI installations and SolidFire all-flash storage installations with NetApp Element Plug-in for vCenter Server) Update the vCenter Plug-in on the 12.2 management node by following the steps in the Upgrade the Element Plug-in for vCenter Server topic.
Your installation has a base asset configuration that was created during installation or upgrade. 12. If you have a NetApp HCI installation, locate the hardware tag for your compute node in vSphere: a. Select the host in the vSphere Web Client navigator.
IP address on port 9443 to open the management node UI: https://<mNode 11.1 IP address>:9443
6. In vSphere, select NetApp Element Configuration > mNode Settings. (In older versions, the top-level menu is NetApp SolidFire Configuration.)
7. Click Actions > Clear.
(https://<mNode the SIOC service from QoSSIOC Service Management.
19. Wait for one minute and check the NetApp Element Configuration > mNode Settings tab. This should display the mNode status as UP.
If the status is DOWN, check the permissions for /sf/packages/sioc/app.properties. The file should have read, write, and execute permissions for the file owner.
Check the sfcollector logs to confirm that it is working.
26. In vSphere, the NetApp Element Configuration > mNode Settings tab should display the mNode status as UP.
27. Verify that NMA is reporting system alerts and ONTAP Select alerts.
28. If everything is working as expected, shut down and delete the management node 10.x VM.
11.3 or later. vCenter Plug-in 4.4 or later requires an 11.3 or later management node with a modular architecture that provides individual services. Your management node must be powered on with its IP address or DHCP address configured.
• Element storage upgrades: You have a cluster running NetApp Element software 11.3 or later.
• vSphere Web Client: For vSphere 6.5 and 6.7, you have logged out of the vSphere Web Client before beginning any plug-in upgrade. If you do not log out, the web client for these versions will not recognize the updates made to your plug-in during this process. For vSphere 7.0, you do not need to log out of the web client.
3. Within Manage vCenter Plug-in, select Update Plug-in.
4. Confirm or update the following information:
a. The IPv4 address or the FQDN of the vCenter service on which you will register your plug-in.
b. The vCenter Administrator user name.
The user name and password credentials you enter must be for a user with vCenter Administrator role privileges.
If the vCenter Plug-in icons are not visible, see Element Plug-in for vCenter Server documentation about troubleshooting the plug-in. 9. Verify the version change in the About tab in the NetApp Element Configuration extension point of the plug-in. You should see the following version details or details of a more recent version: NetApp Element Plug-in Version: 4.6...
Compute node health checks made by the service
Use NetApp Hybrid Cloud Control to run compute node health checks prior to upgrading firmware
Using NetApp Hybrid Cloud Control (HCC), you can verify that a compute node is ready for a firmware upgrade.
Use API to run compute node health checks prior to upgrading firmware
You can use the REST API to verify that compute nodes in a cluster are ready to be upgraded. The health check verifies that there are no obstacles to upgrading, such as ESXi host issues or other vSphere issues. You will need to run compute node health checks for each compute cluster in your environment.
"cluster": "domain-1"
g. Click Execute to run a health check on the cluster.
The code 200 response gives a "resourceLink" URL with the task ID appended that is needed to confirm the health check results:
"resourceLink": "https://10.117.150.84/vcenter/1/compute/tasks/[This is the task ID for health check task results]", ...
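Extracting the task ID from the resourceLink can be sketched offline. In the snippet below, check_health substitutes for the actual POST so the logic runs without a live management node, and the URL and task ID are made-up examples shaped like the documented response:

```shell
# check_health stands in for the POST that runs the health check; the URL and
# task ID below are made-up examples shaped like the documented 200 response.
check_health() {
  echo '{"resourceLink":"https://mnode/vcenter/1/compute/tasks/abc123"}'
}
link=$(check_health | sed -n 's/.*"resourceLink": *"\([^"]*\)".*/\1/p')
task_id=${link##*/}   # the trailing path segment is the task ID to poll
echo "poll task: $task_id"
```

On a real system, the same extraction would be applied to the response body returned by the REST API UI or curl.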
Check: Are there informational alerts in vCenter? Resolution: Resolve and/or acknowledge any alerts before proceeding. KB: No KB needed to resolve the issue.
Check: Are management services up to date? (HCI system) Resolution: You must update management services before you perform an upgrade. KB: No KB needed to resolve the issue. See this article for more information.
Witness Node on an alternate ESXi host and re-run the health check. One Witness Node must be running in the HCI installation at all times.
What is the status of the Witness Nodes? The Witness Nodes are up and running. A Witness Node is not...
Node only on this ESXi host and re-run the health check. A Witness Node is up and running on another ESXi host. One Witness Node must be running in the HCI installation at all times.
Find more information
• NetApp Element Plug-in for vCenter Server
•...
5. Extract the downloaded driver bundle on your local computer. The NetApp driver bundle includes one or more VMware Offline Bundle ZIP files; do not extract these ZIP files. 6. After upgrading the firmware on the compute nodes, go to VMware Update Manager in VMware vCenter.
• System ports: If you are using NetApp Hybrid Cloud Control for upgrades, you have ensured that the necessary ports are open. See Network ports for more information. • Minimum BMC and BIOS versions: The node you intend to upgrade using NetApp Hybrid Cloud Control...
Use NetApp Hybrid Cloud Control UI to upgrade a compute node Starting with management services 2.14, you can upgrade a compute node using the NetApp Hybrid Cloud Control UI. From the list of nodes, you must select the node to upgrade. The Current Versions tab shows the current firmware versions and the Proposed Versions tab shows the available upgrade versions, if any.
Option: Your management node has external connectivity.
Steps:
1. Select the cluster you are upgrading. You will see the nodes in the cluster listed along with the current firmware versions and newer versions, if available for upgrade.
2. Select the upgrade package.
3.
The Upgrade Status changes. If a failure happens during the upgrade, NetApp Hybrid Cloud Control will reboot the node, take it out of maintenance mode, and display the failure status with a link to the error log.
The upgrade is in progress. A progress bar shows the upgrade status. Use NetApp Hybrid Cloud Control API to upgrade a compute node You can use APIs to upgrade each compute node in a cluster to the latest firmware version. You can use an automation tool of your choice to run the APIs.
Option: Your management node has external connectivity.
Steps:
1. Verify the repository connection:
a. Open the package service REST API UI on the management node: https://[management node IP]/package-repository/1/
b. Click Authorize and complete the following:
i. Enter the cluster user name and password.
Option: Your management node is within a dark site without external connectivity.
Steps:
1. Go to the NetApp HCI software download page and download the latest compute node firmware image to a device that is accessible to the management node.
2. Locate the compute controller ID and node hardware ID for the node you intend to upgrade:
a. Open the inventory service REST API UI on the management node: https://[management node IP]/inventory/1/
b. Click Authorize and complete the following:
i.
"config": {
  "force": false,
  "maintenanceMode": true
},
"controllerId": "a1b23456-c1d2-11e1-1234-a12bcdef123a",
"packageName": "compute-firmware-12.2.109",
"packageVersion": "12.2.109"
g. Click Execute to initiate the upgrade. The upgrade cannot be paused after you begin. Firmware will be updated sequentially in the following order: NIC, BIOS, and BMC.
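The request body can be assembled and sanity-checked before it is submitted through the REST API UI. This is an offline sketch using the placeholder values from the example above, not live IDs:

```shell
# Build the upgrade request body from the example above; the controllerId and
# package values are the documentation placeholders, not a real node.
body=$(cat <<'EOF'
{
  "config": { "force": false, "maintenanceMode": true },
  "controllerId": "a1b23456-c1d2-11e1-1234-a12bcdef123a",
  "packageName": "compute-firmware-12.2.109",
  "packageVersion": "12.2.109"
}
EOF
)
# Quick sanity check that the package fields are present before pasting the
# body into the REST API UI.
echo "$body" | grep -q '"packageName": "compute-firmware-12.2.109"' && echo "body ready"
```

Checking the body locally avoids submitting a malformed request, since the upgrade cannot be paused once it begins.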
If VMware Distributed Resource Scheduler (DRS) is enabled on the cluster (this is the default in NetApp HCI installations), virtual machines will automatically be migrated to other nodes in the cluster. 6. Insert the USB thumb drive into a USB port on the compute node and reboot the compute node using...
7. During the compute node POST cycle, press F11 to open the Boot Manager. You may need to press F11 multiple times in quick succession. You can perform this operation by connecting a video/keyboard or by using the console in BMC. 8.
4. Select Remote Control > Console Redirection. 5. Click Launch Console. You might have to install Java or update it. 6. When the console opens, click Virtual Media > Virtual Storage. 7. On the Virtual Storage screen, click Logical Drive Type, and select ISO File. 8.
NOTE: Some of the firmware upgrades might cause the console to disconnect and/or your session on the BMC to disconnect. You can log back in to the BMC; however, some services, such as the console, might not be available due to the firmware upgrades. After the upgrades have completed, the node will perform a cold reboot, which can take approximately five minutes.
7. Click Connect. A popup indicating success is displayed, along with the path and device showing at the bottom. You can close the Virtual Media window. 8. Reboot the node by pressing F12 and clicking Restart or clicking Power Control > Set Power Reset. 9.
• NetApp HCI Resources Page
vSphere upgrade sequences with vCenter Plug-in
Upgrade your vSphere components for a NetApp HCI system with the Element Plug-in for vCenter Server
When you upgrade the VMware vSphere components of your NetApp HCI installation, there are some additional steps you will need to take for the Element Plug-in for vCenter Server.
Upgrade your vSphere components for a NetApp SolidFire storage system with the Element Plug-in for vCenter Server
When you upgrade the VMware vSphere components of your SolidFire Element storage installation, there are some additional steps you will need to take for systems with the Element Plug-in for vCenter Server.
Element software version of the storage cluster. Contact NetApp Support for more information. After installing the node in the NetApp HCI chassis, you use NetApp Hybrid Cloud Control to configure NetApp HCI to use the new resources. NetApp HCI detects the existing network configuration and offers you configuration options within the existing networks and VLANs, if any.
For each new storage node, complete the following steps: a. Hostname: If NetApp HCI detected a naming prefix, copy it from the Detected Naming Prefix field, and insert it as the prefix for the new unique hostname you add in the Hostname field.
After you finish, click Continue on any subsequent pages to return to the Review page.
9. Optional: If you do not want to send cluster statistics and support information to NetApp-hosted Active IQ servers, clear the final checkbox.
1. Open a web browser and browse to the IP address of the management node. For example: https://[management node IP address] 2. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI storage cluster administrator credentials. 3. Click Expand at the top right corner of the interface.
11. On the Available Inventory page, select the nodes you want to add to the existing NetApp HCI installation. For some compute nodes, you might need to enable EVC at the highest level that your vCenter version supports before you can add them to your installation. You need to use the vSphere client to enable EVC for these compute nodes.
NetApp Hybrid Cloud Control. Before you begin • Ensure that the vSphere instance of NetApp HCI is using vSphere Enterprise Plus licensing if you are expanding a deployment with Virtual Distributed Switches.
3. Click Expand at the top right corner of the interface. The browser opens the NetApp Deployment Engine. 4. Log in to the NetApp Deployment Engine by providing the NetApp HCI storage cluster administrator credentials. 5. On the Welcome page, click Yes and click Continue.
For each new storage node, complete the following steps: a. Hostname: If NetApp HCI detected a naming prefix, copy it from the Detected Naming Prefix field, and insert it as the prefix for the new unique hostname you add in the Hostname field.
After you expand a two-node storage cluster to four or more nodes, you can delete the pair of Witness Nodes to free up compute resources in your NetApp HCI installation. The Witness Nodes previously used by the storage cluster are still visible as standby virtual machines (VM) in vSphere Web Client.
8. Right-click the VM that you powered off, and click Delete from Disk. 9. Confirm the action in the prompt. Find more information • NetApp HCI Two-Node Storage Cluster | TR-4823 • NetApp Element Plug-in for vCenter Server • NetApp HCI Resources page •...
DevOps teams with integrated tools for running containerized workloads. Deploying Rancher on NetApp HCI deploys the Rancher control plane, also referred to as the Rancher server, and enables you to create on-premises Kubernetes clusters. You deploy the Rancher control plane by using NetApp Hybrid Cloud Control.
Rancher or import into Rancher is a user cluster. • Trident: A Trident catalog is available to Rancher on NetApp HCI and runs in the user clusters. Inclusion of this catalog simplifies the Trident deployment to user clusters.
NetApp Element software storage layer in NetApp HCI. A Trident catalog is available to Rancher on NetApp HCI and runs in the user clusters. As part of the Rancher on NetApp HCI implementation, a Trident installer is available in the Rancher catalog by default. Inclusion of this catalog simplifies the Trident deployment to user clusters.
DHCP lease duration is at least 24 hours. • If you need to use an HTTP proxy to enable internet access for Rancher on NetApp HCI, you need to make a pre-deployment change to the management node. Log in to your management node using SSH and...
◦ Demo deployments
◦ Production deployments
• Rancher FQDN
Rancher on NetApp HCI is not resilient to node failures unless you configure some type of network load balancing. As a simple solution, create a round-robin DNS entry for the three
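A round-robin entry of the kind described above might look like the following BIND-style zone fragment. This is a hypothetical sketch: the hostname and addresses are placeholders, and the three A records would point at the three Rancher server nodes:

```
; Hypothetical round-robin DNS entry for three Rancher server nodes
rancher.example.com.  300  IN  A  192.0.2.11
rancher.example.com.  300  IN  A  192.0.2.12
rancher.example.com.  300  IN  A  192.0.2.13
```

Because resolvers rotate through the returned addresses, clients are spread across the nodes, but this provides no health checking; a dedicated load balancer is the more robust option.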
Docker Hub for NetApp Hybrid Cloud Control Deploy Rancher on NetApp HCI To use Rancher on your NetApp HCI environment, you first deploy Rancher on NetApp HCI. Before starting the deployment, be sure to check the datastore free space and other requirements for Rancher on NetApp HCI.
Rancher UI is available on each node. • The Rancher Control Plane (or Rancher Server) is also installed, using the NetApp HCI node template in Rancher for easier deployment. The Rancher Control Plane automatically works with the configuration used in the NetApp Deployment Engine, which was used to build the NetApp HCI infrastructure.
A popup window shows a message about getting started with Rancher. 2. Click Deploy Rancher. The Rancher UI appears.
◦ Datastore: Select a datastore on the NetApp HCI storage nodes. This datastore should be resilient and accessible to all of the VMware hosts. Do not select a local datastore that is accessible to only one of the hosts.
Once you have finished deployment, do not modify the configuration of the Rancher server virtual machine cluster or remove the virtual machines. Rancher on NetApp HCI relies on the deployed RKE management cluster configuration to function normally. What’s next? After deployment, you can do the following: •...
Rancher management VMs and user clusters. If you purchased Rancher Support for only part of your NetApp HCI compute resources, you need to take action in VMware vSphere to ensure that Rancher on NetApp HCI and its managed user clusters are only running on hosts for which you have purchased Rancher Support.
NetApp HCI Resources page
Install Trident
Learn about how to install Trident after you install Rancher on NetApp HCI. Trident is a storage orchestrator, which integrates with Docker and Kubernetes, as well as platforms built on these technologies, such as Red Hat OpenShift, Rancher, and IBM Cloud Private.
In this task, you use the installer catalog to install and configure Trident. As part of the Rancher installation, NetApp provides a node template. If you are not planning to use the node template that NetApp provides, and you want to provision on RHEL or CentOS, there might be additional requirements.
The default storage tenant is NetApp HCI. You can change this value. You can also change the backend name. However, do not change the default storage driver value, which is solidfire-san.
5. Select Launch. This installs the Trident workload in the trident namespace.
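Under the hood, a solidfire-san backend definition of the kind this catalog configures looks roughly like the following. This is a hedged sketch based on Trident's solidfire-san driver; the endpoint, SVIP, and credential values are placeholders you would replace with your cluster's values:

```
{
  "version": 1,
  "storageDriverName": "solidfire-san",
  "Endpoint": "https://<user>:<password>@<mvip>/json-rpc/8.0",
  "SVIP": "<svip>:3260",
  "TenantName": "NetApp HCI"
}
```

The TenantName field corresponds to the storage tenant mentioned above, and storageDriverName is the value the procedure says not to change.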
NetApp HCI Resources page
Enable Trident support for user clusters
If your NetApp HCI environment does not have a route between the management and storage networks, and you deploy user clusters that need Trident support, you need to further configure your user cluster networks after installing Trident. For each user cluster, you need to enable communication between the management and storage networks.
8. When you have reconfigured the network for each node in the user cluster, you can deploy applications in the user cluster that utilize Trident.
Deploy user clusters and applications
After deploying Rancher on NetApp HCI, you can set up user clusters and add applications to those clusters.
Deploy user clusters
After deployment, Dev and Ops teams can then deploy their Kubernetes user clusters, similar to any Rancher deployment, on which they can deploy apps.
Identify Rancher server cluster URLs and status
You can identify Rancher server cluster URLs and determine server status.
1. Log in to NetApp Hybrid Cloud Control by providing the NetApp HCI or Element storage cluster administrator credentials.
2. From the Dashboard, select the top right Options icon and select Configure.
3. To view nodes information, from the Hybrid Cloud Control Dashboard, expand the name of your storage cluster and click Nodes. Monitor Rancher using the Rancher UI Using the Rancher UI, you can see information about Rancher on NetApp HCI management clusters and user clusters. ...
You must upgrade to the latest management services bundle 2.17 or later for Rancher functionality. • System ports: If you are using NetApp Hybrid Cloud Control for upgrades, you have ensured that the necessary ports are open. See Network ports for more information.
1. Open a web browser and browse to the IP address of the management node: https://<ManagementNodeIP> 2. Log in to NetApp Hybrid Cloud Control by providing the storage cluster administrator credentials. 3. Select Upgrade near the top right of the interface.
curl -X POST "https://<managementNodeIP>/k8sdeployer/1/upgrade/rancher-versions" -H "accept: application/json" -H "Authorization: Bearer <ID>"
You can find the bearer ID used by the APIs by running a GET command and retrieving it from the curl response.
2. Get the task status using the task ID from the previous command, and copy the latest version number from the response:
curl -X GET "https://<mNodeIP>/k8sdeployer/1/task/<taskID>"
a. From the REST API UI, run PUT /upgrade/rancher/{version} with the latest version number from the previous step.
b. From the response, copy the task ID.
c. Run GET /task/{taskID} with the task ID from the previous step.
The upgrade has finished successfully when PercentComplete indicates 100 and results indicates the upgraded version number.
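The completion check in the last step can be sketched offline. The JSON below is an illustrative response shaped like the fields the procedure names, not one captured from a live system (field capitalization may differ on a real management node):

```shell
# Offline sketch: parse a /task/{taskID}-style response for completion.
# The response JSON is illustrative, not captured from a live system.
response='{"percentComplete":100}'
pct=$(echo "$response" | sed -n 's/.*"percentComplete": *\([0-9]*\).*/\1/p')
if [ "$pct" -eq 100 ]; then
  echo "upgrade complete"
fi
```

A wrapper like this can be looped with a short sleep to poll the task until it reaches 100 percent.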
b. Enter the client ID as mnode-client.
c. Select Authorize to begin a session.
d. Close the authorization window.
3. Check for the latest upgrade package:
a. From the REST API UI, run POST /upgrade/rke-versions.
b. From the response, copy the task ID.
c.
From the /task/{taskID} response, verify that the upgrade has been applied. The upgrade has finished successfully when PercentComplete indicates 100 and results indicates the upgraded version number.
Find more information
• NetApp Element Plug-in for vCenter Server
• NetApp HCI Resources Page...
Remove Rancher on NetApp HCI using the REST API
Remove Rancher on NetApp HCI using NetApp Hybrid Cloud Control
You can use the NetApp Hybrid Cloud Control web UI to remove the three virtual machines that were set up during deployment to host the Rancher server.
7. Select Delete.
Remove Rancher on NetApp HCI using the REST API
You can use the NetApp Hybrid Cloud Control REST API to remove the three virtual machines that were set up during deployment to host the Rancher server.
Steps
1.
If your chassis has a fan failure or a power issue, you should replace it as soon as possible. The steps in the chassis replacement procedure depend on your NetApp HCI configuration and cluster capacity, which requires careful consideration and planning. You should contact NetApp Support for guidance and to order a replacement chassis.
1. Put on antistatic protection. 2. Unpack the replacement chassis. Keep the packaging for when you return the failed chassis to NetApp. 3. Insert the rails that were shipped to you along with the chassis. 4. Slide the replacement chassis into the rack.
Note down the serial number of the node from the sticker at the back of the node. b. In the VMware vSphere Web Client, select NetApp Element Management, and copy the MVIP IP address. c. Use the MVIP IP address in a web browser to log in to the NetApp Element software UI with the user...
MVIP is the MVIP IP address, NODEID is the node ID, USER is the user name you configured in the NetApp Deployment Engine when you set up NetApp HCI, and PASS is the password you configured in the NetApp Deployment Engine when you set up NetApp HCI.
You should take the following points into consideration before you do the chassis replacement: • Can your storage cluster remain online without the storage nodes in the failed chassis? If the answer is no, you should shut down all the nodes (both compute and storage) in your NetApp HCI deployment.
1. Put on antistatic protection. 2. Unpack the replacement chassis, and keep it on a level surface. Keep the packaging for when you return the failed unit to NetApp. 3. Remove the failed chassis from the rack, and place it on a level surface.
Option: If you reinstalled all the nodes (both storage and compute) in your NetApp HCI deployment.
Steps:
1. In the VMware vSphere Web Client, confirm that the compute nodes (hosts) are listed in the ESXi cluster.
2. In the Element plug-in for vCenter server, confirm that the storage nodes are listed as Active.
Replace DC power supply units in H615C and H610S nodes
H615C and H610S nodes support two –48 V to –60 V DC power supply units. These units are available as optional add-ons when you order H615C or H610S nodes. You can use these instructions to remove the AC power supply units in the chassis and replace them with DC power supply units, or to replace a faulty DC power supply unit with a new DC power supply unit.
The illustration is an example. The location of the power supply unit in the chassis and the color of the release button vary depending on the type of chassis you have. Ensure that you use both hands to support the weight of the power supply unit. 3.
The power supply LEDs are lit when the DC power supply unit comes online. Green LED lights indicate that the power supply units are working correctly. 6. Return the faulty unit to NetApp by following the instructions in the box that was shipped to you. Find more information •...
Log in to the BMC to launch the console on the node. b. Press F2 on the keyboard to get to the Customize System/View Logs menu. c. Enter the password when prompted. The password should match what you configured in the NetApp Deployment Engine when you set up NetApp HCI.
d. From the System Customization menu, press the down arrow to navigate to Troubleshooting Options, and press Enter. e. From the Troubleshooting Mode Options menu, use the up or down arrow to enable ESXi shell and SSH, which are disabled by default. f.
1. Press Alt + F1 to enter the shell, and log in to the node to run the command.
3. Contact NetApp Support for help with the next steps. NetApp Support requires the following information to process a part replacement:
◦ Node serial number
◦...
2. Right-click the node that is reporting the error, and select the option to place the node in maintenance mode. 3. Migrate the virtual machines (VMs) to another available host. See the VMware documentation for the migration steps. 4. Power down the chassis or node. ...
DIMM. If you press on other parts of the DIMM, it might result in damage to the hardware. Install the node in the NetApp HCI chassis, ensuring that the node clicks when you slide it into place.
Node model: H610C
Steps:
1. Lift the cover as shown in the following image:
2. Loosen the four blue lock screws at the back of the node. Here is a sample image showing the location of two lock screws; you will find the other two on the other side of the node:
3.
Node model: H615C
Steps:
1. Lift the cover as shown in the following image:
7. Install the replacement DIMM correctly. When you insert the DIMM into the slot correctly, the two clips lock in place. Ensure that you touch only the rear ends of the DIMM.
Learn more.
Find more information
• NetApp HCI Resources page
• SolidFire and Element Software Documentation Center
Replace drives for storage nodes
If a drive is faulty or if the drive wear level falls below a threshold, you should replace it.
• Remove all the block drives from a single node at once. Ensure that all block syncing is complete before you move on to the next node. Steps 1. Remove the drive from the cluster using either the NetApp Element software UI or the NetApp Element Management extension point in Element plug-in for vCenter server. Option...
Unpack the replacement drive, and place it on a flat, static-free surface near the rack. Save the packing materials for when you return the failed drive to NetApp. Here is the front view of the H610S and H410S storage nodes with the drives:...
6. Insert the replacement drive into the slot all the way into the chassis using both hands. 7. Press down the cam handle until it clicks. 8. Reinstall the bezel. 9. Notify NetApp Support about the drive replacement. NetApp Support will provide instructions for returning the failed drive.
The drive tray handle springs open. 6. Insert the replacement drive without using excessive force. When the drive is inserted fully, you hear a click. 7. Close the drive tray handle carefully. Reinstall the bezel. Notify NetApp Support about the drive replacement.
8. 9. 3. Add the drive back to the cluster using either the Element UI or the NetApp Element Management extension point in Element plug-in for vCenter server. NetApp Support will provide instructions for returning the failed drive. When you install a new drive in an existing node, the drive automatically registers as ...
Page 318
1. In the VMware vSphere Web Client, perform the steps to migrate the VMs to another available host. See the VMware documentation for the migration steps. 2. Perform the steps to remove the node from the inventory. The steps depend on the version of NetApp HCI in your current installation:...
2. Unpack the new node, and set it on a level surface near the chassis. Keep the packaging material for when you return the failed node to NetApp. 3. Label each cable that is inserted at the back of the node that you want to remove.
Remove the compute node asset in NetApp HCI 1.7 and later
In NetApp HCI 1.7 and later, after you physically replace the node, you should remove the compute node asset using the management node APIs. To use REST APIs, your storage cluster must be running NetApp Element software 11.5 or later and you should have deployed a management node running version 11.5 or later.
Enter the parent and id values you got in step 7.
9. Select Execute.
Add the compute node to the cluster
You should add the compute node back to the cluster. The steps vary depending on the version of NetApp HCI you are running.
NetApp HCI 1.6P1 and later
You can use NetApp Hybrid Cloud Control only if your NetApp HCI installation runs on version 1.6P1 or later.
For the new compute node, perform the following steps: a. If NetApp HCI detected a naming prefix, copy it from the Detected Naming Prefix field, and insert it as the prefix for the new unique hostname you add in the Hostname field.
20. Optional: Verify that the new compute node is visible in vCenter.
NetApp HCI 1.4P2, 1.4, and 1.3
If your NetApp HCI installation runs version 1.4P2, 1.4, or 1.3, you can use the NetApp Deployment Engine to add the node to the cluster.
You should use the same master credentials that were created during the initial NetApp HCI deployment. 9. Select Continue. 10. On the Available Inventory page, select the node you want to add to the existing NetApp HCI installation. For some compute nodes, you might need to enable EVC at the highest level your ...
When finished making changes, select Continue on any subsequent pages to return to the Review page. 14. Optional: If you do not want to send cluster statistics and support information to NetApp-hosted Active IQ servers, clear the final checkbox.
In the window that is displayed, enter the port information. You should contact NetApp Support for this information. NetApp Support logs in to the node to set the boot configuration file and complete the configuration task. f. Restart the node.
ii. In the graphic that is displayed, select VM Network, and click X to remove the VM Network port group. iii. Confirm the action. iv. Select vSwitch0, and then select the pencil icon to edit the settings. v. In the vSwitch0 - Edit settings window, select Teaming and failover. vi.
iii. Select +. iv. In the Add Physical Adapters to Switch window, select vmnic0 and vmnic4, and select OK. vmnic0 and vmnic4 are now listed under Active adapters. v. Select Next. vi. Under connection settings, verify that VM Network is the network label, and select Next. vii.
ii. Select Teaming and failover, and select the Override checkbox. iii. Move vmnic4 to Standby adapters by using the arrow icon. iv. Select OK. i. With vSwitch1 selected, from the Actions drop-down menu, select Add Networking and enter the following details in the window that is displayed: i.
window that is displayed:
i. For connection type, select VMkernel Network Adapter, and select Next.
ii. For target device, select the option to use an existing standard switch, browse to vSwitch2, and select Next.
iii. Under port properties, change the network label to iSCSI-A, and select Next.
iv.
Redeploy Witness Nodes for two- and three-node storage clusters
After you physically replace the failed compute node, you should redeploy the NetApp HCI Witness Node VM if the failed compute node was hosting the Witness Node. These instructions apply only to compute nodes that are part of a NetApp HCI installation with a two- or three-node storage cluster.
You should redeploy Witness Nodes in the following scenarios:
• You replaced a failed compute node that is part of a NetApp HCI installation with a two- or three-node storage cluster, and the failed compute node was hosting a Witness Node VM.
Select Data Center, locate the VM template, and select Next.
b. Enter a name for the VM in the following format: NetApp-Witness-Node-##, where ## should be replaced with a number.
c. Leave the default selection for VM location as is, and select Next.
Press the Tab key to navigate to the OK button, and press Enter.
6. Add the Witness Node to the storage cluster as follows:
a. From the vSphere Web Client, access the NetApp Element Management extension point from the Shortcuts tab or the side panel.
Upgrade the BMC firmware on your node
After you replace the compute node, you might have to upgrade the BMC firmware version. You can download the latest firmware file from the drop-down menu on the NetApp Support Site (login required).
Steps
1.
Alarms in the VMware vSphere Web Client alert you when a storage node is faulty. You should use the NetApp Element software UI to get the serial number (service tag) of the failed node. You need this information to locate the failed node in the chassis.
About this task
The replacement procedure applies to H410S storage nodes in a two rack unit (2U), four-node NetApp HCI chassis.
Here is the rear view of a four-node chassis with H410S nodes:
Here is the front view of a four-node chassis with H410S nodes, showing the bays that correspond to each...
1. Unpack the new storage node, and set it on a level surface near the chassis. Keep the packaging material for when you return the failed node to NetApp.
2. Label each cable that is inserted at the back of the storage node that you want to remove.
9. Press the button at the front of the node to power it on.

Add the storage node to the cluster
You should add the storage node back to the cluster. The steps vary depending on the version of NetApp HCI you are running.
Perform the following steps:
a. If NetApp HCI detected a naming prefix, copy it from the Detected Naming Prefix field, and insert it as the prefix for the new unique hostname you add in the Hostname field.
NetApp HCI 1.4P2, 1.4, and 1.3
If your NetApp HCI installation runs version 1.4P2, 1.4, or 1.3, you can use the NetApp Deployment Engine to add the node to the cluster.
Steps
1. Browse to the management IP address of one of the existing storage nodes: http://<storage_node_management_IP_address>/
The cluster membership changes from Available to Pending.
7. In NetApp Element Plug-in for vCenter Server, select NetApp Element Management > Cluster > Nodes.
8. Select Pending from the drop-down list to view the list of available nodes.
If you have a faulty DIMM in your H610C compute node that runs NetApp HCI Bootstrap OS version 1.6 or later, you can replace the DIMM without replacing the chassis. For H615C nodes, you need not replace the chassis if a DIMM fails;...
1. Unpack the new chassis, and set it on a level surface. Keep the packaging material for when you return the failed chassis to NetApp.
2. Label each cable that is inserted at the back of the chassis that you are going to remove.
Ensure that you do not use excessive force when sliding the chassis onto the rails.
6. Only for H615C: Remove the DIMMs from the failed chassis and insert them in the replacement chassis. You should install the DIMMs in the same slots from which they were removed in the failed node.
You should configure NetApp HCI to use the new compute node.
What you’ll need
• The vSphere instance NetApp HCI is using has vSphere Enterprise Plus licensing if you are adding the node to a deployment with Virtual Distributed Switches.
• None of the vCenter or vSphere instances in use with NetApp HCI have expired licenses.
• You have free and unused IPv4 addresses on the same network segment as existing nodes (the new node must be installed on the same network as existing nodes of its type).
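The same-segment prerequisite can be sanity-checked before you start. The following is a minimal sketch using Python's standard ipaddress module; the addresses and prefix length are hypothetical examples, not values from your environment:

```python
import ipaddress

def on_same_segment(candidate_ip: str, existing_node_ip: str, prefix_len: int) -> bool:
    """Return True if candidate_ip falls in the same IPv4 subnet as existing_node_ip."""
    # strict=False lets us pass a host address and still derive its network
    segment = ipaddress.ip_network(f"{existing_node_ip}/{prefix_len}", strict=False)
    return ipaddress.ip_address(candidate_ip) in segment

# Hypothetical example: an existing node at 192.168.100.10 on a /24 segment
print(on_same_segment("192.168.100.25", "192.168.100.10", 24))  # True
print(on_same_segment("192.168.101.25", "192.168.100.10", 24))  # False
```

This only confirms subnet membership; verifying that an address is actually unused (for example, that nothing answers ARP or ping) is still a manual check.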
For each new compute node, perform the following steps:
a. If NetApp HCI detected a naming prefix, copy it from the Detected Naming Prefix field, and insert it as the prefix for the new unique hostname you add in the Hostname field.
What you’ll need
• You have contacted NetApp Support. If you are ordering a replacement, you should have a case open with NetApp Support.
• You have obtained the replacement node.
• You have an electrostatic discharge (ESD) wristband, or you have taken other antistatic protection.
2. Verify that the serial number on the service tag matches the NetApp Support case number from when you ordered the replacement chassis.
3. Plug the keyboard and monitor into the back of the failed chassis.
4. Verify the serial number of the failed node with NetApp Support.
Alarms in the VMware vSphere Web Client provide information about the failed power supply unit, referring to it as PS1 or PS2. In a NetApp HCI 2U, four-node chassis, PS1 refers to the unit on the top row of the chassis and PS2 refers to the unit on the bottom row of the chassis.
The power supply units are located differently based on the type of chassis. See the images below for the locations of the power supply units:

Model: 2U, four-node NetApp HCI storage chassis
Location of the power supply units:
The nodes in your chassis might look different depending on the...
5. Plug in the power cord.
6. Return the faulty unit to NetApp by following the instructions in the box that was shipped to you.

Find more information
•...
About this task
You should perform the steps in this procedure in the order below. This ensures that downtime is minimal and that the replacement switch is preconfigured before the switch replacement. Contact NetApp Support if you need guidance.
Here is an overview of the steps in the procedure:
1. Prepare to replace the faulty switch
2. Create the configuration file
3. Remove the faulty switch and install the replacement
4. Verify the operating system version on the switch
5. Configure the replacement switch
6. Complete the replacement

Prepare to replace the faulty switch
Perform the following steps before you replace the faulty switch.
Option: Create the backup configuration file from the faulty switch
Steps:
1. Connect to your switch remotely using SSH as shown in the following example:
ssh admin@<switch_IP_address>
2. Enter Configuration mode as shown in the following example:
switch > enable
switch # configure terminal
3.
Option: Create the backup configuration file by modifying the file from another switch
Steps:
1. Connect to your switch remotely using SSH as shown in the following example:
ssh admin@<switch_IP_address>
2. Enter Configuration mode as shown in the following example:
switch >...
Verify the OS software version on the switch. The version on the faulty switch and the healthy switch should match.
Steps
1. Connect to your switch remotely using SSH.
2. Enter Configuration mode.
3. Run the show version command. See the following example:

SFPS-HCI-SW02-A (config) # show version
Product name: Onyx
Product release: 3.7.1134
Build ID: #1-dev
Build date:...
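The version comparison can be automated by capturing the show version output from each switch (for example, over SSH) and comparing the release strings. The following is a minimal sketch; the sample strings are modeled on the example output above and are not live switch data:

```python
def parse_release(show_version_output: str) -> str:
    """Extract the 'Product release' value from Onyx `show version` output."""
    for line in show_version_output.splitlines():
        if line.strip().startswith("Product release:"):
            return line.split(":", 1)[1].strip()
    raise ValueError("'Product release' not found in show version output")

# Hypothetical captured outputs from the faulty and healthy switches
faulty_switch = "Product name: Onyx\nProduct release: 3.7.1134\nBuild ID: #1-dev"
healthy_switch = "Product name: Onyx\nProduct release: 3.7.1134\nBuild ID: #1-dev"

print(parse_release(faulty_switch))                                   # 3.7.1134
print(parse_release(faulty_switch) == parse_release(healthy_switch))  # True
```

If the releases differ, upgrade or downgrade the replacement switch OS before proceeding, as described in the procedure.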
details.

Configure the replacement switch
Perform the steps to configure the replacement switch. See Mellanox configuration management for details.
Steps
1. Choose from the option that applies to you:

Option: From the BIN configuration file
Steps:
1. Fetch the BIN configuration file as shown in the following example:
switch (config) # configuration fetch scp://myusername@my-...
Complete the replacement
Perform the steps to complete the replacement procedure.
Steps
1. Insert the cables by using the labels to guide you.
2. Run NetApp Config Advisor. Access the Quick Start Guide (login required).
3. Verify your storage environment.
7. After the node is added to the cluster, add the drives.
8. After the sync is finished, remove the failed drives and the failed node from the cluster.
9. Use NetApp Hybrid Cloud Control to configure the new storage node that you added. See Expand NetApp HCI storage resources.
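The add-node and add-drive steps above can also be driven through the Element API instead of the UI. The following is a minimal sketch that only constructs the JSON-RPC request bodies and does not send them; AddNodes, AddDrives, and RemoveDrives are Element API methods (see the Element API Reference Guide), but the endpoint path, node ID, and drive IDs shown are hypothetical placeholders you would obtain from ListPendingNodes and ListDrives:

```python
import json

def element_rpc_payload(method: str, params: dict) -> str:
    """Build a JSON-RPC request body for the Element API.

    The body is POSTed to the cluster's JSON-RPC endpoint
    (https://<mvip>/json-rpc/<api_version>) with cluster admin credentials.
    """
    return json.dumps({"method": method, "params": params, "id": 1})

# Hypothetical IDs; real values come from ListPendingNodes and ListDrives.
add_node = element_rpc_payload("AddNodes", {"pendingNodes": [12]})
add_drives = element_rpc_payload("AddDrives",
                                 {"drives": [{"driveID": 42}, {"driveID": 43}]})
remove_drives = element_rpc_payload("RemoveDrives", {"drives": [7, 8]})

print(add_node)
```

Separating payload construction from transport makes the calls easy to review before they are issued against a production cluster.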
Legal notices provide access to copyright statements, trademarks, patents, and more.
Copyright
http://www.netapp.com/us/legal/copyright.aspx
Trademarks
NETAPP, the NETAPP logo, and the marks listed on the NetApp Trademarks page are trademarks of NetApp, Inc. Other company and product names may be trademarks of their respective owners.
http://www.netapp.com/us/legal/netapptmlist.aspx
Patents
A current list of NetApp owned patents can be found at: https://www.netapp.com/us/media/patents-page.pdf...
NetApp assumes no responsibility or liability arising from the use of products described herein, except as expressly agreed to in writing by NetApp. The use or purchase of this product does not convey a license under any patent rights, trademark rights, or any other intellectual property rights of NetApp.