Cisco Nexus 1000V Deployment Manual

Switch for Microsoft Hyper-V
Deployment Guide
Cisco Nexus 1000V Switch for Microsoft Hyper-V
Version 1
July 2013
© 2013 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information.


Summary of Contents for Cisco Nexus 1000V

  • Page 1 Deployment Guide: Cisco Nexus 1000V Switch for Microsoft Hyper-V, Version 1, July 2013.
  • Page 2: Table Of Contents

    Management, Live Migration, and Cluster Traffic ... 34
    Deployment Topology 2: For Customers with Limited pNICs per Microsoft Hyper-V Host ... 35
  • Page 3 Network-State Tracking ... 44
    Cisco Nexus 1000V Switch for Microsoft Hyper-V Sample Configuration ... 45
    Conclusion ... 47
    For More Information ... 47
  • Page 4: What You Will Learn

    (Figure 1). The VSM can run as a virtual machine on any Microsoft Hyper-V host or as a virtual service node on the Cisco Nexus 1010 and 1110. The VEM runs as a plug-in (extension) to the Microsoft Hyper-V switch in the hypervisor kernel, providing switching between virtual machines.
  • Page 5: Template-Based Network Policy

    Cisco Nexus 1000V sees the VSMs and VEMs as modules. In the current release, a single VSM can manage up to 64 VEMs. The VSMs are always associated with slot numbers 1 and 2 in the virtual chassis. The VEMs are sequentially assigned to slots 3 through 66 based on the order in which their respective hosts were added to the Cisco Nexus 1000V Switch.
  • Page 6: Microsoft Hyper-V Networking

    For network administrators, the combination of the Cisco Nexus 1000V feature set and the capability to define a port profile using the same syntax as for existing physical Cisco switches helps ensure that consistent policy is enforced without the burden of having to manage individual virtual switch ports.
  • Page 7: Microsoft SCVMM Networking

    A VSEM is created by connecting to the VSM management IP address using the switch administrator credentials. In Figure 3, a Cisco Nexus 1000V VSM is being added as a VSEM by connecting to the switch management IP address of 10.10.1.10 using HTTP. A RunAs account called VSM Admin has been created using the switch administrator credentials.
  • Page 8: Logical Switch

    Switch instance has been created. When Cisco Nexus 1000V is used with Microsoft SCVMM, a Logical Switch that uses the Cisco Nexus 1000V as a forwarding extension is created on Microsoft SCVMM. This Logical Switch is then instantiated on all Microsoft Hyper-V hosts on which virtual networking needs to be managed with Cisco Nexus 1000V (Figure 4).
  • Page 9 Uplink profiles and port classifications are explained in the next sections of this document. Note: When a Cisco Nexus 1000V Logical Switch is created on Microsoft SCVMM, only one extension is used. The Cisco Nexus 1000V is used as a forwarding extension.
  • Page 10: Port Classifications And Virtual Machine Networks

    When the Cisco Nexus 1000V is used to manage the virtual access layer on Microsoft Hyper-V servers, the VSM administrator creates port profiles and network segments. The Microsoft SCVMM administrator uses the port profile created on the Cisco Nexus 1000V Switch for Microsoft Hyper-V to create a port classification.
  • Page 11 In Figure 7, the Cisco Nexus 1000V administrator has defined a simple port profile called RestrictedProfile that applies an access control list (ACL) network policy. Figure 7. Simple Port Profile Defined on the Cisco Nexus 1000V VSM The Microsoft SCVMM administrator uses RestrictedProfile when creating a port classification. In Figure 8, the administrator is creating a port classification, also called RestrictedProfile, with only one port profile: the RestrictedProfile port profile defined on the VSM.
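    A minimal sketch of what such a profile might look like on the VSM (the ACL name, addresses, and rules here are illustrative assumptions, not taken from Figure 7):

      ! Hypothetical ACL: allow traffic only to an example subnet
      ip access-list RestrictedACL
        permit ip any 192.168.100.0/24
        deny ip any any
      ! vEthernet port profile applying the ACL to virtual machine ports
      port-profile type vethernet RestrictedProfile
        ip port access-group RestrictedACL in
        no shutdown
        state enabled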
  • Page 12: Logical Network And Network Sites

    Port classifications are similar to port groups defined in VMware vCenter for VMware ESX environments. However, in VMware vCenter, creation of a port profile on the Cisco Nexus 1000V results in the automatic creation of a port group, whereas in Microsoft SCVMM, the user has to manually create a port classification. The extra step is needed because a port classification can represent network policies from more than one provider.
  • Page 13: Virtual Machine Networks

    When the Cisco Nexus 1000V is used to manage the virtual access layer on Microsoft Hyper-V, Logical Networks and Network Sites are created from the VSM. Network sites are referred to as network segment pools on the VSM because they are a collection of VLAN and IP subnets: that is, network segments. Figure 10 shows an example of how a Logical Network and network segment pool (Network Site) are created on Microsoft SCVMM.
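    On the VSM, creating a Logical Network and its network segment pool might look like the following sketch (the logical network name DMZ is an assumption; the pool name reuses DMZ-SFO from the sample configuration later in this guide):

      ! Logical Network that becomes visible to Microsoft SCVMM
      nsm logical network DMZ
      ! Network segment pool (Network Site) belonging to the Logical Network
      nsm network segment pool DMZ-SFO
        member-of logical network DMZ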
  • Page 14 VSM administrator. Figure 12. Creating a Virtual Machine Network That Uses an External Virtual Machine Network for Isolation
  • Page 15 Networks and then allowing the tenant administrator to create virtual machine networks using the resources available in Microsoft SCVMM Logical Networks. Figure 13. Tenant Administrator Creating Virtual Machine Networks Using Resources Configured in Logical Networks
  • Page 16: IP Address Pools

    IP address from the pool is used as the static IP address on the virtual machine. When the Cisco Nexus 1000V is used to configure the Microsoft Hyper-V virtual network, the VSM administrator must define IP pools for a network segment.
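    A minimal sketch of an IP pool template on the VSM (the template name and all addresses are illustrative assumptions):

      nsm ip pool template VMAccessPool
        ! Range of addresses handed out to virtual machines
        ip address 192.168.100.10 192.168.100.50
        network 192.168.100.0 255.255.255.0
        default-router 192.168.100.1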
  • Page 17: Virtual Supervisor Module

    Unlike a traditional Cisco switch, in which the management plane is integrated into the hardware, on the Cisco Nexus 1000V the VSM is deployed either as a virtual machine on a Microsoft Hyper-V server or as a virtual service blade (VSB) on the Cisco Nexus 1010 or 1110 appliance (Figure 15).
  • Page 18: Control Interface

    Some customers like to keep network management traffic in a network separate from the host management network. By default, the Cisco Nexus 1000V uses the management interface on the VSM to communicate with the VEM. However, this communication can be moved to the control interface by configuring server virtualization switch (SVS) mode to use the control interface.
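    Moving VSM-to-VEM communication from the management interface to the control interface might look like this sketch on the VSM:

      svs-domain
        ! Use control0 instead of mgmt0 for VEM communication
        svs mode L3 interface control0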
  • Page 19: VSM-to-VEM Communication

    Each instance of the Cisco Nexus 1000V is typically composed of two VSMs (in a high-availability pair) and one or more VEMs. The maximum number of VEMs supported by a VSM is 64.
  • Page 20 Nexus1000v(config)# vrf context default
    Nexus1000v(config-vrf)# ip route 0.0.0.0/0 192.168.150.1
    Figure 19 shows the control0 interface used to communicate with the VEM.
  • Page 21 Some customers prefer to move the Microsoft Hyper-V host management interface behind a Microsoft virtual switch and share the physical interface with other virtual machines. In this scenario, no special Cisco Nexus 1000V configuration is needed to enable VSM-to-VEM communication (Figure 20).
  • Page 22 Figure 20. Management and Control0 Interface on the VSM and the Virtual Management NIC Connected to the Microsoft Logical Switch
  • Page 23 Typically, a pNIC does not use VLAN tags for communication; therefore, while moving the management vNIC behind the Cisco Nexus 1000V, set the management VLAN on the uplink profile to native. Failure to do so may lead to loss of connectivity to the host.
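    For example, if host management lives on VLAN 10 (an assumed value), the uplink profile would carry that VLAN untagged, as in this sketch:

      port-profile type ethernet MgmtUplink
        switchport mode trunk
        ! Management VLAN set as native so untagged traffic reaches the host
        switchport trunk native vlan 10
        no shutdown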
  • Page 24 It is highly recommended that the user add only the management pNIC to the Cisco Nexus 1000V while moving the management NIC behind the VEM. Other pNICs can be added to the Cisco Nexus 1000V after the module successfully attaches to the VSM...
  • Page 25: Cisco Nexus 1000V Switch Installation

    Cisco Nexus 1000V Switch Installation Installation of the Cisco Nexus 1000V Switch is beyond the scope of this document. Figure 22 shows the Cisco Nexus 1000V installation steps at a high level for conceptual completeness. For guidance and detailed instructions about installation, please refer to the Cisco Nexus 1000V installation guide.
  • Page 26: Cisco Nexus 1000V Switch Features

    MAC address dynamically, through the pNICs in the server. Each VEM maintains a separate MAC address table. Thus, a single Cisco Nexus 1000V Switch may learn a given MAC address multiple times: as often as once per VEM. For example, one VEM may be hosting a virtual machine, and the virtual machine’s MAC address will be statically learned on the VEM.
  • Page 27: Loop Prevention

    Every ingress packet on a physical Ethernet interface is inspected before it is forwarded. If the source MAC address is internal to the VEM (indicating a packet reflected back by the network), the Cisco Nexus 1000V Switch drops the packet. Likewise, if the destination MAC address is external to the VEM, the switch drops the packet, preventing a loop back to the physical network.
  • Page 28: Switch Port Interfaces

    Microsoft Hyper-V host. An Ethernet, or Eth, interface is represented in standard Cisco interface notation (EthX/Y) using the Cisco NX-OS naming convention “Eth” rather than a speed such as “Gig” or “Fast,” as is the custom in Cisco IOS Software. These Eth interfaces are module specific and are designed to be fairly static within the environment.
  • Page 29: Port Profiles

    A port profile is a collection of interface-level configuration commands that are combined to create a complete network policy. The port profile concept is new, but the configurations in port profiles use the same Cisco syntax that is used to manage switch ports on traditional switches. The VSM administrator:...
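    A minimal sketch of a vEthernet port profile as the VSM administrator might define and publish it (the profile name and max-ports value are assumptions, and the exact publish semantics may vary by release; publishing makes the profile available to Microsoft SCVMM for port classifications):

      port-profile type vethernet WebTier
        max-ports 64
        no shutdown
        state enabled
        ! Make the profile visible to Microsoft SCVMM
        publish port-profile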
  • Page 30: Ethernet Port Profiles

    Eth Port Profile Example
    Uplink port profiles are applied to a pNIC when a Microsoft Hyper-V host is first added to the Cisco Nexus 1000V Switch. The Microsoft SCVMM administrator is presented with a dialog box in which to select the pNICs to be associated with the VEM and the specific uplink port profiles to be associated with those pNICs.
  • Page 31: Network Segments

    The network segment command is new in the Cisco Nexus 1000V Switch for Microsoft Hyper-V. Network segments are used to create Layer 2 networks on the VSM. In the first release of the Cisco Nexus 1000V Switch for Microsoft Hyper-V, only VLAN-based network segments are supported; other segmentation technologies are not.
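    A sketch of a VLAN-based network segment (the segment name and VLAN ID are assumptions; the pool and IP pool template reuse names sketched elsewhere in this guide):

      nsm network segment DMZ-Web
        member-of network segment pool DMZ-SFO
        switchport access vlan 100
        ip pool import template VMAccessPool
        publish network segment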
  • Page 32: Dynamic Port Profiles

    Cisco Nexus 1000V port classification and virtual machine network, a dynamic port profile is created on the Cisco Nexus 1000V VSM and is applied to the virtual switch port on which the virtual machine is deployed. These dynamic port profiles are shared by all virtual machines that have the same virtual machine network and port classification.
  • Page 33: Policy Mobility

    In addition to migrating the policy, the Cisco Nexus 1000V Switches move the virtual machine’s network state, such as the port counters and flow statistics. Virtual machines participating in traffic monitoring activities, such as Cisco NetFlow or Encapsulated Remote Switched Port Analyzer (ERSPAN), can continue these activities uninterrupted by Microsoft live migration operations.
  • Page 34: Management Microsoft Hyper-V Clusters And Hosts

    Microsoft Active Directory servers, DNS servers, SQL servers, and Microsoft SCVMM and other Microsoft System Center roles. The Cisco Nexus 1000V VSM virtual machine should also be deployed on an infrastructure host or cluster. The Cisco Nexus 1000V Logical Switch (VEM) is not created on the infrastructure hosts; instead, the native Microsoft Hyper-V switch is used.
  • Page 35: Layer 3 Mode

    Data Virtual Machine Cluster
    The Cisco Nexus 1000V Logical Switch must be created only on Microsoft Hyper-V hosts that run workload virtual machines. As shown earlier in Figure 29, the Logical Switch (VEM) is created only on workload hosts, never on infrastructure hosts.
  • Page 36: Deployment Topology 2: For Customers With Limited pNICs Per Microsoft Hyper-V Host

    Microsoft Hyper-V switch. The workload virtual machines are deployed on the Cisco Nexus 1000V Logical Switch. The Logical Switch will have at least two adapters connected as the switch uplinks. vPC host mode (explained in detail later in this document) is the recommended configuration for the Cisco Nexus 1000V uplinks to help ensure the high availability of the workload virtual machines.
  • Page 37: Cisco Virtual Interface Card

    Some Cisco UCS functions are similar to those offered by the Cisco Nexus 1000V Switches, but with a different set of applications and design scenarios. Cisco UCS offers the capability to present adapters to physical and virtual machines directly. This solution is a hardware-based Cisco Data Center Virtual Machine Fabric Extender (VM-FEX) solution, whereas the Cisco Nexus 1000V is a software-based VN-Link solution.
  • Page 38 Cisco Nexus 1000V Switch uplinks. This configuration helps ensure that the uplinks are bound to a team. When a member link in the team fails, the Cisco Nexus 1000V VEM helps ensure that traffic from workload virtual machines fails over to one of the remaining links.
  • Page 39 NIC failover configuration required in the OS, hypervisor, or virtual machine. The Cisco VIC adapters - the Cisco UCS M81KR VIC, VIC 1240, and VIC 1280 adapter types - enable a fabric failover capability in which loss of connectivity on a path in use causes traffic to be remapped through a redundant path within Cisco UCS.
  • Page 40: Quality Of Service

    Another distinguishing feature of Cisco UCS is the capability of the VIC to perform CoS-based queuing in hardware. CoS is a value marked on Ethernet packets to indicate the priority in the network. The Cisco UCS VIC has eight traffic queues, which use CoS values of 0 through 7. The VIC also allows the network administrator to specify a minimum bandwidth that must be reserved for each CoS during congestion.
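    On the Cisco Nexus 1000V side, traffic can be marked with a CoS value that the VIC then queues against; a minimal sketch, with the policy name, class, and CoS value as assumptions:

      ! Hypothetical marking policy applied to live-migration traffic
      policy-map type qos MarkLiveMigration
        class class-default
          set cos 6
      port-profile type vethernet LiveMigration
        service-policy type qos input MarkLiveMigration
        no shutdown
        state enabled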
  • Page 41: Upstream Switch Connectivity

    Cisco Nexus 1000V QoS configuration guide.
    Upstream Switch Connectivity
    The Cisco Nexus 1000V can be connected to any upstream switch that supports standards-based Ethernet (any Cisco switch as well as switches from other vendors); it does not require any additional capability on the upstream switch to function properly.
  • Page 42: LACP Offload

    This clustering is transparent to the Cisco Nexus 1000V. When the upstream switches are clustered, the Cisco Nexus 1000V Switch should be configured to use LACP with one port profile, using all the available links. This configuration will make more bandwidth available for the virtual machines and accelerate Live Migration.
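    A sketch of an LACP uplink profile that bundles all available links (the profile name is an assumption):

      feature lacp
      port-profile type ethernet UplinkLACP
        switchport mode trunk
        ! LACP negotiation with the clustered upstream switches
        channel-group auto mode active
        no shutdown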
  • Page 43: Special Portchannel

    Most access-layer switches do not support clustering technology, yet most Cisco Nexus 1000V designs require PortChannels to span multiple switches. The Cisco Nexus 1000V offers several ways to connect the Cisco Nexus 1000V Switch to upstream switches that cannot be clustered. To enable this spanning of switches, the Cisco Nexus 1000V provides a PortChannel-like method that does not require configuration of a PortChannel upstream.
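    One common variant, vPC host mode with MAC pinning, needs configuration only on the Cisco Nexus 1000V side; a sketch, with the profile name assumed:

      port-profile type ethernet UplinkMacPinning
        switchport mode trunk
        ! PortChannel built on the VEM side only; no upstream PortChannel needed
        channel-group auto mode on mac-pinning
        no shutdown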
  • Page 44 However, this approach does not prevent the Cisco Nexus 1000V Switch from constructing a PortChannel on its side, providing the required redundancy in the data center in the event of a failure. If a failure occurs, the Cisco Nexus 1000V Switch will send a gratuitous ARP packet to alert the upstream switch that the MAC address of the VEM learned on the previous link will now be learned on a different link, enabling failover in less than a second.
  • Page 45: Load Balancing

    PortChannel. These algorithms can be divided into two categories: source-based hashing and flow-based hashing. The load-balancing algorithm that the Cisco Nexus 1000V uses can be specified per VEM, so one VEM can implement flow-based hashing, taking advantage of the better load sharing that mode offers, while another VEM not connected to a clustered upstream switch can use MAC address pinning and thus source-based hashing.
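    A sketch of selecting the hashing algorithm for one VEM (the module number and algorithm are illustrative):

      Nexus1000v(config)# port-channel load-balance ethernet source-mac module 3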
  • Page 46: Flow-Based Hashing

    (the Cisco Nexus 1000V VEM in this case). Network-state tracking (NST) mitigates this problem by sending NST probe packets to interfaces in other subgroups of the same VEM.
  • Page 47: Cisco Nexus 1000V Switch For Microsoft Hyper-V Sample Configuration

    Microsoft SCVMM console, Microsoft SCVMM can set the IP address and the default gateway on the virtual machines. When Cisco Nexus 1000V is used to manage virtual networking on Microsoft Hyper-V, the network administrator must define the IP-pool range to be used when virtual machines are deployed on a VLAN-based virtual machine network.
  • Page 48 Create an uplink network. The network uplink command is new in the Cisco Nexus 1000V Switch for Microsoft Hyper-V. Each uplink network configured on the VSM is available as an uplink port profile to the Microsoft SCVMM administrator. The example here creates an uplink network that uses the Ethernet profile UplinkProfile and allows the network segment pools DMZ-SFO and DMZ-NY.
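    A sketch of that uplink network definition (the uplink network name is an assumption; the Ethernet profile and pool names come from the text):

      nsm network uplink DMZ-Uplink
        import port-profile UplinkProfile
        allow network segment pool DMZ-SFO
        allow network segment pool DMZ-NY
        publish network uplink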
  • Page 49: Conclusion

    The comprehensive feature set of the Cisco Nexus 1000V allows the networking team to troubleshoot more rapidly any problems in the server virtualization environment, increasing the uptime of virtual machines and protecting the applications that propel the data center.
