IBM i Virtualization and Open Storage Read-me First
Vess Natchev
Cloud | Virtualization | Power Systems
IBM Rochester, MN
vess@us.ibm.com
July 9, 2010


Summary of Contents for IBM i Virtualization and Open Storage Read-me First, 7-9-2010

  • Page 1 IBM i Virtualization and Open Storage Read-me First Vess Natchev Cloud | Virtualization | Power Systems IBM Rochester, MN vess@us.ibm.com July 9, 2010...
  • Page 2: Table Of Contents

    1. IBM i virtualization solutions
    1.1. IBM i logical partition (LPAR) hosting another IBM i partition
    1.2. IBM i using open storage as a client of the Virtual I/O Server (VIOS)
    1.3. IBM i on a Power blade
    2. IBM i hosting IBM i supported configurations
    2.1.
  • Page 3

    11.1. Configure IBM i networking
    11.2. How to perform IBM i operator panel functions
    11.3. How to display the IBM i partition System Reference Code (SRC) history
    11.4. Client IBM i LPARs considerations and limitations
    11.5. Configuring Electronic Customer Support (ECS) over LAN
    11.6.
  • Page 4: Ibm I Virtualization Solutions

    Windows® workloads on the same platform. The same virtualization technology, which is part of the IBM i operating system, can now be used to host IBM i LPARs. IBM i hosting IBM i is the focus of the first half of this document.
  • Page 5: Software And Firmware

    To determine the storage, network and optical devices supported on each IBM Power server model, refer to the Sales Manual for each model: http://www.ibm.com/common/ssi/index.wss. To determine the storage, network and optical devices supported only by IBM i 6.1, refer to the upgrade planning Web site: https://www-304.ibm.com/systems/support/i/planning/upgrade/futurehdwr.html.
  • Page 6: Virtual Scsi And Ethernet Adapters

    HMC. 3.2. Storage virtualization To virtualize integrated disk (SCSI, SAS or SSD) or LUNs from a SAN system to an IBM i client partition, both HMC and IBM i objects must be created. In the HMC, the minimum required configuration is: •...
  • Page 7 One Network Server Storage Space (NWSSTG) object The NWSD object associates a virtual SCSI server adapter in IBM i (which in turn is connected to a virtual SCSI client adapter in the HMC) with one or more NWSSTG objects. At least one NWSD object must be created in the host for each client, though more are supported.
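The NWSD and NWSSTG objects described on this page are created with IBM i CL commands in the host partition. A minimal sketch; the object names, the 100 GB size, the resource name CTL01, and the partition number are illustrative assumptions, not values from this document:

```
/* Create a storage space to serve as the client's disk (name and size are examples) */
CRTNWSSTG NWSSTG(CLIENT1D1) NWSSIZE(102400) FORMAT(*OPEN)

/* Create the NWSD over the virtual SCSI server adapter resource (CTL01 is an example) */
CRTNWSD NWSD(CLIENT1) RSRCNAME(CTL01) TYPE(*GUEST *OPSYS) PTNNBR(2)

/* Link the storage space to the NWSD, then vary it on to present the disk to the client */
ADDNWSSTGL NWSSTG(CLIENT1D1) NWSD(CLIENT1)
VRYCFG CFGOBJ(CLIENT1) CFGTYPE(*NWS) STATUS(*ON)
```

Exact CRTNWSD parameters vary by release; consult the CL command reference for the level in use.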
  • Page 8: Optical Virtualization

    MB minimum size is a requirement from the storage management Licensed Internal Code (LIC) on the client partition. For an IBM i client partition, up to 16 NWSSTGs can be linked to a single NWSD, and therefore, to a single virtual SCSI connection. Up to 32 outstanding I/O operations from the client to each storage space are supported for IBM i clients.
  • Page 9: Performance

    4.3. Dual hosting An IBM i client partition has a dependency on its host: if the host partition fails, IBM i on the client will lose contact with its disk units. The virtual disks would also become unavailable if the host...
  • Page 10: Implementing Ibm I Client Lpars With An Ibm I Host

    An IBM i client partition can also use the new HEA capability of POWER6 processor-based servers. To assign a logical port (LHEA) on an HEA to an IBM i client partition, see the topic Creating a Logical Host Ethernet Adapter for a running logical partition using the HMC in...
  • Page 11: How To Perform Ibm I Operator Panel Functions

    LAN to a client partition on a virtual LAN are Proxy ARP and Network Address Translation (NAT). To configure Proxy ARP or NAT in the IBM i host partition, follow the instructions in section 5.2 of the Redbook Implementing POWER Linux on IBM System i Platform (http://www.redbooks.ibm.com/redbooks/pdfs/sg246388.pdf).
  • Page 12: Configuring Electronic Customer Support (Ecs) Over Lan

    SCSI configuration in the HMC and the NWSD object in the host IBM i partition. 6.7. Backups As mentioned above, an IBM i client partition with an IBM i host can use a mix of virtual and physical I/O resources. Therefore, the simplest backup and restore approach is to assign an available tape adapter on the system to it and treat it as a standard IBM i partition.
  • Page 14: Ibm I Using Open Storage Supported Configurations

    IBM i using open storage as a client of VIOS. This document will not attempt to list the full device support of VIOS, nor of any other clients of VIOS, such as AIX and Linux. For the general VIOS support statements, including other clients, see the VIOS Datasheet at: http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html.
  • Page 15

    EXP5060 expansion unit with Fibre Channel or SATA drives: It is strongly recommended that SATA drives are used only for test or archival IBM i applications for performance reasons
    DS6800
    DS8100 using Fibre Channel: It is strongly recommended that FATA...
  • Page 16: Software And Firmware

    Storage subsystems connected to SVC: The supported list for IBM i as a client of VIOS follows that for SVC
    DS5020 using Fibre Channel or SATA drives: It is strongly recommended that SATA drives are used only for test or archival IBM i...
  • Page 17: Ibm I Using Open Storage Through Vios Concepts

    See Section 15 of this document for details 8. IBM i using open storage through VIOS concepts The capability to use open storage through VIOS extends the IBM i storage portfolio to include 512-byte-per-sector storage subsystems. The existing IBM i storage portfolio includes integrated SCSI, SAS or Solid State disk, as well as Fibre Channel-attached storage subsystems that support 520 bytes per sector, such as the IBM DS8000 product line.
  • Page 18 AIX and Linux clients. While code changes were made in VIOS to accommodate IBM i client partitions, there is no special version of VIOS in use for IBM i. If you have existing skills in attaching open storage to VIOS and virtualizing I/O resources to client partitions, they will continue to prove useful when creating a configuration for an IBM i client partition.
  • Page 19: Virtual Scsi And Ethernet Adapters

    Hypervisor for Logical Remote Direct Memory Access (LRDMA), data are transferred directly from the Fibre Channel adapter in VIOS to a buffer in memory of the IBM i client partition. In an IBM i client partition, a virtual SCSI client adapter is recognized as a type 290A DCxx storage controller device.
  • Page 20 DDxxx disk unit is not the open storage LUN itself, but the corresponding vtscsiX device. The vtscsiX device correctly reports the parameters of the LUN, such as size, to the virtual storage code in IBM i, which in turn passes them on to storage management.
  • Page 21: Optical Virtualization

    If the physical optical drive is writeable, IBM i will be able to write to it. Similar to LUNs, optical devices can be virtualized to IBM i using the enhanced functions of the HMC; it is not necessary to use the IVM/VIOS command line to create...
  • Page 22: Network Virtualization

    8.3.2. VIOS media repository VIOS provides a capability similar to that of an image catalog (IMGCLG) in IBM i: a repository of media images on disk. Unlike IMGCLGs in IBM i, a single media repository may exist per VIOS. The media repository allows file-backed virtual optical volumes to be made available to the IBM i client partition through a separate vtoptX device.
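The media repository and file-backed virtual optical volumes described here can be sketched from the VIOS command line as follows; the repository size, image file name, and vhost0 are illustrative assumptions:

```
$ mkrep -sp rootvg -size 10G                                   # create the single per-VIOS media repository
$ mkvopt -name install.iso -file /home/padmin/install.iso -ro  # add an image to the repository
$ mkvdev -fbo -vadapter vhost0                                 # create a file-backed optical device (vtoptX)
$ loadopt -vtd vtopt0 -disk install.iso                        # load the image into the virtual drive
```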
  • Page 23: Prerequisites For Attaching Open Storage To Ibm I Through Vios

    (7.1), it is strongly recommended that only Fibre Channel or SAS physical drives are used to create LUNs for IBM i as a client of VIOS. The reason is the performance and reliability requirements of IBM i production workloads. For non-I/O-intensive workloads or nearline storage, SATA or FATA drives may also be used.
  • Page 24: Dual Hosting And Multi-Path I/O (Mpio)

    Note that the dual-VIOS solution above provides a level of redundancy by attaching two separate sets of open storage LUNs to the same IBM i client through separate VIOS partitions. It is not an MPIO solution that provides redundant paths to a single set of LUNs. There are two MPIO scenarios possible with VIOS that remove the requirement for two sets of LUNs: •...
  • Page 25: Redundant Vios Lpars With Client-Side Mpio

    9.3.2.2. Redundant VIOS LPARs with client-side MPIO Beginning with IBM i 6.1 with Licensed Internal Code (LIC) 6.1.1, the IBM i Virtual SCSI (VSCSI) client driver supports MPIO through two or more VIOS partitions to a single set of LUNs (up to a maximum of 8 VIOS partitions).
  • Page 26 Some storage subsystems such as XIV default to no_reserve and do not require a change, while others such as DS4000 and DS5000 default to single_path. The change must be made prior to mapping the LUNs to IBM i and it does not require a restart of VIOS.
  • Page 27: Subsystem Device Driver - Path Control Module (Sddpcm)

    As long as the alternate path through the second VIOS is active, the IBM i LPAR will be able to IPL from the same load source LUN. The alternate path must exist prior to the loss of the original IPL path and prior to powering the IBM i LPAR off.
  • Page 28: Hmc Provisioning Of Open Storage In Vios

    10.3. IBM i installation and configuration The IBM i client partition configuration as a client of VIOS is the same as that for a client of an IBM i 6.1 host partition. Consult the topic Creating an IBM i logical partition that uses IBM i virtual I/O resources using the HMC in the Logical Partitioning Guide: http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphat/iphat.pdf.
  • Page 29: End-To-End Lun Device Mapping

    An IBM i client partition can also use the new HEA capability of POWER6 processor-based servers. To assign a logical port (LHEA) on an HEA to an IBM i client partition, see the topic Creating a Logical Host Ethernet Adapter for a running logical partition using the HMC in the Logical Partitioning Guide: http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphat/iphat.pdf.
  • Page 30: How To Perform Ibm I Operator Panel Functions

    IBM i networking topic in the Information Center: http://publib.boulder.ibm.com/infocenter/systems/scope/i5os/topic/rzajy/rzajyoverview.htm. If the IBM i client partition is using a virtual Ethernet adapter for networking, an SEA must be created in VIOS to bridge the internal virtual LAN (VLAN) to the external LAN. Use the HMC and the instructions in section 8.4 to perform the SEA configuration.
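Creating the SEA itself can also be done from the VIOS command line; a sketch in which ent0 (physical) and ent2 (virtual) are illustrative adapter names and VLAN ID 1 is an assumption:

```
$ lsdev -type adapter                      # identify the physical and virtual Ethernet adapters
$ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1   # bridge the VLAN to the external LAN
```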
  • Page 31: Configuring Electronic Customer Support (Ecs) Over Lan

    11.6. Backups As mentioned above, IBM i as a client of VIOS can use a mix of virtual and physical I/O resources. Therefore, the simplest backup and restore approach is to assign an available tape adapter on the system to it and treat it as a standard IBM i partition. The tape adapter can be any adapter supported by IBM i on IBM Power servers and can be shared with other partitions.
  • Page 32 12.1.2. FlashCopy and VolumeCopy support statements The use of DS4000 and DS5000 FlashCopy and VolumeCopy with IBM i as a client of VIOS is supported as outlined below. Please note that to implement and use this solution, multiple manual steps on the DS4000 or DS5000 storage subsystem, in VIOS and in IBM i are required.
  • Page 33: Enhanced Remote Mirroring (Erm)

    • Full-system FlashCopy and VolumeCopy of the production IBM i logical partition (LPAR) after only using the IBM i 6.1 memory flush to disk (quiesce) function are not supported • Full-system FlashCopy and VolumeCopy when the production IBM i LPAR is running are not supported •...
  • Page 34: Ibm I Using San Volume Controller (Svc) Storage Through Vios

    IBM support organization and not solely by the IBM i Support Center. IBM PowerHA for IBM i is also supported for IBM i as a client of VIOS. PowerHA for IBM i provides an automated, IBM i-driven replication solution that allows clients to leverage their existing IBM i skills.
  • Page 35: Attaching Svc Storage To Ibm I

    The IBM i client partition configuration as a client of VIOS is the same as that for a client of an IBM i 6.1 host partition. Consult the topic Creating an IBM i logical partition that uses IBM i virtual I/O resources using the HMC in the Logical Partitioning Guide: http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphat/iphat.pdf.
  • Page 36: Flashcopy Statements

    14.1.2. FlashCopy statements The use of SVC FlashCopy with IBM i as a client of VIOS is supported as outlined below. Please note that to implement and use this solution, multiple manual steps in SVC, in VIOS and in IBM i are required. Currently, no toolkit exists that automates this solution and it is not part of IBM PowerHA for IBM i.
  • Page 37 14.2.2. Metro and Global Mirror support statements The use of SVC Metro and Global Mirror with IBM i as a client of VIOS is supported as outlined below. Please note that to implement and use this solution, multiple manual steps in SVC, in VIOS and in IBM i are required.
  • Page 38: Metro And Global Mirror

    • Both SVC Metro and Global Mirror are supported by IBM i as a client of VIOS on both IBM Power servers and IBM Power blades • Only full-system replication is supported • Replication of IASPs with Metro or Global Mirror is not supported •...
  • Page 39: Ds5000 Direct Attachment To Ibm I

    LUNs on the DS5100 or DS5300 are then mapped to the world-wide port names (WWPNs) of the FC adapter(s) in the IBM i LPAR. VIOS is no longer required in this case. Note that one or more VIOS LPARs may still exist on the same Power server to take advantage of other technologies, such as Active Memory Sharing (AMS), N_Port ID Virtualization (NPIV), Live Partition Mobility for AIX and Linux or other I/O virtualization.
  • Page 40: Best Practices, Limitations And Performance

    There are no special requirements for the load source LUN, except of course sufficient size to qualify as a load source for IBM i 6.1. The load source does not need to be the very first LUN mapped to IBM i. When performing an Initial Program Load (IPL), an active path to the load source LUN is required.
  • Page 41: Sizing And Configuration

    IBM i attachment on the storage subsystem, the host type should be “IBM i.” To configure the SAN fabric, follow the instructions from your FC switch manufacturer. To create the IBM i LPAR and assign the FC adapter(s) to it, follow the instructions in the Logical Partitioning Guide, available at: http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphat/iphat.pdf.
  • Page 42: N_Port Id Virtualization (Npiv) For Ibm I

    SCSI disk. Instead, a port on the physical FC adapter is mapped to a Virtual Fibre Channel (VFC) server adapter in VIOS, which in turn is connected to a VFC client adapter in IBM i. When the VFC client adapter is created, two unique world-wide port names (WWPNs) are generated for it.
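Mapping the VFC server adapter to a port on the physical FC adapter is done in VIOS; a sketch with illustrative device names (vfchost0, fcs0):

```
$ lsnports                               # confirm the physical FC port supports NPIV
$ vfcmap -vadapter vfchost0 -fcp fcs0    # bind the VFC server adapter to the physical port
$ lsmap -all -npiv                       # verify the mapping and the client's WWPNs
```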
  • Page 43: Supported Hardware And Software

    From the storage subsystem’s perspective, the LUNs, volume group and host connection are created as though the IBM i LPAR is directly connected to the storage through the SAN fabric. While VIOS still plays a role with NPIV, that role is much more of a passthrough one compared to VSCSI.
  • Page 44: Copy Services Support

    To perform the LPAR and VIOS setup, consult Chapter 2.9 in the Redbook PowerVM Managing and Monitoring (SG24-7590): http://www.redbooks.ibm.com/abstracts/sg247590.html?Open. While the examples given are for an AIX client of VIOS, the procedure is identical for an IBM i client.
  • Page 45: Additional Resources

    SDD-PCM driver: http://www.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=D430&uid=ssg1 S4000201&loc=en_US&cs=utf-8&lang=en • IBM Redbooks site: http://www.redbooks.ibm.com • IBM i and IBM System Storage: A Guide to Implementing External Disks on IBM i (Redbook): http://www.redbooks.ibm.com/abstracts/sg247120.html?Open. • Implementing IBM Tape in i5/OS (Redbook): http://www.redbooks.ibm.com/abstracts/sg247440.html?Open. • Introduction to the IBM System Storage DS5000 Series (Redbook): http://www.redbooks.ibm.com/abstracts/sg247676.html?Open.
  • Page 46 • Advanced POWER Virtualization on IBM System p5: Introduction and Configuration (Redbook): http://www.redbooks.ibm.com/abstracts/sg247940.html?Open • VIOS command reference: http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp?topic=/iphb1/iphb1_ vios_commandslist.htm • VIOS Datasheet: http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html • DLPAR Checklist: http://www.ibm.com/developerworks/systems/articles/DLPARchecklist.html. • PowerVM Managing and Monitoring (Redbook): http://www.redbooks.ibm.com/abstracts/sg247590.html?Open. 18.4. SVC • SVC overview Web site: http://www.ibm.com/systems/storage/software/virtualization/svc •...
  • Page 47: Trademarks And Disclaimers

    19. Trademarks and disclaimers This document was developed for IBM offerings in the United States as of the date of publication. IBM may not make these offerings available in other countries, and the information is subject to change without notice. Consult your local IBM business contact for information on the IBM offerings available in your area.
  • Page 48 Virtualization Engine, Visualization Data Explorer, Workload Partitions Manager, X-Architecture, z/Architecture, z/9. A full list of U.S. trademarks owned by IBM may be found at: http://www.ibm.com/legal/copytrade.shtml. The Power Architecture and Power.org wordmarks and the Power and Power.org logos and related marks are trademarks and service marks licensed by Power.org.
