HP XP20000/XP24000 Operation Manual

HP StorageWorks XP Disk Array Mainframe Host Attachment and Operations Guide (A5951-96154, September 2010)

HP StorageWorks
XP Disk Array Mainframe Host Attachment and
Operations Guide
HP XP24000 Disk Array
HP XP20000 Disk Array
Abstract
This guide provides requirements and procedures for connecting an XP disk array to a host system, and for
configuring the disk array for use with the mainframe operating system. This document is intended for system
administrators, HP representatives, and authorized service providers who are involved in installing, configuring,
and operating the HP XP storage systems.
Part Number: A5951-96154
Third edition: October 2010


Summary of Contents for HP XP20000/XP24000

  • Page 1 HP StorageWorks XP Disk Array Mainframe Host Attachment and Operations Guide HP XP24000 Disk Array HP XP20000 Disk Array Abstract This guide provides requirements and procedures for connecting an XP disk array to a host system, and for configuring the disk array for use with the mainframe operating system. This document is intended for system administrators, HP representatives, and authorized service providers who are involved in installing, configuring, and operating the HP XP storage systems.
  • Page 2 Legal and notice information © Copyright 2007-2010 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S.
  • Page 3: Table Of Contents

    Contents 1 Overview of mainframe operations ............7 Mainframe compatibility and functionality ..................7 Connectivity ..........................8 Program products for mainframe ....................9 2 FICON/zHPF and ESCON host attachment ......... 11 FICON/zHPF and ESCON for zSeries hosts ................. 11 Setting up ESCON and FICON/zHPF links ................11 Multi-pathing for ESCON and FICON/zHPF .................
  • Page 4 Hardware definition using IOCP (MVS, VM, or VSE) ............... 30 Hardware definition using HCD (MVS/ESA) ................32 Defining the storage system to VM/ESA and z/VSE systems ............. 42 Defining the storage system to TPF ..................42 Defining the storage system to mainframe Linux ..............42 Mainframe operations .......................
  • Page 5 Figures Fiber connectors ..................... 15 FICON protocol read/write sequence ................ 17 zHPF protocol read/write sequence ................17 Mainframe Logical Paths (Example 1) ................ 18 Mainframe logical paths (example 2) ................ 19 FICON/zHPF channel adapter support for logical paths (example 1) ......19 FICON/zHPF channel adapter support for logical paths (example 2) ......
  • Page 6 Tables Mainframe operating system support ................7 XP Remote Web Console-based software for mainframe users ......... 9 Host/Server-Based Software for Mainframe Users ............10 Comparing ESCON and FICON/zHPF physical specifications ........14 Comparing ESCON and FICON/zHPF logical specifications ........15 Operating environment required for FICON/zHPF .............
  • Page 7: Overview Of Mainframe Operations

    1 Overview of mainframe operations This chapter provides an overview of mainframe host attachment issues, functions, and operations. Mainframe compatibility and functionality The XP24000/XP20000 disk arrays provide full System-Managed Storage (SMS) compatibility and support the following features in the mainframe environment: sequential data striping; cache fast write (CFW) and DASD fast write (DFW); enhanced dynamic cache management...
  • Page 8: Connectivity

    Connectivity The storage system supports all-mainframe, all-open-system, and multi-platform configurations. The CHAs process the channel commands from the hosts and manage host access to cache. In the mainframe environment, the channel adapters (CHAs) perform CKD-to-FBA and FBA-to-CKD conversion for the data in cache. Each CHA feature (pair of boards) is composed of one type of host channel interface: FICON/zHPF, Extended Serial Adapter (ExSA) (compatible with ESCON protocol), or Fibre Channel Protocol (FCP).
  • Page 9: Program Products For Mainframe

    Program products for mainframe The following tables list and describe the XP Remote Web Console-based products and the host- and server-based products for use with mainframe systems. Table 2 XP Remote Web Console-based software for mainframe users Name Description Provides virtual storage capacity to simplify storage addition and administration, XP Thin Provisioning eliminate application service interruptions, and reduce costs.
  • Page 10: Host/Server-Based Software For Mainframe Users

    Name Description Allows users to protect data from I/O operations performed by hosts. Users Volume Retention Manager can assign an access attribute to each logical volume to restrict read and/or write operations, preventing unauthorized access to data. Table 3 Host/Server-Based Software for Mainframe Users Name Description Provides control and monitoring of mainframe-based replication products that...
  • Page 11: Ficon/Zhpf And Escon Host Attachment

    2 FICON/zHPF and ESCON host attachment This chapter describes and provides general instructions for attaching the storage system to a mainframe host using a FICON/zHPF or ESCON CHA. For details on FICON/zHPF connectivity, FICON/Open intermix configurations, and supported HBAs, switches, and directors, contact your HP representative. FICON/zHPF and ESCON for zSeries hosts This section describes some things you should consider before you configure your system with FICON/zHPF or ESCON adapters for zSeries hosts.
  • Page 12: Escon Connectivity On Zseries Hosts

    For zSeries host attachment using ESCON, only the first 16 addresses of the LSSs can be used. The ranges of supported device addresses may be noncontiguous. Devices that are not mapped to a logical device respond and show address exceptions. When a primary controller connects to a secondary controller, the primary connection converts to a channel.
  • Page 13: Cable Lengths And Types For Escon Adapters On Zseries Hosts

    NOTE: For optimum performance, use a cable shorter than 103 km (64 mi). An ESCON host channel can connect to more than one storage unit port through an ESCON director. The S/390 or zSeries host system attaches to one port of an ESCON host adapter in the storage unit. Each storage unit adapter card has two ports.
  • Page 14: Attaching Ficon/Zhpf Chas

    are tag numbers. You need tag numbers for the path setup. To determine the tag number, use the devserv command with the rcd (read configuration data) parameter. Attaching FICON/zHPF CHAs You can attach the storage system to a host system using FICON adapters. The storage system can be configured with up to 112 FICON physical channel interfaces.
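The devserv query described above can be entered at the z/OS operator console; a minimal sketch, assuming device number 8000 (the device number is a placeholder, not a value from this guide):

```
DS QD,8000,RDC
```

DEVSERV QDASD (abbreviated DS QD) with the RDC parameter displays the read configuration data for the device, which includes the tag numbers needed for path setup.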
  • Page 15: Ficon/Zhpf Logical Specifications

    Physical specifications (Item: ESCON; FICON/zHPF): Multi mode / 62.5 um (short wave): ESCON 3 km; FICON/zHPF 300 m (1 Gbps) / 150 m (2 Gbps) / 75 m (4 Gbps). Multi mode / 50 um: FICON/zHPF 500 m (1 Gbps) / 300 m (2 Gbps) / 150 m (4 Gbps). Connector: ESCON connector (ESCON); LC-duplex (FICON/zHPF; see Figure 1). Topologies: Point-to-Point and Switched Point-to-Point for both. Figure 1 Fiber connectors FICON/zHPF logical specifications...
  • Page 16: Ficon/Zhpf Operating Environment

    FICON/zHPF operating environment The following table lists the operating environment required for FICON/zHPF. For more information about FICON/zHPF operating environments, including supported hosts and operating environments, see the IBM Redbooks. Table 6 Operating environment required for FICON/zHPF Items Specification z9, z10 FICON Express FICON Express2 FICON Express4...
  • Page 17: Hardware Specifications

    Figure 2 FICON protocol read/write sequence Figure 3 zHPF protocol read/write sequence Hardware specifications For details on FICON/zHPF connectivity, FICON/Open intermix configurations, and supported HBAs, switches, and directors for the storage array, contact your HP representative. FICON CHAs The CHAs contain the microprocessors that process the channel commands from the host(s) and manage host access to cache.
  • Page 18: Logical Paths

    Number of ports per storage system (DKA slot used): 8MFS: 8 / 16 / 24 / 32 / 40 / 48 / 56 / 64 / 72 / 80 / 88 / 96 / 104 / 112; 8MFLR: 16 / 24 / 32 / 40 / 48 / 56 / 64 / 72 / 80 / 88 / 96 / ...
  • Page 19: Mainframe Logical Paths (Example 2)

    Figure 5 Mainframe logical paths (example 2) The FICON /zHPF CHAs provide logical path bindings that map user-defined FICON/zHPF logical paths. Specifically: Each CU port can access 255 CU images (all CU images of logical storage system). To the CU port, each logical host path (LPn) connected to the CU port is defined as a channel image number and a channel port address.
  • Page 20: Supported Topologies

    Maximum number of CU images per storage system is 64. Maximum number of logical paths per storage system is 131,072 (2048 x 64). Figure 7 FICON/zHPF channel adapter support for logical paths (example 2) Supported topologies FICON and FICON/zHPF support the same topologies. Point-to-point topology A channel path that consists of a single link interconnecting a FICON channel in FICON native (FC) mode to one or more FICON control unit images (logical control units) forms a point-to-point...
  • Page 21: Cascaded Ficon Topology

    Sharing a control unit through a Fibre Channel switch allows communication between a number of channels and the control unit to occur either: Over one switch-to-CU link, such as when a control unit has only one link to the Fibre Channel switch, or Over multiple-link interfaces, such as when a control unit has more than one link to the Fibre Channel switch.
  • Page 22: Example Of A Cascaded Ficon Topology

    Figure 10 Example of a cascaded FICON topology In a cascaded FICON topology, one or two switches reside at the topmost (root) level, between the channel (CHL) and disk controller (DKC) ports (see Figure 11). With this configuration, multiple channel images and multiple control unit images can share the resources of the Fibre Channel link and Fibre Channel switches, so that multiplexed I/O operations can be performed.
  • Page 23: Required High-Integrity Features

    Figure 11 Example of ports in a cascaded FICON topology Required high-integrity features FICON directors in a cascaded configuration (see Figure 12) require switches that support the following high-integrity features: Fabric binding: This feature lets an administrator control the switch composition of a fabric by registering WWNs in a membership list and explicitly defining which switches are capable of forming a fabric.
  • Page 24: Viewing Path Information

    Viewing path information Depending on the microcode version installed on the storage system, the Link values displayed on the service processor (SVP) Logical Path Status screen may appear as 3-byte values for cascaded topologies: The first byte corresponds to the switch address. The second byte corresponds to the port address.
  • Page 25: Logical Host Connection Specifications

    Table 8 FICON host and RAID physical connection specifications. Host G5/G6: Native FICON; 1 Gbps link bandwidth. Host z900: Native FICON; 1 Gbps. Host z990: FICON Express / FICON Express2; 1 / 2 Gbps. Host z9, z10: FICON Express2 / FICON Express4; 4 Gbps. XP disk array: 1 / 2 / 4 Gbps link bandwidth...
  • Page 26: Enabling XP for Compatible High Performance FICON Connectivity Software (zHPF) Operations

    Figure 16, up to 16,384 unit addresses are supported for each CHL port (in this case, the maximum limit is increased). In this example, the CHL path configuration can be reduced. Figure 16 Logical host connections (example 2 - FICON/zHPF) Enabling XP for Compatible High Performance FICON Connectivity Software (zHPF) operations Activating the zHPF program product The zHPF PP license is required to activate zHPF on the storage system.
  • Page 27: Installing Zhpf In A Switch Cascading Configuration

    Installing zHPF in a switch cascading configuration In the storage system, when the zHPF PP is installed, the zHPF feature is enabled per channel path. For point-to-point (direct) connection and single-switch connection, zHPF is dynamically enabled per channel path. However, in cascading-switch connections, it is not automatically enabled (if you perform option (1) or (2) from “Enabling the zHPF function”...
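On the z/OS host side, zHPF use is controlled through standard IOS settings; a hedged sketch using generic z/OS operator commands (these are standard z/OS facilities, not commands taken from this guide):

```
SETIOS ZHPF,YES
D IOS,ZHPF
```

SETIOS ZHPF,YES enables zHPF dynamically, and D IOS,ZHPF displays the current state; to make the setting persistent across IPLs, specify ZHPF=YES in the IECIOSxx parmlib member.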
  • Page 28 FICON/zHPF and ESCON host attachment...
  • Page 29: Mainframe Operations

    3 Mainframe operations This chapter discusses the operations available for mainframe hosts. Mainframe configuration The first step in configuring the storage system is to define the storage system to the mainframe host(s). The three basic areas requiring definition are: Subsystem ID (SSIDs) Hardware definitions, including I/O Configuration Program (IOCP) or Hardware Configuration Definition (HCD) Operating system definitions (HCD or OS commands)
  • Page 30: Mainframe Hardware Definition

    Mainframe hardware definition Hardware definition using IOCP (MVS, VM, or VSE) The I/O Configuration Program (IOCP) can be used to define the storage system in MVS, VM, and VSE environments (wherever HCD cannot be used). The storage system supports up to 255 control unit (CU) images and up to 65,280 LDEVs.
  • Page 31
    UNIT=2105,CUADD=04,UNITADD=((00,256))
    CNTLUNIT CUNUMBR=8600,PATH=(F9,FB,FD,FF),
             LINK=(04,**,**,05),
             UNIT=2105,CUADD=05,UNITADD=((00,256))
    IODEVICE ADDRESS=(8100,064),CUNUMBR=(8100),STADET=Y,UNIT=3390B
    IODEVICE ADDRESS=(8180,128),CUNUMBR=(8100),STADET=Y,UNIT=3390A*
             ,UNITADD=80
    IODEVICE ADDRESS=(8200,064),CUNUMBR=(8200),STADET=Y,UNIT=3390B
    IODEVICE ADDRESS=(8280,128),CUNUMBR=(8200),STADET=Y,UNIT=3390A*
             ,UNITADD=80
    IODEVICE ADDRESS=(8300,064),CUNUMBR=(8300),STADET=Y,UNIT=3390B
    IODEVICE ADDRESS=(8380,128),CUNUMBR=(8300),STADET=Y,UNIT=3390A*
             ,UNITADD=80
    IODEVICE ADDRESS=(8400,064),CUNUMBR=(8400),STADET=Y,UNIT=3390B
    IODEVICE ADDRESS=(8480,128),CUNUMBR=(8400),STADET=Y,UNIT=3390A*
             ,UNITADD=80
    IODEVICE ADDRESS=(8500,128),CUNUMBR=(8500),STADET=Y,UNIT=3390B
    IODEVICE ADDRESS=(8580,128),CUNUMBR=(8500),STADET=Y,UNIT=3390A
    IODEVICE ADDRESS=(8600,128),CUNUMBR=(8600),STADET=Y,UNIT=3390B
    IODEVICE ADDRESS=(8680,128),CUNUMBR=(8600),STADET=Y,UNIT=3390A
    The following example shows a sample IOCP hardware definition for a storage system configured with: a 3990 ID; two (2) LPARs called PROD and TEST sharing 4 ExSA (ESCON) channels connected over 2 ESCDs...
  • Page 32: Hardware Definition Using Hcd (Mvs/Esa)

    To protect data integrity when multiple operating systems share these volumes, these devices require FEATURE=SHARED. NOTE: If you maintain separate IOCP definition files and create your SCDS or IOCDS manually by running the IZP IOCP or ICP IOCP program, you must define each CU image on a storage system using one CNTLUNIT statement in IOCP.
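The one-CNTLUNIT-per-CU-image rule can be illustrated with a minimal sketch (the CHPIDs, control unit number, CUADD, and device numbers below are placeholders, not values from this guide):

```
* One CNTLUNIT statement per CU image; CUADD selects the CU image.
CNTLUNIT CUNUMBR=8000,PATH=(50,54),UNIT=2105,CUADD=00,UNITADD=((00,256))
IODEVICE ADDRESS=(8000,256),CUNUMBR=(8000),STADET=Y,UNIT=3390B,FEATURE=SHARED
```

FEATURE=SHARED on the IODEVICE statement marks the devices as shared between the operating systems, as required above.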
  • Page 33: Hcd Definition For 256 Lvis

    Parameter Value Number of devices Device type 3390 Connected to CUs Specify the control unit number(s). Table 11 HCD definition for 256 LVIs Parameter Value Control Frame: Control unit number Specify the control unit number. NOCHECK* Use UIM 3990 for more than 128 logical Control unit type paths.
  • Page 34 From an ISPF/PDF primary options menu, select the HCD option to display the basic HCD panel. On this panel you must verify the name of the IODF or IODF.WORK I/O definition file to be used. San Diego OS/390 R2.8 Master MENU OPTION ===>...
  • Page 35 On the Define, Modify, or View Configuration Data panel, select option 4 to display the Control Unit List panel. OS/390 Release 5 HCD .------------- Define, Modify, or View Configuration Data --------------. _ Select type of objects to define, modify, or view data. 4_ 1.
  • Page 36 On the Add Control Unit panel, input the following new information, or edit the information if pre-loaded from an “Add like” operation, and then press Enter: Control unit number Control unit type: 2105 Switch information only if a switch exists. Otherwise leave switch and ports blank. .-------------------------- Add Control Unit ---------------------------.
  • Page 37 On the Add Control Unit panel, enter CHPIDs that attach to the control unit, the control unit address, the device starting address, and the number of devices supported, and then press Enter. Goto Filter Backup Query Help .--------------------------- Add Control Unit ----------------------------. | Specify or revise the following values.
  • Page 38 On the Control Unit List panel, add devices to the new Control Unit, input an S next to CU 8000, and then press Enter. Goto Filter Backup Query Help -------------------------------------------------------------------------- Control Unit List Row 40 of 41 Command ===> ___________________________________________ Scroll ===> PAGE Select one or more control units, then press Enter.
  • Page 39 On the Add Device panel, enter the following, and then press Enter: Device number Number of devices Device type: 3390, 3390B for PAV base device, or 3390A for PAV alias device Goto Filter Backup Query Help .-------------------------------- Add Device ---------------------------------. | Specify or revise the following values.
  • Page 40 On the Define Device / Processor panel, enter the values shown in the following screen, and press Enter. .------------------------- Define Device / Processor -------------------------. | Specify or revise the following values. | Device number . : 8000 Number of devices ..: 128 | Device type .
  • Page 41 On the Define Device to Operating System Configuration panel, input an S next to the Config ID, and then press Enter. .----------- Define Device to Operating System Configuration -----------. Row 1 of 1 | | Command ===> _____________________________________ Scroll ===> PAGE | Select OSs to connect or disconnect devices, then press Enter.
  • Page 42: Defining The Storage System To Vm/Esa And Z/Vse Systems

    The Update Serial Number, Description and VOLSER panel now displays the device addresses. To add more control units or device addresses, repeat the previous steps. .---------- Update Serial Number, Description and VOLSER -----------. Row 1 of 128 | | Command ===> _________________________________ Scroll ===> PAGE | Device number .
  • Page 43: Mainframe Operations

    For information on supported versions, Linux kernel levels, and other details, contact your HP representative. For information on preparing the storage system for Linux host attachment, see the IBM publication. Mainframe operations Initializing the LVIs The storage system LVIs require only minimal initialization before being brought online. The following shows an MVS ICKDSF JCL example of a minimal init job to write a VOLID and VTOC.
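The minimal init job referred to above did not survive extraction; the following is a hedged reconstruction (the job card, unit address, volume serial, and VTOC extent are placeholders, not values from this guide):

```
//INITVOL  JOB (ACCT),'INIT LVI',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=ICKDSF
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  INIT UNITADDRESS(8000) NOVERIFY VOLID(XP8000) -
       VTOC(0,1,14)
/*
```

INIT with NOVERIFY writes the VOLID and VTOC without checking the existing volume serial; on a volume that may already be in use, VERIFY(volser) is the safer choice.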
  • Page 44 Command Argument Storage System Return Code NOPRESERVE, RAMAC CC = 0, ALT information not displayed. NOSKIP, NOCHECK XP disk array CC = 0 ALLTRACKS, ASSIGN, RAMAC CC = 12 Invalid parameter(s) for device type. RECLAIM In case of PRESERVE: CC = 12 XP disk array In case of NO PRESERVE: CC = 0.
  • Page 45: Z/Os (Mvs) Cache Operations

    CPVOLUME: RAMAC: CC = 0, READCHECK parameter not allowed; XP disk array: CC = 0.
    z/OS (MVS) cache operations To display the cache statistics under MVS DFSMS, use the operator command D SMS,CACHE. The following example shows the cache statistics reported by the storage system. The storage system reports cache statistics for each SSID in the storage system.
  • Page 46: Zvm (Vm/Esa) Cache Operations

    NOTE: In normal cache replacement, bypass cache, or inhibit cache loading mode, the storage system performs a special function to determine whether the data access pattern from the host is sequential. If the access pattern is sequential, the storage system transfers contiguous tracks from the disks to cache ahead of time to improve cache hit rate.
  • Page 47: Zvse (Vse/Esa) Cache Operations

    DESTAGE SUBSYSTEM
    zVSE (VSE/ESA) cache operations When using VSE/ESA to manage the storage system, the following CACHE commands are effective across multiple SSIDs:
    CACHE SUBSYS=cuu,ON|OFF|STATUS
    CACHE SUBSYS=cuu,FAST,ON|OFF
    CACHE SUBSYS=cuu,NVS,ON|OFF
    CACHE SUBSYS=cuu,REINIT
    NOTE: SIMs indicating a drive failure may not be reported to the VSE/ESA console. Because the RAID technology and dynamic spare drives ensure non-stop processing, a drive failure may not be noticed by the console operator.
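For example, to query and then enable caching for the SSID reached through a VSE unit (the cuu value 260 is a placeholder, not a value from this guide):

```
CACHE SUBSYS=260,STATUS
CACHE SUBSYS=260,ON
```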
  • Page 48 Mainframe operations...
  • Page 49: Linux Operations

    4 Linux operations This chapter describes storage system operations in a Linux host environment. Overview of zLinux operations The storage system supports attachment to the following mainframe Linux operating systems: Red Hat Linux for S/390 and zSeries; SuSE Linux Enterprise Server for IBM zSeries. For information on supported versions, Linux kernel levels, and other details, contact your HP representative.
  • Page 50: Attaching Fcp Adapters To Zseries Hosts Running Linux

    NOTE: A zSeries FCP host system running SuSE SLES 9 or Red Hat Enterprise Linux 3.0 can only be attached through a switched-fabric configuration. Hosts cannot be attached using a direct configuration. Attaching FCP adapters to zSeries hosts running Linux Linux solutions are available for the 31- and 64-bit environments. The availability of this option depends on the zSeries model and the Linux distribution.
  • Page 51: Setting Up Storage Units For Zseries Hosts Running Linux

    Setting up storage units for zSeries hosts running Linux Begin by collecting the following software configuration information to prepare a Linux system for accessing the storage unit through a Fibre Channel: Host name of the server hosting the Linux system. Device address and CHPID of the FCP port that is attached to the Linux machine.
  • Page 52: Setting Up A Linux System To Use Fcp Protocol Devices On Zseries Hosts

    FCP port on the storage unit: Enclosure 3 Slot 1 WWPN of the FCP port on the storage unit: 50:05:07:63:00:c8:95:89 Setting up a Linux system to use FCP protocol devices on zSeries hosts Begin by collecting the following software configuration information to prepare a Linux system to access the storage unit through a Fibre Channel: Host name of the server hosting the Linux system Device address (and CHPID) of the FCP port that is attached to the Linux machine...
  • Page 53: Ficon And Escon Migration Overview

    “Setting up storage units for zSeries hosts running Linux” on page 51 provides an example of the prerequisite information that must be obtained to run FCP Linux on the zSeries. Choose one of the following methods to add devices: write a script, or manually add the device. To add more than one device to your SCSI configuration, write a small script that includes all the required parameters.
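The manual method can be sketched for the 2.6-kernel distributions this chapter covers (SLES 9 / RHEL 3 era); the device bus-ID and LUN below are placeholders, the WWPN is the example value from the previous section, and the zfcp sysfs interface differs on current kernels:

```
# Bring the FCP subchannel online (bus-ID 0.0.b402 is a placeholder)
echo 1 > /sys/bus/ccw/drivers/zfcp/0.0.b402/online
# Register the storage unit's FCP port by its WWPN
echo 0x5005076300c89589 > /sys/bus/ccw/drivers/zfcp/0.0.b402/port_add
# Add a LUN under that port
echo 0x0000000000000000 > /sys/bus/ccw/drivers/zfcp/0.0.b402/0x5005076300c89589/unit_add
```

After the unit_add, the new SCSI device should appear to the SCSI midlayer and can be checked under /proc/scsi.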
  • Page 54: Escon To Ficon Migration Example For A Zseries Or S/390 Host

    Figure 17 Basic ESCON Configuration The channels are grouped into a channel-path group for multi-pathing capability to the storage unit ESCON adapters. ESCON to FICON migration example for a zSeries or S/390 host The following figure shows another example of an S/390 or zSeries host system with four ESCON channels.
  • Page 55: Four Channel Escon System With Two Ficon Channels Added

    Figure 18 Four channel ESCON system with two FICON channels added This figure shows four ESCON adapters and two ESCON directors. The illustration also shows the channel path group and FICON directors through which the two FICON adapters are installed in the storage unit.
  • Page 56: Ficon Configuration Example For A Zseries Or S/390 Host

    FICON configuration example for a zSeries or S/390 host The following figure provides a FICON configuration example for a zSeries or S/390 host that illustrates how to remove the ESCON paths. The S/390 or zSeries host has four ESCON channels connected to two ESCON directors. The S/390 or zSeries host system also has two FICON channels.
  • Page 57: Migrating From A Ficon Bridge To A Native Ficon Attachment

    Migrating from a FICON bridge to a native FICON attachment FICON bridge overview The FICON bridge is a feature card of the ESCON Director 9032 Model 5. The FICON bridge supports an external FICON attachment and connects internally to a maximum of eight ESCON links. The volume on these ESCON links is multiplexed on the FICON link.
  • Page 58: Native Ficon Configuration On A Zseries Or S/390 Host

    Figure 21 FICON mixed channel configuration In the example, one FICON bridge was removed from the configuration. The FICON channel that was connected to that bridge is reconnected to the new FICON director using the storage unit FICON adapter. The channel-path group was changed to include the new FICON path. The channel-path group is now a mixed ESCON and FICON path group.
  • Page 59: Rscns On Zseries Hosts

    Figure 22 Native FICON Configuration RSCNs on zSeries hosts McDATA and CNT switches ship without any configured zoning. This unzoned configuration enables the default zone on some McDATA switches. This configuration enables all ports in the switch with a FC connection to communicate with each other and to receive registered state change notifications about each other.
  • Page 60 Linux operations...
  • Page 61: Troubleshooting

    5 Troubleshooting This chapter provides error codes, troubleshooting guidelines and customer support contact information. Troubleshooting For troubleshooting information on the storage system, see the HP StorageWorks XP24000/XP20000 Disk Array Owner Guide. For troubleshooting information on XP Remote Web Console, see the HP StorageWorks XP24000/XP20000 Remote Web Console User Guide.
  • Page 62 Troubleshooting...
  • Page 63: Support And Other Resources

    6 Support and Other Resources Contacting HP For worldwide technical support information, see the HP support website: http://www.hp.com/support Before contacting HP, collect the following information: Product model names and numbers Technical support registration number (if applicable) Product serial numbers Error messages Operating system type and revision level Detailed questions Subscription service...
  • Page 64: Conventions For Storage Capacity Values

    http://www.hp.com/go/storage
    http://www.hp.com/support/manuals
    http://www.hp.com/storage/spock
    http://www.hp.com/go/xpmainframe
    Conventions for storage capacity values HP XP storage systems use the following values to calculate physical storage capacity values (hard disk drives):
    1 KB (kilobyte) = 1,000 (10^3) bytes
    1 MB (megabyte) = 1,000^2 bytes
    1 GB (gigabyte) = 1,000^3 bytes
    1 TB (terabyte) = 1,000^4 ...
  • Page 65: Glossary

    Glossary
    AL-PA: Arbitrated loop physical address.
    CHA: Channel adapter.
    CHPID: Channel path ID.
    command device: A volume on the disk array that accepts HP StorageWorks Continuous Access or HP StorageWorks Business Copy control operations, which are then executed by the array.
    CU: Control unit.
  • Page 66 FICON Fibre connectivity. Hardware that connects the mainframe to the control unit. Host bus adapter. Hardware Configuration Definition. Hardware management console. host mode Each port can be configured for a particular host type. These modes are represented as two-digit hexadecimal numbers. For example, host mode 08 represents an HP-UX host.
  • Page 67 R-SIM Remote service information message. Storage area network. A network of storage devices available to one or more servers. Service information message. SNMP Simple Network Management Protocol. A widely used network monitoring and control protocol. Data is passed from SNMP agents, which are hardware and/or software processes reporting activity in each network device (hub, router, bridge, and so on) to the workstation console used to oversee the network.
  • Page 68 Glossary...
  • Page 69: Index

    Index functions algorithms nocheck, dynamic cache mgmt, alias address range, glossary, cache loading mode, statistics, hardware definition CNTLUNIT statement, iocp, commands help cache, obtaining, idcams setcache, listdata, technical support, listdata status, contacting HP, conventions storage capacity values, minimal init job, conversion fba to ckd, copy functions...
  • Page 70 storage capacity values conventions, Subscriber's Choice, HP, switch information, system vm/esa, technical support, time interval mih max, track transfer counters report, utilities hcd for lvi, ickdsf, volume logical caching status, websites HP Subscriber's Choice for Business, product manuals,...
