HP P9000 Configuration Manual

HP StorageWorks P9000 Configuration Guide (AV400-96380, May 2011).


HP StorageWorks
P9000 Configuration Guide
P9500 Disk Array
Abstract
This guide provides requirements and procedures for connecting a P9000 disk array to a host system, and for configuring the
disk array for use with a specific operating system. This document is intended for system administrators, HP representatives,
and authorized service providers who are involved in installing, configuring, and operating P9000 disk arrays.
HP Part Number: AV400-96380
Published: May 2011
Edition: Second


   Summary of Contents for HP P9000


  • Page 2

© Copyright 2010, 2011 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

  • Page 3: Table Of Contents

    Contents
    1 Overview ......10
        What's in this guide ......10
        Audience ......10
        Features and requirements ......10
            Fibre Channel interface ......11
            Device emulation types ......12
            Failover ......12
        SNMP configuration ......13
        RAID Manager command devices ......13
    2 HP-UX ......14
        Installation roadmap ......14
        Installing and configuring the disk array ......14
            Defining the paths ......15
            Setting the host mode and host group mode for the disk array ports ......15
            Setting the system option modes ......15
            Configuring the Fibre Channel ports ......16
        Installing and configuring the host ......16...

  • Page 4: Table Of Contents

        Verifying the host recognizes array devices ......34
        Configuring disk devices ......34
            Writing signatures ......34
            Creating and formatting disk partitions ......35
            Verifying file system operations ......35
    4 Novell NetWare ......36
        Installation roadmap ......36
        Installing and configuring the disk array ......36
            Defining the paths ......36
            Setting the host mode and host group mode for the disk array ports ......37
            Configuring the Fibre Channel ports ......37
        Installing and configuring the host ......37
            Loading the operating system and software ......37...

  • Page 5: Table Of Contents

            Setting the host mode for the disk array ports ......54
            Setting the UUID ......54
            Setting the system option modes ......55
            Configuring the Fibre Channel ports ......55
        Installing and configuring the host ......55
            Loading the operating system and software ......56
            Installing and configuring the FCAs ......56
            Clustering and fabric zoning ......56
            Fabric zoning and LUN security for multiple operating systems ......57
        Configuring FC switches ......57
        Connecting the disk array ......57...

  • Page 6: Table Of Contents

        Creating the file systems ......74
            Creating file systems with ext2 ......74
        Creating the mount directories ......74
        Creating the mount table ......74
        Verifying file system operation ......75
    9 Solaris ......76
        Installation roadmap ......76
        Installing and configuring the disk array ......76
            Defining the paths ......76
            Setting the host mode and host group mode for the disk array ports ......77
            Setting the system option modes ......78
            Configuring the Fibre Channel ports ......78
        Installing and configuring the host ......78...

  • Page 7: Table Of Contents

    11 Citrix XenServer Enterprise ......99
        Installation roadmap ......99
        Installing and configuring the disk array ......99
            Defining the paths ......99
            Setting the host mode and host group mode for the disk array ports ......100
            Configuring the Fibre Channel ports ......100
            Setting the system option modes ......100
        Installing and configuring the host ......100
            Installing and configuring the FCAs ......101
            Loading the operating system and software ......101
            Clustering and fabric zoning ......101...

  • Page 8: Table Of Contents

            Emulation specifications ......128
        OpenVMS ......129
            Supported emulations ......129
            Emulation specifications ......129
        VMware ......132
            Supported emulations ......132
            Emulation specifications ......132
        Linux ......135
            Supported emulations ......135
            Emulation specifications ......135
        Solaris ......138
            Supported emulations ......138
            Emulation specifications ......138
        IBM AIX ......141
            Supported emulations ......141
            Emulation specifications ......141
        Disk parameters by emulation type ......143
        Byte information table ......149
        Physical partition size table ......151
    D Using Veritas Cluster Server to prevent data corruption ......153
        Using VCS I/O fencing ......153
    E Reference information for the HP System Administration Manager (SAM) ......156...


  • Page 10: Overview, What's In This Guide, Audience, Features And Requirements

    HP StorageWorks P9000 Mainframe Host Attachment and Operations Guide. Audience This document is intended for system administrators, HP representatives, and authorized service providers who are involved in installing, configuring, and operating the HP P9000 storage systems. Features and requirements The disk array provides the following features:...

  • Page 11: Fibre Channel Interface

    HP StorageWorks P9000 Array Manager Software Check with your HP representative for other P9000 software available for your system. NOTE: Linux, NonStop, and Novell NetWare: Make sure you have superuser (root) access. OpenVMS firmware version: Alpha System firmware version 5.6 or later for Fibre Channel support.

  • Page 12: Device Emulation Types, Failover

    Alternate link for I/O path failover (included in HP-UX) Logical volume management (included in HP-UX) OpenVMS The P9000 family of disk arrays is supported with OpenVMS's resident Multipath software, which provides I/O path failover. Solaris The Veritas Cluster Server, Solaris Cluster, and Fujitsu Siemens Computers PRIMECLUSTER host failover products are supported for the Solaris operating system.

  • Page 13: Snmp Configuration, Raid Manager Command Devices

    For instructions on STMS, Storage Multipathing, or VxVM, see the manufacturers' manuals. SNMP configuration The P9000 family of disk arrays supports standard SNMP for remotely managing arrays. The SNMP agent on the SVP performs error-reporting operations requested by the SNMP manager.

  • Page 14: Hp-ux, Installation Roadmap, Installing And Configuring The Disk Array

    2 HP-UX You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: “Installing and configuring the disk array”...

  • Page 15: Defining The Paths

    Mapping volumes and WWN/host access permissions to the storage groups For details see the HP StorageWorks P9000 Provisioning for Open Systems User Guide. Note the LUNs and their ports, WWNs, nicknames, and LDEVs for later use in verifying host and device configuration.

  • Page 16: Configuring The Fibre Channel Ports, Installing And Configuring The Host

    P9500 Available from initial release Available from initial release HP also recommends setting host group mode 13 with P9000 storage systems that are connected to HP NonStop systems. Configuring the Fibre Channel ports Configure the disk array Fibre Channel ports by using Command View Advanced Edition or Remote Web Console.

  • Page 17: Fabric Zoning And Lun Security For Multiple Operating Systems, Connecting The Disk Array

    Figure 2 Multi-cluster environment (HP-UX) Within the SAN, the clusters can be homogeneous (all the same operating system) or heterogeneous (mixed operating systems). How you configure LUN security and fabric zoning depends on the operating system mix and the SAN configuration. Fabric zoning and LUN security for multiple operating systems You can connect multiple clusters with multiple operating systems to the same switch and fabric using appropriate zoning and LUN security as follows:...

  • Page 18: Verifying Device Recognition

    Use the ioscan -f command, and verify that the rows shown in the example are displayed. If these rows are not displayed, check the host adapter installation (hardware and driver installation) or the host configuration. Example # ioscan -f Class I H/W Path Driver S/W State H/W Type Description...

  • Page 19: Configuring Disk Array Devices

    x = controller, y = target ID, z = LUN; "c" stands for controller, "t" stands for target ID, and "d" stands for device. The numbers x, y, and z are hexadecimal. Table 4 Device file name example (HP-UX): SCSI bus instance number, Hardware path, SCSI TID, File name — 14/12.6.0 c6t0d0...
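The cxtydz naming scheme can be illustrated with a short sketch; the controller, target, and LUN values below are hypothetical, chosen to match the c6t0d0 example:

```shell
# Hypothetical values for illustration: controller instance 6,
# target ID 0, LUN 0, matching the c6t0d0 example above.
c=6; t=0; d=0
echo "/dev/dsk/c${c}t${t}d${d}"     # block-type device file
echo "/dev/rdsk/c${c}t${t}d${d}"    # character-type (raw) device file
```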

  • Page 20: Verifying The Device Files And Drivers, Creating The Device Files

    Verifying the device files and drivers The device files for new devices are usually created automatically during HP-UX startup. Each device must have a block-type device file in the /dev/dsk directory and a character-type device file in the /dev/rdsk directory. However, some HP-compatible systems do not create the device files automatically.

  • Page 21

    repeat the procedures in “Verifying device recognition” (page 18) to verify new device recognition and the device files and driver. Example # insf -e insf: Installing special files for mux2 instance 0 address 8/0/0 Failure of the insf -e command indicates a SAN problem. If the device files for the new disk array devices cannot be created automatically, you must create the device files manually using the mknod command as follows: Retrieve the device information you recorded earlier.

  • Page 22: Creating The Physical Volumes, Creating New Volume Groups

    Create the device files for all disk array devices (SCSI disk and multiplatform devices) using the mknod command. Create the block-type device files in the /dev/dsk directory and the character-type device files in the /dev/rdsk directory. Example # cd /dev/dsk Go to /dev/dsk directory. # mknod /dev/dsk/c2t6d0 b 31 0x026000 Create block-type file.
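The minor number in the mknod example follows the HP-UX disk pattern 0xIITL00 (II = SCSI bus instance, T = target ID, L = LUN). As a hedged sketch (instance 2, target 6, LUN 0 are taken from the example above; the major number 31 is the one shown there), the minor number can be assembled like this:

```shell
# Hedged sketch: assemble the HP-UX disk minor number 0xIITL00
# (II = SCSI bus instance, T = target ID, L = LUN). Instance 2,
# target 6, LUN 0 match the mknod example above.
inst=2; tid=6; lun=0
minor=$(printf '0x%02x%x%x00' "$inst" "$tid" "$lun")
echo "$minor"                                        # -> 0x026000
echo "mknod /dev/dsk/c${inst}t${tid}d${lun} b 31 $minor"
```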

  • Page 23

    The physical volumes that make up one volume group can be located either in the same disk array or in other disk arrays. To allow more volume groups to be created, use SAM to modify the HP-UX system kernel configuration. See “Reference information for the HP System Administration Manager (SAM)” for details.

  • Page 24: Creating Logical Volumes

    Use vgdisplay -v to verify that the volume group was created correctly. The -v option displays the detailed volume group information. Example # vgdisplay -v /dev/vg06 - - - Volume groups - - - VG Name /dev/vg06 VG Write Access read/write VG Status available...

  • Page 25

    To create logical volumes: Use the lvcreate -L command to create a logical volume. Specify the volume size (in megabytes) and the volume group for the new logical volume. HP-UX assigns the logical volume numbers automatically (lvol1, lvol2, lvol3). Use the following capacity values for the size parameter: OPEN-K = 1740 OPEN-3 = 2344...
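The size lookup can be sketched as a small helper; only the OPEN-K and OPEN-3 values appear in the excerpt above, so the sketch covers just those two, and the volume group name is illustrative:

```shell
# Hedged sketch: choose the lvcreate -L size (in MB) for an emulation
# type, using the capacity values listed above (OPEN-K = 1740,
# OPEN-3 = 2344). Other emulation types would need their own entries.
size_for() {
  case "$1" in
    OPEN-K) echo 1740 ;;
    OPEN-3) echo 2344 ;;
    *) echo "unknown emulation: $1" >&2; return 1 ;;
  esac
}
echo "lvcreate -L $(size_for OPEN-3) /dev/vg06"   # volume group name is illustrative
```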

  • Page 26: Creating The File Systems, Setting The I/o Timeout Parameter

    Creating the file systems Create the file system for each new logical volume on the disk array. The default file system types are: HP-UX OS version 10.20 = hfs or vxfs, depending on the entry in the /etc/defaults/fs file. HP-UX OS version 11.0 = vxfs. HP-UX OS version 11i = vxfs. To create file systems: Use the newfs command to create the file system using the logical volume as the argument.
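The version-to-file-system mapping above can be sketched as follows; the logical volume name is illustrative, and the 10.20 hfs-or-vxfs choice is simplified to hfs here:

```shell
# Hedged sketch: pick the default file system type by HP-UX version
# (10.20 = hfs or vxfs per /etc/defaults/fs, simplified to hfs here;
# 11.0 and 11i = vxfs), then print the newfs command that would run.
osver="11.0"
case "$osver" in
  10.20) fstype=hfs ;;   # may be vxfs, depending on /etc/defaults/fs
  *)     fstype=vxfs ;;
esac
echo "newfs -F $fstype /dev/vg06/rlvol1"   # logical volume name is illustrative
```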

  • Page 27: Creating The Mount Directories, Mounting And Verifying The File Systems

    Example # pvchange -t 60 /dev/dsk/c0t6d0 Physical volume "/dev/dsk/c0t6d0" has been successfully changed. Volume Group configuration for /dev/vg06 has been saved in /etc/lvmconf/vg06.conf. Verify that the new I/O timeout value is 60 seconds using the pvdisplay command: Example # pvdisplay /dev/dsk/c0t6d0 --- Physical volumes --- PV Name /dev/dsk/c0t6d0...

  • Page 28: Setting And Verifying The Auto-mount Parameters

    /dev/vg00/lvol1 59797 59364 100% /dev/vg06/lvol1 2348177 2113350 /AHPMD-LU00 As a final verification, perform some basic UNIX operations (for example, file creation, copying, and deletion) on each logical device to make sure that the devices on the disk array are fully operational.

  • Page 29

    Use the bdf command to verify the file system again. Configuring disk array devices...

  • Page 30: Windows, Installation Roadmap, Installing And Configuring The Disk Array, Defining The Paths

    3 Windows You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: “Installing and configuring the disk array”...

  • Page 31: Setting The Host Mode And Host Group Mode For The Disk Array Ports

    Mapping volumes and WWN/host access permissions to the storage groups For more information about LUN mapping, see the HP StorageWorks P9000 Provisioning for Open Systems User Guide or Remote Web Console online help. Note the LUNs and their ports, WWNs, nicknames, and LDEVs for later use in verifying host and device configuration.

  • Page 32: Setting The System Option Modes, Configuring The Fibre Channel Ports, Installing And Configuring The Host

    Table 9 Host group modes (options): Windows. Columns: Host Group Mode, Function, Default, Parameter Setting. Entry "Failure for TPRLO" (Default: Inactive): when using the Emulex FCA in the Windows environment and the parameter setting for TPRLO fails, PRLO will respond after receiving TPRLO and FCP_CMD, respectively, when HostMode=0x0C/0x2C and HostModeOption=0x06.

  • Page 33: Fabric Zoning And Lun Security, Connecting The Disk Array

    Fabric zoning and LUN security By using appropriate zoning and LUN security, you can connect various servers with various operating systems to the same switch and fabric with the following restrictions: Storage port zones can overlap if more than one operating system needs to share an array port.

  • Page 34: Verifying The Host Recognizes Array Devices, Configuring Disk Devices, Writing Signatures

    Verifying operational status of the disk array channel adapters, LDEVs, and paths. Connecting the Fibre Channel cables between the disk array and the fabric switch or host. Verifying the ready status of the disk array and peripherals. Verifying the host recognizes array devices Log into the host as an administrator.

  • Page 35: Creating And Formatting Disk Partitions, Verifying File System Operations

    Creating and formatting disk partitions Dynamic Disk is supported with no restrictions for a disk array connected to a Windows 2000/2003/2008 system. For more information, see Microsoft's online help. CAUTION: Do not partition or create a file system on a device that will be used as a raw device (for example, some database applications use raw devices). In the Disk Management main window, select the unallocated area for the SCSI disk you want to partition.

  • Page 36: Novell Netware, Installation Roadmap, Installing And Configuring The Disk Array, Defining The Paths

    4 Novell NetWare You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: “Installing and configuring the disk array”...

  • Page 37: Installing And Configuring The Host

    Mapping volumes and WWN/host access permissions to the storage groups For details see the HP StorageWorks P9000 Provisioning for Open Systems User Guide. Note the LUNs and their ports, WWNs, nicknames, and LDEVs for later use in verifying host and device configuration.

  • Page 38: Configuring Netware Consoleone, Clustering And Fabric Zoning

    NetWare Client software is required for the client system. After installing the software on the NetWare server, follow these steps: Open the Novell Client Configuration dialog and click the Advanced Settings tab. Change the following parameters: Give up on Requests to Sas: 180 Net Status Busy Timeout: 90 Configuring NetWare ConsoleOne NetWare 6.x...

  • Page 39: Connecting The Disk Array, Fabric Zoning And Lun Security For Multiple Operating Systems

    Figure 4 Multi-cluster environment (Novell NetWare) Within the SAN, the clusters must be homogeneous (all the same operating system). Heterogeneous (mixed operating systems) clusters are not allowed. How you configure LUN security and fabric zoning depends on the SAN configuration. Fabric zoning and LUN security for multiple operating systems You can connect multiple clusters with multiple operating systems to the same switch and fabric using appropriate zoning and LUN security as follows:...

  • Page 40: Configuring Disk Devices, Creating The Disk Partitions

    In the NetWare directory, enter SERVER to get to the server console. At the server console, enter LIST DEVICES to display all devices. Use the Pause key as needed. The device number (for example, 0x000B) and device type are displayed for each device: Example NetWare prompt>...

  • Page 41

    11. Press Esc until you are returned to the Available Devices screen. Repeat Step 4–Step 10 to create the disk partition on each new OPEN-x and LUSE device. 12. When you are finished creating disk partitions, return to the Available Disk Options screen, click Return to previous menu and press Enter.

  • Page 42: Assigning The New Devices To Volumes

    Assigning the new devices to volumes A volume can span as many as 32 devices, so you can assign more than one device to a volume. The addition of new volumes to the NetWare server might require a memory upgrade. See the NetWare documentation or contact Novell customer support.

  • Page 43: Mounting The New Volumes, Verifying Client Operations

    Specify the Virtual Server Name, IP Address, Advertising Protocols and, if necessary, the CIFS Server Name. Select Create. Mounting the new volumes NetWare 5.x From the Available Disk Options screen, click NetWare Volume options to display the volume list and volume options, and then click Mount/Dismount an existing volume and press Enter. On the Directory Services Login/Authentication screen, enter the NetWare administrator password, then press Enter.

  • Page 44: Middleware Configuration, Host Failover, Multipath Failover

    For assistance with NHAS or SFT III operations, see the Novell user documentation, or contact Novell customer support. Multipath failover The P9000 disk arrays support NetWare multipath failover. If multiple FCAs are connected to the disk array with commonly-shared LUNs, you can configure path failover to recognize each new device path: In the startup.cfg file, enter...

  • Page 45: Helpful Multipath Commands

    To see a list of the failover devices and paths, at the server prompt enter: list failover devices
    Example failover device path listing:
    0x20 [V6E0-A2-D0:0] HP OPEN-3 rev:HP16 Up
    0x0D [V6E0-A2-D0:0] HP OPEN-3 rev:HP16 Priority = 0 selected
    0x1B [V6E0-A3-D0:0] HP OPEN-3 rev:HP16 Priority = 0
    0x21 [V6E0-A2-D0:2] HP OPEN-3 rev:HP16
    0x0F [V6E0-A2-D0:2] HP OPEN-3 rev:HP16...

  • Page 46: Configuring Netware 6.x Servers For Cluster Services, Installing Cluster Services

    http://www.support.novell.com. Configuring NetWare 6.x servers for Cluster Services The following requirements must be met in order to use clustering: NetWare 6.x on each server in the cluster. All servers must be in the same NDS tree. Cluster Services running on each server in the cluster. All servers must have a connection to the shared disk array.

  • Page 47

    Install the licenses: Insert the appropriate Cluster License diskette into drive A: of the client. Click Next. Click Next to select all available licenses. Click Next at the summary screen. 10. Click Finish to complete installation. Main file copy starts now. 11.

  • Page 48: Nonstop, Installation Roadmap, Installing And Configuring The Disk Array, Defining The Paths

    5 NonStop You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. The HP NonStop operating system runs on HP S-series and Integrity NonStop servers to provide continuous availability for applications, databases, and devices.

  • Page 49: Setting System Option Modes

    NOTE: For the highest level of availability and fault tolerance, HP recommends the use of two P9000 disk arrays, one for the Primary disks and one for the Mirror disks. This process is also called “LUN mapping.” In Remote Web Console, LUN mapping includes:...

  • Page 50: Loading The Operating System And Software, Configuring The Fibre Channel Ports

    Available from initial release Available from initial release HP also recommends setting host group mode 13 with P9000 storage systems that are connected to HP NonStop systems. System option mode 724 is used to balance the load across the cache PC boards by improving the process of freeing pre-read slots.

  • Page 51: Verifying Disk Array Device Recognition, Connecting The Disk Array, Configuring Disk Devices

    Table 13 Fabric zoning and LUN security settings (NonStop) Environment Fabric Zoning LUN Security Single node SAN Not required Must be used Multiple node SAN Not required Must be used Connecting the disk array The HP service representative performs the following steps to connect the disk array to the host: Verifying operational status of the disk array channel adapters, LDEVs, and paths.

  • Page 52: Openvms, Installation Roadmap, Installing And Configuring The Disk Array

    6 OpenVMS You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: “Installing and configuring the disk array”...

  • Page 53

    IMPORTANT: For optimal performance when configuring any P9000 disk array with an OpenVMS host, HP does not recommend: Sharing of CHA (channel adapter) microprocessors; Multiple host groups sharing the same CHA port. NOTE: As illustrated in “Microprocessor port sharing (OpenVMS)” (page 53), there is no microprocessor sharing with 8-port module pairs.

  • Page 54: Setting The Host Mode For The Disk Array Ports, Setting The Uuid

    $ run sys$system:sysman
    sysman> set environment/cluster
    sysman> io autoconfigure/log
    Verify the online status of the P9000 LUNs, and confirm that all expected LUNs are shown online. Setting the host mode for the disk array ports After the disk array is installed, you must set the host mode for each host group that is configured on a disk array port to match the host OS.

  • Page 55

    If host mode option 33 is not set, then the default behavior is to present the volumes to the OpenVMS host by calculating the decimal value of the hexadecimal CU:LDEV value. That calculated value will be the value of the DGA device number. CAUTION: The UUID (or by default the decimal value of the CU:LDEV value) must be unique across the SAN for the OpenVMS host and/or OpenVMS cluster.
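The default calculation described above (the DGA device number is the decimal value of the hexadecimal CU:LDEV pair) can be sketched as follows; the CU and LDEV values are hypothetical:

```shell
# Hedged sketch: when host mode option 33 is not set, the DGA device
# number defaults to the decimal value of the hex CU:LDEV pair.
# Hypothetical example: CU = 0x01, LDEV = 0x23 -> 0x0123 -> 291.
cu=01; ldev=23
unit=$(printf '%d' "0x${cu}${ldev}")
echo "\$1\$DGA${unit}"    # the device name OpenVMS would create
```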

  • Page 56: Installing And Configuring The Fcas, Loading The Operating System And Software, Clustering And Fabric Zoning

    Loading the operating system and software Follow the manufacturer's instructions to load the operating system and software onto the host. Load all OS patches and configuration utilities supported by HP and the FCA manufacturer. Installing and configuring the FCAs Install and configure the Fibre Channel adapters using the FCA manufacturer's instructions. Clustering and fabric zoning If you plan to use clustering, install and configure the clustering software on the servers.

  • Page 57: Configuring Fc Switches, Fabric Zoning And Lun Security For Multiple Operating Systems

    Within the SAN, the clusters can be homogeneous (all the same operating system) or heterogeneous (mixed operating systems). How you configure LUN security and fabric zoning depends on the operating system mix and the SAN configuration. WARNING! For OpenVMS, HP recommends that a volume be presented to one OpenVMS cluster or standalone system at a time.

  • Page 58: Initializing And Labeling The Devices, Configuring Disk Array Devices, Mounting The Devices

    Check the list of peripherals on the host to verify the host recognizes all disk array devices. If any devices are missing: If host mode option 33 is enabled, check the UUID values in the Remote Web Console LUN mapping. If host mode option 33 is not enabled, check the CU:LDEV mapping. To ensure the created OpenVMS device number is correct, check that the values do not conflict with other device numbers or LUNs already created on the SAN.

  • Page 59: Verifying File System Operation

    Verifying file system operation Use the show device dg command to list the devices: Example $ show device dg NOTE: Use the show device/full dga100 command to show the path information for the device: Example: $ show device/full $1$dga100: Disk $1$DGA100: (NODE01), device type HP OPEN-V, is online, file-oriented device, shareable, device has multiple I/O paths, served to cluster via MSCP Server, error logging is enabled.

  • Page 60

    $ directory Directory $1$DGA100:[USER] TEST.TXT;1 Total of 1 file. Verify the content of the data file: Example $ type test.txt this is a line of text for the test file test.txt Delete the data file: Example $ delete test.txt; $ directory %DIRECT-W-NOFILES, no files found $ type test.txt %TYPE-W-SEARCHFAIL,error searching for...

  • Page 61: Vmware, Installation Roadmap, Installing And Configuring The Disk Array, Defining The Paths

    7 VMware You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: “Installing and configuring the disk array”...

  • Page 62

    Mapping volumes and WWN/host access permissions to the storage groups For details see the HP StorageWorks P9000 Provisioning for Open Systems User Guide. Note the LUNs and their ports, WWNs, nicknames, and LDEVs for later use in verifying host and device configuration.

  • Page 63: Clustering And Fabric Zoning, Fabric Zoning And Lun Security For Multiple Operating Systems

    Clustering and fabric zoning If you plan to use clustering, install and configure the clustering software on the servers. Clustering is the organization of multiple servers into groups. Within a cluster, each server is a node. Multiple clusters compose a multi-cluster environment. The following example shows a multi-cluster environment with three clusters, each containing two nodes.

  • Page 64: Configuring Vmware Esx Server, Connecting The Disk Array

    Configuring VMware ESX Server VMware ESX Server 2.5x Open the management interface, select the Options tab, and then click Advanced Settings. In the “Advanced Settings” window, scroll down to Disk.MaskLUN. Verify that the value is large enough to support your configuration (default=8). If the value is less than the number of LUNs you have presented, you will not see all of your LUNs.

  • Page 65: Setting Up Virtual Machines (vms) And Guest Operating Systems

    Setting up virtual machines (VMs) and guest operating systems Setting the SCSI disk timeout value for Windows VMs To ensure Windows VMs (Windows 2000 and Windows Server 2003) wait at least 60 seconds for delayed disk operations to complete before generating errors, you must set the SCSI disk timeout value to 60 seconds by editing the registry of the guest operating system as follows: CAUTION: Before making any changes to the registry file, make a backup copy of the existing...
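A registry fragment equivalent to the edit described above might look like the following. This is a sketch based on the standard Windows disk timeout value (0x3c = 60 seconds, decimal), not text from this guide:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk]
"TimeOutValue"=dword:0000003c
```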

  • Page 66

    Select the Bus Sharing mode (virtual or physical) appropriate for your configuration, and then click OK. VMware...

  • Page 67: Selecting The Scsi Emulation Driver

    NOTE: Sharing VMDK disks is not supported. VMware ESX Server 3.0x In VirtualCenter, select the VM you plan to edit, and then click Edit Settings. Select the SCSI controller for use with your shared LUNs. NOTE: If only one SCSI controller is present, add another disk that uses a different SCSI bus than your current configured devices.

  • Page 68

    Linux For the 2.4 kernel use the LSI Logic SCSI driver. For the 2.6 kernel use the BusLogic SCSI driver. VMware...

  • Page 69: Linux, Installation Roadmap, Installing And Configuring The Disk Array, Defining The Paths

    8 Linux You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: “Installing and configuring the disk array”...

  • Page 70

    Mapping volumes and WWN/host access permissions to the storage groups For details see the HP StorageWorks P9000 Provisioning for Open Systems User Guide. Note the LUNs and their ports, WWNs, nicknames, and LDEVs for later use in verifying host and device configuration.

  • Page 71

    Installing and configuring the host This section explains how to install and configure Fibre Channel adapters (FCAs) that connect the host to the disk array. Installing and configuring the FCAs Install and configure the Fibre Channel adapters using the FCA manufacturer's instructions. Loading the operating system and software Follow the manufacturer's instructions to load the operating system and software onto the host.

  • Page 72: Restarting The Linux Server, Connecting The Disk Array, Verifying New Device Recognition

    Table 19 Fabric zoning and LUN security settings (Linux) Environment: Standalone (non-clustered). OS mix: SAN homogeneous (a single OS type present in the SAN). Fabric zoning: Not required. LUN security: Must be used when multiple hosts or cluster nodes connect through a shared port. Environment: Clustered. OS mix: SAN heterogeneous (more than one OS type...

  • Page 73: Partitioning The Devices, Configuring Disk Array Devices

    1048560 cciss/c0d0p2
    2 16470960 cciss/c0d0p3
    168193 352166 4166736...
    In the previous example, the “sd” devices represent the P9000 disk partitions and the “cciss” devices represent the internal hard drive partitions on an HP ProLiant system. Configuring disk array devices Disks in the disk array are configured using the same procedure for configuring any new disk on the host.
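The distinction drawn above between array ("sd") devices and internal ("cciss") devices can be scripted when verifying device recognition. A minimal sketch over /proc/partitions-style text; the sample device names and sizes below are illustrative, not taken from the excerpt above:

```python
# Separate external-array (sd*) partitions from internal Smart Array
# (cciss/*) partitions in /proc/partitions-style output.
SAMPLE = """\
   8     0 35466240 sda
   8     1 35466208 sda1
 104     0 17783240 cciss/c0d0
 104     1  1048560 cciss/c0d0p1
"""

def classify(partitions_text):
    """Return (array_devices, internal_devices) as two name lists."""
    array_devs, internal_devs = [], []
    for line in partitions_text.splitlines():
        fields = line.split()
        if len(fields) < 4:        # skip headers/blank lines
            continue
        name = fields[3]
        if name.startswith("sd"):
            array_devs.append(name)
        elif name.startswith("cciss/"):
            internal_devs.append(name)
    return array_devs, internal_devs
```

In practice the input would come from reading /proc/partitions on the host.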

  • Page 74: Creating File Systems With Ext2, Creating The File Systems, Creating The Mount Directories

    Select w to write the partition information to disk and complete the fdisk command. Other commands that you might want to use include d (remove a partition) and q (quit without saving changes). Repeat steps 1–5 for each device. Creating the file systems The supported file system for Linux is ext2.

  • Page 75

    Edit the /etc/fstab file to add one line for each device to be automounted. Each line of the file contains: (A) device name, (B) mount point, (C) file system type (“ext2”), (D) mount options (“defaults”), (E) enhance parameter (“1”), and (F) fsck pass (“2”). Example /dev/sdb /A5700F_ID08...
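The six fields listed above can be assembled programmatically when generating /etc/fstab entries for many devices. A minimal sketch; the device name and mount point in the usage note are placeholders, not the truncated example from the text, and fields (E) and (F) are emitted in the conventional fifth and sixth positions:

```python
# Compose one /etc/fstab line from the six fields the text labels (A)-(F):
# device, mount point, fs type, mount options, fifth field ("1"),
# fsck pass ("2").
def fstab_line(device, mount_point, fstype="ext2",
               options="defaults", field5=1, fsck_pass=2):
    return f"{device} {mount_point} {fstype} {options} {field5} {fsck_pass}"
```

For example, `fstab_line("/dev/sdb", "/mnt/array_08")` yields a line ending in `ext2 defaults 1 2`.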

  • Page 76: Solaris, Installation Roadmap, Installing And Configuring The Disk Array, Defining The Paths

    9 Solaris You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: “Installing and configuring the disk array”...

  • Page 77

    Mapping volumes and WWN/host access permissions to the storage groups For details see the HP StorageWorks P9000 Provisioning for Open Systems User Guide. Note the LUNs and their ports, WWNs, nicknames, and LDEVs for later use in verifying host and device configuration.

  • Page 78

    Table 20 Host group modes (options) Solaris (continued) Host group mode: SIM report at link failure. Function: Select HMO 13 to enable SIM notification when the number of link failures detected between ports exceeds the threshold. Default: Inactive. Comments: Optional; this mode is common to all host platforms.

  • Page 79: Setting The Disk And Device Parameters, Installing And Configuring The Fcas

    Installing and configuring the FCAs Install and configure the FCA driver software and setup utilities according to the manufacturer's instructions. Configuration settings specific to the P9000 array differ depending on the manufacturer. Specific configuration information is detailed in the following sections.

  • Page 80: Configuring Fcas With The Oracle San Driver Stack

    Oracle branded FCAs are only supported with the Oracle SAN driver stack. The Oracle SAN driver stack also supports current Emulex and QLogic FCAs. NOTE: Ensure host group mode 7 is set for the P9000 array ports where the host is connected to enable automatic LUN recognition using this driver. To configure the FCA:...

  • Page 81: Configuring Emulex Fcas With The Lpfc Driver

    For Solaris 8/9, perform a reconfiguration reboot of the host to implement changes to the configuration file. For Solaris 10, use the stmsboot command, which performs the modifications and then initiates a reboot. For Solaris 8/9, after you have rebooted and the LDEV has been defined as a LUN to the host, use the cfgadm command to configure the controller instances for SAN connectivity.

  • Page 82: Configuring Qlogic Fcas With The Qla2300 Driver, Verifying The Fca Configuration

    name="sd" parent="lpfc" target=30 lun=1; name="sd" parent="lpfc" target=30 lun=2; Perform a reconfiguration reboot to implement the changes to the configuration files. If LUNs have been preconfigured in the /kernel/drv/sd.conf file, use the devfsadm command to perform LUN rediscovery after configuring LUNs as explained in “Defining the paths”...

  • Page 83

    Clustering and fabric zoning If you plan to use clustering, install and configure the clustering software on the servers. Clustering is the organization of multiple servers into groups. Within a cluster, each server is a node. Multiple clusters compose a multi-cluster environment. The following example shows a multi-cluster environment with three clusters, each containing two nodes.

  • Page 84: Adding The New Device Paths To The System, Verifying Host Recognition Of Disk Array Devices

    Host FCA configuration (WWN information, driver instance, target and LUN assignment, and /var/adm/messages) If you are using the Oracle SAN driver and P9000 LUNs were not present when the configuration was done, you might need to reset each FCA if no LUNs are visible. The following example shows the commands to detect the FC-fabric-attached FCAs (c3 and c5) and reset them.

  • Page 85: Labeling And Partitioning The Devices, Creating The File Systems

    CAUTION: The repair, analyze, defect, and verify commands/menus are not applicable to the P9000 arrays. When selecting disk devices, be careful to select the correct disk, because using the partition/label commands on disks that contain data can cause data loss.

  • Page 86: Configuring For Use With Veritas Volume Manager 4.x And Later, Creating The Mount Directories

    Read the TechFile that appears and follow the instructions to download and install the ASL. After installing the ASL, verify that the P9000 array is visible and the ASL is present using the vxdmpadm listctlr all and vxddladm listsupport all commands.

  • Page 87: Installation Roadmap, Installing And Configuring The Disk Array, Defining The Paths

    10 IBM AIX You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: “Installing and configuring the disk array”...

  • Page 88

    Mapping volumes and WWN/host access permissions to the storage groups For details see the HP StorageWorks P9000 Provisioning for Open Systems User Guide. Note the LUNs and their ports, WWNs, nicknames, and LDEVs for later use in verifying host and device configuration.

  • Page 89

    CAUTION: Changing host group modes for ports where servers are already installed and configured is disruptive and requires the server to be rebooted. Setting the system option modes The HP service representative sets the system option mode(s) based on the operating system and software configuration of the host.

  • Page 90: Verifying Host Recognition Of Disk Array Devices

    Within the SAN, the clusters can be homogeneous (all the same operating system) or heterogeneous (mixed operating systems). How you configure LUN security and fabric zoning depends on the operating system mix and the SAN configuration. Fabric zoning and LUN security for multiple operating systems You can connect multiple clusters with multiple operating systems to the same switch and fabric using appropriate zoning and LUN security as follows: Storage port zones can overlap if more than one operating system needs to share an array...

  • Page 91: Changing The Device Parameters, Configuring Disk Array Devices

    Use the lscfg command to identify the AIX disk device's corresponding array LDEV designation. For example, enter the following command to display the emulation type, LDEV number, CU number, and array port designation for disk device hdisk3. # lscfg -vl hdisk3 Configuring disk array devices Disks in the disk array are configured using the same procedure for configuring any new disk on the host.

  • Page 92

    To change the queue depth parameter, enter: chdev -l hdiskX -a queue_depth='x' where x is a value from the previous table. To change the queue type parameter, enter: chdev -l hdiskX -a q_type='simple' For example, enter the following command to change the queue depth for the device hdisk3: # chdev -l hdisk3 -a queue_depth='2'
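When many hdisks need the same settings, the chdev invocations above can be generated in a loop before being reviewed and run. A minimal sketch; the device names and queue depth are illustrative, and the commands are only printed, never executed:

```python
# Generate AIX chdev commands that set queue_depth and q_type on a
# list of hdisks (strings only; nothing is executed here).
def chdev_commands(hdisks, queue_depth=2, q_type="simple"):
    cmds = []
    for disk in hdisks:
        cmds.append(f"chdev -l {disk} -a queue_depth='{queue_depth}'")
        cmds.append(f"chdev -l {disk} -a q_type='{q_type}'")
    return cmds
```

For hdisk3 with the defaults, the first generated command matches the example in the text.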

  • Page 93: Assigning The New Devices To Volume Groups

    Status Location Parent adapter Connection address Physical volume IDENTIFIER ASSIGN physical volume identifier Queue DEPTH Queuing TYPE [simple] Use QERR Bit [yes] Device CLEARS its Queue on Error [no] READ/WRITE time out value [60] START unit time out value [60] REASSIGN time out value [120] APPLY change to DATABASE only...

  • Page 94

    Physical Volumes Paging Space Select Add a Volume Group. Example Volume Groups Move cursor to desired item and press Enter. List All Volume Groups Add a Volume Group Set Characteristics of a Volume Group List Contents of a Volume Group Remove a Volume Group Activate a Volume Group Deactivate a Volume Group...

  • Page 95: Creating The Journaled File Systems

    Press Enter again. The Command Status screen appears. To ensure the devices have been assigned to a volume group, wait for OK to appear on the Command Status line. 10. Repeat these steps for each volume group needed. Creating the journaled file systems Create the journaled file systems using SMIT.

  • Page 96

    Add / Change / Show / Delete File Systems Move cursor to desired item and press Enter. Journaled File Systems CDROM File Systems Network File System (NFS) Cache Fs Select Add a Journaled File System. Example Journaled File System Move cursor to desired item and press Enter. Add a Journaled File System Add a Journaled File System on a Previously Defined Logical Volume...

  • Page 97: Mounting And Verifying The File Systems

    Start Disk Accounting? Fragment Size (bytes) 4096 Number of bytes per inode 4096 Compression algorithm Allocation Group Size (Mbytes) 10. Press Enter to create the Journaled File System. The Command Status screen appears. Wait for “OK” to appear on the Command Status line. 11.

  • Page 98

    Use the df command to verify that the file systems have successfully automounted after a reboot. Any file systems that were not automounted can be set to automount using the SMIT Change a Journaled File System screen. If you are using HACMP or HAGEO, do not set the file systems to automount. Example # df File system 512-blocks...

  • Page 99: Citrix Xenserver Enterprise, Installation Roadmap, Installing And Configuring The Disk Array, Defining The Paths

    11 Citrix XenServer Enterprise You and the HP service representative each play a role in installation. The HP service representative is responsible for installing the disk array and formatting the disk devices. You are responsible for configuring the host server for the new devices with assistance from the HP service representative. Installation roadmap Perform these actions to install and configure the disk array: “Installing and configuring the disk array”...

  • Page 100

    Mapping volumes and WWN/host access permissions to the storage groups For details see the HP StorageWorks P9000 Provisioning for Open Systems User Guide. Note the LUNs and their ports, WWNs, nicknames, and LDEVs for later use in verifying host and device configuration.

  • Page 101

    Installing and configuring the FCAs Install and configure the Fibre Channel adapters using the FCA manufacturer's instructions. Loading the operating system and software Follow the manufacturer's instructions to load the operating system and software onto the host. Load all OS patches and configuration utilities supported by HP and the FCA manufacturer. Clustering and fabric zoning If you plan to use clustering, install and configure the clustering software on the servers.

  • Page 102: Verifying New Device Recognition, Connecting The Disk Array, Restarting The Linux Server

    Table 27 Fabric zoning and LUN security settings (Linux) Environment: Standalone (non-clustered). OS mix: SAN homogeneous (a single OS type present in the SAN). Fabric zoning: Not required. LUN security: Must be used when multiple hosts or cluster nodes connect through a shared port. Environment: Clustered. OS mix: SAN heterogeneous (more than one OS type...

  • Page 103: Configuring Multipathing, Configuring Disk Array Devices

    <host> host1 </host>
    <name> qlogic </name>
    <manufacturer> QLogic HBA Driver </manufacturer>
    <id> </id>
    </Adapter>
    <Adapter>
    <host> host0 </host>
    <name> qlogic </name>
    <manufacturer> QLogic HBA Driver </manufacturer>
    <id> </id>
    </Adapter>
    </Devlist>
    [root@cb-xen-srv31 ~]#
    Configuring disk array devices Disks in the disk array are configured using the same procedure for configuring any new disk on the host.

  • Page 104

    Click Enter Maintenance Mode. Select the General tab and then click Properties. 104 Citrix XenServer Enterprise...

  • Page 105

    Select the Multipathing tab, check the Enable multipathing on this server check box, and then click OK. Right-click the domU that was placed in maintenance mode and select Exit Maintenance Mode. Configuring disk array devices 105...

  • Page 106: Creating A Storage Repository

    Open a command line interface to the dom0 and edit the /etc/multipath-enable.conf file with the appropriate array settings. NOTE: HP recommends that you use the RHEL 5.x device mapper config file and multipathing parameter settings on HP.com. Use only the array-specific settings, and not the multipath.conf file bundled into the device mapper kit.

  • Page 107

    Select the type of virtual disk storage for the storage array and then click Next.

  • Page 108: Adding A Virtual Disk To A Domu

    NOTE: For Fibre Channel, select Hardware HBA. Complete the template and then click Finish. Adding a Virtual Disk to a domU After the Storage Repository has been created on the dom0, the vdisk from the Storage Repository can be assigned to the domU. This section describes how to pass vdisks to the domU.

  • Page 109

    HP ProLiant Virtual Console can be used with HP Integrated Citrix XenServer Enterprise Edition to complete this process. Select the domU. Select the Storage tab and then click Add.

  • Page 110: Adding A Dynamic Lun

    Type a name, description, and size for the new disk and then click Add. Adding a dynamic LUN To add a LUN to a dom0 dynamically, follow these steps. Create and present a LUN to a dom0 from the array. Enter the following command to rescan the sessions that are connected to the arrays for the new LUN: xe sr-probe type=lvmohba.

  • Page 111: Troubleshooting, Error Conditions

    12 Troubleshooting This chapter includes resolutions for various error conditions you may encounter. If you are unable to resolve an error condition, ask your HP support representative for assistance. Error conditions Depending on your system configuration, you may be able to view error messages (R-SIMS) as follows: In Remote Web Console (Status tab) In Command View Advanced Edition (Alerts window)

  • Page 112

    Table 28 Error conditions (continued) Error condition: The host detects a parity error. Recommended action: Check the FCA and make sure it was installed properly. Reboot the host. Error condition: The host hangs, or devices are declared and the host hangs. Recommended action: Make sure there are no duplicate disk array TIDs and that disk array TIDs do not conflict with host TIDs.

  • Page 113: Support And Other Resources, Contacting Hp, Subscription Service, Documentation Feedback, Related Information

    In the Storage section, click Disk Storage Systems and then select a product. Conventions for storage capacity values HP P9000 storage systems use the following values to calculate physical storage capacity values (hard disk drives): 1 KB (kilobyte) = 1,000 (10^3) bytes...

  • Page 114

    1 PB (petabyte) = 1,000^5 bytes 1 EB (exabyte) = 1,000^6 bytes HP P9000 storage systems use the following values to calculate logical storage capacity values (logical devices): 1 block = 512 bytes 1 KB (kilobyte) = 1,024 (2^10) bytes...
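The two conventions above, decimal units for physical (hard disk drive) capacity and 512-byte blocks with binary units for logical (LDEV) capacity, can be compared directly. A minimal sketch of the arithmetic:

```python
# Physical capacity uses decimal units; logical capacity uses 512-byte
# blocks and binary units, per the conventions listed above.
DECIMAL_KB = 1_000   # physical: 1 KB = 10^3 bytes
BINARY_KB = 1_024    # logical:  1 KB = 2^10 bytes
BLOCK = 512          # logical:  1 block = 512 bytes

def blocks_for_logical_mb(mb):
    """Number of 512-byte blocks in mb binary megabytes."""
    return mb * BINARY_KB * BINARY_KB // BLOCK
```

One binary megabyte is 2,048 blocks, and a decimal petabyte works out to 1,000^5 = 10^15 bytes.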

  • Page 115: A Path Worksheet, Worksheet

    A Path worksheet Worksheet Table 29 Path worksheet LDEV (CU:LDEV) (CU = Device Type SCSI Bus Path 1 Alternate Paths control unit) Number 0:00 TID: TID: TID: LUN: LUN: LUN: 0:01 TID: TID: TID: LUN: LUN: LUN: 0:02 TID: TID: TID: LUN: LUN:...

  • Page 116: B Path Worksheet (nonstop), Worksheet

    B Path worksheet (NonStop) Worksheet Table 30 Path worksheet (NonStop) LUN # CU:LDEV Array Emulation Array Array Port NSK Server NSK SAC NSK SAC Path Group type Port name (G-M-S-S) volume name Example: 00 01:00 1- 1 1 OPEN-E 50060E80 /OSDNSK3 1 10-2-3- 1 50060B00...

  • Page 117: C Disk Array Supported Emulations, Hp-ux, Supported Emulations, Emulation Specifications

    C Disk array supported emulations HP-UX This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).

  • Page 118

    Table 32 Emulation specifications (HP-UX) (continued) Columns: Emulation, Category, Product name, Blocks (512 bytes), Sector size (bytes), # of cylinders, Heads, Sectors per track, Capacity. OPEN-E CVS SCSI disk OPEN-E-CVS Footnote Footnote Footnote OPEN-V SCSI disk OPEN-V Footnote Footnote Footnote CVS LUSE OPEN-3*n CVS SCSI disk OPEN-3*n-CVS...

  • Page 119: Luse Device Parameters

    OPEN-3/8/9/E: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) × 1024/720 × n Example For a CVS LUSE volume with capacity = 37 MB and n = 4: # of cylinders = 37 ×...

  • Page 120

    Table 33 LUSE device parameters (HP-UX) (continued) Device type Physical extent size (PE) Max physical extent size (MPE) n = 1 1 19102 n = 12 20839 n = 13 22576 n = 14 24312 n = 15 26049 n = 16 27786 n = 17 29522...

  • Page 121: Scsi Tid Map For Fibre Channel Adapters

    SCSI TID map for Fibre Channel adapters When an arbitrated loop (AL) is established or reestablished, the port addresses are assigned automatically to prevent duplicate TIDs. With the SCSI over Fibre Channel protocol (FCP), there is no longer a need for target IDs in the traditional sense. SCSI is a bus-oriented protocol requiring each device to have a unique address because all commands go to all devices.

  • Page 122: Supported Emulations, Windows, Emulation Specifications

    Windows This appendix provides information about supported emulations and emulation specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).

  • Page 123: General Notes

    Table 36 Emulation specifications (Windows) (continued) Columns: Emulation, Category, Product name, Blocks (512 bytes), Sector size (bytes), # of cylinders, Heads, Sectors per track, Capacity. CVS LUSE OPEN-3*n CVS SCSI disk OPEN-3*n-CVS Footnote Footnote Footnote OPEN-8*n CVS SCSI disk OPEN-8*n-CVS Footnote Footnote Footnote OPEN-9*n CVS...

  • Page 124

    Example For a CVS LUSE volume with capacity = 37 MB and n = 4: # of cylinders = 37 × 1024/720 × 4 = 52.62 × 4 = 53 × 4 = 212 OPEN-V: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) ×...
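The cylinder formulas above round the per-volume result up to the next integer before multiplying by n. A minimal sketch of both calculations (the OPEN-3/8/9/E CVS LUSE formula shown here, and the OPEN-V × 16/15 formula from the related appendices):

```python
import math

def cvs_luse_cylinders(capacity_mb, n=1):
    """OPEN-3/8/9/E CVS LUSE: cylinders = ceil(capacity * 1024/720) * n."""
    return math.ceil(capacity_mb * 1024 / 720) * n

def open_v_cvs_cylinders(capacity_mb):
    """OPEN-V CVS: cylinders = ceil(capacity * 16/15)."""
    return math.ceil(capacity_mb * 16 / 15)
```

This reproduces the worked example: 37 MB gives 52.62, rounded up to 53, and with n = 4 the total is 212 cylinders.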

  • Page 125: Emulation Specifications, Novell Netware, Supported Emulations

    Novell NetWare This appendix provides information about supported emulations and emulation specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).

  • Page 126

    Table 38 Emulation specifications (Novell NetWare) (continued) Columns: Emulation, Category, Product name, Blocks (512 bytes), Sector size (bytes), # of cylinders, Heads, Sectors per track, Capacity. OPEN-V SCSI disk OPEN-V Footnote Footnote Footnote CVS LUSE OPEN-3*n CVS SCSI disk OPEN-3*n-CVS Footnote Note 6 Note 7 OPEN-8*n CVS...

  • Page 127

    OPEN-3/8/9/E: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) × 1024/720 × n Example For a CVS LUSE volume with capacity = 37 MB and n = 4: # of cylinders = 37 ×...

  • Page 128

    NonStop This appendix provides information about supported emulations and emulation specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).

  • Page 129

    OpenVMS This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).

  • Page 130

    Table 42 Emulation specifications (OpenVMS) (continued) Columns: Emulation, Category, Product name, Blocks (512 bytes), Sector size (bytes), # of cylinders, Heads, Sectors per track, Capacity MB*. OPEN-E SCSI disk OPEN-E-CVS Footnote Footnote Footnote OPEN-V SCSI disk OPEN-V Footnote Footnote Footnote CVS LUSE OPEN-3*n SCSI disk OPEN-3*n-CVS...

  • Page 131

    OPEN-V: The number of cylinders for a CVS volume = # of cylinders = (capacity (MB) specified by user) × 16/15 Example For an OPEN-V CVS volume with capacity = 49 MB: # of cylinders = 49 × 16/15 = 52.26 (rounded up to the next integer) = 53 cylinders OPEN-3/8/9/E: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) ×...

  • Page 132

    VMware This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).

  • Page 133

    Table 44 Emulation specifications (VMware) (continued) Columns: Emulation, Category, Product name, Blocks (512 bytes), Sector size (bytes), # of cylinders, Heads, Sectors per track, Capacity MB*. OPEN-8 SCSI disk OPEN-8-CVS Footnote Footnote Footnote OPEN-9 SCSI disk OPEN-9-CVS Footnote Footnote Footnote OPEN-E SCSI disk OPEN-E-CVS Footnote...

  • Page 134

    For an OPEN-3 CVS volume with capacity = 37 MB: # of cylinders = 37 × 1024/720 = 52.62 (rounded up to the next integer) = 53 cylinders OPEN-V: The number of cylinders for a CVS volume = # of cylinders = (capacity (MB) specified by user) ×...

  • Page 135

    Linux This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).

  • Page 136

    Table 46 Emulation specifications (Linux) (continued) Columns: Emulation, Category, Product name, Blocks (512 bytes), Sector size (bytes), # of cylinders, Heads, Sectors per track, Capacity. CVS LUSE OPEN-3*n CVS SCSI disk OPEN-3*n-CVS Footnote Note 6 Footnote OPEN-8*n CVS SCSI disk OPEN-8*n-CVS Footnote Note 6 Footnote...

  • Page 137

    Example For a CVS LUSE volume with capacity = 37 MB and n = 4: # of cylinders = 37 × 1024/720 × 4 = 52.62 × 4 = 53 × 4 = 212 OPEN-V: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) ×...

  • Page 138

    Solaris This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).

  • Page 139

    Table 48 Emulation specifications (Solaris) (continued) Columns: Emulation, Category, Product name, Blocks (512 bytes), Sector size (bytes), # of cylinders, Heads, Sectors per track, Capacity. CVS LUSE OPEN-3*n CVS SCSI disk OPEN-3*n-CVS Footnote Footnote Footnote OPEN-8*n CVS SCSI disk OPEN-8*n-CVS Footnote Footnote Footnote OPEN-9*n CVS...

  • Page 140

    Example For a CVS LUSE volume with capacity = 37 MB and n = 4: # of cylinders = 37 × 1024/720 × 4 = 52.62 × 4 = 53 × 4 = 212 OPEN-V: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) ×...

  • Page 141

    IBM AIX This appendix provides information about supported emulations and device type specifications. Some parameters might not be relevant to your array. Consult your HP representative for information about supported configurations for your system. Supported emulations HP recommends using OPEN-V as the emulation for better performance and features that may not be supported with the legacy emulations (OPEN-[389LE]).

  • Page 142

    Table 50 Emulation specifications (IBM AIX) (continued) Columns: Emulation, Category, Product name, Blocks (512 bytes), Sector size (bytes), # of cylinders, Heads, Sectors per track, Capacity. CVS LUSE OPEN-3*n CVS SCSI disk OPEN-3*n-CVS Note 5 Footnote Footnote OPEN-8*n CVS SCSI disk OPEN-8*n-CVS Note 5 Footnote...

  • Page 143: Disk Parameters By Emulation Type

    Example For a CVS LUSE volume with capacity = 37 MB and n = 4: # of cylinders = 37 × 1024/720 × 4 = 52.62 × 4 = 53 × 4 = 212 OPEN-V: The number of cylinders for a CVS LUSE volume = # of cylinders = (capacity (MB) specified by user) ×...

  • Page 144

    Table 51 OPEN-3 parameters by emulation type (IBM AIX) (continued) Emulation Type Parameter OPEN-3 OPEN-3*n (n=2 OPEN-3 CVS OPEN-3 CVS*n to 36) (n=2 to 36) e partition size Set optionally Set optionally Set optionally Set optionally f partition size Set optionally Set optionally Set optionally Set optionally...

  • Page 145

    Table 52 OPEN-8 parameters by emulation type (IBM AIX) (continued) Emulation Type Parameter OPEN-8 OPEN-8*n (n=2 OPEN-8 CVS OPEN-8 CVS*n to 36) (n=2 to 36) b partition offset (Starting block Set optionally Set optionally Set optionally Set optionally in b partition) c partition offset (Starting block in c partition) d partition offset (Starting block...

  • Page 146

    Table 52 OPEN-8 parameters by emulation type (IBM AIX) (continued) Emulation Type Parameter OPEN-8 OPEN-8*n (n=2 OPEN-8 CVS OPEN-8 CVS*n to 36) (n=2 to 36) g partition fragment size 1,024 1,024 1,024 1,024 h partition fragment size 1,024 1,024 1,024 1,024 “Notes for disk parameters”...

  • Page 147

    Table 53 OPEN-9 parameters by emulation type (IBM AIX) (continued) Emulation Type Parameter OPEN-9 OPEN-9*n (n=2 OPEN-9 CVS OPEN-9 CVS*n to 36) (n=2 to 36) g partition size Set optionally Set optionally Set optionally Set optionally h partition size Set optionally Set optionally Set optionally Set optionally...

  • Page 148

    Table 54 OPEN-E parameters by emulation type (IBM AIX) (continued) Emulation Type Parameter OPEN-E OPEN-E*n (n=2 to OPEN-E CVS OPEN-E CVS*n (n=2 to 36) c partition offset (Starting block in c partition) d partition offset (Starting block Set optionally Set optionally Set optionally Set optionally in d partition)

  • Page 149: Byte Information Table

    Table 54 OPEN-E parameters by emulation type (IBM AIX) (continued) Emulation Type Parameter OPEN-E OPEN-E*n (n=2 to OPEN-E CVS OPEN-E CVS*n (n=2 to 36) h partition fragment size 1,024 1,024 1,024 1,024 “Notes for disk parameters”. Notes for disk parameters The value of pc is calculated as follows: pc = nc * nt * ns The nc of OPEN-x CVS corresponds to the capacity specified by SVP or remote console.

  • Page 150

    Table 55 Byte information (IBM AIX) Category OPEN-3: OPEN-3; OPEN-3*2 to OPEN-3*28: 4096; OPEN-3*29 to OPEN-3*36: 8192. Category OPEN-8: OPEN-8; OPEN-8*2 to OPEN-8*9: 4096; OPEN-8*10 to OPEN-8*18: 8192; OPEN-8*19 to OPEN-8*36: 16384. Category OPEN-9: OPEN-9; OPEN-9*2 to OPEN-9*9: 4096; OPEN-9*10 to OPEN-9*18...
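The inode-density choices in Table 55 can be encoded as range lookups when preparing JFS parameters for LUSE volumes. A minimal sketch covering only the OPEN-3 and OPEN-8 rows that are fully visible in the excerpt above; values outside those ranges (including single, non-LUSE volumes) are deliberately not guessed:

```python
# Bytes-per-inode (NBPI) by LUSE multiplier n, per the Table 55 rows
# shown above. Ranges not visible in the excerpt return None.
NBPI_RANGES = {
    "OPEN-3": [(2, 28, 4096), (29, 36, 8192)],
    "OPEN-8": [(2, 9, 4096), (10, 18, 8192), (19, 36, 16384)],
}

def bytes_per_inode(emulation, n):
    for lo, hi, nbpi in NBPI_RANGES.get(emulation, []):
        if lo <= n <= hi:
            return nbpi
    return None  # not covered by the excerpt
```

For example, an OPEN-8*12 LUSE volume falls in the 10-to-18 band and uses 8,192 bytes per inode.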

  • Page 151: Physical Partition Size Table

    Physical partition size table Table 56 Physical partition size (IBM AIX) Category LU product name Physical partition size in megabytes OPEN-3 OPEN-3 OPEN-3*2 to OPEN-3*3 OPEN-3*4 to OPEN-3*6 OPEN-3*7 to OPEN-3*13 OPEN-3*14 to OPEN-3*27 OPEN-3*28 to OPEN-3*36 OPEN-8 OPEN-8 OPEN-8*2 OPEN-8*3 to OPEN-8*4 OPEN-8*5 to OPEN-8*9 OPEN-8*10 to OPEN-8*18...

  • Page 152

    Table 56 Physical partition size (IBM AIX) (continued) Category LU product name Physical partition size in megabytes OPEN-x*n CVS 35 to 1800 1801 to 2300 2301 to 7000 7001 to 16200 13201 to 32400 32401 to 64800 64801 to 126000 126001 to 259200 259201 to 518400 518401 and higher 1024...

  • Page 153: D Using Veritas Cluster Server To Prevent Data Corruption, Using Vcs I/o Fencing

    154)). For each array port, calculate the number of VCS registration keys needed as follows: number of WWNs visible to a P9000 port x number of disk groups = number of registration keys Where the number of WWNs visible to a P9000 port = number of hosts x number of WWNs per P9000 port.
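The sizing rule above can be checked with a short calculation. A minimal sketch; the host, WWN, and disk-group counts below are illustrative, and "number of WWNs per P9000 port" is interpreted here as the WWNs each host presents to that array port:

```python
# VCS I/O fencing key sizing for one array port, per the rule above:
# keys = (hosts * WWNs each host presents to the port) * disk groups.
def vcs_registration_keys(hosts, wwns_per_host, disk_groups):
    wwns_visible = hosts * wwns_per_host
    return wwns_visible * disk_groups
```

For instance, two hosts each presenting two WWNs, with three disk groups, would need 12 registration keys on that port.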

  • Page 154

    Figure 11 Nodes and ports 154 Using Veritas Cluster Server to prevent data corruption...

  • Page 155

    Table 57 Port 1A key registration entries (columns: Port, LU – Disk Group, visible WWNs, registration entries)...

  • Page 156: E Reference Information For The Hp System Administration Manager (sam), Configuring The Devices Using Sam

    E Reference information for the HP System Administration Manager (SAM) The HP System Administration Manager (SAM) is used to perform HP-UX system administration functions, including: Setting up users and groups Configuring the disks and file systems Performing auditing and security activities Editing the system kernel configuration This appendix provides instructions for: Using SAM to configure the disk devices...

  • Page 157: Setting The Maximum Number Of Volume Groups Using Sam

    To configure the newly-installed disk array devices: Select Disks and File Systems, then select Disk Devices. Verify that the new disk array devices are displayed in the Disk Devices window. Select the device to configure, select the Actions menu, select Add, and then select Using the Logical Volume Manager.

  • Page 158: F Hp Clustered Gateway Deployments, Windows, Hba Configuration, Mpio Software, Array Configuration, Lun Presentation

    P9000 disk array. They have both been tested with the P9000 disk arrays, and this appendix details configuration requirements specific to P9000 deployments using HP PolyServe Software on Windows.

  • Page 159: Hba Configuration, Linux, Mpio Software, Array Configuration, Lun Presentation, Membership Partitions

    P9000 disk array. They have both been tested with the P9000 disk arrays, and this appendix details configuration requirements specific to P9000 deployments using HP PolyServe Software on Linux.

  • Page 160: Snapshots, Dynamic Volume And File System Creation

    Snapshots

    To take hardware snapshots on P9000 storage arrays, you must install the latest version of firmware on the array controllers, and the latest versions of Business Copy and Snapshot must also be installed on the array controllers. On the servers, you must install and configure the latest version...

  • Page 161: Glossary

    This value becomes the last byte of the address identifier for each public port on the loop.

    command device
    A volume in the disk array that accepts Continuous Access, Business Copy, or P9000 for Business Continuity Manager control operations, which are then executed by the array.

    CU
    Control unit.

  • Page 162

    VERITAS Cluster Server.

    volume
    On the P9000 or XP array, a volume is a uniquely identified virtual storage device composed of a control unit (CU) component and a logical device (LDEV) component separated by a colon. For example, 00:00 and 01:00 are two uniquely identified volumes: one is identified as CU = 00 and LDEV = 00, and the other as CU = 01 and LDEV = 00;...
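The CU:LDEV identifier described in the glossary entry above is just two hexadecimal fields separated by a colon; a tiny parser illustrates the split (a hypothetical helper for illustration, not part of any HP tool):

```python
def parse_volume_id(volume: str) -> tuple[int, int]:
    """Split a P9000/XP volume identifier such as '01:00' into its
    control unit (CU) and logical device (LDEV) components, both hex."""
    cu, ldev = volume.split(":")
    return int(cu, 16), int(ldev, 16)

# '00:00' and '01:00' share LDEV 00 but differ in CU, so they are
# two distinct volumes:
print(parse_volume_id("01:00"))  # -> (1, 0)
print(parse_volume_id("00:00"))  # -> (0, 0)
```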

  • Page 163: Index

    Index emulations, supported, 17, 129, 132, 135, 138, file system, Array Manager, 1, 14, 30, 36, 48, 49, 52, 53, 61, files 69, 76, 87, creating, auto-mount parameters, setting, verifying, labeling, logical volume, Business Copy, logical, not recognized by host, 111 LUSE device parameters, mounting,...

  • Page 164

    Host Mode, 15, 31, 37, 49, 54, 62, 77, 88, NetWare client settings, service representative tasks, ConsoleOne settings, technical support, 113 supported versions of HP-UX, HP-UX, supported versions, P9000 arrays I/O timeout parameter, setting, storage capacity, installation 164 Index

  • Page 165

    parameter tables configuration, byte information, Virtual Machines physical partition size, setup, parity error, 112 volume(s) partitioning devices, assigning, partitions groups creating, creating, path(s) setting maximum number, adding, groups, assigning new device, defining, 15, 30, 36, 49, 53, 61, 76, 87, logical, SCSI, auto-mount parameters,...

This manual is also for:

XP P9500
