
IBM Hub/Switch Installation Manual

High Performance Storage System Release 4.5



HPSS

Installation Guide

High Performance Storage System
Release 4.5
September 2002 (Revision 2)




  Summary of Contents for IBM Hub/Switch

  • Page 1: Installation Guide

    HPSS Installation Guide High Performance Storage System Release 4.5 September 2002 (Revision 2)
  • Page 2: September 2002 Hpss Installation Guide Release 4.5, Revision

    HPSS Installation Guide Copyright (C) 1992-2002 International Business Machines Corporation, The Regents of the University of California, Sandia Corporation, and Lockheed Martin Energy Research Corporation. All rights reserved. Portions of this work were produced by the University of California, Lawrence Livermore National Laboratory (LLNL) under Contract No.
  • Page 3: Table Of Contents

    Table of Contents ..........3 List of Figures .
  • Page 4 HPSS Operational Planning ... 42 HPSS Deployment Planning ... 43 Requirements and Intended Usages for HPSS ... 43 Storage System Capacity... 43 Required Throughputs ... 44 Load Characterization ... 44 Usage Trends ... 44 Duplicate File Policy ... 44 Charging Policy ... 44 Security...
  • Page 5 Metadata Monitor ... 77 NFS Daemons... 77 Startup Daemon ... 78 Storage System Management ... 79 HDM Considerations... 79 Non-DCE Client Gateway... 81 Storage Subsystem Considerations ... 81 Storage Policy Considerations... 81 Migration Policy... 82 Purge Policy ... 84 Accounting Policy and Validation ... 85 Security Policy ...
  • Page 6 Planning for Encina SFS Servers... 147 AIX ... 148 Solaris ... 148 Setup for HPSS Metadata Backup... 149 Setup Tape Libraries and Drives ... 149 IBM 3584 ... 149 3494 ... 150 STK ... 151 AML ... 151 Tape Drive Verification... 152 Setup Disk Drives ...
  • Page 7 Chapter 4 HPSS Installation........205 Overview...
  • Page 8 HPSS Configuration Limits... 250 Using SSM for HPSS Configuration... 251 Server Reconfiguration and Reinitialization ... 252 SSM Configuration and Startup ... 252 SSM Server Configuration and Startup... 253 SSM User Session Configuration and Startup ... 253 Global Configuration ... 255 Configure the Global Configuration Information ...
  • Page 9 Recommended Settings for Tape Devices... 411 Chapter 7 HPSS User Interface Configuration ..... . . 413 Client API Configuration ... 413 Non-DCE Client API Configuration...
  • Page 10 Performance... 469 The Global Fileset File ... 469 Appendix A Glossary of Terms and Acronyms ......471 Appendix B References.
  • Page 11 Appendix G High Availability........535 Overview...
  • Page 12 September 2002 HPSS Installation Guide Release 4.5, Revision 2...
  • Page 13: List Of Figures

    Figure 1-1 Migrate and Stage Operations ... 24 Figure 1-2 Relationship of HPSS Data Structures ... 25 Figure 1-3 The HPSS System ... 26 Figure 2-1 The Relationship of Various Server Data Structures ... 63 Figure 2-2 Relationship of Class of Service, Storage Hierarchy, and Storage Class ... 93 Figure 6-1 HPSS Health and Status Window ...
  • Page 14 Figure 6-36 AML PVR Server Configuration Window ... 377 Figure 6-37 STK PVR Server Configuration Window ... 378 Figure 6-38 STK RAIT PVR Server Configuration Window ... 379 Figure 6-39 Operator PVR Server Configuration Window ... 380 Figure 6-40 Disk Storage Server Configuration Window ... 396 Figure 6-41 Tape Storage Server Configuration Window ...
  • Page 15: List Of Tables

    Table 1-1 HPSS Client Interface Platforms ... 37 Table 2-1 Cartridge/Drive Affinity Table ... 56 Table 2-2 Gatekeeping Call Parameters ... 90 Table 2-3 Suggested Block Sizes for Disk ... 98 Table 2-4 Suggested Block Sizes for Tape ... 99 Table 2-5 HPSS Dynamic Variables (Subsystem Independent) ...
  • Page 16 Table 6-25 Solaris System Parameters ... 352 Table 6-26 Linux System Parameters ... 353 Table 6-27 Name Server Configuration Variables ... 356 Table 6-28 NFS Daemon Configuration Variables ... 361 Table 6-29 Non-DCE Client Gateway Configuration Variables ... 368 Table 6-30 Physical Volume Library Configuration Variables ...
  • Page 17: Preface

    Preface Conventions Used in This Book Example commands that should be typed at a command line will be preceded by a percent sign (‘%’) and be presented in a boldface courier font: % sample command Names of files, variables, and variable values will appear in a boldface courier font: Sample file, variable, or variable value Any text preceded by a pound sign (‘#’) should be considered shell script comment lines: # This is a comment...
  • Page 19: Chapter 1 Hpss Basics

    (DCE) products that form the infrastructure of HPSS. HPSS is the result of a collaborative effort by leading US Government supercomputer laboratories and industry to address very real, very urgent high-end storage requirements. HPSS is offered commercially by IBM Worldwide Government Industry, Houston, Texas.
  • Page 20: High Data Transfer Rate

    HPSS Movers and the Client API have been ported to non-DCE platforms. HPSS has been implemented on the IBM AIX and Sun Solaris platforms. In addition, selected components have been ported to other vendor platforms. The non-DCE Client API and Mover have been ported to SGI IRIX, while the Non-DCE Client API has also been ported to Linux.
  • Page 21: Storage Subsystems

    added, new classes of service can be set up. HPSS files reside in a particular class of service which users select based on parameters such as file size and performance. A class of service is implemented by a storage hierarchy which in turn consists of multiple storage classes, as shown in Figure 1-2.
  • Page 22: Hpss Files, Filesets, Volumes, Storage Segments And Related Metadata

    Chapter 1 HPSS Basics 1.3.1 HPSS Files, Filesets, Volumes, Storage Segments and Related Metadata The components used to define the structure of the HPSS name space are filesets and junctions. The components containing user data include bitfiles, physical and virtual volumes, and storage segments.
  • Page 23 • Virtual Volumes. An abstraction or mapping of physical volumes. A virtual volume may include one or more physical volumes. Striping of storage media is accomplished by the Storage Servers by collecting more than one physical volume into a single virtual volume. A virtual volume is primarily used inside of HPSS, thus hidden from the user, but its existence benefits the user by making the user’s data independent of device characteristics.
  • Page 24: Figure 1-1 Migrate And Stage Operations

    Figure 1-1 Migrate and Stage Operations
  • Page 25: Hpss Core Servers

    Figure 1-2 Relationship of HPSS Data Structures 1.3.2 HPSS Core Servers HPSS servers include the Name Server, Bitfile Server, Migration/Purge Server, Storage Server, Gatekeeper Server, Location Server, DMAP Gateway, Physical Volume Library, Physical Volume Repository, Mover, Storage System Manager, and Non-DCE Client Gateway. Figure 1-3 provides a simplified view of the HPSS system.
  • Page 26: Figure 1-3 The Hpss System

    • Name Server (NS). The NS translates a human-oriented name to an HPSS object identifier. Objects managed by the NS are files, filesets, directories, symbolic links, junctions and hard links. The NS provides access verification to objects and mechanisms for manipulating access to these objects.
  • Page 27 • Migration/Purge Server (MPS). The MPS allows the local site to implement its storage management policies by managing the placement of data on HPSS storage media using site-defined migration and purge policies. By making appropriate calls to the Bitfile and Storage Servers, MPS copies data to lower levels in the hierarchy (migration), removes data from the current level once copies have been made (purge), or moves data between volumes at the same level (lateral move).
  • Page 28 parallel I/O to that set of resources, and schedules the mounting and dismounting of removable media through the Physical Volume Library (see below). • Gatekeeper Server (GK). The Gatekeeper Server provides two main services: A. It provides sites with the ability to schedule the use of HPSS resources using the Gatekeeping Service.
  • Page 29: Hpss Storage Subsystems

    exactly one PVR. Multiple PVRs are supported within an HPSS system. Each PVR is typically configured to manage the cartridges for one robot utilized by HPSS. For information on the types of tape libraries supported by HPSS PVRs, see Section 2.4.2: Tape Robots on page 54.
  • Page 30 purge, and storage servers must now exist within a storage subsystem. Each storage subsystem may contain zero or one gatekeepers to perform site specific user level scheduling of HPSS storage requests or account validation. Multiple storage subsystems may share a gatekeeper. All other servers continue to exist outside of storage subsystems.
  • Page 31: Hpss Infrastructure

    Storage Subsystems will effectively be running an HPSS with a single Storage Subsystem. Note that sites are not required to use multiple Storage Subsystems. Since the migration/purge server is contained within the storage subsystem, migration and purge operate independently in each storage subsystem. If multiple storage subsystems exist within an HPSS, then there are several migration/purge servers operating on each storage class.
  • Page 32 provides HPSS with an environment in which a job or action that requires the work of multiple servers either completes successfully or is aborted completely within all servers. • Metadata Management. Each HPSS server component has system state and resource data (metadata) associated with the objects it manages.
  • Page 33: Hpss User Interfaces

    whereby a user's access permissions to an HPSS bitfile are specified by the HPSS bitfile authorization agent, the Name Server. These permissions are processed by the bitfile data authorization enforcement agent, the Bitfile Server. The integrity of the access permissions is certified by the inclusion of a checksum that is encrypted using the security context key shared between the HPSS Name Server and Bitfile Server.
  • Page 34 and the HPSS Movers. This provides the potential for using multiple client nodes as well as multiple server nodes. PFTP supports transfers via TCP/IP. The FTP client communicates directly with HPSS Movers to transfer data at rates limited only by the underlying communications hardware and software.
  • Page 35: Hpss Management Interface

    1.3.6 HPSS Management Interface HPSS provides a powerful SSM administration and operations GUI through the use of the Sammi product from Kinesix Corporation. Detailed information about Sammi can be found in the Sammi Runtime Reference, Sammi User’s Guide, and Sammi System Administrator’s Guide. SSM simplifies the management of HPSS by organizing a broad range of technical data into a series of easy-to-read graphic displays.
  • Page 36 • Logging Policy. The logging policy controls the types of messages to log. On a per server basis, the message types to write to the HPSS log may be defined. In addition, for each server, options to send Alarm, Event, or Status messages to SSM may be defined. •...
  • Page 37: Hpss Hardware Platforms

    Solaris listed above is on the HPSS distribution tape. Maintenance of the PFTP and Client API software on these platforms is the responsibility of the customer, unless a support agreement is negotiated with IBM. Contact IBM for information on how to obtain the software mentioned above.
  • Page 38: Server And Mover Platforms

    The MPI-IO API can be ported to any platform that supports a compatible host MPI and the HPSS Client API (DCE or Non-DCE version). See Section 2.5.6: MPI-IO API on page 60 for determining a compatible host MPI. The XFS HDM is supported on standard Intel Linux platforms as well as the OpenNAS network appliance from Consensys Corp.
  • Page 39: Chapter 2 Hpss Planning

    HPSS Planning Chapter 2 2.1 Overview This chapter provides HPSS planning guidelines and considerations to help the administrator effectively plan, and make key decisions about, an HPSS system. Topics include: • Requirements and Intended Usages for HPSS on page 43 •...
  • Page 40 that are introduced into your HPSS system. For example, if you plan to use HPSS to back up all of the PCs in your organization, it would be best to aggregate the individual files into large individual files before moving them into the HPSS name space.
  • Page 41: Purchasing Hardware And Software

    6. Define the HPSS storage characteristics and create the HPSS storage space to satisfy the site’s requirements: Define the HPSS file families. Refer to Section 2.9.4: File Families on page 105 for more information about configuring families. Define filesets and junctions. Refer to Section 8.7: Creating Filesets and Junctions on page 465 for more information.
  • Page 42: Hpss Operational Planning

    If deciding to purchase Sun or SGI servers for storage purposes, note that OS limitations will only allow a static number of raw devices to be configured per logical unit (disk drive or disk array). Solaris currently allows only eight partitions per logical unit (one of which is used by the OS). Irix currently allows only sixteen partitions per logical unit.
  • Page 43: Hpss Deployment Planning

    2.1.4 HPSS Deployment Planning The successful deployment of an HPSS installation is a complicated task which requires reviewing customer/system requirements, integration of numerous products and resources, proper training of users/administrators, and extensive integration testing in the customer environment. Early on, a set of meetings and documents are required to ensure the resources and intended configuration of those resources at the customer location can adequately meet the expectations required of the system.
  • Page 44: Required Throughputs

    2.2.2 Required Throughputs Determine the required or expected throughput for the various types of data transfers that the users will perform. Some users want quick access to small amounts of data. Other users have huge amounts of data they want to transfer quickly, but are willing to wait for tape mounts, etc. In all cases, plan for peak loads that can occur during certain time periods.
  • Page 45: Security

    2.2.7 Security The process of defining security requirements is called developing a site security policy. It will be necessary to map the security requirements into those supported by HPSS. HPSS authentication, authorization, and audit capabilities can be tailored to a site’s needs. Authentication and authorization between HPSS servers is done through use of DCE cell security authentication and authorization services.
  • Page 46: Availability

    2.2.8 Availability The High Availability component allows HPSS to interoperate with IBM’s HACMP software. When configured with the appropriate redundant hardware, this allows failures of individual system components (network adapters, core server nodes, power supplies, etc.) to be overcome, allowing the system to resume servicing requests with minimal downtime.
  • Page 47 For U.S. sites, assuming some level of encryption is desired for secure DCE communication, the DCE Data Encryption Standard (DES) library routines are required. For non-U.S. sites or sites desiring to use non-DES encryption, the DCE User Data Masking Encryption Facility is required. Note that if either of these products is ordered, install it on all nodes containing any subset of DCE and/or Encina software.
  • Page 48 2.3.1.3 XFS HPSS uses the open source Linux version of SGI’s XFS filesystem as a front-end to an HPSS archive. The following nodes must have XFS installed: • Nodes that run the HPSS/XFS HDM servers 2.3.1.4 Encina HPSS uses the Encina distributed transaction processing software developed by Transarc Corporation, including the Encina Structured File Server (SFS) to manage all HPSS metadata.
  • Page 49 The HPSS Server Sammi License and, optionally, the HPSS Client Sammi License available from the Kinesix Corporation are required. The Sammi software must be installed separately prior to the HPSS installation. In addition, the Sammi license(s) for the above components must be obtained from Kinesix and set up as described in Section 4.5.3: Set Up Sammi License Key (page 213) before running Sammi.
  • Page 50: Prerequisite Summary For Aix

    C++ interfaces may be selectively disabled in Makefile.macros if these components of MPI-IO cannot be compiled. • Sites using the Command Line SSM utility, hpssadm, will require Java 1.3.0 and JSSE (the Java Secure Sockets Extension) 1.0.2. These are required not only for hpssadm itself but also for building the SSM Data Server to support hpssadm.
  • Page 51: Prerequisite Summary For Solaris

    2.3.4 Prerequisite Summary for Solaris 2.3.4.1 HPSS Server/Mover Machine 1. Solaris 5.8 2. DCE for Solaris Version 3.2 (patch level 1 or later) 3. DFS for Solaris Version 3.1 (patch level 4 or later) if HPSS HDM is to be run on the machine 4.
  • Page 52: Prerequisite Summary For Linux And Intel

    2.3.5 Prerequisite Summary for Linux and Intel 2.3.5.1 HPSS/XFS HDM Machine 1. Linux kernel 2.4.18 or later (Available via FTP from linux/kernel/v2.4) 2. Linux XFS 1.1 (Available via FTP as a 2.4.18 kernel patch at projects/xfs/download/Release-1.1/kernel_patches) 3.
  • Page 53: Hardware Considerations

    2.3.5.2 HPSS Non-DCE Mover Machine 1. Linux kernel 2.4.18 2. HPSS KAIO Patch It will be necessary to apply the HPSS KAIO kernel patch (kaio-2.4.18-1). This patch adds asynchronous I/O support to the kernel which is required for the Mover. The procedure for applying this patch is outlined in Section 3.10: Setup Linux Environment for Non-DCE Mover on page 195.
  • Page 54: Tape Robots

    transfer method, which provides for intra-machine transfers between either Movers or Movers and HPSS clients directly via a shared memory segment. Along with shared memory, HPSS also supports a Local File Transfer data path, for client transfers that involve HPSS Movers that have access to the client's file system.
  • Page 55 2.4.2.2 IBM 3584 (LTO) The IBM 3584 Tape Library and Robot must be attached to an AIX workstation either through an LVD Ultra2 SCSI or HVD Ultra SCSI interface. The library shares the same SCSI channel as the first drive, so the first drive in the 3584 must in fact be connected to the AIX workstation. This workstation must be an HPSS node running the PVR.
  • Page 56: Tape Devices

    The tape devices/drives supported by HPSS are listed below, along with the supported device host attachment methods for each device. • IBM 3480, 3490, 3490E, 3590, 3590E and 3590H are supported via SCSI attachment. • IBM 3580 devices are supported via SCSI attachment.
  • Page 57: Disk Devices

    Special Bid Considerations Some hardware and hardware configurations are supported only by special bid. These items are listed below: • IBM 3580 Drive • IBM 3584 Tape Libraries • ADIC AML Tape Libraries
  • Page 58: Hpss Interface Considerations

    • High Availability HPSS configuration 2.5 HPSS Interface Considerations This section describes the user interfaces to HPSS and the various considerations that may impact the use and operation of HPSS. 2.5.1 Client API The HPSS Client API provides a set of routines that allow clients to access the functions offered by HPSS.
  • Page 59: Ftp

    2.5.3 FTP HPSS provides an FTP server that supports standard FTP clients. Extensions are also provided to allow additional features of HPSS to be utilized and queried. Extensions are provided for specifying Class of Service to be used for newly created files, as well as directory listing options to display Class of Service and Accounting Code information.
  • Page 60: Mpi-Io Api

    a stateless protocol. This allows use of a connectionless networking transport protocol (UDP) that requires much less overhead than the more robust TCP. As a result, client systems must time out requests to servers and retry requests that have timed out before a response is received. Client time-out values and retransmission limits are specified when a remote file system is mounted on the client system.
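The time-out and retransmission limits mentioned above are ordinary NFS mount options. As a hedged illustration only: the server name, export path, and values below are invented, and exact option syntax varies by client operating system.

```
% mount -o soft,timeo=30,retrans=5 hpss-nfs:/hpssfs /mnt/hpss
```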
  • Page 61: Dfs

    Transarc’s DFS SMT kernel extensions installed. These extensions are available for Sun Solaris and IBM AIX platforms. Once an aggregate has been set up, end users can access filesets on the aggregate from any machine that supports DFS client software, including PCs. The wait/retry logic in DFS client software was modified to account for potential delays caused by staging data from...
  • Page 62: Xfs

    2.5.8 XFS for Linux is an open source filesystem from SGI based on SGI’s XFS filesystem for IRIX. HPSS has the capability to backend XFS and transparently archive inactive data. This frees XFS disk to handle data that is being actively utilized, giving users the impression of an infinitely large XFS filesystem that performs at near-native XFS speeds.
  • Page 63: Bitfile Server

    2.6.2 Bitfile Server The Bitfile Server (BFS) provides a view of HPSS as a collection of files. It provides access to these files and maps the logical file storage into underlying storage objects in the Storage Servers. When a BFS is configured, it is assigned a server ID. This value should never be changed. It is embedded in the identifier that is used to name bitfiles in the BFS.
  • Page 64: Disk Storage Server

    2.6.3 Disk Storage Server Each Disk Storage Server manages random access magnetic disk storage units for HPSS. It maps each disk storage unit onto an HPSS disk Physical Volume (PV) and records configuration data for the PV. Groups of one or more PVs (disk stripe groups) are managed by the server as disk Virtual Volumes (VVs).
  • Page 65: Migration/Purge Server

    The Tape Storage Server is designed to scale up its ability to manage tapes as the number of tapes increases. As long as sufficient memory and CPU capacity exist, threads can be added to the server to increase its throughput. Additional Storage Subsystems can also be added to a system, increasing concurrency even further.
  • Page 66 storage class reaches the threshold configured in the purge policy for that storage class. Remember that simply adding migration and purge policies to a storage class will cause MPS to begin running against the storage class, but it is also critical that the hierarchies to which that storage class belongs be configured with proper migration targets in order for migration and purge to perform as expected.
  • Page 67 There are two different tape migration algorithms, tape volume migration and tape file migration. The algorithm which is applied to a tape storage class is selected in the migration policy for that class. The purpose of tape volume migration is to move data stored in a tape storage class either downward to the next level of the storage hierarchy (migration) or to another tape volume within the same storage class (lateral move) in order to empty tape volumes and allow them to be reclaimed.
  • Page 68: Gatekeeper

    MPS provides the capability of generating migration/purge report files that document the activities of the server. The specification of the UNIX report file name prefix in the MPS server specific configuration enables the server to create these report files. It is suggested that a complete path be provided as part of this file name prefix.
  • Page 69 are associated with storage subsystems using the Storage Subsystem Configuration screen (see Section 6.4: Storage Subsystems Configuration on page 259). If a storage subsystem has no Gatekeeper, then the Gatekeeper field will be blank. A single Gatekeeper can be associated with every storage subsystem, a group of storage subsystems, or one storage subsystem.
  • Page 70: Location Server

    requests from a particular host or user. The Site Interfaces will be located in a shared library that is linked into the Gatekeeper Server. It is important that the Site Interfaces return a status in a timely fashion. Create, open, and stage requests from DFS, NFS, and MPS are timing sensitive, thus the Site Interfaces won't be permitted to delay or deny these requests; however, the Site Interfaces may choose to be involved in keeping statistics on these requests by monitoring requests from Authorized Callers.
  • Page 71: Pvr

    might be necessary by making requests to the appropriate Physical Volume Repository (PVR). The PVL communicates directly with HPSS Movers in order to verify media labels. The PVL is not required to be co-resident with any other HPSS servers and is not a CPU-intensive server.
  • Page 72: Mover

    LTO PVR The LTO PVR manages the IBM 3584 Tape Library and Robot, which mounts, dismounts and manages LTO tape cartridges and IBM 3580 tape drives. The PVR uses the Atape driver interface to issue SCSI commands to the library.
  • Page 73 2.6.10.1 Asynchronous I/O Asynchronous I/O must be enabled manually on AIX and Linux platforms. There should be no asynchronous I/O setup required for Solaris or IRIX platforms. 2.6.10.1.1 To enable asynchronous I/O on an AIX platform, use either the chdev command: % chdev -l aio0 -a autoconfig=available or smitty: % smitty aio...
  • Page 74 6. Now, rebuild the kernel configuration by running the "make config" command and answering "yes" when questioned about AIO support. The default value of 4096 should be sufficient for the number of system-wide AIO requests. At this time, you should also configure the kernel to support your disk or tape devices. If tape device access is required, be sure to also enable the kernel for SCSI tape support.
  • Page 75 For Solaris, the method used to enable variable block sizes for a tape device is dependent on the type of driver used. Supported devices include Solaris SCSI Tape Driver and IBM SCSI Tape Driver. For the IBM SCSI Tape Driver, set the block_size parameter in the /opt/IBMtape/IBMtape.conf configuration file to 0 and perform a reboot with the reconfiguration option.
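As a hedged sketch of the IBM SCSI Tape Driver step above: switching to variable block mode means setting block_size to 0 in /opt/IBMtape/IBMtape.conf and then rebooting with the reconfiguration option. The sed expression below is illustrative and operates on a working copy so the live file is untouched; verify the exact configuration syntax against your driver documentation.

```shell
# Work on a copy of the driver configuration file; if the real file is
# absent (e.g. on a non-Solaris test box), create a sample for illustration.
conf=./IBMtape.conf.work
cp /opt/IBMtape/IBMtape.conf "$conf" 2>/dev/null ||
    printf 'block_size = 1024\n' > "$conf"   # invented sample content

# Rewrite any fixed block_size setting to 0 (variable block mode).
sed 's/^\(block_size[[:space:]]*=[[:space:]]*\)[0-9][0-9]*/\10/' "$conf" > "$conf.new"
grep '^block_size' "$conf.new"
```

After installing the edited file in place, a reboot with the reconfiguration option is required, as the text notes.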
  • Page 76 Movers: • Each Mover executable is built to handle a single particular device interface (e.g., IBM SCSI-attached 3490E/3590 drives, IBM BMUX-attached 3480/3490/3490E drives). If multiple types of device specific interfaces are to be supported, multiple Movers must be configured.
  • Page 77: Logging Service

    • Mover to Mover data transfers (accomplished for migration, staging, and repack operations) also will impact the planned Mover configuration. For devices that support storage classes for which there will be internal HPSS data transfers, the Movers controlling those devices should be configured such that there is an efficient data path among them. If Movers involved in a data transfer are configured on the same node, the transfer will occur via a shared memory segment (involving no explicit data movement from one Mover to the other).
  • Page 78: Startup Daemon

    Even if no client NFS access is required, the NFS interface may provide a useful mechanism for HPSS name space object administration. The HPSS NFS Daemon cannot be run on a processor that also runs the native operating system's NFS daemon.
  • Page 79: Storage System Management

    use the descriptive name “Startup Daemon (tardis)”. In addition, choose a similar convention for CDS names (for example, /.:/hpss/hpssd_tardis). The Startup Daemon is started by running the script /etc/rc.hpss. This script should be added to the /etc/inittab file during the HPSS infrastructure configuration phase. However, the script should be manually invoked after the HPSS is configured and whenever the Startup Daemon dies.
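A hedged sketch of the /etc/inittab arrangement described above: the entry identifier, run level, action, and redirection below are invented for illustration, and the HPSS infrastructure configuration phase adds the real entry.

```
% grep rc.hpss /etc/inittab
rchpss:2:once:/etc/rc.hpss >/dev/console 2>&1
```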
  • Page 80 requests to the DMAP Gateway. Migration processes (hpss_hdm_mig) migrate data to HPSS, and purge processes (hdm_hdm_pur) purge migrated data from DFS and XFS. A set of processes (hpss_hdm_tcp) accept requests from the DMAP Gateway, and perform the requested operation in DFS.
  • Page 81: Non-Dce Client Gateway

    must be in use before purging begins, and a lower bound specifying the target percentage of free space to reach before purging is stopped. 2.6.17 Non-DCE Client Gateway The Non-DCE Client Gateway provides HPSS access to applications running without DCE and/or Encina which make calls to the Non-DCE Client API.
  • Page 82: Migration Policy

    2.8.1 Migration Policy The migration policy provides the capability for HPSS to copy (migrate) data from one level in a hierarchy to one or more lower levels. The migration policy defines the amount of data and the conditions under which it is migrated, but the number of copies and the location of those copies is determined by the storage hierarchy definition.
  • Page 83: Migration Policy For Tape

    • The Migrate At Warning Threshold option causes MPS to begin a migration run immediately when the storage class warning threshold is reached regardless of when the Runtime Interval is due to expire. This option allows MPS to begin migration automatically when it senses that a storage space crisis may be approaching.
  • Page 84: Purge Policy

    laterally to another volume in the same storage class. Tape file migration with purge avoids moving read active files at all. If a file is read inactive, all three algorithms migrate it down the hierarchy. The purpose of this field is to avoid removing the higher level copy of a file which is likely to be staged again.
  • Page 85: Accounting Policy And Validation

    • The Start purge when space used reaches <nnn> percent parameter allows sites control over the amount of free space that is maintained in a disk storage class. A purge run will be started for this storage class when the total space used in this class exceeds this value. •...
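The interplay of the start and stop thresholds above can be made concrete with a small arithmetic sketch. The numbers here are invented for illustration, not HPSS recommendations.

```shell
# Invented example: a 1000 GB disk storage class that starts purging
# when 90% of its space is used and stops when usage falls to 70%
# must free at least 20% of capacity per purge run.
capacity_gb=1000
start_used_pct=90
stop_used_pct=70
to_free_gb=$(( capacity_gb * (start_used_pct - stop_used_pct) / 100 ))
echo "each purge run frees at least ${to_free_gb} GB"
```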
  • Page 86 In UNIX-style accounting, each user has one and only one account index, their UID. This, combined with their Cell Id, uniquely identifies how the information may be charged. In Site-style accounting, each user may have more than one account index, and may switch between them at runtime.
  • Page 87: Security Policy

    If a user has their default account index encoded in a string of the form AA=<default-acct-idx> in their DCE account's gecos field or in their DCE principal's HPSS.gecos extended registry attribute (ERA), then Site-style accounting will be used for them. Otherwise it will be assumed that they are using UNIX-style accounting.
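As a hedged sketch of the AA=<default-acct-idx> convention above, here is one way such a tag might be extracted from a gecos string. The sample gecos value is invented, and HPSS's own parsing may differ.

```shell
# Invented gecos string carrying a Site-style default account index.
gecos='Jane Doe,AA=42,Bldg 5'

# Pull out the digits following "AA=" if the tag is present.
acct_idx=$(printf '%s\n' "$gecos" | sed -n 's/.*AA=\([0-9][0-9]*\).*/\1/p')

if [ -n "$acct_idx" ]; then
    echo "Site-style accounting; default account index $acct_idx"
else
    echo "no AA= tag; UNIX-style accounting assumed"
fi
```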
  • Page 88 2.8.4.3 FTP/PFTP By default, FTP and Parallel FTP (PFTP) interfaces use a username/password mechanism to authenticate and authorize end users. The end user identity credentials are obtained from the principal and account records in the DCE security registry. However, FTP and PFTP users do not require maintenance of a login password in the DCE registry.
  • Page 89: Logging Policy

    2.8.4.7 Name Space Enforcement of access to HPSS name space objects is the responsibility of the HPSS Name Server. The access rights granted to a specific user are determined from the information contained in the object's ACL. 2.8.4.8 Security Audit HPSS provides capabilities to record information about authentication, file creation, deletion, access, and authorization events.
  • Page 90: Gatekeeping

    2.8.7 Gatekeeping Every Gatekeeper Server has the ability to supply the Gatekeeping Service. The Gatekeeping Service provides a mechanism for HPSS to communicate information through a well-defined interface to a policy software module written entirely by the site. The site policy code is placed in a well-defined site shared library for the gatekeeping policy (/opt/hpss/lib/libgksite.[a|so]) which is linked to the Gatekeeper Server.
  • Page 91: Storage Characteristics Considerations

    Site "Stat" Interface will be called (gk_site_CreateStats, gk_site_OpenStats, gk_site_StageStats) and the Site Interface will not be permitted to return any errors on these requests. Otherwise, if AuthorizedCaller is set to FALSE, then the normal Site Interface will be called (gk_site_Create, gk_site_Open, gk_site_Stage) and the Site Interface will be allowed to return no error or return an error to either retry the request later or deny the request.
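The dispatch rule in the preceding paragraph can be modeled in a few lines. This is an illustrative sketch only: the real site interface is C code linked into libgksite.[a|so], and everything here other than the gk_site_* names quoted in the text is invented for illustration:

```python
def dispatch(authorized_caller: bool, op: str, site_policy):
    """Model the Gatekeeper dispatch rule: authorized callers reach the
    "Stat" interface, which may not return errors; everyone else reaches
    the normal interface, which may succeed, ask for a retry, or deny.
    site_policy is a hypothetical stand-in for the site library."""
    if authorized_caller:
        # gk_site_CreateStats / gk_site_OpenStats / gk_site_StageStats:
        # informational only; errors are not permitted here.
        site_policy.stats(op)
        return "ok"
    # gk_site_Create / gk_site_Open / gk_site_Stage may return
    # "ok", "retry", or "deny".
    return site_policy.decide(op)

class OpenSitePolicy:
    """Trivial stand-in policy: allow everything."""
    def stats(self, op): pass
    def decide(self, op): return "ok"

print(dispatch(True, "create", OpenSitePolicy()))
print(dispatch(False, "stage", OpenSitePolicy()))
```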
  • Page 92 to determine HPSS hardware requirements and determine how to configure this hardware to provide the desired HPSS system. The process of organizing the available hardware into a desired configuration results in the creation of a number of HPSS metadata objects. The primary objects created are classes of service, storage hierarchies, and storage classes.
  • Page 93: Storage Class

    Figure 2-2 Relationship of Class of Service, Storage Hierarchy, and Storage Class 2.9.1 Storage Class Each virtual volume and its associated physical volumes belong to some storage class in HPSS. The SSM provides the capability to define storage classes and to add and delete virtual volumes to and from the defined storage classes.
  • Page 94 Explanation: For example, if a site has ESCON attached tape drives on an RS6000, the driver can handle somewhat less than 64 KB physical blocks on the tape. A good selection here would be 32 KB. See Section 2.9.1.12 for recommended values for tape media supported by HPSS. 2.9.1.2 Virtual Volume Block Size Selection (disk) Guideline: The virtual volume (VV) block size must be a multiple of the underlying media block...
  • Page 95 cannot be greater than half the number of drives available. Also, doing multiple copies from disk to two tape storage classes with the same media type will perform very poorly if the stripe width in either class is greater than half the number of drives available. The recover utility also requires a number of drives equivalent to 2 times the stripe width to be available to recover data from a damaged virtual volume if invoked with the repack option.
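The drive-count constraints in this passage reduce to simple arithmetic. Below is a hypothetical checker (not an HPSS utility) that applies them:

```python
def check_tape_stripe(stripe_width: int, drives_available: int):
    """Apply the rules from the text: tape-to-tape migration and
    multiple copies to same-media-type classes suffer if the stripe
    width exceeds half the drives of that type, and recover with the
    repack option needs 2 * stripe_width drives available."""
    issues = []
    if stripe_width > drives_available // 2:
        issues.append("stripe width exceeds half the available drives: "
                      "migration and multiple copies will perform poorly")
    if 2 * stripe_width > drives_available:
        issues.append("recover with repack needs 2 x stripe width drives")
    return issues

print(check_tape_stripe(4, 8))   # just enough drives: no issues
print(check_tape_stripe(4, 6))   # too few drives: both rules violated
```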
  • Page 96 2.9.1.5 Blocks Between Tape Marks Selection Blocks between tape marks is the number of physical media blocks written before a tape mark is generated. The tape marks are generated for two reasons: (1) To force tape controller buffers to flush so that the Mover can better determine what was actually written to tape, and (2) To quicken positioning for partial file accesses.
  • Page 97 Explanation: The Class of Service (COS) mechanism can be used to place files in the appropriate place. Note that although the Bitfile Server provides the ability to use COS selection, current HPSS interfaces only take advantage of this in two cases. First, the pput command in PFTP automatically takes advantage of this by selecting a COS based on the size of the file.
  • Page 98: Table 2-3 Suggested Block Sizes For Disk

    2.9.1.10 PV Estimated Size / PV Size Selection Guideline: For tape, select a value that represents how much space can be expected to be written to a physical volume in this storage class with hardware data compression factored in. Explanation: The Storage Server will fill the tape regardless of the value indicated.
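Factoring hardware compression into the estimate is a single multiplication: the native cartridge capacity scaled by the compression ratio expected for the site's data. A sketch with made-up numbers (the function name and the 2:1 ratio are ours, not HPSS values):

```python
def pv_estimated_size(native_capacity_bytes: int, compression_ratio: float) -> int:
    """Estimated data a tape physical volume will hold once hardware
    compression is factored in (ratio 1.0 = incompressible data)."""
    return int(native_capacity_bytes * compression_ratio)

# A hypothetical 10 GB native cartridge holding 2:1 compressible data:
print(pv_estimated_size(10 * 1024**3, 2.0) // 1024**3)  # 20
```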
  • Page 99: Table 2-4 Suggested Block Sizes For Tape

    Table 2-4 contains attribute settings for the supported tape storage media types.

    Table 2-4 Suggested Block Sizes for Tape

        Tape Type               Media Block Size
        Ampex DST-312           1 MB
        Ampex DST-314           1 MB
        IBM 3480                32 KB
        IBM 3490                32 KB

    HPSS Installation Guide Release 4.5, Revision 2
  • Page 100 Table 2-4 Suggested Block Sizes for Tape (continued)

        IBM 3490E               32 KB
        IBM 3580                256 KB
        IBM 3590                256 KB
        IBM 3590E               256 KB
        IBM 3590H               256 KB
        Sony GY-8240            256 KB...
  • Page 101: Storage Hierarchy

    Table 2-4 Suggested Block Sizes for Tape (continued)

        StorageTek Redwood      256 KB
        StorageTek Timberline   64 KB

    The STK RAIT PVR cannot be supported at this time since STK has not yet made RAIT generally available. In Table 2-4: •...
  • Page 102: Class Of Service

    and its associated attributes. For detailed descriptions of each attribute associated with a storage hierarchy, see Section 6.7.2: Configure the Storage Hierarchies (page 315). The following is a list of rules and guidelines for creating and managing storage hierarchies. Rule 1: All writes initiated by clients are directed to the highest level (level 0) in the hierarchy.
  • Page 103 2.9.3.1 Selecting Minimum File Size Guideline: This field can be used to indicate the smallest file that should be stored in this COS. Explanation: This limit is not enforced and is advisory in nature. If the COS Hints mechanism is used, minimum file size can be used as a criterion for selecting a COS.
  • Page 104 Guideline 4: Select the Stage on Open Background option if you want the stage to be queued internally in the Bitfile Server and processed by a background BFS thread on a scheduled basis. Explanation: The open request will return with success if the file is already staged. If the file needs to be staged, an internal stage request is placed in a queue and will be selected and processed by the Bitfile Server in the background.
  • Page 105: File Families

    2.9.3.6 Selecting Transfer Rate This field can be used via the COS Hints mechanism to affect COS selection. Guideline 1: This field should generally be set to the value of the Transfer Rate field in the storage class that is at the top level in the hierarchy. This should always be the case if the data is being staged on open.
  • Page 106: Hpss Storage Space

    2.10.1 HPSS Storage Space HPSS files are stored on the media that is defined to HPSS via the import and create storage server resources mechanisms provided by the Storage System Manager. You must provide enough physical storage to meet the demands of your user environment. HPSS assists you in determining the amount of space needed by providing SSM screens with information on total space and used space in all of the storage classes that you have defined.
  • Page 107 bitfile.# mpchkpt.# nsacls.# nsfilesetattrs.# nsobjects.# nstext.# sspvdisk.# sspvtape.# storagemapdisk.# storagemaptape.# storagesegdisk.# storagesegtape.# vvdisk.# vvtape.# The following files are part of an HPSS system, but are not associated with a particular subsystem: accounting acctsnap acctsum acctvalidate cartridge_3494 cartridge_3495 cartridge_aml cartridge_lto cartridge_operator cartridge_stk
  • Page 108 cartridge_stk_rait The STK RAIT PVR cannot be supported at this time since STK has not yet made RAIT generally available. dmgfileset filefamily gkconfig globalconfig hierarchy logclient logdaemon logpolicy lspolicy migpolicy mmonitor mountd mover moverdevice ndcg nsconfig nsglobalfilesets purgepolicy pvlactivity
  • Page 109 pvldrive pvljob pvlpv sclassthreshold serverconfig site storageclass storsubsysconfig 2.10.2.1 Global Configuration Metadata Global Configuration File (globalconfig) - This file contains configuration data that is global to an HPSS system. Since it only ever contains one small metadata record, it plays a negligible role in determining metadata size.
  • Page 110 • Metadata Monitor Configurations (mmonitor) • Migration/Purge Server Configurations (mps) • Mount Daemon Configurations (mountd) • Mover Configurations (mover) • NFS Daemon Configurations (nfs) • Non-DCE Client Gateway Configurations (ndcg) • NS Configurations (nsconfig) • PVL Configurations (pvl) •...
  • Page 111 Mount Daemon Configurations. Each NFS Mount Daemon must have an entry in this configuration metadata file describing various startup/control arguments. There will be one Mount Daemon entry for each NFS server defined. Mover Configurations. Each Mover (MVR) must have an entry in this configuration metadata file describing various startup/control arguments.
  • Page 112 Server. However, that would leave no space for any symbolic links, hard links, or growth. To cover these needs, the total number of SFS records might be rounded up to 1,500,000. If more name space is needed, additional space can be obtained by allocating more SFS records, by adding more storage subsystems, and/or by “attaching”...
  • Page 113 • Bitfile Tape Segments (bftapesegment.#) • BFS Storage Segment Checkpoint (bfsssegchkpt.#) • BFS Storage Segment Unlinks (bfssunlink.#) • Bitfile COS Changes (bfcoschange.#) • Bitfile Migration Records (bfmigrrec.#) • Bitfile Purge Records (bfpurgerec.#) • Accounting Summary Records (acctsum) • Accounting Logging Records (acctlog.#) Storage Classes.
  • Page 114 map records is the total number of disk storage segments divided by 2. Another way to put an upper bound on the number of disk map records is as follows: • For each disk storage class defined, determine the total amount of disk space in bytes available in the storage class.
  • Page 115: Disk Storage Server Metadata

    bitfiles and bytes stored by the account index in the given COS, 2) if the storage class is not 0, this is a statistics record and contains the total number of bitfile accesses and bytes transferred associated with the account index, COSid, and referenced storage class. The number of records in this file should be the number of users in the HPSS system multiplied by the average number of levels in a hierarchy plus 1.
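The accounting-record sizing rule quoted above can be written out directly (the function name is ours):

```python
def accounting_records(num_users: int, avg_levels_per_hierarchy: int) -> int:
    """Number of accounting-file records: users multiplied by the
    average number of levels in a hierarchy plus 1, per the sizing
    rule in the text."""
    return num_users * (avg_levels_per_hierarchy + 1)

# e.g. 1000 users, hierarchies averaging 2 levels -> 3000 records
print(accounting_records(1000, 2))  # 3000
```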
  • Page 116: Tape Storage Server Metadata

    Expect both the disk storage segment metadata file and the disk storage map metadata file to be quite volatile. As files are added to HPSS, disk storage segments will be created, and as files are migrated to tape and purged from disk, they will be deleted. If SFS storage is available on a selection of devices, the disk storage segment metadata file and the storage disk map file would be good candidates for placement on the fastest device of suitable size.
  • Page 117 SS Tape Physical Volumes. The tape PV metadata file describes each tape physical volume imported into HPSS. The number of records in this file will therefore equal the total number of tape cartridges that will be managed by this Storage Server. SS Tape Virtual Volumes.
  • Page 118 PVL Activities. This metadata file stores individual PVL activity requests such as individual tape mounts. Depending on how many concurrent I/O requests are allowed to flow through the Storage Server and PVL at any given time, this metadata file should not grow beyond a few hundred records.
  • Page 119 The MPS shares the following SFS metadata files with the Bitfile Server (the default SFS file names are shown in parentheses). • Migration Policies (migpolicy) • Purge Policies (purgepolicy) • Migration Records (bfmigrrec.#, where # is the storage subsystem ID) •...
  • Page 120 2.10.2.15 Storage System Management Metadata The SSM System Manager is the primary user of the following SFS metadata files (the default SFS filenames are shown in parentheses): • File Family (filefamily) In addition, the SSM System Manager requires an entry in the generic server configuration file. The SSM Data Server does not require an entry in any SFS file.
  • Page 121: Metadata Constraints

    ToolsRepository/md43.xls. To access the website, an account is required. To request an account, use URL http://www4.clearlake.ibm.com/cgi-bin/hpss/requests/id_request.pl. The spreadsheet is divided into two worksheets: the first defines the various HPSS and SFS sizing assumptions; the second calculates the sizing estimates.
  • Page 122: Table 2-5 Hpss Dynamic Variables (Subsystem Independent)

    Table 2-5 HPSS Dynamic Variables (Subsystem Independent)

        Total Number of Users: Defines the total number of users in the HPSS system. This value is used in conjunction with “Avg. Number of Levels Per Hierarchy” to define the size of accounting records.
  • Page 123: Table 2-6 Hpss Dynamic Variables (Subsystem Specific)

    Table 2-6 HPSS Dynamic Variables (Subsystem Specific)

        Max Total Bitfiles: The maximum number of bitfiles that will exist in HPSS. The spreadsheet also considers this value to be the total number of bitfiles on tape, since it is assumed that every HPSS bitfile will eventually migrate to tape.
  • Page 124 Table 2-6 HPSS Dynamic Variables (Subsystem Specific, continued)

        Avg. Text Overflows Per Name Space Object: A name-space object record can store a filename that is 23 characters long (the base name, not the full pathname). If a filename is longer than 23 characters, a text overflow record must be generated.
  • Page 125: Table 2-7 Hpss Static Configuration Values

    Table 2-6 HPSS Dynamic Variables (Subsystem Specific, continued)

        Total Disk Physical Volumes: The maximum total number of disk physical volumes.
        Total Disk Virtual Volumes: The total number of disk virtual volumes that will be created, which is equal to the number of disk physical volumes divided by the average disk stripe width.
  • Page 126 Table 2-7 HPSS Static Configuration Values (continued)

        Total Log Clients: The total number of Log Clients that will be used, which will be equal to the total number of nodes running any type of HPSS server.
        Total Metadata Monitor Servers: The total number of Metadata Monitor servers, which should equal the total number of...
  • Page 127 Total Records—This column shows the projected number of metadata records for each metadata file, given the assumptions from the assumption worksheet. The formulas for computing the number of records are shown below. Note that the variable names on the left match the metadata file names listed in the Subsystem/Metadata File column while variables on the right of the equals sign (“=”) represent values from the assumptions worksheet (unless otherwise noted).
  • Page 128
    MPS/Server Configs (mps) = Total Migration/Purge Servers
    NDCG Non-DCE Gateway Configuration = Total Non-DCE Gateways
    NFS/Mount Daemons (mountd) = Total NFS Mount Daemons
    NFS/Server Configs (nfs) = Total NFS Servers
    NS/Global Filesets (nsglobalfilesets) = Global Number of Filesets
    NS/Server Configs (nsconfig) = Total Name Servers
    PVL/Activities (pvlactivity) = Max Queued PVL Jobs * Avg Activities Per PVL Job
    PVL/Drives (pvldrive) = Total Disk Physical Volumes + Total Tape Drives
  • Page 129
    BFS/Disk Bitfile Segments (bfdisksegment.#) = Avg. Storage Segments Per Disk VV * Total Disk Virtual Volumes
    BFS/Storage Segment Checkpoint (bfsssegchkpt.#) = Max BFS Storage Segment Checkpoints
    BFS/Tape Bitfile Segments (bftapesegment.#) = Avg Bitfile Segments Per Tape Bitfile * (Max Total Bitfiles + (Max Total Bitfiles * (Avg Copies Per Bitfile - 1) * Percent of Extra Copies Stored on Tape))
    BFS/Unlink Records (bfssunlink.#) = Max Queued BFS Storage Segment Unlinks
    MPS/Bitfile Migration Records (bfmigrrec.#) = Max Disk Bitfiles Queued for Migration
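The tape bitfile segment formula is the most involved of these; writing it as code makes the grouping explicit. A sketch using the worksheet variable names from the text, with made-up input values:

```python
def bftapesegment_records(avg_segments_per_tape_bitfile: float,
                          max_total_bitfiles: int,
                          avg_copies_per_bitfile: float,
                          pct_extra_copies_on_tape: float) -> int:
    """BFS/Tape Bitfile Segments (bftapesegment.#), per the spreadsheet
    formula quoted in the text. pct_extra_copies_on_tape is a fraction
    in [0, 1]."""
    extra = (max_total_bitfiles * (avg_copies_per_bitfile - 1)
             * pct_extra_copies_on_tape)
    return int(avg_segments_per_tape_bitfile * (max_total_bitfiles + extra))

# Hypothetical system: 1M bitfiles, 1.5 segments per tape bitfile,
# 2 copies per bitfile on average, all extra copies stored on tape.
print(bftapesegment_records(1.5, 1_000_000, 2.0, 1.0))  # 3000000
```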
  • Page 130 For more information on how the size of SFS B-trees can be calculated, refer to Encina Overview, an IBM ITSO Redbook (GG24-2512-00). Secondary Index 1—This section shows the disk space requirement for the first secondary index associated with each metadata file, if such an index is used.
  • Page 131 • Encina SFS Data Actual HPSS metadata is stored in SFS data volumes. SFS data volumes store the actual SFS record data as well as associated index and B-tree overhead information. SFS must have sufficient disk space allocated for its data volumes in order to store the projected amount of HPSS metadata. The metadata sizing spreadsheet, discussed in the previous section, needs to be used to calculate the size of each SFS data volume and the distribution of the SFS data files across the data volumes.
  • Page 132: System Memory And Disk Space

    If the file systems used to store the MRA files become full, SFS will not run. MRA files are written to the directory /opt/encinalocal/encina/sfs/hpss/archives. Before Encina is configured as described in Section 5.5.2: Configure Encina SFS Server (page 243), a separate mirrored file system should be created (e.g.
  • Page 133 2.10.3.2.1 Disk Space Requirements for Core files and Encina Trace Buffers /var/hpss/adm/core is the default directory where HPSS creates core and Encina trace files resulting from subsystem error conditions. The actual size of the files differs depending on the subsystems involved, but it is recommended that at least 512 MB be reserved for this purpose on the core server node and at least 256 MB on Mover nodes.
  • Page 134 2.10.3.2.8 Disk Space Requirements for Running NFS Daemon The HPSS NFS server memory and disk space requirements are largely determined by the configuration of the NFS request processing, attribute cache, and data cache. Data cache memory requirements can be estimated by multiplying the data cache buffer size by the number of memory data cache buffers.
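The data cache estimate described above is a single multiplication; a trivial sketch (names are ours):

```python
def nfs_data_cache_bytes(buffer_size: int, num_memory_buffers: int) -> int:
    """Estimated NFS data cache memory: the data cache buffer size
    multiplied by the number of memory data cache buffers, per the
    text's rule of thumb."""
    return buffer_size * num_memory_buffers

# e.g. 256 KB buffers, 512 of them -> 128 MB of data cache memory
print(nfs_data_cache_bytes(256 * 1024, 512) // (1024 * 1024))  # 128
```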
  • Page 135: Hpss Performance Considerations

    2.10.3.3 System Memory and Paging Space Requirements Specific memory and disk space requirements for the nodes on which the HPSS servers will execute will be influenced by the configuration of the servers—both as to which nodes the servers will run on and the amount of concurrent access they are set up to handle.
  • Page 136: Dce

    2.11.1 DCE Due to several problems observed by HPSS, it is highly recommended that all DCE client implementations use the RPC_SUPPORTED_PROTSEQS=ncadg_ip_udp environment variable. Frequent timeouts may be observed if this is not done. Each HPSS system should be periodically checked for invalid/obsolete endpoints.
  • Page 137: Workstation Configurations

    for most HPSS installations. A value of 16000 or 32000 is more reasonable and can be changed by adding a -b 16000 to the arguments in /etc/rc.encina. Be sure to update the /etc/security/limits file (and reboot) to allow the system to handle the added processing size that both this and the increase in SFS threads will create.
  • Page 138: Configuration

    application, a number of the disks can be grouped together in a striped Storage Class to allow each disk to transfer data in parallel to achieve improved data transfer rates. If after forming the stripe group, the I/O or processor bandwidth of a single machine becomes the limiting factor, the devices can be distributed among a number of machines, alleviating the limitation of a single machine.
  • Page 139: Client Api

    Parallel transfers move the data between the Mover and the end-client processes bypassing the HPSS FTPD. Customers should be educated to use the parallel functions rather than the non-parallel functions. NOTE: ASCII transfers are not supported by the parallel functions and the non-parallel functions will need to be specified for ASCII transfers.
  • Page 140: Logging

    of sites. Usually, the only time the policy values need to be altered is when there is an unusual HPSS setup. The Location Server itself will give warning when a problem is occurring by posting alarms to SSM. Obtain the information for the Location Server alarms listed in the HPSS Error Manual.
  • Page 141: Cross Cell

    2.11.12 Cross Cell Cross Cell Trust should be established with the minimal reasonable set of cooperating partners (N-squared problem). Excessive numbers of Cross Cell connections may diminish Security and may cause performance problems due to Wide Area Network delays. The communication paths between cooperating cells should be reliable.
  • Page 142: Gatekeeping

    of time. This makes a ‘migrate early, migrate often’ strategy a feasible way to keep XFS disks clear of inactive data. The only inherent size limitation for XFS is a 2 TB maximum filesystem size, which is a limitation of the Linux kernel.
  • Page 143: Rules For Backing Up Sfs Log Volume And Mra Files

    and protect the SFS metadata, it is important that each site review this list of “rules” and check to ensure that their site’s backup is consistent with these policies. The main I/O from Encina SFS is in the transaction log, the actual SFS data files, and media log archiving.
  • Page 144: Miscellaneous Rules For Backing Up Hpss Metadata

    • The file system used to store TRB (i.e., data volume backup) files must be mirrored on at least two separate physical disks or be stored on a redundant RAID device. • Separate disks must be used to store the SFS data volumes versus the file system used to store TRB files.
  • Page 145: Chapter 3 System Preparation

    HPSS and its infrastructure. 3.1 General • Each HPSS administrator should request a login id and password for the IBM HPSS web site at http://www4.clearlake.ibm.com/hpss/support.jsp. • Download a copy of the HPSS Installation and Management Guides for the version of HPSS being installed.
  • Page 146: Setup Filesystems

    Verify on all DCE nodes that DCE is automatically started in /etc/inittab after system reboot. • Obtain your HPSS Cell Id from IBM. This information will be needed when mkhpss is used to configure HPSS with DCE. (http://
  • Page 147: Encina

    3.2.2 Encina Configure /opt/encinalocal and /opt/encinamirror such that the contents are either mirrored or each of these two directories is stored on separate disks. Make sure these file systems are automatically mounted at system reboot. 3.2.3 HPSS Configure /var/hpss as a separate file system on each HPSS server node while considering the following: •...
  • Page 148: Aix

    Disk Storage Server metadata files (separate volume per disk storage server) Tape Storage Server metadata files (separate volume per tape storage server) All others • As a common rule, target having 5-8 data volumes • The SFS log volume must be mirrored on at least two (2) separate physical disks or be stored on a redundant RAID device •...
  • Page 149: Setup For Hpss Metadata Backup

    3.5.1 IBM 3584 If using an IBM 3584 tape library, install the Atape driver, configure the SCSI Medium Changer (SMC) library device on the node that will run the HPSS LTO PVR, and verify that you can talk to the library.
  • Page 150 Run this before HPSS has started since only one process can have an open smc device file descriptor. For drives in an IBM 3584 robot, identify the robot-specific device id (LTO drive locations) for each Mover tape device. This will be required when configuring the tape drives within HPSS.
  • Page 151: Stk

    % mtlib -l/dev/lmcp0 -d -V<tapeLabel> -x<deviceNumber>
    To automatically start the lmcp daemon after system reboot, add /etc/methods/startatl to the /etc/inittab file. Refer to Section 6.8.13.3: IBM 3494/3495 PVR Information on page 388 for more information.
    3.5.3 STK
    For an STK tape library: •...
  • Page 152: Tape Drive Verification

    • If using STK drives (e.g., Redwoods), verify that the drive type is not incorrectly set to Generic tape drive or IBM Emulation mode. • On each Tape Mover node, verify that the raw read and write I/O performance of all HPSS tape drives is at expected levels.
  • Page 153
    % iocheck -r -t 20 -b 1mb /dev/rmt1.1
    % iocheck -r -t 20 -b 1mb /dev/rmt1.1
    WARNING: The contents of this tape will be overwritten so be sure to mount the correct tape cartridge.
    To unload a tape:
    % tctl -f <device> rewoffl
    Repeat the above steps for each tape drive.
  • Page 154 WARNING: The contents of this tape will be overwritten so be sure to mount the correct tape cartridge. 3.5.5.3 IRIX On each Tape Mover node, verify that each tape drive has variable-length block size set. To manually check whether variable block size is enabled (see warning below), the following should complete successfully: % dd if=/dev/null of=/dev/rmt/tps2d6nr bs=80 count=1 % dd if=/dev/null of=/dev/rmt/tps2d6nr bs=1024 count=1...
  • Page 155: Setup Disk Drives

    To measure uncompressed write performance (see warning below) on st1 (Note that specifying nst1 will cause the tape not to rewind):
    % iocheck -w -t 20 -b 1mb /dev/nst1
    To measure the maximum-compressed write performance on st1 (and then rewind the tape):
    % iocheck -w -t 20 -f 0 -b 1mb /dev/nst1
    To measure read performance on drive st1 using the previously-written uncompressed and compressed files:
  • Page 156 There are two loops (a and b) per adapter and two ports per loop (a1, a2, b1, b2). The physical order of the disks is shown from the perspective of each port. A disk is accessed according to its closest port (e.g., either a1 or a2, b1 or b2). When planning to configure striped SSA disks in HPSS, it is important to select disks for each striped virtual volume that span ports, loops, and/or adapters.
  • Page 157: Linux

    where logicalVolume is a raw logical volume that is sized to provide at least 20 seconds of I/O throughput.
    To measure write performance on a single disk (see warning below):
    % iocheck -w -t 20 -b 1mb -o 1mb /dev/r<logicalVolume>
    where logicalVolume is a raw logical volume that is sized to provide at least 20 seconds of I/O throughput.
  • Page 158: Solaris & Irix

    • Install and configure all network interfaces and corresponding network connections. Refer to IBM's internal network technologies home page for resources on configuring and tuning networks and TCP/IP. The network interfaces section of the lsnode report from each node shows the network interfaces that are configured.
  • Page 159 To test whether an IP address is reachable (non-zero exit status indicates the ping was not successful): % ping -c 1 <ipAddress> • Determine which networks will be used for control vs. data paths. DCE should not use all available networks on a multi-homed system unless each of those networks is guaranteed to have connectivity to other DCE services.
  • Page 160 gather performance data using a variety of settings to determine the optimal combinations. The primary values that govern performance include send/receive buffers, size of reads/writes, and rfc1323 value for high performance networks (HIPPI, G-Enet). Create a table showing these values.
  • Page 161: Table 3-1 Network Options

    There are also attributes that are specific to the individual network interface that may affect network performance. For example, the network interface for the IBM SP TB3 switch provides
  • Page 162: Hpss.conf Configuration File

    settings for the size of the send and receive pool buffer size, which have had an effect on throughput. It is recommended that the available interface specific documentation be referenced for more detailed information. The anticipated load should also be taken into account when determining the appropriate network option settings.
  • Page 163: Table 3-2 Pftp Client Stanza Fields

    • Blank Lines are ignored. NOTE: HPSS and Network Tuning are highly dependent on the application environment. The values specified herein are NOT expected to be applicable to any installation! 3.7.1.1 PFTP Client Stanza The Parallel FTP Client configuration options are in two distinct stanzas of the HPSS.conf file (Section 3.7.1.1: PFTP Client Stanza on page 163, and Section 3.7.1.2: PFTP Client Interfaces Stanza on page 165).
  • Page 164 Table 3-2 PFTP Client Stanza Fields Note: All PFTP Client SubStanzas are optional. The PFTP Client = { … } stanza contains several optional specifications for the pftp_client and/or krb5_gss_pftp_client executables. The DEFAULT COS = value substanza assigns a default Class of Service (COS) for HPSS. If this option is not specified, the Bitfile Server is responsible for determining the default COS unless the user explicitly issues a “site setcos <ID>”...
  • Page 165 This syntax is identical to DCE’s RPC_RESTRICTED_PORTS environment variable. Only the ncacn_ip_tcp[start_port-end_port] (TCP component) is used so the ncadg_ip_udp component may be omitted. Additional options are available for controlling the size of the PFTP transfer buffers, Transfer Buffer Size, and the buffer size for the sockets in the PDATA_ONLY protocol, Socket Buffer Size. The value may be specified as a decimal number (1048576) or in the format: xMB.
  • Page 166: Table 3-3 Pftp Client Interfaces Stanza Fields

    particularly useful if both low speed and high speed interfaces are available to the client host and the PFTP data transfers should use the high speed interfaces. Table 3-3 PFTP Client Interfaces Stanza Fields
  • Page 167 PFTP Client Interfaces Stanza Example:
    PFTP Client Interfaces = {
       ; PFTP Client Host Name(s)
       water.clearlake.ibm.com water = {
          ; Next Specification is the PFTP Daemon Host
          ; water has 4 interfaces that can talk to the HPSS Movers
          ; associated with the PFTP Daemon Host "water"...
  • Page 168: Table 3-4 Multinode Table Stanza Fields

    Default = {
       ; Client Host Name
       water water.clearlake.ibm.com = {
          134.253.14.227

    3.7.1.3 Multinode Table Stanza The HPSS PFTP Client normally forks children to provide multiple network paths between the PFTP Client and the Mover(s). In some instances, it may be preferable to have these processes (pseudo children) running on independent nodes.
  • Page 169: Table 3-5 Realms To Dce Cell Mappings Stanza Fields

    ; Options read by the Multinode Daemon
    Multinode Table = {
       ; Hostname of the Client
       water water.clearlake.ibm.com = {
          ; Specification of the Multinode Host
          ; If the Data Node is a different interface than the interface
          ; specified by the Control Node; then enter the Data Node/
          ;...
  • Page 170 Table 3-5 Realms to DCE Cell Mappings Stanza Fields The Realms to DCE Cell Mappings = { … } stanza contains one or more substanzas providing Kerberos Realm to DCE Cell Mappings. The substanza(s) are used to specify the appropriate mappings.
  • Page 171: Table 3-6 Network Options Stanza Fields

    entry to the specified destination address. A “Default” destination may be specified for all sources/destinations not explicitly specified in the HPSS.conf file. Table 3-6 Network Options Stanza Fields
  • Page 172 field) is that the size of the write request is the size of the data buffer. On some networks (e.g. the SP/x switch), improved performance has been measured by using a smaller value (e.g. 32KB) for the size of the individual writes to the network. If no entry is found that matches a network connection or the value specified is zero, HPSS will query an environment...
  • Page 173 • The Source Interface Name SubStanza may specify one or more names [ subject to the 128 character limit (including the “= {“.) ] NOTE: Do not include the quotes when specifying Default. • Destination IP Address must be specified in Decimal Dot Notation. •...
  • Page 174: Sp/X Switch Device Buffer Driver Buffer Pools

    SP/x Switch Device Buffer Driver Buffer Pools IBM SP/x systems provide the capability to tune the buffer pool allocation in the switch device driver. Two variables can be changed: rpoolsize, which is the size of the buffer pool for incoming data, and spoolsize, which is the size of the buffer pool for outgoing data.
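As a sketch, the tuning described above might look like the following dry run, which only echoes the commands. The chgcss invocation and the 16 MB pool sizes are assumptions; consult the PSSP documentation for your level before changing anything.

```shell
# Dry run: echo the commands instead of executing them.
run() { printf '+ %s\n' "$*"; }

run lsattr -E -l css0                      # inspect current rpoolsize/spoolsize
run chgcss -l css0 -a rpoolsize=16777216   # buffer pool for incoming data
run chgcss -l css0 -a spoolsize=16777216   # buffer pool for outgoing data
```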
  • Page 175: Install And Configure Java And Hpssadm

    3.8 Install and Configure Java and hpssadm 3.8.1 Introduction The hpssadm utility and the modifications to the SSM Data Server necessary to support hpssadm require the installation and configuration of Java 1.3.0 and the Java Secure Sockets Extension. The default prebuilt Data Server executable and shared library require Java. If the hpssadm utility is not used, these can be replaced with the no-Java prebuilt versions of these files, which are also shipped with HPSS.
  • Page 176 % vi $JAVA_HOME/lib/security/java.security Look for the following lines: % List of providers and their preference orders (see above): % security.provider.1=sun.security.provider.Sun security.provider.2=com.ibm.crypto.provider.IBMJCA Add the following line: security.provider.N=com.sun.net.ssl.internal.ssl.Provider where "N" is the next available number (3 in this example) On the machine where SSMDS will be executed: These procedures use the keytool utility to create the ds keystore file.
  • Page 177 D. Set up SSMDS Java security policy file % cp /opt/hpss/config/templates/java.policy.ds.template \ java.policy.ds % chmod 640 java.policy.ds % vi java.policy.ds Change "*.hpss.acme.com" to appropriate host info, such as "*.clearlake.ibm.com", in the following section: grant { permission java.net.SocketPermission "*.hpss.acme.com:1024-", "connect,accept,listen,resolve"; };...
  • Page 178 % cd /var/hpss/ssm % cp /opt/hpss/config/templates/java.policy.hpssadm.template \ java.policy.hpssadm % chmod 640 java.policy.hpssadm % vi java.policy.hpssadm Change "*.hpss.acme.com" to appropriate host info, such as "*.clearlake.ibm.com", in the following section: grant { permission java.net.SocketPermission "*.hpss.acme.com:1024-", "connect,accept,listen,resolve"; Save change and exit vi.
  • Page 179 D. Create hpssadm user keytab files % mkdir -p /var/hpss/ssm/keytabs % cd /var/hpss/ssm/keytabs For each hpssadm user on this machine, create DCE keytab file. This example creates a keytab file for user joe: % dce_login cell_admin % rgy_edit rgy_edit> ktadd -f keytab.joe -p joe (Need to enter joe's password twice) rgy_edit>...
  • Page 180 Chapter 3 System Preparation To use the hpssadm utility and the Java version of the Data Server, continue following the instructions for the remainder of this section. 3.8.1.3 Prerequisite Software This required software is: 1. One of the following: Java 1.3.0 JRE (Java Runtime Environment) Java 1.3.0 SDK (Software Development Kit) 2.
  • Page 181: Installing Java

    the host from which the hpssadm utility is executed and are transmitted to the Data Server, which authenticates them against the DCE registry. This file is discussed in Section 3.8.6: Setting up the hpssadm Keytab File on page 189. Section 3.8.9: Background Information on page 191 provides a high level discussion of DCE keytab files, the Java Security Policy, X.509 certificates and public key encryption as they are used by the hpssadm utility and the Data Server, and includes references to additional documentation for these technologies.
  • Page 182 (Section 3.8.2.1: Obtaining the Software on page 181). Remember that the recommended patch sets are subject to change at any time by Sun or IBM. HPSS was tested with Java 1.3.0 under AIX 5.1 Maintenance Level 2, and under Solaris 5.8 with Recommended Patch Cluster 73001.
  • Page 183: Configuring Ssl

    You will be prompted for the password, WHICH WILL BE ECHOED AS YOU TYPE IT, so make sure you are working from a location where the password cannot be compromised. Type in the default password ("changeit"). The utility should list the certificates in the file. 4.
  • Page 184 Chapter 3 System Preparation There should already be at least one security provider listed in this file, probably in a format something like: security.provider.1=sun.security.provider.Sun If there is more than one provider listed, they should be numbered in increasing numerical order: security.provider.2=XXX.security.provider.foox security.provider.3=YYY.security.provider.fooy security.provider.4=ZZZ.security.provider.fooz...
  • Page 185 This command will generate a public key and an associated private key for the Data Server with alias "hpss_ssmds". It will also generate a self-signed certificate for hpss_ssmds which includes his public key. The key will be valid for 365 days. The keys and certificate will be stored in the file "keystore.ds".
  • Page 186: Configuring The Java Security Policy File

    Chapter 3 System Preparation % cp cacerts cacerts.ORIG % $JAVA_HOME/bin/keytool -keystore cacerts -import \ -file /tmp/ds.cer -alias hpss_ssmds The keytool utility will print out the information about the certificate, including the fingerprints, and will ask whether the certificate should be trusted. Compare the owner, issuer, and fingerprints carefully with those obtained from the original certificate in step 2.
  • Page 187 Security Manager, or if none of these policy files exists, the default policy is the original Java sandbox policy, which is rather liberal. Any system access is further limited by whatever protections the local operating system supplies. So, for example, if the policy file allows access to file "foo", but the file system permissions do not permit access to "foo"...
  • Page 188: Setting Up The Client Authorization File

    Chapter 3 System Preparation 2. The Data Server requires read FilePermission on its user authorization file, whose default location is /var/hpss/ssm/hpssadm.config. The hpssadm utility requires read FilePermission for the user's keytab file, the default location for which is /var/hpss/ssm/keytab grant { permission java.io.FilePermission "/var/hpss/-", "read";
  • Page 189: Setting Up The Hpssadm Keytab File

    The hpssadm.config file is a flat ASCII file which lists the users who are authorized to use the hpssadm utility. The template for this file is config/templates/hpssadm.config.template. The default name for this file is /var/hpss/ssm/hpssadm.config. This pathname can be changed in the hpss_env file by setting the HPSS_SSMDS_JAVA_CONFIG variable as desired.
  • Page 190: Securing The Data Server And Client Host Machines

    Chapter 3 System Preparation The keytab file must be stored on each host from which the user will execute the hpssadm utility, and must be specified on the hpssadm command line with the -k option: hpssadm -k keytab_file_path_name The keytab file should be owned by the user and protected so that it is readable only by the user. The keytab is interpreted on the host on which the Data Server runs, not that on which the hpssadm client utility runs.
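The ownership and permission requirement can be applied and checked as follows (keytab.joe follows the manual's example name; a placeholder file stands in for a real keytab here):

```shell
# Create a placeholder keytab and restrict it to owner-read only.
touch keytab.joe
chmod 400 keytab.joe

# Mode should show -r-------- (readable only by the owner).
ls -l keytab.joe
```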
  • Page 191: Updating Expired Ssl Certificates

    3.8.8 Updating Expired SSL Certificates When the Data Server certificate expires, the Data Server itself will be able to start up and execute, but any hpssadm client attempting to connect to it will fail with the error "untrusted server cert chain".
  • Page 192 Chapter 3 System Preparation call returns silently if it determines the code is allowed the requested access, and otherwise throws an exception, which halts the program. Applet code runs under a security manager (usually) because most browsers implement one. The security manager won't let the applet do anything not allowed by the policy file(s).
  • Page 193 you can pay to issue X.509 certificates to you. Certificates can also be created by individuals and self-signed by the party owning the certificate. A program uses a file of these certificates as its "trusted store", the set of certificates of parties it will trust. Whereas a digital signature confirms that the issuing party possesses the private key corresponding to a particular public key, a certificate confirms that some verification has been done as to the identity of its owner.
  • Page 194: Setup Linux Environment For Xfs

    Chapter 3 System Preparation hpssadm program, such as new alarms or changes in HPSS server statuses. This session does not pass any private data such as passwords, does not use SSL, and is not encrypted. For security reasons, an application can bind or unbind only to an RMI registry running on the same host.
  • Page 195: Create The Dmapi Device

    1. Download the patch, xfs-2.4.18-1, from the HPSS website (http://www4.clearlake.ibm.com/hpss/support/patches/xfs-2.4.18-1.tar). 2. Untar the downloaded file. % tar -xvf xfs-2.4.18-1.tar 3. Copy xfs-2.4.18-1 (the patch file) to /usr/src. % cp xfs-2.4.18-1 /usr/src 4. Change directory to /usr/src/linux-2.4.18 (or the root of your 2.4.18 kernel tree).
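The excerpt stops before the patch is actually applied. A dry-run sketch of the likely sequence follows, echoing the commands instead of executing them; the patch -p1 invocation is an assumption, so inspect the patch header or try patch --dry-run first.

```shell
# Dry run: echo the commands from the procedure above instead of running them.
run() { printf '+ %s\n' "$*"; }

run tar -xvf xfs-2.4.18-1.tar        # unpack the download
run cp xfs-2.4.18-1 /usr/src         # stage the patch file
run cd /usr/src/linux-2.4.18         # root of the 2.4.18 kernel tree
# Applying the patch itself is not shown in the excerpt; -p1 is an assumption.
run patch -p1 -i /usr/src/xfs-2.4.18-1
```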
  • Page 196: Verify System Readiness

    Chapter 3 System Preparation 1. Download the patch, kaio-2.4.18-1.tar, from the HPSS website (http://www4.clearlake.ibm.com/hpss/support/patches/kaio-2.4.18-1.tar). 2. Untar the downloaded file. % tar -xvf kaio-2.4.18-1.tar 3. Copy kaio-2.4.18-1 (the patch file) to /usr/src. % cp kaio-2.4.18-1 /usr/src 4. Change directory to /usr/src/linux-2.4.18 (or the root of your 2.4.18 kernel tree).
  • Page 197 % lshpss -s sfsServer -cos -hier -sc -migp -purgep -dev -drv \ To generate the HPSS configuration information, store it locally in /var/hpss/stats/ lshpss.out, and have it automatically emailed to IBM Houston, issue the following on the core HPSS server node: % lshpss.ksh...
  • Page 198 Chapter 3 System Preparation Storage Class Sharing - Are any storage classes shared amongst more than one hierarchy? If so, is that intentional? Discuss the ramifications of sharing vs. not sharing (usually boils down to avoiding “pockets” of unused storage – one SC full while another is nearly empty –
  • Page 199 Virtual Volume Block Size and Stripe Width - See the discussion for disk Virtual Volume Block Size & Stripe Width. Thresholds - Again, make sure that these seem reasonable. Do they match whatever repack or tape migration scheme (if any) of the site Max VVs to Write - Does this value make sense given the number of drives available and the migration policy? Migration Policy &...
  • Page 200 Executable Name - Verify that the named executable supports the specific options that are required for this Mover. On the device end, make sure that the correct device driver interface is enabled (e.g., _ssd for the IBM Atape driver for 3490E/3590/3580, _omi for the Gresham/OMI driver for StorageTek drives, _dd2 for Ampex DST drives, _bmux for the IBM BMUX attached 3490s).
  • Page 201 Copy Count & Skip Factor - These control multiple copies. Typically these are set for 1 or 2 copies on tape (with a disk level at the top of the hierarchy). If not one of these, what is the rationale? Request Count - Verify that the request counts (remember that more than one disk SC can use the same migration policy) do not appear to make unreasonable demands on the available tape drives –...
  • Page 202 DCE server, and SFS server node: root% lsnode To generate the node configuration information, store it locally in /var/hpss/stats/ lsnode.out, and have it automatically emailed to IBM Houston, issue the following on each HPSS server, DCE server, and SFS server node: root% lsnode.ksh root% more /var/hpss/stats/lsnode.out...
  • Page 203 /opt/hpss/tools/deploy/bin root% lsencina > /var/hpss/stats/lsencina.out root% more /var/hpss/stats/lsencina.out To generate the SFS configuration information, store it locally in /var/hpss/stats/ lsencina.out, and have it automatically emailed to IBM Houston, issue the following: root% lsencina.ksh root% more /var/hpss/stats/lsencina.out • Description of SFS backup procedures, including printouts of all SFS backup configuration files and local/custom scripts.
  • Page 204 Chapter 3 System Preparation Disaster recovery requirements Disaster recovery test plan • Pre-Production Test Plan, including: Customer's requirements (users, admin, management, and operations staff) for the production storage system considering all involved hardware and software, which may include detailed requirements in one or more of the following areas depending on customer requirements: Functionality Single-transfer performance...
  • Page 205: Chapter 4 Hpss Installation

    HPSS Installation Chapter 4 4.1 Overview This chapter provides instructions and supporting information for installing the HPSS software from the HPSS distribution media. To install this system, we recommend that the administrator be familiar with UNIX commands and configuration, be familiar with a UNIX text editor, and have some experience with the C language and shell scripts.
  • Page 206: Installation Roadmap

    Chapter 4 HPSS Installation • Non-DCE Package - Contains HPSS Non-DCE Client API include files and libraries and the HPSS Non-DCE Mover binaries. • Source Code Package - Contains the HPSS Source Code The HPSS software package names and sizes for the supported platforms are as follows: Table 4-1 Installation Package Sizes and Disk Requirements Platform HPSS Package Name hpss_runtime-4.5.0.0.lpp...
  • Page 207: Create Owner Account For Hpss Files

    4. Verify HPSS installed files (Section 4.5.1). 4.2 Create Owner Account for HPSS Files The HPSS software must be installed by a root user. In addition, a UNIX User ID of hpss and a Group ID of hpss are required for the HPSS installation process to assign the appropriate ownership for the HPSS files.
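A sketch of creating the owner account, shown as a dry run because the real commands require root. The Linux groupadd/useradd syntax and the home directory are assumptions; on AIX the equivalents would be mkgroup and mkuser.

```shell
# Dry run: echo the account-creation commands instead of executing them.
run() { printf '+ %s\n' "$*"; }

run groupadd hpss                          # Group ID of hpss
run useradd -g hpss -d /opt/hpss -m hpss   # User ID of hpss; home dir assumed
```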
  • Page 208: Solaris Installation

    Chapter 4 HPSS Installation 4.4.1.1 Install an HPSS Package Using the installp Command Log on to the node as root user and issue the installp command as follows to install an HPSS package (e.g., HPSS core package): % installp -acgNQqwX -d \ <input device/directory>/hpss_runtime.4.5.0.0.lppimage \ -f hpss.core 2>&1 4.4.1.2...
  • Page 209: Irix Installation

    4.4.3 IRIX Installation 4.4.3.1 Install the HPSSndapi Package The HPSSndapi package is in a compressed tar file format. Perform the following steps to install this package: 1. Log on to the node as root user. 2. Place the compressed tar file in a working directory (e.g., /var/spool/pkg/HPSSndapi-4.5.0.0.tar.Z). 3.
  • Page 210: Remake Hpss

    Chapter 4 HPSS Installation /opt/hpss/include/<HPSS include, idl, and tidl files> /opt/hpss/msg/<HPSS message catalog> /opt/hpss/tools/<HPSS tools> /opt/hpss/man/<HPSS man pages> /opt/hpss/config/<HPSS configuration scripts> /opt/hpss/stk/<STK files> /opt/hpss/sammi/hpss_ssm/<HPSS Sammi files> /opt/hpss/src/<HPSS source files> 2. In addition, verify that the HPSS file ownerships and file permissions set by the installation process are preserved as follows: Executable files: rwxr-xr-x bin Include files:...
  • Page 211 1. Construct the HPSS source tree 2. Compile the HPSS binaries 4.5.2.1 Construct the HPSS Source Tree 4.5.2.1.1 Construct the HPSS Base Source Tree The HPSS base source tree contains the source code for all the HPSS components except the STK PVR proprietary code.
  • Page 212 Chapter 4 HPSS Installation 1. Log on as root. 2. Change directory to the HPSS base source tree (the default location is /opt/hpss). 3. Review the Makefile.macros file. 4. Ensure that the target directory tree (where the source tree will be constructed) is empty. 5.
  • Page 213: Set Up Sammi License Key

    1. Log on as root. 2. Place the constructed HPSS source tree (i.e., HPSS base source tree, HPSS HDM source tree, or HPSS Non-DCE source tree) in the desired build directory (the default location is /opt/hpss). 3. Review the Makefile.macros file which is in the root of the source tree. This file defines the "make"...
  • Page 215: Chapter 5 Hpss Infrastructure Configuration

    HPSS Infrastructure Chapter 5 Configuration 5.1 Overview This chapter provides instructions and supporting information for the infrastructure configuration of HPSS. Before configuring the HPSS infrastructure, we recommend that the administrator be familiar with UNIX commands and configuration, be familiar with a UNIX text editor, and have some experience with the C language, shell scripts, DCE, and Encina.
  • Page 216: Define The Hpss Environment Variables

    Chapter 5 HPSS Infrastructure Configuration • WebSphere (Encina TxSeries) • Sammi • Java • DCE/DFS server/client machine(s) • HPSS software Refer to the DCE Version 3.1 Administration Guide and the Encina Server Administration: System Administrator's Guide and Reference for more information on installing and configuring DCE and Encina.
  • Page 217 #=============================================================================== Name: hpss_env - Site defined HPSS global variables/environment shell script Synopsis: hpss_env Arguments: none Outputs: - HPSS global variable definitions and environment parameters Description: This script defines the HPSS global variables and environment parameters which override the values specified in the ./include/hpss_env_defs.h file.
  • Page 218 Chapter 5 HPSS Infrastructure Configuration 3.27 11/18/98 3.28 11/29/98 3.29 11/30/98 3.30 07/01/99 3.31 03/03/00 3.32 03/17/00 3.33 03/24/00 3.34 06/30/00 3.35 09/14/00 3.36 09/15/00 3.37 10/15/00 3.38 10/30/00 3.39 11/22/00 3.40 11/29/00 3.41 07/11/01 3.42 08/21/01 Notes: Licensed Materials Copyright (C) 1992-2001 International Business Machines Corporation, The Regents of the University of California, Sandia Corporation, and UT-Battelle.
  • Page 219 D i r e c t o r i e s HPSS_PATH S A M M I D i r e c t o r i e s HPSS_PATH_SAMMI_INSTALLPathname where SAMMI is installed S y s t e m / U s e r I n f o r m a t i o n HPSS_SYSTEM HPSS_SYSTEM_VERSION...
  • Page 220 Chapter 5 HPSS Infrastructure Configuration export HPSS_SYSTEM_VERSION=$(uname -r) export HPSS_HOST_FULL_NAME=$(hostname) else export HPSS_SYSTEM_VERSION=$(oslevel) export HPSS_HOST_FULL_NAME=`host $HPSS_HOST |cut -f1 -d’ ‘` D C E V a r i a b l e s . . . export HPSS_CELL_ADMIN=”cell_admin” export HPSS_CDS_PREFIX=/.:/hpss export HPSS_CDS_HOST=$HPSS_HOST E n c i n a V a r i a b l e s .
  • Page 221: Hpss_Env_Defs.h

    5.3.2 hpss_env_defs.h The following is a verbatim listing of the hpss_env_defs.h file: /* static char SccsId[] = “ @(#)71 12:28:46”; */ /*============================================================================== * Include Name: hpss_env_defs.h * Description: Contains default definitions for HPSS environment variables. * Traceability: Vers Author Date ---- ------ --------------------------------------------- 1.10...
  • Page 222 Chapter 5 HPSS Infrastructure Configuration 1.33 09/14/00 1.34 guidryg 09/15/00 1.35 ctnguyen10/09/00 1.36 ctnguyen11/13/00 1.37 shreyas 11/27/00 1.39 01/25/01 1.40 1.41 shreyas 03/02/01 1.42 07/11/01 1.43 08/21/01 1.44 04/15/02 Notes: Licensed Materials Copyright (C) 1992-2001 International Business Machines Corporation, The Regents of the University of California, Sandia Corporation, and UT-Battelle.
  • Page 223 *************************************************************************** typedef struct env { char *name; char *def; char *value; } env_t; static env_t hpss_env_defs[] = { *************************************************************************** HPSS_ROOT HPSS_HOST HPSS_KEYTAB_FILE_SERVER- Fully qualified DCE HPSS server keytab file HPSS_KEYTAB_FILE_CLIENT - Fully qualified DCE HPSS client keytab file HPSS_PATH HPSS_PATH_BIN - Pathname for HPSS executables HPSS_PATH_SLASH_BIN- Pathname for /bin HPSS_PATH_SLASH_ETC HPSS_PATH_USR_BIN...
  • Page 224 Chapter 5 HPSS Infrastructure Configuration HPSS_PRINCIPAL- DCE Principal name for HSEC Server HPSS_PRINCIPAL_BFS- DCE Principal name for Bitfile Server HPSS_PRINCIPAL_CLIENT_API HPSS_PRINCIPAL_DMG- DCE Principal name for DMAP Gateway HPSS_PRINCIPAL_FTPD- DCE Principal name for FTP Daemon HPSS_PRINCIPAL_GK- DCE Principal name for Gatekeeper Server HPSS_PRINCIPAL_HPSSD- DCE Principal name for Startup Daemon HPSS_PRINCIPAL_LOG- DCE Principal name for Log Client and Daemon HPSS_PRINCIPAL_LS- DCE Principal name for Location Server...
  • Page 225 HPSS_PRINCIPAL_MPS_UID HPSS_PRINCIPAL_NDCG_UID HPSS_PRINCIPAL_MVR_UID HPSS_PRINCIPAL_NFSD_UID HPSS_PRINCIPAL_NS_UID HPSS_PRINCIPAL_PFSD_UID HPSS_PRINCIPAL_PVL_UID HPSS_PRINCIPAL_PVR_UID HPSS_PRINCIPAL_SS_UID HPSS_PRINCIPAL_SSM_UID * NOTE: Principal UID must be in the format of “-uid <number of uid>” For example: { “HPSS_PRINCIPAL_BFS_UID” *************************************************************************** { “HPSS_PRINCIPAL_BFS_UID”, { “HPSS_PRINCIPAL_CLIENT_API_UID”, { “HPSS_PRINCIPAL_DMG_UID”, { “HPSS_PRINCIPAL_FTPD_UID”, { “HPSS_PRINCIPAL_GK_UID”, { “HPSS_PRINCIPAL_HPSSD_UID”, { “HPSS_PRINCIPAL_LOG_UID”, { “HPSS_PRINCIPAL_LS_UID”, { “HPSS_PRINCIPAL_MM_UID”,...
  • Page 226 Chapter 5 HPSS Infrastructure Configuration HPSS_EXEC_PVR_AMPEX- executable name for PVR Ampex HPSS_EXEC_PVR_OPER- executable name for PVR Operator HPSS_EXEC_PVR_STK- executable name for PVR STK HPSS_EXEC_PVR_3494- executable name for PVR 3494 HPSS_EXEC_PVR_3495- executable name for PVR 3495 HPSS_EXEC_PVR_LTO- executable name for PVR LTO HPSS_EXEC_PVR_AML HPSS_EXEC_SSDISK- executable name for Storage Server - Disk HPSS_EXEC_SSTAPE- executable name for Storage Server - Tape...
  • Page 227 { “HPSS_EXEC_REPACK”,“${HPSS_PATH_BIN}/repack” }, *************************************************************************** * Logging Unix files HPSS_PATH_LOG HPSS_UNIX_LOCAL_LOG- local log file *************************************************************************** { “HPSS_PATH_LOG”,“${HPSS_PATH_VAR}/log”}, { “HPSS_UNIX_LOCAL_LOG”,“${HPSS_PATH_LOG}/local.log”}, *************************************************************************** * Accounting Unix files HPSS_PATH_ACCT HPSS_UNIX_ACCT_CHECKPOINT - checkpoint file HPSS_UNIX_ACCT_REPORT- report file HPSS_UNIX_ACCT_COMMENTARY- commentary file *************************************************************************** { “HPSS_PATH_ACCT”,“${HPSS_PATH_VAR}/acct” { “HPSS_UNIX_ACCT_CHECKPOINT”,“${HPSS_PATH_ACCT}/acct_checkpoint” }, { “HPSS_UNIX_ACCT_REPORT”, “${HPSS_PATH_ACCT}/acct_report”...
  • Page 228: Configuration

    Chapter 5 HPSS Infrastructure Configuration HPSS_PATH_GK HPSS_UNIX_GK_SITE_POLICY *************************************************************************** { “HPSS_PATH_GK”,“${HPSS_PATH_VAR}/gk” { “HPSS_UNIX_GK_SITE_POLICY”,“${HPSS_PATH_GK}/gksitepolicy” *************************************************************************** * SFS Files HPSS_SFS HPSS_SFS_SERVER- Encina SFS server name with CDS prefix HPSS_SFS_SUFFIX- Suffix to add to end of SFS file names ENCINA_LOCAL - Encina local directory ENCINA_MIRROR - Encina mirror directory *************************************************************************** { “HPSS_SFS”,...
  • Page 229 { “HPSS_CONFIG_STORAGECLASS”, { “HPSS_CONFIG_SCLASS”, { “HPSS_CONFIG_SCLASSTHRESHOLD”, { “HPSS_CONFIG_NSGLOBALFILESETS”, { “HPSS_CONFIG_FILE_FAMILY”, { “HPSS_CONFIG_ACCOUNTING”, { “HPSS_CONFIG_ACCT”,“${HPSS_CONFIG_ACCOUNTING}”}, { “HPSS_CONFIG_ACCTSUM”, { “HPSS_CONFIG_LOGPOLICY”, { “HPSS_CONFIG_LOGP”, “${HPSS_CONFIG_LOGPOLICY}”}, { “HPSS_CONFIG_LSPOLICY”, { “HPSS_CONFIG_LS”, “${HPSS_CONFIG_LSPOLICY}”}, { “HPSS_CONFIG_MIGPOLICY”, { “HPSS_CONFIG_MIGRP”, “${HPSS_CONFIG_MIGPOLICY}”}, { “HPSS_CONFIG_MOVERDEVICE”, { “HPSS_CONFIG_PURGEPOLICY”, { “HPSS_CONFIG_PURGP”,“${HPSS_CONFIG_PURGEPOLICY}”}, { “HPSS_CONFIG_SERVER”, *************************************************************************** * Storage Subsystem SFS Files - These SFS files are specific to a HPSS_CONFIG_BFMIGRREC- Bitfile migration records HPSS_CONFIG_BFPURGEREC- Bitfile purge records...
  • Page 230 Chapter 5 HPSS Infrastructure Configuration HPSS_CONFIG_VVDISK- SS virtual volume - disk HPSS_CONFIG_VVTAPE- SS virtual volume - tape *************************************************************************** { “HPSS_CONFIG_BFMIGRREC”, { “HPSS_CONFIG_BFPURGEREC”, { “HPSS_CONFIG_ACCTLOG”, { “HPSS_CONFIG_BITFILE”, { “HPSS_CONFIG_BFCOSCHANGE”, { “HPSS_CONFIG_BFDISKSEGMENT”, { “HPSS_CONFIG_BFTAPESEGMENT”, { “HPSS_CONFIG_BFDISKALLOCREC”, { “HPSS_CONFIG_BFSSSEGCHKPT”, { “HPSS_CONFIG_BFSSUNLINK”, { “HPSS_CONFIG_NSACLS”, { “HPSS_CONFIG_NSFILESETATTRS”, { “HPSS_CONFIG_NSOBJECTS”, { “HPSS_CONFIG_NSTEXT”,...
  • Page 231 { “HPSS_CONFIG_ACCTVALIDATE”, *************************************************************************** * BFS SFS Files HPSS_CONFIG_BFS *************************************************************************** { “HPSS_CONFIG_BFS”, *************************************************************************** * NameServer SFS Files HPSS_CONFIG_NS *************************************************************************** { “HPSS_CONFIG_NS”, *************************************************************************** * DMAP Gateway SFS Files HPSS_CONFIG_DMG HPSS_CONFIG_DMGFILESET- DMG filesets *************************************************************************** { “HPSS_CONFIG_DMG”, { “HPSS_CONFIG_DMGFILESET”, *************************************************************************** * Gatekeeper Server SFS Files HPSS_CONFIG_GK *************************************************************************** { “HPSS_CONFIG_GK”,...
  • Page 232 Chapter 5 HPSS Infrastructure Configuration { “HPSS_CONFIG_SITE”, *************************************************************************** * Metadata Monitor SFS Files HPSS_CONFIG_MM *************************************************************************** { “HPSS_CONFIG_MM”, *************************************************************************** * Migration/Purge Server SFS Files HPSS_CONFIG_MPS *************************************************************************** { “HPSS_CONFIG_MPS”, *************************************************************************** * Mover SFS Files HPSS_CONFIG_MVR *************************************************************************** { “HPSS_CONFIG_MVR”, *************************************************************************** * Non-DCE SFS Files HPSS_CONFIG_NDCG- Gateway type specific *************************************************************************** { “HPSS_CONFIG_NDCG”,...
  • Page 233 { “HPSS_CONFIG_PVLACTIVITY”, { “HPSS_CONFIG_PVLDRIVE”, { “HPSS_CONFIG_PVLJOB”, { “HPSS_CONFIG_PVLPV”, *************************************************************************** * PVR SFS Files HPSS_CONFIG_PVR HPSS_CONFIG_CART_AMPEX- PVR ampex cartridges HPSS_CONFIG_CART_OPER- PVR operator cartridges HPSS_CONFIG_CART_STK- PVR STK cartridges HPSS_CONFIG_CART_STK_RAIT HPSS_CONFIG_CART_3494- PVR 3494 cartridges HPSS_CONFIG_CART_3495- PVR 3495 cartridges HPSS_CONFIG_CART_LTO- PVR LTO cartridges HPSS_CONFIG_CART_AML- PVR AML cartridges *************************************************************************** { “HPSS_CONFIG_PVR”, { “HPSS_CONFIG_CART_AMPEX”,...
  • Page 234 Chapter 5 HPSS Infrastructure Configuration HPSS_CDS_LOGC HPSS_CDS_LOGD HPSS_CDS_LS HPSS_CDS_MM HPSS_CDS_MOUNTD HPSS_CDS_MPS HPSS_CDS_MVR HPSS_CDS_NDCG HPSS_CDS_NFSD HPSS_CDS_NS HPSS_CDS_PFSD HPSS_CDS_PVL HPSS_CDS_PVR_AMPEX- CDS name - PVR - Ampex HPSS_CDS_PVR_OPER- CDS name - PVR - Operator HPSS_CDS_PVR_STK- CDS name - PVR - STK HPSS_CDS_PVR_STK_RAIT- CDS name - PVR - STK RAIT HPSS_CDS_PVR_3494- CDS name - PVR - 3494 HPSS_CDS_PVR_3495- CDS name - PVR - 3495 HPSS_CDS_PVR_LTO...
  • Page 235 HPSS_DESC_BFS HPSS_DESC_DMG HPSS_DESC_FTPD HPSS_DESC_GK HPSS_DESC_HPSSD HPSS_DESC_LOGC HPSS_DESC_LOGD HPSS_DESC_LS HPSS_DESC_MM HPSS_DESC_MOUNTD- Descriptive name - Mount Daemon HPSS_DESC_MPS HPSS_DESC_MVR HPSS_DESC_NDCG HPSS_DESC_NFSD HPSS_DESC_NS HPSS_DESC_PFSD HPSS_DESC_PVL HPSS_DESC_PVR_AMPEX- Descriptive name - PVR - Ampex HPSS_DESC_PVR_OPER- Descriptive name - PVR - Operator HPSS_DESC_PVR_STK- Descriptive name - PVR - STK HPSS_DESC_PVR_STK_RAIT HPSS_DESC_PVR_3494- Descriptive name - PVR - 3494 HPSS_DESC_PVR_3495- Descriptive name - PVR - 3495...
  • Page 236 Chapter 5 HPSS Infrastructure Configuration *************************************************************************** * System Manager Specific The SM attempts to throttle the connection attempts to other servers. It will attempt to reconnect to each server every HPSS_SM_SRV_CONNECT_INTERVAL_MIN seconds until the number of failures for that server has reached HPSS_SM_SRV_CONNECT_FAIL_COUNT. After the failure count has been reached the SM will only try to reconnect to the server every HPSS_SM_SRV_CONNECT_INTERVAL_MAX seconds until a successful i connection is made at which time the connection interval for the server...
  • Page 237 HPSS_NOTIFY_Q_LOG_THREADS - HPSS_NOTIFY_Q_TAPE_THREADS - Number of threads to create per client HPSS_NOTIFY_Q_TAPE_CHECKIN_THREADS - Number of threads to create per *************************************************************************** { “HPSS_NOTIFY_Q_DATA_THREADS”,“1”}, { “HPSS_NOTIFY_Q_LIST_THREADS”,“1”}, { “HPSS_NOTIFY_Q_LOG_THREADS”,“1” }, { “HPSS_NOTIFY_Q_TAPE_THREADS”,“1”}, { “HPSS_NOTIFY_Q_TAPE_CHECKIN_THREADS”,“1”}, *************************************************************************** * The HPSS_NOTIFY_Q_*_SIZE_MAX variables define the maximum size to which * each queue is allowed to grow.
  • Page 238 Chapter 5 HPSS Infrastructure Configuration HPSS_SSMDS_INTERVAL- Interval at which DS checks idle RMI clients HPSS_SSMDS_RMI_HOST- Java RMI host for DS HPSS_SSMDS_RMI_NAME- Java RMI base name for DS HPSS_SSMDS_RMI_PORT- Port for Java RMI (Remote Method Invocation) HPSS_SSMDS_KEYSTORE- Data Server keystore file (for SSL) HPSS_SSMDS_KEYSTORE_PW- File holding password to Data Server keystore HPSS_SSMDS_JAVA_CONFIG- Pathname of DS Java config file HPSS_SSMDS_JAVA_POLICY- Java policy file for DS...
  • Page 239 { “HPSS_SSMDS_JAVA_POLICY”,”${HPSS_PATH_VAR}/ssm/java.policy.ds” { “HPSS_HPSSADM_JAVA_POLICY”, { “JAVA_ROOT”, { “JAVA_HOME”, { “JAVA_BIN”, { “JAVA_LIB1”, { “JAVA_LIB2”, { “JAVA_LIB3”, { “HPSS_SSMDS_LIBPATH”, { “JAVA_CLS1”, { “JAVA_CLS2”, { “JAVA_CLS3”, { “HPSS_SSMDS_CLASSPATH”, “${JAVA_CLS1}:${JAVA_CLS2}:${JAVA_CLS3}” { “AIXTHREAD_MNRATIO”, { “AIXTHREAD_SCOPE”, { “AIXTHREAD_MUTEX_DEBUG”, “OFF” { “AIXTHREAD_RWLOCK_DEBUG”,”OFF” { “AIXTHREAD_COND_DEBUG”, *************************************************************************** * HDM Specific HPSS_HDM_SHMEM_KEY - The HDM’s shared memory key (default 3789) HPSS_HDM_SERVER_ID - The HDM’s Server ID (default 1) ***************************************************************************...
  • Page 240: Configure The Hpss Infrastructure On A Node

    Chapter 5 HPSS Infrastructure Configuration HPSS_LS_NAME *************************************************************************** { “HPSS_LS_NAME”, *************************************************************************** * NDAPI Specific HPSS_NDCG_KRB5_SERVICENAME - Non DCE Gateway kerberos servicename HPSS_KRB_TO_DCE_FILE HPSS_KCHILD_PATH HPSS_KCHILD_KEYTAB HPSS_NDCL_KEY_CONFIG_FILE *************************************************************************** { “HPSS_NDCG_KRB5_SERVICENAME”, “ndcg” }, { “HPSS_KRB_TO_DCE_FILE”, krb5_Realms_to_DCE_Cells” }, { “HPSS_KCHILD_PATH”, { “HPSS_KCHILD_KEYTAB”, { “HPSS_NDCL_KEY_CONFIG_FILE”, *************************************************************************** * Installation &...
  • Page 241: Using The Mkhpss Utility

    1. Configure HPSS with DCE (only on the first core server node in the cell). 2. Configure Encina SFS (only if Encina SFS will run on this node). 3. Create and Manage SFS Files (only if Encina SFS will run on this node). 4.
  • Page 242: Configure Hpss With Dce

    Chapter 5 HPSS Infrastructure Configuration
  <mkhpss> Reply ===> (Select Option [1-7, E, U, X]):
  Messages will be provided to indicate the status of the HPSS infrastructure configuration process at each stage. The results can be obtained from the messages displayed during the configuration process, or reviewed afterward in the /opt/hpss/config/mkhpss.data file.
  • Page 243: Configure Encina Sfs Server

    4. Create keytab files for HPSS servers and clients.
  5. Randomize the keys in the created server and client keytabs.
  6. Set up the DCE CDS directory for HPSS servers.
  7. Set up HPSS-related Extended Registry Attributes, and create the hpss_cross_cell_members group.
  • Page 244: Manage Sfs Files

    Chapter 5 HPSS Infrastructure Configuration If the server machine is running Solaris, mkhpss will prompt the administrator to enter a disk name to add a mirror to the logical volume. For Solaris server machines, the disk name should start with /dev/rdsk. 5.5.3 Manage SFS Files The mkhpss script invokes the managesfs utility, a front-end to the sfsadmin commands provided by Encina, to allow the user to create and manage the SFS files created for HPSS.
  • Page 245: Start Up Ssm Servers/User Session

    Section 6.2.2: SSM User Session Configuration and Startup on page 253 for instructions to create an SSM user session at a later time. Even though all SSM User IDs and their prerequisite accounts can now be created, we recommend that only an administrative SSM ID be created at this time. The hpssuser utility should be invoked to create a new HPSS account after the HPSS system is operational so that all desired accounts (UNIX, DCE, SSM, or FTP) for new HPSS users can be created at the same time.
  • Page 246: Customize Dce Keytabs Files

    Chapter 5 HPSS Infrastructure Configuration
  <mkhpss> Reply ===> (Select Option [1-2, M, U]):
  Option 2 on the Unconfiguration Menu will issue the following warning message, and the user will have the option to abort or continue with the unconfiguration: <mkhpss>...
  • Page 247 • hpss_log • hpss_ls • hpss_mm • hpss_mountd • hpss_mps • hpss_mvr • hpss_ndcg • hpss_nfs • hpss_pvl • hpss_pvr • hpss_ssm • hpss_ss To adhere to the site's local password policy, these principals were created with known keys, and their keys were subsequently randomized.
  • Page 248 Chapter 5 HPSS Infrastructure Configuration
  -random \
  -registry
  For each entry in /krb5/hpssclient.keytab do:
  % dcecp -c keytab add \
    /.:/hosts/$HPSS_CDS_HOST/config/keytab/hpssclient.keytab \
    -member <entry_name> \
    -random \
    -registry
  where <entry_name> refers to an entry in the keytab file, e.g., hpss_ssm, and $HPSS_CDS_HOST refers to the CDS machine host name;...
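The per-entry randomization described above can be sketched as a loop. Since dcecp only exists inside a DCE cell, DCECP defaults to a dry-run echo here (set DCECP=dcecp on a real core server node); the host name is an assumption, and the entry names are taken from the principal list on page 247.

```shell
# Hedged sketch: re-randomize each client keytab entry via dcecp.
DCECP=${DCECP:-"echo dcecp"}                 # dry-run by default; use DCECP=dcecp in a DCE cell
HPSS_CDS_HOST=${HPSS_CDS_HOST:-hpsscore}     # CDS machine host name (assumed)
KEYTAB=/.:/hosts/${HPSS_CDS_HOST}/config/keytab/hpssclient.keytab

# Entry names are examples drawn from the HPSS principal list (page 247).
for entry in hpss_ssm hpss_mps hpss_pvl hpss_pvr; do
    $DCECP -c keytab add "$KEYTAB" -member "$entry" -random -registry
done
```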
  • Page 249: Chapter 6 Hpss Configuration

    HPSS Configuration Chapter 6 6.1 Overview This chapter provides instructions for creating the configuration data to be used by the HPSS servers. This includes creating the server configuration, defining the storage policies, and defining the storage characteristics. The configuration data can be created, viewed, modified, or deleted using the HPSS SSM GUI windows.
  • Page 250: Hpss Configuration Limits

    Chapter 6 HPSS Configuration 8. Create a specific configuration entry for each HPSS server (Section 6.8: Specific Server Configuration on page 323) 9. Configure MVR devices and PVL drives (Section 6.9: Configure MVR Devices and PVL Drives on page 401) 6.1.2 HPSS Configuration Limits The following configuration limits are imposed by SSM and/or the HPSS servers:...
  • Page 251: Using Ssm For Hpss Configuration

    • Delete: 10,000 physical volumes per SSM delete request 6.1.3 Using SSM for HPSS Configuration The HPSS server and resource configuration data may be created, viewed, updated, or deleted using the SSM windows. The configuration data are kept in Encina SFS files. When you submit a request to configure a new server, SSM displays all fields with the appropriate default data.
  • Page 252: Server Reconfiguration And Reinitialization

    Chapter 6 HPSS Configuration Figure 6-1 HPSS Health and Status Window 6.1.4 Server Reconfiguration and Reinitialization HPSS servers read their respective configuration file entries during startup to set their initial running conditions. Note that modifying a configuration file while a server is running does not immediately change the running condition of the server.
  • Page 253: Ssm Server Configuration And Startup

    script, start_ssm, is provided to bring up the SSM System Manager and the SSM Data Server. Another provided script, start_ssm_session, can be used to bring up an SSM user session. 6.2.1 SSM Server Configuration and Startup The SSM System Manager will automatically create an SSM configuration entry, if one does not already exist, using the environment variables defined in the /opt/hpss/config/hpss_env file.
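A minimal sketch of the startup sequence using the two scripts named above. The /opt/hpss root and bin/ layout are assumptions; RUN defaults to a dry-run echo so the sequence can be previewed on a machine without HPSS installed (set RUN= to actually execute).

```shell
# Hedged sketch: bring up the SSM servers, then an SSM user session.
# HPSS_ROOT and the bin/ location are assumptions, not documented paths.
RUN=${RUN:-echo}                       # dry-run by default; set RUN= on a real node
HPSS_ROOT=${HPSS_ROOT:-/opt/hpss}

$RUN "$HPSS_ROOT/bin/start_ssm"            # SSM System Manager + SSM Data Server
$RUN "$HPSS_ROOT/bin/start_ssm_session"    # an SSM user session
```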
  • Page 254 Chapter 6 HPSS Configuration 3. privileged. This security level is normally assigned to a privileged user such as an HPSS system analyst. This SSM user can view most of the SSM windows but cannot perform any SSM control functions. 4. user. This security level is normally assigned to an user who may have a need to monitor some HPSS functions.
  • Page 255: Global Configuration

    6.3 Global Configuration The HPSS Global Configuration metadata record provides important information that is used by all HPSS servers. This is the first configuration that must be done through SSM. 6.3.1 Configure the Global Configuration Information The global configuration information can be configured using the HPSS Global Configuration window.
  • Page 256: Global Configuration Variables

    Chapter 6 HPSS Configuration 6.3.2 Global Configuration Variables Figure 6-3 HPSS Global Configuration screen Table 6-1 lists the Global Configuration variables and provides specific recommendations for the Global Configuration. Table 6-1 Global Configuration Variables Display Field Name General Section Acceptable Description Values September 2002...
  • Page 257 Table 6-1 Global Configuration Variables
  Root User ID - The UID of the user who has root access privileges to the NS database, if the Root Is Superuser flag is set to ON. (Acceptable Values: a valid root user)
  Root Is Superuser
  Default Class of Service
  Root Name Server
  HPSS Installation Guide Release 4.5, Revision 2
  • Page 258 Chapter 6 HPSS Configuration Table 6-1 Global Configuration Variables Display Field Name Default Logging Policy Accounting Policy Accounting Summary Classes of Service File Families General Server Configurations Location Server Policies Logging Policies Migration Policies Mover Devices NS Global Filesets Purge Policies Acceptable Description Values...
  • Page 259: Storage Subsystems Configuration

    Table 6-1 Global Configuration Variables Display Field Name Storage Classes Storage Hierarchies Storage Subsystems Subsystem Storage Class Thresholds 6.4 Storage Subsystems Configuration The storage subsystem configuration information can be configured using the HPSS Storage Subsystem Configuration window. Due to the importance of the storage subsystem configuration metadata, great care must be taken before updating it once the system is configured.
  • Page 260: Storage Subsystem Configuration Variables

    Chapter 6 HPSS Configuration Figure 6-4 Storage Subsystem Configuration window 6.4.1 Storage Subsystem Configuration Variables Table 6-2 lists the Storage Subsystem Configuration Variables and provides specific recommendations. Table 6-2 Storage Subsystem Configuration Variables Display Field Description Name Subsystem ID A unique integer ID for the storage subsystem.
  • Page 261 Table 6-2 Storage Subsystem Configuration Variables
  Subsystem Name - The descriptive name of the storage subsystem. This field may only be modified at create time for the storage subsystem.
  Migrate Records File (SFS) - The name of the SFS file where migration records are stored for this...
  • Page 262: Basic Server Configuration

    Chapter 6 HPSS Configuration 6.5 Basic Server Configuration All HPSS servers use a similar metadata structure for the basic server configuration. A basic server configuration entry must be created for each of the following server types: • Bitfile Server • DMAP Gateway •...
  • Page 263: Configure The Basic Server Information

    6.5.1 Configure the Basic Server Information A basic server configuration entry can be created using the Basic Server Configuration window. After the configuration entry is created, it can be viewed, updated, or deleted through this window. From the HPSS Health and Status window (shown in Figure 6-1), click on the Admin menu, select the Configure HPSS option and click on the Servers option.
  • Page 264: Figure 6-5 Hpss Servers Window

    Chapter 6 HPSS Configuration Figure 6-5 HPSS Servers Window
  • Page 265: Figure 6-6 Basic Server Configuration Window

    Figure 6-6 Basic Server Configuration Window 6.5.1.1 Basic Server Configuration Variables The fields in the Server Configuration window describe information necessary for servers to successfully operate. The fields also contain information that is necessary for successful interaction with the SSM component. In addition, each HPSS basic server configuration includes Security Information and Audit Policy fields that determine the server's security environment.
  • Page 266: Table 6-3 Basic Server Configuration Variables

    Chapter 6 HPSS Configuration • Execution Controls fields • DCE Controls fields • Security Controls fields • Audit Policy fields To save window space, the last four categories are presented in “layers”, and each layer has its name displayed on a “tab”. To access a different layer, click on the appropriate tab. Table 6-3 lists the fields on the Basic Server Configuration window in the approximate order that they appear on the window.
  • Page 267 Table 6-3 Basic Server Configuration Variables (Continued)
  Server Subtype - The subtype of the selected server. Most servers do not have subtypes.
  Server CDS Name - The name of a Cell Directory Service (CDS) directory in which a server creates additional CDS objects and registers its endpoint information.
  • Page 268 Chapter 6 HPSS Configuration Table 6-3 Basic Server Configuration Variables (Continued) Display Field Name Description Storage Subsystem Name of the HPSS Storage Subsystem to which this server will be assigned. No more than one BFS, MPS, NS, Disk SS, and Tape SS can be assigned to any one subsystem.
  • Page 269 Table 6-3 Basic Server Configuration Variables (Continued)
  Execute Hostname - The hostname on which a particular server is to run. SSM uses this field to locate the Startup Daemon that will execute the server.
  Advice: In order for a server to start, a Startup Daemon must be running. The server will be started on the host that has a running Startup Daemon whose configuration has an Execute Hostname field matching exactly the one specified in the server's basic configuration record.
  • Page 270 Chapter 6 HPSS Configuration Table 6-3 Basic Server Configuration Variables (Continued) Display Field Name Description Auto Restart Count The HPSS Startup Daemon uses this field to control automatic server restarts. If the server shuts down unexpectedly, the Startup Daemon will restart the server, without any intervention by SSM, up to this many times;...
  • Page 271 Table 6-3 Basic Server Configuration Variables (Continued)
  Maximum Connections - The highest number of connection contexts this server can establish.
  Advice: This value should be set based on the anticipated number of concurrent clients. Too large a value may slow down the system.
  • Page 272 Chapter 6 HPSS Configuration Table 6-3 Basic Server Configuration Variables (Continued) Display Field Name Description Thread Pool Size The highest number of threads this server can spawn in order to handle concurrent requests. Advice: If necessary, the default values can be changed when defining servers. Too large a value may slow down the system.
  • Page 273 Table 6-3 Basic Server Configuration Variables (Continued) Display Field Name Description Principal Name The DCE principal name defined for the server during the infrastructure configuration phase. Advice: Ensure that the key table named by the Keytab Pathname field contains an entry for this principal; otherwise, authentication will fail. Protection Level The amount of encryption that will be used in...
  • Page 274 Chapter 6 HPSS Configuration Table 6-3 Basic Server Configuration Variables (Continued) Display Field Name Description Authorization Service The authorization service to use when passing identity information in communications to HPSS components. Advice: The recommended authorization server is DCE. This ensures that the most complete identity information about the client is sent to the server.
  • Page 275 Table 6-3 Basic Server Configuration Variables (Continued) Display Field Name Description Keytab Pathname The absolute pathname of the UNIX file containing the keytab entry that will be used by the server when setting up its identity. Advice: The server must have read access to this file. Do not set other access permissions on this file or your security can be breached.
  • Page 276 Chapter 6 HPSS Configuration Table 6-3 Basic Server Configuration Variables (Continued) Display Field Name Description CHMOD The Security Audit Policy for Name Server Object Permissions events. If set, security audit messages will be sent to the logging subsystem. CHOWN The Security Audit Policy for Name Server Object Owner events.
  • Page 277 Table 6-3 Basic Server Configuration Variables (Continued) Display Field Name Description RENAME The Security Audit Policy for Name Server Object Rename events. If set, security audit messages will be sent to the logging subsystem. Advice: Sites that must audit object deletion should set the RENAME field to ALL for Name Server.
  • Page 278 Chapter 6 HPSS Configuration Table 6-3 Basic Server Configuration Variables (Continued) Display Field Name Description BFSETATTRS The Security Audit Policy for Bitfile Server Set Bitfile Attribute events. If set, security audit messages will be sent to the logging subsystem. 6.5.1.2 Server CDS Security ACLs The server's CDS directory inherits ACLs from the parent but SSM adds the following ACL for the HPSS Server Group:...
  • Page 279: Hpss Storage Policy Configuration

    {group:subsys/dce/cds-server:rwdtc}
  {any_other:r--t-}
  CDS Security object for the MVR Security object:
  {user:${HPSS_PRINCIPAL_SSM}:rw-tc}
  {group:subsys/dce/cds-admin:rwdtc}
  {group:subsys/dce/cds-server:rwdtc}
  {any_other:---t-}
  CDS Security object for the NS Security object:
  {user:${HPSS_PRINCIPAL_FTPD}:r---c}
  {user:${HPSS_PRINCIPAL_BFS}:r---c}
  {user:${HPSS_PRINCIPAL_NDCG}:r---c}
  {user:${HPSS_PRINCIPAL_NFSD}:r---c}
  {user:${HPSS_PRINCIPAL_DMG}:rw--c}
  {user:${HPSS_PRINCIPAL_SSM}:rw--c}
  {user:cell_admin:rwdtc}
  {any_other:---t-}
  CDS Security object for the PVL Security object:
  {user:${HPSS_PRINCIPAL_PVR}:rwdt-}
  {user:${HPSS_PRINCIPAL_SS}:rwdt-}
  {user:${HPSS_PRINCIPAL_SSM}:rwdtc}
  {group:subsys/dce/cds-admin:rwdtc}
  {group:subsys/dce/cds-server:rwdtc}...
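To verify the ACL entries SSM placed on a server's CDS Security object, the standard dcecp acl command can be used. This is a hedged sketch: the object path below is an assumption (substitute your server's actual CDS name), and DCECP defaults to a dry-run echo outside a DCE cell.

```shell
# Hedged sketch: inspect the ACL on a server's CDS Security object.
DCECP=${DCECP:-"echo dcecp"}         # dry-run by default; use DCECP=dcecp in a DCE cell
SEC_OBJ=${SEC_OBJ:-/.:/hpss/pvl}     # assumed CDS path of the PVL Security object

$DCECP -c acl show "$SEC_OBJ"
```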
  • Page 280: Configure The Migration Policies

    Chapter 6 HPSS Configuration 6.6.1 Configure the Migration Policies A migration policy is associated with a storage class and defines the criteria by which data is migrated from that storage class to storage classes at lower levels in the storage hierarchies. Note, however, that it is the storage hierarchy definitions, not the migration policy, which determines the number and location of the migration targets.
  • Page 281 Before deleting a basic migration policy, make sure that it is not referenced in any storage class configurations. If a storage class configuration references a migration policy which does not exist, the Migration/Purge and Bitfile Servers will not start. When a migration policy is added to or removed from a storage class configuration, the Migration/Purge Servers must be restarted in order for migration to begin or end on this storage class.
  • Page 282: Figure 6-7 Migration Policy Configuration Window

    Chapter 6 HPSS Configuration Figure 6-7 Migration Policy Configuration Window 6.6.1.1 Migration Policy Configuration Variables Table 6-4 lists the fields on the Migration Policy window and provides specific recommendations for configuring the Migration Policy for use by HPSS. Note that descriptions of fields which appear both in the Basic Policy and Storage Subsystem-Specific Policy sections of the window apply to both fields.
  • Page 283: Table 6-4 Migration Policy Configuration Variables

    Table 6-4 Migration Policy Configuration Variables
  Policy ID - A unique ID associated with the Migration Policy.
  Policy Name - The descriptive name of a Migration Policy.
  Advice: A policy's descriptive name should be meaningful to local site administrators and operators.
  • Page 284 Chapter 6 HPSS Configuration Table 6-4 Migration Policy Configuration Variables (Continued) Display Field Name Description Runtime Interval An integer, in minutes, that dictates how often the migration process will occur within a storage class. Note: The value specifies the interval between the completion of one migration run and the beginning of the next.
  • Page 285: Configure The Purge Policies

    Table 6-4 Migration Policy Configuration Variables (Continued)
  Migrate At Critical Threshold - A flag that indicates a migration run should be started immediately when the storage class critical threshold is exceeded. This option applies to disk migration only.
  Storage Subsystem - The descriptive name of the storage subsystem to...
  • Page 286 Chapter 6 HPSS Configuration basic policy. Change the desired fields in the subsystem specific policy and then use the Add Specific button to write the new subsystem specific policy to the metadata file. To update an existing basic purge policy, select the Load Existing button on the Basic Policy portion of the Purge Policy window and select the desired policy from the popup list.
  • Page 287: Figure 6-8 Purge Policy Window

    Refer to the window's help file for more information on the individual fields and buttons as well as the supported operations available from the window. 6.6.2.1 Purge Policy Configuration Variables Table 6-5 lists the fields on the Purge Policy window and provides specific recommendations for configuring the Purge Policy for use by HPSS.
  • Page 288: Table 6-5 Purge Policy Configuration Variables

    Chapter 6 HPSS Configuration Table 6-5 Purge Policy Configuration Variables
  Policy ID - A unique ID associated with the Purge Policy.
  Policy Name - The descriptive name of a Purge Policy.
  Advice: A policy's descriptive name should be meaningful to local site administrators and operators.
  • Page 289: Configure The Accounting Policy

    Table 6-5 Purge Policy Configuration Variables (Continued)
  Purge Locks expire after - Maximum number of minutes that a file may hold a purge lock. Purge locked files are not eligible for purging.
  Storage Subsystem - The descriptive name of the storage subsystem to which a subsystem-specific policy applies.
  • Page 290: Figure 6-9 Accounting Policy Window

    Chapter 6 HPSS Configuration 6.6.3.1 Accounting Policy Configuration Variables Table 6-6 lists the fields on the Accounting Policy window and provides specific recommendations for configuring the Accounting Policy for use by HPSS. Figure 6-9 Accounting Policy Window
  • Page 291: Table 6-6 Accounting Policy Configuration Variables

    Table 6-6 Accounting Policy Configuration Variables Display Field Name Description Policy ID The unique ID associated with the Accounting Policy. Advice: This number is always 1 for this version of HPSS since only one accounting policy is currently allowed. Accounting Style Style of accounting that is used by the entire HPSS system.
  • Page 292 Chapter 6 HPSS Configuration Table 6-6 Accounting Policy Configuration Variables (Continued)
  Accounting Validation File - The Encina SFS filename of the Accounting Validation metadata.
  Advice: You need to create and populate this file with the account validation editor if Account Validation is enabled and Site-style accounting is in use (see Section 12.2.23: hpss_avaledit —...
  • Page 293: Configure The Logging Policies

    Table 6-6 Accounting Policy Configuration Variables (Continued) Display Field Name Description Last Run Time The starting timestamp of the current accounting run or the completion time of the last accounting run. Advice: This time is set when an accounting run begins, and it is set again when the accounting run terminates.
  • Page 294 Chapter 6 HPSS Configuration return a policy marked MOD back to its original settings. Deletions (DEL) can be "undone" by using the Cancel Delete button which becomes visible only after a policy has been marked for deletion. To create a new logging policy, click on the Start New button. A new line will be highlighted and you can fill in the Descriptive Name field.
  • Page 295: Figure 6-10 Hpss Logging Policies Window

    Figure 6-10 HPSS Logging Policies Window 6.6.4.2 HPSS Logging Policies List Variables Table 6-7 Logging Policies List Configuration Variables Display Field Name Description Default Logging Policy The descriptive name of the default logging policy. This policy will apply to all servers which do not have their own policy defined.
  • Page 296 Chapter 6 HPSS Configuration
  Descriptive Name - The descriptive name of the HPSS server to which the Logging Policy will apply.
  Record Types to Log - Record types that are to be logged for the specified server.
  Advice: It is recommended that at least the Alarm, Event, Security and Status record types be selected for all servers while they are running normally.
  • Page 297 Display Field Name Description SSM Types Record types that are to be sent to SSM for display. Advice: If SSM or system performance is degraded because excessive messages are being sent to SSM by a particular server(s), set the Logging Policy to filter out Alarm and Event record types.
  • Page 298 Chapter 6 HPSS Configuration • Event—defines an informational message (e.g., subsystem initializing, subsystem terminating). Typically, the policy would be to send events to both the log and to SSM for displaying in the HPSS Alarms and Events window (Figure 1-5 on page 38 of the HPSS Management Guide).
  • Page 299: Figure 6-11 Logging Policy Window

    6.6.4.4 Logging Policy Configuration Variables Table 6-8 lists the fields on the Logging Policy window and provides Logging Policy configuration information. Table 6-8 Logging Policy Configuration Variables
  Name of Server to Which Policy Applies - The descriptive name of the HPSS server to which the Logging Policy will...
  • Page 300: Configure The Location Policy

    Chapter 6 HPSS Configuration Table 6-8 Logging Policy Configuration Variables (Continued)
  Record Types to Log - Record types that are to be logged for the specified server.
  Advice: It is recommended that at least the Alarm, Event, Security and Status record types be selected for all servers while they are running normally.
  • Page 301: Figure 6-12 Location Policy Window

    Once a Location Policy is created or updated, it will not be in effect until all local Location Servers are started or reinitialized. The Reinitialize button on the HPSS Servers window (Figure 1-1 on page 20 of the HPSS Management Guide) can be used to reinitialize a running Location Server. 6.6.5.1 Location Policy Configuration Variables Table 6-9 lists the fields on the Location Policy window and provides Location Policy configuration...
  • Page 302 Chapter 6 HPSS Configuration Table 6-9 Location Policy Configuration Variables (Continued)
  Location Map Update Interval - Interval in seconds that the LS rereads general server configuration metadata.
  Advice: If this value is set too low, a load will be put on SFS while reading configuration metadata and the LS will be unable to contact all remote LSs within the timeout period.
  • Page 303: Configure The Remote Site Policy

    Table 6-9 Location Policy Configuration Variables (Continued) Display Field Name Description RPC Group Name The CDS pathname where the DCE RPC group containing local LS path information should be stored. Advice: All clients will need to know this group name since it is used by them when initializing to contact the Location Server.
  • Page 304: Figure 6-13 Remote Hpss Site Configuration Window

    Chapter 6 HPSS Configuration Figure 6-13 Remote HPSS Site Configuration Window To add a new Remote Site, enter the information about the remote HPSS system and click on the Add button. To update an existing Remote Site, modify the desired fields and click on the Update button to write the changes to the SFS file.
  • Page 305: Hpss Storage Characteristics Configuration

    Table 6-10 Remote HPSS Site Configuration Fields
  Site Name - The descriptive text identifier for this site.
  RPC Group Name - The name of the DCE rpcgroup of the Location Servers at the remote site.
  6.7 HPSS Storage Characteristics Configuration Before an HPSS system can be used, storage classes, storage hierarchies, and classes of service must be created.
  • Page 306: Figure 6-14 Disk Storage Class Configuration Window

    Chapter 6 HPSS Configuration Before deleting a storage class configuration, be sure that all of the storage subsystem specific warning and critical thresholds are set to default. If this is not done, one or more threshold records will remain in metadata and will become orphaned when the storage class configuration is deleted.
  • Page 307: Figure 6-15 Tape Storage Class Configuration Window

    Figure 6-15 Tape Storage Class Configuration Window 6.7.1.1 Storage Class Configuration Variables Table 6-11 lists the fields on the Storage Class Configuration window and provides specific recommendations for configuring the storage class for use by HPSS. SSM enforces certain relationships between the SC fields and will not allow fields to be set to inappropriate values.
  • Page 308: Table 6-11 Storage Class Configuration Variables

    Chapter 6 HPSS Configuration Table 6-11 Storage Class Configuration Variables Display Field Name Description Storage Class ID A unique numeric ID associated with this storage class. Storage Class Name A text string used to describe this storage class. Advice: Choose a Storage Class Name that describes the function of the storage class. Good examples are 4-Way striped 3490 or Non-Striped SCSI Disk.
  • Page 309 Table 6-11 Storage Class Configuration Variables (Continued) Display Field Name Description VV Block Size The size of the logical data block on the new virtual volume. Advice: The VV Block Size chosen will determine the performance characteristics of the virtual volume. This value must meet the following constraining requirements: (1) it must be an integer multiple of the Media Block Size.
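The constraint stated above (the VV Block Size must be an integer multiple of the Media Block Size) can be sketched as a quick arithmetic check. Both sizes below are illustrative assumptions, not recommended values.

```shell
# Hedged sketch: validate the VV Block Size / Media Block Size constraint.
media_block=32768        # 32 KB media block (assumed, illustrative)
vv_block=1048576         # 1 MB virtual volume block (assumed, illustrative)

if [ $(( vv_block % media_block )) -eq 0 ]; then
    echo "ok: VV block = $(( vv_block / media_block )) media blocks"
else
    echo "invalid: VV Block Size is not a multiple of the Media Block Size" >&2
    exit 1
fi
```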
  • Page 310 Chapter 6 HPSS Configuration Table 6-11 Storage Class Configuration Variables (Continued)
  Stripe Transfer Rate - The approximate data transfer rate for the entire stripe.
  Blocks Between Tape Marks - The maximum number of data blocks that can be written on a tape between consecutive tape marks.
  • Page 311 Table 6-11 Storage Class Configuration Variables (Continued) Display Field Name Description Warning Threshold Low threshold for the amount of space used in this storage class. For disk this is the percentage of total space used. For tape this is the number of free VVs remaining.
  • Page 312 Chapter 6 HPSS Configuration Table 6-11 Storage Class Configuration Variables (Continued)
  Average Latency - The average time (in seconds) that elapses between the time a data transfer request is scheduled and the time the data transfer begins. This field is only applicable to tape storage classes.
  • Page 313: Figure 6-16 Storage Subsystem-Specific Thresholds Window

    across all subsystems. This is the simplest way to configure storage class thresholds. If it is desired to override these default values for one or more subsystems, use the Subsystem-Specific Thresholds button on the Storage Class Configuration window to bring up the Storage Subsystem-Specific Thresholds window, shown in Figure 6-16.
  • Page 314: Table 6-12 Storage Subsystem-Specific Thresholds Variables

    Chapter 6 HPSS Configuration to and Change selected Critical Threshold to and press [Enter]. The override values will be applied in the Threshold Table. Use the Update button at the bottom of the screen to apply these changes to metadata. Changes to the override values are accomplished the same way. To delete the override values, select the desired storage subsystem in the Threshold Table and click on the Set To Defaults button.
  • Page 315: Configure The Storage Hierarchies

    Table 6-12 Storage Subsystem-Specific Thresholds Variables
  Change selected Warning Threshold - Override value of the Storage Class Warning Threshold for the selected subsystem.
  Change selected Critical Threshold - Override value of the Storage Class Critical Threshold for the selected subsystem.
  • Page 316: Figure 6-17 Storage Hierarchy Configuration Window

    Chapter 6 HPSS Configuration To delete an existing storage hierarchy, select the Load Existing button on the Storage Hierarchy Configuration window and select the desired storage hierarchy from the popup list. The window will be refreshed with the configured data. Click on the Delete button to delete the storage hierarchy.
  • Page 317: Table 6-13 Storage Hierarchy Configuration Variables

    6.7.2.1 Storage Hierarchy Configuration Variables Table 6-13 lists the fields on the Storage Hierarchy Configuration window and provides specific recommendations for configuring the Storage Hierarchy for use by HPSS. Table 6-13 Storage Hierarchy Configuration Variables Display Field Name Description Hierarchy ID The unique, numeric ID associated with this hierarchy.
  • Page 318: Configure The Classes Of Service

    Chapter 6 HPSS Configuration 6.7.3 Configure the Classes of Service Class of Service (COS) information must be created for each class of service that is to be supported by the HPSS system. A COS can be created using the HPSS Class of Service window. After the configuration entry is created, it can be viewed, updated, or deleted through the same window.
  • Page 319: Figure 6-18 Class Of Service Configuration Window

    Figure 6-18 Class of Service Configuration Window 6.7.3.1 Class of Service Configuration Variables Table 6-14 lists the fields on the HPSS Class of Service window and provides Class of Service configuration information. Table 6-14 Class of Service Configuration Variables Display Field Name Description Class ID A unique integer ID for...
  • Page 320 Chapter 6 HPSS Configuration Table 6-14 Class of Service Configuration Variables (Continued) Display Field Name Description Class Name The descriptive name of the COS. Advice: Select a name that describes the COS in some functional way. A good example would be High Speed Disk Over Tape. Storage Hierarchy The name of the storage hierarchy associated with...
  • Page 321 Table 6-14 Class of Service Configuration Variables (Continued) Display Field Name Description Retry Stage Failures from Secondary Copy When this flag is turned on, HPSS will automatically retry a failed stage from the primary copy if a valid secondary copy exists. For this to work properly, the COS must be set up with at least 2 copies and a valid...
  • Page 322: File Family Configuration

    Chapter 6 HPSS Configuration 6.7.4 File Family Configuration File family information must be created for each file family that is to be supported by the HPSS system. A file family can be created using the File Family Configuration window. After the configuration entry is created, it can be viewed, updated, or deleted through the same window.
  • Page 323: Specific Server Configuration

    6.7.4.1 Configure File Family Variables Table 6-15 describes the file family variables. Table 6-15 Configure File Family Variables Display Description Field Name Family ID An unsigned non-zero integer which serves as a unique identifier for this file family. A unique default value is provided, which may be overwritten if desired.
  • Page 324: Bitfile Server Specific Configuration

    Chapter 6 HPSS Configuration • Mover • Name Server • NFS Daemon • Non-DCE Client Gateway • Physical Volume Library • Physical Volume Repository • Storage Server Sections 6.8.1 through 6.8.14 describe the specific configuration for each of the above servers. The SSM servers, Location Servers, NFS Mount Daemons, and Startup Daemons do not have specific configurations.
  • Page 325: Figure 6-20 Bitfile Server Configuration Window

    Figure 6-20 Bitfile Server Configuration Window 6.8.1.1 Bitfile Server Configuration Variables Table 6-16 lists the fields on the Bitfile Server Configuration window and provides specific recommendations for configuring the BFS for use by HPSS. HPSS Installation Guide Release 4.5, Revision 2 Chapter 6 September 2002 HPSS Configuration...
  • Page 326: Table 6-16 Bitfile Server Configuration Variables

    Chapter 6 HPSS Configuration Table 6-16 Bitfile Server Configuration Variables Display Field Name Description Server Name Descriptive name of the BFS. This name is copied over from the BFS general configuration entry. Server ID The UUID of the Bitfile Server. This ID is copied over from the BFS general configuration entry.
  • Page 327 Table 6-16 Bitfile Server Configuration Variables (Continued) Display Field Name Description Storage Class Statistics Interval An interval in seconds that indicates how often the BFS needs to contact each SS to get up-to-date statistics on each storage class that the SS manages. This information is used in load balancing across multiple storage classes.
  • Page 328 Chapter 6 HPSS Configuration Table 6-16 Bitfile Server Configuration Variables (Continued) Display Field Name Description COS Copy To Disk A flag affecting the COS changes for a bitfile. By default, when the COS of a bitfile is changed, the BFS copies the file to the highest level tape storage class in the target...
  • Page 329: Dmap Gateway Specific Configuration

    Table 6-16 Bitfile Server Configuration Variables (Continued) Display Field Name Description SS Unlink Records The file name of the SFS file where the information representing storage segments to be unlinked is stored. COS Changes The file name of the SFS file where the information indicating which bitfiles need to have the COS...
  • Page 330: Figure 6-21 Dmap Gateway Configuration Window

    Chapter 6 HPSS Configuration Figure 6-21 DMAP Gateway Configuration Window 6.8.2.1 DMAP Gateway Configuration Variables Table 6-17 lists the fields on the HPSS DMAP Gateway Configuration window and provides specific recommendations for configuring the DMAP Gateway for use by HPSS. Table 6-17 DMAP Gateway Configuration Variables Display Field Name Description...
  • Page 331: Gatekeeper Specific Configuration

    Table 6-17 DMAP Gateway Configuration Variables (Continued) Display Field Name Description Encryption Key A number used as an encryption key in message passing. A specific value can be typed in, or the Generate New Key button can be clicked to generate a random key value.
  • Page 332: Figure 6-22 Gatekeeper Server Configuration Window

    Chapter 6 HPSS Configuration Figure 6-22 Gatekeeper Server Configuration window To use a Gatekeeper Server for Gatekeeping Services, the Gatekeeper Server must also be configured into the Storage Subsystem (see Section 6.4: Storage Subsystems Configuration on page 259). To use the Gatekeeper Server for Account Validation Services, the Account Validation button of the Accounting Policy must be ON (see Section 6.6.3: Configure the Accounting Policy on page 289).
  • Page 333: Log Client Specific Configuration

    Table 6-18 Gatekeeper Configuration Fields Display Field Name Description Default Wait Time The default number of seconds the client will wait before retrying a request if not determined by the Site Interface. Note: The Client API uses the environment variable HPSS_GKTOTAL_DELAY to place a maximum limit on the number of seconds a call will delay because of HPSS_ERETRY status codes returned from the Gatekeeper.
  • Page 334: Figure 6-23 Logging Client Configuration Window

    Chapter 6 HPSS Configuration Refer to the window’s help file for more information on the individual fields and buttons as well as the supported operations available from the window. Figure 6-23 Logging Client Configuration Window 6.8.4.1 Log Client Configuration Variables Table 6-19 lists the fields on the Logging Client Configuration window and provides specific recommendations for configuring a Log Client for use by HPSS.
  • Page 335 Table 6-19 Log Client Configuration Variables (Continued) Display Field Name Description Maximum Local Log Size The maximum size in bytes of the local log file. Once this size is reached, the log will be reused in a wraparound fashion. The local log is not automatically archived.
  • Page 336: Log Daemon Specific Configuration

    Chapter 6 HPSS Configuration Table 6-19 Log Client Configuration Variables (Continued) Display Field Name Description Log Messages To: A mask of options to apply to local logging. Advice: If neither Local LogFile nor Syslog is specified, no local logging will occur. If Log Daemon is not specified, messages from HPSS processes executing on the same node as this Log Client will not be written to the central log.
  • Page 337: Figure 6-24 Logging Daemon Configuration Window

    To delete an existing configuration, select the Log Daemon entry on the HPSS Servers window and click on the Type-specific... button from the Configuration button group. The Logging Daemon Configuration window will be displayed with the configured data. Click on the Delete button to delete the specific configuration entry.
  • Page 338: Table 6-20 Log Daemon Configuration Variables

    The maximum size in bytes of the central log file. Once this size is reached, logging will switch to a second log file. The log file that filled up will then be archived to an HPSS file if the Archive Flag is on.
  • Page 339: Metadata Monitor Specific Configuration

    files should be automatically archived when they fill. Switch Logfiles A flag that indicates whether a switch to the second log file should be performed if the archive of the second log file has not yet completed. Advice: Use the default value of Always. Although a log may not get archived, the alternative may be a loss of logged messages to the current log from executing servers.
  • Page 340: Figure 6-25 Metadata Monitor Configuration Window

    Chapter 6 HPSS Configuration To update an existing configuration, select the Metadata Monitor entry on the HPSS Servers window and click on the Type-specific... button from the Configuration button group. The Metadata Monitor Configuration window will be displayed with the configured data. After modifying the configuration, click on the Update button to write the changes to the appropriate SFS file.
  • Page 341: Migration Purge Server Specific Configuration

    Table 6-21 Metadata Monitor Configuration Variables Display Field Name Description Server Name The descriptive name of the MMON. This name is copied over from the MMON general configuration entry. Server ID The UUID of the Metadata Monitor. This ID is copied over from the MMON general configuration entry.
  • Page 342: Figure 6-26 Migration/Purge Server Configuration Window

    Chapter 6 HPSS Configuration To add a new specific configuration, select the Migration/Purge Server entry and click on the Type- specific... button from the Configuration button group on the HPSS Servers window. The Migration/Purge Server Configuration window will be displayed as shown in Figure 6-26 with default values.
  • Page 343: Table 6-22 Migration/Purge Server Configuration Variables

    6.8.7.1 Migration/Purge Server Configuration Variables Table 6-22 lists the fields on the HPSS Migration/Purge Server Configuration window and provides specific recommendations for configuring the MPS for use by HPSS. Table 6-22 Migration/Purge Server Configuration Variables Display Field Name Description Server Name The descriptive name of the MPS.
  • Page 344 Chapter 6 HPSS Configuration Table 6-22 Migration/Purge Server Configuration Variables (Continued) Display Field Name Description Report File (Unix) A prefix string used by the MPS to construct a report file name. The full file name will consist of this string with a date string and subsystem Id appended to it.
  • Page 345: Mover Specific Configuration

    6.8.8 Mover Specific Configuration The Mover (MVR) specific configuration entry can be created using the Mover Configuration window. After the configuration entry is created, it can be viewed, updated, or deleted through the same window. From the HPSS Health and Status window (shown in Figure 6-1), click on the Admin menu, select the Configure HPSS option and click on the Servers option.
  • Page 346: Figure 6-27 Mover Configuration Window

    Chapter 6 HPSS Configuration Figure 6-27 Mover Configuration window 6.8.8.1 Mover Configuration Variables Table 6-23 lists the fields on the Mover Configuration window and provides specific recommendations for configuring the MVR for use by HPSS. Table 6-23 Mover Configuration Variables Display Field Name Description Server Name...
  • Page 347 Table 6-23 Mover Configuration Variables (Continued) Display Field Name Description Server ID The UUID of the MVR. This ID is copied over from the selected MVR general configuration entry. Buffer Size The buffer size (of each buffer) used for double buffering during data transfers.
  • Page 348 Chapter 6 HPSS Configuration Table 6-23 Mover Configuration Variables (Continued) Display Field Name Description Port Range Start The beginning of a range of local TCP/IP port numbers to be used by the Mover when connecting to clients (required by some sites for communication across a firewall).
  • Page 349 TCP/IP, and shared memory data transfers: The *_omi executables indicate support for the Gresham AdvanTape Device Driver. The *_ssd executables indicate support for the IBM SCSI Tape Device Driver. Check the sources for a complete list of supported devices and platform availability...
  • Page 350 Chapter 6 HPSS Configuration 6.8.8.2.2 /etc/services, /etc/inetd.conf, and /etc/xinetd.d To invoke the non-DCE/Encina part of the mover, the remote nodes’ inetd is utilized to start the parent process when a connection is made to a port based on the Mover’s type specific configuration (see section 6.8.8).
  • Page 351: Table 6-24 Irix System Parameters

    port user server server_args = /var/hpss/etc/mvr_ek The specified port will be one greater than the port listed as the TCP Listen Port in the Mover’s type specific configuration. For example, the port value in the example corresponds to a Mover with a TCP Listen Port value of 5001.
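The fragmentary entry above can be fleshed out into complete /etc/services and xinetd entries. This is a hedged sketch, not the exact entries from the manual: the service name hpss_mvr1 and the Mover executable path are placeholders, while the port relationship (one greater than the Mover's TCP Listen Port, so 5002 for a listen port of 5001) and the encryption-key file argument /var/hpss/etc/mvr_ek come from the text above.

```
# /etc/services (sketch; the service name is a placeholder)
hpss_mvr1    5002/tcp

# /etc/xinetd.d/hpss_mvr1 (sketch; the server path is a placeholder)
service hpss_mvr1
{
    socket_type  = stream
    protocol     = tcp
    wait         = no
    port         = 5002
    user         = root
    server       = /opt/hpss/bin/hpss_mvr_tcp
    server_args  = /var/hpss/etc/mvr_ek
}
```

On systems using classic inetd rather than xinetd, the same information would go on a single /etc/inetd.conf line.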
  • Page 352: Table 6-25 Solaris System Parameters

    Chapter 6 HPSS Configuration Table 6-24 IRIX System Parameters Parameter Name maxdmasz Solaris When running the Mover or non-DCE Mover process on a Solaris platform, there are a number of system configuration parameters which may need to be modified before the Mover can be successfully run.
  • Page 353: Table 6-26 Linux System Parameters

    Note that the SEMMSL value should be increased if running more than one Mover on the Linux machine (multiply the minimum value by the number of Movers to be run on that machine). Table 6-26 Linux System Parameters Parameter Header File Name SEMMSL include/linux/sem.h...
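The multiplication advised above can be illustrated with a short calculation. This is a sketch only; the per-Mover minimum of 64 is a placeholder, and the actual minimum SEMMSL value should be taken from Table 6-26.

```shell
# Sketch: compute a SEMMSL target when running several Movers on one
# Linux machine. 64 is a placeholder per-Mover minimum, not the
# documented value; multiply the Table 6-26 minimum by the Mover count.
MOVERS=3
SEMMSL_PER_MOVER=64
echo $(( SEMMSL_PER_MOVER * MOVERS ))
```

On Linux, the current limits can be inspected in /proc/sys/kernel/sem (SEMMSL is the first field).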
  • Page 354: Configure The Name Server Specific Information

    Chapter 6 HPSS Configuration # the start of the client path matches any of the paths in this list # then the transfer will proceed, otherwise the Mover will not transfer # the file. # The format of this file is simply a list of paths, one per line. /gpfs /local/globalfilesystem In the above sample configuration, any file under the path,...
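The matching rule described above (the transfer proceeds only when the start of the client path matches one of the listed paths) can be sketched as a simple prefix test. This is an illustration only, not HPSS source code; it hard-codes the two paths from the sample configuration.

```shell
# Illustration only (not HPSS source): prefix match against the sample
# configuration's allowed paths /gpfs and /local/globalfilesystem.
transfer_allowed() {
    case "$1" in
        /gpfs*|/local/globalfilesystem*) return 0 ;;  # prefix matches
        *) return 1 ;;                                # no match: refuse
    esac
}

transfer_allowed /gpfs/projects/data && echo allowed   # under /gpfs
transfer_allowed /tmp/file || echo denied              # no listed prefix
```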
  • Page 355: Figure 6-28 Name Server Configuration Window

    Figure 6-28 Name Server Configuration Window 6.8.9.1 Name Server Configuration Variables Table 6-27 lists the fields on the Name Server Configuration window and provides specific recommendations for configuring the NS for use by HPSS.
  • Page 356: Table 6-27 Name Server Configuration Variables

    Chapter 6 HPSS Configuration Table 6-27 Name Server Configuration Variables Display Field Name Description Server Name The descriptive name of the NS. This name is copied over from the NS general configuration entry. Server ID The UUID of the NS. This ID is copied over from the NS general configuration entry.
  • Page 357: Nfs Daemon Specific Configuration

    Table 6-27 Name Server Configuration Variables (Continued) Display Field Name Description Root Fileset Name The name to be assigned to the Name Server’s local root fileset. The name must be unique among all filesets within the DCE cell. SFS Filenames. The fields below list the names of the SFS files used by the NS. NS Objects The path name to the SFS file containing the...
  • Page 358 Chapter 6 HPSS Configuration Configuration window will be displayed with the configured data. After modifying the configuration, click on the Update button to write the changes to the appropriate SFS file. To delete an existing configuration, select the NFS Daemon entry on the HPSS Servers window and click on the Type-specific...
  • Page 359: Figure 6-29 Nfs Daemon Configuration Window (Left Side)

    Chapter 6 HPSS Configuration Figure 6-29 NFS Daemon Configuration Window (left side)
  • Page 360: Figure 6-30 Nfs Daemon Configuration Window (Right Side)

    Chapter 6 HPSS Configuration Figure 6-30 NFS Daemon Configuration window (right side)
  • Page 361: Table 6-28 Nfs Daemon Configuration Variables

    6.8.10.1 NFS Daemon Configuration Variables Table 6-28 lists the fields on the NFS Daemon Configuration window and provides specific recommendations for configuring the NFS Daemon for use by HPSS. Table 6-28 NFS Daemon Configuration Variables Display Field Name Description General Parameters. The following fields define general information for the NFS Daemon. Server Name The descriptive name of the NFS Daemon.
  • Page 362 Chapter 6 HPSS Configuration Table 6-28 NFS Daemon Configuration Variables (Continued) Display Field Name Description Exports File (Unix) The name of a UNIX file containing the HPSS directory/file exported for NFS access. Use Privileged Port A flag that indicates whether the NFS clients will use privileged port.
  • Page 363 Table 6-28 NFS Daemon Configuration Variables (Continued) Display Field Name Description Grace Interval An interval, in seconds, to indicate how long after a credential’s last use before it will be expired. Dump Interval An interval, in seconds, to determine how often the credentials map cache is checkpointed to a UNIX file.
  • Page 364 Chapter 6 HPSS Configuration Table 6-28 NFS Daemon Configuration Variables (Continued) Display Field Name Description Expire Credentials A flag that indicates whether the credentials map entries will be expired and removed based on the grace and purge intervals. Header Cache. The following fields are used to cache names and attributes of the HPSS file objects. Class of Service The name of the COS used when creating the NFS...
  • Page 365 Table 6-28 NFS Daemon Configuration Variables (Continued) Display Field Name Description Buffer Size The size of a single data cache entry. Advice: This value is the amount of data read from and written to HPSS by the data cache layer at a time. Is is recommended that this value be a multiple of 8 KB and, if possible, a power of 2.
  • Page 366 Chapter 6 HPSS Configuration Table 6-28 NFS Daemon Configuration Variables (Continued) Display Field Name Description Thread Interval An interval, in seconds, that specifies how often the cleanup threads wake up to look for dirty entries. Advice: This value should not be too small because extra cleanup threads are spawned as needed when the cache starts to fill up.
  • Page 367: Non-Dce Client Gateway Specific Configuration

    Table 6-28 NFS Daemon Configuration Variables (Continued) Display Field Name Description Recover Cached Data A flag that indicates whether the data cache layer will be forced to look for dirty cache entries in the CheckPoint File. When ON, the data cache layer looks for dirty entries.
  • Page 368: Figure 6-31 Non-Dce Client Gateway Configuration Window

    Chapter 6 HPSS Configuration Refer to the window’s help file for more information on the individual fields and buttons as well as the supported operations available from the window. Figure 6-31 Non-DCE Client Gateway Configuration Window 6.8.11.1 Non-DCE Client Gateway Configuration Variables Table 6-29 lists the fields on the Non-DCE Client Gateway Configuration window and provides specific recommendations for configuring the Non-DCE Client Gateway for use by HPSS.
  • Page 369 Table 6-29 Non-DCE Client Gateway Configuration Variables Display Field Name Description Server ID The UUID of the Non- DCE Client Gateway. This ID is copied over from the Non-DCE Client Gateway general configuration entry. TCP Port Default port number to be used by the Non-DCE Client Gateway’s listener process.
  • Page 370: Configure The Pvl Specific Information

    Chapter 6 HPSS Configuration Table 6-29 Non-DCE Client Gateway Configuration Variables Display Field Name Description Maximum Request Queue Size The maximum number of requests to queue until a request thread becomes available. If this queue fills, no more requests will be processed for this particular Non-DCE client until there is more room in...
  • Page 371: Figure 6-32 Pvl Server Configuration Window

    To add a new specific configuration, select the PVL Server entry and click on the Type-specific... button from the Configuration button group on the HPSS Servers window. The PVL Server Configuration window will be displayed as shown in Figure 6-32 with default values. If the default data is not desired, change the fields with the desired values.
  • Page 372: Table 6-30 Physical Volume Library Configuration Variables

    Chapter 6 HPSS Configuration 6.8.12.1 Physical Volume Library Configuration Variables Table 6-30 lists the fields on the PVL Server Configuration window and provides specific recommendations for configuring the PVL for use by HPSS. Table 6-30 Physical Volume Library Configuration Variables Display Field Name Description Server Name...
  • Page 373: Configure The Pvr Specific Information

    After the configuration entry is created, it can be viewed, updated, or deleted through the same window. If you are configuring a PVR for StorageTek, IBM 3494/3495/3584, or ADIC AML, you should read the PVR-specific section before proceeding with PVR configuration (Section 6.8.13.4: StorageTek PVR and RAIT PVR Information on page 389, Section 6.8.13.3: IBM 3494/3495 PVR...
  • Page 374: Figure 6-33 3494 Pvr Server Configuration Window

    Chapter 6 HPSS Configuration Figure 6-33 3494 PVR Server Configuration Window
  • Page 375: Figure 6-34 3495 Pvr Server Configuration Window

    Chapter 6 HPSS Configuration Figure 6-34 3495 PVR Server Configuration Window
  • Page 376: Figure 6-35 3584 Lto Pvr Server Configuration Window

    Chapter 6 HPSS Configuration Figure 6-35 3584 LTO PVR Server Configuration Window
  • Page 377: Figure 6-36 Aml Pvr Server Configuration Window

    Chapter 6 HPSS Configuration Figure 6-36 AML PVR Server Configuration Window
  • Page 378: Figure 6-37 Stk Pvr Server Configuration Window

    Chapter 6 HPSS Configuration Figure 6-37 STK PVR Server Configuration Window
  • Page 379: Figure 6-38 Stk Rait Pvr Server Configuration Window

    Chapter 6 HPSS Configuration Figure 6-38 STK RAIT PVR Server Configuration Window
  • Page 380: Figure 6-39 Operator Pvr Server Configuration Window

    Chapter 6 HPSS Configuration Figure 6-39 Operator PVR Server Configuration Window 6.8.13.1 Physical Volume Repository Configuration Variables Table 6-31 lists the fields on the PVR Server Configuration window and provides specific recommendations for configuring a PVR for use by HPSS. Table 6-31 Physical Volume Repository Configuration Variables Display Field Name Description...
  • Page 381 SFS file for cartridges. Suggested names are: cartridge_3494 for IBM 3494 PVRs cartridge_aml for ADIC AML PVRs cartridge_lto for IBM 3584 LTO PVRs cartridge_stk for StorageTek-based PVRs cartridge_rait for StorageTek RAIT PVRs ccartridge_operator for Operator PVRs If two robots of the same type are managed by two different PVRs, be sure that each one has a different SFS file for cartridges.
  • Page 382 Chapter 6 HPSS Configuration Table 6-31 Physical Volume Repository Configuration Variables (Continued) Display Field Name Description Same Job On Controller The number of cartridges from this job mounted on this drive’s controller. The larger the number, the harder the PVR will try to avoid mounting two tapes in the same stripe set on drives attached to the same...
  • Page 383 Table 6-31 Physical Volume Repository Configuration Variables (Continued) Display Field Name Description Advice: The Same Job on Controller, Other Job on Controller, and Distance to Drive values are used by the PVR when selecting a drive for a tape mount operation. The three values are essentially weights that are used to compute an overall score for each possible drive.
  • Page 384 Chapter 6 HPSS Configuration Table 6-31 Physical Volume Repository Configuration Variables (Continued) Display Field Name Description Support Shelf Tape A toggle button. If ON, the PVR will support the removal of cartridges from the tape library. If OFF, the PVR will not support the removal of cartridges from the tape library.
  • Page 385 Table 6-31 Physical Volume Repository Configuration Variables (Continued) Display Field Name Description Drive Error Limit This field is intended to be used in conjunction with the PVR Server “Retry Mount Time Limit”. If the number of consecutive mount errors which occur to any drive in this PVR equal or exceed this value, the drive is automatically...
  • Page 386 _DEVICE will override the value entered in this field. Advice: For Block Multiplexer Channel (BMUX)-attached IBM robots, the Async Device must be different from the Command Device. For TTY and LAN-attached robots, the devices can be the same. Client Name...
  • Page 387 6.8.13.2.1 Vendor Software requirements HPSS is designed to work with the AIX tape driver (Atape) software to talk to the IBM 3584 LTO Library over a SCSI channel. Currently HPSS is only supported for the AIX version of the Atape driver.
  • Page 388 BMUX case. HPSS can share an IBM robot with other tape management systems. If a robot is shared, care must be taken to make sure that a drive is not used by any other tape management system while that drive is configured as unlocked in the HPSS PVL.
  • Page 389 6.8.13.3.3 Cartridge Import and Export When importing new cartridges into HPSS, the cartridges must be entered into the IBM robot before any HPSS import operations are performed. Cartridges placed in the convenience I/O port will automatically be imported by the robot.
  • Page 390 Chapter 6 HPSS Configuration The STK RAIT PVR cannot be supported at this time since STK has not yet made RAIT generally available. The SSI requires that the system environment variables CSI_HOSTNAME and ACSAPI_PACKET_VERSION be correctly set. Note that due to limitations in the STK Developer's Toolkit, if the SSI is not running when the HPSS PVR is started, or if the SSI crashes while the HPSS PVR is running, the HPSS PVR will lock up and will have to be manually terminated by issuing a kill -9 command.
  • Page 391 the Server System Interface (ssi) and the Toolkit event logger. These binaries and associated script files are distributed with the HPSS, but are maintained by the STK Corporation. The binaries and script files for starting the STK client side processes are located in the $HPSS_PATH/stk/bin directory.
  • Page 392 Chapter 6 HPSS Configuration Enter Remote Host Version (ACSAPI_PACKET_VERSION): 4 Starting /opt/hpss/stk/bin/mini_el... Attempting startup of /opt/hpss/ bin/mini_el ... Starting /opt/hpss/bin/ssi... Attempting startup of PARENT for /opt/ hpss/bin/ssi... SIGHUP received Parent Process ID is: 17290 Attempting startup of /opt/hpss/bin/ssi... SIGHUP received Parent Process #17290 EXITING NORMALLY Initialization Done.
  • Page 393 User needs to set the Server Name and Client Name, which are case sensitive, in the AML PVR Server Configuration panel to establish the connectivity between the HPSS software and the OS/2 controlling the robot. The Server Name is the name of the controller associated with the TCP/IP address, as defined in the TCP/IP HOST file, and the Client Name is the name of the OS/2 administrator client as defined in the DAS configuration.
  • Page 394: Configure The Storage Server Specific Information

    Chapter 6 HPSS Configuration
    1. Make sure the AMU archive management software is running and the hostname is resolved.
    2. Select an OS/2 window from the Desktop and change the directory to C:\DAS:
    C:> cd \das
    3. At the prompt, type tcpstart and make sure that TCP/IP gets configured and that the port mapper program is started:
    C:\das>...
  • Page 395 default data is not desired, change the fields with the desired values. Click on the Add button to create the configuration entry. To update an existing configuration, select the Storage Server entry on the HPSS Servers window and click on the Type-specific... button from the Configuration button group. The Storage Server Configuration window will be displayed with the configured data.
  • Page 396: Figure 6-40 Disk Storage Server Configuration Window

    Chapter 6 HPSS Configuration Figure 6-40 Disk Storage Server Configuration Window
  • Page 397: Figure 6-41 Tape Storage Server Configuration Window

    Figure 6-41 Tape Storage Server Configuration Window 6.8.14.1 Storage Server Configuration Variables Table 6-32 lists the fields on the Storage Server Configuration window and provides specific recommendations for configuring a Storage Server for use by HPSS. Table 6-32 Storage Server Configuration Variables Display Field Name Description Server Name...
  • Page 398 Chapter 6 HPSS Configuration Table 6-32 Storage Server Configuration Variables (Continued) Display Field Name Description Server ID The UUID of this SS. This ID is copied from the SS general configuration entry. Statistics Fields. The following fields are displayed for Tape Storage Servers only. Total Virtual Volumes The number of virtual volumes that are managed...
  • Page 399 Table 6-32 Storage Server Configuration Variables (Continued) Display Field Name Description Total Bytes The total number of bytes of used and available storage known to the SS. This field is applicable to the Tape Storage Server only. Advice: This value should be set to zero when the specific configuration record is first created.
  • Page 400 Chapter 6 HPSS Configuration Table 6-32 Storage Server Configuration Variables (Continued) Display Field Name Description Physical Volumes The name of the SFS file that contains the physical volume metadata. Advice: Use a name that is meaningful to the type of SS and metadata being stored. Virtual Volumes The name of the SFS file that contains the virtual...
  • Page 401: Configure Mvr Devices And Pvl Drives

    Table 6-32 Storage Server Configuration Variables (Continued) Display Field Name Description Storage Segments The name of the SFS file that contains the storage segment metadata. Advice: Use a name that is meaningful to the type of SS and metadata being stored. 6.9 Configure MVR Devices and PVL Drives The MVR Device and PVL Drive objects refer to the same physical drive, but are maintained in two separate SFS files based on what information is required by the PVL for drive management and...
  • Page 402 Before proceeding with device and drive configuration associated with StorageTek, or IBM robotics, refer to , Section 6.8.13.2: LTO PVR Information on page 387, Section 6.8.13.3: IBM 3494/3495 PVR Information on page 388, Section 6.8.13.4: StorageTek PVR and RAIT PVR Information on page 389, or Section 6.8.13.5: ADIC Automatic Media Library Storage Systems...
  • Page 403: Figure 6-42 Hpss Devices And Drives Window

    Figure 6-42 HPSS Devices and Drives Window To configure a new device and drive, click on the Add New... button on the HPSS Devices and Drives window. The Mover Device and PVL Drive Configuration window will be displayed as shown in Figure 6-44 with default values for a new tape device/drive. If a disk device/drive is desired, click the Disk button to display the default disk data (as shown in Figure 6-43) before modifying any other fields.
  • Page 404: Figure 6-43 Disk Mover Device And Pvl Drive Configuration Window

    Chapter 6 HPSS Configuration Figure 6-43 Disk Mover Device and PVL Drive Configuration Window
  • Page 405: Device And Drive Configuration Variables

    Figure 6-44 Tape Mover Device and PVL Drive Configuration Window 6.9.1 Device and Drive Configuration Variables Table 6-33 lists the fields on the Mover Device and PVL Drive Configuration window. Table 6-33 Device/Drive Configuration Variables Display Field Name Description Device/Drive ID The unique, numeric ID associated with this device/drive.
  • Page 406 Chapter 6 HPSS Configuration Table 6-33 Device/Drive Configuration Variables (Continued) Display Field Name Description Device /Drive Type The type of device over which data will move. Mover The name of the MVR that controls the device. Advice: There is a maximum of 64 devices that can be configured for a Mover. The name of the PVR that handles the removable media operations for the...
  • Page 407 Table 6-33 Device/Drive Configuration Variables (Continued) Display Field Name Description Starting Offset The offset in bytes from the beginning of the disk logical volume at which the Mover will begin using the volume. The space preceding the offset will not be used by HPSS. Advice: This value is used for disk devices only.
  • Page 408 Chapter 6 HPSS Configuration Table 6-33 Device/Drive Configuration Variables (Continued) Display Field Name Description Device Name The name by which the MVR can access the device. Advice: This name is usually the path name of a device special file such as /dev/rmt0. For locally attached disk devices, the pathname should refer to the raw/character special file (e.g., /dev/rhpss_disk1).
  • Page 409 Atape driver or using the library console. For IBM 3494/3495 robots: The Drive Address configuration entries correspond to the hexadecimal Library device number of the drive. Determine the Library device number by running the command “/opt/hpss/bin/GetESANumbers /dev/rmtX”...
  • Page 410 LBAs and relative addresses. LBA positioning provides for faster access of data residing on tape. The benefit will be realized for read requests with many source descriptors specifying locations spread sparsely down the tape. This is only supported with the IBM SCSI tape device driver.
  • Page 411: Supported Platform/Driver/Tape Drive Combinations

    In Table 6-34, the “Driver” column uses the following abbreviations: AdvanTAPE - Gresham Tape Device Driver Ampex - Ampex Tape Device Driver IBM - IBM SCSI Tape Device Driver Native - AIX, Solaris, IRIX, or Linux native SCSI Tape Device Driver 6.9.3...
  • Page 412 Chapter 6 HPSS Configuration
  • Page 413: Chapter 7 Hpss User Interface Configuration

    Chapter 7 HPSS User Interface Configuration 7.1 Client API Configuration The following environment variables can be used to define the Client API configuration: The HPSS_LS_NAME defines the CDS name of the Location Server RPC Group entry for the HPSS system that the Client API will attempt to contact. The default is /.:/hpss/ls/group. The HPSS_MAX_CONN defines the number of connections that are supported by the Client API within a single client process.
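Such variables are set in the environment of the client process before it runs. A minimal sketch, assuming a POSIX shell; the HPSS_LS_NAME value is the documented default, while the HPSS_MAX_CONN value is purely illustrative:

```shell
# Point the Client API at the Location Server RPC Group entry
# (this is the documented default value).
HPSS_LS_NAME="/.:/hpss/ls/group"
export HPSS_LS_NAME

# Cap the connections a single client process may hold.
# The value 10 is only an example; choose one suited to the client.
HPSS_MAX_CONN=10
export HPSS_MAX_CONN

echo "LS name: $HPSS_LS_NAME, max connections: $HPSS_MAX_CONN"
```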
  • Page 414 Chapter 7 HPSS User Interface Configuration The HPSS_SERVER_NAME environment variable is used to specify the server name to be used when initializing the HPSS security services. The default value is /.:/hpss/client. This variable is primarily intended for use by HPSS servers that use the Client API. The HPSS_DESC_NAME environment variable is used to control the descriptive name used in HPSS log messages if the logging feature of the Client API is enabled.
  • Page 415: Non-Dce Client Api Configuration

    This is only needed when the client must support DFS in a cross-cell environment. The default registry is “/.../dce.clearlake.ibm.com”. The HPSS_DMAP_WRITE_UPDATES environment variable is used to control the frequency of cache invalidates that are issued to the DMAPI file system while writing to a file that is mirrored in...
  • Page 416: Environment Variables

    Chapter 7 HPSS User Interface Configuration Thus if the key on the NDCG SSM screen is 0123456789ABCDEF then the key in the ndcl.keyconfig file must look like the sample file shown below: 0x01234567 0x89ABCDEF • Make sure you set the appropriate permissions on this file. Only users authorized to use the Non DCE Client API should have access to this file.
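The transformation from the SSM screen to the keyconfig file is just a split of the 16 hex digits into two 32-bit words. A sketch of that conversion, assuming a POSIX shell (`cut` does the splitting):

```shell
# Key exactly as displayed on the NDCG SSM screen.
KEY=0123456789ABCDEF

# First 8 hex digits become the high word, last 8 the low word.
HI="0x$(printf '%s' "$KEY" | cut -c1-8)"
LO="0x$(printf '%s' "$KEY" | cut -c9-16)"

# This line is what the keyconfig file entry would look like.
printf '%s %s\n' "$HI" "$LO"
```

As the page notes, the resulting file should be readable only by users authorized to use the Non DCE Client API.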
  • Page 417 The HPSS_HOSTNAME environment variable is used to specify the hostname to be used for TCP/IP listen ports created by the Client API. The default value is the default hostname of the machine on which the Client API is running. This value can have a significant impact on data transfer performance for data transfers that are handled by the Client API (i.e., those that use the hpss_Read and hpss_Write interfaces).
  • Page 418: Authentication Setup

    This is only needed when the client must support DFS in a cross-cell environment. The default registry is “/.../dce.clearlake.ibm.com”. The HPSS_DMAP_WRITE_UPDATES environment variable is used to control the frequency of cache invalidates that are issued to the DMAPI file system while writing to a file that is mirrored in...
  • Page 419 1. Create a kerberos realm that includes the client machine as well as the machine running the NDCG. This includes setting up the /etc/krb5.conf file on the client and the server. Sample /etc/krb5.conf: [libdefaults] default_realm = dopey_cell.clearlake.ibm.com default_keytab_name = /krb5/v5srvtab default_tkt_enctypes = des-cbc-crc default_tgs_enctypes = des-cbc-crc [realms] dopey_cell.clearlake.ibm.com = {...
  • Page 420 2. Make sure you already have the /etc/krb5.conf and the /krb5/v5srvtab files 3. Add ndcg and host entries for target kerberos database. Thus if your hostname is dopey.clearlake.ibm.com your ndcg service will be called ndcg/ dopey.clearlake.ibm.com and your host will be called host/dopey.clearlake.ibm.com...
  • Page 421 Enter misc info: () KRB5 service account [all other prompts can be answered with "Enter"] This would create host and ndcg entries for dopey.clearlake.ibm.com. Replace this host name with the actual name of the target. Use the fully domain-qualified name;...
  • Page 422: Ftp Daemon Configuration

    Chapter 7 HPSS User Interface Configuration 1. Compile the client library with kerberos mode enabled and link with the kerberos libraries. Use the following flags: -lkrb5 -lcrypto -lcom_err Make sure the linker knows where to find these libraries using the -L flag. 2.
  • Page 423: Table 7-1 Parallel Ftp Daemon Options

    possible, use a “TCP Wrapper” application for initiating the HPSS FTP Daemon. This facilitates on-the-fly changes to the startup of the HPSS FTP Daemon and provides enhanced security. Several TCP Wrapper applications are available in the public domain. HPSS Parallel FTP Daemon options: The only options which accept additional arguments are the -p, -s, -D, and -F options.
  • Page 424 Chapter 7 HPSS User Interface Configuration Table 7-1 Parallel FTP Daemon Options Option Description s string Specify the syslog facility for the HPSS PFTPD. The syntax on the -s option is -slocal7. The default syslog facility is LOG_DAEMON (reference: /usr/include/sys/syslog.h). Alternatives are local0 - local7. Incorrect specification will default back to LOG_DAEMON.
  • Page 425 Table 7-1 Parallel FTP Daemon Options Option Description Used to disallow login for users whose home directory does not exist or is not properly configured. The default behavior (without the H option) is to put the user in the “/” directory. Toggle the use of trusted hosts.
  • Page 426 Chapter 7 HPSS User Interface Configuration HPSS.) Most of these applications do not exhibit the line length limitation observed by the inetd superdaemon and they also allow “on the fly” modification of initialization parameters for network services; e.g., PFTP, telnet, etc., without having to refresh (kill -HUP) the inetd superdaemon. The “-D string”...
  • Page 427 private <private_group_pathname> # Enable/Disable compression filter # NOT CURRENTLY SUPPORTED. compress [ yes | no ] <class> [ <class> ... ] # Enable/Disable tar filter # NOT CURRENTLY SUPPORTED. tar [ yes | no ] <class> [ <class> ... ] # Control for logging (sent to syslog()).
  • Page 428: Table 7-2 Banner Keywords

    Chapter 7 HPSS User Interface Configuration The following “hpss_options” are read only if the corresponding flag (See the FTP Daemon Flags above) appears on the inetd.conf initialization line for the HPSS PFTP Daemon and may be left “active” (not commented out with the # symbol) even if the default value is desired. # Define the Authentication Manager hpss_option AMGR </opt/hpss/bin/auth_type>...
  • Page 429 The format of the <shutdown_info_pathname> file is: <year> <mon> <day> <hour> <min> <deny> <disc> Message lines contain keywords mentioned above. For example: 1994 1 28 16 0 120 30 System shutdown at %s (current time is %T) New FTP sessions denied at %r FTP users disconnected at %d indicates that shutdown will be 1/28/94 at 16:00, with users disconnected at 15:30 and sessions denied at 14:40 (the 120 indicates 1 hour, 20 minutes).
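The deny and disconnect times follow from subtracting the HHMM-encoded <deny> and <disc> offsets from the shutdown time. A sketch of that arithmetic, assuming a POSIX shell (the function name is ours):

```shell
# offset_before HOUR MIN OFFSET -> prints the HH:MM that lies the
# HHMM-encoded OFFSET (e.g., 120 = 1h20m) before the given time.
offset_before() {
    oh=$(( $3 / 100 ))                  # hours part of the offset
    om=$(( $3 % 100 ))                  # minutes part of the offset
    t=$(( $1 * 60 + $2 - oh * 60 - om ))
    printf '%02d:%02d' $(( t / 60 )) $(( t % 60 ))
}

# Shutdown at 16:00 with deny=120 and disc=30, as in the example.
deny_at=$(offset_before 16 0 120)       # 1 hour 20 minutes earlier
disc_at=$(offset_before 16 0 30)        # 30 minutes earlier
echo "deny at $deny_at, disconnect at $disc_at"
```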
  • Page 430 Chapter 7 HPSS User Interface Configuration Step 4. Creating FTP Users In order for an HPSS user to use FTP, a DCE userid and password must be created. Refer to Section 8.1.1: Adding HPSS Users (page 215) in the HPSS Management Guide for information on how to use the hpssuser utility to create the DCE userid and password and set up the necessary configuration for the user to use FTP.
  • Page 431: Nfs Daemon Configuration

    7.4 NFS Daemon Configuration Before the HPSS NFS daemon can be started, any existing AIX or Solaris native NFS daemons must be stopped and prevented from restarting. This is important because the NFS protocol does not provide a way for clients to specify which of two daemons is wanted. When the system is set up correctly, there should be no 'nfsd' or 'mountd' processes running.
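A startup script could guard against the native daemons along these lines (a sketch, not part of HPSS; the function name and the sample process listings are ours):

```shell
# Sketch: refuse to start the HPSS NFS daemon while a native 'nfsd'
# or 'mountd' shows up in a ps listing (listing passed in as text).
has_native_nfs() {
    printf '%s\n' "$1" | grep -Eq '(^|[ /])(nfsd|mountd)$'
}

clean="  101 init
  202 syslogd"
dirty="  101 init
  303 /usr/sbin/nfsd"

has_native_nfs "$dirty" && echo "stop native NFS daemons first"
has_native_nfs "$clean" || echo "clear to start the HPSS NFS daemon"
```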
  • Page 432: The Hpss Exports File

    Chapter 7 HPSS User Interface Configuration The alternative approach would have been to mount jupiter's files directly on /users and tardis' files directly on /hpss. But this is the layout of mount points that should be avoided; it can cause the two NFS daemons to interact badly with each other.
  • Page 433: Examples

    Table 7-3 Directory Export Options Option anon=UID root=HostName[:HostName,...] access=Client[:Client,...] NOSUID NOSGID UIDMAP key=Key A # (pound sign) anywhere in the HPSS exports file indicates a comment that extends to the end of the line. 7.4.2 Examples 1. To export the HPSS directory /usr to network sandia.gov, enter: /usr -id=1,access=sandia.gov 2. To export the HPSS /usr/local directory to the world, enter: /usr/local -id=4
  • Page 434: Files

    Chapter 7 HPSS User Interface Configuration /usr/tps -id=20,root=hermes:zip 5. To convert client root users to guest UID=100, enter: /usr/new -id=10,anon=100 6. To export read-only to everyone, enter: /usr/bin -id=15,ro 7. To allow several options on one line, enter: /usr/stuff -id=255,access=zip,anon=-3,ro 8.
  • Page 435: Hdm Configuration

    • IBM’s Parallel Operating Environment MPI, Version 3 Release 2 • Sun HPC MPI, version 4.1 • ANL MPICH, version 1.2 Other versions of MPI may be compatible with HPSS MPI-IO as well. The mpio_MPI_config.h file is dynamically generated from the host MPI’s mpi.h file, making it possible to tailor the interaction of HPSS MPI-IO with the host MPI.
  • Page 436: Filesets

    Chapter 7 HPSS User Interface Configuration of cells to allow data access and authorization between clients and servers in different cells. DFS uses the Episode physical file system, although it can use other native file systems, such as UFS. HPSS provides a similar interface through the Linux version of SGI’s XFS file system. A standard interface is used to couple DFS and XFS (the “managed”...
  • Page 437: Architectural Overview

    updates made through the DFS interface being visible through the HPSS interface and vice versa. Filesets managed with this option are called mirrored filesets. Objects in mirrored filesets have corresponding entries in both DFS and HPSS with identical names and attributes. A user may access data through DFS, at standard DFS rates, or when high performance I/O rates are important, use the HPSS interface.
  • Page 438: Figure 7-1 Dfs/Hpss Xdsm Architecture

    Chapter 7 HPSS User Interface Configuration Figure 7-1 DFS/HPSS XDSM Architecture 7.6.2.4 XDSM Implementation for DFS The XDSM implementation supported by Transarc is called the DFS Storage Management Toolkit (DFS SMT). It is fully compliant with the corresponding standard XDSM specification. In addition, it provides optional features: persistent opaque Data Management (DM) attributes, persistent event masks, persistent managed regions, non-blocking lock upgrades and the ability to scan for objects with a particular DM attribute.
  • Page 439 The bulk of DFS SMT is implemented in the DFS file server, but there is also a user space shared library that implements all APIs in the XDSM specification. The kernel component maintains XDSM sessions, XDSM tokens, event queues, and the metadata which describes the events for which various file systems have registered.
  • Page 440 Chapter 7 HPSS User Interface Configuration To support persistent DM-related metadata, XFS utilizes its standard extended attribute facility. DM attributes, event masks, managed regions, and attribute change times (dtime values) are stored as extended attributes. These extended attributes are treated as file metadata. The xfsdump and xfsrestore utilities include extended attributes and migrated regions.
  • Page 441 7.6.2.6.2 DMAP Gateway Server The DMAP Gateway is a conduit and a translator between HDM and HPSS. HPSS servers use DCE/RPCs to communicate, however the DMAP Gateway encodes requests using XDR and sends these requests via sockets to HDM. In addition, it translates XDR from the HDM to DCE/TRPC/ Encina calls to the appropriate HPSS servers.
  • Page 442 Chapter 7 HPSS User Interface Configuration Meetings with Transarc and IBM Austin have taken place to discuss the issue. The above restriction may be fixed in the future. 7.6.2.7.2 Migration and Purge Algorithms Currently, the HDM reads through all the anodes in an aggregate to determine migration and purge candidates.
  • Page 443 7.6.2.7.4 Mirrored Fileset Recovery Speed Mirrored fileset recovery time must be considered when configuring the system. Mirrored fileset recovery can be tedious and slow. The DFS fileset is recovered using HPSS metadata, which requires many SFS accesses. The three time consuming steps when recovering mirrored filesets are: •...
  • Page 444: Configuration

    Chapter 7 HPSS User Interface Configuration 7.6.2.8.1 Migration/Purge Algorithms and the MPQueue Migration and purge are handled differently in an HPSS/XFS system than in an HPSS/DFS system. In an HPSS/XFS system, a queue of migration and purge candidates is kept in shared memory using the Migration/Purge Queue or MPQueue.
  • Page 445 For Linux systems, it is assumed that the system on which XFS is running has been configured with the appropriate kernel and XFS versions as given in Section 2.3.5.1: HPSS/XFS HDM Machine on page 52. For DFS HDMs, two additional steps must be performed before continuing to the configuration of the HDM: Configure DFS SMT Kernel Extensions (AIX) Configure DCE DFS...
  • Page 446 Chapter 7 HPSS User Interface Configuration Here is a sample user_cmd.tcl: #!/bin/ksh set pre_start_dfs set pre_start_dfs_fail_on_error $TRUE set pre_stop_dfs set post_stop_dfs The pre_start_dfs Korn shell script is invoked before trying to start DFS and export DFS files. The script ensures that the HDMs are all running. If there is any problem doing that, DFS will not be started, giving the system administrator a chance to fix the problem.
  • Page 447 echo " exit $status else echo " hdm$id is already running" done echo " all hdm servers are running" exit 0 An example of the pre_start_dfs for Solaris is as follows: #!/bin/ksh # Start the servers (two of them in this example): for id in 0 1;...
  • Page 448 Chapter 7 HPSS User Interface Configuration $HPSS_PATH_BIN/hdm_admin -k $key -s $id -v $var tcp \ done exit 0 The post_stop_dfs script is executed once it has completed detaching the DFS aggregates and shutting down DFS. The script stops any HDMs that are still running. Here is a sample for post_stop_dfs: #!/bin/ksh export HPSS_PATH_BIN=/opt/hpss/bin...
  • Page 449: Configuration

    The fifth file (filesys.dat) is automatically updated by HDM as new aggregates and filesets are created. Therefore, this file should not ordinarily be edited by the administrator. HDM cannot be started if this file is missing or does not contain correct information. Before starting HDM for the first time, a special version of filesys.dat must be created so that HDM will recognize that the file is correct.
  • Page 450 Chapter 7 HPSS User Interface Configuration The following paragraphs discuss each parameter found in the file. Except as noted, each parameter must be specified. HDM will not start if a mandatory parameter is omitted. The configuration parameters can be specified in any order. The keywords must be spelled correctly, using the specified upper and lower case letters.
  • Page 451: Table 7-4 Logrecordmask Keywords

    are run on one machine, but leads to the possibility that an aggregate will be overlooked and not kept properly synchronized. On the other hand, if "permissiveMount" is not specified, HDM will abort mount events for aggregates it does not manage. While this is safer, it cannot be used on a machine where multiple copies of HDM are running.
  • Page 452 Chapter 7 HPSS User Interface Configuration In normal operation, only alarm and event messages need to be enabled. Trace and debug messages should be enabled when it is necessary to track down the root cause of a problem. Logging too many different types of messages will impact HDM performance.
  • Page 453 MaxStages specifies the maximum number of data event processes that can concurrently stage files from HPSS to Episode. When this limit is reached, further transfers from HPSS are deferred until one of the stages completes. This value must be less than NumDataProcesses. A value in the range 1-3 is a good starting point.
  • Page 454 Chapter 7 HPSS User Interface Configuration When multiple HDM servers are to be run on the same machine, each HDM must have a unique SharedMemoryKey. HDM servers cannot share memory or logs without serious consequences. ZapLogName specifies the name of the file used for the zap log. Typically, this will be /var/hpss/hdm/hdm<id>/hdm_zap_log.
  • Page 455 /opt/dcelocal/var/dfs/aggrs/mirror1 /dev/mirror1 4 mirrored run wait \ partial new.mirror ? ? 0.295 ? 0 hpss.mirror /:/hpss/mirror /var/hpss/hdm/hdm1/aggr/mirror1 \ hpss.mirror 0.361 server.domain.ibm.com 7001 # HDM filesys.dat version 1 The following is an example HPSS/XFS filesys.dat: # HDM filesys.dat version 1 # Sample filesys.dat xfsbackup1 /mnt/xfsbackup1 ide0(3,74) archive/rename run wait 10000 \ 1079914467.1018724031 server.domain.ibm.com 7001...
  • Page 456 Chapter 7 HPSS User Interface Configuration Fsid specifies the file system id for this aggregate. The value is defined by dfstab. Option specifies how the filesets on the aggregate will be managed by HPSS. The parameter may be either archive/delete, archive/rename, or mirror. If mirror is selected, the name and data space will be mirrored by HPSS, and the end user can access the name and data space from either DFS or HPSS.
  • Page 457 Gateway specifies the fully qualified name of the host where the DMAP Gateway that will manage this fileset runs. To keep the example above short, Gateway is shown as tardis, but in practice, the name should be tardis.ca.sandia.gov. If the fileset is partially configured, the host name is represented by a ‘?’.
  • Page 458 Chapter 7 HPSS User Interface Configuration Block devices: 2 fd 3 ide0 8 sd 22 ide1 65 sd 66 sd The block device that matches our major number (3) is ide0. Now put this information together to form the media descriptor. Since the format is <adapter>(<major>,<minor>), for our example the media descriptor is ide0(3,71).
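Forming the media descriptor is mechanical once the major number is matched in the block device list. A sketch, assuming a POSIX shell; the lookup function is ours and covers only the devices shown above:

```shell
# Map a block-device major number to its adapter name, using the
# block device table above (2 fd, 3 ide0, 8 sd, 22 ide1).
adapter_for_major() {
    case "$1" in
        2)  echo fd   ;;
        3)  echo ide0 ;;
        8)  echo sd   ;;
        22) echo ide1 ;;
        *)  echo unknown ;;
    esac
}

# Major 3, minor 71 from the example; the descriptor format is
# <adapter>(<major>,<minor>).
major=3 minor=71
desc="$(adapter_for_major $major)($major,$minor)"
echo "$desc"
```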
  • Page 459 When multiple HDM Servers and DMAP Gateways are running, they must use different TCP ports. 7.6.3.3.3 gateways.dat File The gateway configuration file, gateways.dat, is a text file identifying DMAP gateways that will communicate with HDM. The file must be located in the same directory as config.dat, typically /var/hpss/hdm/hdm<id>.
  • Page 460 Chapter 7 HPSS User Interface Configuration The file consists of a number of sections, where each section defines a migration or purge policy. Each section begins with a line that identifies the type of policy being defined (a migration or purge policy) and gives it a name.
  • Page 461 LastAccessTimeBeforePurge specifies the number of seconds that must elapse after a file is accessed before the file becomes eligible for purging. PurgeDelayTime is the time, in seconds, that the purge process waits between passes in which it looks for files to purge. If this time is set to zero, HDM waits an infinite amount of time, meaning the purge process waits for a signal before looking for files to purge.
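The purge eligibility rule implied by LastAccessTimeBeforePurge reduces to comparing elapsed time against the threshold. A sketch, assuming a POSIX shell (function name and sample timestamps are ours):

```shell
# A file is a purge candidate once more than THRESHOLD seconds
# have elapsed since its last access.
purge_eligible() {  # purge_eligible NOW ATIME THRESHOLD
    [ $(( $1 - $2 )) -gt "$3" ]
}

now=100000
threshold=3600          # one hour, for illustration only

if purge_eligible "$now" 90000 "$threshold"; then
    echo "file idle 10000s: eligible"
fi
if ! purge_eligible "$now" 99000 "$threshold"; then
    echo "file idle 1000s: not yet eligible"
fi
```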
  • Page 462 Chapter 7 HPSS User Interface Configuration KeyTabFile specifies the name of a UNIX file containing a copy of the DCE key for the HDM Security Server component. The file must exist and must contain an entry for the given Principal. ObjectID specifies the DCE object UUID for an HDM.
  • Page 463: Chapter 8 Initial Startup And Verification

    Chapter 8 Initial Startup and Verification 8.1 Overview This chapter provides instructions for starting up the HPSS servers, performing post-startup configuration, and verifying that the system is configured as desired. Briefly, here are the steps involved: 1. Start up the HPSS servers (Section 8.2: Starting the HPSS Servers (page 463)) 2.
  • Page 464 Chapter 8 Initial Startup and Verification SSM can be used to start up the following types of HPSS server: • Bitfile Server • DMAP Gateway • Gatekeeper Server • Location Server • Log Client • Log Daemon • Metadata Monitor •...
  • Page 465: Unlocking The Pvl Drives

    8.3 Unlocking the PVL Drives As a default, all newly configured drives are locked. They must be unlocked before the PVL can use them. Refer to Section Section 5.5.1: Unlocking a Drive on page 102 of the HPSS Management Guide for more information.
  • Page 466: Creating Hpss Directories

    Chapter 8 Initial Startup and Verification 8.8 Creating HPSS directories If Log Archiving is enabled, use an HPSS namespace tool such as scrub or pftp to create the /log directory in HPSS. This directory must be owned by hpss_log and have permissions rwxr-xr-x. 8.9 Verifying HPSS Configuration After HPSS is up and running, the administrator should use the following checklist to verify that HPSS was configured correctly:...
  • Page 467: Storage Classes

    • For tape devices, verify that the “Locate Support” option is enabled (unless there are unusual circumstances why this functionality is not or cannot be supported). • For tape devices, verify that the “NO-DELAY” option is enabled (unless there are unusual circumstances why this functionality is not or cannot be supported).
  • Page 468: File Families, Filesets, And Junctions

    Chapter 8 Initial Startup and Verification 8.9.8 File Families, Filesets, and Junctions • Verify that file families and filesets are created according to the site’s requirement. • Verify that each fileset is associated with the appropriate file family and/or COS. •...
  • Page 469: Performance

    8.9.11 Performance Measure data transfer rates in each COS for: • Client writes to disk • Migration from disk to tape • Staging from tape to disk • Client reads from disk Transfer rates should be as fast as the underlying hardware. The actual hardware speeds can be obtained from their specification and by testing directly from the operating system.
  • Page 470 Chapter 8 Initial Startup and Verification
  • Page 471: Appendix A Glossary Of Terms And Acronyms

    Appendix A Glossary of Terms and Acronyms ACSLS ADIC accounting aggregate alarm ANSI Archive archived fileset Automatic Media Library Client Interface Access Control List Automated Cartridge System Library Software (Storage Technology Corporation) Advanced Digital Information Corporation A log record message type used to log information to be used by the HPSS Accounting process.
  • Page 472 Appendix A Glossary of Terms and Acronyms attribute attribute change audit (security) Bar code bitfile bitfile segment Bitfile Server BMUX bytes between tape marks cartridge central log class Class of Service cluster When referring to a managed object, an attribute is one discrete piece of information, or set of related information, within that object.
  • Page 473 configuration configuration file daemon Data Server debug delog deregistration descriptive name device DFS/HPSS fileset directory dismount DMAP Gateway DMAPI HPSS Installation Guide Release 4.5, Revision 2 Appendix A a virtual volume is a cluster. The process of initializing or modifying various parameters affecting the behavior of an HPSS server or infrastructure service.
  • Page 474 Appendix A Glossary of Terms and Acronyms DMLFS drive Encina ESCON event export FDDI file file family file server fileset fileset id fileset name file system Shorthand for DMAP Gateway. A DCE Local File System that has been modified to support XDSM Data Management APIs.
  • Page 475 file system id Gatekeeper Server Gatekeeping Service Gatekeeping Site Interface Gatekeeping Site Policy GECOS global mount point HACMP halt hierarchy HIMF HiPPI HPSS A 32-bit number that uniquely identifies an aggregate. File Transfer Protocol An HPSS server that provides two main services: the ability to schedule the use of HPSS resources referred to as the Gatekeeping Service, and the...
  • Page 476 Appendix A Glossary of Terms and Acronyms HPSS-only fileset HPSS/DMAP IEEE IETF Imex import IOD/IOR IRIX junction LANL LARC latency An HPSS fileset that has no counterpart in DFS. A Data Management Application that monitors DFS or XFS activity in order to keep DFS or XFS synchronized with HPSS.
  • Page 477 LLNL LMCP local log local mount point location server Log Client Log Daemon log record log record type logging service managed object metadata Library Control Unit A DCE Local File System, which is a high performance log-based file system that supports the use of access control lists and multiple filesets within a single aggregate.
  • Page 478 Appendix A Glossary of Terms and Acronyms Metadata Manager Metadata Monitor method migrate Migration/Purge Server mirrored fileset MMON mount mount point Mount Daemon Mover MSSRM NASA Name Server The subsystem/component within HPSS responsible for the physical storage and management of HPSS metadata as well as the transactional mechanisms for manipulating HPSS meta data.
  • Page 479 name space NDCG NDAPI NERSC Network File System NFS Daemon Non-DCE Client Gateway Non-DCE Client Application Program Interface notification object OS/2 PFTP verification and provides the Portable Operating System Interface (POSIX). The set of name-object pairs managed by the HPSS Name Server.
  • Page 480 Appendix A Glossary of Terms and Acronyms physical volume Physical Volume Library Physical Volume Repository PIOFS POSIX purge purge lock RAID RAIT reclaim registration reinitialization repack request An HPSS object managed jointly by the Storage Server and the Physical Volume Library that represents the portion of a cartridge that can be contiguously accessed when mounted.
  • Page 481 RISC RMI registry Sammi SCSI security shelf tape shutdown sink SMIT SOID source Reduced Instruction Set Computer/Cycles Remote Method Invocation; the Java form of remote procedure call The service with which Java programs register themselves to run remote methods and by which they find the locations of other Java programs which offer remote methods.
  • Page 482 Appendix A Glossary of Terms and Acronyms SSM session stage start-up status storage class storage hierarchy storage level storage map storage segment Storage Server Storage Subsystem Storage System Management (SSM) Storage System Management The environment in which an SSM user interacts with SSM to monitor and control HPSS through the SSM windows.
  • Page 483 stripe length stripe width System Manager TCP/IP trace transaction UUID virtual volume virtual volume block size between the System Manager and the GUI, and (3) the GUI itself, which includes the Sammi Runtime Environment and the set of SSM windows. The number of bytes that must be written to span all the physical storage media (physical volumes) that are grouped together to form the logical storage media (virtual volume).
  • Page 484 Appendix A Glossary of Terms and Acronyms XDSM of a striped virtual volume before switching to the next physical volume. Virtual Volume Cross Cell Trust The Open Group’s Data Storage Management standard. It defines APIs that use events to notify Data Management applications about operations on files.
  • Page 485: Appendix B References

    Appendix B References 1. 3580 Ultrium Tape Drive Setup, Operator and Service Guide GA32-0415-00 2. 3584 UltraScalable Tape Library Planning and Operator Guide GA32-0408-01 3. 3584 UltraScalable Tape Library SCSI Reference WB1108-00 4. AIX Performance Tuning Guide 5. Data Storage Management (XDSM) API, ISBN 1-85912-190-X 6.
  • Page 486: Appendix B References

    33. IBM Ultrium Device Drivers Installation and User’s Guide GA32-0430-00.1 34. IBM Ultrium Device Drivers Programming Reference WB1304-01 35. IBM WebSphere Application Server TXSeries for Solaris Version 3.0: Planning and Installation Guide (Configuring Encina). 36. Installing, Managing, and Using the IBM AIX Parallel I/O File System, SH34-6065-02 37.
  • Page 487 46. Solaris 5.8 11/99 Sun Hardware Platform Guide 47. Solaris System Administration Guide, Volume I 48. Solaris System Administration Guide, Volume II 49. STK Automated Cartridge System Library Software (ACSLS) System Administrator's Guide, PN 16716 50. STK Automated Cartridge System Library Software Programmer’s Guide, PN 16718 51.
  • Page 488 Appendix B References
  • Page 489: Appendix C Developer Acknowledgments

    HPSS development was performed jointly by IBM Worldwide Government Industry, Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, NASA Langley Research Center, Oak Ridge National Laboratory, and Sandia National Laboratories.
  • Page 490 Appendix C Developer Acknowledgments
  • Page 491: Appendix D Accounting Examples

    Appendix D Accounting Examples D.1 Introduction This appendix describes how to set up the gathering of accounting data at a customer site. The accounting data is used by the customer to calculate charges for the use of HPSS resources. The accounting data represents a blurred snapshot of the system storage usage as it existed during the accounting run.
  • Page 492 Appendix D Accounting Examples about the storage used by a particular HPSS Account Index (AcctId) in a particular Class Of Service (COS): • The total number of file accesses (#Accesses) to files owned by the Account Index in the Class Of Service. In general, file accesses are counted against the account of the user accessing the file, not the owner of the file itself.
  • Page 493: Site Accounting Table

    Sites may wish to write a module that will redirect the accounting data into a local accounting data base. This module would replace the default HPSS module, acct_WriteReport(), which writes out the HPSS accounting data to a flat text file. Where should the accounting data be stored? The HPSS accounting file and a copy of the current Account Map should be named with the date and time and stored for future reference.
  • Page 494: Maintaining And/Or Modifying The Account Map

    Appendix D Accounting Examples 3152 45(DDI) 25(DND) 30(CBC) 5674 100(DDI) Note: The Account Apportionment Table and Account Maps can be created by the individual sites. They are not created or maintained by HPSS. Some sites may wish to add more information, such as department and text name, or include less information, such as only the UID.
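For the apportioned account 3152 above, a site-written report generator would simply split each charge 45/25/30. A sketch of that split, assuming a POSIX shell and integer arithmetic (the function name and charge amount are ours):

```shell
# portion TOTAL PERCENT -> that percentage of the total charge.
portion() {
    echo $(( $1 * $2 / 100 ))
}

charge=1000     # total charge for account index 3152, for illustration
ddi=$(portion "$charge" 45)
dnd=$(portion "$charge" 25)
cbc=$(portion "$charge" 30)
echo "DDI=$ddi DND=$dnd CBC=$cbc"
```

Because the percentages sum to 100, the three portions add back up to the original charge (for totals divisible evenly; a real generator would decide how to handle rounding).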
  • Page 495: Accounting Intervals And Charges

    What kind of reports will be needed for your site? Learning what kind of accounting reports your site will need to generate will help you determine how detailed the collected accounting information should be. A typical Account Map will allow reports to be generated for the following: •...
  • Page 497: Appendix E Infrastructure Configuration Example

    Perform HPSS Infrastructure Configuration: ========================================== <mkhpss> Status ==> Platform: AIX <mkhpss> Status ==> Host Name: host.clearlake.ibm.com <mkhpss> Status ==> Start Time: Fri Aug 10 11:36:58 CDT 2001 <mkhpss> Set up HPSS Special Directories and Links ========================================= on Fri Aug 10 11:36:58 CDT 2001 <mkhpss>...
  • Page 498 <mkhpss> Perform DCE Register <hpss_dce_register> Status => Running /opt/hpss/config/hpss_dce_register on host.clearlake.ibm.com by root on Fri Aug 10 11:41:51 CDT 2001 <hpss_dce_register> Prompt => Password for cell_admin: <hpss_dce_register> Status => Create the groups, principals and accounts <hpss_dce_register> Status => Create HPSS Server CDS directories <hpss_dce_register>...
  • Page 499 Check the network and DCE on both sides Found: 1 Trusted Cell Members TrustedCells[0].cell_id = b0e840f4-2206-11d5-9453-0004ac498ce4 TrustedCells[0].uid = 101 TrustedCells[0].cell_name = /.../host_cell.clearlake.ibm.com TrustedCells[0].hpss_cell_id = 200090 /.../host_cell.clearlake.ibm.com successfully opened Completed Cross Cell Trust Check <mkhpss> Status ==> Configure HPSS with DCE completed, continue...
  • Page 500 <mkhpss> Prompt ==> Reenter password for encina/sfs/hpss: <mkhpss> Status ==> Creating account for encina/sfs/hpss ... <mkhpss> Status ==> Destroying credentials ... <mkhpss> Status ==> Creating keytab file /.../host_cell.clearlake.ibm.com/hosts/host/ config/keytab/hpss ... <mkhpss> Status ==> Randomizing encina/sfs/hpss to keytab file ... <mkhpss> Prompt ==> Password for cell_admin: <mkhpss>...
  • Page 501 LPs: STALE PPs: INTER-POLICY: maximum INTRA-POLICY: middle MOUNT POINT: MIRROR WRITE CONSISTENCY: on EACH LP COPY ON A SEPARATE PV ?: yes <mkhpss> Prompt ==> Logical volume exists - use it, redefine it, or quit (u/d/q)(u): <mkhpss> Prompt ==> SFS log volume name to use (logVol): <mkhpss>...
  • Page 502 Appendix E Infrastructure Configuration Example <mkhpss> Status ==> Adding {user encina_admin ACQ} ... <mkhpss> Status ==> Adding {user hosts/host/self ACQ} ... <mkhpss> Status ==> Clearing exclusive authority ... <mkhpss> Status ==> Stopping server ... <mkhpss> Status ==> Destroying credentials ... <mkhpss>...
  • Page 503 ===================== <mkhpss> Status ==> DCE is running, continue... <mkhpss> Prompt ==> Password for encina_admin: Reading /opt/hpss/config/hpss_env Enter CDS name of SFS to work with Querying SFS server /.:/encina/sfs/hpss Root SFS server: /.:/encina/sfs/hpss All SFS servers: /.:/encina/sfs/hpss Currently selected SFS server: /.:/encina/sfs/hpss Currently selected data volume: dataVol Filename extension: <none>...
  • Page 504 Appendix E Infrastructure Configuration Example All SFS servers: /.:/encina/sfs/hpss Currently selected SFS server: /.:/encina/sfs/hpss Currently selected data volume: dataVol Filename extension: <none> Create a file (on current SFS) Create all global files (on root SFS) Create all files for a storage subsystem (on current SFS) Empty a file (on current SFS) Destroy a file (on current SFS) Query a file (on current SFS)
  • Page 505 GK Configurations HPSS Global Configuration Storage Hierarchies Non-DCE Gateway Configurations NS Configurations NS Global Filesets Log Client Configurations Log Daemon Configurations Logging Policies Location Server Policies Metadata Monitor Configurations Migration/Purge Configurations Subsystem Storage Class Thresho Reassign files between data volumes Show mapping of file names to volumes Create the global files as they are allocated Select operation (<RETURN>...
  • Page 506 Appendix E Infrastructure Configuration Example dataVol: Account Log Records Migration Records Purge Records Bitfiles Bitfile COS Changes Bitfile Tape Segments Bitfile Disk Segments Bitfile Disk Allocation Maps BFS Storage Segment Checkpoints BFS Storage Segment Unlinks NS ACL Extensions NS Fileset Attrs Choose a different subsystem number Reassign files between data volumes Show mapping of file names to volumes...
  • Page 507 ================================= <mkhpss> Prompt ==> Select Infrastructure Configuration Option: <mkhpss> [1] Configure HPSS with DCE <mkhpss> [2] Configure SFS Server <mkhpss> [3] Create and Manage SFS Files <mkhpss> [4] Set Up FTP Daemon <mkhpss> [5] Set Up Startup Daemon <mkhpss> [6] Add SSM Administrative User <mkhpss>...
  • Page 508 Appendix E Infrastructure Configuration Example <mkhpss> Perform HPSS Startup Daemon Setup ================================= <mkhpss> Status ==> HPSS Startup Daemon will be invoked at system restart <mkhpss> Infrastructure Configuration Menu ================================= <mkhpss> Prompt ==> Select Infrastructure Configuration Option: <mkhpss> [1] Configure HPSS with DCE <mkhpss>...
  • Page 509: Aix Infrastructure Un-Configuration Example

    <mkhpss> Infrastructure Configuration Menu ================================= <mkhpss> Prompt ==> Select Infrastructure Configuration Option: <mkhpss> [1] Configure HPSS with DCE <mkhpss> [2] Configure SFS Server <mkhpss> [3] Create and Manage SFS Files <mkhpss> [4] Set Up FTP Daemon <mkhpss> [5] Set Up Startup Daemon <mkhpss>...
  • Page 510 <mkhpss> Status ==> Remove encina/sfs/hpss from encina_admin_group <mkhpss> Status ==> Remove encina/sfs/hpss from encina_servers_group <mkhpss> Status ==> Remove encina/sfs/hpss from organization none <mkhpss> Status ==> Delete keytab file /.../host_cell.clearlake.ibm.com/hosts/host.clearlake.ibm.com/config/keytab/hpss <mkhpss> Status ==> Delete principal encina/sfs/hpss <mkhpss> Status ==> Remove /opt/encinalocal/encina/sfs/hpss? (y/n)(y) Perform /opt/hpss/config/hpss_ftpd_unconfig...
  • Page 511: Solaris Infrastructure Configuration Example

    <mkhpss> Status ==> Remove /opt/encinamirror/encina/sfs/hpss? (y/n)(y) <mkhpss> Prompt ==> Remove /opt/encinalocal? (y/n)(y) <mkhpss> Prompt ==> Remove /opt/encinamirror? (y/n)(y) <mkhpss> Perform DCE Deregister <hpss_dce_deregister> Status => Remove /.:/hpss? (y/n)(y) <hpss_dce_deregister> Status => Removing keytab file, /krb5/hpss.keytabs. <hpss_dce_deregister> Status => Removing keytab file, /krb5/hpssclient.keytab. <hpss_dce_deregister>...
  • Page 512 <hpss_setup_and_check_cell> Status => DCE group, hpss_cross_cell_members, Empty <hpss_setup_and_check_cell> Status => The Local Cell has NO HPSS.CellId Extended Registry Attribute <hpss_setup_and_check_cell> Status => Please obtain your HPSS.CellId from IBM <hpss_setup_and_check_cell> Prompt => Enter Assigned HPSS.CellId:200040 <hpss_setup_and_check_cell> Status => Extended Registry Attribute, HPSS.homedir, NOT Defined...
  • Page 513 Check the network and DCE on both sides Found: 1 Trusted Cell Members TrustedCells[0].cell_id = b499b8ba-98d3-11d5-8d0d-c05e2fc1aa77 TrustedCells[0].uid = 101 TrustedCells[0].cell_name = /.../host_cell.clearlake.ibm.com TrustedCells[0].hpss_cell_id = 200040 /.../host_cell.clearlake.ibm.com successfully opened Completed Cross Cell Trust Check <mkhpss> Status ==> Configure HPSS with DCE completed, continue...
  • Page 514 <mkhpss> Status ==> Creating account for encina/sfs/hpss ... <mkhpss> Status ==> Destroying credentials ... <mkhpss> Status ==> Creating keytab file /.../host_cell.clearlake.ibm.com/hosts/host.clearlake.ibm.com/config/keytab/hpss ... <mkhpss> Status ==> Randomizing encina/sfs/hpss to keytab file ... <mkhpss> Prompt ==> Password for cell_admin: <mkhpss> Status ==> Creating cds directory /.:/encina ...
  • Page 515 Appendix E Infrastructure Configuration Example <mkhpss> Status ==> Init disk ... tkadmin init disk -server /.:/encina/sfs/hpss /dev/rdsk/c0t8d0s1 Initialized disk partition /dev/rdsk/c0t8d0s1 disk size (in pages): 128401 <mkhpss> Status ==> Create a physical volume of /dev/rdsk/c0t8d0s1 ... tkadmin create pvol logpvhpss 64 1 /dev/rdsk/c0t8d0s1 0 <mkhpss>...
  • Page 516 Appendix E Infrastructure Configuration Example <mkhpss> Status ==> Adding {user encina_admin ACQ} ... <mkhpss> Status ==> Adding {user hosts/host.clearlake.ibm.com/self ACQ} ... <mkhpss> Status ==> Clearing exclusive authority ... <mkhpss> Status ==> Stopping server ... <mkhpss> Status ==> Destroying credentials ...
  • Page 517 Enter CDS name of SFS to work with Querying SFS server /.:/encina/sfs/hpss Root SFS server: /.:/encina/sfs/hpss All SFS servers: /.:/encina/sfs/hpss Currently selected SFS server: /.:/encina/sfs/hpss Currently selected data volume: dataVol Filename extension: <none> Create a file (on current SFS) Create all global files (on root SFS) Create all files for a storage subsystem (on current SFS) Empty a file (on current SFS) Destroy a file (on current SFS)
  • Page 518 Appendix E Infrastructure Configuration Example Creating “serverconfig” New file serverconfig created Creating “accounting” New file accounting created Root SFS server: /.:/encina/sfs/hpss All SFS servers: /.:/encina/sfs/hpss Currently selected SFS server: /.:/encina/sfs/hpss Currently selected data volume: dataVol Filename extension: <none> Create a file (on current SFS) Create all global files (on root SFS) Create all files for a storage subsystem (on current SFS) Empty a file (on current SFS)
  • Page 519 New file bfmigrrec.1 created Root SFS server: /.:/encina/sfs/hpss All SFS servers: /.:/encina/sfs/hpss Currently selected SFS server: /.:/encina/sfs/hpss Currently selected data volume: dataVol Filename extension: <none> Create a file (on current SFS) Create all global files (on root SFS) Create all files for a storage subsystem (on current SFS) Empty a file (on current SFS) Destroy a file (on current SFS) Query a file (on current SFS)
  • Page 520 Appendix E Infrastructure Configuration Example data port = 4020) (y/n)(y) <mkhpss> Status ==> FTP Daemon setup completed, continue... <mkhpss> Infrastructure Configuration Menu ================================= <mkhpss> Prompt ==> Select Infrastructure Configuration Option: <mkhpss> [1] Configure HPSS with DCE <mkhpss> [2] Configure SFS Server <mkhpss>...
  • Page 521 DCE: User ‘hpss’ created SunOS: Adding UNIX User ‘hpss’ ... Enter Group [hpss]: SSM: Adding SSM User ‘hpss’ ... SSM: [ Hostname = host : Full hostname = host.clearlake.ibm.com ] Select SAMMI Security Level : 1. User 2. Privileged User 3.
  • Page 522: Solaris Infrastructure Un-Configuration Example

    Appendix E Infrastructure Configuration Example <mkhpss> Start SSM Session ================= <mkhpss> Verify User ID ============== <mkhpss> Status ==> User root; verified; continue... <mkhpss> Prompt ==> Do you wish to start SSM servers under user id root? <mkhpss> Reply ===> (Y) <mkhpss>...
  • Page 523 <mkhpss> Status ==> Remove encina/sfs/hpss from encina_servers_group <mkhpss> Status ==> Remove encina/sfs/hpss from organization none <mkhpss> Status ==> Delete keytab file /.../host_cell.clearlake.ibm.com/hosts/host.clearlake.ibm.com/config/keytab/hpss <mkhpss> Status ==> Delete principal encina/sfs/hpss <mkhpss> Status ==> Remove /opt/encinalocal/encina/sfs/hpss? (y/n)(y) <mkhpss> Status ==> Remove /opt/encinamirror/encina/sfs/hpss? (y/n)(y) <mkhpss>...
  • Page 525: Appendix F Additional Ssm Information

    Additional SSM Appendix F Information F.1 Using the SSM Windows Before using the SSM windows, it is helpful to be aware of some of the conventions used by SSM and by Sammi (on which SSM is based). While the following list does not cover all features of all windows, it does describe the most important points.
  • Page 526 Appendix F Additional SSM Information • Most non-enterable text fields have gray backgrounds slightly lighter than the window background, and no borders. Some multi-line fields have the same background color, but use borders to help set them off from their surroundings. Some special fields display a fixed set of text strings, and use different background colors for different strings.
  • Page 527: Ssm On-Line Help

    of the popup. For potentially longer option lists, a “selection list” is used. This type uses scrollbars, if necessary, to display all the option data, and it has a “Cancel” button at the bottom of the list. You must click the “Cancel” button to dismiss this type of popup. •...
  • Page 528: Customizing Ssm And Sammi

    Appendix F Additional SSM Information The About Sammi menu option opens a window which displays information about Sammi and about Kinesix, the Sammi developer. Among other items, it shows the current version of the Sammi runtime, the host operating system and operating system version, and the hostname where Sammi is running.
  • Page 529 when it starts, and hpss.def is used to override the settings in s2_defaults.def. Some of the user-preference features which can be set in a defaults file are: 1. Whether or not popup items on windows cause a beep tone. 2. The volume of the beep which is issued when the user tries to type beyond the end of a field.
  • Page 530 “.motifbind”. In practice, on AIX systems, the file is not required because the settings in .motifbind.ibm are the same as the defaults built into Motif. On other platforms, you may want to experiment to see if using one of these files solves any keyboard problems you may be having.
  • Page 531: Detailed Information On Setting Up An Ssm User

    preferences may be saved in disk files which SSM can automatically load each time the user logs into an SSM session. Preferences are saved in disk files in each user’s SSM work area, /opt/hpss/sammi/ssm_user/<user>, where <user> is an actual SSM username. While each SSM work area is owned by an SSM user, the SSM Data Server process is the entity which actually writes and reads the preferences files.
  • Page 532: Non-Standard Ssm Configurations

    Appendix F Additional SSM Information SSM work areas for the new user are created. These work areas are /opt/hpss/sammi/ssmuser/<user> (where <user> is the actual ID of the new user), and /opt/hpss/sammi/ssmuser/<user>/mapfiles. A template Sammi configuration file (ssm_console.dat) is copied to the new user’s work area and modified to be user-specific.
  • Page 533 Multiple SSM Sessions. The defaults assume that each user will run only one SSM session at a time. If one user must run multiple SSM sessions, the easiest way to configure this is to create multiple user names for that user with hpssuser. If you choose not to do this, you must create completely separate Sammi execution environments for each concurrent session that a user may want to run.
  • Page 535: Appendix G High Availability

    High Availability Appendix G G.1 Overview The High Availability (HA) feature of HPSS allows a properly configured HPSS system to automatically recover from a number of possible failures, with the goal of eliminating all single points of failure in the system. The same functionality can be used to minimize the impact of regularly scheduled maintenance and/or software upgrades.
  • Page 536: Architecture

    Appendix G High Availability High availability is not the same as fault tolerance. The failures above are “protected against” from the standpoint that the HA HPSS system will be able to return to an operational state without intervention when any one of the above failures occurs. There may certainly be some downtime, especially when the core server fails (crashes).
  • Page 537: Planning

    • Each node has two connections to the ethernet network. One is a “standby” that can take over the IP and hardware addresses of the primary adapter in case of failure. • There is an RS-232 serial cable connecting Node 1 and Node 2 to enable communication even in the event that the main network fails.
  • Page 538 Appendix G High Availability • Cluster Event Worksheet However, this is a large list of worksheets to go through, so they have been condensed down to cover only what is needed for an HA HPSS system. The following pages contain the condensed HA HPSS Planning Worksheet with suggested values.
  • Page 539 Network Name _ether1___ Network Attr Service Adapters: IP Label Function IP Address Network Name _ether1___ Network Attr HW Address Serial Networks: Network Name _serial1___ Network Type _RS232____ Node Names Serial Network Adapter Worksheet (node A): Slot Number Interface Name __________ Adapter Label __________ Network Name _serial1___ Serial Network Adapter Worksheet (node B):...
  • Page 540 Appendix G High Availability Adapter Logical Name __________, __________ Adapter Bus ID Shared Disk Bus IDs Shared SCSI-2 Differential or Differential Fast/Wide Disks Worksheet (bus2): Type of Bus Node Name Slot Number Adapter Logical Name __________, __________ Adapter Bus ID Shared Disk Bus IDs Non-Shared Volume Group Worksheet (Non-Concurrent Access) (hanode1): Volume Group Name...
  • Page 541 Volume Group Name Major Number Log LV Name (if any) Physical Volumes Volume Group Name Major Number Log LV Name (if any) Physical Volumes Volume Group Name Major Number Log LV Name (if any) Physical Volumes HPSS Resource Group: Resource Group Name Resource Group Type Application Server Name Start Command...
  • Page 542 Appendix G High Availability Cluster Name _HAHPSS_ Network: Name Type Attr Netmask Node names Boot and Standby Adapters (hanode1): IP Label Function IP Address Network Name _ether1___ Network Attr Boot and Standby Adapters (hanode2): IP Label Function IP Address Network Name _ether1___ Network Attr Service Adapters: IP Label...
  • Page 543 Network Name _serial1___ Network Type _RS232____ Node Names Serial Network Adapter Worksheet (node A): Slot Number Interface Name _/dev/tty1_ Adapter Label _ha1tty___ Network Name _serial1___ Serial Network Adapter Worksheet (node B): Slot Number Interface Name _/dev/tty1_ Adapter Label _ha2tty___ Network Name _serial1___ Shared SCSI-2 Differential or Differential Fast/Wide Disks Worksheet (bus1): Type of Bus Node Name...
  • Page 544 Appendix G High Availability Non-Shared Volume Group Worksheet (Non-Concurrent Access) (hanode1): Volume Group Name Physical Volumes Logical Volumes Mirrored? Non-Shared Volume Group Worksheet (Non-Concurrent Access) (hanode2): Volume Group Name Physical Volumes Logical Volumes Mirrored? Shared Volume Groups/Filesystems: Volume Group Name Major Number Log LV Name (if any) Physical Volumes...
  • Page 545: System Preparation

    HPSS Resource Group: Resource Group Name Resource Group Type Application Server Name Start Command Stop Command G.3 System Preparation G.3.1 Physically set up your system and install AIX The first step to prepare a system for HA HPSS is to perform the physical setup. This includes all the physical cabling for power, networks, disk devices, etc.
  • Page 546: Diagram The Disk Layout

    Appendix G High Availability % bootlist -m normal hdisk0 hdisk1 % shutdown -Fr Note that the last step reboots the machine. This is necessary to disable the normal quorum checking for rootvg. G.3.3 Diagram the Disk Layout One key aspect to setting up your disks, volume groups, logical volumes, and file systems properly is knowing which disks to use.
  • Page 547: Mirror Shared Jfs Logs

    • /opt/encinamirror • /usr/lpp/encina • /opt/hpss • /var/hpss • /usr/java130 • /usr/local/sammi Note that this lists default file system mount points only. Determine the appropriate file systems and mount points for your site before continuing. The sizing of these file systems is very important. To determine sizing for HPSS-related file systems, see Section 2.10.3: System Memory and Disk Space on page 132.
  • Page 548: Initial Install And Configuration For Hacmp

    Appendix G High Availability G.4 Initial Install and Configuration for HACMP G.4.1 Install HACMP Install the following file sets from the HACMP 4.4 installation media on each node: cluster.base cluster.cspoc cluster.doc.en_US cluster.man.en_US cluster.taskguides cluster.vsm G.4.2 Setup the AIX Environment for HACMP 1.
  • Page 549: Initial Hacmp Configuration

    Note that you do not configure your service adapter. This is because HACMP will change your boot adapter into your service adapter when it brings up HPSS. Therefore, without HACMP running, you configure the adapter with its boot name and address. 6.
  • Page 550: Figure H-2 Adding A Cluster Definition

    Appendix G High Availability G.4.3.2 Define the two nodes in the cluster Now tell HACMP the names of the nodes in the cluster. These names do not have to be the same as the hostnames or adapter names of the nodes, since the names are used internally by HACMP. % smitty hacmp Cluster Configuration ->...
  • Page 551: Figure H-3 Adding Cluster Nodes

    G.4.3.3 Define the networks The SMIT path for configuring each network (ethernet and RS232) is the same: % smitty hacmp Cluster Configuration -> Cluster Topology -> Configure Networks At this point, you will be offered two options: IP-based Network and Non IP-based Network.
  • Page 552: Figure H-4 Adding An Ip-Based Network

    Appendix G High Availability Figure H-4 Adding an IP-based Network Choose Non IP-based Network to configure your RS232 network: Figure H-5 Adding a Non IP-based Network
  • Page 553 G.4.3.4 Define the network adapters When defining network adapters, there are some slight differences between the values supplied for service, boot, standby, and serial adapters: • Boot, standby, and serial adapters are tied to particular nodes, while service adapters are not.
  • Page 554: Figure H-6 Adding An Ethernet Boot Adapter

    Appendix G High Availability Figure H-6 Adding an Ethernet Boot Adapter Figure H-7 Adding an Ethernet Standby Adapter
  • Page 555: Figure H-8 Adding An Ethernet Service Adapter

    Appendix G High Availability Figure H-8 Adding an Ethernet Service Adapter Figure H-9 Adding a Serial Adapter
  • Page 556: Figure H-10 Adding A Resource Group

    Appendix G High Availability G.4.3.5 Synchronize the Cluster Topology By this point in the configuration, you should have given HACMP all the information it needs about the topology of your networks. However, only one of the nodes has this configuration information, and it needs to be on both nodes.
  • Page 557: Figure H-11 Configuring A Resource Group

    -> Cluster Resources -> Change/Show Resources/Attributes for a Resource Group There are only three fields that need to be filled in on the SMIT screen at this time: Service IP Label, Filesystems, and Volume Groups. Special care should be taken to ensure that all shared file systems and volume groups are listed (it’s usually easy if you use the F4 pick lists): Figure H-11 Configuring a Resource Group G.4.3.8...
  • Page 558: Configure Dce, Sfs, And Hpss

    One thing that will need to be done is to copy the /etc/rc.hpss file to both nodes. By default, it will only be copied to the /etc directory of the node that is active when HPSS is being configured. G.5 Configure HA HPSS G.5.1 HA HPSS Scripts http://www4.clearlake.ibm.com/hpss/about/
  • Page 559 HACMP is able to control HPSS by using a set of scripts that are included in the HPSS installation under $HPSS_ROOT/tools/ha (by default, /opt/hpss/tools/ha): hpss_environment hpss_start.ksh hpss_stop.ksh hpss_sync.ksh hpss_verify.ksh hpss_snapshot.ksh hpss_notify.ksh hpss_aix_error.ksh hpss_cluster_notify.ksh These scripts need to be stored locally on each node’s internal disks, not on shared storage. They will also have to be customized to operate within any particular HA HPSS environment.
  • Page 560 Appendix G High Availability 6. Create /var/hahpss (or corresponding directory) on node 2. Synchronize the scripts using hpss_sync.ksh. If your /.rhosts files are set up properly, run the following from the script directory: % ./hpss_sync.ksh <other node’s standby> where <other node’s standby> is the standby address of node 2. This will copy the HA HPSS scripts from the current node (node 1) to the specified node (node 2).
  • Page 561: Finish The Hacmp Configuration

    4. Remove lines that meet any of the following criteria: the executable name doesn’t begin with hpss; the executable name is hpssd, hpss_ssmds, or hpss_ssmsm; the process is a Non-DCE Client Gateway subprocess, hpss_ndcg_*; the process is a Mover subprocess, hpss_mvr_*. 5. From the beginning of each line, replace all the text preceding the executable name with “./”.
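The filtering in steps 4 and 5 can be automated. The sketch below assumes a ps-style listing whose last field is the full path of the executable (a simplifying assumption about the input format, with invented sample data); it drops the lines named above and rewrites the survivors to begin with “./”.

```shell
#!/bin/sh
# Sample process listing; the layout (pid, cpu, path) is illustrative.
cat > /tmp/proc_list.txt <<'EOF'
12345 0.0 /opt/hpss/bin/hpss_bfs
12346 0.1 /opt/hpss/bin/hpssd
12347 0.0 /opt/hpss/bin/hpss_mvr_tcp
12348 0.2 /opt/hpss/bin/hpss_logd
12349 0.0 /usr/bin/ksh
EOF

# Apply the criteria from steps 4-5: keep only hpss* executables, minus
# hpssd, hpss_ssmds, hpss_ssmsm, and hpss_ndcg_*/hpss_mvr_* subprocesses,
# then replace everything before the executable name with "./".
kept=$(awk '{
    n = split($NF, parts, "/"); exe = parts[n]
    if (exe !~ /^hpss/) next
    if (exe == "hpssd" || exe == "hpss_ssmds" || exe == "hpss_ssmsm") next
    if (exe ~ /^hpss_ndcg_/ || exe ~ /^hpss_mvr_/) next
    print "./" exe
}' /tmp/proc_list.txt)
echo "$kept"
```

For the sample input, only ./hpss_bfs and ./hpss_logd survive the filter.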
  • Page 562: Figure H-12 Adding An Application Server

    Appendix G High Availability Figure H-12 Adding an Application Server G.5.2.2 Attach the Application Server to the Resource Group % smitty hacmp Cluster Configuration -> Cluster Resources -> Change/Show Resources/Attributes for a Resource Group There are only four fields that need to be filled in on this SMIT screen: “Service IP Label”, “Filesystems”, “Volume Groups”, and ”Application Servers”.
  • Page 563: Figure H-13 Adding An Application Server To A Resource Group

    Figure H-13 Adding an Application Server to a Resource Group G.5.2.3 Bring Down the Cluster Now that the cluster is back in sync and knows how to run the hpss_stop.ksh script, it’s time to bring down the cluster. However, HACMP doesn’t yet know how to stop HPSS, SFS, and DCE (it won’t know that until the cluster is synchronized in the next step), so you’ll need to shut HPSS, SFS, and DCE down manually.
  • Page 564: Define Ha Hpss Verification Method

    Appendix G High Availability G.5.2.5 Bring Up the Cluster (just to test that it works) Now bring the cluster back up using the instructions in Section G.10.1: Startup the Cluster on page 570. When the cluster is active again, HPSS will be fully operational. It will be able to service requests immediately, and all an administrator needs to do is start an SSM session to begin administering the system.
  • Page 565: Setup Error Notification

    G.5.4 Setup Error Notification Even though an HA HPSS system is designed to recover from failures, the recovered system is often unable to handle subsequent failures. For this reason, it is important that administrators know immediately when a component fails so that it can be replaced or fixed quickly in order to get the HA HPSS system back to a highly available state.
  • Page 566: Figure H-15 Configuring Aix Error Notification

    Appendix G High Availability Figure H-15 Configuring AIX Error Notification G.5.4.2 HACMP Notify Events When some failures occur, they generate events in HACMP. These events can be configured to cause a notification to be sent before and after the event occurs using the hpss_cluster_notify.ksh script.
  • Page 567: Crontab Considerations

    Figure H-16 Configuring Cluster Event Notification Fill in the Notify Command field using the following syntax: hpss_cluster_notify.ksh There are no arguments to pass. The events that you should consider setting up this way include: fail_standby join_standby network_down network_up node_down node_up node_up_complete swap_adapter It may be necessary to set up these event notifications on each node independently.
  • Page 568: Monitoring And Maintenance

    Appendix G High Availability The answer to this problem will vary from site to site, but one good way to make this work is to have a set of intermediate scripts between the crontab file and the commands it executes. These scripts could test for the existence of any prerequisite files and/or file systems and only execute the associated command if all the prerequisites are met.
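One possible shape for such an intermediate script is sketched below. The prerequisite path and the wrapped command are placeholders, since the real prerequisites depend on which shared file systems your cron jobs touch.

```shell
#!/bin/sh
# Intermediate crontab wrapper (sketch): run the real command only when
# its prerequisite file or file system is present on this node.
run_if_present() {
    prereq="$1"; shift
    if [ -e "$prereq" ]; then
        "$@"                                   # prerequisite met: run it
    else
        echo "skipped: prerequisite $prereq missing" >&2
    fi
}

# Placeholder usage: the path and the echoed command are illustrative only.
result=$(run_if_present /tmp echo "maintenance job ran")
echo "$result"
```

A crontab entry on both nodes would then invoke the wrapper rather than the command directly, so the job silently no-ops on the node that does not currently own the shared volume group.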
  • Page 569: Metadata Backup Considerations

    G.7 Metadata Backup Considerations When configuring your HA HPSS system to work with the sfs_backup_util, take special care to keep the source, configuration, and backup (LA and TRB) files stored on the shared disks. Otherwise a failover could easily make backup files or the sfs_backup_util program itself unreachable.
  • Page 570: Important Information

    Appendix G High Availability After setting up ssh and scp for your cluster, follow these steps to configure HA HPSS to use them: 1. Edit the /var/hahpss/hpss_environment file on either of the cluster nodes, and set the following environment variables: HPSS_REMOTE_SHELL=ssh HPSS_REMOTE_COPY=scp Now synchronize this change to the other node:...
  • Page 571: Shutdown The Cluster

    Alternatively, it is possible to use a slightly different SMIT path to start the Cluster Manager on the local node. Of course, this requires logging into each node independently to activate both Cluster Managers. % smitty hacmp Cluster Services -> Start Cluster Services Take the defaults and press <Enter>.
  • Page 572: Verify The Cluster

    Appendix G High Availability % smitty hacmp Cluster Services -> Stop Cluster Services G.10.3 Verify the Cluster Once the HA HPSS verification method has been defined in HACMP (Section G.5.3: Define HA HPSS Verification Method on page 564), go to SMIT to verify your cluster: % smitty hacmp Cluster Configuration ->...
  • Page 573 G.10.4.1 Synchronize Topology In order to synchronize topology changes to the cluster, go to SMIT: % smitty hacmp Cluster Configuration -> Cluster Topology -> Synchronize Cluster Topology This will take you to the “Synchronize Cluster Topology” SMIT window. Accept the defaults by pressing <Enter>, and your topology should synchronize successfully.
  • Page 574: Move A Resource Group

    Appendix G High Availability hpss_start_list hpss_stop.ksh hpss_sync.ksh hpss_verify.ksh Update complete G.10.5 Move a Resource Group It is often useful to have HACMP move a resource group to another node in the cluster. This will result in a short period of downtime as HA HPSS is shut down on the active node and brought up on the standby node, but it is a convenient way to free up a node for maintenance.
  • Page 575: Index

    Access Control List (ACL) access control list extensions, 112 Account Apportionment Table, 493 Account Index, 493 Account Map site style, 493 UNIX style, 493, 494 Accounting account apportionment table, 493 account index, 493 Account Map, 493 accounting policy configuration, 289 accounting policy, 36 accounting reports, 495 charging policy, 44...
  • Page 576 vendor software requirements, 392 API, see Client Application Program Interface Application Program Interface (API), see Client Application Program Interface Audit, see Security Authentication, see Security Authorization, see Security Automated Cartridge System Library Software (ACSLS), 390 BFS, see Bitfile Server Bitfile Server, 22, 26 BFS metadata, 112 BFS storage segment checkpoint, 114 BFS storage segment unlinks, 114...
  • Page 577 Client Application Program Interface (API), 29, 34 interface considerations, 58 performance considerations, 139 security policy, 87 Client Platforms Supported, 37 Configuration Planning, 39 Configuration, also see Creating Configurations configuring HPSS infrastructure on installation node, 240 configuring HPSS with DCE, 242 defining HPSS environment variables, 216 HPSS configuration limits, 250 HPSS configuration performance considerations, 138...
  • Page 578 storage hierarchy, 315 storage policy, 279 storage server specific, 394 storage space, 465 Data Server, 29, 79, 110, 120 configuration 445 Configuring on AIX 445 HDM Server 448 DCE, see Distributed Computing Environment Delog, 77 Devices/Drives creating the device/drive configuration, 401 disk devices, 57 tape robots 54 archived filesets 436...
  • Page 579
    transaction log, 131
    See also Structured File Server (SFS)
    Environment variables, 216
        Client API environment, 413
    File Family, 22
    File Transfer Protocol (FTP), 32, 33
        FTP Daemon configuration, 422
        interface considerations, 59
        performance considerations, 138
        security policy, 88
        set up FTP Daemon, 244
    Files
        duplicate file policy, 44
    Filesets...
  • Page 580
    105–135
    static configuration variables, 125
    storage characteristics configuration, 305
    HIPPI, see High Performance Parallel Interface
    HPSS, see High Performance Storage System
    IBM 3494/3495 PVR, 55, 389
        cartridge import and export, 389
        configuration requirements, 388
        server considerations, 72
        vendor information, 389...
  • Page 581
    PFTP, 59, 138
    Latency, 104
    Location Policy, 36
    Log Client, 33, 77
        configuration metadata, 110
        creating the Log Client specific configuration, 333
    Log Daemon, 33, 77
        configuration metadata, 110
        creating the Log Daemon specific configuration, 336
        log file archival, 337
    Logging, 33
        creating the logging policy configuration, 293, 301
        logging policy, 36, 119...
  • Page 582
    NFS and Mount Daemons, 120
    PVL, 117
    PVR, 118
    server configuration metadata, 109
    sizing assumptions, 121
    sizing computations, 126
    sizing spreadsheet, 121
    tape storage server, 116
    Migration
        migration policy, 35
    Migration/Purge Server (MPS), 27
        checkpoints, 118
        configuration metadata, 110
        creating the MPS specific configuration, 341
        metadata, 118
        server considerations, 65
    MM, see Metadata Manager...
  • Page 583
    creating the NFS Daemon specific configuration, 357
    memory and disk space requirements, 134
    metadata, 120
    NFS Daemon configuration, 431
    NFS Mount Daemon, 77
        configuration metadata, 111
        metadata, 120
    NFS, 33
    NFS Server, see NFS Daemon
    NFS, see Network File System
    Non-DCE Client Application Program Interface (Non-DCE Client API)
        interface considerations, 58
    Non-DCE Client Gateway Configurations,...
  • Page 584
    Physical Volume Repository (PVR), 28, 71
        AML PVR information, 72
        cartridges, 118
        configuration metadata, 111
        creating the PVR specific configuration, 373
        IBM 3494/3495 PVR information, 72, 389
        metadata, 118
        operator mounted drives, 56
        operator PVR, 72
        server considerations, 71
        StorageTek PVR information, 71, ??–392...
  • Page 585
    storage system capacity, 43
    storage system conversion, 45
    usage trends, 44
    Sammi, 29, 35
        setting up Sammi license key, 213
        software considerations, 48
        SSM, 79
    Security, 32, 45
        audit, 32, 89
        authentication, 32
        authorization, 32
        enforcement, 32
        management, 32
        security policy, 36
        site security policy, 87
        storage policy considerations, 87
    Servers...
  • Page 586
    Startup Daemon
        server considerations, 78
        set up Startup Daemon, 244
    STK, see StorageTek PVR
    Storage Classes, 23, 93
        metadata, 112
        storage class characteristics considerations, 93
        storage class configuration, 305
    Storage Hierarchies
        creating storage hierarchy configuration, 315
        metadata, 113
        storage characteristics considerations, 101
    Storage Level
        migration policy for tape, 83
    Storage Map, 23...
  • Page 587
    Storage System Management (SSM), 20
        adding SSM administrative user, 244
        additional SSM information, 525–??
        components, 29
        management interface, 35
        metadata, 120
        Non-standard SSM configurations, 532
        server considerations, 79
        setting up an SSM user, 531
        SSM configuration and start up, 252
        SSM server configuration and start up, 253
        SSM user session configuration and start up, 253
        start up SSM servers/user session, 245...
  • Page 588
    Transmission Control Protocol/Internet Protocol (TCP/IP)
        Mover, 72
        network considerations, 53
        NFS, 60
        PFTP, 59, 138
        STK, 55
        user interface FTP, 33
        user interface PFTP, 33
    UDP, see User Data Protocol
    UID, see User Identifier
    UNIX-style Accounting
        accounting policy, 36
        site accounting requirements, 491
        site accounting table, 493
    User Data Protocol (UDP)
        network considerations, 54...
  • Page 589
    Accounting Policy, 290
    Bitfile Server Configuration, 325
    Configure Mover Device and PVL Drive, 404
    File Family Configuration, 322
    HPSS Class of Service, 319
    HPSS DMAP Gateway Server Configuration, 330
    HPSS Health and Status, 252
    HPSS Logon, 255
    HPSS Migration/Purge Server Configuration, 342
    HPSS Storage Class, 306
    HPSS Storage Hierarchy, 316
    Logging Client Configuration, 334...
  • Page 590 September 2002 HPSS Installation Guide Release 4.5, Revision 2...
