High Performance Storage System, Release 4.5
IBM's Parallel Operating Environment MPI, Version 3 Release 2
Sun HPC MPI, version 4.1
ANL MPICH, version 1.2
Other versions of MPI may be compatible with HPSS MPI-IO as well. The mpio_MPI_config.h file
is dynamically generated from the host MPI's mpi.h file, making it possible to tailor the interaction
of HPSS MPI-IO with the host MPI. Earlier versions of these MPI hosts are also known to be
compatible with HPSS MPI-IO, but we recommend using the versions listed above for this
release of HPSS.
This configuration results in the construction of the file include/mpio_MPI_config.h. C
applications must be compiled with #include "mpio.h", which in turn includes
mpio_MPI_config.h to determine the host interface requirements. Fortran applications must
include mpiof.h, and C++ applications are further required to include mpio.h before including
the host mpi.h.
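As a minimal sketch, the include order described above looks like the following for a C or C++ application. The fragment assumes an installed HPSS MPI-IO build (mpio.h and the generated mpio_MPI_config.h come from that installation) and will not compile without it:

```c
/* C application: including "mpio.h" also pulls in the generated
 * mpio_MPI_config.h, which adapts HPSS MPI-IO to the host MPI. */
#include "mpio.h"

/* C++ application: mpio.h must be included BEFORE the host mpi.h,
 * i.e.
 *     #include "mpio.h"
 *     #include <mpi.h>
 */

int main(int argc, char **argv)
{
    /* MPI and MPI-IO calls would follow here. */
    return 0;
}
```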
HPSS MPI-IO supports the Fortran77 and C++ interfaces for the MPI-IO API, as specified in the
MPI-2 standard. These interfaces require compatible Fortran77 and C++ compilers as described in
Section 2.3.1.6: Miscellaneous on page 49. The Makefile.macros file allows tailoring of the
MPI-IO environment to selectively disable support for Fortran or C++ when compatible
compilers are not available, by setting one or both of MPIO_CPLUSPLUS_SUPPORT and
MPIO_FORTRAN_SUPPORT to off.
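For example, a site without a compatible C++ compiler might set the following in Makefile.macros (the variable names are those given above; the exact layout of the file may differ by HPSS release):

```makefile
# Disable the C++ MPI-IO interface; keep Fortran77 support enabled.
MPIO_CPLUSPLUS_SUPPORT = off
MPIO_FORTRAN_SUPPORT   = on
```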
The following environment variables can also be used to amend the MPI-IO API configuration:
MPIO_LOGIN_NAME defines the DCE login name of the principal who will be executing the
application. If it is specified along with MPIO_KEYTAB_PATH, which specifies the path name
of a file containing the principal's security keys, each process in a distributed application will
be able to create the credentials necessary for DCE authentication to HPSS. If either is not
specified, the default is to use the DCE credentials of the current login session, if any.
MPIO_DEBUG directs MPI-IO to report messages when errors are detected. A value of 0
(zero) disables message reporting; any nonzero value enables it.
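As a sketch, these variables might be set as follows before launching an MPI-IO application. The principal name and keytab path here are hypothetical placeholders; substitute values appropriate to your site:

```shell
# Hypothetical DCE principal and keytab location; substitute your own.
export MPIO_LOGIN_NAME=hpssuser
export MPIO_KEYTAB_PATH=/var/hpss/etc/hpssuser.keytab

# Nonzero enables MPI-IO error reporting; 0 disables it.
export MPIO_DEBUG=1
```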
In addition to the MPI-IO-specific variables, any of the HPSS Client API variables can be used (see
Section 7.1: Client API Configuration on page 413). If using MPI-IO with the non-DCE API, the
HPSS_NDCG_SERVERS environment variable must be appropriately set, as described in Section
7.2.2: Environment Variables on page 416.
7.6 HDM Configuration
7.6.1 Introduction

HPSS optionally provides distributed file system services by interfacing with Transarc's DFS.
DFS is a scalable distributed file system that provides a uniform view of file data to all users
through a global name space. DFS supports directory hierarchies and an administrative entity
called filesets. DFS also supports ACLs on both directories and files, allowing different
permissions to be granted to the different users and groups accessing an object. DFS uses the
DCE concept
HPSS Installation Guide, Release 4.5, Revision 2. Chapter 7: HPSS User Interface Configuration, September 2002, page 435.
