IBM Power Systems 775 for AIX and Linux HPC Solution
Mathematical Acceleration Subsystem libraries for POWER7
This section provides details about the Mathematical Acceleration Subsystem (MASS)
libraries.
Vector libraries
The vector MASS library libmassvp7.a contains vector functions that are tuned for the
POWER7 architecture. The functions can be used in either 32-bit mode or 64-bit mode.
The single-precision and double-precision functions that support previous POWER processors
are also included for POWER7 processors.
The following functions are added to the single-precision and double-precision function groups,
as shown in the calling sketch after this list:
exp2
exp2m1
log21p
log2
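The following minimal sketch shows how one of these vector functions might be called from C. It
assumes the conventional MASS vector calling convention (output array, input array, and a pointer
to the element count) and the massv.h header; the exact prototypes should be taken from the MASS
documentation that is shipped with the compiler.

    #include <stdio.h>
    #include <massv.h>   /* MASS vector prototypes (assumed header name) */

    int main(void)
    {
        double x[4] = {0.0, 1.0, 2.0, 3.0};
        double y[4];
        int n = 4;

        /* Compute y[i] = 2**x[i] for all n elements in a single vector call.
           vexp2 is assumed to be the double-precision vector form of exp2. */
        vexp2(y, x, &n);

        for (int i = 0; i < n; i++)
            printf("2**%.1f = %.6f\n", x[i], y[i]);
        return 0;
    }

Such a program is linked against the vector library, for example with -lmassvp7.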
SIMD libraries
The MASS SIMD library libmass_simdp7.a contains an accelerated set of frequently used
math intrinsic functions that provide improved performance over the corresponding standard
system library functions.
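The following sketch indicates how such a SIMD function might be used from XL C. The function
name expd2 (a two-way double-precision exponential that operates on one vector double VSX
register) and the mass_simd.h header follow the usual MASS SIMD naming scheme but are
assumptions; verify the exact names in the MASS documentation for your compiler level.

    #include <stdio.h>
    #include <string.h>
    #include <mass_simd.h>   /* MASS SIMD prototypes (assumed header name) */

    int main(void)
    {
        double in[2] = {0.5, 1.5};
        double out[2];
        vector double vx, vy;   /* vector extensions, e.g. -qaltivec with XL C */

        memcpy(&vx, in, sizeof vx);   /* load two doubles into one VSX register */
        vy = expd2(vx);               /* assumed two-way SIMD exponential       */
        memcpy(out, &vy, sizeof vy);

        printf("exp(%g) = %f, exp(%g) = %f\n", in[0], out[0], in[1], out[1]);
        return 0;
    }

The program is linked against the SIMD library, for example with -lmass_simdp7.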
POWER7 hardware intrinsics
New hardware intrinsics are added to support the following POWER7 processor features, as
illustrated in the sketch after this list:
New POWER7 prefetch extensions and cache control
New POWER7 hardware instructions
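As an illustration of the prefetch and cache-control intrinsics, the following sketch touches cache
lines ahead of their use in a simple copy-and-scale loop. The built-ins __dcbt (data cache block
touch) and __dcbtst (data cache block touch for store) are long-standing XL C hardware intrinsics;
the POWER7-specific additions (for example, transient-touch variants) follow the same pattern, and
their exact names should be taken from the XL C/C++ documentation.

    #include <stdio.h>
    /* Some XL C levels require #include <builtins.h> for the __dcbt built-ins. */

    #define N     4096
    #define AHEAD 64            /* prefetch distance in elements (a tuning choice) */

    int main(void)
    {
        static double src[N], dst[N];
        double sum = 0.0;

        for (int i = 0; i < N; i++) {
            if (i + AHEAD < N) {
                __dcbt(&src[i + AHEAD]);     /* touch a line that is read soon    */
                __dcbtst(&dst[i + AHEAD]);   /* touch a line that is stored soon  */
            }
            dst[i] = src[i] * 2.0;
            sum += dst[i];
        }
        printf("sum = %f\n", sum);
        return 0;
    }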
New compiler options for POWER7 processors
The -qarch compiler option specifies the processor architecture for which code is generated.
The -qtune compiler option tunes instruction selection, scheduling, and other
architecture-dependent performance enhancements to run best on a specific hardware
architecture.
-qarch=pwr7 produces object code containing instructions that run on the POWER7 hardware
platforms. With -qtune=pwr7, optimizations are tuned for the POWER7 hardware platforms.
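For example, a typical compile line that both targets and tunes for POWER7 might look like the
following command (the file name and optimization level are placeholders):

    xlc -O3 -qarch=pwr7 -qtune=pwr7 -c compute.c

A common alternative is to set -qarch to the oldest processor level that the binary must still run
on and combine it with -qtune=pwr7, so that the generated code remains portable but instruction
scheduling favors POWER7.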
For more information, see the XL C/C++ Optimization and Programming Guide, SC23-5890, and
the XL Fortran Optimization and Programming Guide, SC23-5836.

2.3.2 Advantage for PGAS programming model

The partitioned global address space (PGAS) programming model is an explicitly parallel
programming model that divides the global shared address space into a number of logical
partitions. As is the case in the shared memory programming model, each thread addresses
the entire shared memory space. In addition, a thread has a logical association with the
portion of the global shared address space that is physically on the computational node
where the thread is running.
The PGAS programming model is designed to combine the advantages of the shared
memory programming model and the message passing programming model. In the message
passing programming model, each task has direct access only to its local memory; to access
data that belongs to another task, messages must be exchanged explicitly between the tasks.
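One common realization of the PGAS model is Unified Parallel C (UPC). The following minimal
sketch is illustrative only; it assumes a UPC compiler and the standard upc.h header. It declares
one shared array whose elements are distributed across the threads; each thread updates only the
elements that have affinity to it, and thread 0 then reads the whole array through the global
address space.

    #include <upc.h>
    #include <stdio.h>

    #define N 16

    /* One array in the partitioned global address space; by default the
       elements are distributed cyclically across the UPC threads. */
    shared double a[N];

    int main(void)
    {
        /* The affinity expression &a[i] restricts each thread to the iterations
           whose element is physically local to it, so updates stay node-local. */
        upc_forall (int i = 0; i < N; i++; &a[i])
            a[i] = (double)MYTHREAD;

        upc_barrier;             /* all threads finish before thread 0 reads */

        if (MYTHREAD == 0)
            for (int i = 0; i < N; i++)
                printf("a[%d] has affinity to thread %d\n", i, (int)a[i]);
        return 0;
    }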