IBM z13s Technical Guide
Octavian Lascu
Barbara Sannerud
Cecilia A. De Leon
Edzard Hoogerbrug
Ewerson Palacio
Franco Pinto
Jin J. Yang
John P. Troy
Martin Soellig


In partnership with
IBM Academy of Technology
Redbooks


Summary of Contents for IBM z13s

  • Page 1: Front Cover

    Front cover IBM z13s Technical Guide Octavian Lascu Barbara Sannerud Cecilia A. De Leon Edzard Hoogerbrug Ewerson Palacio Franco Pinto Jin J. Yang John P. Troy Martin Soellig In partnership with IBM Academy of Technology Redbooks...
  • Page 3 International Technical Support Organization IBM z13s Technical Guide June 2016 SG24-8294-00...
  • Page 4 First Edition (June 2016) This edition applies to IBM z13s servers. © Copyright International Business Machines Corporation 2016. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule...
  • Page 5: Table Of Contents

    1.4 Hardware Management Consoles and Support Elements ..... 27; 1.5 IBM z BladeCenter Extension (zBX) Model 004 ..... 27; 1.5.1 Blades ...
  • Page 6 2.5.2 General z13s RAS features ...
  • Page 7 3.5.4 Internal Coupling Facility ..... 103; 3.5.5 IBM z Integrated Information Processor ..... 105; 3.5.6 System assist processors ...
  • Page 8 6.1 Cryptography in IBM z13 and z13s servers ...
  • Page 9 7.2.7 z13s function support summary ...
  • Page 10 7.3.46 OSA-Express4S 1000BASE-T Ethernet ..... 272; 7.3.47 Open Systems Adapter for IBM zAware ..... 273; 7.3.48 Open Systems Adapter for Ensemble ...
  • Page 11 8.1.1 Overview of upgrade types ..... 312; 8.1.2 Terminology related to CoD for z13s systems ..... 313; 8.1.3 Permanent upgrades ...
  • Page 12 9.7 Considerations for PowerHA in zBX environment ..... 366; 9.8 IBM z Advanced Workload Analysis Reporter ..... 367; 9.9 RAS capability for Flash Express ...
  • Page 13 10.4.3 System Activity Display and Monitors Dashboard ..... 382; 10.4.4 IBM Systems Director Active Energy Manager ..... 383; 10.4.5 Unified Resource Manager: Energy management ...
  • Page 14 B.3.4 IBM zAware graphical user interface ..... 464
  • Page 15 F.4 Using IBM Cloud Manager with OpenStack ..... 520
  • Page 16 J.3.2 IBM SDK 7 for z/OS Java support ..... 554; J.3.3 IBM z Systems Batch Network Analyzer ..... 554; Related publications ...
  • Page 17: Figures

    3-4 z13s CPC drawer communication topology ..... 87
  • Page 18 8-9 Machine profile ..... 331; 8-10 IBM z13s Perform Model Conversion window ..... 332; 8-11 Customer Initiated Upgrade Order Activation Number window ...
  • Page 19 B-4 HMC Image Profile for an IBM zAware LPAR ..... 460
  • Page 20 F-2 Open source virtualization (KVM for IBM z Systems) ..... 516; F-3 KVM for IBM z Systems management interface ..... 518; F-4 KVM management by using the libvirt API layers ...
  • Page 21: Notices

    This information was developed for products and services offered in the US. This material might be available from IBM in other languages. However, you may be required to own a copy of the product or product version in that language in order to access it.
  • Page 22: Trademarks

    IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation, registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at “Copyright and trademark information” at http://www.ibm.com/legal/copytrade.shtml...
  • Page 23: Ibm Redbooks Promotions

    Redbooks publication, featuring your business or solution with a link to your web site. Qualified IBM Business Partners may place a full page promotion in the most popular Redbooks publications. Imagine the power of being seen by users who download ibm.com/Redbooks...
  • Page 24 THIS PAGE INTENTIONALLY LEFT BLANK...
  • Page 25: Preface

    4.3 GHz. This configuration can run more than 18,000 million instructions per second (MIPS) and supports up to 4 TB of client memory. The IBM z13s Model N20 is estimated to provide up to 100% more total system capacity than the IBM zEnterprise® BC12 Model H13.
  • Page 26 Cecilia A. De Leon is a Certified IT Specialist in the Philippines. She has 15 years of experience in the z Systems field. She has worked at IBM for 7 years. She holds a degree in Computer Engineering from Mapua Institute of Technology. Her areas of expertise include z Systems servers and operating system.
  • Page 27: Now You Can Become A Published Author, Too

    Your comments are important to us! We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways: Use the online Contact us review Redbooks form found at: ibm.com/redbooks...
  • Page 28: Stay Connected To Ibm Redbooks

    Find us on Facebook: http://www.facebook.com/IBMRedbooks Follow us on Twitter: http://twitter.com/ibmredbooks Look for us on LinkedIn: http://www.linkedin.com/groups?home=&gid=2130806 Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter: https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm Stay current on recent Redbooks publications with RSS Feeds: http://www.redbooks.ibm.com/rss.html xxvi...
  • Page 29: Chapter 1. Introducing Ibm Z13S Servers

    “as a service” cloud delivery models. It also must support the needs of traditional and mobile knowledge workers. This chapter introduces the basic concepts of IBM z13s servers. This chapter includes the following sections: Overview of IBM z13s servers...
  • Page 30: Overview Of Ibm Z13S Servers

    1.1 Overview of IBM z13s servers The IBM z13s server, like its predecessors, is designed from the chip level up for intense data serving and transaction processing, the core of business. IBM z13s servers are designed to provide unmatched support for data processing by providing these features:...
  • Page 31: Z13S Servers Highlights

    IBM continues its technology leadership at a consumable entry point with the z13s servers. z13s servers are built using the IBM modular multi-drawer design that supports 1 or 2 CPC drawers per CPC. Each CPC drawer contains eight-core SCMs with either 6 or 7 cores enabled for use.
  • Page 32: Capacity And Performance

    SMT capability, and provides a 139-instruction SIMD vector subset for better performance. Depending on the model, z13s servers can support from 64 GB to a maximum of 4 TB of usable memory, with up to 2 TB of usable memory per CPC drawer. In addition, a fixed amount of 40 GB is reserved for the hardware system area (HSA) and is not part of customer-purchased memory.
  • Page 33: I/O Subsystem And I/O Features

    PCIe I/O drawers. Up to two PCIe I/O drawers per z13s server are supported, providing space for up to 64 PCIe features. When upgrading a zBC12 or a z114 to a z13s server, one I/O drawer is also supported as carry forward. I/O drawers were introduced with the IBM z10™...
  • Page 34 Copy (PPRC) secondary devices, and IBM FlashCopy® devices, the third subchannel set allows extending the amount of addressable external storage. In addition to performing an IPL from subchannel set 0, z13s servers allow you to also perform an IPL from subchannel set 1 (SS1), or subchannel set 2 (SS2).
  • Page 35 RoCE Express feature. SMC-R affords low latency communications within a CEC by using an RDMA connection. IBM z Systems servers (z13 and z13s) now support a new ISM virtual PCIe (vPCIe) device to enable optimized cross-LPAR TCP communications that use a new “sockets-based DMA”, the SMC-D.
  • Page 36: Virtualization

    CPC drawers and features. For more information about PR/SM functions, see 3.7, “Logical partitioning” on page 116. LPAR configurations can be dynamically adjusted to optimize the virtual servers’ workloads. On z13s servers, PR/SM supports an option to limit the amount of physical processor...
  • Page 37 This partition with its infrastructure is the z Appliance Container Infrastructure. zACI is designed to shorten the deployment and implementation of building and deploying appliances. zACI will be delivered as part of the base code on z13s and z13 (Driver 27) servers.
  • Page 38 TCP/IP network access without requiring a TCP/IP stack in z/VSE. The appliance uses zACI, which was introduced on z13 and z13s servers. Compared to a TCP/IP stack in z/VSE, this configuration can support higher TCP/IP traffic throughput while reducing the processing resource consumption in z/VSE.
  • Page 39: Reliability, Availability, And Serviceability Design

    Beginning with z13s servers, IBM zAware runs in its own logical partition mode. Either CPs or IFLs can be configured to the IBM zAware partition. This special partition is defined for the exclusive use of the IBM z Systems Advanced Workload Analysis Reporter (IBM zAware) offering.
  • Page 40: Models

    IBM z/Architecture processor unit (processor core) on the SCM. On z13s servers, some PUs are part of the system base. That is, they are not part of the PUs that can be purchased by clients. They are characterized by default as follows: System assist processor (SAP) that is used by the channel subsystem.
  • Page 41: Model Upgrade Paths

    When a z114 is upgraded to a z13s server, the z114 Driver level must be at least 93. If a zBX is involved, the Driver 93 must be at bundle 27 or later. Family to family (z114 or zBC12) upgrades are frame rolls, but all z13s upgrade paths are disruptive.
  • Page 42: Cpc Drawer

    1.3.4 CPC drawer Up to two CPC drawers (minimum one) can be installed in the z13s frame. Each CPC drawer houses the SCMs, memory, and fanouts. The CPC drawer supports up to 8 PCIe fanouts and 4 IFB fanouts for I/O and coupling connectivity.
  • Page 43 SMT on z13s servers to optimize their workloads while providing repeatable metrics for capacity planning and chargeback.
  • Page 44: I/O Connectivity: Pcie And Infiniband

    6 GBps and the HCA3-O LR 1X InfiniBand links have a bandwidth of 5 Gbps. 1.3.6 I/O subsystem The z13s I/O subsystem is similar to that of z13 servers and includes a new PCIe Gen3 infrastructure. The I/O subsystem is supported by both a PCIe bus and an I/O bus similar to...
  • Page 45 I/O infrastructures. It can be concurrently added and removed in the field, easing planning. Only PCIe cards (features) are supported, in any combination. Up to two PCIe I/O drawers can be installed on a z13s server with up to 32 PCIe I/O features per drawer (64 in total).
  • Page 46 Also, when carried forward on an upgrade, the z13s servers support one I/O drawer on which the FICON Express8 SX and LX (10 km) feature can be installed. In addition, InfiniBand coupling links HCA3-O and HCA3-O LR, which attach directly to the CPC drawers, are supported.
  • Page 47 Connectivity to the intranode management network (INMN) Top of Rack (ToR) switch in the zBX is not supported on z13s servers. When the zBX model 002 or 003 is upgraded to a model 004, it becomes an independent node that can be configured to work with the ensemble.
  • Page 48: Parallel Sysplex Coupling And Server Time Protocol Connectivity

    Support for Parallel Sysplex includes the Coupling Facility Control Code and coupling links. Coupling links support The z13s CPC drawer supports up to 8 PCIe Gen3 fanouts and up to four IFB fanouts for Parallel Sysplex coupling links. A z13s Model N20 with optional second CPC drawer supports a total of 16 PCIe Gen3 fanouts and 8 IFB fanout slots (four per CPC drawer).
  • Page 49 Attention: IBM z13s servers do not support ISC-3 connectivity. CFCC Level 21 CFCC level 21 is delivered on z13s servers with driver level 27. CFCC Level 21 introduces the following enhancements: Support for up to 20 ICF processors per z Systems CPC: –...
  • Page 50 LPARs that run on the same server share the CFCC level. A CF running on a z13s server (CFCC level 21) can coexist in a sysplex with CFCC levels 17 and 19. Review the CF LPAR size by using the CFSizer tool: http://www.ibm.com/systems/z/cfsizer...
  • Page 51: Special-Purpose Features

    CAR processing. ECAR is only available on z13 GA2 and z13s servers. In a mixed environment with previous generation machines, you should define a z13 or z13s server as the PTS and CTS.
  • Page 52 The Trusted Key Entry (TKE) workstation and the most recent TKE 8.1 LIC are optional features on the z13s. The TKE 8.1 requires the crypto adapter FC 4767. You can use TKE 8.0 to collect data from previous generations of Cryptographic modules and apply the data to Crypto Express5S coprocessors.
  • Page 53 Cards are installed in pairs, which provide mirrored data to ensure a high level of availability and redundancy. A maximum of four pairs of cards (four features) can be installed on a z13s server, for a maximum capacity of 5.6 TB of storage.
  • Page 54: Reliability, Availability, And Serviceability

    For more information, see Appendix H, “Flash Express” on page 529. zEDC Express: zEDC Express, an optional feature that is available to z13, z13s, zEC12, and zBC12 servers, provides hardware-based acceleration for data compression and decompression with lower CPU consumption than the previous compression technology on z Systems.
  • Page 55: Hardware Management Consoles And Support Elements

    The zBX Model 004 is only available as an optional upgrade from a zBX Model 003 or a zBX Model 002, through MES, in an ensemble that contains at least one z13s CPC and consists of these components: Two internal 1U rack-mounted Support Elements providing zBX monitoring and control functions.
  • Page 56: Blades

    These capabilities facilitate the management of planned and unplanned outages. 1.5.1 Blades Two types of blades can be installed and operated in the IBM z BladeCenter Extension (zBX): Optimizer Blades: IBM WebSphere DataPower® Integration Appliance XI50 for zBX blades IBM Blades: –...
  • Page 57: Ibm Z Unified Resource Manager

    FCP attached storage. The IBM DPM provides simplified z Systems hardware and virtual infrastructure management including integrated dynamic I/O management for users that intend to run KVM for IBM z Systems as hypervisor or Linux on z Systems running in LPAR mode.
  • Page 58: Operating Systems And Software

    1.8 Operating systems and software IBM z13s servers are supported by a large set of software, including independent software vendor (ISV) applications. This section lists only the supported operating systems. Use of various features might require the latest releases. For more information, see Chapter 7, “Software support”...
  • Page 59 The following operating systems will be supported on zBX Model 004: An AIX (on POWER7) blade in IBM BladeCenter Extension Mod 004): AIX 5.3, AIX 6.1, AIX 7.1 and later releases, and PowerVM Enterprise Edition Linux (on IBM BladeCenter HX5 blade installed in zBX Mod 004): –...
  • Page 60 OSA-Express when handling larger traffic loads. z/OS Support z/OS uses many of the new functions and features of IBM z13 servers that include but are not limited to the following: z/OS V2.2 supports zIIP processors in SMT mode to help improve throughput for zIIP workloads.
  • Page 61: Ibm Compilers

    They support the latest IBM middleware products (CICS, DB2, and IMS), allowing applications to use their latest capabilities. To fully use the capabilities of z13s servers, you must compile by using the minimum level of each compiler as specified in Table 1-2.
  • Page 62 Because specifying the architecture level of 11 results in a generated application that uses instructions that are available only on the z13s or z13 servers, the application will not run on earlier versions of the hardware. If the application must run on z13s servers and on older hardware, specify the architecture level corresponding to the oldest hardware on which the application needs to run.
  • Page 63: Chapter 2. Central Processor Complex Hardware Components

    The main objective of this chapter is to explain the z13s hardware building blocks, and how these components interconnect from a physical point of view. This information can be useful for planning purposes, and can help to define configurations that fit your requirements.
  • Page 64: Frame And Drawers

    Two Support Element (SE) servers that are installed at the top of the A frame. In previous z Systems, the SEs were notebooks installed on the swing tray. For z13s servers, the SEs are 1U servers that are mounted at the top of the 42U EIA frame. The external LAN interface connections are now directly connected to the SEs at the rear of the system.
  • Page 65 Power Regulators (BPRs), a controller, and two distributor cards. The number of BPRs varies depending on the configuration of the z13s servers. For more information see , “The Top Exit I/O Cabling feature adds 15 cm (6 in.) to the width of the frame and about 60 lbs (27.3 kg) to the weight.”...
  • Page 66: Pcie I/O Drawer And I/O Drawer Features

    [Figure 2-2: z13s (one CPC drawer) I/O drawer configurations for Models N10 and N20.] In Figure 2-3, the various I/O drawer configurations are displayed when two CPC drawers are installed, for the Model N20 only. The view is from the rear of the A frame.
  • Page 67: Processor Drawer Concept

    FICON Express8 cards, maximum quantity eight. Each card has four ports, LX or SX, and four PCHIDs. 2.2 Processor drawer concept The z13s CPC drawer contains up to six SCMs, memory, symmetric multiprocessor (SMP) connectivity, and connectors to support: PCIe I/O drawers (through PCIe fanout hubs)
  • Page 68: Model N10 Components (Top View)

    CPC drawers installed (the minimum is one CPC drawer). The contents of the CPC drawer and its components are model dependent. The Model N10 CPC drawer that is shown in Figure 2-4 has a single node, half the structure of the Model N20 CPC drawer that has two nodes.
  • Page 69: Model N20 Components (Top View)

    [Figure 2-5, Model N20 components (top view): four PU SCMs and two SC SCMs (air cooled), front and rear PCIe fanout slots (x4 each), InfiniBand fanout slots (x4), and memory DIMM slots across Node 0 and Node 1.] Figure 2-6 shows the front view of the CPC drawer with fanout slots and FSP slots for both a Model N20 and a Model N10.
  • Page 70: Cpc Drawer Logical Structure

    The drawing shows eight cores per PU chip. This is by design. The PU chip is a FRU shared with the z13. When installed in a z13s server, each PU chip can have either six or seven active cores...
  • Page 71: Cpc Drawer Interconnect Topology

    A cable connection from the PPS port on the OSC to the PPS output of the NTP server is required when the z13s server is using STP and is configured in an STP-only CTN using NTP with PPS as the external time source.
  • Page 72: System Control

    Various system elements are managed through the flexible service processors (FSPs). An FSP is based on the IBM PowerPC® microprocessor technology. Each FSP card has two ports that connect to two internal Ethernet LANs, through system control hubs (SCH1 and SCH2).
  • Page 73: Conceptual Overview Of System Control Elements

    [Figure 2-10: Conceptual overview of system control elements.] Note: The maximum number of drawers (CEC and I/O) is four for z13s servers. The diagram in Figure 2-10 references the various supported FSP connections. A typical FSP operation is to control a power supply. An SE sends a command to the FSP to start the power supply.
  • Page 74: Cpc Drawer Power

    (28.4 mm x 23.9 mm). Each node of a CPC drawer has three SCMs: two PU SCMs and one SC SCM. The Model N20 CPC drawer has six SCMs (four PU SCMs and two SC SCMs), with more than 20 billion transistors in total.
  • Page 75: Processor Units And System Control Chips

    Each node has three SCMs: Two PU SCMs and one SC SCM. 2.3.2 Processor unit (PU) chip The z13s PU chip (installed as a PU SCM) is an evolution of the zBC12 core design. It uses CMOS 14S0 technology, out-of-order instruction processing, pipeline enhancements, dynamic simultaneous multithreading (SMT), single-instruction multiple-data (SIMD), and redesigned, larger caches.
  • Page 76: Pu Chip Floorplan

    By design, each PU chip has eight cores. Core frequency is 4.3 GHz with a cycle time of 0.233 ns. When installed in a z13s server, the PU chips have either six or seven active cores. This limit means that a Model N10 has 13 active cores, and the Model N20 has 26 active cores.
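A quick arithmetic check of the frequency and cycle-time figures quoted above: the cycle time is the reciprocal of the clock frequency.

\[
T = \frac{1}{f} = \frac{1}{4.3 \times 10^{9}\,\text{Hz}} \approx 0.233\,\text{ns}
\]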
  • Page 77: Processor Unit (Core)

    2.3.3 Processor unit (core) Each processor unit, or core, is a superscalar and out-of-order processor that has 10 execution units and two load/store units, which are divided into two symmetric pipelines as follows: Four fixed-point units (FXUs) (integer) Two load/store units (LSUs) Two binary floating-point units (BFUs) Two binary coded decimal floating-point units (DFUs) Two vector floating point units (vector execution units (VXUs))
  • Page 78: Pu Characterization

    PU. The remaining installed PUs can be characterized for client use. The Model N10 uses unassigned PUs as spares when available. A z13s model nomenclature includes a number that represents the maximum number of PUs that can be characterized for client use, as shown in Table 2-3.
  • Page 79: System Control Chip

    2.3.5 System control chip The SC chip uses the CMOS 14S0 22 nm SOI technology, with 15 layers of metal. It measures 28.4 x 23.9 mm, has 7.1 billion transistors, and has 2.1 billion cells of eDRAM. Each node of the CPC drawer has one SC chip. The L4 cache on each SC chip has 480 MB of non-inclusive cache and a 224 MB non-data inclusive coherent (NIC) directory, which results in 960 MB of non-inclusive L4 cache and 448 MB of NIC directory that is shared per CPC drawer.
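The per-drawer totals in the excerpt above follow from the two nodes per CPC drawer, each carrying one SC chip:

\[
2 \times 480\,\text{MB} = 960\,\text{MB of shared L4 cache}, \qquad 2 \times 224\,\text{MB} = 448\,\text{MB of NIC directory}.
\]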
  • Page 80: Cache Level Structure

    2.3.6 Cache level structure z13s implements a four-level cache structure, as shown in Figure 2-16. [Figure 2-16, cache level structure: Node 0 and Node 1, each with PU chips of six or seven cores and 64 MB eDRAM L3 caches.]...
  • Page 81: Memory

    5,120 GB of installed (physical) memory per system. A z13s has more memory installed than the amount that is ordered. Part of the physical installed memory is used to implement the redundant array of independent memory (RAIM) design.
  • Page 82: Memory Subsystem Topology

    The z13s memory subsystem uses high speed, differential-ended communications memory channels to link a host memory to the main memory storage devices. Figure 2-17 shows an overview of the CPC drawer memory topology of a z13s server.
  • Page 83: Redundant Array Of Independent Memory

    2.4.2 Redundant array of independent memory z13s servers use the RAIM technology. The RAIM design detects and recovers from failures of DRAMs, sockets, memory channels, or DIMMs. The RAIM design requires the addition of one memory channel that is dedicated for reliability, availability, and serviceability (RAS), as shown in Figure 2-18.
  • Page 84: Memory Configurations

    40 GB of HSA memory. The memory controller on the adjacent PU chip manages the five memory slots. DIMM changes require a disruptive IML on z13s. Model N10 memory configurations The memory in Model N10 can be configured in the following manner: Ten DIMM slots per N10 drawer supported by two memory controllers on PU chips with five slots each.
  • Page 85 Table 2-6 shows the standard memory plug summary by node for new build systems. Table 2-6 Model N10 physically installed memory Customer Total Increment Node 0 Memory Physical DIMM location / size GB (GB) MD16-MD20 MD21-MD25 Dial Max 1280 Model N20 single drawer memory configurations The memory in Model N20 with a single drawer can be configured in the following manner: Ten or twenty DIMM slots per N20 drawer supported by two or four memory controllers with five slots each.
  • Page 86: Model N20 One Drawer Memory Plug Locations

    [Table 2-7, Model N20 single CPC drawer, physically installed memory: customer memory, increment, total physical memory, and DIMM locations and sizes for Node 1 (MD06-MD10, MD11-MD15) and Node 0 (MD16-MD20, MD21-MD25).]
  • Page 87 Cust Total Increm Node 1 Node 0 Phys DIMM location / size GB DIMM location / size GB (GB) MD06-MD10 MD11-MD15 MD16-MD20 MD21-MD25 Dial Max 1280 1112 2560 2008 1240 1368 1496 1624 1752 1880 2008 Model N20 two drawer memory configurations The memory in Model N20 with two drawers can be configured in the following manner: Ten or twenty DIMM slots per N20 drawer supported by two or four memory controllers with five slots each.
  • Page 88: Model N20 Two Drawer Memory Plug Locations

    [Table: Model N20 two drawer memory plug locations, showing DIMM locations and sizes (MD06-MD10, MD11-MD15, MD16-MD20, MD21-MD25) for Node 1 and Node 0 of each drawer.]
  • Page 89 Cust. Physi Increm Drawer 1 Drawer 0 (Second drawer) (GB) Node 1DIMM loc Node 0 D IMM Node 1 DIMM loc Node 0 DIMM Dial / size loc/size /size loc/size MD06- MD11- MD16- MD21- MD06- MD11- MD16- MD21- MD10 MD15 MD20 MD25 MD10...
  • Page 90: Memory Allocation Diagram

    [Figure 2-22, memory allocation diagram: HSA size = 40 GB.] As an example, a z13s Model N20 (one CPC drawer) that is ordered with 1496 GB of memory has the following memory sizes: Physical installed memory is 2560 GB: 1280 GB on Node 0 and 1280 GB on Node 1.
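A worked reconciliation of this example, assuming the RAIM design described earlier (one of every five memory channels is dedicated to RAS, so 20% of the physical memory is not addressable):

\[
2560\,\text{GB} \times \tfrac{4}{5} = 2048\,\text{GB addressable}; \qquad 1496\,\text{GB (customer)} + 40\,\text{GB (HSA)} = 1536\,\text{GB} \le 2048\,\text{GB}.
\]

Under that assumption, the remaining 512 GB of headroom is what allows concurrent memory upgrades up to the plan-ahead ("dial max") limit.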
  • Page 91: Memory Upgrades

    LPAR has access to that memory up to a maximum of 4 TB. This access is possible because, despite the CPC drawer structure, the z13s is still an SMP system. The existence of an I/O drawer in the CPC limits LPAR memory to 1 TB. For more information, see 3.7, “Logical partitioning”...
  • Page 92: Reliability, Availability, And Serviceability

    Patented error correction technology in the memory subsystem continues to provide the most robust error correction from IBM to date. Two full DRAM failures per rank can be spared and a third full DRAM failure can be corrected. DIMM level failures, including components such as the memory controller application-specific integrated circuit (ASIC), the power regulators, the clocks, and the system board can be corrected.
  • Page 93: General Z13S Ras Features

    The power supplies for the z13s servers are also based on the N+1 design. The second power supply can maintain operations and avoid an unplanned outage of the system.
  • Page 94: Model N10 Drawer: Location Of The Pcie And Ifb Fanouts

    Integrated Coupling Adapter (ICA SR): This adapter provides coupling connectivity between z13s and z13/z13s servers. Host Channel Adapter (HCA3-O (12xIFB)): This optical fanout provides 12x InfiniBand coupling link connectivity up to 150 meters (492 ft.) distance to z13s, z13, zEC12, zBC12, z196, and z114 servers.
  • Page 95: Redundant I/O Interconnect

    In a system that is configured for maximum availability, alternative paths maintain access to critical I/O devices, such as disks and networks. Note: Fanout plugging rules for z13s servers are different from those of the previous zEC12 and z114 servers. Preferred plugging for PCIe Generation 3 fanouts will always be in CPC Drawer 1 (bottom drawer), alternating between the two nodes.
  • Page 96: Redundant I/O Interconnect For I/O Drawer

    Note: Both IFB-MP cards must be installed in the I/O Drawer to maintain the interconnect across I/O domains. If one of the IFB-MP cards is removed, then the I/O cards in that domain (up to four) become unavailable.
  • Page 97: Model Configurations

    I/O cards in that domain (up to eight) become unavailable. 2.7 Model configurations When a z13s order is configured, PUs are characterized according to their intended usage. They can be ordered as any of the following items: CP: The processor is purchased and activated.
  • Page 98 PUs that are characterized as CPs. Not all PUs on a model must be characterized. The following items are present in z13s servers, but they are not part of the PUs that clients purchase and require no characterization: An SAP to be used by the channel subsystem.
  • Page 99: Upgrades

    2.7.1 Upgrades Concurrent CP, IFL, ICF, zIIP, or SAP upgrades are done within a z13s server. Concurrent upgrades require available PUs, that is, extra PUs that were installed previously but are not yet activated. Spare PUs are used to replace defective PUs. On Model N10, unassigned PUs are used as spares.
  • Page 100: Concurrent Pu Conversions

    For more information about STSI output, see “Processor identification” on page 351. Capacity identifiers: Within a z13s server, all CPs have the same capacity identifier. Specialty engines (IFLs, zAAPs, zIIPs, and ICFs) operate at full speed.
  • Page 101: Model Capacity Identifier And Msu Values

    2.7.4 Model capacity identifier and MSU values All model capacity identifiers have a related MSU value that is used to determine the software license charge for MLC software, as shown in Table 2-12. [Table 2-12: Model capacity identifier and MSU values.]
  • Page 102: Capacity Backup

    For example, a 3-year CBU record has three test activations, and a 1-year CBU record has one test activation. You can increase the number of tests up to a maximum of 15 for each CBU record.
  • Page 103 Number of total CBU years ordered (duration of the contract) Expiration date of the CBU contract The record content of the CBU configuration is documented in the IBM configurator output, as shown in Example 2-1. In the example, one CBU record is made for a 5-year CBU contract without additional CBU tests for the activation of one CP CBU.
  • Page 104: On/Off Capacity On Demand And Cps

    CPs are activated is equal to the number of temporary CPs ordered. For example, when a configuration with model capacity identifier D03 specifies four temporary CPs through On/Off CoD, the result is a server with model capacity identifier E05.
  • Page 105: Power And Cooling

    For ancillary equipment, such as the HMC, its display, and its modem, extra single-phase outlets are required. The power requirements depend on the number of processor and I/O drawers in the z13s server. Table 10-1 on page 372 shows the maximum power consumption tables for the various configurations and environments.
  • Page 106: High-Voltage Dc Power

    2.8.3 Internal Battery Feature IBF is an optional feature on the z13s CPC server. See Figure 2-1 on page 36 for a pictorial view of the location of this feature. This optional IBF provides the function of a local uninterruptible power supply (UPS).
  • Page 107: Cooling Requirements

    HMC System Activity Display. 2.8.6 Cooling requirements The z13s is an air-cooled system. It requires chilled air, ideally coming from under a raised floor, to fulfill the air cooling requirements. The chilled air is usually provided through perforated floor tiles. The amount of chilled air that is required for various temperatures under the floor of the computer room is indicated in the z13s Installation Manual for Physical Planning, GC28-6953.
  • Page 108 1 phase, 3 phase 1 phase, 3 phase 1 phase, 3 phase Optional external DC 520 V / 380 V DC 520 V / 380 V DC 520 V / 380 V DC Internal Battery Feature Optional Optional Optional IBM z13s Technical Guide...
  • Page 109: Chapter 3. Central Processor Complex System Design

    This chapter explains some of the design considerations for the IBM z13s processor unit. This information can be used to understand the functions that make the z13s server a system that accommodates a broad mix of workloads for large enterprises.
  • Page 110: Overview

    The z13s symmetric multiprocessor (SMP) system is the next step in an evolutionary trajectory that began with the introduction of the IBM System/360 in 1964. Over time, the design was adapted to the changing requirements that were dictated by the shift toward new types of applications that clients depend on.
  • Page 111 Have a balanced system design, providing large data rate bandwidths for high performance connectivity along with processor and system capacity. The remaining sections describe the z13s system structure, showing a logical representation of the data flow from PUs, caches, memory cards, and various interconnect capabilities.
  • Page 112: Cpc Drawer Design

    3.3 CPC drawer design A z13s system can have up to two CPC drawers in a full configuration, up to 20 PUs can be characterized, and up to 4 TB of customer usable memory capacity can be ordered. Each CPC drawer is physically divided into two nodes to improve the processor and memory affinity and availability.
  • Page 113: Z13S Cache Topology

    NIC directory. Main storage has up to 2.0 TB addressable memory per CPC drawer, using 20 DIMMs (total of 5 per feature). A z13s Model N20 with two CPC drawers can have up to 4 TB of addressable main storage.
  • Page 114: Z13S And Zbc12 Cache Level Comparison

    However, in the z13s cache design, some lines of the L3 cache are not included in the L4 cache. The L4 cache has a NIC directory that has entries that point to the non-inclusive lines of L3 cache.
  • Page 115: Cpc Drawer Interconnect Topology

    Compared to zBC12, the z13s cache design has much larger cache level sizes. z13s servers have more affinity between the memory of a partition, the L4 cache in the SC SCM, and the cores in the PU SCMs of a node. The access time of the private cache usually occurs in one cycle.
  • Page 116: Processor Unit Design

    0.263 ns (3.8 GHz) and 0.238 ns (4.2 GHz). z13s servers have a cycle time of 0.232 ns (4.3 GHz), and an improved design that allows the increased number of processors that share larger caches to have quick access times and improved capacity and performance.
  • Page 117: Simultaneous Multithreading

    Millicode improvements Decimal floating-point (DFP) improvements The z13s enhanced Instruction Set Architecture (ISA) includes a set of instructions that are added to improve compiled code efficiency. These instructions optimize PUs to meet the demands of a wide variety of business and analytics workload types without compromising the performance characteristics of traditional workloads.
  • Page 118: Single-Instruction Multiple-Data

    SMT-enabled partition. 3.4.2 Single-instruction multiple-data The z13s superscalar processor has 32 vector registers and an instruction set architecture that includes a subset of 139 new instructions, known as SIMD, added to improve the efficiency of complex mathematical models and vector processing.
  • Page 119: Schematic Representation Of Add Simd Instruction With 16 Elements In Each Vector

    Each of the 32 new vector registers is 128 bits wide. The 139 new instructions include string operations, vector integer, and vector floating point operations. Each register contains multiple data elements of a fixed size. The instruction code specifies which data format to use and the size of the elements: Byte (sixteen 8-bit operands) Halfword (eight 16-bit operands)
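An illustrative sketch of the element layout just described, using GCC/Clang vector extensions rather than actual z/Architecture vector-facility instructions (the 16-lane byte format mirrors the "sixteen 8-bit operands" case above; a compiler targeting ARCH(11) could map such code to a single vector add):

```c
#include <stdio.h>
#include <stdint.h>

/* One 128-bit register modeled as sixteen 8-bit lanes. */
typedef int8_t v16qi __attribute__((vector_size(16)));

int main(void) {
    v16qi a = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16};
    v16qi b = {16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1};
    v16qi c = a + b;  /* one conceptual SIMD add across all 16 lanes */

    for (int i = 0; i < 16; i++)
        printf("%d ", c[i]);  /* every lane holds 17 */
    printf("\n");
    return 0;
}
```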
  • Page 120: Out-Of-Order Execution

    3.4.3 Out-of-order execution z13s servers have an OOO core, much like the previous z114 and zBC12 systems. OOO yields significant performance benefits for compute-intensive applications. It does so by reordering instruction execution, allowing later (younger) instructions to be run ahead of a stalled instruction, and reordering storage accesses and parallel storage accesses.
  • Page 121: Z13S Pu Core Logical Diagram

    (program) execution. This implementation requires special circuitry to make execution and memory accesses appear in order to the software. The logical diagram of a z13s core is shown in Figure 3-9. [Figure 3-9: z13s PU core logical diagram: Out-of-Order (OoO) additions; clump of instructions (up to 6).]
  • Page 122: Z13S In-Order And Out-Of-Order Core Execution Improvements

    For this reason, various history-based branch prediction mechanisms are used, as shown on the in-order part of the z13s PU core logical diagram in Figure 3-9 on page 93. The branch target buffer (BTB) runs ahead of instruction cache pre-fetches to prevent branch misses in an early stage.
  • Page 123: Superscalar Processor

    On z13s servers, up to six instructions can be decoded per cycle and up to 10 instructions or operations can be in execution per cycle. Execution can occur out of (program) order. These improvements also make the simultaneous execution of two threads in the same processor possible.
  • Page 124: Decimal Floating Point Accelerator

    (MSA). For more information about these instructions, see z/Architecture Principles of Operation, SA22-7832. For more information about cryptographic functions on z13s servers, see Chapter 6, “Cryptography” on page 199. 3.4.6 Decimal floating point accelerator The DFP accelerator function is present on each of the microprocessors (cores) on the 8-core chip.
  • Page 125: Ieee Floating Point

    Allows COBOL programs that use zoned-decimal operations to take advantage of the z/Architecture DFP instructions. z13s servers have two DFP accelerator units per core, which improve the decimal floating point execution bandwidth. The floating point instructions operate on newly designed vector registers (32 new 128-bit registers).
  • Page 126: Branch Prediction

    BHT, which improves processing times for calculation routines. In addition to the BHT, z13s servers use various techniques to improve the prediction of the correct branch to be run. The following techniques are used:...
  • Page 127: Translation Lookaside Buffer

    3.4.11 Translation lookaside buffer The TLB in the instruction and data L1 caches use a secondary TLB to enhance performance. In addition, a translator unit is added to translate misses in the secondary TLB. The size of the TLB is kept as small as possible because of its short access time requirements and hardware space limitations.
  • Page 128: Instruction Set Extensions

    This section describes the PU functions. 3.5.1 Overview All PUs on a z13s server are physically identical. When the system is initialized, one integrated firmware processor (IFP) is allocated from the pool of PUs that is available for the whole system. The other PUs can be characterized to specific functions (CP, IFL, ICF, zIIP, or SAP).
  • Page 129 All assigned PUs are grouped in the PU pool. These PUs are dispatched to online logical PUs. As an example, consider a z13s server with 6 CPs, two IFLs, five zIIPs, and one ICF. This ool width system has a PU pool of 14 PUs, called the p .
  • Page 130: Central Processors

    Coupling Facility Control Code (CFCC), and IBM zAware. Up to 6 PUs can be characterized as CPs, depending on the configuration. z13s servers can be initialized either in LPAR mode or in Dynamic Partition Manager (DPM) mode. For more information, see Appendix E, “IBM Dynamic Partition Manager” on page 501. CPs are defined as either dedicated or shared.
  • Page 131: Integrated Facility For Linux

    20 PUs can be characterized as IFLs, depending on the configuration. IFLs can be dedicated to a Linux, a z/VM, or an IBM zAware LPAR, or they can be shared by multiple Linux guests, z/VM LPARs, or IBM zAware running on the same z13s server. Only z/VM, Linux on z Systems operating systems, KVM for z, IBM zAware, and designated software products can run on IFLs.
  • Page 132 ICFs or CPs) is not a preferable production configuration. A production CF should generally operate by using dedicated ICFs. With CFCC Level 19 (and later; z13s servers run CFCC level 21), Coupling Thin Interrupts are available, and dedicated engines continue to be recommended to obtain the best coupling facility performance.
  • Page 133: Ibm Z Integrated Information Processor

    UDB for z/OS Version 8 or later, freeing up capacity for other workload requirements. A zIIP enables eligible z/OS workloads to have a portion of them directed to zIIP. The zIIPs do not increase the MSU value of the processor, and so do not affect the IBM software license charges.
  • Page 134: Logical Flow Of Java Code Execution On A Ziip

    This process reduces the CP time that is needed to run Java WebSphere applications, freeing that capacity for other workloads. Figure 3-14 shows the logical flow of Java code running on a z13s server that has a zIIP available. When JVM starts the execution of a Java program, it passes control to the z/OS dispatcher that verifies the availability of a zIIP.
  • Page 135 A zIIP runs only IBM authorized code. This IBM authorized code includes the z/OS JVM in association with parts of system code, such as the z/OS dispatcher and supervisor services. A zIIP cannot process I/O or clock comparator interruptions, and it does not support operator controls, such as IPL.
  • Page 136 One CP must be installed with or before any zIIP is installed. In z13s servers, the zIIP to CP ratio is 2:1, which means that up to 12 zIIPs on any specific model x06 can be characterized.
  • Page 137: System Assist Processors

    Assignment of more SAPs can increase the capability of the channel subsystem to run I/O operations. In z13s systems, the number of SAPs can be greater than the number of CPs. However, additional SAPs plus standard SAPs cannot exceed 6.
  • Page 138: Reserved Processors

    The PU assignment is based on CPC drawer plug ordering. The CPC drawers are populated from the bottom upward. This process defines the low-order and the high-order CPC drawers: CPC drawer 1: Plug order 1 (low-order CPC drawer) CPC drawer 0: Plug order 2 (high-order CPC drawer)
  • Page 139: Sparing Rules

    Note: The addition of a second CPC drawer is disruptive. 3.5.10 Sparing rules On a z13s Model N20 system, two PUs are reserved as spares. There are no spare PUs reserved on a z13s Model N10. The reserved spares are available to replace any characterized PUs, whether they are CP, IFL, ICF, zIIP, SAP, or IFP.
  • Page 140: Increased Flexibility With Z/Vm Mode Partitions

    3.5.11 Increased flexibility with z/VM mode partitions z13s servers provide a capability for the definition of a z/VM mode LPAR that contains a mix of processor types that includes CPs and specialty processors, such as IFLs, zIIPs, and ICFs.
  • Page 141 Different sized DIMMs are available for use (16 GB, 32 GB, 64 GB, or 128 GB). On a z13s server, the CPC drawer design allows for only same-sized DIMMs to be used in one particular CPC drawer.
  • Page 142 CPC drawer memory information On a z13s model N10 machine, the maximum amount of memory is limited to 984 GB of customer addressable memory. If more memory is needed by the customer, a model upgrade to Model N20 is mandatory. For availability and performance reasons, all available memory slots in both nodes of the same CPC drawer are populated.
  • Page 143: Main Storage

    On z13s servers, 1 MB large pages become pageable if Flash Express is available and enabled. They are available only for 64-bit virtual private storage, such as virtual memory above 2 GB.
  • Page 144: Hardware System Area (Hsa)

    Any reliance on these statements of general direction is at the relying party’s sole risk and will not create liability or obligation for IBM. z13s servers can run in LPAR or Dynamic Partition Manager (DPM) mode. For DPM mode, see Appendix E, “IBM Dynamic Partition Manager” on page 501.
  • Page 145: Overview

    3.7.1 Overview Logical partitioning is a function that is implemented by the PR/SM on z13s servers. The z13s server runs either in LPAR-, or Dynamic Partition Manager mode. Therefore, all system aspects are controlled by PR/SM functions. PR/SM is aware of the CPC drawer structure on the z13s server. However, LPARs do not have this awareness.
  • Page 146 Removal of an option for the way shared logical processors are managed under PR/SM LPAR: IBM z13 and z13s servers will be the last z Systems servers to support selection of the option to “Do not end the time slice if a partition enters a wait state” when the option to set a processor runtime value has been previously selected in the CPC RESET profile.
  • Page 147 The weight and number of online logical processors of an LPAR can be dynamically managed by the LPAR CPU Management function of the Intelligent Resource Director (IRD). These processors can then be used to achieve the defined goals of this specific partition and of the overall system.
  • Page 148 Table 3-4 shows the modes of operation, summarizing all available mode combinations, including their operating modes and processor types, operating systems, and addressing modes. Only the currently supported versions of operating systems are considered. Table 3-4 z13s modes of operation Image mode PU type...
  • Page 149 Logically partitioned mode If the z13s server runs in LPAR mode, each of the 40 LPARs can be defined to operate in one of the following image modes: ESA/390 mode to run the following systems: – A z/Architecture operating system, on dedicated or shared CPs –...
  • Page 150 Support Element “Logical Processor Add” function under the “CPC Operational Customization” task. This SE function allows the initial and reserved processor values to be dynamically changed. The operating system must support IBM z13s Technical Guide...
  • Page 151 PUs. The goals are to pack the LPAR on fewer CPC drawers and also on fewer PU chips, based on the z13s CPC drawers’ topology. The effect of this process is evident in dedicated and shared LPARs that use HiperDispatch.
  • Page 152: Storage Operations

    DPM, see Appendix E, “IBM Dynamic Partition Manager” on page 501. 3.7.2 Storage operations In z13s servers, memory can be assigned as a combination of main storage and expanded storage, supporting up to 40 LPARs. Expanded storage is used only by the z/VM operating system.
  • Page 153 Table 3-6 shows the z13 storage allocation and usage possibilities, depending on the image mode. Table 3-6 Main storage definition and usage possibilities Image mode Architecture mode Maximum main storage Expanded storage (addressability) Architecture z13s Operating definition definable system usage ESA/390 z/Architecture (64-bit) 16 EB 4 TB...
  • Page 154 16 EB to be used as main storage. However, the current main storage limit for LPARs is 10 TB of main storage on a z13s server. The operating system that runs in z/Architecture mode must be able to support the real storage. Currently, z/OS supports up to 4 TB of real storage (z/OS V1R10 and later releases).
  • Page 155: Reserved Storage

    Provide an economical Java execution environment under z/OS on zIIPs zACI mode In zACI mode, storage addressing is 64 bit for an IBM zAware image running IBM z Advanced Workload Analysis Reporter firmware. This configuration allows for an addressing range up to 16 EB.
  • Page 156: Logical Partition Storage Granularity

    LPAR storage granularity information is required for LPAR image setup and for z/OS RSU definition. LPARs are limited to a maximum size of 10 TB of main storage (4 TB on z13s servers). For z/VM V6R2, the limit is 256 GB, whereas for z/VM V6.3, the limit is 1 TB.
  • Page 157: Lpar Dynamic Storage Reconfiguration

    LPAR releases a storage increment. 3.8 Intelligent Resource Director IRD is a z13, z13s, and z Systems capability that is used only by z/OS. IRD is a function that optimizes processor and channel resource utilization across LPARs within a single z Systems server.
  • Page 158 DCM within a channel subsystem. Channel subsystem priority queuing: This function on the z13, z13s, and z Systems servers allow the priority queuing of I/O requests in the channel subsystem and the specification of relative priority among LPARs.
  • Page 159: Clustering Technology

    The appropriate CF link technology (ICA SR, 1x IFB, or 12x IFB) selection depends on the system configuration and how distant they are physically. The ISC-3 coupling link is not supported on z13 and z13s servers. For more information about link technologies, see 4.9.1, “Coupling links” on page 174.
  • Page 160: Coupling Facility Control Code

    The introduction of z13s servers into existing installations might require more planning. CFCC Level 21 CFCC level 21 is delivered on z13s servers with driver level 27. CFCC Level 21 introduces the following enhancements: Asynchronous CF Duplexing for lock structures.
  • Page 161: Coupling Thin Interrupts

    CF structure sizing changes are expected when going from CFCC Level 17 (or earlier) to CFCC Level 20. Review the CF structure size by using the CFSizer tool, which is available at this website: http://www.ibm.com/systems/z/cfsizer/ For latest recommended levels, see the current exception letter on the Resource Link at the following website: https://www.ibm.com/servers/resourcelink/lib03020.nsf/pages/exceptionLetters...
  • Page 162: Dynamic Cf Dispatching (Shared Cps Or Shared Icf Pus)

    [Figure 3-17: Dynamic CF dispatching (shared CPs or shared ICF PUs), enabled through the IMAGE Profile setup.] For more information about CF configurations, see Coupling Facility Configuration Options, GF22-5042, which is also available at the Parallel Sysplex website: http://www.ibm.com/systems/z/advantages/pso/index.html
  • Page 163: Cfcc And Flash Express Use

    From CFCC Level 19 and later, Flash Express can be used. It improves resilience while providing cost-effective standby capacity to help manage the potential overflow of IBM WebSphere MQ shared queues. Structures can now be allocated with a combination of real memory and SCM that is provided by the Flash Express feature.
  • Page 164 IBM z13s Technical Guide...
  • Page 165: Chapter 4. Central Processor Complex I/O System Structure

    I/O drawer PCIe I/O drawer PCIe I/O drawer and I/O drawer offerings Fanouts I/O features (cards) Connectivity Parallel Sysplex connectivity Cryptographic functions Integrated firmware processor Flash Express 10GbE RoCE Express zEDC Express © Copyright IBM Corp. 2016. All rights reserved.
  • Page 166: Introduction To The Infiniband And Pcie For I/O Infrastructure

    I/O drawer. 4.1.2 PCIe I/O infrastructure IBM continues the use of industry standards on the z Systems platform by offering a Peripheral Component Interconnect Express Generation 3 (PCIe Gen3) I/O infrastructure. The PCIe I/O infrastructure that is provided by the central processor complex (CPC) improves I/O capability and flexibility, while allowing for the future integration of PCIe adapters and accelerators.
  • Page 167: Infiniband Specifications

    For details and the standard for InfiniBand, see the InfiniBand website at: http://www.infinibandta.org 4.1.4 PCIe Generation 3 The z13 and z13s servers are the first generation of z Systems servers to support the PCIe Generation 3 (Gen3) protocol. PCIe Generation 3 uses 128b/130b encoding for data transmission. This encoding reduces the encoding overhead to approximately 1.54% when compared to the PCIe Generation 2, which...
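A worked comparison of the two encoding schemes, and the resulting x16 link throughput assuming the PCIe Gen3 signaling rate of 8 GT/s per lane (the signaling rate is an assumption here; it is not stated in this excerpt):

\[
\text{Gen2 (8b/10b): } \tfrac{2}{10} = 20\%\ \text{overhead}; \qquad \text{Gen3 (128b/130b): } \tfrac{2}{130} \approx 1.54\%\ \text{overhead}.
\]

\[
8\,\text{GT/s} \times \tfrac{128}{130} \approx 7.88\,\text{Gb/s per lane} \;\Rightarrow\; 16\ \text{lanes} \approx 15.75\,\text{GB/s} \approx 16\,\text{GBps per direction}.
\]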
  • Page 168: I/O System Overview

    The actual performance is dependent upon many factors that include latency through the adapters, cable lengths, and the type of workload. PCIe Gen3 x16 links are used in z13s servers for driving the PCIe I/O drawers, and coupling links for CPC to CPC communications.
  • Page 169: Summary Of Supported I/O Features

    CPC drawer. The maximum number of ICA SR links on z13s servers is 16, and the maximum number of IFB links (any combination of HCA3-O and HCA3-O LR) is 32. Therefore, the maximum number of coupling links on a z13s server is 48.
  • Page 170: I/O Drawer

    Figure 4-1 I/O drawer The I/O structure in a z13s CPC is illustrated in Figure 4-2 on page 143. An IFB cable connects the HCA fanout card in the CPC drawer to an IFB-MP card in the I/O drawer. The passive connection between two IFB-MP cards is a design of redundant I/O interconnection (RII).
  • Page 171: Pcie I/O Drawer

    The I/O drawer domains and their related I/O slots are shown in Figure 4-2. The IFB-MP cards are installed at slot 09 at the rear side of the I/O drawer. The I/O features are installed from the front and rear side of the I/O drawer. Two I/O domains (A and B) are supported. Each I/O domain has up to four I/O features for FICON Express8 only (carry-forward).
  • Page 172: Pcie I/O Drawer

    The locations of the DCAs, AMDs, PCIe switch cards, and I/O feature cards in the PCIe I/O drawer are shown in Figure 4-3. [Figure 4-3, PCIe I/O drawer: Domains 0 and 2 at the front, Domains 1 and 3 at the rear, with 16 GBps PCIe switch cards.]
  • Page 173: Z13S (N20) Connectivity To Pcie I/O Drawers

    The I/O structure in a z13s CPC is shown in Figure 4-4. The PCIe switch card provides the fanout from the high-speed x16 PCIe host bus to eight individual card slots. The PCIe switch card is connected to the drawers through a single x16 PCIe Gen3 bus from a PCIe fanout card.
  • Page 174: Pcie I/O Drawer And I/O Drawer Offerings

    33. All I/O cards connect to the PCIe switch card through the backplane board. Note: The limitation of up to two native PCIe features (Flash Express, zEDC Express, and 10GbE RoCE Express) per I/O domain is eliminated on z13 and z13s servers. Table 4-2 lists the I/O domains and slots.
  • Page 175: Fanouts

    Consideration: On a z13s server, only PCIe I/O drawers are supported. A mixture of I/O drawers, and PCIe I/O drawers are available only on upgrades to a z13s server. The PCIe I/O drawers support the following PCIe features:...
  • Page 176: Infrastructure For Pcie And Infiniband Coupling Links

    Five types of fanout cards are supported by z13s servers. Each slot can hold one of the following five fanouts: PCIe Gen3 fanout card: This copper fanout provides connectivity to the PCIe switch card in the PCIe I/O drawer. ICA SR: This adapter provides 8x PCIe optical coupling connectivity between z13, z13s servers, up to 150-meter (492 ft.) distance, 8 GB/s link rate.
  • Page 177: Pcie Generation 3 Fanout (Fc 0173)

    I/O domains within the PCIe I/O drawer. The pairs of PCIe fanout cards of a z13s Model N20 are split across the two logical nodes within a CPC drawer (LG02 - LG06 and LG11 - LG15), or are split across two CPC drawers for redundancy purposes.
  • Page 178: Hca3-O (12X Ifb) Fanout (Fc 0171)

    The ICA SR can be used only for coupling connectivity between z13 and z13s servers. It does not support connectivity to zEC12, zBC12, z196, or z114 servers, and it cannot be connected to HCA3-O or HCA3-O LR coupling fanouts. The ICA SR fanout requires new cabling that is different from the 12x IFB cables. For distances up to 100 m, clients can choose the OM3 fiber optic cable.
  • Page 179: Hca3-O Lr (1X Ifb) Fanout (Fc 0170)

    Each connection supports a link rate of up to 5 Gbps if connected to a z13, z13s, zEC12, zBC12, z196, or z114 server. It supports a link rate of 2.5 Gbps when connected to a z Systems qualified DWDM.
  • Page 180: Fanout Considerations

    This limitation can be important for coupling connectivity planning, especially for a z13s Model N10, as shown in Table 4-3. No more fanout adapter slots are available for an N10 model after a FICON Express8 feature is carried forward. Table 4-3 shows for the z13s...
  • Page 181 Table 4-4 illustrates the AID assignment for each fanout slot relative to the drawer location on a new build system. Table 4-4 AID number assignment Drawer Location Fanout slot AIDs First A21A LG03-LG06 (PCIe) 1B-1E LG07-LG10 (IFB) 04-07 LG11-LG14 (PCIe) 1F-22 Second A25A...
  • Page 182: I/O Features (Cards)

    Fanout features that are supported by the z13s server are shown in Table 4-5. The table provides the feature type, feature code, and information about the link that is supported by the fanout feature. Table 4-5 Fanout summary Fanout feature...
  • Page 183: I/O Feature Card Ordering Information

    4.7.1 I/O feature card ordering information Table 4-6 lists the I/O features that are supported by z13s servers and the ordering information for them. [Table 4-6, I/O features and ordering information: channel feature, feature code, new build, and carry-forward availability, beginning with FICON Express16S 10KM LX.]
  • Page 184: Physical Channel Report

    OSA-Express5S GbE LX 2 Ports; 0418 16 Gbps FICON/FCP LX 2 Ports; 0411 10GbE RoCE Express; Resource Group Two; 0171 HCA3-O PSIFB 12x 2 Links; 0170 HCA3-O LR PSIFB 1x 4 Links; 0172 ICA SR 2 Links
  • Page 185: Connectivity

    The following list explains the content of this sample PCHID REPORT: Feature code 0170 (HCA3-O LR (1xIFB)) is installed in CPC drawer 1 (location A21A, slot LG07), and has AID 04 assigned. Feature code 0172 (Integrated Coupling Adapter - ICA SR) is installed in CPC drawer 1 (location A21A, slot LG04), and has AID 1C assigned.
  • Page 186: I/O Feature Support And Configuration Rules

    The following features can be shared and spanned: FICON channels that are defined as FC or FCP OSA-Express5S features that are defined as OSC, OSD, OSE, OSM, OSN, or OSX OSA-Express4S features that are defined as OSC, OSD, OSE, OSM, OSN, or OSX
  • Page 187 FTS supports Fiber Quick Connect (FQC), a fiber harness that is integrated in the frame of a z13s server for quick connection. The FQC is offered as a feature on z13s servers for connection to FICON LX channels.
  • Page 188: Ficon Channels

    Each FICON Express16S or FICON Express8S feature occupies one I/O slot in the PCIe I/O drawer. Each feature has two ports, each supporting an LC Duplex connector, with one PCHID and one CHPID associated with each port.
  • Page 189 (SX). The features are connected to a FICON capable control unit, either point-to-point or switched point-to-point, through a Fibre Channel switch. Statement of Direction: The z13 and z13s servers are the last z Systems servers to support FICON Express8 features for 2 Gbps connectivity.
  • Page 190 150 m (492 feet) of distance depending on the fiber used. Statement of Direction: The IBM z13 and z13s servers will be the last z Systems servers to offer ordering of FICON Express8S channel features. Enterprises that have 2 Gb device connectivity requirements must carry forward these channels.
  • Page 191 FICON channels have traditionally been known for. With the IBM DS8870, z13s servers can extend the use of FEC to the fabric N_Ports for a complete end-to-end coverage of 16Gbps FC links. For more information, see IBM DS8884 and z13s: A new cost optimized solution, REDP-5327.
  • Page 192 FCP I/Os to a single round trip. Originally this benefit is limited to writes that are less than 64 KB. zHPF on z13s and z13 servers has been enhanced to allow all large write operations (> 64 KB) at distances up to 100 km to be run in a single round trip to the control unit.
  • Page 193 FICON feature summary Table 4-9 shows the FICON card feature codes, cable type, maximum unrepeated distance, and the link data rate on a z13s server. All FICON features use LC Duplex connectors. Table 4-9 z13 channel feature support Channel feature...
  • Page 194: Osa-Express5S

    4.8.3 OSA-Express5S The OSA-Express5S feature is exclusively in the PCIe I/O drawer. The following OSA-Express5S features can be installed on z13s servers: OSA-Express5S 10 Gigabit Ethernet LR, FC 0415 OSA-Express5S 10 Gigabit Ethernet SR, FC 0416 OSA-Express5S Gigabit Ethernet LX, FC 0413...
  • Page 195 CHPID type OSX, the 10 GbE port provides connectivity and access control to the IEDN from z13s servers to zBX. The 10 GbE feature is designed to support attachment to a multimode fiber 10 Gbps Ethernet LAN or Ethernet switch that is capable of 10 Gbps. The port can be defined as a spanned channel and can be shared among LPARs within and across logical channel subsystems.
  • Page 196: Osa-Express4S Features

    This section addresses the characteristics of all OSA-Express4S features that are supported on z13s servers. The OSA-Express4S feature is exclusively in the PCIe I/O drawer. The following OSA-Express4S features can be installed on z13s servers: OSA-Express4S 10 Gigabit Ethernet LR, FC 0406 OSA-Express4S 10 Gigabit Ethernet SR, FC 0407...
  • Page 197 The port supports CHPID types OSD and OSX. When defined as CHPID type OSX, the 10 GbE port provides connectivity and access control to the IEDN from z13s servers to IBM zBX. The 10 GbE feature is designed to support attachment to a single mode fiber 10-Gbps Ethernet LAN or Ethernet switch that is capable of 10 Gbps.
  • Page 198 Each port has an RJ-45 receptacle for cabling to an Ethernet switch. The RJ-45 receptacle must be attached by using an EIA/TIA Category 5 or Category 6 UTP cable with a maximum length of 100 meters (328 ft). IBM z13s Technical Guide...
  • Page 199: Osa-Express For Ensemble Connectivity

    For redundancy, one port each from two OSA-Express 10 GbE features must be configured. The connection is from the z13s server to the IEDN top-of-rack (ToR) switches on the zBX Model 004. With a stand-alone z13s node (no zBX), the connection interconnects pairs of OSX ports through directly connected LC Duplex cables, not wrap cables as was previously recommended.
  • Page 200: Hipersockets

    An HMC can manage multiple z Systems servers and can be at a local or a remote site. If the z13s server is defined as a member of an ensemble, a pair of HMCs (a primary and an alternate) is required, and certain restrictions apply. The primary HMC is required to manage ensemble network connectivity, the INMN, and the IEDN network.
  • Page 201 HiperSockets network and an external Ethernet network. It also can be used to connect to the HiperSockets Layer 2 networks of different servers. HiperSockets Layer 2 in the z13s server is supported by Linux on z Systems, and by z/VM for Linux guest use.
  • Page 202: Parallel Sysplex Connectivity

    SMC-D data exchange by associating their VF with the same VCHID. z13s servers support up to 32 ISM VCHIDs per CPC, and each VCHID supports up to 255 VFs, for a total maximum of 8 K VFs. For more information about the SMC-D and ISM, see Appendix D, “Shared Memory Communications”...
  • Page 203 Internal Coupling (IC): CHPIDs (type ICP) that are defined for internal coupling can connect a CF to a z/OS LPAR in the same z13s server. IC connections require two CHPIDs to be defined, which can be defined only in peer mode. A maximum of 32 IC CHPIDs (16 connections) can be defined.
  • Page 204: Z13 Parallel Sysplex Coupling Connectivity

    The maximum for IFB links is 32, for ICA SR 16, and for IC 32. The maximum number of combined external coupling links (active ICA SR links and IFB LR) is 80 per z13s server. z13s servers also support up to 256 coupling CHPIDs per CPC (twice the 128 coupling CHPIDs that are supported on zEC12), which provides enhanced connectivity and scalability for a growing number of coupling channel types.
  • Page 205 HCA3-O LR fanout for 1x InfiniBand, FC 0170 ICA SR fanout, FC0172 Various link options are available to connect a z13s server to other z Systems and zEnterprise servers: ICA SR fanout at 8 GBps to z13 or z13s servers...
  • Page 206: Migration Considerations

    The change in the supported type of coupling link adapters and the number of available fanout slots of the z13s CPC drawer, compared to the number of available fanout slots of the previous generation z Systems servers (z114 and zBC12), needs to be planned for.
  • Page 207: Cpc Drawer Front View: Coupling Links

    IFB connectivity. In a 1:1 link migration scenario, this server cannot be migrated to a z13s-N10 because the z13s-N10 cannot accommodate more than two InfiniBand fanouts. Furthermore, if FICON Express8 features are carried forward, the total number of InfiniBand fanouts is reduced by two.
  • Page 208: Hca3-O Fanouts: Z13S Versus Z114 / Zbc12 Servers

    In this case, a z13s-N20 is needed to fulfill all IFB connectivity, as shown in Figure 4-10. Figure 4-10 HCA3-O Fanouts: z13s versus z114 / zBC12 servers It is beyond the scope of this book to describe all possible migration scenarios. Always involve subject matter experts (SMEs) to help you to develop your migration strategy.
  • Page 209 – Coupling Link Analysis: Capacity Planning tools and services can help. For z13s to z13s, or z13 links, adopt the new ICA SR coupling link. Use ICA SR channel as much as possible to replace existing InfiniBand links for z13s to z13, and z13s connectivity.
  • Page 210 If a server does not have a CF LPAR, timing-only links can be used to provide STP connectivity. The z13s server does not support attachment to the IBM Sysplex Timer. A z13s server cannot be added into a Mixed CTN and can participate only in an STP-only CTN.
  • Page 211: Pulse Per Second Input

    STP tracks the highly stable and accurate PPS signal from ETSs. It maintains accuracy of 10 µs as measured at the PPS input of the z13s server. If STP uses an NTP server without PPS, a time accuracy of 100 ms to the ETS is maintained. ETSs with PPS output are available from various vendors that offer network timing solutions.
  • Page 212: Flash Express

    EADM access is initiated with a Start Subchannel instruction. z13s servers support a maximum of four pairs of Flash Express cards. Only one Flash Express card is allowed per I/O domain. Each PCIe I/O drawer has four I/O domains, and can install two pairs of Flash Express cards.
  • Page 213: Ibm Flash Express Read/Write Cache

    FUNCTION statement or in the HCD. For zEC12 and zBC12, each feature can only be dedicated to an LPAR, and z/OS can support only one of the two ports. In z13s servers, both ports are supported by z/OS and can be shared by up to 31 partitions (LPARs). The 10GbE RoCE Express feature uses an SR laser as the optical transceiver, and supports the use of a multimode fiber optic cable that terminates with an LC Duplex connector.
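    To make the FUNCTION statement that is mentioned above concrete, the following minimal IOCP sketch shows one 10GbE RoCE Express port shared by two LPARs through separate virtual functions. This is an illustration only: the FID, VF, PCHID, and PNETID values and the LPAR names are placeholders, and the exact set of supported keywords depends on the IOCP level in use.

       FUNCTION FID=051,VF=1,PART=((LP01)),PCHID=140,PNETID=PHYNET1
       FUNCTION FID=052,VF=2,PART=((LP02)),PCHID=140,PNETID=PHYNET1

    Each LPAR sees its own PCIe function (FID), while both virtual functions map to the same physical adapter (PCHID).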
  • Page 214: Zedc Express

    Peripheral Component Interconnect Express” on page 521. 4.14 zEDC Express zEDC Express is an optional feature (FC 0420) that is available on z13s, z13, zEC12, and zBC12 servers. It is designed to provide hardware-based acceleration for data compression and decompression.
  • Page 215: Chapter 5. Central Processor Complex Channel Subsystem

    Central processor complex Chapter 5. channel subsystem This chapter addresses the concepts of the IBM z13s channel subsystem, including multiple channel subsystems and multiple subchannel sets. It also describes the technology, terminology, and implementation aspects of the channel subsystem. This chapter includes the following sections:...
  • Page 216: Channel Subsystem

    (IOCDS). The IOCDS is loaded into the hardware system area (HSA) during CPC power-on reset (POR) to initialize all the channel subsystems. On z13s servers, the HSA is pre-allocated in memory with a fixed size of 40 GB, as well as the customer...
  • Page 217: Multiple Logical Channel Subsystems

    I/O configuration and pre-planning for future I/O expansions. CPC drawer repair: For z13s servers, the CPC drawer repair actions are always disruptive. The following objects are always reserved in the z13s HSA during POR, whether or not they are defined in the IOCDS for use:...
  • Page 218: Multiple Subchannel Sets

    Each channel path in a channel subsystem has a unique two-digit hexadecimal identifier that is known as a channel-path identifier (CHPID), ranging from 00 to FF. Therefore, a total of 256 CHPIDs are supported by a CSS, and a maximum of 768 CHPIDs are available on z13s servers, with three logical channel subsystems.
  • Page 219 Note: Do not confuse the multiple subchannel sets function with multiple channel subsystems. Subchannel number The subchannel number is a four-digit heximal number, ranging from 0x0000 to 0xFFFF, assigned to a subchannel within a subchannel set of a channel subsystem. Subchannels in each subchannel set are always assigned subchannel numbers within a single range of contiguous numbers.
  • Page 220 Devices that are used early during IPL processing now can be accessed by using subchannel set 1, or subchannel set 2 on a z13s server. This configuration allows the users of Metro Mirror secondary devices that are defined by using the same device number and a new device type in an alternate subchannel set to be used for IPL, an I/O definition file (IODF), and stand-alone memory dump volumes when needed.
  • Page 221: Channel Path Spanning

    The display ios,config command The z/OS display ios,config(all) command that is shown in Figure 5-2 includes information about the multiple subchannel sets. Figure 5-2 (excerpt):
    D IOS,CONFIG(ALL)
    IOS506I 11.32.19 I/O CONFIG DATA 340
    ACTIVE IODF DATA SET = SYS6.IODF39
    CONFIGURATION ID = L06RMVS1   EDT ID = 01
    TOKEN: PROCESSOR DATE TIME...
  • Page 222: Systems Css: Channel Subsystems With Channel Spanning

    16, 17 of LCSS1; the same applies to CHPID 04. Channel spanning is supported for internal links (HiperSockets and IC links) and for certain types of external links. External links that are supported on z13s servers include FICON Express16S, FICON Express8S, FICON Express8 channels, OSA-Express5S, OSA-Express4S, and Coupling Links.
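    As an illustration of channel spanning, the following minimal IOCP sketch (the CHPID number and PCHID value are placeholders) defines a FICON channel path that spans LCSS0 and LCSS1 by listing both CSSs in the PATH keyword:

       CHPID PATH=(CSS(0,1),04),SHARED,PCHID=1A0,TYPE=FC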
  • Page 223: Css, Lpar, And Identifier Example

    LCSSs. If a MIF image ID is not defined, an arbitrary ID is assigned when the I/O configuration is activated. The z13s server supports a maximum of three LCSSs, with a total of 40 LPARs that can be defined. Each LCSS of a z13s server has these maximum numbers of LPARs: LCSS0 and LCSS1 support 15 LPARs each, with MIF image IDs ranging from 1 to F within each LCSS...
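    The MIF image IDs are assigned on the IOCP RESOURCE statement. A minimal sketch with hypothetical LPAR names follows; each partition is listed with its LCSS and its MIF image ID (1 to F):

       RESOURCE PARTITION=((CSS(0),(LP01,1),(LP02,2)),
             (CSS(1),(LP16,1)),(CSS(2),(LP31,1)))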
  • Page 224: I/O Configuration Management

    PCHID mapping with high availability for the target I/O configuration. Built-in mechanisms generate a mapping according to customized I/O performance groups. Additional enhancements are implemented in CMT to support the z13 and z13s servers. The CMT is available for download from the IBM Resource Link website: http://www.ibm.com/servers/resourcelink...
  • Page 225 Maximum number of subchannels per CSS 191.74 K SS0: 65280 SS1 - SS2: 65535 Maximum number of CHPIDs per CSS Chapter 5. Central processor complex channel subsystem...
  • Page 227: Chapter 6. Cryptography

    Systems, followed by a more detailed description of the features that the IBM z13s servers offer. The chapter concludes with a summary of the cryptographic features and the software required.
  • Page 228: Cryptography In Ibm Z13 And Z13S Servers

    6.1 Cryptography in IBM z13 and z13s servers IBM z13 and z13s servers introduce the new PCI Crypto Express5S feature, together with a redesigned CPACF Coprocessor, managed by a new Trusted Key Entry (TKE) workstation. Also, the IBM Common Cryptographic Architecture (CCA) and the IBM Enterprise PKCS #11 (EP11) Licensed Internal Code (LIC) have been enhanced.
  • Page 229: Kerckhoffs' Principle

    that the intended party can de-scramble the data but an interloper cannot. This idea is also referred to as confidentiality. Authentication: This is the process of determining whether the partners in communication are who they claim to be, which can be done by using certificates and signatures. It must be possible to clearly identify the owner of the data or the sender and the receiver of the message.
  • Page 230: Keys

    If used and stored outside of the HSM, a secure key must be encrypted with a master key, which is created within the HSM and never leaves the HSM.
  • Page 231: Algorithms

    Because a secure key must be handled in a special hardware device, the use of secure keys is usually far slower than using clear keys, as illustrated in Figure 6-1. Figure 6-1 Three levels of protection with three levels of speed. 6.2.4 Algorithms The algorithms of modern cryptography are differentiated by whether they use the same key for the encryption of the message as for the decryption:
  • Page 232: Cryptography On Ibm Z13S Servers

    For more information, see 6.4, “CP Assist for Cryptographic Functions” on page 207. The Crypto Express5S card is an HSM placed in the PCIe I/O drawer of the z13s server. It also supports cryptographic algorithms by using secure keys. This feature is described in more detail in 6.5, “Crypto Express5S”...
  • Page 233 TKE 8.0 Licensed Internal Code (LIC): Shipped with the TKE tower workstation FC 0847 since z13 GA. This LIC is not orderable with a z13s server, but it is able to manage a Crypto Express5S card FC 0890 installed in a z13s server.
  • Page 234: Z13S Cryptographic Support In Z/Os

    United States Department of Commerce. It is your responsibility to understand and adhere to these regulations when you are moving, selling, or transferring these products. To access and use the cryptographic hardware devices that are provided by z13s servers, the application must use an application programming interface (API) provided by the operating system.
  • Page 235: Cp Assist For Cryptographic Functions

    6.4 CP Assist for Cryptographic Functions As already mentioned, attached to every PU on an SCM in a CPC of a z13s server are two independent engines, one for compression and one for cryptographic purposes, as shown in Figure 6-4. This cryptographic coprocessor, called the CPACF, is not an HSM and is therefore not suitable for handling algorithms that use secure keys.
  • Page 236: Cryptographic Synchronous Functions

    6.4.1 Cryptographic synchronous functions As the CPACF works synchronously to the PU, it provides cryptographic synchronous functions. For IBM and client-written programs, CPACF functions can be started by the Message Security Assist (MSA) instructions. z/OS ICSF callable services, in-kernel crypto APIs, and the libica cryptographic functions library running on Linux on z Systems can also start CPACF synchronous functions.
  • Page 237: Cpacf Protected Key

    The CPACF functions are supported by z/OS, z/VM, z/VSE, zTPF, and Linux on z Systems. 6.4.2 CPACF protected key z13s servers support the protected key implementation. Since the IBM 4764 PCI-X cryptographic coprocessor (PCIXCC) deployment, secure keys have been processed on the PCI-X and PCIe cards.
  • Page 238: Cpacf Key Wrapping

    Crypto Express5S is not available. A new segment in the profiles of the CSFKEYS class in IBM RACF restricts which secure keys can be used as protected keys. By default, all secure keys are considered not eligible to be used as protected keys.
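    As a hedged sketch of that RACF control, the ICSF segment of a CSFKEYS profile carries the SYMCPACFWRAP field; setting it to YES marks the secure key that the profile protects as eligible for use as a protected key. The key label below is a placeholder:

       RALTER CSFKEYS KEY.LABEL.EXAMPLE ICSF(SYMCPACFWRAP(YES))
       SETROPTS RACLIST(CSFKEYS) REFRESH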
  • Page 239: Crypto Express5S

    Each feature has one PCIe cryptographic adapter. The Crypto Express5S feature occupies one I/O slot in a z13 or z13s PCIe I/O drawer. This feature is an HSM and provides a secure programming and hardware environment on which crypto processes are run.
  • Page 240 Each z13s server supports up to 16 Crypto Express5S features. Table 6-2 shows configuration information for Crypto Express5S. Table 6-2 Crypto Express5S features Feature Quantity Minimum number of orderable features for each server Order increment above two features Maximum number of features for each server...
  • Page 241: Cryptographic Asynchronous Functions

    6.5.1 Cryptographic asynchronous functions The optional PCIe cryptographic coprocessor Crypto Express5S provides asynchronous cryptographic functions to z13s servers. Over 300 cryptographic algorithms and modes are supported, including the following: DES/TDES with DES/TDES MAC/CMAC: The Data Encryption Standard is a widespread symmetrical encryption algorithm.
  • Page 242: Crypto Express5S As A Cca Coprocessor

    Level (MCL) or ICSF release migration. In z13s servers, up to four UDX files can be imported. These files can be imported only from a DVD. The UDX configuration window is updated to include a Reset to IBM Default button.
  • Page 243 IBM CCA LIC to enhance the functions that secure financial transactions and keys: Greater than 16 domains support, up to 40 LPARs on z13s servers and up to 85 LPARs on z13 servers, exclusive to z13 or z13s servers, and to Crypto Express5S...
  • Page 244 Here are the FPE requirements: Hardware requirements: – z13s or z13 servers and Crypto Express5S with CCA V5.2 firmware Software requirements: – z/OS V2.2 – z/OS V2.1 and z/OS V1.13 with the Cryptographic Support for z/OS V1R13-z/OS V2R1 web deliverable (FMID HCR77B0) –...
  • Page 245 New Key Check Value (KCV) algorithm for service CSNBKYT2 Key Test 2 New key derivation options for CSNDEDH EC Diffie-Hellman service Here are the requirements for this function: Hardware requirements: – z13s or z13 servers and Crypto Express5S with CCA V5.2 firmware Chapter 6. Cryptography...
  • Page 246: Crypto Express5S As An Ep11 Coprocessor

    – z/VM 5.4, 6.2, and 6.3 with PTFs for guest exploitation 6.5.3 Crypto Express5S as an EP11 coprocessor A Crypto Express5S card that is configured in Secure IBM Enterprise PKCS #11 (EP11) coprocessor mode provides PKCS #11 secure key support for public sector requirements.
  • Page 247: Management Of Crypto Express5S

    With z13, this number is raised to 85. This amount corresponds to the maximum number of LPARs running on a z13, which is also 85. Therefore, with z13s servers, the number of register sets for a Crypto Express5S card is 40. Each of these 40 sets...
  • Page 248: Customize Image Profiles: Crypto

    ICSF. The same usage domain index can be used by multiple partitions regardless of which CSS they are defined to. However, the combination of PCI-X adapter number and usage domain index number must be unique across all active partitions.
  • Page 249: Se: View Lpar Cryptographic Controls

    With this function, the cryptographic feature can be added and removed dynamically, without stopping a running operating system. For more information about the management of Crypto Express5S cards see the corresponding chapter in IBM z13 Configuration Setup, SG24-8260.
  • Page 250: Tke Workstation

    Ethernet LAN connectivity only. Up to 10 TKE workstations can be ordered. TKE FCs 0847 and 0097 can be used to control the Crypto Express5S cards on z13s servers. They can also be used to control the Crypto Express5S on z13, and the older crypto cards on zEC12, zBC12, z196, z114, z10 EC, z10 BC, z9 EC, z9 BC, z990, and z890 servers.
  • Page 251: Tke Workstation With Licensed Internal Code 8.0

    6.6.3 TKE workstation with Licensed Internal Code 8.0 To control the Crypto Express5S card in a z13s server, a TKE workstation (FC 0847 or 0097) with LIC 8.0 (FC 0877) or LIC 8.1 (FC 0878) is required. LIC 8.0 does not provide the new functions of LIC 8.1.
  • Page 252: Tke Hardware Support And Migration Information

    6.6.5 TKE hardware support and migration information The new TKE 8.1 LIC (FC 0878) is shipped with z13 servers after GA2, and with z13s servers. If a new TKE 8.1 is purchased, two versions are available: TKE 8.1 tower workstation (FC 0847) TKE 8.1 rack-mounted workstation (FC 0097)
  • Page 253: Cryptographic Functions Comparison

    That is, the TKE does not care whether a Crypto Express is running on a z196, z114, zEC12, zBC12, z13, or z13s servers. Therefore, the LIC can support any CPC where the coprocessor is supported, but the TKE LIC must support the specific crypto module.
  • Page 254 Usable for data integrity: Hashing and message authentication Usable for financial processes and key management operations Crypto performance IBM RMF™ monitoring Requires system master keys to be loaded System (master) key storage Retained key storage Tamper-resistant hardware packaging Designed for FIPS 140-2 Level 4 certification...
  • Page 255: Cryptographic Software Support

    Functions or attributes (compared for CPACF, CEX5C, CEX5P, and CEX5A): ISO 16609 CBC mode triple DES message authentication code (MAC) support; AES GMAC, AES GCM, AES XTS mode, CMAC; SHA-2 (384, 512), HMAC; Visa Format Preserving Encryption; AES PIN support for the German banking industry; ECDSA (192, 224, 256, 384, 521 Prime/NIST); ECDSA (160, 192, 224, 256, 320, 384, 512 BrainPool); ECDH (192, 224, 256, 384, 521 Prime/NIST)
  • Page 257: Chapter 7. Software Support

    This chapter lists the minimum operating system requirements and support considerations for the IBM z13s (z13s) and its features. It addresses z/OS, z/VM, z/VSE, z/TPF, Linux on z Systems, and KVM for IBM z Systems. Because this information is subject to change, always see the Preventive Service Planning (PSP) bucket for 2965DEVICE for the most current information.
  • Page 258: Operating Systems Summary

    7.1 Operating systems summary Table 7-1 lists the minimum operating system levels that are required on the z13s. For similar information about the IBM zBX Model 004, see 7.15, “IBM z BladeCenter Extension (zBX) Model 004 software support” on page 304.
  • Page 259: Z/Os

    September of 2014, a fee-based extension for defect support (for up to three years) can be obtained by ordering IBM Software Support Services - Service Extension for z/OS 1.12. Also, z/OS.e is not supported on z13s, and z/OS.e Version 1 Release 8 was the last release of z/OS.e.
  • Page 260: Z/Vse

    V6.2 is running as a guest (second level). This is in conjunction with the statement of direction that the IBM z13 and z13s servers will be the last to support ESA/390 architecture mode, which z/VM V6.2 requires. z/VM V6.2 will continue to be supported until December 31, 2016, as announced in announcement letter # 914-012.
  • Page 261: Kvm For Ibm Z Systems

    7.2.6 KVM for IBM z Systems KVM for IBM z Systems (KVM for IBM z) is an open virtualization alternative for z Systems built on Linux and KVM. KVM for IBM z delivers a Linux-familiar administrator experience that can enable simplified virtualization management and operation. See Table F-1 on page 513 for a list of supported features.
  • Page 262 Table 7-3 shows the minimum support levels for z/OS and z/VM. Table 7-3 z13s function minimum support requirements summary (part 1 of 2) Function z/OS z/OS z/OS z/OS z/VM z/VM V2 R2 V2 R1 V1R13 V1R12 V6R3 V6R2 z13s Maximum processor unit (PUs) per...
  • Page 263 SMC-D over ISM (Internal Shared Memory) FICON (Fibre Connection) and FCP (Fibre Channel Protocol) FICON Express 8S (channel-path identifier (CHPID) type FC) when using z13s FICON or channel-to-channel (CTC) FICON Express 8S (CHPID type FC) for support of High Performance FICON for z Systems...
  • Page 264 CHPID type OSD (two ports per CHPID) OSA-Express5S Gigabit Ethernet LX and SX CHPID type OSD (one port per CHPID) OSA-Express5S 1000BASE-T Ethernet CHPID type OSC OSA-Express5S 1000BASE-T Ethernet CHPID type OSD (two ports per CHPID) IBM z13s Technical Guide...
  • Page 265 Function z/OS z/OS z/OS z/OS z/VM z/VM V2 R2 V2 R1 V1R13 V1R12 V6R3 V6R2 OSA-Express5S 1000BASE-T Ethernet CHPID type OSD (one port per CHPID) OSA-Express5S 1000BASE-T Ethernet CHPID type OSE OSA-Express5S 1000BASE-T Ethernet CHPID type OSM OSA-Express5S 1000BASE-T Ethernet CHPID type OSN OSA-Express4S 10-Gigabit Ethernet LR and SR...
  • Page 266 IOCP definitions need to be migrated to support the HiperSockets definitions (CHPID type IQD). VCHID specifies the virtual channel identification number associated with the channel path. Valid range is 7E0 - 7FF. VCHID is not valid on z Systems before z13s servers.
  • Page 267 Linux on z V6R1 V5R2 V5R1 V1R1 Systems z13s Maximum PUs per system image Support of IBM zAware IBM z Integrated Information Processors (zIIPs) Java Exploitation of Transactional Execution Large memory support 32 GB 32 GB 32 GB 4 TB...
  • Page 268 GRS FICON CTC toleration N_Port ID Virtualization (NPIV) for FICON CHPID type FCP FICON Express8S support of hardware data router CHPID type FCP FICON Express8S and FICON Express8 and FICON Express8S support of T10-DIF CHPID type FCP IBM z13s Technical Guide...
  • Page 269 Function z/VSE z/VSE z/VSE z/TPF Linux on z V6R1 V5R2 V5R1 V1R1 Systems FICON Express8S, FICON Express8, FICON Express16S 10KM LX, and FICON Express4 SX support of SCSI disks CHPID type FCP FICON Express8S CHPID type FC FICON Express8 CHPID type FC FICON Express 16S (CHPID type FC) when using FICON or CTC FICON Express 16S (CHPID type FC)
  • Page 270 CHPID type OSE (one or two ports per CHPID) OSA-Express4S 1000BASE-T CHPID type OSM OSA-Express4S 1000BASE-T CHPID type OSN (one or two ports per CHPID) Parallel Sysplex and other STP enhancements Server Time Protocol (STP) Coupling over InfiniBand CHPID type IBM z13s Technical Guide...
  • Page 271: Support By Function

    The z13s IOCP definitions therefore must be migrated to support the HiperSockets definitions (CHPID type IQD). VCHID specifies the virtual channel identification number associated with the channel path. Valid range is 7E0 - 7FF. VCHID is not valid on z Systems before z13s servers.
  • Page 272 Total characterizable PUs including zIIPs and CPs on z13s servers. d. 64 PUs without SMT mode and 32 PUs with SMT. e. IBM is working with its Linux distribution partners to provide the use of this function in future Linux on z Systems distribution releases.
  • Page 273: Ziip Support

    CPs. It also allows z/OS to fully use zIIPs. IBM Dynamic Partition Manager A new administrative mode is being introduced for Linux only CPCs for z13 and z13s servers with SCSI storage attached through FCP channels. IBM Dynamic Partition Manager (DPM)
  • Page 274: Transactional Execution

    The functioning of a zIIP is transparent to application programs. In z13s, the zIIP processor is designed to run in SMT mode, with up to two threads per processor. This new function is designed to help improve throughput for zIIP workloads and provide appropriate performance measurement, capacity planning, and SMF accounting data.
  • Page 275: Maximum Main Storage Size

    Table 7-6 lists the maximum amount of main storage that is supported by current operating systems. A maximum of 4 TB of main storage can be defined for an LPAR on a z13s. If an I/O drawer is present (as carry-forward), the LPAR maximum memory is limited to 1 TB.
  • Page 276 IMS V12 are targeted for Flash Express usage. There is a statement of direction to support traditional WebSphere V8. The support is for just-in-time (JIT) Code Cache and Java Heap to improve performance for pageable large pages. IBM z13s Technical Guide...
  • Page 277: Enterprise Data Compression Express

    Software-implemented compression algorithms are costly in terms of processor resources, and storage costs are not negligible either. zEDC is an optional feature that is available to z13, z13s, zEC12, and zBC12 servers. It addresses those requirements by providing hardware-based acceleration for data compression and decompression.
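    On z/OS, zEDC Express usage also requires the zEDC software feature to be enabled in an IFAPRDxx parmlib member. A minimal sketch (verify the product ID and keywords for your z/OS release):

       PRODUCT OWNER('IBM CORP')
               NAME('Z/OS')
               ID(5650-ZOS)
               VERSION(*) RELEASE(*) MOD(*)
               FEATURENAME(ZEDC)
               STATE(ENABLED)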
  • Page 278: Large

    For more information, see Appendix D, “Shared Memory Communications” on page 475. 7.3.8 Large page support In addition to the existing 1-MB large pages, 4-KB pages, and page frames, z13s supports pageable 1-MB large pages, large pages that are 2 GB, and large page frames. For more information, see “Large page support”...
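    Fixed large frames are reserved at IPL time through the LFAREA parameter of the IEASYSxx parmlib member. A hedged sketch follows; the counts are arbitrary examples and the exact LFAREA syntax varies by z/OS release, so check the MVS Initialization and Tuning Reference:

       LFAREA=(1M=64,2G=4)

    This form requests 64 fixed 1 MB frames and 4 fixed 2 GB frames. Pageable 1 MB large pages are not specified here; they require Flash Express.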
  • Page 279: Hardware Decimal Floating Point

    Red Hat RHEL 6 7.3.10 Up to 40 LPARs This feature, first made available in z13s, allows the system to be configured with up to 40 LPARs. Because channel subsystems can be shared by up to 15 LPARs, you must configure three channel subsystems to reach the 40 LPARs limit.
  • Page 280: Separate Lpar Management Of Pus

    7.3.13 LPAR physical capacity limit enforcement On the IBM z13s, PR/SM is enhanced to support an option to limit the amount of physical processor capacity that is consumed by an individual LPAR when a PU that is defined as a central processor (CP) or an IFL is shared across a set of LPARs.
  • Page 281: Capacity Provisioning Manager

    Table 7-15 lists the minimum operating system level that is required on z13s. Table 7-15 Minimum support requirements for LPAR physical capacity limit enforcement: z/OS: z/OS V1R12; z/VM: z/VM V6R3; z/VSE: z/VSE V5R1. (a. PTFs are required.)
  • Page 282: Hiperdispatch

    The PR/SM in the z13s seeks to assign all memory in one drawer striped across the two nodes to take advantage of the lower latency memory access in a drawer and smooth performance variability across nodes in the drawer.
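    In z/OS, HiperDispatch is controlled through the IEAOPTxx parmlib member; a minimal sketch:

       HIPERDISPATCH=YES

    Note that, as described later in this chapter, when PROCVIEW CORE is in effect on a z13s, HiperDispatch is forced on regardless of this setting.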
  • Page 283: The 63.75-K Subchannels

    The supported processor limit has been increased to 64, whereas with SMT, it remains at 32, supporting up to 64 threads running simultaneously.
  • Page 284: Multiple Subchannel Sets

    63.75-K I/O devices and aliases for ESCON (CHPID type CNC) and FICON (CHPID types FC) on the z13, z13s, zEC12, zBC12, z196, z114, z10 EC, and z9 EC. z196 introduced the third subchannel set (SS2), which is not available in zBC12. With z13s servers, three subchannel sets are now available.
  • Page 285: Ipl From An Alternative Subchannel Set

    Red Hat RHEL 6 7.3.20 IPL from an alternative subchannel set z13s supports IPL from subchannel set 1 (SS1), or subchannel set 2 (SS2) in addition to subchannel set 0. For more information, see “IPL from an alternate subchannel set” on page 192.
  • Page 286: Hipersockets Integration With The Intraensemble Data Network

    PTFs are required. 7.3.24 HiperSockets Virtual Switch Bridge The HiperSockets Virtual Switch Bridge is implemented on z13, z13s, zEC12, zBC12, z196, and z114 servers. HiperSockets Virtual Switch Bridge can integrate with the IEDN through OSA-Express for zBX (OSX) adapters. It can then bridge to another central processor complex (CPC) through OSD adapters.
  • Page 287: Hipersockets Multiple Write Facility

    For flexible and efficient data transfer for IP and non-IP workloads, the HiperSockets internal networks on z13s can support two transport modes. These modes are Layer 2 (Link Layer) and the current Layer 3 (Network or IP Layer). Traffic can be Internet Protocol (IP) Version 4 or Version 6 (IPv4, IPv6) or non-IP (AppleTalk, DECnet, IPX, NetBIOS, or SNA).
  • Page 288: Hipersockets Network Traffic Analyzer For Linux On Z Systems

    4 or 8 Gbps for synergy with existing switches, directors, and storage devices. With support for native FICON, zHPF, and FCP, the z13s server enables SAN for even higher performance, helping to prepare for an end-to-end 16 Gbps infrastructure to meet the increased bandwidth demands of your applications.
  • Page 289: Ficon Express8S

    Operating z/OS z/VM z/VSE z/TPF Linux on z Systems system Support of SCSI V6R2 V5R1 SUSE Linux Enterprise Server 12 devices SUSE Linux Enterprise Server 11 CHPID type Red Hat RHEL 7 Red Hat RHEL 6 Support of V6R3 SUSE Linux Enterprise Server 12 hardware data SUSE Linux Enterprise Server 11 router...
  • Page 290: Ficon Express8

    LX and SX connections are offered (in a feature, all connections must have the same type). Important: The z13 and z13s servers are the last z Systems servers to support FICON Express 8 channels. Enterprises should begin upgrading from FICON Express8 channel features (FC 3325 and FC 3326) to FICON Express16S channel features (FC 0418 and FC 0419).
  • Page 291: Z/Os Discovery And Auto-Configuration

    (IODF) that can be converted to production IODFs and activated. zDAC is designed to run discovery for all systems in a sysplex that support the function. Therefore, zDAC helps to simplify I/O configuration on z13s systems that run z/OS, and reduces complexity and setup time.
  • Page 292: High-Performance Ficon

    FICON features that are supported on z13s when configured as CHPID type FC. Table 7-32 lists the minimum support requirements for zDAC. Table 7-32 Minimum support requirements for zDAC Operating system Support requirements z/OS z/OS V1R12 a.
  • Page 293: Request Node Identification Data

    FICON architecture. Certain complex CCW chains are not supported by zHPF. zHPF is available to z13, z13s, zEC12, zBC12, z196, z114, and System z10 servers. The FICON Express8S, FICON Express8, and FICON Express16S (CHPID type FC) concurrently support both the existing FICON protocol and the zHPF protocol in the server LIC.
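    In z/OS, zHPF can be enabled through the IECIOSxx parmlib member or dynamically by operator command, and its status can be displayed. A minimal sketch:

       IECIOSxx:  ZHPF=YES
       Command:   SETIOS ZHPF=YES
       Display:   D IOS,ZHPF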
  • Page 294: Subchannels For The Ficon Express16S

    7.3.35 32 K subchannels for the FICON Express16S To help facilitate growth and continue to enable server consolidation, the z13s supports up to 32 K subchannels per FICON Express16S channel (CHPID). More devices can be defined per FICON channel, which includes primary, secondary, and alias devices. The maximum number of subchannels across all device types that are addressable within an LPAR remains at 63.75 K for subchannel set 0 and 64 K minus 1 (65,535) for subchannel sets 1 and 2.
  • Page 295: Ficon Link Incident Reporting

    16-Gbps link speeds. For more information about FCP channel performance, see the performance technical papers on the z Systems I/O connectivity website at: http://www.ibm.com/systems/z/hardware/connectivity/fcp_performance.html 7.3.40 N_Port ID Virtualization N_Port ID Virtualization (NPIV) allows multiple system images (in LPARs or z/VM guests) to use a single FCP channel as though each were the sole user of the channel.
  • Page 296: Osa-Express5S 10-Gigabit Ethernet Lr And Sr

    SUSE Linux Enterprise Server 11 SP1 SUSE Linux Enterprise Server 12 Red Hat RHEL 6 Red Hat RHEL 7 a. IBM Software Support Services is required for support. b. Maintenance update is required. 7.3.42 OSA-Express5S Gigabit Ethernet LX and SX OSA-Express5S Gigabit Ethernet feature is installed exclusively in the PCIe I/O drawer.
  • Page 297: Osa-Express5S 1000Base-T Ethernet

    QDIO mode, with CHPID types OSD and OSN. Non-QDIO mode, with CHPID type OSE. Local 3270 emulation mode, including OSA-ICC, with CHPID type OSC. OSA-ICC (OSC Channel) supports Secure Sockets Layer on z13s and z13 Driver 27 servers. – Designed to improve security of console operations.
  • Page 298: Osa-Express4S 10-Gigabit Ethernet Lr And Sr

    OSA-Express3, and half the size as well. This configuration results in an increased number of installable features. It also facilitates the purchase of the correct number of ports to help satisfy your application requirements and to better avoid redundancy.
  • Page 299: Osa-Express4S Gigabit Ethernet Lx And Sx

    V6R3 for dynamic I/O only z/VSE OSD: z/VSE V5R1 OSX: z/VSE V5R1 z/TPF OSD: z/TPF V1R1 OSX: z/TPF V1R1 IBM zAware Linux on z Systems OSD: SUSE Linux Enterprise Server 12 SUSE Linux Enterprise Server 11 Red Hat RHEL 7 Red Hat RHEL 6...
  • Page 300: Osa-Express4S 1000Base-T Ethernet

    Non-QDIO mode, with CHPID type OSE Local 3270 emulation mode, including OSA-ICC, with CHPID type OSC: – OSA-ICC (OSC Channel) supports Secure Sockets Layer on z13s and z13 Driver 27 servers – Up to 48 secure sessions per CHPID (the overall maximum of 120 connections is...
  • Page 301: Open Systems Adapter For Ibm Zaware

    A client-provided data network that is provided through an OSA Ethernet channel. A HiperSockets subnetwork within the z13s. IEDN on the z13s to other CPC nodes in the ensemble. The z13s server also supports the use of HiperSockets over the IEDN.
  • Page 302: Intranode Management Network

    OSA-Express5S 1000BASE-T or OSA-Express4S 1000BASE-T features, which are configured as CHPID type OSM. One port per CHPID is available with CHPID type OSM. The OSA connection is through the system control hub (SCH) on the z13s to the HMC network interface. 7.3.50 Intraensemble data network The IEDN is one of the ensemble’s two private and secure internal networks.
  • Page 303: Integrated Console Controller

    (10, 100, or 1000 Mbps, half-duplex or full-duplex). Starting with z13s (z13 Driver 27) TLS/SSL with Certificate Authentication will be added to the OSC CHPID to provide a secure and validated method for connecting clients to the z Systems host.
  • Page 304: Vlan Management Enhancements

    In a heavily mixed workload environment, this “off the wire” network traffic separation is provided by OSA-Express5S and OSA-Express4S. IWQ reduces the conventional z/OS processing that is required to identify and separate unique workloads. This advantage results in improved overall system performance and scalability.
  • Page 305: Inbound Workload Queuing For Enterprise Extender

    Enterprise Extender traffic to a dedicated input queue. IWQ for Enterprise Extender is exclusive to OSA-Express5S and OSA-Express4S, CHPID types OSD and OSX, and the z/OS operating system. This limitation applies to z13, z13s, zEC12, zBC12, z196, and z114 servers. The minimum support requirements are listed in Table 7-48.
  • Page 306: Link Aggregation Support For Z/Vm

    QDIO data connection isolation allows disabling internal routing for each QDIO connected. It also allows creating security zones and preventing network traffic between the zones. It is supported by all OSA-Express5S and OSA-Express4S features on z13s and zBC12. 7.3.61 QDIO interface isolation for z/OS Some environments require strict controls for routing data traffic between servers or nodes.
  • Page 307: Qdio Optimized Latency Mode

    Large send support for IPv6 packets applies to the OSA-Express5S and OSA-Express4S features (CHPID type OSD and OSX), and is exclusive to z13, z13s, zEC12, zBC12, z196, and z114 servers. With z13s, large send for IPv6 packets (segmentation offloading) for LPAR-to-LPAR traffic is supported.
  • Page 308: Osa-Express5S And Osa-Express4S Checksum Offload

    HiperSockets, provide an efficient, high-performance technique for I/O interruptions to reduce path lengths and processor usage. These reductions are in both the host operating system and the adapter (OSA-Express5S and OSA-Express4S when using CHPID type OSD).
  • Page 309: Osa Dynamic Lan Idle

    QDIO Diagnostic Synchronization is supported by the OSA-Express5S and OSA-Express4S features on z13s when in QDIO mode (CHPID type OSD). It is used by z/OS V1R12 and later. 7.3.70 Network Traffic Analyzer The z13s offers systems programmers and network administrators the ability to more easily solve network problems despite high traffic.
  • Page 310: Program-Directed Re-Ipl

    The Network Traffic Analyzer is supported by the OSA-Express5S and OSA-Express4S features on z13s when in QDIO mode (CHPID type OSD). It is used by z/OS V1R12 and later. 7.3.71 Program-directed re-IPL First available on System z9, program directed re-IPL allows an operating system on a z13s to re-IPL without operator intervention.
  • Page 311: Dynamic I/O Support For Infiniband And Ica Chpids

    SMT is supported only by zIIP and IFL speciality engines on z13s, and must be used by the operating system. An operating system with SMT support can be configured to dispatch work to two threads per core. (HCA2-O is not supported on z13s.)
  • Page 312: Single Instruction Multiple Data

    IBM. 7.3.75 Single Instruction Multiple Data The SIMD feature introduces a new set of instructions with z13s to enable parallel computing that can accelerate code with string, character, integer, and floating point data types. The SIMD instructions allow a larger number of operands to be processed with a single complex instruction.
  • Page 313: Cryptographic Support

    V1R1 Linux on z Systems SUSE Linux Enterprise Server 12 Red Hat RHEL 7 a. CPACF also is used by several IBM software product offerings for z/OS, such as IBM WebSphere Application Server for z/OS.
  • Page 314: Crypto Express5S

    7.4.3 Web deliverables For web-deliverable code on z/OS, see the z/OS downloads website: http://www.ibm.com/systems/z/os/zos/downloads/ For Linux on z Systems, support is delivered through IBM and the distribution partners. For more information, see Linux on z Systems on the IBM developerWorks® website: http://www.ibm.com/developerworks/linux/linux390/ 7.4.4 z/OS Integrated Cryptographic Service Facility FMIDs...
  • Page 315 V1R12 CPACF Protected Key Extended PKCS #11 ICSF Restructure (Performance, RAS, and ICSF-CICS Attach Facility) HCR7780 V1R13 Cryptographic Support for IBM zEnterprise 196 support V1R12 z/OS V1R10-V1R12 Elliptic Curve Cryptography V1R11 Included as a base Message-Security-Assist-4 V1R10...
  • Page 316 AES MAC Enhancements PKCS #11 Enhancements Improved CTRACE Support HCR77B0 V2R2 Cryptographic Support for z13 / z13s & CEX5 support, including V2R1 z/OS V1R13-z/OS V2R1 support for sharing cryptographic V1R13 Included as a base coprocessors across a maximum of 85 (for...
  • Page 317: Icsf Migration Considerations

    Except for base processor support, z/OS software changes do not require any of the functions that are introduced with the z13s. Also, the functions do not require functional software. The approach, where applicable, allows z/OS to automatically enable a function based on the presence or absence of the required hardware and software.
  • Page 318: General Guidelines

    7.6.1 General guidelines The IBM z13s introduces the latest z Systems technology. Although support is provided by z/OS starting with z/OS V1R12, use of z13s depends on the z/OS release. z/OS.e is not supported on z13s. In general, consider the following guidelines: Do not change software releases and hardware at the same time.
  • Page 319: Decimal Floating Point And Z/Os Xl C/C++ Considerations

    Important: Use the previous z Systems ARCHITECTURE or TUNE options for C/C++ programs if the same applications run on both the z13s and on previous z Systems servers. However, if C/C++ applications run only on z13s servers, use the latest ARCHITECTURE and TUNE options to ensure that the best performance possible is delivered through the latest instruction set additions.
  • Page 320: Z Appliance Container Infrastructure Mode Lpar

    Coupling facility connectivity to a z13s is supported on the z13, zEC12, zBC12, z196, z114, or another z13s server. The LPAR running the CFCC can be on any of the previously listed supported systems. For more information about CFCC requirements for supported systems, see Table 7-63 on page 294.
  • Page 321: Cfcc Level 21

    Also, consider the level of CFCC. For more information, see “Coupling link considerations” on page 181. 7.8.1 CFCC Level 21 CFCC level 21 is delivered on the z13s with driver level 27. CFCC Level 21 introduces the following enhancements: Usability enhancement: –...
  • Page 322: Flash Express Exploitation By Cfcc

    To support an upgrade from one CFCC level to the next, different levels of CFCC can be run concurrently while the coupling facility LPARs are running on different servers. CF LPARs that run on the same server share the CFCC level. The CFCC level for z13s servers is CFCC Level 21, as shown in Table 7-63.
  • Page 323: Cfcc Coupling Thin Interrupts

    CORE: This value specifies that z/OS should configure a processor view of core, where a core can have one or more threads. The number of threads is limited by z13s to two. If the underlying hardware does not support SMT, a core is limited to one thread.
  • Page 324: Result Of The Display Core Command

    When PROCVIEW CORE or CORE,CPU_OK is specified for z/OS running on z13s, HiperDispatch is forced to run enabled; you cannot disable it. The PROCVIEW statement cannot be changed dynamically, so you must run an IPL after changing it to make the new setting effective.
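    Putting these statements together, a hedged configuration sketch for enabling SMT-2 for zIIPs on a z13s might look like the following (member names are the standard parmlib members; verify defaults for your release):

       LOADxx:    PROCVIEW CORE,CPU_OK
       IEAOPTxx:  MT_ZIIP_MODE=2
       Command:   D M=CORE   (displays the resulting core and thread configuration)

    As noted above, an IPL is required after changing the PROCVIEW setting.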
  • Page 325: Simultaneous Multithreading

    IFL cores. IBM is working with its Linux distribution partners to support SMT. KVM for IBM z Systems SMT support is planned for 1Q2016.
  • Page 326: The Midaw Facility

    The use of new hardware instructions through XL C/C++ ARCH(11) and TUNE(11), or SIMD usage by the MASS and ATLAS libraries, requires the z13s support for z/OS V2R1 XL C/C++ web deliverable. The following compilers have built-in functions for SIMD:
  • Page 327: Idaw Usage

    FICON channel connect time, director ports, and control unit processor usage. IBM laboratory tests indicate that applications that use EF data sets, such as DB2, or long chains of small blocks can gain significant performance benefits by using the MIDAW facility.
  • Page 328: Midaw Format

    CCW. The skip flag in the MIDAW can be used instead. The data count in the CCW must equal the sum of the data counts in the MIDAWs. The CCW operation ends...
  • Page 329: Extended Format Data Sets

    when the CCW count goes to zero or the last MIDAW (with the last flag) ends. The combination of the address and count in a MIDAW cannot cross a page boundary. This configuration means that the largest possible count is 4 K. The maximum data count of all the MIDAWs in a list cannot exceed 64 K, which is the maximum count of the associated CCW.
  • Page 330: Iocp

    On z13s servers, the CHPID statement of HiperSockets devices requires the keyword VCHID. VCHID specifies the virtual channel identification number associated with the channel path. Valid range is 7E0 - 7FF. VCHID is not valid on z Systems before z13s servers.
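    A minimal IOCP sketch of a HiperSockets definition with the VCHID keyword follows. The CHPID number and the spanned CSS list are placeholders; the VCHID value must fall within the 7E0 - 7FF range:

       CHPID PATH=(CSS(0,1,2),F4),SHARED,VCHID=7E0,TYPE=IQD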
  • Page 331: Worldwide Port Name Tool

    7.13 Worldwide port name tool Part of the installation of your z13s system is the pre-planning of the SAN environment. IBM has a stand-alone tool to assist with this planning before the installation.
  • Page 332: Ibm Z Bladecenter Extension (Zbx) Model 004 Software Support

    For information about zBX upgrades, see 8.3.4, “MES upgrades for the zBX” on page 327. 7.15.1 IBM Blades IBM offers a selected subset of IBM POWER7 blades that can be installed and operated on the zBX Model 004. The blades are virtualized by PowerVM Enterprise Edition. Their LPARs run either AIX Version 5 Release 3 technology level (TL) 12 (IBM POWER6®...
  • Page 333: Software Licensing

    AIX, Linux on System x, and Windows environments. PowerVM Enterprise Edition licenses must be ordered for IBM POWER7 blades. For the z13s, two metric groups for software licensing are available from IBM, depending on the software product: Monthly license charge (MLC) International Program License Agreement (IPLA) MLC pricing metrics have a recurring charge that applies each month.
  • Page 334: Monthly License Charge Pricing Metrics

    (except for products that are licensed by using the select application license charge (SALC) pricing metric). This type of charge requires measuring the utilization and reporting it to IBM. The 4-hour rolling average utilization of the logical partition can be limited by a defined capacity value on the image profile of the partition.
  • Page 335: Advanced Workload License Charges

    AWLC, when all nodes are z13, z13s, zEC12, zBC12, z196, or z114 servers. Variable workload license charge (VWLC), allowed only under the AWLC Transition Charges for Sysplexes when not all of the nodes are z13, z13s, zEC12, zBC12, z196, or z114 servers.
  • Page 336: Advanced Entry Workload License Charge

    MWLC is not available. Similar to workload license charges, MWLC can be implemented in full-capacity or subcapacity mode. An MWLC applies to z/VSE V4 and later, and several IBM middleware products for z/VSE. All other z/VSE programs continue to be priced as before.
  • Page 337: Parallel Sysplex License Charges

    Certain WebSphere for z/OS products Linux middleware products z/VM V5 and V6 Generally, three pricing metrics apply to IPLA products for z13s and z Systems: Value unit (VU) VU pricing applies to the IPLA products that run on z/OS. Value Unit pricing is typically based on the number of MSUs and allows for a lower cost of incremental growth.
  • Page 338: Zbx Licensed Software

    The hypervisor for the select System x blades for zBX is provided as part of the zEnterprise Unified Resource Manager. IBM z Unified Resource Manager The IBM z Unified Resource Manager is available through z13, z13s, zEC12, and zBC12 hardware features, either ordered with the system or ordered later. No separate software licensing is required.
  • Page 339: Chapter 8. System Upgrades

    System upgrades Chapter 8. This chapter provides an overview of IBM z Systems z13s upgrade capabilities and procedures, with an emphasis on capacity on demand (CoD) offerings. The upgrade offerings to the z13s central processor complex (CPC) have been developed from previous IBM z Systems servers.
  • Page 340: Upgrade Types

    The two replacement capacity offerings that are available are Capacity BackUp (CBU) and Capacity for Planned Events (CPE). For more information, see 8.1.2, “Terminology related to CoD for z13s systems” on page 313. MES: The MES provides a system upgrade that can result in more enabled processors and a separate central processor (CP) capacity level, but also in a second CPC drawer, memory, I/O drawers, and I/O features (physical upgrade).
  • Page 341: Terminology Related To Cod For Z13S Systems

    For more information, see 8.8, “Nondisruptive upgrades” on page 349. 8.1.2 Terminology related to CoD for z13s systems Table 8-1 briefly describes the most frequently used terms that relate to CoD for z13s systems. Table 8-1 CoD terminology...
  • Page 342 Shows the current active capacity on the server, including all replacement identifier (MCI) and billable capacity. For z13s servers, the model capacity identifier is in the form of Axx to Zxx, where xx indicates the number of active CPs. Note that xx can have a range of 01-06.
  • Page 343: Permanent Upgrades

    The two replacement offerings available are CPE and CBU. Resource Link: IBM Resource Link is a technical support website that is included in the comprehensive set of tools and resources available from the IBM Systems technical support site: http://www.ibm.com/servers/resourcelink/
  • Page 344: Temporary Upgrades

    (for example, an upgrade of Model N10 to N20, or adding memory). Permanent upgrades initiated through CIU on IBM Resource Link: Ordering a permanent upgrade by using the CIU application through the IBM Resource Link enables you to add capacity to fit within your existing hardware: Add model capacity.
  • Page 345: Concurrent Upgrades

    (A to Z) of the CPs. A hardware configuration upgrade might require more physical hardware (processor or I/O drawers, or both). A z13s upgrade can change either, or both, the server model and the MCI.
  • Page 346 The concurrent I/O upgrade capability can be better used if a future target configuration is considered during the initial configuration. Concurrent PU conversions (MES-ordered) z13s servers support concurrent conversion between all PU types, such as any-to-any PUs, including SAPs, to provide flexibility to meet changing business requirements. IBM z13s Technical Guide...
  • Page 347: Customer Initiated Upgrade Facility

    MCI. 8.2.2 Customer Initiated Upgrade facility The CIU facility is an IBM online system through which a customer can order, download, and install permanent and temporary upgrades for z Systems servers. Access to and use of the CIU facility requires a contract between the customer and IBM, through which the terms and conditions for use of the CIU facility are accepted.
  • Page 348: The Provisioning Architecture

    MES process. To order and activate the upgrade, log on to the IBM Resource Link website and start the CIU application to upgrade a server for processors or memory. Requesting a customer order approval to conform to your operation policies is possible.
  • Page 349: Example Of Temporary Upgrade Activation Sequence

    Temporary upgrades are represented in the z13s server by a record. All temporary upgrade records, downloaded from the RSF or installed from portable media, are resident on the SE hard disk drive (HDD). At the time of activation, the customer can control everything locally.
  • Page 350: Summary Of Concurrent Upgrade Functions

    MCI, or both. A CBU contract must be in place before the special code that enables this capability can be loaded on the server. The standard CBU contract provides for five 10-day tests and one 90-day disaster activation over a five-year period. Contact your IBM representative for details.
  • Page 351: Miscellaneous Equipment Specification Upgrades

    (SSR) and are restricted to adding Blades to existing entitlements. An MES upgrade requires IBM service personnel for the installation. In most cases, the time that is required for installing the LICCC and completing the upgrade is short. To better use the MES upgrade function, carefully plan the initial configuration to enable a concurrent upgrade to a target configuration.
  • Page 352: Mes Upgrade For Processors

    An MES upgrade for processors can concurrently add CPs, ICFs, zIIPs, IFLs, and SAPs to a z13s server by assigning available PUs that are on the CPC drawers, through LICCC. Depending on the quantity of the additional processors in the upgrade, an additional CPC drawer might be required before the LICCC is enabled.
  • Page 353: Memory Sizes And Upgrades For The N10

    For each additional 8 GB of memory to be activated, one FC 1903 must be added, and one FC 1993 must be removed. See Figure 8-3 for details of memory configurations and upgrades for a z13s Model N10 (physical RAIM memory versus client-addressable memory).
  • Page 354: Memory Sizes And Upgrades For The Single Drawer N20

    Figure 8-4 shows the memory configurations and upgrades for a z13s Model N20 single CPC drawer (physical RAIM memory versus client-addressable memory; for example, 128 GB of addressable memory splits into 40 GB for the HSA and 88 GB for client use).
  • Page 355: Feature On Demand Window For Zbx Blades Features Hwms

    114 (z114), the HWMs are stored in the processor and memory LICCC record. On zBC12, zEC12, z13, and z13s servers, the HWMs are found in the FoD LICCC record. The current zBX installed and staged feature values can be obtained by using the Perform a Model Conversion function on the SE, or from the HMC by using a Single Object Operation (SOO) to the servers’...
  • Page 356: Permanent Upgrade Through The Ciu Facility

    Because the system upgrade is always disruptive, the zBX upgrade is also a disruptive task. If you install a new-build z13s server and plan to take over an existing zBX that is attached to another existing CPC, and that zBX is not a Model 004, the conversion to a zBX Model 004 can be done during the installation phase of the z13s server.
  • Page 357: Permanent Upgrade Order Example

    LPAR usage rather than on the server total capacity. See 7.16.3, “Advanced Workload License Charges” on page 307 for more information about the WLC. Figure 8-7 illustrates the CIU facility process on IBM Resource Link (ibm.com/servers/resourcelink).
  • Page 358: Ciu-Eligible Order Activation Example

    Warning messages are issued if you select invalid upgrade options. The process enables only one permanent CIU-eligible order for each server to be placed at a time. For a tutorial, see the following website: https://www.ibm.com/servers/resourcelink/hom03010.nsf/pages/CIUInformation
  • Page 359: Machine Profile

    It supports upgrades only within the bounds of the currently installed hardware. 8.4.2 Retrieval and activation After an order is placed and processed, the appropriate upgrade record is passed to the IBM support system for download.
  • Page 360: Ibm Z13S Perform Model Conversion Window

    1. In the Perform Model Conversion window, select Permanent upgrades to start the process, as shown in Figure 8-10. Figure 8-10 IBM z13s Perform Model Conversion window 2. The window provides several possible options. If you select the Retrieve and apply option, you are prompted to enter the order activation number to start the permanent upgrade, as shown in Figure 8-11.
  • Page 361: Overview

    8.5.1 Overview The capacity for CPs is expressed in MSUs. The capacity for specialty engines is expressed in the number of specialty engines. Capacity tokens are used to limit the resource consumption for all types of processor capacity. Capacity tokens are introduced to provide better control over resource consumption when On/Off CoD offerings are activated.
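    As a rough illustration of how capacity tokens meter On/Off CoD consumption (the figures below are invented for illustration; actual token accounting follows the terms of the offering, with CP tokens typically expressed in MSU-days and specialty engine tokens in engine-days):

      Tokens in the record:     180 MSU-days
      Temporary capacity used:    6 MSU
      Maximum active time:      180 MSU-days / 6 MSU = 30 days before the
                                record deactivates (or earlier, if deactivated manually)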
  • Page 362: Ordering

    (24 hours). The resources remain active until they are deactivated, until one of the resource tokens is consumed, or until the record expires, usually 180 days after its installation. If one capacity token type is consumed, resources from the entire record are deactivated.
  • Page 363 As an example, for a z13s server with capacity identifier D02, you can used the following methods to deliver a capacity upgrade through On/Off CoD: One option is to add CPs of the same capacity setting. With this option, the MCI can be changed to a D03, which adds one extra CP (making a three-way configuration) or to a D04, which adds two extra CPs (making a four-way configuration).
  • Page 364: Order On/Off Cod Record Window

    Resource Link provides the interface that enables you to order a dynamic upgrade for a specific z13s server. You are able to create, cancel, and view the order. Configuration rules are enforced, and only valid configurations are generated based on the configuration of the individual z13s server.
  • Page 365: On/Off Cod Order Example

    In addition, you can perform administrative testing. During this test, no additional capacity is added to the z13s server, but you can test all of the procedures and automation for the management of the On/Off CoD facility. Figure 8-13 is an example of an On/Off CoD order on the Resource Link web page.
  • Page 366: Activation And Deactivation

    On/Off CoD upgrade is active. Repair capability during On/Off CoD If the z13s server requires service while an On/Off CoD upgrade is active, the repair can take place without affecting the temporary capacity.
  • Page 367: The Capacity Provisioning Infrastructure

    “Store system information instruction” on page 351 for more details. 8.5.6 IBM z/OS capacity provisioning The z13s provisioning capability, combined with CPM functions in z/OS, provides a flexible, automated process to control the activation of On/Off Capacity on Demand. The z/OS provisioning environment is shown in Figure 8-14.
  • Page 368 The CPCC is not required for regular CPM operation. The CPCC will over time be moved into the z/OS Management Facility (z/OSMF). Parts of the CPCC have been included in z/OSMF V1R13. IBM z13s Technical Guide...
  • Page 369: A Capacity Provisioning Domain

    The provisioning infrastructure is controlled by the CPM through the Capacity Provisioning Domain (CPD), which is governed by the Capacity Provisioning Policy (CPP). An example of a CPD is shown in Figure 8-15. Figure 8-15 A Capacity Provisioning Domain The CPD represents the central processor complexes (CPCs) that are controlled by the CPM.
  • Page 370 Planning considerations for using automatic provisioning Although only one On/Off CoD offering can be active at any one time, several On/Off CoD offerings can be present on the zBC12. Changing from one to another requires that the active IBM z13s Technical Guide...
  • Page 371: Capacity For Planned Event

    In a situation where a CBU offering is active on a z13s server, and that CBU offering is 100% or more of the base capacity, activating any On/Off CoD is not possible. The On/Off CoD offering is limited to 100% of the base configuration.
  • Page 372 The processors that can be activated by CPE come from the available unassigned PUs on any installed CPC drawer. CPE features can be added to an existing z13s server non-disruptively. A one-time fee is applied for each individual CPE event, depending on the contracted configuration and its resulting feature codes.
  • Page 373: Capacity Backup

    CBU is the quick, temporary activation of PUs, and is available in the following durations:...
  • Page 374 PUs. Therefore, your CBU contract requires more CBU features if the capacity setting of the CPs is changed. CBU can add CPs through LICCC only, and the z13s server must have the correct number of higher CPC drawers installed to support the required upgrade. CBU can change the MCI to a...
  • Page 375: Cbu Activation And Deactivation

    CBU option to activate the 90-day period. Image upgrades After the CBU activation, the z13s server can have more capacity, more active PUs, or both. The additional resources go into the resource pools, and are available to the LPARs. If the LPARs must increase their share of the resources, the LPARs’...
  • Page 376: Example Of C02 With Three Cbu Features

    CPs, which means that you can activate D02, E02, and F02 through Z02. (Figure 8-16, Example of C02 with three CBU features, charts the capacity levels from 1-way through 6-way plus specialty engines.)
  • Page 377: Automatic Cbu For Geographically Dispersed Parallel Sysplex

    While CBU is active, you can change the target configuration at any time. 8.7.3 Automatic CBU for Geographically Dispersed Parallel Sysplex The intent of the IBM Geographically Dispersed Parallel Sysplex (GDPS) CBU is to enable automatic management of the PUs provided by the CBU feature during a server or site failure.
  • Page 378: Processors

    Enabling and using the additional processor capacity is transparent to most applications. However, certain programs depend on processor model-related information (for example, ISV products). You need to consider the effect on the software that is running on a z13s server when you perform any of these configuration upgrades.
  • Page 379: Stsi Output On Z13S Server

    The STSI instruction returns the MCI for the permanent configuration, and the MCI for any temporary capacity. This process is key to the functioning of CoD offerings. Figure 8-17 shows the relevant output from the STSI instruction. Figure 8-17 STSI output on z13s server
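    For context, the MCI that STSI reports is also visible from the z/OS operator console through the D M=CPU command. The following rendering is schematic only; the message layout and values, including the sample MCI D02 on machine type 2965 and the sequence number, are illustrative assumptions, not verbatim output:

      D M=CPU
      IEE174I hh.mm.ss DISPLAY M
      ...
      CPC SI = 2965.D02.IBM.02.00000000000A1B2C

    The second field of the CPC SI line carries the model capacity identifier, so a temporary upgrade that changes the MCI is reflected there.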
  • Page 380 Online permanent upgrades, On/Off CoD, CBU, and CPE can be used to concurrently upgrade a z13s server. However, certain situations require a disruptive task to enable the new capacity that was recently added to the CPC. Several of these situations can be avoided if...
  • Page 381: Summary Of Capacity On-Demand Offerings

    CPs supported by the OS. IBM z/OS V1R11, z/OS V1R12, and z/OS V1R13 with program temporary fixes (PTFs) support up to 100 processors. IBM z/OS V2R1 also supports up to 141-way in non-SMT mode and up to 128-way in SMT mode. IBM z/VM supports up to 64 processors.
  • Page 382 In addition, the activation of the CBU does not require a password. z13s servers can have up to eight offerings installed at the same time, with the limitation that only one of them can be an On/Off CoD offering. The others can be any combination of types.
  • Page 383: Chapter 9. Reliability, Availability, And Serviceability

    The design of the memory on z13s servers is based on the fully redundant memory infrastructure, redundant array of independent memory (RAIM). RAIM was first introduced with the z196.
  • Page 384: The Ras Strategy

    The RAS strategy is to manage change by learning from previous generations and investing in new RAS function to eliminate or minimize all sources of outages. Enhancements to z Systems RAS designs are implemented on the z13s system through the introduction of new technology, structure, and requirements. Continuous improvements in RAS are associated with new features and functions to ensure that z Systems servers deliver exceptional value to clients.
  • Page 385 9.8, “IBM z Advanced Workload Analysis Reporter” on page 367. Deploying zAware Version 2: The zAware LPAR type is not supported on z13s or z13 at Driver level 27 servers. The IBM z Appliance Container Infrastructure (zACI) LPAR zACI type is used instead.
  • Page 386: Ras Functions

    A planned outage can be caused by a capacity upgrade or a driver upgrade. A planned outage is usually requested by the customer, and often requires pre-planning. The z13s design phase focused on this pre-planning effort, and was able to simplify or eliminate it.
  • Page 387: Scheduled Outages

    Memory subsystem improvements z13s servers use RAIM, which is a concept that is known in the disk industry as RAID. The RAIM design detects and recovers from DRAM, socket, memory channel, or DIMM failures. The RAIM design includes the addition of one memory channel that is dedicated to RAS data.
  • Page 388: Enhanced Driver Maintenance

    VLAN to the redundant internal Ethernet support network, the VLAN makes the support network itself easier to handle and more flexible. PCIe I/O drawer The PCIe I/O drawer is available for z13s servers. It can be installed concurrently, as can all PCIe I/O drawer-supported features. 9.4 Enhanced Driver Maintenance EDM is one more step toward reducing both the necessity for and the duration of a scheduled outage.
  • Page 389 Previous firmware updates, which require an initial machine load (IML) of the z13s server to be activated, can block the ability to run a concurrent driver upgrade. An icon on the Support Element (SE) allows you or your IBM service support representative (SSR) to define the concurrent driver upgrade sync point to be used for an EDM.
  • Page 390: Ras Capability For The Hmc And Se

    For more information, see 11.5.4, “HMC and SE microcode” on page 409. Support Element (SE) z13s servers are provided with two 1U System x servers inside the z Systems frame. One is always the primary SE and the other is the alternate SE. The primary SE is the active one.
  • Page 391: Ras Capability For Zbx Model 004

    Systems quality of service (QoS) to include RAS capabilities. The zBX Mod 004 offering provides extended service capability independent of the z13s hardware management structure. zBX Model 004 is a stand-alone machine that can be added to an existing ensemble HMC as an individual ensemble member.
  • Page 392: Zbx Firmware

    Model 004. When upgrading a z114 or a zBC12 with zBX to a z13s server, the zBX must also be upgraded from a Model 002 or a Model 003 to a zBX Model 004.
  • Page 393 The zBX Model 004 is based on the BladeCenter and blade hardware offerings that contain IBM certified components. zBX Model 004 BladeCenter and blade RAS features are extended considerably for IBM z Systems servers: Hardware redundancy at various levels: – Redundant power infrastructure –...
  • Page 394: Considerations For Powerha In Zbx Environment

    PowerHA can be configured to perform automated service recovery for the applications that run in virtual servers that are deployed in zBX. PowerHA automates application failover from one virtual server in an IBM System p blade to another virtual server in a different System p blade with a similar configuration.
  • Page 395: Typical Powerha Cluster Diagram

    It represents a first in a new generation of “smart monitoring” products with pattern-based message analysis. IBM zAware runs as a firmware virtual appliance in a z13s LPAR. It is an integrated set of analytic applications that creates a model of normal system behavior that is based on prior system data.
  • Page 396: Flash Express Ras Components

    IBM zAware improves the overall RAS capability of z13s servers by providing these advantages: identifying when and where to look for a problem, drilling down to identify the cause of the problem, improving problem determination in near real time, and significantly reducing problem determination effort. For more information about IBM zAware, see Appendix B, “IBM z Systems Advanced...
  • Page 397: Chapter 10. Environmental Requirements

    The physical installation of z13s servers has a number of options, including raised floor and non-raised floor options, cabling from the bottom of the frame or off the top of the frame, and the option to have a high-voltage DC power supply directly into the z13s server, instead of the usual AC power supply.
  • Page 398: Ibm Z13S Cabling Options

    10.1 IBM z13s power and cooling z13s servers have a number of options for installation on a raised floor, or on a non-raised floor. Furthermore, the cabling can come into the bottom of the machine, or into the top of the machine.
  • Page 399: Top Exit I/O Cabling Feature

    Figure 10-2 shows the frame extensions, also called chimneys, that are installed at each corner on the left side of the A frame when the Top Exit I/O Cabling feature (FC 7920) is ordered. The bottom of the chimney is closed with welded sheet metal. (Figure 10-2, Top Exit I/O cabling feature.) The Top Exit I/O Cabling feature adds 15 cm (6 in.) to the width of the frame and about 60 lbs (27.3 kg) to the weight.
  • Page 400: Internal Battery Feature

    The power requirements depend on the number of central processor complex (CPC) drawers and I/O drawers that are installed. Table 10-1 lists the maximum power requirements for z13s servers. These numbers assume the maximum memory configurations, with all drawers fully populated and all fanout cards installed.
  • Page 401: Balanced Power Plan Ahead

    Installation Manual for Physical Planning, GC28-6953. The front and the rear of the z13s frame dissipate separate amounts of heat. Most of the heat comes from the rear of the system. To calculate the heat output expressed in kilo British Thermal Units (kBTU) per hour for z13s configurations, multiply the table entries from Table 10-2 on page 372 by 3.4.
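    As a worked example of this conversion (the 12.1 kW input is borrowed from the zBX power table on page 406; any kW entry works the same way):

      12.1 kW x 3.4 = 41.14 kBTU per hour

    which is exactly the heat output figure that accompanies that utility power value later in this chapter.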
  • Page 402: Recommended Environmental Conditions

    The IBM z13s server is the first z Systems server to meet the ASHRAE Class A3 Required specifications. Environmental specifications are presented in two categories: Recommended and Required.
  • Page 403: Weights And Dimensions

    10.2.2 Four-in-one (4-in-1) bolt-down kit A bolt-down kit can be ordered for the z13s frame. The kit provides hardware to enhance the ruggedness of the frame, and to tie down the frame to a concrete floor. The kit is offered in the...
  • Page 404: Ibm Zbx Configurations

    Note: Only zBX Model 004 is supported by z13s servers. zBX Model 004 only exists as a result of a miscellaneous equipment specification (MES) from a previous zBX Model 002 or Model 003. Also, the only additional features that can be ordered for a zBX Model 004 are the PS501 and HX5 Blades to fill empty slot entitlements.
  • Page 405: Ibm Zbx Cooling

    3.4. For 3-phase installations, phase balancing is accomplished with the power cable connectors between the BladeCenters and the PDUs. 10.3.3 IBM zBX cooling The individual BladeCenter configuration is air cooled with two hot-swap blower modules. The blower speeds vary depending on the ambient air temperature at the front of the BladeCenter unit and the temperature of the internal BladeCenter components: If the ambient temperature is 25°C (77°F) or below, the BladeCenter unit blowers run at...
  • Page 406 Heat released by configurations Table 10-6 shows the typical heat that is released by the various zBX solution configurations. Table 10-6 IBM zBX power consumption and heat output Number of blades Maximum utility power (kW) Heat output (kBTU/hour) 24.82 12.1 41.14...
  • Page 407: Rear Door Heat Exchanger (Left) And Functional Diagram

    (Figure 10-4, Rear Door Heat eXchanger (left) and functional diagram, shows the connection to building chilled water.) The IBM Rear Door Heat eXchanger also offers a convenient way to handle hazardous “hot spots”, which might help you lower the total energy cost of your data center.
  • Page 408: Top Exit Support For The Zbx

    The hardware components in the z Systems CPC and the optional zBX are monitored and managed by the Energy Management component in the SE and HMC. The GUIs of the SE and the HMC provide views, for instance, the System Activity Display or the Monitors Dashboard.
  • Page 409: Power Estimation Tool

    384. A few aids are available to plan and monitor the power consumption and heat dissipation of z13s servers. This section summarizes the tools that are available to plan and monitor the energy consumption of z13s servers:...
  • Page 410: Maximum Potential Power

    See 10.4.1, “Power estimation tool” on page 381. 10.4.3 System Activity Display and Monitors Dashboard The System Activity Display presents you with the current power usage, among other information, as shown in Figure 10-7. Figure 10-7 Power usage on the System Activity Display
  • Page 411: Ibm Systems Director Active Energy Manager

    Active Energy Manager Version 4.4 is a plug-in to IBM Systems Director Version 6.2.1 and is available for installation on Linux on z Systems. It can also run on Windows, Linux on IBM System x, and AIX and Linux on IBM Power Systems™. For more information, see Implementing IBM Systems Director Active Energy Manager 4.1.1, SG24-7780.
  • Page 412: Unified Resource Manager: Energy Management

    Enabled can perform power capping functions. z13s servers support power capping, which gives you the ability to limit the maximum power consumption and reduce cooling requirements. To use power capping, the Automate Firmware Suite (FC 0020) must be ordered.
  • Page 413 When capping is enabled for a z13s server, this capping level is used as a threshold for a warning message that informs you that the z13s server went above the set cap level. Being under the limit of the cap level is equal to the maximum potential power value (see 10.4.2, “Query maximum potential power”...
  • Page 414
  • Page 415 The Hardware Management Console (HMC) supports many functions and tasks to extend the management capabilities of z13s servers. When tasks are performed on the HMC, the commands are sent to one or more Support Elements (SEs), which then issue commands to their central processor complexes (CPCs) or IBM z BladeCenter Extension (zBX).
  • Page 416: Introduction To The Hmc And Se

    Systems CPCs and can be at a local or a remote site. If the z13s server is defined as a member of an ensemble, a pair of HMCs is required (a primary and an alternate). When a z13s server is defined as a member of an ensemble, certain restrictions apply.
  • Page 417: Driver Level 27 Hmc And Se Enhancements And Changes

    “Rack-mounted HMC” on page 392. New SE server The SEs are no longer two notebooks in one z13s server. They are now two 1U servers that are installed in the top of the CPC frame. For more information, see 11.2.3, “New Support Elements”...
  • Page 418: Diagnostic Sampling Authorization Control

    SE and HMC against unauthorized booting from removable media. An IBM Service Support Representative (IBM SSR) might perform the following tasks, which require booting from removable media: – Engineering change (EC) upgrade – Save or restore of Save/Restore data – Hard disk drive (HDD) restore Note: When a UEFI admin password is set, it must be available for the SSR to perform these tasks.
  • Page 419: Change Lpar Security

    – z Systems Hardware Management Console Operations Guide for Ensembles Version 2.13.1 – z Systems Support Element Operations Guide Version 2.13.0 (zBX 004) Alternatively, see the HMC and SE (Version 2.13.1) console help system or go to the IBM Knowledge Center at the following website: http://www.ibm.com/support/knowledgecenter...
  • Page 420: Rack-Mounted Hmc

    Systems servers. The HMC is a 1U IBM server and comes with an IBM 1U standard tray containing a monitor and a keyboard. The system unit and tray must be mounted in the rack in two adjacent 1U locations in the “ergonomic zone”...
  • Page 421: Ses Location

    With Driver 22 or later, you can perform a backup of primary SEs or HMCs to an FTP server. Note: If you do a backup to an FTP server for a z13s or zBX Model 004 server, ensure that you have set up a connection to the FTP server by using the Configure Backup Setting task.
  • Page 422: Configure Backup Ftp Server

    – Several previous z Systems SE backups Backup of primary SEs The backup for the primary SE of a z13s or zBX Model 004 server can be made to the following media: The primary SE HDD and alternate SE HDD...
  • Page 423: Backup Critical Data Destinations Of Ses

    It is no longer possible to do the primary SE backup to a UFD of a z13s or zBX Model 004 SE. Table 11-1 shows the SE Backup options for external media. Table 11-1 SE Backup options System Type UFD Media...
  • Page 424: Scheduled Operation For Hmc Backup

    An HMC with Version 2.13.1 can support different z Systems types. Some functions that are available on Version 2.13.1 and later are supported only when the HMC is connected to a z13s server with Driver 27.
  • Page 425: Hmc Feature Codes

    HMCs running older drivers. 11.2.6 HMC feature codes HMCs older than FC 0091 are not supported for z13s servers at Driver 27. FC 0091 (M/T 7327) can be carried forward; these HMCs need an upgrade to Driver 27. ECA398 is required and can be ordered from the local IBM support representative.
  • Page 426: Tree Style User Interface And Classic Style User Interface

    Removal of support for Classic Style User Interface on the Hardware Management Console and Support Element: The IBM z13 and z13s servers will be the last z Systems servers to support Classic Style User Interface. In the future, user interface enhancements will be focused on the Tree Style User Interface.
  • Page 427: Hmc And Se Connectivity

    J04 port on the SEs. Other z Systems servers and HMCs also can be connected to the switch. To provide redundancy, install two Ethernet switches. Only the switch (and not the HMC directly) can be connected to the SEs. Figure 11-10 shows the connectivity between HMCs and the SEs. Figure 11-10 HMC and SE connectivity The LAN ports for the SEs installed in the CPC are shown in Figure 11-11.
  • Page 428: Network Planning For The Hmc And Se

    Enterprise, SC28-6951. It is available on IBM Resource Link. For more information about the HMC settings that are related to access and security, see the HMC and SE (Version 2.13.1) console help system or go to the IBM Knowledge Center at the following link: http://www.ibm.com/support/knowledgecenter...
  • Page 429: Hmc Connectivity Examples

    Facility (RSF) Figure 11-12 HMC connectivity examples For more information, see the following resources: The HMC and SE (Version 2.13.1) console help system, or go to the IBM Knowledge Center at the following link: http://www.ibm.com/support/knowledgecenter After you get to the IBM Knowledge Center, click z Systems, and then click z13.
  • Page 430: Rsf Is Broadband-Only

    11.3.3 RSF is broadband-only RSF through a modem is not supported on the z13s HMC. Broadband is needed for hardware problem reporting and service. For more information, see 11.4, “Remote Support Facility” on page 403. 11.3.4 TCP/IP Version 6 on the HMC and SE The HMC and SE can communicate by using IPv4, IPv6, or both.
  • Page 431: Remote Support Facility

    11.4.1 Security characteristics The following security characteristics are in effect: RSF requests always are initiated from the HMC to IBM. An inbound connection is never initiated from the IBM Service Support System. All data that is transferred between the HMC and the IBM Service Support System is encrypted with high-grade Secure Sockets Layer (SSL)/Transport Layer Security (TLS) encryption.
  • Page 432: Rsf Connections To Ibm And Enhanced Ibm Service Support System

    11.4.2 RSF connections to IBM and Enhanced IBM Service Support System If the HMC and SE are at Driver 27, the driver uses a new remote infrastructure at IBM when the HMC connects through RSF for certain tasks. Check your network infrastructure settings to ensure that this new infrastructure will work.
  • Page 433: Hmc And Se Key Capabilities

    The HMC and SE have many capabilities. This section covers the key areas. For a complete list of capabilities, see the HMC and SE (Version 2.13.1) console help system or go to the IBM Knowledge Center at the following link: http://www.ibm.com/support/knowledgecenter...
  • Page 434: Logical Partition Management

    (0.01 - 255.0). LPAR group absolute capping This is the next step in the partition capping options available on z13s and z13 at Driver level 27 servers. A follow-on to LPAR absolute capping, it uses a similar methodology to enforce:
  • Page 435: Change Lpar Group Controls - Group Absolute Capping

    A group name, processor capping value, and partition membership are specified at the HW console: an absolute capacity cap is set by CPU type on a group of LPARs. Each of the partitions can consume capacity up to its individual limit as long as the group's aggregate consumption does not exceed the group absolute capacity limit.
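    A small hypothetical illustration of the aggregate rule (all values invented for clarity):

      Group absolute cap (CP type):  4.0 processors
      Individual caps:   LPAR A = 2.5, LPAR B = 2.5, LPAR C = 1.0
      Permitted state:   A = 2.0, B = 1.5, C = 0.5   (sum 4.0; each LPAR under its own cap)
      Prevented state:   A = 2.5, B = 2.5            (sum 5.0 would exceed the group cap)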
  • Page 436: Manage Trusted Signing Certificates

    Signing Certificates, is used to add trusted signing certificates. For example, if the certificate that is associated with the 3270 server on the IBM host is signed and issued by a corporate certificate authority, it must be imported, as shown in Figure 11-14.
  • Page 437: Hmc And Se Microcode

    The microcode has these characteristics: The driver contains EC streams. Each EC stream covers the code for a specific component of z13s servers. It has a specific name and an ascending number. The EC stream name and a specific number are one MCL.
  • Page 438: Microcode Terms And Interaction

    Figure 11-17 shows how the driver, bundle, EC stream, MCL, and MCFs interact with each other. Figure 11-17 Microcode terms and interaction
  • Page 439: System Information: Bundle Level

    Microcode installation by MCL bundle A bundle is a set of MCLs that are grouped during testing and released as a group on the same date. You can install an MCL to a specific target bundle level. The System Information window is enhanced to show a summary bundle level for the activated level, as shown in Figure 11-18.
  • Page 440: Hmc Monitor Task Group

    Monitor task group The Monitor task group on the HMC and SE includes monitoring-related tasks for the z13s server, as shown in Figure 11-19. Figure 11-19 HMC Monitor task group The Monitors Dashboard task The Monitors Dashboard task supersedes the System Activity Display (SAD). In the z13s server, the Monitors Dashboard task in the Monitor task group provides a tree-based view of resources.
  • Page 441: Monitors Dashboard Task

    Figure 11-20 shows an example of the Monitors Dashboard task. Figure 11-20 Monitors Dashboard task Starting with Driver 27, you can display the activity for an LPAR by processor type, as shown in Figure 11-21. Figure 11-21 Display the activity for an LPAR by processor type The Monitors Dashboard is enhanced to show SMT usage, as shown in Figure 11-22.
  • Page 442: Monitors Dashboard: Crypto Function Integration

    LPAR, as shown in Figure 11-23. Figure 11-23 Monitors Dashboard: Crypto function integration For Flash Express, a new window is added, as shown in Figure 11-24. Figure 11-24 Monitors Dashboard - Flash Express function integration
  • Page 443: Environmental Efficiency Statistics

    The task shows a list of all installed or staged LIC configuration code (LICCC) records to help you manage them. It also shows a history of recorded activities. The HMC for z13s servers has these CoD capabilities: SNMP API support: – API interfaces for granular activation and deactivation –...
  • Page 444: Features On Demand Support

    PPS, a time accuracy of 100 milliseconds to the ETS is maintained. z13s servers cannot be in the same CTN with a System z10 (n-2) or earlier systems. As a consequence, z13s servers cannot become a member of an STP mixed CTN.
  • Page 445 Monitor the status of the CTN. Monitor the status of the coupling links that are initialized for STP message exchanges. For diagnostic purposes, the PPS port state on a z13s server can be displayed and fenced ports can be reset individually.
  • Page 446: Customize Console Date And Time

    As shown in Figure 11-26, the NTP option is the recommended option if an NTP server is available. If an NTP server is not available for this HMC, any defined CPC SE can be selected after you select Selected CPCs. Figure 11-26 Customize Console Date and Time
  • Page 447: Timing Network Window With Scheduled Dst And Scheduled Leap Second Offset

    The Timing Network window now includes the next scheduled Daylight Saving Time (DST) change and the next leap second adjustment, as shown in Figure 11-27. The schedules shown are those for the next DST change (set either automatically or through a scheduled adjustment) and for the next leap second change (set through a scheduled adjustment).
  • Page 448: Ntp Client And Server Support On The Hmc

    Systems (such as AIX, Microsoft Windows, and others) that have NTP clients. The HMC can act as an NTP server. With this support, the z13s server can get time from the HMC without accessing a LAN other than the HMC/SE network. When the HMC is used as an NTP server, it can be configured to get the NTP source from the Internet.
  • Page 449: Hmc Ntp Broadband Authentication Support

    NTP symmetric key and autokey authentication With symmetric key and autokey authentication, the highest level of NTP security is available. HMC Level 2.12.0 and later provide windows that accept and generate key information to be configured into the HMC NTP configuration. They can also issue NTP commands, as shown in Figure 11-28.
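    For background, symmetric key authentication on a generic NTP client is configured along the following lines. This sketch uses standard ntp.conf/ntp.keys syntax with invented key values; it illustrates the mechanism that the HMC windows manage for you and is not the HMC configuration dialog itself:

      # /etc/ntp.keys - key ID 1, MD5 ("M") key type, shared secret (illustrative)
      1 M examplesharedsecret

      # /etc/ntp.conf
      keys /etc/ntp.keys
      trustedkey 1
      server ntp1.example.com key 1

    The HMC panels accept or generate equivalent key information and apply it to the HMC NTP configuration on your behalf.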
  • Page 450: Time Coordination For Zbx Components

    With z13s servers, you can offload the following HMC and SE log files for customer audit: Console event log...
  • Page 451 If none is provided, a message window is displayed that points to the Manage SSH Keys task to input a public key. The following tasks provide this support: Import/Export IOCDS Advanced Facilities FTP IBM Content Collector Load Audit and Log Management (Scheduled Operations only) FCP Configuration Import/Export...
  • Page 452: System Input/Output Configuration Analyzer On The Se And Hmc

    Systems servers, such as CPCs, images, and activation profiles. z13s servers contain a number of enhancements to the CIM systems management API. The function is similar to that provided by the SNMP API.
  • Page 453: Cryptographic Support

    Crypto Express5S provides a secure programming and hardware environment on which crypto processes are run. Each Crypto Express5S adapter can be configured by the installation as a Secure IBM Common Cryptographic Architecture (CCA) coprocessor, a Secure IBM Enterprise Public Key Cryptography Standards (PKCS) #11 (EP11) coprocessor, or an accelerator.
  • Page 454: Cryptographic Configuration Window

    This operation ensures that any changes that are made to the data are detected during the upgrade process by verifying the digital signature. It helps ensure that no malware can be installed on z Systems products during firmware updates. It enables the z13s Central Processor Assist for Cryptographic Function (CPACF) functions to comply with Federal Information Processing Standard (FIPS) 140-2 Level 1 for Cryptographic Licensed Internal Code (LIC) changes.
  • Page 455: Enabling Dynamic Partition Manager

    Setting up is a disruptive action. The selection of DPM mode of operation is done by using a function called Enable Dynamic Partition Manager under the SE CPC Configuration menu as shown in Figure 11-31. Figure 11-31 Enabling Dynamic Partition Manager Chapter 11.
  • Page 456: Hmc Welcome Window (Cpc In Dpm Mode)

    IBM zEnterprise System nodes, and up to eight zBX Model 004 systems. Each node is a zEnterprise CPC or a zBX Model 004. The ensemble provides an integrated way to manage virtual server resources and the workloads that can be deployed on those resources.
  • Page 457: Unified Resource Manager Functions And Suites

    Figure 11-33 shows the Unified Resource Manager functions and suites. Figure 11-33 Unified Resource Manager functions and suites Overview Unified Resource Manager provides the following functions: Hypervisor management Provides tasks for managing the hypervisor lifecycle, managing storage resources, providing RAS and first-failure data capture (FFDC) features, and monitoring the supported hypervisors.
  • Page 458 Manage suite Provides the Unified Resource Manager function for core operational controls, installation, and energy monitoring. It is configured by default and activated when an ensemble is created. IBM z13s Technical Guide...
  • Page 459 Table 11-3 lists the feature codes that must exist to enable Unified Resource Manager. To get ensemble membership, ensure that you also have FC 0025 for the zBC12. Restriction: No new features can be ordered for IBM z Unified Resource Manager with IBM z13s servers.
  • Page 460: Ensemble Definition And Management

    CPCs and zBXs (Model 004) as members along with images, workloads, virtual networks, and storage pools. If a z13s server is entered into an ensemble, the CPC Details task on the SE and the HMC reflects the ensemble name.
  • Page 461 Event monitoring: Displays ensemble-related events, but you cannot change or delete the event. HMC considerations when you use IBM zEnterprise Unified Resource Manager to manage an ensemble The following considerations are valid when you use Unified Resource Manager to manage an ensemble: All HMCs at the supported code level are eligible to create an ensemble.
  • Page 462: Hmc Availability

    CPC and with each zBX Model 004 SE. If the z13s node is defined as a member of an ensemble, the primary HMC is the authoritative controlling (stateful) component for the Unified Resource Manager configuration. It is also the stateful component for policies that have a scope that spans all of the managed CPCs and SEs in the ensemble.
  • Page 463: Ensemble Example With Primary And Alternate Hmcs

    System Control Hub (SCH) in the z13s server, the BPH in the zEC12, and the INMN switch in the zBX Model 004. The OSA-Express5S (or OSA-Express4S) 10 GbE ports (CHPID type OSX) in the z13s server are plugged with customer-provided 10 GbE cables to the IEDN zBX switch.
  • Page 464
  • Page 465: Chapter 12. Performance

    Chapter 12. Performance This chapter describes the performance considerations for IBM z13s. This chapter includes the following sections: Performance information LSPR workload suite Fundamental components of workload capacity performance Relative nest intensity LSPR workload categories based on relative nest intensity...
  • Page 466: Z13S To Zbc12, Z114, Z10 Bc, And Z9 Bc Performance Comparison

    12.1 Performance information The IBM z13s Model Z06 is designed to offer approximately 51% more capacity and eight times as much memory as the IBM zEnterprise BC12 (zBC12) Model Z06 system. Uniprocessor performance also has increased. A z13s Model Z01 offers, on average, performance improvements of 34% over the zBC12 Model Z01.
  • Page 467: Lspr Workload Suite

    LSPR contains the internal throughput rate ratios (ITRRs) for the zBC12 and the previous generation processor families. These ratios are based on measurements and projections that use standard IBM benchmarks in a controlled environment. The throughput that any user experiences can vary depending on the amount of multiprogramming in the user’s job stream, the I/O configuration, and the workload processed.
  • Page 468: Instruction Complexity

    Other caches are shared by multiple microprocessor cores. The term memory nest for a z Systems processor refers to the shared caches and memory along with the data buses that interconnect them.
  • Page 469: Memory Hierarchy On The Z13S One Cpc Drawer System (Two Nodes)

    Figure 12-2 shows a memory nest in a z13s single CPC drawer system with two nodes. Figure 12-2 Memory hierarchy on the z13s one CPC drawer system (two nodes) Workload capacity performance is sensitive to how deep into the memory hierarchy the processor must go to retrieve the workload instructions and data for running.
  • Page 470: Relative Nest Intensity

    The I/O rate can be influenced somewhat through buffer pool tuning. However, one factor, software configuration tuning, is often overlooked but can have a direct effect on RNI. This term refers to the number of address spaces (such as CICS...
  • Page 471: Lspr Workload Categories Based On Relative Nest Intensity

    application-owning regions (AORs) or batch initiators) that are needed to support a workload. This factor always has existed, but its sensitivity is higher with the current high frequency microprocessors. Spreading the same workload over more address spaces than necessary can raise a workload’s RNI. This increase occurs because the working set of instructions and data from each address space increases the competition for the processor caches.
  • Page 472: New Z/Os Workload Categories Defined

    However, as addressed in 12.5, “LSPR workload categories based on relative nest intensity” on page 443, the underlying performance-sensitive factor is how a workload interacts with the memory nest. (Footnote: The IBM zPCR tool reflects the latest IBM LSPR measurements. It is available at no extra charge at http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS1381)
  • Page 473: Workload Performance Variation

    BC to z10 BC will benefit less than the average when moving from z10 BC to z114. Nevertheless, the workload variability for moving from zBC12 to z13s is expected to be less than the last few upgrades.
  • Page 474: Main Performance Improvement Drivers With Z13S

    IBM technical support are recommended. 12.7.1 Main performance improvement drivers with z13s The z13s is designed to deliver new levels of performance and capacity for large-scale consolidation and growth. The following attributes and design points of the z13s contribute to overall performance and throughput improvements as compared to the zBC12.
  • Page 475 Clock frequency at 4.3 GHz IBM CMOS 14S0 22 nm SOI technology with IBM eDRAM technology The z13s design has the following enhancements as compared with the zBC12: Increased total number of PUs that are available on the system, from 18 to 26, and number of characterizable cores, from 13 to 20.
  • Page 476
  • Page 477 This appendix introduces the IBM z Appliance Container Infrastructure (zACI) framework available on z13s and z13 Driver Level 27 servers. It briefly describes the reason why IBM has created the framework and provides a description about how the zACI environment is intended to be used.
  • Page 478: A.1 What Is Zaci

    Appliances can be implemented as firmware or software, depending on the environment in which the appliance runs and the function it must provide. Introduced with z13 and z13s Driver Level 27 servers, the common framework called zACI is intended to provide a standard set of behaviors and operations that help simplify deploying infrastructure functions.
  • Page 479: Zaci Framework Basic Outline

    The new type of LPAR, called zACI, will be used to deploy the IBM zAware server application. The IBM zAware logical partition (LPAR) type is not supported on z13s or z13 at Driver level 27 servers. The zACI LPAR type is used instead.
  • Page 480: Ibm Zaware Image Profile Based On Zaci

    Existing IBM zAware LPARs will be automatically converted during Enhanced Driver Maintenance from Driver 22 to Driver 27. No reconfiguration of IBM zAware is required. A new icon will identify the zACI LPARs in the HMC user interface (UI), as shown in Figure A-3 (right - Classic UI, left - tree style UI).
  • Page 481 This appendix introduces IBM z Systems Advanced Workload Analysis Reporter (IBM zAware), which was first available with IBM zEnterprise zEC12. This feature is designed to offer near real-time, continuous learning, diagnostics and monitoring capabilities. IBM zAware helps you pinpoint and resolve potential problems quickly enough to minimize impacts to your business.
  • Page 482: B.1 Troubleshooting In Complex It Environments

    A z/OS sysplex might produce more than 40 GB of message traffic per day for its images and components alone. Application messages can significantly increase that number. There are more than 40,000 unique message IDs defined in z/OS and the IBM software that runs on z/OS. Independent software vendor (ISV) or client messages can increase that number.
  • Page 483: B.2 Introducing Ibm Zaware

    It can process message streams that do not have message IDs, which makes it possible to handle a broader variety of unstructured data. IBM zAware Version 2 delivered on z13s and z13 at Driver 27 servers provides the following capabilities:...
  • Page 484: Ibm Zaware Complements An Existing Environment

    IBM zAware and Tivoli® Service Management can be integrated by using the IBM zAware API to provide the following capabilities: Provide visibility into IBM zAware anomalies by using Event Management Improve mean time to repair (MTTR) through integration with existing problem...
  • Page 485: Ibm Zaware Shortens The Business Impact Of A Problem

    The IBM zAware GUI fits into existing monitoring structure and can also feed other processes or tools so that they can take corrective action for faster problem resolution.
  • Page 486: B.3 Understanding Ibm Zaware Technology

    Included in z/OS. b. Installable as Red Hat Package Manager (RPM). You can use IBM zAware along with problem diagnosis solutions that are included in z/OS with any large and complex z/OS installation with mission-critical applications and middleware.
  • Page 487: Basic Components Of The Ibm Zaware Environment

    IBM zAware runs in an independent LPAR as firmware. IBM zAware has the following characteristics: IBM zAware V2 requires the z13s or z13 systems with a priced feature code. Important: IBM zAware Version 2 server DR configuration requires z13 or z13s servers.
  • Page 488: Hmc Image Profile For An Ibm Zaware Lpar

    Figure B-4 shows IBM zAware Image Profile on the Hardware Management Console (HMC). IBM zAware is displayed on the General tab of the image profile. Figure B-4 HMC Image Profile for an IBM zAware LPAR Figure B-5 shows the tab of the image profile on the HMC. The setup information is configured on the tab of the image profile.
  • Page 489: Ibm Zaware Heat Map View Analysis

    ISV and application-generated messages, to build sysplex and LPAR detailed views in the IBM zAware graphical user interface (GUI). Linux on z Systems images must be configured so that the syslog daemon sends data to IBM zAware. IBM zAware can create model groups based on similar operational characteristics for Linux images that run on z Systems.
  • Page 490: Ibm Zaware Bar Score With Intervals

    An unstable system requires a larger interval score to be marked as... For each interval, IBM zAware provides details of all of the unique and unusual message IDs within the interval. This data includes how many, how rare, and how much the messages contributed to the interval's score (anomaly score, interval contribution score, rarity score, and appearance count) when they first appeared.
  • Page 491: B.3.1 Training Period

    B.3.3 IBM zAware ignore message support When a new workload is added to a system that is monitored by IBM zAware or is moved to a different system, it often generates messages that are not part of that system’s model.
  • Page 492: B.3.4 Ibm Zaware Graphical User Interface

    IBM zAware creates XML data with the status of the z/OS or Linux image and details about the message traffic. This data is rendered by the web server that runs as part of IBM zAware. The web server is accessible by using a standard web browser (Internet Explorer or Mozilla Firefox).
  • Page 493: Feature On Demand (Fod)

    (LICCC) record. From zEC12 onwards, the HWMs are in the FoD record. The IBM zAware feature availability and installed capacity are also controlled by the FoD LICCC record. The current IBM zAware installed and staged feature values can be obtained by using the Perform Model Conversion function on the SE or from the HMC by using a single object operation (SOO) to the server SE.
  • Page 494: Feature On Demand Window For Ibm Zaware Feature

    CPs (HWM), and round up to the nearest multiple of 10 (z13s or z13). Example: z13s 3 CPs + z13 20 CPs + 5 IFLs + zEC12 16 CPs = 44; 44 is rounded up to the nearest multiple of 10 = 50. A disaster recovery option (IBM zAware DR CP packs) is also available and indicates that IBM zAware is installed on a DR z13s, z13, zEC12, or zBC12 server.
  • Page 495: B.4.2 Ibm Zaware Operating Requirements

    This section describes the components that are required for IBM zAware. IBM zAware host system requirements The z13s and z13 servers can host the IBM zAware Version 2 server. The zEC12 or zBC12 can host an IBM zAware Version 1 server. The IBM zAware server requires a dedicated LPAR and runs its own self-contained firmware stack.
  • Page 496: B.5 Configuring And Using Ibm Zaware Virtual Appliance

    IBM zAware monitored client requirements IBM zAware monitored clients can be in the same CPC as the IBM zAware host system or in different CPCs. They can be in the same site or multiple sites. Distance between the IBM zAware host systems and monitored clients can be up to a maximum of 3500 km (2174.79 miles).
  • Page 497 Verify that each z/OS system meets the sysplex configuration and OPERLOG requirements for IBM zAware virtual appliance monitored clients. h. Configure the z/OS system logger to send data to the IBM zAware virtual appliance server. i. Configure the Linux SYSLOG to send data to the IBM zAware virtual appliance server.
  • Page 498
  • Page 499 This appendix includes the following sections: C.1, “Channel options supported on z13s servers” on page 472 C.2, “Maximum unrepeated distance for FICON SX features” on page 473 © Copyright IBM Corp. 2016. All rights reserved.
  • Page 500: C.1 Channel Options Supported On Z13S Servers

    12 fibers, and the MPO connector of the ICA connection has two rows of 12 fibers. The electrical Ethernet cable for the Open Systems Adapter (OSA) connectivity is connected through an RJ45 jack. Table C-1 lists the attributes of the channel options that are supported on z13s servers. Table C-1 z13s channel feature support Channel feature...
  • Page 501: C.2 Maximum Unrepeated Distance For Ficon Sx Features

    Table C-1 columns: Channel feature | Feature codes | Bit rate in Gbps (or stated) | Cable type | Maximum unrepeated distance | Ordering information. Recoverable rows from this excerpt: 10GbE Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) Express — FC 0411, 300 m, new build; Parallel Sysplex ICA (PCIe-O SR) — FC 0172, 8 GBps, 150 m, new build; 100 m...
  • Page 502
  • Page 503 Communications This appendix briefly describes the optional Shared Memory Communications (SMC) functionality implemented on IBM z Systems servers as Shared Memory Communications - Remote Direct Memory Access (SMC-R) and the new Shared Memory Communications - Direct Memory Access (SMC-D) of the IBM z13 and z13s servers.
  • Page 504: D.1 Shared Memory Communications Overview

    31 partitions and the two ports are enabled to be used in z/OS. IBM z13 and z13s servers improve the usability of the RoCE feature by using existing z Systems servers and industry standard communications technology along with emerging new...
  • Page 505: Rdma Technology Overview

    A reliable “lossless” Ethernet network fabric (LAN for layer 2 data center network distance) An RDMA network interface card (RNIC) (Figure D-1, RDMA technology overview: the memory of Host A and Host B is linked by RNICs and remote keys, Rkey A and Rkey B, across the Ethernet fabric.) RDMA technology is now available on Ethernet. RoCE uses an existing Ethernet fabric (switches with Global Pause enabled) and requires advanced Ethernet hardware (RDMA-capable NICs on the host).
  • Page 506: Dynamic Transition From Tcp To Smc-R

    Peripheral Component Interconnect Express (PCIe) 10GbE RoCE Express adapter. For example, one LPAR cannot cause errors visible to other virtual functions or other LPARs. Each operating system LPAR has its own application queue in its own memory space.
  • Page 507: Shared Roce Mode Concepts

    Figure D-3 shows the concept of Shared RoCE mode: z/OS Communications Server implements a transparent device driver for the PCIe RoCE feature (with SR-IOV); multiple z/OS instances share the adapter through virtual functions, while firmware support partitions own the RoCE PCI physical function.
  • Page 508: 10Gbe Roce Express

    (without any switch), this type of direct physical connectivity forms a single physical point-to-point connection, disallowing any other connectivity with other LPARs (for example, any other SMC-R peers). Although this is a viable option for test scenarios, it is not practical (nor recommended) for production deployment.
  • Page 509: 10Gbe Roce Express Sample Configuration

    If the IBM 10GbE RoCE Express features are connected to 10 GbE switches, the switches must support the following requirements: Global Pause function frame (as described in the IEEE 802.3x standard) should be enabled Priority Flow Control (PFC) disabled No firewalls, no routing, and no intraensemble data network (IEDN) The maximum supported unrepeated point-to-point distance is 300 meters (984.25 ft.).
  • Page 510: Rnic And Osd Interaction

    PNET IDs (defined in HCD). Simultaneous use of both 10 GbE ports on a 10GbE RoCE Express feature and sharing by up to 31 LPARs on the same CPC is available on z13 and z13s servers. An OSA-Express feature, defined as channel-path identifier (CHPID) type OSD, is required to establish SMC-R.
  • Page 511: D.2.7 Hardware Configuration Definitions

    (drawer and slot) determines the physical channel identifier (PCHID). Only one FID can be defined for dedicated mode. Up to 31 FIDs can be defined for shared mode (on a z13 and a z13s server) for each physical card (PCHID). Virtual Function ID A Virtual Function ID is defined when PCIe hardware is shared between LPARs.
  • Page 512: Physical Network Id Example

    SMC-R can be implemented over RoCE to communicate memory to memory, avoiding TCP/IP CPU consumption, reducing network latency, and improving wall clock time. It focuses on “time to value” and widespread performance benefits for all TCP socket-based middleware.
  • Page 513: Reduced Latency And Improved Wall Clock Time With Smc-R

    SMC-R cannot be used in IEDN. Software IOCP required level for z13s: The required level of IOCP for the z13s server is V5 R2 L1 (IOCP 5.2.1) or later with program temporary fixes (PTFs). For more information, see the following manuals: z Systems Stand-Alone Input/Output Configuration Program User's Guide, SB10-7166.
  • Page 514: D.2.10 Smc-R Use Cases For Z/Os To Z/Os

    You cannot roll back to previous z/OS releases. z/OS guests under z/VM 6.3 are supported to use 10GbE RoCE features. IBM is working with its Linux distribution partners to include support in future Linux on z Systems distribution releases.
  • Page 515: Sysplex Distributor Before Roce

    CICS to CICS connectivity through Internet Protocol interconnectivity (IPIC) Optimized Sysplex Distributor intra-sysplex load balancing Dynamic virtual IP address (VIPA) and Sysplex Distributor support are often deployed for high availability (HA) and scalability in the sysplex environment. When the clients and servers are all in the same sysplex, SMC-R offers a significant performance advantage.
  • Page 516: Sysplex Distributor After Roce

    – SMC-R is not supported on any other interface types. Note: The IPv4 INTERFACE statement (IPAQENET) must also specify an IP subnet mask. Repeat in each host (at least two hosts). Start the TCP/IP traffic and monitor it with Netstat and IBM VTAM displays.
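    To make these steps concrete, here is a minimal TCP/IP profile sketch for enabling SMC-R. The PFIDs, interface name, port name, and IP address are invented placeholders; substitute the values from your own HCD/IOCDS definitions:

      GLOBALCONFIG SMCR PFID 0018 PORTNUM 1 PFID 0019 PORTNUM 2
      INTERFACE OSDINT1 DEFINE IPAQENET
         PORTNAME OSAPORT1
         IPADDR 192.168.10.1/24

    After activation, a display such as D TCPIP,,NETSTAT,DEVLINKS can be used to confirm that the associated SMC-R link groups are active.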
  • Page 517: D.3 Shared Memory Communications - Direct Memory Access

    With the z13 (Driver 27) and z13s servers, IBM introduces SMC-D. SMC-D maintains the socket-API transparency aspect of SMC-R so that applications that use TCP/IP communications can benefit immediately without requiring any application software or IP topology changes.
  • Page 518: Connecting Two Lpars On The Same Cpc Using Smc-D

    D.3.1 Internal Shared Memory technology overview ISM is a new function supported by the z13 and z13s machines. It is the firmware that provides the connectivity for shared memory access between multiple operating systems within the same CPC. It provides the same functionality as SMC-R, but without physical adapters such as the RoCE card, using virtual ISM devices instead.
  • Page 519: Dynamic Transition From Tcp To Smc-D By Using Two Osa-Express Adapters

    Socket application data is exchanged through ISM (write operations). The TCP connection remains to control the SMC-D connection. This model preserves many critical existing operational and network management features of TCP/IP. (Figure: middleware/application data exchanged between z/OS LPAR A and z/OS LPAR B on an IBM z13 or z13s.)
  • Page 520: D.3.3 Internal Shared Memory - Introduction

    ISM supports up to 32 VCHIDs per CPC on z13 or z13s servers; each VCHID represents a unique internal shared memory network with a unique physical network ID. Each VCHID supports up to 255 VFs (the maximum is 8 K VFs per z13 or z13s CPC), which provides significant scalability.
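    The two stated limits are consistent with each other, as a quick multiplication shows:

      32 VCHIDs x 255 VFs per VCHID = 8,160 VFs per CPC  (the stated "8 K" ceiling)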
  • Page 521: Concept Of Vpci Adapter Implementation

    Figure D-14 shows the basic concept of vPCI adapters: Figure D-14 Concept of vPCI adapter implementation Note: Basic z/VM support is available: Generic zPCI pass-through support starting from z/VM 6.3 The use of the zPCI architecture remains basically unchanged
  • Page 522: Smc-D Configuration That Uses Ethernet To Provide Connectivity

    Figure D-15 shows an SMC-D configuration in which Ethernet provides the connectivity. (Figure: two z/OS LPARs, z/OS A and z/OS B, on an IBM z13 or z13s, each host with its own IP network interface over QDIO.)
  • Page 523: Smc-D Configuration That Uses Hipersockets To Provide Connectivity

    The following are reasons why z/OS might use extra ISM FIDs: IBM supports up to 8 TCP/IP stacks per z/OS LPAR. SMC-D can use up to 8 FIDs or VFs (one per TCP/IP stack). IBM supports up to 32 ISM PNet IDs per CEC. Each TCP/IP stack can have access to a PNet ID that consumes up to 32 FIDs (one VF per PNet ID).
  • Page 524: Ism Adapters Shared Between Lpars

    D.3.7 Sample IOCP FUNCTION statements Example D-2 shows IOCP FUNCTION statements that describe the configuration that defines ISM adapters shared between LPARs on the same CPC, as shown in Figure D-17.
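    The excerpt does not reproduce Example D-2 itself, so the following FUNCTION statements are a hedged sketch of what ISM definitions of this kind typically look like; the FID, VCHID, VF, partition, and PNETID values are invented:

      FUNCTION FID=617,VCHID=7C0,VF=1,PART=((LP1),(=)),PNETID=NETA,TYPE=ISM
      FUNCTION FID=618,VCHID=7C0,VF=2,PART=((LP2),(=)),PNETID=NETA,TYPE=ISM

    Both functions share one VCHID (one ISM network with one PNETID) but use distinct FIDs and VFs, which is what lets two LPARs on the same CPC communicate through SMC-D.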
  • Page 525: Multiple LPARs Connected Through Multiple VLANs

    Two separate ISM networks (VCHIDs) require two unique ISM FIDs (VFs), whereas access to two VLANs on the same ISM network requires only a single ISM FID (VF). The figure illustrates this with five LPARs on a System z13 or z13s (Driver 27).
  • Page 526: D.3.8 Software Exploitation Of ISM

    PNet ID; it cannot be associated with both. D.3.9 SMC-D over ISM prerequisites SMC-D over ISM has these prerequisites: a z13s or z13 server (Driver 27): – HMC/SE for ISM vPCI functions. At least two z/OS V2.2 systems in two LPARs on the same CPC with the required service installed:
  • Page 527: SMCD Parameter In GLOBALCONFIG

    Table D-2 shows a list of required APARs per z/OS subsystem. Table D-2 Prerequisite APARs for SMC-D enablement:
        Subsystem             FMID     APAR
        z/OS BCP              HBB77A0  OA47913
        CommServer SNA VTAM   HVT6220  OA48411
        CommServer IP         HIP6220  PI45028
        HCD                   HCS77A0  OA46010
                              (also HCS7790, HCS7780, HCS7770, HCS7760, HCS7750)
        IOCP                  HIO1104
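    SMC-D is enabled in the TCP/IP profile through the SMCD parameter of the GLOBALCONFIG statement. A minimal sketch follows, with illustrative (not recommended) values for the optional subparameters:

        GLOBALCONFIG SMCD FIXEDMEMORY 256 TCPKEEPMININTERVAL 300

    FIXEDMEMORY limits the fixed 64-bit storage (in megabytes) that the stack can use for ISM buffers, and TCPKEEPMININTERVAL sets the minimum keepalive interval (in seconds) for the TCP path of SMC-D connections.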
  • Page 528: D.3.11 SMC-D Support Overview

    D.3.11 SMC-D support overview SMC-D requires an IBM z13 or IBM z13s server at driver level 27 or later for support of ISM. IOCP required level for z13s: The required level of IOCP for the z13s server is V5 R2 L1 or later with PTFs.
  • Page 529 Appendix E. IBM Dynamic Partition Manager. This appendix contains an introduction to IBM Dynamic Partition Manager (DPM) on z Systems. It describes how the Dynamic Partition Manager environment can be initialized and managed. This appendix includes the following sections:...
  • Page 530: E.1 What Is IBM Dynamic Partition Manager

    Note: When z Systems servers are set to run in DPM mode, only Linux virtual servers can be defined by using the provided user server definition interface. KVM for IBM z Systems is also supported in DPM mode.
  • Page 531: High-Level View Of DPM Implementation

    E.3 IBM z Systems servers and DPM Traditional IBM z Systems servers are highly virtualized with the goal of maximizing the utilization of compute and I/O (storage and network) resources, and simultaneously lowering the total amount of resources needed for workloads. For decades, virtualization has been embedded in the z Systems architecture and built into the hardware and firmware.
  • Page 532: Enabling DPM Mode Of Operation From The SE CPC Configuration Options

    Note: DPM is a feature code (FC 0016) that can be selected during the machine order process. After it is selected, a pair of OSA-Express5S 1000BASE-T adapters must be included in the configuration. Figure E-2 Enabling DPM mode of operation from the SE CPC configuration options
  • Page 533: Entering The OSA Ports That Will Be Used By The Management Network

    Express5S 1000BASE-T ports selected and cabled to the System Control Hubs (SCHs) during the z Systems installation. This window is shown in Figure E-3. Note: During the installation process, the IBM SSR connects the two OSA-Express5S 1000BASE-T cables to the ports provided on the SCHs.
  • Page 534: DPM Mode Welcome Window

    CPCs need to be defined to the HMC by using the Object Definition task and adding the CPC object. The welcome window shown in Figure E-4 appears only when at least one HMC-defined CPC is active in DPM mode. Otherwise, the traditional HMC window is presented when you log on to the HMC.
  • Page 535: Traditional HMC Welcome Window When No CPCs Are Running In DPM Mode

    The Guides option provides tutorials, videos, and information about what's new in DPM. The Learn More option covers the application programming interfaces (APIs), and the Support option takes the user to the IBM Resource Link website.
  • Page 536 10.Summary: This wizard window provides a view of all of the defined partition resources. The final step after the partition creation process is to start it. After the partition is started (Status: Active), the user can start the messages or the Integrated ASCII console interface to operate it. IBM z13s Technical Guide...
  • Page 537: DPM Wizard Welcome Window Options

    – Environmentals: ambient temperature in Fahrenheit – Adapters that exceed a user-predefined threshold value – Overall port utilization in the last 36 hours – Utilization details (available by selecting one of the performance indicators) – Manage Adapters task
  • Page 538: E.4.2 Summary

    DPM provides simplified z Systems hardware and virtual infrastructure management that includes integrated dynamic I/O management for users who intend to run Linux on z Systems with KVM for IBM z as the hypervisor, running in a partition. The new mode, DPM, provides partition lifecycle and dynamic I/O management capabilities
  • Page 539 Appendix F. KVM for IBM z Systems. This appendix contains an introduction to open virtualization with KVM for IBM z Systems and a description of how the environment can be managed. This appendix includes the following sections: Why KVM for IBM z Systems
  • Page 540: F.1 Why KVM For IBM z Systems

    With KVM for IBM z Systems (KVM for IBM z), IT organizations can unleash the power of kernel-based virtual machine (KVM) open virtualization to improve productivity, and to simplify administration and management for a quick start on their journey to a highly virtualized environment on IBM z Systems servers.
  • Page 541: Appendix F. KVM For IBM z Systems

    (LPUs), memory, and I/O resources. LPUs are defined and managed by PR/SM and are perceived by KVM for IBM z as real CPUs. PR/SM is responsible for accepting requests for work on LPUs and dispatching that...
  • Page 542: KVM Running In z Systems LPARs

    Fibre Channel Protocol (FCP): A standard protocol for communicating with disk and tape devices. FCP supports Small Computer System Interface (SCSI) devices. Linux on z Systems and KVM for IBM z Systems can make use of both protocols by using the FICON features.
  • Page 543: F.2.3 Hardware Management Console

    The HMC can set up, manage, monitor, and operate one or more z Systems platforms. It manages and provides support utilities for the hardware and its LPARs. The HMC is used to install KVM for IBM z and to provide an interface to the IBM z Systems hardware for configuration management functions.
  • Page 544: Open Source Virtualization (KVM For IBM z Systems)

    F.2.5 What comes with KVM for IBM z Systems KVM for IBM z Systems provides standard Linux and KVM interfaces for operational control of the environment, such as standard drivers and application programming interfaces (APIs), as well as system emulation support and virtualization management. Shipped as part of KVM for...
  • Page 545 Nagios Remote Plugin Executor (NRPE) can be used with KVM for IBM z. NRPE is an addon that allows you to run plug-ins on KVM for IBM z. You can monitor resources, such as disk usage, CPU load, and memory usage. For more information about how to...
  • Page 546: KVM For IBM z Systems Management Interface

    Manager is created and maintained by IBM and built on OpenStack. Figure F-3 KVM for IBM z Systems management interface KVM for IBM z Systems can be managed just like any other KVM hypervisor by using the Linux CLI. The Linux CLI provides a familiar experience for platform management.
  • Page 547: KVM Management By Using The libvirt API Layers

    – Stop a virtual machine – Suspend a virtual machine – Resume a virtual machine – Delete a virtual machine – Take and restore snapshots For more information about libvirt, go to: http://libvirt.org
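    As a minimal sketch of these lifecycle operations through the standard virsh CLI (the guest name lnxguest1 and snapshot name snap1 are hypothetical):

        virsh start lnxguest1
        virsh suspend lnxguest1
        virsh resume lnxguest1
        virsh snapshot-create-as lnxguest1 snap1
        virsh snapshot-revert lnxguest1 snap1
        virsh shutdown lnxguest1
        virsh undefine lnxguest1

    Each command maps to a corresponding libvirt API call, so the same operations can also be driven programmatically or through OpenStack.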
  • Page 548: F.3.1 Hypervisor Performance Manager

    Neutron agent for Open vSwitch, Ceilometer support, and Cinder support. The OpenStack compute node has an abstraction layer for compute drivers to support different hypervisors, including QEMU and KVM for IBM z through the libvirt API layer (Figure F-4 on page 519).
  • Page 549 Appendix G. Native Peripheral Component Interconnect Express. This appendix introduces the design of native Peripheral Component Interconnect Express (PCIe) feature management on z13s and includes the concepts of the integrated firmware processor (IFP) and resource groups (RGs). The following topics are included: Design of native PCIe I/O adapter management
  • Page 550: G.1 Design Of Native PCIe I/O Adapter Management

    (POR) phase. Although the IFP is allocated to one of the physical PUs, it is not visible to users. In an error or failover scenario, PU sparing also applies to the IFP, with the same rules as for other PUs.
  • Page 551: I/O Domains And Resource Groups Managed By The IFP

    I/O domain this adapter is in. As shown in Figure G-1, each I/O domain in a PCIe I/O drawer of a z13s is logically attached to one of the two resource groups: I/O domains 0 and 2 in the front of the drawer are attached to RG1, and I/O domains 1 and 3 are attached to RG2.
  • Page 552: G.1.4 Management Tasks

    (if applicable). G.2 Native PCIe feature plugging rules The following are the maximum numbers of native PCIe adapters that can be installed in a z13s: A maximum of four Flash Express features, each of which requires two Flash Express adapters configured; these can be installed only into slots 1 and 14, or slots 25 and 33, of a PCIe I/O drawer.
  • Page 553: Sample Output Of AO Data Or PCHID Report

    Although they are native PCIe features, they are not part of any resource group. The Flash Express features are not defined by using the input/output configuration data set (IOCDS). Figure G-2 shows a sample PCHID report of a z13s configuration with four zEDC Express features and four 10GbE RoCE Express features.
  • Page 554 – PCHID 110 in I/O domain 0 of drawer Z22B, in resource group RG1 – PCHID 170 in I/O domain 1 of drawer Z22B, in resource group RG2 Both LPARs have access to both networks, which PNETID as NET1 and NET2. IBM z13s Technical Guide...
  • Page 555: Example Of IOCP Statements For zEDC Express And 10GbE RoCE Express

    However, a FUNCTION cannot be shared between LPARs. It is only dedicated or reconfigurable by using the PART parameter. The TYPE parameter is new for z13s and is required. G.3.2 Virtual function number If you want several LPARs to be able to use a zEDC Express feature (the 10GbE RoCE Express feature cannot be shared between LPARs), you need to use a virtual function (VF) number.
  • Page 556 Definition of10 GbE RoCE Express feature is required to pair up with an OSD CHPID definition, by the parameter of PNETID. The OSD CHPID definition statement is not listed in the example. The PNETID is limited to 2 for a 10GbE RoCE Express definition statement on z13s. IBM z13s Technical Guide...
  • Page 557 Appendix H. Flash Express. This appendix covers the IBM Flash Express feature introduced on the zEC12 server. Flash memory is a non-volatile computer storage technology. It was introduced on the market decades ago. Flash memory is commonly used today in memory cards, USB flash drives, solid-state drives (SSDs), and similar products for general storage and data transfer.
  • Page 558: Systems Storage Hierarchy

    Flash Express is an optional PCIe card feature that is available on zEC12, zBC12, z13, and z13s servers. Flash Express cards are supported in PCIe I/O drawers, and can be mixed with other PCIe I/O cards, such as Fibre Channel connection (FICON) Express16S, Crypto Express5S, and Open Systems Adapter (OSA) Express5S cards.
  • Page 559: Flash Express PCIe Adapter

    Like other PCIe I/O cards, redundant PCIe paths to Flash Express cards are provided by redundant I/O interconnect. Unlike other PCIe I/O cards, they can be accessed from the host only by a unique protocol. A Flash Express PCIe adapter integrates four SSD cards of 400 GB each for a total of 1.4 TB of usable data per card, as shown in Figure H-2.
  • Page 560: H.2 Using Flash Express

    For higher resiliency and high availability, Flash Express cards are always installed in pairs. A maximum of four pairs are supported in a z13s system, providing a maximum of 5.6 TB of storage. In each Flash Express card, data is stored in a RAID configuration. If an SSD fails, data is reconstructed dynamically.
  • Page 561: Sample SE/HMC Window For Flash Express Allocation To LPAR

    Table H-1 gives the minimum support requirements for Flash Express. Table H-1 Minimum support requirements for Flash Express:
        Operating system   Support requirements
        z/OS               z/OS V1R13 and V2R1 (a)
        CFCC               CF Level 20
        a. Web delivery and program temporary fixes (PTFs) are required.
    You can use the Flash Express allocation windows on the SE or Hardware Management Console (HMC) to define the initial and maximum amount of Flash Express available to an LPAR.
  • Page 562: Flash Express Allocation In z/OS LPARs

    HyperSwap Critical Address Space are placed in flash memory. If flash space is not available, these pages are kept in memory and only paged to disk when the system is real storage constrained, and no other alternatives exist.
  • Page 563: H.3 Security On Flash Express

    Data type and data page placement:
        Pageable large pages   If contiguous flash space is available, pageable large pages are written to flash.
        All other data         If space is available on both flash and disk, the system makes a selection that is based on response time.
    Flash Express is used by the Auxiliary Storage Manager (ASM) with paging data sets to satisfy page-out and page-in requests received from the real storage manager (RSM).
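    As a sketch of how flash use for paging is configured at the z/OS level, the PAGESCM parameter in the IEASYSxx parmlib member controls how much storage-class memory (flash) is reserved for paging; the value shown is only illustrative:

        PAGESCM=ALL

    PAGESCM=ALL makes all available flash usable for paging, PAGESCM=NONE disables it, and an explicit amount reserves only that much storage-class memory.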
  • Page 564: Integrated Key Controller

    Flash Express adapter. This process can occur either upon request from the firmware at initial microcode load (IML) time, or from the SE as the result of a request to “change” or “roll” the key.
  • Page 565: H.3.2 Key Serving Topology

    During the alternate SE initialization, application programming interfaces (APIs) are called to initialize the smart card in the alternate SE with the applet code and create the RSA public/private key pair. The API returns the public key of the smart card that is associated with the alternate SE.
  • Page 566: Key Serving Topology

    Alternate Support Element failure during switchover from the primary: If the alternate SE fails during the switchover to become the primary SE, the key serving state is lost. When the primary comes back up, the key serving operation can be restarted.
  • Page 567 Primary and alternate Support Elements fail If the primary and the alternate SE both fail, the key cannot be served. If the devices are still up, the key is still valid. If either or both SEs are recovered, the files holding the Flash encryption key/authentication key can still be valid.
  • Page 569 Appendix I. GDPS Virtual Appliance. This appendix discusses the Geographically Dispersed Parallel Sysplex (GDPS) Virtual Appliance. This appendix includes the following sections: GDPS overview, Overview of GDPS Virtual Appliance, GDPS Virtual Appliance recovery scenarios.
  • Page 570: GDPS Offerings

    Each offering uses a combination of server and storage hardware or software-based replication, automation, and clustering software technologies. In addition to the infrastructure that makes up a GDPS solution, IBM also includes services, particularly for the first installation of GDPS and optionally for subsequent installations, to ensure that the solution meets your business objectives.
  • Page 571 An HA/DR implementation has various levels: High availability (HA): The attribute of a system to provide service during defined periods at agreed upon levels by masking unplanned outages from users. HA employs component duplication (hardware and software), automated failure detection, retry, bypass, and reconfiguration.
  • Page 572: Positioning A Virtual Appliance

    Purposed for a specific, high-level business context or IT architecture by installing particular applications and hardening them before delivery. Optimized by choosing the appropriate configuration, knowing all elements of the system, and removing unnecessary attributes.
  • Page 573 – A supported distribution of Linux on z Systems with the latest recommended fix pack – IBM Tivoli System Automation for Multiplatforms with the latest recommended fix pack. The separately priced xDR for Linux feature is required (one Linux guest as xDR proxy).
  • Page 574: GDPS Virtual Appliance Architecture Overview

    Disaster detection ensures successful and faster recovery by using automated processes. A single point of control is implemented from the GDPS Virtual Appliance. There is no need to involve all of the experts (for example, the storage team, hardware team, OS team, and application team).
  • Page 575: I.3 GDPS Virtual Appliance Recovery Scenarios

    The GDPS Virtual Appliance implements the following functions: Awareness of a failure in a Linux on z Systems node or cluster by monitoring (heartbeats) all nodes and cluster master nodes. If a node or cluster fails, it can be set up to automatically IPL the node or all the nodes in the cluster again.
  • Page 576: GDPS Storage Failover

    Again, without HyperSwap, this process can take more than an hour even when done properly. The systems are quiesced, removed from the cluster, and restarted on the other side. With HyperSwap, the same operation can take seconds.
  • Page 577: I.3.3 Disaster Recovery

    I.3.3 Disaster recovery In a site disaster, the GDPS Appliance immediately issues a freeze for all applicable primary devices. This action is taken to protect the integrity of the secondary data. The GDPS cluster then resets the Site 1 and Site 2 systems and updates all the IPL information to point to the secondary devices, and IPLs all the production systems in the LPARs in Site 2 again.
  • Page 579 Appendix J. IBM zEnterprise Data Compression Express. This appendix describes the optional IBM zEnterprise Data Compression (zEDC) Express feature of the z13, z13s, zEC12, and IBM zBC12 servers. The appendix includes the following sections: Overview, zEDC Express, Software support
  • Page 580: J.1 Overview

    In addition, software-implemented compression algorithms can be costly in terms of processor resources and storage costs. zEDC Express, an optional feature available on z13, z13s, zEC12, and zBC12 servers, addresses those requirements by providing hardware-based acceleration for data compression and decompression.
  • Page 581: Groups

    See the appropriate fixcat for SMP/E to install prerequisite PTFs. The fixcat function is explained here: http://www.ibm.com/systems/z/os/zos/features/smpe/fix-category.html The fix category named IBM.Function.zEDC identifies the fixes that enable or use the zEDC function. For more information about implementation and usage of the zEDC feature, see Reduce Storage Occupancy and Increase Operations Efficiency with IBM zEnterprise Data Compression, SG24-8259.
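    As a minimal sketch of checking for those PTFs with SMP/E (the target zone name TGTZN is hypothetical):

        SET BOUNDARY(GLOBAL).
        REPORT MISSINGFIX ZONES(TGTZN)
               FIXCAT(IBM.Function.zEDC).

    The report lists any fixes in the IBM.Function.zEDC category that are not yet installed in the named zone.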
  • Page 582: J.3.1 z/VM V6R3 Support With PTFs

    IBM 31-bit and 64-bit SDK for z/OS Java Technology Edition, Version 7 Release 1 (5655-W43 and 5655-W44) (IBM SDK 7 for z/OS Java) now provides use of the zEDC Express feature and Shared Memory Communications-Remote Direct Memory Access (SMC-R), which is used by the 10GbE RoCE Express feature.
  • Page 583: Related Publications

    The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this book. IBM Redbooks The following IBM Redbooks publications provide additional information about the topic in this document. Note that some publications referenced in this list might be available in softcopy only.
  • Page 588 Back cover SG24-8294-00 ISBN 0738441678 Printed in U.S.A. ibm.com
