
The -Xlp64k and -Xlp16m options can be used to select the desired page granularity for the
heap and code cache. The -Xlp option is an alias for -Xlp16m. Large pages, specifically
16 MB pages, do have some processing impact and are best suited for long-running
applications with large memory requirements. The -Xlp64k option provides many of the
benefits of 16 MB pages with less impact, and can be suitable for workloads that benefit from
large pages but cannot take full advantage of 16 MB pages.
Starting with IBM Java 6 SR7, the default page size is 64 KB.
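As a brief illustration (the class name and heap size here are placeholders, not values from
this guide), the page granularity is selected directly on the java command line:
$ java -Xlp64k -Xmx2g MyApplication
$ java -Xlp16m -Xmx2g MyApplication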

7.3.2 Configuring large pages for Java heap and code cache

On AIX, the system administrator must configure large pages by running vmo. The
following example demonstrates how to dynamically configure 1 GB of 16 MB pages:
# vmo -o lgpg_regions=64 -o lgpg_size=16777216
To permanently configure large pages, the -r option must be specified with the vmo
command. Run bosboot to configure the large pages at boot time:
# vmo -r -o lgpg_regions=64 -o lgpg_size=16777216
# bosboot -a
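The number of regions scales linearly with the amount of large-page memory that is needed:
lgpg_regions is the total size divided by the 16 MB page size. For example (an illustrative
size, not a recommendation from this guide), reserving 4 GB of 16 MB pages requires
4096 MB / 16 MB = 256 regions:
# vmo -r -o lgpg_regions=256 -o lgpg_size=16777216
# bosboot -a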
Non-root users on AIX must have the CAP_BYPASS_RAC_VMM capability enabled to use large
pages. The system administrator can add this capability by running chuser:
# chuser capabilities=CAP_BYPASS_RAC_VMM,CAP_PROPAGATE <user_id>
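For example, for a hypothetical user ID javauser (the user name and the verification step
are illustrative additions, not from this guide), the capability can be granted and then
checked with lsuser:
# chuser capabilities=CAP_BYPASS_RAC_VMM,CAP_PROPAGATE javauser
# lsuser -a capabilities javauser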
On Linux, 1 GB of 16 MB pages is configured by running echo:
# echo 64 > /proc/sys/vm/nr_hugepages
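To make the huge page reservation persistent across reboots (an assumption about the target
distribution; the exact mechanism can vary), the equivalent sysctl setting can be added to
/etc/sysctl.conf and reloaded:
# echo "vm.nr_hugepages = 64" >> /etc/sysctl.conf
# sysctl -p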

7.3.3 Prefetching

Prefetching is an important strategy to reduce memory latency and take full advantage of
on-chip caches. The -XtlhPrefetch option can be specified to enable aggressive prefetching
of thread-local heap memory shortly before objects are allocated. This option ensures that
the memory required for new objects that are allocated from the TLH is fetched into cache
ahead of time if possible, reducing latency and increasing overall object allocation speed.
This option can give noticeable gains on workloads that frequently allocate objects, such as
transactional workloads.
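As an illustration (the class name and heap size are placeholders), the option is added
directly to the java command line:
$ java -XtlhPrefetch -Xmx2g MyTransactionalApp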

7.3.4 Compressed references

For huge workloads, 64-bit JVMs might be necessary to meet application needs. 64-bit
processes primarily offer a much larger address space, which allows for larger Java heaps
and JIT code caches, and reduces the effects of memory fragmentation in the native heap.
However, 64-bit processes also incur a processing impact from increased memory usage and
decreased cache efficiency. This impact is present with every object allocation, because
each object must now be referred to with a 64-bit address rather than a 32-bit address.
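A minimal sketch, assuming an IBM 64-bit JVM where compressed references are not already
enabled by default for the chosen heap size (the heap size and class name are placeholders):
compressed references can be enabled explicitly with the -Xcompressedrefs option, which
stores object references as 32-bit values within the 64-bit process:
$ java -Xcompressedrefs -Xmx2g MyApplication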
