1) Check the memory of each canister.
2) If any issues are found, go back to the main page of essutils -> Advanced Tasks -> Check the memory.
This option dumps the complete list of DIMMs in each slot.
7. Configure GPFS page pool size to the 60% target (customer task).
Find the node class name to use, and list the current pagepool settings by issuing the following
commands from either one of the server canisters:
a. Identify the node class name that is associated with the system going through MES by issuing the
following command:
# mmvdisk nc list
Example
[root@ess3k5a ~]# mmvdisk nc list

node class            recovery group
--------------------  --------------
ess_x86_64_mmvdisk    ess3k
ess_x86_64_mmvdisk_5  ess3k5
gssio1_ibgssio2_ib    -
b. Gather the current pagepool configuration by issuing the following command:
# mmvdisk server list --nc <node class name> --config
Example
[root@ess3k5a ~]# mmvdisk server list --nc ess_x86_64_mmvdisk_5 --config

 node
number  server                          active  memory   pagepool  nsdRAIDTracks
------  ------------------------------  ------  -------  --------  -------------
    21  ess3k5a-ib.example.net          no      754 GiB  75 GiB           131072
    22  ess3k5b-ib.example.net          no      754 GiB  75 GiB           131072
Here you can see that the pagepool is less than 25% of physical memory.
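As a quick sanity check, the ratio from the example output can be computed in the shell (75 GiB pagepool against 754 GiB of physical memory):

```shell
# Compute the example pagepool as a percentage of physical memory:
# 75 GiB pagepool on a canister with 754 GiB installed.
awk 'BEGIN { printf "%.1f%%\n", 75 / 754 * 100 }'
# Prints 9.9%, well under 25% and far from the 60% target.
```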
c. To change the pagepool percentage, first check that GPFS is running.
d. If GPFS is not running, start it by issuing the following command:
# mmstartup -N <node class name>
Example
[root@ess3k5b ~]# mmstartup -N ess_x86_64_mmvdisk_5
Wed Feb 19 16:37:02 EST 2020: mmstartup: Starting GPFS ...
e. Change the pagepool to 60%, which is 460G in this example, by issuing the following command:
# mmchconfig pagepool=460G -N <node class name>
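The 460G value applies to this example system only. As a rough sketch, a 60% target for any canister can be derived from the MemTotal line of /proc/meminfo; the sample line below is illustrative (a 754 GiB canister), and exact rounding may differ from the documented 460G example:

```shell
# Sketch: derive a 60% pagepool target from MemTotal (reported in kB).
# Sample line shown for a 754 GiB canister; on a live system read
# /proc/meminfo instead of this hard-coded string.
meminfo_line="MemTotal:       790626304 kB"
target_gib=$(printf '%s\n' "$meminfo_line" |
  awk '/^MemTotal:/ { printf "%d", $2 / 1024 / 1024 * 0.60 }')
echo "pagepool target: ${target_gib}G"
# The result feeds the mmchconfig command above, e.g.:
#   mmchconfig pagepool=${target_gib}G -N <node class name>
```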
f. Ensure that the 460G pagepool setting is listed for the target node class by issuing the following
command:
# mmlsconfig -Y | grep -i pagepool
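The -Y flag produces colon-delimited machine-readable output, so the value can also be extracted by field rather than by grep. A minimal sketch, run here against a captured sample row (the exact field layout is an assumption based on the common mmlsconfig -Y format, not taken verbatim from this system):

```shell
# Sample mmlsconfig -Y row for pagepool (field layout assumed):
sample="mmlsconfig::0:1:::pagepool:460G:ess_x86_64_mmvdisk_5:"
printf '%s\n' "$sample" |
  awk -F: '$7 == "pagepool" { print "pagepool=" $8 " on " $9 }'
```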
8. Restore GPFS normal operational mode and confirm pagepool configuration setting (customer task).
Do the following steps on both canisters:
a. Restart the server by issuing the following command:
# systemctl reboot
b. When the server is up again, do a basic ping test between the canisters over the high-speed
interface.
c. If the ping is successful, start GPFS again by issuing the following command:
# mmstartup -N <node class name>
38 IBM Elastic Storage System 3000: Service Guide