13.5.2 Device settings
Settings that were previously made independently on the individual devices (e.g. time synchronisation or name resolution) are synchronised between the two nodes in the cluster.
However, you can only edit these settings on the node that is active at the time; they are locked on the inactive node.
There are some device-specific settings (e.g. those of the management interface of the Check_MK rack1) which you can adapt on the individual devices at any time.
13.5.3 IP addresses or host names of the nodes
To be able to edit the IP configuration of the individual nodes, you must first disband the connection between the nodes. To do this, click on Disband cluster on the cluster page. You can then make the desired changes via the web interface of the individual nodes.
Once you have made the adjustments, select Reconnect cluster on the cluster page. If the nodes can be reconnected successfully, the cluster will resume operation after a few minutes. You can follow the status on the cluster page.
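After reconnecting, it can be useful to confirm that both nodes and the cluster IP address answer again. The following is a minimal sketch, assuming the web interface listens on HTTPS (port 443); all three addresses are hypothetical placeholders for your own configuration:

    # Minimal reachability check after reconnecting the cluster.
    # All addresses below are hypothetical; substitute your own.
    import socket

    ADDRESSES = {
        "node1":   "192.168.1.11",  # hypothetical node 1 IP
        "node2":   "192.168.1.12",  # hypothetical node 2 IP
        "cluster": "192.168.1.10",  # hypothetical cluster IP
    }
    PORT = 443  # assumed HTTPS port of the web interface

    for name, addr in ADDRESSES.items():
        try:
            # Plain TCP connect with a short timeout.
            with socket.create_connection((addr, PORT), timeout=3):
                print(f"{name} ({addr}): reachable")
        except OSError as err:
            print(f"{name} ({addr}): NOT reachable ({err})")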
13.5.4 Administering Check_MK versions and monitoring instances
The monitoring instances and Check_MK versions are also synchronised between the two nodes. You
can only modify these in the web interface of the active node.
If you access the cluster directly via the cluster IP address, you will always be directed to the device on which you can configure these things.
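This behaviour can be checked from a workstation: a request to the cluster IP address should end up at the node that currently serves the web interface. The following is a minimal sketch, assuming the appliance forwards you via an ordinary HTTP redirect; the cluster IP is a hypothetical placeholder, and certificate verification is disabled only because appliances commonly ship with self-signed certificates:

    # Request the cluster IP and report where the web interface
    # ultimately answers from. The redirect mechanism is an assumption;
    # the address is a hypothetical placeholder.
    import ssl
    import urllib.request

    CLUSTER_URL = "https://192.168.1.10/"  # hypothetical cluster IP

    # Skip certificate verification for this illustrative check only.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with urllib.request.urlopen(CLUSTER_URL, context=ctx, timeout=5) as resp:
        # geturl() reflects the final URL after any HTTP redirects.
        print("Requested:", CLUSTER_URL)
        print("Answered :", resp.geturl())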
13.6 Administrative tasks
13.6.1 Firmware updates in the cluster
The firmware version of a device is not synchronised in cluster operation. The update must therefore be carried out on each node individually. This has the advantage, however, that one node can continue performing the monitoring while the other node is being updated.
When updating to a compatible firmware version, you should always proceed as follows:

1. Open the Clustering module in the web interface of the node to be updated.
2. Click on the heart symbol in the column of this node and confirm the security prompt that follows. This will put the node into maintenance state.
A node in maintenance state releases all resources currently active on it, whereupon the other node takes control of them.
While a node is in maintenance state, the cluster is not failsafe. If the active node is switched off now, the node in maintenance state will not take over the resources. If you additionally put the second node into maintenance state, all resources will be shut down; they will only be reactivated once a node is taken out of maintenance state. You must always remove the maintenance state manually.
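The takeover rules described above can be summarised in a small toy model (illustrative Python, not code that runs on the appliance):

    # Toy model of the maintenance-state rules described above:
    # resources run on exactly one node; if every node is in
    # maintenance, they are shut down until maintenance is removed.

    def resources_state(active_in_maintenance: bool,
                        standby_in_maintenance: bool) -> str:
        if not active_in_maintenance:
            return "active node runs the resources"
        if not standby_in_maintenance:
            return "standby node has taken over the resources"
        return "all resources are shut down"

    # The three situations described in the text:
    print(resources_state(False, False))  # normal operation
    print(resources_state(True, False))   # one node in maintenance
    print(resources_state(True, True))    # both nodes in maintenance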
You can see on the cluster page whether a node is in maintenance state.