Veritas Storage Foundation™ Cluster File System Installation Guide
HP-UX
N18486G
SFCFS 5.0 Symantec, the Symantec logo, Veritas, and Veritas Storage Foundation Cluster File System are trademarks or registered trademarks of Symantec Corporation or its affiliates in the U.S. and other countries. Other names may be trademarks of their respective owners.
Third-party legal notices Third-party software may be recommended, distributed, embedded, or bundled with this Veritas product. Such third-party software is licensed separately by its copyright holder. All third-party copyrights associated with this product are listed in the accompanying release notes. HP-UX is a registered trademark of Hewlett-Packard Development Company, L.P.
Installing and configuring the product

This chapter describes how to install the Veritas Storage Foundation Cluster File System (SFCFS). SFCFS requires several Veritas software packages to configure a cluster and to provide messaging services. These packages include the Veritas Cluster Server (VCS) to monitor systems and application services, Veritas Low Latency Transport (LLT) and Veritas Group Membership and Atomic Broadcast (GAB) for messaging and cluster membership, and the Veritas Volume Manager (VxVM) to create the shared volumes necessary for cluster file...
Hardware overview

VxFS cluster functionality runs optimally on a Fibre Channel fabric. Fibre Channel technology provides the fastest, most reliable, and highest-bandwidth connectivity currently available. By employing Fibre Channel technology, SFCFS can be used in conjunction with the latest Veritas Storage Area Network (SAN) applications to provide a complete data storage and retrieval solution.
Shared storage

Shared storage can be one or more shared disks or a disk array connected either directly to the nodes of the cluster or through a Fibre Channel switch. Nodes can also have non-shared or local devices on a local I/O channel. It is advisable to have /, /usr, /var, and other system partitions on local devices.
The following table shows the package name and contents for each package:

Package      Contents
VRTSperl     Veritas Perl 5.8.8 Redistribution
VRTSvlic     Veritas Licensing
VRTSicsco    Symantec Common Infrastructure
VRTSpbx      Symantec Private Branch Exchange
VRTSsmf      Symantec Service Management Framework
VRTSat       Symantec Product Authentication Service
VRTSobc33    Veritas Enterprise Administrator Core Service
             Veritas Enterprise Administrator Service...
Package      Contents
VRTSvcs      Veritas Cluster Server
VRTSacclib   Veritas ACC Library
VRTSvcsag    Veritas Cluster Server Bundled Agents
VRTSvcsmg    Veritas Cluster Server Message Catalogs
VRTSjre      Veritas Java Runtime Environment Redistribution
VRTSjre15    Veritas Java Runtime Environment Redistribution
VRTScutil    Veritas Cluster Utilities
...
Optional packages for SFCFS and SFCFS HA

Package      Contents
VRTScfsdc    Veritas Cluster File System Documentation
VRTScmccc    Veritas Cluster Management Console Cluster Connector
VRTScmcs     Veritas Cluster Management Console (Single Cluster Mode)
VRTScscm     Veritas Cluster Server Cluster Manager
VRTScssim    Veritas Cluster Server Simulator
...
Required HP-UX patches

HP-UX required patches include the following:

HP-UX Patch ID    Description
PHCO_32385        Enables fscat(1M).
PHCO_32387        Enables getext(1M).
PHCO_32388        Enables setext(1M).
PHCO_32389        Enables vxdump(1M).
PHCO_32390        Enables vxrestore(1M).
PHCO_32391        Enables vxfsstat(1M).
PHCO_32392        Enables vxtunefs(1M).
...               Enables vxupgrade(1M).
HP-UX Patch ID    Description
PHKL_32430        Changes to separate vxfs symbols from libdebug.a, so that symbols of VxFS 4.1 and later are easily available in q4/p4.
PHKL_32431        Changes to disallow mounting of a file system on a vnode having VNOMOUNT set.
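To confirm which of the required patches are already present, the installed-patch list can be checked against the IDs above. This is a sketch only: the swlist output below is fabricated for illustration (on a live HP-UX system you would capture the real list with swlist -l patch), and the temporary file path is an assumption.

```shell
# Sketch: check required patch IDs against a captured swlist listing.
# Sample listing (fabricated); on a real system: swlist -l patch > /tmp/patches.txt
cat > /tmp/patches.txt <<'EOF'
  PHCO_32385  1.0  VxFS 4.1 fscat cumulative patch
  PHCO_32387  1.0  VxFS 4.1 getext cumulative patch
EOF

# Collect any required IDs that do not appear in the listing.
MISSING=""
for p in PHCO_32385 PHCO_32387 PHCO_32388 PHCO_32389; do
    grep -q "$p" /tmp/patches.txt || MISSING="$MISSING $p"
done
echo "Missing patches:$MISSING"
# prints: Missing patches: PHCO_32388 PHCO_32389
```

The same loop can be extended to cover every PHCO/PHKL ID in the tables above.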
Preinstallation

Release Notes

Read the Release Notes for all products included with this product. Portable Document Format (.pdf) versions of the Release Notes are included on the software disc in the storage_foundation_cluster_file_system/release_notes directory and on the documentation disc that came with your software. Because product Release Notes are not installed by any packages, it is recommended that you copy them from the disc to the /opt/VRTS/docs directory on your system so that they are available for future reference.
Also, you can get the patches from Hewlett-Packard's Patch Database, offered under the Maintenance and Support section of the HP Services & Support - IT Resource Center. HP's Patch Database provides fast, accurate searches for the latest recommended and superseded patches available for Veritas File System or Veritas Volume Manager.
Veritas Enterprise Administrator

The Veritas Enterprise Administrator (VEA) client can be installed and run on any machine that supports the Java Runtime Environment. VEA is required to access the graphical user interface (GUI) for Veritas Storage Foundation.
See Veritas Storage Foundation and High Availability Solutions Getting Started Guide. Symantec recommends configuring the cluster with I/O fencing enabled. I/O fencing requires shared devices to support SCSI-3 Persistent Reservations (PR). Enabling I/O fencing prevents data corruption caused by a split brain scenario.
Installing the product

To install the product:

1. Log in as superuser.
2. Insert the appropriate media disc into the DVD-ROM drive connected to your system.
3. Determine the block device file for the DVD drive:
   # ioscan -fnC disk
   Make a note of the device file as it applies to your system.
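The device file can also be picked out of the ioscan listing programmatically. The sample output below is fabricated (real ioscan -fnC disk output varies by hardware), and the awk pattern assumes the device files appear on the line after the DVD-ROM entry, as they typically do in -n output; treat this as a sketch, not a guaranteed parse.

```shell
# Sample ioscan -fnC disk output (fabricated); on a real system:
#   ioscan -fnC disk > /tmp/ioscan.out
cat > /tmp/ioscan.out <<'EOF'
disk  0  0/0/1/1.2.0  sdisk  CLAIMED  DEVICE  HP DVD-ROM 305
             /dev/dsk/c1t2d0  /dev/rdsk/c1t2d0
disk  1  0/0/2/0.6.0  sdisk  CLAIMED  DEVICE  HP 73.4GST373453LC
             /dev/dsk/c2t6d0  /dev/rdsk/c2t6d0
EOF

# Print the block device file from the line following the DVD-ROM entry.
DVD_DEV=$(awk '/DVD/ {getline; print $1}' /tmp/ioscan.out)
echo "DVD device: $DVD_DEV"
# prints: DVD device: /dev/dsk/c1t2d0
```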
Configuring the Components

This section describes the configuration of SFCFS components.

To configure the components:

1. Log in as superuser.
2. Run the installer command to install the SFCFS. For example:
   # cd /cdrom
   # ./installer
3. From the Installation menu, choose the C option for Configuration and select 6 (the Veritas Storage Foundation Cluster File System).
Answer the prompts to configure VCS for SFCFS. You are prompted to configure SFCFS to use Veritas Security Services.

   Would you like to configure SFCFS to use Veritas Security Services? [y,n,q] (n)

Enter y or n to configure SFCFS to use Veritas Security Services.
16. Enter y or n if the VxVM default disk group information is correct. You are prompted to enable centralized management.

   Enable Centralized Management? [y,n,q] (y) n

17. Enter y or n to enable centralized management. You are prompted to verify the fully qualified domain name for system01.
Verifying the configuration files

You can inspect the contents of the configuration files that were installed and modified after a successful installation process. These files reflect the configuration based on the information you supplied.

To verify the configuration files:

1. Log in as superuser to any system in the cluster.
Checking Low Latency Transport operation

Use the lltstat command to verify that links are active for LLT. This command returns information about the links for LLT for the system on which it is typed. See the lltstat(1M) manual page.
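A quick scripted check can confirm that every configured cluster node reports the OPEN state. The lltstat -n sample below is fabricated for this sketch (node names and column layout should be verified against real output on your system before relying on the awk field positions).

```shell
# Sample lltstat -n output (fabricated); on a real system:
#   lltstat -n > /tmp/llt.out
cat > /tmp/llt.out <<'EOF'
LLT node information:
    Node           State    Links
  * 0 system01     OPEN     2
    1 system02     OPEN     2
EOF

# Count configured nodes whose state column is not OPEN
# (second-to-last field on each node line in this sample layout).
NOT_OPEN=$(awk '/system/ && $(NF-1) != "OPEN" {n++} END {print n+0}' /tmp/llt.out)
echo "Nodes not OPEN: $NOT_OPEN"
# prints: Nodes not OPEN: 0
```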
   CONNWAIT
   lan1 DOWN
   lan2 DOWN

Note: The output lists 32 nodes. It reports on the two cluster nodes, system01 and system02, plus non-existent nodes. For each correctly configured system, the information shows a state of OPEN, a status for each link of...
Group Membership and Atomic Broadcast configuration files

The following files are required by the VCS communication services for Group Membership and Atomic Broadcast (GAB).

/etc/gabtab

After installation, the file /etc/gabtab contains a gabconfig(1M) command that configures the GAB driver for use.
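The shape of that file can be sketched as follows. The sketch writes to a temporary path rather than the real /etc/gabtab, and the node count N is an assumption for a two-node cluster; -c configures the driver and -nN makes GAB wait for N systems before seeding, per the conventions described later in this guide.

```shell
# Number of systems in the cluster (assumption for this sketch).
N=2
GABTAB=/tmp/gabtab.example   # stand-in for /etc/gabtab in this sketch

# Generate the single gabconfig line that /etc/gabtab typically contains.
echo "/sbin/gabconfig -c -n$N" > "$GABTAB"
cat "$GABTAB"
# prints: /sbin/gabconfig -c -n2
```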
Checking cluster operation

This section describes how to check cluster operation.

To check cluster operation:

1. Enter the following command on any system:
   # hastatus -summary
   The output for an SFCFS HA installation resembles:

   -- SYSTEM STATE
   -- System State...
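A scripted health check can count the systems reporting RUNNING in that summary. The sample output below is fabricated to mirror the shape shown above; the "A" record prefix and field positions should be verified against real hastatus output on your release before relying on this parse.

```shell
# Sample hastatus -summary output (fabricated); on a real system:
#   hastatus -summary > /tmp/hastatus.out
cat > /tmp/hastatus.out <<'EOF'
-- SYSTEM STATE
-- System        State         Frozen
A  system01      RUNNING       0
A  system02      RUNNING       0
EOF

# Count system-state ("A") records whose State column is RUNNING.
RUNNING=$(awk '$1 == "A" && $3 == "RUNNING" {n++} END {print n+0}' /tmp/hastatus.out)
echo "Systems RUNNING: $RUNNING"
# prints: Systems RUNNING: 2
```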
#System    Attribute          Value
system01   ConfigDiskState    CURRENT
system01   ConfigFile         /etc/VRTSvcs/conf/config
system01   ConfigInfoCnt
system01   ConfigModDate      Tues June 25 23:00:00 2006
system01   CurrentLimits
system01   DiskHbStatus
system01   DynamicLoad
system01   Frozen
system01   GUIIPAddr
system01   LLTNodeId
system01   Limits
system01   ...
#System    Attribute    Value
system01   UserStr

Verifying agent configuration

This section describes how to verify the agent configuration.

To verify the agent configuration:

1. Enter the cluster status command from any node in the cluster:
   # cfscluster status
   Output resembles:
   Node...
Configuring VCS

In a VCS cluster, the first system to be brought online reads the configuration file and creates an internal (in-memory) representation of the configuration. Systems brought online after the first system derive their information from systems running in the cluster.
Upgrading the product

If you are running an earlier release of Veritas Storage Foundation Cluster File System, you can upgrade your product using the procedures described in this chapter. Topics covered in this chapter include:

Preparing to upgrade the product
Upgrade Overview
Upgrading from 3.5 to 5.0
Upgrading from 4.1 to 5.0...
Preparing to upgrade the product

This section prepares you for the Veritas Storage Foundation Cluster File System upgrade.

Planning the upgrade

Complete the following tasks in advance of upgrading:

Review the Veritas Storage Foundation Cluster File System Release Notes for any late-breaking information on upgrading your system.
From: Storage Foundation Cluster File System 3.5 Update 3 (formerly known as SANPoint Foundation Suite 3.5 Update 3)
Upgrade to: Storage Foundation Cluster File System 5.0
Tasks: Proceed to "Upgrading from 3.5 to 5.0" on page 39.

From: Storage Foundation...
Upgrade to: Storage Foundation...
Upgrade Overview

There are two ways to upgrade cluster nodes to the latest version of Storage Foundation Cluster File System: phased and full.

Phased upgrade

A phased upgrade minimizes downtime by upgrading portions of the cluster, one at a time.
Upgrading from 3.5 to 5.0

SFCFS can be upgraded from 3.5 to 5.0 using a phased or full upgrade procedure.

Phased upgrade

The following procedure assumes a four-node cluster (system01, system02, system03, system04) where system01 and system02 are upgraded first and the rest of the cluster is brought up later.
Uninstall VCS 3.5 from system01 and system02. Run the following commands from one of the nodes. See the Veritas Cluster Server Installation Guide.

   # cd /opt/VRTSvcs/install
   # ./uninstallvcs

Note: Ignore any errors from the uninstallvcs script and proceed with the uninstall of VCS.
15. Change the configuration files by running the following commands on one of the upgraded nodes, say system01.

   # /opt/VRTS/bin/hastart
   # /opt/VRTS/bin/haconf -makerw
   # hagrp -unfreeze cvm -persistent
   # hagrp -unfreeze service_group -persistent
   # /opt/VRTS/bin/hatype -add CVMVxconfigd
   # /opt/VRTS/bin/hares -add cvm_vxconfigd CVMVxconfigd cvm
   # /opt/VRTS/bin/hares -modify cvm_vxconfigd Enabled 1...
18. Configure SFCFS on system01 and system02. See "Using the log files".

Note: VCS configuration files are not changed during this configuration.

19. Upgrade file systems to the proper disk layout version as mentioned in "Upgrading the disk layout versions"...
Uninstall VCS 3.5 from all the nodes. Run the following commands from one of the nodes. See the Veritas Cluster Server Installation Guide.

   # cd /opt/VRTSvcs/install
   # ./uninstallvcs

Note: Ignore any errors from the uninstallvcs script and proceed with the uninstall of VCS.
13. Change the configuration files by running the following commands from one of the nodes.

   # /opt/VRTS/bin/hastart
   # /opt/VRTS/bin/haconf -makerw
   # /opt/VRTS/bin/hatype -add CVMVxconfigd
   # /opt/VRTS/bin/hares -add cvm_vxconfigd CVMVxconfigd cvm
   # /opt/VRTS/bin/hares -modify cvm_vxconfigd Enabled 1
   # /opt/VRTS/bin/hares -delete qlogckd
   # /opt/VRTS/bin/haconf -dump -makero
   # /opt/VRTS/bin/hastop -all -force...
16. Configure SFCFS on system01 and system02. See "Using the log files".

Note: VCS configuration files are not changed during this configuration.

17. Upgrade file systems to the proper disk layout version as mentioned in "Upgrading the disk layout versions"...
Move /etc/llthosts to /etc/llthosts.bak on all the nodes to be upgraded.

   # mv /etc/llthosts /etc/llthosts.bak

Install all the prerequisite patches and reboot the machines. Then restore /etc/llthosts on those nodes.

   # mv /etc/llthosts.bak /etc/llthosts

Offline all SFCFS resources on nodes selected in step 2 by running the following commands on one of the cluster nodes.
12. Change the configuration files by running the following commands on one of the upgraded nodes. For example, system01.

   # /opt/VRTS/bin/hastart
   # /opt/VRTS/bin/haconf -makerw
   # hagrp -unfreeze cvm -persistent
   # hagrp -unfreeze service_group -persistent
   # /opt/VRTS/bin/hares -delete qlogckd
   # /opt/VRTS/bin/haconf -dump -makero
   # /opt/VRTS/bin/hastop -all -force...
16. Configure SFCFS on system01 and system02. See "Using the log files".

Note: VCS configuration files are not changed during this configuration.

17. Upgrade file systems to the proper disk layout version as mentioned in "Upgrading the disk layout versions"...
Install SFCFS 5.0 and reboot all the nodes. See "Installing the product" on page 19.

Note: Do not configure SFCFS after reboot.

Start vxfen on all the nodes. vxfen can be started in either disabled or enabled mode.
12. Verify the syntax of the /etc/VRTSvcs/conf/config/main.cf file by running the following commands on system01:

   # cd /etc/VRTSvcs/conf/config
   # /opt/VRTS/bin/hacf -verify .

13. Run the following command on all the nodes to start VCS.

   # /opt/VRTS/bin/hastart

14. Configure SFCFS on all the nodes.
Upgrading the disk layout versions

On the node selected in step 1, after the disk layout has been successfully upgraded, unmount the file system:

   # umount /mnt1

This file system can be mounted on all nodes of the cluster using cfsmount...
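The per-file-system sequence (upgrade the layout on one node, unmount locally, remount cluster-wide) can be scripted. The sketch below only prints the commands rather than running them (a dry run); the mount points, the target layout version, and the exact vxupgrade invocation are assumptions and should be checked against the vxupgrade(1M) manual page for your release.

```shell
# Dry-run sketch: emit the upgrade commands for each shared file system.
# /mnt1 and /mnt2 are assumed mount points; 7 is an assumed target version.
TARGET_VERSION=7
for mnt in /mnt1 /mnt2; do
    echo "vxupgrade -n $TARGET_VERSION $mnt"   # upgrade disk layout in place
    echo "umount $mnt"                         # unmount on the upgrading node
    echo "cfsmount $mnt"                       # remount on all cluster nodes
done > /tmp/layout_upgrade.cmds
cat /tmp/layout_upgrade.cmds
```

Remove the echoes (or pipe the file through sh) only after reviewing the generated command list.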
Adding and removing a node

This chapter provides information on how to add a node to an existing cluster and remove a node from a cluster. Topics include:

Adding a node to a cluster
Configuring SFCFS and CVM agents on the new node
Removing a node from a cluster
Adding a node to a cluster

If you want to add a new node to a multi-node cluster, first prepare the new system hardware. Physically connect the new system to the cluster using private networks and attach to any shared storage.
11. Enter y or n for another license key. You are prompted to press Return to continue.

   Do you want to enter another license key for system03? [y,n,q,?] (n)

12. Enter 1 or 2 to select the packages to be installed on all systems.
Configuring SFCFS and CVM agents on the new node

You must configure the SFCFS and CVM agents after rebooting the new system.

To configure SFCFS and CVM agents on the new node:

1. Start the VCS server and vxfen on system03.
   # haconf -dump -makero

Put the CVM resources back online, in the following order:

   # hagrp -online cvm -sys system01
   # hagrp -online cvm -sys system02
   # hagrp -online cvm -sys system03

10. Check the system status to see whether the new node is online:

   # hastatus -sum
   -- SYSTEM STATE...
10 If necessary, modify the /etc/gabtab file. No change is required to this file if the /sbin/gabconfig command has only the argument -c, although Symantec recommends using the -nN option, where N is the number of cluster systems. If the command has the form /sbin/gabconfig -c -nN,...
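Lowering the -nN seed count after removing a system can be done with a small sed edit. This sketch operates on a temporary copy rather than the real /etc/gabtab, and the old and new counts (3 to 2) are assumptions depending on your configuration.

```shell
GABTAB=/tmp/gabtab.removal   # stand-in for /etc/gabtab in this sketch
echo "/sbin/gabconfig -c -n3" > "$GABTAB"

# Decrement the seed count from 3 to 2 after removing one cluster system.
sed 's/-n3/-n2/' "$GABTAB" > "$GABTAB.new" && mv "$GABTAB.new" "$GABTAB"
cat "$GABTAB"
# prints: /sbin/gabconfig -c -n2
```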
12. From the scripts directory, run the uninstallsfcfs script to remove SFCFS on system03:

   # ./uninstallsfcfs

If you do not want to remove the Veritas Cluster Server software, enter n when prompted to uninstall VCS.
Uninstalling the product

If you need to uninstall SFCFS software, use the uninstallsfcfs script.

To uninstall SFCFS HA:

1. Log in as superuser.

Note: Do not use the hastop -force command to stop VCS.

2. Change directory to /opt/VRTS/install:
   # cd /opt/VRTS/install

3. Run the uninstallsfcfs command to uninstall SFCFS.
Troubleshooting and recovery

Installation issues

If you encounter any issues installing SFCFS, refer to the following paragraphs for typical problems and their solutions.

Incorrect permissions for root on remote system

The permissions are inappropriate. Make sure you have remote root access permission on each system to which you are installing.
Storage Foundation Cluster File System problems

Resource temporarily unavailable

If the installation fails with the following error message on the console:

   fork() failed: Resource temporarily unavailable

the value of the nkthread tunable parameter may not be large enough. The nkthread tunable requires a minimum value of 600 on all systems in the cluster.
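A quick check of the current value against the 600 minimum can be sketched as below. The kmtune output format shown is an assumption and varies across HP-UX releases (later releases use kctune); on a live system the value would come from kmtune -q nkthread rather than this fabricated sample.

```shell
# Sample kernel-tunable listing (fabricated); on a real system:
#   kmtune -q nkthread > /tmp/nkthread.out
cat > /tmp/nkthread.out <<'EOF'
Parameter             Current Dyn Planned
=========================================
nkthread                  499  -      499
EOF

# Extract the current value and compare it against the required minimum.
CUR=$(awk '$1 == "nkthread" {print $2}' /tmp/nkthread.out)
if [ "$CUR" -lt 600 ]; then
    echo "nkthread=$CUR is below the required minimum of 600"
fi
# prints: nkthread=499 is below the required minimum of 600
```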
Unmount failures

The umount command can fail if a reference is being held by an NFS server. Unshare the mount point and try the unmount again.

Mount failures

Mounting a file system can fail for the following reasons:

The file system is not using disk layout Version 6 or 7.
If this error message displays:

   mount: slow

the node may be in the process of joining the cluster.

If you try to mount a file system that is already mounted without the -o cluster option (that is, not in shared mode) on another cluster node,

   # mount -F vxfs /dev/vx/dsk/share/vol01 /vol01

the following error message displays:

   vxfs mount: /dev/vx/dsk/share/vol01 is already mounted,...
High availability issues

Network partition/jeopardy

Network partition (or split brain) is a condition where a network failure can be misinterpreted as a failure of one or more nodes in a cluster. If one system in the cluster incorrectly assumes that another system failed, it may restart applications already running on the other system, thereby corrupting data.
Low memory

Under heavy loads, software that manages heartbeat communication links may not be able to allocate kernel memory. If this occurs, a node halts to avoid any chance of network partitioning. Reduce the load on the node if this happens frequently.