HP StorageWorks Fabric OS 6.2.2a Release Notes (5697-0318, February 2010)
Whenever initnode is performed, new certificates for the CP and the KAC (SKM) are generated. Therefore, each time initnode is performed, the new KAC certificate must be loaded onto the key vaults for Secure Key Manager (SKM). Without this step, errors occur, such as the key vault not responding, and ultimately key archival and retrieval fail.
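As a rough illustration of this step, assuming the cryptocfg syntax documented in the Fabric OS Encryption Administrator's Guide (the host address, user name, file path, and the -KACcert export object are placeholders or assumptions and may differ by release), the sequence after re-initializing a node might look like:

   switch:admin> cryptocfg --initnode
                 # regenerates the CP and KAC certificates for this node
   switch:admin> cryptocfg --export -scp -KACcert 10.10.10.1 admin /tmp/new_kac_cert.pem
                 # copy the new KAC certificate off the switch so it can be
                 # installed on every SKM key vault in the cluster

Until the exported certificate has been loaded onto the SKM key vaults, key archival and retrieval requests from this node will fail.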
The SKM HTTP server must be listening on port 9443. Secure Key Manager is supported only when configured to use port 9443.
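To confirm that the switch can reach the SKM on the expected port, check the key vault entry in the encryption group configuration; a minimal sketch, assuming the standard cryptocfg display command:

   switch:admin> cryptocfg --show -groupcfg
                 # the registered key vault entry should report a connected state;
                 # anything else suggests the SKM is unreachable or not listening on port 9443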
The HP Encryption Switch and HP Encryption blade support registration of only one HP SKM key vault in Fabric OS 6.2.2a. Multiple HP SKM key vaults can be clustered at the SKM server level. Registration of a second SKM key vault is not blocked.
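For reference, a single SKM key vault is registered on the switch roughly as follows; this is a sketch assuming the cryptocfg key vault registration syntax, with the label (SKM1), certificate file name, and IP address as placeholders:

   switch:admin> cryptocfg --reg -keyvault SKM1 skm1_cert.pem 10.10.10.20 primary
                 # register the SKM certificate and address as the primary key vault

Because a second registration is not blocked, take care to register only one SKM key vault per switch.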
When the registered key vault connection goes down or the registered key vault itself is down, you must either restore the connection to the key vault, or replace the failed SKM and re-register it (deregister the failed SKM entry and register the new SKM entry) on the HP Encryption Switch or HP Encryption blade. You must ensure that the replacement (new) SKM key vault is in sync with the rest of the SKM units in the cluster in terms of the key database; manually synchronize the key database from an existing SKM key vault in the cluster to the new or replacement SKM key vault using the key synchronization methods provided in the SKM Admin Guide.
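A minimal sketch of the re-registration on the switch, assuming the cryptocfg deregistration and registration syntax (the labels, certificate file name, and IP address are placeholders):

   switch:admin> cryptocfg --dereg -keyvault SKM1
                 # remove the failed SKM entry
   switch:admin> cryptocfg --reg -keyvault SKM2 skm2_cert.pem 10.10.10.21 primary
                 # register the replacement SKM as the key vault

The key database synchronization itself is performed on the SKM appliances, not on the switch.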
When all nodes in an EG (HA cluster or DEK cluster) are powered down (due to a catastrophic disaster or a power outage at the data center) and the nodes later come back online, but the Group Leader (GL) node fails to come back up or is kept powered down, the member nodes lose information and knowledge about the EG. As a result, no crypto operations or commands (except node initialization) are available on the member nodes after the power cycle. This condition persists until the GL node is back online.
Workaround: In the case of a data center power-down, bring the GL node online first, before bringing the other member nodes back up.
If the GL node fails to come back up, the GL node can be replaced with a new node. The following procedures allow an EG to function with the existing member nodes and replace the failed GL node with a new node:
Make one of the existing member nodes the Group Leader node and continue operations:
1. On one of the member nodes, create the EG with the same EG name. This will make that node the GL node, and the rest of the Crypto Target Container configurations will remain intact in this EG.
2. For any containers hosted on the failed GL node, issue cryptocfg --replace to change the WWN association of containers from the failed GL node to the new GL node.
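As an illustration of these two steps, assuming cryptocfg --create -encgroup and a --replace form that takes the failed node WWN followed by the new node WWN (the group name and WWNs are placeholders):

   switch:admin> cryptocfg --create -encgroup brocade_eg1
                 # re-creating the EG under its original name makes this member the GL node
   switch:admin> cryptocfg --replace 10:00:00:05:1e:aa:aa:aa 10:00:00:05:1e:bb:bb:bb
                 # re-associate containers from the failed GL node WWN to the new GL node WWN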
Replace the failed GL node with a new node:
1. On the new node, follow the switch/node initialization steps.
2. Create an EG on this fresh switch/node with the same EG name as before.
3. Perform a configdownload to the new GL node of a previously uploaded configuration file for the EG from an old GL node.
4. For any containers hosted on the failed GL node, issue cryptocfg --replace to change the WWN association of containers from the failed GL node to the new GL node.
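A compressed sketch of this procedure on the new node, assuming the standard initialization commands and an interactive configdownload that prompts for the protocol, host, user, and file name of the previously uploaded configuration (the EG name and WWNs are placeholders, and the full node initialization also includes the certificate exchange described earlier in these notes):

   switch:admin> cryptocfg --initnode
   switch:admin> cryptocfg --initEE
   switch:admin> cryptocfg --regEE
                 # initialize the node and its encryption engine
   switch:admin> cryptocfg --create -encgroup brocade_eg1
                 # same EG name as before
   switch:admin> configdownload
                 # download the EG configuration previously uploaded from the old GL node
   switch:admin> cryptocfg --replace 10:00:00:05:1e:aa:aa:aa 10:00:00:05:1e:cc:cc:cc
                 # move container associations from the failed GL node WWN to the new node WWN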
The Encryption SAN Switch and Encryption FC blade do not support QoS. When using encryption
or Frame Redirection, participating flows should not be included in QoS Zones.
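For example, because QoS zones are identified by the QOSH_ or QOSL_ name prefixes, a flow that is encrypted or frame-redirected would be placed in an ordinary zone instead (the zone name and WWNs below are placeholders):

   switch:admin> zonecreate "crypto_host_tgt", "10:00:00:05:1e:01:02:03; 20:00:00:11:22:33:44:55"
                 # ordinary zone; no QOSH_/QOSL_ prefix, so the flow is excluded from QoS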
With Windows and Veritas Volume Manager/Veritas Dynamic Multipathing, a host panic can occur when LUNs smaller than 400 MB are presented to the Encryption SAN Switch for encryption. Fabric OS 6.2.2a does not support this configuration.
To clean up the stale rekey information for the LUN, use one of the following methods:
Method 1