iteratively searching through the member list of all groups. A second limitation of the Lotus
Domino LDAP implementation is that the number of members in a group is limited by the size
of the field. To work around this issue, nested groups can be implemented, whereby the members are divided across two or more groups and each of these groups is then added as a member of the original group. Unfortunately, both of these limitations increase the time it takes to perform the Portal login step. In large LDAP deployments configured with in excess of 900 groups and 80,000 users, it is commonly acknowledged that the Portal login action will take longer than usual.
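The following sketch illustrates why this group structure is expensive to resolve at login time. It is not taken from the WebSphere Portal or Domino code base; it is a minimal JNDI example, assuming a directory in which group membership is held in a "member" attribute and in which any member entry that itself carries a "member" attribute is treated as a nested group. Each nested group costs one additional directory read, so the total lookup cost grows with the number of groups.

import java.util.HashSet;
import java.util.Set;

import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.Attribute;
import javax.naming.directory.DirContext;

public class NestedGroupExpander {

    // Recursively collect the user DNs that belong to a group, following any
    // nested groups. Every nested group encountered triggers one additional
    // LDAP read, which is what lengthens the Portal login step as the number
    // of groups grows.
    static Set<String> expand(DirContext ctx, String groupDn, Set<String> visited)
            throws NamingException {
        Set<String> users = new HashSet<String>();
        Attribute member =
                ctx.getAttributes(groupDn, new String[] { "member" }).get("member");
        if (member == null) {
            return users;
        }
        NamingEnumeration<?> values = member.getAll();
        while (values.hasMore()) {
            String dn = (String) values.next();
            if (!visited.add(dn)) {
                continue; // already processed; also protects against group cycles
            }
            // Assumption for this sketch: an entry that itself has a "member"
            // attribute is a nested group and is expanded in turn.
            Attribute nested =
                    ctx.getAttributes(dn, new String[] { "member" }).get("member");
            if (nested != null) {
                users.addAll(expand(ctx, dn, visited));
            } else {
                users.add(dn);
            }
        }
        return users;
    }
}

Note that dividing 80,000 users across nested groups reduces the size of any single group entry, but it does not reduce, and can in fact increase, the number of directory reads needed to resolve a user's full group membership.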
LDAP directory server high availability
WebSphere Portal Server V6.0.x introduced support for multiple LDAP directory servers through its new multi-realm capabilities. Not surprisingly, this has led to some confusion when multiple LDAP directory servers are deployed to meet high-availability requirements. To be clear, when multiple LDAP directory servers are deployed in support of a multi-realm configuration, often used in conjunction with Virtual Portals, each of these LDAP directory servers still needs to be highly available in its own right.
For Tivoli Directory Server based implementations, high availability is achievable through the deployment of two directory servers that operate in a master peer-to-peer topology. However, in a slight deviation from standard peer-to-peer practice, in which each master peer in the environment is capable of processing both read and write requests, the recommended solution is to use a load balancer to nominate one master peer as the active member for all read and write requests. The reason for this decision is to eliminate any potential conflicts that would otherwise result from two-way replication.
As such, the load balancer should be configured to always route read and write requests to the nominated master peer during normal operation. However, should the load balancer detect a failure of that master peer, it re-routes all requests to the alternate master peer. For write requests, replication flows from 'node 1' to 'node 2' and not the other way round, because write requests should never be distributed across both LDAP servers or peers at the same time. Read-only requests, by contrast, can be evenly distributed across both peer LDAP servers. This is achieved by configuring a second load balancer cluster group with its own virtual host name, distinct from the first load balancer cluster group and its virtual host name.
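From a client application's point of view, the split between the two cluster groups is typically visible only through the virtual host name it connects to. The following minimal JNDI sketch shows this separation; the host names ldap-write.example.com and ldap-read.example.com are hypothetical placeholders for the two load balancer virtual host names, not names used by WebSphere Portal Server itself.

import java.util.Hashtable;

import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class LdapClusterGroups {

    // Opens a directory context against the given load balancer virtual host.
    static DirContext connect(String virtualHost) throws NamingException {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://" + virtualHost + ":389");
        return new InitialDirContext(env);
    }

    public static void main(String[] args) throws NamingException {
        // Write traffic: this virtual host always routes to the nominated
        // master peer, failing over to the alternate peer only if it is down.
        DirContext writeCtx = connect("ldap-write.example.com");

        // Read-only traffic: this virtual host distributes searches evenly
        // across both master peers.
        DirContext readCtx = connect("ldap-read.example.com");

        writeCtx.close();
        readCtx.close();
    }
}

Keeping the two virtual host names separate means the routing policy (a single active peer for writes, both peers for reads) lives entirely in the load balancer configuration and requires no per-request inspection by the directory clients.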
Note: Deploying a load balancer as a mechanism for handling LDAP directory server failover does not imply that it is possible to distinguish between the actual read and write requests of a particular application. For example, this technique does not make it possible to determine, on a per-request basis, which requests originating from WebSphere Portal Server are of a read nature and which are of a write nature.
For those software products that include built-in LDAP redundancy, such as the Authorization
Server and WebSEAL components of Tivoli Access Manager, there is no requirement for a
dedicated load balancer. In fact, the inclusion of a load balancer could impair the ability of the built-in fail-over mechanism to work effectively.
It is not uncommon for the same load balancer mentioned above to also play a critical role in the overall solution architecture. That is, the load balancer or, more accurately, the back-end load balancer, is responsible for load balancing the many requests that originate from the WebSphere Portal Server cluster members to the various back-end servers. Note that user requests do not bypass Portal Server to access the various back-end servers directly. Rather, it is the Portlet applications deployed within WebSphere Portal Server that invoke the services provided by the back-end servers, and a Portal page may therefore aggregate the responses from several back-end servers. In such
circumstances, it is important to ensure that the load balancer itself does not become a