LDT Communication Groups and LDT GuardPoint Groups
The LDT Communication Group provides the services for managing and monitoring the members of LDT GuardPoint Groups, and supports the LDT Messaging Services. An LDT Communication Group contains a collection of LDT-enabled CTE clients that are all on the same network, are managed by one CipherTrust Manager, and can communicate with each other over that network. An LDT GuardPoint Group consists of CTE clients guarding a common NFS/CIFS share. An LDT Communication Group enables the primary host of an LDT GuardPoint Group to communicate with the secondary hosts that are members of that LDT GuardPoint Group.
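To make these relationships concrete, the following minimal Python sketch models the two group types as data structures. The class and field names are illustrative assumptions for this document, not CTE internals.

from dataclasses import dataclass, field

@dataclass
class LDTCommunicationGroup:
    """LDT-enabled CTE clients on one network, managed by one CipherTrust Manager."""
    name: str
    clients: list[str] = field(default_factory=list)  # hostnames, in the order added

    @property
    def master(self) -> str | None:
        # The first CTE client added to the group is initially the master node;
        # the remaining clients are replicas.
        return self.clients[0] if self.clients else None

@dataclass
class LDTGuardPointGroup:
    """CTE clients guarding one common NFS/CIFS share."""
    share_path: str
    members: list[str] = field(default_factory=list)  # hostnames, in guard order

    @property
    def primary(self) -> str | None:
        # The host that enabled the GuardPoint directory first holds the primary role.
        return self.members[0] if self.members else None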
LDT Communication Group
An LDT Communication Group is mandatory when using LDT over NFS/CIFS across all CTE clients managed through CipherTrust Manager. CipherTrust Manager pushes the LDT Communication Group details to all of the CTE clients in a group.
The first CTE client added to the group is initially designated as the master node. The remaining CTE clients are replicas. All active communications go through the master node.
All CTE clients intending to guard NFS/CIFS shares with an LDT policy must be able to communicate with each other. In order for the CTE clients to communicate, they must be on the same network and in a single LDT Communication Group. The agents can specify the LDT Communication Group to join at the time of registration.
Note
Even if the CTE clients do not guard the same share, they must be in the same LDT Communication Group on CipherTrust Manager.
LDT Communication Group Limitations
- You must create an LDT Communication Group on your CipherTrust Manager and manually add all of the CTE clients to it.
- A minimum of three CTE clients is required for membership in the LDT Communication Group for proper and continuous operation of LDT services across all LDT GuardPoint Groups.
- The majority of the CTE clients must be fully operational for proper LDT operations across all LDT GuardPoint Groups. The LDT Communication Group requires that a majority of its CTE clients are active in order to work properly. If half or more of the clients fail, for example 2 out of 3 nodes in a triad, the LDT Communication Group cannot recover on its own (see the sketch after this list). To fix this issue, you must either:
- Reboot all of the CTE clients in the entire LDT Communication Group.
- In Windows, restart secfsd through the Control Panel > Services page on all of the CTE clients in the entire LDT Communication Group.
- The name of the LDT Communication Group cannot contain special characters.
- If you are repurposing a CTE client that is currently being used for LDT over NFS/CIFS, but still runs CTE and is still registered to the same CipherTrust Manager, then you must:
- Remove all CIFS/NFS LDT GuardPoint Groups from the host prior to removing the host from the LDT Communication Group.
- Reboot the host.
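The following sketch is a rough model of the majority requirement described in this list; group_is_quorate is an illustrative helper, not part of any CTE tooling.

def group_is_quorate(total_clients: int, active_clients: int) -> bool:
    """An LDT Communication Group needs a strict majority of its clients
    active; at 50% or fewer active, it cannot recover on its own."""
    return active_clients > total_clients // 2

# A three-node group tolerates one failure but not two:
assert group_is_quorate(total_clients=3, active_clients=2) is True
assert group_is_quorate(total_clients=3, active_clients=1) is False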
LDT Communication Group Considerations
- The LDT Communication Group must be specified at the time of client registration.
Note
For upgrades, add the agents to the LDT Communication Group using the Add Client function on CipherTrust Manager. See Adding Clients to an LDT Communication Group for more information.
- Do not shut down the entire LDT Communication Group at once; keeping nodes running allows maintenance cycles to activate and run. Perform maintenance in a round-robin fashion.
- Remove a host from the LDT Communication Group if the host is being decommissioned, meaning that you plan to power it down for the foreseeable future, or while CTE is being uninstalled.
- The CTE host that is the LDT Communication Group master is critical for proper operation of LDT across all LDT GuardPoint Groups. Thales recommends that you dedicate a separate host as your LDT Communication Group master, although this is not mandatory. Dedicating a host isolates it from unexpected system failures and prevents failover of the LDT Communication service to another CTE client that is critical for your production workload.
- Thales recommends using a minimum of three nodes for optimal performance.
- Before rebooting a node, Thales recommends stopping the secfsd service:
For Windows:
- Go to Control Panel > Services (local).
- Select secfsd.
- Select Stop the Service.
For Linux, type:
/etc/vormetric/secfs stop
- To create and manage LDT Communication Groups in the UI, see Managing LDT Communication Groups.
Multiple LDT Communication Groups Considerations
CipherTrust Transparent Encryption now supports multiple LDT Communication Groups. To avoid misconfiguring them, observe the following restrictions:
- You cannot use identical NFS/CIFS share paths in multiple LDT Communication Groups registered to a single CipherTrust Manager. Each share path must be unique.
- You cannot use identical NFS/CIFS share paths in LDT Communication Groups that have the same name on multiple CipherTrust Managers. LDT Communication Group names must be unique.
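A minimal sketch of the first uniqueness rule, assuming a simple mapping from LDT Communication Group names to the share paths they guard on a single CipherTrust Manager; the function and data layout are illustrative only.

def validate_share_paths(groups_on_cm: dict[str, set[str]]) -> None:
    """groups_on_cm maps LDT Communication Group name -> NFS/CIFS share paths
    guarded through that group, all on one CipherTrust Manager.
    Each share path may appear in at most one group. The same reasoning
    applies across managers: because group names must be unique, a group
    with the same name on two managers must not guard the same path."""
    seen: dict[str, str] = {}
    for group, paths in groups_on_cm.items():
        for path in paths:
            if path in seen:
                raise ValueError(
                    f"{path!r} is used by both {seen[path]!r} and {group!r}")
            seen[path] = group

# Hypothetical configuration that passes validation:
validate_share_paths({
    "ldt-comm-group-a": {"/mnt/nfs/share1"},
    "ldt-comm-group-b": {"/mnt/nfs/share2"},
})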
Moving Client Nodes from one LDT Communication Group to another LDT Communication Group
If you are repurposing a CTE client node that is currently used for LDT over NFS/CIFS in an LDT Communication Group, but still runs CipherTrust Transparent Encryption and is still registered to the same CipherTrust Manager, then you must remove all CIFS/NFS LDT GuardPoints from the host prior to removing the host from the LDT Communication Group. Then you can reboot the host and add that CTE client node to another LDT Communication Group.
LDT GuardPoint Group
As described above, an LDT GuardPoint Group is a group of three or more CTE hosts sharing access to the same GuardPoint over NFS/CIFS. A CTE host with n GuardPoints is a member of n distinct LDT GuardPoint Groups; each membership is specific to a GuardPoint directory. A CTE host guarding multiple GuardPoints over NFS/CIFS is therefore a member of multiple LDT GuardPoint Groups.
LDT GuardPoint Group Primary
One CTE host in each LDT GuardPoint Group assumes the Primary role for that group. The CTE host that enables a GuardPoint directory first is assigned the primary role for the group associated with that GuardPoint directory. Subsequent hosts enabling the same GuardPoint are assigned Member (Secondary) status. A CTE host can be the primary for one or more groups and a secondary for other groups.
LDT manages the departure of a primary host from the group during transformation. A primary host can depart as the result of disabling a GuardPoint directory on that host, or because of an unexpected failure. When the primary host departs the group, LDT elects one of the remaining members of the group and delegates the primary role to it.
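The election itself is internal to LDT, but the following simplified sketch models primary departure and delegation. Choosing the longest-standing remaining member is an assumption made here for illustration, not documented CTE behavior; CTE only guarantees that one of the members is elected.

def handle_departure(members: list[str], departing: str) -> str | None:
    """Remove a departing host from an LDT GuardPoint Group; if it held
    the primary role (members[0] here), delegate primary to another member."""
    members.remove(departing)
    return members[0] if members else None  # new primary, or None if group is empty

group = ["hostA", "hostB", "hostC"]  # hostA guarded first, so it is primary
new_primary = handle_departure(group, "hostA")
assert new_primary == "hostB"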
LDT GuardPoint Group Capability Level
In CTE 7.6.0, LDT NFS introduces a Capability Level for each LDT GuardPoint Group, based on the capability level of each member in the group. The Capability Level of a member determines which features that host can access and use, and allows the group to decide on a common capability level at which to run. Because the group runs at the capability level of its least capable member, multiple versions of CTE can be supported in the same LDT GuardPoint Group.
The capability level of the group is recalculated when a member leaves the group, or when a primary failover occurs. As less capable members depart the group, the overall capability level of the group increases. The capability level of a group cannot be decreased; as a result, members at a lesser capability level cannot join once the capability level of the group has increased. A new member joining an existing group attempts to join with the highest capability level supported by the host and, if rejected, retries with progressively lower capability levels until the group accepts the join request. If the group is running at a capability level that exceeds the joining member's highest capability, that host is not allowed to join the group.
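The join negotiation described above can be sketched as follows. group_capability and try_join are hypothetical names, and the retry loop is a simplified model of the protocol, not the actual agent implementation.

def group_capability(member_max_levels: list[int]) -> int:
    # The group runs at the capability level of its least capable member.
    return min(member_max_levels)

def try_join(group_level: int, host_max_level: int) -> int | None:
    """A joining host offers its highest capability level and retries with
    progressively lower levels. The group's level never decreases, so a
    host whose maximum is below the group's level is refused."""
    for offered in range(host_max_level, 0, -1):
        if offered == group_level:   # accepted at the group's common level
            return offered
        if offered < group_level:    # cannot offer low enough; join refused
            return None
    return None

# e.g. a CTE 7.7.0 host (max level 4) joining a group running at level 3:
assert try_join(group_level=3, host_max_level=4) == 3
# e.g. a CTE 7.5.0 host (max level 2) refused by that same group:
assert try_join(group_level=3, host_max_level=2) is None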
When a new feature is introduced that is unavailable to, or incompatible with, older versions of CTE agents, the maximum capability level of the newer agents is increased to a value greater than the maximum capability level of the older agents. A group containing a mix of new and old agents runs at the capability level of the older agents, which means that no member of the group is allowed to use the new feature, regardless of the actual version of CTE running on that member. Only after all of the members running the older agent have left the group is the group upgraded to the new capability level, enabling use of the new feature. Older agents are then no longer allowed to join the group, because features are now in use that are not available to, or compatible with, older agents. If the LDT GuardPoint Group needs to contain the older agent, then a member running that older agent must be present at all times to ensure that the capability level of the group is not upgraded to the level of the newer agents.
LDT GuardPoint Groups with Different CTE Versions (Linux Only)
LDT does not allow key rotation to begin in an LDT GuardPoint Group over NFS when the hosts in that group are not homogeneous, meaning that they are running different versions of CTE. When an LDT GuardPoint Group containing a mix of CTE versions needs to rotate a key, the rekey is blocked and the GuardPoint is marked with the relaunch flag. This persists until the group contains only homogeneous hosts, after which the rekey starts automatically. If a rekey is already in progress on a GuardPoint, hosts running a different CTE version are blocked from joining the group until the rekey completes. This block on joining also applies when the GuardPoint is rekeyed but flagged for relaunch.
Note
Patch releases within the same major/minor version do not block rekey from starting; for example, 7.7.0.x and 7.7.0.y.
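As a simplified model of this version gate, the sketch below compares hosts by their major/minor/maintenance version, so that patch releases (the fourth field) never block a rekey. The function name and the version representation are assumptions made for illustration.

def rekey_allowed(host_versions: list[tuple[int, int, int]]) -> bool:
    """Rekey may start only when all hosts in the LDT GuardPoint Group run
    the same CTE version. Only the first three version fields matter, so
    7.7.0.x and 7.7.0.y both reduce to (7, 7, 0) and do not block rekey."""
    return len(set(host_versions)) <= 1

# A mixed 7.6.x / 7.7.y group: rekey is blocked and the GuardPoint is
# flagged for relaunch until the disparate hosts unguard.
mixed = [(7, 6, 0), (7, 6, 0), (7, 7, 0), (7, 7, 0)]
assert rekey_allowed(mixed) is False
assert rekey_allowed([(7, 7, 0), (7, 7, 0)]) is True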
Scenario
For example, given an LDT GuardPoint Group over NFS with:
- One CTE 7.6.x host as the primary
- One CTE 7.6.x host as a member
- Two CTE 7.7.y hosts as members
When the key rotates, the GuardPoint transitions into the rekeyed/relaunch state and remains there until only CTE 7.6.x or CTE 7.7.y hosts remain.
While the GuardPoint is in the rekeyed/relaunch state, no CTE hosts running a disparate version are allowed to guard. This remains true even if the CTE 7.6.x primary unguards and one of the CTE 7.7.y members is promoted to the primary role. Only after all CTE 7.6.x hosts have unguarded are the CTE 7.7.y hosts allowed to guard, at which point the rekey can begin. However, after all of the CTE 7.6.x hosts have unguarded, no CTE 7.6.x host can guard moving forward, as the group is now running at the CTE 7.7.y level.
Note
- Thales recommends that you upgrade CTE on all hosts in an LDT GuardPoint Group during the same maintenance window. If that is not possible, Thales recommends that subsequent maintenance windows for upgrades occur before the key is scheduled to rotate.
- A host that was previously part of the group could fail to rejoin after a reboot if a key rotation occurs and blocks hosts with mismatched versions from joining. Upgrade all hosts to the same version of CTE to ensure that key rotation does not block any host from joining.
- CTE 7.6.0 agents prior to 7.6.0.134 acting as primary start a rekey even if there are CTE 7.7.0 hosts in the group; the CTE 7.7.0 hosts log an alert in this case. Thales recommends upgrading all CTE 7.6.0 hosts using LDT NFS GuardPoints to CTE 7.6.0.134 or a subsequent version.
CTE Version Capability Levels
- CTE 7.5.0 members are automatically assigned a maximum capability level of 2.
- CTE 7.6.0 members can run at a maximum capability level of 3.
The difference in capability between CTE 7.5.0 and CTE 7.6.0 is due to the inclusion of sharing LDT NFS GuardPoints with Windows. See Windows/Linux Compatibility for more information on this feature.
- CTE 7.7.0 members can run at a maximum capability level of 4.
To find the capability level of an LDT GuardPoint Group, type:
voradmin ldt group capability <GuardPoint Path>
This command outputs the capability level that the current host is running at for the specified GuardPoint. If the host is the primary host, then the value also represents the capability level for the entire group.
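If you need to collect this value across many hosts, a small wrapper such as the following could invoke the documented command and return its raw output. The GuardPoint path shown is a hypothetical example, and no assumption is made about the output format.

import subprocess

def guardpoint_capability(gp_path: str) -> str:
    """Run the documented voradmin query for one GuardPoint and return
    whatever it prints; parsing is left to the caller."""
    result = subprocess.run(
        ["voradmin", "ldt", "group", "capability", gp_path],
        capture_output=True, text=True, check=True)
    return result.stdout.strip()

# Example with a hypothetical NFS GuardPoint path:
print(guardpoint_capability("/mnt/nfs/guarded_share"))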