Clusters and Nodes
Clusters and Nodes are the resources used to create and manage CipherTrust Manager clustering.
A cluster is a group of connected CipherTrust Manager appliances that share data.
The main purpose of clustering is to support High Availability. When clustered, all CipherTrust Manager appliances in the cluster (called cluster nodes or nodes) will continuously synchronize their databases with each other - any appliance in the cluster may be contacted for any operation. A cluster supports any combination of appliance models; both virtual and physical appliances can join the same cluster.
Note
Cluster Node Limitation - The maximum number of cluster nodes is 20. Joining a 21st node will fail.
If a member of the cluster gets temporarily disconnected, an alarm is set and syslog messages are sent. When the member comes back online, it synchronizes with the other members of the cluster. If a member of the cluster gets permanently disconnected, notify the other members by sending a ksctl cluster nodes delete
command to one of them; that node informs the rest of the cluster. This prevents the remaining nodes from continually storing catch-up changes for the missing node, which would eventually fill up their local volumes.
Nodes in a cluster communicate over port 5432. Members of the cluster must have bi-directional access to each other on port 5432.
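To verify connectivity before clustering, you can run a quick port check from a shell on each node (or a host in the same network path). This is a minimal sketch; it assumes netcat (nc) is available in your environment, and <other_node_IP> is a placeholder:
$ nc -zv <other_node_IP> 5432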
Use the ksctl command line interface for managing clusters and nodes. The relevant commands are:
ksctl cluster csr
ksctl cluster delete
ksctl cluster fulljoin
ksctl cluster info
ksctl cluster join
ksctl cluster new
ksctl cluster nodes list
ksctl cluster nodes get
ksctl cluster nodes create
ksctl cluster nodes delete
Note
All global flags can be configured using a configuration file. Otherwise, you will need to specify the URL, username and password for each call as demonstrated in a few examples below.
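As a sketch, a configuration file (typically ~/.ksctl/config.yaml) might look like the following. The key names shown here are assumptions based on common ksctl setups; verify them against your ksctl version:
KSCTL_URL: https://<appliance_IP>
KSCTL_USERNAME: admin
KSCTL_PASSWORD: Password_1
KSCTL_NOSSLVERIFY: false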
Note
The cluster commands require an IP address (or hostname) of both the cluster member and of the joining node. These addresses are used by the nodes to communicate with each other, not by the CLI to talk to the nodes. For example, in AWS, if the nodes are in the same VPC, use the internal IP addresses of the member and joining node, not their public IPs.
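For example, when joining two nodes that share an AWS VPC, a fulljoin call (described below) would use their private addresses; all values here are placeholders:
$ ksctl cluster fulljoin --member=10.0.1.12 --newnodehost=10.0.1.13 --newnodeconfig=config.yaml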
What is not clustered?
Most data is clustered; however, the following node-specific items are not clustered, that is, they are not replicated across the cluster:
Backup files
Backup keys
System logs
NTP configuration
HSM configuration
Instance name
Virtual CipherTrust Manager license
Interface certificates
Proxy settings
Note
The 'Connector Lock Code', unlike the 'Key Manager Lock Code', is cluster-wide. This means that a license applied using the Connector Lock Code is replicated across the cluster.
Managing Clusters and Nodes
To check the status of a cluster
To check the status of a cluster, enter the command:
$ ksctl cluster info
This returns the following response:
{
  "nodeID": "",
  "status": {
    "code": "none",
    "description": "not clustered"
  }
}
To create a new cluster
Make the following call on a running appliance (insert the hostname/IP of the appliance). When running cluster new, --host and --url are typically identical, except that --host omits the protocol. To create a new cluster, enter the command:
$ ksctl cluster new --host=localHostName --url=urlOfCurrentNode --user=username --password=Password_1
This returns the following response:
{
  "nodeID": "ab40e178-5f1d-4f03-8b26-7ca378f74988",
  "status": {
    "code": "r",
    "description": "ready"
  },
  "nodeCount": 1
}
To show the status of all nodes in the cluster
Enter the following command on any node:
$ ksctl cluster nodes list
This returns the following response:
{
  "skip": 0,
  "limit": 256,
  "total": 1,
  "resources": [
    {
      "nodeID": "c05ead39-94ac-4459-a371-9915e3e16ebf",
      "status": {
        "code": "r",
        "description": "ready"
      },
      "host": "kylo_pg_1",
      "isThisNode": true
    }
  ]
}
To show the status of a single node in the cluster
To show the status of a single node in a cluster, enter the command:
$ ksctl cluster nodes get --id=c05ead39-94ac-4459-a371-9915e3e16ebf
This returns the following response:
{
  "nodeID": "c05ead39-94ac-4459-a371-9915e3e16ebf",
  "status": {
    "code": "r",
    "description": "ready"
  },
  "host": "kylo_pg_1",
  "isThisNode": true
}
To join a node to a cluster
When joining a node to a cluster, three steps are required:
Get the CSR from the joining node by entering the command ksctl cluster csr
Get the certificate and CA chain from a member node by entering the command ksctl cluster nodes create
Run the join command from the joining node by entering the command ksctl cluster join
While each of these commands can be run individually, it is simpler to use the fulljoin command, which performs all three steps. For the cluster fulljoin command, you must provide a member's IP or DNS name, the joining node's IP or DNS name, and either the joining node's configuration file or its username, password, and URL. To complete the process, enter the fulljoin command:
$ ksctl cluster fulljoin --member=<member_IP_or_DNS> --newnodehost=<joining_IP_or_DNS> --newnodeconfig=<config_File_of_Joining_Node>
CipherTrust Manager internally uses a flexible security architecture to allow clustering instances with both shared and distinct 'root of trust' configurations.
If the member node and the new node share the same HSM partition, it is recommended to pass the --shared-hsm-partition flag during nodes create or fulljoin. The flag further increases the security of the cluster join procedure by ensuring that the new node indeed has access to the same HSM partition root keys before it can join the cluster.
When the new node uses a distinct HSM partition, or does not use an HSM at all, it can still join an existing cluster; simply omit the --shared-hsm-partition flag.
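For example, a fulljoin between two nodes backed by the same HSM partition might look like this (host and file names are placeholders):
$ ksctl cluster fulljoin --member=memberIPOrDNS --newnodehost=joiningIPOrDNS --newnodeconfig=configFileOfJoiningNode --shared-hsm-partition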
Note
Each CipherTrust Manager node in the cluster uses its own root of trust configuration to protect the KEK chain that secures the sensitive data of the cluster. When a new node that is not connected to an HSM joins a cluster in which all nodes are HSM-connected, it becomes the weakest point in the key hierarchy and potentially weakens the overall security of the cluster. Use with caution.
An example of the cluster fulljoin command using a configuration file:
$ ksctl cluster fulljoin --member=memberIPOrDNS --newnodehost=joiningIPOrDNS --newnodeconfig=configFileOfJoiningNode
Response:
When you add a node to a cluster, the existing data of the node is deleted. Are you sure want to join? [y/N]
Attempting to get the CSR from the joining node...
Finished getting the CSR from the joining node...
Attempting to get the Certificate and CA Chain from the member node...
Finished getting the Certificate and CA Chain from the member node...
Attemping to join the new node to the member's cluster...
{
  "nodeID": "",
  "status": {
    "code": "creating",
    "description": "joining: creating (1/5)"
  }
}
When a node joins a cluster, the node adopts the credentials of the cluster. Would you like to write the new cluster credentials
to the provided configuration file? [y/N]
y
An example of the cluster fulljoin command without using a configuration file:
$ ksctl cluster fulljoin --member=memberIPOrDNS --newnodehost=joiningIPOrDNS --newnodepass=joiningNodePass --newnodeuser=joiningNodeUser --newnodeurl=joiningNodeURL
Response:
When you add a node to a cluster, the existing data of the node is deleted. Are you sure want to join? [y/N]
y
Attempting to get the CSR from the joining node...
Finished getting the CSR from the joining node...
Attempting to get the Certificate and CA Chain from the member node...
Finished getting the Certificate and CA Chain from the member node...
Attemping to join the new node to the member's cluster...
{
  "nodeID": "",
  "status": {
    "code": "creating",
    "description": "joining: creating (1/5)"
  }
}
To remove a node from a cluster
To remove a node from a cluster, first remove the node from the cluster, then delete the cluster configuration on the removed node. Deleting the cluster configuration allows the node to rejoin the original cluster, join another cluster, or create a new cluster.
Remove the node from the cluster. A node cannot remove itself, so you must call this on some other node in the cluster:
$ ksctl cluster nodes delete --id=ebab0738-6e09-4b0d-8c99-850d7f24dfac
Delete the cluster configuration on the removed node:
$ ksctl cluster delete
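To confirm the removal, list the nodes from any remaining cluster member; the deleted node should no longer appear in the output:
$ ksctl cluster nodes list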
Note
If ksctl cluster delete does not work for any reason, it is always possible to perform a full system reset to ensure any leftover data is removed from the node. A node can be reset using the ksctl services reset command.
Warning
Refer to section Services Reset for important information on using this command.
To rejoin a node to a cluster
Before rejoining, it is recommended to reset the node.
To reset the node, enter the command:
$ ksctl services reset
Warning
Refer to section Services Reset for important information on using this command.
Join the node as described in To join a node to a cluster.
To create a new cluster from a removed node
Ensure the node has been successfully removed from the cluster and that ksctl cluster delete has been performed, as described in To remove a node from a cluster.
Create a new cluster as described in To create a new cluster.
Cluster Upgrade
A cluster can be upgraded in two ways:
In-place Cluster Upgrade
A cluster can be upgraded in-place since version 1.9.0. The upgrade is generally limited to one minor version at a time, for example, from 2.0.0 to 2.1.0; from 2.1.0 to 2.2.0; or from 2.2.0 to 2.3.0. Be aware of the following considerations when performing an in-place cluster upgrade.
Note
If you attempt to upgrade from 1.10 to 2.0 with DNS entries in the cluster configuration, that upgrade might fail with database errors. In this situation, run kscfg system reset on the affected node, upgrade your other nodes from 1.10 directly to 2.1, upgrade the affected node to 2.1, re-join the cluster, and continue upgrading nodes to 2.2.0.
The node being upgraded will be inaccessible during the upgrade, which may take 10 minutes or more. Clients must be able to handle this outage.
There will be a brief period of time (under 30 seconds) where the database will be locked while upgrading the first node. This affects all nodes at the same time, and some nodes may give error responses during this time.
All nodes in the cluster should be upgraded as soon as possible; nodes running different versions of the firmware will behave differently, potentially causing problems with applications.
To perform an in-place cluster upgrade
Before doing any upgrade operation, ensure that you have a backup, and that you have downloaded the backup and associated backup key.
Ensure all nodes in the cluster are up and operating normally. Resolve any issues (like removing any obsolete nodes) before performing the upgrade.
Perform a system upgrade on each node, one at a time. Ensure the upgrade of each node is complete and that the node is operating normally, before proceeding to the next node.
Note
When updating the first node in a cluster, the cluster nodes may briefly experience slower than usual response times. This occurs because the shared database schema for the cluster is updated with the first node.
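Before upgrading each node, a quick health check from any node can confirm that every member reports a ready status, using the commands shown earlier in this section:
$ ksctl cluster info
$ ksctl cluster nodes list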
Upgrade using the cluster remove/rebuild method
On one of the cluster nodes, create and download a backup with its corresponding backup key, in case there are any problems.
Remove all nodes from the cluster except one.
Delete the cluster configuration on the remaining node.
$ ksctl cluster delete
Perform the upgrade on the remaining node.
Ensure there is at least 12GB of space available (not including the upgrade file) before proceeding.
scp the archive file to the CipherTrust Manager:
$ scp -i <identity_file> <update file name> ksadmin@<ip>:.
SSH into the CipherTrust Manager as ksadmin and run the following command:
$ sudo /opt/keysecure/ks_upgrade.sh -f <~/filename>
The signature of the archive file is verified and the upgrade is applied.
After the appliance reboots, re-build the cluster by creating a new cluster on this node.
Perform the upgrade on all other removed nodes.
Note
If a previously used node is to be re-used, the cluster must first be deleted from that system.
Join new instances to the cluster.
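Putting the rebuild steps together, the flow on the remaining node and one rejoining node might look like the following sketch; host names, credentials, and file names are placeholders:
$ ksctl cluster new --host=remainingNodeHost --url=https://remainingNodeHost --user=admin --password=Password_1
$ ksctl cluster fulljoin --member=remainingNodeHost --newnodehost=upgradedNodeHost --newnodeconfig=configFileOfUpgradedNode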