Using CTE with Pacemaker
This chapter describes how to configure CTE with Pacemaker and MySQL on Red Hat 7 or Red Hat 8.
Overview
Pacemaker is a high-availability cluster resource manager that runs on a set of hosts (a cluster of nodes) in order to preserve integrity and minimize downtime of desired services (resources). Every resource has a resource agent that abstracts the service it provides and presents a consistent view to the cluster.
Thales provides a resource agent for CTE that allows CTE to guard nodes running MySQL databases over an NFS share in a Pacemaker environment.
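The agent is registered with Pacemaker as an OCF resource agent and is referenced later in this chapter as ocf:heartbeat:mgp. As a quick sanity check (a minimal sketch, assuming the agent file is already in the ocf directory as described under Considerations and Requirements), you can ask pcs to display the agent metadata, including the mgpdir, start_services, and stop_services parameters described later in this chapter:
pcs resource describe ocf:heartbeat:mgp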
Considerations and Requirements
- System Requirements:
  - Red Hat 7 or Red Hat 8. Other versions of Red Hat are not supported.
  - Pacemaker with Corosync and the MySQL database service configured. Make sure that the Pacemaker property settings for features such as STONITH and quorum are correct for your Pacemaker environment (see the sketch after this list).
  Note
  Other database applications may be used instead of MySQL, but Thales has only tested the CTE resource agent with MySQL.
- The CTE resource agent supports manual GuardPoints on a device or folder. Automatic GuardPoints are not supported.
- The CTE resource agent supports GuardPoints created from Standard or Live Data Transformation policies. It does not support IDT-Capable GuardPoints or CTE-Efficient Storage GuardPoints.
- If you install Pacemaker after you have installed CTE, you must copy the CTE resource agent to the ocf directory:
  For CTE-U:
  cp /opt/vormetric/DataSecurityExpert/agent/secfs/systemd/pacemaker_ra_mgp <ocf directory>
  For CTE:
  cp /opt/vormetric/DataSecurityExpert/agent/secfs/.sec/bin/pacemaker_ra_mgp <ocf directory>
- Upgrades to the CTE resource agent require using Pacemaker to put each node into maintenance mode while the upgrade is performed (see the sketch after this list).
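The following commands are a minimal sketch of how you might verify the points above before continuing; they are optional checks, and node1 is a placeholder for one of your cluster nodes.
# Review the current Pacemaker property settings (for example, STONITH and quorum policy)
pcs property list --all
# Confirm that the cluster, Corosync, and the MySQL service resources are healthy
pcs status
# When upgrading the CTE resource agent, put the node into maintenance mode first,
# then take it out of maintenance mode after the upgrade is complete
pcs node maintenance node1
pcs node unmaintenance node1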
Creating GuardPoints
- Install the CTE Agent on each node in the Pacemaker cluster, and register that node in the same domain in your key manager. For details, see Getting Started with CTE for Linux.
- In your key manager, do the following (you can verify the resulting GuardPoints on each node as shown after this list):
  - Create a manual GuardPoint for the MySQL data directory /var/lib/mysql/ on all of the nodes in the cluster. You can use either a Standard or a Live Data Transformation policy to create the GuardPoint, but you must use the same policy on the GuardPoint for each node.
  - Create any other manual GuardPoints you want to use. Each GuardPoint must be created on all nodes in the cluster, and each set of GuardPoints must use the same policy.
    For example, if you have three nodes in the cluster and you want to add a Standard GuardPoint for /hr/shared/files and a Live Data Transformation GuardPoint for /dir/accounting/data, on each of the three nodes in the cluster you would create:
    - A Standard or Live Data Transformation GuardPoint for /var/lib/mysql/.
    - A Standard GuardPoint for /hr/shared/files.
    - A Live Data Transformation GuardPoint for /dir/accounting/data.
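After the GuardPoints have been pushed to the nodes from your key manager, you can list them on each node with the secfsd utility that ships with CTE. This is a quick check, not a required step; manual GuardPoints are enabled and disabled by the resource agent rather than by hand.
# List the GuardPoints configured on this node and their current guard status
secfsd -status guard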
- Select one of the nodes in your cluster and log into that node as root.
- On the selected node, add a Pacemaker resource for the manual GuardPoint using the following command:
  pcs resource create mysql-mgp ocf:heartbeat:mgp mgpdir=/var/lib/mysql \
    [start_services=true] [stop_services=false]
  where:
  - start_services=true is an optional parameter that tells the resource agent to start the CTE services before enabling any manual GuardPoints. The options are true or false, and the default is true.
  - stop_services=false is an optional parameter that tells the resource agent to stop the CTE services after disabling any manual GuardPoints. The options are true or false, and the default is false.
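For example, a hypothetical variant of the command above that also has the resource agent start the CTE services when the GuardPoint is enabled and stop them when it is disabled would be:
pcs resource create mysql-mgp ocf:heartbeat:mgp mgpdir=/var/lib/mysql \
  start_services=true stop_services=true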
- Create the required resource groups, colocation settings, and constraints.
  Warning
  If the resource groups and colocation constraints are not configured properly, the PCS cluster could run resources on multiple nodes. If the ordering constraints are not set properly, the cluster could fail to start because the system tries to mount the GuardPoint before the file system has been mounted, or tries to unmount the file system before the GuardPoint has been disabled. Make sure that the following resource groups and constraints are set properly.
  To do so:
  - Create a MySQL file system group (mysql-fsg) that contains mysql-fs and mysql-mgp using the following command:
    pcs resource group add mysql-fsg mysql-fs mysql-mgp
  - Create a MySQL service group (mysql-sg) that contains mysql-server and mysql-vip using the following command:
    pcs resource group add mysql-sg mysql-vip mysql-server
  - Configure the colocation and ordering constraints so that:
    - mysql-sg and mysql-fsg are colocated.
    - mysql-fs starts before mysql-mgp.
    - mysql-mgp stops before mysql-fs.
    - mysql-fsg starts before mysql-sg.
    To do so, use the following commands:
    pcs constraint colocation add mysql-sg with mysql-fsg
    pcs constraint order start mysql-fs then start mysql-mgp
    pcs constraint order stop mysql-mgp then stop mysql-fs
    pcs constraint order start mysql-fsg then start mysql-sg
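After creating the groups and constraints, you can review what Pacemaker has recorded before moving on. This is a quick optional check; on newer pcs releases the constraint listing subcommand is pcs constraint config, on older ones it is pcs constraint show, and plain pcs constraint works in either case.
pcs resource group list
pcs constraint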
- To check the status of the cluster after the configuration is complete, use the pcs status command.
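Once pcs status shows all resources started on one node, an optional way to exercise the setup is to put that node in standby, confirm that the file system, GuardPoint, and MySQL resources move together to another node, and then restore the node. This is a sketch only; node1 is a placeholder for the node currently running the resources, and on Red Hat 7's older pcs the equivalent commands are pcs cluster standby and pcs cluster unstandby.
pcs node standby node1
pcs status
pcs node unstandby node1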