Requirements and Considerations
Location of the CTE Private Region
The CTE Private Region contains the metadata that CTE requires to support the initial transformation of data on the device and subsequent data transformation to other encryption keys. By default, CTE creates the CTE Private Region at the beginning of the guarded device. If data already exists on the device, CTE requires that the device be expanded by 63 MB to make room for the CTE Private Region. The existing data in the first 63 MB of the device is then migrated into the expanded space, and the beginning of the device is reserved for the CTE metadata. This data relocation is completely transparent to applications and users.
With a Teradata Database Appliance, however, CTE cannot create the CTE Private Region at the beginning of the Teradata pdisk devices because the disks in the Appliance cannot be expanded. Therefore, for Teradata Databases, CTE stores the metadata in a special directory called the CTE Metadata Directory, located at /var/opt/teradata/vormetric/vte-metadata-dir. This directory contains all of the metadata for every Teradata Database device that is protected by CTE.
While this does not affect the functionality of CTE, it does affect the way administrators need to back up the Teradata Database, because the Teradata Database and the CTE metadata directory must be backed up together. You will not be able to restore a Teradata Database without access to the associated metadata in the CTE metadata directory.
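For example, a minimal way to keep the two together is to archive the CTE metadata directory alongside the database backup, as sketched below. The destination path /backup/teradata and the timestamped file name are assumptions for illustration only; use the backup tooling and target your site already employs.
Example
# Archive the CTE metadata directory so it can be stored with the database backup.
# /backup/teradata is an assumed destination; substitute your actual backup target.
tar czf /backup/teradata/vte-metadata-dir-$(date +%Y%m%d).tar.gz \
    -C /var/opt/teradata/vormetric vte-metadata-dir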
Metadata File Access and Teradata Clusters
A Teradata cluster can contain multiple hosts. The members of a cluster that share access to pdisk devices belong to a clique. When you create an IDT-Capable GuardPoint on a pdisk, the metadata for that GuardPoint must be available to all members of the clique. CTE automatically replicates the metadata files across the members of a clique when the metadata is created or changed. For details, see Replication of IDT Metadata Files Across Members of a Clique.
Additional Requirements and Considerations
- The Teradata kernel must be at minimum version 4.4.140-96.54.TDC-default on every node of the Teradata Database Appliance on which you plan to install CTE. Refer to the CTE release notes for compatibility requirements between the Teradata kernel releases and the CTE releases. For one way to check the kernel and component versions on each node, see the version-check sketch after this list.
Caution
Be sure the version of the Teradata Database is fully compatible with CTE.
- The Parallel Upgrade Tool (PUT) component of Teradata (TDput) has been enhanced to discover CTE-protected devices. TDput must be version TDput-03.09.06.09 or higher, and this PUT component must be available in your Teradata Database.
- The Parallel Database Extensions (PDE) component of Teradata (ppde) must be version ppde-16.20.53.07 or higher. This PDE component must be available in your Teradata Database.
- The Teradata I/O Scheduler (tdsched) component must be version 01.04.02.02-1 or higher. This I/O Scheduler component must be available in your Teradata Database.
- Contact your Teradata Customer Support Representative if you are unsure of the availability of this functionality in your Teradata cluster.
- The CTE Agent must be installed in /opt/teradata/vormetric on every node in the Teradata cluster. To specify this location, use the -d option when installing CTE. For example:
  ./vee-fs-7.3.0-135-sles12-x86_64.bin -d /opt/teradata/vormetric
Note
The CTE Agent can be installed without stopping the Teradata Database service.
- The CTE metadata directory /var/opt/teradata/vormetric/vte-metadata-dir must be guarded by the Administrator to prevent accidental modification or deletion of the CTE metadata files. If the CTE metadata directory is not guarded, any attempt to configure or enable an IDT-Capable GuardPoint on the Teradata appliance will be rejected. The standard CTE policy associated with the metadata directory must:
  - Deny all users (including the root user) the ability to modify or remove any files in the metadata directory.
  - Specify the key clear_key in the key rule so that the metadata is stored in clear text.
  The following table shows how to set up your policy for guarding the CTE metadata directory:
  Security Rules
    Action: Read        Effect: Permit, Audit
    Action: all_ops     Effect: Deny, Audit
  Key Selection Rules
    Key: clear_key
- When upgrading an existing CTE Agent installation, stop the Teradata Database service. To stop the service, type:
  tpareset -x Stopping Database
- Stop the Teradata Database service so that CTE can rekey any guarded devices. The service must remain stopped until CTE has finished rekeying all of the guarded devices on the appliance.
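The kernel and Teradata component versions listed above can be checked on each node before installing CTE. The sketch below assumes a SLES-based appliance and assumes the components are installed as RPM packages named TDput, ppde, and tdsched; adjust the package names to match what is actually installed on your system.
Example
# Confirm the kernel release on this node (minimum 4.4.140-96.54.TDC-default).
uname -r

# Query the Teradata component packages; the package names are assumptions.
for pkg in TDput ppde tdsched; do
    rpm -q "$pkg" || echo "$pkg not found; check with Teradata Customer Support"
done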
Calculate Encryption/Key Rotation Time for Teradata Databases
The Thales Engineering team has a set of scripts that help calculate the estimated encryption time for CTE on a Teradata Database cluster.
Using the dd command, the scripts simulate the I/O sequences for encrypting pdisks during initial encryption. The scripts perform only read operations for the simulation. Two parameters are required for the simulation: the amount of data to read from each pdisk, and the IDT concurrency level to be applied during the simulation. The larger the data size, the more sample performance data is available for estimation. Thales recommends a minimum of 4 GB for the data size.
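As a rough illustration of what the simulation does, each pdisk is read sequentially with dd and the data is discarded, generating read-only I/O. The command below is not the Thales script itself; the device path and 4 GB read size are example values only.
Example
# Read 4 GB from one pdisk and discard it; this produces read I/O only.
# /dev/sda1 is an example device; substitute a pdisk assigned to this node.
dd if=/dev/sda1 of=/dev/null bs=1M count=4096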
The concurrency level is the IDT concurrency level used when encrypting IDT-configured devices. You can specify the number of segments to transform concurrently using the -c option of the voradmin command when initializing the device. Choose a concurrency level that does not affect the performance of your production workload. By default, CTE-IDT transforms 8 segments concurrently if the concurrency level has not been specified through the voradmin command. Specify a concurrency level of 16 if the backend storage for the pdisks uses high-speed flash drives; otherwise, use 8.
Note
When choosing the concurrency level for your system, you must consider the number of CPU cores, the total IOPS of your storage system and production workload, the size of the device to transform, and the acceptable duration of the data transformation.
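For reference, the invocation below shows where the -c option fits when initializing a device. The voradmin idt config subcommand and its arguments are an assumption based on the general CTE-IDT workflow and may differ between releases; confirm the exact syntax in the CTE documentation for your version before running it.
Example
# Assumed form only: initialize an IDT-Capable device with a concurrency level of 16.
# Verify the subcommand and arguments against your CTE release before use.
voradmin idt config new -c 16 /dev/sdX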
The scripts collect performance data in log files during execution. The logs are saved in /tmp/cte_data and /tmp/perfmon_cte_dd.log. The Thales team will need those logs to help you with encryption time estimation.
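One simple way to gather those logs for Thales is to bundle them into a single archive, as in the sketch below; the archive name is arbitrary and shown only as an example.
Example
# Bundle the simulation logs into one archive to send to Thales Technical Support.
tar czf /tmp/cte_perf_logs.tar.gz /tmp/cte_data /tmp/perfmon_cte_dd.log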
Obtaining the Scripts and Calculating Encryption Time
- Download the scripts:
- Modify the bash variable PDISKS in the script con-pdisks-dd.sh.
  Example
  PDISKS="list of pdisks assigned to this node ..."
  PDISKS="/dev/sda1 /dev/sda2 /dev/sdb1 /dev/sdb2"
- Divide the disks assigned to the clique equally amongst the nodes of the clique (see the splitting sketch after the notes at the end of this section).
- Copy all of the scripts to the same directory on each node of the cluster.
- Run con-pdisks-test.sh concurrently on all of the nodes with the selected devices.
  # sh con-pdisks-test.sh <number of GBs to read from each pdisk> <concurrency level to be applied when reading pdisks>
  Example
  # sh con-pdisks-test.sh 4 16
- Contact Thales Technical Support. They will need the collected data to calculate the encryption time.
Note
- The scripts perform only read operations to generate an I/O load similar to the IDT encryption process; there are no write operations. Thales factors in a few additional parameters to estimate the rekey time.
- Thales recommends executing the scripts when the database is not busy.
- If there are multiple cliques, and you require an estimate for each clique, collect the data for each clique separately.
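The following sketch shows one way to split a clique's pdisk list evenly across its nodes, as referenced in the step about dividing the disks. The device names and node count are example values, and the script is not part of the Thales tooling.
Example
#!/bin/bash
# Round-robin an example pdisk list across the nodes of a clique so that each node
# gets its own PDISKS value for con-pdisks-dd.sh. All values here are illustrative.
ALL_PDISKS=(/dev/sda1 /dev/sda2 /dev/sdb1 /dev/sdb2 /dev/sdc1 /dev/sdc2)
NODES=3
for ((n = 0; n < NODES; n++)); do
    assigned=()
    for ((i = n; i < ${#ALL_PDISKS[@]}; i += NODES)); do
        assigned+=("${ALL_PDISKS[i]}")
    done
    echo "Node $n: PDISKS=\"${assigned[*]}\""
done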