Post-install
Configuring DNS Connectivity
To ensure uninterrupted connectivity to the UI, it is recommended to create a DNS entry for the active CipherTrust Manager node and to use this entry when configuring the Agents and accessing the CipherTrust Manager UI. This way, the Agents can reach CipherTrust Manager even if its IP address changes.
Standalone: If the CipherTrust Manager appliance is standalone (not in a cluster), configure a DNS entry with its IP address.
Clustered: In a CipherTrust Manager cluster, configure a DNS entry with the IP address of the cluster's "active" node. See Identifying the Active DDC Node for details.
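As a quick sanity check, you can confirm that the DNS entry resolves to the intended node; a minimal sketch, where ddc.example.com is a placeholder for the DNS name you created:
# Verify that the DNS name resolves to the active CipherTrust Manager node's IP address (ddc.example.com is a placeholder).
nslookup ddc.example.com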
Configuring DDC in a CipherTrust Manager Cluster
Although running a DDC cluster is currently not supported, Agents can be deployed into a CipherTrust Manager cluster. In this configuration, one of the CipherTrust Manager nodes has to be assigned as the active DDC node, and all the Agents report to it. This DDC configuration does not support failover: if the active node fails, DDC stops working and remains unavailable for as long as that node is down. The only way to resume DDC operations without any data loss is to restore the original active node.
There are two possible scenarios for running DDC in a cluster:
"Greenfield" deployment: in this scenario DDC is deployed into a completely new environment. See Deploying DDC into a New Environment for more information.
"Brownfield" deployment: in this scenario DDC is deployed into an existing CipherTrust Manager cluster. See Deploying DDC into an Existing Cluster for more information.
In both scenarios it is essential to identify the active DDC node, because DDC cannot function in a cluster without one. For information on assigning the active node and identifying its IP address or hostname, refer to Assigning the Active DDC Node and Identifying the Active DDC Node.
Deploying DDC into a New Environment
If you are deploying DDC into an environment with no existing CipherTrust Manager server to connect the Agents to, you have to deploy CipherTrust Manager first, because a running, active CipherTrust Manager server is required to complete the Agents' installation and configuration. If you are considering forming a CipherTrust Manager cluster at some point and using that cluster with DDC, plan your deployment carefully in advance.
It is strongly recommended to create any planned cluster of CipherTrust Manager nodes before deploying DDC into it. Creating a CipherTrust Manager cluster out of CipherTrust Manager servers that already host DDC risks losing the data that DDC collected before the cluster was created. This is because any new CipherTrust Manager node added to the cluster has its DDC database wiped and replaced with a copy of the active node's database.
This can be destructive if, for example, you initially planned to have a few independent DDC systems (for example, for monitoring different segments of the network) on unclustered CipherTrust Manager servers but later decided to form a cluster out of them. In that case, all CipherTrust Manager servers that also act as DDC servers lose all of their DDC data and settings, except for the active node.
Deploying DDC into an Existing Cluster
Deploying DDC into an existing CipherTrust Manager cluster is a relatively straightforward task. After the installation, you just have to make one of the CipherTrust Manager nodes the active DDC node, and then connect all the Agents to that active DDC node to complete their configuration.
From then on, the active DDC node stores all the configuration settings in its database. The database of the active node is replicated across all cluster nodes, so every cluster member holds an identical copy of the database. All the copies are synchronized and updated every time new data is inserted into the copy on the active node.
Assigning the Active DDC Node
DDC always requires one active node regardless of the number of CipherTrust Manager nodes (whether it is a single CipherTrust Manager node or a cluster of two or more CipherTrust Manager nodes). It is this active node that the DDC Agents point to (via DNS) and it is only this node that will respond to GUI operations.
Tip
For system performance and stability reasons, we recommend 64 GB of RAM on the active DDC node. One symptom of a memory deficit in the active DDC node is CipherTrust Manager freezing while running a scan that detects more than 300,000 Sensitive Data Objects.
To create an active DDC node, you have to "earmark" one of the CipherTrust Manager nodes as the active DDC node. This is a manual procedure, which can be performed either through the CipherTrust Manager UI or the command line. In both cases, you must have DDC administrator rights. The assignment of the active DDC node does not affect normal CipherTrust Manager cluster operation.
Note
The assignment of a DDC active node cannot be undone!
To assign the active DDC node by using the CipherTrust Manager UI, follow this procedure:
Log in to the CipherTrust Manager node that you want to make the active DDC node.
Click the Data Discovery and Classification link to open the DDC app.
You should see the "The current node is inactive. The node must be activated to use DDC." message.
Click the Activate button below the message.
The CipherTrust Manager node becomes the active DDC node.
To assign the active DDC node by using the CipherTrust Manager command line, you need the ksctl tool installed and configured.
Connect to the CipherTrust Manager node that you want to make the active DDC node, and issue this command:
ksctl ddc active-node register
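A minimal sketch of the command-line flow, assuming ksctl is already configured to point at the node you want to activate (both commands appear in this guide; see also Identifying the Active DDC Node):
# Assign the current node as the active DDC node, then confirm the assignment.
ksctl ddc active-node register
ksctl ddc active-node info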
After you assign an active DDC node, you can perform all DDC-related tasks through that node. The other, non-active nodes will not allow you to work with DDC. When you log in to a CipherTrust Manager node that is a non-active DDC node and open the Data Discovery and Classification application, this message is displayed:
You are currently connected to an inactive node. You must switch to the active node
<active DDC node IP address>
to run DDC.
Identifying the Active DDC Node
This section shows how to locate the active node. For this procedure, you need the ksctl tool, configured to access any of the cluster nodes.
To find the IP address of the active CipherTrust Manager node, run this command:
ksctl ddc active-node info
The output shows the IP address of the cluster's active node. Use this IP address to configure a DNS entry for the active CipherTrust Manager. Use that DNS entry to configure the Agents and access the CipherTrust Manager UI.
{
"public_address": "mycluster.thalesgroup.com",
"host": "10.45.102.101"
}
If the CipherTrust Manager appliance is not in a cluster, the command returns the following error:
{
"code": 15,
"codeDesc": "NCERRBadRequest: Bad HTTP request",
"message": "oleander is not in cluster mode"
}
In this case, just use the IP address (or DNS entry) of this single CipherTrust Manager node.
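For scripting, a minimal sketch that extracts the active node's IP address from the JSON output shown above (it assumes the jq utility is installed):
# Print only the "host" field from the active-node info (requires jq).
ACTIVE_NODE=$(ksctl ddc active-node info | jq -r '.host')
echo "Active DDC node: ${ACTIVE_NODE}"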
Configuring TDP (On-prem)
Before you go on to configure Hadoop Services, a CipherTrust Manager administrator has to add a Hadoop Knox Connection in CipherTrust Manager for your TDP cluster through the Access Management > Connections Management page.
Note
When adding or updating a Hadoop Knox Connection in Connections Management, keep these recommendations in mind:
Use the hostname of your Knox server instead of its IP address. The server certificate that you will import later is based on the hostname. The default port is 8443.
Knox must also be DNS addressable, either through your network DNS or by adding a DNS entry as described in the CipherTrust Manager Administration Guide section Configuring DNS Hosts.
Refer to Adding a New Connection for the procedure on how to add a new Hadoop Knox Connection.
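To verify the hostname, port, and certificate before creating the connection, you can inspect the certificate that Knox presents; a minimal sketch using standard OpenSSL tooling, where knox-host is a placeholder for your Knox server's hostname:
# Show the subject of the certificate presented by Knox on the default port 8443 (knox-host is a placeholder).
openssl s_client -connect knox-host:8443 -servername knox-host </dev/null 2>/dev/null | openssl x509 -noout -subject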
To configure DDC with TDP, perform the following steps:
Type the CipherTrust Manager URL in the browser and log on to the CipherTrust Manager console as the administrator.
Click the Data Discovery and Classification tile.
In the sidebar on the left, click Settings > Hadoop Services. The Hadoop Services page is displayed. This page contains two tabs - HDFS and Livy.
Note
Only the users in the root domain have access to and can modify the Hadoop Services configuration. For more information about domains, refer to Domains.
Warning
Once configured, the Hadoop Services settings must not be modified, or you will lose access to all data.
Configuring TDP (On-prem) Service HDFS
To configure DDC for HDFS, click the HDFS tab, and configure the following HDFS settings:
Knox Connection: the name of your Hadoop connection as configured in the Connections Management in CipherTrust Manager.
Knox Gateway Hostname / Port: the connection details of the Knox server.
They are retrieved automatically from Connections Management on the basis of the Knox Connection name. DDC tries to connect to the first URL in the list retrieved from Connections Management. If it cannot connect due to any error, it falls back to the next URL, and so forth. DDC only errors out if it cannot connect to any of the URLs in the list.
Note
Make sure that, when creating the connection in Connections Management, you use the Knox gateway hostname as returned by the 'hostname' command on the node that is running Knox, not the Fully Qualified Domain Name (FQDN). For example, if the FQDN is "my-tdp-node1-269.sjinternal.com" and the 'hostname' command returns "my-tdp-node1-269", use "my-tdp-node1-269" in Connections Management. Otherwise, you will get an error when you try to save the HDFS configuration.
If you make any change to the Knox Connection in Connections Management, you need to reload the configuration and save it.
URI: the path to HDFS as configured in Knox. For example:
/gateway/default/webhdfs/v1
where gateway is the Knox bit, default is the topology name, and webhdfs/v1 is the HDFS bit.
Note
If you are not using the default topology, use your topology name instead of the default bit in the URI.
Folder: type in the DDC file system directory in HDFS that you created earlier (for example /ciphertrust_ddc).
IMPORTANT: Use the same directory name as you configured earlier in the Thales Data Platform Deployment Guide.
Click Save Changes to save the configuration.
The connection was successful if no error is returned and this message is displayed:
Success. HDFS settings have been updated
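As an optional sanity check outside of DDC, you can call the WebHDFS REST API through the Knox gateway to confirm that the URI and folder are reachable; a minimal sketch, where knox-host, the credentials, and the /ciphertrust_ddc folder are placeholders for your own values:
# List the DDC folder through the Knox gateway; remove -k once the Knox certificate is trusted.
curl -k -u knoxuser:password "https://knox-host:8443/gateway/default/webhdfs/v1/ciphertrust_ddc?op=LISTSTATUS"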
Configuring TDP (On-prem) Service Livy
To configure DDC for Livy, click the Livy tab and configure the following connection parameters:
Knox Connection: the name of your Hadoop connection as configured in the Connections Management in CipherTrust Manager.
Knox Gateway Hostname / Port: the connection details of the Knox server.
They are retrieved automatically from Connections Management on the basis of the Knox Connection name. DDC tries to connect to the first URL in the list retrieved from Connections Management. If it cannot connect due to any error, it falls back to the next URL, and so forth. DDC only errors out if it cannot connect to any of the URLs in the list.
If you make any change in the Knox Connection in the Connections Management in CipherTrust Manager, you need to reload the configuration and save it.
URI: the path to Livy as configured in Knox. Here is an example:
/gateway/default/livy/v1
where gateway is the Knox bit, default is the topology name, and the rest of the URI is the service name.
Note
If you are not using the default topology, use your topology name instead of the default bit in the URI.
Click Save Changes to save the configuration.
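Similarly, you can verify the Livy URI outside of DDC by listing Livy sessions through the Knox gateway; a minimal sketch with the same placeholder host and credentials as above:
# List Livy sessions through Knox; an empty but valid response looks like {"from":0,"total":0,"sessions":[]}.
curl -k -u knoxuser:password "https://knox-host:8443/gateway/default/livy/v1/sessions"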
Configuring TDPaaS
Caution
This feature is a technical preview for evaluation in non-production environments. A technical preview introduces new, limited functionality for customer feedback as we work on the feature. Details and functionality are subject to change. This includes API endpoints and UI elements. We cannot guarantee that data created as part of a technical preview will be retained after the feature is finalized.
You have the option to use the Thales Data Platform as a Service (TDPaaS) component instead of the Hadoop services provided by on-prem TDP. TDPaaS is serverless and operates without the need for manual administration.
The configuration process for TDPaaS is much simpler and doesn't require any external connections or configurations.
To configure TDPaaS in DDC:
Log in to CipherTrust Manager.
Go to Data Discovery and Classification > Settings > Data Management Service tab.
Select a region for saving scan and report data from the Region drop-down list.
Note
Currently, only the us-central1 region is available.
The region cannot be modified after TDPaaS is configured. It is advised to choose the region based on proximity and sovereignty requirements.
Click Get Credentials.
The credentials are generated and saved in CipherTrust Manager to be used by Data Discovery and Classification. A unique Customer ID is created corresponding to the current CipherTrust Manager instance.
Click the Download Credentials button to download and save these credentials for future use.
For troubleshooting errors when working with TDPaaS, refer to the Troubleshooting section.