CTE Installation and Configuration
After configuring the Hadoop cluster for CTE:
- Install and register CTE on the HDFS nodes.
    - You can do this on all of the nodes at once, but HDFS is unavailable during CTE installation and configuration.
    - You can also do this one node at a time. If you install and register CTE on the nodes one at a time, start with the NameNodes, then the DataNodes, and keep the NameNode service running once the NameNodes are configured.
    - In either case, add the FQDN of the node to the CipherTrust Configuration Group, then proceed with agent installation and configuration. See Installing and Configuring CTE on an HDFS Node.
- Modify the Host Group. See Modifying host settings for HDFS hosts on the CipherTrust Manager.
- Configure CTE by running `config-hadoop.sh` on the HDFS node; a hedged example follows this list. See Configuring Hadoop to Use CTE.
- Review the SecFS configuration variables that support the HDFS name cache. See HDFS Name Cache.
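The options that `config-hadoop.sh` accepts are covered in Configuring Hadoop to Use CTE. As a rough sketch only, a typical invocation on an HDFS node looks like the following; the install path shown is the usual CTE agent location and the `-i` (interactive) option is an assumption, so verify both against the documentation for your release:

cd /opt/vormetric/DataSecurityExpert/agent/secfs/hadoop/bin   # assumed default agent install path
./config-hadoop.sh -i                                         # '-i' (interactive mode) is an assumption; check the script usage for your release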
Installing and Configuring CTE on an HDFS Node
- Using Ambari, add the FQDN of the node to the CipherTrust Configuration Group. See Create a CipherTrust Configuration Group.
- Install, configure, and register CTE. A quick verification sketch follows this list.
- Modify the host settings for each node. See Modifying host settings for HDFS hosts on the CipherTrust Manager.
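Once CTE is installed and registered, you can quickly confirm that the agent is present and responding before moving on. This is a minimal sketch that assumes the default CTE agent install path; binary names and locations can vary by release:

/opt/vormetric/DataSecurityExpert/agent/vmd/bin/vmd -v   # prints the installed agent version (assumes the default install path)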
Modifying host settings for HDFS hosts on the CipherTrust Manager
The Hadoop service can start as root and then downgrade to an unprivileged user. If the unprivileged user is not authenticated by password, CTE flags the user as fake. CTE does not allow a faked user to access a resource protected by a user rule, even if that user matches the permit rule. Because of this, modify the CipherTrust Manager host/client settings as follows:
- In Ambari, go to HDFS > Configs > Advanced > Advanced core-site and check whether hadoop.security.authentication is set to simple (no authentication) or kerberos.
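You can also read the value directly from the cluster configuration on any node. A minimal sketch, assuming the standard Ambari/HDP client configuration directory /etc/hadoop/conf:

grep -A1 'hadoop.security.authentication' /etc/hadoop/conf/core-site.xml   # prints the property name and the value line that follows it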
Modifying host settings for alternate versions of Java
You may have more than one version of Java installed on your systems. This can cause issues for Hadoop, which uses Java to launch its services.
CipherTrust Transparent Encryption expects the following version of Java to be installed on a system:
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.252.b09-2.el7_8.x86_64/jre/bin/java
If a different version of Java is installed on any of your systems, you must perform a simple modification and also add authenticators for that version of Java to the Host settings section in CipherTrust Manager. See the following section for more information.
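To see which Java executable your cluster actually uses, you can resolve the system java binary and inspect the command line of a running HDFS service. A minimal sketch, run on a node where the NameNode is running:

readlink -f "$(command -v java)"                                                # resolve the java binary behind the system alternatives
ps -ef | grep org.apache.hadoop.hdfs.server.namenode.NameNode | grep -v grep    # show the java path used to launch the NameNode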
Simple Modification
To use a simple modification, ask the Administrator to add the following lines to the host/client group:
`/usr/jdk64/jdk1.8.0_40/bin/java` is the Java executable used to launch the HDFS services. Change the Java jdk path to reflect your end-user environment.
|authenticator+arg=+class=org.apache.hadoop.hdfs.server.namenode.NameNode|/usr/jdk64/jdk1.8.0_40/bin/java
|authenticator+arg=+class=org.apache.hadoop.hdfs.server.datanode.DataNode|/usr/jdk64/jdk1.8.0_40/bin/java
The entire host/client settings will look like this:
|authenticator|/usr/sbin/sshd
|authenticator|/usr/sbin/in.rlogind
|authenticator|/bin/login
|authenticator|/usr/bin/gdm-binary
|authenticator|/usr/bin/kdm
|authenticator|/usr/sbin/vsftpd
|authenticator+arg=+class=org.apache.hadoop.hdfs.server.namenode.NameNode|/usr/jdk64/jdk1.8.0_40/bin/java
|authenticator+arg=+class=org.apache.hadoop.hdfs.server.datanode.DataNode|/usr/jdk64/jdk1.8.0_40/bin/java
Modifying Host Group for HDFS NameNodes HA on CipherTrust Manager
To enable high availability (HA) for your HDFS NameNodes, ask the Administrator to add the following lines to the host/client group.
`/usr/jdk64/jdk1.8.0_40/bin/java` is the Java executable used to launch the HDFS services. Change the Java `jdk` path to reflect your end-user environment.
|authenticator+arg=+class=org.apache.hadoop.hdfs.qjournal.server.JournalNode|/usr/jdk64/jdk1.8.0_40/bin/java
|authenticator+arg=+class=org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer|/usr/jdk64/jdk1.8.0_40/bin/java
|trust+arg=+class=org.apache.hadoop.hdfs.tools.DFSZKFailoverController|/usr/jdk64/jdk1.8.0_40/bin/java
The entire host/client group for HA (in this example, with Kerberos) will look like this:
|authenticator|/usr/sbin/sshd
|authenticator|/usr/sbin/in.rlogind
|authenticator|/bin/login
|authenticator|/usr/bin/gdm-binary
|authenticator|/usr/bin/kdm
|authenticator|/usr/sbin/vsftpd
|authenticator+arg=+class=org.apache.hadoop.hdfs.server.namenode.NameNode|/usr/jdk64/jdk1.8.0_40/bin/java
|authenticator+arg=+class=org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter|/usr/lib/bigtop-utils/jsvc
|trust+arg=+class=org.apache.hadoop.hdfs.qjournal.server.JournalNode|/usr/jdk64/jdk1.8.0_40/bin/java
|trust+arg=+class=org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer|/usr/jdk64/jdk1.8.0_40/bin/java
|trust+arg=+class=org.apache.hadoop.hdfs.tools.DFSZKFailoverController|/usr/jdk64/jdk1.8.0_40/bin/java
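To confirm that NameNode HA is actually configured before adding these lines, you can query the failover state of a NameNode. A minimal sketch, where nn1 is a hypothetical NameNode ID; use an ID listed under dfs.ha.namenodes.<nameservice> in your hdfs-site.xml:

hdfs haadmin -getServiceState nn1   # reports 'active' or 'standby' for the given NameNode ID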