Using the CTE for Kubernetes Operator Resource
Note
- The CTE for Kubernetes Operator feature is supported with CTE for Kubernetes v1.2.0 and later versions.
- CTE for Kubernetes is certified with Red Hat and supported on the OpenShift Container Platform for CTE for Kubernetes v1.2.0 and later versions.
- CTE for Kubernetes Operator v1.3.2 has been certified with Red Hat and published on the Red Hat portal: RedHat Portal.
- For Kubernetes clusters, including managed Kubernetes clusters in the cloud such as Google GKE, Amazon EKS, and Microsoft AKS, the operator is also certified and available at: Kubernetes Catalog.
- The CTE for Kubernetes Operator supports both x86_64 and arm64 deployments.
Kubernetes has various default resources like Pod, Deployment, DaemonSet, etc. When you define a manifest for instantiating one of those resources, you must specify the Kind as Pod/Deployment/DaemonSet etc. Kubernetes provides a default set of controllers that understand the resource definitions and know how to manage their life cycle. For instance, the Deployment Controller manages the Create/Update/Delete of the Deployment resource.
Kubernetes architecture allows users to extend the API server in a way that users can create their own custom resource (CR) and write their own controller to manage the custom resource. An operator bundles a CR and the Controller that manages the CR. An operator can watch resources across the cluster and take actions when required. For more information on how to use operators to manage other applications, refer to the Kubernetes Operator pattern.
The CTE for Kubernetes Operator can deploy, monitor, upgrade and delete CTE for Kubernetes. When the CTE for Kubernetes Operator is deployed, its controller deploys the CTE for Kubernetes driver on the OpenShift cluster. The manifests required to deploy the CTE-K8s driver are bundled with the operator.
CTE for Kubernetes Operator
The CTE for Kubernetes Operator is an OpenShift operator that Thales created for CTE for Kubernetes. This operator contains a CR, an API for managing the CR, and a custom controller that manages this resource. When you install the CTE for Kubernetes Operator on an OpenShift cluster, the operator registers the new CR and the controller with the Kubernetes API server. Whenever the API server receives a request to create a resource where Kind=CteK8sOperator, it passes the request on to the CTE for Kubernetes Operator's custom controller. The controller contains all of the logic needed to complete tasks before, during, or after the deployment of the CTE for Kubernetes driver.
You can deploy CTE for Kubernetes using a Custom Resource Definition (CRD). The following shows a sample manifest used to create an instance of CTE for Kubernetes:
CTE-K8S-Operator-crd.yaml
apiVersion: cte-k8s-operator.csi.cte.cpl.thalesgroup.com/v1
kind: CteK8sOperator
metadata:
  labels:
    app.kubernetes.io/name: ctek8soperator
    app.kubernetes.io/instance: ctek8soperator
    app.kubernetes.io/part-of: cte-k8s-operator
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: cte-k8s-operator
  name: ctek8soperator
spec:
  replicas: 1
  image: "docker.io/thalesciphertrust/ciphertrust-transparent-encryption-kubernetes"
  version: "1.2.0-latest"
  imagePullPolicy: Always
  logLevel: 5
  apiburst: 300
  apiqps: 200
  imagePullSecrets:
    - name: cte-csi-secret
  registrationCleanupInterval: 10
  pauseimage: "k8s.gcr.io/pause:latest"
  volumes:
    - name: cri-sock
      hostPath:
        path: "/run/crio/crio.sock"
  # The following parameters are optional. If values are not specified, CTE-K8s uses the default values.
  csiProvisionerImage: registry.k8s.io/sig-storage/csi-provisioner:v4.0.0
  csiNodeDriverRegistrarImage: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
  csiAttacherImage: k8s.gcr.io/sig-storage/csi-attacher:v3.3.0
  csiSnapshotterImage: registry.k8s.io/sig-storage/csi-snapshotter:v6.3.3
  snapImagePullPolicy: IfNotPresent
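As a quick sanity check before applying the manifest, the spec section can be validated programmatically. The sketch below is illustrative only: the helper name is invented, and it treats every non-optional field from the sample above as required, which is an assumption based on the comment in the manifest.

```python
# Sketch: validate the spec section of a CteK8sOperator custom resource.
# Assumption: the fields listed below are required because the sample
# manifest marks only the csi*Image/snapImagePullPolicy fields as optional.
REQUIRED_SPEC_FIELDS = {
    "replicas", "image", "version", "imagePullPolicy", "logLevel",
    "apiburst", "apiqps", "imagePullSecrets",
    "registrationCleanupInterval", "pauseimage", "volumes",
}

def missing_spec_fields(manifest: dict) -> set:
    """Return the set of required spec fields absent from the manifest."""
    spec = manifest.get("spec", {})
    return REQUIRED_SPEC_FIELDS - set(spec)

# A deliberately incomplete manifest, mirroring the sample's top of spec:
manifest = {
    "apiVersion": "cte-k8s-operator.csi.cte.cpl.thalesgroup.com/v1",
    "kind": "CteK8sOperator",
    "spec": {
        "replicas": 1,
        "image": "docker.io/thalesciphertrust/ciphertrust-transparent-encryption-kubernetes",
        "version": "1.2.0-latest",
    },
}
print(sorted(missing_spec_fields(manifest)))
```

Running such a check before `kubectl apply` catches omissions earlier than a failed rollout would.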
Applying a CRD
To apply the CRD, type:
kubectl apply -f <path to>/cte-k8s-operator.yaml
Installing the CTE for Kubernetes Operator
Warning
- Do not uninstall CTE for Kubernetes and the operator from the CLI if they were installed using the GUI.
- Similarly, do not uninstall CTE for Kubernetes and the operator from the GUI if they were installed using the CLI.
Note
You can install CTE for Kubernetes and the operator only in the openshift-operators namespace. If you want to install in other namespaces, use the CLI option.
The CTE for Kubernetes Operator can be installed using one of two methods:
- CLI
- Cluster Console GUI
CLI Method Prerequisites
- Install the Operator Lifecycle Manager (OLM) on the cluster. Refer to Installing OLM for instructions on how to install OLM on the cluster.
Note
OLM is installed by default with the latest versions of OpenShift.
- Install oc (the OpenShift CLI for the OpenShift cluster) on the cluster.
Note
The user installing the operator must have Cluster Admin permissions.
- Download the Deploy Scripts.
- Execute the deploy.sh script from the deploy directory:
./deploy.sh --operator --operator-ns=<namespace-in-which-to-deploy-the-operator> --cte-ns=<namespace-in-which-to-deploy-cte-4-k8s>
If either namespace option is not specified, the script uses kube-system as the default namespace for deployment.
Namespace Deployments
Ensure that the namespaces passed to the deployment script exist before initiating deployment. This prevents the script from prompting for namespace creation during deployment. For example, if the deployment script is invoked as:
./deploy.sh --operator --operator-ns=my-ns1 --cte-ns=my-ns2
where both namespaces my-ns1 and my-ns2 do not exist, the script prompts with the following response:
Starting the cte-csi containers
NAMESPACE my-ns1 not found!!!!!!
Namespace my-ns1 not found. Do you want to create it now [N/y]
Once the namespace information is available, the script proceeds to create the required objects and installs the operator. After installing the operator, the script deploys the ctek8soperator CRD. This deploys CTE for Kubernetes on the cluster in the specified namespace.
Using the Cluster Console Web GUI
The CTE for Kubernetes Operator is certified with Red Hat for the OpenShift platform and is integrated with OperatorHub, where the operator can be discovered.
- Open a browser and navigate to Operators > OperatorHub in the left navigation panel of the console GUI. Type CTE in the search field under All Items to find the CTE for Kubernetes Operator.
- Click on the tile to access the install option.
- Ensure that all prerequisites are met before installing the operator.
- Click Install to install the operator. Do not change the default values on the install page.
Installing CTE for Kubernetes
To install CTE for Kubernetes (when the CTE for Kubernetes Operator is already installed):
- Navigate to the deploy directory that was downloaded for the operator install.
- Ensure the ctek8soperator-crd.yaml file has correct values.
- Type:
oc apply -f <path to>/ctek8soperator-crd.yaml
Updating CTE for Kubernetes
To update CTE for Kubernetes:
- Stop any application that is using CTE for Kubernetes volumes.
- Navigate to the deploy directory that was downloaded for the operator install.
- Edit the ctek8soperator-crd.yaml file.
- Update the version field in the spec section to the latest version of CTE for Kubernetes.
- Apply the change with the command:
oc apply -f <path to>/ctek8soperator-crd.yaml
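The version bump in the steps above can also be scripted. The sketch below is illustrative (the helper name is invented): it rewrites only the quoted value of the version field in the raw YAML text, leaving the rest of the file untouched.

```python
import re

def bump_version(yaml_text: str, new_version: str) -> str:
    """Replace the quoted value of a 'version:' field in YAML text."""
    return re.sub(
        r'(?m)^(\s*version:\s*)"[^"]*"',
        lambda m: f'{m.group(1)}"{new_version}"',
        yaml_text,
    )

# A fragment shaped like the spec section of ctek8soperator-crd.yaml:
crd = 'spec:\n  replicas: 1\n  version: "1.2.0-latest"\n'
print(bump_version(crd, "1.3.2"))
```

After rewriting the file, the change is applied with the same `oc apply -f` command shown above.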
Using user-defined Configuration Parameters with Red Hat OpenShift
You can use user-defined configuration parameters for the CTE-K8s DaemonSet, so that you can control the nodes on which the operator deploys CTE-K8s.
cte-csi-node manifest
apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: "1"
  creationTimestamp: "2024-06-20T10:08:55Z"
  generation: 1
  name: cte-csi-node
  namespace: cte-o
  ownerReferences:
    - apiVersion: cte-k8s-operator.csi.cte.cpl.thalesgroup.com/v1
      blockOwnerDeletion: true
      controller: true
      kind: CteK8sOperator
      name: ctek8soperator
      uid: 292a9b6d-716d-4b35-b965-916ae58b709c
  resourceVersion: "678877896"
  uid: aa2160c0-030f-49e6-b02f-0249cb78d033
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: CteK8sOperator-Deployment
      app.kubernetes.io/instance: ctek8soperator
      app.kubernetes.io/managed-by: CteK8sOperator-Operator
      app.kubernetes.io/name: CteK8sOperator
      app.kubernetes.io/version: 1.2.0-latest
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: CteK8sOperator-Deployment
        app.kubernetes.io/instance: ctek8soperator
        app.kubernetes.io/managed-by: CteK8sOperator-Operator
        app.kubernetes.io/name: CteK8sOperator
        app.kubernetes.io/version: 1.2.0-latest
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: type
                    operator: NotIn
                    values:
                      - virtual-kubelet
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
                      - arm64
      containers:
        - args:
            - --endpoint=$(CSI_ENDPOINT)
            - --nodeid=$(KUBE_NODE_NAME)
            - --namespace=$(KUBE_NAMESPACE)
            - --v=5
            - --apiburst=300
            - --apiqps=200
            - --registration-cleanup-interval=10
            - --pauseimage=registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
          env:
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
            - name: KUBE_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
          image: docker.io/thalesciphertrust/ciphertrust-transparent-encryption-kubernetes:1.2.0-latest
          imagePullPolicy: Always
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/sh
                  - -c
                  - rm -rf /csi/csi.sock
          name: cte-csi
          resources: {}
          securityContext:
            capabilities:
              add:
                - SYS_ADMIN
            privileged: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /csi
              name: plugin-dir
            - mountPath: /var/lib/kubelet/pods
              mountPropagation: Bidirectional
              name: mountpoint-dir
            - mountPath: /etc/kubernetes/
              name: kube-cred
            - mountPath: /dev/shm
              name: dshm
            - mountPath: /var/log
              name: varlog
            - mountPath: /chroot
              name: chroot
            - mountPath: /var/run/cri.sock
              name: cri-sock
        - args:
            - --agentlogs
          image: docker.io/thalesciphertrust/ciphertrust-transparent-encryption-kubernetes:1.2.0-latest
          imagePullPolicy: IfNotPresent
          name: cte-agent-logs
          resources: {}
          securityContext:
            privileged: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /var/log
              name: varlog
            - mountPath: /chroot
              name: chroot
        - args:
            - --csi-address=/csi/csi.sock
            - --kubelet-registration-path=/var/lib/kubelet/plugins/csi.cte.cpl.thalesgroup.com/csi.sock
            - --v=5
          image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
          imagePullPolicy: IfNotPresent
          name: csi-sidecar-registrar
          resources: {}
          securityContext:
            privileged: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /csi
              name: plugin-dir
            - mountPath: /registration
              name: registration-dir
        - args:
            - --csi-address=/csi/csi.sock
            - --v=5
          image: k8s.gcr.io/sig-storage/csi-attacher:v3.0.0
          imagePullPolicy: IfNotPresent
          name: csi-sidecar-attacher
          resources: {}
          securityContext:
            privileged: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /csi
              name: plugin-dir
      dnsPolicy: ClusterFirst
      hostPID: true
      imagePullSecrets:
        - name: cte-csi-secret
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-node-critical
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: cte-csi-node
      serviceAccountName: cte-csi-node
      terminationGracePeriodSeconds: 600
      tolerations:
        - operator: Exists
      volumes:
        - hostPath:
            path: /var/lib/kubelet/plugins_registry/
            type: Directory
          name: registration-dir
        - hostPath:
            path: /var/lib/kubelet/plugins/csi.cte.cpl.thalesgroup.com/
            type: DirectoryOrCreate
          name: plugin-dir
        - hostPath:
            path: /var/lib/kubelet/pods
            type: DirectoryOrCreate
          name: mountpoint-dir
        - hostPath:
            path: /etc/kubernetes/
            type: Directory
          name: kube-cred
        - emptyDir:
            medium: Memory
          name: dshm
        - emptyDir: {}
          name: varlog
        - emptyDir: {}
          name: chroot
        - hostPath:
            path: /run/crio/crio.sock
            type: ""
          name: cri-sock
  updateStrategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 15
  desiredNumberScheduled: 15
  numberAvailable: 12
  numberMisscheduled: 0
  numberReady: 12
  numberUnavailable: 3
  observedGeneration: 1
  updatedNumberScheduled: 15
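The status block of the manifest above can be used to decide whether a DaemonSet rollout has finished. A minimal sketch, using the standard DaemonSet status fields shown above (the helper name is illustrative):

```python
def rollout_complete(status: dict) -> bool:
    """True when every scheduled node runs a ready, up-to-date pod."""
    desired = status["desiredNumberScheduled"]
    return (status["numberReady"] == desired
            and status["updatedNumberScheduled"] == desired
            and status.get("numberUnavailable", 0) == 0)

# Values from the status block above: 12 of 15 pods ready, 3 unavailable,
# so the rollout is still in progress.
status = {"desiredNumberScheduled": 15, "numberReady": 12,
          "updatedNumberScheduled": 15, "numberUnavailable": 3}
print(rollout_complete(status))  # -> False
```

This is essentially the check behind waiting for all cte-csi-node pods to be running before redeploying applications that use CTE for Kubernetes volumes.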
Upgrading CTE for Kubernetes
- Check the Docker site for a new version of CTE for Kubernetes.
- Stop any application that is using CTE for Kubernetes volumes.
- Expand the Operators section and click the Installed Operators page in the left navigation bar.
- Click the Name of the installed operator.
- On the next page, select the YAML tab. This displays the manifest of the CTE for Kubernetes instances installed on the cluster.
- Update the version parameter in the CRD spec with the updated CTE-K8s version.
- Click Save.
The upgrade process terminates all of the CTE for Kubernetes pods running the previous version and starts new pods running the new version. Once the cte-csi-node-XXXX pods are running on all of the nodes of the cluster, the applications that use CTE for Kubernetes volumes can be redeployed.
- If any of the other parameters in CTE-K8s-Operator.yaml, such as logLevel, apiburst, or apiqps, need to be updated, follow the same process to update the relevant parameter.
Deleting CTE for Kubernetes
To delete CTE for Kubernetes:
- Stop any application that is using CTE for Kubernetes volumes.
- Navigate to the deploy directory that was downloaded for the operator install.
- Type:
oc delete -f <path to>/ctek8soperator-crd.yaml
Deploying the CTE for Kubernetes Operator after installation
Once the operator is installed, the Kubernetes API server becomes aware of the new custom resource (CR). The installation process registers the:
- CR
- API for managing the CR
- Controller that handles requests for the CR from the API server
To instantiate the CR:
- Click View Operator on the page displayed immediately after the operator is installed. Alternatively:
- Expand the Operators section and click the Installed Operators page in the left navigation bar.
- Click the Name of the installed operator.
- Click the Create Instance link on the page displayed.
- Click Create to deploy CTE for Kubernetes.
Uninstalling CTE for Kubernetes Operator
To uninstall the CTE for Kubernetes Operator from the user interface:
- Stop any application that is using CTE for Kubernetes volumes.
- Expand the Operators section and click the Installed Operators page in the left navigation bar.
- Click the Name of the installed operator.
- On the next page, select the Actions tab.
- Click Uninstall Operator in the list of actions displayed.
- Click Uninstall on the confirmation pop-up to uninstall CTE for Kubernetes.
To uninstall the CTE for Kubernetes Operator using the CLI:
- Stop any application that is using CTE for Kubernetes volumes.
- Navigate to the deploy directory that was downloaded for the operator install, then type:
./deploy.sh --operator --operator-ns=<namespace-in-which-ctek8soperator-is-deployed> --cte-ns=<namespace-in-which-ctek8s-is-deployed> --remove
This removes both CTE for Kubernetes and the operator.
Upgrading CTE for Kubernetes Operator
The operator is designed to upgrade to a newer version automatically as soon as a new version is published by Thales. The Operator Lifecycle Manager, or OLM, constantly polls for updates and upgrades the operator whenever an update is available.