Using the CTE for Kubernetes Operator resource
Note
- The CTE for Kubernetes Operator feature is supported with CTE for Kubernetes v1.2.0 and subsequent versions.
- CTE for Kubernetes is certified with Red Hat and supported on the OpenShift Container Platform for CTE for Kubernetes v1.2.0 and subsequent versions.
- CTE-Kubernetes Operator version v1.3.2 has been certified with Red Hat and published on the Red Hat portal: RedHat Portal.
- For Kubernetes clusters, including managed Kubernetes clusters in the cloud, namely Google GKE, Amazon EKS, and Microsoft AKS, the operator is also certified and available at: Kubernetes Catalog.
- CTE for Kubernetes Operator supports both x86_64 and arm64 deployments.
Kubernetes has various default resources like Pod, Deployment, DaemonSet, etc. When you define a manifest for instantiating one of those resources, you must specify the Kind as Pod/Deployment/DaemonSet etc. Kubernetes provides a default set of controllers that understand the resource definitions and know how to manage their life cycle. For instance, the Deployment Controller manages the Create/Update/Delete of the Deployment resource.
Kubernetes architecture allows users to extend the API server in a way that users can create their own custom resource (CR) and write their own controller to manage the custom resource. An operator bundles a CR and the Controller that manages the CR. An operator can watch resources across the cluster and take actions when required. For more information on how to use operators to manage other applications, refer to the Kubernetes Operator pattern.
The CTE for Kubernetes Operator can deploy, monitor, upgrade and delete CTE for Kubernetes. When the CTE for Kubernetes Operator is deployed, its controller deploys the CTE for Kubernetes driver on the OpenShift cluster. The manifests required to deploy the CTE-K8s driver are bundled with the operator.

Creating the Required Kubernetes Secret
For CTE for Kubernetes Operator v1.5.10, the Operator Controller Manager has a change that requires an additional step to be carried out by the cluster administrator before deployment. This prerequisite is mandatory.
There has been a change in how the Kubernetes Operator Controller Manager downloads images. The kube-rbac-proxy used by the operator-controller-manager pod performs RBAC authorizations with the Kubernetes API Server. The CTE for Kubernetes Operator previously pulled this image from gcr.io. However, this image has been deprecated and will be removed from GCR. The CTE for Kubernetes Operator has shifted to pulling the image hosted by registry.redhat.io. While gcr.io is a public registry and does not require authentication for pulling images, registry.redhat.io requires authentication.
Due to this change, it is now a requirement to create a specific Kubernetes secret named rh-kube-proxy-secret. The cluster administrator must create this secret in the namespace in which they are deploying the operator. Without this secret, attempts to create new pods will result in errors due to the authentication requirement of registry.redhat.io.
- Use a login account created on the Red Hat portal to create the secret using the following command:

  oc create secret docker-registry rh-kube-proxy-secret --docker-username="<username on Redhat portal>" --docker-password="<Redhat portal password>" --docker-server="registry.redhat.io" --namespace="<namespace in which CTE-K8s Operator is to be deployed>"

  If you miss this step, you will encounter an ErrImagePull/ImagePullBackOff error in the Events section of the cte-k8s-operator-controller-manager-XXXXXXXXX-YYYYY pod:

  Events:
    Type     Reason   Age                From     Message
    ----     ------   ----               ----     -------
    Normal   Pulling  18m (x3 over 19m)  kubelet  Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9:v4.17.0-202503121206.p0.g7718265.assembly.stream.el9"
    Warning  Failed   18m (x3 over 19m)  kubelet  Failed to pull image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9:v4.17.0-202503121206.p0.g7718265.assembly.stream.el9": failed to pull and unpack image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9:v4.17.0-202503121206.p0.g7718265.assembly.stream.el9": failed to resolve reference "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9:v4.17.0-202503121206.p0.g7718265.assembly.stream.el9": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://registry.redhat.io/auth/realms/rhcc/protocol/redhat-docker-v2/auth?scope=repository%3Aopenshift4%2Fose-kube-rbac-proxy-rhel9%3Apull&service=docker-registry: 401 Unauthorized

  If you run into this error, undeploy the CTE-K8s Operator, create the secret as described above, and then re-deploy the CTE-K8s Operator.
If you experience this error:

a. Create a secret, rh-kube-proxy-secret, in the namespace in which the operator is being installed:

   kubectl create secret docker-registry rh-kube-proxy-secret --docker-username="<your userid on Redhat portal>" --docker-password=<your redhat portal password> --docker-server="registry.redhat.io" -n <namespace of operator deploy>

b. Verify that the secret has been created successfully:

   kubectl get secret rh-kube-proxy-secret -n <namespace>

c. Once the secret is created, delete the pod that is showing ImagePullErr:

   kubectl delete pod -n <namespace> cte-k8s-operator-controller-manager-c6997c9fd-lc9mq

   This causes the pod to restart, pick up the secret that you just created, and pull the image successfully.
Note
The previous steps do not have any effect on the CTE-K8s pods that have already been deployed by an earlier version of the Operator.
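Steps a-c above can be combined into a single script. This is a sketch only: the namespace, pod name, and the RH_USER/RH_PASS variables are placeholders, and the `run` wrapper merely echoes each command so the sequence can be reviewed before execution. Remove `run` (and export real credentials) to run the commands against an actual cluster.

```shell
#!/bin/sh
# Sketch of steps a-c above. All values are placeholders; `run` echoes the
# commands instead of executing them so the sequence can be reviewed first.
run() { echo "+ $*"; }

NS="cte-operator"   # namespace of the operator deploy (example value)
POD="cte-k8s-operator-controller-manager-c6997c9fd-lc9mq"  # failing pod (example)
# RH_USER / RH_PASS: export your Red Hat portal credentials before running.

# a. Create the secret that registry.redhat.io needs for authenticated pulls.
run kubectl create secret docker-registry rh-kube-proxy-secret \
    --docker-username="$RH_USER" --docker-password="$RH_PASS" \
    --docker-server=registry.redhat.io -n "$NS"

# b. Verify that the secret exists.
run kubectl get secret rh-kube-proxy-secret -n "$NS"

# c. Delete the failing pod so its replacement picks up the secret.
run kubectl delete pod -n "$NS" "$POD"
```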
Installing the CTE for Kubernetes Operator
Warning
- Do not uninstall CTE for Kubernetes and the operator from the CLI if they were installed using the GUI.
- Similarly, do not uninstall CTE for Kubernetes and the operator from the GUI if they were installed using the CLI.
Note
Using the GUI, you can install CTE for Kubernetes and the Operator only in the openshift-operators namespace. If you want to install in other namespaces, use the CLI option.
CTE for Kubernetes Operator
The CTE for Kubernetes Operator is an OpenShift operator that Thales created for CTE for Kubernetes. This operator contains a CR, an API for managing the CR, and a custom controller that manages this resource. When you install the CTE for Kubernetes Operator on an OpenShift cluster, the operator registers the new CR and the controller with the Kubernetes API server. Whenever the API server receives a request to create a resource where Kind=CTEK8sOperator, it passes the request on to the CTE for Kubernetes Operator's Custom Controller. The controller contains all of the logic needed to complete tasks before, during, or after the deployment of the CTE for Kubernetes driver.
You can deploy CTE for Kubernetes using a Custom Resource Definition (CRD). The following is a sample manifest used to create an instance of CTE for Kubernetes:
CTE-K8S-Operator-crd.yaml
apiVersion: cte-k8s-operator.csi.cte.cpl.thalesgroup.com/v1
kind: CteK8sOperator
metadata:
  labels:
    app.kubernetes.io/name: ctek8soperator
    app.kubernetes.io/instance: ctek8soperator
    app.kubernetes.io/part-of: cte-k8s-operator
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: cte-k8s-operator
  name: ctek8soperator
spec:
  replicas: 1
  image: "docker.io/thalesciphertrust/ciphertrust-transparent-encryption-kubernetes"
  version: "1.2.0-latest"
  imagePullPolicy: Always
  logLevel: 5
  apiburst: 300
  apiqps: 200
  imagePullSecrets:
    - name: cte-csi-secret
  registrationCleanupInterval: 10
  pauseimage: "k8s.gcr.io/pause:latest"
Applying a CRD
To apply the CRD, type:
kubectl apply -f <path to>/cte-k8s-operator.yaml
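After applying the manifest, you can confirm that the custom resource exists and that the driver pods came up. This is a sketch: the instance name `ctek8soperator` comes from metadata.name in the sample manifest above, the namespace is a placeholder, and the `run` wrapper only echoes each command — drop it to execute against a real cluster.

```shell
#!/bin/sh
# Sketch: verify the custom resource and driver pods after `kubectl apply`.
# `run` echoes instead of executing; remove it on a real cluster.
run() { echo "+ $*"; }

NS="kube-system"   # namespace used for the deployment (example value)

# The Kind registered by the operator is CteK8sOperator; the instance name
# matches metadata.name in the sample manifest above.
run kubectl get ctek8soperator ctek8soperator -n "$NS"

# Look for cte-csi-node-XXXX pods in the Running state on each node.
run kubectl get pods -n "$NS"
```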
Installing the CTE for Kubernetes Operator | CLI Method
Warning
- Do not uninstall CTE for Kubernetes and the operator from the CLI if they were installed using the GUI.
- Similarly, do not uninstall CTE for Kubernetes and the operator from the GUI if they were installed using the CLI.
Note
Using the GUI, you can install CTE for Kubernetes and the Operator only in the openshift-operators namespace. If you want to install in other namespaces, use the CLI option.
The CTE for Kubernetes Operator can be installed using one of two methods:
-
CLI
-
Cluster Console GUI
CLI Method Prerequisites
- Install the Operator Lifecycle Manager (OLM) on the cluster. Refer to Installing OLM for instructions on how to install OLM on the cluster.

  Note
  The latest version of OpenShift is installed by default.

- Install OC (the OpenShift CLI command for the OpenShift cluster) on the cluster.

  Note
  The user installing the Operator must have Cluster Admin permissions.

- Download the Deploy Scripts.

- Execute the deploy.sh script from the deploy directory, type:

  ./deploy.sh --operator --operator-ns=<namespace-in-which-to-deploy-the-operator> --cte-ns=<namespace-in-which-to-deploy-cte-4-k8s>

  If either of the namespace options is not specified, the script sets kube-system as the default namespace for deployment.
Namespace Deployments
Ensure that the namespaces passed to the deployment script exist before initiating deployment. This prevents the script from prompting to create them during deployment. For example, if the deployment script is invoked as:
./deploy.sh --operator --operator-ns=my-ns1 --cte-ns=my-ns2
where both namespaces, my-ns1 and my-ns2, do not exist, the script prompts with the following response:
Starting the cte-csi containers
NAMESPACE my-ns1 not found!!!!!!
Namespace my-ns1 not found. Do you want to create it now [N/y]
Once the namespace information is available, the script proceeds to create the required objects and installs the operator. After installing the operator, the script deploys the ctek8soperator CRD. This deploys CTE for Kubernetes on the cluster in the specified namespace.
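To avoid the prompt entirely, the namespaces can be pre-created before invoking deploy.sh. A sketch, using the example namespace names from above; the `run` wrapper only echoes each command, so remove it to execute for real.

```shell
#!/bin/sh
# Sketch: pre-create the namespaces so deploy.sh does not prompt.
# `run` echoes instead of executing; remove it on a real cluster.
run() { echo "+ $*"; }

OPERATOR_NS="my-ns1"   # namespace for the operator (example value)
CTE_NS="my-ns2"        # namespace for CTE for Kubernetes (example value)

# Create both namespaces ahead of time.
for ns in "$OPERATOR_NS" "$CTE_NS"; do
  run kubectl create namespace "$ns"
done

# Then run the deployment script as shown above.
run ./deploy.sh --operator --operator-ns="$OPERATOR_NS" --cte-ns="$CTE_NS"
```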
Installing CTE for Kubernetes
To install CTE for Kubernetes (when CTE for Kubernetes Operator is already installed):
- Navigate to the deploy directory that was downloaded for the operator install.
- Ensure that the file ctek8soperator-crd.yaml has correct values.
- Type:

  # oc apply -f <path to>/ctek8soperator-crd.yaml
Updating CTE for Kubernetes
To update CTE for Kubernetes:
- Stop any application that is using CTE for Kubernetes volumes.
- Navigate to the deploy directory that was downloaded for the operator install.
- Edit the ctek8soperator-crd.yaml file.
- Update the version field in the spec section to the latest version of CTE for Kubernetes.
- Apply the change with the command:

  # oc apply -f <path to>/ctek8soperator-crd.yaml
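The version bump above can be scripted. This sketch edits a throwaway demo copy of the file so it runs standalone — point CRD_FILE at the real ctek8soperator-crd.yaml from the deploy directory instead, and note that the version value shown is an example, not a real release tag.

```shell
#!/bin/sh
# Sketch: update the `version` field in ctek8soperator-crd.yaml in place.
# NEW_VERSION is an example value; check the Docker site for the actual tag.
NEW_VERSION="1.3.0-latest"

# Demo copy so the sketch is self-contained; use the real file instead.
CRD_FILE="$(mktemp)"
printf 'spec:\n  version: "1.2.0-latest"\n' > "$CRD_FILE"

# Replace the quoted version value, then show the result.
sed -i "s/version: \"[^\"]*\"/version: \"$NEW_VERSION\"/" "$CRD_FILE"
grep 'version:' "$CRD_FILE"

# Then re-apply the manifest:
# oc apply -f <path to>/ctek8soperator-crd.yaml
```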
Deleting CTE for Kubernetes
To delete CTE for Kubernetes:
- Stop any application that is using CTE for Kubernetes volumes.
- Navigate to the deploy directory that was downloaded for the operator install.
- Type:

  # oc delete -f <path to>/ctek8soperator-crd.yaml
Uninstalling CTE for Kubernetes Operator
To uninstall the CTE for Kubernetes Operator:
- Stop any application that is using CTE for Kubernetes volumes.
- Navigate to the deploy directory that was downloaded for the operator install, and type:

  # ./deploy.sh --operator --operator-ns=<namespace-in-which-ctek8soperator-is-deployed> --cte-ns=<namespace-in-which-ctek8s-is-deployed> --remove

  This removes both CTE for Kubernetes and the operator.
Using the Cluster Console Web GUI
The CTE for Kubernetes Operator is certified with Red Hat for the OpenShift platform and is integrated with OperatorHub, where it can be discovered on the OperatorHub page.
- Open a browser and navigate to the Operators > OperatorHub link in the left navigation panel on the console GUI. Type CTE in the search field under All Items to find the CTE for Kubernetes Operator.
- Click on the tile to access the install option.
- Ensure that all prerequisites are met before installing the operator.
- Click Install to install the operator. Do not change the default values on the install page.
Deploying the CTE for Kubernetes Operator after installation
Once the operator is installed, the Kubernetes API server becomes aware of the Kubernetes custom resource (CR). The installation process registers the:
- CR
- API for managing the CR
- Controller that handles requests for the CR from the API Server
To instantiate the CR:
- Click View Operator on the page displayed immediately after the operator is installed. Alternatively:
  - Expand the Operators section and click the Installed Operators page in the left-hand bar.
  - Click on the Name of the installed operator.
- Click the Create Instance link on the page displayed.
- Click Create to deploy CTE for Kubernetes.
Upgrading CTE for Kubernetes
- Check the Docker site for a new version of CTE for Kubernetes.
- Stop any application that is using CTE for Kubernetes volumes.
- Expand the Operators section and click the Installed Operators page in the left-hand bar.
- Click on the Name of the installed operator.
- On the next page, select the YAML tab. This displays the manifest of the CTE for Kubernetes instances installed on the cluster.
- Update the version parameter in the CRD spec with the updated CTE-K8s version.
- Click Save.

  The upgrade process terminates all of the CTE for Kubernetes pods running the previous version, while activating new pods running the new version. Once the cte-csi-node-XXXX pods are running on all of the nodes of the cluster, the application that uses the CTE for Kubernetes volume can be re-deployed.

- If any of the other parameters in CTE-K8s-Operator.yaml, like logLevel, apiburst, apiqps, etc., need to be updated, follow the same process to update the relevant parameter.
Uninstalling CTE for Kubernetes Operator
To uninstall CTE for Kubernetes Operator:
- Stop any application that is using CTE for Kubernetes volumes.
- Expand the Operators section and click the Installed Operators page in the left-hand bar.
- Click on the Name of the installed operator.
- On the next page, select the Actions tab.
- Click Uninstall Operator in the list of actions displayed.
- Click Uninstall on the confirmation pop-up to uninstall CTE for Kubernetes.
Upgrading CTE for Kubernetes Operator
The operator is designed to upgrade to a newer version automatically as soon as a new version is published by Thales. The Operator Lifecycle Manager (OLM) constantly polls for updates and upgrades the operator whenever an update is available.
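To see which operator version OLM currently has installed, you can inspect the ClusterServiceVersion (CSV) and Subscription objects in the operator's namespace. A sketch: the namespace is a placeholder and the `run` wrapper only echoes each command — remove it to execute against a real cluster.

```shell
#!/bin/sh
# Sketch: inspect what OLM has installed for the operator.
# `run` echoes instead of executing; remove it on a real cluster.
run() { echo "+ $*"; }

NS="openshift-operators"   # operator namespace (example value)

# ClusterServiceVersions show the installed operator version and phase.
run oc get csv -n "$NS"

# The Subscription shows the channel and the approval mode OLM uses
# when a new version is published.
run oc get subscription -n "$NS"
```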