Installing and Managing CTE for Kubernetes in an Operator Environment
Installing CTE for Kubernetes
To install CTE for Kubernetes (when the CTE for Kubernetes Operator is already installed):

- Navigate to the deploy directory that was downloaded for the operator install.
- Ensure that the ctek8soperator-crd.yaml file has the correct values.
- Type:

  oc apply -f <path to>/ctek8soperator-crd.yaml
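After applying the CRD, you can confirm that the operator has rolled out the CTE for Kubernetes node DaemonSet. This is a suggested check, not part of the official procedure; the kube-system namespace below is an assumption, so substitute the namespace your operator deploys into:

```shell
# Check that the cte-csi-node DaemonSet exists and its pods are running
# (replace kube-system with the namespace used by your operator deployment)
oc get daemonset cte-csi-node -n kube-system
oc get pods -n kube-system -l app.kubernetes.io/name=CteK8sOperator
```

The app.kubernetes.io/name=CteK8sOperator label matches the labels the operator applies to the DaemonSet pod template (see the cte-csi-node manifest later in this section).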
Updating CTE for Kubernetes
To update CTE for Kubernetes:
- Stop any application that is using CTE for Kubernetes volumes.
- Navigate to the deploy directory that was downloaded for the operator install.
- Edit the ctek8soperator-crd.yaml file.
- Update the version field in the spec section to the latest version of CTE for Kubernetes.
- Apply the change with the command:

  oc apply -f <path to>/ctek8soperator-crd.yaml
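To confirm that the new version was picked up, one option is to inspect the image tag on the deployed DaemonSet, since the tag reflects the CTE for Kubernetes version. This check and the kube-system namespace are assumptions; adjust for your deployment:

```shell
# Print the cte-csi container image; its tag reflects the deployed version
# (replace kube-system with the operator's namespace)
oc get daemonset cte-csi-node -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```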
Using user-defined Configuration Parameters with Red Hat OpenShift
You can use user-defined configuration parameters for the CTE for Kubernetes (CTE-K8s) DaemonSet to control which nodes CTE-K8s is deployed on when it is deployed through the operator.
cte-csi-node manifest
apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: "1"
  creationTimestamp: "2024-06-20T10:08:55Z"
  generation: 1
  name: cte-csi-node
  namespace: cte-o
  ownerReferences:
  - apiVersion: cte-k8s-operator.csi.cte.cpl.thalesgroup.com/v1
    blockOwnerDeletion: true
    controller: true
    kind: CteK8sOperator
    name: ctek8soperator
    uid: 292a9b6d-716d-4b35-b965-916ae58b709c
  resourceVersion: "678877896"
  uid: aa2160c0-030f-49e6-b02f-0249cb78d033
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: CteK8sOperator-Deployment
      app.kubernetes.io/instance: ctek8soperator
      app.kubernetes.io/managed-by: CteK8sOperator-Operator
      app.kubernetes.io/name: CteK8sOperator
      app.kubernetes.io/version: 1.2.0-latest
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: CteK8sOperator-Deployment
        app.kubernetes.io/instance: ctek8soperator
        app.kubernetes.io/managed-by: CteK8sOperator-Operator
        app.kubernetes.io/name: CteK8sOperator
        app.kubernetes.io/version: 1.2.0-latest
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: type
                operator: NotIn
                values:
                - virtual-kubelet
              - key: kubernetes.io/arch
                operator: In
                values:
                - amd64
                - arm64
      containers:
      - args:
        - --endpoint=$(CSI_ENDPOINT)
        - --nodeid=$(KUBE_NODE_NAME)
        - --namespace=$(KUBE_NAMESPACE)
        - --v=5
        - --apiburst=300
        - --apiqps=200
        - --registration-cleanup-interval=10
        - --pauseimage=registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
        env:
        - name: CSI_ENDPOINT
          value: unix:///csi/csi.sock
        - name: KUBE_NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: KUBE_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: docker.io/thalesciphertrust/ciphertrust-transparent-encryption-kubernetes:1.2.0-latest
        imagePullPolicy: Always
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - rm -rf /csi/csi.sock
        name: cte-csi
        resources: {}
        securityContext:
          capabilities:
            add:
            - SYS_ADMIN
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /csi
          name: plugin-dir
        - mountPath: /var/lib/kubelet/pods
          mountPropagation: Bidirectional
          name: mountpoint-dir
        - mountPath: /etc/kubernetes/
          name: kube-cred
        - mountPath: /dev/shm
          name: dshm
        - mountPath: /var/log
          name: varlog
        - mountPath: /chroot
          name: chroot
        - mountPath: /var/run/cri.sock
          name: cri-sock
      - args:
        - --agentlogs
        image: docker.io/thalesciphertrust/ciphertrust-transparent-encryption-kubernetes:1.2.0-latest
        imagePullPolicy: IfNotPresent
        name: cte-agent-logs
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/log
          name: varlog
        - mountPath: /chroot
          name: chroot
      - args:
        - --csi-address=/csi/csi.sock
        - --kubelet-registration-path=/var/lib/kubelet/plugins/csi.cte.cpl.thalesgroup.com/csi.sock
        - --v=5
        image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
        imagePullPolicy: IfNotPresent
        name: csi-sidecar-registrar
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /csi
          name: plugin-dir
        - mountPath: /registration
          name: registration-dir
      - args:
        - --csi-address=/csi/csi.sock
        - --v=5
        image: k8s.gcr.io/sig-storage/csi-attacher:v3.0.0
        imagePullPolicy: IfNotPresent
        name: csi-sidecar-attacher
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /csi
          name: plugin-dir
      dnsPolicy: ClusterFirst
      hostPID: true
      imagePullSecrets:
      - name: cte-csi-secret
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-node-critical
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: cte-csi-node
      serviceAccountName: cte-csi-node
      terminationGracePeriodSeconds: 600
      tolerations:
      - operator: Exists
      volumes:
      - hostPath:
          path: /var/lib/kubelet/plugins_registry/
          type: Directory
        name: registration-dir
      - hostPath:
          path: /var/lib/kubelet/plugins/csi.cte.cpl.thalesgroup.com/
          type: DirectoryOrCreate
        name: plugin-dir
      - hostPath:
          path: /var/lib/kubelet/pods
          type: DirectoryOrCreate
        name: mountpoint-dir
      - hostPath:
          path: /etc/kubernetes/
          type: Directory
        name: kube-cred
      - emptyDir:
          medium: Memory
        name: dshm
      - emptyDir: {}
        name: varlog
      - emptyDir: {}
        name: chroot
      - hostPath:
          path: /run/crio/crio.sock
          type: ""
        name: cri-sock
  updateStrategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 15
  desiredNumberScheduled: 15
  numberAvailable: 12
  numberMisscheduled: 0
  numberReady: 12
  numberUnavailable: 3
  observedGeneration: 1
  updatedNumberScheduled: 15
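As an illustration of node placement control, a cluster administrator could label the nodes that should run CTE-K8s and add a matching requirement to the nodeAffinity block in the manifest above. The label key and value below (cte-k8s=enabled) are hypothetical, not a built-in convention:

```shell
# Hypothetical: apply an opt-in label to each node that should run CTE-K8s
kubectl label node <node-name> cte-k8s=enabled
```

Then add a corresponding entry under matchExpressions in the nodeAffinity section, for example key: cte-k8s, operator: In, values: [enabled], so that the DaemonSet schedules pods only on the labeled nodes.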
Using ConfigMaps to customize deployment of CTE for Kubernetes
If you are deploying CTE for Kubernetes using Helm, you can customize the deployment by modifying parameters in the values.yaml file or by editing the manifest files (cte-csi-controller.yaml, cte-csi-driver.yaml, and cte-csi-nodeserver.yaml) directly.
However, if you are deploying CTE for Kubernetes using the operator, you can customize the deployment by modifying only the small subset of parameters that are available in the ctek8soperator-crd.yaml CRD. The operator uses these values to update the bundled CTE for Kubernetes manifest files before deploying them. Because the manifest files required for deploying CTE for Kubernetes are bundled into the operator image when the operator is built, modifying any other parameter (such as a parameter in the manifest files) requires a new build of the operator that supports that parameter in its CRD.
From version 1.5.0 onward, this limitation has been addressed. You can either use the CRD to customize the small set of parameters, or the Kubernetes cluster administrator can customize the deployment by modifying the manifest files (cte-csi-controller.yaml and cte-csi-nodeserver.yaml) and uploading them as a ConfigMap.
Note
If a ConfigMap is available, changes in the ctek8soperator-crd.yaml CRD are ignored.
Once the operator finds the ConfigMap, it extracts the manifests from it and uses them to deploy CTE for Kubernetes. If the ConfigMap is not found, or the operator cannot successfully extract the manifests, it falls back to the manifests packaged at build time.
To create a custom ConfigMap, follow the steps below.
- On the cluster where you want to deploy CTE for Kubernetes, clone the ciphertrust-transparent-encryption-kubernetes.git repository and navigate to its directory:

  git clone https://github.com/thalescpl-io/ciphertrust-transparent-encryption-kubernetes.git
  cd ciphertrust-transparent-encryption-kubernetes

- Run the detemplatize_manifest.sh script. Helm must be installed to run this script.

  ./detemplatize_manifest.sh

  This script creates a custom_manifests/<CTE-K8s version> directory tree.

- Edit the manifest files in the custom_manifests directory corresponding to the version of CTE for Kubernetes being deployed. For instance, if you are deploying version 1.6.0 (1.6.0-latest), edit the files in the custom_manifests/1.6.0 directory.

- Run the kubectl command to create the ConfigMap.

  Note: The ConfigMap must be named ctecustomconfig and created in the same namespace where the CTE for Kubernetes operator is deployed.

  Example: To create ctecustomconfig in the kube-system namespace for a deployment of version 1.6.0, run:

  kubectl create configmap ctecustomconfig -n kube-system --from-file=./custom_manifests/1.6.0/

- Verify that the ConfigMap was created.

  kubectl get configmap ctecustomconfig -n kube-system
After completing the above steps, deploy the operator and CTE for Kubernetes using either the GUI or the deployment scripts.
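Note that kubectl create configmap fails if the ConfigMap already exists. To push later edits of the custom manifests into the cluster, one common pattern (a suggested approach, not an official procedure) is the dry-run/apply upsert; the namespace and version follow the kube-system / 1.6.0 example above:

```shell
# Regenerate the ConfigMap from the edited manifests and apply it,
# creating it if absent and updating it if it already exists
kubectl create configmap ctecustomconfig -n kube-system \
  --from-file=./custom_manifests/1.6.0/ \
  --dry-run=client -o yaml | kubectl apply -f -
```

After updating the ConfigMap, the operator must redeploy CTE for Kubernetes for the changed manifests to take effect.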
Upgrading CTE for Kubernetes
To upgrade CTE for Kubernetes:

- Check the Docker site for a new version of CTE for Kubernetes.
- Stop any application that is using CTE for Kubernetes volumes.
- Expand the Operators section and click Installed Operators in the left-hand navigation bar.
- Click the Name of the installed operator.
- On the next page, select the YAML tab. This displays the manifest of the CTE for Kubernetes instances installed on the cluster.
- Update the version parameter in the CRD spec with the new CTE-K8s version.
- Click Save.

  The upgrade process terminates all of the CTE for Kubernetes pods running the previous version while activating new pods running the new version. Once the cte-csi-node-XXXX pods are running on all of the nodes of the cluster, the application that uses the CTE for Kubernetes volume can be redeployed.

- If any of the other parameters in CTE-K8s-Operator.yaml, such as logLevel, apiburst, or apiqps, need to be updated, follow the same process as above to update the relevant parameter.
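The same version update can also be made from the command line instead of the YAML tab. The sketch below is an assumption, not a documented procedure: the resource name ctek8soperator is taken from the ownerReferences in the cte-csi-node manifest shown earlier, and the spec.version field is the one described in the update steps; verify both in your cluster before patching:

```shell
# List the custom resource to confirm its name, then patch its version field
# (resource kind/name and the 1.6.0-latest tag are illustrative)
oc get ctek8soperator
oc patch ctek8soperator ctek8soperator --type merge \
  -p '{"spec":{"version":"1.6.0-latest"}}'
```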
Deleting CTE for Kubernetes
To delete CTE for Kubernetes:
- Stop any application that is using CTE for Kubernetes volumes.
- Navigate to the deploy directory that was downloaded for the operator install.
- Type:

  oc delete -f <path to>/ctek8soperator-crd.yaml