Using Dynamic PVCs with CTE for Kubernetes
Static provisioning of Persistent Volumes (PVs) requires the administrator to anticipate the storage that applications may need and create PVs in advance. As your Kubernetes environment expands, this can become a bottleneck.
Dynamic provisioning solves this issue. Instead of the Kubernetes administrator creating specific PVs, the administrator defines Storage Classes. Each Storage Class has a specific storage pool from which PVs can be provisioned automatically to meet an application’s requirements.
Kubernetes provides a variety of internal provisioners. With dynamic provisioning, a developer can use a PVC to request a specific storage type and have a new PV provisioned automatically. For dynamic provisioning to work, the PVC must request a StorageClass that an administrator has already created and configured on the target cluster.
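As a generic illustration of this pattern (the class name, provisioner, and claim name below are placeholders, not part of CTE for Kubernetes):

```yaml
# Hypothetical StorageClass an administrator creates once.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-storage
provisioner: example.com/provisioner
---
# A developer's PVC that requests the class; a matching PV is
# provisioned automatically from the class's storage pool.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-claim
spec:
  storageClassName: fast-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```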
CTE for Kubernetes now allows you to deploy Helm charts that use a StorageClass as input for creating volumes. Helm charts help you define, install, and upgrade Kubernetes applications, thereby helping you manage Kubernetes clusters.
Specifying a Storage Class in a Helm chart means that the cluster selects volumes from that storage class's pool. If no suitable volume exists, but the CTE-CSI driver supports automatic provisioning, a new volume is created and added to the cluster. Traditional CTE for Kubernetes PVCs must specify a source PVC in their definition, which makes them incompatible with this type of deployment. With Helm charts, a CTE for Kubernetes PVC can instead pass enough information to a source Storage Class so that dynamic provisioning can create and attach a data source PVC based on the PVC specifications.
Specifically, support for dynamic PVCs changes the creation method for PVs. The CTE-K8s StorageClass is created with the new parameters, and the CTE-K8s PVC is created without a sourcePVC annotation (the policy annotation is optional). The controller checks for the policy and sourcePVC parameters; if it does not find a sourcePVC parameter, it creates a new unprotected PVC (the source PVC) using parameters from the CTE-K8s PVC and the CTE-K8s StorageClass. CTE-K8s then binds the unprotected PVC to a PV if any qualifying PV is available on the cluster; otherwise, it requests that a new PV be provisioned. The CTE-K8s PVC is then provisioned as normal, and the application pod can use the CTE-K8s PVC claim.
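The flow above can be sketched as a minimal CTE-K8s PVC with no sourcePVC or policy annotations; the claim name is illustrative, and the Storage Class is assumed to define source_storage_class and default_policy:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # No sourcePVC or policy annotations are set, so the controller derives
  # them from the Storage Class parameters (source_storage_class and
  # default_policy) and dynamically provisions the source PVC.
  name: cte-dynamic-claim
spec:
  storageClassName: csi-test-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi   # Used when requesting the dynamically provisioned source PV
```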
CTE-K8s Storage Class definition
To support dynamic PVCs, three new parameters must be added to the CTE-K8s Storage Class definition:
- source_storage_class
- default_policy
- allow_source_pvc_delete
Example
Following is an example of an entire Storage Class definition with the new parameters:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-test-sc
provisioner: csi.cte.cpl.thalesgroup.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  # Domain name or IP address of the CipherTrust Manager (Required)
  key_manager_addr: 192.168.70.1
  # Name of the CipherTrust Manager K8s Storage Group (Required)
  k8_storage_group: test-group
  # Kubernetes Secret with the CipherTrust Manager registration token (Required)
  registration_token_secret: cm-reg-token
  # Kubernetes Secret with an external-CA-signed client certificate (Optional).
  # Create it with:
  #   kubectl create secret generic <secret-name> --from-file=<clientName>.crt=<client_cert.pem> --from-file=<clientName>.key=<client_key.pem> --from-file=<clientName>.passphrase=<passphrase>
  # Multiple client certificate details can be added to the same secret with
  # the above command.
  external_ca_client_secret: <secret-name>
  # Short registration description to be displayed in the CipherTrust Manager (Optional)
  client_description: "Describe your K8s client"
  # Time in minutes to wait before unregistering from the CipherTrust Manager
  # once all volumes have been unguarded. Must be specified as a string
  # integer value. Default "10" minutes. (Optional)
  registration_period: "10"
  # When specified, this parameter automatically adds the
  # csi.cte.cpl.thalesgroup.com/source_pvc parameter to the CTE-K8s PVC based
  # on the requested parameters (Optional)
  source_storage_class: some_sc_name
  # When specified, this parameter automatically adds the
  # csi.cte.cpl.thalesgroup.com/policy parameter to the CTE-K8s PVC based on
  # the requested parameters (Optional; required if source_storage_class is set)
  default_policy: <default_policy_name>
  # When set to "true", this parameter automatically deletes the dynamic
  # source PVC and, depending on the provisioner driver implementation,
  # possibly the actual data volume. If set to "false", you must manually
  # delete the created source PVC. (Optional)
  allow_source_pvc_delete: "false"
New Persistent Volume Claim Usage
The support for dynamic PVCs changes some of the rules for PVCs:
- The source_pvc and policy parameters are now optional if you use the new Storage Class parameters
- If source_pvc is specified, but policy is not, then the PVC uses the Storage Class default_policy
- If policy is specified, but source_pvc is not, then the PVC uses the storage size when requesting a new volume
- Specifying both the policy and source_pvc parameters makes the cte-csi driver work as it did previously, meaning:
  - The source_storage_class and default_policy parameters in the Storage Class are ignored
  - The storage parameter in the Persistent Volume Claim is ignored
Example: Persistent Volume Claim
Using these new parameters, you could configure CTE for Kubernetes as follows:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cte-claim1
  annotations:
    csi.cte.cpl.thalesgroup.com/policy: policy_1
    csi.cte.cpl.thalesgroup.com/source_pvc: nfs-test-claim
spec:
  storageClassName: csi-test-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi   # No longer ignored by the driver
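For the dynamic case, a sketch of a PVC that sets only the policy annotation; because no source_pvc annotation is present, the controller dynamically provisions the source PVC from the Storage Class's source_storage_class, and the requested storage size is used. The claim name below is illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cte-dynamic-claim   # Illustrative name
  annotations:
    # Policy only; with no source_pvc annotation, a source PVC is
    # provisioned dynamically from the Storage Class's source_storage_class.
    csi.cte.cpl.thalesgroup.com/policy: policy_1
spec:
  storageClassName: csi-test-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi   # Used to size the dynamically provisioned source volume
```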
CTE-CSI protected Persistent Storage Claim
Creating a CTE-CSI protected Persistent Storage Claim by deploying an application pod:
apiVersion: v1
kind: Pod
metadata:
  name: cte-csi-demo
spec:
  volumes:
    - name: test-vol
      persistentVolumeClaim:
        claimName: cte-claim1
  containers:
    - name: ubuntu
      image: ubuntu
      volumeMounts:
        - mountPath: "/data"
          name: test-vol
      command:
        - "sleep"
        - "604800"
      imagePullPolicy: IfNotPresent
  restartPolicy: Always
External provisioner support with CTE-K8s for dynamically provisioned volumes
If your driver cannot dynamically provision a persistent volume for a storage type, you can use an external provisioner for that storage type to create the volume dynamically and deploy it with CTE for Kubernetes. Several external provisioners can provision data volumes dynamically for a storage class; you can use such a storage class as the source storage class in the cte-storageclass.yaml parameter (source_storage_class).
After deploying a cte-csi-claim.yaml, the data persistent volume is created dynamically.
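As a sketch of this setup, assuming an external provisioner has already been deployed and exposes a StorageClass named nfs-csi (an illustrative name, as are the other parameter values), the CTE-K8s Storage Class would reference it like this:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-test-sc
provisioner: csi.cte.cpl.thalesgroup.com
parameters:
  key_manager_addr: 192.168.70.1
  k8_storage_group: test-group
  registration_token_secret: cm-reg-token
  # StorageClass served by the external provisioner; "nfs-csi" is an
  # illustrative name. Source PVCs are provisioned from this class.
  source_storage_class: nfs-csi
  default_policy: policy_1
```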