Volume Snapshot and Restore
Introduction
Kubernetes volume snapshots give you the ability to copy a volume at a specific point in time. You can use this copy to restore a volume to a prior state, or to provision a new volume. Use snapshots to protect your workloads and meet business-critical recovery point objectives in your disaster recovery plan, or to satisfy compliance requirements to periodically save data.
Similar to how the `PersistentVolume` and `PersistentVolumeClaim` API resources are used to provision volumes for users and administrators, the `VolumeSnapshotContent` and `VolumeSnapshot` API resources are provided to create volume snapshots for users and administrators.
Terminology
- `VolumeSnapshotClass`: Similar to how `StorageClass` provides a method for administrators to describe the classes of storage they offer when provisioning a volume, `VolumeSnapshotClass` provides a method for describing the classes of storage when provisioning a volume snapshot.
- `VolumeSnapshot`: A user's request for a snapshot of a volume. It is similar to a `PersistentVolumeClaim`.
- `VolumeSnapshotContent`: A snapshot taken from a volume in the cluster that has been provisioned by an administrator. It is a resource in the cluster, just as a PersistentVolume is a cluster resource.
Considerations
-
The CSI storage drivers that support volume snapshot provisioning and dynamic volume provisioning, and that can be used with the CTE for Kubernetes Volume Snapshot and Restore feature, are listed on the Kubernetes CSI Drivers List.
-
API objects: `VolumeSnapshot`, `VolumeSnapshotContent`, and `VolumeSnapshotClass` are Custom Resource Definitions (CRDs); they are not part of the core API. A CRD is an extension of the Kubernetes API that is not necessarily available in a default Kubernetes installation; it represents a customization of a particular Kubernetes installation. Without the required CRDs installed, attempts to create a VolumeSnapshot fail. Validate the CRDs by typing:
kubectl api-resources -o wide | grep -i VolumeSnapshot
If the snapshot CRDs are installed and you use the snapshot API, make sure that the volume snapshot controller is deployed. Snapshot CRDs may be installed by default when you install the respective storage snapshot controller; alternatively, you may need to install the snapshot CRDs along with the snapshot controller. Check the appropriate storage driver documentation for installation instructions for the snapshot CRDs and the snapshot controller.
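A quick, non-authoritative way to check whether a snapshot controller is running is to look for its pods. The pod name and namespace vary by distribution and installation method, so the `snapshot-controller` name below is only an assumption:
# List pods across all namespaces and filter for a snapshot controller deployment.
kubectl get pods --all-namespaces | grep -i snapshot-controller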
Volume Snapshot Types
There are two methods for provisioning snapshots:
Pre-provisioned
A cluster administrator creates these `VolumeSnapshotContents`. They carry the details of the volume snapshot on the storage system that is available for use by cluster users. They exist in the Kubernetes API and are available for consumption.
Note
This method IS NOT supported by CTE for Kubernetes.
Dynamically provisioned
You can take a snapshot dynamically from a `PersistentVolumeClaim`. The `VolumeSnapshotClass` specifies the storage provider-specific parameters to use when taking a snapshot. You can create a snapshot for CTE volumes (CTE-PVCs) and restore it using new CTE-PVCs.
Note
This method IS supported by CTE for Kubernetes.
Setup
The following example assumes that:
- CTE for Kubernetes setup is available
- It uses dynamically provisioned volumes
- It uses the compatible NFS-CSI driver
- The application Pod is running with a CTE-PVC (`cte-claim`)
- `cte-claim` is using the CTE StorageClass (`cte-test-sc`) and is bound and available
- `cte-test-sc` is using the `nfs-csi-sc-imm` (`nfs.csi.k8s.io`) source storageClass for NFS
cte-csi-storageclass.yaml (CTE-SC)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cte-test-sc
provisioner: csi.cte.cpl.thalesgroup.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  # Domain name or IP address of the CipherTrust Manager (Required)
  key_manager_addr: 192.168.70.1
  # Name of the CipherTrust Manager K8s Storage Group. (Required)
  k8_storage_group: k8s-group1
  # Kubernetes Secret with CipherTrust Manager registration token (Required)
  registration_token_secret: cm-reg-token
  # To provision a volume dynamically, add the underlying source storageClass.
  # The CTE-K8s driver will create a source_pvc dynamically on this source storageClass.
  # A new volume will be provisioned by the underlying provisioner. (Optional) (Required only for Dynamic PVC)
  # Taking an NFS provisioning driver (`nfs.csi.k8s.io`) example: `nfs-csi-sc-imm`
  source_storage_class: <underlying_source_sc_name>
  # When specified and set to "true", this parameter will automatically
  # delete the dynamic sourcePVC. This might delete the actual data volume, depending
  # on the underlying provisioner driver implementation. Default is set to "false". (Optional)
  allow_source_pvc_delete: "false"
  # When specified, this parameter will automatically add the `csi.cte.cpl.thalesgroup.com/policy`
  # parameter to the CTE-K8s PVC based on the requested parameters. (Optional) (Required if `source_storage_class` is set)
  default_policy: <policy_1>
  # Time in minutes to wait before unregistering from the CipherTrust Manager
  # once all volumes have been unguarded. Parameter must be added as a string
  # integer value. Default is "10" minutes. (Optional)
  registration_period: "10"
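If you are creating this StorageClass yourself rather than using an existing one, apply the manifest above in the usual way. The file name below simply matches the title of the example:
kubectl apply -f cte-csi-storageclass.yaml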
The following three parameters in the above CTE-K8s StorageClass definition are specific to dynamically provisioned volumes (sourcePVC and sourcePV), as illustrated in the sketch after this list:
- `source_storage_class`
- `default_policy`
- `allow_source_pvc_delete`
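As an illustration only, using the NFS source StorageClass and policy named earlier in this Setup section (`nfs-csi-sc-imm` and `policy_1` are the values from this example, not requirements), these parameters might be filled in as:
parameters:
  # Underlying source StorageClass from the NFS example in this Setup section
  source_storage_class: nfs-csi-sc-imm
  # Policy applied by default when the CTE-PVC does not set the policy annotation
  default_policy: policy_1
  # Keep the dynamic sourcePVC when the CTE-PVC is deleted
  allow_source_pvc_delete: "false"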
cte-csi-claim.yaml (source CTE-PVC)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cte-claim
  annotations:
    # If this field is used, this policy is applied regardless of whether `default_policy: <policy_1>` is provided in the CTE-SC.
    csi.cte.cpl.thalesgroup.com/policy: policy_1
    # (Optional) If this field is used, the volume bound to this pvc1 must be a dynamically provisioned volume.
    #csi.cte.cpl.thalesgroup.com/source_pvc: <pvc1>
spec:
  storageClassName: cte-test-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
Example output for SC, PVC, and PV with the deployments listed above
kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/cte-test-sc csi.cte.cpl.thalesgroup.com Delete Immediate true 13m
storageclass.storage.k8s.io/nfs-csi-imm nfs.csi.k8s.io Delete Immediate false 56d
- CTE-SC: `cte-test-sc`
- Underlying source SC: `nfs-csi-sc-imm`
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/cte-claim Bound pvc-b168e8d9-e26b-4515-aab2-91dc241ce74a 2Gi RWX cte-test-sc <unset> 13m
persistentvolumeclaim/cte-claim-unprotected-cprfnzjxjxg3gpyabrqh5d Bound pvc-4183880c-6480-43e8-9564-b9f06fe557bf 2Gi RWX nfs-csi-imm <unset> 13m
- CTE-PVC: `cte-claim`
- Source PVC: `cte-claim-unprotected-<UID>`
kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
persistentvolume/pvc-4183880c-6480-43e8-9564-b9f06fe557bf 2Gi RWX Delete Bound default/cte-claim-unprotected-cprfnzjxjxg3gpyabrqh5d nfs-csi-imm <unset> 13m
persistentvolume/pvc-b168e8d9-e26b-4515-aab2-91dc241ce74a 2Gi RWX Delete Bound default/cte-claim cte-test-sc <unset> 13m
- CTE-PV: `pvc-<UID>`
- Source PV: `pvc-<UID>`
Validate the Source Setup for Taking Snapshots
- Get the `sourcePVC` name that is attached to the source CTE-PVC (see the alternative jsonpath sketch after this step). Type:
kubectl get pv $(kubectl get pvc <Source_CTE-PVC_Name> -n <NameSpace> -o wide --no-headers -o custom-columns=":spec.volumeName") -o wide --no-headers -o custom-columns=":spec.csi.volumeAttributes" | awk '{for (i=1;i<=NF;i++){if ($i ~/csi.cte.cpl.thalesgroup.com\/source_pvc:/) {print $i}}}' | awk -F: '{print $2}'
  - `<Source_CTE-PVC_Name>`: source CTE-PVC name, for example: `cte-claim`
  - `<NameSpace>`: namespace in which the above CTE-PVC is deployed
Response
cte-claim-unprotected-<UID>
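If you prefer to avoid the awk pipeline, an equivalent lookup can usually be done with jsonpath. This is a sketch assuming the `default` namespace, the `cte-claim` example claim, and that the PV exposes the source PVC name in `spec.csi.volumeAttributes` as shown above:
# Resolve the PV bound to the CTE-PVC, then read the source_pvc volume attribute.
PV=$(kubectl get pvc cte-claim -n default -o jsonpath='{.spec.volumeName}')
kubectl get pv "$PV" -o jsonpath='{.spec.csi.volumeAttributes.csi\.cte\.cpl\.thalesgroup\.com/source_pvc}'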
- Verify that the sourcePVC is bound to a dynamically provisioned volume. Type:
kubectl get pv $(kubectl get pvc <SourcePVC_Name> -n <NameSpace> -o wide --no-headers -o custom-columns=":spec.volumeName") -o wide --no-headers -o custom-columns=":spec.csi"
  - `<SourcePVC_Name>`: sourcePVC name obtained from the previous command
  - `<NameSpace>`: namespace in which the source CTE-PVC is deployed
Response
If the source volume is dynamically provisioned, then the output contains data, for example:
map[driver:nfs.csi.k8s.io map[csi.storage.k8s.io/pv/name:pvc-caff4502-0123-4391-927a-5efa5cbb12d9 csi.storage.k8s.io/pvc/name:cte-claim-unprotected-mi5eclhwd4dyhjrjeqicbd csi.storage.k8s.io/pvc/namespace:default server:<some-IP> share:<share-path> storage.kubernetes.io/csiProvisionerIdentity:1724596308482-728-nfs.csi.k8s.io subdir:pvc-caff4502-0123-4391-927a-5efa5cbb12d9] volumeHandle:<Some-IP>.#<share-path>#pvc-caff4502-0123-4391-927a-5efa5cbb12d9##]
If the source volume is not dynamically provisioned, then the output does not contain data.
Note
If the sourcePVC is bound to a volume that is not dynamically provisioned, then you cannot take a volume snapshot of the source CTE-PVC (i.e. `cte-claim`).
Create a Snapshot
- Create a `VolumeSnapshotClass` object to specify the CTE-K8s driver and `deletionPolicy` for your volume snapshot. You can reference `VolumeSnapshotClass` objects when you create `VolumeSnapshot` objects.
cte-csi-snapshotclass.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: cte-snapclass
parameters:
  # Add parameters here if required for any storage-specific settings. These parameters
  # will be passed to the storage source snapshotClass by the CTE-K8s driver.
  <key1>: <value1>
  <key2>: <value2>
# The following fields must be used as they are.
driver: csi.cte.cpl.thalesgroup.com
deletionPolicy: Delete
Volume snapshot classes have parameters that describe volume snapshots belonging to the volume snapshot class. Different parameters may be accepted depending on the specific storage driver. You must provide those required parameters with the CTE-SnapshotClass (`cte-csi-snapshotclass.yaml`).
**Sample Parameters**
The following are sample parameters that are used with different cloud storage provisioners. Check the specific storage driver documentation for the supported parameters based on the requirements.
parameters:
  snapshot-type: images
  image-family: <IMAGE_FAMILY>
  storage-locations: us-east2
  type: backup
  resourceGroup: <EXISTING_RESOURCE_GROUP_NAME>
  incremental: true
# The driver field value is static. Do not change it.
driver: csi.cte.cpl.thalesgroup.com
- Apply the CTE-SnapshotClass manifest. Type:
kubectl apply -f cte-csi-snapshotclass.yaml
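Optionally, confirm that the snapshot class was created (assuming the `cte-snapclass` name from the manifest above):
kubectl get volumesnapshotclass cte-snapclass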
- Create a VolumeSnapshot object, which is a request for a snapshot of an existing PersistentVolumeClaim object.
cte-csi-snapshot.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: cte-claim-snapshot
spec:
  volumeSnapshotClassName: cte-snapclass
  source:
    # CTE-PVC on which the user wants to take the snapshot
    persistentVolumeClaimName: cte-claim
Note
The namespace of the VolumeSnapshot (`cte-claim-snapshot`) must be the same as the namespace of the source PersistentVolumeClaim (`cte-claim`). Creating a snapshot from `volumeSnapshotContent` is not supported with CTE for Kubernetes; you cannot reference `volumeSnapshotContentName` as a source (`spec.source`).
- Apply the CTE-Snapshot manifest. Type:
kubectl apply -f cte-csi-snapshot.yaml
Once the previous CTE-Snapshot manifest (`cte-csi-snapshot.yaml`) deploys, the CTE-K8s driver creates a source Snapshot Class for the underlying storage driver. CTE for Kubernetes also creates a source Snapshot on the sourcePVC. Then, the underlying storage driver (for example, `nfs.csi.k8s.io`) takes a snapshot of the physical volume.
Examples
Reference the following examples for VolumeSnapshotClass (`vsclass`), VolumeSnapshot (`vs`), and VolumeSnapshotContent (`vsc`) from the deployments listed in the previous section.
kubectl get vsclass
NAME DRIVER DELETIONPOLICY AGE
volumesnapshotclass.snapshot.storage.k8s.io/cte-snapclass csi.cte.cpl.thalesgroup.com Delete 2m53s
volumesnapshotclass.snapshot.storage.k8s.io/cte-snapclass-sourcesnapshotclass-f610823c nfs.csi.k8s.io Delete 2m49s
- CTE-Snapshot Class: `cte-snapclass`
- Source Snapshot Class (underlying storage snapshotClass): `cte-snapclass-sourcesnapshotclass-<UID>` // This is created by the CTE-K8s driver based on the input provided in the CTE-SnapshotClass (`cte-csi-snapshotclass.yaml`) and the source CTE-PVC provided in the CTE-Snapshot (`cte-csi-snapshot.yaml`)
kubectl get vsc
NAME READYTOUSE RESTORESIZE DELETIONPOLICY DRIVER VOLUMESNAPSHOTCLASS VOLUMESNAPSHOT VOLUMESNAPSHOTNAMESPACE AGE
volumesnapshotcontent.snapshot.storage.k8s.io/snapcontent-67a7d12c-67f3-472f-95ee-28f0f73d4a5c true 1966888 Delete nfs.csi.k8s.io cte-snapclass-sourcesnapshotclass-f610823c cte-claim-snapshot-sourcesnapshot-f610823c default 2m49s
volumesnapshotcontent.snapshot.storage.k8s.io/snapcontent-f610823c-b567-42b9-9558-d59c9487ad7b true 0 Delete csi.cte.cpl.thalesgroup.com cte-snapclass cte-claim-snapshot default 2m49s
- CTE-SnapshotContent: `snapcontent-<UID>`
- Source SnapshotContent: `snapcontent-<UID>`
kubectl get vs
NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE
volumesnapshot.snapshot.storage.k8s.io/cte-claim-snapshot true cte-claim 0 cte-snapclass snapcontent-f610823c-b567-42b9-9558-d59c9487ad7b 2m28s 2m49s
volumesnapshot.snapshot.storage.k8s.io/cte-claim-snapshot-sourcesnapshot-f610823c true cte-claim-unprotected-cprfnzjxjxg3gpyabrqh5d 1966888 cte-snapclass-sourcesnapshotclass-f610823c snapcontent-67a7d12c-67f3-472f-95ee-28f0f73d4a5c 2m29s 2m49s
- CTE-Snapshot: `cte-claim-snapshot` // CTE for Kubernetes does not take any actual snapshot, which is why RESTORESIZE is 0 for the CTE-Snapshot
- Source Snapshot: `cte-claim-snapshot-sourcesnapshot-<UID>` // The actual snapshot data is created by the underlying source driver (for example: `nfs.csi.k8s.io`)
Validate the Snapshot
Before restoring the volume from the snapshot, make sure that the snapshot is complete and ready to use: the snapshot status (`readyToUse`) should be set to `true`, which indicates that the snapshot was successful. To verify, type:
kubectl get volumesnapshot -n <Namespace> -o custom-columns='NAME:.metadata.name,READY:.status.readyToUse'
- `<Namespace>`: namespace in which the snapshot is deployed
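If you prefer to block until the snapshot is ready rather than polling, `kubectl wait` can watch the same field. This sketch assumes kubectl v1.23 or later (for jsonpath wait conditions) and the `cte-claim-snapshot` example name:
# Wait up to two minutes for the snapshot to report readyToUse=true.
kubectl wait volumesnapshot/cte-claim-snapshot -n <Namespace> --for=jsonpath='{.status.readyToUse}'=true --timeout=120s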
Verifying Snapshot Name
You can describe the snapshot to check its status. Type:
kubectl describe vs -n <Namespace> <CTE_Snapshot_Name>
- `<Namespace>`: namespace in which the snapshot is deployed
- `<CTE_Snapshot_Name>`: CTE-Snapshot name, for example: `cte-claim-snapshot`
Restore the Volume Snapshot
You can reference a `VolumeSnapshot` in a `PersistentVolumeClaim` to provision a new volume with data from an existing volume, or to restore a volume to a state that you captured in the snapshot. To reference a `VolumeSnapshot` in a `PersistentVolumeClaim`, add the `dataSource` field to your `PersistentVolumeClaim`.
Restore CTE-PVC (cte-csi-claim-restore.yaml)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cte-claim-restore
  annotations:
    # Make sure that the policy is correct and that the policy key is used correctly. Check the policy
    # that is used by the source CTE-PVC (i.e. cte-claim).
    csi.cte.cpl.thalesgroup.com/policy: <policy_1>
    # The 'csi.cte.cpl.thalesgroup.com/source_pvc' annotation is not required. A new sourcePVC is
    # created by the CTE-K8s driver and a new volume is provisioned with the restored data.
    #csi.cte.cpl.thalesgroup.com/source_pvc: <source-pvc>
spec:
  storageClassName: cte-test-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  dataSource:
    kind: VolumeSnapshot
    # Name of the CTE-Snapshot from which the user wants to restore the data
    name: cte-claim-snapshot
    apiGroup: snapshot.storage.k8s.io
Notes and Considerations
- The namespace of the restore `PersistentVolumeClaim` must be the same as the namespace of the `VolumeSnapshot`.
- Make sure to use the correct policy (`csi.cte.cpl.thalesgroup.com/policy: <policy_1>`) with the restore CTE-PVC (`cte-csi-claim-restore.yaml`). The snapshot is taken on encrypted data, and after restoring the data, the correct key must be provided with the policy in order to decrypt and access the data. If the key rules in the policy are not correct, the data appears corrupted when it is accessed. Check the source CTE-PVC (`cte-csi-claim.yaml`) policy for key information.
- Make sure that the restore CTE-PVC (`cte-csi-claim-restore.yaml`) uses the same underlying source storage driver as the source CTE-PVC (`cte-csi-claim.yaml`). Restoring a volume with a different source storage driver is not supported. For example, if the source CTE-PVC uses `pd.csi.storage.gke.io` as the underlying source driver and the snapshot was created with it, then attempting to restore with a different storage driver fails.
- You can create/restore an unlimited number of CTE-PVCs from a volume snapshot, as long as the snapshot is available.
Apply the Manifest
Use the restore manifest above as an example when restoring a volume snapshot, and apply it:
kubectl apply -f cte-csi-claim-restore.yaml
Based on the CTE StorageClass `volumeBindingMode` and the underlying source StorageClass `volumeBindingMode`, a new volume is provisioned with restored data as follows:
| CTE Storage Class (`cte-test-sc`) `volumeBindingMode` | Source Storage Class (`<underlying_source_sc_name>`) `volumeBindingMode` | Comment |
|---|---|---|
| Immediate | Immediate | Provision volume with restored data when deploying the restore CTE-PVC (`cte-csi-claim-restore.yaml`). |
| Immediate | WaitForFirstConsumer | CTE for Kubernetes does not support this `volumeBindingMode` combination. |
| WaitForFirstConsumer | Immediate | Provision volume with restored data after deploying the restore CTE-PVC (`cte-csi-claim-restore.yaml`) and the application Pod on this CTE-PVC. |
| WaitForFirstConsumer | WaitForFirstConsumer | Provision volume with restored data after deploying the restore CTE-PVC (`cte-csi-claim-restore.yaml`) and the application Pod on this CTE-PVC. |
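After applying the restore manifest and, where required, deploying the application Pod, you can confirm that the restored claim reached the Bound state. A minimal check, assuming the `cte-claim-restore` name and namespace from the examples above:
# The restored CTE-PVC should report STATUS Bound once provisioning completes.
kubectl get pvc cte-claim-restore -n <Namespace>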