Individual Guard Policy Tuning
Note
In the code commands and responses, the term GuardPoint means Guard Policy.
Configuration
There are two scenarios for tuning Guard Policy configuration in CTE for Kubernetes.
On the Guard
Use this scenario to create a Guard Policy with configuration settings other than the default values. A Guard Policy is created when you deploy an application pod that uses a CTE-PVC. Each CTE-PVC deployment results in the creation of a unique Guard Policy.
Note

- If the same CTE-PVC is used by multiple application pods deployed on the same node, the node contains a single Guard Policy.
- If the same CTE-PVC is used by multiple application pods deployed on different nodes, each node has its own unique Guard Policy, and the same tuning configuration changes are set on all of those Policies.
On the Run
Use this scenario to change configuration settings while a Guard Policy is active, meaning an application pod is already running with the CTE-PVC and you want to reconfigure the Guard Policy. Currently, only the debug log level can be tuned on the run.
Tunable Parameters
Configurable Parameter | Default Value | Description | Status | On the Guard Tuning Support | On the Run Tuning Support
---|---|---|---|---|---
debug_all | 4 | Sets the sensitivity of the logs; can be adjusted later using `secfsd -log_level [4-8]` | Change as needed | yes | yes
debug_extra | 0 | Debug level | Change as needed | yes | no
enable_xattr | 0 | Enables use of xattrs in the file system; default is off for best performance | Set to 1 if performing a restore from CTE with xattrs. Set to 0 for normal use. | yes | no
fileinfo_cache_timeout | 100 | Number of milliseconds to keep file attribute data in cache | Change as needed | yes | no
loginuid | 1 | Enforces loginuid. Without this set, su can bypass security | Change as needed | yes | no
max_worker_threads | 10 | Maximum number of parallel threads allowed | Change as needed | yes | no
mixed_policy | 1 | If mixed modes are needed (for example, apply key on read, no apply key on write), set this value; it causes an extra access check | Change as needed | yes | no
nfs_user | 0 | UID of the specific NFS user to use | Change as needed | yes | no
parallel_writes | 1 | Allows non-overlapping writes to run in parallel | Change as needed | yes | no
splice | 0 | Allows use of the splice call from FUSE | Change as needed | yes | no
writeback_cache_local | 1 | Uses writeback cache for local file systems (extx, xfs, btrfs) | Change as needed | yes | no
writeback_cache_nfs | 0 | Uses writeback cache for NFS | Change as needed | yes | no
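As the debug_all row notes, the log level can also be adjusted on a live Guard Policy with `secfsd -log_level`. Judging from the driver logs shown later on this page, the call takes the new level followed by the guarded volume path; a sketch, where the path is a placeholder:

secfsd -log_level 5 /tmp/<volume-id>

In CTE for Kubernetes this command is issued by the cte-csi driver itself (see the On the Run log example below), so it is shown here only to illustrate what the tuning ultimately executes on the node.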
Preparing CTE for Kubernetes deployment files
Creating a ConfigMap with tuning parameters
- The ConfigMap must be in the same namespace as the CTE-PVC.
- On the Guard ConfigMap settings require the prefix `guard.` with the tuning parameter name. You can use multiple tuning parameters at the same time to configure a Guard Policy.
- On the Run ConfigMap settings require the prefix `run.` with the tuning parameter name.
- Create a ConfigMap with the tuning parameters as key-value pairs.
- The following is an example ConfigMap YAML:
**gp-tuning-config-map.yaml**

apiVersion: v1
kind: ConfigMap
metadata:
  name: cte-claim-gp-tuning-cm
  #namespace: <ns1> # If the CTE-PVC will be deployed in the ns1 namespace
data:
  #guard.<config-parameter-name>: "<config-value>" # On the Guard tuning example
  #run.<config-parameter-name>: "<config-value>" # On the Run tuning example (this field can be added/updated on the Run)
  guard.debug_all: "8"
  guard.max_worker_threads: "20"
  guard.writeback_cache_nfs: "1"
  #run.debug_all: "8"

**Where**

`<config-parameter-name>`: Supported tuning parameter name (see the table above)

`<config-value>`: Value corresponding to the parameter. Make sure values are in double quotation marks ("").
- Apply the ConfigMap, type:

kubectl apply -f gp-tuning-config-map.yaml
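To confirm that the data was stored as expected before guarding, you can inspect the ConfigMap; this is a standard kubectl check, not part of the original procedure:

kubectl get configmap cte-claim-gp-tuning-cm -o yaml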
Creating a CTE-PVC with tuning enabled for "On the Guard"
A new annotation is required for this feature:
- csi.cte.cpl.thalesgroup.com/guardpoint_tuning_configmap: <tuning-config-map-name>
- `<tuning-config-map-name>`: Name of the ConfigMap that contains the tuning parameters. For example, cte-claim-gp-tuning-cm.
**cte-csi-claim.yaml**
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: cte-claim
annotations:
# GuardPolicy from CipherTrust Manager
csi.cte.cpl.thalesgroup.com/policy: policy_1
#(Following is not required if using dynamic provisioning)
csi.cte.cpl.thalesgroup.com/source_pvc: nfs-test-claim
csi.cte.cpl.thalesgroup.com/guardpoint_tuning_configmap: <tuning-config-map-name>
spec:
storageClassName: <CHANGEME to the storageclass name deployed e.g. csi-test-sc>
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Ki
- Apply the CTE PVC, type:

kubectl apply -f cte-csi-claim.yaml
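Optionally, confirm that the claim binds before deploying the application; again, a standard kubectl check rather than part of the original steps:

kubectl get pvc cte-claim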
Now you can use the CTE PVC with an application pod. All of the configuration changes take effect once the Guard Policy is created when the application pod deploys.
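For illustration only, a minimal application pod that mounts the CTE-PVC could look like the following sketch; the pod name, image, and mount path are hypothetical placeholders, not taken from this guide:

apiVersion: v1
kind: Pod
metadata:
  name: cte-demo-pod # hypothetical pod name
spec:
  containers:
  - name: app
    image: ubuntu:22.04 # any application image
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /data # path guarded inside the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: cte-claim # the CTE-PVC created above

Deploying a pod like this triggers Guard Policy creation, at which point the On the Guard tuning values are applied.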
- Validate the applied changes with the corresponding cte-csi-node-xxxxx pod log (with the cte-csi driver logLevel 5):

kubectl logs --namespace=kube-system cte-csi-node-xxxxx cte-csi
Example Response
[I0626 10:17:28.160501 3725529 guardpoint.go:1634] Guard Policy tuning is enabled for cte-pvc[cte-claim] namespace[default] policy[policy_1] path[/tmp/2e1df83a-33a5-11ef-823e-3e833176f6ce]
[I0626 10:17:28.166396 3725529 chroot.go:108] Executing /usr/bin/voradmin secfs config debug_all 8 /tmp/2e1df83a-33a5-11ef-823e-3e833176f6ce on Chroot UUID e51ad1bb-3809-46b2-8497-40aec3d9eee1
[I0626 10:17:28.214394 3725529 chroot.go:108] Executing /usr/bin/voradmin secfs config max_worker_threads 20 /tmp/2e1df83a-33a5-11ef-823e-3e833176f6ce on Chroot UUID e51ad1bb-3809-46b2-8497-40aec3d9eee1
[I0626 10:17:28.258587 3725529 chroot.go:108] Executing /usr/bin/voradmin secfs config writeback_cache_nfs 1 /tmp/2e1df83a-33a5-11ef-823e-3e833176f6ce on Chroot UUID e51ad1bb-3809-46b2-8497-40aec3d9eee1
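On a busy node this log can be long; piping it through grep (plain kubectl and grep usage, not a CTE-specific command) narrows the output to the tuning commands:

kubectl logs --namespace=kube-system cte-csi-node-xxxxx cte-csi | grep "voradmin secfs config"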
Using "On the Run" tuning
Use the existing ConfigMap, or a new ConfigMap, with the data field `run.<config-parameter-name>: "<config-value>"`.
- After adding/updating the parameter, apply the ConfigMap, type:

kubectl apply -f <gp-tuning-config-map.yaml>

**Where**

`<gp-tuning-config-map.yaml>`: ConfigMap YAML file that contains the tuning parameter
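As an alternative sketch, assuming the ConfigMap name cte-claim-gp-tuning-cm from the earlier example, the run parameter can be added or updated in place without editing the file; this uses the standard kubectl patch command and is not part of the original procedure:

kubectl patch configmap cte-claim-gp-tuning-cm --type merge -p '{"data":{"run.debug_all":"7"}}'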
- Add the annotation to the CTE PVC using the kubectl annotate command, type:

kubectl annotate pvc <cte-pvc-name> csi.cte.cpl.thalesgroup.com/guardpoint_tuning_configmap='<tuning-config-map-name>'

**Where**

`<cte-pvc-name>`: CTE-PVC name, for example, cte-claim

`<tuning-config-map-name>`: ConfigMap name which contains the tuning parameters. For example, cte-claim-gp-tuning-cm
After you apply the annotation, the configuration takes effect within a minute.
- Validate the applied changes with the corresponding cte-csi-node-xxxxx pod log, type:

kubectl logs --namespace=kube-system cte-csi-node-xxxxx cte-csi
Example Response
[I0626 10:33:26.887047 3725529 guardpoint.go:1745] Runtime GuardPoint tuning is enabled. ConfigMap[cte-claim-gp-tuning] in CTE-PVC[cte-claim1] namespace[default] annotation for volumeID[2e1df83a-33a5-11ef-823e-3e833176f6ce] guardPolicy[pvc-clone-policy1] path[/tmp/2e1df83a-33a5-11ef-823e-3e833176f6ce] for regID[e51ad1bb-3809-46b2-8497-40aec3d9eee1]
[I0626 10:33:26.889947 3725529 chroot.go:108] Executing /usr/bin/secfsd -log_level 7 /tmp/2e1df83a-33a5-11ef-823e-3e833176f6ce on Chroot UUID e51ad1bb-3809-46b2-8497-40aec3d9eee1
[I0626 10:33:26.929057 3725529 chroot.go:130] Log level [7][all][/tmp/2e1df83a-33a5-11ef-823e-3e833176f6ce]