Guide to Configuring the Scheduler
Configuring the scheduler is essential for optimal performance and monitoring of your Luna Network HSM. This process allows you to manage various scheduler properties, such as device monitoring intervals and log monitoring frequencies, ensuring that critical data is tracked efficiently. The configuration involves creating a dedicated file named scheduler-config.conf and setting up the necessary directories and permissions, depending on your deployment environment. Whether you are using Podman, Kubernetes, or Helm, following the correct steps will enable you to tailor the scheduler to meet your specific operational requirements.
Create the configuration file
To customize and control the scheduling of various system monitoring tasks, you’ll need to configure a file with specific parameters and values. Follow the steps below to create and update the scheduler-config.conf file, which will define the intervals and purge periods for your monitoring processes.
- Create a file named `scheduler-config.conf`.
- Use the following syntax to add the parameters and values. Ensure each parameter is followed by an equals sign (`=`) and the appropriate value. Modify the values as needed for your system:

  ```
  # Device PUM Data Monitoring Interval (Minimum value: 300,000 ms)
  com.safenetinc.lunadirector.monitor.device.pum.interval=300000
  # Device Monitoring Interval (Minimum value: 60,000 ms)
  com.safenetinc.lunadirector.monitor.device.interval=60000
  # Luna Logs Monitoring Interval (Minimum value: 900,000 ms)
  com.safenetinc.lunadirector.monitor.lunalog.interval=900000
  # Service Monitoring Interval (Minimum value: 900,000 ms)
  com.safenetinc.lunadirector.monitor.service.pum.interval=900000
  # PUM Data Purge Period (Default value: 90 days)
  com.safenetinc.lunadirector.monitor.device.pum.purgePeriodDays=90
  # Device Monitoring Data Purge Period (Default value: 100 days)
  com.safenetinc.lunadirector.monitor.device.purgePeriodDays=100
  ```

- Once the parameters are configured, save the file. Ensure all values meet the minimum requirements and are adjusted according to your system's needs. A one-step alternative for creating the file is sketched after these steps.
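If you prefer to create the file in a single step, the following shell sketch writes the same parameters with the minimum and default values shown above; where you place the file at this point is up to you, since the platform-specific sections below copy it into `/home/ccc/conf`:

```
cat > scheduler-config.conf <<'EOF'
# Device PUM Data Monitoring Interval (Minimum value: 300,000 ms)
com.safenetinc.lunadirector.monitor.device.pum.interval=300000
# Device Monitoring Interval (Minimum value: 60,000 ms)
com.safenetinc.lunadirector.monitor.device.interval=60000
# Luna Logs Monitoring Interval (Minimum value: 900,000 ms)
com.safenetinc.lunadirector.monitor.lunalog.interval=900000
# Service Monitoring Interval (Minimum value: 900,000 ms)
com.safenetinc.lunadirector.monitor.service.pum.interval=900000
# PUM Data Purge Period (Default value: 90 days)
com.safenetinc.lunadirector.monitor.device.pum.purgePeriodDays=90
# Device Monitoring Data Purge Period (Default value: 100 days)
com.safenetinc.lunadirector.monitor.device.purgePeriodDays=100
EOF
```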
Configure the scheduler
Now that you've created and configured the scheduler file, the next step is to apply it within your containerized environment. Follow the instructions below for your deployment type: Podman, Kubernetes, or Helm.
Configuring the scheduler for Podman users
To configure the scheduler for Podman users:
- Create a directory at `/home/ccc/conf` on the host system.
- Set the directory's permissions to `777`:

  ```
  chmod 777 /home/ccc/conf
  ```

- Place the `scheduler-config.conf` file inside `/home/ccc/conf`.
- Update your Docker Compose file to map `/home/ccc/conf` to `/usr/safenet/ccc/conf` in the container using the volume configuration (see the sketch after these steps).
- Restart the Podman container to apply the new configuration.
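For reference, a minimal sketch of the Compose volume mapping, assuming the service in your Compose file is named `ccc` (the service name is a placeholder, not taken from this guide; keep your existing image, port, and other settings):

```yaml
services:
  ccc:                                         # placeholder service name
    # ...existing image, ports, and other settings...
    volumes:
      - /home/ccc/conf:/usr/safenet/ccc/conf   # host config directory mapped into the container
```

The container can then be restarted with `podman restart <container-name>`, or, if you manage the stack with `podman-compose`, by recreating it with `podman-compose down && podman-compose up -d` so the new mount takes effect.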
Configuring the scheduler for Kubernetes users
To configure the scheduler for Kubernetes users:
- On each worker node, create a directory at `/home/ccc/conf`.
- Set the directory permissions to `777`:

  ```
  chmod 777 /home/ccc/conf
  ```

- Place the `scheduler-config.conf` file inside `/home/ccc/conf`.
- On the master node, create a new persistent volume file:

  ```
  vi scheduler-conf.yaml
  ```

- Add the following content to `scheduler-conf.yaml`:

  ```yaml
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: ccc-scheduler
    labels:
      volume: scheduler
  spec:
    capacity:
      storage: 10Gi
    accessModes:
      - ReadWriteMany
    hostPath:
      path: "/home/ccc/conf"
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: ccc-scheduler-claim
  spec:
    selector:
      matchLabels:
        volume: scheduler
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 3Gi
  ```
- In your `deployment.yaml` file, configure the volume mount for the scheduler (see the placement sketch after these steps):

  ```yaml
  volumeMounts:
    - name: scheduler
      mountPath: /usr/safenet/ccc/conf
  volumes:
    - name: scheduler
      persistentVolumeClaim:
        claimName: ccc-scheduler-claim
  ```
- Apply the persistent volume and deployment configuration:

  ```
  kubectl apply -f scheduler-conf.yaml
  ```
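The `volumeMounts` list belongs under the container definition and `volumes` under the pod spec. A minimal sketch of where the fragments above sit inside `deployment.yaml`, assuming a Deployment and container both named `ccc` with a placeholder image (these names and the image are assumptions, not taken from this guide):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ccc                                    # placeholder Deployment name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ccc
  template:
    metadata:
      labels:
        app: ccc
    spec:
      containers:
        - name: ccc                            # placeholder container name
          image: ccc:latest                    # placeholder image reference
          volumeMounts:
            - name: scheduler                  # mounts the scheduler config into the container
              mountPath: /usr/safenet/ccc/conf
      volumes:
        - name: scheduler
          persistentVolumeClaim:
            claimName: ccc-scheduler-claim     # claim defined in scheduler-conf.yaml
```

After applying both files, `kubectl get pvc ccc-scheduler-claim` should report the claim as `Bound`, and `scheduler-config.conf` should be visible inside the container under `/usr/safenet/ccc/conf`.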
Configuring the scheduler for Helm users
To configure the scheduler for Helm users:
- On each worker node, create a directory at `/home/ccc/conf`.
- Set the directory permissions to `777`:

  ```
  chmod 777 /home/ccc/conf
  ```

- Place the `scheduler-config.conf` file inside `/home/ccc/conf`.
- Navigate to the `helm/templates` folder on the master node and create a new persistent volume file:

  ```
  vi scheduler-conf.yaml
  ```
- Add the following content to `scheduler-conf.yaml`:

  ```yaml
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: ccc-scheduler
    labels:
      volume: scheduler
  spec:
    capacity:
      storage: 10Gi
    accessModes:
      - ReadWriteMany
    hostPath:
      path: "/home/ccc/conf"
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: ccc-scheduler-claim
  spec:
    selector:
      matchLabels:
        volume: scheduler
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 3Gi
  ```
- In your `deployment.yaml` file under the `helm/templates` folder, configure the volume mount for the scheduler:

  ```yaml
  volumeMounts:
    - name: scheduler
      mountPath: /usr/safenet/ccc/conf
  volumes:
    - name: scheduler
      persistentVolumeClaim:
        claimName: ccc-scheduler-claim
  ```
- Apply the configuration and update the Helm release (see the hedged example after these steps):

  ```
  helm upgrade ccc
  ```
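`helm upgrade` normally takes both the release name and the chart location. A hedged example, assuming the release is named `ccc` and was installed from the local `helm` chart directory (the chart path is an assumption; use whichever path the release was originally installed from):

```
# Re-render the chart templates (including scheduler-conf.yaml) and apply them to the cluster
helm upgrade ccc ./helm

# Confirm the release and the new claim after the upgrade
helm status ccc
kubectl get pvc ccc-scheduler-claim
```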