Creating an Oracle ASM Disk Group for Guarding
Creating a new ASM Disk Group
This document is for Oracle 19c on AIX 7.
- List the available cluster shared disks, type:
# xiv_devlist
Response
IBM storage devices
----------------------------------------------------------------------------------------------------------------------------
Device       Size (GB)  Paths  Vol Name                                Vol ID  Storage ID  Storage Type  Hyper-Scale Mobility
----------------------------------------------------------------------------------------------------------------------------
/dev/hdisk2  51.6       2/2    sjaix81lpar069_sjaix81lpar079_vol_001   1053    7825664     XIV           Idle
----------------------------------------------------------------------------------------------------------------------------
/dev/hdisk3  154.9      2/2    sjaix81lpar069_sjaix81lpar079_vol_002   1054    7825664     XIV           Idle
----------------------------------------------------------------------------------------------------------------------------
/dev/hdisk4  154.9      2/2    sjaix81lpar069_sjaix81lpar079_vol_003   1055    7825664     XIV           Idle
----------------------------------------------------------------------------------------------------------------------------
/dev/hdisk5  154.9      2/2    sjaix81lpar069_sjaix81lpar079_vol_004   1056    7825664     XIV           Idle
----------------------------------------------------------------------------------------------------------------------------
/dev/hdisk6  51.6       2/2    sjaix81lpar069_sjaix81lpar079_vol_005   1075    7825664     XIV           Idle
----------------------------------------------------------------------------------------------------------------------------
- Ensure that the disks are available, type:
# /usr/sbin/lsdev -Cc disk
Response
hdisk0 Available          Virtual SCSI Disk Drive
hdisk1 Available          Virtual SCSI Disk Drive
hdisk2 Available C5-T1-01 MPIO 2810 XIV Disk
hdisk3 Available C5-T1-01 MPIO 2810 XIV Disk
hdisk4 Available C5-T1-01 MPIO 2810 XIV Disk
hdisk5 Available C5-T1-01 MPIO 2810 XIV Disk
hdisk6 Available C5-T1-01 MPIO 2810 XIV Disk
- To identify the device names for the physical disks that you want to use, type the following on any node:
# /usr/sbin/lspv | grep -i none
Response
hdisk1          00fa0087e313f5c9    None
hdisk2          none                None
hdisk3          none                None
hdisk4          none                None
hdisk6          none                None
- Select available candidate disks for a new ASM disk group. On the Oracle system, type:
SYS@+ASM2> COLUMN path format a20
SYS@+ASM2> SELECT name, header_status, path FROM V$ASM_DISK;
Response
NAME                           HEADER_STATUS PATH
------------------------------ ------------- --------------------
                               FORMER        /dev/rhdisk1
                               FORMER        /dev/rhdisk2
                               CANDIDATE     /dev/rhdisk3
TDE1_0000                      MEMBER        /dev/rhdisk4
GRID_0000                      MEMBER        /dev/rhdisk6
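If the cluster presents many LUNs, you can narrow the query to disks that ASM has not yet claimed. A minimal SQL*Plus sketch, assuming the same V$ASM_DISK view as above; the WHERE clause uses standard header states rather than anything specific to this cluster:
SYS@+ASM2> COLUMN path format a20
SYS@+ASM2> SELECT name, header_status, path FROM V$ASM_DISK
           WHERE header_status IN ('CANDIDATE', 'FORMER');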
- Prepare the targeted disk for CTE and ASM disk group creation:
# chown grid:dba /dev/rhdisk3
# chmod 660 /dev/rhdisk3
# dd if=/dev/rhdisk3 of=zzz bs=4k count=1
Response
1+0 records in.
1+0 records out.
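If several candidate disks are being prepared, the same ownership, permission, and read checks can be scripted rather than typed per disk. A minimal shell sketch, assuming a hypothetical list of candidate raw devices (substitute the names reported as CANDIDATE in V$ASM_DISK) and a throwaway output file under /tmp:
# for d in rhdisk3 rhdisk5; do                   # hypothetical candidate devices
>   chown grid:dba /dev/$d                       # let the grid owner open the device
>   chmod 660 /dev/$d                            # owner and group read/write
>   dd if=/dev/$d of=/tmp/$d.hdr bs=4k count=1   # confirm the first 4k block is readable
> done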
- In CipherTrust Manager, create a key with the following characteristics:
  - Encryption mode: CBC
  - Algorithm: AES
  - Size: 128 or 256
- Create a CipherTrust Transparent Encryption policy for Oracle on AIX.
a. Create a CBC key with CBC-AES128 or CBC-AES256.
b. Create a Security Rule:
- Action: all_ops
- Effect: Audit, Permit, Apply Key
c. Create a Key Selection Rule:
- Key: cte_cbc_aes256_key
d. Guard your targeted RAC raw devices so that you can use the secvm disk to create a guarded Oracle RAC ASM or ASMLib disk group.
/dev/rhdisk3
- Type = Raw or Block Device (Auto or Manual Guard)
Once you guard your target, CipherTrust Transparent Encryption creates the following:
/dev/secvm/dev/rhdisk3
- Install the same version of CipherTrust Transparent Encryption on all nodes in the cluster. To check the version, type:
# vmd -v
Response
Version 7, Service Pack 3
7.3.0.35
2022-09-23 02:08:46 (PDT)
Copyright (c) 2009-2022, Thales Inc. All rights reserved.
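Rather than logging in to each node separately, you can compare agent versions from a single host. A minimal shell sketch, assuming hypothetical node names racnode1 and racnode2, root ssh equivalence between the nodes, and vmd on the remote PATH as shown above:
# for node in racnode1 racnode2; do   # hypothetical node names
>   echo "== $node =="
>   ssh $node vmd -v                  # print the CTE agent version on that node
> done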
- Guard the targeted disk on all cluster nodes.
a. Check the guard status of the disk on all cluster nodes, type:
# secfsd -status guard
Response
GuardPoint    Policy                  Type       ConfigState  Status   Reason
----------    ------                  ----       -----------  ------   ------
/dev/rhdisk3  encrypt_cbc_aes256_all  rawdevice  guarded      guarded  N/A
b. List the devices, type:
# ls -l /dev/secvm/dev/rhdisk3
crw-rw---- 1 grid dba 43, 1 Sep 28 15:21 /dev/secvm/dev/rhdisk3
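ASM must be able to open the guarded device on every node, so it is worth confirming that the secvm device shows the same grid:dba ownership and 660 mode everywhere. A minimal shell sketch, reusing the hypothetical node names from the version check above:
# for node in racnode1 racnode2; do              # hypothetical node names
>   ssh $node 'ls -l /dev/secvm/dev/rhdisk3'     # expect crw-rw---- grid dba on each node
> done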
- Add the following client settings for both RAC nodes. The binary paths come from the $GRID_HOME and $ORACLE_HOME variables:
$ echo $GRID_HOME
Response
/u01/app/19.0.0/grid
$ echo $ORACLE_HOME
Response
/u01/app/oracle/product/19.0.0/db_1
For each node in the cluster, in the client settings, type:
|authenticator_euid|/u01/app/19.0.0/grid/bin/grid
|authenticator_euid|/u01/app/19.0.0/grid/bin/orarootagent.bin
|authenticator_euid|/u01/app/oracle/product/19.0.0/db_1/oracle
|authenticator_euid|/u01/app/oracle/product/19.0.0/db_1/bin/oracle
This step is optional because it does not affect Oracle behavior. However, without these settings, CipherTrust Transparent Encryption can generate authentication error messages in the CTE log in /var/log/vormetric. These errors do not interfere with Oracle functions.
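Because the client settings reference absolute binary paths, a quick existence check on each node helps catch typos before they surface as authentication errors. A minimal shell sketch, using the same paths listed in the settings above:
$ for f in /u01/app/19.0.0/grid/bin/grid \
>          /u01/app/19.0.0/grid/bin/orarootagent.bin \
>          /u01/app/oracle/product/19.0.0/db_1/oracle \
>          /u01/app/oracle/product/19.0.0/db_1/bin/oracle; do
>   ls -l "$f" || echo "MISSING: $f"   # flag any path that does not exist
> done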
- Launch the Grid Infrastructure ASM Configuration Assistant GUI (./asmca) to create the new CipherTrust Transparent Encryption guarded ASM disk group:
  - Update the disk discovery path to the following so that both the baseline and guarded disks are found:
    /dev/rhdisk*,/dev/secvm/dev/rhdisk*
  - Select the targeted rhdisk3 disk with the guarded secvm path (/dev/secvm/dev/rhdisk3).
- The end result should show your new CTE guarded ASM disk group called CTE1.
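If you prefer the command line to the asmca GUI, the discovery string and disk group can also be set from SQL*Plus on an ASM instance. A minimal sketch, assuming external redundancy and the CTE1 name used here; adjust the redundancy and attributes to your own standard, and mount the disk group on the remaining RAC node afterwards:
SYS@+ASM2> ALTER SYSTEM SET asm_diskstring = '/dev/rhdisk*,/dev/secvm/dev/rhdisk*' SCOPE=BOTH;
SYS@+ASM2> CREATE DISKGROUP CTE1 EXTERNAL REDUNDANCY DISK '/dev/secvm/dev/rhdisk3';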
- When creating your RAC database, choose DB files to reside in the CTE guarded ASM disk group that you just created.
You can now use the secvm disk that you created to create a guarded Oracle RAC ASM or ASMLib disk group.
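For example, once the RAC database is running, new datafiles can be directed to the guarded disk group. A minimal SQL sketch, assuming a hypothetical tablespace name:
SQL> CREATE TABLESPACE app_data DATAFILE '+CTE1' SIZE 1G AUTOEXTEND ON;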