TR-4965: Oracle Database Deployment and Protection in AWS FSx/EC2 with iSCSI/ASM
Allen Cao, Niyaz Mohamed, NetApp
This solution provides an overview and details for Oracle database deployment and protection on AWS FSx ONTAP storage and an EC2 compute instance with the iSCSI protocol, with the Oracle database configured in standalone Restart mode using ASM as the volume manager.
Purpose
ASM (Automatic Storage Management) is a popular Oracle storage volume manager employed in many Oracle installations. It is also Oracle's recommended storage management solution. It provides an alternative to conventional volume managers and file systems. Since Oracle version 11g, ASM has been packaged with the grid infrastructure rather than the database. As a result, in order to utilize Oracle ASM for storage management without RAC, you must install Oracle grid infrastructure on a standalone server, also known as Oracle Restart. Doing so certainly adds more complexity to an Oracle database deployment. However, as the name implies, when Oracle is deployed in Restart mode, failed Oracle services are restarted automatically by the grid infrastructure, including after a host reboot, without user intervention, which provides a certain degree of high availability (HA) functionality.
In this documentation, we demonstrate how to deploy an Oracle database with the iSCSI protocol and Oracle ASM in an Amazon FSx ONTAP storage environment with EC2 compute instances. We also demonstrate how to use the NetApp SnapCenter service through the NetApp BlueXP console to back up, restore, and clone your Oracle database for dev/test or other use cases for storage-efficient database operation in the AWS public cloud.
This solution addresses the following use cases:
-
Oracle database deployment in Amazon FSx ONTAP storage and EC2 compute instances with iSCSI/ASM
-
Testing and validating an Oracle workload in the public AWS cloud with iSCSI/ASM
-
Testing and validating Oracle database Restart functionalities deployed in AWS
Audience
This solution is intended for the following people:
-
A DBA who would like to deploy Oracle in an AWS public cloud with iSCSI/ASM.
-
A database solution architect who would like to test Oracle workloads in the AWS public cloud.
-
A storage administrator who would like to deploy and manage an Oracle database on AWS FSx storage.
-
The application owner who would like to stand up an Oracle database in AWS FSx/EC2.
Solution test and validation environment
The testing and validation of this solution was performed in an AWS FSx and EC2 environment that might not match the final deployment environment. For more information, see the section Key factors for deployment consideration.
Architecture
Hardware and software components
Hardware | |
FSx ONTAP storage | Current version offered by AWS | One FSx HA cluster in the same VPC and availability zone
EC2 instance for compute | t2.xlarge/4vCPU/16G | Two EC2 T2 xlarge EC2 instances, one as primary DB server and the other as a clone DB server
Software | |
RedHat Linux | RHEL-8.6.0_HVM-20220503-x86_64-2-Hourly2-GP2 | Deployed RedHat subscription for testing
Oracle Grid Infrastructure | Version 19.18 | Applied RU patch p34762026_190000_Linux-x86-64.zip
Oracle Database | Version 19.18 | Applied RU patch p34765931_190000_Linux-x86-64.zip
Oracle OPatch | Version 12.2.0.1.36 | Latest patch p6880880_190000_Linux-x86-64.zip
SnapCenter Service | Version v2.3.1.2324 |
Key factors for deployment consideration
-
EC2 compute instances. In these tests and validations, we used an AWS EC2 t2.xlarge instance type for the Oracle database compute instance. NetApp recommends using an M5 type EC2 instance as the compute instance for Oracle in production deployment because it is optimized for database workloads. You need to size the EC2 instance appropriately for the number of vCPUs and the amount of RAM based on actual workload requirements.
-
FSx storage HA clusters single- or multi-zone deployment. In these tests and validations, we deployed an FSx HA cluster in a single AWS availability zone. For production deployment, NetApp recommends deploying an FSx HA pair in two different availability zones. An FSx HA cluster is always provisioned as an HA pair that is sync mirrored across a pair of active-passive file systems to provide storage-level redundancy. Multi-zone deployment further enhances high availability in the event of a failure in a single AWS zone.
-
FSx storage cluster sizing. An Amazon FSx ONTAP storage file system provides up to 160,000 raw SSD IOPS, up to 4GBps throughput, and a maximum of 192TiB capacity. However, you can size the cluster in terms of provisioned IOPS, throughput, and the storage limit (minimum 1,024 GiB) based on your actual requirements at the time of deployment. The capacity can be adjusted dynamically on the fly without affecting application availability.
-
Oracle data and logs layout. In our tests and validations, we deployed two ASM disk groups for data and logs respectively. Within the +DATA ASM disk group, we provisioned four LUNs in a data volume. Within the +LOGS ASM disk group, we provisioned two LUNs in a logs volume. In general, multiple LUNs laid out within an Amazon FSx ONTAP volume provide better performance.
-
iSCSI configuration. The EC2 instance database server connects to FSx storage with the iSCSI protocol. EC2 instances generally deploy with a single network interface (ENI), so the single NIC carries both iSCSI and application traffic. It is important to gauge the Oracle database peak I/O throughput requirement by carefully analyzing the Oracle AWR report in order to choose the right EC2 compute instance that meets both application and iSCSI traffic-throughput requirements. NetApp also recommends allocating four iSCSI connections to both FSx iSCSI endpoints with multipath properly configured (see the iscsiadm sketch after this list).
-
Oracle ASM redundancy level to use for each Oracle ASM disk group that you create. Because FSx already mirrors the storage at the FSx cluster level, you should use external redundancy, which means that Oracle ASM does not mirror the contents of the disk group.
-
Database backup. NetApp provides a SaaS version of the SnapCenter software service for database backup, restore, and clone in the cloud that is available through the NetApp BlueXP console UI. NetApp recommends implementing such a service to achieve fast (under one minute) Snapshot backups, quick (a few minutes) database restores, and database cloning.
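As a reference for the iSCSI configuration consideration above, the following is a minimal sketch of how additional iSCSI sessions per FSx portal could be established with iscsiadm after target discovery. The target IQN and portal address are placeholders taken from the examples later in this document; adjust the session count to your measured throughput requirement and re-check multipath afterward.
# Sketch: raise the number of iSCSI sessions for one discovered target/portal to 4,
# then log in so the additional sessions are created (placeholder IQN and portal values).
sudo iscsiadm --mode node \
  --targetname iqn.1992-08.com.netapp:sn.1f795e65c74911edb785affbf0a2b26e:vs.3 \
  --portal 172.30.15.51:3260 \
  --op update -n node.session.nr_sessions -v 4
sudo iscsiadm --mode node \
  --targetname iqn.1992-08.com.netapp:sn.1f795e65c74911edb785affbf0a2b26e:vs.3 \
  --portal 172.30.15.51:3260 --login
# Confirm the resulting sessions and multipath paths.
sudo iscsiadm --mode session
sudo multipath -ll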
Solution deployment
The following section provides step-by-step deployment procedures.
Prerequisites for deployment
Details
Deployment requires the following prerequisites.
-
An AWS account has been set up, and the necessary VPC and network segments have been created within your AWS account.
-
From the AWS EC2 console, you must deploy two EC2 Linux instances, one as the primary Oracle DB server and one as an optional clone target DB server. See the architecture diagram in the previous section for more details about the environment setup. Also review the User Guide for Linux instances for more information.
-
From the AWS EC2 console, deploy Amazon FSx ONTAP storage HA clusters to host the Oracle database volumes. If you are not familiar with the deployment of FSx storage, see the documentation Creating FSx ONTAP file systems for step-by-step instructions.
-
Steps 2 and 3 can be performed using the following Terraform automation toolkit, which creates an EC2 instance named ora_01 and an FSx file system named fsx_01. Review the instructions carefully and change the variables to suit your environment before execution (a typical Terraform workflow is sketched after this list).
git clone https://github.com/NetApp-Automation/na_aws_fsx_ec2_deploy.git
Ensure that you have allocated at least 50G on the EC2 instance root volume in order to have sufficient space to stage the Oracle installation files.
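If you use the Terraform toolkit referenced in step 4, a typical workflow looks like the following sketch; review the repository's README and variable files before applying, as the directory layout and variable names may differ from this outline.
# Typical Terraform workflow for the cloned toolkit (sketch only).
cd na_aws_fsx_ec2_deploy
terraform init
terraform plan -out=main.plan
terraform apply main.plan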
EC2 instance kernel configuration
Details
With the prerequisites provisioned, log into the EC2 instance as ec2-user and switch to the root user with sudo to configure the Linux kernel for Oracle installation.
-
Create a /tmp/archive staging directory and set 777 permissions on it.
mkdir /tmp/archive
chmod 777 /tmp/archive
-
Download and stage the Oracle binary installation files and other required rpm files to the /tmp/archive directory. See the following list of installation files to be staged in /tmp/archive on the EC2 instance.
[ec2-user@ip-172-30-15-58 ~]$ ls -l /tmp/archive
total 10537316
-rw-rw-r--. 1 ec2-user ec2-user      19112 Mar 21 15:57 compat-libcap1-1.10-7.el7.x86_64.rpm
-rw-rw-r--  1 ec2-user ec2-user 3059705302 Mar 21 22:01 LINUX.X64_193000_db_home.zip
-rw-rw-r--  1 ec2-user ec2-user 2889184573 Mar 21 21:09 LINUX.X64_193000_grid_home.zip
-rw-rw-r--. 1 ec2-user ec2-user     589145 Mar 21 15:56 netapp_linux_unified_host_utilities-7-1.x86_64.rpm
-rw-rw-r--. 1 ec2-user ec2-user      31828 Mar 21 15:55 oracle-database-preinstall-19c-1.0-2.el8.x86_64.rpm
-rw-rw-r--  1 ec2-user ec2-user 2872741741 Mar 21 22:31 p34762026_190000_Linux-x86-64.zip
-rw-rw-r--  1 ec2-user ec2-user 1843577895 Mar 21 22:32 p34765931_190000_Linux-x86-64.zip
-rw-rw-r--  1 ec2-user ec2-user  124347218 Mar 21 22:33 p6880880_190000_Linux-x86-64.zip
-rw-r--r--  1 ec2-user ec2-user     257136 Mar 22 16:25 policycoreutils-python-utils-2.9-9.el8.noarch.rpm
-
Install Oracle 19c preinstall RPM, which satisfies most kernel configuration requirements.
yum install /tmp/archive/oracle-database-preinstall-19c-1.0-2.el8.x86_64.rpm
-
Download and install the missing compat-libcap1 package on Linux 8.
yum install /tmp/archive/compat-libcap1-1.10-7.el7.x86_64.rpm
-
From NetApp, download and install NetApp host utilities.
yum install /tmp/archive/netapp_linux_unified_host_utilities-7-1.x86_64.rpm
-
Install policycoreutils-python-utils, which is not available on the EC2 instance.
yum install /tmp/archive/policycoreutils-python-utils-2.9-9.el8.noarch.rpm
-
Install OpenJDK version 1.8.
yum install java-1.8.0-openjdk.x86_64
-
Install iSCSI initiator utils.
yum install iscsi-initiator-utils
-
Install sg3_utils.
yum install sg3_utils
-
Install device-mapper-multipath.
yum install device-mapper-multipath
-
Disable transparent hugepages in the current system.
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
Add the following lines in /etc/rc.local to disable transparent_hugepage after reboot:
# Disable transparent hugepages
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
  echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
  echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
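To confirm that transparent hugepages are now disabled, check the current setting; the value in brackets is the active mode and should read never.
cat /sys/kernel/mm/transparent_hugepage/enabled
# Expected: always madvise [never]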
-
Disable SELinux by changing SELINUX=enforcing to SELINUX=disabled. You must reboot the host to make the change effective.
vi /etc/sysconfig/selinux
-
Add the following lines to /etc/security/limits.conf to set the file descriptor limit and stack size (enter the lines without the quotes " ").
vi /etc/security/limits.conf
"* hard nofile 65536"
"* soft stack 10240"
-
Add swap space to the EC2 instance by following these instructions: How do I allocate memory to work as swap space in an Amazon EC2 instance by using a swap file? The exact amount of space to add depends on the size of RAM, up to 16G.
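For reference, the following is a minimal sketch of creating a 4G swap file along the lines of the AWS instructions above; the swap file path and size are illustrative only and should be adjusted to the instance RAM size.
# Create and activate a 4G swap file (illustrative size and path).
sudo dd if=/dev/zero of=/swapfile bs=128M count=32
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Make the swap file persistent across reboots and verify.
echo '/swapfile swap swap defaults 0 0' | sudo tee -a /etc/fstab
swapon -s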
-
Change node.session.timeo.replacement_timeout in the /etc/iscsi/iscsid.conf configuration file from 120 to 5 seconds.
vi /etc/iscsi/iscsid.conf
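After the edit, the relevant line in /etc/iscsi/iscsid.conf should read:
node.session.timeo.replacement_timeout = 5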
-
Enable and start the iSCSI service on the EC2 instance.
systemctl enable iscsid
systemctl start iscsid
-
Retrieve the iSCSI initiator address to be used for database LUN mapping.
cat /etc/iscsi/initiatorname.iscsi
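The command returns the initiator IQN; the value shown below is the one used for the igroup example later in this document.
InitiatorName=iqn.1994-05.com.redhat:f65fed7641c2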
-
Add the asm group, to be used as the ASM sysasm OS group.
groupadd asm
-
Modify the oracle user to add asm as a secondary group (the oracle user should have been created by the Oracle preinstall RPM installation).
usermod -a -G asm oracle
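Optionally, verify the oracle user's group membership; the supplementary groups should now include asm.
id oracle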
-
Stop and disable the Linux firewall if it is active.
systemctl stop firewalld
systemctl disable firewalld
-
Reboot the EC2 instance.
Provision and map database volumes and LUNs to the EC2 instance host
Details
To host the Oracle database binary, data, and log files, provision three volumes from the command line by logging into the FSx cluster via SSH as the fsxadmin user, using the FSx cluster management IP.
-
Log into the FSx cluster through SSH as the fsxadmin user.
ssh fsxadmin@172.30.15.53
-
Execute the following command to create a volume for the Oracle binary.
vol create -volume ora_01_biny -aggregate aggr1 -size 50G -state online -type RW -snapshot-policy none -tiering-policy snapshot-only
-
Execute the following command to create a volume for Oracle data.
vol create -volume ora_01_data -aggregate aggr1 -size 100G -state online -type RW -snapshot-policy none -tiering-policy snapshot-only
-
Execute the following command to create a volume for Oracle logs.
vol create -volume ora_01_logs -aggregate aggr1 -size 100G -state online -type RW -snapshot-policy none -tiering-policy snapshot-only
-
Create a binary LUN within the database binary volume.
lun create -path /vol/ora_01_biny/ora_01_biny_01 -size 40G -ostype linux
-
Create data LUNs within the database data volume.
lun create -path /vol/ora_01_data/ora_01_data_01 -size 20G -ostype linux
lun create -path /vol/ora_01_data/ora_01_data_02 -size 20G -ostype linux
lun create -path /vol/ora_01_data/ora_01_data_03 -size 20G -ostype linux
lun create -path /vol/ora_01_data/ora_01_data_04 -size 20G -ostype linux
-
Create log LUNs within the database logs volume.
lun create -path /vol/ora_01_logs/ora_01_logs_01 -size 40G -ostype linux
lun create -path /vol/ora_01_logs/ora_01_logs_02 -size 40G -ostype linux
-
Create an igroup for the EC2 instance with the initiator IQN retrieved in the EC2 kernel configuration section above.
igroup create -igroup ora_01 -protocol iscsi -ostype linux -initiator iqn.1994-05.com.redhat:f65fed7641c2
-
Map the LUNs to the igroup created above. Increment the LUN ID sequentially for each additional LUN within a volume.
lun map -path /vol/ora_01_biny/ora_01_biny_01 -igroup ora_01 -vserver svm_ora -lun-id 0
lun map -path /vol/ora_01_data/ora_01_data_01 -igroup ora_01 -vserver svm_ora -lun-id 1
lun map -path /vol/ora_01_data/ora_01_data_02 -igroup ora_01 -vserver svm_ora -lun-id 2
lun map -path /vol/ora_01_data/ora_01_data_03 -igroup ora_01 -vserver svm_ora -lun-id 3
lun map -path /vol/ora_01_data/ora_01_data_04 -igroup ora_01 -vserver svm_ora -lun-id 4
lun map -path /vol/ora_01_logs/ora_01_logs_01 -igroup ora_01 -vserver svm_ora -lun-id 5
lun map -path /vol/ora_01_logs/ora_01_logs_02 -igroup ora_01 -vserver svm_ora -lun-id 6
-
Validate the LUN mapping.
mapping show
This is expected to return:
FsxId02ad7bf3476b741df::> mapping show
  (lun mapping show)
Vserver    Path                              Igroup   LUN ID  Protocol
---------- --------------------------------- -------  ------  --------
svm_ora    /vol/ora_01_biny/ora_01_biny_01   ora_01        0  iscsi
svm_ora    /vol/ora_01_data/ora_01_data_01   ora_01        1  iscsi
svm_ora    /vol/ora_01_data/ora_01_data_02   ora_01        2  iscsi
svm_ora    /vol/ora_01_data/ora_01_data_03   ora_01        3  iscsi
svm_ora    /vol/ora_01_data/ora_01_data_04   ora_01        4  iscsi
svm_ora    /vol/ora_01_logs/ora_01_logs_01   ora_01        5  iscsi
svm_ora    /vol/ora_01_logs/ora_01_logs_02   ora_01        6  iscsi
Database storage configuration
Details
Now, import and set up the FSx storage for the Oracle grid infrastructure and database installation on the EC2 instance host.
-
Log into the EC2 instance via SSH as the ec2-user with your SSH key and EC2 instance IP address.
ssh -i ora_01.pem ec2-user@172.30.15.58
-
Discover the FSx iSCSI endpoints using either of the SVM iSCSI IP addresses. Substitute your environment-specific portal address.
sudo iscsiadm --mode discovery --op update --type sendtargets --portal 172.30.15.51
-
Establish iSCSI sessions by logging into each target.
sudo iscsiadm --mode node -l all
The expected output from the command is:
[ec2-user@ip-172-30-15-58 ~]$ sudo iscsiadm --mode node -l all
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.1f795e65c74911edb785affbf0a2b26e:vs.3, portal: 172.30.15.51,3260]
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.1f795e65c74911edb785affbf0a2b26e:vs.3, portal: 172.30.15.13,3260]
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.1f795e65c74911edb785affbf0a2b26e:vs.3, portal: 172.30.15.51,3260] successful.
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.1f795e65c74911edb785affbf0a2b26e:vs.3, portal: 172.30.15.13,3260] successful.
-
View and validate a list of active iSCSI sessions.
sudo iscsiadm --mode session
The active iSCSI sessions are returned as follows:
[ec2-user@ip-172-30-15-58 ~]$ sudo iscsiadm --mode session
tcp: [1] 172.30.15.51:3260,1028 iqn.1992-08.com.netapp:sn.1f795e65c74911edb785affbf0a2b26e:vs.3 (non-flash)
tcp: [2] 172.30.15.13:3260,1029 iqn.1992-08.com.netapp:sn.1f795e65c74911edb785affbf0a2b26e:vs.3 (non-flash)
-
Verify that the LUNs were imported into the host.
sudo sanlun lun show
This will return a list of Oracle LUNs from FSx.
[ec2-user@ip-172-30-15-58 ~]$ sudo sanlun lun show
controller(7mode/E-Series)/                          device     host      lun
vserver(cDOT/FlashRay)   lun-pathname                filename   adapter   protocol   size   product
svm_ora   /vol/ora_01_logs/ora_01_logs_02   /dev/sdn   host3   iSCSI   40g   cDOT
svm_ora   /vol/ora_01_logs/ora_01_logs_01   /dev/sdm   host3   iSCSI   40g   cDOT
svm_ora   /vol/ora_01_data/ora_01_data_03   /dev/sdk   host3   iSCSI   20g   cDOT
svm_ora   /vol/ora_01_data/ora_01_data_04   /dev/sdl   host3   iSCSI   20g   cDOT
svm_ora   /vol/ora_01_data/ora_01_data_01   /dev/sdi   host3   iSCSI   20g   cDOT
svm_ora   /vol/ora_01_data/ora_01_data_02   /dev/sdj   host3   iSCSI   20g   cDOT
svm_ora   /vol/ora_01_biny/ora_01_biny_01   /dev/sdh   host3   iSCSI   40g   cDOT
svm_ora   /vol/ora_01_logs/ora_01_logs_02   /dev/sdg   host2   iSCSI   40g   cDOT
svm_ora   /vol/ora_01_logs/ora_01_logs_01   /dev/sdf   host2   iSCSI   40g   cDOT
svm_ora   /vol/ora_01_data/ora_01_data_04   /dev/sde   host2   iSCSI   20g   cDOT
svm_ora   /vol/ora_01_data/ora_01_data_02   /dev/sdc   host2   iSCSI   20g   cDOT
svm_ora   /vol/ora_01_data/ora_01_data_03   /dev/sdd   host2   iSCSI   20g   cDOT
svm_ora   /vol/ora_01_data/ora_01_data_01   /dev/sdb   host2   iSCSI   20g   cDOT
svm_ora   /vol/ora_01_biny/ora_01_biny_01   /dev/sda   host2   iSCSI   40g   cDOT
-
Configure the /etc/multipath.conf file with the following default and blacklist entries.
sudo vi /etc/multipath.conf

defaults {
    find_multipaths yes
    user_friendly_names yes
}
blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
    devnode "^cciss.*"
}
-
Start the multipath service.
sudo systemctl start multipathd
Now multipath devices appear in the /dev/mapper directory.
[ec2-user@ip-172-30-15-58 ~]$ ls -l /dev/mapper
total 0
lrwxrwxrwx 1 root root       7 Mar 21 20:13 3600a09806c574235472455534e68512d -> ../dm-0
lrwxrwxrwx 1 root root       7 Mar 21 20:13 3600a09806c574235472455534e685141 -> ../dm-1
lrwxrwxrwx 1 root root       7 Mar 21 20:13 3600a09806c574235472455534e685142 -> ../dm-2
lrwxrwxrwx 1 root root       7 Mar 21 20:13 3600a09806c574235472455534e685143 -> ../dm-3
lrwxrwxrwx 1 root root       7 Mar 21 20:13 3600a09806c574235472455534e685144 -> ../dm-4
lrwxrwxrwx 1 root root       7 Mar 21 20:13 3600a09806c574235472455534e685145 -> ../dm-5
lrwxrwxrwx 1 root root       7 Mar 21 20:13 3600a09806c574235472455534e685146 -> ../dm-6
crw------- 1 root root 10, 236 Mar 21 18:19 control
-
Log into the FSx cluster via SSH as the fsxadmin user to retrieve the serial-hex number for each LUN. The serial-hex numbers start with 6c574xxx…; the corresponding multipath WWIDs start with 3600a0980, which is the AWS vendor ID.
lun show -fields serial-hex
which returns the following:
FsxId02ad7bf3476b741df::> lun show -fields serial-hex
vserver path                            serial-hex
------- ------------------------------- ------------------------
svm_ora /vol/ora_01_biny/ora_01_biny_01 6c574235472455534e68512d
svm_ora /vol/ora_01_data/ora_01_data_01 6c574235472455534e685141
svm_ora /vol/ora_01_data/ora_01_data_02 6c574235472455534e685142
svm_ora /vol/ora_01_data/ora_01_data_03 6c574235472455534e685143
svm_ora /vol/ora_01_data/ora_01_data_04 6c574235472455534e685144
svm_ora /vol/ora_01_logs/ora_01_logs_01 6c574235472455534e685145
svm_ora /vol/ora_01_logs/ora_01_logs_02 6c574235472455534e685146
7 entries were displayed.
-
Update the /etc/multipath.conf file to add user-friendly names for the multipath devices.
sudo vi /etc/multipath.conf
with the following entries:
multipaths {
        multipath {
                wwid    3600a09806c574235472455534e68512d
                alias   ora_01_biny_01
        }
        multipath {
                wwid    3600a09806c574235472455534e685141
                alias   ora_01_data_01
        }
        multipath {
                wwid    3600a09806c574235472455534e685142
                alias   ora_01_data_02
        }
        multipath {
                wwid    3600a09806c574235472455534e685143
                alias   ora_01_data_03
        }
        multipath {
                wwid    3600a09806c574235472455534e685144
                alias   ora_01_data_04
        }
        multipath {
                wwid    3600a09806c574235472455534e685145
                alias   ora_01_logs_01
        }
        multipath {
                wwid    3600a09806c574235472455534e685146
                alias   ora_01_logs_02
        }
}
-
Restart the multipath service to verify that the devices under /dev/mapper have changed to LUN names versus serial-hex IDs.
sudo systemctl restart multipathd
Check /dev/mapper, which should now return the following:
[ec2-user@ip-172-30-15-58 ~]$ ls -l /dev/mapper
total 0
crw------- 1 root root 10, 236 Mar 21 18:19 control
lrwxrwxrwx 1 root root       7 Mar 21 20:41 ora_01_biny_01 -> ../dm-0
lrwxrwxrwx 1 root root       7 Mar 21 20:41 ora_01_data_01 -> ../dm-1
lrwxrwxrwx 1 root root       7 Mar 21 20:41 ora_01_data_02 -> ../dm-2
lrwxrwxrwx 1 root root       7 Mar 21 20:41 ora_01_data_03 -> ../dm-3
lrwxrwxrwx 1 root root       7 Mar 21 20:41 ora_01_data_04 -> ../dm-4
lrwxrwxrwx 1 root root       7 Mar 21 20:41 ora_01_logs_01 -> ../dm-5
lrwxrwxrwx 1 root root       7 Mar 21 20:41 ora_01_logs_02 -> ../dm-6
-
Partition the binary LUN with a single primary partition.
sudo fdisk /dev/mapper/ora_01_biny_01
-
Format the partitioned binary LUN with an XFS file system.
sudo mkfs.xfs /dev/mapper/ora_01_biny_01p1
-
Mount the binary LUN to /u01.
sudo mount -t xfs /dev/mapper/ora_01_biny_01p1 /u01
-
Change the /u01 mount point ownership to the oracle user and its associated primary group.
sudo chown oracle:oinstall /u01
-
Find the UUID of the binary LUN.
sudo blkid /dev/mapper/ora_01_biny_01p1
-
Add a mount point to /etc/fstab.
sudo vi /etc/fstab
Add the following line.
UUID=d89fb1c9-4f89-4de4-b4d9-17754036d11d /u01 xfs defaults,nofail 0 2
It is important to mount the binary volume by UUID only and with the nofail option to avoid possible root-lock issues during an EC2 instance reboot.
-
As the root user, add the udev rule for Oracle devices.
vi /etc/udev/rules.d/99-oracle-asmdevices.rules
Include the following entries:
ENV{DM_NAME}=="ora*", GROUP:="oinstall", OWNER:="oracle", MODE:="660"
-
As the root user, reload the udev rules.
udevadm control --reload-rules
-
As the root user, trigger the udev rules.
udevadm trigger
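To confirm that the udev rule took effect, check the ownership of the device-mapper devices; the ora_* multipath devices should show owner oracle, group oinstall, and mode 660.
ls -l /dev/dm-*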
-
As the root user, reload multipathd.
systemctl restart multipathd
-
Reboot the EC2 instance host.
Oracle grid infrastructure installation
Details
-
Log into the EC2 instance as the ec2-user via SSH and enable password authentication by uncommenting PasswordAuthentication yes and commenting out PasswordAuthentication no.
sudo vi /etc/ssh/sshd_config
-
Restart the sshd service.
sudo systemctl restart sshd
-
Reset the Oracle user password.
sudo passwd oracle
-
Log in as the Oracle Restart software owner user (oracle). Create the Oracle directories as follows:
mkdir -p /u01/app/oracle
mkdir -p /u01/app/oraInventory
-
Change the directory permission setting.
chmod -R 775 /u01/app
-
Create a grid home directory and change to it.
mkdir -p /u01/app/oracle/product/19.0.0/grid
cd /u01/app/oracle/product/19.0.0/grid
-
Unzip the grid installation files.
unzip -q /tmp/archive/LINUX.X64_193000_grid_home.zip
-
From the grid home, delete the OPatch directory.
rm -rf OPatch
-
From the grid home, unzip p6880880_190000_Linux-x86-64.zip.
unzip -q /tmp/archive/p6880880_190000_Linux-x86-64.zip
-
From the grid home, revise cv/admin/cvu_config: uncomment and replace CV_ASSUME_DISTID=OEL5 with CV_ASSUME_DISTID=OL7.
vi cv/admin/cvu_config
-
Prepare a gridsetup.rsp file for silent installation and place the rsp file in the /tmp/archive directory. The rsp file should cover sections A, B, and G with the following information:
INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=HA_CONFIG
ORACLE_BASE=/u01/app/oracle
oracle.install.asm.OSDBA=dba
oracle.install.asm.OSOPER=oper
oracle.install.asm.OSASM=asm
oracle.install.asm.SYSASMPassword="SetPWD"
oracle.install.asm.diskGroup.name=DATA
oracle.install.asm.diskGroup.redundancy=EXTERNAL
oracle.install.asm.diskGroup.AUSize=4
oracle.install.asm.diskGroup.disks=/dev/mapper/ora_01_data_01,/dev/mapper/ora_01_data_02,/dev/mapper/ora_01_data_03,/dev/mapper/ora_01_data_04
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/mapper/*
oracle.install.asm.monitorPassword="SetPWD"
oracle.install.asm.configureAFD=true
-
Log into the EC2 instance as the root user and set ORACLE_HOME and ORACLE_BASE.
export ORACLE_HOME=/u01/app/oracle/product/19.0.0/grid
export ORACLE_BASE=/tmp
cd /u01/app/oracle/product/19.0.0/grid/bin
-
Provision disk devices for use with the Oracle ASM filter driver.
./asmcmd afd_label DATA01 /dev/mapper/ora_01_data_01 --init
./asmcmd afd_label DATA02 /dev/mapper/ora_01_data_02 --init
./asmcmd afd_label DATA03 /dev/mapper/ora_01_data_03 --init
./asmcmd afd_label DATA04 /dev/mapper/ora_01_data_04 --init
./asmcmd afd_label LOGS01 /dev/mapper/ora_01_logs_01 --init
./asmcmd afd_label LOGS02 /dev/mapper/ora_01_logs_02 --init
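Optionally, list the labels just provisioned to confirm they are visible to the ASM filter driver. This is a sketch that assumes the afd_lslbl subcommand in your grid home accepts a disk discovery path.
./asmcmd afd_lslbl /dev/mapper/*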
-
Install cvuqdisk-1.0.10-1.rpm.
rpm -ivh /u01/app/oracle/product/19.0.0/grid/cv/rpm/cvuqdisk-1.0.10-1.rpm
-
Unset $ORACLE_BASE.
unset ORACLE_BASE
-
Log into the EC2 instance as the Oracle user and extract the patch in the /tmp/archive folder.
unzip /tmp/archive/p34762026_190000_Linux-x86-64.zip -d /tmp/archive
-
From the grid home /u01/app/oracle/product/19.0.0/grid and as the oracle user, launch gridSetup.sh for grid infrastructure installation.
./gridSetup.sh -applyRU /tmp/archive/34762026/ -silent -responseFile /tmp/archive/gridsetup.rsp
Ignore the warnings about wrong groups for grid infrastructure. We are using a single Oracle user to manage Oracle Restart, so this is expected.
-
As root user, execute the following script(s):
/u01/app/oraInventory/orainstRoot.sh
/u01/app/oracle/product/19.0.0/grid/root.sh
-
As the root user, restart multipathd.
systemctl restart multipathd
-
As the Oracle user, execute the following command to complete the configuration:
/u01/app/oracle/product/19.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /tmp/archive/gridsetup.rsp -silent
-
As the Oracle user, create the LOGS disk group.
bin/asmca -silent -sysAsmPassword 'yourPWD' -asmsnmpPassword 'yourPWD' -createDiskGroup -diskGroupName LOGS -disk 'AFD:LOGS*' -redundancy EXTERNAL -au_size 4
-
As the Oracle user, validate grid services after installation configuration.
bin/crsctl stat res -t

Name                 Target   State     Server            State details
Local Resources
ora.DATA.dg          ONLINE   ONLINE    ip-172-30-15-58   STABLE
ora.LISTENER.lsnr    ONLINE   ONLINE    ip-172-30-15-58   STABLE
ora.LOGS.dg          ONLINE   ONLINE    ip-172-30-15-58   STABLE
ora.asm              ONLINE   ONLINE    ip-172-30-15-58   Started,STABLE
ora.ons              OFFLINE  OFFLINE   ip-172-30-15-58   STABLE
Cluster Resources
ora.cssd             ONLINE   ONLINE    ip-172-30-15-58   STABLE
ora.diskmon          OFFLINE  OFFLINE                     STABLE
ora.driver.afd       ONLINE   ONLINE    ip-172-30-15-58   STABLE
ora.evmd             ONLINE   ONLINE    ip-172-30-15-58   STABLE
-
Validate the ASM filter driver status.
[oracle@ip-172-30-15-58 grid]$ export ORACLE_HOME=/u01/app/oracle/product/19.0.0/grid
[oracle@ip-172-30-15-58 grid]$ export ORACLE_SID=+ASM
[oracle@ip-172-30-15-58 grid]$ export PATH=$PATH:$ORACLE_HOME/bin
[oracle@ip-172-30-15-58 grid]$ asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  1048576     81920    81847                0           81847              0             N  DATA/
MOUNTED  EXTERN  N         512             512   4096  1048576     81920    81853                0           81853              0             N  LOGS/
ASMCMD> afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'ip-172-30-15-58.ec2.internal'
Oracle database installation
Details
-
Log in as the Oracle user and unset $ORACLE_HOME and $ORACLE_SID if they are set.
unset ORACLE_HOME
unset ORACLE_SID
-
Create the Oracle DB home directory and change to it.
mkdir /u01/app/oracle/product/19.0.0/db1
cd /u01/app/oracle/product/19.0.0/db1
-
Unzip the Oracle DB installation files.
unzip -q /tmp/archive/LINUX.X64_193000_db_home.zip
-
From the DB home, delete the OPatch directory.
rm -rf OPatch
-
From the DB home, unzip p6880880_190000_Linux-x86-64.zip.
unzip -q /tmp/archive/p6880880_190000_Linux-x86-64.zip
-
From the DB home, revise cv/admin/cvu_config: uncomment and replace CV_ASSUME_DISTID=OEL5 with CV_ASSUME_DISTID=OL7.
vi cv/admin/cvu_config
-
From the /tmp/archive directory, unpack the DB 19.18 RU patch.
unzip p34765931_190000_Linux-x86-64.zip
-
Prepare the DB silent install rsp file at /tmp/archive/dbinstall.rsp with the following values:
oracle.install.option=INSTALL_DB_SWONLY
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/u01/app/oraInventory
ORACLE_HOME=/u01/app/oracle/product/19.0.0/db1
ORACLE_BASE=/u01/app/oracle
oracle.install.db.InstallEdition=EE
oracle.install.db.OSDBA_GROUP=dba
oracle.install.db.OSOPER_GROUP=oper
oracle.install.db.OSBACKUPDBA_GROUP=oper
oracle.install.db.OSDGDBA_GROUP=dba
oracle.install.db.OSKMDBA_GROUP=dba
oracle.install.db.OSRACDBA_GROUP=dba
oracle.install.db.rootconfig.executeRootScript=false
-
From db1 home /u01/app/oracle/product/19.0.0/db1, execute silent software-only DB installation.
./runInstaller -applyRU /tmp/archive/34765931/ -silent -ignorePrereqFailure -responseFile /tmp/archive/dbinstall.rsp
-
As the root user, run the root.sh script after the software-only installation.
/u01/app/oracle/product/19.0.0/db1/root.sh
-
As the Oracle user, create the dbca.rsp file with the following entries:
gdbName=db1.demo.netapp.com
sid=db1
createAsContainerDatabase=true
numberOfPDBs=3
pdbName=db1_pdb
useLocalUndoForPDBs=true
pdbAdminPassword="yourPWD"
templateName=General_Purpose.dbc
sysPassword="yourPWD"
systemPassword="yourPWD"
dbsnmpPassword="yourPWD"
datafileDestination=+DATA
recoveryAreaDestination=+LOGS
storageType=ASM
diskGroupName=DATA
characterSet=AL32UTF8
nationalCharacterSet=AL16UTF16
listeners=LISTENER
databaseType=MULTIPURPOSE
automaticMemoryManagement=false
totalMemory=8192
-
As the Oracle user, launch DB creation with dbca.
bin/dbca -silent -createDatabase -responseFile /tmp/archive/dbca.rsp

output:
Prepare for db operation
7% complete
Registering database with Oracle Restart
11% complete
Copying database files
33% complete
Creating and starting Oracle instance
35% complete
38% complete
42% complete
45% complete
48% complete
Completing Database Creation
53% complete
55% complete
56% complete
Creating Pluggable Databases
60% complete
64% complete
69% complete
78% complete
Executing Post Configuration Actions
100% complete
Database creation complete. For details check the logfiles at: /u01/app/oracle/cfgtoollogs/dbca/db1.
Database Information:
Global Database Name:db1.demo.netapp.com
System Identifier(SID):db1
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/db1/db1.log" for further details.
-
As the Oracle user, validate Oracle Restart HA services after DB creation.
[oracle@ip-172-30-15-58 db1]$ ../grid/bin/crsctl stat res -t

Name                 Target   State     Server            State details
Local Resources
ora.DATA.dg          ONLINE   ONLINE    ip-172-30-15-58   STABLE
ora.LISTENER.lsnr    ONLINE   ONLINE    ip-172-30-15-58   STABLE
ora.LOGS.dg          ONLINE   ONLINE    ip-172-30-15-58   STABLE
ora.asm              ONLINE   ONLINE    ip-172-30-15-58   Started,STABLE
ora.ons              OFFLINE  OFFLINE   ip-172-30-15-58   STABLE
Cluster Resources
ora.cssd             ONLINE   ONLINE    ip-172-30-15-58   STABLE
ora.db1.db           ONLINE   ONLINE    ip-172-30-15-58   Open,HOME=/u01/app/oracle/product/19.0.0/db1,STABLE
ora.diskmon          OFFLINE  OFFLINE                     STABLE
ora.driver.afd       ONLINE   ONLINE    ip-172-30-15-58   STABLE
ora.evmd             ONLINE   ONLINE    ip-172-30-15-58   STABLE
-
Set the Oracle user's .bash_profile.
vi ~/.bash_profile
-
Add the following entries:
export ORACLE_HOME=/u01/app/oracle/product/19.0.0/db1
export ORACLE_SID=db1
export PATH=$PATH:$ORACLE_HOME/bin
alias asm='export ORACLE_HOME=/u01/app/oracle/product/19.0.0/grid;export ORACLE_SID=+ASM;export PATH=$PATH:$ORACLE_HOME/bin'
-
Validate the CDB/PDB created.
/home/oracle/.bash_profile

sqlplus / as sysdba

SQL> select name, open_mode from v$database;

NAME      OPEN_MODE
DB1       READ WRITE

SQL> select name from v$datafile;

NAME
+DATA/DB1/DATAFILE/system.256.1132176177
+DATA/DB1/DATAFILE/sysaux.257.1132176221
+DATA/DB1/DATAFILE/undotbs1.258.1132176247
+DATA/DB1/86B637B62FE07A65E053F706E80A27CA/DATAFILE/system.265.1132177009
+DATA/DB1/86B637B62FE07A65E053F706E80A27CA/DATAFILE/sysaux.266.1132177009
+DATA/DB1/DATAFILE/users.259.1132176247
+DATA/DB1/86B637B62FE07A65E053F706E80A27CA/DATAFILE/undotbs1.267.1132177009
+DATA/DB1/F7852758DCD6B800E0533A0F1EAC1DC6/DATAFILE/system.271.1132177853
+DATA/DB1/F7852758DCD6B800E0533A0F1EAC1DC6/DATAFILE/sysaux.272.1132177853
+DATA/DB1/F7852758DCD6B800E0533A0F1EAC1DC6/DATAFILE/undotbs1.270.1132177853
+DATA/DB1/F7852758DCD6B800E0533A0F1EAC1DC6/DATAFILE/users.274.1132177871

NAME
+DATA/DB1/F785288BBCD1BA78E0533A0F1EACCD6F/DATAFILE/system.276.1132177871
+DATA/DB1/F785288BBCD1BA78E0533A0F1EACCD6F/DATAFILE/sysaux.277.1132177871
+DATA/DB1/F785288BBCD1BA78E0533A0F1EACCD6F/DATAFILE/undotbs1.275.1132177871
+DATA/DB1/F785288BBCD1BA78E0533A0F1EACCD6F/DATAFILE/users.279.1132177889
+DATA/DB1/F78529A14DD8BB18E0533A0F1EACB8ED/DATAFILE/system.281.1132177889
+DATA/DB1/F78529A14DD8BB18E0533A0F1EACB8ED/DATAFILE/sysaux.282.1132177889
+DATA/DB1/F78529A14DD8BB18E0533A0F1EACB8ED/DATAFILE/undotbs1.280.1132177889
+DATA/DB1/F78529A14DD8BB18E0533A0F1EACB8ED/DATAFILE/users.284.1132177907

19 rows selected.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
         2 PDB$SEED                       READ ONLY  NO
         3 DB1_PDB1                       READ WRITE NO
         4 DB1_PDB2                       READ WRITE NO
         5 DB1_PDB3                       READ WRITE NO
SQL>
-
Set the DB recovery destination size to the +LOGS disk group size.
alter system set db_recovery_file_dest_size = 80G scope=both;
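Optionally, confirm the recovery file destination settings from the same sqlplus session; db_recovery_file_dest should point to +LOGS and db_recovery_file_dest_size should reflect the new value.
show parameter db_recovery_file_dest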
-
Log into the database with sqlplus and enable archive log mode.
sqlplus / as sysdba

shutdown immediate;
startup mount;
alter database archivelog;
alter database open;
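After the database reopens, archive log mode can be verified from the same sqlplus session; the database log mode should report Archive Mode.
archive log list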
This completes the Oracle 19c (version 19.18) Restart deployment on Amazon FSx ONTAP storage and an EC2 compute instance. If desired, NetApp recommends relocating the Oracle control file and online redo log files to the +LOGS disk group.
Automated deployment option
Refer to TR-4986: Simplified, Automated Oracle Deployment on Amazon FSx ONTAP with iSCSI for details.
Oracle Database backup, restore, and clone with SnapCenter Service
See SnapCenter Services for Oracle for details on Oracle database backup, restore, and clone with NetApp BlueXP console.
Where to find additional information
To learn more about the information described in this document, review the following documents and/or websites:
-
Installing Oracle Grid Infrastructure for a Standalone Server with a New Database Installation
-
Installing and Configuring Oracle Database Using Response Files
-
Amazon FSx ONTAP
-
Amazon EC2