TR-4974: Oracle 19c in Standalone Restart on AWS FSx/EC2 with NFS/ASM

Allen Cao, Niyaz Mohamed, NetApp

This solution provides an overview and details for Oracle database deployment and protection on AWS FSx ONTAP storage and an EC2 compute instance with the NFS protocol, with the Oracle database configured in standalone Restart mode using ASM as the volume manager.

Purpose

ASM (Automatic Storage Management) is a popular Oracle storage volume manager that is employed in many Oracle installations, and it is Oracle's recommended storage management solution. It provides an alternative to conventional volume managers and file systems. Since Oracle version 11g, ASM has been packaged with grid infrastructure rather than with the database. As a result, in order to utilize Oracle ASM for storage management without RAC, you must install Oracle grid infrastructure on a standalone server, a configuration also known as Oracle Restart. Doing so adds complexity to an otherwise simpler Oracle database deployment. However, as the name implies, when Oracle is deployed in Restart mode, any failed Oracle services are restarted after a host reboot without user intervention, which provides a certain degree of high availability (HA) functionality.

Oracle ASM is generally deployed with the FC and iSCSI storage protocols and LUNs as raw storage devices. However, ASM on the NFS protocol with NFS file systems is also a configuration supported by Oracle. In this documentation, we demonstrate how to deploy an Oracle 19c database with the NFS protocol and Oracle ASM in an Amazon FSx ONTAP storage environment with EC2 compute instances. We also demonstrate how to use the NetApp SnapCenter service through the NetApp BlueXP console to back up, restore, and clone your Oracle database for dev/test and other use cases that require storage-efficient database operation in the AWS public cloud.

This solution addresses the following use cases:

  • Oracle database deployment in Amazon FSx ONTAP storage and EC2 compute instances with NFS/ASM

  • Testing and validating an Oracle workload in the public AWS cloud with NFS/ASM

  • Testing and validating Oracle database Restart functionalities deployed in AWS

Audience

This solution is intended for the following people:

  • A DBA who would like to deploy Oracle in an AWS public cloud with NFS/ASM.

  • A database solution architect who would like to test Oracle workloads in the AWS public cloud.

  • A storage administrator who would like to deploy and manage an Oracle database deployed to AWS FSx storage.

  • An application owner who would like to stand up an Oracle database in AWS FSx/EC2.

Solution test and validation environment

The testing and validation of this solution was performed in an AWS FSx and EC2 environment that might not match the final deployment environment. For more information, see the section Key factors for deployment consideration.

Architecture

This image provides a detailed picture of the Oracle deployment configuration in the AWS public cloud with NFS and ASM.

Hardware and software components

Hardware:

  • FSx ONTAP storage - Current version offered by AWS - One FSx HA cluster in the same VPC and availability zone

  • EC2 instance for compute - t2.xlarge (4 vCPU, 16GiB RAM) - Two EC2 t2.xlarge instances, one as the primary DB server and the other as a clone DB server

Software:

  • RedHat Linux - RHEL-8.6.0_HVM-20220503-x86_64-2-Hourly2-GP2 - Deployed RedHat subscription for testing

  • Oracle Grid Infrastructure - Version 19.18 - Applied RU patch p34762026_190000_Linux-x86-64.zip

  • Oracle Database - Version 19.18 - Applied RU patch p34765931_190000_Linux-x86-64.zip

  • Oracle OPatch - Version 12.2.0.1.36 - Latest patch p6880880_190000_Linux-x86-64.zip

  • SnapCenter Service - Version v2.3.1.2324

Key factors for deployment consideration

  • EC2 compute instances. In these tests and validations, we used an AWS EC2 t2.xlarge instance type for the Oracle database compute instance. NetApp recommends using an M5 type EC2 instance as the compute instance for Oracle in production deployment because it is optimized for database workloads. You need to size the EC2 instance appropriately for the number of vCPUs and the amount of RAM based on actual workload requirements.

  • FSx storage HA clusters single- or multi-zone deployment. In these tests and validations, we deployed an FSx HA cluster in a single AWS availability zone. For production deployment, NetApp recommends deploying an FSx HA pair across two different availability zones. An FSx HA cluster is always provisioned as an HA pair that is sync mirrored across a pair of active-passive file systems to provide storage-level redundancy. Multi-zone deployment further enhances high availability in the event of a failure in a single AWS zone (a multi-AZ provisioning sketch follows this list).

  • FSx storage cluster sizing. An Amazon FSx ONTAP storage file system provides up to 160,000 raw SSD IOPS, up to 4GBps of throughput, and a maximum of 192TiB of capacity. However, you can size the cluster in terms of provisioned IOPS, throughput, and storage limit (minimum 1,024 GiB) based on your actual requirements at the time of deployment. The capacity can be adjusted dynamically on the fly without affecting application availability (a resizing sketch follows this list).

  • Oracle data and logs layout. In our tests and validations, we deployed two ASM disk groups, for data and logs respectively. Within the +DATA ASM disk group, we provisioned four disks in a data NFS file system mount point. Within the +LOGS ASM disk group, we provisioned two disks in a logs NFS file system mount point. For a large database deployment, ASM disk groups can be built to span multiple FSx file systems, with the ASM NFS disks distributed through multiple NFS mount points anchored on those FSx file systems. This particular setup is designed to meet database requirements beyond the 4GBps throughput and 160,000 raw SSD IOPS limits of a single FSx file system.

  • dNFS configuration. dNFS is built into the Oracle kernel and is known to dramatically increase Oracle database performance when Oracle is deployed on NFS storage. dNFS is packaged into the Oracle binary but is not turned on by default; it should be turned on for any Oracle database deployment on NFS. When multiple FSx file systems are deployed for a large database, dNFS multipath should be properly configured (an oranfstab multipath sketch follows this list).

  • Oracle ASM redundancy level to use for each Oracle ASM disk group that you create. Because FSx already mirrors the storage at the FSx cluster level, you should use External Redundancy ONLY, which means that the option does not allow Oracle ASM to mirror the contents of the disk group. This is particularly important because NFS for Oracle database data storage requires the hard NFS mount option, which is not desirable for mirroring ASM contents at the Oracle level (a disk group creation sketch follows this list).

  • Database backup. NetApp provides a SaaS version of the SnapCenter software service for database backup, restore, and clone in the cloud, available through the NetApp BlueXP console UI. NetApp recommends implementing such a service to achieve fast (under one minute) snapshot backups, quick (a few minutes) database restores, and database cloning.
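
For the multi-zone consideration above, the following is a minimal AWS CLI sketch of creating a multi-AZ FSx ONTAP file system. The subnet IDs and sizing values are hypothetical placeholders, not part of the validated setup.

    # Hypothetical subnet IDs; substitute your own VPC values.
    aws fsx create-file-system \
        --file-system-type ONTAP \
        --storage-capacity 1024 \
        --subnet-ids subnet-0aaa0aaa0aaa0aaa0 subnet-0bbb0bbb0bbb0bbb0 \
        --ontap-configuration 'DeploymentType=MULTI_AZ_1,PreferredSubnetId=subnet-0aaa0aaa0aaa0aaa0,ThroughputCapacity=512'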
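
For the on-the-fly resizing mentioned in the sizing consideration, a sketch of the corresponding update call; the file system ID and values are placeholders to adjust for your workload.

    # Scales provisioned capacity, throughput, and SSD IOPS without downtime.
    aws fsx update-file-system \
        --file-system-id fs-0123456789abcdef0 \
        --storage-capacity 2048 \
        --ontap-configuration 'ThroughputCapacity=512,DiskIopsConfiguration={Mode=USER_PROVISIONED,Iops=20000}'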
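
The dNFS multipath configuration mentioned above is done in the oranfstab file. A minimal sketch, assuming a hypothetical second local interface (172.30.16.58) and second FSx NFS LIF (172.30.16.19); the single-path file actually used in this deployment is shown in the Oracle database installation section.

    server: fsx_01
    local: 172.30.15.58 path: 172.30.15.19
    local: 172.30.16.58 path: 172.30.16.19
    nfs_version: nfsv3
    export: /ora_01_data mount: /oradata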
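
To make the external-redundancy point concrete: in this solution the disk groups are created by the grid installer response file and asmca, but the equivalent SQL issued as sysasm would be along the following lines (disk paths from this setup; a sketch for illustration, not a step to run).

    CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
      DISK '/oradata/asm/nfs_data_disk01',
           '/oradata/asm/nfs_data_disk02',
           '/oradata/asm/nfs_data_disk03',
           '/oradata/asm/nfs_data_disk04'
      ATTRIBUTE 'au_size' = '4M', 'compatible.asm' = '19.0';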

Solution deployment

The following section provides step-by-step deployment procedures.

Prerequisites for deployment

Deployment requires the following prerequisites.

  1. An AWS account has been set up, and the necessary VPC and network segments have been created within your AWS account.

  2. From the AWS EC2 console, you must deploy two EC2 Linux instances, one as the primary Oracle DB server and, optionally, one as a clone target DB server. See the architecture diagram in the previous section for more details about the environment setup. Also review the User Guide for Linux instances for more information.

  3. From the AWS EC2 console, deploy Amazon FSx ONTAP storage HA clusters to host the Oracle database volumes. If you are not familiar with the deployment of FSx storage, see the documentation Creating FSx ONTAP file systems for step-by-step instructions.

  4. Steps 2 and 3 can be performed using the following Terraform automation toolkit, which creates an EC2 instance named ora_01 and an FSx file system named fsx_01. Review the instructions carefully and change the variables to suit your environment before execution. A typical workflow is sketched after the note below.

    git clone https://github.com/NetApp-Automation/na_aws_fsx_ec2_deploy.git
Note Ensure that you have allocated at least 50G in the EC2 instance root volume in order to have sufficient space to stage the Oracle installation files.
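
A typical Terraform workflow for the toolkit, assuming your AWS credentials and the toolkit's variables are already configured per its README:

    cd na_aws_fsx_ec2_deploy
    terraform init
    terraform plan -out=main.plan
    terraform apply main.plan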

EC2 instance kernel configuration

With the prerequisites provisioned, log in to the EC2 instance as ec2-user and sudo to the root user to configure the Linux kernel for Oracle installation.

  1. Create a /tmp/archive staging directory and set 777 permissions on it.

    mkdir /tmp/archive
    
    chmod 777 /tmp/archive
  2. Download and stage the Oracle binary installation files and other required rpm files to the /tmp/archive directory.

    See the following list of installation files to be staged in /tmp/archive on the EC2 instance.

    [ec2-user@ip-172-30-15-58 ~]$ ls -l /tmp/archive
    total 10537316
    -rw-rw-r--. 1 ec2-user ec2-user      19112 Mar 21 15:57 compat-libcap1-1.10-7.el7.x86_64.rpm
    -rw-rw-r--  1 ec2-user ec2-user 3059705302 Mar 21 22:01 LINUX.X64_193000_db_home.zip
    -rw-rw-r--  1 ec2-user ec2-user 2889184573 Mar 21 21:09 LINUX.X64_193000_grid_home.zip
    -rw-rw-r--. 1 ec2-user ec2-user     589145 Mar 21 15:56 netapp_linux_unified_host_utilities-7-1.x86_64.rpm
    -rw-rw-r--. 1 ec2-user ec2-user      31828 Mar 21 15:55 oracle-database-preinstall-19c-1.0-2.el8.x86_64.rpm
    -rw-rw-r--  1 ec2-user ec2-user 2872741741 Mar 21 22:31 p34762026_190000_Linux-x86-64.zip
    -rw-rw-r--  1 ec2-user ec2-user 1843577895 Mar 21 22:32 p34765931_190000_Linux-x86-64.zip
    -rw-rw-r--  1 ec2-user ec2-user  124347218 Mar 21 22:33 p6880880_190000_Linux-x86-64.zip
    -rw-r--r--  1 ec2-user ec2-user     257136 Mar 22 16:25 policycoreutils-python-utils-2.9-9.el8.noarch.rpm
  3. Install Oracle 19c preinstall RPM, which satisfies most kernel configuration requirements.

    yum install /tmp/archive/oracle-database-preinstall-19c-1.0-2.el8.x86_64.rpm
  4. Download and install compat-libcap1, which is missing in Linux 8.

    yum install /tmp/archive/compat-libcap1-1.10-7.el7.x86_64.rpm
  5. From NetApp, download and install NetApp host utilities.

    yum install /tmp/archive/netapp_linux_unified_host_utilities-7-1.x86_64.rpm
  6. Install policycoreutils-python-utils, which is not available in the EC2 instance.

    yum install /tmp/archive/policycoreutils-python-utils-2.9-9.el8.noarch.rpm
  7. Install OpenJDK version 1.8.

    yum install java-1.8.0-openjdk.x86_64
  8. Install nfs-utils.

    yum install nfs-utils
  9. Disable transparent hugepages in the current system.

    echo never > /sys/kernel/mm/transparent_hugepage/enabled
    echo never > /sys/kernel/mm/transparent_hugepage/defrag

    Add the following lines in /etc/rc.local to disable transparent_hugepage after reboot:

      # Disable transparent hugepages
              if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
                echo never > /sys/kernel/mm/transparent_hugepage/enabled
              fi
              if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
                echo never > /sys/kernel/mm/transparent_hugepage/defrag
              fi
  10. Disable SELinux by changing SELINUX=enforcing to SELINUX=disabled. You must reboot the host to make the change effective.

    vi /etc/sysconfig/selinux
  11. Add the following lines to /etc/security/limits.conf to set the file descriptor limit and stack size; enter the lines without the quotation marks. (Verification commands for steps 9 through 11 are shown after this list.)

    vi /etc/security/limits.conf
      "*               hard    nofile          65536"
      "*               soft    stack           10240"
  12. Add swap space to the EC2 instance by following this instruction: How do I allocate memory to work as swap space in an Amazon EC2 instance by using a swap file? The exact amount of swap to add depends on the size of RAM, up to 16G (a swap-file sketch follows this list).

  13. Create the asm OS group to be used as the ASM sysasm group.

    groupadd asm
  14. Modify the oracle user to add asm as a secondary group (the oracle user should have been created by the Oracle preinstall RPM installation).

    usermod -a -G asm oracle
  15. Reboot the EC2 instance.
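
A minimal sketch of the swap-file procedure referenced in step 12, assuming a 4G swap file; size it according to your RAM, up to 16G.

    sudo dd if=/dev/zero of=/swapfile bs=128M count=32
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    # persist the swap file across reboots
    echo '/swapfile swap swap defaults 0 0' | sudo tee -a /etc/fstab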
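
After the reboot in step 15, the settings from steps 9 through 11 can be verified as the root user; expected values are noted in the comments.

    cat /sys/kernel/mm/transparent_hugepage/enabled   # expect: always madvise [never]
    getenforce                                        # expect: Disabled
    su - oracle -c 'ulimit -Hn'                       # expect: 65536
    su - oracle -c 'ulimit -Ss'                       # expect: 10240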

Provision and export NFS volumes to be mounted to EC2 instance host

Provision three volumes to host the Oracle database binary, data, and log files by logging in to the FSx cluster via ssh as the fsxadmin user with the FSx cluster management IP.

  1. Log into the FSx cluster through SSH as the fsxadmin user.

    ssh fsxadmin@172.30.15.53
  2. Execute the following command to create a volume for the Oracle binary.

    vol create -volume ora_01_biny -aggregate aggr1 -size 50G -state online  -type RW -junction-path /ora_01_biny -snapshot-policy none -tiering-policy snapshot-only
  3. Execute the following command to create a volume for Oracle data.

    vol create -volume ora_01_data -aggregate aggr1 -size 100G -state online  -type RW -junction-path /ora_01_data -snapshot-policy none -tiering-policy snapshot-only
  4. Execute the following command to create a volume for Oracle logs.

    vol create -volume ora_01_logs -aggregate aggr1 -size 100G -state online  -type RW -junction-path /ora_01_logs -snapshot-policy none -tiering-policy snapshot-only
  5. Validate the DB volumes created.

    vol show

    This is expected to return:

    FsxId06c3c8b2a7bd56458::> vol show
    Vserver   Volume       Aggregate    State      Type       Size  Available Used%
    --------- ------------ ------------ ---------- ---- ---------- ---------- -----
    svm_ora   ora_01_biny  aggr1        online     RW         50GB    47.50GB    0%
    svm_ora   ora_01_data  aggr1        online     RW        100GB    95.00GB    0%
    svm_ora   ora_01_logs  aggr1        online     RW        100GB    95.00GB    0%
    svm_ora   svm_ora_root aggr1        online     RW          1GB    972.1MB    0%
    4 entries were displayed.

Database storage configuration

Now, mount and set up the FSx storage for the Oracle grid infrastructure and database installation on the EC2 instance host.

  1. Log into the EC2 instance via SSH as the ec2-user with your SSH key and EC2 instance IP address.

    ssh -i ora_01.pem ec2-user@172.30.15.58
  2. Create a /u01 directory to mount the Oracle binary file system.

    sudo mkdir /u01
  3. Mount the binary volume to /u01, changing the IP to your FSx NFS LIF IP address. If you deployed the FSx cluster via the NetApp automation toolkit, the FSx virtual storage server NFS LIF IP address is listed in the output at the end of the resource provisioning execution. Otherwise, it can be retrieved from the AWS FSx console UI.

    sudo mount -t nfs 172.30.15.19:/ora_01_biny /u01 -o rw,bg,hard,vers=3,proto=tcp,timeo=600,rsize=65536,wsize=65536
  4. Change the /u01 mount point ownership to the oracle user and its associated primary group.

    sudo chown oracle:oinstall /u01
  5. Create an /oradata directory to mount the Oracle data file system.

    sudo mkdir /oradata
  6. Mount the data volume to /oradata, changing the IP to your FSx NFS LIF IP address.

    sudo mount -t nfs 172.30.15.19:/ora_01_data /oradata -o rw,bg,hard,vers=3,proto=tcp,timeo=600,rsize=65536,wsize=65536
  7. Change the /oradata mount point ownership to the oracle user and its associated primary group.

    sudo chown oracle:oinstall /oradata
  8. Create an /oralogs directory to mount the Oracle logs file system.

    sudo mkdir /oralogs
  9. Mount the log volume to /oralogs, changing the IP to your FSx NFS LIF IP address.

    sudo mount -t nfs 172.30.15.19:/ora_01_logs /oralogs -o rw,bg,hard,vers=3,proto=tcp,timeo=600,rsize=65536,wsize=65536
  10. Change the /oralogs mount point ownership to the oracle user and its associated primary group.

    sudo chown oracle:oinstall /oralogs
  11. Add the mount points to /etc/fstab.

    sudo vi /etc/fstab

    Add the following line.

    172.30.15.19:/ora_01_biny       /u01            nfs     rw,bg,hard,vers=3,proto=tcp,timeo=600,rsize=65536,wsize=65536   0       0
    172.30.15.19:/ora_01_data       /oradata        nfs     rw,bg,hard,vers=3,proto=tcp,timeo=600,rsize=65536,wsize=65536   0       0
    172.30.15.19:/ora_01_logs       /oralogs        nfs     rw,bg,hard,vers=3,proto=tcp,timeo=600,rsize=65536,wsize=65536   0       0
  12. Sudo to the oracle user and create asm folders to store the ASM disk files.

    sudo su
    su - oracle
    mkdir /oradata/asm
    mkdir /oralogs/asm
  13. As the oracle user, create the ASM data disk files. Change the dd count to match the intended disk size given the 1M block size.

    dd if=/dev/zero of=/oradata/asm/nfs_data_disk01 bs=1M count=20480 oflag=direct
    dd if=/dev/zero of=/oradata/asm/nfs_data_disk02 bs=1M count=20480 oflag=direct
    dd if=/dev/zero of=/oradata/asm/nfs_data_disk03 bs=1M count=20480 oflag=direct
    dd if=/dev/zero of=/oradata/asm/nfs_data_disk04 bs=1M count=20480 oflag=direct
  14. As the root user, change the data disk file permissions to 640.

    chmod 640 /oradata/asm/*
  15. As the oracle user, create the ASM logs disk files. Change the dd count to match the intended disk size given the 1M block size.

    dd if=/dev/zero of=/oralogs/asm/nfs_logs_disk01 bs=1M count=40960 oflag=direct
    dd if=/dev/zero of=/oralogs/asm/nfs_logs_disk02 bs=1M count=40960 oflag=direct
  16. As the root user, change the logs disk file permissions to 640.

    chmod 640 /oralogs/asm/*
  17. Reboot the EC2 instance host.
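
After the reboot, a quick check confirms that the NFS file systems are remounted from /etc/fstab and that the ASM disk files are in place:

    df -h /u01 /oradata /oralogs
    ls -l /oradata/asm /oralogs/asm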

Oracle grid infrastructure installation

  1. Log into the EC2 instance as the ec2-user via SSH and enable password authentication by uncommenting PasswordAuthentication yes and then commenting out PasswordAuthentication no.

    sudo vi /etc/ssh/sshd_config
  2. Restart the sshd service.

    sudo systemctl restart sshd
  3. Reset the Oracle user password.

    sudo passwd oracle
  4. Log in as the Oracle Restart software owner user (oracle) and create the Oracle directories as follows:

    mkdir -p /u01/app/oracle
    mkdir -p /u01/app/oraInventory
  5. Change the directory permission setting.

    chmod -R 775 /u01/app
  6. Create a grid home directory and change to it.

    mkdir -p /u01/app/oracle/product/19.0.0/grid
    cd /u01/app/oracle/product/19.0.0/grid
  7. Unzip the grid installation files.

    unzip -q /tmp/archive/LINUX.X64_193000_grid_home.zip
  8. From grid home, delete the OPatch directory.

    rm -rf OPatch
  9. Copy p6880880_190000_Linux-x86-64.zip from /tmp/archive to grid home, and then unzip it.

    cp /tmp/archive/p6880880_190000_Linux-x86-64.zip .
    unzip p6880880_190000_Linux-x86-64.zip
  10. From grid home, revise cv/admin/cvu_config, uncomment and replace CV_ASSUME_DISTID=OEL5 with CV_ASSUME_DISTID=OL7.

    vi cv/admin/cvu_config
  11. Prepare a gridsetup.rsp file for silent installation and place the rsp file in the /tmp/archive directory. The rsp file should cover sections A, B, and G with the following information:

    INVENTORY_LOCATION=/u01/app/oraInventory
    oracle.install.option=HA_CONFIG
    ORACLE_BASE=/u01/app/oracle
    oracle.install.asm.OSDBA=dba
    oracle.install.asm.OSOPER=oper
    oracle.install.asm.OSASM=asm
    oracle.install.asm.SYSASMPassword="SetPWD"
    oracle.install.asm.diskGroup.name=DATA
    oracle.install.asm.diskGroup.redundancy=EXTERNAL
    oracle.install.asm.diskGroup.AUSize=4
    oracle.install.asm.diskGroup.disks=/oradata/asm/*,/oralogs/asm/*
    oracle.install.asm.diskGroup.diskDiscoveryString=/oradata/asm/nfs_data_disk01,/oradata/asm/nfs_data_disk02,/oradata/asm/nfs_data_disk03,/oradata/asm/nfs_data_disk04
    oracle.install.asm.monitorPassword="SetPWD"
    oracle.install.asm.configureAFD=false
  12. Log into the EC2 instance as the root user.

  13. Install cvuqdisk-1.0.10-1.rpm.

    rpm -ivh /u01/app/oracle/product/19.0.0/grid/cv/rpm/cvuqdisk-1.0.10-1.rpm
  14. Log into the EC2 instance as the Oracle user and extract the patch in the /tmp/archive folder.

    unzip p34762026_190000_Linux-x86-64.zip
  15. From grid home /u01/app/oracle/product/19.0.0/grid and as the oracle user, launch gridSetup.sh for grid infrastructure installation.

     ./gridSetup.sh -applyRU /tmp/archive/34762026/ -silent -responseFile /tmp/archive/gridsetup.rsp

    Ignore the warnings about wrong groups for grid infrastructure. We are using a single Oracle user to manage Oracle Restart, so this is expected.

  16. As the root user, execute the following scripts:

    /u01/app/oraInventory/orainstRoot.sh
    
    /u01/app/oracle/product/19.0.0/grid/root.sh
  17. As the Oracle user, execute the following command to complete the configuration:

    /u01/app/oracle/product/19.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /tmp/archive/gridsetup.rsp -silent
  18. As the Oracle user, create the LOGS disk group.

    bin/asmca -silent -sysAsmPassword 'yourPWD' -asmsnmpPassword 'yourPWD' -createDiskGroup -diskGroupName LOGS -disk '/oralogs/asm/nfs_logs_disk*' -redundancy EXTERNAL -au_size 4
  19. As the Oracle user, validate grid services after installation configuration.

    bin/crsctl stat res -t
    Name                Target  State        Server                   State details
    Local Resources
    ora.DATA.dg         ONLINE  ONLINE       ip-172-30-15-58          STABLE
    ora.LISTENER.lsnr   ONLINE  ONLINE       ip-172-30-15-58          STABLE
    ora.LOGS.dg         ONLINE  ONLINE       ip-172-30-15-58          STABLE
    ora.asm             ONLINE  ONLINE       ip-172-30-15-58          Started,STABLE
    ora.ons             OFFLINE OFFLINE      ip-172-30-15-58          STABLE
    Cluster Resources
    ora.cssd            ONLINE  ONLINE       ip-172-30-15-58          STABLE
    ora.diskmon         OFFLINE OFFLINE                               STABLE
    ora.driver.afd      ONLINE  ONLINE       ip-172-30-15-58          STABLE
    ora.evmd            ONLINE  ONLINE       ip-172-30-15-58          STABLE

Oracle database installation

  1. Log in as the Oracle user and unset $ORACLE_HOME and $ORACLE_SID if they are set.

    unset ORACLE_HOME
    unset ORACLE_SID
  2. Create the Oracle DB home directory and change to it.

    mkdir /u01/app/oracle/product/19.0.0/db1
    cd /u01/app/oracle/product/19.0.0/db1
  3. Unzip the Oracle DB installation files.

    unzip -q /tmp/archive/LINUX.X64_193000_db_home.zip
  4. From the DB home, delete the OPatch directory.

    rm -rf OPatch
  5. Copy p6880880_190000_Linux-x86-64.zip from /tmp/archive to DB home, and then unzip it.

    cp /tmp/archive/p6880880_190000_Linux-x86-64.zip .
    unzip p6880880_190000_Linux-x86-64.zip
  6. From DB home, revise cv/admin/cvu_config, and uncomment and replace CV_ASSUME_DISTID=OEL5 with CV_ASSUME_DISTID=OL7.

    vi cv/admin/cvu_config
  7. From the /tmp/archive directory, unpack the DB 19.18 RU patch.

    unzip p34765931_190000_Linux-x86-64.zip
  8. Prepare the DB silent install rsp file at /tmp/archive/dbinstall.rsp with the following values:

    oracle.install.option=INSTALL_DB_SWONLY
    UNIX_GROUP_NAME=oinstall
    INVENTORY_LOCATION=/u01/app/oraInventory
    ORACLE_HOME=/u01/app/oracle/product/19.0.0/db1
    ORACLE_BASE=/u01/app/oracle
    oracle.install.db.InstallEdition=EE
    oracle.install.db.OSDBA_GROUP=dba
    oracle.install.db.OSOPER_GROUP=oper
    oracle.install.db.OSBACKUPDBA_GROUP=oper
    oracle.install.db.OSDGDBA_GROUP=dba
    oracle.install.db.OSKMDBA_GROUP=dba
    oracle.install.db.OSRACDBA_GROUP=dba
    oracle.install.db.rootconfig.executeRootScript=false
  9. From db1 home /u01/app/oracle/product/19.0.0/db1, execute silent software-only DB installation.

     ./runInstaller -applyRU /tmp/archive/34765931/ -silent -ignorePrereqFailure -responseFile /tmp/archive/dbinstall.rsp
  10. As the root user, run the root.sh script after the software-only installation.

    /u01/app/oracle/product/19.0.0/db1/root.sh
  11. As the Oracle user, create the dbca.rsp file in /tmp/archive with the following entries:

    gdbName=db1.demo.netapp.com
    sid=db1
    createAsContainerDatabase=true
    numberOfPDBs=3
    pdbName=db1_pdb
    useLocalUndoForPDBs=true
    pdbAdminPassword="yourPWD"
    templateName=General_Purpose.dbc
    sysPassword="yourPWD"
    systemPassword="yourPWD"
    dbsnmpPassword="yourPWD"
    storageType=ASM
    diskGroupName=DATA
    characterSet=AL32UTF8
    nationalCharacterSet=AL16UTF16
    listeners=LISTENER
    databaseType=MULTIPURPOSE
    automaticMemoryManagement=false
    totalMemory=8192
    Note Set the total memory based on the available memory in the EC2 instance host. Oracle allocates 75% of totalMemory to the DB instance SGA (buffer cache).
  12. As the Oracle user, launch DB creation with dbca.

    bin/dbca -silent -createDatabase -responseFile /tmp/archive/dbca.rsp
    
    output:
    Prepare for db operation
    7% complete
    Registering database with Oracle Restart
    11% complete
    Copying database files
    33% complete
    Creating and starting Oracle instance
    35% complete
    38% complete
    42% complete
    45% complete
    48% complete
    Completing Database Creation
    53% complete
    55% complete
    56% complete
    Creating Pluggable Databases
    60% complete
    64% complete
    69% complete
    78% complete
    Executing Post Configuration Actions
    100% complete
    Database creation complete. For details check the logfiles at:
     /u01/app/oracle/cfgtoollogs/dbca/db1.
    Database Information:
    Global Database Name:db1.demo.netapp.com
    System Identifier(SID):db1
    Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/db1/db1.log" for further details.
  13. As the Oracle user, validate the Oracle Restart HA services after DB creation.

    [oracle@ip-172-30-15-58 db1]$ ../grid/bin/crsctl stat res -t
    --------------------------------------------------------------------------------
    Name           Target  State        Server                   State details
    --------------------------------------------------------------------------------
    Local Resources
    --------------------------------------------------------------------------------
    ora.DATA.dg
                   ONLINE  ONLINE       ip-172-30-15-58          STABLE
    ora.LISTENER.lsnr
                   ONLINE  ONLINE       ip-172-30-15-58          STABLE
    ora.LOGS.dg
                   ONLINE  ONLINE       ip-172-30-15-58          STABLE
    ora.asm
                   ONLINE  ONLINE       ip-172-30-15-58          Started,STABLE
    ora.ons
                   OFFLINE OFFLINE      ip-172-30-15-58          STABLE
    --------------------------------------------------------------------------------
    Cluster Resources
    --------------------------------------------------------------------------------
    ora.cssd
          1        ONLINE  ONLINE       ip-172-30-15-58          STABLE
    ora.db1.db
          1        ONLINE  ONLINE       ip-172-30-15-58          Open,HOME=/u01/app/o
                                                                 racle/product/19.0.0
                                                                 /db1,STABLE
    ora.diskmon
          1        OFFLINE OFFLINE                               STABLE
    ora.evmd
          1        ONLINE  ONLINE       ip-172-30-15-58          STABLE
    --------------------------------------------------------------------------------
    [oracle@ip-172-30-15-58 db1]$
  14. Set up the Oracle user's .bash_profile.

    vi ~/.bash_profile
  15. Add the following entries:

    export ORACLE_HOME=/u01/app/oracle/product/19.0.0/db1
    export ORACLE_SID=db1
    export PATH=$PATH:$ORACLE_HOME/bin
    alias asm='export ORACLE_HOME=/u01/app/oracle/product/19.0.0/grid;export ORACLE_SID=+ASM;export PATH=$PATH:$ORACLE_HOME/bin'
  16. Validate the CDB/PDB created.

    . ~/.bash_profile
    
    sqlplus / as sysdba
    
    SQL> select name, open_mode from v$database;
    
    NAME      OPEN_MODE
    
    DB1       READ WRITE
    
    SQL> select name from v$datafile;
    
    NAME
    
    +DATA/DB1/DATAFILE/system.256.1132176177
    +DATA/DB1/DATAFILE/sysaux.257.1132176221
    +DATA/DB1/DATAFILE/undotbs1.258.1132176247
    +DATA/DB1/86B637B62FE07A65E053F706E80A27CA/DATAFILE/system.265.1132177009
    +DATA/DB1/86B637B62FE07A65E053F706E80A27CA/DATAFILE/sysaux.266.1132177009
    +DATA/DB1/DATAFILE/users.259.1132176247
    +DATA/DB1/86B637B62FE07A65E053F706E80A27CA/DATAFILE/undotbs1.267.1132177009
    +DATA/DB1/F7852758DCD6B800E0533A0F1EAC1DC6/DATAFILE/system.271.1132177853
    +DATA/DB1/F7852758DCD6B800E0533A0F1EAC1DC6/DATAFILE/sysaux.272.1132177853
    +DATA/DB1/F7852758DCD6B800E0533A0F1EAC1DC6/DATAFILE/undotbs1.270.1132177853
    +DATA/DB1/F7852758DCD6B800E0533A0F1EAC1DC6/DATAFILE/users.274.1132177871
    
    NAME
    
    +DATA/DB1/F785288BBCD1BA78E0533A0F1EACCD6F/DATAFILE/system.276.1132177871
    +DATA/DB1/F785288BBCD1BA78E0533A0F1EACCD6F/DATAFILE/sysaux.277.1132177871
    +DATA/DB1/F785288BBCD1BA78E0533A0F1EACCD6F/DATAFILE/undotbs1.275.1132177871
    +DATA/DB1/F785288BBCD1BA78E0533A0F1EACCD6F/DATAFILE/users.279.1132177889
    +DATA/DB1/F78529A14DD8BB18E0533A0F1EACB8ED/DATAFILE/system.281.1132177889
    +DATA/DB1/F78529A14DD8BB18E0533A0F1EACB8ED/DATAFILE/sysaux.282.1132177889
    +DATA/DB1/F78529A14DD8BB18E0533A0F1EACB8ED/DATAFILE/undotbs1.280.1132177889
    +DATA/DB1/F78529A14DD8BB18E0533A0F1EACB8ED/DATAFILE/users.284.1132177907
    
    19 rows selected.
    
    SQL> show pdbs
    
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    
             2 PDB$SEED                       READ ONLY  NO
             3 DB1_PDB1                       READ WRITE NO
             4 DB1_PDB2                       READ WRITE NO
             5 DB1_PDB3                       READ WRITE NO
    SQL>
  17. As the oracle user, change to the Oracle database home directory /u01/app/oracle/product/19.0.0/db1 and enable dNFS.

    cd /u01/app/oracle/product/19.0.0/db1
    
    mkdir rdbms/lib/odm
    
    cp lib/libnfsodm19.so rdbms/lib/odm/
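
    # Alternatively, a commonly used equivalent (an assumption based on standard
    # Oracle practice, not part of the validated steps) is the make target
    # shipped in the Oracle home:
    #   cd $ORACLE_HOME/rdbms/lib && make -f ins_rdbms.mk dnfs_on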
  18. Configure the oranfstab file in ORACLE_HOME.

    vi $ORACLE_HOME/dbs/oranfstab
    
    Add the following entries:
    
    server: fsx_01
    local: 172.30.15.58 path: 172.30.15.19
    nfs_version: nfsv3
    export: /ora_01_biny mount: /u01
    export: /ora_01_data mount: /oradata
    export: /ora_01_logs mount: /oralogs
  19. As the oracle user, log in to the database via sqlplus and set the DB recovery size and location to the +LOGS disk group.

    . ~/.bash_profile
    
    sqlplus / as sysdba
    
    alter system set db_recovery_file_dest_size = 80G scope=both;
    
    alter system set db_recovery_file_dest = '+LOGS' scope=both;
  20. Enable archive log mode and restart the Oracle DB instance.

    shutdown immediate;
    
    startup mount;
    
    alter database archivelog;
    
    alter database open;
    
    alter system switch logfile;
  21. Validate the DB log mode and dNFS after the instance restart.

    SQL> select name, log_mode from v$database;
    
    NAME      LOG_MODE
    --------- ------------
    DB1       ARCHIVELOG
    
    SQL> select svrname, dirname from v$dnfs_servers;
    
    SVRNAME
    --------------------------------------------------------------------------------
    DIRNAME
    --------------------------------------------------------------------------------
    fsx_01
    /ora_01_data
    
    fsx_01
    /ora_01_biny
    
    fsx_01
    /ora_01_logs
  22. Validate Oracle ASM

    [oracle@ip-172-30-15-58 db1]$ asm
    [oracle@ip-172-30-15-58 db1]$ sqlplus / as sysasm
    
    SQL*Plus: Release 19.0.0.0.0 - Production on Tue May 9 20:39:39 2023
    Version 19.18.0.0.0
    
    Copyright (c) 1982, 2022, Oracle.  All rights reserved.
    
    
    Connected to:
    Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
    Version 19.18.0.0.0
    
    SQL> set lin 200
    SQL> col path form a30
    SQL> select name, path, header_status, mount_status, state from v$asm_disk;
    
    NAME                           PATH                           HEADER_STATU MOUNT_S STATE
    ------------------------------ ------------------------------ ------------ ------- --------
    DATA_0002                      /oradata/asm/nfs_data_disk01   MEMBER       CACHED  NORMAL
    DATA_0000                      /oradata/asm/nfs_data_disk02   MEMBER       CACHED  NORMAL
    DATA_0001                      /oradata/asm/nfs_data_disk03   MEMBER       CACHED  NORMAL
    DATA_0003                      /oradata/asm/nfs_data_disk04   MEMBER       CACHED  NORMAL
    LOGS_0000                      /oralogs/asm/nfs_logs_disk01   MEMBER       CACHED  NORMAL
    LOGS_0001                      /oralogs/asm/nfs_logs_disk02   MEMBER       CACHED  NORMAL
    
    6 rows selected.
    
    
    SQL> select name, state, ALLOCATION_UNIT_SIZE, TOTAL_MB, FREE_MB from v$asm_diskgroup;
    
    NAME                           STATE       ALLOCATION_UNIT_SIZE   TOTAL_MB    FREE_MB
    ------------------------------ ----------- -------------------- ---------- ----------
    DATA                           MOUNTED                  4194304      81920      73536
    LOGS                           MOUNTED                  4194304      81920      81640
    
    This completes the Oracle 19c version 19.18 Restart deployment on Amazon FSx ONTAP storage and an EC2 compute instance with NFS/ASM. If desired, NetApp recommends relocating the Oracle control file and online log files to the +LOGS disk group, as sketched below.
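
    A hedged sketch of that relocation follows. The control file source path is illustrative (your OMF file numbers will differ), and old log groups should only be dropped once they are INACTIVE per v$log.

    -- relocate the control file: point the spfile at +LOGS, then restore it there
    SQL> alter system set control_files='+LOGS' scope=spfile;
    SQL> shutdown immediate;
    SQL> startup nomount;

    RMAN> restore controlfile from '+DATA/DB1/CONTROLFILE/current.261.1132176333';
    RMAN> alter database mount;
    RMAN> alter database open;

    -- move the online logs: add new groups in +LOGS, then drop the old ones
    SQL> alter database add logfile group 4 ('+LOGS') size 1G;
    SQL> alter database add logfile group 5 ('+LOGS') size 1G;
    SQL> alter database add logfile group 6 ('+LOGS') size 1G;
    SQL> alter system switch logfile;
    SQL> alter database drop logfile group 1;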

Automated deployment option

NetApp will release a fully automated solution deployment toolkit with Ansible to facilitate the implementation of this solution. Please check back for the availability of the toolkit. After it is released, a link will be posted here.

Oracle Database backup, restore, and clone with SnapCenter Service

At this moment, an Oracle database with the NFS and ASM storage option is supported only by the traditional SnapCenter Server UI tool. See Hybrid Cloud Database Solutions with SnapCenter for details on Oracle database backup, restore, and clone with the NetApp SnapCenter UI tool.