
TR-4992: Simplified, Automated Oracle Deployment on NetApp C-Series with NFS


Allen Cao, Niyaz Mohamed, NetApp

This solution provides an overview and details for automated Oracle deployment on NetApp AFF C-Series as primary database storage with the NFS protocol. The Oracle database deploys as a container database with dNFS enabled.

Purpose

NetApp AFF C-Series is capacity flash storage that makes all-flash more accessible and affordable for unified storage. Performance-wise, it is sufficient for many tier 1 or tier 2 Oracle database workloads. Powered by NetApp ONTAP® data management software, AFF C-Series systems deliver industry-leading efficiency, superior flexibility, best-in-class data services, and cloud integration to help you scale your IT infrastructure, simplify your data management, and reduce storage cost and power consumption.

This documentation demonstrates the simplified deployment of Oracle databases on NetApp C-Series via NFS mounts using Ansible automation. The Oracle database deploys in a container database (CDB) and pluggable databases (PDB) configuration with the Oracle dNFS protocol enabled to boost performance. Furthermore, the solution presents best practices for setting up storage networking and a storage virtual machine (SVM) with the NFS protocol on C-Series storage controllers. The solution also includes information on fast Oracle database backup, restore, and cloning with the NetApp SnapCenter UI tool.

This solution addresses the following use cases:

  • Automated Oracle container database deployment on NetApp C-Series storage controllers.

  • Oracle database protection and cloning on C-Series with the SnapCenter UI tool.

Audience

This solution is intended for the following people:

  • A DBA who would like to deploy Oracle on NetApp C-Series.

  • A database solution architect who would like to test Oracle workloads on NetApp C-Series.

  • A storage administrator who would like to deploy and manage an Oracle database on NetApp C-Series.

  • An application owner who would like to stand up an Oracle database on NetApp C-Series.

Solution test and validation environment

The testing and validation of this solution were performed in a lab setting that might not match the final deployment environment. See the section Key factors for deployment consideration for more information.

Architecture

This image provides a detailed picture of the Oracle deployment configuration on NetApp C-Series with the NFS protocol.

Hardware and software components

Hardware

Component             Configuration                Notes
NetApp C-Series C400  ONTAP Version 9.13.1P3       Two disk shelves / 24 disks with 278 TiB capacity
VM for DB server      4 vCPUs, 16GiB RAM           Two Linux VM instances for concurrent deployment
VM for SnapCenter     4 vCPUs, 16GiB RAM           One Windows VM instance

Software

Component             Version                      Notes
RedHat Linux          RHEL 8.6 (LVM) - x64 Gen2    Deployed RedHat subscription for testing
Windows Server        2022 DataCenter x64 Gen2     Hosting SnapCenter server
Oracle Database       19.18                        Applied RU patch p34765931_190000_Linux-x86-64.zip
Oracle OPatch         12.2.0.1.36                  Latest patch p6880880_190000_Linux-x86-64.zip
SnapCenter Server     5.0                          Workgroup deployment
Open JDK              java-11-openjdk              SnapCenter plugin requirement on DB VMs
NFS                   3.0                          Oracle dNFS enabled
Ansible               core 2.16.2                  Python 3.6.8

Oracle database configuration in the lab environment

Server   Database                                     DB Storage
ora_01   NTAP1 (NTAP1_PDB1, NTAP1_PDB2, NTAP1_PDB3)   /u01, /u02, /u03 NFS mounts on C400 volumes
ora_02   NTAP2 (NTAP2_PDB1, NTAP2_PDB2, NTAP2_PDB3)   /u01, /u02, /u03 NFS mounts on C400 volumes
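For reference, each entry in the DB Storage column corresponds to an NFS mount of a C400 volume like the one below. This is a sketch using the same mount options the automation writes to /etc/fstab (shown in the validation section later), with the lab's NFS LIF address as a placeholder:

    # Mount the binary volume for DB server ora_01 (substitute your own NFS LIF IP)
    sudo mount -t nfs -o rw,bg,hard,vers=3,proto=tcp,timeo=600,rsize=65536,wsize=65536 \
        172.21.21.100:/ora_01_u01 /u01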

Key factors for deployment consideration

  • Oracle database storage layout. In this automated Oracle deployment, we provision three database volumes for each database to host the Oracle binary, data, and logs by default. The volumes are mounted on the Oracle DB server via NFS as /u01 (binary), /u02 (data), and /u03 (logs). Dual control files are configured on the /u02 and /u03 mount points for redundancy.

  • Multiple DB server deployment. The automation solution can deploy an Oracle container database to multiple DB servers in a single Ansible playbook run. Regardless of the number of DB servers, the playbook execution remains the same. You can deploy multiple container databases to a single VM instance by repeating the deployment with different database instance IDs (Oracle SIDs), but ensure that the host has sufficient memory to support the deployed databases.

  • dNFS configuration. By using dNFS (available since Oracle 11g), an Oracle database running on a DB VM can drive significantly more I/O than the native NFS client. The automated Oracle deployment configures dNFS on NFSv3 by default; a sketch of the manual relink procedure follows this list.

  • Load balancing on the C400 controller pair. Place Oracle database volumes evenly on the C400 controller nodes to balance the workload: DB1 on controller 1, DB2 on controller 2, and so on. Mount the DB volumes to their local LIF addresses.

  • Database backup. NetApp provides the SnapCenter software suite for database backup, restore, and cloning with a user-friendly UI. NetApp recommends implementing such a management tool to achieve fast (under a minute) snapshot backup, quick (minutes) database restore, and database cloning.
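As referenced in the dNFS item above, the automation enables dNFS for you. If you need to check or toggle it manually on a DB VM, the standard Oracle relink procedure looks like the following. This is a sketch, assuming the Oracle home path used elsewhere in this document:

    # As the oracle user on the DB VM; adjust ORACLE_HOME to your deployment
    export ORACLE_HOME=/u01/app/oracle/product/19.0.0/NTAP1
    cd $ORACLE_HOME/rdbms/lib

    # Shut down any database using this home before relinking.
    # Relink the Oracle binary against the dNFS ODM library
    # (run "make -f ins_rdbms.mk dnfs_off" to reverse).
    make -f ins_rdbms.mk dnfs_on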

Solution deployment

The following sections provide step-by-step procedures for automated Oracle 19c deployment and information about Oracle database protection and cloning after deployment.

Prerequisites for deployment

Details

Deployment requires the following prerequisites.

  1. A NetApp C-Series storage controller pair is racked and stacked, and the latest version of the ONTAP operating system is installed and configured. Refer to this setup guide as necessary: Detailed guide - AFF C400

  2. Provision two Linux VMs as Oracle DB servers. See the architecture diagram in the previous section for details about the environment setup.

  3. Provision a Windows server to run the latest version of the NetApp SnapCenter UI tool. Refer to the following link for details: Install the SnapCenter Server

  4. Provision a Linux VM as the Ansible controller node with the latest versions of Ansible and Git installed. Refer to the following link for details: Getting Started with NetApp solution automation, specifically the sections
    Setup the Ansible Control Node for CLI deployments on RHEL / CentOS and
    Setup the Ansible Control Node for CLI deployments on Ubuntu / Debian.

    Enable ssh public/private key authentication between the Ansible controller and the database VMs (see the sketch after this list).

  5. From the Ansible controller admin user home directory, clone a copy of the NetApp Oracle deployment automation toolkit for NFS.

    git clone https://bitbucket.ngage.netapp.com/scm/ns-bb/na_oracle_deploy_nfs.git
  6. Stage the following Oracle 19c installation files in the /tmp/archive directory on the DB VMs with 777 permission.

    installer_archives:
      - "LINUX.X64_193000_db_home.zip"
      - "p34765931_190000_Linux-x86-64.zip"
      - "p6880880_190000_Linux-x86-64.zip"

Configure Networking and SVM on C-Series for Oracle

Details

This section of the deployment guide demonstrates best practices for setting up networking and a storage virtual machine (SVM) on the C-Series controllers for Oracle workloads with the NFS protocol, using the ONTAP System Manager UI.

  1. Log in to ONTAP System Manager to verify that, after the initial ONTAP cluster installation, broadcast domains have been configured with ethernet ports properly assigned to each domain. Generally, there should be a broadcast domain for the cluster, a broadcast domain for management, and a broadcast domain for workloads such as data.

  2. From NETWORK - Ethernet Ports, click Link Aggregate Group to create an LACP link aggregate group port a0a, which provides load balancing and failover among the member ports of the aggregate group port. There are four data ports (e0e, e0f, e0g, e0h) available on the C400 controllers.

  3. Select the ethernet ports in the group, LACP for mode, and Port for load distribution.

  4. Validate that LACP port a0a has been created and that broadcast domain Data is now operating on the LACP port.

  5. From Ethernet Ports, click VLAN to add a VLAN on each controller node for the Oracle workload on the NFS protocol.

  6. Log in to the C-Series controllers from the cluster management IP via ssh to validate that network failover groups are configured correctly. ONTAP creates and manages failover groups automatically.

    HCG-NetApp-C400-E9U9::> net int failover-groups show
      (network interface failover-groups show)
                                      Failover
    Vserver          Group            Targets
    ---------------- ---------------- --------------------------------------------
    Cluster
                     Cluster
                                      HCG-NetApp-C400-E9U9a:e0c,
                                      HCG-NetApp-C400-E9U9a:e0d,
                                      HCG-NetApp-C400-E9U9b:e0c,
                                      HCG-NetApp-C400-E9U9b:e0d
    HCG-NetApp-C400-E9U9
                     Data
                                      HCG-NetApp-C400-E9U9a:a0a,
                                      HCG-NetApp-C400-E9U9a:a0a-3277,
                                      HCG-NetApp-C400-E9U9b:a0a,
                                      HCG-NetApp-C400-E9U9b:a0a-3277
                     Mgmt
                                      HCG-NetApp-C400-E9U9a:e0M,
                                      HCG-NetApp-C400-E9U9b:e0M
    3 entries were displayed.
  7. From STORAGE - Storage VMs, click +Add to create an SVM for Oracle.

  8. Name your Oracle SVM, then check Enable NFS and Allow NFS client access.

  9. Add rules to the NFS export policy Default.

  10. In NETWORK INTERFACE, fill in the IP addresses on each node for the NFS LIFs.

  11. Validate that the SVM for Oracle is up and running and that the NFS LIF status is active.

  12. From the STORAGE - Volumes tab, click +Add to create NFS volumes for the Oracle database.

  13. Name your volume, and assign capacity and a performance service level.

  14. In Access Permission, choose the default policy created in the previous step. Uncheck Enable Snapshot Copies, as we prefer to use SnapCenter to create application-consistent snapshots.

  15. Create three DB volumes for each DB server: server_name_u01 - binary, server_name_u02 - data, server_name_u03 - logs.

    Note: The DB volume naming convention must strictly follow the format stated above to ensure that the automation works correctly.

This completes the C-Series controller configuration for Oracle. The resulting setup can also be cross-checked from the cluster shell, as in the sketch below.
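A sketch of that cross-check follows; svm_ora stands in for whatever name you gave the SVM in step 8:

    # Log in to the cluster management IP via ssh, then run:
    network interface show -vserver svm_ora
    vserver nfs show -vserver svm_ora
    volume show -vserver svm_ora
    vserver export-policy rule show -vserver svm_ora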

Automation parameter files

Details

The Ansible playbooks execute database installation and configuration tasks with predefined parameters. For this Oracle automation solution, there are three user-defined parameter files that need user input before playbook execution.

  • hosts - defines the targets that the automation playbook runs against.

  • vars/vars.yml - the global variable file that defines variables that apply to all targets.

  • host_vars/host_name.yml - the local variable file that defines variables that apply only to a named target. In our use case, these are the Oracle DB servers.

In addition to these user-defined variable files, there are several default variable files that contain default parameters that generally do not require changes. The following sections show how to configure the user-defined variable files.

Parameter files configuration

Details
  1. Ansible target hosts file configuration:

    # Enter the Oracle server names to be deployed, one per line, followed by each server's public IP address and the ssh private key of the admin user for that server.
    [oracle]
    ora_01 ansible_host=10.61.180.21 ansible_ssh_private_key_file=ora_01.pem
    ora_02 ansible_host=10.61.180.23 ansible_ssh_private_key_file=ora_02.pem
  2. Global vars/vars.yml file configuration

    ######################################################################
    ###### Oracle 19c deployment user configuration variables       ######
    ###### Consolidate all variables from ONTAP, linux and oracle   ######
    ######################################################################
    
    ###########################################
    ### ONTAP env specific config variables ###
    ###########################################
    
    # Prerequisite to create three volumes in NetApp ONTAP storage from System Manager or cloud dashboard with following naming convention:
    # db_hostname_u01 - Oracle binary
    # db_hostname_u02 - Oracle data
    # db_hostname_u03 - Oracle redo
    # It is important to strictly follow the naming convention or the automation will fail.
    
    
    ###########################################
    ### Linux env specific config variables ###
    ###########################################
    
    redhat_sub_username: XXXXXXXX
    redhat_sub_password: XXXXXXXX
    
    
    ####################################################
    ### DB env specific install and config variables ###
    ####################################################
    
    # Database domain name
    db_domain: solutions.netapp.com
    
    # Set initial password for all required Oracle passwords. Change them after installation.
    initial_pwd_all: XXXXXXXX
  3. Local DB server host_vars/host_name.yml configuration, such as ora_01.yml, ora_02.yml …

    # User configurable Oracle host specific parameters
    
    # Enter container database SID. By default, a container DB is created with 3 PDBs within the CDB
    oracle_sid: NTAP1
    
    # Enter the database shared memory size (SGA). The CDB is created with SGA set to 75% of memory_limit, in MB. The grand total of SGA should not exceed 75% of the available RAM on the node.
    memory_limit: 8192
    
    # Local NFS LIF IP address to access database volumes
    nfs_lif: 172.30.136.68
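Because vars.yml and the host_vars files hold credentials in plain text, one option (not part of the toolkit itself, but standard Ansible practice) is to protect them with Ansible Vault. A minimal sketch:

    # Encrypt the global variable file in place; you are prompted for a vault password
    ansible-vault encrypt vars/vars.yml

    # Any subsequent playbook run then needs the vault password
    ansible-playbook -i hosts 0-all_playbook.yml -u admin -e @vars/vars.yml --ask-vault-pass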

Playbook execution

Details

There are a total of five playbooks in the automation toolkit. Each performs a different block of tasks and serves a different purpose.

0-all_playbook.yml - executes playbooks 1-4 in one playbook run.
1-ansible_requirements.yml - sets up the Ansible controller with the required libs and collections.
2-linux_config.yml - executes Linux kernel configuration on the Oracle DB servers.
4-oracle_config.yml - installs and configures Oracle on the DB servers and creates a container database.
5-destroy.yml - optional, undoes the environment to dismantle all.

There are three options for running the deployment playbooks with the following commands; a fourth command undoes the environment. A connectivity check sketch follows this list.

  1. Execute all deployment playbooks in one combined run.

    ansible-playbook -i hosts 0-all_playbook.yml -u admin -e @vars/vars.yml
  2. Execute the playbooks one at a time in numeric order (1, 2, 4).

    ansible-playbook -i hosts 1-ansible_requirements.yml -u admin -e @vars/vars.yml
    ansible-playbook -i hosts 2-linux_config.yml -u admin -e @vars/vars.yml
    ansible-playbook -i hosts 4-oracle_config.yml -u admin -e @vars/vars.yml
  3. Execute 0-all_playbook.yml with a tag.

    ansible-playbook -i hosts 0-all_playbook.yml -u admin -e @vars/vars.yml -t ansible_requirements
    ansible-playbook -i hosts 0-all_playbook.yml -u admin -e @vars/vars.yml -t linux_config
    ansible-playbook -i hosts 0-all_playbook.yml -u admin -e @vars/vars.yml -t oracle_config
  4. Undo the environment

    ansible-playbook -i hosts 5-destroy.yml -u admin -e @vars/vars.yml
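Before the first run, it helps to confirm that the Ansible controller can reach every target defined in the hosts file. A quick sketch using Ansible's ad-hoc ping module:

    ansible -i hosts oracle -m ping -u admin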

Post execution validation

Details

After the playbook run, log in to the Oracle DB server VM to validate that Oracle is installed and configured and that a container database was created successfully. The following is an example of Oracle database validation on DB VM ora_01 or ora_02.

  1. Validate NFS mounts

    [admin@ora_01 ~]$ cat /etc/fstab
    
    #
    # /etc/fstab
    # Created by anaconda on Wed Oct 18 19:43:31 2023
    #
    # Accessible filesystems, by reference, are maintained under '/dev/disk/'.
    # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
    #
    # After editing this file, run 'systemctl daemon-reload' to update systemd
    # units generated from this file.
    #
    /dev/mapper/rhel-root   /                       xfs     defaults        0 0
    UUID=aff942c4-b224-4b62-807d-6a5c22f7b623 /boot                   xfs     defaults        0 0
    /dev/mapper/rhel-swap   none                    swap    defaults        0 0
    /root/swapfile swap swap defaults 0 0
    172.21.21.100:/ora_01_u01 /u01 nfs rw,bg,hard,vers=3,proto=tcp,timeo=600,rsize=65536,wsize=65536 0 0
    172.21.21.100:/ora_01_u02 /u02 nfs rw,bg,hard,vers=3,proto=tcp,timeo=600,rsize=65536,wsize=65536 0 0
    172.21.21.100:/ora_01_u03 /u03 nfs rw,bg,hard,vers=3,proto=tcp,timeo=600,rsize=65536,wsize=65536 0 0
    
    
    [admin@ora_01 tmp]$ df -h
    Filesystem                 Size  Used Avail Use% Mounted on
    devtmpfs                   7.7G     0  7.7G   0% /dev
    tmpfs                      7.8G     0  7.8G   0% /dev/shm
    tmpfs                      7.8G   18M  7.8G   1% /run
    tmpfs                      7.8G     0  7.8G   0% /sys/fs/cgroup
    /dev/mapper/rhel-root       44G   28G   17G  62% /
    /dev/sda1                 1014M  258M  757M  26% /boot
    tmpfs                      1.6G   12K  1.6G   1% /run/user/42
    tmpfs                      1.6G  4.0K  1.6G   1% /run/user/1000
    172.21.21.100:/ora_01_u01   50G  8.7G   42G  18% /u01
    172.21.21.100:/ora_01_u02  200G  384K  200G   1% /u02
    172.21.21.100:/ora_01_u03  100G  320K  100G   1% /u03
    
    [admin@ora_02 ~]$ df -h
    Filesystem                 Size  Used Avail Use% Mounted on
    devtmpfs                   7.7G     0  7.7G   0% /dev
    tmpfs                      7.8G     0  7.8G   0% /dev/shm
    tmpfs                      7.8G   18M  7.8G   1% /run
    tmpfs                      7.8G     0  7.8G   0% /sys/fs/cgroup
    /dev/mapper/rhel-root       44G   28G   17G  63% /
    /dev/sda1                 1014M  258M  757M  26% /boot
    tmpfs                      1.6G   12K  1.6G   1% /run/user/42
    tmpfs                      1.6G  4.0K  1.6G   1% /run/user/1000
    172.21.21.101:/ora_02_u01   50G  7.8G   43G  16% /u01
    172.21.21.101:/ora_02_u02  200G  320K  200G   1% /u02
    172.21.21.101:/ora_02_u03  100G  320K  100G   1% /u03
  2. Validate Oracle listener

    [admin@ora_02 ~]$ sudo su
    [root@ora_02 admin]# su - oracle
    [oracle@ora_02 ~]$ lsnrctl status listener.ntap2
    
    LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 29-MAY-2024 12:13:30
    
    Copyright (c) 1991, 2022, Oracle.  All rights reserved.
    
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ora_02.cie.netapp.com)(PORT=1521)))
    STATUS of the LISTENER
    ------------------------
    Alias                     LISTENER.NTAP2
    Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
    Start Date                23-MAY-2024 16:13:03
    Uptime                    5 days 20 hr. 0 min. 26 sec
    Trace Level               off
    Security                  ON: Local OS Authentication
    SNMP                      OFF
    Listener Parameter File   /u01/app/oracle/product/19.0.0/NTAP2/network/admin/listener.ora
    Listener Log File         /u01/app/oracle/diag/tnslsnr/ora_02/listener.ntap2/alert/log.xml
    Listening Endpoints Summary...
      (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ora_02.cie.netapp.com)(PORT=1521)))
      (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
      (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=ora_02.cie.netapp.com)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/product/19.0.0/NTAP2/admin/NTAP2/xdb_wallet))(Presentation=HTTP)(Session=RAW))
    Services Summary...
    Service "192551f1d7e65fc3e06308b43d0a63ae.solutions.netapp.com" has 1 instance(s).
      Instance "NTAP2", status READY, has 1 handler(s) for this service...
    Service "1925529a43396002e06308b43d0a2d5a.solutions.netapp.com" has 1 instance(s).
      Instance "NTAP2", status READY, has 1 handler(s) for this service...
    Service "1925530776b76049e06308b43d0a49c3.solutions.netapp.com" has 1 instance(s).
      Instance "NTAP2", status READY, has 1 handler(s) for this service...
    Service "NTAP2.solutions.netapp.com" has 1 instance(s).
      Instance "NTAP2", status READY, has 1 handler(s) for this service...
    Service "NTAP2XDB.solutions.netapp.com" has 1 instance(s).
      Instance "NTAP2", status READY, has 1 handler(s) for this service...
    Service "ntap2_pdb1.solutions.netapp.com" has 1 instance(s).
      Instance "NTAP2", status READY, has 1 handler(s) for this service...
    Service "ntap2_pdb2.solutions.netapp.com" has 1 instance(s).
      Instance "NTAP2", status READY, has 1 handler(s) for this service...
    Service "ntap2_pdb3.solutions.netapp.com" has 1 instance(s).
      Instance "NTAP2", status READY, has 1 handler(s) for this service...
    The command completed successfully
    [oracle@ora_02 ~]$
  3. Validate Oracle database and dNFS (additional dNFS checks follow this list)

    [oracle@ora-01 ~]$ cat /etc/oratab
    #
    # This file is used by ORACLE utilities.  It is created by root.sh
    # and updated by either Database Configuration Assistant while creating
    # a database or ASM Configuration Assistant while creating ASM instance.
    
    # A colon, ':', is used as the field terminator.  A new line terminates
    # the entry.  Lines beginning with a pound sign, '#', are comments.
    #
    # Entries are of the form:
    #   $ORACLE_SID:$ORACLE_HOME:<N|Y>:
    #
    # The first and second fields are the system identifier and home
    # directory of the database respectively.  The third field indicates
    # to the dbstart utility that the database should , "Y", or should not,
    # "N", be brought up at system boot time.
    #
    # Multiple entries with the same $ORACLE_SID are not allowed.
    #
    #
    NTAP1:/u01/app/oracle/product/19.0.0/NTAP1:Y
    
    
    [oracle@ora-01 ~]$ sqlplus / as sysdba
    
    SQL*Plus: Release 19.0.0.0.0 - Production on Thu Feb 1 16:37:51 2024
    Version 19.18.0.0.0
    
    Copyright (c) 1982, 2022, Oracle.  All rights reserved.
    
    
    Connected to:
    Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
    Version 19.18.0.0.0
    
    SQL> select name, open_mode, log_mode from v$database;
    
    NAME      OPEN_MODE            LOG_MODE
    --------- -------------------- ------------
    NTAP1     READ WRITE           ARCHIVELOG
    
    SQL> show pdbs
    
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    ---------- ------------------------------ ---------- ----------
             2 PDB$SEED                       READ ONLY  NO
             3 NTAP1_PDB1                     READ WRITE NO
             4 NTAP1_PDB2                     READ WRITE NO
             5 NTAP1_PDB3                     READ WRITE NO
    SQL> select name from v$datafile;
    
    NAME
    --------------------------------------------------------------------------------
    /u02/oradata/NTAP1/system01.dbf
    /u02/oradata/NTAP1/sysaux01.dbf
    /u02/oradata/NTAP1/undotbs01.dbf
    /u02/oradata/NTAP1/pdbseed/system01.dbf
    /u02/oradata/NTAP1/pdbseed/sysaux01.dbf
    /u02/oradata/NTAP1/users01.dbf
    /u02/oradata/NTAP1/pdbseed/undotbs01.dbf
    /u02/oradata/NTAP1/NTAP1_pdb1/system01.dbf
    /u02/oradata/NTAP1/NTAP1_pdb1/sysaux01.dbf
    /u02/oradata/NTAP1/NTAP1_pdb1/undotbs01.dbf
    /u02/oradata/NTAP1/NTAP1_pdb1/users01.dbf
    
    NAME
    --------------------------------------------------------------------------------
    /u02/oradata/NTAP1/NTAP1_pdb2/system01.dbf
    /u02/oradata/NTAP1/NTAP1_pdb2/sysaux01.dbf
    /u02/oradata/NTAP1/NTAP1_pdb2/undotbs01.dbf
    /u02/oradata/NTAP1/NTAP1_pdb2/users01.dbf
    /u02/oradata/NTAP1/NTAP1_pdb3/system01.dbf
    /u02/oradata/NTAP1/NTAP1_pdb3/sysaux01.dbf
    /u02/oradata/NTAP1/NTAP1_pdb3/undotbs01.dbf
    /u02/oradata/NTAP1/NTAP1_pdb3/users01.dbf
    
    19 rows selected.
    
    SQL> select name from v$controlfile;
    
    NAME
    --------------------------------------------------------------------------------
    /u02/oradata/NTAP1/control01.ctl
    /u03/orareco/NTAP1/control02.ctl
    
    SQL> select member from v$logfile;
    
    MEMBER
    --------------------------------------------------------------------------------
    /u03/orareco/NTAP1/onlinelog/redo03.log
    /u03/orareco/NTAP1/onlinelog/redo02.log
    /u03/orareco/NTAP1/onlinelog/redo01.log
    
    SQL> select svrname, dirname from v$dnfs_servers;
    
    SVRNAME
    --------------------------------------------------------------------------------
    DIRNAME
    --------------------------------------------------------------------------------
    172.21.21.100
    /ora_01_u02
    
    172.21.21.100
    /ora_01_u03
    
    172.21.21.100
    /ora_01_u01
  4. Log in to Oracle Enterprise Manager Express to validate the database.

    This image provides the login screen for Oracle Enterprise Manager Express
    This image provides the container database view from Oracle Enterprise Manager Express
    This image provides the container database view from Oracle Enterprise Manager Express
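Beyond the v$dnfs_servers query in step 3, dNFS activity can be confirmed from the shell. A sketch, assuming SID NTAP1 and the default diagnostic destination under /u01/app/oracle:

    # The alert log records the ODM library when dNFS is active, e.g.
    #   "Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 6.0"
    grep -i "Direct NFS" /u01/app/oracle/diag/rdbms/ntap1/NTAP1/trace/alert_NTAP1.log

    # Open dNFS channels per server and path, from SQL*Plus as the oracle user
    sqlplus -s / as sysdba <<EOF
    select svrname, path from v\$dnfs_channels;
    EOF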

Oracle backup, restore, and clone with SnapCenter

Details

NetApp recommends the SnapCenter UI tool to manage Oracle databases deployed on C-Series. Refer to TR-4979 Simplified, Self-managed Oracle in VMware Cloud on AWS with guest-mounted FSx ONTAP, section Oracle backup, restore, and clone with SnapCenter, for details on setting up SnapCenter and executing the database backup, restore, and clone workflows.

Where to find additional information

To learn more about the information described in this document, review the following documents and/or websites: