
Step-by-step deployment procedure


This page describes the automated data protection of Oracle 19c on NetApp ONTAP storage.

AWX/Tower Oracle Data Protection

Create the inventory, groups, hosts, and credentials for your environment

This section describes the setup of inventory, groups, hosts, and access credentials in AWX/Ansible Tower that prepare the environment for consuming NetApp automated solutions.

  1. Configure the inventory.

    1. Navigate to Resources → Inventories → Add, and click Add Inventory.

    2. Provide the name and organization details, and click Save.

    3. On the Inventories page, click the newly created inventory.

    4. Navigate to the Groups sub-menu and click Add.

    5. Provide the name oracle for your first group and click Save.

    6. Repeat the process for a second group called dr_oracle.

    7. Select the oracle group created, go to the Hosts sub-menu and click Add New Host.

    8. Provide the management IP address of the source Oracle host, and click Save.

    9. Repeat the process for the dr_oracle group, adding the DR/destination Oracle host's management IP/hostname. (An equivalent static inventory is sketched below for reference.)
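
If you also maintain hosts outside of AWX/Tower, the same grouping can be expressed as a static Ansible inventory. A minimal sketch, with placeholder addresses standing in for your hosts' management IPs:

  # inventory.yml -- hypothetical static equivalent of the AWX inventory above
  all:
    children:
      oracle:
        hosts:
          192.168.1.10:    # source Oracle host management IP (placeholder)
      dr_oracle:
        hosts:
          192.168.1.20:    # DR/destination Oracle host management IP (placeholder)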

Note Below are instructions for creating the credential types and credentials for either on-prem ONTAP or CVO on AWS.
On-Prem
  1. Configure the credentials.

  2. Create Credential Types. For solutions involving ONTAP, you must configure the credential type to match username and password entries.

    1. Navigate to Administration → Credential Types, and click Add.

    2. Provide the name and description.

    3. Paste the following content in Input Configuration:

      fields:
        - id: dst_cluster_username
          type: string
          label: Destination Cluster Username
        - id: dst_cluster_password
          type: string
          label: Destination Cluster Password
          secret: true
        - id: src_cluster_username
          type: string
          label: Source Cluster Username
        - id: src_cluster_password
          type: string
          label: Source Cluster Password
          secret: true
    4. Paste the following content into Injector Configuration and then click Save (these entries surface in playbooks as extra_vars; see the sketch after this list):

      extra_vars:
        dst_cluster_username: '{{ dst_cluster_username }}'
        dst_cluster_password: '{{ dst_cluster_password }}'
        src_cluster_username: '{{ src_cluster_username }}'
        src_cluster_password: '{{ src_cluster_password }}'
  3. Create Credential for ONTAP.

    1. Navigate to Resources → Credentials, and click Add.

    2. Enter the name and organization details for the ONTAP credentials.

    3. Select the credential type that was created in the previous step.

    4. Under Type Details, enter the Username and Password for your Source and Destination Clusters.

    5. Click Save.

  4. Create Credential for Oracle.

    1. Navigate to Resources → Credentials, and click Add.

    2. Enter the name and organization details for Oracle.

    3. Select the Machine credential type.

    4. Under Type Details, enter the Username and Password for the Oracle hosts.

    5. Select the correct Privilege Escalation Method, and enter the username and password.

    6. Click Save.

    7. If needed, repeat the process to create a separate credential for the dr_oracle host.
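
For reference, the credential type's injector exposes these values to playbooks as ordinary variables. A minimal sketch of how a task might consume them, assuming the netapp.ontap collection is installed (the module, subset, and use of dst_cluster_ip here are illustrative, not the solution's actual tasks):

  # Hypothetical task showing how the injected extra_vars are consumed
  - name: Query SnapMirror relationships on the destination cluster
    netapp.ontap.na_ontap_rest_info:
      hostname: "{{ dst_cluster_ip }}"
      username: "{{ dst_cluster_username }}"   # injected by the credential
      password: "{{ dst_cluster_password }}"   # injected by the credential
      validate_certs: false                    # assumption: ca_signed_certs is "false"
      gather_subset:
        - snapmirror/relationships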

CVO
  1. Configure the credentials.

  2. Create credential types. For solutions involving ONTAP, you must configure the credential type to match username and password entries; we also add entries for Cloud Central and AWS.

    1. Navigate to Administration → Credential Types, and click Add.

    2. Provide the name and description.

    3. Paste the following content in Input Configuration:

      fields:
        - id: dst_cluster_username
          type: string
          label: CVO Username
        - id: dst_cluster_password
          type: string
          label: CVO Password
          secret: true
        - id: cvo_svm_password
          type: string
          label: CVO SVM Password
          secret: true
        - id: src_cluster_username
          type: string
          label: Source Cluster Username
        - id: src_cluster_password
          type: string
          label: Source Cluster Password
          secret: true
        - id: regular_id
          type: string
          label: Cloud Central ID
          secret: true
        - id: email_id
          type: string
          label: Cloud Manager Email
          secret: true
        - id: cm_password
          type: string
          label: Cloud Manager Password
          secret: true
        - id: access_key
          type: string
          label: AWS Access Key
          secret: true
        - id: secret_key
          type: string
          label: AWS Secret Key
          secret: true
        - id: token
          type: string
          label: Cloud Central Refresh Token
          secret: true
    4. Paste the following content into Injector Configuration and click Save:

      extra_vars:
        dst_cluster_username: '{{ dst_cluster_username }}'
        dst_cluster_password: '{{ dst_cluster_password }}'
        cvo_svm_password: '{{ cvo_svm_password }}'
        src_cluster_username: '{{ src_cluster_username }}'
        src_cluster_password: '{{ src_cluster_password }}'
        regular_id: '{{ regular_id }}'
        email_id: '{{ email_id }}'
        cm_password: '{{ cm_password }}'
        access_key: '{{ access_key }}'
        secret_key: '{{ secret_key }}'
        token: '{{ token }}'
  3. Create Credential for ONTAP/CVO/AWS.

    1. Navigate to Resources → Credentials, and click Add.

    2. Enter the name and organization details for the ONTAP credentials.

    3. Select the credential type that was created in the previous step.

    4. Under Type Details, enter the Username and Password for your Source and CVO Clusters, Cloud Central/Manager, the AWS Access/Secret Key, and the Cloud Central Refresh Token.

    5. Click Save.

  4. Create Credential for Oracle (Source).

    1. Navigate to Resources → Credentials, and click Add.

    2. Enter the name and organization details for the Oracle host.

    3. Select the Machine credential type.

    4. Under Type Details, enter the Username and Password for the Oracle hosts.

    5. Select the correct Privilege Escalation Method, and enter the username and password.

    6. Click Save.

  5. Create Credential for the Oracle Destination.

    1. Navigate to Resources → Credentials, and click Add.

    2. Enter the name and organization details for the DR Oracle host.

    3. Select the Machine credential type.

    4. Under Type Details, enter the Username (ec2-user, or your username if you changed it from the default) and the SSH Private Key.

    5. Select the correct Privilege Escalation Method (sudo), and enter the username and password if needed.

    6. Click Save.

Create a project

  1. Go to Resources → Projects, and click Add.

    1. Enter the name and organization details.

    2. Select Git in the Source Control Credential Type field.

    3. Enter https://github.com/NetApp-Automation/na_oracle19c_data_protection.git as the source control URL.

    4. Click Save.

    5. The project might need to sync occasionally when the source code changes.
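
Because the project must re-sync when the repository changes, you can also trigger the sync from automation rather than the UI. A minimal sketch using the awx.awx collection (an assumption; controller credentials are taken from the CONTROLLER_* environment variables, and the project name must match what you entered above):

  # sync_project.yml -- hypothetical helper play to refresh the project from Git
  - hosts: localhost
    gather_facts: false
    tasks:
      - name: Trigger an SCM update for the data protection project
        awx.awx.project_update:
          project: "na_oracle19c_data_protection"   # the project name you chose
          wait: true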

Configure global variables

Variables defined in this section apply to all Oracle hosts, databases, and the ONTAP cluster.

  1. Enter your environment-specific parameters into the following embedded global variables or vars form.

Note The placeholder values (shown in blue in the web form) must be changed to match your environment.
On-Prem
# Oracle Data Protection global user configuration variables
# Ontap env specific config variables
hosts_group: "ontap"
ca_signed_certs: "false"

# Inter-cluster LIF details
src_nodes:
  - "AFF-01"
  - "AFF-02"

dst_nodes:
  - "DR-AFF-01"
  - "DR-AFF-02"

create_source_intercluster_lifs: "yes"

source_intercluster_network_port_details:
  using_dedicated_ports: "yes"
  using_ifgrp: "yes"
  using_vlans: "yes"
  failover_for_shared_individual_ports: "yes"
  ifgrp_name: "a0a"
  vlan_id: "10"
  ports:
    - "e0b"
    - "e0g"
  broadcast_domain: "NFS"
  ipspace: "Default"
  failover_group_name: "iclifs"

source_intercluster_lif_details:
  - name: "icl_1"
    address: "10.0.0.1"
    netmask: "255.255.255.0"
    home_port: "a0a-10"
    node: "AFF-01"
  - name: "icl_2"
    address: "10.0.0.2"
    netmask: "255.255.255.0"
    home_port: "a0a-10"
    node: "AFF-02"

create_destination_intercluster_lifs: "yes"

destination_intercluster_network_port_details:
  using_dedicated_ports: "yes"
  using_ifgrp: "yes"
  using_vlans: "yes"
  failover_for_shared_individual_ports: "yes"
  ifgrp_name: "a0a"
  vlan_id: "10"
  ports:
    - "e0b"
    - "e0g"
  broadcast_domain: "NFS"
  ipspace: "Default"
  failover_group_name: "iclifs"

destination_intercluster_lif_details:
  - name: "icl_1"
    address: "10.0.0.3"
    netmask: "255.255.255.0"
    home_port: "a0a-10"
    node: "DR-AFF-01"
  - name: "icl_2"
    address: "10.0.0.4"
    netmask: "255.255.255.0"
    home_port: "a0a-10"
    node: "DR-AFF-02"

# Variables for SnapMirror Peering
passphrase: "your-passphrase"

# Source & Destination List
dst_cluster_name: "dst-cluster-name"
dst_cluster_ip: "dst-cluster-ip"
dst_vserver: "dst-vserver"
dst_nfs_lif: "dst-nfs-lif"
src_cluster_name: "src-cluster-name"
src_cluster_ip: "src-cluster-ip"
src_vserver: "src-vserver"

# Variable for Oracle Volumes and SnapMirror Details
cg_snapshot_name_prefix: "oracle"
src_orabinary_vols:
  - "binary_vol"
src_db_vols:
  - "db_vol"
src_archivelog_vols:
  - "log_vol"
snapmirror_policy: "async_policy_oracle"

# Export Policy Details
export_policy_details:
  name: "nfs_export_policy"
  client_match: "0.0.0.0/0"
  ro_rule: "sys"
  rw_rule: "sys"

# Linux env specific config variables
mount_points:
  - "/u01"
  - "/u02"
  - "/u03"
hugepages_nr: "1234"
redhat_sub_username: "xxx"
redhat_sub_password: "xxx"

# DB env specific install and config variables
recovery_type: "scn"
control_files:
  - "/u02/oradata/CDB2/control01.ctl"
  - "/u03/orareco/CDB2/control02.ctl"
CVO
###########################################
### Ontap env specific config variables ###
###########################################

#Inventory group name
#Default inventory group name - "ontap"
#Change only if you are changing the group name either in inventory/hosts file or in inventory groups in case of AWX/Tower
hosts_group: "ontap"

#CA_signed_certificates (ONLY CHANGE to "true" IF YOU ARE USING CA SIGNED CERTIFICATES)
ca_signed_certs: "false"

#Names of the Nodes in the Source ONTAP Cluster
src_nodes:
  - "AFF-01"
  - "AFF-02"

#Names of the Nodes in the Destination CVO Cluster
dst_nodes:
  - "DR-AFF-01"
  - "DR-AFF-02"

#Define whether or not to create intercluster lifs on source cluster (ONLY CHANGE to "No" IF YOU HAVE ALREADY CREATED THE INTERCLUSTER LIFS)
create_source_intercluster_lifs: "yes"

source_intercluster_network_port_details:
  using_dedicated_ports: "yes"
  using_ifgrp: "yes"
  using_vlans: "yes"
  failover_for_shared_individual_ports: "yes"
  ifgrp_name: "a0a"
  vlan_id: "10"
  ports:
    - "e0b"
    - "e0g"
  broadcast_domain: "NFS"
  ipspace: "Default"
  failover_group_name: "iclifs"

source_intercluster_lif_details:
  - name: "icl_1"
    address: "10.0.0.1"
    netmask: "255.255.255.0"
    home_port: "a0a-10"
    node: "AFF-01"
  - name: "icl_2"
    address: "10.0.0.2"
    netmask: "255.255.255.0"
    home_port: "a0a-10"
    node: "AFF-02"

###########################################
### CVO Deployment Variables ###
###########################################

####### Access Keys Variables ######

# Region where your CVO will be deployed.
region_deploy: "us-east-1"

########### CVO and Connector Vars ########

# AWS Managed Policy required to give permission for IAM role creation.
aws_policy: "arn:aws:iam::1234567:policy/OCCM"

# Specify your AWS role name; a new role is created if one does not already exist.
aws_role_name: "arn:aws:iam::1234567:policy/OCCM"

# Name your connector.
connector_name: "awx_connector"

# Name of the key pair generated in AWS.
key_pair: "key_pair"

# Name of the Subnet that has the range of IP addresses in your VPC.
subnet: "subnet-12345"

# ID of your AWS security group that allows access to on-prem resources.
security_group: "sg-123123123"

# Your Cloud Manager Account ID.
account: "account-A23123A"

# Name of your CVO instance.
cvo_name: "test_cvo"

# ID of the VPC in AWS.
vpc: "vpc-123123123"

###################################################################################################
# Variables for - Add on-prem ONTAP to Connector in Cloud Manager
###################################################################################################

# For federated users, the Client ID from the API Authentication section of Cloud Central, used to generate the access token.
sso_id: "123123123123123123123"

# For regular access with username and password, specify "pass" as the connector_access. For SSO users, use "refresh_token".
connector_access: "pass"

####################################################################################################
# Variables for SnapMirror Peering
####################################################################################################
passphrase: "your-passphrase"

#####################################################################################################
# Source & Destination List
#####################################################################################################
#Please Enter Destination Cluster Name
dst_cluster_name: "dst-cluster-name"

#Please Enter Destination Cluster IP (Once CVO is Created, Add this Variable to all templates)
dst_cluster_ip: "dst-cluster-ip"

#Please Enter Destination SVM to create mirror relationship
dst_vserver: "dst-vserver"

#Please Enter the NFS LIF for the dst vserver (Once CVO is Created, Add this Variable to all templates)
dst_nfs_lif: "dst-nfs-lif"

#Please Enter Source Cluster Name
src_cluster_name: "src-cluster-name"

#Please Enter Source Cluster IP
src_cluster_ip: "src-cluster-ip"

#Please Enter Source SVM
src_vserver: "src-vserver"

#####################################################################################################
# Variable for Oracle Volumes and SnapMirror Details
#####################################################################################################
#Please Enter Source Snapshot Prefix Name
cg_snapshot_name_prefix: "oracle"

#Please Enter Source Oracle Binary Volume(s)
src_orabinary_vols:
  - "binary_vol"
#Please Enter Source Database Volume(s)
src_db_vols:
  - "db_vol"
#Please Enter Source Archive Volume(s)
src_archivelog_vols:
  - "log_vol"
#Please Enter Destination Snapmirror Policy
snapmirror_policy: "async_policy_oracle"

#####################################################################################################
# Export Policy Details
#####################################################################################################
#Enter the destination export policy details (Once CVO is Created Add this Variable to all templates)
export_policy_details:
  name: "nfs_export_policy"
  client_match: "0.0.0.0/0"
  ro_rule: "sys"
  rw_rule: "sys"

#####################################################################################################
### Linux env specific config variables ###
#####################################################################################################

#NFS Mount points for Oracle DB volumes
mount_points:
  - "/u01"
  - "/u02"
  - "/u03"

# Set to at most 75% of node memory divided by 2MB (the hugepage size). Consider how many databases are hosted on the node and how much RAM is allocated to each DB.
# Leave this blank if hugepages are not configured on the host.
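# Example (hypothetical sizing): a database with a 16GB SGA needs
# 16384MB / 2MB = 8192 hugepages, so set hugepages_nr: "8192" -- within the
# 75% guideline on any node with at least ~22GB of RAM (8192 x 2MB = 16GB).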
hugepages_nr: "1234"

# RedHat subscription username and password
redhat_sub_username: "xxx"
redhat_sub_password: "xxx"

####################################################
### DB env specific install and config variables ###
####################################################
#Recovery Type (leave as scn)
recovery_type: "scn"

#Oracle Control Files
control_files:
  - "/u02/oradata/CDB2/control01.ctl"
  - "/u03/orareco/CDB2/control02.ctl"

Automation Playbooks

There are four separate playbooks that need to be run.

  1. Playbook for Setting up your environment, On-Prem or CVO.

  2. Playbook for replicating Oracle Binaries and Databases on a schedule

  3. Playbook for replicating Oracle Logs on a schedule

  4. Playbook for Recovering your database on a destination host

ONTAP/CVO Setup

Configure and launch the job template.

  1. Create the job template.

    1. Navigate to Resources → Templates → Add and click Add Job Template.

    2. Enter the name ONTAP/CVO Setup.

    3. Select the Job type; Run configures the system based on a playbook.

    4. Select the corresponding inventory, project, playbook, and credentials for the playbook.

    5. Select the ontap_setup.yml playbook for an On-Prem environment or select the cvo_setup.yml for replicating to a CVO instance.

    6. Paste the global variables from the Configure Global Variables section into the Template Variables field under the YAML tab.

    7. Click Save.

  2. Launch the job template.

    1. Navigate to Resources → Templates.

    2. Click the desired template and then click Launch.

      Note We will copy this template for the other playbooks.
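
Templates can also be launched outside the UI. A minimal sketch using the awx.awx collection (an assumption, as above; controller credentials come from the CONTROLLER_* environment variables):

  # launch_setup.yml -- hypothetical play to launch the setup template via the API
  - hosts: localhost
    gather_facts: false
    tasks:
      - name: Launch the ONTAP/CVO Setup job template and wait for it to finish
        awx.awx.job_launch:
          job_template: "ONTAP/CVO Setup"
          wait: true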
Replication For Binary and Database Volumes

Scheduling the Binary and Database Replication Playbook

Configure and launch the job template.

  1. Copy the previously created job template.

    1. Navigate to Resources → Templates.

    2. Find the ONTAP/CVO Setup template, and on the far right click Copy Template.

    3. Click Edit Template on the copied template, and change the name to Binary and Database Replication Playbook.

    4. Keep the same inventory, project, and credentials for the template.

    5. Select ora_replication_cg.yml as the playbook to be executed.

    6. The variables remain the same, but the CVO cluster IP must be set in the variable dst_cluster_ip.

    7. Click Save.

  2. Schedule the job template.

    1. Navigate to Resources → Templates.

    2. Click the Binary and Database Replication Playbook template and then click Schedules at the top set of options.

    3. Click Add and provide the name Schedule for Binary and Database Replication. Choose a Start date/time at the beginning of the hour, your Local time zone, and a Run frequency. The run frequency determines how often the SnapMirror replication is updated.

      Note A separate schedule will be created for the Log volume replication, so that it can be replicated on a more frequent cadence.
Replication for Log Volumes

Scheduling the Log Replication Playbook

Configure and launch the job template.

  1. Copy the previously created job template.

    1. Navigate to Resources → Templates.

    2. Find the ONTAP/CVO Setup template, and on the far right click Copy Template.

    3. Click Edit Template on the copied template, and change the name to Log Replication Playbook.

    4. Keep the same inventory, project, and credentials for the template.

    5. Select ora_replication_logs.yml as the playbook to be executed.

    6. The variables remain the same, but the CVO cluster IP must be set in the variable dst_cluster_ip.

    7. Click Save.

  2. Schedule the job template.

    1. Navigate to Resources → Templates.

    2. Click the Log Replication Playbook template and then click Schedules at the top set of options.

    3. Click Add and provide the name Schedule for Log Replication. Choose a Start date/time at the beginning of the hour, your Local time zone, and a Run frequency. The run frequency determines how often the SnapMirror replication is updated.

    Note It is recommended to set the log schedule to update every hour so that the database can be recovered to within one hour of a failure.
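
Schedules can also be managed programmatically; AWX/Tower stores each schedule as an iCal recurrence rule. A minimal sketch using the awx.awx collection (an assumption; the time zone and names below are illustrative):

  # schedule_logs.yml -- hypothetical play creating the hourly log replication schedule
  - hosts: localhost
    gather_facts: false
    tasks:
      - name: Schedule the Log Replication Playbook to run every hour
        awx.awx.schedule:
          name: "Schedule for Log Replication"
          unified_job_template: "Log Replication Playbook"
          rrule: "DTSTART;TZID=America/New_York:20240101T000000 RRULE:FREQ=HOURLY;INTERVAL=1"
          state: present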
Restore and Recover Database

Configuring the Restore and Recovery Playbook

Configure the job template.

  1. Copy the previously created job template.

    1. Navigate to Resources → Templates.

    2. Find the ONTAP/CVO Setup template, and on the far right click Copy Template.

    3. Click Edit Template on the copied template, and change the name to Restore and Recovery Playbook.

    4. Keep the same inventory, project, and credentials for the template.

    5. Select ora_recovery.yml as the playbook to be executed.

    6. The variables remain the same, but the CVO cluster IP must be set in the variable dst_cluster_ip.

    7. Click Save.

    Note Do not run this playbook until you are ready to restore your database at the remote site.

Recovering Oracle Database

  1. On-premises production Oracle database data volumes are protected via NetApp SnapMirror replication, either to a redundant ONTAP cluster in a secondary data center or to Cloud Volumes ONTAP in the public cloud. In a fully configured disaster recovery environment, recovery compute instances in the secondary data center or public cloud stand by, ready to recover the production database in the event of a disaster. The standby compute instances are kept in sync with the on-prem instances by applying OS kernel patches and upgrades in parallel, in lockstep.

  2. In this solution, the Oracle binary volume is replicated to the target and mounted on the target instance to bring up the Oracle software stack. This approach to recovering Oracle has an advantage over a fresh, last-minute installation of Oracle after a disaster occurs: it guarantees that the Oracle installation is fully in sync with the current on-prem production software installation and patch levels. However, the replicated Oracle binary volume at the recovery site may or may not carry additional software licensing implications, depending on how your licensing is structured with Oracle. Check with your software licensing personnel to assess the potential Oracle licensing requirements before deciding to use this approach.

  3. The standby Oracle host at the destination is configured with the Oracle prerequisites.

  4. The SnapMirror relationships are broken, and the volumes are made writable and mounted on the standby Oracle host.

  5. The Oracle recovery module performs the following tasks to recover and start up Oracle at the recovery site after all DB volumes are mounted on the standby compute instance.

    1. Sync the control file: Duplicate Oracle control files are deployed on different database volumes to protect the critical database control file, with one on the data volume and another on the log volume. Because the data and log volumes are replicated at different frequencies, the control files are out of sync at the time of recovery and must be synchronized.

    2. Relink the Oracle binary: Because the Oracle binary is relocated to a new host, it must be relinked.

    3. Recover the Oracle database: The recovery mechanism retrieves the last System Change Number (SCN) of the last available archived log on the Oracle log volume from the control file and recovers the database to that point, recouping all business transactions that were replicated to the DR site before the failure. The database is then started with a new incarnation to carry on user connections and business transactions at the recovery site.

Note Before running the recovery playbook, make sure the /etc/oratab and /etc/oraInst.loc files have been copied from the source Oracle host to the destination host.
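
One way to stage those two files with Ansible, as a minimal sketch (a hypothetical helper play, not part of the solution's playbooks; the ownership and permissions shown are assumptions to verify against your hosts):

  # stage_oracle_files.yml -- hypothetical play to copy /etc/oratab and
  # /etc/oraInst.loc from the source host to the DR host via the controller
  - hosts: oracle
    gather_facts: false
    become: true
    tasks:
      - name: Fetch the files from the source Oracle host to the controller
        ansible.builtin.fetch:
          src: "{{ item }}"
          dest: /tmp/oracle_dr/    # dest ends with '/', so basenames are kept
          flat: true
        loop:
          - /etc/oratab
          - /etc/oraInst.loc

  - hosts: dr_oracle
    gather_facts: false
    become: true
    tasks:
      - name: Copy the files onto the DR Oracle host
        ansible.builtin.copy:
          src: "/tmp/oracle_dr/{{ item }}"
          dest: "/etc/{{ item }}"
          owner: root             # assumption; match your source host's settings
          mode: "0644"            # assumption; match your source host's settings
        loop:
          - oratab
          - oraInst.loc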