Step-by-step deployment procedure
This page describes automated data protection for Oracle 19c on NetApp ONTAP storage.
AWX/Tower Oracle Data Protection
Create the inventory, group, hosts, and credentials for your environment
This section describes the setup of inventory, groups, hosts, and access credentials in AWX/Ansible Tower that prepare the environment for consuming NetApp automated solutions.
-
Configure the inventory.
-
Navigate to Resources → Inventories → Add, and click Add Inventory.
-
Provide the name and organization details, and click Save.
-
On the Inventories page, click the inventory created.
-
Navigate to the Groups sub-menu and click Add.
-
Provide the name oracle for your first group and click Save.
-
Repeat the process for a second group called dr_oracle.
-
Select the oracle group created, go to the Hosts sub-menu and click Add New Host.
-
Provide the management IP of the source Oracle host, and click Save.
-
Repeat this process for the dr_oracle group, adding the management IP or hostname of the DR/destination Oracle host. For reference, an equivalent standalone inventory is sketched below.
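A minimal standalone Ansible inventory equivalent to the AWX/Tower groups above might look like the following sketch; the two addresses are placeholders for your source and DR Oracle host management IPs.
# Inventory sketch (YAML form); replace the placeholder addresses with your hosts.
all:
  children:
    oracle:
      hosts:
        192.168.0.100:
    dr_oracle:
      hosts:
        192.168.0.200: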
-
The following instructions cover creating the credential types and credentials for either on-premises ONTAP or Cloud Volumes ONTAP (CVO) on AWS.
-
Configure the credentials (on-premises ONTAP).
-
Create Credential Types. For solutions involving ONTAP, you must configure the credential type to match username and password entries.
-
Navigate to Administration → Credential Types, and click Add.
-
Provide the name and description.
-
Paste the following content in Input Configuration:
fields:
  - id: dst_cluster_username
    type: string
    label: Destination Cluster Username
  - id: dst_cluster_password
    type: string
    label: Destination Cluster Password
    secret: true
  - id: src_cluster_username
    type: string
    label: Source Cluster Username
  - id: src_cluster_password
    type: string
    label: Source Cluster Password
    secret: true
-
Paste the following content into Injector Configuration and then click Save:
extra_vars:
  dst_cluster_username: '{{ dst_cluster_username }}'
  dst_cluster_password: '{{ dst_cluster_password }}'
  src_cluster_username: '{{ src_cluster_username }}'
  src_cluster_password: '{{ src_cluster_password }}'
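The injected values become ordinary extra_vars that the playbooks can reference. The following is a minimal, hypothetical sketch of a play consuming them; it assumes the netapp.ontap collection is installed and that src_cluster_ip is supplied from the global variables described later on this page.
# Hypothetical sketch only: authenticate to the source cluster with the injected credentials.
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Gather source cluster information
      netapp.ontap.na_ontap_rest_info:
        hostname: "{{ src_cluster_ip }}"
        username: "{{ src_cluster_username }}"
        password: "{{ src_cluster_password }}"
        https: true
        validate_certs: false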
-
-
Create Credential for ONTAP
-
Navigate to Resources → Credentials, and click Add.
-
Enter the name and organization details for the ONTAP credentials.
-
Select the credential type that was created in the previous step.
-
Under Type Details, enter the Username and Password for your Source and Destination Clusters.
-
Click Save
-
-
Create Credential for Oracle
-
Navigate to Resources → Credentials, and click Add.
-
Enter the name and organization details for Oracle
-
Select the Machine credential type.
-
Under Type Details, enter the Username and Password for the Oracle hosts.
-
Select the correct Privilege Escalation Method, and enter the username and password.
-
Click Save
-
Repeat the process if a different credential is needed for the dr_oracle host.
-
-
Configure the credentials (CVO on AWS).
-
Create credential types. For CVO solutions involving ONTAP, configure the credential type to match username and password entries; entries for Cloud Central and AWS are also added.
-
Navigate to Administration → Credential Types, and click Add.
-
Provide the name and description.
-
Paste the following content in Input Configuration:
fields:
  - id: dst_cluster_username
    type: string
    label: CVO Username
  - id: dst_cluster_password
    type: string
    label: CVO Password
    secret: true
  - id: cvo_svm_password
    type: string
    label: CVO SVM Password
    secret: true
  - id: src_cluster_username
    type: string
    label: Source Cluster Username
  - id: src_cluster_password
    type: string
    label: Source Cluster Password
    secret: true
  - id: regular_id
    type: string
    label: Cloud Central ID
    secret: true
  - id: email_id
    type: string
    label: Cloud Manager Email
    secret: true
  - id: cm_password
    type: string
    label: Cloud Manager Password
    secret: true
  - id: access_key
    type: string
    label: AWS Access Key
    secret: true
  - id: secret_key
    type: string
    label: AWS Secret Key
    secret: true
  - id: token
    type: string
    label: Cloud Central Refresh Token
    secret: true
-
Paste the following content into Injector Configuration and click Save:
extra_vars:
  dst_cluster_username: '{{ dst_cluster_username }}'
  dst_cluster_password: '{{ dst_cluster_password }}'
  cvo_svm_password: '{{ cvo_svm_password }}'
  src_cluster_username: '{{ src_cluster_username }}'
  src_cluster_password: '{{ src_cluster_password }}'
  regular_id: '{{ regular_id }}'
  email_id: '{{ email_id }}'
  cm_password: '{{ cm_password }}'
  access_key: '{{ access_key }}'
  secret_key: '{{ secret_key }}'
  token: '{{ token }}'
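As with the on-premises credential type, these values are injected into the job run as extra_vars. The following is a heavily hedged sketch of how some of them might feed a CVO deployment task; the module and parameter names come from the netapp.cloudmanager collection and should be verified against your installed collection version, and connector_client_id is a hypothetical variable for the Connector's client ID.
# Heavily hedged sketch: possible use of the injected CVO/Cloud Manager variables.
# connector_client_id is hypothetical; verify parameters against your collection version.
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Deploy Cloud Volumes ONTAP in AWS
      netapp.cloudmanager.na_cloudmanager_cvo_aws:
        state: present
        name: "{{ cvo_name }}"
        region: "{{ region_deploy }}"
        subnet_id: "{{ subnet }}"
        svm_password: "{{ cvo_svm_password }}"
        refresh_token: "{{ token }}"
        client_id: "{{ connector_client_id }}"
cvo_name, region_deploy, and subnet correspond to the CVO deployment variables defined in the Configure global variables section below.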
-
-
Create Credential for ONTAP/CVO/AWS
-
Navigate to Resources → Credentials, and click Add.
-
Enter the name and organization details for the ONTAP/CVO/AWS credentials.
-
Select the credential type that was created in the previous step.
-
Under Type Details, enter the Username and Password for your Source and CVO Clusters, Cloud Central/Manager, AWS Access/Secret Key and Cloud Central Refresh Token.
-
Click Save
-
-
Create Credential for Oracle (Source)
-
Navigate to Resources → Credentials, and click Add.
-
Enter the name and organization details for the Oracle host.
-
Select the Machine credential type.
-
Under Type Details, enter the Username and Password for the Oracle hosts.
-
Select the correct Privilege Escalation Method, and enter the username and password.
-
Click Save
-
-
Create Credential for Oracle Destination
-
Navigate to Resources → Credentials, and click Add.
-
Enter the name and organization details for the DR Oracle host
-
Select the Machine credential type.
-
Under Type Details, enter the Username (ec2-user, or the username if you have changed it from the default) and the SSH Private Key.
-
Select the correct Privilege Escalation Method (sudo), and enter the username and password if needed.
-
Click Save
-
Create a project
-
Go to Resources → Projects, and click Add.
-
Enter the name and organization details.
-
Select Git in the Source Control Credential Type field.
-
Enter https://github.com/NetApp-Automation/na_oracle19c_data_protection.git as the source control URL.
-
Click Save.
-
The project might need to sync occasionally when the source code changes.
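If you also want a local copy of the playbooks outside of AWX/Tower, the same repository can be cloned with a small ad-hoc play; the destination path below is only an example.
# Optional sketch: clone the repository for local inspection or CLI use.
# /tmp/na_oracle19c_data_protection is an example destination path.
- hosts: localhost
  connection: local
  tasks:
    - name: Clone the NetApp Oracle 19c data protection playbooks
      ansible.builtin.git:
        repo: https://github.com/NetApp-Automation/na_oracle19c_data_protection.git
        dest: /tmp/na_oracle19c_data_protection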
-
Configure global variables
Variables defined in this section apply to all Oracle hosts, databases, and the ONTAP cluster.
-
Enter your environment-specific parameters in the following embedded global variables or vars form.
The placeholder values must be changed to match your environment. The first set of variables below applies to on-premises ONTAP deployments; the second, fully commented set includes the additional variables for CVO deployments on AWS.
# Oracle Data Protection global user configuration variables
# Ontap env specific config variables
hosts_group: "ontap"
ca_signed_certs: "false"
# Inter-cluster LIF details
src_nodes:
- "AFF-01"
- "AFF-02"
dst_nodes:
- "DR-AFF-01"
- "DR-AFF-02"
create_source_intercluster_lifs: "yes"
source_intercluster_network_port_details:
  using_dedicated_ports: "yes"
  using_ifgrp: "yes"
  using_vlans: "yes"
  failover_for_shared_individual_ports: "yes"
  ifgrp_name: "a0a"
  vlan_id: "10"
  ports:
    - "e0b"
    - "e0g"
  broadcast_domain: "NFS"
  ipspace: "Default"
  failover_group_name: "iclifs"
source_intercluster_lif_details:
  - name: "icl_1"
    address: "10.0.0.1"
    netmask: "255.255.255.0"
    home_port: "a0a-10"
    node: "AFF-01"
  - name: "icl_2"
    address: "10.0.0.2"
    netmask: "255.255.255.0"
    home_port: "a0a-10"
    node: "AFF-02"
create_destination_intercluster_lifs: "yes"
destination_intercluster_network_port_details:
  using_dedicated_ports: "yes"
  using_ifgrp: "yes"
  using_vlans: "yes"
  failover_for_shared_individual_ports: "yes"
  ifgrp_name: "a0a"
  vlan_id: "10"
  ports:
    - "e0b"
    - "e0g"
  broadcast_domain: "NFS"
  ipspace: "Default"
  failover_group_name: "iclifs"
destination_intercluster_lif_details:
  - name: "icl_1"
    address: "10.0.0.3"
    netmask: "255.255.255.0"
    home_port: "a0a-10"
    node: "DR-AFF-01"
  - name: "icl_2"
    address: "10.0.0.4"
    netmask: "255.255.255.0"
    home_port: "a0a-10"
    node: "DR-AFF-02"
# Variables for SnapMirror Peering
passphrase: "your-passphrase"
# Source & Destination List
dst_cluster_name: "dst-cluster-name"
dst_cluster_ip: "dst-cluster-ip"
dst_vserver: "dst-vserver"
dst_nfs_lif: "dst-nfs-lif"
src_cluster_name: "src-cluster-name"
src_cluster_ip: "src-cluster-ip"
src_vserver: "src-vserver"
# Variable for Oracle Volumes and SnapMirror Details
cg_snapshot_name_prefix: "oracle"
src_orabinary_vols:
- "binary_vol"
src_db_vols:
- "db_vol"
src_archivelog_vols:
- "log_vol"
snapmirror_policy: "async_policy_oracle"
# Export Policy Details
export_policy_details:
  name: "nfs_export_policy"
  client_match: "0.0.0.0/0"
  ro_rule: "sys"
  rw_rule: "sys"
# Linux env specific config variables
mount_points:
- "/u01"
- "/u02"
- "/u03"
hugepages_nr: "1234"
redhat_sub_username: "xxx"
redhat_sub_password: "xxx"
# DB env specific install and config variables
recovery_type: "scn"
control_files:
- "/u02/oradata/CDB2/control01.ctl"
- "/u03/orareco/CDB2/control02.ctl"
###########################################
### Ontap env specific config variables ###
###########################################
#Inventory group name
#Default inventory group name - "ontap"
#Change only if you are changing the group name either in inventory/hosts file or in inventory groups in case of AWX/Tower
hosts_group: "ontap"
#CA_signed_certificates (ONLY CHANGE to "true" IF YOU ARE USING CA SIGNED CERTIFICATES)
ca_signed_certs: "false"
#Names of the Nodes in the Source ONTAP Cluster
src_nodes:
- "AFF-01"
- "AFF-02"
#Names of the Nodes in the Destination CVO Cluster
dst_nodes:
- "DR-AFF-01"
- "DR-AFF-02"
#Define whether or not to create intercluster lifs on source cluster (ONLY CHANGE to "No" IF YOU HAVE ALREADY CREATED THE INTERCLUSTER LIFS)
create_source_intercluster_lifs: "yes"
source_intercluster_network_port_details:
  using_dedicated_ports: "yes"
  using_ifgrp: "yes"
  using_vlans: "yes"
  failover_for_shared_individual_ports: "yes"
  ifgrp_name: "a0a"
  vlan_id: "10"
  ports:
    - "e0b"
    - "e0g"
  broadcast_domain: "NFS"
  ipspace: "Default"
  failover_group_name: "iclifs"
source_intercluster_lif_details:
  - name: "icl_1"
    address: "10.0.0.1"
    netmask: "255.255.255.0"
    home_port: "a0a-10"
    node: "AFF-01"
  - name: "icl_2"
    address: "10.0.0.2"
    netmask: "255.255.255.0"
    home_port: "a0a-10"
    node: "AFF-02"
###########################################
### CVO Deployment Variables ###
###########################################
####### Access Keys Variables ######
# Region where your CVO will be deployed.
region_deploy: "us-east-1"
########### CVO and Connector Vars ########
# AWS Managed Policy required to give permission for IAM role creation.
aws_policy: "arn:aws:iam::1234567:policy/OCCM"
# Specify your AWS role name; a new role is created if one does not already exist.
aws_role_name: "arn:aws:iam::1234567:policy/OCCM"
# Name your connector.
connector_name: "awx_connector"
# Name of the key pair generated in AWS.
key_pair: "key_pair"
# Name of the Subnet that has the range of IP addresses in your VPC.
subnet: "subnet-12345"
# ID of your AWS security group that allows access to on-prem resources.
security_group: "sg-123123123"
# Your Cloud Manager Account ID.
account: "account-A23123A"
# Name of your CVO instance.
cvo_name: "test_cvo"
# ID of the VPC in AWS.
vpc: "vpc-123123123"
###################################################################################################
# Variables for - Add on-prem ONTAP to Connector in Cloud Manager
###################################################################################################
# For federated users, the Client ID from the API Authentication section of Cloud Central, used to generate the access token.
sso_id: "123123123123123123123"
# For regular access with username and password, please specify "pass" as the connector_access. For SSO users, use "refresh_token" as the variable.
connector_access: "pass"
####################################################################################################
# Variables for SnapMirror Peering
####################################################################################################
passphrase: "your-passphrase"
#####################################################################################################
# Source & Destination List
#####################################################################################################
#Please Enter Destination Cluster Name
dst_cluster_name: "dst-cluster-name"
#Please Enter Destination Cluster IP (Once CVO is Created Add this Variable to all templates)
dst_cluster_ip: "dst-cluster-ip"
#Please Enter Destination SVM to create mirror relationship
dst_vserver: "dst-vserver"
#Please Enter NFS Lif for dst vserver (Once CVO is Created Add this Variable to all templates)
dst_nfs_lif: "dst-nfs-lif"
#Please Enter Source Cluster Name
src_cluster_name: "src-cluster-name"
#Please Enter Source Cluster IP
src_cluster_ip: "src-cluster-ip"
#Please Enter Source SVM
src_vserver: "src-vserver"
#####################################################################################################
# Variable for Oracle Volumes and SnapMirror Details
#####################################################################################################
#Please Enter Source Snapshot Prefix Name
cg_snapshot_name_prefix: "oracle"
#Please Enter Source Oracle Binary Volume(s)
src_orabinary_vols:
- "binary_vol"
#Please Enter Source Database Volume(s)
src_db_vols:
- "db_vol"
#Please Enter Source Archive Volume(s)
src_archivelog_vols:
- "log_vol"
#Please Enter Destination Snapmirror Policy
snapmirror_policy: "async_policy_oracle"
#####################################################################################################
# Export Policy Details
#####################################################################################################
#Enter the destination export policy details (Once CVO is Created Add this Variable to all templates)
export_policy_details:
  name: "nfs_export_policy"
  client_match: "0.0.0.0/0"
  ro_rule: "sys"
  rw_rule: "sys"
#####################################################################################################
### Linux env specific config variables ###
#####################################################################################################
#NFS Mount points for Oracle DB volumes
mount_points:
- "/u01"
- "/u02"
- "/u03"
# Up to 75% of the node memory size divided by 2MB. Consider how many databases will be hosted on the node and how much RAM is allocated to each DB.
# Leave it blank if hugepage is not configured on the host.
hugepages_nr: "1234"
# RedHat subscription username and password
redhat_sub_username: "xxx"
redhat_sub_password: "xxx"
####################################################
### DB env specific install and config variables ###
####################################################
#Recovery Type (leave as scn)
recovery_type: "scn"
#Oracle Control Files
control_files:
- "/u02/oradata/CDB2/control01.ctl"
- "/u03/orareco/CDB2/control02.ctl"
Automation Playbooks
There are four separate playbooks that need to be run.
-
Playbook for Setting up your environment, On-Prem or CVO.
-
Playbook for replicating Oracle Binaries and Databases on a schedule
-
Playbook for replicating Oracle Logs on a schedule
-
Playbook for Recovering your database on a destination host
ONTAP and CVO Setup
Configure and launch the job template.
-
Create the job template.
-
Navigate to Resources → Templates → Add and click Add Job Template.
-
Enter the name ONTAP/CVO Setup
-
Select the Job type; Run configures the system based on a playbook.
-
Select the corresponding inventory, project, playbook, and credentials for the playbook.
-
Select the ontap_setup.yml playbook for an On-Prem environment or select the cvo_setup.yml for replicating to a CVO instance.
-
Paste the global variables copied from the Configure global variables section into the Template Variables field under the YAML tab.
-
Click Save.
-
-
Launch the job template.
-
Navigate to Resources → Templates.
-
Click the desired template and then click Launch.
We will use this template and copy it out for the other playbooks.
-
Scheduling the Binary and Database Replication Playbook
Configure and launch the job template.
-
Copy the previously created job template.
-
Navigate to Resources → Templates.
-
Find the ONTAP/CVO Setup Template, and on the far right click on Copy Template
-
Click Edit Template on the copied template, and change the name to Binary and Database Replication Playbook.
-
Keep the same inventory, project, and credentials for the template.
-
Select the ora_replication_cg.yml as the playbook to be executed.
-
The variables will remain the same, but the CVO cluster IP needs to be set in the variable dst_cluster_ip (see the example after these steps).
-
Click Save.
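For example, once the CVO instance has been created, values such as the following (placeholders shown) are added to or overridden in the template's Variables field under the YAML tab:
# Placeholder values; replace with the management IP and NFS LIF of the deployed CVO.
dst_cluster_ip: "172.30.15.10"
dst_nfs_lif: "172.30.15.11"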
-
-
Schedule the job template.
-
Navigate to Resources → Templates.
-
Click the Binary and Database Replication Playbook template and then click Schedules at the top set of options.
-
Click Add, enter the name Schedule for Binary and Database Replication, choose the Start date/time at the beginning of the hour, choose your Local Time Zone, and set the Run frequency. The run frequency determines how often the SnapMirror replication is updated.
A separate schedule will be created for the Log volume replication, so that it can be replicated on a more frequent cadence.
-
Scheduling the Log Replication Playbook
Configure and launch the job template
-
Copy the previously created job template.
-
Navigate to Resources → Templates.
-
Find the ONTAP/CVO Setup Template, and on the far right click on Copy Template
-
Click Edit Template on the copied template, and change the name to Log Replication Playbook.
-
Keep the same inventory, project, and credentials for the template.
-
Select the ora_replication_logs.yml as the playbook to be executed.
-
The variables will remain the same, but the CVO cluster IP will need to be set in the variable dst_cluster_ip.
-
Click Save.
-
-
Schedule the job template.
-
Navigate to Resources → Templates.
-
Click the Log Replication Playbook template and then click Schedules at the top set of options.
-
Click Add, enter the name Schedule for Log Replication, choose the Start date/time at the beginning of the hour, choose your Local Time Zone, and set the Run frequency. The run frequency determines how often the SnapMirror replication is updated.
It is recommended to set the log schedule to update every hour so that you can recover to within one hour of the most recent update.
-
Restore and Recovery Playbook
Configure the job template.
-
Copy the previously created job template.
-
Navigate to Resources → Templates.
-
Find the ONTAP/CVO Setup Template, and on the far right click on Copy Template
-
Click Edit Template on the copied template, and change the name to Restore and Recovery Playbook.
-
Keep the same inventory, project, and credentials for the template.
-
Select the ora_recovery.yml as the playbook to be executed.
-
The variables will remain the same, but the CVO cluster IP will need to be set in the variable dst_cluster_ip.
-
Click Save.
This playbook will not be run until you are ready to restore your database at the remote site.
-
Recovering Oracle Database
-
On-premises production Oracle database data volumes are protected through NetApp SnapMirror replication to either a redundant ONTAP cluster in a secondary data center or to Cloud Volumes ONTAP in the public cloud. In a fully configured disaster recovery environment, recovery compute instances in the secondary data center or public cloud are on standby, ready to recover the production database in the case of a disaster. The standby compute instances are kept in sync with the on-premises instances by applying OS kernel patches and upgrades in parallel, in lockstep.
-
In the solution demonstrated here, the Oracle binary volume is replicated to the target and mounted at the target instance to bring up the Oracle software stack. This approach to recovering Oracle has an advantage over a fresh installation of Oracle at the last minute after a disaster occurs: it guarantees that the Oracle installation is fully in sync with the current on-premises production software installation and patch levels. However, this may or may not have additional software licensing implications for the replicated Oracle binary volume at the recovery site, depending on how the software licensing is structured with Oracle. Users should check with their software licensing personnel to assess the potential Oracle licensing requirement before deciding to use this approach.
-
The standby Oracle host at the destination is configured with the Oracle prerequisite configurations.
-
The SnapMirror relationships are broken, and the volumes are made writable and mounted to the standby Oracle host.
-
The Oracle recovery module performs the following tasks to recover and start up Oracle at the recovery site after all DB volumes are mounted on the standby compute instance.
-
Sync the control file: Duplicate Oracle control files are deployed on different database volumes to protect the critical database control file: one on the data volume and another on the log volume. Because the data and log volumes are replicated at different frequencies, they are out of sync at the time of recovery and must be synced before the database can be recovered.
-
Relink the Oracle binary: Because the Oracle binary is relocated to a new host, it needs to be relinked.
-
Recover the Oracle database: The recovery mechanism retrieves the last System Change Number (SCN) of the last available archived log in the Oracle log volume from the control file and recovers the Oracle database to recoup all business transactions that were replicated to the DR site at the time of failure. The database is then started up with a new incarnation to carry on user connections and business transactions at the recovery site. A conceptual sketch of an SCN-based recovery follows this list.
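Conceptually, the SCN-based recovery amounts to mounting the database, recovering it up to the last replicated SCN, and opening it with RESETLOGS. The task below sketches that idea only and is not the ora_recovery.yml implementation; oracle_home, oracle_sid, and until_scn are hypothetical variables.
# Conceptual sketch only; oracle_home, oracle_sid, and until_scn are hypothetical variables.
- name: Recover the database to the last replicated SCN and open a new incarnation
  ansible.builtin.shell: |
    export ORACLE_HOME={{ oracle_home }}
    export ORACLE_SID={{ oracle_sid }}
    {{ oracle_home }}/bin/sqlplus -s / as sysdba <<'EOF'
    STARTUP MOUNT;
    RECOVER AUTOMATIC DATABASE UNTIL CHANGE {{ until_scn }};
    ALTER DATABASE OPEN RESETLOGS;
    EOF
  become: true
  become_user: oracle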
-
Before running the recovery playbook, make sure that the /etc/oratab and /etc/oraInst.loc files have been copied from the source Oracle host to the destination host. A minimal sketch of one way to do this follows.
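The following is a minimal sketch, not part of the delivered playbooks, of one way to stage those two files on the DR host with Ansible. It assumes a single host in each of the oracle and dr_oracle groups and uses /tmp/oracle_dr on the control node as a temporary staging directory.
# Sketch: copy /etc/oratab and /etc/oraInst.loc from the source host to the DR host.
- hosts: oracle
  become: true
  tasks:
    - name: Fetch the Oracle inventory files from the source host
      ansible.builtin.fetch:
        src: "{{ item }}"
        dest: /tmp/oracle_dr
      loop:
        - /etc/oratab
        - /etc/oraInst.loc

- hosts: dr_oracle
  become: true
  tasks:
    - name: Copy the Oracle inventory files to the DR host
      ansible.builtin.copy:
        src: "/tmp/oracle_dr/{{ groups['oracle'][0] }}{{ item }}"
        dest: "{{ item }}"
      loop:
        - /etc/oratab
        - /etc/oraInst.loc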