Step-by-step deployment procedure
This document details the deployment of Oracle 19c using the automation command line interface (CLI).
CLI deployment of Oracle 19c Database
This section covers the steps required to prepare and deploy Oracle 19c Database with the CLI. Make sure that you have reviewed the Getting Started and Requirements section and prepared your environment accordingly.
Download the Oracle 19c repo
- From your Ansible controller, run the following command:
git clone https://github.com/NetApp-Automation/na_oracle19c_deploy.git
- After downloading the repository, change directories to na_oracle19c_deploy (cd na_oracle19c_deploy).
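Once inside the repository, a quick check confirms the files this guide relies on are present; this assumes the layout implied by the commands later in this document:
ls -d all_playbook.yml hosts vars   # the playbook, inventory file, and vars directory used throughout this guide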
Edit the hosts file
Complete the following before deployment:
- Edit your hosts file in the na_oracle19c_deploy directory.
- Under [ontap], change the IP address to your cluster management IP.
- Under the [oracle] group, add the Oracle host names. Each host name must resolve to its IP address either through DNS or the hosts file, or it must be specified in the host file.
- After you have completed these steps, save any changes.
The following example depicts a hosts file:
#ONTAP Host
[ontap]
"10.61.184.183"
#Oracle hosts
[oracle]
"rtpora01"
"rtpora02"
This example executes the playbook and deploys Oracle 19c on two Oracle DB servers concurrently. You can also test with just one DB server; in that case, you only need to configure one host variable file, as in the sketch below.
The playbook executes the same way regardless of how many Oracle hosts and databases you deploy.
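For a single-server test, a minimal hosts file keeps the same structure with only one entry under the [oracle] group (values reuse the example above):
#ONTAP Host
[ontap]
"10.61.184.183"
#Oracle hosts
[oracle]
"rtpora01"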
Edit the host_name.yml file under host_vars
Each Oracle host has a host variable file, identified by its host name, that contains host-specific variables. You can specify any name for your host. Edit and copy the host_vars from the Host VARS Config section and paste it into your desired host_name.yml file.
The items in blue must be changed to match your environment.
Host VARS Config
######################################################################
############## Host Variables Configuration ##############
######################################################################
# Add your Oracle Host
ansible_host: "10.61.180.15"
# Oracle db log archive mode: true - ARCHIVELOG or false - NOARCHIVELOG
log_archive_mode: "true"
# Number of pluggable databases per container instance identified by sid. pdb_name specifies the prefix for PDB naming, in this case cdb2_pdb1, cdb2_pdb2, cdb2_pdb3
oracle_sid: "cdb2"
pdb_num: "3"
pdb_name: "{{ oracle_sid }}_pdb"
# CDB listener port, use different listener port for additional CDB on same host
listener_port: "1523"
# CDB is created with SGA at 75% of memory_limit (MB). Consider how many databases will be hosted on the node and how much RAM to allocate to each DB. The grand total SGA should not exceed 75% of the available RAM on the node.
memory_limit: "5464"
# Set "em_configuration: DBEXPRESS" to install enterprise manager express and choose a unique port from 5500 to 5599 for each sid on the host.
# Leave them black if em express is not installed.
em_configuration: "DBEXPRESS"
em_express_port: "5501"
# {{groups.oracle[0]}} represents the first Oracle DB server as defined in the Oracle hosts group [oracle]. For concurrent deployment of
# multiple Oracle DB servers, the index [0] is incremented for each additional DB server: "{{groups.oracle[1]}}" represents DB server 2,
# "{{groups.oracle[2]}}" represents DB server 3, and so on. As a good practice and the default, a minimum of three volumes are allocated
# to a DB server with corresponding /u01, /u02, /u03 mount points, which store Oracle binary, Oracle data, and Oracle recovery files
# respectively. Additional volumes can be added by clicking "More NFS volumes", but the number of volumes allocated to a DB server must
# match what is defined in the global vars file by the volumes_nfs parameter, which dictates how many volumes are created for each DB server.
host_datastores_nfs:
- {vol_name: "{{groups.oracle[0]}}_u01", aggr_name: "aggr01_node01", lif: "172.21.94.200", size: "25"}
- {vol_name: "{{groups.oracle[0]}}_u02", aggr_name: "aggr01_node01", lif: "172.21.94.200", size: "25"}
- {vol_name: "{{groups.oracle[0]}}_u03", aggr_name: "aggr01_node01", lif: "172.21.94.200", size: "25"}
Edit the vars.yml file
The vars.yml file consolidates all environment-specific variables (ONTAP, Linux, and Oracle) for the Oracle deployment.
- Edit and copy the variables from the VARS section and paste them into your vars.yml file.
#######################################################################
###### Oracle 19c deployment global user configuration variables ######
###### Consolidate all variables from ontap, linux and oracle ######
#######################################################################
###########################################
### Ontap env specific config variables ###
###########################################
#Inventory group name
#Default inventory group name - 'ontap'
#Change only if you are changing the group name either in inventory/hosts file or in inventory groups in case of AWX/Tower
hosts_group: "ontap"
#CA_signed_certificates (ONLY CHANGE to 'true' IF YOU ARE USING CA SIGNED CERTIFICATES)
ca_signed_certs: "false"
#Names of the Nodes in the ONTAP Cluster
nodes:
- "AFF-01"
- "AFF-02"
#Storage VLANs
#Add additional rows for vlans as necessary
storage_vlans:
- {vlan_id: "203", name: "infra_NFS", protocol: "NFS"}
#Details of the Data Aggregates that need to be created
#If aggregate creation takes too long, subsequent volume creation tasks may fail.
#There should be enough disks already zeroed in the cluster; otherwise, aggregate creation will zero the disks and take a long time.
data_aggregates:
- {aggr_name: "aggr01_node01"}
- {aggr_name: "aggr01_node02"}
#SVM name
svm_name: "ora_svm"
# SVM Management LIF Details
svm_mgmt_details:
- {address: "172.21.91.100", netmask: "255.255.255.0", home_port: "e0M"}
# NFS storage parameters when data_protocol is set to NFS. Volumes are named after the Oracle host name and identified by mount point,
# as follows for Oracle DB server 1. Each mount point is dedicated to a particular set of Oracle files: u01 - Oracle binary,
# u02 - Oracle data, u03 - Oracle redo. Add additional volumes by clicking "More NFS volumes" and also add the volume list to the
# corresponding host_vars as the host_datastores_nfs variable. For multiple DB server deployments, additional volume sets need to be
# added for each additional DB server. Input "{{groups.oracle[1]}}_u01", "{{groups.oracle[1]}}_u02", and "{{groups.oracle[1]}}_u03"
# as vol_name for the second DB server. Place volumes for multiple DB servers alternately between controllers for balanced IO
# performance, e.g. DB server 1 on controller node1, DB server 2 on controller node2, etc. Make sure the lif address matches the
# controller node.
volumes_nfs:
- {vol_name: "{{groups.oracle[0]}}_u01", aggr_name: "aggr01_node01", lif: "172.21.94.200", size: "25"}
- {vol_name: "{{groups.oracle[0]}}_u02", aggr_name: "aggr01_node01", lif: "172.21.94.200", size: "25"}
- {vol_name: "{{groups.oracle[0]}}_u03", aggr_name: "aggr01_node01", lif: "172.21.94.200", size: "25"}
#NFS LIFs IP address and netmask
nfs_lifs_details:
- address: "172.21.94.200" #for node-1
netmask: "255.255.255.0"
- address: "172.21.94.201" #for node-2
netmask: "255.255.255.0"
#NFS client match
client_match: "172.21.94.0/24"
###########################################
### Linux env specific config variables ###
###########################################
#NFS Mount points for Oracle DB volumes
mount_points:
- "/u01"
- "/u02"
- "/u03"
# Number of hugepages: up to 75% of node memory size divided by 2 MB (the hugepage size). Consider how many databases will be hosted on the node and how much RAM to allocate to each DB.
# Leave it blank if hugepages are not configured on the host.
hugepages_nr: "1234"
# RedHat subscription username and password
redhat_sub_username: "xxx"
redhat_sub_password: "xxx"
####################################################
### DB env specific install and config variables ###
####################################################
db_domain: "your.domain.com"
# Set initial password for all required Oracle passwords. Change them after installation.
initial_pwd_all: "netapp123"
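Because vars.yml holds credentials in plain text (redhat_sub_password, initial_pwd_all), you may want to protect it with Ansible Vault; this is standard Ansible rather than part of the toolkit:
ansible-vault encrypt vars/vars.yml
# then append --ask-vault-pass to each ansible-playbook command shown below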
Run the playbook
After completing the required environment prerequisites and copying the variables into vars.yml and your_host.yml, you are ready to deploy the playbooks.
<username> must be changed to match your environment.
- Run the ONTAP playbook by passing the correct tags and the ONTAP cluster username. Fill in the passwords for the ONTAP cluster and vsadmin when prompted.
ansible-playbook -i hosts all_playbook.yml -u username -k -K -t ontap_config -e @vars/vars.yml
- Run the Linux playbook to execute the Linux portion of the deployment. Input the admin SSH password as well as the sudo password.
ansible-playbook -i hosts all_playbook.yml -u username -k -K -t linux_config -e @vars/vars.yml
- Run the Oracle playbook to execute the Oracle portion of the deployment. Input the admin SSH password as well as the sudo password.
ansible-playbook -i hosts all_playbook.yml -u username -k -K -t oracle_config -e @vars/vars.yml
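If the same username and credentials apply to all three phases, ansible-playbook also accepts a comma-separated tag list, so the phases can be run back to back in a single invocation; this is a convenience, not a toolkit requirement:
ansible-playbook -i hosts all_playbook.yml -u username -k -K -t ontap_config,linux_config,oracle_config -e @vars/vars.yml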
Deploy an additional database on the same Oracle host
The Oracle portion of the playbook creates a single Oracle container database on an Oracle server per execution. To create an additional container database on the same server, complete the following steps:
- Revise the host_vars variables (see the sketch after this list):
  - Go back to step 3 - Edit the host_name.yml file under host_vars.
  - Change the Oracle SID to a different naming string.
  - Change the listener port to a different number.
  - Change the EM Express port to a different number if you have installed EM Express.
  - Copy and paste the revised host variables to the Oracle host variable file under host_vars.
- Execute the playbook with the oracle_config tag as shown above in Run the playbook.
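As a sketch, assuming the first deployment used the example values shown earlier (SID cdb2, listener port 1523, EM Express port 5501), the revised entries for a second CDB on the same host might look like the following; the specific values are illustrative:
oracle_sid: "cdb3"
listener_port: "1524"
em_express_port: "5502"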
Validate Oracle installation
- Log in to the Oracle server as the oracle user and execute the following command:
ps -ef | grep ora
This lists the Oracle processes if the installation completed as expected and the Oracle DB started.
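On a healthy instance the listing includes background processes named after the SID; illustrative output (PIDs and timestamps will differ):
oracle   12345      1  0 12:40 ?        00:00:01 ora_pmon_cdb2
oracle   12349      1  0 12:40 ?        00:00:00 ora_smon_cdb2
oracle   12353      1  0 12:40 ?        00:00:02 ora_lgwr_cdb2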
- Log in to the database to check the DB configuration settings and the PDBs created with the following command sets:
[oracle@localhost ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Thu May 6 12:52:51 2021
Version 19.8.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.8.0.0.0

SQL> select name, log_mode from v$database;

NAME      LOG_MODE
--------- ------------
CDB2      ARCHIVELOG

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CDB2_PDB1                      READ WRITE NO
         4 CDB2_PDB2                      READ WRITE NO
         5 CDB2_PDB3                      READ WRITE NO

SQL> col svrname form a30
SQL> col dirname form a30
SQL> select svrname, dirname, nfsversion from v$dnfs_servers;

SVRNAME                        DIRNAME                        NFSVERSION
------------------------------ ------------------------------ ----------------
172.21.126.200                 /rhelora03_u02                 NFSv3.0
172.21.126.200                 /rhelora03_u03                 NFSv3.0
172.21.126.200                 /rhelora03_u01                 NFSv3.0
This confirms that dNFS is working properly.
- Connect to the database via the listener to check the Oracle listener configuration with the following command. Change to the appropriate listener port and database service name.
[oracle@localhost ~]$ sqlplus system@//localhost:1523/cdb2_pdb1.cie.netapp.com

SQL*Plus: Release 19.0.0.0.0 - Production on Thu May 6 13:19:57 2021
Version 19.8.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Enter password:
Last Successful login time: Wed May 05 2021 17:11:11 -04:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.8.0.0.0

SQL> show user
USER is "SYSTEM"
SQL> show con_name

CON_NAME
------------------------------
CDB2_PDB1
This confirms that the Oracle listener is working properly.
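If the connection fails, the listener can also be inspected directly on the Oracle host as the oracle user with the standard lsnrctl utility; note that the listener name configured by the playbook may differ from the default that a bare status command checks:
[oracle@localhost ~]$ $ORACLE_HOME/bin/lsnrctl status
The status output should show an endpoint on the configured port (1523 in this example) and a service entry for each PDB.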
Where to go for help?
If you need help with the toolkit, please join the NetApp Solution Automation community support Slack channel and look for the solution-automation channel to post your questions or inquiries.