Step-by-step deployment procedure

Contributors: ageentap, sufianNetApp, acao8888, netapp-dorianh

AWX/Tower deployment of the Oracle 19c Database

1. Create the inventory, group, hosts, and credentials for your environment

This section describes the setup of inventory, groups, hosts, and access credentials in AWX/Ansible Tower that prepare the environment for consuming NetApp automated solutions.

  1. Configure the inventory.

    1. Navigate to Resources → Inventories → Add, and click Add Inventory.

    2. Provide the name and organization details, and click Save.

    3. On the Inventories page, click the inventory created.

    4. If there are any inventory variables, paste them in the variables field.

    5. Navigate to the Groups sub-menu and click Add.

    6. Provide the name of the group for ONTAP, paste the group variables (if any) and click Save.

    7. Repeat the process for another group for Oracle.

    8. Select the ONTAP group created, go to the Hosts sub-menu and click Add New Host.

    9. Provide the IP address of the ONTAP cluster management IP, paste the host variables (if any), and click Save.

    10. Repeat this process for the Oracle group and the Oracle host(s) management IP or hostname.
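For reference, the same groups and hosts created in the AWX/Tower UI above would correspond to an Ansible inventory like the following (the IP addresses and hostname are illustrative, not values from your environment):

```yaml
# Hypothetical inventory mirroring the AWX/Tower setup above.
all:
  children:
    ontap:
      hosts:
        172.16.0.50:            # ONTAP cluster management IP (example)
    oracle:
      hosts:
        ora-server-1:           # Oracle host (example name)
          ansible_host: 10.61.180.15
```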

  2. Create credential types. For solutions involving ONTAP, you must configure the credential type to match username and password entries.

    1. Navigate to Administration → Credential Types, and click Add.

    2. Provide the name and description.

    3. Paste the following content in Input Configuration:

      fields:
        - id: username
          type: string
          label: Username
        - id: password
          type: string
          label: Password
          secret: true
        - id: vsadmin_password
          type: string
          label: vsadmin_password
          secret: true
    4. Paste the following content into Injector Configuration:

      extra_vars:
        password: '{{ password }}'
        username: '{{ username }}'
        vsadmin_password: '{{ vsadmin_password }}'
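With this injector configuration, the three values stored in the credential are exposed to playbooks as ordinary top-level variables. As an illustration only (this task is a hedged sketch, not part of the toolkit's playbooks), an ONTAP task could consume them like this:

```yaml
# Sketch: the credential's injected extra_vars ({{ username }}, {{ password }},
# {{ vsadmin_password }}) can be referenced like any other playbook variable.
- name: Gather ONTAP cluster info (illustrative task)
  netapp.ontap.na_ontap_info:
    hostname: "{{ ansible_host }}"
    username: "{{ username }}"     # injected by the custom credential
    password: "{{ password }}"     # injected by the custom credential
    https: true
    validate_certs: false
```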
  3. Configure the credentials.

    1. Navigate to Resources → Credentials, and click Add.

    2. Enter the name and organization details for ONTAP.

    3. Select the custom Credential Type you created for ONTAP.

    4. Under Type Details, enter the username, password, and vsadmin_password.

    5. Click Back to Credential and click Add.

    6. Enter the name and organization details for Oracle.

    7. Select the Machine credential type.

    8. Under Type Details, enter the Username and Password for the Oracle hosts.

    9. Select the correct Privilege Escalation Method, and enter the username and password.

2. Create a project

  1. Go to Resources → Projects, and click Add.

    1. Enter the name and organization details.

    2. Select Git in the Source Control Credential Type field.

    3. Enter https://github.com/NetApp-Automation/na_oracle19c_deploy.git as the Source Control URL.

    4. Click Save.

    5. The project might need to sync occasionally when the source code changes.

3. Configure Oracle host_vars

The variables defined in this section are applied to each individual Oracle server and database.

  1. Input your environment-specific parameters in the following embedded Oracle hosts variables or host_vars form.

The items in blue must be changed to match your environment.

Host VARS Config

######################################################################
##############    Host Variables Configuration          ##############
######################################################################
# Add your Oracle host
ansible_host: 10.61.180.15
# Oracle DB log archive mode: true - ARCHIVELOG or false - NOARCHIVELOG
log_archive_mode: true
# Number of pluggable databases per container instance identified by sid. pdb_name specifies the prefix for pluggable database naming, in this case cdb2_pdb1, cdb2_pdb2, cdb2_pdb3
oracle_sid: cdb2
pdb_num: 3
pdb_name: "{{ oracle_sid }}_pdb"
# CDB listener port; use a different listener port for each additional CDB on the same host
listener_port: 1523
# The CDB is created with SGA at 75% of memory_limit (MB). Consider how many databases will be hosted on the node and how much RAM to allocate to each DB. The grand total SGA should not exceed 75% of the available RAM on the node.
memory_limit: 5464
# Set "em_configuration: DBEXPRESS" to install Enterprise Manager Express and choose a unique port from 5500 to 5599 for each SID on the host.
# Leave these blank if EM Express is not installed.
em_configuration: DBEXPRESS
em_express_port: 5501
# "{{groups.oracle[0]}}" represents the first Oracle DB server as defined in the Oracle hosts group [oracle]. For concurrent deployment of multiple Oracle DB servers, [0] is incremented for each additional server: "{{groups.oracle[1]}}" represents DB server 2, "{{groups.oracle[2]}}" represents DB server 3, and so on.
# As a good practice and the default, a minimum of three volumes is allocated to a DB server, with corresponding /u01, /u02, /u03 mount points that store the Oracle binaries, Oracle data, and Oracle recovery files respectively. Additional volumes can be added by clicking More NFS volumes on the form, but the number of volumes allocated to a DB server must match the volumes_nfs parameter in the global vars file, which dictates how many volumes are created for each DB server.
host_datastores_nfs:
  - {vol_name: "{{groups.oracle[0]}}_u01", aggr_name: aggr01_node01, lif: 172.21.94.200, size: 25}
  - {vol_name: "{{groups.oracle[0]}}_u02", aggr_name: aggr01_node01, lif: 172.21.94.200, size: 25}
  - {vol_name: "{{groups.oracle[0]}}_u03", aggr_name: aggr01_node01, lif: 172.21.94.200, size: 25}
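As a quick illustration of how oracle_sid, pdb_num, and pdb_name combine, the naming scheme described in the form comments can be sketched in Python (this mirrors the documented convention; it is not part of the playbook):

```python
# Sketch of the PDB naming convention: pdb_name is the prefix
# "<oracle_sid>_pdb", and pdb_num pluggable databases are numbered 1..N.
oracle_sid = "cdb2"
pdb_num = 3
pdb_name = f"{oracle_sid}_pdb"

pdb_names = [f"{pdb_name}{i}" for i in range(1, pdb_num + 1)]
print(pdb_names)  # ['cdb2_pdb1', 'cdb2_pdb2', 'cdb2_pdb3']
```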
  1. Fill in all variables in the blue fields.

  2. After completing the variables input, click the Copy button on the form to copy all variables for transfer to AWX or Tower.

  3. Navigate back to AWX or Tower, go to Resources → Hosts, and open the Oracle server configuration page.

  4. Under the Details tab, click edit and paste the copied variables from step 1 to the Variables field under the YAML tab.

  5. Click Save.

  6. Repeat this process for any additional Oracle servers in the system.

4. Configure global variables

Variables defined in this section apply to all Oracle hosts, databases, and the ONTAP cluster.

  1. Input your environment-specific parameters in the following embedded global variables or vars form.

    The items in blue must be changed to match your environment.

VARS

#######################################################################
###### Oracle 19c deployment global user configuration variables ######
###### Consolidate all variables from ontap, linux and oracle    ######
#######################################################################

###########################################
### Ontap env specific config variables ###
###########################################
# Inventory group name
# Default inventory group name - 'ontap'
# Change only if you are changing the group name, either in the inventory/hosts file or in the inventory groups in the case of AWX/Tower
hosts_group: ontap
# CA-signed certificates (ONLY CHANGE to 'true' IF YOU ARE USING CA-SIGNED CERTIFICATES)
ca_signed_certs: false
# Names of the nodes in the ONTAP cluster
nodes:
  - AFF-01
  - AFF-02
# Storage VLANs
# Add additional rows for VLANs as necessary
storage_vlans:
  - {vlan_id: "203", name: "infra_NFS", protocol: "NFS"}
# Details of the data aggregates that need to be created
# If aggregate creation takes longer, subsequent tasks of creating volumes may fail.
# There should be enough disks already zeroed in the cluster; otherwise aggregate creation will zero the disks and take a long time
data_aggregates:
  - {aggr_name: aggr01_node01}
  - {aggr_name: aggr01_node02}
# SVM name
svm_name: ora_svm
# SVM management LIF details
svm_mgmt_details:
  - {address: 172.21.91.100, netmask: 255.255.255.0, home_port: e0M}
# NFS storage parameters when data_protocol is set to NFS. Volumes are named after the Oracle host name plus the mount point, as shown below for Oracle DB server 1. Each mount point is dedicated to a particular set of Oracle files: u01 - Oracle binaries, u02 - Oracle data, u03 - Oracle redo.
# Additional volumes can be added by clicking More NFS volumes on the form; also add the volume list to the corresponding host_vars as the host_datastores_nfs variable. For multiple DB server deployment, an additional set of volumes must be added for each additional DB server: input "{{groups.oracle[1]}}_u01", "{{groups.oracle[1]}}_u02", and "{{groups.oracle[1]}}_u03" as vol_name for the second DB server.
# Place volumes for multiple DB servers alternately between controllers for balanced IO performance, e.g. DB server 1 on controller node1, DB server 2 on controller node2, and so on. Make sure each lif address matches its controller node.
volumes_nfs:
  - {vol_name: "{{groups.oracle[0]}}_u01", aggr_name: aggr01_node01, lif: 172.21.94.200, size: 25}
  - {vol_name: "{{groups.oracle[0]}}_u02", aggr_name: aggr01_node01, lif: 172.21.94.200, size: 25}
  - {vol_name: "{{groups.oracle[0]}}_u03", aggr_name: aggr01_node01, lif: 172.21.94.200, size: 25}
# NFS LIFs IP address and netmask
nfs_lifs_details:
  - address: 172.21.94.200   # for node-1
    netmask: 255.255.255.0
  - address: 172.21.94.201   # for node-2
    netmask: 255.255.255.0
# NFS client match
client_match: 172.21.94.0/24

###########################################
### Linux env specific config variables ###
###########################################
# NFS mount points for Oracle DB volumes
mount_points:
  - /u01
  - /u02
  - /u03
# Up to 75% of node memory size divided by 2MB. Consider how many databases will be hosted on the node and how much RAM to allocate to each DB.
# Leave it blank if hugepages are not configured on the host.
hugepages_nr: 1234
# RedHat subscription username and password
redhat_sub_username: xxx
redhat_sub_password: xxx

####################################################
### DB env specific install and config variables ###
####################################################
db_domain: your.domain.com
# Set the initial password for all required Oracle passwords. Change them after installation.
initial_pwd_all: netapp123
  1. Fill in all variables in the blue fields.

  2. After completing the variables input, click the Copy button on the form to copy all variables for transfer into the following AWX or Tower job template.
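The sizing comments in the vars above imply simple arithmetic: each CDB's SGA is 75% of its memory_limit, the grand total SGA should stay under 75% of node RAM, and hugepages_nr is at most 75% of node memory divided by the 2 MB hugepage size. A hedged sketch of that math (the node RAM and database count are assumed example values, not recommendations):

```python
# Illustrative sizing math for memory_limit and hugepages_nr.
node_ram_mb = 16384                         # assumed node RAM (example)
sga_budget_mb = int(node_ram_mb * 0.75)     # total SGA budget: 75% of RAM

memory_limits = [5464, 5464]                # one memory_limit per CDB on the node (example)
total_sga_mb = sum(int(m * 0.75) for m in memory_limits)  # SGA = 75% of each memory_limit
assert total_sga_mb <= sga_budget_mb, "total SGA exceeds 75% of node RAM"

hugepages_nr_max = sga_budget_mb // 2       # 2 MB hugepages
print(total_sga_mb, hugepages_nr_max)       # 8196 6144
```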

5. Configure and launch the job template.

  1. Create the job template.

    1. Navigate to Resources → Templates → Add and click Add Job Template.

    2. Enter the name and description.

    3. Select the Job Type: Run configures the system based on a playbook, and Check performs a dry run of a playbook without actually configuring the system.

    4. Select the corresponding inventory, project, playbook, and credentials for the playbook.

    5. Select all_playbook.yml as the default playbook to be executed.

    6. Paste global variables copied from step 4 into the Template Variables field under the YAML tab.

    7. Check the box Prompt on Launch in the Job Tags field.

    8. Click Save.

  2. Launch the job template.

    1. Navigate to Resources → Templates.

    2. Click the desired template and then click Launch.

    3. When prompted on launch for Job Tags, type in requirements_config. You might need to click the Create Job Tag line below requirements_config to enter the job tag.

      requirements_config ensures that you have the correct libraries to run the other roles.
    4. Click Next and then Launch to start the job.

    5. Click View → Jobs to monitor the job output and progress.

    6. When prompted on launch for Job Tags, type in ontap_config. You might need to click the Create "Job Tag" line right below ontap_config to enter the job tag.

    7. Click Next and then Launch to start the job.

    8. Click View → Jobs to monitor the job output and progress.

    9. After the ontap_config role has completed, run the process again for linux_config.

    10. Navigate to Resources → Templates.

    11. Select the desired template and then click Launch.

    12. When prompted on launch for Job Tags, type in linux_config. You might need to select the Create "Job Tag" line right below linux_config to enter the job tag.

    13. Click Next and then Launch to start the job.

    14. Select View → Jobs to monitor the job output and progress.

    15. After the linux_config role has completed, run the process again for oracle_config.

    16. Go to Resources → Templates.

    17. Select the desired template and then click Launch.

    18. When prompted on launch for Job Tags, type oracle_config. You might need to select the Create "Job Tag" line right below oracle_config to enter the job tag.

    19. Click Next and then Launch to start the job.

    20. Select View → Jobs to monitor the job output and progress.

6. Deploy additional database on same Oracle host

The Oracle portion of the playbook creates a single Oracle container database on an Oracle server per execution. To create additional container databases on the same server, complete the following steps.

  1. Revise host_vars variables.

    1. Go back to step 3 - Configure Oracle host_vars.

    2. Change the Oracle SID to a different naming string.

    3. Change the listener port to a different number.

    4. Change the EM Express port to a different number if you are installing EM Express.

    5. Copy and paste the revised host variables to the Oracle Host Variables field in the Host Configuration Detail tab.

  2. Launch the deployment job template with only the oracle_config tag.
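For example, the revised host_vars entries for a second container database on the same host might look like the following (the SID and port values are illustrative):

```yaml
# Illustrative deltas for a second CDB on the same host; keep all other
# host_vars values (ansible_host, volumes, and so on) unchanged.
oracle_sid: cdb3            # new, unique SID
pdb_num: 3
pdb_name: "{{ oracle_sid }}_pdb"
listener_port: 1524         # different from the first CDB's 1523
em_configuration: DBEXPRESS
em_express_port: 5502       # different from the first CDB's 5501
```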

Validate Oracle installation

1. Log in to the Oracle server as the oracle user and execute the following command:
ps -ef | grep ora
This lists the Oracle processes if the installation completed as expected and the Oracle database has started.
2. Log in to the database to check the database configuration settings and the PDBs created with the following command sets.
sqlplus / as sysdba

[oracle@localhost ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Thu May 6 12:52:51 2021
Version 19.8.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.8.0.0.0

SQL>

select name, log_mode from v$database;

SQL> select name, log_mode from v$database;
NAME      LOG_MODE
--------- ------------
CDB2      ARCHIVELOG

show pdbs;

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 CDB2_PDB1                      READ WRITE NO
         4 CDB2_PDB2                      READ WRITE NO
         5 CDB2_PDB3                      READ WRITE NO
col svrname form a30
col dirname form a30
select svrname, dirname, nfsversion from v$dnfs_servers;

SQL> col svrname form a30
SQL> col dirname form a30
SQL> select svrname, dirname, nfsversion from v$dnfs_servers;

SVRNAME                        DIRNAME                        NFSVERSION
------------------------------ ------------------------------ ----------------
172.21.126.200                 /rhelora03_u02                 NFSv3.0
172.21.126.200                 /rhelora03_u03                 NFSv3.0
172.21.126.200                 /rhelora03_u01                 NFSv3.0

This confirms that dNFS is working properly.
3. Connect to the database via the listener to check the Oracle listener configuration with the following command. Change to the appropriate listener port and database service name.
sqlplus system@//localhost:1523/cdb2_pdb1.cie.netapp.com

[oracle@localhost ~]$ sqlplus system@//localhost:1523/cdb2_pdb1.cie.netapp.com

SQL*Plus: Release 19.0.0.0.0 - Production on Thu May 6 13:19:57 2021
Version 19.8.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Enter password:
Last Successful login time: Wed May 05 2021 17:11:11 -04:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.8.0.0.0

SQL> show user
USER is "SYSTEM"
SQL> show con_name
CON_NAME
CDB2_PDB1

This confirms that the Oracle listener is working properly.

Where to go for help?

If you need help with the toolkit, please join the NetApp Solution Automation community on Slack and post your questions or inquiries in the solution-automation channel.