NetApp configuration


NetApp storage deployed for Epic software environments uses storage controllers in a high-availability (HA) pair configuration. Storage must be presented to the Epic database servers from both controllers over the Fibre Channel Protocol (FCP) so that the application load is evenly balanced across the pair during normal operation.

Epic requirements for separating production workloads into fault domains called pools are detailed in the Epic All-Flash Reference Architecture Strategy Handbook. Read that document in detail before continuing. Note that an ONTAP node can be considered a separate pool of storage.

ONTAP configuration

This section describes a sample deployment and the provisioning procedures, using the relevant ONTAP commands. The emphasis is on showing how storage is provisioned to implement the NetApp-recommended storage layout, which uses an HA controller pair. A major advantage of ONTAP is the ability to scale out without disturbing the existing HA pairs.

Epic provides detailed storage performance requirements and layout guidance to each customer, including the storage presentation and host-side storage layout. Epic provides these customer-specific documents:

  • The Epic Hardware Configuration Guide used for sizing during presales.

  • The Epic Database Storage Layout Recommendations used for LUN and volume layout during deployment.

Develop a customer-specific storage system layout and configuration that meets these requirements by referring to the Epic Database Storage Layout Recommendations.

The following example describes the deployment of an AFF A700 storage system supporting a 10TB database. The provisioning parameters of the storage used to support the production database in the example deployment are shown in the table below.

Controller host name
  Controller 1: Prod1-01
  Controller 2: Prod1-02

ONTAP root aggregates
  Controller 1: aggr0_prod1-01 (ADP, 11 partitions)
  Controller 2: aggr0_prod1-02 (ADP, 11 partitions)

Data aggregates
  Controller 1: Prod1-01_aggr1 (22 partitions)
  Controller 2: Prod1-02_aggr1 (22 partitions)

Volumes (size)
  Controller 1:
    epic_prod_db1 (2TB)
    epic_prod_db2 (2TB)
    epic_prod_db3 (2TB)
    epic_prod_db4 (2TB)
    epic_prod_db5 (2TB)
    epic_prod_db6 (2TB)
    epic_prod_db7 (2TB)
    epic_prod_db8 (2TB)
    epic_prod_inst (1TB)
    epic_prod_jrn1 (1200GB)
    epic_prod_jrn2 (1200GB)
  Controller 2:
    epic_report_db1 (2TB)
    epic_report_db2 (2TB)
    epic_report_db3 (2TB)
    epic_report_db4 (2TB)
    epic_report_db5 (2TB)
    epic_report_db6 (2TB)
    epic_report_db7 (2TB)
    epic_report_db8 (2TB)
    epic_report_inst (1TB)
    epic_report_jrn1 (1200GB)
    epic_report_jrn2 (1200GB)

LUN paths (size)
  Controller 1:
    /epic_prod_db1/epic_prod_db1 (1.4TB)
    /epic_prod_db2/epic_prod_db2 (1.4TB)
    /epic_prod_db3/epic_prod_db3 (1.4TB)
    /epic_prod_db4/epic_prod_db4 (1.4TB)
    /epic_prod_db5/epic_prod_db5 (1.4TB)
    /epic_prod_db6/epic_prod_db6 (1.4TB)
    /epic_prod_db7/epic_prod_db7 (1.4TB)
    /epic_prod_db8/epic_prod_db8 (1.4TB)
    /epic_prod_inst/epic_prod_inst (700GB)
    /epic_prod_jrn1/epic_prod_jrn1 (800GB)
    /epic_prod_jrn2/epic_prod_jrn2 (800GB)
  Controller 2:
    /epic_report_db1/epic_report_db1 (1.4TB)
    /epic_report_db2/epic_report_db2 (1.4TB)
    /epic_report_db3/epic_report_db3 (1.4TB)
    /epic_report_db4/epic_report_db4 (1.4TB)
    /epic_report_db5/epic_report_db5 (1.4TB)
    /epic_report_db6/epic_report_db6 (1.4TB)
    /epic_report_db7/epic_report_db7 (1.4TB)
    /epic_report_db8/epic_report_db8 (1.4TB)
    /epic_report_inst/epic_report_inst (700GB)
    /epic_report_jrn1/epic_report_jrn1 (800GB)
    /epic_report_jrn2/epic_report_jrn2 (800GB)

VMs
  Controller 1: RHEL
  Controller 2: RHEL

LUN type
  Controller 1: Linux (mounted as RDMs directly by the RHEL VMs using FC)
  Controller 2: Linux (mounted as RDMs directly by the RHEL VMs using FC)

FCP initiator group (igroup) name
  Controller 1: ig_epic_prod (Linux)
  Controller 2: ig_epic_report (Linux)

Host operating system
  Controller 1: VMware
  Controller 2: VMware

Epic database server host name
  Controller 1: epic_prod
  Controller 2: epic_report

SVMs
  Controller 1: svm_prod
  Controller 2: svm_ps (production services), svm_cifs

ONTAP licenses

After the storage controllers are set up, apply licenses to enable ONTAP features recommended by NetApp. The licenses necessary for Epic workloads are FC, CIFS, Snapshot, SnapRestore, FlexClone, and SnapMirror.

To apply the licenses, open NetApp System Manager, go to Configuration > Licenses, and add the appropriate licenses. Alternatively, run the following command to add licenses from the CLI:

system license add -license-code <code>
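You can then list the installed licenses to confirm that each required feature is enabled:

system license show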

AutoSupport configuration

The AutoSupport tool sends summary support information to NetApp through HTTPS. To configure AutoSupport, run the following ONTAP commands:

system node autosupport modify -node * -state enable
system node autosupport modify -node * -mail-hosts <mailhost.customer.com>
system node autosupport modify -node prod1-01 -from prod1-01@customer.com
system node autosupport modify -node prod1-02 -from prod1-02@customer.com
system node autosupport modify -node * -to storageadmins@customer.com
system node autosupport modify -node * -support enable
system node autosupport modify -node * -transport https
system node autosupport modify -node * -hostname-subj true
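To confirm that AutoSupport messages can actually be delivered, you can optionally run the built-in connectivity check and send a test message:

system node autosupport check show -node *
system node autosupport invoke -node * -type test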

Hardware-assisted takeover configuration

On each node, enable hardware-assisted takeover to minimize the time required to initiate a takeover following the unlikely failure of a controller. To configure hardware-assisted takeover, complete the following steps:

  1. Run the following ONTAP command. Set the partner address option to the IP address of the management port for prod1-02.

    EPIC::> storage failover modify -node prod1-01 -hwassist-partner-ip <prod1-02-mgmt-ip>
  2. Run the following ONTAP command. Set the partner address option to the IP address of the management port for prod1-01.

    EPIC::> storage failover modify -node prod1-02 -hwassist-partner-ip <prod1-01-mgmt-ip>
  3. Run the following ONTAP commands to enable hardware-assisted takeover on both nodes of the prod1-01/prod1-02 HA pair:

    EPIC::> storage failover modify -node prod1-01 -hwassist true
    EPIC::> storage failover modify -node prod1-02 -hwassist true
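You can verify the hardware-assisted takeover configuration on both nodes with the following command:

    EPIC::> storage failover hwassist show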

ONTAP storage provisioning

The storage provisioning workflow is as follows. A sample command sequence that walks through these steps end to end is shown after the list.

  1. Create the aggregates.

  2. Create a storage virtual machine (SVM).

    After aggregate creation, the next step is to create an SVM. In ONTAP, storage is virtualized in the form of an SVM, so hosts and clients no longer access the physical storage hardware directly. Create an SVM using the System Manager GUI or the CLI.

  3. Create FC LIFs.

    Ports and storage are provisioned on the SVM and presented to hosts and clients through virtual ports called logical interfaces (LIFs).

    You can run all the workloads in a single SVM with all the protocols enabled, but for Epic, NetApp recommends one SVM for production FC and a separate SVM for CIFS.

    1. Enable and start FC from SVM settings in the System Manager GUI.

    2. Add FC LIFs to the SVM. Configure multiple FC LIFs on each storage node, depending on the number of paths architected per LUN.

  4. Create initiator groups (igroups).

    Igroups are tables of FC protocol host WWPNs or iSCSI host node names that define which LUNs are available to which hosts. For example, if you have a host cluster, you can use igroups to ensure that specific LUNs are visible to only one host in the cluster or to all the hosts in the cluster. You can define multiple igroups and map them to LUNs to control which initiators have access to which LUNs.

    Create FC igroups of type VMware using the System Manager GUI or the CLI.

  5. Create zones on the FC switch.

    An FC or FCoE zone is a logical grouping of one or more ports in a fabric. For devices to be able to see each other, connect, create sessions with one another, and communicate, both ports need to have a common zone membership. Single initiator zoning is recommended.

    1. Create zones on the switch and add the NetApp targets and the Cisco UCS blade initiators to the zone.

      NetApp best practice is single-initiator zoning: each zone contains only one initiator plus the target WWPNs on the controllers. Zones use the port name (WWPN), not the node name (WWNN).

  6. Create volumes and LUNs.

    1. Create the volumes that host the LUNs using the System Manager GUI (or the CLI). All the storage efficiency settings and data protection are set by default on the volume. You can optionally turn on volume encryption and QoS policies with the volume modify command. Note that the volumes must be large enough to contain the LUNs and their Snapshot copies; to protect the volume from capacity issues, enable the autosize and autodelete options. After the volumes are created, create the LUNs that house the Epic workload.

    2. Create FC LUNs of type VMware to host the Epic workload using the System Manager GUI (or the CLI). System Manager simplifies LUN creation with an easy-to-follow wizard.

      You can also use NetApp Virtual Storage Console (VSC) to provision volumes and LUNs. See the FC Configuration for ESX Express Guide.

      If you are not using VSC, see the SAN Administration Guide and the SAN Configuration Guide.

  7. Map the LUNs to the igroups.

    After the LUNs and igroups are created, map the LUNs to the relevant igroups to give the desired hosts access to them.

    The LUNs are now ready to be discovered by the ESXi servers. Rescan storage on the ESXi hosts and add the newly discovered LUNs.
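The following is a minimal ONTAP CLI sketch of this workflow using the sample names from the table above. The disk counts, home ports, WWPNs, and sizes are illustrative placeholders, not validated values; derive the real values from the Epic Database Storage Layout Recommendations for your deployment. Zoning (step 5) is performed on the FC switch, not in ONTAP, so it is omitted. The # lines are annotations, not CLI input, and the LIF syntax shown is the classic ONTAP 9 form.

# Step 1: create one data aggregate per node
EPIC::> storage aggregate create -aggregate prod1-01_aggr1 -node prod1-01 -diskcount 22
EPIC::> storage aggregate create -aggregate prod1-02_aggr1 -node prod1-02 -diskcount 22

# Step 2: create the production SVM
EPIC::> vserver create -vserver svm_prod -rootvolume svm_prod_root -aggregate prod1-01_aggr1 -rootvolume-security-style unix

# Step 3: start the FCP service and create FC LIFs on each node
EPIC::> vserver fcp create -vserver svm_prod
EPIC::> network interface create -vserver svm_prod -lif fc_prod1-01_0e -role data -data-protocol fcp -home-node prod1-01 -home-port 0e
EPIC::> network interface create -vserver svm_prod -lif fc_prod1-02_0e -role data -data-protocol fcp -home-node prod1-02 -home-port 0e

# Step 4: create an igroup holding the host WWPNs
EPIC::> lun igroup create -vserver svm_prod -igroup ig_epic_prod -protocol fcp -ostype vmware -initiator <host-wwpn-1>,<host-wwpn-2>

# Step 6: create one volume/LUN pair (repeat for each entry in the table)
EPIC::> volume create -vserver svm_prod -volume epic_prod_db1 -aggregate prod1-01_aggr1 -size 2TB
EPIC::> volume autosize -vserver svm_prod -volume epic_prod_db1 -mode grow
EPIC::> volume snapshot autodelete modify -vserver svm_prod -volume epic_prod_db1 -enabled true
EPIC::> lun create -vserver svm_prod -path /vol/epic_prod_db1/epic_prod_db1 -size 1.4TB -ostype vmware

# Step 7: map the LUN to the igroup
EPIC::> lun map -vserver svm_prod -path /vol/epic_prod_db1/epic_prod_db1 -igroup ig_epic_prod

Note that the CLI addresses LUNs by their full /vol/<volume>/<lun> path, whereas the table above abbreviates the paths without the /vol prefix.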