
Installation and configuration


NetApp Cloud Volumes ONTAP deployment

Complete the following steps to configure your Cloud Volumes ONTAP instance:

  1. Prepare the public cloud service provider environment.

    You must capture the environment details of your public cloud service provider for the solution configuration. For example, for Amazon Web Services (AWS) environment preparation, you need the AWS access key, the AWS secret key, and other network details like region, VPC, subnet, and so on.
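    As an illustration, these details can be collected with the AWS CLI; the VPC ID and query fields below are placeholders, so substitute values from your own environment.

    ```shell
    # Sketch: gather AWS environment details for the solution configuration.
    # The VPC ID in the subnet filter is a placeholder -- use your own.
    aws configure get region
    aws ec2 describe-vpcs --query 'Vpcs[].{ID:VpcId,CIDR:CidrBlock}' --output table
    aws ec2 describe-subnets \
      --filters Name=vpc-id,Values=vpc-0123456789abcdef0 \
      --query 'Subnets[].{ID:SubnetId,AZ:AvailabilityZone,CIDR:CidrBlock}' --output table
    ```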

  2. Configure the VPC endpoint gateway.

    A VPC endpoint gateway is required to enable the connection between the VPC and the AWS S3 service. This Gateway-type endpoint is used to enable backup on CVO.
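    A minimal sketch of creating such an endpoint with the AWS CLI follows; the VPC ID, route table ID, and region in the service name are placeholders for your environment's values.

    ```shell
    # Sketch: create a Gateway-type VPC endpoint for Amazon S3.
    # All IDs and the region (us-east-1) are placeholders.
    aws ec2 create-vpc-endpoint \
      --vpc-id vpc-0123456789abcdef0 \
      --vpc-endpoint-type Gateway \
      --service-name com.amazonaws.us-east-1.s3 \
      --route-table-ids rtb-0123456789abcdef0
    ```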

  3. Access NetApp BlueXP.

    To access the NetApp BlueXP and other cloud services, you need to sign up on NetApp BlueXP. For setting up workspaces and users in the BlueXP account, click here. You need an account that has permission to deploy the Connector in your cloud provider directly from BlueXP. You can download the BlueXP policy from here.

  4. Deploy Connector.

    Before adding a Cloud Volumes ONTAP working environment, you must deploy a Connector. BlueXP prompts you if you try to create your first Cloud Volumes ONTAP working environment without a Connector in place. To deploy a Connector in AWS from BlueXP, see this link.

  5. Launch Cloud Volumes ONTAP in AWS.

    You can launch Cloud Volumes ONTAP in a single-system configuration or as an HA pair in AWS. Read the step-by-step instructions.

    For detailed information about these steps, see the Quick start guide for Cloud Volumes ONTAP in AWS.

    In this solution, we deployed a single-node Cloud Volumes ONTAP system in AWS. The following figure depicts the NetApp BlueXP dashboard with a single-node CVO instance.

This screenshot shows the NetApp BlueXP Canvas screen with My Working Environments displayed.

On-premises FlexPod deployment

To understand FlexPod with UCS X-Series, VMware, and NetApp ONTAP design details, see the FlexPod Datacenter with Cisco UCS X-Series design guide. This document provides design guidance for incorporating the Cisco Intersight-managed UCS X-Series platform within the FlexPod Datacenter infrastructure.

For deploying the on-premises FlexPod instance, see this deployment guide.

This document provides deployment guidance for incorporating the Cisco Intersight-managed UCS X-Series platform within a FlexPod Datacenter infrastructure. The document covers both configurations and best practices for a successful deployment.

FlexPod can be deployed in both UCS Managed Mode and Cisco Intersight Managed Mode (IMM). If you are deploying FlexPod in UCS Managed Mode, see this design guide and this deployment guide.

FlexPod deployment can be automated with infrastructure as code (IaC) using Ansible. The following GitHub repositories provide end-to-end FlexPod deployment automation:

  • Ansible configuration of FlexPod with Cisco UCS in UCS Managed Mode, NetApp ONTAP, and VMware vSphere can be seen here.

  • Ansible configuration of FlexPod with Cisco UCS in IMM, NetApp ONTAP, and VMware vSphere can be seen here.
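Running one of these repositories generally follows the standard Ansible workflow; the inventory and playbook file names below are hypothetical and vary by repository, so check each repository's README for the actual names.

```shell
# Hypothetical invocation -- inventory and playbook names differ per repository.
git clone <flexpod-ansible-repo-url>
cd <flexpod-ansible-repo>
ansible-playbook -i inventory Setup_ONTAP.yml
```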

On-premises ONTAP storage configuration

This section describes some of the important ONTAP configuration steps that are specific to this solution.

  1. Configure an SVM with the iSCSI service running.

    1. vserver create -vserver Healthcare_SVM -rootvolume Healthcare_SVM_root -aggregate aggr1_A400_G0312_01 -rootvolume-security-style unix
    2. vserver add-protocols -vserver Healthcare_SVM -protocols iscsi
    3. vserver iscsi create -vserver Healthcare_SVM
       To verify:
       A400-G0312::> vserver iscsi show -vserver Healthcare_SVM
       Vserver: Healthcare_SVM
       Target Name:
       Target Alias: Healthcare_SVM
       Administrative Status: up

    If the iSCSI license was not installed during cluster configuration, make sure to install the license before creating the iSCSI service.
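    A minimal sketch of checking for and installing the license from the ONTAP CLI follows; the license key is a placeholder.

    ```text
    A400-G0312::> system license show -package iSCSI
    A400-G0312::> system license add -license-code <iscsi-license-key>
    ```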

  2. Create a FlexVol volume.

    1. volume create -vserver Healthcare_SVM -volume hc_iscsi_vol -aggregate aggr1_A400_G0312_01 -size 500GB -state online -policy default -space guarantee none
  3. Add interfaces for iSCSI access.

    1. network interface create -vserver Healthcare_SVM -lif iscsi-lif-01a -service-policy default-data-iscsi -home-node <st-node01> -home-port a0a-<infra-iscsi-a-vlan-id> -address <st-node01-infra-iscsi-a-ip> -netmask <infra-iscsi-a-mask> -status-admin up
    2. network interface create -vserver Healthcare_SVM -lif iscsi-lif-01b -service-policy default-data-iscsi -home-node <st-node01> -home-port a0a-<infra-iscsi-b-vlan-id> -address <st-node01-infra-iscsi-b-ip> -netmask <infra-iscsi-b-mask> -status-admin up
    3. network interface create -vserver Healthcare_SVM -lif iscsi-lif-02a -service-policy default-data-iscsi -home-node <st-node02> -home-port a0a-<infra-iscsi-a-vlan-id> -address <st-node02-infra-iscsi-a-ip> -netmask <infra-iscsi-a-mask> -status-admin up
    4. network interface create -vserver Healthcare_SVM -lif iscsi-lif-02b -service-policy default-data-iscsi -home-node <st-node02> -home-port a0a-<infra-iscsi-b-vlan-id> -address <st-node02-infra-iscsi-b-ip> -netmask <infra-iscsi-b-mask> -status-admin up

    In this solution, we created four iSCSI logical interfaces (LIFs), two on each node.

    After the FlexPod instance is up and running with vCenter deployed and all ESXi hosts added to it, deploy a Linux VM to act as the server that connects to and accesses the NetApp ONTAP storage. In this solution, we installed a CentOS 8 instance in vCenter.

  4. Create a LUN.

    1. lun create -vserver Healthcare_SVM -path /vol/hc_iscsi_vol/iscsi_lun1 -size 200GB -ostype linux -space-reserve disabled

    For an EHR operational database (ODB), journal, and application workloads, the EHR vendor recommends presenting storage to servers as iSCSI LUNs. NetApp also supports FCP and NVMe/FC on capable versions of the AIX and RHEL operating systems, which can enhance performance. FCP and NVMe/FC can coexist on the same fabric.

  5. Create an igroup.

    1. igroup create -vserver Healthcare_SVM -igroup ehr -protocol iscsi -ostype linux -initiator <server-iqn>

    Igroups are used to allow server access to LUNs. For a Linux host, the server IQN can be found in the file /etc/iscsi/initiatorname.iscsi.

  6. Map the LUN to the igroup.

    1. lun mapping create -vserver Healthcare_SVM -path /vol/hc_iscsi_vol/iscsi_lun1 -igroup ehr -lun-id 0
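
    From the CentOS VM, the mapped LUN can then be discovered and logged in to using the open-iscsi tools; the LIF IP address below is a placeholder for one of the SVM's iSCSI LIF addresses created earlier.

    ```shell
    # Sketch: discover and log in to the iSCSI target from the Linux host.
    # <iscsi-lif-ip> is a placeholder -- use one of the Healthcare_SVM LIF addresses.
    iscsiadm -m discovery -t sendtargets -p <iscsi-lif-ip>:3260
    iscsiadm -m node --login
    # Verify that the 200GB LUN appears as a new block device
    lsblk
    ```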

Add on-premises FlexPod storage to BlueXP

Complete the following steps to add your FlexPod storage to BlueXP as a working environment.

  1. From the navigation menu, select Storage > Canvas.

  2. On the Canvas page, click Add Working Environment and select On-Premises.

  3. Select On-Premises ONTAP. Click Next.

    This screenshot shows the BlueXP Add Working Environment page with On-Premises ONTAP selected.

  4. On the ONTAP Cluster Details page, enter the cluster management IP address and the password for the admin user account. Then click Add.

    This screenshot shows the BlueXP Discover ONTAP Cluster page with the ONTAP Cluster Details entries.

  5. On the Details and Credentials page, enter a name and description for the working environment, and then click Go.

    BlueXP discovers the ONTAP cluster and adds it as a working environment on the Canvas.

    This screenshot shows the BlueXP Canvas page with the recently added Working Environments on the right.

For detailed information, see the page Discover on-premises ONTAP clusters.