Host configuration and preparation checklist

Prepare each of the hypervisor hosts where an ONTAP Select node is deployed. As part of preparing the hosts, carefully assess the deployment environment to make sure that the hosts are properly configured and ready to support the deployment of an ONTAP Select cluster.

Note The ONTAP Select Deploy administration utility does not perform the required network and storage configuration of the hypervisor hosts. You must manually prepare each host prior to deploying an ONTAP Select cluster.

General hypervisor preparation

You must prepare the hypervisor hosts.

KVM hypervisor

Prepare the Linux server

You must prepare each of the Linux KVM servers where an ONTAP Select node is deployed. You must also prepare the server where the ONTAP Select Deploy administration utility is deployed.

Install Red Hat Enterprise Linux

You must install the Red Hat Enterprise Linux (RHEL) operating system using the ISO image. During installation, you should configure the system as follows:

  • Select Default as the security policy

  • Choose the Virtualized Host software selection

  • The destination should be the local boot disk and not a RAID LUN used by ONTAP Select

  • Verify that the host management interface is up after you boot the system

Note You can edit the correct network configuration file under /etc/sysconfig/network-scripts and then bring up the interface by using the ifup command.
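
For example, assuming a hypothetical management interface named em1 (substitute the actual interface name on your host), you might enable the interface as follows:

    # Hypothetical interface name; edit the matching ifcfg file first
    vi /etc/sysconfig/network-scripts/ifcfg-em1    # set ONBOOT=yes and the IP settings
    ifup em1
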
Install additional packages required for ONTAP Select

ONTAP Select requires several additional software packages. The exact list of packages varies based on the version of Linux you are using. As a first step, verify that the yum repository is available on your server. If it is not available, you can retrieve it with the following command:

    wget your_repository_location

Note Some of the required packages might already be installed if you chose Virtualized Host for the software selection during installation of the Linux server. You might need to install the openvswitch package from source code as described in the Open vSwitch documentation.
For additional information about the necessary packages and other configuration requirements, see the NetApp Interoperability Matrix Tool (https://imt.netapp.com/matrix/#welcome).
Additional packages required for RHEL 7.7

Install the same set of packages required for RHEL 7.6.

Additional packages required for RHEL 7.6

Verify that the following packages and dependencies are installed when using RHEL 7.6 or CentOS 7.6. In each case, the package name and version are included.

  • qemu-kvm (1.5.3-160)

    Note When using software RAID, you must use version 2.9.0 instead.
  • libvirt (4.5.0-10)

  • openvswitch (2.7.3)

  • virt-install (1.5.0-1)

  • lshw (B.02.18-12)

  • lsscsi (0.27-6)

  • lsof (4.87-6)

If you are using vNAS on KVM (external storage) and plan to migrate virtual machines from one host to another, you should install the following additional packages and dependencies:

  • fence-agents-all (4.2.1-11)

  • lvm2-cluster (2.02.180-8)

  • pacemaker (1.1.19-8)

  • pcs (0.9.165-6)
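
As a sketch, you might confirm repository availability and install these packages with yum. The package list mirrors the RHEL 7.6 lists above, your repositories must carry the required versions, and openvswitch might need a source build as noted earlier:

    yum repolist    # confirm that a yum repository is available
    yum install qemu-kvm libvirt openvswitch virt-install lshw lsscsi lsof
    # Only when using vNAS with virtual machine migration:
    yum install fence-agents-all lvm2-cluster pacemaker pcs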

Additional packages required for RHEL 7.5

Verify that the following packages and dependencies are installed when using RHEL 7.5 or CentOS 7.5. In each case, the package name and version are included.

  • qemu-kvm (1.5.3-141)

    Note When using software RAID, you must use version 2.9.0 instead.
  • libvirt (3.9.0)

  • openvswitch (2.7.3)

  • virt-install (1.4.1-7)

  • lshw (B.02.18-12)

  • lsscsi (0.27-6)

  • lsof (4.87-5)

If you are using vNAS on KVM (external storage) and plan to migrate virtual machines from one host to another, you should install the following additional packages and dependencies:

  • fence-agents-all (4.0.11-86)

  • lvm2-cluster (2.02.177-4)

  • pacemaker (1.1.18-11)

  • pcs (0.9.162-5)

Additional packages required for RHEL 7.4

Verify that the following packages and dependencies are installed when using RHEL 7.4 or CentOS 7.4. In each case, the package name and version are included.

  • qemu-kvm (1.5.3-141)

    Note When using software RAID, you must use version 2.9.0 instead.
  • libvirt (3.2.0-14)

  • openvswitch (2.7.3)

  • virt-install (1.4.1-7)

  • lshw (B.02.18-7)

  • lsscsi (0.27-6)

  • lsof (4.87-4)

If you are using vNAS on KVM (external storage) and plan to migrate virtual machines from one host to another, you should install the following additional packages and dependencies:

  • fence-agents-all (4.0.11-66)

  • lvm2-cluster (2.02.171-8)

  • pacemaker (1.1.16-12)

  • pcs (0.9.158-6)

Configuration of the storage pools

An ONTAP Select storage pool is a logical data container that abstracts the underlying physical storage. You must manage the storage pools on the KVM hosts where ONTAP Select is deployed.

Create a storage pool

You must create at least one storage pool for each ONTAP Select node. If you use software RAID instead of local hardware RAID, storage disks are attached to the node for the root and data aggregates. In this case, you must still create a storage pool for the system data.

Before you begin

Verify that you can sign in to the Linux CLI on the host where ONTAP Select is deployed.

About this task

The ONTAP Select Deploy administration utility expects the target location for the storage pool to be specified as /dev/<pool_name>, where <pool_name> is a unique pool name on the host.

Note The entire capacity of the LUN is allocated when a storage pool is created.
Steps
  1. Display the local devices on the Linux host and choose the LUN that will contain the storage pool:

    lsblk

    The appropriate LUN is likely to be the device with the largest storage capacity.

  2. Define the storage pool on the device:

    virsh pool-define-as <pool_name> logical --source-dev <device_name> --target=/dev/<pool_name>

    For example:

    virsh pool-define-as select_pool logical --source-dev /dev/sdb --target=/dev/select_pool
  3. Build the storage pool:

    virsh pool-build <pool_name>
  4. Start the storage pool:

    virsh pool-start <pool_name>
  5. Configure the storage pool to automatically start at system boot:

    virsh pool-autostart <pool_name>
  6. Verify that the storage pool has been created:

    virsh pool-list
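
For example, with the pool name select_pool used above, a successfully started pool produces output similar to the following (illustrative; the exact formatting varies by libvirt version):

    virsh pool-list
     Name                 State      Autostart
    -------------------------------------------
     select_pool          active     yes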

Delete a storage pool

You can delete a storage pool when it is no longer needed.

Before you begin

Verify that you can sign in to the Linux CLI on the host where ONTAP Select is deployed.

About this task

The ONTAP Select Deploy administration utility expects the target location for the storage pool to be specified as /dev/<pool_name>, where <pool_name> is a unique pool name on the host.

Steps
  1. Verify that the storage pool is defined:

    virsh pool-list
  2. Destroy the storage pool:

    virsh pool-destroy <pool_name>
  3. Undefine the configuration for the inactive storage pool:

    virsh pool-undefine <pool_name>
  4. Verify that the storage pool has been removed from the host:

    virsh pool-list
  5. Verify that all logical volumes for the storage pool volume group have been deleted:

    1. Display the logical volumes:

      lvs
    2. If any logical volumes exist for the pool, delete them:

      lvremove <logical_volume_name>
  6. Verify that the volume group has been deleted:

    1. Display the volume groups:

      vgs
    2. If a volume group exists for the pool, delete it:

      vgremove <volume_group_name>
  7. Verify that the physical volume has been deleted:

    1. Display the physical volumes:

      pvs
    2. If a physical volume exists for the pool, delete it:

      pvremove <physical_volume_name>
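
As a sketch, a complete cleanup for the example pool select_pool created earlier might look like the following. The pool and device names are illustrative, and you should confirm each removal with the corresponding list command first:

    virsh pool-destroy select_pool
    virsh pool-undefine select_pool
    lvs | grep select_pool      # any remaining logical volumes must be removed with lvremove
    vgremove select_pool        # only if the volume group still exists
    pvremove /dev/sdb           # only if the physical volume still exists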

ESXi hypervisor

Each host must be configured with the following:

  • A pre-installed and supported hypervisor

  • A VMware vSphere license

Also, the same vCenter server must be able to manage all of the hosts in the cluster where ONTAP Select nodes are deployed.

In addition, you should make sure that the firewall ports are configured to allow access to vSphere. These ports must be open to support serial port connectivity to the ONTAP Select virtual machines.

By default, VMware allows access on the following ports:

  • Port 22 and ports 1024 – 65535 (inbound traffic)

  • Ports 0 – 65535 (outbound traffic)

NetApp recommends that the following firewall ports be opened to allow access to vSphere:

  • Ports 7200 – 7400 (both inbound and outbound traffic)
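
You can review the current firewall configuration from the ESXi shell. This is a read-only check; opening the 7200 – 7400 range itself is typically done through the vSphere client or a custom firewall ruleset:

    esxcli network firewall ruleset list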

You should also be familiar with the vCenter rights that are required. See VMware vCenter server for more information.

ONTAP Select cluster network preparation

You can deploy ONTAP Select as either a multi-node cluster or a single-node cluster. In many cases, a multi-node cluster is preferable because of the additional storage capacity and HA capability.

Illustration of the ONTAP Select networks and nodes

The figures below illustrate the networks used with a single-node cluster and a four-node cluster.

Single-node cluster showing one network

The following figure illustrates a single-node cluster. The external network carries client, management, and cross-cluster replication traffic (SnapMirror/SnapVault).

(Figure: Single-node cluster showing one network)

Four-node cluster showing two networks

The following figure illustrates a four-node cluster. The internal network enables communication among the nodes in support of the ONTAP cluster network services. The external network carries client, management, and cross-cluster replication traffic (SnapMirror/SnapVault).

(Figure: Four-node cluster showing two networks)

Single node within a four-node cluster

The following figure illustrates the typical network configuration for a single ONTAP Select virtual machine within a four-node cluster. There are two separate networks: ONTAP-internal and ONTAP-external.

(Figure: Single node within a four-node cluster)

KVM host

Configure Open vSwitch on a KVM host

You must configure a software-defined switch on each ONTAP Select node using Open vSwitch.

Before you begin

Verify that NetworkManager is disabled and the native Linux network service is enabled.

About this task

ONTAP Select requires two separate networks, both of which use port bonding to provide HA capability.

Steps
  1. Verify that Open vSwitch is active on the host:

    1. Determine if Open vSwitch is running:

      systemctl status openvswitch
    2. If Open vSwitch is not running, start it:

      systemctl start openvswitch
  2. Display the Open vSwitch configuration:

    ovs-vsctl show

    The configuration appears empty if Open vSwitch has not already been configured on the host.

  3. Add a new vSwitch instance:

    ovs-vsctl add-br <bridge_name>

    For example:

    ovs-vsctl add-br ontap-br
  4. Bring the network interfaces down:

    ifdown <interface_1>
    ifdown <interface_2>
  5. Combine the links using LACP:

    ovs-vsctl add-bond <bridge_name> bond-br <interface_1> <interface_2> bond_mode=balance-slb lacp=active other_config:lacp-time=fast
Note You only need to configure a bond if there is more than one interface.
  6. Bring the network interfaces up:

    ifup <interface_1>
    ifup <interface_2>
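
After the interfaces are up, you can confirm the bridge, bond, and LACP negotiation; the bond port name bond-br matches the add-bond command above:

    ovs-vsctl show                  # verify the bridge and bond configuration
    ovs-appctl bond/show bond-br    # check the bond status
    ovs-appctl lacp/show bond-br    # check the LACP negotiation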

ESXi host

vSwitch configuration on a hypervisor host

The vSwitch is the core hypervisor component used to support the connectivity for the internal and external networks. There are several things you should consider as part of configuring each hypervisor vSwitch.

vSwitch configuration for a host with two physical ports (2x10Gb)

When each host includes two 10Gb ports, you should configure the vSwitch as follows:

  • Configure a vSwitch and assign both ports to it. Create a NIC team using the two ports.

  • Set the load balancing policy to “Route based on the originating virtual port ID”.

  • Mark both adapters as “active” or mark one adapter as “active” and the other as “standby”.

  • Set the “Failback” setting to “Yes”.

  • Configure the vSwitch to use jumbo frames (9000 MTU).

  • Configure a port group on the vSwitch for the internal traffic (ONTAP-internal):

    • The port group is assigned to ONTAP Select virtual network adapters e0c-e0f used for the cluster, HA interconnect, and mirroring traffic.

    • The port group should be on a non-routable VLAN because this network is expected to be private. You should add the appropriate VLAN tag to the port group to take this into account.

    • The load balancing, failback, and failover order settings of the port group should be the same as the vSwitch.

  • Configure a port group on the vSwitch for the external traffic (ONTAP-external):

    • The port group is assigned to ONTAP Select virtual network adapters e0a, e0b, and e0g used for data and management traffic.

    • The port group can be on a routable VLAN. Also, depending on the network environment, you should add an appropriate VLAN tag or configure the port group for VLAN trunking.

    • The load balancing, failback, and failover order settings of the port group should be the same as the vSwitch.

The above vSwitch configuration is for a host with 2x10Gb ports in a typical network environment.
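
The same configuration can also be sketched from the ESXi shell. The vSwitch, uplink, port group, and VLAN names below are illustrative assumptions, and the teaming policy can equally be set in the vSphere client:

    # Create the vSwitch with jumbo frames and attach both uplinks
    esxcli network vswitch standard add --vswitch-name=ONTAP-vSwitch
    esxcli network vswitch standard set --vswitch-name=ONTAP-vSwitch --mtu=9000
    esxcli network vswitch standard uplink add --vswitch-name=ONTAP-vSwitch --uplink-name=vmnic0
    esxcli network vswitch standard uplink add --vswitch-name=ONTAP-vSwitch --uplink-name=vmnic1
    # Route based on the originating virtual port ID, with failback enabled
    esxcli network vswitch standard policy failover set --vswitch-name=ONTAP-vSwitch --load-balancing=portid --failback=true --active-uplinks=vmnic0,vmnic1
    # Add the internal and external port groups
    esxcli network vswitch standard portgroup add --vswitch-name=ONTAP-vSwitch --portgroup-name=ONTAP-internal
    esxcli network vswitch standard portgroup add --vswitch-name=ONTAP-vSwitch --portgroup-name=ONTAP-external
    # Tag the internal port group with its private VLAN (ID illustrative)
    esxcli network vswitch standard portgroup set --portgroup-name=ONTAP-internal --vlan-id=10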