ONTAP MetroCluster

Configuring the MetroCluster software in ONTAP


You must set up each node in the MetroCluster configuration in ONTAP, including the node-level configurations and the configuration of the nodes into two sites. You must also implement the MetroCluster relationship between the two sites.

[Figure: high-level workflow for node and cluster software configuration]
Steps
  1. Gather the required IP addresses for the controller modules before you begin the configuration process.

  2. Complete the IP network information worksheet for site A.

  3. Complete the IP network information worksheet for site B.

IP network information worksheet for Site A

You must obtain IP addresses and other network information for the first MetroCluster site (site A) from your network administrator before you configure the system.

Site A cluster creation information

When you first create the cluster, you need the following information:

Type of information                                       Your values
--------------------------------------------------------  -----------
Cluster name (example used in this information: site_A)
DNS domain
DNS name servers
Location
Administrator password

Site A node information

For each node in the cluster, you need a management IP address, a network mask, and a default gateway.

Node                              Port   IP address   Network mask   Default gateway
--------------------------------  -----  -----------  -------------  ---------------
Node 1 (example: controller_A_1)
Node 2 (example: controller_A_2)

Node 2 is not required if you are using a two-node MetroCluster configuration (one node at each site).

Site A LIFs and ports for cluster peering

For each node in the cluster, you need the IP addresses of two intercluster LIFs, including a network mask and a default gateway. The intercluster LIFs are used to peer the clusters.

Node             Port   IP address of intercluster LIF   Network mask   Default gateway
---------------  -----  -------------------------------  -------------  ---------------
Node 1 IC LIF 1
Node 1 IC LIF 2

Site A time server information

You must synchronize the time, which requires one or more NTP time servers.

Node           Host name   IP address   Network mask   Default gateway
-------------  ----------  -----------  -------------  ---------------
NTP server 1
NTP server 2

Site A AutoSupport information

You must configure AutoSupport on each node, which requires the following information:

Type of information                                Your values
-------------------------------------------------  -----------
From email address
Mail hosts (IP addresses or names)
Transport protocol (HTTP, HTTPS, or SMTP)
Proxy server
Recipient email addresses or distribution lists:
  Full-length messages
  Concise messages
  Partners

Site A SP information

You must enable access to the Service Processor (SP) of each node for troubleshooting and maintenance. This requires the following network information for each node:

Node      IP address   Network mask   Default gateway
--------  -----------  -------------  ---------------
Node 1

IP network information worksheet for site B

You must obtain IP addresses and other network information for the second MetroCluster site (site B) from your network administrator before you configure the system.

Site B cluster creation information

When you first create the cluster, you need the following information:

Type of information                                       Your values
--------------------------------------------------------  -----------
Cluster name (example used in this information: site_B)
DNS domain
DNS name servers
Location
Administrator password

Site B node information

For each node in the cluster, you need a management IP address, a network mask, and a default gateway.

Node                              Port   IP address   Network mask   Default gateway
--------------------------------  -----  -----------  -------------  ---------------
Node 1 (example: controller_B_1)
Node 2 (example: controller_B_2)

Node 2 is not required for two-node MetroCluster configurations (one node at each site).

Site B LIFs and ports for cluster peering

For each node in the cluster, you need the IP addresses of two intercluster LIFs, including a network mask and a default gateway. The intercluster LIFs are used to peer the clusters.

Node             Port   IP address of intercluster LIF   Network mask   Default gateway
---------------  -----  -------------------------------  -------------  ---------------
Node 1 IC LIF 1
Node 1 IC LIF 2

Site B time server information

You must synchronize the time, which requires one or more NTP time servers.

Node           Host name   IP address   Network mask   Default gateway
-------------  ----------  -----------  -------------  ---------------
NTP server 1
NTP server 2

Site B AutoSupport information

You must configure AutoSupport on each node, which requires the following information:

Type of information                                Your values
-------------------------------------------------  -----------
From email address
Mail hosts (IP addresses or names)
Transport protocol (HTTP, HTTPS, or SMTP)
Proxy server
Recipient email addresses or distribution lists:
  Full-length messages
  Concise messages
  Partners

Site B SP information

You must enable access to the Service Processor (SP) of each node for troubleshooting and maintenance, which requires the following network information for each node:

Node                      IP address   Network mask   Default gateway
------------------------  -----------  -------------  ---------------
Node 1 (controller_B_1)

Similarities and differences between standard cluster and MetroCluster configurations

The configuration of the nodes in each cluster in a MetroCluster configuration is similar to that of nodes in a standard cluster.

The MetroCluster configuration is built on two standard clusters. Physically, the configuration must be symmetrical, with each node having the same hardware configuration, and all of the MetroCluster components must be cabled and configured. However, the basic software configuration for nodes in a MetroCluster configuration is the same as that for nodes in a standard cluster.

Configuration step                                          Standard cluster configuration  MetroCluster configuration
----------------------------------------------------------  ------------------------------  --------------------------
Configure management, cluster, and data LIFs on each node.  Same in both types of clusters
Configure the root aggregate.                               Same in both types of clusters
Set up the cluster on one node in the cluster.              Same in both types of clusters
Join the other node to the cluster.                         Same in both types of clusters
Create a mirrored root aggregate.                           Optional                        Required
Peer the clusters.                                          Optional                        Required
Enable the MetroCluster configuration.                      Does not apply                  Required

Restoring system defaults and configuring the HBA type on a controller module

To ensure a successful MetroCluster installation, reset and restore defaults on the controller modules.

Important

This task is only required for stretch configurations using FC-to-SAS bridges.

Steps
  1. At the LOADER prompt, return the environmental variables to their default setting:

    set-defaults

  2. Boot the node into Maintenance mode, then configure the settings for any HBAs in the system:

    1. Boot into Maintenance mode:

      boot_ontap maint

    2. Check the current settings of the ports:

      ucadmin show

    3. Update the port settings as needed.

    If you have this type of HBA and desired mode   Use this command
    ----------------------------------------------  ----------------------------------------------
    CNA FC                                          ucadmin modify -m fc -t initiator adapter_name
    CNA Ethernet                                    ucadmin modify -mode cna adapter_name
    FC target                                       fcadmin config -t target adapter_name
    FC initiator                                    fcadmin config -t initiator adapter_name
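
    For example, to set a CNA adapter to FC initiator mode and an FC adapter to initiator mode, you might enter commands similar to the following (the adapter names 0e and 0c are illustrative; substitute the adapter names in your system):

      *> ucadmin modify -m fc -t initiator 0e
      *> fcadmin config -t initiator 0c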

  3. Exit Maintenance mode:

    halt

    After you run the command, wait until the node stops at the LOADER prompt.

  4. Boot the node back into Maintenance mode to enable the configuration changes to take effect:

    boot_ontap maint

  5. Verify the changes you made:

    If you have this type of HBA   Use this command
    -----------------------------  ----------------
    CNA                            ucadmin show
    FC                             fcadmin show

  6. Exit Maintenance mode:

    halt

    After you run the command, wait until the node stops at the LOADER prompt.

  7. Boot the node to the boot menu:

    boot_ontap menu

    After you run the command, wait until the boot menu is shown.

  8. Clear the node configuration by typing “wipeconfig” at the boot menu prompt, and then press Enter.

    The following screen shows the boot menu prompt:

    Please choose one of the following:
    
         (1) Normal Boot.
         (2) Boot without /etc/rc.
         (3) Change password.
         (4) Clean configuration and initialize all disks.
         (5) Maintenance mode boot.
         (6) Update flash from backup config.
         (7) Install new software first.
         (8) Reboot node.
         (9) Configure Advanced Drive Partitioning.
         Selection (1-9)?  wipeconfig
     This option deletes critical system configuration, including cluster membership.
     Warning: do not run this option on a HA node that has been taken over.
     Are you sure you want to continue?: yes
     Rebooting to finish wipeconfig request.

Configuring FC-VI ports on a X1132A-R6 quad-port card on FAS8020 systems

If you are using the X1132A-R6 quad-port card on a FAS8020 system, you can enter Maintenance mode to configure the 1a and 1b ports for FC-VI and initiator usage. This is not required on MetroCluster systems received from the factory, in which the ports are set appropriately for your configuration.

About this task

This task must be performed in Maintenance mode.

Note Converting an FC port to an FC-VI port with the ucadmin command is only supported on the FAS8020 and AFF 8020 systems. Converting FC ports to FC-VI ports is not supported on any other platform.
Steps
  1. Disable the ports:

    storage disable adapter 1a

    storage disable adapter 1b

    *> storage disable adapter 1a
    Jun 03 02:17:57 [controller_B_1:fci.adapter.offlining:info]: Offlining Fibre Channel adapter 1a.
    Host adapter 1a disable succeeded
    Jun 03 02:17:57 [controller_B_1:fci.adapter.offline:info]: Fibre Channel adapter 1a is now offline.
    *> storage disable adapter 1b
    Jun 03 02:18:43 [controller_B_1:fci.adapter.offlining:info]: Offlining Fibre Channel adapter 1b.
    Host adapter 1b disable succeeded
    Jun 03 02:18:43 [controller_B_1:fci.adapter.offline:info]: Fibre Channel adapter 1b is now offline.
    *>
  2. Verify that the ports are disabled:

    ucadmin show

    *> ucadmin show
             Current  Current    Pending  Pending    Admin
    Adapter  Mode     Type       Mode     Type       Status
    -------  -------  ---------  -------  ---------  -------
      ...
      1a     fc       initiator  -        -          offline
      1b     fc       initiator  -        -          offline
      1c     fc       initiator  -        -          online
      1d     fc       initiator  -        -          online
  3. Set the a and b ports to FC-VI mode:

    ucadmin modify -adapter 1a -type fcvi

    The command sets the mode on both ports in the port pair, 1a and 1b (even though only 1a is specified in the command).

    *> ucadmin modify -t fcvi 1a
    Jun 03 02:19:13 [controller_B_1:ucm.type.changed:info]: FC-4 type has changed to fcvi on adapter 1a. Reboot the controller for the changes to take effect.
    Jun 03 02:19:13 [controller_B_1:ucm.type.changed:info]: FC-4 type has changed to fcvi on adapter 1b. Reboot the controller for the changes to take effect.
  4. Confirm that the change is pending:

    ucadmin show

    *> ucadmin show
             Current  Current    Pending  Pending    Admin
    Adapter  Mode     Type       Mode     Type       Status
    -------  -------  ---------  -------  ---------  -------
      ...
      1a     fc       initiator  -        fcvi       offline
      1b     fc       initiator  -        fcvi       offline
      1c     fc       initiator  -        -          online
      1d     fc       initiator  -        -          online
  5. Shut down the controller, and then reboot into Maintenance mode.

  6. Confirm the configuration change:

    ucadmin show local

    Node           Adapter  Mode     Type       Mode     Type       Status
    ------------   -------  -------  ---------  -------  ---------  -----------
    ...
    controller_B_1
                   1a       fc       fcvi       -        -          online
    controller_B_1
                   1b       fc       fcvi       -        -          online
    controller_B_1
                   1c       fc       initiator  -        -          online
    controller_B_1
                   1d       fc       initiator  -        -          online
    6 entries were displayed.

Verifying disk assignment in Maintenance mode in a two-node configuration

Before fully booting the system to ONTAP, you can optionally boot the system to Maintenance mode and verify the disk assignment on the nodes. The disks should be assigned to create a fully symmetric configuration with both sites owning their own disk shelves and serving data, where each node and each pool have an equal number of mirrored disks assigned to them.

Before you begin

The system must be in Maintenance mode.

About this task

New MetroCluster systems have disk assignments completed prior to shipment.

The following table shows example pool assignments for a MetroCluster configuration. Disks are assigned to pools on a per-shelf basis.

Disk shelf (example name)     At site   Belongs to   And is assigned to that node's
----------------------------  --------  -----------  ------------------------------
Disk shelf 1 (shelf_A_1_1)    Site A    Node A 1     Pool 0
Disk shelf 2 (shelf_A_1_3)    Site A    Node A 1     Pool 0
Disk shelf 3 (shelf_B_1_1)    Site A    Node B 1     Pool 1
Disk shelf 4 (shelf_B_1_3)    Site A    Node B 1     Pool 1
Disk shelf 9 (shelf_B_1_2)    Site B    Node B 1     Pool 0
Disk shelf 10 (shelf_B_1_4)   Site B    Node B 1     Pool 0
Disk shelf 11 (shelf_A_1_2)   Site B    Node A 1     Pool 1
Disk shelf 12 (shelf_A_1_4)   Site B    Node A 1     Pool 1

If your configuration includes DS460C disk shelves, you should manually assign the disks using the following guidelines for each 12-disk drawer:

Assign these disks in the drawer   To this node and pool
---------------------------------  ---------------------
1 - 6                              Local node's pool 0
7 - 12                             DR partner's pool 1

This disk assignment pattern minimizes the effect on an aggregate if a drawer goes offline.

Steps
  1. If your system was received from the factory, confirm the shelf assignments:

    disk show -v

  2. If necessary, you can explicitly assign disks on the attached disk shelves to the appropriate pool:

    disk assign

    Disk shelves at the same site as the node are assigned to pool 0 and disk shelves located at the partner site are assigned to pool 1. You should assign an equal number of shelves to each pool.

    1. If you have not done so, boot each system into Maintenance mode.

    2. On the node on site A, systematically assign the local disk shelves to pool 0 and the remote disk shelves to pool 1:
      disk assign -shelf disk_shelf_name -p pool

      If storage controller node_A_1 has four shelves, you issue the following commands:

      *> disk assign -shelf shelf_A_1_1 -p 0
      *> disk assign -shelf shelf_A_1_3 -p 0
      
      *> disk assign -shelf shelf_A_1_2 -p 1
      *> disk assign -shelf shelf_A_1_4 -p 1
    3. On the node at the remote site (site B), systematically assign its local disk shelves to pool 0 and its remote disk shelves to pool 1:
      disk assign -shelf disk_shelf_name -p pool

      If storage controller node_B_1 has four shelves, you issue the following commands:

      *> disk assign -shelf shelf_B_1_2 -p 0
      *> disk assign -shelf shelf_B_1_4 -p 0

      *> disk assign -shelf shelf_B_1_1 -p 1
      *> disk assign -shelf shelf_B_1_3 -p 1
    4. Show the disk shelf IDs and bays for each disk:
      disk show -v

Verifying the HA state of components

In a stretch MetroCluster configuration that is not preconfigured at the factory, you must verify that the HA state of the controller and chassis component is set to “mcc-2n” so that they boot up properly. For systems received from the factory, this value is preconfigured and you do not need to verify it.

Before you begin

The system must be in Maintenance mode.

Steps
  1. In Maintenance mode, view the HA state of the controller module and chassis:

    ha-config show

    The controller module and chassis should show the value “mcc-2n”.

  2. If the displayed system state of the controller is not “mcc-2n”, set the HA state for the controller:

    ha-config modify controller mcc-2n

  3. If the displayed system state of the chassis is not “mcc-2n”, set the HA state for the chassis:

    ha-config modify chassis mcc-2n

    Halt the node, and then wait until the node is back at the LOADER prompt.
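
    For example, the full sequence at the Maintenance mode prompt might look similar to the following (the prompt and ordering are shown for illustration only):

    *> ha-config show
    *> ha-config modify controller mcc-2n
    *> ha-config modify chassis mcc-2n
    *> halt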

  4. Repeat these steps on each node in the MetroCluster configuration.

Setting up ONTAP in a two-node MetroCluster configuration

In a two-node MetroCluster configuration, on each cluster you must boot up the node, exit the Cluster Setup wizard, and use the cluster setup command to configure the node into a single-node cluster.

Before you begin

You must not have configured the Service Processor.

About this task

This task is for two-node MetroCluster configurations using native NetApp storage.

This task must be performed on both clusters in the MetroCluster configuration.

For more general information about setting up ONTAP, see Set up ONTAP.

Steps
  1. Power on the first node.

    Note You must repeat this step on the node at the disaster recovery (DR) site.

    The node boots, and then the Cluster Setup wizard starts on the console, informing you that AutoSupport will be enabled automatically.

    ::> Welcome to the cluster setup wizard.
    
    You can enter the following commands at any time:
      "help" or "?" - if you want to have a question clarified,
      "back" - if you want to change previously answered questions, and
      "exit" or "quit" - if you want to quit the cluster setup wizard.
         Any changes you made before quitting will be saved.
    
    You can return to cluster setup at any time by typing "cluster setup".
    To accept a default or omit a question, do not enter a value.
    
    This system will send event messages and periodic reports to NetApp Technical
    Support. To disable this feature, enter
    autosupport modify -support disable
    within 24 hours.
    
    Enabling AutoSupport can significantly speed problem determination and
    resolution, should a problem occur on your system.
    For further information on AutoSupport, see:
    http://support.netapp.com/autosupport/
    
    Type yes to confirm and continue {yes}: yes
    
    Enter the node management interface port [e0M]:
    Enter the node management interface IP address [10.101.01.01]:
    
    Enter the node management interface netmask [101.010.101.0]:
    Enter the node management interface default gateway [10.101.01.0]:
    
    
    
    Do you want to create a new cluster or join an existing cluster? {create, join}:
  2. Create a new cluster:

    create

  3. Choose whether the node is to be used as a single node cluster.

    Do you intend for this node to be used as a single node cluster? {yes, no} [yes]:
  4. Accept the system default “yes” by pressing Enter, or enter your own values by typing “no”, and then pressing Enter.

  5. Follow the prompts to complete the Cluster Setup wizard, pressing Enter to accept the default values or typing your own values and then pressing Enter.

    The default values are determined automatically based on your platform and network configuration.

  6. After you complete the Cluster Setup wizard and it exits, verify that the cluster is active and the first node is healthy:

    cluster show

    The following example shows a cluster in which the first node (cluster1-01) is healthy and eligible to participate:

    cluster1::> cluster show
    Node                  Health  Eligibility
    --------------------- ------- ------------
    cluster1-01           true    true

    If it becomes necessary to change any of the settings you entered for the admin SVM or node SVM, you can access the Cluster Setup wizard by using the cluster setup command.

Configuring the clusters into a MetroCluster configuration

You must peer the clusters, mirror the root aggregates, create a mirrored data aggregate, and then issue the command to implement the MetroCluster operations.

Peering the clusters

The clusters in the MetroCluster configuration must be in a peer relationship so that they can communicate with each other and perform the data mirroring essential to MetroCluster disaster recovery.

Configuring intercluster LIFs

You must create intercluster LIFs on ports used for communication between the MetroCluster partner clusters. You can use dedicated ports or ports that also have data traffic.

Configuring intercluster LIFs on dedicated ports

You can configure intercluster LIFs on dedicated ports. Doing so typically increases the available bandwidth for replication traffic.

Steps
  1. List the ports in the cluster:

    network port show

    For complete command syntax, see the man page.

    The following example shows the network ports in “cluster01”:

    cluster01::> network port show
                                                                 Speed (Mbps)
    Node   Port      IPspace      Broadcast Domain Link   MTU    Admin/Oper
    ------ --------- ------------ ---------------- ----- ------- ------------
    cluster01-01
           e0a       Cluster      Cluster          up     1500   auto/1000
           e0b       Cluster      Cluster          up     1500   auto/1000
           e0c       Default      Default          up     1500   auto/1000
           e0d       Default      Default          up     1500   auto/1000
           e0e       Default      Default          up     1500   auto/1000
           e0f       Default      Default          up     1500   auto/1000
    cluster01-02
           e0a       Cluster      Cluster          up     1500   auto/1000
           e0b       Cluster      Cluster          up     1500   auto/1000
           e0c       Default      Default          up     1500   auto/1000
           e0d       Default      Default          up     1500   auto/1000
           e0e       Default      Default          up     1500   auto/1000
           e0f       Default      Default          up     1500   auto/1000
  2. Determine which ports are available to dedicate to intercluster communication:

    network interface show -fields home-port,curr-port

    For complete command syntax, see the man page.

    The following example shows that ports “e0e” and “e0f” have not been assigned LIFs:

    cluster01::> network interface show -fields home-port,curr-port
    vserver lif                  home-port curr-port
    
    Cluster cluster01-01_clus1   e0a       e0a
    Cluster cluster01-01_clus2   e0b       e0b
    Cluster cluster01-02_clus1   e0a       e0a
    Cluster cluster01-02_clus2   e0b       e0b
    cluster01
            cluster_mgmt         e0c       e0c
    cluster01
            cluster01-01_mgmt1   e0c       e0c
    cluster01
            cluster01-02_mgmt1   e0c       e0c
  3. Create a failover group for the dedicated ports:

    network interface failover-groups create -vserver system_SVM -failover-group failover_group -targets physical_or_logical_ports

    The following example assigns ports “e0e” and “e0f” to the failover group “intercluster01” on system SVM “cluster01”:

    cluster01::> network interface failover-groups create -vserver cluster01 -failover-group
    intercluster01 -targets
    cluster01-01:e0e,cluster01-01:e0f,cluster01-02:e0e,cluster01-02:e0f
  4. Verify that the failover group was created:

    network interface failover-groups show

    For complete command syntax, see the man page.

    cluster01::> network interface failover-groups show
                                      Failover
    Vserver          Group            Targets
    ---------------- ---------------- --------------------------------------------
    Cluster
                     Cluster
                                      cluster01-01:e0a, cluster01-01:e0b,
                                      cluster01-02:e0a, cluster01-02:e0b
    cluster01
                     Default
                                      cluster01-01:e0c, cluster01-01:e0d,
                                      cluster01-02:e0c, cluster01-02:e0d,
                                      cluster01-01:e0e, cluster01-01:e0f
                                      cluster01-02:e0e, cluster01-02:e0f
                     intercluster01
                                      cluster01-01:e0e, cluster01-01:e0f
                                      cluster01-02:e0e, cluster01-02:e0f
  5. Create intercluster LIFs on the system SVM and assign them to the failover group.

    ONTAP version

    Command

    ONTAP 9.6 and later

    network interface create -vserver system_SVM -lif LIF_name -service-policy default-intercluster -home-node node -home-port port -address port_IP -netmask netmask -failover-group failover_group

    ONTAP 9.5 and earlier

    network interface create -vserver system_SVM -lif LIF_name -role intercluster -home-node node -home-port port -address port_IP -netmask netmask -failover-group failover_group

    For complete command syntax, see the man page.

    The following example creates intercluster LIFs “cluster01_icl01” and “cluster01_icl02” in the failover group “intercluster01”:

    cluster01::> network interface create -vserver cluster01 -lif cluster01_icl01 -service-
    policy default-intercluster -home-node cluster01-01 -home-port e0e -address 192.168.1.201
    -netmask 255.255.255.0 -failover-group intercluster01
    
    cluster01::> network interface create -vserver cluster01 -lif cluster01_icl02 -service-
    policy default-intercluster -home-node cluster01-02 -home-port e0e -address 192.168.1.202
    -netmask 255.255.255.0 -failover-group intercluster01
  6. Verify that the intercluster LIFs were created:

    ONTAP version

    Command

    ONTAP 9.6 and later

    network interface show -service-policy default-intercluster

    ONTAP 9.5 and earlier

    network interface show -role intercluster

    For complete command syntax, see the man page.

    cluster01::> network interface show -service-policy default-intercluster
                Logical    Status     Network            Current       Current Is
    Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
    ----------- ---------- ---------- ------------------ ------------- ------- ----
    cluster01
                cluster01_icl01
                           up/up      192.168.1.201/24   cluster01-01  e0e     true
                cluster01_icl02
                           up/up      192.168.1.202/24   cluster01-02  e0f     true
  7. Verify that the intercluster LIFs are redundant:

    ONTAP version

    Command

    ONTAP 9.6 and later

    network interface show -service-policy default-intercluster -failover

    ONTAP 9.5 and earlier

    network interface show -role intercluster -failover

    For complete command syntax, see the man page.

    The following example shows that the intercluster LIFs “cluster01_icl01” and “cluster01_icl02” on the SVM port “e0e” will fail over to port “e0f”.

    cluster01::> network interface show -service-policy default-intercluster -failover
             Logical         Home                  Failover        Failover
    Vserver  Interface       Node:Port             Policy          Group
    -------- --------------- --------------------- --------------- --------
    cluster01
             cluster01_icl01 cluster01-01:e0e   local-only      intercluster01
                                Failover Targets:  cluster01-01:e0e,
                                                   cluster01-01:e0f
             cluster01_icl02 cluster01-02:e0e   local-only      intercluster01
                                Failover Targets:  cluster01-02:e0e,
                                                   cluster01-02:e0f
Configuring intercluster LIFs on shared data ports

You can configure intercluster LIFs on ports shared with the data network. Doing so reduces the number of ports you need for intercluster networking.

Steps
  1. List the ports in the cluster:

    network port show

    For complete command syntax, see the man page.

    The following example shows the network ports in “cluster01”:

    cluster01::> network port show
                                                                 Speed (Mbps)
    Node   Port      IPspace      Broadcast Domain Link   MTU    Admin/Oper
    ------ --------- ------------ ---------------- ----- ------- ------------
    cluster01-01
           e0a       Cluster      Cluster          up     1500   auto/1000
           e0b       Cluster      Cluster          up     1500   auto/1000
           e0c       Default      Default          up     1500   auto/1000
           e0d       Default      Default          up     1500   auto/1000
    cluster01-02
           e0a       Cluster      Cluster          up     1500   auto/1000
           e0b       Cluster      Cluster          up     1500   auto/1000
           e0c       Default      Default          up     1500   auto/1000
           e0d       Default      Default          up     1500   auto/1000
  2. Create intercluster LIFs on the system SVM:

    ONTAP version

    Command

    ONTAP 9.6 and later

    network interface create -vserver system_SVM -lif LIF_name -service-policy default-intercluster -home-node node -home-port port -address port_IP -netmask netmask

    ONTAP 9.5 and earlier

    network interface create -vserver system_SVM -lif LIF_name -role intercluster -home-node node -home-port port -address port_IP -netmask netmask

    For complete command syntax, see the man page.

    The following example creates intercluster LIFs “cluster01_icl01” and “cluster01_icl02”:

    cluster01::> network interface create -vserver cluster01 -lif cluster01_icl01 -service-
    policy default-intercluster -home-node cluster01-01 -home-port e0c -address 192.168.1.201
    -netmask 255.255.255.0
    
    cluster01::> network interface create -vserver cluster01 -lif cluster01_icl02 -service-
    policy default-intercluster -home-node cluster01-02 -home-port e0c -address 192.168.1.202
    -netmask 255.255.255.0
  3. Verify that the intercluster LIFs were created:

    ONTAP version

    Command

    ONTAP 9.6 and later

    network interface show -service-policy default-intercluster

    ONTAP 9.5 and earlier

    network interface show -role intercluster

    For complete command syntax, see the man page.

    cluster01::> network interface show -service-policy default-intercluster
                Logical    Status     Network            Current       Current Is
    Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
    ----------- ---------- ---------- ------------------ ------------- ------- ----
    cluster01
                cluster01_icl01
                           up/up      192.168.1.201/24   cluster01-01  e0c     true
                cluster01_icl02
                           up/up      192.168.1.202/24   cluster01-02  e0c     true
  4. Verify that the intercluster LIFs are redundant:

    ONTAP version

    Command

    ONTAP 9.6 and later

    network interface show -service-policy default-intercluster -failover

    ONTAP 9.5 and earlier

    network interface show -role intercluster -failover

    For complete command syntax, see the man page.

    The following example shows that the intercluster LIFs “cluster01_icl01” and “cluster01_icl02” on port “e0c” will fail over to port “e0d”.

    cluster01::> network interface show -service-policy default-intercluster -failover
             Logical         Home                  Failover        Failover
    Vserver  Interface       Node:Port             Policy          Group
    -------- --------------- --------------------- --------------- --------
    cluster01
             cluster01_icl01 cluster01-01:e0c   local-only      192.168.1.201/24
                                Failover Targets: cluster01-01:e0c,
                                                  cluster01-01:e0d
             cluster01_icl02 cluster01-02:e0c   local-only      192.168.1.201/24
                                Failover Targets: cluster01-02:e0c,
                                                  cluster01-02:e0d

Creating a cluster peer relationship

You must create the cluster peer relationship between the MetroCluster clusters.

Creating a cluster peer relationship (ONTAP 9.3 and later)

You can use the cluster peer create command to create a peer relationship between a local and remote cluster. After the peer relationship has been created, you can run cluster peer create on the remote cluster to authenticate it to the local cluster.

Before you begin
  • You must have created intercluster LIFs on every node in the clusters that are being peered.

  • The clusters must be running ONTAP 9.3 or later.

Steps
  1. On the destination cluster, create a peer relationship with the source cluster:

    cluster peer create -generate-passphrase -offer-expiration MM/DD/YYYY HH:MM:SS|1…7days|1…168hours -peer-addrs peer_LIF_IPs -ipspace ipspace

    If you specify both -generate-passphrase and -peer-addrs, only the cluster whose intercluster LIFs are specified in -peer-addrs can use the generated password.

    You can ignore the -ipspace option if you are not using a custom IPspace. For complete command syntax, see the man page.

    The following example creates a cluster peer relationship on an unspecified remote cluster:

    cluster02::> cluster peer create -generate-passphrase -offer-expiration 2days
    
                         Passphrase: UCa+6lRVICXeL/gq1WrK7ShR
                    Expiration Time: 6/7/2017 08:16:10 EST
      Initial Allowed Vserver Peers: -
                Intercluster LIF IP: 192.140.112.101
                  Peer Cluster Name: Clus_7ShR (temporary generated)
    
    Warning: make a note of the passphrase - it cannot be displayed again.
  2. On the source cluster, authenticate the source cluster to the destination cluster:

    cluster peer create -peer-addrs peer_LIF_IPs -ipspace ipspace

    For complete command syntax, see the man page.

    The following example authenticates the local cluster to the remote cluster at intercluster LIF IP addresses 192.140.112.101 and 192.140.112.102:

    cluster01::> cluster peer create -peer-addrs 192.140.112.101,192.140.112.102
    
    Notice: Use a generated passphrase or choose a passphrase of 8 or more characters.
            To ensure the authenticity of the peering relationship, use a phrase or sequence of characters that would be hard to guess.
    
    Enter the passphrase:
    Confirm the passphrase:
    
    Clusters cluster02 and cluster01 are peered.

    Enter the passphrase for the peer relationship when prompted.

  3. Verify that the cluster peer relationship was created:

    cluster peer show -instance

    cluster01::> cluster peer show -instance
    
                                   Peer Cluster Name: cluster02
                       Remote Intercluster Addresses: 192.140.112.101, 192.140.112.102
                  Availability of the Remote Cluster: Available
                                 Remote Cluster Name: cluster2
                                 Active IP Addresses: 192.140.112.101, 192.140.112.102
                               Cluster Serial Number: 1-80-123456
                      Address Family of Relationship: ipv4
                Authentication Status Administrative: no-authentication
                   Authentication Status Operational: absent
                                    Last Update Time: 02/05 21:05:41
                        IPspace for the Relationship: Default
  4. Check the connectivity and status of the nodes in the peer relationship:

    cluster peer health show

    cluster01::> cluster peer health show
    Node       cluster-Name                Node-Name
                 Ping-Status               RDB-Health Cluster-Health  Avail…
    ---------- --------------------------- ---------  --------------- --------
    cluster01-01
               cluster02                   cluster02-01
                 Data: interface_reachable
                 ICMP: interface_reachable true       true            true
                                           cluster02-02
                 Data: interface_reachable
                 ICMP: interface_reachable true       true            true
    cluster01-02
               cluster02                   cluster02-01
                 Data: interface_reachable
                 ICMP: interface_reachable true       true            true
                                           cluster02-02
                 Data: interface_reachable
                 ICMP: interface_reachable true       true            true
Creating a cluster peer relationship (ONTAP 9.2 and earlier)

You can use the cluster peer create command to initiate a request for a peering relationship between a local and remote cluster. After the peer relationship has been requested by the local cluster, you can run cluster peer create on the remote cluster to accept the relationship.

Before you begin
  • You must have created intercluster LIFs on every node in the clusters being peered.

  • The cluster administrators must have agreed on the passphrase each cluster will use to authenticate itself to the other.

Steps
  1. On the data protection destination cluster, create a peer relationship with the data protection source cluster:

    cluster peer create -peer-addrs peer_LIF_IPs -ipspace ipspace

    You can ignore the -ipspace option if you are not using a custom IPspace. For complete command syntax, see the man page.

    The following example creates a cluster peer relationship with the remote cluster at intercluster LIF IP addresses 192.168.2.201 and 192.168.2.202:

    cluster02::> cluster peer create -peer-addrs 192.168.2.201,192.168.2.202
    Enter the passphrase:
    Please enter the passphrase again:

    Enter the passphrase for the peer relationship when prompted.

  2. On the data protection source cluster, authenticate the source cluster to the destination cluster:

    cluster peer create -peer-addrs peer_LIF_IPs -ipspace ipspace

    For complete command syntax, see the man page.

    The following example authenticates the local cluster to the remote cluster at intercluster LIF IP addresses 192.168.2.203 and 192.168.2.204:

    cluster01::> cluster peer create -peer-addrs 192.168.2.203,192.168.2.204
    Please confirm the passphrase:
    Please confirm the passphrase again:

    Enter the passphrase for the peer relationship when prompted.

  3. Verify that the cluster peer relationship was created:

    cluster peer show -instance

    For complete command syntax, see the man page.

    cluster01::> cluster peer show -instance
    Peer Cluster Name: cluster01
    Remote Intercluster Addresses: 192.168.2.201,192.168.2.202
    Availability: Available
    Remote Cluster Name: cluster02
    Active IP Addresses: 192.168.2.201,192.168.2.202
    Cluster Serial Number: 1-80-000013
  4. Check the connectivity and status of the nodes in the peer relationship:

    cluster peer health show

    For complete command syntax, see the man page.

    cluster01::> cluster peer health show
    Node       cluster-Name                Node-Name
                 Ping-Status               RDB-Health Cluster-Health  Avail…
    ---------- --------------------------- ---------  --------------- --------
    cluster01-01
               cluster02                   cluster02-01
                 Data: interface_reachable
                 ICMP: interface_reachable true       true            true
                                           cluster02-02
                 Data: interface_reachable
                 ICMP: interface_reachable true       true            true
    cluster01-02
               cluster02                   cluster02-01
                 Data: interface_reachable
                 ICMP: interface_reachable true       true            true
                                           cluster02-02
                 Data: interface_reachable
                 ICMP: interface_reachable true       true            true

Mirroring the root aggregates

You must mirror the root aggregates to provide data protection.

About this task

By default, the root aggregate is created as a RAID-DP aggregate. You can change the root aggregate from RAID-DP to RAID4. The following command modifies the root aggregate to RAID4:

storage aggregate modify -aggregate aggr_name -raidtype raid4
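
For example, assuming the root aggregate name aggr0_controller_A_1 that is used elsewhere in this procedure (substitute your own root aggregate name):

controller_A_1::> storage aggregate modify -aggregate aggr0_controller_A_1 -raidtype raid4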

Note On non-ADP systems, the RAID type of the aggregate can be modified from the default RAID-DP to RAID4 before or after the aggregate is mirrored.
Steps
  1. Mirror the root aggregate:

    storage aggregate mirror aggr_name

    The following command mirrors the root aggregate for “controller_A_1”:

    controller_A_1::> storage aggregate mirror aggr0_controller_A_1

    This mirrors the aggregate, so it consists of a local plex and a remote plex located at the remote MetroCluster site.

  2. Repeat the previous step for each node in the MetroCluster configuration.

Related information

Logical storage management

Creating a mirrored data aggregate on each node

You must create a mirrored data aggregate on each node in the DR group.

Before you begin
  • You should know what drives or array LUNs will be used in the new aggregate.

  • If you have multiple drive types in your system (heterogeneous storage), you should understand how you can ensure that the correct drive type is selected.

About this task
  • Drives and array LUNs are owned by a specific node; when you create an aggregate, all drives in that aggregate must be owned by the same node, which becomes the home node for that aggregate.

  • Aggregate names should conform to the naming scheme you determined when you planned your MetroCluster configuration.

Steps
  1. Display a list of available spares:

    storage disk show -spare -owner node_name
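
    For example, to list the spare disks owned by node_A_1 (a node name used in the examples in this section), you might enter:

    cluster_A::> storage disk show -spare -owner node_A_1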

  2. Create the aggregate:

    storage aggregate create -mirror true

    If you are logged in to the cluster on the cluster management interface, you can create an aggregate on any node in the cluster. To ensure that the aggregate is created on a specific node, use the -node parameter or specify drives that are owned by that node.

    You can specify the following options:

    • Aggregate's home node (that is, the node that owns the aggregate in normal operation)

    • List of specific drives or array LUNs that are to be added to the aggregate

    • Number of drives to include

      Note In the minimum supported configuration, in which a limited number of drives are available, you must use the force-small-aggregate option to allow the creation of a three-disk RAID-DP aggregate.
    • Checksum style to use for the aggregate

    • Type of drives to use

    • Size of drives to use

    • Drive speed to use

    • RAID type for RAID groups on the aggregate

    • Maximum number of drives or array LUNs that can be included in a RAID group

    • Whether drives with different RPM are allowed

      For more information about these options, see the storage aggregate create man page.

      The following command creates a mirrored aggregate with 10 disks:

      cluster_A::> storage aggregate create aggr1_node_A_1 -diskcount 10 -node node_A_1 -mirror true
      [Job 15] Job is queued: Create aggr1_node_A_1.
      [Job 15] The job is starting.
      [Job 15] Job succeeded: DONE
  3. Verify the RAID group and drives of your new aggregate:

    storage aggregate show-status -aggregate aggregate-name
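
    For example, to check the mirrored aggregate created in the previous step (aggr1_node_A_1 is the example name used above):

    cluster_A::> storage aggregate show-status -aggregate aggr1_node_A_1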

Creating unmirrored data aggregates

You can optionally create unmirrored data aggregates for data that does not require the redundant mirroring provided by MetroCluster configurations.

Before you begin
  • You should know what drives or array LUNs will be used in the new aggregate.

  • If you have multiple drive types in your system (heterogeneous storage), you should understand how you can verify that the correct drive type is selected.

About this task

ATTENTION: In MetroCluster FC configurations, the unmirrored aggregates will only be online after a switchover if the remote disks in the aggregate are accessible. If the ISLs fail, the local node may be unable to access the data in the unmirrored remote disks. The failure of an aggregate can lead to a reboot of the local node.

Note The unmirrored aggregates must be local to the node owning them.
  • Drives and array LUNs are owned by a specific node; when you create an aggregate, all drives in that aggregate must be owned by the same node, which becomes the home node for that aggregate.

  • Aggregate names should conform to the naming scheme you determined when you planned your MetroCluster configuration.

  • Disks and aggregates management contains more information about mirroring aggregates.

Steps
  1. Display a list of available spares:

    storage disk show -spare -owner node_name

  2. Create the aggregate:

    storage aggregate create

    If you are logged in to the cluster on the cluster management interface, you can create an aggregate on any node in the cluster. To ensure that the aggregate is created on a specific node, use the -node parameter or specify drives that are owned by that node.

    You can specify the following options:

    • Aggregate's home node (that is, the node that owns the aggregate in normal operation)

    • List of specific drives or array LUNs that are to be added to the aggregate

    • Number of drives to include

    • Checksum style to use for the aggregate

    • Type of drives to use

    • Size of drives to use

    • Drive speed to use

    • RAID type for RAID groups on the aggregate

    • Maximum number of drives or array LUNs that can be included in a RAID group

    • Whether drives with different RPM are allowed

      For more information about these options, see the storage aggregate create man page.

      The following command creates an unmirrored aggregate with 10 disks:

      controller_A_1::> storage aggregate create aggr1_controller_A_1 -diskcount 10 -node controller_A_1
      [Job 15] Job is queued: Create aggr1_controller_A_1.
      [Job 15] The job is starting.
      [Job 15] Job succeeded: DONE
  3. Verify the RAID group and drives of your new aggregate:

    storage aggregate show-status -aggregate aggregate-name

Implementing the MetroCluster configuration

You must run the metrocluster configure command to start data protection in a MetroCluster configuration.

Before you begin
  • There should be at least two non-root mirrored data aggregates on each cluster.

    Additional data aggregates can be either mirrored or unmirrored.

    Verify the aggregate types:

    storage aggregate show

    Note If you want to use a single mirrored data aggregate, then see Configure MCC software in ONTAP for instructions.
  • The ha-config state of the controllers and chassis must be “mcc-2n”.

About this task

You can issue the metrocluster configure command once, on any of the nodes, to enable the MetroCluster configuration. You do not need to issue the command on each of the sites or nodes, and it does not matter which node or site you choose to issue the command on.

Steps
  1. Configure MetroCluster in one of the following ways, depending on the number of data aggregates in your configuration:

    If your MetroCluster configuration has…

    Then do this…

    Multiple data aggregates

    From any node's prompt, configure MetroCluster:

    metrocluster configure -node-name node-name

    A single mirrored data aggregate

    1. From any node's prompt, change to the advanced privilege level:

      set -privilege advanced

      You need to respond with “y” when you are prompted to continue into advanced mode and you see the advanced mode prompt (*>).

    2. Configure the MetroCluster with the -allow-with-one-aggregate true parameter:

      metrocluster configure -allow-with-one-aggregate true -node-name node-name

    3. Return to the admin privilege level:
      set -privilege admin
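
    The following is a minimal console sketch of this single-aggregate sequence, assuming the example node name controller_A_1 used in this section:

      cluster_A::> set -privilege advanced
      cluster_A::*> metrocluster configure -allow-with-one-aggregate true -node-name controller_A_1
      cluster_A::*> set -privilege admin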

    Note The best practice is to have multiple data aggregates. If the first DR group has only one aggregate and you want to add a DR group with one aggregate, you must move the metadata volume off the single data aggregate. For more information on this procedure, see Moving a metadata volume in MetroCluster configurations.

    The following command enables the MetroCluster configuration on all of the nodes in the DR group that contains “controller_A_1”:

    cluster_A::*> metrocluster configure -node-name controller_A_1
    
    [Job 121] Job succeeded: Configure is successful.
  2. Verify the networking status on site A:

    network port show

    The following example shows the network port usage:

    cluster_A::> network port show
                                                              Speed (Mbps)
    Node   Port      IPspace   Broadcast Domain Link   MTU    Admin/Oper
    ------ --------- --------- ---------------- ----- ------- ------------
    controller_A_1
           e0a       Cluster   Cluster          up     9000  auto/1000
           e0b       Cluster   Cluster          up     9000  auto/1000
           e0c       Default   Default          up     1500  auto/1000
           e0d       Default   Default          up     1500  auto/1000
           e0e       Default   Default          up     1500  auto/1000
           e0f       Default   Default          up     1500  auto/1000
           e0g       Default   Default          up     1500  auto/1000
    
    7 entries were displayed.
  3. Verify the MetroCluster configuration from both sites in the MetroCluster configuration.

    1. Verify the configuration from site A:
      metrocluster show

      cluster_A::> metrocluster show
      
      Cluster                   Entry Name          State
      ------------------------- ------------------- -----------
       Local: cluster_A         Configuration state configured
                                Mode                normal
                                AUSO Failure Domain auso-on-cluster-disaster
      Remote: cluster_B         Configuration state configured
                                Mode                normal
                                AUSO Failure Domain auso-on-cluster-disaster
    2. Verify the configuration from site B:
      metrocluster show

      cluster_B::> metrocluster show
      Cluster                   Entry Name          State
      ------------------------- ------------------- -----------
       Local: cluster_B         Configuration state configured
                                Mode                normal
                                AUSO Failure Domain auso-on-cluster-disaster
      Remote: cluster_A         Configuration state configured
                                Mode                normal
                                AUSO Failure Domain auso-on-cluster-disaster

Configuring FC-to-SAS bridges for health monitoring

In systems running ONTAP versions prior to 9.8, if your configuration includes FC-to-SAS bridges, you must perform some special configuration steps to monitor the FC-to-SAS bridges in the MetroCluster configuration.

  • Third-party SNMP monitoring tools are not supported for FibreBridge bridges.

  • Beginning with ONTAP 9.8, FC-to-SAS bridges are monitored via in-band connections by default, and additional configuration is not required.

Note Beginning with ONTAP 9.8, the storage bridge command is replaced with system bridge. The following steps show the storage bridge command, but if you are running ONTAP 9.8 or later, the system bridge command is preferred.
Steps
  1. From the ONTAP cluster prompt, add the bridge to health monitoring:

    1. Add the bridge, using the command for your version of ONTAP:

      ONTAP version

      Command

      ONTAP 9.5 and later

      storage bridge add -address 0.0.0.0 -managed-by in-band -name bridge-name

      ONTAP 9.4 and earlier

      storage bridge add -address bridge-ip-address -name bridge-name
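
      For example, on ONTAP 9.5 or later with in-band management, the command might look like the following (the bridge name ATTO_10.10.20.10 is illustrative):

      controller_A_1::> storage bridge add -address 0.0.0.0 -managed-by in-band -name ATTO_10.10.20.10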

    2. Verify that the bridge has been added and is properly configured:

      storage bridge show

      It might take as long as 15 minutes to reflect all of the data because of the polling interval. The ONTAP health monitor can contact and monitor the bridge if the value in the “Status” column is “ok”, and other information, such as the worldwide name (WWN), is displayed.

      The following example shows that the FC-to-SAS bridges are configured:

      controller_A_1::> storage bridge show
      
      Bridge              Symbolic Name Is Monitored  Monitor Status  Vendor Model                Bridge WWN
      ------------------  ------------- ------------  --------------  ------ -----------------    ----------
      ATTO_10.10.20.10  atto01        true          ok              Atto   FibreBridge 7500N   	20000010867038c0
      ATTO_10.10.20.11  atto02        true          ok              Atto   FibreBridge 7500N   	20000010867033c0
      ATTO_10.10.20.12  atto03        true          ok              Atto   FibreBridge 7500N   	20000010867030c0
      ATTO_10.10.20.13  atto04        true          ok              Atto   FibreBridge 7500N   	2000001086703b80
      
      4 entries were displayed
      
       controller_A_1::>

Checking the MetroCluster configuration

You can check that the components and relationships in the MetroCluster configuration are working correctly. You should do a check after initial configuration and after making any changes to the MetroCluster configuration. You should also do a check before a negotiated (planned) switchover or a switchback operation.

If the metrocluster check run command is issued twice within a short time on either or both clusters, a conflict can occur and the command might not collect all data. Subsequent metrocluster check show commands do not show the expected output.

  1. Check the configuration:

    metrocluster check run

    The command runs as a background job and might not be completed immediately.

    cluster_A::> metrocluster check run
    The operation has been started and is running in the background. Wait for
    it to complete and run "metrocluster check show" to view the results. To
    check the status of the running metrocluster check operation, use the command,
    "metrocluster operation history show -job-id 2245"
    cluster_A::> metrocluster check show
    
    Component           Result
    ------------------- ---------
    nodes               ok
    lifs                ok
    config-replication  ok
    aggregates          ok
    clusters            ok
    connections         ok
    volumes             ok
    7 entries were displayed.
  2. Display more detailed results:

    metrocluster check run

    metrocluster check aggregate show

    metrocluster check cluster show

    metrocluster check config-replication show

    metrocluster check lif show

    metrocluster check node show

    The metrocluster check show commands show the results of the most recent metrocluster check run command. You should always run the metrocluster check run command prior to using the metrocluster check show commands so that the information displayed is current.

    The following example shows the metrocluster check aggregate show command output for a healthy four-node MetroCluster configuration:

    cluster_A::> metrocluster check aggregate show
    
    Last Checked On: 8/5/2014 00:42:58
    
    Node                  Aggregate                  Check                      Result
    ---------------       --------------------       ---------------------      ---------
    controller_A_1        controller_A_1_aggr0
                                                     mirroring-status           ok
                                                     disk-pool-allocation       ok
                                                     ownership-state            ok
                          controller_A_1_aggr1
                                                     mirroring-status           ok
                                                     disk-pool-allocation       ok
                                                     ownership-state            ok
                          controller_A_1_aggr2
                                                     mirroring-status           ok
                                                     disk-pool-allocation       ok
                                                     ownership-state            ok
    
    
    controller_A_2        controller_A_2_aggr0
                                                     mirroring-status           ok
                                                     disk-pool-allocation       ok
                                                     ownership-state            ok
                          controller_A_2_aggr1
                                                     mirroring-status           ok
                                                     disk-pool-allocation       ok
                                                     ownership-state            ok
                          controller_A_2_aggr2
                                                     mirroring-status           ok
                                                     disk-pool-allocation       ok
                                                     ownership-state            ok
    
    18 entries were displayed.

    The following example shows the metrocluster check cluster show command output for a healthy four-node MetroCluster configuration. It indicates that the clusters are ready to perform a negotiated switchover if necessary.

    Last Checked On: 9/13/2017 20:47:04
    
    Cluster               Check                           Result
    --------------------- ------------------------------- ---------
    mccint-fas9000-0102
                          negotiated-switchover-ready     not-applicable
                          switchback-ready                not-applicable
                          job-schedules                   ok
                          licenses                        ok
                          periodic-check-enabled          ok
    mccint-fas9000-0304
                          negotiated-switchover-ready     not-applicable
                          switchback-ready                not-applicable
                          job-schedules                   ok
                          licenses                        ok
                          periodic-check-enabled          ok
    10 entries were displayed.
Related information

Disk and aggregate management

Checking for MetroCluster configuration errors with Config Advisor

You can go to the NetApp Support Site and download the Config Advisor tool to check for common configuration errors.

Config Advisor is a configuration validation and health check tool. You can deploy it at both secure sites and non-secure sites for data collection and system analysis.

Note Support for Config Advisor is limited, and available only online.
  1. Go to the Config Advisor download page and download the tool.

  2. Run Config Advisor, review the tool's output and follow the recommendations in the output to address any issues discovered.

Verifying switchover, healing, and switchback

You should verify the switchover, healing, and switchback operations of the MetroCluster configuration.

  1. Use the procedures for negotiated switchover, healing, and switchback that are described in Perform switchover, healing, and switchback.

Protecting configuration backup files

You can provide additional protection for the cluster configuration backup files by specifying a remote URL (either HTTP or FTP) where the configuration backup files will be uploaded in addition to the default locations in the local cluster.

  1. Set the URL of the remote destination for the configuration backup files:

    system configuration backup settings modify -destination URL-of-destination
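
    For example, to upload the configuration backup files to a hypothetical FTP destination (substitute your own URL):

    cluster_A::> system configuration backup settings modify -destination ftp://ftp.example.com/backups/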

    Cluster management with the CLI contains additional information in the section Managing configuration backups.