ONTAP MetroCluster

Refreshing a four-node or an eight-node MetroCluster IP configuration (ONTAP 9.8 and later)


You can use this procedure to upgrade controllers and storage in four-node or eight-node configurations.

Beginning with ONTAP 9.13.1, you can upgrade the controllers and storage in an eight-node MetroCluster IP configuration by expanding the configuration to become a temporary twelve-node configuration and then removing the old disaster recovery (DR) groups.

Beginning with ONTAP 9.8, you can upgrade the controllers and storage in a four-node MetroCluster IP configuration by expanding the configuration to become a temporary eight-node configuration and then removing the old DR group.

About this task
  • If you have an eight-node configuration, your system must be running ONTAP 9.13.1 or later.

  • If you have a four-node configuration, your system must be running ONTAP 9.8 or later. You can verify the running release with the version check shown after this list.

  • If you are also upgrading the IP switches, you must upgrade them before performing this refresh procedure.

  • This procedure describes the steps required to refresh one four-node DR group. If you have an eight-node configuration (two DR groups), you can refresh one or both DR groups.

    If you refresh both DR groups, you must refresh one DR group at a time.

  • References to "old nodes" mean the nodes that you intend to replace.

  • For eight-node configurations, the source and target eight-node MetroCluster platform combination must be supported.

    Note If you refresh both DR groups, the platform combination might not be supported after you refresh the first DR group. You must refresh both DR groups to achieve a supported eight-node configuration.
  • You can only refresh specific platform models using this procedure in a MetroCluster IP configuration.

  • The lower of the source and target platform limits apply. If you transition to a platform with higher limits, the limits of the new platform apply only after the tech refresh of all DR groups is complete.

  • If you perform a tech refresh to a platform with lower limits than the source platform, you must reduce the configured limits to at or below the target platform limits before performing this procedure.
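
    To confirm the release requirement, you can check the running ONTAP release from the prompt of each cluster; a minimal check using the standard version command:

      cluster_A::> version

    Repeat the command on the partner cluster.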

Steps
  1. Verify that you have a default broadcast domain created on the old nodes.

    When you add new nodes to an existing cluster without a default broadcast domain, node management LIFs are created for the new nodes using universally unique identifiers (UUIDs) instead of the expected names. For more information, see the Knowledge Base article Node management LIFs on newly-added nodes generated with UUID names.
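
    For example, you can confirm that the default broadcast domain exists before adding the new nodes; a minimal check, assuming the factory IPspace and broadcast domain name (Default), which might differ in your environment:

      network port broadcast-domain show -ipspace Default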

  2. Gather information from the old nodes.

    At this stage, the four-node configuration appears as shown in the following image:

    [Figure: four-node configuration (one DR group)]

    The eight-node configuration appears as shown in the following image:

    [Figure: eight-node configuration (two DR groups)]
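
    The exact information to gather depends on your environment, but a reasonable starting point is to record the node, aggregate, and LIF configuration on both clusters using standard ONTAP commands:

      metrocluster node show
      storage aggregate show
      network interface show
      system node show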
  3. To prevent automatic support case generation, send an AutoSupport message to indicate the upgrade is underway.

    1. Issue the following command:
      system node autosupport invoke -node * -type all -message "MAINT=10h Upgrading old-model to new-model"

      This example specifies a 10-hour maintenance window. You might want to allow additional time, depending on your plan.

      If the maintenance is completed before the time has elapsed, you can invoke an AutoSupport message indicating the end of the maintenance period:

      system node autosupport invoke -node * -type all -message MAINT=end

    2. Repeat the command on the partner cluster.

  4. Remove the existing MetroCluster configuration from Tiebreaker, Mediator, or other software that can initiate switchover.

    If you are using…

    Use this procedure…

    Tiebreaker

    1. Use the Tiebreaker CLI monitor remove command to remove the MetroCluster configuration.

      In the following example, “cluster_A” is removed from the software:

      NetApp MetroCluster Tiebreaker :> monitor remove -monitor-name cluster_A
      Successfully removed monitor from NetApp MetroCluster Tiebreaker
      software.
    2. Confirm that the MetroCluster configuration is removed correctly by using the Tiebreaker CLI monitor show -status command.

      NetApp MetroCluster Tiebreaker :> monitor show -status

    Mediator

    Issue the following command from the ONTAP prompt:

    metrocluster configuration-settings mediator remove

    Third-party applications

    Refer to the product documentation.

  5. Perform all of the steps in Expanding a MetroCluster IP configuration to add the new nodes and storage to the configuration.

    When the expansion procedure is complete, the temporary configuration appears as shown in the following images:

    Figure 1. Temporary eight-node configuration
    Figure 2. Temporary twelve-node configuration
  6. Confirm that takeover is possible and the nodes are connected by running the following command on both clusters:

    storage failover show

    cluster_A::> storage failover show
                                        Takeover
    Node           Partner              Possible    State Description
    -------------- -------------------- ---------   ------------------
    node_A_1       node_A_2               true      Connected to node_A_2
    node_A_2       node_A_1               true      Connected to node_A_1
    node_A_3       node_A_4               true      Connected to node_A_4
    node_A_4       node_A_3               true      Connected to node_A_3
  7. Move the CRS (configuration replication service) metadata volumes to the new nodes.
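
    A minimal sketch of such a move, assuming the metadata volumes follow the usual MDV_CRS_* naming convention; cluster_A stands for the admin SVM that owns them, and aggr1_node_A_3 is a placeholder for an aggregate on one of the new nodes:

      volume show -volume MDV_CRS*
      volume move start -vserver cluster_A -volume MDV_CRS_<uuid>_A -destination-aggregate aggr1_node_A_3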

  8. Move the data from the old nodes to the new nodes by using the following procedures:

    1. Perform all the steps in Create an aggregate and move volumes to the new nodes.

      Note You can mirror the aggregate when you create it or at a later time.
    2. Perform all the steps in Move non-SAN data LIFs and cluster-management LIFs to the new nodes.
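
    As a hedged illustration of what those procedures involve, the following commands create a mirrored aggregate on a new node, move a data volume to it, and rehome a LIF; aggr1_node_A_3, svm_A, datavol1, lif1, and e0c are all placeholder names:

      storage aggregate create -aggregate aggr1_node_A_3 -node node_A_3 -diskcount 24 -mirror true
      volume move start -vserver svm_A -volume datavol1 -destination-aggregate aggr1_node_A_3
      network interface modify -vserver svm_A -lif lif1 -home-node node_A_3 -home-port e0c
      network interface revert -vserver svm_A -lif lif1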

  9. Modify the IP address for the cluster peer of the transitioned nodes for each cluster:

    1. On cluster_A, display the peer relationship by using the cluster peer show command:

      cluster_A::> cluster peer show
      Peer Cluster Name         Cluster Serial Number Availability   Authentication
      ------------------------- --------------------- -------------- --------------
      cluster_B                 1-80-000011           Unavailable    absent
      1. Update the peer addresses so that they point to the intercluster LIF of new node node_B_3:

        cluster peer modify -cluster cluster_B -peer-addrs node_B_3_IP -address-family ipv4

    2. On cluster_B, display the peer relationship by using the cluster peer show command:

      cluster_B::> cluster peer show
      Peer Cluster Name         Cluster Serial Number Availability   Authentication
      ------------------------- --------------------- -------------- --------------
      cluster_A                 1-80-000011           Unavailable    absent
      1. Update the peer addresses so that they point to the intercluster LIF of new node node_A_3:

        cluster peer modify -cluster cluster_A -peer-addrs node_A_3_IP -address-family ipv4

    3. Verify that the cluster peer IP address is updated for each cluster:

      1. Verify that the IP address is updated for each cluster by using the cluster peer show -instance command.

        The Remote Intercluster Addresses field in the following examples displays the updated IP address.

        Example for cluster_A:

        cluster_A::> cluster peer show -instance
        
                               Peer Cluster Name: cluster_B
                   Remote Intercluster Addresses: 172.21.178.204, 172.21.178.212
              Availability of the Remote Cluster: Available
                             Remote Cluster Name: cluster_B
                             Active IP Addresses: 172.21.178.212, 172.21.178.204
                           Cluster Serial Number: 1-80-000011
                            Remote Cluster Nodes: node_B_3-IP,
                                                  node_B_4-IP
                           Remote Cluster Health: true
                         Unreachable Local Nodes: -
                  Address Family of Relationship: ipv4
            Authentication Status Administrative: use-authentication
               Authentication Status Operational: ok
                                Last Update Time: 4/20/2023 18:23:53
                    IPspace for the Relationship: Default
        Proposed Setting for Encryption of Inter-Cluster Communication: -
        Encryption Protocol For Inter-Cluster Communication: tls-psk
          Algorithm By Which the PSK Was Derived: jpake
        
        cluster_A::>

        Example for cluster_B:

        cluster_B::> cluster peer show -instance
        
                               Peer Cluster Name: cluster_A
                   Remote Intercluster Addresses: 172.21.178.188, 172.21.178.196 <<<<<<<< Should reflect the modified address
              Availability of the Remote Cluster: Available
                             Remote Cluster Name: cluster_A
                             Active IP Addresses: 172.21.178.196, 172.21.178.188
                           Cluster Serial Number: 1-80-000011
                            Remote Cluster Nodes: node_A_3-IP,
                                                  node_A_4-IP
                           Remote Cluster Health: true
                         Unreachable Local Nodes: -
                  Address Family of Relationship: ipv4
            Authentication Status Administrative: use-authentication
               Authentication Status Operational: ok
                                Last Update Time: 4/20/2023 18:23:53
                    IPspace for the Relationship: Default
        Proposed Setting for Encryption of Inter-Cluster Communication: -
        Encryption Protocol For Inter-Cluster Communication: tls-psk
          Algorithm By Which the PSK Was Derived: jpake
        
        cluster_B::>
  10. Follow the steps in Removing a Disaster Recovery group to remove the old DR group.
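
    That procedure centers on the metrocluster remove-dr-group command at the advanced privilege level; a minimal sketch, assuming the old nodes form DR group 1 (confirm the ID with metrocluster node show first):

      set -privilege advanced
      metrocluster remove-dr-group -dr-group-id 1
      set -privilege admin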

  11. If you want to refresh both DR groups in an eight-node configuration, you must repeat the entire procedure for each DR group.

    After you have removed the old DR group, the configuration appears as shown in the following images:

    Figure 3. Four-node configuration
    Figure 4. Eight-node configuration
  12. Confirm the operational mode of the MetroCluster configuration and perform a MetroCluster check.

    1. Confirm the MetroCluster configuration and that the operational mode is normal:

      metrocluster show

    2. Confirm that all expected nodes are shown:

      metrocluster node show

    3. Issue the following command:

      metrocluster check run

    4. Display the results of the MetroCluster check:

      metrocluster check show

  13. Restore monitoring if necessary, using the procedure for your configuration.

    If you are using…

    Use this procedure…

    Tiebreaker

    Adding MetroCluster configurations in the MetroCluster Tiebreaker Installation and Configuration.

    Mediator

    Configuring the ONTAP Mediator service from a MetroCluster IP configuration in the MetroCluster IP Installation and Configuration.

    Third-party applications

    Refer to the product documentation.
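
    In the Mediator case, for example, monitoring is re-established from the ONTAP prompt; a minimal sketch, assuming the Mediator service is reachable at 172.16.10.5 (a placeholder address):

      metrocluster configuration-settings mediator add -mediator-address 172.16.10.5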

  14. To resume automatic support case generation, send an AutoSupport message to indicate that the maintenance is complete.

    1. Issue the following command:

      system node autosupport invoke -node * -type all -message MAINT=end

    2. Repeat the command on the partner cluster.